Friday, February 22, 2008

Configuring pgpool-II

A follow-up to my previous post about pgpool-II.

Configuring pgpool-II requires a bit of forward thinking, as it's not obvious off-hand what various settings should be. Since our use of pgpool is limited to connection pooling mode (no query caching, parallel queries, or load balancing), I'll only discuss the two relevant parameters: num_init_children and max_pool.

Here's what the manual says about num_init_children:
The number of preforked pgpool-II server processes. Default is 32. Please note that cancelling a query creates another connection to the backend; thus, a query cannot be cancelled if the connections are full. If you want to ensure that queries can be cancelled, set this value to twice the expected connections. This parameter can only be set at server start.
The easiest way to think about it is: how many concurrent processes will be accessing the database at peak load? A good guide here is to look at your httpd.conf and see what MaxClients is set to. The Apache default is 256, which is probably a sane starting point.

Now, in theory, you should simply set your num_init_children to match MaxClients, or just slightly higher, but that may be more than you need. You have to estimate how many of your http requests are likely to result in a database connection -- a significant chunk of requests will be for images and similar static content, so setting it to 256 would probably be too generous. Keep in mind that pre-forking too many pgpool processes results in unnecessary memory consumption and significantly higher load, because the master pgpool process has to spend extra time tracking which connections are available and which ones need to be cleaned up. On the database server, too, each extra connection uses up a chunk of memory, so keeping too many of them open will noticeably degrade performance.

If, on the other hand, you set num_init_children too low, you can get into a situation where too many connection requests are coming in. Instead of rejecting them, the master process adds them to a waiting queue, which is fine if you are only slightly above capacity. If, however, you have far more requests than available pgpool children, requests will arrive faster than pgpool can process them, and you will soon have a massive traffic jam -- one likely to exhaust your MaxClients and produce "503 Service Unavailable" errors.

In our case, since we're heavily relying on memcache to limit the number of database connections, I estimated that about a third of all requests are likely to result in a trip to the db; but I then set num_init_children even lower, at 64, or at a quarter of our MaxClients, for reasons that I discuss further down.
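Putting that reasoning into the config file, a minimal sketch might look like this (the value 64 is our estimate for our traffic; yours will depend on your own MaxClients and cache hit rate):

```ini
# pgpool.conf -- connection pooling mode only
# (example value; tune to your own traffic)
num_init_children = 64    # ~1/4 of our Apache MaxClients (256)
```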

Here's what the docs say about max_pool:
The maximum number of cached connections in pgpool-II children processes. pgpool-II reuses a cached connection if an incoming connection is to the same database with the same username; otherwise, pgpool-II creates a new connection to the backend. If the number of cached connections exceeds max_pool, the oldest connection is discarded and its slot is used for the new connection. Default value is 4. Please be aware that the number of connections from pgpool-II processes to the backend will be num_init_children * max_pool. This parameter can only be set at server start.
Our application is written to access the db as two distinct users -- a read-only user "appread" that only has SELECT permissions on the tables, and "appwrite", which has SELECT, UPDATE, INSERT, DELETE. This is a very good practice, as it helps us limit the impact of SQL injection attacks if someone happens to commit some sloppy code that doesn't correctly sanitize user input. The downside, of course, is that each http request is likely to open two distinct connections to the db -- once as user appread, and once as user appwrite.

This means that we need to set max_pool to 2 -- one cached connection for each user. Furthermore, keep in mind that if you connect to more than one database on the same postgresql host, you'll need to adjust this value further, as each distinct database requires its own cached connection. (This, in itself, is a good reason to consider whether you should use different schemas instead of different databases if all you're interested in is clean namespace separation of your db objects -- you can set per-user schema permissions just as easily as per-database ones. That discussion, however, is beyond the scope of this post.)

If you set max_pool too low, you'll end up closing and re-opening your db connections unnecessarily, thus negating the very purpose of pgpool. If you set it too high, you'll just pre-open too many connections to the db for no good reason, using up your available memory both on the server and on the pgpool host.
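For our two-user, single-database setup, the corresponding setting is simply:

```ini
# pgpool.conf
max_pool = 2    # one cached connection per db user (appread, appwrite)
                # multiply further if you connect to several databases
                # on the same host
```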

You also have to keep in mind -- and this is clearly spelled out in the docs -- that your total number of cached db connections will be num_init_children times max_pool. With 64 children and 2 pools per child, each pgpool instance keeps up to 128 db connections open. You have to configure your postgresql server to accept at least that many incoming connections, otherwise things will turn sour. Since we have three webnodes behind the load-balancer, our pgpool setup will open up to 128*3=384 connections to postgres, which is why max_connections in our postgresql.conf is set well above 400.
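On the postgres side, the arithmetic above translates into something like the following (the exact value of 450 is an illustrative choice; the post only says "well above 400"):

```ini
# postgresql.conf on the database server
# num_init_children * max_pool * webnodes = 64 * 2 * 3 = 384 pooled connections
max_connections = 450    # comfortably above 384, with headroom
                         # for direct (non-pooled) logins
```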

Other things to keep in mind
As I have mentioned above, I set our num_init_children lower than my calculations called for. There is a good reason for that -- if I had set it at 128, or, in other words, half of our MaxClients, then the number of cached db connections would have been 128*2*3, or 768. Now, in all honesty, there is no way our postgres server is capable of performing 768 concurrent queries -- it's just not beefy enough for that. In my past experience, around 400 concurrent queries pushes the load to about 4, at which point things get really, really slow. Anything over that just spells "imminent doom," which is why I deliberately set the maximum number of pgpool connections at 64. Since pgpool queues incoming connections when there aren't enough available children, this is likely to result in a slowdown, but not a failure, and I'd much rather see slowness on the webnodes than unmanageable load on our db server.

Furthermore, pgpool doesn't make sense for connections that just aren't frequent enough to warrant connection pooling. We have a couple of databases on the same postgresql host that are used by a number of custom applications that are accessed very infrequently. For these, we open direct connections to the db server, bypassing pgpool.

There are also a number of options in pgpool.conf that exist solely to restart your pgpool children every now and again, in case you're worried about memory leaks: connection_life_time, child_life_time, and child_max_connections. Their purpose is obvious from the documentation, which is available on the pgpool site.
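A sketch of those recycling knobs, with illustrative values (the post doesn't prescribe any; pick intervals that suit your own paranoia about leaks):

```ini
# pgpool.conf -- child/connection recycling (values are examples only)
connection_life_time  = 600   # close a cached backend connection after 10 min
child_life_time       = 300   # restart an idle child process after 5 min
child_max_connections = 1000  # restart a child after it has served
                              # 1000 client connections
```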

Well, hope this was helpful to others -- I know it took me a few outtakes to get things right, so hopefully you won't have to go down the same path. :)


brant said...

Thanks for the post. Cleared some things up for me.

Patrick Sameera Jayalath said...

Here is a tutorial....


umairdojki said...

This is an excellent post! Thanks a lot.