I was tracking down a memory leak inside HTTPD and got to play with memory pool debugging. In this specific case, reverse proxying a Windows Media Server would cause a significant leak. The leak happened while streaming data to the client, so the longer a client stayed connected, the more memory it used.

I had suspected the bug was in the relatively new and untested mod_proxy code; mod_proxy simply hasn't had the same vetting as the core of httpd. I was surprised to find that the bug was actually in the core input filter, far away from the newer proxy code. An erroneous use of apr_brigade_split was creating a new bucket brigade every time httpd tried to read data from the client.

Now, on to the part where APR memory pools rock. By compiling APR with --enable-pool-debug=all, most actions against a memory pool are logged: every allocation, and every clear or destruction of a pool. Each log entry includes the current size of the global APR pool:

An example pool debug entry:

POOL DEBUG: [27325/16384] PALLOC ( 244/ 244/ 256702) 0x080A0568 "plog" <strings/apr_strings.c:78> (6/6/1)
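To graph anything, you first need to pull the numbers out of lines like the one above. Here is a minimal Python sketch of that step; the field layout (pid/tid, action, then allocation / pool / global byte counts) is my reading of the sample entry, and the function name is made up for illustration.

```python
import re

# Assumed layout of an APR pool-debug line, based on the sample entry above:
# POOL DEBUG: [pid/tid] ACTION ( alloc/ pool/ global) 0xADDR "tag" <file:line> (...)
POOL_LINE = re.compile(
    r'POOL DEBUG: \[(?P<pid>\d+)/(?P<tid>\d+)\] '
    r'(?P<action>\w+)\s*\(\s*(?P<alloc>\d+)/\s*(?P<pool>\d+)/\s*(?P<glob>\d+)\)\s*'
    r'(?P<addr>0x[0-9A-Fa-f]+) "(?P<tag>[^"]*)" <(?P<loc>[^>]*)>'
)

def parse_pool_line(line):
    """Return a dict of fields, or None if the line is not a pool-debug entry."""
    m = POOL_LINE.search(line)
    if not m:
        return None
    entry = m.groupdict()
    for key in ('pid', 'tid', 'alloc', 'pool', 'glob'):
        entry[key] = int(entry[key])
    return entry

sample = ('POOL DEBUG: [27325/16384] PALLOC ( 244/ 244/ 256702) '
          '0x080A0568 "plog" <strings/apr_strings.c:78> (6/6/1)')
entry = parse_pool_line(sample)
print(entry['pid'], entry['action'], entry['glob'])  # → 27325 PALLOC 256702
```

The third number in the parentheses is the one worth plotting over time: it is the running global pool size, so a leak shows up as a line that never comes back down.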

By graphing these entries, you can actually see how individual Apache children behave:

The above is a single idle Apache child, with just the startup allocations.

This is a single streaming request. Once the stream is established, memory usage reaches a steady state. (That is a good thing.)

This is a graph of multiple non-streaming requests. Because Apache ties the entire connection to a pool, once a client is done, all of the memory used for it can be released.

I made all of the above graphs using a few lines of Python. First I split the error_log into one log per child using split.py, then I graphed each one using plot.py.
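The splitting step can be sketched in a few lines. This is not the actual split.py, just an illustration under the assumption that each pool-debug line carries its child's PID in the "[pid/tid]" field; the helper name and toy log lines are made up.

```python
import re
from collections import defaultdict

# Group pool-debug lines by the child PID in "[pid/tid]" (assumed format).
PID_RE = re.compile(r'POOL DEBUG: \[(\d+)/\d+\]')

def split_by_child(lines):
    """Map child PID -> that child's pool-debug log lines."""
    children = defaultdict(list)
    for line in lines:
        m = PID_RE.search(line)
        if m:
            children[m.group(1)].append(line)
    return children

# Toy error_log fragment; real entries carry the full field list shown above.
log = [
    'POOL DEBUG: [27325/16384] PALLOC ( 244/ 244/ 256702) ...',
    'POOL DEBUG: [27326/16384] PALLOC ( 100/ 100/ 198000) ...',
    'POOL DEBUG: [27325/16384] CLEAR ( 0/ 0/ 256458) ...',
]
children = split_by_child(log)
print({pid: len(entries) for pid, entries in children.items()})
# → {'27325': 2, '27326': 1}
```

From there, each per-child file can be written out and fed to the plotting script, which only needs the global pool size from each entry and the entry's position in the file as the x-axis.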