Topic: Checking for memory consumption


Sorry for a newbie question, but how do I begin checking the memory consumption of my models and controllers? And how can I assess beforehand what the memory consumption of my app would be, besides that of the mongrel process, if, let's say, I have 5 requests/sec coming in to a certain controller using one model?

Thanks for any help on this.


Re: Checking for memory consumption

Hi, my name is arnold and I’m one of the new guest writers on the TextDrive weblog. Jason has given me the privilege to talk a little about performance-tuning Rails applications for deployment on TextDrive or anywhere.

I’ve been doing Rails development since April 2005 and have already deployed 3 applications, all for client use. Along the way I have run into quite a few problems that are not well covered on the Rails wiki, partly because they pertain to running Rails on shared hosting accounts.

Rails is nuclear stuff: it lets you do wonders in minutes, but you have to be careful with it. The amount of magic contained in Rails imposes quite an overhead – one that you, as a developer, are obliged to manage. Managing it not only makes your application faster, more scalable and more tolerant, but also bearable for the people living on the same web and application servers. Failing to do this in a shared, heterogeneous environment almost certifiably leads to memory leaks and hard crashes. These tips are good standard procedure for a Rails application at any level (even if you are on a dedicated cluster setup).

So here goes: a list of 10 things I recommend doing to streamline your application and make it more bearable for others.

In short,

   1. Minimize the number of FCGI listeners
   2. Use caching
   3. NEVER run “development” on FastCGI for more than 1-2 hours
   4. Observe your memory consumption
   5. Rotate your logs
   6. Write and run unit tests
   7. Check for memory leaks when you are developing
   8. Be careful with iterations
   9. Watch the Rails TRAC for bug reports
  10. Be vigilant when restarting your server

And in detail
(1) Minimize the number of FCGI listeners

As already stated, Rails is not PHP. It is persistent, meaning that it “hangs there” and still consumes memory even when there are no clients connected. You read that right – once you start a Rails app, it keeps running forever.

As an aside, something illustrative that Jason mentioned: while working with the LiteSpeed server, its developer, George Wang, found that the Ruby (on Rails) FCGI interface does not like truly persistent FCGI connections (it’s a design issue) – when run as a truly persistent FCGI there are serious file descriptor leaks and performance issues. This hasn’t come up with Apache or lighttpd because they generally don’t take advantage of persistent FCGI connections. (George was nice enough to make persistent connections an option you can turn off in LiteSpeed 2.1.4.)

Normally, every dispatcher handles requests to your application in a linear queue (dispatchers are not threaded). That means when a user requests a page that makes 3 requests to your Rails app (for instance, for a dynamic image, an AJAX request and the page itself), it invokes 3 runs of the Rails processing cycle, in a queue. If you have more than 1 dispatcher, the requests are distributed between the dispatcher processes by the web server. However, for an application under development, 1 dispatcher is enough, and 2 is already quite a lot. If you think you need more dispatchers, you should think about performance optimization instead. Remember, as we said before, A List Apart runs on 4 FCGI dispatchers and is screaming fast. So putting in more dispatchers will most likely not make your application faster, because each individual request is going to take the same amount of time – it will only allow you to scale to more concurrent users (which, for a little content site, is most likely not necessary). Remember: a load of 0.01, on 4 instances of dispatch.fcgi.

You should also know that your application will consume 1 persistent database connection per dispatcher and won’t release it until the dispatcher is stopped, meaning that when you start 1+N dispatchers you consume N more database connections (and this can make MySQL a tad slower for others). If you still want more dispatchers, you can tune things so that the connection is closed when the controller action finishes (in an “after_filter”, for instance).
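One way to do that is an after_filter in your base controller. A hedged sketch – it assumes `ActiveRecord::Base.remove_connection` drops this process's cached connection and that ActiveRecord reconnects on the next request; verify this against your Rails version before relying on it:

```ruby
class ApplicationController < ActionController::Base
  # Release the dispatcher's database connection after every action,
  # so N idle dispatchers do not pin N idle MySQL connections.
  after_filter :release_db_connection

  private

  def release_db_connection
    # remove_connection disconnects and forgets this process's cached
    # connection; ActiveRecord re-establishes it on the next request.
    ActiveRecord::Base.remove_connection
  end
end
```

The trade-off is a reconnect on every request, so only do this if the idle connections are actually hurting you.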
(2) Use caching

If you have a content site (not an application), take care to cache all the relevant pages, fragments and actions. It is very easy to do and very easy to tune the expirations, but you will need a line in your lighttpd configuration so that page caching still works.

url.rewrite = ( "^/$" => "index.html", "^([^.]+)$" => "$1.html" )
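On the Rails side, page caching itself is a one-liner. A sketch (the controller and action names are placeholders, and page caching only happens when `perform_caching` is enabled, which it is by default in production):

```ruby
class PostsController < ApplicationController
  # Render these actions once, write them to public/ as static .html
  # files, and let lighttpd serve them without ever touching Rails.
  caches_page :index, :show

  def create
    # ... save the post ...
    # Expire the cached page whenever the underlying content changes.
    expire_page :action => 'index'
  end
end
```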

(3) NEVER run “development” on FastCGI for more than 1-2 hours

The deal is simple. Under FastCGI, Rails ALWAYS leaks memory in “development” environment because of the class reloading.

There is no solution for this.

Granted, you can start your app in “development” for short debugging sessions, but please take care to turn it off (shut down the server and kill off all of the FCGIs) after you’re done. A Rails app running in development for a day will consume as much memory as you have and then ask for more. One of my Rails apps grows by 1.5 MB of RAM on every request, and Jason said that the largest single application they recently saw was 4 processes from one guy’s application consuming 2.1 GIGABYTES of RAM. I personally had to restart my laptop a couple of times because of FCGIs run amok.
(4) Observe your memory consumption

A Rails application usually consumes anywhere from 20 MB to 50–70 MB per FastCGI dispatcher. If a Rails application is tuned properly, this amount should stabilize after about 20 requests and stay the same. If it continues to climb, it means that your application leaks memory and is not production-ready.

Jason gave me an example of week-old processes from Strongspace:

strongspace 7143 0.0 0.9 32556 29364 p0- I Wed12PM 1:12.38 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7145 0.0 1.0 38320 35096 p0- S Wed12PM 1:13.44 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7146 0.0 0.8 31432 27996 p0- I Wed12PM 1:03.00 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7147 0.0 0.9 32916 29416 p0- I Wed12PM 1:08.34 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7148 0.0 0.9 33892 30672 p0- I Wed12PM 1:12.19 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7149 0.0 1.0 35668 32444 p0- I Wed12PM 1:10.37 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7150 0.0 0.9 34152 30924 p0- I Wed12PM 1:11.18 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7151 0.0 0.9 34268 31060 p0- I Wed12PM 1:15.87 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7152 0.0 0.9 35444 32208 p0- I Wed12PM 1:10.13 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7153 0.0 0.9 35332 32056 p0- I Wed12PM 1:09.08 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7154 0.0 1.0 35720 32524 p0- I Wed12PM 1:00.74 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7155 0.0 0.9 35556 32300 p0- I Wed12PM 1:11.95 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7156 0.0 0.8 31092 27592 p0- I Wed12PM 1:05.90 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7157 0.0 1.0 35980 32824 p0- I Wed12PM 1:07.82 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7158 0.0 0.9 35376 32120 p0- I Wed12PM 2:19.23 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7159 0.0 1.0 35716 32500 p0- I Wed12PM 1:08.59 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7160 0.0 0.9 34296 31072 p0- I Wed12PM 1:17.30 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7161 0.0 0.9 35152 31916 p0- S Wed12PM 1:02.95 ruby /var/strongspace/web/public/dispatch.fcgi
strongspace 7162 0.0 0.8 31092 27872 p0- I Wed12PM 1:15.31 ruby /var/strongspace/web/public/dispatch.fcgi

Yes, that’s right. Those processes are 5 days old and have exchanged terabytes of data, yet they are all sitting at about 30–35 MB each.

My own apps consume:
1. Address book app (in development) – 31 MB per dispatcher
2. Radar (job posting) – 32 MB per dispatcher
3. A client’s CMS site for a design studio – 70 MB (but it uses RMagick quite often).

None of the 3 goes above these numbers, and some have been running for weeks. Persistently. 24/7.
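If you want to total up what listings like the one above report, the resident set size is the sixth `ps` column, in KB. A sketch in plain Ruby – it uses a captured sample here so it runs anywhere; on a live box you would feed it `ps aux | grep '[d]ispatch.fcgi'` instead:

```ruby
# Two lines captured from the ps listing above; on a live server:
#   sample = `ps aux | grep '[d]ispatch.fcgi'`
sample = <<LISTING
strongspace 7143 0.0 0.9 32556 29364 p0- I Wed12PM 1:12.38 ruby dispatch.fcgi
strongspace 7145 0.0 1.0 38320 35096 p0- S Wed12PM 1:13.44 ruby dispatch.fcgi
LISTING

# RSS is the 6th whitespace-separated field (index 5), in kilobytes.
total_kb = sample.lines.map { |line| line.split[5].to_i }.inject(0) { |sum, kb| sum + kb }

puts "#{total_kb / 1024} MB resident across #{sample.lines.count} dispatchers"
# => 62 MB resident across 2 dispatchers
```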
(5) Rotate your logs

If your app is hit often, take time to clean the old logs out of your “log” directory. And you have some shrink-wrapped goodness to do it for you:

rake clear_logs
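That rake task truncates the logs. If you would rather rotate them automatically, Ruby's stdlib Logger can do it for you. A sketch – the file count and size are assumptions to tune for your traffic, and wiring this in as the Rails logger depends on your Rails version's config hooks:

```ruby
require 'logger'

# Keep at most 10 rotated files of ~1 MB each; Logger renames the
# file and starts a fresh one when the size limit is reached.
log = Logger.new('production.log', 10, 1_048_576)
log.info 'request served'
log.close
```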
(6) Write and run unit tests

Tests are smart stuff, and they are so baked into Rails (they’re one of the things that makes Rails great) that it’s foolish not to spend a lot of time writing and running them.
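A minimal Test::Unit example – `Post` here is a plain stand-in class (an assumption, so the example runs outside Rails), but ActiveRecord models are tested the same way:

```ruby
require 'test/unit'

# Stand-in for a model: a post is valid only when it has a title.
class Post
  def initialize(title)
    @title = title
  end

  def valid?
    !@title.nil? && !@title.empty?
  end
end

class PostTest < Test::Unit::TestCase
  def test_requires_a_title
    assert !Post.new('').valid?
    assert Post.new('Hello world').valid?
  end
end
```

In a Rails app these live under test/ and run with rake, so there is no excuse not to run them before every deploy.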
(7) Check for memory leaks when you are developing

This is simple.

First, switch your development environment (on your local machine – my god, don’t do this on your TextDrive or any other hosting account) to

dependencies.mechanism = :require

Then start your app and hammer its most fragile (and heavy) controller actions with ab (Apache Bench) until you’ve racked up 400–500 requests. After that, start “top” and see where you sit. A normal Rails app (in my humble experience) never goes over 90 MB per dispatcher, and usually sits somewhere between 20 MB and 70 MB. Remember that this RAM is shared with others, and when you eat it away, others start to swap – and that hurts. If you’re not careful you can eat up all of the server’s memory in an hour (or in a day, depending on how many hits your Rails app is serving).

And when you combine the resulting sluggishness with content-based checks, lighttpd restarts that don’t kill dispatch.fcgi, and leaking memory, you have a real server crasher on your hands.
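For reference, an ab invocation of the kind described above – the URL, request count and concurrency are placeholders; point it at your own heaviest action:

```shell
# 500 requests total, 5 at a time, against one heavy action
ab -n 500 -c 5 http://localhost:3000/posts/list
```

Then watch the dispatcher's RSS in top while it runs; it should level off, not climb.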
(8) Be careful with iterations

ActiveRecord is smart, but don’t do

#don't do this, write a JOIN query instead

Post.find(:all).each do |p|
  p.subscribers << @user unless p.subscribers.include?(@user)
end


Remember that SQL is the ultimate, highly effective weapon, and ActiveRecord is very talky with the database. The fewer requests to the database, the faster your application works.

I don’t know if it’s generally relevant, but for me, using reject with AR objects has led to enormous memory consumption (it looks like the objects being reject’ed from the enumerable were not garbage collected):

#don't do this
Post.find(:all).reject{|p| p.comments_closed? }

#instead do this
Post.find(:all, :conditions=>"comments_closed = 0")

Or, better, implement relevant “finder” methods within your models themselves.
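Such a finder might look like this (a sketch; the method name is an assumption):

```ruby
class Post < ActiveRecord::Base
  # Push the filtering into SQL, where it belongs, and give the
  # query a name that callers can reuse.
  def self.find_with_open_comments
    find(:all, :conditions => 'comments_closed = 0')
  end
end
```

Callers then say `Post.find_with_open_comments` and never instantiate the rows they were going to throw away.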
(9) Watch the Rails TRAC for bug reports

If you’re developing every day, and you have an application that’s running every day, then you should check it every day.

Since I started, I have seen about 25 bugs pop up that were directly related to my own cases.
(10) Be vigilant when restarting your server

FastCGI is a bugger when restarts are needed. Currently, Rails dispatchers *won’t die* when you stop your lighttpd instance. That means you have to restart in the following order:

killall lighttpd
killall -9 ruby

Then start “top” or “ps ax” and check whether the dispatchers are still running. Only when you are absolutely sure that no dispatchers are hanging around anymore can you bring your lighttpd back up.

Using deadalus for this has led to immense overspawning of processes, and I hope Jason, Jan and Marten can invent something that would manage these “forceful” restarts for us. In the meantime, you have to take care of this yourself.
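Until then, a minimal script along these lines can do the babysitting – the lighttpd config path is an assumption, so adjust it for your setup:

```shell
#!/bin/sh
# Stop the web server, kill stray Rails dispatchers, wait until they
# are really gone, then bring lighttpd back up.
killall lighttpd
killall -9 ruby
while ps ax | grep '[d]ispatch.fcgi' > /dev/null; do
  sleep 1
done
lighttpd -f /etc/lighttpd/lighttpd.conf
```

The `[d]ispatch` trick keeps grep from matching its own process, so the loop exits as soon as the real dispatchers are gone.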

Thank you for your time. I hope this helps some of you avoid the many mistakes I have made myself, and also helps to decrease resource abuse. Recent server outages are, as it turns out, related to the fact that all of the above was not being done.

Last edited by arnoldclab (2010-02-04 05:46:23)

Re: Checking for memory consumption

Hi Arnold

Thanks so much for this writeup – I knew a couple of these points already, but most are new starting points for me in understanding the issues.

Thank you for the incredible summaries !

B Rgds