On Jun 29 Nate Berkopec (@nateberkopec) tweeted this:
“Observation: when scaling on AWS/AWS-based VPS services, newer, lower-volume Rails applications tend to be memory-bottlenecked, older, high-volume Rails applications tend to be cpu-bottlenecked.”
This is really useful to know. We recently saw our tiny new Rails app, running on an AWS t2.micro instance, go into a “WARN” state after running a Sucker Punch job. It was using over 90% of its RAM and stayed that way until we restarted the server.
So we started looking at the code with a “what’s consuming the memory?” mindset.
- We found code like this, where a background processing job persisted data updates for many models:
def save_data( data )
  # key_field is the lookup condition that identifies an existing record for this data
  if ( model = Model.find_by( key_field ) )
    # A record already exists: update it (1 on success, 0 on failure)
    model.update( data ) ? 1 : 0
  else
    # No record yet: create one (1 on success, 0 on failure)
    Model.create( data ) ? 1 : 0
  end
end
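For context, the calling side looks roughly like the sketch below, assuming a Sucker Punch job; the DataSyncJob name and batch argument are hypothetical, and the point is only that save_data runs once per incoming record:
require 'sucker_punch'

# Hypothetical job class; illustrative shape, not the actual app code
class DataSyncJob
  include SuckerPunch::Job

  # batch: an array of attribute hashes to persist
  def perform( batch )
    batch.each { |data| save_data( data ) }
  end
end

# Enqueue onto a Sucker Punch worker thread inside the Rails process
DataSyncJob.perform_async( batch )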
But often the model was already up-to-date, so in those cases constructing it in memory each time was wasteful.
So we added a pre-check that runs in the database layer, to stop already up-to-date models from being constructed on the app server:
def save_data( data )
  # Pre-check in the database: if a record already matches every incoming
  # attribute there is nothing to change, so skip building a model object at all
  return if Model.where( data ).count != 0

  if ( model = Model.find_by( key_field ) )
    model.update( data ) ? 1 : 0
  else
    Model.create( data ) ? 1 : 0
  end
end
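One further refinement we have not applied, sketched here as a possibility: ActiveRecord's exists? issues a SELECT 1 ... LIMIT 1 query instead of the COUNT(*) produced by count, so the pre-check gets even cheaper while still staying in the database layer:
def save_data( data )
  # exists? stops at the first matching row instead of counting them all
  return if Model.where( data ).exists?

  if ( model = Model.find_by( key_field ) )
    model.update( data ) ? 1 : 0
  else
    Model.create( data ) ? 1 : 0
  end
end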
- The job code was alternating between kicking off a delayed job and an inline job. We simplified this to just run inline jobs; the sketch below illustrates the difference.
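With Sucker Punch that difference looks roughly like this, reusing the hypothetical DataSyncJob from above; perform_in queues the job to run later on a worker thread, while calling perform directly is just an ordinary method call with nothing held in a queue:
# Delayed: queued and held in memory until a worker thread picks it up
DataSyncJob.perform_in( 60, batch )

# Inline: runs immediately in the calling thread
DataSyncJob.new.perform( batch )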
This appears to have helped, but we need to keep looking for memory-saving opportunities.