Memcached is crashing a lot. Why?

I am using memcached and it seems to be crashing a lot lately. Sometimes a deploy can cause it to crash on ActionController::Base.cache_store.clear and sometimes it happens out of nowhere.
How can I get to the root cause of this? Does it have its own log somewhere?
How can I make it more robust? Our site relies heavily on it, and when it goes down it brings the site down too. (We obviously need to figure out how to make our app still operate without it.)
Any recommendations?

Check the config file to see where stdout/stderr goes.
Raise the debug verbosity to the maximum available.
Check the memory limit and segment sizes you use (again, in the config) and make sure they are not too small (a startup sketch follows below).
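For example, a startup line along these lines (the exact flags and log path here are illustrative; on Debian-style systems the same settings usually live in /etc/memcached.conf):

# -m sets the memory limit in MB, -u the user to run as, -vv raises
# verbosity; redirecting stdout/stderr means a crash leaves a trace.
memcached -m 1024 -u memcache -vv >> /var/log/memcached.log 2>&1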

If memcached is crashing a lot, it's likely because you're running outdated components. Is this CentOS, perhaps, with libevent 1.1 or a similarly ancient version?
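On the robustness part of the question: a common mitigation is to treat the cache as optional, so an outage degrades to slower pages instead of a downed site. A minimal sketch, assuming Rails.cache is the memcached-backed store and with compute_expensive_value as a hypothetical placeholder:

def cached_expensive_value
  # Normal path: serve from memcached, recomputing on a cache miss.
  Rails.cache.fetch("expensive_value", expires_in: 10.minutes) do
    compute_expensive_value  # hypothetical placeholder for the real work
  end
rescue StandardError => e
  # Cache is down: log it and fall back to computing the value directly.
  Rails.logger.warn("cache unavailable, recomputing: #{e.message}")
  compute_expensive_value
end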

Related

Ruby on Rails memory management

I'm implementing a Ruby on Rails server (Ruby 2.2.1 and Rails 4.1.10) and I'm facing memory issues: the server process (Puma) can take 500 MB or more, mainly when I'm uploading large files to the server. I'm using CarrierWave.
My question relates to Ruby's memory management and garbage collection. Since my server is dedicated to an embedded system, I really need to cap or control the memory my process takes from the system.
Is there a way to see which objects (and their sizes) are still alive when they shouldn't be?
Is it true that Ruby does not return freed memory to the system when the heap is fragmented?
Please help me figure out what's going on when my process uses more than 150 MB while idle.
Stéph
After reading a lot of posts about this problem, it seems the real cause is the Ruby GC, which since version 2.1.x has consumed a lot of server memory and caused a lot of swapping.
To fix those performance issues, I just set RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR to a value around 1.25. You can play with that setting to find the best value for your environment.
FYI, my app is running with Ruby 2.1.5 on Heroku Cedar-14, and it worked like a charm.
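For example, assuming the Heroku CLI (in other environments, export the variable before starting the Ruby process):

# Set the GC's old-object growth factor as a Heroku config var.
heroku config:set RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=1.25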
Hope it helps :)
I guess my problem is not related to how the GC is triggered, because if I force a manual garbage collection with GC.start, I don't get my memory back. Maybe I have a memory leak somewhere, but I would like to find a way to track it.
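One way to track it from a console session is Ruby's objspace extension; a sketch is below. The sizes memsize_of reports are approximate, but they make the heaviest classes stand out:

require 'objspace'
GC.start  # collect first, so only live objects are counted
sizes = Hash.new(0)
ObjectSpace.each_object(Object) { |obj| sizes[obj.class] += ObjectSpace.memsize_of(obj) }
# Print the ten classes retaining the most (approximate) memory.
sizes.sort_by { |_, bytes| -bytes }.first(10).each { |klass, bytes| puts "#{klass}: #{bytes} bytes" }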
In fact, in your case the CarrierWave gem itself seems to be the source of your issue.
The gem seems to load the entire file into memory rather than working on it in chunks, so when you're uploading large files you can easily hit your server's memory limit.
This post seems to confirm what I'm saying:
https://github.com/carrierwaveuploader/carrierwave/issues/413
What I can suggest in your case is direct upload to S3, which spares your server from processing the upload by sending the file straight to Amazon's servers in the background.
I'm not very familiar with CarrierWave, but this gem should do the trick:
https://github.com/dwilkie/carrierwave_direct
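A sketch of what its README suggested at the time of writing (hedged; check the gem's current docs, as the API may have changed):

class AttachmentUploader < CarrierWave::Uploader::Base
  include CarrierWaveDirect::Uploader  # per the README: adds direct-to-S3 upload support
end
# The view then renders a form that posts straight to S3, reportedly via
# the gem's direct_upload_form_for helper rather than a normal form_for.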
I don't know whether it's a viable solution for you, but given your issue it seems like your best alternative.
Hope it helps :) !

How can I address an app that started earlier, but is "Terminated due to Memory Pressure" now?

I am working on an iOS app in Xcode. Earlier I got it to start and run, up to a limited level of functionality. Then there were compilation failures claiming that untouched boilerplate generated code had syntax errors. Copying the source code into a new project produced a different problem.
Right now, I can compile and start running, but even before the launch image shows up, it reports that the application was closed due to memory pressure. The visual assets total around 272 MB, which could be optimized some without hurting graphical richness, and are so far the only part of the program expected to be large. (The assets may or may not all be kept in memory; for instance, every loading image is populated, yet my code never accesses any loading image programmatically.) And it crashes before the loading image has itself loaded.
How can I address this memory issue? I may be able to slim down the way images are handled, but I suspect there is another root cause. Or is this simply excessive memory consumption?
Thanks,
Review the Performance Tuning section of Apple's iOS Programming documentation. Use Apple's Instruments application to determine how, when, and how much memory your app is using.
One approach you should consider is to remove the graphics resources from your application and add them back one by one, once you're confident they meet the requirements and limitations of iOS.
Now, this part of my answer is opinion: it sounds like your app is at high risk of being rejected from the App Store, in case that is its intended destination.

How can I find a memory leak on Heroku?

I have a Rails 3.2.8 app running on Heroku Cedar with Ruby 1.9.3. The app runs fine when it launches, but after a day or so of continuous use, I start to see R14 errors in my logs. Once the memory errors start, they never go away, even if the app is idle for several hours.
Shouldn't the garbage collector clean up unused objects after a while and reduce the memory load? That doesn't seem to be happening on Heroku. Generally, memory usage starts to creep up after running some reports with several thousand rows of data, even though results are paginated.
How can I find the memory leak? Plugins like bleak_house are way out of date or don't run nicely in the Heroku environment. Can I adjust the GC settings to make them more aggressive?
The GC should do the clean-up, and probably does.
You can force a collection with GC.start; if many objects were simply waiting to be collected, this will free them, but I suspect that is not the issue.
Is it possible you somehow create a bunch of objects and never release them, by keeping cached copies or something?
I'm unfamiliar with the existing tools to check this, but you may want to check which objects exist using ObjectSpace. For example:
ObjectSpace.each_object.with_object(Hash.new(0)) { |obj, h| h[obj.class] += 1 }
# => a Hash with the number of objects by class
If you get an unexpected number for one of your classes, for instance, you would have a better idea of where to look.
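Building on that snippet, you can snapshot the counts before and after exercising a suspect code path and diff them; classes whose counts only ever grow are leak candidates. A sketch, where run_suspect_code is a hypothetical placeholder:

counts = lambda do
  GC.start  # collect first, so only live objects are counted
  ObjectSpace.each_object.with_object(Hash.new(0)) { |obj, h| h[obj.class] += 1 }
end
before = counts.call
run_suspect_code  # hypothetical: exercise the code path you suspect of leaking
after = counts.call
# Report only classes whose live-object count grew.
after.each { |klass, n| puts "#{klass}: +#{n - before[klass]}" if n > before[klass] }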
Install the New Relic add-on. It has a bunch of useful metrics that you can use to find the source of the leak. I think it's generally better to see which part of the code takes the longest to execute and try to optimize that, rather than tweak the GC outright.
One of the nice features New Relic offers is the ability to pinpoint the source of the longest-running SQL query, for example. I encourage you to give it a try.

iPhone: Should I use Instruments to verify memory leaks?

I've just kind of been 'winging' it with long tests (for hours) with no crashes, and eyeballing my code for quite a while to make sure that everything looks pretty kosher as far as memory leaks go. But should I be using Instruments... is it mandatory to do this before uploading to the App Store?
I think that using Instruments is not only good practice, it's strongly recommended by the iOS development community as a whole. Even if your app seems to run fine, you still may have leaks in other use cases. Test your app thoroughly with Instruments before pushing to the App Store or you may be in for a lot of users on older generation devices complaining that the app crashes.
Some of the most crucial tools:
Leaks
Allocations
Time Profiler
Another suggestion alongside using Instruments is to compile with the -pedantic flag.
In addition to what Yuji said, turn on as many warnings as you can in the build settings; by default, these are off.
No.
But at least run "Build & Analyze" in Xcode. It tells you what it can find about memory leaks just by analyzing the source code statically. It's basically eyeballing the code, done by the machine, and it's infinitely better than doing it yourself. If any warnings are issued, fix all of them. It's rare for the static analyzer to give false positives.
Also, running your app with Instruments is helpful to see how it really allocates memory. Sometimes it's fun, too.
I would never publish an app without running Instruments' Leaks tool.
I often miss a release somewhere, and even if I read the code 200 times, I would not find it without Instruments.

iPad, any way to clear extra memory before running app?

I am creating apps for the iPad and it's driving me crazy.
The memory usable by an app changes depending on what other apps were run before it.
There is no reliable, set amount of memory that your app can use.
For example, if Safari has run, then even after it closes it holds on to some memory, which affects other apps.
Does anyone know if there is a way to clear the memory before my app runs so I can get the same running environment every time?
I have created several prototype apps to show to other people, and it seems like after a few days they always come back and tell me that it crashes and to fix it.
When I test it, the reason is always that there is not enough memory (when there was enough while I was testing). So I need to squeeze every bit of memory out of the app (which usually hurts performance due to heavy loading and releasing) and tell them to restart their iPad if it keeps happening.
I read in a book that apps can generally use at most 40 MB or so, yet most of the apps that crash do so at around 27 MB. I want my remaining 13 MB!
While you will get a pretty good state after a reboot, what you should really aim for is clean memory management and avoiding leaks.
Using the available memory wisely is solely up to the programmer. Don't ever tell your users to reboot the device. And with every OS update, memory behavior might change anyway.
