What does Rails load into memory before it reaches some 'limit'? - ruby-on-rails

I have a Rails application which uses ~550 MB after 3-4 hours of use and then stays there for the rest of the day, which leads me to think this is just the normal way Rails uses memory.
Here is a screenshot of my memory usage from Heroku, and my Gemfile.lock.
My goal is to reduce the app's memory usage. For that I need to understand which parts contribute to it. I have looked through various articles on the Internet about Ruby/Rails memory usage.
I used the derailed_benchmarks 'mem' command to track down what is large at boot. Basically, I have two questions:
Why does my memory usage go up to some N MB value and not more or less? How can I calculate that value?
Assume we have some gem which consumes 20 MB. How many times is it added to the overall usage?
If there is any information that can help with my issue, I would be very thankful.
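For reference, here is a minimal sketch of the kind of snapshot I am trying to interpret (my own illustration, assuming a reasonably recent MRI where these GC.stat keys exist). It can be run in a Rails console or a one-off dyno once the app has warmed up to that plateau:

    # Rough illustration only: compares what the OS says the process uses (RSS)
    # with what Ruby's own object heap accounts for.
    def memory_snapshot
      stat = GC.stat
      {
        # Resident set size in MB, i.e. the figure Heroku reports (ps prints KB).
        rss_mb:          `ps -o rss= -p #{Process.pid}`.to_i / 1024,
        # Slots currently occupied by live Ruby objects.
        heap_live_slots: stat[:heap_live_slots],
        # Slots Ruby keeps on its internal free list instead of giving back to the OS.
        heap_free_slots: stat[:heap_free_slots],
        # Total objects ever allocated; rapid growth means lots of object churn.
        total_allocated: stat[:total_allocated_objects]
      }
    end

    puts memory_snapshot.inspect

The gap between rss_mb and what the object slots account for is roughly what C extensions, string/array buffers and heap fragmentation hold on top of the per-gem require cost that derailed_benchmarks reports at boot.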

Related

How to know if I have to do memory profiling too?

I am currently doing CPU sampling of an ASP.NET Core application to which I send a huge number of requests (>500K). I see that the peak working set of the application is around ~300 MB, which in my opinion is not huge considering the number of requests being made. But what I have been observing is a huge drop in requests per second when I enable certain pieces of functionality in my application.
Question:
Should I do memory profiling too? I ask this because even though the peak working set is ~300 MB, there could be a large number of short-lived objects being created and collected by the GC, and since GC work also counts as CPU, should I do memory profiling to see if I allocate too much?
I will answer this question myself based on new information I found.
It is based on the tool PerfView, which provides information about GC and allocations.
When you open the GCStats view and navigate the links to the process you care about, you should see a GC summary for it.
Notice that this view includes the % CPU Time spent on Garbage Collection. If you see this is > 5%, it should be a cause for concern and you should start memory profiling.

DelayedJob doesn't release memory

I'm using Puma server and DelayedJob.
It seems that the memory taken by each job isn't released, and I slowly get bloat that forces me to restart my dyno (Heroku).
Any reason why the dyno won't return to the memory usage figure it had before the job was performed?
Is there any way to force it to be released? I tried calling the GC but it doesn't seem to help.
You may have one of the following problems, or actually all of them:
Number 1. This is not an actual problem, but a misconception about how Ruby releases memory to the operating system. Short answer: it doesn't. Long answer: Ruby manages an internal list of free objects. Whenever your program needs to allocate new objects, it takes them from this free list. If there are none left, Ruby allocates new memory from the operating system. When objects are garbage collected they go back to the free list, so Ruby still keeps the allocated memory. To illustrate: imagine that your program normally uses 100 MB. If at some point it allocates 1 GB, it will hold on to that memory until you restart it.
There are some good resources to learn more about this here and here.
What you should do is increase your dyno size and monitor your memory usage over time. It should stabilize at some level; that is your normal memory usage.
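A minimal sketch of Number 1 in plain MRI (no delayed_job involved); the exact numbers will vary by Ruby version and allocator, but the footprint typically does not fall back to the baseline after the GC run:

    # Demonstrates Ruby keeping garbage-collected memory on its own free list
    # rather than returning it to the operating system.
    def rss_mb
      # Resident set size reported by the OS, in MB (ps prints KB).
      `ps -o rss= -p #{Process.pid}`.to_i / 1024
    end

    puts "baseline:         #{rss_mb} MB"

    # Allocate a few hundred MB worth of small strings, then drop the reference.
    big = Array.new(2_000_000) { "x" * 100 }
    puts "after allocation: #{rss_mb} MB"

    big = nil
    GC.start
    # The objects are gone as far as Ruby is concerned, but the process
    # footprint usually stays near its peak.
    puts "after GC:         #{rss_mb} MB"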
Number 2. You can have an actual memory leak, either in your code or in some gem. Check out this repository; it contains information about well-known memory leaks and other memory issues in popular gems. delayed_job is actually listed there.
Number 3. You may have unoptimized code that uses more memory than needed; you should investigate the memory usage and try to decrease it. If you are processing large files, for example, you should do it in smaller batches, as sketched below.
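For Number 3, the usual shape of the fix looks like this (the model, scope and helper methods are made up for illustration; find_each and File.foreach are standard Rails/Ruby APIs):

    # Instead of loading every record into memory at once:
    #   Report.where(status: "pending").each { |r| process(r) }
    # stream them in fixed-size batches:
    Report.where(status: "pending").find_each(batch_size: 500) do |report|
      process(report)   # hypothetical per-record work
    end

    # Same idea for large files: read line by line instead of slurping the
    # whole file into one giant string with File.read.
    File.foreach("big_export.csv") do |line|
      handle_line(line) # hypothetical
    end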

Neo4j inserting large files - huge difference in time between inserting 100 and 250 files at once

I am inserting a set of files (PDFs, each about 2 MB) into my database.
Inserting 100 files at once takes roughly 15 seconds, while inserting 250 files at once takes 80 seconds.
I am not quite sure why this big difference happens, but I assume it is because the free memory fills up somewhere between those two amounts. Could this be the problem?
If there is any more detail I can provide, please let me know.
I'm not exactly sure what is happening on your side, but it really looks like what is described here in the Neo4j performance guide.
It could be:
Memory issues
If you are experiencing poor write performance after writing some data (initially fast, then massive slowdown) it may be the operating system that is writing out dirty pages from the memory mapped regions of the store files. These regions do not need to be written out to maintain consistency so to achieve highest possible write speed that type of behavior should be avoided.
Transaction size
Are you using multiple transactions to upload your files?
Many small transactions result in a lot of I/O writes to disc and should be avoided. Too big transactions can result in OutOfMemory errors, since the uncommitted transaction data is held on the Java Heap in memory.
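To make the transaction-size point concrete, here is a rough sketch in Ruby (to match the rest of this page). run_in_transaction and tx.run stand in for whatever Neo4j driver you actually use rather than any specific real API, and the batch size of 25 is just an example:

    # Sketch of the transaction-size advice only. `session`, `run_in_transaction`
    # and `tx.run` are placeholders for your actual driver's API.
    BATCH_SIZE = 25

    def insert_in_batches(session, paths)
      paths.each_slice(BATCH_SIZE) do |batch|
        # One transaction per small batch: avoids the I/O overhead of committing
        # every single file, while keeping the uncommitted data (25 x ~2 MB here)
        # small enough to sit comfortably on the heap.
        session.run_in_transaction do |tx|
          batch.each do |path|
            tx.run('CREATE (d:Document {name: $name, data: $data})',
                   name: File.basename(path),
                   data: File.binread(path))
          end
        end
      end
    end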
If you are on Linux, they also suggest some tuning to improve performance. See here.
You can look up the details on the page.
Also, if you are on Linux, you can check memory usage yourself during the import with this command:
$ free -m
I hope this helps!

Apple Instruments slows down app when analyzing memory allocations

When running my app in the simulator and analyzing its memory allocations using Instruments, the app runs very slowly, at less than 1/30 of its normal speed.
The app uses about 50 MB RAM and has approximately 900,000 live objects (according to Instruments).
Could this be the reason for the slow performance?
When running the app on the device, or in the simulator without Instruments, it performs well (except for the memory issue I am trying to debug).
Do you have any idea on how to solve this issue?
Did you encounter slow performance when using the Memory Allocations instrument?
Would you consider having more than 900,000 live objects "concerning"?
Considering your Analyzer performance issue
In your specific case monitoring the app over a long period of time will not be necessary, as you reach the state of high memory consumption very soon. You could simply stop recording at this point. Then you won't have problems navigating through the different views and statistics to find the cause of the memory issue.
Analyzing the memory issue
Slowing down is normal, but 1/30 sounds quite alarming.
You should probably track how the number of live objects and the memory usage change while you use the app.
It is difficult to decide whether a certain number of live objects at a specific point in time is critical (though 900,000 seems very high).
In general: if live objects and memory usage grow continuously and never shrink, that is a bad sign.
If you take a look at Statistics -> Object Summary (screenshot), Live Bytes should be a lot smaller than Overall Bytes, and the number of #Living objects should be a lot smaller than the number of #Transitory objects.
The second thing you can look at, is the Call Tree view.
It gives you a nice overview of which parts of the application are responsible for reserving large amounts of memory.
Possible solutions
Once you detect the parts of your code that are responsible for reserving the large amount of memory, you can look for retain cycles or try to use more autorelease pools in those spots.
Check that you have enough available disk space. I had 8 GB left and it seems that was too little: Instruments was extremely slow, took a minute just to start, and barely got anywhere at all.
I cleared out more disk space and then it suddenly went back to being as fast as before.

iOS Testing with Instruments - Best Practices

Since I'm developing iOS-based applications, I'm also spending a lot of time on getting started with testing applications and debugging performance issues.
After reading the Instruments user guide, I'm now able to detect memory leaks, record the current memory size and CPU usage, and much more, which helps a lot!
Now, in order to improve my testing strategy, I'm looking for some kind of benchmark or standard values for the different instruments (such as CPU usage, energy consumption, ...). You know what I mean? For example: I have a CPU usage of 80% over a period of 10 seconds. Is that okay, or should I think about performance optimization? Sure, in the case of CPU usage it depends on what the app is doing over that period of time (e.g. reloading some data), but are there any rules of thumb or best practices for that?
I already did some research on the internet and only found a video of the iOS Tech Talk in London by Michael Jurewitz. From that talk I picked out the following statements that are probably useful for me:
Activity Monitor: Can only be used to compare your app's resource usage to that of other apps
Allocations: A constantly growing allocations chart is an obvious warning sign of memory leaks; Allocations does not show the real memory size the app uses
VM Tracker: Shows the overall memory size; rule of thumb: a dirty size of more than 100 MB for an app is far too much
...
Now I need some "rules of thumb", especially for the CPU Monitor (where is the boundary between a good case and a bad case?) and Energy Consumption (level..?).
Do you have any tips for me or do you know some articles I can read through?
Thanks a lot!
Philipp
