K6 Memory Consumption per VU - load-testing

I have recently started to work with k6 and really liked the approach to writing tests. When I started my first serious test, I found that the memory consumption per VU is pretty high even though my test was not huge. From what I have read, memory consumption should be around 1-2 MB per VU if the scripts are small. In my case it is around 5 MB per VU.
To verify how much memory a very simple script needs, I created a script that does nothing:
export default function() {
}
When I run this script with 2000 users
docker run --rm -v /tmp:/tmp loadimpact/k6 run -u 2000 --paused --no-teardown --no-setup /tmp/MemTest.js
I end up with a memory usage of 10 GB (about 5 MB per VU).
So even though the JS is empty, the memory usage is quite high. Is this expected?

Unfortunately you are right; it seems that either the memory usage has grown, or our previous measurements were incorrect. A brief investigation revealed that the chief culprit of the current memory usage is our use of the core-js library. I've created a new GitHub issue to further investigate how we can improve the situation: https://github.com/loadimpact/k6/issues/1036
@user1171006, try using the loadimpact/k6:master Docker image; VU memory usage should have been almost halved after we merged https://github.com/loadimpact/k6/pull/1038. That 2000 VU test you tried takes just under 5 GB of RAM on my machine now.
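Applied to the command from the question, that suggestion would look something like the following (same MemTest.js script and flags; only the image tag changes):
docker run --rm -v /tmp:/tmp loadimpact/k6:master run -u 2000 --paused --no-teardown --no-setup /tmp/MemTest.js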

Related

Find CPU and Memory Time Series of Slurm Job?

There's a nice question (Find out the CPU time and memory usage of a slurm job) about how to retrieve the CPU time and memory usage of a Slurm job, and spinup has a nice answer (https://stackoverflow.com/a/56555505/4570472). However, if I understand correctly, seff <job id> returns Memory Efficiency, which corresponds to MAXRSS over the entire life of the job.
How do I retrieve the time series of memory (and perhaps CPU) usage?
I'd like this information to understand why my Slurm jobs are running out of memory after 6+ hours of running fine.
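One possible approach (a hedged sketch, not from the original thread; the job id, sampling interval, and output file are illustrative) is to sample a still-running job periodically with sstat and build the time series yourself:
JOBID=12345   # illustrative job id to watch
# Append a timestamped memory/CPU sample every 60 seconds while the job is still queued or running.
# MaxRSS is a running maximum, so the series still shows when memory growth happens.
# Depending on your setup you may need to query the batch step instead, e.g. "$JOBID.batch".
while [ -n "$(squeue -h -j "$JOBID" 2>/dev/null)" ]; do
    date >> mem_series.log
    sstat -j "$JOBID" --format=JobID,AveCPU,MaxRSS,MaxVMSize >> mem_series.log
    sleep 60
done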

How to configure Neo4j to run in a minimal memory environment?

For demo purposes, I am running Neo4j in a low-memory environment: a laptop with 4 GB of RAM, of which 1644 MB is used for video memory, leaving only 2452 MB available. It's also running SQL Server, our WCF services, and our clients, so there's little memory left for Neo4j.
I'm running LOAD CSV Cypher scripts via REST from a C# service. There are more than 20 scripts, and they work well in a server environment. I've written code to paginate so that they run in smaller batches. I've reduced the batch size very low (25 CSV rows), and a given script may do 300 batches, but I continue to get "Java heap space" errors at some point.
I've tried configuring Neo4j with a relatively large heap (640 MB, which is all the available RAM) plus setting cache_type to none, and it gets much further before I get the Java heap space error. What I don't understand is why, in that case, memory grows that much. Also, until I restart the Neo4j service, I get these Java heap space errors quickly. The batch size doesn't seem to appreciably affect how much memory is used.
However, when I run the application with these settings, query performance becomes very slow due to the cache settings.
I am running this on a Windows 7 laptop with 4 GB RAM, using Neo4j 2.2.1 Community Edition.
Thoughts?
Perhaps you can share your LOAD CSV statement and the other queries you run.
I think you just ran into this:
http://markhneedham.com/blog/2014/10/23/neo4j-cypher-avoiding-the-eager/
So PROFILE or EXPLAIN your queries and rework them so they don't build up that much intermediate state. We can help if you share your statements.
And you should use PERIODIC COMMIT 100.
Something like:
heap=512M
dbms.pagecache.memory=200M
keep_logical_logs=false
cache_type=none
http://console.neo4j.org runs neo4j in memory putting up to 50 instances in a single gigabyte of memory. So it should be doable.

Appfog instances vs memory

I'm developing an API on Appfog and want to know what to focus on: more memory with fewer instances, or more instances with less memory each.
Appfog gives you 2 GB of RAM for free and up to 16 instances if each instance gets 128 MB of RAM.
My application uses PHP, MySQL and Memcachier.
I want to launch it soon and want to know which configuration is best for my server.
What is the benefit of more RAM versus more instances?
Thanks for helping :)
Best Regards,
Johnny
You want as many instances as your app will run without running out of memory :). More instances mean better performance and uptime. However, if an instance runs out of memory, it will be shut down, leaving your app running with fewer instances until they all collapse. You can diagnose this problem with the af apps and af logs <appname> --all commands. If the app is regularly running at < 100% of its instances, the instance memory budget may be too low. When there are down instances, the logs command may reveal memory-limit-reached errors.
Memory Recommendations
Here are some memory recommendations to start out with: WordPress with several installed plugins will need more than 512 MB to be stable. For lean custom PHP apps, 128 MB is usually sufficient but should be watched. If an app uses a framework, try 256 MB. These limits may seem high, but they reflect peak memory usage, not average usage.
Load Test
Load testing with Siege can help find a memory/instance balance by determining whether your app peaks over the memory limit. Scale the app down to 1 instance and run Siege with 5, 10, and 15 concurrent connections, progressively increasing by 5 until the app falls over. If the app does fall over, bump the memory up and try again.
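As a rough sketch of that procedure (the URL is a placeholder and the two-minute duration is illustrative; -c sets concurrent users and -t the test length):
# Run Siege against the single remaining instance, stepping concurrency up by 5 each time.
siege -c 5 -t 2M https://your-app.example.com/
siege -c 10 -t 2M https://your-app.example.com/
siege -c 15 -t 2M https://your-app.example.com/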

Cloudfoundry: GWT app, memory usage steadily increases as per vmc but not in memory profiler

I deployed a GWT app (Tomcat 7) that does file upload and displays the contents in a table.
I used Probe to check memory usage, and the issue is that, as per vmc (i.e. the Cloud Foundry console), memory never goes down (abnormal), but Probe displays something else altogether (normal).
My initial instinct was that it might be a memory leak in the app, but the Probe and vmc stats suggest some other issue.
The reported memory usage is actually the RAM usage of the whole Unix process (this way it is consistent with Ruby and Node.js deployments). So it takes into account your whole heap + permgen + even some JVM overhead, which may explain the difference.
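One way to keep that reported figure predictable (a sketch; the flag values and the 512 MB instance size are illustrative, not from the answer above) is to cap the JVM components explicitly so that heap plus permgen stays well below the instance memory limit:
# Illustrative Tomcat 7 sizing for a 512 MB instance: cap heap and permgen,
# leaving headroom for thread stacks and other JVM overhead counted in the process total.
export JAVA_OPTS="-Xmx384m -XX:MaxPermSize=96m"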

Reducing Redmine's memory usage - Low Hanging Fruit

I am running a Redmine instance with Passenger and Nginx. With only a handful of issues in the database, Redmine consumes over 80 MB of RAM.
Can anyone share tips for reducing Redmine's memory usage? The Redmine instance is used by 3 people and I am willing to sacrifice speed.
There is not really any low-hanging fruit. And if there were, we would have already included and activated it by default.
80 MB RSS (as opposed to virtual size, which can be much more) is actually pretty good. In normal operation, it will use between 70 and 120 MB RSS per process (depending on the deployment model; rather less on Passenger).
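Since the setup above already uses Passenger, those per-process RSS numbers can be checked directly with Passenger's own tool (a side note, not part of the original answer):
# Report per-process memory usage (RSS and private dirty RSS) for nginx/Passenger/Rails processes
sudo passenger-memory-stats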
As andrea suggested, you can reduce your overall memory footprint by about one third when you use REE (Ruby Enterprise Edition, which is also free). But this saving can only be achieved when you run more than one process (each requiring the above memory). REE achieves this saving by optimizing Ruby for a technique called copy-on-write, so that additional application processes take less memory.
So I'm sorry, your (hypothetical) 128 MB vServer will probably not suffice. For a small installation, you might be able to squeeze a minimal installation into 256 MB, but it only starts to be anything but a complete pain in the ass at 512 MB (including database).
That's because of how Rails applications work, in contrast to things like PHP. They require a running application server instance. That instance is typically able to answer one request at a time, using about the same amount of memory all the time. So your memory consumption is roughly proportional to the number of application processes you run, independent of actual load. But if you tune your system properly, you can get quite a number of reqs/s out of one process.
Maybe I am replying very late, but I got stuck on the same issue and found a link on optimizing Ruby/Rails memory usage, which worked for me:
http://community.webfaction.com/questions/2476/how-can-i-reduce-my-rubyrails-memory-usage-when-running-redmine
It may be helpful for someone else.
