I have a Java computation program that has been running for 2 days, and it seems it still needs 24 hours to finish. I started the program with 5 GB of memory because of the limits imposed by other jobs running at the same time. Now that those other jobs have finished, I have extra memory available for the only remaining program. Is there a way to increase the memory of a running JVM? Thanks.
No, you cannot increase heap size in a running JVM.
The heap size set at JVM start time cannot be modified at runtime. The heap of your application can grow only up to the value set by the -Xmx parameter, so the only remedy is to restart the program with a larger value.
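To double-check what the running process actually got, the JDK's jinfo tool can print the effective limit; after that, a restart with a bigger -Xmx is the only option (the class name below is just a placeholder):

jinfo -flag MaxHeapSize <pid>   # prints e.g. -XX:MaxHeapSize=5368709120 (5 GB)
java -Xmx16g MyComputation      # restart with a larger cap; adjust 16g to what is free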
I am trying to adjust a build job within Jenkins. The problem is it keeps failing due to lack of memory; I've adjusted the Java -Xmx setting, but that did not solve the problem.
It turns out there is a RAM limit within the worker. I ran "free -m" and "cat /proc/meminfo" as part of the build script, and both confirmed that the job is being run with a 1 GB RAM limit. The server has more, but the build isn't using it and keeps failing due to lack of memory.
How can I lift that limit? Thank you.
Heap or PermGen?
There are two OutOfMemoryErrors which people usually encounter. The first is related to heap space: java.lang.OutOfMemoryError: Java heap space. When you see this, you need to increase the maximum heap space. You can do this by adding the following to your JVM arguments: -Xmx200m, where you replace the number 200 with the new heap size in megabytes.
The second is related to PermGen: java.lang.OutOfMemoryError: PermGen space. When you see this, you need to increase the maximum Permanent Generation space, which is used for things like class files and interned strings. You can do this by adding the following to your JVM arguments: -XX:MaxPermSize=128m, where you replace the number 128 with the new PermGen size in megabytes. (Note that PermGen exists only up to Java 7; on Java 8 and later it was replaced by Metaspace, tuned with -XX:MaxMetaspaceSize.)
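For the Jenkins master itself, these flags go into the JVM arguments Jenkins is started with. As an illustrative sketch, on a Debian/Ubuntu package install that usually means editing /etc/default/jenkins (path and values here are examples; adjust for your installation):

JAVA_ARGS="-Xmx1024m -XX:MaxPermSize=256m"

If the OutOfMemoryError occurs inside a build rather than in Jenkins itself, the flags belong in the build tool's environment instead, e.g. MAVEN_OPTS for a Maven job.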
Also note:
Memory Requirements for the Master
The amount of memory Jenkins needs is largely dependent on many factors, which is why the RAM allotted for it can range from 200 MB for a small installation to 70+ GB for a single and massive Jenkins master. However, you should be able to estimate the RAM required based on your project build needs.
Each build node connection will take 2-3 threads, which equals about 2 MB or more of memory. You will also need to factor in CPU overhead for Jenkins if there are a lot of users who will be accessing the Jenkins user interface.
It is generally a bad practice to allocate executors on a master, as builds can quickly overload a master’s CPU/memory/etc and crash the instance, causing unnecessary downtime. Instead, it is advisable to set up agents that the Jenkins master can delegate jobs to, keeping the bulk of the work off of the master itself.
Finally, there is a monitoring plugin from Jenkins that you can use:
https://wiki.jenkins.io/display/JENKINS/Monitoring
Sources:
https://wiki.jenkins.io/display/JENKINS/Monitoring
https://wiki.jenkins.io/display/JENKINS/Builds+failing+with+OutOfMemoryErrors
https://www.jenkins.io/doc/book/hardware-recommendations/
I've been struggling with a memory consumption issue: my Tomcat's memory usage in Task Manager keeps rising with every request I make to my webapp. I have read that the memory usage shown in Task Manager does not necessarily mean there is a problem, since it combines heap, non-heap, and native memory for the JVM; however, I am still unsure of what exactly needs to be done to decrease heap memory consumption automatically.
I am using Tomcat 7.0.62 and JRE 1.8.0_51 with a Hibernate 3/c3p0 application over a Struts 2 framework.
Over the past few months we have added functionality to the application. At first I thought it was a memory leak, but every time we press the "Perform GC" button in JConsole the heap memory goes down in the graphs, and the memory usage of the Tomcat process stops increasing at the pace it did before.
So far I have set the following properties:
-Dcatalina.home=C:\apache-tomcat-7.0.62
-Dcatalina.base=C:\apache-tomcat-7.0.62
-Djava.endorsed.dirs=C:\apache-tomcat-7.0.62\endorsed
-Djava.io.tmpdir=C:\apache-tomcat-7.0.62\temp
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file=C:\apache-tomcat-7.0.62\conf\logging.properties
-Dcom.sun.management.jmxremote.port=6060
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-XX:+UseParNewGC
-XX:SurvivorRatio=128
-XX:MaxTenuringThreshold=0
-XX:+UseTLAB
-XX:+UseConcMarkSweepGC
-XX:+CMSClassUnloadingEnabled
-XX:+CMSPermGenSweepingEnabled
Is there any configuration parameter I am missing that you would recommend?
Thanks in advance.
There are a number of different garbage collectors that can be selected when the JVM is started. By default, the GC won't bother running while the system is under load and there is still free memory. Use -verbose:gc if you want to see how garbage collection is behaving. You can lower the maximum heap (-Xmx) at startup and GC will kick in more frequently.
I recommend only worrying about it if the server is running other programs and they are being memory starved. The setup you have now is providing the best tomcat performance it can with the resources available.
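If you want to watch what the collector is doing on this setup, a sketch of the extra options (standard JDK 8 HotSpot flags; the -Xmx value is only an example):

set CATALINA_OPTS=-Xmx1024m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps

That works when Tomcat is started via catalina.bat; with the tomcat7w service wrapper, paste the same flags into the Java Options box on the Java tab instead.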
I have been reading the documentation trying to understand when it makes sense to increase the async-thread pool size via the +A N switch.
I am perfectly prepared to benchmark, but I was wondering if there were a rule-of-thumb for when one ought to suspect that growing the pool size from 0 to N (or N to N+M) would be helpful.
Thanks
The BEAM runs Erlang code in special threads it calls schedulers. By default it will start a scheduler for every core in your processor. This can be controlled at start-up time, for instance if you don't want to run Erlang on all cores but "reserve" some for other things. Normally a file I/O operation runs in a scheduler, and as file I/O operations are relatively slow they will block that scheduler while they are running, which can affect the real-time properties. Normally you don't do that much file I/O, so it is not a problem.
The asynchronous thread pool consists of OS threads that are used for I/O operations. Normally the pool is empty, but if you use the +A flag at startup the BEAM will create extra threads for this pool. These threads will then be used only for file I/O operations, which means the scheduler threads no longer block waiting for file I/O, and the real-time properties are improved. Of course this has a cost, as OS threads aren't free. The two kinds don't mix: scheduler threads are just scheduler threads and async threads are just async threads.
If you are writing linked-in drivers for ports, these can also use the async thread pool, but you have to detect yourself whether the pool has been started.
How many you need is very much up to your application. By default none are started. Like @demeshchuk, I have also heard that Riak likes to have a large async thread pool, as it opens many files. My only advice is to try it and measure, as with all optimisation.
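If you want to experiment, the pool size is set on the command line and can be inspected from the Erlang shell (64 is just an example value):

erl +A 64

1> erlang:system_info(thread_pool_size).
64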
By default, the number of scheduler threads in a running Erlang VM is equal to the number of logical processor cores (if you are using SMP, of course).
From my experience, increasing the +A parameter may give some performance improvement when you have many simultaneous file I/O operations. I doubt that increasing +A will improve the overall performance of processes, since BEAM's scheduler is extremely fast and optimized.
Speaking of the exact numbers – that totally depends on your application I think. Say, in case of Riak, where the maximum number of opened files is more or less predictable, you can set +A to this maximum, or several times less if it's way too big (by default it's 64, BTW). If your application contains, like, millions of files, and you serve them to web clients – that's another story; most likely, you might want to run some benchmarks with your own code and your own environment.
Finally, I don't believe I've ever seen +A set to more than a hundred. That doesn't mean you can't set it higher, but there's likely no point.
Why is a PHP CLI process using 25% of CPU, and is there a way to reduce this? Right now I'm running 3 instances, but obviously I would like to run many more to finish the job faster.
Background info: I'm moving data from a transbase db to mysql db.
EDIT: If I run this in a browser there isn't such a noticeable load on the CPU.
More processes doesn't necessarily mean faster processing. A PHP process takes as much CPU as it can get to finish the task as quickly as possible. It's probably 25% because you have a quad-core processor and it's a single-threaded task.
Ideally, you would need 4 processes if you could assign each of them to a different core. Also, because of waiting for database or disk I/O, a single thread cannot fully use all CPU power all the time, so go ahead and run more processes. It's not that a 5th process will crash because all CPU power is used up; it will just take its share while the OS divides processing power among all running processes.
Just don't start too many; every process has a little overhead, and you won't benefit from having 200 simultaneous processes.
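A sketch of that idea, with a made-up migration script that takes a row range so the processes don't step on each other:

php migrate.php --from=0 --to=100000 &
php migrate.php --from=100000 --to=200000 &
php migrate.php --from=200000 --to=300000 &
wait    # block until all three background processes finish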
I am running a JRuby on Rails application. I see a lot of this randomly in my logs:
The max pool size is currently 5; consider increasing it
I understand I can increase the max pool size in my configuration to address this. The problem I'm looking to address is to understand what the optimal number should be. I am trying to avoid contention issues for connections. Clearly setting this number to something obnoxiously large will not work either.
Is there a general procedure to follow to determine your app's optimal pool size setting?
From here,
The optimum size of a thread pool depends on the number of processors available and the nature of the tasks on the work queue. On an N-processor system for a work queue that will hold entirely compute-bound tasks, you will generally achieve maximum CPU utilization with a thread pool of N or N+1 threads.
For tasks that may wait for I/O to complete -- for example, a task that reads an HTTP request from a socket -- you will want to increase the pool size beyond the number of available processors, because not all threads will be working at all times. Using profiling, you can estimate the ratio of waiting time (WT) to service time (ST) for a typical request. If we call this ratio WT/ST, for an N-processor system, you'll want to have approximately N*(1+WT/ST) threads to keep the processors fully utilized.
Processor utilization is not the only consideration in tuning the thread pool size. As the thread pool grows, you may encounter the limitations of the scheduler, available memory, or other system resources, such as the number of sockets, open file handles, or database connections.
So profile your application: if your threads are mostly CPU-bound, set the thread pool size to the number of cores, or the number of cores + 1. If you are spending most of your time waiting for database calls to complete, experiment with a fairly large number of threads and see how the application performs.
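As a rough illustration of the N*(1+WT/ST) rule from the quote above, here is a small Java sketch; the wait and service times are made-up numbers that you would obtain by profiling your own requests:

public class PoolSizeEstimate {
    public static void main(String[] args) {
        int n = Runtime.getRuntime().availableProcessors(); // N processors
        double waitTime = 80.0;    // avg ms per request spent blocked on the DB (assumed)
        double serviceTime = 20.0; // avg ms per request of actual CPU work (assumed)
        // N * (1 + WT/ST): with these numbers, an 8-core box suggests 8 * 5 = 40 threads
        long poolSize = Math.round(n * (1 + waitTime / serviceTime));
        System.out.println("Suggested pool size: " + poolSize);
    }
}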