I have a small program that performs parallel banking transfers using STM, so I am testing it on different machines, 2-core and 1-core. On the 2-core machines everything works, but on the 1-core machine a Java out-of-memory error is thrown when I perform 1 million parallel transactions.
The error is the following: "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
I also have a Java synchronized version of the same program, which works; even though it is slower, it can reach a million transactions.
What can I do to make my Clojure application work on the 1-core machine? I am afraid the garbage collector can't handle so many refs... what do you think?
Thanks a lot for your help!
Update:
It works now. I ran java -Xmx1000m -jar myprog.jar and it worked perfectly!
I didn't know it was possible to increase the heap size for the JVM, and that was exactly my problem.
Thanks a lot to "sw1nn" for the great comment ;)
You can also add :jvm-opts to your Leiningen project.clj like below:
:jvm-opts ["-Xmx1500m"]
so that it is applied when you run your program through Leiningen (e.g. for tests).
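If you want to confirm that a setting like this actually took effect, here is a minimal Java sketch (the class name is arbitrary) that prints the maximum heap the JVM was started with:

public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the -Xmx limit the JVM will attempt to use
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
    }
}

Running it with and without -Xmx1000m makes it easy to see whether the flag is being picked up.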
What would be the best way to debug memory issues in a Dataflow job?
My job was failing with a GC OOM error, but when I profile it locally I cannot reproduce the exact scenarios and data volumes.
I'm running it now on 'n1-highmem-4' machines and I don't see the error anymore, but the job is very slow, so obviously using a machine with more RAM is not the solution :)
Thanks for any advice,
G
Please use the options --dumpHeapOnOOM and --saveHeapDumpsToGcsPath (see the docs).
This will only help if one of your workers actually OOMs. Additionally, you can try running jmap -dump PID on the harness process on the worker to obtain a heap dump at runtime, if it's not OOMing but you nevertheless observe high memory usage.
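For reference, a typical jmap invocation looks like this (the PID and output path are placeholders):

jmap -dump:live,format=b,file=/tmp/worker-heap.hprof <PID>

The live option forces a full GC first, so the dump only contains reachable objects; drop it if you also want to see garbage that hasn't been collected yet.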
For demo purposes, I am running Neo4j in a low-memory environment: a laptop with 4 GB of RAM, of which 1644 MB is used for video memory, leaving only 2452 MB available. It's also running SQL Server, our WCF services, and our clients, so there's little memory left for Neo4j.
I'm running LOAD CSV Cypher scripts via REST from a C# service. There are more than 20 scripts, and they work well in a server environment. I've written code to paginate, so that they run in smaller batches. I've reduced the batch size very low (25 CSV rows), and a given script may do 300 batches, but I continue to get "Java heap space" errors at some point.
I've tried configuring Neo4j with a relatively large heap (640 MB, which is all the RAM I can give it) plus setting cache_type to none, and it gets much further before the Java heap space error appears. What I don't understand is why, in that case, memory grows that much. Also, until I restart the Neo4j service, I get these Java heap space errors quickly. The batch size doesn't seem to appreciably affect how much memory is used.
However, after doing that, when I run the application with these settings, query performance becomes very slow due to the cache settings.
I am running this on a Windows 7 laptop with 4 GB RAM, using Neo4j 2.2.1 Community Edition.
Thoughts?
Perhaps you can share your LOAD CSV statement and the other queries you run.
I think you just ran into this:
http://markhneedham.com/blog/2014/10/23/neo4j-cypher-avoiding-the-eager/
So PROFILE or EXPLAIN your queries and rework them so they don't use that much intermediate state. We can help if you share your statements.
And you should use USING PERIODIC COMMIT 100.
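For illustration, a sketch of what that looks like; the file name, header names, and label are placeholders, not your actual data:

USING PERIODIC COMMIT 100
LOAD CSV WITH HEADERS FROM "file:///people.csv" AS row
MERGE (p:Person {id: row.id})
SET p.name = row.name;

PERIODIC COMMIT flushes the transaction every 100 rows, so the intermediate state never has to fit in the heap all at once.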
For the memory configuration, something like:
heap=512M
dbms.pagecache.memory=200M
keep_logical_logs=false
cache_type=none
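In Neo4j 2.2 those settings live in two files; a sketch of where each goes (the heap values are illustrative, not tuned recommendations):

# conf/neo4j-wrapper.conf: JVM heap for the Neo4j service, in MB
wrapper.java.initmemory=512
wrapper.java.maxmemory=512

# conf/neo4j.properties: page cache, log retention, object cache
dbms.pagecache.memory=200M
keep_logical_logs=false
cache_type=none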
http://console.neo4j.org runs Neo4j in memory, fitting up to 50 instances in a single gigabyte of memory, so it should be doable.
I've been tasked with analyzing some memory consumption issues with one of our web apps. I'd made myself passably familiar with tools like Mission Control and VisualVM and used them to resolve a number of leaks, but in doing so came across behavior I can't account for.
Setup
JBoss 7.1.1 AS
Java 1.7.0_67
Specifically, I've found that even when I run only JBoss 7 by itself (that is, I turn off the deployer and just let the server run), I can see regular allocations (followed by garbage collection) of about 1 MB every 3 seconds or so.
On a whim, I took heap dumps immediately after doing a GC and then again once the allocations had been going on for a while. It seems like the majority of the objects I'm seeing have to do with modules: either Xerces activity (reading the module XML, I guess?) or objects associated with ModuleLoader. The majority of the objects I see have 'References' that look something like this:
http://i.stack.imgur.com/LlUmv.png (sorry, I can't mark up images)
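As an aside, heap dumps like these can also be taken programmatically, which makes scripted before/after comparisons easier. A minimal sketch using the HotSpot diagnostic MBean (the output file name is a placeholder; this assumes a HotSpot JVM such as 1.7.0_67):

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDump {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean mxBean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // 'true' means dump only live objects, which forces a GC first,
        // matching the "dump immediately after a GC" workflow above
        mxBean.dumpHeap("after-gc.hprof", true);
    }
}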
My thinking (which may be entirely off base) was that JBoss scans for new modules to support hot deploys. The thing is, that use case isn't one I ever use: new deployments always involve shutting down the server, so dynamically scanning for modules is really unnecessary.
I guess my questions are:
Does my belief about module loading have any merit?
If so, is there any way to get JBoss to stop scanning?
If not, does anyone have any suggestions about what else I can investigate?
Thanks for reading!
In my Grails application I use the Spring Security Core plugin for authentication. I am facing a serious problem with it: my application took 21 seconds to start up on Tomcat before, and 43.2 seconds after installing the plugin.
So far so good, but then 'PermGen' out-of-memory errors began to occur on the Tomcat server. The PermGen size was 64 MB before; I have since raised it to 256 MB so that the error does not crash my app so often.
I wonder whether you know of some plugin configuration that reduces the incidence of this error, or some way to release this cache, because the number of users is increasing, and if I cannot solve it I will unfortunately have to drop the plugin, which otherwise seems to me an excellent choice for application security.
Could someone also tell me whether the number of plugins used in an application has an effect on this memory?
PermGen is the part of memory that stores the static components of your app, mostly classes. It is not affected by the number of users or the logs associated with user activity; those consume heap space instead.
To reduce PermGen usage, you have to check your code, redesign the parts that contain unnecessary or redundant classes and operations, and consolidate variables and functions where possible. Generally speaking, simplified code produces smaller compiled artifacts, and that is how you save PermGen space.
Some versions of Tomcat eat PermGen more than others. There was a minor version in the 6.x line that I could never get to stay running reliably. And even with the latest versions you still need to tweak your memory settings. I use the following, and it works best for me. I still get these errors now and again, especially if I'm doing a lot of runtime compiling. In production it is a non-issue, because all the development overhead of Grails isn't there.
-XX:MaxPermSize=512m -XX:PermSize=512m -Xms256m -Xmx1024m
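If you're running under standalone Tomcat, one conventional place to put those flags (the setenv file doesn't exist by default; you create it yourself) is:

# $CATALINA_HOME/bin/setenv.sh, sourced automatically by catalina.sh
export CATALINA_OPTS="-XX:MaxPermSize=512m -XX:PermSize=512m -Xms256m -Xmx1024m"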
We are using TestComplete from AutomatedQA to test the GUI of our client/server application at the client side. It is compiled with Delphi 2007. The client source is about 1.4 million lines. The hardware is an Intel dual-core 2.13 GHz with 2 GB RAM, running Windows XP Pro.
I compile the application with all debug options and also link in TCOpenApp, tcOpenAppClasses, tcPublicInfo, and tcDUnitSupport, as described in the documentation, to make it an Open Application. The resulting exe file is about 50 MB.
Now the test script runs and it works, but it runs very, very slowly. The CPU runs at 100%, and it is a bit frustrating to change the test script because of the slowness. I have turned off all desktop effects like rounded window corners, and I use no desktop background.
Has anyone else had the same experience, or even a solution?
Your problem probably lies in the fact that you compiled with debug info and are using the tcXXX units, resulting in an enormous number of objects being created.
A transcript from the AutomatedQA message boards:
Did you compile it in debug mode? We have an app that is slow when used with TC if compiled in debug mode. This is because of the enormous number of objects in it. If we compile without debug but with the TC enabler(s), everything is fine.
and this one might help too:
A couple of areas where you can increase speed.

If you are just using record and playback, then look into replacing the .Keys("xxx") calls with .wText = "xxx". The Keys function will use the ms delay between keystrokes, while wText just forces the text overwrite internally.

The second suggestion (which you have likely already looked at) is Tools -> Default Project Properties -> Project -> Playback, setting the delays to 100 ms, 5 ms, and 5 ms to keep the pauses to a minimum.

As for the object properties, yes, TC loads them all. You can force this with a process refresh on your application, so that the data is forced into being available without a load delay when called. This might help with reducing the appearance of delay.
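To make the first suggestion concrete, a JScript sketch of the substitution (the process and window names are placeholders for your own object hierarchy; whether wText is available depends on how TestComplete recognizes the control):

var edit = Sys.Process("MyClient").Window("TfrmMain", "*").Window("TEdit", "", 1);
// Simulates keystrokes, honouring the configured inter-key delay:
edit.Keys("1000");
// Writes the text directly, skipping the per-keystroke delays:
edit.wText = "1000";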
Edit:
We have also been evaluating TestComplete and encountered these performance problems. I would be very interested to know if and how you finally solved them.
That said, I think it is a product with great potential that can really help you organize all of your unit, integration, and GUI tests.
I recommend that you try changing the TCP ports that TestComplete uses for remote connections. You can change them in the Network Suite Options dialog; for example, you can set ports 6100-6102. Does this help? A similar issue was described in the "TC 9.20 consuming high 98% cpu" SmartBear forum thread.