I'm currently working on a fork of a VERY LARGE project with about 7-8 million LoC and 100,000+ classes. The problem is, of course, that the indexer, or CLion in general, runs out of memory or becomes very slow and unresponsive.
I have already seen the blog entry https://blog.jetbrains.com/idea/2006/04/configuring-intellij-idea-vm-options/ where you describe some memory settings, but they don't seem to fit my project setup.
My .vmoptions file looks like this:
-Xss20m
-Xms2560m
-Xmx20000m
-XX:NewSize=1280m
-XX:MaxNewSize=1280m
-XX:ReservedCodeCacheSize=2048m
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=500
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-Dawt.useSystemAAFontSettings=lcd
-Dsun.java2d.renderer=sun.java2d.marlin.MarlinRenderingEngine
I'm working on a machine with 128 GB of main memory and a 28-core Intel Xeon CPU, so resources should not be the problem.
Do you have any recommendations for the optimal memory settings?
I wrote an email to JetBrains support and this was the answer:
The possibility to change how many cores should be used in CLion hasn't been implemented yet, we have a related feature request: https://youtrack.jetbrains.com/issue/CPP-3370. Please comment or upvote. Could you please capture a CPU snapshot so we can take a look at what is going on?
So it would be great if anybody who wants this feature +1's it on JetBrains YouTrack.
Related
I use IntelliJ products across multiple technologies, and I have noticed that autocompletion in AppCode is much slower than in the other IDEs.
What I already did
I've changed the default VM options, and they look like this now:
-Xss2m
-Xms256m
-Xmx4096m
-XX:NewSize=128m
-XX:MaxNewSize=128m
-XX:ReservedCodeCacheSize=240m
-XX:+UseCompressedOops
-Dfile.encoding=UTF-8
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=50
-ea
-XX:CICompilerCount=2
-Dsun.io.useCanonPrefixCache=false
-Djava.net.preferIPv4Stack=true
-Djdk.http.auth.tunneling.disabledSchemes=""
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-Djdk.attach.allowAttachSelf
-Dkotlinx.coroutines.debug=off
-Djdk.module.illegalAccess.silent=true
-Xverify:none
-XX:ErrorFile=$USER_HOME/java_error_in_appcode_%p.log
-XX:HeapDumpPath=$USER_HOME/java_error_in_appcode.hprof
(Note the increased -Xmx.)
I've also enabled the memory indicator, but it usually shows less than 1 GB of RAM in use.
Solution
AppCode 2019.3 (currently in EAP) has improved performance a lot.
You can also decrease the tooltip initial delay, which helps a little (Preferences > Appearance & Behavior > Appearance > UI Options).
It's hard to offer a correct solution for such an issue without going deep into the details. Projects have different structures, the number of libraries varies, the project layout is unpredictable, Swift/Xcode versions can affect performance because of changes in system frameworks, and much more. The only way to solve the performance issue is the following:
If possible, share your project in a separate ticket in our tracker.
If you cannot share your project, capture the CPU snapshot as described here. Usually it provides enough information to figure out where the problem is.
Is there a place (website) where I can find information on which VM is needed (minimum/maximum) for a specific Pharo or Squeak release on a specific OS?
I don't know if that exact information is documented, but I can try to give you a brief explanation... though the Pharo and Squeak paths have diverged a lot lately.
Pharo's official VM is the CogVM, which is a StackVM with a JIT. There are also plain StackVMs for platforms where code generation is not allowed.
The official virtual machines for Pharo are listed at http://www.pharo-project.org/pharo-download, and they work for sure from Pharo 1.2 up to Pharo 2.0. You can also have a look at the complete set of built VMs on the CI server: https://ci.lille.inria.fr/pharo/view/Cog/.
For older releases (Pharo 1.0 and 1.1), there is a history of one-click distributions where the VM is frozen along with the image. You can find them here: https://gforge.inria.fr/frs/?group_id=1299
On the other hand, for Squeak, the same CogVMs should work with the latest versions; otherwise, you can get an interpreter VM from http://squeakvm.org/index.html.
Hope it helps a bit
As @guillepolito says, the best thing today is to take the ones from the Pharo continuous integration Jenkins server (or pick a one-click).
Squeak VMs have been fading out of my practice. I keep a number of them around, but since I use Pharo, I try to build my own version from the Jenkins source, as there is a lot to be learned from doing that.
It is not difficult to get them built on the main platforms, and at least you know what's underneath.
The main problem is that Eliot Miranda keeps doing his own thing in his corner instead of working on a shared source tree. That's the problem of having a low truck number on that project.
I'm using DBpedia in my project, and I wanted to create a local SPARQL endpoint because the online one is not reliable. I downloaded the data dumps (large NT files) and decided to use Jena TDB. In the NetBeans IDE, I use an input stream to read the source NT file and then the following line of code to load it into a DatasetGraph:
TDBLoader.load(indexingDataset, inputs, true);
I have let it run for about 5 hours now and it still isn't done. While it runs, everything on my laptop slows down, probably because it is using all my physical memory. Is there a faster way to do this?
The documentation says to use tdbloader2, but it's only available for Linux, while I'm using Windows. It would be really helpful if anyone could tell me how to use this tool on Windows using Cygwin. Please take into consideration that I have never really used Cygwin on Windows.
The latest release of TDB has two command-line utilities for bulk loading: tdbloader and tdbloader2. The first is pure Java and runs on Windows as well as on any machine with a JVM. The second is a mix of Java and UNIX shell script (in particular, it uses UNIX sort). It runs on Linux; I am not sure whether it runs on Cygwin. I suggest you use tdbloader on a 64-bit machine with as much RAM as you can find. :-)
The latest release of TDB is available here:
http://www.apache.org/dist/incubator/jena/jena-tdb-0.9.0-incubating/jena-tdb-0.9.0-incubating-distribution.zip
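For example, once the distribution is unpacked, a bulk load from the command line looks roughly like this (the database directory, dump file name, and heap size are placeholders; JVM_ARGS is how the Unix launch scripts pick up JVM settings, so treat that part as an assumption and check the Windows .bat script for its equivalent):
export JVM_ARGS="-Xmx8g"                  # give the loader a large heap, sized to your machine
bin/tdbloader --loc=DB dbpedia-dump.nt    # builds the TDB indexes in the ./DB directory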
The development version of TDB has an additional bulk loader command: tdbloader3. This is a pure Java version of tdbloader2. Instead of using UNIX sort (which works only on text files), it uses a pure Java external sort with binary files. For more details on tdbloader3, search for the JENA-117 issue.
You can find a SNAPSHOT of TDB in the Apache snapshots repository; be warned, it has not been released yet.
For the more adventurous, there is also tdbloader4, which is not included in Apache Jena and should be considered an experimental prototype.
tdbloader4 builds TDB indexes (i.e. B+Tree indexes) using MapReduce (this stretches the MapReduce model a little, but it works).
You can find tdbloader4 here: https://github.com/castagna/tdbloader4
To conclude, my advice for Windows is: download the latest official release of TDB and use tdbloader on a 64-bit machine with a lot of RAM. If you do not have one, use an m1.xlarge EC2 instance (i.e. 15 GB of RAM) or equivalent.
For more help, I invite you to join the official jena-users@incubator.apache.org mailing list where, I am sure, you'll get better and faster support.
I've done some searching but couldn't find much really helpful info, so could someone explain the basics of Java memory maps? For example, where and how to use the tool, its purpose, and maybe some syntax examples (input/output types)? I'm taking a Java test soon and this could be one of the topics, but jmap has not come up in any of my tutorials. Thanks in advance.
Edit: I'm referring to the tool: jmap
I would read the man page you have referenced.
jmap prints shared object memory maps or heap memory details of a given process or core file or a remote debug server.
NOTE: This utility is unsupported and may or may not be available in future versions of the JDK. In Windows Systems where dbgeng.dll is not present, 'Debugging Tools For Windows' needs to be installed to have these tools working. Also, PATH environment variable should contain the location of jvm.dll used by the target process or the location from which the Crash Dump file was produced.
http://docs.oracle.com/javase/7/docs/technotes/tools/share/jmap.html
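For example, against a running JVM with process id <pid>, the typical JDK 7 HotSpot invocations are (exact options vary between JDK versions and vendors, so check jmap -help on your installation):
jmap <pid>              prints the shared object memory map of the process
jmap -heap <pid>        prints the heap configuration and a usage summary
jmap -histo <pid>       prints a histogram of heap objects, grouped by class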
It's not a tool to be played with lightly. You need a good profiler that can read its output, as jhat is only useful for trivial programs. (YourKit works just fine for 1+ GB heaps.)
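That said, for a small test program the basic dump-and-inspect workflow looks like this (the dump file name and <pid> are placeholders):
jmap -dump:format=b,file=heap.hprof <pid>
jhat heap.hprof         then browse the report at http://localhost:7000
For anything bigger, load the .hprof file into a real profiler such as YourKit instead.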
I have a program that runs fine on MacOS and Linux and cross-compiles to Windows with mingw. Recently I made the program multi-threaded.
The current design of the program has memory allocated in the main thread and freed in the slave "worker" threads. That's not a problem on MacOS and Linux because the malloc/free system is multi-threaded.
I'm concerned about the cross-compiling, however. The version of mingw that I'm using is built from MacPorts. It's a pretty ancient version of G++ (version 3.4.5) from 2004. I've been unsuccessful in my attempts to build a more recent version (I'd like to build a 64-bit version, but gave up). I'm getting pthreads from http://sourceware.org/pthreads-win32.
My concern is that the malloc & free system in 3.4.5 is not multi-threaded.
Questions:
1. Should I rewrite my program so that the blocks of memory to be freed are passed back to the main thread and freed there?
2. Should I try to upgrade to a more recent mingw?
3. Is there any way to find these concurrency problems other than massive amounts of testing? That just doesn't feel good to me.
Thanks!
Why do you say malloc & free are not multithreaded?
mingw32 by default links with msvcrt.dll, which is a multithreaded DLL. See [1]. There was [2] a single-threaded library provided by Microsoft, but it was only available for static linking.
PS: You mention that you are cross-compiling, but you seem instead to be compiling the Windows program on Windows. In that case, why don't you download the binaries from www.mingw.org? (It's a pain to figure out which files you need from their downloads, though.)
1- http://msdn.microsoft.com/en-us/library/abx4dbyh%28v=VS.71%29.aspx
2- See [1]. Removed in Visual Studio 2005: http://msdn.microsoft.com/en-us/library/abx4dbyh%28v=VS.80%29.aspx
1. I would avoid this. It sounds like you're trying to dodge the main issue.
2. Yes, that would be a good idea in any case...
3. One way to detect concurrency problems related to memory allocation/deallocation is a memory leak detector. I'm not sure if valgrind works on Cygwin.