How to speed up AppCode autocompletion? - ios

I use IntelliJ products across multiple technologies, and I have noticed that autocompletion in AppCode is much slower than in the other IDEs.
What I already did
I've changed the default VM options; the file now looks like this:
-Xss2m
-Xms256m
-Xmx4096m
-XX:NewSize=128m
-XX:MaxNewSize=128m
-XX:ReservedCodeCacheSize=240m
-XX:+UseCompressedOops
-Dfile.encoding=UTF-8
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=50
-ea
-XX:CICompilerCount=2
-Dsun.io.useCanonPrefixCache=false
-Djava.net.preferIPv4Stack=true
-Djdk.http.auth.tunneling.disabledSchemes=""
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-Djdk.attach.allowAttachSelf
-Dkotlinx.coroutines.debug=off
-Djdk.module.illegalAccess.silent=true
-Xverify:none
-XX:ErrorFile=$USER_HOME/java_error_in_appcode_%p.log
-XX:HeapDumpPath=$USER_HOME/java_error_in_appcode.hprof
(Note the increased -Xmx.)
I've also enabled the memory indicator, but it usually shows less than 1GB of RAM used.

Solution
AppCode 2019.3 (currently in EAP) has improved performance a lot (the original answer includes a comparison video).
You can also decrease the tooltip initial delay, which helps a little (Preferences > Appearance & Behavior > Appearance > UI Options).

It's hard to share any correct solution for such an issue without going deep into details. Projects have different structures, the number of libraries differs, the project layout is unpredictable, Swift/Xcode versions can affect performance because of changes in the system frameworks, and much more. The only way to solve the performance issue is the following:
If possible, share your project in a separate ticket in our tracker.
If you cannot share your project, capture a CPU snapshot as described here. It usually provides enough information to figure out where the problem is.

Related

Optimal CLion VM memory settings for very large projects

I'm currently working on a fork of a VERY LARGE project with about 7-8 * 10^6 LoC and 100,000+ classes. The problem is, of course, that the indexer, or CLion in general, runs out of memory or is very slow and unresponsive.
I have already seen the blog entry https://blog.jetbrains.com/idea/2006/04/configuring-intellij-idea-vm-options/ where some memory settings are described, but it doesn't seem to fit my project setup.
My .vmoptions file looks like this:
-Xss20m
-Xms2560m
-Xmx20000m
-XX:NewSize=1280m
-XX:MaxNewSize=1280m
-XX:ReservedCodeCacheSize=2048m
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=500
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-Dawt.useSystemAAFontSettings=lcd
-Dsun.java2d.renderer=sun.java2d.marlin.MarlinRenderingEngine
I'm working on a machine with 128GB of main memory and a 28-core Intel Xeon CPU, so resources should not be the problem.
Do you have any recommendations for the optimal memory settings?
I wrote to JetBrains support, and this was the answer:
The possibility to change how many cores should be used in CLion
hasn't been implemented yet, we have a related feature request:
https://youtrack.jetbrains.com/issue/CPP-3370. Please comment or
upvote. Could you please capture a CPU snapshot so we can take a look
at what is going on?
So it would be great if anybody who wants this feature +1's it on JetBrains YouTrack.

Where is guiclient.conf located?

The latest release notes state:
PlasticDrive can also be launched from the changesets menu of the
Windows GUI. In order to enable it, you have to edit your guiclient.conf
file and add the following line: <ShowMountPlasticDrive>true</ShowMountPlasticDrive>
But I don't have a guiclient.conf file.
Yes, I bet you have one :-) Just look here:
c:\users\<your-name>\AppData\Local\plastic4\guiclient.conf
Here's mine:
C:\Users\pablo\AppData\Local\plastic4>type guiclient.conf | grep Drive
<ShowMountPlasticDrive>true</ShowMountPlasticDrive>
The feature is still pretty experimental, but it should be fully usable. It is an improved "glassfs", which has been around for quite a long time, but with some tweaks to make it more usable.
It is very useful for taking a look at the code in your Visual Studio without switching branches (although the first time you launch it, it will be slow, since VS reads all the files in the solution, but then they're cached and the next run will fly!).

Optimizing command line GIMP

I am running a Script-Fu macro using GIMP from the command line. However, it is quite slow to start up and run - about 20-25 seconds. I think a lot of this time is spent on startup - loading all the plugins and such. What are some ways to optimize GIMP on the command line? Is there any way to keep it always running?
Some promising options from the GIMP docs (some of which you may already be using), with a combined invocation sketched after the list:
--no-interface: Run without a user interface.
--no-data: Do not load patterns, gradients, palettes, or brushes. Often useful in non-interactive situations where start-up time is to be minimized.
--no-fonts: Do not load any fonts. This is useful to load GIMP faster for scripts that do not use fonts, or to find problems related to malformed fonts that hang GIMP.
--no-splash: Do not show the splash screen while starting.
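For example, these flags can be combined with batch mode into a single invocation. A sketch, where the Script-Fu call is a placeholder for whatever your macro actually does:
gimp --no-interface --no-data --no-fonts --no-splash \
  -b '(your-script-fu-call "input.png" "output.png")' \
  -b '(gimp-quit 0)'
The final (gimp-quit 0) makes GIMP exit when the batch script finishes instead of waiting for further input.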
The GIMP FAQ:
The GIMP takes too long to load - how can I speed it up?
The main things are to make sure you are running at least version 1.0, and make sure you compiled with optimization on, debugging turned off, and the shared memory and X shared memory options turned on.
Or, buy a faster system with more memory. 8^)
This question on SuperUser addresses slow GIMP startup time in general and recommends:
Rebuild the font cache file by deleting C:\Documents and Settings\<username>\.fonts-cache1 and then opening GIMP.
Check for slow-loading plugins by starting up with --verbose and seeing where it hangs. Then remove problematic plugins by renaming them in C:\Program Files\GIMP-2.0\lib\gimp\<version>\plug-ins. Alternatively, remove all plugins by renaming the whole plugins folder.
Not so much a solution as a different possibility for the future, but have you considered not using GIMP?
GIMP is first and foremost a GUI-based app. If you're doing a lot of repetitive image manipulation from the command line, you might be better off with a tool like ImageMagick that's designed expressly for such use. I don't know how complex your script-fu scripts are, or how easily they could be translated to ImageMagick's (admittedly complex) syntax, but you definitely wouldn't have problems with long startup time.
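As a rough illustration, a typical ImageMagick one-liner looks like this (a sketch only; the actual operations and file names would depend on what your Script-Fu macro does):
convert input.png -resize 50% -unsharp 0x1 -quality 85 output.jpg
Because there is no plugin or font loading at startup, runs like this usually take a small fraction of the time a full GIMP launch needs.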
You could use the "Script-Fu Server".
Image window > main menu > Filters > Script-Fu > Start Server.
You will get a popup asking for the port to run it on. There is also help on the same popup, which describes the protocol used by the server.
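If you'd rather start the server without the GUI, something along these lines should work (a sketch; the exact argument list of plug-in-script-fu-server differs between GIMP versions, and older releases take no IP argument, so check the Procedure Browser for your version):
gimp -i -b '(plug-in-script-fu-server RUN-NONINTERACTIVE "127.0.0.1" 10008 "/tmp/script-fu-server.log")' &
Once it is listening, clients can keep sending scripts to that port without paying the startup cost again.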

Basics of Jmapping?

I've done some searching out there but couldn't find too much really helpful info on it, but could someone try to explain the basics of Java memory maps? Like where/how to use it, its purpose, and maybe some syntax examples (input/output types)? I'm taking a Java test soon and this could be one of the topics, but jmap has not come up in any of my tutorials. Thanks in advance.
Edit: I'm referring to the tool: jmap
I would read the man page you have referenced.
jmap prints shared object memory maps or heap memory details of a given process or core file or a remote debug server.
NOTE: This utility is unsupported and may or may not be available in future versions of the JDK. In Windows Systems where dbgeng.dll is not present, 'Debugging Tools For Windows' needs to be installed to have these tools working. Also, PATH environment variable should contain the location of jvm.dll used by the target process or the location from which the Crash Dump file was produced.
http://docs.oracle.com/javase/7/docs/technotes/tools/share/jmap.html
It's not a tool to be played with lightly. You need a good profiler that can read its output, as jhat is only useful for trivial programs. (YourKit works just fine for 1+ GB heaps.)
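For concrete syntax, a few typical invocations against a running JVM look like this (a sketch based on the JDK 7 version of the tool; the flags differ in newer JDKs, and <pid> comes from jps -l):
jmap -heap <pid>                           # heap configuration and usage summary
jmap -histo:live <pid>                     # per-class instance counts and sizes (live objects only)
jmap -dump:format=b,file=heap.hprof <pid>  # binary heap dump for jhat/VisualVM/YourKit
Note that -histo:live and -dump force a full GC on the target process, so avoid running them casually against production JVMs.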

PermGen problems with Lift and Jetty

I'm developing on the standard Lift platform (maven and jetty). I'm repeatedly (once every couple of days) getting this:
Exception in thread "7048009#qtp-3179125-12" java.lang.OutOfMemoryError: PermGen space
2009-09-15 19:41:38.629::WARN: handle failed
java.lang.OutOfMemoryError: PermGen space
This is in my dev environment. It's not a problem because I can keep restarting the server. In deployment I'm not having these problems so it's not a real issue. I'm just curious.
I don't know too much about the JVM. I think I'm correct in thinking that permanent generation memory is for things like classes and interned strings? What I remember is a bit mixed up with the .NET memory model...
Any reason why this is happening? Are the defaults just crazily low? Is it to do with all the auxiliary objects that Scala has to create for Function objects and similar FP things? Every time I restart Jetty with newly written code (every few minutes) I imagine it reloads classes etc. But even so, it can't be that many, can it? And shouldn't the JVM be able to deal with a large number of classes?
Cheers
Joe
From this post:
This exception occurs for one simple reason:
the PermGen space is where class properties, such as methods, fields, annotations, and also static variables, etc. are stored in the Java VM, but this space has the particularity of not being cleaned by the garbage collector.
So if your webapp uses or creates a lot of classes (I'm thinking of dynamic generation of classes), chances are you have met this problem.
Here are some solutions that helped me get rid of this exception:
-XX:+CMSClassUnloadingEnabled : this setting enables garbage collection in the permgenspace
-XX:+CMSPermGenSweepingEnabled : allows the garbage collector to remove even classes from the memory
-XX:PermSize=64M -XX:MaxPermSize=128M : raises the amount of memory allocated to the permgenspace
May be this could help.
Edit July 2012 (almost 3 years later):
Ondra Žižka comments (and I have updated the answer above):
JVM 1.6.0_27 says: Please use:
CMSClassUnloadingEnabled (Whether class unloading enabled when using CMS GC)
in place of CMSPermGenSweepingEnabled in the future
See the full Hotspot JVM Options - The complete reference for more.
If you see this when running mvn jetty:run,
set the MAVEN_OPTS.
For Linux:
export MAVEN_OPTS="-XX:+CMSClassUnloadingEnabled -XX:PermSize=256M -XX:MaxPermSize=512M"
mvn jetty:run
For Windows:
set "MAVEN_OPTS=-XX:+CMSClassUnloadingEnabled -XX:PermSize=256M -XX:MaxPermSize=512M"
mvn jetty:run
Should be fine now. If not, increase -XX:MaxPermSize.
You can also set these permanently in your environment.
For Linux, append the export line to ~/.bashrc
For Windows, press Win + Pause, and go to Advanced > Environment Variables.
See also http://support.microsoft.com/kb/310519.
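On Windows you can also persist it from a console instead of the dialog (a sketch; setx writes to the user environment, and only newly started consoles and IDEs will pick it up):
setx MAVEN_OPTS "-XX:+CMSClassUnloadingEnabled -XX:PermSize=256M -XX:MaxPermSize=512M"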
This is because of the reloading of classes, as you suggested. If you are using lots of libraries etc., the number of loaded classes will grow rapidly with each restart. Try monitoring your Jetty instance with VisualVM to get an overview of memory consumption when reloading.
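If the VisualVM bundled with the JDK is on your PATH, attaching to the running Jetty is a one-liner (a sketch; the PID again comes from jps):
jvisualvm --openpid <jetty-pid>
Watching the loaded-classes and PermGen graphs across a few reloads makes the growth pattern easy to see.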
The mailing list (http://groups.google.com/group/liftweb/) is the official support forum for Lift, and where you'll be able to get a better answer. I don't know the particulars of your dev setup (you don't go into much detail), but I assume you're reloading your war in Jetty without actually restarting it. Lift doesn't perform dynamic class generation (as suggested by VonC above), but Scala compiles each closure as a separate class. If you're adding and removing closures to your code over the course of several days, it's possible that too many classes are being loaded and never unloaded, taking up perm space. I'd suggest you enable the JVM options mentioned by VonC above and see if they help.
The permanent generation is where the JVM puts stuff that will probably not be (garbage) collected, like custom classloaders.
Depending on what you are deploying, the perm gen setting can be low. Some application and/or container combinations contain memory leaks, so when an app gets undeployed, things like class loaders are sometimes not collected, filling up the perm space and generating the error you are seeing.
Unfortunately, currently the best option in this case is to max out the perm space with the following JVM flag (example for a 192m perm size):
-XX:MaxPermSize=192M (or 256M)
The other option is to make sure that neither the container nor the framework leaks memory.
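To check whether that is what is happening, classloader statistics are a good first indicator; on a JDK 7-era JVM something like this shows how many loaders are alive versus dead (a sketch, reusing the jmap tool discussed above):
jmap -permstat <jetty-pid>
A steadily growing number of dead class loaders across repeated redeploys points at exactly the kind of leak described here.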

Resources