How to fix "Insufficient memory to execute this command" issue in SPSS?

This occurs only when I'm trying to carry out an ANCOVA. Please help me! Thanks in advance.

Try increasing the workspace memory. Do this only as needed, increasing it gradually until your task is able to run. The syntax below doubles the default value of 6,148 KB:
SET WORKSPACE=12296.
And you can always check your workspace allocation like this:
SHOW WORKSPACE.
More detail can be found on the IBM support pages: IBM support: SPSS memory

Related

Atom is running slow when editing a file that is over 500 lines

When I use Atom to edit JavaScript files there are some performance issues: if the file is longer than a certain number of lines, e.g. 500, scrolling the file or moving the cursor gets stuck. It shouldn't be a hardware problem, and 500 lines is not a large amount. Is there something I can do to make Atom run smoothly when I edit a larger file? Thanks.
As you can read in this article, this is an ongoing issue with Atom and is currently being dealt with by the team. I don't believe it has anything to do with computer performance.
I currently run an i7 machine and, when opening large (typically minified) files, the editor will run extremely slow and, in some instances, crash completely.
Hopefully we can see a resolution soon.
Finally I found that the problem is caused by a plugin, linter-jscs. 500 lines is not a large amount; after disabling this plugin, editing works properly again.
Have you considered the possibility that your machine may just be slow?
I understand this doesn't directly address your question, but if you're not bound to Atom you could experiment with other text editors. I personally recommend Visual Studio Code. Have a look:
https://code.visualstudio.com/download
Although you've posted a solution, it may be worth considering a package such as Timecop, which displays information about where time is spent while Atom loads. You can also check similar information in the Settings > Packages view, which lists how much time each installed package adds to the startup time (see the Flight Manual section on packages).

STM32F4DIS-BB + RTOS httpserver_socket example

I am trying the example from ST for the STM32F4DIS-BB baseboard + STM32F4-Discovery.
I want to use the RTOS example httpserver_socket. Previously I used the standalone (no RTOS) httpserver example without problems. But in the RTOS example the server freezes on every web connection. The tasks themselves are OK: LED toggling works, DHCP is OK, and ping from the command line is OK. But when I make a connection from a web client the board freezes and ends up in HardFault_Handler().
Any ideas?
Have you tried debugging the hard fault to see which instruction caused it? Once you know that, you will be able to place a breakpoint in the code to see how you got there.
As an aside, we have FreeRTOS+TCP running on an STM32F4 now, but on the larger eval board, rather than the Discovery board.
Thanks to Richard. I found through debugging and the CFSR register that the problem is in memory management:
"the processor attempted an instruction fetch from a location that does
not permit execution. This fault occurs on any access to an XN region,
even when the MPU is disabled or not present"
But I don't understand how ST can publish an example for the same hardware I have with this error. Now I am trying to find the problem.
I found that the problem is in the FPU option in the project settings. The original project is set to use the FPU. When I set it to "not used", the project works. So the problem is in the use of the FPU together with FreeRTOS.

Weka GUI - Not enough memory, won't load?

This same installation of Weka has loaded for me in the past. I am simply trying to load the Weka GUI (double click on the icon) and I get the following error. How can I fix it?
OutOfMemory
Not enough memory. Please load a smaller dataset or use a larger heap size.
- initial JVM size: 122.4 MB
- total memory used: 165.3 MB
- max. memory avail.: 227.6 MB
Note:
The Java heap size can be specified with the -Xmx option.
etc..
I am not loading Weka from the command line, so how can I stop this from occurring?
Just writing an answer here for Ubuntu users.
If you apt-get install weka, you will have a script installed at /usr/bin/weka
The first few lines look like this:
#!/bin/bash
. /usr/lib/java-wrappers/java-wrappers.sh
# default options
CLASS="weka.gui.GUIChooser"
MEMORY="256m"
GUI=""
Just modify the line that starts with MEMORY so that you have a larger upper bound.
MEMORY="2048m"
I'm not sure why you were able to use it before but not now. However, you can specify a larger heap size by changing the RunWeka.ini configuration file. On a Windows machine it should be in the Weka folder of your Program Files directory. You could try a line specifying, for example,
maxheap=200m
There might already be such an option in that file that you can simply change to a larger number.
Here is how to do it on Mac:
right-click on the main Weka file (the one that opens the GUI) and select "Show Package Contents";
open Info.plist file with any text editor;
change the -Xmx option.
voilà
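Whichever of these you change (the Linux launcher script, RunWeka.ini, or the Mac Info.plist), it's worth confirming that the new limit actually reaches the JVM. A minimal standalone check, assuming you have a JDK on your path (the class name HeapCheck is just an example), is to ask the running JVM for its maximum heap:
// HeapCheck.java - prints the maximum heap the JVM will use.
// Compile with: javac HeapCheck.java
// Run with:     java -Xmx2048m HeapCheck   (use the same -Xmx value you configured for Weka)
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
If the number printed matches what you set, the same value should be reaching Weka, provided the launcher you edited is the one you actually use to start it.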

Delphi 7 - How can I find where my project is hanging the compiler?

I have a project in Delphi 7. It is rather large, consisting of 40-odd forms and frames.
Recently, the compiler only allows me to compile the project once so I can run it; then on every re-compile the IDE hangs and I have to end the Delphi process. Before this occurs, my CPU goes to 50% (on a dual-core machine), so my deduction is that the compilation process has gone into an infinite loop. The executable it produces is not runnable and is usually a fixed size after the hang.
I was wondering how I can go about finding where this inconsistency in my project is. Other projects do not suffer from the same issue.
You can use Process Explorer to discover what the compiler is doing (reading a file, or ...).
Check the QC 3807 issue.
Check the system resources - free disk space, memory. Clean the temp folder. Check the disk for errors. Do you have an antivirus running? If so, try turning it off.
Use "process of elimination", to see if it's something in your code.
First, make a backup of where you are, or save to your CVS (you ARE using version control, right? RIGHT? good.) Revert your branch to an earlier version where it worked. See if that works. If so, merge half of the changes from the present-day version. If that works, try the other half. Keep cutting things in half, and you'll find the code that causes the problem, by process of elimination.
Or, it may turn out to be something in the configuration. Carbonite may be your friend here.
You can either:
Enable "Compilation progress display" in the "Environment Options" window, in the "Preferences" tab.
Use the command-line compiler dcc32.exe to get detailed console output.
Both will let you know which file is hanging the compiler.
Take a look at the great Delphi Speed Up tool, which allows you, for example, to abort CodeCompletion and HelpInsight with Esc or a mouse move.

PermGen problems with Lift and Jetty

I'm developing on the standard Lift platform (maven and jetty). I'm repeatedly (once every couple of days) getting this:
Exception in thread "7048009#qtp-3179125-12" java.lang.OutOfMemoryError: PermGen space
2009-09-15 19:41:38.629::WARN: handle failed
java.lang.OutOfMemoryError: PermGen space
This is in my dev environment. It's not a problem because I can keep restarting the server. In deployment I'm not having these problems so it's not a real issue. I'm just curious.
I don't know too much about the JVM. I think I'm correct in thinking that permanent generation memory is for things like classes and interned strings? What I remember is a bit mixed up with the .NET memory model...
Any reason why this is happening? Are the defaults just crazily low? Is it to do with all the auxiliary objects that Scala has to create for Function objects and similar FP things? Every time I restart Jetty with newly written code (every few minutes) I imagine it re-loads classes etc. But even so, it can't be that many, can it? And shouldn't the JVM be able to deal with a large number of classes?
Cheers
Joe
From this post:
This exception occurs for one simple reason:
the PermGen space is where class properties, such as methods, fields, annotations, and also static variables, etc., are stored in the Java VM, but this space has the particularity of not being cleaned by the garbage collector.
So if your webapp uses or creates a lot of classes (I'm thinking of dynamic generation of classes), chances are you will meet this problem.
Here are some solutions that helped me get rid of this exception:
-XX:+CMSClassUnloadingEnabled: this setting enables garbage collection in the PermGen space
-XX:+CMSPermGenSweepingEnabled: allows the garbage collector to remove even classes from memory
-XX:PermSize=64M -XX:MaxPermSize=128M: raises the amount of memory allocated to the PermGen space
Maybe this could help.
Edit July 2012 (almost 3 years later):
Ondra Žižka comments (and I have updated the answer above):
JVM 1.6.0_27 says: Please use:
CMSClassUnloadingEnabled (Whether class unloading enabled when using CMS GC)
in place of CMSPermGenSweepingEnabled in the future
See the full Hotspot JVM Options - The complete reference for more.
If you see this when running mvn jetty:run,
set the MAVEN_OPTS.
For Linux:
export MAVEN_OPTS="-XX:+CMSClassUnloadingEnabled -XX:PermSize=256M -XX:MaxPermSize=512M"
mvn jetty:run
For Windows:
set "MAVEN_OPTS=-XX:+CMSClassUnloadingEnabled -XX:PermSize=256M -XX:MaxPermSize=512M"
mvn jetty:run
Should be fine now. If not, increase -XX:MaxPermSize.
You can also put these permanently to your environment.
For Linux, append the export line to ~/.bashrc
For Windows, press the Win key + Pause/Break, and go to Advanced > Environment Variables.
See also http://support.microsoft.com/kb/310519.
This is because of the reloading of classes, as you suggested. If you are using lots of libraries, etc., the number of classes will grow rapidly with each restart. Try monitoring your Jetty instance with VisualVM to get an overview of memory consumption when reloading.
The mailing list (http://groups.google.com/group/liftweb/) is the official support forum for Lift, and where you'll be able to get a better answer. I don't know the particulars of your dev setup (you don't go into much detail), but I assume you're reloading your war in Jetty without actually restarting it. Lift doesn't perform dynamic class generation (as suggested by VonC above), but Scala compiles each closure as a separate class. If you're adding and removing closures to your code over the course of several days, it's possible that too many classes are being loaded and never unloaded and are taking up perm space. I'd suggest you enable the JVM options mentioned by VonC above and see if they help.
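If the "one class per closure" point sounds surprising, the effect is the same one you get with anonymous inner classes in plain Java: each one is compiled to its own .class file, and each of those classes occupies PermGen once loaded. A rough illustration (the class name ClosureCount is just an example):
// ClosureCount.java - compiling this produces ClosureCount.class plus
// ClosureCount$1.class and ClosureCount$2.class, one per anonymous class,
// much as each Scala closure becomes a separate class the JVM must load.
public class ClosureCount {
    public static void main(String[] args) {
        Runnable first = new Runnable() {
            public void run() { System.out.println("first"); }
        };
        Runnable second = new Runnable() {
            public void run() { System.out.println("second"); }
        };
        first.run();
        second.run();
    }
}
A Lift application written in an idiomatic functional style contains a great many closures, so the class count (and therefore the PermGen footprint) grows much faster than the same amount of plain Java would suggest.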
The permanent generation is where the JVM puts stuff that will probably not be (garbage) collected like custom classloaders.
Depending on what you are deploying, the perm gen setting can be low. Some application/container combinations do contain memory leaks, so when an app gets undeployed, sometimes things like classloaders are not collected, which fills up the perm space and generates the error you are having.
Unfortunately, currently the best option in this case is to max out the perm space with the following JVM flag (example for a 192m perm size):
-XX:MaxPermSize=192M (or 256M)
The other option is to make sure that neither the container nor the framework leaks memory.
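To see why a leaked classloader translates directly into PermGen exhaustion, here is a rough standalone sketch (the file name PermGenFiller.java, the jar path, and the class name com.example.SomeClass are all made up; point them at any class you have in a jar). Every iteration defines the same class again through a fresh loader, and because the loaded classes are kept reachable, none of that class metadata can ever be collected:
// PermGenFiller.java - run on a pre-Java-8 JVM with a small PermGen,
// e.g. java -XX:MaxPermSize=32m PermGenFiller, to hit the error quickly.
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

public class PermGenFiller {
    public static void main(String[] args) throws Exception {
        List<Class<?>> pinned = new ArrayList<Class<?>>();            // keeps every loaded class reachable
        URL[] jar = { new File("some-classes.jar").toURI().toURL() }; // hypothetical jar on disk
        for (int i = 0; ; i++) {
            // A fresh loader with no parent delegation defines its own copy of the class;
            // each copy consumes PermGen that the GC cannot reclaim while it stays pinned.
            URLClassLoader loader = new URLClassLoader(jar, null);
            pinned.add(loader.loadClass("com.example.SomeClass"));    // hypothetical class name
            if (i % 100 == 0) {
                System.out.println(i + " copies of the class loaded so far");
            }
        }
    }
}
A container that holds on to an undeployed webapp's classloader (through a stray static reference, a lingering thread, and so on) is doing essentially the same thing across redeploys, which is why hot redeployment is the classic way to hit this error.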
