I have an OpenWhisk deployment and I am using Docker. I want to increase the maximum memory limit, and I think it can be done with environment variables. Which environment variable can I use to fix my problem?
I get the following error:
Request larger than allowed: 3600245 > 1048576 bytes. (code 9RKHKJtJVqWK7jz3o608HcQagzLHFfvt)
I am running PROC MIXED code for which the default 2GB of memory is insufficient.
I changed MEMSIZE in the config file to 4G, and it did show 4GB when I checked with proc options; run;. However, that is still not enough for PROC MIXED to execute.
When I changed it to 8G and ran proc options; run; to check MEMSIZE, it was unfortunately still stuck at 4GB.
My computer has 16GB of RAM, so I thought I would not run into such an issue. Is there a workaround?
Are you running 64-bit SAS? The only reasons I can think of for MEMSIZE being capped are that you're using a 32-bit version of SAS or that you changed the wrong config file. The standard config file location should be:
C:\Program Files\SASHome\SASFoundation\9.4\nls\en\sasv9.cfg
If you're running UTF-8, it is located in:
C:\Program Files\SASHome\SASFoundation\9.4\nls\u8\sasv9.cfg
Changing -MEMSIZE to 8G should change it.
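In the config file itself that is just a single option line like the following (a sketch; leave the rest of sasv9.cfg untouched):
-MEMSIZE 8G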
One test would be to invoke SAS with this option set explicitly:
sas.exe -MEMSIZE 8G
proc options group=memory;
run;
That should show:
MEMSIZE=8589934592
If it does not show 8GB, you must be running a 32-bit version of SAS, which automatically caps the maximum memory at 4GB even if the config file specifies a higher value.
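A quick way to confirm the bitness from an open session is the SYSADDRBITS automatic macro variable (a small sketch):
%put Running &sysaddrbits.-bit SAS;  /* writes 32 or 64 to the log */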
I want to configure the heap size, but it is not clear to me from the documentation.
It says:
You can control the heap size by supplying an explicit value to the startup script such as graphdb -Xms10g -Xmx10g or setting one of the following environment variables:
GDB_HEAP_SIZE - environment variable to set both the minimum and the maximum heap size (recommended)
GDB_MIN_MEM - environment variable to set only the minimum heap size
GDB_MAX_MEM - environment variable to set only the maximum heap size.
Any clearer steps?
In your case, if you want to increase Xmx, you need to start GraphDB with ./graphdb -Xmx10g. The other way is to set GDB_HEAP_SIZE=10g in the terminal before starting, but I recommend passing the parameter as in the first example.
You can also use this with sh and most other shells:
GDB_HEAP_SIZE=10g ./graphdb
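If you export the variable once for the session, the heap flags should then show up on the JVM command line (a sh/bash sketch; the grep is just a convenient way to inspect the running process):
export GDB_HEAP_SIZE=10g
./graphdb
ps -ef | grep -i graphdb    # look for -Xms10g -Xmx10g on the java command line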
I'm running Neo4j 2.2.1 with a 150G heap on a box with 240G of RAM. I set neo4j.neostore.nodestore.dbms.pagecache.memory to 60G (slightly less than 75% of the remaining system memory, as recommended). However, on startup I get an error saying the system can't start because it is trying to allocate an array whose size exceeds the maximum allowed size.
Further testing indicates that either node_cache_array_fraction or relationship_cache_array_fraction is causing the problem. Each is supposed to default to 1%, which on a 150G heap should be 1.5G. However, the array size being generated is too large.
Explicitly setting node_cache_size and relationship_cache_size seems to address this, although it is far from ideal.
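For reference, the workaround amounts to two entries along these lines, under the assumption that these object-cache settings live in conf/neo4j.properties (Neo4j 2.2 Enterprise); the 1500m values simply mirror the 1% figure above:
node_cache_size=1500m
relationship_cache_size=1500m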
I am using phpBB and everything works fine, but I am getting the following error on a single page inside the admin panel.
Allowed memory size of 16777216 bytes exhausted (tried to allocate 78 bytes) in home/mytestsite/public_html/includes/template.php on line 458
How do I fix this error?
As you can imagine, this error message occurs when PHP tries to use more memory than is available. I'm assuming that changing the code is not an option, but you CAN increase the amount of memory available to PHP.
To change the memory limit for one specific script, include a line such as this at the top of the script:
ini_set("memory_limit","20M");
The 20M (for example) sets the limit to 20 megabytes. If this does not work, keep increasing the memory limit until your script fits or your server squeals for mercy.
You can also make this a permanent change for all PHP scripts running on the server by adding a line such as this to the server’s php.ini file:
memory_limit = 20M
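To confirm which limit the page actually ends up with, a one-line check like this can be dropped into any PHP page you control (a sketch; note that the web server may need a restart before php.ini changes take effect):
<?php echo ini_get("memory_limit"); ?>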
Hope this helps
I have read that the maximum memory allocation is limited to around 60% of device memory, and that this can be changed by modifying the GPU_MAX_HEAP_SIZE and GPU_MAX_ALLOC_SIZE environment variables for the GPU.
I am wondering whether the AMD SDK has something similar for the CPU, in case I want to raise the memory allocation limit.
For my current configuration, it returns the following:
CL_DEVICE_MAX_MEM_ALLOC_SIZE = 2973.37MB
CL_DEVICE_GLOBAL_MEM_SIZE = 11893.5MB
Thanks.
I was able to change this on my system. I don't know if this method was possible when you originally asked the question.
Set the environment variable CPU_MAX_ALLOC_PERCENT to the percentage of total memory you want to be able to allocate for a single global buffer. I have 8GB of system memory, and after setting CPU_MAX_ALLOC_PERCENT to 80, clinfo reports the following:
Max memory allocation: 6871207116
Success! 6.399GB
You can also use GPU_MAX_ALLOC_PERCENT in the same way for your GPU devices.
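For anyone repeating this, the whole procedure is just the following (a bash sketch; the exact wording of the clinfo line can vary between SDK versions):
export CPU_MAX_ALLOC_PERCENT=80
clinfo | grep -i "max memory allocation"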