MPLABX dsPIC33EP256MC502 Running FreeRTOS Link Error: invalid attributes for section '.heap'

I'm trying to build FreeRTOS. I set the heap size to 5120, and tried several other heap sizes, in the project properties in MPLAB X. I also changed
#define configTOTAL_HEAP_SIZE
in FreeRTOSConfig.h. I'm using a dsPIC33EP256MC502. I'm getting this error:
"Link Error: invalid attributes for section '.heap'".
If I make the heap size greater than 30,000 I get an error:
"Link Error: Could not allocate section .heap, size = 30000 bytes, attributes = heap keep preserved
Link Error: Could not allocate data memory".
Why am I getting these two errors?
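(For context, a minimal sketch of the two separate heap settings involved; the 5120 value is just the example from the question, not a verified fix. FreeRTOS's heap_1/heap_2/heap_4 allocators reserve their own static array sized by FreeRTOSConfig.h, while the "heap size" field in the MPLAB X project properties sizes the XC16 linker's .heap section used by malloc(), so both have to fit in the device's data RAM together with the stacks and other static data.)

/* FreeRTOSConfig.h - sketch only; 5120 is the example value from the question */
#define configTOTAL_HEAP_SIZE    ( ( size_t ) 5120 )

/* heap_1.c, heap_2.c and heap_4.c reserve this much RAM statically:
       static uint8_t ucHeap[ configTOTAL_HEAP_SIZE ];
   Only heap_3.c wraps malloc()/free() and therefore also needs a non-zero
   linker .heap (the "Heap size" field in the MPLAB X project properties). */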

Related

Maximum Number of Modules

This doc states that the maximum number of modules in a deployment is 20, but I am having problems getting over 15. Nothing ever happens: there are no error messages, but the modules don't get deployed.
I would also like to know whether this is a soft limit and, if it is, what the process is to override it.
Did you find any error in the edgeAgent log? You probably hit the twin message size limit; the maximum size per twin section (tags, desired properties, reported properties) is 8 KB.

Propel ORM - migration

I have this problem and couldn't find much good information about it; what I did find wasn't useful. Here it goes: I started a project with Propel, created a first database with a basic table in it, ran "php propel init", and everything worked fine. Then I needed another table, so I added it to schema.xml, but when I run any of the migration tools I get this error:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32768 bytes) in D:\Desenvolvimento\workspace\Login\vendor\propel\propel\bin\propel.php on line 1
It has been very frustrating; I feel the internet lacks good information on Propel.
The problem was that I shouldn't have edited the database manually. Once it's created, the best thing to do is make changes through the XML schema.

java.lang.OutOfMemoryError: Requested array size exceeds VM limit

I'm running Neo4j 2.2.1 with 150G heap space on a box with 240G. I set neo4j.neostore.nodestore.dbms.pagecache.memory to 60G (slightly less than 75% of the remaining system memory, as recommended). However, on startup I get an error that the system can't start because it is trying to allocate an array whose size exceeds the maximum allowed size.
Further testing indicates that either node_cache_array_fraction or relationship_cache_array_fraction is causing the problem. It is supposed to default to 1%, which on a 150G heap should be 1.5G, yet the array size being generated is too large.
Explicitly setting node_cache_size and relationship_cache_size seems to address this although it is far from ideal.

Issues with profiling a Java application using JProbe

I'm currently doing dynamic memory analysis for our Eclipse-based application using JProbe. After starting the Eclipse application and JProbe, when I try to profile the application it closes abruptly with a fatal error, and a fatal error log file is generated. In the log file I can see that the PermGen space appears to be full. Below is the heap summary from the log file:
Heap
def new generation total 960K, used 8K [0x07b20000, 0x07c20000, 0x08000000)
eden space 896K, 0% used [0x07b20000, 0x07b22328, 0x07c00000)
from space 64K, 0% used [0x07c00000, 0x07c00000, 0x07c10000)
to space 64K, 0% used [0x07c10000, 0x07c10000, 0x07c20000)
tenured generation total 9324K, used 5606K [0x08000000, 0x0891b000, 0x0bb20000)
the space 9324K, 60% used [0x08000000, 0x08579918, 0x08579a00, 0x0891b000)
compacting perm gen total 31744K, used 31723K [0x0bb20000, 0x0da20000, 0x2bb20000)
the space 31744K, 99% used [0x0bb20000, 0x0da1af00, 0x0da1b000, 0x0da20000)
ro space 8192K, 66% used [0x2bb20000, 0x2c069920, 0x2c069a00, 0x2c320000)
rw space 12288K, 52% used [0x2c320000, 0x2c966130, 0x2c966200, 0x2cf20000)
I tried to increase the PermGen space using the flag -XX:MaxPermSize=512m, but that doesn't seem to work. I would like to know how to increase the PermGen size from the command prompt: do I have to go to the Java installation on my computer and apply the flag there, or should I increase the PermGen space specifically for the Eclipse application or for JProbe? Please advise.
Any help on this is much appreciated.

Out of memory error in OpenCV

I am trying to make a training data set from the frames of videos.
For every new frame I compute a feature vector (size 3300x1) and concatenate it with the previous feature vectors to build the training data set. But after reading about 2000 frames I get the error specified below.
The error occurs in the second line of the following code:
cv::Mat frameFV = getFeatureVectorFromGivenImage(curFrame, width, height);
cv::hconcat(trainingDataPerEmotion, frameFV, trainingDataPerEmotion);
At the time of the error, the size of the cv::Mat trainingDataPerEmotion is nearly 3300x2000.
I am releasing the old video using
cvReleaseCapture(&capture);
before processing the new video. The error is:
OpenCV Error: Insufficient memory (Failed to allocate 3686404 bytes) in OutOfMemoryError, file /home/naresh/OpenCV-2.4.0/modules/core/src/alloc.cpp, line 52
terminate called after throwing an instance of 'cv::Exception'
what(): /home/mario/OpenCV-2.4.0/modules/core/src/alloc.cpp:52: error: (-4) Failed to allocate 3686404 bytes in function OutOfMemoryError
Can anyone suggest how I can overcome this problem? I need to store this large training data set in order to train my system.
Thank you.
First, check that you don't have any memory leaks.
As far as I remember, OpenCV's OutOfMemoryError is thrown when an allocation fails.
If you still cannot find a memory leak or identify the cause, you will need to provide more code, ideally code that allows the error to be reproduced.
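If the training matrix really is rebuilt with hconcat on every frame, each call allocates a fresh buffer and copies all the previous columns, which churns and fragments memory even when the final matrix would fit. A rough sketch of preallocating the matrix once instead, assuming a known frame count and CV_32F features (curFrame, width, height and getFeatureVectorFromGivenImage are the question's own identifiers):

#include <opencv2/core/core.hpp>

// Sketch only: preallocate the training matrix instead of growing it with hconcat.
// featureLength, maxFrames and CV_32F are assumptions based on the sizes in the question.
const int featureLength = 3300;
const int maxFrames     = 2000;
cv::Mat trainingDataPerEmotion(featureLength, maxFrames, CV_32F);

for (int i = 0; i < maxFrames; ++i)
{
    // ... grab curFrame from the capture here ...
    cv::Mat frameFV = getFeatureVectorFromGivenImage(curFrame, width, height);
    frameFV.copyTo(trainingDataPerEmotion.col(i));   // write into the preallocated column
}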
