Out of Memory error in OpenCV

I am trying to build a training data set from the frames of videos.
For every new frame I compute a feature vector (size 3300x1) and concatenate it with the old feature vectors to grow the training data set. But after reading about 2000 frames I get the error specified below.
The error occurs in the second line of the following code:
cv::Mat frameFV = getFeatureVectorFromGivenImage(curFrame, width, height);
cv::hconcat(trainingDataPerEmotion, frameFV, trainingDataPerEmotion);
At the time of the error, the cv::Mat trainingDataPerEmotion is roughly 3300x2000 in size.
I am releasing the old video with
cvReleaseCapture(&capture);
before going on to process the next video. The error is:
OpenCV Error: Insufficient memory (Failed to allocate 3686404 bytes) in OutOfMemoryError, file /home/naresh/OpenCV-2.4.0/modules/core/src/alloc.cpp, line 52
terminate called after throwing an instance of 'cv::Exception'
what(): /home/mario/OpenCV-2.4.0/modules/core/src/alloc.cpp:52: error: (-4) Failed to allocate 3686404 bytes in function OutOfMemoryError
Can anyone suggest how I can overcome this problem? I need to keep the large training data set so I can train my system.
Thank you.

First check whether you have a memory leak.
As far as I remember, OpenCV's OutOfMemoryError is actually thrown when an allocation fails.
If you still cannot find a memory leak and fix the cause, you will need to provide more code. Best would be code that allows us to reproduce your error.
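One more thing worth noting: the failed allocation is only about 3.7 MB, while a 3300x2000 CV_64F matrix is only around 50 MB, so the heap is more likely fragmented (or leaking) than genuinely full. Repeated cv::hconcat makes this worse, because each call allocates a brand-new matrix and copies all previous columns into it. A minimal sketch of an alternative, assuming you can bound the number of frames in advance and that the feature vectors are CV_32F (maxFrames and col are hypothetical names, the rest come from the question):

#include <opencv2/core/core.hpp>

// preallocate the whole training matrix once instead of growing it per frame
const int maxFrames = 3000;                        // assumed upper bound
cv::Mat trainingDataPerEmotion(3300, maxFrames, CV_32F);
int col = 0;

// inside the frame loop:
cv::Mat frameFV = getFeatureVectorFromGivenImage(curFrame, width, height);
frameFV.copyTo(trainingDataPerEmotion.col(col++)); // assumes frameFV is 3300x1 CV_32F; writes in place, no reallocation

// after the loop, trim to the columns actually filled:
cv::Mat trainingData = trainingDataPerEmotion.colRange(0, col);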

Related

When an error frame is generated, how does it relate to the Transmit Error Count?

I have a question. I am using the CANoe simulation tool.
I want to understand when an error frame occurs. I heard that the condition for error frames is that the Transmit Error Count should be greater than 127. But in my screen capture, an error frame occurs even though the count is less than 127.
What is the reason? And does a Transmit Error Count of 7 mean consecutive dominant-bit errors?
I want to clarify that the ECU can't receive the HU_DATC_E_03 message because of the error frame.
Please help me.
You got it backwards: functioning CAN nodes are always allowed to send error frames. Until the counter reaches 127, the node is in "error active" mode. Which means that the node is allowed to actively send error frames.
Beyond 127 it goes "error passive". This means that it is no longer allowed to send error frames, because the node is considered broken and should not be allowed to disrupt bus traffic any longer. It may still listen to the bus but not actively participate.
I don't know this specific tool, but the tx error count presumably just means that the transmit error counter has reached the value 7, i.e. there have been 7 failed attempts to send a frame, for whatever reason. It doesn't necessarily have anything to do with bit stuffing (and CAN bit-stuffs after 5 identical bits, not 6 as some other networks do).
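For illustration only, here is a simplified C++ sketch of the CAN fault-confinement states described above (the real rules in ISO 11898-1 also specify how the counters move, e.g. +8 per transmit error and -1 per successful transmission):

#include <cstdint>

enum class CanErrorState { ErrorActive, ErrorPassive, BusOff };

// classify a node from its transmit/receive error counters (TEC/REC)
CanErrorState stateFor(uint32_t tec, uint32_t rec) {
    if (tec >= 256)
        return CanErrorState::BusOff;        // node must stop participating entirely
    if (tec > 127 || rec > 127)
        return CanErrorState::ErrorPassive;  // may no longer send active error frames
    return CanErrorState::ErrorActive;       // normal operation, active error frames allowed
}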

How to solve "OpenCV Error: Bad argument (train data must be floating-point matrix) in cvCheckTrainData" error while using emeocv?

I am learning emeocv with OpenCV from https://www.mkompf.com/cplus/emeocv.html. I followed it pretty much exactly. My programming environment is:
Ubuntu 14.04
opencv-2.4.8+dfsg1
In the tutorial page mentioned above, when I reach the 'main program' section, running
sudo ./emeocv -i images -l
throws the following error:
OpenCV Error: Bad argument (train data must be floating-point matrix) in cvCheckTrainData, file /build/buildd/opencv-2.4.8+dfsg1/modules/ml/src/inner_functions.cpp, line 857
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.4.8+dfsg1/modules/ml/src/inner_functions.cpp:857: error: (-5) train data must be floating-point matrix in function cvCheckTrainData
and I am unable to proceed further.
I don't even know where the file "/build/buildd/opencv-2.4.8+dfsg1/modules/ml/src/inner_functions.cpp" exists.
How can I resolve this error? Please help.
This happens when you have started training mode before but didn't train on any data.
Simply delete the empty trainctr.yml and start again with real data.
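The underlying cause is that the ml module in OpenCV 2.4 insists on CV_32F training data. If you hit the same message with real data rather than an empty trainctr.yml, a hedged sketch of the generic fix (trainData, floatData and the commented train call are placeholders, not emeocv's own names):

#include <opencv2/core/core.hpp>

cv::Mat trainData(100, 64, CV_8U);       // placeholder: features stored as integers
cv::Mat floatData;
trainData.convertTo(floatData, CV_32F);  // cvCheckTrainData requires a floating-point matrix
// model.train(floatData, responses);    // then pass the converted matrix to training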

dispatch_io_read a socket will wait for more data if receiving data size is smaller than length

Hi, I am using dispatch_io_read with a socket in Swift 2 on Xcode 7 beta 3. The read seems to stall when the amount of data received is smaller than the length I specified. For example, if I do
dispatch_io_read(channel!, 0, 1000, inputQueue!, myReadHandler)
and the data from the server is less than 1000 bytes, myReadHandler never gets called.
To work around this I have to read the bytes one by one. Is there a better solution?
Thanks.
This is probably a little late, but for anyone who has the same problem:
Apple's documentation says:
"The length parameter indicates the number of bytes that should be read from the I/O channel. Pass SIZE_MAX to keep reading until EOF is encountered (for a channel created from a disk-based file this happens when reading past the end of the physical file)."
So, simply using SIZE_MAX will read all the available data attached to the file descriptor.
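For reference, the same pattern in the underlying C API (GCD is callable from C/C++ on Apple platforms; a sketch with an assumed file descriptor, compiled with Apple clang, which supports blocks; real code must also manage the channel's lifetime):

#include <dispatch/dispatch.h>
#include <cstdint>
#include <cstdio>

void readAll(dispatch_fd_t fd) {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_io_t channel = dispatch_io_create(DISPATCH_IO_STREAM, fd, queue,
                                               ^(int error) { /* channel closed */ });
    // SIZE_MAX: deliver data as it arrives instead of waiting for a fixed byte count
    dispatch_io_read(channel, 0, SIZE_MAX, queue,
                     ^(bool done, dispatch_data_t data, int error) {
        if (data)
            printf("received %zu bytes\n", dispatch_data_get_size(data));
        if (done)
            dispatch_io_close(channel, 0);  // EOF or error
    });
}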
Unfortunately, this seems to not work due to a bug in Swift 3 with DispatchIO.read().

How to determine whether a graphic or picture is damaged?

I have images that are encrypted and then decrypted, and I draw them to a canvas.
I get no error when assigning one to a TJpegImage:
DecryptJepegImage(PWordInfo(FWordList[i])^.Image, jpg); // no errors here
but I get an error when I go to draw it to the canvas:
bmp.Canvas.StretchDraw(Rect(0, 0, bmp.Width, bmp.Height), jpg); // says Access violation!
My question is: how can I determine whether the image is damaged, so that I can use an alternative image instead?
That's not really enough information to go on. The one thing that I can be pretty sure of is that it's almost certainly not caused by damage to the encrypted image. Access Violation means invalid memory access somewhere. Either you're dereferencing a pointer that's nil, or you've got corrupted memory.
Just going by my gut reaction, the first thing I'd check is that whatever you're doing with pointer casting in the first line is correct. Pointer errors are a frequent source of access violations.
Also, is this a nil pointer error or a corrupt pointer error? You can tell by the address in the access violation. If either one starts with a bunch of 0s (or in rare cases, a bunch of Fs) then that means you're dereferencing a nil somewhere. Make sure that bmp and bmp.canvas are assigned. But if the addresses both look like valid memory addresses, then you've got memory corruption. That's harder to track down, and you'll have to spend some quality time with the debugger.
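As a cheap sanity check before handing the decrypted bytes to TJpegImage, you can at least detect truncation, a common result of a bad decrypt. A sketch in C++ rather than the question's Delphi, but the idea translates directly: a JPEG stream must begin with the SOI marker FFD8 and end with the EOI marker FFD9. This won't catch every kind of corruption, but it filters the obvious cases.

#include <cstdint>
#include <vector>

// heuristic: check the JPEG start/end markers of a decrypted buffer
bool looksLikeValidJpeg(const std::vector<uint8_t>& buf) {
    if (buf.size() < 4)
        return false;
    bool hasSoi = buf[0] == 0xFF && buf[1] == 0xD8;                   // Start Of Image
    bool hasEoi = buf[buf.size() - 2] == 0xFF && buf.back() == 0xD9;  // End Of Image
    return hasSoi && hasEoi;
}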

CL_OUT_OF_RESOURCES for 2 million floats with 1 GB VRAM?

It seems like 2 million floats should be no big deal, only 8 MB out of 1 GB of GPU RAM. I am able to allocate that much at times, and sometimes more, with no trouble. I get CL_OUT_OF_RESOURCES when I do a clEnqueueReadBuffer, which seems odd. Can I sniff out where the trouble really started? OpenCL shouldn't be failing like this at clEnqueueReadBuffer, right? Shouldn't it fail when I allocate the data? Is there some way to get more details than just the error code? It would be nice if I could see how much VRAM was allocated when OpenCL declared CL_OUT_OF_RESOURCES.
I just had the same problem you have (it took me a whole day to fix).
I'm sure people with the same problem will stumble upon this; that's why I'm posting to this old question.
You probably didn't check the maximum work group size of the kernel.
This is how you do it:
size_t kernel_work_group_size;
// query the largest work-group size this particular kernel supports on this device
clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_WORK_GROUP_SIZE, sizeof(size_t), &kernel_work_group_size, NULL);
My devices (2x NVIDIA GTX 460 and an Intel i7 CPU) support a maximum work group size of 1024, but the code above returns something around 500 when I query it for my path-tracing kernel.
When I used a work group size of 1024, it unsurprisingly failed and gave me the CL_OUT_OF_RESOURCES error.
The more complex your kernel becomes, the smaller its maximum work group size gets (or at least that's what I've experienced).
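A usage sketch with assumed variable names (queue, kernel, global_size): clamp the local size you wanted to the queried limit before enqueueing. Note that in OpenCL 1.x the global size must also be a multiple of the local size.

size_t local_size = 1024;                    // what we wanted
if (local_size > kernel_work_group_size)
    local_size = kernel_work_group_size;     // what this kernel actually supports
// global_size must be a multiple of local_size in OpenCL 1.x
cl_int err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                                    &global_size, &local_size, 0, NULL, NULL);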
Edit:
I just realized you said "clEnqueueReadBuffer" instead of "clEnqueueNDRangeKernel"...
My answer was related to clEnqueueNDRangeKernel.
Sorry for the mistake.
I hope this is still useful to other people.
From another source:
- calling clFinish() gets you the error status for the calculation (rather than getting it when you try to read data).
- the "out of resources" error can also be caused by a 5s timeout if the (NVidia) card is also being used as a display
- it can also appear when you have pointer errors in your kernel.
A follow-up suggests running the kernel first on the CPU to ensure you're not making out-of-bounds memory accesses.
Not all available memory can necessarily be supplied to a single allocation request. Read up on heap fragmentation to learn why the largest allocation that can succeed is the largest contiguous block of memory, and how blocks get divided into smaller pieces as memory is used.
It's not that the resource is exhausted; it just can't find a single piece big enough to satisfy your request.
Out-of-bounds accesses in a kernel are typically silent, since there is still no error at the kernel enqueueing call.
However, if you later try to read the kernel's result with clEnqueueReadBuffer(), the error will show up there. It indicates that something went wrong during kernel execution.
Check your kernel code for out-of-bounds reads/writes.
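Putting the notes above together, a sketch (names assumed) of checking each stage separately, so the error is attributed to where it actually occurred rather than to the read:

cl_int err;
err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local, 0, NULL, NULL);
// this only reports enqueue-time problems (bad arguments, invalid sizes)
err = clFinish(queue);
// execution-time failures (out-of-bounds accesses, watchdog timeout) surface here
err = clEnqueueReadBuffer(queue, buffer, CL_TRUE, 0, nbytes, host_ptr, 0, NULL, NULL);
// if the kernel already failed, the blocking read typically reports CL_OUT_OF_RESOURCES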
