Stack around variable corrupted when using cvBlobsLib (OpenCV)

I am trying to do simple image processing with OpenCV and cvBlobsLib in Visual C++ 2008, and I get an error message when I try to create a CBlobResult object:
IplImage* original = cvLoadImage("pic6.png",0);
cvThreshold(original, original, 100, 255, CV_THRESH_BINARY);
CBlobResult blobs = CBlobResult(original, NULL, 255);
The message is the following:
Run-Time Check Failure #2 - Stack around the variable 'blobs' was corrupted
Why does this happen? How should I create this object? Thank you very much for your help.

Sorry, actually it was my fault: I was linking the release build of the cvBlobsLib library against the debug build of my project. As soon as I linked the debug version of the library, it worked.

Metal functions failing to compile with Xcode 8

Since moving to Xcode 8 and iOS 10, my Metal-based app fails to run at all. On launch I get the error: "Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED".
This appears two or three times in the console before the app crashes, because an MTLComputePipelineState is not successfully created and an error is thrown when calling the MTLDevice function makeComputePipelineState(function:). The only change I have made to the project is updating to Swift 3.0, but the console seems to imply a compiler error, which, given the crash, I assume is down to some Metal code not compiling properly.
Any help would be appreciated; this is ageing me prematurely.
UPDATE:
I've located the line causing the trouble in the .metal file:
int gi1 = permMod12[ii+i1+perm[jj+j1+perm[kk+k1]]];
permMod12 is a static constant array declared as:
static constant int permMod12 [512] = {7,4,5,7...}
perm is similarly static and constant:
static constant int perm [512] = {151,160...}
The variables ii, i1, jj, j1, kk and k1 are all integers calculated in the same kernel.
The kernel is quite large, so I'll post a link to the GitHub location. It's the functions called simplex3D and simplex4D that are causing the issue. These are very similar, so just focus on one of them; they are near carbon copies, except that the 4D version has another set of variables in play (ll, l1, l, etc.).
The issue certainly seems to be with indexing these arrays with calculated variables: when I change the indices to simple literals, there is no error.
The kernel needs to be executed in order to get the error to occur.
Any help with this new info would be great.
I also encountered the same error: "Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED". In my case the issue stemmed from attempted use of 'threadgroup bool' variables. Refactoring the code to use 'threadgroup short' variables in place of the booleans resolved the error. (I could not find in the Metal version 2 specification whether or not bool is a valid threadgroup type.)
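To illustrate the shape of that refactor, here is a minimal, hypothetical kernel (the kernel name, buffer, and flag are mine, not from any real project); the commented-out line shows the pattern that triggered the error for me:
#include <metal_stdlib>
using namespace metal;
// Hypothetical kernel: a threadgroup flag stored as a short (0/1) instead of a bool.
kernel void flagDemo(device int *output [[buffer(0)]],
                     uint tid [[thread_position_in_threadgroup]])
{
    // threadgroup bool ready;      // this form triggered the XPC compiler error for me
    threadgroup short ready;        // a short used as a 0/1 flag instead
    if (tid == 0)
        ready = 1;
    threadgroup_barrier(mem_flags::mem_threadgroup);
    output[tid] = (ready != 0) ? int(tid) : 0;
}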
I've encountered this situation too, and it seems there is no single solution to this problem. In my case, the problem occurred when a texture that uses a normalized-coordinate sampler was also read with the read() function. When I switched from read() to sample(), this weird error went away. I hope your problem has been solved already.
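For what it's worth, a rough sketch of that read() to sample() change (the kernel, texture, and sampler names here are hypothetical, not taken from any real project):
#include <metal_stdlib>
using namespace metal;
// Hypothetical kernel: sampling a texture through a normalized-coordinate sampler.
kernel void copyTexture(texture2d<float, access::sample> src [[texture(0)]],
                        texture2d<float, access::write>  dst [[texture(1)]],
                        sampler normSampler [[sampler(0)]],    // normalized-coordinate sampler
                        uint2 gid [[thread_position_in_grid]])
{
    float2 uv = (float2(gid) + 0.5) / float2(dst.get_width(), dst.get_height());
    // float4 c = src.read(gid);               // read() alongside the normalized sampler caused the error
    float4 c = src.sample(normSampler, uv);    // sample() with normalized coordinates instead
    dst.write(c, gid);
}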

Lua: Read Unsigned DWORD not working in Bizhawk Emulator

When I run my code I get an error on this line:
personality = memory.readdwordunsigned(0x02024744)
This is the error message I am given by the console:
LuaInterface.LuaScriptException: [string "main"]:26: attempt to call field 'readdwordunsigned' (a nil value)
I have been testing and researching this for a while and I cannot get it to work, despite this approach being used in several other projects, such as this one: https://projectpokemon.org/forums/showthread.php?16681-Gen-3-Lua-Scripts
Some other information:
1. I am running the lua script on the BizHawk emulator.
2. If I change the line to memory.readbyte() I receive a different message, which leads me to believe that the console does not recognise memory.readdwordunsigned() as a function.
3. The script is in the same folder as the executable file for the emulator.
Thank you in advance for any help
It turns out that memory.readdwordunsigned() is no longer supported in the BizHawk emulator. After extensive research, and with help from a comment posted on my question, I managed to find a working alternative:
memory.usememorydomain("System Bus")
personality=memory.read_u32_le(0x02024744)
For anyone else who finds this answer useful, note that a DWORD is unsigned and 4 bytes (32 bits) in size, hence the use of u32. If you wanted to read a signed byte, for example, you would use s8 instead. le means little-endian; be can be used instead for big-endian.
It is important to set the memory domain before attempting to read from memory, because the domain I was using (IWRAM), as well as every other memory domain except the System Bus, would produce this error due to the size of the memory address.

Memory leak issue in OpenCV

In my project I used a CvPoint2D64f* array to store the corners of the chessboard image manually. Now I get a memory leak error due to unreleased memory. I tried both free(Corners) and delete[] Corners, but after 11 hours it gives the same memory leak error. I am confused: which is the correct method to release the memory?
int main()
{
CvPoint2D64f* Corners = 0;
Corners = new CvPoint2D64f[25];
......
free(Corners);
return;
}
I used the C library of OpenCV 2.1.
Thanks in advance.
If you want it to be C, you can't use new; that has to be:
Corners = (CvPoint2D64f*) malloc(25 * sizeof(CvPoint2D64f));
...
free(Corners);
But honestly, your problems are due to using an outdated version (2.1) and an outdated API (C).
Those manual memory management issues were the main reason for the OpenCV devs to switch to C++.
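For reference, a rough sketch of the alternatives (the sizes and names are made up, not the poster's actual code): either keep allocation and release within the same family, or let a container manage the memory for you.
#include <cstdlib>   // malloc / free
#include <vector>
#include <cxcore.h>  // CvPoint2D64f (OpenCV 2.1; the include path may differ on your setup)
void allocationSketch()
{
    // Option 1 (C): allocate and release with the same family of calls.
    CvPoint2D64f* cornersC = (CvPoint2D64f*)malloc(25 * sizeof(CvPoint2D64f));
    // ... use cornersC ...
    free(cornersC);

    // Option 2 (C++): new[] must be paired with delete[], never with free().
    CvPoint2D64f* cornersCpp = new CvPoint2D64f[25];
    // ... use cornersCpp ...
    delete[] cornersCpp;

    // Option 3 (C++): let a container release the memory automatically.
    std::vector<CvPoint2D64f> corners(25);
    // ... use corners[i], or &corners[0] where a raw pointer is needed ...
}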

Error Handling in OpenCV GPU

How do I handle OpenCV GPU exceptions? Is there any specific set of error codes or an API for OpenCV GPU exception handling?
I tried searching a lot, but I only found one error code, i.e. CV_GpuNotSupported.
Please help me out.
I'm assuming you know that CV_GpuNotSupported is NOT how OpenCV handles GPU exceptions in general; in fact, that error is raised when you try to call gpu methods without having compiled OpenCV with -DHAVE_CUDA or -DHAVE_OPENCL. The way OpenCV (I also assume the newest released version, 2.4.5) handles error codes is defined in these files:
For methods that use NVIDIA CUDA:
https://github.com/Itseez/opencv/blob/2.4.5/modules/gpu/src/error.cpp
https://github.com/Itseez/opencv/blob/2.4.5/modules/gpu/src/precomp.hpp
For methods that use OpenCL:
https://github.com/Itseez/opencv/blob/2.4.5/modules/ocl/src/error.cpp
https://github.com/Itseez/opencv/blob/2.4.5/modules/ocl/src/precomp.hpp
As for the API, you can use cv::gpu::error or cv::ocl::error, or, to get the error string, getErrorString for cv::gpu and getOpenCLErrorString for cv::ocl. By the way, for CUDA errors you have to specify whether it's an NPP, NCV, cufft, or cublas error.
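If it helps, here is a minimal sketch of how a failing GPU call typically surfaces in user code with 2.4.x; as far as I can tell from those sources, cv::gpu::error forwards to cv::error, which by default throws a cv::Exception, so you can catch that around your GPU calls:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/gpu/gpu.hpp>
int main()
{
    try
    {
        // Any gpu call can fail (no device, unsupported format, out of memory, ...).
        cv::gpu::GpuMat src(480, 640, CV_8UC1), dst;
        cv::gpu::threshold(src, dst, 100, 255, cv::THRESH_BINARY);
    }
    catch (const cv::Exception& e)
    {
        // CUDA/NPP/NCV failures in the gpu module are reported through cv::Exception.
        std::cerr << "OpenCV GPU error: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}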

Error trying to extract face region

I have written the following piece of code to extract the image of the region detected as a face, using the OpenCV 2.2 facedetect.c sample.
//Extracting the image of just the ROI
IplImage* rectImage;
rectImage->roi=NULL;
CvRect boundingBox={point1.x,point1.y,r->width,r->height};
cvSetImageROI(rectImage,boundingBox);
IplImage* originalBox=cvCreateImage(cvSize(r->width,r->height),IPL_DEPTH_8U,3);
IplImage* reSizedBox=cvCreateImage(cvSize(100,100),IPL_DEPTH_8U,3);
cvCopy(rectImage, originalBox, 0);
cvResize(originalBox,reSizedBox,CV_INTER_LINEAR);
cvSaveImage("MyFaceBox.jpg", reSizedBox);
Problem: When I build it, it gives the following error:
"error: ‘cvResize’ was not declared in this scope"
I am using Xcode as my development tool. I cannot understand what is causing the problem. Can someone please help?
Thanks
Did you include the relevant header file, like
#include <imgproc/imgproc_c.h>
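Beyond the missing include, note that the ROI has to be set on a valid, initialized image rather than on an uninitialized IplImage* pointer. A rough sketch of the whole extraction, assuming img is the frame facedetect.c works on (8-bit, 3-channel) and r is the detected face rectangle in img's coordinates (adjust if detection ran on a scaled copy):
#include <opencv2/imgproc/imgproc_c.h>   // cvResize
#include <opencv2/highgui/highgui_c.h>   // cvSaveImage
// Extract the detected face region, resize it to 100x100, and save it.
CvRect boundingBox = cvRect(r->x, r->y, r->width, r->height);
cvSetImageROI(img, boundingBox);
IplImage* originalBox = cvCreateImage(cvSize(r->width, r->height), IPL_DEPTH_8U, 3);
IplImage* reSizedBox  = cvCreateImage(cvSize(100, 100), IPL_DEPTH_8U, 3);
cvCopy(img, originalBox, 0);                        // copies only the ROI
cvResetImageROI(img);                               // restore the full image afterwards
cvResize(originalBox, reSizedBox, CV_INTER_LINEAR);
cvSaveImage("MyFaceBox.jpg", reSizedBox);
cvReleaseImage(&originalBox);
cvReleaseImage(&reSizedBox);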
