Exception when deleting std::vector<unsigned char> from OpenCV imencode

I'm new to OpenCV and C++; I'm forced to learn C++ to use OpenCV from my Delphi app. I'm exporting this function from a DLL to convert a pointer to a cv::Mat back to bytes after image processing. This is the function I'm using:
DllExport unsigned char* MatToBytes(cv::Mat *src, int &outLen)
{
    cv::Mat &matCvrt = *src;
    std::vector<unsigned char> *poutVet = new std::vector<unsigned char>();
    std::vector<unsigned char> &outVet = *poutVet;
    cv::imencode(".png", matCvrt, outVet);
    outLen = outVet.size();
    unsigned char *outBytes = new unsigned char[outVet.size()];
    std::copy(outVet.begin(), outVet.end(), outBytes);
    std::vector<unsigned char>().swap(outVet); // this is the line that triggers the assertion
    return outBytes;
}
I've been researching this all day but couldn't find an answer. If I remove the line

std::vector<unsigned char>().swap(outVet);

it works fine, but leaks memory. If I leave it in, I get "Debug Assertion Failed!". I hope someone can help me out, thanks a lot.

Do not use pointers to std::vector; the class already manages memory allocation and deallocation itself:
DllExport unsigned char* MatToBytes(cv::Mat *src, int &outLen)
{
    std::vector<unsigned char> outVet;
    cv::imencode(".png", *src, outVet);
    outLen = outVet.size();
    unsigned char *outBytes = new unsigned char[outVet.size()];
    std::copy(outVet.begin(), outVet.end(), outBytes);
    return outBytes;
}
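Note that the returned buffer is still allocated with new[] inside the DLL, so the Delphi side cannot release it safely with its own allocator. Exporting a matching free function keeps allocation and deallocation in the same module; a minimal sketch (the name FreeBytes is hypothetical):

// Hypothetical companion export: releases a buffer returned by MatToBytes
// using the same CRT/heap that allocated it.
DllExport void FreeBytes(unsigned char *bytes)
{
    delete[] bytes;
}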

I fixed it myself. I tried it on Visual Studio 2010 and it works fine; I had been using the vc11 libraries of OpenCV, so now I'll just have to build OpenCV for Visual Studio 2012. (Mixing modules built against different CRT versions means the vector's storage is allocated on one heap and freed on another, which is exactly what the debug assertion catches.)

Related

Is it possible to intercept system calls via a Theos tweak? (Jailed version)

Can I intercept generic C library calls like sqlite3_prepare or sqlite3_open, and also CC_MD5 from libcommonCrypto, with a Theos (jailed version) tweak?
I would like to intercept all these calls and print them to the console or a log file.
I've read something about MSHookFunction, but I'm not sure about it.
EDIT: I've added some code I wrote over the past few days. This is my Tweak.xm, where I try to intercept the CC_MD5 call and, after a simple log message, return to the normal flow. The tweak is injected, but I cannot see any message.
#include <substrate.h>
#include <CommonCrypto/CommonDigest.h>

static unsigned char * (*original_CC_MD5)(const void *data, CC_LONG len, unsigned char *md);

static unsigned char * replaced_CC_MD5(const void *data, CC_LONG len, unsigned char *md) {
    NSLog(@"Calling MD5");
    return original_CC_MD5(data, len, md);
}

MSInitialize {
    MSHookFunction(CC_MD5, replaced_CC_MD5, &original_CC_MD5);
}
I've found the problem. The Theos version I'm using targets jailed devices, and in that version MSHookFunction is replaced by fishhook.
Using fishhook everything works; obviously the code changes:
#include <substrate.h>
#include <CommonCrypto/CommonDigest.h>
#import <fishhook.h>

static unsigned char * (*original_CC_MD5)(const void *data, CC_LONG len, unsigned char *md);

static unsigned char * replaced_CC_MD5(const void *data, CC_LONG len, unsigned char *md) {
    NSLog(@"Calling MD5");
    return original_CC_MD5(data, len, md);
}

%ctor {
    rebind_symbols((struct rebinding[1]){{"CC_MD5", replaced_CC_MD5, (void *)&original_CC_MD5}}, 1);
}
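The same pattern should extend to the other C library calls mentioned (sqlite3_open, sqlite3_prepare, ...), since fishhook rebinds by symbol name. A sketch for fopen, assuming fishhook is linked into the tweak; it uses a plain C constructor instead of the Logos %ctor:

#include <stdio.h>
#import <fishhook.h>

static FILE *(*orig_fopen)(const char *path, const char *mode);

static FILE *my_fopen(const char *path, const char *mode) {
    printf("fopen(%s, %s)\n", path, mode);  // or NSLog in an Objective-C file
    return orig_fopen(path, mode);
}

// Runs when the image is loaded; equivalent to the Logos %ctor above.
__attribute__((constructor))
static void init_hooks(void) {
    rebind_symbols((struct rebinding[1]){
        {"fopen", (void *)my_fopen, (void **)&orig_fopen}
    }, 1);
}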

Using a dynamic library loaded via LC_LOAD_DYLIB to interpose C functions

Firstly, what I want to do is intercept an arbitrary standard C function (like fopen, read, write, malloc, ...) in an iOS application.
I have a libtest.dylib with this code:
typedef struct interpose_s {
    void *new_func;
    void *orig_func;
} interpose_t;

FILE *vg_fopen(const char * __restrict, const char * __restrict);

static const interpose_t interposing_functions[] \
    __attribute__ ((section("__DATA, __interpose"))) = {
    { (void *)vg_fopen, (void *)fopen },
};

FILE *vg_fopen(const char * __restrict path, const char * __restrict mode) {
    printf("vg_fopen");
    return fopen(path, mode);
}
After compiling the dylib, I go into the binary of the host iOS app, add an LC_LOAD_DYLIB command to the end of the load command list, and point it at @executable_path/libtest.dylib.
What I expect is that it will override the implementation of fopen and print "vg_fopen" whenever fopen is called. However, it does not, so the interposition seems to have failed.
I'd like to know what the reason might be. This is in-house development for learning purposes only, so please don't mention the impact or warn me about inappropriate use.
Thanks in advance.
From the dyld source:
// link any inserted libraries
// do this after linking main executable so that any dylibs pulled in by inserted
// dylibs (e.g. libSystem) will not be in front of dylibs the program uses
if ( sInsertedDylibCount > 0 ) {
    for(unsigned int i=0; i < sInsertedDylibCount; ++i) {
        ImageLoader* image = sAllImages[i+1];
        link(image, sEnv.DYLD_BIND_AT_LAUNCH, ImageLoader::RPathChain(NULL, NULL));
        // only INSERTED libraries can interpose
        image->registerInterposing();
    }
}
So no, only libraries inserted via DYLD_INSERT_LIBRARIES have their interposing applied.
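For experimenting on a development build, the same dylib can instead be inserted through the environment so that its __interpose section is honored. A sketch using the DYLD_INTERPOSE macro from the dyld sources (include/mach-o/dyld-interposing.h); the library and host names are placeholders:

#include <stdio.h>

// Macro as it appears in dyld's include/mach-o/dyld-interposing.h.
#define DYLD_INTERPOSE(_replacement, _replacee) \
    __attribute__((used)) static struct { const void *replacement; const void *replacee; } \
    _interpose_##_replacee __attribute__((section("__DATA,__interpose"))) = \
    { (const void *)(unsigned long)&_replacement, (const void *)(unsigned long)&_replacee };

FILE *vg_fopen(const char *restrict path, const char *restrict mode) {
    printf("vg_fopen(%s)\n", path);
    return fopen(path, mode);
}
DYLD_INTERPOSE(vg_fopen, fopen)

Built into libtest.dylib and launched with DYLD_INSERT_LIBRARIES=libtest.dylib set in the environment, the hook then fires; note that dyld may ignore that variable for restricted or hardened binaries, which is the limitation the quoted source describes.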

DirectX11 CreateWICTextureFromMemory Using PNG

I've currently got textures loading with CreateWICTextureFromFile, but I'd like a little more control, and I'd like to store images in byte form in a resource loader. Below are two test snippets that fail with two different errors; I'm looking for any insight into a possible solution.
ID3D11ShaderResourceView* srv;
std::basic_ifstream<unsigned char> file("image.png", std::ios::binary);
file.seekg(0, std::ios::end);
int length = file.tellg();
file.seekg(0, std::ios::beg);
unsigned char* buffer = new unsigned char[length];
file.read(&buffer[0], length);
file.close();
HRESULT hr;
hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(), &buffer[0], sizeof(buffer), nullptr, &srv, NULL);
The above code fails with "Component not found".
std::ifstream file;
ID3D11ShaderResourceView* srv;
file.open("../Assets/Textures/osg.png", std::ios::binary);
file.seekg(0, std::ios::end);
int length = file.tellg();
file.seekg(0, std::ios::beg);
std::vector<char> buffer(length);
file.read(&buffer[0], length);
file.close();
HRESULT hr;
hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(), (const uint8_t*)&buffer[0], sizeof(buffer), nullptr, &srv, NULL);
The above code returns that the image format is unknown.
I'm clearly doing something wrong here; any help is greatly appreciated. I tried to find anything similar on Stack Overflow and Google, to no avail.
Hopefully someone trying to do the same thing will find this solution.
Below is the code I used to solve this problem.
std::basic_ifstream<unsigned char> file("image.png", std::ios::binary);
if (file.is_open())
{
    file.seekg(0, std::ios::end);
    int length = file.tellg();
    file.seekg(0, std::ios::beg);
    unsigned char* buffer = new unsigned char[length];
    file.read(&buffer[0], length);
    file.close();
    HRESULT hr;
    hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(), &buffer[0], (size_t)length, nullptr, &srv, NULL);
}
The important change is passing (size_t)length to CreateWICTextureFromMemory: the earlier attempts passed sizeof(buffer), which is the size of the pointer (or of the vector object), not the number of bytes read.
It was indeed a stupid error.
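For reference, the same read can be written without the raw new[] (which is never deleted above) and without any chance of the sizeof pitfall; a sketch using std::vector, mirroring the call shape from the question (_D3D and the file name come from the original code):

#include <fstream>
#include <iterator>
#include <vector>
#include <cstdint>
#include "WICTextureLoader.h"  // DirectXTK

std::ifstream file("image.png", std::ios::binary);
std::vector<uint8_t> buffer((std::istreambuf_iterator<char>(file)),
                            std::istreambuf_iterator<char>());

ID3D11ShaderResourceView* srv = nullptr;
HRESULT hr = DirectX::CreateWICTextureFromMemory(
    _D3D->GetDevice(), _D3D->GetDeviceContext(),
    buffer.data(), buffer.size(),  // byte count comes from the vector, not sizeof
    nullptr, &srv, 0);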

Save BIO into char* (from SMIME_write_CMS)

I want to save (pipe/copy) a BIO into a char array. When I know the size in advance it works, but otherwise it doesn't.
For example, I can wrap the content of my char* in a BIO like this:

const unsigned char* data = ...
myBio = BIO_new_mem_buf((void*)data, strlen((const char*)data));

But when I try to use SMIME_write_CMS, which takes the BIO I created before as its output, it doesn't work:
const int SIZE = 50000;
unsigned char *temp = malloc(SIZE);
memset(temp, 0, SIZE);

out = BIO_new_mem_buf((void*)temp, SIZE);
if (!out) {
    NSLog(@"Couldn't create new file!");
    assert(false);
}

int finished = SMIME_write_CMS(out, cms, in, flags);
if (!finished) {
    NSLog(@"SMIME write CMS didn't succeed!");
    assert(false);
}
printf("cms encrypted: %s\n", temp);
NSLog(@"All succeeded!");
The OpenSSL reference uses direct file output for the BIO. That works, but I can't use BIO_new_file() from my Objective-C code... :-/
out = BIO_new_file("smencr.txt", "w");
if (!out)
    goto err;

/* Write out S/MIME message */
if (!SMIME_write_CMS(out, cms, in, flags))
    goto err;
Do you guys have any suggestions?
I would suggest trying SIZE-1; that way you are guaranteed that the buffer stays NUL-terminated. Otherwise it is possible that the write is simply overrunning the buffer:

out = BIO_new_mem_buf((void*)temp, SIZE-1);

Let me know if that helps.
Edit:
A BIO created with BIO_new_mem_buf() is a read-only buffer, so you cannot write to it. If you want to write to memory, use:
BIO *bio = BIO_new(BIO_s_mem());
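To get the bytes back out of a writable memory BIO afterwards, BIO_get_mem_data should do it; a sketch reusing cms, in, and flags from the question:

#include <openssl/bio.h>
#include <openssl/cms.h>

/* Write the CMS structure into a writable memory BIO. */
BIO *out = BIO_new(BIO_s_mem());
if (!SMIME_write_CMS(out, cms, in, flags)) {
    /* handle the error */
}

/* Borrow a pointer to the accumulated bytes; the BIO still owns them. */
char *encrypted = NULL;
long len = BIO_get_mem_data(out, &encrypted);
/* ... use encrypted[0..len-1] ... */
BIO_free(out);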

Memory Issues with cvShowImage and Kinect SDK: Skeletal Viewer

I'm using cvSetData to get the RGB frame into a format I can use with OpenCV.
I modified the SkeletalViewer sample slightly to produce the RGB stream:
void CSkeletalViewerApp::Nui_GotVideoAlert( )
{
    const NUI_IMAGE_FRAME * pImageFrame = NULL;
    IplImage* kinectColorImage = cvCreateImage(cvSize(640,480), IPL_DEPTH_8U, 4);

    HRESULT hr = NuiImageStreamGetNextFrame(
        m_pVideoStreamHandle,
        0,
        &pImageFrame );

    if( FAILED( hr ) )
    {
        return;
    }

    NuiImageBuffer * pTexture = pImageFrame->pFrameTexture;
    KINECT_LOCKED_RECT LockedRect;
    pTexture->LockRect( 0, &LockedRect, NULL, 0 );

    if( LockedRect.Pitch != 0 )
    {
        BYTE * pBuffer = (BYTE*) LockedRect.pBits;
        m_DrawVideo.DrawFrame( (BYTE*) pBuffer );
        cvSetData(kinectColorImage, (BYTE*) pBuffer, kinectColorImage->widthStep);
        cvShowImage("Color Image", kinectColorImage);
        //cvReleaseImage( &kinectColorImage );
        cvWaitKey(10);
    }
    else
    {
        OutputDebugString( L"Buffer length of received texture is bogus\r\n" );
    }

    cvReleaseImage(&kinectColorImage);
    NuiImageStreamReleaseFrame( m_pVideoStreamHandle, pImageFrame );
}
With the cvReleaseImage call, I get a cvException error (not exactly sure which one, as it didn't say). Without cvReleaseImage, the RGB video runs in an OpenCV window but eventually crashes because it runs out of memory.
How should I release the image properly?
Just solved this problem.
After a bunch of sleuthing with breakpoints and the debugger, it appears the problem has to do with the pointers used in cvSetData. My best guess is that Nui_GotVideoAlert() updates the address pointed to by pBuffer before cvReleaseImage is called, and cvSetData never copies the bytes from that address.
What happens then is that cvReleaseImage is called on an address that is no longer valid.
I fixed this by declaring kinectColorImage at the top of NuiImpl.cpp, calling cvSetData in Nui_GotVideoAlert(), and only calling cvReleaseImage in the Nui_Uninit() method. This way kinectColorImage is just updated each frame instead of a new IplImage being created on every call to Nui_GotVideoAlert().
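In outline, that restructuring looks something like this (a sketch; Nui_Init/Nui_Uninit follow the SkeletalViewer sample's structure):

// NuiImpl.cpp (sketch): one IplImage header shared across all frames.
static IplImage* kinectColorImage = NULL;

void CSkeletalViewerApp::Nui_Init()
{
    // Allocate once, up front.
    kinectColorImage = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 4);
    // ... rest of the sample's initialization ...
}

void CSkeletalViewerApp::Nui_GotVideoAlert()
{
    // ... lock the Kinect frame as before, then:
    cvSetData(kinectColorImage, (BYTE*)LockedRect.pBits, kinectColorImage->widthStep);
    cvShowImage("Color Image", kinectColorImage);
    cvWaitKey(10);
    // ... release the Kinect frame; no cvReleaseImage here ...
}

void CSkeletalViewerApp::Nui_Uninit()
{
    // Release the image exactly once, on shutdown.
    cvReleaseImage(&kinectColorImage);
}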
That's strange. As far as I know, cvReleaseImage releases both the image header and the image data. I ran the piece of code below, and in this particular example cvReleaseImage does not free the buffer that contains the data; there I didn't use cvSetData but just updated the pointer to the image data. If you uncomment the commented lines and comment out the ones just below each, the program still runs, but you'll get memory leaks. I used OpenCV 2.2 (this is the legacy interface).
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdlib.h>

#define NLOOPS 1000

int main(void){
    int i, j;
    char *buff = (char *) malloc( sizeof(char) * 3 * 640 * 480 );
    for( i = 0; i < 640 * 480 * 3; i++ ) buff[i] = 128;
    j = 0;
    while( j++ < NLOOPS ){
        IplImage *im = cvCreateImage(cvSize(640,480), IPL_DEPTH_8U, 3);
        //cvSetData(im, buff, im->widthStep); /* ---> with this version you'll get memory leaks; comment the line below */
        im->imageData = buff;
        cvWaitKey(4);
        cvShowImage("kk", im);
        //cvReleaseImageHeader(&im); /* ---> with this version you'll get memory leaks; comment the line below */
        cvReleaseImage(&im);
    }
    free(buff);
    return 0;
}
