I am very new to OpenCV. I noticed the following code has a memory leak:
IplImage *img, *img_dest;
img = cvLoadImage("..\\..\\Sunset.jpg", CV_LOAD_IMAGE_COLOR);
while(1) // to make the mem leak obvious
{
    img_dest = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
    img_dest = cvCloneImage(img);
    cvReleaseImage( &img_dest );
}
cvReleaseImage( &img );
How can I release the now-unreferenced data? And is there an easy way to make a clean copy of an IPL image (short of writing a loop that copies the data element by element)?
For your memory leak issue:
cvCreateImage allocates a memory block A for the image, and cvCloneImage allocates a second block B (copying whatever img contains, as in your code). The assignment img_dest = cvCloneImage(img) overwrites the only pointer to block A, so cvReleaseImage(&img_dest) deallocates only block B; block A is left unreferenced but never freed.
For your IPL Image copying:
Allocate a destination image yourself and use cvCopy; I don't see any difficulty in using it, and it is safe and efficient (see the sketch below).
If you wish to declare an IPL image header without allocating the data bytes that store the pixel values, use cvCreateImageHeader instead. I would advise you to spend some time mastering cvCreateImage, cvCreateImageHeader, cvCreateData, cvReleaseImage, cvReleaseImageHeader, cvReleaseImageData and cvCloneImage.
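For illustration, here is a minimal leak-free rewrite of the loop from the question using cvCopy (a finite loop is used here so the releases are actually reached):

IplImage *img = cvLoadImage("..\\..\\Sunset.jpg", CV_LOAD_IMAGE_COLOR);

// Allocate the destination once; cvCopy requires matching size, depth and channels.
IplImage *img_dest = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);

for (int i = 0; i < 1000; i++)
{
    cvCopy(img, img_dest, NULL); // copies pixel data only, no new allocation
}

// Pair every cvCreateImage/cvLoadImage with exactly one cvReleaseImage.
cvReleaseImage(&img_dest);
cvReleaseImage(&img);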
I know I can create UIImages with [UIImage imageNamed:] or other UIImage methods. I use Image I/O to create GIF animations in my app, and to resize images too, since it's super fast!
In the WWDC 2012 session iOS App Performance: Graphics and Animations, an Apple engineer suggests that I shouldn't cache images myself because image caching is handled by the system. According to Apple's Image I/O Programming Guide, one of the features Image I/O provides is
Effective caching
So I decided to let Image I/O cache all the images for me. There is a key named kCGImageSourceShouldCache among the Image Source Option Dictionary Keys, whose description says:
Whether the image should be cached in a decoded form.
I believe that if I create an image source from a URL, the images created from it later are cached. But what if the image source is created from a CFDataRef object? That's just a pointer to some image data; unlike a URL, it doesn't identify a unique resource. Does Image I/O still cache images in this case? Thanks.
Here is the code I used to create the image:
CFDictionaryRef myOptions = NULL;
CFStringRef myKeys[2];
CFTypeRef myValues[2];
myKeys[0] = kCGImageSourceShouldCache;
myValues[0] = (CFTypeRef)kCFBooleanTrue;
myKeys[1] = kCGImageSourceShouldAllowFloat;
myValues[1] = (CFTypeRef)kCFBooleanTrue;
myOptions = CFDictionaryCreate(NULL, (const void **)myKeys,
                               (const void **)myValues, 2,
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
CGImageSourceRef imageSource = CGImageSourceCreateWithData(imageData, myOptions);
CFRelease(myOptions);
CGImageRef myImage = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
CFRelease(imageSource);
After some experiments with Instruments, I finally figured this out: if the caching option is turned on, image decoding happens only once, the first time the image needs to be drawn. As long as I keep a reference to the image, the decoded version stays cached. The source of the image makes no difference.
I have some well-known steps in my program:
CreateBuffer
Create..View
CSSet..Views
Dispatch
At which step is the data copied to the GPU?
The reason they downvoted is probably that it looks as if you didn't put any effort into a quick Google search first.
Answer: DirectX usually transfers data from system memory into video memory when the creation methods are called. An example of a creation method is ID3D11Device::CreateBuffer. This method takes a pointer to a description of the initial data (a D3D11_SUBRESOURCE_DATA structure holding the system-memory location) so the data can be copied from system RAM to video RAM. If that pointer is NULL, the method just sets the space aside so you can copy the data in later.
Example:
If you create a dynamic vertex buffer and don't pass the data in at creation time, you will have to use Map/Unmap to copy the data into video memory later (see the sketch after the snippet below).
// Fill in a buffer description.
D3D11_BUFFER_DESC bufferDesc;
bufferDesc.Usage = D3D11_USAGE_DYNAMIC;
bufferDesc.ByteWidth = sizeof(Vertex_dynamic) * m_iHowManyVertices;
bufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
bufferDesc.MiscFlags = 0;
bufferDesc.StructureByteStride = 0;
// Fill in the subresource data.
D3D11_SUBRESOURCE_DATA InitData;
InitData.pSysMem = &_vData[0];
InitData.SysMemPitch = 0;
InitData.SysMemSlicePitch = 0;
// Create the vertex buffer. The data is copied right now,
// because a non-null D3D11_SUBRESOURCE_DATA pointer is passed in.
m_pDxDevice->CreateBuffer(&bufferDesc, &InitData, &m_pDxVertexBuffer_PiecePos);
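And here is a minimal sketch of the Map/Unmap path for the case where no initial data is passed, reusing the names from the snippet above plus a hypothetical immediate device context m_pDxContext (not shown in the original code):

// Create the buffer empty: pass NULL instead of &InitData.
m_pDxDevice->CreateBuffer(&bufferDesc, NULL, &m_pDxVertexBuffer_PiecePos);

// Later, copy the vertex data into video memory with Map/Unmap.
D3D11_MAPPED_SUBRESOURCE mapped;
HRESULT hr = m_pDxContext->Map(m_pDxVertexBuffer_PiecePos, 0,
                               D3D11_MAP_WRITE_DISCARD, 0, &mapped);
if (SUCCEEDED(hr))
{
    memcpy(mapped.pData, &_vData[0], sizeof(Vertex_dynamic) * m_iHowManyVertices);
    m_pDxContext->Unmap(m_pDxVertexBuffer_PiecePos, 0);
}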
DirectX manages the memory for you and the data is copied to the GPU when it needs to be.
I want to clear sensitive data from memory in my iOS app.
In Windows I used to use SecureZeroMemory. Now, in iOS, I use plain old memset, but I'm a little worried the compiler might optimize it away:
https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/coding/771-BSI.html
code snippet:
NSData *someSensitiveData;
memset((void *)someSensitiveData.bytes, 0, someSensitiveData.length);
Paraphrasing 771-BSI (for the link, see the OP):
A way to avoid having the memset call optimized out by the compiler is to access the buffer again after the memset call in a way that would force the compiler not to optimize the location. This can be achieved by
*(volatile char*)buffer = *(volatile char*)buffer;
after the memset() call.
In fact, you could write a secure_memset() function
void* secure_memset(void *v, int c, size_t n) {
volatile char *p = v;
while (n--) *p++ = c;
return v;
}
(Code taken from 771-BSI. Thanks to Daniel Trebbien for pointing out a possible defect in the previous code proposal.)
Why does volatile prevent optimization? See https://stackoverflow.com/a/3604588/220060
UPDATE: Please also read Sensitive Data In Memory: if you have an adversary on your iOS system, you are already more or less screwed even before they try to read that memory. In summary, SecureZeroMemory() or secure_memset() do not really help.
The problem is that NSData is immutable, so you have no control over what happens to its internal buffer. If you control the buffer yourself, you can use dataWithBytesNoCopy:length: so that NSData acts as a mere wrapper; when finished, you can zero your own buffer.
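A minimal sketch of that idea in Objective-C, pairing dataWithBytesNoCopy:length:freeWhenDone: with the secure_memset() from above (the 256-byte size is made up for illustration):

#import <Foundation/Foundation.h>
#include <stdlib.h>

void *secure_memset(void *v, int c, size_t n); // as defined above

size_t length = 256;            // made-up buffer size
void *buffer = malloc(length);
// ... fill buffer with the sensitive data ...

// NSData wraps the buffer without copying; freeWhenDone:NO means we keep ownership.
NSData *sensitiveData = [NSData dataWithBytesNoCopy:buffer
                                             length:length
                                       freeWhenDone:NO];
// ... use sensitiveData ...

// When finished, wipe and free the buffer we still control.
secure_memset(buffer, 0, length);
free(buffer);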
Is it possible to assign the contents of a buffer to another buffer defined in OpenCL host code?
For example, consider the code below:
cl_mem buff;
cl_mem temp;
...
temp = buff;
Do I need to call clEnqueueBuffer() again?
You would need to copy buff to temp using clEnqueueCopyBuffer between your NDRange calls; note that temp = buff only copies the cl_mem handle, so afterwards both variables refer to the same buffer and no device data has been copied (see the sketch below). I don't recommend doing this if you can help it, though. There should be no reason why you can't use the same buffer for the NDRange calls unless you need it for something else in the meantime.
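For illustration, a minimal sketch of that copy, assuming an existing command queue named queue, that temp was created with clCreateBuffer as a separate buffer of at least size bytes, and that there are no events to wait on:

// Device-side copy of `size` bytes from buff into temp; no host round-trip.
cl_int err = clEnqueueCopyBuffer(queue,   // command queue
                                 buff,    // source buffer
                                 temp,    // destination buffer
                                 0, 0,    // source and destination offsets
                                 size,    // bytes to copy
                                 0, NULL, // no event wait list
                                 NULL);   // no event returned
if (err != CL_SUCCESS) {
    // handle the error
}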
The situation is this:
I've written a simple MovieClip replacement that converts an existing imported MovieClip to a sequence of BitmapData. This removes the requirement for Flash to render vector data in the MovieClip on each frame.
But BitmapData has a huge memory footprint. I've tried converting the BitmapData to a ByteArray and using the compress() method, which results in a significantly smaller footprint, but it has proven impractical: on each redraw I uncompress() the ByteArray, blit the data to the screen with setPixels(), then compress() the frame again. This works, but it is terribly slow.
So I was wondering if anybody else has an approach I could try. In Flash, is it possible to compress bitmap data in memory and quickly blit it to the screen?
I wonder how native animated GIFs work in Flash. Does Flash uncompress them to BitmapData behind the scenes, or is frame decompression done on the fly?
Perhaps there is an Alchemy project that attempts to blit compressed images?
Thanks for any advice you can offer :)
thienhaflash's response is good, but it has aged a year, and since then the Flash Player and AIR runtime have expanded their capabilities. Today I stumbled on this little tidbit in Adobe's AS3 guide: as of Player 11.3 there are native image compression techniques available. Here's a snippet:
// Compress a BitmapData object as a JPEG file.
var bitmapData:BitmapData = new BitmapData(640,480,false,0x00FF00);
var byteArray:ByteArray = new ByteArray();
bitmapData.encode(new Rectangle(0,0,640,480), new flash.display.JPEGEncoderOptions(), byteArray);
Not sure about the practicality for blitting but it's nice that it can be done natively.
For memory conservation you need to think twice before converting a MovieClip to a bitmap sequence. Is it really needed? Can you break things down? Several elements (like the background) are static or just move around, so why not cache a bitmap for each element instead of one big bitmap sequence?
I usually use an AnimatedBitmap (my name for the bitmap-sequence alternative to a MovieClip) only for small animated icons and other computation-heavy effects (like fire or smoke). Just break things down as much as you can!
As far as I know, there is no way to compress the memory used by a BitmapData while it sits in memory, and there is nothing related to Alchemy that could help reduce the memory used in this case.
Animated GIFs don't work natively in Flash; you will need a library for that. Search for the AnimatedGIF AS3 library from bytearray.com; that library just reads the GIF file as a raw ByteArray and converts it to an AnimatedBitmap, much like what you've done.
This is an old question, but there is recent info on it: Jackson Dunstan has experimented with BitmapData, and it turns out that BitmapData obtained from compressed sources will "deflate" after some time unused.
Here are the articles: http://jacksondunstan.com/articles/2112, and the two referred to at the beginning of it.
So you could absolutely do something like :
var byteArray:ByteArray = new ByteArray();
myBitmapData.encode(new Rectangle(0,0,640,480), new flash.display.JPEGEncoderOptions(), byteArray);
var loader:Loader = new Loader();
// Loader completion events fire on contentLoaderInfo, not on the Loader itself.
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, function(_e:Event):void{
    if(loader.content is Bitmap){
        myBitmapData.dispose();
        myBitmapData = Bitmap(loader.content).bitmapData;
    }
});
loader.loadBytes(byteArray);
I'm not sure it would work exactly as is, and you will definitely want to manage your memory more carefully, but now myBitmapData will be decompressed when you try to read from it, and re-compressed after you haven't used it for about ten seconds.