How to calculate memory consumption for my application in iOS - ios

My application displays high-resolution images and has some memory leaks, so I would like to measure the memory consumption around each statement to find the leaks.
Is there any method to calculate the memory used (in MB or KB)?
I need something like this:
// this is my method
+ (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage *) image {
//Run a method to calculate the memory(MB or KB) --- Before
CGImageRef imageRef = image.CGImage;
// Create a bitmap context to draw the uiimage into
CGContextRef context = [self newBitmapRGBA8ContextFromImage:imageRef];
if(!context) {
return NULL;
}
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGRect rect = CGRectMake(0, 0, width, height);
// Draw image into the context to get the raw image data
CGContextDrawImage(context, rect, imageRef);
// Get a pointer to the data
unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(context);
// Copy the data and release the memory (return memory allocated with new)
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
size_t bufferLength = bytesPerRow * height;
unsigned char *newBitmap = NULL;
if(bitmapData) {
newBitmap = (unsigned char *)malloc(sizeof(unsigned char) * bytesPerRow * height);
if(newBitmap) { // Copy the data
for(int i = 0; i < bufferLength; ++i) {
newBitmap[i] = bitmapData[i];
}
}
free(bitmapData);
} else {
NSLog(#"Error getting bitmap pixel data\n");
}
CGContextRelease(context);
//Run a method to calculate the memory(MB or KB) --- After
return newBitmap;
}
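In practice, Instruments (the Allocations and Leaks templates) is the usual tool for tracking down leaks, but for quick in-code checks at those Before/After markers you can ask Mach for the task's resident memory. A minimal sketch; the helper name logMemoryUsage and its placement are illustrative, not part of the original code:

#import <Foundation/Foundation.h>
#import <mach/mach.h>

// Sketch: log the app's current resident memory in MB.
// Call this at the "Before" marker, again at the "After" marker, and compare.
static void logMemoryUsage(NSString *label) {
    struct mach_task_basic_info info;
    mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;
    kern_return_t result = task_info(mach_task_self(),
                                     MACH_TASK_BASIC_INFO,
                                     (task_info_t)&info,
                                     &count);
    if (result == KERN_SUCCESS) {
        NSLog(@"%@: resident size = %.2f MB", label, info.resident_size / (1024.0 * 1024.0));
    } else {
        NSLog(@"%@: task_info failed (%d)", label, result);
    }
}

// Usage inside convertUIImageToBitmapRGBA8::
// logMemoryUsage(@"Before");
// ... existing code ...
// logMemoryUsage(@"After");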

Related

OpenCV detect corners of a pattern hidden in an image

I have to create a mobile application able to detect a hidden (standard) pattern in an image.
The purpose is to detect its corners and get some information from the image (like a link).
I'm focusing on iOS for the moment, but I don't know how to implement the pattern and recognize it with OpenCV.
So the first question is: how can I add hidden information to an image?
I found this library that implements steganography to hide some information in an image. Is this the right way?
The next step is to detect the image and its corners with the phone's camera. My idea is to create a standard pattern (like points or lines) to add to a .png image and use template matching to detect, during capture, the area where the pattern is present. But reading online I have seen that this technique is not the best for this problem.
I have successfully implemented the HSV conversion for color tracking following this tutorial, but I don't know how to proceed to the next step.
So, the second question is: how can I recognize a standard pattern and detect its corners in a frame captured with the camera?
This is the code that I use to convert the sample buffer to a UIImage:
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
uint8_t *yBuffer = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t yPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
uint8_t *cbCrBuffer = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
size_t cbCrPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
int bytesPerPixel = 4;
uint8_t *rgbBuffer = (uint8_t*)malloc(width * height * bytesPerPixel);
for(int y = 0; y < height; y++) {
uint8_t *rgbBufferLine = &rgbBuffer[y * width * bytesPerPixel];
uint8_t *yBufferLine = &yBuffer[y * yPitch];
uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];
for(int x = 0; x < width; x++) {
int16_t y = yBufferLine[x];
int16_t cb = cbCrBufferLine[x & ~1] - 128;
int16_t cr = cbCrBufferLine[x | 1] - 128;
uint8_t *rgbOutput = &rgbBufferLine[x*bytesPerPixel];
int16_t r = (int16_t)roundf( y + cr * 1.4 );
int16_t g = (int16_t)roundf( y + cb * -0.343 + cr * -0.711 );
int16_t b = (int16_t)roundf( y + cb * 1.765);
rgbOutput[0] = 0xff;
rgbOutput[1] = clamp(b);
rgbOutput[2] = clamp(g);
rgbOutput[3] = clamp(r);
}
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rgbBuffer, width, height, 8, width * bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:quartzImage];
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CGImageRelease(quartzImage);
free(rgbBuffer);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return image;
}
And this is for the HSV conversion:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection {
@autoreleasepool {
if (self.isProcessingFrame) {
return;
}
self.isProcessingFrame = YES;
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
cv::Mat matFrame = [self cvMatFromUIImage:image];
cv::cvtColor(matFrame, matFrame, CV_BGR2HSV);
cv::inRange(matFrame, cv::Scalar(0, 100,100,0), cv::Scalar(10,255,255,0), matFrame);
image = [self UIImageFromCVMat:matFrame];
// Convert to base64
NSData *imageData = UIImagePNGRepresentation(image);
NSString *encodedString = [imageData base64EncodedStringWithOptions:NSDataBase64Encoding64CharacterLineLength];
self.isProcessingFrame = NO;
}
}
I hope for some help, thanks!
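One possible direction for the corner-detection step, sketched as a suggestion rather than a tested solution: after the cv::inRange call above isolates the pattern, find the largest contour in the resulting mask and approximate it with approxPolyDP; a 4-point approximation gives the corners of a rectangular pattern. The name mask below stands for the binary Mat produced by inRange (matFrame in the code above).

// Sketch: extract the four corner points of the largest quadrilateral blob
// in a binary mask (the output of cv::inRange above).
cv::Mat maskCopy = mask.clone(); // findContours may modify its input
std::vector<std::vector<cv::Point> > contours;
cv::findContours(maskCopy, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

std::vector<cv::Point> corners;
double maxArea = 0.0;
for (size_t i = 0; i < contours.size(); i++) {
    double area = cv::contourArea(contours[i]);
    if (area <= maxArea) continue;
    std::vector<cv::Point> approx;
    // Approximate the contour; a 4-point polygon suggests a rectangular pattern.
    cv::approxPolyDP(contours[i], approx, 0.02 * cv::arcLength(contours[i], true), true);
    if (approx.size() == 4) {
        maxArea = area;
        corners = approx; // corner points in image (pixel) coordinates
    }
}
// corners now holds the pattern's corners (empty if no quadrilateral was found).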

Rotate CMSampleBuffer/CVPixelBuffer

I am currently attempting to change the orientation of a CMSampleBuffer by first converting it to a CVPixelBuffer and then using vImageRotate90_ARGB8888 to convert the buffer. The problem with my code is that when vImageRotate90_ARGB8888 executes, it crashes immediately. I know there are answers (like this one or this one), but all of these solutions fail to work in my case, and I really cannot find any type of error, or think of anything that would cause this behavior. My current code is below:
- (CVPixelBufferRef)rotateBuffer:(CMSampleBufferRef)sampleBuffer {
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t currSize = bytesPerRow * height * sizeof(unsigned char);
size_t bytesPerRowOut = 4 * height * sizeof(unsigned char);
OSType pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
unsigned char *outPixelData = (unsigned char *)malloc(currSize);
vImage_Buffer sourceBuffer = {baseAddress, height, width, bytesPerRow};
vImage_Buffer destinationBuffer = {outPixelData, width, height, bytesPerRowOut};
uint8_t rotation = kRotate90DegreesClockwise;
Pixel_8888 bgColor = {0, 0, 0, 0};
vImageRotate90_ARGB8888(&sourceBuffer, &destinationBuffer, rotation, bgColor, kvImageNoFlags); // Crash!
CVPixelBufferRef rotatedBuffer = NULL;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, destinationBuffer.width, destinationBuffer.height, pixelFormat, destinationBuffer.data, destinationBuffer.rowBytes, freePixelBufferData, NULL, NULL, &rotatedBuffer);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return rotatedBuffer;
}
void freePixelBufferData(void *releaseRefCon, const void *baseAddress) {
free((void *)baseAddress);
}
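One thing worth checking before the vImageRotate90_ARGB8888 call is the buffer's pixel format: the _ARGB8888 variant expects a 4-byte-per-pixel interleaved buffer, and a biplanar YUV frame straight from the camera would not be a valid source for it. A small guard, sketched as a sanity check rather than a confirmed fix, to drop into rotateBuffer: right after the base address is locked:

// Sketch: bail out early if the pixel buffer is not a 4-byte interleaved format,
// since vImageRotate90_ARGB8888 cannot operate on planar YUV data.
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
if (format != kCVPixelFormatType_32BGRA && format != kCVPixelFormatType_32ARGB) {
    NSLog(@"Unexpected pixel format '%c%c%c%c' for vImageRotate90_ARGB8888",
          (char)(format >> 24), (char)(format >> 16), (char)(format >> 8), (char)format);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return NULL;
}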

Why do I see a slightly zoomed version of the image on the iPhone camera compared to what is received at the backend server?

I am capturing the image from the iOS camera on the iPhone 7 and sending the captured camera image to the backend for processing.
When I save the image at the backend, I see that the backend image has a lot more content than what was visible on the iOS screen when focusing on the object.
The server image is a zoomed-out version with a little extra content on the horizontal and vertical axes on both sides. I verified that I am not doing any explicit zooming or anything like that in the Objective-C code.
The question is: what is causing this difference between what I see on the screen and what gets received at the backend?
The code that I use to capture the image is
-(UIImage *) imageFromSamplePlanerPixelBuffer:(CMSampleBufferRef) sampleBuffer{
@autoreleasepool {
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
uint8_t *baseAddress = (uint8_t*) CVPixelBufferGetBaseAddress(imageBuffer);
uint8_t *yBuffer = (uint8_t*) CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
uint8_t *cbCrBuffer = (uint8_t*) CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
size_t yPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
size_t cbCrPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
int bytesPerPixel = 4;
uint8_t *rgbBuffer = (uint8_t*)malloc(width * height * bytesPerPixel);
for(int y = 0; y < height; y++)
{
uint8_t *rgbBufferLine = &rgbBuffer[y * width * bytesPerPixel];
uint8_t *yBufferLine = &yBuffer[y * yPitch];
uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];
for(int x = 0; x < width; x++)
{
int16_t y = yBufferLine[x];
int16_t cb = cbCrBufferLine[x & ~1] - 128;
int16_t cr = cbCrBufferLine[x | 1] - 128;
uint8_t *rgbOutput = &rgbBufferLine[x*bytesPerPixel];
int16_t r = (int16_t)roundf( y + cr * 1.4 );
int16_t g = (int16_t)roundf( y + cb * -0.343 + cr * -0.711 );
int16_t b = (int16_t)roundf( y + cb * 1.765);
rgbOutput[0] = 0xFF;
rgbOutput[1] = clamp(b);
rgbOutput[2] = clamp(g);
rgbOutput[3] = clamp(r);
}
}
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(rgbBuffer, width, height, 8, width * bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage scale:0.5f orientation:UIImageOrientationUp];
NSData *imgData = UIImageJPEGRepresentation(image, 0.8);
NSLog(#"blabla %lu", (unsigned long)[imgData length]);
// Release the Quartz image
free(rgbBuffer);
CGImageRelease(quartzImage);
return (image);
}
}
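One possible explanation for the mismatch (an assumption, since the preview setup is not shown): if the on-screen preview layer uses AVLayerVideoGravityResizeAspectFill, the layer crops the frame for display while the full sensor frame is what gets converted and uploaded, so the backend sees more content than the screen did. If that turns out to be the case, the captured image can be cropped to the preview's visible region; a sketch, where previewLayer stands for the app's AVCaptureVideoPreviewLayer:

// Sketch: crop the captured UIImage to the region that was visible in the preview.
// `previewLayer` is an assumption here; it is not part of the code above.
// Note: the returned rect is normalized to the unrotated video frame, so
// orientation handling is omitted in this sketch.
CGRect visibleRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
CGImageRef fullImage = image.CGImage;
CGRect cropRect = CGRectMake(visibleRect.origin.x * CGImageGetWidth(fullImage),
                             visibleRect.origin.y * CGImageGetHeight(fullImage),
                             visibleRect.size.width * CGImageGetWidth(fullImage),
                             visibleRect.size.height * CGImageGetHeight(fullImage));
CGImageRef croppedRef = CGImageCreateWithImageInRect(fullImage, cropRect);
UIImage *visibleImage = [UIImage imageWithCGImage:croppedRef
                                            scale:image.scale
                                      orientation:image.imageOrientation];
CGImageRelease(croppedRef);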

Picture loaded from my server has wrong colors on iPad app

I am developing an iPad app which presents pictures from a photographer. Those photos are uploaded to a web server and served directly through the app, where they are downloaded and displayed using the method below:
if([[NSFileManager defaultManager] fileExistsAtPath:[url path]]){
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
CGImageRef cgImage = nil;
if(source){
cgImage = CGImageSourceCreateImageAtIndex(source, 0, (CFDictionaryRef)dict);
}
UIImage *retImage = [UIImage imageWithCGImage:cgImage];
if(cgImage){
CGImageRelease(cgImage);
}
if(source){
CFRelease(source);
}
return retImage;
}
I can observe a serious change in the photos' colours between the original picture (which looks the same whether displayed from disk or from the web on my Mac and the photographer's Mac) and the iPad (the result is wrong in the app and even in Safari).
After some searching I found posts explaining that iDevices do not use the embedded color profile, so I headed in that direction. The photos are saved using the following info:
I found in some articles (for example this link from imageoptim or analogsenses) that I should save the picture for device export by converting it to sRGB without embedding the color profile, but I can't figure out how to do that. Each time I tried (I don't have Photoshop, so I used command-line ImageMagick), the resulting picture has the following information and is still not displayed correctly on my iPad (or any other iPads I've tested):
Here is an example of a picture that does not display correctly on the iPhone or iPad, but does on the web.
I would like to transform it so it displays correctly; any idea would be really welcome :)
[EDIT] I have succeeded in obtaining a correct image using the "Save for Web" options of Photoshop with the following parameters:
But I'm still unable to apply those settings automatically to all my pictures.
To read an image, just use:
UIImage *image = [UIImage imageWithContentsOfFile:path];
As for the color profile issue, try the sips command-line tool to fix the image files. Something like:
mkdir converted
sips -m "/System/Library/ColorSync/Profiles/sRGB Profile.icc" *.JPG --out converted
You can first get the color space through CGImage.
@property(nonatomic, readonly) CGImageRef CGImage
CGColorSpaceRef CGImageGetColorSpace (
CGImageRef image
);
And depending on the color space, apply a format conversion. So to get the color space of an image, you'd do:
CGColorSpaceRef colorspace = CGImageGetColorSpace([myUIImage CGImage]);
Note: make sure to follow the get/create/copy rules for CG objects.
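For example, a minimal sketch of such a conversion: redraw the image into an sRGB bitmap context so the decoded pixels no longer depend on the embedded profile (this assumes iOS 9+ for kCGColorSpaceSRGB; myUIImage is the image from above):

// Sketch: redraw a CGImage into an sRGB context to normalize its color space.
CGImageRef sourceImage = [myUIImage CGImage];
size_t width = CGImageGetWidth(sourceImage);
size_t height = CGImageGetHeight(sourceImage);
CGColorSpaceRef srgb = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, srgb,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), sourceImage);
CGImageRef convertedRef = CGBitmapContextCreateImage(ctx);
UIImage *sRGBImage = [UIImage imageWithCGImage:convertedRef];
CGImageRelease(convertedRef);
CGContextRelease(ctx);
CGColorSpaceRelease(srgb);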
Color conversion to RGB8 (this can also be applied to RGB16 or RGB32 by changing the bits per component in the method newBitmapRGBA8ContextFromImage):
// Create a bitmap
unsigned char *bitmap = [ImageHelper convertUIImageToBitmapRGBA8:image];
// Create a UIImage using the bitmap
UIImage *imageCopy = [ImageHelper convertBitmapRGBA8ToUIImage:bitmap withWidth:width withHeight:height];
// Display the image copy on the GUI
UIImageView *imageView = [[UIImageView alloc] initWithImage:imageCopy];
ImageHelper.h
#import <Foundation/Foundation.h>
@interface ImageHelper : NSObject {
}
/** Converts a UIImage to RGBA8 bitmap.
@param image - a UIImage to be converted
@return a RGBA8 bitmap, or NULL if any memory allocation issues. Cleanup memory with free() when done.
*/
+ (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage *)image;
/** A helper routine used to convert a RGBA8 to UIImage
@return a new context that is owned by the caller
*/
+ (CGContextRef) newBitmapRGBA8ContextFromImage:(CGImageRef)image;
/** Converts a RGBA8 bitmap to a UIImage.
@param buffer - the RGBA8 unsigned char * bitmap
@param width - the number of pixels wide
@param height - the number of pixels tall
@return a UIImage that is autoreleased or nil if memory allocation issues
*/
+ (UIImage *) convertBitmapRGBA8ToUIImage:(unsigned char *)buffer
withWidth:(int)width
withHeight:(int)height;
@end
ImageHelper.m
#import "ImageHelper.h"
@implementation ImageHelper
+ (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage *) image {
CGImageRef imageRef = image.CGImage;
// Create a bitmap context to draw the uiimage into
CGContextRef context = [self newBitmapRGBA8ContextFromImage:imageRef];
if(!context) {
return NULL;
}
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGRect rect = CGRectMake(0, 0, width, height);
// Draw image into the context to get the raw image data
CGContextDrawImage(context, rect, imageRef);
// Get a pointer to the data
unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(context);
// Copy the data and release the memory (return memory allocated with new)
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
size_t bufferLength = bytesPerRow * height;
unsigned char *newBitmap = NULL;
if(bitmapData) {
newBitmap = (unsigned char *)malloc(sizeof(unsigned char) * bytesPerRow * height);
if(newBitmap) { // Copy the data
for(int i = 0; i < bufferLength; ++i) {
newBitmap[i] = bitmapData[i];
}
}
free(bitmapData);
} else {
NSLog(#"Error getting bitmap pixel data\n");
}
CGContextRelease(context);
return newBitmap;
}
+ (CGContextRef) newBitmapRGBA8ContextFromImage:(CGImageRef) image {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
uint32_t *bitmapData;
size_t bitsPerPixel = 32;
size_t bitsPerComponent = 8;
size_t bytesPerPixel = bitsPerPixel / bitsPerComponent;
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
size_t bytesPerRow = width * bytesPerPixel;
size_t bufferLength = bytesPerRow * height;
colorSpace = CGColorSpaceCreateDeviceRGB();
if(!colorSpace) {
NSLog(#"Error allocating color space RGB\n");
return NULL;
}
// Allocate memory for image data
bitmapData = (uint32_t *)malloc(bufferLength);
if(!bitmapData) {
NSLog(#"Error allocating memory for bitmap\n");
CGColorSpaceRelease(colorSpace);
return NULL;
}
//Create bitmap context
context = CGBitmapContextCreate(bitmapData,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast); // RGBA
if(!context) {
free(bitmapData);
NSLog(#"Bitmap context not created");
}
CGColorSpaceRelease(colorSpace);
return context;
}
+ (UIImage *) convertBitmapRGBA8ToUIImage:(unsigned char *) buffer
withWidth:(int) width
withHeight:(int) height {
size_t bufferLength = width * height * 4;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
if(colorSpaceRef == NULL) {
NSLog(#"Error allocating color space");
CGDataProviderRelease(provider);
return nil;
}
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpaceRef,
bitmapInfo,
provider, // data provider
NULL, // decode
YES, // should interpolate
renderingIntent);
uint32_t* pixels = (uint32_t*)malloc(bufferLength);
if(pixels == NULL) {
NSLog(#"Error: Memory not allocated for bitmap");
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(iref);
return nil;
}
CGContextRef context = CGBitmapContextCreate(pixels,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpaceRef,
bitmapInfo);
if(context == NULL) {
NSLog(#"Error context not created");
free(pixels);
}
UIImage *image = nil;
if(context) {
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Support both iPad 3.2 and iPhone 4 Retina displays with the correct scale
if([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
float scale = [[UIScreen mainScreen] scale];
image = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
} else {
image = [UIImage imageWithCGImage:imageRef];
}
CGImageRelease(imageRef);
CGContextRelease(context);
}
CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(iref);
CGDataProviderRelease(provider);
if(pixels) {
free(pixels);
}
return image;
}
@end
@PhilippeAuriach
I think you might have a problem with [UIImage imageWithCGImage:cgImage]. My suggestion is to use [UIImage imageWithContentsOfFile:path] instead.
The code below might help you.
if([[NSFileManager defaultManager] fileExistsAtPath:[url path]]){
//Provide image path here...
UIImage *image = [UIImage imageWithContentsOfFile:path];
if(image){
return image;
}else{
//Return default image
return image;
}
}

Unable to change the colour of pixel in UIImage

I have already solved my problem by using different code; I just want to know what is wrong with the following one.
I wanted to change the colour of every pixel in a UIImage using bitmap data. My code is as follows:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
UIImage *image = self.imageViewMain.image;
CGImageRef imageRef = image.CGImage;
NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
char *pixels = (char *)[data bytes];
// this is where we manipulate the individual pixels
for(int i = 1; i < [data length]; i += 3)
{
int r = i;
int g = i+1;
int b = i+2;
int a = i+3;
pixels[r] = 0; // eg. remove red
pixels[g] = pixels[g];
pixels[b] = pixels[b];
pixels[a] = pixels[a];
}
// create a new image from the modified pixel data
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, [data length], NULL);
CGImageRef newImageRef = CGImageCreate (
width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorspace,
bitmapInfo,
provider,
NULL,
false,
kCGRenderingIntentDefault
);
// the modified image
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
// cleanup
free(pixels);
CGImageRelease(imageRef);
CGColorSpaceRelease(colorspace);
CGDataProviderRelease(provider);
CGImageRelease(newImageRef);
}
But when this code runs, I get an EXC_BAD_ACCESS error, shown in the following image:
And here is some more information from debugging:
What is it that I'm missing or doing wrong?
Try allocating memory for the pixels array instead of writing into the buffer owned by the NSData, like in the following code:
char *pixels = (char *)malloc(data.length);
memcpy(pixels, [data bytes], data.length);
When pixels is no longer needed, release this memory by calling free(pixels). (Writing into [data bytes] and then calling free() on a pointer that was never malloc'd is a likely source of the EXC_BAD_ACCESS.)
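Putting that together, a minimal sketch of the manipulation step on the copied buffer (an assumption here: the data is a 4-byte-per-pixel RGBA/BGRA layout, so the loop steps by 4 rather than the original 3, and the buffer is only freed once nothing else references it):

// Sketch: copy the pixel data and modify the copy, stepping one pixel (4 bytes) at a time.
NSUInteger length = [data length];
char *pixels = (char *)malloc(length);
memcpy(pixels, [data bytes], length);

for (NSUInteger i = 0; i + 3 < length; i += 4) {
    pixels[i] = 0;       // e.g. zero the first component of each pixel
    // pixels[i + 1], pixels[i + 2], pixels[i + 3] are left unchanged
}

// ...create the CGDataProvider / CGImageCreate from `pixels` as in the question,
// then free(pixels) only after the new image no longer needs the buffer.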
