I have a UIImage which is loaded from a CIImage with:
tempImage = [UIImage imageWithCIImage:ciImage];
I need to crop tempImage to a specific CGRect, and the only way I know how to do this is by using a CGImage. The problem is that the iOS 6.0 documentation says:
CGImage
If the UIImage object was initialized using a CIImage object, the value of the property is NULL.
So, how do I convert from a CIImage to a CGImage?
I'm using this code, but I have a memory leak (and I can't work out where):
+ (UIImage *)UIImageFromCIImage:(CIImage *)ciImage {
    CGSize size = ciImage.extent.size;
    UIGraphicsBeginImageContext(size);

    CGRect rect;
    rect.origin = CGPointZero;
    rect.size = size;

    UIImage *remImage = [UIImage imageWithCIImage:ciImage];
    [remImage drawInRect:rect];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    remImage = nil;
    ciImage = nil;
    return result;
}
Swift 3, Swift 4 and Swift 5
Here is a nice little function to convert a CIImage to CGImage in Swift.
func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
        return cgImage
    }
    return nil
}
On a desktop (macOS) or a TV (tvOS), you would typically use:
let ctx = CIContext(options: [.useSoftwareRenderer: false])
let cgImage = ctx.createCGImage(output, from: output.extent)
Many other hint options are available on CIContextOption, such as .allowLowPower, .cacheIntermediates, .highQualityDownsample, the priority options, and so on.
Notes:
CIContext(options: nil) will use a software renderer and can be quite slow. To improve performance, use CIContext(options: [CIContextOption.useSoftwareRenderer: false]); this forces operations to run on the GPU and can be much faster.
If you use a CIContext more than once, cache it, as Apple recommends.
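For example, caching could be as simple as a shared context created once (a minimal sketch; the SharedCI name and the option choice are mine, not from the answer above):

import CoreImage
import UIKit

// Hypothetical helper that keeps one CIContext alive for the app's lifetime,
// since repeatedly creating contexts is expensive.
enum SharedCI {
    static let context = CIContext(options: [.useSoftwareRenderer: false])
}

func cgImage(from ciImage: CIImage) -> CGImage? {
    // Reuses the cached context instead of allocating a new one per call.
    return SharedCI.context.createCGImage(ciImage, from: ciImage.extent)
}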
See the CIContext documentation for createCGImage:fromRect:
CGImageRef img = [myContext createCGImage:ciImage fromRect:[ciImage extent]];
From an answer to a similar question: https://stackoverflow.com/a/10472842/474896
Also since you have a CIImage to begin with, you could use CIFilter to actually crop your image.
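For instance, a small Swift sketch of the crop-first approach, assuming cropRect is the CGRect you want, expressed in Core Image's bottom-left coordinate space:

import CoreImage
import UIKit

// Sketch: crop the CIImage directly, then render only the cropped region.
// `cropRect` is assumed to be in Core Image coordinates (origin at the bottom-left).
func croppedUIImage(from ciImage: CIImage, cropRect: CGRect) -> UIImage? {
    let cropped = ciImage.cropped(to: cropRect)   // imageByCroppingToRect: in Objective-C
    let context = CIContext(options: nil)
    guard let cgImage = context.createCGImage(cropped, from: cropped.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}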
After some googling, I found this method, which converts a CMSampleBufferRef to a CGImage:
+ (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    // Get information about the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!

    return newImage;
}
(but I closed the tab so I don't know where I got it from)
Related
I have an iOS application. I take a picture with the camera, save it, and then crop it with a mask. The first image from the camera is saved correctly, but when I apply the mask, the result is saved at a low resolution and the image is stretched.
I'm using this Objective-C code in my application to apply the mask.
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)mask_Image {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [mask_Image CGImage];

    // create a bitmap graphics context the size of the mask image
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, mask_Image.size.width, mask_Image.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    if (mainViewContentContext == NULL)
        return NULL;

    CGFloat widthratio = 0;
    CGFloat heightratio = 0;
    widthratio = mask_Image.size.width / image.size.width;
    heightratio = mask_Image.size.height / image.size.height;

    CGRect rect1 = {{0, 0}, {mask_Image.size.width, mask_Image.size.height}};
    CGRect rect2 = {{-((image.size.width * widthratio) - mask_Image.size.width) / 2,
                     -((image.size.height * heightratio) - mask_Image.size.height) / 2},
                    {image.size.width * widthratio, image.size.height * heightratio}};

    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);

    // Create a CGImageRef of the main view bitmap content, and then
    // release that bitmap context
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);

    // save a PNG representation to the photo album, then return the image
    NSData *imageData = UIImagePNGRepresentation(theImage);
    UIImage *pngImage = [UIImage imageWithData:imageData];
    UIImageWriteToSavedPhotosAlbum(pngImage, nil, nil, nil);
    return theImage;
}
I want to get this to come out correctly, like this:
I take my picture with the camera:
I apply my mask image to the camera image in the position I want:
And I get my cropped, masked image:
How can I get the correct masked image?
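For what it's worth, the stretching likely comes from scaling width and height by two different ratios when building rect2. A hedged Swift sketch of an aspect-fill rect (one scale factor, centered; the names are illustrative only, not from the code above):

import UIKit

// Hypothetical helper: compute an aspect-fill rect so the photo is not stretched
// when drawn into a context the size of the mask.
func aspectFillRect(for imageSize: CGSize, in maskSize: CGSize) -> CGRect {
    // Use a single scale factor instead of separate width/height ratios,
    // otherwise the image is stretched to fit both dimensions.
    let scale = max(maskSize.width / imageSize.width,
                    maskSize.height / imageSize.height)
    let scaledSize = CGSize(width: imageSize.width * scale,
                            height: imageSize.height * scale)
    // Center the scaled image behind the mask.
    return CGRect(x: (maskSize.width - scaledSize.width) / 2,
                  y: (maskSize.height - scaledSize.height) / 2,
                  width: scaledSize.width,
                  height: scaledSize.height)
}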
We need to convert the code below from Objective-C to Swift.
Question:
There are a few function calls to release objects, e.g., CGImageRelease(newImage). Is it safe to assume no analog is needed for the Swift version since all the memory management is automatic, or do you need to free up memory in Swift as well?
Objective-C code:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:orientation];
CGImageRelease(newImage);
Swift version so far:
private func turnBufferToPNGImage(imageSampleBuffer: CMSampleBufferRef, scale: CGFloat) -> UIImage {
    let imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer)

    // Lock base address
    CVPixelBufferLockBaseAddress(imageBuffer, 0)

    // Set properties for CGBitmapContext
    let pixelData = CVPixelBufferGetBaseAddress(imageBuffer)
    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    // Create CGBitmapContext
    let newContext = CGBitmapContextCreate(pixelData, width, height, 8, bytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst.rawValue)

    // Create image from context
    let rawImage = CGBitmapContextCreateImage(newContext)!
    let newImage = UIImage(CGImage: rawImage, scale: scale, orientation: .Up)

    // Unlock base address
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

    // Return image
    return newImage
}
Per the docs:
Core Foundation types are automatically imported as full-fledged Swift classes. Wherever memory management annotations have been provided, Swift automatically manages the memory of Core Foundation objects, including Core Foundation objects that you instantiate yourself
So you can omit the calls.
No, you don't need to release Core Foundation objects, because Apple says:
The Core Foundation CFTypeRef type completely remaps to the AnyObject type.
And
Core Foundation objects returned from annotated APIs are automatically memory managed in Swift—you do not need to invoke the CFRetain, CFRelease, or CFAutorelease functions yourself.
See the documentation for details.
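To make this concrete, here is a minimal Swift sketch (modern API names assumed; not from the question's code): the CGColorSpace, CGContext, and CGImage below are all Core Foundation objects, yet no release calls appear anywhere:

import CoreGraphics

// Minimal illustration: Core Foundation objects from annotated APIs are
// managed automatically, so no CGColorSpaceRelease/CGContextRelease/CGImageRelease.
func makeSolidImage(width: Int, height: Int) -> CGImage? {
    let colorSpace = CGColorSpaceCreateDeviceRGB()            // no CGColorSpaceRelease needed
    guard let context = CGContext(data: nil, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: 0,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    context.setFillColor(red: 1, green: 0, blue: 0, alpha: 1)
    context.fill(CGRect(x: 0, y: 0, width: width, height: height))
    return context.makeImage()                                 // no CGContextRelease / CGImageRelease
}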
I am working with AVFoundation. I am attempting to save a particular output CMSampleBufferRef as a UIImage in a variable. I am using the Manatee Works sample code, and it uses kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange for kCVPixelBufferPixelFormatTypeKey:
NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
But when I save the image, the output is just nil, or whatever the background of the image view is. I also tried not setting the output setting and just using the default, but to no avail; the image is still not rendered. I also tried setting kCVPixelFormatType_32BGRA, but then Manatee Works stops detecting the barcode.
I am using the context settings from the sample code provided by Apple on the developer website:
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(NULL,
CVPixelBufferGetWidth(imageBuffer),
CVPixelBufferGetHeight(imageBuffer),
8,
0,
CGColorSpaceCreateDeviceRGB(),
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
Can anybody help me figure out what is going wrong here? It should be simple, but I don't have much experience with the AVFoundation framework. Is this a color-space problem, since the context is using CGColorSpaceCreateDeviceRGB()?
I can provide more info if needed. I searched Stack Overflow and there were many entries regarding this, but none solved my problem.
Is there a reason you are passing 0 for bytesPerRow to CGBitmapContextCreate?
Also, you are passing NULL as the buffer instead of the base address of the sample buffer's image buffer.
Creating the bitmap context should look approximately like this when sampleBuffer is your CMSampleBufferRef:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);

uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress,
                                             CVPixelBufferGetWidth(imageBuffer),
                                             CVPixelBufferGetHeight(imageBuffer),
                                             8,
                                             CVPixelBufferGetBytesPerRow(imageBuffer),
                                             colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);

CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CGContextRelease(context);
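As a side note on the pixel-format part of the question: with kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange the base address is not BGRA data, so a BGRA bitmap context cannot render it correctly. One option, sketched here in Swift under that assumption (and not part of the answer above), is to let Core Image do the YCbCr-to-RGB conversion:

import UIKit
import CoreImage
import CoreMedia

// Sketch: assumes the capture output delivers 420YpCbCr pixel buffers.
// CIImage understands biplanar YCbCr buffers and converts them to RGB for you.
func uiImage(from sampleBuffer: CMSampleBuffer, using context: CIContext) -> UIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}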
Here is how I used to do it. The code is written in Swift, but it works.
Note the orientation parameter on the last line; it depends on your video settings.
extension UIImage {
    /**
     Creates a new UIImage from the video frame sample buffer passed.

     - parameter sampleBuffer: the sample buffer to be converted into a UIImage.
     */
    convenience init?(sampleBuffer: CMSampleBufferRef) {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)

        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0)

        // Get the base address of the pixel buffer
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)

        // Get the number of bytes per row for the pixel buffer
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)

        // Get the pixel buffer width and height
        let width = CVPixelBufferGetWidth(imageBuffer)
        let height = CVPixelBufferGetHeight(imageBuffer)

        // Create a device-dependent RGB color space
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        // Create a bitmap graphics context with the sample buffer data
        let bitmap = CGBitmapInfo(rawValue: CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue)
        let context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                            bytesPerRow, colorSpace, bitmap)

        // Create a Quartz image from the pixel data in the bitmap graphics context
        let quartzImage = CGBitmapContextCreateImage(context)

        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

        // Create an image object from the Quartz image
        self.init(CGImage: quartzImage, scale: 1, orientation: UIImageOrientation.LeftMirrored)
    }
}
I use this regularly:
UIImage *image = [UIImage imageWithData:[self imageToBuffer:sampleBuffer]];

- (NSData *)imageToBuffer:(CMSampleBufferRef)source {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);

    NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return data;
}
I am trying to do a pixel-by-pixel comparison of two UIImages, and I need to retrieve the pixels that are different. Using this question, Generate hash from UIImage, I found a way to generate a hash for a UIImage. Is there a way to compare the two hashes and retrieve the different pixels?
If you want to actually retrieve the difference, the hash cannot help you. You can use the hash to detect the likely presence of differences, but to get the actual differences, you have to use other techniques.
For example, to create a UIImage that consists of the difference between two images, see this accepted answer, in which Cory Kilgor illustrates the use of CGContextSetBlendMode with a blend mode of kCGBlendModeDifference:
+ (UIImage *)differenceOfImage:(UIImage *)top withImage:(UIImage *)bottom {
    CGImageRef topRef = [top CGImage];
    CGImageRef bottomRef = [bottom CGImage];

    // Dimensions
    CGRect bottomFrame = CGRectMake(0, 0, CGImageGetWidth(bottomRef), CGImageGetHeight(bottomRef));
    CGRect topFrame = CGRectMake(0, 0, CGImageGetWidth(topRef), CGImageGetHeight(topRef));
    CGRect renderFrame = CGRectIntegral(CGRectUnion(bottomFrame, topFrame));

    // Create context
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        printf("Error allocating color space.\n");
        return NULL;
    }

    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 renderFrame.size.width,
                                                 renderFrame.size.height,
                                                 8,
                                                 renderFrame.size.width * 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    if (context == NULL) {
        printf("Context not created!\n");
        return NULL;
    }

    // Draw images
    CGContextSetBlendMode(context, kCGBlendModeNormal);
    CGContextDrawImage(context, CGRectOffset(bottomFrame, -renderFrame.origin.x, -renderFrame.origin.y), bottomRef);
    CGContextSetBlendMode(context, kCGBlendModeDifference);
    CGContextDrawImage(context, CGRectOffset(topFrame, -renderFrame.origin.x, -renderFrame.origin.y), topRef);

    // Create image from context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGContextRelease(context);

    return image;
}
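If you then need the actual coordinates of the differing pixels rather than just a visual diff, one possible follow-up (a Swift sketch that assumes an RGBA8 readback; the names are illustrative) is to draw the difference image into a known-format context and scan for non-black pixels:

import UIKit

// Sketch: read back the difference image and collect the coordinates of every
// pixel that is not black, i.e. where the two source images differed.
func differingPixels(in difference: UIImage, threshold: UInt8 = 0) -> [CGPoint] {
    guard let cgImage = difference.cgImage else { return [] }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: bytesPerRow * height)

    let drewOK = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drewOK else { return [] }

    var result: [CGPoint] = []
    for y in 0..<height {
        for x in 0..<width {
            let i = y * bytesPerRow + x * 4
            // Non-zero RGB in the difference image means the inputs differ here.
            if pixels[i] > threshold || pixels[i + 1] > threshold || pixels[i + 2] > threshold {
                result.append(CGPoint(x: x, y: y))
            }
        }
    }
    return result
}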
I'm trying to overlay a picture taken with the camera with some other preset images. The problem is that the picture from the camera might be 8 MP, which is huge in terms of memory usage, and some of the overlays might cover the whole image.
I tried multiple ways to merge them all into one image.
UIGraphicsBeginImageContextWithOptions(_imageView.image.size, NO, 1.0f);
[_containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
or
CGContextDrawImage(UIGraphicsGetCurrentContext(), (CGRect){CGPointZero, _imageView.image.size}, _imageView.image.CGImage);
CGContextDrawImage(UIGraphicsGetCurrentContext(), (CGRect){CGPointZero, _imageView.image.size}, *other image views*);
or
UIImage* image = _imageView.image;
CGImageRef imageRef = image.CGImage;
size_t imageWidth = (size_t)image.size.width;
size_t imageHeight = (size_t)image.size.height;
CGContextRef context = CGBitmapContextCreate(NULL, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef), CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), CGImageGetColorSpace(imageRef), CGImageGetBitmapInfo(imageRef));
CGRect rect = (CGRect){CGPointZero, {imageWidth, imageHeight}};
CGContextDrawImage(context, rect, imageRef);
CGContextDrawImage(context, rect, **other images**);
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
CGContextRelease(context);
UIImage* resultImage = nil;
NSURL* url = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"x.jpg"]];
CFURLRef URLRef = CFBridgingRetain(url);
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(URLRef, kUTTypeJPEG, 1, NULL);
if (destination != NULL)
{
    CGImageDestinationAddImage(destination, newImageRef, NULL);
    if (CGImageDestinationFinalize(destination))
    {
        resultImage = [[UIImage alloc] initWithContentsOfFile:url.path];
    }
    CFRelease(destination);
}
CGImageRelease(newImageRef);
All of these work, but they essentially double the current memory usage.
Is there any way to compose them all together without the need to create a new context? Maybe save all the images to the file system and do the merging there without consuming tons of memory? Or maybe even render to the file system tile by tile?
Any suggestions or pointers on where to go from here?
Thanks
Check out the following code. You could send multiple images to a method like this, but since you are facing memory issues, I suggest you call the same method repeatedly and merge your images two at a time. This process might take more time, though.
- (CGImageRef)mergedImageFromImageOne:(UIImage *)imageOne andImageTwo:(UIImage *)imageTwo
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CGSize imageSize = imageOne.size;
    UIGraphicsBeginImageContext(imageSize);

    // Draw the two images on top of each other
    [imageOne drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
    [imageTwo drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height) blendMode:kCGBlendModeNormal alpha:1.0];

    // The caller is responsible for releasing the returned CGImageRef
    CGImageRef imageRefNew = CGImageCreateWithImageInRect(UIGraphicsGetImageFromCurrentImageContext().CGImage,
                                                          CGRectMake(0, 0, imageSize.width, imageSize.height));
    UIGraphicsEndImageContext();

    [pool release];
    return imageRefNew;
}
Hope this helps.
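A rough Swift equivalent of the same incremental idea, merging one overlay at a time with each pass inside an autoreleasepool so intermediate bitmaps are released early (UIGraphicsImageRenderer and the scale-1 format are my assumptions, not part of the answer above):

import UIKit

// Sketch: merge overlays onto a large base image one at a time so only one
// intermediate bitmap is alive at any moment.
func merged(base: UIImage, overlays: [UIImage]) -> UIImage {
    var result = base
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1   // keep pixel dimensions equal to the base image's point size
    for overlay in overlays {
        autoreleasepool {
            let renderer = UIGraphicsImageRenderer(size: result.size, format: format)
            let current = result
            result = renderer.image { _ in
                let rect = CGRect(origin: .zero, size: current.size)
                current.draw(in: rect)
                overlay.draw(in: rect, blendMode: .normal, alpha: 1)
            }
        }
    }
    return result
}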