We need to convert the code below from Objective-C to Swift.
Question:
There are a few function calls to release objects, e.g., CGImageRelease(newImage). Is it safe to assume no analog is needed for the Swift version since all the memory management is automatic, or do you need to free up memory in Swift as well?
Objective-C code:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext); CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:orientation];
CGImageRelease(newImage);
Swift version so far:
private func turnBufferToPNGImage(imageSampleBuffer: CMSampleBufferRef, scale: CGFloat) -> UIImage {
let imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer)
// Lock base address
CVPixelBufferLockBaseAddress(imageBuffer, 0)
// Set properties for CGBitmapContext
let pixelData = CVPixelBufferGetBaseAddress(imageBuffer)
let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
let colorSpace = CGColorSpaceCreateDeviceRGB()
// Create CGBitmapContext
let newContext = CGBitmapContextCreate(pixelData, width, height, 8, bytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst.rawValue)
// Create image from context
let rawImage = CGBitmapContextCreateImage(newContext)!
let newImage = UIImage(CGImage: rawImage, scale: scale, orientation: .Up)
// Unlock base address
CVPixelBufferUnlockBaseAddress(imageBuffer,0)
// Return image
return newImage
}
Per the docs:
Core Foundation types are automatically imported as full-fledged Swift classes. Wherever memory management annotations have been provided, Swift automatically manages the memory of Core Foundation objects, including Core Foundation objects that you instantiate yourself
So you can omit the calls.
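For illustration, here is a minimal sketch of the same pipeline written against the current Swift API (the makeCGImage name is mine, not from the question). Note that there are no CGColorSpaceRelease, CGContextRelease, or CGImageRelease calls anywhere; Swift releases these objects automatically when they go out of scope.
import CoreGraphics
import CoreVideo

// Sketch only: the same bitmap-context pipeline with no manual release calls.
func makeCGImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                            width: CVPixelBufferGetWidth(pixelBuffer),
                            height: CVPixelBufferGetHeight(pixelBuffer),
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                            space: colorSpace,
                            bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue |
                                        CGImageAlphaInfo.premultipliedFirst.rawValue)
    // colorSpace, context and the returned CGImage are all managed automatically.
    return context?.makeImage()
}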
No, you don't need to release Core Foundation objects, because Apple says:
The Core Foundation CFTypeRef type completely remaps to the AnyObject type.
And:
Core Foundation objects returned from annotated APIs are automatically memory managed in Swift—you do not need to invoke the CFRetain, CFRelease, or CFAutorelease functions yourself.
The documentation is here.
Related
I found a similar post here and here.
I tried the following code; it works fine on iOS 12.1.4 but produces an empty result on macOS Mojave 10.14:
id<CAMetalDrawable> lastDrawable = view.currentDrawable;
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> commandBuffer) {
id<MTLTexture> drawableTexture = lastDrawable.texture;
int width = (int)drawableTexture.width;
int height = (int)drawableTexture.height;
int len = width * height * 4;
uint8_t* image = (uint8_t*)malloc(len);
[drawableTexture getBytes:image bytesPerRow:width*4 fromRegion:MTLRegionMake2D(0, 0, width, height) mipmapLevel:0];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(
image,
width,
height,
8, // bitsPerComponent
4*width, // bytesPerRow
colorSpace,
kCGImageAlphaNoneSkipLast);
CFRelease(colorSpace);
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
CFRelease(cgImage);
CFRelease(bitmapContext);
free(image);
}];
Do I need additional steps to get a correct snapshot of the current screen on macOS?
The storage mode of the drawable's texture is managed. You need to use a blit command encoder to encode a -synchronize... command. Otherwise, the data isn't guaranteed to be available to the CPU.
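For reference, a minimal sketch in Swift of what that could look like, assuming view is an MTKView and commandBuffer is the command buffer that renders into its drawable (both names are placeholders for whatever your app uses):
import Metal
import MetalKit

// Sketch: on macOS, a managed drawable texture must be synchronized before the CPU reads it.
func encodeSynchronizeForReadback(view: MTKView, commandBuffer: MTLCommandBuffer) {
    guard let drawable = view.currentDrawable,
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    blit.synchronize(resource: drawable.texture)
    blit.endEncoding()
    // After the command buffer completes, getBytes(_:bytesPerRow:from:mipmapLevel:)
    // on the drawable's texture returns valid data, as in the handler above.
}
Call this before committing the command buffer; once the buffer completes, the handler in the question can read the texture safely.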
I am working with AVFoundation and attempting to save a particular output CMSampleBufferRef as a UIImage in a variable. I am using the Manatee Works sample code, which uses kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange for kCVPixelBufferPixelFormatTypeKey:
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
But when I save the image, the output is just nil, or whatever the background of the image view happens to be. I also tried not setting the output settings and just using the default, but to no avail; the image is still not rendered. I also tried setting kCVPixelFormatType_32BGRA, but then Manatee Works stops detecting barcodes.
I am using the context settings from the sample code provided by Apple on the developer website:
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(NULL,
CVPixelBufferGetWidth(imageBuffer),
CVPixelBufferGetHeight(imageBuffer),
8,
0,
CGColorSpaceCreateDeviceRGB(),
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
Can anybody help me with what is going wrong here? It should be simple, but I don't have much experience with the AVFoundation framework. Is this a color space problem, since the context uses CGColorSpaceCreateDeviceRGB()?
I can provide more info if needed. I searched Stack Overflow and there were many entries about this, but none solved my problem.
Is there a reason you are passing 0 for bytesPerRow to CGBitmapContextCreate?
Also, you are passing NULL as the data pointer instead of the pixel buffer's base address.
Creating the bitmap context should look approximately like this when sampleBuffer is your CMSampleBufferRef:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress,
CVPixelBufferGetWidth(imageBuffer),
CVPixelBufferGetHeight(imageBuffer),
8,
CVPixelBufferGetBytesPerRow(imageBuffer),
colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CGContextRelease(context);
Here is how I used to do it. The code is written in Swift, but it works.
Note the orientation parameter on the last line; it depends on your video settings.
extension UIImage {
/**
Creates a new UIImage from the video frame sample buffer passed.
- parameter sampleBuffer: the sample buffer to be converted into a UIImage.
*/
convenience init?(sampleBuffer: CMSampleBufferRef) {
// Get a CMSampleBuffer's Core Video image buffer for the media data
let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0)
// Get the base address of the pixel buffer
let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
// Get the number of bytes per row for the pixel buffer
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
// Get the pixel buffer width and height
let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)
// Create a device-dependent RGB color space
let colorSpace = CGColorSpaceCreateDeviceRGB()
// Create a bitmap graphics context with the sample buffer data
let bitmap = CGBitmapInfo(rawValue: CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue)
let context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, bitmap)
// Create a Quartz image from the pixel data in the bitmap graphics context
let quartzImage = CGBitmapContextCreateImage(context)
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0)
// Create an image object from the Quartz image
self.init(CGImage: quartzImage, scale: 1, orientation: UIImageOrientation.LeftMirrored)
}
}
I use this regularly:
UIImage *image = [UIImage imageWithData:[self imageToBuffer:sampleBuffer]];
- (NSData *) imageToBuffer:(CMSampleBufferRef)source {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
CVPixelBufferLockBaseAddress(imageBuffer,0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);
NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return data;
}
I have a UIImage which is loaded from a CIImage with:
tempImage = [UIImage imageWithCIImage:ciImage];
The problem is I need to crop tempImage to a specific CGRect and the only way I know how to do this is by using CGImage.
The problem is that in the iOS 6.0 documentation I found this:
CGImage
If the UIImage object was initialized using a CIImage object, the value of the property is NULL.
A. How to convert from CIImage to CGImage?
I'm using this code but I have a memory leak (and can't understand where):
+(UIImage*)UIImageFromCIImage:(CIImage*)ciImage {
CGSize size = ciImage.extent.size;
UIGraphicsBeginImageContext(size);
CGRect rect;
rect.origin = CGPointZero;
rect.size = size;
UIImage *remImage = [UIImage imageWithCIImage:ciImage];
[remImage drawInRect:rect];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
remImage = nil;
ciImage = nil;
//
return result;
}
Swift 3, Swift 4 and Swift 5
Here is a nice little function to convert a CIImage to CGImage in Swift.
func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
let context = CIContext(options: nil)
if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
return cgImage
}
return nil
}
On a desktop (macOS) or TV (tvOS) device, you would typically use:
let ctx = CIContext(options: [.useSoftwareRenderer: false])
let cgImage = ctx.createCGImage(output, from: output.extent)
Several other hint options are available on CIContextOption, such as .allowLowPower, .cacheIntermediates, .highQualityDownsample, the priority settings, and so on.
Notes:
CIContext(options: nil) will use a software renderer and can be quite slow. To improve performance, use CIContext(options: [CIContextOption.useSoftwareRenderer: false]) - this forces operations to run on the GPU and can be much faster.
If you use a CIContext more than once, cache it, as Apple recommends.
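A minimal sketch of that caching (the ImageConversion wrapper name is just for illustration):
import CoreImage

// Sketch: create the CIContext once and reuse it; creating the context is the
// expensive part, not the individual createCGImage calls.
enum ImageConversion {
    static let context = CIContext(options: [.useSoftwareRenderer: false])

    static func cgImage(from ciImage: CIImage) -> CGImage? {
        return context.createCGImage(ciImage, from: ciImage.extent)
    }
}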
See the CIContext documentation for createCGImage:fromRect:
CGImageRef img = [myContext createCGImage:ciImage fromRect:[ciImage extent]];
From an answer to a similar question: https://stackoverflow.com/a/10472842/474896
Also, since you have a CIImage to begin with, you could use CIFilter to actually crop your image.
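For example, a sketch of the crop (the function name and rect are illustrative; the newer cropped(to:) convenience is shown, which does the same thing as the CICrop filter):
import CoreImage

// Sketch: crop the CIImage first, then render only the cropped extent to a CGImage.
func croppedCGImage(from ciImage: CIImage, in cropRect: CGRect) -> CGImage? {
    let cropped = ciImage.cropped(to: cropRect)
    return CIContext(options: nil).createCGImage(cropped, from: cropped.extent)
}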
After some googling I found this method which converts a CMSampleBufferRef to a CGImage:
+ (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer // Create a CGImageRef from sample buffer data
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0); // Lock the image buffer
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
/* CVBufferRelease(imageBuffer); */ // do not call this!
return newImage;
}
(but I closed the tab so I don't know where I got it from)
I'm trying to make an Adobe native extension (ANE) H.264 file encoder for iOS. I have the encoder part working; it runs fine from an Xcode test project. The problem is that when I try to run it from the ANE file, it doesn't work.
My code to add frames converted from a bitmapData into a CGImage:
//convert first argument in a bitmapData
FREObject objectBitmapData = argv[0];
FREBitmapData bitmapData;
FREAcquireBitmapData( objectBitmapData, &bitmapData );
CGImageRef theImage = getCGImageRefFromBitmapData(bitmapData);
[videoRecorder addFrame:theImage];
In this case the CGImageRef has data, but when I try to open the video, it only shows a black screen.
When I test it from an Xcode project it also saves a black-screen video, but if I create the CGImage from a UIImage file, then modify that CGImage and pass it to addFrame, it works fine.
My guess is that the CGImageRef theImage is not created correctly.
The code I'm using to create the CGImageRef is this: https://stackoverflow.com/a/8528969/800836
Why does the CGImage not work when it is created using CGImageCreate?
Thanks!
In case someone has the same problem, my solution was to create a CGImageRef from a bitmap context created with 0 bytes per row (so Quartz computes the row stride itself):
CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, 1024, 768, 8, /*bytes per row*/0, colorSpace, bitmapInfo);
// create image from context
CGImageRef tmpImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
Then I copy the pixel data into this tmpImage and create a new image based on it:
CGImageRef getCGImageRefFromRawData(FREBitmapData bitmapData) {
CGImageRef abgrImageRef = tmpImage;
CFDataRef abgrData = CGDataProviderCopyData(CGImageGetDataProvider(abgrImageRef));
UInt8 *pixelData = (UInt8 *) CFDataGetBytePtr(abgrData);
int length = CFDataGetLength(abgrData);
uint32_t* input = bitmapData.bits32;
int index2 = 0;
for (int index = 0; index < length; index+= 4) {
pixelData[index] = (input[index2]>>0) & 0xFF;
pixelData[index+1] = (input[index2]>>8) & 0xFF;
pixelData[index+2] = (input[index2]>>16) & 0xFF;
pixelData[index+3] = (input[index2]>>24) & 0xFF;
index2++;
}
// grab the bgra image info
size_t width = CGImageGetWidth(abgrImageRef);
size_t height = CGImageGetHeight(abgrImageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(abgrImageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(abgrImageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(abgrImageRef);
CGColorSpaceRef colorspace = CGImageGetColorSpace(abgrImageRef);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(abgrImageRef);
// create the argb image
CFDataRef argbData = CFDataCreate(NULL, pixelData, length);
CGDataProviderRef provider = CGDataProviderCreateWithCFData(argbData);
CGImageRef argbImageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);
// release what we can
CFRelease(abgrData);
CFRelease(argbData);
CGDataProviderRelease(provider);
return argbImageRef;
}
I have a freehand drawing view (users can draw lines with their finger). I only use a few colors, so I wrote a compression algorithm (I want to send the drawing over a local network to another iPad), but I can't seem to get the data out of the graphics context accurately, even with this simple test:
//Get the data
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0f);
CGContextRef c = UIGraphicsGetCurrentContext();
[self.layer renderInContext:c];
baseImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRef imageRef = baseImageView.image.CGImage;
NSData *dataToUse = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
//Reuse the data
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)dataToUse);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef test = CGImageCreate(width, height, 8, 32, 4 * width, colorSpace,
                                kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                provider, NULL, false, kCGRenderingIntentDefault);
// I get width and height from another part of the program
imageView.image = [UIImage imageWithCGImage:test];
I simply copied the data out of one CGImage and tried to insert it into another. However, the result is garbage. Not only that: for some reason the data comes out as BGRA when I copy it, but CGImageCreate wants RGBA. Where am I going wrong with this round-trip test?
Looks like the answer is that it's not enough to just get the data provider. You need to actually render the image into a bitmap context and take the data from there. Revised way:
//Get the data
CGImageRef imageRef = baseImageView.image.CGImage;
size_t height = CGImageGetHeight(imageRef);
size_t width = CGImageGetWidth(imageRef);
size_t bufferLength = width * height * 4;
unsigned char *rawData = (unsigned char *)malloc(bufferLength);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8,
4*width, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
NSData *dataToUse = [NSData dataWithBytes:rawData length:bufferLength];
//Later free(rawData);
Using the data afterwards works the same as before.
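For completeness, here is a sketch (in Swift; the function name is illustrative) of the reuse side with parameters that match the bitmap context above: 8-bit components, 32 bits per pixel, 4 * width bytes per row, premultiplied-last RGBA.
import CoreGraphics
import Foundation

// Sketch: rebuild a CGImage from the raw bytes rendered above. The bitmap info must
// match what the bitmap context produced (premultiplied-last RGBA here).
func imageFromRawRGBA(_ data: Data, width: Int, height: Int) -> CGImage? {
    guard data.count >= width * height * 4,
          let provider = CGDataProvider(data: data as CFData) else { return nil }
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: 8,
                   bitsPerPixel: 32,
                   bytesPerRow: 4 * width,
                   space: CGColorSpaceCreateDeviceRGB(),
                   bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: .defaultIntent)
}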