I'm trying to convert a CVPixelBufferRef into a UIImage using the following snippet:
UIImage *image = nil;
CMSampleBufferRef sampleBuffer = (CMSampleBufferRef)CMBufferQueueDequeueAndRetain(_queue);
if (sampleBuffer)
{
    // Wrap the pixel buffer in a CIImage and render it to a CGImage via the CIContext.
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    NSUInteger width = CVPixelBufferGetWidth(pixelBuffer);
    NSUInteger height = CVPixelBufferGetHeight(pixelBuffer);
    CIImage *coreImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:nil];
    CGImageRef imageRef = [_context createCGImage:coreImage fromRect:CGRectMake(0, 0, width, height)];
    image = [UIImage imageWithCGImage:imageRef];
    CFRelease(sampleBuffer);
    CFRelease(imageRef);
}
My problem is that it works fine when I run the code on a device, but it fails to render on the simulator; the console outputs the following:
Render failed because a pixel format YCC420f is not supported
Any ideas?
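One workaround worth trying, sketched below under the assumption (not stated above) that the buffers come from an AVCaptureVideoDataOutput: request BGRA output so the Core Image renderer never has to handle a 420f (YCC420f) buffer.

// Hedged sketch: ask the capture output for BGRA frames instead of 420f.
// Assumes the sample buffers originate from an AVCaptureVideoDataOutput.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                              @(kCVPixelFormatType_32BGRA) };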
I am trying to get each frame from ReplayKit using startCaptureWithHandler.
startCaptureWithHandler returns a CMSampleBufferRef, which I need to convert to an image.
I'm using this method to convert it to a UIImage, but the result is always white.
- (UIImage *)imageFromSampleBuffer3:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    // Create a new, empty ARGB pixel buffer with the same dimensions.
    CVPixelBufferRef pxbuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pxbuffer);
    CVPixelBufferLockFlags flags = (CVPixelBufferLockFlags)0;
    CVPixelBufferLockBaseAddress(pxbuffer, flags);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, flags);
    CVPixelBufferRelease(pxbuffer);
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0f orientation:UIImageOrientationRight];
    CGImageRelease(quartzImage);
    return image;
}
Can anyone tell me where I'm going wrong?
The sampleBuffer is in 420f format, and it has two planes.
To lock the memory, call CVPixelBufferLockBaseAddress(imageBuffer, 0).
To get the Y plane data, use CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); for the UV plane data, use CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1).
Do not forget to unlock the memory afterwards. I am not sure how to convert YUV to RGB.
In your code, you never read the image data from imageBuffer: you only work on pxbuffer and pxdata, which contain no image data.
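If you don't want to hand-roll the YUV-to-RGB math, a minimal sketch (assuming ARC; the method name is mine) is to let Core Image do the conversion, since CIImage accepts biplanar 420f pixel buffers directly:

- (UIImage *)imageFromYUVSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // CIImage handles the 420f (YUV) to RGB conversion when it renders.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}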
I want to capture raw pixel data for manipulation using the GPUImage framework. I capture the data like this:
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
// raw values
UInt32 *values = (UInt32 *)[dataForRawBytes bytes]; //, cnt = [dataForRawBytes length]/sizeof(int);
// test out Dropbox upload here
[self uploadDropbox:dataForRawBytes];
// end of Dropbox upload
// Do whatever with your bytes
// [self processImages:dataForRawBytes];
CVPixelBufferUnlockBaseAddress(cameraFrame, 0); }];
I am using the following settings for the camera:
NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey,[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
For testing purposes I want to save the captured image to Dropbox; to do that I first need to save it to a tmp directory. How would I save dataForRawBytes?
Any help would be much appreciated!
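For the saving step itself, a minimal sketch (the file name is hypothetical): NSData can be written straight to a path under NSTemporaryDirectory(), which is a valid tmp location for a later upload.

// Hypothetical helper: write the raw bytes to the tmp directory for upload.
NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"rawFrame.data"];
BOOL ok = [dataForRawBytes writeToFile:tmpPath atomically:YES];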
So I was able to figure out how to get a UIImage from the raw data; here is my modified code:
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
Byte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
size_t width = CVPixelBufferGetWidth(cameraFrame);
size_t height = CVPixelBufferGetHeight(cameraFrame);
NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
// Do whatever with your bytes
// create a suitable color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create a suitable context (matches the camera output setting kCVPixelFormatType_32BGRA)
CGContextRef newContext = CGBitmapContextCreate(rawImageBytes, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// create a CGImageRef from the CVImageBufferRef before unlocking the buffer
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
// release the color space and context now that the image has been created
CGColorSpaceRelease(colorSpace);
CGContextRelease(newContext);
UIImage *FinalImage = [[UIImage alloc] initWithCGImage:newImage];
CGImageRelease(newImage);
// the image is captured; now we can test saving it.
I needed to create properties such as the color space, generate a CGContextRef, and work with that to finally get a UIImage; when debugging I can properly see the image I captured.
I am trying to use this method to then upload the imageData to my service, but the resulting imageData does not take the orientation into account, so I am uploading images in the incorrect orientation.
Any suggestions?
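One commonly suggested approach, sketched here under the assumption that FinalImage carries the desired imageOrientation metadata, is to redraw the image so the rotation is baked into the pixels before extracting the data:

// Redrawing honours imageOrientation, so the resulting bitmap is physically rotated.
UIGraphicsBeginImageContextWithOptions(FinalImage.size, NO, FinalImage.scale);
[FinalImage drawInRect:CGRectMake(0, 0, FinalImage.size.width, FinalImage.size.height)];
UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImageJPEGRepresentation(normalizedImage, 0.9);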
I solved the exact same problem using this function; to get the original image size, you just have to pass size == CGSizeMake(10000000000, 10000000000). Hope this helps :)
+ (UIImage *)createThumbFromImageIO:(UIImage *)image maxPixelSize:(CGSize)size
{
    NSData *imagedata = UIImageJPEGRepresentation(image, 1);
    CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imagedata, NULL);
    if (!imageSource) return nil;
    CGSize imageSize = CGSizeMake(size.width, size.height);
    // kCGImageSourceThumbnailMaxPixelSize caps the larger dimension of the thumbnail.
    CFDictionaryRef options = (__bridge CFDictionaryRef)[NSDictionary dictionaryWithObjectsAndKeys:
                              (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
                              (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageIfAbsent,
                              (id)[NSNumber numberWithFloat:imageSize.height], (id)kCGImageSourceThumbnailMaxPixelSize,
                              nil];
    CGImageRef imgRef = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options);
    image = [UIImage imageWithCGImage:imgRef];
    CGImageRelease(imgRef);
    CFRelease(imageSource);
    return image;
}
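For example (MyClass is a hypothetical stand-in for whatever class hosts the method, and image holds the source UIImage):

// Passing a huge max size, as suggested above, yields the full-resolution image.
UIImage *fullSize = [MyClass createThumbFromImageIO:image
                                       maxPixelSize:CGSizeMake(10000000000, 10000000000)];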
I've been struggling for two days to display an animated WebP image in a UIImageView, with no success whatsoever.
The problem is mainly in the decoding step, which fails with this error: VP8_STATUS_UNSUPPORTED_FEATURE.
I tried
https://github.com/seanooi/iOS-WebP
https://github.com/mattt/WebPImageSerialization
These projects provide code for creating a UIImage from WebP files, and they work fine with non-animated images, but both fail with the same error as above when attempting to decode animated images.
I am jailbroken, and checking the filesystem I saw that Facebook's Messenger app ships some of its stickers in .webp format; its license also mentions Google's "webp" library, so I'm sure it's somehow possible.
I managed to decode animated .webp files using the code snippet at the top of this header, which also contains explanations of the data structures used.
// Release callback: frees each frame's RGBA buffer only when its CGImage is done with it.
static void FreeWebPFrameData(void *info, const void *data, size_t size) {
    WebPFree((void *)data); // use free() on very old libwebp versions without WebPFree
}

static NSDictionary* DecodeWebPURL(NSURL *url) {
    NSMutableDictionary *info = [NSMutableDictionary dictionary];
    NSMutableArray *images = [NSMutableArray array];
    NSData *imgData = [NSData dataWithContentsOfURL:url];
    WebPData data;
    WebPDataInit(&data);
    data.bytes = (const uint8_t *)[imgData bytes];
    data.size = [imgData length];
    WebPDemuxer* demux = WebPDemux(&data);
    int width = (int)WebPDemuxGetI(demux, WEBP_FF_CANVAS_WIDTH);
    int height = (int)WebPDemuxGetI(demux, WEBP_FF_CANVAS_HEIGHT);
    uint32_t flags = WebPDemuxGetI(demux, WEBP_FF_FORMAT_FLAGS);
    if (flags & ANIMATION_FLAG) {
        WebPIterator iter;
        if (WebPDemuxGetFrame(demux, 1, &iter)) {
            [info setObject:[NSNumber numberWithInt:iter.duration] forKey:@"duration"];
            do {
                WebPData frame = iter.fragment;
                // Decode the frame straight to RGBA; WebPDecodeRGBA returns NULL on failure.
                uint8_t *rgba = WebPDecodeRGBA(frame.bytes, frame.size, &width, &height);
                if (!rgba) {
                    NSLog(@"Error decoding frame");
                    continue;
                }
                // Hand ownership of the RGBA buffer to the data provider.
                CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgba, width * height * 4, FreeWebPFrameData);
                CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
                CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
                CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
                CGImageRef imageRef = CGImageCreate(width, height, 8, 32, 4 * width, colorSpaceRef, bitmapInfo, provider, NULL, YES, renderingIntent);
                [images addObject:[UIImage imageWithCGImage:imageRef]];
                CGImageRelease(imageRef);
                CGColorSpaceRelease(colorSpaceRef);
                CGDataProviderRelease(provider);
            } while (WebPDemuxNextFrame(&iter));
            WebPDemuxReleaseIterator(&iter);
        }
    }
    WebPDemuxDelete(demux);
    [info setObject:images forKey:@"frames"];
    return info;
}
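A possible way to consume the returned dictionary (a sketch; webpURL and imageView are assumed, and note the code above stores only the first frame's duration, in milliseconds):

NSDictionary *info = DecodeWebPURL(webpURL);
NSArray *frames = info[@"frames"];
NSTimeInterval frameDuration = [info[@"duration"] intValue] / 1000.0;
imageView.image = [UIImage animatedImageWithImages:frames
                                          duration:frameDuration * frames.count];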
In my app I have an image view, and the image in it should get effects such as sepia or black and white applied.
To create the sepia effect I used the following code:
- (UIImage *)makeSepiaScale:(UIImage *)image
{
    CGImageRef cgImage = [image CGImage];
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(provider);
    UInt8 *data = (UInt8 *)CFDataGetBytePtr(bitmapData);
    int imageWidth = image.size.width;
    int imageHeight = image.size.height;
    NSInteger myDataLength = imageWidth * imageHeight * 4;
    // Apply the standard sepia matrix to each RGBA pixel, clamping to 255.
    for (int i = 0; i < myDataLength; i += 4)
    {
        UInt8 r_pixel = data[i];
        UInt8 g_pixel = data[i + 1];
        UInt8 b_pixel = data[i + 2];
        int outputRed = (r_pixel * .393) + (g_pixel * .769) + (b_pixel * .189);
        int outputGreen = (r_pixel * .349) + (g_pixel * .686) + (b_pixel * .168);
        int outputBlue = (r_pixel * .272) + (g_pixel * .534) + (b_pixel * .131);
        if (outputRed > 255) outputRed = 255;
        if (outputGreen > 255) outputGreen = 255;
        if (outputBlue > 255) outputBlue = 255;
        data[i] = outputRed;
        data[i + 1] = outputGreen;
        data[i + 2] = outputBlue;
    }
    CGDataProviderRef provider2 = CGDataProviderCreateWithData(NULL, data, myDataLength, NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * imageWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(imageWidth, imageHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider2, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef); // can be released now
    CGDataProviderRelease(provider2);   // can be released now
    CFRelease(bitmapData);
    UIImage *sepiaImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef); // can be released now
    return sepiaImage;
}
It works perfectly, but only for .png images; when I use .jpg images, it just displays a black view in the image view. Any help will be appreciated.
Use Core Image processing:
- (void)makeSepiaScale
{
    CIImage *beginImage = [[CIImage alloc] initWithImage:imageForView];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                                  keysAndValues:kCIInputImageKey, beginImage,
                                                @"inputIntensity", @0.8, nil];
    CIImage *outImage = [filter outputImage];
    CGImageRef cgimg = [context createCGImage:outImage fromRect:[outImage extent]];
    [imageView setImage:[UIImage imageWithCGImage:cgimg]];
    CGImageRelease(cgimg); // createCGImage returns a +1 reference
}
Use CGImageCreateWithJPEGDataProvider to get the bitmap from JPEG:
- (UIImage *)makeSepiaScale:(UIImage *)image {
    CGImageRef cgJpegImage = [image CGImage];
    CGDataProviderRef jpegProvider = CGImageGetDataProvider(cgJpegImage);
    CGImageRef cgBitmapImage = CGImageCreateWithJPEGDataProvider(jpegProvider, NULL, NO, kCGRenderingIntentRelativeColorimetric);
    CGDataProviderRef bitmapProvider = CGImageGetDataProvider(cgBitmapImage);
    CFDataRef bitmapData = CGDataProviderCopyData(bitmapProvider);
    UInt8 *data = (UInt8 *)CFDataGetBytePtr(bitmapData);
    // other code to make the sepia effect
}
You can't use a JPEG-encoded image like plain RGB data, so decode it with CGImageCreateWithJPEGDataProvider first.
I suspect you are dealing with non-RGBA data; JPG, for one, does not have an alpha channel. Rather than deal with the intricacies of the data, I suggest you consider using Core Image filters; I've used them with JPG and PNG without any issues.
There is a variety of Core Image filters, including one for sepia and another for grayscale / black and white. Here is the code for sepia:
- (UIImage *)makeSepiaScale:(UIImage *)input_image output:(UIImage *)output
{
    CIImage *cimage = [[CIImage alloc] initWithImage:input_image];
    CIFilter *myFilter = [CIFilter filterWithName:@"CISepiaTone"];
    [myFilter setDefaults];
    [myFilter setValue:cimage forKey:@"inputImage"];
    [myFilter setValue:[NSNumber numberWithFloat:0.8f] forKey:@"inputIntensity"];
    CIImage *image = [myFilter outputImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:image fromRect:image.extent];
    output = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return output;
}
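And for the grayscale / black-and-white effect mentioned above, a sketch along the same lines (CIColorControls with zeroed saturation is one standard way; reusing the cimage variable from the sepia example):

CIFilter *mono = [CIFilter filterWithName:@"CIColorControls"];
[mono setDefaults];
[mono setValue:cimage forKey:kCIInputImageKey];
[mono setValue:@0.0 forKey:@"inputSaturation"]; // zero saturation -> grayscale
CIImage *monoImage = [mono outputImage];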
P.S. Apple's documentation on Core Image is good; I suggest you read up on it.