I'm using the OpenCV library in my app, and I want to use the final result as a UIImage.
I use this code to convert an IplImage to a UIImage:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSData * data = [[NSData alloc] initWithBytes:image->imageData length:image->imageSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGImageRef imageRef = CGImageCreate(image->width, image->height,
image->depth, image->depth * image->nChannels, image->widthStep,
colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault,
provider, NULL, false, kCGRenderingIntentDefault);
UIImage *ret = [[UIImage alloc] initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
My problem is that after the conversion the quality of the original image decreases and the result comes out blurry.
What is wrong with my code?
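One thing worth checking, as an aside rather than a confirmed diagnosis: kCGImageAlphaPremultipliedLast only matches a 4-channel RGBA image, while an IplImage fresh from OpenCV is usually 3-channel BGR, possibly with padded rows (widthStep can exceed width * nChannels). A minimal sketch, assuming an 8-bit 3-channel image and the OpenCV C API, that converts to RGBA first so the bitmap info matches the data:

// Hedged sketch: convert the (presumed) 3-channel BGR IplImage to RGBA
// before handing it to CoreGraphics, so channel order and alpha flag match.
IplImage *rgba = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U, 4);
cvCvtColor(image, rgba, CV_BGR2RGBA);
NSData *data = [NSData dataWithBytes:rgba->imageData length:rgba->imageSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(rgba->width, rgba->height, 8, 32,
    rgba->widthStep, colorSpace,
    kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault,
    provider, NULL, false, kCGRenderingIntentDefault);
UIImage *result = [[UIImage alloc] initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
cvReleaseImage(&rgba);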
I want to capture raw pixel data for manipulation using the GPUImage framework. I capture the data like this:
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
//raw values
UInt32 *values = (UInt32 *)[dataForRawBytes bytes];
//test out dropbox upload here
[self uploadDropbox:dataForRawBytes];
//end of dropbox upload
// Do whatever with your bytes
// [self processImages:dataForRawBytes];
CVPixelBufferUnlockBaseAddress(cameraFrame, 0); }];
I am using the following settings for camera:
NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey,[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
For testing purposes I want to save the image I capture to Dropbox; to do that I need to save it to a tmp directory first. How would I save dataForRawBytes?
Any help would be very much appreciated!
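Since dataForRawBytes is already an NSData, writing it into the temporary directory only needs a file path. A minimal sketch, assuming a raw BGRA dump is acceptable for the Dropbox test (the file name frame.bgra is made up for illustration):

// Hedged sketch: dump the raw BGRA bytes to a file in the tmp directory.
NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"frame.bgra"];
NSError *error = nil;
BOOL ok = [dataForRawBytes writeToFile:tmpPath options:NSDataWritingAtomic error:&error];
if (!ok) {
    NSLog(@"Failed to write raw frame: %@", error);
}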
So I was able to figure out how to get a UIImage from the raw data; here is my modified code:
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
Byte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
size_t width = CVPixelBufferGetWidth(cameraFrame);
size_t height = CVPixelBufferGetHeight(cameraFrame);
NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
// Do whatever with your bytes
// create suitable color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
//Create suitable context (suitable for camera output setting kCVPixelFormatType_32BGRA)
CGContextRef newContext = CGBitmapContextCreate(rawImageBytes, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
// release color space
CGColorSpaceRelease(colorSpace);
//Create a CGImageRef from the CVImageBufferRef
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
UIImage *FinalImage = [[UIImage alloc] initWithCGImage:newImage];
// release the CGImage and context now that the UIImage holds its own reference
CGImageRelease(newImage);
CGContextRelease(newContext);
// this is the image captured; now we can test saving it.
I needed to create things such as a color space, generate a CGContextRef, and work with that to finally get a UIImage; when debugging I can properly see the image I captured.
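For the actual saving step, a small sketch that encodes the UIImage from above as JPEG and writes it into the tmp directory (the file name is my own choice for illustration):

// Hedged sketch: encode the captured UIImage and write it to tmp for the upload test.
NSData *jpegData = UIImageJPEGRepresentation(FinalImage, 0.9f);
NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.jpg"];
[jpegData writeToFile:tmpPath atomically:YES];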
I cannot find a combination of parameters that works with float (I managed to save with unsigned byte):
float *rawImagePixels = (float*)malloc(width * height * sizeof(float));
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, width, height, GL_RED_EXT, GL_HALF_FLOAT_OES, rawImagePixels);
NSData *data = [NSData dataWithBytes:rawImagePixels length:width*height];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
CGImageRef iref = CGImageCreate(width, height, 16, 16, width, colorspace, kCGImageAlphaNone|kCGBitmapByteOrder16Big, provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *myImage = [UIImage imageWithCGImage:iref];
UIImageWriteToSavedPhotosAlbum(myImage, self, nil, nil);
My texture is a 1-channel half float texture. How can I save it as a UIImage?
I got semi-decent results with this code:
GLhalf *rawImagePixels = (GLhalf*)malloc(width * height * sizeof(GLhalf));
glPixelStorei(GL_PACK_ALIGNMENT, 2);
glReadPixels(0, 0, width, height, GL_RED_EXT, GL_HALF_FLOAT_OES, rawImagePixels);
NSData *data = [NSData dataWithBytes:rawImagePixels length:width * height * sizeof(GLhalf)];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
int bitsPerComponent = 16, bitsPerPixel = 16, bytesPerRow = width * 2;
CGImageRef iref = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorspace, kCGImageAlphaNone|kCGBitmapByteOrder16Little, provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *myImage = [UIImage imageWithCGImage:iref];
UIImageWriteToSavedPhotosAlbum(myImage, self, nil, nil);
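If "semi-decent" is not good enough, one alternative (my own suggestion, not from the original post) is to decode the half floats to 8-bit on the CPU and build an ordinary 8-bpp grayscale image, which CoreGraphics and the photo album handle more predictably. A sketch, assuming the values lie in [0, 1] and reusing rawImagePixels, width, height, and colorspace from the snippet above:

// Hedged sketch: decode 16-bit half floats to floats (flushing subnormals
// to zero, which is fine for display), then quantize to 8-bit gray.
static float halfToFloat(uint16_t h) {
    uint32_t sign = (uint32_t)(h >> 15) << 31;
    uint32_t exp  = (h >> 10) & 0x1F;
    uint32_t mant = h & 0x3FF;
    uint32_t bits;
    if (exp == 0)       bits = sign;                                  // zero / subnormal
    else if (exp == 31) bits = sign | 0x7F800000 | (mant << 13);      // inf / NaN
    else                bits = sign | ((exp + 112) << 23) | (mant << 13); // rebias 15 -> 127
    float f;
    memcpy(&f, &bits, sizeof(f));
    return f;
}

uint8_t *gray = (uint8_t *)malloc(width * height);
const uint16_t *halves = (const uint16_t *)rawImagePixels;
for (size_t i = 0; i < (size_t)(width * height); i++) {
    float v = halfToFloat(halves[i]);
    v = v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); // clamp to [0, 1]
    gray[i] = (uint8_t)(v * 255.0f);
}
NSData *grayData = [NSData dataWithBytesNoCopy:gray length:width * height freeWhenDone:YES];
CGDataProviderRef grayProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)grayData);
CGImageRef grayRef = CGImageCreate(width, height, 8, 8, width, colorspace,
    kCGImageAlphaNone, grayProvider, NULL, NO, kCGRenderingIntentDefault);
UIImage *grayImage = [UIImage imageWithCGImage:grayRef];
CGImageRelease(grayRef);
CGDataProviderRelease(grayProvider);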
In my app I have an imageView, and the image in the imageView should be given effects like sepia and black and white.
To get the sepia effect I used the following code:
-(UIImage*)makeSepiaScale:(UIImage*)image
{
CGImageRef cgImage = [image CGImage];
CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef bitmapData = CGDataProviderCopyData(provider);
UInt8* data = (UInt8*)CFDataGetBytePtr(bitmapData);
int imagWidth = image.size.width;
int imageheight = image.size.height;
NSInteger myDataLength = imagWidth * imageheight * 4;
for (int i = 0; i < myDataLength; i+=4)
{
UInt8 r_pixel = data[i];
UInt8 g_pixel = data[i+1];
UInt8 b_pixel = data[i+2];
int outputRed = (r_pixel * .393) + (g_pixel *.769) + (b_pixel * .189);
int outputGreen = (r_pixel * .349) + (g_pixel *.686) + (b_pixel * .168);
int outputBlue = (r_pixel * .272) + (g_pixel *.534) + (b_pixel * .131);
if(outputRed>255)outputRed=255;
if(outputGreen>255)outputGreen=255;
if(outputBlue>255)outputBlue=255;
data[i] = outputRed;
data[i+1] = outputGreen;
data[i+2] = outputBlue;
}
CGDataProviderRef provider2 = CGDataProviderCreateWithData(NULL, data, myDataLength, NULL);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * imagWidth;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(imagWidth, imageheight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider2, NULL, NO, renderingIntent);
CGColorSpaceRelease(colorSpaceRef); // YOU CAN RELEASE THIS NOW
CGDataProviderRelease(provider2); // YOU CAN RELEASE THIS NOW
CFRelease(bitmapData);
UIImage *sepiaImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // YOU CAN RELEASE THIS NOW
return sepiaImage;
}
It works perfectly, but only for .png images; when I use .jpg images, it just displays a black view in the imageView. Any help will be appreciated.
Use Core Image processing:
-(void)makeSepiaScale
{
CIImage *beginImage = [[CIImage alloc] initWithImage:imageForView];
CIContext *context = [CIContext contextWithOptions:nil];
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                             keysAndValues:kCIInputImageKey, beginImage,
                                           @"inputIntensity", @0.8, nil];
CIImage *outImage = [filter outputImage];
CGImageRef cgimg = [context createCGImage:outImage fromRect:[outImage extent]];
[imageView setImage:[UIImage imageWithCGImage:cgimg]];
CGImageRelease(cgimg); // createCGImage follows the Create rule, so release it here
}
Use CGImageCreateWithJPEGDataProvider to get the bitmap from JPEG:
-(UIImage*)makeSepiaScale:(UIImage*)image {
CGImageRef cgJpegImage = [image CGImage];
CGDataProviderRef jpegProvider = CGImageGetDataProvider(cgJpegImage);
CGImageRef cgBitmapImage = CGImageCreateWithJPEGDataProvider(jpegProvider, nil, NO, kCGRenderingIntentRelativeColorimetric);
CGDataProviderRef bitmapProvider = CGImageGetDataProvider(cgBitmapImage);
CFDataRef bitmapData = CGDataProviderCopyData(bitmapProvider);
UInt8* data = (UInt8*)CFDataGetBytePtr(bitmapData);
// other code to make sepia effect
}
You can't use a JPEG-encoded image like plain RGB, so just decode it with CGImageCreateWithJPEGDataProvider first.
I suspect you are dealing with non-RGBA data; JPEG, for one, does not have an alpha channel. Rather than deal with the intricacies of the data, I suggest you consider using Core Image filters; I've used them with JPEG and PNG without any issues.
There is a variety of Core Image filters, including one for sepia and another for grayscale / black and white. Here is the code for sepia:
-(UIImage*)makeSepiaScale:(UIImage*)input_image output:(UIImage *)output
{
CIImage *cimage = [[CIImage alloc] initWithImage:input_image];
CIFilter *myFilter = [CIFilter filterWithName:@"CISepiaTone"];
[myFilter setDefaults];
[myFilter setValue:cimage forKey:@"inputImage"];
[myFilter setValue:[NSNumber numberWithFloat:0.8f] forKey:@"inputIntensity"];
CIImage *image = [myFilter outputImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:image fromRect:image.extent];
output = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
return output;
}
P.S. The Apple documentation on Core Image is good; I suggest you read up on it.
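For the black and white effect mentioned above, a minimal sketch along the same lines (my own variant, not from the original answer), using CIColorControls with saturation set to zero:

-(UIImage*)makeGrayScale:(UIImage*)input_image
{
    CIImage *cimage = [[CIImage alloc] initWithImage:input_image];
    // Desaturating with CIColorControls gives a black-and-white result.
    CIFilter *myFilter = [CIFilter filterWithName:@"CIColorControls"];
    [myFilter setDefaults];
    [myFilter setValue:cimage forKey:kCIInputImageKey];
    [myFilter setValue:@0.0 forKey:kCIInputSaturationKey];
    CIImage *image = [myFilter outputImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:image fromRect:image.extent];
    UIImage *output = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return output;
}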
When I save the image to the album, it fails with this error:
JPEG Application transferred too few scanlines
Has anyone ever run into this? Thanks.
// make data provider with data.
Float32 picSize = texture->image_size.width * texture->image_size.height * texture->bytesPerPixel;
NSLog(#"pic size:%f", picSize);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, texture->data, texture->image_size.width * texture->image_size.height * texture->bytesPerPixel, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
UIImageWriteToSavedPhotosAlbum(myImage, self, nil, nil);
I encountered a similar error message when I was feeding a grayscale image to a program that expected it to be TrueColor.
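Another thing worth checking, as an aside: the code in the question hardcodes the geometry to 320x480 with 4 bytes per pixel, so if the texture is any other size, the scanline count CoreGraphics expects will not match the data, which fits the error. A sketch of the same CGImage creation driven by the texture's own fields (assuming texture->image_size and texture->bytesPerPixel describe the buffer accurately):

// Hedged sketch: derive the geometry from the texture instead of hardcoding 320x480.
int width = texture->image_size.width;
int height = texture->image_size.height;
int bytesPerPixel = texture->bytesPerPixel;
int bytesPerRow = bytesPerPixel * width;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, texture->data,
    bytesPerRow * height, NULL);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(width, height, 8, bytesPerPixel * 8, bytesPerRow,
    colorSpaceRef, kCGBitmapByteOrderDefault, provider, NULL, NO,
    kCGRenderingIntentDefault);
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);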
I am using the QCAR library, and I get the camera frame from it on every frame.
I am trying to show this frame by calling setImage on my UIImageView *mCurFrameView. This works at first, and I can see the frames running smoothly, but after 20 seconds it crashes.
Sometimes I get EXC_BAD_ACCESS on
int retVal = UIApplicationMain(argc, argv, nil, nil);
Sometimes it just breaks into gdb and pauses.
Sometimes before the crash I get:
2012-02-24 15:59:15.726 QRAR_nextGen[226:707] Received memory warning.
Here's my code:
-(void)SaveCurrentFrame:(UIImage*)image
{
mCurFrameView.image = image;
}
- (void)renderFrameQCAR
{
cout<<"I am starting"<<endl;
QCAR::State state = QCAR::Renderer::getInstance().begin();
QCAR::setFrameFormat(QCAR::RGB888, true);
const QCAR::Image *image = state.getFrame().getImage(1);// 0: YUV, 1: Grayscale image
if (image)
{
const char *data = (const char *)image->getPixels();
int width = image->getWidth(); int height = image->getHeight();
colorSpace = CGColorSpaceCreateDeviceRGB();
bitmapInfo = kCGBitmapByteOrderDefault;
provider = CGDataProviderCreateWithData(NULL, data, width*height*3, NULL);
intent = kCGRenderingIntentDefault;
imageRef = CGImageCreate(width, height, 8, 8*3, width * 3, colorSpace, bitmapInfo, provider, NULL, NO, intent);
mCurFrame = [UIImage imageWithCGImage:imageRef];
cout<<"I am waiting"<<endl;
[self performSelectorOnMainThread:@selector(SaveCurrentFrame:) withObject:mCurFrame waitUntilDone:YES];
}
I've tried several things: using a CALayer to show the camera, release/retain/autorelease, and defining a property with and without synthesizing it.
I'd appreciate it a lot if someone could help me; I am losing my mind. Thanks a lot.
You are not releasing (i.e., you are leaking) each CGDataProvider, which means you are keeping every frame's image data in memory. Try calling CGDataProviderRelease(provider) after performSelectorOnMainThread.
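A sketch of where the releases would go at the end of renderFrameQCAR; the CGImageRelease and CGColorSpaceRelease are my additions on the same reasoning, since CGImageCreate and CGColorSpaceCreateDeviceRGB also follow the Create rule:

imageRef = CGImageCreate(width, height, 8, 8*3, width * 3, colorSpace,
    bitmapInfo, provider, NULL, NO, intent);
mCurFrame = [UIImage imageWithCGImage:imageRef];
[self performSelectorOnMainThread:@selector(SaveCurrentFrame:)
                       withObject:mCurFrame
                    waitUntilDone:YES];
// Release everything created this frame; the UIImage keeps its own reference.
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);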
Maybe you are missing this:
QCAR::Renderer::getInstance().end();
Here's a snippet working on iPhone 4S/5 with iOS 6:
- (void)renderFrameIntoImage {
QCAR::State state = QCAR::Renderer::getInstance().begin();
QCAR::setFrameFormat(QCAR::GRAYSCALE, true);
const QCAR::Image *image = state.getFrame().getImage(0); // 0: YUV, 1: Grayscale image
const char *data = (const char *)image->getPixels();
int width = image->getWidth(); int height = image->getHeight();
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, width*height, NULL);
CGColorRenderingIntent intent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width, height, 8, 8, width * 1, colorSpace, bitmapInfo, provider, NULL, NO, intent);
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
QCAR::Renderer::getInstance().end();
UIImageWriteToSavedPhotosAlbum(myImage, nil, nil, NULL);
}
Regards...
You have three leaks.
You must call the matching Release function for every Core Foundation object obtained from a function with Create (or Copy) in its name:
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
CGImageRelease(imageRef);
A C++ method to do the QCAR::Image to UIImage conversion
inline UIImage *imageWithQCARCameraImage(const QCAR::Image *cameraImage)
{
UIImage *image = nil;
if (cameraImage) {
CGColorSpaceRef colorSpace = NULL;
QCAR::PIXEL_FORMAT pixelFormat = cameraImage->getFormat();
int bitsPerPixel = QCAR::getBitsPerPixel(pixelFormat);
switch (pixelFormat) {
case QCAR::RGB888:
colorSpace = CGColorSpaceCreateDeviceRGB();
break;
case QCAR::GRAYSCALE:
colorSpace = CGColorSpaceCreateDeviceGray();
break;
case QCAR::YUV:
case QCAR::RGB565:
case QCAR::RGBA8888:
case QCAR::INDEXED:
std::cerr << "Image format conversion not implemented." << std::endl;
break;
case QCAR::UNKNOWN_FORMAT:
std::cerr << "Image format unknown." << std::endl;
break;
}
int width = cameraImage->getWidth();
int height = cameraImage->getHeight();
int bytesPerRow = cameraImage->getStride();
const void *baseAddress = cameraImage->getPixels();
size_t totalBytes = QCAR::getBufferSize(width, height, pixelFormat);
if (bitsPerPixel > 0 && colorSpace != NULL) {
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
baseAddress,
totalBytes,
NULL);
int bitsPerComponent = 8;
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpace,
bitmapInfo,
provider,
NULL,
NO,
renderingIntent);
image = [UIImage imageWithCGImage:imageRef];
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
CGImageRelease(imageRef);
}
}
return image;
}
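A hedged usage sketch, assuming the helper is called from the QCAR render callback, that the frame format was requested earlier with QCAR::setFrameFormat, and that SaveCurrentFrame: exists as in the question; the frame index follows the comments in the snippets above:

// Hypothetical caller: grab the current frame, convert it with the helper,
// and hand the UIImage to the main thread (UIKit must only be touched there).
- (void)renderFrameIntoUIImage
{
    QCAR::State state = QCAR::Renderer::getInstance().begin();
    const QCAR::Image *cameraImage = state.getFrame().getImage(1); // grayscale, per the thread
    if (cameraImage) {
        UIImage *frame = imageWithQCARCameraImage(cameraImage);
        if (frame) {
            [self performSelectorOnMainThread:@selector(SaveCurrentFrame:)
                                   withObject:frame
                                waitUntilDone:NO];
        }
    }
    QCAR::Renderer::getInstance().end();
}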