I already got my problem solved by using different code; I just want to know what is wrong with the following one.
I wanted to change the colour of every pixel in a UIImage using bitmap data. My code is as follows:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UIImage *image = self.imageViewMain.image;
    CGImageRef imageRef = image.CGImage;
    NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
    char *pixels = (char *)[data bytes];

    // this is where we manipulate the individual pixels
    for (int i = 1; i < [data length]; i += 3)
    {
        int r = i;
        int g = i + 1;
        int b = i + 2;
        int a = i + 3;

        pixels[r] = 0; // eg. remove red
        pixels[g] = pixels[g];
        pixels[b] = pixels[b];
        pixels[a] = pixels[a];
    }

    // create a new image from the modified pixel data
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
    size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);

    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, [data length], NULL);

    CGImageRef newImageRef = CGImageCreate(
        width,
        height,
        bitsPerComponent,
        bitsPerPixel,
        bytesPerRow,
        colorspace,
        bitmapInfo,
        provider,
        NULL,
        false,
        kCGRenderingIntentDefault);

    // the modified image
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    // cleanup
    free(pixels);
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorspace);
    CGDataProviderRelease(provider);
    CGImageRelease(newImageRef);
}
But when this code runs, I get an EXC_BAD_ACCESS crash. What is it that I'm missing or doing wrong?
Try allocating memory for the pixels array, as in the following code:
char *pixels = (char *)malloc(data.length);
memcpy(pixels, [data bytes], data.length);
When pixels is no longer needed, release the memory by calling free(pixels).
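Putting that together with the original loop: the crash comes from writing into the read-only buffer behind [data bytes], and the loop also steps 3 bytes at a time through what is presumably 4-byte RGBA data. A minimal sketch of the fixed version (assuming a 4-byte RGBA layout, which CGImageGetBitmapInfo can confirm):

// Release callback: Core Graphics calls this when the data provider is destroyed.
static void releasePixels(void *info, const void *data, size_t size) {
    free((void *)data);
}

// Writable copy of the pixel data; the buffer behind [data bytes] is read-only.
char *pixels = (char *)malloc(data.length);
memcpy(pixels, [data bytes], data.length);

// Step 4 bytes per pixel (RGBA), not 3.
for (NSUInteger i = 0; i + 3 < data.length; i += 4) {
    pixels[i] = 0; // e.g. remove red; G, B, A stay as they are
}

// Let Core Graphics free the buffer when nothing references it any more,
// instead of calling free(pixels) while the new image may still use the bytes.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, data.length, releasePixels);
// ... create the CGImage/UIImage as before; also note that imageRef came from
// image.CGImage, which you do not own, so it must not be CGImageRelease'd.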
I am developing an iPad app which presents pictures from a photographer. Those photos are uploaded to a web server and served directly through the app, where they are downloaded and displayed using the method below:
if ([[NSFileManager defaultManager] fileExistsAtPath:[url path]]) {
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    CGImageRef cgImage = nil;
    if (source) {
        // `dict` is an options dictionary defined elsewhere in the class
        cgImage = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)dict);
    }
    UIImage *retImage = [UIImage imageWithCGImage:cgImage];
    if (cgImage) {
        CGImageRelease(cgImage);
    }
    if (source) {
        CFRelease(source);
    }
    return retImage;
}
I can see a serious difference in the photos' colours between the original picture (which looks the same whether displayed from disk or from the web, on both my Mac and the photographer's Mac) and what the iPad shows (the result is wrong in the app and even in Safari).
After some searching I found posts explaining that iDevices do not use the embedded color profile, so I knew I was on the right track. The photos are saved with an embedded colour profile.
I found out from some articles (for example this link from imageoptim or analogsenses) that I should save the pictures for device export by converting them to sRGB without embedding the colour profile, but I can't figure out how to do that. Each time I tried (I don't have Photoshop, so I used command-line ImageMagick), the resulting picture is still not displayed correctly on my iPad (or any other iPad I've tested).
Here is an example of a picture that does not display correctly on the iPhone or iPad, but does on the web. I would like to transform it so it displays correctly; any idea would be really welcome :)
[EDIT] I have managed to obtain a correct image using Photoshop's "save for web" options, but I'm still unable to apply those settings automatically to all my pictures.
To read an image, just use:
UIImage *image = [UIImage imageWithContentsOfFile:path];
As for the color profile issue, try the sips command-line tool to fix the image files. Something like:
mkdir converted
sips -m "/System/Library/ColorSync/Profiles/sRGB Profile.icc" *.JPG --out converted
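To sanity-check the result, sips can also print an image's color space (the filename here is just an example):
sips -g space converted/photo.JPG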
You can first get the color space through the CGImage property:
@property(nonatomic, readonly) CGImageRef CGImage;
CGColorSpaceRef CGImageGetColorSpace(CGImageRef image);
and, depending on the color space, apply a format conversion. So to get the color space of an image, you'd do:
CGColorSpaceRef colorspace = CGImageGetColorSpace([myUIImage CGImage]);
Note: make sure to follow the get/create/copy rules for CG objects.
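If the color space turns out not to be an RGB one, a simple (if lossy) option is to redraw the image into a standard RGB bitmap context and use the result. A minimal sketch with a hypothetical helper name:

// Redraws `image` into a device-RGB bitmap context, dropping any dependence
// on the original file's embedded profile. A sketch, not a drop-in fix.
UIImage *imageNormalizedToDeviceRGB(UIImage *image) {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}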
Color conversion to RGB8 (this can also be applied to RGB16 or RGB32 by changing the bits per component in the method newBitmapRGBA8ContextFromImage):
// Create a bitmap (width and height are the pixel dimensions of `image`)
unsigned char *bitmap = [ImageHelper convertUIImageToBitmapRGBA8:image];

// Create a UIImage using the bitmap
UIImage *imageCopy = [ImageHelper convertBitmapRGBA8ToUIImage:bitmap withWidth:width withHeight:height];

// Display the image copy on the GUI
UIImageView *imageView = [[UIImageView alloc] initWithImage:imageCopy];
ImageHelper.h
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface ImageHelper : NSObject {
}

/** Converts a UIImage to an RGBA8 bitmap.
 @param image - a UIImage to be converted
 @return an RGBA8 bitmap, or NULL if there was a memory allocation issue. Clean up the memory with free() when done.
 */
+ (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage *)image;

/** A helper routine that creates a 32-bit RGBA8 bitmap context for an image.
 @return a new context that is owned by the caller
 */
+ (CGContextRef) newBitmapRGBA8ContextFromImage:(CGImageRef)image;

/** Converts an RGBA8 bitmap to a UIImage.
 @param buffer - the RGBA8 unsigned char * bitmap
 @param width - the number of pixels wide
 @param height - the number of pixels tall
 @return a UIImage that is autoreleased, or nil if there was a memory allocation issue
 */
+ (UIImage *) convertBitmapRGBA8ToUIImage:(unsigned char *)buffer
                                withWidth:(int)width
                               withHeight:(int)height;
@end
ImageHelper.m
#import "ImageHelper.h"
#implementation ImageHelper
+ (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage *) image {
CGImageRef imageRef = image.CGImage;
// Create a bitmap context to draw the uiimage into
CGContextRef context = [self newBitmapRGBA8ContextFromImage:imageRef];
if(!context) {
return NULL;
}
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGRect rect = CGRectMake(0, 0, width, height);
// Draw image into the context to get the raw image data
CGContextDrawImage(context, rect, imageRef);
// Get a pointer to the data
unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(context);
// Copy the data and release the memory (return memory allocated with new)
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
size_t bufferLength = bytesPerRow * height;
unsigned char *newBitmap = NULL;
if(bitmapData) {
newBitmap = (unsigned char *)malloc(sizeof(unsigned char) * bytesPerRow * height);
if(newBitmap) { // Copy the data
for(int i = 0; i < bufferLength; ++i) {
newBitmap[i] = bitmapData[i];
}
}
free(bitmapData);
} else {
NSLog(#"Error getting bitmap pixel data\n");
}
CGContextRelease(context);
return newBitmap;
}
+ (CGContextRef) newBitmapRGBA8ContextFromImage:(CGImageRef) image {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    uint32_t *bitmapData;

    size_t bitsPerPixel = 32;
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel = bitsPerPixel / bitsPerComponent;

    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    size_t bytesPerRow = width * bytesPerPixel;
    size_t bufferLength = bytesPerRow * height;

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (!colorSpace) {
        NSLog(@"Error allocating color space RGB\n");
        return NULL;
    }

    // Allocate memory for image data
    bitmapData = (uint32_t *)malloc(bufferLength);
    if (!bitmapData) {
        NSLog(@"Error allocating memory for bitmap\n");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create the bitmap context
    context = CGBitmapContextCreate(bitmapData,
                                    width,
                                    height,
                                    bitsPerComponent,
                                    bytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast); // RGBA

    if (!context) {
        free(bitmapData);
        NSLog(@"Bitmap context not created");
    }

    CGColorSpaceRelease(colorSpace);
    return context;
}
+ (UIImage *) convertBitmapRGBA8ToUIImage:(unsigned char *) buffer
                                withWidth:(int) width
                               withHeight:(int) height {
    size_t bufferLength = width * height * 4;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
    size_t bitsPerComponent = 8;
    size_t bitsPerPixel = 32;
    size_t bytesPerRow = 4 * width;

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    if (colorSpaceRef == NULL) {
        NSLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
        return nil;
    }

    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    CGImageRef iref = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider,   // data provider
                                    NULL,       // decode
                                    YES,        // should interpolate
                                    renderingIntent);

    uint32_t *pixels = (uint32_t *)malloc(bufferLength);
    if (pixels == NULL) {
        NSLog(@"Error: Memory not allocated for bitmap");
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        CGImageRelease(iref);
        return nil;
    }

    CGContextRef context = CGBitmapContextCreate(pixels,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpaceRef,
                                                 bitmapInfo);
    if (context == NULL) {
        NSLog(@"Error: context not created");
        free(pixels);
        pixels = NULL; // avoid the double free in the cleanup below
    }

    UIImage *image = nil;
    if (context) {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
        CGImageRef imageRef = CGBitmapContextCreateImage(context);

        // Support both iPad 3.2 and iPhone 4 Retina displays with the correct scale
        if ([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
            float scale = [[UIScreen mainScreen] scale];
            image = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
        } else {
            image = [UIImage imageWithCGImage:imageRef];
        }

        CGImageRelease(imageRef);
        CGContextRelease(context);
    }

    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);

    if (pixels) {
        free(pixels);
    }
    return image;
}
@end
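A round-trip usage sketch, tying this back to the pixel-editing question at the top (hypothetical; it reuses the imageViewMain outlet from that question for illustration, takes width and height from the source image, and frees the buffer per the header docs):

UIImage *image = self.imageViewMain.image;
int width = (int)CGImageGetWidth(image.CGImage);
int height = (int)CGImageGetHeight(image.CGImage);

unsigned char *bitmap = [ImageHelper convertUIImageToBitmapRGBA8:image];
if (bitmap) {
    // Example edit: zero the red component of every RGBA8 pixel.
    for (int i = 0; i < width * height * 4; i += 4) {
        bitmap[i] = 0;
    }
    UIImage *edited = [ImageHelper convertBitmapRGBA8ToUIImage:bitmap
                                                     withWidth:width
                                                    withHeight:height];
    free(bitmap); // the caller owns the RGBA8 buffer
    self.imageViewMain.image = edited;
}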
@PhilippeAuriach
I think you might have a problem with [UIImage imageWithCGImage:cgImage]. My suggestion is to use [UIImage imageWithContentsOfFile:path] instead.
The code below might help you:
if ([[NSFileManager defaultManager] fileExistsAtPath:[url path]]) {
    // Provide the image path here...
    UIImage *image = [UIImage imageWithContentsOfFile:path];
    if (image) {
        return image;
    } else {
        // Return a default image (here `image` is nil)
        return image;
    }
}
I'm using the following code for reading an image from an OpenGL ES scene:
- (UIImage *)drawableToCGImage
{
    CGRect myRect = self.bounds;
    NSInteger myDataLength = myRect.size.width * myRect.size.height * 4;

    glFinish();
    glPixelStorei(GL_PACK_ALIGNMENT, 4);

    int width = myRect.size.width;
    int height = myRect.size.height;

    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer2);

    // Flip the rows: OpenGL's origin is bottom-left, CGImage's is top-left
    for (int y1 = 0; y1 < height; y1++) {
        for (int x1 = 0; x1 < width * 4; x1++) {
            buffer[(height - 1 - y1) * width * 4 + x1] = buffer2[y1 * 4 * width + x1];
        }
    }
    free(buffer2);

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * myRect.size.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(myRect.size.width, myRect.size.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);

    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return image;
}
It works perfectly on the iPad and older iPhone models, but I noticed that on the iPhone 6 (both the device and the simulator) the result looks like monochrome glitches. What could it be?
Also, here is my code for CAEAGLLayer properties:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                @YES, kEAGLDrawablePropertyRetainedBacking,
                                kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
Could somebody shed some light on this crazy magic, please?
Thanks to @MaticOblak, I've figured out the problem.
The buffer was being filled incorrectly, because the float values of the rect's size were not rounded correctly (apparently only for the iPhone 6's dimensions). Integer values should be used instead.
UPD: my issue was fixed with the following code:
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int width = viewport[2];
int height = viewport[3];
I'm working on a WebRTC app for iOS. My goal is to record video from WebRTC objects.
I have the RTCVideoRenderer delegate, which provides this method:
-(void)renderFrame:(RTCI420Frame *)frame{
}
My question is: how can I convert the RTCI420Frame object into something useful, to display the image or save it to disk?
RTCI420Frames use the YUV420 format. You can easily convert them to RGB using OpenCV, then convert that to a UIImage. Make sure you #import <RTCI420Frame.h>.
-(void) processFrame:(RTCI420Frame *)frame {
    // Single-channel Mat wrapping the Y plane, with the chroma rows following it
    cv::Mat mYUV((int)frame.height + (int)frame.chromaHeight, (int)frame.width, CV_8UC1, (void *)frame.yPlane);
    cv::Mat mRGB((int)frame.height, (int)frame.width, CV_8UC3); // 3-channel RGB output
    cv::cvtColor(mYUV, mRGB, CV_YUV2RGB_I420);
    UIImage *image = [self UIImageFromCVMat:mRGB];
}
-(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                 // width
                                        cvMat.rows,                 // height
                                        8,                          // bits per component
                                        8 * cvMat.elemSize(),       // bits per pixel
                                        cvMat.step[0],              // bytes per row
                                        colorSpace,
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider,
                                        NULL,                       // decode
                                        false,                      // should interpolate
                                        kCGRenderingIntentDefault);

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}
You may want to do this on a separate thread, especially if you are doing any video processing. Also, remember to use the .mm file extension so you can use C++.
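A minimal GCD sketch of that threading advice (assuming processFrame: touches no UIKit state itself; any UI update would have to be dispatched back to the main queue):

- (void)renderFrame:(RTCI420Frame *)frame {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self processFrame:frame]; // OpenCV conversion off the main thread
    });
}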
If you don't want to use OpenCV, it is possible to do the conversion manually. The following code kind of works, but the colors are messed up and it crashes after a few seconds:
int width = (int)frame.width;
int height = (int)frame.height;
uint8_t *data = (uint8_t *)malloc(width * height * 4);
const uint8_t *yPlane = frame.yPlane;
const uint8_t *uPlane = frame.uPlane;
const uint8_t *vPlane = frame.vPlane;
for (int i = 0; i < width * height; i++) {
    int rgbOffset = i * 4;
    uint8_t y = yPlane[i];
    uint8_t u = uPlane[i / 4];
    uint8_t v = vPlane[i / 4];
    uint8_t r = y + 1.402 * (v - 128);
    uint8_t g = y - 0.344 * (u - 128) - 0.714 * (v - 128);
    uint8_t b = y + 1.772 * (u - 128);
    data[rgbOffset] = r;
    data[rgbOffset + 1] = g;
    data[rgbOffset + 2] = b;
    data[rgbOffset + 3] = UINT8_MAX;
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(gtx);
UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
free(data);
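A hedged guess at what goes wrong there: the RGB math overflows uint8_t (hence the wrong colors), the U/V indexing ignores the half-resolution plane layout, and the CG objects are never released (hence the crash after a few seconds). A corrected sketch, assuming tightly packed I420 planes (real frames may have a row stride to respect):

static inline uint8_t clamp255(int v) { return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

int width = (int)frame.width;
int height = (int)frame.height;
uint8_t *data = (uint8_t *)malloc(width * height * 4);
const uint8_t *yPlane = frame.yPlane;
const uint8_t *uPlane = frame.uPlane;
const uint8_t *vPlane = frame.vPlane;
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int yy = yPlane[y * width + x];
        // One U and one V sample per 2x2 block of luma samples
        int uu = uPlane[(y / 2) * (width / 2) + (x / 2)] - 128;
        int vv = vPlane[(y / 2) * (width / 2) + (x / 2)] - 128;
        int rgbOffset = (y * width + x) * 4;
        data[rgbOffset]     = clamp255(yy + 1.402 * vv);
        data[rgbOffset + 1] = clamp255(yy - 0.344 * uu - 0.714 * vv);
        data[rgbOffset + 2] = clamp255(yy + 1.772 * uu);
        data[rgbOffset + 3] = UINT8_MAX;
    }
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(gtx);
UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
// Release everything Create-d; leaking these once per frame adds up fast.
CGImageRelease(cgImage);
CGContextRelease(gtx);
CGColorSpaceRelease(colorSpace);
free(data);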
I use this code, but it is very slow. Is there any other way to do it?
I tried the indexOfObject and containsObject methods on an array of images, but they did not work for me.
BOOL haveDublicate = NO;
UIImage *i = [ImageManager imageFromPath:path];
NSArray *photoImages = [ImageManager imagesFromPaths:photoPaths];
for (UIImage *saved in photoImages)
{
    if ([UIImagePNGRepresentation(saved) isEqualToData:UIImagePNGRepresentation(i)])
    {
        haveDublicate = YES;
    }
}
I think you should check the size of the images first. If the size and scale of both images are equal, check the pixel data directly for equality rather than the images' PNG representations; this will be much faster. (The link shows how to get the pixel data. To compare it, use memcmp.)
From that post (slightly modified):
NSData *rawDataFromUIImage(UIImage *image)
{
    assert(image);

    // Get the image into the data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger byteSize = height * width * 4;
    unsigned char *rawData = (unsigned char *)malloc(byteSize);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Hand the malloc'd buffer to NSData without copying; NSData frees it on dealloc
    return [NSData dataWithBytesNoCopy:rawData length:byteSize freeWhenDone:YES];
}
About why this is faster: UIImagePNGRepresentation (1) fetches the raw binary data and then (2) converts it to PNG format. Skipping the second step can only improve performance, because it is much more work than just doing step 1. And memcmp is faster than everything else in this example.
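The comparison itself could then look like this (a sketch; the helper name is made up, and rawDataFromUIImage is the function above):

BOOL imagesHaveIdenticalPixels(UIImage *a, UIImage *b)
{
    // Cheap gate: different size or scale means different images.
    if (!CGSizeEqualToSize(a.size, b.size) || a.scale != b.scale) {
        return NO;
    }
    NSData *rawA = rawDataFromUIImage(a);
    NSData *rawB = rawDataFromUIImage(b);
    return rawA.length == rawB.length &&
           memcmp(rawA.bytes, rawB.bytes, rawA.length) == 0;
}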
I am using the QCAR library, and I get the camera frame from the library on each frame.
I am trying to show this frame by calling setImage on my UIImageView *mCurFrameView. This works at first, and I can see the frames run smoothly, but after about 20 seconds it crashes.
Sometimes I get EXC_BAD_ACCESS on
int retVal = UIApplicationMain(argc, argv, nil, nil);
Sometimes it's just gdb showing the app paused.
Sometimes before the crash I get
2012-02-24 15:59:15.726 QRAR_nextGen[226:707] Received memory warning.
Here's my code:
- (void)SaveCurrentFrame:(UIImage *)image
{
    mCurFrameView.image = image;
}

- (void)renderFrameQCAR
{
    cout << "I am starting" << endl;
    QCAR::State state = QCAR::Renderer::getInstance().begin();
    QCAR::setFrameFormat(QCAR::RGB888, true);
    const QCAR::Image *image = state.getFrame().getImage(1); // 0: YUV, 1: Grayscale image
    if (image)
    {
        // colorSpace, bitmapInfo, provider, intent, imageRef and mCurFrame are ivars
        const char *data = (const char *)image->getPixels();
        int width = image->getWidth();
        int height = image->getHeight();
        colorSpace = CGColorSpaceCreateDeviceRGB();
        bitmapInfo = kCGBitmapByteOrderDefault;
        provider = CGDataProviderCreateWithData(NULL, data, width * height * 3, NULL);
        intent = kCGRenderingIntentDefault;
        imageRef = CGImageCreate(width, height, 8, 8 * 3, width * 3, colorSpace, bitmapInfo, provider, NULL, NO, intent);
        mCurFrame = [UIImage imageWithCGImage:imageRef];
        cout << "I am waiting" << endl;
        [self performSelectorOnMainThread:@selector(SaveCurrentFrame:) withObject:mCurFrame waitUntilDone:YES];
    }
}
I've tried several things: showing a CALayer for the camera, release/retain/autorelease, defining and not defining a property and synthesizing it.
I'd appreciate it a lot if someone could help me. I'm losing my mind. Thanks a lot.
You are never releasing the CGDataProvider (it leaks), which means you are keeping every frame's image data in memory. Try calling CGDataProviderRelease(provider) after performSelectorOnMainThread.
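Concretely, at the end of the if (image) block in renderFrameQCAR (names as in the question; the UIImage keeps whatever it still needs alive):

[self performSelectorOnMainThread:@selector(SaveCurrentFrame:) withObject:mCurFrame waitUntilDone:YES];
// The per-frame CG objects can be released now; mCurFrame retains its CGImage.
CGDataProviderRelease(provider);
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);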
Maybe you are missing this:
QCAR::Renderer::getInstance().end();
Here's a snippet working on the iPhone 4S/5 with iOS 6:
- (void)renderFrameIntoImage {
    QCAR::State state = QCAR::Renderer::getInstance().begin();
    QCAR::setFrameFormat(QCAR::GRAYSCALE, true);
    const QCAR::Image *image = state.getFrame().getImage(0); // 0: YUV, 1: Grayscale image
    const char *data = (const char *)image->getPixels();
    int width = image->getWidth();
    int height = image->getHeight();

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, width * height, NULL);
    CGColorRenderingIntent intent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(width, height, 8, 8, width * 1, colorSpace, bitmapInfo, provider, NULL, NO, intent);
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    QCAR::Renderer::getInstance().end();
    UIImageWriteToSavedPhotosAlbum(myImage, nil, nil, NULL);
}
Regards...
You have three leaks.
You must call a Release function for every Core Foundation function with Create (or Copy) in its name:
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
CGImageRelease(imageRef);
A C++ function that does the QCAR::Image to UIImage conversion:
inline UIImage *imageWithQCARCameraImage(const QCAR::Image *cameraImage)
{
    UIImage *image = nil;

    if (cameraImage) {
        CGColorSpaceRef colorSpace = NULL;
        QCAR::PIXEL_FORMAT pixelFormat = cameraImage->getFormat();
        int bitsPerPixel = QCAR::getBitsPerPixel(pixelFormat);

        switch (pixelFormat) {
            case QCAR::RGB888:
                colorSpace = CGColorSpaceCreateDeviceRGB();
                break;
            case QCAR::GRAYSCALE:
                colorSpace = CGColorSpaceCreateDeviceGray();
                break;
            case QCAR::YUV:
            case QCAR::RGB565:
            case QCAR::RGBA8888:
            case QCAR::INDEXED:
                std::cerr << "Image format conversion not implemented." << std::endl;
                break;
            case QCAR::UNKNOWN_FORMAT:
                std::cerr << "Image format unknown." << std::endl;
                break;
        }

        float width = cameraImage->getWidth();
        float height = cameraImage->getHeight();
        int bytesPerRow = cameraImage->getStride();
        const void *baseAddress = cameraImage->getPixels();
        size_t totalBytes = QCAR::getBufferSize(width, height, pixelFormat);

        if (bitsPerPixel > 0 && colorSpace != NULL) {
            CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                                      baseAddress,
                                                                      totalBytes,
                                                                      NULL);
            int bitsPerComponent = 8;
            CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone;
            CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

            CGImageRef imageRef = CGImageCreate(width,
                                                height,
                                                bitsPerComponent,
                                                bitsPerPixel,
                                                bytesPerRow,
                                                colorSpace,
                                                bitmapInfo,
                                                provider,
                                                NULL,
                                                NO,
                                                renderingIntent);

            image = [UIImage imageWithCGImage:imageRef];

            CGColorSpaceRelease(colorSpace);
            CGDataProviderRelease(provider);
            CGImageRelease(imageRef);
        }
    }

    return image;
}
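Calling it from the render loop might look like this (a sketch; which image index holds which format depends on the QCAR::setFrameFormat calls made earlier, and SaveCurrentFrame: is the main-thread setter from the question):

- (void)renderFrameQCAR
{
    QCAR::State state = QCAR::Renderer::getInstance().begin();
    const QCAR::Image *cameraImage = state.getFrame().getImage(0);
    UIImage *image = imageWithQCARCameraImage(cameraImage);
    QCAR::Renderer::getInstance().end();
    if (image) {
        [self performSelectorOnMainThread:@selector(SaveCurrentFrame:)
                               withObject:image
                            waitUntilDone:YES];
    }
}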