Why does the image stretch? (UIImage to CIImage to UIImage) - iOS

For various reasons, I need to convert an image from UIImage to CIImage (for processing) and back to UIImage again. I do it via UIImage - CGImage - CIImage - UIImage. Ignoring the irrelevant parts, the code is shown below.
UIImage *originalPic = [[UIImage alloc] init]; //get this from UIImagePickerController
CIImage *originalCIPic = [CIImage imageWithCGImage:originalPic.CGImage];
UIImage *finalResultUIImage = [UIImage imageWithCIImage:originalCIPic];
//then I put the image on an Image View(contentMode has been set)
self.viewOfImage.contentMode = UIViewContentModeScaleAspectFit;
self.viewOfImage.image = finalResultUIImage;
//I used NSLog to check the size before and after the conversion, and they are the same.
NSLog(@"this is the final UIImage, size is %@", NSStringFromCGSize(finalResultUIImage.size));
But the image shown on the screen is stretched.
What am I doing wrong? Or is there an alternative approach for me?
Thank you very much, and I am sorry that I don't have enough reputation to post a picture.
:)

Try this - render the CIImage into a bitmap graphics context, so the resulting UIImage is backed by a CGImage that UIImageView can display without stretching:
NSString *filePath = [[NSBundle mainBundle] pathForResource:@"abcd" ofType:@"png"];
CIImage *ciImage = [CIImage imageWithContentsOfURL:[NSURL fileURLWithPath:filePath]];
UIImage *imageByCIImage = [UIImage imageWithCIImage:ciImage scale:1.0f orientation:UIImageOrientationUp];
UIGraphicsBeginImageContext(imageByCIImage.size);
[imageByCIImage drawInRect:CGRectMake(0, 0, imageByCIImage.size.width, imageByCIImage.size.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.imageview.image = newImage;
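An equivalent sketch (reusing ciImage and self.imageview from the snippet above): render through a CIContext instead of a UIKit graphics context, so the resulting UIImage is again backed by a CGImage and lays out correctly in a UIImageView.
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
UIImage *renderedImage = [UIImage imageWithCGImage:cgImage]; // CGImage-backed, so UIImageView sizes it correctly
CGImageRelease(cgImage);
self.imageview.image = renderedImage;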

Related

Objective-C (iOS), Image resizing using CIFilter produces invalid output (UIImage)

I'm trying to resize a JPEG image in Objective-C (iOS). The input is a JPG file and the output should be a UIImage.
I have this code:
// Load image from a file
NSData *imageData = [NSData dataWithContentsOfFile:jpgFile];
UIImage *inputImage = [UIImage imageWithData:imageData];
CIImage *ciImage = inputImage.CIImage;
// Set Lanczos filter
CIFilter *scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[scaleFilter setValue:ciImage forKey:@"inputImage"];
[scaleFilter setValue:[NSNumber numberWithFloat:0.5] forKey:@"inputScale"];
[scaleFilter setValue:[NSNumber numberWithFloat:1.0] forKey:@"inputAspectRatio"];
// Get an output
CIImage *finalImage = [scaleFilter valueForKey:@"outputImage"];
UIImage *outputImage = [[UIImage alloc] initWithCIImage:finalImage];
But the output image is invalid (outputImage.size.height is 0), and it causes the following errors in later processing:
CGContextDrawImage: invalid context 0x0. If you want to see the
backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental
variable. ImageIO: JPEG Empty JPEG image (DNL not supported)
Update:
I don't know what is wrong with the code above (apart from the initialization issue mentioned by Sulthan below - thanks to him for that). I used the following code in the end (this code works fine):
CIImage *input_ciimage = [[CIImage alloc] initWithImage:inputImage];
CIImage *output_ciimage =
    [[CIFilter filterWithName:@"CILanczosScaleTransform" keysAndValues:
        kCIInputImageKey, input_ciimage,
        kCIInputScaleKey, [NSNumber numberWithFloat:0.5],
        nil] outputImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef output_cgimage = [context createCGImage:output_ciimage fromRect:[output_ciimage extent]];
UIImage *output_uiimage = [UIImage imageWithCGImage:output_cgimage];
CGImageRelease(output_cgimage);
This line is the problem:
CIImage *ciImage = inputImage.CIImage;
If the image was not initialized from a CIImage, then its own CIImage property is nil.
A safer approach is:
CIImage *ciImage = [[CIImage alloc] initWithImage:inputImage];
Also, make sure the image has been loaded successfully from your data.
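A small sketch of both checks (using inputImage and jpgFile from the question's code):
// Use the UIImage's own CIImage when it has one; otherwise create a CIImage
// explicitly, which is the usual case for an image decoded from JPEG data.
CIImage *ciImage = inputImage.CIImage;
if (ciImage == nil) {
    ciImage = [[CIImage alloc] initWithImage:inputImage];
}
if (inputImage == nil || ciImage == nil) {
    NSLog(@"Could not load an image from %@", jpgFile);
}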

Get black & white image from UIImage in iPhone SDK?

I want to convert an image to pure black & white. I tried, and the result I got is the left image I have attached, but the result should be the right image I have attached, according to the requirements of the application.
I have used lots of CIFilters (CIColorMonochrome, CIColorControls, CIPhotoEffectTonal, etc.), but none of them is working.
Please check the code below and the attached image results.
- (UIImage *)imageBlackAndWhite
{
CIImage *beginImage = [CIImage imageWithData: UIImageJPEGRepresentation(self.captureImage.image, .1)];
//CIImage *beginImage = [CIImage imageWithCGImage:self.CGImage];
CIImage *output = [CIFilter filterWithName:@"CIColorMonochrome" keysAndValues:
                   kCIInputImageKey, beginImage,
                   @"inputIntensity", [NSNumber numberWithFloat:1.0],
                   @"inputColor", [[CIColor alloc] initWithColor:[UIColor whiteColor]],
                   nil].outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgiimage = [context createCGImage:output fromRect:output.extent];
UIImage *newImage = [UIImage imageWithCGImage:cgiimage];
CGImageRelease(cgiimage);
return newImage;
}
My Result (left image) vs. Expected Result (right image)
You can draw the image with blend mode kCGBlendModeLuminosity and an alpha value of 1.0, as in the sketch below.
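A minimal sketch of that approach (the method name imageByConvertingToGrayscale: is hypothetical, not from the original post):
// Draw the image over a white background using kCGBlendModeLuminosity, which keeps
// the photo's luminance but takes hue/saturation from the white backdrop,
// yielding a grayscale image.
- (UIImage *)imageByConvertingToGrayscale:(UIImage *)image
{
    CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextFillRect(context, rect);
    [image drawInRect:rect blendMode:kCGBlendModeLuminosity alpha:1.0];
    UIImage *grayImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return grayImage;
}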

GPUImageColorBalanceFilter not working

I am using GPUImageColorBalanceFilter as defined in https://github.com/liovch/GPUImage, but the filtered image doesn't appear changed from the original image at all when displayed! The code runs without errors. What am I doing wrong?
CGImageRef cgimg = [context createCGImage:beginImage fromRect:[beginImage extent]];
BOOL preserveLuminosity = NO;
GPUVector3 shadows= (GPUVector3){1,0.0,1};
GPUVector3 midtones = (GPUVector3){-1,-1,0.13};
GPUVector3 highlights = (GPUVector3){0.0,-1,1};
UIImage *img1 = [UIImage imageWithCGImage:cgimg];
GPUImagePicture *pic1 = [[GPUImagePicture alloc] initWithImage:img1];
GPUImageColorBalanceFilter *filter1 = [[GPUImageColorBalanceFilter alloc] init];
[filter1 setShadows:shadows];
[filter1 setMidtones:midtones];
[filter1 setHighlights:highlights];
[filter1 setPreserveLuminosity:preserveLuminosity];
[pic1 addTarget:filter1];
[pic1 processImage];
UIImage *newImage = [filter1 imageFromCurrentlyProcessedOutput];
self.imageView.image = newImage;
CGImageRelease(cgimg);
Note: Since I was already working with Brad Larson's GPUImage library, I added GPUImageColorBalanceFilter.h, GPUImageColorBalanceFilter.m, GPUImageOpenGLESContext.h, and GPUImageOpenGLESContext.m from https://github.com/liovch/GPUImage to my codebase.
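One thing worth checking (an assumption on my part, since it depends on which GPUImage revision the fork was based on): newer versions of Brad Larson's GPUImage require the filter to be told to keep its framebuffer before processImage, and the capture method is named imageFromCurrentFramebuffer. A sketch of that variant:
[pic1 addTarget:filter1];
[filter1 useNextFrameForImageCapture]; // needed before processImage in newer GPUImage versions
[pic1 processImage];
UIImage *filteredImage = [filter1 imageFromCurrentFramebuffer];
self.imageView.image = filteredImage;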

How to compress the saved image in an iPhone application [duplicate]

This question already has answers here:
UIImage: Resize, then Crop
(16 answers)
Closed 9 years ago.
I have an app in which I capture an image. That works fine, but I want to reduce the size of the image. I searched the net and found some code, which is below.
This is my code, which works fine when the image is captured:
NSString *type = [info objectForKey:UIImagePickerControllerMediaType];
UIImage *pickedImage = [info objectForKey:UIImagePickerControllerOriginalImage];
NSData *imageData = UIImagePNGRepresentation(pickedImage);
NSString *documentsDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
I want to reduce the size of the picked image. I found the following code, but I don't understand how to use it: it uses newSize, but where does it take the original image that should be compressed?
UIGraphicsBeginImageContext(newSize);
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
What you can do is:
NSString *type = [info objectForKey:UIImagePickerControllerMediaType];
UIImage *pickedImage = [info objectForKey:UIImagePickerControllerOriginalImage];
CGSize sizeCropped = CGSizeMake(602, 450);//you can adjust any size here...
pickedImage = [yourclassname imageWithImage:pickedImage scaledToSize:sizeCropped];//call below function...
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Let me know whether it is working or not!
Happy coding!
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
   UIGraphicsBeginImageContext( newSize ); 
   [image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
   UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
   UIGraphicsEndImageContext();
   return newImage;
}
you can call this function where you want the compressed image using these lines of code.
CGSize firstSize = CGSizeMake(210.0,210.0);
   UIImage *compImage=[mainclass imageWithImage:image scaledToSize:firstSize];
If you want to save that new image with the new size, then after your code add that image to the photo library with the line below.
UIImageWriteToSavedPhotosAlbum(UIImage *image, id completionTarget, SEL completionSelector, void *contextInfo);
Example:
UIGraphicsBeginImageContext(newSize);
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
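If the goal is a smaller file rather than smaller pixel dimensions, another option is to write the resized image out as JPEG with reduced compression quality instead of PNG. A sketch, reusing documentsDirectory from the question's code (the 0.5 quality factor and the file name are arbitrary examples):
// Write the resized image as JPEG; lower quality means a smaller file.
NSData *jpegData = UIImageJPEGRepresentation(newImage, 0.5); // 0.0 = smallest file, 1.0 = best quality
NSString *jpegPath = [documentsDirectory stringByAppendingPathComponent:@"picked.jpg"];
[jpegData writeToFile:jpegPath atomically:YES];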

What is the proper way to handle retina images from resized NSData?

When a user selects an image from their photo library, I'm resizing it and then uploading it to my server, and then at some other point in the app the user can view all their photos. (I'm simplifying the workflow.)
The UIImageView on the "detail" screen is 320 x 320. Based upon the method below, should I be using:
UIImage *image = [UIImage imageWithCGImage:img];
or
UIImage *image = [UIImage imageWithCGImage:img scale:[UIScreen mainScreen].scale orientation:img.imageOrientation];
Part B: when I download the image (NSData), should I use imageWithCGImage: or imageWithCGImage:scale:orientation:?
- (UIImage *)resizedImageForUpload:(UIImage *)originalImage {
static CGSize __maxSize = {640, 640};
NSMutableData *data = [[NSMutableData alloc] initWithData:UIImageJPEGRepresentation(originalImage, 1.0)];
CFMutableDataRef dataRef = (__bridge CFMutableDataRef)data;
CGImageSourceRef imgSrc = CGImageSourceCreateWithData(dataRef, NULL);
CGFloat width = [originalImage maxDimensionForConstraintSize:__maxSize];
NSNumber *maxWidth = [NSNumber numberWithFloat:width];
NSDictionary *options = @{
    (__bridge NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
    (__bridge NSString *)kCGImageSourceCreateThumbnailWithTransform : @YES,
    (__bridge NSString *)kCGImageSourceThumbnailMaxPixelSize : maxWidth
};
CFDictionaryRef cfOptions = (__bridge CFDictionaryRef)options;
CGImageRef img = CGImageSourceCreateThumbnailAtIndex(imgSrc, 0, cfOptions);
CFStringRef type = CGImageSourceGetType(imgSrc);
CGImageDestinationRef imgDest = CGImageDestinationCreateWithData(dataRef, type, 1, NULL);
CGImageDestinationAddImage(imgDest, img, NULL);
CGImageDestinationFinalize(imgDest);
UIImage *image = [UIImage imageWithCGImage:img];
CFRelease(imgSrc);
CGImageRelease(img);
CFRelease(imgDest);
return image;
}
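For context, a hypothetical call site for the method above (the pickedImage variable and the 0.8 quality factor are assumptions, not from the original post):
// Hypothetical usage: shrink the picked photo to at most 640x640 pixels, then encode for upload.
UIImage *pickedImage = [info objectForKey:UIImagePickerControllerOriginalImage];
UIImage *resizedImage = [self resizedImageForUpload:pickedImage];
NSData *uploadData = UIImageJPEGRepresentation(resizedImage, 0.8);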
It appears I've found the answer to my own question. The resizedImageForUpload: method shouldn't try to scale based upon the device. Since I'm defining a max size of 640 x 640 (the retina size for my 320 x 320 UIImageView), no other manipulation is necessary. I've added some caching for the images, and I'm handling the scaling at that point:
UIImage *image = [UIImage imageWithData:imgData scale:[UIScreen mainScreen].scale];
and then return it. The reason I thought I had messed something up was that I was trying to scale an already-scaled image. Lesson learned.
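For the display side, a minimal sketch of that idea (detailImageView is a hypothetical name for the 320x320 image view):
// Wrap the downloaded 640x640 JPEG data at the screen scale so a 320x320-point
// UIImageView shows it pixel-for-pixel on a retina display.
UIImage *image = [UIImage imageWithData:imgData scale:[UIScreen mainScreen].scale];
self.detailImageView.image = image;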
