Saving UIImage in custom pixel dimension - ios

I have the implementation below to save a UIView as an image to the device's photo album. It works correctly, but the saved image's size depends on the device's screen resolution. For instance, if I run it on an iPhone 5, the saved image will be 640 x 640 px. My goal is to save images at a custom size, like 1800 x 1800 px, on every device. I would really appreciate an example or any guidance that helps me find the right solution. Any other tips are welcome; it doesn't matter if I have to use a different implementation, as long as I can produce images at a custom pixel size.
- (IBAction)saveImg:(id)sender {
    // self.fullVw holds the image that I want to save
    UIImage *imageToSave = [self.fullVw pb_takeSnapshot];
    NSData *pngData = UIImagePNGRepresentation(imageToSave);
    UIImage *imageToSave2 = [UIImage imageWithData:pngData];
    UIImageWriteToSavedPhotosAlbum(imageToSave2, nil, nil, nil);
}
// This method is in a UIView category
- (UIImage *)pb_takeSnapshot {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque, 0.0);
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
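One way to get a fixed pixel size on every device is to open the image context at the target size with a scale of 1.0 (so one point equals one pixel) and let drawViewHierarchyInRect: scale the view to fill it. A minimal sketch for the same UIView category; pb_takeSnapshotWithPixelSize: is a hypothetical name, not part of the original code:

- (UIImage *)pb_takeSnapshotWithPixelSize:(CGSize)pixelSize {
    // Scale 1.0 makes the context's point size equal its pixel size,
    // regardless of the device's screen scale.
    UIGraphicsBeginImageContextWithOptions(pixelSize, self.opaque, 1.0);
    // drawViewHierarchyInRect: scales the view's contents to fill the rect.
    [self drawViewHierarchyInRect:CGRectMake(0, 0, pixelSize.width, pixelSize.height)
               afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

Called as [self.fullVw pb_takeSnapshotWithPixelSize:CGSizeMake(1800, 1800)], this should produce an 1800 x 1800 px image on any device. Note that the view is upscaled, so the result only looks sharp if the view's content survives the enlargement.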

Related

Combining Multiple UI Images and UI Labels into 1 Image

Basically I have a main UIImage, which acts as a background/border. Within that UIImage I have 2 more UIImages, vertically split with a gap around them so you can still see a border of the main background UIImage. On each side I have a UILabel, to describe the images. Below is a picture of what I mean to help put into context.
What I want to achieve is to make this into 1 image, but keeping all of the current positions, layouts, image layouts (Aspect Fill) and label sizes and label background colours the same. I also want this image to be the same quality so it still looks good.
I have looked at many other Stack Overflow questions and have so far come up with the following, but it has these problems:
Doesn't position the image labels to their correct places and sizes
Doesn't have the background colour for the labels or main image
Doesn't have the images as Aspect Fill (like the UIImageViews) so the outside of each picture is shown as well and isn't cropped properly, like in the above example.
Below is my code so far. Can anyone help me achieve the result in the image above? I am fairly new to iOS development and am struggling a bit:
- (UIImage *)renderImagesForSharing {
    CGSize newImageSize = CGSizeMake(640, 640);
    NSLog(@"CGSize %@", NSStringFromCGSize(newImageSize));
    UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0);
    [self.mainImage.layer renderInContext:UIGraphicsGetCurrentContext()];
    [self.beforeImageSide.image drawInRect:CGRectMake(0, 0, newImageSize.width / 2, newImageSize.height)];
    [self.afterImageSize.image drawInRect:CGRectMake(320, 0, newImageSize.width / 2, newImageSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
    [self.beforeLabel drawTextInRect:CGRectMake(60.0f, 0.0f, 200.0f, 50.0f)];
    [self.afterLabel drawTextInRect:CGRectMake(0.0f, 0.0f, 100.0f, 50.0f)];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    NSData *imgData = UIImageJPEGRepresentation(image, 0.9);
    UIImage *imageJPEG = [UIImage imageWithData:imgData]; // wrap a UIImage around the JPEG representation
    UIGraphicsEndImageContext();
    return imageJPEG;
}
Thank you in advance for any help guys!
I don't understand why you want to use drawInRect: to accomplish this task.
Since you have the images and everything with you, you can easily create a view as you have shown in the image. Then take a screenshot of it like this:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *theImageData = UIImageJPEGRepresentation(theImage, 1.0); // you can use PNG too
[theImageData writeToFile:@"example.jpeg" atomically:YES];
Change self.view to the view you just created.
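Note that a bare relative path like "example.jpeg" generally isn't writable on iOS. A small sketch (destination directory is an assumption) of writing into the app's Documents directory instead:

NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
NSString *filePath = [docsDir stringByAppendingPathComponent:@"example.jpeg"];
[theImageData writeToFile:filePath atomically:YES];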
This should give you some idea:
UIGraphicsBeginImageContextWithOptions(DiagnosisView.bounds.size, DiagnosisView.opaque, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[UIColor redColor] set];
CGContextFillRect(ctx, DiagnosisView.frame);
[DiagnosisView.layer renderInContext:ctx];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSString *imagePath = [KdiagnosisFolderPath stringByAppendingPathComponent:FileName];
NSData *pngData = UIImagePNGRepresentation(img);
[pngData writeToFile:imagePath atomically:YES];
pngData = nil;
imagePath = nil;

GPUImage: strange image deformation (but only with some photo) [duplicate]

I'm trying to prepare an image for OCR, and I use GPUImage to do it. The code works fine until I crop the image; after cropping I get a bad result.
Crop area:
https://www.dropbox.com/s/e3mlp25sl6m55yk/IMG_0709.PNG
Bad result:
https://www.dropbox.com/s/wtxw7li6paltx21/IMG_0710.PNG
+ (UIImage *)doBinarize:(UIImage *)sourceImage
{
    // First off, try to grayscale the image using an iOS Core Graphics routine
    UIImage *grayScaledImg = [self grayImage:sourceImage];
    GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:grayScaledImg];
    GPUImageAdaptiveThresholdFilter *stillImageFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
    stillImageFilter.blurRadiusInPixels = 8.0;
    [stillImageFilter prepareForImageCapture];
    [imageSource addTarget:stillImageFilter];
    [imageSource processImage];
    UIImage *retImage = [stillImageFilter imageFromCurrentlyProcessedOutput];
    [imageSource removeAllTargets];
    return retImage;
}
+ (UIImage *)grayImage:(UIImage *)inputImage
{
    // Create a graphics context.
    UIGraphicsBeginImageContextWithOptions(inputImage.size, NO, 1.0);
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);
    // Draw the image with the luminosity blend mode.
    // On top of a white background, this will give a black-and-white image.
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];
    // Get the resulting image.
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
UPDATE:
In the meantime, when you crop your images, do so to the nearest multiple of 8 pixels in width and you should see the correct result.
Thank you @Brad Larson! I resized the image width to the nearest multiple of 8 and got what I wanted:
- (UIImage *)imageWithMultiple8ImageWidth:(UIImage *)image
{
    float fixSize = next8(image.size.width);
    CGSize newSize = CGSizeMake(fixSize, image.size.height);
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

float next8(float n) {
    int bits = (int)n & 7; // distance past the previous multiple of 8
    if (bits == 0)
        return n;
    return n + (8 - bits);
}
Before I even get to the core issue here, I should point out that the GPUImageAdaptiveThresholdFilter already does a conversion to grayscale as a first step, so your -grayImage: code in the above is unnecessary and will only slow things down. You can remove all that code and just pass your input image directly to the adaptive threshold filter.
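A minimal sketch of that simplified version, reusing only the GPUImage calls already shown in the question:

+ (UIImage *)doBinarize:(UIImage *)sourceImage
{
    // Feed the source image straight to the filter; the adaptive threshold
    // filter performs its own grayscale conversion as a first step.
    GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:sourceImage];
    GPUImageAdaptiveThresholdFilter *stillImageFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
    stillImageFilter.blurRadiusInPixels = 8.0;
    [stillImageFilter prepareForImageCapture];
    [imageSource addTarget:stillImageFilter];
    [imageSource processImage];
    UIImage *retImage = [stillImageFilter imageFromCurrentlyProcessedOutput];
    [imageSource removeAllTargets];
    return retImage;
}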
What I believe is the problem here is a recent set of changes to the way that GPUImagePicture pulls in image data. It appears that images which aren't a multiple of 8 pixels wide end up looking like the above when imported. Some fixes were proposed about this, but if the latest code from the repository (not CocoaPods, which is often out of date relative to the GitHub repository) is still doing this, some more work may need to be done.
In the meantime, when you crop your images, do so to the nearest multiple of 8 pixels in width and you should see the correct result.
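One way to honor that constraint is to snap the crop rectangle's width down to a multiple of 8 before cropping. A sketch with a hypothetical helper name; the rect is in pixel coordinates of the backing CGImage:

+ (UIImage *)cropImage:(UIImage *)image toRect:(CGRect)cropRect
{
    // Round the width down to the nearest multiple of 8 pixels.
    cropRect.size.width -= (NSInteger)cropRect.size.width % 8;
    CGImageRef cropped = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *result = [UIImage imageWithCGImage:cropped
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(cropped);
    return result;
}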

Very slow to resize animated UIImage frame by frame

I'm trying to resize down a GIF image in iOS, so I'm looping through each frame, as you can see in the code below.
NSArray *animationImages = [NSArray arrayWithArray:sourceImage.images];
NSMutableArray *newAnimationImages = [[NSMutableArray alloc] init];
//@autoreleasepool {
UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
for (UIImage *frame in animationImages) {
    [frame drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newFrame = UIGraphicsGetImageFromCurrentImageContext();
    [newAnimationImages addObject:newFrame];
}
UIGraphicsEndImageContext();
//}
newImage = [UIImage animatedImageWithImages:newAnimationImages duration:sourceImage.duration];
Unfortunately, for some reason, it's taking ages to resize (around 3-4 seconds for a 2-4 second GIF).
Any ideas what I'm doing wrong?
Quick update: I ended up using the Aspect Fit content mode on a UIImageView, which did exactly what I needed, and then did the actual resizing server-side.
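For anyone who still needs to resize in-app, a sketch of one likely improvement (an untested suggestion, not from the original thread): give each frame its own context and autorelease pool, so every frame is drawn onto a clean canvas and temporary images are released while the loop runs instead of piling up:

NSMutableArray *newAnimationImages = [NSMutableArray arrayWithCapacity:sourceImage.images.count];
CGSize newSize = CGSizeMake(newWidth, newHeight);
for (UIImage *frame in sourceImage.images) {
    @autoreleasepool {
        // A fresh context per frame avoids drawing over the previous frame.
        UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
        [frame drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        [newAnimationImages addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }
}
UIImage *newImage = [UIImage animatedImageWithImages:newAnimationImages
                                            duration:sourceImage.duration];

Running this off the main thread (for example on a background queue) would also keep the UI responsive while the frames are processed.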

How to draw retina-captured image back to retina and maintain high quality?

I have an app that does a screen capture on a retina iPad (iPad 3). Or maybe I just do the screen capture with HOME+POWER and get the image that way; it doesn't matter. In the end, I have a captured image that is 2048x1536 -- that's 1024x768 @2x.
Now I want to display this image -- this 2048x1536 image -- back on the screen and I want to preserve the lovely retina-ness of it.
When I do something like this (code typed in web browser):
UIImage *myImage = [UIImage imageWithContentsOfFile: pathToMyHiresImage];
UIImageView *myImageView = [[UIImageView alloc] initWithImage: myImage];
myImageView.frame = screenBounds; // make it fit
// etc...
I end up with a low-res (jaggy diagonal lines and text) version of the image.
So my question is: what do I have to do in order to display that image back on the screen at @2x DPI, so it has that pretty retina look?
Thanks!
This should do the trick:
UIImage *myImage = [UIImage imageWithContentsOfFile: pathToMyHiresImage];
// *****Added Code*****
myImage = [UIImage imageWithCGImage:myImage.CGImage scale:2 orientation:myImage.imageOrientation];
// ********************
UIImageView *myImageView = [[UIImageView alloc] initWithImage: myImage];
myImageView.frame = screenBounds; // make it fit
// etc ...
Good luck!
EDIT (Olie)
Here is the final code, taking DavidH's suggestion into account:
UIImage *myImage = [[UIImage alloc] initWithContentsOfFile:path];
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
{
    float screenScale = [[UIScreen mainScreen] scale];
    if (screenScale > 1.)
    {
        id oldImage = myImage;
        myImage = [[UIImage imageWithCGImage:myImage.CGImage scale:screenScale orientation:myImage.imageOrientation] retain];
        [oldImage release];
    }
}
This is where Quartz, i.e. the real data, and UIImage, the veneer, collide. So you get a UIImage of what you created. Ask it for the CGImage and then look at that image. The width and height better be 2048x1536 or something is wrong.
Once you can figure out how to get a "real" sized image, you can save that as "foo@2x.png", then resize it by half, and save that as "foo.png".
UIImage makes things really easy for casual use, but when you want real control, you have to dive into Quartz and work where the rubber hits the road.
My guess is that you think you got a retina image but didn't, and when you go to show it, it's at half resolution.
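As a quick way to act on that advice, a short sketch (variable names assumed from the code above) that checks the backing CGImage's real pixel dimensions:

CGImageRef cg = myImage.CGImage;
size_t pixelWidth = CGImageGetWidth(cg);   // expect 2048 for the capture above
size_t pixelHeight = CGImageGetHeight(cg); // expect 1536
NSLog(@"backing pixels: %zu x %zu", pixelWidth, pixelHeight);

If this logs 1024 x 768, the capture itself was not retina resolution, and no amount of display-side scale tweaking will bring the detail back.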
