Objective-C UIImage bad image quality - iOS

I am not the best graphic designer, but I need to set up a few icons. I made them in GIMP with options like Border, etc. They are all designed at 1000x1000 px and used in Objective-C code in a UIImageView.
I am wondering why the resized icons look so terrible. Any suggestions?
In app:
http://s14.postimg.org/k2g9uayld/Screen_Shot_2014_12_18_at_11_22_53.png
http://s14.postimg.org/biwvwjq8x/Screen_Shot_2014_12_18_at_11_23_02.png
Original image:
Can't Post more than 2 links so: s17.postimg.org/qgi4p80an/fav.png
I don't think it matters, but:
UIImage *image = [UIImage imageNamed:@"fav.png"];
image = [UIImage imageWithCGImage:[image CGImage] scale:25 orientation:UIImageOrientationUp];
self.navigationItem.titleView = [[UIImageView alloc] initWithImage:image];
But one of the images has been set up in a storyboard, and the effect is the same.

[self.favO.layer setMinificationFilter:kCAFilterTrilinear];
[self.favO setImage:[self resizeImage:[UIImage imageNamed:@"fav.png"] newSize:CGSizeMake(35, 35)]
           forState:UIControlStateNormal];

- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);

    // Draw into the context; this scales the image
    CGContextDrawImage(context, newRect, imageRef);

    // Pull the resized image out of the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
This fixed the problem. The problem occurs when UIImageView itself is left to do the resizing.
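For the navigation bar icon from the question, a minimal usage sketch of the same idea (the 35x35 point size is just an example value; resizeImage:newSize: is the method above):
// Hand UIKit an image that is already the target size, so it no longer downscales 1000x1000 px itself
UIImage *icon = [self resizeImage:[UIImage imageNamed:@"fav.png"] newSize:CGSizeMake(35, 35)];
self.navigationItem.titleView = [[UIImageView alloc] initWithImage:icon];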

To avoid this kind of behavior I suggest you use SVGs instead of PNG files. Using the SVGKit library you can then resize them to your heart's content. Since SVG is a vector format, there is no loss when scaling.
https://github.com/SVGKit/SVGKit
EDIT:
To add to this solution: your PNG file at 1000x1000 px must be quite heavy, whereas an SVG file is just text, which makes it very lightweight.
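A rough sketch of that approach, assuming SVGKit's SVGKImage API and a hypothetical vector version of the icon (fav.svg) in the bundle; check the SVGKit README for the exact calls:
#import <SVGKit/SVGKit.h>   // assumes SVGKit is integrated, e.g. via CocoaPods

// Sketch only: the SVG is rasterized at whatever size you set, so there is no scaling blur
SVGKImage *svg = [SVGKImage imageNamed:@"fav.svg"];
svg.size = CGSizeMake(35, 35);
self.navigationItem.titleView = [[UIImageView alloc] initWithImage:svg.UIImage];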

Related

How to resize an image which has transparent portions in Objective-C

I am trying to resize a PNG which has transparent sections. First I used:
UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
[sourceImage drawInRect:CGRectMake(0,0, newSize.width, newSize.height)];
UIImage* targetImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
but the resulting image is opaque. Then I read that drawInRect: by default draws the image as opaque, so I modified the drawInRect line to:
[fromImage drawInRect:CGRectMake(0,0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:0.0];
The resulting image is blank. I think there should be some combination of parameters in drawInRect: that will retain the transparency of the image.
I have searched other threads, and everywhere I see the generic resizing code, but nowhere does it talk about images with transparent portions.
Does anybody have any idea?
CGRect rect = CGRectMake(0, 0, newsize.width, newsize.height);
UIGraphicsBeginImageContext(rect.size);
[sourceImage drawInRect:rect];
UIImage *picture1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSData *imageData = UIImagePNGRepresentation(picture1);
textureColor = [UIImage imageWithData:imageData];
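Alternatively, keeping the question's drawInRect:blendMode:alpha: approach: the blank result comes from passing alpha:0.0, which draws at zero opacity. A sketch with a non-opaque context and full alpha, which keeps the transparent regions:
// Non-opaque context (NO) keeps the alpha channel; alpha:1.0 draws the image at full opacity
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[sourceImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)
              blendMode:kCGBlendModeNormal
                  alpha:1.0];
UIImage *targetImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();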

unable to resize image properly

I am having a problem with UIImage resizing: image masking is working fine, but after applying the mask the UIImage is stretched. The problem is with scaling, as the image is not scaled properly.
CCClippingNode *clippingNode = [[CCClippingNode alloc] initWithStencil:pMaskingFrame ];
pTobeMasked.scaleX = (float)pMaskingFrame.contentSize.width / (float)pTobeMasked.contentSize.width;
pTobeMasked.scaleY = (float)pMaskingFrame.contentSize.height / (float)pTobeMasked.contentSize.height;
clippingNode.alphaThreshold = 0;
[pContainerNode addChild:clippingNode];
pTobeMasked.position = ccp(pMaskingFrame.position.x, pMaskingFrame.position.y);
[clippingNode addChild:pTobeMasked];
In one of my projects I have used the function below to resize an image:
/*
 method parameters definition
 image : original image to be resized
 size  : new size
 */
+ (UIImage *)resizeImage:(UIImage *)image size:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    // Here is the scaled image, which has been changed to the size specified
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This will work like a charm. It's similar to the already posted answer, but it has some more options:
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    // UIGraphicsBeginImageContext(newSize);
    // In the next line, pass 0.0 to use the current device's pixel scaling factor
    // (and thus account for Retina resolution). Pass 1.0 to force exact pixel size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
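A short usage sketch, assuming the method above is declared on a hypothetical ImageUtils class (any class or category of your own will do; "photo.png" is a placeholder asset name):
// Usage sketch; ImageUtils and photo.png are placeholder names
UIImage *original = [UIImage imageNamed:@"photo.png"];
UIImage *thumbnail = [ImageUtils imageWithImage:original scaledToSize:CGSizeMake(124, 160)];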

Why does this code resizing an image rotate it 90 degrees from original? [duplicate]

This question already has an answer here: How to get a correctly rotated UIImage from an ALAssetRepresentation? (closed as a duplicate)
I have an iPad app where I'm using the camera. The original image is 480 x 640. I am attempting to resize it to 124 x 160 and then store it in Core Data, using this code that I found on the internet:
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);

    // Draw into the context; this scales the image
    CGContextDrawImage(context, newRect, imageRef);

    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
The image is returned to me rotated counter-clockwise 90 degrees and I don't see why. I have tried commenting out this statement:
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
but it makes no difference. What is wrong here?
Thanks to everybody who made suggestions. I kept looking and found this, which scales and keeps the orientation:
CGRect screenRect = CGRectMake(0, 0, 120.0, 160.0);
UIGraphicsBeginImageContext(screenRect.size);
[image drawInRect:screenRect blendMode:kCGBlendModePlusDarker alpha:1];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You need to take the orientation of the image into account when drawing it like this. Check this answer: iOS - UIImageView - how to handle UIImage image orientation.
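As a rough sketch of that idea: UIImage's drawInRect: applies imageOrientation for you, so redrawing the photo once normalizes it before any CGContextDrawImage-based resizing (the method name is just an example):
// Sketch: returns an "Up"-oriented copy that CGContextDrawImage can then resize without rotating it
- (UIImage *)normalizedImage:(UIImage *)image {
    if (image.imageOrientation == UIImageOrientationUp) {
        return image;   // already upright, nothing to do
    }
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}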

how to change the color of an individual pixel in uiimage

I have a UIImage of a mountainous landscape. I need to show locations on this image by turning the corresponding 2-3 pixels at each spot red. How do I accomplish this? Thanks!
You can draw the image into a graphics context like this:
CGRect imageRect = CGRectMake(0, 0, width, height);
UIGraphicsBeginImageContext(imageRect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// Save the current state of the graphics context
CGContextSaveGState(context);
CGContextDrawImage(context, imageRect, image.CGImage);
And then just draw a point on it wherever you want, like this:
// CGContextFillRect(context, CGRectMake(x, y, 1, 1));
// Fixed according to @gsempe's comment: the rect is in points, so divide by the image scale
CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);  // set the fill color first
CGContextFillRect(context, CGRectMake(x, y, 1.0/image.scale, 1.0/image.scale));
Then just get it back as a UIImage:
CGContextRestoreGState(context);
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You should also take care of the image's orientation. Here is a good article on it.
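Putting the pieces above together, a rough self-contained sketch; the helper name and the red color are example choices, and drawInRect: is used so the orientation concern above is handled as well:
// Sketch: draws the image upright, then fills roughly one pixel at the given point
- (UIImage *)imageByMarkingImage:(UIImage *)image atPoint:(CGPoint)point {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    CGFloat s = 1.0 / image.scale;   // one pixel expressed in points
    CGContextFillRect(context, CGRectMake(point.x, point.y, s, s));
    UIImage *marked = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return marked;
}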

UIImage's drawInRect: smoothes image

I'm trying to draw image using UIImage's drawInRect: method. Here is the code:
UIImage *image = [UIImage imageNamed:@"OrangeBadge.png"];
UIGraphicsBeginImageContext(image.size);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The problem is that the resulting image is blurry. Here is the resulting image (on the right side) compared to the source image (on the left side):
I've tried both CGContextSetAllowsAntialiasing(UIGraphicsGetCurrentContext(), NO) and CGContextSetShouldAntialias(UIGraphicsGetCurrentContext(), NO), but this did not solve the problem.
Any ideas?
Thanks in advance.
If you are developing on a retina device, possibly the issue is related to the resolution of your graphics context. Would you try with:
UIGraphicsBeginImageContextWithOptions(size, NO, 2.0f);
This will enable retina resolution. Also, your image should be available at @2x resolution for this to work.
If you want to support non-retina devices as well, you can use:
if ([UIScreen instancesRespondToSelector:@selector(scale)]) {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0f);
} else {
    UIGraphicsBeginImageContext(newSize);
}
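Applied to the question's snippet, that would look roughly like this (assuming OrangeBadge.png ships with an @2x variant):
// Passing 0.0 lets UIKit use the screen's scale, so the redraw stays sharp on Retina displays
UIImage *image = [UIImage imageNamed:@"OrangeBadge.png"];
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();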
