I am using custom PNG images for the .tabBarItem items of my UITabBarController.
But my PNG images are too big (64x64), so I use the method below to redraw each image into a smaller rect (for example, passing (25, 25) as the size parameter).
-(UIImage *)getSmallImage:(UIImage *)image inSize:(CGSize)size
{
    CGSize originalImageSize = image.size;
    CGRect newRect = CGRectMake(0, 0, size.width, size.height);
    float ratio = MAX(newRect.size.width / originalImageSize.width,
                      newRect.size.height / originalImageSize.height);
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:newRect cornerRadius:5.0];
    [path addClip];
    CGRect projectRect;
    projectRect.size.width = ratio * originalImageSize.width;
    projectRect.size.height = ratio * originalImageSize.height;
    //projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2.0;
    //projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2.0;
    // Draw the image on it
    [image drawInRect:projectRect];
    // Get the image from the image context
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    // Cleanup image context resources
    UIGraphicsEndImageContext();
    return smallImage;
}
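Presumably the method is wired up something like this when configuring the tab bar item (a sketch; the call site is my assumption, with the image name and target size taken from this question):
self.tabBarItem.image = [self getSmallImage:[UIImage imageNamed:@"Input"] inSize:CGSizeMake(25, 25)];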
Every image I use was returned by this method. Everything was fine in the simulator, but the images were not displayed when I tested on my iPhone.
But if I abandon the method above and assign the image directly, like this: self.tabBarItem.image = [UIImage imageNamed:@"Input"]; then the images are shown correctly on my phone, only too big.
How can I fix this problem?
I'll answer this question myself.
After hours of debugging, here is the problem:
In the method given above, the origin property of the CGRect projectRect was never set.
After I set both origin.x and origin.y to 0, everything worked.
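For reference, the fix is a single missing assignment. A local CGRect is uninitialized, so its origin holds whatever garbage happens to be on the stack; presumably the simulator just handed back zeros:
CGRect projectRect;
projectRect.size.width = ratio * originalImageSize.width;
projectRect.size.height = ratio * originalImageSize.height;
projectRect.origin = CGPointZero; // the missing line: set origin.x and origin.y to 0
[image drawInRect:projectRect];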
Tip: every time you meet a WTF problem, be patient and test your code in different ways, because in 99.9% of such cases there is something wrong with your code instead of a bug in Xcode.
Though I still don't know why the code in my question worked fine in the simulator, I'll let it go, because I guess that someday, when I become an expert, this kind of question will seem easy, even silly.
Related
My app sends a GET request to Google to obtain certain user information. One crucial piece of the returned data is a user's picture, which is placed inside a UIImageView that is always exactly 100x100 and then redrawn to create a round mask for the image view. These pictures come from different sources and thus always have different aspect ratios: some are narrower than they are tall, and sometimes it's vice versa. This results in the image looking compressed. I've tried things such as the following (none of them worked):
_personImage.layer.masksToBounds = YES;
_personImage.layer.borderWidth = 0;
_personImage.contentMode = UIViewContentModeScaleAspectFit;
_personImage.clipsToBounds = YES;
Here is the code I use to redraw my images (it was taken from user fnc12's answer, the third one on Making a UIImage to a circle form):
/** Returns a redrawn image with a circular mask applied to the input image. */
-(UIImage *)roundedRectImageFromImage:(UIImage *)image size:(CGSize)imageSize withCornerRadius:(float)cornerRadius
{
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0); // <== Note 0.0 as the third (scale) parameter. It is important because the default draw scale ≠ 1.0. Try 1.0 - it will draw an ugly image...
    CGRect bounds = (CGRect){CGPointZero, imageSize};
    [[UIBezierPath bezierPathWithRoundedRect:bounds cornerRadius:cornerRadius] addClip];
    [image drawInRect:bounds];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}
This method is always called like so:
[_personImage setImage:[self roundedRectImageFromImage:image size:CGSizeMake(_personImage.frame.size.width, _personImage.frame.size.height) withCornerRadius:_personImage.frame.size.width/2]];
So I end up with a perfectly round image, but the image itself isn't right aspect-wise. Please help.
P.S. Here's how the images look when their width is roughly 70% of their height, before the redrawing that creates the round mask:
Hello there, dear friend!
Here is my version that works:
Code in ViewController:
[self.profilePhotoImageView setContentMode:UIViewContentModeCenter];
[self.profilePhotoImageView setContentMode:UIViewContentModeScaleAspectFill];
[CALayer roundView:self.profilePhotoImageView];
The roundView function in my CALayer+Additions class:
+(void)roundView:(UIView *)view {
    CALayer *viewLayer = view.layer;
    [viewLayer setCornerRadius:view.frame.size.width/2];
    [viewLayer setBorderWidth:0];
    [viewLayer setMasksToBounds:YES];
}
Maybe you should try changing the way you create the rounded image view: my version rounds the image view by modifying its layer instead of redrawing the image. Hope it helps.
To maintain the aspect ratio of the UIImageView, use the following line of code after setting the image:
[_personImage setContentMode:UIViewContentModeScaleAspectFill];
For a detailed description, see the reference link:
https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIImageView_Class/
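Note that aspect fill lets the image spill outside the view's bounds, so it is usually paired with clipping; a minimal sketch, reusing the clipsToBounds line from the question above:
[_personImage setContentMode:UIViewContentModeScaleAspectFill];
[_personImage setClipsToBounds:YES]; // aspect fill overflows the frame, so clip the overflow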
I am trying to resize an image using UIGraphics. The image is one taken with the camera, and I am using this code:
CGSize origImageSize = photograph.size;
// this saves as 140x140 for retina
CGRect newRect = CGRectMake(0, 0, 70, 70);
// scaling ratio
float ratio = MAX(newRect.size.width / origImageSize.width,
                  newRect.size.height / origImageSize.height);
UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
CGRect projectRect;
projectRect.size.width = ratio * origImageSize.width;
projectRect.size.height = ratio * origImageSize.height;
// center the image
projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;
[photograph drawInRect:projectRect];
// get the image from the image context
UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
// clean up the image context
UIGraphicsEndImageContext();
For some reason the final photo isn't as sharp; it's slightly blurry. Am I doing anything wrong here? Any pointers would be really appreciated. Thanks.
I assume you calculate the rectangle properly. Then make sure you use an integral rectangle; non-integral values may cause sub-pixel rendering.
Run your projectRect through CGRectIntegral to get an integral rectangle, then use it to render your image:
projectRect = CGRectIntegral(projectRect);
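In the context of the snippet above, the rounding would go right after the centering math and just before the draw call (a sketch):
// center, then snap to whole points to avoid sub-pixel blur
projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;
projectRect = CGRectIntegral(projectRect);
[photograph drawInRect:projectRect];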
I am able to save the contents of a UIView as an image to the library.
The image I am trying to save is at a very good resolution in the app, but saving it to the photo library reduces the resolution significantly. The image is the same width as the screen but many times the height of the screen.
It looks just how I want in the UIScrollView in the app, but the saved copy has a lower resolution than the actual image. How can I prevent this?
Thanks in advance :D
edit: adding the code...
- (void)saveImageToLibrary
{
    UIImage *image; // = nil;
    UIGraphicsBeginImageContext(scrollview.contentSize);
    {
        CGPoint savedContentOffset = scrollview.contentOffset;
        CGRect savedFrame = scrollview.frame;
        scrollview.contentOffset = CGPointZero;
        scrollview.frame = CGRectMake(0, 0, scrollview.contentSize.width, scrollview.contentSize.height);
        self.view.frame = CGRectMake(0, 0, scrollview.contentSize.width, scrollview.contentSize.height);
        [scrollview.layer renderInContext:UIGraphicsGetCurrentContext()];
        image = UIGraphicsGetImageFromCurrentImageContext();
        scrollview.contentOffset = savedContentOffset;
        scrollview.frame = savedFrame;
        self.view.frame = savedFrame;
    }
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
}
I hope this is helpful.
Are you setting the correct scale when you try to create the image? Could you post the code?
OK guys and gals... I think I figured out the problem.
The issue is the iPhone Simulator: depending on the hardware you are simulating, the image will be optimized for the simulator's resolution.
I'm not entirely convinced by this, because I was previously simulating an iPhone 5 (which has a Retina display), but simulating an iPhone 6 seems to work fine.
Here is a link if you want to follow further details or add to this thread :D
https://stackoverflow.com/questions/8032528/uiimage-image-loaded-from-appbundle-is-saved-at-lower-resolution-in-app-direct
Use this instead:
UIGraphicsBeginImageContextWithOptions(scrollView.contentSize, scrollView.opaque, [[UIScreen mainScreen] scale]);
EDIT
You can also try setting the scale to zero, which makes UIKit use the device's main screen scale automatically:
UIGraphicsBeginImageContextWithOptions(scrollView.contentSize, scrollView.opaque, 0.0);
Does that help you?
I'm trying to save the currently shown views on my iOS device for a certain app, and this is working properly. But I've got a problem as soon as I try to save a UIImageView in landscape orientation.
See the following image that describes my problem:
I'm using Auto Layout for this app, and it runs on both iPhone and iPad. It seems like the image view is always saved as if in portrait mode, and I'm a little bit stuck right now.
This is the code I use:
CGSize frameSize = self.view.frame.size;
if (UIInterfaceOrientationIsLandscape(self.interfaceOrientation)) {
    frameSize = CGSizeMake(self.view.frame.size.height, self.view.frame.size.width);
}
UIGraphicsBeginImageContextWithOptions(frameSize, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGFloat scale = CGRectGetWidth(self.view.frame) / CGRectGetWidth(self.view.bounds);
CGContextScaleCTM(ctx, scale, scale);
[self.view.layer renderInContext:ctx];
[self.delegate photoSaved:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
Looking forward to your help!
I still have no idea what your exact issue is, but using your screenshot code produces a somewhat strange image (not rotated or anything, just too small). Can you try this code instead, please?
+ (UIImage *)imageFromView:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, .0f);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Other than that, you must understand there is a huge difference between UIImage and CGImage: the UIImage includes the orientation while the CGImage does not. Image transformations usually operate on the CGImage, and getting its width or height discards the orientation; that means a CGImage will have flipped dimensions when its orientation is not up (UIImageOrientationUp). Usually, when dealing with such images, you create a CGImage from the context and then use [UIImage imageWithCGImage:ref scale:1.0f orientation:originalOrientation]. Only if you wish to explicitly rotate the image so it has no orientation (i.e. UIImageOrientationUp) do you need to rotate and translate the image and draw it onto the context.
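A hedged sketch of that reconstruction pattern (names are illustrative: 'context' stands for whatever bitmap context was drawn into, and 'original' is the source UIImage):
CGImageRef ref = CGBitmapContextCreateImage(context);
// Reattach the scale and orientation that the CGImage itself cannot carry.
UIImage *restored = [UIImage imageWithCGImage:ref scale:original.scale orientation:original.imageOrientation];
CGImageRelease(ref);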
Anyway, these orientation issues are mostly fixed by now: UIImagePNGRepresentation respects the orientation, and the UIImage constructor from a CGImage written above is what used to be missing in the past, if I remember correctly.
After successfully using UIView's drawViewHierarchyInRect:afterScreenUpdates: method, introduced in iOS 7, to obtain an image representation (via UIGraphicsGetImageFromCurrentImageContext()) for blurring, my app also needed to obtain just a portion of a view. I managed to get it in the following manner:
UIImage *image;
CGSize blurredImageSize = [_blurImageView frame].size;
UIGraphicsBeginImageContextWithOptions(blurredImageSize, YES, .0f);
[aView drawViewHierarchyInRect: [aView bounds] afterScreenUpdates: YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This lets me retrieve aView's content that falls within _blurImageView's frame.
Now, however, I need to obtain a portion of aView that lies "inside" the view. Below is an image representing what I would like to achieve.
I have already tried creating a new graphics context, setting its size to the portion's size (the red box), and asking aView to draw in the rect that represents the red box's frame (with its superview's frame, of course, equal to aView's), but the image obtained is all black (empty).
After a lot of tweaking I managed to find something that does the job; however, I heavily doubt this is the way to go.
Here’s my [edited-for-Stack Overflow] code that works:
- (UIImage *)imageOfPortionOfABiggerView
{
    UIView *bigViewToExtractFrom;   // assigned elsewhere in the real (unedited) code
    UIImage *image;
    UIImage *wholeImage;
    CGImageRef _image;
    CGRect imageToExtractFrame;     // assigned elsewhere in the real (unedited) code
    CGFloat screenScale = [[UIScreen mainScreen] scale];

    // Have to scale the rect due to (I suppose) the screen's scale for Core Graphics.
    imageToExtractFrame = CGRectApplyAffineTransform(imageToExtractFrame, CGAffineTransformMakeScale(screenScale, screenScale));

    UIGraphicsBeginImageContextWithOptions([bigViewToExtractFrom bounds].size, YES, screenScale);
    [bigViewToExtractFrom drawViewHierarchyInRect:[bigViewToExtractFrom bounds] afterScreenUpdates:NO];
    wholeImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Obtain a CGImage[Ref] from another CGImage; this lets me specify the rect to extract.
    // However, since the image comes from a UIView at 2x (Retina) scale, if you specify the rect
    // in points, CGImage will not take the screen's scale into consideration and will treat the
    // rect as pixels. You'd end up with an image from the wrong rect at half the size.
    _image = CGImageCreateWithImageInRect([wholeImage CGImage], imageToExtractFrame);
    wholeImage = nil;

    // Have to specify the image's scale because CGImage does not take the screen's scale into consideration.
    image = [UIImage imageWithCGImage:_image scale:screenScale orientation:UIImageOrientationUp];
    CGImageRelease(_image);
    return image;
}
I hope this will help anyone who has stumbled upon my issue. Feel free to improve my snippet.
Thanks
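For anyone who wants to improve on it: a simpler alternative (a sketch, assuming portionRect is the desired sub-rect in the big view's coordinate space, in points) is to translate the context so the portion lands at the origin, skipping the intermediate whole-view image and the pixel-vs-point rect juggling:
UIGraphicsBeginImageContextWithOptions(portionRect.size, YES, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Shift drawing so that portionRect's origin maps to (0, 0) in the new context.
CGContextTranslateCTM(ctx, -portionRect.origin.x, -portionRect.origin.y);
[bigViewToExtractFrom drawViewHierarchyInRect:[bigViewToExtractFrom bounds] afterScreenUpdates:NO];
UIImage *portionImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();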