I'm trying to implement a stretchable image that resembles a dialog bubble, but I'm not getting it right with the following code:
UIImage *balloon = [[UIImage imageNamed:@"strech.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(12, 11, 12, 9)];
I'm getting the following result:
This is my original leftBubble.png:
What could I possibly be doing wrong?
You shouldn't have a gradient on the entire image. That's where the lines are coming from. The center of your image (the part that gets stretched) needs to be a solid color, since it gets repeated not only horizontally but vertically as well.
If you were only stretching the image horizontally, your image may have worked just fine.
Try this image I made for you, and use some different capInsets:
UIImage *balloon = [[UIImage imageNamed:@"strech.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(12, 20, 22, 12)];
Notice all of my styling happens on the edge of the bubble, and the center is solid.
EDIT:
Here's a smaller version of the image. I made the larger one so you could see what I was doing.
I can't comment on jhilgert00's answer, but I'd like to add something: -[UIImage resizableImageWithCapInsets:] tiles the inner pixels rather than stretching them, which is why the gradient doesn't work, as jhilgert00 said.
If you're working with iOS 6.0 or higher, you can use -[UIImage resizableImageWithCapInsets:resizingMode:] with UIImageResizingModeStretch.
From http://developer.apple.com/library/ios/#documentation/uikit/reference/UIImage_Class/Reference/Reference.html
-(UIImage *)resizableImageWithCapInsets:(UIEdgeInsets)capInsets
[The] pixel area not covered by the cap in each direction is tiled,
left-to-right and top-to-bottom, to resize the image.
and
-(UIImage *)resizableImageWithCapInsets:(UIEdgeInsets)capInsets resizingMode:(UIImageResizingMode)resizingMode
You should only call this method in place of its counterpart if you specifically want your image to be resized with the UIImageResizingModeStretch resizing mode.
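For example, a minimal sketch of the iOS 6.0+ call (the image name and insets here are just placeholders):
// iOS 6.0+: stretch the center area instead of tiling it, so a gradient across the middle survives resizing.
UIImage *bubble = [[UIImage imageNamed:@"bubble"] resizableImageWithCapInsets:UIEdgeInsetsMake(12, 20, 22, 12) resizingMode:UIImageResizingModeStretch];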
Related
I want to use image slicing in Xcode to produce a resizable image with a central area that is held static. It seems that this is impossible in Xcode as it only allows for one stretched area and one shrunk area in each direction. Is this correct?
Does anyone know of a nice workaround for this? I need to use the image for a button, so I can't do an arrangement of UIImageViews on top of each other unless I pass touches through, and that gets a bit messy.
Many thanks.
As stretchableImageWithLeftCapWidth:topCapHeight: is deprecated as of iOS 5.0, you can use resizableImageWithCapInsets:. Check the Apple documentation; it states:
You use this method to add cap insets to an image or to change the existing cap insets of an image. In both cases, you get back a new image and the original image remains untouched. For example, you can use this method to create a background image for a button with borders and corners: when the button is resized, the corners of the image remain unchanged, but the borders and center of the image expand to cover the new size.
Another method, resizableImageWithCapInsets:resizingMode:, is also available if you want to set the resizingMode.
You can do something like:
UIImage *image = [UIImage imageNamed:@"yourImageName"];
UIImage *stretchedImage = [image resizableImageWithCapInsets:UIEdgeInsetsMake(5, 5, 5, 5)]; // the edge insets you want
// OR
UIImage *stretchedImage2 = [image resizableImageWithCapInsets:UIEdgeInsetsMake(5, 5, 5, 5) resizingMode:UIImageResizingModeStretch];
// OR
UIImage *stretchedImage3 = [image resizableImageWithCapInsets:UIEdgeInsetsMake(5, 5, 5, 5) resizingMode:UIImageResizingModeTile];
Check the Documentation for more details.
Update (according to the comment):
You just need to set both the backgroundImage and the image on your button. You can also set image insets to manage your image's position.
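A minimal sketch of that idea (the image names and insets are just examples):
UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
// Resizable background image: the caps stay crisp while the borders and center grow with the button.
UIImage *bg = [[UIImage imageNamed:@"buttonBg"] resizableImageWithCapInsets:UIEdgeInsetsMake(5, 5, 5, 5)];
[button setBackgroundImage:bg forState:UIControlStateNormal];
// Separate foreground image, positioned with image edge insets.
[button setImage:[UIImage imageNamed:@"icon"] forState:UIControlStateNormal];
button.imageEdgeInsets = UIEdgeInsetsMake(0, 10, 0, 0);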
The following code block is used in my application to take a screenshot of the current screen of an iPad mini (768 x 1024):
UIImage *img;
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
In a different view controller, I present a UIScrollView with a width of 540 and a height of 290. I display the screen-capture UIImage in a UIImageView which I create programmatically with initWithFrame:, using a rectangle with a width of 250 and a height of 250. The content size of the scroll view is 768 by 250.
Now, running the application, I display four rectangles and screenshot the screen using the above block of code. Transitioning to the UIScrollView, the image is not clear (and by not clear, I mean some rectangles are missing sides while others are thicker than they should be). Is there a way to display the image more clearly? I know the image has to be scaled down from the original 768 by 1024 to 250 by 250. Could this be the problem? If so, what would be the best fix?
Edit:
Above is a screenshot of the image I want to capture.
Below is the UIImage in UIImageView within a UIScrollView:
Cast each coordinate to an integer, or use CGRectIntegral to do that directly on a CGRect; fractional coordinates require anti-aliasing and make images blurry.
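For example (a sketch, assuming imageView is the view whose frame may end up with fractional coordinates):
// Snap the frame to whole-pixel values so no anti-aliasing is needed at the edges.
imageView.frame = CGRectIntegral(imageView.frame);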
Try changing the content mode of your UIImageViews. If you use UIViewContentModeScaleAspectFill, you shouldn't see any extra space around the edges.
Update: From the screenshots you posted, it looks like this is just an effect of the built-in downscaling in UIKit. Try manually downscaling the image to fit using Core Graphics first. Alternatively, you might want to use something like the CILanczosScaleTransform Core Image filter (iOS 6+).
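A rough sketch of the Core Graphics downscale (assuming img is the screenshot captured earlier and 250 x 250 is the size the image view needs):
CGSize targetSize = CGSizeMake(250, 250);
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
[img drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();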
I am working with iOS 5.0 as my minimum target.
I have a UIImageView to which I want to assign a resizable image, so that the image doesn't get stretched at the corners.
I have tried setting the content mode of the UIImageView to UIViewContentModeScaleToFill, but the image appears tiled.
Here's the code:
UIImage *bgImage = [[UIImage imageNamed:@"ImgViewBg"] resizableImageWithCapInsets:UIEdgeInsetsMake(2, 2, 2, 2)];
imgView.contentMode = UIViewContentModeScaleToFill;
imgView.image = bgImage;
I am looking for the same effect as 9-patch images in Android.
Here's the image I am trying it on.
I just took another look at the documentation for resizableImageWithCapInsets:. It says it tiles the area that is not under the caps. I think that's what is causing the tiled pattern. Is there any workaround to this so that I can have a 9-patch style image?
EDIT
According to the Apple docs:
Up to iOS 5.0, the following works for my requirement:
[[UIImage imageNamed:@"textViewBg"] stretchableImageWithLeftCapWidth:6 topCapHeight:6];
(as mentioned by Dipen Panchasara)
For iOS 6.0 and later, the following works for my requirement:
- (UIImage *)resizableImageWithCapInsets:(UIEdgeInsets)capInsets resizingMode:(UIImageResizingMode)resizingMode
For iOS 5.0 and later (which is what I require):
- (UIImage *)resizableImageWithCapInsets:(UIEdgeInsets)capInsets
The above method does not work in my case, as it tiles the image area not under the caps.
So for iOS 5.0 up to iOS 6.0 I was not able to find anything that solves my requirement.
For now I am moving to Dipen Panchasara's solution; I hope it stays stable.
// Use the following code to make a stretchable background
UIImage *bgImage = [[UIImage imageNamed:@"ImgViewBg"] stretchableImageWithLeftCapWidth:13 topCapHeight:13];
[imgView setImage:bgImage];
ScaleToFill will stretch the image to fill the entire image view. If you don't want to stretch the image and want to maintain its aspect ratio, use UIViewContentModeScaleAspectFit or UIViewContentModeScaleAspectFill.
You can find more information about these constants here.
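For example:
// Keep the aspect ratio and fit inside the view (may leave empty space):
imgView.contentMode = UIViewContentModeScaleAspectFit;
// Or keep the aspect ratio and fill the view (may crop the image):
// imgView.contentMode = UIViewContentModeScaleAspectFill;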
I have a very simple UIView containing a few black and white UIImageViews. If I take a screenshot via the physical buttons on the device, the resulting image looks exactly like what I see (as expected) - if I examine the image at the pixel level it is only black and white.
However, if I use the following snippet of code to perform the same action programmatically, the resulting image has what appears to be anti-aliasing applied: all the black pixels are surrounded by faint grey halos. There is no grey in my original scene; it's pure black and white, and the dimensions of the "screenshot" image are the same as the one I am generating programmatically, but I cannot seem to figure out where the grey haloing is coming from.
UIView *printView = fullView;
UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[printView.layer renderInContext:ctx];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
UIGraphicsEndImageContext();
I've tried adding the following before the call to renderInContext in an attempt to prevent the antialiasing, but it has no noticeable effect:
CGContextSetShouldAntialias(ctx, NO);
CGContextSetAllowsAntialiasing(ctx, NO);
CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
Here is a sample of the two different outputs - the left side is what my code produces and the right side is a normal iOS screenshot:
Since I am trying to send the output of my renderInContext to a monochrome printer, having grey pixels causes some ugly artifacting due to the printer's dithering algorithm.
So, how can I get renderInContext to produce the same pixel-level output of my views as a real device screenshot - i.e. just black and white as is what is in my original scene?
It turns out the problem was related to the resolution of the underlying UIImage being used by the UIImageView. The UIImage was a CGImage created using a data provider. The CGImage dimensions were specified in the same units as the parent UIImageView; however, I am using an iOS device with a retina display.
Because the CGImage dimensions were specified in non-retina size, renderInContext was upscaling the CGImage and apparently this upscaling behaves differently than what is done by the actual screen rendering. (For some reason the real screen rendering upscaled without adding any grey pixels.)
To fix this, I created my CGImage with double the dimensions of the UIImageView, and then my call to renderInContext produces a much better black and white image. There are still a few grey pixels in some of the white areas, but it is a vast improvement over the original problem.
I finally figured this out by changing the call to UIGraphicsBeginImageContextWithOptions() to force a scale of 1.0, and noticed that the UIImageView's black pixel rendering no longer had a grey halo. When I forced UIGraphicsBeginImageContextWithOptions() to a scale factor of 2.0 (which is what it was defaulting to because of the retina display), the grey haloing appeared.
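For reference, the scale is the third argument; a sketch of the two cases described above:
// Forcing a scale of 1.0 removed the grey halo:
UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 1.0);
// Passing 0.0 defaults to the screen scale (2.0 on a retina device), which is where the halo appeared:
// UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 0.0);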
I would try setting printView.layer.magnificationFilter and printView.layer.minificationFilter to kCAFilterNearest.
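Something like:
printView.layer.magnificationFilter = kCAFilterNearest;
printView.layer.minificationFilter = kCAFilterNearest;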
Are the images displayed in UIImageView instances? Is printView their superview?
The following code tiles the image area within the specified insets:
UIEdgeInsets imgInsets = UIEdgeInsetsMake(10.f, 5.f, 13.f, 44.f);
UIImage *image = [[UIImage imageNamed:@"fileName"] resizableImageWithCapInsets:imgInsets];
However, this is only available in iOS 5. How can I achieve the same result with pre-iOS 5 compatibility?
-[UIImage stretchableImageWithLeftCapWidth:topCapHeight:] is not appropriate as far as I understand, because it assumes that the tileable area is 1px wide. In other words, it doesn't tile, it stretches. Therefore it doesn't work with patterns, only with single-color images. This is demonstrated in the screenshot below.
Then there is [UIColor colorWithPatternImage:], but this assumes that the entire image needs to be tiled; it doesn't allow for insets that must remain capped.
Any help appreciated, thanks.
I've been looking for a solution to this too. At this point I think I'll use respondsToSelector:@selector(resizableImageWithCapInsets:) on the original UIImage to see if the method is available. If not, then use the stretchable image.
I'm still looking for a better solution, and if one comes up, I'll update the answer.
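A sketch of that fallback (the image name and cap values are just placeholders):
UIImage *raw = [UIImage imageNamed:@"fileName"];
UIImage *background;
if ([raw respondsToSelector:@selector(resizableImageWithCapInsets:)]) {
    // iOS 5+: tiles the area inside the caps.
    background = [raw resizableImageWithCapInsets:UIEdgeInsetsMake(10.f, 5.f, 13.f, 44.f)];
} else {
    // Pre-iOS 5: fall back to the single-pixel stretch API.
    background = [raw stretchableImageWithLeftCapWidth:5 topCapHeight:10];
}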