I'm searching for a way to draw a stretchable image as the background of my custom cell's background view. I would like to use the drawRect: method and draw an image stretched exactly as it would be stretched with stretchableImageWithLeftCapWidth: in a UIImageView... how can I continue this code to make it happen?
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIImage *bgImg = [[UIImage imageNamed:@"bg_table_top"] stretchableImageWithLeftCapWidth:3 topCapHeight:0];
    // How do I draw the image stretched to the self.bounds size?
    ....
}
Any reason not to let UIImageView do this? (Include one as a child of your custom cell.) It's true that reducing child views can be a performance improvement in tables, but UIImageView is also quite efficient at drawing images.
My guess is otherwise you're going to have to do multiple draw calls, in order to get the ends and middle drawn correctly.
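If you do draw it yourself, note that a stretchable UIImage keeps its cap insets when drawn with drawInRect:, so a single call may be enough. A minimal sketch, continuing the code from the question:

- (void)drawRect:(CGRect)rect {
    UIImage *bgImg = [[UIImage imageNamed:@"bg_table_top"] stretchableImageWithLeftCapWidth:3 topCapHeight:0];
    // The 3pt left cap stays fixed; the remainder stretches to fill self.bounds.
    [bgImg drawInRect:self.bounds];
}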
I have a UIView and a UIImageView, both at 40*40. I also have an image at 120*120 and a compressed copy at 40*40 with worse quality.
When I use the UIView, in its drawRect: method I have [backgroundImage drawInRect:rect]; and both images come out equally blurry.
When I use the UIImageView, I set imageView.image = backgroundImage; and it displays the 120*120 image at much better quality.
I want to know why that is. What can I do to get the expected quality in the UIView without using a UIImageView?
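For reference, a sketch of the two setups being compared (backgroundImage is the 120*120 UIImage from the question; the other names are illustrative):

// 1) Custom 40*40 UIView: Core Graphics downscales the image inside drawRect:
- (void)drawRect:(CGRect)rect {
    [self.backgroundImage drawInRect:rect]; // 120*120 drawn into a 40*40 rect
}

// 2) 40*40 UIImageView: the image is scaled when the layer is composited
imageView.image = backgroundImage;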
I have the following method to take a screenshot (UIImage) of a UIView, which is far too slow:
+ (UIImage *)imageWithView:(UIView *)view
{
    CGSize size = view.bounds.size;
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
On my iPad I now have an app that needs this method to make a copy of a view that is dragged & dropped. This view has rounded corners and therefore is not opaque (I found that setting the opaque parameter to YES makes no difference anyway)...
The view being screenshotted also contains a UITableView with some quite complex entries...
Do you have any suggestions on how I can improve the speed of the screenshotting? Right now, for a somewhat bigger table view (maybe 20 entries) it takes about 1 second (!!!).
And the view is already on screen, rendered correctly... so I just need the pixels to put into a UIImageView...
I need to support iOS 6+.
I use this same code to take screenshots of really complex views. I think your bottleneck is using a big image for the drag & drop. Maybe you can resize the UIImage.
In my case the performance on an iPad 2 is about 100 ms per screenshot.
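A minimal sketch of that resize idea, assuming a hypothetical scale parameter: rendering at a lower scale than the screen's produces a smaller bitmap, which is cheaper to draw during the drag.

+ (UIImage *)imageWithView:(UIView *)view scale:(CGFloat)scale
{
    // e.g. 1.0 on a Retina device renders at half the linear resolution;
    // 0.0 keeps the screen's native scale.
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}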
How can I implement the image below programmatically - meaning the digits can change at runtime or even be replaced with a movie?
Just add a blurred UIView on top of your view.
For example: make a UIImage of your desired view size, blur it using a CIFilter, and then add it to your view. It should achieve the desired effect.
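A hedged sketch of that approach using CIGaussianBlur (available on iOS 6+); targetView is a placeholder and the radius is an arbitrary starting point:

#import <CoreImage/CoreImage.h>

// Snapshot the view.
UIGraphicsBeginImageContextWithOptions(targetView.bounds.size, NO, 0.0);
[targetView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Blur the snapshot with Core Image.
CIImage *input = [CIImage imageWithCGImage:snapshot.CGImage];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:input forKey:kCIInputImageKey];
[blurFilter setValue:@8 forKey:kCIInputRadiusKey]; // radius chosen arbitrarily
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef blurredRef = [ciContext createCGImage:blurFilter.outputImage fromRect:input.extent];

// Lay the blurred copy over the original view.
UIImageView *blurOverlay = [[UIImageView alloc] initWithImage:[UIImage imageWithCGImage:blurredRef]];
blurOverlay.frame = targetView.bounds;
[targetView addSubview:blurOverlay];
CGImageRelease(blurredRef);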
This is generally the same question and is answered by quite a few methods. Anyway, I would propose one more.
Get the image from the UIView:
+ (UIImage *)imageFromLayer:(CALayer *)layer {
    UIGraphicsBeginImageContext([layer frame].size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
Or rather, play around a bit with this to get the desired part of the view as the image. Now create a new view and add image views to it (with the image you got from the layer). Then offset the centers of the image views to approximate a Gaussian blur, take the image from this layer again, and place it back on the original view.
The offsets should be defined by a radius fragment (I'd start with 0.5f) and a resample count.
for (int i = 1; i < resampleCount; i++) {
    // Shift the four copies outward: right, left, down, up.
    view1.center = CGPointMake(view1.center.x + radiusFragment*i, view1.center.y);
    view2.center = CGPointMake(view2.center.x - radiusFragment*i, view2.center.y);
    view3.center = CGPointMake(view3.center.x, view3.center.y + radiusFragment*i);
    view4.center = CGPointMake(view4.center.x, view4.center.y - radiusFragment*i);
    // add the subviews
}
// get the image from the view
All the subviews need their alpha set to 1.0f/(resampleCount*4).
This method might not be the fastest, but it is extremely easy to implement, and if you tune the radius fragment and resample count down to minimal values it should do pretty well.
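Pulling the pieces together, a hedged end-to-end sketch of the idea; sourceLayer, resampleCount, and radiusFragment are illustrative, and this variant creates a fresh semi-transparent copy per offset rather than repositioning four fixed views:

UIImage *base = [self imageFromLayer:sourceLayer];
int resampleCount = 4;
CGFloat radiusFragment = 0.5f;
UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, base.size.width, base.size.height)];
CGPoint middle = CGPointMake(base.size.width / 2.0f, base.size.height / 2.0f);
for (int i = 1; i < resampleCount; i++) {
    CGPoint offsets[4] = {
        {  radiusFragment * i, 0 }, { -radiusFragment * i, 0 },
        { 0,  radiusFragment * i }, { 0, -radiusFragment * i }
    };
    for (int j = 0; j < 4; j++) {
        UIImageView *copy = [[UIImageView alloc] initWithImage:base];
        copy.alpha = 1.0f / (resampleCount * 4); // as noted above
        copy.center = CGPointMake(middle.x + offsets[j].x, middle.y + offsets[j].y);
        [container addSubview:copy];
    }
}
UIImage *blurred = [self imageFromLayer:container.layer]; // place back on the original view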
Use a UIView with a white background and decrease the alpha property (note that colorWithRed:green:blue:alpha: takes components in the 0-1 range):
blurView.backgroundColor = [UIColor colorWithRed:1.0 green:1.0 blue:1.0 alpha:0.3];
I am letting the user capture an image from the camera or pick one from the library.
I display this image in a UIImageView.
The user can now scale and position the image within a bounding box, exactly like you would do using the UIImagePickerController when allowsEditing is set to YES.
When the user is satisfied with the result and taps Done I would like to produce a cropped UIImage.
The problem arises when using CGImageCreateWithImageInRect as this does not take the scaling into account. The transform is applied to the imageView like this:
CGAffineTransform transform = CGAffineTransformScale(self.imageView.transform, newScale, newScale);
[self.imageView setTransform:transform];
Using a gestureRecognizer.
I assume what is happening is: the UIImageView is scaled and moved, it then applies UIViewContentModeScaleAspectFit to the UIImage it holds, and when I ask it to crop the image it does exactly that - with no regard to the scaling or positioning. The reason I think this is that if I don't scale or move the image but just tap Done straight away, the cropping works.
I crop the image like this:
- (UIImage *)cropImage:(UIImage *)img toRect:(CGRect)rect {
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0) {
        rect = CGRectMake(rect.origin.x*scale, rect.origin.y*scale, rect.size.width*scale, rect.size.height*scale);
    }
    CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.imageView.image.scale orientation:self.imageView.image.imageOrientation];
    // UIImage *result = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return result;
}
I pass in a cropRect from a view that is a subview of my main view (the square overlay box, like in UIImagePickerController). The main UIView has a UIImageView that gets scaled and a UIView that displays the crop rectangle.
How can I get "what you see is what you get" cropping, and which factors must I take into account? Or maybe suggest whether I should implement the hierarchy or scaling differently.
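To illustrate the mismatch described above, a sketch of mapping a rect from the image view's coordinate space into the image's own space under aspect fit; this is a hedged illustration only, and the pinch transform would have to be divided out the same way:

- (CGRect)imageRectForViewRect:(CGRect)viewRect {
    UIImage *img = self.imageView.image;
    CGSize viewSize = self.imageView.bounds.size;
    // Aspect fit shrinks the image by this factor to fit the view.
    CGFloat fitScale = MIN(viewSize.width / img.size.width,
                           viewSize.height / img.size.height);
    // Letterbox offsets introduced by aspect fit.
    CGFloat xOffset = (viewSize.width - img.size.width * fitScale) / 2.0f;
    CGFloat yOffset = (viewSize.height - img.size.height * fitScale) / 2.0f;
    return CGRectMake((viewRect.origin.x - xOffset) / fitScale,
                      (viewRect.origin.y - yOffset) / fitScale,
                      viewRect.size.width / fitScale,
                      viewRect.size.height / fitScale);
}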
Try a simple trick. Apple has samples on its site showing how to zoom into a photo in code. Once done zooming, use a graphics context: take the frame size of the bounding view and render the image with that. E.g. a UIView contains a scroll view which holds the zoomed image; the scroll view zooms, and so does your image. Now take the frame size of your bounding UIView, create an image context from it, and save that as a new image. Tell me if that makes sense.
Cheers :)
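A minimal sketch of that trick, where boundingView is assumed to be the overlay-sized view containing the zooming scroll view:

UIGraphicsBeginImageContextWithOptions(boundingView.bounds.size, NO, 0.0);
[boundingView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();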
How can I truncate the left side of an image stored in a UIImage object? Basically, in certain situations I just want to show part of an image.
How can I do this with the iOS SDK?
P.S. I tried changing the frame size of the UIImageView, but that just scales the image and distorts it.
A very simple way is to load the image into a UIImageView, and then add that view to another view. You can then position the image view so that its frame.origin.x is negative, which will place it off to the left. The parent view needs clipsToBounds set to YES (equivalently, masksToBounds on its layer), or else the image view will still be fully visible.
There are many other ways to achieve this effect as well, but this may be the simplest for you to implement.
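For example, a minimal sketch that shows only the right 60 points of a 100-point-wide image (all sizes illustrative):

UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 60, 40)];
container.clipsToBounds = YES; // hide whatever falls outside the container
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
imageView.frame = CGRectMake(-40, 0, 100, 40); // negative x truncates the left 40 points
[container addSubview:imageView];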
To crop a UIImage, you can use one of the UIImage categories available out there, such as http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
For example, this rect will remove 100 pixels from the left side of a 200x200-pixel UIImage:
CGRect clippedRect = CGRectMake(100, 0, 100, 200);
UIImage *cropped = [self imageByCropping:lightsOnImage toRect:clippedRect];
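Without a category, the equivalent crop can be done directly with Core Graphics (a minimal sketch; lightsOnImage as above):

CGRect clippedRect = CGRectMake(100, 0, 100, 200);
CGImageRef croppedRef = CGImageCreateWithImageInRect(lightsOnImage.CGImage, clippedRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);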