renderInContext flips the origin of colorWithPatternImage - ios

I have a UIView whose backgroundColor is set with colorWithPatternImage:. As expected, the background image is drawn starting at the top-left corner.
The problem appears when I call renderInContext: on that view's layer: the background image is drawn starting at the bottom-left corner. Everything else seems to render fine.
Here is the code:
// here is the layer to be rendered into an image
UIView *src = [[UIView alloc] initWithFrame:(CGRect){{0, 0}, {100, 100}}];
src.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"background.png"]];
[self.view addSubview:src];
// here we'll display the image
UIImageView *dest = [[UIImageView alloc] initWithFrame:(CGRect){{110, 0}, src.bounds.size}];
[self.view addSubview:dest];
// render `src` to an image in `dest`
UIGraphicsBeginImageContext(src.bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[src.layer renderInContext:context];
dest.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Is there a way to keep the image tiling in the right direction, as in the src view?

I managed to work around this by calling CGContextSetPatternPhase with a height of fmodf(viewHeight, patternHeight).
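A minimal sketch of that workaround applied to the code above (reusing the question's pattern image to get patternHeight is my assumption):
UIGraphicsBeginImageContext(src.bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// Shift where tiling begins so the rendered pattern lines up with the on-screen view.
UIImage *pattern = [UIImage imageNamed:@"background.png"];
CGContextSetPatternPhase(context, CGSizeMake(0, fmodf(src.bounds.size.height, pattern.size.height)));
[src.layer renderInContext:context];
dest.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();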

https://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_overview/dq_overview.html#//apple_ref/doc/uid/TP30001066-CH202-CJBBAEEC
This doc explains why this happens, in particular this part:
Important: The above discussion is essential to understand if you plan to write applications that directly target Quartz on iOS, but it is not sufficient. On iOS 3.2 and later, when UIKit creates a drawing context for your application, it also makes additional changes to the context to match the default UIKit conventions. In particular, patterns and shadows, which are not affected by the CTM, are adjusted separately so that their conventions match UIKit’s coordinate system. In this case, there is no equivalent mechanism to the CTM that your application can use to change a context created by Quartz to match the behavior for a context provided by UIKit; your application must recognize what kind of context it is drawing into and adjust its behavior to match the expectations of the context.

Related

UIImageView image aspect ratio is messed up after redrawing it to create a round mask

My app sends a GET request to Google to obtain certain user information. One piece of crucial returned data is a user's picture, which is placed inside a UIImageView that is always exactly 100×100, then redrawn to create a round mask for this image view. These pictures come from different sources and thus always have different aspect ratios. Some have a smaller width compared to their height, sometimes it's vice versa. This results in the image looking compressed. I've tried things such as the following (none of them worked):
_personImage.layer.masksToBounds = YES;
_personImage.layer.borderWidth = 0;
_personImage.contentMode = UIViewContentModeScaleAspectFit;
_personImage.clipsToBounds = YES;
Here is the code I use to redraw my images (it was obtained from user fnc12 as the third answer in Making a UIImage to a circle form):
/** Returns a redrawn image that had a circular mask created for the inputted image. */
-(UIImage *)roundedRectImageFromImage:(UIImage *)image size:(CGSize)imageSize withCornerRadius:(float)cornerRadius
{
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0); //<== Notice 0.0 as third scale parameter. It is important because default draw scale ≠ 1.0. Try 1.0 - it will draw an ugly image...
    CGRect bounds = (CGRect){CGPointZero, imageSize};
    [[UIBezierPath bezierPathWithRoundedRect:bounds cornerRadius:cornerRadius] addClip];
    [image drawInRect:bounds];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}
This method is always called like so:
[_personImage setImage:[self roundedRectImageFromImage:image size:CGSizeMake(_personImage.frame.size.width, _personImage.frame.size.height) withCornerRadius:_personImage.frame.size.width/2]];
So I end up with a perfectly round image, but the image itself isn't right aspect-wise. Please help.
P.S. (Screenshot omitted.) The compression is most obvious when an image's width is roughly 70% of its height, before the redrawing to create a round mask.
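For what it's worth, the distortion happens because drawInRect: stretches the image to fill the rect and ignores contentMode. One possible aspect-fill fix inside roundedRectImageFromImage:size:withCornerRadius: (a sketch of my own, not from the answers below):
CGFloat scale = MAX(bounds.size.width / image.size.width,
                    bounds.size.height / image.size.height);
CGSize scaledSize = CGSizeMake(image.size.width * scale, image.size.height * scale);
// Center the oversized rect; the clip set earlier keeps the rounded shape.
CGRect drawRect = CGRectMake((bounds.size.width - scaledSize.width) / 2.0,
                             (bounds.size.height - scaledSize.height) / 2.0,
                             scaledSize.width, scaledSize.height);
[image drawInRect:drawRect]; // instead of [image drawInRect:bounds]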
Hello dear friend there!
Here is my version that works:
Code in ViewController:
[self.profilePhotoImageView setContentMode:UIViewContentModeCenter];
[self.profilePhotoImageView setContentMode:UIViewContentModeScaleAspectFill];
[CALayer roundView:self.profilePhotoImageView];
roundView function in My CALayer+Additions class:
+(void)roundView:(UIView *)view {
    CALayer *viewLayer = view.layer;
    [viewLayer setCornerRadius:view.frame.size.width/2];
    [viewLayer setBorderWidth:0];
    [viewLayer setMasksToBounds:YES];
}
Maybe you should try changing the way you create the rounded image view: this version rounds it by modifying the image view's layer instead of redrawing the image. Hope it helps.
To maintain the aspect ratio of the UIImageView, after setting the image use the following line of code.
[_personImage setContentMode:UIViewContentModeScaleAspectFill];
For a detailed description, follow the reference link:
https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIImageView_Class/

Blur screen with iOS 7's snapshot API

I believe the NDA is down, so I can ask this question. I have a UIView subclass:
BlurView *blurredView = ((BlurView *)[self.view snapshotViewAfterScreenUpdates:NO]);
blurredView.frame = self.view.frame;
[self.view addSubview:blurredView];
It does its job so far in capturing the screen, but now I want to blur that view. How exactly do I go about this? From what I've read, I need to capture the current contents of the view (context?!), convert it to a CIImage (no?), then apply a CIGaussianBlur to it and draw it back on the view.
How exactly do I do that?
P.S. The view is not animated, so it should be OK performance wise.
EDIT: Here is what I have so far. The problem is that I can't capture the snapshot to a UIImage; I get a black screen. But if I add the view as a subview directly, I can see the snapshot is there.
// Snapshot
UIView *view = [self.view snapshotViewAfterScreenUpdates:NO];
// Convert to UIImage
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Apply the UIImage to a UIImageView
BlurView *blurredView = [[BlurView alloc] initWithFrame:CGRectMake(0, 0, 500, 500)];
[self.view addSubview:blurredView];
blurredView.imageView.image = img;
// Black screen -.-
BlurView.m:
- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        self.imageView = [[UIImageView alloc] init];
        self.imageView.frame = CGRectMake(20, 20, 200, 200);
        [self addSubview:self.imageView];
    }
    return self;
}
Half of this question didn't get answered, so I thought it worth adding.
The problem with UIScreen's
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
and UIView's
- (UIView *)resizableSnapshotViewFromRect:(CGRect)rect
afterScreenUpdates:(BOOL)afterUpdates
withCapInsets:(UIEdgeInsets)capInsets
is that you can't derive a UIImage from them - the 'black screen' problem.
In iOS 7, Apple provides a third piece of API for extracting UIImages: a method on UIView
- (BOOL)drawViewHierarchyInRect:(CGRect)rect
afterScreenUpdates:(BOOL)afterUpdates
It is not as fast as snapshotView, but not bad compared to renderInContext (in the example provided by Apple, it is five times faster than renderInContext and three times slower than snapshotView).
Example use:
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Then to get a blurred version
UIImage* lightImage = [newImage applyLightEffect];
where applyLightEffect is one of those Blur category methods on Apple's UIImage+ImageEffects category mentioned in the accepted answer (the enticing link to this code sample in the accepted answer doesn't work, but this one will get you to the right page: the file you want is iOS_UIImageEffects).
The main reference is WWDC 2013 session 226, Implementing Engaging UI on iOS.
By the way, there is an intriguing note in Apple's reference docs for renderInContext that hints at the black-screen problem.
Important: The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
The note hasn't been updated since 10.5, so I guess 'future versions' may still be a while off, and we can add our new CASnapshotLayer (or whatever) to the list.
Sample code from WWDC: iOS_UIImageEffects.
There is a UIImage category named UIImage+ImageEffects
Here is its API:
- (UIImage *)applyLightEffect;
- (UIImage *)applyExtraLightEffect;
- (UIImage *)applyDarkEffect;
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius
                       tintColor:(UIColor *)tintColor
           saturationDeltaFactor:(CGFloat)saturationDeltaFactor
                       maskImage:(UIImage *)maskImage;
For legal reasons I can't show the implementation here; the download includes a demo project. It should be pretty easy to get started with.
To summarize how to do this with foundry's sample code, use the following:
I wanted to blur the entire screen just slightly, so for my purposes I'll use the main screen bounds.
CGRect screenCaptureRect = [UIScreen mainScreen].bounds;
UIView *viewWhereYouWantToScreenCapture = [[UIApplication sharedApplication] keyWindow];
//screen capture code
UIGraphicsBeginImageContextWithOptions(screenCaptureRect.size, NO, [UIScreen mainScreen].scale);
[viewWhereYouWantToScreenCapture drawViewHierarchyInRect:screenCaptureRect afterScreenUpdates:NO];
UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//blur code
UIColor *tintColor = [UIColor colorWithWhite:1.0 alpha:0];
UIImage *blurredImage = [capturedImage applyBlurWithRadius:1.5 tintColor:tintColor saturationDeltaFactor:1.2 maskImage:nil];
//or use [capturedImage applyLightEffect] but I thought that was too much for me
//use blurredImage in whatever way you so desire!
Notes on the screen capture part
UIGraphicsBeginImageContextWithOptions()'s 2nd argument is opaque. It should be NO unless you have nothing with an alpha other than 1. If you pass YES, the screen capture will not look at transparency values, so it will go faster but will probably be wrong.
UIGraphicsBeginImageContextWithOptions()'s 3rd argument is the scale. You probably want to pass in the scale of the device like I did, to differentiate between retina and non-retina. But I haven't really tested this, and I think 0.0f also works (0.0 uses the device's main screen scale).
drawViewHierarchyInRect:afterScreenUpdates:: watch out what you pass for the screen-updates BOOL. I tried to do this right before backgrounding, and if I didn't pass NO the app would go crazy with glitches when I returned to the foreground. You might be able to get away with YES, though, if you're not leaving the app.
Notes on blurring
I have a very light blur here. Changing the blurRadius will make it blurrier, and you can change the tint color and alpha to make all sorts of other effects.
Also you need to add a category for the blur methods to work...
How to add the UIImage+ImageEffects category
You need to download the category UIImage+ImageEffects for the blur to work. Download it here after logging in: https://developer.apple.com/downloads/index.action?name=WWDC%202013
Search for "UIImageEffects" and you'll find it. Just pull out the 2 necessary files and add them to your project. UIImage+ImageEffects.h and UIImage+ImageEffects.m.
Also, I had to enable modules in my build settings because I had a project that wasn't created with Xcode 5. To do this, go to your target build settings, search for "modules", and make sure that "Enable Modules" and "Link Frameworks Automatically" are both set to Yes, or you'll have compiler errors with the new category.
Good luck blurring!
Check the WWDC 2013 sample application "Running with a Snap".
The blurring there is implemented as a category.

Render layer of UIView which is not in the View Hierarchy

What I want to achieve is to take an image of a UIView which has not been added as a subview, present and do stuff with the image, and afterwards add the view to the view hierarchy.
I've searched and tried for a while now and believe that it is simply not possible.
Obviously the problem is that the view hasn't been drawn (drawRect: hasn't been called, I guess) if it hasn't been added as a subview.
Actually, I thought renderInContext: would call drawRect:/display the layer on its own.
It isn't even enough to add it as a subview right before drawing it to an image context, because it won't be rendered immediately.
I take the screenshot with renderInContext: on the layer of the view; see my code here:
[self.view addSubview:view];
UIGraphicsBeginImageContextWithOptions(view.frame.size, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, -view.frame.origin.x, -view.frame.origin.y);
[view.layer renderInContext:context];
UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
So my question is, has anybody managed to render a not visible UIView and if how?
Well, this is awkward.
After a mail conversation with a very kind Apple dev support engineer, we reviewed my code and noticed that I had simply set the hidden property to YES. Just don't do that.
So it is straightforward to make a screenshot of a view which is not in the view hierarchy.
It was totally my fault that it didn't work.
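In other words, renderInContext: happily renders layers that aren't in the view hierarchy. A minimal sketch of my own (forcing layout first is a precaution I'm adding, since an off-screen view may not have been laid out yet):
UIView *offscreenView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 200, 100)];
offscreenView.backgroundColor = [UIColor orangeColor];
// Force layout, since a view outside the hierarchy hasn't been laid out yet.
[offscreenView setNeedsLayout];
[offscreenView layoutIfNeeded];
UIGraphicsBeginImageContextWithOptions(offscreenView.bounds.size, NO, 0);
[offscreenView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();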
Try adding the UIView to the hierarchy but keeping it hidden.
- (void)takeScreenSnapshot {
    UIView *capturedView = self.view;
    UIView *hiddenView = self.hiddenView; // hidden view which is a part of capturedView
    hiddenView.hidden = NO;
    BOOL retina = [self isRetinaDisplay]; // custom helper, not UIKit API
    UIImage *image = [capturedView captureImageWithScale:(retina ? 2.f : 1.f)];
    hiddenView.hidden = YES;
}
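The answer doesn't show captureImageWithScale:; here is a guess at what such a category method might look like (an assumption on my part, not the answerer's code):
@implementation UIView (Capture)

// Hypothetical helper: renders the view's layer into an image at the given scale.
- (UIImage *)captureImageWithScale:(CGFloat)scale {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, scale);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

@end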

How to set the imageView property in a UITableCellView to show only a background color?

I have a custom class inheriting from the UITableViewCell class that shows either an image (left of the title) or a generic dark-colored square if the image is not available. The following code shows a dark square on a light-colored cell background:
imageView = [[UIImageView alloc] initWithFrame:CGRectMake(11, 6, 40, 40)];
[imageView setBackgroundColor:kBackgroundGreyColour];
[cell.contentView addSubview:imageView];
However, instead of creating a custom subview in each table cell I would rather like to use the existing imageView property of the generic UITableViewCell class and modify it somehow to show the square as the code above does. This is what I am trying at this moment:
UIImageView* iv = [[UIImageView alloc] initWithFrame:CGRectMake(11, 6, 40, 40)];
[iv setBackgroundColor:[UIColor redColor]];
self.imageView.hidden = NO;
self.imageView.opaque = iv.opaque;
self.imageView.alpha = iv.alpha;
self.imageView.image = iv.image;
[self bringSubviewToFront:self.imageView];
[self.imageView setBackgroundColor:[UIColor redColor]];
I added all those lines to set as many of the existing UIImageView's properties as possible to the same values as the UIImageView instance created in the first code snippet, and yet the second code snippet doesn't show any dark square. It doesn't show anything at all; the cell looks like there is just the light background and no image view visible. But I can see that the imageView property is not nil, so executing all those lines of code in the second snippet should show something?
However, as soon as I assign a new image to the imageView property (e.g. self.imageView.image = [[UIImage alloc] init...]), the square shows the assigned image without problems.
Edit: Just a note that in the second case I am setting the frame of the imageView in the layoutSubviews method, e.g.:
-(void)layoutSubviews
{
    [super layoutSubviews];
    self.imageView.frame = CGRectMake(11, 6, 40, 40);
}
So my questions are:
1. Which properties of the existing imageView property would I need to set, and to what values, so that the code shows a square filled with a specific color (like the first snippet of code does)?
2. Is there a way of creating the UIImage programmatically so that it shows only a background color without any image associated with it (and which I could use to set the imageView.image property to show that color)?
3. Is it possible to replace the existing imageView property in a UITableViewCell class with a custom view without adding a custom subview (like the first code snippet did), so that I can show a placeholder UIView with a background color when the image is not available?
The reason your code doesn't work is, as you guessed, that setting the background colour of an image view doesn't create anything on its image property.
And you've figured out that you can't directly set the imageView property of the cell either.
I'd say your best bet is the former option: to create a UIImage programmatically.
Although, I'd highly suggest simply creating one in your favourite image-editing software and including it in the bundle. It makes for easy replacement later, for when you may get a better image, and next to no code and effort is required to replace it.
But if you still wish to do it all programmatically, it's not as simple as you'd hope.
// The image only needs the 40x40 size; the (11, 6) origin from the frame doesn't belong in the image itself.
CGRect rect = CGRectMake(0, 0, 40, 40);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, [kBackgroundGreyColour CGColor]);
CGContextFillRect(context, rect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.imageView.image = image;
Should do the trick.
This defines the image size, creates a graphics context (think of it as a canvas), picks your grey colour to use, paints the canvas with it, then scans it into your computer into the small little size you wanted.
The little green imp does it all behind the screen (Sorry, too much Terry Pratchett).
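If you need solid-color placeholders in more than one place, the same drawing wraps neatly into a category; a minimal sketch (my own helper, not part of the answer):
@implementation UIImage (SolidColor)

// Hypothetical helper: returns an image of the given size filled with a solid color.
+ (UIImage *)imageWithColor:(UIColor *)color size:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextFillRect(context, (CGRect){CGPointZero, size});
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

@end
Usage would then be self.imageView.image = [UIImage imageWithColor:kBackgroundGreyColour size:CGSizeMake(40, 40)];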

Use stretchable UIImage in CGContext

I'm searching for a way to draw a stretchable image as the background of my custom cell's background view. I would like to use the drawRect: method and draw an image stretched exactly as it would be stretched with stretchableImageWithLeftCapWidth in a UIImageView. How can I continue this code to make it happen?
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIImage *bgImg = [[UIImage imageNamed:@"bg_table_top"] stretchableImageWithLeftCapWidth:3 topCapHeight:0];
    // How to draw the image stretched to the self.bounds size?
    ....
}
Any reason not to let UIImageView do this? (Include one as a child of your custom cell.) It's true that reducing child views can be a performance improvement in tables, but UIImageView is also pretty good at getting good performance when drawing images.
My guess is that otherwise you're going to have to do multiple draw calls in order to get the ends and middle drawn correctly.
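For what it's worth, a stretchable UIImage honors its caps when drawn with drawInRect:, so a single draw call may be enough. A minimal sketch completing the asker's drawRect: (my own, under that assumption):
- (void)drawRect:(CGRect)rect {
    // drawInRect: keeps the 3pt left cap fixed and stretches the middle to fill bounds.
    UIImage *bgImg = [[UIImage imageNamed:@"bg_table_top"] stretchableImageWithLeftCapWidth:3 topCapHeight:0];
    [bgImg drawInRect:self.bounds];
}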
