Custom drawing with CATiledLayer creates artifacts - ios

I've got a simple UIView class that draws some text in its drawRect: routine:
[mString drawInRect:theFrameRect withFont:theFont];
That looks OK at regular resolution, but when zoomed, it's fuzzy:
[image removed, not enough posts]
So, I added some tiling:
CATiledLayer *theLayer = (CATiledLayer *) self.layer;
theLayer.levelsOfDetailBias = 8;
theLayer.levelsOfDetail = 8;
theLayer.tileSize = CGSizeMake(1024,1024);
(plus the requisite layerClass routine)
but now the text draws twice when zoomed, whenever the frame is larger than the tile size:
[image removed, not enough posts]
I'm not clear as to the solution for this. Drawing the text is an atomic operation. I could figure out how to calculate what portion of the text to draw based on the rect that's passed in...but is that really the way to go? Older code examples use drawLayer, but that seems to have been obviated by iOS 5, and is clearly more cumbersome than a straight drawRect call.
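For reference, the setup being described looks like this (a minimal sketch; mString, theFrameRect, and theFont are the question's variables):

```objectivec
// Route the view's backing layer to CATiledLayer.
+ (Class)layerClass
{
    return [CATiledLayer class];
}

// With CATiledLayer, drawRect: is invoked once per tile with the CTM
// already translated and scaled, so drawing the full string at its
// fixed frame each time should clip to the current tile.
- (void)drawRect:(CGRect)rect
{
    [mString drawInRect:theFrameRect withFont:theFont];
}
```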

Related

Drawing an object of NSString in a Rectangle.

I'm trying to put a string in a rectangular form via drawInRect:withAttributes: method. The official documentation writes: "Draws the receiver with the font and other display characteristics of the given attributes, within the specified rectangle in the currently focused UIView."
This action should be done on a button event. But it doesn't work at all. Nothing is shown.
The code that I use:
- (IBAction)buttonPressed:(id)sender
{
    // some area on the top (a part of UIView)
    CGRect rect = CGRectMake(50, 50, 100, 100);
    // my text
    NSString *str = @"Eh, what's wrong with this code?";
    // crappy attributes
    NSDictionary *attributes = @{NSFontAttributeName: [UIFont boldSystemFontOfSize:8]};
    // and finally passing a message
    [str drawInRect:rect withAttributes:attributes];
}
Is there a particular kind of areas that might be used for drawing strings in rects? Or is it possible to draw a string in a rect where I'd like it to be? Please, help me understand it better!
But it doesn't work at all. Nothing is shown.
That's because no drawing context is set up when you run that code. You're looking at drawing as something that you can do whenever you feel like it, but in fact drawing is something that you should only do when the system asks you to. At other times, you should instead remember what you want to draw, and then draw it when the time comes.
When is "the time" to draw? It's when your view's -drawRect: method is called. Since your method is an action, it looks like you're probably trying to do the drawing in a view controller. What you want to do instead is to subclass UIView and override -drawRect: to draw what you want.
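A minimal sketch of that approach, reusing the question's rect, string, and attributes (TextView is a hypothetical subclass name):

```objectivec
@interface TextView : UIView
@end

@implementation TextView

// Called by the system when the view needs to be drawn;
// a valid graphics context is guaranteed to be set up here.
- (void)drawRect:(CGRect)rect
{
    CGRect textRect = CGRectMake(50, 50, 100, 100);
    NSDictionary *attributes = @{NSFontAttributeName: [UIFont boldSystemFontOfSize:8]};
    [@"Eh, what's wrong with this code?" drawInRect:textRect withAttributes:attributes];
}

@end
```

The button action then just asks the view to redraw, e.g. `[self.textView setNeedsDisplay];`.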
It doesn't work that way; you first need a CGContextRef set up to draw the text into.
Your code is fine, just place everything within drawRect.

view.layer drawInContext:UIGraphicsGetCurrentContext() not drawing

I am trying to draw a UIView into a UIImage. Here is the code that I'm using:
UIGraphicsBeginImageContextWithOptions(myView.bounds.size, YES, 0.f);
[myView.layer drawInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I've verified that myView.bounds.size is valid. myView displays correctly on screen. However, img is completely black (I've tried both displaying it in a UIImageView and writing the JPEG representation to file). The image dimensions are correct, the color space (in the JPEG file output) is RGB, the color profile is sRGB, etc., which means we're not dealing with a corrupted image (in the sense that the image/bitmap data itself is valid). I've tested the case on the 6.0 simulator, the 7.0 simulator, and a 7.0.6 device, all with the same result. The layer doesn't have any sublayers, and I've tried setting masksToBounds to NO, which didn't change anything.
What could be causing the view's layer not to draw?
You need to change:
[myView.layer drawInContext:UIGraphicsGetCurrentContext()];
to:
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
Note that drawInContext: does not actually do anything by default:
The default implementation of this method does not do any drawing
itself. If the layer’s delegate implements the drawLayer:inContext:
method, that method is called to do the actual drawing.
Subclasses can override this method and use it to draw the layer’s
content. When drawing, all coordinates should be specified in points
in the logical coordinate space.
A UIView's layer delegate is set to the UIView, but it does not look like the UIView necessarily draws to the provided context. More investigation is necessary on this point.
Update: per Rob Jones' comment, note that these APIs return UIViews, not images.
There is a new API in iOS 7 that looks promising:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates NS_AVAILABLE_IOS(7_0);
- (UIView *)resizableSnapshotViewFromRect:(CGRect)rect afterScreenUpdates:(BOOL)afterUpdates withCapInsets:(UIEdgeInsets)capInsets NS_AVAILABLE_IOS(7_0); // Resizable snapshots will default to stretching the center
- (BOOL)drawViewHierarchyInRect:(CGRect)rect afterScreenUpdates:(BOOL)afterUpdates NS_AVAILABLE_IOS(7_0);
Check the UIView (UISnapshotting) category in UIView.h
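Of the three, only drawViewHierarchyInRect:afterScreenUpdates: renders into the current graphics context, so it is the one that can produce a UIImage (a sketch, assuming the question's myView):

```objectivec
UIGraphicsBeginImageContextWithOptions(myView.bounds.size, NO, 0.f);
// Renders the view hierarchy into the current image context.
[myView drawViewHierarchyInRect:myView.bounds afterScreenUpdates:YES];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```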

Implement Blur over parts of view

How can I implement the image below programmatically, meaning the digits can change at runtime or even be replaced with a movie?
Just add a blurred UIView on top of your thing.
For example, make a UIImage of your desired view size, blur it using CIFilter, and then add it to your view. It should achieve the desired effect.
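A sketch of that CIFilter route, assuming snapshot is an existing UIImage of the region to blur and the radius of 8 is illustrative:

```objectivec
CIContext *ciContext = [CIContext contextWithOptions:nil];
CIImage *input = [CIImage imageWithCGImage:snapshot.CGImage];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:input forKey:kCIInputImageKey];
[blur setValue:@8.0 forKey:kCIInputRadiusKey];
// CIGaussianBlur extends the image's edges, so crop back to the input's extent.
CGImageRef cgImage = [ciContext createCGImage:blur.outputImage fromRect:input.extent];
UIImage *blurred = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
```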
This is generally the same question and is answered by quite a few methods. Anyway, I would propose one more:
Get the image from UIView
+ (UIImage *)imageFromLayer:(CALayer *)layer {
    UIGraphicsBeginImageContext([layer frame].size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
You may need to play around a bit with this to grab just the part of the view you want as the image. Now create a new view and add image views to it (with the image you get from the layer). Then offset the centers of the image views to approximate a Gaussian blur, take the image from this layer again, and place it back on the original view.
The offsets should be defined by a radius fragment (I'd start with 0.5f) and a resample range.
for (int i = 1; i < resampleCount; i++) {
    view1.center = CGPointMake(view1.center.x + radiusFragment*i, view1.center.y);
    view2.center = CGPointMake(view2.center.x - radiusFragment*i, view2.center.y);
    view3.center = CGPointMake(view3.center.x, view3.center.y + radiusFragment*i);
    view4.center = CGPointMake(view4.center.x, view4.center.y - radiusFragment*i);
    // add the subviews
}
// get the image from the view
All the subviews need to have alpha set to 1.0f/(resampleCount*4)
This method might not be the fastest but it would be extremely easy to implement and if you can pimp the radius and resample range to minimum fragments it should do pretty well.
Use a UIView with a white background and decrease the alpha property:
blurView.backgroundColor = [UIColor colorWithRed:1.0 green:1.0 blue:1.0 alpha:0.3];

How to add Stretchable UIImage in the CALayer.contents?

I have a CALayer and I want to add to it a stretchable image. If I just do:
_layer.contents = (id)[[UIImage imageNamed:@"grayTrim.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(0.0, 15.0, 0.0, 15.0)].CGImage;
it won't work, since the layer's default contentsGravity is kCAGravityResize.
I've read that this could be accomplished using the contentsCenter but I cannot seem to figure out how exactly would I use that to achieve the stretched image in my CALayer.
Any ideas are welcome!
Horatiu
Let's say you have a stretchable image that stretches only in width and has a fixed height (for simplicity's sake).
The image is 31px wide (15px of fixed size on each side that doesn't stretch, with the middle 1px being stretched).
Assuming your layer is a CALayer subclass your init method should look like this:
- (id)init
{
    self = [super init];
    if (self) {
        UIImage *stretchableImage = [UIImage imageNamed:@"stretchableImage.png"];
        self.contents = (id)stretchableImage.CGImage;
        // Needed for the Retina display; otherwise the image will not be scaled properly.
        self.contentsScale = [UIScreen mainScreen].scale;
        // x: the 15px left cap; width: the 1px stretchable band.
        // The full unit height is used since the image's height is fixed.
        self.contentsCenter = CGRectMake(15.0/stretchableImage.size.width, 0.0, 1.0/stretchableImage.size.width, 1.0);
    }
    return self;
}
as per documentation the contentsCenter rectangle must have values between 0-1.
Defaults to the unit rectangle (0.0,0.0) (1.0,1.0) resulting in the entire image being scaled. If the rectangle extends outside the unit rectangle the result is undefined.
This is it. Hopefully someone else will find this useful and it will save some development time.

Use stretchable UIImage in CGContext

I'm searching for a way to draw a stretchable image as the background of my custom cell's background view. I would like to use the drawRect: method and draw an image stretched exactly as it would be stretched with stretchableImageWithLeftCapWidth: in a UIImageView... how can I continue this code to make it happen?
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIImage *bgImg = [[UIImage imageNamed:@"bg_table_top"] stretchableImageWithLeftCapWidth:3 topCapHeight:0];
    // How to draw the image stretched to the self.bounds size?
    ....
}
Any reason not to let UIImageView do this? (Include one as a child of your custom cell.) It's true that reducing child views can be a performance improvement in tables, but UIImageView is also pretty good at getting good performance when drawing images.
My guess is otherwise you're going to have to do multiple draw calls, in order to get the ends and middle drawn correctly.
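If you do draw it yourself, note that UIKit's image drawing methods honor a stretchable image's cap widths, so the drawRect: in the question may need only a single call (a sketch using the question's variables):

```objectivec
- (void)drawRect:(CGRect)rect
{
    UIImage *bgImg = [[UIImage imageNamed:@"bg_table_top"]
                      stretchableImageWithLeftCapWidth:3 topCapHeight:0];
    // drawInRect: stretches the middle and keeps the caps fixed.
    [bgImg drawInRect:self.bounds];
}
```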
