iOS: How do I support Retina Display with CGLayer?

I'm drawing a graph on a CALayer in its delegate method drawLayer:inContext:.
Now I want to support Retina Display, as the graph looks blurry on the latest devices.
For the parts that I draw directly on the graphics context passed in by the CALayer, I can draw nicely in high resolution by setting the CALayer's contentsScale property as follows.
if ([myLayer respondsToSelector:@selector(setContentsScale:)]) {
    myLayer.contentsScale = [[UIScreen mainScreen] scale];
}
But the parts that I draw using CGLayer are still blurry.
How do I draw on a CGLayer in high resolution to support Retina Display?
I want to use CGLayer to draw the same plot shapes of the graph repeatedly, as well as to clip the graph lines that exceed the edge of the layer.
I create the CGLayer with CGLayerCreateWithContext, passing the graphics context handed to me by the CALayer, and draw into its context using CG functions such as CGContextFillPath and CGContextAddLineToPoint.
I need to support both iOS 4.x and iOS 3.1.3, on both Retina and legacy displays.
Thanks,
Kura

This is how to draw a CGLayer correctly for all resolutions.
When first creating the layer, you need to calculate the correct bounds by multiplying the dimensions by the scale:
int width = 25;
int height = 25;
CGFloat scale = [self contentScaleFactor];
CGRect bounds = CGRectMake(0, 0, width * scale, height * scale);
CGLayerRef layer = CGLayerCreateWithContext(context, bounds.size, NULL);
CGContextRef layerContext = CGLayerGetContext(layer);
You then need to set the correct scale for your layer context:
CGContextScaleCTM(layerContext, scale, scale);
If the current device has a retina display, all drawing made to the layer will now be drawn twice as large.
When you finally draw the contents of your layer, make sure you use CGContextDrawLayerInRect and supply the unscaled CGRect:
CGRect bounds = CGRectMake(0, 0, width, height);
CGContextDrawLayerInRect(context, bounds, layer);
That's it!
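Putting the steps together, here is a minimal sketch of a Retina-aware drawLayer:inContext: (the 25x25 size is taken from the snippets above; drawPlotShape is a placeholder for your own drawing code):
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
{
    // Fall back to 1.0 on pre-iOS 4 devices that don't know about contentsScale.
    CGFloat scale = 1.0f;
    if ([layer respondsToSelector:@selector(contentsScale)]) {
        scale = layer.contentsScale;   // 2.0 on Retina displays
    }

    // Create the CGLayer at pixel size, then scale its context back to points.
    CGSize layerSize = CGSizeMake(25 * scale, 25 * scale);
    CGLayerRef plotLayer = CGLayerCreateWithContext(context, layerSize, NULL);
    CGContextRef plotContext = CGLayerGetContext(plotLayer);
    CGContextScaleCTM(plotContext, scale, scale);

    // Draw the repeated plot shape once into the CGLayer...
    // drawPlotShape(plotContext);

    // ...then stamp it wherever it is needed, using unscaled point rects.
    CGContextDrawLayerInRect(context, CGRectMake(10, 10, 25, 25), plotLayer);
    CGContextDrawLayerInRect(context, CGRectMake(50, 10, 25, 25), plotLayer);

    CGLayerRelease(plotLayer);
}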

I decided not to use CGLayer and directly draw on the graphics context of the CALayer, and now it's drawn nicely in high resolution on retina display.
I found a similar question here, and realized that there is no point in using CGLayer in my case.
I used CGLayer because of Apple's sample program "Using Multiple CGLayer Objects to Draw a Flag" in the Quartz 2D Programming Guide. That example creates one CGLayer for a star and reuses it to draw 50 stars. I thought this was done for performance reasons, but I didn't see any performance difference.
To clip the graph lines that exceed the edge of the layer, I decided to use multiple CALayers.
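For reference, a minimal sketch of that setup (plotAreaLayer and the frame values are illustrative; the drawing itself still happens in drawLayer:inContext: as before):
// A dedicated sublayer clips the graph lines at its edge, and contentsScale
// keeps its contents sharp on Retina screens (guarded for iOS 3.1.3).
CALayer *plotAreaLayer = [CALayer layer];
plotAreaLayer.frame = CGRectMake(40, 20, 240, 200);   // illustrative geometry
plotAreaLayer.masksToBounds = YES;                    // cuts off lines at the edge
if ([plotAreaLayer respondsToSelector:@selector(setContentsScale:)]) {
    plotAreaLayer.contentsScale = [[UIScreen mainScreen] scale];
}
[myLayer addSublayer:plotAreaLayer];                  // myLayer: the graph's parent layer, as above
[plotAreaLayer setNeedsDisplay];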

Related

Access iPhone Absolute Pixel Position

In the screen space of an iPhone/iPad, Apple uses points, which are typically half the actual resolution of the screen. My question is: is it possible to access the actual pixels themselves? For example, if I take a UIView and give it a frame of (0, 0, 0.5, 0.5) with a background color of red, I can't see it on the screen.
Just wondering if this is possible.
Thanks!
Sure it's possible.
The code you already have should be working (a UIView with a size of (0.5, 0.5)). I just ran it and captured this result from the simulator:
Yea. That's difficult to see. Let's zoom that in.
So yes, you can draw on-screen in smaller values than a single point.
However, to draw a single pixel, you'll want to be using a point value that is 1/scaleOfScreen (as not all devices have 2x displays). So, for example, you'll want your code to look something like this:
CGFloat scale = [UIScreen mainScreen].scale;
CGFloat pixelPointWidth = 1/scale;
UIView* v = [[UIView alloc] initWithFrame:CGRectMake(20, 20, pixelPointWidth, pixelPointWidth)];
v.backgroundColor = [UIColor redColor];
[self.view addSubview:v];
This will now create a UIView that occupies a single pixel on-screen.
Although, if you want to be doing a lot of pixel-perfect drawing, you should probably be using something lower level than a single UIView (have a look at Core Graphics).
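For example, a custom UIView's drawRect: could fill exactly one device pixel like this (a rough sketch; the position and color are arbitrary):
// Fill exactly one device pixel near the point (20, 20).
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGFloat pixel = 1.0f / self.contentScaleFactor;   // one device pixel, expressed in points
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextFillRect(ctx, CGRectMake(20, 20, pixel, pixel));
}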
However.
You may encounter some issues with this method when drawing on an iPhone 6 Plus. Because its screen's scale differs from its nativeScale, it will first render your content in the logical coordinate space of 3x and then downsample to the actual screen resolution (around 2.6x).
This will most probably result in some pixel bleeding, where your 'pixel' view can be rendered in neighboring pixels (although usually at a reduced brightness).
Unfortunately, there is no easy way around this problem without using an even lower level API such as OpenGL or Metal, where you can circumvent this automatic scaling and then downsampling, and draw directly into the screen's actual coordinate space.
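If it helps, you can detect this situation at runtime (a small sketch; nativeScale is only available on iOS 8 and later):
// On an iPhone 6 Plus, scale (3.0) differs from nativeScale (about 2.608),
// which indicates the extra downsampling step described above.
UIScreen *screen = [UIScreen mainScreen];
if ([screen respondsToSelector:@selector(nativeScale)] && screen.nativeScale != screen.scale) {
    NSLog(@"Rendered at %.1fx, downsampled to %.3fx", screen.scale, screen.nativeScale);
}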
Have a look here for a nice little overview on how different devices render content onto their screens.
Have a look here for more info on how pixel bleeding can occur on the iPhone 6 Plus.
You can guess the pixel from the point by multiplying by a coefficient that depends on the device resolution (in ppi), but you don't want to do this.
Also, in your example you did not normalize the coordinates, so you are basically trying to display a red box at the first pixel (top left) with a size of half a point, which is why you can't see it.
EDIT
To draw a red box, you can use this sample code:
// Draw a red box
[[UIColor redColor] set];
UIRectFill(CGRectMake(20, 20, 100, 100)); // position (x : 20, y: 20) (still top left) and size (100*100 points)

Tool or technique for examining retina pixels iOS?

I am working on an iOS app that requires very precise drawing and would like to have some way of visually inspecting what, exactly, is being drawn to each (physical) pixel on my iOS device screen. This would be similar to the Pixie app dev tool on OS X, but for iOS -- instead of simply blowing up and anti-aliasing the screen, it would show a very clear grid of each and every pixel, and what shades/colors are being drawn to those pixels.
Does anyone know of such a tool or technique?
Here's a screenshot from Pixie on OS X on my Retina MacBook that shows the kind of output I'm looking for. You can clearly see, for example, that the designers specified 1 point (which spans two retina pixels) for the "minus" sign in the yellow minimize icon.
Assuming that you are using Quartz to do your drawing into a UIView, you can draw on pixel boundaries rather than point boundaries by using CGContextScaleCTM. Here is a rough outline of how to do this with a screenshot of your app. You could also have the user take a screenshot of a different app and then import it into yours.
- (void)drawRect:(CGRect)rect
{
    UIView *rootView = <GET_YOUR_ROOT_VIEW>;
    // You will probably want to change rect so you don't get distortion.
    // This assumes that this view is the same size as the screen.
    [rootView drawViewHierarchyInRect:CGRectMake(0, 0, rect.size.width * 8, rect.size.height * 8)
                   afterScreenUpdates:YES];
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Assumes this is a @2x Retina screen. You should check the contentScaleFactor
    // to be sure and change accordingly.
    CGContextScaleCTM(ctx, 0.5, 0.5);
    // Since we made the screenshot 8x bigger, the pixels
    // on a @2x device fall at increments of 4.
    for (int x = 0; x < rect.size.width * 8; x += 4)
    {
        // Draw a vertical grid line at x into ctx
    }
    for (int y = 0; y < rect.size.height * 8; y += 4)
    {
        // Draw a horizontal grid line at y into ctx
    }
}
I am sorry that I don't have time to actually write and test this code myself, so there are probably a few little issues with it. But it should get you going in the right direction.
Also, since you are blowing up the image, you don't actually need to change the scale with CGContextScaleCTM; you just need to draw your lines at the right intervals.

iOS7 UIImage drawAtPoint not retina

I am trying to draw an image with the following code:
[img drawAtPoint:CGPointZero];
but the problem is that on an iPhone with a Retina display the image doesn't get drawn at Retina scale. It seems like the image gets upscaled and then drawn.
I don't want to use drawInRect: because the image is already the right size, and drawInRect: is way slower.
Any ideas?
You are probably not setting the appropriate scale factor. When you create the bitmap context, one of the arguments is the scale:
void UIGraphicsBeginImageContextWithOptions(
    CGSize size,
    BOOL opaque,
    CGFloat scale
);
According to the official documentation scale is:
The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
You're probably passing 1.0f which will result in the issue you've described. Try passing 0.0f.
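For example, something along these lines should come out sharp on Retina devices (the surrounding image-context code is assumed, not shown in the question):
// Passing 0.0 as the scale lets UIKit use the main screen's scale factor,
// so drawAtPoint: renders at full Retina resolution.
UIGraphicsBeginImageContextWithOptions(img.size, NO, 0.0);
[img drawAtPoint:CGPointZero];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();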

What does CALayer.contentsScale mean?

I'm reading this tutorial, iOS 7 Blur Effects with GPUImage. I have read the documentation, which says this property is the ratio of pixels to points (x px / y pt). But I don't get this line of code:
_blurView.layer.contentsScale = (MENUSIZE / 320.0f) * 2;
What's the logic behind this line? How should I determine the contentsScale in my code?
If I don't set the contentsScale, which defaults to 2.0, the screen looks like this:
But after I set it to (MENUSIZE / 320.0f) * 2, the screen looks like this:
This is strange, because the contentsScale decreased but the image grew bigger. MENUSIZE is 150.0f.
contentsScale determines the size of the backing store bitmap, so that the bitmap will work on both nonretina and retina screens.
Let's say you make a layer (CALayer) into which you intend to draw. Let's say its size is 100x100. Then to make this layer look good on a double-resolution screen, you will want its contentsScale to be 2.0. This means that behind the scenes the bitmap is 200x200. But it is transformed so that you still treat it as 100x100 when you draw into it; you think in points, just as you normally would, and the backing store is scaled to match the doubled pixels of a Retina device.
In most cases you don't have to worry about this, because if a layer is the main layer of a view, its contentsScale is set automatically for the current device. But if you create a layer yourself, in code, out of whole cloth, then setting its contentsScale based on the scale of the main UIScreen is up to you.
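For example, a layer created in code typically needs something along these lines (the geometry and delegate name are illustrative):
// A hand-made layer: without this, its backing store stays at 1x
// and its contents look blurry on a Retina screen.
CALayer *layer = [CALayer layer];
layer.frame = CGRectMake(0, 0, 100, 100);            // size in points
layer.contentsScale = [UIScreen mainScreen].scale;   // 2.0 on Retina -> 200x200 backing store
layer.delegate = myDrawingDelegate;                  // whatever object implements drawLayer:inContext:
[layer setNeedsDisplay];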

renderInContext: producing an image with blurry text

I am prerendering a composited image with a couple different UIImageViews and UILabels to speed up scrolling in a large tableview. Unfortunately, the main UILabel is looking a little blurry compared to other UILabels on the same view.
The black letters "PLoS ONE" are in a UILabel, and they look much blurrier than the words "Medical" or "Medicine". The logo "PLoS one" is probably similarly being blurred, but it's not as noticeable as the crisp text.
The entire magazine cover is a single UIImage assigned to a UIButton.
(source: karlbecker.com)
This is the code I'm using to draw the image. The magazineView is a rectangle that's 125 x 151 pixels.
I have tried different scaling qualities, but that has not changed anything. And it shouldn't, since the scaling shouldn't be different at all. The UIButton I'm assigning this image to is the exact same size as the magazineView.
UIGraphicsBeginImageContextWithOptions(magazineView.bounds.size, NO, 0.0);
[magazineView.layer renderInContext:UIGraphicsGetCurrentContext()];
[coverImage release];
coverImage = UIGraphicsGetImageFromCurrentImageContext();
[coverImage retain];
UIGraphicsEndImageContext();
Any ideas why it's blurry?
When I begin an image context and render into it right away, is the rendering happening on an even pixel, or do I need to manually set where that render is occurring?
Make sure that your label coordinates are integer values. If they are not whole numbers they will appear blurry.
I think you need to use CGRectIntegral. For more information, please see: What is the usage of CGRectIntegral? and Reference of CGRectIntegral.
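For example (the frame values here are made up to show the rounding, and label stands for the blurry UILabel):
// Snap a fractional frame to whole-point values before rendering,
// so the label is not composited on a sub-point boundary.
CGRect fractionalFrame = CGRectMake(10.25, 20.7, 100.4, 21.3);  // made-up values
label.frame = CGRectIntegral(fractionalFrame);                  // becomes (10, 20, 101, 22)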
I came across the same problem today, where my content got pixelated when producing an image from UILabel text.
We use UIGraphicsBeginImageContextWithOptions() to configure the drawing environment for rendering into a bitmap. It accepts three parameters:
size: The size of the new bitmap context. This represents the size of the image returned by the UIGraphicsGetImageFromCurrentImageContext function.
opaque: A Boolean flag indicating whether the bitmap is opaque. If the opaque parameter is YES, the alpha channel is ignored and the bitmap is treated as fully opaque.
scale: The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
So we should use a proper scale factor with respect to the device display (1x, 2x, 3x) to fix this issue.
Swift 5 version:
UIGraphicsBeginImageContextWithOptions(frame.size, true, UIScreen.main.scale)
defer { UIGraphicsEndImageContext() }   // balance the begin call
if let currentContext = UIGraphicsGetCurrentContext() {
    nameLabel.layer.render(in: currentContext)
    let nameImage = UIGraphicsGetImageFromCurrentImageContext()
    return nameImage
}
return nil
