What does CALayer.contentsScale mean?

I'm reading this tutorial, iOS 7 Blur Effects with GPUImage. I have read the documentation, which says this property is the ratio of pixels to points (x px / y pt), but I don't understand this line of code:
_blurView.layer.contentsScale = (MENUSIZE / 320.0f) * 2;
What's the logic behind this line? How should I determine the contentsScale in my code?
If I don't set the contentsScale, which defaults to 2.0, the screen looks like this:
But after I set it to (MENUSIZE / 320.0f) * 2, the screen looks like this:
This is strange because the contentsScale decreased but the image grew bigger. MENUSIZE is 150.0f.

contentsScale determines the size of the layer's backing-store bitmap, so that the bitmap looks right on both non-Retina and Retina screens.
Let's say you make a layer (CALayer) into which you intend to draw. Let's say its size is 100x100. Then, to make this layer look good on a double-resolution screen, you will want its contentsScale to be 2.0. This means that behind the scenes the bitmap is 200x200. But it is transformed so that you still treat it as 100x100 when you draw into it; you think in points, just as you normally would, and the backing store is scaled to match the doubled pixels of a Retina device.
In most cases you don't have to worry about this, because if a layer is the main layer of a view, its contentsScale is set automatically for the current device. But if you create a layer yourself, in code, out of whole cloth, then setting its contentsScale based on the scale of the main UIScreen is up to you.
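For example, here is a minimal sketch of the manual case, assuming a plain CALayer drawn via a delegate (the name badgeLayer and the 100x100 size are just for illustration):
// A layer created by hand is not the backing layer of any view,
// so UIKit will not set its contentsScale for us.
CALayer *badgeLayer = [CALayer layer];
badgeLayer.frame = CGRectMake(0, 0, 100, 100);
// Match the screen: on a 2x device the backing store becomes 200x200 pixels,
// but we still draw into it using 100x100-point coordinates.
badgeLayer.contentsScale = [UIScreen mainScreen].scale;
badgeLayer.delegate = self; // drawLayer:inContext: draws in points as usual
[badgeLayer setNeedsDisplay];
[self.view.layer addSublayer:badgeLayer];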

Related

Get pixel values of image and crop the black part of the image : Objective C

Can I get the pixel values of an image and crop out its black part? For instance, I have this image:
And I want something like this
without the black part.
Any possible solution on how to do this? Any libraries/code?
I am using Objective C.
I have seen this solution to a similar question, but I don't understand it in detail. Please provide the steps in detail. Thanks.
Probably the fastest way of doing this is to iterate through the image and find the border pixels that are not black, then redraw the image into a new context, clipped to the rect defined by those border pixels.
By border pixels I mean the left-most, top-most, bottom-most and right-most non-black pixels. You can get the raw RGBA buffer from the UIImage and then iterate over its width and height, updating the border values when appropriate. For instance, to get leftMostPixel you would first set it to some large value (or to the image width), and then in the loop, if a pixel is not black and leftMostPixel > x, set leftMostPixel = x.
Now that you have the 4 bounding values you can build a frame from them. To redraw just the target rectangle you can use various context-based tools, but probably the easiest is to create a view with the size of the bounding rect, put an image view with the size of the original image on it, and take a screenshot of the view. The image view's origin must be the negative of the bounding rect's origin (we shift it partly offscreen).
You may encounter some issues with the orientation of the image, though. If the image has an orientation other than up, the raw data will not reflect that, so you need to take it into account when creating the bounding rect... Or redraw the image first so it is oriented correctly... Or you can even create a sub-buffer of the RGBA data, create a CGImage from it, and apply the same orientation to the output UIImage as the input had.
So after getting the bounds there are quite a few procedures. Some are slower, some take more memory, some are simply hard to code and have edge cases.
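For illustration, here is a rough sketch of the bounding-box scan described above, followed by a crop using CGImageCreateWithImageInRect (a simpler alternative to the screenshot approach). It assumes an up-oriented image and treats "black" as anything below a small brightness threshold; the function name and threshold are made up for the example:
#import <UIKit/UIKit.h>

// Returns the image cropped to its non-black bounding box.
// Assumes an up-oriented image; orientation handling is left out.
UIImage *ImageByCroppingBlackBorder(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Redraw into a known RGBA buffer so the pixel values can be read.
    uint8_t *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    size_t left = width, right = 0, top = height, bottom = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            const uint8_t *p = pixels + (y * width + x) * 4;
            if (p[0] > 16 || p[1] > 16 || p[2] > 16) { // "not black"
                if (x < left)   left   = x;
                if (x > right)  right  = x;
                if (y < top)    top    = y;
                if (y > bottom) bottom = y;
            }
        }
    }
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    if (right < left || bottom < top) return image; // image is entirely black

    // CGImageCreateWithImageInRect works in pixel coordinates of the CGImage.
    CGRect cropRect = CGRectMake(left, top, right - left + 1, bottom - top + 1);
    CGImageRef cropped = CGImageCreateWithImageInRect(cgImage, cropRect);
    UIImage *result = [UIImage imageWithCGImage:cropped
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(cropped);
    return result;
}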

'Display Zoom' iPhone 6/6s setting blurs graphics

I'm writing a project in Xcode 7 / Swift 2 that is optimized for iPhone 6/6s (i.e. the project has a launch screen file and launch screen images for iPhone 6/6s).
Fortunately or unfortunately, iPhone 6 users have the ability to turn on the 'Display Zoom' setting on the device, which enlarges elements of the interface. When turned on, this setting effectively enlarges a standard iPhone 5 screen size to fit the iPhone 6 screen space, upsampling it by a factor of 1.171875. This upsampling causes raster-based elements such as images, icons, or views that contain UIBezierPath() drawings to appear blurred (mildly, but noticeably).
A few questions:
1 - How can I instruct elements (e.g. a UIView) on the Storyboard, in code, to disregard the Display Zoom setting when the user has turned it on?
2 - What techniques are there to ensure pixel-perfect accuracy remains when Display Zoom is on? (e.g. Is it possible to render graphics using OpenGL rendering, and if so, how?)
3 - Is it possible to replace a 2x image with a 4x image to reduce any blurring when Display Zoom is on? (i.e. will iOS downsample a 4x image to a 2x image on iPhone 6?)
4 - How can UIBezierPath() drawings maintain pixel-perfect accuracy when Display Zoom is on?
Appreciate any experienced responses on this conundrum. Thanks.
There's nothing you can do about this. A user who chooses zoomed mode is deliberately throwing away pixel accuracy. The points in the drawing no longer match the pixels on the screen one-to-one (or one-to-two or one-to-three or any integral ratio). This choice therefore blurs the screen for everything the user does, not just your app.
Nor can you detect what is happening, because in effect a zoomed iPhone 6 is presented to your app as an iPhone 5 (and a zoomed 6 Plus is presented to your app as a 6).
As @matt says, there's nothing you can do about this for normal UIKit content.
However, for OpenGL ES or Metal content, you can opt out of the sampling that the device does and render straight into the device's physical coordinates, allowing for pixel-perfect drawing.
In a graphics app that uses Metal or OpenGL ES, content can be easily rendered at the precise dimensions of the display without requiring an additional sampling stage. This is critical in high-performance 3D apps that perform many calculations for each rendered pixel. Instead, create buffers to render into that are the exact resolution of the display.
OpenGL ES
Set the contentsScale of the CAEAGLLayer to [UIScreen mainScreen].nativeScale, or use a GLKView, which will do this automatically.
You will then want to create your framebuffer with the size of the device's physical (pixel) dimensions.
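A minimal sketch for the manual (non-GLKView) case, assuming a UIView subclass whose +layerClass is CAEAGLLayer and an already-created EAGLContext named context:
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
// Render at the panel's true pixel density, not the logical 2x/3x scale.
eaglLayer.contentsScale = [UIScreen mainScreen].nativeScale;

// After attaching the renderbuffer to the layer, GL reports the backing size
// in physical pixels; use that size for the viewport (and any FBO attachments).
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];
GLint backingWidth = 0, backingHeight = 0;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
glViewport(0, 0, backingWidth, backingHeight);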
Metal
Set the contentsScale of your CAMetalLayer to [UIScreen mainScreen].nativeScale, or use an MTKView, which will do this automatically.
You will also want to adjust the drawable size to account for the scale (lifted from the docs):
CGSize drawableSize = self.bounds.size;
drawableSize.width *= self.contentScaleFactor;
drawableSize.height *= self.contentScaleFactor;
metalLayer.drawableSize = drawableSize;
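Note that this snippet multiplies by contentScaleFactor, so it assumes you have already pointed that factor at the physical scale, for example:
// Make the view (and its backing CAMetalLayer) use the physical pixel scale,
// so the drawableSize computed above lands on real pixels.
self.contentScaleFactor = [UIScreen mainScreen].nativeScale;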
See also this interesting blog post on how the iPhone 6 Plus renders content, plus the follow-up post specifically about Display Zoom.

Access iPhone Absolute Pixel Position

In the screen space of an iPhone/iPad, Apple uses points, which are typically half the actual resolution of the screen. My question is: is it possible to access the actual pixels themselves? For example, if I take a UIView and give it a frame of (0, 0, 0.5, 0.5) with a red background color, I can't see it on the screen.
Just wondering if this is possible.
Thanks!
Sure it's possible.
The code you already have should be working (a UIView with a size of (0.5, 0.5)). I just ran it and captured this result from the simulator:
Yea. That's difficult to see. Let's zoom that in.
So yes, you can draw on-screen in smaller values than a single point.
However, to draw a single pixel, you'll want to be using a point value that is 1/scaleOfScreen (as not all devices have 2x displays). So, for example, you'll want your code to look something like this:
CGFloat scale = [UIScreen mainScreen].scale;
CGFloat pixelPointWidth = 1/scale;
UIView* v = [[UIView alloc] initWithFrame:CGRectMake(20, 20, pixelPointWidth, pixelPointWidth)];
v.backgroundColor = [UIColor redColor];
[self.view addSubview:v];
This will now create a UIView that occupies a single pixel on-screen.
Although, if you want to be doing a lot of pixel-perfect drawing, you should probably be using something lower level than a single UIView (have a look at Core Graphics).
However.
You may encounter some issues with this method when drawing on an iPhone 6 Plus. Because its screen's scale differs from its nativeScale, it will first render your content in the logical coordinate space at 3x and then downsample to the actual screen resolution (around 2.6x).
This will most probably result in some pixel bleeding, where your 'pixel' view can be rendered in neighboring pixels (although usually at a reduced brightness).
Unfortunately, there is no easy way around this problem without using an even lower-level API such as OpenGL or Metal, where you can circumvent the automatic scaling and downsampling and draw directly into the screen's actual coordinate space.
Have a look here for a nice little overview on how different devices render content onto their screens.
Have a look here for more info on how pixel bleeding can occur on the iPhone 6 Plus.
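For reference, a rough Core Graphics equivalent of the single-pixel fill above (a hypothetical drawRect: override in a UIView subclass, reusing the 1/scale idea):
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGFloat scale = [UIScreen mainScreen].scale;
    CGFloat px = 1.0 / scale; // one device pixel, expressed in points

    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    // Integer point origins sit on pixel boundaries, so this fills exactly one pixel.
    CGContextFillRect(ctx, CGRectMake(20, 20, px, px));
}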
You can estimate the pixels from the points, depending on the device resolution (in ppi), by multiplying by a coefficient, but you don't want to do this.
Also, in your example you did not normalize the coordinates, so you are basically trying to display a red box at the first pixel (top left) with a size of half a point, which is why you can't see it.
EDIT
To draw a red box you can use this sample code:
// Draw a red box (inside a view's drawRect: method, where a current context exists)
[[UIColor redColor] set];
UIRectFill(CGRectMake(20, 20, 100, 100)); // origin (x: 20, y: 20), measured from the top left, size 100x100 points

PDF vector images in iOS. Why does having a smaller image result in jagged edges?

I want to use PDF vector images in my app, but I don't totally understand how they work. I understand that a PDF file can be resized to any size and it will retain quality. I have a very large PDF image (a cartoon/sticker for a chat app) and it looks perfectly smooth at a medium size on screen. If I start to go smaller though, say thumbnail size, the black outline starts to look jagged. Why does this happen? I thought the images could be resized without quality loss. Any help would be appreciated.
Thanks
I had a similar issue when programmatically changing the UIImageView's center.
This can lead to pixel misalignment of your view, i.e. the x or y of the frame's origin (or the width or height of the frame's size) may end up on a non-integral value, such as x = 10.5, whereas it would display correctly at x = 10.
Rendering a view whose position falls a fraction of the way into a pixel results in jagged (blurred) edges; I think it's related to antialiasing.
Therefore wrap the CGRect of the frame with CGRectIntegral() to convert your frame's origin and size values to integers.
Example (Swift):
imageView?.frame = CGRectIntegral(CGRectMake(10, 10, 100, 100))
See the Apple documentation https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CGGeometry/#//apple_ref/c/func/CGRectIntegral
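If you compute frames yourself and half-point positions are acceptable on Retina screens, a small helper like the following (purely illustrative, not from the answer above) snaps a value to the nearest device pixel instead of the nearest whole point:
// Round a coordinate to the nearest physical pixel for the main screen.
// On a 2x screen this snaps to multiples of 0.5 pt; on 1x, to whole points.
static CGFloat PixelAlign(CGFloat value) {
    CGFloat scale = [UIScreen mainScreen].scale;
    return round(value * scale) / scale;
}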

iOS CATiledLayer and TilingView scale problems?

I am using the TilingView from Apple's PhotoScroller example to tile some images. This works great for most of my images, but a few of them produce weird scale values. I set the number of levels of detail to 4. My images are pre-scaled to 100%, 50%, 25% and 12.5%, then tiled at 256x256 at each of those levels.
In TilingView's drawRect: method, the scale I read from the context should be one of 4 values, normally 1.0, 0.50, 0.25 or 0.125. Since I store my tiles based on these scale values, a weird scale value breaks the lookup and the tiles cannot be loaded. For example, for one image, at the 0.50 level the actual value I get is 0.499798.
Any ideas what's going on here? If I tell the CATiledLayer to have 4 levels of detail, how do I end up with these weird values?
CGFloat scale = CGContextGetCTM(context).a;
NSLog(@"scale = %f", scale);
CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
CGSize tileSize = tiledLayer.tileSize;
How can I ensure that, for any image size I specify, the scale I get back is actually one of the 4 values 100%, 50%, 25% or 12.5%?
There are several bugs in that sample's code, one of which involves proper rounding of those scale values, which leads to the issue you are seeing. But there are also other subtle issues. Please have a look at this question, where those issues (and the fixes) are described in more detail.
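For illustration (a sketch of the rounding idea, not the exact fix from the linked question), you can snap the CTM scale to the nearest 1/2^n level before using it to look up tiles:
// Inside TilingView's drawRect:
CGContextRef context = UIGraphicsGetCurrentContext();
// The raw CTM scale may come back slightly off (e.g. 0.499798 instead of 0.5).
CGFloat scale = CGContextGetCTM(context).a;
// Snap to the nearest power-of-two level of detail so tile lookups use exact keys.
scale = 1.0 / round(1.0 / scale);
NSLog(@"snapped scale = %f", scale);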
