iOS CATiledLayer and TilingView scale problems?

I am using the TilingView from the Apple PhotoScroller example to tile some images. This works great for most of my images, but a few of them produce odd scale values. I set the level of detail to 4. My images are pre-scaled at the values 100, 50, 25, and 12.5, then cut into 256x256 tiles at each of those levels.
In the TilingView drawRect: method, the scale I read should be one of 4 values, normally 1.0, 0.50, 0.25, or 0.125. Since I store my images based on these scale values, when I get an unexpected scale value the lookup breaks and the tiles cannot be loaded. For example, for one image at the 0.50 level the actual value I get is 0.499798.
Any ideas what's going on here? If I tell the CATiledLayer to have 4 levels of detail, how do I end up with these odd values?
CGFloat scale = CGContextGetCTM(context).a; // x-scale component of the current transform
NSLog(@"scale = %f", scale);
CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
CGSize tileSize = tiledLayer.tileSize;
How can I ensure that, for any image size I specify, the scale I get back is actually one of the 4 scales 100, 50, 25, or 12.5?

There are several bugs in that sample's code. One of them involves proper rounding of those scale values, and that is what leads to the issue you are seeing; there are also other subtle issues. Please have a look at this question, where those issues (and the fixes) are described in more detail.
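If you just need a stopgap in your own drawRect:, you can snap the raw CTM scale to the nearest power-of-two level of detail before using it as a lookup key. A minimal Objective-C sketch; TVSnappedScale is a hypothetical helper, not part of the PhotoScroller sample:

#import <math.h>

// Hypothetical helper: snap a raw CTM scale such as 0.499798 to the
// nearest power-of-two level of detail (1.0, 0.5, 0.25, 0.125, ...).
static CGFloat TVSnappedScale(CGFloat rawScale) {
    return (CGFloat)pow(2.0, round(log2(rawScale)));
}

// Inside drawRect::
// CGFloat scale = TVSnappedScale(CGContextGetCTM(context).a); // 0.499798 -> 0.50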

Related

How to show images as pixelated instead of blurry on iOS [duplicate]

When you put a UIImage that is smaller than the view into a UIImageView, and the content mode of the view is ScaleToFit, iOS enlarges the bitmap, as expected. What I never expected is that it blurs the edges of the pixels it has scaled up. I can see this might be a nice touch if you're looking at photographs, but in many other cases I want to see those nasty, hard, straight edges!
Does anyone know how you can configure a UIImage or UIImageView to enlarge with sharp pixel edges? That is: let it look pixellated, not blurred.
Thanks!
If you want to scale up an image in a UIImageView while keeping the edges sharp, use the following CALayer property.
imageview.layer.magnificationFilter = kCAFilterNearest;
It seems that magnificationFilter affects the interpolation method used for a view's contents. I recommend reading the explanation of the property in CALayer.h.
/* The filter types to use when rendering the `contents' property of
* the layer. The minification filter is used when to reduce the size
* of image data, the magnification filter to increase the size of
* image data. Currently the allowed values are `nearest' and `linear'.
* Both properties default to `linear'. */
@property(copy) NSString *minificationFilter, *magnificationFilter;
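For completeness, a minimal sketch showing both filters side by side (imageview is the same hypothetical view as above); nearest keeps scaled-up pixels hard-edged, while linear remains a sensible default when scaling down:

imageview.layer.magnificationFilter = kCAFilterNearest; // hard pixel edges when enlarging
imageview.layer.minificationFilter = kCAFilterLinear;   // default smoothing when shrinking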
I hope that my answer is useful for you.

PDF vector images in iOS. Why does having a smaller image result in jagged edges?

I want to use PDF vector images in my app, but I don't totally understand how they work. I understand that a PDF file can be resized to any size and will retain quality. I have a very large PDF image (a cartoon/sticker for a chat app) and it looks perfectly smooth at a medium size on screen. If I start to go smaller, though, say thumbnail size, the black outline starts to look jagged. Why does this happen? I thought the images could be resized without quality loss. Any help would be appreciated.
Thanks
I had a similar issue when programmatically changing the UIImageView's centre.
This can lead to pixel misalignment of your view, i.e. the x or y of the frame's origin (or the width or height of the frame's size) may lie on a non-integral value, such as x = 10.5, where it would display correctly at x = 10.
Rendering views positioned a fraction of a pixel off results in jagged lines; I think it's related to antialiasing.
Therefore wrap the CGRect of the frame with CGRectIntegral() to convert your frame's origin and size values to integers.
Example (Swift):
imageView?.frame = CGRectIntegral(CGRectMake(10.5, 10.25, 100, 100))
// CGRectIntegral returns the smallest integral rect containing the input: (10, 10, 101, 101)
See the Apple documentation https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CGGeometry/#//apple_ref/c/func/CGRectIntegral
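For the centre-setting case specifically, here is a minimal Objective-C sketch of how a fractional origin can appear and how CGRectIntegral snaps it back (imageView and the values are hypothetical):

// A 100x100 view centred at a half-point position gets a fractional origin:
imageView.center = CGPointMake(50.5, 50.5); // frame origin becomes (0.5, 0.5)
// Snap the frame back onto whole pixels:
imageView.frame = CGRectIntegral(imageView.frame); // now origin (0, 0), size (101, 101)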

Glitching GPUImageAmatorkaFilter with images that are certain dimensions

Has anyone seen issues with image sizes when using GPUImage's GPUImageAmatorkaFilter?
It seems to be related to multiples of 4: when the width and height aren't multiples of 4, the output glitches.
For example, if I try and filter an image with width and height 749, it glitches.
If I scale it to 752 or 744, it works.
The weird thing is, it also glitches at 748, which is a multiple of 4, but an odd one (4 × 187).
The initial workaround is to do some calculations to make the image smaller, but it's a rubbish solution; I'd obviously much prefer to be able to filter any size.
(Before and after screenshots not included.)
GPUImageAmatorkaFilter uses GPUImageLookupFilter with lookup_amatorka.png as its lookup texture. That texture is organised as 8x8 quads of 64x64 pixels, representing all possible RGB colors. I tested GPUImageAmatorkaFilter with a 749x749px image and it works (first check that your code is up to date). I believe you are using a lookup texture of the wrong size; it should be 512x512px.
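If you want to rule the lookup texture out, you can wire up the underlying GPUImageLookupFilter yourself with a known-good 512x512 lookup image. A minimal Objective-C sketch, where inputImage stands in for your 749x749 source image:

#import "GPUImage.h"

UIImage *lookupImage = [UIImage imageNamed:@"lookup_amatorka.png"]; // must be 512x512
GPUImagePicture *photoSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImagePicture *lookupSource = [[GPUImagePicture alloc] initWithImage:lookupImage];
GPUImageLookupFilter *lookupFilter = [[GPUImageLookupFilter alloc] init];

[photoSource addTarget:lookupFilter];  // first input: the photo
[lookupSource addTarget:lookupFilter]; // second input: the lookup texture

[lookupFilter useNextFrameForImageCapture];
[photoSource processImage];
[lookupSource processImage];
UIImage *filtered = [lookupFilter imageFromCurrentFramebuffer];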

What does CALayer.contentsScale mean?

I'm reading this tutorial, iOS 7 Blur Effects with GPUImage. I have read the documentation; this variable means pixels per point (x px / y pt). But I don't get this line of code.
_blurView.layer.contentsScale = (MENUSIZE / 320.0f) * 2;
What's the logic behind this line? How should I determine the contentsScale in my code?
If I don't set the contentsScale, which defaults to 2.0, the screen looks like this: (screenshot not included)
But after I set it to (MENUSIZE / 320.0f) * 2, the screen is: (screenshot not included)
This is strange, because the contentsScale decreased but the image grew bigger. MENUSIZE is 150.0f.
contentsScale determines the size of the layer's backing-store bitmap, so that the bitmap matches both non-Retina and Retina screens.
Let's say you make a layer (CALayer) into which you intend to draw, and say its size is 100x100. To make this layer look good on a double-resolution screen, you will want its contentsScale to be 2.0. This means that behind the scenes the bitmap is 200x200. But it is transformed so that you still treat it as 100x100 when you draw into it; you think in points, just as you normally would, and the backing store is scaled to match the doubled pixels of a Retina device.
In most cases you don't have to worry about this, because if a layer is the main layer of a view, its contentsScale is set automatically for the current device. But if you create a layer yourself, in code, out of whole cloth, then setting its contentsScale based on the scale of the main UIScreen is up to you.
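A minimal Objective-C sketch of that standalone-layer case (the frame values are arbitrary):

CALayer *layer = [CALayer layer];
layer.frame = CGRectMake(0, 0, 100, 100); // 100x100 points
// A layer created in code defaults to contentsScale 1.0 and will look
// blurry on a Retina screen; match the screen's scale yourself.
layer.contentsScale = [UIScreen mainScreen].scale; // e.g. 2.0 -> 200x200 pixel backing store
[layer setNeedsDisplay];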

How to improve UIImage detail quality? (too blurry)

These UIImages are all a bit blurry when the detail is more complex.
Can anyone advise me on this one?
I have already tried using CGRectIntegral, and the images are always the same size as the UIImageView frame.
Conclusion:
You should always try to keep the real image and the image view at the same frame size, i.e. the same pixel width and height (see the sketch after this list).
You can use CGRectIntegral to patch up minor mismatches (fixing the odd placement of the images, for instance).
You should use the .png file type and keep the DPI at 72 at least.
If you want to scale the image up to a larger format, scale it from the image's vector source, or if that is not possible, scale it while keeping a minimum of 72 DPI.
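A minimal Objective-C sketch of the first point, redrawing an image at the exact pixel size of its image view (image and imageView are hypothetical):

// Redraw `image` at the image view's point size and the screen's scale,
// so the bitmap's pixel size matches the view exactly.
CGSize targetSize = imageView.bounds.size;
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0); // 0.0 = use screen scale
[image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageView.image = resized;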
