PDF vector images in iOS. Why does having a smaller image result in jagged edges?

I want to use PDF vector images in my app, but I don't fully understand how they work. I understand that a PDF file can be resized to any size and will retain its quality. I have a very large PDF image (a cartoon/sticker for a chat app) and it looks perfectly smooth at a medium size on screen. If I go smaller though, say thumbnail size, the black outline starts to look jagged. Why does this happen? I thought the images could be resized without quality loss. Any help would be appreciated.
Thanks

I had a similar issue when programmatically changing the UIImageView's centre.
This can lead to pixel misalignment of your view, i.e. the x or y of the frame's origin (or the width or height of the frame's size) may lie on a non-integral value, such as x = 10.5, whereas it will display correctly at x = 10.
Rendering a view whose frame sits a fraction of the way into a pixel produces jagged lines; I think it's related to aliasing.
Therefore, wrap the frame's CGRect with CGRectIntegral() to snap your frame's origin and size values to whole numbers.
Example (Swift):
imageView?.frame = CGRect(x: 10, y: 10, width: 100, height: 100).integral // .integral is the modern equivalent of CGRectIntegral(CGRectMake(...))
See the Apple documentation https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CGGeometry/#//apple_ref/c/func/CGRectIntegral
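For illustration, here is what the snap does to a frame with fractional values (the values are hypothetical):

let r = CGRect(x: 10.3, y: 10.7, width: 100.2, height: 99.6)
// .integral returns the smallest whole-pixel rect that contains r
print(r.integral) // (10.0, 10.0, 101.0, 101.0)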

Related

Get pixel values of image and crop the black part of the image : Objective C

Can I get the pixel values of an image and crop away its black part? For instance, I have an image surrounded by a black region, and I want the same image without the black part.
Any possible solution on how to do this? Any libraries/code?
I am using Objective C.
I have seen a solution to a similar question, but I don't understand it in detail. Please provide the steps in detail. Thanks.
Probably the fastest way of doing this is to iterate through the image, find the border pixels that are not black, and then redraw the image into a new context, clipping to the rect defined by those border pixels.
By border pixels I mean the left-most, top-most, bottom-most and right-most non-black pixels. You can get the raw RGBA buffer from the UIImage and iterate through its width and height, updating the border values as you go. For instance, to find leftMostPixel you would first set it to some large value (such as the image width), and then during iteration, if a pixel is not black and leftMostPixel > x, set leftMostPixel = x.
Once you have the 4 bounding values, you can create a frame from them. To redraw just the target rectangle you can use various context tools, but probably the easiest is to create a view with the size of the bounding rect, put an image view with the size of the original image on it, and take a screenshot of that view. The image view's origin must be the negative of the bounding rect's origin, though (we push it offscreen a bit).
You may encounter issues with the orientation of the image. If the image has an orientation other than up, the raw data will not respect it, so you need to take that into account when computing the bounding rect. Alternatively, redraw the image first so it is oriented correctly, or create a sub-buffer of the RGBA data, create a CGImage from it, and apply the input image's orientation to the output UIImage.
So after getting the bounds there are quite a few possible procedures. Some are slower, some take more memory, some are simply hard to code and have edge cases.
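A rough sketch of the scan-and-crop approach (in Swift, though the same calls map straight to Objective-C; the function name and the "how dark counts as black" threshold are my assumptions, and orientation handling is ignored, as warned above):

import UIKit

func croppingBlackBorder(of image: UIImage, threshold: UInt8 = 10) -> UIImage? {
    guard let cg = image.cgImage else { return nil }
    let width = cg.width, height = cg.height

    // Redraw into a known RGBA layout so indexing the buffer is predictable.
    guard let ctx = CGContext(data: nil, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: width * 4,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    ctx.draw(cg, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let buffer = ctx.data?.assumingMemoryBound(to: UInt8.self) else { return nil }

    // Start at the extremes and tighten whenever a non-black pixel appears.
    var minX = width, minY = height, maxX = -1, maxY = -1
    for y in 0..<height {
        for x in 0..<width {
            let i = (y * width + x) * 4 // R, G, B, A
            if buffer[i] > threshold || buffer[i + 1] > threshold || buffer[i + 2] > threshold {
                minX = min(minX, x); maxX = max(maxX, x)
                minY = min(minY, y); maxY = max(maxY, y)
            }
        }
    }
    guard maxX >= minX, maxY >= minY else { return nil } // image is entirely black

    let bounds = CGRect(x: minX, y: minY,
                        width: maxX - minX + 1, height: maxY - minY + 1)
    return cg.cropping(to: bounds).map { UIImage(cgImage: $0) }
}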

iOS: Keep real photo resolution when making screen capture with UIGraphicsGetImageFromCurrentImageContext

I want to do basic photo editing in my application, and I need to be able to add text over a photo. The original photo is something like 2000+ pixels in width and height, so it is scaled down to fit on screen without changing its aspect ratio.
So, I put the image in a UIImageView, dragged a label over it, and then saved what is on screen with UIGraphicsGetImageFromCurrentImageContext. The problem is that I get a small image (320 x some height).
What is the best approach to accomplish this task without shrinking the resolution?
Thanks a lot!
I had this exact same problem in an app.
The thing I realised is that you can't do this with a screen capture. In turn, this means that dragging labels and text onto the image can't really be done (it can, but bear with me) with UILabels etc...
What you need to do is keep a track of everything that's going on data-wise.
At the moment you have the frame of your UIImageView. This, in reality, is irrelevant. It is purely there to show the user a representation of what is going on.
This is the same for the UILabel. It has a frame too. Again, this is irrelevant to the final image.
What you need is to store the underlying data in terms that are not absolute, and then convert those values into frames for display on the device.
So, say you have an image that is 3200x4800 pixels (just making it easy for me) and it is displayed on the device "shrunk" down to 320x480. Now the user places a label with a frame of 10, 10, 100, 21 containing the text "Hello, world" at a particular font size.
Storing the frame 10, 10, 100, 21 is useless because what you need when the image is output is... 100, 100, 1000, 210 (i.e. ten times the size).
So, really you should be storing information in the background like...
frame = 0.031, 0.021, 0.312, 0.044
// these are all percentages
Now you have percentage values for where the label should be and how big it should be, based on the size of the image.
So, for the shrunk image this returns 10, 10, 100, 21, and for the full image it returns 100, 100, 1000, 210, so the label will look the same size when rendered out.
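A minimal sketch of that conversion (the function name is hypothetical):

// Convert a normalized (percentage-based) frame into a concrete frame
// for whatever size the image is being rendered at.
func frame(fromNormalized n: CGRect, in size: CGSize) -> CGRect {
    return CGRect(x: n.origin.x * size.width,
                  y: n.origin.y * size.height,
                  width: n.width * size.width,
                  height: n.height * size.height)
}

let normalized = CGRect(x: 0.031, y: 0.021, width: 0.312, height: 0.044)
let onScreen = frame(fromNormalized: normalized, in: CGSize(width: 320, height: 480))   // ≈ (10, 10, 100, 21)
let fullSize = frame(fromNormalized: normalized, in: CGSize(width: 3200, height: 4800)) // ≈ (100, 100, 1000, 210)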
You could create a compound UIView: a UIView holding a UIImageView and a UILabel, which you resize to the full image size before rendering it. That would be the easy but naive way of approaching it initially.
Or you could create a UIView with CALayers backing it that display the image and text.
Or you could render out the image and text with some sort of draw method (see the sketch below).
Either way, you can't just use a screen capture.
And yes, this is a lot more complex than it first appears.
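For the draw-method route, here is a minimal sketch (assuming iOS 10+ for UIGraphicsImageRenderer; remember the font size in the attributed string must be scaled by the same factor as the frame):

// Render the full-resolution image plus the text into a context the size
// of the original image, not the screen.
func renderFinal(image: UIImage, text: NSAttributedString, at fullFrame: CGRect) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
        text.draw(in: fullFrame) // fullFrame comes from the percentage conversion above
    }
}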

Glitching GPUImageAmatorkaFilter with images of certain dimensions

Has anyone seen issues with image sizes when using GPUImage's GPUImageAmatorkaFilter?
It seems to be related to multiples of 4: when the width and height aren't multiples of 4, the output glitches.
For example, if I try and filter an image with width and height 749, it glitches.
If I scale it to 752 or 744, it works.
The weird thing is, it also glitches at 748, which is a multiple of 4, but an odd multiple (187).
My initial workaround is to do some calculations to make the image smaller, but it's a rubbish solution; I'd obviously much prefer to be able to filter images of any size.
[Before / After screenshots]
GPUImageAmatorkaFilter uses GPUImageLookupFilter with lookup_amatorka.png as its lookup texture. This texture is organised as 8x8 quads of 64x64 pixels, representing all possible RGB colours. I tested GPUImageAmatorkaFilter with a 749x749px image and it works (first check that your code is up to date). I believe you are using a lookup texture of the wrong size; it should be 512x512px.
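If you want to verify which texture your build is actually using, a quick check (assuming the texture sits in your app bundle under its stock name, lookup_amatorka.png):

// The Amatorka lookup texture should be exactly 512x512 pixels.
if let lookup = UIImage(named: "lookup_amatorka"), let cg = lookup.cgImage {
    print("lookup texture: \(cg.width)x\(cg.height)") // expected: 512x512
}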

When iOS shrinks an image, does it clip/pixelate it?

I have 2 relatively small PNGs that will be images inside UIButtons.
Once our app is finished, we might want to resize the buttons and make them smaller.
Now, we can easily do this by resizing the button frame; the system automatically resizes the images to be smaller.
Would the system's autoresize make the image look ugly after shrinking? (i.e., would it clip pixels and make the image look less smooth than if I shrank it in a photo editor myself?)
Or would it be better to make the images the sizes they are intended to be?
It is always best to make the images the correct size from the beginning. Any resize function will have a negative impact on the end result. Scaling up to a larger image makes a big difference, but even scaling down to a smaller one usually introduces visible noise. Say you have a one-pixel line in your image and you scale it down to 90% of the original size: that line now covers only 0.9 of a pixel's width, so neighbouring parts of the image will bleed into the colours of the same pixels.
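If you do have to shrink at runtime, doing it once with a redraw at the exact target size (a sketch, assuming iOS 10+ for UIGraphicsImageRenderer) at least gives a single clean resample, instead of the view rescaling the full image every time:

// One-off downscale to the exact size the button will display.
func resized(_ image: UIImage, to size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}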

How to improve UIImage detail quality? (too blurry)

These UIImages are all a bit blurry when the detail is more complex.
Can anyone advise me on this one?
I have already tried using CGRectIntegral, and the images are always the same size as the UIImageView frame.
Conclusion:
You should always keep the real image and the image view at the same frame, i.e. the same pixel size (width and height).
You can use CGRectIntegral to fix some minor mismatches (for instance, odd placement of the images).
You should use the .png file type and keep the image at 72 dpi at least.
If you want to scale the image up to a bigger format, scale it from the image's vector source, or if that is not possible, scale it while keeping at least 72 dpi.
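Putting the conclusion together, a minimal sketch (assuming an existing imageView and a large source image named original, both hypothetical):

imageView.frame = imageView.frame.integral // fix odd, fractional placement
let renderer = UIGraphicsImageRenderer(bounds: imageView.bounds) // matches the screen scale
imageView.image = renderer.image { _ in
    original.draw(in: imageView.bounds) // redraw at the view's exact pixel size
}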
