These UIImages are all a bit blurry when the detail is more complex.
Can anyone advise me on this one?
I have already tried using CGRectIntegral, and the images are always the same size as the UIImageView frame.
Conclusion:
You should always keep the actual image and the image view at the same frame, i.e. the same pixel size (width and height); a sketch of this follows below.
You can use CGRectIntegral to correct minor mismatches (fixing odd placement of the images, for instance).
You should use the .png file type and keep the resolution at a minimum of 72 dpi.
If you want to scale the image up to a bigger format, scale it from the image's vector source, or, if that is not possible, scale it while keeping a minimum of 72 dpi.
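To illustrate the first point, here is a minimal Swift sketch (imageView and sourceImage are assumed placeholders) that redraws an image at exactly the view's size, so Core Animation has nothing left to rescale:
let renderer = UIGraphicsImageRenderer(bounds: imageView.bounds)
// UIGraphicsImageRenderer draws at the screen scale by default, so the
// result matches the view pixel-for-pixel (e.g. 100x100pt -> 200x200px on 2x).
let resized = renderer.image { _ in
    sourceImage.draw(in: CGRect(origin: .zero, size: imageView.bounds.size))
}
imageView.image = resized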
Related
I want to use PDF vector images in my app, but I don't fully understand how they work. I understand that a PDF file can be resized to any size and retain quality. I have a very large PDF image (a cartoon/sticker for a chat app) and it looks perfectly smooth at a medium size on screen. If I go smaller, though, say thumbnail size, the black outline starts to look jagged. Why does this happen? I thought the images could be resized without quality loss. Any help would be appreciated.
Thanks
I had a similar issue when programmatically changing the UIImageView's centre.
This can lead to pixel misalignment of your view, i.e. the x or y of the frame's origin (or the width or height of the frame's size) may lie on a non-integral value, such as x = 10.5, whereas it would display correctly at x = 10.
Rendering views positioned a fraction of the way into a full pixel results in jagged or blurred lines; I think it's related to anti-aliasing.
Therefore, wrap the frame's CGRect with CGRectIntegral() to round your frame's origin and size values to integers.
Example (Swift):
imageView?.frame = CGRectIntegral(CGRectMake(10, 10, 100, 100))
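// Note: in Swift 3 and later, CGRectMake and CGRectIntegral are unavailable;
// the equivalent is: imageView?.frame = CGRect(x: 10, y: 10, width: 100, height: 100).integral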
See the Apple documentation https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CGGeometry/#//apple_ref/c/func/CGRectIntegral
This is more of a conceptual question; everything is working fine.
I have an image view, and I download its images from the web server. Our web server keeps the biggest size of each image, and I then rescale the images down for the device as required. So let's say I have a UIImageView sized 200x200 and I am downloading a 400x400 image: I rescale the image to 200x200 and then put it in the image view. I also tried putting the 400x400 image directly into the 200x200 image view and it looks fine to me (no pixelation). The way I implemented the downscaling is
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
within the image context. Now I suspect Apple is doing this anyway, since it rescales my image to fit the image view, so is the manual rescaling really required? Or should I just put the high-resolution images directly into the image view?
Suggestions required.
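For reference, the downscaling approach described in the question looks roughly like this in Swift (a sketch; image and newSize are assumed from the question):
UIGraphicsBeginImageContextWithOptions(newSize, false, 0)  // scale 0 = use screen scale
image.draw(in: CGRect(origin: .zero, size: newSize))
let scaled = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()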
You should be fine just assigning a 400x400 UIImage to a 200x200 UIImageView; Core Animation will handle the image scaling underneath.
Image Quality
If you want to experiment with different image scaling qualities, you can set the minificationFilter on the UIImageView's layer. The default is kCAFilterLinear, which I think should be fine for your usage: multiple pixels from the 400x400 image are selected and linearly blended together to produce each pixel of the 200x200 result. kCAFilterNearest gets you better performance at the cost of image quality (a single pixel from the 400x400 image is selected to provide the color of each 200x200 pixel).
You could experiment with kCAFilterTrilinear instead, which should get you better image quality at the cost of some performance. The documentation doesn't make it clear on which devices this actually has an effect, although this guy had success using it on an iPad 2, which makes me think it may be supported on all devices now. The documentation also notes that your image's dimensions may need to be a power of 2 for this to take effect.
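Trying a different filter is a one-line change; for example (a sketch using the modern Swift constant names, assuming an existing imageView):
// Default is .linear; .nearest is fastest, .trilinear may look best when minifying.
imageView.layer.minificationFilter = .trilinear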
Performance
You could scale the image down to 200x200 yourself as a performance optimization, to save memory and Core Animation render time (including the image scaling), but I wouldn't do that unless you have reason to think your app's performance would actually benefit from it.
Has anyone seen issues with image sizes when using GPUImage's GPUImageAmatorkaFilter?
It seems to be related to multiples of 4 - when the width and height aren't multiples of 4, it glitches the output.
For example, if I try and filter an image with width and height 749, it glitches.
If I scale it to 752 or 744, it works.
The weird thing is, it also glitches at 748, which is a multiple of 4, but an odd multiple (187).
The initial workaround is to do some calculations to make the image smaller, but it's a rubbish solution; I'd obviously much prefer to be able to filter any size.
Before and after screenshots: [images omitted]
GPUImageAmatorkaFilter uses GPUImageLookupFilter with lookup_amatorka.png as the lookup texture. This texture is organised as an 8x8 grid of 64x64-pixel quads representing all possible RGB colors. I tested GPUImageAmatorkaFilter with a 749x749px image and it works (first, check that your code is up to date). I believe you are using a lookup texture of the wrong size; it should be 512x512px.
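For reference, a typical GPUImage still-image pipeline looks like this in Swift (a sketch based on GPUImage's standard API; inputImage is assumed):
let source = GPUImagePicture(image: inputImage)
let filter = GPUImageAmatorkaFilter()
source.addTarget(filter)
filter.useNextFrameForImageCapture()
source.processImage()
let filtered = filter.imageFromCurrentFramebuffer()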
I have two relatively small PNGs that will be the images inside UIButtons.
Once our app is finished, we might want to resize the buttons and make them smaller.
Now, we can easily do this by resizing the button frame; the system automatically resizes the images to be smaller.
Would the system's autoresize cause the image to look ugly after shrinking the image? (i.e., would it clip pixels and make it look less smooth than if I were to shrink it in a photo editor myself?)
Or would it be better to make the images the sizes they are intended to be?
It is always best to create the images at the correct size from the beginning. Any resize operation has a negative impact on the end result. Scaling up to a larger image makes a big difference, but even scaling down to a smaller one usually introduces visible noise. Say you have a one-pixel-wide line in your image: scale the image down to 90% of its original size and that line will cover only 90% of a pixel's width, so neighbouring parts of the image will influence the colors of the same pixels.
I have a series of images that I would like to loop through using iOS's [UIImageView startAnimating]. My trouble is that when I exported the images, they all came out at a standard 240x160 size, although only a 50x50 region contains the actual image, the rest being transparent parts that just take up space.
When I set the frame of the image view automatically using image.size.width and image.size.height, iOS uses the images' original 240x160 size, so I am unable to get a frame that conforms to the visible part of the image. I was wondering if there is a way, using Illustrator, Photoshop, or any other graphics editing software, to export the images based on their natural dimensions rather than a fixed canvas. Thanks!
I am a fan of vector graphics and think everything in the world should be vector ;-) so here is what you do in Illustrator: File > Document Setup > Edit Artboards. Then click on the image, and the artboard should adjust to its exact size. You can of course have multiple artboards, or simply operate with one artboard and however many images.
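If you would rather handle it in code instead of re-exporting the assets, trimming the transparent padding programmatically is also possible. Below is a rough Swift sketch (the function and its alpha-scanning approach are my own illustration, not a standard API) that crops a UIImage to the bounding box of its non-transparent pixels:
import UIKit

// A rough sketch: trims a UIImage to the bounding box of its non-transparent
// pixels. Assumes a bitmap-backed image; orientation metadata is preserved
// but not re-applied during the scan.
func trimmedToVisiblePixels(_ image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height

    // Redraw into a known RGBA layout so the alpha byte is easy to locate.
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    guard let context = CGContext(data: &pixels,
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    // Find the smallest rect containing every pixel with non-zero alpha.
    var minX = width, minY = height, maxX = -1, maxY = -1
    for y in 0..<height {
        for x in 0..<width where pixels[(y * width + x) * 4 + 3] > 0 {
            minX = min(minX, x); maxX = max(maxX, x)
            minY = min(minY, y); maxY = max(maxY, y)
        }
    }
    guard maxX >= minX, maxY >= minY else { return nil } // fully transparent

    let box = CGRect(x: minX, y: minY,
                     width: maxX - minX + 1, height: maxY - minY + 1)
    return cgImage.cropping(to: box).map {
        UIImage(cgImage: $0, scale: image.scale, orientation: image.imageOrientation)
    }
}
The trimmed image's size can then be used directly when setting the frame, instead of the padded 240x160 canvas.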