Bigger size image in a smaller image view - iOS

This is more of a conceptual question; everything is working fine.
I have a UIImageView, and for it I download images from our web server. The server keeps the largest size of each image, and I rescale the images down as required for each device. So let's say I have a UIImageView of size 200 x 200 and I download an image that is 400 x 400: I rescale the image to 200 x 200 and then put it in the image view. I also tried putting the 400 x 400 image directly into the 200 x 200 image view, and it looks fine to me (no pixelation). The way I implemented the downscaling is
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
within an image context. Now I suspect Apple might be doing this anyway, because it rescales my image to fit the image view, so is the downscaling really required? Or should I just put the high-resolution images directly into the image view?
Suggestions required.
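For reference, a minimal sketch of that kind of drawInRect downscaling inside an image context; the method name and the 0.0 screen-scale argument here are just illustrative:
#import <UIKit/UIKit.h>

// Draw the large image into a smaller bitmap context and return the result.
- (UIImage *)imageByScalingImage:(UIImage *)image toSize:(CGSize)newSize {
    // A scale of 0.0 makes the context match the device's screen scale.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}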

You should be fine just assigning a 400x400 UIImage to a 200x200 UIImageView. Core Animation will deal with the image scaling underneath.
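In code that is just a plain assignment; the property and asset names here are illustrative:
// Assign the full-size 400x400 image; Core Animation scales it down to the
// 200x200 image view when the layer is composited on screen.
self.imageView.image = [UIImage imageNamed:@"photo_400"];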
Image Quality
If you want to experiment with different image scaling qualities, you can set the minificationFilter on the UIImageView's layer. The default is kCAFilterLinear, which I think would be fine for your usage. Multiple pixels from the 400x400 image will be selected and linearly blended together to get the 200x200 image pixel color. kCAFilterNearest will get you better performance at the cost of image quality (a single pixel from the 400x400 image is selected to get the color for the 200x200 image pixel).
You could experiment with kCAFilterTrilinear instead, which should get you better image quality at the cost of some performance. The documentation doesn't make it clear on which devices this actually has an effect, although this guy has had success using it on an iPad 2, which makes me think it may be supported on all devices now. The documentation also notes that your image may need to have power-of-two dimensions for this to take effect.
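A minimal sketch of switching the filter; the imageView property name is an illustrative assumption:
#import <QuartzCore/QuartzCore.h>

// Use a higher-quality (but more expensive) minification filter when a larger
// image is displayed in a smaller image view.
self.imageView.layer.minificationFilter = kCAFilterTrilinear;
// Alternatives: kCAFilterLinear (the default) or kCAFilterNearest (fastest).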
Performance
You could perhaps scale the image down to 200x200 as a performance optimization, to save memory and Core Animation render time (including the image scaling), but I wouldn't do that unless you have reason to think your app's performance would actually benefit from it.

Related

When iOS shrinks an image, does it clip/pixelate it?

I have two relatively small PNGs that will be used as images inside UIButtons.
Once our app is finished, we might want to resize the buttons and make them smaller.
Now, we can easily do this by resizing the button frame; the system automatically re-sizes the images smaller.
Would the system's autoresize cause the image to look ugly after shrinking the image? (i.e., would it clip pixels and make it look less smooth than if I were to shrink it in a photo editor myself?)
Or would it be better to make the images the sizes they are intended to be?
It is always best to make the images the correct size from the beginning. All resize functions have a negative impact on the end result. Scaling up to a larger image makes a big difference, but even scaling down to a smaller one usually introduces visible noise in the image. Let's say you have a one-pixel-wide line in your image: scale it down to 90% of the original size, and that line will now be only 0.9 pixels wide, so other parts of the image will influence the colors of those same pixels.

How to get rid of empty transparent areas in a PNG image so that it conforms to actual image size?

I have a series of images that I would like to loop through using iOS's [UIImageView startAnimating]. My trouble is that, when I exported the images, they all came out at a standard 240x160 size, although only a 50x50 area contains the actual image; the rest is transparent space that is just taking up room.
When I set the frame of the image automatically using image.size.width and image.size.height, iOS uses the images' original size of 240x160, so I am unable to get a frame that conforms to the actual visible part of the image. I was wondering whether there is a way, using Illustrator, Photoshop, or any other graphics editing software, to export the images based on their natural dimensions rather than a fixed dimension. Thanks!
I am a fan of vector graphics and think everything in the world should be vector ;-) so here is what you do in Illustrator: File - Document Setup - Edit Artboards. Then click on the image, and the artboard should adjust to its exact size. You can of course have multiple artboards, or simply operate with one artboard and however many images.

How to improve UIImage detail quality? (too blurry)

These UIImages are all a bit blurry when the detail is more complex.
Can anyone advise me on this one?
I have already tried using CGRectIntegral, and the images are always the same size as the UIImageView frame.
Conclusion:
You should always try to keep the same frame for the real image and the image view: the same pixel size (width and height).
You can use CGRectIntegral to sort out minor mismatches (for instance, fixing images placed at odd, fractional coordinates); see the sketch after this list.
You should use the .png file type and keep the DPI at 72 at least.
If you want to scale the image up to a bigger format, you should scale it from the image's vector source, or, if that is not possible, scale it and keep a minimum of 72 DPI.
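A minimal sketch of the CGRectIntegral point above, assuming an imageView property; the frame values are illustrative:
// Snap the image view's frame to integral coordinates so the image is not
// drawn at fractional positions, which adds extra blur.
CGRect proposedFrame = CGRectMake(10.5, 20.25, image.size.width, image.size.height);
self.imageView.frame = CGRectIntegral(proposedFrame);
self.imageView.image = image;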

Blurry images when rendering to PDF using UIKit/CoreGraphics

Everything seems pretty standard: I downloaded the PDF GENERATION SAMPLE, used my own assets at normal resolutions, and my images look a little off.
Here's the asset
Here's what it looks like in app
And this is what it looks like in the PDF at 100% zoom
The code in the drawImage function is as simple as it gets
UIImage *demoImage = [UIImage imageNamed:@"icon_map_project.png"];
[demoImage drawInRect:CGRectMake((pageSize.width - demoImage.size.width) / 2,
                                 350,
                                 demoImage.size.width,
                                 demoImage.size.height)];
Nothing fancy at all. I do admit that the details of how PDFs, DPI, and things like that work are beyond me at this point.
I've looked at LibHaru and think it's a great system but I'd rather keep this within the confines of UIKit/CoreGraphics.
You'll notice a strange jaggedness on the right side; even shrinking the image down by 50% doesn't seem to help.
Here's a zoomed-in comparison using Digital Color Meter, with the PDF at 100% first and then the app:
As you can see, the image simply does not render correctly into the PDF, and I'm struggling to find a solution for this.
Thanks for any advice.
You draw the image into the PDF in a rectangle that matches the image's pixel size. This results in 72 dpi for the image. Because the viewer application uses 96 dpi or a higher value as the reference for 100% zoom, when the file is displayed at 100% your image will be rendered at a 100% * 96/72 scale. If you enlarge the bitmap by that factor with an imaging tool, you'll see similar jaggedness. The solution is to use a larger image drawn into the same 37x36pt rectangle, so that the resulting image dpi is higher.
If you zoom your PDF file to 75%, the image size displayed on the page should match the image size in your application (this assumes a 96 dpi screen).
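A minimal sketch of that fix, reusing the pageSize variable from the question's code; the larger asset's name and its pixel size are illustrative assumptions:
// Draw a bitmap with more pixels (e.g. 74x72 px) into the same 37x36 pt
// rectangle; the PDF keeps all of the pixels, so the effective resolution at
// that spot is roughly 144 dpi instead of 72 dpi.
UIImage *largeImage = [UIImage imageNamed:@"icon_map_project_large.png"]; // hypothetical higher-resolution asset
[largeImage drawInRect:CGRectMake((pageSize.width - 37.0) / 2.0, 350.0, 37.0, 36.0)];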

Does UIImage Cache the File or the Rendered Bitmap?

I generally use Fireworks PNGs (with different layers, some hidden, etc.) in my iOS projects (loaded into NIBs in UIImageView instances). Often, I take the PNG and resave it as PNG-32 to make the file smaller, but I'm questioning this now (because then I have to store the Fireworks PNG separately)....
In my cursory tests, a smaller file size does NOT affect the resultant memory use. Is there a relationship, or is it the final rendered bitmap that matters?
Note: I'm not asking about using an image that is too big in pixels. A valid comparison would be a high-quality JPEG that weighs 1 MB vs. a low-quality JPEG of the same content that weighs 100 KB. Is the memory use the same?
UIImageView does not do any processing, so if you set a large image, the whole thing is loaded into memory when the image view needs it, regardless of the size of the image view. So, yes, it does make a difference. You should store the smallest images that work within the image view.
While your current example uses NIBs, if you were creating an app that displays large images acquired from other sources (e.g. the device camera or an external service), then you would scale those down to a display size before using them in a UIImageView.
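To make the memory point concrete, here is a rough sketch of estimating the decoded bitmap footprint of a UIImage; it assumes 4 bytes per pixel (typical 32-bit RGBA), and the asset name is illustrative:
UIImage *image = [UIImage imageNamed:@"photo"]; // hypothetical asset
// Memory cost is driven by the decoded pixel dimensions, not the compressed
// file size: roughly width * height * 4 bytes.
size_t pixelWidth = (size_t)(image.size.width * image.scale);
size_t pixelHeight = (size_t)(image.size.height * image.scale);
size_t approxBytes = pixelWidth * pixelHeight * 4;
NSLog(@"Decoded bitmap is roughly %.1f MB", approxBytes / (1024.0 * 1024.0));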

Resources