UIImageView content mode and scale factor - ios

I have a programmatically created UIImage, using this kind of code:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(128, 128), NO, 0.0f);
// Render in context
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Since my context options specify a scale of 0, the scale is set to 2 on retina devices, which I can confirm from the resulting UIImage's scale property.
Now, the problem is that this image's size is 128x128 at scale 2. When I put it into a UIImageView of size 64x64 with contentMode = Center, the image is rendered outside the image view, presumably into a 128x128 box without any scaling.
My understanding of retina graphics was that if an image has a scale factor of 2.0, then it should be rendered at 1/2 size, resulting in higher DPI.
So I was expecting the image view to render a 64x64 image at retina quality. Where am I wrong?

The image will be rendered at the size you give it: 128 x 128 points. The scale factor means that curves and text will be rendered more finely, but the image will still be 128 x 128 points. As stated in the documentation, the size parameter is:
The size (measured in points) of the new bitmap context. This represents the size of the image returned by the UIGraphicsGetImageFromCurrentImageContext function.
If you want a retina-quality 64x64 image, use a 64x64 size for your context.
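The point/scale/pixel bookkeeping described above can be sketched in a few lines of plain Python (used here only to illustrate the arithmetic; it is not iOS API code):

```python
# In a UIKit bitmap context, backing-store pixels = points * scale.
def pixel_size(points, scale):
    """Pixel dimension of a context `points` points wide at the given scale."""
    return points * scale

# The questioner's context: 128 points at scale 2 -> a 256-pixel backing
# store, but the resulting UIImage still measures 128 points on screen.
assert pixel_size(128, 2) == 256

# What they actually wanted: a 64-point context, which at scale 2 stores
# 128 pixels, i.e. a retina-quality 64x64 image.
assert pixel_size(64, 2) == 128
```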

Related

Why the property Size of UIImage is half of the real picture size?

I have a picture named pic that is 268x381 pixels.
I have defined
UIImage* tempImg = [UIImage imageNamed:@"pic"];
But when I print
NSStringFromCGSize(tempImg.size)
it shows {134, 190.5}
I can't understand the principle behind this, and I would be very grateful for an explanation!
From the docs for UIImage size:
In iOS 4.0 and later, this value reflects the logical size of the image and is measured in points
Remember that pixels equal points times scale; equivalently, points equal pixels divided by scale.
Your 268x381 size is in pixels. The output of NSStringFromCGSize(tempImg.size) is in points. A result of {134, 190.5} means this is an image with a scale of 2. Most likely you have this image as pic@2x.png.
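The relationship is easy to check numerically; here is a quick Python sketch of the arithmetic (not iOS code):

```python
def points_from_pixels(pixels, scale):
    """UIImage.size reports points: pixels divided by scale."""
    return pixels / scale

# A 268x381-pixel image loaded as a @2x resource reports {134, 190.5}.
assert points_from_pixels(268, 2) == 134
assert points_from_pixels(381, 2) == 190.5
```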

What size does a CGPDFContext have to be?

I have to write a CGPDFContext. This PDF contains just one image.
The image is 2100 x 3000 pixels.
Suppose I open this image in Photoshop and print it at 300 dpi. Photoshop will use 300 pixels to build each inch of printed material, so this image will print at 7 x 10 inches.
Now I have to create a CGPDFContext. I read somewhere that this context has to be created in points and there is a mention that on a CGPDFContext one point = 1/72 inches, meaning that the context will have 72 dpi (?).
So, what size should I create this context at to have maximum quality at 300 dpi?
Another question: supposing this context is created based on the 1/72 convention, then 7 x 10 inches will represent 504 x 720 pt. If this is the size I have to create the context with, what happens when I write the image to the context using this?
CGImageRef imageRef = image.CGImage; // this image is 2100x3000 pixels
// mediaBox = 0,0,504,720
CGContextRef pdfContext = CGPDFContextCreate(dataConsumer, &mediaBox, auxillaryInformation);
CGContextDrawImage(pdfContext, CGRectMake(0.0f, 0.0f, 504, 720), imageRef);
will the 2100x3000-pixel image be embedded in the context without losing pixels? I don't want the image to be reduced to 504x720.
If your image is 2100*3000 pixels and you draw it on a PDF page that is 7*10 inches (504*720 points) then your image will be embedded at 300 dpi.
The image will be kept at 2100x3000 pixels; it will not be downscaled to 504x720 pixels.
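The dpi arithmetic behind this answer, sketched in plain Python for illustration (not iOS or Quartz code):

```python
POINTS_PER_INCH = 72  # PDF user-space convention: 1 point = 1/72 inch

def inches_from_points(pts):
    return pts / POINTS_PER_INCH

def effective_dpi(pixels, inches):
    """Resolution of an image embedded across a given physical span."""
    return pixels / inches

# A 504x720-point media box is 7x10 inches...
assert inches_from_points(504) == 7
assert inches_from_points(720) == 10
# ...so a 2100x3000-pixel image drawn to fill it is embedded at 300 dpi.
assert effective_dpi(2100, 7) == 300
assert effective_dpi(3000, 10) == 300
```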
Both other answers are technically correct, but let me try to answer explicitly on both of your questions:
Now I have to create a CGPDFContext. I read somewhere that this context has to be created in points and there is a mention that on a CGPDFContext one point = 1/72 inches, meaning that the context will have 72 dpi (?).
The parameter you have to pass is the location and size of the mediabox of the PDF you want to create. The mediabox is the canvas you have at your disposal to put stuff on.
PDF uses a space where 1 point equals 1/72 inch, but you shouldn't think of this as dpi in the traditional "image" way. This is simply a convention so that you know that specifying a mediabox with a size of 72 x 72 points will give you a PDF file that is 1 inch high and wide in the real world.
So, what size should I create this context to have maximum quality at 300 dpi.
Your code is correct :-)
The image you place in this context will always be embedded at full size (meaning, Apple will not throw away pixels behind your back). That means that the final resolution of your image is determined by the number of pixels in your image and the size of the rectangle (again, in points) where you choose to put it on the page.
So you're fine in this simple example. However (at the risk of hurting your brain), keep in mind that PDF contexts, just like any other contexts, allow you to transform them. You could rotate or shear the current transformation matrix of your PDF context. In that case your image would still have all its pixels, but its effective resolution would depend on what evil thing you did to the transformation matrix.
The resolution is independent of the media size, in PDF. The media size is given in 1/72 inch, so 2100x3000 'units' (aka points) is a media size of 29.166x41.666 inches. PDF files do not have a resolution.
In general the content of a PDF is vector information and so is resolution independent. Bitmaps may be drawn in a PDF, and those do have a resolution, but it's not 72 dpi. The resolution of an image depends on the number of image samples in each dimension and the scale factor applied to put it on the medium.
Consider an image which is 300x300 image samples. If we place that onto a PDF which is 72x72 (ie 1 inch square), and scale it to fit exactly, then the image is, effectively, 300 dpi internally.
In PDF terms I can take the same image, make a PDF page which is 144x144 (2x2 inches), and scale the image to fit that. Now the image is 150 dpi. The image hasn't changed, but the scale factor has.
Now the final 'resolution' of any images in your PDF file, when rendered, will depend on the number of samples, the scale factor (as above), and the resolution you render the PDF file at. Taking the two cases above: if I render at 300 dpi, the image won't change at all, but the first case will map the original image samples 1:1 onto the final output pixels. The second image, however, will map each image sample into 4 pixels in the output (because it's been scaled by 2 in each direction).
If you render your PDF file (2100x3000 points) in Photoshop at 300 dpi, then Photoshop will create a bitmap which is 8750x12500 pixels. It will still be 29.16x41.66 inches, at 300 dots per inch. If you render it at 600 dpi, you will get 17500x25000 pixels, and so on.
It sounds like the context is created in the default PDF space of 1/72 inch, so you just need to use the media size from the PDF file, i.e. 2100x3000.
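The rendering numbers in this answer can be checked with the same 1/72 arithmetic (plain Python, illustrative only):

```python
POINTS_PER_INCH = 72  # PDF user space: 1 point = 1/72 inch

def rendered_pixels(media_points, render_dpi):
    """Pixels produced when a span of `media_points` points is rasterized
    at `render_dpi` dots per inch."""
    return media_points * render_dpi / POINTS_PER_INCH

# A 2100x3000-point page rendered at 300 dpi:
assert rendered_pixels(2100, 300) == 8750
assert rendered_pixels(3000, 300) == 12500
# ...and at 600 dpi:
assert rendered_pixels(2100, 600) == 17500
assert rendered_pixels(3000, 600) == 25000
```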

iOS7 UIImage drawAtPoint not retina

I am trying to draw an image with the following code:
[img drawAtPoint:CGPointZero];
but the problem is that on an iPhone with a retina display the image doesn't get drawn at retina scale. It seems like the image gets upscaled and then drawn.
I don't want to use drawInRect because the image is already the right size, and drawInRect is much slower.
Any ideas?
You probably are not setting the appropriate scale factor. When you create the bitmap context one of the arguments is the scale:
void UIGraphicsBeginImageContextWithOptions(
CGSize size,
BOOL opaque,
CGFloat scale
);
According to the official documentation scale is:
The scale factor to apply to the bitmap. If you specify a value of
0.0, the scale factor is set to the scale factor of the device’s main screen.
You're probably passing 1.0f which will result in the issue you've described. Try passing 0.0f.

Drawing retina versus non-retina images

UIImage 1: Loaded from a file with the @2x modifier, with pixel size 400x400; thus UIImage 1 will report its size as 200x200.
UIImage 2: Loaded from a file without the @2x modifier, with pixel size 400x400; thus UIImage 2 will report its size as 400x400.
I then create 2 images from the above applying the code below to each
UIGraphicsBeginImageContextWithOptions(CGSizeMake(400,400), YES, 1.0);
[image drawInRect:CGRectMake(0, 0, 400, 400)];
UIImage *rescaledI = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Considering the above, can I expect the image quality for both resulting images to be exactly the same? (I am trying to determine if drawing a 200x200 retina image to a 400x400 non-retina context will degrade quality versus drawing the same image not loaded as a retina image)
Just ask the current image for its size.
UIImage *image1 = [UIImage imageNamed:@"myimage.png"];
//access width and height like this
image1.size.width;
image1.size.height;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(image1.size.width, image1.size.height), YES, 1.0);
[image drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
UIImage *rescaledI = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Of course, you should replace image1 with whatever image you are trying to get the size of. A switch statement or if statement should do the trick for you.
Never hardcode sizes, dimensions, locations, etc. Always pull that information dynamically by asking your image for its size. Then you can change the size of your image without having to hunt down hardcoded values in your application.
The image is always 400x400 pixels; the difference is that on a retina display those 400x400 pixels cover less space, exactly half in each dimension (200x200 Core Graphics points). If you do not apply any transformation, the image stays exactly the same.
The code you wrote renders the image as-is because you are overriding the device scale factor and forcing it to 1 (1 pixel to 1 point).
If you want your image to cover the same amount of screen on both retina and non-retina devices, you should use two images, one twice as big as the other.
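The pixel bookkeeping for the two cases in the question can be sketched like this (plain Python, just to make the counting explicit):

```python
def source_pixels(reported_points, image_scale):
    """Pixels of actual data behind a UIImage reporting `reported_points` points."""
    return reported_points * image_scale

def dest_pixels(rect_points, context_scale):
    """Pixels covered by a draw rect in a context of the given scale."""
    return rect_points * context_scale

# Image 1: reports 200x200 points at scale 2 -> 400 px of data.
# Image 2: reports 400x400 points at scale 1 -> 400 px of data.
# Both are drawn into a 400-point rect in a scale-1 context (400 px),
# so each source pixel maps 1:1 and neither image loses quality.
assert source_pixels(200, 2) == dest_pixels(400, 1)
assert source_pixels(400, 1) == dest_pixels(400, 1)
```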

renderInContext: producing an image with blurry text

I am prerendering a composited image with a couple different UIImageViews and UILabels to speed up scrolling in a large tableview. Unfortunately, the main UILabel is looking a little blurry compared to other UILabels on the same view.
The black letters "PLoS ONE" are in a UILabel, and they look much blurrier than the words "Medical" or "Medicine". The logo "PLoS one" is probably similarly being blurred, but it's not as noticeable as the crisp text.
The entire magazine cover is a single UIImage assigned to a UIButton.
This is the code I'm using to draw the image. The magazineView is a rectangle that's 125 x 151 pixels.
I have tried different scaling qualities, but that has not changed anything. And it shouldn't, since the scaling shouldn't be different at all. The UIButton I'm assigning this image to is the exact same size as the magazineView.
UIGraphicsBeginImageContextWithOptions(magazineView.bounds.size, NO, 0.0);
[magazineView.layer renderInContext:UIGraphicsGetCurrentContext()];
[coverImage release];
coverImage = UIGraphicsGetImageFromCurrentImageContext();
[coverImage retain];
UIGraphicsEndImageContext();
Any ideas why it's blurry?
When I begin an image context and render into it right away, is the rendering happening on an even pixel, or do I need to manually set where that render is occurring?
Make sure that your label coordinates are integer values. If they are not whole numbers, the text will appear blurry.
I think you need to use CGRectIntegral. For more information, please see "What is the usage of CGRectIntegral?" and the CGRectIntegral reference.
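To make the rounding concrete, here is a hypothetical Python sketch of what CGRectIntegral does, floor the origin and ceil the far edges so the result contains the original rect (this is an illustration of the documented behavior, not the Core Graphics source):

```python
import math

def rect_integral(x, y, w, h):
    """Smallest integer-coordinate rect containing (x, y, w, h),
    mirroring the documented behavior of CGRectIntegral."""
    ix, iy = math.floor(x), math.floor(y)
    return (ix, iy, math.ceil(x + w) - ix, math.ceil(y + h) - iy)

# A label frame at a fractional origin snaps outward to whole pixels:
assert rect_integral(10.3, 20.7, 100.0, 30.0) == (10, 20, 101, 31)
# Already-integral rects are unchanged:
assert rect_integral(5, 6, 50, 20) == (5, 6, 50, 20)
```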
I came across the same problem today, where my content got pixelated when producing an image from UILabel text.
We use UIGraphicsBeginImageContextWithOptions() to configure the drawing environment for rendering into a bitmap which accepts three parameters:
size: The size of the new bitmap context. This represents the size of the image returned by the UIGraphicsGetImageFromCurrentImageContext function.
opaque: A Boolean flag indicating whether the bitmap is opaque. If the opaque parameter is YES, the alpha channel is ignored and the bitmap is treated as fully opaque.
scale: The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
So we should use a proper scale factor with respect to the device display (1x, 2x, 3x) to fix this issue.
Swift 5 version:
UIGraphicsBeginImageContextWithOptions(frame.size, true, UIScreen.main.scale)
defer { UIGraphicsEndImageContext() }
if let currentContext = UIGraphicsGetCurrentContext() {
    nameLabel.layer.render(in: currentContext)
    let nameImage = UIGraphicsGetImageFromCurrentImageContext()
    return nameImage
}
