I have to create a CGPDFContext. The PDF will contain just one image.
The image is 2100 x 3000 pixels.
Suppose I open this image in Photoshop and print it at 300 dpi. Photoshop will use 300 pixels to build every inch of printed material, so this image will print at 7 x 10 inches. That is where those dimensions come from.
Now I have to create the CGPDFContext. I read somewhere that this context has to be created in points, and there is a mention that in a CGPDFContext one point = 1/72 inch, meaning that the context will have 72 dpi (?).
So, what size should I give this context to get maximum quality at 300 dpi?
Another question: supposing the context is created based on the 1/72 convention, then 7 x 10 inches will be 504 x 720 points. If this is the size I have to create the context with, what happens when I write the image to the context using this?
CGImageRef imageRef = image.CGImage; // this image is 2100x3000 pixels

// mediaBox = 0,0,504,720 (7 x 10 inches at 72 points per inch)
CGContextRef pdfContext = CGPDFContextCreate(dataConsumer, &mediaBox, auxiliaryInformation);
CGContextDrawImage(pdfContext, CGRectMake(0.0f, 0.0f, 504.0f, 720.0f), imageRef);
Will the 2100x3000-pixel image be embedded in the context without losing pixels? I don't want the image to be downsampled to 504x720.
If your image is 2100x3000 pixels and you draw it on a PDF page that is 7x10 inches (504x720 points), then your image will be embedded at 300 dpi.
The image will be kept at 2100x3000 pixels; it will not be downscaled to 504x720.
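The arithmetic behind that claim, as a plain sketch (Python here just for illustration; nothing below is Quartz API):

```python
# One PDF point is 1/72 inch, so the effective dpi of an embedded
# image is its pixel count divided by the rect size in inches.
def effective_dpi(pixels, points):
    inches = points / 72.0
    return pixels / inches

print(effective_dpi(2100, 504))  # 300.0 horizontally
print(effective_dpi(3000, 720))  # 300.0 vertically
```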
Both of the other answers are technically correct, but let me try to answer both of your questions explicitly:
Now I have to create a CGPDFContext. I read somewhere that this context has to be created in points and there is a mention that on a CGPDFContext one point = 1/72 inches, meaning that the context will have 72 dpi (?)
The parameter you have to pass is the location and size of the mediabox of the PDF you want to create. The mediabox is the canvas you have at your disposal to put stuff on.
PDF uses a space where 1 point equals 1/72 inch, but you shouldn't think of this as dpi in the traditional "image" way. This is simply a convention so that you know that specifying a mediabox with a size of 72 x 72 points will give you a PDF file that is 1 inch high and wide in the real world.
So, what size should I create this context to have maximum quality at 300 dpi.
Your code is correct :-)
The image you place on this context will always be inserted at full size (meaning, Apple will not throw away pixels behind your back). That means that the final resolution of your image is determined by the number of pixels in your image and the size of the rectangle (again, in points) where you choose to put it on the page.
So you're fine in this simple example. However (at the risk of hurting your brain), keep in mind that PDF contexts - just like any other contexts - allow you to transform them. You could rotate or shear the current transformation matrix of your PDF context. In that case your image would still have all its pixels, but its effective resolution would depend on what evil thing you did to the transformation matrix.
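To illustrate that CTM caveat with plain arithmetic (a Python sketch, not Quartz code): scaling the CTM by 2 before drawing into a 504-point-wide rect makes the image cover 1008 points (14 inches), halving its effective resolution.

```python
# Effective dpi of an image drawn into a rect, given a uniform
# scale already applied to the context's transformation matrix.
def effective_dpi(pixels, points, ctm_scale=1.0):
    inches = (points * ctm_scale) / 72.0
    return pixels / inches

print(effective_dpi(2100, 504))       # 300.0 with an identity CTM
print(effective_dpi(2100, 504, 2.0))  # 150.0 after scaling the CTM by 2
```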
In PDF, resolution is independent of the media size. The media size is given in units of 1/72 inch, so 2100x3000 'units' (aka points) is a media size of 29.166x41.666 inches. PDF files do not have a resolution.
In general the content of a PDF is vector information and so is resolution independent. Bitmaps may be drawn in a PDF, and those do have a resolution, but it's not 72 dpi. The resolution of an image depends on the number of image samples in each dimension and the scale factor applied to put it on the medium.
Consider an image which is 300x300 image samples. If we place that onto a PDF which is 72x72 (ie 1 inch square), and scale it to fit exactly, then the image is, effectively, 300 dpi internally.
In PDF terms I can take the same image, make a PDF page which is 144x144 (2x2 inches), and scale the image to fit that. Now the image is 150 dpi. The image hasn't changed, but the scale factor has.
Now the final 'resolution' of any images in your PDF file, when rendered, will depend on the number of samples and scale factor (as above) and the resolution you render the PDF file at. Taking the two cases above, if I render at 300 dpi, the image won't change at all, but in the first case it will map the original image samples 1:1 onto the final output pixels. The second image, however, will map each image sample into 4 pixels in the output (because it's been scaled by 2 in each direction).
If you render your PDF file (2100x3000 points) in Photoshop at 300 dpi then Photoshop will create a bitmap which is 8750x12500 pixels. It will still be 29.16x41.66 inches, at 300 dots per inch. If you render it at 600 dpi, then you will get 17500x25000 pixels, and so on.
It sounds like the context is created in the default PDF space of 1/72 inch, so you just need to use the media size from the PDF file, i.e. 2100x3000.
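The rendering arithmetic described above, sketched in Python:

```python
# A PDF point is 1/72 inch: convert points to inches, then
# multiply by the rendering resolution to get output pixels.
def rendered_pixels(points, render_dpi):
    return round(points / 72.0 * render_dpi)

print(rendered_pixels(2100, 300), rendered_pixels(3000, 300))  # 8750 12500
print(rendered_pixels(2100, 600), rendered_pixels(3000, 600))  # 17500 25000
```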
I am using OpenCV in Python to rotate an image, and the original and resulting images differ in several ways. I am doing my transformation through this part of code:
import cv2

img = cv2.imread("image.tif")
rows, cols = img.shape[:2]  # warpAffine needs the image dimensions
rotation_matrix = cv2.getRotationMatrix2D((cols / 2, rows / 2), correction_angle, 1)
dst = cv2.warpAffine(img, rotation_matrix, (cols, rows))
cv2.imwrite("Rotated_image.tif", dst)
The original image is 1.7 MB, its resolution is 300 dpi, and the color space is YCbCr.
The issue is that the resulting image is 12.5 MB, at 96 dpi, in the RGB color space, and with LZW compression!
My question is: can I keep the main properties of the original image? And why does rotating an image change the file size this way?
Note: The bit depth is 24 in both images.
Calling cv2.imread with only the name of the file uses the default value cv.IMREAD_COLOR for the second parameter, about which the documentation says:
If set, always convert image to the 3 channel BGR color image.
So, your image is always converted to a 3-channel BGR image, which is why the output ends up in an RGB color space. You can try using cv2.IMREAD_ANYCOLOR for the second parameter of imread, but I don't think you can use cv2.warpAffine on it trivially then.
The difference in the stored DPI information stems from the fact that you write the new image without any meta data. imwrite allows you to specify parameters (see here), but, unfortunately, they are not very well documented. I am not sure if this kind of data can be written out of the box with OpenCV.
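If preserving the DPI tag is the main concern, one possible workaround (a sketch, assuming Pillow is installed; OpenCV itself may or may not expose this depending on version) is to re-save the result with explicit metadata:

```python
import io
from PIL import Image

# Illustrative round-trip: write a TIFF with an explicit 300 dpi tag,
# then reload it and check that the tag survived.
img = Image.new("RGB", (100, 100))
buf = io.BytesIO()
img.save(buf, format="TIFF", dpi=(300, 300))
buf.seek(0)
reloaded = Image.open(buf)
print(reloaded.info["dpi"])
```

In the question's pipeline you would open the file written by cv2.imwrite and re-save it this way; the color-space conversion itself, though, happens at cv2.imread time.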
This is in regard to pictures taken with the iPhone's camera. No matter what, I can't understand why image sizes are in the thousands while the image scale is always 1.0.
For example, I printed out an image's details and this is what I got:
<UIImage: 0x134def110> size {3024, 4032} orientation 3 scale 1.000000
What does 3024x4032 mean? And why is the scale 1.0, when my screen size is really 375x667? Orientation 3 means the image is rotated 90º counterclockwise. So if the original image is 375x500 (in pixels), after rotation it should be 500x375. Then why doesn't the printed size change accordingly?
And on a similar note, how would I get the size of the image in pixels from this size that's printed out? Because no matter what the size of the camera preview, if the ratio of the camera preview is 4:3, the resulting size of the image (image.size.width and image.size.height) is always 3024x4032.
What does 3024x4032 mean?
Those are the dimensions of the image. I think you're missing one point: the iPhone's camera can take photographs with a much higher resolution than its screen size. Just because an image is shown on the screen, it doesn't mean the image dimensions are that size.
Size: An uncropped 12.2 MP photo (that's the default size when shot on the iPhone 7 rear camera) is 3024 x 4032 pixels, so that's where that number comes from. Extra crispy in case you want to frame it and hang it up on your wall! See source.
Scale: Generally 1.0. A UIImage's scale maps pixels to points: the size reported in points is the pixel size divided by the scale. Camera photos are created with scale 1.0, so one point equals one pixel; an image loaded from an @2x asset would report scale 2.0 and half the pixel dimensions as its size.
tl;dr: those dimensions are the pixel size of the photo in storage, not the size at which it's rendered on the phone's screen.
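The size/scale relationship can be written as plain arithmetic (a Python sketch just for illustration):

```python
# UIKit reports UIImage sizes in points; pixels = points * scale.
def pixel_size(point_size, scale):
    w, h = point_size
    return (w * scale, h * scale)

print(pixel_size((3024, 4032), 1.0))  # camera photo: points equal pixels
print(pixel_size((128, 128), 2.0))    # an @2x image: (256.0, 256.0) pixels
```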
I need to take a big png file and make it smaller in width/height with maximum quality, regardless of file size.
I use an imagemagick command like:
/usr/bin/convert -sample 1201x847 -density 600 "source_file.png" -quality 100 "dest_file.png"
I get a png file, but I would like an image of better quality...
I set the width/height of the output file with -sample 1201x847
-quality 100 is clear - best quality
The -density parameter is not clear.
I read a doc:
-density width
-density widthxheight
Set the horizontal and vertical resolution of an image for rendering to devices.
This option specifies the image resolution to store while encoding a raster image or the canvas resolution while rendering (reading) vector formats such as Postscript, PDF, WMF, and SVG into a raster image. Image resolution provides the unit of measure to apply when rendering to an output device or raster image. The default unit of measure is in dots per inch (DPI). The -units option may be used to select dots per centimeter instead.
The default resolution is 72 dots per inch, which is equivalent to one point per pixel (Macintosh and Postscript standard). Computer screens are normally 72 or 96 dots per inch, while printers typically support 150, 300, 600, or 1200 dots per inch. To determine the resolution of your display, use a ruler to measure the width of your screen in inches, and divide by the number of horizontal pixels (1024 on a 1024x768 display).
If the file format supports it, this option may be used to update the stored image resolution. Note that Photoshop stores and obtains image resolution from a proprietary embedded profile. If this profile is not stripped from the image, then Photoshop will continue to treat the image using its former resolution, ignoring the image resolution specified in the standard file header.
The -density option sets an attribute and does not alter the underlying raster image. It may be used to adjust the rendered size for desktop publishing purposes by adjusting the scale applied to the pixels. To resize the image so that it is the same size at a different resolution, use the -resample option.
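The ruler measurement that doc describes is just this division (a Python sketch):

```python
# dots per inch = horizontal pixels / physical screen width in inches
def screen_dpi(width_pixels, width_inches):
    return width_pixels / width_inches

print(screen_dpi(1024, 14.2))  # roughly 72 dpi for a display about 14.2 inches wide
```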
Do they mean the parameters of the client's monitor, screen.width and screen.height?
As it was written:
use a ruler to measure the width of your screen in inches, and divide by the number of horizontal pixels (1024 on a 1024x768 display)
The doc is not actually very clear about these parameters or how to calculate them.
Also, while googling I have seen the parameters -sharpen, -trim, and -resample used - do they influence the quality of the result, and if so, how should they be used?
About the source png file, I know only that it is the output of a Fabric.js canvas rendered with the html2canvas function.
How to get image of better quality?
Thanks!
I have a programmatically created UIImage image, using this kind of code:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(128, 128), NO, 0.0f);
// Render in context
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Since my context options specify a scale of 0, it will be set to 2 on retina devices, and I can confirm that on the resulting UIImage's scale property.
Now, the problem is that this image's size is 128x128 at scale 2. When I put it into a UIImageView of size 64x64 with contentMode = Center, it renders my image outside the image view, presumably drawing into a 128x128 box without any scaling.
My understanding of retina graphics was that if an image has scale factor 2.0, then it should be rendered at half size, thus resulting in higher dpi.
So I was expecting the image view to render 64x64 image at retina quality. Where am I wrong?
The image will be rendered at the size you give it - 128 x 128 points. The scale factor means that you will get better rendered curves etc., but the image will still be 128 x 128 points. As stated in the documentation, the size parameter is:
The size (measured in points) of the new bitmap context. This represents the size of the image returned by the UIGraphicsGetImageFromCurrentImageContext function.
If you want a retina-quality 64x64 image, use a 64x64 size for your context.
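The point/pixel bookkeeping behind that answer, as plain arithmetic (Python for illustration only):

```python
# UIGraphicsBeginImageContextWithOptions(size, opaque, scale) creates
# a backing bitmap of size-in-points multiplied by the scale factor.
def backing_pixels(points, scale):
    return points * scale

print(backing_pixels(128, 2.0))  # 256.0 px: still a 128-pt image, too big for a 64-pt view
print(backing_pixels(64, 2.0))   # 128.0 px: a 64-pt image at retina quality
```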
Server Config:
Windows Server 2003
IIS 6
ColdFusion 8 Standard Edition
Java Version 6 Update 18
I have a ColdFusion application that allows users to upload images that will be added to an existing PDF. When the images are added to the PDF, they have to fit within a minimum/maximum height and width, so the uploaded image needs to be scaled to fit.
For instance, let's say the minimum height and width for a given image is 100x100, and the maximum height and width is 200x200, and the user uploads an image that is 500x1000. I use the logic below to scale that image down without skewing the image (it keeps its original shape) to 100x200. For an image smaller than the minimum, it is scaled up (in the example above, a 50x50 image would be scaled up to 100x100).
The problem I'm noticing is that when ColdFusion scales the image using its built-in functions, it reduces the resolution to 72dpi. Is there a way to prevent this loss of resolution, as the images are being added to PDFs which need to be print-quality?
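The fit logic described above can be sketched in Python (the function name is mine, purely to show the arithmetic, not ColdFusion's API):

```python
# Scale (w, h) to fit a min/max box while preserving aspect ratio:
# shrink if the larger side exceeds max_side, grow if the smaller
# side is below min_side, otherwise leave the size alone.
def scale_to_fit(w, h, min_side, max_side):
    scale = 1.0
    if max(w, h) > max_side:
        scale = max_side / max(w, h)
    elif min(w, h) < min_side:
        scale = min_side / min(w, h)
    return (round(w * scale), round(h * scale))

print(scale_to_fit(500, 1000, 100, 200))  # (100, 200)
print(scale_to_fit(50, 50, 100, 200))     # (100, 100)
```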
Here's the [scaled-down] code I'm using to scale the images:
<cfscript>
imagePath = "/uploads/image.tif";
scaledWidth = 100;
scaledHeight = 100;
scaledImage = ImageNew(imagePath);
ImageSetAntialiasing(scaledImage, "on");
ImageScaleToFit(scaledImage, scaledWidth, scaledHeight);
</cfscript>
I think you may want to skip scaling the image at all and add the original image to the PDF document. Then have whatever PDF creation tool you are using resize and position the image on the document canvas - similar to setting width and height on images in HTML to something other than their native size. I have not had to add images to PDFs the way you describe, but this post might point you in the right direction:
Adding a dynamic image to a PDF using ColdFusion and iText