OpenCV increases image size after rotation

I am using OpenCV in Python to rotate an image, and the original and resulting images differ in several properties. I am doing the transformation with this piece of code:
import cv2

img = cv2.imread("image.tif")
rows, cols = img.shape[:2]  # image height and width
M = cv2.getRotationMatrix2D((cols / 2, rows / 2), correction_angle, 1)  # 2x3 rotation matrix
dst = cv2.warpAffine(img, M, (cols, rows))
cv2.imwrite("Rotated_image.tif", dst)
The original image is 1.7 MB, its resolution is 300 dpi, and its color space is YCbCr.
The issue is that the resulting image is 12.5 MB, 96 dpi, in the RGB color space, and uses LZW compression!
My questions are: can I keep the main properties of the original image, and why does rotating an image change its size this way?
Note: The bit depth is 24 in both images.

Calling cv2.imread with only the name of the file uses the default value cv.IMREAD_COLOR for the second parameter, about which the documentation says:
If set, always convert image to the 3 channel BGR color image.
So, your image is always converted to RGB. You can try using cv.IMREAD_ANYCOLOR for the second parameter of imread, but I don't think you can use cv2.warpAffine on it trivially then.
The difference in the stored DPI information stems from the fact that you write the new image without any metadata. imwrite allows you to specify encoder parameters, but unfortunately they are not very well documented, and I am not sure whether this kind of data can be written out of the box with OpenCV.
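If your OpenCV build is recent enough to expose the TIFF write flags (cv2.IMWRITE_TIFF_XDPI and cv2.IMWRITE_TIFF_YDPI; check that they exist in your version before relying on this), a sketch along these lines may get you closer to the original properties, though I have not verified that the color space survives the round trip:
import cv2

correction_angle = 5.0  # placeholder angle in degrees
# IMREAD_ANYCOLOR avoids the forced conversion to a 3-channel BGR image where the codec allows it.
img = cv2.imread("image.tif", cv2.IMREAD_ANYCOLOR)
rows, cols = img.shape[:2]
M = cv2.getRotationMatrix2D((cols / 2, rows / 2), correction_angle, 1)
dst = cv2.warpAffine(img, M, (cols, rows))
# Write the DPI back explicitly; 300 matches the original image.
cv2.imwrite("Rotated_image.tif", dst,
            [cv2.IMWRITE_TIFF_XDPI, 300, cv2.IMWRITE_TIFF_YDPI, 300])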

Related

How to change GIMP print size with script-fu

I am using GIMP 2.10.24. I have an image and I need to change its print size to a width of 21 mm and a height of 30 mm.
I can do that with Set Image Print Resolution Dialog (Menu->Image->Print Size):
(screenshot of the Set Image Print Resolution dialog)
But here is my question: how can I do that using Script-Fu or Python-Fu?
Print size, size in pixels, and print definition (resolution) are directly related:
print size = size in pixels ÷ print definition
So to change the image print definition you use
In Python:
pdb.gimp_image_set_resolution(image, xresolution, yresolution)
In Script-fu:
(gimp-image-set-resolution image xresolution yresolution)
In both cases the X/Y resolutions are given in dots per inch.
However, if you are using GIMP just for this, creating a GIMP script is overkill (the learning curve is quite steep). If the image is in a common format (JPEG, PNG, TIFF), the print definition is part of the image metadata (the JPEG header or EXIF data) and can be changed directly, without decoding/re-encoding the image, using CLI utilities. For instance, with ExifTool:
exiftool ${your_image} -xResolution=321 -yResolution=321
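Or, if you do want to stay inside GIMP, here is a small Python-Fu console sketch (an assumption on my part that you run it from the console with at least one image open) that computes the resolution needed for a 21 mm x 30 mm print:
MM_PER_INCH = 25.4
target_w_mm, target_h_mm = 21.0, 30.0

# print size = size in pixels / print resolution, so resolution = pixels / inches
image = gimp.image_list()[0]
xres = image.width / (target_w_mm / MM_PER_INCH)
yres = image.height / (target_h_mm / MM_PER_INCH)

pdb.gimp_image_set_resolution(image, xres, yres)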

MPSImageIntegral returns all zeroes when images are smaller

I have a Metal shader that processes an iPad Pro video frame to generate a (non-displayed) RGBA32Float image in a color attachment. That texture is then put through an MPSImageIntegral filter, encoded into the same command buffer as the shader, which results in an output image of the same size and format. In the command buffer’s completion handler, I read out the last pixel in the filtered image (containing the sum of all pixels in the input image) using this code:
let src = malloc(16) // 4 Floats per pixel * 4 bytes/Float
let region = MTLRegionMake2D(imageWidth - 1, imageHeight - 1, 1, 1) // last pixel in image
outputImage!.getBytes(src!, bytesPerRow: imageWidth * 16, from: region, mipmapLevel: 0)
let sum = src!.bindMemory(to: Float.self, capacity: 4)
NSLog("sum = \(sum[0]), \(sum[1]), \(sum[2]), \(sum[3])")
That works correctly as long as the textures holding the input and filtered images are both the same size as the iPad's display, 2048 x 2732, though it's slow with such large images.
To speed it up, I had the shader generate just a ¼ size (512 x 683) RGBA32Float image instead, and use that same size and format for the filter’s output. But in that case, the sum that I read out is always just zeroes.
By capturing GPU frames in the debugger, I can see that the dependency graphs look the same in both cases (except for the reduced texture sizes in the latter case), and that the shader and filter work as expected in both cases, based on the appearance of the input and filtered textures as shown in the debugger. So why is it I can no longer successfully read out that filtered data, when the only change was to reduce the size of the filter's input and output images?
Some things I’ve already tried, to no avail:
Using 512 x 512 (and other size) images, to avoid possible padding artifacts in the 512 x 683 images.
Looking at other pixels, near the middle of the output image, which also contain non-zero data according to the GPU snapshots, but which read as 0 when using the smaller images.
Using a MTLBlitCommandEncoder in the same command buffer to copy the output pixel to a MTLBuffer, instead of, or in addition to, using getBytes. (That was suggested by the answer to a similar macOS question, which is not directly applicable to iOS.)
I've found that if I change the render pass descriptor's storeAction for the shader's color attachment that receives the initial RGBA32Float input image from .dontCare to .store, then the code works for 512 x 683 images as well as 2048 x 2732 ones.
Why it worked without that for the larger images I still don't know.
I also don't know why this store action matters, as the filtered output image was already being successfully generated, even when its input was not stored.

Make a smaller image in width/height from a big PNG file with maximum quality

I need to produce, from a big PNG file, an image smaller in width/height with maximum quality, regardless of its file size.
I use an ImageMagick command like:
/usr/bin/convert -sample 1201x847 -density 600 "source_file.png" -quality 100 "dest_file.png"
I get a PNG file, but I would like an image of better quality.
I set the width/height of the output file with -sample 1201x847.
The -quality 100 part is clear to me (best quality).
What is not clear to me is the -density parameter.
I read the documentation:
-density width
-density widthxheight
Set the horizontal and vertical resolution of an image for rendering to devices.
This option specifies the image resolution to store while encoding a raster image or the canvas resolution while rendering (reading) vector formats such as Postscript, PDF, WMF, and SVG into a raster image. Image resolution provides the unit of measure to apply when rendering to an output device or raster image. The default unit of measure is in dots per inch (DPI). The -units option may be used to select dots per centimeter instead.
The default resolution is 72 dots per inch, which is equivalent to one point per pixel (Macintosh and Postscript standard). Computer screens are normally 72 or 96 dots per inch, while printers typically support 150, 300, 600, or 1200 dots per inch. To determine the resolution of your display, use a ruler to measure the width of your screen in inches, and divide by the number of horizontal pixels (1024 on a 1024x768 display).
If the file format supports it, this option may be used to update the stored image resolution. Note that Photoshop stores and obtains image resolution from a proprietary embedded profile. If this profile is not stripped from the image, then Photoshop will continue to treat the image using its former resolution, ignoring the image resolution specified in the standard file header.
The -density option sets an attribute and does not alter the underlying raster image. It may be used to adjust the rendered size for desktop publishing purposes by adjusting the scale applied to the pixels. To resize the image so that it is the same size at a different resolution, use the -resample option.
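If I understand that part correctly, the density only changes the implied print size, not the pixel data. A quick check with the numbers from my command (just arithmetic in Python):
width_px, height_px = 1201, 847   # from the -sample geometry above
density_dpi = 600                 # from the -density option above
print(width_px / density_dpi, height_px / density_dpi)
# roughly 2.0 x 1.4 inches when printed at 600 dpi; the pixels themselves are unchanged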
Do they mean the parameters of the client's monitor, i.e. screen.width and screen.height?
As it is written:
use a ruler to measure the width of your screen in inches, and divide by the number of horizontal pixels (1024 on a 1024x768 display)
It is not actually very clear what these parameters mean or how to calculate them.
Also, while googling, I have seen the parameters -sharpen, -trim, and -resample used. Do they influence the quality of the result, and if so, how should they be used?
About the source PNG file, I know only that it is the result of a Fabric.js canvas captured with the html2canvas function.
How can I get an image of better quality?
Thanks!

What size does a CGPDFContext have to be?

I have to write a CGPDFContext. This PDF contains just one image.
The image has 2100 x 3000 pixels.
Suppose I open this image in Photoshop and print it at 300 dpi. Photoshop will use 300 pixels for every inch of printed material, so this image will print at 7 x 10 inches. That is why the image has this size.
Now I have to create a CGPDFContext. I read somewhere that this context has to be created in points and there is a mention that on a CGPDFContext one point = 1/72 inches, meaning that the context will have 72 dpi (?).
So, what size should I create this context with to have maximum quality at 300 dpi?
Another question: supposing this context is created based on the 1/72 convention, then 7 x 10 inches will correspond to 504 x 720 pt. If this is the size I have to create the context with, what happens when I write the image to the context using this?
CGImageRef imageRef = image.CGImage; // this image is 2100x3000 pixels
// mediaBox = 0,0,504,720
CGContextRef pdfContext = CGPDFContextCreate(dataConsumer, &mediaBox, auxillaryInformation);
CGContextDrawImage(pdfContext, CGRectMake(0.0f, 0.0f, 504, 720), imageRef);
Will the 2100x3000-pixel image be embedded in the context without losing pixels? I don't want the image to be reduced to 504x720.
If your image is 2100*3000 pixels and you draw it on a PDF page that is 7*10 inches (504*720 points) then your image will be embedded at 300 dpi.
The image size will be kept at 2100*3000 pixels and it will not be downscaled at 504*720 pixels.
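A quick check of that arithmetic (1 point = 1/72 inch), using the width as an example:
image_px = 2100            # image width in pixels
page_in = 504 / 72.0       # mediabox width in points converted to inches -> 7.0
print(image_px / page_in)  # -> 300.0 dpi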
Both other answers are technically correct, but let me try to answer both of your questions explicitly:
Now I have to create a CGPDFContext. I read somewhere that this context has to be created in points and there is a mention that on a CGPDFContext one point = 1/72 inches, meaning that the context will have 72 dpi (?).
The parameter you have to pass is the location and size of the mediabox of the PDF you want to create. The mediabox is the canvas you have at your disposal to put stuff on.
PDF uses a space where 1 point equals 1/72 inch, but you shouldn't think of this as dpi in the traditional "image" way. This is simply a convention so that you know that specifying a mediabox with a size of 72 x 72 points will give you a PDF file that is 1 inch high and wide in the real world.
So, what size should I create this context with to have maximum quality at 300 dpi?
Your code is correct :-)
The image you place on this context will always be inserted as full size (meaning, Apple will not throw away pixels behind your back). That means that the final resolution of your image is determined by the number of pixels of your image and the size of the rectangle (again in points) where you chose to put it on the page.
So you're fine in this simple example. However (at the risk of hurting your brain), keep in mind that PDF contexts, just like any other contexts, allow you to transform them. You could rotate or shear the current transformation matrix of your PDF context. In that case your image would still have all its pixels, but its effective resolution would depend on what evil thing you did to the transformation matrix.
The resolution is independent of the media size, in PDF. The media size is given in 1/72 inch, so 2100x3000 'units' (aka points) is a media size of 29.166x41.666 inches. PDF files do not have a resolution.
In general the content of a PDF is vector information and so is resolution-independent. Bitmaps may be drawn in a PDF, and those do have a resolution, but it's not 72 dpi. The resolution of the image depends on the number of image samples in each dimension, and the scale factor which is applied to put it on the medium.
Consider an image which is 300x300 image samples. If we place that onto a PDF which is 72x72 (ie 1 inch square), and scale it to fit exactly, then the image is, effectively, 300 dpi internally.
In PDF terms I can take the same image, make a PDF page which is 144x144 (2x2 inches), and scale the image to fit that. Now the image is 150 dpi. The image hasn't changed, but the scale factor has.
Now the final 'resolution' of any images in your PDF file, when rendered, will depend on the number of samples and the scale factor (as above) and the resolution you render the PDF file at. Taking the two cases above, if I render at 300 dpi, the image won't change at all, but in the first case the original image samples will map 1:1 onto the final output pixels. The second image, however, will map each image sample into 4 pixels in the output (because it's been scaled by 2 in each direction).
If you render your PDF file (2100x3000 points) in Photoshop at 300 dpi then Photoshop will create a bitmap which is 8750x12500 pixels. It will still be 29.16x41.66 inches, at 300 dots per inch. If you render it at 600 dpi, then you will get 17500x25000 pixels, and so on.
It sounds like the context is created in the default PDF space of 1/72 inch, so you just need to use the media size from the PDF file, i.e. 2100x3000.
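To tie the numbers together, here is a small sketch of the arithmetic described above (effective dpi of an embedded image, and the pixel size you get when rasterising the page), written as plain Python:
def effective_dpi(image_px, placement_pts):
    # dpi of an image drawn into a rectangle whose size is given in points
    return image_px / (placement_pts / 72.0)

def rendered_px(page_pts, render_dpi):
    # pixel size of the page when rasterised at render_dpi
    return page_pts / 72.0 * render_dpi

print(effective_dpi(300, 72))                          # 300x300 samples in a 1x1 inch rect -> 300.0 dpi
print(effective_dpi(300, 144))                         # the same image in a 2x2 inch rect -> 150.0 dpi
print(rendered_px(2100, 300), rendered_px(3000, 300))  # a 2100x3000 pt page at 300 dpi -> 8750.0 12500.0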

How to get rid of empty transparent areas in a PNG image so that it conforms to actual image size?

I have a series of images that I would like to loop through using iOS's [UIView startAnimating]. My trouble is that, when I exported the images, they all came out at a standard 240x160 size, although only a 50x50 region contains the actual image; the rest is transparent space that just takes up room.
When I set the frame of the image automatically using image.size.width and image.size.height, iOS uses the images' original size of 240x160, so I am unable to get a frame that conforms to the actual visible part of the image. I was wondering if there is a way, using Illustrator, Photoshop, or any other graphics editing software, to export the images based on their natural dimensions rather than a fixed dimension. Thanks!
I am a fan of vector graphics and think everything in the world should be vector ;-) so here is what you do in Illustrator: File > Document Setup > Edit Artboards. Then click on the image, and the artboard should adjust to the exact size. You can of course have multiple artboards, or simply operate with one artboard and however many images.
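If you would rather script the trimming than open every file in Illustrator, a small sketch using Pillow can also crop each PNG to its non-transparent bounding box (assuming the exported PNGs have an alpha channel, which is where the crop box comes from; the folder name is just a placeholder):
from PIL import Image
import glob

for path in glob.glob("frames/*.png"):        # hypothetical folder holding the exported frames
    img = Image.open(path).convert("RGBA")
    bbox = img.getchannel("A").getbbox()      # bounding box of the non-transparent pixels
    if bbox:                                  # None means the image is fully transparent
        img.crop(bbox).save(path)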
