MPSImageIntegral returns all zeroes when images are smaller (iOS)

I have a Metal shader that processes an iPad Pro video frame to generate a (non-displayed) RGBA32Float image in a color attachment. That texture is then put through an MPSImageIntegral filter, encoded into the same command buffer as the shader, which results in an output image of the same size and format. In the command buffer’s completion handler, I read out the last pixel in the filtered image (containing the sum of all pixels in the input image) using this code:
let src = malloc(16) // 4 Floats per pixel * 4 bytes/Float
let region = MTLRegionMake2D(imageWidth - 1, imageHeight - 1, 1, 1) // last pixel in image
outputImage!.getBytes(src!, bytesPerRow: imageWidth * 16, from: region, mipmapLevel: 0)
let sum = src!.bindMemory(to: Float.self, capacity: 4)
NSLog("sum = \(sum[0]), \(sum[1]), \(sum[2]), \(sum[3])")
That works correctly as long as the textures holding the input and filtered images are both the same size as the iPad's display, 2048 x 2732, though it's slow with such large images.
To speed it up, I had the shader generate just a ¼-size (512 x 683) RGBA32Float image instead, and used that same size and format for the filter's output. But in that case, the sum that I read out is always just zeroes.
By capturing GPU frames in the debugger, I can see that the dependency graphs look the same in both cases (except for the reduced texture sizes in the latter case), and that the shader and filter work as expected in both cases, based on the appearance of the input and filtered textures as shown in the debugger. So why is it I can no longer successfully read out that filtered data, when the only change was to reduce the size of the filter's input and output images?
Some things I’ve already tried, to no avail:
Using 512 x 512 images (and other sizes), to avoid possible padding artifacts in the 512 x 683 images.
Looking at other pixels, near the middle of the output image, which also contain non-zero data according to the GPU snapshots, but which read as 0 when using the smaller images.
Using a MTLBlitCommandEncoder in the same command buffer to copy the output pixel to a MTLBuffer, instead of, or in addition to, using getBytes. (That was suggested by the answer to this macOS question, which is not directly applicable to iOS.)
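For reference, the value being read back is the bottom-right entry of an integral image, which by definition equals the sum of every input pixel. A minimal pure-Python sketch of that invariant (no Metal involved, single channel for simplicity):

```python
# Pure-Python integral image: out[y][x] = sum of img[0..y][0..x].
# This mirrors, per channel, what MPSImageIntegral computes.
def integral_image(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0.0
        for x in range(w):
            row_sum += img[y][x]
            above = out[y - 1][x] if y > 0 else 0.0
            out[y][x] = row_sum + above
    return out

img = [[1.0, 2.0], [3.0, 4.0]]
out = integral_image(img)
# The last pixel holds the sum of all input pixels: 1 + 2 + 3 + 4 = 10
print(out[-1][-1])  # 10.0
```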

I've found that if I change the render pass descriptor's storeAction for the shader's color attachment that receives the initial RGBA32Float input image from .dontCare to .store, then the code works for 512 x 683 images as well as 2048 x 2732 ones.
Why it worked without that for the larger images I still don't know.
I also don't know why this store action matters, as the filtered output image was already being successfully generated, even when its input was not stored.

Related

What should I do if a texture formatted as BC3_UNORM doesn't have a resolution that is a multiple of 4?

I have a texture whose resolution is 95x90 and whose format is DXGI_FORMAT_BC3_UNORM.
I try to load this texture but get the error below.
D3D11 ERROR: ID3D11Device::CreateTexture2D: A Texture2D created with the following Format (0x4d, BC3_UNORM) experiences aligment restrictions on the dimensions of the Resource. The dimensions, which are (Width: 95, Height: 90), must be multiples of (Width: 4, Height: 4). [ STATE_CREATION ERROR #101: CREATETEXTURE2D_INVALIDDIMENSIONS]
It doesn't make sense for me to open a painting app and resize the original texture by hand.
Is there a nice way to fix this error?
+
If I load the texture using Visual Studio, its dimensions are multiples of 4.
It is different when I load the texture with DirectXTex.
auto hr = DirectX::CreateDDSTextureFromFile(Graphic::Device(), path.data(), &_texture, &_srv);
The formal rule in DirectX is that BC-compressed images must have a multiple-of-4 width & height for the 'top-level' image, but mipmaps can obviously result in non-multiple-of-4 values. There are also special rules for dealing with the 1x1, 1x2, 2x1, and 2x2 pixel cases.
DirectXTex can block-compress arbitrary-sized images to support mipmaps.
I have a -fixbc4x4 switch in my texconv tool specifically for this case. It 'resizes' the top-most mip level without a decompression/compression cycle, but it will lose any mip levels, so if you want to regenerate them you'll end up decompressing/compressing for that.
So, in short, this will create a version of the DDS texture with just the top-most level rounded up to multiples of 4x4, without any modification of the blocks:
texconv -fixbc4x4 -m 1 inputdds.dds
And this will fix the top-most level without any modification of the blocks, then decompress the image, generate mips, and recompress:
texconv -fixbc4x4 -m 0 inputdds.dds
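The multiple-of-4 rounding that -fixbc4x4 applies to the top level, and why mip levels are exempt, can be sketched in a few lines (pure Python; the function names are just illustrative):

```python
# Round a dimension up to the next multiple of the BC block size (4).
def pad_to_block(n, block=4):
    return ((n + block - 1) // block) * block

# Mip dimensions halve each level (minimum 1), and they are allowed to be
# non-multiples of 4 -- only the top-level image has the restriction.
def mip_chain(w, h):
    dims = [(w, h)]
    while w > 1 or h > 1:
        w, h = max(w // 2, 1), max(h // 2, 1)
        dims.append((w, h))
    return dims

print(pad_to_block(95), pad_to_block(90))  # 96 92
print(mip_chain(96, 92)[:4])  # [(96, 92), (48, 46), (24, 23), (12, 11)]
```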

OpenCV increases image size after rotation

I am using OpenCV in Python to rotate an image, and the original and resulting images differ in some properties. I am doing my transformation with this part of the code:
img = cv2.imread("image.tif")
rows, cols = img.shape[:2]  # image dimensions
rotation_matrix = cv2.getRotationMatrix2D((cols / 2, rows / 2), correction_angle, 1)
dst = cv2.warpAffine(img, rotation_matrix, (cols, rows))
cv2.imwrite("Rotated_image.tif", dst)
The original image's size is 1.7 MB, its resolution is 300 dpi, and its color space is YCbCr.
The issue is that the resulting image is 12.5 MB, 96 dpi, in the RGB color space, and with "LZW" compression!
My question is: can I keep the main properties of the original image? And why does rotating an image change the size this way?
Note: The bit depth is 24 in both images.
Calling cv2.imread with only the name of the file uses the default value cv2.IMREAD_COLOR for the second parameter, about which the documentation says:
If set, always convert image to the 3 channel BGR color image.
So your image is always converted to a 3-channel BGR image. You can try using cv2.IMREAD_ANYCOLOR for the second parameter of imread, but I don't think you can use cv2.warpAffine on it trivially then.
The difference in the stored DPI information stems from the fact that you write the new image without any metadata. imwrite allows you to specify parameters (see here), but, unfortunately, they are not very well documented. I am not sure whether this kind of data can be written out of the box with OpenCV.
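One way to see why the output can balloon: after cv2.imread, the image is a raw 3-channel, 8-bit-per-channel array, so the data being re-encoded is width × height × 3 bytes regardless of how well the source TIFF was compressed. A quick sketch with hypothetical dimensions (the question doesn't state the actual ones):

```python
# In-memory size of a decoded 3-channel, 8-bit image (what imread returns).
def decoded_size_bytes(width, height, channels=3):
    return width * height * channels

# Hypothetical 2000 x 2200 image, roughly the size range in the question:
raw = decoded_size_bytes(2000, 2200)
print(raw)  # 13200000 bytes, about 12.6 MiB before any compression
```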

Making a smaller image from a big PNG file with maximum quality

I need to make a smaller (in width/height) image from a big PNG file, with maximum quality despite its size.
I use imagemagick command like:
/usr/bin/convert -sample 1201x847 -density 600 "source_file.png" -quality 100 "dest_file.png"
I get a PNG file, but I would like to get an image of better quality...
I set the width/height of the output file as -sample 1201x847.
The -quality 100 part is clear - best quality.
The -density parameter is not clear.
I read a doc:
-density width
-density widthxheight
Set the horizontal and vertical resolution of an image for rendering to devices.
This option specifies the image resolution to store while encoding a raster image or the canvas resolution while rendering (reading) vector formats such as Postscript, PDF, WMF, and SVG into a raster image. Image resolution provides the unit of measure to apply when rendering to an output device or raster image. The default unit of measure is in dots per inch (DPI). The -units option may be used to select dots per centimeter instead.
The default resolution is 72 dots per inch, which is equivalent to one point per pixel (Macintosh and Postscript standard). Computer screens are normally 72 or 96 dots per inch, while printers typically support 150, 300, 600, or 1200 dots per inch. To determine the resolution of your display, use a ruler to measure the width of your screen in inches, and divide by the number of horizontal pixels (1024 on a 1024x768 display).
If the file format supports it, this option may be used to update the stored image resolution. Note that Photoshop stores and obtains image resolution from a proprietary embedded profile. If this profile is not stripped from the image, then Photoshop will continue to treat the image using its former resolution, ignoring the image resolution specified in the standard file header.
The -density option sets an attribute and does not alter the underlying raster image. It may be used to adjust the rendered size for desktop publishing purposes by adjusting the scale applied to the pixels. To resize the image so that it is the same size at a different resolution, use the -resample option.
Did they mean the parameters of the client's monitor, screen.width and screen.height?
As it was written:
use a ruler to measure the width of your screen in inches, and divide by the number of horizontal pixels (1024 on a 1024x768 display)
The documentation is not actually very clear about these parameters and how to calculate them.
Also, while googling, I see the parameters -sharpen, -trim, and -resample being used - do they influence the quality of the result, and if so, how should they be used?
About the source PNG file, I only know that it is the result of a Fabric.js canvas rendered using the html2canvas function.
How to get image of better quality?
Thanks!
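For what it's worth, the relationship described in the quoted documentation can be checked numerically: -density only tags the file with a resolution (pixel count unchanged), while -resample rescales the pixel data so the physical size in inches is preserved at the new resolution. A small sketch of that arithmetic (no ImageMagick needed):

```python
# -density changes metadata only: the pixel count is unchanged.
# -resample keeps the physical size (inches) and changes the pixel count:
#     new_pixels = old_pixels * new_density / old_density
def resampled_pixels(old_pixels, old_density, new_density):
    return round(old_pixels * new_density / old_density)

# A 1201-px-wide image tagged at 72 dpi is 1201/72 inches wide (~16.68 in).
print(1201 / 72)                        # ~16.68
print(resampled_pixels(1201, 72, 300))  # 5004
```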

What size does a CGPDFContext have to have?

I have to write a CGPDFContext. This PDF contains just one image.
The image has 2100 x 3000 pixels.
Suppose I open this image in Photoshop and print it at 300 dpi. Photoshop will use 300 pixels to build every inch of printed material, so this image will print at 7 x 10 inches. That is why the image has this size.
Now I have to create a CGPDFContext. I read somewhere that this context has to be created in points and there is a mention that on a CGPDFContext one point = 1/72 inches, meaning that the context will have 72 dpi (?).
So, what size should I create this context to have maximum quality at 300 dpi?
Another question: supposing this context is created based on the 1/72 rule, then 7 x 10 inches will represent 504 x 720 pt. If this is the size I have to create the context at, what happens when I write the image to the context using this?
CGImageRef imageRef = image.CGImage; // this image is 2100x3000 pixels
CGRect mediaBox = CGRectMake(0.0f, 0.0f, 504.0f, 720.0f);
CGContextRef pdfContext = CGPDFContextCreate(dataConsumer, &mediaBox, auxillaryInformation);
CGContextDrawImage(pdfContext, CGRectMake(0.0f, 0.0f, 504.0f, 720.0f), imageRef);
Will the 2100x3000-pixel image be embedded in the context without losing pixels? I don't want the image to be reduced to 504x720.
If your image is 2100*3000 pixels and you draw it on a PDF page that is 7*10 inches (504*720 points), then your image will be embedded at 300 dpi.
The image will be kept at 2100*3000 pixels; it will not be downscaled to 504*720 pixels.
Both other answers are technically correct, but let me try to answer explicitly on both of your questions:
Now I have to create a CGPDFContext. I read somewhere that this context has to be created in points and there is a mention that on a CGPDFContext one point = 1/72 inches, meaning that the context will have 72 dpi (?).
The parameter you have to pass is the location and size of the mediabox of the PDF you want to create. The mediabox is the canvas you have at your disposal to put stuff on.
PDF uses a space where 1 point equals 1/72 inch, but you shouldn't think of this as dpi in the traditional "image" way. This is simply a convention so that you know that specifying a mediabox with a size of 72 x 72 points will give you a PDF file that is 1 inch high and wide in the real world.
So, what size should I create this context to have maximum quality at 300 dpi.
Your code is correct :-)
The image you place on this context will always be inserted at full size (meaning, Apple will not throw away pixels behind your back). That means that the final resolution of your image is determined by the number of pixels in your image and the size of the rectangle (again, in points) where you choose to put it on the page.
So you're fine in this simple example. However (at the risk of hurting your brain), keep in mind that PDF contexts - just like any other contexts - allow you to transform them. You could rotate or shear the current transformation matrix of your PDF context. In that case your image would still have all its pixels, but its effective resolution would depend on what evil thing you did to the transformation matrix.
The resolution is independent of the media size in PDF. The media size is given in units of 1/72 inch, so 2100x3000 'units' (aka points) is a media size of 29.166x41.666 inches. PDF files do not have a resolution.
In general the content of a PDF is vector information and so is resolution-independent. Bitmaps may be drawn in a PDF, and those do have a resolution, but it's not 72 dpi. The resolution of an image depends on the number of image samples in each dimension and the scale factor applied to put it on the medium.
Consider an image which is 300x300 image samples. If we place that onto a PDF which is 72x72 (ie 1 inch square), and scale it to fit exactly, then the image is, effectively, 300 dpi internally.
In PDF terms I can take the same image, make a PDF page which is 144x144 (2x2 inches), and scale the image to fit that. Now the image is 150 dpi. The image hasn't changed, but the scale factor has.
Now the final 'resolution' of any images in your PDF file, when rendered, will depend on the number of samples, the scale factor (as above), and the resolution you render the PDF file at. Taking the two cases above: if I render at 300 dpi, the first image won't change at all, but will map the original image samples 1:1 onto the final output pixels. The second image, however, will map each image sample into 4 pixels in the output (because it's been scaled by 2 in each direction).
If you render your PDF file (2100x3000 points) in Photoshop at 300 dpi then Photoshop will create a bitmap which is 8750x12500 pixels. It will still be 29.16x41.66 inches, at 300 dots per inch. If you render it at 600 dpi, then you will get 17500x25000 pixels, and so on.
It sounds like the context is created in the default PDF space of 1/72 inch, so you just need to use the media size from the PDF file, i.e. 2100x3000.
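The sample-count/scale arithmetic from the answers above can be sketched directly: the effective resolution of an embedded image is its pixel count divided by the physical size of the rectangle it is drawn into (points / 72 = inches):

```python
# Effective dpi of an image drawn into a PDF rectangle:
#     dpi = pixel_samples / (rect_points / 72)
def effective_dpi(pixel_samples, rect_points):
    return pixel_samples * 72 / rect_points

print(effective_dpi(300, 72))    # 300.0 -- 300 samples in a 1-inch rect
print(effective_dpi(300, 144))   # 150.0 -- same image, 2-inch rect
print(effective_dpi(2100, 504))  # 300.0 -- the question's 2100-px image in 504 pt
```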

What's the largest image size that the iOS browser will display without downsampling?

What the title says; basically, if you display an image in the iOS browser (say, on the iPhone), when the image dimensions surpass a certain limit, the image is downsampled and displayed at a fraction of the original resolution (i.e., 1/n the number of pixels of the original).
This is, I'm guessing, a way to avoid crashing the browser when image sizes become too large due to running out of memory.
My question is, what is the upper limit to the image dimensions before WebView (or the browser) starts sampling every n-th pixel?
EXAMPLE: When displaying a 5760×1800 image, it is downsampled into 1440×450 in the browser (a 1:4 ratio).
Just finished a few tests and it seems that iOS (on iPhone 3GS with iOS 4.2.1 at least) downsamples the image when it hits the 1024px limit. Any image above this size has its pixels sampled by every n-th pixel, where n is the smallest divisor that yields a dimension <= 1024px.
For some reason mobile Safari reduces the size, but thankfully there is a way to force the actual size in the CSS using:
-webkit-background-size: widthpx heightpx;
-webkit-background-size:980px 2574px;
/* (simply put in the exact size of the wrapper image) */
originally found here:
http://srihosting.com/blog/web-development/ios-safari-background-image-size-problem/
According to the Apple documentation, there is a little difference between JPEG and other file formats.
For GIF, PNG and TIFF, if the device has less than 256MB of ram the
maximum size is 3 megapixels. If the device has more than 256MB then
the limit is 5 megapixels.
For JPEG, the maximum size is always at 2 megapixels.
So a JPEG image can have at most 2 * 1024 * 1024 pixels (2,097,152).
If I'm correct, here is the math needed to find the largest image dimensions:
ratio = √(2 * 1024 * 1024) / √(5760 * 1800) = √2097152 / √10368000 ≈ 1448.154 / 3219.937 ≈ 0.449746
optimal_width = 5760 * ratio ≈ 2590 // it's better to round down by a pixel,
optimal_height = 1800 * ratio ≈ 809 // or the image quickly ends up too big since we multiply the dimensions
optimal_size = 2590 * 809 = 2095310 // less than 2097152
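That math can be double-checked in a few lines (the 2-megapixel budget is the JPEG figure quoted above from Apple's documentation):

```python
import math

MAX_PIXELS = 2 * 1024 * 1024  # JPEG limit from the quoted Apple docs

def fit_to_pixel_budget(w, h, budget=MAX_PIXELS):
    # Scale both dimensions by the same ratio so the aspect ratio is kept.
    ratio = math.sqrt(budget / (w * h))
    # Round down so the result stays under the budget.
    return math.floor(w * ratio), math.floor(h * ratio)

w, h = fit_to_pixel_budget(5760, 1800)
print(w, h, w * h)  # 2590 809 2095310 (under 2097152)
```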
source for the restrictions: https://developer.apple.com/library/safari/#documentation/AppleApplications/Reference/SafariWebContent/CreatingContentforSafarioniPhone/CreatingContentforSafarioniPhone.html (the "Know iOS Resource Limits" section)
