Get screen's physical PPI with Delphi 10.4 in Windows 10 - delphi

I am developing an application that draws on screen. I want to know the physical PPI of the screen so I can draw, e.g., a 2x2-inch rectangle at its actual size, so that the user can measure it on screen as 2x2 inches.
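One common approach on Windows is to ask GDI for the display's physical size in millimetres via GetDeviceCaps with HORZSIZE/VERTSIZE and divide the pixel resolution by it. Below is a minimal sketch (written in Python via ctypes for brevity; the same GetDeviceCaps calls are available in Delphi through Winapi.Windows), with the caveat that many drivers report a size derived from the logical DPI rather than the true panel size, so the result should be sanity-checked, e.g. against the monitor's EDID data:

import ctypes

# GetDeviceCaps index constants from wingdi.h
HORZSIZE, VERTSIZE = 4, 6    # physical display width/height in millimetres
HORZRES, VERTRES = 8, 10     # display width/height in pixels

user32 = ctypes.windll.user32
gdi32 = ctypes.windll.gdi32

hdc = user32.GetDC(None)                   # device context for the primary screen
px_w = gdi32.GetDeviceCaps(hdc, HORZRES)
mm_w = gdi32.GetDeviceCaps(hdc, HORZSIZE)
user32.ReleaseDC(None, hdc)

ppi = px_w / (mm_w / 25.4)                 # pixels per inch = pixels / physical inches
print(f"{px_w} px over {mm_w} mm -> {ppi:.1f} PPI")

With a trustworthy PPI, a 2x2-inch rectangle is simply 2 * ppi pixels on a side.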

Related

How to calculate the PPI of this camera?

My camera is a HIKIROBOT MV-CE060-10UC 6 MP USB camera and its lens is a HIKIROBOT MVL-HF2528M-6MPE 25 mm lens.
Camera resolution: 3072 x 2048 == 6 MP
(more details about the camera and the lens are in the linked datasheets)
The operating distance is 370 mm.
I need to find the dimensions of an object using Python. I have tried some methods, but they don't give accurate values.
I searched online for how to calculate PPI and found a website with a calculator. I followed its steps, but I don't know the diagonal in inches.
[image illustrating the diagonal]
The website gives some examples of calculating PPI, such as the PPI of a computer screen, where the diagonal of the screen in inches is given. In my case I don't know the diagonal in inches. What should I do?
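PPI on its own is a screen/print concept; for measuring an object with a camera, the more useful number is millimetres per pixel at your working distance, which can be estimated from the lens focal length and the sensor's pixel pitch using the thin-lens magnification. A sketch under assumed values (the 2.4 um pixel pitch below is a guess, not a datasheet figure; take the real value from the sensor datasheet and calibrate against an object of known size):

# Assumed sensor pixel pitch -- check the camera's datasheet for the real value.
PIXEL_PITCH_UM = 2.4          # micrometres per pixel (assumption)
FOCAL_LENGTH_MM = 25.0        # MVL-HF2528M-6MPE lens
WORKING_DISTANCE_MM = 370.0   # object distance

# Thin-lens magnification for an object at distance d: m = f / (d - f)
m = FOCAL_LENGTH_MM / (WORKING_DISTANCE_MM - FOCAL_LENGTH_MM)
mm_per_pixel = (PIXEL_PITCH_UM / 1000.0) / m

def object_size_mm(length_in_pixels: float) -> float:
    """Convert a length measured in image pixels to millimetres in the scene."""
    return length_in_pixels * mm_per_pixel

print(f"{mm_per_pixel:.4f} mm per pixel")   # roughly 0.033 mm/px under these assumptions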

How to set the resolution of an image

I am using OpenCV to generate images with a depth of 1 bit, to be cut on a laser cutter (GitHub repo here). I save them with:
cv2.imwrite(filepath, img, [cv2.IMWRITE_PNG_BILEVEL, 1])
Each pixel corresponds to 0.05 mm (called the "scan gap" in the laser cutter). A sample image has 300 x 306 pixels and appears in the laser cutter software (LaserCut Pro 5) at 30 mm x 30 mm. That corresponds to a resolution of 254 pixels per inch (0.1 mm per pixel); the unusual value presumably comes from the software. I want a size of 15 mm x 15.3 mm, so I need to set a higher resolution. I could resize by hand, but if I make a mistake, the pixels will no longer be exactly aligned with the scan gap of the laser, resulting in inaccuracies in the engraving.
Does OpenCV have a way to set the resolution or final size of the image?
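As far as I know, cv2.imwrite has no flag for the PNG physical-resolution (pHYs) metadata, so a common workaround is to set the DPI afterwards with Pillow; the pixel data stays untouched, only the metadata changes. Since 1 pixel should be 0.05 mm, the target resolution is 25.4 / 0.05 = 508 pixels per inch. A sketch (file names are placeholders):

from PIL import Image

DPI = 25.4 / 0.05   # 0.05 mm per pixel -> 508 pixels per inch

img = Image.open("engraving.png")                 # the 1-bit PNG written by OpenCV
img.save("engraving_508dpi.png", dpi=(DPI, DPI))  # rewrite with pHYs metadata only

At 508 PPI the 300 x 306 px sample comes out at exactly 15 mm x 15.3 mm, and no resampling happens, so the pixels stay aligned with the scan gap.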

How image scale works with device scale

I am trying to understand how images are rendered on devices when the scale of the device and the scale of the image differ.
We have a 100x100 px image. If we set the image scale to 2x, 1 user point will be 2 px, so the image size will be 50x50 points on a device screen, both at 2x scale (iPhone 7) and at 3x scale (iPhone X). Why?
How does this work? I would be very thankful for a detailed explanation.
What we have are an image, a buffer, and a coordinate system. The image has a size, the buffer has a size, and the coordinate system may have one too.
Scales were introduced in the context of coordinate systems and buffers when Retina displays became a thing. Before that, we developed for a solid 320x480 coordinate system, which was also the size of the buffer, so a 320x480 image was drawn exactly full screen.
Then Retina displays suddenly made devices 2x, resulting in 640x960 buffers. If there were no scale, all the hardcoded values we used to have would have produced a messed-up layout. So Apple kept the coordinate system at 320x480 and introduced scale, which basically means that UIKit's frame-related logic stayed the same. From the developer's perspective, all that changed is that a @2x image from the asset catalog initializes an image view at half the pixel size. So a 512x512 image produces a 256x256 image view via UIImageView(image: myImage).
Now we have quite a lot more than 320x480 and its multiples, and we use Auto Layout. We have 1x, 2x and 3x, and we may get 10x in the future for all we care (probably not going to happen, due to the limitations of our eyes). These scales exist so that all devices have a similar density of points per inch, where points are the units of the coordinate system. That means a button with a height of 50 points will come out at a physically similar height on every device, no matter the scale.
So what does all of this have to do with rendering an image? Well, nothing, actually. The scale is just a converter between your coordinate system and your buffer. At a scale of 2x, if you create a 50x50 image view in code (or in IB), you can expect its buffer to be 100x100. Ergo, if you want the image to look nice, you should use a 100x100 image. But since you want this for all the relevant scales, you should have three images, 50x50, 100x100 and 150x150, named identically except for the @2x and @3x suffixes; that ensures UIImage(named:) will pick the correct image for the current device scale.
So to your question directly:
if we set image scale to x2, 1 user point will be 2px, so image size will be 50x50: You usually don't set the scale of an image. UIImage is a wrapper around CGImage, which always has an exact pixel size; UIImage uses its orientation and scale to transform that size. But if you mean you are supplying a 100x100 image as @2x, then on 2x devices an image view initialized with this image will have a size of 50x50 in the coordinate system, while being drawn into a 100x100 buffer as-is.
It may be hard to explain, but the key point you are looking for is that the coordinate system has a scale relative to the display resolution (or rather, to the buffer being used). If it did not, we would need to enlarge everything on devices with a higher resolution. If we put two devices with the same physical screen size side by side (let's say both mapping to 320x480 points), where the first has 1x and the second 2x resolution, then all components would be half the size on the second: a 32x32 icon would take 10% of the width on 1x but only 5% on 2x. To simulate the same physical size we would need a 64x64 icon, but then we would also need to set the frame to 64x64, and we would need larger font sizes, or all the text would suddenly be very small...
I try to understand how images are rendering on devices with the different scale of device and image: There is no "scale" relation between a device and an image as such. The device (in this case a display) receives a buffer of pixels that it needs to draw; this buffer is sized appropriately for it, so we are really only talking about rendering the image into that buffer. An image may be rendered through UIKit into the buffer any way you want: if you like, you can draw a 400x230 image into a 100x300 region of the buffer. But optimally, a 400x230 image is drawn into a 400x230 region of the buffer, meaning it is not magnified, shrunk or otherwise transformed. So assuming:
image size: 64x64 (actual image pixels)
UIView size: 320x320 (.frame.size)
UIView scale: 2x
Icon size: 32x32 (UIImageView.frame.size)
Buffer size: 640x640 (size of the actual buffer on which the image will be drawn)
UIImage size: 32x32 (.size you get from loaded image)
UIImage scale: 2x (.scale you get from loaded image)
Now from the buffer's perspective you are drawing a 64x64 image into a 640x640 buffer, which takes 10% of the buffer per dimension. And from the coordinate-system perspective you are drawing a 32x32 image onto a 320x320 canvas, which likewise takes 10% of the canvas per dimension.
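To keep the numbers honest, here is the point/pixel arithmetic from the answer as a tiny sketch (Python used purely as a calculator):

def points_to_pixels(points: float, scale: float) -> float:
    """A view's backing buffer in pixels is its point size times the screen scale."""
    return points * scale

print(points_to_pixels(32, 2))    # 64.0: the 32x32-point icon fills a 64x64-pixel region
print(points_to_pixels(320, 2))   # 640.0: the 320x320-point view backs a 640x640 buffer

# A UIImage's point size is its pixel size divided by its own scale, so a
# 100x100 px image tagged @2x reports 50x50 points on any device.
print(100 / 2)                    # 50.0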

Real distance on image sensor

I have captured an image using the iPhone rear camera. I can measure the distance between two points in the image in pixels, and I can easily convert that distance to inches or millimeters using the PPI of that iPhone:
distance_in_inch = distance_in_pixel / PPI
However, what is that distance in inches or millimeters on the image sensor itself? How can I calculate it?
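One way, assuming the image is at the sensor's native resolution (no crop or digital zoom): multiply the pixel distance by the sensor's pixel pitch. The pitch below is an assumed figure, not a known value; look it up for your specific iPhone model's sensor. Note also that the display PPI used in the formula above gives the distance on the screen, not on the sensor.

PIXEL_PITCH_UM = 1.22   # assumed pixel pitch in micrometres; varies by iPhone model

def sensor_distance_mm(distance_in_pixels: float) -> float:
    """Physical distance on the image sensor between two image points."""
    return distance_in_pixels * PIXEL_PITCH_UM / 1000.0

print(sensor_distance_mm(1000))   # 1000 px -> 1.22 mm on the sensor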

Pixels of an image

I have a stupid question:
I have a black circle on a white background.
I have code in Matlab that takes an image containing a black circle and returns the number of pixels in the circle.
Will I get the same number of pixels from a 5-megapixel camera and from an 8-megapixel camera?
The short answer is: under most circumstances, no; 8 MP has more pixels than 5 MP. However...
It depends on many factors related to the camera and the images that you take:
The focal length of the cameras and other optical parameters. Consider a fish-eye lens to see the point.
The distance of the circle from the camera. Obviously, closer objects appear larger.
What the camera does with the pixels from the sensor. For example, some 5 MP cameras work in a down-scaled mode, outputting 3 MP instead.
It also depends on the resolution: resolution is how many pixels are counted horizontally and vertically when describing a stored image.
Higher-megapixel cameras offer the ability to print larger images.
For example, a 6 MP camera offers a resolution of 3000 x 2000 pixels. If you allow 300 dpi (dots per inch) for print quality, this gives a print of approximately 10 in x 7 in: 3000 divided by 300 = 10, and 2000 divided by 300 = approximately 7.
A 3.1 MP camera offers a resolution of 2048 x 1536 pixels, which gives a print size of about 7 in x 5 in.
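The same arithmetic as a quick sketch:

def print_size_inches(width_px: int, height_px: int, dpi: int = 300):
    """Largest print at the given quality: pixel counts divided by dots per inch."""
    return width_px / dpi, height_px / dpi

print(print_size_inches(3000, 2000))   # 6 MP   -> (10.0, ~6.7), roughly 10 x 7 in
print(print_size_inches(2048, 1536))   # 3.1 MP -> (~6.8, ~5.1), roughly 7 x 5 in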
