How resolution changes X & Y coords - Delphi

I am tracking the color of a pixel at X & Y at a resolution of 1920 by 1080, and I am wondering if there is any mathematical way to keep tracking the same pixel accurately across various resolutions.
The pixel is static, however I am aware that changing resolutions affects scaling and the X & Y coordinate system of the monitor.
So any suggestions would be great!

As long as the whole screen area is filled, the same location on the physical screen (expressed as the ratios xLocation / xWidth and yLocation / yHeight, measured in centimeters or inches) will always correspond to the same ratios xPixelIndex / xTotalPixels and yPixelIndex / yTotalPixels.
Let's assume you have xReference and yReference of the target pixel, in a reference resolution WidthReference and HeightReference in which these coordinates mark the desired pixel.
Let's assume you have WidthCurrent and HeightCurrent as your screen size in pixels for the resolution in which you want to target the pixel at the same physical location.
Let's assume that you need to determine xCurrent and yCurrent as the coordinates of that pixel in the current resolution.
Then calculate the current coordinates as:
xCurrent = (1.0 * WidthCurrent) / WidthReference * xReference;
yCurrent = (1.0 * HeightCurrent)/ HeightReference * yReference;
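
A minimal sketch of that scaling in Python (the names mirror the formulas above; the coordinates and resolutions are arbitrary example values):

def scale_point(x_ref, y_ref, width_ref, height_ref, width_cur, height_cur):
    # Scale the reference coordinates by the ratio of current to reference resolution.
    x_cur = round(width_cur / width_ref * x_ref)
    y_cur = round(height_cur / height_ref * y_ref)
    return x_cur, y_cur

# A pixel tracked at (960, 540) in 1920x1080, looked up again in 1280x720.
print(scale_point(960, 540, 1920, 1080, 1280, 720))  # -> (640, 360)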

Related

How to calculate the distance between object and camera, knowing the pixels occupied by the object in an image

By using segmentation I am able to find the number of pixels occupied by an object in an image. Now I need to find the distance by using the pixels occupied.
object real dimensions (H x W) = 11 x 5.5 cm.
The object is placed at 50 cm distance; pixels occupied = 42894.
The object is placed at 60 cm distance; pixels occupied = 31269.
The total pixel in an image = 480 x 640 = 307200.
What is the distance if the object occupies 22323 pixels?
The distance to the object is 67.7cm
Please read https://en.wikipedia.org/wiki/Pinhole_camera_model
Image size is inversely proportional to the distance, so the occupied pixel area falls off with the square of the distance. Repeat your experiment for a few distances and plot size vs distance to see for yourself.
Of course this is a simplified model that only works for a fixed focal length.
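
A small Python sketch of that relationship, assuming the simple pinhole model where the occupied pixel area scales with 1 / distance², calibrated from the 50 cm measurement above:

import math

def distance_from_area(area_px, ref_area_px, ref_distance_cm):
    # Pinhole model: linear image size ~ 1/distance, so pixel area ~ 1/distance^2,
    # which means area_px * distance^2 stays (roughly) constant.
    return ref_distance_cm * math.sqrt(ref_area_px / area_px)

print(distance_from_area(22323, 42894, 50.0))  # roughly 69 cm with this simplified model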

How can I measure the size of objects in a photo taken by an iPhone without ARKit?

We'll be using the United States quarter as our reference object throughout all the examples.
I need to determine a "pixels per metric" ratio, which describes the number of pixels that "fit" into a given number of inches, millimeters, meters, etc.
The output should look something like the following.
You can find code to measure object sizes on this blog.
Pixels per metric -
It is defined as the number of pixels per metric unit (mm, cm, m). First you need to find this ratio for a single reference object (the US quarter in your case). You will then use this ratio to find the sizes of other objects. Now, to find pixels per metric in your case:
1) Filter the image and find the contours.
2) Sort the contours from left to right.
3) Find the corners of the first contour (the US quarter).
4) Find the distance between two adjacent corners of the object in pixels.
5) PixelsPerMetric = Distance_between_corners_in_pixels / Distance_between_corners_in_real_units (cm or inches)
For example
Suppose the distance between two adjacent corners of the US quarter is 200 pixels and the actual width of the coin is 0.955 inches. So,
PixelsPerMetric = 200/0.955 = 209.4240
Now you can find the size of any other object as:
size = length_in_pixels/PixelsPerMetric
This ratio remains constant for a given height (distance between object and camera). You need to calculate the ratio again if the height changes.
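
A rough OpenCV sketch of those steps in Python (the image path is a placeholder, the 0.955-inch quarter width comes from the example above, and the bounding-box width stands in for the corner-to-corner distance for brevity):

import cv2

QUARTER_WIDTH_INCHES = 0.955  # real width of the reference coin

image = cv2.imread("objects.jpg")  # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (7, 7), 0)
edges = cv2.Canny(blurred, 50, 100)

# OpenCV 4.x signature: returns (contours, hierarchy).
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Sort contours left to right so the reference coin (the leftmost object) comes first.
contours = sorted(contours, key=lambda c: cv2.boundingRect(c)[0])

# Pixel width of the leftmost contour (the US quarter) gives the ratio.
x, y, w, h = cv2.boundingRect(contours[0])
pixels_per_metric = w / QUARTER_WIDTH_INCHES

# Every other object's size is converted through the same ratio.
for c in contours[1:]:
    x, y, w, h = cv2.boundingRect(c)
    print("object: %.2f x %.2f inches" % (w / pixels_per_metric, h / pixels_per_metric))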

OpenCV calculate distance from object with known size

Is it possible to calculate the distance of an object with known size?
I would like to do this with a ball which has a 7 cm diameter. For the first calculation I would put it at 30 cm distance from the webcam and for the second at 50 cm.
Is there a linear function or formula to calculate the distance somehow?
Let's say in the first measurement it has a diameter of 6 pixels and in the second only 4. There must be a formula for this?
Best regards
In the optical scheme you have two similar right triangles with edges F (the objective's focal length), PixelSize, Distance, and Size:
Distance / Size = F / PixelSize
So having parameters for some known Distance0, you can get F (in pixel units, consider it as some constant)
F = Distance0 * PixelSize0 / Size0
and use it to calculate unknown distance (until zoom changes)
Distance = F * Size / PixelSize
(Note that you can vary object size)
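
A minimal Python sketch of that calibration, using the 7 cm ball from the question and a hypothetical pixel measurement taken at the known 30 cm distance:

def calibrate_focal(distance0, size0, pixel_size0):
    # F in pixel units, from one measurement at a known distance.
    return distance0 * pixel_size0 / size0

def estimate_distance(f_pixels, size, pixel_size):
    # Distance = F * Size / PixelSize (similar triangles).
    return f_pixels * size / pixel_size

# Suppose the 7 cm ball measures 60 px across at the known 30 cm distance.
F = calibrate_focal(30.0, 7.0, 60.0)
# Later the same ball measures 36 px across; estimate its new distance.
print(estimate_distance(F, 7.0, 36.0))  # 50.0 cm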

Place object at retina pixel location that standard resolution can't access?

I am animating something's position on the screen in Xcode.
Currently it moves at "1" pixel every .1 seconds.
This means it's not moving at 1 pixel every .1 seconds on a retina display but 2 pixels every .1 seconds.
I want it to move at true 1 pixel every .1 seconds on a retina display. Is there any way to do this?
Any way to set an objects location to be a retina location or something?
You want to move in pixels, but all coordinates in iOS are given in points, so you need to convert your points to pixels. This can be done as follows:
CGFloat screenScale = [UIScreen mainScreen].scale; // pixels per point: 1.0 on non-Retina, 2.0 on 2x Retina
CGFloat ratio = 1.0 / screenScale;                 // points that correspond to exactly one physical pixel
Use ratio to increment your animation.
On a non-Retina device, ratio will be 1 point. On a 2x Retina device, ratio will be 0.5 points.
As you animate, move your x and y coordinates by ratio points and you will get one pixel of movement each time.
Starting in iOS 4, dimensions are measured in "points" instead of pixels. On non-Retina screens a point is one pixel; on Retina screens a point is two pixels: draw a one-point line and it shows up two pixels wide.
Therefore, when on Retina screens you can move 0.5 points (which will equal 1 pixel).
Have a look at Apple's drawing concepts.

Window width and center calculation of DICOM image

What is "rescale intercept" and "rescale slope" in DICOM image (CT)?
How to calculate window width and window center with that?
The rescale intercept and slope are applied to transform the pixel values of the image into values that are meaningful to the application.
For instance, the original pixel values could store a device-specific value that has a meaning only when used by the device that generated it: applying the rescale slope/intercept to the pixel values converts the original values into optical density or other known measurement units (e.g. Hounsfield units).
When the transformation is not linear, then a LUT (lookup table) is applied.
After the modality transform has been applied (rescale slope/intercept or LUT) then the window width/center specify which pixels should be visible: all the pixels outside the values specified by the window are displayed as black or white.
For instance, if the window center is 100 and the window width is 20 then all the pixels with a value smaller than 90 are displayed as black and all the pixels with a value bigger than 110 are displayed as white.
This allows displaying only portions of the image (for instance just the bones or just the tissues).
Hounsfield scale: http://en.wikipedia.org/wiki/Hounsfield_scale
How to apply the rescale slope/intercept:
final_value = original_value * rescale_slope + rescale_intercept
How to calculate the pixels to display using the window center/width:
lowest_visible_value = window_center - window_width / 2
highest_visible_value = window_center + window_width / 2
Rescale intercept and slope are a simple linear transform applied to the raw pixel data before applying the window width/center. The basic formula is:
NewValue = (RawPixelValue * RescaleSlope) + RescaleIntercept
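
A small NumPy sketch of both steps, assuming the slope, intercept, center, and width have already been read from the DICOM header (the 8-bit display mapping is just one common choice):

import numpy as np

def apply_modality_lut(raw, rescale_slope, rescale_intercept):
    # Linear modality transform: raw device values -> meaningful units (e.g. Hounsfield).
    return raw * rescale_slope + rescale_intercept

def apply_window(values, window_center, window_width):
    # Map [center - width/2, center + width/2] onto the 0..255 display range,
    # clipping everything outside to black (0) or white (255).
    low = window_center - window_width / 2.0
    high = window_center + window_width / 2.0
    scaled = (values - low) / (high - low) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Example with the values from the answer: center 100, width 20.
raw = np.array([85, 90, 100, 110, 115], dtype=np.float64)
hu = apply_modality_lut(raw, rescale_slope=1.0, rescale_intercept=0.0)
print(apply_window(hu, window_center=100, window_width=20))  # [0 0 127 255 255]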
