iOS - Image scaling and positioning in larger image

My question is about calculating a position. I scale the INNER IMAGE in the FRONT END, and then I need to find the corresponding position in the BACKGROUND PROCESS. If anyone has experience with this, please share an equation or something similar.
The FRONT END FRAME IMAGE has a size of 188x292 (width x height),
and the larger FRAME IMAGE has a size of 500x750 (width x height).
The INNER IMAGE is 75x75 (width x height) and the larger INNER IMAGE is
199.45x199.45 (width x height).
Question: when I scale the INNER IMAGE in the FRONT END, say from 75x75 to 100x100, I then have its x and y position. I need to calculate the exact position in the BACKGROUND PROCESS so I can scale that image programmatically.

After scaling the INNER IMAGE you will have its x and y position; now convert that position to a percentage.
If the position of the INNER IMAGE is (x, y):
relativeX = (x * 100) / frameImageWidth;
relativeY = (y * 100) / frameImageHeight;
The position of the INNER IMAGE for the background will then be:
x = (relativeX * backgroundFrameImageWidth) / 100;
y = (relativeY * backgroundFrameImageHeight) / 100;
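The same percentage mapping can be sketched in Python; the frame sizes come from the question, but the function name and the sample point are illustrative:

```python
def to_background(x, y, frame_w, frame_h, bg_w, bg_h):
    """Map a point from the front-end frame to the background frame
    by first converting it to a percentage of the frame size."""
    relative_x = (x * 100) / frame_w
    relative_y = (y * 100) / frame_h
    return (relative_x * bg_w) / 100, (relative_y * bg_h) / 100

# Front-end frame 188x292 mapped onto the 500x750 background frame:
# the center of the small frame lands on the center of the large one.
bx, by = to_background(94, 146, 188, 292, 500, 750)
# bx, by == 250.0, 375.0
```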

CGPoint foregroundLocation = CGPointMake(x, y);
// Cast to float so integer-valued dimensions don't truncate the scale factors.
float xscale = (float)BACKGROUND_IMAGE_WIDTH / (float)FOREGROUND_IMAGE_WIDTH;
float yscale = (float)BACKGROUND_IMAGE_HEIGHT / (float)FOREGROUND_IMAGE_HEIGHT;
CGPoint backgroundLocation = CGPointApplyAffineTransform(foregroundLocation, CGAffineTransformMakeScale(xscale, yscale));


How do I convert pixel/screen coordinates to cartesian coordinates?

How do I convert pixel/screen coordinates to cartesian coordinates(x,y)?
The info I have on the pictures is (see image below):
vFov in degrees, hFov in degrees, pixel width, pixel height
Basically what I want is to take any pixel on the image, and calculate the pixel position from image center in degrees.
For my answer I assume that your image represents a projection onto a planar surface.
Then a virtual camera can be constructed such that it sees the width/height of the image in exactly the right field of view. To get the distance d between the image and the camera (in pixels), a right triangle can be constructed:
tan(FOV/2) = (width/2) / d → d = width / (2·tan(FOV/2))
The same equation holds for the height.
In a similar way the angle of a pixel can be calculated (assuming the center of the image is (0, 0)):
tan(angleX) = x/d → angleX = arctan(x/d) = arctan((2x/width) · tan(hFov/2))
tan(angleY) = y/d → angleY = arctan(y/d) = arctan((2y/height) · tan(vFov/2))
In case the image is warped, the d for the horizontal and the d for the vertical can differ, and therefore you should not precompute a single d.
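As a sketch, the derivation above can be written in Python; the image size and FOV values below are made-up example numbers:

```python
import math

def pixel_angles(px, py, width, height, h_fov_deg, v_fov_deg):
    """Angle of a pixel from the image center, assuming a planar projection.
    (px, py) are measured from the image center."""
    # Distance from the virtual camera to the image plane, in pixels
    d_h = width / (2 * math.tan(math.radians(h_fov_deg) / 2))
    d_v = height / (2 * math.tan(math.radians(v_fov_deg) / 2))
    angle_x = math.degrees(math.atan(px / d_h))
    angle_y = math.degrees(math.atan(py / d_v))
    return angle_x, angle_y

# A pixel at the right edge of a 640x480 image with a 60-degree horizontal FOV
ax, ay = pixel_angles(320, 0, 640, 480, 60.0, 45.0)
# ax comes out to half the horizontal FOV, i.e. about 30 degrees
```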

Flutter - How to rotate an image around the center with canvas

I am trying to implement a custom painter that can draw an image (scaled down version) on the canvas and the drawn image can be rotated and scaled.
I learned that to scale the image I have to scale the canvas using the scale method.
Now the question is how to rotate the scaled image about its center (or any other point); the rotate method of Canvas only rotates about the top-left corner.
Here is my implementation, which can be extended.
Had the same problem. The solution is simply to write your own rotation method in three lines:
void rotate(Canvas canvas, double cx, double cy, double angle) {
  canvas.translate(cx, cy);
  canvas.rotate(angle);
  canvas.translate(-cx, -cy);
}
We thus first move the canvas to the point you want to pivot around. We then rotate about the top-left corner (the default in Flutter), which in the shifted coordinate space is exactly the desired pivot, and finally translate the canvas back so the rotation is applied around that point. The method is very efficient: the two translations cost only a few additions, and the rotation cost is identical to the original one.
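The translate-rotate-translate trick can be checked numerically. Here is a small Python sketch of the equivalent point transform (the sample point, pivot, and angle are arbitrary):

```python
import math

def rotate_about(point, pivot, angle):
    """Rotate `point` around `pivot`, mimicking the canvas sequence
    translate(cx, cy) -> rotate(angle) -> translate(-cx, -cy)."""
    px, py = point[0] - pivot[0], point[1] - pivot[1]  # translate(-cx, -cy)
    c, s = math.cos(angle), math.sin(angle)
    rx, ry = px * c - py * s, px * s + py * c          # rotate(angle)
    return rx + pivot[0], ry + pivot[1]                # translate(cx, cy)

# Rotating the pivot itself leaves it fixed
assert rotate_about((50, 80), (50, 80), 1.2) == (50.0, 80.0)
# A quarter turn around the origin sends (1, 0) to roughly (0, 1)
x, y = rotate_about((1, 0), (0, 0), math.pi / 2)
```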
This can be achieved by shifting the coordinate space, as illustrated in figure 1.
The translation is the difference in coordinates between C1 and C2, which is exactly the difference between A and B in figure 2.
With some geometry formulas, we can calculate the desired translation and produce the rotated image as in the method below:
ui.Image rotatedImage({ui.Image image, double angle}) {
  var pictureRecorder = ui.PictureRecorder();
  Canvas canvas = Canvas(pictureRecorder);

  final double r = sqrt(image.width * image.width + image.height * image.height) / 2;
  final alpha = atan(image.height / image.width);
  final beta = alpha + angle;
  final shiftY = r * sin(beta);
  final shiftX = r * cos(beta);
  final translateX = image.width / 2 - shiftX;
  final translateY = image.height / 2 - shiftY;

  canvas.translate(translateX, translateY);
  canvas.rotate(angle);
  canvas.drawImage(image, Offset.zero, Paint());
  return pictureRecorder.endRecording().toImage(image.width, image.height);
}
alpha, beta, and angle are all in radians.
Here is the repo of the demo app
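The translation computed above keeps the image center fixed under the rotation. This can be verified with a short Python sketch of the same geometry (the image size and angle are arbitrary example values):

```python
import math

def rotation_shift(width, height, angle):
    """Translation that keeps the image center fixed when the canvas is
    rotated about the origin, mirroring the Dart code above."""
    r = math.sqrt(width * width + height * height) / 2
    alpha = math.atan(height / width)
    beta = alpha + angle
    return width / 2 - r * math.cos(beta), height / 2 - r * math.sin(beta)

def transformed(point, shift, angle):
    """Apply canvas.translate(shift) followed by canvas.rotate(angle):
    the point is rotated first, then translated."""
    c, s = math.cos(angle), math.sin(angle)
    rx = point[0] * c - point[1] * s
    ry = point[0] * s + point[1] * c
    return rx + shift[0], ry + shift[1]

w, h, angle = 400, 300, 0.7
cx, cy = transformed((w / 2, h / 2), rotation_shift(w, h, angle), angle)
# The center (200, 150) maps back onto itself, up to floating-point error
```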
If you want to rotate the image around an arbitrary focal point (not only its center), you can use this approach. You won't have to care about what the offset of the canvas should be relative to the image rotation, because the canvas is moved back to its original position after the image is drawn.
void rotate(Canvas c, Image image, Offset focalPoint, Size screenSize, double angle) {
  c.save();
  c.translate(screenSize.width / 2, screenSize.height / 2);
  c.rotate(angle);
  // To rotate around the center of the image, the focal point is the
  // image width and height divided by 2
  c.drawImage(image, focalPoint * -1, Paint());
  c.translate(-screenSize.width / 2, -screenSize.height / 2);
  c.restore();
}

How to get the rendered/displayed size of a scaled image/frame in Roblox Studio

I need to get the width and height of an image within a frame. Both the frame and the image use the Scale property instead of the Offset property to set the size. I have a UIAspectRatioConstraint on the frame that the image is in. Everything scales with the screen size just fine.
However, I need to be able to get the current width/height of the image (or the frame) so that I can perform some math functions in order to move a marker over the image to a specific position (X, Y). I cannot get the size of the image/frame, and therefore cannot update the position.
Is there a way to get the currently rendered width of an image or frame that is using the Scale size options with the UIAspectRatioConstraint?
I'm sleepy. I hope this makes sense...
My current math for getting a position on another image that uses Offset instead of Size is:
local _x = (_miniMapImageSize.X.Offset / _worldCenterSize.X) * (_playerPos.X - _worldCenterPos.X) + (_miniMapFrameSize.X.Offset / 2)
local _y = (_miniMapImageSize.Y.Offset / _worldCenterSize.Z) * (_playerPos.Z - _worldCenterPos.Z) + (_miniMapFrameSize.Y.Offset / 2)
Which gives me the player position within my mini-map. But that doesn't scale. The actual map does, and I need to position the player's marker on that map as well.
Work-Around
For now (for anyone else looking for a solution), I have created a work-around. I now specify my actual image size:
local _mapSize = Vector2.new(814, 659)
Then I use the screen width and height to decide if I need to scale based off the x-axis or the y-axis. (Scale my math formula, not the image.)
if (_mouse.ViewSizeX / _mouse.ViewSizeY) - (_mapSize.X / _mapSize.Y) <= 0 then
    -- If the screen's width-to-height ratio is the same as or smaller
    -- than the map's, calculate the new size based off the width
    local _smallerByPercent = (_mouse.ViewSizeX * 0.9) / _mapSize.X
    _mapWidth = _mapSize.X * _smallerByPercent
    _mapHeight = _mapSize.Y * _smallerByPercent
else
    -- Otherwise, calculate the new size based off the height
    local _smallerByPercent = (_mouse.ViewSizeY * 0.9) / _mapSize.Y
    _mapWidth = _mapSize.X * _smallerByPercent
    _mapHeight = _mapSize.Y * _smallerByPercent
end
After that, I can create the position for my marker on my map.
_x = ((_mapWidth / _worldCenterSize.X) * (_playerPos.X - _worldCenterPos.X)) * -1
_y = ((_mapHeight / _worldCenterSize.Z) * (_playerPos.Z - _worldCenterPos.Z)) * -1
_mapCharacterArrow.Position = UDim2.new(0.5, _x, 0.5, _y)
Now my marker is able to be placed where my character is within the larger map opened when I press "M".
HOWEVER
I would still love to know of a way to get the rendered/displayed image size... I was trying to make it to where I did not have to enter the image size into the script manually. I want it to be dynamic.
So it turns out there is a property on most GUI elements called AbsoluteSize. This is the actual display size of the element, no matter what it is scaled to; it updates as the element is scaled, always giving you the current rendered size.
With this, I was able to re-write my code to:
local _x = (_mapImageSize.X / _worldCenterSize.X) * (_playerPos.X - _worldCenterPos.X) * -1
local _y = (_mapImageSize.Y / _worldCenterSize.Z) * (_playerPos.Z - _worldCenterPos.Z) * -1
_mapCharacterArrow.Position = UDim2.new(0.5, _x, 0.5, _y)
Where _mapImageSize = [my map image].AbsoluteSize.
Much better than before.
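The marker math itself is platform-independent. Here is a Python sketch of the same world-to-map mapping; the map size, world size, and positions are made-up example values:

```python
def marker_offset(map_size, world_size, player_pos, world_center):
    """Offset of the player marker from the map's center, mirroring the
    Lua formula above (world Z maps to map Y; the -1 flips direction)."""
    x = (map_size[0] / world_size[0]) * (player_pos[0] - world_center[0]) * -1
    y = (map_size[1] / world_size[1]) * (player_pos[1] - world_center[1]) * -1
    return x, y

# A player 100 studs from the world center along X, on an 800-pixel-wide
# map covering a 1600-stud-wide world, ends up 50 pixels from the map
# center (negated to match the map's flipped axis).
dx, dy = marker_offset((800, 600), (1600, 1200), (100, 0), (0, 0))
# dx, dy == -50.0, 0.0
```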

Crop half of an image in OpenCV

How can I crop an image and only keep the bottom half of it?
I tried:
Mat cropped frame = frame(Rect(frame.cols/2, 0, frame.cols, frame.rows/2));
but it gives me an error.
I also tried:
double min, max;
Point min_loc, max_loc;
minMaxLoc(frame, &min, &max, &min_loc, &max_loc);
int x = min_loc.x + (max_loc.x - min_loc.x) / 2;
Mat croppedframe = = frame(Rect(x, min_loc.y, frame.size().width, frame.size().height / 2));
but it doesn't work either.
Here's the Python version for any beginners out there:
def crop_bottom_half(image):
    # Integer division keeps the slice index an int (required in Python 3)
    cropped_img = image[image.shape[0] // 2:image.shape[0]]
    return cropped_img
The Rect function arguments are Rect(x, y, width, height). In OpenCV, the data are organized with the first pixel being in the upper left corner, so your rect should be:
Mat croppedFrame = frame(Rect(0, frame.rows/2, frame.cols, frame.rows/2));
To quickly copy paste:
image = YOURIMAGEHERE #note: image needs to be in the opencv format
height, width, channels = image.shape
croppedImage = image[int(height/2):height, 0:width] #this line crops
Explanation:
In OpenCV, to select a part of an image you simply select the start and end pixels. The meaning is:
image[yMin:yMax, xMin:xMax]
In human speak: yMin = top | yMax = bottom | xMin = left | xMax = right
":" means from the value on the left of the colon to the value on the right.
To keep the bottom half we simply do [int(yMax/2):yMax, xMin:xMax], which means from half the image down to the bottom; x runs from 0 to the max width.
Keep in mind that OpenCV starts from the top left of an image, and increasing the Y value means moving downwards.
To get the width and height of an image you can do image.shape which gives 3 values:
yMax, xMax, and the number of channels, which you probably won't use. To get just the height and width you can also do:
height, width = image.shape[0:2]
This is also known as getting the Region of Interest or ROI
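A minimal, self-contained NumPy demonstration of the same slicing (no image file needed; the array stands in for an image whose pixel values equal their row index):

```python
import numpy as np

# A fake 4x6 grayscale "image": each pixel's value is its row index
image = np.repeat(np.arange(4), 6).reshape(4, 6)

height, width = image.shape[0:2]
bottom_half = image[height // 2:height, 0:width]

# Only rows 2 and 3 survive the crop
assert bottom_half.shape == (2, 6)
assert bottom_half.min() == 2
```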

Calculate real width based on picture, knowing distance

I know the distance between the camera and the object
I know the type of camera used
I know the width in pixel on the picture
Can I figure the real life width of the object?
You have to know the camera's field-of-view angle. For example, the iPhone 5s covers roughly 61.4 degrees vertically and 48.0 degrees horizontally; call the relevant angle alpha.
Then you calculate the width of the object this way:
viewWidth = distance * tan(alpha / 2) * 2
objWidth = viewWidth * (objectPixelWidth / imagePixelWidth)
where viewWidth is the real-world width covered by the whole picture at that distance, objectPixelWidth is the object's width in pixels, and imagePixelWidth is the picture's total width in pixels.
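As a sketch in Python (the distance, FOV, and pixel widths below are example numbers, and the object is assumed to be centered and perpendicular to the camera axis):

```python
import math

def real_width(distance, fov_deg, object_px, image_px):
    """Real-world width of an object from its pixel width on the picture."""
    # Real-world width covered by the whole picture at this distance
    view_width = distance * math.tan(math.radians(fov_deg) / 2) * 2
    return view_width * (object_px / image_px)

# An object filling half of a 3264-pixel-wide photo, taken 2 m away
# with a 61.4-degree field of view
w = real_width(2.0, 61.4, 1632, 3264)
# w comes out to roughly 1.19 m
```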
