How to rotate an image without library functions in C++? - image-processing

I got an assignment to rotate an image without using any library functions.
Which algorithm should I learn, and how should I start working on it?
I read the image using OpenCV, but the rotation itself has to be done without library functions.
Any help is appreciated.

This sounds a lot like homework... but the concept you should learn is that for every target pixel (x, y) you need to find a source pixel (u, v) in the image. You need a linear transform from (x, y) to (u, v). To include translation you need to expand (x, y) to (x, y, 1) and use a 3x2 matrix. Loop through all (x, y) pixels, find (u, v) by multiplying (x, y, 1) with the matrix, fetch the image pixel at (u, v) and draw it at (x, y).
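Not the assignment's required solution, just a minimal sketch of that inverse-mapping loop, assuming the image was loaded with OpenCV as an 8-bit, 3-channel cv::Mat (OpenCV is only used as a pixel container here; the rotation itself is done by hand with nearest-neighbour sampling):

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Rotate 'src' by 'angleDeg' about its centre. For every destination pixel
// (x, y) we apply the inverse rotation to find the source pixel (u, v), so
// every output pixel gets exactly one value and no holes appear.
cv::Mat rotateImage(const cv::Mat& src, double angleDeg)
{
    const double a    = angleDeg * CV_PI / 180.0;
    const double cosA = std::cos(a), sinA = std::sin(a);
    const double cx   = src.cols / 2.0, cy = src.rows / 2.0;

    cv::Mat dst = cv::Mat::zeros(src.size(), src.type());
    for (int y = 0; y < dst.rows; ++y) {
        for (int x = 0; x < dst.cols; ++x) {
            // inverse rotation: rotate (x, y) by -angle around the centre
            const double u =  cosA * (x - cx) + sinA * (y - cy) + cx;
            const double v = -sinA * (x - cx) + cosA * (y - cy) + cy;
            const int ui = static_cast<int>(std::lround(u));
            const int vi = static_cast<int>(std::lround(v));
            if (ui >= 0 && ui < src.cols && vi >= 0 && vi < src.rows)
                dst.at<cv::Vec3b>(y, x) = src.at<cv::Vec3b>(vi, ui);  // CV_8UC3 assumed
        }
    }
    return dst;
}
```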

Related

Conversion from OpenGL to OpenCV

What I have
I'm generating images using the standard perspective camera in Unity. The camera is aimed at the ground plane (in Unity this is the xz-plane), see the image. From this I need to remove the perspective so that all crop rows are parallel to each other.
Method
The warpPerspective() function from OpenCV can be used to remove perspective from an image. All information is known, such as field of view, rotation, position, ... and thus I know how a 3D point maps onto the 2D plane and vice versa. The problem is that OpenCV uses a different system: in OpenCV it should be a 3x3 matrix, while the transformation matrix from Unity is a 4x4 matrix. Is there a conversion between the two? Or should I think of another strategy?
EDIT
I cannot use the orthographic camera in Unity.
Fixed
Solved the issue by constructing a ray from the camera origin through each pixel and looking for an intersection with the ground plane. After this I discretised the ground plane into a grid with the same resolution as the original image. Points that map to the same cell are accumulated.
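Not the poster's code, but a rough sketch of that ray-casting approach, assuming the intrinsic matrix K, the camera rotation R (camera-to-world) and the camera position in world coordinates are known, the ground plane is y = 0 and the image is 8-bit 3-channel; for brevity this sketch overwrites a cell instead of accumulating:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// For every pixel, cast a ray from the camera centre through the pixel,
// intersect it with the ground plane y = 0 (Unity's xz-plane) and drop the
// pixel value into a grid cell over that plane.
cv::Mat removePerspective(const cv::Mat& img, const cv::Matx33d& K,
                          const cv::Matx33d& R, const cv::Vec3d& camPos,
                          double cellSize, cv::Size gridSize)
{
    cv::Mat grid = cv::Mat::zeros(gridSize, img.type());
    const cv::Matx33d Kinv = K.inv();

    for (int v = 0; v < img.rows; ++v) {
        for (int u = 0; u < img.cols; ++u) {
            // ray direction through pixel (u, v), expressed in world coordinates
            cv::Vec3d dir = R * (Kinv * cv::Vec3d(u, v, 1.0));
            if (std::abs(dir[1]) < 1e-9) continue;      // ray parallel to the plane
            double s = -camPos[1] / dir[1];             // solve (camPos + s*dir).y = 0
            if (s <= 0) continue;                       // intersection behind the camera
            cv::Vec3d p = camPos + dir * s;             // hit point on the ground plane
            int gx = static_cast<int>(p[0] / cellSize);
            int gz = static_cast<int>(p[2] / cellSize);
            if (gx >= 0 && gx < grid.cols && gz >= 0 && gz < grid.rows)
                grid.at<cv::Vec3b>(gz, gx) = img.at<cv::Vec3b>(v, u);  // CV_8UC3 assumed
        }
    }
    return grid;
}
```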
If you cannot use Unity's orthographic camera, what I would try is to imitate the C++ code from the examples linked in the OpenCV documentation. Another approach is to undo the projection for the points you care about by multiplying by the inverse of the transformation matrix of each point; a matrix multiplied by its inverse is the identity, so the projection transformation is removed. I think that should be possible; you can obtain/change the projection matrix by checking this. The point is to undo the projection transformation. Then you would need to obtain the corresponding orthographic projection matrix and apply it to get the positions you're after. That should be the same thing Unity's orthographic camera does.
To understand the projection matrix at the lowest level, this source is awesome.
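A minimal sketch of that undo-the-projection idea for a single point, assuming 4x4 perspective and orthographic projection matrices taken from Unity (column-vector convention) and a point in homogeneous clip coordinates; all names here are assumptions:

```cpp
#include <opencv2/opencv.hpp>

// Undo the perspective projection by multiplying with the inverse of the
// projection matrix, then re-project the point orthographically.
cv::Vec4d reprojectOrthographic(const cv::Matx44d& persp,
                                const cv::Matx44d& ortho,
                                const cv::Vec4d& clipPoint)
{
    // inverse * projection = identity, so this removes the perspective transform
    cv::Vec4d viewPoint = persp.inv() * clipPoint;
    viewPoint = viewPoint * (1.0 / viewPoint[3]);   // back to w = 1
    return ortho * viewPoint;                       // apply the orthographic matrix
}
```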
I think that in the Camera component you just need to change the projection from perspective to orthographic:

OpenCV: get accurate real-world coordinates from 2 known parallel planes

So I have been tinkering a little bit with OpenCV and I want to be able to use a camera image to get the position of certain objects that are lying flat on a plane. These objects are simple shapes such as circles, squares, etc. They all have the same height of 5 cm. To be able to relate real-world points to pixels on the camera I painted 4 white squares on the plane with known distances between them.
So the steps I have been taking are:
Initialization:
Calibrate my camera using a checkerboard image and save the calibration data.
Get the input image and call cv::undistort on it with the calibration data for my camera.
Find the center points of the 4 squares in the image and pass that data, together with the real-world coordinates of the squares, to cv::solvePnP. Save the rvec and tvec return parameters.
Warp the perspective of the image so you get a top-down view. This essentially follows this tutorial: https://docs.opencv.org/3.4.1/d9/dab/tutorial_homography.html
Use the resulting image to again find the 4 white squares and then calculate a "pixels per meter" constant which relates a pixel distance between points to the real-world distance on the plane where the 4 squares are.
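Not the poster's actual code, but a condensed sketch of how those initialization steps might fit together; detection of the 4 white squares is omitted, and the variable names as well as the 100 px-per-metre choice are assumptions:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// 'imgPts' are the detected centres of the 4 white squares in the undistorted
// image, 'worldPts' their known real-world positions on the plane (Z = 0),
// given in the same order.
void initialise(const cv::Mat& input,
                const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                const std::vector<cv::Point2f>& imgPts,
                const std::vector<cv::Point3f>& worldPts,
                cv::Mat& undistorted, cv::Mat& rvec, cv::Mat& tvec,
                cv::Mat& H, double& pixelsPerMeter)
{
    // Step 2: undistort the input with the saved calibration data
    cv::undistort(input, undistorted, cameraMatrix, distCoeffs);

    // Step 3: pose of the plane from the 4 square centres
    // (the image is already undistorted, so no distortion coefficients here)
    cv::solvePnP(worldPts, imgPts, cameraMatrix, cv::noArray(), rvec, tvec);

    // Step 4: homography to a top-down view, mapping the 4 square centres onto
    // a metric layout (here 100 px per metre, an arbitrary choice)
    const double scale = 100.0;
    std::vector<cv::Point2f> topDown;
    for (const auto& w : worldPts)
        topDown.emplace_back(w.x * scale, w.y * scale);
    H = cv::findHomography(imgPts, topDown);

    // Step 5: the scale of the warped view is known by construction, but it
    // could also be re-measured from the warped squares
    pixelsPerMeter = scale;
}
```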
Finding the object (this is done after initialization):
Get the input image and call cv::undistort on it with the calibration data for my camera.
Warp the perspective of the image so you get a top-down view. This is the same as step 4 during initialization.
Find the center point of the object to detect.
Since the center point of the object is on a higher plane than the one I calibrated on, I use the following formula to correct for this (d is the corrected pixel offset from the center of the image, x is the measured offset, camHeight is the camera height I measured with a tape measure, and h is the height of the object):
d = x - (h * (x / camHeight))
Here is an illustration of how I arrived at this formula:
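For reference, a tiny sketch of applying that correction to a detected point in the top-down image, reading x as the measured pixel offset from the image centre and d as the corrected offset; the names are assumptions:

```cpp
#include <opencv2/opencv.hpp>

// d = x - h * (x / camHeight)  ==  x * (1 - h / camHeight)
// 'detected' is the found centre of the object, 'imageCenter' the centre of
// the top-down image, 'objectHeight' and 'camHeight' are in the same unit.
cv::Point2f correctForHeight(const cv::Point2f& detected,
                             const cv::Point2f& imageCenter,
                             double objectHeight, double camHeight)
{
    cv::Point2f offset = detected - imageCenter;                    // x
    float s = static_cast<float>(1.0 - objectHeight / camHeight);
    return imageCenter + offset * s;                                // d
}
```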
But still the coordinates are not matching up...
So I am wondering whether this approach is correct at all. Specifically I have the following questions:
Is using cv::undistort before cv::solvePnP correct? cv::solvePnP also takes the camera calibration data as input, so I'm not sure whether I have to pass an undistorted image to it or not.
Similar to 1: during object finding I call cv::undistort -> cv::warpPerspective. Is the undistort necessary here?
Is my calculation to correct for the parallel planes in step 4 correct? I feel like I am missing something but I can't see what. One thing I am wondering is whether I can get the camera height from OpenCV once solvePnP is done.
I am a newbie to CV, so if anything else is totally wrong please also point it out to me.
Thank you for reading this wall of text!

Are there any UV Coordinates (or similar) for UIImageView?

I have a simple UIImageView in my view, but I can't seem to find anything in Apple's documentation for changing the UV coordinates of this UIImageView. To convey the idea, this GIF should show how changing the coordinates of the 4 vertices changes how the image ends up being displayed in the final UIImageView.
I tried to find a solution online too (other than the documentation) and found none.
I use Swift.
You can achieve that very animation using UIView.transform or CALayer.transform. You'll need basic geometry to convert UV coordinates to a CGAffineTransform or CATransform3D.
I made the assumption that an affine transform would suffice, because in your animation the transform is affine (parallel lines stay parallel). In that case, 3 vertices are free; the 4th one is constrained by the other 3.
If you have 3 vertices, you can compute the affine transform matrix using: Affine transformation algorithm
To achieve the infinite repeat, use UIImageResizingMode.Tile.

Transform 3D camera coordinates to 3D real-world coordinates with OpenCV

I'm working on a stereo vision system based on OpenCV which currently returns correct 3D coordinates, but in the wrong perspective.
I have programmed a function which gives me the camera 3D coordinates and the expected real-world coordinates from a chessboard, but I haven't found out how to generate a transformation matrix from this data.
None of the functions I found in OpenCV work for this, because they work with 2D coordinates in an image and not with the calculated 3D coordinates.
Check out this answer. It is for 2D points, but if you expand T to 3 elements and make R 3x3, it will work.
Here is code that uses this method.
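The linked code is not reproduced here; as a stand-in, a sketch of the usual centroid + SVD (Kabsch) estimation of R (3x3) and T (3x1) from matched 3D point sets, which is an assumption about what that method does:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Estimate the rigid transform  world = R * cam + T  from matched 3D points,
// e.g. chessboard corners expressed in camera space and in real-world space.
void estimateRigidTransform3D(const std::vector<cv::Point3d>& cam,
                              const std::vector<cv::Point3d>& world,
                              cv::Mat& R, cv::Mat& T)
{
    CV_Assert(cam.size() == world.size() && cam.size() >= 3);
    const int n = static_cast<int>(cam.size());

    // centroids of both point sets
    cv::Point3d cCam(0, 0, 0), cWorld(0, 0, 0);
    for (int i = 0; i < n; ++i) { cCam += cam[i]; cWorld += world[i]; }
    cCam *= 1.0 / n;
    cWorld *= 1.0 / n;

    // cross-covariance of the centred points
    cv::Mat H = cv::Mat::zeros(3, 3, CV_64F);
    for (int i = 0; i < n; ++i) {
        cv::Mat a = (cv::Mat_<double>(3, 1) << cam[i].x - cCam.x,
                     cam[i].y - cCam.y, cam[i].z - cCam.z);
        cv::Mat b = (cv::Mat_<double>(3, 1) << world[i].x - cWorld.x,
                     world[i].y - cWorld.y, world[i].z - cWorld.z);
        H += a * b.t();
    }

    // SVD of the covariance gives the rotation; guard against a reflection
    cv::SVD svd(H);
    R = svd.vt.t() * svd.u.t();
    if (cv::determinant(R) < 0) {
        cv::Mat D = cv::Mat::eye(3, 3, CV_64F);
        D.at<double>(2, 2) = -1;
        R = svd.vt.t() * D * svd.u.t();
    }

    // translation moves the rotated camera centroid onto the world centroid
    cv::Mat c0 = (cv::Mat_<double>(3, 1) << cCam.x, cCam.y, cCam.z);
    cv::Mat c1 = (cv::Mat_<double>(3, 1) << cWorld.x, cWorld.y, cWorld.z);
    T = c1 - R * c0;
}
```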

How do I use cv::remap with forward mesh, not reverse mesh, to warp images?

I have a mesh of, say, 11 x 11 vertices. The idea is that every vertex holds the normalized floating-point position of that pixel in the warped image. E.g. if I want to stretch the upper-left edge, I write (-0.1, -0.1) into the first vertex.
I have the grid, but not the image warping function. cv::remap does exactly that... but in the reverse order: the mesh "says" which pixel neighborhoods to map to a regular grid on the output.
Is there a standard way in OpenCV to handle reverse warping? Can I easily transform the mesh or use another function? I am using OpenCV and Boost, but any free library/tool that does this will work for me.
PS: I need this running on a Linux PC.
You need to calculate another set of maps for the reverse transform.
But for that you need the transform formula, or matrix.
Step 1: Select 4 points on the remapped image. A good idea is to take the corners, if the corners are not black (undefined).
Step 2: Find their place in the original image (look into the maps for that).
Step 3: Compute the homography between the two sets of points. findHomography() is the key.
Step 4: warpPerspective the second image. Internally, it calculates the grids, then calls remap().
If you want the same transform as before, swap the input points with the output points in findHomography, or inv() the resulting matrix.
If you want to have the maps for multiple calls (it's faster than calling warpPerspective each time), you have to copy the code from warpPerspective into a new function.
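A short sketch of those four steps, assuming mapX/mapY are CV_32F per-pixel maps expanded from the mesh; swap the two point sets (or invert H) if the warp comes out in the wrong direction for your maps:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Approximate the reverse warp with a single homography.
cv::Mat reverseWarp(const cv::Mat& src, const cv::Mat& mapX, const cv::Mat& mapY)
{
    // Step 1: four points on the remapped image (the corners, if defined)
    std::vector<cv::Point2f> remapped = {
        {0.f, 0.f},
        {static_cast<float>(mapX.cols - 1), 0.f},
        {static_cast<float>(mapX.cols - 1), static_cast<float>(mapX.rows - 1)},
        {0.f, static_cast<float>(mapX.rows - 1)}
    };

    // Step 2: where those points come from in the original image (read the maps)
    std::vector<cv::Point2f> original;
    for (const auto& p : remapped) {
        int x = static_cast<int>(p.x), y = static_cast<int>(p.y);
        original.emplace_back(mapX.at<float>(y, x), mapY.at<float>(y, x));
    }

    // Step 3: homography between the two point sets
    cv::Mat H = cv::findHomography(original, remapped);

    // Step 4: warpPerspective builds the grids internally and calls remap()
    cv::Mat dst;
    cv::warpPerspective(src, dst, H, src.size());
    return dst;
}
```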
You may want to take a look at http://code.google.com/p/imgwarp-opencv/. This library seems to be exactly what you need: image warping based on a sparse grid.
