Given a 3D cuboid and an equirectangular image, I want to map the image onto the inside of the cuboid (only the side faces, not the top or bottom). To do this I want to create a texture for each side face of the cuboid from the equirectangular image.
The image is a panorama of a room, and I have labelled the top and bottom corners in the panorama that correspond to the corners of my cuboid. The camera is not at the centre of the room, and I wish to create a texture for each face.
I tried the solution given here: Convert 2:1 equirectangular panorama to cube map
But that is for a cube, where every face texture is the same size, whereas the faces of my cuboid are not. I have looked at other resources, but I feel I need another explanation in order to fully understand how to do this.
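To show where I am, here is my rough understanding of how the cube-map sampling would generalize to one cuboid face, as a NumPy sketch (the camera position and face corners are assumed known from my labelled points, and all names are just illustrative):

    import numpy as np

    def face_texture(pano, cam, c0, c1, c2, size=(512, 512)):
        # pano: the equirectangular image; cam: camera position;
        # c0, c1, c2: the face's top-left, top-right and bottom-left 3D
        # corners (NumPy 3-vectors, y axis up).
        tw, th = size
        ph, pw = pano.shape[:2]
        tex = np.zeros((th, tw, 3), pano.dtype)
        for ty in range(th):
            for tx in range(tw):
                # The 3D point on the face that this texel represents.
                p = (c0 + (c1 - c0) * (tx + 0.5) / tw
                        + (c2 - c0) * (ty + 0.5) / th)
                d = p - cam
                d = d / np.linalg.norm(d)
                lon = np.arctan2(d[0], d[2])       # in [-pi, pi]
                lat = np.arcsin(d[1])              # in [-pi/2, pi/2]
                # Nearest-neighbour equirectangular lookup.
                px = int((lon / (2 * np.pi) + 0.5) * pw) % pw
                py = min(int((0.5 - lat / np.pi) * ph), ph - 1)
                tex[ty, tx] = pano[py, px]
        return tex

Is something along these lines correct, and how should the texture size be chosen per face?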
Thanks!
I've written a little app using CoreMotion, AVFoundation and SceneKit to make a simple panorama. When you take a picture, it maps that onto an SK rectangle and places it in front of whatever direction CoreMotion reports the camera is facing. This is working fine, but...
I would like the user to be able to click a "done" button and turn the entire scene into a single image. I could then map that onto a sphere for future viewing rather than re-creating the entire set of objects. I don't need to stitch or anything like that, I want the individual images to remain separate rectangles, like photos glued to the inside of a ball.
I know about snapshot and tried using that with a really wide FOV, but that results in a fisheye view that does not map back properly (unless I'm doing it wrong). I assume there is some sort of transform I need to apply? Or perhaps there is an easier way to do this?
The key is "photos glued to the inside of a ball". You have a bunch of rectangles, suspended in space. Turning that into one image suitable for projection onto a sphere is a bit of work. You'll have to project each rectangle onto the sphere, and warp the image accordingly.
If you just want to reconstruct the scene for future viewing in SceneKit, use SCNScene's built-in serialization: write(to:options:delegate:progressHandler:) to save and SCNScene(named:) to reload.
To compute the mapping of images onto a sphere, you'll need some coordinate conversion. For each image, convert the coordinates of the corners into spherical coordinates, with the origin at your point of view. Change the radius of each corner's coordinate to the radius of your sphere, and you now have the projected corners' locations on the sphere.
It's tempting to repeat this process for each pixel in the input rectangular image, but that will leave empty pixels in the spherical output image. So you'll work in reverse: for each pixel in the spherical output image (within the 4 corner points), compute the ray from the POV to that point (trivial in spherical coordinates). Convert that ray back to Cartesian coordinates, compute its intersection with the rectangular image's plane, and sample the input image at that point. You'll also want to do some pixel weighting, since your output and input images will have different pixel densities.
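A minimal NumPy sketch of that reverse mapping, under a few assumptions not spelled out in the answer: the rectangle's 3D corners are known, the output panorama is equirectangular (longitude on x, latitude on y, y axis up), and nearest-neighbour sampling stands in for the pixel weighting:

    import numpy as np

    def project_rect_into_pano(src, corners, pano):
        # Render one rectangle `src` into an equirectangular `pano`.
        # corners: four 3D points, ordered TL, TR, BR, BL, with the
        # point of view at the origin.
        ph, pw = pano.shape[:2]
        sh, sw = src.shape[:2]
        tl, tr, br, bl = (np.asarray(c, float) for c in corners)
        u_axis = tr - tl                   # horizontal edge of the rectangle
        v_axis = bl - tl                   # vertical edge of the rectangle
        normal = np.cross(u_axis, v_axis)
        for y in range(ph):
            lat = (0.5 - (y + 0.5) / ph) * np.pi
            for x in range(pw):
                lon = ((x + 0.5) / pw - 0.5) * 2 * np.pi
                # Ray direction from the POV through this pano pixel.
                d = np.array([np.cos(lat) * np.sin(lon),
                              np.sin(lat),
                              np.cos(lat) * np.cos(lon)])
                denom = d @ normal
                if abs(denom) < 1e-9:
                    continue               # ray parallel to the plane
                t = (tl @ normal) / denom
                if t <= 0:
                    continue               # plane is behind the POV
                p = t * d - tl             # hit point, relative to TL corner
                u = (p @ u_axis) / (u_axis @ u_axis)
                v = (p @ v_axis) / (v_axis @ v_axis)
                if 0 <= u < 1 and 0 <= v < 1:
                    pano[y, x] = src[int(v * sh), int(u * sw)]

This brute-forces every pano pixel for clarity; restricting the loop to the projected corners' region, as suggested above, is the obvious optimization.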
I need to find the size or coordinates of a rectangle that is displayed as a quadrilateral in an image of a 3D scene. The quadrilateral lies on a plane that lines up with the 3D world's vanishing points. To clarify, the quadrilateral IS a rectangle in the 3D world, and that's the rectangle whose size I want.
I do not need to get all the textures and make a new image. I also do not know the coordinates of the target rectangle as required by the homography (perspective transformation) solutions I've seen, because I don't know the aspect ratio it's supposed to have.
I've read through this thread: proportions of a perspective-deformed rectangle, and the author seems to have found an algorithm that works. However, I've read other research papers that claim to calculate a homography yet don't say how they did it. It also seems like such a basic operation that there should be something for it in the existing OpenCV library.
Thanks.
Currently I am trying to read a square card using an OCR engine. But before processing the image, I want to ensure that, while capturing, the user captures only the card and not the surrounding noise. I looked into overlays and was able to create an overlay on the camera screen, but it is not that useful. So now I am looking for some help on how to draw a contour/outline around the square card when the user sees it through the camera, as in this example.
Has anybody done this before?
At first, use Canny (cvCanny) to detect the edges in your image.
Then you can use the standard Hough line transform to detect all the lines in the image.
Then you can calculate the intersections of those lines and find 4 points: the leftmost and the rightmost intersections towards the top and the bottom of the image.
You can ignore the small lines on the left and right borders of the image by raising the Hough threshold parameter.
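A rough sketch of those steps in Python (cv2.Canny and cv2.HoughLines are the modern equivalents of the cvCanny mentioned above; the thresholds here are placeholders to tune for your camera):

    import cv2
    import numpy as np

    def find_card_lines(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        # A higher threshold keeps only long, strong lines, filtering out
        # the short edge fragments near the image borders.
        lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
        return lines  # each lines[i][0] is (rho, theta) in Hesse normal form

    def intersection(l1, l2):
        # Intersection of two (rho, theta) lines, or None if parallel.
        (r1, t1), (r2, t2) = l1, l2
        a = np.array([[np.cos(t1), np.sin(t1)],
                      [np.cos(t2), np.sin(t2)]])
        if abs(np.linalg.det(a)) < 1e-6:
            return None
        x, y = np.linalg.solve(a, np.array([r1, r2]))
        return int(x), int(y)

Intersecting each roughly horizontal line with each roughly vertical one and keeping the four extreme points gives the card's corners to draw the outline through.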
I have a picture of a checkerboard taken from an arbitrary camera angle. I find the two vanishing points corresponding to the two sets of lines that form the checkerboard grid. From these two vanishing points, I compute a homography from the checkerboard plane to the image plane.
I then apply the inverse homography to re-render the checkerboard from a top view. However, for certain images, the re-rendered top view is very large. That is, due to the camera angle, the inverse homography stretches certain parts of the image (i.e. the regions of the image that are very close to one of the vanishing points) to be very large.
This takes up an unnecessarily large amount of memory, and most of the region that becomes highly stretched is stuff I do not need. So, when applying the inverse homography, I would like to avoid rendering regions of the image that will be highly stretched. What is a good way to do this?
(I am coding in MATLAB)
If you just need to render the checkerboard, without the background, you could just extract the four corners of the checkerboard and compute the homography that maps them to the four corners of a square.
Then you can obtain a rectified image of the checkerboard by warping your input image with this homography, taking care to render only the needed region (i.e. the square onto which you map the checkerboard).
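A minimal sketch of this in Python/OpenCV (the question is in MATLAB, where fitgeotrans and imwarp play the same roles; the corner ordering and output size here are assumptions):

    import cv2
    import numpy as np

    def rectify_board(image, corners, side=800):
        # corners: the four detected checkerboard corners, ordered
        # TL, TR, BR, BL to match the target square below.
        src = np.float32(corners)
        dst = np.float32([[0, 0], [side, 0],
                          [side, side], [0, side]])
        H = cv2.getPerspectiveTransform(src, dst)
        # The output size is fixed to the square, so the heavily stretched
        # background near the vanishing points is simply never rendered.
        return cv2.warpPerspective(image, H, (side, side))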
I am trying to create a double chin in the fattened image, as shown in my desired result image below.
I have morphed the normal face into a fat face by warping the image on a mesh and deforming the mesh.
Original image
Image warped on the mesh grid, with vertex points displaced
Current result image
I have tried many arrangements of the mesh points but could not get a result like the one shown in the first image.
Any ideas how to achieve this with OpenGL or OpenCV on iOS?
It's obvious from the first image that there is an added effect to produce the double or triple chin.
This actually looks like either a preset image blended into the original, or a scaled and stretched version of the original chin blended into the warped image.
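A hedged sketch of the second idea in Python/OpenCV (the question targets iOS, but the same calls exist in OpenCV's C++ API; the chin box and stretch factor are made-up placeholders):

    import cv2
    import numpy as np

    def add_double_chin(face, chin_box, stretch=1.4):
        # chin_box: (x, y, w, h) around the chin, with enough room below
        # it in the image for the stretched copy to fit.
        x, y, w, h = chin_box
        chin = face[y:y + h, x:x + w]
        # Stretch the chin patch vertically to fake the extra fold.
        big = cv2.resize(chin, (w, int(h * stretch)))
        mask = 255 * np.ones(big.shape[:2], np.uint8)
        # Paste the stretched patch slightly lower than the original chin,
        # letting Poisson blending hide the seam.
        center = (x + w // 2, y + h // 4 + big.shape[0] // 2)
        return cv2.seamlessClone(big, face, mask, center, cv2.NORMAL_CLONE)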