I have obtained a 3D Wavefront model of the face inside a portrait; that face is then transformed using mesh_numpy and the 2D transformed image is saved (both images are attached). Now I want to warp this transformed image back onto the original image. It's like a face swap where I have both the source and target image. What is the best way to achieve this?
Original:
Transformed:
Update: If anyone faces this issue in the near future, I managed to achieve this by using a homography.
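For anyone who wants a concrete starting point, here is a minimal Python/OpenCV sketch of the homography idea. The file names and the point correspondences below are placeholders; in practice the matched points would come from projected mesh vertices or facial landmarks, and the blending could be replaced by something like cv2.seamlessClone.

```python
import cv2
import numpy as np

# Minimal sketch (assumed file names and point correspondences): estimate a
# homography from matched points between the transformed render and the
# original portrait, warp the render back, and composite it over the original.
original = cv2.imread("original.jpg")        # target portrait (assumed path)
transformed = cv2.imread("transformed.jpg")  # 2D render of the warped mesh (assumed path)

# src_pts / dst_pts are assumed corresponding 2D points (N >= 4), e.g.
# projected mesh vertices or facial landmarks; the values here are dummies.
src_pts = np.float32([[120, 80], [300, 85], [310, 260], [115, 255]]).reshape(-1, 1, 2)
dst_pts = np.float32([[130, 90], [290, 95], [300, 250], [125, 245]]).reshape(-1, 1, 2)

H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
h, w = original.shape[:2]
warped = cv2.warpPerspective(transformed, H, (w, h))

# Simple compositing: copy the warped face wherever it has non-black pixels.
mask = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY) > 0
result = original.copy()
result[mask] = warped[mask]
cv2.imwrite("result.jpg", result)
```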
I'm looking for a guide on how to take a 2D image (JPEG/PNG) and apply it to a 3D object template programmatically.
The specific use case I am trying to replicate is taking a picture and applying it to a 3D picture frame, similar to what Cart Magician does (https://cartmagician.com/), where you can upload an image and it is applied to a picture-frame object template that they provide, which then renders the object with the image so it can be viewed with Google AR.
Could anyone help or point me in the right direction?
Thanks in advance!
This is the AR frame with the image; the image should be interchangeable.
You can create a ViewRenderable surrounded by the 3D model of a painting frame; the 2D image is then placed on the ViewRenderable.
Currently, I am doing object tracking with OpenCV. When OpenCV returns the keypoints of where the objects are, they are outside the range of the iPhone screen. I'm thinking that some sort of conversion needs to be done in order to use these points with Swift.
Does anyone know the conversion that needs to be done?
Any help would be appreciated.
The cvPoint values are image coordinates (if the tracking algorithm is valid). You can mark these points on the image using cvCircle and then display the image on the iPhone screen to check if they are valid.
When an image is displayed on the screen using UIImageView, the size of the displayed image might not be the same as the image resolution. In that case, you will need to scale the coordinates, if you want to position something on the image. See here for an example.
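For illustration, here is a small Python/OpenCV sketch of both suggestions. The file name, keypoints, and view size are made-up values, and the scaling assumes a simple aspect-fit display; the same arithmetic carries over to Swift.

```python
import cv2

# Rough sketch (assumed paths and sizes): verify the tracked points by drawing
# them on the image, then scale image coordinates to the size at which the
# view actually displays the image (simple aspect-fit scaling).
img = cv2.imread("frame.jpg")                 # assumed path
keypoints = [(412, 233), (598, 301)]          # example cvPoint values in image coordinates

for (x, y) in keypoints:
    cv2.circle(img, (x, y), 5, (0, 0, 255), -1)  # mark the tracked points
cv2.imwrite("frame_marked.jpg", img)

# Map an image coordinate to view coordinates for an aspect-fit display.
img_h, img_w = img.shape[:2]
view_w, view_h = 375, 667                     # assumed view size in points
scale = min(view_w / img_w, view_h / img_h)
offset_x = (view_w - img_w * scale) / 2
offset_y = (view_h - img_h * scale) / 2

def to_view(x, y):
    return (x * scale + offset_x, y * scale + offset_y)

print([to_view(x, y) for (x, y) in keypoints])
```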
I am making a program that recognizes horizontally/vertically straight lines in an image file and creates a set of line data for other purposes.
However, I have a problem: when I take pictures diagonally from the side (or from above/below), the picture won't contain horizontally/vertically straight lines, so I cannot use it.
So I have to build an image pre-processing step that inverts the perspective warp. To do so, I must first find the current projection of the image.
Unfortunately, I couldn't find a way to do this with OpenCV, short of adding a camera-matrix pre-calibration step before taking the picture.
I assume that most of the lines in the input images should be horizontally/vertically straight. Are there any methods in OpenCV that could solve my problem?
For example:
This image is perspectively warped. I want to make it look like this:
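One possible approach (a sketch only, not a complete answer; the thresholds and output size are assumptions) is to detect the dominant quadrilateral in the image and warp it back to an axis-aligned rectangle, so that its edges become horizontally/vertically straight:

```python
import cv2
import numpy as np

# Sketch: find the largest 4-point contour and rectify it with a perspective
# transform. Threshold values and the output size are assumptions.
img = cv2.imread("warped.jpg")                     # assumed path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
quad = None
for c in sorted(contours, key=cv2.contourArea, reverse=True):
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:                           # take the largest quadrilateral
        quad = approx.reshape(4, 2).astype(np.float32)
        break

if quad is not None:
    w, h = 800, 600                                # assumed output size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Note: in practice the 4 corners must be ordered consistently (tl, tr, br, bl).
    M = cv2.getPerspectiveTransform(quad, dst)
    rectified = cv2.warpPerspective(img, M, (w, h))
    cv2.imwrite("rectified.jpg", rectified)
```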
How can I convert a 3D image to a 2D image so that I can view it without 3D glasses?
Update:
- I have no solution at the moment, but you can imagine that if you have a 3D image, you could put the glasses on your camera and take another picture (so that would be a physical method to convert from 3D to 2D).
First, pass it through red and blue filters to get two separate images (these images will differ slightly in position).
Then you need to shift one image by some number of pixels (which you should determine; you can detect the edges in both images and find the pixel difference between their first edges).
This method will help you to get a 2D image.
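As a rough illustration of the filtering step in Python/OpenCV (the file name is a placeholder, and which channel corresponds to which eye depends on the type of glasses used for the anaglyph):

```python
import cv2

# Sketch: split the anaglyph into its colour channels, which approximately
# recovers the two eye views as greyscale images.
anaglyph = cv2.imread("anaglyph.jpg")          # assumed path
blue, green, red = cv2.split(anaglyph)         # OpenCV loads images as BGR

cv2.imwrite("eye_view_a.png", red)             # red channel: one eye's view
cv2.imwrite("eye_view_b.png", blue)            # blue/cyan channel: the other eye's view
```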
A 3D image is actually a pair of images: one for the left eye and one for the right. There are many ways to transfer those from the computer to your eyes, including magenta/cyan (your case), polarization ( http://en.wikipedia.org/wiki/Polarized_3D_glasses ), animation ( http://en.wikipedia.org/wiki/File:Home_plate_anim.gif ), etc. The point is that 3D is two different images. There is no real way to merge them into one. If you have the source images (i.e. separate images for the right and left eye), you can just take one of them and that will be your 2D image.
By using relatively straightforward image processing, you can run that source image through a red or blue filter (just like the glasses), to attempt to recover something like the original left or right eye image. You will still end up with two slightly different images, and you may have trouble recovering the original image colours.
Split the image into Red and Green channels. Then use any stereo vision technique to match the red channel and green channel images.
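A rough sketch of that idea with OpenCV's block matcher; the file name is a placeholder and the matcher parameters are guesses that would need tuning:

```python
import cv2

# Sketch: treat two colour channels of the anaglyph as a stereo pair and
# compute a disparity map with OpenCV's block matcher.
anaglyph = cv2.imread("anaglyph.jpg")          # assumed path
blue, green, red = cv2.split(anaglyph)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)  # parameters are guesses
disparity = stereo.compute(red, green)         # red vs. green channel, as suggested above

disparity = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", disparity)
```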
I intend to make a program which will take stereo pair images, taken by a single camera, and then correct and crop them so that when the images are viewed side by side with the parallel or cross-eyed method, the best 3D effect is achieved. The left image will be the reference image, and the right image will be modified to apply the corrections. I believe OpenCV will be the best software for these purposes. So far I believe the processing will occur something like this:
Correct for rotation between images.
Correct for y axis shift.
Doing so will, I imagine, result in irregular black borders above and below the right image, so:
Crop both images to the same height to remove borders.
Compute stereo-correspondence/disparity
Compute optimal disparity
Correct images for optimal disparity
Okay, so that's my take on what needs doing and the order in which it occurs. What I'm asking is: does that seem right? Is there anything I've missed or anything in the wrong order? Also, which specific OpenCV functions would I need to use for all the necessary steps to complete this project? Or is OpenCV not the way to go? Many thanks.
OpenCV is great for this.
There is a whole chapter on this in the book, and all the sample code for it ships with the OpenCV distribution.
edit: Roughly the steps are:
Remap each image to remove lens distortions and rotate/translate views to image center.
Crop pixels that don't appear in both views (optional)
Find matching points in each view (stereo block matching) to create a disparity map.
Reproject the disparity map into a 3D model.
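As a condensed illustration of the first three steps for a pair taken with a single, uncalibrated camera (function choices, file names, and parameters are assumptions, not a complete solution; the final reprojection to 3D additionally needs the Q matrix from a calibrated cv2.stereoRectify):

```python
import cv2
import numpy as np

# Sketch: match features, rectify the pair without calibration data, then
# block-match for a disparity map.
left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # assumed paths
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# 1) Match features between the two views.
orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(left, None)
k2, d2 = orb.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# 2) Estimate the fundamental matrix and rectify (removes rotation / y-shift).
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
pts1, pts2 = pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1]
h, w = left.shape
_, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
left_r = cv2.warpPerspective(left, H1, (w, h))
right_r = cv2.warpPerspective(right, H2, (w, h))

# 3) Compute a disparity map on the rectified pair.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
disparity = stereo.compute(left_r, right_r)

cv2.imwrite("left_rect.png", left_r)
cv2.imwrite("right_rect.png", right_r)
```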