How can I convert a 3D image to a 2D image so that I can view it without using 3D glasses?
Update:
- I have no solution at the moment, but you can imagine that if you have a 3D image, you could hold the glasses in front of your camera and take another picture (so that would be a physical method of converting from 3D to 2D).
First pass it through red and blue filters to get the two separate images (these images will differ slightly in position).
Then you need to shift one image by some number of pixels (which you should determine; you can detect the edges in both images and measure the pixel offset between their first edges).
This method will give you a 2D image.
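A minimal Python/OpenCV sketch of that recipe, assuming a red/cyan anaglyph saved as anaglyph.jpg; the file name and the brute-force shift search are illustrative, not a tuned method:

```python
import cv2
import numpy as np

img = cv2.imread("anaglyph.jpg")           # OpenCV loads channels as BGR
blue, green, red = cv2.split(img)

# For a red/cyan anaglyph the red channel approximates the left-eye view
# and the blue channel the right-eye view (other encodings differ).
left, right = red, blue

# Estimate the horizontal offset between the two views by trying a small
# range of shifts and keeping the one with the lowest mean absolute error.
best_shift, best_err = 0, float("inf")
for s in range(-40, 41):
    shifted = np.roll(right, s, axis=1)
    err = np.mean(np.abs(left.astype(np.int16) - shifted.astype(np.int16)))
    if err < best_err:
        best_shift, best_err = s, err

# Align the right view and average the two into a single 2D grayscale image.
aligned = np.roll(right, best_shift, axis=1)
flat = ((left.astype(np.uint16) + aligned.astype(np.uint16)) // 2).astype(np.uint8)
cv2.imwrite("flat_2d.png", flat)
```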
A 3D image is actually a pair of images: one for the left eye and one for the right. There are many ways to deliver them from the computer to your eyes, including magenta/cyan anaglyphs (your case), polarization ( http://en.wikipedia.org/wiki/Polarized_3D_glasses ), animation ( http://en.wikipedia.org/wiki/File:Home_plate_anim.gif ), etc. The point is that 3D is two different images; there is no real way to merge them into one. If you have the source images (i.e., separate images for the right and left eyes), you can just take one of them and that will be your 2D image.
By using relatively straightforward image processing, you can run that source image through a red or blue filter (just like the glasses) to attempt to recover something like the original left- or right-eye image. You will still end up with two slightly different images, and you may have trouble recovering the original image colours.
Split the image into Red and Green channels. Then use any stereo vision technique to match the red channel and green channel images.
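A minimal sketch of that channel-matching idea, assuming a red/green anaglyph in anaglyph.png; StereoBM expects 8-bit single-channel inputs, which the split channels already are:

```python
import cv2

img = cv2.imread("anaglyph.png")
blue, green, red = cv2.split(img)          # OpenCV loads as BGR

# Treat the red channel as the left view and the green channel as the
# right view, then run block matching to get a disparity map.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(red, green)     # 16-bit fixed-point disparities

disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", disp_vis)
```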
Related
I am trying to create a panorama and I am stuck at the point where I have two separately warped images in two cv::Mat objects and now need to align them and merge them into a single cv::Mat. I also need to average the pixel color values where the images overlap, to do elementary blending. Is there a built-in function in OpenCV that can do this for me? I have been following the Basic Stitching Pipeline. I'm not sure how to align and blend the images. I found a solution that does feature matching between the images, gets the homography, and then just uses the translation vector to align them. Is this what I should be using?
Here are the warped images:
Image 1:
Image 2:
Generating a panorama from a set of images is usually done using homographies. The reason for this is explained very well here.
You can refer to the code given by Eduardo here. It is also based on feature matching though.
You are right: you need to start by finding descriptors for features in the images (a BRIEF descriptor might be a good idea) and then do feature matching. Once you have the correspondences, use them to estimate the homography. The homography lets you warp one of the images with respect to the other. After this, you can simply blend them together (for example, by adding the two images, or by taking the maximum value at each pixel), as in the sketch below.
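A rough Python/OpenCV sketch of that pipeline; ORB stands in for BRIEF here, and the file names, canvas size, and parameters are illustrative:

```python
import cv2
import numpy as np

img1 = cv2.imread("warped1.png")
img2 = cv2.imread("warped2.png")

# 1) Detect features and compute descriptors.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2) Match descriptors and keep the best correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# 3) Estimate the homography with RANSAC and warp img2 into img1's frame.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
h, w = img1.shape[:2]
canvas = cv2.warpPerspective(img2, H, (w * 2, h))

# 4) Naive average blend where the images overlap, img1 elsewhere.
canvas[:, :w] = np.where(canvas[:, :w] == 0, img1,
                         canvas[:, :w] // 2 + img1 // 2)
cv2.imwrite("panorama.png", canvas)
```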
I have three images attached. The vertical red strips have a different redness in each image. There are two strips per image.
They look very similar, but there is a difference, and I would like to detect that redness difference between the images.
Currently I extract the two red vertical strips from each image, convert them to HSV, and look at the H and S channels.
But the redness resolution is not acceptable.
What approach, in image processing or in hardware (e.g. a better camera model), would make it possible to get better redness measurements for the vertical strips in these images?
Image 1
Image 2
Image 3
Those images are captured by a Point Grey camera with 2 MP resolution.
CM3-U3-13S2M-CS
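A small Python/OpenCV sketch of one way to quantify a strip's redness, assuming color captures; the file name and ROI coordinates below are placeholders for wherever the strips actually are:

```python
import cv2
import numpy as np

img = cv2.imread("image1.png")
x, y, w, h = 100, 0, 20, img.shape[0]       # hypothetical strip ROI
strip = img[y:y + h, x:x + w]

hsv = cv2.cvtColor(strip, cv2.COLOR_BGR2HSV)
hue, sat, val = cv2.split(hsv)

# Averaging over the whole strip suppresses per-pixel sensor noise, which
# often buys more effective resolution than a better camera would.
print("mean H:", hue.mean(), "mean S:", sat.mean(), "mean V:", val.mean())

# Alternatively, a simple illumination-normalized redness index in RGB
# space: R / (R + G + B).
b, g, r = cv2.split(strip.astype(np.float32))
redness = (r / (r + g + b + 1e-6)).mean()
print("redness index:", redness)
```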
How do I generate images of the different moon phases?
i.e., I want something like this:
and I'm thinking about overlapping two images to get that kind of result, for example
based on U.S. Navy moon illumination data (any alternatives?)
Moon calendar for 2016:
http://aa.usno.navy.mil/cgi-bin/aa_moonill2.pl?form=2&year=2016&task=00&tz=6&tz_sign=-1
I made the first image with GIMP, so I'm thinking about automating that process with a GIMP script fed with the parsed moon data (illuminated fraction, at first).
I'm looking for a non-gimp dependent alternative.
Thanks for your time.
Get a selection on the moon (likely, Alpha-to-selection)
Save to channel
Remove the selection
Scale the saved channel horizontally (vertically centered)
Recreate the original selection
Subtract the scaled channel
Subtract one half of the image (left or right, depending on the phase)
Bucket-fill the remaining selection
The horizontal scaling factor is likely to be the cosine of the phase-of-moon (POM) day, converted to an angle.
A completely different, non-GIMP-dependent alternative: generate an SVG file directly. In practice you would have a base image on which you would overlay a crescent (which is, as in my other answer, a half-circle on one side and a half-ellipse on the other). Creating a Bézier spline that very closely approximates a circular or elliptical arc is very easy.
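A hedged Python sketch of that SVG idea: the lit area is bounded by half of the moon's limb (a circular arc) and by the terminator (a half-ellipse whose horizontal semi-axis is |cos| of the phase angle, i.e. |2f − 1|·r for illuminated fraction f). The function name, colors, and sweep-flag logic are my assumptions, not a standard recipe:

```python
def moon_phase_svg(f, waxing=True, r=100):
    """f: illuminated fraction in [0, 1], e.g. from the USNO data."""
    c = r + 10                               # centre with a small margin
    w = abs(2 * f - 1) * r                   # terminator semi-axis
    # Limb arc on the lit side, then the terminator ellipse back up.
    # Sweep flags are chosen so gibbous phases bulge past the diameter.
    limb_sweep = 1 if waxing else 0
    term_sweep = limb_sweep if f > 0.5 else 1 - limb_sweep
    path = (f"M {c} {c - r} "
            f"A {r} {r} 0 0 {limb_sweep} {c} {c + r} "
            f"A {w} {r} 0 0 {term_sweep} {c} {c - r} Z")
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{2 * c}" height="{2 * c}">'
            f'<circle cx="{c}" cy="{c}" r="{r}" fill="#222"/>'
            f'<path d="{path}" fill="#f5f0d8"/></svg>')

with open("moon.svg", "w") as fh:
    fh.write(moon_phase_svg(0.35, waxing=True))   # a waxing crescent
```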
Following up on my other question: do you know a good example in OpenCV, with a simple black/white calibration picture and appropriate detection algorithms?
I just want to show some B&W image on a screen, take a picture of that image from afar, and calculate the size of the shown image, in order to calculate the distance to said screen.
Before I reinvent the wheel, I reckon this is so easy that it could be achieved in many different ways in OpenCV, yet I thought I'd ask whether there's a preferred way, possibly with some sample code.
(I got some face-detection code running using haarcascade-xml files already)
PS: I already have the resolution/dpi-part of my screen covered, so I know how big a picture would be in cm on my screen.
EDIT:
I'll make it real simple, I need:
A pattern that is easily recognizable in an image. Right now I'm experimenting with a checkerboard; the people who made ARDefender used this.
An appropriate algorithm to tell me the exact pixel coordinates of pattern 1) in a picture using OpenCV.
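For the checkerboard route specifically, OpenCV already ships a detector for exactly this; a minimal sketch, assuming a board with 9×6 inner corners (adjust to your printed pattern, and the file name is a placeholder):

```python
import cv2

img = cv2.imread("photo_of_screen.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Finds the inner corners of the chessboard pattern, if visible.
found, corners = cv2.findChessboardCorners(gray, (9, 6))
if found:
    # Refine the corner locations to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
    print("corner pixel coordinates:", corners.reshape(-1, 2))
```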
Well, it's hard to say which image is best for recognition: under different illumination, any color can be interpreted as another color. A simple example:
As you can see, both traffic signs have a red border, yet in one of the images the upper sign's border is clearly not red.
So in my opinion you should use an image with many different colors (like a rainbow). You also said it should be easily recognizable from different angles; that's why a circular shape is best.
That's why your image should look like this:
So the idea for detecting such an object is the following (see the sketch after this list):
Perform color segmentation for each color (blue, red, green, etc.); use the HSV color space for this.
Detect circles of each specific color in the image.
The area with the largest number of circles is likely to be your object.
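A rough Python/OpenCV sketch of that segmentation-plus-circle-counting idea for one color; all thresholds and Hough parameters below are illustrative guesses, not tuned values:

```python
import cv2
import numpy as np

img = cv2.imread("scene.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Example: isolate "blue-ish" pixels (the hue range is an assumption).
mask = cv2.inRange(hsv, (100, 80, 80), (130, 255, 255))
mask = cv2.medianBlur(mask, 5)              # clean up speckle noise

# Detect circles in the segmented mask.
circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=20, minRadius=5, maxRadius=100)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)
cv2.imwrite("detected.png", img)
```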
You just have to take pictures of your B&W object from several known distances (1 m, 2 m, 3 m, ...) and then, for each distance, check the size of the object in the corresponding image.
From that data you will be able to fit a simple function giving you the distance from the size in pixels (under a pinhole camera model the distance is roughly proportional to 1/size, so y = a/x + b should do ;) ), translate it into your code, and you're done.
Cheers
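A tiny sketch of that calibration fit in Python; the sample measurements below are made up for illustration:

```python
import numpy as np

# Measured apparent size (pixels) of the object at each known distance.
sizes_px = np.array([420.0, 210.0, 140.0, 105.0])
distances_m = np.array([1.0, 2.0, 3.0, 4.0])

# Fit distance against 1/size, per the pinhole relationship above.
a, b = np.polyfit(1.0 / sizes_px, distances_m, 1)

def estimate_distance(size_px):
    return a / size_px + b

print(estimate_distance(170.0))   # ~2.5 m for this made-up data
```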
I intend to make a program that will take stereo pair images, taken with a single camera, and then correct and crop them so that, when the images are viewed side by side with the parallel or cross-eyed method, the best 3D effect is achieved. The left image will be the reference image; the right image will be modified for corrections. I believe OpenCV will be the best software for this. So far I believe the processing will occur something like this:
Correct for rotation between images.
Correct for y axis shift.
Doing so will, I imagine, result in irregular black borders above and below the right image, so:
Crop both images to the same height to remove borders.
Compute stereo-correspondence/disparity
Compute optimal disparity
Correct images for optimal disparity
Okay, so that's my take on what needs doing and the order in which it should happen. What I'm asking is: does that seem right? Is there anything I've missed, or anything in the wrong order? Also, which specific OpenCV functions would I need for all the steps of this project? Or is OpenCV not the way to go? Many thanks.
OpenCV is great for this.
There is a whole chapter on exactly this in the OpenCV book, and all the sample code for it ships with the OpenCV distribution.
edit: Roughly the steps are:
Remap each image to remove lens distortions and rotate/translate views to image center.
Crop pixels that don't appear in both views (optional)
Find matching points between the views (stereo block matching) and create a disparity map
Reproject the disparity map into a 3D model (see the sketch below)
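A compressed Python sketch of steps 3 and 4, assuming already-rectified inputs (step 1 would normally use cv2.stereoRectify and cv2.remap with real calibration data); the focal length, baseline, and hand-made Q matrix are placeholders:

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Step 3: stereo block matching -> disparity map.
stereo = cv2.StereoBM_create(numDisparities=96, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Step 4: reproject the disparity map into a 3D point cloud. Q normally
# comes from cv2.stereoRectify; this one assumes a focal length of 700 px
# and a 0.1 m baseline, purely for illustration.
f, baseline = 700.0, 0.1
Q = np.float32([[1, 0, 0, -left.shape[1] / 2],
                [0, 1, 0, -left.shape[0] / 2],
                [0, 0, 0, f],
                [0, 0, -1 / baseline, 0]])
points_3d = cv2.reprojectImageTo3D(disparity, Q)
```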