I am trying to create a panorama and I am stuck at the point where I have two separately warped images in two cv::Mats, and now I need to align them and create one single cv::Mat. I also need to average the pixel color values where the images overlap, to do elementary blending. Is there a built-in function in OpenCV that can do this for me? I have been following the Basic Stitching Pipeline, but I'm not sure how to align and blend the images. I found a solution that does feature matching between the images, computes the homography, and then uses the translation vector to align the images. Is this what I should be using?
Here are the warped images:
Image 1:
Image 2:
Generating a panorama from a set of images is usually done using homographies. The reason for this is explained very well here.
You can refer to the code given by Eduardo here. It is also based on feature matching though.
You are right: you need to start by finding descriptors for features in the images (the BRIEF descriptor might be a good choice) and then do feature matching. Once you have the correspondences, you use them to estimate the homography. The homography will let you warp one of the images with respect to the other. After this, you can simply blend them together (by adding the two images, or by taking the maximum value at each pixel between the two).
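For the blending step the question asks about, here is a minimal sketch of the "average where they overlap" idea in Python/NumPy (the question is in C++, but the arithmetic is identical; `blend_average` is a hypothetical helper name, and it assumes black pixels mark the empty regions left after warping):

```python
import numpy as np

def blend_average(img1, img2):
    """Average two same-sized warped images where both have content,
    and keep a single image's pixels where only one contributes.
    Assumes all-zero (black) pixels mark empty regions after warping."""
    img1 = img1.astype(np.float32)
    img2 = img2.astype(np.float32)
    mask1 = img1.sum(axis=-1, keepdims=True) > 0
    mask2 = img2.sum(axis=-1, keepdims=True) > 0
    weight = mask1.astype(np.float32) + mask2.astype(np.float32)
    weight[weight == 0] = 1  # avoid division by zero in empty areas
    out = (img1 * mask1 + img2 * mask2) / weight
    return out.astype(np.uint8)
```

In C++ you can get the same effect with cv::addWeighted on the overlap region, or use OpenCV's stitching module (cv::Stitcher), which handles alignment and blending for you.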
I have three images attached. The vertical red strips have a different redness in each image; there are two strips per image.
They look very close, but there is a difference, and I would like to detect that redness difference from the images.
Currently I extract the two red vertical strips from each image, convert them to HSV, and look at the H channel and the S channel.
But the resolution of the redness measurement is not acceptable.
What approach, whether image processing or hardware (using a better camera model, or anything else), would make it possible to get better redness-detection data for those vertical strips in the images?
Image 1
Image 2
Image 3
Those images were captured by a Point Grey camera with 2-megapixel resolution.
CM3-U3-13S2M-CS
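As one illustrative alternative to reading raw H/S values, a chromaticity ratio averaged over the whole strip ROI can give a more stable redness number, since it normalizes out overall brightness. A minimal Python/NumPy sketch (`redness` is a hypothetical helper name, and this is just one possible metric, not the only option):

```python
import numpy as np

def redness(strip_bgr):
    """Mean red chromaticity r = R / (R + G + B) over a strip ROI.
    Chromaticity is less sensitive to overall brightness changes than
    raw channel values, which helps when comparing strips across images."""
    img = strip_bgr.astype(np.float64)
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    total = b + g + r
    total[total == 0] = 1  # guard against all-black pixels
    return float((r / total).mean())
```

Averaging over many pixels also reduces sensor noise, which may matter as much as the choice of metric here.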
I have a set of images of a scene taken at different angles, plus the camera intrinsic parameters. I was able to generate the 3D points using point correspondences and triangulation. Is there a built-in method, or some other way, to produce a 3D image in MATLAB from the given set of images? By 3D image, I mean a 3D view of the scene based on the colors, depth, etc.
There was a recent MathWorks blog post on 3D surface rendering, which showed methods using built-in functions and using contributed code.
The built-in method uses isosurface and patch. To display multiple surfaces, the trick is to set the 'FaceAlpha' patch property to get transparency.
The post also highlighted the "vol_3d v2" File Exchange submission, which provides the vol3d function to render the surface. It allows explicit definition of voxel colors and alpha values.
Some File Exchange submissions from MathWorks:
3D CT/MRI images interactive sliding viewer, 3D imshow, image3, and viewer3D.
If your image matrix I has dimensions x*y*z, you can try surface as well:
[X,Y] = meshgrid(1:size(I,2), 1:size(I,1));
Z = ones(size(I,1), size(I,2));
sliceInterval = 1;  % spacing between slices along the z-axis
for z = 1:size(I,3)
    k = z * sliceInterval;
    surface('XData',X, 'YData',Y, 'ZData',Z*k, 'CData',I(:,:,z), ...
        'CDataMapping','direct', 'EdgeColor','none', 'FaceColor','texturemap')
end
The Computer Vision System Toolbox for MATLAB includes a function called estimateUncalibratedRectification, which you can use to rectify a pair of stereo images. Check out this example of how to create a 3-D image, which can be viewed with a pair of red-cyan stereo glasses.
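The red-cyan viewing step mentioned above is just channel mixing: the red channel comes from the left image and the green/blue channels from the right. A minimal Python/NumPy sketch of the idea (the MATLAB example linked above does this with toolbox functions; `make_anaglyph` here is a hypothetical helper name, and RGB channel order is assumed):

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine a rectified stereo pair into a red-cyan anaglyph:
    red channel from the left image, green and blue from the right.
    Assumes both images are the same size, in RGB channel order."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out
```

For this to look right, the pair must be rectified first (e.g. with estimateUncalibratedRectification in MATLAB), so that corresponding points differ only by a horizontal shift.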
We are planning to create a surface-damage detection prototype for ceramic tiles, with surface discoloration as the specific damage type, using OpenCV. We would like to know which method we should consider. We are new to developing these types of object recognition/object tracking programs. We've read about methods such as histogram comparison and tracking by hue/saturation value, but we are still confused.
Also, we would like to know whether it is possible to detect the hue/saturation value of an object without the use of track bars.
Any relevant and helpful response will be greatly appreciated.
I think you can do it as a sequence of steps:
1) Find the tile region, using a corner detector, Hough lines, etc.
2) Find SIFT (or other) descriptors and recognize which image should be on this tile (look it up in your database of tile images).
3) Align the images carefully. For example, find the homography between the image found in the database and the tile image from the camera (using the SIFT features).
4) Compute the color distance between every pixel in the tile image from the camera and the tile image from the database.
5) Threshold the differences by some value -> this gives you the problematic regions.
And think about lighting: you have to provide equal lighting conditions for your measurements.
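Steps 4 and 5 above can be sketched in a few lines of Python/NumPy (the threshold value and the function name `damage_mask` are illustrative assumptions; a perceptual color space such as CIELAB would usually give better distances than raw RGB):

```python
import numpy as np

def damage_mask(camera_img, reference_img, threshold=40.0):
    """Per-pixel Euclidean color distance between the aligned camera image
    and the reference tile image; pixels above the threshold are flagged
    as potentially discolored. The threshold needs tuning on real tiles."""
    diff = camera_img.astype(np.float64) - reference_img.astype(np.float64)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist > threshold
```

This only works after step 3: if the alignment is off by a few pixels, edges in the tile pattern will dominate the difference image.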
I'm looking for a way to convert raster images to vector data using OpenCV. I found the function cv::findContours(), which seems a bit primitive (or, more probably, I have not understood it fully):
It seems to work on binary images only (no greyscale and no coloured images), and it does not seem to accept any filtering/error-suppression parameters that would be helpful in noisy images, e.g. to avoid very short vector lines, or to avoid uneven polylines where one single straight line would be the better result.
So my question: is there an OpenCV-based way to vectorise coloured raster images, where the colour information is assigned to the resulting polylines afterwards? And how can I apply noise reduction and error suppression to such an algorithm?
Thanks!
If you want to vectorise a raster image by colour, then I recommend clustering the image into a small group of colours (i.e. quantising it), and after that extracting the contours of each colour and converting them to the needed format. There are no ready-made vectorising methods in OpenCV.
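The quantisation step this answer suggests can be sketched in Python/NumPy as follows (`quantize_to_palette` and the fixed palette are illustrative assumptions; in practice the palette would come from clustering, e.g. k-means). Each resulting single-colour mask can then be passed to cv::findContours, and cv::approxPolyDP can simplify the contours, which addresses the noise/short-segment concern:

```python
import numpy as np

def quantize_to_palette(img, palette):
    """Snap every pixel to its nearest palette colour (Euclidean
    distance in RGB). Thresholding the result per palette colour gives
    the binary masks that contour extraction needs."""
    pixels = img.reshape(-1, 3).astype(np.float64)
    pal = np.asarray(palette, dtype=np.float64)
    # distance from every pixel to every palette colour
    d = ((pixels[:, None, :] - pal[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    return pal[idx].reshape(img.shape).astype(np.uint8)
```

The colour of each palette entry is then naturally attached to the polylines extracted from that entry's mask.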