How to show a stereo camera with the Oculus Rift? - opencv

I use OpenCV to show the left and right images from a stereo camera in a new window. Now I want to see the same thing on the Oculus Rift, but when I connect the Oculus, the image doesn't turn into the characteristic circled image that suits the lenses inside the Oculus...
Do I need to process the image myself? Isn't it automatic?
This is the code that shows the window:
cap >> frame;   // cap = camera 1, cap2 = camera 2
sz1 = frame.size();
// second camera
cap2 >> frame2;
sz2 = frame2.size();
// Side-by-side canvas: camera 2 on the left, camera 1 on the right.
cv::Mat bothFrames(sz2.height, sz2.width + sz1.width, CV_8UC3);
// Move the right boundary to the left so the ROI covers the left half.
bothFrames.adjustROI(0, 0, 0, -sz1.width);
frame2.copyTo(bothFrames);
// Move the left boundary to the right, right boundary to the right (right half).
bothFrames.adjustROI(0, 0, -sz2.width, sz1.width);
frame.copyTo(bothFrames);
// Restore the original ROI.
bothFrames.adjustROI(0, 0, sz2.width, 0);
cv::imencode(".jpg", bothFrames, buf, params);
I have another problem. I'm trying to add the OVR library to my code, but I get an "ambiguous symbol" error because some classes inside the OVR library use the same names as ones already in scope... The error appears when I add:
#include "OVR.h"
using namespace OVR;

The SDK is meant to perform lens distortion correction, chromatic aberration correction (different refractive indices for different colors of light cause color fringing in the image without correction), time warp, and possibly other corrections in the future. Unless you have a heavyweight graphics pipeline that you're hand-optimizing, it's best to use the SDK rendering option.
You can learn about the SDK and different kinds of correction here:
http://static.oculusvr.com/sdk-downloads/documents/Oculus_SDK_Overview.pdf
It also explains how the distortion corrections are applied. The SDK is open source so you could also just read the source for a more thorough understanding.
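As a rough illustration of the kind of per-eye warp involved (a toy radial barrel distortion with made-up coefficients, not the SDK's actual distortion model), you could pre-warp an image with OpenCV's Python bindings like this:
import cv2
import numpy as np

def barrel_distort(img, k1=0.33, k2=0.10):
    # Toy radial pre-distortion; k1/k2 are illustrative values, not the Rift's.
    h, w = img.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float32)
    x = (xs - w / 2) / (w / 2)          # normalize to [-1, 1] around the centre
    y = (ys - h / 2) / (h / 2)
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2  # sample further out as the radius grows
    map_x = x * scale * (w / 2) + w / 2
    map_y = y * scale * (h / 2) + h / 2
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
You would apply something like this to each eye's half of the side-by-side image, but in practice the SDK's own distortion rendering is the way to go, since it also handles chromatic aberration and time warp.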
To fix your namespace issue, just don't switch to the OVR namespace! Every time you refer to something from the OVR namespace, prefix it with OVR:: - e.g., OVR::Math - this is, after all, the whole point of namespaces :p

Related

Stitching top-down photos of long items using OpenCV

I'm working on a project that includes taking top-down photos from three stationary cameras mounted above a long white table. Long packs of veneer are placed on the table to be photographed. The three images then need to be stitched together. The goal is to achieve a smooth transition on the border. I'm trying to use OpenCV to achieve this, but I'm running into problems. The images are either stitched with huge overlap and distortion or, most commonly, just give an "assertion failed" error. I've tried using both "panorama" and "scans" modes to no avail. I'm using the Emgu.CV.NET wrapper, but the logic behind it should be the same.
The reason I'm not joining the images manually is slight chromatic distortion towards the edges of the images, which causes the pack size to differ slightly at the place where the seam should be. Also, the exact positions of the cameras may drift over time, causing further misalignment. I was hoping the existing algorithms could handle that.
Am I using the stitcher wrongly, or is it unfit for this task? Can someone recommend other tools or methods for this problem?
Here are example images that make the stitcher error out.
Code used:
class ImageStitching
{
    public static Mat StichImages(List<Mat> images)
    {
        Mat output = new Mat();
        VectorOfMat matVector = new VectorOfMat();
        foreach (Mat img in images)
        {
            matVector.Push(img);
        }
        Stitcher stitcher = new Stitcher(Emgu.CV.Stitching.Stitcher.Mode.Scans);
        Brisk detector = new Brisk();
        stitcher.SetFeaturesFinder(detector);
        stitcher.Stitch(matVector, output);
        return output;
    }
}
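For reference, the Emgu calls above mirror OpenCV's own stitching API; a minimal sketch of the same pipeline in OpenCV's Python bindings (default feature finder rather than BRISK, and hypothetical file names) looks like this:
import cv2

# Load the three top-down shots (hypothetical file names).
images = [cv2.imread(p) for p in ('left.jpg', 'middle.jpg', 'right.jpg')]

# SCANS mode assumes an affine model, which suits flat, top-down captures.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite('stitched.jpg', pano)
else:
    print('Stitching failed with status', status)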

How to restore web-cam to default settings after ruining them with OpenCV settings?

I need video from my webcam. On Anaconda with Python 3.6 and OpenCV 3 it worked fine. I then tried the same code in IDLE with Python 3.6 and OpenCV 4.1.0 and it did not work the way it had in Anaconda: I had black bars at the top and bottom, and I could only see the middle of the image. I tried to modify some OpenCV settings and it only got worse; now I can barely see anything in the image unless I shine a strong light on it. The black bars did not disappear.
import cv2

capture = cv2.VideoCapture(0)
capture.set(cv2.CAP_PROP_SETTINGS, 0)
while True:
    ret, frame = capture.read()
    cv2.imshow('video', frame)
    if cv2.waitKey(1) == 27:
        break
capture.release()
cv2.destroyAllWindows()
The line capture.set(cv2.CAP_PROP_SETTINGS, 0) opens a small settings dialog, but there are many other properties, like these:
CV_CAP_PROP_POS_MSEC Current position of the video file in milliseconds.
CV_CAP_PROP_POS_FRAMES 0-based index of the frame to be decoded/captured next.
CV_CAP_PROP_POS_AVI_RATIO Relative position of the video file
CV_CAP_PROP_FRAME_WIDTH Width of the frames in the video stream.
CV_CAP_PROP_FRAME_HEIGHT Height of the frames in the video stream.
CV_CAP_PROP_FPS Frame rate.
CV_CAP_PROP_FOURCC 4-character code of codec.
CV_CAP_PROP_FRAME_COUNT Number of frames in the video file.
CV_CAP_PROP_FORMAT Format of the Mat objects returned by retrieve() .
CV_CAP_PROP_MODE Backend-specific value indicating the current capture mode.
CV_CAP_PROP_BRIGHTNESS Brightness of the image (only for cameras).
CV_CAP_PROP_CONTRAST Contrast of the image (only for cameras).
CV_CAP_PROP_SATURATION Saturation of the image (only for cameras).
CV_CAP_PROP_HUE Hue of the image (only for cameras).
CV_CAP_PROP_GAIN Gain of the image (only for cameras).
CV_CAP_PROP_EXPOSURE Exposure (only for cameras).
CV_CAP_PROP_CONVERT_RGB Boolean flags indicating whether images should be converted to RGB.
CV_CAP_PROP_WHITE_BALANCE Currently unsupported
CV_CAP_PROP_RECTIFICATION Rectification flag for stereo cameras (note: only supported by DC1394 v 2.x backend currently)
I tried to install camera drivers from ASUS, but couldn't find any for my model (FX504GE). Is there any combination of these settings, or something else, that will restore my webcam? I really need it...
The simple way is to use v4l2-ctl to read all the parameters when you launch the camera and record their initial values. After you are done in OpenCV, use v4l2-ctl to set them back.
e.g. size:
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=YUYV
There are others, like auto zoom, auto exposure and so on; read them all and set them all.
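If you would rather stay in Python, the same record-then-restore idea works with OpenCV's own property getters and setters, provided you snapshot the values before changing anything. A minimal sketch (the property list is just a sample; unsupported properties simply return 0):
import cv2

PROPS = {
    'brightness': cv2.CAP_PROP_BRIGHTNESS,
    'contrast':   cv2.CAP_PROP_CONTRAST,
    'saturation': cv2.CAP_PROP_SATURATION,
    'gain':       cv2.CAP_PROP_GAIN,
    'exposure':   cv2.CAP_PROP_EXPOSURE,
}

capture = cv2.VideoCapture(0)

# Snapshot the driver's current values before touching anything.
defaults = {name: capture.get(prop) for name, prop in PROPS.items()}
print(defaults)

# ... experiment with capture.set(...) here ...

# Write the recorded values back when you are done.
for name, prop in PROPS.items():
    capture.set(prop, defaults[name])
capture.release()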
You can use guvcview to do this through a GUI. It has a "hardware defaults" button.
sudo apt-get install guvcview
See:
https://askubuntu.com/questions/205391/reset-webcam-settings-to-defaults

Distance Fog XNA 4.0

I've been working on a project that creates a virtual reality experience on a laptop or desktop. I am using XNA 4.0 on Visual Studio 2010. The current scenario looks like this: I have interfaced the movements of a person's head through Kinect, so if the person moves his head to the right relative to the laptop, the scene in the image is rotated towards the left, giving the effect of a virtual tour or a looking-through-a-window experience.
To enhance the visual appeal, I want to add darkness at the back plane, so that the box looks like a tunnel.
The box was made using triangle strips. The BasicEffect used for the planes of the box is called effect.
effect.VertexColorEnabled = true;
effect.EnableDefaultLighting();
effect.FogEnabled = true;
effect.FogStart = 35.0f;
effect.FogEnd = 100.0f;
effect.FogColor = new Vector3(0.0f, 0.0f, 0.0f);
effect.World = world;
effect.View = cam.view;
effect.Projection = cam.projection;
On compiling, I get an error regarding normals.
I have no clue what they mean by that. I have dug through the internet hard enough. (I was first under the impression that I'd put a black omni light at the back side of the box.)
The error is attached below:
'verts' is the VertexPositionColor[][] that is used to build the box.
How do I solve this error ? Is the method/approach correct ?
Any help shall be welcome.
Thanks.
Your vertex has Position and Color channels, but it has no normals... so you have to use a vertex type that provides them.
You can use VertexPositionNormalTexture if you don't need the color, or build a custom struct that provides the normal...
Here is a custom implementation: VertexPositionNormalColor
You need to add a normal (Vector3) to your vertex type.
Also, if you want distance fog you will have to write your own shader, as BasicEffect only implements depth fog (which, while not looking as good, is faster).

EmguCV Shape Detection affected by Image Size

I'm using the Emgu shape detection example application to detect rectangles on a given image. The dimensions of the resized image appear to impact the number of shapes detected even though the aspect ratio remains the same. Here's what I mean:
Using (400,400), actual img size == 342,400
Using (520,520), actual img size == 445,520
Why is this so? And how can the optimal value be determined?
Thanks
I replied to your post on EMGU but figured you hadn't checked back, so here it is. The shape detection works on the principle of thresholding out unlikely matches; this prevents lots of false classifications. This is true for many image processing algorithms. Basically there are no perfect settings, and a designer must select the most appropriate settings to produce the most desirable results, i.e. match the most objects without saying there are more than there actually are.
You will need to adjust each variable individually to see what kind of results you get. Start off with the edge detection.
Image<Gray, Byte> cannyEdges = gray.Canny(cannyThreshold, cannyThresholdLinking);
Have a look at your smaller image and see what the difference is between the rectangles that are detected and the one that isn't. You could be missing an edge or a corner, which is why it's not classified. If you are, adjust cannyThreshold and observe the results; if they're good, keep it :) if bad :( go back to the original value. Once satisfied, adjust cannyThresholdLinking and observe.
You will keep repeating this until you get a preferred image. The advantage here is that you have 3 items to compare; you will continue until the item that's not being recognised matches the other two.
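If you want to automate the change-one-value-and-look loop, a small sweep helps; this sketch uses OpenCV's Python bindings purely for illustration, since the parameters are the same ones the Emgu Canny call takes (the file name and values are made up):
import cv2

gray = cv2.imread('shapes.png', cv2.IMREAD_GRAYSCALE)  # hypothetical test image

# Sweep cannyThreshold while holding cannyThresholdLinking fixed, then swap roles.
for canny_threshold in (100, 120, 140, 160, 180):
    edges = cv2.Canny(gray, canny_threshold, 60)  # 60 = linking threshold held constant
    cv2.imwrite('edges_%d.png' % canny_threshold, edges)
# Inspect each output and keep the threshold that preserves all the rectangle edges.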
If they all look similar, which is likely since it is a black and white image, you'll need to go on to the Hough lines detection.
LineSegment2D[] lines = cannyEdges.HoughLinesBinary(
    1,              // Distance resolution in pixel-related units
    Math.PI / 45.0, // Angle resolution measured in radians
    20,             // threshold
    30,             // min line width
    10              // gap between lines
)[0];               // Get the lines from the first channel
Use the same method of adjusting one value at a time and observing the output, and you will hopefully find the settings you need. Never jump in with both feet and change all the values, as you will never know whether you're improving the accuracy or not. Finally, if all else fails, look at the section that inspects the Hough results for a rectangle:
if (angle < 80 || angle > 100)
{
    isRectangle = false;
    break;
}
There are fewer variables to change here, as Hough should do most of the work for you, but the fix could still turn out to be in this section.
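The same one-knob-at-a-time sweep works for the Hough stage; in OpenCV's Python bindings the rough equivalent of HoughLinesBinary is HoughLinesP, with the same parameters as above (again only an illustration, with made-up values):
import cv2
import numpy as np

gray = cv2.imread('shapes.png', cv2.IMREAD_GRAYSCALE)  # hypothetical test image
edges = cv2.Canny(gray, 120, 60)                       # thresholds found in the earlier sweep

# Vary only the accumulator threshold; the other values mirror the call above.
for threshold in (10, 20, 30, 40):
    lines = cv2.HoughLinesP(edges, 1, np.pi / 45.0, threshold,
                            minLineLength=30, maxLineGap=10)
    print(threshold, 0 if lines is None else len(lines))  # number of surviving segments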
I'm sorry that there is no straightforward answer, but I hope you keep at it and solve the problem. Otherwise, you could always resize the image each time.
Cheers
Chris

What processing steps should I use to clean photos of line drawings?

My usual method of 100% contrast and some brightness adjustment to tweak the cutoff point usually works reasonably well to clean up photos of small sub-circuits or equations for posting on E&R.SE; however, sometimes it's not quite that great, like with this image:
What other methods besides contrast (or instead of) can I use to give me a more consistent output?
I'm expecting a fairly general answer, but I'll probably implement it in a script (that I can just dump files into) using ImageMagick and/or PIL (Python) so if you have anything specific to them it would be welcome.
Ideally a better source image would be nice, but I occasionally use this on other folks' images to add some polish.
The first step is to equalize the illumination differences in the image while taking into account the white balance issues. The theory here is that the brightest part of the image within a limited area represents white. By blurring the image beforehand we eliminate the influence of noise in the image.
from PIL import Image
from PIL import ImageFilter
im = Image.open(r'c:\temp\temp.png')
white = im.filter(ImageFilter.BLUR).filter(ImageFilter.MaxFilter(15))
The next step is to create a grey-scale image from the RGB input. By scaling to the white point we correct for white balance issues. By taking the max of R,G,B we de-emphasize any color that isn't a pure grey such as the blue lines of the grid. The first line of code presented here is a dummy, to create an image of the correct size and format.
grey = im.convert('L')
width, height = im.size
impix = im.load()
whitepix = white.load()
greypix = grey.load()
for y in range(height):
    for x in range(width):
        # Integer division keeps the pixel value an int, as Pillow expects.
        greypix[x, y] = min(255, max(255 * impix[x, y][0] // whitepix[x, y][0],
                                     255 * impix[x, y][1] // whitepix[x, y][1],
                                     255 * impix[x, y][2] // whitepix[x, y][2]))
The result of these operations is an image that has mostly consistent values and can be converted to black and white via a simple threshold.
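For that last step, PIL's point() is enough; the cutoff of 200 below is only an illustrative guess and would need tuning per image:
# Fixed-threshold binarization of the normalized grey image (200 is arbitrary).
bw = grey.point(lambda p: 255 if p > 200 else 0).convert('1')
bw.save(r'c:\temp\cleaned.png')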
Edit: It's nice to see a little competition. nikie has proposed a very similar approach, using subtraction instead of scaling to remove the variations in the white level. My method increases the contrast in the regions with poor lighting, and nikie's method does not - which method you prefer will depend on whether there is information in the poorly lighted areas which you wish to retain.
My attempt to recreate this approach resulted in this:
for y in range(height):
    for x in range(width):
        greypix[x, y] = min(255, max(255 + impix[x, y][0] - whitepix[x, y][0],
                                     255 + impix[x, y][1] - whitepix[x, y][1],
                                     255 + impix[x, y][2] - whitepix[x, y][2]))
I'm working on a combination of techniques to deliver an even better result, but it's not quite ready yet.
One common way to remove the different background illumination is to calculate a "white image" from the image, by opening the image.
In this sample Octave code, I've used the blue channel of the image, because the lines in the background are least prominent in this channel (EDITED: using a circular structuring element produces fewer visual artifacts than a simple box):
src = imread('lines.png');
blue = src(:,:,3);
mask = fspecial("disk",10);
opened = imerode(imdilate(blue,mask),mask);
Result:
Then subtract this from the source image:
background_subtracted = opened-blue;
(contrast enhanced version)
Finally, I'd just binarize the image with a fixed threshold:
binary = background_subtracted < 35;
How about detecting edges? That should pick up the line drawings.
Here's the result of Sobel edge detection on your image:
If you then threshold the image (using either an empirically determined threshold or Otsu's method), you can clean up the result with morphological operations (e.g. dilation and erosion). That will help you get rid of broken/double lines.
As Lambert pointed out, you can pre-process the image using the blue channel to get rid of the grid lines if you don't want them in your result.
You will also get better results if you light the page evenly before you photograph it (or just use a scanner), because then you don't have to worry about global vs. local thresholding as much.
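A rough sketch of that pipeline in Python with OpenCV might look like the following; the kernel size is arbitrary and the threshold choice is delegated to Otsu:
import cv2
import numpy as np

img = cv2.imread('drawing.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Sobel gradients in x and y, combined into a single edge-magnitude image.
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
magnitude = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# Otsu picks the threshold automatically from the histogram.
_, binary = cv2.threshold(magnitude, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological closing joins broken lines; the 3x3 kernel is a guess.
kernel = np.ones((3, 3), np.uint8)
cleaned = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
cv2.imwrite('edges_cleaned.png', cleaned)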
