Locate a circle more accurately - OpenCV

I'm a newbie in machine vision.
Currently, I'm detecting a circle (a hole) from a camera feed.
The green border shows the detected circle.
The problem is that the detected circle position deviates by ±2 pixels.
For example, when I print the circle center position
( 750, 650 ) ->
( 748, 652 ) ->
( 750, 650 ) ->
( 752, 648 ) ->
( 750, 650 )
and so on.
Someone said this could be solved with a subpixel algorithm, but I'm not sure.
Is it possible to fix this while keeping the illumination in the vision setup as it is?
I just used the OpenCV-Python library's HoughCircles.
circles = cv2.HoughCircles(gray_frame,
                           cv2.HOUGH_GRADIENT,
                           1,
                           300,
                           param1=74,
                           param2=16,
                           minRadius=310,
                           maxRadius=340)
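One simple workaround (just a sketch of temporal averaging, not a true subpixel method) would be to average the returned center over the last few frames:

import collections
import numpy as np

# Rough sketch: keep the last few detected centers and average them to damp
# the +-2 px frame-to-frame jitter.
history = collections.deque(maxlen=10)

def smoothed_center(circles):
    # circles: array returned by cv2.HoughCircles (or None if nothing was found)
    if circles is None:
        return None
    x, y, r = circles[0][0]                  # strongest detected circle
    history.append((float(x), float(y)))
    return tuple(np.mean(history, axis=0))   # sub-pixel averaged (x, y)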

Related

OpenCV: find the coordinates of a ROI image

I have this image and need the coordinates of the starting point and ending point of the head (down to the neck).
I use the code below to crop the image but get the following error:
import cv2
img = cv2.imread("/Users/pr/images/dog.jpg")
print(img.shape)
crop_img = img[400:500, 500:400] # Crop from x, y, w, h -> 100, 200, 300, 400
# NOTE: it's img[y: y + h, x: x + w] and *not* img[x: x + w, y: y + h]
cv2.imshow("cropped", crop_img)
cv2.waitKey(0)
Error:
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /Users/travis/build/skvark/opencv-python/opencv/modules/highgui/src/window.cpp, line 325
Question:
How can I find the coordinates of the region-of-interest items?
If you want to pick the rectangle x = 100, y = 200, w = 300, h = 400, you should use:
crop_img = img[200:600, 100:400]
and if you want to cut out the dog's head you need:
crop_img = img[0:230, 250:550]
If you are trying to find the pixel coordinates of the image that have to be used in img[], you can simply use MS Paint to find the pixel location. For example,
for img[y1:y2, x1:x2], to find the values of x1, x2, y1 and y2 you can open the image in MS Paint and place your cursor on the location where you need the coordinates. Paint will display the coordinates of that pixel at the bottom left corner of the MS Paint window. Consider this location as (x, y).
Screenshot of using MSpaint for getting pixel location.
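If you prefer to stay inside OpenCV instead of Paint, a small sketch using cv2.setMouseCallback prints the coordinates of any pixel you click (the file path is just the example from the question):

import cv2

img = cv2.imread("/Users/pr/images/dog.jpg")   # example path from the question

def on_click(event, x, y, flags, param):
    # Print the pixel coordinates, ready to plug into img[y1:y2, x1:x2].
    if event == cv2.EVENT_LBUTTONDOWN:
        print("x =", x, " y =", y)

cv2.namedWindow("image")
cv2.setMouseCallback("image", on_click)
cv2.imshow("image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()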

Field of view of a GoPro camera

I have calibrated my GoPro Hero 4 Black using the Camera Calibration Toolbox for MATLAB and calculated its fields of view and focal length using OpenCV's calibrationMatrixValues(). These, however, differ from GoPro's specifications. Instead of the 118.2/69.5 FOVs I get 95.4/63.4, and a focal length of 2.8mm instead of 17.2mm. Obviously something is wrong.
I suppose the calibration itself is correct since image undistortion seems to be working well.
Can anyone please give me a hint where I made a mistake? I am posting my code below.
Thanks.
Code
cameraMatrix = new Mat(3, 3, 6);
for (int i = 0; i < cameraMatrix.height(); i++)
    for (int j = 0; j < cameraMatrix.width(); j++) {
        cameraMatrix.put(i, j, 0);
    }

cameraMatrix.put(0, 0, 582.18394);
cameraMatrix.put(0, 2, 663.50655);
cameraMatrix.put(1, 1, 582.52915);
cameraMatrix.put(1, 2, 378.74541);
cameraMatrix.put(2, 2, 1.);

org.opencv.core.Size size = new org.opencv.core.Size(1280, 720);

// output parameters
double[] fovx = new double[1];
double[] fovy = new double[1];
double[] focLen = new double[1];
double[] aspectRatio = new double[1];
Point ppov = new Point(0, 0);

org.opencv.calib3d.Calib3d.calibrationMatrixValues(cameraMatrix, size,
        6.17, 4.55, fovx, fovy, focLen, ppov, aspectRatio);
System.out.println("FoVx: " + fovx[0]);
System.out.println("FoVy: " + fovy[0]);
System.out.println("Focal length: " + focLen[0]);
System.out.println("Principal point of view; x: " + ppov.x + ", y: " + ppov.y);
System.out.println("Aspect ratio: " + aspectRatio[0]);
Results
FoVx: 95.41677635378488
FoVy: 63.43170132212425
Focal length: 2.8063085232812504
Principal point of view; x: 3.198308916796875, y: 2.3934605770833333
Aspect ratio: 1.0005929569269807
GoPro specifications
https://gopro.com/help/articles/Question_Answer/HERO4-Field-of-View-FOV-Information
Edit
Matlab calibration results
Focal Length: fc = [ 582.18394 582.52915 ] ± [ 0.77471 0.78080 ]
Principal point: cc = [ 663.50655 378.74541 ] ± [ 1.40781 1.13965 ]
Skew: alpha_c = [ -0.00028 ] ± [ 0.00056 ] => angle of pixel axes = 90.01599 ± 0.03208 degrees
Distortion: kc = [ -0.25722 0.09022 -0.00060 0.00009 -0.01662 ] ± [ 0.00228 0.00276 0.00020 0.00018 0.00098 ]
Pixel error: err = [ 0.30001 0.28188 ]
One of the images used for calibration
And the undistorted image
You have entered 6.17mm and 4.55mm for the sensor size in OpenCV, which corresponds to an aspect ratio of 1.36, whereas your resolution (1280x720) is 1.78 (approximately 16:9 format).
Did you crop your image before MATLAB calibration?
The pixel size seems to be 1.55µm from this GoPro page (which is, by the way, astonishingly small!). If pixels are square, and they should be on this type of commercial camera, that means your inputs are not consistent. The computed sensor size should be:
[Sensor width, Sensor height] = [1280, 720] * 1.55 * 10^-3 = [1.98, 1.12] mm
Even if we consider the maximum video resolution, which is 3840 x 2160, we obtain [5.95, 3.35] mm, still different from your input.
Please see this explanation about equivalent focal length to understand why the actual focal length of the camera is not 17.2mm but 17.2 * 5.95 / 36 ≈ 2.8mm. In that case, compute the FOV using the formulas here, for instance. You will indeed find values of 93.5°/61.7° (close to your outputs, but still not what is written in the specifications, probably because of optical distortion due to the wide angle).
What I do not understand, though, is how the returned focal distance can be right when the sensor size entered is wrong. Could you give more info and/or send an image?
Edits after question updates
On these cameras, with a working resolution of 1280x720, the image is downsampled but not cropped, so what I said above about sensor dimensions does not apply. The sensor size to consider is indeed the one used (6.17 x 4.55), as explained in your first comment.
The FOV is constrained by the calibration matrix inputs (fx, fy, cx, cy), given in pixels, and by the resolution. You can check it by typing:
2 * DEGREES(ATAN(1280 / (2 * 582.18394))) = 95.416776...°
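The same check in Python (a minimal sketch, using the fx value and image width from your calibration):

import math

# Horizontal FOV implied by the pinhole model: 2 * atan(width / (2 * fx))
fx = 582.18394            # from the MATLAB calibration
width = 1280              # working resolution
fov_x = 2 * math.degrees(math.atan(width / (2 * fx)))
print(fov_x)              # ~95.417 degrees, matching calibrationMatrixValues()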
This FOV value is smaller than expected, but by the look of the undistorted image, your MATLAB distortion model is right and the calibration is correct. The barrel distortion due to the wide angle seems well corrected by the rewarp you applied.
However, the MATLAB toolbox uses a pinhole model, which is linear and cannot account for lens distortion. I assume this from this page:
https://fr.mathworks.com/help/vision/ug/camera-calibration.html
Hence, my best guess is that unless you find a model which fits the GoPro camera more accurately (maybe a wide-angle lens model), the MATLAB calibration will return an intrinsic camera matrix corresponding to the "linear" undistorted image, and the FOV will indeed be smaller (in the case of barrel distortion). You will have to apply the distortion coefficients associated with the calibration to retrieve the actual FOV value.
We can see in the corrected image that the side parts of the FOV get pushed out of bounds. If you had warped the image entirely, you would find that some undistorted pixel coordinates exceed [-1280/2; +1280/2] (horizontally, and likewise vertically). Then, replacing opencv.core.Size(1280, 720) with the most extreme ranges obtained, you would hopefully retrieve the GoPro website values.
In conclusion, I think you can rely on the focal distance value that you obtained as long as you make measurements near the center of your image; otherwise there is too much distortion and it doesn't apply.

Image Processing of Function Graph

I would like to detect these points in this graph, and I also want to detect the lines.
I searched for edge detection and corner detection (such as the Harris corner detector), but I don't know how to handle such a graph. I only need a pseudo-algorithm, or the steps for working through such a problem.
Detect the vertices - segment by color (r >> max(g, b)) and then apply a median or minimum filter of the appropriate size, or simply binary-erode a few times. Then just label the remaining connected blobs.
Detect the lines - use a simplified Hough transform. Basically, draw a virtual line from the center of each vertex to all the others and count the red pixels along the line. If there are plenty of them, the line exists; otherwise the two vertices are not connected.
Something like this:
import numpy as np
from imageio.v2 import imread   # scipy.misc.imread was removed; imageio reads the same RGB array
from scipy.ndimage import binary_erosion, binary_dilation, label, center_of_mass
from skimage.draw import line

img = imread("laGK6.jpg")
r = img[:, :, 0]
g = img[:, :, 1]
b = img[:, :, 2]

# Keep only strongly red pixels (the vertices and the lines of the graph).
mask = (r.astype(np.float64) - np.maximum(g, b)) > 20

# Erode a few times so the thin lines disappear and only the square vertices
# remain, then dilate once to restore some of their size.
mask2 = mask
for _ in range(4):
    mask2 = binary_erosion(mask2)
mask2 = binary_dilation(mask2)

# Label the remaining blobs; their centers of mass are the vertices.
labels, numfeatures = label(mask2)
mc = center_of_mass(mask2, labels, range(1, numfeatures + 1))

mask3 = np.zeros_like(mask2)
for p in mc:
    mask3[int(p[0]), int(p[1])] = 255

# For every pair of vertices, draw a virtual line between them and count the
# fraction of red pixels along it; a mostly red line means they are connected.
connections = []
for i in range(numfeatures):
    for j in range(i + 1, numfeatures):
        rr, cc = line(int(mc[i][0]), int(mc[i][1]), int(mc[j][0]), int(mc[j][1]))
        mask3[rr, cc] = 255
        ms = float(np.sum(mask[rr, cc])) / len(rr)
        if ms > 0.9:
            connections.append((i, j))

print("vertices: ", mc)
print("connections: ", connections)
This outputs the following:
vertices: [(76.551724137931032, 288.72413793103448),
 (76.568181818181813, 613.61363636363637),
 (138.72727272727272, 126.04545454545455),
 (139.33333333333334, 450.33333333333331),
 (265.18181818181819, 207.5151515151515),
 (264.96666666666664, 369.53333333333336),
 (265.41379310344826, 694.51724137931035),
 (265.51724137931035, 45.379310344827587),
 (327.57692307692309, 532.42307692307691)]
connections: [(0, 4), (0, 5), (1, 6), (1, 8), (2, 4), (2, 7), (3, 5), (3, 8)]
I am also working on a project that detects shapes in a drawing. I am not sure if it will solve your problem as well, but here is what I have done for such problems.
I am assuming that you need the X and Y coordinate values of those edge points.
First, you need the X and Y values of the complete shape.
Next, inside a loop, put an if condition saying "keep this point if Y[i] < Y[i+1] and Y[i] < Y[i-1]", i.e. a point whose next and previous points both have a Y value greater than the current Y.
This condition will give you the X and Y values of the edge points, as in the sketch below.
Good luck.
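A minimal sketch of that rule, assuming xs and ys already hold the ordered X and Y coordinates of the traced shape (the sample values here are made up):

# Keep the points whose neighbours both have a larger Y (local minima in Y).
xs = [0, 1, 2, 3, 4, 5, 6]
ys = [5, 3, 1, 2, 4, 2, 6]

corner_points = [
    (xs[i], ys[i])
    for i in range(1, len(ys) - 1)
    if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]
]
print(corner_points)   # -> [(2, 1), (5, 2)]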
If the graph is always the same color, and the vertices are always marked with squares, you can threshold the image by its color to detect the lines and vertices. Then look for connected sets of pixels whose width and height match the square size, which you can simply measure.
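A rough sketch of that idea, assuming the graph is red on a light background and the vertex squares are roughly 20 px wide (the file name and size thresholds are made up); eroding first removes the thin lines so each square becomes its own connected component:

import cv2
import numpy as np

img = cv2.imread("graph.jpg")                       # hypothetical file name
b, g, r = cv2.split(img)
mask = ((r.astype(np.int16) - np.maximum(g, b)) > 20).astype(np.uint8) * 255

# Erode so the thin connecting lines disappear and only the square cores remain.
mask = cv2.erode(mask, np.ones((5, 5), np.uint8), iterations=2)

num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
vertices = []
for i in range(1, num):                             # label 0 is the background
    w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
    if 5 <= w <= 25 and 5 <= h <= 25:               # size filter for the eroded squares
        vertices.append(tuple(centroids[i]))
print(vertices)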

Calculate distance (disparity) OpenCV

-- Update 2 --
The following article is really useful (although it is using Python instead of C++) if you are using a single camera to calculate the distance: Find distance from camera to object/marker using Python and OpenCV
Best link is Stereo Webcam Depth Detection. The implementation of this open source project is really clear.
Below is the original question.
For my project I am using two cameras (stereo vision) to track objects and to calculate the distance. I calibrated them with the OpenCV sample code and generated a disparity map.
I already implemented a method to track objects based on color (this generates a threshold image).
My question: How can I calculate the distance to the tracked colored objects using the disparity map/ matrix?
Below you can find a code snippet that gets the x, y and z coordinates of each pixel. The question: is Point.z in cm, pixels, or mm?
Can I get the distance to the tracked object with this code?
Thank you in advance!
cvReprojectImageTo3D(disparity, Image3D, _Q);
vector<CvPoint3D32f> PointArray;
CvPoint3D32f Point;

for (int y = 0; y < Image3D->rows; y++) {
    float *data = (float *)(Image3D->data.ptr + y * Image3D->step);
    for (int x = 0; x < Image3D->cols * 3; x = x + 3)
    {
        Point.x = data[x];
        Point.y = data[x + 1];
        Point.z = data[x + 2];
        PointArray.push_back(Point);

        // Depth > 10
        if (Point.z > 10)
        {
            printf("%f %f %f", Point.x, Point.y, Point.z);
        }
    }
}
cvReleaseMat(&Image3D);
--Update 1--
For example, I generated this thresholded image (of the left camera). I have almost the same for the right camera.
Besides the above threshold image, the application generates a disparity map. How can I get the Z-coordinates of the pixels of the hand in the disparity map?
I actually want to get all the Z-coordinates of the pixels of the hand to calculate the average Z-value (distance) (using the disparity map).
See these links: OpenCV: How-to calculate distance between camera and object using image?, Finding distance from camera to object of known size, http://answers.opencv.org/question/5188/measure-distance-from-detected-object-using-opencv/
If they don't solve your problem, write more details - why it isn't working, etc.
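For the averaging step you describe in Update 1, a rough sketch in Python syntax (disparity, Q and the hand mask are assumed to come from your existing pipeline):

import cv2
import numpy as np

# disparity: float32 disparity map, Q: 4x4 reprojection matrix from stereoRectify
# hand_mask: uint8 image, non-zero where the hand was segmented by colour
points_3d = cv2.reprojectImageTo3D(disparity, Q)
z = points_3d[:, :, 2]
valid = (hand_mask > 0) & np.isfinite(z)     # ignore invalid / infinite depths
mean_z = float(np.mean(z[valid]))            # same units as used during calibration (via Q)
print("average hand depth:", mean_z)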
The math for converting disparity (in pixels or image width percentage) to actual distance is pretty well documented (and not very difficult) but I'll document it here as well.
Below is an example given a disparity image (in pixels) and an input image 2K wide (2048 pixels across):
Convergence Distance is determined by the rotation between camera lenses. In this example it will be 5 meters. Convergence distance of 5 (meters) means that the disparity of objects 5 meters away is 0.
CD = 5 (meters)
Inverse of convergence distance is: 1 / CD
IZ = 1/5 = 0.2M
Size of camera's sensor in meters
SS = 0.035 (meters) //35mm camera sensor
The width of a pixel on the sensor in meters
PW = SS/image resolution = 0.035 / 2048(image width) = 0.00001708984
The focal length of your cameras in meters
FL = 0.07 //70mm lens
InterAxial distance: The distance from the center of left lens to the center of right lens
IA = 0.0025 //2.5mm
The combination of the physical parameters of your camera rig
A = FL * IA / PW
Camera Adjusted disparity: (For left view only, right view would use positive [disparity value])
AD = 2 * (-[disparity value] / A)
From here you can compute actual distance using the following equation:
realDistance = 1 / (IZ - AD)
This equation only works for "toe-in" camera systems, parallel camera rigs will use a slightly different equation to avoid infinity values, but I'll leave it at this for now. If you need the parallel stuff just let me know.
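Putting the steps above together (the disparity value passed in is a hypothetical input):

# Toe-in rig example using the numbers above.
CD = 5.0                  # convergence distance (meters)
IZ = 1.0 / CD             # inverse of the convergence distance
SS = 0.035                # sensor size (meters), 35mm sensor
image_width = 2048        # 2K image
PW = SS / image_width     # width of one pixel on the sensor (meters)
FL = 0.07                 # focal length (meters), 70mm lens
IA = 0.0025               # interaxial distance (meters), 2.5mm

A = FL * IA / PW          # combined physical parameters of the rig

def real_distance(disparity_px):
    # Distance (meters) from a left-view disparity given in pixels.
    AD = 2 * (-disparity_px / A)     # camera-adjusted disparity (left view)
    return 1.0 / (IZ - AD)

print(real_distance(0))   # objects at the convergence distance -> 5.0 m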
if len(puntos) == 2:
    x1, y1, w1, h1 = puntos[0]
    x2, y2, w2, h2 = puntos[1]
    if x1 < x2:
        distancia_pixeles = abs(x2 - (x1 + w1))
        distancia_cm = (distancia_pixeles * 29.7) / 720
        cv2.putText(imagen_A4, "{:.2f} cm".format(distancia_cm),
                    (x1 + w1 + distancia_pixeles // 2, y1 - 30), 2, 0.8, (0, 0, 255), 1,
                    cv2.LINE_AA)
        cv2.line(imagen_A4, (x1 + w1, y1 - 20), (x2, y1 - 20), (0, 0, 255), 2)
        cv2.line(imagen_A4, (x1 + w1, y1 - 30), (x1 + w1, y1 - 10), (0, 0, 255), 2)
        cv2.line(imagen_A4, (x2, y1 - 30), (x2, y1 - 10), (0, 0, 255), 2)
    else:
        distancia_pixeles = abs(x1 - (x2 + w2))
        distancia_cm = (distancia_pixeles * 29.7) / 720
        cv2.putText(imagen_A4, "{:.2f} cm".format(distancia_cm),
                    (x2 + w2 + distancia_pixeles // 2, y2 - 30), 2, 0.8, (0, 0, 255), 1,
                    cv2.LINE_AA)
        cv2.line(imagen_A4, (x2 + w2, y2 - 20), (x1, y2 - 20), (0, 0, 255), 2)
        cv2.line(imagen_A4, (x2 + w2, y2 - 30), (x2 + w2, y2 - 10), (0, 0, 255), 2)
        cv2.line(imagen_A4, (x1, y2 - 30), (x1, y2 - 10), (0, 0, 255), 2)

cv2.imshow('imagen_A4', imagen_A4)
cv2.imshow('frame', frame)

k = cv2.waitKey(1) & 0xFF
if k == 27:
    break

cap.release()
cv2.destroyAllWindows()
I think this is a good way to measure the distance between two objects.

Filter for red hue - emgucv/opencv

How do I filter an image for red hue? I understand that red lies around zero, between 330° and 30° (represented by 165 to 15 in OpenCV?). How can I use that range with the InRange method, given that there is an overflow at 360° (180 in OpenCV)?
I'm detecting the hue colour using the following code:
Mat img_hsv, dst ;
cap >> image;
cvtColor(image, img_hsv, CV_RGB2HSV);
inRange(img_hsv, Scalar(110, 130, 100), Scalar(140, 255, 255), dst );
where dst is Mat of the same size as img_hsv and CV_8U type.
And your scalars determine the filtered colour. In my case it's:
HUE from 110 to 140
SAT from 130 to 255
VAL from 100 to 255
more info here:
OpenCV 2.4 InRange()
I'm not sure about using a hue range that wraps around 180, but I think you can calculate the two ranges separately and then add the resulting Mats.
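For example, in OpenCV-Python syntax (a sketch; the thresholds are just illustrative and the same idea carries over to C++/Emgu):

import cv2

img = cv2.imread("frame.png")                   # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around the ends of the hue axis, so threshold both ends and OR them.
lower_red = cv2.inRange(hsv, (0, 130, 100), (15, 255, 255))     # hue 0..15
upper_red = cv2.inRange(hsv, (165, 130, 100), (180, 255, 255))  # hue 165..180
red_mask = cv2.bitwise_or(lower_red, upper_red)

cv2.imshow("red mask", red_mask)
cv2.waitKey(0)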
