How to pick a line closest to the center on a grid? - ios

I am drawing a simple grid, and I want the line closest to the center of the screen to be highlighted a different color.
What is the formula to determine which drawn line most closely matches the center of the screen?
It doesn't have to be the exact center, just one that appears to be in the middle of the screen. But it must be a line that was drawn. The user can change the size of the grid at any time, so this line must move with it.
I am drawing a new line on the screen using a different stroke color, but I can't determine which line to overlap. I can get close but I am always off by a few pixels.
Take a look at this picture (mocked up in Photoshop). The green line represents the true center of the image, while the pink line is the desired result (the center line), because the grid doesn't divide evenly into the screen size (look at the last cell on the right). The grid is 34x34 and the screen size is 320 x 480.
How to draw the grid:
int xStart = 0, yStart = 0;
int gsX = 19; // Distance between lines
int gsY = 25;
// draw vertical lines
for(int xId = 0; xId <= (screenWidth/gsX); xId++) {
    int x = xStart + xId * gsX;
    [gPath moveToPoint:CGPointMake(x, yStart)];
    [gPath addLineToPoint:CGPointMake(x, yStart+screenHeight)];
}
// draw horizontal lines
for(int yId = 0; yId <= (screenHeight/gsY); yId++) {
    int y = yStart + yId * gsY;
    [gPath moveToPoint:CGPointMake(xStart, y)];
    [gPath addLineToPoint:CGPointMake(xStart+screenWidth, y)];
}
My centerline code:
This moves the line based upon the grid spacing value, but it isn't drawn over one of the lines near the center.
int x = (screenWidth/gsX) / 2;
NSLog(@"New X: %i gsX: %i", x, gsX);
// Vertical
[centerLines moveToPoint:CGPointMake(x, 0)];
[centerLines addLineToPoint:CGPointMake(x, screenHeight)];

Actually, everyone is right. I ran into something similar not too long ago. I couldn't explain it, but it felt like the order of operations wasn't being followed correctly, so I broke your equation down so that you can follow the order of the operations. Anyway, your solution is as follows.
int centerX = (screenWidth/gsX);
int tempA = (centerX / 2);
int tempB = tempA * gsX;
NSLog(@"screenWidth / gsX = %i", centerX);
NSLog(@"Temp A: %i", tempA);
NSLog(@"Temp B: %i", tempB);
// Vertical
[centerLines moveToPoint:CGPointMake(tempB, 0)];
[centerLines addLineToPoint:CGPointMake(tempB, screenHeight)];
Here's what's happening. You're already drawing this line at one point in your grid code; you just have to figure out which one it is. You know that screenWidth/gsX is the index of the last line that will be drawn, so that number divided by 2 is the index of the middle line. Then just multiply that index by your grid spacing to get back to a pixel coordinate. Since it is the middle line closest to the center, your line should show up on top of the grid.
That should always draw a middle line. I don't see any code where you are changing the color. So you will have to take it on blind faith that it is being drawn. If you can change your line color you should be able to see it.
I'll leave it to you to figure out horizontal. (hint: it deals with y value ;-) )
I hope this helps!
Have fun and good luck Mr. Bourne!

Center is
int centerX = ((screenWidth/2) / gsX )* gsX;
int centerY = ((screenHeight/2) / gsY ) * gsY;
Just make sure you are doing integer math above (no floats). It should work out fine.
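To sanity-check that integer math with the numbers from the question, here is a minimal sketch in plain C++ (the 320 x 480 screen size and the 19/25 spacing are taken from the grid code above, not fixed values):
#include <cstdio>

int main() {
    int screenWidth = 320, screenHeight = 480;  // screen size from the question
    int gsX = 19, gsY = 25;                     // grid spacing from the posted grid code

    // Integer division truncates, which snaps the true center down to a drawn grid line.
    int centerX = ((screenWidth / 2) / gsX) * gsX;   // (160 / 19) * 19 = 8 * 19 = 152
    int centerY = ((screenHeight / 2) / gsY) * gsY;  // (240 / 25) * 25 = 9 * 25 = 225

    std::printf("center line at x = %d, y = %d\n", centerX, centerY);
    return 0;
}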

int x = xStart + (screenWidth/gsX)/2 * gsX;

Related

How to draw outline of a point in OpenGL?

Right now, points can be drawn with the following code:
// SETUP FOR VERTICES
GLfloat points[graph->vertexCount * 6];
for (int i = 0; i < graph->vertexCount; i++)
{
    points[i*6]   = (graph->vertices[i].x / (backingWidth/2) ) - 1;
    points[i*6+1] = -(graph->vertices[i].y / (backingHeight/2) ) + 1;
    points[i*6+2] = 1.0;
    points[i*6+3] = 0.0;
    points[i*6+4] = 0.0;
    points[i*6+5] = 1.0;
}
glEnable(GL_POINT_SMOOTH);
glPointSize(DOT_SIZE*scale);
glVertexPointer(2, GL_FLOAT, 24, points);
glColorPointer(4, GL_FLOAT, 24, &points[2]);
glDrawArrays(GL_POINTS, 0, graph->vertexCount);
The points are rendered in red, and I want to add a white outline around the points. How can I draw an outline around each point?
Update for better display:
Following @BDL's instruction of adding bigger points under the red points as an outline, they look good.
outlinePoints[i*6] = (graph->vertices[i].x / (backingWidth/2) ) - 1;
outlinePoints[i*6+1] = -(graph->vertices[i].y / (backingHeight/2) ) + 1;
outlinePoints[i*6+2] = 0.9;
outlinePoints[i*6+3] = 0.9;
outlinePoints[i*6+4] = 0.9;
outlinePoints[i*6+5] = 1.0;
But when one point overlaps another point, its outline is covered by the red point, since all the outline points are rendered before the red points.
I think the right solution is to render one outline point and red point one by one. How to do that?
If you want to render outlines for each point separately, then you can simply render a slightly larger white point first and then render the red point over it. With depth-testing enabled, you might have to adjust the polygon offset when rendering the red point to prevent them from getting hidden behind the white ones.
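Not the author's code, but a minimal sketch of that idea drawn one point at a time, assuming the same OpenGL ES 1.1 fixed-function setup and the interleaved points/outlinePoints arrays from the question (OUTLINE_SIZE is a hypothetical constant for the extra outline width):
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnable(GL_POINT_SMOOTH);
for (int i = 0; i < graph->vertexCount; i++)
{
    // slightly larger white point first ...
    glPointSize((DOT_SIZE + OUTLINE_SIZE) * scale);   // OUTLINE_SIZE: hypothetical extra width
    glVertexPointer(2, GL_FLOAT, 24, &outlinePoints[i*6]);
    glColorPointer(4, GL_FLOAT, 24, &outlinePoints[i*6 + 2]);
    glDrawArrays(GL_POINTS, 0, 1);

    // ... then the red point on top, so a later point's outline is drawn over earlier fills
    glPointSize(DOT_SIZE * scale);
    glVertexPointer(2, GL_FLOAT, 24, &points[i*6]);
    glColorPointer(4, GL_FLOAT, 24, &points[i*6 + 2]);
    glDrawArrays(GL_POINTS, 0, 1);
}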

Crop half of an image in OpenCV

How can I crop an image and only keep the bottom half of it?
I tried:
Mat croppedFrame = frame(Rect(frame.cols/2, 0, frame.cols, frame.rows/2));
but it gives me an error.
I also tried:
double min, max;
Point min_loc, max_loc;
minMaxLoc(frame, &min, &max, &min_loc, &max_loc);
int x = min_loc.x + (max_loc.x - min_loc.x) / 2;
Mat croppedframe = frame(Rect(x, min_loc.y, frame.size().width, frame.size().height / 2));
but it doesn't work either.
Here's the Python version for any beginners out there.
def crop_bottom_half(image):
    cropped_img = image[image.shape[0] // 2:image.shape[0]]  # integer division so the index is an int
    return cropped_img
The Rect function arguments are Rect(x, y, width, height). In OpenCV, the data are organized with the first pixel being in the upper left corner, so your rect should be:
Mat croppedFrame = frame(Rect(0, frame.rows/2, frame.cols, frame.rows/2));
To quickly copy paste:
image = YOURIMAGEHERE #note: image needs to be in the opencv format
height, width, channels = image.shape
croppedImage = image[int(height/2):height, 0:width] #this line crops
Explanation:
In OpenCV, to select a part of an image, you can simply select the start and end pixels of the image. The meaning is:
image[yMin:yMax, xMin:xMax]
In human speak: yMin = top | yMax = bottom | xMin = left | xMax = right |
" : " means from the value on the left of the : to the value on the right
To keep the bottom half we simply do [int(yMax/2):yMax, xMin:xMax], which means from half the image to the bottom. x runs from 0 to the max width.
Keep in mind that OpenCV starts from the top left of an image and increasing the Y value means downwards.
To get the width and height of an image you can do image.shape which gives 3 values:
yMax, xMax, and the number of channels (you probably won't use the channels). To get just the height and width you can also do:
height, width = image.shape[0:2]
This is also known as getting the Region of Interest or ROI

(MATH ISSUE) Creating a SPIRAL out of points: How do I change "relative" position to absolute position

Recently I had the idea to make a pendulum out of points using Processing, and with a little learning I solved it easily:
int contador = 0;
int curvatura = 2;
float pendulo;

void setup(){
  size(300,300);
}

void draw(){
  background(100);
  contador = (contador + 1) % 360; //"CONTADOR" GOES FROM 0 TO 359
  pendulo = sin(radians(contador))*curvatura; //"PENDULO" EQUALS THE SIN OF CONTADOR, SO IT GOES FROM 1 TO -1 REPEATEDLY, THEN IS MULTIPLIED TO EMPHASIZE OR REDUCE THE CURVATURE OF THE LINE.
  tallo(width/2,height/3);
  println(pendulo);
}

void tallo (int x, int y){ //THE FUNCTION TO DRAW THE DOTTED LINE
  pushMatrix();
  translate(x,y);
  float _y = 0.0;
  for(int i = 0; i < 25; i++){ //CREATES THE POINTS SEQUENCE.
    ellipse(0,0,5,5);
    _y+=5;
    rotate(radians(pendulo)); //ROTATE THEM ON EACH ITERATION, THIS MAKES THE SPIRAL.
  }
  popMatrix();
}
So, in brief, what I did was a function that changed every point's position with the rotate function, and then I just had to draw the ellipses at the origin coordinates, as that is the real thing that changes position and creates the pendulum illusion.
[capture example]
Everything was OK so far. The problem appeared when I tried to replace the ellipses with a path made of vertices. The problem is obvious: the path is never (visually) made because all vertices would be at 0,0, as they move along with the zero coordinates.
So, in order to make the path possible, I need the absolute values for each vertex; and there's the question: How do I get them?
What I know I have to do is remove the transform functions, create variables for the X and Y positions and update them inside the for loop, but then what? That's why I made clear this is a maths issue: what operation do I have to apply to the X and Y variables in order to make the path and its curvature possible?
void tallo (int x, int y){
  pushMatrix();
  translate(x,y);
  //NOW WE START WITH THE CHANGES. LET'S DECLARE THE VARIABLES FOR THE COORDINATES
  float _x = 0.0;
  float _y = 0.0;
  beginShape();
  for(int i = 0; i < 25; i++){ //CREATES THE DOTS.
    vertex(_x,_y); //CHANGING TO VERTICES AND CALLING THE NEW VARIABLES, OK.
    //rotate(radians(pendulo)); <--- HERE IS MY PROBLEM. HOW DO I CONVERT THIS INTO X AND Y COORDINATES?
    //_x = _x + ????;
    _y = _y + 5 /* + ???? */;
  }
  endShape();
  popMatrix();
}
We need to keep in mind that pendulo's contribution to x and y changes on each iteration of the for loop; it doesn't have to add the same quantity each time. The addition must be progressive, otherwise we would see a straight line rotating instead of a curve accentuating its curvature (if you increase curvatura's value to a number greater than 20, you will notice the spiral).
So, rotating the coordinates was a great solution to it; now it's kind of a muddle to work out the mathematical solution for the x and y coordinates of the spiral, and my secondary-school knowledge isn't enough. I know I have to create another variable inside the for loop in order to do this progression, but what operation should it use?
I would be really glad to know the maths.
You could use simple trigonometry. You know the angle and the hypotenuse, so you use cos to get the relative x position, and sin to the y. The position would be relative to the central point.
But before I explain in detail and draw some explanations, let me propose another solution: PVectors
void setup() {
  size(400,400);
  frameRate(60);
  center = new PVector(width/2, height/3); //defined here because width and height are only set after size()
}

void draw() {
  background(255);
  fill(0);
  stroke(0);
  angle = arc_magn*sin( (float) frameCount/60 );
  draw_pendulum( center );
}

PVector center;
float angle = 0;
float arc_magn = HALF_PI;
float wire_length = 150;
float rotation_angle = PI/20 /60; //we divide it by 60 so the first part is the rotation in one second

void draw_pendulum(PVector origin){
  PVector temp_vect = PVector.fromAngle( angle + HALF_PI );
  temp_vect.setMag(wire_length);
  PVector final_pos = new PVector(origin.x+temp_vect.x, origin.y+temp_vect.y);
  ellipse( final_pos.x, final_pos.y, 40, 40);
  line(origin.x, origin.y, final_pos.x, final_pos.y);
}
You use the PVector class's static method fromAngle(float angle), which returns a unit vector at the given angle, then use .setMag() to define its length.
Those PVector methods will take care of the trigonometry for you.
If you still want to know the math behind it, I can make another example.
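For reference, a minimal sketch of that trigonometry in plain C++ (not Processing): keep a running angle, and add each rotated 5-unit step to an accumulated (x, y) position, which gives the absolute vertex coordinates the question asks for. The 2-degree step angle is only an assumed example value.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;
    const double step = 5.0;                   // distance between points, as in the sketch above
    const double pendulo = 2.0 * PI / 180.0;   // example rotation per step (2 degrees, an assumed value)
    double angle = 0.0, x = 0.0, y = 0.0;

    for (int i = 0; i < 25; i++) {
        std::printf("vertex %d: (%.2f, %.2f)\n", i, x, y);
        angle += pendulo;                      // the rotation accumulates every iteration
        x += -step * std::sin(angle);          // a local (0, step) segment rotated by 'angle'
        y +=  step * std::cos(angle);
    }
    return 0;
}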

How to determine the width of the lines?

I need to detect the width of these lines:
These lines are parallel and have some noise on them.
Currently, what I do is:
1. Find the center using thinning (Zhang-Suen):
ZhanSuenThinning(binImage, thin);
2. Compute the distance transform:
cv::distanceTransform(binImage, distImg, CV_DIST_L2, CV_DIST_MASK_5);
3. Accumulate the half distance around the center:
double halfWidth = 0.0;
int count = 0;
for(int a = 0; a < thinImg.cols; a++)
    for(int b = 0; b < thinImg.rows; b++)
        if(thinImg.ptr<uchar>(b, a)[0] > 0)
        {
            halfWidth += distImg.ptr<float>(b, a)[0];
            count++;
        }
4. Finally, get the actual width:
width = halfWidth / count * 2;
The result isn't quite good; it's off by around 1-2 pixels. On a bigger image, the result is even worse. Any suggestions?
You can adapt barcode reader algorithms, which is a fast way to do it.
Scan horizontal and vertical lines.
Let X be the length of the horizontal intersection with a black line and Y the length of the vertical intersection (you can get them by taking the median value of several X and Y measurements if there is some noise).
X * Y / 2 = area
X² + Y² = hypotenuse²
hypotenuse * width / 2 = area
So: width = 2 * area / hypotenuse
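A minimal sketch of that width formula, assuming X and Y (the horizontal and vertical run lengths across the line) have already been measured:
#include <cmath>

// width = 2 * area / hypotenuse, with area = X * Y / 2 and hypotenuse = sqrt(X^2 + Y^2)
double lineWidthFromRuns(double X, double Y) {
    return (X * Y) / std::sqrt(X * X + Y * Y);
}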
EDIT : You can also easily find the angle by using PCA.
All you need is to find the RotatedRect for each contour in your image; here is an OpenCV tutorial on how to do it. Then just take the values of 'size' from the rotated rectangle, where you will get the height and width of the contour. The height and width may interchange for different alignments of the contour; here, in the above image, the height becomes the width and the width becomes the height.
Contour --> RotatedRect
               |
               '--> Size2f size
                       |
                       |--> width
                       '--> height
After finding the contours, just do:
RotatedRect minRect = minAreaRect( Mat(contours[i]) );
Size2f contourSize = minRect.size; // width and height of the rectangle
Rotated rectangle for each contour
Here is C++ code
Mat src = imread("line.png", 1);
Mat thr, gray;
blur(src, src, Size(3,3));
cvtColor(src, gray, CV_BGR2GRAY);
Canny(gray, thr, 50, 190, 3, false);

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(thr.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0,0));

vector<RotatedRect> minRect(contours.size());
for (int i = 0; i < contours.size(); i++)
    minRect[i] = minAreaRect(Mat(contours[i]));

for (int i = 0; i < contours.size(); i++)
{
    cout << " Size = " << minRect[i].size << endl; // The width may interchange according to contour alignment
    Size2f s = minRect[i].size;
    // rotated rectangle
    Point2f rect_points[4];
    minRect[i].points(rect_points);
    for (int j = 0; j < 4; j++)
        line(src, rect_points[j], rect_points[(j+1)%4], Scalar(0,0,255), 1, 8);
}
imshow("src", src);
imshow("Canny", thr);
One quick and simple suggestion:
Count the total number of black pixels.
Detect the length of each line. (perhaps with CVHoughLinesP, or simply the diagonal of the bounding box around each thinned line)
Divide the number of black pixels by the sum of all line lengths, that should give you the average line width.
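A rough sketch of that estimate with OpenCV (not from the answer), assuming the lines are the non-zero pixels of a binary image; the HoughLinesP parameters are placeholder values that would need tuning:
#include <cmath>
#include <opencv2/opencv.hpp>

double averageLineWidth(const cv::Mat& binImage) {
    double linePixels = cv::countNonZero(binImage);                  // total pixels belonging to lines
    std::vector<cv::Vec4i> segments;
    cv::HoughLinesP(binImage, segments, 1, CV_PI / 180, 50, 30, 10); // rho, theta, threshold, minLen, maxGap
    double totalLength = 0.0;
    for (const cv::Vec4i& s : segments)
        totalLength += std::hypot(s[2] - s[0], s[3] - s[1]);         // length of each fitted segment
    return totalLength > 0.0 ? linePixels / totalLength : 0.0;
}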
I am not sure whether that is more accurate than your existing approach though. The irregular end parts of each line might throw it off.
One thing you could try that could increase the accuracy for that case:
Measure the average angle of the lines
Rotate the image so the lines are aligned horizontally
Crop a rectangular subsection of your shape, so all lines have the same length
(you can get the contour of your shape by morphological closing, then find a rectangle that is entirely contained within the shape; make sure that the horizontal edges of the rectangle are in between lines)
Then count the number of black pixels again (count gray pixels caused by rotating the image as x% of a whole pixel)
Divide by (rectangle_width * number_of_lines_in_rectangle)
Hough line fits to find each line
From each pixel on each line fit, scan in the perpendicular direction to get the distance to the edge. Find the edge using a spline fit or similar sub-pixel method.
Depending on your needs/desires, take the median or average distance. To eliminate problems with outliers, throw out the distances below the 10th percentile and above the 90th percentile before calculating the mean or median. You might also report the size using statistics: line width W, standard deviation S.
Although a connected components algorithm can be used to find the lines, it won't find the "true" edges as nicely as a spline fit.
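As a small illustration of the percentile trimming suggested above, a hypothetical helper (it assumes the per-pixel width samples have already been collected into a vector):
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Trimmed mean: drop samples below the 10th and above the 90th percentile, then average the rest.
double trimmedMeanWidth(std::vector<double> widths) {
    if (widths.empty()) return 0.0;
    std::sort(widths.begin(), widths.end());
    std::size_t lo = widths.size() / 10;     // index of the 10th percentile
    std::size_t hi = widths.size() - lo;     // index just past the 90th percentile
    return std::accumulate(widths.begin() + lo, widths.begin() + hi, 0.0) / (hi - lo);
}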
An image like the one you've shown is noisy/blurry, and thus the number of black pixels might not reflect line properties; for example, black pixels can be partially attributed to salt-and-pepper noise. You can get rid of it with morphological erosion, but this will affect your lines as well.
A better way is to extract connected components, delete small ones that likely come from noise or small blobs, then calculate the number of pixels and divide it by the number of lines. This approach will also help you to analyse the shape of the objects in your image and get rid of any artefacts other than noise or lines.
A different real-world situation is when you have some grey pixels close to a line border. You can either use a threshold to discard them or count them with some weight < 1. This will compensate for blur in your image. By the way, rotation of the image may increase the blur since it is typically done with interpolation and smoothing.
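To illustrate the weighting idea (not the answer's code), here is a sketch that counts dark pixels with a weight proportional to their darkness, assuming a grayscale image with dark lines on a white background; the 0.1 cut-off is an arbitrary example threshold:
#include <opencv2/opencv.hpp>

double weightedLinePixelCount(const cv::Mat& gray) { // gray: 8-bit single-channel image
    double count = 0.0;
    for (int r = 0; r < gray.rows; r++) {
        for (int c = 0; c < gray.cols; c++) {
            double darkness = (255 - gray.at<uchar>(r, c)) / 255.0; // 1 = black, 0 = white
            if (darkness > 0.1)       // discard near-white pixels
                count += darkness;    // partially dark (gray) pixels contribute fractionally
        }
    }
    return count;
}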

Building a rectangle with a group of points in opencv

I am trying to build a rectangle using OpenCV from these points, but I am not sure how to go about it. I would like to build the rectangle so that I can get the four corner points.
Method 1
This method will be useful when your image contains contours which do not represent your rectangle sides.
The first thing you need to do is find the centre of each contour; you may proceed with OpenCV moments or minEnclosingCircle after findContours. Now you have a set of points representing your rectangle.
The next step is to classify the points into the sides of the rectangle: top, bottom, left and right. That is, find the points which are lying on the same line; this link and discussion might be helpful.
After sorting (classifying the points which lie on the same line), you can easily find the top, bottom, right and left by extending these lines and finding the four intersections, where the minimum y-value stands for the top, the minimum x for the left, the maximum x for the right and the maximum y for the bottom.
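A minimal sketch of that intersection step, assuming each side has been fitted with cv::fitLine (which returns a Vec4f holding a unit direction (vx, vy) and a point (x0, y0) on the line); the helper name below is hypothetical, not from the answer:
#include <opencv2/opencv.hpp>

// Intersection of two lines given in cv::fitLine form: (vx, vy, x0, y0).
cv::Point2f intersectFittedLines(const cv::Vec4f& a, const cv::Vec4f& b) {
    cv::Point2f pa(a[2], a[3]), da(a[0], a[1]);   // point and direction of line a
    cv::Point2f pb(b[2], b[3]), db(b[0], b[1]);   // point and direction of line b
    float denom = da.x * db.y - da.y * db.x;      // 2D cross product; 0 means parallel lines
    float t = ((pb.x - pa.x) * db.y - (pb.y - pa.y) * db.x) / denom;
    return pa + t * da;                           // corner = point on line a at parameter t
}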
Edit:
Method 2
Instead of doing all the above steps, you can simply find the four corners as described below.
Find the centre points of all contours.
Find the points with minimum x and maximum x, which will represent two corners.
Find the points with minimum y and maximum y, which will represent the other two corners.
Now you can decide which point is top left, top right, bottom left and bottom right by looking at these values.
-> From the set of four points, consider the two points with the minimum y-value. Of these two, the top left corner will be the point with the minimum x value and the top right corner will be the point with the maximum x.
-> Similarly, from the remaining two points (the set with the maximum y values), the point with the minimum x value will be the bottom left corner and the point with the maximum x will be the bottom right corner.
Code for method 2
Mat src = imread("src.png", 0);
vector< vector<Point> > contours;   // Vector for storing contours
vector<Vec4i> hierarchy;
findContours(src, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // Find the contours in the image

// Centre of each contour via its minimum enclosing circle
vector<Point2f> center(contours.size());
vector<float> radius(contours.size());
for (int i = 0; i < contours.size(); i++) {
    minEnclosingCircle(contours[i], center[i], radius[i]);
    circle(src, center[i], radius[i], Scalar(255), 1, 8, 0);
}

// Indices of the extreme centres (leftmost, rightmost, topmost, bottommost)
int top_left = 0, top_right = 0, bot_left = 0, bot_right = 0;
int idx_min_x = 0, idx_min_y = 0, idx_max_x = 0, idx_max_y = 0;
for (int i = 0; i < contours.size(); i++) {
    if (center[idx_max_x].x < center[i].x) idx_max_x = i;
    if (center[idx_min_x].x > center[i].x) idx_min_x = i;
    if (center[idx_max_y].y < center[i].y) idx_max_y = i;
    if (center[idx_min_y].y > center[i].y) idx_min_y = i;
}

vector<Point2f> corners;
corners.push_back(center[idx_max_x]);
corners.push_back(center[idx_min_x]);
corners.push_back(center[idx_max_y]);
corners.push_back(center[idx_min_y]);

// Sort the four corner candidates by their y value
Point2f tmp;
for (int i = 0; i < corners.size(); i++) {
    for (int j = 0; j < corners.size() - 1; j++) {
        if (corners[j].y > corners[j+1].y) {
            tmp = corners[j+1];
            corners[j+1] = corners[j];
            corners[j] = tmp;
        }
    }
}

// The two smallest-y points are the top corners, the two largest-y points the bottom corners
if (corners[0].x > corners[1].x) { top_left = 1; top_right = 0; }
else                             { top_left = 0; top_right = 1; }
if (corners[2].x > corners[3].x) { bot_left = 3; bot_right = 2; }
else                             { bot_left = 2; bot_right = 3; }

line(src, corners[top_left],  corners[top_right], Scalar(255), 1, 8, 0);
line(src, corners[bot_left],  corners[bot_right], Scalar(255), 1, 8, 0);
line(src, corners[top_left],  corners[bot_left],  Scalar(255), 1, 8, 0);
line(src, corners[top_right], corners[bot_right], Scalar(255), 1, 8, 0);

imshow("src", src);
waitKey();
Result:
This post seems to be about trapezoids, not about rectangles.
For everyone looking for a solution regarding rectangles:
I merged all my points into one contour: How to merge contours in opencv?.
Then create a rectangle around that contour and draw it:
rect = cv2.minAreaRect(merged_contour)
box = cv2.boxPoints(rect)
box = np.intp(box) #np.intp: Integer used for indexing (same as C ssize_t; normally either int32 or int64)
cv2.drawContours(image, [box], 0, (0,0,255), 1)
