What is the equation of an ellipse given the two endpoints of its major axis? [closed]

I want to get the equation of an ellipse from the two endpoints of its major axis. These two points can place the ellipse anywhere on the plane, with any orientation. I think the minor axis can be pinned down if we fix its relation to the major axis; for example, say the minor axis is half the length of the major axis.
I know the standard equation for an ellipse centered at (0,0) is x^2/a^2 + y^2/b^2 = 1, but that is the simplest version. My ellipse can be centered at any point and have any orientation.
Is it possible to build the equation from this information? What would the equation be?

Knowing the two endpoints of the major axis is not enough to define an ellipse, because an ellipse with one fixed axis can have any eccentricity between 0 and 1. You can find a brief explanation at https://math.stackexchange.com/questions/239787/can-an-ellipse-with-fixed-semi-axis-have-different-values-of-eccentricity
Once you have enough information about your ellipse (for example, your rule that the minor axis is half the major axis), you can build the analytical equation according to the formulas here: https://en.wikipedia.org/wiki/Ellipse#General_ellipse
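For concreteness, here is a minimal C++ sketch of that construction under the question's assumption b = a/2; the endpoint values are placeholders, and the conic coefficients follow the "General ellipse" section linked above.

#include <cmath>
#include <cstdio>

int main() {
    // Two endpoints of the major axis (placeholder values).
    double x1 = 1.0, y1 = 2.0, x2 = 5.0, y2 = 5.0;

    double xc = (x1 + x2) / 2.0;                     // center = midpoint
    double yc = (y1 + y2) / 2.0;
    double a  = std::hypot(x2 - x1, y2 - y1) / 2.0;  // semi-major axis
    double b  = a / 2.0;                             // assumed ratio from the question
    double th = std::atan2(y2 - y1, x2 - x1);        // orientation of the major axis

    // General conic: A x^2 + B x y + C y^2 + D x + E y + F = 0
    double s = std::sin(th), c = std::cos(th);
    double A = a * a * s * s + b * b * c * c;
    double B = 2.0 * (b * b - a * a) * s * c;
    double C = a * a * c * c + b * b * s * s;
    double D = -2.0 * A * xc - B * yc;
    double E = -B * xc - 2.0 * C * yc;
    double F = A * xc * xc + B * xc * yc + C * yc * yc - a * a * b * b;

    std::printf("%g x^2 + %g xy + %g y^2 + %g x + %g y + %g = 0\n",
                A, B, C, D, E, F);
    return 0;
}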

Related

What is the most appropriate ML technology for classifying/recognizing 2D scatter plot shapes? [closed]

I would like to automatically classify an input scatter plot into one of a limited, predefined set of 2-D shape categories (see attached image), such as a Circle, a Cross, a Straight Line, and a Curvy Line, so that given any new scatter plot as input, the system can categorize it by finding the closest matching category.
Ideally, the classification process should also be scale-, translation- and rotation-invariant.
Can anyone suggest an appropriate technology for the training and classification of such 2-D patterns?
This doesn't need a supervised classifier. Unsupervised methods such as spectral clustering are designed for exactly this kind of nonlinear clustering problem. The scattered dots are assumed to lie on a manifold surface rather than in a Euclidean space, and any curvy line can be treated as such a manifold. Clustering then uses geodesic distance along the manifold, via a manifold kernel, instead of the ball-shaped Euclidean distance.
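If you want to experiment, here is a minimal C++ sketch of unnormalized spectral bipartitioning with Eigen; the toy points, the RBF width sigma, and the two-cluster simplification are my assumptions, not part of the answer above.

#include <Eigen/Dense>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Toy data: two separated curvy strips in the plane.
    std::vector<Eigen::Vector2d> pts = {
        {0.0, 0.0}, {0.5, 0.1}, {1.0, 0.0}, {1.5, 0.1},
        {0.0, 3.0}, {0.5, 3.1}, {1.0, 3.0}, {1.5, 3.1}};
    const int n = (int)pts.size();
    const double sigma = 1.0;  // RBF kernel width (assumed)

    // Affinity matrix W from a Gaussian (RBF) kernel.
    Eigen::MatrixXd W(n, n);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            W(i, j) = std::exp(-(pts[i] - pts[j]).squaredNorm() /
                               (2.0 * sigma * sigma));

    // Unnormalized graph Laplacian L = D - W.
    Eigen::VectorXd d = W.rowwise().sum();
    Eigen::MatrixXd L = Eigen::MatrixXd(d.asDiagonal()) - W;

    // Eigenvalues come back in increasing order; column 1 is the Fiedler vector.
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(L);
    Eigen::VectorXd fiedler = es.eigenvectors().col(1);

    // The sign of the Fiedler vector gives the two-way split.
    for (int i = 0; i < n; ++i)
        std::printf("point %d -> cluster %d\n", i, fiedler(i) > 0 ? 1 : 0);
    return 0;
}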

Delphi: How to plot a simple 2D graph? [closed]

I'm using Delphi for my computer science coursework and need my program to plot a simple graph of projectile motion. I've struggled to find a way to implement this and was wondering if anyone has experience drawing graphs or could point me in the right direction.
The main idea I have tried is plotting all the x values and drawing a line to the corresponding y value at each point in time, but each time it comes out distorted and doesn't work.
Steps for a simple graph:
1. Provide the data values in an array/list.
2. Find the minimal and maximal values of the X and Y components.
3. Calculate linear formulas that map data values to screen coordinates (min x => left edge of the drawing rectangle, and so on); see the sketch after these steps.
4. Draw line segments, applying the formulas to get the coordinates.
5. If needed, draw the axes.
P.S. Is using TChart prohibited?
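Not Delphi, but here is a minimal C++ sketch of the mapping in step 3; the same arithmetic drops directly into a TCanvas MoveTo/LineTo loop. The data values and the drawing rectangle are placeholders, and it assumes xmax > xmin and ymax > ymin.

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Placeholder projectile data.
    std::vector<double> xs = {0, 1, 2, 3, 4};
    std::vector<double> ys = {0, 4.9, 7.8, 8.7, 7.6};

    double xmin = *std::min_element(xs.begin(), xs.end());
    double xmax = *std::max_element(xs.begin(), xs.end());
    double ymin = *std::min_element(ys.begin(), ys.end());
    double ymax = *std::max_element(ys.begin(), ys.end());

    // Placeholder drawing rectangle in screen coordinates.
    int left = 10, top = 10, width = 300, height = 200;

    for (std::size_t i = 0; i < xs.size(); ++i) {
        // Linear mapping; Y is flipped because screen Y grows downward.
        int sx = left + (int)((xs[i] - xmin) / (xmax - xmin) * width);
        int sy = top + height - (int)((ys[i] - ymin) / (ymax - ymin) * height);
        std::printf("point %zu -> (%d, %d)\n", i, sx, sy); // MoveTo/LineTo here
    }
    return 0;
}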
I agree that the TChart component that ships with Delphi is enough for that task. Anyway, you can also check this page.

How to subtract color pixels [closed]

A lot of research papers that I am reading these days just abstractly write image1 - image2.
I imagine they mean grayscale images, but how does one extend this to color images?
Do I take the intensities and subtract? And how would I compute those intensities: by taking the average of the channels, or a weighted average as illustrated here?
I would also prefer if you could quote a source for this, preferably a research paper or a textbook.
Edit: I am working on motion detection, where there are tons of algorithms that create a background model of the video (an image) and then subtract the current frame (again, an image) from this model. If the difference exceeds a given threshold, the pixel is classified as a foreground pixel. So far I have been subtracting the intensities directly, but I don't know whether another approach is possible.
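For concreteness, a minimal OpenCV sketch of the direct subtract-and-threshold scheme just described; the file names and the threshold value 30 are placeholders.

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat background = cv::imread("background.png"); // background model
    cv::Mat frame = cv::imread("frame.png");           // current frame

    cv::Mat diff, gray, mask;
    cv::absdiff(frame, background, diff);              // per-channel |a - b|
    cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);      // collapse the channels
    cv::threshold(gray, mask, 30, 255, cv::THRESH_BINARY); // foreground mask
    cv::imwrite("mask.png", mask);
    return 0;
}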
Subtracting directly in RGB space, or after converting to grayscale, can miss useful information and at the same time induce many unwanted outliers. You may not need the subtraction operation at all: by investigating the intensity difference between background and object in all three channels, you can determine the range of the background in each channel and simply set pixels in that range to zero. This study demonstrated that such a method is robust against non-salient motion (such as moving leaves) in the presence of shadows in various environments.
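As an illustration of that per-channel range idea (not code from the cited study), a minimal OpenCV sketch; the BGR bounds are placeholders you would measure from your own background.

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat frame = cv::imread("frame.png");

    // Per-channel BGR background range (placeholder values).
    cv::Scalar lower(40, 50, 45), upper(90, 110, 100);

    cv::Mat bgMask;
    cv::inRange(frame, lower, upper, bgMask); // 255 where all channels are in range
    frame.setTo(cv::Scalar(0, 0, 0), bgMask); // zero out the background pixels
    cv::imwrite("foreground.png", frame);
    return 0;
}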

Match object's 3D rotation [closed]

I'm new to OpenCV, but willing to learn. I'm wondering if something described below is possible.
The camera sees a white pencil (tube, cylinder, etc.) on a dark background. I want to extract the object's rotation in 3D space and use it in my program. It doesn't have to be very accurate or fast (even ~10 fps will do).
I'm obviously looking for a solution, but also for some guidance: what to look at, what to read, and what professionals call this procedure.
It is impossible to extract the 3D rotation of an arbitrary object from a single image because of geometric ambiguities: you can get the same image with different coordinates/orientations of the object. But you can extract the angle of rotation in the screen (image) plane.
You can use image moments to solve this. First, binarize the image; you can do that with some color filtering technique. Once you have the binarized image, you can evaluate its moments (http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html) and then compute the angle as below:
...
// `moments` is the cv::Moments struct returned by cv::moments(binaryImage)
double M00=moments.m00;
double M10=moments.m10;
double M01=moments.m01;
double M20=moments.m20;
double M02=moments.m02;
double M11=moments.m11;
// Center of mass
double xc=M10/M00;
double yc=M01/M00;
// Normalized second-order central moments (covariance of the shape)
double A=(M20/M00)-xc*xc;
double B=2*((M11/M00)-xc*yc);
double C=(M02/M00)-yc*yc;
// Full lengths of the equivalent ellipse's major and minor axes
double LL=sqrt( ( (A+C)+sqrt(B*B+(A-C)*(A-C)) )/2 )*2;
double LW=sqrt( ( (A+C)-sqrt(B*B+(A-C)*(A-C)) )/2 )*2;
// Orientation in degrees: theta = atan2(2*mu11, mu20-mu02)/2
M20=moments.mu20;
M02=moments.mu02;
M11=moments.mu11;
double theta=(atan2(2*M11,(M20-M02))/2)*(180/M_PI);
...
There is a way to estimate 3D rotation if you use a flat object of known size: estimate the homography matrix and then decompose it into rotation and translation, for example as described here: http://hal.archives-ouvertes.fr/docs/00/17/47/39/PDF/RR-6303.pdf .
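A rough sketch of that route, assuming a calibrated camera with known intrinsics K and OpenCV 3+ (which provides cv::decomposeHomographyMat); the point sets and intrinsics are placeholders.

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Known corners of the flat object (object units) and where they appear in the image.
    std::vector<cv::Point2f> object = {{0,0}, {10,0}, {10,2}, {0,2}};
    std::vector<cv::Point2f> image  = {{120,80}, {340,95}, {335,140}, {118,123}};

    cv::Mat H = cv::findHomography(object, image);

    // Placeholder camera intrinsics.
    cv::Mat K = (cv::Mat_<double>(3,3) << 800, 0, 320,
                                            0, 800, 240,
                                            0,   0,   1);

    std::vector<cv::Mat> rotations, translations, normals;
    cv::decomposeHomographyMat(H, K, rotations, translations, normals);
    // Up to four candidate (R, t) pairs come back; keep the physically
    // plausible one (positive depth, plane normal facing the camera).
    return 0;
}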

Recognition of Boggle/Scrabble letters from an image [closed]

I am interested in recognizing letters on a Boggle board, probably using OpenCV. The letters are all the same font but could be rotated, so using a standard text recognition library is a bit of a problem. Additionally, the M and W have underscores to differentiate them, and the Q is actually a Qu.
I am fairly confident I can isolate the separate letters in the image; I am just wondering how to do the recognition part.
It depends on how fast you need to be.
If you can isolate the square of the letter and rotate it so that the sides of the square containing the letter are horizontal and vertical, then I would suggest you:
1. Convert the images to black/white (with the letter one colour and the rest of the die the other).
2. Make a dataset of reference images of all letters in all four possible orientations (i.e. upright and rotated 90, 180 and 270 degrees).
3. Use a template matching function such as cvMatchTemplate to find the best matching image from your dataset for each new image; a sketch follows after this answer.
This will take a bit of time, so optimisations are possible, but I think it will get you a reasonable result.
If getting them in a proper orientation is difficult you could also generate rotated versions of your new input on the fly and match those to your reference dataset.
If the letters can appear at different scales, I can think of two options:
1. If orientation is not an issue (i.e. your Boggle block detection can also put the block in the proper orientation), you can use the bounding box of the area that has the letter colour as a rough indicator of the scale of the incoming picture, and scale it to the same size as the bounding box in your reference images (this might differ per reference image).
2. If orientation is an issue, just add scaling as a parameter of your search space: search all rotations (0-360 degrees) and all reasonable sizes (you should be able to guess a reasonable range from the images you have).
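A minimal sketch of that matching loop with the C++ cv::matchTemplate (cvMatchTemplate is the old C-API name); it assumes the reference images are pre-cut to the same size as the input tile, and all file names are placeholders.

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    cv::Mat letter = cv::imread("letter.png", cv::IMREAD_GRAYSCALE);

    // One reference per letter per orientation (A_0.png, A_90.png, ...).
    std::vector<std::string> refs = {"A_0.png", "A_90.png", "A_180.png", "A_270.png"};

    double bestScore = -1.0;
    std::string bestName;
    for (const std::string& name : refs) {
        cv::Mat ref = cv::imread(name, cv::IMREAD_GRAYSCALE);
        cv::Mat result;
        cv::matchTemplate(letter, ref, result, cv::TM_CCOEFF_NORMED);
        double maxVal;
        cv::minMaxLoc(result, nullptr, &maxVal);  // best score for this reference
        if (maxVal > bestScore) { bestScore = maxVal; bestName = name; }
    }
    std::printf("best match: %s (score %.3f)\n", bestName.c_str(), bestScore);
    return 0;
}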
You can use a simple OCR engine like Tesseract. It is simple to use and quite fast. You'll have to try the 4 rotations yourself, though (as mentioned in jilles de wit's answer).
I made an iOS app that does just this, based on OpenCV. It's called SnapSolve. I wrote a blog post about how the detection works.
Basically, I overlay all 26x4 possible letters + rotations on each shape and see which letter overlaps most. A little tweak is to smooth the overlay image, to get rid of artefacts where letters almost overlap but not quite.
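To give the idea in code: a minimal sketch of that overlap scoring, not the actual SnapSolve implementation; the file names and the blur kernel size are placeholders, and both images are assumed to be same-sized binary masks.

#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::Mat shape = cv::imread("shape.png", cv::IMREAD_GRAYSCALE);
    cv::Mat candidate = cv::imread("letter_A_90.png", cv::IMREAD_GRAYSCALE);

    // Smoothing tweak: blur then re-binarize, which slightly fattens the
    // candidate mask and forgives near-misses.
    cv::Mat smooth, mask;
    cv::GaussianBlur(candidate, smooth, cv::Size(5, 5), 0);
    cv::threshold(smooth, mask, 0, 255, cv::THRESH_BINARY);

    cv::Mat overlap;
    cv::bitwise_and(shape, mask, overlap);
    int score = cv::countNonZero(overlap);  // higher = better match
    std::printf("overlap score: %d\n", score);
    return 0;
}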
