It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
I have a ball moving inside a cube, and I detect when it goes outside of the cube (with a bounding sphere and a bounding box).
Now I would like to detect from which side the ball goes out. Then I could redirect the ball in the correct direction. How can I do this with the ball's “world” matrix?
Should I keep track of the ball's coordinates myself, or should I deduce them from the world matrix?
I'd start over with the collisions. You have six planes (each a [point,normal unit vector] pair) and a sphere (a [point,radius] pair).
Check the sphere against each plane. To do this, subtract the plane's unit normal, scaled by the sphere's radius, from the sphere's center (Point -= PlaneNormal * radius).
Now, with some vector math, you can see which side of the plane it's on.
You'll then use the unit vector of the plane for the bounce calculation.
The next problem you'll run into is the case where you cross through more than one plane at a time.
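The plane test and bounce described above can be sketched in a few lines. This is a minimal, library-independent illustration (the type and function names are my own, not from any engine API):

```python
from dataclasses import dataclass

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

@dataclass
class Plane:
    point: tuple   # any point on the plane
    normal: tuple  # unit normal pointing into the cube

def sphere_crossed_plane(center, radius, plane):
    """Offset the center by the radius against the plane normal, then
    check which side of the plane the offset point is on.  A negative
    signed distance means the sphere has crossed (or touched) the plane."""
    shifted = tuple(c - n * radius for c, n in zip(center, plane.normal))
    signed_distance = dot(tuple(s - p for s, p in zip(shifted, plane.point)),
                          plane.normal)
    return signed_distance < 0.0

def reflect(velocity, normal):
    """Bounce: reflect the velocity vector about the plane's unit normal."""
    d = dot(velocity, normal)
    return tuple(v - 2.0 * d * n for v, n in zip(velocity, normal))
```

For the multiple-planes case, you would run this test against all six planes each frame and reflect once per plane that reports a crossing.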
I want to create an application where a user can detect an unobstructed path using an Android mobile phone. I used a Gaussian filter for smoothing and then used Canny edge detection to detect edges.
I thought if I could find the concentration of edges in the video I can direct the user towards an area with a low concentration of edges, hence a clear path.
Is there a way to find the concentration of the edges? I'm new to image processing so any help would be appreciated.
I'm using the OpenCV Android library for my implementation.
Thank you.
The edge detection filter will leave you with a grey-scale image showing where edges are, so you could just throw another smoothing/averaging filter on this to get an idea of the concentration of edges in the neighbourhood of each pixel.
This is a pretty big topic though, and there are a lot of better and more complex alternatives. You should probably look up academic papers, textbooks and tutorials which talk about 'image segmentation', 'region detection' and mention texture or edges.
Also, consider the cases where edges are not indicative of clearness of path, such as plain white walls, and brick foot paths.
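To make the smoothing/averaging idea concrete, here is a pure-Python sketch (a stand-in for something like OpenCV's blur on the Canny output; the function names are my own): box-average the edge map to get a per-pixel edge density, then pick the image column with the lowest mean density as the clearest direction.

```python
def edge_density(edge_map, win=3):
    """Average each pixel's win x win neighbourhood of a binary edge map,
    in the spirit of cv2.blur(edges, (win, win)) on a Canny result."""
    h, w = len(edge_map), len(edge_map[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += edge_map[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

def clearest_column(density):
    """Column with the lowest mean edge density: candidate clear path."""
    h, w = len(density), len(density[0])
    means = [sum(density[y][x] for y in range(h)) / h for x in range(w)]
    return min(range(w), key=means.__getitem__)
```

On a real frame you would do this on the downscaled Canny output and steer the user toward the low-density region, keeping in mind the plain-wall caveat above.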
I was wondering if it is possible to use the iOS gyroscope to detect whether an object the camera is pointed at (a physical photo of someone) is straight. In other words, using the iOS camera, could I tell whether the physical photo is level? Does anyone know if this can be done?
If so, can someone please provide an example?
Thank you
Use the gravity property of CMDeviceMotion, which incorporates both accelerometer and gyroscope data.
Another approach would be to detect straight lines in the image, and see whether these are horizontal or vertical. In most scenes, the camera is oriented correctly when the most prominent straight lines are horizontal or vertical. You can do this using the Hough transform on an edge-filtered image.
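To illustrate the gravity approach: the math is platform-independent, so here is a sketch (in Python rather than Objective-C) of what you would do with the x/y components of CMDeviceMotion's gravity vector. The tolerance value is a made-up number you would tune:

```python
import math

def roll_from_gravity(gx, gy):
    """Roll angle (degrees) of the device about the screen's normal,
    computed from the x/y components of the gravity vector (as reported
    by CMDeviceMotion.gravity on iOS).  0 degrees = portrait, level."""
    return math.degrees(math.atan2(gx, -gy))

def is_level(gx, gy, tolerance_deg=2.0):
    """True when the device roll is within a (hypothetical) tolerance
    of any 90-degree orientation, i.e. the frame edges are plumb."""
    roll = roll_from_gravity(gx, gy) % 90.0
    return min(roll, 90.0 - roll) <= tolerance_deg
```

Note this tells you whether the *device* is level, not the photo itself; for the photo you would combine it with the straight-line (Hough) check above.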
Can anyone tell me how to implement 2D to 3D face reconstruction in OpenGL
I am new to 3D Graphics and rendering. Can anyone tell me how to do 2D to 3D face reconstruction from 2D image. What are the algorithms used and how can it be done using OpenGL? Related functions.
Input: 2D frontal face image
Output: 3D reconstruction of the input
Can anyone tell me how to implement 2D to 3D face reconstruction in OpenGL
You can't. OpenGL is a rasterizing drawing API: it turns 3D geometry into 2D pictures, i.e. it goes in the opposite direction of what you want to do.
Input: 2D frontal face image
Output: 3D reconstruction of the input
This is the real world, not CSI!
You need at least some additional input to turn this into 3D data. Each pixel of the image is a point in 2D space, i.e. provides 2 variables into an equation. You want to turn this into 3 variables. Or in other words, this is an underdetermined system of equations.
Additional input could be:
Time: If you had a movie, the positions of the visible objects in space could be inferred from their movements over time.
Images from multiple angles
Taking the picture with special illumination, so that the pixel's color provides additional information about the depth.
Research on this has been done at the University of Washington. I recommend looking at all their papers!
A trained observer model, i.e. what our brain does to infer an object's depth from a 2D picture, which is very unreliable. Just look at any perspective illusion to understand why. Our brain is much better at interpreting images than any computer program, and it can still be fooled.
I want to adjust an image like the Curves tool in Photoshop does. It changes the image's color, contrast, etc. in each R, G, B channel or in all RGB channels at once.
Any idea how to do this in Objective-C?
I found this link http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=68577&lngWId=-1 but it only adjusts curves across the whole image (in VB); it doesn't support individual color channels like Photoshop.
The way curves work in Photoshop uses histogramming methods. Essentially, one builds the histogram by counting how many pixels have each value (the possible values lie on the X axis of the histogram) across the whole image. This can be done separately for each color channel.
Look here for image histogramming
http://en.wikipedia.org/wiki/Image_histogram
After one has the histogram, a curve can be applied (to each color channel if you like). The standard curve is the one-to-one, or linear, curve. This means that when the actual pixel value is 10, the value assigned in your edited image is also 10.
One could imagine any curve, or even a random distribution. While there are many methods, a standard one is the log-based histogram method: it looks at the image histogram and applies the steepest curve slopes to the histogram areas with the highest pixel counts, thus providing good contrast for the greatest number of pixels.
In terms of a curve, the curve you place on top of the histogram simply defines the mapping function of input pixel value to edited pixel value. You can apply a curve without doing a histogram, but the histo is a good reference for your user so that they know where they want to edit the curve for best effect.
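The mapping function can be sketched as a per-channel lookup table (LUT). Below is a minimal pure-Python illustration (not Objective-C, and independent of any image library; the control-point format and helper names are my own): curve control points are interpolated into a 256-entry table, which is then applied to one channel.

```python
def build_lut(control_points):
    """Linearly interpolate curve control points [(in, out), ...] into a
    256-entry lookup table mapping input value -> output value."""
    pts = sorted(control_points)
    lut = []
    for v in range(256):
        # find the segment containing v and interpolate within it
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= v <= x1:
                t = (v - x0) / (x1 - x0) if x1 != x0 else 0.0
                lut.append(round(y0 + t * (y1 - y0)))
                break
    return lut

def apply_curve(pixels, lut, channel):
    """Remap one channel (0=R, 1=G, 2=B) of a list of RGB tuples."""
    return [tuple(lut[p[c]] if c == channel else p[c] for c in range(3))
            for p in pixels]
```

The identity curve `build_lut([(0, 0), (255, 255)])` leaves pixels unchanged; adding a midpoint like `(128, 192)` brightens the midtones of whichever channel you apply it to.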
I am new to OpenCV and I'm trying to count and locate broken or cracked biscuits
(those which are not perfect circles as they should be) in the image.
What kind of strategy should I follow?
Any suggestion that helps open my mind is welcome. Regards.
Your question is very abstract (it would be better if you provided a sample picture), but I can try to answer it.
First of all, you have to detect all biscuits in your image. To do this, find the biscuit color (the HSV color space may suit your aim better) in your picture and convert the input image to a single-channel image (or matrix), where each element is:
1 (or 255) if the pixel belongs to a biscuit
0 if not.
The OpenCV function inRange may help you do such a conversion.
When the biscuits are detected you can:
Use HoughCircles to detect the normal biscuits (if a normal biscuit is round).
Find the contours of each biscuit (take a look at findContours) and compare each contour with that of a normal biscuit (if it's not round) using cross-correlation or other methods (Euclidean distance, etc.).
Also look at the HoughCircles tutorial to detect just circles, if your image doesn't contain circles other than the biscuits.
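One simple contour-comparison metric is circularity, 4*pi*area / perimeter^2, which is 1.0 for a perfect circle and drops for cracked or broken shapes. Here is a pure-Python sketch of the metric on a contour given as a list of (x, y) points (in OpenCV you would get the area and perimeter from contourArea and arcLength; the threshold below is a made-up value you would tune on images of good biscuits):

```python
import math

def polygon_area(pts):
    """Shoelace formula (what cv2.contourArea computes for a contour)."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                   - pts[(i + 1) % n][0] * pts[i][1]
                   for i in range(n))) / 2.0

def perimeter(pts):
    """Closed-polygon perimeter (cf. cv2.arcLength with closed=True)."""
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def circularity(pts):
    """4*pi*area / perimeter^2: 1.0 for a circle, lower for broken shapes."""
    p = perimeter(pts)
    return 4.0 * math.pi * polygon_area(pts) / (p * p)

def is_broken(pts, threshold=0.85):
    """Flag a contour as a broken biscuit when its circularity falls
    below a (hypothetical) threshold tuned on intact biscuits."""
    return circularity(pts) < threshold
```

Counting the biscuits is then just counting the contours, and the broken ones are those flagged by the threshold.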