It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
I am trying to count the number of flashes using OpenCV. The setup is: I have a video camera pointed at a blinking flashlight, and using OpenCV functions I am trying to detect the number of flashes in the video stream coming from the camera. What is the easiest way of doing so? Thank you.
Assuming this is a video camera (!), you would capture a video stream and then check each frame for a bright region.
Start here: http://docs.opencv.org/ and get video capture working.
How to detect the flash depends on how much of the image it covers. If it completely fills the frame, it might be as simple as finding the average pixel value of each image.
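A minimal sketch of that idea in Python, assuming the flash dominates the frame so mean brightness is enough. The threshold of 120 is a made-up starting value you would tune for your lighting:

```python
def count_rising_edges(brightness, threshold):
    """Count low->high transitions in a sequence of mean frame brightnesses."""
    flashes = 0
    was_bright = False
    for value in brightness:
        is_bright = value > threshold
        if is_bright and not was_bright:  # rising edge = a new flash
            flashes += 1
        was_bright = is_bright
    return flashes

def count_flashes(source, threshold=120):
    """Count flashes in a video file path or camera index.

    threshold is a hypothetical starting point; tune it for your setup.
    """
    import cv2  # OpenCV is only needed for the capture itself
    cap = cv2.VideoCapture(source)
    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        means.append(gray.mean())  # average pixel value of the frame
    cap.release()
    return count_rising_edges(means, threshold)
```

Counting rising edges rather than bright frames means a flash that lasts several frames is still counted once.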
Related
I am developing a painting app in which I'll have different images of irregular objects like animals, flowers, etc., and I want that when the user starts painting with a color, the color should not go outside the boundaries of that object. Please help me detect the boundaries of an irregularly shaped object.
What you're looking for is a 2d flood fill algorithm. It's fairly straightforward once you understand the recursive nature of the algorithm. Posting code here for the whole thing would take up far too much space. There's a great article on this here:
QuickFill Article
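A minimal flood-fill sketch, using an explicit stack instead of recursion (deep recursion overflows on large images) and plain Python lists standing in for pixel data:

```python
def flood_fill(grid, x, y, new_color):
    """Iterative 4-way flood fill on a 2D grid of color values.

    Fills the connected region of grid[y][x]'s color with new_color,
    stopping at any differently colored pixel (the object boundary).
    """
    old_color = grid[y][x]
    if old_color == new_color:
        return grid  # nothing to do; would loop forever otherwise
    height, width = len(grid), len(grid[0])
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if 0 <= cx < width and 0 <= cy < height and grid[cy][cx] == old_color:
            grid[cy][cx] = new_color
            # visit the four neighbours
            stack.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])
    return grid
```

Because the fill stops at any pixel whose color differs from the start pixel's, the object's boundary pixels naturally confine the paint.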
I just reread your question. The above would help, but you may just want to use a masking layer and an editable layer. The masking layer would be drawn over the editable layer, with complete transparency where the user can paint.
You need either a low-pass filter (detect, but don't change) or a flood-fill solution.
I want to create an application where a user can detect an unobstructed path using an Android mobile phone. I used a Gaussian filter for smoothing and then used canny edge detection to detect edges.
I thought if I could find the concentration of edges in the video I can direct the user towards an area with a low concentration of edges, hence a clear path.
Is there a way to find the concentration of the edges? I'm new to image processing so any help would be appreciated.
I'm using the OpenCV Android library for my implementation.
Thank you.
The edge detection filter will leave you with a grey-scale image showing where edges are, so you could just throw another smoothing/averaging filter on this to get an idea of the concentration of edges in the neighbourhood of each pixel.
This is a pretty big topic though, and there are a lot of better and more complex alternatives. You should probably look up academic papers, textbooks and tutorials which talk about 'image segmentation', 'region detection' and mention texture or edges.
Also, consider the cases where edges are not indicative of clearness of path, such as plain white walls and brick footpaths.
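The averaging idea above can be sketched as follows. This assumes the edge map is something like the output of cv2.Canny (edge pixels nonzero); instead of a per-pixel smoothing filter it splits the frame into a coarse grid and measures the fraction of edge pixels per cell, which is often enough for "steer toward the emptiest region":

```python
import numpy as np

def edge_density_grid(edge_map, rows=3, cols=3):
    """Split a binary edge map into rows x cols cells and return the
    fraction of edge pixels in each cell as a rows x cols array."""
    e = (np.asarray(edge_map) > 0).astype(float)
    h, w = e.shape
    density = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            cell = e[i * h // rows:(i + 1) * h // rows,
                     j * w // cols:(j + 1) * w // cols]
            density[i, j] = cell.mean()  # fraction of edge pixels in this cell
    return density

def clearest_cell(density):
    """(row, col) index of the cell with the fewest edges."""
    return np.unravel_index(np.argmin(density), density.shape)
```

The 3x3 grid size is an arbitrary choice; a finer grid, or a proper box filter over the edge map, gives a smoother density estimate at more cost.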
I was wondering whether it is possible to use the iOS gyroscope to detect if an object the camera is pointed at (a physical photo of someone) is straight. That is, using the iOS camera, could I tell whether the physical photo is straight? Does anyone know if this can be done?
If so, can someone please provide an example?
Thank you
Use the gravity property of CMDeviceMotion, which incorporates both accelerometer and gyroscope data.
Another approach would be to detect straight lines in the image, and see whether these are horizontal or vertical. In most scenes, the camera is oriented correctly when the most prominent straight lines are horizontal or vertical. You can do this using the Hough transform on an edge-filtered image.
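A sketch of the decision logic for that second approach. The line angles are assumed to come from something like cv2.HoughLines (its theta values converted to degrees); the 2-degree tolerance is a hypothetical choice:

```python
def tilt_deviation(angle_deg):
    """Signed deviation (degrees) of a line from the nearest horizontal
    or vertical orientation, in the range [-45, 45)."""
    return ((angle_deg + 45.0) % 90.0) - 45.0

def is_straight(line_angles_deg, tolerance=2.0):
    """Treat the scene as straight when the median absolute deviation of
    the detected lines from horizontal/vertical is within tolerance."""
    deviations = sorted(abs(tilt_deviation(a)) for a in line_angles_deg)
    median = deviations[len(deviations) // 2]
    return median <= tolerance
```

Using the median rather than the mean keeps one diagonal line (a shadow, a cable) from throwing off the estimate.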
I am new to OpenCV and I'm trying to count and locate broken or cracked biscuits
(those which are not perfect circles as they should be) in the image.
What kind of strategy should I follow?
Any suggestion can help to open my mind. Regards.
Your question is very abstract (it would be better if you provided a picture), but I can try to answer it.
First of all, you have to detect all the biscuits in your image. To do this, find the biscuit color in your picture (the HSV color space may suit this better) and convert the input image to a single-channel image (or matrix), where each element of the matrix is:
1 (or 255) if the pixel belongs to a biscuit,
0 if not.
The OpenCV function inRange can help you do this conversion.
Once the biscuits are detected, you can:
Use HoughCircles to detect normal biscuits (if a normal biscuit is round).
Find the contours of each biscuit (take a look at findContours) and compare each contour with a normal biscuit's contour (if it's not round) using cross-correlation or other methods (Euclidean distance, etc.).
Also look at the HoughCircles tutorial to detect just the circles, if your image contains no circles other than biscuits.
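One simple way to score the contour-comparison step, assuming the biscuits should be circular: compute each contour's circularity, 4πA/P², which is 1.0 for a perfect circle and drops for broken or cracked shapes. The 0.85 threshold is a hypothetical value to tune on your images:

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect circle, lower for irregular shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def is_broken(area, perimeter, threshold=0.85):
    """Flag a biscuit as broken when circularity falls below threshold
    (0.85 is a made-up starting value; tune it on real images)."""
    return circularity(area, perimeter) < threshold

def find_broken_biscuits(mask):
    """mask: single-channel output of cv2.inRange, biscuit pixels = 255."""
    import cv2  # OpenCV needed only for the contour extraction
    # OpenCV 4.x return signature; 3.x returns an extra leading value
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours
            if is_broken(cv2.contourArea(c), cv2.arcLength(c, True))]
```

cv2.contourArea and cv2.arcLength supply the area and perimeter per contour, so no template biscuit is needed for this particular test.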
I have a ball moving inside a cube, and I detect when it goes outside of the cube (with a bounding sphere and a bounding box).
Now I would like to detect from which side the ball goes out. Then I could redirect the ball in the correct direction. How can I do this with the ball's “world” matrix?
Should I keep track of the ball's coordinates myself, or should I deduce them from the world matrix?
I'd start over with the collisions. You have six planes (each a [point,normal unit vector] pair) and a sphere (a [point,radius] pair).
Check the point against each plane. To do this, subtract the plane's unit normal, scaled up by the radius of the sphere, from the point (Point -= PlaneUnitVector * radius).
Now, with some vector math, you can see which side of the plane it's on.
You'll then use the unit vector of the plane for the bounce calculation.
The next problem you'll run into is the case where you cross through more than one plane at a time.
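The per-plane test above can be sketched like this (plain tuples standing in for your engine's vector type; the inward-pointing unit normals are assumed):

```python
def signed_distance(point, plane_point, plane_normal):
    """Signed distance from point to the plane, measured along the
    plane's unit normal (positive on the side the normal points to)."""
    return sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))

def sphere_crossed_plane(center, radius, plane_point, plane_normal):
    """True when the sphere has crossed the plane: its center is closer
    to the plane than the radius (normal points into the cube)."""
    return signed_distance(center, plane_point, plane_normal) < radius

def reflect_velocity(velocity, plane_normal):
    """Bounce: reflect the velocity vector about the plane's unit normal,
    v' = v - 2 (v . n) n."""
    d = sum(v * n for v, n in zip(velocity, plane_normal))
    return tuple(v - 2.0 * d * n for v, n in zip(velocity, plane_normal))
```

Running the test against all six planes each frame also tells you *which* side was crossed, which answers the original question directly; the corner case of crossing two planes in one frame just means two of the six tests fire at once.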