I want to adjust an image the way the Curves tool in Photoshop does. It changes image color, contrast, etc. in each R, G, B channel or in all RGB channels at once.
Any idea how to do this task in Objective-C?
I found this link http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=68577&lngWId=-1 but it only adjusts a curve over the whole image, in VB, and doesn't support individual color channels the way Photoshop does.
The Curves tool in Photoshop is built on histogramming methods. Essentially, one computes the histogram by counting how many pixels of each possible value (the values that can be assigned are on the X axis of the histogram) there are throughout the whole image. This can be done separately to obtain the histogram for each color channel.
Look here for image histogramming
http://en.wikipedia.org/wiki/Image_histogram
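For example, with OpenCV in Python (a sketch; the question asks about Objective-C, but the idea is the same in any binding), the per-channel histograms of an 8-bit BGR image can be computed like this:

    import cv2

    img = cv2.imread("input.jpg")
    # 256-bin histogram over the value range [0, 256) for each BGR channel.
    hist_b = cv2.calcHist([img], [0], None, [256], [0, 256])
    hist_g = cv2.calcHist([img], [1], None, [256], [0, 256])
    hist_r = cv2.calcHist([img], [2], None, [256], [0, 256])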
After one has the histogram, a curve can be applied (to each color channel, if you like). The standard curve is a one-to-one, or linear, curve: when the actual pixel value is 10, the value assigned in your edited image is also 10.
One could imagine any curve, or even a random distribution. While there are many methods, a standard one is the log-based histogram method. It essentially looks at the image histogram and applies the steepest transform-curve slopes to the histogram regions with the highest input pixel counts, thus providing good contrast for the greatest number of pixels.
In terms of a curve, the curve you place on top of the histogram simply defines the mapping function from input pixel value to edited pixel value. You can apply a curve without computing a histogram, but the histogram is a good reference for your users so they know where to edit the curve for the best effect.
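A minimal sketch of such a curve applied through a lookup table, in Python/OpenCV (the control points and file names here are made up; in Objective-C you would build the same 256-entry table and map each pixel through it):

    import numpy as np
    import cv2

    def apply_curve(image, points, channel=None):
        """Apply a curve given as (input, output) control points to one
        channel of an 8-bit BGR image, or to all channels if channel is None."""
        xs, ys = zip(*points)
        # Interpolate the control points into a 256-entry lookup table.
        lut = np.interp(np.arange(256), xs, ys).astype(np.uint8)
        out = image.copy()
        if channel is None:
            return cv2.LUT(out, lut)          # same curve on every channel
        out[:, :, channel] = cv2.LUT(out[:, :, channel], lut)
        return out

    img = cv2.imread("input.jpg")
    # Lift the midtones of the red channel only (OpenCV stores BGR: red = 2).
    result = apply_curve(img, [(0, 0), (128, 160), (255, 255)], channel=2)
    cv2.imwrite("output.jpg", result)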
I am doing a project on detecting holes in roads. I am using a laser to project a beam onto the road and a camera to take an image of it. The image may look like this:
Now I want to process this image and report whether the line is straight or not, and if it is curved, how big the curve is.
I don't understand how to do this. I have searched a lot but can't find an appropriate approach. Can anyone help me with this?
This is rather complicated and your question is very broad, but let's have a try:
1. Perhaps you have to identify the dots in the pixel image. There are several options for doing this, but I'd smooth the image with a blur filter and then find the most-red pixels (which are believed to be the centers of the dots). Store these coordinates in a vector array (an array of x/y pairs).
2. I'd use a spline interpolation between the dots. This way one can simply get the local derivative of a curve touching each point. If the maximum of the first derivative is small, the dots are in a line. If you believe the dots belong to a single curve, the second derivative is your curvature.
For 1. you may also rely on libraries specialized in image processing (this is the image-processing part of your challenge). One such library is OpenCV.
For 2. I'd use some math toolkit, either Octave or a math library for a native language. A sketch of both steps follows below.
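Here is a rough sketch of both steps in Python, using OpenCV and SciPy (the red-dot heuristic and the assumption that the laser trace runs left to right across the frame are mine, not guaranteed to fit your images):

    import numpy as np
    import cv2
    from scipy.interpolate import CubicSpline

    img = cv2.imread("laser_line.jpg")
    blurred = cv2.GaussianBlur(img, (9, 9), 0)

    # 1. Crude dot/line detector: per image column, take the row whose pixel
    #    is the most red (red channel minus green channel).
    redness = blurred[:, :, 2].astype(int) - blurred[:, :, 1].astype(int)
    ys = np.argmax(redness, axis=0)          # row of the reddest pixel per column
    xs = np.arange(blurred.shape[1])

    # 2. Spline through the points; its derivatives describe the shape.
    spline = CubicSpline(xs, ys)
    slope = spline(xs, 1)                    # first derivative
    curvature = spline(xs, 2)                # second derivative
    print("max |slope|:", np.abs(slope).max())
    print("max |curvature|:", np.abs(curvature).max())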
There are several different ways of measuring the straightness of a line. Since your question is rather vague, it's impossible to say what will work best for you.
But here's my suggestion:
Use linear regression to calculate the best-fit straight line through your points, then calculate the mean-squared distance of each point from this line (straighter lines will give smaller results).
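A minimal sketch of that suggestion in NumPy, using the perpendicular point-to-line distance (one reasonable reading of "distance from the line"; the sample points are synthetic):

    import numpy as np

    def straightness(xs, ys):
        """Mean squared perpendicular distance of points from their
        least-squares line; straighter point sets give smaller values."""
        slope, intercept = np.polyfit(xs, ys, 1)   # best fit y = slope*x + intercept
        # Distance from (x, y) to the line slope*x - y + intercept = 0.
        dists = np.abs(slope * xs - ys + intercept) / np.sqrt(slope ** 2 + 1)
        return np.mean(dists ** 2)

    xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    ys = np.array([0.1, 0.9, 2.2, 2.8, 4.0])
    print(straightness(xs, ys))                # near 0 for nearly straight points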
You may want to read this paper; it is an interesting one for solving your problem.
As @urzeit suggested, you should first find the points as accurately as possible. There's really no way to give good advice on that without seeing real pictures, except maybe: make the task as easy as possible for yourself. For example, if you can set the camera to a very short shutter time (microseconds, if possible) and concentrate the laser energy into that same interval, the background will contribute less energy to the image brightness, and the laser spots will simply be bright spots on a dark background.
Measuring the linearity should be straightforward, though: "linearity" is just a different word for "linear correlation". So you can simply calculate the correlation between the X and Y values. As the pictures on the linked Wikipedia page show, correlation = 1 means all points are on a line.
If you want the actual line, you can simply use Total Least Squares.
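A sketch of both measures in NumPy (note that the correlation is undefined for perfectly horizontal or vertical point sets, where one coordinate is constant, while the total-least-squares fit still works):

    import numpy as np

    def linearity(xs, ys):
        """Pearson correlation; |r| = 1 means the points are collinear."""
        return np.corrcoef(xs, ys)[0, 1]

    def total_least_squares(xs, ys):
        """Line minimizing perpendicular distances, via SVD of the centered
        point cloud. Returns a point on the line and its direction vector."""
        pts = np.column_stack([xs, ys])
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)
        return centroid, vt[0]               # vt[0] is the principal direction

    xs = np.array([0.0, 1.0, 2.0, 3.0])
    ys = np.array([0.0, 1.1, 1.9, 3.0])
    print("correlation:", linearity(xs, ys))
    centroid, direction = total_least_squares(xs, ys)
    print("line through", centroid, "in direction", direction)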
I want to create an application where a user can detect an unobstructed path using an Android mobile phone. I used a Gaussian filter for smoothing and then Canny edge detection to detect edges.
I thought that if I could find the concentration of edges in the video, I could direct the user towards an area with a low concentration of edges, hence a clear path.
Is there a way to find the concentration of the edges? I'm new to image processing, so any help would be appreciated.
I'm using the OpenCV Android library for my implementation.
Thank you.
The edge detection filter will leave you with a grey-scale image showing where edges are, so you could just throw another smoothing/averaging filter on this to get an idea of the concentration of edges in the neighbourhood of each pixel.
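A rough sketch of that idea in Python/OpenCV (the question uses the Android bindings, but the calls map across; the Canny thresholds and window size here are guesses to tune):

    import cv2
    import numpy as np

    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # Averaging the binary edge map over a large window gives, per pixel,
    # the fraction of edge pixels in its neighbourhood ("edge density").
    density = cv2.blur(edges.astype(np.float32) / 255.0, (51, 51))

    # E.g. steer towards the image column with the lowest mean density.
    print("clearest column:", int(np.argmin(density.mean(axis=0))))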
This is a pretty big topic though, and there are a lot of better and more complex alternatives. You should probably look up academic papers, textbooks and tutorials which talk about 'image segmentation', 'region detection' and mention texture or edges.
Also, consider the cases where edges are not indicative of clearness of path, such as plain white walls (few edges, yet an obstacle) and brick footpaths (many edges, yet clear).
Can anyone tell me how to implement 2D to 3D face reconstruction in OpenGL
I am new to 3D graphics and rendering. Can anyone tell me how to do 2D-to-3D face reconstruction from a 2D image? What algorithms are used, how can it be done using OpenGL, and which functions are relevant?
Input: 2D frontal face image
Output: 3D reconstruction of the input
Can anyone tell me how to implement 2D to 3D face reconstruction in OpenGL
You can't. OpenGL is a 3D rasterizing drawing API. It's used to turn 3D geometry into 2D pictures, i.e. it goes in the opposite direction of what you want to do.
Input: 2D frontal face image
Output: 3D reconstruction of the input
This is the real world, not CSI!
You need at least some additional input to turn this into 3D data. Each pixel of the image is a point in 2D space, i.e. it provides two variables for an equation, and you want to recover three. In other words, this is an underdetermined system of equations.
Additional input could be:
Time: if you have a movie, the objects' positions in space can be inferred from how the visible objects move over time.
Images from multiple angles.
Taking the picture under special illumination, so that each pixel's color provides additional information about depth.
Research on this has been done at the University of Washington. I recommend looking at all their papers!
A trained observer model, i.e. what our brain does to infer an object's depth in a 2D picture, is very unreliable. Just look at any perspective illusion to understand why. Our brain is much better at interpreting images than any computer program, and it can still be fooled.
I am new to OpenCV, and I'm trying to count and locate broken or cracked biscuits (those which are not perfect circles, as they should be) in an image.
What kind of strategy should I follow?
Any suggestion can help to open my mind. Regards.
Your question is very abstract (it would be better if you provided a picture), but I can try to answer it.
First of all you have to detect all the biscuits in your image. To do this you have to find the biscuit color (the HSV color space may suit your aim better) in your picture and convert the input image to a single-channel image (or matrix), where each element of this matrix is:
1 (or 255) if the pixel belongs to a biscuit;
0 if not.
The OpenCV function inRange may help you perform such a conversion.
Once the biscuits are detected you can:
Use HoughCircles to detect normal biscuits (since a normal biscuit is round).
Find the contours of each biscuit (take a look at findContours) and compare each contour with that of a normal biscuit (when it's not round) using cross-correlation or other methods (Euclidean distance, etc.).
Also look at the HoughCircles tutorial to detect just the circles, if your image doesn't contain other circles (besides biscuits). A sketch of the contour route follows below.
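Here is a rough sketch of that contour route in Python, assuming OpenCV 4.x. The HSV range, the area cut-off and the circularity threshold are placeholders to tune on real images, and it uses a circularity test rather than cross-correlation, which is a simpler starting point:

    import cv2
    import numpy as np

    img = cv2.imread("biscuits.jpg")
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Keep only pixels in an assumed biscuit colour range (tune per image).
    mask = cv2.inRange(hsv, (10, 60, 60), (30, 255, 255))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    broken = 0
    for c in contours:
        area = cv2.contourArea(c)
        if area < 500:                       # ignore small noise blobs
            continue
        perimeter = cv2.arcLength(c, True)
        # Circularity = 4*pi*area / perimeter^2 equals 1 for a perfect circle.
        circularity = 4 * np.pi * area / (perimeter * perimeter)
        if circularity < 0.8:                # guessed threshold for "broken"
            broken += 1
            x, y, w, h = cv2.boundingRect(c)
            print("broken biscuit near", (x + w // 2, y + h // 2))

    print("broken biscuits:", broken)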
What are the applications of matrix addition in image processing? Also, is there any application in image processing that modifies the current pixel value based on the values of neighbouring pixels?
Strict addition is rare, but subtraction is more common, as when you subtract a filtered version of an image from the image itself. For example, you may subtract an image's low-pass-filtered version from the original and obtain its details.
Edit 1: Examples: image addition (averaging) for noise suppression. Subtraction for change enhancement.
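Minimal sketches of those three uses in Python/OpenCV (file names are placeholders):

    import cv2
    import numpy as np

    # Addition (averaging) for noise suppression: average aligned exposures.
    frames = [cv2.imread(f).astype(np.float32)
              for f in ("shot1.png", "shot2.png", "shot3.png")]
    denoised = (sum(frames) / len(frames)).astype(np.uint8)

    # Subtraction for detail extraction: original minus its low-pass version.
    img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)
    detail = cv2.subtract(img, cv2.GaussianBlur(img, (15, 15), 0))

    # Subtraction for change enhancement: absolute difference of two frames.
    a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
    b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
    change = cv2.absdiff(a, b)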
Lots of image processing algorithms modify each pixel based on its neighbors; many digital image filters work this way.
Take for example the Prewitt filter, which emphasizes horizontal edges. Its kernel is:

    [  1   1   1
       0   0   0
      -1  -1  -1 ]
Here the current pixel value is replaced by the sum of its north-west, north and north-east neighbors, minus the sum of its south-west, south and south-east neighbors.
Similar kernels, for other applications, include the average, Gaussian, Laplacian and Sobel kernels. It is also possible to compute fast, rough distance maps this way.
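Applying such a kernel is a one-liner in, e.g., Python/OpenCV (a sketch; the file names are placeholders):

    import cv2
    import numpy as np

    img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)

    # The horizontal-edge Prewitt kernel described above.
    kernel = np.array([[ 1,  1,  1],
                       [ 0,  0,  0],
                       [-1, -1, -1]], dtype=np.float32)

    # filter2D correlates the kernel with the image; compute in float so
    # negative responses survive, then take the saturated absolute value.
    response = cv2.filter2D(img.astype(np.float32), -1, kernel)
    cv2.imwrite("horizontal_edges.png", cv2.convertScaleAbs(response))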
Note 1: By the way, your acceptance rate is very low. People here may get the impression that you are ungrateful for the answers and therefore won't bother to answer at all. Since you're new, I'm sure it's just because you need to understand how the system works. For any answer that is useful, click the up arrow. At some point, choose one of the answers as the best one and accept it (the green tick).
Note 2: To be honest, the question is actually quite vague and would fill a full book on digital image processing. In the future, try to be more specific (and, for Stack Overflow, keep it within programming).
All the best.