How can I find white/black pixels between 2 points in javacv? I have tried cvInitLineIterator, but I don't know if I am on the right track.
Many thanks for considering my request.
Convert your image data to grayscale
Get the list of pixels between your two points using cvInitLineIterator
Then check each pixel value along the line: 0 is black (#000000) and the maximum value for your pixel depth (255 for an 8-bit image) is white.
I think you are moving in the right direction.
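If the cvInitLineIterator bindings give you trouble, the same idea is easy to do by hand: walk the line with Bresenham's algorithm and test each pixel. A minimal plain-Java sketch over a grayscale BufferedImage (the class and method names are just placeholders):

    import java.awt.image.BufferedImage;

    public class LineScan {
        // Walks every pixel between (x0,y0) and (x1,y1) with Bresenham's
        // algorithm and reports whether it is black or white. Assumes
        // 'img' already holds 8-bit grayscale data.
        static void scanLine(BufferedImage img, int x0, int y0, int x1, int y1) {
            int dx = Math.abs(x1 - x0), dy = Math.abs(y1 - y0);
            int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
            int err = dx - dy;
            while (true) {
                int gray = img.getRGB(x0, y0) & 0xFF; // one channel suffices in grayscale
                if (gray == 0) {
                    System.out.println("(" + x0 + "," + y0 + ") is black");
                } else if (gray == 255) {
                    System.out.println("(" + x0 + "," + y0 + ") is white");
                }
                if (x0 == x1 && y0 == y1) break;
                int e2 = 2 * err;
                if (e2 > -dy) { err -= dy; x0 += sx; }
                if (e2 <  dx) { err += dx; y0 += sy; }
            }
        }
    }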
I'm trying to blindly detect signals in a spectrum.
One way that came to mind is to detect rectangles in the waterfall (a 2D matrix that can be interpreted as an image).
Is there any fast way (on the order of 0.1 seconds) to find the center and width of all the horizontal rectangles in an image? (The heights of the rectangles do not matter to me.)
An example image is attached. (Note that I know all rectangles are horizontal.)
I would appreciate any other suggestions for this purpose.
E.g., I want the algorithm to give me 9 centers and 9 widths for the above image.
Since the rectangles are aligned, you can do this quite easily and efficiently (this is not the case with unaligned rectangles, since they would not be clearly separated). The idea is to first compute the average color of each row and of each column. You should get something like this:
Then you can subtract the background color (blue), compute the luminance, and apply a threshold. You can remove some artifacts with a median/blur filter beforehand.
Finally, you can scan the resulting 1D arrays of binary values to locate where each rectangle starts/stops. The center of each rectangle is ((x_start + x_end) / 2, (y_start + y_end) / 2).
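A minimal Java sketch of the column profile (the row profile is symmetric). Here the background subtraction is folded into a luminance threshold that you would tune for your data; findRuns and thresh are placeholder names:

    import java.awt.image.BufferedImage;

    public class RectFinder {
        // Average luminance of each column, then a run-length scan of the
        // thresholded 1D profile. Each run above 'thresh' is the horizontal
        // extent of one rectangle.
        static void findRuns(BufferedImage img, double thresh) {
            int w = img.getWidth(), h = img.getHeight();
            double[] colLum = new double[w];
            for (int x = 0; x < w; x++) {
                double sum = 0;
                for (int y = 0; y < h; y++) {
                    int rgb = img.getRGB(x, y);
                    int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                    sum += 0.299 * r + 0.587 * g + 0.114 * b;
                }
                colLum[x] = sum / h;
            }
            int start = -1;
            for (int x = 0; x <= w; x++) {          // <= w flushes the last run
                boolean on = x < w && colLum[x] > thresh;
                if (on && start < 0) start = x;
                else if (!on && start >= 0) {
                    int end = x - 1;
                    System.out.println("center x = " + (start + end) / 2.0
                            + ", width = " + (end - start + 1));
                    start = -1;
                }
            }
        }
    }

Running the same scan over the row averages gives you y_start/y_end, and combining the two profiles yields the centers.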
I'm reading a paper about image processing and came across this color histogram:
But I'm not sure how to interpret it. The 3 different curves are for red, green, and blue. But what is on the X- and Y-axis? My guess would be the X-axis going from 0 to 255 for the 'intensity' of the color, and the Y-axis the number of pixels in the image that have this intensity. Could anyone confirm this or correct me if I'm wrong?
If I remember correctly (someone please correct me if I am wrong), the X-axis represents the possible values of a color in one of the RGB channels (a value in the [0, 255] interval), and the Y-axis represents the number of pixels having that value.
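That is the standard reading. For what it's worth, here is a small Java sketch that builds exactly such a histogram (names are illustrative):

    import java.awt.image.BufferedImage;

    public class Hist {
        // hist[c][v] counts how many pixels have value v (0..255) in
        // channel c (0 = R, 1 = G, 2 = B) -- the Y-axis of the plot.
        static int[][] histogram(BufferedImage img) {
            int[][] hist = new int[3][256];
            for (int y = 0; y < img.getHeight(); y++) {
                for (int x = 0; x < img.getWidth(); x++) {
                    int rgb = img.getRGB(x, y);
                    hist[0][(rgb >> 16) & 0xFF]++;  // red
                    hist[1][(rgb >> 8) & 0xFF]++;   // green
                    hist[2][rgb & 0xFF]++;          // blue
                }
            }
            return hist;
        }
    }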
A light field captures the scene from slightly different viewpoints. This means I would have two images of the same scene with a slight shift, as shown in the following figure:
Assume the red squares in the images above are pixels. I know that the spatial difference between those two pixels is a shift. Nevertheless, what other information do these two pixels give us in terms of scene radiance? I mean, is there a way to find (or compute) the difference in image irradiance values between those two points?
Look at color space representations other than RGB. Some of them have explicit channel(s) carrying the luminance information of a pixel.
A variant of the same idea is to convert to a black-and-white (grayscale) image and examine the pixel values.
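As an illustration, here is the Rec. 601 luminance of a pixel in Java (the same quantity as the Y channel of YCbCr). Comparing luminance(imgA.getRGB(xa, ya)) with luminance(imgB.getRGB(xb, yb)) gives the difference in recorded brightness between the two points, which relates to irradiance only up to the camera's response curve:

    public class Luma {
        // Rec. 601 luminance of one packed RGB pixel (as returned by
        // BufferedImage.getRGB); result is in [0, 255].
        static double luminance(int rgb) {
            int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
            return 0.299 * r + 0.587 * g + 0.114 * b;
        }
    }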
I am trying to apply histogram normalization to create a dense color histogram.
Split the channels into R, G, B
Normalize the individual histogram
Merge
I think these are the usual steps; if I am wrong, please let me know. Now,
for a rainbow image as shown below, I get a max of 255 and a min of 0 for all 3 channels. Using the formula
(Pixel - Min) / (Max - Min) * 255
I get the same image back as the original. What is the critical step that I am missing? Please advise me. Thank you!
Ref: http://www.roborealm.com/help/Normalize.php (the reference I used).
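Here is roughly what I am doing per channel value (a sketch; the method name is just a placeholder):

    public class Normalize {
        // Min/max stretch of one channel value. With min = 0 and max = 255
        // this returns 'pixel' unchanged, which is exactly what I observe.
        static int normalize(int pixel, int min, int max) {
            return (pixel - min) * 255 / (max - min);
        }
    }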
White = (255,255,255). Black = (0,0,0). So your program finds the white background, and the black line in the bottom right.
Remove the white and change it to black. Then make your program ignore black.
Images containing pure white and pure black pixels cannot be normalized as such; your formula gives you the same values back. Try ignoring all white and black pixels and normalizing the remaining pixels one by one.
As far as I can see, you have a well-distributed image across all channels already, so normalizing this one may not work well anyway.
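If you still want to try it, here is a sketch of that suggestion in Java, assuming an 8-bit RGB BufferedImage ('shift' selects the channel: 16 = R, 8 = G, 0 = B; names are placeholders):

    import java.awt.image.BufferedImage;

    public class StretchChannel {
        // Finds the min/max of one channel while ignoring pure black (0)
        // and pure white (255) pixels, then stretches the channel to [0, 255].
        static void stretch(BufferedImage img, int shift) {
            int min = 255, max = 0;
            int w = img.getWidth(), h = img.getHeight();
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    int v = (img.getRGB(x, y) >> shift) & 0xFF;
                    if (v == 0 || v == 255) continue;   // skip pure black/white
                    if (v < min) min = v;
                    if (v > max) max = v;
                }
            if (max <= min) return;                      // nothing to stretch
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    int rgb = img.getRGB(x, y);
                    int v = (rgb >> shift) & 0xFF;
                    int nv = Math.min(255, Math.max(0, (v - min) * 255 / (max - min)));
                    img.setRGB(x, y, (rgb & ~(0xFF << shift)) | (nv << shift));
                }
        }
    }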
I've tried -noise radius and -noise geometry and they don't seem to do what I want at all. I have some b&w images (TIFF G4 Fax compression) with lots of noise around the characters. This noise takes the form of pixel blobs that are 1 pixel wide in most cases.
My desire is to do the following 3 steps (in this order):
Whiteout all black pixels that are 1 pixel wide (white pixels to the left and right)
Whiteout all black pixels that are 1 pixel tall (white pixels above and below)
Whiteout all black pixels that are 1 pixel wide (white pixels to the left and right)
Do I have to write code to do this, or can ImageMagick pull it off? If it can, how do you specify the geometry to do it?
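In case I do end up writing code, here is roughly what I have in mind for the three passes (a plain-Java sketch over a black-and-white BufferedImage; names are placeholders, and note that it modifies the image in place, so removals can cascade within a pass):

    import java.awt.image.BufferedImage;

    public class Whiteout {
        static final int WHITE = 0xFFFFFF, BLACK = 0x000000;

        // One pass: turn every black pixel white when both horizontal
        // (horizontal = true) or both vertical neighbours are white.
        static void whiteout(BufferedImage img, boolean horizontal) {
            int w = img.getWidth(), h = img.getHeight();
            int dx = horizontal ? 1 : 0, dy = horizontal ? 0 : 1;
            for (int y = dy; y < h - dy; y++)
                for (int x = dx; x < w - dx; x++) {
                    if ((img.getRGB(x, y) & WHITE) != BLACK) continue;
                    boolean a = (img.getRGB(x - dx, y - dy) & WHITE) == WHITE;
                    boolean b = (img.getRGB(x + dx, y + dy) & WHITE) == WHITE;
                    if (a && b) img.setRGB(x, y, 0xFFFFFFFF);
                }
        }

        static void run(BufferedImage img) {
            whiteout(img, true);   // step 1: 1 pixel wide
            whiteout(img, false);  // step 2: 1 pixel tall
            whiteout(img, true);   // step 3: 1 pixel wide again
        }
    }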
Lacking good answers here, I posted this question to the ImageMagick forum, and their response was really good. You can read it here: ImageMagick Forum.
Morphology proved to be the best answer.
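For reference, the morphology boils down to dilating the white background by one pixel (which erases 1-pixel-wide black features) and then eroding it back, i.e. a morphological close of the black specks. A dependency-free Java sketch of that operation (in practice a library call does this; names are placeholders):

    import java.awt.image.BufferedImage;

    public class CloseBlackSpecks {
        // 3x3 neighbourhood pass over a black/white image. If anyWhite is
        // true, a pixel becomes white when ANY neighbour is white (dilating
        // the white background); otherwise it stays white only when ALL
        // neighbours are white (eroding it back).
        static BufferedImage pass(BufferedImage src, boolean anyWhite) {
            int w = src.getWidth(), h = src.getHeight();
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    boolean white = !anyWhite;
                    for (int j = -1; j <= 1; j++)
                        for (int i = -1; i <= 1; i++) {
                            int xx = Math.min(w - 1, Math.max(0, x + i));
                            int yy = Math.min(h - 1, Math.max(0, y + j));
                            boolean nw = (src.getRGB(xx, yy) & 0xFFFFFF) != 0;
                            if (anyWhite) white |= nw; else white &= nw;
                        }
                    out.setRGB(x, y, white ? 0xFFFFFF : 0x000000);
                }
            return out;
        }

        // "Close" the black specks: dilate the white, then erode it back.
        static BufferedImage close(BufferedImage src) {
            return pass(pass(src, true), false);
        }
    }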
Blur then sharpen would be the normal technique for speckle noise.
ImageMagick can do both of these; you might have to play with the amount of blurring.
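If you want to prototype the same thing in code first, java.awt.image.ConvolveOp can do it. A sketch with a 3x3 box blur followed by a standard sharpen kernel, assuming a TYPE_INT_RGB image (the kernel weights are just a starting point to play with):

    import java.awt.image.BufferedImage;
    import java.awt.image.ConvolveOp;
    import java.awt.image.Kernel;

    public class BlurSharpen {
        // Blur then sharpen: the blur spreads isolated speckles out, and
        // the sharpen restores contrast on the larger strokes.
        static BufferedImage blurThenSharpen(BufferedImage src) {
            float n = 1f / 9f;
            Kernel blur = new Kernel(3, 3, new float[] {
                n, n, n,  n, n, n,  n, n, n });
            Kernel sharpen = new Kernel(3, 3, new float[] {
                 0, -1,  0,
                -1,  5, -1,
                 0, -1,  0 });
            BufferedImage blurred =
                new ConvolveOp(blur, ConvolveOp.EDGE_NO_OP, null).filter(src, null);
            return new ConvolveOp(sharpen, ConvolveOp.EDGE_NO_OP, null).filter(blurred, null);
        }
    }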