I could not find any good explanation of the sigmoidal-contrast parameter. For example, suppose we have a command like this:
$ convert -channel B -gamma 1.25 -channel G -gamma 1.25 -channel RGB -sigmoidal-contrast 25x25% 564.tif 564-adj.tif
What does 25x25% mean? What is the correct syntax for this parameter? Can we have values like LxMxN%? Do these values have to be integers? Thanks!
Looking at the following sigmoidal curve between the input (horizontal) and output (vertical) axes:
-sigmoidal-contrast c1,c2%
c1 is the contrast, i.e. the slope of the curve at its midpoint. A c1 of 0 gives a straight line from the lower left to the upper right of the diagram. A larger c1 makes the center part, where the curve is straightest, more vertical.
c2 is the horizontal center point of the curve, in the range 0 to 100%: at 0% the curve is shifted left so that the straight part is on the left of the figure, and at 100% it is shifted right so that the straight part is on the right.
You can see this from the diagrams of my script, sigmoidal, in terms of using the sigmoidal curve as a means of adjusting brightness and contrast with little clipping. See http://www.fmwconcepts.com/imagemagick/sigmoidal/index.php
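If it helps to see the math: as far as I know, ImageMagick's sigmoidal contrast is a scaled logistic curve. A minimal Python sketch of that curve (numpy only, intensities normalised to 0-1, function name is my own):

import numpy as np

def sigmoidal_contrast(x, contrast, center):
    # x: input intensity in [0, 1]; contrast is c1, center is c2 as a fraction (0.25 for 25%).
    # Logistic curve rescaled so that f(0) = 0 and f(1) = 1.
    def s(v):
        return 1.0 / (1.0 + np.exp(contrast * (center - v)))
    return (s(x) - s(0.0)) / (s(1.0) - s(0.0))

x = np.linspace(0.0, 1.0, 11)
print(np.round(sigmoidal_contrast(x, 25, 0.25), 3))   # steepest part sits near x = 0.25

Plotting it for a few values of contrast and center reproduces the kind of curves shown on that page.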
I think the answer here is going to be quite subjective. As I said in the comments, there is an already excellent explanation by Anthony Thyssen here.
As far as I understand, there are two parameters which are:
the amount of contrast increase, with 0 being least and 10 being most, and
the centre-point about which to increase the contrast, which is on a scale of 0-100%, where 50% would increase the contrast centred around mid-grey (i.e. 128 on a scale of 0-255).
convert input.png -sigmoidal-contrast <AMOUNT>,<CENTRE>% result.png
Let's look at the <CENTRE> value first. In general, you would want to draw a line of increasing contrast through the range of pixel brightnesses that interests you. The histogram is the easiest way I know of to determine where that is. So, if your histogram looks like this:
then I would suggest you use something like 25% for the <CENTRE> value. Whereas if your histogram looks more like this:
then you would probably want to set the <CENTRE> to 75%. So, in general, 50% is not an unreasonable default for the <CENTRE> parameter.
The <AMOUNT> parameter is going to be very subjective and vary from photo to photo. If, as I suspect, you are analysing satellite imagery, you can probably experiment to find a sensible value and then bulk-apply it to your images from the same series. I would start with 3-5 for normal photos maybe.
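Coming back to the <CENTRE> choice: if you want to read it off the histogram programmatically, a rough Python sketch (Pillow and numpy assumed; taking the histogram peak is just a starting point, not a definitive recipe) could be:

import numpy as np
from PIL import Image

img = np.array(Image.open('input.png').convert('L'))   # brightness as greyscale
hist, _ = np.histogram(img, bins=256, range=(0, 256))
peak = int(np.argmax(hist))                  # brightness level with the most pixels
centre = round(100.0 * peak / 255.0)         # express it as a percentage
print(f'-sigmoidal-contrast 5,{centre}%')    # pick <AMOUNT> by experiment, e.g. 3-5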
I'd like to implement something like the Levels function of GIMP in the C language.
What equation is used to implement the input-levels function of GIMP?
I assume that the original image's value range is 0~255.
But if I adjust the input levels to 0~206 from 0~255, can I just do this?
adjusted pixel = Input pixel /255 * 206 ?
But I think this doesn't make sense, because the output range gets darker than before. How does the output image get brighter than before when I adjust the input levels?
Easy to experiment. Create an image with a 256px-wide canvas. Create a black-to-white RGB gradient across it. With the Pointer dialog (Windows>Dockable Dialogs>Pointer), it is easy to check that the pixels with horizontal coordinate x also have R=G=B=x (with minor variations).
Now apply the Levels tool. If you set the white point at 192 (255*3/4) then you can check that the pixels at x now have R=G=B=(x*4)/3 (this shows that the function is linear). In the Levels tool you can also hit "Edit these Settings as Curves" to enter the Curves tool, and you will see that the corresponding curve is actually a straight line.
PS: The middle handle is the "gamma". Experimentally, you put it at the input value that will be mapped to the average of the black and white points.
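To tie this back to the original question: the input-levels mapping is, to my understanding, a linear stretch between the black and white input points followed by the gamma curve, which is why lowering the white point to 206 makes the image brighter (you divide by 206 rather than multiply by it). A small Python sketch of that mapping (my own function name, values on a 0-255 scale):

import numpy as np

def input_levels(pixel, black=0, white=206, gamma=1.0):
    # Stretch [black, white] to [0, 1], clip, apply gamma, then rescale to 0-255.
    x = np.clip((np.asarray(pixel, dtype=float) - black) / (white - black), 0.0, 1.0)
    return x ** (1.0 / gamma) * 255.0

print(input_levels(206))   # 255.0: an input of 206 is mapped to full white
print(input_levels(103))   # ~127.5: the midpoint of the new input range becomes mid-grey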
I would like to subtract one color from another. For example, I have two 100x100-pixel images, one with color R:236 G:226 B:43 and another with R:63 G:85 B:235. I would like to cut the color R:236 G:226 B:43 from R:63 G:85 B:235. But I know it can't be subtracted mathematically, channel by channel (R:236-63, G:226-85, B:43-235), because values less than 0 or more than 255 are not defined.
I found another color space, the RYB color space, but I don't know how it really works.
Thank you for your help.
You cannot actually subtract colors. But you surely can detect their difference. I suppose this is what you need, anyway.
Here are some thoughts and remarks:
Convert your images to HSV colorspace, which transforms RGB values to Hue, Saturation and Brightness (Value).
All your images should be around a yellowish color (near 60 deg. on the Hue circle) so they should all have about the same Hue with minor differences.
Typically, if all images are taken at constant lighting conditions, they should have the same Value (brightness).
Saturation, which corresponds to the mixture of white in a color, typically represents how intense you perceive a color to be. This would typically be about the same value for all your images in constant lighting conditions.
According to your first description, the main difference should be detected in the Hue channel.
A good thing about HSV is that H (hue) is represented by a counterclockwise circle and colors are just positions on this circle, so positive and negative values all make sense (search google for a description of HSV colorspace to get a view of how it looks and works).
You may either detect differences by a subtraction, which will give you a value that is either positive or negative, or take the absolute value of the subtraction, which will just give a measure of the difference of the two Hue values (but without any information on the direction of the difference). If you need the direction of the difference you should stick to a plain subtraction.
For example:
Hue_1 - Hue_2 = Hue_3 (typically a small value for your problem)
if Hue_3 > 0 this means that Hue_1 is a bit towards Green;
if Hue_3 < 0 this means that Hue_1 is a bit towards Red.
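A small Python sketch of that comparison (OpenCV assumed; note that OpenCV stores Hue as 0-179, i.e. degrees divided by two, and the wrap-around handling below is my own addition so the signed difference stays meaningful on the circle):

import cv2
import numpy as np

def hue_of(bgr_color):
    # OpenCV expects BGR order; returns Hue in OpenCV's 0-179 range.
    pixel = np.uint8([[bgr_color]])
    return float(cv2.cvtColor(pixel, cv2.COLOR_BGR2HSV)[0, 0, 0])

def hue_difference(h1, h2):
    # Signed difference on the hue circle, wrapped into (-90, 90] in OpenCV units.
    d = (h1 - h2) % 180
    return d - 180 if d > 90 else d

h1 = hue_of([43, 226, 236])    # R:236 G:226 B:43 written as BGR
h2 = hue_of([235, 85, 63])     # R:63 G:85 B:235 written as BGR
print(hue_difference(h1, h2))  # sign gives the direction: > 0 means Hue_1 sits at a larger hue angle than Hue_2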
Of course you may also need to take a look at the differences in the other channels, S and V to see if colors are more saturated or more bright, but I cannot be sure you need to do this since we haven't seen any images here.
Of course you can do a lot more sophisticated things, like applying clustering or classification techniques on the detected hues and classifying them into classes according to your problem's needs...
Given an image (like the one given below) I need to convert it into a binary image (black and white pixels only). This sounds easy enough, and I have tried with two thresholding functions. The problem is I can't get perfect edges using either of these functions. Any help would be greatly appreciated.
The filters I have tried are the Euclidean distance in the RGB and HSV spaces.
Sample image:
Here it is after running an RGB threshold filter. (At 40%; there are more artefacts beyond this.)
Here it is after running an HSV threshold filter. (at 30% the paths become barely visible but clearly unusable because of the noise)
The code I am using is pretty straightforward. Change the input image to the appropriate color spaces and check the Euclidean distance from the black color.
sqrt(R*R + G*G + B*B)
since I am comparing with black (0, 0, 0)
Your problem appears to be the variation in lighting over the scanned image which suggests that a locally adaptive thresholding method would give you better results.
The Sauvola method calculates the value of a binarized pixel based on the mean and standard deviation of pixels in a window of the original image. This means that if an area of the image is generally darker (or lighter) the threshold will be adjusted for that area and (likely) give you fewer dark splotches or washed-out lines in the binarized image.
http://www.mediateam.oulu.fi/publications/pdf/24.p
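For reference, the Sauvola threshold for a pixel is usually written as T = m * (1 + k * (s / R - 1)), where m and s are the local mean and standard deviation, k is typically around 0.2-0.5 and R is the dynamic range of the standard deviation (128 for 8-bit images). A naive, slow Python sketch just to show the formula (window size and k are placeholders to tune):

import numpy as np

def sauvola_binarize(gray, window=25, k=0.34, R=128.0):
    # For each pixel, threshold on the local mean and standard deviation of the
    # surrounding window. No integral images, so this is O(window^2) per pixel.
    pad = window // 2
    padded = np.pad(gray.astype(float), pad, mode='reflect')
    out = np.zeros(gray.shape, dtype=np.uint8)
    for y in range(gray.shape[0]):
        for x in range(gray.shape[1]):
            win = padded[y:y + window, x:x + window]
            t = win.mean() * (1.0 + k * (win.std() / R - 1.0))
            out[y, x] = 255 if gray[y, x] > t else 0
    return out

(scikit-image also ships skimage.filters.threshold_sauvola, which is far faster than this double loop.)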
I also found a method by Shafait et al. that implements the Sauvola method with greater time efficiency. The drawback is that you have to compute two integral images of the original, one at 8 bits per pixel and the other potentially at 64 bits per pixel, which might present a problem with memory constraints.
http://www.dfki.uni-kl.de/~shafait/papers/Shafait-efficient-binarization-SPIE08.pdf
I haven't tried either of these methods, but they do look promising. I found Java implementations of both with a cursory Google search.
Running an adaptive threshold over the V channel in the HSV color space should produce brilliant results. Best results would come with a window size larger than 11x11; don't forget to choose a negative value for the threshold constant.
Adaptive thresholding basically is:
if (Pixel value + constant > Average pixel value in the window around the pixel)
    Pixel_Binary = 1;
else
    Pixel_Binary = 0;
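In OpenCV terms (an assumption on my part that OpenCV is available), that recipe on the V channel looks roughly like this:

import cv2

img = cv2.imread('input.png')
v = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]    # V channel of HSV
binary = cv2.adaptiveThreshold(v, 255,
                               cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY,
                               blockSize=15,          # larger than 11x11, as suggested
                               C=-2)                  # negative constant, as suggested
cv2.imwrite('binary.png', binary)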
Due to the noise and the illumination variation you may need adaptive local thresholding (thanks to Beaker for his answer too).
Therefore, I tried the following steps:
Convert it to grayscale.
Do mean or median local thresholding; I used 10 for the window size and 10 for the intercept constant and got this image (smaller values might also work):
Please refer to http://homepages.inf.ed.ac.uk/rbf/HIPR2/adpthrsh.htm if you need more information on this technique.
To make sure the thresholding was working fine, I skeletonized it to see if there is a line break. This skeleton may be the one needed for further processing.
To get rid of the remaining noise you can just find the longest connected component in the skeletonized image.
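If scikit-image is an option (my assumption for the library), the skeleton-and-largest-component cleanup can be sketched like this:

import numpy as np
from skimage.morphology import skeletonize
from skimage.measure import label

def largest_component_skeleton(binary):
    # binary: boolean array from the thresholding step (True = foreground path).
    skeleton = skeletonize(binary)
    labels = label(skeleton, connectivity=2)   # 8-connected components
    if labels.max() == 0:
        return skeleton
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                               # ignore the background label
    return labels == sizes.argmax()            # keep only the largest component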
Thank you.
You probably want to do this as a three-step operation.
use leveling, not just thresholding: take the input and scale the intensities (gamma correct) with parameters that simply dull the mid tones, without removing the darks or the lights (your RGB threshold is too strong, for instance; you lost some of your lines).
edge-detect the resulting image using a small kernel convolution (5x5 for binary images should be more than enough). Use a simple [1 2 3 2 1 ; 2 3 4 3 2 ; 3 4 5 4 3 ; 2 3 4 3 2 ; 1 2 3 2 1] kernel (normalised); see the sketch after this list.
threshold the resulting image. You should now have a much better binary image.
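A literal Python sketch of those three steps (scipy and Pillow assumed; the gamma exponent and the final threshold are placeholders to tune, and the kernel is the one suggested above):

import numpy as np
from scipy.ndimage import convolve
from PIL import Image

img = np.array(Image.open('input.png').convert('L')) / 255.0

# 1. Leveling: a gamma tweak that dulls the mid tones without crushing darks or lights.
leveled = img ** 1.5

# 2. Convolve with the normalised 5x5 kernel from step 2.
k = np.array([[1, 2, 3, 2, 1],
              [2, 3, 4, 3, 2],
              [3, 4, 5, 4, 3],
              [2, 3, 4, 3, 2],
              [1, 2, 3, 2, 1]], dtype=float)
filtered = convolve(leveled, k / k.sum())

# 3. Threshold the result (dark lines on a light background, so keep the dark side).
binary = (filtered < 0.5).astype(np.uint8) * 255
Image.fromarray(binary).save('binary.png')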
You could try a black top-hat transform. This involves subtracting the image from the closing of the image. I used a structuring element window size of 11 and a constant threshold of 0.1 (25.5 on a 0-255 scale).
You should get something like:
Which you can then easily threshold:
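For example, with OpenCV (the library choice is my assumption):

import cv2

gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (11, 11))       # 11x11 structuring element
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)      # closing(img) - img
_, binary = cv2.threshold(blackhat, 25.5, 255, cv2.THRESH_BINARY)  # 0.1 of the 0-255 range
cv2.imwrite('binary.png', binary)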
Best of luck.
I hope someone can point me to how I can solve my issue. I have 6000 X-rays where I have to measure the angle between bones.
My strategy is the following: if I can somehow draw a line1 through the long axis of bone1, and a line2 through the long axis of bone2, then I can simply measure the angle between the 2 lines.
So how can I find the axis in the first place? Is it possible to do it this way?
(It is an X-ray picture.) Let's say 1 cm from the top of the picture, we scan that row for the first pixel that turns white (the first edge of the bone); here we have a dot A1. Then we continue scanning until we find the first pixel that turns black (the second edge of the bone); this is dot A2. We draw a line Y1 between (A1, A2).
We do the same procedure a bit further down, let's say 10 cm from the top; we then have another line Y2 between (B1, B2). A line that goes from the middle of Y1 to the middle of Y2 will be the axis of the bone.
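Sketched in Python (numpy only; the scan-row positions and the names are mine, and I assume you already have a binary image where bone pixels are True), the procedure you describe would look something like:

import numpy as np

def bone_axis(binary, row1, row2):
    # binary: 2D boolean array (True = bone); row1/row2: the two scan rows.
    # Assumes the bone actually crosses both rows.
    def midpoint(row):
        line = binary[row]
        a1 = int(np.argmax(line))             # first pixel that turns white (first edge)
        a2 = a1 + int(np.argmax(~line[a1:]))  # first pixel that turns black again (second edge)
        return ((a1 + a2) / 2.0, row)
    return midpoint(row1), midpoint(row2)

def angle_between(p1, p2, q1, q2):
    # Angle in degrees between the axis p1->p2 of bone 1 and q1->q2 of bone 2.
    v1 = np.subtract(p2, p1)
    v2 = np.subtract(q2, q1)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))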
I already managed to play with the threshold and with making an edge image, to make it easier to draw the lines.
Does it make sense?
Please, can it be done? Any idea how?
Any help will be appreciated, thank you!
Here's an idea:
Maybe if you downsample the images to get less artifacts and/or apply some mathematical morphology (http://en.wikipedia.org/wiki/Mathematical_morphology) to reduce the noise you can convert the bones into more line-shaped separated figures.
Apply some threshold so you can have black/white binary pictures. Use math to find a point in each of the 2 shapes and then try to match them to a rectangle or an oval. These will give you the axis you are looking for and then you can measure the angle.
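One way to sketch that idea with OpenCV (the library and all parameters here are my assumptions): threshold, clean up with a morphological opening, then fit a rotated rectangle to each of the two largest blobs to get an axis orientation for each bone:

import cv2

gray = cv2.imread('xray.png', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # remove small noise

# OpenCV 4 return signature; assume the two largest blobs are the bones.
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
bones = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
angles = [cv2.minAreaRect(c)[2] for c in bones]            # orientation of each fitted rectangle
print(abs(angles[0] - angles[1]))                          # rough angle between the two axes

Note that minAreaRect's angle convention takes some care (it reports angles in a limited range), so treat the last line as a rough estimate rather than a final measurement.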
This is too general a question. Images would always be appreciated! I guess you have 6000 X-rays, each producing a grayscale image of the bones. In this case the general idea would be to:
1. Find a good binary segmentation of the bones in 3d
2. Find a good skeletonization of the 2 bones, also look at this
3. Replace the main skeletons of the two bones by line segments that best approximate them and measure the two angles (in 3d) between them
4. If this is two bones in the body - there is usually a limit to the degrees of freedom of two connected bones. It would be good to validate the result with respect to this reference.
Tracing the line in realtime might not be the best in terms of accuracy. I guess this is obvious.
This could give an idea for the full human pose.
When we look at a photo of a group of trees, we are able to identify that the photo is predominantly green and brown, or for a picture of the sea we are able to identify that it is mostly blue.
Does anyone know of an algorithm that can be used to detect the prominent color or colours in a photo?
I can envisage a 3D clustering algorithm in RGB space or something similar. I was wondering if someone knows of an existing technique.
Convert the image from RGB to a color space with brightness and saturation separated (HSL/HSV)
http://en.wikipedia.org/wiki/HSL_and_HSV
Then find the dominant values of the hue component across the pixels. Make a histogram of the hue values of all pixels and analyze in which angle region the peaks fall. A large peak in the quadrant between 180 and 270 degrees means there is a large portion of blue in the image, for example.
There can be several difficulties in determining one dominant color. Pathological example: an image whose left half is blue and right half is red. Also, the hue will not deal very well with grayscales obviously. So a chessboard image with 50% white and 50% black will suffer from two problems: the hue is arbitrary for a black/white image, and there are two colors that are exactly 50% of the image.
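A minimal sketch of that hue-histogram idea (OpenCV assumed; note that OpenCV stores Hue as 0-179, i.e. degrees divided by two, and the saturation/value mask is my own addition to sidestep the grayscale problem mentioned above):

import cv2
import numpy as np

img = cv2.imread('photo.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# Ignore near-grey pixels, whose hue is essentially arbitrary.
mask = (s > 40) & (v > 40)
hist = np.bincount(h[mask].ravel(), minlength=180)

peak = int(hist.argmax())
print('dominant hue: about %d degrees' % (2 * peak))   # e.g. ~240 degrees suggests blue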
It sounds like you want to start by computing an image histogram or color histogram of the image. The predominant color(s) will be related to the peak(s) in the histogram.
You might want to change the image from RGB to indexed, then you could use a regular histogram and detect the peaks (Matlab does this with rgb2ind(), as you probably already know), and then the problem would be reduced to your regular "finding peaks in an array".
Then
n = hist(Y,nbins) bins the elements in vector Y into nbins equally spaced containers and returns the number of elements in each container as a row vector.
Those values in n will tell you how many elements are in each bin. Then it's just a matter of fiddling with the number of bins to make them wide enough, and deciding how many elements a bin must contain for you to count it as a predominant color; then take the bins that contain that many elements, calculate the index that corresponds to their middle, and convert it to RGB again.
Whatever you're using for your processing probably has similar functions to those.
1. Average all pixels in the image.
2. Remove all pixels that are farther away from the average color than the standard deviation.
3. GOTO 1 with the remaining pixels until arbitrarily few are left (1, or maybe 1%).
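That loop is easy to write directly; a numpy sketch (I read "standard deviation" here as the standard deviation of the distances to the mean, and the 1% stopping rule comes from the description above):

import numpy as np

def dominant_color(img):
    # img: H x W x 3 array. Repeatedly average, drop outliers beyond one standard
    # deviation of distance from the average, and stop when ~1% of the pixels remain.
    pixels = img.reshape(-1, 3).astype(float)
    target = max(1, pixels.shape[0] // 100)
    while pixels.shape[0] > target:
        mean = pixels.mean(axis=0)
        dist = np.linalg.norm(pixels - mean, axis=1)
        keep = dist <= dist.std()
        if not keep.any() or keep.all():
            break                       # nothing changes any more; avoid an infinite loop
        pixels = pixels[keep]
    return pixels.mean(axis=0)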
You might also want to pre-process the image, for example by applying a high-pass filter (removing only very low frequencies) to even out the lighting in the photo; see http://en.wikipedia.org/wiki/Checker_shadow_illusion