I'm attempting to convert some effects created in Photoshop into code for use with PHP/ImageMagick. Right now I'm specifically interested in how to recreate Photoshop's RGB levels feature. I'm not really familiar with the Photoshop interface, but this is the info that I am given:
RGB Level Adjust
Input levels: Shadow 0, Midtone 0.92, Highlight 255
Output levels: Shadow 0, Highlight 255
What exactly are the input levels vs. the output levels? How would I translate this into ImageMagick? Below you can see what I have tried, but it does not correctly render the desired effect (converting Photoshop's 0-255 scale to 0-65535):
$im->levelImage(0, 0.92, 65535);
$im->levelImage(0, 1, 65535);
This was mostly a stab in the dark, since the parameter names don't line up and for the output levels the number of parameters doesn't even match. Basically I don't understand exactly what is going on when Photoshop applies the adjustment. I think that's my biggest hurdle right now. Once I get that, I'll need to find corresponding effects in ImageMagick.
Can anyone shed some light on what's going on in Photoshop and how to replicate that with ImageMagick?
Shadows, Midtones and Highlights are colors that fall within a certain range of luminosity. For example, shadows are the lower range of the luminosity histogram, midtones are colors in the middle and highlights are the ones up high. However - you can't use a hard limit on these values, which is why you will have to use curves like these that weight the histogram (a pixel may lie in multiple ranges at the same time).
To adjust shadows, midtones and highlights separately, you will need to create a weighted sum per pixel that uses the current shadow, midtone and highlight values to create a resultant value.
I don't think you can do this directly using the ImageMagick APIs - perhaps you could simply write it as a filter.
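For illustration only, here is a rough NumPy sketch of that weighted-sum idea. The triangular weighting curves and the adjust_tones name are my own invention, not anything Photoshop or ImageMagick ships with.

import numpy as np

def adjust_tones(img, shadow_amt=0.0, midtone_amt=0.0, highlight_amt=0.0):
    """Shift shadows/midtones/highlights of an 8-bit grayscale or RGB image.

    Amounts are in -1..1 and are weighted per pixel by how much the pixel's
    luminosity falls into each tonal range (made-up triangular weights,
    just to illustrate the idea of overlapping ranges).
    """
    f = img.astype(np.float64) / 255.0                       # normalize to 0..1
    lum = f.mean(axis=-1, keepdims=True) if f.ndim == 3 else f

    # Overlapping weights: shadows peak at 0, midtones at 0.5, highlights at 1.
    w_shadow = np.clip(1.0 - 2.0 * lum, 0.0, 1.0)
    w_mid = 1.0 - np.abs(2.0 * lum - 1.0)
    w_high = np.clip(2.0 * lum - 1.0, 0.0, 1.0)

    out = f + shadow_amt * w_shadow + midtone_amt * w_mid + highlight_amt * w_high
    return (np.clip(out, 0.0, 1.0) * 255.0).round().astype(np.uint8)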
Hope this helps.
So I stumbled across this website: http://www.fmwconcepts.com/imagemagick/levels/index.php
Based on the information there, I was able to come up with the following php which seems pretty effective at simulating what Photoshop does with input and output and all that.
function levels($im, $inshadow, $midtone, $inhighlight, $outshadow, $outhighlight, $channel = self::CHANNEL_ALL) {
    // Input levels: stretch [$inshadow, $inhighlight] to the full range and apply the midtone gamma.
    $im->levelImage($inshadow, $midtone, $inhighlight, $channel);
    // Output levels: compress the full range into [$outshadow, $outhighlight] by leveling "outward"
    // (assumes a 0-255 quantum range; see the note below).
    $im->levelImage(-$outshadow, 1.0, 255 + (255 - $outhighlight), $channel);
}
This assumes that the parameters to levelImage for blackpoint and whitepoint are on a scale of 0-255. They might actually be 0-65535 on your system. If that's the case, it's easy enough to adjust. You can check what value your setup uses with $im->getQuantumRange(); it will return an array with a string version and a long version of the quantum. From there it should be easy enough to normalize the values or just use the new range.
See the documentation: The first value is the black point (shadow) input value, the middle is a gamma (which I'm guessing is the same as Photoshop's midpoint), and the last is the white point (highlight) input value.
The output values are fixed at the quantum values of the image type, there's no need to specify them.
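To make the input/output distinction concrete, here is a small NumPy sketch of the usual levels formula as I understand it (not Photoshop's actual code): input levels clip and stretch the chosen range and apply the midtone as a gamma, and output levels then compress the result into the output range.

import numpy as np

def levels(img, in_shadow=0, midtone=1.0, in_highlight=255,
           out_shadow=0, out_highlight=255):
    """Photoshop-style levels on an 8-bit image (a sketch, not the exact code)."""
    f = img.astype(np.float64)

    # Input levels: clip to [in_shadow, in_highlight], stretch to 0..1,
    # then apply the midtone value as a gamma correction.
    f = np.clip((f - in_shadow) / float(in_highlight - in_shadow), 0.0, 1.0)
    f = f ** (1.0 / midtone)

    # Output levels: compress the 0..1 result into [out_shadow, out_highlight].
    f = out_shadow + f * (out_highlight - out_shadow)
    return np.clip(f, 0, 255).round().astype(np.uint8)

# The example from the question (input 0 / 0.92 / 255, output 0 / 255):
# result = levels(img, 0, 0.92, 255, 0, 255)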
Related
I have a bunch of values that seem to be 12-bit numbers. If I put them in a matrix and scale each one to a value 0-255 and then show them as an image, I get something that looks like a photo, but it's quite bland.
I think that they might be direct readings off of a camera sensor. They have a sort of stippled pattern, kind of like plaid, which makes me think they might come from a Bayer filter. https://en.wikipedia.org/wiki/Bayer_filter
I want to convert these numbers into RGB values. What do I need to do? For each 2x2 block in the Bayer pattern, do I convert the red to R, blue to B, and then average the green values? Do I need a gamma correction?
I noticed that the max value is much lower than the full 0xfff. Do I need to scale the values?
The procedure is well-described here: https://www.strollswithmydog.com/raw-file-conversion-steps/
Looks like I was getting it mostly right, but the problem was grey balance. There is a transformation that needs to be made on the sensor values to map them to the 0-255 RGB components, and the transform depends on the color. The best way is to take a photo of a perfect grey and calibrate.
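In case it helps anyone else, here is a rough NumPy sketch of that kind of pipeline: naive 2x2 RGGB demosaicing, a per-channel grey-balance gain, scaling of the 12-bit range, and a display gamma. The RGGB layout, the placeholder balance gains and the raw_to_rgb name are assumptions; a real raw converter (as in the linked article) does considerably more.

import numpy as np

def raw_to_rgb(raw, white_level=4095, balance=(2.0, 1.0, 1.5), gamma=2.2):
    """Very naive conversion of a 12-bit RGGB Bayer mosaic to 8-bit RGB.

    `balance` is the per-channel grey-balance gain (R, G, B) measured from a
    grey target; the values here are placeholders, not real calibration data.
    If the sensor never reaches 0xfff, use the measured maximum instead.
    """
    raw = raw.astype(np.float64)

    # Each 2x2 RGGB block: R at (0,0), G at (0,1) and (1,0), B at (1,1).
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0   # average the two greens
    b = raw[1::2, 1::2]

    rgb = np.stack([r, g, b], axis=-1)
    rgb *= np.array(balance)                        # grey balance per channel
    rgb /= white_level                              # scale the 12-bit range to 0..1
    rgb = np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)   # gamma for display
    return (rgb * 255.0).round().astype(np.uint8)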
I'd like to implement something like the Levels function of GIMP in C.
What equation is used to implement GIMP's input levels function?
Assume the original image's values range from 0 to 255.
If I adjust the input level from 0~255 to 0~206, can I just do this?
adjusted pixel = input pixel / 255 * 206 ?
But I don't think that makes sense, because then the output range would be darker than before. How does the output image get brighter than before when I adjust the input level?
Easy to experiment. Create an image with a 256px-wide canvas and draw a black-to-white RGB gradient across it. With the Pointer dialog (Windows>Dockable Dialogs>Pointer), it is easy to check that the pixels with horizontal coordinate x also have R=G=B=x (with minor variations).
Now apply the Levels tool. If you set the white point at 192 (255*3/4), you can check that the pixels at x now have R=G=B=(x*4)/3 (which shows that the function is linear). In the Levels tool you can also hit "Edit these Settings as Curves" to enter the Curves tool, and you will see that the corresponding curve is actually a straight line.
PS: The middle handle is the "gamma". Experimentally, the input value under it gets mapped to the average of the black and white points.
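To spell out the linear part (a sketch of what the experiment above shows, not GIMP's actual source): input levels map the chosen input range back onto the full 0-255 range, which is why narrowing the white point to 206 makes the image brighter rather than darker.

import numpy as np

def input_levels(img, low=0, high=206):
    """GIMP-style input levels (linear part only, no gamma) on 8-bit values."""
    f = img.astype(np.float64)
    # Everything at or below `low` becomes 0, everything at or above `high`
    # becomes 255, and the range in between is stretched linearly.
    out = (f - low) / float(high - low) * 255.0
    return np.clip(out, 0, 255).round().astype(np.uint8)

# With low=0 and high=206, a pixel value x becomes x * 255 / 206, i.e. brighter,
# which is the opposite of the  x / 255 * 206  guess from the question.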
I apologize in advance if a question like this was already answered. All of my searches for adding filters resulted in how to add dog faces. I wasn't sure what the proper terminology is.
What techniques do phone apps (such as Snapchat's text overlay or a QR code program for Android) use to "darken" a section of the image? I am looking to replicate this functionality with OpenCV. Is it possible to do this with other colors? (Such as adding a blue accent.)
Example: Snapchat text overlay
https://i.imgur.com/9NHfiBY.jpg
Another Example: Google Allo QR code search
https://i.imgur.com/JnMzvWT.jpg
Any questions or comments would be appreciated
In General:
A change of brightness can be achieved via addition/subtraction.
If you want to brighten your image, you can add a specific amount (e.g. 20) to each channel of the image. Darkening works the other way around (subtraction).
If you subtract 50 from each channel of the image, you get:
To darken in a pixel-dependent way you could also use division. This is how a division by 1.5 would change the image:
Another way would be to use the exponential operator. This operator takes the value of each channel and calculates pixel^value. The resulting values are then scaled back to the 0-255 range (for 8-bit RGB) by finding the highest resulting value and using 255 / highest value as the scaling factor.
If you use it with values greater than one, it will darken the image. This is because after the rescaling each pixel ends up at 255 * (pixel / max)^value, and raising a number between 0 and 1 to a power greater than one makes it smaller.
Here is a chart of how the exponential operator changes the value of each pixel. As you can see, values for the operator above 1 will darken the image (the channels are shifted towards lower values), whilst values below 1 will shift all pixels towards white and thus increase brightness.
Here is an example image for application of the operator using the value 0.5, meaning you take each pixel^0.5 and scale it back to the range of 0-255:
For a value of 2 you get:
Sadly I can not help you further with the OpenCV specifics, because I am not familiar with it, but it should be easy enough to implement yourself (see the sketch below).
For your question about tinting: yes, that is also possible. Instead of shifting towards white, you would shift the values of each pixel towards the respective color. I recommend reading up on blending.
Original image taken from here
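Since the question mentions OpenCV, here is a minimal Python/NumPy sketch of the three operations described above; the function names and default amounts are mine, not from any particular library.

import numpy as np

def brighten(img, amount=20):
    """Add `amount` to every channel (use a negative amount to darken)."""
    return np.clip(img.astype(np.int16) + amount, 0, 255).astype(np.uint8)

def darken_divide(img, divisor=1.5):
    """Darken proportionally to the pixel value by dividing each channel."""
    return (img / divisor).astype(np.uint8)

def exponential_op(img, value=2.0):
    """Raise each channel to `value`, then rescale so the maximum is 255.

    value > 1 darkens the image, value < 1 brightens it.
    """
    f = img.astype(np.float64) ** value
    return (f * (255.0 / f.max())).round().astype(np.uint8)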
Update: I was able to darken an image by blending an image matrix with a black matrix. After that, it was just a matter of darkening certain parts of the image to replicate an overlay.
The lower the alpha value is, the darker the image.
Result
void ApplyFilter(cv::Mat &inFrame, cv::Mat &outFrame, double alpha)
{
    // An all-black frame of the same size and type as the input.
    cv::Mat black = cv::Mat(inFrame.rows, inFrame.cols, inFrame.type(), 0.0);
    double beta = (1.0 - alpha);
    // Blend the input with black: the lower alpha is, the darker the result.
    cv::addWeighted(inFrame, alpha, black, beta, 0.0, outFrame);
}
https://docs.opencv.org/2.4/doc/tutorials/core/adding_images/adding_images.html
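For reference, here is the same blend-with-black idea in OpenCV-Python, restricted to a rectangular region so that only part of the frame is darkened; the darken_region name and the example coordinates are arbitrary.

import cv2
import numpy as np

def darken_region(frame, top_left, bottom_right, alpha=0.5):
    """Blend a rectangular region of `frame` with black; lower alpha = darker."""
    x1, y1 = top_left
    x2, y2 = bottom_right
    roi = frame[y1:y2, x1:x2]
    black = np.zeros_like(roi)
    frame[y1:y2, x1:x2] = cv2.addWeighted(roi, alpha, black, 1.0 - alpha, 0.0)
    return frame

# frame = cv2.imread('input.jpg')
# out = darken_region(frame, (50, 100), (450, 200), alpha=0.4)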
Thank you for the help everyone!
I have to detect leukocyte cells in an image that also contains other blood cells, but the difference can be distinguished through the color of the cells: leukocytes have a denser purple color, as can be seen in the image below.
Which color model should I use, RGB or HSV, and why?
sample image:
Usually when making decisions like this I just quickly plot the different channels and color spaces and see what I find. It is always better to start with a high-quality image than to start with a low one and try to fix it with lots of processing.
In this specific case I would use HSV. But unlike most color segmentation I would actually use the Saturation Channel to segment the images. The cells are nearly the same Hue so using the hue channel would be very difficult.
Hue (at full saturation and full brightness): very hard to differentiate the cells.
Saturation: huge contrast.
Green channel: actually shows a lot of contrast as well (it surprised me).
The red and blue channels make it hard to actually distinguish the cells.
Now that we have two candidate representations, the saturation and the green channel, we ask which is easier to work with. Since any HSV work involves converting the RGB image first, we can dismiss it, so the clear choice is to simply use the green channel of the RGB image for segmentation.
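For what it's worth, here is a minimal OpenCV-Python sketch of that conclusion; the Otsu threshold and the file names are my own assumptions, not something from the original answer.

import cv2

img = cv2.imread('gd62B.jpg')          # the sample image from the question
green = img[:, :, 1]                   # OpenCV loads BGR, so index 1 is green

# The purple cells are darker in the green channel than the background, so
# threshold the low values; Otsu picks the cutoff automatically (untuned).
_, mask = cv2.threshold(green, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

cv2.imwrite('leukocyte_mask.png', mask)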
edit
Since you didn't include a language tag, I would like to attach some MATLAB code I just wrote. It displays an image in all 4 color spaces so you can quickly make an informed decision on which to use. It mimics MATLAB's Color Thresholder colorspace selection window.
function ViewColorSpaces(rgb_image)
% ViewColorSpaces(rgb_image)
% displays an RGB image in 4 different color spaces. RGB, HSV, YCbCr,CIELab
% each of the 3 channels are shown for each colorspace
% the display mimics the new MATLAB Color Thresholder window
% http://www.mathworks.com/help/images/image-segmentation-using-the-color-thesholder-app.html
hsvim = rgb2hsv(rgb_image);
yuvim = rgb2ycbcr(rgb_image);
%cielab colorspace
cform = makecform('srgb2lab');
cieim = applycform(rgb_image,cform);
figure();
%rgb
subplot(3,4,1);imshow(rgb_image(:,:,1));title(sprintf('RGB Space\n\nred'))
subplot(3,4,5);imshow(rgb_image(:,:,2));title('green')
subplot(3,4,9);imshow(rgb_image(:,:,3));title('blue')
%hsv
subplot(3,4,2);imshow(hsvim(:,:,1));title(sprintf('HSV Space\n\nhue'))
subplot(3,4,6);imshow(hsvim(:,:,2));title('saturation')
subplot(3,4,10);imshow(hsvim(:,:,3));title('brightness')
%ycbcr / yuv
subplot(3,4,3);imshow(yuvim(:,:,1));title(sprintf('YCbCr Space\n\nLuminance'))
subplot(3,4,7);imshow(yuvim(:,:,2));title('blue difference')
subplot(3,4,11);imshow(yuvim(:,:,3));title('red difference')
%CIElab
subplot(3,4,4);imshow(cieim(:,:,1));title(sprintf('CIELab Space\n\nLightness'))
subplot(3,4,8);imshow(cieim(:,:,2));title('green red')
subplot(3,4,12);imshow(cieim(:,:,3));title('yellow blue')
end
you could call it like this
rgbim = imread('http://i.stack.imgur.com/gd62B.jpg');
ViewColorSpaces(rgbim)
and the display is this
In DIP and CV this is always a valid question.
But it has no universal answer, because each task is unique, so use whatever is better suited for it. To choose correctly you need to know the pros/cons of each, so here is a short summary:
RGB
This is easy to handle and you can easily access the r, g, b bands. In many cases it is better to check just a single band instead of the whole color, or to mix the bands to emphasize a wanted feature or even dampen an unwanted one. It is hard to compare colors in RGB because intensity is encoded directly into the bands (see the small example at the end of this answer). To remedy that you can use normalization, but that is slow (it needs a per-pixel sqrt). You can do arithmetic on RGB colors directly.
Example of a task better suited for RGB:
finding the horizon in a high-altitude photo
HSV
This is better suited for color recognition, because CV algorithms working in HSV behave much closer to human visual perception, so if you want to recognize areas of distinct colors, HSV is better. The conversion between RGB and HSV takes a bit of time, which can be a problem for big resolutions or high-fps apps. For standard DIP/CV tasks this is usually not the case.
Example of a task better suited for HSV:
Compare RGB colors
Take a look at:
HSV histogram
to see the distinct color separation in HSV. Segmenting an image based on color is easy in HSV. You cannot do arithmetic on HSV colors directly; instead you need to convert to RGB and back.
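A quick Python illustration of the point about intensity being encoded into the RGB bands: a bright and a dark version of the same red are far apart as raw RGB vectors, yet have an identical hue once converted to HSV (the sample values are arbitrary).

import cv2
import numpy as np

bright_red = np.uint8([[[0, 0, 230]]])   # BGR
dark_red   = np.uint8([[[0, 0, 90]]])    # same color, much darker

# Raw RGB (BGR) distance is large even though the color is "the same":
print(np.linalg.norm(bright_red.astype(int) - dark_red.astype(int)))  # 140.0

# In HSV the hue channel is identical; only V differs:
print(cv2.cvtColor(bright_red, cv2.COLOR_BGR2HSV))  # [[[  0 255 230]]]
print(cv2.cvtColor(dark_red,   cv2.COLOR_BGR2HSV))  # [[[  0 255  90]]]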
I am working on a project detecting traffic signs in OpenCV. I need a good HSV range to filter out red, blue and yellow traffic signs in an urban environment. This is just so that I have a smaller region of interest, so I do not need a highly accurate range, just a rough estimate. Can anyone help me out?
You might want to read this link. I extract the interesting part here:
How to find HSV values to track?
This is a common question found in stackoverflow.com. It is very simple and you can use the same function, cv2.cvtColor(). Instead of passing an image, you just pass the BGR values you want. For example, to find the HSV value of Green, try following commands in Python terminal:
green = np.uint8([[[0,255,0 ]]])
hsv_green = cv2.cvtColor(green,cv2.COLOR_BGR2HSV)
print hsv_green
[[[ 60 255 255]]]
Now you take [H-10, 100,100] and [H+10, 255, 255] as lower bound and upper bound respectively. Apart from this method, you can use any image editing tools like GIMP or any online converters to find these values, but don’t forget to adjust the HSV ranges.
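Applying that recipe to the three colors you mention gives rough starting ranges; remember that red wraps around the hue axis in OpenCV (0-179), so it usually needs two bands. This is only a sketch to be tuned against your own footage, and the S/V floors of 100 simply follow the quoted rule of thumb.

import cv2
import numpy as np

img = cv2.imread('scene.jpg')                      # any urban scene
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Pure blue, yellow and red in BGR give H = 120, 30 and 0 respectively
# (OpenCV's 0-179 hue scale). Following the quoted H +/- 10 rule:
blue_mask   = cv2.inRange(hsv, (110, 100, 100), (130, 255, 255))
yellow_mask = cv2.inRange(hsv, (20, 100, 100), (40, 255, 255))
# Red wraps around 0/179, so combine a low band and a high band:
red_mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 100), (179, 255, 255))

rough_roi = blue_mask | yellow_mask | red_mask     # coarse region of interest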