I'm looking for a filter in the OpenCV library that changes an image chromatically. For example, blur does not change the colors of the image; I need one that does.
I have a color image and I need to apply a filter that distorts its colors. For example, if the image contains a lot of blue, the filter would make that blue more or less intense.
My images are in the L*a*b* colour space and I need to work in it.
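Something like this sketch is what I have in mind, assuming 8-bit Lab images in OpenCV/Python (the file names and the scaling factor are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread('input.png')                       # hypothetical input (BGR)
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)

# In 8-bit Lab, a* and b* are stored offset by 128 (128 = neutral). Scaling the
# distance from 128 on the b* channel weakens (factor < 1) or strengthens
# (factor > 1) the blue-yellow component of every pixel.
factor = 0.6                                        # arbitrary strength
lab[:, :, 2] = 128 + (lab[:, :, 2] - 128) * factor

lab = np.clip(lab, 0, 255).astype(np.uint8)
result = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
cv2.imwrite('output.png', result)
```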
I am currently working on a lane detection project, where the input is an RGB road image "img" from a racing game, and the output is the same image annotated with colored lines drawn on the detected lanes.
The steps are as follows (a code sketch follows the list):
Convert the RGB image "img" to HSL, then apply a white color mask (only white lanes are expected in the image) with a white color range to discard any parts of the image with colors outside this range (setting their values to zero); call the output of this step "white_img".
Convert "white_img" to grayscale, producing "gray_img".
Apply Gaussian blurring to "gray_img" to smooth the edges, so fewer noisy edges are detected, producing "smoothed_img".
Apply edge detection to "smoothed_img", producing "edge_img".
Crop "edge_img" by selecting a region of interest (ROI), approximately the lower half of the image, producing "roi_img".
Finally, apply the Hough transform to "roi_img" to detect the lines that will be treated as the detected lanes.
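A rough sketch of this pipeline in OpenCV/Python (the white range, blur kernel, and Hough parameters below are placeholder values, not my exact ones):

```python
import cv2
import numpy as np

img = cv2.imread('img.png')                                  # hypothetical road frame (BGR)

# Step 1: mask white pixels in HSL (OpenCV calls it HLS; channel order is H, L, S).
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
white_mask = cv2.inRange(hls, (0, 200, 0), (180, 255, 255))  # assumed white range
white_img = cv2.bitwise_and(img, img, mask=white_mask)

# Steps 2-4: grayscale, Gaussian smoothing, Canny edge detection.
gray_img = cv2.cvtColor(white_img, cv2.COLOR_BGR2GRAY)
smoothed_img = cv2.GaussianBlur(gray_img, (5, 5), 0)
edge_img = cv2.Canny(smoothed_img, 50, 150)

# Step 5: region of interest, approximately the lower half of the image.
h, w = edge_img.shape
roi_img = np.zeros_like(edge_img)
roi_img[h // 2:, :] = edge_img[h // 2:, :]

# Step 6: probabilistic Hough transform, then draw the detected lanes.
lines = cv2.HoughLinesP(roi_img, 1, np.pi / 180, threshold=20,
                        minLineLength=20, maxLineGap=100)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 3)
```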
The biggest problems I am facing now are brightness changes and shadows on the lanes. In a dark image with shadows on the lanes, the lane color can become very dark. I tried widening the accepted white color range in step 1, which worked well for that kind of image. But in a bright image with no shadows on the lanes, most of the image survives step 1, which produces an output containing many things irrelevant to the lanes.
Examples of input images:
Medium brightness, no shadows on the lanes
Low brightness, shadows on the lanes
High brightness, no shadows on the lanes
Any help to deal with these issues will be appreciated. Thanks in advance.
I want to try to reverse engineer the camera calibration panel of the Camera Raw filter in Photoshop/Lightroom.
Photoshop Colour Calibration Tool
It can create some pretty cool effects, so I want to write a program that will help automate them. I've tried to figure out how it works; it seems to differ from HSL colour adjustment methods in that just moving the "Blue Primary" slider affects all colours, not just the blue hues (it even affects some colours that start as solid red).
I've tried to graph the sort of function this would apply. It seems to do something along the lines of shifting the hue of the actual blue primary in RGB by whatever amount you move the slider, but I'm not sure what that actually means.
Here's an unmodified graph of hues relating to RGB values.
Here's the same graph, but with the blue primary hue shifted all the way to the left.
I know it's doing more than just hue shifting, since running the filter on a hue spectrum with L/S both at 100% actually changes the lightness and saturation of some of the hues; see the images linked below for an example.
Regular Hue Spectrum
Hue Spectrum with Blue Primary slider all the way to the left.
Is there any other open source software that does something like this that I can look to for code, or possibly an idea of how this actually works under the hood?
I figured it out (at least, what I believe they're doing). For anyone else with the same question: they take advantage of the chromaticity coordinates of the RGB primaries in the RGB -> XYZ colour space conversion. To shift the hue of the blue primary, I think they first shift the hue of pure blue in HSL, convert the shifted colour to XYZ, and project the XYZ onto xy to get the chromaticity coordinate of the shifted blue. To apply that to an image, they convert from RGB to XYZ using the matrix built from the shifted coordinate, then convert back to RGB with the unshifted XYZ conversion matrix.
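Here is a rough sketch of that idea in Python. It assumes sRGB primaries, a D65 white point, and a crude 2.2 gamma for linearization; Adobe's actual working space and constants are unknown, so every number here is a placeholder:

```python
import colorsys
import numpy as np

# Assumed sRGB chromaticities (x, y) and D65 white point.
PRIMARIES = {'r': (0.64, 0.33), 'g': (0.30, 0.60), 'b': (0.15, 0.06)}
WHITE = (0.3127, 0.3290)

def rgb_to_xyz_matrix(primaries, white):
    """Build an RGB->XYZ matrix from primary chromaticities and a white point."""
    cols = [[x / y, 1.0, (1 - x - y) / y]
            for x, y in (primaries['r'], primaries['g'], primaries['b'])]
    M = np.array(cols).T
    xw, yw = white
    W = np.array([xw / yw, 1.0, (1 - xw - yw) / yw])
    S = np.linalg.solve(M, W)   # scale primaries so RGB white maps to the white point
    return M * S

def shifted_blue_chromaticity(hue_shift_deg):
    """Shift the hue of pure blue in HSL, then project the result onto xy."""
    h = ((240 + hue_shift_deg) % 360) / 360.0
    r, g, b = colorsys.hls_to_rgb(h, 0.5, 1.0)        # HSL blue with shifted hue
    lin = np.array([c ** 2.2 for c in (r, g, b)])     # crude gamma linearization
    X, Y, Z = rgb_to_xyz_matrix(PRIMARIES, WHITE) @ lin
    s = X + Y + Z
    return (X / s, Y / s)

def apply_blue_primary_shift(img_linear, hue_shift_deg):
    """img_linear: HxWx3 float array of *linear* RGB in [0, 1]."""
    shifted = dict(PRIMARIES, b=shifted_blue_chromaticity(hue_shift_deg))
    M_fwd = rgb_to_xyz_matrix(shifted, WHITE)                   # shifted blue
    M_inv = np.linalg.inv(rgb_to_xyz_matrix(PRIMARIES, WHITE))  # unshifted inverse
    out = img_linear @ (M_inv @ M_fwd).T
    return np.clip(out, 0.0, 1.0)
```

The interesting part is that only the forward matrix uses the shifted blue coordinate; converting back with the unshifted matrix is what drags every colour, not just the blues.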
I am really confused about the difference between intensity slicing and the colormap implementation in OpenCV. Is the colormap implementation in OpenCV the same as the concept of intensity slicing? Can anyone clarify this for me? Your help will be very much appreciated. Thank you.
Intensity slicing is more like a thresholding operation. There are two kinds. One is without background: everything is black, and the selected greyscale range is white; in OpenCV this can be achieved with threshold or inRange. The second is with background: you turn certain greyscale values white and leave the rest as they are. I do not know an OpenCV function that does this directly, but it can easily be achieved with inRange to get a binary mask and then setTo with that mask to colour the selected pixels white, as sketched below.
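A minimal sketch of both variants in OpenCV/Python (the 100-150 band and file name are arbitrary; numpy mask indexing stands in for the C++ setTo):

```python
import cv2

gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input

# Without background: the selected intensity band becomes white, the rest black.
mask = cv2.inRange(gray, 100, 150)

# With background: keep the original values, but paint the band white
# (equivalent to setTo(255, mask) in the C++ API).
sliced = gray.copy()
sliced[mask > 0] = 255
```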
Now, colour mapping is actually what its name says: mapping colours :) Each "colormap" holds one colour value for each 8-bit greyscale value, i.e. 256 colours. It then creates a new coloured image by replacing each greyscale pixel intensity with the colour mapped to that value. In the "Jet" colormap, 0 in greyscale is mapped to dark blue, and 255 in greyscale is mapped to dark red.
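For example, a minimal sketch using OpenCV's built-in colormaps:

```python
import cv2

gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input
jet = cv2.applyColorMap(gray, cv2.COLORMAP_JET)        # 0 -> dark blue, 255 -> dark red
```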
I would like to use GPUImage's Histogram Equalization filter (link to .h) (link to .m) for a camera app. I'd like to use it in real time and present it as an option to be applied on the live camera feed. I understand this may be an expensive operation and cause some latency.
I'm confused about how this filter works. When selected in GPUImage's example project (Filter Showcase), the filter shows a very dark image biased toward red and blue, which does not seem to be the way equalization should work.
Also, what is the difference between the histogram types kGPUImageHistogramLuminance and kGPUImageHistogramRGB? Filter Showcase uses kGPUImageHistogramLuminance, but the default in the init is kGPUImageHistogramRGB. If I switch Filter Showcase to kGPUImageHistogramRGB, I just get a black screen. My goal is overall contrast optimization.
Does anyone have experience using this filter? Or are there current limitations with this filter that are documented somewhere?
Histogram equalization of RGB images is done using the luminance, as equalizing the RGB channels separately would render the colour information useless.
You basically convert RGB to a colour space that separates colour from intensity information, equalize the intensity image, and finally convert it back to RGB.
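As a sketch of that approach (using OpenCV/Python rather than GPUImage; YCrCb is one of several colour spaces that separate intensity from colour, and the file names are placeholders):

```python
import cv2

img = cv2.imread('input.jpg')                         # hypothetical BGR image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)        # split intensity (Y) from colour
ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])     # equalize only the luminance
result = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
cv2.imwrite('equalized.jpg', result)
```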
According to the documentation: http://oss.io/p/BradLarson/GPUImage
GPUImageHistogramFilter: This analyzes the incoming image and creates an output histogram with the frequency at which each color value occurs. The output of this filter is a 3-pixel-high, 256-pixel-wide image with the center (vertical) pixels containing pixels that correspond to the frequency at which various color values occurred. Each color value occupies one of the 256 width positions, from 0 on the left to 255 on the right. This histogram can be generated for individual color channels (kGPUImageHistogramRed, kGPUImageHistogramGreen, kGPUImageHistogramBlue), the luminance of the image (kGPUImageHistogramLuminance), or for all three color channels at once (kGPUImageHistogramRGB).
I'm not very familiar with the programming language used, so I can't tell whether the implementation is correct. But in the end, the colours should not change too much; pixels should just become brighter or darker.
I have a bitmap of a random shape cut out by the user. I want to fade out its borders (contours) to make it appear smooth. What should I do? To get the borders and the colour of every pixel in the bitmap, I am traversing it pixel by pixel. That takes a long time, but I'm OK with it. Is OpenCV my only option? If so, can anybody point me toward a tutorial or suggest a logical approach?
You can just run a smoothing filter on your shape.
In OpenCV you can use the blur function or GaussianBlur. Look at http://docs.opencv.org/2.4/doc/tutorials/imgproc/gausian_median_blur_bilateral_filter/gausian_median_blur_bilateral_filter.html.
You don't have to use OpenCV, but I think it would be easier and faster.
If you still don't want to, you can use any other code that implements image smoothing.
In case you want to affect only the border pixels, do the following (sketched in code after the list):
Make a copy of the original image.
Filter the entire image.
Extract the border pixels using OpenCV's findContours.
Copy only the border pixels and their neighborhood from the blurred image into the copy you made in step 1.
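A rough sketch of those steps in OpenCV/Python (the kernel size, band thickness, and file name are arbitrary; a real cut-out would probably use its alpha channel as the mask):

```python
import cv2
import numpy as np

shape = cv2.imread('cutout.png')                 # hypothetical user cut-out (BGR)
blurred = cv2.GaussianBlur(shape, (15, 15), 0)   # step 2: smooth the whole image

# Step 3: find the border of the shape from its mask (non-black pixels here).
gray = cv2.cvtColor(shape, cv2.COLOR_BGR2GRAY)
mask = (gray > 0).astype(np.uint8) * 255
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4.x signature

# Step 4: build a thick band around the contour and copy the blurred pixels
# only inside that band, leaving the interior of the shape sharp.
band = np.zeros_like(mask)
cv2.drawContours(band, contours, -1, 255, thickness=9)
result = shape.copy()                            # step 1: copy of the original
result[band > 0] = blurred[band > 0]
```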