I want to convert colors from RGB to the Hue and Tone (Chroma and Value) 130 system. I've searched, but it seems there is no direct way to do that, and no Python library is available. Some references say RGB should first be converted to sRGB, after which one can derive Munsell coordinates (Hue, Value, and Chroma) and generalize those to reach Hue and Tone. Another approach says we should convert RGB to XYZ, then extract Munsell and use it to build the Hue and Tone version. Other approaches just quantize the RGB histogram and do not explain how exactly. Is there a clear way to do this? If anyone knows, please help.
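One workable route is sketched below with the colour-science package (pip install colour-science): go sRGB -> XYZ -> xyY -> Munsell notation. This is a minimal sketch, assuming 8-bit sRGB input; the final bucketing of the Munsell hue/value/chroma into the 130 Hue & Tone classes needs the published Hue & Tone table and is not reproduced here.

    import numpy as np
    import colour

    def rgb_to_munsell(rgb_8bit):
        rgb = np.asarray(rgb_8bit, dtype=float) / 255.0   # scale to [0, 1]
        XYZ = colour.sRGB_to_XYZ(rgb)                     # undoes the sRGB gamma
        xyY = colour.XYZ_to_xyY(XYZ)
        # may raise for colours outside the Munsell renotation gamut
        return colour.xyY_to_munsell_colour(xyY)          # e.g. '4.2YR 8.1/5.3'

    print(rgb_to_munsell([200, 120, 80]))

From the returned Munsell notation, mapping to the 130 classes is a quantization step against the Hue & Tone lookup table.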
I am wondering whether OpenCV has functions to handle the non-linearities in the sRGB color space.
Say I want to convert a JPEG image from the sRGB color space into the XYZ color space. As specified in this Wiki page, one needs to first undo the non-linearities to convert to linear RGB, and then multiply by the 3x3 color transform matrix. However, I couldn't find any such discussion in the cvtColor documentation. Did I miss something?
Thanks a lot in advance!
It's not explicitly stated in the documentation, so you're not missing anything, but OpenCV does not perform gamma correction in its RGB2XYZ/BGR2XYZ color conversions. You can confirm this by looking at the source code for cvtColor in
<OpenCV_dir>/modules/imgproc/src/color.cpp
If you look at the RGB <-> XYZ section you'll see that the input RGB values are simply multiplied by the coefficient matrix.
I have also not found any existing method to perform gamma correction on an RGB image.
Interestingly, a custom RGB -> XYZ conversion is done as a preliminary step for converting to both L*a*b* and L*u*v*, and in both cases it performs gamma correction.
Unfortunately, this isn't accessible from RGB2XYZ code, but you might be able to reuse it in your own code. I've also seen several code samples on the web, mostly using look-up tables for CV_8U depth images.
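For reference, a minimal sketch of doing the linearization yourself before applying the linear 3x3 transform, assuming 8-bit BGR input and the standard sRGB (D65) matrix from the Wikipedia page:

    import numpy as np
    import cv2

    def srgb_to_xyz(bgr_8u):
        rgb = cv2.cvtColor(bgr_8u, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
        # undo the sRGB non-linearity (the piecewise "gamma" curve)
        linear = np.where(rgb <= 0.04045,
                          rgb / 12.92,
                          ((rgb + 0.055) / 1.055) ** 2.4)
        M = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]], dtype=np.float32)
        return linear @ M.T   # per-pixel matrix multiply; shape stays (H, W, 3)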
Can anyone explain the advantages and disadvantages of the HSI, YCbCr, and RGB color spaces and give me a short comparison of them?
I already know the relations between these models; I just need a comparison.
HSI and YCbCr, unlike RGB, separate the intensity (luma) from the color information (chroma). This is useful if you want to ignore one or the other. For example, face detection is usually done on intensity images. On the other hand, ignoring the intensity can help to get rid of shadows.
HSI contains hue and saturation, the terms people actually use to describe colors. However, hue and saturation are angles, which can be inconvenient for computing distances in the color space, not to mention that hue wraps around. YCbCr, by contrast, is a Euclidean space. It is also what you typically get directly from a camera.
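As a quick illustration of the luma/chroma split (note that OpenCV names the space YCrCb and orders the channels Y, Cr, Cb; "input.jpg" is a placeholder):

    import cv2

    img = cv2.imread("input.jpg")                    # loads as BGR
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    # y is the intensity image (e.g. what you'd feed to face detection);
    # cr and cb carry the colour and are less affected by shadows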
Also see this answer on DSP stackexchange.
Why does OpenCV use the BGR colour space instead of RGB? We all know that RGB is the most convenient colour model for most computer graphics, and the human visual system also works in a way that is similar to an RGB colour space. Is there any reason behind OpenCV's BGR colour space?
"The reason why the early developers at OpenCV chose BGR color format is probably that back then BGR color format was popular among camera manufacturers and software providers. E.g. in Windows, when specifying color value using COLORREF they use the BGR format 0x00bbggrr.
BGR was a choice made for historical reasons and now we have to live with it. In other words, BGR is the horse’s ass in OpenCV."
OpenCV reads in images in BGR format (instead of RGB) because when OpenCV was first being developed, BGR color format was popular among camera manufacturers and image software providers. The red channel was considered one of the least important color channels, so was listed last, and many bitmaps use BGR format for image storage. However, now the standard has changed and most image software and cameras use RGB format, which is why, in programs, it's good practice to initially convert BGR images to RGB before analyzing or manipulating any images.
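For example, the usual first step before handing an OpenCV image to an RGB-based tool such as matplotlib ("photo.jpg" is a placeholder):

    import cv2
    import matplotlib.pyplot as plt

    bgr = cv2.imread("photo.jpg")                 # OpenCV loads as BGR
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)    # reorder for RGB-based tools
    plt.imshow(rgb)                               # matplotlib expects RGB
    plt.show()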
Why? For historical reasons. In 1987, Microsoft Windows ran on the IBM PS/2, and an early IBM video display controller, VGA, made use of the INMOS 171/176 RAMDAC chip, which was easier to use when images were stored in BGR format.
See details at
Why BGR color order - Retrocomputing Stack Exchange
I'm working on processing .raw image files, but I'm not sure how the image is being stored. Each pixel is an unsigned 16-bit value, with typical values ranging from 0 to about 1000 (in integer form). This isn't enough bits for hex values, and it's not RGB (0-255), so I'm not quite sure what it is.
Bonus: if you have any idea on how to convert this to grayscale in OpenCV (or just mathematically) that would be a huge help too.
The name RAW comes from the fact that the values stored in the file are not pixel RGB values, but the raw values that were measured from the camera itself. The values have meaning only if you know how the camera works. There are some standards, but really, you should just consider RAW to be a collection of poorly defined, undocumented, proprietary formats that probably won't intuitively match any idea you have about how images are stored.
Check out DCRaw -- it's the code that nearly every program that supports RAW uses
https://www.dechifro.org/dcraw/
The author reverse-engineered and implemented nearly every proprietary RAW format -- and keeps it up to date.
The other answers are correct: RAW is not a standard, it's shorthand. Camera CCDs often do not have separate red, green, and blue sensors for each pixel; instead, they use what's called a Bayer pattern and save only the single measured value at each site. You then need to demosaic that pattern to get RGB values.
Also, for the bonus question: if you are simply trying to convert an RGB image to grayscale, you can either use the matrix operations or call cvtColor with COLOR_BGR2GRAY (convertTo only rescales the bit depth, it doesn't change the color space).
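A minimal sketch of that pipeline, assuming the file is a headerless dump of unsigned 16-bit values in a BGGR Bayer layout; the filename, width, height, and actual Bayer pattern depend on your camera and are assumptions here:

    import numpy as np
    import cv2

    width, height = 640, 480                              # assumed dimensions
    raw = np.fromfile("frame.raw", dtype=np.uint16)
    bayer = raw.reshape(height, width)

    bgr = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR)      # demosaic, 16-bit capable
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # values only reach ~1000, so stretch to 8 bits for display
    gray8 = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)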
Forgot what the R/G/B split of 16-bit color was:
"there can be 5 bits for red, 6 bits for green, and 5 bits for blue"
http://en.wikipedia.org/wiki/Color_depth#16-bit_direct_color
Seen it used in game code before.
Complete shot in the dark, though, given that there are also proprietary RAW formats.
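If the file really were 16-bit direct color like that (a big if, given that the 0-1000 range points to raw sensor data instead), unpacking RGB565 would look like this:

    import numpy as np

    def unpack_rgb565(pixels_u16):
        p = np.asarray(pixels_u16, dtype=np.uint16)
        r = ((p >> 11) & 0x1F) << 3     # 5 bits -> 8 bits
        g = ((p >> 5)  & 0x3F) << 2     # 6 bits -> 8 bits
        b = ( p        & 0x1F) << 3     # 5 bits -> 8 bits
        return np.stack([r, g, b], axis=-1).astype(np.uint8)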
I am using JNA to access OpenCV. In my application I have a function that returns an array of RGB values to Java for display, which works fine if the image actually uses the RGB color space, but if the image is HSV or binary it produces odd artifacts. How can I detect what color space an image is using, and convert it to RGB before transfer if it isn't RGB already?
You can't detect whether an image is RGB or not by direct examination of the three buffers. You need to know what format it's in before making it available to another process or app.
I suggest you standardize on RGB for all your interprocess buffers and ensure that all images are converted to RGB in each originating process.
In OpenCV, use cvtColor to convert the native BGR into RGB. For images from other apps that don't already come in RGB, the same conversion functions can get them all into RGB if you need to.
You can also use merge and mixChannels to do a simple RGB-to-BGR channel swap without any fuss.
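A few equivalent ways to reorder the channels, assuming an image already loaded with OpenCV ("input.jpg" is a placeholder):

    import cv2
    import numpy as np

    bgr = cv2.imread("input.jpg")

    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)      # the dedicated conversion

    rgb2 = np.empty_like(bgr)                       # mixChannels needs dst pre-allocated
    cv2.mixChannels([bgr], [rgb2], [0, 2, 1, 1, 2, 0])

    b, g, r = cv2.split(bgr)                        # split/merge route
    rgb3 = cv2.merge([r, g, b])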
In OpenCV 2.2 there is a better RGB-to-HSV conversion that uses all 256 values for hue (the _FULL conversion codes); it is better than the older one, which squeezes hue into 0-179 to fit a byte.
docs here: http://opencv.willowgarage.com/wiki/
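The two variants side by side ("input.jpg" is a placeholder):

    import cv2

    img = cv2.imread("input.jpg")
    hsv      = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)       # hue in 0..179
    hsv_full = cv2.cvtColor(img, cv2.COLOR_BGR2HSV_FULL)  # hue in 0..255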