Convert All Images to RGB for Transfer - OpenCV

I am using JNA to access OpenCV. In my application I have a function that returns an array of RGB values to Java for display, which works fine if the image actually uses the RGB color space, but if the image is HSV or binary it produces odd artifacts. How can I detect which color space an image is using, and convert everything to RGB before transfer if it isn't already?

You can't detect whether an image is RGB by directly examining the three channel buffers. You need to know what format it's in before making it available to another process or app.
I suggest you standardize on RGB for all your interprocess buffers and ensure that every image is converted to RGB in its originating process.
In OpenCV, use cvtColor (cvCvtColor in the C API) to convert the native BGR into RGB; the same function converts HSV and grayscale sources into RGB if you need to.
You can also use merge and mixChannels to do a simple RGB-to-BGR channel swap without any fuss, and in place.
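For example, a minimal sketch in Python (the same cvtColor call exists in the C++ and Java APIs; the file name is hypothetical):

    import cv2

    # OpenCV loads images in BGR channel order by default.
    bgr = cv2.imread("input.jpg")

    # Convert to RGB before handing the buffer to another process.
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

    # For an image known to be HSV, convert it back as well. You must
    # track the color space yourself - it cannot be detected from the data.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    rgb_from_hsv = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)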
In OpenCV 2.2 there is a better RGB-to-HSV conversion that uses all 256 values for the hue channel; it is an improvement over the older one.
docs here: http://opencv.willowgarage.com/wiki/

Related

How do I create an RGB CIImage from 3 8-bit gray images?

I have 3 CIImage objects that are gray 8-bpp images, meant to be the 8-bit R, G, and B channels of a new image. Aside from low-level image pixel data operations, is there a way to construct the CIImage (from filters, or some other easier way)?
I realize I can do this by looping through the pixels of a new RGB image and setting it from the gray channels I have -- I was wondering if there was a more idiomatic way to work with channels.
For example, in Pillow for Python, it's Image.merge([rChannel, gChannel, bChannel]) -- I know how to code the pixel-access way if there is no built-in way.
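For reference, a self-contained sketch of that Pillow idiom (file names are hypothetical):

    from PIL import Image

    # Three 8-bit grayscale images serving as the R, G and B channels.
    r = Image.open("red.png").convert("L")
    g = Image.open("green.png").convert("L")
    b = Image.open("blue.png").convert("L")

    # Merge the single-channel images into one RGB image.
    rgb = Image.merge("RGB", [r, g, b])
    rgb.save("merged.png")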
The book, Core Image for Swift, covers how to do this and provides the code to do it here:
https://github.com/FlexMonkey/Filterpedia/blob/master/Filterpedia/customFilters/RGBChannelCompositing.swift
The basic idea is that you need to provide a color kernel function written in the Core Image Kernel Language and wrap it in a CIFilter subclass.
NOTE: The code is not copied here because it's under the GPL, which is incompatible with the license of Stack Overflow answers. You can follow the link to see how it's done, and use it if it's compatible with your license.

Does OpenCV have functions to handle non-linearities in sRGB color space?

I am wondering whether OpenCV has functions to handle the non-linearities in the sRGB color space.
Say I want to convert a JPEG image from the sRGB color space into the XYZ color space. As specified in this Wiki page, one first needs to undo the nonlinearities to convert to linear RGB, and then multiply by the 3x3 color transform matrix. However, I couldn't find any such discussion in the cvtColor documentation. Did I miss something?
Thanks a lot in advance!
It's not explicitly stated in the documentation, so you're not missing anything, but OpenCV does not perform gamma correction in its RGB2XYZ/BGR2XYZ color conversions. You can confirm this by looking at the source code for cvtColor in
<OpenCV_dir>/modules/imgproc/src/color.cpp
If you look at the RGB <-> XYZ section you'll see that the input RGB values are simply multiplied by the coefficient matrix.
I have also not found any existing method to perform gamma correction on an RGB image.
Interestingly, a custom RGB -> XYZ conversion is done as a preliminary step for converting to both L*a*b* and L*u*v*, and in both cases it performs gamma correction.
Unfortunately, this isn't accessible from RGB2XYZ code, but you might be able to reuse it in your own code. I've also seen several code samples on the web, mostly using look-up tables for CV_8U depth images.
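If you need the gamma-corrected path yourself, here is a minimal NumPy sketch using the standard sRGB (D65) matrix - hand-rolled code, not an OpenCV API:

    import numpy as np

    def srgb_to_xyz(img):
        """Convert an 8-bit sRGB image (H x W x 3, RGB order) to XYZ."""
        c = img.astype(np.float64) / 255.0
        # Undo the sRGB transfer function to get linear RGB.
        linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
        # Standard linear-RGB -> XYZ matrix for the D65 white point.
        m = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
        return linear @ m.T

Usage: xyz = srgb_to_xyz(rgb_image) for an array of shape (H, W, 3).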

Why Does OpenCV Use BGR Colour Space Instead of RGB

Why does OpenCV use the BGR colour space instead of RGB? We all know that RGB is the convenient colour model for most computer graphics, and the human visual system works in a way similar to an RGB colour space. Is there any reason behind OpenCV's choice of BGR?
"The reason why the early developers at OpenCV chose BGR color format is probably that back then BGR color format was popular among camera manufacturers and software providers. E.g. in Windows, when specifying color value using COLORREF they use the BGR format 0x00bbggrr.
BGR was a choice made for historical reasons and now we have to live with it. In other words, BGR is the horse’s ass in OpenCV."
OpenCV reads images in BGR format (instead of RGB) because, when OpenCV was first being developed, the BGR color format was popular among camera manufacturers and image software providers. The red channel was considered one of the least important color channels, so it was listed last, and many bitmap formats store images as BGR. The standard has since changed, and most image software and cameras now use RGB format, which is why it's good practice in programs to convert BGR images to RGB before analyzing or manipulating them.
Why? For historical reasons. In 1987, Microsoft Windows ran on the IBM PS/2, and an early IBM video display controller, VGA, made use of the INMOS 171/176 RAMDAC chip, which was easier to use when images were stored in BGR format.
See details at: Why BGR color order - Retrocomputing Stack Exchange

Client-side conversion of rgb-jpg to 8-bit-jpg using Canvas+HTML5

Many articles show ways of converting JPEG files to grayscale using canvas+HTML5 on the client side. But what I need is to convert an image to 8-bit grayscale to reduce its size before uploading it to my server.
Is it possible to do it using canvas+html5?
The WHATWG specification mentions a toBlob method, which is supposed to convert the canvas to a JPEG or PNG and give you the binary representation. Unfortunately, it isn't widely supported yet.
So all you can do is use getImageData to get an array of the bytes of the raw image data. In this array, every pixel is represented by 4 bytes: red, green, blue and alpha. You can easily calculate a grayscale value from these (gray = (red + green + blue) / 3 * alpha / 255;), as sketched below. But the resulting array will be completely uncompressed, so it will likely be even larger than the original JPEG, even though it uses only 8 bits per pixel. To reduce the size, you will have to implement an image compression algorithm yourself. You might consider using the DEFLATE algorithm used by PNG instead of JPEG encoding - it's a lot easier to implement, doesn't introduce further artifacts because it's lossless, and performs well on 8-bit images.
The boilerplate data needed to turn this compressed stream into a valid PNG/JPEG file can be added on the server (when you need it).
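To illustrate just the per-pixel arithmetic, here is the same calculation sketched in Python on a flat RGBA byte array like the one getImageData returns (in the browser you would of course write this in JavaScript):

    def rgba_to_gray(rgba):
        # rgba is a flat [r, g, b, a, r, g, b, a, ...] byte sequence;
        # produce one 8-bit gray value per pixel, weighted by alpha.
        gray = []
        for i in range(0, len(rgba), 4):
            r, g, b, a = rgba[i:i + 4]
            gray.append((r + g + b) // 3 * a // 255)
        return gray

    # One opaque mid-gray pixel and one fully transparent white pixel:
    print(rgba_to_gray([128, 128, 128, 255, 255, 255, 255, 0]))  # [128, 0]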

Reading 16-bit grayscale png using chunky_png

I'm trying to use chunky_png to read a PNG image in Ruby on Rails. The library seems to work fine for reading 8-bit PNG images. However, what I actually have is a 16-bit grayscale PNG image, and I want to retrieve the pixel brightness value at certain points. All of my attempts at retrieving pixel values end up with the 8-bit RGBA format.
Is there any way to read 16-bit brightness values from a grayscale PNG image using chunky_png? Or should I give up and use some other tool that can do this job instead?
Because of how ChunkyPNG stores color values internally, it doesn't support more than 8 bits per channel. It automatically converts channels to 8-bit values when it encounters higher ones.
So this is impossible right now, and making it possible would require significant rewrites of the codebase (but pull requests are accepted! :)
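If switching tools is acceptable, one alternative outside Ruby is Pillow in Python, which preserves the full 16-bit range for grayscale PNGs (the file name here is hypothetical):

    from PIL import Image

    # Pillow opens a 16-bit grayscale PNG in a 16- or 32-bit integer
    # mode (e.g. "I;16" or "I"), keeping the full 0-65535 range.
    img = Image.open("scan16.png")
    print(img.mode)

    # Read the 16-bit brightness value at a given point.
    value = img.getpixel((10, 20))
    print(value)  # 0-65535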
