I have a greyscale image which was created from an RGB image using the formula:
0.3 * c.r + 0.59 * c.g + 0.11 * c.b
Now I want to convert the greyscale image back to an RGB color image that is as close to the original as possible.
I tried to look for it on the internet, but could not find how to do it. Wikipedia suggests that it is possible but does not explain how.
Could someone please suggest how I can do it?
Thanks in advance.
That is not possible. You've taken 3D information and thrown away two of the dimensions. You can't get them back.
You cannot get true color values.
Think of it like this: you have three unknowns and only one equation, so there are infinitely many solutions, and they form a two-dimensional plane. The best you can get is a poor representation of the image: just choose two of the values randomly, calculate the third one, and hope for the best (a minimal sketch follows below).
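Here is that idea in Python; the 0.3/0.59/0.11 weights come from the formula in the question, while the retry strategy is just my illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    def random_rgb_for_gray(y):
        # Pick R and G at random, then solve 0.3*R + 0.59*G + 0.11*B = y for B.
        # Retry until B lands in [0, 255]; every triple found this way maps
        # back to the same gray value y, which is why the inversion is ambiguous.
        while True:
            r, g = rng.integers(0, 256, size=2)
            b = (y - 0.3 * r - 0.59 * g) / 0.11
            if 0 <= b <= 255:
                return int(r), int(g), int(round(b))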
You can generate an RGB image from your grayscale image if you set R = G = B = grayscale value.
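For example, with OpenCV in Python (the file name is a placeholder):

    import cv2
    import numpy as np

    gray = cv2.imread("gray.png", cv2.IMREAD_GRAYSCALE)

    # Replicate the single channel three times: R = G = B = gray value.
    # The result is a 3-channel image that still looks grey.
    bgr = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    # Equivalent with NumPy alone:
    bgr_np = np.stack([gray, gray, gray], axis=-1)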
Actually, there is a method for turning greyscale into RGB; it includes an RGB image as a base (reference) image. I wrote code for this before, but I have completely forgotten the method's name.
Try searching for something like an RGB or alpha-beta-gamma color conversion (this sounds like the lαβ color space used in color-transfer methods).
I have a bunch of values that seem to be 12-bit numbers. If I put them in a matrix, scale each one to 0-255, and then show them as an image, I get something that looks like a photo, but it's quite bland.
I think they might be direct readings off a camera sensor. They have a sort of stippled pattern, kind of like plaid, which makes me think they might come through a Bayer filter: https://en.wikipedia.org/wiki/Bayer_filter
I want to convert these numbers into RGB values. What do I need to do? For each 2x2 block in the Bayer pattern, do I convert the red to R, the blue to B, and then average the two green values? Do I need gamma correction?
I noticed that the max value is much lower than the full 0xfff. Do I need to scale the values?
The procedure is well-described here: https://www.strollswithmydog.com/raw-file-conversion-steps/
Looks like I was getting it mostly right, but the problem was grey balance. There is a transformation that needs to be applied to the sensor values to map them to the 0-255 RGB components, and that transform depends on the color channel. The best way is to take a photo of a perfectly grey target and calibrate; a sketch of the whole pipeline follows below.
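A minimal sketch of that pipeline with OpenCV in Python; the Bayer pattern, image size, file names, and gain values are all assumptions you would need to replace with values measured for your sensor:

    import cv2
    import numpy as np

    # 12-bit raw sensor values, assumed mosaiced; size and dtype are guesses.
    raw = np.fromfile("frame.raw", dtype=np.uint16).reshape(1024, 1280)

    raw16 = (raw << 4).astype(np.uint16)              # scale 12-bit to the 16-bit range
    bgr = cv2.cvtColor(raw16, cv2.COLOR_BayerRG2BGR)  # demosaic; the pattern is a guess

    # Grey balance: per-channel gains so a grey target comes out neutral.
    # These numbers are placeholders -- calibrate them from a grey-card shot.
    gains = np.array([1.0, 0.9, 1.4])                 # B, G, R
    balanced = np.clip(bgr * gains, 0, 65535).astype(np.uint16)

    # Simple display gamma, then down to 8 bits per channel.
    out8 = (255 * (balanced / 65535.0) ** (1 / 2.2)).astype(np.uint8)
    cv2.imwrite("photo.png", out8)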
Could someone explain what the difference is between contrast and alpha? (Is there any difference...?)
When talking about OpenCV, they seem to be the same...
Contrast is the color difference which makes the objects distinguishable. Alpha, on the other hand, is the variable that indicates the transparency of a pixel.
If you want to increase the contrast, you can do so by widening the histogram of the image; see histogram equalization. CLAHE will give the best results.
If you want to add transparency to an image, you can write out a 4-channel image (blue - green - red - alpha) as a .png. If you want to blend two images, you can use addWeighted or write a function of your own. All three ideas are sketched below.
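A short sketch of all three ideas in Python (file names are placeholders):

    import cv2
    import numpy as np

    img = cv2.imread("input.jpg")

    # "alpha" as gain: g(x) = alpha * f(x) + beta; alpha > 1 raises contrast.
    high_contrast = cv2.convertScaleAbs(img, alpha=1.5, beta=0)

    # CLAHE on the lightness channel for adaptive contrast enhancement.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    # "alpha" as transparency: write a 4-channel BGRA .png.
    bgra = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
    bgra[:, :, 3] = 128  # uniform 50% opacity
    cv2.imwrite("translucent.png", bgra)

    # Blending two images with addWeighted (the second image is hypothetical):
    # blend = cv2.addWeighted(img, 0.7, cv2.imread("other.jpg"), 0.3, 0)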
Can anybody explain the mathematical background and the function behind the conversion BGR2GRAY?
Under https://docs.opencv.org/3.4/de/d25/imgproc_color_conversions.html I found the following for RGB to Gray:
RGB[A] to Gray: Y ← 0.299⋅R + 0.587⋅G + 0.114⋅B
Is it the same, just reversed, for BGR? Is it really that simple, or is there a more complex method behind:
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
Since human vision does not perceive all colors equally, the contributions of the primary colors vary; this depends on the wavelengths of the colors. In the following document, on page 7, you can find the formula and some more explanation: http://poynton.ca/PDFs/ColorFAQ.pdf
OpenCV has used BGR since the library was established, back when a couple of companies used BGR instead of RGB; the standard nowadays is RGB. Nonetheless, the formula for the transformation is the same: Y = 0.299*R + 0.587*G + 0.114*B. You can check this yourself, as sketched below.
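A quick way to check is to compare the manual weighted sum against cvtColor (the file name is a placeholder):

    import cv2
    import numpy as np

    img = cv2.imread("input.jpg")  # OpenCV loads channels in B, G, R order

    # Manual weighted sum; the coefficients are applied to B, G, R in
    # reverse order compared with the RGB formula above.
    b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
    manual = (0.114 * b + 0.587 * g + 0.299 * r).round().astype(np.uint8)

    builtin = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Differences should be at most 1, due to internal fixed-point rounding.
    print(np.abs(manual.astype(int) - builtin.astype(int)).max())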
I am trying to apply histogram normalization to create a dense color histogram.
Split the channels into R, G, B
Normalize the individual histogram
Merge
I think these are the usual steps; if I am wrong, please let me know. Now,
for a rainbow image as shown below, I get a max of 255 and a min of 0 for all 3 channels. Using the formula
(Pixel - Min) / (Max - Min) * 255
I will get the same image as the original back. What is the critical step that I am missing? Please advise me. Thank you!
Ref: http://www.roborealm.com/help/Normalize.php (this is the reference I used).
White = (255,255,255). Black = (0,0,0). So your program finds the white background, and the black line in the bottom right.
Remove the white and change it to black. Then make your program ignore black.
Images that already contain pure white and pure black pixels cannot be normalized this way: with min = 0 and max = 255, your formula gives back the same values. Try ignoring all white and black pixels and normalizing the remaining pixels channel by channel.
As far as I can see, you already have a well-distributed histogram for all channels, so normalizing this one may not work well anyway; still, a sketch of the masked normalization follows below.
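A minimal sketch of that masked normalization in Python (the file name and the 0/255 cutoffs are assumptions):

    import cv2
    import numpy as np

    img = cv2.imread("rainbow.jpg")
    out = img.copy()

    for c in range(3):
        ch = img[:, :, c]
        # Ignore already-saturated pixels when finding the stretch range;
        # otherwise min = 0 and max = 255 make the formula a no-op.
        inner = ch[(ch > 0) & (ch < 255)]
        if inner.size == 0:
            continue
        lo, hi = int(inner.min()), int(inner.max())
        if hi > lo:
            stretched = (ch.astype(np.float32) - lo) / (hi - lo) * 255
            out[:, :, c] = np.clip(stretched, 0, 255).astype(np.uint8)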
I split a color image into 3 channels and applied contrast enhancement to each channel.
Then I merged them back together. I like the resulting image, but it has different colors:
black objects became yellow, and so on...
EDIT:
The algorithm I used calculates the 5th percentile and the 95th percentile
as min and max values, and then stretches the image values so that those map to 0 and 255. If there is a better approach, please tell me.
When doing contrast enhancement in color images, it is a good idea to only adjust the luminance (brightness) and leave the color information alone. This requires a colorspace conversion from RGB to something like YUV. In this colorspace, the Y component is similar to a grayscale version of the image, while the other components provide the color. This effectively allows you to adjust contrast (by running your algorithm on just the Y component) without distorting the color information. Finally, you can convert back to RGB.
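A sketch of that approach, reusing the question's 5th/95th-percentile stretch on the Y channel only (the file name is a placeholder):

    import cv2
    import numpy as np

    img = cv2.imread("input.jpg")

    yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
    y = yuv[:, :, 0].astype(np.float32)

    # Percentile stretch on luminance only; U and V (the color) stay untouched.
    lo, hi = np.percentile(y, (5, 95))
    y = np.clip((y - lo) / max(hi - lo, 1) * 255, 0, 255)
    yuv[:, :, 0] = y.astype(np.uint8)

    result = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)  # colors preserved, contrast improved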
Use the CLAHE algorithm; OpenCV has an implementation of it: cv::createCLAHE()