I trained an image-to-image translation model in PyTorch; the input and output images are in the CIELAB color space. How do I convert the output to an RGB image? Simply converting the image causes some sort of clipping and produces white patches.
import torch
import matplotlib.pyplot as plt

out = model.forward(x)
out = torch.squeeze(out)          # drop the batch dimension
out = out.permute(1, 2, 0)        # CHW -> HWC for plotting
out = out.detach().cpu().numpy()  # tensor -> NumPy array
plt.imshow(out)
This doesn't produce white patches; however, I can't use OpenCV to convert it to RGB because the values are in the range 0-1.
Now, if I convert the tensor to a PIL image and then convert it to RGB (0-255), some sort of clipping occurs and produces white patches, which are visible even before converting to RGB:
import numpy as np
from torchvision import transforms

out = model.forward(x)
out = torch.squeeze(out)
out = np.asarray(transforms.ToPILImage()(out))  # tensor -> PIL image -> NumPy array
plt.imshow(out)
The white patches appear after using out = cv2.cvtColor(out, cv2.COLOR_LAB2RGB) to convert.
How can I properly convert the CIELAB image to RGB?
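One likely cause (an assumption, since the question doesn't show how the training data was scaled): OpenCV's float32 LAB2RGB path expects L in [0, 100] and a/b in roughly [-127, 127], not values in [0, 1]. A minimal sketch of rescaling before the conversion; lab01 below is a random stand-in for the network's HWC output:

import numpy as np
import cv2

# Stand-in for the network output: HWC float32, every channel scaled to [0, 1].
lab01 = np.random.rand(256, 256, 3).astype(np.float32)

lab = np.empty_like(lab01)
lab[..., 0] = lab01[..., 0] * 100.0           # L in [0, 100]
lab[..., 1] = lab01[..., 1] * 255.0 - 128.0   # a in [-128, 127]
lab[..., 2] = lab01[..., 2] * 255.0 - 128.0   # b in [-128, 127]

rgb = cv2.cvtColor(lab, cv2.COLOR_LAB2RGB)    # float32 in -> float32 RGB in [0, 1]
rgb = np.clip(rgb, 0.0, 1.0)                  # clamp out-of-gamut pixels (the white patches)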
I am working on processing images that consist of colors that have the same grayscale value. In other words, each image is colored with random colors that all share the same gray value.
When I convert such an image (with rgb2grey() from skimage or cv2.cvtColor() from OpenCV), the resulting image has only one gray value (or slightly different gray values, imperceptible to the human eye). Therefore, the details of the resulting image are unrecognizable.
My questions are:
What is the best way to preprocess these images before converting them to grayscale? (Please note the colors of these images are not fixed.)
Are there any color combinations for which the color-gray conversion algorithms won't work?
How about using YCbCr?
Y is the luma (intensity) component, Cb is the blue-difference chroma component, and Cr is the red-difference chroma component.
So I think YCbCr can differentiate between pixels that share the same grayscale value but differ in chroma.
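A minimal sketch of that idea with OpenCV (note OpenCV's constant is BGR2YCrCb, with channels ordered Y, Cr, Cb; the input path is hypothetical):

import cv2

img = cv2.imread('same_gray_colors.png')        # hypothetical input path
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

# Y is nearly constant by construction, so the detail lives in the chroma planes.
cv2.imwrite('cr_plane.png', cr)
cv2.imwrite('cb_plane.png', cb)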
I have a 512x512 grayscale image (or MultiArray) which is the output of a CoreML depth estimation model.
In Python, one can use Matplotlib or other packages to visualise grayscale images in different colormaps, like so:
[Example renderings of the same image in a grayscale colormap and in the Magma colormap, from https://ai.googleblog.com/2019/08/turbo-improved-rainbow-colormap-for.html]
I was wondering if there was any way to take said output and present it as a cmap in Swift/iOS?
If you make the model output an image, you get a CVPixelBuffer object. This is easy enough to draw on the screen by converting it to a CIImage and then a CGImage.
If you want to draw it with a colormap, you'll have to replace each of the grayscale values with a color manually. One way to do this is to output an MLMultiArray and loop through each of the output values, and use a lookup table for the colors. A quicker way is to do this in a Metal compute shader.
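For illustration, here is the lookup-table idea sketched in Python (a Swift/Metal version would precompute the same 256-entry table and index into it per pixel; the depth array below is a random stand-in for the model output):

import numpy as np
import matplotlib.pyplot as plt

depth = np.random.rand(512, 512).astype(np.float32)  # stand-in for the model output

# Build a 256-entry RGB lookup table from the Magma colormap.
lut = (plt.get_cmap('magma')(np.linspace(0, 1, 256))[:, :3] * 255).astype(np.uint8)

indices = np.clip(depth * 255, 0, 255).astype(np.uint8)  # quantize to LUT indices
colored = lut[indices]                                   # (512, 512, 3) RGB image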
I am trying to get the values of the white pixels after erosion in image morphology. Is there any function that will help me convert the white values into an integer?
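If the goal is to count the white pixels left after erosion (a guess at the intent), a minimal OpenCV sketch (the circle image is a stand-in for your binary input):

import cv2
import numpy as np

# Stand-in for the thresholded input; in practice load and threshold your own image.
binary_img = np.zeros((100, 100), np.uint8)
cv2.circle(binary_img, (50, 50), 20, 255, -1)

eroded = cv2.erode(binary_img, np.ones((3, 3), np.uint8))
white_count = cv2.countNonZero(eroded)  # number of white pixels as a plain int
print(white_count)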
I have a problem with normalization.
Let me explain what the problem is and how I attempted to solve it.
I take a three-channel color image, convert it to grayscale, and apply uniform or non-uniform quantization (the problem is the same either way).
To this image I should then apply normalization, but I have a problem: even though the image is grayscale, it still has three channels.
How can I apply normalization to a three-channel image?
Should the min and the max be computed across all three channels?
Could someone give me a hand?
The language I am using is Processing 2.
P.S.
Can you do the same thing with a color image instead of a grayscale image?
You can convert between the 1-channel and 3-channel representations easily. I'd recommend scikit-image (http://scikit-image.org/).
from skimage.io import imread
from skimage.color import rgb2gray, gray2rgb

rgb_img = imread('path/to/my/image')
gray_img = rgb2gray(rgb_img)

# Normalize the gray image to [0, 1]
gray_norm = gray_img / gray_img.max()

# Convert back to a 3-channel image
rgb_norm = gray2rgb(gray_norm)
I worked on a similar problem some time back. One good solution was to:
Convert the image from RGB to HSI
Leaving the Hue and Saturation channels unchanged, simply normalize across the Intensity channel
Convert back to RGB
This logic can be applied across several other image processing tasks as well, for example, applying histogram equalization to RGB images.
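A sketch of that approach using OpenCV's HSV space as a stand-in for HSI (OpenCV has no direct HSI conversion; the input path is hypothetical):

import cv2

img = cv2.imread('input.png')  # hypothetical input path
h, s, v = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))

# Stretch only the intensity channel; hue and saturation stay untouched.
v = cv2.normalize(v, None, 0, 255, cv2.NORM_MINMAX)
out = cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)
cv2.imwrite('normalized.png', out)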
I am working with OpenCV 2.4 and NumPy. I would like to open an image and get all the information about it (whether it is 8-bit, whether it is RGB or BGR, etc.) and also try to change the color space.
I have this code:
import cv2

if __name__ == '__main__':
    img = cv2.imread('imL.png')
    conv = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    cv2.imwrite('lab.png', conv)
When I open the file lab.png, I get an image with different colors!
I checked the BGR-to-Lab values at: http://www.brucelindbloom.com/
For this reason, I would like to know all the information about the image.
That's right, you will get a different image colour, because imwrite() saves the file in the format specified by the extension (PNG, JPEG 2000, TIFF, etc.), and only single-channel or 3-channel images (with 'BGR' channel order) can be saved with this function. imwrite() doesn't know how to save a Lab image; it always expects the image in BGR.
If the format, depth, or channel order is different, use Mat::convertTo() and cvtColor() to convert it before saving.
Lab is another color space, like the BGR color space you get from cv2.imread(). It's just like converting a temperature from Fahrenheit to Celsius:
32 Fahrenheit and 0 Celsius are the same temperature, just expressed in different units.
cv2.imwrite() does not know whether the values are in the BGR color space or not. When it gets a 3-dimensional array, it assumes it is BGR, while your conv variable contains Lab values. This is why the colors of your image changed.
For your information, the channels of BGR contain the blue, green, and red components, while the channels of Lab contain lightness (0-100), a*, and b*, respectively. For more information, please see "Lab color space" on Wikipedia.
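A minimal sketch of the fix implied by both answers: convert back to BGR before handing the array to imwrite() (file names taken from the question):

import cv2

img = cv2.imread('imL.png')
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

# ... process in Lab space here ...

bgr = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)  # back to what imwrite() expects
cv2.imwrite('lab.png', bgr)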