I am trying to get the white values after applying erosion in image morphology. Is there any function that will help me convert the white values into an integer?
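If "the values of white" means the number of white pixels left after erosion, a minimal sketch with OpenCV (the file name is hypothetical):

import cv2
import numpy as np

mask = cv2.imread('binary.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input
eroded = cv2.erode(mask, np.ones((3, 3), np.uint8))

# countNonZero returns the number of white (non-zero) pixels as a plain int
white_count = cv2.countNonZero(eroded)
print(white_count)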
I am working on processing images that consist of colors sharing the same grayscale value. In other words, each image is colored with random colors that all map to the same gray value.
When I convert such an image (using rgb2grey() from skimage or cv2.cvtColor() from OpenCV), the resulting image has only one gray value, or slightly different gray values that are imperceptible to the human eye. As a result, the details of the image become unrecognizable.
My questions are:
What is the best way to preprocess these images before converting them to grayscale? (Please note the colors of these images are not fixed.)
Are there any color combinations for which the color-gray conversion algorithms won't work?
How about using YCbCr?
Y is the luma (intensity) component, Cb is the blue-difference chroma component, and Cr is the red-difference chroma component (each measured relative to luma).
So I think YCbCr can differentiate between multiple pixels with the same grayscale value.
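As a quick sketch (the two colors are hypothetical, picked so the standard luma formula gives nearly the same gray value), the Cr/Cb channels still separate the pixels:

import cv2
import numpy as np

# Two hypothetical RGB colors with nearly identical luma (~128):
# pure gray (128, 128, 128) and a reddish-brown (150, 125, 90).
px = np.array([[[128, 128, 128], [150, 125, 90]]], dtype=np.uint8)

ycrcb = cv2.cvtColor(px, cv2.COLOR_RGB2YCrCb)  # OpenCV orders channels Y, Cr, Cb
print(ycrcb[0, 0], ycrcb[0, 1])  # near-equal Y, clearly different Cr/Cb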
I trained an image-to-image translation model in PyTorch, and the input and output images are in the CIELAB color space. How do I convert the output to an RGB image? Simply converting the image causes some sort of clipping and produces white patches.
import torch
import matplotlib.pyplot as plt

out = model.forward(x)       # model and x come from the training setup above
out = torch.squeeze(out)     # drop the batch dimension
out = out.permute(1, 2, 0)   # CHW -> HWC for plotting
out = out.detach().numpy()   # detach from the graph and convert to NumPy
plt.imshow(out)
This doesn't produce white patches; however, I can't use OpenCV to convert it to RGB, as the values are in the range 0-1.
Now, if I convert the tensor to a PIL image and then convert to RGB (0-255), some sort of clipping occurs and produces white patches, which are visible even before converting to RGB:
import numpy as np
from torchvision import transforms

out = model.forward(x)
out = torch.squeeze(out)                        # drop the batch dimension
out = np.asarray(transforms.ToPILImage()(out))  # ToPILImage maps float 0-1 to uint8 0-255
plt.imshow(out)
The white patches appear after using out = cv2.cvtColor(out, cv2.COLOR_Lab2RGB) to convert.
How can I properly convert the CIELAB image to RGB?
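For reference, a common cause of these white patches is feeding normalized values straight to OpenCV: for float32 input, COLOR_Lab2RGB expects L in [0, 100] and a, b in roughly [-127, 127]. A sketch, assuming the network outputs LAB scaled to [0, 1] per channel (adjust to your actual normalization):

import cv2
import numpy as np

# 'lab01' stands in for the network output: float32, HxWx3, scaled to [0, 1].
lab01 = np.random.rand(64, 64, 3).astype(np.float32)  # hypothetical placeholder

lab = lab01.copy()
lab[..., 0] *= 100.0                          # L channel: [0, 1] -> [0, 100]
lab[..., 1:] = lab[..., 1:] * 254.0 - 127.0   # a, b: [0, 1] -> [-127, 127]

rgb = cv2.cvtColor(lab, cv2.COLOR_Lab2RGB)    # float32 output in [0, 1]
rgb = np.clip(rgb, 0.0, 1.0)                  # clip out-of-gamut values instead of letting them wrap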
I am working with images of rocks, which I have to segment. I get a depth image as input.
After thresholding the image, there are some white pixels which I tried to remove but to no avail.
The methods I have used:
1. Bilateral Filter (not a morphological operator)
2. Closing
3. Erosion
The results are shown in the images below.
My task is to remove those white pixels INSIDE THE ROCKS using morphological operations, because if they are not removed they affect my algorithm later (distance transform).
Is there a way to do this using only morphological operations? If not, is there any other way?
1. Bilateral Filter
2. Closing
3. Erosion
4. Original Depth Image
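For reference, a sketch of two options, assuming the thresholded image is a 0/255 binary mask and the stray white pixels are small isolated blobs (the kernel size and area threshold are hypothetical):

import cv2
import numpy as np

mask = cv2.imread('thresholded.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Option 1: morphological opening removes white blobs smaller than the kernel.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Option 2: if opening also eats thin structures, filter by blob area instead,
# keeping only white connected components above a size threshold.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
cleaned = np.zeros_like(mask)
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= 50:
        cleaned[labels == i] = 255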
I have a problem with normalization.
Let me explain what the problem is and how I am attempting to solve it.
I take a three-channel color image, convert it to grayscale, and apply uniform or non-uniform quantization (the problem is the same either way).
To this image I need to apply normalization, but I run into a problem: even though the image is grayscale, it still has three channels.
How can I apply normalization having a three-channel image?
Should the min and the max be taken across all three channels?
Could someone give me a hand?
The language I am using is Processing 2.
P.S.
Can you do the same thing with a color image instead of a grayscale image?
You can convert between the 1-channel and 3-channel representations easily. I'd recommend scikit-image (http://scikit-image.org/).
from skimage.io import imread
from skimage.color import rgb2gray, gray2rgb

rgb_img = imread('path/to/my/image')
gray_img = rgb2gray(rgb_img)

# Now normalize the gray image (rgb2gray already returns floats in [0, 1])
gray_norm = gray_img / gray_img.max()

# Now convert back to three channels
rgb_norm = gray2rgb(gray_norm)
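If full min-max normalization is wanted, one reasonable reading of the question is to take the min and max once over all three channels together; a minimal sketch (the function name is mine):

import numpy as np

def minmax_normalize(img):
    # Rescale to [0, 1] using one min/max computed over every channel together.
    lo, hi = img.min(), img.max()
    if hi == lo:  # guard against a constant image
        return np.zeros_like(img, dtype=np.float64)
    return (img - lo) / (hi - lo)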
I worked with a similar problem sometime back. One of the good solutions to this was to:
Convert the image from RGB to HSI
Leaving the Hue and Saturation channels unchanged, simply normalize across the Intensity channel
Convert back to RGB
This logic can be applied across several other image processing tasks, for example applying histogram equalization to RGB images.
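OpenCV does not ship an HSI conversion, but HSV is a close relative; a sketch of the same idea (equalize only the intensity-like channel), with a hypothetical file name:

import cv2

img = cv2.imread('input.png')  # hypothetical file name

# Equalize only the intensity-like V channel so hue and saturation stay untouched.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hsv[..., 2] = cv2.equalizeHist(hsv[..., 2])
result = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)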
I have images containing gray gradations and one other color. I'm trying to convert an image to grayscale with OpenCV, and I also want the colored pixels in the source image to become rather light in the output grayscale image, independently of the color itself.
The common luminosity formula is something like 0.299R + 0.587G + 0.114B, according to the OpenCV docs, so it gives very different luminosities to different colors.
I think the solution is to set some custom weights in the luminosity formula.
Is it possible in opencv? Or maybe there is a better way to perform such selective desaturation?
I use Python, but it doesn't matter.
This is the perfect case for the cv2.transform() function. You can treat grayscale conversion as applying a 1x3 matrix transformation to each pixel of the input image. The elements in this matrix are the coefficients for the blue, green, and red components, respectively, since OpenCV images are BGR by default.
import cv2
import numpy as np

im = cv2.imread(image_path)

coefficients = [1, 0, 0]  # gives the blue channel all the weight
# for standard gray conversion: coefficients = [0.114, 0.587, 0.299]
m = np.array(coefficients).reshape((1, 3))
blue = cv2.transform(im, m)
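As a usage note, coefficients that sum to 1 (like the standard ones) keep the output in the usual 0-255 range; with [1, 0, 0] above, cv2.transform simply extracts the blue channel as a single-channel image.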
So you have a custom formula.
Load the source:
Mat src = imread(fileName, 1);
Create a gray image:
Mat gray(src.size(), CV_8UC1, Scalar(0));
Now, in a loop, access the BGR pixel of the source:
Vec3b bgrPixel = src.at<cv::Vec3b>(y, x); // gives the BGR vector of type cv::Vec3b, indexed in (row, column) order
bgrPixel[0] // Blue
bgrPixel[1] // Green
bgrPixel[2] // Red
Calculate the new gray pixel value using your custom equation.
Finally, set the pixel value on the gray image:
gray.at<uchar>(y, x) = customIntensity; // (row, column) order
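For the Python side of the question, the same per-pixel idea vectorizes with NumPy; a sketch with hypothetical weights and file name:

import cv2
import numpy as np

src = cv2.imread('input.png')  # hypothetical path; loads as BGR uint8

# Hypothetical custom weights for the B, G, R channels.
wb, wg, wr = 0.5, 0.3, 0.2

b = src[..., 0].astype(np.float32)
g = src[..., 1].astype(np.float32)
r = src[..., 2].astype(np.float32)
gray = np.clip(wb * b + wg * g + wr * r, 0, 255).astype(np.uint8)  # stay in 8-bit range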