OpenCV Display Colored Cb Cr Channels

So I understand how to convert a BGR image to YCrCb format using cvtColor() and separate the different channels using split() or mixChannels() in OpenCV. However, these channels are displayed as grayscale images, since they are CV_8UC1 Mats.
I would like to display the Cb and Cr channels in color, like the barns image on Wikipedia.
I found this solution in Matlab, but how do I do it in OpenCV?
Furthermore, the mentioned solution displays the Cb and Cr channels by "filling the other channels with a constant value of 50%". My question is:
Is this the common way to display the Cb and Cr channels, or are there any recommendations or specifications for displaying them?

I wrote the code below from scratch, following the approach described in the linked answer. It looks like it's what you need.
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

Mat bgr_image = imread("lena.png");
Mat yCrCb_image;
cvtColor(bgr_image, yCrCb_image, CV_BGR2YCrCb);
Mat yCrCbChannels[3];
split(yCrCb_image, yCrCbChannels);
// A plane filled with 127 (50% of the 8-bit range)
Mat half(yCrCbChannels[0].size(), yCrCbChannels[0].type(), Scalar(127));
vector<Mat> yChannels = { yCrCbChannels[0], half, half };
Mat yPlot;
merge(yChannels, yPlot);
cvtColor(yPlot, yPlot, CV_YCrCb2BGR);
imshow("y", yPlot);
vector<Mat> CrChannels = { half, yCrCbChannels[1], half };
Mat CrPlot;
merge(CrChannels, CrPlot);
cvtColor(CrPlot, CrPlot, CV_YCrCb2BGR);
imshow("Cr", CrPlot);
vector<Mat> CbChannels = { half, half, yCrCbChannels[2] };
Mat CbPlot;
merge(CbChannels, CbPlot);
cvtColor(CbPlot, CbPlot, CV_YCrCb2BGR);
imshow("Cb", CbPlot);
waitKey(0);
As for converting grayscale images to a color format: usually all color channels (B, G, R) are set to the same grayscale value. OpenCV's CV_GRAY2BGR mode is implemented in that manner.
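For illustration, a minimal sketch of that conversion (the file name is just a placeholder):
Mat gray = imread("gray.png", 0); // 0 = load as single-channel grayscale
Mat color;
cvtColor(gray, color, CV_GRAY2BGR); // every pixel gets B = G = R = gray value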
As for "fills the other channels with a constant value of 50%" I believe it's common way to visualize such color spaces as YCbCr and Lab. I did not find any articles and descriptions of this approach, but I think it's driven by visualization purposes. Indeed, if we fill the other channels with zero, fundamentally nothing has changed: we can also see the influence of each channel, but the picture does not look very nice:
So, the aim of this approach is to make the visualization more colorful.

For those who want the code in Python (it is the same as @akarsakov's answer), here it is:
import cv2
import numpy as np
img = cv2.imread(r"lena.png")
imgYCC = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)
Y,Cr,Cb = cv2.split(imgYCC)
half = np.full_like(Y, 127)  # constant 50% plane with Y's shape and dtype
merge_Y = cv2.merge([Y, half, half])
merge_Cb = cv2.merge([half, half, Cb])
merge_Cr = cv2.merge([half, Cr, half])
merge_Y = cv2.cvtColor(merge_Y, cv2.COLOR_YCrCb2BGR)
merge_Cb = cv2.cvtColor(merge_Cb, cv2.COLOR_YCrCb2BGR)
merge_Cr = cv2.cvtColor(merge_Cr, cv2.COLOR_YCrCb2BGR)
cv2.imwrite(r'Y.png', merge_Y)
cv2.imwrite(r'Cb.png', merge_Cb)
cv2.imwrite(r'Cr.png', merge_Cr)
The result isn't the same, by the way, and looks more like what you find on Google when you search for Y Cb Cr images.
Or maybe I made a mistake with the split and/or BGR ordering.

Related

Fastest way to apply color matrix to RGB image using OpenCV 3.0?

I have a color image represented as an OpenCV Mat object (C++, image type CV_32FC3). I have a color correction matrix that I want to apply to each pixel of the RGB color image (or BGR using OpenCV convention, doesn't matter here). The color correction matrix is 3x3.
I could easily iterate over the pixels and create a vector v (3x1) representing RGB, and then compute M*v, but this would be too slow for my real-time video application.
The cv::cvtColor function is fast, but does not seem to allow for custom color transformations.
http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html#cvtcolor
Similar to the following, but I am using OpenCV for C++, not Python.
Apply transformation matrix to pixels in OpenCV image
Here's the code that worked, using cv::reshape. It was fast enough for my application:
#define WIDTH 2048
#define HEIGHT 2048
...
Mat orig_img = Mat(HEIGHT, WIDTH, CV_32FC3);
//put some data in orig_img somehow ...
/*The color matrix
Red:RGB; Green:RGB; Blue:RGB
1.8786 -0.8786 0.0061
-0.2277 1.5779 -0.3313
0.0393 -0.6964 1.6321
*/
float m[3][3] = {{1.6321, -0.6964, 0.0393},
{-0.3313, 1.5779, -0.2277},
{0.0061, -0.8786, 1.8786 }};
Mat M = Mat(3, 3, CV_32FC1, m).t();
Mat orig_img_linear = orig_img.reshape(1, HEIGHT*WIDTH);
Mat color_matrixed_linear = orig_img_linear*M;
Mat final_color_matrixed = color_matrixed_linear.reshape(3, HEIGHT);
A few things to note from the above: the color matrix in the comment block is the one I would ordinarily apply to an RGB image. In defining the float array m, I swapped rows 1 and 3 and columns 1 and 3 for OpenCV's BGR ordering. The color matrix must also be transposed. Usually a color matrix is applied as M * v = v_new, where M is 3x3 and v is 3x1, but here we compute v^T * M^T = v_new^T to avoid having to transpose each 3-channel pixel.
Basically, the linked answer uses reshape to convert your CV_32FC3 mat of size m x n into a CV_32F mat of size (m*n) x 3. After that, each row of the matrix contains exactly the color channels of one pixel. You can then apply the usual matrix multiplication to obtain a new mat and reshape it back to the original shape with three channels.
Note: it may be worth noting that the default channel order in OpenCV is BGR, not RGB.
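As an aside, OpenCV also provides cv::transform, which applies a small matrix to every pixel's channel vector directly (dst(I) = M * src(I)) and avoids the reshape entirely. A minimal sketch, reusing the BGR-reordered array m from the code above (note it is not transposed here, since transform multiplies column vectors):
Mat M_bgr = Mat(3, 3, CV_32FC1, m); // M * v convention, so no .t()
Mat result;
cv::transform(orig_img, result, M_bgr);
Whether this beats the reshape approach for speed may depend on the build and image size, so it is worth benchmarking both.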

best way to segment a tree in plantation aerial image using opencv

So I want to segment a tree from an aerial image.
sample image (original image) :
and i expect the result like this (or better) :
The first thing I did was use the threshold function in OpenCV, but I didn't get the expected result (it can't segment the tree crowns). I then used the black-and-white filter in Photoshop with some adjusted parameters (the result is shown below), applied thresholding and a morphological filter, and got the result shown above.
My question: is there a way to do the segmentation without using Photoshop first and produce a segmented image like the second one (or better)? Or maybe there is a way to produce an image like the third one?
PS: you can read about the Photoshop B&W filter here: https://dsp.stackexchange.com/questions/688/whats-the-algorithm-behind-photoshops-black-and-white-adjustment-layer
You can do it in OpenCV. The code below will basically do the same operations you did in Photoshop. You may need to tune some of the parameters to get exactly what you want.
#include "opencv2\opencv.hpp"
using namespace cv;
int main(int, char**)
{
    Mat3b img = imread("path_to_image");

    // Use HSV color to threshold the image
    Mat3b hsv;
    cvtColor(img, hsv, COLOR_BGR2HSV);

    // Apply a threshold
    // HSV values in OpenCV are not in [0,100], but:
    // H in [0,180]
    // S,V in [0,255]
    Mat1b res;
    inRange(hsv, Scalar(100, 80, 100), Scalar(120, 255, 255), res);

    // Negate the image
    res = ~res;

    // Apply morphology
    Mat element = getStructuringElement(MORPH_ELLIPSE, Size(5, 5));
    morphologyEx(res, res, MORPH_ERODE, element, Point(-1, -1), 2);
    morphologyEx(res, res, MORPH_OPEN, element);

    // Blend a green overlay onto the original image
    Mat3b green(res.size(), Vec3b(0, 0, 0));
    for (int r = 0; r < res.rows; ++r) {
        for (int c = 0; c < res.cols; ++c) {
            if (res(r, c)) { green(r, c)[1] = uchar(255); }
        }
    }
    Mat3b blend;
    addWeighted(img, 0.7, green, 0.3, 0.0, blend);

    imshow("result", res);
    imshow("blend", blend);
    waitKey();
    return 0;
}
The resulting image is:
The blended image is:
This has been an interesting topic of research in the past - mainly in the remote sensing literature.
While the morphological methods proposed using OpenCV will work in certain cases, you might want to consider more sophisticated approaches (depending on how variable your data is and how robust a detector you want to build).
For example, this paper and those that cite it give you a flavour of what has been attempted.
Pragmatically speaking, I think a neat solution would be one founded more on statistical texture analysis. There are many ways to classify (and then count) regions of an image as belonging to a texture (co-occurrence matrices, filter banks, textons, wavelets, etc.).
Sadly, this is an area where OpenCV is rather deficient - it only provides a subset of the useful algorithms out there... However, here are a few quick ideas (none of which I have tried directly, but all based on functionality available in OpenCV):
Use OpenCV's Gabor filter support and cluster the responses (for example; see the sketch after this list).
You could also possibly train an OpenCV SVM with Local Binary Patterns.
A new library - but probably not so relevant for static images - LIBDT
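A rough, untested sketch of the first idea (a small Gabor filter bank whose per-pixel response magnitudes are clustered with kmeans; the file path and all kernel parameters below are guesses that would need tuning):
#include <opencv2/opencv.hpp>
using namespace cv;

Mat img = imread("path_to_image");
Mat gray, gray_f;
cvtColor(img, gray, COLOR_BGR2GRAY);
gray.convertTo(gray_f, CV_32F, 1.0 / 255.0);

// Per-pixel feature vector: magnitude of Gabor responses at a few orientations
int nOrient = 4;
Mat features((int)gray_f.total(), nOrient, CV_32F);
for (int i = 0; i < nOrient; ++i)
{
    double theta = i * CV_PI / nOrient;
    Mat kernel = getGaborKernel(Size(21, 21), 4.0, theta, 10.0, 0.5, 0, CV_32F);
    Mat response;
    filter2D(gray_f, response, CV_32F, kernel);
    response = abs(response);
    response.reshape(1, (int)gray_f.total()).copyTo(features.col(i));
}

// Cluster pixels into two texture classes (e.g. tree vs. background)
Mat labels, centers;
kmeans(features, 2, labels,
       TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 10, 1.0),
       3, KMEANS_PP_CENTERS, centers);
Mat1b mask = (labels.reshape(1, gray_f.rows) == 1); // inspect which cluster is the trees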
Anyways, I hope you get something that just works for your purposes!

iOS & OpenCV: Image Registration / Alignment

I am doing a project in iOS that combines multiple images, similar to HDR. I have managed to capture 3 images of different exposures through the camera, and now I want to align them: during the capture my hand must have shaken, so all 3 images have slightly different alignment.
I have imported OpenCV framework and I have been exploring functions in OpenCV to align/register images, but found nothing. Is there actually a function in OpenCV to achieve this? If not, is there any other alternatives?
Thanks!
In OpenCV 3.0 you can use findTransformECC. I have copied this ECC Image Alignment code from LearnOpenCV.com, where a very similar problem is solved for aligning color channels. The post also contains code in Python. Hope this helps.
// Read the images to be aligned
Mat im1 = imread("images/image1.jpg");
Mat im2 = imread("images/image2.jpg");
// Convert images to gray scale;
Mat im1_gray, im2_gray;
cvtColor(im1, im1_gray, CV_BGR2GRAY);
cvtColor(im2, im2_gray, CV_BGR2GRAY);
// Define the motion model
const int warp_mode = MOTION_EUCLIDEAN;
// Set a 2x3 or 3x3 warp matrix depending on the motion model.
Mat warp_matrix;
// Initialize the matrix to identity
if (warp_mode == MOTION_HOMOGRAPHY)
    warp_matrix = Mat::eye(3, 3, CV_32F);
else
    warp_matrix = Mat::eye(2, 3, CV_32F);
// Specify the number of iterations.
int number_of_iterations = 5000;
// Specify the threshold of the increment
// in the correlation coefficient between two iterations
double termination_eps = 1e-10;
// Define termination criteria
TermCriteria criteria (TermCriteria::COUNT+TermCriteria::EPS, number_of_iterations, termination_eps);
// Run the ECC algorithm. The results are stored in warp_matrix.
findTransformECC(
    im1_gray,
    im2_gray,
    warp_matrix,
    warp_mode,
    criteria
);
// Storage for warped image.
Mat im2_aligned;
if (warp_mode != MOTION_HOMOGRAPHY)
    // Use warpAffine for Translation, Euclidean and Affine
    warpAffine(im2, im2_aligned, warp_matrix, im1.size(), INTER_LINEAR + WARP_INVERSE_MAP);
else
    // Use warpPerspective for Homography
    warpPerspective(im2, im2_aligned, warp_matrix, im1.size(), INTER_LINEAR + WARP_INVERSE_MAP);
// Show final result
imshow("Image 1", im1);
imshow("Image 2", im2);
imshow("Image 2 Aligned", im2_aligned);
waitKey(0);
There is no single function called something like align; you need to implement the alignment yourself or find an existing implementation.
Here is one solution.
You need to extract keypoints from all 3 images and try to match them. Make sure your keypoint extraction technique is invariant to illumination changes, since the images have different intensity values because of the different exposures. You then match the keypoints, estimate the transformation between the images, and use it to align them.
Bear in mind that this answer is only superficial; for details, you first need to do some research on keypoint/descriptor extraction and matching. A rough sketch of the idea follows.
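For concreteness, a rough sketch of the keypoint approach using ORB (one common choice; the file names are placeholders, and with different exposures you may want to normalize intensities, e.g. with equalizeHist, before detection):
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

Mat im1 = imread("images/exposure1.jpg"); // reference image
Mat im2 = imread("images/exposure2.jpg"); // image to align to im1

// Detect keypoints and compute binary descriptors
Ptr<ORB> orb = ORB::create(2000);
vector<KeyPoint> kp1, kp2;
Mat desc1, desc2;
orb->detectAndCompute(im1, noArray(), kp1, desc1);
orb->detectAndCompute(im2, noArray(), kp2, desc2);

// Match descriptors (Hamming distance for ORB), with cross-checking
BFMatcher matcher(NORM_HAMMING, true);
vector<DMatch> matches;
matcher.match(desc1, desc2, matches);

// Collect matched point pairs and robustly estimate a homography
vector<Point2f> pts1, pts2;
for (const DMatch& m : matches)
{
    pts1.push_back(kp1[m.queryIdx].pt);
    pts2.push_back(kp2[m.trainIdx].pt);
}
Mat H = findHomography(pts2, pts1, RANSAC, 3.0);

// Warp im2 into im1's frame
Mat im2_aligned;
warpPerspective(im2, im2_aligned, H, im1.size());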
Good luck!

Detecting if a bottle has a label

I am currently doing a bit of computer vision using OpenCV. I have a sample of bottles with a label on them. I am trying to determine when a bottle does not have a label on it.
The label is rectangular in shape.
I have done edge detection using Canny. I have tried using findContours() to detect whether a bottle has an inner contour (which would represent the rectangular label).
If your problem is this simple, just reduce your image to the label region using a rectangle.
cv::Mat image = imread("image.png");
cv::Rect labelRegion(50, 200, 50, 50);
cv::Mat labelImage = image(labelRegion);
Then decompose your image region into three channels.
cv::Mat channels[3];
cv::split(labelImage, channels);
cv::Mat labelImageRed = channels[2];
cv::Mat labelImageGreen = channels[1];
cv::Mat labelImageBlue = channels[0];
Then threshold each of these single-channel images and count the number of zero/nonzero pixels in each.
If there is no label in the image, each channel will have values bigger than roughly 200 (you should verify this on your data). If there is a label, the zero/nonzero counts will differ noticeably from the unlabeled case.
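A minimal sketch of what that counting step might look like (countBright is a hypothetical helper; the 200 cut-off is the approximate value mentioned above and should be checked):
int countBright(const cv::Mat& channel)
{
    cv::Mat mask;
    cv::threshold(channel, mask, 200, 255, cv::THRESH_BINARY);
    return cv::countNonZero(mask);
}

// If all three counts are close to the region's area, the region is mostly
// near-white, i.e. there is probably no label.
int red   = countBright(labelImageRed);
int green = countBright(labelImageGreen);
int blue  = countBright(labelImageBlue);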
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
int main()
{
    Mat img = imread("c:/data/bottles/1.png");
    Mat gray;
    cvtColor(img, gray, CV_BGR2GRAY);
    resize(gray, gray, Size(50, 100));
    Sobel(gray, gray, CV_16SC1, 0, 1);
    convertScaleAbs(gray, gray);
    if (sum(gray)[0] < 130000)
    {
        cout << "no label";
    }
    else
    {
        cout << "has label";
    }
    imshow("gray", gray);
    waitKey();
    return 0;
}
I am guessing it should be enough to just check whether there is text present on the bottle (if yes, then it has a label, and vice versa). You could check out a project like THIS. There are numerous papers in this area; some of the more famous ones are by the Stanford CV group - 1 and 2.
HTH
guneykayim suggested image segmentation, which I feel would be the easiest method. I am just adding a little bit more...
My suggestion is that you convert your BGR image into YCbCr and then look for values within the Cb and Cr channels that match the color of your label. This will allow you to easily segment colors even if the lighting conditions on the bottle change (in a darkly lit bottle, white regions will appear dark gray, and this can be a problem if you have gray-colored labeling).
Something like this should work in Python:
# Required modules
import cv2
import numpy
# Convert image to YCrCb
imageYCrCb = cv2.cvtColor(sourceImage,cv2.COLOR_BGR2YCR_CB)
# Constants for finding the range of label color in YCrCb
# a, b, c and d need to be defined for your label color
min_YCrCb = numpy.array([0, a, b], numpy.uint8)
max_YCrCb = numpy.array([255, c, d], numpy.uint8)
# Threshold the image to produce blobs that indicate the labeling
labelRegion = cv2.inRange(imageYCrCb,min_YCrCb,max_YCrCb)
# Just in case you are interested in going an extra step
contours, hierarchy = cv2.findContours(labelRegion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Draw the contour on the source image
for i, c in enumerate(contours):
    area = cv2.contourArea(c)
    if area > minArea:  # minArea needs to be defined, try 300 square pixels
        cv2.drawContours(sourceImage, contours, i, (0, 255, 0), 3)
The function cv2.inRange() will also work in case you decide to work with a BGR image.
Reference:
http://en.wikipedia.org/wiki/YCbCr

How can I know the red, blue and green component value of a pixel in a color image using OpenCV?

The pixel value of a color image represents the combined effect of the red, green, and blue components. I want to extract the exact value of each component using OpenCV. Please suggest!
It's all in the OpenCV FAQ Wiki:
Suppose, we have 8-bit 3-channel image I (IplImage* img):
I(x,y)blue ~ ((uchar*)(img->imageData + img->widthStep*y))[x*3]
I(x,y)green ~ ((uchar*)(img->imageData + img->widthStep*y))[x*3+1]
I(x,y)red ~ ((uchar*)(img->imageData + img->widthStep*y))[x*3+2]
You might also want to get a copy of O'Reilly's Learning OpenCV and read it if you're planning to do any serious work with OpenCV - it will save a lot of time on very basic questions such as the above.
I suggest learning the OpenCV C++ API. Pixels are represented by a vector of uchar values.
If this is a color image, then there are 3 uchar values per pixel.
OpenCV defines typedef Vec<uchar, 3> Vec3b; then:
//load image
cv::Mat img = cv::imread("myimage.png",1); // 1 means color image
/* Here the cv::Mat can be seen as cv::Mat_<Vec3b>
* Matrix of uchar with 3 channels for BGR (warning this is not RGB)
*/
// access to pixel value
cv::Vec3b mypix = img.at<cv::Vec3b>(i, j);
uchar bluevalue  = mypix[0]; // Vec3b is indexed, not named: [0]=B, [1]=G, [2]=R
uchar greenvalue = mypix[1];
uchar redvalue   = mypix[2];
This code will print the red and blue values of pixel (300, 300):
img1 = cv2.imread('Image.png', cv2.IMREAD_UNCHANGED)
b,g,r = (img1[300, 300])
print (r)
print (b)
