I am trying to convert a camera-captured image to an 8-bit image, and that should be a grayscale image.
I searched the forums but could not find a way to convert to an 8-bit image.
Any help or suggestion would be appreciated.
Thanks.
You have given too little information. First of all, what image format does your camera deliver? Is it some RAW format, JPEG, or something else?
Doing it programmatically (using C for the example):
The best way to go would be to use an image-loading library (e.g. SDL_image) and load the image into memory, with uncompressed RGB as the target format. Once you have the uncompressed RGB data, you could do something like
// bufPtr points to the start of the memory containing the bitmap,
// bufSize is its size in bytes (3 bytes per RGB pixel)
typedef unsigned char byte;
struct rgb { byte red, green, blue; } *colorPtr = (struct rgb *) bufPtr;

for (int i = 0; i < bufSize / 3; i++, colorPtr++) {
    // weighted sum of the channels (0.3 + 0.59 + 0.11 = 1.0), rounded to nearest
    byte gray = (byte) ((float) colorPtr->red   * 0.3f  +
                        (float) colorPtr->green * 0.59f +
                        (float) colorPtr->blue  * 0.11f + 0.5f);
    colorPtr->red = colorPtr->green = colorPtr->blue = gray;
}
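For completeness, here is a minimal sketch of how such an uncompressed RGB buffer could be obtained with SDL_image, as suggested above. This assumes SDL2 and SDL2_image are installed; the helper name load_as_rgb24 is just an illustration, and note that SDL may pad rows, so use surface->pitch when walking the buffer row by row.

#include <SDL2/SDL.h>
#include <SDL2/SDL_image.h>

// Load any supported image file and return it as a 24-bit RGB surface,
// or NULL on failure. surface->pixels then plays the role of bufPtr above.
SDL_Surface *load_as_rgb24(const char *path)
{
    SDL_Surface *raw = IMG_Load(path);   // decodes PNG, JPEG, BMP, ...
    if (!raw)
        return NULL;

    SDL_Surface *rgb = SDL_ConvertSurfaceFormat(raw, SDL_PIXELFORMAT_RGB24, 0);
    SDL_FreeSurface(raw);
    return rgb;
}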
If you don't want to code, you could e.g. use GIMP: load your image and apply Desaturate from the Colors menu. You can install the UFRaw plugin for GIMP to load images in RAW format. If you want to store the entire color information in 8 bits total (rather than 8 bits per color channel), there is another option in GIMP to decrease the color depth.
I want to convert a color BGR image into grayscale in OpenCV without using the direct command CV_RGB2GRAY. Here I have uploaded my code, which gives me a bluish image instead of a proper grayscale output. Please check the code below and tell me where I am going wrong, or give me another solution to convert the color image into a grayscale output image without CV_RGB2GRAY.
Thanks in advance.
Mat image = imread("Desktop\\Sample input\\ip1.png");
Mat grey(image.rows, image.cols, CV_8UC3);
for (int i = 0; i < image.rows; i++)
{
    for (int j = 0; j < image.cols; j++)
    {
        int blue  = image.at<Vec3b>(i,j)[0];
        int green = image.at<Vec3b>(i,j)[1];
        int red   = image.at<Vec3b>(i,j)[2];
        grey.at<Vec3b>(i,j) = 0.114*blue + 0.587*green + 0.299*red;
    }
}
imshow("grey image", grey);
If you load the image with imread(), you can read it as a grayscale image directly with
Mat image = imread("Desktop\\Sample input\\ip1.png", CV_LOAD_IMAGE_GRAYSCALE);
or, equivalently,
Mat image = imread("Desktop\\Sample input\\ip1.png", 0);
This works because CV_LOAD_IMAGE_GRAYSCALE corresponds to the constant 0, and when imread() gets 0 as this argument it loads the image as a single-channel intensity (grayscale) image.
And if you want to convert a color image to grayscale yourself, the output image should be created like
Mat grey = Mat::zeros(src_image.rows, src_image.cols, CV_8UC1);
since a grayscale image has only one channel. Then you can convert the image like this:
for (int i = 0; i < image.rows; i++)
{
    for (int j = 0; j < image.cols; j++)
    {
        int blue  = image.at<Vec3b>(i,j)[0];
        int green = image.at<Vec3b>(i,j)[1];
        int red   = image.at<Vec3b>(i,j)[2];
        grey.at<uchar>(i,j) = (uchar) (0.114*blue + 0.587*green + 0.299*red);
    }
}
It will give you the grayscale image.
In your code, the grey Mat has 3 channels. For a grayscale image you only need 1 channel (8UC1).
Also, when you are writing the values in the grayscale image, you need to use uchar instead of Vec3b because each pixel in the grayscale image is only made up of one unsigned char value, not a vector of 3 values.
So, you need to replace the corresponding lines in your code with:
Mat grey(image.rows, image.cols, CV_8UC1);
and
grey.at<uchar>(i, j) = 0.114*blue + 0.587*green + 0.299*red;
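Putting both fixes together, a self-contained sketch could look like the following (the file name is just a placeholder):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat image = imread("ip1.png");   // loaded as BGR
    if (image.empty())
        return -1;

    // single-channel 8-bit destination
    Mat grey = Mat::zeros(image.rows, image.cols, CV_8UC1);

    for (int i = 0; i < image.rows; i++)
    {
        for (int j = 0; j < image.cols; j++)
        {
            Vec3b px = image.at<Vec3b>(i, j);
            // BT.601 luma weights, applied to B, G, R
            grey.at<uchar>(i, j) = (uchar)(0.114 * px[0] + 0.587 * px[1] + 0.299 * px[2]);
        }
    }

    imshow("grey image", grey);
    waitKey(0);
    return 0;
}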
Is it possible to adjust the alpha using the Accelerate framework based on the pixel's RGB value?
Specifically, I want to set the alpha to 0 if the color is black (RGB 0/0/0).
Not with Accelerate. In CoreGraphics, you can set a masking color, if you like.
https://developer.apple.com/library/mac/documentation/graphicsimaging/conceptual/drawingwithquartz2d/dq_images/dq_images.html#//apple_ref/doc/uid/TP30001066-CH212-CJBHCADE
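For illustration, the masking-color route could look roughly like this sketch, assuming the source CGImage has no alpha channel (which CGImageCreateWithMaskingColors requires) and uses 8-bit components; the helper name maskBlack is just an illustration:

#include <CoreGraphics/CoreGraphics.h>

// Mask out pure black (RGB 0/0/0). The components array holds {min, max}
// ranges for each colour component of the source image.
CGImageRef maskBlack(CGImageRef rgbImage)
{
    const CGFloat components[6] = { 0, 0,    // red   min, max
                                    0, 0,    // green min, max
                                    0, 0 };  // blue  min, max
    return CGImageCreateWithMaskingColors(rgbImage, components);
}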
Strictly speaking, if you set a masking color on a CGImageRef and then decode it to pixels with vImageBuffer_InitWithCGImage, it should do that, which /might/ qualify. However, CG is doing the masking work in that case.
You can file a Radar asking for it. It hasn't been a priority so far because the alpha you get is not anti-aliased.
#include <stdint.h>

typedef __attribute__((ext_vector_type(4), __aligned__(16))) uint32_t uint4;

// ARGB example (on little endian processor):
//   uint4 result = maskByColor( pixels, (uint4)0, (uint4)0x000000ff );
uint4 maskByColor( uint4 fourPixels, uint4 maskColor, uint4 alphaMask )
{
    return fourPixels & ~(maskColor == (fourPixels & ~alphaMask));
}
I am currently working on a program that should take an LDR image and multiply certain pixels in the image so that their values exceed the normal 0-255 (0-1) pixel value boundary. The program I have written can do so, but I am not able to write the image file, since imwrite() in OpenCV clamps the values back into the range 0-255 (0-1) if they are bigger than 255.
Does anybody know how to write a floating-point image with pixel values bigger than 255 (1)?
My code looks like this:
Mat ApplySunValue(Mat InputImg)
{
    Mat Image1 = imread("/****/.jpg", CV_LOAD_IMAGE_COLOR);
    Mat outPutImage;
    Image1.convertTo(Image1, CV_32FC3);

    for(int x = 0; x < InputImg.cols; x++){
        for(int y = 0; y < InputImg.rows; y++){
            float blue  = Image1.at<Vec3f>(y,x)[0] / 255.0f;
            float green = Image1.at<Vec3f>(y,x)[1] / 255.0f;
            float red   = Image1.at<Vec3f>(y,x)[2] / 255.0f;

            Image1.at<Vec3f>(y,x)[0] = blue;
            Image1.at<Vec3f>(y,x)[1] = green;
            Image1.at<Vec3f>(y,x)[2] = red;

            int pixelValue = InputImg.at<uchar>(y,x);
            if(pixelValue > 254){
                Image1.at<Vec3f>(y,x)[0] = blue  * SunMultiplyer;
                Image1.at<Vec3f>(y,x)[1] = green * SunMultiplyer;
                Image1.at<Vec3f>(y,x)[2] = red   * SunMultiplyer;
            }
        }
    }

    imwrite("/****/Nice.TIFF", Image1 * 255);
    namedWindow("Hej", CV_WINDOW_AUTOSIZE);
    imshow("hej", Image1);
    return InputImg;
}
For storage purposes, the following is more memory efficient than the XML / YAML alternative (due to the use of a binary format):
// Save the image data in binary format
std::ofstream os(<filepath>,std::ios::out|std::ios::trunc|std::ios::binary);
os << (int)image.rows << " " << (int)image.cols << " " << (int)image.type() << " ";
os.write((char*)image.data,image.step.p[0]*image.rows);
os.close();
You can then load the image as follows:
// Load the image data from binary format
std::ifstream is(<filepath>,std::ios::in|std::ios::binary);
if(!is.is_open())
return false;
int rows,cols,type;
is >> rows; is.ignore(1);
is >> cols; is.ignore(1);
is >> type; is.ignore(1);
cv::Mat image;
image.create(rows,cols,type);
is.read((char*)image.data,image.step.p[0]*image.rows);
is.close();
For instance, without compression, a 1920x1200 floating-point three-channel image takes 26 MB when stored in binary format, whereas it takes 129 MB when stored in YML format. This size difference also has an impact on runtime, since the number of accesses to the hard drive is very different.
Now, if what you want is to visualize your HDR image, you have no choice but to convert it to LDR. This is called "tone-mapping" (Wikipedia entry).
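If your OpenCV build includes the photo module (OpenCV 3 and later), a minimal tone-mapping sketch could look like this; the function name tonemapToLdr is just an illustration:

#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp>

// Map a 32-bit float HDR image (values may exceed 1.0) to a displayable 8-bit LDR image.
void tonemapToLdr(const cv::Mat &hdr32FC3, cv::Mat &ldr8UC3)
{
    cv::Mat ldrFloat;
    cv::Ptr<cv::Tonemap> tm = cv::createTonemap(2.2f);  // simple gamma-based tonemapper
    tm->process(hdr32FC3, ldrFloat);                    // output is 32FC3 in [0, 1]
    ldrFloat.convertTo(ldr8UC3, CV_8U, 255.0);          // scale to 0-255 for imshow/imwrite
}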
As far as I know, when OpenCV writes using imwrite, it writes in the format supported by the image container, and this is by default 8-bit (values clamped to 0-255).
However, if you just want to save the data, you might consider writing the Mat object to an xml/yaml file.
//Writing
cv::FileStorage fs;
fs.open(filename, cv::FileStorage::WRITE);
fs<<"Nice"<<Image1;
//Reading
fs.open(filename, cv::FileStorage::READ);
fs["Nice"]>>Image1;
fs.release(); //Very Important
I converted a PNG (RGBA) to JPEG (RGB) using libpng to decode the PNG file, applying png_set_strip_alpha to ignore the alpha channel. But after conversion the output image has many spots. I think the reason is that the original image has areas whose alpha was 0, which hides the pixel regardless of its RGB value, and when I strip the alpha (i.e. set alpha = 1), those pixels show. So I think just using png_set_strip_alpha is not the right solution. Should I write a method myself, or is there already a way to achieve this in libpng?
There is no method for that. If you drop alpha channel libpng will give you raw RGB channels and this will "uncover" colors that were previously invisible.
You should load RGBA image and convert it to RGB yourself. The simplest way is to multiply RGB values by alpha.
This will convert RGBA bitmap to RGB in-place:
for (int i = 0; i < width*height; i++) {
    int r = bitmap[i*4+0],
        g = bitmap[i*4+1],
        b = bitmap[i*4+2],
        a = bitmap[i*4+3];
    bitmap[i*3+0] = r * a / 255;
    bitmap[i*3+1] = g * a / 255;
    bitmap[i*3+2] = b * a / 255;
}
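If a white background is preferred over the black one that the multiplication above effectively composites against, a sketch of the same in-place loop using standard "over" compositing could look like this:

// Composite each RGBA pixel over an opaque white background, producing RGB in place.
for (int i = 0; i < width*height; i++) {
    int r = bitmap[i*4+0],
        g = bitmap[i*4+1],
        b = bitmap[i*4+2],
        a = bitmap[i*4+3];
    bitmap[i*3+0] = (r * a + 255 * (255 - a)) / 255;
    bitmap[i*3+1] = (g * a + 255 * (255 - a)) / 255;
    bitmap[i*3+2] = (b * a + 255 * (255 - a)) / 255;
}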
How can I print the numbers stored in a CvMat* in OpenCV?
I am facing a problem in accessing the elements of the CvMat. Please suggest a solution!
Here is some example code; it may be helpful for you:
CvMat mathdr, *mat = cvGetMat( img1, &mathdr );
CvSize size_im = cvGetSize(img1);
unsigned int M = img1->height;
unsigned int N = img1->width;

for (unsigned int i = 0; i < M; i++)
{
    for (unsigned int j = 0; j < N; j++)
    {
        CvScalar scal = cvGet2D(mat, i, j);
        printf("pixel val of the image is: %f %f %f\n", scal.val[0], scal.val[1], scal.val[2]);
    }
}
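As a side note, if you use the C++ API instead of the legacy CvMat, cv::Mat has an output operator, so a short sketch like the following prints all elements directly:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat m = (cv::Mat_<float>(2, 3) << 1, 2, 3, 4, 5, 6);
    std::cout << m << std::endl;   // prints the whole matrix, row by row
    return 0;
}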
In answer to your comment to aranga,
"I am not getting why you have used the three values scal.val[0], scal.val[1], scal.val[2]; only scal.val[0] is showing output, scal.val[1] and scal.val[2] are just giving zeros."
this depends on your image, or more precisely on how many channels it has. A normal color image will have 3 channels (RGB, though OpenCV actually stores them in reverse order, so BGR); yours seems to be grayscale, or in any case to have values only in the first channel.
And I've just checked, indeed if you perform
cvtColor(src,dst,CV_RGB2GRAY);
to convert a three-channel RGB image into grayscale, the grayscale image has only 1 channel. But perhaps you would know why your image is only using 1 channel...