I'm trying to write a program that rearranges pixels from a grayscale raw file into BMP format, but I think I'm making a mistake I can't spot. Could someone tell me what's wrong with the following code? The spec says a BMP pixel consists of 3 bytes, and the first row of the raw image array goes on the bottom row of the BMP pixel array, the second row on the second-to-bottom row, and so on. So I wrote the core code like this (leaving out the GUI code and the BMP header code):
void MyClass_MakeBMP(void)
{
    int i, j, k;

    /* m_uiWidth and m_uiHeight are the width (columns) and height (rows) of the raw image. */
    m_BMPheader.biWidth  = m_uiWidth;
    m_BMPheader.biHeight = m_uiHeight;

    // raw format buffer -> 2d array (m_pcImgbuf holds the raw file).
    // bmp format buffer -> 1d array (m_pcBMP is filled from m_pcImgbuf).
    for(i = 0; i < m_uiHeight; i++)
    {
        k = -1; // index into the current row of m_pcImgbuf.
        for(j = 0; j < m_uiWidth * 3; j++)
        {
            if( j % 3 == 0)
                k++;
            m_pcBMP[i * m_uiHeight + j] = m_pcImgbuf[(m_uiHeight - 1) - i][k];
        }
    }
}
Also, I don't care about row padding, because this program only takes 256*256, 128*128, and 512*512 images as input. Thank you in advance.
Your calculation of the destination offset is wrong: each destination row is 3 * m_uiWidth bytes wide, not m_uiHeight, so it should be
m_pcBMP[i * 3*m_uiWidth + j] = ...
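For reference, a minimal sketch of the corrected copy loop (same member names as in the question; it assumes m_pcBMP is large enough to hold 3 * m_uiWidth * m_uiHeight bytes and that no row padding is needed, as the question states):

for (i = 0; i < m_uiHeight; i++)
{
    for (j = 0; j < m_uiWidth; j++)
    {
        // BMP rows are stored bottom-up; replicate the grayscale value
        // into the B, G and R bytes of each destination pixel.
        unsigned char gray = m_pcImgbuf[(m_uiHeight - 1) - i][j];
        m_pcBMP[i * 3 * m_uiWidth + 3 * j + 0] = gray;
        m_pcBMP[i * 3 * m_uiWidth + 3 * j + 1] = gray;
        m_pcBMP[i * 3 * m_uiWidth + 3 * j + 2] = gray;
    }
}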
I want to implement something like a tone curve.
I have a predefined set of curves that I should apply to the image.
For instance:
As I understand it, this chart shows how the current tone value maps to the new one. For example:
if we take the first dot on the left, every R, G and B value equal to 0 will be converted to 64,
and every value greater than 224 will be converted to 0, etc.
So I tried to change every pixel of the image to its new value.
For testing purposes I've simplified the curve:
And here is the code I have:
//init the original image
cv::Mat originalMat = [self cvMatFromUIImage:inputImage];
//output image of the same size
cv::Mat outMat = [self cvMatFromUIImage:inputImage];
//loop through every row of the image
for( int y = 0; y < originalMat.rows; y++ ){
    //loop through every column of the image
    for( int x = 0; x < originalMat.cols; x++ ){
        //loop through every color channel of the image (R,G,B)
        for( int c = 0; c < 3; c++ ){
            if(originalMat.at<cv::Vec3b>(y,x)[c] <= 64)
                outMat.at<cv::Vec3b>(y,x)[c] = 64 + originalMat.at<cv::Vec3b>(y,x)[c]
                                                  - originalMat.at<cv::Vec3b>(y,x)[c] * 2;
            if((originalMat.at<cv::Vec3b>(y,x)[c] > 64) && (originalMat.at<cv::Vec3b>(y,x)[c] <= 128))
                outMat.at<cv::Vec3b>(y,x)[c] = (originalMat.at<cv::Vec3b>(y,x)[c] - 64) * 4;
            if(originalMat.at<cv::Vec3b>(y,x)[c] > 128)
                outMat.at<cv::Vec3b>(y,x)[c] = originalMat.at<cv::Vec3b>(y,x)[c] + 128
                                                  - (originalMat.at<cv::Vec3b>(y,x)[c] - 128) * 3;
        } //end of r,g,b loop
    } //end of column loop
} //end of row loop
//send to output
return [self UIImageFromCVMat:outMat];
But here is the result I get:
For some reason only 3/4 of the image was processed,
and it does not match the result I expected:
Update 0
Thanks to ACCurrent's comment I found errors in the calculation (code and image updated), but I still don't understand why only 3/4 of the image is processed.
I'm not sure I understand why the 'noise' appears; I hope it's because the curve isn't smooth.
I'm looking for a way to avoid the .at operation.
Update 1
original image:
You need to access the image with Vec4b.
originalMat.type() equals 24.
Your originalMat is of type 24, i.e. CV_8UC4. This means that the image has 4 channels, but you're accessing it with Vec3b as if it had only 3 channels. This explains why about 1/4 of the image is not modified.
So, simply replace every Vec3b in your code with Vec4b.
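A rough sketch of the inner channel loop after that change (same curve formulas as in the question; the fourth channel, alpha in a CV_8UC4 Mat, is left untouched):

for( int c = 0; c < 3; c++ ){
    // first three color channels only; index 3 is the alpha channel in a CV_8UC4 Mat
    uchar v = originalMat.at<cv::Vec4b>(y,x)[c];
    if(v <= 64)
        outMat.at<cv::Vec4b>(y,x)[c] = 64 + v - v * 2;
    else if(v <= 128)
        outMat.at<cv::Vec4b>(y,x)[c] = (v - 64) * 4;
    else
        outMat.at<cv::Vec4b>(y,x)[c] = v + 128 - (v - 128) * 3;
}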
So I am new to OpenCV; what I want to do is copy the elements of a 16-bit matrix.
src.create(h, w, CV_16UC(channels));
dst.create(hr, wr, CV_16UC(channels));
finalDst.create(h, w, CV_16UC(channels));
memcpy(src.data, data_in, w*h*sizeof(raw_t_ubit16));
for (i = 0; i < h; i++)
{
for (j = 0; j < w; j++)
{
finalDst.data[j + i*w] = src.data[j + i*w];
}
}
memcpy(data_out, finalDst.data, h*w*sizeof(raw_t_ubit16));
However, this only copies one half of the image. Ironically, if I put 2*h instead of h then everything gets back to normal. But that shouldn't be so, since h is defined as the exact height of my image, just like w is the width.
src.data gives you a uchar*, not a raw_t_ubit16*.
uchar is 8 bits; raw_t_ubit16 is 16 bits.
((uchar*)data)[2] points to the 3rd byte of the array (treating data as an array of uchar).
((raw_t_ubit16*)data)[2] points to the 5th byte of the array (treating data as an array of raw_t_ubit16).
That's why only half of the image got copied: indexing through a uchar* with j + i*w only covers the first w*h bytes, which is half of the 16-bit data.
You can write:
((raw_t_ubit16*)finalDst.data)[j + i*w] = ((raw_t_ubit16*)src.data)[j + i*w];
It should work, but it's better to use the clone() method of the Mat class:
m1=m2.clone();
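If you do want to keep the explicit loop, here is a minimal sketch in the context of the question's code (assuming a single channel, i.e. channels == 1; with more channels the index would also have to account for them):

// Reinterpret the raw data pointers as 16-bit arrays before indexing.
raw_t_ubit16* srcPtr = (raw_t_ubit16*)src.data;
raw_t_ubit16* dstPtr = (raw_t_ubit16*)finalDst.data;
for (i = 0; i < h; i++)
{
    for (j = 0; j < w; j++)
    {
        dstPtr[j + i*w] = srcPtr[j + i*w];
    }
}
// Or simply:
finalDst = src.clone();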
I am currently working on a program which should take an LDR image and multiply certain pixels in the image so that their pixel values exceed the normal 0-255 (0-1) boundary. The program I have written can do so, but I am not able to write the image file, as imwrite() in OpenCV clamps the values back into the range 0-255 (0-1) if they are bigger than 255.
Does anybody know how to write a floating-point image with pixel values bigger than 255 (1)?
My code looks like this:
Mat ApplySunValue(Mat InputImg)
{
    Mat Image1 = imread("/****/.jpg", CV_LOAD_IMAGE_COLOR);
    Mat outPutImage;
    Image1.convertTo(Image1, CV_32FC3);

    for(int x = 0; x < InputImg.cols; x++){
        for(int y = 0; y < InputImg.rows; y++){

            float blue  = Image1.at<Vec3f>(y,x)[0] / 255.0f;
            float green = Image1.at<Vec3f>(y,x)[1] / 255.0f;
            float red   = Image1.at<Vec3f>(y,x)[2] / 255.0f;

            Image1.at<Vec3f>(y,x)[0] = blue;
            Image1.at<Vec3f>(y,x)[1] = green;
            Image1.at<Vec3f>(y,x)[2] = red;

            int pixelValue = InputImg.at<uchar>(y,x);
            if(pixelValue > 254){
                Image1.at<Vec3f>(y,x)[0] = blue  * SunMultiplyer;
                Image1.at<Vec3f>(y,x)[1] = green * SunMultiplyer;
                Image1.at<Vec3f>(y,x)[2] = red   * SunMultiplyer;
            }
        }
    }

    imwrite("/****/Nice.TIFF", Image1 * 255);
    namedWindow("Hej", CV_WINDOW_AUTOSIZE);
    imshow("hej", Image1);
    return InputImg;
}
For storage purposes, the following is more memory efficient than the XML / YAML alternative (due to the use of a binary format):
// Save the image data in binary format
std::ofstream os(<filepath>,std::ios::out|std::ios::trunc|std::ios::binary);
os << (int)image.rows << " " << (int)image.cols << " " << (int)image.type() << " ";
os.write((char*)image.data,image.step.p[0]*image.rows);
os.close();
You can then load the image as follows:
// Load the image data from binary format
std::ifstream is(<filepath>,std::ios::in|std::ios::binary);
if(!is.is_open())
return false;
int rows,cols,type;
is >> rows; is.ignore(1);
is >> cols; is.ignore(1);
is >> type; is.ignore(1);
cv::Mat image;
image.create(rows,cols,type);
is.read((char*)image.data,image.step.p[0]*image.rows);
is.close();
For instance, without compression, a 1920x1200 floating-point three-channel image takes 26 MB when stored in binary format (1920 × 1200 × 3 channels × 4 bytes ≈ 26 MB), whereas it takes 129 MB when stored in YML format. This size difference also has an impact on runtime, since the number of accesses to the hard drive is very different.
Now, if what you want is to visualize your HDR image, you have no choice but to convert it to LDR. This is called "tone-mapping" (Wikipedia entry).
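If you are on OpenCV 3 or later, the photo module ships tone-mapping operators; here is a minimal sketch (the gamma value is only an illustrative choice, and hdr is assumed to be a CV_32FC3 image like Image1 above):

#include <opencv2/photo.hpp>

cv::Mat ldr;
cv::Ptr<cv::TonemapDrago> tonemap = cv::createTonemapDrago(2.2f);
tonemap->process(hdr, ldr);            // output is mapped into the 0..1 range

cv::Mat ldr8u;
ldr.convertTo(ldr8u, CV_8UC3, 255.0);  // scale back to 0..255 for an 8-bit container
cv::imwrite("tonemapped.png", ldr8u);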
As far as I know, when OpenCV writes using imwrite, it writes in the format supported by the image container, and by default that is limited to 255.
However, if you just want to save the data, you might consider writing the Mat object to an xml/yaml file.
//Writing
cv::FileStorage fs;
fs.open(filename, cv::FileStorage::WRITE);
fs<<"Nice"<<Image1;
//Reading
fs.open(filename, cv::FileStorage::READ);
fs["Nice"]>>Image1;
fs.release(); //Very Important
I want to move every pixel in an image to the right by 1 px, and below is the map I use to do the remap transformation.
This approach requires much more time than it should for such a simple transform. Is there a cv function I can use? Or should I just split the image into 2 images, one that is src.cols-1 pixels wide and the other 1 px wide, and then copy them into the new image?
void update_map()
{
    for( int j = 0; j < src.cols; j++ ){
        for( int i = 0; i < src.rows; i++ ){
            if (j == src.cols-1)
                mat_x_Rotate.at<float>(i,j) = 0;
            else
                mat_x_Rotate.at<float>(i,j) = j + 1;
            mat_y_Rotate.at<float>(i,j) = i;
        }
    }
}
Things you can do to improve your performance:
remap is overkill for this purpose. It is more efficient to copy the pixels directly than to define an entire remap transformation and then use it.
switch your loop order: iterate over rows, then columns. (OpenCV's Mat is stored in row-major order, so iterating over columns first is very cache-unfriendly)
use Mat::ptr() to access pixels in the same row directly, as a C-style array (this is a big performance win over using at<>(), which probably does things like check indices on each access); see the sketch after this list.
take your if statement out of the inner loop, and handle column 0 separately.
As an alternative: yes, splitting the image into parts and copying to the new image might be about as efficient as copying directly, as described above.
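As an illustration of the Mat::ptr() point above, a minimal sketch for a 1-pixel right shift of a CV_8UC3 image (not from the original answer; dst is assumed to be already allocated with the same size and type as src):

for (int i = 0; i < src.rows; ++i)
{
    const cv::Vec3b* srcRow = src.ptr<cv::Vec3b>(i);
    cv::Vec3b* dstRow = dst.ptr<cv::Vec3b>(i);

    dstRow[0] = cv::Vec3b(0, 0, 0);      // column 0 handled separately (filled with black)
    for (int j = 1; j < src.cols; ++j)
        dstRow[j] = srcRow[j - 1];       // every other pixel moves one column to the right
}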
Mat Shift_Image_to_Right( Mat src_in, int num_pixels)
{
    Size sz_src_in = src_in.size();
    Mat img_out(sz_src_in.height, sz_src_in.width, CV_8UC3);

    Rect roi;
    roi.x = 0;
    roi.y = 0;
    roi.width  = sz_src_in.width - num_pixels;
    roi.height = sz_src_in.height;
    Mat crop;
    crop = src_in(roi);

    // Move the left boundary to the right
    img_out = Scalar::all(0);
    img_out.adjustROI(0, 0, -num_pixels, 0);
    crop.copyTo(img_out);
    img_out.adjustROI(0, 0, num_pixels, 0);

    return img_out;
}
I'm convolving an image (512*512) with an FFT filter (kernel size = 10), and it looks good.
But when I compare it with an image that I convolved the normal way, the result is horrible.
The PSNR is about 35.
67,187 out of 262,144 pixel values differ by 1 or more (with a peak at ~8), out of a maximum pixel value of 255.
My question is: is this normal when convolving in frequency space, or might there be a problem with my convolution/transform functions? The strange thing is that I should get better results when using double as the data type, but the result stays COMPLETELY the same.
When I transform an image into frequency space, DON'T convolve it, and then transform it back, it's fine and the PSNR is about 140 when using float.
Also, since the pixel differences are only 1-10, I think I can rule out scaling errors.
EDIT: More Details for bored interested people
I use the open-source kissFFT library, with real 2-dimensional input (kiss_fftndr.h).
My image datatype is PixelMatrix: simply a matrix with alpha, red, green and blue float values from 0.0 to 1.0.
My kernel is also a PixelMatrix.
Here are some snippets from the convolution function.
Used datatypes:
#define kiss_fft_scalar float
#define kiss_fft_cpx struct {
kiss_fft_scalar r;
kiss_fft_scalar i;
}
Configuration of the FFT:
//parameters to kiss_fftndr_alloc:
//1st param = array with the size of the 2 dimensions (in my case dim={width, height})
//2nd param = count of the dimensions (in my case 2)
//3rd param = 0 or 1 (forward or inverse FFT)
//4th and 5th params are not relevant
kiss_fftndr_cfg stf = kiss_fftndr_alloc(dim, 2, 0, 0, 0);
kiss_fftndr_cfg sti = kiss_fftndr_alloc(dim, 2, 1, 0, 0);
Padding and transforming the kernel:
I make a new array:
kiss_fft_scalar kernel[width*height];
I fill it with 0 in a loop.
Then I fill the middle of this array with the kernel I want to use.
So if I used a 2*2 kernel with values 1/4, 1/4, 1/4 and 1/4, it would look like:
0   0   0   0
0  1/4 1/4  0
0  1/4 1/4  0
0   0   0   0
The zeros are padded until they reach the size of the image.
Then I swap the quadrants of the image diagonally. It looks like:
1/4 0 0 1/4
0 0 0 0
0 0 0 0
1/4 0 0 1/4
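A minimal sketch of that padding and quadrant swap (not my exact code, just to make the step concrete; it assumes a square image of side size and a square kernel of side ksize, both stored row-major as float buffers):

#include <algorithm>
#include <vector>

// Pad the kernel to the image size and circularly shift it so that its
// center ends up at index (0,0); the shift by size/2 in both directions
// is exactly the diagonal quadrant swap shown above.
void padAndShiftKernel(const float* kernel, int ksize, float* padded, int size)
{
    std::fill(padded, padded + size * size, 0.0f);

    int offset = (size - ksize) / 2;   // place the kernel in the middle
    for (int y = 0; y < ksize; ++y)
        for (int x = 0; x < ksize; ++x)
            padded[(y + offset) * size + (x + offset)] = kernel[y * ksize + x];

    std::vector<float> tmp(padded, padded + size * size);
    int half = size / 2;
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x)
            padded[((y + half) % size) * size + ((x + half) % size)] = tmp[y * size + x];
}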
Now I transform it: kiss_fftndr(stf, floatKernel, outkernel);
outkernel is declared as
kiss_fft_cpx* outkernel = new kiss_fft_cpx[width*height];
Getting the colors into arrays:
kiss_fft_scalar *red = new kiss_fft_scalar[width*height];
kiss_fft_scalar *green = new kiss_fft_scalar[width*height];
kiss_fft_scalar *blue = new kiss_fft_scalar[width*height];

for(int i=0; i<height; i++) {
    for(int j=0; j<width; j++) {
        red[i*height+j] = input.get(j,i).getRed(); //input is the input image pixel matrix
        green[i*height+j] = input.get(j,i).getGreen();
        blue[i*height+j] = input.get(j,i).getBlue();
    }
}
Then I transform the arrays:
kiss_fftndr(stf, red, outred);
kiss_fftndr(stf, green, outgreen);
kiss_fftndr(stf, blue, outblue); //the out-arrays are type kiss_fft_cpx*
The convolution:
What we have now:
3 transformed color arrays of type kiss_fft_cpx*
1 transformed kernel array of type kiss_fft_cpx*
All of them are complex arrays.
Now comes the convolution (an element-wise complex multiplication):
for(int m=0; m<til; m++) {
for(int n=0; n<til; n++) {
kiss_fft_scalar real = outcolor[m*til+n].r; //I do that for all 3 arrys in my code!
kiss_fft_scalar imag = outcolor[m*til+n].i; //so I have realred, realgreen, realblue
kiss_fft_scalar realMask = outkernel[m*til+n].r; // and imagred, imaggreen, etc.
kiss_fft_scalar imagMask = outkernel[m*til+n].i;
outcolor[m*til+n].r = real * realMask - imag * imagMask; //Same thing here in my code i
outcolor[m*til+n].i = real * imagMask + imag * realMask; //do it with all 3 colors
}
}
Now I transform them back:
kiss_fftndri(sti, outred, red);
kiss_fftndri(sti, outgreen, green);
kiss_fftndri(sti, outblue, blue);
and I create a new Pixel Matrix with the values from the color-arrays
PixelMatrix output;
for(int i=0; i<height; i++) {
for(int j=0; j<width; j++) {
Pixel p = new Pixel();
p.setRed( red[i*height+j] / (width*height) ); // divide by (width*height) because of the scaling done by the FFT
p.setGreen( green[i*height+j] / (width*height) );
p.setBlue( blue[i*height+j] / (width*height) );
output.set(j , i , p);
}
}
Notes:
I already make sure in advance that the image size is a power of 2 (256*256, 512*512, etc.).
Examples:
kernelsize: 10
Input:
Output:
Output from normal convolution:
My console says:
142519 out of 262144 Pixels have a difference of 1 or more (maxRGB = 255)
PSNR: 32.006027221679688
MSE: 44.116752624511719
though for my eyes they look the same °.°
Maybe someone is bored enough to go through the code. It's not urgent, but it's the kind of problem where I just want to know what the hell I did wrong ^^
Last, but not least, my PSNR function, though I don't really think that's the problem :D
void calculateThePSNR(const PixelMatrix first, const PixelMatrix second, float* avgpsnr, float* avgmse) {
int height = first.getHeight();
int width = first.getWidth();
BMP firstOutput;
BMP secondOutput;
firstOutput.SetSize(width, height);
secondOutput.SetSize(width, height);
double rsum=0.0, gsum=0.0, bsum=0.0;
int count = 0;
int total = 0;
for(int i=0; i<height; i++) {
for(int j=0; j<width; j++) {
Pixel pixOne = first.get(j,i);
Pixel pixTwo = second.get(j,i);
double redOne = pixOne.getRed()*255;
double greenOne = pixOne.getGreen()*255;
double blueOne = pixOne.getBlue()*255;
double redTwo = pixTwo.getRed()*255;
double greenTwo = pixTwo.getGreen()*255;
double blueTwo = pixTwo.getBlue()*255;
firstOutput(j,i)->Red = redOne;
firstOutput(j,i)->Green = greenOne;
firstOutput(j,i)->Blue = blueOne;
secondOutput(j,i)->Red = redTwo;
secondOutput(j,i)->Green = greenTwo;
secondOutput(j,i)->Blue = blueTwo;
if((redOne-redTwo) > 1.0 || (redOne-redTwo) < -1.0) {
count++;
}
total++;
rsum += (redOne - redTwo) * (redOne - redTwo);
gsum += (greenOne - greenTwo) * (greenOne - greenTwo);
bsum += (blueOne - blueTwo) * (blueOne - blueTwo);
}
}
fprintf(stderr, "%d out of %d Pixels have a difference of 1 or more (maxRGB = 255)", count, total);
double rmse = rsum/(height*width);
double gmse = gsum/(height*width);
double bmse = bsum/(height*width);
double rpsnr = 20 * log10(255/sqrt(rmse));
double gpsnr = 20 * log10(255/sqrt(gmse));
double bpsnr = 20 * log10(255/sqrt(bmse));
firstOutput.WriteToFile("test.bmp");
secondOutput.WriteToFile("test2.bmp");
system("display test.bmp");
system("display test2.bmp");
*avgmse = (rmse + gmse + bmse)/3;
*avgpsnr = (rpsnr + gpsnr + bpsnr)/3;
}
Phonon had the right idea. Your images are shifted. If you shift your image by (1,1), then the MSE will be approximately zero (provided that you mask or crop the images accordingly). I confirmed this using the code (Python + OpenCV) below.
import cv
import sys
import math
def main():
fname1, fname2 = sys.argv[1:]
im1 = cv.LoadImage(fname1)
im2 = cv.LoadImage(fname2)
tmp = cv.CreateImage(cv.GetSize(im1), cv.IPL_DEPTH_8U, im1.nChannels)
cv.AbsDiff(im1, im2, tmp)
cv.Mul(tmp, tmp, tmp)
mse = cv.Avg(tmp)
print 'MSE:', mse
psnr = [ 10*math.log(255**2/m, 10) for m in mse[:-1] ]
print 'PSNR:', psnr
if __name__ == '__main__':
main()
Output:
MSE: (0.027584912741602553, 0.026742391458366047, 0.028147870144492403, 0.0)
PSNR: [63.724087463606452, 63.858801190963192, 63.636348220531396]
My advice is for you to try the following code:
A=double(inputS(1:10:length(inputS))); %segmentation
A(:)=-A(:);
%process the image or signal by fast fourior transformation and inverse fft
fresult=fft(inputS);
fresult(1:round(length(inputS)*2/fs))=0;
fresult(end-round(length(fresult)*2/fs):end)=0;
Y=real(ifft(fresult));
That code helps you obtain the same-size image and is good for removing the DC component; then you can do the convolution.