So I have a program written already that captures the image from a webcam into a vector called pBuffer.
I can easily access the RGB information of each pixel, simply by
pBuffer[i] = R; pBuffer[i+1] = G; pBuffer[i+2] = B.
No problem there.
The next step is to create an IplImage* img and fill it with the information from pBuffer, some sort of SetPixel.
There is a SetPixel function on the web, which is:
((uchar*)(image->imageData + image->widthStep*(y)))[x * image->nChannels + channel] = (uchar)value;
where value is the pBuffer information and x and y are the pixel coordinates. However, I simply cannot get this to work. Any ideas? I am working with C++.
What you are trying to do can be done like this (assuming width and height are the image dimensions):
CvSize size;
size.height = height;
size.width = width;
IplImage* ipl_image_p = cvCreateImage(size, IPL_DEPTH_8U, 3);
for (int y = 0; y < height; ++y)
    for (int x = 0; x < width; ++x)
        for (int channel = 0; channel < 3; ++channel)
            // pBuffer is laid out row by row, 3 bytes per pixel
            *(ipl_image_p->imageData + ipl_image_p->widthStep * y + x * ipl_image_p->nChannels + channel) = pBuffer[(y * width + x) * 3 + channel];
However, you don't have to copy the data. You can also wrap your existing image data in an IplImage header (assuming pBuffer is of type char*; otherwise you may need to cast it):
CvSize size;
size.height = height ;
size.width = width;
IplImage* ipl_image_p = cvCreateImageHeader(size, IPL_DEPTH_8U, 3);
ipl_image_p->imageData = pBuffer;
ipl_image_p->imageDataOrigin = ipl_image_p->imageData;
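One caveat worth adding (my note, not part of the original answer): since the header only borrows your buffer, release it with cvReleaseImageHeader rather than cvReleaseImage, so that OpenCV does not try to free pBuffer:
// pBuffer is owned by your capture code; release only the header.
cvReleaseImageHeader(&ipl_image_p);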
I wrote some OpenCV code that tries to embed a 64x64 pixel watermark image in a 512x512 image.
My code has 5 parts:
reading the two pictures (the watermark and the original image that I want to
embed the watermark in)
resizing the two read images to the specified sizes (64x64 for the watermark image
and 512x512 for the original image)
dividing the original resized image into 8x8 blocks and transforming them with
DCT
embedding each pixel of the watermark in one block of the original image
applying the inverse DCT on each block
My problem is that all three imshows show the same result.
Thank you for your help :)
Here is my code:
int _tmain(int argc, _TCHAR* argv[])
{
int index=0;
int iindex=0;
vector<Mat> blocks(4096);
/////////////Part1:reading images
Mat originalImage;
originalImage = imread("C:\\MGC.jpg",CV_LOAD_IMAGE_GRAYSCALE);
Mat watermarkImage;
watermarkImage = imread("C:\\ivp_lg.bmp" , CV_LOAD_IMAGE_GRAYSCALE);
/// show original image
namedWindow("Original");
int x = 0; int y = 0;
moveWindow("Original", x, y);
imshow("Original", originalImage);
x += 100; y += 100;
//////Part 2: leave the originals alone, work on copies; resize the read images
Mat dctImage = originalImage.clone();
Mat wmrk = watermarkImage.clone();
Mat tmp1(512, 512, CV_8UC1);
Mat tmp2(64, 64, CV_8UC1);
resize(dctImage, dctImage, tmp1.size());
resize(wmrk, wmrk , tmp2.size());
/////Part 3: break dctImage into 8x8 blocks and apply DCT to each block
for (int i = 0; i < 512; i += 8)
{
for (int j = 0; j < 512; j+= 8)
{
Mat block = dctImage(Rect(i, j, 8, 8));
block.convertTo(block, CV_32FC1);
dct(block,blocks[index]);
blocks[index].convertTo(blocks[index], CV_8UC1);
index++;
}
}
/// show transformed image
namedWindow("TransformedImage");
moveWindow("TransformedImage", x, y);
imshow("TransformedImage",dctImage );
x += 100; y += 100;
//////Part 4: embed the watermark: if the corresponding watermark pixel is >= 128, increase element (5,5) of the block by 200, otherwise do nothing
for(int idx=0 ; idx<4096 ; idx++)
{
int i=idx/64;
int j=idx%64;
float elem=(float) wmrk.at<uchar>(i,j);
if (elem>=128)
{
float tmp=(float) blocks[idx].at<uchar>(5,5);
float temp=tmp +200;
uchar ch=(uchar) temp;
blocks[idx].at<uchar>(5,5)=ch;
}
}
//////Part 5:applying iDCT on each block
for (int i = 0; i < 512; i += 8)
{
for (int j = 0; j < 512; j+= 8)
{
Mat block = dctImage(Rect(i, j, 8, 8));
block.convertTo(block, CV_32FC1);
idct(block,blocks[iindex]);
blocks[iindex].convertTo(blocks[iindex], CV_8UC1);
iindex++;
}
}
/// show watermarked image
namedWindow("WatermarkedImage");
moveWindow("WatermarkedImage", x, y);
imshow("WatermarkedImage",dctImage );
cvWaitKey(80000);
destroyAllWindows();
return 0;
}
@N_Kh As far as I can see from a quick look at your code, you are calling imshow on the matrix dctImage, while your operations store their results in the separate matrix block and the vector blocks respectively, so dctImage itself is never modified.
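In other words (a minimal sketch of one possible fix, untested against the original code): block.convertTo with a different type reallocates block and detaches it from dctImage, so the inverse-DCT results have to be written back into the ROI explicitly, e.g. in Part 5:
for (int i = 0; i < 512; i += 8)
{
    for (int j = 0; j < 512; j += 8)
    {
        Mat roi = dctImage(Rect(i, j, 8, 8));
        Mat tmp;
        idct(blocks[iindex], tmp);   // assumes blocks[iindex] was kept as CV_32FC1
        tmp.convertTo(roi, CV_8UC1); // same size and type as roi, so this writes in place
        iindex++;
    }
}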
I have searched the internet and Stack Overflow thoroughly, but I haven't found the answer to my question:
How can I get/set (both) the RGB value of a certain pixel (given by its x,y coordinates) in OpenCV? What's important: I'm writing in C++, and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use; as far as I know it comes from the C API.
Yes, I'm aware that there was already the Pixel access in OpenCV 2.2 thread, but it was only about black and white bitmaps.
EDIT:
Thank you very much for all your answers. I see there are many ways to get/set the RGB value of a pixel. I got one more idea from my close friend (thanks, Benny!). It's very simple and effective. I think it's a matter of taste which one you choose.
Mat image;
(...)
Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);
And then you can read/write RGB values with:
p->x //B
p->y //G
p->z //R
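For instance (a small usage sketch of this approach), setting the pixel to pure red would be:
p->x = 0;   // B
p->y = 0;   // G
p->z = 255; // R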
Try the following:
cv::Mat image = ...do some stuff...;
image.at<cv::Vec3b>(y,x) gives you the pixel as a vector of type cv::Vec3b (it might be ordered as BGR rather than RGB), and you can read or write its channels individually:
image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
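You can also assign the whole pixel at once (a small sketch):
image.at<cv::Vec3b>(y, x) = cv::Vec3b(255, 0, 0); // pure blue, given BGR ordering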
The low-level way would be to access the matrix data directly. In an RGB image (which I believe OpenCV typically stores as BGR), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (from the top left) this way:
frame.data[frame.channels()*(frame.cols*y + x)];
Likewise, to get B, G, and R:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Note that this code assumes the stride is equal to width * channels, i.e. that the rows of the image have no padding.
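If rows might be padded, a stride-aware variant (a sketch using cv::Mat::step, the actual number of bytes per row) would be:
uchar b = frame.data[frame.step * y + frame.channels() * x + 0];
uchar g = frame.data[frame.step * y + frame.channels() * x + 1];
uchar r = frame.data[frame.step * y + frame.channels() * x + 2];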
Code is easier to follow for people who have this problem, so I share mine; you can use it directly. Please note that OpenCV stores pixels as BGR.
cv::Mat vImage_(vHeight_, vWidth_, CV_32FC3); // must be allocated before .at() is used
if (src_)
{
    cv::Vec3f vec_;
    for (int i = 0; i < vHeight_; i++)
        for (int j = 0; j < vWidth_; j++)
        {
            // OpenCV stores pixels as BGR
            vec_ = cv::Vec3f((*src_)[0] / 255.0, (*src_)[1] / 255.0, (*src_)[2] / 255.0);
            vImage_.at<cv::Vec3f>(vHeight_ - 1 - i, j) = vec_; // flip vertically
            ++src_;
        }
}
if(! vImage_.data ) // Check for invalid input
printf("failed to read image by OpenCV.");
else
{
cv::namedWindow( windowName_, CV_WINDOW_AUTOSIZE);
cv::imshow( windowName_, vImage_); // Show the image.
}
Recent OpenCV versions allow the cv::Mat::at function to take three indices. So for a genuinely 3-dimensional Mat object m, m.at<uchar>(0,0,0) should work.
uchar *value = img2.data; // pointer to the first byte of the pixel data (B,G,R,B,G,R,...)
int r = 0;                // channel counter, starting at the blue byte of the first pixel
for (size_t i = 0; i < img2.cols * img2.rows * img2.channels(); i++)
{
    if (r > 2) r = 0;           // wrap around after the three channels
    if (r == 0) value[i] = 0;   // blue
    if (r == 1) value[i] = 0;   // green
    if (r == 2) value[i] = 255; // red -> the whole image becomes solid red
    r++;
}
const double pi = boost::math::constants::pi<double>();
cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse){
    float angle = ellipse.angle;
    cv::Point ellipse_center = ellipse.center;
    float major_axis = ellipse.size.width/2;
    float minor_axis = ellipse.size.height/2;
    for(int x = 0; x < image.cols; x++)
    {
        for(int y = 0; y < image.rows; y++)
        {
            // rotate the pixel into the ellipse's own coordinate frame
            auto u = cos(angle*pi/180)*(x-ellipse_center.x) + sin(angle*pi/180)*(y-ellipse_center.y);
            auto v = -sin(angle*pi/180)*(x-ellipse_center.x) + cos(angle*pi/180)*(y-ellipse_center.y);
            // normalized ellipse equation: <= 1 means the pixel lies inside
            float distance = (u/major_axis)*(u/major_axis) + (v/minor_axis)*(v/minor_axis);
            if(distance <= 1)
            {
                image.at<cv::Vec3b>(y,x)[1] = 255; // max out the green channel inside the ellipse
            }
        }
    }
    return image;
}
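A hypothetical usage sketch (the image path and the ellipse parameters here are made up for illustration):
cv::Mat image = cv::imread("scene.jpg"); // any 3-channel image
cv::RotatedRect e(cv::Point2f(320, 240), cv::Size2f(200, 100), 30.0f);
cv::Mat marked = distance2ellipse(image, e);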
I want to create a ColorCube CIFilter for my app, and I found documentation on Apple's site here: https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_filer_recipes/ci_filter_recipes.html
I'm also posting the code here:
// Allocate memory
const unsigned int size = 64;
float *cubeData = (float *)malloc (size * size * size * sizeof (float) * 4);
float rgb[3], hsv[3], *c = cubeData;
// Populate cube with a simple gradient going from 0 to 1
for (int z = 0; z < size; z++){
rgb[2] = ((double)z)/(size-1); // Blue value
for (int y = 0; y < size; y++){
rgb[1] = ((double)y)/(size-1); // Green value
for (int x = 0; x < size; x ++){
rgb[0] = ((double)x)/(size-1); // Red value
// Convert RGB to HSV
// You can find publicly available rgbToHSV functions on the Internet
rgbToHSV(rgb, hsv);
// Use the hue value to determine which to make transparent
// The minimum and maximum hue angle depends on
// the color you want to remove
float alpha = (hsv[0] > minHueAngle && hsv[0] < maxHueAngle) ? 0.0f: 1.0f;
// Calculate premultiplied alpha values for the cube
c[0] = rgb[0] * alpha;
c[1] = rgb[1] * alpha;
c[2] = rgb[2] * alpha;
c[3] = alpha;
c += 4; // advance our pointer into memory for the next color value
}
}
}
I want to know why they take size = 64, and what the first three lines (the memory allocation, which I marked in bold) mean.
Any help is appreciated...
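For what it's worth (my reading, not an authoritative answer): size = 64 is the cube's resolution per color channel, and the allocation reserves one RGBA float entry for every (r,g,b) lattice point:
// 64 * 64 * 64 = 262,144 lattice points, 4 floats (RGBA) each:
// 262,144 * 4 floats * 4 bytes = 4 MiB of cube data
float *cubeData = (float *)malloc(size * size * size * sizeof(float) * 4);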
Like the subject says, I am trying to implement OpenCvSharp SURF in Unity3D and am somewhat stuck on the conversion from IplImage to Texture2D. Also, considering that this conversion process should run at least at 25 fps, any tips or suggestions are very helpful!
Might be a bit late; I am working on the same thing now, and here is my solution:
void IplImageToTexture2D (IplImage displayImg)
{
for (int i = 0; i < height; i++)
{
for (int j = 0; j < width; j++)
{
float b = (float)displayImg[i, j].Val0;
float g = (float)displayImg[i, j].Val1;
float r = (float)displayImg[i, j].Val2;
Color color = new Color(r / 255.0f, g / 255.0f, b / 255.0f);
videoTexture.SetPixel(j, height - i - 1, color);
}
}
videoTexture.Apply();
}
But it is a bit slow.
Still trying to improve the performance.
Texture2D tex = new Texture2D(640, 480);
CvMat img = new CvMat(640, 480, MatrixType.U8C3);
byte[] data = new byte[640 * 480 * 3];
Marshal.Copy(img.Data, data, 0, 640 * 480 * 3);
// Note: Texture2D.LoadImage expects PNG/JPG-encoded bytes, not raw pixel
// data, so raw BGR bytes like these will not decode correctly.
tex.LoadImage(data);
To improve performance, use Unity3D's undocumented function LoadRawTextureData:
Texture2D IplImageToTexture2D(IplImage img)
{
Texture2D videoTexture = new Texture2D(imWidth, imHeight, TextureFormat.RGB24, false);
byte[] data = new byte[imWidth * imHeight * 3];
Marshal.Copy(img.ImageData, data, 0, imWidth * imHeight * 3);
videoTexture.LoadRawTextureData(data);
videoTexture.Apply();
return videoTexture;
}
I am using an OpenCV 1.0 based calibration toolbox to which I am making small additions. My additions require the FFTW library (OpenCV has DFT functions, but they aren't to my liking).
I have been trying to access the pixel values of an image and store those pixel values in an fftw_complex type variable. I have tried many of the different suggestions (including the OpenCV documentation), but I have been unable to do this properly.
The code below doesn't bring up any inconsistencies with variable types during the build or while debugging; however, the pixel values obtained and stored in testarray are just a repetition of the values [13, 240, 173, 186]. Does anyone know how to access the pixel values and store them in FFTW-compliant matrices/containers?
//.....................................//
// For image manipulation
IplImage* im1 = cvCreateImage(cvSize(400,400), IPL_DEPTH_8U, 1);
int width  = im1->width;
int height = im1->height;
int step = im1->widthStep / sizeof(uchar);
int fft_size = width * height;
// Set up pointers to images
uchar *im_data = (uchar *)im1->imageData;
//......................................//
fftw_complex testarray[subIM_size][subIM_size]; // size of complex FFTW array
im1 = cvLoadImage(FILEname, 0);
if (!im1) printf("Could not load image file");
// Load image data into FFTW arrays
for (i = 0; i < height; i++) {
    for (j = 0; j < width; j++) {
        testarray[i][j].re = double(im_data[i * step + j]);
        testarray[i][j].im = 0.0;
    }
}
I found out the problem. I had been using the wrong approach to access it.
This is what I used:
testarray[i][j].re = ((uchar*)(im1->imageData + i *im1->widthStep))[j]; //double (im_data[i * step + j]);
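For reference, a minimal FFTW3-style sketch of the same copy (a note of mine: FFTW3 represents fftw_complex as double[2], so .re/.im become [0]/[1]; with FFTW2 the original field names apply):
#include <fftw3.h>
// im1 is assumed to be an 8-bit, single-channel IplImage that has already been loaded
fftw_complex* arr = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * im1->width * im1->height);
for (int i = 0; i < im1->height; ++i) {
    for (int j = 0; j < im1->width; ++j) {
        uchar pixel = ((uchar*)(im1->imageData + i * im1->widthStep))[j];
        arr[i * im1->width + j][0] = (double) pixel; // real part
        arr[i * im1->width + j][1] = 0.0;            // imaginary part
    }
}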
I am using C++ in Visual Studio 2008, and this is the way I do it.
If we go through the image with a loop like this:
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
then the access to the FFTW variable (let's call it A) is done as follows; note that the row-major index uses the image width:
A[width * y + x][0] = double(im_data[width * y + x]); // real part
A[width * y + x][1] = 0;                              // imaginary part
Hope it helps!
Antonio