The image one gets from the OpenNI Image Meta Data is arranged as an RGB image. I would like to convert it to an OpenCV IplImage, which by default assumes the data is stored as BGR. I use the following code:
XnUInt8* pImage = new XnUInt8[640*480*3];
memcpy(pImage, imageMD.Data(), 640*480*3*sizeof(XnUInt8));
XnUInt8 temp;
for (size_t row = 0; row < 480; row++) {
    for (size_t col = 0; col < 3*640; col += 3) {
        size_t index = row*3*640 + col;
        temp = pImage[index];
        pImage[index] = pImage[index+2];
        pImage[index+2] = temp;
    }
}
img->imageData = (char*) pImage;
What is the fastest way in C/C++ to perform this conversion, so that the RGB image becomes BGR (in IplImage format)?
Isn't it easier to use OpenCV's color conversion function?
imgColor->imageData = (char*) pImage;
cvCvtColor( imgColor, imgColor, CV_BGR2RGB);
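For completeness, here is a minimal sketch of the whole round trip as a helper function (the name rgbBufferToBgrIplImage is mine, and I'm assuming the 640x480 RGB24 buffer from the question; note that CV_BGR2RGB and CV_RGB2BGR describe the same channel swap):

#include <cstring>
#include <opencv2/core/core_c.h>
#include <opencv2/imgproc/imgproc_c.h>

// Sketch: copy the OpenNI RGB buffer into a freshly created IplImage
// and swap the channels in place, replacing the manual loop.
// For width 640, widthStep is 640*3 = 1920 (already 4-byte aligned),
// so a straight memcpy into imageData is safe here.
IplImage* rgbBufferToBgrIplImage(const unsigned char* pData)
{
    IplImage* imgColor = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
    memcpy(imgColor->imageData, pData, 640 * 480 * 3);
    cvCvtColor(imgColor, imgColor, CV_RGB2BGR); // same swap as CV_BGR2RGB
    return imgColor;
}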
There are some interesting references out there.
For instance, the QImage to IplImage conversion shown here, which also converts RGB to BGR:
static IplImage* qImage2IplImage(const QImage& qImage)
{
    int width = qImage.width();
    int height = qImage.height();

    // Create an IplImage with 3 channels
    IplImage* img = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3);
    char* imgBuffer = img->imageData;
    // Note: this assumes img->widthStep == 3*width; if the IplImage
    // rows are padded, imgBuffer must be advanced per row instead.

    // Remove the alpha channel
    int jump = (qImage.hasAlphaChannel()) ? 4 : 3;

    for (int y = 0; y < img->height; y++)
    {
        QByteArray a((const char*)qImage.scanLine(y), qImage.bytesPerLine());
        for (int i = 0; i < a.size(); i += jump)
        {
            // Swap from RGB to BGR
            imgBuffer[2] = a[i];
            imgBuffer[1] = a[i+1];
            imgBuffer[0] = a[i+2];
            imgBuffer += 3;
        }
    }
    return img;
}
There are several posts here besides this one that show how to iterate on IplImage data.
There might be more to it than that (if the encoding is not openni_wrapper::Image::RGB). A good example can be found in the openni_image.cpp file, where line 170 uses the function fillRGB.
I use Swift 4.2 and OpenCV 3.1, installed via CocoaPods.
Here is my function call in my Swift file:
image = OpenCVWrapper.hdrImaging(Arrayimages, Arraytimes)
Arrayimages is an array of UIImage
Arraytimes is an array of Float64
In my OpenCVWrapper.h I declare my function like this:
+(UIImage *) hdrImaging:(NSArray *)images :(NSArray *)times;
and I implement it in my OpenCVWrapper.mm:
+ (UIImage *)hdrImaging:(NSArray *)images :(NSArray *)times {
    cv::Mat response;
    std::vector<cv::Mat> imagesVector;
    std::vector<float> timesVector;

    for (int i = 0; i < images.count; i++) {
        UIImage *imageToMat = images[i];
        cv::Mat rgba, matToInsert;
        UIImageToMat(imageToMat, rgba);                      // rgba is RGBA
        cv::cvtColor(rgba, matToInsert, cv::COLOR_RGBA2BGR); // matToInsert is 3-channel BGR
        imagesVector.push_back(matToInsert);
        float time = [times[i] floatValue];
        timesVector.push_back(time);
    }

    cv::Ptr<cv::CalibrateDebevec> calibrate = cv::createCalibrateDebevec();
    calibrate->process(imagesVector, response, timesVector);

    cv::Mat hdr;
    cv::Ptr<cv::MergeDebevec> merge_debevec = cv::createMergeDebevec();
    merge_debevec->process(imagesVector, hdr, timesVector, response);

    cv::Mat ldr;
    cv::Ptr<cv::TonemapDurand> tonemap = cv::createTonemapDurand(2.2f);
    tonemap->process(hdr, ldr);

    cv::Mat fusion;
    cv::Ptr<cv::MergeMertens> merge_mertens = cv::createMergeMertens();
    merge_mertens->process(imagesVector, fusion);

    response = fusion * 255;
    response.convertTo(response, CV_8U);
    return MatToUIImage(response);
}
EDIT
If you get a memory error like:
EXC_RESOURCE RESOURCE_TYPE_MEMORY (limit=650 MB, unused=0x0)
on the line merge_debevec->process(imagesVector, hdr, timesVector, response);
you can lower the quality of your pictures by setting the following in your capture session declaration: captureSession.sessionPreset = .hd1920x1080, or another resolution.
EDIT 2
If you get an error about invalid bits/components:
CustomCamera[5793:1463869] [Unknown process name] CGImageCreate: invalid image bits/component: 8 bits/pixel 96 alpha info = kCGImageAlphaNone
don't forget to convert your response Mat to 8-bit, like: response.convertTo(response, CV_8U);
Why is my picture rotated? I think the MatToUIImage function is rotating my picture, but how do I give it the correct rotation?
Thanks!
The first error says you ran out of memory, probably due to the large size of the images. As for the second error, UIImageToMat converts a UIImage to a 4-channel RGBA image, so you need to convert that into BGR or RGB for HDR:
for (int i = 0; i < images.count; i++) {
    UIImage *imageToMat = images[i];
    cv::Mat rgba, matToInsert;
    UIImageToMat(imageToMat, rgba);                      // rgba is RGBA
    cv::cvtColor(rgba, matToInsert, cv::COLOR_RGBA2BGR); // matToInsert is 3-channel BGR
    imagesVector.push_back(matToInsert);
    float time = [times[i] floatValue];
    timesVector.push_back(time);
}
I am working on an implementation where I have a rectangle-shaped image inside a big background image. I am trying to programmatically retrieve the rectangle-shaped image from the big image and extract text information from that particular rectangle image. I am trying to use the OpenCV third-party framework, but I haven't been able to retrieve the rectangle image from the big background image. Could someone please guide me on how I can achieve this?
UPDATED:
I found a link showing how to find square shapes using OpenCV. Can it be modified to find rectangle shapes? Can someone guide me on this?
UPDATED LATEST:
I finally got the code working; here it is below.
- (cv::Mat)cvMatWithImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}
- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];

    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data); // It SHOULD BE (__bridge CFDataRef)data

    CGImageRef imageRef = CGImageCreate(cvMat.cols, cvMat.rows, 8, 8 * cvMat.elemSize(), cvMat.step[0],
                                        colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);

    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}
- (void)forOpenCV
{
    imageView = [UIImage imageNamed:@"myimage.jpg"];
    if (imageView != nil)
    {
        cv::Mat tempMat = [imageView CVMat];
        cv::Mat greyMat = [self cvMatWithImage:imageView];
        cv::vector<cv::vector<cv::Point> > squares;
        cv::Mat img = [self debugSquares: squares: greyMat];
        imageView = [self UIImageFromCVMat: img];
        self.imageView.image = imageView;
    }
}
double angle(cv::Point pt1, cv::Point pt2, cv::Point pt0) {
    double dx1 = pt1.x - pt0.x;
    double dy1 = pt1.y - pt0.y;
    double dx2 = pt2.x - pt0.x;
    double dy2 = pt2.y - pt0.y;
    return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}
- (cv::Mat)debugSquares:(std::vector<std::vector<cv::Point> >)squares :(cv::Mat &)image
{
    NSLog(@"%lu", squares.size());

    // blur will enhance edge detection
    cv::Mat blurred = image.clone();
    medianBlur(image, blurred, 9);

    cv::Mat gray0(image.size(), CV_8U), gray;
    cv::vector<cv::vector<cv::Point> > contours;

    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++)
    {
        int ch[] = {c, 0};
        mixChannels(&image, 1, &gray0, 1, ch, 1);

        // try several threshold levels
        const int threshold_level = 2;
        for (int l = 0; l < threshold_level; l++)
        {
            // Use Canny instead of zero threshold level!
            // Canny helps to catch squares with gradient shading
            if (l == 0)
            {
                Canny(gray0, gray, 10, 20, 3);

                // Dilate helps to remove potential holes between edge segments
                dilate(gray, gray, cv::Mat(), cv::Point(-1, -1));
            }
            else
            {
                gray = gray0 >= (l+1) * 255 / threshold_level;
            }

            // Find contours and store them in a list
            findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

            // Test contours
            cv::vector<cv::Point> approx;
            for (size_t i = 0; i < contours.size(); i++)
            {
                // approximate contour with accuracy proportional
                // to the contour perimeter
                approxPolyDP(cv::Mat(contours[i]), approx, arcLength(cv::Mat(contours[i]), true)*0.02, true);

                // Note: absolute value of an area is used because
                // area may be positive or negative - in accordance with the
                // contour orientation
                if (approx.size() == 4 &&
                    fabs(contourArea(cv::Mat(approx))) > 1000 &&
                    isContourConvex(cv::Mat(approx)))
                {
                    double maxCosine = 0;
                    for (int j = 2; j < 5; j++)
                    {
                        double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
                        maxCosine = MAX(maxCosine, cosine);
                    }
                    if (maxCosine < 0.3)
                        squares.push_back(approx);
                }
            }
        }
    }

    NSLog(@"squares.size(): %lu", squares.size());

    for (size_t i = 0; i < squares.size(); i++)
    {
        cv::Rect rectangle = boundingRect(cv::Mat(squares[i]));
        NSLog(@"rectangle.x: %d", rectangle.x);
        NSLog(@"rectangle.y: %d", rectangle.y);

        if (i == squares.size() - 1) // Detecting Rectangle here
        {
            const cv::Point* p = &squares[i][0];
            int n = (int)squares[i].size();
            NSLog(@"%d", n);

            line(image, cv::Point(507,418), cv::Point(507+1776,418+1372), cv::Scalar(255,0,0), 2, 8);
            polylines(image, &p, &n, 1, true, cv::Scalar(255,255,0), 5, CV_AA);

            int fx1 = rectangle.x;
            NSLog(@"X: %d", fx1);
            int fy1 = rectangle.y;
            NSLog(@"Y: %d", fy1);
            int fx2 = rectangle.x + rectangle.width;
            NSLog(@"Width: %d", fx2);
            int fy2 = rectangle.y + rectangle.height;
            NSLog(@"Height: %d", fy2);

            line(image, cv::Point(fx1,fy1), cv::Point(fx2,fy2), cv::Scalar(0,0,255), 2, 8);
        }
    }

    return image;
}
Thank you.
Here is a full answer using a small wrapper class to separate the C++ from the Objective-C code.
I had to raise another question on Stack Overflow to deal with my poor C++ knowledge, but I have worked out everything we need to interface C++ cleanly with Objective-C code, using the squares.cpp sample code as an example. The aim is to keep the original C++ code as pristine as possible, and to keep the bulk of the openCV work in pure C++ files for (im)portability.
I have left my original answer in place as this seems to go beyond an edit. The complete demo project is on GitHub.
CVViewController.h / CVViewController.m
pure Objective-C
communicates with the openCV C++ code via a WRAPPER... it neither knows nor cares that C++ is processing these method calls behind the wrapper.
CVSquaresWrapper.h / CVSquaresWrapper.mm
Objective-C++
does as little as possible, really only two things...
calls UIImage Objective-C++ categories to convert to and from UIImage <> cv::Mat
mediates between CVViewController's Objective-C method calls and the CVSquares C++ (class) function calls
CVSquares.h / CVSquares.cpp
pure C++
CVSquares.h declares a public function inside a class definition (in this case, a single static function).
This replaces the work of main{} in the original file.
We try to keep CVSquares.cpp as close as possible to the C++ original for portability.
CVViewController.m
// remove 'magic numbers' from the original C++ source so we can manipulate them from Obj-C
#define TOLERANCE 0.01
#define THRESHOLD 50
#define LEVELS 9

UIImage* image =
    [CVSquaresWrapper detectedSquaresInImage:self.image
                                   tolerance:TOLERANCE
                                   threshold:THRESHOLD
                                      levels:LEVELS];
CVSquaresWrapper.h
// CVSquaresWrapper.h

#import <Foundation/Foundation.h>

@interface CVSquaresWrapper : NSObject

+ (UIImage*)detectedSquaresInImage:(UIImage*)image
                         tolerance:(CGFloat)tolerance
                         threshold:(NSInteger)threshold
                            levels:(NSInteger)levels;

@end
CVSquaresWrapper.mm
// CVSquaresWrapper.mm
// wrapper that talks to C++ and to Obj-C classes

#import "CVSquaresWrapper.h"
#import "CVSquares.h"
#import "UIImage+OpenCV.h"

@implementation CVSquaresWrapper

+ (UIImage*)detectedSquaresInImage:(UIImage*)image
                         tolerance:(CGFloat)tolerance
                         threshold:(NSInteger)threshold
                            levels:(NSInteger)levels
{
    UIImage* result = nil;

    // convert from UIImage to cv::Mat openCV image format
    // (this is a category on UIImage)
    cv::Mat matImage = [image CVMat];

    // call the C++ class static member function;
    // we want this function signature to exactly
    // mirror the form of the calling method
    matImage = CVSquares::detectedSquaresInImage(matImage, tolerance, threshold, levels);

    // convert back from cv::Mat openCV image format
    // to UIImage image format (category on UIImage)
    result = [UIImage imageFromCVMat:matImage];

    return result;
}

@end
CVSquares.h
// CVSquares.h

#ifndef __OpenCVClient__CVSquares__
#define __OpenCVClient__CVSquares__

#include <opencv2/core/core.hpp> // for cv::Mat

// class definition
// in this example we do not need a class
// as we have no instance variables and just one static function.
// We could instead just declare the function, but this form seems clearer
class CVSquares
{
public:
    static cv::Mat detectedSquaresInImage(cv::Mat image, float tol, int threshold, int levels);
};

#endif /* defined(__OpenCVClient__CVSquares__) */
CVSquares.cpp
// CVSquares.cpp

#include "CVSquares.h"

using namespace std;
using namespace cv;

static int thresh = 50, N = 11;
static float tolerance = 0.01;

// declarations added so that we can move our
// public function to the top of the file
static void findSquares(const Mat& image, vector<vector<Point> >& squares);
static void drawSquares(Mat& image, vector<vector<Point> >& squares);

// this public function performs the role of
// main{} in the original file (main{} is deleted)
cv::Mat CVSquares::detectedSquaresInImage(cv::Mat image, float tol, int threshold, int levels)
{
    vector<vector<Point> > squares;

    if (image.empty())
    {
        cout << "Couldn't load " << endl;
    }

    tolerance = tol;
    thresh = threshold;
    N = levels;

    findSquares(image, squares);
    drawSquares(image, squares);

    return image;
}

// the rest of this file is identical to the original squares.cpp except:
// main{} is removed
// this line is removed from drawSquares:
//     imshow(wndname, image);
// (obj-c will do the drawing)
UIImage+OpenCV.h
The UIImage category is an Objective-C++ file containing the code to convert between UIImage and cv::Mat image formats. This is where you move your two methods -(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat and - (cv::Mat)cvMatWithImage:(UIImage *)image.
// UIImage+OpenCV.h

#import <UIKit/UIKit.h>

@interface UIImage (UIImage_OpenCV)

// cv::Mat to UIImage
+ (UIImage *)imageFromCVMat:(cv::Mat&)cvMat;

// UIImage to cv::Mat
- (cv::Mat)CVMat;

@end
The method implementations here are unchanged from your code (although we don't pass a UIImage in to convert; instead we refer to self).
Here is a partial answer. It is not complete, because I am attempting to do the exact same thing and experiencing huge difficulties every step of the way. My knowledge is quite strong in Objective-C but really weak in C++.
You should read this guide to wrapping C++.
And everything on Ievgen Khvedchenia's Computer Vision Talks blog, especially the openCV tutorial. Ievgen has also posted an amazingly complete project on GitHub to go with the tutorial.
Having said that, I am still having a lot of trouble getting openCV to compile and run smoothly.
For example, Ievgen's tutorial runs fine as a finished project, but if I try to recreate it from scratch I get the same openCV compile errors that have been plaguing me all along. It's probably my poor understanding of C++ and its integration with Obj-C.
Regarding squares.cpp
What you need to do
remove int main(int /*argc*/, char** /*argv*/) from squares.cpp
remove imshow(wndname, image); from drawSquares (obj-c will do the drawing)
create a header file squares.h
make one or two public functions in the header file which you can call from obj-c (or from an obj-c/c++ wrapper)
Here is what I have so far...
class squares
{
public:
    static cv::Mat& findSquares(const cv::Mat& image, cv::vector<cv::vector<cv::Point> >& squares);
    static cv::Mat& drawSquares(cv::Mat& image, const cv::vector<cv::vector<cv::Point> >& squares);
};
You should be able to reduce this to a single method, say processSquares, with one input cv::Mat& image and one return cv::Mat& image. That method would declare squares and call findSquares and drawSquares within the .cpp file.
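For illustration, a rough sketch of what that single method might look like (processSquares is just a suggested name, and I'm assuming the class declaration sketched above):

// Sketch only: reduces the two-step API to one call.
// Assumes squares::findSquares and squares::drawSquares are
// implemented in the .cpp file, adapted from squares.cpp.
cv::Mat& processSquares(cv::Mat& image)
{
    cv::vector<cv::vector<cv::Point> > found;
    squares::findSquares(image, found); // detect candidate squares
    squares::drawSquares(image, found); // draw them onto the image
    return image;
}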
The wrapper will take an input UIImage, convert it to a cv::Mat image, call processSquares with that input, and get back a result cv::Mat image. It will convert that result back to a UIImage and pass it to the Objective-C calling function.
So that's a neat sketch of what we need to do; I will try to expand this answer once I've actually managed to do any of it!
char* filename1 = "1.bmp";
IplImage* greyLeftImg = cvLoadImage(filename1, 0);
char* filename2 = "2.bmp";
IplImage* greyRightImg = cvLoadImage(filename2, 0);
IplImage* greyLeftImg32 = cvCreateImage(cvSize(width, height), 32, greyLeftImg->nChannels); // IPL_DEPTH_32F
IplImage* greyRightImg32 = cvCreateImage(cvSize(width, height), 32, greyRightImg->nChannels);
This always fails with "Assertion failed (src.size == dst.size && dst.type() == CV_8UC(src.channels())) in unknown function".
I have searched for many methods, but none of them seems to work.
A simple way to convert any grayscale 8-bit or 16-bit uint image in OpenCV to a 32-bit floating-point type is like this:
IplImage* img = cvLoadImage("E:\\Work_DataBase\\earth.jpg", 0);
IplImage* out = cvCreateImage(cvGetSize(img), IPL_DEPTH_32F, img->nChannels);

double min, max;
cvMinMaxLoc(img, &min, &max);

// Remember values of the floating point image are in the range of 0 to 1,
// which you can't visualize by cvShowImage().......
cvCvtScale(img, out, 1.0/max, 0);
Hope this is an easy way...
Here is a simple function to convert any IplImage to 32-bit float.
IplImage* convert_to_float32(IplImage* img)
{
    IplImage* img32f = cvCreateImage(cvGetSize(img), IPL_DEPTH_32F, img->nChannels);

    for (int i = 0; i < img->height; i++)
    {
        for (int j = 0; j < img->width; j++)
        {
            cvSet2D(img32f, i, j, cvGet2D(img, i, j));
        }
    }
    return img32f;
}
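As a side note, the per-pixel cvGet2D/cvSet2D loop above works but is slow; cvConvertScale (the same call as the cvCvtScale used in the previous answer) performs the same widening conversion in one shot:

// Sketch: same conversion as convert_to_float32, without the loop.
IplImage* img32f = cvCreateImage(cvGetSize(img), IPL_DEPTH_32F, img->nChannels);
cvConvertScale(img, img32f, 1.0, 0.0); // copy values, widening 8U -> 32F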
An important consideration: for floating-point images in OpenCV, only those whose pixel values lie between 0.0 and 1.0 can be visualized.
To visualize a floating-point image, you have to scale the values into the range 0.0 to 1.0.
Here is an example of how to do this:
IplImage* img8u = cvLoadImage(filename1, 0);
IplImage* img32f = convert_to_float32(img8u);

cvShowImage("float image", img32f);            // Image will not be shown correctly
cvWaitKey(0);

cvScale(img32f, img32f, 1.0/255.0);
cvShowImage("float image normalized", img32f); // Image will be shown correctly now
cvWaitKey(0);
So, I have an image cv::Mat created as an indexed 2D matrix with colors 1, 2, 3, ... up to 255. I want to resize my image all at once, but get the same result as my current approach, which works on each index individually so as not to get mixed colors:
//...
std::map<unsigned char, cv::Mat*> clusters;

for (int i = 0; i < sy; ++i)
{
    for (int j = 0; j < sx; ++j)
    {
        unsigned char current_k = image[i][j];
        if (clusters[current_k] == NULL) {
            clusters[current_k] = new cv::Mat();
            (*clusters[current_k]) = cv::Mat::zeros(cv::Size(sx, sy), CV_8UC1);
        }
        (*clusters[current_k]).row(i).col(j) = 255;
    }
}

std::vector<cv::Mat> result;
for (std::map<unsigned char, cv::Mat*>::iterator it = clusters.begin(); it != clusters.end(); ++it)
{
    cv::Mat filled(cv::Size(w, h), (*it->second).type());
    cv::resize((*it->second), filled, filled.size(), 0, 0, CV_INTER_CUBIC);
    cv::threshold(filled, filled, 1, 255, CV_THRESH_BINARY);
    result.push_back(filled);
}
So, can OpenCV help me automate this for my indexed image (so that I don't have to create a cv::Mat per cluster just to get a correct resize)?
You can use the remap function with your own mesh to interpolate the values as you'd like.
Take a look at this tutorial (Link).
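To illustrate the idea (my own sketch, not from the tutorial): if you build the map yourself and sample with nearest-neighbor interpolation, index values are never blended, so the whole indexed image can be resized in one call and the per-cluster loop disappears:

#include <opencv2/opencv.hpp>

// Sketch: resize an indexed (label) image with a hand-built map and
// cv::remap using INTER_NEAREST, so label values never get mixed.
cv::Mat resizeIndexed(const cv::Mat& labels, cv::Size dstSize)
{
    cv::Mat mapX(dstSize, CV_32FC1), mapY(dstSize, CV_32FC1);
    for (int y = 0; y < dstSize.height; ++y)
    {
        for (int x = 0; x < dstSize.width; ++x)
        {
            // map each destination pixel back to its source coordinate
            mapX.at<float>(y, x) = (x + 0.5f) * labels.cols / dstSize.width - 0.5f;
            mapY.at<float>(y, x) = (y + 0.5f) * labels.rows / dstSize.height - 0.5f;
        }
    }
    cv::Mat resized;
    cv::remap(labels, resized, mapX, mapY, cv::INTER_NEAREST);
    return resized;
}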
I want to write data directly into the imageData array of an IplImage, but I can't find much information on how it's formatted. One thing that's particularly troubling me is that, despite creating an image with three channels, there are four bytes to each pixel.
The function I'm using to create the image is:
IplImage *frame = cvCreateImage(cvSize(1, 1), IPL_DEPTH_8U, 3);
By all indications, this should create a three channel RGB image, but that doesn't seem to be the case.
How would I, for example, write a single red pixel to that image?
Thanks for any help; it's got me stumped.
If you are looking at frame->imageSize, keep in mind that it is frame->height * frame->widthStep, not frame->height * frame->width. By default IplImage rows are padded to a 4-byte boundary, which is why your 1x1 three-channel image occupies 4 bytes: 3 data bytes plus 1 byte of row padding.
BGR is the native format of OpenCV, not RGB.
Also, if you're just getting started, you should consider using the C++ interface (where Mat replaces IplImage), since that is the future direction and it's a lot easier to work with; see the cv::Mat version after the sample below.
Here's some sample code that accesses pixel data directly:
int main(int argc, const char* argv[]) {
    IplImage *frame = cvCreateImage(cvSize(41, 41), IPL_DEPTH_8U, 3);

    for (int y = 0; y < frame->height; y++) {
        uchar* ptr = (uchar*)(frame->imageData + y * frame->widthStep);
        for (int x = 0; x < frame->width; x++) {
            ptr[3*x+2] = 255; // Set red to max (BGR format)
        }
    }

    cvNamedWindow("window", CV_WINDOW_AUTOSIZE);
    cvShowImage("window", frame);
    cvWaitKey(0);

    cvReleaseImage(&frame);
    cvDestroyWindow("window");
    return 0;
}
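And since the C++ interface was mentioned above, here is a rough cv::Mat equivalent of the same sample (a sketch, not a drop-in replacement):

#include <opencv2/opencv.hpp>

int main()
{
    // 41x41, 3-channel 8-bit image, zero-initialized (black)
    cv::Mat frame = cv::Mat::zeros(41, 41, CV_8UC3);

    for (int y = 0; y < frame.rows; y++)
    {
        for (int x = 0; x < frame.cols; x++)
        {
            frame.at<cv::Vec3b>(y, x)[2] = 255; // channel 2 = red in BGR
        }
    }

    cv::imshow("window", frame);
    cv::waitKey(0);
    return 0;
}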
unsigned char* imageData = {r1, g1, b1, r2, g2, b2, ..., rN, gN, bN}; // pseudocode; N = height*width of the image
frame->imageData = (char*)imageData;
Take your image, which is a two-dimensional array of height N and width M, and arrange it into a row-wise vector of length N*M. Make it of type unsigned char* for IPL_DEPTH_8U images.
Straight to your answer, painting the pixel red:
IplImage *frame = cvCreateImage(cvSize(1, 1), IPL_DEPTH_8U, 3);

int y, x;
x = 0; y = 0; // Pixel coordinates. Use this for bigger images than a single pixel.

int C = 2;    // 0 for blue, 1 for green and 2 for red (BGR is the default format).
frame->imageData[y*frame->widthStep + 3*x + C] = (uchar)255;