I've been following the Caffe MNIST example and trying to deploy a test of the trained model with C++, where I use OpenCV to read in the images. In the example, they mention how for the training and test images they
scale the incoming pixels so that they are in the range [0,1). Why
0.00390625? It is 1 divided by 256.
I've heard there's a DataTransformer class in Caffe you can use to scale your images, but if I multiplied each pixel in the OpenCV Mat object by 0.00390625, would this give the same result?
The idea is right. But remember to convert your OpenCV Mats to float or double type before scaling.
Something like:
cv::Mat mat; // assume this is one of your images (grayscale)
/* convert it to float */
mat.convertTo(mat, CV_32FC1); // use CV_32FC3 for color images
/* scaling here */
mat = mat * 0.00390625;
Update #1: Converting and scaling can also simply be done in one line, i.e.
cv::Mat mat; // assume this is one of your images (grayscale)
/* convert and scale here */
mat.convertTo(mat, CV_32FC1, 0.00390625);
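As a quick sanity check that the two forms above agree, here is a minimal sketch (the file name is just a placeholder for one of your grayscale images):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // "digit.png" is a placeholder file name
    cv::Mat img = cv::imread("digit.png", cv::IMREAD_GRAYSCALE);

    cv::Mat a, b;
    img.convertTo(a, CV_32FC1);              // convert first ...
    a = a * 0.00390625;                      // ... then scale

    img.convertTo(b, CV_32FC1, 0.00390625);  // convert and scale in one call

    // maximum absolute difference between the two results; should print 0
    std::cout << cv::norm(a, b, cv::NORM_INF) << std::endl;
    return 0;
}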
I have a color image represented as an OpenCV Mat object (C++, image type CV_32FC3). I have a color correction matrix that I want to apply to each pixel of the RGB color image (or BGR using OpenCV convention, doesn't matter here). The color correction matrix is 3x3.
I could easily iterate over the pixels and create a vector v (3x1) representing RGB, and then compute M*v, but this would be too slow for my real-time video application.
The cv::cvtColor function is fast, but does not seem to allow for custom color transformations.
http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html#cvtcolor
Similar to the following, but I am using OpenCV for C++, not Python.
Apply transformation matrix to pixels in OpenCV image
Here's the code that worked using cv::reshape. It was fast enough for my application:
#define WIDTH 2048
#define HEIGHT 2048
...
Mat orig_img = Mat(HEIGHT, WIDTH, CV_32FC3);
//put some data in orig_img somehow ...
/*The color matrix
Red:RGB; Green:RGB; Blue:RGB
1.8786 -0.8786 0.0061
-0.2277 1.5779 -0.3313
0.0393 -0.6964 1.6321
*/
float m[3][3] = {{1.6321, -0.6964, 0.0393},
{-0.3313, 1.5779, -0.2277},
{0.0061, -0.8786, 1.8786 }};
Mat M = Mat(3, 3, CV_32FC1, m).t();
Mat orig_img_linear = orig_img.reshape(1, HEIGHT*WIDTH);
Mat color_matrixed_linear = orig_img_linear*M;
Mat final_color_matrixed = color_matrixed_linear.reshape(3, HEIGHT);
A few things to note from the above: the color matrix in the comment block is the one I would ordinarily apply to an RGB image. In defining the float array m, I swapped rows 1 and 3 and columns 1 and 3 to account for OpenCV's BGR ordering. The color matrix must also be transposed. Usually a color matrix is applied as M * v = v_new, where M is 3x3 and v is 3x1, but here we compute v^T * M^T = v_new^T to avoid having to transpose each 3-channel pixel.
Basically the linked answer uses reshape to convert your CV_32FC3 mat of size m x n into a CV_32F mat of size (m*n) x 3. After that, each row of the matrix contains exactly the color channels of one pixel. You can then apply the usual matrix multiplication to obtain a new mat and reshape it back to the original shape with three channels.
Note: it is worth remembering that the default channel order in OpenCV is BGR, not RGB.
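For reference, the same per-pixel multiplication can also be done with cv::transform, which applies a small matrix to every element of a multi-channel Mat. A minimal sketch (it reuses the BGR-ordered matrix from the answer above; no transpose is needed here because cv::transform already treats each pixel as a column vector):
#include <opencv2/opencv.hpp>

cv::Mat colorCorrect(const cv::Mat& orig_img)   // orig_img: CV_32FC3, BGR order
{
    // Same values as the float array m above (already reordered for BGR)
    float m[3][3] = {{ 1.6321f, -0.6964f,  0.0393f},
                     {-0.3313f,  1.5779f, -0.2277f},
                     { 0.0061f, -0.8786f,  1.8786f}};
    cv::Mat M(3, 3, CV_32FC1, m);

    // corrected(y,x) = M * orig_img(y,x), each pixel treated as a 3x1 vector
    cv::Mat corrected;
    cv::transform(orig_img, corrected, M);
    return corrected;
}
Whether this is faster than the reshape trick depends on the build, so it is worth timing both for a real-time pipeline.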
I've got a sequence of images of type CV_8UC4. Each is of HD size, 1280x720.
I'm running background/foreground segmentation (MOG2 specifically) on an ROI of the image.
After the algorithm finishes I get a binary image the size of the ROI and of type CV_8UC1.
I want to insert this binary image back into the original big image. How can I do this?
Here's what I'm doing (the code is simplified for the sake of readability):
// cvImage is the big Mat coming from outside
cv::Mat roi(cvImage, cv::Rect(200, 200, 400, 400));
mog2 = cv::createBackgroundSubtractorMOG2();
cv::Mat fgMask;
mog2->apply(roi, fgMask); // Here the fgMask is the binary mat which corresponds to the roi size
So, how can I insert the fgMask back into the original image?
How do I do this CV_8UC1 -> CV_8UC4 conversion only for the ROI?
Thank you.
You need to make fgMask a 4 channel image:
Mat4b fgMask4ch;
cvtColor(fgMask, fgMask4ch, COLOR_GRAY2BGRA);
and then copy this into the original cvImage at the correct position, given by roi:
fgMask4ch.copyTo(roi);
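Putting both pieces together with the variable names from the question, a minimal sketch (it assumes cvImage is CV_8UC4 and that the cv::Rect lies inside it; in a real application the subtractor would be created once and reused across frames):
#include <opencv2/opencv.hpp>

// mog2 is created once elsewhere with cv::createBackgroundSubtractorMOG2()
void segmentRoi(cv::Mat& cvImage, cv::Ptr<cv::BackgroundSubtractorMOG2>& mog2)
{
    // roi is a view into cvImage, not a copy
    cv::Mat roi(cvImage, cv::Rect(200, 200, 400, 400));

    cv::Mat fgMask;
    mog2->apply(roi, fgMask);                 // fgMask: CV_8UC1, size of roi

    // Expand the mask to 4 channels and write it back into the ROI;
    // since roi shares memory with cvImage, the big image is updated in place.
    cv::Mat fgMask4ch;
    cv::cvtColor(fgMask, fgMask4ch, cv::COLOR_GRAY2BGRA);
    fgMask4ch.copyTo(roi);
}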
I have a vector of Point2f which has color space CV_8UC4, and I need to convert them to CV_64F. Is the following code correct?
points1.convertTo(points1, CV_64F);
More details:
I am trying to use this function to calculate the essential matrix (rotation/translation) through the 5-point algorithm, instead of using findFundamentalMat, included in OpenCV, which is based on the 8-point algorithm:
https://github.com/prclibo/relative-pose-estimation/blob/master/five-point-nister/five-point.cpp#L69
As you can see, it first converts the image to CV_64F. My input image is a CV_8UC4, BGRA image. When I tested the function, both BGRA and grayscale images produce valid matrices from the mathematical point of view, but if I pass a grayscale image instead of a color one, it takes much longer to calculate. That makes me think I'm not doing something correctly in one of the two cases.
I read around that when the change in color space is not linear (which I suppose is the case when you go from 4 channels to 1 like in this case), you should normalize the intensity value. Is that correct? Which input should I give to this function?
Another note, the function is called like this in my code:
vector<Point2f>imgpts1, imgpts2;
for (vector<DMatch>::const_iterator it = matches.begin(); it!= matches.end(); ++it)
{
imgpts1.push_back(firstViewFeatures.second[it->queryIdx].pt);
imgpts2.push_back(secondViewFeatures.second[it->trainIdx].pt);
}
Mat mask;
Mat E = findEssentialMat(imgpts1, imgpts2, [camera focal], [camera principal_point], CV_RANSAC, 0.999, 1, mask);
The fact I'm not passing a Mat, but a vector of Point2f instead, seems to create no problems, as it compiles and executes properly.
Is it the case I should store the matches in a Mat?
I am not sure what you mean by a vector of Point2f in some color space, but if you want to convert a vector of points into a vector of points of another type you can use any standard C++/STL algorithm such as copy(), assign() or insert(). For example (using std::back_inserter, from <iterator>, so the destination vector doesn't need to be pre-sized):
copy(floatPoints.begin(), floatPoints.end(), back_inserter(doublePoints));
or
doublePoints.insert(doublePoints.end(), floatPoints.begin(), floatPoints.end());
No, it is not. A std::vector<cv::Point2f> cannot make use of the OpenCV convertTo function.
I think you really mean that you have a cv::Mat points1 of type CV_8UC4. Note that this holds R x C x 4 values (where R and C are the number of rows and columns), while a CV_64F matrix will hold only R x C values. So you need to be clearer about how you want to transform those values.
You can do points1.convertTo(points1, CV_64FC4) to get a RxCx4 matrix.
Update:
Some remarks after you updated the question:
Note that a vector<cv::Point2f> is a vector of 2D points that is not associated with any particular color space; they are just coordinates in the image axes. So they represent the same 2D points whether the image is grayscale, RGB or HSV. Consequently, the execution time of findEssentialMat doesn't depend on the image color space. Getting the points may, though.
That said, I think your input for findEssentialMat is ok (the function takes care of the vectors and converts them into its internal representation). In such cases, it is very useful to draw the points on your image to debug the code.
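For instance, a minimal debugging sketch along those lines (firstImage and secondImage are hypothetical Mats holding the two views; imgpts1 and imgpts2 are the vectors from your code):
// Draw the matched points on copies of the two views to check that the
// correspondences look sensible before estimating the essential matrix.
cv::Mat vis1 = firstImage.clone();
cv::Mat vis2 = secondImage.clone();
for (size_t i = 0; i < imgpts1.size(); ++i)
{
    cv::circle(vis1, imgpts1[i], 3, cv::Scalar(0, 255, 0), -1);
    cv::circle(vis2, imgpts2[i], 3, cv::Scalar(0, 255, 0), -1);
}
cv::imshow("view 1 points", vis1);
cv::imshow("view 2 points", vis2);
cv::waitKey(0);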
I am doing a project of combining multiple images similar to HDR in iOS. I have managed to get 3 images of different exposures through the Camera and now I want to align them because during the capture, one's hand must have shaken and resulted in all 3 images having slightly different alignment.
I have imported OpenCV framework and I have been exploring functions in OpenCV to align/register images, but found nothing. Is there actually a function in OpenCV to achieve this? If not, is there any other alternatives?
Thanks!
In OpenCV 3.0 you can use findTransformECC. I have copied this ECC Image Alignment code from LearnOpenCV.com where a very similar problem is solved for aligning color channels. The post also contains code in Python. Hope this helps.
// Read the images to be aligned
Mat im1 = imread("images/image1.jpg");
Mat im2 = imread("images/image2.jpg");
// Convert images to gray scale;
Mat im1_gray, im2_gray;
cvtColor(im1, im1_gray, CV_BGR2GRAY);
cvtColor(im2, im2_gray, CV_BGR2GRAY);
// Define the motion model
const int warp_mode = MOTION_EUCLIDEAN;
// Set a 2x3 or 3x3 warp matrix depending on the motion model.
Mat warp_matrix;
// Initialize the matrix to identity
if ( warp_mode == MOTION_HOMOGRAPHY )
warp_matrix = Mat::eye(3, 3, CV_32F);
else
warp_matrix = Mat::eye(2, 3, CV_32F);
// Specify the number of iterations.
int number_of_iterations = 5000;
// Specify the threshold of the increment
// in the correlation coefficient between two iterations
double termination_eps = 1e-10;
// Define termination criteria
TermCriteria criteria (TermCriteria::COUNT+TermCriteria::EPS, number_of_iterations, termination_eps);
// Run the ECC algorithm. The results are stored in warp_matrix.
findTransformECC(
im1_gray,
im2_gray,
warp_matrix,
warp_mode,
criteria
);
// Storage for warped image.
Mat im2_aligned;
if (warp_mode != MOTION_HOMOGRAPHY)
// Use warpAffine for Translation, Euclidean and Affine
warpAffine(im2, im2_aligned, warp_matrix, im1.size(), INTER_LINEAR + WARP_INVERSE_MAP);
else
// Use warpPerspective for Homography
warpPerspective (im2, im2_aligned, warp_matrix, im1.size(),INTER_LINEAR + WARP_INVERSE_MAP);
// Show final result
imshow("Image 1", im1);
imshow("Image 2", im2);
imshow("Image 2 Aligned", im2_aligned);
waitKey(0);
There is no single function called something like align; you need to implement it yourself or find an existing implementation.
Here is one possible solution.
You need to extract keypoints from all 3 images and try to match them. Make sure that your keypoint extraction technique is invariant to illumination changes, since the images all have different intensity values because of the different exposures. You then need to match the keypoints and compute the disparity between them, which you can use to align your images.
Keep in mind that this answer is only a rough outline; for the details you first need to do some research on keypoint/descriptor extraction and keypoint/descriptor matching.
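A minimal sketch of that pipeline, assuming ORB keypoints, brute-force Hamming matching and a homography motion model (none of which the outline above prescribes; any detector, matcher or motion model could be substituted):
#include <opencv2/opencv.hpp>

// Warp img so that it is aligned with ref (OpenCV 3.x API).
cv::Mat alignToReference(const cv::Mat& ref, const cv::Mat& img)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kpRef, kpImg;
    cv::Mat descRef, descImg;
    orb->detectAndCompute(ref, cv::noArray(), kpRef, descRef);
    orb->detectAndCompute(img, cv::noArray(), kpImg, descImg);

    // Match descriptors with cross-checking to discard weak correspondences
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(descImg, descRef, matches);

    std::vector<cv::Point2f> ptsImg, ptsRef;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        ptsImg.push_back(kpImg[matches[i].queryIdx].pt);
        ptsRef.push_back(kpRef[matches[i].trainIdx].pt);
    }

    // Estimate a homography with RANSAC and warp img onto ref
    cv::Mat H = cv::findHomography(ptsImg, ptsRef, cv::RANSAC);
    cv::Mat aligned;
    cv::warpPerspective(img, aligned, H, ref.size());
    return aligned;
}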
Good luck!
My project scope is currency note identification by comparing feature sets of sample images. I have completed the feature extraction part for the sample images. Next I need to store the sample image features in a text file or XML file and then classify them.
Please help me to do the image classification part using an SVM classifier in OpenCV.
This is the feature extraction code that I have completed:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/features2d.hpp>

using namespace cv;

int main( int argc, char** argv )
{
    /*Loading the image as gray scale*/
    //declaring a Mat object. This will hold an image (like IplImage in old OpenCV versions).
    Mat gray_scale_img;
    //imread is used to load an image. Here I have loaded the image as a grayscale image.
    gray_scale_img = imread("100.jpg", CV_LOAD_IMAGE_GRAYSCALE);

    /*SURF detector settings*/
    //setting the threshold value. A high value will result in a low number of keypoints.
    int hessian = 100;
    //initializing the SURF keypoint detector
    SurfFeatureDetector detector(hessian);

    /*detect SURF keypoints*/
    //creating a vector to store the detected keypoints
    std::vector<KeyPoint> keypoints;
    //detect keypoints
    detector.detect(gray_scale_img, keypoints);

    /*extract descriptor vectors/feature vectors from each and every keypoint*/
    SurfDescriptorExtractor extractor;
    //this Mat object is going to hold the extracted descriptors
    Mat descriptors;
    //extracting descriptors/features
    extractor.compute(gray_scale_img, keypoints, descriptors);

    return 0;
}
SVM in OpenCV is implemented in the CvSVM class.
You need to have your feature vectors in the form of a matrix, one feature vector per row.
Assuming you are using height and width as your feature vector, your Mat will look as follows (assuming you have 20 feature vectors):
Mat FV(20,2, CV_32F);
Mat flagmat(20,1,CV_8U);
/*
code to populate the matrix FV.
Fill the matrix with values so that it will look something as follows:
20 30
30 40
..
..
code to populate the matrix flagmat.
Fill the matrix with labels of each corresponding feature vector in matrix FV. It will look something as follows:
1
-1
1
1
-1
1
1
1
..
*/
CvSVM svm;
svm.train(FV, flagmat, Mat(), Mat(), CvSVMParams());
Mat testFV(20,2,CV_32F);
Mat sample(1,2,CV_32F);
/* similarly as described above fill testFV matrix*/
float res;// to store result
for(int i =0;i<testFV.rows;i++)
{
sample.at<float>(0,0)=testFV.at<float>(i,0);
sample.at<float>(0,1)=testFV.at<float>(i,1);
res = svm.predict(sample);
cout<<"predicted label: "<<res<<endl;
}
I'm assuming you can extract numerical values from the feature descriptors/vectors and put them in the sample matrix in the above code. You can replace the feature vectors with any feature descriptor that you are using.
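For example, one simple (and admittedly crude) way to turn the variable-length SURF descriptor matrix from the earlier code into a fixed-length row for the training matrix is to average the descriptors; this is only an illustrative assumption, and a bag-of-words vocabulary is the more usual encoding:
// descriptors: N x 64 CV_32F Mat produced by SurfDescriptorExtractor above.
// Collapse it to a single 1 x 64 row by averaging over all keypoints.
cv::Mat featureRow;
cv::reduce(descriptors, featureRow, 0 /*reduce to a single row*/, CV_REDUCE_AVG, CV_32F);

// One such row per training image is stacked into the training matrix,
// with a matching label per row, just as FV and flagmat are used above.
cv::Mat trainingData;
trainingData.push_back(featureRow);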