Accessing value at row,col in a Matrix - opencv

I'm trying to access a specific row in a matrix but am having a hard time doing so.
I want to get the value at row j, column i but I don't think my algorithm is correct. I'm using OpenCV's Mat for my matrix and accessing it through the data member.
Here is how I am attempting to access values:
plane.data[i + j*plane.rows]
Where i = the column, j = the row. Is this correct? The matrix is one plane of a YUV image.
Any help would be appreciated! Thanks.

No, that is not correct.
plane.data[i + j*plane.rows] is not a reliable way to access a pixel: the offset must account for the matrix type, its channel count, and the row step (and note it would be j*step, not j*rows, in any case).
You should use the at() method of the matrix.
To make it simple, here is a code sample that accesses each pixel of a matrix and prints it. It works for almost every matrix type and for any number of channels:
#include <opencv2/opencv.hpp>
#include <cstdio>
using namespace cv;

template <typename T>
void printMatTemplate(const Mat& M, bool isInt = true){
    if (M.empty()){
        printf("Empty Matrix\n");
        return;
    }
    if ((M.elemSize()/M.channels()) != sizeof(T)){
        printf("Wrong matrix type. Cannot print\n");
        return;
    }
    int cols = M.cols;
    int rows = M.rows;
    int chan = M.channels();
    // Build the printf format once: integer or floating point
    char printf_fmt[20];
    if (isInt)
        snprintf(printf_fmt, sizeof(printf_fmt), "%%d,");
    else
        snprintf(printf_fmt, sizeof(printf_fmt), "%%0.5g,");
    if (chan > 1){
        // Print multi channel array
        for (int i = 0; i < rows; i++){
            for (int j = 0; j < cols; j++){
                printf("(");
                const T* Pix = &M.at<T>(i,j);
                for (int c = 0; c < chan; c++){
                    printf(printf_fmt, Pix[c]);
                }
                printf(")");
            }
            printf("\n");
        }
        printf("-----------------\n");
    }
    else {
        // Single channel
        for (int i = 0; i < rows; i++){
            const T* Mi = M.ptr<T>(i);
            for (int j = 0; j < cols; j++){
                printf(printf_fmt, Mi[j]);
            }
            printf("\n");
        }
        printf("\n");
    }
}

void printMat(const Mat& M){
    // Dispatch on the per-channel element size: 1 byte -> uchar, 4 -> float, 8 -> double
    switch (M.elemSize1()){
    case sizeof(char):
        printMatTemplate<unsigned char>(M, true);
        break;
    case sizeof(float):
        printMatTemplate<float>(M, false);
        break;
    case sizeof(double):
        printMatTemplate<double>(M, false);
        break;
    }
}

I do not think there is anything different between accessing an RGB Mat and a YUV Mat; it's just a different colorspace.
Please refer to http://opencv.willowgarage.com/wiki/faq#Howtoaccessmatrixelements.3F on how to access each pixel.
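For a single 8-bit plane like yours, a minimal sketch (assuming the plane is CV_8UC1) might look like:
// Safe, type-checked access: row j, column i
uchar value = plane.at<uchar>(j, i);
// Equivalent raw access; the row offset uses step (bytes per row), not rows
uchar same = plane.data[j * plane.step + i];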

Related

error in OpenCV svm->predict()

I use predict() in my program; the following is my code:
int plateJudge(vector<Mat>& inVec, vector<Mat>& resultVec){
    size_t num = inVec.size();
    for (size_t j = 0; j < num; j++)
    {
        Mat inMat = inVec[j];
        Mat gray;
        cvtColor(inMat, gray, COLOR_BGR2GRAY);
        equalizeHist(gray, gray);
        Mat p = gray.reshape(1, 1); // flatten to a single row
        p.convertTo(p, CV_32FC1);   // the SVM expects CV_32F samples
        int response = (int)svm->predict(p);
        if (response == 1)
        {
            resultVec.push_back(inMat);
        }
    }
    return 0;
}
but I got the error:
error: (-215) samples.cols == var_count && samples.type() == 5 in function predict
I have converted the image to grayscale and reshaped the array to 1*n, but it still doesn't work. Besides, the svm has already been loaded from a trained model. So, that's all. Any answers are welcome, thanks a lot.
You should pass a matrix with the same dimensions and type as the samples you passed during training. That is exactly what the assertion says: samples.cols must equal var_count (the feature length used at training time), and samples.type() must be 5, which is CV_32F.
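A minimal sanity check before calling predict (a sketch, using the p from the code above; svm is your trained cv::ml::SVM):
// The sample must be one row of CV_32F data whose length matches training
CV_Assert(p.type() == CV_32F);
CV_Assert(p.rows == 1 && p.cols == svm->getVarCount());
int response = (int)svm->predict(p);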

How to find the index of the max in each row of an OpenCV Mat object

I have an OpenCV Mat. The Mat is the response of an MLP neural network. How can I find the index of the maximum value in each row?
You can use minMaxLoc to do this.
Mat img = imread("image.jpg", IMREAD_GRAYSCALE), row; // minMaxLoc requires a single-channel matrix
double min = 0, max = 0;
Point minLoc, maxLoc;
for (int i = 0; i < img.rows; i++)
{
    row = img.row(i);
    // maxLoc contains the coordinate of the maximum value; maxLoc.x is its column index
    minMaxLoc(row, &min, &max, &minLoc, &maxLoc);
}
Using cv::minMaxIdx for each row (as mentioned before) might be more straightforward:
void GetMaxValueIndex(const cv::Mat& src_mat) {
    double max_value;
    int max_idx[2]; // minMaxIdx needs one index per matrix dimension (row, col)
    std::vector<double> max_value_vec;
    std::vector<int> max_idx_vec;
    for (int i = 0; i < src_mat.rows; i++) {
        cv::minMaxIdx(src_mat.row(i), NULL, &max_value, NULL, max_idx);
        max_value_vec.push_back(max_value);
        max_idx_vec.push_back(max_idx[1]); // column of the maximum in row i
    }
}

OpenCV - get coordinates of top of object in contour

Given a contour such as the one seen below, is there a way to get the X,Y coordinates of the top point in the contour? I'm using Python, but other language examples are fine.
Since every pixel may need to be checked, I'm afraid you will have to iterate line-wise over the image and find the first white pixel.
You can iterate over the image until you encounter a pixel that isn't black.
I will write an example in C++.
cv::Mat image; // your binary image with type CV_8UC1 (8-bit 1-channel image)
int topRow(-1), topCol(-1);
for(int i = 0; i < image.rows; i++) {
    uchar* ptr = image.ptr<uchar>(i);
    for(int j = 0; j < image.cols; j++) {
        if(ptr[j] != 0) {
            topRow = i; // y coordinate
            topCol = j; // x coordinate
            std::cout << "Top point: " << i << ", " << j << std::endl;
            break;
        }
    }
    if(topRow != -1)
        break;
}
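If you already have the contour as a vector of points (e.g. from cv::findContours), there is no need to scan the image at all: the top of the object is simply the contour point with the smallest y. A short sketch in C++ (using std::min_element from <algorithm>):
std::vector<cv::Point> contour; // filled by cv::findContours
// The topmost point is the one with the minimum y coordinate
auto top = *std::min_element(contour.begin(), contour.end(),
    [](const cv::Point& a, const cv::Point& b) { return a.y < b.y; });
std::cout << "Top point: " << top.x << ", " << top.y << std::endl;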

How to merge a lot of square images via OpenCV?

How can I merge images like below into a single image using OpenCV (there can be any number of them both horizontally and vertically)? Is there any built-in solution to do it?
Additional pieces:
Well, it seems that I finished the puzzle:
Main steps:
Compare each pair of images (puzzle pieces) to know the relative position (findRelativePositions and getPosition).
Build a map knowing the relative positions of the pieces (buildPuzzle and builfForPiece).
Create the final collage putting each image at the correct position (final part of buildPuzzle).
Comparison between pieces A and B in step 1 is done by checking the similarity (sum of absolute differences) of:
B is NORTH to A: A first row and B last row;
B is SOUTH to A: A last row and B first row;
B is WEST to A : A last column and B first column;
B is EAST to A : A first column and B last column.
Since the images do not overlap, but we can assume that adjoining rows (columns) are quite similar, the key aspect is to use an (ad-hoc) threshold to decide whether two pieces adjoin or not. This is handled in the function getPosition, with its threshold parameter.
Here is the full code. Please let me know if something is not clear.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <set>

using namespace std;
using namespace cv;

enum Direction
{
    NORTH = 0,
    SOUTH,
    WEST,
    EAST
};

int getPosition(const Mat3b& A, const Mat3b& B, double& cost)
{
    Mat hsvA, hsvB;
    cvtColor(A, hsvA, COLOR_BGR2HSV);
    cvtColor(B, hsvB, COLOR_BGR2HSV);
    int threshold = 1000;
    // Check NORTH
    Mat3b AN = hsvA(Range(0, 1), Range::all());
    Mat3b BS = hsvB(Range(B.rows - 1, B.rows), Range::all());
    Mat3b AN_BS;
    absdiff(AN, BS, AN_BS);
    Scalar scoreN = sum(AN_BS);
    // Check SOUTH
    Mat3b AS = hsvA(Range(A.rows - 1, A.rows), Range::all());
    Mat3b BN = hsvB(Range(0, 1), Range::all());
    Mat3b AS_BN;
    absdiff(AS, BN, AS_BN);
    Scalar scoreS = sum(AS_BN);
    // Check WEST
    Mat3b AW = hsvA(Range::all(), Range(A.cols - 1, A.cols));
    Mat3b BE = hsvB(Range::all(), Range(0, 1));
    Mat3b AW_BE;
    absdiff(AW, BE, AW_BE);
    Scalar scoreW = sum(AW_BE);
    // Check EAST
    Mat3b AE = hsvA(Range::all(), Range(0, 1));
    Mat3b BW = hsvB(Range::all(), Range(B.cols - 1, B.cols));
    Mat3b AE_BW;
    absdiff(AE, BW, AE_BW);
    Scalar scoreE = sum(AE_BW);
    vector<double> scores{ scoreN[0], scoreS[0], scoreW[0], scoreE[0] };
    int idx_min = distance(scores.begin(), min_element(scores.begin(), scores.end()));
    int direction = (scores[idx_min] < threshold) ? idx_min : -1;
    cost = scores[idx_min];
    return direction;
}

void resolveConflicts(Mat1i& positions, Mat1d& costs)
{
    for (int c = 0; c < 4; ++c)
    {
        // Search for duplicate pieces in each column
        set<int> pieces;
        set<int> dups;
        for (int r = 0; r < positions.rows; ++r)
        {
            int label = positions(r, c);
            if (label >= 0)
            {
                if (pieces.count(label) == 1)
                {
                    dups.insert(label);
                }
                else
                {
                    pieces.insert(label);
                }
            }
        }
        if (dups.size() > 0)
        {
            int min_idx = -1;
            for (int duplicate : dups)
            {
                // Find minimum cost position
                Mat1d column = costs.col(c);
                min_idx = distance(column.begin(), min_element(column.begin(), column.end()));
                // Keep only minimum cost position
                for (int ir = 0; ir < positions.rows; ++ir)
                {
                    int label = positions(ir, c);
                    if ((label == duplicate) && (ir != min_idx))
                    {
                        positions(ir, c) = -1;
                    }
                }
            }
        }
    }
}

void findRelativePositions(const vector<Mat3b>& pieces, Mat1i& positions)
{
    positions = Mat1i(pieces.size(), 4, -1);
    Mat1d costs(pieces.size(), 4, DBL_MAX);
    for (int i = 0; i < pieces.size(); ++i)
    {
        for (int j = i + 1; j < pieces.size(); ++j)
        {
            double cost;
            int pos = getPosition(pieces[i], pieces[j], cost);
            if (pos >= 0)
            {
                if (costs(i, pos) > cost)
                {
                    positions(i, pos) = j;
                    costs(i, pos) = cost;
                    switch (pos)
                    {
                    case NORTH:
                        positions(j, SOUTH) = i;
                        costs(j, SOUTH) = cost;
                        break;
                    case SOUTH:
                        positions(j, NORTH) = i;
                        costs(j, NORTH) = cost;
                        break;
                    case WEST:
                        positions(j, EAST) = i;
                        costs(j, EAST) = cost;
                        break;
                    case EAST:
                        positions(j, WEST) = i;
                        costs(j, WEST) = cost;
                        break;
                    }
                }
            }
        }
    }
    resolveConflicts(positions, costs);
}

void builfForPiece(int idx_piece, set<int>& posed, Mat1i& labels, const Mat1i& positions)
{
    Point pos(-1, -1);
    // Find idx_piece on grid
    for (int r = 0; r < labels.rows; ++r)
    {
        for (int c = 0; c < labels.cols; ++c)
        {
            if (labels(r, c) == idx_piece)
            {
                pos = Point(c, r);
                break;
            }
        }
        if (pos.x >= 0) break;
    }
    if (pos.x < 0) return;
    // Put connected pieces
    for (int c = 0; c < 4; ++c)
    {
        int next = positions(idx_piece, c);
        if (next > 0)
        {
            switch (c)
            {
            case NORTH:
                labels(Point(pos.x, pos.y - 1)) = next;
                posed.insert(next);
                break;
            case SOUTH:
                labels(Point(pos.x, pos.y + 1)) = next;
                posed.insert(next);
                break;
            case WEST:
                labels(Point(pos.x + 1, pos.y)) = next;
                posed.insert(next);
                break;
            case EAST:
                labels(Point(pos.x - 1, pos.y)) = next;
                posed.insert(next);
                break;
            }
        }
    }
}

Mat3b buildPuzzle(const vector<Mat3b>& pieces, const Mat1i& positions, Size sz)
{
    int n_pieces = pieces.size();
    set<int> posed;
    set<int> todo;
    for (int i = 0; i < n_pieces; ++i) todo.insert(i);
    Mat1i labels(n_pieces * 2 + 1, n_pieces * 2 + 1, -1);
    // Place first element in the center
    todo.erase(0);
    labels(Point(n_pieces, n_pieces)) = 0;
    posed.insert(0);
    builfForPiece(0, posed, labels, positions);
    // Build puzzle starting from the already placed elements
    while (todo.size() > 0)
    {
        auto it = todo.begin();
        int next = -1;
        do
        {
            next = *it;
            ++it;
        } while (posed.count(next) == 0 && it != todo.end());
        todo.erase(next);
        builfForPiece(next, posed, labels, positions);
    }
    // Posed all pieces, now collage!
    vector<Point> pieces_position;
    Mat1b mask = labels >= 0;
    findNonZero(mask, pieces_position);
    Rect roi = boundingRect(pieces_position);
    Mat1i lbls = labels(roi);
    Mat3b collage(roi.height * sz.height, roi.width * sz.width, Vec3b(0, 0, 0));
    for (int r = 0; r < lbls.rows; ++r)
    {
        for (int c = 0; c < lbls.cols; ++c)
        {
            if (lbls(r, c) >= 0)
            {
                Rect rect(c*sz.width, r*sz.height, sz.width, sz.height);
                pieces[lbls(r, c)].copyTo(collage(rect));
            }
        }
    }
    return collage;
}

int main()
{
    // Load images
    vector<String> filenames;
    glob("D:\\SO\\img\\puzzle*", filenames);
    vector<Mat3b> pieces(filenames.size());
    for (int i = 0; i < filenames.size(); ++i)
    {
        pieces[i] = imread(filenames[i], IMREAD_COLOR);
    }
    // Find relative positions
    Mat1i positions;
    findRelativePositions(pieces, positions);
    // Build the puzzle
    Mat3b puzzle = buildPuzzle(pieces, positions, pieces[0].size());
    imshow("Puzzle", puzzle);
    waitKey();
    return 0;
}
NOTE
No, there is no built-in solution to perform this. Image stitching won't work, since the images do not overlap.
I cannot guarantee that this works for every puzzle, but it should work for most.
I probably spent more hours on this than I should have, but it was fun :D
EDIT
Adding more puzzle pieces generated wrong results with the previous code version. This was due to the (wrong) assumption that at most one piece is good enough to be connected to a given piece.
Now I added a cost matrix, and only the minimum-cost piece is saved as the neighbor of a given piece.
I also added a resolveConflicts function that prevents a piece from being merged (in a non-conflicting position) with more than one piece.
This is the result adding more pieces:
UPDATE
Considerations after increasing the number of puzzle pieces:
This solution is dependent on the input order of the pieces, since it uses a greedy approach to find neighbors.
While searching for neighbors, it's better to compare only the H channel in HSV space (the code above does this by taking only the first channel of the summed differences). I updated the code above with this improvement.
The final solution probably needs some kind of global minimization over a global cost matrix, which would make the method independent of the input order. I'll be back on this asap.
Once you have loaded these images as OpenCV Mat, you can concatenate them either horizontally or vertically using:
Mat A, B; // Images that will be concatenated
Mat H; // Here we will concatenate A and B horizontally
Mat V; // Here we will concatenate A and B vertically
hconcat(A, B, H);
vconcat(A, B, V);
If you need to concatenate more than two images, you can use these methods recursively.
By the way, I think these methods are not included in the OpenCV documentation, but I have used them in the past.
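Both functions also have overloads that take a whole vector of Mats, which avoids the repeated calls. A sketch for assembling a rows x cols grid of equally sized tiles (assembleGrid and tiles are names of my own choosing):
Mat assembleGrid(const std::vector<Mat>& tiles, int rows, int cols)
{
    // tiles is assumed to hold rows*cols images of identical size and type
    std::vector<Mat> rowImages;
    for (int r = 0; r < rows; ++r)
    {
        std::vector<Mat> rowTiles(tiles.begin() + r * cols,
                                  tiles.begin() + (r + 1) * cols);
        Mat rowImage;
        hconcat(rowTiles, rowImage); // concatenate one row of tiles side by side
        rowImages.push_back(rowImage);
    }
    Mat grid;
    vconcat(rowImages, grid); // stack the assembled rows vertically
    return grid;
}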

How to put B, G and R component values straight into a pixel of cv::Mat? [duplicate]

I have searched the internet and Stack Overflow thoroughly, but I haven't found an answer to my question:
How can I get/set (both) the RGB value of a certain pixel (given by x,y coordinates) in OpenCV? What's important: I'm writing in C++, and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use; as far as I know it comes from the C API.
Yes, I'm aware that there was already this Pixel access in OpenCV 2.2 thread, but it was only about black and white bitmaps.
EDIT:
Thank you very much for all your answers. I see there are many ways to get/set the RGB value of a pixel. I got one more idea from my close friend (thanks, Benny!). It's very simple and effective. I think it's a matter of taste which one you choose.
Mat image;
(...)
Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);
And then you can read/write RGB values with:
p->x //B
p->y //G
p->z //R
Try the following:
cv::Mat image = ...do some stuff...;
image.at<cv::Vec3b>(y,x) gives you the pixel as a vector of type cv::Vec3b (it might be ordered as BGR), and you can assign to it:
image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
The low-level way would be to access the matrix data directly. In an RGB image (which I believe OpenCV typically stores as BGR), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (from the top left) this way:
frame.data[frame.channels()*(frame.cols*y + x)];
Likewise, to get B, G, and R:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Note that this code assumes the matrix is continuous, i.e. that the row stride equals the image width times the number of channels.
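A stride-aware variant (a sketch using the Mat's step field, the number of bytes per row, which also handles padded or ROI matrices):
uchar b = frame.data[frame.step * y + frame.channels() * x + 0];
uchar g = frame.data[frame.step * y + frame.channels() * x + 1];
uchar r = frame.data[frame.step * y + frame.channels() * x + 2];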
A piece of code is easier for people who have this problem. I share my code and you can use it directly. Please note that OpenCV stores pixels as BGR.
cv::Mat vImage_(vHeight_, vWidth_, CV_32FC3); // must be allocated before at<> is used
if(src_) // src_ is assumed to iterate over the source pixel triplets
{
    cv::Vec3f vec_;
    for(int i = 0; i < vHeight_; i++)
        for(int j = 0; j < vWidth_; j++)
        {
            // Please note that OpenCV stores pixels as BGR.
            vec_ = cv::Vec3f((*src_)[0]/255.0, (*src_)[1]/255.0, (*src_)[2]/255.0);
            vImage_.at<cv::Vec3f>(vHeight_-1-i, j) = vec_;
            ++src_;
        }
}
if(!vImage_.data) // Check for invalid input
    printf("failed to read image by OpenCV.");
else
{
    cv::namedWindow(windowName_, cv::WINDOW_AUTOSIZE);
    cv::imshow(windowName_, vImage_); // Show the image.
}
The current version allows the cv::Mat::at function to handle 3 dimensions. So for a Mat object m, m.at<uchar>(0,0,0) should work.
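For instance, a minimal sketch of a 3-dimensional Mat:
int sz[] = {3, 4, 5}; // dimensions of the 3D matrix
cv::Mat m(3, sz, CV_8UC1, cv::Scalar::all(0));
m.at<uchar>(0, 1, 2) = 7;       // write one element
uchar v = m.at<uchar>(0, 1, 2); // read it back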
uchar* value = img2.data; // pointer to the first byte of the pixel data
// Walk all bytes of a continuous CV_8UC3 image, three bytes (B,G,R) per pixel
for (size_t i = 0; i < img2.total() * img2.channels(); i += 3)
{
    value[i + 0] = 255; // blue channel
    value[i + 1] = 0;   // green channel
    value[i + 2] = 0;   // red channel -> every pixel becomes pure blue
}
#include <boost/math/constants/constants.hpp>
const double pi = boost::math::constants::pi<double>();

// Sets the green channel of every pixel lying inside the given rotated ellipse.
cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse){
    float distance = 2.0f;
    float angle = ellipse.angle;
    cv::Point ellipse_center = ellipse.center;
    float major_axis = ellipse.size.width/2;
    float minor_axis = ellipse.size.height/2;
    for(int x = 0; x < image.cols; x++)
    {
        for(int y = 0; y < image.rows; y++)
        {
            // Rotate the point into the ellipse's own coordinate frame
            auto u = cos(angle*pi/180)*(x-ellipse_center.x) + sin(angle*pi/180)*(y-ellipse_center.y);
            auto v = -sin(angle*pi/180)*(x-ellipse_center.x) + cos(angle*pi/180)*(y-ellipse_center.y);
            // Normalized ellipse equation: <= 1 means the point is inside
            distance = (u/major_axis)*(u/major_axis) + (v/minor_axis)*(v/minor_axis);
            if(distance <= 1)
            {
                image.at<cv::Vec3b>(y,x)[1] = 255;
            }
        }
    }
    return image;
}
