Shift (like the Matlab function) rows or columns of a matrix in OpenCV

In Matlab there is a shift function that performs a circular shift of the columns or rows of a matrix. Is there a similar function in OpenCV?

I was searching for an answer to the same question, but since there was none, I wrote one myself. Here is another option. In my code you can shift right or left by n columns: pass numRight = n to shift right, and numRight = -n to shift left.
void shiftCol(Mat& out, Mat in, int numRight) {
    if (numRight == 0) {
        in.copyTo(out);
        return;
    }

    int ncols = in.cols;
    int nrows = in.rows;
    out = Mat::zeros(in.size(), in.type());

    // Normalize the shift to the range [0, ncols)
    numRight = numRight % ncols;
    if (numRight < 0)
        numRight = ncols + numRight;

    // Wrap the rightmost numRight columns around to the left ...
    in(cv::Rect(ncols - numRight, 0, numRight, nrows)).copyTo(out(cv::Rect(0, 0, numRight, nrows)));
    // ... and move the remaining columns to the right
    in(cv::Rect(0, 0, ncols - numRight, nrows)).copyTo(out(cv::Rect(numRight, 0, ncols - numRight, nrows)));
}
Hope this helps some people. A shiftRows function can be written similarly.
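For example, a minimal usage sketch of shiftCol (assuming a 1x4 integer test matrix):

Mat in = (Mat_<int>(1, 4) << 1, 2, 3, 4);
Mat out;
shiftCol(out, in, 1);   // out = [4, 1, 2, 3]
shiftCol(out, in, -1);  // out = [2, 3, 4, 1]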

Here is my implementation of the circular matrix shift. Any suggestion is welcome.
//circular shift one row from up to down
void shiftRows(Mat& mat) {
    Mat temp;
    Mat m;
    int k = (mat.rows - 1);
    mat.row(k).copyTo(temp);
    for (; k > 0; k--) {
        m = mat.row(k);
        mat.row(k - 1).copyTo(m);
    }
    m = mat.row(0);
    temp.copyTo(m);
}
//circular shift n rows from up to down if n > 0, -n rows from down to up if n < 0
void shiftRows(Mat& mat, int n) {
    if (n < 0) {
        n = -n;
        flip(mat, mat, 0);
        for (int k = 0; k < n; k++) {
            shiftRows(mat);
        }
        flip(mat, mat, 0);
    } else {
        for (int k = 0; k < n; k++) {
            shiftRows(mat);
        }
    }
}
//circular shift n columns from left to right if n > 0, -n columns from right to left if n < 0
void shiftCols(Mat& mat, int n) {
    if (n < 0) {
        n = -n;
        flip(mat, mat, 1);
        transpose(mat, mat);
        shiftRows(mat, n);
        transpose(mat, mat);
        flip(mat, mat, 1);
    } else {
        transpose(mat, mat);
        shiftRows(mat, n);
        transpose(mat, mat);
    }
}
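For example, a minimal usage sketch of these in-place functions (assuming a 2x2 integer test matrix):

Mat m = (Mat_<int>(2, 2) << 1, 2,
                            3, 4);
shiftRows(m, 1);   // rows wrap downward:        [3, 4; 1, 2]
shiftCols(m, 1);   // columns wrap to the right: [4, 3; 2, 1]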

Short answer, no.
Long answer: you can implement it easily if you really need it, for example with temporary objects, using cv::Mat::row(i) or cv::Mat::operator()(cv::Range rowRange, cv::Range colRange).

Or, if you're using Python, just use numpy.roll(), e.g. np.roll(mat, n, axis=1) for a circular column shift.

Related

Network Delay Problem - Complexity Analysis

Below is a solution to the Network Delay Time problem on LeetCode. My solution passes all test cases, but I am not able to analyse its time complexity. I believe it is O(V^2 + E), where V is the number of nodes and E the number of edges.
In this solution I do add all adjacents of each node every time, but I do not process them further if a smaller distance for that node already exists.
LeetCode question link: https://leetcode.com/problems/network-delay-time
public int networkDelayTime(int[][] times, int n, int k) {
    int[] distances = new int[n + 1];
    Arrays.fill(distances, -1);
    if (n > 0) {
        // Build the adjacency list: edges.get(u) holds {v, weight} pairs
        List<List<int[]>> edges = new ArrayList<List<int[]>>();
        for (int i = 0; i <= n; i++) {
            edges.add(new ArrayList<int[]>());
        }
        for (int[] time : times) {
            edges.get(time[0]).add(new int[]{time[1], time[2]});
        }
        Queue<Vertex> queue = new LinkedList<>();
        queue.add(new Vertex(k, 0));
        while (!queue.isEmpty()) {
            Vertex cx = queue.poll();
            int index = cx.index;
            int distance = cx.distance;
            // Process adjacents only if the distance is updated
            if (distances[index] == -1 || distances[index] > distance) {
                distances[index] = distance;
                List<int[]> adjacents = edges.get(index);
                for (int[] adjacent : adjacents) {
                    queue.add(new Vertex(adjacent[0], adjacent[1] + distance));
                }
            }
        }
    }
    // The answer is the maximum of the shortest distances
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        int distance = distances[i];
        if (distance == -1) {
            return -1;
        }
        sum = Math.max(sum, distance);
    }
    return sum;
}

public static class Vertex {
    int index;
    int distance;

    public Vertex(int i, int d) {
        index = i;
        distance = d;
    }
}
You should use a PriorityQueue instead of a LinkedList. With a plain FIFO queue a vertex can be re-enqueued and re-processed many times before its final distance settles, which is why the complexity is hard to bound; polling vertices in order of increasing distance turns this into Dijkstra's algorithm, which runs in O(E log V) with a binary heap.
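A minimal sketch of the change, reusing the Vertex class and the edges/distances variables from the question (the comparator and the settled-check are my additions, not tested against LeetCode):

Queue<Vertex> queue = new PriorityQueue<>((a, b) -> Integer.compare(a.distance, b.distance));
queue.add(new Vertex(k, 0));
while (!queue.isEmpty()) {
    Vertex cx = queue.poll();                 // always the closest unsettled vertex
    if (distances[cx.index] != -1) continue;  // stale entry: vertex already settled
    distances[cx.index] = cx.distance;
    for (int[] adjacent : edges.get(cx.index)) {
        if (distances[adjacent[0]] == -1) {
            queue.add(new Vertex(adjacent[0], adjacent[1] + cx.distance));
        }
    }
}

Each edge is relaxed at most once after its source is settled, and each queue operation costs O(log E), giving O(E log E) = O(E log V) overall.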

How to merge a lot of square images via OpenCV?

How can I merge images like the ones below into a single image using OpenCV (there can be any number of them, both horizontally and vertically)? Is there any built-in solution to do it?
Additional pieces:
Well, it seems that I finished the puzzle:
Main steps:
Compare each pair of images (puzzle pieces) to know the relative position (findRelativePositions and getPosition).
Build a map knowing the relative positions of the pieces (buildPuzzle and builfForPiece).
Create the final collage putting each image at the correct position (final part of buildPuzzle).
Comparison between pieces A and B in step 1 is done checking for similarity (sum of absolute difference) among:
B is NORTH to A: A first row and B last row;
B is SOUTH to A: A last row and B first row;
B is WEST to A : A last column and B first column;
B is EAST to A : A first column and B last column.
Since the images do not overlap, but we can assume that adjoining rows (columns) are quite similar, the key aspect is to use an (ad hoc) threshold to discriminate between adjoining pieces and non-adjoining ones. This is handled in the function getPosition, with its threshold variable.
Here is the full code. Please let me know if something is not clear.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <set>

using namespace std;
using namespace cv;

enum Direction
{
    NORTH = 0,
    SOUTH,
    WEST,
    EAST
};

int getPosition(const Mat3b& A, const Mat3b& B, double& cost)
{
    Mat hsvA, hsvB;
    cvtColor(A, hsvA, COLOR_BGR2HSV);
    cvtColor(B, hsvB, COLOR_BGR2HSV);

    int threshold = 1000;

    // Check NORTH
    Mat3b AN = hsvA(Range(0, 1), Range::all());
    Mat3b BS = hsvB(Range(B.rows - 1, B.rows), Range::all());
    Mat3b AN_BS;
    absdiff(AN, BS, AN_BS);
    Scalar scoreN = sum(AN_BS);

    // Check SOUTH
    Mat3b AS = hsvA(Range(A.rows - 1, A.rows), Range::all());
    Mat3b BN = hsvB(Range(0, 1), Range::all());
    Mat3b AS_BN;
    absdiff(AS, BN, AS_BN);
    Scalar scoreS = sum(AS_BN);

    // Check WEST
    Mat3b AW = hsvA(Range::all(), Range(A.cols - 1, A.cols));
    Mat3b BE = hsvB(Range::all(), Range(0, 1));
    Mat3b AW_BE;
    absdiff(AW, BE, AW_BE);
    Scalar scoreW = sum(AW_BE);

    // Check EAST
    Mat3b AE = hsvA(Range::all(), Range(0, 1));
    Mat3b BW = hsvB(Range::all(), Range(B.cols - 1, B.cols));
    Mat3b AE_BW;
    absdiff(AE, BW, AE_BW);
    Scalar scoreE = sum(AE_BW);

    // Index [0] is the H channel of the HSV image
    vector<double> scores{ scoreN[0], scoreS[0], scoreW[0], scoreE[0] };
    int idx_min = distance(scores.begin(), min_element(scores.begin(), scores.end()));

    int direction = (scores[idx_min] < threshold) ? idx_min : -1;
    cost = scores[idx_min];

    return direction;
}
void resolveConflicts(Mat1i& positions, Mat1d& costs)
{
    for (int c = 0; c < 4; ++c)
    {
        // Search for duplicate pieces in each column
        set<int> pieces;
        set<int> dups;
        for (int r = 0; r < positions.rows; ++r)
        {
            int label = positions(r, c);
            if (label >= 0)
            {
                if (pieces.count(label) == 1)
                {
                    dups.insert(label);
                }
                else
                {
                    pieces.insert(label);
                }
            }
        }
        if (dups.size() > 0)
        {
            for (int duplicate : dups)
            {
                // Find the minimum cost position among the rows holding this label
                int min_idx = -1;
                double min_cost = DBL_MAX;
                for (int r = 0; r < positions.rows; ++r)
                {
                    if ((positions(r, c) == duplicate) && (costs(r, c) < min_cost))
                    {
                        min_cost = costs(r, c);
                        min_idx = r;
                    }
                }
                // Keep only the minimum cost position
                for (int ir = 0; ir < positions.rows; ++ir)
                {
                    int label = positions(ir, c);
                    if ((label == duplicate) && (ir != min_idx))
                    {
                        positions(ir, c) = -1;
                    }
                }
            }
        }
    }
}
void findRelativePositions(const vector<Mat3b>& pieces, Mat1i& positions)
{
    positions = Mat1i(pieces.size(), 4, -1);
    Mat1d costs(pieces.size(), 4, DBL_MAX);

    for (int i = 0; i < pieces.size(); ++i)
    {
        for (int j = i + 1; j < pieces.size(); ++j)
        {
            double cost;
            int pos = getPosition(pieces[i], pieces[j], cost);
            if (pos >= 0)
            {
                if (costs(i, pos) > cost)
                {
                    positions(i, pos) = j;
                    costs(i, pos) = cost;
                    switch (pos)
                    {
                    case NORTH:
                        positions(j, SOUTH) = i;
                        costs(j, SOUTH) = cost;
                        break;
                    case SOUTH:
                        positions(j, NORTH) = i;
                        costs(j, NORTH) = cost;
                        break;
                    case WEST:
                        positions(j, EAST) = i;
                        costs(j, EAST) = cost;
                        break;
                    case EAST:
                        positions(j, WEST) = i;
                        costs(j, WEST) = cost;
                        break;
                    }
                }
            }
        }
    }
    resolveConflicts(positions, costs);
}
void builfForPiece(int idx_piece, set<int>& posed, Mat1i& labels, const Mat1i& positions)
{
    Point pos(-1, -1);

    // Find idx_piece on the grid
    for (int r = 0; r < labels.rows; ++r)
    {
        for (int c = 0; c < labels.cols; ++c)
        {
            if (labels(r, c) == idx_piece)
            {
                pos = Point(c, r);
                break;
            }
        }
        if (pos.x >= 0) break;
    }
    if (pos.x < 0) return;

    // Put connected pieces
    for (int c = 0; c < 4; ++c)
    {
        int next = positions(idx_piece, c);
        if (next > 0)
        {
            switch (c)
            {
            case NORTH:
                labels(Point(pos.x, pos.y - 1)) = next;
                posed.insert(next);
                break;
            case SOUTH:
                labels(Point(pos.x, pos.y + 1)) = next;
                posed.insert(next);
                break;
            case WEST:
                labels(Point(pos.x + 1, pos.y)) = next;
                posed.insert(next);
                break;
            case EAST:
                labels(Point(pos.x - 1, pos.y)) = next;
                posed.insert(next);
                break;
            }
        }
    }
}
Mat3b buildPuzzle(const vector<Mat3b>& pieces, const Mat1i& positions, Size sz)
{
    int n_pieces = pieces.size();
    set<int> posed;
    set<int> todo;
    for (int i = 0; i < n_pieces; ++i) todo.insert(i);

    Mat1i labels(n_pieces * 2 + 1, n_pieces * 2 + 1, -1);

    // Place the first element in the center
    todo.erase(0);
    labels(Point(n_pieces, n_pieces)) = 0;
    posed.insert(0);
    builfForPiece(0, posed, labels, positions);

    // Build the puzzle starting from the already placed elements
    while (todo.size() > 0)
    {
        auto it = todo.begin();
        int next = -1;
        do
        {
            next = *it;
            ++it;
        } while (posed.count(next) == 0 && it != todo.end());

        todo.erase(next);
        builfForPiece(next, posed, labels, positions);
    }

    // Posed all pieces, now collage!
    vector<Point> pieces_position;
    Mat1b mask = labels >= 0;
    findNonZero(mask, pieces_position);
    Rect roi = boundingRect(pieces_position);

    Mat1i lbls = labels(roi);
    Mat3b collage(roi.height * sz.height, roi.width * sz.width, Vec3b(0, 0, 0));
    for (int r = 0; r < lbls.rows; ++r)
    {
        for (int c = 0; c < lbls.cols; ++c)
        {
            if (lbls(r, c) >= 0)
            {
                Rect rect(c*sz.width, r*sz.height, sz.width, sz.height);
                pieces[lbls(r, c)].copyTo(collage(rect));
            }
        }
    }
    return collage;
}
int main()
{
    // Load images
    vector<String> filenames;
    glob("D:\\SO\\img\\puzzle*", filenames);

    vector<Mat3b> pieces(filenames.size());
    for (int i = 0; i < filenames.size(); ++i)
    {
        pieces[i] = imread(filenames[i], IMREAD_COLOR);
    }

    // Find relative positions
    Mat1i positions;
    findRelativePositions(pieces, positions);

    // Build the puzzle
    Mat3b puzzle = buildPuzzle(pieces, positions, pieces[0].size());

    imshow("Puzzle", puzzle);
    waitKey();
    return 0;
}
NOTE
No, there is no built-in solution to perform this. Image stitching won't work since the images do not overlap.
I cannot guarantee that this works for every puzzle, but it should work for most of them.
I probably shouldn't have spent a couple of hours on this, but it was fun :D
EDIT
Adding more puzzle pieces generated wrong results with the previous version of the code. This was due to the (wrong) assumption that at most one piece is good enough to be connected with a given piece.
Now I added a cost matrix, and only the minimum cost piece is saved as the neighbor of a given piece.
I also added a resolveConflicts function that prevents one piece from being assigned (in a non-conflicting position) as the neighbor of more than one piece.
This is the result after adding more pieces:
UPDATE
Considerations after increasing the number of puzzle pieces:
This solution is dependent on the input order of the pieces, since it uses a greedy approach to find neighbors.
While searching for neighbors, it's better to compare the H channel in the HSV space. I updated the code above with this improvement.
The final solution probably needs some kind of global minimization of a global cost matrix, which would make the method independent of the input order. I'll be back on this asap.
Once you have loaded these images as OpenCV Mat, you can concatenate them either vertically or horizontally using:
Mat A, B; // Images that will be concatenated
Mat H; // Here we will concatenate A and B horizontally
Mat V; // Here we will concatenate A and B vertically
hconcat(A, B, H);
vconcat(A, B, V);
If you need to concatenate more than two images, you can apply these functions repeatedly; see the sketch below.
By the way, I think these methods are not included in the OpenCV documentation, but I have used them in the past.
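Assuming the overloads of hconcat and vconcat that take a vector of Mat are available in your OpenCV version, a grid of equally sized tiles can be assembled without explicit recursion. A minimal sketch (tiles is assumed to hold rows*cols images stored row by row; the names are mine):

Mat gridCollage(const vector<Mat>& tiles, int rows, int cols)
{
    vector<Mat> rowStrips;
    for (int r = 0; r < rows; ++r)
    {
        // Take one row of tiles and concatenate it horizontally
        vector<Mat> rowTiles(tiles.begin() + r * cols, tiles.begin() + (r + 1) * cols);
        Mat strip;
        hconcat(rowTiles, strip);
        rowStrips.push_back(strip);
    }
    // Stack the row strips vertically
    Mat collage;
    vconcat(rowStrips, collage);
    return collage;
}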

compare 2 histograms with Chi-Square

I want to compare two histograms, which have two dimensions. For this I want to use the chi-square metric.
My comparison function looks like this:
double Histogram::compareHistogram(Histogram *hist) {
    double result = 0;
    double a = 0;
    double b = 0;
    for (int y = 0; y < bins_1; y++) {
        for (int x = 0; x < bins_2; x++) {
            a = getHistogramValue(x, y) - hist->getHistogramValue(x, y);
            b = getHistogramValue(x, y) + hist->getHistogramValue(x, y);
            if (fabs(b) > 0.0) {
                result += a * a / b;
            }
        }
    }
    return result;
}
I've compared the result with the result of OpenCV's cv::compareHist() function and it is different. I don't know why.
Before comparing the histograms, I normalize them with the MINMAX norm.
I compared my normalized histogram with the normalized histogram of OpenCV and they are equal.
So I think the problem is in my compareHistogram function.
But where?
Best regards,
The relevant section of source code from OpenCV is as follows:
if( method == CV_COMP_CHISQR )
{
    for( j = 0; j < len; j++ )
    {
        double a = h1[j] - h2[j];
        double b = h1[j];
        if( fabs(b) > DBL_EPSILON )
            result += a*a/b;
    }
}
So you can see that the difference in your code is this line:
b=getHistogramValue(x,y)+hist->getHistogramValue(x,y);
which should be
b=getHistogramValue(x,y);
In other words, OpenCV's CV_COMP_CHISQR computes the asymmetric statistic sum((h1-h2)^2/h1), while you implemented the symmetric variant with h1+h2 in the denominator (up to a factor of 2, that is what newer OpenCV versions expose as CV_COMP_CHISQR_ALT).
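For reference, a minimal corrected sketch of your function, reusing your class's accessors:

double Histogram::compareHistogram(Histogram *hist) {
    double result = 0;
    for (int y = 0; y < bins_1; y++) {
        for (int x = 0; x < bins_2; x++) {
            double a = getHistogramValue(x, y) - hist->getHistogramValue(x, y);
            double b = getHistogramValue(x, y);   // denominator is h1 only, as in OpenCV
            if (fabs(b) > DBL_EPSILON) {
                result += a * a / b;
            }
        }
    }
    return result;
}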

stripes while calculating image gradient with CUDA

I'm writing code for image denoising and came across a strange problem with stripes in the processed images. Basically, when I calculate the X-gradient of an image, horizontal stripes appear (or vertical ones for the Y direction), as in the Lena X gradient image.
The whole algorithm works OK and it looks like I'm getting the correct answer (I'm comparing with a C program), except for those annoying stripes, as in the Lena result image.
The distance between the stripes changes with different block sizes, and the stripe positions are different each time I run the program! Here is the part of the program related to the gradient calculation. I have a feeling that I'm doing something very stupid :) Thank you!
#define BLKXSIZE 16
#define BLKYSIZE 16
#define idivup(a, b) ( ((a)%(b) != 0) ? (a)/(b)+1 : (a)/(b) )

void Diff4th_GPU(float* A, float* B, int N, int M, int Z, float sigma, int iter, float tau, int type)
{
    float *Ad;
    dim3 dimBlock(BLKXSIZE, BLKYSIZE);
    dim3 dimGrid(idivup(N, BLKXSIZE), idivup(M, BLKYSIZE));

    cudaMalloc((void**)&Ad, N*M*sizeof(float));
    cudaMemcpy(Ad, A, N*M*sizeof(float), cudaMemcpyHostToDevice);
    cudaCheckErrors("cc1");

    int n = 1;
    while (n <= iter) {
        Diff4th2D<<<dimGrid, dimBlock>>>(Ad, N, M, sigma, iter, tau, type);
        n++;
        cudaDeviceSynchronize();
        cudaCheckErrors("kernel");
    }

    cudaMemcpy(B, Ad, N*M*sizeof(float), cudaMemcpyDeviceToHost);
    cudaCheckErrors("cc2");
    cudaFree(Ad);
}
__global__ void Diff4th2D(float* A, int N, int M, float sigma, int iter, float tau, int type)
{
    float gradX, gradX_sq, gradY, gradY_sq, gradXX, gradYY, gradXY, sq_sum, xy_2, Lam, V_norm, V_orth, c, c_sq, lam_t;
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    int j = blockIdx.y*blockDim.y + threadIdx.y;
    int index = j + i*N;

    if ((i < N) && (j < M))
    {
        float gradX = 0, gradY = 0, gradXX = 0, gradYY = 0, gradXY = 0;
        if ((i > 1) && (i < N)) {
            if ((j > 1) && (j < M)) {
                int indexN = (j) + (i-1)*(N);
                if (indexN > ((N*M)-1)) indexN = (N*M)-1;
                if (indexN < 0) indexN = 0;
                int indexS = (j) + (i+1)*(N);
                if (indexS > ((N*M)-1)) indexS = (N*M)-1;
                if (indexS < 0) indexS = 0;
                int indexW = (j-1) + (i)*(N);
                if (indexW > ((N*M)-1)) indexW = (N*M)-1;
                if (indexW < 0) indexW = 0;
                int indexE = (j+1) + (i)*(N);
                if (indexE > ((N*M)-1)) indexE = (N*M)-1;
                if (indexE < 0) indexE = 0;

                gradX = 0.5*(A[indexN]-A[indexS]);
                A[index] = gradX;
            }
        }
    }
}
You have a race condition inside your kernel: you write gradX into A[index] while neighboring threads are still reading A[indexN] and A[indexS], so elements of A may or may not be overwritten before they are used. Whether a given neighbor has already been overwritten depends on how the blocks happen to be scheduled, which is why the stripes move between runs and change with the block size.
Use different arrays for input and output.
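For example, a minimal sketch of the gradient step with separate input and output buffers (In, Out, and gradX2D are hypothetical names; the rest of the denoising update is omitted):

__global__ void gradX2D(const float* In, float* Out, int N, int M)
{
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    int j = blockIdx.y*blockDim.y + threadIdx.y;
    if (i > 0 && i < N-1 && j > 0 && j < M-1)  // skip the borders
    {
        int indexN = j + (i-1)*N;   // neighbor above
        int indexS = j + (i+1)*N;   // neighbor below
        // All reads touch In and all writes touch Out,
        // so no thread can observe a partially updated image.
        Out[j + i*N] = 0.5f*(In[indexN] - In[indexS]);
    }
}

On the host, allocate two device buffers and swap the pointers between iterations instead of writing back into the input array.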

What's wrong in the following C++ bicubic interpolation code for image resizing

I am trying to upsample an image using bicubic interpolation. I need values that accurately match the cvResize() function of OpenCV, but the results of the following code do not match the results from cvResize(). Can you take a look and help me fix the bug?
Image *Image::resize_using_Bicubic(int w, int h) {
    float dx, dy;
    float x, y;
    float tx, ty;
    float i, j, m, n;

    Image *result = new Image(w, h);
    tx = (float)this->m_width / (float)w;
    ty = (float)this->m_height / (float)h;

    for (i = 0; i < w; i++)
    {
        for (j = 0; j < h; j++)
        {
            x = i*tx;
            y = j*ty;
            dx = float(i-x) - (int)(i-x);
            dy = float(j-y) - (int)(j-y);

            float temp = 0.0;
            for (m = -1; m <= 2; m++)
            {
                for (n = -1; n <= 2; n++)
                {
                    int HIndex, WIndex;
                    HIndex = (y+n);
                    WIndex = (x+m);
                    if (HIndex < 0) {
                        HIndex = 0;
                    }
                    else if (HIndex > this->getHeight())
                    {
                        HIndex = this->getHeight()-1;
                    }
                    if (WIndex < 0) {
                        WIndex = 0;
                    }
                    else if (WIndex > this->getWidth())
                    {
                        WIndex = this->getWidth()-1;
                    }
                    temp += this->getPixel(HIndex, WIndex)*R(m-dx)*R(dy-n);
                }
            }
            result->setPixel(j, i, temp);
        }
    }
    return result;
}
You haven't said how different the results are. If they're very close, say within 1 or 2 in each RGB channel, this could be explained simply by roundoff differences.
There is more than one algorithm for Bicubic interpolation. Don Mitchell and Arun Netravali did an analysis and came up with a single formula to describe a number of them: http://www.mentallandscape.com/Papers_siggraph88.pdf
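For reference, a sketch of their family of cubic filters as a C++ function (coefficients taken from the paper; B = C = 1/3 is the filter they recommend, and B = 0, C = 0.5 gives the Catmull-Rom spline):

// Mitchell-Netravali cubic filter (requires <cmath> for fabs)
double mitchellNetravali(double x, double B, double C)
{
    x = fabs(x);
    if (x < 1.0)
        return ((12 - 9*B - 6*C)*x*x*x + (-18 + 12*B + 6*C)*x*x + (6 - 2*B)) / 6.0;
    if (x < 2.0)
        return ((-B - 6*C)*x*x*x + (6*B + 30*C)*x*x + (-12*B - 48*C)*x + (8*B + 24*C)) / 6.0;
    return 0.0;
}

If your R() doesn't match the cubic that cvResize() uses internally, you will see exactly the kind of small systematic differences you describe.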
Edit: One more thing. The individual filter coefficients should be summed up and used to divide the final value at the end, to normalize it. Also, I'm not sure why you have m-dx for one axis but dy-n for the other; shouldn't they have the same sign?
r=R(m-dx)*R(dy-n);
r_sum+=r;
temp+=this->getPixel(HIndex,WIndex)*r;
. . .
result->setPixel(j, i, temp/r_sum);
Change:
else if(HIndex>this->getHeight())
to:
else if(HIndex >= this->getHeight())
and change:
else if(WIndex>this->getWidth())
to:
else if(WIndex >= this->getWidth())
EDIT
Also change:
for(m=-1;m<=2;m++)
{
for(n=-1;n<=2;n++)
to:
for(m = -1; m <= 1; m++)
{
for(n = -1; n <= 1; n++)
