Dijkstra Algorithm infinite loop - graph-algorithm

I'm learning about shortest-path algorithms, and I'm trying to implement Dijkstra's algorithm so that it takes its input from a file like this:
7
A
B
C
D
E
F
G
A B 21
A C 14
B E 5
B D 7
D F 3
E C 44
E G 53
E D 123
G F 51
The problem is that when I add an extra edge to some vertex, such as D B 12, the algorithm runs into an infinite loop:
DIJKSTRA ALGORITHM:
public Set<Vertex> dijkstraAlgo(Graph G, int s) {
    initializeSingleSource(G, s);
    Set<Vertex> set = new HashSet<Vertex>(); // initially empty set of vertices
    Queue<Vertex> Q = new PriorityQueue<Vertex>(10, new VertexComparator()); // min priority queue
    for (Vertex v : G.vertices) { // add source to priority queue
        Q.add(G.vertices[s]);
    }
    while (Q.size() != 0) {
        Vertex u = Q.poll(); // extract vertex with min distance from the priority queue
        set.add(u); // add that vertex to the set
        for (String vertexId : u.neighbours.keySet()) { // visit neighbours of the extracted vertex
            int vertexNum = indexForName(G, vertexId);
            Vertex v = G.vertices[vertexNum];
            int w = weightFunc(G, u, v);
            relax(u, v, w);
            Q.add(v);
        }
    }
    return set;
}
READING THE FILE:
public class Graph {
    Vertex[] vertices;

    public Graph(String file) throws FileNotFoundException {
        Scanner sc = new Scanner(new File(file));
        vertices = new Vertex[sc.nextInt()];
        for (int v = 0; v < vertices.length; v++) {
            vertices[v] = new Vertex(sc.next());
        }
        while (sc.hasNext()) {
            int v1 = indexForName(sc.next());  // read source vertex
            String destination = sc.next();    // read destination vertex
            int w = sc.nextInt();              // read weight of the edge
            vertices[v1].neighbours.put(destination, w); // put the edge adjacent to source vertex
        }
        sc.close();
    }
MAIN:
public static void main(String[] args) throws FileNotFoundException {
    String fileName = "Dijikstra.txt";
    Dijkstra dijkstra = new Dijkstra(fileName);
    Set<Vertex> vertexInfo = dijkstra.dijkstraAlgo(dijkstra.graph, 0);
    System.out.println("Printing min distance of all vertexes from source vertex A ");
    for (Vertex v : vertexInfo) {
        System.out.println("Id: " + v.id + " distance: " + v.d + " Coming From " + v.p);
    }
}
VERTEX:
class Vertex {
    String id;
    int d;     // to store min distance from source
    Vertex p;  // to store last vertex from which min distance is reached
    Map<String, Integer> neighbours; // to store edges adjacent to the vertex

    public Vertex(String id) {
        this.id = id;
        neighbours = new HashMap<String, Integer>();
    }
}

for (Vertex v : G.vertices) { // add source to priority queue
    Q.add(G.vertices[s]);
}
Why are you adding every Vertex to the Priority Queue instead of just the initial one? What does your Vertex class look like?
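As the comment above suggests, the loop puts the source into the queue once per vertex, every relaxation unconditionally re-adds v, and nothing stops an already-processed vertex from being processed again; once the extra edge D B 12 creates the cycle B -> D -> B, the queue can never empty. A minimal sketch of the usual structure, assuming initializeSingleSource, indexForName, weightFunc and relax behave as in the post (relax updating v.d and v.p):

public Set<Vertex> dijkstraAlgo(Graph G, int s) {
    initializeSingleSource(G, s);
    Set<Vertex> settled = new HashSet<Vertex>();
    Queue<Vertex> Q = new PriorityQueue<Vertex>(10, new VertexComparator());
    Q.add(G.vertices[s]); // only the source is enqueued up front
    while (!Q.isEmpty()) {
        Vertex u = Q.poll();
        if (!settled.add(u)) {
            continue; // stale queue entry, u was settled already
        }
        for (String vertexId : u.neighbours.keySet()) {
            Vertex v = G.vertices[indexForName(G, vertexId)];
            int w = weightFunc(G, u, v);
            if (u.d + w < v.d) { // relax and re-queue only on an improvement
                relax(u, v, w);
                Q.add(v);
            }
        }
    }
    return settled;
}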

Related

Trying to find the path to a node in a binary tree with a vector function, but I am getting only 2 values from the starting node in the output vector

I am trying to find the path to a node in a binary tree with a vector function, but I am only getting 2 values from the start of the path in the output vector.
For example, when I search for the path to node 10, the expected output is (1 3 7 9 10), but I am getting only (1 3).
#include <bits/stdc++.h>
using namespace std;

struct node
{
    int data;
    struct node *left;
    struct node *right;
    node(int val)
    {
        data = val;
        left = NULL;
        right = NULL;
    }
};

vector<int> path(node *root, int x)
{
    static vector<int> v;
    if (root == NULL)
    {
        return v;
    }
    v.push_back(root->data);
    if (root->data == x)
    {
        v.push_back(x);
        return v;
    }
    path(root->left, x);
    path(root->right, x);
    v.pop_back();
    return v;
}

int main()
{
    struct node *root = new node(1);
    root->left = new node(2);
    root->right = new node(3);
    root->left->left = new node(4);
    root->left->right = new node(5);
    root->left->right->left = new node(11);
    root->right->left = new node(6);
    root->right->right = new node(7);
    root->right->right->right = new node(9);
    root->right->right->right->right = new node(10);
    int x = 10;
    vector<int> v = path(root, x);
    cout << v.size();
    cout << endl;
    for (int i = 0; i < v.size(); i++)
    {
        cout << v[i] << " ";
    }
    return 0;
}
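The output (1 3) happens because the return values of the recursive calls are ignored, so v.pop_back() also undoes the successful branch on the way back up (and the static vector keeps its state across calls). Not part of the original question, but a minimal sketch of one common fix, returning a success flag and passing the vector by reference (the name findPath is just for illustration):

bool findPath(node *root, int x, vector<int> &v)
{
    if (root == NULL)
        return false;
    v.push_back(root->data);   // tentatively add this node to the path
    if (root->data == x)
        return true;           // found it, keep everything pushed so far
    if (findPath(root->left, x, v) || findPath(root->right, x, v))
        return true;           // x is somewhere below this node
    v.pop_back();              // x is not under this node, undo the push
    return false;
}

// Usage with the tree above:
//   vector<int> v;
//   findPath(root, 10, v);    // v now holds 1 3 7 9 10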

Round Robin Algorithm Using Circular Linked List

Use a circular singly linked list to implement Round Robin process scheduling algorithm in which
each process is provided a fixed time (quantum) to execute and is pre-empted after that time period
to allow the other process to execute. Assume a set of ‘n’ processes are ready for execution.
Read the time quantum and for each of the processes, read the total execution time.
Name the processes as ‘A’, ‘B’ and so on in sequence. Each node should contain the name
of the process, its total execution time and the remaining execution time. If a process
completes its execution, remove it from the list after displaying its name and the
completion time.
Input format:
First line contains the value of ‘n’, the number of processes
Second line contains the time quantum
The remaining lines contain the total execution time of the processes in order.
Sample input:
5
2
6
3
7
5
1
Output:
E 9
B 12
A 18
D 21
C 22
#include <iostream>
using namespace std;
class node
{
public:
char name;
int tm;
int rt;
node *next;
};
class rr
{
public:
node * Head = NULL;
int j = 65;
void insert (int n)
{
node *nn = new node;
nn->name = j++;
nn->tm = n;
nn->rt = nn->tm;
if (Head == NULL)
{
Head = nn;
Head->next = Head;
}
else
{
node *temp = Head;
while (temp->next != Head)
temp = temp->next;
nn->next = temp->next;
temp->next = nn;
}
}
void quantum (int t)
{
node *temp = Head;
int c = 0, i = 0;
while (Head != NULL)
{
{
temp->rt = temp->rt - t;
c = c + t;
if (temp->rt <= 0)
{
c = c + temp->rt;
cout << temp->name;
cout << c << endl;
del (temp->name);
if (temp->next == temp)
{
break;
}
}
temp = temp->next;
}
}
}
void del (char x)
{
node *p = NULL;
node *temp = Head;
if (Head->name == x)
{
while (temp->next != Head)
temp = temp->next;
p = Head;
temp->next = Head->next;
Head = Head->next;
delete p;
}
else
{
while (temp->name != x)
{
p = temp;
temp = temp->next;
}
p->next = temp->next;
delete temp;
}
}
};
int
main ()
{
rr robin;
int i, n, x, y, t;
cin >> y;
cin >> t;
for (i = 0; i < y; i++)
{
cin >> n;
robin.insert (n);
}
robin.quantum (t);
return 0;
}
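Not part of the original post, but one pitfall worth noting in quantum() above: del(temp->name) frees the node that temp still points to, so the subsequent temp->next reads freed memory, and Head is never reset to NULL once the last process finishes. A minimal sketch of the same loop with the successor saved before deletion, assuming the node and rr members shown above:

void quantum (int t)
{
    node *temp = Head;
    int c = 0;
    while (Head != NULL)
    {
        temp->rt = temp->rt - t;
        c = c + t;
        node *nxt = temp->next;      // remember the successor before a possible delete
        if (temp->rt <= 0)
        {
            c = c + temp->rt;        // give back the unused part of the quantum
            cout << temp->name << " " << c << endl;
            bool last = (temp->next == temp);
            del (temp->name);
            if (last)
            {
                Head = NULL;         // list is empty now, stop
                break;
            }
        }
        temp = nxt;
    }
}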

Calculate recursive EMA with burn period in Esper

As an exercise I am trying to calculate a recursive EMA with a burn period in Esper, EPL. It has moderately complex startup logic, and I thought this would be a good test for evaluating the sorts of things Esper could achieve.
Assuming a stream of values x1, x2, x3 at regular intervals, we want to calculate:
let p = 0.1
a = average(x1, x2, x3, x4, x5) // Assume 5, in reality use a parameter
y1 = p * x1 + (1 - p) * a // Recursive calculation initialized with look-ahead average
y2 = p * x2 + (1 - p) * y1
y3 = p * x3 + (1 - p) * y2
....
The final stream should only publish y5, y6, y7, ...
I was toying with a context that produces an event containing the average a, and that event triggers a second context that begins the recursive calculations. But by the time I try to get the first context to trigger once and only once, and the second context to handle the initial case using a and subsequent events recursively, I end up with a messy tangle of logic.
Is there a straightforward way to approach this problem?
(I'm ignoring using a custom aggregator, since this is a learning exercise)
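Not part of the original question, but for reference, a small plain-Java sketch of the calculation described above (the class name EmaSketch is just for illustration); it reproduces the values asserted in the test further down (3.08588 after the fifth value, 3.377292 after the sixth):

public class EmaSketch {
    public static void main(String[] args) {
        double p = 0.1;
        int burn = 5;
        double[] x = {1, 2, 3, 4, 5, 6, 7, 8};

        // Look-ahead average over the burn period: (1+2+3+4+5)/5 = 3
        double a = 0;
        for (int i = 0; i < burn; i++) a += x[i];
        a /= burn;

        // Recursive EMA seeded with the average; only y5, y6, ... would be published
        double y = a;
        for (int i = 0; i < x.length; i++) {
            y = p * x[i] + (1 - p) * y;
            if (i >= burn - 1) {
                System.out.println("y" + (i + 1) + " = " + y); // y5 = 3.08588, y6 = 3.377292, ...
            }
        }
    }
}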
This doesn't answer the question, but it might be useful: an implementation as a custom aggregation function, tested with Esper 7.1.0.
public class EmaFactory implements AggregationFunctionFactory {
int burn = 0;
@Override
public void setFunctionName(String s) {
// Don't know why/when this is called
}
@Override
public void validate(AggregationValidationContext ctx) {
@SuppressWarnings("rawtypes")
Class[] p = ctx.getParameterTypes();
if ((p.length != 3)) {
throw new IllegalArgumentException(String.format(
"Ema aggregation required three parameters, received %d",
p.length));
}
if (
!(
(p[0] == Double.class || p[0] == double.class) &&
(p[1] == Double.class || p[1] == double.class) &&
(p[2] == Integer.class || p[2] == int.class))) {
throw new IllegalArgumentException(
String.format(
"Arguments to Ema aggregation must of types (Double, Double, Integer), got (%s, %s, %s)\n",
p[0].getName(), p[1].getName(), p[2].getName()) +
"This should be made nicer, see AggregationMethodFactorySum.java in the Esper source code for " +
"examples of correctly dealing with multiple types"
);
}
if (!ctx.getIsConstantValue()[2]) {
throw new IllegalArgumentException(
"Third argument 'burn' to Ema aggregation must be constant"
);
}
;
burn = (int) ctx.getConstantValues()[2];
}
@Override
public AggregationMethod newAggregator() {
return new EmaAggregationFunction(burn);
}
@SuppressWarnings("rawtypes")
@Override
public Class getValueType() {
return Double.class;
}
}
public class EmaAggregationFunction implements AggregationMethod {
final private int burnLength;
private double[] burnValues;
private int count = 0;
private double value = 0.;
EmaAggregationFunction(int burn) {
this.burnLength = burn;
this.burnValues = new double[burn];
}
private void update(double x, double alpha) {
if (count < burnLength) {
value += x;
burnValues[count++] = x;
if (count == burnLength) {
value /= count;
for (double v : burnValues) {
value = alpha * v + (1 - alpha) * value;
}
// in case burn is long, free memory
burnValues = null;
}
} else {
value = alpha * x + (1 - alpha) * value;
}
}
@Override
public void enter(Object tmp) {
Object[] o = (Object[]) tmp;
assert o[0] != null;
assert o[1] != null;
assert o[2] != null;
assert (int) o[2] == burnLength;
update((double) o[0], (double) o[1]);
}
@Override
public void leave(Object o) {
}
@Override
public Object getValue() {
if (count < burnLength) {
return null;
} else {
return value;
}
}
@Override
public void clear() {
// I don't know when / why this is called - this part untested
count = 0;
value = 0.;
burnValues = new double[burnLength];
}
}
public class TestEmaAggregation {
private EPRuntime epRuntime;
private SupportUpdateListener listener = new SupportUpdateListener();
void send(int id, double value) {
epRuntime.sendEvent(
new HashMap<String, Object>() {{
put("id", id);
put("value", value);
}},
"CalculationEvent");
}
@BeforeEach
public void beforeEach() {
EPServiceProvider provider = EPServiceProviderManager.getDefaultProvider();
EPAdministrator epAdministrator = provider.getEPAdministrator();
epRuntime = provider.getEPRuntime();
ConfigurationOperations config = epAdministrator.getConfiguration();
config.addPlugInAggregationFunctionFactory("ema", EmaFactory.class.getName());
config.addEventType(
"CalculationEvent",
new HashMap<String, Object>() {{ put("id", Integer.class); put("value", Double.class); }}
);
EPStatement stmt = epAdministrator.createEPL("select ema(value, 0.1, 5) as ema from CalculationEvent where value is not null");
stmt.addListener(listener);
}
Double getEma() {
return (Double)listener.assertOneGetNewAndReset().get("ema");
}
@Test
public void someTest() {
send(1, 1);
assertEquals(null, getEma());
send(1, 2);
assertEquals(null, getEma());
send(1, 3);
assertEquals(null, getEma());
send(1, 4);
assertEquals(null, getEma());
// Last of the burn period
// We expect:
// a = (1+2+3+4+5) / 5 = 3
// y1 = 0.1 * 1 + 0.9 * 3 = 2.8
// y2 = 0.1 * 2 + 0.9 * 2.8
// ... leading to
// y5 = 3.08588
send(1, 5);
assertEquals(3.08588, getEma(), 1e-10);
// Outside burn period
send(1, 6);
assertEquals(3.377292, getEma(), 1e-10);
send(1, 7);
assertEquals(3.7395628, getEma(), 1e-10);
send(1, 8);
assertEquals(4.16560652, getEma(), 1e-10);
}
}

Need help in Parallelizing if and else condition in CUDA C program

I have written a filter for image blurring in C and it's working fine. I am trying to run it on the GPU using CUDA C for faster processing. The program has a few if and else conditions, as can be seen below in the C code version.
The inputs to the function are the input image, the output image, and the number of columns.
void convolve_young1D(double * in, double * out, int datasize) {
int i, j;
/* Compute first 3 output elements */
out[0] = B*in[0];
out[1] = B*in[1] + bf[2]*out[0];
out[2] = B*in[2] + (bf[1]*out[0]+bf[2]*out[1]);
/* Recursive computation of output in forward direction using filter parameters bf and B */
for (i=3; i<datasize; i++) {
out[i] = B*in[i];
for (j=0; j<3; j++) {
out[i] += bf[j]*out[i-(3-j)];
}
}
}
//Calling function below
void convolve_young2D(int rows, int columns, int sigma, double ** ip_padded) {
/** \brief Filter radius */
w = 3*sigma;
/** \brief Filter parameter q */
double q;
if (sigma < 2.5)
q = 3.97156 - 4.14554*sqrt(1-0.26891*sigma);
else
q = 0.98711*sigma - 0.9633;
/** \brief Filter parameters b0, b1, b2, b3 */
double b0 = 1.57825 + 2.44413*q + 1.4281*q*q + 0.422205*q*q*q;
double b1 = 2.44413*q + 2.85619*q*q + 1.26661*q*q*q;
double b2 = -(1.4281*q*q + 1.26661*q*q*q);
double b3 = 0.422205*q*q*q;
/** \brief Filter parameters bf, bb, B */
bf[0] = b3/b0; bf[1] = b2/b0; bf[2] = b1/b0;
bb[0] = b1/b0; bb[1] = b2/b0; bb[2] = b3/b0;
B = 1 - (b1+b2+b3)/b0;
int i,j;
/* Convolve each row with 1D Gaussian filter */
double *out_t = calloc(columns+(2*w),sizeof(double ));
for (i=0; i<rows+2*w; i++) {
convolve_young1D(ip_padded[i], out_t, columns+2*w);
}
free(out_t);
I tried the same approach with blocks and threads in CUDA C but wasn't successful: I get zeros as output, and even the input values seem to change to zeros. I don't know where I am going wrong, please do help. I am pretty new to CUDA C programming. Here is my attempted version of the CUDA kernel.
__global__ void convolve_young2D( float *in, float *out,int rows,int columns, int j,float B,float bf[3],int w) {
int k;
int x = blockIdx.x * blockDim.x + threadIdx.x;
if((x>0) && (x<(rows+2*w)))
{
//printf("%d \t",x);
if(j ==0)
{
// Compute first output elements
out[x*columns] = B*in[x*columns];
}
else if(j==1)
{
out[x*columns +1 ] = B*in[x*columns +1] + bf[2]*out[x*columns];
}
else if (j== 2)
{
out[2] = B*in[x*columns +2] + (bf[1]*out[x*columns]+bf[2]*out[x*columns+1]);
}
else{
// Recursive computation of output in forward direction using filter parameters bf and B
out[x*columns+j] = B*in[x*columns+j];
for (k=0; k<3; k++) {
out[x*columns + j] += bf[k]*out[(x*columns+j)-(3-k)];
}
}
}
}
//Calling function below
void convolve_young2D(int rows, int columns, int sigma, const float * const ip_padded, float * const op_padded) {
float bf[3], bb[3];
float B;
int w;
/** \brief Filter radius */
w = 3*sigma;
/** \brief Filter parameter q */
float q;
if (sigma < 2.5)
q = 3.97156 - 4.14554*sqrt(1-0.26891*sigma);
else
q = 0.98711*sigma - 0.9633;
/** \brief Filter parameters b0, b1, b2, b3 */
float b0 = 1.57825 + 2.44413*q + 1.4281*q*q + 0.422205*q*q*q;
float b1 = 2.44413*q + 2.85619*q*q + 1.26661*q*q*q;
float b2 = -(1.4281*q*q + 1.26661*q*q*q);
float b3 = 0.422205*q*q*q;
/** \brief Filter parameters bf, bb, B */
bf[0] = b3/b0; bf[1] = b2/b0; bf[2] = b1/b0;
bb[0] = b1/b0; bb[1] = b2/b0; bb[2] = b3/b0;
B = 1 - (b1+b2+b3)/b0;
int p;
const int inputBytes = (rows+2*w) * (columns+2*w) * sizeof(float);
float *d_input, *d_output; // arrays in the GPU's global memory
cudaMalloc(&d_input, inputBytes);
cudaMemcpy(d_input, ip_padded, inputBytes, cudaMemcpyHostToDevice);
cudaMalloc(&d_output,inputBytes);
for (p = 0; p<columns+2*w; p++){
convolve_young<<<4,500>>>(d_input,d_output,rows,columns,p,B,bf,w);
}
cudaMemcpy(op_padded, d_input, inputBytes, cudaMemcpyDeviceToHost);
cudaFree(d_input);
The first problem is that you call convolve_young<<<4,500>>>(d_input,d_output,rows,columns,p,B,bf,w); but you defined a kernel named convolve_young2D.
Another possible problem is that to do the convolution you do:
for (p = 0; p<columns+2*w; p++){
convolve_young<<<4,500>>>(d_input,d_output,rows,columns,p,B,bf,w);
}
Here you're looping over the columns instead of the rows compared to the CPU algorithm:
for (i=0; i<rows+2*w; i++) {
convolve_young1D(ip_padded[i], out_t, columns+2*w);
}
First you should try to do a direct port of your CPU algorithm, computing one line at a time, and then modify it to transfer the whole image.
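Two more details that can explain the zeros: the kernel receives the host array bf directly (an array parameter decays to a pointer, so the device ends up reading host memory through it), and the final cudaMemcpy copies d_input back to op_padded instead of d_output. A rough sketch of a direct port along the lines suggested above, one thread per padded row with the feedback coefficients passed by value (the kernel name convolve_rows and the launch configuration are only illustrative):

// One thread per padded row; each thread runs the whole recursive forward
// pass for its row, exactly like convolve_young1D does on the CPU.
__global__ void convolve_rows(const float *in, float *out,
                              int paddedRows, int paddedCols,
                              float B, float bf0, float bf1, float bf2)
{
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r >= paddedRows) return;

    const float *x = in  + (size_t)r * paddedCols;
    float       *y = out + (size_t)r * paddedCols;

    // First three output elements
    y[0] = B * x[0];
    y[1] = B * x[1] + bf2 * y[0];
    y[2] = B * x[2] + bf1 * y[0] + bf2 * y[1];

    // Recursive computation of the output in the forward direction
    for (int j = 3; j < paddedCols; j++) {
        y[j] = B * x[j] + bf0 * y[j - 3] + bf1 * y[j - 2] + bf2 * y[j - 1];
    }
}

// Host side (sketch): launch once, one thread per row, then copy d_output back.
//   int paddedRows = rows + 2 * w, paddedCols = columns + 2 * w;
//   int threads = 256, blocks = (paddedRows + threads - 1) / threads;
//   convolve_rows<<<blocks, threads>>>(d_input, d_output, paddedRows, paddedCols,
//                                      B, bf[0], bf[1], bf[2]);
//   cudaDeviceSynchronize();
//   cudaMemcpy(op_padded, d_output, inputBytes, cudaMemcpyDeviceToHost);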

How to merge a lot of square images via OpenCV?

How can I merge images like below into a single image using OpenCV (there can be any number of them both horizontally and vertically)? Is there any built-in solution to do it?
Additional pieces:
Well, it seems that I finished the puzzle:
Main steps:
Compare each pair of images (puzzle pieces) to know the relative position (findRelativePositions and getPosition).
Build a map knowing the relative positions of the pieces (buildPuzzle and builfForPiece)
Create the final collage putting each image at the correct position (final part of buildPuzzle).
Comparison between pieces A and B in step 1 is done by checking for similarity (sum of absolute differences) among:
B is NORTH to A: A first row and B last row;
B is SOUTH to A: A last row and B first row;
B is WEST to A : A last column and B first column;
B is EAST to A : A first column and B last column.
Since the images do not overlap, but we can assume that adjoining rows (columns) are quite similar, the key aspect is to use an (ad hoc) threshold to decide whether two pieces adjoin or not. This is handled in the function getPosition, with the threshold parameter threshold.
Here is the full code. Please let me know if something is not clear.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <set>
using namespace std;
using namespace cv;
enum Direction
{
NORTH = 0,
SOUTH,
WEST,
EAST
};
int getPosition(const Mat3b& A, const Mat3b& B, double& cost)
{
Mat hsvA, hsvB;
cvtColor(A, hsvA, COLOR_BGR2HSV);
cvtColor(B, hsvB, COLOR_BGR2HSV);
int threshold = 1000;
// Check NORTH
Mat3b AN = hsvA(Range(0, 1), Range::all());
Mat3b BS = hsvB(Range(B.rows - 1, B.rows), Range::all());
Mat3b AN_BS;
absdiff(AN, BS, AN_BS);
Scalar scoreN = sum(AN_BS);
// Check SOUTH
Mat3b AS = hsvA(Range(A.rows - 1, A.rows), Range::all());
Mat3b BN = hsvB(Range(0, 1), Range::all());
Mat3b AS_BN;
absdiff(AS, BN, AS_BN);
Scalar scoreS = sum(AS_BN);
// Check WEST
Mat3b AW = hsvA(Range::all(), Range(A.cols - 1, A.cols));
Mat3b BE = hsvB(Range::all(), Range(0, 1));
Mat3b AW_BE;
absdiff(AW, BE, AW_BE);
Scalar scoreW = sum(AW_BE);
// Check EAST
Mat3b AE = hsvA(Range::all(), Range(0, 1));
Mat3b BW = hsvB(Range::all(), Range(B.cols - 1, B.cols));
Mat3b AE_BW;
absdiff(AE, BW, AE_BW);
Scalar scoreE = sum(AE_BW);
vector<double> scores{ scoreN[0], scoreS[0], scoreW[0], scoreE[0] };
int idx_min = distance(scores.begin(), min_element(scores.begin(), scores.end()));
int direction = (scores[idx_min] < threshold) ? idx_min : -1;
cost = scores[idx_min];
return direction;
}
void resolveConflicts(Mat1i& positions, Mat1d& costs)
{
for (int c = 0; c < 4; ++c)
{
// Search for duplicate pieces in each column
set<int> pieces;
set<int> dups;
for (int r = 0; r < positions.rows; ++r)
{
int label = positions(r, c);
if (label >= 0)
{
if (pieces.count(label) == 1)
{
dups.insert(label);
}
else
{
pieces.insert(label);
}
}
}
if (dups.size() > 0)
{
int min_idx = -1;
for (int duplicate : dups)
{
// Find minimum cost position
Mat1d column = costs.col(c);
min_idx = distance(column.begin(), min_element(column.begin(), column.end()));
// Keep only minimum cost position
for (int ir = 0; ir < positions.rows; ++ir)
{
int label = positions(ir, c);
if ((label == duplicate) && (ir != min_idx))
{
positions(ir, c) = -1;
}
}
}
}
}
}
void findRelativePositions(const vector<Mat3b>& pieces, Mat1i& positions)
{
positions = Mat1i(pieces.size(), 4, -1);
Mat1d costs(pieces.size(), 4, DBL_MAX);
for (int i = 0; i < pieces.size(); ++i)
{
for (int j = i + 1; j < pieces.size(); ++j)
{
double cost;
int pos = getPosition(pieces[i], pieces[j], cost);
if (pos >= 0)
{
if (costs(i, pos) > cost)
{
positions(i, pos) = j;
costs(i, pos) = cost;
switch (pos)
{
case NORTH:
positions(j, SOUTH) = i;
costs(j, SOUTH) = cost;
break;
case SOUTH:
positions(j, NORTH) = i;
costs(j, NORTH) = cost;
break;
case WEST:
positions(j, EAST) = i;
costs(j, EAST) = cost;
break;
case EAST:
positions(j, WEST) = i;
costs(j, WEST) = cost;
break;
}
}
}
}
}
resolveConflicts(positions, costs);
}
void builfForPiece(int idx_piece, set<int>& posed, Mat1i& labels, const Mat1i& positions)
{
Point pos(-1, -1);
// Find idx_piece on grid;
for (int r = 0; r < labels.rows; ++r)
{
for (int c = 0; c < labels.cols; ++c)
{
if (labels(r, c) == idx_piece)
{
pos = Point(c, r);
break;
}
}
if (pos.x >= 0) break;
}
if (pos.x < 0) return;
// Put connected pieces
for (int c = 0; c < 4; ++c)
{
int next = positions(idx_piece, c);
if (next > 0)
{
switch (c)
{
case NORTH:
labels(Point(pos.x, pos.y - 1)) = next;
posed.insert(next);
break;
case SOUTH:
labels(Point(pos.x, pos.y + 1)) = next;
posed.insert(next);
break;
case WEST:
labels(Point(pos.x + 1, pos.y)) = next;
posed.insert(next);
break;
case EAST:
labels(Point(pos.x - 1, pos.y)) = next;
posed.insert(next);
break;
}
}
}
}
Mat3b buildPuzzle(const vector<Mat3b>& pieces, const Mat1i& positions, Size sz)
{
int n_pieces = pieces.size();
set<int> posed;
set<int> todo;
for (int i = 0; i < n_pieces; ++i) todo.insert(i);
Mat1i labels(n_pieces * 2 + 1, n_pieces * 2 + 1, -1);
// Place first element in the center
todo.erase(0);
labels(Point(n_pieces, n_pieces)) = 0;
posed.insert(0);
builfForPiece(0, posed, labels, positions);
// Build puzzle starting from the already placed elements
while (todo.size() > 0)
{
auto it = todo.begin();
int next = -1;
do
{
next = *it;
++it;
} while (posed.count(next) == 0 && it != todo.end());
todo.erase(next);
builfForPiece(next, posed, labels, positions);
}
// Posed all pieces, now collage!
vector<Point> pieces_position;
Mat1b mask = labels >= 0;
findNonZero(mask, pieces_position);
Rect roi = boundingRect(pieces_position);
Mat1i lbls = labels(roi);
Mat3b collage(roi.height * sz.height, roi.width * sz.width, Vec3b(0, 0, 0));
for (int r = 0; r < lbls.rows; ++r)
{
for (int c = 0; c < lbls.cols; ++c)
{
if (lbls(r, c) >= 0)
{
Rect rect(c*sz.width, r*sz.height, sz.width, sz.height);
pieces[lbls(r, c)].copyTo(collage(rect));
}
}
}
return collage;
}
int main()
{
// Load images
vector<String> filenames;
glob("D:\\SO\\img\\puzzle*", filenames);
vector<Mat3b> pieces(filenames.size());
for (int i = 0; i < filenames.size(); ++i)
{
pieces[i] = imread(filenames[i], IMREAD_COLOR);
}
// Find Relative positions
Mat1i positions;
findRelativePositions(pieces, positions);
// Build the puzzle
Mat3b puzzle = buildPuzzle(pieces, positions, pieces[0].size());
imshow("Puzzle", puzzle);
waitKey();
return 0;
}
NOTE
No, there is no built-in solution to perform this. Image stitching won't work since the images are not overlapped.
I cannot guarantee that this works for every puzzle, but it should work for most of them.
I probably shouldn't have spent a couple of hours working on this, but it was fun :D
EDIT
Adding more puzzle pieces generated wrong results with the previous code version. This was due to the (wrong) assumption that at most one piece is good enough to be connected with a given piece.
Now I added a cost matrix, and only the minimum cost piece is saved as neighbor of a given piece.
I also added a resolveConflicts function that prevents one piece from being merged (in a non-conflicting position) with more than one piece.
This is the result after adding more pieces:
UPDATE
Considerations after increasing the number of puzzle pieces:
This solution depends on the input order of the pieces, since it uses a greedy approach to find neighbors.
While searching for neighbors, it's better to compare the H channel in the HSV space. I updated the code above with this improvement.
The final solution probably needs some kind of global minimization of a global cost matrix. That would make the method independent of the input order. I'll be back on this asap.
Once you have loaded these images as OpenCV Mat objects, you can concatenate them either vertically or horizontally using:
Mat A, B; // Images that will be concatenated
Mat H; // Here we will concatenate A and B horizontally
Mat V; // Here we will concatenate A and B vertically
hconcat(A, B, H);
vconcat(A, B, V);
If you need to concatenate more than two images, you can use these methods recursively.
By the way, I think these methods are not included in the OpenCV documentation, but I have used them in the past.
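For instance, a minimal sketch that tiles a grid of equally sized pieces, stored row-major in a vector, using hconcat and vconcat (the helper name tileGrid is just for illustration):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;

// Tile rows*cols images of identical size (row-major order) into one image.
Mat tileGrid(const vector<Mat>& pieces, int rows, int cols)
{
    vector<Mat> rowImages;
    for (int r = 0; r < rows; ++r)
    {
        vector<Mat> rowPieces(pieces.begin() + r * cols,
                              pieces.begin() + (r + 1) * cols);
        Mat rowImg;
        hconcat(rowPieces, rowImg); // concatenate one row of pieces horizontally
        rowImages.push_back(rowImg);
    }
    Mat grid;
    vconcat(rowImages, grid);       // stack the resulting rows vertically
    return grid;
}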
