How to find connected object in a binary image in VIVADO HLS? - opencv

I have a thresholded binary image as shown below:
I want to find all connected objects in the image.
The code should take an input image stream and give the number of connected components as output.
I have already implemented it in C, where matrices are stored and can be accessed directly in A[][] form. But in HLS the image comes in as a stream, which I have converted to an hls::Mat. I am not sure whether I can perform element-wise operations on an hls::Mat, or whether it supports random access like an ordinary matrix.
I am not able to figure out how to implement this C code in Vivado HLS.
unsigned char paddedImg[FULL_HD_HEIGHT+2][FULL_HD_WIDTH+2] = {0};
void connectedComponents(unsigned char* Image, int width, int height,
                         int connectType, int* noOfComponents, char* list)
{
unsigned char west, north, northWest, northEast, south, east, southEast, southWest = 0;
int label = 0;
int min = 0;
for(int i=0; i<height; i++)
{
for(int j=0; j<width; j++)
{
paddedImg[i+1][j+1] = Image[(i*width) + j];
}
}
for(int i=1; i<(height+1); i++)
{
for(int j=1; j<(width+1); j++)
{
west = paddedImg[i][j-1];
northWest = paddedImg[i-1][j-1];
north = paddedImg[i-1][j];
northEast = paddedImg[i-1][j+1];
if(paddedImg[i][j] != 0)
{
min = 25500;
if(connectType == 8)
{
if( west != 0 || north != 0 || northWest != 0 || northEast != 0)
{
if(west < min && west != 0) min = west;
if(northWest < min && northWest != 0) min = northWest;
if(north < min && north != 0) min = north;
if(northEast < min && northEast != 0) min = northEast;
paddedImg[i][j] = min;
}
else
{
paddedImg[i][j] = ++label;
}
}
else if(connectType == 4)
{
if( west != 0 || north != 0)
{
if(west < min && west != 0) min = west;
if(north < min && north != 0) min = north;
paddedImg[i][j] = min;
}
else
{
paddedImg[i][j] = ++label;
}
}
else
{
printf("invalid connect type\n");
}
}
}
}
for(int i=1; i<height+1; i++)
{
for(int j=1; j<width+1; j++)
{
if(paddedImg[i][j] != 0)
{
if(connectType == 8)
{
west = paddedImg[i][j-1];
northWest = paddedImg[i-1][j-1];
north = paddedImg[i-1][j];
northEast = paddedImg[i-1][j+1];
east = paddedImg[i][j+1];
southEast = paddedImg[i+1][j+1];
south = paddedImg[i+1][j];
southWest = paddedImg[i+1][j-1];
min = paddedImg[i][j];
if(west < min && west != 0) min = west;
if(northWest < min && northWest != 0) min = northWest;
if(north < min && north != 0) min = north;
if(northEast < min && northEast != 0) min = northEast;
if(east < min && east != 0) min = east;
if(southEast < min && southEast != 0) min = southEast;
if(south < min && south != 0) min = south;
if(southWest < min && southWest != 0) min = southWest;
if(west != 0) paddedImg[i][j-1] = min;
if(northWest != 0) paddedImg[i-1][j-1] = min;
if(north != 0) paddedImg[i-1][j] = min;
if(northEast != 0) paddedImg[i-1][j+1] = min;
if(east != 0) paddedImg[i][j+1] = min;
if(southEast != 0) paddedImg[i+1][j+1] = min;
if(south != 0) paddedImg[i+1][j] = min;
if(southWest != 0) paddedImg[i+1][j-1] = min;
}
else if(connectType == 4)
{
west = paddedImg[i][j-1];
north = paddedImg[i-1][j];
east = paddedImg[i][j+1];
south = paddedImg[i+1][j];
min = paddedImg[i][j];
if(west < min && west != 0) min = west;
if(north < min && north != 0) min = north;
if(east < min && east != 0) min = east;
if(south < min && south != 0) min = south;
if(west != 0) paddedImg[i][j-1] = min;
if(north != 0) paddedImg[i-1][j] = min;
if(east != 0) paddedImg[i][j+1] = min;
if(south != 0) paddedImg[i+1][j] = min;
}
else
{
printf("invalid connect type\n");
}
}
}
}
for(int i=0; i<height; i++)
{
for(int j=0; j<width; j++)
{
Image[i*width + j] = paddedImg[i+1][j+1];
}
}
*noOfComponents = min;
}

The algorithm you describe seems to require that the entire image be loaded into FPGA memory to allow random access to its elements. If the image is small enough, you can do that by reading it from the stream into memory beforehand.
If the image is large, or you don't want to buffer the entire image, the paper "FPGA Implementation of a Single Pass Connected Components Algorithm" proposes an algorithm for finding connected components in a single pass.
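If you take the buffering route, a minimal sketch of the wrapper is below. It assumes a plain hls::stream<unsigned char> pixel interface rather than hls::Mat, and the names ccl_top, IMG_HEIGHT and IMG_WIDTH are hypothetical; a Full HD frame would normally be far too large to buffer in on-chip BRAM, so this only makes sense for small frames.
#include "hls_stream.h"

#define IMG_HEIGHT 64   // hypothetical small frame size, not Full HD
#define IMG_WIDTH  64

// Buffer the incoming pixel stream into a padded on-chip array so the two
// labelling passes from the question can index it randomly, then stream the
// labelled pixels back out.
void ccl_top(hls::stream<unsigned char> &in,
             hls::stream<unsigned char> &out,
             int connectType, int *noOfComponents)
{
    static unsigned char padded[IMG_HEIGHT + 2][IMG_WIDTH + 2] = {0};

    // Pass 0: read the stream into the padded buffer (row-major order assumed)
    for (int i = 0; i < IMG_HEIGHT; i++)
        for (int j = 0; j < IMG_WIDTH; j++)
            padded[i + 1][j + 1] = in.read();

    // ... run the two labelling passes from the question on padded[][] here ...

    // Final pass: stream the labelled image back out
    for (int i = 0; i < IMG_HEIGHT; i++)
        for (int j = 0; j < IMG_WIDTH; j++)
            out.write(padded[i + 1][j + 1]);
}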

Related

issue with checking result of last two orders from the same Symbol

The following code works if the EA is attached to only a single pair, but if it is attached to multiple pairs, it also counts the results of the other pairs and raises the wrong alert.
The idea of the code is to decide whether the last two trades of the attached pair ended in profit. Orders of other pairs in the history should not be counted.
void Alert_if_the_last_two_ordes_for_same_pair_was_positive()
{
double profit = 0;
int orderId = -1;
datetime lastCloseTime = 0;
int cnt = OrdersHistoryTotal();
for(int i=0; i < cnt; i++)
{
if(!OrderSelect(i, SELECT_BY_POS, MODE_HISTORY))
continue;
orderId = OrderMagicNumber();
if(OrderSymbol() == Symbol() && lastCloseTime < OrderCloseTime() && orderId == 114)
{
lastCloseTime = OrderCloseTime();
profit = OrderProfit();
}
}
double profit1 = 0;
int orderId1 = -1;
datetime lastCloseTime1 = 0;
int cnt1 = OrdersHistoryTotal()-1;
for(int i1=0; i1 < cnt1; i1++)
{
if(!OrderSelect(i1, SELECT_BY_POS, MODE_HISTORY))
continue;
orderId1 = OrderMagicNumber();
if(OrderSymbol() == Symbol() && lastCloseTime1 < OrderCloseTime() && orderId1 == 114)
{
lastCloseTime1 = OrderCloseTime();
profit1 = OrderProfit();
}
}
double profit2 = 0;
int orderId2 = -1;
datetime lastCloseTime2 = 0;
int cnt2 = OrdersHistoryTotal()-2;
for(int i2=0; i2 < cnt2; i2++)
{
if(!OrderSelect(i2, SELECT_BY_POS, MODE_HISTORY))
continue;
orderId2 = OrderMagicNumber();
if(OrderSymbol() == Symbol() && lastCloseTime2 < OrderCloseTime() && orderId2 == 114)
{
lastCloseTime2 = OrderCloseTime();
profit2 = OrderProfit();
}
}
if(profit > 0 && profit1 > 0 )
{
Alert(" profit > 0 && profit1 = up for " + Symbol());
}
}
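For reference, the intended check can be sketched in plain C++ (not MQL4): keep only the closed orders of the attached symbol and magic number, sort them by close time, and test whether the newest two were both profitable. The ClosedOrder struct and the history vector are hypothetical stand-ins for the OrderSelect()/OrderCloseTime()/OrderProfit() history calls.
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical stand-in for one entry of the MT4 order history.
struct ClosedOrder {
    std::string symbol;
    int         magic;
    long        closeTime;  // e.g. seconds since epoch
    double      profit;
};

// True if the two most recently closed orders of `symbol` with magic 114
// both closed with a positive profit.
bool lastTwoProfitable(std::vector<ClosedOrder> history, const std::string &symbol)
{
    // Keep only the attached symbol / magic number.
    history.erase(std::remove_if(history.begin(), history.end(),
                                 [&](const ClosedOrder &o) {
                                     return o.symbol != symbol || o.magic != 114;
                                 }),
                  history.end());
    if (history.size() < 2)
        return false;
    // Newest first.
    std::sort(history.begin(), history.end(),
              [](const ClosedOrder &a, const ClosedOrder &b) {
                  return a.closeTime > b.closeTime;
              });
    return history[0].profit > 0 && history[1].profit > 0;
}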

Contour segmentation

I have a contour which consists of curved segments and straight segments. Is there any possibility to segment the contour into the curved and straight parts?
So this is an example of a contour.
I would like to have a segmentation like this:
Do you have any idea how I could solve such a problem?
Thank you very much and best regards
Yes, I got the solution with the link #PSchn posted. I just go through the contour points and define a border (threshold). Everything under the border is a "curved segment", everything else is a straight segment. Thank you for your help!!
vector<double> getCurvature(vector<Point> const& tContourPoints, int tStepSize)
{
int iplus;
int iminus;
double acurvature;
double adivisor;
Point2f pplus;
Point2f pminus;
// first derivative
Point2f a1stDerivative;
// second derivative
Point2f a2ndDerivative;
vector< double > rVecCurvature( tContourPoints.size() );
if ((int)tContourPoints.size() < tStepSize)
{
return rVecCurvature;
}
for (int i = 0; i < (int)tContourPoints.size(); i++ )
{
const Point2f& pos = tContourPoints[i];
iminus = i-tStepSize;
iplus = i+tStepSize;
if(iminus < 0)
{
pminus = tContourPoints[iminus + tContourPoints.size()];
}
else
{
pminus = tContourPoints[iminus];
}
if(iplus >= (int)tContourPoints.size()) // >= : an index equal to size() would already be out of range
{
pplus = tContourPoints[iplus - (int)tContourPoints.size()];
}
else
{
pplus = tContourPoints[iplus];
}
a1stDerivative.x = (pplus.x - pminus.x) / ( iplus-iminus);
a1stDerivative.y = (pplus.y - pminus.y) / ( iplus-iminus);
a2ndDerivative.x = (pplus.x - 2*pos.x + pminus.x) / ((iplus-iminus)/2*(iplus-iminus)/2);
a2ndDerivative.y = (pplus.y - 2*pos.y + pminus.y) / ((iplus-iminus)/2*(iplus-iminus)/2);
adivisor = a2ndDerivative.x*a2ndDerivative.x + a2ndDerivative.y*a2ndDerivative.y;
if ( abs(adivisor) > 10e-8 )
{
acurvature = abs(a2ndDerivative.y*a1stDerivative.x - a2ndDerivative.x*a1stDerivative.y) / pow(adivisor, 3.0/2.0 ) ;
}
else
{
acurvature = numeric_limits<double>::infinity();
}
rVecCurvature[i] = acurvature;
}
return rVecCurvature;
}
Once I have the curvature I define a border (threshold) and go through my contour:
vector<double> acurvature = getCurvature(aContours_img[0], 50);
if(acurvature.size() > 0)
{
// aSegmentChange =1 --> curved segment
// aSegmentChange =0 --> straigth segment
if( acurvature[0] < aBorder)
{
aSegmentChange = 1;
}
else
{
aSegmentChange = 0;
}
// Kontur segmentieren
for(int i = 0; i < (int)acurvature.size(); i++)
{
aSegments[aSegmentIndex].push_back(aContours_img[0][i]);
aCurveIndex[aSegmentIndex].push_back(aSegmentChange);
if( acurvature[i] < aBorder && aSegmentChange == 0 )
{
aSegmentIndex++;
aSegmentChange = 1;
}
if( acurvature[i] > aBorder && aSegmentChange == 1 )
{
aSegmentIndex++;
aSegmentChange = 0;
}
if(aSegmentIndex >= (int)aSegments.size()-1)
{
aSegments.resize(aSegmentIndex+1);
aCurveIndex.resize(aSegmentIndex+1);
}
}
}
This is my contour
And this is the result of segmentation
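For completeness, a minimal sketch of how the input used above (aContours_img) might be obtained; the threshold value is an assumption, and CHAIN_APPROX_NONE is used so the contour stays dense enough for the curvature estimate.
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;

// Extract dense contours from a grayscale image so they can be fed to getCurvature().
vector<vector<Point> > extractContours(const Mat &gray)
{
    Mat bin;
    threshold(gray, bin, 128, 255, THRESH_BINARY);                    // assumed threshold
    vector<vector<Point> > aContours_img;
    findContours(bin, aContours_img, RETR_EXTERNAL, CHAIN_APPROX_NONE);
    return aContours_img;                                             // aContours_img[0] is the first contour
}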

Boxing Detected Objects in OpenCV

I am currently implementing a connected-components algorithm, and the last step requires me to enclose the objects I found in a box. I have attempted to enclose an object in a box and this is the result:
As you can see, some of them seem to be enclosed in a box. Some of the box lines are not visible unless I stretch the window of the imshow function's output, and some of them are colored when I expected a line in a shade of gray.
My questions are: Are the objects really getting enclosed? I remember that when I ran similar code on a different OS the colored lines were not visible at all, but they are on my computer. Additionally, why are some of the lines in a different color when I was expecting a shade of gray?
Mat src, src_gray;
Mat dst, detected_edges;
const char* window_name = "THRESHOLDED IMAGE";
/**
* @function connectedComponent
*/
static void connectedComponent(int, void*)
{
Mat test; //dummy
Mat sub;
int newObject = 0;
int zeroTest = 0, nonZero = 0;
int arr[5] = {0,0,0,0,0};
/// Reduce noise with a kernel 3x3
blur( src_gray,detected_edges, Size(3,3) ); //filtering out of noise
namedWindow("INITIAL", WINDOW_NORMAL);
imshow("INITIAL",detected_edges);
resizeWindow("INITIAL", 300, 300);
threshold(detected_edges, detected_edges, 0,255, THRESH_BINARY | THRESH_OTSU);
int** newSub = new int*[detected_edges.rows];
for(int i = 0; i < detected_edges.rows; i++)
newSub[i] = new int[detected_edges.cols];
for(int i = 0; i < detected_edges.rows; i++){
for(int j = 0; j < detected_edges.cols; j++){
newSub[i][j] = 0;
}
}
/*INITIAL MARKING LOOP*/
for(int i = 0; i < detected_edges.rows; i++){
for(int j = 0; j < detected_edges.cols; j++){
if(detected_edges.at<uchar>(i,j) == 0){
if(i-1 < 0 && j-1 < 0){
newObject = newObject + 1; //no values
newSub[i][j] = newObject;
}else if(i-1 >= 0 && j-1 < 0){
if(newSub[i-1][j] != 0){
newSub[i][j] = newSub[i-1][j]; //only up has value
}else{
newObject = newObject + 1; //no values
newSub[i][j] = newObject;
}
}else if(i-1 < 0 && j-1 >= 0){
if(newSub[i-1][j] != 0){
newSub[i][j] = newSub[i-1][j]; //only left has value
}else{
newObject = newObject + 1; //no values
newSub[i][j] = newObject;
}
}else{
if(newSub[i-1][j] == 0 && newSub[i][j-1] == 0){
newObject = newObject + 1; //no values
newSub[i][j] = newObject;
}else if(newSub[i-1][j] == newSub[i][j-1]){ //same value
newSub[i][j] = newSub[i-1][j];
}else if((newSub[i-1][j] != 0 && newSub[i][j-1] == 0)){
newSub[i][j] = newSub[i-1][j]; //only up has value
}else if(newSub[i-1][j] == 0 && newSub[i][j-1] != 0 ){
newSub[i][j] = newSub[i][j-1]; //only left has value
}else if(newSub[i-1][j] != newSub[i][j-1]){
newSub[i][j] = newSub[i-1][j]; //different values follow upper's value
}
}
}
}
}
int a = 1;
int maxRows = detected_edges.rows;
int maxCols = detected_edges.cols;
/*CONNECTING PIXELS RIGHT-BOTTOM*/
while(a < newObject){
int update = 0;
for(int i = 0; i < maxRows; i++){
for(int j = 0; j < maxCols; j++){
if(newSub[i][j] == a){
if(i+1 < maxRows && j+1 < maxCols){
if(newSub[i][j+1] > a){ //both points allowed
int value = newSub[i][j+1]; //get the value I need to replace
for(int h = 0; h < maxRows; h++){
for(int k = 0; k < maxCols; k++){
if(newSub[h][k] == value){ //replace all instances of that value
newSub[h][k] = a;
}
}
}
update = 1;
}
if(newSub[i+1][j] > a){ //both points allowed
int value = newSub[i+1][j]; //get the value I need to replace
for(int h = 0; h < maxRows; h++){
for(int k = 0; k < maxCols; k++){
if(newSub[h][k] == value){
newSub[h][k] = a; //replace all instances of that value
}
}
}
update = 1;
}
}else if(i+1 > maxRows && j+1 < maxCols){
if(newSub[i][j+1] > a){ //bottom is not allowed
int value = newSub[i][j+1]; //get the value I need to replace
for(int h = 0; h < maxRows; h++){
for(int k = 0; k < maxCols; k++){
if(newSub[h][k] == value){
newSub[h][k] = a; //replace all instances of that value
}
}
}
update = 1;
}
}else if(i+1 < maxRows && j+1 > maxCols){
if(newSub[i+1][j] > a){ //right is not allowed
int value = newSub[i+1][j]; //get the value I need to replace
for(int h = 0; h < maxRows; h++){
for(int k = 0; k < maxCols; k++){
if(newSub[h][k] == value){ //replace all instances of that value
newSub[h][k] = a;
}
}
}
update = 1;
}
}
}
}
}
a++;
}
/*CONNECTING PIXELS LEFT-TOP*/
a = newObject;
while(a > 0){
int update = 0;
for(int i = maxRows-1; i > 0; i--){
for(int j = maxCols-1; j > 0 ; j--){
if(newSub[i][j] == a){
if(i-1 >= 0 && j-1 >= 0){
if(newSub[i][j-1] > a){ //both points allowed
int value = newSub[i][j-1]; //get the value I need to replace
for(int h = 0; h < maxRows; h++){
for(int k = 0; k < maxCols; k++){
if(newSub[h][k] == value){
newSub[h][k] = a;
}
}
}
update = 1;
}
if(newSub[i-1][j] > a){
int value = newSub[i-1][j]; //get the value I need to replace
for(int h = 0; h < maxRows; h++){
for(int k = 0; k < maxCols; k++){
if(newSub[h][k] == value){ //replace all instances of that value
newSub[h][k] = a;
}
}
}
update = 1;
}
}else if(i-1 >= 0 && j-1 < 0){
if(newSub[i][j-1] > a){ //left is not allowed
int value = newSub[i][j-1]; //get the value I need to replace
for(int h = 0; h < maxRows; h++){
for(int k = 0; k < maxCols; k++){ //replace all instances of that value
if(newSub[h][k] == value){
newSub[h][k] = a;
}
}
}
update = 1;
}
}else if(i-1 < 0 && j-1 >= 0){
if(newSub[i-1][j] > a){ //top is not allowed
int value = newSub[i-1][j]; //get the value I need to replace
for(int h = 0; h < maxRows; h++){
for(int k = 0; k < maxCols; k++){ //replace all instances of that value
if(newSub[h][k] == value){
newSub[h][k] = a;
}
}
}
update = 1;
}
}
}
}
}
a--;
}
for(int i = 0; i < maxRows; i++){
for(int j = 0; j < maxCols; j++){
int check = 0;
if(newSub[i][j] != 0){
for(int k = 0; k < 5; k++){
if(newSub[i][j] == arr[k]){ //check if there is an instance of the value in the given array of values
check = 1;
break;
}
}
if(check == 0){
for(int r = 0; r < 5; r++){
if(arr[r] == 0){
arr[r] = newSub[i][j]; //if new value is found add to array
break;
}
}
}
}
}
}
/*
I HAVE AN ARRAY CONTAINING ALL VALUES
**/
src.copyTo( sub, detected_edges);
sub = Scalar::all(0);
/*SET AN INTENSITY FOR CORRESPONDING VALUES*/
int intensity = 50;
a = 0;
while(a < 5){
int update = 0;
for(int i = 0; i < maxRows; i++){
for(int j = 0; j < maxCols; j++){
if(newSub[i][j] == arr[a]){
sub.at<uchar>(i,j) = intensity;
}
}
}
a++;
intensity = intensity + 50;
}
a = 250;
/*GETTING MIN-MAX COORDINATES*/
while(a >= 50){
int setter = 0;
int minRow = 0;
int minCol = 0;
int maxRow = 0;
int maxCol = 0;
for(int i = 0; i < maxRows; i++){
for(int j = 0; j < maxCols; j++){
if(sub.at<uchar>(i,j) == a){
if(setter == 0){
minRow = i;
minCol = j;
maxRow = i;
maxCol = j;
setter = 1;
}else{
if(i <= minRow){
minRow = i;
}
else{
if(i > maxRow){
maxRow = i;
}
}
if(j <= minCol){
minCol = j;
}
else{
if(j > maxCol){
maxCol = j;
}
}
}
}
}
}
/*THIS IS WHERE I MAKE MY BOUNDING BOX*/
for(int i = minRow; i < maxRow; i++){
sub.at<uchar>(i,minCol) = 255; //set up the vertical lines (left and right sides of the box)
sub.at<uchar>(i,maxCol) = 255;
}
for(int i = minCol; i < maxCol; i++){
sub.at<uchar>(minRow,i) = 255; //set up the horizontal lines (top and bottom of the box)
sub.at<uchar>(maxRow,i) = 255;
}
a = a - 50;
}
dst = Scalar::all(0);
src.copyTo( dst, detected_edges);
imshow( window_name, dst );
namedWindow("FINAL", WINDOW_NORMAL);
imshow("FINAL",sub); //final output
resizeWindow("FINAL", 300, 300);
for(int i = 0; i < detected_edges.rows; i++)
delete[] newSub[i];
delete[] newSub;
}
/**
* @function main
*/
int main( int, char** argv )
{
/// Load an image
src = imread( argv[1] );
if( src.empty() )
{ return -1; }
/// Create a matrix of the same type and size as src (for dst)
dst.create( src.size(), src.type() );
/// Convert the image to grayscale
cvtColor( src, src_gray, COLOR_BGR2GRAY ); //grayscale for one channel for easy computation
/// Create a window
namedWindow( window_name, WINDOW_NORMAL );
resizeWindow(window_name, 300,300);
/// Show the image
connectedComponent(0, 0);
/// Wait until user exit program by pressing a key
waitKey(0);
return 0;
}
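As an aside, recent OpenCV versions (3.x and later) can do the labelling and the boxing directly with connectedComponentsWithStats() and rectangle(). The sketch below is not the manual labelling above, only an alternative for comparison; it assumes a binary input in which the objects are non-zero (the code above treats the zero pixels as objects, so the image would have to be inverted first).
#include <opencv2/opencv.hpp>
using namespace cv;

// Label the connected components of a binary image and draw a box around each one.
void drawComponentBoxes(const Mat &binary, Mat &boxed)
{
    Mat labels, stats, centroids;
    int n = connectedComponentsWithStats(binary, labels, stats, centroids, 8, CV_32S);
    cvtColor(binary, boxed, COLOR_GRAY2BGR);
    for (int i = 1; i < n; i++) {                     // label 0 is the background
        Rect box(stats.at<int>(i, CC_STAT_LEFT),
                 stats.at<int>(i, CC_STAT_TOP),
                 stats.at<int>(i, CC_STAT_WIDTH),
                 stats.at<int>(i, CC_STAT_HEIGHT));
        rectangle(boxed, box, Scalar(0, 0, 255), 1);  // red boxes on a BGR copy
    }
}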

Computing gradient orientation in c++ using opencv functions

Can anyone help me out with this?
I am trying to calculate the gradient orientation using the Sobel operator in OpenCV, taking the gradient in the x and y directions. I am using the atan2 function to compute the angle in radians, which I later convert to degrees, but all the angles I am getting are between 0 and 90 degrees.
My expectation is to get angles between 0 and 360 degrees. The image I am using is grayscale. The code segment is below.
Mat PeripheralArea;
Mat grad_x, grad_y; // this is the matrix for the gradients in x and y directions
int off_set_y = 0, off_set_x = 0;
int scale = 1, num_bins = 8, bin = 0;
int delta=-1 ;
int ddepth = CV_16S;
GaussianBlur(PeripheralArea, PeripheralArea, Size(3, 3), 0, 0, BORDER_DEFAULT);
Sobel(PeripheralArea, grad_y, ddepth, 0, 1,3,scale, delta, BORDER_DEFAULT);
Sobel(PeripheralArea, grad_x, ddepth, 1, 0,3, scale, delta, BORDER_DEFAULT);
for (int row_y1 = 0, row_y2 = 0; row_y1 < grad_y.rows / 5, row_y2 < grad_x.rows / 5; row_y1++, row_y2++) {
for (int col_x1 = 0, col_x2 = 0; col_x1 < grad_y.cols / 5, col_x2 < grad_x.cols / 5; col_x1++, col_x2++) {
gradient_direction_radians = (double) atan2((double) grad_y.at<uchar>(row_y1 + off_set_y, col_x1 + off_set_x), (double) grad_x.at<uchar>(row_y2 + off_set_y, col_x2 + off_set_x));
gradient_direction_degrees = (int) (180 * gradient_direction_radians / 3.1415);
gradient_direction_degrees = gradient_direction_degrees < 0
? gradient_direction_degrees+360
: gradient_direction_degrees;
}
}
Note that the off_set_x and off_set_y variables are not part of the computation; they offset into different square blocks for which I eventually want to compute a histogram feature vector.
You have specified that the destination depth of Sobel() is CV_16S.
Yet, when you access grad_x and grad_y, you use .at<uchar>(), implying that their elements are 8-bit unsigned quantities, when in fact they are 16-bit signed. You could use .at<short>() instead, but to me it looks like there are a number of issues with your code, not the least of which is that there is an OpenCV function that does exactly what you want.
Use cv::phase(), and replace your for loops with
cv::Mat gradient_angle_degrees;
bool angleInDegrees = true;
cv::phase(grad_x, grad_y, gradient_angle_degrees, angleInDegrees);
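One caveat: cv::phase() works on floating-point arrays, so with ddepth = CV_16S the gradients need to be converted first. A minimal sketch, reusing the grad_x/grad_y computed above:
cv::Mat grad_x_f, grad_y_f, gradient_angle_degrees;
grad_x.convertTo(grad_x_f, CV_32F);   // cv::phase() expects CV_32F or CV_64F input
grad_y.convertTo(grad_y_f, CV_32F);
cv::phase(grad_x_f, grad_y_f, gradient_angle_degrees, /*angleInDegrees=*/true);
// gradient_angle_degrees now holds angles in [0, 360)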
I ran into the same need when I dove into doing some edge detection using C++.
For the orientation of the gradient I use atan2(); this standard function defines its +y and +x axes the same way we usually traverse a 2D image.
A plot to show my understanding:
///////////////////////////////
// Quadrants of image:
// 3(-dx,-dy) | 4(+dx,-dy) [-pi,0]
// ------------------------->+x
// 2(-dx,+dy) | 1(+dx,+dy) [0,pi]
// v
// +y
///////////////////////////////
// Definition of arctan2():
// -135(-dx,-dy) | -45(+dx,-dy)
// ------------------------->+x
// 135(-dx,+dy) | +45(+dx,+dy)
// v
// +y
///////////////////////////////
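A compact version of the mapping the diagram implies, normalizing atan2()'s [-180, 180] output into [0, 360) and then into a byte, using the same 255/360 scaling as the code below:
#include <cmath>
#include <cstdint>

// Map a gradient (gx, gy) to an orientation byte in [0, 255].
static uint8_t orientationByte(double gx, double gy)
{
    const double kPi = 3.14159265358979323846;
    double deg = std::atan2(gy, gx) * 180.0 / kPi;  // [-180, 180]
    if (deg < 0.0)
        deg += 360.0;                               // [0, 360)
    return static_cast<uint8_t>(deg * 255.0 / 360.0);
}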
How I compute the gradient:
bool gradient(double*& magnitude, double*& orientation, double* src, int width, int height, string file) {
if (src == NULL)
return false;
if (width <= 0 || height <= 0)
return false;
double gradient_x_correlation[3*3] = {-0.5, 0.0, 0.5,
-0.5, 0.0, 0.5,
-0.5, 0.0, 0.5};
double gradient_y_correlation[3*3] = {-0.5,-0.5,-0.5,
0.0, 0.0, 0.0,
0.5, 0.5, 0.5};
double *Gx = NULL;
double *Gy = NULL;
this->correlation(Gx, src, gradient_x_correlation, width, height, 3);
this->correlation(Gy, src, gradient_y_correlation, width, height, 3);
if (Gx == NULL || Gy == NULL)
return false;
//magnitude
magnitude = new double[width*height]; // new[] takes an element count, not a byte count
if (magnitude == NULL)
return false;
memset(magnitude, 0, sizeof(double)*width*height);
double gx = 0.0;
double gy = 0.0;
double gm = 0.0;
for (int j=0; j<height; j++) {
for (int i=0; i<width; i++) {
gx = pow(Gx[i+j*width],2);
gy = pow(Gy[i+j*width],2);
gm = sqrt(pow(Gx[i+j*width],2)+pow(Gy[i+j*width],2));
if (gm >= 255.0) {
return false;
}
magnitude[i+j*width] = gm;
}
}
//orientation
orientation = new double[width*height]; // element count, not bytes
if (orientation == NULL)
return false;
memset(orientation, 0, sizeof(double)*width*height);
double ori = 0.0;
double dtmp = 0.0;
double ori_normalized = 0.0;
for (int j=0; j<height; j++) {
for (int i=0; i<width; i++) {
gx = (Gx[i+j*width]);
gy = (Gy[i+j*width]);
ori = atan2(Gy[i+j*width], Gx[i+j*width])/PI*(180.0); //[-pi,+pi]
if (gx >= 0 && gy >= 0) { //[Qudrant 1]:[0,90] to be [0,63]
if (ori < 0) {
printf("[Err1QUA]ori:%.1f\n", ori);
return false;
}
ori_normalized = (ori)*255.0/360.0;
if (ori != 0.0 && dtmp != ori) {
printf("[Qudrant 1]orientation: %.1f to be %.1f(%d)\n", ori, ori_normalized, (uint8_t)ori_normalized);
dtmp = ori;
}
}
else if (gx >= 0 && gy < 0) { //[Qudrant 4]:[270,360) equal to [-90, 0) to be [191,255]
if (ori > 0) {
printf("[Err4QUA]orientation:%.1f\n", ori);
return false;
}
ori_normalized = (360.0+ori)*255.0/360.0;
if (ori != 0.0 && dtmp != ori) {
printf("[Qudrant 4]orientation:%.1f to be %.1f(%d)\n", ori, ori_normalized, (uint8_t)ori_normalized);
dtmp = ori;
}
}
else if (gx < 0 && gy >= 0) { //[Qudrant 2]:(90,180] to be [64,127]
if (ori < 0) {
printf("[Err2QUA]orientation:%.1f\n", ori);
return false;
}
ori_normalized = (ori)*255.0/360.0;
if (ori != 0.0 && dtmp != ori) {
printf("[Qudrant 2]orientation: %.1f to be %.1f(%d)\n", ori, ori_normalized, (uint8_t)ori_normalized);
dtmp = ori;
}
}
else if (gx < 0 && gy < 0) { //[Qudrant 3]:(180,270) equal to (-180, -90) to be [128,190]
if (ori > 0) {
printf("[Err3QUA]orientation:%.1f\n", ori);
return false;
}
ori_normalized = (360.0+ori)*255.0/360.0;
if (ori != 0.0 && dtmp != ori) {
printf("[Qudrant 3]orientation:%.1f to be %.1f(%d)\n", ori, ori_normalized, (uint8_t)ori_normalized);
dtmp = ori;
}
}
else {
printf("[EXCEPTION]orientation:%.1f\n", ori);
return false;
}
orientation[i+j*width] = ori_normalized;
}
}
return true;
}
How I compute the cross-correlation:
bool correlation(double*& dst, double* src, double* kernel, int width, int height, int window) {
if (src == NULL || kernel == NULL)
return false;
if (width <= 0 || height <= 0 || width < window || height < window )
return false;
dst = new double[width*height]; // element count, not bytes
if (dst == NULL)
return false;
memset(dst, 0, sizeof(double)*width*height);
int ii = 0;
int jj = 0;
int nn = 0;
int mm = 0;
double max = std::numeric_limits<double>::lowest(); // lowest(), not min(): min() is the smallest positive double
double min = std::numeric_limits<double>::max();
double range = std::numeric_limits<double>::max();
for (int j=0; j<height; j++) {
for (int i=0; i<width; i++) {
for (int m=0; m<window; m++) {
for (int n=0; n<window; n++) {
ii = i+(n-window/2);
jj = j+(m-window/2);
nn = n;
mm = m;
if (ii >=0 && ii<width && jj>=0 && jj<height) {
dst[i+j*width] += src[ii+jj*width]*kernel[nn+mm*window];
}
else {
dst[i+j*width] += 0;
}
}
}
if (dst[i+j*width] > max)
max = dst[i+j*width];
else if (dst[i+j*width] < min)
min = dst[i+j*width];
}
}
//normalize double matrix to be an uint8_t matrix
range = max - min;
double norm = 0.0;
printf("correlated matrix max:%.1f, min:%.1f, range:%.1f\n", max, min, range);
for (int j=0; j<height; j++) {
for (int i=0; i<width; i++) {
norm = dst[i+j*width];
norm = 255.0*norm/range;
dst[i+j*width] = norm;
}
}
return true;
}
For testing I use an image of a hollow rectangle; you can download it from my sample.
The orientation of the gradient along the hollow rectangle in my sample image moves from 0 to 360 clockwise (Quadrant 1 to 2 to 3 to 4).
Here is my printout, which traces the orientation:
[Qudrant 1]orientation: 45.0 to be 31.9(31)
[Qudrant 1]orientation: 90.0 to be 63.8(63)
[Qudrant 2]orientation: 135.0 to be 95.6(95)
[Qudrant 2]orientation: 180.0 to be 127.5(127)
[Qudrant 3]orientation:-135.0 to be 159.4(159)
[Qudrant 3]orientation:-116.6 to be 172.4(172)
[Qudrant 4]orientation:-90.0 to be 191.2(191)
[Qudrant 4]orientation:-63.4 to be 210.1(210)
[Qudrant 4]orientation:-45.0 to be 223.1(223)
You can see more source code about digital image processing on my GitHub :)

opencv remap function : How to set pixel value to (0,0,0)

Function call is:
remap( src, dst, map_x, map_y, CV_INTER_LINEAR, BORDER_CONSTANT, Scalar(0,0, 0) );
map_x and map_y are defined as below:
for( int j = 0; j < src.rows; j++ )
{
    for( int i = 0; i < src.cols; i++ )
    {
        if( i > src.cols*0.25 && i < src.cols*0.75 && j > src.rows*0.25 && j < src.rows*0.75 )
        {
            map_x.at<float>(j,i) = 2*( i - src.cols*0.25 ) + 0.5 ;
            map_y.at<float>(j,i) = 2*( j - src.rows*0.25 ) + 0.5 ;
        }
        else {
            map_x.at<float>(j,i) = 0 ;
            map_y.at<float>(j,i) = 0 ;
        }
    }
}
You can see the original and resulting image. I want to ask:
How do I set the outer part of the result image (the green part) to black (r=0, g=0, b=0)?
I think that you are mapping all pixels of the outer region to the original pixel at location (0,0). Try replacing the following
else {
    map_x.at<float>(j,i) = 0 ;
    map_y.at<float>(j,i) = 0 ;
}
by this
else {
    map_x.at<float>(j,i) = -1 ;
    map_y.at<float>(j,i) = -1 ;
}
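For context, cv::remap() with BORDER_CONSTANT fills every destination pixel whose map coordinates fall outside the source image with the given border value, so mapping the outer region to (-1,-1) together with Scalar(0,0,0) produces black there. A minimal self-contained sketch combining the mapping from the question with this fix (the input file name is an assumption):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat src = imread("input.jpg");                              // assumed input file
    Mat map_x(src.size(), CV_32FC1), map_y(src.size(), CV_32FC1), dst;

    for (int j = 0; j < src.rows; j++) {
        for (int i = 0; i < src.cols; i++) {
            if (i > src.cols*0.25 && i < src.cols*0.75 &&
                j > src.rows*0.25 && j < src.rows*0.75) {
                map_x.at<float>(j, i) = 2*(i - src.cols*0.25f) + 0.5f;  // zoom into the centre
                map_y.at<float>(j, i) = 2*(j - src.rows*0.25f) + 0.5f;
            } else {
                map_x.at<float>(j, i) = -1;   // out of range: filled with the border value
                map_y.at<float>(j, i) = -1;
            }
        }
    }
    remap(src, dst, map_x, map_y, INTER_LINEAR, BORDER_CONSTANT, Scalar(0, 0, 0));
    imwrite("remapped.jpg", dst);
    return 0;
}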
