Intel IPP Erode Access Violation Exception

I made a simple program using the ErodeBorder function in IPP, and now I want to use ippiErode_1u_C1R_L instead. I am having trouble with ippiErode_1u_C1R_L: I keep getting an access violation exception. The first snippet shows working code; the second shows my attempt to use ippiErode_1u_C1R_L.
Working Code:
int width = 1600;
int height = 594;
int binSize = 118800;
int binStep = ceil(width / 8);
IppiSize roi = { width, height };
Ipp8u* workBin = (Ipp8u*)ippsMalloc_8u(binSize);
Ipp8u* defectMask = (Ipp8u*)ippsMalloc_8u(binSize);
Ipp8u* origBin = GetMask(); //Same size as workBin
Ipp8u mask[9] = { 1, 1, 1,
                  1, 0, 1,
                  1, 1, 1 };
IppiSize maskSize = { 3, 3 };
int pSpecSize = 0, pBufferSize = 0;
ippiMorphologyBorderGetSize_1u_C1R(roi, maskSize, &pSpecSize, &pBufferSize);
Ipp8u* pBuffer = (Ipp8u*)ippsMalloc_8u(pBufferSize);
IppiMorphState* pSpec = (IppiMorphState*)ippsMalloc_8u(pSpecSize);
ippiMorphologyBorderInit_1u_C1R(roi, mask, maskSize, pSpec, pBuffer);
ippiErodeBorder_1u_C1R(origBin, binStep, 0, workBin, binStep, 0, roi, ippBorderRepl, 0, pSpec, pBuffer);
ippiErodeBorder_1u_C1R(workBin, binStep, 0, defectMask, binStep, 0, roi, ippBorderRepl, 0, pSpec, pBuffer);
ippiErodeBorder_1u_C1R(defectMask, binStep, 0, workBin, binStep, 0, roi, ippBorderRepl, 0, pSpec, pBuffer);
ippiErodeBorder_1u_C1R(workBin, binStep, 0, defectMask, binStep, 0, roi, ippBorderRepl, 0, pSpec, pBuffer);
This version throws an exception when calling ippiErode_1u_C1R_L():
int width = 1600;
int height = 594;
int binSize = 118800;
int binStep = ceil(width / 8);
IppiSizeL roi_L = { width, height };
Ipp8u* workBin = (Ipp8u*)ippsMalloc_8u(binSize);
Ipp8u* defectMask = (Ipp8u*)ippsMalloc_8u(binSize);
Ipp8u* origBin = GetMask(); //Same size as workBin
Ipp8u mask[9] = { 1, 1, 1,
                  1, 0, 1,
                  1, 1, 1 };
IppiSizeL maskSize = { 3, 3 };
IppSizeL pSpecSize = 0, pBufferSize = 0;
ippiErodeGetBufferSize_L(roi_L, maskSize, ipp1u, 1, &pBufferSize);
ippiErodeGetSpecSize_L(roi_L, maskSize, &pSpecSize);
Ipp8u* pBuffer = (Ipp8u*)ippsMalloc_8u_L(pBufferSize);
IppiMorphStateL* pSpec = (IppiMorphStateL*)ippsMalloc_8u_L(pSpecSize);
IppStatus initSizeStat = ippiErodeInit_L(roi_L, mask, maskSize, pSpec);
ippiErode_1u_C1R_L(origBin, binStep, 0, workBin, binStep, 0, roi_L, ippBorderRepl, 0, pSpec, pBuffer);
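One thing that may help narrow this down (this is only a debugging sketch of mine, not a known fix): every IPP routine returns an IppStatus, and the _L-flavoured GetSize/Init calls will usually report a bad size, step, or spec before the erode call ever faults. Wrapping the calls from the failing snippet like this surfaces those statuses:
#include <cstdio>
#include <ipp.h> // ippGetStatusString is part of ippcore

#define IPP_CHECK(call)                                                \
    do {                                                               \
        IppStatus st_ = (call);                                        \
        if (st_ != ippStsNoErr)                                        \
            std::printf("%s -> %s\n", #call, ippGetStatusString(st_)); \
    } while (0)

// Same calls as in the failing snippet, now with their statuses printed.
IPP_CHECK(ippiErodeGetBufferSize_L(roi_L, maskSize, ipp1u, 1, &pBufferSize));
IPP_CHECK(ippiErodeGetSpecSize_L(roi_L, maskSize, &pSpecSize));
IPP_CHECK(ippiErodeInit_L(roi_L, mask, maskSize, pSpec));
IPP_CHECK(ippiErode_1u_C1R_L(origBin, binStep, 0, workBin, binStep, 0, roi_L, ippBorderRepl, 0, pSpec, pBuffer));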

Related

OpenCV, how can we normalize a Mat min to max and max to min?

I want to normalize a Mat so that the min value goes to 255 and the max goes to 0 (i.e., normalize the Mat to the 0~255 range, but inverted).
For example, given an array like [0.02, 0.002, 0.0002], after normalization I want a result like [3, 26, 255], but with NORM_MINMAX I get [255, 26, 3].
I did not find any function that performs the inverse of NORM_MINMAX.
Code used:
cv::Mat mat(10, 10, CV_64F);
mat.setTo(0);
mat.row(0) = 0.02;
mat.row(1) = 0.002;
mat.row(2) = 0.0002;
cv::normalize(mat, mat, 255, 0, cv::NORM_MINMAX);
mat.convertTo(mat, CV_8UC1);
std::cout << mat << std::endl;
Result is:
[255, 255, 255, 255, 255, 255, 255, 255, 255, 255;
26, 26, 26, 26, 26, 26, 26, 26, 26, 26;
3, 3, 3, 3, 3, 3, 3, 3, 3, 3;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
But I want the inverse of the above result.
Update: when I subtract the mat from 255, like this:
cv::subtract(255, mat, mat, mat); // the last mat acts as mask
std::cout << mat << std::endl;
Result is:
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
229, 229, 229, 229, 229, 229, 229, 229, 229, 229;
252, 252, 252, 252, 252, 252, 252, 252, 252, 252;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
I finally found a way to calculate it; the steps are below.
Using the inverse-proportion formula, we can easily compute the inverse of NORM_MINMAX:
x = a*b/c
where a is the minimum value among the mat elements, b = 255 (the maximum of the target range), and c is the element we want to transform.
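As a quick check with the values from the example above (a = 0.0002, b = 255): 0.0002 * 255 / 0.02 = 2.55, 0.0002 * 255 / 0.002 = 25.5, and 0.0002 * 255 / 0.0002 = 255, which round to 3, 26, and 255 after conversion to 8-bit, exactly the inverted order I wanted.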
cv::Mat mat(10, 10, CV_64F);
mat.setTo(0);
mat.row(0) = 0.02;
mat.row(1) = 0.002;
mat.row(2) = 0.0002;
std::cout << mat << std::endl;
// create a mask
cv::Mat mask(mat.size(), CV_8U);
mask.setTo(0);
mask.row(0) = 255;
mask.row(1) = 255;
mask.row(2) = 255;
// find the min value
double min;
cv::minMaxLoc(mat, &min, nullptr, nullptr, nullptr, mask);
std::cout << "min=" << min << std::endl;
// unfortunately the OpenCV divide operation does not support a mask, so we need a few extra steps
cv::Mat result, maskNeg;
cv::divide(min*255, mat, result); // this is the magic line
cv::bitwise_not(mask, maskNeg);
mat.copyTo(result, maskNeg);
std::cout << result << std::endl;
// convert to 8bit
result.convertTo(result, CV_8UC1);
std::cout << "the final result:" << std::endl;
std::cout << result << std::endl;
And the outputs:
original mat
[0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02;
0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002;
0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
min=0.0002
the calculated min-max
[2.55, 2.55, 2.55, 2.55, 2.55, 2.55, 2.55, 2.55, 2.55, 2.55;
25.5, 25.5, 25.5, 25.5, 25.5, 25.5, 25.5, 25.5, 25.5, 25.5;
255, 255, 255, 255, 255, 255, 255, 255, 255, 255;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
the final result:
[ 3, 3, 3, 3, 3, 3, 3, 3, 3, 3;
26, 26, 26, 26, 26, 26, 26, 26, 26, 26;
255, 255, 255, 255, 255, 255, 255, 255, 255, 255;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Yes, this is what I want.

How to understand the math behind the CTM(current transformation matrix)?

When I wrote some test code that modifies the CTM, I found that the results could not be explained by "The Math Behind the Matrices" in the Quartz 2D Programming Guide. The test code is as follows:
// {a, b, c, d, tx, ty}
NSLog(#"UIKit CTM:%#\n", NSStringFromCGAffineTransform(CGContextGetCTM(ctx)));
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, 0, CGRectGetHeight(rect));// 1
NSLog(#"Quartz part 1 CTM:%#\n", NSStringFromCGAffineTransform(CGContextGetCTM(ctx)));
CGContextScaleCTM(ctx, 1, -1);// 2
NSLog(#"Quartz CTM:%#\n", NSStringFromCGAffineTransform(CGContextGetCTM(ctx)));
CGContextTranslateCTM(ctx, 0, CGRectGetHeight(rect)); // 3
NSLog(#"UIKit part 1 CTM:%#\n", NSStringFromCGAffineTransform(CGContextGetCTM(ctx)));
CGContextScaleCTM(ctx, 1, -1);// 4
NSLog(#"UIKit part 2 CTM:%#\n", NSStringFromCGAffineTransform(CGContextGetCTM(ctx)));
CGContextRestoreGState(ctx);
The output:
2017-09-29 09:51:27.166 QuartzDemo[53287:31120880] UIKit CTM:[2, 0, 0, -2, 0, 1136]
2017-09-29 09:51:27.167 QuartzDemo[53287:31120880] Quartz part 1 CTM:[2, 0, 0, -2, 0, 0]
2017-09-29 09:51:27.167 QuartzDemo[53287:31120880] Quartz CTM:[2, 0, -0, 2, 0, 0]
2017-09-29 09:51:27.167 QuartzDemo[53287:31120880] UIKit part 1 CTM:[2, 0, -0, 2, 0, 1136]
2017-09-29 09:51:27.167 QuartzDemo[53287:31120880] UIKit part 2 CTM:[2, 0, 0, -2, 0, 1136]
First, let's focus on how the UIKit CTM transforms into the Quartz CTM. I write each matrix as an array in row-major order. On line 1:
[2, 0, 0, 0, -2, 0, 0, 1136, 1] x [1, 0, 0, 0, 1, 0, tx1, ty1, 1] = [2, 0, 0, 0, -2, 0, 0, 0, 1]
then
[1, 0, 0, 0, 1, 0, tx1, ty1, 1] = [1, 0, 0, 0, 1, 0, 0, -1136, 1]
so in this case CGContextTranslateCTM(ctx, 0, CGRectGetHeight(rect)); is equivalent to [1, 0, 0, 0, 1, 0, 0, -1136, 1]. Question 1: where does the minus sign come from?
On line 2:
[2, 0, 0, 0, -2, 0, 0, 0, 1] x [sx1, 0, 0, 0, sy1, 0, 0, 0, 1] = [2, 0, 0, -0, 2, 0, 0, 0, 1]
then
[sx1, 0, 0, 0, sy1, 0, 0, 0, 1] = [1, 0, 0, 0, -1, 0, 0, 0, 1]
so CGContextScaleCTM(ctx, 1, -1); is equivalent to [1, 0, 0, 0, -1, 0, 0, 0, 1]; this result matches the theoretical value.
On line 3:
[2, 0, 0, -0, 2, 0, 0, 0, 1] x [1, 0, 0, 0, 1, 0, tx2, ty2, 1] = [2, 0, 0, -0, 2, 0, 0, 1136, 1]
then
[1, 0, 0, 0, 1, 0, tx2, ty2, 1] = [1, 0, 0, 0, 1, 0, 0, 1136, 1]
This result also matches the theoretical value.
On line 4:
[2, 0, 0, 0, 2, 0, 0, 1136, 1] x [sx2, 0, 0, 0, sy2, 0, 0, 0, 1] = [2*sx2, 0, 0, 0, 2*sy2, 0, 0, 1136*sy2, 1]
then
[2*sx2, 0, 0, 0, 2*sy2, 0, 0, 1136*sy2, 1] = [2, 0, 0, 0, -2, 0, 0, 1136, 1]
2*sx2 = 2 => sx2 = 1; but
2*sy2 = -2 (1)
1136*sy2 = 1136 (2)
Equation (1) gives sy2 = -1, but equation (2) gives sy2 = 1. Question 2: why does this happen? How can this case be explained?
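For what it is worth, here is a small standalone check I wrote (the multiplication order is my assumption, not something stated in the Quartz guide quoted above): if each CGContextTranslateCTM/CGContextScaleCTM call is modelled as CTM_new = T x CTM_old, i.e. the new transform is premultiplied in the row-vector convention, then every logged matrix is reproduced from the arguments actually passed (a height of 568 points, inferred from the 2x scale and ty = 1136 in the log), and neither the extra minus sign of Question 1 nor the contradiction of Question 2 appears.
#include <cstdio>

struct M3 { double m[3][3]; };

// Row-major 3x3 product a*b (row-vector convention: point' = point * CTM).
static M3 mul(const M3& a, const M3& b) {
    M3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// Print in CGAffineTransform order [a, b, c, d, tx, ty].
static void show(const char* tag, const M3& c) {
    std::printf("%s: [%g, %g, %g, %g, %g, %g]\n", tag,
                c.m[0][0], c.m[0][1], c.m[1][0], c.m[1][1], c.m[2][0], c.m[2][1]);
}

int main() {
    const double h = 568;                              // assumed CGRectGetHeight(rect)
    M3 ctm = {{{2, 0, 0}, {0, -2, 0}, {0, 1136, 1}}};  // initial UIKit CTM from the log
    M3 t   = {{{1, 0, 0}, {0,  1, 0}, {0, h, 1}}};     // CGContextTranslateCTM(ctx, 0, h)
    M3 s   = {{{1, 0, 0}, {0, -1, 0}, {0, 0, 1}}};     // CGContextScaleCTM(ctx, 1, -1)

    ctm = mul(t, ctm); show("after translate 1", ctm); // [2, 0, 0, -2, 0, 0]
    ctm = mul(s, ctm); show("after scale 1    ", ctm); // [2, 0, -0, 2, 0, 0]
    ctm = mul(t, ctm); show("after translate 2", ctm); // [2, 0, -0, 2, 0, 1136]
    ctm = mul(s, ctm); show("after scale 2    ", ctm); // [2, 0, 0, -2, 0, 1136]
    return 0;
}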

Displaying a grid map with Corona SDK - Where can I get the content / screen offsets?

I'm displaying a grid map using the Corona SDK, but my map doesn't start in the top-left corner of the screen (my config uses zoomEven scaling).
-- you can ignore this line:
local graphics = require( "utilities.graphics" )

local M = {}

function count( T )
    local count = 0
    for _ in pairs( T ) do count = count + 1 end
    return count
end

local function displayMap( group, mapName )
    local map = require( "maps." .. mapName )
    local max_row_num = count( map.tiles )
    local row
    local max_tile_num_in_a_row
    local tile
    for r = 1, max_row_num do
        row = map.tiles[ r ]
        max_tile_num_in_a_row = count( row )
        for t = 1, max_tile_num_in_a_row do
            local tile = map.tiles[ r ][ t ]
            local tileSize = map.tileSize
            local j = r - 1
            local i = t - 1
            local x = i * tileSize
            local y = j * tileSize
            -- this function uses display.newImageRect:
            graphics.displayImage( group, "tiles", tile, x, y, tileSize, tileSize, false, false )
        end
    end
end

M.displayMap = displayMap
return M
My map:
local M = {}

local tiles = {
    { 0, 0, 0, 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0, 0, 0, 0 }
}

local tileSize = 16

M.tiles = tiles
M.tileSize = tileSize

return M
The function that does the drawing:
local function fitImage( displayObject, fitWidth, fitHeight, enlarge )
    --
    -- first determine which edge is out of bounds
    --
    local scaleFactor = fitHeight / displayObject.height
    local newWidth = displayObject.width * scaleFactor
    if newWidth > fitWidth then
        scaleFactor = fitWidth / displayObject.width
    end
    if not enlarge and scaleFactor > 1 then
        return
    end
    displayObject:scale( scaleFactor, scaleFactor )
end

local function displayImage( sceneGroup, subfolders, imageName, x, y, width, height, centered, enlarge )
    local img
    if subfolders == nil then
        img = display.newImageRect( "assets/" .. imageName .. ".png", width, height )
    else
        img = display.newImageRect( "assets/" .. subfolders .. "/" .. imageName .. ".png", width, height )
    end
    if centered then
        img.x = display.contentCenterX
        img.y = display.contentCenterY
    else
        img.x = x
        img.y = y
    end
    if enlarge then
        fitImage( img, sceneGroup.width, sceneGroup.height, true )
    else
        fitImage( img, sceneGroup.width, sceneGroup.height, false )
    end
    sceneGroup.scene:insert(img)
    return img
end
I tried changing the anchors, but that does not work. Is there a way to get the offsets between the content area and the visible part of the screen?
Note: adding display.screenOriginX and display.screenOriginY to x and y pushes the map slightly to the right, but does not resolve the problem.

Kalman filter 3D implementation

I want to implement a Kalman filter for an object moving in 3D (X, Y, Z coordinates) in OpenCV.
I tried to understand the OpenCV documentation, but it is not very helpful here and quite sparse.
The syntax for the initialization is:
KalmanFilter::KalmanFilter( int dynamParams, int measureParams, int controlParams = 0, int type = CV_32F )
In my case, is dynamParams = 9 and measureParams=3?
And what is the transitionMatrix in my case?
In that case the Transition Matrix A looks like:
A = [1, 0, 0, v, 0, 0, a, 0, 0;
0, 1, 0, 0, v, 0, 0, a, 0;
0, 0, 1, 0, 0, v, 0, 0, a;
0, 0, 0, 1, 0, 0, v, 0, 0;
0, 0, 0, 0, 1, 0, 0, v, 0;
0, 0, 0, 0, 0, 1, 0, 0, v;
0, 0, 0, 0, 0, 0, 1, 0, 0;
0, 0, 0, 0, 0, 0, 0, 1, 0;
0, 0, 0, 0, 0, 0, 0, 0, 1]
With
v = dt
a = 0.5*dt^2
See http://campar.in.tum.de/Chair/KalmanFilter
I found out that for the 3D case, position, velocity, and acceleration are often used as the state. That means that for the OpenCV implementation, dynamParams = 9 and measureParams = 3 is correct.
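Putting that together, here is a minimal setup sketch (my own illustration of the matrix above, not code from the original post); the noise covariances are placeholder values that would need tuning for a real sensor:
#include <opencv2/video/tracking.hpp>

// State = [x y z vx vy vz ax ay az], measurement = [x y z]:
// dynamParams = 9, measureParams = 3, constant-acceleration model.
cv::KalmanFilter makeKalman3D(float dt)
{
    cv::KalmanFilter kf(9, 3, 0, CV_32F);
    const float v = dt;             // v = dt
    const float a = 0.5f * dt * dt; // a = 0.5*dt^2

    // Transition matrix A exactly as written above.
    cv::setIdentity(kf.transitionMatrix);
    for (int i = 0; i < 3; ++i) {
        kf.transitionMatrix.at<float>(i, i + 3) = v;     // position += dt * velocity
        kf.transitionMatrix.at<float>(i, i + 6) = a;     // position += 0.5*dt^2 * acceleration
        kf.transitionMatrix.at<float>(i + 3, i + 6) = v; // velocity += dt * acceleration
    }

    // Only position is measured, so H = [I3 | 0 | 0].
    cv::setIdentity(kf.measurementMatrix);

    // Placeholder noise levels (assumptions, tune for the real data).
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-4));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-2));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1));
    return kf;
}
Each frame you would then call kf.predict() and kf.correct(measurement) with a 3x1 CV_32F measurement vector.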

KNN find nearest error

I'm doing KNN classification of static gestures and I get this error.
ERROR: Unhandled exception at 0x01213aa2 in NUIGHR.exe: 0xC0000005:
Access violation reading location 0x00000000.
CvMat* GetFeatures(CvSeq* contour, CvSeq* hull, double boundingRectArea){
    CvMoments moments;
    CvHuMoments humoments;
    cvMoments(contour, &moments, 0);
    cvGetHuMoments(&moments, &humoments);
    int cCont;
    double cArea, cPerimeter, cDiameter, cExtent, cCompactness, cEccentricity, cCircularity;
    cCont = contour->total;
    cArea = fabs(cvContourArea(contour));
    cPerimeter = cvContourPerimeter(contour);
    cDiameter = sqrt( 4 * cArea / CV_PI);
    cExtent = cArea / (boundingRectArea * boundingRectArea);
    cCompactness = (4 * cArea * CV_PI) / cPerimeter;
    cEccentricity = pow( (moments.m20 - moments.m02), 2) - (4 * pow(moments.m11, 2)) / ( pow(moments.m20 + moments.m02, 2) );
    cCircularity = pow(cPerimeter, 2) / cArea;
    cvmSet( featureVector, 0, 0, boundingRectArea);
    cvmSet( featureVector, 0, 1, cCont);
    cvmSet( featureVector, 0, 2, cArea);
    cvmSet( featureVector, 0, 3, cPerimeter);
    cvmSet( featureVector, 0, 4, cDiameter);
    cvmSet( featureVector, 0, 5, cExtent);
    cvmSet( featureVector, 0, 6, cCompactness);
    cvmSet( featureVector, 0, 7, cEccentricity);
    cvmSet( featureVector, 0, 8, cCircularity);
    cvmSet( featureVector, 0, 9, humoments.hu1);
    cvmSet( featureVector, 0, 10, humoments.hu2);
    cvmSet( featureVector, 0, 11, humoments.hu3);
    cvmSet( featureVector, 0, 12, humoments.hu4);
    cvmSet( featureVector, 0, 13, humoments.hu5);
    cvmSet( featureVector, 0, 14, humoments.hu6);
    cvmSet( featureVector, 0, 15, humoments.hu7);
    return featureVector;
}
int main(){
    ...
    const int K = 10;
    CvKNearest *knn = NULL;
    float resultNode = 0;
    CvMat* featVector = cvCreateMat(1, NUMBER_OF_FEATURES, CV_32FC1 );
    CvMat* nearest = cvCreateMat(1, K, CV_32FC1);
    ...
    resultNode = knn->find_nearest(&featVector, K, 0, 0, nearest, 0);
}
I think I need to convert CvMat* to CvMat. How do I do it?
You cannot pass 0 as the 3rd and 4th arguments to the find_nearest function, not if you pass something as the 5th argument. OpenCV tries to populate results and neighborResponses (see the docs), but cannot read/write through a NULL pointer.
featureVector is a NULL pointer.
Sorry guys... I'm a beginner :(
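For reference, a minimal sketch of the missing allocation pointed out above (my own illustration, reusing only the calls that already appear in the question): the feature matrix has to be created before cvmSet() writes into it.
#include <opencv2/core/core_c.h> // legacy C API: cvCreateMat, cvmSet

#define NUMBER_OF_FEATURES 16    // GetFeatures above fills columns 0..15

CvMat* GetFeatures(CvSeq* contour, CvSeq* hull, double boundingRectArea){
    // Allocate the row vector first; without this, featureVector is an
    // uninitialised/NULL pointer and every cvmSet() is an access violation.
    CvMat* featureVector = cvCreateMat(1, NUMBER_OF_FEATURES, CV_32FC1);
    cvmSet(featureVector, 0, 0, boundingRectArea);
    // ... fill the remaining columns exactly as in the original function ...
    return featureVector;
}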
