I have the following code
tile_width = 64;
tile_height = 64;
tile_map = {
{1,1,1,1,1,1,1,1,1,1,1,1},
{1,1,1,1,1,3,1,1,1,1,1,1},
{1,1,1,1,1,3,1,1,1,1,1,1},
{1,1,1,1,1,3,1,1,1,1,1,1},
{1,1,1,1,1,3,1,1,1,1,1,1},
{1,1,1,1,1,1,2,1,1,1,1,1},
{1,1,1,1,1,1,1,1,1,1,1,1},
{1,1,1,1,1,1,2,1,1,1,1,1}
}
i=1;
j=1;
while i<table.getn(tile_map) do
while j<table.getn(tile_map[i]) do
print(tile_map[i][j]);
x = (j * tile_width / 2) + (i * tile_width / 2)
y = (i * tile_height / 2) - (j * tile_height / 2)
print(x);
print(y);
j = j+1;
end
i = i+1;
end
And it runs, but it only displays the first row's values and never goes on to the second row, third row, etc.
Here is what I am trying to do, expressed in another language:
for (i = 0; i < tile_map.size; i++):
for (j = 0; j < tile_map[i].size; j++):
draw(
tile_map[i][j],
x = (j * tile_width / 2) + (i * tile_width / 2)
y = (i * tile_height / 2) - (j * tile_height / 2)
)
Any idea what I am doing wrong?
Thanks!
Here is a cleaned up version of your code.
Note the changes:
Use local variables instead of globals.
Use # for table size instead of table.getn().
Use a numeric for loop instead of while.
No semicolons (they are optional in Lua).
If you uncomment the io.write() calls and comment out the print() calls, the map will be printed in a readable layout.
local tile_width = 64
local tile_height = 64
local tile_map = {
{1,1,1,1,1,1,1,1,1,1,1,1},
{1,1,1,1,1,3,1,1,1,1,1,1},
{1,1,1,1,1,3,1,1,1,1,1,1},
{1,1,1,1,1,3,1,1,1,1,1,1},
{1,1,1,1,1,3,1,1,1,1,1,1},
{1,1,1,1,1,1,2,1,1,1,1,1},
{1,1,1,1,1,1,1,1,1,1,1,1},
{1,1,1,1,1,1,2,1,1,1,1,1}
}
for i = 1, #tile_map do
local row = tile_map[i]
for j = 1, #row do
--io.write(row[j])
print(row[j])
local x = (j * tile_width / 2) + (i * tile_width / 2)
local y = (i * tile_height / 2) - (j * tile_height / 2)
print(x)
print(y)
end
--io.write("\n")
end
P.S. Make sure you've read the Programming in Lua book, 2nd edition. Note that the version available online is the first edition, which describes the older Lua 5.0.
You have to reset j to 1 before each run of the inner loop. Because j is global, it keeps its value between rows: after the inner loop completes for the first time, j has already been incremented to the row length (12 here), so the inner condition is immediately false on every later pass. Set j = 1 at the top of the outer loop, just before the inner loop starts.
In your other-language code this resetting is taken care of automatically, because the inner for statement re-initializes j to 0 on every outer iteration.
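For reference, a minimal version of that fix applied to your original while loops (also switching < to <=, since < skips the last row and the last column):
local i = 1
while i <= #tile_map do
    local j = 1                  -- reset j here, once per row
    while j <= #tile_map[i] do
        print(tile_map[i][j])
        j = j + 1
    end
    i = i + 1
end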
I have implemented a separable Gaussian blur. The horizontal pass was relatively easy to optimize with SIMD processing. However, I am not sure how to optimize the vertical pass.
Accessing the elements column by column is not very cache friendly, and filling a SIMD lane would mean reading many different pixels. I was thinking about transposing the image, running the horizontal pass, and then transposing the image back; however, I am not sure it would gain any improvement because of the two transpose operations.
My images are quite large (16k resolution) and the kernel size is 19, so the gain from vectorizing the vertical pass was only about 15%.
My vertical pass is as follows (it is inside a generic class templated on T, which can be uint8_t or float):
int yStart = kernelHalfSize;
int xStart = kernelHalfSize;
int yEnd = input.GetHeight() - kernelHalfSize;
int xEnd = input.GetWidth() - kernelHalfSize;
const T * inData = input.GetData().data();
V * outData = output.GetData().data();
int kn = kernelHalfSize * 2 + 1;
int kn4 = kn - kn % 4;
for (int y = yStart; y < yEnd; y++)
{
size_t yW = size_t(y) * output.GetWidth();
size_t outX = size_t(xStart) + yW;
size_t xEndSimd = xStart;
int len = xEnd - xStart;
len = len - len % 4;
xEndSimd = xStart + len;
for (int x = xStart; x < xEndSimd; x += 4)
{
size_t inYW = size_t(y) * input.GetWidth();
size_t x0 = ((x + 0) - kernelHalfSize) + inYW;
size_t x1 = x0 + 1;
size_t x2 = x0 + 2;
size_t x3 = x0 + 3;
__m128 sumDot = _mm_setzero_ps();
int i = 0;
for (; i < kn4; i += 4)
{
__m128 kx = _mm_set_ps1(kernelDataX[i + 0]);
__m128 ky = _mm_set_ps1(kernelDataX[i + 1]);
__m128 kz = _mm_set_ps1(kernelDataX[i + 2]);
__m128 kw = _mm_set_ps1(kernelDataX[i + 3]);
__m128 dx, dy, dz, dw;
if constexpr (std::is_same<T, uint8_t>::value)
{
//we need to convert uint8_t inputs to float
__m128i u8_0 = _mm_loadu_si128((const __m128i*)(inData + x0));
__m128i u8_1 = _mm_loadu_si128((const __m128i*)(inData + x1));
__m128i u8_2 = _mm_loadu_si128((const __m128i*)(inData + x2));
__m128i u8_3 = _mm_loadu_si128((const __m128i*)(inData + x3));
__m128i u32_0 = _mm_unpacklo_epi16(
_mm_unpacklo_epi8(u8_0, _mm_setzero_si128()),
_mm_setzero_si128());
__m128i u32_1 = _mm_unpacklo_epi16(
_mm_unpacklo_epi8(u8_1, _mm_setzero_si128()),
_mm_setzero_si128());
__m128i u32_2 = _mm_unpacklo_epi16(
_mm_unpacklo_epi8(u8_2, _mm_setzero_si128()),
_mm_setzero_si128());
__m128i u32_3 = _mm_unpacklo_epi16(
_mm_unpacklo_epi8(u8_3, _mm_setzero_si128()),
_mm_setzero_si128());
dx = _mm_cvtepi32_ps(u32_0);
dy = _mm_cvtepi32_ps(u32_1);
dz = _mm_cvtepi32_ps(u32_2);
dw = _mm_cvtepi32_ps(u32_3);
}
else
{
/*
//load 8 consecutive values
auto dd = _mm256_loadu_ps(inData + x0);
//extract parts by shifting and casting to 4 values float
dx = _mm256_castps256_ps128(dd);
dy = _mm256_castps256_ps128(_mm256_permutevar8x32_ps(dd, _mm256_set_epi32(0, 0, 0, 0, 4, 3, 2, 1)));
dz = _mm256_castps256_ps128(_mm256_permutevar8x32_ps(dd, _mm256_set_epi32(0, 0, 0, 0, 5, 4, 3, 2)));
dw = _mm256_castps256_ps128(_mm256_permutevar8x32_ps(dd, _mm256_set_epi32(0, 0, 0, 0, 6, 5, 4, 3)));
*/
dx = _mm_loadu_ps(inData + x0);
dy = _mm_loadu_ps(inData + x1);
dz = _mm_loadu_ps(inData + x2);
dw = _mm_loadu_ps(inData + x3);
}
//calculate 4 dots at once
//[dx, dy, dz, dw] <dot> [kx, ky, kz, kw]
auto mx = _mm_mul_ps(dx, kx); //dx * kx
auto my = _mm_fmadd_ps(dy, ky, mx); //mx + dy * ky
auto mz = _mm_fmadd_ps(dz, kz, my); //my + dz * kz
auto res = _mm_fmadd_ps(dw, kw, mz); //mz + dw * kw
sumDot = _mm_add_ps(sumDot, res);
x0 += 4;
x1 += 4;
x2 += 4;
x3 += 4;
}
for (; i < kn; i++)
{
auto v = _mm_set_ps1(kernelDataX[i]);
auto v2 = _mm_set_ps(
*(inData + x3), *(inData + x2),
*(inData + x1), *(inData + x0)
);
sumDot = _mm_add_ps(sumDot, _mm_mul_ps(v, v2));
x0++;
x1++;
x2++;
x3++;
}
sumDot = _mm_mul_ps(sumDot, _mm_set_ps1(weightX));
if constexpr (std::is_same<V, uint8_t>::value)
{
__m128i asInt = _mm_cvtps_epi32(sumDot);
asInt = _mm_packus_epi32(asInt, asInt);
asInt = _mm_packus_epi16(asInt, asInt);
uint32_t res = _mm_cvtsi128_si32(asInt);
((uint32_t *)(outData + outX))[0] = res;
outX += 4;
}
else
{
float tmpRes[4];
_mm_store_ps(tmpRes, sumDot);
outData[outX + 0] = tmpRes[0];
outData[outX + 1] = tmpRes[1];
outData[outX + 2] = tmpRes[2];
outData[outX + 3] = tmpRes[3];
outX += 4;
}
}
for (int x = xEndSimd; x < xEnd; x++)
{
int kn = kernelHalfSize * 2 + 1;
const T * v = input.GetPixelStart(x - kernelHalfSize, y);
float tmp = 0;
for (int i = 0; i < kn; i++)
{
tmp += kernelDataX[i] * v[i];
}
tmp *= weightX;
outData[outX] = ImageUtils::clamp_cast<V>(tmp);
outX++;
}
}
There’s a well-known trick for that.
For both passes, read the input sequentially and use SIMD for the arithmetic, but write the result into another buffer transposed, using scalar stores. Protip: SSE 4.1 has _mm_extract_ps for exactly this; just don't forget to cast your destination image pointer from float* to int*. Another thing about these stores: I would recommend _mm_stream_si32, because you want the cache space to be used by your input data. When you compute the second pass you'll again be reading sequential memory addresses, and the hardware prefetcher will deal with the latency.
This way both passes are identical; I usually call the same function twice, with different buffers.
The two transposes caused by your two passes cancel each other out. Here’s an HLSL version, BTW.
There’s more. If your kernel size is only 19, it fits in 3 AVX registers. I think shuffle/permute/blend instructions are still faster than even L1 cache loads, i.e. it might be better to keep the kernel loaded in registers outside the loop.
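To make the store pattern concrete, here is a minimal sketch of the transposed scalar stores (storeTransposed and the stride parameter are my names, not part of the asker's class):
#include <immintrin.h>
#include <cstddef>

// Scatter the 4 lanes of `v` into column `y`, rows x..x+3, of a transposed
// destination buffer; dstStride is the row length of that buffer in floats.
static inline void storeTransposed(float* dst, size_t dstStride,
                                   size_t x, size_t y, __m128 v)
{
    // _mm_extract_ps returns the raw 32-bit pattern of a lane as an int, and
    // _mm_stream_si32 writes it with a non-temporal store, so the output does
    // not evict the input rows we are still reading sequentially.
    int* d = reinterpret_cast<int*>(dst);
    _mm_stream_si32(d + (x + 0) * dstStride + y, _mm_extract_ps(v, 0));
    _mm_stream_si32(d + (x + 1) * dstStride + y, _mm_extract_ps(v, 1));
    _mm_stream_si32(d + (x + 2) * dstStride + y, _mm_extract_ps(v, 2));
    _mm_stream_si32(d + (x + 3) * dstStride + y, _mm_extract_ps(v, 3));
}
The intermediate image comes out transposed, so the second pass is literally the same function run over it. If another thread consumes the streamed buffer, order the stores with _mm_sfence() first.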
I have written code for the cost function, but it is giving an incorrect answer.
I have read the code many times and I cannot find the mistake.
Here is my code:
function J = computeCost(X, y, theta)
m = length(y); % number of training examples
s = 0;
h = 0;
sq = 0;
J = 0;
for i = 1:m
h = theta' * X(i, :)';
sq = (h - y(i))^2;
s = s + sq;
end
J = (1/2*m) * s;
end
Example:
computeCost( [1 2; 1 3; 1 4; 1 5], [7;6;5;4], [0.1;0.2] )
ans = 11.9450
The answer here should be 11.9450, but my code gives me this:
ans = 191.12
I have checked the matrix multiplication, and the code is calculating it correctly.
It seems you misunderstood the operator evaluation order. In fact
1/2*m ~= 1/(2*m)
because 1/2*m is evaluated left to right as (1/2)*m, so you multiplied by m instead of dividing by it (hence 191.12 = 11.9450 * m^2 with m = 4).
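So the minimal fix to your loop version is an extra pair of parentheses:
J = (1/(2*m)) * s;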
With this in mind, it seems you're computing a mean. Instead of reinventing the wheel, it is usually a good idea to use the built-in functions to do the job, which results in a much clearer (and less error-prone) implementation:
function J = computeCost(X, y, theta)
h = X * theta;
sq = (h - y).^2;
J = 1/2 * mean(sq);
end
computeCost( [1,2;1,3;1,4;1,5], [7;6;5;4], [0.1;0.2] )
% ans = 11.9450
I'm trying to apply a sharpen kernel to a raster picture. Here is my kernel:
{ 0.0f,-1.0f,0.0f,
-1.0f,5.0f,-1.0f,
0.0f,-1.0f,0.0f }
And here is my code:
struct Pixel{
GLubyte R, G, B;
float x, y;
};
. . .
for (unsigned i = 1; i < iWidth - 1; i++){
for (unsigned j = 1; j < iHeight - 1; j++){
float r = 0, g = 0, b = 0;
r += -(float)pixels[i + 1][j].R;
g += -(float)pixels[i + 1][j].G;
b += -(float)pixels[i + 1][j].B;
r += -(float)pixels[i - 1][j].R;
g += -(float)pixels[i - 1][j].G;
b += -(float)pixels[i - 1][j].B;
r += -(float)pixels[i][j + 1].R;
g += -(float)pixels[i][j + 1].G;
b += -(float)pixels[i][j + 1].B;
r += -(float)pixels[i][j - 1].R;
g += -(float)pixels[i][j - 1].G;
b += -(float)pixels[i][j - 1].B;
pixels[i][j].R = (GLubyte)((pixels[i][j].R * 5) + r);
pixels[i][j].G = (GLubyte)((pixels[i][j].G * 5) + g);
pixels[i][j].B = (GLubyte)((pixels[i][j].B * 5) + b);
}
}
But the colors get mixed up when I apply this kernel. Here is an example:
What am I doing wrong?
NOTE: I know that OpenGL can do this fast and easily, but I just wanted to experiment with this kind of mask.
EDIT: The first code had a bug:
pixels[i][j].R = (GLubyte)((pixels[i][j].R * 5) + r);
pixels[i][j].G = (GLubyte)((pixels[i][j].R/*G*/ * 5) + g);
pixels[i][j].B = (GLubyte)((pixels[i][j].R/*B*/ * 5) + b);
I fixed it, but I still have the problem.
I've changed the last three lines to this:
r = (float)((pixels[i][j].R * 5) + r);
g = (float)((pixels[i][j].G * 5) + g);
b = (float)((pixels[i][j].B * 5) + b);
if (r < 0) r = 0;
if (g < 0) g = 0;
if (b < 0) b = 0;
if (r > 255) r = 255;
if (g > 255) g = 255;
if (b > 255) b = 255;
pixels[i][j].R = r;
pixels[i][j].G = g;
pixels[i][j].B = b;
And now the output looks like this:
You have a copy-paste bug here:
pixels[i][j].R = (GLubyte)((pixels[i][j].R * 5) + r);
pixels[i][j].G = (GLubyte)((pixels[i][j].R * 5) + g);
pixels[i][j].B = (GLubyte)((pixels[i][j].R * 5) + b);
^
This should be:
pixels[i][j].R = (GLubyte)((pixels[i][j].R * 5) + r);
pixels[i][j].G = (GLubyte)((pixels[i][j].G * 5) + g);
pixels[i][j].B = (GLubyte)((pixels[i][j].B * 5) + b);
Also it looks like you may have iWidth/iHeight transposed, but it's hard to say without seeing the rest of the code. Typically though the outer loop iterates over rows, so the upper bound would be the number of rows, i.e. the image height.
Most importantly though you have a fundamental problem in that you're trying to perform a neighbourhood operation in-place. Each output pixel depends on its neighbours, but you're modifying these neighbours as you iterate through the image. You need to do this kind of operation out-of-place, i.e. have a separate output image:
out_pixels[i][j].R = r;
out_pixels[i][j].G = g;
out_pixels[i][j].B = b;
so that the input image does not get modified. (Note also that you'll want to copy the edge pixels over from the input image to the output image.)
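Putting it together, a minimal sketch of the fixed loop might look like this, assuming pixels is indexed [row][column] as discussed above, out_pixels is a second buffer of the same dimensions, and clampByte is a hypothetical helper (GLubyte comes from the GL headers already in use):
static GLubyte clampByte(float v) {
    return (GLubyte)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
}

for (unsigned i = 1; i < iHeight - 1; i++) {       // rows
    for (unsigned j = 1; j < iWidth - 1; j++) {    // columns
        // 5 * center minus the four edge-adjacent neighbours, per channel
        float r = 5.0f * pixels[i][j].R - pixels[i + 1][j].R - pixels[i - 1][j].R
                - pixels[i][j + 1].R - pixels[i][j - 1].R;
        float g = 5.0f * pixels[i][j].G - pixels[i + 1][j].G - pixels[i - 1][j].G
                - pixels[i][j + 1].G - pixels[i][j - 1].G;
        float b = 5.0f * pixels[i][j].B - pixels[i + 1][j].B - pixels[i - 1][j].B
                - pixels[i][j + 1].B - pixels[i][j - 1].B;
        out_pixels[i][j].R = clampByte(r);  // the input image is never modified
        out_pixels[i][j].G = clampByte(g);
        out_pixels[i][j].B = clampByte(b);
    }
}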
I'm applying an NV12 video transformation that shuffles the pixels of the video. On an ARM device such as the Google Nexus 7 (2013), performance is pretty bad: 30 fps for a 1024x512 area with the following C code:
/* Pre-processing, done only once at the beginning of the video */
//Temporary tables for the destination
for (j = 0; j < height; j++)
for (i = 0; i < width; i++) {
toY[i][j] = j * width + i;
toUV[i][j] = j / 2 * width + ((int)(i / 2)) * 2;
}
//Temporary tables for the source
for (j = 0; j < height; j++)
for (i = 0; i < width; i++) {
fromY[i][j] = funcY(i, j) * width + funcX(i, j);
fromUV[i][j] = funcY(i, j) / 2 * width + ((int)(funcX(i, j) / 2)) * 2;
}
/* Processing done at each frame */
for (j = 0; j < height; j++)
for (i = 0; i < width; i++) {
destY[ toY[i][j] ] = srcY[ fromY[i][j] ];
if ((i % 2 == 0) && (j % 2 == 0)) {
destUV[ toUV[i][j] ] = srcUV[ fromUV[i][j] ];
destUV[ toUV[i][j] + 1 ] = srcUV[ fromUV[i][j] + 1 ];
}
}
Though it's computed only once, funcX/funcY is a pretty complex transformation, so it's not very easy to optimize that part.
Is there still a way to speed up the double loop computed at each frame, given the "from" tables?
You create FOUR lookup tables 8 times as large as the original image?
You put an unnecessary if statement in the inner most loop?
What about swapping i and j?
Honestly, your question should be tagged with [c] instead of arm, neon, or image-processing to start with.
Since you didn't show what funcY and funcX do, the best answer I can give is the following. (Performance should skyrocket, because the fix is something really, really fundamental.)
//Temporary tables for the source
pTemp = fromYUV;
for (j = 0; j < height; j+=2)
{
for (i = 0; i < width; i+=2) {
*pTemp++ = funcY(i, j) * width + funcX(i, j);
*pTemp++ = funcY(i+1, j) * width + funcX(i+1, j);
*pTemp++ = funcY(i, j) / 2 * width + ((int)(funcX(i, j) / 2)) * 2;
}
for (i = 0; i < width; i+=2) {
*pTemp++ = funcY(i, j+1) * width + funcX(i, j+1);
*pTemp++ = funcY(i+1, j+1) * width + funcX(i+1, j+1);
}
}
/* Processing done at each frame */
pTemp = fromYUV;
pTempY = destY;
pTempUV = destUV;
for (j = 0; j < height; j+=2)
{
for (i = 0; i < width; i+=2) {
*pTempY++ = srcY[*pTemp++];
*pTempY++ = srcY[*pTemp++];
*pTempUV++ = srcUV[*pTemp++];
}
for (i = 0; i < width; i+=2) {
*pTempY++ = srcY[*pTemp++];
*pTempY++ = srcY[*pTemp++];
}
}
You should avoid (when possible):
access on multiple memory area
random memory access
if statements within loops
The worst crime you committed is the order of i and j (which you don't need to begin with).
If you access a pixel at coordinates x and y, it's pixel = image[y][x], NOT image[x][y].
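As a sketch, the same rule for a flat row-major buffer (the getPixel helper is hypothetical):
#include <stdint.h>
#include <stddef.h>

/* Row-major layout: the pixels of one row are contiguous in memory, so the
 * row index y must be the outer/slow one for sequential access. */
static inline uint8_t getPixel(const uint8_t *image, size_t width,
                               size_t x, size_t y)
{
    return image[y * width + x]; /* same cell as image2d[y][x], never [x][y] */
}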
I've got the following code to do a bilinear interpolation from a matrix of 2D vectors. Each cell has the x and y values of a vector, and the function receives the k and l indices of the bottom-left nearest position in the matrix:
// p[1] returns the interpolated values
// fieldLinePointsVerts the raw data array of fieldNumHorizontalPoints x fieldNumVerticalPoints
// only fieldNumHorizontalPoints matters to determine the index to access the raw data
// k and l are the horizontal and vertical indices of the point just below p[0] in the raw data
void interpolate( vertex2d* p, vertex2d* fieldLinePointsVerts, int fieldNumHorizontalPoints, int k, int l ) {
int index = (l * fieldNumHorizontalPoints + k) * 2;
vertex2d p11;
p11.x = fieldLinePointsVerts[index].x;
p11.y = fieldLinePointsVerts[index].y;
vertex2d q11;
q11.x = fieldLinePointsVerts[index+1].x;
q11.y = fieldLinePointsVerts[index+1].y;
index = (l * fieldNumHorizontalPoints + k + 1) * 2;
vertex2d q21;
q21.x = fieldLinePointsVerts[index+1].x;
q21.y = fieldLinePointsVerts[index+1].y;
index = ( (l + 1) * fieldNumHorizontalPoints + k) * 2;
vertex2d q12;
q12.x = fieldLinePointsVerts[index+1].x;
q12.y = fieldLinePointsVerts[index+1].y;
index = ( (l + 1) * fieldNumHorizontalPoints + k + 1 ) * 2;
vertex2d p22;
p22.x = fieldLinePointsVerts[index].x;
p22.y = fieldLinePointsVerts[index].y;
vertex2d q22;
q22.x = fieldLinePointsVerts[index+1].x;
q22.y = fieldLinePointsVerts[index+1].y;
float fx = 1.0 / (p22.x - p11.x);
float fx1 = (p22.x - p[0].x) * fx;
float fx2 = (p[0].x - p11.x) * fx;
vertex2d r1;
r1.x = fx1 * q11.x + fx2 * q21.x;
r1.y = fx1 * q11.y + fx2 * q21.y;
vertex2d r2;
r2.x = fx1 * q12.x + fx2 * q22.x;
r2.y = fx1 * q12.y + fx2 * q22.y;
float fy = 1.0 / (p22.y - p11.y);
float fy1 = (p22.y - p[0].y) * fy;
float fy2 = (p[0].y - p11.y) * fy;
p[1].x = fy1 * r1.x + fy2 * r2.x;
p[1].y = fy1 * r1.y + fy2 * r2.y;
}
Currently this code needs to run every single frame on old iOS devices, say devices with ARMv6 processors.
I've taken the numeric sub-indices from Wikipedia's equations: http://en.wikipedia.org/wiki/Bilinear_interpolation
I'd appreciate any comments on optimizing for performance, even plain asm code.
This code should not be causing your slowdown if it's only run once per frame. However, if it's run multiple times per frame, it easily could be.
I'd run your app with a profiler to see where the true performance problem lies.
There is some room for optimization here: a) certain index calculations could be factored out and reused in subsequent calculations; b) you could take a pointer into the fieldLinePointsVerts array once and reuse it, instead of indexing the array twice per index...
but in general those things won't help a great deal unless this function is being called many, many times per frame, in which case every little thing helps.
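As an illustration of a) and b), the six corner reads at the top of the function could collapse to two hoisted row pointers (the row0/row1 names are mine; the offsets follow from the code above):
// Hoist the two row offsets once and read through pointers, instead of
// recomputing (l * fieldNumHorizontalPoints + ...) * 2 for every corner.
const vertex2d* row0 = fieldLinePointsVerts
                     + (l * fieldNumHorizontalPoints + k) * 2;
const vertex2d* row1 = fieldLinePointsVerts
                     + ((l + 1) * fieldNumHorizontalPoints + k) * 2;
vertex2d p11 = row0[0], q11 = row0[1], q21 = row0[3]; // row0[2] is p21, unused
vertex2d q12 = row1[1], p22 = row1[2], q22 = row1[3];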