Attempting to find a formula for tessellating rectangles onto a board, where middle square can't be used - spatial

I'm working on a spatial stacking problem... at the moment I'm trying to solve in 2D but will eventually have to make this work in 3D.
I divide up space into n x n squares around a central block, therefore n is always odd... and I'm trying to find the number of locations at which a rectangle of any dimension up to n x n (e.g. 1x1, 1x2, 2x2, etc.) can be placed, where the middle square is not available.
So far I've got this..
total number of rectangles = ((n^2 + n)^2 ) / 4
..also the total number of squares = (n (n+1) (2n+1)) / 6
However I'm stuck in working out a formula to find how many of those locations are impossible as the middle square would be occupied.
So for example:
[] [] []
[] [x] []
[] [] []
3 x 3 board... with 8 possible locations for storing stuff as mid square is in use.
I can use 1x1 shapes, 1x2 shapes, 2x1, 3x1, etc...
Formula gives me the number of rectangles as: (9+3)^2 / 4 = 144/4 = 36 stacking locations
However, as the middle square is unoccupiable, these cannot all be realized.
By hand I can see that these are impossible options:
1x1 shapes = 1 impossible (the mid square itself)
2x1 and 1x2 shapes = 4 impossible (anything which uses the mid square)
3x1 and 1x3 = 2 impossible
2x2 = 4 impossible
etc
Total impossible combinations = 16
Therefore the solution I'm after is 36-16 = 20 possible rectangular stacking locations on a 3x3 board.
I've coded this in C# to solve it through trial and error, but I'm really after a formula as I want to solve for massive values of n, and also to eventually make this 3D.
Can anyone point me to any formulas for these kind of spatial / tessellation problem?
Also any idea on how to take the total rectangle formula into 3D very welcome!
Thanks!

OK, so I've got an answer now. The total number of impossible cases is k^4, where k = (n+1)/2 and n is the (odd) grid size:
2^4 = 16 (grid is 3 by 3)
3^4 = 81 (grid is 5 by 5)
4^4 = 256 (grid is 7 by 7)
etc.
This works because a rectangle covers the centre column exactly when its left edge lies in the first k columns and its right edge in the last k (k x k = k^2 horizontal choices), and likewise k^2 vertical choices, giving k^4 in total.
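For what it's worth, here is a quick brute-force check of both counts (a Python sketch of the same trial-and-error approach as the C# version mentioned above; the function name is just for illustration):

```python
# Brute force: enumerate every axis-aligned rectangle on an n x n board and
# count those that cover the centre square (only odd n makes sense here).
def count_placements(n):
    total = 0
    blocked = 0
    c = (n - 1) // 2  # 0-indexed centre square
    for w in range(1, n + 1):
        for h in range(1, n + 1):
            for x in range(n - w + 1):
                for y in range(n - h + 1):
                    total += 1
                    if x <= c < x + w and y <= c < y + h:
                        blocked += 1
    return total, blocked

for n in (3, 5, 7):
    total, blocked = count_placements(n)
    k = (n + 1) // 2
    assert total == (n * n + n) ** 2 // 4   # (n^2 + n)^2 / 4
    assert blocked == k ** 4                # k^4 with k = (n + 1) / 2
```

For the 3D question: applying the same edge-choosing argument independently per axis, the counts should be ((n^2 + n)/2)^3 total boxes and k^6 blocked, though I have only checked the 2D case above.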

Mean Filter at first position (0,0)

Actually, I'm in the middle of work on adaptive thresholding using the mean. I use a 3x3 matrix, calculate the mean value over that matrix, and place the result at M(1,1), the middle position of the matrix. I'm confused about how to perform the process at the first position f(0,0).
Here is a little illustration. Let's assume I'm using a 3x3 matrix (M) on an image (f), so the first position f(0,0) = M(1,1) = 4. Then M(0,0), M(0,1), M(0,2), M(1,0) and M(2,0) have no value:
-1 | -1 | -1
-1 |  4 |  3
-1 |  2 |  1
Which one is the correct process?
a) ( 4 + 3 + 2 + 1 ) / 4
b) ( 4 + 3 + 2 + 1) / 9
I ask this because I followed a tutorial on adaptive mean thresholding and it shows a different result. So I need to make sure the process is correct. Thanks.
There is no "correct" way to solve this issue. There are many different solutions used in practice, they all have some downsides:
Averaging over only the known values (i.e. your suggested (4+3+2+1)/4). By averaging over fewer pixels, one obtains a result that is more sensitive to noise (i.e. the "amount of noise" left in the image after filtering is larger near the borders). Also, a bias is introduced, since the averaging happens over values to one side only.
Assuming 0 outside the image domain (i.e. your suggested (4+3+2+1)/9). Since we don't know what is outside the image, assuming 0 is as good as anything else, no? Well, no it is not. This leads to a filter result that has darker values around the edges.
Assuming a periodic image. Here one takes values from the opposite side of the image for the unknown values. This effectively happens when computing the convolution through the Fourier domain. But usually images are not periodic, with strong differences in intensities (or colors) at opposite sides of the image, leading to "bleeding" of the colors from one side into the other.
Extrapolation. Extending image data by extrapolation is a risky business. This basically comes down to predicting what would have been in those pixels had we imaged them. The safest bet is 0-order extrapolation (i.e. replicating the boundary pixel), though higher-order polynomial fits are possible too. The downside is that the pixels at the image edge become more important than other pixels; they will be weighted more heavily in the averaging.
Mirroring. Here the image is reflected at the boundary (imagine placing a mirror at the edge of the image). The value at index -1 is taken to be the value at index 1; at index -2 that at index 2, etc. This has similar downsides as the extrapolation method.
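To make the options concrete, here is a pure-Python sketch of each strategy, applied to the 3x3 mean at the corner of a tiny 2x2 image built from the values 4, 3, 2, 1 in the question (the function and mode names are just for illustration):

```python
img = [[4, 3],
       [2, 1]]
H, W = len(img), len(img[0])

def sample(i, j, mode):
    """Fetch pixel (i, j), resolving out-of-range indices per `mode`."""
    if 0 <= i < H and 0 <= j < W:
        return img[i][j]
    if mode == 'zero':            # assume 0 outside the image
        return 0
    if mode == 'periodic':        # take values from the opposite side
        return img[i % H][j % W]
    if mode == 'replicate':       # 0-order extrapolation of the boundary
        return img[min(max(i, 0), H - 1)][min(max(j, 0), W - 1)]
    if mode == 'mirror':          # index -1 -> 1, index -2 -> 2, ...
        def refl(a, n):
            while not 0 <= a < n:
                a = -a if a < 0 else 2 * (n - 1) - a
            return a
        return img[refl(i, H)][refl(j, W)]

def mean3x3(i, j, mode):
    window = [(a, b) for a in (i - 1, i, i + 1) for b in (j - 1, j, j + 1)]
    if mode == 'known':           # average only over pixels that exist
        vals = [img[a][b] for a, b in window if 0 <= a < H and 0 <= b < W]
        return sum(vals) / len(vals)
    return sum(sample(a, b, mode) for a, b in window) / 9

print(mean3x3(0, 0, 'known'))     # option (a): (4+3+2+1)/4 = 2.5
print(mean3x3(0, 0, 'zero'))      # option (b): (4+3+2+1)/9 ~ 1.11
print(mean3x3(0, 0, 'replicate'))
print(mean3x3(0, 0, 'mirror'))
```

Running all four modes on the same corner pixel makes the differences between the strategies (and the bias each one introduces) easy to see.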

Why does fundamental matrix have 7 degrees of freedom?

There are 9 parameters in the fundamental matrix to relate the pixel co-ordinates of left and right images but only 7 degrees of freedom (DOF).
The reasoning for this on several pages that I've searched says :
Homogenous equations means we lose a degree of freedom
The determinant of F = 0, therefore we lose another degree of freedom.
I don't understand why those 2 reasons mean we lose 2 DOF - can someone explain it?
We initially have 9 DOF because the fundamental matrix is composed of 9 parameters, which implies that we need 9 corresponding points to compute the fundamental matrix (F). But because of the following two reasons, we only need 7 corresponding points.
Reason 1
We lose 1 DOF because we are using homogeneous coordinates. This basically is a way to represent nD points in vector form by adding an extra dimension, i.e. a 2D point (0,2) can be represented as [0,2,1], in general [x,y,1]. There are useful properties when using homogeneous coordinates with 2D/3D transformations, but I'm going to assume you know them.
Now given the expression p and p' representing pixel coordinates:
p'=[u',v',1] and p=[u,v,1]
the fundamental matrix:
F = [f1,f2,f3]
[f4,f5,f6]
[f7,f8,f9]
and fundamental matrix equation:
(transposed p')Fp = 0
when we multiply this expression out in algebra form, we get the following:
uu'f1 + vu'f2 + u'f3 + uv'f4 + vv'f5 + v'f6 + uf7 + vf8 + f9 = 0.
In a homogeneous system of linear equation form Af=0 (basically the factorization of the above formula), we get two components A and f.
A:
[uu',vu',u', uv',vv',v',u,v,1]
f (f is essentially the fundamental matrix in vector form):
[f1,f2,f3,f4,f5,f6,f7,f8,f9]
Now if we look at the components of vector A, we have 8 unknowns, but one known value 1 because of homogeneous coordinates, and therefore we only need 8 equations now.
Reason 2
det F = 0.
A determinant is a value that can be obtained from a square matrix.
I'm not entirely sure about the mathematical details of this property but I can still infer the basic idea, and, hopefully, you can as well.
Basically given some matrix A
A = [a,b,c]
[d,e,f]
[g,h,i]
The determinant can be computed using this formula:
det A = aei+bfg+cdh-ceg-bdi-afh
If we look at the determinant using the fundamental matrix, the algebra would look something like this:
F = [f1,f2,f3]
[f4,f5,f6]
[f7,f8,f9]
det F = (f1*f5*f9)+(f2*f6*f7)+(f3*f4*f8)-(f3*f5*f7)-(f2*f4*f9)-(f1*f6*f8)
Now we know the determinant of the fundamental matrix is zero:
det F = (f1*f5*f9)+(f2*f6*f7)+(f3*f4*f8)-(f3*f5*f7)-(f2*f4*f9)-(f1*f6*f8) = 0
So, once the scale is fixed (Reason 1), if we work out 7 of the remaining 8 parameters of the fundamental matrix, we can work out the last parameter using the above determinant equation.
Therefore the fundamental matrix has 7DOF.
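As a numerical sanity check of both reasons, here is a small Python sketch; the rank-2 construction and the sample points are made up for illustration:

```python
# A rank-2 matrix built as the sum of two outer products, so det F = 0.
a, b = (1, 2, 3), (1, 0, 2)
c, d = (0, 1, 1), (2, 1, 0)
F = [[a[i] * b[j] + c[i] * d[j] for j in range(3)] for i in range(3)]
f1, f2, f3, f4, f5, f6, f7, f8, f9 = [F[i][j] for i in range(3) for j in range(3)]

# Reason 1: scaling F scales p'^T F p by the same factor, so F and kF
# encode the same constraint p'^T F p = 0: only the 8 ratios matter.
def epi(F, pp, p):
    return sum(pp[i] * F[i][j] * p[j] for i in range(3) for j in range(3))

p, pp = [0.3, 0.7, 1.0], [0.2, 0.5, 1.0]
F3 = [[3 * v for v in row] for row in F]
assert abs(epi(F3, pp, p) - 3 * epi(F, pp, p)) < 1e-12

# Reason 2: det F = 0 pins down the last parameter from the other eight
# (solve the determinant expansion for f9).
f9_from_det = (f3*f5*f7 + f1*f6*f8 - f2*f6*f7 - f3*f4*f8) / (f1*f5 - f2*f4)
assert f9_from_det == f9
```

The division assumes the top-left 2x2 minor (f1*f5 - f2*f4) is nonzero, which holds for this example; in general one solves the determinant constraint for whichever parameter has a nonzero cofactor.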
The reasons why F has only 7 degrees of freedom are
F is a 3x3 homogeneous matrix. Homogeneous means there is a scale ambiguity in the matrix, so the scale doesn't matter (as shown in #Curator Corpus 's example). This drops one degree of freedom.
F is a matrix with rank 2. It is not a full rank matrix, so it is singular and its determinant is zero (Proof here). The reason why F is a matrix with rank 2 is that it is mapping a 2D plane (image1) to all the lines (in image 2) that pass through the epipole (of image 2).
Hope it helps.
As for the highest-voted answer by nbro, I think it can be interpreted this way: from reason two, matrix F has rank 2, so its determinant being zero acts as a constraint on the f variables. So we only need 7 points to determine the remaining variables (f1-f8), given that constraint. That gives 8 equations for 8 variables, leaving only one solution. So there are 7 DOF.

Pathfinding On a huge Map

I am in need of some type of pathfinding, so I searched the Internet and found some algorithms.
It seems like they all need some type of map also.
This map can be represented by:
Grid
Nodes
As my map is currently quite huge (20,000 x 20,000 px), a grid map of 1 x 1 px tiles would lead to 400,000,000 unique points on the grid, and also the best quality, I would think. But that's way too many points for me, so I could either
increase the tile size (e.g. 50 x 50 px = 160,000 unique points)
switch to Nodes
As the 160,000 unique points are also too many for me, or rather not the quality I would like to have (some units are bigger than 50 px), I think Nodes are the better way to go.
I found this on the Internet 2D Nodal Pathfinding without a Grid and did some calculations:
local radius = 75 -- this varies for some units, so I stick to the biggest value
local DistanceBetweenNodes = radius * 2 -- to pass tiles diagonally
local grids = 166 -- how many columns/rows
local MapSize = grids * DistanceBetweenNodes -- around 25,000
local walkable = 0 -- used later
local Map = {}

function even(a)
    return ((a / radius) % 2 == 0)
end

for x = 0, MapSize, radius do
    Map[x] = {}
    for y = 0, MapSize, radius do
        if (even(x) and even(y)) or (not even(x) and not even(y)) then
            Map[x][y] = walkable
        end
    end
end
Without removing the unpassable nodes, and with a unit size of 75, I would end up with ~55,445 unique nodes. The node count will shrink drastically if I remove the unpassable nodes, but as my units have different sizes I need to set the radius to the smallest unit I've got. I don't know if this will work with bigger units later on.
So I searched the Internet again and found this: Nav Meshes.
This will reduce the Nodes to only "a few" in my eyes and would work with any unit size.
UPDATE 28.09
I have created a nodal map of all passable areas now, ~30,000 nodes.
Here is a totally random example of a map and the points I have:
Example Map
This calls for some optimization: reduce the number of nodes you have.
Almost any pathfinding algorithm can take a node list that is not a grid. You will need to adjust for distance between nodes, though.
You could also increase your grid size so that it does not have as many squares. You will need to compensate for small, narrow paths, in some sort of way, though.
At the end of the day, I would suggest you reduce your node count by simply placing nodes along an arranged path, where you know it is possible to get from point A to B, specifying the neighbors. You will need to manually make a node path for every level, though. Take my test as an example (there are no walls, just the node path):
For your provided map, you would end up with a path node similar to this:
Which has around 50 nodes, compared to the hundreds a grid can have.
This can work at any scale, since your node count is dramatically cut compared to the grid approach. You will need to make some adjustments, like calculating the distance between nodes, now that they are not in a grid. For this test I am using Dijkstra's algorithm, in Corona SDK (Lua), but you can try any other, like A* (A-star), which is used in many games and can be faster.
I found a Unity example that takes a similar approach using nodes, and you can see that the approach works in 3D as well:
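To make the "node list instead of grid" idea concrete, here is a minimal Dijkstra sketch (in Python rather than the Lua/Corona original, but the structure is the same); the node positions and neighbour lists are made-up examples:

```python
import heapq
import math

# Hand-placed nodes (pixel positions) with explicit neighbour lists, as in
# the manual node-path approach described above.
nodes = {'A': (0, 0), 'B': (100, 0), 'C': (100, 80), 'D': (250, 80)}
neighbours = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}

def dist(a, b):
    (x1, y1), (x2, y2) = nodes[a], nodes[b]
    return math.hypot(x2 - x1, y2 - y1)  # edge weight = Euclidean distance

def dijkstra(start, goal):
    best = {start: 0.0}   # best known distance to each node
    prev = {}             # back-pointers for path reconstruction
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > best.get(u, math.inf):
            continue  # stale queue entry
        for v in neighbours[u]:
            nd = d + dist(u, v)
            if nd < best.get(v, math.inf):
                best[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1], best[goal]

path, length = dijkstra('A', 'D')  # ['A', 'B', 'C', 'D'], 330.0
```

Swapping in A* only means adding a straight-line-distance-to-goal heuristic to the priority; the node-graph representation stays the same.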

straight-line distance heuristic for n-puzzle

Can someone please explain to me what a straight-line distance heuristic would look like when solving the n-puzzle? How would you apply the straight-line distance, say, for the 8-puzzle? Here is an example puzzle:
7 3 4
5 _ 6
8 2 1
Let's recall basic geometry: it is well known that the shortest path between two points is a straight line.
So, considering an 8-puzzle, the straight-line distance between two tiles is the number of tiles crossed to get from tile A to tile B, whether along a diagonal, horizontal or vertical line.
Considering the example in your question, let's call d(a,b) the straight line distance between tile a and b:
d(1,_) = 1
d(1,2) = 1
d(1,3) = 2 = d(1,6) + d(6,3) = d(1,_ ) + d(_,3)
d(1,4) = 2
and so on.
We can now generalize that definition to the n-puzzle. Keep in mind that 3 kinds of steps are allowed: diagonal, horizontal and vertical. Such a heuristic is admissible, since it never overestimates the number of moves needed.
Note: Remember the straight line definition between cities.
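As a sketch, the heuristic for a whole 3x3 state is the sum, over all tiles, of the straight-line (Chebyshev) distance from each tile's current cell to its goal cell; the goal layout below is an assumption:

```python
def chebyshev_heuristic(state, goal):
    """state/goal: 9-tuples in row-major order, 0 for the blank."""
    pos = {v: (i // 3, i % 3) for i, v in enumerate(state)}
    total = 0
    for i, v in enumerate(goal):
        if v == 0:
            continue  # the blank is not a tile, so it isn't counted
        r1, c1 = pos[v]
        r2, c2 = i // 3, i % 3
        # straight-line steps, allowing diagonal/horizontal/vertical moves
        total += max(abs(r1 - r2), abs(c1 - c2))
    return total

# The state from the question (0 marks the blank) and an assumed goal:
start = (7, 3, 4,
         5, 0, 6,
         8, 2, 1)
goal = (1, 2, 3,
        4, 5, 6,
        7, 8, 0)
```

Since diagonal steps count as one, this value is never larger than the usual Manhattan-distance heuristic, which is what makes it admissible (if weaker).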

Trying to filter image using two color lookup tables

I work with a design team and they want me to implement a photo filter in our iOS app. They've provided me with two files that look pretty much like this image, each with a different variety of colors.
I've been informed that apparently I should be able to open them in Photoshop, compare the differences in the colors, and then apply those differences to filter images in the app. Honestly, I've got no idea where to even begin. I've opened them in Photoshop but am not sure what to look for. I know how to apply filters with CIFilter, I just don't know what kind of filtering to apply.
Anyone out there at least able to point me in the right direction?
What they've given you is a 3D Look-up table or LUT. You don't need to involve Photoshop at all in this process other than to examine the LUTs.
It looks like your LUTs are 25x25x40. Each square is 25x25 and there are 8x5=40 squares in your picture. (Actually there are 8x4 = 32 squares and partial squares above and below that. That's kind of weird.) Each 25x25 square has a constant blue value, and the red value varies with the x coordinate and the green varies with the y coordinate.
To use the LUT, you take your (r, g, b) value for each pixel in the input image and look up the new value in the LUT.
So if you have an (R, G, B) value of, say, (128, 52, 215) (assuming 8-bits per channel, so values between 0-255), then you'd need to scale those to be in the ranges of your LUT.
To do that, calculate:
Red: 128 / 255 = x / 25
Green: 52 / 255 = x / 25
Blue: 215 / 255 = x / 40
So you get:
Red = 13
Green = 5
Blue = 34
Now you use those as coordinates in the LUT. You need the square where blue is 34/40ths. That's tricky in your setup since it's not a single long row. Normally, you'd just multiply the tile width by the index of the tile you want to get the start pixel, but we can't do that here. I recommend reformatting your LUT to work that way. If you did, you could just multiply 34 * 25 * sizeof(RGB). Once there, you can treat that tile as an image in itself and look up (x,y) = (13,5). (Which you can do by computing 5 * bytesPerRow + 13 * bytesPerPixel.)
You now have the color from the LUT.
There is a CoreImage filter called CIColorCube that allows you to do something similar, but I've not used it myself.
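Here is a sketch of that lookup, assuming the LUT has already been reformatted into a single horizontal strip of 25x25 tiles (one tile per blue level) stored as a flat 8-bit RGB buffer; the `lut_color` function and the buffer layout are illustrative, not CIColorCube's API:

```python
def lut_color(lut, r, g, b, tile=25, levels=40, bpp=3):
    """Look up an 8-bit (r, g, b) in a strip-layout LUT buffer."""
    x = r * (tile - 1) // 255       # red varies with x inside a tile
    y = g * (tile - 1) // 255       # green varies with y
    z = b * (levels - 1) // 255     # blue selects which tile
    bytes_per_row = tile * levels * bpp   # all tiles sit side by side
    offset = y * bytes_per_row + (z * tile + x) * bpp
    return lut[offset], lut[offset + 1], lut[offset + 2]
```

A handy self-test: an identity LUT (where cell (x, y, z) stores its own scaled colour) passed through this function should return each input colour unchanged.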
