I'm trying to create a custom pattern fill for Highcharts.
It's a horizontal dashed line with alternating starting points from one row to the next (the first row starts at 0,0, the second at 3,10, and so on).
I edited the Highcharts JSFiddle example, replacing the custom pattern with the following (here you can find my "final" version):
color: {
    pattern: {
        path: {
            d: 'M 0 0 H 8 M 14 0 H 22 M 3 10 H 19',
            strokeWidth: 0.5
        },
        width: 22,
        height: 20
    }
}
The problem is that the two rows of lines have different widths.
I can't find any parameter in the documentation to fix this.
I don't know if the problem is in my pattern definition or if it's a Highcharts bug.
Any thoughts?
The path as-is moves first to 0,0, then to 14,0, and finally to 3,10:
d: 'M 0 0 H 8 M 14 0 H 22 M 3 10 H 19'
You can change those to 0,1, then 14,1, then 3,11, and the lines are the same width:
d: 'M 0 1 H 8 M 14 1 H 22 M 3 11 H 19'
The lines starting at y = 0 are centred on the pattern boundary, meaning that half of the line gets cut off, so just moving them all down by 1 ensures that the whole line is visible.
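For reference, a minimal sketch of the same pattern definition from the question, with only the y coordinates of the path adjusted:

color: {
    pattern: {
        path: {
            // same dashes as before, shifted down by 1 so that no line
            // sits on the pattern boundary
            d: 'M 0 1 H 8 M 14 1 H 22 M 3 11 H 19',
            strokeWidth: 0.5
        },
        width: 22,
        height: 20
    }
}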
Updated Fiddle
Say you have 3 integers:
13105
705016
13
I'm wondering if you could combine these into one integer in any way, so that you can still get back to the original 3 integers.
var startingSet = [ 13105, 705016, 13 ]
var combined = combineIntoOneInteger(startingSet)
// 15158958589285958925895292589 perhaps, I have no idea.
var originalIntegers = deconstructInteger(combined, 3)
// [ 13105, 705016, 13 ]
function combineIntoOneInteger(integers) {
    // some sort of hashing-like function...
}

function deconstructInteger(integer, arraySize) {
    // perhaps pass it some other parameters
    // like how many to deconstruct to, or other params.
}
It doesn't technically need to be an "integer"; it's just a string using only the integer characters, though perhaps I might want to use the hex characters instead. But I ask in terms of integers because underneath I do have integers of a bounded size that will be used to construct the combined object.
Some other notes....
The combined value should be unique, so no matter what values you combine, you will always get a different result. That is, there are absolutely no conflicts. Or if that's not possible, perhaps an explanation why and a potential workaround.
The mathematical "set" containing all possible outputs can be composed of different numbers of components. That is to say, you might have the output/combined set containing [ 100, 200, 300, 400 ] while the input set is these 4 arrays: [ [ 1, 2, 3 ], [ 5 ], [ 91010, 132 ], [ 500, 600, 700 ] ]. That is, the input arrays can be of wildly different lengths and contain wildly differently sized integers.
One way to accomplish this more generically is to just use a "separator" character, which makes it super easy. It would look like 13105:705016:13. But this is cheating; I want it to only use the characters in the integer set (or perhaps the hex set, or some other arbitrary set, but for this case just the integer set or hex).
Another idea for a potential way to accomplish this is to somehow hide a separator in there by doing some hashing or permutation jiu-jitsu, so that [ 13105, 705016, 13 ] becomes some integer-looking thing like 95918155193915183, where 155 and 5 are separator-like interpolator values based on the preceding input, or some other trick. A simpler approach would be to say "anything following three zeroes (000), as in 410001414, means it's a new integer", so basically 000 is a separator. But that specifically is ugly and brittle. Maybe it could get more tricky and still work, like "if the value is odd and followed by a multiple of 3 of itself, then it's a separator" sort of thing. But I can see that also having brittle edge cases.
But basically, given a set of n integers (or strings of integer characters), how do you convert it into a single integer (or a single integer-charactered string), and then convert it back into the original set of n integers?
Sure, there are lots of ways to do this.
To start with, it's only necessary to have a reversible function which combines two values into one. (For it to be reversible, there must be another function which takes the output value and recreates the two input values.)
Let's call the function which combines two values combine and the reverse function separate. Then we have:
separate(combine(a, b)) == [a, b]
for any values a and b. That means that combine(a, b) == combine(c, d) can only be true if both a == c and b == d; in other words, every pair of inputs produces a different output.
Encoding arbitrary vectors
Once we have that function, we can encode arbitrary-length input vectors. The simplest case is when we know in advance what the length of the vector is. For example, we could define:
combine3 = (a, b, c) => combine(combine(a, b), c)
combine4 = (a, b, c, d) => combine(combine(combine(a, b), c), d)
and so on. To reverse that computation, we only have to repeatedly call separate the correct number of times, each time keeping the second returned value. For example, if we previously had computed:
m = combine4(a, b, c, d)
we could get the four input values back as follows:
[c3, d] = separate(m)
[c2, c] = separate(c3)
[a, b] = separate(c2)
But your question asks for a way to combine an arbitrary number of values. To do that, we just need to do one final combine, which mixes in the number of values. That lets us get the original vector back out: first, we call separate to get the value count back out, and then we call separate enough times to extract each successive input value.
combine_n = v => combine(v.reduce(combine), v.length)
function separate_n(m) {
    let [r, n] = separate(m)
    let a = Array(n)
    for (let i = n - 1; i > 0; --i) [r, a[i]] = separate(r);
    a[0] = r;
    return a;
}
Note that the above two functions do not work on the empty vector, which should code to 0. Adding the correct checks for this case is left as an exercise. Also note the warning towards the bottom of this answer, about integer overflow.
A simple combine function: diagonalization
With that done, let's look at how to implement combine. There are actually many solutions, but one pretty simple one is to use the diagonalization function:
diag(a, b) = (a + b)(a + b + 1) / 2 + a
This basically assigns positions in the infinite square by tracing successive diagonals:
           <-- b -->

      0   1   3   6  10  15  21 ...
  ^   2   4   7  11  16  22 ...
  |   5   8  12  17  23 ...
  a   9  13  18  24 ...
  |  14  19  25 ...
  v  20  26 ...
     27 ...
(In an earlier version of this answer, I had reversed a and b, but this version seems to have slightly more intuitive output values.)
Note that the top row, where a == 0, is exactly the triangular numbers, which is not surprising because the already enumerated positions are the top left triangle of the square.
To reverse the transformation, we start by solving the equation which defines the triangular numbers, m = s(s + 1)/2, which is the same as
0 = s² + s - 2m
whose solution can be found using the standard quadratic formula, resulting in:
s = floor((-1 + sqrt(1 + 8 * m)) / 2)
(s here is the original a+b; that is, the index of the diagonal.)
I should explain the call to floor which snuck in there. s will only be precisely an integer on the top row of the square, where a is 0. But, of course, a will usually not be 0, and m will usually be a little more than the triangular number we're looking for, so when we solve for s, we'll get some fractional value. Floor just discards the fractional part, so the result is the diagonal index.
Now we just have to recover a and b, which is straight-forward:
a = m - combine(0, s)
b = s - a
So we now have the definitions of combine and separate:
let combine = (a, b) => (a + b) * (a + b + 1) / 2 + a
function separate(m) {
    let s = Math.floor((-1 + Math.sqrt(1 + 8 * m)) / 2);
    let a = m - combine(0, s);
    let b = s - a;
    return [a, b];
}
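As a quick sanity check (with a small vector, since large inputs run into the precision warning further down), round-tripping through combine_n and separate_n gives the original values back:

let encoded = combine_n([3, 1, 4]);
console.log(encoded);              // 14531
console.log(separate_n(encoded));  // [ 3, 1, 4 ]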
One cool feature of this particular encoding is that every non-negative integer corresponds to a distinct vector. Many other encoding schemes do not have this property; the possible return values of combine_n are a subset of the set of non-negative integers.
Example encodings
For reference, here are the first 30 encoded values, and the vectors they represent:
> for (let i = 1; i <= 30; ++i) console.log(i, separate_n(i));
1 [ 0 ]
2 [ 1 ]
3 [ 0, 0 ]
4 [ 1 ]
5 [ 2 ]
6 [ 0, 0, 0 ]
7 [ 0, 1 ]
8 [ 2 ]
9 [ 3 ]
10 [ 0, 0, 0, 0 ]
11 [ 0, 0, 1 ]
12 [ 1, 0 ]
13 [ 3 ]
14 [ 4 ]
15 [ 0, 0, 0, 0, 0 ]
16 [ 0, 0, 0, 1 ]
17 [ 0, 1, 0 ]
18 [ 0, 2 ]
19 [ 4 ]
20 [ 5 ]
21 [ 0, 0, 0, 0, 0, 0 ]
22 [ 0, 0, 0, 0, 1 ]
23 [ 0, 0, 1, 0 ]
24 [ 0, 0, 2 ]
25 [ 1, 1 ]
26 [ 5 ]
27 [ 6 ]
28 [ 0, 0, 0, 0, 0, 0, 0 ]
29 [ 0, 0, 0, 0, 0, 1 ]
30 [ 0, 0, 0, 1, 0 ]
Warning!
Observe that all of the unencoded values are pretty small. The encoded value is similar in size to the concatenation of all the input values, so it grows pretty rapidly; you have to be careful not to exceed JavaScript's limit on exact integer computation. Once the encoded value exceeds this limit (2^53), it will no longer be possible to reverse the encoding. If your input vectors are long and/or the encoded values are large, you'll need to find some kind of bignum support in order to do precise integer computations.
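One way around that limit, sketched here only as an illustration, is to redo the same pairing with BigInt; the only wrinkle is that Math.sqrt does not accept BigInt values, so an integer square root has to be hand-rolled:

// Same diagonal pairing, but with BigInt so encoded values beyond 2^53 stay exact.
const combineBig = (a, b) => (a + b) * (a + b + 1n) / 2n + a;

// floor(sqrt(n)) for a non-negative BigInt, via Newton's method.
function isqrt(n) {
    if (n < 2n) return n;
    let x = n, y = (x + 1n) / 2n;
    while (y < x) { x = y; y = (x + n / x) / 2n; }
    return x;
}

function separateBig(m) {
    const s = (isqrt(8n * m + 1n) - 1n) / 2n;  // index of the diagonal
    const a = m - combineBig(0n, s);
    const b = s - a;
    return [a, b];
}

console.log(separateBig(combineBig(13105n, 705016n)));  // [ 13105n, 705016n ]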
Alternative combine functions
Another possible implementation of combine is:
let combine = (a, b) => 2**a * 3**b
In fact, using powers of primes, we could dispense with the combine_n sequence, and just produce the combination directly:
combine(a, b, c, d, e, ...) = 2^a 3^b 5^c 7^d 11^e ...
(That assumes that the encoded values are strictly positive; if they could be 0, we'd have no way of knowing how long the sequence was because the encoded value does not distinguish between a vector and the same vector with a 0 appended. But that's not a big issue, because if we needed to deal with 0s, we would just add one to all used exponents:
combine(a, b, c, d, e, ...) = 2^(a+1) 3^(b+1) 5^(c+1) 7^(d+1) 11^(e+1) ...)
That is certainly correct, and it's very elegant in a theoretical sense. It's the solution you will find in theoretical CS textbooks because it is much easier to prove uniqueness and reversibility. However, in the real world it is really not practical: reversing the combination depends on finding the prime factors of the encoded value, and the encoded values are truly enormous, well out of the range of easily representable numbers.
Another possibility is precisely the one you mention in the question: simply put a separator between successive values. One simple way to do this is to rewrite the values to encode in base 9 (or base 15) and then increment all the digit values, so that the digit 0 is not present in any encoded value. Then we can put 0s between the encoded values and read the result in base 10 (or base 16).
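As a rough sketch of that idea (my own illustration, not code from the question): write each value in base 9 with every digit shifted up by one, join the pieces with a literal 0, and keep the result as a digit string to stay clear of the 2^53 limit mentioned in the warning above.

// Encode: base-9 digits shifted to 1-9, so the digit 0 is free to act as a separator.
function encodeWithSeparators(values) {
    return values
        .map(v => [...v.toString(9)].map(d => Number(d) + 1).join(''))
        .join('0');
}

// Decode: split on the 0 separators, shift the digits back down, read each piece as base 9.
function decodeWithSeparators(encoded) {
    return encoded
        .split('0')
        .map(part => parseInt([...part].map(d => Number(d) - 1).join(''), 9));
}

console.log(encodeWithSeparators([13105, 705016, 13]));  // "2998202395192025"
console.log(decodeWithSeparators("2998202395192025"));   // [ 13105, 705016, 13 ]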
Neither of these solutions has the property that every non-negative integer is the encoding of some vector. (The second one almost has that property, and it's a useful exercise to figure out which integers are not possible encodings, and then fix the encoding algorithm to avoid that problem.)
I'm trying to print out information in the form of a table where the spacing between columns never changes, kind of like setw from C++.
You can use string.format(). The format specifiers are the same as in ISO C's sprintf() and printf(). For a quick reference you may use e.g. this website.
print(string.format("%10d%10d%10d", 114, 523, 15224))
will result in:
       114       523     15224
Basically you can go with (for integers):
function printTable(t, length)
    for _, row in pairs(t) do
        -- build a format string with one fixed-width integer field per column
        local format = ""
        for i = 1, #row do format = format .. "%" .. length .. "d" end
        print(string.format(format, table.unpack(row)))
    end
end
It is not the most efficient way but it will do the work:
> S = {{432, 324, 5325, 4356}, {4325, 5643, 223, 543}, {234, 1, 23, 656}}
> printTable(S, 8)
     432     324    5325    4356
    4325    5643     223     543
     234       1      23     656
I was having a little trouble trying to use AStar + Phaser. I debugged it a bit and discovered a little bug: the X and Y of the astarNode property are wrong. I'm still trying to fix it, but maybe you guys can help me find the problem faster.
Code:
preload: function() {
    this.game.load.tilemap('map', 'assets/tilemap.json', null, Phaser.Tilemap.TILED_JSON);
    this.game.load.image('RPGPackSheet', 'assets/sprites/RPGPackSheet.png');
},

create: function() {
    this.map = this.game.add.tilemap('map');
    this.map.addTilesetImage('RPGPackSheet');
    this.layer = this.map.createLayer('LayerName');

    this.astar = this.game.plugins.add(Phaser.Plugin.AStar);
    this.astar.setAStarMap(this.map, 'LayerName', 'RPGPackSheet');

    console.log(this.map.layers[0].data[4][6].properties.astarNode);
},
tilemap.json
The output on the console should be:
f: 0,
g: 0,
h: 0,
walkable: false,
x: 4, // equals to the second index of layers[0].data
y: 6 // equals to the first index of layers[0].data
But is giving me:
f: 0,
g: 0,
h: 0,
walkable: false,
x: 24,
y: 13
UPDATE: I found out something more. My tilemap.json uses only 2 tiles (42 and 52). So when setAStarMap() is called, it updates the X and Y of every astarNode with the current x and y of the for loop (to understand this better, check updateMap() of the AStar plugin). In the end, every astarNode that uses tile 42 will have x set to 24 and y set to 13 (the coordinates of the last astarNode using tile 42), and every astarNode that uses tile 52 will have x set to 13 and y set to 12 (again, the coordinates of the last astarNode using tile 52). I just can't figure out why this is happening...
From what I know, the world size in tiles of your map is supposed to be square, and from what I see your world size in tiles is 25x14. So you can add blank tiles to fill up your world map in order to get it to a size of 25x25.
Referring to the original problem: Optimizing hand-evaluation algorithm for Poker-Monte-Carlo-Simulation
I have a list of 5 to 7 cards and want to store their value in a hashtable, which should be an array of 32-bit integers, directly accessed using the hash function's value as the index.
Given the large number of possible combinations in a 52-card deck, I don't want to waste too much memory.
Numbers:
7-card combinations: 133,784,560
6-card combinations: 20,358,520
5-card combinations: 2,598,960
Total: 156,742,040 possible combinations
Storing 157 million 32-bit integer values costs about 580 MB. So I would like to avoid increasing this number by reserving memory in an array for values that aren't needed.
So the question is: what could a hash function look like that maps each possible, non-duplicated combination of cards to a consecutive value between 0 and 156,742,040, or at least comes close to it?
Paul Senzee has a great post on this for 7 cards (deleted link as it is broken and now points to a NSFW site).
His code is basically a bunch of pre-computed tables and then one function to look up the array index for a given 7-card hand (represented as a 64-bit number with the lowest 52 bits signifying cards):
inline unsigned index52c7(unsigned __int64 x)
{
    const unsigned short *a = (const unsigned short *)&x;
    unsigned A = a[3], B = a[2], C = a[1], D = a[0],
             bcA = _bitcount[A], bcB = _bitcount[B], bcC = _bitcount[C], bcD = _bitcount[D],
             mulA = _choose48x[7 - bcA], mulB = _choose32x[7 - (bcA + bcB)], mulC = _choose16x[bcD];
    return _offsets52c[bcA] + _table4[A] * mulA +
           _offsets48c[ (bcA << 4) + bcB] + _table [B] * mulB +
           _offsets32c[((bcA + bcB) << 4) + bcC] + _table [C] * mulC +
           _table [D];
}
In short, it's a bunch of lookups and bitwise operations powered by pre-computed lookup tables based on perfect hashing.
If you go back and look at this website, you can get the perfect hash code that Senzee used to create the 7-card hash and repeat the process for 5- and 6-card tables (essentially creating a new index52c7.h for each). You might be able to smash all 3 into one table, but I haven't tried that.
All told that should be ~628 MB (4 bytes * 157 M entries). Or, if you want to split it up, you can map it to 16-bit numbers (since I believe most poker hand evaluators only need 7,462 unique hand scores) and then have a separate map from those 7,462 hand scores to whatever hand categories you want. That would be 314 MB.
Here's a different answer, based on the colex function concept. It works with bitsets that are sorted in descending order. Here's a Python implementation (both recursive, so you can see the logic, and iterative). The main concept is that, given a bitset, you can always calculate how many bitsets there are with the same number of set bits but less than (in either the lexicographical or mathematical sense) your given bitset. I got the idea from this paper on hand isomorphisms.
from math import factorial

def n_choose_k(n, k):
    return 0 if n < k else factorial(n) // (factorial(k) * factorial(n - k))

def indexset_recursive(bitset, lowest_bit=0):
    """Return number of bitsets with same number of set bits but less than
    given bitset.

    Args:
        bitset (sequence) - Sequence of set bits in descending order.
        lowest_bit (int) - Name of the lowest bit. Default = 0.

    >>> indexset_recursive([51, 50, 49, 48, 47, 46, 45])
    133784559
    >>> indexset_recursive([52, 51, 50, 49, 48, 47, 46], lowest_bit=1)
    133784559
    >>> indexset_recursive([6, 5, 4, 3, 2, 1, 0])
    0
    >>> indexset_recursive([7, 6, 5, 4, 3, 2, 1], lowest_bit=1)
    0
    """
    m = len(bitset)
    first = bitset[0] - lowest_bit
    if m == 1:
        return first
    else:
        t = n_choose_k(first, m)
        return t + indexset_recursive(bitset[1:], lowest_bit)

def indexset(bitset, lowest_bit=0):
    """Return number of bitsets with same number of set bits but less than
    given bitset.

    Args:
        bitset (sequence) - Sequence of set bits in descending order.
        lowest_bit (int) - Name of the lowest bit. Default = 0.

    >>> indexset([51, 50, 49, 48, 47, 46, 45])
    133784559
    >>> indexset([52, 51, 50, 49, 48, 47, 46], lowest_bit=1)
    133784559
    >>> indexset([6, 5, 4, 3, 2, 1, 0])
    0
    >>> indexset([7, 6, 5, 4, 3, 2, 1], lowest_bit=1)
    0
    """
    m = len(bitset)
    g = enumerate(bitset)
    return sum(n_choose_k(bit - lowest_bit, m - i) for i, bit in g)