List of lists instead of plain list? - erlang

I'm learning Erlang. Here is a simple task: convert integers like 1011, 111213, 12345678 to the lists [10, 11], [11, 12, 13] and [12, 34, 56, 78] respectively.
Here is the function I wrote:
num_to_list(0) -> [];
num_to_list(Num) -> [Num rem 100 | [num_to_list((Num - Num rem 100) div 100)]].
But num_to_list(1234) gives me [34,[12,[]]]. Now I don't care that the list is reversed. I don't understand why it is not a plain list.

num_to_list already returns a list, so you don't need to wrap the recursive call in [] inside num_to_list(Num). I mean:
num_to_list(0) -> [];
num_to_list(Num) -> [Num rem 100 | num_to_list((Num - Num rem 100) div 100)].
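For comparison, the same two-digit split can be sketched in Python (num_to_pairs is just an illustrative name; like the Erlang version, it produces the groups least-significant first):

```python
def num_to_pairs(num):
    """Split a non-negative integer into two-digit groups, least-significant first.
    E.g. 1234 -> [34, 12]."""
    pairs = []
    while num != 0:
        pairs.append(num % 100)   # low two decimal digits
        num //= 100               # drop them
    return pairs
```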

Related

How to use an array as index for decision variable in cplex

I am new to CPLEX. I want to use the two arrays in my code as indexes for my decision variables and parameters. I need the values of the arrays as indices, not the sizes of the arrays. I looked at these links: "https://github.com/AlexFleischerParis/zooopl/blob/master/zooarrayvariableindexerunion.mod" and other examples on https://stackoverflow.com for my problem, but I don't know how I should change my code.
.mod
{int} V = {11, 12, 13, 21, 22, 23, 31, 32, 33, 41, 42, 43, 51, 52, 53};
range F=1..20;
int sizeFlow[f in F]=2+rand(3);
int n=card(V);
int m=4;
range r=1..n;
// scripting way that will get m times 1
range subr=1..m;
int t[f in F][i in subr]=1+rand(n+1-i);
{int} setn[f in F]=asSet(r);
int x2[F][i in r];
execute
{
for (var f in F) for(var i in subr)
{
var e=t[f][i];
var e2=Opl.item(setn[f],e-1);
x2[f][e2]=1;
setn[f].remove(e2);
}
}
{int} result[f in F]={i | i in r:x2[f][i]==1};
{int} Flow4[f in F]=union (i in result[f]) {item(V,i-1)};
{int} Flow[f in F]={item(Flow4[f],j-1) | j in 1..sizeFlow[f]};
assert forall(f in F) forall(ordered i,j in Flow[f]) i!=j;
execute{ writeln(Flow); }
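The execute block above effectively draws, for each flow, a set of distinct random elements of V. As a rough sanity-check sketch (Python, not OPL; the names V, size_flow and flow are illustrative and this skips the Flow4 intermediate step), the logic amounts to sampling without replacement:

```python
import random

V = [11, 12, 13, 21, 22, 23, 31, 32, 33, 41, 42, 43, 51, 52, 53]
random.seed(0)  # fixed seed for reproducibility

# sizeFlow[f] = 2 + rand(3) in OPL gives a value in 2..4
size_flow = {f: 2 + random.randrange(3) for f in range(1, 21)}

# For each flow, pick sizeFlow[f] distinct members of V
flow = {f: random.sample(V, size_flow[f]) for f in range(1, 21)}
```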
Now I want to use the arrays sizeFlow[f] and Flow[f] in the rest of my code, for example:
range Nunode1 = 0..10;
tuple edge{
key int origin;
key int destination;}
{edge} Edges with origin,destination in Nunode1 = {<0,1>,<1,3>,<2,3>,<3,4>,<3,5>,<3,6>,
<4,5>,<4,6>,<4,8>,<5,6>,<6,7>,<6,9>,<9,10>};
//transmission rate.
float C[Edges]=...;
float landa[Flow[f]] = 0.5 +rand(2);
//DECISION VARIABLES
dvar boolean IL[Edges][Flow[f]][sizeFlow[f]][Nunode1][sizeFlow[f]][Nunode1];
// denotes that link l is used by flow f to route from the j-th to the (j+1)-th
// NF service, hosted at nodes nj and nj+1.
//Objective function
dexpr float objmodel2 = sum(l in Edges, c in 1..Flow[f], j in 1..sizeFlow[f]: (j+1) in
1..sizeFlow[f] && (j+1)>j, n in Nunode1: (n+1) in Nunode1 && (n+1)>n) ((IL[l][c][j][n]
[j+1][n+1] * landa[c]) / C[l]); //to minimize the utilization of link capacities.
minimize objmodel2;
subject to{
forall (l in Edges)
cons1: sum(c in 1..Flow[f], j in 1..sizeFlow[f]: j+1 in 1..sizeFlow[f], n in Nunode1:
(n+1) in Nunode1) (IL[l][c][j][n][j+1][n+1] * landa[c]) <= C[l];}

How do I convert an integer to a list of indexes in Lua

I'm pretty new to Lua, I'm trying to convert an integer into an array of indexes but cannot find a robust way to do this.
Here are two examples of what I'm trying to achieve:
Input: 0x11
Desired output: [0, 4]
Input: 0x29
Desired output: [0, 3, 5]
This will work if you're on Lua 5.3 or newer:
local function oneBits(n)
  local i, rv = 0, {}
  while n ~= 0 do
    if n & 1 == 1 then
      table.insert(rv, i)
    end
    i = i + 1
    n = n >> 1
  end
  return rv
end
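The same walk over the bits, sketched in Python for comparison (one_bits mirrors the Lua function above):

```python
def one_bits(n):
    """Return the indices of the set bits of n, lowest bit first."""
    i, rv = 0, []
    while n != 0:
        if n & 1:          # test the lowest bit
            rv.append(i)
        n >>= 1            # shift the next bit into position
        i += 1
    return rv
```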

How to store integers in a list

I'm trying to separate an array of integers to count how many repeated numbers there are.
For this input [10, 20, 20, 10, 10, 30, 50, 10, 20] I am receiving the following output:
#{10=>"\n\n\n\n",20=>[20,20],30=>[30],50=>"2"}
Question
I would like to know how can I generate the following output
#{10=>[10,10,10,10],20=>[20,20],30=>[30],50=>[50]}
The function I am using to generate the map output is:
%% Next: number
%% Acc: map
separate_socks(Next, Acc) ->
    KeyExists = maps:is_key(Next, Acc),
    case KeyExists of
        true ->
            CurrentKeyList = maps:get(Next, Acc),
            maps:update(Next, [Next | CurrentKeyList], Acc);
        false -> maps:put(Next, [Next], Acc)
    end.
You can use the shell:strings/1 function to deal with the problem of numbers being displayed as characters. When shell:strings(true) is called, numbers will be printed as characters:
1> shell:strings(true).
true
2> [10,10,10].
"\n\n\n"
Calling shell:strings(false) will result in the numbers being printed as numbers instead:
3> shell:strings(false).
true
4> [10,10,10].
[10,10,10]
Your output is actually correct. The ASCII value of \n is 10, and there is no native string data type in Erlang: a string is nothing but a list of integers, so erlang:is_list("abc") returns true.
Try [1010, 1020, 1020, 1010, 1010, 1030, 1050, 1010, 1020] as input. It should display all numbers.
You can also format the output with io:format():
1> M = #{10=>[10,10,10,10],20=>[20,20],30=>[30],50=>[50]}.
#{10 => "\n\n\n\n",20 => [20,20],30 => [30],50 => "2"}
2> io:format("~w~n", [M]).
#{10=>[10,10,10,10],20=>[20,20],30=>[30],50=>[50]}
ok
From the documentation of the ~w control sequence:
Writes data with the standard syntax. This is used to output Erlang terms.

Hashfunction to map combinations of 5 to 7 cards

Referring to the original problem: Optimizing hand-evaluation algorithm for Poker-Monte-Carlo-Simulation
I have a list of 5 to 7 cards and want to store their value in a hash table, which should be an array of 32-bit integers directly accessed with the hash function's value as index.
Regarding the large amount of possible combinations in a 52-card-deck, I don't want to waste too much memory.
Numbers:
7-card combinations: 133,784,560
6-card combinations: 20,358,520
5-card combinations: 2,598,960
Total: 156,742,040 possible combinations
Storing 157 million 32-bit integer values costs about 600 MB, so I would like to avoid increasing that number by reserving array slots for values that aren't needed.
So the question is: what could a hash function look like that maps each possible, non-duplicated combination of cards to a consecutive value between 0 and 156,742,040, or at least comes close to it?
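The quoted combination counts are easy to verify directly with math.comb:

```python
from math import comb

# Number of k-card hands from a 52-card deck, for k = 5, 6, 7
c5 = comb(52, 5)
c6 = comb(52, 6)
c7 = comb(52, 7)
total = c5 + c6 + c7
```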
Paul Senzee has a great post on this for 7 cards (deleted link as it is broken and now points to a NSFW site).
His code is basically a bunch of pre-computed tables and then one function to look up the array index for a given 7-card hand (represented as a 64-bit number with the lowest 52 bits signifying cards):
inline unsigned index52c7(unsigned __int64 x)
{
    const unsigned short *a = (const unsigned short *)&x;
    unsigned A = a[3], B = a[2], C = a[1], D = a[0],
             bcA = _bitcount[A], bcB = _bitcount[B], bcC = _bitcount[C], bcD = _bitcount[D],
             mulA = _choose48x[7 - bcA], mulB = _choose32x[7 - (bcA + bcB)], mulC = _choose16x[bcD];
    return _offsets52c[bcA] + _table4[A] * mulA +
           _offsets48c[ (bcA << 4) + bcB] + _table [B] * mulB +
           _offsets32c[((bcA + bcB) << 4) + bcC] + _table [C] * mulC +
           _table [D];
}
In short, it's a bunch of lookups and bitwise operations powered by pre-computed lookup tables based on perfect hashing.
If you go back and look at this website, you can get the perfect hash code that Senzee used to create the 7-card hash and repeat the process for 5- and 6-card tables (essentially creating a new index52c7.h for each). You might be able to smash all 3 into one table, but I haven't tried that.
All told that should be ~628 MB (4 bytes * 157 M entries). Or, if you want to split it up, you can map it to 16-bit numbers (since I believe most poker hand evaluators only need 7,462 unique hand scores) and then have a separate map from those 7,462 hand scores to whatever hand categories you want. That would be 314 MB.
Here's a different answer based on the colex function concept. It works with bitsets that are sorted in descending order. Here's a Python implementation (both recursive so you can see the logic and iterative). The main concept is that, given a bitset, you can always calculate how many bitsets there are with the same number of set bits but less than (in either the lexicographical or mathematical sense) your given bitset. I got the idea from this paper on hand isomorphisms.
from math import factorial

def n_choose_k(n, k):
    return 0 if n < k else factorial(n) // (factorial(k) * factorial(n - k))

def indexset_recursive(bitset, lowest_bit=0):
    """Return number of bitsets with same number of set bits but less than
    given bitset.

    Args:
        bitset (sequence) - Sequence of set bits in descending order.
        lowest_bit (int) - Name of the lowest bit. Default = 0.

    >>> indexset_recursive([51, 50, 49, 48, 47, 46, 45])
    133784559
    >>> indexset_recursive([52, 51, 50, 49, 48, 47, 46], lowest_bit=1)
    133784559
    >>> indexset_recursive([6, 5, 4, 3, 2, 1, 0])
    0
    >>> indexset_recursive([7, 6, 5, 4, 3, 2, 1], lowest_bit=1)
    0
    """
    m = len(bitset)
    first = bitset[0] - lowest_bit
    if m == 1:
        return first
    else:
        t = n_choose_k(first, m)
        return t + indexset_recursive(bitset[1:], lowest_bit)

def indexset(bitset, lowest_bit=0):
    """Return number of bitsets with same number of set bits but less than
    given bitset.

    Args:
        bitset (sequence) - Sequence of set bits in descending order.
        lowest_bit (int) - Name of the lowest bit. Default = 0.

    >>> indexset([51, 50, 49, 48, 47, 46, 45])
    133784559
    >>> indexset([52, 51, 50, 49, 48, 47, 46], lowest_bit=1)
    133784559
    >>> indexset([6, 5, 4, 3, 2, 1, 0])
    0
    >>> indexset([7, 6, 5, 4, 3, 2, 1], lowest_bit=1)
    0
    """
    m = len(bitset)
    g = enumerate(bitset)
    return sum(n_choose_k(bit - lowest_bit, m - i) for i, bit in g)
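A quick sanity check of the iterative version (restated here with math.comb, which returns 0 when k > n, matching n_choose_k above): over all 3-subsets of {0..5} the rank is a bijection onto 0..C(6,3)-1, i.e. a perfect minimal index for fixed-size subsets.

```python
from itertools import combinations
from math import comb

def indexset(bitset, lowest_bit=0):
    # Colex rank of a descending-sorted sequence of set-bit positions.
    m = len(bitset)
    return sum(comb(bit - lowest_bit, m - i) for i, bit in enumerate(bitset))

# Ranks of all 3-subsets of {0..5}, each passed in descending order
ranks = sorted(indexset(sorted(s, reverse=True)) for s in combinations(range(6), 3))
```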

Prolog print value as result instead of true

I need to write a program which returns a new list from a given list with the following criteria: if a list member is negative or 0, it should add that value 3 times to the new list; if the member is positive, it should add the value 2 times.
For example:
goal: dt([-3,2,0],R).
R = [-3,-3,-3,2,2,0,0,0].
I have written the following code and it works fine for me, but it returns true as the result instead of R = [some_values].
My code :
dt([],R):- write(R). % end print new list
dt([X|Tail],R):- X =< 0, addNegavite(Tail,X,R). % add 3 negatives or 0
dt([X|Tail],R):- X > 0, addPositive(Tail,X,R). % add 2 positives
addNegavite(Tail,X,R):- append([X,X,X],R,Z), dt(Tail, Z).
addPositive(Tail,X,R):- append([X,X],R,Z), dt(Tail, Z).
Maybe someone knows how to make it print R = [...] instead of true.
Your code prepares the value of R as it goes down the recursing chain top-to-bottom, treating the value passed in as the initial list. Calling dt/2 with an empty list produces the desired output:
:- dt([-3,2,0],[]).
Demo #1 - Note the reversed order
This is, however, an unusual way of doing things in Prolog: typically R is your return value, built the other way around, where the base case handles the "empty list" situation and the remaining rules grow the result from that empty list:
dt([],[]). % Base case: empty list produces an empty list
dt([X|Like],R):- X =< 0, addNegavite(Like,X,R).
dt([X|Like],R):- X > 0, addPositive(Like,X,R).
% The two remaining rules do the tail first, then append:
addNegavite(Like,X,R):- dt(Like, Z), append([X,X,X], Z, R).
addPositive(Like,X,R):- dt(Like, Z), append([X,X], Z, R).
Demo #2
Why do you call write inside your clauses?
Better not to have side effects in your clauses:
dt([], []).
dt([N|NS], [N,N,N|MS]) :-
    N =< 0,
    dt(NS, MS).
dt([N|NS], [N,N|MS]) :-
    N > 0,
    dt(NS, MS).
That will work:
?- dt([-3,2,0], R).
R = [-3, -3, -3, 2, 2, 0, 0, 0] .
A further advantage of not invoking predicates with side effects in your clauses is that the reverse works, too:
?- dt(R, [-3, -3, -3, 2, 2, 0, 0, 0]).
R = [-3, 2, 0] .
Of course, you can call write/1 outside of your clauses:
?- dt([-3,2,0], R), write(R).
[-3,-3,-3,2,2,0,0,0]
R = [-3, -3, -3, 2, 2, 0, 0, 0] .
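The pure relation above, run in the forward direction, corresponds to this simple transformation, sketched in Python for comparison (Python cannot express the reverse query, which is the point of keeping the Prolog version pure):

```python
def dt(xs):
    """Repeat non-positive values three times and positive values twice."""
    out = []
    for x in xs:
        out.extend([x] * (3 if x <= 0 else 2))
    return out
```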