How to convert a non-linear constraint into new linear constraints?

I have converted the non-linear expression (https://i.stack.imgur.com/MzzSO.png) into linear equations 11, 12 and 13. But when I run my code, I get these errors: "constraint labeling not supported for dimensions with variable size, use named constraints instead", "CPLEX cannot extract expression", "Element "cons12" not defined" and "Invalid initialization expression for element "cons12"". Can you help me, what should I do? Thanks in advance.
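(For reference, assuming the linked image shows a product of binary variables, which is what constraints of this shape usually linearize: for binary $x$ and $y$ with $z = x \cdot y$, the standard linearization is $z \le x$, $z \le y$, $z \ge x + y - 1$, or, aggregated over $k$ binary factors $x_1, \dots, x_k$: $k\,z \le \sum_{i=1}^{k} x_i$ and $z \ge \sum_{i=1}^{k} x_i - (k-1)$, so the coefficient multiplying $z$ should equal the number of factors in the product.)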
using CPLEX;
//Total nodes number.
range Nodes = 1..9;
//{int} Nodes = {1,2,3,4,5,6,7,8,9};
//................................................................................
//Total links number
//two_directed
tuple edge{
int node_out;
int node_in;
};
{edge} L with node_out in Nodes, node_in in Nodes = {<1,3>, <3,1>, <2,3>, <3,2>, <3,4>, <4,3>, <3,5>,
<5,3>, <3,6>, <6,3>, <4,5>, <5,4>, <4,6>, <6,4>,
<4,8>, <8,4>, <5,6>, <6,5>, <6,7>, <7,6>, <6,9>,
<9,6>};
{edge} Lout[Nodes] = [{<1,3>},//node1
{<2,3>},//node2
{<3,1>, <3,2>, <3,4>, <3,5>, <3,6>},//node3
{<4,3>, <4,5>, <4,6>, <4,8>},//node4
{<5,3>, <5,4>, <5,6>},//node5
{<6,3>, <6,4>, <6,5>, <6,7>, <6,9>},//node6
{<7,6>},//node7
{<8,4>},//node8
{<9,6>}];//node9
//Flows
tuple cflow{
int origin;
int destination;
}
{cflow} F with origin in Nodes, destination in Nodes = {<1,2>, <1,3>, <1,4>, <1,5>, <1,6>, <1,7>,
<1,8>, <1,9>, <2,1>, <2,3>, <2,4>, <2,5>, <2,6>, <2,7>, <2,8>, <2,9>,
<3,1>, <3,2>, <3,4>, <3,5>, <3,6>, <3,7>, <3,8>, <3,9>,
<4,1>, <4,2>, <4,3>, <4,5>, <4,6>, <4,7>, <4,8>, <4,9>,
<5,1>, <5,2>, <5,3>, <5,4>, <5,6>, <5,7>, <5,8>, <5,9>,
<6,1>, <6,2>, <6,3>, <6,4>, <6,5>, <6,7>, <6,8>, <6,9>, <7,1>, <7,2>};
float landa_f[f in F]=[0.86, 0.3, 0.75, 0.23, 0.32, 0.4, 0.5, 0.6, 0.22, 0.14,
0.23, 0.42, 0.33, 0.5, 0.62, 0.36, 0.42, 0.35, 0.2, 0.16,
0.33, 0.9, 0.41, 0.51, 0.61, 0.33, 0.42, 0.51, 0.87, 0.96,
0.31, 0.55, 0.91, 0.36, 0.32, 0.72, 0.76, 0.32, 0.45, 0.64,
0.38, 0.71, 0.43, 0.55, 0.53, 0.9, 0.58, 0.97, 0.5, 0.33 ];
{string} V = {"IDS", "DPI", "NAT", "Proxy", "Firewall"};
//MAIN DECISION VARIABLES
dvar int I[v in V][n in Nodes][f in F][j in 1..2] in 0..1;
// denotes that an NF instance v hosted at node n is used by the j-th service on the service chain of flow f.
dvar int IL[l in L][f in F][j in 1..2][n in Nodes] in 0..1;
// denotes that link l is used by flow f to route from the j-th to the (j+1)-th NF service, hosted at nodes nj and nj+1.
dvar int Y[v in V][n in Nodes];
//Decision variables related with non linear equations
dvar int z[l in L][f in F][j in 1..2][n in Nodes][v in V] in 0..1;
subject to{
//convert non_linear_equations to new linear constraints
forall (f in F, j in 1..2, v in V)
cons11: sum( l in Lout[item(Routes[f],j-1)] ) z[l][f][j][item(Routes[f],j-1)][v] == 1;
forall (f in F, j in 1..2, l in Lout[item(Routes[f],j-1)], v in V) {
cons12: 3 * z[l][f][j][item(Routes[f],j-1)][v] <= ( IL[l][f][j][item(Routes[f],j-1)] +
I[v][item(Routes[f],j-1)][f][j] );
cons13: z[l][f][j][item(Routes[f],j-1)][v] >= ( IL[l][f][j][item(Routes[f],j-1)] +
I[v][item(Routes[f],j-1)][f][j] ) - 2; }
}
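For a quick sanity check of the linearization itself, independent of the OPL labeling errors, here is a minimal docplex (Python) sketch; a, b and z are placeholder names, with a standing in for IL[l][f][j][n] and b for I[v][n][f][j]. Note that with two binary factors the aggregated coefficient on z is 2 (a coefficient of 3 with only two terms on the right-hand side would force z = 0):
from docplex.mp.model import Model

# Sketch: linearize z = a*b for binary a and b.
m = Model(name='linearize_product')
a = m.binary_var(name='a')
b = m.binary_var(name='b')
z = m.binary_var(name='z')
m.add_constraint(2 * z <= a + b)   # z can be 1 only if both factors are 1
m.add_constraint(z >= a + b - 1)   # z is forced to 1 when both factors are 1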

Indeed
dvar int x[1..2][1..2];
{int} s[1..2]=[{1,2},{1}];
subject to
{
forall(i in 1..2) forall(j in s[i]) ct:x[i][j]<=0;
}
execute
{
writeln(ct[1][1].UB);
}
does not work, but if you write
dvar float x[1..2][1..2];
{int} s[1..2]=[{1,2},{1}];
subject to
{
forall(i in 1..2) forall(j in 1..2:j in s[i]) ct:x[i][j]<=0;
}
execute
{
writeln(ct[1][1].UB);
}
then it works fine
range r=1..2;
dvar float x[1..2][1..2];
constraint ct[r][r];
{int} s[1..2]=[{1,2},{1}];
subject to
{
forall(i in 1..2,j in 1..2)
ct[i][j]= if (j in s[i]) x[1][i]<=0;
}
execute
{
writeln(ct[1][1].UB);
}
works fine too: in the second form the label ranges over the fixed domain 1..2 with a filter, and in the third ct is declared up front as a constraint array with fixed dimensions, so neither runs into the variable-size labeling limitation.

Related

Why does minimization get worse with bigger bounds?

I want to minimize this function:
import numpy as np
from scipy import optimize

betas = [0.1, 0.2, 0.3]
weights = [0.2, 0.2, 0.6]

def get_beta_p(betas, weights):
    return sum([x * y for x, y in zip(betas, weights)])
With
initializer = weights
constraints = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1},
               {'type': 'ineq', 'fun': lambda betas: minimize_beta(initializer, betas)})  # sum = 1, fun >= 0
bounds = tuple((0, 0.4) for x in range(len(weights)))  # each x from 0 to 0.4
So I use it in the function
def minimize_beta(weights, args):
    betas = args
    return get_beta_p(betas, weights)

zero_beta = optimize.minimize(minimize_beta,
                              initializer,
                              method='SLSQP',
                              args=betas,
                              bounds=bounds,
                              constraints=constraints)
Problem: in my humble opinion, accuracy should not depend on the bounds range, but here it clearly does, which looks really strange and unnatural.
For example, when I use bounds (-0.1, 0.5), the output is:
[array([1.]), array([0.23947614]), array([0.15513421]), array([0.30120484]), array([0.36498963]), array([0.6351255]), array([0.06166346]), array([0.12740605]), array([0.55059138]), array([0.46437143]), array([0.42777512])]
message: Optimization terminated successfully
success: True
status: 0
fun: -0.15440870412170832 #look here
x: [-1.000e-01 2.000e-01 5.000e-01 -1.000e-01 -1.000e-01
-1.000e-01 5.000e-01 5.000e-01 -1.000e-01 -1.000e-01
-1.000e-01]
nit: 5
jac: [ 1.000e+00 2.395e-01 1.551e-01 3.012e-01 3.650e-01
6.351e-01 6.166e-02 1.274e-01 5.506e-01 4.644e-01
4.278e-01]
nfev: 60
njev: 5
for (0, 0.2):
fun: 0.17697693990449903
x: [ 2.224e-17 2.000e-01 2.000e-01 2.000e-01 0.000e+00
     2.898e-16 2.000e-01 2.000e-01 4.457e-16 0.000e+00
     0.000e+00]
for (0, 0.4):
fun: 0.106654645436427
x: [ 2.361e-16 1.700e-16 2.000e-01 8.746e-16 2.980e-16
     0.000e+00 4.000e-01 4.000e-01 3.434e-16 6.155e-17
     0.000e+00]
for (0.0, 0.5):
fun: 0.09453475545362894
x: [ 4.951e-16 0.000e+00 0.000e+00 2.505e-16 5.714e-16 0.000e+00 5.000e-01 5.000e-01 6.418e-16 4.344e-16 5.516e-17]
for (0.0, 0.9):
fun: 0.06823772147966463
x: [ 8.633e-16 0.000e+00 0.000e+00 5.373e-16 1.113e-15 4.327e-17 9.000e-01 1.000e-01 5.260e-16 1.818e-17 0.000e+00]
for (-0.2, 0.2):
Should it be so? Why is it so unpredictable?
P.S.: I can plot all the results if you ask.
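Note that the objective being minimized is linear in x (a fixed dot product with the weights), so enlarging the bounds box enlarges the feasible set, and the minimum can only stay the same or decrease; the drift you observe with wider bounds is expected. A minimal sketch illustrating this with the same kind of dot-product objective (the bounds values here are made up):
import numpy as np
from scipy import optimize

weights = np.array([0.2, 0.2, 0.6])
objective = lambda x: float(np.dot(x, weights))           # linear in x
cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1},)  # components sum to 1
x0 = np.full(len(weights), 1.0 / len(weights))            # feasible start

for lo, hi in [(0, 0.4), (0, 0.9), (-0.1, 0.9)]:
    res = optimize.minimize(objective, x0, method='SLSQP',
                            bounds=[(lo, hi)] * len(weights), constraints=cons)
    print((lo, hi), round(res.fun, 4))  # the minimum weakly decreases as the box grows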

How to define two related indexes in a sum expression using CPLEX

I have been working on a mathematical model for a long time and now I have some problems. In this model we have nodes and edges, and flows that pass through these nodes according to the topology. We also have network functions that we use to build service chains of length two.
First, I don't know how to define two dependent indices that use the same range, for example: cons1, sum (j in 1..2: j+1 in 1..1 && nj[j] in Nodes && nj[j+1] in 0..9). Is this correct or not?
Second, how do I define the IL decision variable?
I get this error: "Cannot use type range for int" in cons19 and cons20. I will write out the explanation of constraints 19 and 20, but I don't know how to change the code.
Constraints (19) and (20) are flow conservation constraints for the destination node. Constraint (19) makes sure that one of the incoming links of the destination node is assigned to route from the node hosting the last NF of the service chain (nJ) to the destination node (df), which is also represented as the (J+1)-th service. In addition, Constraint (20) assigns one of the outgoing links of the node serving the last service to the (J+1)-th service order. I have included my whole code below.
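In symbols, one plausible reading of (19) and (20), with $n_J$ the node hosting the last NF of the chain and $d_f$ the destination of flow $f$ (a hedged sketch from the description above, not taken from the paper):
$\sum_{l \in Lin(d_f)} IL[l][f][J][n_J][J+1][d_f] = 1 \quad \forall f \in Flows \quad (19)$
$\sum_{l \in Lout(n_J)} IL[l][f][J][n_J][J+1][d_f] = 1 \quad \forall f \in Flows \quad (20)$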
Dear Mr Fleischer, I have asked you several questions before and I am very grateful for your kindness; if you can, please help me this time as well. Thanks in advance.
using CPLEX;
int Numnodes=10;
range Nodes = 0..Numnodes;
//...................................................
//total links number=> int L=13;
tuple edge{
key int fromnode;
key int tonode;
}
{edge} Edges with fromnode in Nodes, tonode in Nodes =
{<0,1>,<1,3>,<2,3>,<3,4>,<3,5>,<3,6>,<4,5>,<4,6>,<4,8>,<5,6>,<6,7>,<6,9>,<9,10>};
{edge} Lin with fromnode in Nodes, tonode in Nodes =
{<0,1>,<1,3>,<2,3>,<3,4>,<3,5>,<3,6>,<4,5>,<4,6>,<4,8>,<5,6>,<6,7>,<6,9>,<9,10>};
{edge} Lout with fromnode in Nodes, tonode in Nodes =
{<0,1>,<1,3>,<2,3>,<3,4>,<3,5>,<3,6>,<4,5>,<4,6>,<4,8>,<5,6>,<6,7>,<6,9>,<9,10>};
tuple cflow{
key int node1;
key int node2;
}
{cflow} Flows with node1 in Nodes, node2 in Nodes = {<0,1>, <0,3>, <0,4>, <0,5>, <0,6>, <0,7>, <0,8>,
<0,9>, <0,10>, <1,3>, <1,4>, <1,5>, <1,6>, <1,7>, <1,8>, <1,9>, <1,10>, <2,3>, <2,4>, <2,5>,
<2,6>, <2,7>, <2,8>, <2,9>, <2,10>,<3,4>, <3,5>, <3,6>, <3,7>, <3,8>, <3,9>, <3,10>, <4,5>,
<4,6>, <4,7>, <4,8>, <4,9>, <4,10>, <5,6>, <5,7>, <5,9>, <5,10>,<6,7>, <6,9>, <6,10>,
<9,10>};
tuple arraytoset
{
cflow ed;
int rank;
}
{arraytoset} srout = union(ed in Flows) {<ed,i> | i in 1..card(Routes[ed])};
//....................................................................................
//number flows
range F = 1..50;
//length chains
int J[f in Flows] = 2;
int nj[1..2];
//.......................................................................................
//VNFs
{string} V = {"Proxy", "Firewall", "IDS", "DPI", "NAT"};
//An NF instance v hosted on node n is characterized by its service rate of requests.
float u[V][Nodes]=...;
//transmission rate.
float C[l in Edges]=...;
//Delays
float Dvn[V][Nodes]=...; //denote t
//landa
float landa[f in Flows]=(0.5+rand(2))/2;
//..............................................................
//MAIN DECISION VARIABLES
dvar int I[V][Nodes][Flows][1..2] in 0..1;
// denotes that an NF instance v hosted at node n is used by the j-th service on the service
// chain of flow f.
dvar int IL[l in Edges][Flows][1..2][Nodes][1..1][0..9] in 0..1;
// denotes that link l is used by flow f to route from the j-th to the (j+1)-th NF service,
// hosted at nodes nj and nj+1.
dvar int Y[V][Nodes];
//represents the number of NF type v instances that are hosted at node n.
//Decision variables related with non linear equations
dvar int z[l1 in Lout][Flows][1..2][Nodes][1..1][0..9][V] in 0..1;
//Related with floor function
dexpr float x[f in Flows] = sum(v in V, n in Nodes, j in 1..2) I[v][n][f][j] / J[f];
dvar int s[f in Flows];
dvar float floorequ[i in Flows] in 0..0.99999;
//Total delays
dexpr float DT = sum(l in Edges, f in Flows, j in 1..2: j+1 in 1..1 && nj[j] in Nodes &&
nj[j+1] in 0..9) IL[l][f][j][nj[j]][j+1][nj[j+1]] * Dlink[l];
dexpr float DP = sum(v in V, n in Nodes, f in Flows, j in 1..2) I[v][n][f][j] * Dvn[v][n];
//MAIN objective functions
dexpr float objmodel1 = sum(n in Nodes, v in V) (Y[v][n] * Cpuvnf[v] / Cpunode[n]);//to minimize the use of cores
dexpr float objmodel2 = sum(l in Edges, f in Flows, j in 1..2: j+1 in 1..1 && nj[j] in Nodes
&& nj[j+1] in 0..9) ((IL[l][f][j][nj[j]][j+1][nj[j+1]] * landa[f]) / C[l]); //to minimize the utilization of link capacities.
dexpr float objmodel3 = sum(f in Flows) s[f];
maximize staticLex(objmodel3, -objmodel1, -objmodel2);
subject to{
//constraints with j, j+1, nj[j], nj[j+1] are wrong; I don't know how to define these.
forall (<o,d> in Edges)
cons1: sum(f in Flows, j in 1..2: j+1 in 1..1 && nj[j] in Nodes && nj[j+1] in 0..9) (IL[<o,d>]
[f][j][nj[j]][j+1][nj[j+1]] * landa[f]) <= C[<o,d>];
forall (n in Nodes)
cons2: sum(v in V) Y[v][n] * Cpuvnf[v] <= Cpunode[n];
forall (v in V, n in Nodes)
cons3: sum(f in Flows, j in 1..2) I[v][n][f][j] * landa[f] <= u[v][n];
forall (n in Nodes, v in V, f in Flows, j in 1..2)
cons4: Y[v][n] >= I[v][n][f][j];
forall (f in Flows, j in 1..2)
cons5: sum(n in Nodes, v in V) I[v][n][f][j] == 1;
forall (i in Flows)
cons6a: x[i]==s[i]+floorequ[i];
forall (f in Flows)
cons7: DT + DP <= Dflow[f];
//convert non_linear_equation_11 to new linear constraints == constraints 8, 9, 10
forall (f in Flows, j in 1..2: j+1 in 1..1 && nj[j] in Nodes && nj[j+1] in 0..9, v in V)
cons8: sum(<o1,d1> in Lout) z[<o1,d1>][f][j][nj[j]][j+1][nj[j+1]][v] == 1;
forall (<o1,d1> in Lout, f in Flows, j in 1..2: j+1 in 1..1 && nj[j] in Nodes && nj[j+1]
in 0..9, v in V) {
cons9: 3 * z[<o1,d1>][f][j][nj[j]][j+1][nj[j+1]][v] <= (IL[<o1,d1>][f][j][nj[j]][j+1]
[nj[j+1]] + I[v][nj[j]][f][j] + I[v][nj[j+1]][f][j+1]);
cons10: z[<o1,d1>][f][j][nj[j]][j+1][nj[j+1]][v] >= (IL[<o1,d1>][f][j][nj[j]][j+1]
[nj[j+1]] + I[v][nj[j]][f][j] + I[v][nj[j+1]][f][j+1]) - 2; }
//convert non_linear_equation_12 to new linear constraints == constraints 11, 12, 13
forall (f in Flows, j in 1..2: j+1 in 1..1 && nj[j] in Nodes && nj[j+1] in 0..9, v in V)
cons11: sum(<o2,d2> in Lin) z[<o2,d2>][f][j][nj[j]][j+1][nj[j+1]][v] == 1;
forall (<o2,d2> in Lin, f in Flows, j in 1..2: j+1 in 1..1 && nj[j] in Nodes && nj[j+1] in
0..9, v in V) {
cons12: 3 * z[<o2,d2>][f][j][nj[j+1]][j+1][nj[j+1]][v] <= (IL[<o2,d2>][f][j][nj[j]][j+1]
[nj[j+1]] + I[v][nj[j]][f][j] + I[v][nj[j+1]][f][j+1]);
cons13: z[<o2,d2>][f][j][nj[j]][j+1][nj[j+1]][v] >= (IL[<o2,d2>][f][j][nj[j]][j+1]
[nj[j+1]] + I[v][nj[j]][f][j] + I[v][nj[j+1]][f][j+1]) - 2; }
//constraints 14, 15
forall(f in Flows, j in 1..2: j+1 in 1..1 && nj[j] in Nodes && nj[j+1] in 0..9, v in V){
cons14: sum(<o1,d1> in Lout) IL[<o1,d1>][f][j][nj[j]][j+1][nj[j+1]] <= 1;
cons15: sum(<o2,d2> in Lin) IL[<o2,d2>][f][j][nj[j]][j+1][nj[j+1]] <= 1; }
//constraints 16
forall (f in Flows, j in 1..2: j+1 in 1..1 && nj[j] in Nodes && nj[j+1] in 0..9)
cons16: (sum(<o2,d2> in Lin) IL[<o2,d2>][f][j][nj[j]][j+1][nj[j+1]]) - (sum(<o1,d1>
in Lout) IL[<o1,d1>][f][j][nj[j]][j+1][nj[j+1]]) == 0;
//constraints 17, 18
forall (f in Flows, j in 1..2: j+1 in 1..1 && nj[j] in Nodes && nj[j+1] in 0..9, v in
V) {
cons17: sum(<o2,d2> in Lin) IL[<o2,d2>][f][0][0][1][1] == I[v][1][f][1];
cons18: sum(<o1,d1> in Lout) IL[<o1,d1>][f][0][0][1][1] == I[v][1][f][1]; }
//constraints 19, 20
forall (f in Flows, j in 1..2: j+1 in 1..1 && nj[j] in Nodes && nj[j+1] in 0..9, v in
V) {
cons19: sum(<o2,d2> in Lin) IL[<o2,d2>][f][1..2][9][1..1][10] == I[v][9][f][1..2];
cons20: sum(<o1,d1> in Lout) IL[<o1,d1>][f][1..2][9][1..1][10] == I[v][9][f][1..2];}}
assert forall(f in Flows) s[f]==floor(x[f]);
execute DISPLAY_After_SOLVE {
writeln("objmodel1==", objmodel1, "objmodel2==", objmodel2, "objmodel3==", objmodel3);
}
data file:
Cpunode=[1, 1, 1, 5, 4, 4, 5, 1, 10, 1, 1]; //number of cores for each node
Cpuvnf=[1, 1, 1, 1, 1]; //number of core that each vnf wants
C=[1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000,
1000, 1000]; //1ms
u=[[10,10,10,10,10,10,10,10,10,10,10]
[10,10,10,10,10,10,10,10,10,10,10]
[10,10,10,10,10,10,10,10,10,10,10]
[10,10,10,10,10,10,10,10,10,10,10]
[10,10,10,10,10,10,10,10,10,10,10]]; //10Mbps
Dvn=[[0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003,
0.003, 0.003]
[0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003,
0.003, 0.003]
[0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003,
0.003, 0.003]
[0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003,
0.003, 0.003]
[0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003,
0.003, 0.003] ]; //3ms
Dlink=[0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01,
0.01, 0.01, 0.01]; //10ms
Dflow=[0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04,
0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04,
0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04,
0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04,
0.04, 0.04, 0.04, 0.04, 0.04, 0.04]; //40ms
If r is a range, v[r] is not a value!
So in your model you should change
cons19: sum(<o2,d2> in Lin) IL[<o2,d2>][f][1..2][9][1..1][10] == I[v][9][f][1..2];
cons20: sum(<o1,d1> in Lout) IL[<o1,d1>][f][1..2][9][1..1][10] == I[v][9][f][1..2];
into
cons19: sum(<o2,d2> in Lin) IL[<o2,d2>][f][1][9][1][10] == I[v][9][f][1];
cons20: sum(<o1,d1> in Lout) IL[<o1,d1>][f][1][9][1][10] == I[v][9][f][1];
At least this will solve the error and you will be able to go further.
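If both service positions are intended, the range expression presumably stands for one equation per index value, i.e. (a hedged reading of the model above):
$\sum_{l \in Lin} IL[l][f][j][9][1][10] = I[v][9][f][j] \quad \forall f \in Flows,\ j \in \{1, 2\}$
written out as separate named constraints for j = 1 and j = 2 (and likewise for cons20 with Lout).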

Lua pattern returns wrong match when fetching multiple substrings between brackets

So I have this problem with a Lua pattern. As seen below, I use the pattern "CreateRuntimeTxd%(.*%)"
local _code = [[
Citizen.CreateThread(function()
local dui = CreateDui("https://www.youtube.com/watch?v=dQw4w9WgXcQ", 1920, 1080)
local duiHandle = GetDuiHandle(dui)
CreateRuntimeTextureFromDuiHandle(CreateRuntimeTxd('rick'), 'nevergonnagiveuup', duiHandle)
while true do
Wait(0)
DrawSprite('rick', 'nevergonnagiveuup', 0.5, 0.5, 1.0, 1.0, 0, 255, 255, 255, 255)
end
end)
]]
for match in string.gmatch(_code, "CreateRuntimeTxd%(.*%)") do
    print(match)
end
So the problem is that the current pattern matches
CreateRuntimeTxd('rick'), 'nevergonnagiveuup', duiHandle)
while true do
Wait(0)
DrawSprite('rick', 'nevergonnagiveuup', 0.5, 0.5, 1.0, 1.0, 0, 255, 255, 255, 255)
end
end)
but I only want it to match CreateRuntimeTxd('rick')
You need to use
for match in string.gmatch(_code, "CreateRuntimeTxd%(.-%)") do
    print(match)
end
Details:
CreateRuntimeTxd - a literal text
%( - a literal ( char
.- - zero or more characters (the least amount needed to complete a match)
%) - a ) char.
You may also use a negated character class, [^()]* (if there can be no ( and ) before )) or [^)]* (if ) chars are still expected) instead of .-:
for match in string.gmatch(_code, "CreateRuntimeTxd%([^()]*%)") do
    print(match)
end
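The same greedy-versus-lazy distinction exists in most regex flavors; for comparison, here is a minimal Python sketch, using a one-line sample taken from the snippet above:
import re

code = "CreateRuntimeTextureFromDuiHandle(CreateRuntimeTxd('rick'), 'nevergonnagiveuup', duiHandle)"
print(re.findall(r"CreateRuntimeTxd\(.*\)", code))   # greedy: extends to the last ')'
print(re.findall(r"CreateRuntimeTxd\(.*?\)", code))  # lazy: stops at the first ')'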

How would I generate register-based virtual machine code from a binary tree for math interpretation?

My code is written in Dart, but this question is more generally about the binary tree data structure and register-based VM implementation. I have commented the code so you can follow it even if you do not know Dart.
So, here are my nodes:
enum NodeType {
numberNode,
addNode,
subtractNode,
multiplyNode,
divideNode,
plusNode,
minusNode,
}
NumberNode holds a number value.
AddNode, SubtractNode, MultiplyNode, and DivideNode are really just binary op nodes.
PlusNode and MinusNode are unary operator nodes.
The tree is generated based on order of operations: unary operations first, then multiplication and division, and then addition and subtraction. E.g. "1 + 2 * -3" becomes "(1 + (2 * (-3)))".
Here is my code that tries to walk over the AST:
/// Converts tree to Register-based VM code
List<Opcode> convertNodeToCode(Node node) {
List<Opcode> result = [const Opcode(OpcodeKind.loadn, 2, -1)];
bool counterHasBeenZero = false;
bool binOpDebounce = false;
int counter = 0;
List<Opcode> convert(Node node) {
switch (node.nodeType) {
case NodeType.numberNode:
counter = counter == 0 ? 1 : 0;
if (counter == 0 && !counterHasBeenZero) {
counterHasBeenZero = true;
} else {
counter = 1;
}
return [Opcode(OpcodeKind.loadn, counter, (node as NumberNode).value)];
case NodeType.addNode:
var aNode = node as AddNode;
return convert(aNode.nodeA) +
convert(aNode.nodeB) +
[
const Opcode(
OpcodeKind.addn,
0,
1,
)
];
case NodeType.subtractNode:
var sNode = node as SubtractNode;
var result = convert(sNode.nodeA) +
convert(sNode.nodeB) +
(binOpDebounce
? [
const Opcode(
OpcodeKind.subn,
0,
0,
1,
)
]
: [
const Opcode(
OpcodeKind.subn,
0,
1,
)
]);
if (!binOpDebounce) binOpDebounce = true;
return result;
case NodeType.multiplyNode:
var mNode = node as MultiplyNode;
var result = convert(mNode.nodeA) +
convert(mNode.nodeB) +
(binOpDebounce
? [
const Opcode(
OpcodeKind.muln,
0,
0,
1,
)
]
: [
const Opcode(
OpcodeKind.muln,
0,
1,
)
]);
if (!binOpDebounce) binOpDebounce = true;
return result;
case NodeType.divideNode:
var dNode = node as DivideNode;
var result = convert(dNode.nodeA) +
convert(dNode.nodeB) +
(binOpDebounce
? [
const Opcode(
OpcodeKind.divn,
0,
0,
1,
)
]
: [
const Opcode(
OpcodeKind.divn,
0,
1,
)
]);
if (!binOpDebounce) binOpDebounce = true;
return result;
case NodeType.plusNode:
return convert((node as PlusNode).node);
case NodeType.minusNode:
return convert((node as MinusNode).node) +
[Opcode(OpcodeKind.muln, 1, 2)];
default:
throw Exception('Non-existent node type');
}
}
return result + convert(node) + [const Opcode(OpcodeKind.exit)];
}
I tried a method that just uses 2-3 registers, with a counter to track where the current number was loaded, but the code gets ugly quickly, and once order of operations is involved it becomes really hard to track where the numbers are with the counter. Basically, the idea was to store each number in register 1 or 0, loading the number when needed, and add the registers together so that the result ends up in register 0. For example, 1 + 2 + 3 + 4 becomes [r2 = -1.0, r1 = 1.0, r0 = 2.0, r0 = r1 + r0, r1 = 3.0, r0 = r1 + r0, r1 = 4.0, r0 = r1 + r0, exit]. When I tried this with multiplication, though, it became very hard, as it multiplied the wrong numbers, possibly because of the order of operations.
I tried to see if this way could be done as well:
// (1 + (2 * ((-2) + 3) * 5))
const code = [
// (-2)
Opcode(OpcodeKind.loadn, 1, -2), // r1 = -2;
// (2 + 3)
Opcode(OpcodeKind.loadn, 1, 2), // r1 = 2;
Opcode(OpcodeKind.loadn, 2, 3), // r2 = 3;
Opcode(OpcodeKind.addn, 2, 1, 2), // r2 = r1 + r2;
// (2 * (result) * 5)
Opcode(OpcodeKind.loadn, 1, 2), // r1 = 2;
Opcode(OpcodeKind.loadn, 3, 5), // r3 = 5;
Opcode(OpcodeKind.muln, 2, 1, 2), // r2 = r1 * r2;
Opcode(OpcodeKind.muln, 2, 2, 3), // r2 = r2 * r3;
// (1 + (result))
Opcode(OpcodeKind.loadn, 1, 1), // r1 = 1;
Opcode(OpcodeKind.addn, 1, 1, 2), // r1 = r1 + r2;
Opcode(OpcodeKind.exit), // exit halt
];
I knew this method would not work, because to iterate through the nodes I would need to know the positions of the numbers and registers beforehand, so I'd have to find another way to locate the number/register.
You don't need to read all of the above; those were just my attempts to produce register-based virtual machine code.
I want to see how you guys would do it or how you would make it.
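One standard way to do this is a post-order walk in which every recursive call returns the register that holds its result, and registers are handed out by a simple counter that grows into the right subtree, so register pressure equals the depth of the expression. A minimal Python sketch of the idea, using a hypothetical tuple-based AST and the opcode names from the question:
def gen(node, code, base):
    """Post-order walk: emit ops into `code`, return the register holding the result."""
    kind = node[0]
    if kind == 'num':
        code.append(('loadn', base, node[1]))      # r[base] = value
        return base
    if kind in ('add', 'sub', 'mul', 'div'):
        ra = gen(node[1], code, base)              # left result lands in r[base]
        rb = gen(node[2], code, base + 1)          # right result lands in r[base+1]
        code.append((kind + 'n', ra, ra, rb))      # r[base] = r[base] op r[base+1]
        return ra
    if kind == 'neg':                              # unary minus: multiply by -1
        r = gen(node[1], code, base)
        code.append(('loadn', r + 1, -1))
        code.append(('muln', r, r, r + 1))
        return r
    if kind == 'pos':                              # unary plus: no-op
        return gen(node[1], code, base)
    raise Exception('Non-existent node type')

# "1 + 2 * -3" parsed as (1 + (2 * (-3)))
tree = ('add', ('num', 1), ('mul', ('num', 2), ('neg', ('num', 3))))
code = []
result_reg = gen(tree, code, 0)
code.append(('exit',))
print(result_reg, code)
Because each operator's result reuses its left operand's register and the right operand's register is implicitly freed afterwards, no global counter state or debounce flag is needed.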

precision_recall_fscore_support returns same values for accuracy, precision and recall

I am training a logistic regression classification model and trying to compare the results using a confusion matrix and by calculating precision, recall, and accuracy.
The code is given below:
# logistic regression classification model
import numpy as np
import pandas as pd
import sklearn.linear_model
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support, accuracy_score

clf_lr = sklearn.linear_model.LogisticRegression(penalty='l2', class_weight='balanced')
logistic_fit = clf_lr.fit(TrainX, np.where(TrainY >= delay_threshold, 1, 0))
pred = clf_lr.predict(TestX)
# print results
cm_lr = confusion_matrix(np.where(TestY >= delay_threshold, 1, 0), pred)
print("Confusion matrix")
print(pd.DataFrame(cm_lr))
report_lr = precision_recall_fscore_support(list(np.where(TestY >= delay_threshold, 1, 0)), list(pred), average='micro')
print("\nprecision = %0.2f, recall = %0.2f, F1 = %0.2f, accuracy = %0.2f\n" %
      (report_lr[0], report_lr[1], report_lr[2], accuracy_score(list(np.where(TestY >= delay_threshold, 1, 0)), list(pred))))
print(pd.DataFrame(cm_lr.astype(np.float64) / cm_lr.sum(axis=1)))
show_confusion_matrix(cm_lr)
#linear_score = cross_validation.cross_val_score(linear_clf, ArrX, ArrY, cv=10)
#print(linear_score)
expected results are
Confusion matrix
0 1
0 4303 2906
1 1060 1731
precision = 0.37, recall = 0.62, F1 = 0.47, accuracy = 0.60
0 1
0 0.596893 1.041204
1 0.147038 0.620208
However, my outputs are
Confusion matrix
0 1
0 4234 2891
1 1097 1778
precision = 0.60, recall = 0.60, F1 = 0.60, accuracy = 0.60
0 1
0 0.594246 1.005565
1 0.153965 0.618435
How do I get the correct results?
In a 'binary' case like yours (2 classes) you need to use average='binary' instead of average='micro'.
For example:
from sklearn import metrics
import pandas as pd

TestY = [0, 1, 1, 0, 1, 1, 1, 0, 0, 0]
pred = [0, 1, 1, 0, 0, 1, 0, 1, 0, 0]
# print results
cm_lr = metrics.confusion_matrix(TestY, pred)
print("Confusion matrix")
print(pd.DataFrame(cm_lr))
report_lr = metrics.precision_recall_fscore_support(TestY, pred, average='binary')
print("\nprecision = %0.2f, recall = %0.2f, F1 = %0.2f, accuracy = %0.2f\n" %
      (report_lr[0], report_lr[1], report_lr[2], metrics.accuracy_score(TestY, pred)))
and the output:
Confusion matrix
0 1
0 4 1
1 2 3
precision = 0.75, recall = 0.60, F1 = 0.67, accuracy = 0.70
'binary' has a default definition of which class is the positive one (the class with the 1 label).
You can read about the differences between all the average options in the scikit-learn documentation.
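To see why average='micro' collapses everything to one number in a single-label setting: micro-averaging pools all decisions, and every misclassification is simultaneously a false positive for one class and a false negative for another, so micro-precision = micro-recall = micro-F1 = accuracy. A quick check on the same toy data:
from sklearn import metrics

TestY = [0, 1, 1, 0, 1, 1, 1, 0, 0, 0]
pred = [0, 1, 1, 0, 0, 1, 0, 1, 0, 0]
p, r, f, _ = metrics.precision_recall_fscore_support(TestY, pred, average='micro')
print(p, r, f, metrics.accuracy_score(TestY, pred))  # all four values are 0.7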
