Below are my .dat and .mod files for AMPL.
I am getting the following error:
hw3.dat, line 14 (offset 262):
b[1] already defined
context: 1 1 >>> ; <<<
hw3.dat, line 14 (offset 262):
b[1] already defined
context: 1 1 >>> ; <<<
hw3.dat, line 14 (offset 262):
b[1] already defined
context: 1 1 >>> ; <<<
hw3.dat, line 14 (offset 262):
b[1] already defined
context: 1 1 >>> ; <<<
hw3.dat, line 14 (offset 262):
b[1] already defined
MODEL FILE:
# AMPL model for the Minimum Cost Network Flow Problem
#
# By default, this model assumes that b[i] = 0, c[i,j] = 0,
# l[i,j] = 0 and u[i,j] = Infinity.
#
# Parameters not specified in the data file will get their default values.
reset;
options solver cplex;
set NODES; # nodes in the network
set ARCS within {NODES, NODES}; # arcs in the network
set english;
set french;
param b {NODES} default 0; # supply/demand for node i
param c {ARCS} default 0; # cost of one unit of flow on arc (i,j)
param l {ARCS} default 0; # lower bound on flow on arc(i,j)
param u {ARCS} default Infinity; # upper bound on flow on arc(i,j)
var x {ARCS}; # flow on arc (i,j)
maximize cost: sum{(i,j) in ARCS} c[i,j] * x[i,j]; #objective: minimize
#arc flow cost
subject to flow_balance {i in NODES}:
sum{j in NODES: (i,j) in ARCS} x[i,j] - sum{j in NODES: (j,i) in ARCS}
x[j,i] = b[i];
subject to capacity {(i,j) in ARCS}: l[i,j] <= x[i,j] <= u[i,j];
subject to flow_conservation {i in english}:
sum{j in french} x[i,j] = 1;
subject to flow_bounds {(i,j) in ARCS}:
x[i,j] = 0 || x[i,j] <= 1;
#subject to Number: {(i,j) in ARCS} x[i,j]=0 || x[i,j] = 1;
data hw3.dat
solve;
printf "The optimal pair assignments with compatibility scores are: \n";
for {i in english, j in french} {
printf "English Child %d and French Child %d with compatibility score %d \n", i, j, c[i,j];
}
data;
set NODES :=e1 e2 e3 f1 f2 f3;
set ARCS:= (e1,f1) (e1,f2) (e1,f3) (e2,f1) (e2,f2) (e2,f3) (e3,f1) (e3,f2) (e3,f3);
set english:=e1 e2 e3;
set french:=f1 f2 f3;
param: b:=
1 1
1 1
1 1
1 1
1 1
1 1;
param: c l u:=
[e1,f1] 6 0 10
[e1,f2] 3 0 10
[e1,f3] 2 0 10
[e2,f1] 9 0 10
[e2,f2] 5 0 10
[e2,f3] 1 0 10
[e3,f1] 4 0 10
[e3,f2] 10 0 10
[e3,f3] 8 0 10
;
It keeps saying that b is already defined, but I didn't define it twice. I tried changing the name from b to something else, and it still shows the same error.
Can someone help, please?
In your data file you have:
param: b:=
1 1
1 1
1 1
1 1
1 1
1 1;
Each of those lines means b[1] = 1, and that is why you are getting the error "b[1] already defined".
Since b is indexed over NODES (param b {NODES} default 0;) you should have something like the following instead:
param: b :=
e1 1
e2 1
e3 1
f1 1
f2 1
f3 1;
I have a table created with the following script:
n=15
ts=now()+1..n * 1000 * 100
status=rand(0 1 ,n)
val=rand(100,n)
t=table(ts,status,val)
select * from t order by ts
where
ts is the time, status indicates the device status (0: down; 1: running), and val indicates the running time.
Suppose I have the following data:
ts status val
2023.01.03T18:17:17.386 1 58
2023.01.03T18:18:57.386 0 93
2023.01.03T18:20:37.386 0 24
2023.01.03T18:22:17.386 1 87
2023.01.03T18:23:57.386 0 85
2023.01.03T18:25:37.386 1 9
2023.01.03T18:27:17.386 1 46
2023.01.03T18:28:57.386 1 3
2023.01.03T18:30:37.386 0 65
2023.01.03T18:32:17.386 1 66
2023.01.03T18:33:57.386 0 56
2023.01.03T18:35:37.386 0 42
2023.01.03T18:37:17.386 1 82
2023.01.03T18:38:57.386 1 95
2023.01.03T18:40:37.386 0 19
So how do I calculate the longest continuous running time? For example, the 7th and 8th records both have status 1, so I want to sum their val values; likewise for the 14th-15th records.
You can use the built-in function segment to group consecutive identical values. The full script is as follows:
select first(ts), sum(iif(status==1, val, 0)) as total_val
from t
group by segment(status)
having sum(iif(status==1, val, 0)) > 0
The result:
segment_status first_ts total_val
0 2023.01.03T18:17:17.386 58
3 2023.01.03T18:22:17.386 87
5 2023.01.03T18:25:37.386 58
9 2023.01.03T18:32:17.386 66
12 2023.01.03T18:37:17.386 177
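If you want the single longest continuous running time rather than the total for every run, take the maximum of total_val over these groups. For illustration only, here is a Python/pandas sketch of the same run-grouping idea (this is not DolphinDB code; the variable names are mine and the values come from the sample data above):

import pandas as pd

# same status/val columns as the sample data above
df = pd.DataFrame({
    "status": [1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0],
    "val":    [58, 93, 24, 87, 85, 9, 46, 3, 65, 66, 56, 42, 82, 95, 19],
})

# a new run id starts whenever status changes (same idea as segment(status))
run_id = (df["status"] != df["status"].shift()).cumsum()

# sum val within each running (status == 1) segment and keep the largest
longest = df[df["status"] == 1].groupby(run_id)["val"].sum().max()
print(longest)  # 177 for this sample, matching the last row of the result above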
I am doing machine learning. Here I want to find the best triple (max_samples, n_trees, and threshold) that gives the greatest performance in terms of area under the ROC curve and area under the precision-recall curve.
Here is the code:
def meilleur_triplet(x,classes):
for n_trees in np.arange(100,160,10):
for sample_size in np.arange(0.1,1,0.1):
for threshold in np.arange(0.4,1,0.1):
model=IforestLocal(sample_size,n_trees)
model.fit(x)
y_pred,y_score=model.predict(x,threshold)
auc=roc_auc_score(classes,y_pred)
auc_pr=average_precision_score(classes,y_pred)
Now, when I use max_samples with a range of ints I don't get an error; however, if it is a float I get the following error:
TypeError Traceback (most recent call last)
Input In [201], in <cell line: 1>()
----> 1 meilleur_triplet(X_glass,y_glass)
Input In [200], in meilleur_triplet(x, classes)
6 for threshold in np.arange(0.4,1,0.1):#(0.4,1,0.1)
8 model=IforestLocal(sample_size,n_trees)
----> 9 model.fit(x)
File ~\Desktop\THESE\Maurras\Code_Maurras\iforest_D.py:45, in IsolationForest.fit(self, X)
42 self.sample_size = len_x
44 for i in range(self.n_trees):
---> 45 sample_idx = random.sample(list(range(len_x)), self.sample_size)
46 # TODO: Must be deleted before compute the memory consumption of the methods
47 self.samples.append(sample_idx)
File ~\anaconda3\lib\random.py:450, in Random.sample(self, population, k, counts)
448 if not 0 <= k <= n:
449 raise ValueError("Sample larger than population or is negative")
--> 450 result = [None] * k
451 setsize = 21 # size of a small set minus size of an empty list
452 if k > 5:
TypeError: can't multiply sequence by non-int of type 'numpy.float64'
This is where I called the function
meilleur_triplet(X_glass,y_glass)
Thank you, please help me.
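For reference, the traceback ends inside random.sample(list(range(len_x)), self.sample_size), and random.sample requires its second argument k to be an int, which is why a numpy.float64 sample size blows up at [None] * k. A minimal sketch of the conversion, assuming a float sample_size is meant as a fraction of the data (draw_sample_indices is just an illustrative helper, not part of IforestLocal):

import random

def draw_sample_indices(n_rows, sample_size):
    # random.sample needs an integer k; treat a float as a fraction of the rows
    if isinstance(sample_size, float):
        k = max(1, int(sample_size * n_rows))
    else:
        k = int(sample_size)
    return random.sample(range(n_rows), k)

# e.g. in the grid search, pass an integer count instead of the raw float:
# model = IforestLocal(max(1, int(sample_size * len(x))), n_trees)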
I was wondering, how would I make a simple Lua string or an entire piece of code look like C++-compiled code, but still run as regular vanilla Lua?
print("Test string") -- How would this look like C++ compiler code?
With Lua you cannot directly dump print itself to a binary format, as far as I know.
Dumping a function to a binary string, however, is easy with your own defined functions...
> -- Lua 5.4
> myfunc = function() print("Teststring") return end
> string.dump(myfunc, true)
uaT�
�
xV(w#����
��DGG��print�Teststring������
> load(string.dump(myfunc, true))()
Teststring
As you can see, just as in a compiled C binary, the constants are not obfuscated.
You can get more obfuscation by converting the binary string to its byte values...
> string.dump(myfunc, true):byte(1, -1)
27 76 117 97 84 0 25 147 13 10 26 10 4 8 8 120 86 0 0 0 0 0 0 0 0 0 0 0 40 119 64 1 128 129 129 0 0 2 133 11 0 0 0 131 128 0 0 68 0 2 1 71 0 1 0 71 0 1 0 130 4 134 112 114 105 110 116 4 139 84 101 115 116 115 116 114 105 110 103 129 0 0 0 128 128 128 128 128
...and for converting it back later, let's put it into a table...
> byte_code_tab = {string.dump(myfunc, true):byte(1, -1)}
> table.concat(byte_code_tab,',')
27,76,117,97,84,0,25,147,13,10,26,10,4,8,8,120,86,0,0,0,0,0,0,0,0,0,0,0,40,119,64,1,128,129,129,0,0,2,133,11,0,0,0,131,128,0,0,68,0,2,1,71,0,1,0,71,0,1,0,130,4,134,112,114,105,110,116,4,139,84,101,115,116,115,116,114,105,110,103,129,0,0,0,128,128,128,128,128
...now a function is needed to get it back...
> bytes_dec = function(tab) local txt = '' for k, v in pairs(tab) do txt = txt .. tostring(v):char() end return txt end
> bytes_dec(byte_code_tab)
uaT�
�
xV(w#����
��DGG��print�Teststring������
> load(bytes_dec(byte_code_tab))()
Teststring
EDIT
To show how this works with a single Lua file that returns a table with a __call metamethod, check out this...
-- obfsc.lua
return setmetatable({27,76,117,97,84,0,25,147,13,10,26,10,4,8,8,120,86,0,0,0,0,0,0,0,0,0,0,0,40,119,64,1,128,129,129,0,0,2,133,11,0,0,0,131,128,0,0,68,0,2,1,71,0,1,0,71,0,1,0,130,4,134,112,114,105,110,116,4,139,84,101,115,116,115,116,114,105,110,103,129,0,0,0,128,128,128,128,128},
{__call = function(self, ...)
local txt = ''
for k, v in pairs(self) do
txt = txt .. tostring(v):char()
end
return load(txt)()
end})
...the bytes_dec function is stored in the __call metamethod...
$ /usr/local/bin/lua
Lua 5.4.4 Copyright (C) 1994-2022 Lua.org, PUC-Rio
> require('obfsc')
table: 0x565d3650 ./obfsc.lua
> require('obfsc')()
Teststring
...and it also does the load().
But it is up to you where you store bytes_dec().
Another nice method is ROT.
It's very simple and also old, but good enough for de/obfuscation.
An impression...
$ /bin/lua
Lua 5.1.5 Copyright (C) 1994-2012 Lua.org, PUC-Rio
> rot=require('rot')
> -- Lets rotate the Banner
> print(rot('Lua 5.1.5 Copyright (C) 1994-2012 Lua.org, PUC-Rio'))
5!`unqnu``/092)'(4`hi`qyytmrpqr`
5!n/2'l`m)/ 51
> -- Now read source of rot.lua into rot_src and print it
> rot_src = io.open('rot.lua'):read('*a')
> print(rot_src)
-- rot.lua
local rotator = function(...)
local args, rot, c = {...}, {}, ''
for i = 1, 63 do rot[c.char(i)] = c.char(i + 64) end
for i = 64, 127 do rot[c.char(i)] = c.char(i - 64) end
return args[1]:gsub('.', rot)
end
return rotator
> -- Obfuscate the source and print it
> rot_obfsc = rot(rot_src)
> print(rot_obfsc)
mm`2/4n,5!J,/#!,`2/4!4/2`}`&5.#4)/.hnnniJ,/#!,`!2'3l`2/4l`#`}`;nnn=l`;=l`ggJJ&/2`)`}`ql`vs`$/`2/4#(!2h)i`}`#n#(!2h)`k`vti`%.$J&/2`)`}`vtl`qrw`$/`2/4#(!2h)i`}`#n#(!2h)`m`vti`%.$JJ2%452.`!2'3z'35"hgngl`2/4iJ%.$JJ2%452.`2/4!4/2J
> -- Deobfuscate and print on the fly
> print(rot(rot_obfsc))
-- rot.lua
local rotator = function(...)
local args, rot, c = {...}, {}, ''
for i = 1, 63 do rot[c.char(i)] = c.char(i + 64) end
for i = 64, 127 do rot[c.char(i)] = c.char(i - 64) end
return args[1]:gsub('.', rot)
end
return rotator
236
I am so confused. I have tested a program for myself with the following MATLAB code:
feature_train=[1 1 2 1.2 1 1 700 709 708 699 678];
No_of_Clusters = 2;
No_of_Iterations = 10;
[m,v,w]=gaussmix(feature_train,[],No_of_Iterations,No_of_Clusters);
feature_ubm=[1000 1001 1002 1002 1000 1060 70 79 78 99 78 23 32 33 23 22 30];
No_of_Clusters = 3;
No_of_Iterations = 10;
[mubm,vubm,wubm]=gaussmix(feature_ubm,[],No_of_Iterations,No_of_Clusters);
feature_test=[2 2 2.2 3 1 600 650 750 800 658];
[lp_train,rp,kh,kp]=gaussmixp(feature_test,m,v,w);
[lp_ubm,rp,kh,kp]=gaussmixp(feature_test,mubm,vubm,wubm);
However, the result puzzles me, because feature_test should be classified as feature_train, not feature_ubm. As you see below, the probability of feature_ubm appears to be higher than that of feature_train!?
Can anyone explain to me what the problem is?
Is the problem related to the gaussmixp and gaussmix MATLAB functions?
sum(lp_ubm)
ans =
-3.4108e+06
sum(lp_train)
ans =
-1.8658e+05
As you see below, the probability of feature_ubm appears to be higher than that of feature_train!?
You see exactly the opposite: although the absolute value for ubm is bigger, you are comparing negative numbers, and
sum(lp_train) > sum(lp_ubm)
hence
P(test|train) > P(test|ubm)
So your test chunk is correctly classified as train, not as ubm.
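To make the comparison concrete, here is the arithmetic on the two sums printed above (plain Python, used only because the point is language-independent; the variable names are mine and the values are the log-likelihood sums from the question):

lp_train_sum = -1.8658e5   # sum(lp_train) from the question
lp_ubm_sum   = -3.4108e6   # sum(lp_ubm) from the question

# log is monotonic, so the larger (less negative) log-likelihood sum
# corresponds to the larger likelihood, i.e. P(test|train) > P(test|ubm).
print(lp_train_sum > lp_ubm_sum)   # True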