Consider:
: cell-matrix
  create ( width height "name" ) over , * cells allot
  does> ( x y -- addr ) dup cell+ >r @ * + cells r> + ;
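For reference, here is the same definition with each step commented (a plain restatement of the code above; @ fetches one cell, as usual):
: cell-matrix
  create ( width height "name" )
    over ,          \ store the width in the first cell of the body
    * cells allot   \ reserve width*height cells for the data
  does> ( x y -- addr )
    dup cell+ >r    \ save the address of the data area (just past the width cell)
    @ * +           \ fetch the width and compute the linear index y*width+x
    cells r> + ;    \ scale the index to bytes and add the data base address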
This defining word creates a matrix; you make one like this:
5 5 cell-matrix test
And then you store values into it like this:
36 0 0 test !
(I think)
I can't find anything on the Internet that explains this. How do you display the contents of the matrix?
If you want to print the contents of the whole matrix, you can do something like:
: .row ( addr u -- addr' u ) tuck 0 do @+ . loop swap cr ;
: .matrix ( u addr -- ) >body @+ rot 0 do .row loop 2drop ;
Note that your cell-matrix doesn't save the number of rows, so you have to supply this number to .matrix yourself, e.g. like this:
2 3 cell-matrix foo
3 ' foo .matrix
Logically simple:
100 0 0 test ! ok
400 1 0 test ! ok
0 0 test @ . 100 ok
1 0 test @ . 400 ok
Below are my .dat and model files for AMPL.
I am getting the following error:
hw3.dat, line 14 (offset 262):
b[1] already defined
context: 1 1 >>> ; <<<
(the same error is repeated several more times)
MODEL FILE:
# AMPL model for the Minimum Cost Network Flow Problem
#
# By default, this model assumes that b[i] = 0, c[i,j] = 0,
# l[i,j] = 0 and u[i,j] = Infinity.
#
# Parameters not specified in the data file will get their default values.
reset;
options solver cplex;
set NODES; # nodes in the network
set ARCS within {NODES, NODES}; # arcs in the network
set english;
set french;
param b {NODES} default 0; # supply/demand for node i
param c {ARCS} default 0; # cost of one unit of flow on arc (i,j)
param l {ARCS} default 0; # lower bound on flow on arc(i,j)
param u {ARCS} default Infinity; # upper bound on flow on arc(i,j)
var x {ARCS}; # flow on arc (i,j)
maximize cost: sum{(i,j) in ARCS} c[i,j] * x[i,j]; #objective: minimize
#arc flow cost
subject to flow_balance {i in NODES}:
sum{j in NODES: (i,j) in ARCS} x[i,j] - sum{j in NODES: (j,i) in ARCS}
x[j,i] = b[i];
subject to capacity {(i,j) in ARCS}: l[i,j] <= x[i,j] <= u[i,j];
subject to flow_conservation {i in english}:
sum{j in french} x[i,j] = 1;
subject to flow_bounds {(i,j) in ARCS}:
x[i,j] = 0 || x[i,j] <= 1;
#subject to Number: {(i,j) in ARCS} x[i,j]=0 || x[i,j] = 1;
data hw3.dat
solve;
printf "The optimal pair assignments with compatibility scores are: \n";
for {i in english, j in french} {
printf "English Child %d and French Child %d with compatibility score %d \n", i, j, c[i,j];
}
data;
set NODES :=e1 e2 e3 f1 f2 f3;
set ARCS:= (e1,f1) (e1,f2) (e1,f3) (e2,f1) (e2,f2) (e2,f3) (e3,f1) (e3,f2) (e3,f3);
set english:=e1 e2 e3;
set french:=f1 f2 f3;
param: b:=
1 1
1 1
1 1
1 1
1 1
1 1;
param: c l u:=
[e1,f1] 6 0 10
[e1,f2] 3 0 10
[e1,f3] 2 0 10
[e2,f1] 9 0 10
[e2,f2] 5 0 10
[e2,f3] 1 0 10
[e3,f1] 4 0 10
[e3,f2] 10 0 10
[e3,f3] 8 0 10
;
It keeps saying that b is already defined, but I didn't define it twice. I tried changing the name from b to something else, but it still shows the same error.
Can someone help, please?
In your data file you have:
param: b:=
1 1
1 1
1 1
1 1
1 1
1 1;
Each of those lines means b[1] = 1, which is why you are getting the error "b[1] already defined".
Since b is indexed over NODES (param b {NODES} default 0;) you should have something like the following instead:
param: b :=
e1 1
e2 1
e3 1
f1 1
f2 1
f3 1;
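To double-check that the parameter was read as intended, you can display it at the AMPL prompt (display is a standard AMPL command); it should print one entry per node, along these lines:
ampl: display b;
b [*] :=
e1 1   e2 1   e3 1   f1 1   f2 1   f3 1
;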
There are two MxN 2D arrays:
rand bit [M-1:0] src [N-1:0];
rand bit [M-1:0] dst [N-1:0];
Both of them are randomized separately so that each has exactly P bits set to 1'b1 and the rest set to 1'b0.
A third MxN array of integers named 'map' establishes a one-to-one mapping between the two arrays 'src' and 'dst'.
rand int map [N-1:0][M-1:0];
I need a constraint for 'map' such that, after randomization, for each element src[i][j] where src[i][j] == 1'b1, map[i][j] == M*k+l for some dst[k][l] == 1. The pair (k, l) must be unique for each non-zero element of map.
To give an example:
Let M = 3 and N = 2.
Let src be
[1 0 1
0 1 0]
Let dst be
[0 1 1
1 0 0]
Then one possible randomization of 'map' will be:
[3 0 1
0 2 0]
In the above map:
3 indicates pointing from src[0,0] to dst[1,0] (3 = 1*M+0)
1 indicates pointing from src[0,2] to dst[0,1] (1 = 0*M+1)
2 indicates pointing from src[1,1] to dst[0,2] (2 = 0*M+2)
This is very difficult to express as a SystemVerilog constraint because:
- there is no way to conditionally select elements of an array to be unique, and
- you cannot have random variables as part of an index expression to an array element.
Since you are randomizing src and dst separately, it might be easier to compute the pointers and then randomly choose the pointers to fill in the map.
module top;
  parameter M=3, N=4, P=4;
  bit [M-1:0] src [N];
  bit [M-1:0] dst [N];
  int map [N][M];
  int pointers[$];
  initial begin
    // randomize src and dst separately, each with exactly P ones overall
    assert( randomize(src) with { src.sum() with ($countones(item)) == P; } );
    assert( randomize(dst) with { dst.sum() with ($countones(item)) == P; } );
    // collect the pointer K*M+L for every 1-bit in dst, then shuffle them
    foreach(dst[K,L]) if (dst[K][L]) pointers.push_back(K*M+L);
    pointers.shuffle();
    // hand one shuffled pointer to each 1-bit of src; the remaining
    // map entries keep their default value of 0
    foreach(map[I,J]) if (src[I][J]) map[I][J] = pointers.pop_back();
    $displayb("%p\n%p", src, dst);
    $display("%p", map);
  end
endmodule
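With this approach the shuffle supplies the required uniqueness for free (each pointer is popped exactly once), and no constraint-solver interaction between map and the two arrays is needed at all.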
I am new to the Ruby scripting language.
I was learning how to generate bytecode in Ruby, and I found the answer for generating it.
But I don't know how to run the generated bytecode. I searched the Internet, but I didn't find an answer for this.
Generating the bytecode:
puts RubyVM::InstructionSequence.compile("x = 50; x > 100 ? 'foo' : 'bar'").disassemble
The output is:
== disasm: <RubyVM::InstructionSequence:<compiled>#<compiled>>==========
local table (size: 2, argc: 0 [opts: 0, rest: -1, post: 0, block: -1] s1)
[ 2] x
0000 trace 1 ( 1)
0002 putobject 50
0004 setlocal x
0006 trace 1
0008 getlocal x
0010 putobject 100
0012 opt_gt <ic:1>
0014 branchunless 20
0016 putstring "foo"
0018 leave
0019 pop
0020 putstring "bar"
0022 leave
I don't know how to execute the same script using the generated bytecode.
Can anyone please explain how to execute this?
Thanks in advance!
TL;DR: you are looking for the .eval method.
The .compile method returns an instance of the RubyVM::InstructionSequence class, which has an .eval method that evaluates/runs your "compiled" instructions.
iseq = RubyVM::InstructionSequence.compile("x = 50; x > 100 ? 'foo' : 'bar'")
iseq.eval # => "bar"
Or, as a one-liner:
RubyVM::InstructionSequence.compile("x = 50; x > 100 ? 'foo' : 'bar'").eval
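If you also want to persist the bytecode and execute it later, Ruby 2.3+ additionally provides to_binary and load_from_binary. A small sketch (note that the binary format is version-specific, so it is not portable across Ruby versions):
iseq = RubyVM::InstructionSequence.compile("x = 50; x > 100 ? 'foo' : 'bar'")
File.binwrite("code.iseq", iseq.to_binary)  # save the compiled bytecode

loaded = RubyVM::InstructionSequence.load_from_binary(File.binread("code.iseq"))
loaded.eval # => "bar"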
I've found a strange piece of code in the Lua documentation:
function trim8(s)
  local i1,i2 = find(s,'^%s*')
  if i2 >= i1 then s = sub(s,i2+1) end
  local i1,i2 = find(s,'%s*$')
  if i2 >= i1 then s = sub(s,1,i1-1) end
  return s
end
Why is local used once again with i1 and i2? Aren't they already declared among local variables? Do you have to repeat the local keyword every time you want to assign them?
No, it is not necessary to use local over and over. The variables i1 and i2 are local to the scope of the function because of the first line alone.
While it should not be done, there is nothing wrong with declaring the same variables again: each new declaration just gets a fresh slot on the stack and shadows the older one.
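Here is a small example (not from the original code) where the difference is actually observable: a closure created between the two declarations keeps seeing the first variable, because the second local creates a fresh one instead of updating the first:
local a = 0
local f = function() return a end  -- captures the first 'a' as an upvalue
local a = 1                        -- a new variable that shadows the first
print(f(), a)                      --> 0   1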
The following is the instruction output for a simple function:
function t()
  local i = 2
  local i = 3
end
t()
function <temp.lua:1,4> (3 instructions, 12 bytes at 00658990)
0 params, 2 slots, 0 upvalues, 2 locals, 2 constants, 0 functions
1 [2] LOADK 0 -1 ; 2
2 [3] LOADK 1 -2 ; 3
3 [4] RETURN 0 1
and updating the second local i = 3 to just i = 3:
function t()
  local i = 2
  i = 3
end
t()
function <temp.lua:1,4> (3 instructions, 12 bytes at 00478990)
0 params, 2 slots, 0 upvalues, 1 local, 2 constants, 0 functions
1 [2] LOADK 0 -1 ; 2
2 [3] LOADK 0 -2 ; 3
3 [4] RETURN 0 1
Notice the difference at the second instruction.
Apart from that, the function is quite inefficient. You can instead use the following:
function Trim(sInput)
  return sInput:match "^%s*(.-)%s*$"
end
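For example:
print(Trim("   hello world   "))  --> hello world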
Technically, using local or not in the second declaration is not equivalent: a second local declares another variable. However, in your example code the behavior is basically the same. Check this simpler code:
local a = 0
local a = 1
and
local a = 0
a = 1
Running luac -p -l on each outputs the following:
0+ params, 2 slots, 0 upvalues, 2 locals, 2 constants, 0 functions
1 [1] LOADK 0 -1 ; 0
2 [2] LOADK 1 -2 ; 1
3 [2] RETURN 0 1
and
0+ params, 2 slots, 0 upvalues, 1 local, 2 constants, 0 functions
1 [1] LOADK 0 -1 ; 0
2 [2] LOADK 0 -2 ; 1
3 [2] RETURN 0 1
I'm trying to implement a simple NN in Torch to learn more about it. I created a very simple dataset: the binary numbers from 0 to 15, and my goal is to classify them into two classes: class 1 are the numbers 0-3 and 12-15, class 2 are the remaining ones. The following code is what I have now (I have removed only the data loading routine):
require 'torch'
require 'nn'
data = torch.Tensor( 16, 4 )
class = torch.Tensor( 16, 1 )
network = nn.Sequential()
network:add( nn.Linear( 4, 8 ) )
network:add( nn.ReLU() )
network:add( nn.Linear( 8, 2 ) )
network:add( nn.LogSoftMax() )
criterion = nn.ClassNLLCriterion()
for i = 1, 300 do
   prediction = network:forward( data )
   --print( "prediction: " .. tostring( prediction ) )
   --print( "class: " .. tostring( class ) )
   loss = criterion:forward( prediction, class )
   network:zeroGradParameters()
   grad = criterion:backward( prediction, class )
   network:backward( data, grad )
   network:updateParameters( 0.1 )
end
This is what the data and class tensors look like:
0 0 0 0
0 0 0 1
0 0 1 0
0 0 1 1
0 1 0 0
0 1 0 1
0 1 1 0
0 1 1 1
1 0 0 0
1 0 0 1
1 0 1 0
1 0 1 1
1 1 0 0
1 1 0 1
1 1 1 0
1 1 1 1
[torch.DoubleTensor of size 16x4]
2
2
2
2
1
1
1
1
1
1
1
1
2
2
2
2
[torch.DoubleTensor of size 16x1]
Which is what I expect it to be. However, when running this code, I get the following error on the line loss = criterion:forward( prediction, class ):
torch/install/share/lua/5.1/nn/ClassNLLCriterion.lua:69: attempt to
perform arithmetic on a nil value
When I modify the training routine like this (processing a single data point at a time instead of all 16 in a batch), it works and the network successfully learns to recognize the two classes:
for k = 1, 300 do
   for i = 1, 16 do
      prediction = network:forward( data[i] )
      --print( "prediction: " .. tostring( prediction ) )
      --print( "class: " .. tostring( class ) )
      loss = criterion:forward( prediction, class[i] )
      network:zeroGradParameters()
      grad = criterion:backward( prediction, class[i] )
      network:backward( data[i], grad )
      network:updateParameters( 0.1 )
   end
end
I'm not sure what might be wrong with the "batch processing" I'm trying to do. A brief look at ClassNLLCriterion didn't help; it seems I'm giving it the expected input (see below), but it still fails. The input it receives (the prediction and class tensors) looks like this:
-0.9008 -0.5213
-0.8591 -0.5508
-0.9107 -0.5146
-0.8002 -0.5965
-0.9244 -0.5055
-0.8581 -0.5516
-0.9174 -0.5101
-0.8040 -0.5934
-0.9509 -0.4884
-0.8409 -0.5644
-0.8922 -0.5272
-0.7737 -0.6186
-0.9422 -0.4939
-0.8405 -0.5648
-0.9012 -0.5210
-0.7820 -0.6116
[torch.DoubleTensor of size 16x2]
2
2
2
2
1
1
1
1
1
1
1
1
2
2
2
2
[torch.DoubleTensor of size 16x1]
Can someone help me out here? Thanks.
Experience has shown that nn.ClassNLLCriterion expects its target to be a 1D tensor of size batch_size, or a scalar. Your class is a 2D tensor (batch_size x 1), but class[i] is 1D; that is why your non-batch version works.
So, this will solve your problem:
class = class:view(-1)
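After this reshape, class is a 1D tensor of size 16, and the batch forward/backward calls above work unchanged.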
Alternatively, you can replace
network:add( nn.LogSoftMax() )
criterion = nn.ClassNLLCriterion()
with the equivalent:
criterion = nn.CrossEntropyCriterion()
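(nn.CrossEntropyCriterion combines nn.LogSoftMax and nn.ClassNLLCriterion in a single class, which is why the LogSoftMax layer is dropped from the network in this variant.)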
The interesting thing is that nn.CrossEntropyCriterion is also able to take a 2D tensor. Why is nn.ClassNLLCriterion not?