Running Pktgen with Lua scripts - lua

I am currently working with pktgen in order to achieve high-rate packet generation for a project. I am using pktgen version 2.9.5 with DPDK version 2.1.0. It works perfectly fine (I am able to generate and send packets at a 10 Gb/s rate) until I try to use Lua scripts for formatting packets.
As a matter of fact, the example scripts that ship with the sources do not always seem to behave as expected. The simplest scripts work perfectly fine (HelloWorld.lua, for example; I am also able to set packet size, rate, or burst with script commands):
package.path = package.path ..";?.lua;test/?.lua;app/?.lua;"

pktgen.set("all", "count", 0);
pktgen.set("all", "rate", 50);
pktgen.set("all", "size", 256);
pktgen.set("all", "burst", 128);
pktgen.start(0)
(The code above does work.)
However, I run into issues when I try to define a packet format directly with a table, as in this example:
-- Lua uses '--' as comment to end of line; read the
-- manual for more comment options.
local seq_table = { -- entries can be in any order
    ["eth_dst_addr"] = "0011:4455:6677",
    ["eth_src_addr"] = "0011:1234:5678",
    ["ip_dst_addr"] = "10.12.0.1",
    ["ip_src_addr"] = "10.12.0.1/16", -- the 16 is the size of the mask value
    ["sport"] = 9, -- Standard port numbers
    ["dport"] = 10, -- Standard port numbers
    ["ethType"] = "ipv4", -- ipv4|ipv6|vlan
    ["ipProto"] = "udp", -- udp|tcp|icmp
    ["vlanid"] = 1, -- 1 - 4095
    ["pktSize"] = 128 -- 64 - 1518
};
-- seqTable( seq#, portlist, table );
pktgen.seqTable(0, "all", seq_table );
pktgen.set("all", "seqCnt", 1);
When I run this script, Pktgen usually gives me an
Unknown Ethertype 0x0000
error as soon as I enter start 0 on the command line (0 being the port I use to transmit packets).
The same problem occurs when I try the main.lua script.
Long story short, I have trouble understanding how the pktgen.seqTable function works and why it does not work properly in my case. I have not found any really helpful documentation on the subject.
The command I use to launch scripts is:
sudo -E $PKTGEN_CMD/pktgen -c 0x7 -n 4 -- -m "1.0,2.1" -f test/set_seq.lua
(test/set_seq.lua being one of the example scripts).

I got the same script running with a small modification to enable VLAN in the generated packets. Please refer to the changes below:
local seq_table = { -- entries can be in any order
    ["eth_dst_addr"] = "0011:4455:6677",
    ["eth_src_addr"] = "0011:1234:5678",
    ["ip_dst_addr"] = "10.12.0.1",
    ["ip_src_addr"] = "10.12.0.1/16", -- the 16 is the size of the mask value
    ["sport"] = 9, -- Standard port numbers
    ["dport"] = 10, -- Standard port numbers
    ["ethType"] = "ipv4", -- ipv4|ipv6|vlan
    ["ipProto"] = "udp", -- udp|tcp|icmp
    ["vlanid"] = 1100, -- 1 - 4095
    ["pktSize"] = 128 -- 64 - 1518
};
-- seqTable( seq#, portlist, table );
pktgen.seqTable(0, "all", seq_table );
pktgen.vlan("all", "enable");
pktgen.set("all", "seqCnt", 1);
-- seqTable( seq#, portlist, table );
pktgen.seqTable(0, "all", seq_table );
pktgen.vlan("all", "enable");
pktgen.set("all", "seqCnt", 1);
Executing the modified script no longer produces the "Unknown Ethertype 0x0000" error when start 0 is entered on the command line (0 being the port used to transmit packets).
Note: the Pktgen documentation mentions a fix for the range and VLAN ID on page 75: "3.0.01 - Fixed the Range sequence and VLAN problem." Hence I recommend using the latest Pktgen (I have tested this on Pktgen 21.11.0).

Related

How do you do algebra correctly in Lua?

So I have recently been attempting to do algebra in Lua, and the code below is the closest way of doing it I could come up with. Is this even how you are supposed to do it correctly? Another problem I find with doing algebra in Lua is that in algebra a constant and a variable often appear right beside each other, but Lua does not accept a number and a letter beside each other and raises an error. Is there any way I could go about doing algebra inside Lua without getting errors?
local a = 5
-- ALGEBRA?
print(((a * 2) / 10) + 15 - 20)
-- 5 * 2 = 10
-- 10/10 = 1
-- 1 + 15 = 16
-- 16 - 20 = -4
-- The problem lies right here: a constant right next to a variable is not valid Lua :/
local x = 10
print(5x + 5)
print(5x + 5) will trigger a syntax error; Lua does not allow implicit multiplication. The fix is trivial: explicitly use the multiplication operator, as in your first example. print(5*x + 5) works just fine.
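A minimal runnable version of the corrected expression:

local x = 10
print(5 * x + 5) --> prints 55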

NodeMCU/Lua performance issues

I'm adding some code to the ws2812 module to provide a kind of reusable buffer in which LED values can be stored.
The current version is there.
I have two problems.
First I wanted to have some "OO-style" interface. So I did:
local buffer = ws2812.newBuffer(300);
for j = 0, 299 do
    buffer:set(j, 255, 255, 255)
end
buffer:write(pin);
The problem here is that buffer:set is resolved on each loop iteration, which is costly (this loop takes ~20.2 ms):
8 [2] FORPREP 1 6 ; to 15
9 [3] SELF 5 0 -7 ; "set"
10 [3] MOVE 7 4
11 [3] LOADK 8 -8 ; 255
12 [3] LOADK 9 -8 ; 255
13 [3] LOADK 10 -8 ; 255
14 [3] CALL 5 6 1
15 [2] FORLOOP 1 -7 ; to 9
I found a workaround for this problem, but it doesn't look "nice":
local buffer = ws2812.newBuffer(300);
local set = getmetatable(buffer).set;
for j = 0, 299 do
    set(buffer, j, 255, 255, 255)
end
buffer:write(pin);
It works well (4.3 ms for the loop, more than 4 times faster), but it feels more like a hack. :/ Is there a better way to "cache" the buffer:set resolution?
Second question, in my C code, I use:
ws2812_buffer * buffer = (ws2812_buffer*)luaL_checkudata(L, 1, "ws2812.buffer");
This gives back my buffer pointer and checks that it really is a ws2812.buffer. But this call is sloooooow: ~50 µs on my ESP8266. If it's done on each call (for my 300 buffer:set calls, for example), that's ~15 ms!
Is there a better way to fetch some userdata and check its type, or should I add some "canary" at the beginning of my structure and do my own check (which would be almost "free" compared to 50 µs)?
To make it look like less of a hack, you could try using
local set = buffer.set
This is essentially the same code, but without the getmetatable as the metatable is used implicitly through the __index metamethod.
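Applied to the loop from the question, the cached lookup would read like this (a sketch, assuming the ws2812 buffer API shown above):

local buffer = ws2812.newBuffer(300)
local set = buffer.set -- resolved once here, through the __index metamethod
for j = 0, 299 do
    set(buffer, j, 255, 255, 255)
end
buffer:write(pin)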
On our project we made our own implementation of luaL_checkudata.
One option, as you similarly suggested, was to use a wrapper object that holds the type. Since all userdata was assumed to be wrapped, we could use the wrapper to get and confirm the type of the userdata. But no benchmarking was done, and testing metatables was used instead.
I would say testing the metatables is slower than the wrapping, since luaL_checkudata does a lot of work to get and test the metatables, whereas with wrapping we have access to the type directly. However, only benchmarking will tell for sure.

Lua: Hexadecimal Word to Binary Conversion

I'm attempting to create a Lua program to monitor periodic status pings of a slave device. The slave device sends its status in 16-bit hexadecimal words, which I need to convert to a binary string, since each bit pertains to a property of the device. I can receive the input string, and I have a table containing 16 keys, one for each parameter. But I am having a difficult time understanding how to convert the hexadecimal word into a string of 16 bits so I can monitor it.
Here is a basic version of the function I'm starting to work on.
function slave_Status(IP, Port, Name)
    status = path:read(IP, Port)
    sTable = {}
    if status then
        sTable.ready = bit32.rshift(status:byte(1), 0)
        sTable.paused = bit32.rshift(status:byte(1), 1)
        sTable.emergency = bit32.rshift(status:byte(1), 2)
        sTable.started = bit32.rshift(status:byte(1), 3)
        sTable.busy = bit32.rshift(status:byte(1), 4)
        sTable.reserved1 = bit32.rshift(status:byte(1), 5)
        sTable.reserved2 = bit32.rshift(status:byte(1), 6)
        sTable.reserved3 = bit32.rshift(status:byte(1), 7)
        sTable.reserved4 = bit32.rshift(status:byte(2), 0)
        sTable.delay1 = bit32.rshift(status:byte(2), 1)
        sTable.delay2 = bit32.rshift(status:byte(2), 2)
        sTable.armoff = bit32.rshift(status:byte(2), 3)
        sTable.shieldoff = bit32.rshift(status:byte(2), 4)
        sTable.diskerror = bit32.rshift(status:byte(2), 5)
        sTable.conoff = bit32.rshift(status:byte(2), 6)
        sTable.envoff = bit32.rshift(status:byte(2), 7)
    end
end
I hope this approach is understandable. I'd like to receive the hex strings, for example 0x18C2, and turn them into 0001 1000 1100 0010, shifting the right-most bit out each time and placing it under the proper key. Later in the function I would monitor whether a bit has changed for the better or worse.
If I run a similar function in Terminator on Linux and print out the pairs, I get the following return:
49
24
12
6
3
1
0
0
56
28
14
7
3
1
0
0
This is where I do not understand how to take each value and reduce it to its individual bits.
I'm pretty new to this, so I do not doubt that there is an easier way to do this. If I need to explain further, I will try.
tonumber(s, 16) will convert a hex representation to a number, and string.char will return the symbol/byte representation of a number. Check this recent SO answer for an example of how they can be used; the solution in that answer may work for you.
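For instance, a minimal illustration of the two calls:

print( tonumber("18C2", 16) )    --> 6338
print( string.char(65, 66, 67) ) --> ABC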
I'd approach this in a different fashion than the one suggested by Paul.
First, create a table storing the properties of devices:
local tProperty = {
    "ready",
    "paused",
    "emergency",
    "started",
    "busy",
    "reserved1",
    "reserved2",
    "reserved3",
    "reserved4",
    "delay1",
    "delay2",
    "armoff",
    "shieldoff",
    "diskerror",
    "conoff",
    "envoff",
}
Then, since your device sends the data as 0xYYYY, you can call tonumber on it directly (if it is not already a number). Use a function to store each bit in a table:
function BitConvert( sInput )
    local tReturn, iNum = {}, tonumber( sInput ) -- optionally pass 16 as second argument to tonumber
    while iNum > 0 do
        table.insert( tReturn, 1, iNum % 2 )
        iNum = math.floor( iNum / 2 )
    end
    for i = #tProperty - #tReturn, 1, -1 do -- pad with leading zeros up to 16 bits
        table.insert( tReturn, 1, 0 )
    end
    return tReturn
end
And then, map both the tables together:
function Map( tKeys, tValues )
    local tReturn = {}
    for i = 1, #tKeys do
        tReturn[ tKeys[i] ] = tValues[i]
    end
    return tReturn
end
In the end, you would have:
function slave_Status( IP, Port, Name )
    local status = path:read( IP, Port )
    local sTable = Map( tProperty, BitConvert(status) )
end
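For example, feeding the question's sample word 0x18C2 through these helpers could look like this (a sketch; note that with BitConvert's ordering, the first property in tProperty maps to the most significant bit):

local tStatus = Map( tProperty, BitConvert( "0x18C2" ) ) -- 0x18C2 -> 0001 1000 1100 0010
print( tStatus.ready )   --> 0 (most significant bit)
print( tStatus.started ) --> 1 (fourth bit from the left)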

Erlang Bit Syntax: How does it know that it's 3 components?

I've been reading a book about Erlang to evaluate whether it's suitable for my project, and stumbled upon the bit syntax part of Learn You Some Erlang for Great Good!
Simply put, here's the code:
1> Color = 16#F09A29.
15768105
2> Pixel = <<Color:24>>.
<<240,154,41>>
What's confusing me is this: the Color variable is 24 bits, but how does Erlang know that it has to divide the variable (in line 2) into three segments? How is that rule read?
I've tried to read the rest of the chapter, but it's getting more and more confusing, because I don't understand how it divides the numbers. Could you please explain how the bit syntax works? How can it know that there are 3 segments, and how does it become <<154,41>> when I do this:
1> Color = 16#F09A29.
15768105
2> Pixel = <<Color:16>>.
<<154,41>>
Thanks in advance.
Color = 16#F09A29 is an integer that can be written as 15768105 in decimal representation, as well as
00000000111100001001101000101001
in binary representation.
When you define a binary Pixel = << Color:24 >>, it just means "match the 24 least significant bits of Color to the binary Pixel", so Pixel is bound to
111100001001101000101001,
without any split! When the shell prints it out, it does so byte by byte in decimal representation, that is:
11110000 = 15*16 = 240, 10011010 = 9*16 + 10 = 154, 00101001 = 2*16 + 9 = 41 => << 240,154,41 >>
In the same way, when you define Pixel = << Color:16 >>, it takes only the 16 least significant bits and assigns them to the binary:
1001101000101001,
which is printed as 10011010 = 9*16 + 10 = 154, 00101001 = 2*16 + 9 = 41 => << 154,41 >>.
In the case of << Color:21 >>, the binary now equals
100001001101000101001
(the 21 least significant bits), and when the shell prints it, it starts as usual, dividing the binary into bytes:
10000100 = 8*16 + 4 = 132, 11010001 = 13*16 + 1 = 209. As only 5 bits remain, 01001, the last chunk of data is printed as 9:5 to tell us that the size of the last value is not 8 bits = 1 byte as usual, but only 5 bits =>
<< 132,209,9:5 >>.
The nice thing about binaries is that you can "decode" them using size specifications (maybe it is clearer with the example below).
(exec#WXFRB1824L)43> Co=16#F09A29.
15768105
(exec#WXFRB1824L)44> Pi = <<Co:24>>.
<<240,154,41>>
(exec#WXFRB1824L)45> <<R:8,V:8,B:8>> = Pi.
<<240,154,41>>
(exec#WXFRB1824L)46> R.
240
Erlang doesn't really "divide" anything. Binaries are just contiguous blocks of data; it's only the default human-readable representation printed by the REPL that is a comma-separated list of byte values.
It's just showing the 8-bit bytes that make up the binary. You're telling it to take 24 bits, and it renders them as the numeric value (0-255) of each individual byte.
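The same byte-by-byte arithmetic can be reproduced with plain integer division and remainder; a minimal sketch in Lua (the language used elsewhere on this page):

local color = 0xF09A29
print(math.floor(color / 2^16) % 256) --> 240
print(math.floor(color / 2^8) % 256)  --> 154
print(color % 256)                    --> 41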

Size of the array that Fortran can handle

I have 30000 files to process, and each file has 80000 x 5 lines. I need to read all the files and process them, finding the average of each line. I have written the code, in Fortran, to read and extract all the data from the files. This calls for an array of 30000 x 80000 elements, but my program could not go beyond 3300 x 80000. I need to add the 4th column of each file in steps of 300 files; I mean the 4th column of the 1st file to the 4th column of the 301st file, the 4th column of the 2nd file to the 4th column of the 302nd file, and so on. Do you think this is because of a limit on the size of the arrays that Fortran can handle? If so, is there any way to increase that size? What about the number of files? My code looks like this (this version, with 3300 files, runs well):
      implicit double precision (a-h,o-z), integer (i-n)
      dimension x(78805,5), y(78805,5), den(78805,5)
      dimension b(3300,78805), bb(78805)
      character*70 fn
      nf = 3300   ! NUMBER OF FILES
      nj = 78804  ! Number of rows in file.
      ns = 300    ! No. of steps for files.
      ncores = 11 ! No of cores
c--------------------------------------------------------------------
c--------------------------------------------------------------------
c     Initialization (loop starts at 1: the lower bound of b is 1)
      do i = 1, nf
        do j = 1, nj
          x(j,1) = 0.0
          y(j,2) = 0.0
          den(j,4) = 0.0
c         a(i,j) = 0.0
          b(i,j) = 0.0
c         aa(j) = 0.0
          bb(j) = 0.0
        end do
      end do
c-------!Body program-------------------------------------------------
      iout = 6                    ! Output files up to "ns" no.
      DO i = 1, nf                ! LOOP FOR THE NUMBER OF FILES
        write(fn,10) i
        open(1, file=fn)
        do j = 1, nj              ! Loop over the rows in the domain
          read(1,*) x(j,1), y(j,2), den(j,4)
          if (i.le.ns) then
c           a(i,j) = prob(j,3)
            b(i,j) = den(j,4)
          else
c           a(i,j) = prob(j,3) + a(i-ns,j)
            b(i,j) = den(j,4) + b(i-ns,j)
          end if
        end do
        close(1)
c       ------------------------------------------------------------
c       -----Write output [probability and density matrix]---------
c       ------------------------------------------------------------
        if (i.ge.(nf-ns)) then
          do j = 1, nj
c           aa(j) = a(i,j)/(ncores*1.0)
            bb(j) = b(i,j)/(ncores*1.0)
            write(iout,*) int(x(j,1)), int(y(j,2)), bb(j)
          end do
          close(iout)
          iout = iout + 1
        end if
      END DO
   10 format(i0,'.txt')
      END
It's hard to say for sure because you haven't given all the details yet, but your problem is quite possibly that you are using a 32-bit compiler producing 32-bit executables, and you are simply running out of address space.
Although your operating system supports a 64-bit address space, your 32-bit process is still limited to 32-bit addresses.
You have found a limit at 3300*78805*8 bytes, which is just under 2 GB, and this supports my theory.
No matter what the cause of your immediate problem is, your fundamental problem is that you appear to be loading everything into memory at once. I've not closely studied your algorithm, but on first inspection it seems likely that you could rearrange it to avoid having everything in memory at once, as sketched below.
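One such rearrangement keeps only ns rows of running sums in memory instead of nf. A sketch of the idea, in Lua for brevity rather than Fortran, with hypothetical helpers read_fourth_column and write_averages (not a drop-in replacement):

local nf, nj, ns, ncores = 3300, 78804, 300, 11
local sums = {} -- sums[k][j] accumulates the 4th column of files k, k+ns, k+2*ns, ...
for i = 1, nf do
    local k = (i - 1) % ns + 1            -- slot of this file within the window of ns sums
    local col = read_fourth_column(i, nj) -- hypothetical: the nj values of file i's 4th column
    if sums[k] == nil then
        sums[k] = col
    else
        for j = 1, nj do sums[k][j] = sums[k][j] + col[j] end
    end
    if i > nf - ns then                   -- last file feeding this slot: emit the averages
        for j = 1, nj do sums[k][j] = sums[k][j] / ncores end
        write_averages(i, sums[k])        -- hypothetical writer
    end
end

This stores 300 x 78804 running sums instead of 3300 x 78804, an elevenfold reduction, and the same idea applies directly in Fortran.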
