I'm writing a Wireshark Lua dissector for a protocol that straddles fields across octet boundaries:
Octet 0:
bits 0..3: a
bits 4..6: b
bit 7: c
Octet 1:
bits 0..3: x
bits 4..7: y (ls nibble)
Octet 2:
bits 0..3: y (ms nibble)
bits 4..7: z
How would one manage these fields in Lua?
This should get you most of the way there. (The tricky field is y, since you indicated that the least significant nibble is in the lower octet, rather than the most significant nibble as one might normally expect.)
local p_foo = Proto("foo", "FOO Protocol")
local f_foo_a = ProtoField.uint8("foo.a", "A", base.DEC, nil, 0xf0)
local f_foo_b = ProtoField.uint8("foo.b", "B", base.DEC, nil, 0x0e)
local f_foo_c = ProtoField.uint8("foo.c", "C", base.DEC, nil, 0x01)
local f_foo_x = ProtoField.uint8("foo.x", "X", base.DEC, nil, 0xf0)
local f_foo_y = ProtoField.uint16("foo.y", "Y", base.DEC, nil, 0x0ff0)
local f_foo_z = ProtoField.uint8("foo.z", "Z", base.DEC, nil, 0x0f)
p_foo.fields = { f_foo_a, f_foo_b, f_foo_c, f_foo_x, f_foo_y, f_foo_z }
function p_foo.dissector(buf, pinfo, tree)
local foo_tree = tree:add(p_foo, buf(0,-1))
pinfo.cols.protocol:set("FOO")
foo_tree:add(f_foo_a, buf(0, 1))
foo_tree:add(f_foo_b, buf(0, 1))
foo_tree:add(f_foo_c, buf(0, 1))
foo_tree:add(f_foo_x, buf(1, 1))
foo_tree:add(f_foo_y, buf(1, 2))
foo_tree:add(f_foo_z, buf(2, 1))
end
-- Registration: TODO
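The registration left as a TODO above might, for example, attach the dissector to a dissector table. This is only a sketch: the choice of "udp.port" and the port number 9999 are placeholders, not anything the question specifies; substitute your protocol's actual transport and port.

```lua
-- Hypothetical registration (assumption: FOO runs over UDP; 9999 is a
-- placeholder port; adjust the table name and port to suit your protocol).
local udp_table = DissectorTable.get("udp.port")
udp_table:add(9999, p_foo)
```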
If you really need to handle y as you indicated, then you'll have to bit-swap. There's probably a more elegant way to do this, but here's one solution:
local p_foo = Proto("foo", "FOO Protocol")
local f_foo_a = ProtoField.uint8("foo.a", "A", base.DEC, nil, 0xf0)
local f_foo_b = ProtoField.uint8("foo.b", "B", base.DEC, nil, 0x0e)
local f_foo_c = ProtoField.uint8("foo.c", "C", base.DEC, nil, 0x01)
local f_foo_x = ProtoField.uint8("foo.x", "X", base.DEC, nil, 0xf0)
local f_foo_y = ProtoField.uint16("foo.y", "Y", base.DEC, nil, 0x0ff0)
local f_foo_z = ProtoField.uint8("foo.z", "Z", base.DEC, nil, 0x0f)
p_foo.fields = { f_foo_a, f_foo_b, f_foo_c, f_foo_x, f_foo_y, f_foo_z }
nib2bin = {
[0] = "0000", [1] = "0001",
[2] = "0010", [3] = "0011",
[4] = "0100", [5] = "0101",
[6] = "0110", [7] = "0111",
[8] = "1000", [9] = "1001",
[10] = "1010", [11] = "1011",
[12] = "1100", [13] = "1101",
[14] = "1110", [15] = "1111"
}
function nibble2binary(n)
return nib2bin[bit.band(n, 0x0f)]
end
function p_foo.dissector(buf, pinfo, tree)
local foo_tree = tree:add(p_foo, buf(0,-1))
local y_lsn = bit.band(buf(1, 1):uint(), 0x0f)
local y_msn = bit.band(buf(2, 1):uint(), 0xf0)
local y = bit.bor(y_lsn, y_msn)
pinfo.cols.protocol:set("FOO")
foo_tree:add(f_foo_a, buf(0, 1))
foo_tree:add(f_foo_b, buf(0, 1))
foo_tree:add(f_foo_c, buf(0, 1))
foo_tree:add(f_foo_x, buf(1, 1))
foo_tree:add(f_foo_y, buf(1, 2)):set_text(".... " ..
nibble2binary(bit.rshift(y_msn, 4)) .. " " .. nibble2binary(y_lsn) ..
" .... = Y: " .. y)
foo_tree:add(f_foo_z, buf(2, 1))
end
-- Registration: TODO
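As a side note, the nibble reassembly the second dissector performs for y can be sanity-checked in plain Lua with ordinary arithmetic, without the bit library. The octet values here are arbitrary sample data:

```lua
-- Octet 1 carries x in its high nibble and y's least significant nibble in
-- its low nibble; octet 2 carries y's most significant nibble in its high
-- nibble and z in its low nibble.
local v1, v2 = 0x3A, 0xB4          -- arbitrary sample octets
local y_lsn = v1 % 16              -- low nibble of octet 1  (0xA)
local y_msn = math.floor(v2 / 16)  -- high nibble of octet 2 (0xB)
local y = y_msn * 16 + y_lsn       -- reassembled value
print(string.format("0x%X", y))    --> 0xBA
```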
I want to make a translator that translates R U R' U' into r^ u< rv u>. This is very difficult because R' has R in it. So when I try to use this, it spits a result back to me like
put in your algorithm: R U R' U'
r^ u< r^' u<'
Not a very readable result. I think this is because R is a substring of R', or at least that is what I have been told. I am using the translate function from a package called luastring. The developer of luastring won't help me, and this question is too advanced for everyone on the Discord server. This is what I have tried so far:
io.write("put in your algorithm: ")
local alg = io.read()
function split(str, pat)
local t = {}
local fpat = "(.-)" .. pat
local last_end = 1
local s, e, cap = str:find(fpat, 1)
while s do
if s ~= 1 or cap ~= "" then table.insert(t, cap) end
last_end = e + 1
s, e, cap = str:find(fpat, last_end)
end
if last_end <= #str then
cap = str:sub(last_end)
table.insert(t, cap)
end
return t
end
-- table stuff so we can see
local moves = {"R", "U", "L", "F", "D", "B", "M", "E", "S", "R'", "U'", "L'", "F'", "D'", "B'", "M'", "E'", "S'"}
local neomoves = {"r^", "u<", "lv", "f>", "d>", "b<", "mv", "e>", "s>", "rv", "u>", "l^", "f<", "d<", "b>", "m^", "e<", "s<"}
local string = require("luastring")
local translation_table = {
["R'"] = "rv", ["U'"] = "u<", ["L'"] = "lv", ["F'"] = "f>", ["D'"] = "d>", ["B'"] = "b<", ["M'"] = "mv", ["E'"] = "e>", ["S'"] = "s>", ["R"] = "r^", ["U"] = "u<", ["L"] = "lv", ["F"] = "f>", ["D"] = "d>", ["B"] = "b<", ["M"] = "mv", ["E"] = "e>", ["S"] = "s>"
}
-- translate the moves to neomoves
local translated = string.translate(alg, translation_table)
print(translated)
What on earth do I have to do to let Lua know what it is supposed to do?
As your translated string is lower case and the source string is upper case, you can simply translate R' before you translate R.
You only need to split your translation_table in two.
Also, you can simply use Lua's standard string.gsub for this; I don't see why you would use luastring here.
Simplified example:
local str = "R' R R' R'"
local t_a = {["R'"] = "rv"}
local t_b = {["R"] = "r^"}
print((str:gsub("%S+", t_a):gsub("%S+", t_b)))
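Extended to the full move set, the same two-pass idea looks like this. The primed/plain values below follow the moves/neomoves tables from the question; adjust them if the intended mapping differs:

```lua
-- Primed moves first, then plain moves, so "R" can never match inside "R'".
local t_primed = {
  ["R'"] = "rv", ["U'"] = "u>", ["L'"] = "l^", ["F'"] = "f<",
  ["D'"] = "d<", ["B'"] = "b>", ["M'"] = "m^", ["E'"] = "e<", ["S'"] = "s<",
}
local t_plain = {
  R = "r^", U = "u<", L = "lv", F = "f>",
  D = "d>", B = "b<", M = "mv", E = "e>", S = "s>",
}
local function translate(alg)
  -- %S+ matches each whitespace-separated token; gsub looks the whole token
  -- up in the table and leaves it untouched when there is no entry.
  return (alg:gsub("%S+", t_primed):gsub("%S+", t_plain))
end
print(translate("R U R' U'")) --> r^ u< rv u>
```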
The maximum write speed I can achieve is 2.4 KB/s. Is there a way to increase this?
I'm using Lua on a NodeMCU ESP8266 with the SPI module enabled in User_Modules.h; #define BUILD_FATFS is also enabled in user_config.h.
I have a datalogger sampling at 920 SPS, i.e. ~1.1 ms/sample, for 10 hours at a time. 1.1 ms should be plenty of time to write two bytes to an SD card, or a buffer of xxx bytes between samples; however, the maximum write speed I see is 498 ms to write 1200 bytes, or 7 ms to write 3 bytes. This is a long way from the SD class 0 standard of 12.5 MB/s. The logger ends up missing ~450 samples when I dump 1200 B to the card.
local adc1 = nil
local adctbl = {} -- buffer of raw ADC readings
local t_tbl = {}
local n = 1
function adcReady(_,_,c)
_,_, adctbl[n], _ = adc1:read()
n=n+1
if n>400 then
t_tbl[1]=tmr.now()
file.open("/SD0/sddata.txt","a")
for k,v in ipairs(adctbl) do
file.write(v..",")
adctbl[k]=nil
end
file.close()
t_tbl[2]=tmr.now()
print(t_tbl[2] - t_tbl[1])
n=1
end
end
do
local adc = {
ADC1_ID = 0,
ADC1_ADDRESS = ads1115.ADDR_GND,
GAIN = ads1115.GAIN_4_096V,
SAMPLES = ads1115.DR_920SPS,
CHANNEL = ads1115.SINGLE_0,
MODE = ads1115.CONTINUOUS,
CONV_READY = ads1115.CONV_RDY_1,
}
i2c.setup(i2c0.id, i2c0.sda, i2c0.scl, i2c0.speed)
ads1115.reset()
adc1 = ads1115.ads1015(adc.ADC1_ID, adc.ADC1_ADDRESS)
adc1:setting(adc.GAIN, adc.SAMPLES, adc.CHANNEL, adc.MODE, adc.CONV_READY)
spi.setup(1, spi.MASTER, spi.CPOL_LOW, spi.CPHA_LOW, 8, 2, spi.HALFDUPLEX)
vol = file.mount("/SD0", 8) -- 2nd parameter is optional for non-standard SS/CS pin
file.open("/SD0/sddata.txt","w+")
file.close()
tmr.create():alarm(1000,tmr.ALARM_SINGLE,function()
gpio.mode(i2c0.conv_rdy,gpio.INT)
gpio.trig(i2c0.conv_rdy,'up', adcReady) --enable interrupt, active low rising edge==conv ready
end)
end
You can speed up the file write by preparing 2 KB-aligned chunks of text.
Replace your adcReady with:
local log_text = ""
local chunk_size = 2*1024
function adcReady(_,_,c)
_, _, adctbl[n], _ = adc1:read()
n = n + 1
if n > 400 then
t_tbl[1] = tmr.now()
log_text = log_text..table.concat(adctbl, ",", 1, n-1)..","
local size = #log_text - #log_text % chunk_size
local log_text_to_save = log_text:sub(1, size)
log_text = log_text:sub(size + 1)
t_tbl[2] = tmr.now()
if size ~= 0 then
file.open("/SD0/sddata.txt","a")
file.write(log_text_to_save)
file.close()
end
t_tbl[3] = tmr.now()
print(t_tbl[2] - t_tbl[1], t_tbl[3] - t_tbl[2]) -- for strings and GC, for File operations
n = 1
end
end
Is it faster than 498 ms?
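The chunk-splitting arithmetic in that version is plain Lua and can be sanity-checked on a desktop interpreter; the 5000-byte buffer here is just an example length:

```lua
local chunk_size = 2 * 1024
local log_text = string.rep("x", 5000)           -- pretend 5000 bytes are buffered
local size = #log_text - #log_text % chunk_size  -- largest multiple of 2048: 4096
local to_save = log_text:sub(1, size)            -- 4096 bytes go to the card
log_text = log_text:sub(size + 1)                -- 904 bytes stay buffered
print(#to_save, #log_text)                       --> 4096    904
```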
UPDATE:
New version with cached tostring()
local num2str = {}
function adcReady(_,_,c)
_, _, adctbl[n], _ = adc1:read()
n = n + 1
if n > 400 then
t_tbl[1] = tmr.now()
for i = 1, n - 1 do
local v = adctbl[i]
local s = num2str[v]
if not s then
s = v..","
num2str[v] = s
end
adctbl[i] = s
end
local log_text_to_save = table.concat(adctbl, "", 1, n-1)
t_tbl[2] = tmr.now()
file.open("/SD0/sddata.txt","a")
file.write(log_text_to_save)
file.close()
t_tbl[3] = tmr.now()
print(t_tbl[2] - t_tbl[1], t_tbl[3] - t_tbl[2]) -- for strings and GC, for File operations
n = 1
end
end
Is it faster than previous version?
UPDATE2:
local chr = string.char
function adcReady(_,_,c)
_, _, adctbl[n], _ = adc1:read()
n = n + 1
if n > 400 then
t_tbl[1] = tmr.now()
for i = 1, n - 1 do
local v = adctbl[i]
-- 0<=v<=4095
local s
if v < 10 then
s = chr(v + 48, 44)
else
local m10 = v % 10
if v < 100 then
s = chr((v - m10)/10 + 48, m10 + 48, 44)
else
local m100 = v % 100
if v < 1000 then
s = chr((v - m100)/100 + 48, (m100 - m10)/10 + 48, m10 + 48, 44)
else
local m1000 = v % 1000
s = chr((v - m1000)/1000 + 48, (m1000 - m100)/100 + 48, (m100 - m10)/10 + 48, m10 + 48, 44)
end
end
end
adctbl[i] = s
end
local log_text_to_save = table.concat(adctbl, "", 1, n-1)
t_tbl[2] = tmr.now()
file.open("/SD0/sddata.txt","a")
file.write(log_text_to_save)
file.close()
t_tbl[3] = tmr.now()
print(t_tbl[2] - t_tbl[1], t_tbl[3] - t_tbl[2]) -- for strings and GC, for File operations
n = 1
end
end
UPDATE3:
For Lua 5.3 and hex digits in log:
-- log output is in hex
local high = {} -- [1] = "", [2] = "1", ..., [256] = "FF"
local low = {} -- [1] = "0,", [2] = "1,", ..., [16] = "F,"
for x = 0, 255 do -- replace 255 with 127 (to save memory) if ADC generates only positive values 0x0000-0x7FF0
high[x+1] = string.format("%X", x*16):sub(1, -2)
if x < 16 then
low[x+1] = string.format("%X,", x)
end
end
-- in case of out-of-memory error reduce measures count (400) to 256
local measures = 400 -- recommended values are powers of 2
local measures_2 = measures*2
-- adctbl[] is not used anymore, text_buffer[] is used instead
local text_buffer = {} -- array of (2*measures) elements
for x = 1, measures_2 do
text_buffer[x] = ""
end
function adcReady(_,_,c)
local _, _, v = adc1:read()
-- 0x0000<=v<=0xFFF0
text_buffer[n] = high[(v>>8)+1]
text_buffer[n+1] = low[((v>>4)&15)+1]
n = n + 2
if n > measures_2 then
t_tbl[1] = tmr.now()
local log_text_to_save = table.concat(text_buffer, "", 1, n-1)
t_tbl[2] = tmr.now()
file.open("/SD0/sddata.txt","a")
file.write(log_text_to_save)
file.close()
t_tbl[3] = tmr.now()
print(t_tbl[2] - t_tbl[1], t_tbl[3] - t_tbl[2]) -- for strings and GC, for File operations
n = 1
end
end
I'm trying to implement this paper https://arxiv.org/pdf/1804.06962.pdf with Lua/Torch7.
During the forward pass I have no problems, but for the backward pass modele.gapbranch:backward(n, loss_grad) I get this error:
/home/narimene/distro/install/bin/luajit:
...e/narimene/distro/install/share/lua/5.1/nn/Container.lua:67: In 2 module of nn.Sequential:
/home/narimene/distro/install/share/lua/5.1/nn/Concat.lua:92: bad argument #1 to 'narrow' (number expected, got nil)
stack traceback:
[C]: in function 'narrow'
/home/narimene/distro/install/share/lua/5.1/nn/Concat.lua:92: in function </home/narimene/distro/install/share/lua/5.1/nn/Concat.lua:47>
[C]: in function 'xpcall'
...e/narimene/distro/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
.../narimene/distro/install/share/lua/5.1/nn/Sequential.lua:84: in function 'backward'
gap2.lua:240: in function 'opfunc'
/home/narimene/distro/install/share/lua/5.1/optim/sgd.lua:44: in function 'sgd'
gap2.lua:247: in main chunk
[C]: in function 'dofile'
...ene/distro/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x563fabe66570
WARNING: If you see a stack trace below, it doesn't point to the place
where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
...e/narimene/distro/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
.../narimene/distro/install/share/lua/5.1/nn/Sequential.lua:84: in function 'backward'
gap2.lua:240: in function 'opfunc'
/home/narimene/distro/install/share/lua/5.1/optim/sgd.lua:44: in function 'sgd'
gap2.lua:247: in main chunk
[C]: in function 'dofile'
...ene/distro/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x563fabe66570
Here is the code (gap2.lua):
require 'nn'
require 'cunn'
require 'cutorch'
local GapBranch, Parent = torch.class('nn.GapBranch', 'nn.Module')
function GapBranch:__init(label, num_classes, args, threshold)
Parent.__init(self)
self.gt_labels = label
num_classes = num_classes ~= nil and num_classes or 10
self.threshold = threshold or 0.6
self.gapbranch = nn.Sequential()
self.gapbranch:add(nn.SpatialConvolution(3,512, 3, 3, 1, 1, 1, 1)) -- this line is to be removed
self.cls = self:classifier(512, num_classes)
self.cls_erase = self:classifier(512, num_classes)
self.gapbranch:add(nn.Concat():add(self.cls):add(self.cls_erase))
--self.gapbranch:add(self.cls_erase)
--Optimizer
self.loss_cross_entropy = nn.CrossEntropyCriterion():cuda()
end
function GapBranch:classifier(in_planes, out_planes)
gapcnn = nn.Sequential()
gapcnn:add(nn.SpatialConvolution(in_planes, 1024, 3, 3, 1, 1, 1, 1))
gapcnn:add(nn.ReLU())
gapcnn:add(nn.SpatialConvolution(1024, 1024, 3, 3, 1, 1, 1, 1))
gapcnn:add(nn.ReLU())
gapcnn:add(nn.SpatialConvolution(1024,out_planes, 1, 1, 1,1))
return gapcnn
end
function mulTensor(tensor1, tensor2)
newTensor = torch.Tensor(tensor1:size()):cuda()
for i=1, tensor1:size()[1] do
for j=1, tensor1:size()[2] do
newTensor[{i,j}] = torch.cmul(tensor1[{i,j}],tensor2[{i,1}])
end
end
return newTensor
end
function GapBranch:erase_feature_maps(atten_map_normed, feature_maps, threshold)
if #atten_map_normed:size()>3 then
atten_map_normed = torch.squeeze(atten_map_normed)
end
atten_shape = atten_map_normed:size()
pos = torch.ge(atten_map_normed, threshold)
mask = torch.ones(atten_shape):cuda() -- cuda
mask[pos] = 0.0
m = nn.Unsqueeze(2)
m = m:cuda()
mask = m:forward(mask)
erased_feature_maps = mulTensor(feature_maps,mask) -- Variable
return erased_feature_maps
end
function GapBranch:normalize_atten_maps(atten_map)
atten_shape = atten_map:size()
batch_mins, _ = torch.min(atten_map:view(atten_shape[1],-1),2)
batch_maxs, _ = torch.max(atten_map:view(atten_shape[1],-1),2)
atten_normed = torch.cdiv(atten_map:view(atten_shape[1],-1)-batch_mins:expandAs(atten_map:view(atten_shape[1],-1)), (batch_maxs - batch_mins):expandAs(atten_map:view(atten_shape[1],-1)))
atten_normed = atten_normed:view(atten_shape)
return atten_normed
end
function GapBranch:get_atten_map(feature_maps, gt_labels, normalize)
normalize = normalize or true
label = gt_labels:long()
feature_map_size = feature_maps:size()
batch_size = feature_map_size[1]
atten_map = torch.zeros(feature_map_size[1], feature_map_size[3], feature_map_size[4])
atten_map = atten_map:cuda()
for batch_idx = 1, batch_size do
-- label.data[batch_idx]
--label[batch_idx]
print('label ',label:size())
print('feature_maps ', feature_maps:size())
atten_map[{batch_idx}] = torch.squeeze(feature_maps[{batch_idx,label[batch_idx]}])
end
if normalize then
atten_map = self:normalize_atten_maps(atten_map)
end
return atten_map
end
function GapBranch:gaplayer()
gaplayer = nn.Sequential()
gaplayer:add(nn.SpatialZeroPadding(1, 1, 1 ,1))
gaplayer:add(nn.SpatialAveragePooling(3, 3, 1, 1))
return gaplayer
end
function GapBranch:updateOutput(input) -- need label
-- Backbone
feat = self.gapbranch:get(1):forward(input)
self.gap = self:gaplayer()
self.gap:cuda()
feat3 = self.gap:forward(feat)
m = nn.Unsqueeze(2)
m = m:cuda()
-- Branch A
out = self.gapbranch:get(2):get(1):forward(feat3)
self.map1 = out
logits_1 = torch.squeeze(torch.mean(torch.mean(out, 3), 4))
logits_1 = m:forward(logits_1)
print('logits_1 ',logits_1:size())
--feat5 = self.gapbranch:get(2):get(2):forward(feat3)
localization_map_normed = self:get_atten_map(out, self.gt_labels, true)
self.attention = localization_map_normed
feat_erase = self:erase_feature_maps(localization_map_normed, feat3, self.threshold)
-- Branch B
out_erase = self.gapbranch:get(2):get(2):forward(feat_erase)
self.map_erase = out_erase
logits_ers = torch.squeeze(torch.mean(torch.mean(out_erase, 3), 4))
m = nn.Unsqueeze(2)
m = m:cuda()
logits_ers = m:forward(logits_ers)
print('logits_ers ', logits_ers:size())
return {logits_1, logits_ers}
end
function GapBranch:get_loss(resModele, gt_labels)
--[[ if self.onehot == 'True' then
gt = gt_labels:float()
else
gt = gt_labels:long()
end
--]]
print('resModele ', resModele[1])
loss_cls = self.loss_cross_entropy:forward(resModele[1], gt_labels)
loss_cls_ers = self.loss_cross_entropy:forward(resModele[2], gt_labels)
loss_val = loss_cls + loss_cls_ers
return {loss_val, }
end
require 'paths'
if (not paths.filep("cifar10torchsmall.zip")) then
os.execute('wget -c https://s3.amazonaws.com/torch7/data/cifar10torchsmall.zip')
os.execute('unzip cifar10torchsmall.zip')
end
trainset = torch.load('cifar10-train.t7')
testset = torch.load('cifar10-test.t7')
classes = {'airplane', 'automobile', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck'}
-- ignore setmetatable for now, it is a feature beyond the scope of this tutorial. It sets the index operator.
setmetatable(trainset,
{__index = function(t, i)
return {t.data[i], t.label[i]}
end}
);
trainset.data = trainset.data:double() -- convert the data from a ByteTensor to a DoubleTensor.
function trainset:size()
return self.data:size(1)
end
mean = {} -- store the mean, to normalize the test set in the future
stdv = {} -- store the standard-deviation for the future
for i=1,3 do -- over each image channel
mean[i] = trainset.data[{ {}, {i}, {}, {} }]:mean() -- mean estimation
print('Channel ' .. i .. ', Mean: ' .. mean[i])
trainset.data[{ {}, {i}, {}, {} }]:add(-mean[i]) -- mean subtraction
stdv[i] = trainset.data[{ {}, {i}, {}, {} }]:std() -- std estimation
print('Channel ' .. i .. ', Standard Deviation: ' .. stdv[i])
trainset.data[{ {}, {i}, {}, {} }]:div(stdv[i]) -- std scaling
end
trainset.data = trainset.data:cuda()
trainset.label = trainset.label:cuda()
modele = nn.GapBranch(trainset.label):cuda()
modele.gapbranch = modele.gapbranch:cuda()
print(modele.gapbranch)
theta, gradTheta = modele.gapbranch:getParameters()
optimState = {learningRate = 0.15}
require 'optim'
for epoch = 1, 1 do
function feval(theta)
for i=1, 1 do
modele.gapbranch:zeroGradParameters()
m = nn.Unsqueeze(1)
m = m:cuda()
n = m:forward(trainset.data[i])
h = modele:forward(n)
j = modele:get_loss(h,trainset.label[i])
loss_cls_grad = modele.loss_cross_entropy:backward(h[1],trainset.label[i])
loss_cls_ers_grad = modele.loss_cross_entropy:backward(h[2],trainset.label[i])
loss_grad = loss_cls_grad + loss_cls_ers_grad
loss_grad = torch.randn(1,10,32,32):cuda()
modele.gapbranch:backward(n, loss_grad)
end
return j, gradTheta
end
print('***************************')
optim.sgd(feval, theta, optimState)
end
If anyone could help, I would be very grateful.
I keep getting this error and I cannot find the cause. Please help.
LUA ERROR: Cannot load buffer.
[string "LuaMacros script"]:191: '}' expected (to close '{' at line 85) near '['
Here is the script:
--Start Script
sendToAHK = function (key)
--print('It was assigned string: ' .. key)
local file = io.open("C:\\Users\\TaranWORK\\Documents\\GitHub\\2nd-keyboard-master\\LUAMACROS\\keypressed.txt", "w") -- writing this string to a text file on disk is probably NOT the best method. Feel free to program something better!
--Make sure to substitute the path that leads to your own "keypressed.txt" file, using the double backslashes.
--print("we are inside the text file")
file:write(key)
file:flush() --"flush" means "save"
file:close()
lmc_send_keys('{F24}') -- This presses F24. Using the F24 key to trigger AutoHotKey is probably NOT the best method. Feel free to program something better!
end
local config = { -- this is line 85
[45] = "insert",
[36] = "home",
[33] = "pageup",
[46] = "delete",
[35] = "end",
[34] = "pagedown",
[27] = "escape",
[112] = "F1",
[113] = "F2",
[114] = "F3",
[115] = "F4",
[116] = "F5",
[117] = "F6",
[118] = "F7",
[119] = "F8",
[120] = "F9",
[121] = "F10",
[122] = "F11",
[123] = "F12",
[8] = "backspace",
[220] = "backslash",
[13] = "enter",
[16] = "rShift",
[17] = "rCtrl",
[38] = "up",
[37] = "left",
[40] = "down",
[39] = "right",
[32] = "space",
[186] = "semicolon",
[222] = "singlequote",
[190] = "period",
[191] = "slash",
[188] = "comma",
[219] = "leftbracket",
[221] = "rightbracket",
[189] = "minus",
[187] = "equals",
[96] = "num0",
[97] = "num1",
[98] = "num2",
[99] = "num3",
[100] = "num4",
[101] = "num5",
[102] = "num6",
[103] = "num7",
[104] = "num8",
[105] = "num9",
[106] = "numMult",
[107] = "numPlus",
[108] = "numEnter" --sometimes this is different, check your keyboard
[109] = "numMinus",
[110] = "numDelete",
[111] = "numDiv",
[144] = "numLock", --probably it is best to avoid this key. I keep numlock ON, or it has unexpected effects
[192] = "`", --this is the tilde key just before the number row
[9] = "tab",
[20] = "capslock",
[18] = "alt",
[string.byte('Q')] = "q",
[string.byte('W')] = "w",
[string.byte('E')] = "e",
[string.byte('R')] = "r",
[string.byte('T')] = "t",
[string.byte('Y')] = "y",
[string.byte('U')] = "u",
[string.byte('I')] = "i",
[string.byte('O')] = "o",
[string.byte('P')] = "p",
[string.byte('A')] = "a",
[string.byte('S')] = "s",
[string.byte('D')] = "d",
[string.byte('F')] = "f",
[string.byte('G')] = "g",
[string.byte('H')] = "h",
[string.byte('J')] = "j",
[string.byte('K')] = "k",
[string.byte('L')] = "l",
[string.byte('Z')] = "z",
[string.byte('X')] = "x",
[string.byte('C')] = "c",
[string.byte('V')] = "v",
[string.byte('B')] = "b",
[string.byte('N')] = "n",
[string.byte('M')] = "m",
[string.byte('0')] = "0",
[string.byte('1')] = "1",
[string.byte('2')] = "2",
[string.byte('3')] = "3",
[string.byte('4')] = "4",
[string.byte('5')] = "5",
[string.byte('6')] = "6",
[string.byte('7')] = "7",
[string.byte('8')] = "8",
[string.byte('9')] = "9",
--[255] = "printscreen" --these keys do not work
}
-- define callback for whole device
lmc_set_handler('MACROS', function(button, direction)
--Ignoring upstrokes ensures keystrokes are not registered twice, but activates faster than ignoring downstrokes. It also allows press and hold behaviour
if (direction == 0) then return end -- ignore key upstrokes.
if type(config[button]) == "string" then
print(' ')
print('Your key ID number is: ' .. button)
print('It was assigned string: ' .. config[button])
sendToAHK(config[button])
else
print(' ')
print('Not yet assigned: ' .. button)
end
end)
There's a comma missing after the string here:
[108] = "numEnter" --sometimes this is different, check your keyboard
I've been asking questions about random numbers, and I've decided the Fisher-Yates shuffle would be the best option. I make a table t:
t = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
Now, how would I shuffle these and be able to use them individually, for example generating the results in another table u:
u = {}
For those that find this answer later, this will shuffle in place without making a new table:
local function ShuffleInPlace(t)
for i = #t, 2, -1 do
local j = math.random(i)
t[i], t[j] = t[j], t[i]
end
end
And here is one that returns a shuffled table without touching the original (unlike the current answer, which both shuffles in place and returns a copy):
local function Shuffle(t)
local s = {}
for i = 1, #t do s[i] = t[i] end
for i = #t, 2, -1 do
local j = math.random(i)
s[i], s[j] = s[j], s[i]
end
return s
end
Usage:
local t = {"a", "b", "c", "d", "e", "f"}
print(table.concat(t)) --> abcdef
local s = Shuffle(t)
print(table.concat(t)) --> abcdef (unchanged)
print(table.concat(s)) --> fbcade (shuffled)
ShuffleInPlace(t)
print(table.concat(t)) --> dcbfea (shuffled)
And a quick sanity check that they're uniform:
local t = {"a", "b", "c"}
local results = {abc = 0,acb = 0,bac = 0,bca = 0,cab = 0,cba = 0}
for i = 1, 10000000 do
ShuffleInPlace(t)
local r = table.concat(t)
results[r] = results[r] + 1
end
for k, v in pairs(results) do print(k, v) end
--[[
cba 1667473
cab 1666235
bca 1665672
bac 1666782
acb 1666447
abc 1667391
--]]
NOTE: Check the other answer https://stackoverflow.com/a/68486276/1190388 which fixes an issue in the code snippet below as well as providing other alternatives
If you do not have holes in your table:
math.randomseed(os.time()) -- so that the results are always different
function FYShuffle( tInput )
local tReturn = {}
for i = #tInput, 1, -1 do
local j = math.random(i)
tInput[i], tInput[j] = tInput[j], tInput[i]
table.insert(tReturn, tInput[i])
end
return tReturn
end