I have some important metadata in the UserComment field of the EXIF header of a PNG image. I am trying to retrieve readable text from it, but I don't know what the current encoding is.
<exif:UserComment>
<rdf:Alt>
<rdf:li xml:lang="x-default"> FM0 FC000000000:zzzzzz1 f144 078043881a29e1e816c14c0 bac 87152 a9 012c a3 0106 a8 0 e0 b1 0 ba bc 0 94 d1 0 6e 102 0 48 155 0 22 1af 0 36 12f 0 3f 13f 0 48 19f 0 51 1b6 0 48 18d 0 48 18d 0..........................................................................................................................................................................................................................................................................................</rdf:li>
</rdf:Alt>
</exif:UserComment>
I have a Firebase Cloud Function that converts a buffer (from a PDF) to a PNG using GraphicsMagick. When I attempt to write this PNG buffer to Firebase Storage, I get a file stub but no content (an empty file). My conversion to PNG is failing...
async function createThumbnail(newthumbname, mimetype, filebuffer) {
  const file = bucket.file(newthumbname)
  const thumbstream = file.createWriteStream({metadata: {contentType: mimetype}})
  const gm = require('gm').subClass({imageMagick: true})
  gm(filebuffer)
    .toBuffer('png', (err, thumbbuffer) => {
      console.log(filebuffer)
      console.log(thumbbuffer)
      thumbstream.end(thumbbuffer)
    })
}
The filebuffer passed into createThumbnail() has content:
<Buffer 25 50 44 46 2d 31 2e 33 0a 25 c4 e5 f2 e5 eb a7 f3 a0 d0 c4 c6 0a 33 20 30 20 6f 62 6a 0a 3c 3c 20 2f 46 69 6c 74 65 72 20 2f 46 6c 61 74 65 44 65 63 ... 6464 more bytes>
But gm(filebuffer).toBuffer() is producing an empty thumbbuffer, with Error: Stream yields empty buffer.
What am I doing wrong here?
This appears to be a png issue, as jpg seems to work with both .stream() and .toBuffer().
I'll settle for jpg until I can figure out what's wrong with png.
async function createThumbnail(newthumbname, mimetype, filebuffer) {
  const file = bucket.file(newthumbname)
  const thumbstream = file.createWriteStream({metadata: {contentType: mimetype}})
  const gm = require('gm').subClass({imageMagick: true})
  gm(filebuffer)
    // .setFormat("jpg")
    // .stream()
    // .pipe(thumbstream)
    .toBuffer('jpg', (err, thumbbuffer) => {
      thumbstream.end(thumbbuffer)
    })
}
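For the png path, one hedged guess worth trying: ImageMagick cannot always detect PDF input from a raw buffer, and PDF pages need to be rasterized before conversion. The gm API accepts a filename hint as a second argument and lets you inject input arguments with .in(). In this sketch, 'input.pdf' is only a format hint (the name itself is arbitrary) and 150 dpi is an arbitrary density:
async function createThumbnail(newthumbname, mimetype, filebuffer) {
  const file = bucket.file(newthumbname)
  const thumbstream = file.createWriteStream({metadata: {contentType: mimetype}})
  const gm = require('gm').subClass({imageMagick: true})
  // second argument is a format hint so ImageMagick treats the buffer as PDF
  gm(filebuffer, 'input.pdf')
    .in('-density', '150')  // rasterize the PDF at 150 dpi before converting
    .toBuffer('png', (err, thumbbuffer) => {
      if (err) return console.error(err)
      thumbstream.end(thumbbuffer)
    })
}
Note also that ImageMagick delegates PDF rendering to Ghostscript; if Ghostscript is not available in the Cloud Functions runtime, any PDF-to-image conversion will yield an empty buffer regardless of the output format.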
I can get an AOB (array of bytes) from a 4-byte value (DWORD) using this function in Cheat Engine Lua:
local bt = dwordToByteTable(1075734118)
for i, v in ipairs(bt) do
  print(i, string.format('%02x', v))
end
result = [[
1 66
2 66
3 1e
4 40
]]
but I want the result as '66 66 1e 40'.
How do I set up the string pattern (gsub) for this?
If I have a table like this:
cd = {
  1075734118,
  1075734118,
  1075996262,
  1076953088,
  1076651622,
  1076953088,
  1076835123
}
How do I get the AOB for each item in the table, with the output formatted as in no. 1?
Found solution:
cd = { 1075734118, 1075734118, 1075996262, 1076953088, 1076651622, 1076953088, 1076835123 }
function byte2aob(b) return type(b)=='number' and b<256 and b>=0 and string.format('%02X',b) or '??' end
function aob2byte(a) a = tonumber(a,16) return type(a)=='number' and a <256 and a>=0 and a or -1 end
function imap(t,f) local s={} for i=1,#t do s[i]=f(t[i]) end return s end
function n2bt(n,t) t=type(t)=='string' and t or 'dword' return rawget(_G,t..'ToByteTable')(n) end
function t2aob(t,sep) return table.concat(imap(t,byte2aob),type(sep)=='string' and sep or ' ') end
function n2aob(n,t) return t2aob(n2bt(n,t)) end
for i = 1, #cd do
  print(n2aob(cd[i],'dword'))
end
result = [[
66 66 1E 40
66 66 1E 40
66 66 22 40
00 00 31 40
66 66 2C 40
00 00 31 40
33 33 2F 40
]]
Try this code, which formats the DWORD as eight hex digits, reverses the string, and then swaps each character pair back so the bytes print in little-endian order:
print((string.format("%08x",1075734118):reverse():gsub("(.)(.)","%2%1 ")))
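Step by step, that one-liner works like this (intermediate values shown as comments; the trailing space comes from the gsub replacement):
local s = string.format("%08x", 1075734118)  --> "401e6666" (big-endian hex)
local r = s:reverse()                        --> "6666e104"
-- swapping each character pair restores the digit order within each byte
local aob = r:gsub("(.)(.)", "%2%1 ")        --> "66 66 1e 40 "
print(aob)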
I'm relatively new to Lua and programming in general (self taught), so please be gentle!
Anyway, I wrote a Lua script to read a UDP message from a game. The structure of the message is:
DATAxXXXXaaaaBBBBccccDDDDeeeeFFFFggggHHHH
DATA = 4-letter ID and x = a control character
XXXX = integer identifying the group of the data (the groups are known)
aaaa...HHHH = 8 single-precision floating-point numbers
Those eight numbers are the ones I need to decode.
If I print the message as received, it's something like:
DATA*{V???A?A?...etc.
Using string.byte(), I'm getting a stream of bytes like this (I have "formatted" the bytes to reflect the structure above):
68 65 84 65/42/20 0 0 0/237 222 28 66/189 59 182 65/107 42 41 65/33 173 79 63/0 0 128 63/146 41 41 65/0 0 30 66/0 0 184 65
The first 5 bytes are of course the DATA*. The next 4 are the 20th group of data. The remaining bytes are the ones I need to decode; they correspond to these values:
237 222 28 66 = 39.218
189 59 182 65 = 22.779
107 42 41 65 = 10.573
33 173 79 63 = 0.8114
0 0 128 63 = 1.0000
146 41 41 65 = 10.573
0 0 30 66 = 39.500
0 0 184 65 = 23.000
I've found C# code that does the decoding with BitConverter.ToSingle(), but I haven't found anything like it for Lua.
Any idea?
What Lua version do you have?
This code works in Lua 5.3:
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
-- Read two float values starting from position 10 in the string
print(string.unpack("<ff", str, 10)) --> 39.217700958252 22.779169082642 18
-- 18 (third returned value) is the next position in the string
For Lua 5.1 you have to write a special function (or borrow one from François Perrad's git repo):
local function binary_to_float(str, pos)
  local b1, b2, b3, b4 = str:byte(pos, pos+3)
  local sign = b4 > 0x7F and -1 or 1
  local expo = (b4 % 0x80) * 2 + math.floor(b3 / 0x80)
  local mant = ((b3 % 0x80) * 0x100 + b2) * 0x100 + b1
  local n
  if mant + expo == 0 then
    n = sign * 0.0
  elseif expo == 0xFF then
    n = (mant == 0 and sign or 0) / 0
  else
    n = sign * (1 + mant / 0x800000) * 2.0^(expo - 0x7F)
  end
  return n
end
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
print(binary_to_float(str, 10)) --> 39.217700958252
print(binary_to_float(str, 14)) --> 22.779169082642
It's the little-endian byte order of an IEEE-754 single-precision binary value:
E.g., 0 0 128 63 is:
00111111 10000000 00000000 00000000
(63) (128) (0) (0)
Seeing why that equals 1 requires understanding the basics of the IEEE-754 representation, namely its use of a sign, an exponent, and a mantissa. See here to start.
See @Egor's answer above for how to use string.unpack() in Lua 5.3, and one possible implementation you could use in earlier versions.
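To make that concrete, here is a quick sanity check (Lua 5.3) that decodes those same four bytes and also evaluates the IEEE-754 formula by hand:
-- bytes 0 0 128 63, little-endian, decoded as one IEEE-754 single
print(string.unpack("<f", "\0\0\128\63"))   --> 1.0   5
-- by hand: sign bit = 0, exponent = 0x7F = 127, mantissa = 0
-- value = (1 + mantissa/2^23) * 2^(exponent-127) = 1 * 2^0 = 1
print((1 + 0 / 2^23) * 2^(127 - 127))       --> 1.0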
I translated two functions into Delphi, but I don't know if they are right. I also need to write the equivalent of def do_aes_encrypt(key2_t_xor) to know if I am right.
This is what I wrote in Delphi:
function key_transform(old_key: string): string;
var
  x: integer;
begin
  result := '';
  // Delphi strings are 1-based, so Python's old_key[x-1] becomes old_key[x],
  // and the loop must stop at 1, not 0
  for x := 32 downto 1 do
    result := result + chr(ord(old_key[x]) - (x mod $0C));
end;
function key_xoring(key2_t: string; kilo_challenge: string): string;
var
  i: integer;
begin
  result := '';
  i := 0;
  // 1-based indexing again: Python's key2_t[i] / kilo_challenge[3]
  // become key2_t[i+1] / kilo_challenge[4], and all four bytes are needed
  while i <= 28 do begin
    result := result + chr(ord(key2_t[i+1]) xor ord(kilo_challenge[4]));
    result := result + chr(ord(key2_t[i+2]) xor ord(kilo_challenge[3]));
    result := result + chr(ord(key2_t[i+3]) xor ord(kilo_challenge[2]));
    result := result + chr(ord(key2_t[i+4]) xor ord(kilo_challenge[1]));
    i := i + 4;
  end;
end;
This is the original Python code:
def key_transform(old_key):
    new_key = ''
    for x in range(32, 0, -1):
        new_key += chr(ord(old_key[x-1]) - (x % 0x0C))
    return new_key

def key_xoring(key2_t, kilo_challenge):
    key2_t_xor = ''
    i = 0
    while i <= 28:
        key2_t_xor += chr(ord(key2_t[i]) ^ ord(kilo_challenge[3]))
        key2_t_xor += chr(ord(key2_t[i+1]) ^ ord(kilo_challenge[2]))
        key2_t_xor += chr(ord(key2_t[i+2]) ^ ord(kilo_challenge[1]))
        key2_t_xor += chr(ord(key2_t[i+3]) ^ ord(kilo_challenge[0]))
        i = i + 4
    return key2_t_xor
from Crypto.Cipher import AES  # PyCrypto

def do_aes_encrypt(key2_t_xor):
    plaintext = b''
    for k in range(0, 16):
        plaintext += chr(k)  # works in Python 2, where b'' is a plain str
    obj = AES.new(key2_t_xor, AES.MODE_ECB)
    return obj.encrypt(plaintext)
And this is how the pieces are used together:
kilo_challenge = kilo_header[8:12]
chalstring = ":".join("{:02x}".format(ord(k)) for k in kilo_challenge)
key2 = 'qndiakxxuiemdklseqid~a~niq,zjuxl'  # if this doesn't work try 'lgowvqnltpvtgogwswqn~n~mtjjjqxro'
kilo_response = do_aes_encrypt(key_xoring(key_transform(key2), kilo_challenge))
This code calculates a 16-byte data line that has to be sent in addition to the 32 bytes before it. In the screenshot, the line marked in blue is what I need to calculate from the 4 hex bytes marked in purple, using the key key2 = 'qndiakxxuiemdklseqid~a~niq,zjuxl'. I need this in Delphi, because the Python code works perfectly.
This is for upgrading the firmware of LG phones. When I receive the KILOCENT answer shown below, the 4 bytes in brackets change every time the phone is connected:
4b 49 4c 4f 43 45 4e 54 ([ac e5 b1 06]) 00 00 00 00 KILOCENT¬å±.....
00 00 00 00 00 00 00 00 30 d4 00 00 b4 b6 b3 b0 ........0Ô..´¶³°
I then have to send a KILOMETR request to the phone. The first and second lines are fixed and never change, but the third line I have to compute with AES ECB mode encryption:
4b 49 4c 4f 4d 45 54 52 00 00 00 00 02 00 00 00 KILOMETR........
00 00 00 00 10 00 00 00 85 b6 00 00 b4 b6 b3 b0 ........…¶..´¶³°
fc 21 d8 e5 5b aa fd 58 1e 33 58 fd e9 0b 65 38 ü!Øå[ªýX.3Xýé.e8 <==this
And this is the old key:
key2 = 'qndiakxxuiemdklseqid~a~niq,zjuxl'
I tried to create a neural network to estimate y = x ^ 2. So I created a fitting neural network and gave it some samples for input and output. I then tried to rebuild this network in C++, but the result is different from what I expected.
With the following inputs:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 -1
-2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -70 -71
and the following outputs:
0 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400
441 484 529 576 625 676 729 784 841 900 961 1024 1089 1156 1225 1296
1369 1444 1521 1600 1681 1764 1849 1936 2025 2116 2209 2304 2401 2500
2601 2704 2809 2916 3025 3136 3249 3364 3481 3600 3721 3844 3969 4096
4225 4356 4489 4624 4761 4900 5041 1 4 9 16 25 36 49 64 81 100 121 144
169 196 225 256 289 324 361 400 441 484 529 576 625 676 729 784 841
900 961 1024 1089 1156 1225 1296 1369 1444 1521 1600 1681 1764 1849
1936 2025 2116 2209 2304 2401 2500 2601 2704 2809 2916 3025 3136 3249
3364 3481 3600 3721 3844 3969 4096 4225 4356 4489 4624 4761 4900 5041
I used the fitting tool network with matrix rows. Training is 70%, validation is 15% and testing is 15% as well. The number of hidden neurons is two. Then at the command line I wrote this:
purelin(net.LW{2}*tansig(net.IW{1}*inputTest+net.b{1})+net.b{2})
Other information:
My net.b{1} is: -1.16610230053776 1.16667147712026
My net.b{2} is: 51.3266249426358
And net.IW{1} is: 0.344272596370387 0.344111217766824
net.LW{2} is: 31.7635369693519 -31.8082184881063
When my inputTest is 3, the result of this command is 16, while it should be about 9. Have I made an error somewhere?
I found the Stack Overflow post Neural network in MATLAB that contains a problem like mine, but with a small difference: in that problem the ranges of the input and output are the same, while in mine they are not. That answer says I need to scale the results, but how can I scale my result?
You are right about scaling. As was mentioned in the linked answer, the neural network by default scales the input and output to the range [-1,1]. This can be seen in the network processing functions configuration:
>> net = fitnet(2);
>> net.inputs{1}.processFcns
ans =
'removeconstantrows' 'mapminmax'
>> net.outputs{2}.processFcns
ans =
'removeconstantrows' 'mapminmax'
The second preprocessing function applied to both input/output is mapminmax with the following parameters:
>> net.inputs{1}.processParams{2}
ans =
ymin: -1
ymax: 1
>> net.outputs{2}.processParams{2}
ans =
ymin: -1
ymax: 1
to map both into the range [-1,1] (prior to training).
This means that the trained network expects input values in this range, and outputs values also in the same range. If you want to manually feed input to the network, and compute the output yourself, you have to scale the data at input, and reverse the mapping at the output.
One last thing to remember is that each time you train the ANN, you will get different weights. If you want reproducible results, you need to fix the state of the random number generator (initialize it with the same seed each time). Read the documentation on functions like rng and RandStream.
You also have to pay attention that if you are dividing the data into training/testing/validation sets, you must use the same split each time (probably also affected by the randomness aspect I mentioned).
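For example, a minimal sketch (the seed value 0 is arbitrary):
%# fix the RNG state before creating and training the network, so that
%# weight initialization and the random data division are repeatable
rng(0, 'twister');
Running this at the top of the script below makes repeated runs produce the same trained network.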
Here is an example to illustrate the idea (adapted from another post of mine):
%%# data
x = linspace(-71,71,200); %# 1D input
y_model = x.^2; %# model
y = y_model + 10*randn(size(x)).*x; %# add some noise
%%# create ANN, train, simulate
net = fitnet(2); %# one hidden layer with 2 nodes
net.divideFcn = 'dividerand';
net.trainParam.epochs = 50;
net = train(net,x,y);
y_hat = net(x);
%%# plot
plot(x, y, 'b.'), hold on
plot(x, x.^2, 'Color','g', 'LineWidth',2)
plot(x, y_hat, 'Color','r', 'LineWidth',2)
legend({'data (noisy)','model (x^2)','fitted'})
hold off, grid on
%%# manually simulate network
%# map input to [-1,1] range
[~,inMap] = mapminmax(x, -1, 1);
in = mapminmax('apply', x, inMap);
%# propagate values to get output (scaled to [-1,1])
hid = tansig( bsxfun(@plus, net.IW{1}*in, net.b{1}) ); %# hidden layer
outLayerOut = purelin( net.LW{2}*hid + net.b{2} ); %# output layer
%# reverse mapping from [-1,1] to original data scale
[~,outMap] = mapminmax(y, -1, 1);
out = mapminmax('reverse', outLayerOut, outMap);
%# compare against MATLAB output
max( abs(out - y_hat) ) %# this should be zero (or in the order of `eps`)
I opted to use the mapminmax function, but you could have done it manually as well. The formula is a pretty simple linear mapping:
y = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin;
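As a quick sanity check that this formula matches the built-in function (reusing the x from the example above; the yManual/yBuiltin names are just for illustration):
x = linspace(-71,71,200);
xmin = min(x); xmax = max(x);
ymin = -1; ymax = 1;
yManual = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin; %# manual linear mapping
yBuiltin = mapminmax(x, ymin, ymax); %# built-in equivalent
max( abs(yManual - yBuiltin) ) %# should be zero (or on the order of eps)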