Lua: writing hexadecimal values as a binary file

I have several hex-values that I try to write to a file. It seems that Lua doesn't support that out of the box, since they are all treated as strings instead of values. I figured I would have to break up a longer hex-value, for example AABBCC into AA, BB, CC and use string.char() on all of their decimal values consecutively to get the job done.
Is there a built-in function that allows me to write such values directly without converting them first? I tried escape characters such as "0xAA" and "\xAA", but those didn't work out.
Edit: Let me give you an example. I'm looking at a test file in a hex editor:
00000000 00 00 00 00 00 00 ......
And I want to write to it in the following fashion with the string "AABBCC":
00000000 AA BB CC 00 00 00 ......
What I get though with the escape characters is:
00000000 41 41 42 42 43 43 AABBCC

I use the following functions to convert between a hex string and a "raw binary":
function string.fromhex(str)
    return (str:gsub('..', function (cc)
        return string.char(tonumber(cc, 16))
    end))
end

function string.tohex(str)
    return (str:gsub('.', function (c)
        return string.format('%02X', string.byte(c))
    end))
end
They can be used as follows:
("Hello world!"):tohex() --> 48656C6C6F20776F726C6421
("48656C6C6F20776F726C6421"):fromhex() --> Hello world!

So you have a string like this:
value = 'AABBCC'
And you want to print it (or turn it into a string) like this?
'101010101011101111001100'
How about this?
function hex2bin(str)
    local map = {
        ['0'] = '0000',
        ['1'] = '0001',
        ['2'] = '0010',
        -- etc. up to 'F'
    }
    return (str:gsub('[0-9A-F]', map))
end
Note that it leaves untouched any characters which could not be interpreted as hex.
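For comparison, the same nibble-expansion idea sketched in Python (illustrative only; this is not part of the Lua answer):

```python
# Expand each uppercase hex digit into its 4-bit binary string;
# leave any non-hex character untouched, like the Lua version.
def hex2bin(s):
    return ''.join(
        format(int(c, 16), '04b') if c in '0123456789ABCDEF' else c
        for c in s
    )

print(hex2bin('AABBCC'))  # -> 101010101011101111001100
```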

There is no such function, because it's that easy to write one.
function writeHex(str, fh)
    for byte in str:gmatch('%x%x') do
        fh:write(string.char(tonumber(byte, 16)))
    end
end
This just plainly writes the values to the file pointed to by the fh filehandle.
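For what it's worth, Python's standard library has this conversion built in, which makes a handy cross-check of what writeHex produces (a minimal sketch; the file name is made up):

```python
# bytes.fromhex turns 'AABBCC' into the three raw bytes AA BB CC.
data = bytes.fromhex('AABBCC')

with open('test.bin', 'wb') as fh:
    fh.write(data)  # the file now starts with AA BB CC, as in the hex editor view
```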

Related

Calculating Hex In Cheat Engine Lua?

I have a 4-byte hexadecimal value that a script of mine prints out. I now want to take that value, subtract C8 from it 37 times, and save each intermediate result as a different variable. The problem is I don't know how to do hexadecimal calculations in Lua. If anyone can link me to any documentation on how to do this, that would be much appreciated.
You can make a hexadecimal literal in Lua by prefixing it with 0x, as stated in the reference manual. I found this by googling "lua hex"; such searches usually get good results.
"Hexadecimal numbers" aren't anything special, hexadecimal is just a way to represent numbers, same as decimal or binary. You can do 1000-0xC8 and you'll get the decimal number 800.
Code to convert:
function convertHex()
    local decValue = readInteger(0x123456)
    hexValue = decValue
end

function hexSubtract()
    for i = 1, 37 do
        hexValue = hexValue - 0xC8
        result = hexValue
        if i == 37 then
            print(result)                       -- prints dec value
            print(string.format('%X', result))  -- prints hex value
        end
    end
end
Replace 0x123456 with your address, then call the functions in order: convertHex(); hexSubtract()

Extract image type from NSData or encoded image

Background
I have a function where a user can upload and send an image from my iOS app to my Rails app, and it works for .jpg files. I would like to make it work for all image types. All I need to do is send the image type along in the API POST.
So far the user uploads the image as NSData; I encode it with base64EncodedStringWithOptions, put it into JSON, and send it over. This works for .jpg files.
Question
How do I get the image's type from the NSData or the encoded string of an image?
Examples
Here is the NSData of a very small .png that the user might try to upload.
<89504e47 0d0a1a0a 0000000d 49484452 0000000a 0000000a 08060000 008d32cf bd000000 01735247 4200aece 1ce90000 00097048 59730000 16250000 16250149 5224f000 00001c69 444f5400 00000200 00000000 00000500 00002800 00000500 00000500 00005ec1 07ed5500 00002a49 44415428 1562f88f 04181818 fea36398 34038c01 a2d11581 f8308060 11ab109b 691826e2 5284ac10 000000ff ff232a1e 6b000000 27494441 5463f80f 040c0c0c 3831481e 0418c004 0e856015 5002a742 644560c3 c0041613 c9560800 782fe719 4293f838 00000000 49454e44 ae426082>
Here is the output of:
strBase64:String = image.base64EncodedStringWithOptions(.Encoding64CharacterLineLength)
print(strBase64)
iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAAXNSR0IArs4c6QAA\r\nAAlwSFlzAAAWJQAAFiUBSVIk8AAAABxpRE9UAAAAAgAAAAAAAAAFAAAAKAAAAAUA\r\nAAAFAAAAXsEH7VUAAAAqSURBVCgVYviPBBgYGP6jY5g0A4wBotEVgfgwgGARqxCb\r\naRgm4lKErBAAAAD//yMqHmsAAAAnSURBVGP4DwQMDAw4MUgeBBjABA6FYBVQAqdC\r\nZEVgw8AEFhPJVggAeC/nGUKT+DgAAAAASUVORK5CYII=
You should decode your Base64-encoded string and check the resulting bytes for magic numbers in files.
https://en.m.wikipedia.org/wiki/Magic_number_(programming)#Magic_numbers_in_files
For example for PNG:
PNG image files begin with an 8-byte signature which identifies the file as a PNG file and allows detection of common file transfer problems: \211 P N G \r \n \032 \n (89 50 4E 47 0D 0A 1A 0A).
For JPEG:
JPEG image files begin with FF D8 and end with FF D9. JPEG/JFIF files contain the ASCII code for "JFIF" (4A 46 49 46) as a null terminated string. JPEG/Exif files contain the ASCII code for "Exif" (45 78 69 66) also as a null terminated string, followed by more metadata about the file.
So: decode your string, check whether the bytes match any known magic number, and map that magic number to an image file type.
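As a concrete sketch of that decode-and-match step, here is a minimal Python version (the function name is made up, and only the two magic numbers quoted above are handled):

```python
import base64

def image_type(data):
    # Compare the leading bytes against known magic numbers.
    if data.startswith(b'\x89PNG\r\n\x1a\n'):  # 89 50 4E 47 0D 0A 1A 0A
        return 'png'
    if data.startswith(b'\xff\xd8'):           # FF D8
        return 'jpeg'
    return None

# A Base64 payload must be decoded before sniffing the header.
b64 = 'iVBORw0KGgoAAAANSUhEUg=='  # opening of the PNG string in the question
print(image_type(base64.b64decode(b64)))  # -> png
```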

Incorrect values from reading image EXIF Orientation on iOS?

I am using Exif information to have a correct rotation for an image captured from mobile camera.
In the Android version the possible values are 1, 3, 6, 8, and 9.
In iOS, I am using the same code, but getting invalid values like 393216, 196608, 524288, 65536, etc.
I don't understand why there is such a difference?
Short answer:
For iOS you need to read those bytes in reverse order to get the correct value. Plus, you are incorrectly reading 24 bits (3 bytes) instead of just 16 bits (2 bytes). Or maybe you are extracting 2 bytes but somehow an extra "zero" byte is getting appended at the end?
You could try an OR check inside an if statement that tests both endianness equivalents. Since Android = 3 would become iOS = 768, you can try:
if (orient_val == 3 || orient_val == 768)
{ /* do whatever you do here */ }
PS: 1==256 2==512 3==768 4==1024 5==1280 6==1536 7==1792 8==2048, 9==2304
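That 3 ↔ 768 mapping is exactly a two-byte value read with the wrong byte order; a quick Python struct illustration:

```python
import struct

raw = b'\x00\x03'  # orientation value 3, stored as two big-endian bytes

print(struct.unpack('>H', raw)[0])  # read big-endian    -> 3
print(struct.unpack('<H', raw)[0])  # read little-endian -> 768
```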
Long version:
Android processors typically read bytes as Little Endian; Apple processors read bytes as Big Endian. Basically one type is read right-to-left and the other left-to-right: where Android has ABCD, iOS sees DCBA.
Some pointers:
Your 3 as 2 bytes is written 03 00 in Little Endian, but 00 03 in Big Endian.
The problem is, if you don't adapt and read that big-endian 00 03 as though it were still LE, you get 768.
Worse still, somehow you are reading it as 3 bytes, which gives you that 196608.
Likewise a 6 misread as 3 bytes gives you 393216 instead of 6.
Fix your code to drop the extra 00 byte at the end.
You were lucky on Android because I suspect it wants 4 bytes instead of 2 bytes. So that 00 00 06 was being read as 00 00 00 06, and since 0x000006 and 0x00000006 mean the same thing, you still got 6.
Anyway, to fix this normally you could just tell AS3 to consider your JPEG bytes as Big Endian, but that would fix iOS and then break Android.
A quick easy solution is to check whether the number you got is larger than any valid orientation value; if it is, assume the app is running on iOS and reverse the byte order to see if the result becomes valid. So..
Note: option B shown in the code is risky, because if you have wrong numbers anyway you'll get a wrong result. You know computers: "bad input = bad output; do Next();"
import flash.utils.ByteArray;

var Orientation_num:uint = 0;
var jpeg_bytes:ByteArray = new ByteArray(); // holds entire JPEG data as bytes
var bytes_val:ByteArray = new ByteArray();  // holds byte values as needed

Orientation_num = 2048; // Example: detected big number that should be 8.

if (Orientation_num > 8) // since 8 is the maximum of orientation types
{
    trace("Orientation_num is too big : Attempting fix..");

    //## A: CORRECT.. Either read directly from JPEG bytes
    //jpeg_bytes.position = (XX) - 1;     // where XX is start of EXIF orientation (2 bytes)
    //bytes_val = jpeg_bytes.readShort(); // extracts the 2 bytes

    //## B: RISKY.. Or use the already detected big number anyway
    bytes_val.writeShort(Orientation_num);

    // Flip the bytes : make x50 x00 become x00 x50
    var tempNum_ba:ByteArray = new ByteArray(); // temporary number as bytes
    tempNum_ba[0] = bytes_val[1];
    tempNum_ba[1] = bytes_val[0];

    //tempNum_ba.position = 0; // reset pos before checking
    Orientation_num = tempNum_ba.readShort(); // pos also MOVES forward by 2 bytes

    trace("Orientation_num (FIXED) : " + Orientation_num);
}

What does the ampersand do in a print statement? [duplicate]

I was reading through some code examples and came across a & on Oracle's website, on their Bitwise and Bit Shift Operators page. In my opinion it didn't do a very good job of explaining the bitwise &. I understand that it performs an operation directly on the bits, but I'm not sure what that operation is. Here is a sample program I got off of Oracle's website: http://docs.oracle.com/javase/tutorial/displayCode.html?code=http://docs.oracle.com/javase/tutorial/java/nutsandbolts/examples/BitDemo.java
An integer is represented as a sequence of bits in memory. For interaction with humans, the computer has to display it as decimal digits, but all the calculations are carried out as binary. 123 in decimal is stored as 1111011 in memory.
The & operator is a bitwise "And". The result is the bits that are turned on in both numbers. 1001 & 1100 = 1000, since only the first bit is turned on in both.
The | operator is a bitwise "Or". The result is the bits that are turned on in either of the numbers. 1001 | 1100 = 1101, since only the second bit from the right is zero in both.
There are also the ^ and ~ operators, that are bitwise "Xor" and bitwise "Not", respectively. Finally there are the <<, >> and >>> shift operators.
Under the hood, 123 is stored as either 01111011 00000000 00000000 00000000 or 00000000 00000000 00000000 01111011 depending on the system. Using the bitwise operators, which representation is used does not matter, since both representations are treated as the logical number 00000000000000000000000001111011. Stripping away leading zeros leaves 1111011.
It's a binary AND operator. It performs an AND operation that is a part of Boolean Logic which is commonly used on binary numbers in computing.
For example:
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
You can also perform this on multiple-bit numbers:
01 & 00 = 00
11 & 00 = 00
11 & 01 = 01
1111 & 0101 = 0101
11111111 & 01101101 = 01101101
...
If you look at two numbers represented in binary, a bitwise & creates a third number that has a 1 in each place that both numbers have a 1. (Everywhere else there are zeros).
Example:
0b10011011 &
0b10100010 =
0b10000010
Note that ones only appear in a place when both arguments have a one in that place.
Bitwise ands are useful when each bit of a number stores a specific piece of information.
You can also use them to delete/extract certain sections of numbers by using masks.
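A short Python sketch of that masking idea (the flag names are invented for illustration):

```python
# One bit per flag: masks let you test or extract individual fields.
READ, WRITE, EXEC = 0b001, 0b010, 0b100
perms = READ | EXEC

print(bool(perms & READ))   # -> True  (bit is set)
print(bool(perms & WRITE))  # -> False (bit is clear)
print(0xAB & 0x0F)          # mask off the high nibble -> 11 (0xB)
```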
If you expand the two variables according to their hex code, these are:
bitmask : 0000 0000 0000 1111
val: 0010 0010 0010 0010
Now, a simple bitwise AND operation results in the number 0000 0000 0000 0010, which in decimal units is 2. I'm assuming you know about the fundamental Boolean operations and number systems, though.
It's a logical operation on the input values. To understand it, convert the values to binary; wherever both bits in position n are 1, the result has a 1 in that position. At the end, convert back.
For example, with those example values:
0x2222 = 0010001000100010
0x000F = 0000000000001111
result = 0000000000000010 => 0x0002, or just 2
Knowing how bitwise AND works is not enough; an important part of learning is applying what we have learned. Here is a use case for bitwise AND.
Example:
ANDing any even number with 1 results in zero, because every even number has 0 as its lowest bit, and 1 has its only 1 bit in that same position.
Suppose you were asked to write a function that takes a number and identifies even numbers, without using addition, multiplication, division, subtraction, or modulo, and without converting the number to a string.
This is a perfect use case for bitwise AND, as explained above. You ask: show me the code? Here is the Java code.
/**
 * <p> Helper function </p>
 * @param number
 * @return 0 for even, otherwise 1
 */
private int isEven(int number) {
    return (number & 1);
}
It does the logical AND digit by digit, so for example 4 & 1 becomes:
100 & 001 = 000 = 0
n & 1 is used for checking even numbers, since if a number is even the operation will always yield 0.
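The same even/odd check, sketched in Python for clarity:

```python
def is_even(n):
    # The lowest bit of every even number is 0, so n & 1 is 0 for even n.
    return (n & 1) == 0

print([n for n in range(6) if is_even(n)])  # -> [0, 2, 4]
```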
import java.io.*;
import java.util.*;

public class Test {
    public static void main(String[] args) {
        int rmv, rmv1;
        // this R.M.VIVEK complete bitwise program for java
        Scanner vivek = new Scanner(System.in);
        System.out.println("ENTER THE X value");
        rmv = vivek.nextInt();
        System.out.println("ENTER THE y value");
        rmv1 = vivek.nextInt();
        System.out.printf("AND table based\t(&) rmv=%d, vivek=%d => %d%n", rmv, rmv1, rmv & rmv1); // 11=1,10=0
        System.out.printf("OR table based\t(|) rmv=%d, vivek=%d => %d%n", rmv, rmv1, rmv | rmv1);  // 10=1,00=0
        System.out.printf("XOR table based\t(^) rmv=%d, vivek=%d => %d%n", rmv, rmv1, rmv ^ rmv1);
        System.out.printf("LEFT SHIFT: %d<<4 => %d%n", rmv, rmv << 4);
        System.out.printf("RIGHT SHIFT: %d>>2 => %d%n", rmv, rmv >> 2);
        for (int v = 1; v <= 10; v++)
            System.out.printf("LEFT SHIFT (negative value): -1<<%d => %d%n", v, -1 << (1 + v));
    }
}

Poke opcodes into memory

Hi, I am trying to understand whether it is possible to take instruction opcodes and 'poke' them into memory, or somehow convert them to a binary program. I have found an abandoned Lisp project here: http://common-lisp.net/viewvc/cl-x86-asm/cl-x86-asm/ which takes x86 asm instructions and converts them into opcodes (please see the example below). The project does not go further to actually complete the creation of the binary executable, hence I would need to do that 'manually'. Any ideas can help me. Thanks.
;; assemble some code in it
(cl-x86-asm::assemble-forms
'((.Entry :PUSH :EAX)
(:SUB :EAX #XFFFEA)
(:MOV :EAX :EBX)
(:POP :EAX)
(:PUSH :EAX)
    (.Exit :RET)))
Processing...
;; print the assembled segment
(cl-x86-asm::print-segment)
* Segment type DATA-SEGMENT
Segment size 0000000C bytes
50 81 05 00 0F FF EA 89
03 58 50 C3
Clozure Common Lisp for example has this built-in. This is usually called LAP, Lisp Assembly Program.
See defx86lapfunction.
Example:
(defx86lapfunction fast-mod ((number arg_y) (divisor arg_z))
(xorq (% imm1) (% imm1))
(mov (% number) (% imm0))
(div (% divisor))
(mov (% imm1) (% arg_z))
(single-value-return))
SBCL can do something similar with VOPs (Virtual Operations).
http://g000001.cddddr.org/2011-12-08
I learned that it can be done using CFFI/FFI. For example, the very simple asm code:
(:movl 12 :eax)
(:ret)
This will be converted to the following sequence of octets: #(184 12 0 0 0 195), which in hex is #(B8 C 0 0 0 C3). The next step is to write it to a location in memory:
(defparameter pointer (cffi:foreign-alloc :unsigned-char :initial-contents #(184 12 0 0 0 195)))
;; and then execute it as such to return the integer 12:
(cffi:foreign-funcall-pointer pointer () :int)
=> result: 12
Thanks to the experts in #lisp (freenode irc channel) for helping out with this solution.
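The same poke-and-call trick can be sketched in Python with mmap and ctypes, using the exact six octets from the answer above (B8 0C 00 00 00 C3, i.e. mov eax, 12 ; ret). This is a Linux/x86-64-specific sketch and assumes the OS allows writable+executable pages:

```python
import ctypes
import mmap
import platform

CODE = bytes([0xB8, 0x0C, 0x00, 0x00, 0x00, 0xC3])  # mov eax, 12 ; ret

def call_machine_code():
    # Allocate an anonymous page that is writable and executable,
    # copy the opcodes in, and call it through a C function pointer.
    buf = mmap.mmap(-1, mmap.PAGESIZE,
                    prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
    buf.write(CODE)
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
    return func()  # returns 12, as in the CFFI example

if platform.system() == 'Linux' and platform.machine() == 'x86_64':
    print(call_machine_code())
```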
