I know that printf("%08x") shows 4 octets of the stack (410484e4 for instance). Let's say that this value corresponds to the beginning of a char array (called tab), so what would be the value of tab[0]: would it be 41 ('A' in ASCII) or e4 ('ä' in Latin-1)?
Thank you
P.S.: the executable I'm working on is a 32-bit ELF binary.
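For concreteness, a minimal C sketch of what I mean (assuming a little-endian x86 machine and using the example value above):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    uint32_t value = 0x410484e4;          /* the value printed by %08x */
    unsigned char tab[4];

    memcpy(tab, &value, sizeof value);    /* view the same 4 bytes as a char array */

    /* On a little-endian machine this prints e4 84 04 41:
       tab[0] is the least significant byte, not the 41 ('A') printed first. */
    for (int i = 0; i < 4; i++)
        printf("tab[%d] = %02x\n", i, tab[i]);

    return 0;
}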
I have a question that comes from a disagreement with a university professor about endianness. I couldn't find any way to settle it and find the right answer other than asking and opening a discussion with the Stack Overflow community.
Let's say we have the number 0x11FF1 defined as an integer, for example in C++: int num = 0x11FF1. I say that the number will be laid out in memory on a little-endian machine as:
addr[0] is f1 addr[1] is 1f addr[2] is 01 addr[3] is 00
in binary : 1111 0001 0001 1111 0000 0001 0000 0000
since the compiler considers 0x11ff1 to be 0x00011ff1, with 00 as the 1st (most significant) byte, 01 as the 2nd byte, and so on. For big endian I believe it will look like:
addr[0] is 00 addr[1] is 01 addr[2] is 1f addr[3] is f1
in binary : 0000 0000 0000 0001 0001 1111 1111 0001
But he has a different opinion; he says:
Little Endian: [professor's image, not reproduced here]
Big Endian: [professor's image, not reproduced here]
Actually, I don't see anything logical in his representation, so I hope the developers here can resolve this disagreement. Thanks in advance.
Your hex and binary numbers are correct.
Your (professor's?) French image for little-endian makes no sense at all; none of the 3 representations is consistent with either of the other 2.
73713 is 0x11ff1 in hex, so there aren't any 0xFF bytes (binary 11111111).
In 32-bit little-endian, the bytes are F1 1F 01 00 in order of increasing memory address.
You can get that by taking pairs of hex digits (bytes / octets) from the low end of the full hex value, then fill with zeros once you've consumed the value.
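You can verify this on a real machine with a minimal C sketch like the following (assuming a little-endian host such as x86):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    uint32_t num = 0x11FF1;               /* zero-extends to 0x00011FF1 in 32 bits */
    unsigned char bytes[4];

    memcpy(bytes, &num, sizeof num);      /* copy out the bytes in memory order */

    for (int i = 0; i < 4; i++)
        printf("addr[%d] = %02x\n", i, bytes[i]);
    /* On a little-endian machine this prints f1 1f 01 00. */
    return 0;
}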
It looks like they maybe padded the wrong side of the hex value with 0s to zero-extend to 32 bits as 0x11ff1000, not 0x00011ff1. Note these are full hex values of the whole number, not an attempt to break it down into separate hex bytes in any order.
But the hex and binary don't match each other; their binary ends with an all-ones byte, so it has FF as the high byte, not the 3rd byte. I didn't check if that matches their hex in PDP (mixed) endian.
They broke up their hex column into 4 byte-sized groups, which would seem to indicate that it's showing bytes in memory order. But that column is the same between their big- and little-endian images, so apparently that's not what they're doing, and they really did just extend it to 32 bits by left shifting (padding with low instead of high zero).
Also, the binary fields in the big- and little-endian images aren't the reverse of each other. To flip from big to little endian, you reverse the order of the bytes within the integer, keeping each byte value the same (like x86 bswap). Their 11111111 (FF) byte is 2nd in their big-endian version, but last in little-endian.
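For reference, a minimal C sketch of that byte reversal (the operation x86 bswap performs):

#include <stdio.h>
#include <stdint.h>

/* Reverse the byte order of a 32-bit value, keeping each byte's value the same. */
static uint32_t bswap32(uint32_t x)
{
    return (x >> 24) | ((x >> 8) & 0x0000FF00u)
         | ((x << 8) & 0x00FF0000u) | (x << 24);
}

int main(void)
{
    /* 00011ff1 -> f11f0100 */
    printf("%08x -> %08x\n", (unsigned)0x00011FF1u, (unsigned)bswap32(0x00011FF1u));
    return 0;
}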
TL:DR: unfortunately, nothing about those images makes any sense that I can see.
a super stupid question:
I have an integer in my code, which occupies 4 bytes (of course). In memory this is represented as a group of four two-digit hexadecimal bytes, for example
int x = 1000
in memory is represented as
e8 03 00 00
where the first byte is the "lowest" (least significant) and the last is the "highest" (most significant).
What is this representation called? Are there other representations? I just need the name; I'm struggling to find this information online :(
Thanks
The word you are looking for is endianness. The specific layout in your example, lowest byte first, is called little-endian.
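To see the layout directly, a minimal C sketch (assuming a little-endian machine):

#include <stdio.h>
#include <string.h>

int main(void)
{
    int x = 1000;                         /* 0x000003e8 */
    unsigned char bytes[sizeof x];

    memcpy(bytes, &x, sizeof x);          /* the bytes in memory order */

    for (size_t i = 0; i < sizeof x; i++)
        printf("%02x ", bytes[i]);        /* prints "e8 03 00 00" on little-endian */
    printf("\n");
    return 0;
}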
Is there a way I can convert my LockBox 2 ciphertext to LockBox 3 ciphertext?
We are migrating our application built on Delphi 2007 to Delphi XE2. We used the LockBox 2 RSA encryption algorithm in Delphi 2007, and we intend to use LockBox 3 in Delphi XE2 to support Unicode data. Since the ciphertext generated by the two differs (XE2 supports Unicode data), we face a problem. So we would like to somehow convert the ciphertext generated by LockBox 2 to LockBox 3.
Since your ciphertext is by definition unrecognizable, there is no easy way to tell whether the underlying plaintext data was Ansi or Unicode, so you likely need to manage a new associated property.
It obviously depends on the layout of your application, where this data is stored, and how the clients are going to be upgraded, but there could be a new version flag of some sort associated with the stored ciphertext. If it's in a local table, say, add a new column for PlainTextVersion and set the version to some value to flag that the ciphertext was saved from Unicode plaintext. When reading the ciphertext and this new field doesn't match the Unicode flag, you could upgrade the ciphertext by decrypting, re-encrypting using Unicode plaintext, and then re-saving the ciphertext and setting the new flag (or simply put off the ciphertext version upgrade until the plaintext has changed and needs to be updated).
Or, better yet, auto-upgrade all current ciphertext at one time if feasible.
To convert, it would be easiest to use LockBox 2 to decrypt your ciphertext and LockBox 3 to re-encrypt it.
The reason is that from what I can tell, Lockbox 2 stuffed up the implementation of the RSA block type 2 padding which means that Lockbox 2's RSA encryption is not compatible with anybody else's RSA decryption.
Lockbox 2's RSA encryption pads out the message incorrectly as follows (found by placing a breakpoint and inspecting the memory at biBlock.Fi.IntBuf.pBuf):
message-bytes 0x00 random-padding-bytes 0x02 0x00
e.g. 'test' was padded to:
$01C883AC 74 65 73 74 00 D4 50 50 test..PP
$01C883B4 A7 B0 E5 51 7A 4C C2 BC ...QzL..
$01C883BC 8C B8 69 8A 97 DF AA 1D ..i.....
$01C883C4 78 67 1E 0E 8B AB 02 00 xg......
But it should be padded out as follows (the standard PKCS #1 v1.5 block type 2 layout):
0x00 0x02 random-padding-bytes 0x00 message-bytes
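For comparison, a minimal C sketch of that correct layout (rand() is only a stand-in here; a real implementation must use a cryptographically secure random source):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build an RSA block type 2 padded block: 0x00 0x02 PS 0x00 M,
   where PS is at least 8 non-zero random bytes and the total length
   equals the RSA modulus size in bytes (key_len). */
static int pad_type2(unsigned char *block, size_t key_len,
                     const unsigned char *msg, size_t msg_len)
{
    if (msg_len + 11 > key_len)
        return -1;                        /* message too long for this key */

    size_t ps_len = key_len - msg_len - 3;

    block[0] = 0x00;
    block[1] = 0x02;
    for (size_t i = 0; i < ps_len; i++) {
        unsigned char r;
        do { r = (unsigned char)(rand() & 0xFF); } while (r == 0);  /* padding must be non-zero */
        block[2 + i] = r;
    }
    block[2 + ps_len] = 0x00;             /* separator before the message */
    memcpy(block + 3 + ps_len, msg, msg_len);
    return 0;
}

int main(void)
{
    unsigned char block[32];              /* a 256-bit modulus, matching the 32-byte dump above */
    const unsigned char msg[] = "test";

    if (pad_type2(block, sizeof block, msg, 4) == 0) {
        for (size_t i = 0; i < sizeof block; i++)
            printf("%02X ", block[i]);
        printf("\n");
    }
    return 0;
}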
Lockbox 2 isn't just storing the bytes in reverse (otherwise the message "test" would also be reversed) or reversed 32 bit little endian (otherwise the 02 00 would be swapped too). Everything works so long as you use Lockbox 2 for both encryption and decryption.
Also, I noticed another bug: Lockbox 2 calls e.RandomSimplePrime() to generate the public exponent e, but it generates an even number, i.e. a fairly noteworthy bug in RandomSimplePrime(), eh? I only looked at Lockbox 2.07. Lockbox 3 was a complete rewrite, so it won't have these bugs.
I am having an issue with the StrToFloat routine. I am running Delphi 7 on Windows Vista with the regional format set to German (Austria).
If I run the following code -
DecimalSeparator:='.';
anum:=StrToFloat('50.1123');
edt2.Text:=FloatToStr(anum);
when I convert the string to a float, anum becomes 50,1123, and when I convert it back to a string it becomes '50.1123'.
How do I make it so that when I convert the string to a float, the number appears with a decimal point rather than a comma as the decimal separator?
thanks
Colin
You have to appreciate the difference between a floating-point number and a textual representation of it (that is, a string of characters).
A floating-point number, as it is normally stored in a computer (e.g. in a Delphi float variable), does not have a decimal separator. Only a textual representation of it does. If the IDE displays anum as '50,1123', this simply means that the IDE uses your computer's local regional settings when it creates a textual representation of the number inside the IDE.
In your computer's memory, the value '50.1123' (or, if you prefer, '50,1123') is stored using only ones and zeroes. In hexadecimal notation, the number is stored as 9F AB AD D8 5F 0E 49 40 and contains no information about how it should be displayed. It is not like you can grab a magnifying glass and direct it at a RAM module to find a tiny, tiny string '50.1123' (or '50,1123').
Of course, when you want to display the number to the user, you use FloatToStr, which takes the number and creates a string of characters out of it. The result can be either '50.1123' or '50,1123', or something else. (In memory, these strings are 35 30 2E 31 31 32 33 and 35 30 2C 31 31 32 33 (ASCII), respectively.)
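The same point can be illustrated in C (a minimal sketch, assuming an 8-byte IEEE-754 double, which is what the eight bytes above represent):

#include <stdio.h>
#include <string.h>

int main(void)
{
    double anum = 50.1123;
    unsigned char bytes[sizeof anum];
    char text[32];

    /* The number itself is just bits; no decimal separator is stored anywhere. */
    memcpy(bytes, &anum, sizeof anum);
    for (size_t i = 0; i < sizeof anum; i++)
        printf("%02X ", bytes[i]);        /* 9F AB AD D8 5F 0E 49 40 on a little-endian machine */
    printf("\n");

    /* The separator only appears when a textual representation is produced;
       which character is used depends on the formatting routine and locale. */
    snprintf(text, sizeof text, "%.4f", anum);
    printf("%s\n", text);                 /* "50.1123" in the C locale */
    return 0;
}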
I'm trying to write a bitmap (.bmp) parser/reader by reading raw bytes from the file and simply checking their values, and I've come across something I simply cannot wrap my mind around.
The image I'm trying to read is 512x512 pixels, and when I look at the width property (4 bytes starting at offset 0x12) it says 00 02 00 00 when viewed in a hex editor. I assume this is the same as the binary value 00000000 00000010 00000000 00000000. This somehow represents 512; I just cannot figure out the steps to get there.
So what I really need to know is: how are integers represented in binary, and how do I parse them correctly? Any help is much appreciated. :)
What you are seeing in your hex editor is actually right. Just remember that bytes are in little endian order, so the value is actually 00 00 02 00 = 0x0200 = 512.
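If you are parsing the bytes yourself, a minimal C sketch of assembling such a little-endian field (the offset and bytes are the ones from your question):

#include <stdio.h>
#include <stdint.h>

/* Assemble a 32-bit value from 4 bytes stored little-endian (lowest byte first),
   independent of the host machine's own endianness. */
static uint32_t read_le32(const unsigned char *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}

int main(void)
{
    /* The 4 bytes at offset 0x12 of the BMP header, as seen in the hex editor. */
    const unsigned char width_field[4] = { 0x00, 0x02, 0x00, 0x00 };

    printf("width = %u\n", read_le32(width_field));   /* prints 512 */
    return 0;
}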
Actually 0x200 in hex equals 512 in decimal. You may have the position of the width/height properties wrong.