While studying the PCI Firmware Specification and looking at existing implementations of PXE boot agents, I realized I had misunderstood how this is supposed to work.
According to the PCI Firmware Specification, during POST the BIOS should map the option ROM into upper memory (segments 0xC000-0xF000), call the "Init" entry point at offset 0x3, and after that the BIOS may disable the option ROM.
A PXE option ROM binary consists of three parts: "initialization code", "base code" and "UNDI code".
The BIOS loads only the initialization code into upper memory. The base code and UNDI code are loaded into memory later, by copying directly from flash (from the PCI flash BAR; BAR1, according to Intel's specifications).
The question: what is the reason for such a scheme?
Why don't vendors use the standard BIOS mechanism and load the entire expansion ROM into memory, instead of copying from the flash BAR?
A monolithic PXE option ROM was a single unit, but most PXE option ROMs now have a split architecture: a UNDI option ROM and a BC (base code) option ROM. The BC ROM, however, is typically embedded in the BIOS and may not even appear as a separate option ROM.
Nowadays the NIC carries only one option ROM: the UNDI option ROM.
Option ROM Header: 0x000DA000
55 AA 08 E8 76 10 CB 55 BC 01 00 00 00 00 00 00 U...v..U........
00 00 00 00 00 00 20 00 40 00 60 00             ...... .@.`.
Signature 0xAA55
Length 0x08 (4096 bytes)
Initialization entry 0xCB1076E8 //call then far return
Reserved 0x55 0xBC 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x00
Reserved 0x00 0x00 0x00 0x00 0x00
PXEROMID Offset 0x0020 //RW Everything didn't pick it up as a separate field and made it part of the reserved section, so I separated it
PCI Data Offset 0x0040
Expansion Header Offset 0x0060
UNDI ROM ID Structure: 0x000DA020 //not recognised by RW Everything so I parsed it myself
55 4E 44 49 16 08 00 00 01 02 32 0D 00 08 B0 C4 UNDI......2.....
80 46 50 43 49 52                               .FPCIR
Signature UNDI
StructLength 0x16
Checksum 0x08
StructRev 0x00
UNDIRev 0x00 0x01 0x02
UNDI Loader Offset 0x0D32
StackSize 0x0800
DataSize 0xC4B0
CodeSize 0x4680
BusType PCIR
PCI Data Structure: 0x000DA040
50 43 49 52 EC 10 68 81 00 00 1C 00 03 00 00 02 PCIR..h.........
08 00 01 02 00 80 08 00 ........
Signature PCIR
Vendor ID 0x10EC - Realtek Semiconductor
Device ID 0x8168
Product Data 0x0000
Structure Length 0x001C
Structure Revision 0x03
Class Code 0x00 0x00 0x02
Image Length 0x0008 (4096 bytes)
Revision Level 0x0201
Code Type 0x00
Indicator 0x80
Reserved 0x0008
PnP Expansion Header: 0x000DA060
24 50 6E 50 01 02 00 00 00 D7 00 00 00 00 AF 00 $PnP............
92 01 02 00 00 E4 00 00 00 00 C1 0B 00 00 00 00 ................
Signature $PnP
Revision 0x01
Length 0x02 (32 bytes)
Next Header 0x0000
Reserved 0x00
Checksum 0xD7
Device ID 0x00000000
Manufacturer 0x00AF - Intel Corporation
Product Name 0x0192 - Realtek PXE B02 D00
Device Type Code 0x02 0x00 0x00
Device Indicators 0xE4
Boot Connection Vector 0x0000
Disconnect Vector 0x0000
Bootstrap Entry Vector 0x0BC1 // will be at 0xDABC1
Reserved 0x0000
Resource info. vector 0x0000
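For reference, here is a minimal Python sketch that parses these same structures from a raw dump of the image (the file name "oprom.bin" and reading from a file are my assumptions; the pointer offsets 0x16/0x18/0x1A are the ones visible in the header dump above):

import struct

# Hypothetical raw dump of the 4 KiB image shown above (e.g. saved from 0xDA000).
rom = open("oprom.bin", "rb").read()

sig, size_units = struct.unpack_from("<HB", rom, 0x00)
assert sig == 0xAA55                     # expansion ROM signature, 55 AA on disk
print("image size:", size_units * 512)   # 0x08 * 512 = 4096 bytes

# Header pointers: PXE ROM ID at 0x16, PCI data structure at 0x18, $PnP at 0x1A.
undi_off, pcir_off, pnp_off = struct.unpack_from("<HHH", rom, 0x16)

# UNDI ROM ID structure.
assert rom[undi_off:undi_off + 4] == b"UNDI"
loader, stack, data_size, code_size = struct.unpack_from("<HHHH", rom, undi_off + 10)
print(f"UNDI loader at +{loader:04X}, stack {stack:04X}, data {data_size:04X}, code {code_size:04X}")

# PCI data structure.
assert rom[pcir_off:pcir_off + 4] == b"PCIR"
vendor_id, device_id = struct.unpack_from("<HH", rom, pcir_off + 4)
print(f"vendor {vendor_id:04X}, device {device_id:04X}")   # 10EC / 8168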
I'm struggling with an old radiation sensor and its communication protocol.
The sensor is event driven, the master starts the communication with a data transmission or a data request.
Each data telegram uses a CRC16 to check only the variable data block and a CRC8 to check all the telegram.
My main problem is the CRC16. According to the datasheet, the polynomial used to check the data block is: CRC16 = X^14 + X^12 + X^5 + 1 --> 0x5021 ??
I captured some data with a valid CRC16 and tried to replicate the expected value in order to send my own data transmission, but I can't get the same value.
I'm using the Sunshine CRC calculator, trying every possible combination with that polynomial.
I also tried CRC RevEng, but got no results.
Here are a few data samples with the correct CRC16:
Data | CRC16 (MSB LSB)
14 00 00 0A | 1B 84
15 00 00 0C | 15 88
16 00 00 18 | 08 1D
00 00 00 00 | 00 00
00 00 00 01 | 19 D8
00 00 00 02 | 33 B0
01 00 00 00 | 5A DC
08 00 00 00 | C6 C2
10 00 00 00 | 85 95
80 00 00 00 | 0C EC
FF FF FF FF | F3 99
If I send an invalid CRC16 in the telegram, the sensor sends a negative acknowledge with the expected value, so I can try any data in order to test, or get more examples if needed.
If useful: the sensor uses an 8-bit 8051 microcontroller, and this is an example of a valid CRC8 checked with the Sunshine CRC calculator:
CRC8 = X^8 + X^6 + X^3 + 1 --> 0x49
Input reflected, result reflected
control bytes        | data        | CRC16 | CRC8
01 0E 01 00 24 2A 06 | FF FF FF FF | F3 99 | 0F
Any help is appreciated !
Looks like a typo on the polynomial. An n-bit CRC polynomial always starts with X^n, like your correct 8-bit polynomial does. The 16-bit polynomial should read X^16 + X^12 + X^5 + 1, which is in fact a very common 16-bit CRC polynomial.
To preserve a note from the comments: the four data bytes in the examples are swapped in each pair of bytes, which needs to be undone to get the correct CRC. (The control bytes in the CRC8 example are not swapped.)
So 14 00 00 0a becomes 00 14 0a 00, for which the above-described CRC gives the expected 0x1b84.
I would guess that the CRC is stored in the stream also swapped, so the message as bytes would be 00 14 0a 00 84 1b. That results in a sequence whose total CRC is 0.
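Putting the pieces together, here is a minimal Python sketch of the full recipe (my assumptions: the corrected polynomial X^16 + X^12 + X^5 + 1 as the bit-reflected constant 0x8408, initial value 0, no final XOR, and the pair-wise byte swap applied before computing). It reproduces the sample values from the question:

def crc16(data, poly=0x8408, crc=0x0000):
    # Bit-reflected CRC-16 for X^16 + X^12 + X^5 + 1 (0x1021 reversed is 0x8408),
    # initial value 0x0000, no final XOR.
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
    return crc

def swap_pairs(data):
    # Undo the pair-wise byte swap: 14 00 00 0a becomes 00 14 0a 00.
    out = bytearray(data)
    out[0::2], out[1::2] = data[1::2], data[0::2]
    return bytes(out)

assert crc16(swap_pairs(bytes.fromhex("1400000a"))) == 0x1B84
assert crc16(swap_pairs(bytes.fromhex("00000001"))) == 0x19D8
# The CRC is stored swapped too, so the swapped message plus CRC checks to zero:
assert crc16(bytes.fromhex("00140a00841b")) == 0x0000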
I am trying to get my head around the H.264 NALU headers in the following data stored in a MOV container.
Example from file:
00 00 00 02 09 30 00 00 00 0E 06 01 09 00 02 08
24 68 00 00 03 00 01 80 00 00 2B 08 21 9A 01 01
64 47 D4 B2 5C 45 76 DA 72 E4 3B F3 AE A9 56 91
B2 3F FE CE 87 1A 48 13 14 A9 E0 12 C8 AD E9 22
...
So far I have assumed that the bitstream is not byte-aligned, because the start code sequence appears shifted to the left by one bit:
0x00 0x00 0x00 0x02 -> 00000000 00000000 00000000 00000010
So I shifted these and the subsequent bytes to the right by one bit, which results in the following start sequence code and header bits for the first header:
0000000 00000000 00000000 00000001 [0 00 00100]
However, I come unstuck when I reach the following byte sequence in the example:
0x00 0x00 0x00 0x0E
I am assuming it is another start sequence code but with a different byte alignment.
00000000 00000000 00000000 00001110 00000110 00000001 00001001 00000000
After byte alignment I am getting the following header byte:
00000 00000000 00000000 00000001 [1 10 00000]
The first bit in the header (the forbidden_zero_bit) is non-zero, which violates the rule that it must be zero.
Where am I tripping up?
Am I making the wrong assumptions here?
As was already answered, the MOV container (or MP4) doesn't use Annex B encoding with start codes. It uses MP4-style encoding, in which NALs are prefixed with a NALUnitLength field. This field can have different sizes (and that size is signaled elsewhere in the container), but usually it is 4 bytes. In your case NALUnitLength is probably 4 bytes, and the 3 NALs in your dump have sizes of 2 bytes (00 00 00 02), 14 bytes (00 00 00 0E), and 11016 bytes (00 00 2B 08).
Start codes are used in the "byte stream format" (H.264 Annex B) and are themselves byte-aligned. A decoder is supposed to identify a start code by checking byte sequences, without any bit shifting.
MOV and MP4 containers don't use start codes; they have their own structure (atoms, boxes), carrying the parameter set NAL units, without prefixes, in sample description atoms, and then the data itself separately, again as original NAL units.
What you quoted is presumably a fragment of MOV atoms, which are file-structure bytes and not raw NAL units.
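As a sanity check, here is a short Python sketch that walks the posted bytes with 4-byte length prefixes (the hex below is just the 32 bytes from the question, so the last NAL is necessarily truncated):

data = bytes.fromhex(
    "00 00 00 02 09 30"
    "00 00 00 0E 06 01 09 00 02 08 24 68 00 00 03 00 01 80"
    "00 00 2B 08 21 9A 01 01"
)

i = 0
while i + 4 <= len(data):
    n = int.from_bytes(data[i:i + 4], "big")   # NALUnitLength, big-endian
    nal = data[i + 4:i + 4 + n]
    nal_type = nal[0] & 0x1F                   # low 5 bits of the first NAL byte
    note = " (truncated in this dump)" if len(nal) < n else ""
    print(f"NAL type {nal_type}, {n} bytes{note}")
    i += 4 + n

This prints NAL types 9 (access unit delimiter), 6 (SEI) and 1 (coded slice) with sizes 2, 14 and 11016 bytes, with no bit shifting needed anywhere.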
Please,
I am trying to write a simple binary block to a MIFARE 1K tag with an ACR122U reader.
I am trying to write 5 bytes, the text 'teste', to block 01 and read it back.
But I always get error 6300 when updating this block.
Any thoughts?
I am using windows 8.1/delphi xe8.
My log is:
SCardEstablishContext succeeded.
Card State changed in ACS ACR122U PICC Interface 0 to available
New reader found: ACS ACR122U PICC Interface 0
Card inserted in ACS ACR122U PICC Interface 0
ATR = 3B 8F 80 01 80 4F 0C A0 00 00 03 06 03 00 01 00 00 00 00 6A
SCardConnect (shared) succeeded.
Active Protocol: T=1
ISO 14443 A, Part3 Card Type: Mifare Standard 1K is detected
Sending APDU to card: FF 82 00 01 06 FF FF FF FF FF FF
SCardTransmit succeeded.
Card response status word: 9000 (OK)
Sending APDU to card: FF 86 00 00 05 01 00 01 60 01
SCardTransmit succeeded.
Card response status word: 9000 (OK)
Sending APDU to card: FF 86 00 00 05 01 00 01 60 01
SCardTransmit succeeded.
Card response status word: 9000 (OK)
Sending APDU to card: FF D6 00 01 05 74 65 73 74 65
SCardTransmit succeeded.
Card response status word: 6300 (State of non-volatile memory changed.)
This is easily resolved by reading the documentation.
You're writing to a block, so you have to provide a complete block of information. The only options for Lc are x04 or x10, four bytes or sixteen bytes. For the MIFARE 1K it's pretty clear that you need to supply 16 bytes. You have only 5 bytes of data, so pad the rest with zeros.
| CMD      | block | Lc | data (padded to 16 bytes)
  FF D6 00   01      10   74 65 73 74 65 00 00 00 00 00 00 00 00 00 00 00
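A quick sketch of that padding step (Python; the helper name is my own):

def update_binary_apdu(block, data, block_size=16):
    # ACR122U UPDATE BINARY: FF D6 00 <block> <Lc> followed by Lc data bytes.
    # MIFARE Classic 1K blocks are 16 bytes, so pad short data with zeros.
    if len(data) > block_size:
        raise ValueError("data does not fit in one block")
    return bytes([0xFF, 0xD6, 0x00, block, block_size]) + data.ljust(block_size, b"\x00")

print(update_binary_apdu(0x01, b"teste").hex(" ").upper())
# FF D6 00 01 10 74 65 73 74 65 00 00 00 00 00 00 00 00 00 00 00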
I was reading the Python guide about Unicode. In this section, it says:
To summarize the previous section: a Unicode string is a sequence of code points, which are numbers from 0 to 0x10ffff. This sequence needs to be represented as a set of bytes (meaning, values from 0-255) in memory. The rules for translating a Unicode string into a sequence of bytes are called an encoding.
The first encoding you might think of is an array of 32-bit integers. In this representation, the string “Python” would look like this:
   P           y           t           h           o           n
0x50 00 00 00 79 00 00 00 74 00 00 00 68 00 00 00 6f 00 00 00 6e 00 00 00
   0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
Why might we think of 32-bit integers if code points are numbers from 0 to 0x10ffff? Is it maybe assuming that we are on a 32-bit system?
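For what it's worth, the quoted layout can be reproduced with Python's own UTF-32 codec (little-endian, without a BOM), which is exactly that "array of 32-bit integers" representation:

>>> "Python".encode("utf-32-le").hex(" ")
'50 00 00 00 79 00 00 00 74 00 00 00 68 00 00 00 6f 00 00 00 6e 00 00 00'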
Why does the BIOS read the partition's boot record to 0000:7c00? What is special about that address? And what is the ':' doing in that address?
The simple answer is that 7C00h is 1k (512 bytes for the boot sector plus an additional 512 bytes for possible boot sector use) below the top of the original 32k of installed memory.
The happy answer is that "org 7C00h" has become synonymous with boot sector and boot loader programming.
The ":" is a holdover from segmented memory days, when PCs ran in real mode and could only address 64K at a time. The number to the left of the ":" is your segment; the number to the right is your offset within it.
The windows debug command accepts this notation if you want to poke around in memory yourself:
C:\Users\Seth> debug
-d0000:7c00
0000:7C00 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 ................
0000:7C10 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 ................
0000:7C20 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 ................
0000:7C30 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 ................
0000:7C40 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 ................
0000:7C50 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 ................
0000:7C60 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 ................
0000:7C70 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 ................
With regard to this particular address, it's just the address that was picked to load the MBR. See: https://web.archive.org/web/20140701052540/http://www.ata-atapi.com/hiwmbr.html
"If an MBR is found it is read into memory at location 0000:7c00 and INT 19 jumps to memory location 0000:7c00"
In the original IBM PC it was thought inconceivable to have more than 32K of RAM. In segmented-addressing terms this is 0000:8000, where 8000 hex is 32768 decimal. The fashion of the time was for the BIOS POST to conclude by loading the boot sector of the floppy in A:, or the master boot record of the hard drive in C:, at the location 1024 bytes (512 for the sector plus 512 of working space) below the top of memory, which means subtracting 0400 hex from 8000 hex to get 7C00. So the boot sequence loaded the first valid 512-byte first sector there and then set the instruction pointer to 0000:7C00 to execute it. I used to write the code for these first sectors to load the operating system.
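The arithmetic, spelled out as a tiny Python check (real-mode linear address = segment * 16 + offset):

def linear(segment, offset):
    # Each real-mode segment "paragraph" is 16 bytes wide.
    return (segment << 4) + offset

assert linear(0x0000, 0x7C00) == 0x7C00
assert linear(0x0000, 0x8000) == 32 * 1024    # top of the original 32K
assert 0x8000 - 0x0400 == 0x7C00              # 1 KiB below that top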
Read this article:
http://en.wikibooks.org/wiki/X86_Assembly/Bootloaders
From the above URL: the BIOS (which is effectively PC hardware) will make the jump to memory at 0000:7c00 to continue execution in 16-bit mode.
And to quote from above:
A bootloader runs under certain conditions that the programmer must appreciate in order to make a successful bootloader. The following pertains to bootloaders initiated by the PC BIOS:
The first sector of a drive contains its boot loader.
One sector is 512 bytes, the last two bytes of which must be 0xAA55 (i.e. 0x55 followed by 0xAA), or else the BIOS will treat the drive as unbootable.
If everything is in order, said first sector will be placed at RAM address 0000:7C00, and the BIOS's role is over as it transfers control to 0000:7C00 (i.e. it JMPs to that address).
So from bootup, if you want the CPU to start executing your code, it has to be located in memory at 0000:7c00. This part of the code is loaded from the first sector of the hard disk, also done by hardware. And only the first sector is loaded; the remaining parts of the code then have to be loaded by this initial "bootloader".
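To illustrate the signature rule above, a minimal Python sketch (assuming "disk.img" is a raw image of the drive):

with open("disk.img", "rb") as f:
    sector = f.read(512)          # the first sector, i.e. the boot loader

# The BIOS checks the last two bytes of the sector: 0x55 then 0xAA.
if len(sector) == 512 and sector[510:512] == b"\x55\xaa":
    print("boot signature present: BIOS would place this at 0000:7C00 and JMP there")
else:
    print("no 0xAA55 signature: BIOS treats the drive as unbootable")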
More information on the hard disk's first sector and the 7c00 design:
http://www.ata-atapi.com/hiwdos.html
http://www.ata-atapi.com/hiwmbr.html
Please don't confuse this with the starting-up mode of the CPU: the first instruction it will fetch and execute is at physical address 0xfffffff0 (see page 9-5):
http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.pdf
and at this stage it is executing non-volatile BIOS code (meaning you cannot reprogram it easily, and it is thus not part of the bootloader's responsibility).