NDEF format for WiFi pairing data

I have derived the WiFi authentication and encryption data format by writing it and then reading it back from an application running on a microcontroller.
Can someone please tell me the macros or values for the Authentication Type field (WPA-Personal, Shared, WPA-Enterprise, WPA2-Enterprise, WPA2-Personal) and the Encryption Type field (None, WEP, TKIP, AES, AES/TKIP)?
So far I have figured out the WPA/WPA2-Personal authentication type and the AES encryption type, as below.
52 0x10 0x03 2 Attribute ID: Authentication Type
54 0x00 0x02 2 Attribute Length: 2 octets
56 0x00 0x20 2 Authentication Type: WPA2-Personal
58 0x10 0x0F 2 Attribute ID: Encryption Type
60 0x00 0x02 2 Attribute Length: 2 octets
62 0x00 0x08 2 Encryption Type: AES
Any pointer to official documentation would be highly appreciated.
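For reference, these attributes come from the Wi-Fi Simple Configuration (WSC) specification, which defines them as 16-bit big-endian TLVs. The sketch below encodes them in Python and reproduces the WPA2-Personal/AES bytes from the dump above; the values are copied from my reading of the spec, so double-check them against the official document:

```python
import struct

# Authentication Type values (bit flags) per the WSC specification
AUTH_TYPES = {
    "Open": 0x0001,
    "WPA-Personal": 0x0002,
    "Shared": 0x0004,
    "WPA-Enterprise": 0x0008,
    "WPA2-Enterprise": 0x0010,
    "WPA2-Personal": 0x0020,
}

# Encryption Type values (bit flags) per the WSC specification
ENC_TYPES = {
    "None": 0x0001,
    "WEP": 0x0002,
    "TKIP": 0x0004,
    "AES": 0x0008,
    "AES/TKIP": 0x000C,  # bitwise OR of AES and TKIP
}

ATTR_AUTH_TYPE = 0x1003  # Attribute ID: Authentication Type
ATTR_ENC_TYPE = 0x100F   # Attribute ID: Encryption Type

def wsc_attr(attr_id: int, value: int) -> bytes:
    """Encode a 16-bit WSC attribute as ID (2) + length (2) + value (2), big-endian."""
    return struct.pack(">HHH", attr_id, 2, value)

# Reproduces the bytes in the dump above: 10 03 00 02 00 20 / 10 0F 00 02 00 08
auth = wsc_attr(ATTR_AUTH_TYPE, AUTH_TYPES["WPA2-Personal"])
enc = wsc_attr(ATTR_ENC_TYPE, ENC_TYPES["AES"])
```

Since these are bit flags, mixed modes are expressed by ORing values, which is why AES/TKIP is 0x000C.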


How do I use labels in LLDB memory reads?

I'm just getting started with LLDB after decades of using GDB. I've run into what seems like an absurdly simple request: show me the contents of memory at a specific label. The context: a trivial Hello World program written in NASM, which actually works when I run it from the command line, and when I run it from inside LLDB. But my problem is reading memory in that program. When it hits the breakpoint at 'main':
Process 16808 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
frame #0: 0x0000000100003f99 lldb_prob`main at lldb_prob.asm:10
7 _main:
8 ; Print greeting
9 mov eax, 0x2000004
-> 10 mov edi, 1
11 lea rsi, [msg]
12 mov edx, msg.len
13 syscall
Target 0: (lldb_prob) stopped.
(lldb) x -s1 -fx -c32 main
error: invalid start address expression.
error: address expression "main" evaluation failed
(lldb) x -s1 -fx -c32 _main
error: invalid start address expression.
error: address expression "_main" evaluation failed
(lldb) x -s1 -fx -c32 0x100003f99
0x100003f99: 0xbf 0x01 0x00 0x00 0x00 0x48 0x8d 0x35
0x100003fa1: 0x5b 0x00 0x00 0x00 0xba 0x0e 0x00 0x00
0x100003fa9: 0x00 0x0f 0x05 0xb8 0x01 0x00 0x00 0x02
0x100003fb1: 0xbf 0x00 0x00 0x00 0x00 0x0f 0x05 0x01
So I can read the memory, but I can't use the label in a memory read as I can in setting a breakpoint. Or, to be more precise, I haven't figured out how to use labels to read memory. Any help would be appreciated.
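One approach that sidesteps the expression evaluator (which needs type information that a bare assembly label doesn't carry) is to resolve the symbol through the symbol table first and then read at the reported address. A sketch, assuming the symbol was assembled as `_main`:

```
(lldb) image lookup --symbol _main     # prints the symbol's load address
(lldb) x -s1 -fx -c32 0x100003f99      # then read at that address, as above
```

`image lookup` goes through the symbol table rather than the expression parser, so it works even without debug info; the address it prints is what `x` needs.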

How to differentiate between H.264 bitstream and HEVC bitstream?

I have two parsers, one for H.264 and one for HEVC bitstreams. When I receive a bitstream, how can I tell which codec it is, so that I can dispatch it to the correct parser?
Thanks for the help
For H.264 you are looking for:
(0x00) 0x00 0x00 0x01 [Access Unit Delimiter]
where the Access Unit Delimiter byte must satisfy: (byte & 0x1F) == 0x09
For H.265 you are looking for:
(0x00) 0x00 0x00 0x01 [Access Unit Delimiter | VPS | SPS]
where the Access Unit Delimiter byte must satisfy: ((byte >> 1) & 0x3F) == 0x23, or
the VPS byte: ((byte >> 1) & 0x3F) == 0x20, or
the SPS byte: ((byte >> 1) & 0x3F) == 0x21
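The rules above can be turned into a small detector. This is a heuristic sketch, not a full parser: a few header bytes are ambiguous between the two codecs (e.g. some H.264 SEI headers match the H.265 patterns), so a robust implementation should parse beyond the first matching NAL unit:

```python
def nal_header_bytes(buf: bytes):
    """Yield the first header byte of each Annex B NAL unit (the byte after 00 00 01)."""
    i = 0
    while True:
        i = buf.find(b"\x00\x00\x01", i)
        if i < 0 or i + 3 >= len(buf):
            return
        yield buf[i + 3]
        i += 3

def guess_codec(buf: bytes) -> str:
    """Apply the AUD/VPS/SPS rules above to guess H.264 vs H.265."""
    for hdr in nal_header_bytes(buf):
        h265_type = (hdr >> 1) & 0x3F
        # H.265: AUD (0x23), VPS (0x20) or SPS (0x21); the top bit is the
        # forbidden_zero_bit and must be 0 in a valid header.
        if h265_type in (0x20, 0x21, 0x23) and hdr & 0x80 == 0:
            return "H.265"
        # H.264: nal_unit_type is the low 5 bits; AUD is 9.
        if hdr & 0x1F == 0x09:
            return "H.264"
    return "unknown"
```

A VPS (NAL type 32) is a good discriminator in practice, since it exists only in H.265.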

How to write 4 bytes of data to sigma dsp ADAU1452 from Pic32?

I am trying to write 4 bytes of data from a PIC32 to a SigmaDSP ADAU1452 over the I2C bus. According to the SigmaDSP datasheet I have to do the following:
Send device address = 0x70
Send Register address. The register I am trying to write to is "0x0021"
SubAddress Byte 1 = 0x00
SubAddress Byte 2 = 0x21
Data Bytes = 0x01, 0x00, 0x00, 0x00
Is it correct to send 4 data bytes to a 2-byte register address? I suspect it is not possible.
Can someone please tell me how I can write these 4 bytes to register address 0x0021?
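Per my reading of the datasheet, this is exactly what the ADAU1452 control port expects: one I2C burst containing the device address, the two subaddress bytes, and then however many data bytes you have, with the DSP auto-incrementing its internal address pointer (registers are 16 bits wide, so 4 data bytes span two consecutive registers). A sketch of the payload that follows the device address; the actual bus write primitive depends on your PIC32 firmware:

```python
DEVICE_ADDR = 0x70  # 8-bit write address from the question

def build_register_write(register: int, data: bytes) -> bytes:
    """Payload after the device address byte: subaddress high, subaddress low, data."""
    return bytes([(register >> 8) & 0xFF, register & 0xFF]) + data

# START | 0x70 | 0x00 0x21 0x01 0x00 0x00 0x00 | STOP, all in one transaction
payload = build_register_write(0x0021, bytes([0x01, 0x00, 0x00, 0x00]))
# payload == b'\x00\x21\x01\x00\x00\x00'
```

The key point is that the subaddress and the data bytes belong to the same I2C transaction; issuing a stop between them would end the write.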

Unexpected memcache GETs in rails 3.2 app

My Rails 3.2 app is trying to fetch values from the cache (memcached via Dalli) that I'm not expecting it to cache. It's not getting any cache hits, but I'm puzzled about what is going on. This happens with config.action_controller.perform_caching = true in production as well as in development using WEBrick.
Here's a snippet of what I'm seeing in memcache verbose output:
<30 GET https://www.myrailsapp.com/?
>30 Writing an error: Not found
>30 Writing bin response:
>30 0x81 0x00 0x00 0x00
>30 0x00 0x00 0x00 0x01
>30 0x00 0x00 0x00 0x09
>30 0x00 0x00 0x00 0x00
>30 0x00 0x00 0x00 0x00
>30 0x00 0x00 0x00 0x00
<30 Read binary protocol data:
<30 0x80 0x00 0x00 0xd0
<30 0x00 0x00 0x00 0x00
<30 0x00 0x00 0x00 0xd0
<30 0x00 0x00 0x00 0x00
<30 0x00 0x00 0x00 0x00
<30 0x00 0x00 0x00 0x00
Note that there is only a cache GET and I'm not seeing any cache writes.
I see similar cache GET attempts for all my actions, most of which are JSON API calls. All of them result in a cache miss, like this:
<31 GET https://www.myrailsapp.com/api/somecall?param1=foo&param2=bar
>31 Writing an error: Not found
I have not specified any caches_action directives anywhere in my app.
Is this a rails bug?
If no, where should I look to stop these unnecessary cache GETs?
Thanks.
As per cswilliams on the GitHub thread you've posted:
It appears to be an issue with Rack::Cache being enabled by default (since fixed in the master branch).
Disabling it in your application or environment config file (e.g., config/environments/development.rb) seems to resolve the issue:
config.action_dispatch.rack_cache = false

Convert Lockbox2 cipher text to Lockbox3 cipher text

Is there a way I can convert my LockBox 2 ciphertext to LockBox 3 ciphertext?
We are migrating our application from Delphi 2007 to Delphi XE2. We used the LockBox 2 RSA encryption algorithm in Delphi 2007, and we intend to use LockBox 3 in Delphi XE2 to support Unicode data. Since the ciphertext generated by the two differs (XE2 supports Unicode data), we face a problem, so we would like to convert the ciphertext generated by LockBox 2 to LockBox 3 somehow.
Since your ciphertext is, by definition, unrecognizable, there is no easy way to tell whether the underlying plaintext was Ansi or Unicode, so you likely need to manage a new associated property.
It obviously depends on the layout of your application, where this data is stored, and how the clients are going to be upgraded, but there could be a new version flag of some sort associated with the stored ciphertext. If it's in a local table, say, add a new column such as PlainTextVersion and set it to some value to flag that the ciphertext was saved from Unicode plaintext. When you read ciphertext and this new field doesn't match the Unicode flag, you could upgrade the ciphertext by decrypting it, re-encrypting using Unicode plaintext, re-saving the ciphertext, and setting the new flag (or simply put off the ciphertext version upgrade until the plaintext has changed and needs to be updated).
Or, better yet, auto-upgrade all current ciphertext at one time if feasible.
To convert, it would be easiest to use LockBox 2 to decrypt your ciphertext and LockBox 3 to re-encrypt it.
The reason is that, from what I can tell, LockBox 2 botched the implementation of RSA block type 2 padding, which means that LockBox 2's RSA encryption is not compatible with anybody else's RSA decryption.
Lockbox 2's RSA encryption pads out the message incorrectly as follows (found by placing a breakpoint and inspecting the memory at biBlock.Fi.IntBuf.pBuf):
message-bytes 0x00 random-padding-bytes 0x02 0x00
e.g. 'test' was padded to:
$01C883AC 74 65 73 74 00 D4 50 50 test..PP
$01C883B4 A7 B0 E5 51 7A 4C C2 BC ...QzL..
$01C883BC 8C B8 69 8A 97 DF AA 1D ..i.....
$01C883C4 78 67 1E 0E 8B AB 02 00 xg......
But it should be padded out to (e.g. look at this worked example):
0x00 0x02 random-padding-bytes 0x00 message-bytes
Lockbox 2 isn't just storing the bytes in reverse (otherwise the message "test" would also be reversed) or reversed 32 bit little endian (otherwise the 02 00 would be swapped too). Everything works so long as you use Lockbox 2 for both encryption and decryption.
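For comparison, the correct EME-PKCS1-v1_5 block type 2 encoding (as specified in RFC 8017) can be sketched as follows; note that the padding string must be at least 8 bytes and every padding byte must be nonzero, since the first 0x00 after the padding marks where the message begins:

```python
import os

def pkcs1_v15_pad(message: bytes, k: int) -> bytes:
    """EME-PKCS1-v1_5 block type 2: 0x00 0x02 | nonzero random bytes | 0x00 | message.

    k is the RSA modulus length in bytes; the message can be at most k - 11
    bytes so that at least 8 random padding bytes fit.
    """
    if len(message) > k - 11:
        raise ValueError("message too long for modulus size")
    padding = bytearray()
    while len(padding) < k - 3 - len(message):
        b = os.urandom(1)
        if b != b"\x00":  # padding bytes must be nonzero
            padding += b
    return b"\x00\x02" + bytes(padding) + b"\x00" + message
```

Comparing this layout with the dump above makes the LockBox 2 bug visible: the fields are emitted in the opposite order.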
Also, I noticed another bug: LockBox 2 calls e.RandomSimplePrime() to generate the public exponent e, but it generates an even number, which is a fairly noteworthy bug in RandomSimplePrime(), eh? I only looked at LockBox 2.07. LockBox 3 was a complete rewrite, so it won't have these bugs.
