Understand NAL units of an H.264 stream - parsing

NAL Units start code: 00 00 00 01 X Y
X = IDR Picture NAL Units (25, 45, 65)
X = Non IDR Picture NAL Units (01, 21, 41, 61) ; 01 = b-frames, 41 = p-frames
What does 61 mean?

"01 = b-frames, 41 = p-frames" this is incorrect
Specification is available online for free: http://www.itu.int/rec/T-REC-H.264
Similar question was here just a few days ago: Non IDR Picture NAL Units - 0x21 and 0x61 meaning
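For future readers: the byte after the start code is the NAL unit header: one forbidden_zero_bit, two nal_ref_idc bits and five nal_unit_type bits. So 0x65 is nal_unit_type 5 (IDR slice) with nal_ref_idc 3, while 0x01, 0x21, 0x41 and 0x61 are all nal_unit_type 1 (non-IDR slice) and differ only in nal_ref_idc (0, 1, 2, 3). Whether a non-IDR slice is I, P or B is decided by slice_type in the slice header, not by this byte. A quick sketch:

def parse_nal_header(byte):
    # H.264 NAL unit header (spec section 7.3.1)
    forbidden_zero_bit = (byte >> 7) & 0x01
    nal_ref_idc = (byte >> 5) & 0x03  # 0..3, reference importance
    nal_unit_type = byte & 0x1F       # 1 = non-IDR slice, 5 = IDR slice
    return forbidden_zero_bit, nal_ref_idc, nal_unit_type

for b in (0x01, 0x21, 0x41, 0x61, 0x65):
    print(hex(b), parse_nal_header(b))
# 0x61 -> (0, 3, 1): a non-IDR slice that other pictures reference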


Reading UIDs of NFC Cards in iOS 13

I would like to retrieve the UID of MiFare cards. I'm using an iPhone X, Xcode 11 and iOS 13.
I'm aware this wasn't possible (specifically, reading the UID) until iOS 13, according to this website: https://gototags.com/blog/apple-expands-nfc-on-iphone-in-ios-13/ and this guy: https://www.reddit.com/r/apple/comments/c0gzf0/clearing_up_misunderstandings_and/
The phone's NFC reader is correctly detecting the card, however the unique identifier is always returned as empty or nil. I can read the payload, however, and while it's irrelevant to iOS, I can do this on Android (which confirms the card isn't faulty or just odd).
Apple Sample Project: https://developer.apple.com/documentation/corenfc/building_an_nfc_tag-reader_app
func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
    if case let NFCTag.miFare(tag) = tags.first! {
        session.connect(to: tags.first!) { (error: Error?) in
            let apdu = NFCISO7816APDU(instructionClass: 0, instructionCode: 0xB0, p1Parameter: 0, p2Parameter: 0, data: Data(), expectedResponseLength: 16)
            tag.queryNDEFStatus(completionHandler: { (status: NFCNDEFStatus, e: Int, error: Error?) in
                debugPrint("\(status) \(e) \(error)")
            })
            tag.sendMiFareISO7816Command(apdu) { (data, sw1, sw2, error) in
                debugPrint(data)
                debugPrint(error)
                debugPrint(tag.identifier)
                debugPrint(String(data: tag.identifier, encoding: .utf8))
            }
        }
    }
}
I'm aware of these sorts of hacks: CoreNFC not reading UID in iOS. But they are closed and only applied to iOS 11 for a short time in the past.
OK, I have an answer.
tag.identifier isn't empty, per se. If you examine it from Xcode's debugger it appears empty (0x00 is the value!). Its type is Data, and printing it will reveal the length of the Data but not how it's encoded. In this case it's a [UInt8], but stored as a bag of bits. I don't understand why Apple have done it this way; it's clunky, but I'm sure they have good reasons. I would have stored it as a String; after all, the whole point of a high-level language like Swift is to abstract us away from such hardware implementation details.
The following code will retrieve the UID from a MiFare card:
if case let NFCTag.miFare(tag) = tags.first! {
    session.connect(to: tags.first!) { (error: Error?) in
        let apdu = NFCISO7816APDU(instructionClass: 0, instructionCode: 0xB0, p1Parameter: 0, p2Parameter: 0, data: Data(), expectedResponseLength: 16)
        tag.sendMiFareISO7816Command(apdu) { (apduData, sw1, sw2, error) in
            let tagUIDData = tag.identifier
            var byteData: [UInt8] = []
            tagUIDData.withUnsafeBytes { byteData.append(contentsOf: $0) }
            var uidString = ""
            for byte in byteData {
                // %02x keeps the leading zero for values below 0x10;
                // testing Int(hexString) < 10 misfires for hex digits a-f
                uidString.append(String(format: "%02x", byte))
            }
            debugPrint("\(byteData) converted to Tag UID: \(uidString)")
        }
    }
}
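As an aside (my own shorthand, not part of the answer above), the whole loop can be collapsed into one line:

let uidString = tag.identifier.map { String(format: "%02x", $0) }.joined()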
I know you have said that it returns nil but for clarity for future readers:
Assuming it is not a FeliCa tag, it should be in the identifier field when it is detected:
func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
    if case let NFCTag.miFare(tag) = tags.first! {
        print(tag.identifier as NSData)
    }
}
But in your case, it's empty (see edit below). For most tags the APDU to get the UID of a tag is
0xff // Class
0xca // INS
0x00 // P1
0x00 // P2
0x00 // Le
so you could try using tag.sendMiFareCommand to send that command manually.
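An untested sketch of that manual command: the five bytes above wrapped in a Data value and sent with sendMiFareCommand (the response layout is my assumption; check SW1/SW2 before trusting it):

// send the GET UID pseudo-APDU FF CA 00 00 00 manually
let getUID = Data([0xFF, 0xCA, 0x00, 0x00, 0x00])
tag.sendMiFareCommand(commandPacket: getUID) { data, error in
    if let error = error {
        print("command failed: \(error)")
    } else {
        // on success the UID bytes are followed by SW1 SW2 = 90 00
        print(data.map { String(format: "%02x", $0) }.joined())
    }
}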
Edit: response from the OP: it wasn't empty, but this was unclear because printing Data in Swift doesn't show its contents in the console.
In iOS 13 I was able to read the Tag.identifier value for various MIFARE-family DESFire and UltraLight tags, the same as in @scott-condron's answer, but for various MIFARE Classic ICs (the unknown family member?) my console shows different error types.
Perhaps private framework APIs similar to the iOS11 work-around in the hack you mentioned would be helpful in these cases, e.g. to intercept and amend the discovery polling routine, but I wouldn't know which ones or how to use them.
Below you can find some test results for MIFARE Classic 4K (emulation) tags, as also reported in this github thread and this MIFARE support thread. Following Table 6 of Application Note #10833, the Select Acknowledge (SAK) value of 0x38 of the emulation tags below translates into 0 0 1 1 1 0 0 0 for bits 8..1, i.e. bits 6, 5, and 4 are 1, and therefore these SAK values classify as Smart MX with CLASSIC 4K as per Figure 3 of Application Note #10834.
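To make that bit arithmetic concrete, a small sketch (variable names are mine):

// SAK 0x38 = 0b00111000: bits 6, 5 and 4 set (counting bits 8..1)
let sak: UInt8 = 0x38
let iso14443_4 = sak & 0x20 != 0    // bit 6: ISO 14443-4 protocol (0x20)
let mifare4K   = sak & 0x18 == 0x18 // bits 5+4: MIFARE Classic 4K (0x18)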
an Infineon Classic 4k Emulation successfully logs 1 tags found with the correct UID (31:9A:2F:88), ATQA (0x0200), SAK (detects 0x20, i.e. ISO 14443-4 protocol, and 0x18, i.e. MIFARE 4K, both part of the expected value: 0x38) and respective tag type (both Generic 4A and MiFare classified correctly), but then throws a Stack Error:
error 14:48:08.675369 +0200 nfcd 00000001 04e04390 -
[NFDriverWrapper connectTag:]:1436 Failed to connect to tag:
<NFTagInternal: 0x104e05cd0>-{length = 8, bytes = 0x7bad030077180efa}
{ Tech=A Type=Generic 4A ID={length = 4, bytes = 0x319a2f88}
SAK={length = 1, bytes = 0x20} ATQA={length = 2, bytes = 0x0200} historicalBytes={length = 0, bytes = 0x}}
:
error 14:48:08.682881 +0200 nfcd 00000001 04e04390 -
[NFDriverWrapper connectTag:]:1436 Failed to connect to tag:
<NFTagInternal: 0x104e1d600>-{length = 8, bytes = 0x81ad0300984374f3}
{ Tech=A Type=MiFare ID={length = 4, bytes = 0x319a2f88}
SAK={length = 1, bytes = 0x18} ATQA={length = 2, bytes = 0x0200} historicalBytes={length = 0, bytes = 0x}}
:
default 14:48:08.683150 +0200 nfcd 00000001 04e07470 -
[_NFReaderSession handleRemoteTagsDetected:]:445 1 tags found
default 14:48:08.685792 +0200 nfcd 00000001 04e07470 -
[_NFReaderSession connect:callback:]:507 NFC-Example
:
error 14:48:08.693429 +0200 nfcd 00000001 04e04390 -
[NFDriverWrapper connectTag:]:1436 Failed to connect to tag:
<NFTagInternal: 0x104e05cd0>-{length = 8, bytes = 0x81ad0300984374f3}
{ Tech=A Type=MiFare ID={length = 4, bytes = 0x319a2f88}
SAK=(null) ATQA=(null) historicalBytes={length = 0, bytes = 0x}}
:
error 14:48:08.694019 +0200 NFC-Example 00000002 802e2700 -
[NFCTagReaderSession _connectTag:error:]:568 Error
Domain=NFCError Code=100 "Stack Error" UserInfo={NSLocalizedDescription=Stack Error, NSUnderlyingError=0x2822a86c0
{Error Domain=nfcd Code=15 "Stack Error" UserInfo={NSLocalizedDescription=Stack Error}}}
an NXP SmartMX (Classic 4k emulation) with UID CF:3E:40:04 is discovered initially, but a reception error during the ISO 14443-4A presence check (Proc Iso-Dep pres chk ntf: Receiption failed) continuously restarts the discovery polling until the session finally expires, possibly preventing the other SAK value 0x18 (for the MIFARE 4K tag type) from being received:
error 10:44:50.650673 +0200 nfcd Proc Iso-Dep pres chk ntf: Receiption failed
:
error 10:44:50.677470 +0200 nfcd 00000001 04e04390 -
[NFDriverWrapper disconnectTag:tagRemovalDetect:]:1448 Failed to disconnect tag:
<NFTagInternal: 0x104f09930>-{length = 8, bytes = 0x07320d00f3041861}
{ Tech=A Type=Generic 4A ID={length = 4, bytes = 0xcf3e4004}
SAK={length = 1, bytes = 0x20} ATQA={length = 2, bytes = 0x0200} historicalBytes={length = 0, bytes = 0x}}
default 10:44:50.677682 +0200 nfcd 00000001 04e04390 -
[NFDriverWrapper restartDiscovery]:1953
an actual NXP Classic 4k with UID 2D:FE:9B:87 remains undetected and throws no error. The discovery polling session for this tag simply times out after 60 seconds and logs the last 128 discovery messages transmitted (Tx) and received (Rx), among which the following pattern is repeated (which does include the expected UID: 2D FE 9B 87):
error 11:42:19.511354 +0200 nfcd 1571305339.350902 Tx '21 03 07 03 FF 01 00 01 01 01 6F 61'
error 11:42:19.511484 +0200 nfcd 1571305339.353416 Rx '41 03 01'
error 11:42:19.511631 +0200 nfcd 1571305339.353486 Rx '00 F6 89'
error 11:42:19.511755 +0200 nfcd 1571305339.362455 Rx '61 05 14'
error 11:42:19.511905 +0200 nfcd 1571305339.362529 Rx '01 80 80 00 FF 01 09 02 00 04 2D FE 9B 87 01 18 00 00 00 00 2D 11'
error 11:42:19.512152 +0200 nfcd 1571305339.362734 Tx '21 06 01 00 44 AB'
error 11:42:19.512323 +0200 nfcd 1571305339.363959 Rx '41 06 01'
error 11:42:19.512489 +0200 nfcd 1571305339.364028 Rx '00 1D 79'
error 11:42:19.512726 +0200 nfcd 1571305339.364300 Rx '61 06 02'
error 11:42:19.512914 +0200 nfcd 1571305339.364347 Rx '00 00 EB 78'

Decode UDP message with LUA

I'm relatively new to lua and programming in general (self taught), so please be gentle!
Anyway, I wrote a lua script to read a UDP message from a game. The structure of the message is:
DATAxXXXXaaaaBBBBccccDDDDeeeeFFFFggggHHHH
DATAx = 4-letter ID, where x is a control character
XXXX = integer identifying the group of the data (the groups are known)
aaaa...HHHH = 8 single-precision floating-point numbers
Those last eight numbers are the ones I need to decode.
If I print the message as received, it's something like:
DATA*{V???A?A?...etc.
Using string.byte(), I'm getting a stream of bytes like this (I have "formatted" the bytes to reflect the structure above):
68 65 84 65/42/20 0 0 0/237 222 28 66/189 59 182 65/107 42 41 65/33 173 79 63/0 0 128 63/146 41 41 65/0 0 30 66/0 0 184 65
The first 5 bytes are of course the DATA*. The next 4 are the 20th group of data. The next bytes are the ones I need to decode, and they are equal to these values:
237 222 28 66 = 39.218
189 59 182 65 = 22.779
107 42 41 65 = 10.573
33 173 79 63 = 0.8114
0 0 128 63 = 1.0000
146 41 41 65 = 10.573
0 0 30 66 = 39.500
0 0 184 65 = 23.000
I've found C# code that does the decoding with BitConverter.ToSingle(), but I haven't found anything like it for Lua.
Any ideas?
What Lua version do you have?
This code works in Lua 5.3
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
-- Read two float values starting from position 10 in the string
print(string.unpack("<ff", str, 10)) --> 39.217700958252 22.779169082642 18
-- 18 (third returned value) is the next position in the string
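The whole packet can also be unpacked in one call. The format string below is my reading of the layout in the question (5-byte "DATAx" prefix, little-endian 4-byte group integer, then 8 little-endian floats):

local id, group, f1, f2, f3, f4, f5, f6, f7, f8, nextpos =
    string.unpack("<c5i4ffffffff", str)
-- id -> "DATA*", group -> 20, f1..f8 -> the eight float values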
For Lua 5.1 you have to write a special function (or take one from François Perrad's git repo):
local function binary_to_float(str, pos)
    -- little-endian: b4 holds the sign bit and the high exponent bits
    local b1, b2, b3, b4 = str:byte(pos, pos+3)
    local sign = b4 > 0x7F and -1 or 1
    local expo = (b4 % 0x80) * 2 + math.floor(b3 / 0x80)
    local mant = ((b3 % 0x80) * 0x100 + b2) * 0x100 + b1
    local n
    if mant + expo == 0 then
        n = sign * 0.0
    elseif expo == 0xFF then
        n = (mant == 0 and sign or 0) / 0  -- +/-inf for zero mantissa, NaN otherwise
    else
        n = sign * (1 + mant / 0x800000) * 2.0^(expo - 0x7F)
    end
    return n
end
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
print(binary_to_float(str, 10)) --> 39.217700958252
print(binary_to_float(str, 14)) --> 22.779169082642
It's the little-endian byte order of an IEEE-754 single-precision binary value:
E.g., 0 0 128 63 is:
00111111 10000000 00000000 00000000
(63) (128) (0) (0)
Why that equals 1 follows from the basics of IEEE-754 representation, namely its use of an exponent and a mantissa.
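A worked decode: reassembled big-endian, 0 0 128 63 is 0x3F800000, i.e. sign 0, biased exponent 01111111 (127), mantissa 0, so the value is (1 + 0) * 2^(127 - 127) = 1.0.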
See @Egor's answer above for how to use string.unpack() in Lua 5.3, and one possible implementation you could use in earlier versions.

converting images to indexed 2-bit grayscale BMP

First of all, my question is different from How do I convert image to 2-bit per pixel?, and unfortunately its solution does not work in my case...
I need to convert images to 2-bit per pixel grayscale BMP format. The sample image has the following properties:
Color Model: RGB
Depth: 4
Is Indexed: 1
Dimension: 800x600
Size: 240,070 bytes (4 bits per pixel, but only the last 2 bits are used to identify the gray levels: 0/1/2/3 in decimal or 0000/0001/0010/0011 in binary, plus 70 bytes of BMP metadata or whatever)
In the hex dump of the beginning of the sample BMP image, the 3s represent white pixels at the start of the image; further down there are some 0s, 1s and 2s representing black, dark gray and light gray.
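For illustration, each byte of the pixel array therefore holds two pixels, one per nibble (a sketch, with names of my own):

# pack two 2-bit gray levels (0..3) into one 4bpp byte, high nibble first
def pack_nibbles(left, right):
    return (left << 4) | right

assert pack_nibbles(3, 3) == 0x33  # two white pixels
assert pack_nibbles(0, 2) == 0x02  # black, then light gray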
With the command below,
convert pic.png -colorspace gray +matte -depth 2 out.bmp
I can get a visually correct 4-level grayscale image, but with the wrong depth or size per pixel:
Color Model: RGB
Depth: 8 (expect 4)
Dimension: 800x504
Size: 1,209,738 bytes (something like 3 bytes per pixel, plus metadata)
(no mention of indexed colour space)
Please help...
OK, I have written a Python script following Mark's hints (see comments under the original question) to manually create a 4-level grayscale BMP with 4bpp. This specific BMP format construction is for the 4.3-inch e-paper display module made by WaveShare. Specs can be found here: http://www.waveshare.com/wiki/4.3inch_e-Paper
Here's how to pipe the original image to my code and save the outcome.
convert in.png -colorspace gray +matte -colors 4 -depth 2 -resize '800x600>' pgm:- | ./4_level_gray_4bpp_BMP_converter.py > out.bmp
Contents of 4_level_gray_4bpp_BMP_converter.py:
#!/usr/bin/env python
"""
### Sample BMP header structure, total = 70 bytes
### !!! little-endian !!!
Bitmap file header 14 bytes
42 4D "BM"
C6 A9 03 00 FileSize = 240,070 <= dynamic value
00 00 Reserved
00 00 Reserved
46 00 00 00 Offset = 70 = 14+56
DIB header (bitmap information header)
BITMAPV3INFOHEADER 56 bytes
28 00 00 00 Size = 40
20 03 00 00 Width = 800 <= dynamic value
58 02 00 00 Height = 600 <= dynamic value
01 00 Planes = 1
04 00 BitCount = 4
00 00 00 00 compression
00 00 00 00 SizeImage
00 00 00 00 XPerlPerMeter
00 00 00 00 YPerlPerMeter
04 00 00 00 Colours used = 4
00 00 00 00 ColorImportant
00 00 00 00 Colour definition index 0
55 55 55 00 Colour definition index 1
AA AA AA 00 Colour definition index 2
FF FF FF 00 Colour definition index 3
"""
# to insert File Size, Width and Height with hex strings in order
BMP_HEADER = "42 4D %s 00 00 00 00 46 00 00 00 28 00 00 00 %s %s 01 00 04 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 04 00 00 00 00 00 00 00 00 00 00 00 55 55 55 00 AA AA AA 00 FF FF FF 00"
BMP_HEADER_SIZE = 70
BPP = 4
BYTE = 8
ALIGNMENT = 4 # bytes per row
import sys
from re import findall
DIMENTIONS = 1
PIXELS = 3
BLACK = "0"
DARK_GRAY = "1"
GRAY = "2"
WHITE = "3"
# sample data:
# ['P5\n', '610 590\n', '255\n', '<1 byte per pixel for 4 levels of gray>']
# where item 1 is always P5, item 2 is width and height, item 3 is always 255, item 4 is pixels/colours
data = sys.stdin.readlines()
width = int(data[DIMENTIONS].strip().split(' ')[0])
height = int(data[DIMENTIONS].strip().split(' ')[1])
if not width*height == len(data[PIXELS]):
print "Error: pixel data (%s bytes) and image size (%dx%d pixels) do not match" % (len(data[PIXELS]),width,height)
sys.exit()
colours = [] # enumerate 4 gray levels
for p in data[PIXELS]:
if not p in colours:
colours.append(p)
if len(colours) == 4:
break
# it's possible for the converted pixels to have less than 4 gray levels
colours = sorted(colours) # sort from low to high
# map each colour to e-paper gray indexes
# creates hex string of pixels
# e.g. "0033322222110200....", which is 4 level gray with 4bpp
if len(colours) == 1: # unlikely, but let's have this case here
pixels = data[PIXELS].replace(colours[0],BLACK)
elif len(colours) == 2: # black & white
pixels = data[PIXELS].replace(colours[0],BLACK)\
.replace(colours[1],WHITE)
elif len(colours) == 3:
pixels = data[PIXELS].replace(colours[0],DARK_GRAY)\
.replace(colours[1],GRAY)\
.replace(colours[2],WHITE)
else: # 4 grays as expected
pixels = data[PIXELS].replace(colours[0],BLACK)\
.replace(colours[1],DARK_GRAY)\
.replace(colours[2],GRAY)\
.replace(colours[3],WHITE)
# BMP pixel array starts from last row to first row
# and must be aligned to 4 bytes or 8 pixels
padding = "F" * ((BYTE/BPP) * ALIGNMENT - width % ((BYTE/BPP) * ALIGNMENT))
aligned_pixels = ''.join([pixels[i:i+width]+padding for i in range(0, len(pixels), width)][::-1])
# convert hex string to represented byte values
def Hex2Bytes(hexStr):
hexStr = ''.join(hexStr.split(" "))
bytes = []
for i in range(0, len(hexStr), 2):
byte = int(hexStr[i:i+2],16)
bytes.append(chr(byte))
return ''.join(bytes)
# convert integer to 4-byte little endian hex string
# e.g. 800 => 0x320 => 00000320 (big-endian) =>20030000 (little-endian)
def i2LeHexStr(i):
be_hex = ('0000000'+hex(i)[2:])[-8:]
n = 2 # split every 2 letters
return ''.join([be_hex[i:i+n] for i in range(0, len(be_hex), n)][::-1])
BMP_HEADER = BMP_HEADER % (i2LeHexStr(len(aligned_pixels)/(BYTE/BPP)+BMP_HEADER_SIZE),i2LeHexStr(width),i2LeHexStr(height))
sys.stdout.write(Hex2Bytes(BMP_HEADER+aligned_pixels))
Edit: everything about this e-paper display and my code to display things on it can be found here: https://github.com/yy502/ePaperDisplay
This works for me in ImageMagick 6.9.10.23 Q16 on Mac OSX Sierra.
Input:
convert logo.png -colorspace gray -depth 2 -type truecolor logo_depth8_gray_rgb.bmp
Adding -type truecolor converts the image to RGB, but with gray tones as per -colorspace gray. And -depth 2 creates only 4 colors in the histogram.
identify -verbose logo_depth8_gray_rgb.bmp
Image:
Filename: logo_depth8_gray_rgb.bmp
Format: BMP (Microsoft Windows bitmap image)
Class: DirectClass
Geometry: 640x480+0+0
Units: PixelsPerCentimeter
Colorspace: sRGB
Type: Grayscale
Base type: Undefined
Endianness: Undefined
Depth: 8/2-bit
Channel depth:
red: 2-bit
green: 2-bit
blue: 2-bit
Channel statistics:
Pixels: 307200
Red:
min: 0 (0)
max: 255 (1)
mean: 228.414 (0.895742)
standard deviation: 66.9712 (0.262632)
kurtosis: 4.29925
skewness: -2.38354
entropy: 0.417933
Green:
min: 0 (0)
max: 255 (1)
mean: 228.414 (0.895742)
standard deviation: 66.9712 (0.262632)
kurtosis: 4.29925
skewness: -2.38354
entropy: 0.417933
Blue:
min: 0 (0)
max: 255 (1)
mean: 228.414 (0.895742)
standard deviation: 66.9712 (0.262632)
kurtosis: 4.29925
skewness: -2.38354
entropy: 0.417933
Image statistics:
Overall:
min: 0 (0)
max: 255 (1)
mean: 228.414 (0.895742)
standard deviation: 66.9712 (0.262632)
kurtosis: 4.29928
skewness: -2.38355
entropy: 0.417933
Colors: 4 <--------
Histogram: <--------
12730: (0,0,0) #000000 black
24146: (85,85,85) #555555 srgb(85,85,85)
9602: (170,170,170) #AAAAAA srgb(170,170,170)
260722: (255,255,255) #FFFFFF white
Look at https://en.wikipedia.org/wiki/BMP_file_format#File_structure
The problem is that you do not specify a color table. According to the Wikipedia article, a color table is mandatory if the bit depth is less than 8 bits.
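For reference, such a table is four little-endian (blue, green, red, reserved) quads; a minimal sketch, using the same four gray values as the accepted script's header:

import struct

# 4-entry grayscale color table, one BGRA quad per gray level
palette = b"".join(struct.pack("<4B", v, v, v, 0)
                   for v in (0x00, 0x55, 0xAA, 0xFF))
assert len(palette) == 16  # 4 entries x 4 bytes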
Well done on solving the problem. You could consider also making a personal delegate, or custom delegate, for ImageMagick to help automate the process. ImageMagick is able to delegate formats it cannot process itself to delegates, or helpers, such as your 2-bit helper ;-)
Rather than interfere with the system-wide delegates, which probably live in /etc/ImageMagick/delegates.xml, you can make your own in $HOME/.magick/delegates.xml. Yours would look something like this:
<?xml version="1.0" encoding="UTF-8"?>
<delegatemap>
<delegate encode="epaper" command="convert "%f" +matte -colors 4 -depth 8 -colorspace gray pgm:- | /usr/local/bin/4_level_gray_4bpp_BMP_converter.py > out.bmp"/>
</delegatemap>
Then if you run:
identify -list delegate
you will see yours listed as a "known" helper.
This all means that you will be able to run commands like:
convert a.png epaper:
and it will do the 2-bit BMP thing automagically.
You can just use this:
convert in.jpg -colorspace gray +matte -colors 2 -depth 1 -resize '640x384>' pgm:- > out.bmp
I too have this e-paper display. After a lot of trial and error, I was able to correctly convert the images using ImageMagick with the following command:
convert -verbose INPUT.BMP -resize 300x300 -monochrome -colorspace sRGB -colors 2 -depth 1 BMP3:OUTPUT.BMP

Constant TAO CORBA IOR

How can I configure a TAO CORBA server so that the IOR string it generates from object_to_string is constant?
Each time the server restarts, the IOR string generated from object_to_string changes. This is inconvenient, since the client has to update its cached server IOR string by reloading the IOR file or querying the naming service. It would therefore be useful if the server could generate a constant IOR string, no matter how many times it restarts.
My CORBA server is based on ACE+TAO, and I do remember that TAO supports constant IOR strings (the IOR strings it generates are the same each time) and that the solution is some configuration on the server side, but I cannot remember that configuration now.
=============================================
UPDATE:
In ACE_wrappers/TAO/tests/POA/Persistent_ID/server.cpp, I added a new function named testUniqu() which is similar to the createPOA method. The updated file content is:
void testUniqu (CORBA::ORB_ptr orb_, PortableServer::POA_ptr poa_)
{
  CORBA::PolicyList policies (2);
  policies.length (2);
  // IOR is the same even if it is SYSTEM_ID
  policies[0] = poa_->create_id_assignment_policy (PortableServer::USER_ID);
  policies[1] = poa_->create_lifespan_policy (PortableServer::PERSISTENT);
  PortableServer::POAManager_var poa_manager = poa_->the_POAManager ();
  PortableServer::POA_ptr child_poa_ =
    poa_->create_POA ("childPOA", poa_manager.in (), policies);
  // Destroy the policies
  for (CORBA::ULong i = 0; i < policies.length (); ++i)
    {
      policies[i]->destroy ();
    }
  test_i *servant = new test_i (orb_, child_poa_);
  PortableServer::ObjectId_var oid =
    PortableServer::string_to_ObjectId ("xushijie");
  child_poa_->activate_object_with_id (oid, servant);
  PortableServer::ObjectId_var id = poa_->activate_object (servant);
  CORBA::Object_var object = poa_->id_to_reference (id.in ());
  test_var test = test::_narrow (object.in ());
  CORBA::String_var ior = orb_->object_to_string (test.in ());
  std::cout << ior.in () << std::endl;
  poa_->the_POAManager ()->activate ();
  orb_->run ();
}
int
ACE_TMAIN (int argc, ACE_TCHAR *argv[])
{
  try
    {
      CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);
      int result = parse_args (argc, argv);
      CORBA::Object_var obj = orb->resolve_initial_references ("RootPOA");
      PortableServer::POA_var root_poa = PortableServer::POA::_narrow (obj.in ());
      PortableServer::POAManager_var poa_manager = root_poa->the_POAManager ();
      testUniqu (orb.in (), root_poa.in ());
      orb->destroy ();
    }
  catch (const CORBA::Exception& ex)
    {
      ex._tao_print_exception ("Exception caught");
      return -1;
    }
  return 0;
}
The problem is that the output server IORs are still different after each restart. I also compared this code to the one on page 412 of Advanced CORBA Programming, but it still fails...
///////////////////////////////////
UPDATE:
With "server -ORBListenEndpoints iiop://:1234 > /tmp/ior1", the generated two IORs are:
IOR:010000000d00000049444c3a746573743a312e300000000001000000000000007400000001010200150000007368696a69652d5468696e6b5061642d543431300000d2041b00000014010f0052535453f60054c6f80c000000000001000000010000000002000000000000000800000001000000004f41540100000018000000010000000100010001000000010001050901010000000000
IOR:010000000d00000049444c3a746573743a312e300000000001000000000000007400000001010200150000007368696a69652d5468696e6b5061642d543431300000d2041b00000014010f0052535468f60054da280a000000000001000000010000000002000000000000000800000001000000004f41540100000018000000010000000100010001000000010001050901010000000000
The result of tao_catior for ior1 and ior2:
ior1:
The Byte Order: Little Endian
The Type Id: "IDL:test:1.0"
Number of Profiles in IOR: 1
Profile number: 1
IIOP Version: 1.2
Host Name: **
Port Number: 1234
Object Key len: 27
Object Key as hex:
14 01 0f 00 52 53 54 53 f6 00 54 c6 f8 0c 00 00
00 00 00 01 00 00 00 01 00 00 00
The Object Key as string:
....RSTS..T................
The component <1> ID is 00 (TAG_ORB_TYPE)
ORB Type: 0x54414f00 (TAO)
The component <2> ID is 11 (TAG_CODE_SETS)
Component length: 24
Component byte order: Little Endian
Native CodeSet for char: Hex - 10001 Description - ISO8859_1
Number of CCS for char 1
Conversion Codesets for char are:
1) Hex - 5010001 Description - UTF-8
Native CodeSet for wchar: Hex - 10109 Description - UTF-16
Number of CCS for wchar 0
ior2:
The Byte Order: Little Endian
The Type Id: "IDL:test:1.0"
Number of Profiles in IOR: 1
Profile number: 1
IIOP Version: 1.2
Host Name: **
Port Number: 1234
Object Key len: 27
Object Key as hex:
14 01 0f 00 52 53 54 68 f6 00 54 da 28 0a 00 00
00 00 00 01 00 00 00 01 00 00 00
The Object Key as string:
....RSTh..T.(..............
The component <1> ID is 00 (TAG_ORB_TYPE)
ORB Type: 0x54414f00 (TAO)
The component <2> ID is 11 (TAG_CODE_SETS)
Component length: 24
Component byte order: Little Endian
Native CodeSet for char: Hex - 10001 Description - ISO8859_1
Number of CCS for char 1
Conversion Codesets for char are:
1) Hex - 5010001 Description - UTF-8
Native CodeSet for wchar: Hex - 10109 Description - UTF-16
Number of CCS for wchar 0
The diff result is:
< 14 01 0f 00 52 53 54 53 f6 00 54 c6 f8 0c 00 00
---
> 14 01 0f 00 52 53 54 68 f6 00 54 da 28 0a 00 00
19c19
< ....RSTS..T................
---
> ....RSTh..T.(..............
Similar diff result is:
< 14 01 0f 00 52 53 54 62 fd 00 54 2c 9a 0e 00 00
---
> 14 01 0f 00 52 53 54 02 fd 00 54 f9 a9 09 00 00
19c19
< ....RSTb..T,...............
---
> ....RST...T................
The difference is in ObjectKey.
============================================
update:
Instead of using the above code, I found a better solution using the helper class TAO_ORB_Manager, which is used by the NamingService and TAO/examples/Simple. TAO_ORB_Manager encapsulates the API and generates persistent IORs, as in this example code from Simple.cpp:
if (this->orb_manager_.init_child_poa (argc, argv, "child_poa") == -1) {
    CORBA::String_var str =
        this->orb_manager_.activate_under_child_poa (servant_name,
                                                     this->servant_.in ());
}
Here is some of the description for TAO_ORB_Manager:
class TAO_UTILS_Export TAO_ORB_Manager
{
  /**
   * Creates a child POA under the root POA with PERSISTENT and
   * USER_ID policies. Call this if you want a @a child_poa with the
   * above policies, otherwise call init.
   *
   * @retval -1 Failure
   * @retval 0 Success
   */
  int init_child_poa (int &argc,
                      ACE_TCHAR *argv[],
                      const char *poa_name,
                      const char *orb_name = 0);

  /**
   * Precondition: init_child_poa has been called. Activate <servant>
   * using the POA <activate_object_with_id> created from the string
   * <object_name>. Users should call this to activate objects under
   * the child_poa.
   *
   * @param object_name String name which will be used to create
   * an Object ID for the servant.
   * @param servant The servant to activate under the child POA.
   *
   * @return 0 on failure, a string representation of the object ID if
   * successful. Caller of this method is responsible for
   * memory deallocation of the string.
   */
  char *activate_under_child_poa (const char *object_name,
                                  PortableServer::Servant servant);
  ...................
};
After building, I can get what I want with the -ORBListenEndpoints iiop://localhost:2809 option. Thanks to @Johnny Willemsen for the help.
You have to create the POA with a persistent lifespan policy; see ACE_wrappers/TAO/tests/POA/Persistent_ID in the TAO distribution.
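Note also that in the testUniqu code above, the IOR that gets printed comes from a second activation of the servant on the root POA (poa_->activate_object), and the root POA has the TRANSIENT lifespan policy, so its object keys typically embed a per-run timestamp; that is why the printed IORs keep changing. Stringify the reference registered with the persistent child POA instead, roughly like this (a sketch following the variable names in the question):

// activate once, on the persistent child POA, with a user-chosen ID ...
PortableServer::ObjectId_var oid =
  PortableServer::string_to_ObjectId ("xushijie");
child_poa_->activate_object_with_id (oid.in (), servant);
// ... and stringify the reference obtained from that same POA
CORBA::Object_var object = child_poa_->id_to_reference (oid.in ());
CORBA::String_var ior = orb_->object_to_string (object.in ());

Combined with a fixed endpoint (-ORBListenEndpoints iiop://:1234), the IOR then stays the same across restarts.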

simple midi file writer in Objective C

I'm writing a program in Objective-C to generate a MIDI file. As a test, I'm asking it to write a file which plays one note and stops it a delta tick afterwards.
But when I try to open it with Logic and Sibelius, they both say that the file is corrupted.
Here's the hex readout of the file..
4D 54 68 64 00 00 00 06 00 01 00 01 00 40 - MThd header
4D 54 72 6B 00 00 00 0D - MTrk, with a length of 13 as 32-bit hex [00 00 00 0D]
81 00 90 48 64 82 00 80 48 64 - the track (delta, noteOn, delta, noteOff)
FF 2F 00 - End of Track meta event
And here are my routines to write the delta time and to write the note:
- (void)appendNote:(int)note state:(BOOL)on isMelody:(BOOL)melodyNote {
    // generate a MIDI note event and add it to the 'track' NSData object
    char c[3];
    if (on) {
        c[0] = 0x90;
        c[2] = volume;
    } else {
        c[0] = 0x80;
        c[2] = lastVolume;
    }
    c[1] = note;
    [track appendBytes:&c length:3];
}

- (void)writeVarTime:(int)value {
    // generate a MIDI delta time and add it to the 'track' NSData object
    char c[2];
    if (value < 128) {
        c[0] = value;
        [track appendBytes:&c length:1];
    } else {
        c[0] = value / 128 | 0x80;
        c[1] = value % 128;
        [track appendBytes:&c length:2];
    }
}
Are there any clever MIDI gurus out there who can tell what's wrong with this MIDI file?
The delta time of the End of Track event is missing.
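In SMF terms: every event in an MTrk chunk, including the final End of Track meta event FF 2F 00, must be preceded by a delta time. Adding a zero delta byte makes the track body 14 bytes, so the MTrk length field must become 00 00 00 0E. Using the question's own helpers, roughly:

// write the End of Track meta event with its (zero) delta time
[self writeVarTime:0];  // the missing delta byte: 0x00
char endOfTrack[3] = { (char)0xFF, 0x2F, 0x00 };
[track appendBytes:endOfTrack length:3];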
