What is the meaning of this Queue Property (iOS Audio Queues)?

I want to write a music player, and I've seen code like the following:
AudioFileGetPropertyInfo(audioFile,
    kAudioFilePropertyMagicCookieData, &size, nil);
if (size > 0) {
    char *cookie = malloc(sizeof(char) * size);
    AudioFileGetProperty(audioFile,
        kAudioFilePropertyMagicCookieData, &size, cookie);
    AudioQueueSetProperty(audioQueue,
        kAudioQueueProperty_MagicCookie, cookie, size);
    free(cookie);
}
I don't understand why this audio queue property needs to be set, or what kAudioQueueProperty_MagicCookie means. I can't find any help in the documentation.
Can anyone point me in the right direction?

Actually, the magic cookie is more than just a signature: it holds some information about the encoder. The most useful items are "Maximum Bit Rate" and "Average Bit Rate", especially for a compressed format like kAudioFileMPEG4Type. For this specific type, the magic cookie is the same as the "esds" box in an MPEG-4 file. You can find the exact bit layout at:
http://xhelmboyx.tripod.com/formats/mp4-layout.txt
8+ bytes vers. 2 ES Descriptor box
= long unsigned offset + long ASCII text string 'esds'
- if encoded to ISO/IEC 14496-10 AVC standards then optionally use:
= long unsigned offset + long ASCII text string 'm4ds'
-> 4 bytes version/flags = 8-bit hex version + 24-bit hex flags
(current = 0)
-> 1 byte ES descriptor type tag = 8-bit hex value 0x03
-> 3 bytes extended descriptor type tag string = 3 * 8-bit hex value
- types are Start = 0x80 ; End = 0xFE
- NOTE: the extended start tags may be left out
-> 1 byte descriptor type length = 8-bit unsigned length
-> 2 bytes ES ID = 16-bit unsigned value
-> 1 byte stream priority = 8-bit unsigned value
- Defaults to 16 and ranges from 0 through to 31
-> 1 byte decoder config descriptor type tag = 8-bit hex value 0x04
-> 3 bytes extended descriptor type tag string = 3 * 8-bit hex value
- types are Start = 0x80 ; End = 0xFE
- NOTE: the extended start tags may be left out
-> 1 byte descriptor type length = 8-bit unsigned length
-> 1 byte object type ID = 8-bit unsigned value
- type IDs are system v1 = 1 ; system v2 = 2
- type IDs are MPEG-4 video = 32 ; MPEG-4 AVC SPS = 33
- type IDs are MPEG-4 AVC PPS = 34 ; MPEG-4 audio = 64
- type IDs are MPEG-2 simple video = 96
- type IDs are MPEG-2 main video = 97
- type IDs are MPEG-2 SNR video = 98
- type IDs are MPEG-2 spatial video = 99
- type IDs are MPEG-2 high video = 100
- type IDs are MPEG-2 4:2:2 video = 101
- type IDs are MPEG-4 ADTS main = 102
- type IDs are MPEG-4 ADTS Low Complexity = 103
- type IDs are MPEG-4 ADTS Scalable Sampling Rate = 104
- type IDs are MPEG-2 ADTS = 105 ; MPEG-1 video = 106
- type IDs are MPEG-1 ADTS = 107 ; JPEG video = 108
- type IDs are private audio = 192 ; private video = 208
- type IDs are 16-bit PCM LE audio = 224 ; vorbis audio = 225
- type IDs are dolby v3 (AC3) audio = 226 ; alaw audio = 227
- type IDs are mulaw audio = 228 ; G723 ADPCM audio = 229
- type IDs are 16-bit PCM Big Endian audio = 230
- type IDs are Y'CbCr 4:2:0 (YV12) video = 240 ; H264 video = 241
- type IDs are H263 video = 242 ; H261 video = 243
-> 6 bits stream type = 3/4 byte hex value
- type IDs are object descript. = 1 ; clock ref. = 2
- type IDs are scene descript. = 4 ; visual = 4
- type IDs are audio = 5 ; MPEG-7 = 6 ; IPMP = 7
- type IDs are OCI = 8 ; MPEG Java = 9
- type IDs are user private = 32
-> 1 bit upstream flag = 1/8 byte hex value
-> 1 bit reserved flag = 1/8 byte hex value set to 1
-> 3 bytes buffer size = 24-bit unsigned value
-> 4 bytes maximum bit rate = 32-bit unsigned value
-> 4 bytes average bit rate = 32-bit unsigned value
-> 1 byte decoder specific descriptor type tag
= 8-bit hex value 0x05
-> 3 bytes extended descriptor type tag string
= 3 * 8-bit hex value
- types are Start = 0x80 ; End = 0xFE
- NOTE: the extended start tags may be left out
-> 1 byte descriptor type length
= 8-bit unsigned length
-> ES header start codes = hex dump
-> 1 byte SL config descriptor type tag = 8-bit hex value 0x06
-> 3 bytes extended descriptor type tag string = 3 * 8-bit hex value
- types are Start = 0x80 ; End = 0xFE
- NOTE: the extended start tags may be left out
-> 1 byte descriptor type length = 8-bit unsigned length
-> 1 byte SL value = 8-bit hex value set to 0x02
"
The magic cookie that comes from kAudioFilePropertyMagicCookieData starts at the ES descriptor: just ignore the first 4 bytes (version/flags) described in the map above, and the rest is an exact match for the magic cookie.
A sample magic cookie looks like this:
03 80 80 80 22 00 00 00 04 80 80 80 14 40 15 00 18 00 00 00 FA 00 00 00 FA 00 05 80 80 80 02 12 08 06 80 80 80 01 02
Maximum bit rate is at offset 18 -> 0xFA00 (64,000)
Average bit rate is at offset 22 -> 0xFA00 (64,000)
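For illustration, here is a quick sketch (in Swift; my own helper, not a framework call) that pulls those two rates out of the sample cookie above. Note the fixed offsets 18 and 22 only hold for a cookie laid out exactly like this sample, because the extended 0x80 tag bytes are optional; a robust parser would walk the descriptor tags instead.
// The sample cookie from above.
let cookie: [UInt8] = [0x03, 0x80, 0x80, 0x80, 0x22, 0x00, 0x00, 0x00,
                       0x04, 0x80, 0x80, 0x80, 0x14, 0x40, 0x15, 0x00,
                       0x18, 0x00, 0x00, 0x00, 0xFA, 0x00, 0x00, 0x00,
                       0xFA, 0x00, 0x05, 0x80, 0x80, 0x80, 0x02, 0x12,
                       0x08, 0x06, 0x80, 0x80, 0x80, 0x01, 0x02]
// Read a 32-bit big-endian value at a byte offset.
func beUInt32(_ bytes: [UInt8], at offset: Int) -> UInt32 {
    var value: UInt32 = 0
    for i in 0..<4 { value = (value << 8) | UInt32(bytes[offset + i]) }
    return value
}
let maxBitRate = beUInt32(cookie, at: 18)  // 0x0000FA00 = 64000
let avgBitRate = beUInt32(cookie, at: 22)  // 0x0000FA00 = 64000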
Although the Apple documentation says the magic cookie is read/write, I had no luck changing the bit rate before creating or converting files.
Hope that helps someone.

The "magic cookie" is a file type signature consisting of a unique sequence of bytes at the beginning of the file, indicating the file format. The audio queue framework uses this information to determine how to decode or extract audio information from a file stream (instead of using or trusting the file name extension). The code you posted reads this set of bytes from the file, and passes it to the audio queue as a cookie. (It would be a mistake to let them be interpreted as PCM samples instead, for instance).

Related

Extract 4 bits vs 2Bit of Bluetooth HEX Data, why same method results in error

This is a follow-on to this SO question (Extract 4 bits of Bluetooth HEX Data), which has an accepted answer. I want to understand why the method I was using (example below, which works) does not work when applied to that question's data.
To decode Cycling Power data, the first 2 bytes are the flags field, which is used to determine what capabilities the power meter provides.
guard let characteristicData = characteristic.value else { return -1 }
var byteArray = [UInt8](characteristicData)
// This is the output from the Sensor (In Decimal and Hex)
// DEC [35, 0, 25, 0, 96, 44, 0, 33, 229] Hex:{length = 9, bytes = 0x23001900602c0021e5} FirstByte:100011
/// First 2 bytes are the flags
let flags = byteArray[1]<<8 + byteArray[0]
This concatenates the first 2 bytes into the flags value. I then mask the flags value to get the relevant bit positions,
e.g. to get power balance, I do (flags & 0x01) > 0
This method works and I'm a happy camper.
However, why is it that the same method does not work on SO Extract 4 bits of Bluetooth HEX Data? That one is decoding Bluetooth FTMS data (different from the above):
guard let characteristicData = characteristic.value else { return -1 }
let byteArray = [UInt8](characteristicData)
let nsdataStr = NSData.init(data: (characteristic.value)!)
print("pwrFTMS 2ACC Feature Array:[\(byteArray.count)]\(byteArray) Hex:\(nsdataStr)")
PwrFTMS 2ACC Feature Array:[8][2, 64, 0, 0, 8, 32, 0, 0] Hex:{length = 8, bytes = 0x0240000008200000}
Based on the specs, the returned data has 2 characteristics, each of them 4 octets long.
doing
byteArray[3]<<24 + byteArray[2]<<16 + byteArray[1]<<8 + byteArray[0]
to join the first 4 bytes gives the wrong output to start decoding from.
edit: Added clarification
There is a problem with this code that you say works... but it seems to work "accidentally":
let flags = byteArray[1]<<8 + byteArray[0]
This results in a UInt8, but the flags field in the first table is 16 bits. Note that byteArray[1] << 8 always evaluates to 0, because you are shifting all of the bits of the byte out of the byte. It appeared to work because the only bit you were interested in was in byteArray[0].
So you need to convert to 16 bits (or larger) first and then shift:
let flags = (UInt16(byteArray[1]) << 8) + UInt16(byteArray[0])
Now flags is a UInt16.
Similarly, when you combine 4 bytes, you need them to be 32-bit values before you shift. So:
let flags = (UInt32(byteArray[3]) << 24) +
            (UInt32(byteArray[2]) << 16) +
            (UInt32(byteArray[1]) << 8) +
            UInt32(byteArray[0])
but since that's just reading a 32-bit value from a sequence of bytes in little-endian byte order, and all current Apple devices (and the vast majority of other modern computers) are little-endian machines, here is an easier way:
let flags = byteArray.withUnsafeBytes {
$0.bindMemory(to: UInt32.self)[0]
}
In summary, in both cases you were only preserving byte 0 in your shift-and-add, because the other shifts all evaluated to 0 after shifting the bits completely out of the byte. It just so happened that in the first case byte 0 contained the information you needed. In general, you must first promote the value to the size you need for the result, and then shift it.
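If you prefer to avoid the unsafe-pointer route, a small generic helper (my own, hypothetical naming) captures the promote-then-shift pattern for any integer width:
// Reads a little-endian integer of any fixed width from a byte array,
// promoting each byte to the target type before shifting so no bits are lost.
func readLE<T: FixedWidthInteger>(_ type: T.Type, from bytes: [UInt8], at offset: Int = 0) -> T {
    var value: T = 0
    for i in (0..<MemoryLayout<T>.size).reversed() {
        value = (value << 8) | T(bytes[offset + i])
    }
    return value
}
let flags16 = readLE(UInt16.self, from: [35, 0, 25, 0, 96, 44, 0, 33, 229])  // 35 (0x0023)
let flags32 = readLE(UInt32.self, from: [2, 64, 0, 0, 8, 32, 0, 0])          // 16386 (0x00004002)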

Unpack byte array from BLE packet

I'm developing an app using BLE where the iPhone is the peripheral and will respond to write requests (CBATTRequest) from the central.
My understanding is that the request's value (request.value, of type NSData) represents a byte array that I can unpack to read the packet number etc. Given the size (octets) and position of each field, how can I unpack and read each value, conceptually and technically? And how would I go about constructing/packing this same byte array as if I were preparing to send the request, since I will have to pack data in the same manner for the response?
When you receive the data, it's probably in a CBATTRequest. The data is contained in its value member, of type NSData. The length member gives the length in bytes/octets.
CBATTRequest* request = ...;
NSData* value = request.value;
NSUInteger packetLen = value.length;
It then makes sense to cast this to a struct that corresponds to the structure of the packet:
typedef struct {
    unsigned char pktNo;
    unsigned char ctrlCmd;
    unsigned char txPowerRequest;
    unsigned char uuid[2];
    unsigned char txCnt;
    unsigned char userPayload[14];
} Packet;
const Packet* packet = (const Packet*)value.bytes;
Note that packet is of variable length. So only part of the userPayload is valid. The valid length is:
int userPayloadLength = packetLen - 6;
Now you can easily access the members:
int packetNumber = packet->pktNo;
To construct a similar packet, you would approach it in much the same way:
Packet response;
response.pktNo = ...;
response.ctrlCmd = ...;
int userPayloadLength = 5;
NSData* value = [NSData dataWithBytes: &response length: userPayloadLength + 6];
Bit 4 to 0 set to 0x01 for..
This most likely refers to a single octet, e.g. to ctrlCmd. To test it:
if (((packet->ctrlCmd >> 0) & 0x1f) == 0x01) ...
0x1f is the bit mask for 5 consecutive bits (bits 0 to 4). >> 0 doesn't do anything here, but a shift would be required if the field started at a higher bit, e.g. for a field in bits 2 to 6 you would shift right by 2.
A typical UUID is 16 bytes long. So I assume byte indexes 13 and 12 refer to bytes 12 and 13 within a 16-byte UUID (as only two bytes are transmitted). The remaining bytes are probably fixed to the base Bluetooth UUID:
00000000-0000-1000-8000-00805F9B34FB
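If those two transmitted bytes really are a standard 16-bit Bluetooth UUID (an assumption on my part), expanding them to the full 128-bit form is mechanical; a sketch in Swift:
import Foundation
// Hypothetical helper: plugs a 16-bit short UUID into the Bluetooth base UUID.
func expandShortUUID(_ short: UInt16) -> String {
    return String(format: "%08X-0000-1000-8000-00805F9B34FB", UInt32(short))
}
expandShortUUID(0x180D)  // "0000180D-0000-1000-8000-00805F9B34FB"
(0x180D, the Heart Rate service, is just an example value.)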

How much value do the 8 bit variable holds?

The answer is probably 256, but I am not satisfied with it.
Suppose a variable has 8 bits; that means its 8th bit can hold the value 256, but it also has seven other bits. Wouldn't the total value be the sum of all the bits?
To me, the final value an 8-bit variable holds should be the sum of all its bits. But it isn't. Why?
The max value 8 bits can hold is 11111111, which is equal to 255. If you have a signed value, the max it can hold is 127; the left-most bit is used for the sign.
The binary 10000000 equals 128 (2 ^ 7), not 256. That's where your confusion lies, I think.
00000001 = 2 ^ 0 = 1
00000010 = 2 ^ 1 = 2
00000100 = 2 ^ 2 = 4
00001000 = 2 ^ 3 = 8
00010000 = 2 ^ 4 = 16
00100000 = 2 ^ 5 = 32
01000000 = 2 ^ 6 = 64
10000000 = 2 ^ 7 = 128
The value is indeed the sum of all bits set to 1, but the place value of the eighth bit is 2 ^ 7 (128), not 256 as you suggest - the least significant bit is 2 ^ 0 (i.e. 1), so for eight bits the MSB is 2 ^ 7. You appear to have started from 2 ^ 1 (2).
For an unsigned integer:
Bit 0 = 2 ^ 0 = 1
Bit 1 = 2 ^ 1 = 2
Bit 2 = 2 ^ 2 = 4
Bit 3 = 2 ^ 3 = 8
Bit 4 = 2 ^ 4 = 16
Bit 5 = 2 ^ 5 = 32
Bit 6 = 2 ^ 6 = 64
Bit 7 = 2 ^ 7 = 128
Sum of all ones = 255 - not 256 as you suggest: 0 to 255 = 2 ^ 8 (256) values.
For a two's complement signed 8-bit type:
Bit 7 = -(2 ^ 7) = -128
Sum of all ones = -1,
while with bit 7 = 0 and all other bits one, the sum = +127,
and all zeros except bit 7 = -128.
(-128 to +127 = 2 ^ 8 (256) values.)
Either way an 8-bit integer, signed or otherwise, has 2 ^ 8 (256) possible bit patterns.
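A few lines of code make the counting concrete (a quick sketch in Swift):
// The value is the sum of the place values of whichever bits are set.
let placeValues = (0..<8).map { 1 << $0 }  // [1, 2, 4, 8, 16, 32, 64, 128]
let allOnes = placeValues.reduce(0, +)     // 255, i.e. UInt8.max
let patternCount = 1 << 8                  // 256 distinct bit patterns
let signedRange = (Int8.min, Int8.max)     // (-128, 127): also 256 values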

Comparing signed 64 bit number using 32 bit bitwise operations in Lua

I am using Lua on Redis and want to compare two signed 64-bit numbers, which are stored in two 8-byte/character strings.
How can I compare them using the libraries available in Redis?
http://redis.io/commands/EVAL#available-libraries
I'd like to do >, < and == checks. I think this probably involves pulling two 32-bit numbers out of each 64-bit int and doing some clever math on those, but I am not sure.
I have some code to make this less abstract. a0, a1, b0, b1 are all 32-bit numbers representing the MSBs and LSBs of two signed 64-bit ints:
-- ...
local comp_int64s = function (a0, a1, b0, b1)
local cmpres = 0
-- TODO: real comparison
return cmpres
end
local l, a0, a1, b0, b1
a0, l = bit.tobit(struct.unpack("I4", ARGV[1]))
a1, l = bit.tobit(struct.unpack("I4", ARGV[1], 5))
b0, l = bit.tobit(struct.unpack("I4", blob))
b1, l = bit.tobit(struct.unpack("I4", blob, 5))
print("Cmp result", comp_int64s(a0, a1, b0, b1))
EDIT: Added code
I came up with a method that looks like it's working, though it's a little ugly.
The first step is to compare the top 32 bits as two's-complement numbers:
MSB sign bit stays, so numbers keep correct relations
-1 —> -1
0 —> 0
9223372036854775807 = 0x7fff ffff ffff ffff -> 0x7fff ffff = 2147483647
So returning the result from the MSBs works unless they are equal; then the LSBs need to be checked.
I worked through a few cases to establish some patterns:
-1 = 0xffff ffff ffff ffff
-2 = 0xffff ffff ffff fffe
32 bit is:
-1 -> 0xffff ffff = -1
-2 -> 0xffff fffe = -2
-1 > -2 would be like -1 > -2 : GOOD
And
8589934591 = 0x0000 0001 ffff ffff
8589934590 = 0x0000 0001 ffff fffe
32 bit is:
8589934591 -> ffff ffff = -1
8589934590 -> ffff fffe = -2
8589934591 > 8589934590 would be -1 > -2 : GOOD
The sign bit on the MSBs doesn't matter, because negative numbers have the same relationship between themselves as positive numbers do. E.g. regardless of the sign bit, LSB values of 0xff > 0xfe, always.
What about if the MSB on the lower 32 bits is different?
0xff7f ffff 7fff ffff = -36,028,799,166,447,617
0xff7f ffff ffff ffff = -36,028,797,018,963,969
32 bit is:
-..799.. -> 0x7fff ffff = 2147483647
-..797.. -> 0xffff ffff = -1
-..799.. < -..797.. would be 2147483647 < -1 : BAD!
So we need to ignore the sign bit on the lower 32 bits. And since the relationships are the same for the LSBs regardless of sign, just using the lowest 32 bits unsigned works for all cases.
This means I want signed for the MSBs and unsigned for the LSBs - so i4 (signed) for the MSBs and I4 (unsigned) for the LSBs. Also making big-endian explicit by using '>' in the struct.unpack calls:
-- ...
local comp_int64s = function (as0, au1, bs0, bu1)
if as0 > bs0 then
return 1
elseif as0 < bs0 then
return -1
else
-- msb's equal comparing lsbs - these are unsigned
if au1 > bu1 then
return 1
elseif au1 < bu1 then
return -1
else
return 0
end
end
end
local l, as0, au1, bs0, bu1
-- note: no bit.tobit here; it would wrap the unsigned LSBs back into the
-- signed 32-bit range and break the unsigned comparison
as0, l = struct.unpack(">i4", ARGV[1])    -- signed MSBs
au1, l = struct.unpack(">I4", ARGV[1], 5) -- unsigned LSBs
bs0, l = struct.unpack(">i4", blob)
bu1, l = struct.unpack(">I4", blob, 5)
print("Cmp result", comp_int64s(as0, au1, bs0, bu1))
Equality is a simple string compare, s1 == s2.
Greater than is: not (s1 == s2) and not lessThan(s1, s2).
Less than is the real work. string.byte lets you read single bytes as unsigned values. For unsigned integers you would just check bytes downwards: b1 == b2 -> check the next byte; all bytes checked -> false (equal); b1 > b2 -> false (greater); b1 < b2 -> true. Signed requires more steps: first check the sign bits (uppermost byte > 127). If sign 1 is set but not sign 2, integer 1 is negative but integer 2 is not -> true. The opposite obviously gives false. When both signs are equal, you can fall back to the unsigned processing.
If you can unpack more bytes at a time into an integer, that works too, but you have to adjust the sign-bit check. If you have LuaJIT, you can use the ffi library to cast your string to a byte array or an int64.
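Here is that byte-wise algorithm sketched outside Lua for readability (Swift; a Lua version would drive string.byte through the same steps):
// Less-than for two 8-byte big-endian two's-complement values.
func int64LessThan(_ a: [UInt8], _ b: [UInt8]) -> Bool {
    let aNeg = a[0] > 127
    let bNeg = b[0] > 127
    if aNeg != bNeg { return aNeg }  // a negative, b not -> a < b
    // Same sign: two's complement preserves order under unsigned byte compare.
    for i in 0..<8 where a[i] != b[i] {
        return a[i] < b[i]
    }
    return false  // equal
}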

Reading a Shapefile with ColdFusion

I am trying to read a binary file and parse its bytes. I have the white-paper spec on shapefiles, so I know how the file is laid out; however, I cannot seem to find the right ColdFusion functions for reading bytes and deciding what to do with them.
<cffile action="READBINARY"
file="mypath/www/_Dev/tl_2009_25_place.shp"
variable="infile" >
PDF file with the spec: http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf
For example I have the spec:
Position Field Value Type Order
Byte 0 File Code 9994 Integer Big
Byte 4 Unused 0 Integer Big
Byte 8 Unused 0 Integer Big
Byte 12 Unused 0 Integer Big
Byte 16 Unused 0 Integer Big
Byte 20 Unused 0 Integer Big
Byte 24 File Length File Length Integer Big
Byte 28 Version 1000 Integer Little
Byte 32 Shape Type Shape Type Integer Little
Byte 36 Bounding Box Xmin Double Little
Byte 44 Bounding Box Ymin Double Little
Byte 52 Bounding Box Xmax Double Little
Byte 60 Bounding Box Ymax Double Little
Byte 68* Bounding Box Zmin Double Little
Byte 76* Bounding Box Zmax Double Little
Byte 84* Bounding Box Mmin Double Little
Byte 92* Bounding Box Mmax Double Little
If this were just a flat text file I would use the mid() function to read values at my byte positions.
Can this be done in ColdFusion, and which functions can achieve my goal?
I found this function inside FarStream.as at http://code.google.com/p/vanrijkom-flashlibs/wiki/SHP, which is an ActionScript 3 file, but it represents the kind of task I need to do.
private function readHeader(e: ProgressEvent): void {
// check header:
if (! ( readByte()==0x46
&& readByte()==0x41
&& readByte()==0x52
))
{
dispatchEvent(new IOErrorEvent
( IOErrorEvent.IO_ERROR
, false,false
, "File is not FAR formatted")
);
close();
return;
}
// version:
vMajor = readByte();
vMinor = readByte();
if (vMajor>VMAJOR) {
dispatchEvent(new IOErrorEvent
( IOErrorEvent.IO_ERROR
, false,false
, "Unsupported archive version (v."+vMajor+"."+vMinor+")")
);
close();
return;
}
// table size:
tableSize = readUnsignedInt();
// done processing header:
gotHeader= true;
}
And here is the final solution:
<cfset shapeFile = createObject("java","com.bbn.openmap.layer.shape.ShapeFile").init('/www/_Dev/tl_2009_25_place.shp')>
<cfdump var="#shapeFile.getFileLength()#">
<cffile action="READBINARY" file="mypath/www/_Dev/tl_2009_25_place.shp" variable="infile" >
<cfset shapeFile = createObject("java","com.bbn.openmap.layer.shape.ShapeFile").init(infile)>
<cfdump var="#shapeFile#">
Maybe something like this?
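For reference, the header reads themselves are just offset-plus-endianness arithmetic. Here is a sketch (in Swift rather than ColdFusion, but the offsets and byte order from the spec table are the same whatever language does the reading):
import Foundation
// Reads a few shapefile header fields per the spec table above.
let data = try Data(contentsOf: URL(fileURLWithPath: "tl_2009_25_place.shp"))
func int32(_ data: Data, at offset: Int, bigEndian: Bool) -> Int32 {
    let bytes = data[offset..<offset + 4]
    let ordered = bigEndian ? Array(bytes) : Array(bytes.reversed())
    return ordered.reduce(0) { ($0 << 8) | Int32($1) }
}
let fileCode  = int32(data, at: 0,  bigEndian: true)   // must be 9994
let fileLen   = int32(data, at: 24, bigEndian: true)   // length in 16-bit words
let version   = int32(data, at: 28, bigEndian: false)  // must be 1000
let shapeType = int32(data, at: 32, bigEndian: false)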
