Guys!
I found a demo on GitHub called VideoToolboxDemo. I also found a Stack Overflow question, how-to-use-videotoolbox-to-decompress-h-264-video-stream, which someone has implemented on GitHub: https://github.com/lileilei1119/VTDemo
But there is a difference between them in how they find the SPS and PPS.
The VideoToolboxDemo is:
uint8_t *data = pCodecCtx->extradata;
int size = pCodecCtx->extradata_size;
It uses the extradata of pCodecCtx from FFmpeg to look for a start code such as 0x00 00 01 (or 0x00 00 00 01),
but the approach from the Stack Overflow question is:
[_h264Decoder decodeFrame:packet.data withSize:packet.size];
That is, it uses the data of each packet?
I have tried both ways, but I still can't find the start code of the SPS and PPS. Does anyone know why? Is there something wrong with my file?
My MP4 file is http://7u2m53.com1.z0.glb.clouddn.com/201601131107187320.mp4
VideoToolbox does not use Annex B, hence there is no start code. Read more here: Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream
I am using the Exif information to get the correct rotation for an image captured from the mobile camera.
On Android, the possible values are 1, 3, 6, 8, and 9.
On iOS, I am using the same code, but I am getting invalid values like 393216, 196608, 524288, 65536, etc.
I don't understand why there is such a difference.
Short answer:
On iOS you need to read those bytes in reverse order to get the correct value. You are also incorrectly reading 24 bits (3 bytes) instead of just 16 bits (2 bytes), or perhaps you are extracting 2 bytes but an extra zero byte is somehow being appended.
You could use an OR check inside an if statement that tests both byte-order equivalents. Since Android's 3 would show up as 768 on iOS, you can try:
if (orient_val == 3 || orient_val == 768)
{ /* do whatever you do here */ }
PS: 1 == 256, 2 == 512, 3 == 768, 4 == 1024, 5 == 1280, 6 == 1536, 7 == 1792, 8 == 2048, 9 == 2304
Long version:
This is a byte-order (endianness) issue. In little-endian order the least significant byte comes first; in big-endian order the most significant byte comes first. The same bytes read in the wrong order give a different number: where one platform sees AB, the other ends up with BA.
Some pointers:
Your 3 stored as 2 bytes in big-endian order is the byte pair 00 03; in little-endian order it is 03 00.
The problem is, if you don't adapt and read the reversed pair 03 00 as big-endian, you get 768.
Worse still, you are somehow reading it as the 3 bytes 03 00 00, which gives you
that 196608 (3 × 65536).
Similarly, 06 00 00 gives you 393216, where even the two-byte misread 06 00 would only give 1536.
Fix your code to drop the extra 00 byte at the end.
You got away with it on Android because I suspect it reads 4 bytes instead of 2, so 00 00 06 was being read as 00 00 00 06, and 0x000006 and 0x00000006 both mean the same thing: 6.
Anyway, to fix this you could simply tell AS3 to treat your JPEG bytes as big-endian, but that would fix iOS and then break Android.
A quick and easy solution is to check whether the number you got is larger than any valid orientation value; if it is, assume the app is running on iOS and reverse the byte order to see whether the result falls back into the valid range. So...
Note: option B shown in the code is risky, because if your input number is wrong you'll get a wrong result anyway. You know computers: bad input, bad output.
import flash.utils.ByteArray;
var Orientation_num:uint = 0;
var jpeg_bytes:ByteArray = new ByteArray(); //holds entire JPEG data as bytes
var bytes_val:ByteArray = new ByteArray(); //holds byte values as needed
Orientation_num = 2048; //Example: Detected big number that should be 8.
if (Orientation_num > 8 ) //since 8 is maximum of orientation types
{
trace ("Orientation_num is too big : Attempting fix..");
//## A: CORRECT.. either read directly from the JPEG bytes
//jpeg_bytes.position = (XX) - 1; //where XX is start of EXIF orientation (2 bytes)
//bytes_val.writeShort(jpeg_bytes.readShort()); //copies those 2 bytes
//## B: RISKY.. or use the already-detected big number anyway
bytes_val.writeShort(Orientation_num);
//Flip the bytes : make x08 x00 become x00 x08
var tempNum_ba : ByteArray = new ByteArray(); //temporary number as bytes
tempNum_ba[0] = bytes_val[1];
tempNum_ba[1] = bytes_val[0];
tempNum_ba.position = 0; //reset position before reading
Orientation_num = tempNum_ba.readShort(); //readShort also moves position forward by 2 bytes
trace ("Orientation_num (FIXED) : " + Orientation_num);
}
My app downloads a file whose first 12 bytes are encrypted, so I have to read this file from Documents and decrypt the first 12 bytes.
My first attempt was to read the whole file, decrypt it, then write it out again, but if the file is large this costs a lot of memory.
So, is there any way to read just the first 12 bytes and replace them?
This is the standard way of doing it in any language:
Read the file in chunks, replace the bytes in the buffer, write to a temporary file, delete the original file, and rename the temporary file to the original name.
As far as Objective-C is concerned, I found a useful link:
http://www.techotopia.com/index.php/Working_with_Files_in_Objective-C
Go to the topic "Writing Data to a File".
Well, Objective-C is based on C, so all the C functions, including the file-operation functions, work out of the box. You can convert the NSString file path to a char array and perform the desired file operations along these lines:
#include <stdio.h>

int nBytesToRead = 12;                  // length of the encrypted header
char theBuffer[12];

FILE *f = fopen("yourFileName", "rb+"); // "rb+" = read and update in place
if (f == NULL) {
    // handle the error
} else {
    size_t bytesRead = fread(theBuffer, 1, nBytesToRead, f);
    // ... decrypt theBuffer here ...
    fseek(f, 0, SEEK_SET);              // seek back to the start of the file
    fwrite(theBuffer, 1, bytesRead, f); // overwrite the first 12 bytes
    fclose(f);
}
Please guide me in resolving this issue.
I have parsed the H.264 video stream and identified the frames [I/P/B]. I followed the steps below.
• NAL unit start code: 00 00 01 X Y
• X = IDR picture NAL units (25, 45, 65)
• Y = non-IDR picture NAL units (01, 21, 41, 61); 01 = B-frames, 41 = P-frames
Now my question is how to determine the length of each individual frame so that I can write each frame to a file. Please give some help.
OK, so your source is an Annex-B formatted elementary stream. Basically, every NALU begins with a start code (two or more 0x00 bytes followed by a 0x01 byte). The next byte contains the type (in its low 5 bits); the rest is payload. The NALU ends when the next start code is encountered or you reach the end of the stream. So, to get the length, you must look for the next start code and subtract.
You will likely find this post useful: Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream
I've specified and instantiated two Audio Units: a multichannel mixer unit and a generator of subtype AudioFilePlayer.
I would have thought I needed to set the ASBD of the filePlayer's output to match the ASBD I set for the mixer input. However, when I attempt to set the filePlayer's output I get a kAudioUnitErr_FormatNotSupported (-10868) error.
Here's the stream format I set on the mixer input (successfully) and am also trying to set on the filePlayer (it's the monoStreamFormat copied from Apple's MixerHost sample project):
Sample Rate: 44100
Format ID: lpcm
Format Flags: C
Bytes per Packet: 2
Frames per Packet: 1
Bytes per Frame: 2
Channels per Frame: 1
Bits per Channel: 16
In the course of troubleshooting this I queried the filePlayer AU for the format it is 'natively' set to. This is what's returned:
Sample Rate: 44100
Format ID: lpcm
Format Flags: 29
Bytes per Packet: 4
Frames per Packet: 1
Bytes per Frame: 4
Channels per Frame: 2
Bits per Channel: 32
All the example code I've found sends the output of the filePlayer unit to an effect unit and sets the filePlayer's output to match the ASBD of the effect unit. Given that I have no effect unit, it seems like setting the filePlayer's output to the mixer input's ASBD would be the correct, and required, thing to do.
How have you configured the AUGraph? I might need to see some code to help you out.
Setting the ASBD on the output scope of the AUMultiChannelMixer once only (as in MixerHost) works. However, if you have any kind of effect at all, you will need to think about where their ASBDs are defined and how you arrange your code so Core Audio does not jump in and mess with your effect AudioUnits' ASBDs. By "messing with" I mean overriding your ASBD with the default: kAudioFormatFlagIsFloat, kAudioFormatFlagIsPacked, 2 channels, non-interleaved. This was a big pain for me at first.
I would set the effect AudioUnits to their default ASBD. Assuming you have connected the AUFilePlayer node, you can then pull the unit out later in the program like this:
result = AUGraphNodeInfo (processingGraph,
filePlayerNode,
NULL,
&filePlayerUnit);
And then proceed to set
AudioUnitSetProperty(filePlayerUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&monoStreamFormat,
sizeof(monoStreamFormat));
Hopefully this helps.
Basically, I didn't bother setting the filePlayer's ASBD, but rather retrieved the 'native' ASBD it was set to and updated only the sample rate and channel count.
Likewise, I didn't set the input format on the mixer and let the mixer figure its format out.
I'm trying to extract a 1024-bit RSA public key from an already generated key pair (two SecKeyRefs) in order to send it over the wire. All I need is a plain (modulus, exponent) pair, which should take up exactly 131 bytes (128 for the modulus and 3 for the exponent).
However, when I fetch the key info as an NSData object, I get 140 bytes instead of 131. Here's an example result:
<30818902 818100d7 514f320d eacf48e1 eb64d8f9 4d212f77 10dd3b48 ba38c5a6
ed6ba693 35bb97f5 a53163eb b403727b 91c34fc8 cba51239 3ab04f97 dab37736
0377cdc3 417f68eb 9e351239 47c1f98f f4274e05 0d5ce1e9 e2071d1b 69a7cac4
4e258765 6c249077 dba22ae6 fc55f0cf 834f260a 14ac2e9f 070d17aa 1edd8db1
0cd7fd4c c2f0d302 03010001>
After retrying the key generation a couple of times and comparing the resulting NSData objects, the bytes that remain the same across all keys are the first 7:
<30818902 818100>
The last three bytes look like the exponent (65537, a common value). There are also two bytes between the "modulus" and the exponent:
<0203>
Can someone with more crypto experience help me identify what encoding this is? DER? How do I properly decode the modulus and exponent?
I tried manually stripping out the modulus and exponent using
NSData* modulus = [keyBits subdataWithRange:(NSRange){ 7, 128 }];
NSData* exponent = [keyBits subdataWithRange:(NSRange){ 7 + 128 + 2, 3 }];
but I get errors when trying to decrypt data which the remote host encoded using that "key".
EDIT:
Here's a gist of the solution I ended up using to unpack the RSA blob: https://gist.github.com/vl4dimir/6079882
Assuming you want the solution to work on iOS, please have a look at this thread. The post confirms that the encoding is DER and shows how to extract the exponent and modulus from the NSData object you started with.
There is another solution in this thread that won't work on iOS but will work on desktop systems (including Mac OS X) that have OpenSSL installed. Even if you are looking for an iOS-only solution, you can still use it to verify that your code is working correctly.