Muxing AAC audio and H.264 video streams to MP4 with AVFoundation - iOS

For macOS and iOS, I have streams of real-time encoded video (H.264) and audio (AAC) data coming in, and I want to be able to mux these together into an MP4.
I'm using an AVAssetWriter to perform the muxing.
I have video working, but my audio still sounds like jumbled static. Here's what I'm trying right now (skipping some of the error checks here for brevity):
I initialize the writer:
NSURL *url = [NSURL fileURLWithPath:mContext->filename];
NSError* err = nil;
mContext->writer = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeMPEG4 error:&err];
I initialize the audio input:
NSDictionary* settings;
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
settings = nil; // set output to nil so it becomes a pass-through
CMAudioFormatDescriptionRef audioFormatDesc = nil;
{
AudioStreamBasicDescription absd = {0};
absd.mSampleRate = mParameters.audioSampleRate; //known sample rate
absd.mFormatID = kAudioFormatMPEG4AAC;
absd.mFormatFlags = kMPEG4Object_AAC_Main;
CMAudioFormatDescriptionCreate(NULL, &absd, 0, NULL, 0, NULL, NULL, &audioFormatDesc);
}
mContext->aacWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:settings sourceFormatHint:audioFormatDesc];
mContext->aacWriterInput.expectsMediaDataInRealTime = YES;
[mContext->writer addInput:mContext->aacWriterInput];
And start the writer:
[mContext->writer startWriting];
[mContext->writer startSessionAtSourceTime:kCMTimeZero];
Then, I have a callback where I receive a packet with a timestamp (milliseconds), and a std::vector<uint8_t> with the data containing 1024 compressed samples. I make sure isReadyForMoreMediaData is true. Then, if this is our first time receiving the callback, I set up the CMAudioFormatDescription:
OSStatus error = 0;
AudioStreamBasicDescription streamDesc = {0};
streamDesc.mSampleRate = mParameters.audioSampleRate;
streamDesc.mFormatID = kAudioFormatMPEG4AAC;
streamDesc.mFormatFlags = kMPEG4Object_AAC_Main;
streamDesc.mChannelsPerFrame = 2; // always stereo for us
streamDesc.mBitsPerChannel = 0;
streamDesc.mBytesPerFrame = 0;
streamDesc.mFramesPerPacket = 1024; // Our AAC packets contain 1024 samples per packet
streamDesc.mBytesPerPacket = 0;
streamDesc.mReserved = 0;
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
error = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &streamDesc, sizeof(acl), &acl, 0, NULL, NULL, &mContext->audioFormat);
And finally, I create a CMSampleBufferRef and send it along:
CMSampleBufferRef buffer = NULL;
CMBlockBufferRef blockBuffer;
CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, NULL, packet.data.size(), kCFAllocatorDefault, NULL, 0, packet.data.size(), kCMBlockBufferAssureMemoryNowFlag, &blockBuffer);
CMBlockBufferReplaceDataBytes((void*)packet.data.data(), blockBuffer, 0, packet.data.size());
CMTime duration = CMTimeMake(1024, mParameters.audioSampleRate);
CMTime pts = CMTimeMake(packet.timestamp, 1000);
CMSampleTimingInfo timing = {duration , pts, kCMTimeInvalid };
size_t sampleSizeArray[1] = {packet.data.size()};
error = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, true, NULL, nullptr, mContext->audioFormat, 1, 1, &timing, 1, sampleSizeArray, &buffer);
// First input buffer must have an appropriate kCMSampleBufferAttachmentKey_TrimDurationAtStart since the codec has encoder delay
if (mContext->firstAudioFrame)
{
CFDictionaryRef dict = NULL;
dict = CMTimeCopyAsDictionary(CMTimeMake(1024, 44100), kCFAllocatorDefault);
CMSetAttachment(buffer, kCMSampleBufferAttachmentKey_TrimDurationAtStart, dict, kCMAttachmentMode_ShouldNotPropagate);
// we must trim the start time on first audio frame...
mContext->firstAudioFrame = false;
}
CMSampleBufferMakeDataReady(buffer);
BOOL ret = [mContext->aacWriterInput appendSampleBuffer:buffer];
I guess the part I'm most suspicious of is my call to CMSampleBufferCreate. It seems I have to pass in a sample sizes array, otherwise I get this error message immediately when checking my writer's status:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12735), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x604001e50770 {Error Domain=NSOSStatusErrorDomain Code=-12735 "(null)"}}
Where the underlying error appears to be kCMSampleBufferError_BufferHasNoSampleSizes.
I did notice an example in Apple's documentation for creating the buffer with AAC data:
https://developer.apple.com/documentation/coremedia/1489723-cmsamplebuffercreate?language=objc
In their example, they specify a long sampleSizeArray with an entry for every single sample. Is that necessary? I don't have that information with this callback. And in our Windows implementation we didn't need that data. So I tried sending in packet.data.size() as the sample size but that doesn't seem right and it certainly doesn't produce pleasant audio.
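For what it's worth, my reading of Apple's example is simply that numSamples, the single timing entry, and the sampleSizeArray have to agree. A hypothetical sketch of batching several received packets into one buffer, in case that's what's intended (here packets and firstPacketTimestampMs are placeholders, not my real code):
size_t totalBytes = 0;
std::vector<size_t> sampleSizes; // one entry per AAC packet
for (const auto& p : packets) {
    sampleSizes.push_back(p.data.size());
    totalBytes += p.data.size();
}
CMBlockBufferRef blockBuffer = NULL;
CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, NULL, totalBytes, kCFAllocatorDefault, NULL, 0, totalBytes, kCMBlockBufferAssureMemoryNowFlag, &blockBuffer);
size_t offset = 0;
for (const auto& p : packets) {
    CMBlockBufferReplaceDataBytes(p.data.data(), blockBuffer, offset, p.data.size());
    offset += p.data.size();
}
// One timing entry covers all packets: each lasts 1024 frames, and CoreMedia derives
// the later presentation times by adding the duration repeatedly.
CMSampleTimingInfo timing = { CMTimeMake(1024, mParameters.audioSampleRate), CMTimeMake(firstPacketTimestampMs, 1000), kCMTimeInvalid };
CMSampleBufferRef buffer = NULL;
CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, true, NULL, NULL, mContext->audioFormat,
                     (CMItemCount)sampleSizes.size(), // numSamples: one per packet
                     1, &timing,
                     (CMItemCount)sampleSizes.size(), // numSampleSizeEntries: one per packet
                     sampleSizes.data(), &buffer);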
Any ideas? Either tweaks to my calls here or different APIs I should be using to mux together streams of encoded data.
Thanks!

If you don't want to transcode, do not pass the outputSettings dictionary. You should pass nil there:
mContext->aacWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:nil sourceFormatHint:audioFormatDesc];
It is explained somewhere in this article:
https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/05_Export.html
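For reference, a minimal sketch of the pass-through setup, reusing the audioFormatDesc and writer from your question (illustrative only):
// nil outputSettings means no transcoding; the writer just muxes the already-encoded AAC,
// and the sourceFormatHint tells it what that AAC looks like.
AVAssetWriterInput *aacInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:nil sourceFormatHint:audioFormatDesc];
aacInput.expectsMediaDataInRealTime = YES;
if ([mContext->writer canAddInput:aacInput]) {
    [mContext->writer addInput:aacInput];
}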

Related

AVAssetWriterInput appendSampleBuffer succeeds, but logs error kCMSampleBufferError_BufferHasNoSampleSizes from CMSampleBufferGetSampleSize

Starting from the iOS 12.4 beta versions, calling appendSampleBuffer on an AVAssetWriterInput logs the following error:
CMSampleBufferGetSampleSize signalled err=-12735 (kCMSampleBufferError_BufferHasNoSampleSizes) (sbuf->numSampleSizeEntries == 0) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2290.12/Sources/Core/FigSampleBuffer/FigSampleBuffer.c:4153
We don't see this error in prior versions, nor on the iOS 13 beta.
Does anyone else encounter this, and can provide any information to help us fix it?
More details
Our app is recording video and audio, using two AVAssetWriterInput objects, one for video input (appending pixel buffers) and one for audio input - appending audio buffers created with CMSampleBufferCreate. (See code below.)
Since our audio data is non-interleaved, after creation we convert it to interleaved format and pass it to appendSampleBuffer.
Relevant Code
// Creating the audio buffer:
CMSampleBufferRef buff = NULL;
CMSampleTimingInfo timing = {
CMTimeMake(1, _asbdFormat.mSampleRate),
currentAudioTime,
kCMTimeInvalid };
OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
NULL,
false,
NULL,
NULL,
_cmFormat,
(CMItemCount)(*inNumberFrames),
1,
&timing,
0,
NULL,
&buff);
// checking for error... (none returned)
// Converting from non-interleaved to interleaved.
float zero = 0.f;
vDSP_vclr(_interleavedABL.mBuffers[0].mData, 1, numFrames * 2);
// Channel L
vDSP_vsadd(ioData->mBuffers[0].mData, 1, &zero, _interleavedABL.mBuffers[0].mData, 2, numFrames);
// Channel R
vDSP_vsadd(ioData->mBuffers[1].mData, 1, &zero, (float*)(_interleavedABL.mBuffers[0].mData) + 1, 2, numFrames); // right channel comes from the second non-interleaved buffer
_interleavedABL.mBuffers[0].mDataByteSize = _interleavedASBD.mBytesPerFrame * numFrames;
status = CMSampleBufferSetDataBufferFromAudioBufferList(buff,
kCFAllocatorDefault,
kCFAllocatorDefault,
0,
&_interleavedABL);
// checking for error... (none returned)
if (_assetWriterAudioInput.readyForMoreMediaData) {
BOOL success = [_assetWriterAudioInput appendSampleBuffer:buff]; // THIS PRODUCES THE ERROR.
// success is returned true, but the above specified error is logged - on iOS 12.4 betas (not on 12.3 or before)
}
Before all that, here's how the _assetWriterAudioInput is created:
-(BOOL) initializeAudioWriting
{
BOOL success = YES;
NSDictionary *audioCompressionSettings = [[self class] audioSettingsForRecording]; // settings dictionary, see below.
if ([_assetWriter canApplyOutputSettings:audioCompressionSettings forMediaType:AVMediaTypeAudio]) {
_assetWriterAudioInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:audioCompressionSettings];
_assetWriterAudioInput.expectsMediaDataInRealTime = YES;
if ([_assetWriter canAddInput:_assetWriterAudioInput]) {
[_assetWriter addInput:_assetWriterAudioInput];
}
else {
// return error
}
}
else {
// return error
}
return success;
}
audioCompressionSettings is defined as:
+ (NSDictionary*)audioSettingsForRecording
{
AVAudioSession *sharedAudioSession = [AVAudioSession sharedInstance];
double preferredHardwareSampleRate;
if ([sharedAudioSession respondsToSelector:@selector(sampleRate)])
{
preferredHardwareSampleRate = [sharedAudioSession sampleRate];
}
else
{
preferredHardwareSampleRate = [[AVAudioSession sharedInstance] currentHardwareSampleRate];
}
AudioChannelLayout acl;
bzero( &acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
return @{
AVFormatIDKey: @(kAudioFormatMPEG4AAC),
AVNumberOfChannelsKey: @2,
AVSampleRateKey: @(preferredHardwareSampleRate),
AVChannelLayoutKey: [ NSData dataWithBytes: &acl length: sizeof( acl ) ],
AVEncoderBitRateKey: @160000
};
}
The appendSampleBuffer call logs the following error and call stack (relevant part):
CMSampleBufferGetSampleSize signalled err=-12735 (kCMSampleBufferError_BufferHasNoSampleSizes) (sbuf->numSampleSizeEntries == 0) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2290.6/Sources/Core/FigSampleBuffer/FigSampleBuffer.c:4153
0 CoreMedia 0x00000001aff75194 CMSampleBufferGetSampleSize + 268 [0x1aff34000 + 266644]
1 My App 0x0000000103212dfc -[MyClassName writeAudioFrames:audioBuffers:] + 1788 [0x102aec000 + 7499260]
...
Any help would be appreciated.
EDIT: Adding the following information:
We are passing 0 and NULL for the numSampleSizeEntries and sampleSizeArray parameters of CMSampleBufferCreate, which according to the docs is what we must pass when creating a buffer of non-interleaved data (although the documentation is a bit confusing to me).
We have tried passing 1 and a pointer to a size_t parameter, such as:
size_t sampleSize = 4;
but it didn't help; it logged the following error:
figSampleBufferCheckDataSize signalled err=-12731 (kFigSampleBufferError_RequiredParameterMissing) (bbuf vs. sbuf data size mismatch)
and we are not clear what value should go there (how to determine the sample size of each sample), or whether this is the correct solution at all.
I think we have the answer:
Passing the numSampleSizeEntries and sampleSizeArray parameters of CMSampleBufferCreate as follows seems to fix it (still requires full verification).
To my understanding, the reason is that since we are ultimately appending the interleaved buffer, it needs to have the sample sizes (at least as of iOS 12.4).
// _asbdFormat is the AudioStreamBasicDescription.
size_t sampleSize = _asbdFormat.mBytesPerFrame;
OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
NULL,
false,
NULL,
NULL,
_cmFormat,
(CMItemCount)(*inNumberFrames),
1,
&timing,
1,
&sampleSize,
&buff);
This error means the data length parameters passed to the CMBlockBufferCreate... and CMSampleBufferCreate... functions do not match.
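As a hypothetical sketch of that constraint (format, timing, and the byte counts below are placeholders): the dataLength given to CMBlockBufferCreateWithMemoryBlock must equal the sum of the entries in the sampleSizeArray given to CMSampleBufferCreate.
size_t sampleSizes[2] = { 2048, 2048 }; // one entry per sample
size_t totalBytes = sampleSizes[0] + sampleSizes[1]; // 4096

CMBlockBufferRef blockBuffer = NULL;
CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, NULL, totalBytes, kCFAllocatorDefault, NULL, 0, totalBytes, 0, &blockBuffer);

CMSampleBufferRef sampleBuffer = NULL;
CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, true, NULL, NULL, format,
                     2, 1, &timing,
                     2, sampleSizes, &sampleBuffer);
// If totalBytes disagreed with the declared sample sizes, you would hit the
// "bbuf vs. sbuf data size mismatch" (-12731) error quoted above.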

Append audio samples to AVAssetWriter from streaming

I'm working on a project where I'm recording video from the camera, but the audio comes from streaming. The audio frames are obviously not synchronised with the video frames.
If I use AVAssetWriter without video, recording the audio frames from streaming works fine. But if I append both video and audio frames, I can't hear anything.
Here is the method for converting the audio data from the stream to a CMSampleBuffer:
AudioStreamBasicDescription monoStreamFormat = [self getAudioDescription];
CMFormatDescriptionRef format = NULL;
OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &monoStreamFormat, 0,NULL, 0, NULL, NULL, &format);
if (status != noErr) {
// really shouldn't happen
return nil;
}
CMSampleTimingInfo timing = { CMTimeMake(1, 44100.0), kCMTimeZero, kCMTimeInvalid };
CMSampleBufferRef sampleBuffer = NULL;
status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numSamples, 1, &timing, 0, NULL, &sampleBuffer);
if (status != noErr) {
// couldn't create the sample buffer
NSLog(@"Failed to create sample buffer");
CFRelease(format);
return nil;
}
// add the samples to the buffer
status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
kCFAllocatorDefault,
kCFAllocatorDefault,
0,
samples);
if (status != noErr) {
NSLog(#"Failed to add samples to sample buffer");
CFRelease(sampleBuffer);
CFRelease(format);
return nil;
}
I don't know if this is related to the timing, but I would like to append the audio frames from the first second of the video.
Is that possible?
Thanks
Finally I did this:
uint64_t _hostTimeToNSFactor = hostTime;
_hostTimeToNSFactor *= info.numer;
_hostTimeToNSFactor /= info.denom;
uint64_t timeNS = (uint64_t)(hostTime * _hostTimeToNSFactor);
CMTime presentationTime = self.initialiseTime;//CMTimeMake(timeNS, 1000000000);
CMSampleTimingInfo timing = { CMTimeMake(1, 44100), presentationTime, kCMTimeInvalid };
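For context, a self-contained sketch of the host-time conversion this appears to be based on, assuming info comes from mach_timebase_info (illustrative, not the exact production code):
#include <mach/mach_time.h>

mach_timebase_info_data_t info;
mach_timebase_info(&info); // supplies numer/denom for host ticks -> nanoseconds

uint64_t hostTime = mach_absolute_time(); // or the host time attached to the incoming frame
uint64_t timeNS = hostTime * info.numer / info.denom; // beware overflow for very large host times
CMTime presentationTime = CMTimeMake((int64_t)timeNS, 1000000000); // nanosecond timescale
CMSampleTimingInfo timing = { CMTimeMake(1, 44100), presentationTime, kCMTimeInvalid };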

Using CMSampleTimingInfo, CMSampleBuffer and AudioBufferList from raw PCM stream

I'm receiving a raw PCM stream from Google's WebRTC C++ reference implementation (a hook inserted into VoEBaseImpl::GetPlayoutData). The audio appears to be linear PCM, signed int16, but when recording it with an AVAssetWriter the resulting audio file is highly distorted and higher pitched.
I am assuming this is an error somewhere with the input parameters, most probably with respect to the conversion of the stereo-int16 to an AudioBufferList and then on to a CMSampleBuffer. Is there any issue with the below code?
void RecorderImpl::RenderAudioFrame(void* audio_data, size_t number_of_frames, int sample_rate, int64_t elapsed_time_ms, int64_t ntp_time_ms) {
OSStatus status;
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = sample_rate;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 2;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = audioFormat.mFramesPerPacket * audioFormat.mChannelsPerFrame * audioFormat.mBitsPerChannel / 8;
audioFormat.mBytesPerFrame = audioFormat.mBytesPerPacket / audioFormat.mFramesPerPacket;
CMSampleTimingInfo timing = { CMTimeMake(1, sample_rate), CMTimeMake(elapsed_time_ms, 1000), kCMTimeInvalid };
CMFormatDescriptionRef format = NULL;
status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, sizeof(acl), &acl, 0, NULL, NULL, &format);
if(status != 0) {
NSLog(#"Failed to create audio format description");
return;
}
CMSampleBufferRef buffer;
status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, (CMItemCount)number_of_frames, 1, &timing, 0, NULL, &buffer);
if(status != 0) {
NSLog(#"Failed to allocate sample buffer");
return;
}
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = audioFormat.mChannelsPerFrame;
bufferList.mBuffers[0].mDataByteSize = (UInt32)(number_of_frames * audioFormat.mBytesPerFrame);
bufferList.mBuffers[0].mData = audio_data;
status = CMSampleBufferSetDataBufferFromAudioBufferList(buffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, &bufferList);
if(status != 0) {
NSLog(#"Failed to convert audio buffer list into sample buffer");
return;
}
[recorder writeAudioFrames:buffer];
CFRelease(buffer);
}
For reference, the sample rate I'm receiving from WebRTC on an iPhone 6S+ / iOS 9.2 is 48kHz with 480 samples per invocation of this hook and I'm receiving data every 10 ms.
First of all, congratulations on having the temerity to create an audio CMSampleBuffer from scratch. For most, they are neither created nor destroyed, but handed down immaculate and mysterious from CoreMedia and AVFoundation.
The presentationTimeStamps in your timing info are in integral milliseconds, which cannot represent your 48kHz samples' positions in time.
Instead of CMTimeMake(elapsed_time_ms, 1000), try CMTimeMake(elapsed_frames, sample_rate), where elapsed_frames are the number of frames that you have previously written.
That would explain the distortion, but not the pitch, so make sure that the AudioStreamBasicDescription matches your AVAssetWriterInput setup. It's hard to say without seeing your AVAssetWriter code.
P.S. Look out for writeAudioFrames - if it's asynchronous, you'll have problems with ownership of the audio_data.
P.P.S. It looks like you're leaking the CMFormatDescriptionRef.
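A rough sketch of what that timing change could look like inside RenderAudioFrame (elapsed_frames is a counter you would maintain yourself; not tested against your writer setup):
static int64_t elapsed_frames = 0; // PCM frames appended so far

CMSampleTimingInfo timing = { CMTimeMake(1, sample_rate),
                              CMTimeMake(elapsed_frames, sample_rate), // frame-accurate PTS
                              kCMTimeInvalid };
elapsed_frames += number_of_frames;

// ... create the format description and sample buffer and append exactly as in the question ...

CFRelease(format); // don't leak the CMFormatDescriptionRef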
I ended up opening the generated audio file in Audacity and saw that half of every frame was being dropped, producing a rather bizarre-looking waveform.
Changing acl.mChannelLayoutTag to kAudioChannelLayoutTag_Mono and changing audioFormat.mChannelsPerFrame to 1 solved the issue and now the audio quality is perfect. Hooray!
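In code terms the fix amounts to the following two changes to the setup above (the derived byte sizes shrink along with the channel count):
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono; // was kAudioChannelLayoutTag_Stereo
audioFormat.mChannelsPerFrame = 1; // was 2
audioFormat.mBytesPerPacket = audioFormat.mFramesPerPacket * audioFormat.mChannelsPerFrame * audioFormat.mBitsPerChannel / 8;
audioFormat.mBytesPerFrame = audioFormat.mBytesPerPacket / audioFormat.mFramesPerPacket;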

iOS: VideoToolbox decompresses H.263 video abnormally

I am working on H.263 decompression with VideoToolbox, but when decoding a 4CIF video stream the output pixel data are all zero, and there is no error info.
I don't know why this happens, as a video stream with CIF resolution is decompressed correctly.
Does anyone have the same problem?
This is a piece of my code:
CMFormatDescriptionRef newFmtDesc = nil;
OSStatus status = CMVideoFormatDescriptionCreate(kCFAllocatorDefault,
kCMVideoCodecType_H263,
width,
height,
NULL,
&_videoFormatDescription);
if (status)
{
return -1;
}
CFMutableDictionaryRef dpba = CFDictionaryCreateMutable(kCFAllocatorDefault,
2,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(dpba,
kCVPixelBufferOpenGLCompatibilityKey,
kCFBooleanFalse);
VTDictionarySetInt32(dpba,
kCVPixelBufferPixelFormatTypeKey,
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange); // use NV12
VTDictionarySetInt32(dpba,
kCVPixelBufferWidthKey,
dimension.width);
VTDictionarySetInt32(dpba,
kCVPixelBufferHeightKey,
dimension.height);
VTDictionarySetInt32(dpba,
kCVPixelBufferBytesPerRowAlignmentKey,
dimension.width);
// setup decoder callback record
VTDecompressionOutputCallbackRecord decoderCallbackRecord;
decoderCallbackRecord.decompressionOutputCallback = onDecodeCallback;
decoderCallbackRecord.decompressionOutputRefCon = this;
// create decompression session
status = VTDecompressionSessionCreate(kCFAllocatorDefault,
_videoFormatDescription,
nil,
dpba,
&decoderCallbackRecord,
&_session);
// Do Decode
CMSampleBufferRef sampleBuffer;
sampleBuffer = VTSampleBufferCreate(_videoFormatDescription, (void*)data_start, data_len, ts);
VTDecodeFrameFlags flags = 0;
VTDecodeInfoFlags flagOut = 0;
OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(_session,
sampleBuffer,
flags,
nil,
&flagOut);
I tried compressing H.263 with VideoToolbox: I initialized the session with a resolution of 4CIF and pushed 4CIF NV12 images into the compression session, but the output H.263 stream is in CIF resolution!
Can VideoToolbox not support 4CIF H.263 video for either compression or decompression?

Decoding H264 VideoToolkit API fails with Error -12911 in VTDecompressionSessionDecodeFrame

I'm trying to decode a raw stream of H.264 video data, but I can't find a way to create a proper CMVideoFormatDescription for it. Here is my decode method:
- (void)decodeFrameWithNSData:(NSData*)data presentationTime:
(CMTime)presentationTime
{
@autoreleasepool {
CMSampleBufferRef sampleBuffer = NULL;
CMBlockBufferRef blockBuffer = NULL;
VTDecodeInfoFlags infoFlags;
int sourceFrame;
if( dSessionRef == NULL )
[self createDecompressionSession];
CMSampleTimingInfo timingInfo ;
timingInfo.presentationTimeStamp = presentationTime;
timingInfo.duration = CMTimeMake(1,100000000);
timingInfo.decodeTimeStamp = kCMTimeInvalid;
//Creates block buffer from NSData
OSStatus status = CMBlockBufferCreateWithMemoryBlock(CFAllocatorGetDefault(), (void*)data.bytes,data.length*sizeof(char), CFAllocatorGetDefault(), NULL, 0, data.length*sizeof(char), 0, &blockBuffer);
//Creates CMSampleBuffer to feed decompression session
status = CMSampleBufferCreateReady(CFAllocatorGetDefault(), blockBuffer,self.encoderVideoFormat,1,1,&timingInfo, 0, 0, &sampleBuffer);
status = VTDecompressionSessionDecodeFrame(dSessionRef,sampleBuffer, kVTDecodeFrame_1xRealTimePlayback, &sourceFrame,&infoFlags);
if(status != noErr) {
NSLog(#"Decode with data error %d",status);
}
}
}
At the end of the call I'm getting a -12911 error from VTDecompressionSessionDecodeFrame, which translates to kVTVideoDecoderMalfunctionErr. Reading this [post] pointed me to the fact that I should create the video format description using CMVideoFormatDescriptionCreateFromH264ParameterSets. But how can I create a new CMVideoFormatDescription if I don't have the currentSps or currentPps information? How can I get that information from my raw H.264 stream?
CMFormatDescriptionRef decoderFormatDescription;
const uint8_t* const parameterSetPointers[2] =
{ (const uint8_t*)[currentSps bytes], (const uint8_t*)[currentPps bytes] };
const size_t parameterSetSizes[2] =
{ [currentSps length], [currentPps length] };
status = CMVideoFormatDescriptionCreateFromH264ParameterSets(NULL,
2,
parameterSetPointers,
parameterSetSizes,
4,
&decoderFormatDescription);
Thanks in advance,
Marcos
[post]: Decoding H264 VideoToolkit API fails with Error -8971 in VTDecompressionSessionCreate
You MUST call CMVideoFormatDescriptionCreateFromH264ParameterSets first. The SPS/PPS may be stored/transmitted separately from the video stream, or may come inline.
Note that for VTDecompressionSessionDecodeFrame your NALUs must be preceded with a size, and not a start code.
You can read more here:
Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream
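As a hypothetical sketch of the size-prefix requirement mentioned above (nalu and naluLen are placeholders for one NALU payload with its Annex-B start code already stripped):
#include <CoreFoundation/CFByteOrder.h> // CFSwapInt32HostToBig
#include <stdlib.h>
#include <string.h>

uint32_t lengthPrefix = CFSwapInt32HostToBig((uint32_t)naluLen); // 4-byte big-endian length
size_t avccLen = sizeof(lengthPrefix) + naluLen;
uint8_t *avccData = (uint8_t *)malloc(avccLen);
memcpy(avccData, &lengthPrefix, sizeof(lengthPrefix)); // length prefix first
memcpy(avccData + sizeof(lengthPrefix), nalu, naluLen); // then the payload, no 00 00 00 01
// avccData/avccLen are what go into CMBlockBufferCreateWithMemoryBlock and
// CMSampleBufferCreateReady instead of the raw Annex-B bytes.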
