Display H.264 encoded images via AVSampleBufferDisplayLayer - iOS

I've been exploring options on iOS for hardware-accelerated decoding of a raw H.264 stream, and so far the only option I've found is to write the H.264 stream into an MP4 file and then pass the file to an instance of AVAssetReader. Although this method works, it's not particularly suitable for realtime applications. The AVFoundation reference indicates the existence of a CALayer subclass that can display compressed video frames (AVSampleBufferDisplayLayer), and I believe it would be a valid alternative to the method mentioned above. Unfortunately this layer is only available on OS X. I would like to file an enhancement radar, but before I do so I'd like to hear from someone with experience with this layer whether it could indeed be used to display raw H.264 data, were it available on iOS. Currently my app renders the decompressed YUV frames via OpenGL ES; would using this layer mean I no longer need OpenGL ES?

As of iOS 8, the AVSampleBufferDisplayLayer class is available on iOS.
Take a look and have fun.
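
Here is a minimal sketch of how the layer can be fed, assuming you package each H.264 access unit into a CMSampleBuffer yourself; the view, class, and function names below are illustrative, not part of AVFoundation:

```swift
import UIKit
import AVFoundation
import CoreMedia

// A view backed by AVSampleBufferDisplayLayer (iOS 8+).
final class H264PlayerView: UIView {
    override class var layerClass: AnyClass { AVSampleBufferDisplayLayer.self }
    var displayLayer: AVSampleBufferDisplayLayer { layer as! AVSampleBufferDisplayLayer }
}

// Build a CMVideoFormatDescription from the stream's SPS/PPS NAL units
// (raw parameter set bytes, without Annex B start codes).
func makeFormatDescription(sps: [UInt8], pps: [UInt8]) -> CMVideoFormatDescription? {
    var formatDescription: CMVideoFormatDescription?
    sps.withUnsafeBufferPointer { spsBuffer in
        pps.withUnsafeBufferPointer { ppsBuffer in
            let pointers = [spsBuffer.baseAddress!, ppsBuffer.baseAddress!]
            let sizes = [sps.count, pps.count]
            _ = CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: 2,
                parameterSetPointers: pointers,
                parameterSetSizes: sizes,
                nalUnitHeaderLength: 4,          // AVCC-style 4-byte length prefixes
                formatDescriptionOut: &formatDescription)
        }
    }
    return formatDescription
}

// Hand a CMSampleBuffer (an AVCC-framed access unit tagged with the format
// description above) to the layer. Decoding and display happen in the
// hardware pipeline, so no OpenGL ES code is needed for plain video display.
func show(_ sampleBuffer: CMSampleBuffer, on layer: AVSampleBufferDisplayLayer) {
    if layer.isReadyForMoreMediaData {
        layer.enqueue(sampleBuffer)
    }
}
```

So, to the last part of the question: for simply displaying the decoded frames, the layer replaces the OpenGL ES rendering path; you would only keep OpenGL ES if you need custom processing of the pixels.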

Related

How to decode multiple videos simultaneously using AVAssetReader?

I'm trying to decode frames from multiple video files and use them as OpenGL textures.
I know how to decode an H.264 file using an AVAssetReader object, but it seems you have to read the frames in a while loop after calling startReading, for as long as the status is AVAssetReaderStatusReading. What I want to do instead is call startReading once and then call copyNextSampleBuffer wherever and whenever I want. That way I could build a video reader class around AVAssetReader and load frames from multiple video files whenever I want to use them as OpenGL textures.
Is this doable?
Short answer: yes, you can decode one frame at a time, but you will need to manage the decode logic yourself. The simplest approach is to allocate a buffer of BGRA pixels and copy the framebuffer data into that temporary buffer. Be warned that you are unlikely to find a small code snippet that does all of this; streaming data from movies into OpenGL is not easy to implement. I would suggest that you avoid attempting it yourself and use a 3rd-party library that already implements the hard parts.

If you want to see a complete, working example of something like this, have a look at my blog post Load OpenGL textures with alpha channel on iOS. That post shows how to stream video into OpenGL, but with this approach you would first need to decode from H.264 to disk. It should also be possible to use other libraries to do the same thing; just keep in mind that playing multiple videos at the same time is resource intensive, so you may quickly run into the limits of your hardware. Also, if you do not actually need OpenGL textures, it is a lot easier to operate on the Core Graphics APIs directly under iOS.
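
For the pull-one-frame-whenever-you-want part specifically, here is a minimal sketch, assuming local movie files and BGRA output (the FrameSource class and its method names are illustrative only, and uploading the pixels to GL is still up to you):

```swift
import AVFoundation
import CoreMedia
import CoreVideo

// Wraps one AVAssetReader so frames can be pulled on demand.
final class FrameSource {
    private let reader: AVAssetReader
    private let output: AVAssetReaderTrackOutput

    init?(url: URL) {
        let asset = AVAsset(url: url)
        guard let track = asset.tracks(withMediaType: .video).first,
              let reader = try? AVAssetReader(asset: asset) else { return nil }

        // Decode straight to BGRA so the pixels can be copied into a texture.
        let settings: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ]
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
        guard reader.canAdd(output) else { return nil }
        reader.add(output)

        self.reader = reader
        self.output = output
        reader.startReading()
    }

    // Call this whenever you want the next decoded frame; returns nil at end of
    // file or on failure. Frames arrive in decode order - there is no random
    // access - but you decide when each copyNextSampleBuffer call happens.
    func nextPixelBuffer() -> CVPixelBuffer? {
        guard reader.status == .reading,
              let sampleBuffer = output.copyNextSampleBuffer() else { return nil }
        return CMSampleBufferGetImageBuffer(sampleBuffer)
    }
}
```

You would create one FrameSource per movie file and call nextPixelBuffer() only when a texture actually needs refreshing, which keeps several readers alive at once without a tight while loop.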

AVAssetReader with streamed H.264 samples

I'm writing an RTSP/H.264 client. Live555 works great for parsing the RTSP, but software decoding with ffmpeg is just too slow. I'd like to use AVFoundation to hardware-decode the samples, but I'm not sure how to do this. My question is: is there any way to get AVFoundation (AVAssetReader?) to decode these samples as they come in and display the feed on-screen?
As things stand, H.264 samples that come from memory cannot be hardware decoded, because iOS does not expose those interfaces; you can only decode a local file or use HTTP Live Streaming. However, one possible workaround is to write every sample into a separate MP4 file and then read it back with AVAssetReader (see the sketch after this answer). I haven't tried that, though, and speed may be a limiting factor.
This may at least get you started
https://github.com/mooncatventures-group/FFPlayer-tests
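
If you go the write-then-read route, a minimal sketch of the writing half might look like this, assuming you have already wrapped the incoming H.264 data in CMSampleBuffers carrying a CMVideoFormatDescription (writeSegment is a hypothetical helper, not an AVFoundation API):

```swift
import AVFoundation
import CoreMedia

// Writes compressed H.264 samples into an MP4 without re-encoding, so the
// resulting file can be handed to AVAssetReader for hardware decoding.
func writeSegment(_ samples: [CMSampleBuffer],
                  formatDescription: CMFormatDescription,
                  to url: URL,
                  completion: @escaping () -> Void) throws {
    guard let first = samples.first else { return }
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)

    // nil output settings = passthrough: the compressed H.264 is written as-is.
    let input = AVAssetWriterInput(mediaType: .video,
                                   outputSettings: nil,
                                   sourceFormatHint: formatDescription)
    input.expectsMediaDataInRealTime = true   // samples arrive live from the network
    writer.add(input)

    writer.startWriting()
    writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(first))
    for sample in samples where input.isReadyForMoreMediaData {
        input.append(sample)
    }
    input.markAsFinished()
    writer.finishWriting(completionHandler: completion)  // then open `url` with AVAssetReader
}
```

The per-file overhead (writing, finalizing, and reopening each segment) is exactly where the speed concern in the answer above comes from, so measure before committing to this approach.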

Decode video using CoreMedia.framework on iOS

I need to decode an MP4 file and draw it using OpenGL in an iOS app. I need to extract and decode the H.264 frames from the MP4 file, and I heard it is possible to do this using Core Media. Does anybody have any idea how to do it? Are there any examples of using Core Media for this?
It's not Core Media you're looking for, it's AVFoundation. In particular, you'd use an AVAssetReader to load from your movie and iterate through the frames. You can then upload these frames as OpenGL ES textures either by using glTexImage2D() or (on iOS 5.0) by using the much faster texture caches (see the sketch after this answer).
If you don't want to roll your own implementation of this, I have working AVFoundation-based movie loading and processing via OpenGL ES within my GPUImage framework. The GPUImageMovie class encapsulates movie reading and the process of uploading to a texture. If you want to extract that texture for use in your own scene, you can chain a GPUImageTextureOutput to it. Examples of both of these classes can be found in the SimpleVideoFileFilter and CubeExample sample applications within the framework distribution.
You can use this directly, or just look at the code I wrote to perform these same actions within the GPUImageMovie class.
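
A minimal sketch of the texture-cache path, assuming an EAGLContext is current and the AVAssetReader output delivers BGRA pixel buffers (TextureUploader is an illustrative name, not an API):

```swift
import AVFoundation
import CoreVideo
import OpenGLES

// Turns decoded CVPixelBuffers into GL textures via CVOpenGLESTextureCache.
final class TextureUploader {
    private var cache: CVOpenGLESTextureCache?

    init?(context: EAGLContext) {
        guard CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &cache) == kCVReturnSuccess else {
            return nil
        }
    }

    // Wraps the pixel buffer as a GL texture without an extra glTexImage2D copy.
    func texture(from pixelBuffer: CVPixelBuffer) -> CVOpenGLESTexture? {
        guard let cache = cache else { return nil }
        let bgra = GLenum(0x80E1)                      // GL_BGRA_EXT
        var texture: CVOpenGLESTexture?
        let status = CVOpenGLESTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil,
            GLenum(GL_TEXTURE_2D), GL_RGBA,
            GLsizei(CVPixelBufferGetWidth(pixelBuffer)),
            GLsizei(CVPixelBufferGetHeight(pixelBuffer)),
            bgra, GLenum(GL_UNSIGNED_BYTE),
            0, &texture)
        guard status == kCVReturnSuccess, let tex = texture else { return nil }
        glBindTexture(CVOpenGLESTextureGetTarget(tex), CVOpenGLESTextureGetName(tex))
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
        return tex
    }
}
```

This is essentially what GPUImageMovie does internally; using the framework saves you from maintaining this plumbing yourself.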

Mixing and equalizing multiple streams of compressed audio on iOS

What I'm trying to do is exactly as the title says: decode multiple compressed audio streams/files - they will be extracted from a modified MP4 file - and apply EQ to them simultaneously in realtime.
I have read through most of Apple's docs.
I have tried Audio Queues, but I won't be able to do equalization with them: once the compressed audio goes in, it doesn't come out again, so I can't manipulate it.
Audio Units don't seem to have any components that handle decompression of AAC or MP3 - if I'm right, the converter unit only handles converting from one LPCM format to another.
I have been trying to work out a solution on and off for about a month and a half now.
I'm now thinking of using a 3rd-party decoder (god help me; I haven't a clue how to use those, and the source code is Greek to me; oh, and any recommendations? :x), then feeding the decoded LPCM into Audio Queues and doing the EQ in the callback.
Maybe I'm missing something here. Suggestions? :(
I'm still trying to figure out Core Audio for my own needs, but from what I can understand, you want to use Extended Audio File Services, which handles reading and decompression for you, producing LPCM data you can then hand off to a buffer. The MixerHost sample project provides an example of using ExtAudioFileOpenURL to do this.
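
A minimal sketch of that approach, assuming a compressed AAC/MP3 file at fileURL (readPCM is a hypothetical helper, and the 16-bit interleaved stereo client format is just one reasonable choice):

```swift
import Foundation
import AudioToolbox

// ExtAudioFile decodes to whatever LPCM "client format" you set, so what you
// read back is already PCM you can mix or run through an EQ.
func readPCM(from fileURL: URL) {
    var audioFile: ExtAudioFileRef?
    guard ExtAudioFileOpenURL(fileURL as CFURL, &audioFile) == noErr, let file = audioFile else { return }
    defer { ExtAudioFileDispose(file) }

    // Ask for 16-bit interleaved stereo LPCM at 44.1 kHz.
    var clientFormat = AudioStreamBasicDescription(
        mSampleRate: 44_100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
        mBytesPerPacket: 4,
        mFramesPerPacket: 1,
        mBytesPerFrame: 4,
        mChannelsPerFrame: 2,
        mBitsPerChannel: 16,
        mReserved: 0)
    ExtAudioFileSetProperty(file,
                            kExtAudioFileProperty_ClientDataFormat,
                            UInt32(MemoryLayout<AudioStreamBasicDescription>.size),
                            &clientFormat)

    // Pull decoded PCM in chunks.
    let framesPerRead: UInt32 = 4096
    let bytesPerRead = Int(framesPerRead) * Int(clientFormat.mBytesPerFrame)
    let data = UnsafeMutableRawPointer.allocate(byteCount: bytesPerRead,
                                                alignment: MemoryLayout<Int16>.alignment)
    defer { data.deallocate() }

    var bufferList = AudioBufferList(
        mNumberBuffers: 1,
        mBuffers: AudioBuffer(mNumberChannels: 2,
                              mDataByteSize: UInt32(bytesPerRead),
                              mData: data))
    var frameCount = framesPerRead
    while ExtAudioFileRead(file, &frameCount, &bufferList) == noErr, frameCount > 0 {
        // `frameCount` frames of LPCM now sit in `data`; hand them to your mixer/EQ here.
        frameCount = framesPerRead
        bufferList.mBuffers.mDataByteSize = UInt32(bytesPerRead)
    }
}
```

Open one ExtAudioFile per stream and pull from each of them in your render loop, then do the mixing and EQ on the resulting PCM buffers.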

Specify software-based codec for AVAssetReaderAudioMixOutput?

On an iOS device, can AVAssetReaderOutput be told to use only software-based decoders (i.e. kAppleSoftwareAudioCodecManufacturer rather than kAppleHardwareAudioCodecManufacturer)?
I see that this is possible using Audio Format Services in AudioToolbox, but I don't see how to carry this over to AVFoundation.
The reason for this is that I'd like to decode compressed audio from the iTunes library while iPodMusicPlayer is playing - since hardware-assisted decoding does not support simultaneous decoding of multiple songs, my app will need to use software decoding (right?).
I'd rather not do the software decoding as a 2-step process (i.e. export compressed file to app sandbox, then open that using AudioToolbox).
Well, although I haven't found a way to specify the software decoder in AVFoundation, I ended up working around this by reading each track of the compressed song file with an AVAssetReaderTrackOutput, then passing the compressed buffers to an AudioConverterRef.
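
A minimal sketch of that workaround, assuming an AAC track: the compressed packets are read with an AVAssetReaderTrackOutput whose outputSettings are nil (passthrough), and the converter is created with AudioConverterNewSpecific so Apple's software codec is used (makeSoftwareConverter is a hypothetical helper name):

```swift
import AVFoundation
import AudioToolbox
import CoreMedia

// Creates an AudioConverter that decodes the track's compressed format to
// 16-bit LPCM using the *software* codec, leaving the hardware codec free.
func makeSoftwareConverter(for track: AVAssetTrack) -> AudioConverterRef? {
    guard let anyDesc = track.formatDescriptions.first,
          let srcASBDPtr = CMAudioFormatDescriptionGetStreamBasicDescription(anyDesc as! CMAudioFormatDescription)
    else { return nil }
    var srcFormat = srcASBDPtr.pointee

    // Destination: 16-bit interleaved LPCM at the source sample rate.
    var dstFormat = AudioStreamBasicDescription(
        mSampleRate: srcFormat.mSampleRate,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
        mBytesPerPacket: 2 * srcFormat.mChannelsPerFrame,
        mFramesPerPacket: 1,
        mBytesPerFrame: 2 * srcFormat.mChannelsPerFrame,
        mChannelsPerFrame: srcFormat.mChannelsPerFrame,
        mBitsPerChannel: 16,
        mReserved: 0)

    // Ask specifically for Apple's software decoder rather than the hardware codec.
    var softwareDecoder = AudioClassDescription(
        mType: kAudioDecoderComponentType,
        mSubType: srcFormat.mFormatID,
        mManufacturer: kAppleSoftwareAudioCodecManufacturer)

    var converter: AudioConverterRef?
    let status = AudioConverterNewSpecific(&srcFormat, &dstFormat, 1, &softwareDecoder, &converter)
    return status == noErr ? converter : nil
}

// Elsewhere: nil output settings make the reader hand back *compressed* packets,
// which you then feed to the converter with AudioConverterFillComplexBuffer.
// let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
```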

Resources