AVAssetReader with streamed H.264 samples - iOS

I'm writing an RTSP/H.264 client. Live555 for parsing the RTSP is great, but using ffmpeg for software decoding is just too slow. I'd like to use AVFoundation to hardware decode the samples. I'm not sure how to do this. My question is, is there any way to get AVFoundation (AVAssetReader?) to decode these samples as they come in and display the feed on-screen?

Right now, H.264 samples that come from memory can't be hardware decoded, because iOS doesn't expose those interfaces; you can only decode a local file or use HTTP Live Streaming. One possible workaround is to write each run of samples into a separate mp4 file and then read it back with AVAssetReader, but I haven't tried that, and speed may be the limiting factor.
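As a rough illustration of the read-back half of that workaround (my own sketch, not code from the answer above), this is roughly what the AVAssetReader side might look like in Swift. The file name is hypothetical, and whether this keeps up with a live feed is exactly the speed concern raised above.

```swift
import AVFoundation

// Minimal sketch: read decoded frames back out of a local .mp4 with AVAssetReader.
// "segment.mp4" is a hypothetical file written from the incoming H.264 samples.
let asset = AVURLAsset(url: URL(fileURLWithPath: "segment.mp4"))
guard let track = asset.tracks(withMediaType: .video).first,
      let reader = try? AVAssetReader(asset: asset) else {
    fatalError("cannot open asset")
}

// Ask for decoded BGRA pixel buffers (pass nil instead to get the samples as stored).
let output = AVAssetReaderTrackOutput(
    track: track,
    outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)])
reader.add(output)
guard reader.startReading() else { fatalError("startReading failed") }

while let sampleBuffer = output.copyNextSampleBuffer() {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        // Hand the decoded CVPixelBuffer to your display layer or GL texture here.
        _ = pixelBuffer
    }
}
```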

This may at least get you started
https://github.com/mooncatventures-group/FFPlayer-tests

Related

Random access decoding of an AAC audio track in an mp4 file in iOS

I'm working on a project which involves decoding an AAC track from an mp4 file into PCM format. So far, the only way I've found to do this is with AVAssetReader. However, this approach has two problems for me:
1) According to the guide, AVAssetReader is not recommended for real-time processing. However, my project requires live decoding and playback, where the decoded PCM is post-processed. Will this be a problem? If so, what is the alternative?
2) AVAssetReader seems to decode the track sequentially. It does not seem to allow jumping to a random point and decoding from there, which my project requires. What is the solution?
Answer 1: If you have to deal with tracks in the iPod library, AVAssetReader is the only way. If not, you can choose another decoder such as FFmpeg.
Answer 2: AVAssetReader does support random access. It has a timeRange property; see https://stackoverflow.com/a/6719873/1060971.
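To make the timeRange part concrete, here is a small Swift sketch along the lines of the linked answer. The file URL, the 30 s offset, and the bare LPCM output settings are placeholders; the key point is that timeRange must be set before startReading().

```swift
import AVFoundation

// Sketch: restrict the reader to a window of the track by setting timeRange
// before startReading(). "song.m4a" and the time window are placeholders.
let asset = AVURLAsset(url: URL(fileURLWithPath: "song.m4a"))
guard let track = asset.tracks(withMediaType: .audio).first,
      let reader = try? AVAssetReader(asset: asset) else {
    fatalError("cannot open asset")
}

// Ask for LPCM so the AAC packets come out decoded.
let output = AVAssetReaderTrackOutput(
    track: track,
    outputSettings: [AVFormatIDKey: kAudioFormatLinearPCM])
reader.add(output)

// Decode 10 seconds starting 30 seconds into the track.
reader.timeRange = CMTimeRange(
    start: CMTime(seconds: 30, preferredTimescale: 600),
    duration: CMTime(seconds: 10, preferredTimescale: 600))

guard reader.startReading() else { fatalError("startReading failed") }
while let buffer = output.copyNextSampleBuffer() {
    // Each CMSampleBuffer now holds LPCM audio from the requested window.
    _ = buffer
}
```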

Streaming an OpenCV-captured webcam with the H.264 (mp4) codec

I'd like to stream the webcam pictures that are captured with OpenCV. I'm thinking about a solution using ffmpeg and live555 (unfortunately, neither is documented very well). My problems are:
How can I convert the captured images to H.264 so that the pictures per second match? If I just encode in a loop I get more than 25 pictures per second and the video plays too fast.
How can I stream the converted H.264 directly over the network via RTP/RTSP or similar?
Thanks for your help!
This is a common problem.
If you are not required to distribute your software (private use / server side / open source), you may use FFmpeg compiled with the x264 encoder; there's a flag for that in FFmpeg's configure script.
If you do need to distribute your software, I don't know of any LGPL-licensed library for that; I believe there is no such library. You'd have to use some paid solution.
For the streaming side, you should implement DeviceSource.cpp (see DeviceSource.hh) and use it as the FramedSource.
Edit: Apple revealed a video encoder API (VideoToolbox) in iOS 8, allowing access to the stream of H.264 frames; see the sketch after the links below.
For an example of how to use x264 and Live555 to encode and stream frames, see the following:
spyPanda open source project.
How to write a Live555 FramedSource to allow me to stream H.264 live SO question.
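If you happen to be targeting iOS 8 or later, the encoder API mentioned in the edit is VideoToolbox. Below is a rough Swift sketch of a compression session; the dimensions and frame rate are placeholder values, and packaging the resulting CMSampleBuffers into NAL units for Live555/RTP is still up to you.

```swift
import VideoToolbox

// Rough sketch of a VideoToolbox H.264 encoder session (iOS 8+).
// Width, height, and frame rate are placeholder values.
var session: VTCompressionSession?
let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 640, height: 480,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: { _, _, status, _, sampleBuffer in
        guard status == noErr, let sampleBuffer = sampleBuffer else { return }
        // Each callback delivers one encoded H.264 frame as a CMSampleBuffer;
        // extract the NAL units here and hand them to your FramedSource.
        _ = sampleBuffer
    },
    refcon: nil,
    compressionSessionOut: &session)

if status == noErr, let session = session {
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime,
                         value: kCFBooleanTrue)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ExpectedFrameRate,
                         value: NSNumber(value: 25))
    // For every captured frame (e.g. a CVPixelBuffer built from your camera image):
    // VTCompressionSessionEncodeFrame(session, imageBuffer: pixelBuffer,
    //     presentationTimeStamp: pts, duration: .invalid,
    //     frameProperties: nil, sourceFrameRefcon: nil, infoFlagsOut: nil)
}
```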

Mixing and equalizing multiple streams of compressed audio on iOS

What I'm trying to do is exactly what the title says: decode multiple compressed audio streams/files - they will be extracted from a modified MP4 file - and EQ them simultaneously in real time.
I have read through most of Apple's docs.
I have tried AudioQueues, but I won't be able to do equalization, as once the compressed audio goes in, it doesn't come out ... so I can't manipulate it.
Audio Units don't seem to have any components that handle decompression of AAC or MP3 - if I'm right, the converter unit only handles converting from one LPCM format to another.
I have been trying to work out a solution on and off for about a month and a half now.
I'm now thinking of using a 3rd-party decoder (god help me; I haven't a clue how to use those, the source code is Greek to me; oh, and any recommendations? :x), then feeding the decoded LPCM into AudioQueues and doing EQ in the callback.
Maybe I'm missing something here. Suggestions? :(
I'm still trying to figure out Core Audio for my own needs, but from what I can understand, you want to use Extended Audio File Services, which handles reading and decompression for you, producing PCM data you can then hand off to a buffer. The MixerHost sample project provides an example of using ExtAudioFileOpenURL to do this.
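A minimal sketch of that approach (not taken from MixerHost), assuming a local AAC/MP3 file and a 44.1 kHz interleaved stereo float client format; both are placeholder choices. The decoded frames land in a plain Float array you can run your EQ over.

```swift
import AudioToolbox

// Sketch: Extended Audio File Services decoding a compressed file to PCM.
// "track.m4a" and the 44.1 kHz stereo float client format are placeholder choices.
var extFile: ExtAudioFileRef?
let url = URL(fileURLWithPath: "track.m4a") as CFURL
guard ExtAudioFileOpenURL(url, &extFile) == noErr, let file = extFile else {
    fatalError("cannot open file")
}

// Ask ExtAudioFile to hand back interleaved 32-bit float LPCM, whatever the file contains.
var clientFormat = AudioStreamBasicDescription(
    mSampleRate: 44100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
    mBytesPerPacket: 8, mFramesPerPacket: 1, mBytesPerFrame: 8,
    mChannelsPerFrame: 2, mBitsPerChannel: 32, mReserved: 0)
ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                        UInt32(MemoryLayout<AudioStreamBasicDescription>.size), &clientFormat)

// Pull one block of decoded frames into a buffer your EQ / mixer can work on.
let framesPerRead: UInt32 = 4096
var samples = [Float](repeating: 0, count: Int(framesPerRead) * 2)
samples.withUnsafeMutableBytes { raw in
    var bufferList = AudioBufferList(
        mNumberBuffers: 1,
        mBuffers: AudioBuffer(mNumberChannels: 2,
                              mDataByteSize: UInt32(raw.count),
                              mData: raw.baseAddress))
    var frameCount = framesPerRead
    ExtAudioFileRead(file, &frameCount, &bufferList)
    // frameCount now holds the number of frames actually decoded.
}
ExtAudioFileDispose(file)
```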

Converting raw PCM to Speex?

For latency issues, I would like to send speex encoded audio frame data to a server instead of the raw PCM like I'm sending right now.
The problem is that I'm doing this in flash, and I want to use a socket connection to stream encoded spx frames of data.
I read the Speex manual, and unfortunately it does not go over the actual CELP algorithm used to convert PCM to spx data; it only briefly introduces the use of excitation gains and how it grabs the filter coefficients.
Its libraries are in DLLs - dead ends.
I really would like to create a conversion class in ActionScript. Is this possible? Is there any documentation on this? I've been googling to no avail. You'd think there would be more documentation on Speex out there...
And if I can't do this, what would be the best-documented audio format to use?
thanks

Specify software-based codec for AVAssetReaderAudioMixOutput?

On an iOS device, can AVAssetReaderOutput be told to use only software-based decoders (i.e. kAppleSoftwareAudioCodecManufacturer rather than kAppleHardwareAudioCodecManufacturer)?
I see that this is possible using Audio Format Services in AudioToolbox, but I don't see how to carry this over to AVFoundation.
The reason is that I'd like to decode compressed audio from the iTunes library while iPodMusicPlayer is playing - since hardware-assisted decoding does not support decoding multiple songs simultaneously, my app will need to use software decoding (right?).
I'd rather not do the software decoding as a 2-step process (i.e. export the compressed file to the app sandbox, then open it with AudioToolbox).
Well, although I haven't found a way to specify the software decoder in AVFoundation, I ended up working around this by reading each track of the compressed song file with an AVAssetReaderTrackOutput, then passing the compressed buffers to an AudioConverterRef.
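For anyone following the same route, below is a rough sketch of what that workaround might look like (not necessarily what the answer above used). Passing nil outputSettings makes AVAssetReaderTrackOutput return the compressed packets untouched; one way to pin the decode to the software codec is AudioConverterNewSpecific with an AudioClassDescription whose manufacturer is kAppleSoftwareAudioCodecManufacturer. The URL, the stereo float output format, and the assumption that the track is AAC are placeholders, and the AudioConverterFillComplexBuffer feeding loop is omitted.

```swift
import AVFoundation
import AudioToolbox

// Sketch: read compressed packets with AVAssetReader (nil outputSettings = no decoding),
// then decode them with a converter explicitly bound to the software AAC decoder.
let asset = AVURLAsset(url: URL(fileURLWithPath: "song.m4a"))   // placeholder URL
guard let track = asset.tracks(withMediaType: .audio).first,
      let reader = try? AVAssetReader(asset: asset) else {
    fatalError("cannot open asset")
}

let passthrough = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
reader.add(passthrough)
guard reader.startReading() else { fatalError("startReading failed") }

// The source format comes from the track's format description.
guard let desc = track.formatDescriptions.first,
      var srcFormat = CMAudioFormatDescriptionGetStreamBasicDescription(
          desc as! CMAudioFormatDescription)?.pointee else {
    fatalError("no format description")
}

// Placeholder destination format: interleaved 32-bit float stereo LPCM.
var dstFormat = AudioStreamBasicDescription(
    mSampleRate: srcFormat.mSampleRate,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
    mBytesPerPacket: 8, mFramesPerPacket: 1, mBytesPerFrame: 8,
    mChannelsPerFrame: 2, mBitsPerChannel: 32, mReserved: 0)

// Force the software AAC decoder so the hardware decoder stays free for iPodMusicPlayer.
var softwareAAC = AudioClassDescription(
    mType: kAudioDecoderComponentType,
    mSubType: kAudioFormatMPEG4AAC,
    mManufacturer: kAppleSoftwareAudioCodecManufacturer)
var converter: AudioConverterRef?
let convStatus = AudioConverterNewSpecific(&srcFormat, &dstFormat, 1, &softwareAAC, &converter)
assert(convStatus == noErr && converter != nil)

while let sampleBuffer = passthrough.copyNextSampleBuffer() {
    // Pull the compressed bytes out of the CMSampleBuffer's CMBlockBuffer and feed
    // them to the converter via AudioConverterFillComplexBuffer (not shown here).
    _ = sampleBuffer
}
```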
