Decoding h264 on iOS

This is based on the answer you provided in this thread: Can CMSampleBuffer decode H264 frames?
Can you give more pointers on how you achieved it?
I am getting a raw H.264 stream from a socket on my iPhone. How do I play it?
I hope you can give some hints.

Unfortunately Apple doesn't give us direct access to the hardware, and there's no way (as far as I know) to just get the AVAssetReader to take raw h.264 data. Perhaps somebody has figured it out and would be kind enough to shed some light on it for us.
So the alternatives are to write the stream to an MP4 file on disk, or to switch to HLS as your streaming method. You could also switch to a software decoder; that would solve your problem but create a new one, since it will eat up a lot of CPU if you're decoding HD at a high frame rate.

You can go with FFmpeg, or the VideoToolbox framework provided by Apple.

Related

Capturing PCM data from AVPlayer playback of HLS

We are trying to capture the PCM data from an HLS stream for processing, ideally just before it is played, though just after is acceptable. We want to do all this while still using AVPlayer.
Has anyone done this? For non-HLS streams, as well as local files, this seems to be possible with MTAudioProcessingTap, but not with HLS. This issue discusses doing it with non-HLS:
AVFoundation audio processing using AVPlayer's MTAudioProcessingTap with remote URLs
Thanks!
Unfortunately, this has been confirmed to be unsupported, at least for the time being.
From an Apple engineer:
The MTAudioProcessingTap is not available with HTTP live streaming. I suggest filing an enhancement if this feature is important to you - and it's usually helpful to describe the type of app you're trying to design and how this feature would be used.
Source: https://forums.developer.apple.com/thread/45966
Our best bet is to file enhancement radars to try to get them to devote some development time towards it. I am in the same unfortunate boat as you.

AVAssetReader with streamed H.264 samples

I'm writing an RTSP/H.264 client. Live555 for parsing the RTSP is great, but using ffmpeg for software decoding is just too slow. I'd like to use AVFoundation to hardware decode the samples. I'm not sure how to do this. My question is, is there any way to get AVFoundation (AVAssetReader?) to decode these samples as they come in and display the feed on-screen?
As of now, H.264 samples coming from memory can't use the hardware decoder, because iOS doesn't expose those interfaces; you can only decode a local file or an HTTP Live Stream. One possible workaround is to write every sample into a separate MP4 file and then read it with AVAssetReader, but I haven't tried that, and speed may be a limiting factor.
This may at least get you started
https://github.com/mooncatventures-group/FFPlayer-tests

Mixing and equalizing multiple streams of compressed audio on iOS

What I'm trying to do is exactly as the title says, decode multiple compressed audio streams/files - it will be extracted from a modified MP4 file - and do EQ on them in realtime simultaneously.
I have read through most of Apple's docs.
I have tried AudioQueues, but I won't be able to do equalization: once the compressed audio goes in, it doesn't come out... so I can't manipulate it.
Audio Units don't seem to have any components to handle decompression of AAC and MP3 - if I'm right, the converter unit only handles converting from one LPCM format to another.
I have been trying to work out a solution on and off for about a month and a half now.
I'm now thinking: use a 3rd-party decoder (god help me; I haven't a clue how to use those, the source code is Greek to me; oh, and any recommendations? :x), then feed the decoded LPCM into AudioQueues and do the EQ in the callback.
Maybe I'm missing something here. Suggestions? :(
I'm still trying to figure out Core Audio for my own needs, but from what I can understand, you want to use Extended Audio File Services, which handles reading and decompression for you, producing LPCM data you can then hand off to a buffer. The MixerHost sample project provides an example of using ExtAudioFileOpenURL to do this.

Converting raw pcm to speex?

For latency issues, I would like to send speex encoded audio frame data to a server instead of the raw PCM like I'm sending right now.
The problem is that I'm doing this in flash, and I want to use a socket connection to stream encoded spx frames of data.
I read the Speex manual, and unfortunately it does not go over the actual CELP algorithm used to convert PCM to SPX data; it only briefly introduces the use of excitation gains and how it obtains the filter coefficients.
Its libraries are in DLLs - dead ends for Flash.
I would really like to create a conversion class in ActionScript. Is this possible? Is there any documentation on this? I've been googling to no avail. You'd think there would be more documentation on Speex out there...
And if I can't do this, what would be the most documented audio format to use?
thanks

Snapshot using vlc (to get snapshot on RAM)

I was planning to use the vlc library to decode an H.264 based RTSP stream and extract each frame from it (convert vlc picture to IplImage). I have done a bit of exploration of the vlc code and concluded that there is a function called libvlc_video_take_snapshot which does a similar thing. However the captured frame in this case is saved on the hard disk which I wish to avoid due to the real time nature of my application. What would be the best way to do this? Would it be possible without modifying the vlc source (I want to avoid recompilation if possible). I have heard of vmem etc but could not really figure out what it does and how to use it.
The picture_t structure is internal to the library; how can we get access to it?
Awaiting your response.
P.S. Earlier I tried doing this using FFMPEG, however the ffmpeg library has a lot of issues while decoding an H.264 based RTSP stream on windows and hence I had to switch to VLC.
Regards,
Saurabh Gandhi
