Identify audio samples in MDAT Atom without MOOV Atom - iOS

I am trying to write a live video broadcaster over RTSP from an iOS device. I am using AVAssetWriter so I can take advantage of hardware encoding. To send over RTSP I have to get the avcC information out of the MOOV block; however, AVAssetWriter only writes the MOOV block when you finish the session, which of course never happens while I am streaming live.
I have gotten around this for the video by encoding, writing, and then finishing a single sample buffer to a file, and then parsing that file to get the avcC information out. That works just fine.
After that, for the live stream, since AVAssetWriter will only write to a file, I write out to a file and then read from it with a chasing file offset. When I do this with video only, I can read the NALUs from the MDAT Atom in the written file without any MOOV information, because the size of each NALU is given in its first 4 bytes. So I can read that amount, process it, and send it on its way over the RTSP stream. With video only, everything works perfectly and I get a very good HD stream to the streaming server.
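For anyone following the same approach, here is a minimal sketch (in C) of that chasing reader. It assumes the offset already points at the first NALU inside the MDAT payload and that the writer uses 4-byte big-endian length prefixes, as described above; it simply stops whenever the writer has not gotten far enough yet, and you call it again later with the returned offset.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Chase a growing MP4 file and hand off length-prefixed NALUs from the
     * MDAT payload. Stops as soon as a full length prefix or NALU is not
     * available yet; call it again later with the returned offset. */
    static long read_available_nalus(FILE *f, long offset,
                                     void (*handle_nalu)(const uint8_t *nalu, uint32_t size))
    {
        for (;;) {
            uint8_t len_buf[4];
            if (fseek(f, offset, SEEK_SET) != 0) break;
            if (fread(len_buf, 1, 4, f) != 4) break;       /* writer not far enough yet */

            uint32_t nalu_size = ((uint32_t)len_buf[0] << 24) |
                                 ((uint32_t)len_buf[1] << 16) |
                                 ((uint32_t)len_buf[2] << 8)  |
                                  (uint32_t)len_buf[3];
            if (nalu_size == 0) break;                     /* padding / nothing written yet */

            uint8_t *nalu = malloc(nalu_size);
            if (!nalu) break;
            if (fread(nalu, 1, nalu_size, f) != nalu_size) {
                free(nalu);                                /* partial write, retry later */
                break;
            }
            handle_nalu(nalu, nalu_size);                  /* e.g. packetize for RTSP/RTP */
            free(nalu);
            offset += 4 + (long)nalu_size;
        }
        return offset;                                     /* resume here on the next poll */
    }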
The problem I am now having is when I try to incorporate audio from the mic into the stream. I can encode it just fine with AVAssetWriter and I get a properly interleaved, well-formed MP4 file to read from; however, unlike the H.264 NALUs, the audio samples in the file do not carry their size in their first bytes. So far the only way I can see to determine the sample sizes is from the STSZ and STCO Atoms in the MOOV, which of course I don't have because it is a live stream.
With all that in mind, does anyone know a way to identify audio sample segments in an MDAT Atom without the information from the MOOV Atom? As soon as I figure that out, I'm home free.
Thanks in advance for any insight.

After a lot of research and emails out to people, I at least have an answer, and the answer is: I can't do it this way. Normally, AAC samples in streams that don't have an index are wrapped in ADTS headers, each of which carries the length of its packet. However, since I am using AVAssetWriter for the audio, and AVAssetWriter writes directly to an MP4 file, the ADTS wrapper is stripped off because the index will be in the MOOV Atom instead.
Therefore I will have to encode the audio differently, probably through Audio Queue Services, and merge it with the video packets when building the RTSP stream.
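For anyone who ends up taking that route and getting raw AAC frames from an encoder, this is a sketch of the 7-byte ADTS header that carries the length field mentioned above. The AAC-LC profile and the 44.1 kHz stereo defaults are assumptions; adjust the frequency index and channel count to your configuration.

    #include <stdint.h>
    #include <stddef.h>

    /* Wrap one raw AAC-LC frame in a 7-byte ADTS header (no CRC).
     * The 13-bit aac_frame_length field is what gives a receiver the packet
     * size that the MP4's stsz table would otherwise provide. */
    static void write_adts_header(uint8_t adts[7], size_t aac_payload_len,
                                  int freq_index /* 4 = 44.1 kHz */,
                                  int channels   /* 2 = stereo   */)
    {
        size_t frame_len = aac_payload_len + 7;   /* length includes the header itself */

        adts[0] = 0xFF;                           /* syncword 0xFFF ...                 */
        adts[1] = 0xF1;                           /* ... plus MPEG-4, layer 0, no CRC   */
        adts[2] = (uint8_t)(((2 - 1) << 6)        /* profile = object type (LC = 2) - 1 */
                  | ((freq_index & 0x0F) << 2)
                  | ((channels >> 2) & 0x01));
        adts[3] = (uint8_t)(((channels & 0x03) << 6)
                  | ((frame_len >> 11) & 0x03));  /* frame length, top 2 bits           */
        adts[4] = (uint8_t)((frame_len >> 3) & 0xFF);  /* frame length, middle 8 bits   */
        adts[5] = (uint8_t)(((frame_len & 0x07) << 5)
                  | 0x1F);                        /* buffer fullness 0x7FF (VBR), high  */
        adts[6] = 0xFC;                           /* buffer fullness low, 1 raw block   */
    }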
Maybe this will help someone else in the future looking down this same road.
Many thanks to Geraint Davies at http://www.gdcl.co.uk for leading me down the right path.

Related

Transcoding fMP4 to HLS while writing on iOS using FFmpeg

TL;DR
I want to convert fMP4 fragments to TS segments (for HLS) as the fragments are being written using FFmpeg on an iOS device.
Why?
I'm trying to achieve live uploading on iOS while maintaining a seamless, HD copy locally.
What I've tried
Attempt #1: Rolling AVAssetWriters where each writes for 8 seconds, then concatenating the MP4s together via FFmpeg.
What went wrong - There are blips in the audio and video at times. I've identified 3 reasons for this.
1) Priming frames for audio written by the AAC encoder create gaps.
2) Since video frames are 33.33 ms long and audio frames roughly 22 ms (0.022 s) long, it's possible for them to not line up at the end of a file.
3) The lack of frame-accurate encoding, which is present on Mac OS but not available for iOS. Details Here
Attempt #2: FFmpeg muxing a large video-only MP4 file with raw audio into TS segments. The work was based on the Kickflip SDK.
What Went Wrong - Every once in a while an audio-only file would get uploaded, with no video whatsoever. We were never able to reproduce it in-house, but it was pretty upsetting to our users when they didn't record what they thought they did. There were also issues with accurate seeking on the final segments, almost as if the TS segments were incorrectly timestamped.
What I'm thinking now
Apple was pushing fMP4 at WWDC this year (2016) and I hadn't looked into it much at all before that. Since an fMP4 file can be read and played while it's being written, I thought it would be possible for FFmpeg to transcode the file as it's being written as well, as long as we hold off sending the bytes to FFmpeg until each fragment within the file is finished.
However, I'm not familiar enough with the FFmpeg C API; I have only used it briefly in attempt #2.
What I need from you
Is this a feasible solution? Is anybody familiar enough with fMP4 to know if I can actually accomplish this?
How will I know that AVFoundation has finished writing a fragment within the file so that I can pipe it into FFmpeg?
How can I take data from a file on disk, chunk at a time, pass it into FFmpeg and have it spit out TS segments?
Strictly speaking, you don't need to transcode the fMP4 if it contains H.264 + AAC; you just need to repackage the sample data as TS (using ffmpeg -codec copy, or GPAC).
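For example, a remux of that kind (no re-encode) can look like the following; the file names are placeholders and the exact invocation depends on your ffmpeg build:

    ffmpeg -i fragmented.mp4 -c copy -f mpegts output.ts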
Wrt. alignment (issues 1 and 2), I suppose this all depends on your encoder settings (frame rate, sample rate and GOP size). It is certainly possible to make sure that audio and video align exactly at fragment boundaries (see for example: this table). If you're targeting iOS, I would recommend using HLS protocol version 3 (or 4), which allows timing to be represented more accurately. This also allows you to stream audio and video separately (non-multiplexed).
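To make the alignment point concrete, an illustrative calculation (the 30 fps / 48 kHz figures are assumptions, not from the question): a video frame lasts 1/30 s and an AAC frame (1024 samples at 48 kHz) lasts 1024/48000 s. The smallest duration that is a whole number of both is 8/15 s, i.e. exactly 16 video frames and 25 AAC frames, so fragments whose duration is a multiple of 8/15 s (with the GOP size chosen so a keyframe starts each fragment) begin on both a video and an audio frame boundary.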
I believe ffmpeg should be capable of pushing a live fMP4 stream (i.e. using a long-running HTTP POST), but playout requires the origin software to do something meaningful with it (i.e. stream it out as HLS).

FFmpeg save stream to mp3

I have an iOS project that plays online radio streams; it uses FFmpeg for playback. I also added the ability to record streams by decoding them via the avcodec_decode_audio4 function and writing the output to a .wav file. But these files are too big, because it is an uncompressed format, so I want to encode to .mp3 instead.
I have found a couple of ways to convert audio, but only when the audio is already a complete file; I want to encode to some compressed format as soon as I get a chunk of data from the stream, not from a finished file.
Is it possible?
Can you give me some advice on how to achieve this?
You can use ffmpeg (aka libav) to encode the audio you're reading with avcodec_decode_audio4 into a file as mp3, as long as libav was configured with lame (--enable-libmp3lame).
Basically, you configure an mp3 codec, then call avcodec_encode_audio2 (who names these things?) on the progressive output of avcodec_decode_audio4.
The canonical example can be confusing because it also deals with video, but you should be able to tease the details you want out of it.
This post on transcoding audio by arashafiei is broadly helpful.
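For reference, a minimal sketch of that configure-and-encode path, using the same generation of the API the answer mentions (avcodec_encode_audio2; newer FFmpeg versions use avcodec_send_frame/avcodec_receive_packet instead). The bit rate and stereo layout are arbitrary choices, and error handling and end-of-stream flushing are omitted.

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/channel_layout.h>

    /* Configure an MP3 encoder. Requires an FFmpeg/libav build with
     * --enable-libmp3lame, as noted above. */
    AVCodecContext *open_mp3_encoder(int sample_rate)
    {
        AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MP3);
        AVCodecContext *ctx = avcodec_alloc_context3(codec);

        ctx->sample_fmt     = AV_SAMPLE_FMT_S16P;   /* planar s16, accepted by libmp3lame */
        ctx->sample_rate    = sample_rate;          /* e.g. 44100 */
        ctx->channel_layout = AV_CH_LAYOUT_STEREO;
        ctx->channels       = 2;
        ctx->bit_rate       = 128000;

        avcodec_open2(ctx, codec, NULL);
        return ctx;
    }

    /* Feed one AVFrame of decoded PCM (ctx->frame_size samples per channel,
     * converted to the encoder's sample format) and append any resulting
     * MP3 packet to the output file. Raw MP3 frames written back to back
     * already form a playable .mp3, so no muxer is strictly needed. */
    void encode_frame_to_mp3(AVCodecContext *ctx, const AVFrame *pcm, FILE *out)
    {
        AVPacket pkt;
        int got_packet = 0;

        av_init_packet(&pkt);
        pkt.data = NULL;                            /* let the encoder allocate the buffer */
        pkt.size = 0;

        if (avcodec_encode_audio2(ctx, &pkt, pcm, &got_packet) == 0 && got_packet) {
            fwrite(pkt.data, 1, pkt.size, out);
            av_packet_unref(&pkt);
        }
    }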

Capture, encode then stream video from an iPhone to a server

I've got experience with building iOS apps but don't have experience with video. I want to build an iPhone app that streams real time video to a server. Once on the server I will deliver that video to consumers in real time.
I've read quite a bit of material. Can someone let me know if the following is correct and fill in the blanks for me.
To record video on the iPhone I should use the AVFoundation classes. When using an AVCaptureSession, the delegate method captureOutput:didOutputSampleBuffer:fromConnection: gives me access to each frame of video. Now that I have the video frame, I need to encode it.
I know that the AVFoundation classes only offer H.264 encoding via AVAssetWriter, and not via a class that easily supports streaming to a web server. Therefore, I am left with writing the video to a file.
I've read other posts that say they can use two AVAssetWriters to write 10-second blocks and then use NSStream to send those blocks to the server. Can someone explain how to code two AVAssetWriters working together to achieve this? If anyone has code, could they please share it?
You are correct that the only way to use the hardware encoders on the iPhone is by using the AVAssetWriter class to write the encoded video to a file. Unfortunately the AVAssetWriter does not write the moov atom to the file (which is required to decode the encoded video) until the file is closed.
Thus one way to stream the encoded video to a server would be to write 10 second blocks of video to a file, close it, and send that file to the server. I have read that this method can be used with no gaps in playback caused by the closing and opening of files, though I have not attempted this myself.
I found another way to stream video here.
This example opens two AVAssetWriters. On the first frame it writes to both files, but immediately closes one of them so its moov atom gets written. With that moov atom data, the second file can then be used as a pipe to get a stream of encoded video data. This example only works for sending video data, but it is very clean, easy-to-understand code that helped me figure out how to deal with many issues with video on the iPhone.
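Not from the linked example itself, but the moov-extraction step it relies on amounts to walking the top-level boxes of the small, already-closed file. A sketch, assuming 32-bit box sizes only (no 64-bit largesize handling):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Walk the top-level boxes of the closed MP4 and copy out the moov box,
     * so its sample-description data (avcC etc.) can be applied to the
     * still-open companion file. Returns a malloc'd buffer or NULL. */
    static uint8_t *extract_moov(const char *path, uint32_t *moov_size)
    {
        FILE *f = fopen(path, "rb");
        if (!f) return NULL;

        uint8_t header[8];
        while (fread(header, 1, 8, f) == 8) {
            uint32_t size = ((uint32_t)header[0] << 24) | ((uint32_t)header[1] << 16) |
                            ((uint32_t)header[2] << 8)  |  (uint32_t)header[3];
            if (size < 8) break;                        /* malformed or 64-bit size */

            if (memcmp(header + 4, "moov", 4) == 0) {   /* found it: read the whole box */
                uint8_t *moov = malloc(size);
                if (moov) {
                    memcpy(moov, header, 8);
                    if (fread(moov + 8, 1, size - 8, f) == size - 8) {
                        *moov_size = size;
                        fclose(f);
                        return moov;
                    }
                    free(moov);
                }
                break;
            }
            fseek(f, (long)size - 8, SEEK_CUR);         /* skip ftyp, mdat, free, ... */
        }
        fclose(f);
        return NULL;
    }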

Is it possible for AVAssetWriter to output to memory

I would like to write an iPhone app that continuously captures video, H.264-encodes it in 10-second intervals, and uploads it to a storage server. This can be done with AVAssetWriter, and I can keep deleting the old files as I create new ones. However, since flash memory has a limited number of write cycles, this scheme will wear out the flash after a few thousand cycles. Is there a way to redirect AVAssetWriter to memory, or to create a RAM drive on the iPhone?
Thanks!
Yes, AVAssetWriter is the only way to get to the hardware encoder, and simply reading back the file while it's being written doesn't give you the moov atom, so AVFoundation- or MPMediaPlayer-based players won't be able to read it back. You only have a couple of choices: periodically stop the AVAssetWriter and write to a new file on a background thread, effectively segmenting your movie into smaller complete files; or deal with the incomplete MP4 on the server side, where you will have to parse the raw NALUs and recreate the missing moov atom. If you're using FFmpeg, mov.c is the source to look at; it is also where an incomplete MP4 file would fail.

AUGraph setup on iOS

I am designing an AUGraph for an iOS application and would appreciate help on the following things.
If I want to play a number of audio files at once, does each file need an audio unit?
From the Core-Audio docs
Linear PCM and IMA/ADPCM (IMA4) audio You can play multiple linear PCM or IMA4 format sounds simultaneously in iOS without incurring CPU resource problems.
AAC, MP3, and Apple Lossless (ALAC) audio Playback for AAC, MP3, and Apple Lossless (ALAC) sounds uses efficient hardware-based decoding on iPhone and iPod touch. You can play only one such sound at a time.
So multiple AAC or MP3 files cannot be played at the same time. What is the optimal LPCM format to play multiple sounds at once?
Does this apply to Audio Units too, as this is under the Audio Queue documentation?
Can an audio unit in an AUGraph be inactive? If an AUGraph looks like this
Speaker/output < recorder unit < mixer unit < number of audio file playing units
what happens if the recorder is not active? Would it still pull, but just not write the buffers to a file?
No; you need to use the mixer audio unit. Check this:
http://developer.apple.com/library/ios/DOCUMENTATION/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/ConstructingAudioUnitApps/ConstructingAudioUnitApps.html#//apple_ref/doc/uid/TP40009492-CH16-SW1
Mostly reading the document above, wrapping the sample code in a class and creating a pair of utility structures, I coded this 'Simple Sound Engine' from scratch:
http://nicolasmiari.com/blog/a-simple-sound-engine-for-ios-using-the-audio-unit-framework/
(Link to article in my blog containing the source code). Sorry, moved blog to Jekyll/Github and this article didn't make the cut.
...I was going to start a repo on github, but it's too much trouble. I am a visual guy, still pretty much git-phobic. Okay, that was a long time ago... Now I use git from the command line :-)
You can use it as-is, or extract the Audio Unit-related code and adapt it to your project.
I believe the Cocos Denshion 'Simple Audio Engine' does pretty much the same thing, but haven't checked the source code.
Known issues
If you have an exception breakpoint set for C++ exceptions, when debugging, the code will stop 2 or 3 times on AUGraphInitialize(). This is a 'non-crashing' exception, so you can click on continue and the code works OK.
To convert your wav files to the uncompressed .caf format, use this command on the Terminal:
%afconvert -f caff -d LEI16 mysoundFile.wav mySoundFile.caf
EDIT: So I created a GitHub repo after all:
https://github.com/nicolas-miari/Sound-Engine
Both ordinary .wav and .caf files contain raw PCM audio samples, and those can be played without hardware assist or DSP processing if they are already at the destination sample rate.
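If it helps, this is the kind of linear PCM stream format (interleaved 16-bit integer samples, matching the LEI16 .caf produced by the afconvert command above) that can be handed to the I/O or mixer unit; the 44.1 kHz stereo values are assumptions to adjust to your files.

    #include <AudioToolbox/AudioToolbox.h>

    /* Build an AudioStreamBasicDescription for packed, interleaved,
     * 16-bit signed-integer linear PCM (native-endian, which is
     * little-endian on iOS, matching LEI16). */
    static AudioStreamBasicDescription make_lpcm_format(Float64 sampleRate,
                                                        UInt32 channels)
    {
        AudioStreamBasicDescription asbd = {0};
        asbd.mSampleRate       = sampleRate;               /* e.g. 44100.0 */
        asbd.mFormatID         = kAudioFormatLinearPCM;
        asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger
                               | kAudioFormatFlagIsPacked;
        asbd.mBitsPerChannel   = 16;
        asbd.mChannelsPerFrame = channels;                 /* e.g. 2 for stereo */
        asbd.mBytesPerFrame    = (asbd.mBitsPerChannel / 8) * channels;
        asbd.mFramesPerPacket  = 1;                        /* always 1 for LPCM */
        asbd.mBytesPerPacket   = asbd.mBytesPerFrame;
        return asbd;
    }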
When there's no audio file or other synthesized data to feed an audio unit that's pulling buffers, the usual practice is to feed it buffers of silence (or perhaps a taper to zero if the previous buffer ended with non-zero amplitude).
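A sketch of what that looks like inside a render callback; the hasAudio flag here is a placeholder for however your engine knows whether real samples are ready, and the taper-to-zero refinement mentioned above is left out for brevity.

    #include <AudioToolbox/AudioToolbox.h>
    #include <string.h>

    /* Render callback that outputs silence when no source data is available. */
    static OSStatus renderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
    {
        Boolean hasAudio = false;   /* placeholder: query your file/synth source here */

        if (!hasAudio) {
            for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
                memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
            }
            /* Tell downstream units this buffer is silent. */
            *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
            return noErr;
        }

        /* ...otherwise copy or render real samples into ioData here... */
        return noErr;
    }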
