Using AVAssetWriter to create MPEG-2 TS - iOS

I'm currently using AVAssetWriter to take a video saved on disk and transcode it to lower bitrates. This works fine. Ultimately, I would like to have the iPhone create the necessary TS and M3U8 files to then send to a server. I've looked everywhere for examples of how to do this but have had no luck. My understanding is that MPEG-2 TS only differs in the header structure, but I'm wary of messing with the file directly.
Any thoughts around how to approach this?

Related

Is it possible to encode broadcast samples into an MPEG-TS or fMP4 file using AVAssetWriter?

I know this is a frequently asked question that does not have a trivial solution.
I found a demo that does the same (http://blog.denivip.ru/index.php/2017/01/live-streaming-on-ios/?lang=en), but it does not use AVAssetWriter.
People have also suggested the Bento4 library, but I want to do this programmatically with AVAssetWriter, without spawning a new process and without the latency of first writing a new file.
If it is not possible, why does iOS not have built-in support for creating those files from samples using AVAssetWriter?
Is RTSP the only option that Apple recommends for Live Streaming?
Answering my own question:
Use AVAssetWriter to create a sequence of MP4 files.
While reading the files to write to the socket, use qt-faststart to create a streamable version of each MP4 file. This is the Java equivalent: https://github.com/ypresto/qtfaststart-java/tree/master/src/main/java/net/ypresto/qtfaststart
It looks like iOS itself cannot be asked to do this.
EDIT #1: Sadly, many MP4 files do not seem to have the moov atom at the end. So, back to square one. Is there some way to force AVAssetWriter to always write the moov atom at least at the end, if not at the beginning?
EDIT #2: Voilà! It looks like Apple does support this. See https://developer.apple.com/documentation/avfoundation/avassetwriter/1389811-shouldoptimizefornetworkuse?language=objc and the question "What does shouldOptimizeForNetworkUse actually do?"
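A minimal sketch of flipping that flag (the output URL and file type here are placeholders); note it must be set before writing starts:

    import AVFoundation

    // Hypothetical setup; output URL and file type are placeholders.
    let writer = try AVAssetWriter(
        outputURL: URL(fileURLWithPath: "/tmp/segment0.mp4"),
        fileType: .mp4)

    // Must be set before startWriting(). Tells the writer to lay the file
    // out for network playback, which is commonly understood to mean putting
    // the moov atom where progressive download can reach it early.
    writer.shouldOptimizeForNetworkUse = true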

Best way to compress MP3 sound files

I have an app (a game) in the App Store that is very large (over 250 MB), and I realized it's due to the sound files in it rather than the images. Is there any good way to compress MP3 files?
Thanks a lot!
MP3 audio files are already compressed. However, if you still have the original uncompressed audio and you used default settings to convert it to MP3 the first time, it's quite possible that the result wasn't as small as it could be.
Try saving at a lower bitrate (see the sketch after this list).
Try using mono samples instead of stereo.
Try using shorter samples repeated instead of a really long sample.
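Note that iOS has no system MP3 encoder, but if the game can ship AAC instead, a lower-bitrate mono re-encode is easy to script. A minimal sketch using AVAssetReader/AVAssetWriter; the function name and the 64 kbps mono settings are just illustrative assumptions:

    import AVFoundation

    // Illustrative sketch: re-encode a source audio file as mono 64 kbps AAC.
    // Error handling is kept minimal on purpose.
    func compressAudio(from source: URL, to destination: URL,
                       completion: @escaping (Error?) -> Void) throws {
        let asset = AVAsset(url: source)
        guard let track = asset.tracks(withMediaType: .audio).first else { return }
        let reader = try AVAssetReader(asset: asset)
        let writer = try AVAssetWriter(outputURL: destination, fileType: .m4a)

        // Decode the source (MP3, AAC, ...) to PCM so it can be re-encoded.
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
            AVFormatIDKey: kAudioFormatLinearPCM
        ])
        reader.add(output)

        // Mono AAC at 64 kbps; adjust to taste.
        let input = AVAssetWriterInput(mediaType: .audio, outputSettings: [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1,
            AVEncoderBitRateKey: 64_000
        ])
        writer.add(input)

        reader.startReading()
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)

        input.requestMediaDataWhenReady(on: DispatchQueue(label: "compress")) {
            while input.isReadyForMoreMediaData {
                guard let buffer = output.copyNextSampleBuffer() else {
                    input.markAsFinished()
                    writer.finishWriting { completion(writer.error) }
                    return
                }
                input.append(buffer)
            }
        }
    }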

HTML5 and MP4 vs. M2TS containers

Problem:
To get an iOS app that streams video accepted into the App Store, we need to have an HLS version.
What’s the problem?
Android does not support HLS well, and for other reasons, we need to store MP4 and HLS files of the same content.
What’s the difference between MP4 and HLS and why do you need to store both?
MP4 is a container that stores H.264 video and AAC audio for the best compatibility in HTML5 browsers; JS video players often have a Flash fallback that plays the same MP4 file through Flash if the browser does not support MP4 video in HTML5.
HLS is a protocol where text files (.m3u8) contain references to playlists, which themselves reference .ts (or .m2ts) files, which are MPEG-2 transport streams (not to be confused with MPEG-2 video). The .ts files are containers for the same H.264 video and AAC audio.
Why am I complaining?
It takes time to create the HLS files and playlists from the MP4 files
(Most importantly) We are now storing twice as much data on S3
Why should I care? If your S3 bill is $10K per month to store both the MP4 and HLS copies, storing just one would cut it to $5K. Put another way, if you are paying $100K to store the content as MP4 alone, it would cost $200K to store the same content in both MP4 and HLS.
What do I want?
I want to store only the .ts files and serve desktop, iOS, and Android users from that single copy.
Is it possible?
Doesn’t HLS require 5-10 second .ts segments instead of one big file?
As of IETF draft 7 (version 4 of the protocol), HLS supports the EXT-X-BYTERANGE tag, which allows you to specify a media segment as a byte range (subrange) of a larger file.
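For illustration, a version-4 playlist addressing one big file by byte ranges might look like this (the durations, segment lengths, and offsets are made up):

    #EXTM3U
    #EXT-X-VERSION:4
    #EXT-X-TARGETDURATION:10
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:10.0,
    #EXT-X-BYTERANGE:326744@0
    movie.ts
    #EXTINF:10.0,
    #EXT-X-BYTERANGE:326744@326744
    movie.ts
    #EXT-X-ENDLIST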
Are .ts files compatible with HTML5 video?
Apparently not. They are a different container than MP4, yet contain the same video and audio content. It seems worth looking into how to store the raw video/audio data once and simply apply the correct container when necessary. If JS video players can fall back to playing an HTML5 MP4 file through Flash when the browser does not support HTML5 MP4, why can't they do the same with M2TS data?
I might be ignorant on some level, but maybe someone can shed some light on this issue, and possibly present a solution.
There currently is no good solution.
A little background.
Video streaming used to require custom protocols such as RTP/RTMP/RTSP. These protocols work fine, except that we were basically building two separate networks: one HTTP-based for standard web traffic, and another for video. The idea came along to split video into little chunks and serve them to the player over HTTP. That way we need no special servers or software, and we can take advantage of the giant HTTP CDNs that were being built. In addition, because the video is split into chunks, we can encode the same video at different qualities/file sizes, and the player can choose the best quality for its current bandwidth. This was the perfect solution for mobile, with its constantly changing network conditions.
Several competing standards were developed. Move Networks was the first to market [citation needed]. The design was copied by Microsoft (Smooth Streaming) and Apple (HTTP Live Streaming, aka HLS). Microsoft is phasing out Smooth Streaming in favor of DASH. DASH looks like it will become the default streaming solution of the future, except that, because of its design-by-committee approach, it has basically been stuck in committee for a few years. In those few years, Apple has sold millions of iOS devices, so HLS cannot just be discontinued.
Why doesn't everyone just use HLS, then? I can think of three reasons: 1) it's Apple's standard, and people are haters; 2) transport streams are a complicated file format; and 3) transport streams are patent-encumbered. MP4 is not patent-encumbered, but it also does not have the adaptive abilities, which makes for a poor user experience on 2G networks, the only network supported by the iPhone 1. Also, AT&T at the time did not want full-bitrate video streamed over their woefully inadequate cellular network. HLS was the compromise. All of this predates HTML5, so the video tag was not even considered in its design.
Addressing your points:
1) It takes time to create the HLS files and playlists from the MP4 files
This is a programming website. Automate it.
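For instance, ffmpeg's hls muxer can segment an existing MP4 into .ts files and a playlist without re-encoding (the options here are a starting point, not gospel):

    ffmpeg -i input.mp4 -c copy -f hls -hls_time 10 -hls_list_size 0 prog.m3u8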
2) We are now storing twice as much data on S3
I want to store only the .ts files and serve desktop, iOS, and Android users from that single copy.
You and me both, man :).
Possible solutions.
1) What specifically is wrong with Android's implementation (besides the lack of support on older devices)?
2) JW Player can play HLS (not sure about on Android).
3) Server-side transmuxing on demand.
Doesn’t HLS require 5-10 second .ts segments instead of one big file?
You can do byte-ranges, but you need to make sure all devices you are interested in support it.
If JS video players can fall back to playing an HTML5 MP4 file through Flash when the browser does not support HTML5 MP4, why can't they do the same with M2TS data?
They don't convert; Flash natively supports MP4. It is possible to transmux TS in AS3/JS, and I have done it. JW Player can transmux TS in the browser; video.js may be able to as well.

iOS: Good way to add XMP metadata to an AAC-encoded m4a file?

I'm creating an AAC-encoded m4a file from raw PCM samples for streaming purposes, using the AAC hardware encoding shown in Apple's iPhoneExtAudioFileConvertTest example.
Now I would really like to add metadata such as album artwork and titles.
As I understand it, m4a/mp4 containers are MPEG-4 Part 14, so the specified metadata format is XMP. However, I don't know of a good tool for working with XMP metadata. Any ideas?
I'm aware of the Adobe XMP SDK, but it seems quite heavyweight; maybe there is a better solution for iOS. I mean, I doubt it's possible to do in AVFoundation, as XMP is an Adobe technology, but maybe someone has written a nice library especially for this purpose.
I don't know in what sense you consider the XMP SDK heavyweight, but I can assure you that it hardly takes 15 minutes to download, compile, and start using the SDK.
You could start by editing one of the samples (Modify) that come with the XMP SDK and then use the snippet inside your application.
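For what it's worth, if plain iTunes-style MP4 tags (title, cover art) would do instead of XMP, AVFoundation can write those itself. A sketch assuming an AVAssetWriter-based pipeline; the helper function and the choice of JPEG data type are my assumptions:

    import AVFoundation

    // Assumed helper: build iTunes-style metadata items (standard MP4 "ilst"
    // tags, not XMP) to attach to an AVAssetWriter before startWriting().
    func makeMetadata(title: String, artworkJPEG: Data) -> [AVMetadataItem] {
        let titleItem = AVMutableMetadataItem()
        titleItem.identifier = .iTunesMetadataSongName
        titleItem.value = title as NSString

        let artItem = AVMutableMetadataItem()
        artItem.identifier = .iTunesMetadataCoverArt
        artItem.value = artworkJPEG as NSData
        artItem.dataType = kCMMetadataBaseDataType_JPEG as String

        return [titleItem, artItem]
    }

    // Usage: writer.metadata = makeMetadata(title: "My Track", artworkJPEG: jpeg)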

Mixing and equalizing multiple streams of compressed audio on iOS

What I'm trying to do is exactly what the title says: decode multiple compressed audio streams/files (they will be extracted from a modified MP4 file) and EQ them simultaneously in real time.
I have read through most of Apple's docs.
I have tried Audio Queues, but I won't be able to do equalization there: once the compressed audio goes in, it doesn't come out, so I can't manipulate it.
Audio Units don't seem to have any components that handle decompression of AAC and MP3; if I'm right, the converter unit only handles converting from one LPCM format to another.
I have been trying to work out a solution on and off for about a month and a half now.
I'm now thinking: use a third-party decoder (god help me; I haven't a clue how to use those, the source code is Greek to me; oh, and any recommendations? :x), then feed the decoded LPCM into Audio Queues and do the EQ in the callback.
Maybe I'm missing something here. Suggestions? :(
I'm still trying to figure out Core Audio for my own needs, but from what I can understand, you want to use Extended Audio File Services, which handles reading and decompression for you, producing PCM data you can then hand off to a buffer. The MixerHost sample project provides an example of using ExtAudioFileOpenURL to do this.
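A minimal sketch of that approach (the 44.1 kHz interleaved stereo float client format is just an example choice):

    import AudioToolbox

    // Open a compressed file (AAC, MP3, ...) and ask Extended Audio File
    // Services to hand us decoded LPCM through the "client data format".
    func openDecoded(_ url: URL) -> ExtAudioFileRef? {
        var file: ExtAudioFileRef?
        guard ExtAudioFileOpenURL(url as CFURL, &file) == noErr,
              let file = file else { return nil }

        // 32-bit float, interleaved stereo at 44.1 kHz on the client side;
        // ExtAudioFile decodes and converts to this format for us.
        var client = AudioStreamBasicDescription(
            mSampleRate: 44_100,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
            mBytesPerPacket: 8, mFramesPerPacket: 1, mBytesPerFrame: 8,
            mChannelsPerFrame: 2, mBitsPerChannel: 32, mReserved: 0)
        let status = ExtAudioFileSetProperty(
            file, kExtAudioFileProperty_ClientDataFormat,
            UInt32(MemoryLayout.size(ofValue: client)), &client)
        return status == noErr ? file : nil
    }

    // Each render cycle, ExtAudioFileRead(file, &frameCount, &bufferList)
    // fills bufferList with decoded PCM, ready for the EQ/mixer callback.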
