iOS: Good way to add XMP metadata to an AAC-encoded m4a file?

I'm creating an AAC-encoded m4a file from raw PCM samples for streaming purposes, using the hardware AAC encoding shown in Apple's iPhoneExtAudioFileConvertTest sample.
Now I would really want to add metadata such as album artwork and titles.
As I understand it, m4a and mp4 containers are MPEG-4 Part 14, so the specified metadata format is XMP. However, I don't know of a good tool for working with XMP metadata. Any ideas?
I'm aware of the Adobe XMP SDK, but it seems quite heavyweight; maybe there is a better solution for iOS. I doubt it's possible in AVFoundation, since XMP is an Adobe technology, but maybe someone has written a nice library especially for this purpose.

I don't know in what sense you consider the XMP SDK heavyweight, but I can assure you it takes hardly 15 minutes to download, compile, and start using the SDK.
You could start by editing one of the samples that come with the XMP SDK (e.g. the Modify sample) and then reuse that snippet inside your application.
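If plain title and artwork tags are the real goal (rather than XMP specifically), the AVFoundation route the asker speculated about does exist for iTunes-style metadata, which is what most players actually read from an .m4a. A minimal sketch, assuming an existing .m4a on disk; the function name and paths are my own, and error handling is trimmed:

```swift
import AVFoundation

// Sketch: re-export an .m4a while attaching an iTunes-style title and cover art.
// This writes iTunes (ilst) metadata, not XMP.
func addTags(to sourceURL: URL, output outputURL: URL, title: String, artworkJPEG: Data) {
    let asset = AVAsset(url: sourceURL)

    let titleItem = AVMutableMetadataItem()
    titleItem.keySpace = .iTunes
    titleItem.identifier = .iTunesMetadataSongName
    titleItem.value = title as NSString

    let artItem = AVMutableMetadataItem()
    artItem.keySpace = .iTunes
    artItem.identifier = .iTunesMetadataCoverArt
    artItem.value = artworkJPEG as NSData

    // Passthrough avoids re-encoding the AAC audio while still rewriting metadata.
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetPassthrough) else { return }
    export.outputURL = outputURL
    export.outputFileType = .m4a
    export.metadata = [titleItem, artItem]
    export.exportAsynchronously {
        print(export.status == .completed ? "done" : "failed: \(String(describing: export.error))")
    }
}
```

If XMP is genuinely required (e.g. for Adobe tooling interop), the XMP SDK route in the answer above remains the way to go.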

Related

Is it possible to encode broadcast samples into an MPEG-TS or fMP4 file using AVAssetWriter?

I know that this is a frequently asked question without a trivial solution.
I found a demo that does the same - http://blog.denivip.ru/index.php/2017/01/live-streaming-on-ios/?lang=en - but it does not use AVAssetWriter.
People have also suggested the Bento4 library, but I want to do this programmatically with AVAssetWriter, without spawning a new process and without the latency of writing out a separate file.
If that's not possible, why does iOS not have built-in support for creating those files from samples using AVAssetWriter?
Is RTSP the only option that Apple recommends for live streaming?
Answering my own question:
Use AVAssetWriter to create a sequence of mp4 files.
While reading the files to write to the socket, use qt-faststart to create a streamable version of each mp4 file. This is the Java equivalent - https://github.com/ypresto/qtfaststart-java/tree/master/src/main/java/net/ypresto/qtfaststart
Looks like iOS cannot be asked to do this.
EDIT#1: Sadly, many mp4 files do not seem to have the moov atom at the end. So, back to square one. Is there some way to force AVAssetWriter to always write the moov atom at the end, if not at the beginning?
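Whether a given file even needs the faststart pass can be checked by scanning its top-level boxes: a "streamable" file has moov before mdat. A sketch in plain Foundation (function names are my own); it handles 32-bit sizes and the 64-bit "largesize" form and ignores box contents:

```swift
import Foundation

// Scan the top-level MP4 boxes and return their four-character types in order.
func topLevelBoxTypes(of data: Data) -> [String] {
    var types: [String] = []
    var offset = 0
    while offset + 8 <= data.count {
        // Big-endian 32-bit box size, then the four-character type code.
        var size = data.subdata(in: offset..<offset + 4).reduce(0) { ($0 << 8) | Int($1) }
        let type = String(bytes: data.subdata(in: offset + 4..<offset + 8), encoding: .ascii) ?? "????"
        if size == 1, offset + 16 <= data.count {
            // size == 1 means a 64-bit "largesize" follows the type field.
            size = data.subdata(in: offset + 8..<offset + 16).reduce(0) { ($0 << 8) | Int($1) }
        }
        if size < 8 { break }  // size 0 ("to end of file") or malformed: stop scanning
        types.append(type)
        offset += size
    }
    return types
}

// A file is already "fast start" if moov appears before mdat.
func isFastStart(_ data: Data) -> Bool {
    let types = topLevelBoxTypes(of: data)
    guard let moov = types.firstIndex(of: "moov"),
          let mdat = types.firstIndex(of: "mdat") else { return false }
    return moov < mdat
}
```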
EDIT#2: Voilà! Apple does support this after all. See shouldOptimizeForNetworkUse: https://developer.apple.com/documentation/avfoundation/avassetwriter/1389811-shouldoptimizefornetworkuse?language=objc and the question "What does shouldOptimizeForNetworkUse actually do?"
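Using that property, AVAssetWriter itself produces a fast-start layout and the qt-faststart pass becomes unnecessary. A short sketch (URL is a placeholder):

```swift
import AVFoundation

// Ask AVAssetWriter for a "fast start" file with the moov atom at the front.
func makeStreamableWriter(outputURL: URL) throws -> AVAssetWriter {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    // Must be set before startWriting(); the writer then relocates the moov
    // atom to the front of the file when finishWriting() completes.
    writer.shouldOptimizeForNetworkUse = true
    return writer
}
```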

Using AVAssetWriter to create MPEG-2 TS

I'm currently using AVAssetWriter to take a video saved on disk and transcode it to lower bitrates, and this works fine. Ultimately, I would like the iPhone to create the necessary TS and M3U8 files to send to a server. I've looked everywhere for examples of how to do this but have had no luck. My understanding is that MPEG-2 TS differs only in the header structure, but I'm wary of messing with the file directly.
Any thoughts around how to approach this?
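For reference, the working transcode step described in the question is typically an AVAssetReader feeding an AVAssetWriter. This is a sketch under assumptions of my own (video-only, hardcoded bitrate and dimensions, minimal error handling), not the asker's actual code:

```swift
import AVFoundation

// Sketch of a reader -> writer transcode pump that re-encodes a file at a lower bitrate.
func transcode(input: URL, output: URL, completion: @escaping () -> Void) throws {
    let asset = AVAsset(url: input)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    // Decode the source to pixel buffers.
    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                         kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange])
    reader.add(readerOutput)

    // Re-encode as H.264 at a placeholder 800 kbps, 640x360.
    let writer = try AVAssetWriter(outputURL: output, fileType: .mp4)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 640, AVVideoHeightKey: 360,
        AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: 800_000]])
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    let queue = DispatchQueue(label: "transcode")
    writerInput.requestMediaDataWhenReady(on: queue) {
        while writerInput.isReadyForMoreMediaData {
            if let sample = readerOutput.copyNextSampleBuffer() {
                writerInput.append(sample)
            } else {
                writerInput.markAsFinished()
                writer.finishWriting(completionHandler: completion)
                break
            }
        }
    }
}
```

As for the actual question: AVAssetWriter has no MPEG-2 TS output file type, which is why the segmenting is usually done server-side or with a third-party muxer.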

Libraries for compressing and saving sound in iOS

I wish to export a sound recording from an iOS app in some format suitable to be sent over email. Since this calls for a compressed format, uncompressed WAV is out, which leaves a choice of MP3, Ogg, or M4A.
What readily available libraries (or even APIs) are available in iOS to do this task?
AVFoundation will do the job for you. In particular, you should look at AVAssetExportSession, which is explicitly written for encoding PCM into AAC in an m4a container. Sorry, no Ogg here - and note that iOS ships no MP3 encoder, so m4a is the practical choice. This has been available since iOS 4.
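A minimal sketch of that export, assuming the recording is already a file AVFoundation can read; paths and the function name are placeholders:

```swift
import AVFoundation

// Sketch: export any readable audio asset (e.g. a recording) to an AAC .m4a
// small enough to attach to an email.
func exportToM4A(source: URL, destination: URL) {
    let asset = AVAsset(url: source)
    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetAppleM4A) else { return }
    session.outputURL = destination
    session.outputFileType = .m4a
    session.exportAsynchronously {
        if session.status == .completed {
            print("exported to \(destination.path)")
        } else {
            print("export failed: \(String(describing: session.error))")
        }
    }
}
```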

Mixing and equalizing multiple streams of compressed audio on iOS

What I'm trying to do is exactly as the title says: decode multiple compressed audio streams/files - they will be extracted from a modified MP4 file - and EQ them in realtime, simultaneously.
I have read through most of Apple's docs.
I have tried Audio Queues, but I won't be able to do equalization there: once the compressed audio goes in, it doesn't come back out, so I can't manipulate it.
Audio Units don't seem to have any components that handle decompression of AAC or MP3 - if I'm right, the converter unit only handles converting from one LPCM format to another.
I have been trying to work out a solution on and off for about a month and a half now.
I'm now thinking: use a third-party decoder (God help me; I haven't a clue how to use those, the source code is Greek to me - oh, and any recommendations? :x), then feed the decoded LPCM into Audio Queues and do EQ in the callback.
Maybe I'm missing something here. Suggestions? :(
I'm still trying to figure out Core Audio for my own needs, but from what I understand, you want Extended Audio File Services, which handles reading and decompression for you, producing PCM data you can then hand off to a buffer. The MixerHost sample project shows how to do this with ExtAudioFileOpenURL.
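The core of that approach can be sketched as follows: open the compressed file, set a client data format, and ExtAudioFileRead hands back decoded LPCM ready for an EQ stage. Buffer sizes and the mono Float32 format are my own assumptions, and error checks are trimmed:

```swift
import AudioToolbox
import Foundation

// Sketch: decode a compressed file (AAC, MP3, ...) to LPCM via
// Extended Audio File Services.
func readDecodedPCM(from url: URL) {
    var file: ExtAudioFileRef?
    guard ExtAudioFileOpenURL(url as CFURL, &file) == noErr, let file = file else { return }
    defer { ExtAudioFileDispose(file) }

    // Ask for 44.1 kHz mono Float32 on the client side; the API decodes for us.
    var clientFormat = AudioStreamBasicDescription(
        mSampleRate: 44_100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
        mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4,
        mChannelsPerFrame: 1, mBitsPerChannel: 32, mReserved: 0)
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                            UInt32(MemoryLayout.size(ofValue: clientFormat)), &clientFormat)

    var samples = [Float](repeating: 0, count: 4096)
    var frames: UInt32 = 4096
    samples.withUnsafeMutableBytes { raw in
        var bufferList = AudioBufferList(
            mNumberBuffers: 1,
            mBuffers: AudioBuffer(mNumberChannels: 1,
                                  mDataByteSize: UInt32(raw.count),
                                  mData: raw.baseAddress))
        // Each call decodes up to `frames` frames of LPCM into `samples`;
        // this is where per-stream EQ could be applied before mixing.
        ExtAudioFileRead(file, &frames, &bufferList)
    }
}
```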

Specify software-based codec for AVAssetReaderAudioMixOutput?

On an iOS device, can AVAssetReaderOutput be told to use only software-based decoders (i.e. kAppleSoftwareAudioCodecManufacturer rather than kAppleHardwareAudioCodecManufacturer)?
I see that this is possible using Audio Format Services in AudioToolbox, but I don't see how to carry this over to AVFoundation.
The reason for this is that I'd like to decode compressed audio from the iTunes library while iPodMusicPlayer is playing - since hardware-assisted decoding does not support decoding multiple songs simultaneously, my app will need to use software decoding (right?).
I'd rather not do the software decoding as a 2-step process (i.e. export compressed file to app sandbox, then open that using AudioToolbox).
Well, although I haven't found a way to specify the software decoder in AVFoundation, I ended up working around this by reading each track of the compressed song file with an AVAssetReaderTrackOutput, then passing the compressed buffers to an AudioConverterRef.
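A sketch of that workaround, under the assumption (which matches AVFoundation's documented behavior) that a nil outputSettings dictionary makes the track output vend the compressed packets untouched:

```swift
import AVFoundation
import CoreMedia

// Sketch: read a song's audio track in passthrough mode, yielding compressed
// buffers that can be fed to an AudioConverterRef for software decoding.
func readCompressedPackets(from asset: AVAsset) throws {
    guard let track = asset.tracks(withMediaType: .audio).first else { return }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)  // nil = passthrough
    reader.add(output)
    reader.startReading()
    while let sample = output.copyNextSampleBuffer() {
        // Each CMSampleBuffer holds compressed AAC/MP3 packets; hand its data
        // and packet descriptions to AudioConverterFillComplexBuffer.
        let packets = CMSampleBufferGetNumSamples(sample)
        print("got \(packets) compressed packets")
    }
}
```

Since the AudioConverter is created in-process with kAppleSoftwareAudioCodecManufacturer, this sidesteps the shared hardware decoder entirely.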
