Comprehensive tutorial about using AVAssetReader with Audio Queue/Streaming Services - iOS

I'm working on an app that allows a user to select music tracks on their iPhone, listen to them, and share them live with another person so that the other person can hear the same song in sync.
I've managed to get the following prototype working: manually add a file to the bundle I'm working with, decode it using AudioFileReadPackets, and send it over the network using GKSession.
On the receiving end, I use Audio Queue / Audio File Stream Services to read the stream and play the music (i.e. AudioFileStreamOpen, AudioFileStreamParseBytes, AudioQueueNewOutput, AudioQueueStart, etc. See here for more details).
That said, I found out that I can't simply read a file from the iPhone's file system and decode it; rather, I have to use AVAssetReader and so on. There are many examples of doing that on Stack Overflow, but they focus on the immediate technical implementation rather than explaining the big picture. I couldn't find a comprehensive guide or much documentation on Apple's developer site (see how the CMSampleBuffer Reference is written, for example; the function parameters have no descriptions).
Any links/books/etc. that may lead me in the right direction here? Specifically about accessing audio files in the iPod library and segmenting them using AVAssetReader so that they can be sent in a streaming fashion over a network, to be played via Audio Queue / Audio File Stream Services.

I found this book to be amazingly helpful: Learning Core Audio: A Hands-On Guide to Audio Programming for Mac and iOS
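For orientation, here is a minimal, hedged sketch of the reading side being asked about: take an iPod-library item's asset URL (e.g. from MPMediaItem's assetURL), decode it to linear PCM with AVAssetReader, and hand fixed-size chunks to whatever network layer is in use. The function name, the `send` closure, and the output settings are illustrative assumptions, not a definitive implementation.

```swift
import AVFoundation

// Hypothetical sketch: decode an iPod-library item (located via its asset URL,
// e.g. MPMediaItem's assetURL) to linear PCM with AVAssetReader and hand the
// raw bytes to a caller-supplied send closure, chunk by chunk.
func streamAudio(from assetURL: URL, send: (Data) -> Void) throws {
    let asset = AVURLAsset(url: assetURL)
    let reader = try AVAssetReader(asset: asset)

    guard let track = asset.tracks(withMediaType: .audio).first else { return }

    // Ask the reader to decode to 16-bit interleaved LPCM so the receiver
    // doesn't need to know the original codec.
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 2,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
    reader.add(output)
    reader.startReading()

    // Each CMSampleBuffer holds a run of PCM frames; copy them out and send.
    while reader.status == .reading,
          let sampleBuffer = output.copyNextSampleBuffer(),
          let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) {
        var length = 0
        var dataPointer: UnsafeMutablePointer<Int8>?
        CMBlockBufferGetDataPointer(blockBuffer, atOffset: 0, lengthAtOffsetOut: nil,
                                    totalLengthOut: &length, dataPointerOut: &dataPointer)
        if let dataPointer = dataPointer {
            send(Data(bytes: dataPointer, count: length))   // e.g. over GKSession
        }
    }
}
```

Because the chunks here are plain PCM with a known format, the receiver could feed them straight to an audio queue; keeping the original packets instead (as in the question's prototype) would preserve the AudioFileStream-based pipeline.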

Related

How are videos on YouTube and similar sites loaded, and how is progress saved?

I hope you are doing well!
I am working on an eLearning website and came across the topic of video loading. Since videos are of various sizes, it would be impossible to make the user wait for the entire video to download before they start watching, so it must be delivered as a stream where the video keeps loading content as the user watches (similar to YouTube, I guess). However, I can't find out how this works. I've been recommended SCORM and xAPI to help with this, but I am only finding help on how to upload SCORM files or how to write xAPI code, not how to set them up on our website.
How can we make our videos download as the user watches? Are SCORM and xAPI actually what we should be looking for?
For context, we will be using React JS for our Frontend and will be saving the videos on a server.
I would greatly appreciate any advice you have and thank you for your time!
We tried using xAPI and SCORM; however, we don't understand how they might help.
SCORM and xAPI by themselves are not going to assist you with this in general. To stream video via an eLearning course you will need to use a video player (such as the HTML5 video player or video.js) that understands streaming video protocols, and to encode the video files in a format supported by that player. I would suggest reading about HLS, for instance; though I didn't read the entire page, this is a good place to start: https://www.dacast.com/blog/hls-streaming-protocol/
A traditional eLearning course, such as you would have with SCORM, is going to provide a reasonable way to wrap the playing of video such that it can be launched for a learner via an LMS and may capture data such as completion. xAPI is probably suggested because it provides a more robust way of enabling the capture of interaction data such as when the learner plays, pauses, or seeks in a video. My preferred approach for doing this is to leverage cmi5, and there is an example of xAPI video profile usage within a cmi5 course in the Project CATAPULT sample content, see https://github.com/adlnet/CATAPULT/tree/main/course_examples. It could be adapted to leverage something like HLS and get streaming capability. Confirm with your LMS of choice ahead of time whether it supports cmi5, as adoption is still lower than for SCORM.
SCORM Cloud (a bit of a misnomer, https://cloud.scorm.com/) provides built-in video handling via the cmi5 mechanism and will soon support video streaming beyond just YouTube, without the need to author a course separately.

Meter audio level from Embedded YouTube video that plays inline in iOS

I'm trying to find a way to get the average power level for a channel of the audio coming from the embedded video. I'm using YouTube's iOS helper library for embedding the video: https://developers.google.com/youtube/v3/guides/ios_youtube_helper
A lot of the answers I've found on Stack Overflow refer to AVAudioPlayer, but that's not my case. I also looked in the docs of the AudioKit framework for something that can give the output level of the current audio, but I couldn't find anything related; maybe I missed something there. I also looked in the EZAudio framework, even though it's deprecated, and I couldn't find anything that relates to my case either.
My direction of thinking was to find a way to get the actual level that's coming out of the device, but I found one answer on SO saying this is not allowed on iOS, although it didn't cite any source for this statement.
https://stackoverflow.com/a/12664340/4711172
So, any help would be much appreciated.
The iOS security sandbox blocks apps from seeing the device's digital audio output stream, or any other app's internal audio output (unless explicitly shared via, e.g., Inter-App Audio), at least when using the public APIs permitted in the Apple App Store.
(Just a guess, but this restriction was possibly implemented originally to prevent apps from capturing samples of DRM'd music and/or recording phone call conversations.)
Might be a bit off/weird, but just in case -
Have you considered closing the loop? Meaning: record the incoming audio using AVAudioRecorder and get the audio levels from there.
See Apple's documentation for AVAudioRecorder (the overview specifies: "Obtain input audio-level data that you can use to provide level metering").
AVAudioRecorder documentation
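As a hedged sketch of that suggestion (assuming microphone permission has been granted; the class name and temporary file name are illustrative): enable metering on an AVAudioRecorder and poll averagePower(forChannel:) while it records. Note that this meters what the microphone picks up, not the device's digital output itself.

```swift
import AVFoundation

// Hedged sketch of the "close the loop" idea above: record input with metering
// enabled and poll the average power. Requires microphone permission; the
// class and file names are illustrative.
final class InputLevelMeter {
    private var recorder: AVAudioRecorder?

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default, options: [])
        try session.setActive(true)

        let url = FileManager.default.temporaryDirectory.appendingPathComponent("meter.caf")
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatAppleIMA4,
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1
        ]
        let recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder.isMeteringEnabled = true
        recorder.record()
        self.recorder = recorder
    }

    /// Average power of channel 0 in dBFS (roughly -160 ... 0), or nil if not recording.
    func currentLevel() -> Float? {
        guard let recorder = recorder, recorder.isRecording else { return nil }
        recorder.updateMeters()
        return recorder.averagePower(forChannel: 0)
    }
}
```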

iOS Multi-Channel Audio with AVFoundation and Swift

I am currently in the research and prototyping stages of a project to develop a native iOS app (Swift 3) that includes a multi-channel audio player (multiple stereo MP3 files). I have found very limited information online, particularly written in Swift 3, so I thought that as I continue my research I would pose a question here.
Regarding frameworks, it seems clear from what I've looked at so far that AVFoundation is going to do the job. It's not too low level and has a good set of functionality. It has support for playing multiple audio files with AVAudioPlayer. I am planning to start prototyping something with this soon.
But I am new to Swift and to iOS development, with its huge number of libraries, so I'm wondering if I'm missing anything and whether I'm on the right track here. Any answers with general information and thoughts on this will be up-voted. For an accepted answer, I'm looking for some sample outline code using an appropriate framework: AVFoundation or a justified alternative.
If no answer is forthcoming I will post my own code when I get there.
Specifically, I need from two to ten input channels, from MP3 files within the project resources, each with its own gain that can be individually adjusted, and then all of these mixed, maintaining their stereo channels, to a single output (the device) with a master gain. Some of the tracks need to loop, others not. The tracks need to be accurately synchronised. This is just for info; outline code covering the important points would be fine.
Research Notes and Resources
Apple: AVFoundation
A collection of resources relating to AVFoundation.
Apple: AVFoundation Programming Guide
This document seems encouraging at first, but actually only deals with video. It says:
There are two facets to the AVFoundation framework—APIs related to video and APIs related just to audio. The older audio-related classes provide easy ways to deal with audio. They are described in the Multimedia Programming Guide, not in this document.
The "Multimedia Programming Guide" which is also mentioned elsewhere at Apple in relation to this, is never linked and Google results point to not found pages on the Apple site. It seems to have disappeared.
Rudi Strahl: Mixing Multiple Audio Tracks with AVFoundation
Compares using AVComposition to using multiple AVPlayers. Example code is Objective-C. Not sure how the AVPlayers are mixed in the second solution. Perhaps with AVAudioMix. Currently looking at this. The article talks a little about it but doesn't deliver any specifics.
Audio Session Programming Guide
This document looks at AVAudioSession which provides supporting functionality:
AVAudioSession gives you control over your app’s audio behavior. You can:
Select the appropriate input and output routes for your app
Determine how your app integrates audio from other apps
Handle interruptions from other apps
Automatically configure audio for the type of app you are creating
Techotopia: Playing Audio on iOS 10 using AVAudioPlayer
Some useful information on using AVAudioPlayer.
Stack Overflow: Playing a Sound with AVAudioPlayer
Basic Swift code for playing a sound. Some answers include a little extra functionality.
Hacking with Swift: How to Play Sounds Using AVAudioPlayer
Again, covers the basics.
Sweet Tutos: How To Play Sounds Files And Manage Duration Progress – AVAudioPlayer Tutorial
Updated to Swift 3. Some useful info.
Xamarin: Playing Sound with AVAudioPlayer
Written in Swift 2, I think.
Apple Video: WWDC 2013 Moving to AV Kit and AV Foundation
While not directly related, I found the first 30 minutes of this video introducing developers to AV Kit and AV Foundation in OS X 10 provides a useful overview of the technology.
I was working on the same problem. The best I could do was to transcode the media content so it could be played using AVPlayer. Here is a draft; maybe it can help.
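The draft mentioned above isn't reproduced here. As a separate, hedged sketch of one way to cover the stated requirements (per-track gain, optional looping, a master gain, and a synchronised start), AVAudioEngine with one AVAudioPlayerNode per MP3 file could look roughly like this; the class name and file-name handling are illustrative assumptions, written against current Swift/AVFoundation rather than Swift 3.

```swift
import AVFoundation

// Hedged sketch (not the answerer's draft): one AVAudioPlayerNode per MP3,
// each with its own gain, mixed by AVAudioEngine's main mixer, which also
// provides the master gain. Class and resource names are illustrative.
final class MultiTrackPlayer {
    private let engine = AVAudioEngine()
    private var players: [AVAudioPlayerNode] = []

    /// Load bundled MP3s; file names listed in `looping` are scheduled to loop.
    func load(fileNames: [String], looping: Set<String> = []) throws {
        for name in fileNames {
            guard let url = Bundle.main.url(forResource: name, withExtension: "mp3") else { continue }
            let file = try AVAudioFile(forReading: url)
            let player = AVAudioPlayerNode()
            engine.attach(player)
            engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

            if looping.contains(name),
               let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                             frameCapacity: AVAudioFrameCount(file.length)) {
                // Read the whole file into a buffer and schedule it to loop.
                try file.read(into: buffer)
                player.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
            } else {
                player.scheduleFile(file, at: nil, completionHandler: nil)
            }
            players.append(player)
        }
        try engine.start()
    }

    /// Per-track gain (0.0–1.0).
    func setGain(_ gain: Float, forTrack index: Int) {
        players[index].volume = gain
    }

    /// Master gain applied to the mixed output.
    func setMasterGain(_ gain: Float) {
        engine.mainMixerNode.outputVolume = gain
    }

    /// Start every track at the same host time so they stay in sync.
    func play() {
        let start = AVAudioTime(hostTime: mach_absolute_time() + AVAudioTime.hostTime(forSeconds: 0.1))
        players.forEach { $0.play(at: start) }
    }
}
```

Hypothetical usage: `try player.load(fileNames: ["drums", "pad"], looping: ["pad"])` followed by `player.play()`. AVAudioPlayerNode's volume provides the per-track gain and mainMixerNode.outputVolume the master gain, which covers the mixing requirements without dropping down to Core Audio.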

Designing a library for Hardware-accelerated unsupported containers on iOS (and Airplay)

I'm trying to put together an open source library that allows iOS devices to play files with unsupported containers, as long as the track formats/codecs are supported, e.g. a Matroska video (MKV) file with an H264 video track and an AAC audio track. I'm making an app that could certainly use that functionality, and I bet there are many more out there that would benefit from it. Any help you can give (by commenting here or, even better, collaborating with me) is much appreciated. This is where I'm at so far:
I did a bit of research trying to find out how players like AVPlayerHD or Infuse can play non-standard containers and still have hardware acceleration. It seems like they transcode small chunks of the whole video file and play those in sequence instead.
It's a good solution. But if you want to throw that video to an Apple TV, things don't work as planned since the video is actually a bunch of smaller chunks being played as a playlist. This site has way more info, but at its core streaming to Apple TV is essentially a progressive download of the MP4/MPV file being played.
I'm thinking a sort of streaming proxy is the way to go. For the playing side, I've been investigating AVSampleBufferDisplayLayer (more info here) as a way of playing the video track; I haven't gotten to audio yet. Things get interesting on the AirPlay side: by having a "container proxy", we can make any file look like it has the right container without the file-size implications of transcoding.
It seems like GStreamer might be a good starting point for the proxy. I need to read up on it; I've never used it before. Does this approach sound like a good one for a library that could be used for App Store apps?
Thanks!
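For the playback piece mentioned in the question, here is a hedged sketch of how AVSampleBufferDisplayLayer could be used. The MKV demuxing that produces the CMSampleBuffers is the hard, container-specific part and is not shown; the view class and method names are illustrative.

```swift
import UIKit
import AVFoundation

// Hedged sketch of the AVSampleBufferDisplayLayer route: enqueue compressed
// H.264 CMSampleBuffers (AVCC-framed NALUs plus a CMVideoFormatDescription)
// directly on the layer, which decodes and displays them in hardware. The
// demuxing that produces those sample buffers is the part the library
// would have to supply; names here are illustrative.
final class RawVideoView: UIView {
    override class var layerClass: AnyClass { AVSampleBufferDisplayLayer.self }

    private var displayLayer: AVSampleBufferDisplayLayer {
        layer as! AVSampleBufferDisplayLayer
    }

    /// Feed one demuxed, time-stamped compressed frame to the decoder.
    func enqueue(_ sampleBuffer: CMSampleBuffer) {
        guard displayLayer.isReadyForMoreMediaData else { return }
        displayLayer.enqueue(sampleBuffer)
    }
}
```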
Finally got some extra time to go over GStreamer, especially this article about how it has already been updated to use the hardware decoding provided by iOS 8. So no need to develop this; GStreamer seems to be the answer.
Thanks!
The 'chunked' solution is no longer necessary in iOS 8. You should simply set up a video decode session and pass in NALUs.
https://developer.apple.com/videos/wwdc/2014/#513
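A hedged sketch of that decode-session route, assuming the H.264 SPS/PPS parameter sets and AVCC-framed NALUs have already been pulled out of the container (that extraction is container-specific and not shown; the function name is illustrative):

```swift
import VideoToolbox

// Hedged sketch of the decode-session route from the answer above: build a
// CMVideoFormatDescription from the stream's SPS/PPS, create a
// VTDecompressionSession, and feed it AVCC-framed NALUs wrapped in
// CMSampleBuffers (the NALU extraction itself is not shown).
func makeDecompressionSession(sps: [UInt8], pps: [UInt8]) -> VTDecompressionSession? {
    var formatDescription: CMVideoFormatDescription?

    let formatStatus = sps.withUnsafeBufferPointer { spsPtr in
        pps.withUnsafeBufferPointer { ppsPtr -> OSStatus in
            let pointers = [spsPtr.baseAddress!, ppsPtr.baseAddress!]
            let sizes = [sps.count, pps.count]
            return CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: 2,
                parameterSetPointers: pointers,
                parameterSetSizes: sizes,
                nalUnitHeaderLength: 4,          // AVCC length-prefixed NALUs
                formatDescriptionOut: &formatDescription)
        }
    }
    guard formatStatus == noErr, let format = formatDescription else { return nil }

    var callback = VTDecompressionOutputCallbackRecord(
        decompressionOutputCallback: { _, _, _, _, _, _, _ in
            // Decoded CVPixelBuffers arrive here; hand them to the renderer
            // (e.g. an AVSampleBufferDisplayLayer or a Metal/OpenGL view).
        },
        decompressionOutputRefCon: nil)

    var session: VTDecompressionSession?
    let createStatus = VTDecompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        formatDescription: format,
        decoderSpecification: nil,
        imageBufferAttributes: nil,
        outputCallback: &callback,
        decompressionSessionOut: &session)
    return createStatus == noErr ? session : nil
}
```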

AVFoundation audio processing using AVPlayer's MTAudioProcessingTap with remote URLs

There is precious little documentation on AVAudioMix and MTAudioProcessingTap, which allow processing to be applied to the audio tracks (PCM access) of media assets in AVFoundation (on iOS). This article and a brief mention in a WWDC 2012 session is all I have found.
I have got the setup described here working for local media files but it doesn't seem to work with remote files (namely HLS streaming URLs). The only indication that this is expected is the note at the end of this Technical Q&A:
AVAudioMix only supports file-based assets.
Does anyone know more about this? Is there really no way of accessing the audio PCM data when the asset is not file-based? Can anyone find actual Apple documentation relating to MTAudioProcessingTap?
I've noticed quite a few people asking about this around the internet, and the general consensus seemed to be that it wasn't possible.
Turns out it is - I was looking into this for a recent personal project and determined that it is indeed possible to make MTAudioProcessingTap work with remote streams. The trick is to observe (via KVO) the status of the AVPlayerItem; when it's ready to play, you can safely retrieve the underlying AVAssetTrack and set an audio mix on it.
I did a basic writeup with some (mostly working) code here: http://venodesigns.net/2014/01/08/recording-live-audio-streams-on-ios/
If you already managed to handle this, more power to you, but I figured I'd answer this question since it comes up pretty quickly in Google for this stuff.
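A hedged sketch of that approach (not the linked write-up's code): wait for readyToPlay via KVO, then attach an AVMutableAudioMix whose input parameters carry an MTAudioProcessingTap for the item's audio track. The class name is illustrative; callback bodies are reduced to the minimum and error handling is omitted.

```swift
import AVFoundation
import MediaToolbox

// Hedged sketch: KVO the AVPlayerItem's status and only attach the tap once
// the item is readyToPlay, since a remote (e.g. HLS) item's tracks aren't
// available before that point.
final class TappedRemotePlayer {
    private let item: AVPlayerItem
    private let player: AVPlayer
    private var statusObservation: NSKeyValueObservation?

    init(url: URL) {
        item = AVPlayerItem(url: url)
        player = AVPlayer(playerItem: item)

        statusObservation = item.observe(\.status) { [weak self] item, _ in
            guard item.status == .readyToPlay else { return }
            self?.attachTap()
        }
    }

    func play() { player.play() }

    private func attachTap() {
        guard let audioTrack = item.tracks.compactMap({ $0.assetTrack })
            .first(where: { $0.mediaType == .audio }) else { return }

        var callbacks = MTAudioProcessingTapCallbacks(
            version: kMTAudioProcessingTapCallbacksVersion_0,
            clientInfo: nil,
            init: nil,
            finalize: nil,
            prepare: nil,
            unprepare: nil,
            process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
                // Pull the source PCM into bufferListInOut so playback continues;
                // inspect or modify the samples here.
                _ = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                       flagsOut, nil, numberFramesOut)
            })

        var tap: Unmanaged<MTAudioProcessingTap>?
        MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                   kMTAudioProcessingTapCreationFlag_PostEffects, &tap)

        let inputParams = AVMutableAudioMixInputParameters(track: audioTrack)
        inputParams.audioTapProcessor = tap?.takeRetainedValue()

        let mix = AVMutableAudioMix()
        mix.inputParameters = [inputParams]
        item.audioMix = mix
    }
}
```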
