AVPlayer HLS live stream level meter (Display FFT Data) - ios

I'm using AVPlayer for a radio app using HTTP Live Streaming. Now I want to implement a level meter for that audio stream. The very best would be a level meter showing the different frequencies, but a simple left/right solution would be a great starting point.
I found several examples using AVAudioPlayer, but I cannot find a solution for getting the required information out of AVPlayer.
Can someone think of a solution for my problem?
EDIT I want to create something like this (but nicer)
EDIT II
One suggestion was to use MTAudioProcessingTap to get the raw audio data. The examples I could find use the [[[_player currentItem] asset] tracks] array, which in my case is empty. Another suggestion was to use [[_player currentItem] audioMix], which is null for me.
EDIT III
Years later, there still doesn't seem to be a solution. I did make some progress, so I'm sharing it.
During setup, I'm adding a key-value observer to the playerItem:
[[[self player] currentItem] addObserver:self forKeyPath:@"tracks" options:kNilOptions context:NULL];
//////////////////////////////////////////////////////
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if ([keyPath isEqualToString:@"tracks"] && [[object tracks] count] > 0) {
        for (AVPlayerItemTrack *itemTrack in [object tracks]) {
            AVAssetTrack *track = [itemTrack assetTrack];

            if ([[track mediaType] isEqualToString:AVMediaTypeAudio]) {
                [self addAudioProcessingTap:track];
                break;
            }
        }
    }
}
- (void)addAudioProcessingTap:(AVAssetTrack *)track {
    MTAudioProcessingTapCallbacks callbacks;
    callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
    callbacks.clientInfo = (__bridge void *)(self);
    callbacks.init = init;
    callbacks.prepare = prepare;
    callbacks.process = process;
    callbacks.unprepare = unprepare;
    callbacks.finalize = finalise;

    // more tap setup...
    MTAudioProcessingTapRef tap;
    MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                               kMTAudioProcessingTapCreationFlag_PostEffects, &tap);

    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    AVMutableAudioMixInputParameters *inputParams =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:track];
    [inputParams setAudioTapProcessor:tap];
    [audioMix setInputParameters:@[inputParams]];
    [[[self player] currentItem] setAudioMix:audioMix];
}
So far so good. This all works: I can find the right track and set up the inputParams, the audioMix, etc.
But unfortunately the only callback that gets called is the init callback; none of the others ever fire.
I tried different (kinds of) stream sources, one of them an official Apple HLS stream: http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8

Sadly, using an HLS stream with AVFoundation doesn't give you any control over the audio tracks. I ran into the same problem trying to mute an HLS stream, which turned out to be impossible.
The only way you could read audio data would be to tap into the AVAudioSession.
EDIT
You can access the AVAudioSession like this:
[AVAudioSession sharedInstance]
Here's the documentation for AVAudioSession
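For what it's worth, a very rough approximation is possible by observing the session's outputVolume. Keep in mind this is only the hardware volume setting, not the per-sample signal level, so treat the following as a minimal sketch rather than a real level meter:
#import <AVFoundation/AVFoundation.h>

- (void)startObservingOutputVolume {
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setActive:YES error:nil];
    [session addObserver:self
              forKeyPath:@"outputVolume"
                 options:NSKeyValueObservingOptionNew
                 context:NULL];
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    if ([keyPath isEqualToString:@"outputVolume"]) {
        // Reflects the user's volume setting (0.0 - 1.0), not the actual audio signal.
        float volume = [change[NSKeyValueChangeNewKey] floatValue];
        NSLog(@"Output volume: %f", volume);
    }
}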

Measuring audio using AVPlayer looks to be an issue that is still ongoing. That being said, I believe that the solution can be reached by combining AVPlayer with AVAudioRecorder.
While the two classes have seemingly contradictory purposes, there is a workaround that allows AVAudioRecorder to access the AVPlayer's audio output.
Player / Recorder
As described in this Stack Overflow answer, recording the audio of an AVPlayer is possible if you access the audio route change using kAudioSessionProperty_AudioRouteChange.
Notice that the audio recording must be started after accessing the audio route change. Use the linked answer as a reference - it includes more details and the necessary code.
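For reference, kAudioSessionProperty_AudioRouteChange belongs to the older C audio session API; the AVAudioSession counterpart is the route-change notification. A minimal sketch (the routeChanged: selector name is just an example):
- (void)observeRouteChanges {
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(routeChanged:)
                                                 name:AVAudioSessionRouteChangeNotification
                                               object:nil];
}

- (void)routeChanged:(NSNotification *)note {
    // The reason key tells you why the route changed (new device, override, etc.).
    NSUInteger reason = [note.userInfo[AVAudioSessionRouteChangeReasonKey] unsignedIntegerValue];
    NSLog(@"Audio route changed, reason: %lu", (unsigned long)reason);
}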
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once you have access to the AVPlayer's audio route and are recording, the measuring is relatively straightforward.
Audio Levels
In my answer to a stack question regarding measuring microphone input I describe the steps necessary to access the audio level measurements. Using AVAudioRecorder to monitor volume changes is more complex than one would think, so I included a GitHub project that acts as a template for monitoring audio changes while recording.
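For completeness, here is a minimal sketch of the metering calls on AVAudioRecorder, assuming you already have a recorder that is recording (its settings and output URL are up to you):
- (void)startMetering:(AVAudioRecorder *)recorder {
    recorder.meteringEnabled = YES;
    [NSTimer scheduledTimerWithTimeInterval:0.1
                                     target:self
                                   selector:@selector(levelTimerFired:)
                                   userInfo:recorder
                                    repeats:YES];
}

- (void)levelTimerFired:(NSTimer *)timer {
    AVAudioRecorder *recorder = [timer userInfo];
    [recorder updateMeters];
    // Values are in dBFS, ranging from -160 (silence) to 0 (full scale).
    float average = [recorder averagePowerForChannel:0];
    float peak    = [recorder peakPowerForChannel:0];
    NSLog(@"avg %.1f dB, peak %.1f dB", average, peak);
}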
~~~~~~~~~~~~~~~~~~~~~~~~~~ Please Note ~~~~~~~~~~~~~~~~~~~~~~~~~~
This combination during an HLS live stream is not something that I have tested. This answer is strictly theoretical, so it may take a sound understanding of both classes to work out completely.

Related

Can't get AVAudioPlayer to Play [duplicate]

I'm hearing some conflicting reports about this. What I'm trying to do is stream an mp3 file from a URL. I've done hours of research, but I cannot find any good guides on how to do this, or even what kind of audio player I should use.
Some friends tell me that AVPlayer can stream mp3, but the Apple documentation says it can't. I've pored over Matt Gallagher's audio streamer (http://www.cocoawithlove.com/2008/09/streaming-and-playing-live-mp3-stream.html), but that code was made a good while ago, and I'm new enough to this that it's hard to work through the autoreleases and retains and all that.
The audio I'm trying to stream is a fairly large mp3 file from a libsyn server, with a URL of the format:
http://traffic.libsyn.com/podcastname/episode.mp3
All I need to do is grab it and start playing, with the ability to pause and scrub. So first things first, CAN AVPlayer stream mp3's? And if so, does anybody have any guides or code they can point me to? And if not, is there any kind of audio player class that can stream audio?
I've tried creating an AVPlayerItem, initialized with the URL, then adding it to an AVPlayer, but I'm getting a ton of Error Loading... and Symbol Not Found... errors. I'd appreciate any information on this, thank you!
Try this:
- (void)playselectedsong {
    AVPlayer *player = [[AVPlayer alloc] initWithURL:[NSURL URLWithString:urlString]];
    self.songPlayer = player;

    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(playerItemDidReachEnd:)
                                                 name:AVPlayerItemDidPlayToEndTimeNotification
                                               object:[self.songPlayer currentItem]];

    [self.songPlayer addObserver:self forKeyPath:@"status" options:0 context:nil];

    [NSTimer scheduledTimerWithTimeInterval:0.1
                                     target:self
                                   selector:@selector(updateProgress:)
                                   userInfo:nil
                                    repeats:YES];
}
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if (object == songPlayer && [keyPath isEqualToString:@"status"]) {
        if (songPlayer.status == AVPlayerStatusFailed) {
            NSLog(@"AVPlayer Failed");
        } else if (songPlayer.status == AVPlayerStatusReadyToPlay) {
            NSLog(@"AVPlayerStatusReadyToPlay");
            [self.songPlayer play];
        } else if (songPlayer.status == AVPlayerStatusUnknown) {
            NSLog(@"AVPlayer Unknown");
        }
    }
}
- (void)playerItemDidReachEnd:(NSNotification *)notification {
    // code here to play next sound file
}
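The timer above calls an updateProgress: selector that isn't shown; a rough sketch might look like this (the progressView outlet is just an assumption):
- (void)updateProgress:(NSTimer *)timer {
    AVPlayerItem *item = self.songPlayer.currentItem;
    if (item == nil || CMTIME_IS_INDEFINITE(item.duration)) {
        return; // duration isn't known yet (still loading, or a live stream)
    }
    double duration = CMTimeGetSeconds(item.duration);
    if (duration <= 0) {
        return;
    }
    double fraction = CMTimeGetSeconds(item.currentTime) / duration;
    self.progressView.progress = (float)fraction;
}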
You can also try my open source Audjustable library which supports HTTP streaming. It's based on Matt's AudioStreamer but has been tidied, optimised and updated to support multiple data sources (non HTTP) and gapless playback.
https://github.com/tumtumtum/audjustable.
In addition to Sumit Mundra's answer, which helped me a lot, I found that this technique doesn't actually stream MP3 files from a remote server. When I implemented this, the file downloaded synchronously, blocking my UI, before playing. The way to properly stream the MP3 that I found worked very well was to point to an M3U file. This is just a text file with an .m3u extension which contains a link to the original MP3. Point Sumit's code at that file instead, and you have a stream that starts playing immediately.
This is the place I found that information: http://www.soundabout.net/streammp3.htm
Matt Gallagher's AudioStreamer was updated 2 months ago https://github.com/mattgallagher/AudioStreamer/commits/master
But for what you're looking for, check out the StitchedStreamPlayer sample code http://developer.apple.com/library/ios/#samplecode/StitchedStreamPlayer/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010092
It uses an AVPlayer object, and if you look at the method - (IBAction)loadMovieButtonPressed:(id)sender you should be able to follow how it sets up the AVPlayer object.
Aaron's post about using an m3u file instead of an mp3 worked for me. I also found that AVPlayer was picky about the m3u syntax. For example, when I tried the following, I was unable to get a valid duration (it was always indefinite), and relative paths didn't work:
#EXTM3U
#EXTINF:71
https://test-domain.com/90c9a240-51b3-11e9-bb69-c1300ce2348f.mp3
However, after updating the m3u file to the following, both issues were resolved:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:70
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:70.000,
8577d650-51b3-11e9-8e69-4f2b085e94aa.mp3
#EXT-X-ENDLIST

Play audio through upper (phone call) speaker

I'm trying to get audio in my app to play through the upper speaker on the iPhone, the one you press to your ear during a phone call. I know it's possible, because I've played a game from the App Store ("The Heist" by "tap tap tap") that simulates phone calls and does exactly that.
I've done a lot of research online, but I'm having a surprisingly hard time finding ANYONE who has even discussed the possibility. The overwhelming majority of posts seem to be about the handsfree speaker vs plugged-in earphones, (like this and this and this), rather than the upper "phone call" speaker vs the handsfree speaker. (Part of that problem might be not having a good name for it: "phone speaker" often means the handsfree speaker at the bottom of the device, etc, so it's hard to do a really well-targeted search). I've looked into Apple's Audio Session Category Route Overrides, but those again seem to (correct me if I'm wrong) deal only with the handsfree speaker at the bottom, not the speaker at the top of the phone.
I have found ONE post that seems to be about this: link. It even provides a bunch of code, so I thought I was home free, but now I can't seem to get the code to work. For simplicity I just copied the DisableSpeakerPhone method (which if I understand it correctly should be the one to re-route audio to the upper speaker) into my viewDidLoad to see if it would work, but the first "assert" line fails, and the audio continues to play out the bottom. (I also imported the AudioToolbox Framework, as suggested in the comment, so that isn't the problem.)
Here is the main block of code I'm working with (this is what I copied into my viewDidLoad to test), although there are a few more methods in the article I linked to:
void DisableSpeakerPhone () {
    UInt32 dataSize = sizeof(CFStringRef);
    CFStringRef currentRoute = NULL;
    OSStatus result = noErr;

    AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &dataSize, &currentRoute);

    // Set the category to use the speakers and microphone.
    UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
    result = AudioSessionSetProperty(
                 kAudioSessionProperty_AudioCategory,
                 sizeof(sessionCategory),
                 &sessionCategory
             );
    assert(result == kAudioSessionNoError);

    Float64 sampleRate = 44100.0;
    dataSize = sizeof(sampleRate);
    result = AudioSessionSetProperty(
                 kAudioSessionProperty_PreferredHardwareSampleRate,
                 dataSize,
                 &sampleRate
             );
    assert(result == kAudioSessionNoError);

    // Default to speakerphone if a headset isn't plugged in.
    // Overriding the output audio route
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
    dataSize = sizeof(audioRouteOverride);
    result = AudioSessionSetProperty(
                 kAudioSessionProperty_OverrideAudioRoute,
                 dataSize,
                 &audioRouteOverride);
    assert(result == kAudioSessionNoError);

    AudioSessionSetActive(YES);
}
So my question is this: can anyone either A) help me figure out why that code doesn't work, or B) offer a better suggestion for being able to press a button and route the audio up to the upper speaker?
PS I am getting more and more familiar with iOS programming, but this is my first foray into the world of AudioSessions and such, so details and code samples are much appreciated! Thank you for your help!
UPDATE:
From the suggestion of "He Was" (below) I've removed the code quoted above and replaced it with:
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord error:nil];
[[AVAudioSession sharedInstance] setActive: YES error:nil];
at the beginning of viewDidLoad. It still isn't working, though (by which I mean the audio is still coming out of the speaker at the bottom of the phone instead of the receiver at the top). Apparently the default behavior should be for AVAudioSessionCategoryPlayAndRecord to send audio out of the receiver on its own, so something is still wrong.
More specifically what I'm doing with this code is playing audio through the iPod Music Player (initialized right after the AVAudioSession lines above in viewDidLoad, for what it's worth):
_musicPlayer = [MPMusicPlayerController iPodMusicPlayer];
and the media for that iPod Music Player is chosen through an MPMediaPickerController:
- (void)mediaPicker:(MPMediaPickerController *)mediaPicker didPickMediaItems:(MPMediaItemCollection *)mediaItemCollection {
    if (mediaItemCollection) {
        [_musicPlayer setQueueWithItemCollection:mediaItemCollection];
        [_musicPlayer play];
    }
    [self dismissViewControllerAnimated:YES completion:nil];
}
This all seems fairly straightforward to me, I've got no errors or warnings, and I know that the Media Picker and Music Player are working correctly because the correct songs start playing, it's just out of the wrong speaker. Could there be a "play media using this AudioSession" method or something? Or is there a way to check what audio session category is currently active, to confirm that nothing could have switched it back or something? Is there a way to emphatically tell the code to USE the receiver, rather than relying on the default to do so? I feel like I'm on the one-yard line, I just need to cross that final bit...
EDIT: I just thought of a theory, wherein it's something about the iPod Music Player that doesn't want to play out of the receiver. My reasoning: it is possible to set a song to start playing through the official iPod app and then seamlessly adjust it (pause, skip, etc.) through the app I'm developing. The continuous playback from one app to the next made me think that maybe the iPod Music Player has its own audio route settings, or maybe it doesn't stop to check the settings in the new app? Does anyone who knows what they're talking about think it could be something like that?
I was struggling with this for a while too; maybe this will help someone later. You can also use the newer methods of overriding ports. Many of the methods in your sample code are actually deprecated.
So if you have your audio session's sharedInstance by getting
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive: YES error:nil];
The session category has to be AVAudioSessionCategoryPlayAndRecord
You can get the current output by checking this value.
AVAudioSessionPortDescription *routePort = session.currentRoute.outputs.firstObject;
NSString *portType = routePort.portType;
And now depending on the port you want to send it to, simply toggle the output using
if ([portType isEqualToString:@"Receiver"]) {
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error];
} else {
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:&error];
}
This should be a quick way to toggle the output between the speakerphone and the receiver.
You have to initialise your audio session first.
Using the C API
AudioSessionInitialize (NULL, NULL, NULL, NULL);
In iOS6 you can use AVAudioSession methods instead (you will need to import the AVFoundation framework to use AVAudioSession):
Initialization using AVAudioSession
self.audioSession = [AVAudioSession sharedInstance];
Setting the audioSession category using AVAudioSession
[self.audioSession setCategory:AVAudioSessionCategoryPlayAndRecord
error:nil];
For further research, if you want better search terms, here are the full names of the constants for the speakers:
const CFStringRef kAudioSessionOutputRoute_BuiltInReceiver;
const CFStringRef kAudioSessionOutputRoute_BuiltInSpeaker;
See Apple's docs here.
But the real mystery is why you are having any trouble routing to the receiver. It's the default behaviour for the playAndRecord category. Apple's documentation of kAudioSessionOverrideAudioRoute_None:
"Specifies, for the kAudioSessionCategory_PlayAndRecord category, that output audio should go to the receiver. This is the default output audio route for this category."
update
In your updated question you reveal that you are using the MPMusicPlayerController class. This class invokes the global music player (the same player used in the Music app). This music player is separate from your app, and so doesn't share the same audio session as your app's audioSession. Any properties you set on your app's audioSession will be ignored by the MPMusicPlayerController.
If you want control over your app's audio behaviour, you need to use an audio framework internal to your app. This would be AVAudioRecorder / AVAudioPlayer or Core Audio (Audio Queues, Audio Units or OpenAL). Whichever method you use, the audio session can be controlled either via AVAudioSession properties or via the Core Audio API. Core Audio gives you more fine-grained control, but with each new release of iOS more of it is ported over to AVFoundation, so start with that.
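As a minimal sketch of that, assuming a bundled file named sound.m4a and playback via AVAudioPlayer inside your app's own audio session:
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive:YES error:&error];

// With PlayAndRecord and no route override, output should default to the receiver.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"sound" withExtension:@"m4a"];
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
// Keep a strong reference (e.g. a property) so the player isn't deallocated mid-playback.
[player prepareToPlay];
[player play];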
Also remember that the audio session provides a way for you to describe the intended behaviour of your app's audio in relation to the total iOS environment, but it will not hand you total control. Apple takes care to ensure that the user's expectations of their device's audio behaviour remain consistent between apps, and when one app needs to interrupt another's audio stream.
update 2
In your edit you allude to the possibility of audio sessions checking other apps' audio session settings. That does not happen [1]. The idea is that each app sets its preferences for its own audio behaviour using its self-contained audio session. The operating system arbitrates between conflicting audio requirements when more than one app competes for an unshareable resource, such as the internal microphone or one of the speakers, and will usually decide in favour of that behaviour which is most likely to meet the user's expectations of the device as a whole.
The MPMusicPlayerController class is slightly unusual in that it gives one app some degree of control over another. In this case, your app is not playing the audio; it is sending a request to the Music Player to play audio on your behalf. Your control is limited by the extent of the MPMusicPlayerController API. For more control, your app will have to provide its own implementation of audio playback.
In your comment you wonder:
Could there be a way to pull an MPMediaItem from the MPMusicPlayerController and then play them through the app-specific audio session, or anything like that?
That's a (big) subject for a new question. Here is a good starting read (from Chris Adamson's blog): From iPod Library to PCM Samples in Far Fewer Steps Than Were Previously Necessary - it's the sequel to From iphone media library to pcm samples in dozens of confounding and potentially lossy steps - and it should give you a sense of the complexity you will face. This may have got easier since iOS 6, but I wouldn't be so sure!
[1] There is an otherAudioPlaying read-only BOOL property in iOS 6, but that's about it.
Swift 3.0 Code
func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    let routePort: AVAudioSessionPortDescription? = audioSession.currentRoute.outputs.first
    let portType: String? = routePort?.portType
    if portType == "Receiver" {
        try? audioSession.overrideOutputAudioPort(.speaker)
    } else {
        try? audioSession.overrideOutputAudioPort(.none)
    }
}
Swift 5.0
func activateProximitySensor(isOn: Bool) {
    let device = UIDevice.current
    device.isProximityMonitoringEnabled = isOn
    if isOn {
        NotificationCenter.default.addObserver(self, selector: #selector(proximityStateDidChange), name: UIDevice.proximityStateDidChangeNotification, object: device)
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord)
            try session.setActive(true)
            try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
        } catch {
            print("\(#file) - \(#function) error: \(error.localizedDescription)")
        }
    } else {
        NotificationCenter.default.removeObserver(self, name: UIDevice.proximityStateDidChangeNotification, object: device)
    }
}
@objc func proximityStateDidChange(notification: NSNotification) {
    if let device = notification.object as? UIDevice {
        print(device)
        let session = AVAudioSession.sharedInstance()
        do {
            let routePort: AVAudioSessionPortDescription? = session.currentRoute.outputs.first
            let portType = routePort?.portType
            if let type = portType, type.rawValue == "Receiver" {
                try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
            } else {
                try session.overrideOutputAudioPort(AVAudioSession.PortOverride.none)
            }
        } catch {
            print("\(#file) - \(#function) error: \(error.localizedDescription)")
        }
    }
}

AVAudioPlayer not working with .pls shoutcast file?

Good Day,
I am working on a radio app that gets a Shoutcast streaming .pls file and plays it with the help of the AVFoundation framework.
This job is easily done with AVPlayer, but the problem is that I cannot code or find any good solution for getting it to work with a volume slider; the AVPlayer class does not have a volume property.
So now I am trying to get it working with AVAudioPlayer which has volume property, and here is my code:
NSString *resourcePatch = @"http://vibesradio.org:8002/listen.pls";
NSData *_objectData = [NSData dataWithContentsOfURL:[NSURL URLWithString:resourcePatch]];
NSError *error;

vPlayer = [[AVAudioPlayer alloc] initWithData:_objectData error:&error];
vPlayer.numberOfLoops = 0;
vPlayer.volume = 1.0f;

if (vPlayer == nil) {
    NSLog(@"%@", [error description]);
} else {
    [vPlayer play];
}
This code works with uploaded .mp3 files on my server, but it does not work with the .pls files generated by Shoutcast.
Is there any way to get AVAudioPlayer to work with .pls files, or to implement a volume slider for AVPlayer?
Using AVAudioPlayer to stream from the network is not a good idea. See what Apple's documentation says about AVAudioPlayer:
An instance of the AVAudioPlayer class, called an audio player, provides playback of audio data from a file or memory.
Apple recommends that you use this class for audio playback unless you are playing audio captured from a network stream or require very low I/O latency.
To change an AVPlayer's volume, check this question: Adjusting the volume of a playing AVPlayer
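To summarize the options from that question as a sketch: on iOS 7 and later AVPlayer gained a volume property, and on older releases you can scale a track of a file-based asset with an audio mix. Note that the audio-mix route relies on the asset's tracks, which are empty for HTTP live streams:
- (void)setVolume:(float)volume forPlayer:(AVPlayer *)player {
    // iOS 7+: AVPlayer exposes a volume property directly.
    player.volume = volume;
}

- (void)applyVolume:(float)volume withAudioMixForPlayer:(AVPlayer *)player {
    // Pre-iOS 7 alternative for file-based assets: scale one audio track via an audio mix.
    AVAsset *asset = player.currentItem.asset;
    NSArray *audioTracks = [asset tracksWithMediaType:AVMediaTypeAudio];
    if (audioTracks.count == 0) {
        return; // e.g. an HLS stream, where the tracks array is empty
    }
    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTracks[0]];
    [params setVolume:volume atTime:kCMTimeZero];

    AVMutableAudioMix *mix = [AVMutableAudioMix audioMix];
    mix.inputParameters = @[params];
    player.currentItem.audioMix = mix;
}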

AVFoundation: Video to OpenGL texture working - How to play and sync audio?

I've managed to load a video-track of a movie frame by frame into an OpenGL texture with AVFoundation. I followed the steps described in the answer here: iOS4: how do I use video file as an OpenGL texture?
and took some code from the GLVideoFrame sample from WWDC2010 which can be downloaded here.
How do I play the audio track of the movie in sync with the video? I think it would not be a good idea to play it in a separate player, but rather to use the audio track of the same AVAsset.
AVAssetTrack* audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
I retrieve a video frame and its timestamp in the CADisplayLink callback via
CMSampleBufferRef sampleBuffer = [self.readerOutput copyNextSampleBuffer];
CMTime timestamp = CMSampleBufferGetPresentationTimeStamp( sampleBuffer );
where readerOutput is of type AVAssetReaderTrackOutput*
How to get the corresponding audio-samples?
And how to play them?
Edit:
I've looked around a bit and I think the best approach would be to use AudioQueue from the AudioToolbox framework, using the approach described here: AVAssetReader and Audio Queue streaming problem
There is also an audio player in AVFoundation: AVAudioPlayer. But I don't know exactly how I should pass data to its initWithData: initializer, which expects NSData. Furthermore, I don't think it's the best choice for my case, because a new AVAudioPlayer instance would have to be created for every new chunk of audio samples, as I understand it.
Any other suggestions?
What's the best way to play the raw audio samples which I get from the AVAssetReaderTrackOutput?
You want to do an AV composition. You can merge multiple media sources, synchronized temporally, into one output.
http://developer.apple.com/library/ios/#DOCUMENTATION/AVFoundation/Reference/AVComposition_Class/Reference/Reference.html
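A rough sketch of that approach, assuming asset is the AVAsset you are already reading from (error handling omitted):
AVMutableComposition *composition = [AVMutableComposition composition];
CMTimeRange fullRange = CMTimeRangeMake(kCMTimeZero, asset.duration);

// Copy the video track into the composition.
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableCompositionTrack *compVideo =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];
[compVideo insertTimeRange:fullRange ofTrack:videoTrack atTime:kCMTimeZero error:nil];

// Copy the audio track so both stay on the same timeline.
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
AVMutableCompositionTrack *compAudio =
    [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                             preferredTrackID:kCMPersistentTrackID_Invalid];
[compAudio insertTimeRange:fullRange ofTrack:audioTrack atTime:kCMTimeZero error:nil];

// The composition is itself an AVAsset, so it can be handed to an AVPlayer,
// an AVAssetReader, or an export session.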

iOS AVAudioPlayer multiple instances, multiple sounds at once

I am working on an interactive children's book for the iPad that has a "read to me" option.
For each page (that has an index), there's an audio clip that provides the "read to me" feature. The feature works well, except that when I turn the page, the previous page's audio keeps playing, even after the new audio starts. Here's my example:
- (void)didTurnToPageAtIndex:(NSUInteger)index
{
    if ([delegate respondsToSelector:@selector(leavesView:didTurnToPageAtIndex:)])
        [delegate leavesView:self didTurnToPageAtIndex:index];

    if ([[NSUserDefaults standardUserDefaults] boolForKey:@"kReadToMe"] == YES)
    {
        NSString *filename = [voices objectAtIndex:index];
        NSString *path = [[NSBundle mainBundle] pathForResource:filename ofType:@"m4v"];

        NSLog(@"File: %@, Index: %i", path, index);

        // Create new audio for next page
        AVAudioPlayer *newAudio = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path] error:nil];
        rtmAudio = newAudio; // automatically retain audio and dealloc old file if new file is loaded
        //[newAudio release];

        [rtmAudio play];
    }
}
Say, for instance, I turn to page 3 before the audio for page 2 stops playing: both clips play over each other, which will annoy the sh*t out of kids - I know it does me.
I've tried placing [rtmAudio stop] before I allocate the new file, but that doesn't seem to work. I need a way to kill the previous audio clip before starting the new one.
I would suggest that you have one instance of the audio player in your whole application. Then you can check whether it is playing; if so, stop it and then move on.
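In the question's terms, that would be roughly the following sketch (assuming rtmAudio is a strong property and path is the file path already built above):
if ([self.rtmAudio isPlaying]) {
    [self.rtmAudio stop]; // kill the previous page's clip first
}
self.rtmAudio = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path]
                                                       error:nil];
[self.rtmAudio prepareToPlay];
[self.rtmAudio play];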
You are creating a new player in this method before stopping the old one, I believe.
As AlexChaffee mentioned, the Apple docs say AVAudioPlayer can "play multiple sounds simultaneously, one sound per audio player, with precise synchronization". It seems preferable to use multiple instances across the app, together with NSNotificationCenter notifications.
