How to scrub audio with AVPlayer?

I'm using seekToTime for an AVPlayer. It works fine, however, I'd like to be able to hear audio as I scrub through the video, much like how Final Cut or other video editors work. Just looking for ideas or if I've missed something obvious.

The way to do this is to scrub a second AVPlayer asynchronously alongside the video player. I did it this way (in Swift 4):
// Create the simultaneous player and feed it the same URL as the video player:
let videoPlayer2 = AVPlayer(url: sameUrlAsVideoPlayer!)
videoPlayer2.volume = 1.0 // AVPlayer's volume is clamped to the 0.0...1.0 range

// Keep the simultaneous player at exactly the same point as the video player:
videoPlayer2.seek(to: sameSeekTimeAsVideoPlayer)

// A flag (letsScrub) that activates the audio scrubber while the video is being scrubbed:
var letsScrub = true

// While the video is being scrubbed, trigger the audio scrub:
if letsScrub { audioScrub() }

// This function scrubs the audio asynchronously:
func audioScrub() {
    DispatchQueue.main.async {
        // lower the flag so this pass of scrubbing is not interrupted
        self.letsScrub = false
        // play the simultaneous player...
        self.videoPlayer2.play()
        // ...for about 0.25 s before pausing it, which produces the scrubbing effect
        // (make the interval longer or shorter as you please):
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.25) {
            self.videoPlayer2.pause()
            // move the simultaneous player back to the same point as the video player:
            self.videoPlayer2.seek(to: self.sameSeekTimeAsVideoPlayer)
            // raising the flag again lets the process repeat for as long as the video is scrubbed:
            self.letsScrub = true
        }
    }
}
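As a usage sketch (an assumption on top of the answer, with videoPlayer, videoDuration, and a UISlider supplied by the surrounding code), the scrubber might drive audioScrub() like this:

@objc func sliderMoved(_ slider: UISlider) {
    // map the slider position to a time in the video (videoDuration in seconds, assumed known)
    let seconds = Double(slider.value) * videoDuration
    let time = CMTime(seconds: seconds, preferredTimescale: 600)
    videoPlayer.seek(to: time, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero)
    sameSeekTimeAsVideoPlayer = time
    if letsScrub { audioScrub() }
}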

Related

How to fade out one audio file while playing the next in AudioKit

I'm creating a traditional music player with AudioKit. Initially it plays one file, then you can tap the next button to skip to the next audio file. The songs aren't all known up front; the playlist can change while a song is playing, so the next audio file isn't known until we go to play it.
My current implementation for that works well: I create a player for the first audio file, set it as AudioKit.output, call AudioKit.start(), then player.play(). When next is tapped, I call AudioKit.stop(), create the new player, set it as the output, start AudioKit, and play the new player. (If you don't stop AudioKit before modifying the output, you'll hit an exception, as I saw previously.)
Now you should also be able to tap a fade button, which crossfades between the current song and the next: fade out the current song over 3 seconds and immediately start the next song. This is proving difficult, and I'm not sure how to implement it properly.
The AudioKit playgrounds include a Mixing Nodes example where multiple AKPlayers are created, combined with an AKMixer, and the mixer is assigned to the output. But it appears you cannot change the players in the mixer after the fact. So my current solution is to stop AudioKit when the fade button is tapped, recreate the AKMixer with an added player for the next song, start AudioKit, then resume playback of the first player and play the new one. This experience isn't smooth; you can clearly hear the audio stop and resume.
How can I properly fade out one song while playing the next song?
Please see my sample project on GitHub. I've included its code below:
final class Maestro: NSObject {

    static let shared = Maestro()

    private var trackPlayers = [AKPlayer]() {
        didSet {
            do {
                try AudioKit.stop()
            } catch {
                print("Maestro AudioKit.stop error: \(error)")
            }
            mixer = AKMixer(trackPlayers)
            AudioKit.output = mixer
            do {
                try AudioKit.start()
            } catch {
                print("Maestro AudioKit.start error: \(error)")
            }
            trackPlayers.forEach {
                if $0.isPlaying {
                    let pos = $0.currentTime
                    $0.stop()
                    $0.play(from: pos)
                }
            }
        }
    }

    private var mixer: AKMixer?

    private let trackURLs = [
        Bundle.main.url(forResource: "SampleAudio_0.4mb", withExtension: "mp3")!,
        Bundle.main.url(forResource: "SampleAudio_0.7mb", withExtension: "mp3")!
    ]

    func playFirstTrack() {
        playNewPlayer(fileURL: trackURLs[0])
    }

    func next() {
        trackPlayers.forEach { $0.stop() }
        trackPlayers.removeAll()
        playNewPlayer(fileURL: trackURLs[1])
    }

    func fadeAndStartNext() {
        playNewPlayer(fileURL: trackURLs[1])
        // here we would adjust the volume of the players and remove the first player after 3 seconds
    }

    private func playNewPlayer(fileURL: URL) {
        let newPlayer = AKPlayer(url: fileURL)!
        trackPlayers.append(newPlayer) // triggers didSet to update AudioKit.output
        newPlayer.play()
    }
}
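One hedged way to attempt the fade, assuming AKPlayer's volume property can be ramped (a sketch, not a verified fix): start the next track, then step the outgoing player's volume down on a timer and stop it once silent, without mutating trackPlayers so its didSet doesn't restart the engine mid-fade:

func fadeAndStartNext() {
    let outgoing = trackPlayers.first
    playNewPlayer(fileURL: trackURLs[1]) // triggers didSet once, before the fade begins
    let steps = 30
    var step = 0
    Timer.scheduledTimer(withTimeInterval: 3.0 / Double(steps), repeats: true) { timer in
        step += 1
        outgoing?.volume = max(0, 1.0 - Double(step) / Double(steps))
        if step >= steps {
            timer.invalidate()
            outgoing?.stop()
            // trackPlayers is deliberately left untouched here: mutating it
            // retriggers didSet, which stops and restarts AudioKit audibly
        }
    }
}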

AVPlayer and AirPlay not working quite well

I am coding a video player with Xamarin, AVPlayer, and AVPlayerViewController. My code supports AirPlay and configures AVPlayer according to the docs, but it is not working quite right. The following scenario seems to fail:
1. Play a video and send it to the Apple TV.
2. Exit playback while it continues on the Apple TV.
3. Try to resume playback from the current position.
This leads to playback starting directly on the Apple TV, but the seek I perform to resume from the last playback position partly fails: the AVPlayerViewController UI shows playback starting from the beginning, while the Apple TV plays from the resume point. The AVPlayerViewController play/pause button is also wrong most of the time and does not reflect the actual play state.
To resume the video, I wait (via KVO) for the AVPlayer status to become ReadyToPlay, seek, and then play the video.
Note that all of this works just fine when the player is not in AirPlay mode: the video resumes properly and the UI looks correct.
Code looks like this:
avItem = new AVPlayerItem(avAsset);
avPlayer = new AVPlayer(avItem)
{
    AllowsExternalPlayback = true,
    UsesExternalPlaybackWhileExternalScreenIsActive = true
};
avPlayerViewController = new AVPlayerViewController()
{
    View =
    {
        ContentMode = UIViewContentMode.ScaleAspectFill,
        AutoresizingMask = UIViewAutoresizing.All,
    },
    UpdatesNowPlayingInfoCenter = false,
    Player = avPlayer
};
initialPlaybackTime = TimeSpan.FromSeconds(bookmarkTime);

AddChildViewController(avPlayerViewController);
View.AddSubview(avPlayerViewController.View);
avPlayerViewController.DidMoveToParentViewController(this);

avItem.AddObserver(this, new NSString("status"), NSKeyValueObservingOptions.Initial, IntPtr.Zero);

public override async void ObserveValue(NSString keyPath, NSObject ofObject, NSDictionary change, IntPtr context)
{
    if (Equals(ofObject, avItem) && keyPath.Equals((NSString)"status"))
    {
        if (!initialPlaybackTimeSet && (avPlayer.Status == AVPlayerStatus.ReadyToPlay))
        {
            await avPlayer.SeekAsync(CMTime.FromSeconds(initialPlaybackTime.TotalSeconds, avPlayer.CurrentItem.Asset.Duration.TimeScale));
            avPlayer.Play();
            initialPlaybackTimeSet = true;
        }
    }
}

iOS10 Speech Recognition "Listening" sound effect

I am doing live speech recognition with the new iOS 10 Speech framework, using AVCaptureSession to capture the audio.
I have a "listening" beep to notify the user that he can begin talking. The best place to play that sound is in the first call to captureOutput(_:didOutputSampleBuffer:from:), but if I try to play a sound after starting the session, the sound just won't play, and no error is thrown; it silently fails to play.
What I tried:
Playing through a system sound (AudioServicesPlaySystemSound...())
Play an asset with AVPlayer
Also tried both of the above solutions, async and sync, on the main queue.
It seems that regardless of what I do, it is impossible to trigger any kind of audio playback after triggering the recognition (I'm not sure whether it is specifically the AVCaptureSession or the SFSpeechAudioBufferRecognitionRequest / SFSpeechRecognitionTask...).
Any ideas? Apple even recommends playing a "listening" sound effect (and does so itself with Siri), but I couldn't find any reference/example showing how to actually do it (their "SpeakToMe" example doesn't play a sound).
I can play the sound before triggering the session, and that does work (when the session is started in the completion of playing the sound), but sometimes there is a lag in actually starting the recognition, mostly when using BT headphones and switching from a different AudioSession category (for which I have no completion event). Because of that, I need a way to play the sound when the recording actually starts, not before it triggers while crossing my fingers that it won't lag.
Well, apparently there are a bunch of "rules" one must follow in order to successfully begin a speech recognition session and play a "listening" effect only when (after) the recognition has really begun.
The session setup and triggering must be called on the main queue:
DispatchQueue.main.async {
    speechRequest = SFSpeechAudioBufferRecognitionRequest()
    task = recognizer.recognitionTask(with: speechRequest, delegate: self)
    capture = AVCaptureSession()
    //.....
    shouldHandleRecordingBegan = true
    capture?.startRunning()
}
The "listening" effect should be player via AVPlayer, not as a system sound.
The safest place to know we are definitely recording, is in the delegate call of AVCaptureAudioDataOutputSampleBufferDelegate, when we get our first sampleBuffer callback:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // only once per recognition session
    if shouldHandleRecordingBegan {
        shouldHandleRecordingBegan = false
        player = AVPlayer(url: Bundle.main.url(forResource: "listening", withExtension: "aiff")!)
        player.play()
        DispatchQueue.main.async {
            // call delegate/handler closure/post notification etc...
        }
    }
    // append the buffer to the speech recognition request
    speechRequest?.appendAudioSampleBuffer(sampleBuffer)
}
The end-of-recognition effect is a whole lot easier:
var ended = false

if task?.state == .running || task?.state == .starting {
    task?.finish() // or task?.cancel() to cancel and not get results
    ended = true
}

if capture?.isRunning == true {
    capture?.stopRunning()
}

if ended {
    player = AVPlayer(url: Bundle.main.url(forResource: "done", withExtension: "aiff")!)
    player.play()
}

How to get animation to work at exact points during playback of a music file?

Question:
In Swift code, apart from using an NSTimer, how can I get animations to start at exact points during playback of a music file played using AVFoundation?
Background
I have a method that plays a music file using AVFoundation (below). I also have UIView animations that I want to start at exact points during the music file being played.
One way I could achieve this is using an NSTimer, but that has the potential to get out of sync or not be exact enough.
Is there a way to tap into AVFoundation to access the music file's elapsed time (a time counter), so that when certain points during playback arrive, animations start?
Is there an event or notification that AVFoundation triggers that gives a constant stream of time elapsed since the music file started playing?
For example
At 0:52.50 (52 and a half seconds), call startAnimation1(); at 1:20.75 (1 minute, 20 and three-quarter seconds), call startAnimation2(); and so on:
switch musicPlayingTimeElapsed {
case 52.50:
    startAnimation1()
case 80.75:
    startAnimation2()
default:
    break
}
Playing music using AVFoundation
import AVFoundation

var myMusic: AVAudioPlayer?

func playMusic() {
    if let musicFile = self.setupAudioPlayerWithFile("fileName", type: "mp3") {
        self.myMusic = musicFile
    }
    myMusic?.play()
}

func setupAudioPlayerWithFile(file: NSString, type: NSString) -> AVAudioPlayer? {
    let path = NSBundle.mainBundle().pathForResource(file as String, ofType: type as String)
    let url = NSURL.fileURLWithPath(path!)
    var audioPlayer: AVAudioPlayer?
    do {
        audioPlayer = try AVAudioPlayer(contentsOfURL: url)
    } catch {
        print("AVAudioPlayer not available")
    }
    return audioPlayer
}
If you use AVPlayer instead of AVAudioPlayer, you can use the (TBH slightly awkward) addBoundaryTimeObserverForTimes method:
let times = [
    NSValue(CMTime: CMTimeMake(...)),
    NSValue(CMTime: CMTimeMake(...)),
    NSValue(CMTime: CMTimeMake(...)),
    // etc.
]

var observer: AnyObject? = nil // instance variable

self.observer = self.player.addBoundaryTimeObserverForTimes(times, queue: nil) {
    // currentTime() returns a CMTime, so convert it to seconds before matching
    switch CMTimeGetSeconds(self.player.currentTime()) {
    case 52.50:
        startAnimation1()
    case 80.75:
        startAnimation2()
    default:
        break
    }
}

// call this to stop observing
self.player.removeTimeObserver(self.observer!)
The way I solve this is to divide the music up into separate segments beforehand. I then use one of two approaches:
I play the segments one at a time, each in its own audio player. The audio player's delegate is notified when a segment finishes, and so starting the next segment — along with accompanying action — is up to me.
Alternatively, I queue up all the segments onto an AVQueuePlayer. I then use KVO on the queue player's currentItem. Thus, I am notified exactly when we move to a new segment.
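A minimal sketch of that queue-based approach, using Swift 4 block-based KVO; segmentURLs and the animation hook are assumptions, not code from the answer:

import AVFoundation

// segmentURLs is assumed: the pre-split audio segments, in playback order.
let items = segmentURLs.map { AVPlayerItem(url: $0) }
let queuePlayer = AVQueuePlayer(items: items)

// Keep a strong reference to the observation for as long as playback runs.
let observation = queuePlayer.observe(\.currentItem, options: [.new]) { _, _ in
    // fires exactly when the queue advances to a new segment;
    // start the animation that accompanies that segment here
}
queuePlayer.play()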
You might try using key-value observing to observe the duration property of your sound as it plays, and trigger each animation when the duration reaches your time thresholds. You'd need to match times >= the trigger time, since you will likely not get a perfect match with your desired time.
I don't know how well that would work, however. First, I'm not sure the sound player's duration is KVO-compliant. Next, KVO is somewhat resource-intensive, and if your KVO listener gets called thousands of times a second it might bog things down. It would at least be worth a try.
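For the "constant stream of time elapsed" the question asks about, AVPlayer (again, not AVAudioPlayer) offers addPeriodicTimeObserver(forInterval:queue:using:). A minimal sketch in current Swift syntax, where player and the trigger logic are assumptions:

// fire roughly every 10 ms on the main queue with the current playback time
let interval = CMTime(seconds: 0.01, preferredTimescale: 600)
let timeObserver = player.addPeriodicTimeObserver(forInterval: interval, queue: .main) { time in
    let elapsed = CMTimeGetSeconds(time)
    // compare elapsed against the trigger points (52.50, 80.75, ...) here
}
// later, when the observer is no longer needed:
player.removeTimeObserver(timeObserver)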

Keep AVAudioPlayer sound in the memory

I use AVAudioPlayer to play a click sound when the user taps a button.
Because there is a delay between the tap and the sound, I play the sound once in viewDidAppear with volume = 0.
I found that if the user taps the button within a certain time period, the sound plays immediately, but beyond that period there is again a delay between the tap and the sound.
It seems that in the first case the sound comes from the cache of the initial play, and in the second case the app has to load the sound again.
So now I play the sound every 2 seconds with volume = 0, and when the user actually taps the button the sound comes right away.
My question: is there a better approach for this? My goal is to keep the sound cached for the whole lifetime of the app.
Thank you.
To avoid audio lag, use the .prepareToPlay() method of AVAudioPlayer.
Apple's Documentation on Prepare To Play
Calling this method preloads buffers and acquires the audio hardware needed for playback, which minimizes the lag between calling the play() method and the start of sound output.
If player is declared as an AVAudioPlayer then player.prepareToPlay() can be called to avoid the audio lag. Example code:
import AVFoundation

// SoundType is assumed to be a String-backed enum of file extensions,
// e.g. enum SoundType: String { case mp3, wav }
struct AudioPlayerManager {

    var player: AVAudioPlayer? = AVAudioPlayer()

    mutating func setupPlayer(soundName: String, soundType: SoundType) {
        if let soundURL = Bundle.main.url(forResource: soundName, withExtension: soundType.rawValue) {
            do {
                player = try AVAudioPlayer(contentsOf: soundURL)
                player?.prepareToPlay() // preload buffers and acquire the audio hardware
            } catch {
                print(error.localizedDescription)
            }
        } else {
            print("Sound file was missing, name is misspelled or wrong case.")
        }
    }
}
Then play() can be called with minimal lag:
player?.play()
If you keep a reference to the AVAudioPlayer, the sound stays in memory and no further lag will occur.
The first delay is caused by loading the sound, so your initial playback in viewDidAppear is the right idea.
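A minimal sketch of that, assuming a bundled click.wav and a hypothetical ClickSound singleton that keeps the player (and its preloaded buffers) alive for the app's lifetime:

import AVFoundation

final class ClickSound {
    static let shared = ClickSound()
    private var player: AVAudioPlayer?

    private init() {
        // "click.wav" is an assumed bundle resource
        if let url = Bundle.main.url(forResource: "click", withExtension: "wav") {
            player = try? AVAudioPlayer(contentsOf: url)
            player?.prepareToPlay() // preload buffers once, up front
        }
    }

    func play() {
        player?.currentTime = 0 // rewind so repeated taps replay from the start
        player?.play()
    }
}

Calling ClickSound.shared.play() from the button handler then reuses the same primed player every time.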
