I am playing a live stream using AVPlayer and AVPlayerItem.
Is there a way to know whether the user has muted or unmuted the video while it is playing?
You can check the isMuted property of the AVPlayer.
Documentation:
https://developer.apple.com/documentation/avfoundation/avplayer/1387544-ismuted
You can use a combination of KVO and Combine to observe changes to the isMuted property:
let player: AVPlayer
private var cancellables = Set<AnyCancellable>()

// ...

player
    .publisher(for: \.isMuted)
    .sink { isMuted in
        // Here you can handle the fact that the user has muted or unmuted the player.
    }
    .store(in: &cancellables)
Alternatively, you can implement KVO to detect mute/unmute changes.
Note: in Swift, although the property is named isMuted, KVO goes through the Objective-C runtime, so listen for the "muted" key and you are set.
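A minimal sketch of that classic KVO approach (assuming self is an NSObject subclass, such as a view controller, and that player is retained elsewhere):

player.addObserver(self, forKeyPath: "muted", options: [.new], context: nil)

override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                           change: [NSKeyValueChangeKey: Any]?,
                           context: UnsafeMutableRawPointer?) {
    if keyPath == "muted" {
        // Handle the mute/unmute change here.
        let isMuted = (change?[.newKey] as? Bool) ?? false
        print("isMuted is now \(isMuted)")
    }
}

// Remember to call player.removeObserver(self, forKeyPath: "muted") when done.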
I am using AVCaptureSession to develop a camera app.
I need to trigger vibration feedback when the user touches certain UI elements, using code like the snippet below:
import UIKit

// The private init implies a singleton; the wrapper type here is illustrative.
final class FeedbackGenerator {
    static let shared = FeedbackGenerator()

    private var _impact = UIImpactFeedbackGenerator()
    private var _select = UISelectionFeedbackGenerator()

    private init() {
        // Prepare the generators so the first feedback fires with minimal latency.
        _impact.prepare()
        _select.prepare()
    }

    func impact() {
        _impact.impactOccurred()
    }

    func select() {
        _select.selectionChanged()
    }
}
so I can invoke select() or impact() to trigger feedback.
But if I add an audio device to the AVCaptureSession, all feedback stops working. So, to keep the feedback working, I have to remove the audio device from the AVCaptureSession first, and only add it back when I need to record a video; that operation stalls captureOutput and freezes the camera preview for a short time.
So I tried another approach: always keep the audio device added to the AVCaptureSession, fetch all the audio-related AVCaptureConnection objects from AVCaptureSession.connections, and set isEnabled to false or true as needed, roughly as sketched below. But this didn't work either: even when AVCaptureConnection.isEnabled is false, the feedback is still disabled.
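The attempt looked roughly like this (captureSession stands in for my session property):

for connection in captureSession.connections {
    let isAudioConnection = connection.inputPorts.contains { $0.mediaType == .audio }
    if isAudioConnection {
        // Even with the connection disabled, haptic feedback stays dead.
        connection.isEnabled = false
    }
}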
I think maybe the only way to keep the feedback working is to not add the audio device to the AVCaptureSession at all.
I am using the Player library https://github.com/piemonte/Player for video playback in my app.
I'm trying to figure out how to add the functionality to change the playback speed/rate of the video, like this: https://developer.apple.com/reference/avfoundation/avplayer/1388846-rate
I didn't see a playback function to allow this type of direct control in the Player docs.
Is there a way to change the "rate" of the underlying AVPlayer?
The library has Player.swift, where you can access the "_avplayer" variable, which is an AVPlayer object.
You can make _avplayer public and access it from anywhere, or you can add a getter and setter like this:
open var rate: Float {
    get {
        return self._avplayer.rate
    }
    set {
        self._avplayer.rate = newValue
    }
}
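With that accessor in place, callers can drive the underlying AVPlayer directly (player here being a hypothetical Player instance):

player.rate = 1.5 // 1.5x speed; 1.0 is normal speed and 0.0 pauses playback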
I am testing this using iOS 10.2 on my actual iPhone 6s device.
I am playing streamed audio and am able to play/pause audio, skip tracks, etc. I also have enabled background modes and the audio plays in the background and continues through a playlist properly. The only issue I am having is getting the lock screen controls to show up. Nothing displays at all...
In viewDidLoad() of my MainViewController, right when my app launches, I call this...
func setupAudioSession() {
    UIApplication.shared.beginReceivingRemoteControlEvents()
    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback, with: AVAudioSessionCategoryOptions.mixWithOthers)
        self.becomeFirstResponder()
        do {
            try AVAudioSession.sharedInstance().setActive(true)
            print("AVAudioSession is Active")
        } catch let error as NSError {
            print(error.localizedDescription)
        }
    } catch let error as NSError {
        print(error.localizedDescription)
    }
}
and then in my AudioPlayer class after I begin playing audio I call ...
func setupLockScreen() {
    let commandCenter = MPRemoteCommandCenter.shared()
    commandCenter.nextTrackCommand.isEnabled = true
    commandCenter.nextTrackCommand.addTarget(self, action: #selector(skipTrack))
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [MPMediaItemPropertyTitle: "TESTING"]
}
When I lock my iPhone and then tap the power button again to go to the lock screen, the audio controls are not displayed at all. It is as if no audio is playing, I just see my normal background photo. Also no controls are displayed in the control panel (swiping up on home screen and then swiping left to where the music controls should be).
Is the issue because I am not using AVAudioPlayer or AVPlayer? But then how does, for example, Spotify get the lock screen controls to display using their own custom audio player? Thanks for any advice / help
The issue turned out to be this line...
try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback, with: AVAudioSessionCategoryOptions.duckOthers)
Once I changed it to
try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback, with: [])
everything worked fine. So it seems that passing any option for the AVAudioSessionCategoryOptions argument causes the lock screen controls not to display. I also tried passing in .mixWithOthers, and that too caused the lock screen controls not to be displayed.
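For reference, the working setup from the question reduces to this (the same era-appropriate APIs, with an empty options set):

func setupAudioSession() {
    UIApplication.shared.beginReceivingRemoteControlEvents()
    do {
        // An empty options set is what lets the lock screen controls appear.
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback, with: [])
        try AVAudioSession.sharedInstance().setActive(true)
        print("AVAudioSession is Active")
    } catch let error as NSError {
        print(error.localizedDescription)
    }
}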
In Swift 4. This example only shows the player on the lock screen and works with iOS 11. To learn how to play audio on the device, you can follow this thread: https://stackoverflow.com/a/47710809/1283517
import MediaPlayer
import AVFoundation
Declare the player:
var player : AVPlayer?
Now create a function:
func setupLockScreen() {
    let commandCenter = MPRemoteCommandCenter.shared()
    commandCenter.nextTrackCommand.isEnabled = true
    commandCenter.togglePlayPauseCommand.addTarget(self, action: #selector(controlPause))
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [MPMediaItemPropertyTitle: currentStation]
}
Now create a function to handle the play and pause events. I have created a Bool, isPlaying, to track the status of the player.
@objc func controlPause() {
    if isPlaying == true {
        player?.pause()
        isPlaying = false
    } else {
        player?.play()
        isPlaying = true
    }
}
And that's it. Now the player will be displayed on the lock screen.
Yes, for the lock screen controls to work you need to use the iOS APIs to play audio. I'm not sure how Spotify does it, but they may be using a second audio session in parallel for this purpose and letting the controls drive both. Your background handler (the singleton, in my case) could start playing the second audio stream at 0 volume when the app goes into the background and stop it when it returns to the foreground. I haven't tested this myself, but it's an option to try.
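Untested, as noted, but the sketched idea would look something like this (silenceURL standing in for a bundled silent asset):

// Hypothetical: keep a zero-volume player alive while backgrounded so the
// audio session stays active.
let silentPlayer = AVPlayer(url: silenceURL)
silentPlayer.volume = 0.0
silentPlayer.play()    // call on entering the background
// silentPlayer.pause() // call on returning to the foreground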
I am doing live speech recognition with the new iOS 10 framework. I use AVCaptureSession to capture the audio.
I have a "listening" beep sound to notify the user that he can begin talking. The best place to play that sound is at the first call to captureOutput(_:didOutputSampleBuffer:...), but if I try to play a sound after starting the session, the sound just won't play, and no error is thrown; it just silently fails to play...
What I tried:
Playing through a system sound (AudioServicesPlaySystemSound...())
Playing an asset with AVPlayer
Both of the above solutions, sync and async, on the main queue
It seems like, regardless of what I do, it is impossible to trigger any kind of audio playback after starting the recognition (I'm not sure whether it's specifically the AVCaptureSession or the SFSpeechAudioBufferRecognitionRequest / SFSpeechRecognitionTask...).
Any ideas? Apple even recommends playing a "listening" sound effect (and does it themselves with Siri), but I couldn't find any reference/example showing how to actually do it... (their "SpeakToMe" example doesn't play a sound).
I can play the sound before triggering the session, and that does work (when starting the session in the completion of playing the sound), but sometimes there's a lag in actually starting the recognition (mostly when using BT headphones and switching from a different AudioSession category, for which I do not have a completion event...). Because of that, I need a way to play the sound when the recording actually starts, not to play it beforehand and cross my fingers that the session won't lag.
Well, apparently there are a bunch of "rules" one must follow in order to successfully begin a speech recognition session and play a "listening" effect only when (after) the recognition has really begun.
The session setup and triggering must be called on the main queue. So:
DispatchQueue.main.async {
    speechRequest = SFSpeechAudioBufferRecognitionRequest()
    task = recognizer.recognitionTask(with: speechRequest, delegate: self)
    capture = AVCaptureSession()
    //.....
    shouldHandleRecordingBegan = true
    capture?.startRunning()
}
The "listening" effect should be player via AVPlayer, not as a system sound.
The safest place to know we are definitely recording, is in the delegate call of AVCaptureAudioDataOutputSampleBufferDelegate, when we get our first sampleBuffer callback:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // Only once per recognition session
    if shouldHandleRecordingBegan {
        shouldHandleRecordingBegan = false
        player = AVPlayer(url: Bundle.main.url(forResource: "listening", withExtension: "aiff")!)
        player.play()
        DispatchQueue.main.async {
            // Call delegate/handler closure/post notification etc...
        }
    }
    // Append the buffer to the speech recognition request
    speechRequest?.appendAudioSampleBuffer(sampleBuffer)
}
The end-of-recognition effect is a hell of a lot easier:
var ended = false

if task?.state == .running || task?.state == .starting {
    task?.finish() // or task?.cancel() to cancel and not get results
    ended = true
}

if true == capture?.isRunning {
    capture?.stopRunning()
}

if ended {
    player = AVPlayer(url: Bundle.main.url(forResource: "done", withExtension: "aiff")!)
    player.play()
}
I'm using seekToTime for an AVPlayer. It works fine, however, I'd like to be able to hear audio as I scrub through the video, much like how Final Cut or other video editors work. Just looking for ideas or if I've missed something obvious.
The way to do this is to scrub a simultaneous AVPlayer asynchronously alongside the video. I did it this way (in Swift 4):
// Create the simultaneous player and feed it the same URL:
let videoPlayer2 = AVPlayer(url: sameUrlAsVideoPlayer!)
videoPlayer2.volume = 1.0 // volume is clamped to the 0.0...1.0 range

// Set the simultaneous player at exactly the same point as the video player.
videoPlayer2.seek(to: sameSeekTimeAsVideoPlayer)

// Create a variable (letsScrub) that allows you to activate the audio scrubber when the video is being scrubbed:
var letsScrub: Bool?

// When you are scrubbing the video, set letsScrub to true:
if letsScrub == true { audioScrub() }

// Create this function to scrub audio asynchronously:
func audioScrub() {
    DispatchQueue.main.async {
        // Set the variable to false so the scrubbing does not get interrupted:
        self.letsScrub = false
        // Play the simultaneous player.
        self.videoPlayer2.play()
        // Let it play for at least 0.25 of a second before it stops, to get that scrubbing effect:
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.25) {
            // Now that 1/4 of a second has passed (you can make it longer or shorter as you please), pause the simultaneous player.
            self.videoPlayer2.pause()
            // Move the simultaneous player back to the same point as the original video player:
            self.videoPlayer2.seek(to: self.sameSeekTimeAsVideoPlayer)
            // Setting this variable back to true allows the process to be repeated as long as the video is being scrubbed:
            self.letsScrub = true
        }
    }
}