I am trying to find a way to pause any playing media on the device, so I was thinking of triggering the same logic that is fired when a user presses the headphone "middle button".
I managed to prevent music from resuming (after I pause it within my app, which basically starts an AVAudioSession for recording) by NOT setting the AVAudioSession's active property to false and leaving it hanging, but I am pretty sure that's a bad way to do it. If I deactivate the session, the music resumes. The other option I am thinking of is playing some kind of silent loop that would "imitate" the silence I need. But if what I am seeking is doable, I think it would be the best approach, as I understood from this question that it cannot be done using the normal means.
func stopAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        if audioSession.secondaryAudioShouldBeSilencedHint {
            print("someone is playing....")
        }
        try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
        isSessionActive = false
    } catch let error as NSError {
        print("Unable to deactivate audio session: \(error.localizedDescription)")
        print("retrying.......")
    }
}
In this code snippet, as the function name implies, I set active to false. I tried to find other options, but I could not find another way of stopping my recording session while preventing the other app that was already playing from resuming.
Can someone guide me to which library I should look into? For example, whether I can tap into the hardware side and trigger the button event, or find out which library is listening to this button press event and handling the pause/play functionality.
A friend of mine who is more experienced in iOS development suggested the following workaround and it worked. I am posting it here as it might help someone trying to achieve similar behaviour.
In order to stop/pause what is currently being played on a user's device, you will need to add a music player to your app. Then, at the point where you need to pause/stop the current media, you just initiate the player, play it, and then pause/stop it - simple :)
like so:
import MediaPlayer

let musicPlayer = MPMusicPlayerApplicationController.applicationQueuePlayer

func stopMedia() {
    MPMediaLibrary.requestAuthorization { (newPermissionStatus: MPMediaLibraryAuthorizationStatus) in
        // Queue the user's songs, start playback, then immediately pause it.
        self.musicPlayer.setQueue(with: .songs())
        self.musicPlayer.play()
        print("Stopping music player")
        self.musicPlayer.pause()
        print("Stopped music player")
    }
}
The part with MPMediaLibrary.requestAuthorization is needed to avoid an authorisation error when accessing the user's media library. And of course you will need to add the Privacy - Media Library Usage Description key to your Info.plist file.
I am developing a voice-to-text application using the iOS SFSpeechRecognizer API. I found a great tutorial here: and it worked fine.
I wanted to process the text and perform some action as soon as the voice input stops. So, I was curious whether there is a delegate method available for SFSpeechRecognizer which can recognise when the voice input has stopped, so that I can capture the input and process it further?
So, I was curious whether there is a delegate method available for SFSpeechRecognizer which can recognise when the voice input has stopped, so that I can capture the input and process it further?
Not built into the SFSpeechRecognizer API, no. On the contrary, that is exactly why you must provide an interface that allows the user to tell the recognizer that the input is finished (e.g. a Done button of some sort). Your app will be rejected if you omit that interface.
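As an illustration, here is a minimal sketch of such an interface, assuming the in-flight SFSpeechAudioBufferRecognitionRequest is kept in a property (the class and property names are illustrative, not from any sample):
import UIKit
import Speech

class TranscriptionViewController: UIViewController {
    // The request created when recording started (hypothetical property).
    var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?

    // The Done button is how the user tells the recognizer the input is finished.
    @IBAction func doneTapped(_ sender: UIButton) {
        // End the audio input; the recognizer then delivers a final result
        // to the recognition task's handler.
        recognitionRequest?.endAudio()
    }
}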
A possible solution may be to use a third-party library like FDSoundActivatedRecorder, which starts recording when sound is detected and stops recording when the user is done talking.
Then you can use the recorded audio, as in this link, to convert it to text in one go.
import Speech

func transcribeAudio(url: URL) {
    // create a new recognizer and point it at our audio
    let recognizer = SFSpeechRecognizer()
    let request = SFSpeechURLRecognitionRequest(url: url)

    // start recognition!
    recognizer?.recognitionTask(with: request) { (result, error) in
        // abort if we didn't get any transcription back
        guard let result = result else {
            print("There was an error: \(error!)")
            return
        }

        // if we got the final transcription back, print it
        if result.isFinal {
            // pull out the best transcription...
            print(result.bestTranscription.formattedString)
        }
    }
}
I have an app with CallKit functionality. When I press the loudspeaker button, it flashes and animates to the OFF state (sometimes the speaker is set to LOUD but the icon is still OFF). When I tap on it multiple times, it can be clearly seen that this functionality is not behaving correctly.
However, WhatsApp starts with the loudspeaker turned OFF, and after 3+ seconds it activates it and it works. Has anyone encountered anything similar and can give me a solution?
YouTube video link to demonstrate my problem
There is a workaround proposed by an Apple engineer which should fix CallKit not activating the audio session correctly:
a workaround would be to configure your app's audio session (call configureAudioSession()) earlier in your app's lifecycle, before the -provider:performAnswerCallAction: method is invoked. For instance, you could call configureAudioSession() immediately before calling -[CXProvider reportNewIncomingCallWithUUID:update:completion:] in order to ensure that the audio session is fully configured prior to informing CallKit about the incoming call.
From: https://forums.developer.apple.com/thread/64544#189703
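A minimal sketch of that ordering (configureAudioSession() here is a hypothetical helper of your own; the exact session setup depends on your app):
import AVFoundation
import CallKit

func reportIncomingCall(uuid: UUID, handle: String, provider: CXProvider) {
    // Configure the audio session BEFORE informing CallKit about the call,
    // so it is fully set up by the time CallKit activates it.
    configureAudioSession()

    let update = CXCallUpdate()
    update.remoteHandle = CXHandle(type: .generic, value: handle)
    provider.reportNewIncomingCall(with: uuid, update: update) { error in
        if let error = error {
            print("Failed to report incoming call: \(error)")
        }
    }
}

func configureAudioSession() {
    // Example configuration for a voice call; adjust to your needs.
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [])
    } catch {
        print("Audio session configuration failed: \(error)")
    }
}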
If this doesn't help, you should probably post an example project which reproduces the behaviour so we can analyse it further.
The above answer is correct: "VoiceChat" mode ruins everything.
Here is a Swift 4 example for WebRTC. After the connection is established, call the following:
let rtcAudioSession = RTCAudioSession.sharedInstance()
rtcAudioSession.lockForConfiguration()
do {
    try rtcAudioSession.setCategory(AVAudioSession.Category.playAndRecord.rawValue,
                                    with: AVAudioSession.CategoryOptions.mixWithOthers)
    try rtcAudioSession.setMode(AVAudioSession.Mode.default.rawValue)
    try rtcAudioSession.overrideOutputAudioPort(.none)
    try rtcAudioSession.setActive(true)
} catch let error {
    debugPrint("Couldn't force audio to speaker: \(error)")
}
rtcAudioSession.unlockForConfiguration()
You can use AVAudioSession.sharedInstance() as well, instead of RTCAudioSession.
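A rough equivalent using AVAudioSession directly (modern Swift naming; a sketch, not the answer's original code):
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers])
    try session.overrideOutputAudioPort(.none)
    try session.setActive(true)
} catch {
    debugPrint("Couldn't configure audio session: \(error)")
}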
Referred from Abnormal behavior of speaker button on system provided call screen
The same issue has been experienced in previous versions as well, so this is not a new issue in CallKit.
This issue has to be resolved by iOS; we don't have any control over it. Please go through these Apple Developer Forum threads:
CallKit/detect speaker set
and
[CALLKIT] audio session not activating?
Maybe you can set the mode to AVAudioSessionModeDefault.
When I use CallKit + WebRTC:
I configure AVAudioSessionModeDefault mode, then alloc a CXProvider and call reportNewIncomingCallWithUUID.
Using WebRTC, after ICE is connected, WebRTC changes the mode to AVAudioSessionModeVoiceChat, and the speaker issue happens.
Later, when I set the mode back to AVAudioSessionModeDefault, the speaker works well (a sketch of this reset follows).
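A minimal sketch of that reset, assuming the call is already connected and WebRTC has switched the mode to voice chat:
import AVFoundation

func resetAudioSessionMode() {
    let session = AVAudioSession.sharedInstance()
    do {
        // Put the mode back to default so the speaker route behaves again.
        try session.setMode(.default)
    } catch {
        debugPrint("Failed to reset audio session mode: \(error)")
    }
}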
I've fixed the issue by following these steps.
In CXAnswerCallAction, use the code below to set the audio session configuration:
RTCDispatcher.dispatchAsync(on: RTCDispatcherQueueType.typeAudioSession) {
    let audioSession = RTCAudioSession.sharedInstance()
    audioSession.lockForConfiguration()
    let configuration = RTCAudioSessionConfiguration.webRTC()
    configuration.categoryOptions = [AVAudioSessionCategoryOptions.allowBluetoothA2DP,
                                     AVAudioSessionCategoryOptions.duckOthers,
                                     AVAudioSessionCategoryOptions.allowBluetooth]
    try? audioSession.setConfiguration(configuration)
    audioSession.unlockForConfiguration()
}
After the call is connected, I reset the audio session category to the default:
func configureAudioSession() {
    let session = RTCAudioSession.sharedInstance()
    session.lockForConfiguration()
    do {
        try session.setCategory(AVAudioSession.Category.playAndRecord.rawValue, with: .allowBluetooth)
        try session.setMode(AVAudioSession.Mode.default.rawValue)
        try session.setPreferredSampleRate(44100.0)
        try session.setPreferredIOBufferDuration(0.005)
    } catch let error {
        debugPrint("Error changing AVAudioSession category: \(error)")
    }
    session.unlockForConfiguration()
}
Thanks to SO #Алексей Смольский for the help.
I am trying to run Apple's SpeakToMe: Using Speech Recognition with AVAudioEngine sample from their website here. My problem is that after you stop the AVAudioEngine and SpeechRecognizer, you can no longer use system sounds.
How do you release the AVAudioEngine and SpeechRecognizer so that sounds work again after recording stops?
To duplicate this:
Download the sample code.
Add a UITextField to the storyboard.
Run the project and type into the text field (you'll notice you can hear your typing event sounds).
Then start recording and stop recording.
Type into the text field again (now there will be no sound).
UPDATE
This only happens on a real device - not on the simulator.
After hours of debugging I came across the unreleased object causing the issue. In their sample code they do not release the AVAudioSession, which causes the sound channels to be blocked.
The fix is to make the AVAudioSession a property:
private var audioSession: AVAudioSession?
And then set audioSession.active to false when stopping the recording:
if let audioSession = audioSession {
    do {
        try audioSession.setActive(false, with: .notifyOthersOnDeactivation)
    } catch {
        // handle error
    }
}
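For completeness, a sketch of where the property would be assigned when recording starts (the method name mirrors the sample, but the body here is illustrative):
private func startRecording() throws {
    // Keep a reference in the property so the session can be deactivated later.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(AVAudioSessionCategoryRecord)
    try session.setActive(true, with: .notifyOthersOnDeactivation)
    audioSession = session
    // ... configure AVAudioEngine and the recognition request as in the sample
}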
I have an application for Muslim prayer alerts in iOS. I am already playing an mp3 file when I click a button, and this is my code:
override func viewDidLoad() {
    super.viewDidLoad()
    let tapSound = NSBundle.mainBundle().URLForResource("mp", withExtension: "mp3")
    self.soundFileURLRef = tapSound
    do {
        player = try AVAudioPlayer(contentsOfURL: soundFileURLRef)
    } catch _ {
        player = nil
    }
    player?.delegate = self
    player?.prepareToPlay()
}
@IBAction func play(sender: AnyObject) {
    NSLog("started playing")
    player?.play()
}

func audioPlayerDidFinishPlaying(player: AVAudioPlayer, successfully flag: Bool) {
    NSLog("finished playing")
}
and it's working perfectly.
Now I am looking for a way to play the mp3 file at a specific time, e.g. when the time is 12:00 AM, even if the application is closed.
Any suggestions on how to do that in iOS?
Thanks for the help.
To schedule events like this, you can use Local Notifications. That's how all alarm clock apps (that I know of) alert you when the app isn't open. Local notifications, however, only allow you to play a sound clip of 30 seconds (max) that you have bundled with your application.
Currently there is no way to have your app play music as a background service unless it's currently open (or was open when you locked the screen, if you opt out of multitasking; see the above link).
The built-in alarm clock app is allowed to do this, however, because it uses a private API.
Local Notifications work as suggested here, but you should keep in mind that there is no 100% guarantee the notification will fire precisely when you schedule it. Depending on the workload, the OS can delay it a little (from seconds to minutes). Just wanted to point that out, since prayer times are very precise ;)
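A minimal sketch of scheduling such an alert with the UserNotifications framework (iOS 10+), assuming notification permission has already been granted and an "azan.mp3" file of 30 seconds or less is bundled with the app (the file name and identifier are illustrative):
import UserNotifications

func scheduleDailyPrayerAlert() {
    let content = UNMutableNotificationContent()
    content.title = "Prayer time"
    // The sound must be bundled with the app and be 30 seconds or shorter.
    content.sound = UNNotificationSound(named: UNNotificationSoundName("azan.mp3"))

    // Fire every day at 12:00 AM.
    var components = DateComponents()
    components.hour = 0
    components.minute = 0
    let trigger = UNCalendarNotificationTrigger(dateMatching: components, repeats: true)

    let request = UNNotificationRequest(identifier: "prayer-alert",
                                        content: content,
                                        trigger: trigger)
    UNUserNotificationCenter.current().add(request)
}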
I just started testing this very simple audio recording application, built with MonoTouch, on actual iPhone devices today. I encountered an issue with what seems to be the re-use of the AVAudioRecorder and AVPlayer objects after their first use, and I am wondering how I might solve it.
Basic Overview
The application consists of the following three sections:
List of Recordings (TableViewController)
Recording Details (ViewController)
New Recording (ViewController)
Workflow
When creating a recording, the user clicks the "Add" button from the List of Recordings area and the application pushes the New Recording view controller.
Within the New Recording Controller, the following variables are available:
AVAudioRecorder recorder;
AVPlayer player;
Each is initialized prior to its usage:
//Initialized during the ViewDidLoad event
recorder = AVAudioRecorder.Create(audioPath, audioSettings, out error);
and
//Initialized in the "Play" event
player = new AVPlayer(audioPath);
Each of these works as intended on the initial load of the New Recording controller; however, any further attempts do not seem to work (no audio playback).
The Details area also has a playback portion to allow the user to play back any recordings. However, much like in the New Recording controller, playback doesn't function there either.
Disposal
They are both disposed as follows (upon exiting/leaving the view):
if (recorder != null)
{
    recorder.Dispose();
    recorder = null;
}

if (player != null)
{
    player.Dispose();
    player = null;
}
I have also attempted to remove any observers that could possibly keep any of the objects "alive", in hopes that would solve the issue, and have ensured they are each re-instantiated with each display of the New Recording area; however, I still get no audio playback after the initial recording session.
I would be happy to provide more code if necessary. (This is using MonoTouch 6.0.6)
After further investigation, I determined that the issue was being caused by the AudioSession, as both recording and playback were occurring within the same controller.
The two solutions that I determined were as follows:
Solution 1 (AudioSessionCategory.PlayAndRecord)
//A single declaration of this will allow both AVAudioRecorders and AVPlayers
//to perform alongside each other.
AudioSession.Category = AudioSessionCategory.PlayAndRecord;
//Upon noticing very quiet playback, I added this second line, which allowed
//playback to come through the main phone speaker
AudioSession.OverrideCategoryDefaultToSpeaker = true;
Solution 2 (AudioSessionCategory.RecordAudio & AudioSessionCategory.MediaPlayback)
void YourRecordingMethod()
{
    //This sets the session to record audio explicitly
    AudioSession.Category = AudioSessionCategory.RecordAudio;
    MyRecorder.Record();
}

void YourPlaybackMethod()
{
    //This sets the session for playback only
    AudioSession.Category = AudioSessionCategory.MediaPlayback;
    YourAudioPlayer.Play();
}
For some additional information on usage of the AudioSession, visit Apple's AudioSession Development Area.