In my Flutter web app, I have a sound that plays after a timer expires; it's a stopwatch utility. It works fine everywhere: in my Apple and Android apps and on desktop browsers. The exception is Chrome and Safari on iOS, where sound is locked down to play only in response to a user tap. This is by design by Apple for iPhones and iPads.
The user does tap to start the timer; the problem is that the timer expires 4-7 seconds later, and my requirement is that the sound plays at that point. I added a mute/unmute control, which plays the sound on unmute, so the volume is set and the sound does play on unmute. But after the timer is started, the sound will not play when the timer expires.
I can't imagine there isn't a viable solution out there. Thoughts?
iOS, in both Chrome and Safari, doesn't allow sound unless it is initiated by a user action, and the delay between the tap and the timer expiring kills the sound.
You need to start the sound on a user action, then attach a listener and pause the sound as soon as it actually starts playing. Later you can simply resume it. Playback starts asynchronously, which is why you need the listener instead of pausing immediately.
bool playStart = false; // true while we are waiting to pause the just-started sound

Future<void> resume() async {
  await player.play();
}

Future<void> start() async {
  // start() must be called from the user's tap so iOS treats playback as
  // user-initiated (player is an audio player instance, e.g. just_audio's AudioPlayer).
  await player.stop();
  await player.setUrl('asset:/assets/sounds/sound.wav');
  playStart = true;
  player.play();
  player.playerStateStream.listen((state) {
    if (state.playing && playStart) {
      playStart = false;
      debugPrint('quick pause here');
      player.pause();
      // The sound is now "unlocked"; calling resume() later (when the timer
      // expires) works without another user gesture.
    }
  });
}
Related
I'm developing a sleep timer app. As soon as the timer is done, I want to stop the audio playing on the device. Is there any way to stop the system sound, or do I have to play a silent audio file?
I already tried this code:
do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playback)
    try AVAudioSession.sharedInstance().setActive(true)
} catch { print(error.localizedDescription) }
.playback will only pause non-mixable sessions; if the other app has a mixable session, it will continue. The only category that will pause mixable sessions is .record (which requires that you request recording permission, even though you're not actually going to record anything).
I would like to insert an alarm clock function in an iOS app I am developing, and as a reference, I installed a popular App called "Alarmy."
I managed to keep my app running in the background just by using AVAudioSession properties; however, I noticed that the app consumes a lot of battery while the phone is asleep.
After some testing, I think this is due to the app activating the speakers (and keeping them ON) immediately after the AVAudioSession activation.
Even though no actual sound plays until audioPlayer.play(atTime: audioPlayer.deviceCurrentTime + Double(seconds)) fires, if I get very close to my iPhone 7's speakers I can hear the faint buzzing that indicates the speakers are on. In effect, the speakers are playing an "empty sound."
This buzzing sound does not exist when I set the alarm with Alarmy; it just starts playing when it is supposed to.
I have not found any other way to keep my app in the background and play an alarm sound at a specified time. There are local notifications, of course, but they do not allow a sound to be played when the phone is silenced.
Going back to "Alarmy": not only can they play a background alarm without needing to activate the speakers first, they can also raise the volume to the maximum level while in the background. Are they perhaps triggering some other iOS background mode to achieve this, maybe using Background Fetch or Background Processing in some clever way? Is there any known way to replicate this behavior?
Thanks in advance!
Here is the current code I use to set the alarm:
private func setNewAlarm(audioPlayer: AVAudioPlayer, seconds: Int, ringtone: String) {
    do {
        self.setNotificationAlarm(audioPlayer: audioPlayer, seconds: seconds, ringtone: ringtone, result: result)
        // This calls the method I use to set a secondary alarm using local notifications, just in case the user closes the app
        try AVAudioSession.sharedInstance().setActive(false)
        try AVAudioSession.sharedInstance().setCategory(.playback, options: [.mixWithOthers])
        try AVAudioSession.sharedInstance().setActive(true)
    } catch let error as NSError {
        print("AVAudioSession error: \(error.localizedDescription)")
    }
    audioPlayer.prepareToPlay()
    audioPlayer.play(atTime: audioPlayer.deviceCurrentTime + Double(seconds))
    result(true)
}
I have a music player application where the user can listen to music even when the app is not in the foreground, as intended. Everything works fine.
The problem is this: when the user leaves the device idle, music plays smoothly, but after two or three songs it stops. From what I've noticed, when a song ends the app tries to change the track; to do that it needs to execute some business logic, and that isn't happening because the system has suspended the app. I tried the following.
Capabilities:
Audio session:
do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("Failed to register")
}
It's still not working. Am I missing something?
The problem is that once a track finishes while you're in the background, you need to execute your business logic and start playing the next track as quickly as possible.
iOS suspends your app if it isn't playing audio in the background for roughly 3-4 seconds. If your business logic can exceed that limit, call UIApplication.shared.beginBackgroundTask before executing it, as in the sketch below.
I recently discovered Cordova and started building an app I've been thinking about. I need to figure out whether, through Cordova, I can detect that the user is leaving the app (not locking the phone), e.g. to pick up a phone call. Is there an event for this that I can bind to? I've searched through the most common npm packages and done a lot of research, but can't really find a good answer.
Cordova provides pause and resume events.
pause
The pause event fires when the native platform puts the application into the background, typically when the user switches to a different application.
I think this will help you.
eg:
document.addEventListener("pause", onPause, false);
function onPause() {
// Handle the pause event
}
resume
The resume event fires when the native platform pulls the application out from the background.
iOS Quirks
Any interactive functions called from a pause event handler execute later when the app resumes, as signaled by the resume event. These include alerts, console.log(), and any calls from plugins or the Cordova API, which go through Objective-C.
active event
The iOS-specific active event is available as an alternative to resume, and detects when users disable the Lock button to unlock the device with the app running in the foreground. If the app (and device) is enabled for multi-tasking, this is paired with a subsequent resume event, but only under iOS 5. In effect, all locked apps in iOS 5 that have multi-tasking enabled are pushed to the background. For apps to remain running when locked under iOS 5, disable the app's multi-tasking by setting UIApplicationExitsOnSuspend to YES. To run when locked on iOS 4, this setting does not matter.
resume event
When called from a resume event handler, interactive functions such as alert() need to be wrapped in a setTimeout() call with a timeout value of zero, or else the app hangs. For example:
document.addEventListener("resume", onResume, false);
function onResume() {
setTimeout(function() {
// TODO: do your thing!
}, 0);
}
I am having an intermittent (aargh!) problem playing text-to-speech in the background, triggered from Apple Watch. I have properly set up the background mode, the AVAudioSession category, and the WatchKitExtensionRequest handler (see below). I had this working before and can't figure out what changed. (Could it be that iOS 9 has issues? "Before" means, among other things, iOS 8.)
The problem is this: when the app gets the request from the Watch and the app is either in the background or the phone is sleeping (locked), the speech sometimes plays right away, and other times doesn't play until the app is brought to the foreground. The OS seems to be sometimes queuing the audio, and sometimes not. I can't find any common thread between success and failure cases. I can verify with logging that the call to speakUtterance() is being made in all situations. But its behavior varies, apparently randomly. The only clue is that it might be the case that the longer the app is in the background, the less likely it is to speak right away.
This is making me pull my hair out. Suggestions welcome.
In info.plist:
Required background modes: App plays audio or streams audio/video using AirPlay
In AppDelegate.application(_:didFinishLaunchingWithOptions:):
do {
    try AVAudioSession.sharedInstance().setCategory(
        AVAudioSessionCategoryPlayback,
        withOptions: .DuckOthers
    )
    try AVAudioSession.sharedInstance().setActive(true)
} catch let error as NSError {
    // etc...
}
In AppDelegate.application:handleWatchKitExtensionRequest...():
var bgTaskId: UIBackgroundTaskIdentifier = 0
bgTaskId = application.beginBackgroundTaskWithName(
    "Prose WKE handler",
    expirationHandler: {
        application.endBackgroundTask(bgTaskId)
    }
)
// ... Post notification to call Text-to-Speech
application.endBackgroundTask(bgTaskId)
Here's a workaround: play a second snippet of sound (I used a half-second of silence) with AVAudioPlayer right after the call to speakUtterance(). This seems to "jog the pipeline".