iOS WebAudio only works on headphones

I've been running into an issue for a while now where, on some iOS devices, my WebAudio system only seems to work with headphones, whereas on other devices (exact same OS, model, etc.) the audio plays perfectly fine through the speakers or headphones. I've searched for a solution to this but haven't found anything on this exact issue. The only thing I can think of is that maybe it's an audio channel issue or something.
How can I fix this?

@Alastair is correct: the mute toggle switch does mute WebAudio, but it does not mute HTML5 <audio> tags. Thanks to his work I managed to find a workaround for the web which enables WebAudio to play even when the mute toggle switch is on. I'd post this as a comment on his reply, but I don't have the reputation.
In order to play WebAudio you must also play at least one WebAudio sound source node and one HTML5 <audio> tag during a user action. It is fine if these sounds are short bits of silence. I found that this self-contained code works without any extra files needed:
EDIT 11/29/19:
Removed vestigial TypeScript typedefs. Thanks @Joep. I also realized the code below is woefully out of date and janky; just consider it an example. Editing this post prompted me to create an open source solution for this. You can see a demo of it here: https://spencer-evans.com/share/github/unmute/ and check out the repo here: https://github.com/swevans/unmute
/**
 * PLEASE DON'T USE THIS AS-IS; THIS IS JUST EXAMPLE CODE.
 * If you want a drop-in solution I have a script on GitHub.
 * Demo:
 * @see https://spencer-evans.com/share/github/unmute/
 * GitHub Repo:
 * @see https://github.com/swevans/unmute
 */
var myContext = new (window.AudioContext || window.webkitAudioContext)(); // shared AudioContext used by unlock()
var isWebAudioUnlocked = false;
var isHTMLAudioUnlocked = false;

function unlock() {
    if (isWebAudioUnlocked && isHTMLAudioUnlocked) return;

    // Unlock WebAudio - create a short silent buffer and play it.
    // This will allow us to play web audio at any time in the app.
    var buffer = myContext.createBuffer(1, 1, 22050); // shortest possible buffer: a single sample frame of silence
    var source = myContext.createBufferSource();
    source.buffer = buffer;
    source.connect(myContext.destination);
    source.onended = function() {
        console.log("WebAudio unlocked!");
        isWebAudioUnlocked = true;
        if (isWebAudioUnlocked && isHTMLAudioUnlocked) {
            console.log("WebAudio unlocked and playable w/ mute toggled on!");
            window.removeEventListener("mousedown", unlock);
        }
    };
    source.start();

    // Unlock HTML5 Audio - load a data url of short silence and play it.
    // This will allow us to play web audio when the mute toggle is on.
var silenceDataURL = "data:audio/mp3;base64,//MkxAAHiAICWABElBeKPL/RANb2w+yiT1g/gTok//lP/W/l3h8QO/OCdCqCW2Cw//MkxAQHkAIWUAhEmAQXWUOFW2dxPu//9mr60ElY5sseQ+xxesmHKtZr7bsqqX2L//MkxAgFwAYiQAhEAC2hq22d3///9FTV6tA36JdgBJoOGgc+7qvqej5Zu7/7uI9l//MkxBQHAAYi8AhEAO193vt9KGOq+6qcT7hhfN5FTInmwk8RkqKImTM55pRQHQSq//MkxBsGkgoIAABHhTACIJLf99nVI///yuW1uBqWfEu7CgNPWGpUadBmZ////4sL//MkxCMHMAH9iABEmAsKioqKigsLCwtVTEFNRTMuOTkuNVVVVVVVVVVVVVVVVVVV//MkxCkECAUYCAAAAFVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV";
    var tag = document.createElement("audio");
    tag.controls = false;
    tag.preload = "auto";
    tag.loop = false;
    tag.src = silenceDataURL;
    tag.onended = function() {
        console.log("HTMLAudio unlocked!");
        isHTMLAudioUnlocked = true;
        if (isWebAudioUnlocked && isHTMLAudioUnlocked) {
            console.log("WebAudio unlocked and playable w/ mute toggled on!");
            window.removeEventListener("mousedown", unlock);
        }
    };
    var p = tag.play();
    if (p) p.then(function() { console.log("play success"); }, function(reason) { console.log("play failed", reason); });
}
window.addEventListener("mousedown", unlock);

This is likely because the iPhone's side switch is on "mute". It's very confusing - HTML5 <audio> tags still play fine when the phone is muted, but WebAudio does not. Why? Who knows. But it's a restriction I currently haven't found a way around.

If the iPhone's mute switch is down, meaning that the iPhone is muted, anything played through the Web Audio API will be muted.
Unfortunately there is no way to check whether that physical switch (located on the left edge towards the top of the iPhone) is on or off through JavaScript.
This issue is completely independent of the fact that in iOS Safari audio has to be started by a user action in order to be unmuted. There are some tricks that can be done to overcome that fact, including the one suggested here by Spencer, where you use any action (or a specific action) started by the user to play a silent audio file, allowing subsequent audio files to play unmuted.

I had the same issue, and finally understood the problem:
indeed, a WebView doesn't play sound through the internal speakers if the phone is muted.
When I dug deeper I found a workaround :)
Original post => https://stackoverflow.com/a/37874619/8064246
do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
    //print("AVAudioSession Category Playback OK")
    do {
        try AVAudioSession.sharedInstance().setActive(true)
        //print("AVAudioSession is Active")
    } catch _ as NSError {
        //print(error.localizedDescription)
    }
} catch _ as NSError {
    //print(error.localizedDescription)
}

Related

Mono audio output in iOS app when using a webRTC powered video call

The app I'm writing contains two parts:
An audio player that plays stereo MP3 files
Video conferencing using WebRTC
Each part works perfectly in isolation, but the moment I try them together, one of two things happens:
The video conference audio fades out and we just hear the audio files (in stereo)
We get audio output from both, but the audio files are played in mono, coming out of both ears equally
My digging has taken me down a few routes:
https://developer.apple.com/forums/thread/90503
&
https://github.com/twilio/twilio-video-ios/issues/77
These suggest that the issue could be with the audio session category, mode, or options.
However I've tried lots of the combos and am struggling to get anything working as intended.
Does anyone have a better understanding of the audio options to point me in the right direction?
My most recent combination:
class BBAudioClass {
    static private var audioCategory: AVAudioSession.Category = AVAudioSession.Category.playAndRecord
    static private var audioCategoryOptions: AVAudioSession.CategoryOptions = [
        AVAudioSession.CategoryOptions.mixWithOthers,
        AVAudioSession.CategoryOptions.allowBluetooth,
        AVAudioSession.CategoryOptions.allowAirPlay,
        AVAudioSession.CategoryOptions.allowBluetoothA2DP
    ]
    static private var audioMode = AVAudioSession.Mode.default

    static func setCategory() -> Void {
        do {
            let audioSession: AVAudioSession = AVAudioSession.sharedInstance()
            try audioSession.setCategory(
                BBAudioClass.audioCategory,
                mode: BBAudioClass.audioMode,
                options: BBAudioClass.audioCategoryOptions
            )
        } catch {
        }
    }
}
Update
I managed to get everything working as I wanted by:
Starting the audio session
Connecting to the video conference (at this point all audio is mono)
Forcing all output to the speaker
Forcing output back to the headphones
Obviously this is a crazy thing to have to do, but it does prove that it should work; a sketch of the last two steps is shown below.
But it would be great if anyone knew WHY this works, so that I can actually get things working properly the first time without going through all these hacky steps.
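For concreteness, here is a minimal sketch of that speaker/headphones nudge (the helper name is mine, and it assumes the session is already active and the conference connected):
import AVFoundation

// Hypothetical helper: force output to the built-in speaker, then release
// the override so the system re-routes back to the headphones.
func nudgeOutputRoute() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.overrideOutputAudioPort(.speaker) // step 3: force speaker
        try session.overrideOutputAudioPort(.none)    // step 4: hand routing back
    } catch {
        print("Route override failed: \(error.localizedDescription)")
    }
}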

Play audio through upper (phone call) speaker

I'm trying to get audio in my app to play through the upper speaker on the iPhone, the one you press to your ear during a phone call. I know it's possible, because I've played a game from the App Store ("The Heist" by "tap tap tap") that simulates phone calls and does exactly that.
I've done a lot of research online, but I'm having a surprisingly hard time finding ANYONE who has even discussed the possibility. The overwhelming majority of posts seem to be about the handsfree speaker vs plugged-in earphones, (like this and this and this), rather than the upper "phone call" speaker vs the handsfree speaker. (Part of that problem might be not having a good name for it: "phone speaker" often means the handsfree speaker at the bottom of the device, etc, so it's hard to do a really well-targeted search). I've looked into Apple's Audio Session Category Route Overrides, but those again seem to (correct me if I'm wrong) deal only with the handsfree speaker at the bottom, not the speaker at the top of the phone.
I have found ONE post that seems to be about this: link. It even provides a bunch of code, so I thought I was home free, but now I can't seem to get the code to work. For simplicity I just copied the DisableSpeakerPhone method (which if I understand it correctly should be the one to re-route audio to the upper speaker) into my viewDidLoad to see if it would work, but the first "assert" line fails, and the audio continues to play out the bottom. (I also imported the AudioToolbox Framework, as suggested in the comment, so that isn't the problem.)
Here is the main block of code I'm working with (this is what I copied into my viewDidLoad to test), although there are a few more methods in the article I linked to:
void DisableSpeakerPhone() {
    UInt32 dataSize = sizeof(CFStringRef);
    CFStringRef currentRoute = NULL;
    OSStatus result = noErr;
    AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &dataSize, &currentRoute);

    // Set the category to use the speakers and microphone.
    UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
    result = AudioSessionSetProperty(
        kAudioSessionProperty_AudioCategory,
        sizeof(sessionCategory),
        &sessionCategory
    );
    assert(result == kAudioSessionNoError);

    Float64 sampleRate = 44100.0;
    dataSize = sizeof(sampleRate);
    result = AudioSessionSetProperty(
        kAudioSessionProperty_PreferredHardwareSampleRate,
        dataSize,
        &sampleRate
    );
    assert(result == kAudioSessionNoError);

    // Default to speakerphone if a headset isn't plugged in.
    // Overriding the output audio route
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
    dataSize = sizeof(audioRouteOverride);
    result = AudioSessionSetProperty(
        kAudioSessionProperty_OverrideAudioRoute,
        dataSize,
        &audioRouteOverride);
    assert(result == kAudioSessionNoError);

    AudioSessionSetActive(YES);
}
So my question is this: can anyone either A) help me figure out why that code doesn't work, or B) offer a better suggestion for being able to press a button and route the audio up to the upper speaker?
PS I am getting more and more familiar with iOS programming, but this is my first foray into the world of AudioSessions and such, so details and code samples are much appreciated! Thank you for your help!
UPDATE:
From the suggestion of "He Was" (below) I've removed the code quoted above and replaced it with:
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord error:nil];
[[AVAudioSession sharedInstance] setActive: YES error:nil];
at the beginning of viewDidLoad. It still isn't working, though (by which I mean the audio is still coming out of the speaker at the bottom of the phone instead of the receiver at the top). Apparently the default behavior should be for AVAudioSessionCategoryPlayAndRecord to send audio out of the receiver on its own, so something is still wrong.
More specifically what I'm doing with this code is playing audio through the iPod Music Player (initialized right after the AVAudioSession lines above in viewDidLoad, for what it's worth):
_musicPlayer = [MPMusicPlayerController iPodMusicPlayer];
and the media for that iPod Music Player is chosen through an MPMediaPickerController:
- (void) mediaPicker: (MPMediaPickerController *) mediaPicker didPickMediaItems: (MPMediaItemCollection *) mediaItemCollection {
    if (mediaItemCollection) {
        [_musicPlayer setQueueWithItemCollection: mediaItemCollection];
        [_musicPlayer play];
    }
    [self dismissViewControllerAnimated:YES completion:nil];
}
This all seems fairly straightforward to me. I've got no errors or warnings, and I know that the Media Picker and Music Player are working correctly because the correct songs start playing; it's just out of the wrong speaker. Could there be a "play media using this AudioSession" method or something? Or is there a way to check which audio session category is currently active, to confirm that nothing could have switched it back? Is there a way to emphatically tell the code to USE the receiver, rather than relying on the default to do so? I feel like I'm on the one-yard line, I just need to cross that final bit...
EDIT: I just thought of a theory, wherein it's something about the iPod Music Player that doesn't want to play out of the receiver. My reasoning: it is possible to set a song playing through the official iPod app and then seamlessly control it (pause, skip, etc.) through the app I'm developing. The continuous playback from one app to the next made me think that maybe the iPod Music Player has its own audio route settings, or maybe it doesn't stop to check the settings in the new app? Does anyone who knows what they're talking about think it could be something like that?
Was struggling with this for a while too; maybe this will help someone later. You can also use the newer methods of overriding ports, since many of the methods in your sample code are actually deprecated.
So, get your shared AVAudioSession instance:
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive: YES error:nil];
The session category has to be AVAudioSessionCategoryPlayAndRecord.
You can get the current output by checking this value:
AVAudioSessionPortDescription *routePort = session.currentRoute.outputs.firstObject;
NSString *portType = routePort.portType;
And now, depending on the port you want to send audio to, simply toggle the output using:
if ([portType isEqualToString:@"Receiver"]) {
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error];
} else {
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:&error];
}
This should be a quick way to toggle the output between the speakerphone and the receiver.
You have to initialise your audio session first.
Using the C API
AudioSessionInitialize (NULL, NULL, NULL, NULL);
In iOS 6 you can use AVAudioSession methods instead (you will need to import the AVFoundation framework to use AVAudioSession):
Initialization using AVAudioSession
self.audioSession = [AVAudioSession sharedInstance];
Setting the audioSession category using AVAudioSession
[self.audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
For further research, if you want better search terms, here are the full names of the constants for the speakers:
const CFStringRef kAudioSessionOutputRoute_BuiltInReceiver;
const CFStringRef kAudioSessionOutputRoute_BuiltInSpeaker;
see Apple's docs here
But the real mystery is why you are having any trouble routing to the receiver. It's the default behaviour for the playAndRecord category. Apple's documentation of kAudioSessionOverrideAudioRoute_None:
"Specifies, for the kAudioSessionCategory_PlayAndRecord category, that output audio should go to the receiver. This is the default output audio route for this category."
Update
In your updated question you reveal that you are using the MPMusicPlayerController class. This class invokes the global music player (the same player used in the Music app). This music player is separate from your app, and so doesn't share the same audio session as your app's audioSession. Any properties you set on your app's audioSession will be ignored by the MPMusicPlayerController.
If you want control over your app's audio behaviour, you need to use an audio framework internal to your app. This would be AVAudioRecorder / AVAudioPlayer or Core Audio (Audio Queues, Audio Units or OpenAL). Whichever method you use, the audio session can be controlled either via AVAudioSession properties or via the Core Audio API. Core Audio gives you more fine-grained control, but with each new release of iOS more of it is ported over to AVFoundation, so start with that.
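As a hedged illustration of that last point, here is a minimal sketch of app-internal playback with AVAudioPlayer, which does respect your app's audio session (written with the modern Swift API rather than the iOS 6-era calls discussed here; the file URL parameter is a placeholder):
import AVFoundation

var player: AVAudioPlayer?

func playThroughAppSession(fileURL: URL) {
    let session = AVAudioSession.sharedInstance()
    do {
        // playAndRecord routes to the receiver by default, as quoted above.
        try session.setCategory(.playAndRecord, mode: .default, options: [])
        try session.setActive(true)
        player = try AVAudioPlayer(contentsOf: fileURL)
        player?.play()
    } catch {
        print("Playback setup failed: \(error.localizedDescription)")
    }
}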
Also remember that the audio session provides a way for you to describe the intended behaviour of your app's audio in relation to the total iOS environment, but it will not hand you total control. Apple takes care to ensure that the user's expectations of their device's audio behaviour remain consistent between apps, and when one app needs to interrupt another's audio stream.
Update 2
In your edit you allude to the possibility of audio sessions checking other apps' audio session settings. That does not happen1. The idea is that each app sets its preferences for its own audio behaviour using its self-contained audio session. The operating system arbitrates between conflicting audio requirements when more than one app competes for an unshareable resource, such as the internal microphone or one of the speakers, and will usually decide in favour of the behaviour that is most likely to meet the user's expectations of the device as a whole.
The MPMusicPlayerController class is slightly unusual in that it gives one app some degree of control over another. In this case, your app is not playing the audio; it is sending a request to the Music Player to play audio on your behalf. Your control is limited by the extent of the MPMusicPlayerController API. For more control, your app will have to provide its own implementation of audio playback.
In your comment you wonder:
Could there be a way to pull an MPMediaItem from the MPMusicPlayerController and then play them through the app-specific audio session, or anything like that?
That's a (big) subject for a new question. Here is a good starting read (from Chris Adamson's blog): From iPod Library to PCM Samples in Far Fewer Steps Than Were Previously Necessary - it's the sequel to From iPhone media library to PCM samples in dozens of confounding and potentially lossy steps - and it should give you a sense of the complexity you will face. This may have got easier since iOS 6, but I wouldn't be so sure! A rough sketch of the shape of that approach follows the footnote below.
1 There is an otherAudioPlaying read-only BOOL property in iOS 6, but that's about it
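For orientation only, here is a minimal sketch of the approach those articles describe: query the item's assetURL and pull raw PCM with AVAssetReader, so your app can play the samples through its own audio session (modern Swift, minimal error handling; the helper name is mine):
import AVFoundation
import MediaPlayer

// Hypothetical helper: read an MPMediaItem's audio track as linear PCM.
func readPCM(from item: MPMediaItem) throws {
    // DRM-protected items return nil here and cannot be read this way.
    guard let url = item.assetURL else { return }
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .audio).first else { return }
    let reader = try AVAssetReader(asset: asset)
    let pcmSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: pcmSettings)
    reader.add(output)
    guard reader.startReading() else { return }
    while let sampleBuffer = output.copyNextSampleBuffer() {
        // Hand each CMSampleBuffer of PCM data to playback or processing code.
        _ = sampleBuffer
    }
}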
Swift 3.0 Code
func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    let routePort: AVAudioSessionPortDescription? = audioSession.currentRoute.outputs.first
    let portType: String? = routePort?.portType
    if portType == "Receiver" {
        try? audioSession.overrideOutputAudioPort(.speaker)
    } else {
        try? audioSession.overrideOutputAudioPort(.none)
    }
}
Swift 5.0
func activateProximitySensor(isOn: Bool) {
    let device = UIDevice.current
    device.isProximityMonitoringEnabled = isOn
    if isOn {
        NotificationCenter.default.addObserver(self, selector: #selector(proximityStateDidChange), name: UIDevice.proximityStateDidChangeNotification, object: device)
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord)
            try session.setActive(true)
            try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
        } catch {
            print("\(#file) - \(#function) error: \(error.localizedDescription)")
        }
    } else {
        NotificationCenter.default.removeObserver(self, name: UIDevice.proximityStateDidChangeNotification, object: device)
    }
}
@objc func proximityStateDidChange(notification: NSNotification) {
    if let device = notification.object as? UIDevice {
        print(device)
        let session = AVAudioSession.sharedInstance()
        do {
            let routePort: AVAudioSessionPortDescription? = session.currentRoute.outputs.first
            let portType = routePort?.portType
            if let type = portType, type.rawValue == "Receiver" {
                try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
            } else {
                try session.overrideOutputAudioPort(AVAudioSession.PortOverride.none)
            }
        } catch {
            print("\(#file) - \(#function) error: \(error.localizedDescription)")
        }
    }
}

Native video in Air on iOS stopping sound and microphone

I'm trying to play a native video in a StageWebView in an Air for iPad app that plays sound and requires user interaction via the microphone.
Everything seems to work, but when I start playing the video, sound and microphone stop.
If I dispose the StageWebView, sound and mic come back, but only after 15 seconds (and I need sound and mic to work straight after the StageWebView is released).
I tried to get this working on an iOS 5 iPad 1 and on an iOS 6 iPad 2, using AIR 3.4, 3.5 and 3.6 beta. I tried switching the mute button of the iPad, and I also tried changing the SoundMixer.audioPlaybackMode to Media and Ambient.
But it didn't work and I'm stuck.
Here is my code that deals with the microphone:
var microphone:Microphone = Microphone.getMicrophone();
microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, micHandler);

private function micHandler(event : SampleDataEvent) : void {
    trace("mic is working !");
}
For the audio:
_snd = new Sound();
_snd.load(new URLRequest(path));
_sndChannel = _snd.play();

private function soundStopHandler(event : MouseEvent) : void {
    if (_sndChannel) _sndChannel.stop();
}
And for the video player:
_webview = new StageWebView();
_webview.stage = stage;
_webview.viewPort = new Rectangle(10, 120, 480, 300);
_webview.loadURL(path);

private function videoStopHandler(event : MouseEvent) : void {
    if (_webview) {
        _webview.dispose();
        _webview = null;
    }
}
Did anyone face this problem before me? Is there anything I forgot or did in a wrong way?
Maybe this problem is related to the iOS system.
From an iOS perspective, as far as I know, this is officially not possible: a recording session and a playback session cannot occupy the audio hardware at the same time unless the session category explicitly allows both. Once you start recording, any playback has to be reconciled with that session.
Refer to Apple's documentation: Audio Session
AVAudioSessionCategoryPlayAndRecord or the equivalent kAudioSessionCategory_PlayAndRecord - Use this category for an application that inputs and outputs audio. The input and output need not occur simultaneously, but can if needed. This is the category to use for audio chat applications.
If there is no problem in your AIR code, this is most likely the reason.
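For reference, here is what the quoted category looks like when set natively with AVFoundation. This is a minimal sketch; AIR does not expose this API directly, so treat it as an illustration of the session model rather than a fix you can apply from ActionScript:
import AVFoundation

// With playAndRecord active, microphone input and audio output
// can run in the same audio session at the same time.
func configurePlayAndRecord() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)
}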

iPad HTML5: load multiple audio files

I want to preload multiple audio files. To do this, I tried to create multiple Audio elements in JavaScript.
function loadAudio(){
    audio1 = new Audio();
    audio1.addEventListener('canplaythrough', isLoaded, false);
    audio1.src = 'assets/audio/Maid with the Flaxen Hair.mp3';
    audio1.load();
}

function isLoaded(){
    audio1.removeEventListener('canplaythrough', isLoaded);
    alert('maid');
    alert('start audio 2');
    audio2 = new Audio();
    audio2.addEventListener('canplaythrough', isLoaded2, false);
    audio2.src = 'assets/audio/Kalimba.mp3';
    audio2.load();
}

function isLoaded2(){
    alert('kalimba');
}
I only get the first alert, the second one never works.
I found that I can only play one sound at a time, but can I also only load one? Does the script need another user input for every new Audio object I create? Or does anyone have another way to create a preloader for audio?
This could have to do with the limitations on autoplaying (and, as far as I know, autoloading) audio files on iOS devices (see here: Limitations of HTML5 Audio on iOS 4?).
In short: you cannot programmatically start audio playback on iOS devices; it is only allowed from within event handlers for trusted (= initiated by the user) events.
You can play multiple audio files on an iOS device in the following way:
$("#btn").on("click", function(){
    alert("audio clicked");
    var aud = new Audio();
    aud.src = "music.mp3";
    aud.play();
    var aud1 = new Audio();
    aud1.src = "intro.mp3";
    aud1.play();
});
The above code plays multiple audio files on the device because both play() calls happen inside a user-initiated click handler.

HTML5 Video Volume

I'm currently working on an HTML5 video player. I have it working fully everywhere except on the iPad.
Basically, I can control everything except the sound. I have a mute button that works fine in Google Chrome, Firefox 3.6 and Safari on Mac OS, but on the iPad, no matter what value I put in video.volume, nothing changes.
Did anybody get it working properly?
Here's my HTML code:
<video src="video_url" width="608" height="476" autobuffer="autobuffer" id="html5-player" preload>
    Your browser doesn't support HTML5.
</video>
And here's the JavaScript:
var muted = false;
$j('.player-mute').click(function(){
    if (muted) {
        videoPlayer.volume = 1;
        muted = false;
    } else {
        videoPlayer.volume = 0;
        muted = true;
    }
});
The volume property on the iOS devices is read-only according to Apple's documentation:
On iOS devices, the audio level is always under the user’s physical control. The volume property is not settable in JavaScript. Reading the volume property always returns 1.
If you read the iPad HTML5 video documentation, it says that only the user of the device can start video and change the volume.
As @dobrin said, the volume property is read-only on iOS for the video.
However, you can use the muted property, which allows you to mute or unmute the video; most of the time this solves the problem.
So you can't set a specific volume between 0 and 1, but you can set the volume to either 0 or 1; Apple assumes that you will use the physical buttons for setting the volume.
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/muted
var muted = false;
$j('.player-mute').click(function(){
    if (muted) {
        videoPlayer.muted = false;
        muted = false;
    } else {
        videoPlayer.muted = true;
        muted = true;
    }
});
