Distorted audio in iOS 7.1 with WebAudio API - ios

On iOS 7.1, I keep getting a buzzing / noisy / distorted sound when playing back audio using the Web Audio API. It sounds distorted like this, instead of normal like this.
The same files are fine when using HTML5 audio. It all works fine on desktop (Firefox, Chrome, Safari.)
EDIT:
The audio is distorted in the iOS Simulator versions iOS 7.1, 8.1, 8.2. The buzzing sound often starts before I even playback anything.
The audio is distorted on a physical iPhone running iOS 7.1, in both Chrome and Safari.
The audio is fine on a physical iPhone running iOS 8.1, in both Chrome and Safari.
i.e. the buzzing audio occurs on iOS 7.1 only.
Howler.js is not the issue. The problem is still there using pure JS like so:
var context;
var sound;
var extension = '.' + ( new Audio().canPlayType( 'audio/ogg' ) !== '' ? 'ogg' : 'mp3' );

/** Test for WebAudio API support **/
try {
    // still needed for Safari
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    // create an AudioContext
    context = new AudioContext();
} catch(e) {
    // API not supported
    throw new Error( 'Web Audio API not supported.' );
}

function loadSound( url ) {
    var request = new XMLHttpRequest();
    request.open( 'GET', url, true );
    request.responseType = 'arraybuffer';
    request.onload = function() {
        // request.response is encoded... so decode it now
        context.decodeAudioData( request.response, function( buffer ) {
            sound = buffer;
        }, function( err ) {
            throw new Error( err );
        });
    };
    request.send();
}

function playSound( buffer ) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect( context.destination );
    source.start( 0 );
}

loadSound( '/tests/Assets/Audio/En-us-hello' + extension );

$(document).ready(function() {
    $( '#clickme' ).click( function( event ) {
        playSound( sound );
    });
}); /* END .ready() */
A live version of this code is available here: Web Audio API - Hello world
Google did not bring up any result about such a distorted sound issue on iOS 7.1.
Has anyone else run into it? Should I file a bug report to Apple?

I believe the issue is caused by the audioContext.sampleRate property being reset, which seems to happen after the browser/OS plays something recorded at a different sampling rate.
I've devised the following workaround, which silently plays a short wav file recorded at the sampling rate the device is currently playing back at:
"use strict";
var getData = function( context, filePath, callback ) {
    var source = context.createBufferSource(),
        request = new XMLHttpRequest();
    request.open( "GET", filePath, true );
    request.responseType = "arraybuffer";
    request.onload = function() {
        var audioData = request.response;
        context.decodeAudioData(
            audioData,
            function( buffer ) {
                source.buffer = buffer;
                callback( source );
            },
            function( e ) {
                console.log( "Error with decoding audio data" + e.err );
            }
        );
    };
    request.send();
};

module.exports = function() {
    var AudioContext = window.AudioContext || window.webkitAudioContext,
        context = new AudioContext();
    getData(
        context,
        "path/to/short/file.wav",
        function( bufferSource ) {
            var gain = context.createGain();
            gain.gain.value = 0;
            bufferSource.connect( gain );
            gain.connect( context.destination );
            bufferSource.start( 0 );
        }
    );
};
Obviously, if some of the devices use different sampling rates, you would need to detect the rate and use a specific file for each one.
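For example, inside the function above you could branch on the context's reported sample rate before picking which warm-up file to load (a rough sketch; the file names are hypothetical placeholders):
// Rough sketch: pick a warm-up file matching the device's current output rate.
// The file names are hypothetical placeholders.
var warmupFile = context.sampleRate === 48000
    ? "path/to/short-48000.wav"
    : "path/to/short-44100.wav";

getData( context, warmupFile, function( bufferSource ) {
    var gain = context.createGain();
    gain.gain.value = 0; // keep the warm-up playback silent
    bufferSource.connect( gain );
    gain.connect( context.destination );
    bufferSource.start( 0 );
});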

It looks like iOS 6+ Safari defaults to a sample rate of 48000. If you type this into the developer console when you first open Mobile Safari, you'll get 48000:
var ctx = new window.webkitAudioContext();
console.log(ctx.sampleRate);
Further Reference: https://forums.developer.apple.com/thread/20677
Then, if you close the initial context on load with ctx.close(), the next context you create will use the sample rate most other browsers use (44100), and sound will play without distortion.
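In code, that amounts to something like this minimal sketch of the behaviour described above (the close() guard is only there in case an older WebKit build doesn't implement it):
var AudioCtor = window.AudioContext || window.webkitAudioContext;

var throwaway = new AudioCtor();   // often reports 48000 in a freshly opened Mobile Safari
if (throwaway.close) {
    throwaway.close();             // dispose of the 48000 Hz context
}

var ctx = new AudioCtor();         // the replacement context typically comes back at 44100
console.log(ctx.sampleRate);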
Credit to this for pointing me in the right direction (and in case the above no longer works in the future): https://github.com/Jam3/ios-safe-audio-context/blob/master/index.js
The function, as of the post date:
function createAudioContext (desiredSampleRate) {
  var AudioCtor = window.AudioContext || window.webkitAudioContext

  desiredSampleRate = typeof desiredSampleRate === 'number'
    ? desiredSampleRate
    : 44100
  var context = new AudioCtor()

  // Check if hack is necessary. Only occurs in iOS6+ devices
  // and only when you first boot the iPhone, or play a audio/video
  // with a different sample rate
  if (/(iPhone|iPad)/i.test(navigator.userAgent) &&
      context.sampleRate !== desiredSampleRate) {
    var buffer = context.createBuffer(1, 1, desiredSampleRate)
    var dummy = context.createBufferSource()
    dummy.buffer = buffer
    dummy.connect(context.destination)
    dummy.start(0)
    dummy.disconnect()

    context.close() // dispose old context
    context = new AudioCtor()
  }

  return context
}
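Usage is then a drop-in replacement for new AudioContext(); 44100 here is just the rate most assets are encoded at, so pass whatever your files actually use:
var context = createAudioContext(44100)
console.log(context.sampleRate) // should now report 44100 on affected iOS devices
// build the rest of your node graph against `context` as usual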

Related

Play multiple Audio files on Safari at once

I want to play multiple audio files simultaneously on iOS.
On the click of a button I create multiple instances of an Audio file and put them into an array.
let audio = new Audio('path.wav');
audio.play().then(() => {
    audio.pause();
    possibleAudiosToPlay.push(audio);
});
After a while I play them all:
possibleAudiosToPlay.forEach(el => {
    el.currentTime = 0;
    el.play();
});
This plays all the audio files, but on iOS, when a new one begins it stops the one that was already playing.
Apple's developer guide says this isn't possible at all with HTML5 audio:
Playing multiple simultaneous audio streams is also not supported.
But can this be achieved with the Web Audio API?
There isn't anything written about it in Apple's developer guide.
Yes, you can with the Web Audio API. You have to create an AudioBufferSourceNode for each of your audio sources, since each source can only be played once (you can't stop it and play it again).
const AudioContext = window.AudioContext || window.webkitAudioContext;
const ctx = new AudioContext();

const audioPaths = [
    "path/to/audio_file1.wav",
    "path/to/audio_file2.wav",
    "path/to/audio_file3.wav"
];

let promises = [];

// utility function to load an audio file and resolve it as a decoded audio buffer
function getBuffer(url, audioCtx) {
    return new Promise((resolve, reject) => {
        if (!url) {
            reject("Missing url!");
            return;
        }
        if (!audioCtx) {
            reject("Missing audio context!");
            return;
        }
        let xhr = new XMLHttpRequest();
        xhr.open("GET", url);
        xhr.responseType = "arraybuffer";
        xhr.onload = function() {
            let arrayBuffer = xhr.response;
            audioCtx.decodeAudioData(arrayBuffer, decodedBuffer => {
                resolve(decodedBuffer);
            });
        };
        xhr.onerror = function() {
            reject("An error occurred.");
        };
        xhr.send();
    });
}

audioPaths.forEach(p => {
    promises.push(getBuffer(p, ctx));
});

// Once all your sounds are loaded, create an AudioBufferSource for each one and start sound
Promise.all(promises).then(buffers => {
    buffers.forEach(b => {
        let source = ctx.createBufferSource();
        source.buffer = b;
        source.connect(ctx.destination);
        source.start();
    });
});
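If you want to replay any of those sounds later, the same constraint applies: keep the decoded buffers around and create a fresh AudioBufferSourceNode each time, instead of trying to restart an old one. A minimal sketch building on the code above (it assumes you stash the buffers array from Promise.all in a variable):
// Sketch: keep the decoded buffers and create a new source node per replay.
let decodedBuffers = [];

Promise.all(promises).then(buffers => {
    decodedBuffers = buffers; // keep the decoded buffers for later replays
});

// Later, e.g. on some user action:
function replay(buffer) {
    let source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.start();
}
// replay(decodedBuffers[0]);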

onaudioprocess not called on ios11

I am trying to get audio capture from the microphone working in Safari on iOS 11, after support was recently added.
However, the onaudioprocess callback is never called. Here's an example page:
<html>
<body>
    <button onclick="doIt()">DoIt</button>
    <ul id="logMessages">
    </ul>
    <script>
        function debug(msg) {
            if (typeof msg !== 'undefined') {
                var logList = document.getElementById('logMessages');
                var newLogItem = document.createElement('li');
                if (typeof msg === 'function') {
                    msg = Function.prototype.toString(msg);
                } else if (typeof msg !== 'string') {
                    msg = JSON.stringify(msg);
                }
                var newLogText = document.createTextNode(msg);
                newLogItem.appendChild(newLogText);
                logList.appendChild(newLogItem);
            }
        }

        function doIt() {
            var handleSuccess = function (stream) {
                var context = new AudioContext();
                var input = context.createMediaStreamSource(stream);
                var processor = context.createScriptProcessor(1024, 1, 1);

                input.connect(processor);
                processor.connect(context.destination);

                processor.onaudioprocess = function (e) {
                    // Do something with the data, i.e Convert this to WAV
                    debug(e.inputBuffer);
                };
            };

            navigator.mediaDevices.getUserMedia({audio: true, video: false})
                .then(handleSuccess);
        }
    </script>
</body>
</html>
On most platforms, you will see items being added to the messages list as the onaudioprocess callback is called. However, on iOS, this callback is never called.
Is there something else that I should do to try and get it called on iOS 11 with Safari?
There are two problems. The main one is that Safari on iOS 11 seems to automatically suspend new AudioContexts that aren't created in response to a tap. You can resume() them, but only in response to a tap.
(Update: Chrome mobile also does this, and Chrome desktop will have the same limitation starting in version 70 / December 2018.)
So, you have to either create it before you get the MediaStream, or else get the user to tap again later.
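If you take the second route, the resume() call itself has to happen inside the tap handler, roughly like this sketch (it assumes context is an AudioContext you created earlier; the button id is hypothetical):
// Sketch: resume a suspended AudioContext from a user gesture.
// Assumes `context` already exists; the button id is hypothetical.
document.getElementById('resume-button').addEventListener('click', function () {
    if (context.state === 'suspended') {
        context.resume().then(function () {
            console.log('AudioContext resumed:', context.state);
        });
    }
});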
The other issue with your code is that AudioContext is prefixed as webkitAudioContext in Safari.
Here's a working version:
<html>
<body>
    <button onclick="beginAudioCapture()">Begin Audio Capture</button>
    <script>
        function beginAudioCapture() {
            var AudioContext = window.AudioContext || window.webkitAudioContext;
            var context = new AudioContext();
            var processor = context.createScriptProcessor(1024, 1, 1);
            processor.connect(context.destination);

            var handleSuccess = function (stream) {
                var input = context.createMediaStreamSource(stream);
                input.connect(processor);

                var receivedAudio = false;
                processor.onaudioprocess = function (e) {
                    // This will be called multiple times per second.
                    // The audio data will be in e.inputBuffer
                    if (!receivedAudio) {
                        receivedAudio = true;
                        console.log('got audio', e);
                    }
                };
            };

            navigator.mediaDevices.getUserMedia({audio: true, video: false})
                .then(handleSuccess);
        }
    </script>
</body>
</html>
(You can set the onaudioprocess callback sooner, but then you get empty buffers until the user approves of microphone access.)
Oh, and one other iOS bug to watch out for: Safari on the iPod touch (as of iOS 12.1.1) reports that it does not have a microphone (it does). So, getUserMedia will incorrectly reject with an Error: Invalid constraint if you ask for audio there.
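In practice that just means adding a .catch to the getUserMedia call so the failure is at least visible (a sketch based on the example above):
navigator.mediaDevices.getUserMedia({audio: true, video: false})
    .then(handleSuccess)
    .catch(function (err) {
        // On the affected iPod touch builds this rejection mentions "Invalid constraint"
        console.warn('getUserMedia failed:', err.name, err.message);
    });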
FYI: I maintain the microphone-stream package on npm that does this for you and provides the audio in a Node.js-style ReadableStream. It includes this fix, if you or anyone else would prefer to use that over the raw code.
Tried it on iOS 11.0.1, and unfortunately this problem still isn't fixed.
As a workaround, I wonder if it makes sense to replace the ScriptProcessor with a function that takes the stream data from a buffer and then processes it every x milliseconds. But that's a big change to the functionality.
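One way to read that idea, as a rough and untested sketch: tap the stream with an AnalyserNode and poll it on a timer instead of relying on onaudioprocess (context and input here are the AudioContext and MediaStreamSource from the example above):
// Rough, untested sketch of the polling idea: read time-domain samples
// from an AnalyserNode every 100 ms instead of waiting for onaudioprocess.
var analyser = context.createAnalyser();
input.connect(analyser);
var samples = new Uint8Array(analyser.fftSize);
setInterval(function () {
    analyser.getByteTimeDomainData(samples); // unsigned bytes centred on 128
    // ...process `samples` here
}, 100);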
Just wondering... do you have the setting enabled in Safari settings? It comes enabled by default in iOS11, but maybe you just disabled it without noticing.

How to get webcam video feed in a firefox addon?

I am currently developing an add-on where the requirement is to capture webcam video. I did some testing and noticed that navigator.mediaDevices.getUserMedia() is available within the panel, so I have written the following content script for the panel to get the webcam video feed from the add-on.
var mediastream;
var mediarecorder;

// Get the instance of mediaDevices object to use.
navigator.mediaDevices = navigator.mediaDevices || ((navigator.mozGetUserMedia || navigator.webkitGetUserMedia) ? {
    getUserMedia: function(c) {
        return new Promise(function(y, n) {
            (navigator.mozGetUserMedia ||
             navigator.webkitGetUserMedia).call(navigator, c, y, n);
        });
    }
} : null);

function startVideoCapture(width, height, framerate) {
    // Check if the browser supports video recording
    if (!navigator.mediaDevices) {
        return;
    }
    // Lets initialize the video settings for use for our video recording session
    var constraints = { audio: false, video: { width: 640, height: 320, framerate: 25 } };
    // Make request to start video capture
    navigator.mediaDevices.getUserMedia(constraints)
        .then(function(stream) {
            // Lets initialize the timestamp for this video
            var date = new Date();
            var milliseconds = "000" + date.getMilliseconds();
            var timestamp = date.toLocaleFormat("%Y-%m-%d %H:%M:%S.") + milliseconds.substr(-3);
            // Lets make the stream globally available so that we will be able to control it later.
            mediastream = stream;
            // Lets display the available stream in the video element available inside the panel.
            var video = document.querySelector('video');
            video.src = window.URL.createObjectURL(stream);
            video.onloadedmetadata = function(e) {
                video.play();
            };
            // We are not here to just show the video to screen. Lets get a media recorder to store the video into memory
            mediarecorder = new MediaRecorder(stream);
            // Lets decide what to do with the recorded video once we are done with the recording
            mediarecorder.ondataavailable = function(evt) {
                // recorded video will be available as a blob in evt.data object.
                // The only way to use it properly is through FileReader Object
                var reader = new FileReader();
                // Lets decide what we are going to do with the data that we will read from blob
                reader.onloadend = function() {
                    // create a video object containing the timestamp and the binary video string
                    var videoObject = new Object();
                    videoObject.timestamp = timestamp;
                    videoObject.video = reader.result;
                    // send the video to the main script for safe keeping
                    self.port.emit("videoAvailable", videoObject);
                };
                // instruct the FileReader to start reading the blob
                reader.readAsBinaryString(evt.data);
            };
            // Lets start the video capture
            mediarecorder.start();
        })
        .catch(function(err) {
            self.port.emit("VideoError", err);
        });
}

function stopVideoCapture() {
    if (mediarecorder !== undefined && mediarecorder !== null) {
        mediarecorder.stop();
    }
    if (mediastream !== undefined && mediastream !== null) {
        mediastream.stop();
    }
}

function updateVideoSettings(settings) {
    stopVideoCapture();
    startVideoCapture(settings.width, settings.height, settings.framerate);
}

self.port.on("VideoPreferenceUpdated", updateVideoSettings);

// Start video capture
startVideoCapture(self.options.width, self.options.height, self.options.framerate);
Now, the problem is that the code works perfectly when run from a webpage, i.e. if I open the panel.html file directly in the browser with proper adjustment of the self.options and self.port lines. But when I use the code as the content script for the panel in my add-on, I get the following error:
JavaScript error: resource:///modules/webrtcUI.jsm, line 186: TypeError: stringBundle is undefined
That is an error from a built-in JSM module in Firefox. Is there a way I can get past that error, or another way to get a webcam video feed in my add-on?
Thanks

Cordova Phonegap inappbrowser losing File Api functionality

I am developing an app using the Web Audio API. I have discovered that there is a memory leak in the way Safari handles audio: it doesn't garbage collect the AudioContext correctly. For this reason I wish to load a new page, have that page create the AudioContext, complete the operation, and then close the window so that the memory is released.
I have done the following to achieve this.
ref = window.open('record.html', '_self'); will open the record.html page in the Cordova WebView, according to https://wiki.apache.org/cordova/InAppBrowser:
window.open('local-url.html');           // loads in the Cordova WebView
window.open('local-url.html', '_self');  // loads in the Cordova WebView
The record.html page loads a JavaScript file that runs the operations I want. Here is the recordLoad.js file, which makes some calls to native operations (the native API is only available when loaded in the Cordova WebView, and as you can see I need to access the file system, so this is the only way I can see to do it):
window.onload = createAudioContext;
ref = null;

function createAudioContext(){
    console.log('createAudioContext');
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
    window.URL = window.URL || window.webkitURL;
    audioContext = new AudioContext();
    getDirectory();
}

function getDirectory(){
    console.log('getDirectory');
    window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, getFileSystem, fail);
}

function getFileSystem(directory){
    console.log('getFileSystem');
    var audioPath = localStorage.getItem('audioPath');
    directory.root.getFile(audioPath, null, getVocalFile, fail);
}

function getVocalFile(fileEntry){
    console.log('getVocalFile');
    fileEntry.file(readVocalsToBuffer, fail);
}

function readVocalsToBuffer(file){
    console.log('readVocalsToBuffer');
    var reader = new FileReader();
    reader.onloadend = function(evt){
        var x = audioContext.decodeAudioData(evt.target._result, function(buffer){
            if(!buffer){
                console.log('error decoding file to Audio Buffer');
                return;
            }
            window.voiceBuffer = buffer;
            buffer = null;
            loadBuffers();
        });
    };
    reader.readAsArrayBuffer(file);
}

//web
function loadBuffers(){
    console.log('loadBuffers');
    var srcSong = localStorage.getItem('srcSong');
    try{
        var bufferLoader = new BufferLoader(
            audioContext,
            [
                "." + srcSong
            ],
            createOffLineContext
        );
        bufferLoader.load();
    }
    catch(e){
        console.log(e.message);
    }
}

//
function createOffLineContext(bufferList){
    console.log('createOfflineContext');
    offline = new webkitOfflineAudioContext(2, window.voiceBuffer.length, 44100);
    var vocalSource = offline.createBufferSource();
    vocalSource.buffer = window.voiceBuffer;
    vocalSource.connect(offline.destination);
    var backing = offline.createBufferSource();
    backing.buffer = bufferList[0];
    backing.connect(offline.destination);
    vocalSource.start(0);
    backing.start(0);
    offline.oncomplete = function(ev){
        bufferList = null;
        console.log('audioContext');
        console.log(audioContext);
        audioContext = null;
        console.log(audioContext);
        vocalSource.stop(0);
        backing.stop(0);
        vocalSource.disconnect(0);
        backing.disconnect(0);
        vocalSource = null;
        backing = null;
        window.voiceBuffer = null;
        window.renderedFile = ev.renderedBuffer;
        var bufferR = ev.renderedBuffer.getChannelData(0);
        var bufferL = ev.renderedBuffer.getChannelData(1);
        var interleaved = interleave(bufferL, bufferR);
        var dataview = encodeWAV(interleaved);
        window.audioBlob = new Blob([dataview], {type: 'Wav'});
        saveFile();
    };
    offline.startRendering();
}
This file is very long, but once it has finished mixing the two audio buffers it writes a new file to the file system. When that operation is complete, I use:
function gotFileWriter(writer){
    console.log('gotFileWriter');
    writer.onwriteend = function(evt){
        console.log('onwriteEnd');
        console.log(window.audioBlob);
        delete window.audioBlob;
        console.log(window.audioBlob);
        // checkDirectory();
        var ref = window.open('index.html', '_self');
        // ref.addEventListener('exit', windowClose);
    };
    writer.write(audioBlob);
}
This returns me to the original index.html file and solves the memory issues. However, when I try to run the same operation a second time, i.e. load record.html again and run recordLoad.js, I receive the error ReferenceError: Can't find variable: LocalFileSystem.
It would appear that on reloading index.html some, but not all, of the links to the Cordova API have been lost. I can still, for example, use the Media API but not the File API. I understand that opening and closing windows is a bit of a hacky way to solve the memory leak, but I cannot find any other way of doing it. I really need some help with this, so any pointers are very welcome.

AIR4: How to restore audio in iOS 8 (beta 5) after requesting mic access?

Problem: iOS 8 (beta 5) fades out all sound output in my app after listening for sample data, and never restores it, but only if sound has been played before requesting microphone sample data.
Another user noted that this behaviour stems from requesting microphone access, but sound plays as expected in our released app on iOS 7 in both of the cases below, and the workaround of closing and reopening the app isn't viable since microphone recording is a recurring part of our app.
Conditions:
Flex 4.6.0 & AIR 4.0
iPad 3 (MD335LL/A)
iOS 8 (12A4345D)
Both test cases assume that microphone permission has been granted.
Test case 0:
Play sound
Stop sound channel
Audio stops
Connect to microphone, remove listener once sample data received
Attempt to play sound
No audible output, nor does the sound complete event ever fire
Test case 1:
Connect to microphone, remove listener once sample data received
Sound plays without a problem and the sound complete event is fired
Sample code:
protected var _url:String = 'audio/organfinale.mp3';
protected var _sound:Sound;
protected var _soundChannel:SoundChannel;
protected var _microphone:Microphone;

public function init():void
{
    initSound();
}

/** SOUND **/
protected function initSound():void
{
    // Set up to mimic app
    SoundMixer.audioPlaybackMode = AudioPlaybackMode.MEDIA;
    _sound = new Sound( new URLRequest( _url ) );
    _sound.addEventListener( Event.COMPLETE, onSoundLoaded );
}

protected function onSoundLoaded( e:Event ):void
{
    _sound.removeEventListener( Event.COMPLETE, onSoundLoaded );
    switch ( 0 )
    {
        // Will cut audio completely, prevent audio dispatch
        case 0:
            playSound();
            setTimeout( initMicrophone, 1500 );
            break;
        // Works perfectly (because no overlap between sound/microphone?)
        case 1:
            initMicrophone();
            break;
    }
}

protected function playSound():void
{
    trace( 'Play sound' );
    if ( _soundChannel && _soundChannel.hasEventListener( Event.SOUND_COMPLETE ) )
        _soundChannel.removeEventListener( Event.SOUND_COMPLETE, onSoundCompleted );
    _soundChannel = _sound.play( 0 );
    _soundChannel.addEventListener( Event.SOUND_COMPLETE, onSoundCompleted, false, 0, true );
}

protected function onSoundCompleted( e:Event ):void
{
    trace( "Sound complete" );
}

/** MICROPHONE **/
protected function initMicrophone():void
{
    if ( Microphone.isSupported )
    {
        // Testing pre-emptive disposal of sound that might be conflicting with microphone in iOS 8
        if ( _soundChannel )
        {
            // Tested this, but it throws an error because sound is not streaming
            // _sound.close();
            _soundChannel.removeEventListener( Event.SOUND_COMPLETE, onSoundCompleted );
            _soundChannel.stop();
            _soundChannel = null;
            // Instead the sound will be cut abruptly, and will fail to dispatch the complete event
        }
        _microphone = Microphone.getMicrophone();
        _microphone.setSilenceLevel( 0 );
        // _microphone.setUseEchoSuppression(true);
        // _microphone.gain = 50;
        // _microphone.rate = 44;
        _microphone.addEventListener( SampleDataEvent.SAMPLE_DATA, onSampleDataReceived, false, 0, true );
    }
    else
    {
        trace( 'Microphone is not supported!' );
    }
}

protected function onSampleDataReceived( e:SampleDataEvent ):void
{
    trace( 'Sample data received' );
    _microphone.removeEventListener( SampleDataEvent.SAMPLE_DATA, onSampleDataReceived );
    _microphone = null;
    setTimeout( playSound, 1500 );
}
Notes:
I stopped the sound channel before adding the mic listener, assuming that might have been causing a conflict, but it made no difference.
I've tested using the same device/OS after compiling the app with Flex 4.6.0 & AIR 14.0, and the problem persists.
By comparison, testing this app on iOS 7 worked in both cases.
Any ideas?
Update 1: Bug has been logged here: https://bugbase.adobe.com/index.cfm?event=bug&id=3801262
Update 2: Bug is fixed as of AIR 15.0.0.274: http://labs.adobe.com/downloads/air.html
