AVFoundation.AVAudioPlayer stops randomly - iOS

I'm trying to play multiple sounds at the same time. However, sometimes a sound just stops playing or never starts at all.
I have an event handler that receives an event when a sound effect should be played:
void HandlePlaySound (object sender, EventArgs e)
{
    this.InvokeOnMainThread (() => {
        ...
        [set url to path]
        ...
        MonoTouch.AVFoundation.AVAudioPlayer player = MonoTouch.AVFoundation.AVAudioPlayer.FromUrl (url);
        player.Play ();
    });
}
This works fine most of the time, but when two sounds get triggered at the same time, it seems like one of them, or both, will be killed. I must be doing something really wrong here.
Is there a more correct way of playing sounds in an iPhone app? Each sound is supposed to play to the end, and multiple sounds may be playing at the same time.

If I were to guess, I'd say that sometimes the GC comes in and collects the player, which has gone out of scope, causing your random stop behaviour. I found a stable solution: first establish how many simultaneous audio streams you'd like to be able to play, then enforce that limit:
// I'd like a maximum of 5 simultaneous audio streams
Queue<AVAudioPlayer> players = new Queue<AVAudioPlayer> (5);

void PlayAudio (string fileName)
{
    NSUrl url = NSUrl.FromFilename (fileName);
    AVAudioPlayer player = AVAudioPlayer.FromUrl (url);

    if (players.Count == 5) {
        players.Dequeue ().Dispose ();
    }

    players.Enqueue (player);
    player.Play ();
}
// In my example, I'll select files from my Sounds folder
// (containing a couple of .wav, a couple of .mp3 and an .aif)
string[] files;
int fileIndex = 0;

string GetNextFileName ()
{
    if (files == null)
        files = Directory.GetFiles ("Sounds");

    if (fileIndex == files.Length)
        fileIndex = 0;

    return files[fileIndex++];
}

partial void OnPlayButtonTapped (NSObject sender)
{
    string fileName = GetNextFileName ();
    PlayAudio (fileName);
}
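If you'd rather not cap the number of streams, another option is to keep a strong reference to each player until it finishes and only then dispose it. A minimal sketch, assuming MonoTouch's AVAudioPlayer exposes the FinishedPlaying event (the binding for the audioPlayerDidFinishPlaying: delegate callback):

List<AVAudioPlayer> activePlayers = new List<AVAudioPlayer> ();

void PlayToCompletion (string fileName)
{
    AVAudioPlayer player = AVAudioPlayer.FromUrl (NSUrl.FromFilename (fileName));

    // Root the player in a collection so the GC cannot collect it mid-play
    activePlayers.Add (player);

    player.FinishedPlaying += (sender, args) => {
        activePlayers.Remove (player);
        player.Dispose ();
    };

    player.Play ();
}

Either way, the key point is that something must hold a reference to the player for the whole duration of playback.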

Related

audioplayer playing inconsistent / weird audio (URL mp3 link) result in Flutter

Following the instructions from:
https://codingwithjoe.com/playing-audio-from-the-web-and-http/ (using his example and AudioProvider class)
I have been using:
audioplayer: ^0.8.1
audioplayer_web: ^0.7.1
to play audio from an HTTPS link.
The problem is that it has some weird, inconsistent effects:
- after playing a few audio files, it keeps playing the same audio even though a new URL is loaded
- after playing a few audio files, the sound is garbled, as if part of it is cut with another audio clip.
What is a good audio player for Flutter that accepts a URL and produces consistent results?
The audioplayer plugin from your example works great and has good features. From what you are describing, it seems like you're not closing the session when you play a sound; you are stacking the sounds on top of each other, which causes the weird audio.
You have to close the running instance first. Even though the article is outdated (March 2018), the audioplayer plugin has developed further; check the official guide here:
https://pub.dev/packages/audioplayer
This covers version audioplayer 0.8.1, not 3.0 or some later release.
Example from docs:
Instantiate an AudioPlayer instance
//...
AudioPlayer audioPlugin = AudioPlayer();
//...
Player controls:
audioPlayer.play(url);
audioPlayer.pause();
audioPlayer.stop();
Status and current position:

//...
_positionSubscription = audioPlayer.onAudioPositionChanged.listen(
  (p) => setState(() => position = p)
);

_audioPlayerStateSubscription = audioPlayer.onPlayerStateChanged.listen((s) {
  if (s == AudioPlayerState.PLAYING) {
    setState(() => duration = audioPlayer.duration);
  } else if (s == AudioPlayerState.STOPPED) {
    onComplete();
    setState(() {
      position = duration;
    });
  }
}, onError: (msg) {
  setState(() {
    playerState = PlayerState.stopped;
    duration = new Duration(seconds: 0);
    position = new Duration(seconds: 0);
  });
});
Like I said, most of the audio plugins run in singleton mode with instances. To avoid the weird effects, load the next song into the same instance rather than opening another new instance, and you won't get any weird effects. Below is a minimal sketch of that pattern.
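This is a hedged sketch, not the plugin's documented recipe: playUrl is a hypothetical helper, and it assumes the audioplayer 0.8.x play()/stop() calls shown above.

final AudioPlayer _audioPlayer = AudioPlayer();

// Hypothetical helper: always reuse the one instance, stopping the
// current track before loading the next so sounds never stack.
Future<void> playUrl(String url) async {
  await _audioPlayer.stop();
  await _audioPlayer.play(url);
}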
If you want to switch to a different audio player, another great one, which I used in an app project, is the following:
https://pub.dev/packages/audio_manager#-readme-tab-
Hope it helps.

Web Audio API not playing sound sample on device, but works in browser

I have an Ionic app that is a metronome. Using the Web Audio API, I have made everything work using the oscillator feature, but when switching to a .wav file, no audio plays on a real device (iPhone).
When testing in the browser using Ionic Serve (Chrome), the audio plays fine.
Here is what I have:
function snare(e) {
    var audioSource = 'assets/audio/snare1.wav';

    var request = new XMLHttpRequest();
    request.open('GET', audioSource, true);
    request.responseType = 'arraybuffer';

    // Decode asynchronously
    request.onload = function() {
        audioContext.decodeAudioData(request.response, function(theBuffer) {
            buffer = theBuffer;
            playSound(buffer);
        });
    };
    request.send();
}

function playSound(buffer) {
    var source = audioContext.createBufferSource();
    source.buffer = buffer;
    source.connect(audioContext.destination);
    source.start(0);
}
The audio sample is in www/assets/audio.
Any ideas where this could be going wrong?
I believe iOS devices require a user gesture of some sort before they allow audio to play.
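A minimal sketch of the usual unlock workaround (my addition, not part of the answer above; it assumes the page-global audioContext from the question): resume the suspended context inside the first touch event, after which playSound() works normally.

// Run once at startup: modern WebKit suspends the AudioContext until a
// user gesture, so resume it from inside the first touch handler.
document.addEventListener('touchend', function unlock() {
    document.removeEventListener('touchend', unlock);
    if (audioContext.state === 'suspended') {
        audioContext.resume();
    }
});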
It's July 2017, iOS 10.3.2, and we're still finding this issue on Safari on iPhones. Interestingly, Safari on a MacBook is fine.
@Raymond Toy's general observation still appears to be true. But @padenot's approach (via https://gist.github.com/laziel/7aefabe99ee57b16081c) did not work for me in a situation where I wanted to play a sound in response to some external event/trigger.
Using the original poster's code, I've had some success with this:
var buffer; // added to make it work with OP's code

// keep the original function snare()

function playSound() { // dropped the argument for simplicity
    var source = audioContext.createBufferSource();
    source.buffer = buffer;
    source.connect(audioContext.destination);
    source.start(0);
}

function triggerSound() {
    function playSoundIos(event) {
        document.removeEventListener('touchstart', playSoundIos);
        playSound();
    }

    if (/iPad|iPhone/.test(navigator.userAgent)) {
        document.addEventListener('touchstart', playSoundIos);
    } else { // Android etc., or Safari, but not on iPhone
        playSound();
    }
}
Now calling triggerSound() will produce the sound immediately on Android and will produce the sound on iOS after the browser page has been touched.
Still not ideal, but better than no sound at all...
I had a similar issue on current iOS (15). I tried to play base64-encoded binary data, which worked in all browsers but not on iOS.
Finally, reordering the statements solved my issue:
let buffer = Uint8Array.from(atob(base64), c => c.charCodeAt(0));
let context = new AudioContext();

// these lines were within "play()" before
audioSource = context.createBufferSource();
audioSource.connect(context.destination);
audioSource.start(0);
// ---

context.decodeAudioData(buffer.buffer, play, (e) => {
    console.warn("error decoding audio", e);
});

function play(audioBuffer) {
    audioSource.buffer = audioBuffer;
}
Also see this commit in my project.
I assume that calling audioSource.start(0) within the play() method was somehow too late, because it sits in a callback after context.decodeAudioData() and is therefore maybe "too far away" from a user interaction by iOS's standards.

RxJava Observable to smooth out bursts of events

I'm writing a streaming Twitter client that simply throws the stream up onto a TV. I'm observing the stream with RxJava.
When the stream comes in a burst, I want to buffer it and slow it down so that each tweet is displayed for at least 6 seconds. Then during the quiet times, any buffer that's been built up will gradually empty itself out by pulling the head of the queue, one tweet every 6 seconds. If a new tweet comes in and faces an empty queue (but >6s after the last was displayed), I want it to be displayed immediately.
I imagine the stream looking like that described here:
Raw: --oooo--------------ooooo-----oo----------------ooo|
Buffered: --o--o--o--o--------o--o--o--o--o--o--o---------o--o--o|
And I understand that the question posed there has a solution. But I just can't wrap my head around its answer. Here is my solution:
myObservable
    .concatMap(new Func1<Long, Observable<Long>>() {
        @Override
        public Observable<Long> call(Long l) {
            return Observable.concat(
                Observable.just(l),
                Observable.<Long>empty().delay(6, TimeUnit.SECONDS)
            );
        }
    })
    .subscribe(...);
So, my question is: Is this too naïve of an approach? Where is the buffering/backpressure happening? Is there a better solution?
Looks like you want to delay a message if it came too soon relative to the previous message. You have to track the last target emission time and schedule a new emission after it:
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import rx.Observable;
import rx.Scheduler;
import rx.schedulers.Schedulers;

public class SpanOutV2 {

    public static void main(String[] args) {
        Observable<Integer> source = Observable.just(0, 5, 13)
            .concatMapEager(v -> Observable.just(v).delay(v, TimeUnit.SECONDS));

        long minSpan = 6;
        TimeUnit unit = TimeUnit.SECONDS;
        Scheduler scheduler = Schedulers.computation();
        long minSpanMillis = TimeUnit.MILLISECONDS.convert(minSpan, unit);

        Observable.defer(() -> {
            AtomicLong lastEmission = new AtomicLong();
            return source
                .concatMapEager(v -> {
                    long now = scheduler.now();
                    long emission = lastEmission.get();

                    // Too soon: push this emission out to lastEmission + minSpan
                    if (emission + minSpanMillis > now) {
                        lastEmission.set(emission + minSpanMillis);
                        return Observable.just(v)
                            .delay(emission + minSpanMillis - now, TimeUnit.MILLISECONDS);
                    }

                    // Quiet period: emit immediately
                    lastEmission.set(now);
                    return Observable.just(v);
                });
        })
        .timeInterval()
        .toBlocking()
        .subscribe(System.out::println);
    }
}
Here, the source is delayed by the given number of seconds relative to the start of the problem: 0 should arrive immediately, 5 should arrive at T = 6 seconds, and 13 should arrive at T = 13. concatMapEager makes sure the order and timing are kept. Since only standard operators are in use, backpressure and unsubscription compose naturally.

play multiple SpeakHere audio files

By recording with distinct filenames, I have attempted to record multiple separate short voice snippets in SpeakHere. I want to play them serially, separated by a fixed interval of time between the starts of consecutive snippets, and I want the series of snippets to play in a loop forever, or until the user stops playback.
My question is: how do I alter SpeakHere to do so?
(I say "attempted" because I have not yet been able to run SpeakHere on my Mac Mini's iPhone simulator. That is the subject of another question, and another question on the subject of multiple files has not been answered either.)
In SpeakHereController.mm is the following method definition for playing a recorded file. Notice that the final else clause calls player->StartQueue(false):
- (IBAction)play:(id)sender
{
    if (player->IsRunning())
    {
        [snip]
    }
    else
    {
        OSStatus result = player->StartQueue(false);
        if (result == noErr)
            [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self];
    }
}
Below is an excerpt from SpeakHere AQPlayer.mm
OSStatus AQPlayer::StartQueue(BOOL inResume)
{
    // if we have a file but no queue, create one now
    if ((mQueue == NULL) && (mFilePath != NULL)) CreateQueueForFile(mFilePath);

    mIsDone = false;

    // if we are not resuming, we also should restart the file read index
    if (!inResume) {
        mCurrentPacket = 0;

        // prime the queue with some data before starting
        for (int i = 0; i < kNumberBuffers; ++i) {
            AQBufferCallback(this, mQueue, mBuffers[i]);
        }
    }

    return AudioQueueStart(mQueue, NULL);
}
So, can the method play and AQPlayer::StartQueue be used to play the multiple files? How can the intervals be enforced, and how can the loop be repeated?
My adaptation of the code for the method record is as follows, so you can see how the multiple files are being created.
- (IBAction)record:(id)sender
{
    if (recorder->IsRunning()) // If we are currently recording, stop and save the file.
    {
        [self stopRecord];
    }
    else // If we're not recording, start.
    {
        self.counter = self.counter + 1; // Added *****
        btn_play.enabled = NO;

        // Set the button's state to "stop"
        btn_record.title = @"Stop";

        // Start the recorder
        NSString *filename = [[NSString alloc] initWithFormat:@"recordedFile%d.caf", self.counter];
        // recorder->StartRecord(CFSTR("recordedFile.caf"));
        recorder->StartRecord((CFStringRef)filename);

        [self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"];

        // Hook the level meter up to the Audio Queue for the recorder
        [lvlMeter_in setAq:recorder->Queue()];
    }
}
Having spoken with a local iOS meetup group, I have learned that the easy solution to my question is to avoid Audio Queues and instead use the higher-level AVAudioRecorder and AVAudioPlayer from AVFoundation.
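As a rough sketch of how that could look (my addition, not SpeakHere code; the SnippetLooper class, the file URL list, and the 2-second interval are illustrative assumptions, and the interval is assumed to be longer than each snippet):

#import <AVFoundation/AVFoundation.h>

static const NSTimeInterval kSnippetInterval = 2.0; // assumed spacing between snippet starts

@interface SnippetLooper : NSObject
@property (nonatomic, strong) NSArray *fileURLs;     // e.g. URLs for the recordedFile%d.caf files
@property (nonatomic, strong) AVAudioPlayer *player; // strong reference so the player survives playback
@property (nonatomic, assign) NSUInteger index;
@end

@implementation SnippetLooper

- (void)playNext
{
    NSURL *url = self.fileURLs[self.index];
    self.index = (self.index + 1) % self.fileURLs.count; // wrap around to loop forever

    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
    [self.player play];

    // Fixed interval between the *starts* of consecutive snippets
    [self performSelector:@selector(playNext) withObject:nil afterDelay:kSnippetInterval];
}

- (void)stop // called when the user stops playback
{
    [NSObject cancelPreviousPerformRequestsWithTarget:self];
    [self.player stop];
}

@end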
I also found a way to partially test my app on the simulator with my Mac Mini: plugging an Olympus audio recorder into the Mini over USB as an input "voice". This works as an alternative to the iSight, which does not provide audio input on the Mini.

XNA MediaPlayer loading music timing

I load music with the following code in my load content function:
song = Content.Load<Song>("music/game");
MediaPlayer.IsRepeating = false;
MediaPlayer.Play(song);
Nothing strange there, but each round in my game is 2 minutes long and should sync up with the music (which is also 2 minutes long), yet the music ends between 2 and 4 seconds early. This wouldn't be a problem if it were always off by the same amount.
My guess is that it has something to do with load times? Any advice?
One thing you could do is keep the Content.Load<Song> call in your LoadContent method, then check in Update whether the song is playing and, if not, start it. E.g.:
public void LoadContent(ContentManager content)
{
    song = content.Load<Song>("music/game");
    gameSongStartedPlaying = false; // holds whether you have already started playing this song
    MediaPlayer.IsRepeating = false;
}

public void Update(GameTime gameTime)
{
    if (MediaPlayer.State == MediaState.Stopped && !gameSongStartedPlaying)
    {
        MediaPlayer.Play(song);
        gameSongStartedPlaying = true;
    }
}
This should start the song on the first pass of the Update method rather than in the loading phase, where the song is 'playing' while all resources after Content.Load<Song> are still loading (which would be the reason your song finishes early).
