Is it possible to get audio from an ICY stream with a playback percentage and seek function? - iOS

I'm trying to play audio from an ICY stream. I'm able to play it with AVPlayer and some good open-source libraries, but I'm not able to control the stream. I have no idea how to get the percentage played or how to seek to a specific time in the stream. Is that possible? Is there a good library that can help me?
At the moment I'm using AFSoundManager, but I always receive negative numbers for the percentage, and I get an invalid time when trying to seek the stream to a specific time.
This is the code I'm using:
AFSoundManager.sharedManager().startStreamingRemoteAudioFromURL("http://www.abstractpath.com/files/audiosamples/sample.mp3") { (percentage, elapsedTime, timeRemaining, error, poppi) in
    if error == nil {
        // This block will be fired when the audio progress increases by 1%
        if elapsedTime > 0 {
            println(elapsedTime)
            self.slider.value = Float(elapsedTime * 1000)
        }
    } else {
        // Handle the error
        println(error)
    }
}
I'm of course able to get the elapsedTime, but not the percentage or the timeRemaining; I always get negative numbers.
This code works perfectly with remote or local audio files, but not with the stream.

This isn't possible.
These streams are live. There is nothing to seek to, because what you haven't heard hasn't happened yet. Even streams that play back music end-to-end are still "live" in the sense that the audio you haven't received hasn't been encoded yet. (Small codec and transit buffers aside, of course.)
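For illustration only (this isn't from the original answer), one way to detect this situation with AVPlayer is to check the item's duration and seekable ranges: a live ICY stream reports an indefinite duration, so neither a progress percentage nor a seek target can be computed for it. A minimal Swift sketch:

import AVFoundation

// Rough sketch, not the asker's code: returns true only when the item has a
// finite duration and a non-empty seekable range, i.e. when a progress
// percentage and seeking actually make sense. Live ICY streams fail both checks.
func canSeek(in item: AVPlayerItem) -> Bool {
    let duration = item.duration
    guard duration.isNumeric, duration.seconds > 0 else {
        return false // live stream: no fixed duration, nothing to seek to
    }
    return !item.seekableTimeRanges.isEmpty
}

With a check like this you can show a slider only for on-demand files and fall back to a plain elapsed-time label for live streams.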

Related

Audio duration sometimes changes to near-zero in Safari

I am encountering a strange issue in Safari, on both macOS and iOS. Initially it seemed like the audio would sometimes just not play, without emitting any errors. After adding a lot of logging, I found that when I call audio.play() the audio.duration property accurately reflects the duration of the clip. The pause and then ended events are then emitted almost immediately, and after that the duration for the same clip is suddenly somewhere between 0.001 and 0.003 seconds.
I am retaining a reference to the audio element and reusing it to play the same audio multiple times. It almost always works the first time, but after the first playthrough, on about 50% of subsequent plays the symptoms described above will present themselves.
The code where I play the audio is below:
// In the constructor for the class managing audio playback:
this.mediaElement.addEventListener('pause', e => {
  console.log('Media paused', this.mediaElement.duration); // This shows the very short duration if the audio did not play
});

// In the function in the class that plays the media.
try {
  await this.mediaElement.play();
  console.log('Media is playing', this.mediaElement.duration); // This shows an accurate duration
  this.state = Playable.state.PLAYING;
} catch (e) {
  console.error('Playing media failed:', e);
  if (e && e.name === 'NotAllowedError') {
    ErrorHandler.playbackNotAllowed(e);
    this.state = Playable.state.PAUSED;
    return;
  } else {
    this._fail(e);
    this._fakePlay();
  }
}
As you can see, I'm not doing anything strange or complicated here. So far I haven't been able to figure out why the duration would change in this way only after playing the audio. Is this a known bug, or is there something I may be doing that could cause this behavior? The closest thing I can think of is that sometimes I set currentTime to 0 if I need to start playing the audio from the beginning, but that should not change the duration.

ARSession and Recording Video

I’m manually writing a video recorder. Unfortunately it’s necessary if you want to record video and use ARKit at the same time. I’ve got most of it figured out, but now I need to optimize it a bit because my phone gets pretty hot running ARKit, Vision and this recorder all at once.
To make the recorder, you need to use an AVAssetWriter with an AVAssetWriterInput (and an AVAssetWriterInputPixelBufferAdaptor). The input has an isReadyForMoreMediaData property you need to check before you can write another frame. I'm recording in real time (or as close to it as possible).
Right now, when the ARSession gives me a new frame I immediately pass it to the AVAssetWriterInput. What I want to do is add it to a queue and have a loop check whether there are samples available to write. For the life of me I can't figure out how to do that efficiently.
I want to just run a while loop like this, but it seems like it would be a bad idea:
func startSession() {
    // …
    while isRunning {
        guard !pixelBuffers.isEmpty && writerInput.isReadyForMoreMediaData else {
            continue
        }
        // process sample
    }
}
Can I run this on a separate thread from the ARSession.delegateQueue? I don't want to run into issues with CVPixelBuffers from the camera being retained for too long.
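There is no accepted answer here, but for illustration, a common pattern that avoids a busy loop is to let the writer input pull data itself via requestMediaDataWhenReady(on:using:) on a serial queue. The sketch below assumes that pattern; the FrameWriter type and its names are hypothetical, not from the question, and it assumes the asset writer has already started a session.

import AVFoundation

// Hypothetical helper: buffers (pixel buffer, timestamp) pairs from the ARSession
// delegate and drains them whenever the writer input signals readiness, so no
// thread spins and camera pixel buffers are released as soon as they are appended.
final class FrameWriter {
    private let queue = DispatchQueue(label: "frame.writer.queue") // serial queue for all writer work
    private var pending: [(CVPixelBuffer, CMTime)] = []
    private let writerInput: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor

    init(writerInput: AVAssetWriterInput, adaptor: AVAssetWriterInputPixelBufferAdaptor) {
        self.writerInput = writerInput
        self.adaptor = adaptor
        // AVAssetWriterInput calls this block on `queue` whenever it can accept data.
        writerInput.requestMediaDataWhenReady(on: queue) { [weak self] in
            guard let self = self else { return }
            while self.writerInput.isReadyForMoreMediaData, !self.pending.isEmpty {
                let (pixelBuffer, time) = self.pending.removeFirst()
                if !self.adaptor.append(pixelBuffer, withPresentationTime: time) {
                    // A real implementation would inspect the writer's error here.
                    break
                }
            }
        }
    }

    // Call from session(_:didUpdate:) with frame.capturedImage and a CMTime built from frame.timestamp.
    func enqueue(_ pixelBuffer: CVPixelBuffer, at time: CMTime) {
        queue.async { self.pending.append((pixelBuffer, time)) }
    }
}

Because everything that touches pending runs on the same serial queue, no extra locking is needed, and the ARSession delegate returns immediately instead of holding on to the camera's pixel buffers.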

iOS AudioQueue doesn't play when packets are enqueued late

I have an app that enqueues packets to an AudioQueue, and it's working perfectly. The problem is when there is a delay in the network and I can't serve packets to the AudioQueue in time.
The rest of the application keeps working and enqueueBuffer doesn't return any error, but the AudioQueue discards the packets (so I get no sound) because they are too old.
Can I force the AudioQueue to play those audio packets, or at least find out that the packets are being discarded? If I knew that, I could do a pause/play to restart the queue (not a very good solution, but I don't have anything better).
Because the delay can be very large, I can't just use a big buffer; that would reduce the problem but not solve it.
Thank you very much
You're on the right track. You can handle network delays by detecting in your callback procedure when you have reached the end of your network buffer, then pausing the AudioQueue. Later on, restart the queue once enough packets have been buffered. Your code would look something like this:
if playerState.packetsRead == playerState.packetsWritten {
    playerState.isPlaying = false
    AudioQueuePause(aq)
}
And in your network code:
if playerState.packetsWritten >= playerState.packetsRead + kNumPacketsToBuffer {
    if !playerState.isPlaying {
        playerState.isPlaying = true
        for buffer in playerState.buffers {
            audioCallback(playerState, aq: playerState.queue, buffer: buffer)
        }
        AudioQueueStart(playerState.queue, nil)
    }
}
You would also need to update your code so that every time you receive a packet from the network, playerState.packetsWritten is incremented, and similarly for playerState.packetsRead when you add a packet to the audio queue. The optimal number for kNumPacketsToBuffer depends on the codec and network conditions. I would recommend using 256 for AAC and adjusting up/down based on performance.
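As a sketch of that bookkeeping (not part of the original answer; the type and method names are made up for illustration), the two counters only need to be incremented at the two hand-off points:

import AudioToolbox

// Hypothetical state object mirroring the answer's playerState: packetsWritten
// counts packets received from the network, packetsRead counts packets handed
// to the AudioQueue, and their difference is the current backlog.
final class PlayerState {
    var queue: AudioQueueRef?
    var isPlaying = false
    var packetsWritten: Int64 = 0
    var packetsRead: Int64 = 0

    // Call from the networking code for every packet that arrives.
    func packetArrivedFromNetwork() {
        packetsWritten += 1
    }

    // Call from the AudioQueue output callback for every packet enqueued.
    func packetEnqueuedToAudioQueue() {
        packetsRead += 1
    }

    // Packets buffered but not yet played; compare against kNumPacketsToBuffer.
    var bufferedPacketCount: Int64 { packetsWritten - packetsRead }
}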

Adding audio buffer [from file] to 'live' audio buffer [recording to file]

What I'm trying to do:
Record up to a specified duration of audio/video, where the resulting output file has pre-defined background music from an external audio file mixed in, without further encoding/exporting after recording.
As if you were recording video with the iPhone's Camera app and every recorded video in the Camera Roll had a background song: no exporting or loading after the recording ends, and not in a separate audio track.
How I'm trying to achieve this:
Using AVCaptureSession, in the delegate method where the (CMSampleBufferRef) sample buffers are passed through, I push them to an AVAssetWriter to write to file. As I don't want multiple audio tracks in my output file, I can't pass the background music through a separate AVAssetWriterInput, which means I have to add the background music to each sample buffer from the recording while it's recording, to avoid having to merge/export afterwards.
The background music is a specific, pre-defined audio file (format/codec: M4A AAC) and needs no time editing, just mixing beneath the entire recording, from start to end. The recording will never be longer than the background music file.
Before starting to write to the file, I've also prepared an AVAssetReader that reads the specified audio file.
Some pseudo-code (threading excluded):
- (void)startRecording
{
    /*
     Initialize writer and reader here: [...]
     */
    backgroundAudioTrackOutput = [AVAssetReaderTrackOutput
                                  assetReaderTrackOutputWithTrack:backgroundAudioTrack
                                  outputSettings:nil];

    if ([backgroundAudioReader canAddOutput:backgroundAudioTrackOutput])
        [backgroundAudioReader addOutput:backgroundAudioTrackOutput];
    else
        NSLog(@"This doesn't happen");

    [backgroundAudioReader startReading];

    /* Some more code */

    recording = YES;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!recording)
        return;

    if (connection == videoConnection)
        [self writeVideoSampleBuffer:sampleBuffer];
    else if (connection == audioConnection)
        [self writeAudioSampleBuffer:sampleBuffer];
}
The AVCaptureSession is already streaming the camera video and microphone audio, and is just waiting for the BOOL recording to be set to YES. This isn't exactly how I'm doing it, but a short, roughly equivalent representation. When the delegate method receives a CMSampleBufferRef of type audio, I call my own method writeAudioSamplebuffer:sampleBuffer. If this were done normally, without a background track as I'm trying to add, I'd simply put something like [assetWriterAudioInput appendSampleBuffer:sampleBuffer]; instead of calling my method. In my case, though, I need to overlap two buffers before writing:
- (void)writeAudioSamplebuffer:(CMSampleBufferRef)recordedSampleBuffer
{
    CMSampleBufferRef backgroundSampleBuffer =
        [backgroundAudioTrackOutput copyNextSampleBuffer];

    /* DO MAGIC HERE */
    CMSampleBufferRef resultSampleBuffer =
        [self overlapBuffer:recordedSampleBuffer
       withBackgroundBuffer:backgroundSampleBuffer];
    /* END MAGIC HERE */

    [assetWriterAudioInput appendSampleBuffer:resultSampleBuffer];
}
The problem:
I have to add incremental sample buffers from a local file to the live buffers coming in. The method I have created, named overlapBuffer:withBackgroundBuffer:, isn't doing much right now. I know how to extract the AudioBufferList, AudioBuffer, mData and so on from a CMSampleBufferRef, but I'm not sure how to actually add them together. However, I haven't been able to test different ways of doing that, because the real problem happens before that. Before the magic should happen, I am in possession of two CMSampleBufferRefs, one received from the microphone and one read from the file, and this is the problem:
The sample buffer received from the background-music file is different from the one I receive from the recording session. It seems like the call to [self.backgroundAudioTrackOutput copyNextSampleBuffer]; receives a large number of samples. I realize that this might be obvious to some people, but I've never before been at this level of media technology. I see now that it was wishful thinking to call copyNextSampleBuffer each time I receive a sample buffer from the session, but I don't know when/where to put it.
As far as I can tell, the recording session gives one audio sample per sample buffer, while the file reader gives multiple samples per sample buffer. Can I somehow keep a counter of the recorded samples/buffers received, then use the first file sample buffer to extract samples one at a time until the current file sample buffer has no more samples 'to give', and then call copyNextSampleBuffer again and do the same with the next buffer?
As I'm in full control of both the recording's and the file's codecs, formats, etc., I am hoping that such a solution wouldn't ruin the 'alignment'/synchronization of the audio. Given that both sources have the same sample rate, could this still be a problem?
Note
I'm not even sure if this is possible, but I see no immediate reason why it shouldn't.
It's also worth mentioning that when I try to use a video file instead of an audio file and continually pull video sample buffers, they align perfectly.
I am not familiar with AVCaptureOutput, since all my sound/music sessions were built using AudioToolbox rather than AVFoundation. However, I think you should be able to set the size of the recording capture buffer. If not, and you still get just one sample at a time, I would recommend storing each piece of data obtained from the capture output in an auxiliary buffer. When the auxiliary buffer reaches the same size as the file-reading buffer, call [self overlapBuffer:auxiliarSampleBuffer withBackgroundBuffer:backgroundSampleBuffer];
I hope this helps. If not, I can provide an example of how to do this using Core Audio. Using Core Audio I have been able to obtain 1024-sample LPCM buffers from both microphone capture and file reading, so the overlapping is immediate.
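As a purely illustrative sketch of the "magic" step itself (not code from this thread), mixing two buffers of interleaved 16-bit linear PCM comes down to summing the samples and clamping the result, assuming both buffers already have the same sample rate, channel layout and length:

// Mixes two equal-length buffers of interleaved Int16 linear PCM samples.
// The sum is computed in 32 bits and clamped back into the Int16 range to avoid overflow.
func mix(_ recorded: [Int16], with background: [Int16]) -> [Int16] {
    precondition(recorded.count == background.count, "buffers must be the same length")
    return zip(recorded, background).map { live, bed in
        Int16(clamping: Int32(live) + Int32(bed))
    }
}

In the poster's setup the two arrays would come from the mData pointers of the respective AudioBufferLists, and the auxiliary-buffer approach above decides how many file samples to pull for each recorded buffer.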

Matt Gallagher's AudioStreamer: play an MP3 from an offset before the playing state

I have not found a solution for one issue: how can I play an MP3 file from an offset immediately?
I can only play the file and then send -(void)seekToTime:, but in that case the sound begins, is interrupted, and then begins again from the defined offset.
I tried to apply the seekToTime method on ASStatusChangedNotification (in different cases of AudioStreamerState), but without result.
Update: I think I may be able to set the time offset after the file has begun streaming (before playing). But how?
Thanks.
What I did was create a method to seek to the desired time that I run after [streamer start]:
while (streamer.bitRate == 0) {
    sleep(1);
}
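// Once the bit rate is known, the streamer can compute the duration, so the seek
// can follow. (`desiredOffset` is a hypothetical variable for the target time in seconds.)
[streamer seekToTime:desiredOffset];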
If you're concerned about waiting too long, you can add a timeout: either count the passes through the loop, or record a start time and compare it against the current time to break out of the loop.
This blog post has another take:
http://www.saygoodnight.com/2009/08/streaming-audio-to-the-iphone-starting-at-an-offset/
