I'm streaming audio using _audioPlayer->openHLS() and I need to start and stop at specific positions.
The best way seems to be to use loopBetween and then call exitLoop in the LoopEnd event. However, I can't get loopBetween to play!
_audioPlayer->loopBetween(startTimeMS, stopTimeMs, true, 255, false);
I have tried calling _audioPlayer->play(false) before or after the loopBetween, but then the audio plays without stopping. If I just call loopBetween it never starts playing.
Is there some config I'm missing to get loopBetween to work? The SDK has no sample code covering looping.
EDIT: I've found one way to do this, by polling positionMs in the audio processing callback. I'd still like to know how to make looping work, as that seems like a cleaner solution.
Looping doesn't work for HLS streams. Polling positionMs in the audio processing callback is a perfect solution.
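Roughly, the check looks like this. It's only a sketch: the player SDK is C++, so this models it with a hypothetical Swift-facing wrapper (SegmentPlayer, positionMs, and pause() are all assumptions), and tick() runs from the audio processing callback the SDK already invokes for every buffer.

// Hypothetical wrapper mirroring the C++ player API used above.
protocol SegmentPlayer: AnyObject {
    var positionMs: Double { get }
    func pause()
}

final class SegmentStopper {
    private let stopTimeMs: Double
    private var finished = false

    init(stopTimeMs: Double) {
        self.stopTimeMs = stopTimeMs
    }

    // Call once per buffer from the audio processing callback.
    func tick(_ player: SegmentPlayer) {
        guard !finished, player.positionMs >= stopTimeMs else { return }
        finished = true
        player.pause()   // or seek back to the start position to loop manually
    }
}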
My application uses Google's WebRTC framework to make audio calls, and that part works. However, I would like to find a way to stream an audio file during a call.
Scenario:
A calls B
B answers and plays some music
A hears this music
I've downloaded the entire WebRTC source code and am trying to understand how it works. On the iOS side it seems to be using Audio Units, and I can see a voice_processing_audio_unit file. I would (maybe wrongly) assume that I need to create a custom audio unit that reads its data from a file?
Does anyone have an idea which direction to go in?
After fighting with this issue for an entire week, I finally managed to find a solution.
By editing the WebRTC code, I was able to get down to the level of the Audio Units and, in the audio rendering callback, catch the io_data buffer.
This callback is called every 10 ms to get the data from the mic. Therefore, in this exact callback I was able to overwrite the io_data buffer with my own audio data.
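To give an idea of the shape of it: the real WebRTC callback is C++, but an Audio Unit input/render callback that overwrites the io_data buffers looks roughly like this in Swift (readFileSamples is a stand-in for whatever reads your audio file):

import AudioToolbox
import Foundation

// Sketch only: replace the captured samples with your own data.
let renderCallback: AURenderCallback = { _, _, _, _, _, ioData in
    guard let ioData = ioData else { return noErr }
    for buffer in UnsafeMutableAudioBufferListPointer(ioData) {
        // Overwrite the mic data here; zeroing it out is the simplest placeholder.
        memset(buffer.mData, 0, Int(buffer.mDataByteSize))
        // readFileSamples(into: buffer)   // hypothetical file reader
    }
    return noErr
}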
I'm making an app that plays synced audio loops with a metronome. For example, I might have 3 files like this:
bass_60bpm.m4a
drums_60bpm.m4a
guitar_60bpm.m4a
And a metronome sound tick.m4a, which I play with AKSamplerMetronome.
I need to play them back at arbitrary tempos, so I use AKTimePitcher on the AKAudioFiles (so playing at 90bpm, I'd play bass_60bpm.m4a at 1.5x).
This almost works, but after 3-5 loops, the metronome gets out of sync with the audio loops. I think I understand why that happens (audio_sample_length * floating_point_number is not equivalent to AKSamplerMetronome's tempo calculations), but I don't know how to fix it.
What I suspect I need to do is manually reimplement some or all of AKSamplerMetronome and play the metronome ticks based on AKTimePitcher's output, but I can't piece together enough info from the API, docs, and examples to make that happen.
An alternate approach might be to use AKSequencer instead of AKSamplerMetronome. The MIDI output of the sequencer's track could be sent to an AKCallbackInstrument, and the sequencer's events would then have the callback function trigger both the time-stretched sample and the metronome ticks (and you could also trigger synchronized UI events from there as a bonus). This would guarantee that they stay in sync.
Apple's MusicSequence, which is what AKSequencer uses under the hood, is a little flaky with its timing immediately after you call play, but it's pretty solid after that. If you start the sequencer just before its looping point (i.e., if you have a 1-bar loop, start it one sixteenth note before the end of the first bar), then you can get past that flakiness before the actual loop starts.
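A sketch of that setup with AudioKit 4.x (this assumes AudioKit is otherwise configured and started as usual; depending on the version, AKSequencer may be called AKAppleSequencer, and the callback's first parameter may be an AKMIDIStatus enum rather than a raw MIDI byte, so treat the exact signatures as approximate):

import AudioKit

let sequencer = AKSequencer()
let callbackInst = AKCallbackInstrument()

// One track whose only job is to fire the callback on every beat.
if let track = sequencer.newTrack() {
    track.setMIDIOutput(callbackInst.midiIn)
    for beat in 0..<4 {
        track.add(noteNumber: 60,
                  velocity: 127,
                  position: AKDuration(beats: Double(beat)),
                  duration: AKDuration(beats: 0.1))
    }
}

sequencer.setLength(AKDuration(beats: 4))
sequencer.enableLooping()
sequencer.setTempo(90)

callbackInst.callback = { status, note, velocity in
    // 0x90...0x9F is a note-on; ignore everything else.
    guard status & 0xF0 == 0x90, velocity > 0 else { return }
    // Trigger the metronome tick and (re)start the time-stretched loop players
    // here. Because both fire from the same sequencer event, they stay locked,
    // and you can drive synchronized UI updates from this point too.
}

sequencer.play()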
I need to playback short audio samples with precise timing, including up to 4 sounds starting simultaneously.
These sound samples are triggered with NSTimers (alternatively, I've also tried dispatch_after).
I've tried with AVPlayer and AVAudioPlayer but they are just not precise enough in timing.
Multiple sounds played at once will be all over the place, especially on the real device.
I've read that NSTimer can be off by up to a few hundred milliseconds, which is just too much for me.
As a test I've set up a few AVAudioPlayers with one audio sample each and triggered them all at the same time in didSelectRow...() but they will not sound in exactly the same moment, even with no NSTimer involved.
It seems that it's just not possible to playback 2 sounds starting exactly at the same time with AVAudioPlayer. Is this confirmed?
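For reference, the kind of test I mean looks roughly like this (file names are placeholders). The second variant schedules every player against a shared deviceCurrentTime via play(atTime:), which is the API Apple provides for starting multiple AVAudioPlayers in sync, although it does nothing about NSTimer jitter:

import AVFoundation

let names = ["kick", "snare", "hat", "clap"]   // placeholder sample names
let players: [AVAudioPlayer] = names.compactMap { name in
    guard let url = Bundle.main.url(forResource: name, withExtension: "wav") else { return nil }
    return try? AVAudioPlayer(contentsOf: url)
}
players.forEach { _ = $0.prepareToPlay() }

// Naive version (what the didSelectRow test does): each play() call starts
// independently, so the onsets drift by a few milliseconds.
func playNaively() {
    players.forEach { _ = $0.play() }
}

// Scheduling all players against the same device-clock reference keeps their
// starts aligned with each other.
func playTogether() {
    guard let reference = players.first else { return }
    let startTime = reference.deviceCurrentTime + 0.05
    players.forEach { _ = $0.play(atTime: startTime) }
}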
From what I've gathered there are not many alternatives, Audio Queue Services being one that allows precise timing and multiple sounds at once.
However, it's written in C, which I've never worked with, and it is hard to find any examples showing how to integrate it for simple audio playback of a sound (I'm using Swift). I'd basically just need to know how to integrate Audio Queue Services to play back a simple sound.
If someone can point me in the right direction (or knows a better solution to what I'm looking for), that would be much appreciated.
In the app I'm currently working on, there is a "studio" where you can apply some sound effects, and for that I'm using The Amazing Audio Engine.
There is also an option to listen to songs via streaming.
Unfortunately The Amazing Audio Engine doesn't include streaming functionality, so I'm using the AudioStreamer class.
I don't know why, but the two don't work well together for me.
Each of them works great on its own, but the moment I play some audio through The Amazing Audio Engine, stop, switch to streaming, and then move back to the audio engine, the sound doesn't play any more. No sound at all!
I've already checked that I call stop on each of them and set them to nil.
I allocate each of them again every time before it plays.
I'm out of options, and I'm thinking maybe it has something to do with Core Audio, which both of them use?
Any help would be much appreciated
Thanks
EDIT
What I've found is that this happens only when I use the stop method of AudioStreamer!
Can anyone explain why?
SECOND EDIT
Found the answer!
This was solved by commenting out this:
/*
while (state != AS_INITIALIZED)
{
[NSThread sleepForTimeInterval:0.1];
}
*/
...and adding this:
AudioQueueStart(audioQueue, NULL);
...to the stop method. I still don't really understand why, though.
It takes some time after calling an audio stop method or function for it to really stop all the audio units (while the buffers get emptied by the hardware, etc.). You often can't restart audio until after this short delay.
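To illustrate with the plain Audio Queue calls that AudioStreamer sits on top of (a sketch, not AudioStreamer's actual code): a stop can either take effect immediately or only after the queued buffers drain, and restarting before it has really completed is what tends to leave you with no sound.

import AudioToolbox

func stopThenRestart(_ audioQueue: AudioQueueRef) {
    // true = stop synchronously right now; false = return immediately and let the
    // queued buffers play out first, in which case the queue keeps running for a
    // short while after this call returns.
    AudioQueueStop(audioQueue, true)

    // Restarting before the stop has actually completed is what tends to leave
    // the engine in a "no sound" state.
    AudioQueueStart(audioQueue, nil)
}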
Long story short, I am trying to implement a naive solution for streaming video from the iOS camera/microphone to a server.
I am using AVCaptureSession with audio and video AVCaptureOutputs, and then using AVAssetWriter/AVAssetWriterInput to capture video and audio in the captureOutput:didOutputSampleBuffer:fromConnection: method and write the resulting video to a file.
To make this a stream, I am using an NSTimer to break the video files into 1 second chunks (by hot-swapping in a different AVAssetWriter that has a different outputURL) and upload these to a server over HTTP.
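That rotation looks roughly like this (a sketch with video input only and no error handling; uploadChunk is a hypothetical helper, and in the real code the new writer's session has to be started at the timestamp of the first buffer that lands in the new chunk):

import AVFoundation

func makeChunkWriter(outputURL: URL) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1280,
        AVVideoHeightKey: 720
    ]
    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    videoInput.expectsMediaDataInRealTime = true
    writer.add(videoInput)
    return (writer, videoInput)
}

// Fired by the 1-second timer: finish the current chunk, upload it, and swap
// in a fresh writer pointing at a new outputURL.
func rotate(current: AVAssetWriter, nextURL: URL) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let finishedURL = current.outputURL
    current.finishWriting {
        uploadChunk(finishedURL)   // hypothetical HTTP upload helper
    }
    let (next, input) = try makeChunkWriter(outputURL: nextURL)
    next.startWriting()
    // startSession(atSourceTime:) must then be called with the PTS of the first
    // sample buffer appended to this writer, inside didOutputSampleBuffer.
    return (next, input)
}

func uploadChunk(_ url: URL) { /* hypothetical: POST the file to the server */ }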
This approach is working, but the issue I'm running into is this: the beginnings of the .mp4 files always appear to be missing audio in the first frame, so when the video files are concatenated on the server (running ffmpeg) there is a noticeable audio skip at the joins between these files. The video is just fine - no skipping.
I tried many ways of making sure there were no CMSampleBuffers dropped and checked their timestamps to make sure they were going to the right AVAssetWriter, but to no avail.
I checked the AVCam example (which uses AVCaptureMovieFileOutput) and the AVCaptureLocation example (which uses AVAssetWriter), and it appears the files they generate do the same thing.
Maybe there is something fundamental I am misunderstanding here about the nature of audio/video files, as I'm new to video/audio capture - but I thought I'd check before trying to work around this by learning to use ffmpeg to fragment the stream, as some seem to do (if you have any tips on this, too, let me know!). Thanks in advance!
I had the same problem and solved it by recording audio with a different API, Audio Queues. This seems to solve it; you just need to take care of timing in order to avoid audio delay.
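In case it helps anyone, capturing audio with an input Audio Queue looks roughly like this (a minimal sketch with no error checking; you'd hand each buffer plus its timestamp to whatever does the encoding/writing):

import AudioToolbox

var format = AudioStreamBasicDescription(
    mSampleRate: 44_100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2,
    mFramesPerPacket: 1,
    mBytesPerFrame: 2,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0)

// Called each time a buffer fills. The AudioTimeStamp is what lets you keep
// the recorded audio aligned with the video chunks.
let inputCallback: AudioQueueInputCallback = { _, queue, buffer, timestamp, _, _ in
    // Hand buffer.pointee.mAudioData (mAudioDataByteSize bytes) to the writer
    // here, then put the buffer back in rotation.
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

var queue: AudioQueueRef?
AudioQueueNewInput(&format, inputCallback, nil, nil, nil, 0, &queue)

if let queue = queue {
    for _ in 0..<3 {
        var buffer: AudioQueueBufferRef?
        AudioQueueAllocateBuffer(queue, 4096, &buffer)
        if let buffer = buffer {
            AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
        }
    }
    AudioQueueStart(queue, nil)
}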