Concatenate Audio Recordings for Playback in AudioKit - ios

I am trying to create a recording app that can stop and resume an audio recording.
My idea is to have AudioKit record and save a new file (.aac) every time the stop button is tapped. Then, when the full recording is played back, it would essentially concatenate all the different .aac files together. (My understanding is that I can't continue recording onto the end of a file once it's saved.) Example:
Record three separate segments, so the directory contains [1.aac, 2.aac, 3.aac]. When played back, the user should hear them as one file.
To achieve this, do I use a single AKPlayer or multiple? I would need a single playback slider and also a playback time label, and both would have to map onto the 'single concatenated' recording made from [1.aac, 2.aac, 3.aac].
This is the first time I have used AudioKit, so I'd really appreciate any advice or solutions. Thanks!
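One way to get that single-file behavior at playback time is to skip AKPlayer and stitch the saved segments into an AVFoundation composition, then drive one player (and therefore one slider and one time label) from the resulting item. This is only a sketch under that assumption, not an AudioKit-specific answer; the segment URLs are placeholders for 1.aac, 2.aac, 3.aac:

```swift
import AVFoundation

// Stitch the recorded segments into one composition so a single player,
// slider, and time label can treat them as one continuous recording.
func makeConcatenatedItem(from segmentURLs: [URL]) throws -> AVPlayerItem {
    let composition = AVMutableComposition()
    for url in segmentURLs {
        let asset = AVURLAsset(url: url)
        // Append each segment at the current end of the composition.
        try composition.insertTimeRange(
            CMTimeRange(start: .zero, duration: asset.duration),
            of: asset,
            at: composition.duration)
    }
    return AVPlayerItem(asset: composition)
}

// Usage: one AVPlayer, so currentTime() and duration already refer to the
// whole concatenated recording.
// let item = try makeConcatenatedItem(from: [url1, url2, url3])
// let player = AVPlayer(playerItem: item)
```

If you specifically want to keep using AKPlayer, another option is to export the composition to a single file with AVAssetExportSession and hand that one file to the player.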

Related

Use one player to play multiple assets with different durations, and looping

I want to use one player to play multiple audio files.
The audio could be streamed media or local files.
The files have different durations.
They should play together and loop; it should work just like a mix player.
Tracks could be inserted or removed at any time.
Let's say there are two tracks: media_a is 15 seconds long and media_b is 30 seconds long. media_a should start playing again at second 16.
Right now I'm using two players to play the audio, which works fine, but it doesn't behave well with AirPlay.
I found a couple of approaches online:
Using AVMutableCompositionTracks to create an AVPlayerItem,
but media_a does not play again at second 16.
Using the least common multiple of the durations and creating one big player item.
I don't think this is good, and tracks could not be inserted or removed easily.
So please help me. Any clue would be appreciated.
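For what it's worth, one likely reason the AVMutableComposition approach doesn't repeat media_a at second 16 is that a composition track doesn't loop by itself; the shorter asset has to be inserted repeatedly until it covers the longer one. A rough sketch under that assumption (the function name and error handling are just illustrative):

```swift
import AVFoundation

// Build one AVPlayerItem from two assets of different lengths. The shorter
// asset is tiled along its track so it repeats inside the composition.
func makeMixItem(longAsset: AVAsset, shortAsset: AVAsset) throws -> AVPlayerItem {
    let composition = AVMutableComposition()
    let longTrack = composition.addMutableTrack(
        withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
    let shortTrack = composition.addMutableTrack(
        withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)

    guard let longSource = longAsset.tracks(withMediaType: .audio).first,
          let shortSource = shortAsset.tracks(withMediaType: .audio).first else {
        throw NSError(domain: "MixBuilder", code: -1)
    }

    // Lay down the long asset once.
    try longTrack?.insertTimeRange(
        CMTimeRange(start: .zero, duration: longAsset.duration),
        of: longSource, at: .zero)

    // Tile the short asset until it fills the long asset's duration,
    // so it starts again right after it ends (e.g. at second 16).
    var cursor = CMTime.zero
    while cursor < longAsset.duration {
        let remaining = CMTimeSubtract(longAsset.duration, cursor)
        let chunk = CMTimeMinimum(shortAsset.duration, remaining)
        try shortTrack?.insertTimeRange(
            CMTimeRange(start: .zero, duration: chunk),
            of: shortSource, at: cursor)
        cursor = CMTimeAdd(cursor, chunk)
    }
    return AVPlayerItem(asset: composition)
}
```

To loop the whole mix, the resulting item can be wrapped with AVQueuePlayer and AVPlayerLooper; the downside the question anticipates still holds, though, since inserting or removing a track means rebuilding the composition.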

AudioKit metronome synced with time-pitched audio loops

I'm making an app that plays synced audio loops with a metronome. For example, I might have 3 files like this:
bass_60bpm.m4a
drums_60bpm.m4a
guitar_60bpm.m4a
And a metronome sound tick.m4a, which I play with AKSamplerMetronome.
I need to play them back at arbitrary tempos, so I use AKTimePitch on the AKAudioFiles (so playing at 90bpm, I'd play bass_60bpm.m4a at 1.5x).
This almost works, but after 3-5 loops, the metronome gets out of sync with the audio loops. I think I understand why that happens (audio_sample_length * floating_point_number is not equivalent to AKSamplerMetronome's tempo calculations), but I don't know how to fix it.
What I suspect I need to do is manually reimplement some or all of AKSamplerMetronome and play the metronome ticks based on AKTimePitch's output, but I can't piece together enough info from the API, docs, and examples to make it happen.
An alternate approach might be to use AKSequencer instead of AKSamplerMetronome. The midi output of the sequencer's track could be sent to an AKCallbackInstrument, and the sequencer's events could get the callback function to trigger both the time-stretched sample and the metronome ticks (and you could also trigger synchronized UI events from there as a bonus). This would guarantee that they stay in sync.
Apple's MusicSequence, which is what AKSequencer uses under the hood, is a little flaky with its timing immediately after you call play, but it's pretty solid after that. If you start the sequencer just before its looping point (i.e., if you have a 1-bar loop, start it one sixteenth note before the end of the first bar), then you can get past that flakiness before the actual loop starts.
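A rough sketch of that sequencer-plus-callback idea, using AudioKit 4.x names; treat the exact calls and the callback signature as assumptions rather than a definitive implementation, since they have changed across AudioKit versions:

```swift
import AudioKit

// Sketch only: AudioKit 4.x-era names (AKSequencer, AKCallbackInstrument,
// AKDuration), which have moved around between AudioKit versions.
// The sequencer drives a callback instrument; the callback triggers both
// the metronome tick and the time-stretched loops, so they share one clock.
let callbackInst = AKCallbackInstrument()
let sequencer = AKSequencer()              // wraps Apple's MusicSequence

let track = sequencer.newTrack()
track?.setMIDIOutput(callbackInst.midiIn)

// One MIDI event per beat of a 4-beat loop; note 60 marks the downbeat.
for beat in 0..<4 {
    track?.add(noteNumber: MIDINoteNumber(60 + beat), velocity: 100,
               position: AKDuration(beats: Double(beat)),
               duration: AKDuration(beats: 0.5))
}

callbackInst.callback = { statusByte, note, _ in
    // 0x90 is a raw MIDI note-on byte; the callback's exact signature
    // differs between AudioKit releases, so check your version.
    guard statusByte == 0x90 else { return }
    // Play the tick sample here; on the downbeat (note == 60) restart the
    // AKTimePitch-processed loop players so they can never drift apart.
}

sequencer.setTempo(90)
sequencer.enableLooping(AKDuration(beats: 4))
sequencer.play()
```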

Recording output audio with Swift

Is it possible to record output audio in an app using Swift? So, for example, say I'm listening to a podcast, and I want to, within a separate app, record a small segment of the podcast's audio. Is there any way to do that?
I've looked around but have only been able to find information on recording from the microphone and such.
It depends on how you are producing the audio. If the production of the audio is within your control, you can put a tap on the output and record to a file as it plays. The easiest way is with the new AVAudioEngine feature (there are other ways, but AVAudioEngine is basically an easy front end for them).
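A minimal sketch of that tap-and-write approach, assuming the audio comes from your own AVAudioEngine graph (the output file name is just a placeholder):

```swift
import AVFoundation

// Tap your own engine's mixer output and append every buffer to a file
// while it plays. This only captures audio produced by this engine; it
// cannot capture another app's output.
let engine = AVAudioEngine()
// ... attach and connect your player/source nodes here ...

let format = engine.mainMixerNode.outputFormat(forBus: 0)
let fileURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("capture.caf")
let outputFile = try AVAudioFile(forWriting: fileURL, settings: format.settings)

engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    // Each buffer that reaches the main mixer is written to disk.
    try? outputFile.write(from: buffer)
}

try engine.start()
// ... later, to stop capturing:
// engine.mainMixerNode.removeTap(onBus: 0)
```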
Of course, if the real problem is to take a copy of a podcast, then obviously all you have to do is download the podcast as opposed to listening to it. Similarly, you could buffer and save streaming audio to a file. There are many apps that do this. But this is not because the device's output is being hijacked; it is, again, because we have control of the sound data itself.
I believe you'll have to write a kernel extension to do that
https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptIOKit/iokit_tutorial.html
You'd have to make your own audio driver to record it
It appears as though that is how Soundflowerbed was made:
http://features.en.softonic.com/how-to-record-internal-sound-on-a-mac

Skip between multiple files while playing audio in iPhone iOS

For a project I need to handle audio in an iPhone app in a rather special way, and I hope somebody can point me in the right direction.
Let's say you have a fixed set of up to thirty audio files of the same length (2-3 sec, uncompressed). While a cue is playing from one audio file, it should be possible to update parameters so that playback continues from another audio file at the same timestamp where the previous file left off. If the different audio files are different versions of heavily filtered audio, it should be possible to "slide" between them and get the impression that the filter is applied directly. The filtering is currently not possible to achieve in real time on an iPhone, hence the prerendered files.
If A, B, and C are different audio files, I'd like to be able to:
Play A without interruption:
Start AAAAAAAAAAAAA Stop
Or start playing A and continue over into B and then C, initiated while playing:
Start AAABBBBBBBBCC Stop
Ideally it should be possible to play two or more cues at the same time. Latency is not that important, but skipping between files should ideally not produce clicks or delays.
I have looked into using Audio Queue Services (which looks like hell to dive into) and sniffed around OpenAL. Could anyone give me a rough overview and a general direction I can spend the next few days buried in?
Try using the iOS Audio Unit API, particularly a mixer unit connected to RemoteIO for audio output.
I managed to do this by using FMOD Designer. FMOD (http://www.fmod.org/) is a sound design framework for game development that supports iOS. I made a multitrack event in FMOD Designer with a different layer for each sound clip. Add a parameter on the horizontal bar that lets you control which sound clip to play in real time. The trick is to let each sound clip continue over the whole bar and control which sound is being heard by using a volume effect (0-100%), as in the attached picture. That way you ensure that skipping between files follows the same timecode. I have tried this successfully with up to thirty layers, but experienced some double playing. This seemed to disappear if I cut the number down to fifteen.
It should be possible to use the iOS Audio Unit API if you are comfortable with it, but for those of us who like the simplest solution, FMOD is quite good :) Thanks to Ellen S for the tip!
Screenshot of the multitrack-event in FMOD Designer:
https://plus.google.com/photos/106278910734599034045/albums/5723469198734595793?authkey=CNSIkbyYw8PM2wE
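If you would rather stay in Apple's stack, the same layering idea can be sketched with AVAudioEngine instead of the Audio Unit C API: start every prerendered version at the same time, keep them all running, and choose which one is heard by setting volumes, so every clip stays on the same timecode. This is an illustration of that approach, not the answerer's code; clipURLs stands in for your thirty prerendered files:

```swift
import AVFoundation

let engine = AVAudioEngine()
let clipURLs: [URL] = []    // placeholder: URLs of the prerendered versions

// One player node per clip, all mixed into the main mixer.
let files = try clipURLs.map { try AVAudioFile(forReading: $0) }
let players = files.map { _ in AVAudioPlayerNode() }

for (player, file) in zip(players, files) {
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
    player.scheduleFile(file, at: nil)
    player.volume = 0                 // every clip starts silent
}
players.first?.volume = 1             // clip A is audible first

try engine.start()
// For tighter sync, schedule all players against one shared AVAudioTime
// with play(at:) instead of starting them in a loop.
players.forEach { $0.play() }

// "Skipping" from one file to another is just a volume switch, so the
// timing never changes and there is no gap or click from restarting.
func switchTo(_ index: Int) {
    for (i, player) in players.enumerated() {
        player.volume = (i == index) ? 1 : 0
    }
}
```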

loadSound: don't wait for the entire download before playing, but don't have it start automatically

I am trying to play an MP3 using Actionscript 2. I have the following requirements:
I don't want to wait for the MP3 to load before playing it.
I want to know when enough of the MP3 has downloaded that I can start playing it.
I don't want the MP3 to start playing immediately: I need to control when the play starts.
An example scenario is that I need to start playing a 30-second MP3 exactly 8 seconds from now (at the top of the minute, let's say). Depending on the connection, I may or may not be able to download the entire MP3 by then, but I can almost certainly download enough to start playing without interruption.
The closest way I can see to do this is Sound.loadSound(url, isStreaming). If I pass true for the isStreaming parameter, though, the sound will start playing immediately (the docs say: playback begins when sufficient data has been received to start the decompressor).
I've tried the following:
call mySound.loadSound(mp3Url, true)
mySound.stop(); // so that the auto-play won't happen
set a timer for the top of the minute (8 seconds from now).
In the timer, check the duration of the sound (which continues to get bigger as the file gets loaded). If the duration is < 5 seconds, we don't have enough buffered sound, so generate an error. Otherwise, start playing the sound with s.start(0).
The behavior I see is that the sound doesn't start playing until it's entirely downloaded.
I found your posting (which is a little older now, but... anyway):
There are two methods you can use on the Sound class:
Sound.getBytesTotal
and
Sound.getBytesLoaded
If you compare these two, you can get the number of bytes loaded at a given point in time. (See also Sound.onLoad and Sound.onSoundComplete; these two are helpful.)
There are also some examples in the Flash help for this.
Greetings,
Draco
I do not believe that this is possible using ActionScript 2. I think you are going to have to either move to AS3 or wrap the MP3 in a SWF.
Even with AS3 you may have to target FP10 in order to use the new sound methods and events that were just added (Sound.extract and Event.SAMPLE_DATA).
In general Sound capabilities in Flash have really lagged until the most recent version of the player.
