Simulate dynamic playback head with AudioKit (scratching / turntablism) - iOS

I was wondering if it is possible to simulate the scratching of a record player with AudioKit.
So basically, to have an input value (e.g. the position of a finger on the screen) drive the playback index of an audio file. I am not sure whether AudioKit is even capable of something like that. If not, how would I program something like this for iOS? Are there other frameworks/libraries? Will I need to write C++?
Thank you,
J

Use AKPhaseLockedVocoder. It's pretty cool and does exactly that.
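As a rough illustration (not official sample code), something like the sketch below maps a normalized finger position onto the vocoder's playback position. It assumes an AudioKit 4-era API, so the exact initializer and property names (file:, position, duration) may differ in your AudioKit version:

```swift
import AudioKit

// Rough sketch assuming an AudioKit 4-era API; initializer and property names
// (file:, position, pitchRatio) may differ between AudioKit versions.
class Scratcher {
    let file: AKAudioFile
    let vocoder: AKPhaseLockedVocoder

    init(fileName: String) throws {
        file = try AKAudioFile(readFileName: fileName)
        vocoder = AKPhaseLockedVocoder(file: file)
        AudioKit.output = vocoder
        try AudioKit.start()
        vocoder.start()
    }

    // Drive this from a pan gesture: normalizedX is the finger position, 0...1
    func scrub(to normalizedX: Double) {
        // position is the playback point (in seconds) within the file
        vocoder.position = normalizedX * file.duration
    }
}
```

Feeding the finger position in continuously is what produces the scratching effect, since the vocoder resynthesizes the audio at whatever position you give it.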

Related

Best way to dynamically play notes in AudioKit?

Say I want to trigger a random note and velocity on an instrument every quarter note. What is the best way to achieve this in AudioKit V5?
The examples seem to use the sequencer to schedule sounds with proper timing, but then you have to add the notes to the track in advance.
One solution is to pre-generate a bar of random quarter notes with looping enabled - when the bar of random notes is complete, clear the bar and replace it with new random notes.
I'm wondering if there's a lower level way of doing this? Some kind of callback that is called with precise timing where I can generate the values as they're needed? Or another approach?
Nothing forces you to obey the incoming note or velocity data from the sequencer. Just make your instrument respond to any note-on with a random note and velocity. That way you get the timing without worrying about anything else.
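To make that concrete, here's a rough sketch assuming AudioKit 5's AppleSequencer, MIDICallbackInstrument, and MIDISampler (exact signatures may vary between 5.x releases). The track holds placeholder quarter notes whose pitch and velocity are ignored; the callback substitutes random values, so the sequencer only supplies the timing:

```swift
import AudioKit

// Sketch: a looped bar of placeholder quarter notes drives a callback that
// plays random notes/velocities on a sampler.
class RandomNotePlayer {
    let engine = AudioEngine()
    let sampler = MIDISampler()
    let sequencer = AppleSequencer()
    var callbackInst: MIDICallbackInstrument!

    init() throws {
        engine.output = sampler

        callbackInst = MIDICallbackInstrument { [weak self] status, _, _ in
            // Only react to note-on messages (status bytes 0x90...0x9F)
            guard status & 0xF0 == 0x90 else { return }
            let note = MIDINoteNumber.random(in: 48...72)
            let velocity = MIDIVelocity.random(in: 60...127)
            self?.sampler.play(noteNumber: note, velocity: velocity, channel: 0)
        }

        let track = sequencer.newTrack()
        track?.setMIDIOutput(callbackInst.midiIn)
        // One bar of placeholder quarter notes; their values don't matter
        for beat in 0..<4 {
            track?.add(noteNumber: 60, velocity: 100,
                       position: Duration(beats: Double(beat)),
                       duration: Duration(beats: 0.5))
        }
        sequencer.setLength(Duration(beats: 4))
        sequencer.enableLooping()

        try engine.start()
        sequencer.play()
    }
}
```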

Playing back multiple sounds simultaneously and precisely with Audio Queue?

I need to have a series of sound samples (audio files) being played back at the touch of a button. The audio samples need to be played back simultaneously and precisely (think 4 voices in a piece of music).
I managed to do this with several instances of AVAudioPlayer, but they go out of sync.
Reading about it, it seems that due to its lack of precision, AVAudioPlayer is not the right choice for what I'm trying to do.
Audio Queue (is this part of Core Audio?) seems to be able to do what I want, but I can hardly find any Swift code snippets showing how to set up what I'm trying to do, which is:
Load the audio file, prepare it to be played, then play it (I would trigger it with an NSTimer).
Is this straightforward to implement with Audio Queue, or should I look elsewhere?
If you could point me in the right direction I would be very grateful.
Thanks a lot!

Audio bars visualizer in iOS

I'm looking for a way to create an audio bars visualizer similar to this in iOS.
Every white bar will move up and down depending on the audio waveform. I'm really lost because I don't have much experience dealing with audio in Objective-C.
EDIT: What I'm seeking is what the Overcast app does in its visualizer (the group of vertical orange bars on the lower part of the podcast's image).
Can anyone help?
Thanks
EDIT: Thanks to Tomer's answer I finally made it. First I did this tutorial in order to make it all clear. Then I created my own VisualizerView for my project; you can find it in this gist. Maybe it's not perfect, but it does what I needed it to do.
Generally, you have a few options if you want to get an idea of what something sounds like in iOS:
Use the simple AVAudioPlayer, and then use the [audioPlayer averagePowerForChannel:] method to get the average audio level for the current moment. Check out this tutorial (and see the sketch after these options).
Use the Audio Queue API, which lets you send whatever audio you want to the speaker: you would read audio from your source and fill the buffers with it every time. (If you're reading from a file, use AVAssetReader.) This way you always know exactly what waveform you're playing, so you can, for example, calculate its average power or process it in other ways such as FFT. Then you'd update the bars accordingly.
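For the first option, a minimal sketch (not from the linked tutorial) might look like this; `audioURL` and `updateBars` are placeholders for your own file and UI code:

```swift
import AVFoundation

// Sketch: poll AVAudioPlayer's metering to drive the bar heights.
func startMetering(audioURL: URL, updateBars: @escaping (Float) -> Void) throws -> AVAudioPlayer {
    let player = try AVAudioPlayer(contentsOf: audioURL)
    player.isMeteringEnabled = true
    player.play()

    // Poll the meters ~30 times per second (a CADisplayLink also works)
    Timer.scheduledTimer(withTimeInterval: 1.0 / 30.0, repeats: true) { _ in
        player.updateMeters()
        // averagePower(forChannel:) is in decibels (roughly -160...0); map to 0...1
        let db = player.averagePower(forChannel: 0)
        updateBars(powf(10, db / 20))
    }
    return player // keep a strong reference while playing
}
```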
EDIT: The standard way of doing such a thing is to use the Fast Fourier Transform (FFT) - it extracts frequency information from a sound. Here's a good example of using it on iOS (Apple's guide here). But, of course, to use it you have to know exactly what waveform you're playing every time, so you'd probably want to use a lower-level API such as Audio Queue.
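If you go the FFT route, here's a rough sketch (not from the linked example) of computing a magnitude spectrum with Accelerate's vDSP. It assumes you already have a power-of-two block of Float samples; windowing and scaling are omitted for brevity:

```swift
import Accelerate

// Sketch: squared-magnitude spectrum of one block of audio samples via vDSP.
func magnitudeSpectrum(of samples: [Float]) -> [Float] {
    let n = samples.count
    let log2n = vDSP_Length(log2(Double(n)))
    guard let fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
    defer { vDSP_destroy_fftsetup(fftSetup) }

    var realp = [Float](repeating: 0, count: n / 2)
    var imagp = [Float](repeating: 0, count: n / 2)
    var magnitudes = [Float](repeating: 0, count: n / 2)

    realp.withUnsafeMutableBufferPointer { realPtr in
        imagp.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!, imagp: imagPtr.baseAddress!)
            // Pack the interleaved real samples into split-complex form
            samples.withUnsafeBufferPointer { buf in
                buf.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            // Forward real FFT, then squared magnitude per frequency bin
            vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
        }
    }
    return magnitudes
}
```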

What logic is used for creating an Equalizer meter

Basically I'm going to be working on an iOS music app which, when a song is being played, shows a fancy equalizer meter, something like this but with all the animation of bars going up and down:
After looking into this and not finding enough resources, I really want to carry this out as a project, perhaps making a web version using jQuery.
I'm not really asking for specific code; I just want to know how the animation works in general.
Thanks a million!
Check out the Cocoa Waveform Audio Player Control project. It's a Cocoa audio player component which displays the waveform of the audio file.
Also, there are already a lot of questions on this topic:
iOS FFT Accerelate.framework draw spectrum during playback
Using the Apple FFT and Accelerate Framework
iOS FFT Draw spectrum
Animation would be pretty straightforward: it's just animating changes to the heights of rectangles.
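For illustration, a tiny sketch of that animation step (UIKit, with placeholder names like `barViews` and `levels` that aren't from the linked projects) could be:

```swift
import UIKit

// Sketch: animate bar heights from an array of levels in 0...1.
func updateBars(_ barViews: [UIView], with levels: [Float], maxHeight: CGFloat) {
    UIView.animate(withDuration: 0.05) {
        for (bar, level) in zip(barViews, levels) {
            let height = maxHeight * CGFloat(level)
            // Grow/shrink each bar while keeping its bottom edge fixed
            bar.frame = CGRect(x: bar.frame.minX,
                               y: bar.frame.maxY - height,
                               width: bar.frame.width,
                               height: height)
        }
    }
}
```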

iOS audio : cutting and stitching audio?

I'm a Unity dev and need to help out colleagues with doing this natively in Obj-C. In Unity it's no big deal:
1) Samples are stored in memory as a List of float[].
2) A helper function returns a float[] of n size for any given sample, at any given offset.
3) Another helper function fades the data if needed.
4) An AudioClip object is created with the right size to accommodate all cut samples, and is then filled at appropriate offsets.
5) The AudioClip is assigned to a player component (AudioSource).
6) AudioSource.Play(ulong offsetInSamples) plays at a sample-accurate time in the future. Looping is also just a matter of setting the AudioSource object's loop parameter.
I would very much appreciate it if someone could point me towards the right classes to achieve similar results in Obj-C for iOS devices. I'm pretty sure a lot of iOS audio newbies would be interested too. Many thanks in advance!
Gregzo
A good overview of the relevant audio APIs available in iOS is here.
The highest-level framework that makes sense for patching together audio clips, setting their volume levels, and playing them back in your case is probably AVFoundation.
It will involve creating AVAssets, adding them to AVPlayerItems, possibly putting them into AVMutableCompositions to merge multiple items together and adjust their volumes (audioMix), and then playing them back with AVPlayer.
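To make that pipeline concrete, here's a rough sketch (file names are placeholders, and this is only one way to wire it up) of stitching two clips end-to-end and playing the result:

```swift
import AVFoundation

// Sketch: append two clips in an AVMutableComposition, adjust volume with an
// audio mix, and hand the composition to AVPlayer.
func makeStitchedPlayer() throws -> AVPlayer? {
    let assetA = AVURLAsset(url: Bundle.main.url(forResource: "clipA", withExtension: "m4a")!)
    let assetB = AVURLAsset(url: Bundle.main.url(forResource: "clipB", withExtension: "m4a")!)

    let composition = AVMutableComposition()
    guard let track = composition.addMutableTrack(withMediaType: .audio,
                                                  preferredTrackID: kCMPersistentTrackID_Invalid),
          let sourceA = assetA.tracks(withMediaType: .audio).first,
          let sourceB = assetB.tracks(withMediaType: .audio).first else { return nil }

    // Append clip A, then clip B immediately after it
    try track.insertTimeRange(CMTimeRange(start: .zero, duration: assetA.duration),
                              of: sourceA, at: .zero)
    try track.insertTimeRange(CMTimeRange(start: .zero, duration: assetB.duration),
                              of: sourceB, at: assetA.duration)

    // Lower the volume once clip B starts, via an audio mix
    let params = AVMutableAudioMixInputParameters(track: track)
    params.setVolume(0.5, at: assetA.duration)
    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [params]

    let item = AVPlayerItem(asset: composition)
    item.audioMix = audioMix
    return AVPlayer(playerItem: item) // keep a strong reference while playing
}
```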
AVFoundation works with AVAsset; for converting between relevant formats and lower-level bytes you'll want to have a look at AudioToolbox (I can't post more than two links yet).
For a somewhat simpler API with less control, have a look at AVAudioPlayer. If you need greater control (e.g. games: real-time / low latency) you might need to use OpenAL for playback.

Resources