In my app, I give the user the option to play a small frame of audio (from a larger audio file) in order to listen to it over and over while doing a manual transcription. AKPlayer makes this trivial. Now, because the frame of audio is pretty small, hearing it loop over and over gets pretty intense (a little maddening, in the classical sense of the word). I'd like to either fade it out/fade it back in with each loop, OR just inject about 500 ms of silence before the loop starts again. I have no idea where to start; here is the current working code as is:
public func playLoop(start: Double, end: Double) {
self.chordLoopPlayer.isLooping = true
self.chordLoopPlayer.buffering = .always
self.chordLoopPlayer.preroll()
let millisecondsPerSample : Double = 1000 / 44100
let startingDuration : Double = (((start * millisecondsPerSample) / 1000) / 2)
let endingDuration : Double = (((end * millisecondsPerSample) / 1000) / 2)
print("StartinDuration:\(startingDuration) | EndingDuration:\(endingDuration)")
self.chordLoopPlayer.loop.start = startingDuration
self.chordLoopPlayer.loop.end = endingDuration
self.chordLoopPlayer.play(from: startingDuration, to: endingDuration)
}
Thanks so much <3
You just need to set .fade values for your fade-in/fade-out prior to calling the play() function. AudioKit will execute them each time going in and out of the loop. So assuming you'd like a 2-second fade-out, and a 2-second fade-in (adjust to your taste), your code would look like:
public func playLoop(start: Double, end: Double) {
self.chordLoopPlayer.isLooping = true
self.chordLoopPlayer.buffering = .always
self.chordLoopPlayer.preroll()
let millisecondsPerSample : Double = 1000 / 44100
let startingDuration : Double = (((start * millisecondsPerSample) / 1000) / 2)
let endingDuration : Double = (((end * millisecondsPerSample) / 1000) / 2)
print("StartinDuration:\(startingDuration) | EndingDuration:\(endingDuration)")
self.chordLoopPlayer.loop.start = startingDuration
self.chordLoopPlayer.loop.end = endingDuration
// add fade in/out values to fade in or fade out during playback; reset to 0 to disable.
self.chordLoopPlayer.fade.inTime = 2 // in seconds
self.chordLoopPlayer.fade.outTime = 2 // in seconds
self.chordLoopPlayer.play(from: startingDuration, to: endingDuration)
}
I find the AudioKit documentation a bit frustrating in this respect: it's not easy to find these properties if you don't already know what you're looking for, or to understand how to use them if you haven't come across sample code. I hope this is a useful example for others who search on this topic on SO. In any case, the list of sub-properties associated with AudioKit's .fade property is here: https://audiokit.io/docs/Classes/AKPlayer/Fade.html
Thanks for AudioKit!
I'm a beginner in Swift and AudioKit, so this may be an easy question:
I want to build a metronome app. From the example in AudioKit's new Cookbook and the old AKMetronome(), I can see how to build a simple metronome using AudioKit. But I don't know how to play beats with compound time signatures (3/8, 7/8, etc.). Both examples here use a time signature with a fixed bottom number of 4, and only the top number can be changed (i.e. we can have 1/4, 3/4, or 6/4, but not 3/8 or 6/8).
Is there a way to change the bottom number?
Link for AKMetronome: https://audiokit.io/docs/Classes/AKMetronome.html#/s:8AudioKit11AKMetronomeC5resetyyF
AudioKit Cookbook's Shaker Metronome:
https://github.com/AudioKit/Cookbook/blob/main/Cookbook/Cookbook/Recipes/Shaker.swift
I made some changes to the Shaker Metronome's code to illustrate how you could create a metronome that plays different time signatures such as 6/8, 5/8, 7/8, and so on.
First I added some information to the ShakerMetronomeData structure:
// Length of each figure in sequencer beats (a quarter note is one beat)
enum Figure: Double {
    case quarter = 1.0
    case eighth = 0.5
}

struct ShakerMetronomeData {
    var isPlaying = false
    var tempo: BPM = 120
    var timeSignatureTop: Int = 4
    var downbeatNoteNumber = MIDINoteNumber(6)
    var beatNoteNumber = MIDINoteNumber(10)
    var beatNoteVelocity = 100.0
    var currentBeat = 0
    var figure: Figure = .quarter   // which note value gets the click
    var pattern: [Int] = [4]        // how the beats are grouped within the bar
}
Then, the part of the updateSequences function that plays the metronome clicks would become:
func updateSequences() {
    let track = sequencer.tracks.first!
    // The track length is the number of top-number beats times the length of each figure
    // (e.g. 7/8 -> 7 eighth notes * 0.5 = 3.5 quarter-note beats).
    track.length = Double(data.timeSignatureTop) * data.figure.rawValue
    track.clear()
    var startTime: Double = 0.0
    let vel = MIDIVelocity(Int(data.beatNoteVelocity))
    for numberOfBeatsInGroup in data.pattern {
        // Accented click at the start of each group in the pattern
        track.sequence.add(noteNumber: data.downbeatNoteNumber, position: startTime, duration: 0.4)
        startTime += data.figure.rawValue
        // Regular clicks for the remaining beats in the group
        for _ in 1 ..< numberOfBeatsInGroup {
            track.sequence.add(noteNumber: data.beatNoteNumber, velocity: vel, position: startTime, duration: 0.1)
            startTime += data.figure.rawValue
        }
    }
}
These would be the values of the figure and pattern members of the structure for different time signatures:
/*
Time signature Figure Pattern
4/4 .quarter [4]
3/4 .quarter [3]
6/8 .eighth [3,3]
5/8 .eighth [5]
7/8 .eighth [2,2,3]
*/
Please note I haven't tested this code, but it illustrates how you could play beats with a compound time signature.
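As a quick illustration of how the figure and pattern values would be used (a hypothetical sketch; data and updateSequences() are the ones from the code above), switching the metronome to 7/8 grouped as 2+2+3 would just mean updating the data and rebuilding the track:

// Hypothetical usage of the sketch above: 7/8 felt as 2+2+3 eighth notes
data.timeSignatureTop = 7
data.figure = .eighth
data.pattern = [2, 2, 3]
updateSequences() // rebuild the click track with the new grouping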
This could be improved by having three different sounds instead of two: one for the start of each bar, one for the beginning of each group within the bar, and one for the remaining beats.
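A minimal sketch of that idea, assuming an extra barNoteNumber field were added to ShakerMetronomeData (that field is hypothetical, not part of the code above): the group loop inside updateSequences could become

// Hypothetical three-sound variant: data.barNoteNumber marks the start of the bar,
// data.downbeatNoteNumber marks the start of each later group,
// and data.beatNoteNumber marks the remaining beats.
var startTime: Double = 0.0
let vel = MIDIVelocity(Int(data.beatNoteVelocity))
for (groupIndex, numberOfBeatsInGroup) in data.pattern.enumerated() {
    let groupNote = groupIndex == 0 ? data.barNoteNumber : data.downbeatNoteNumber
    track.sequence.add(noteNumber: groupNote, position: startTime, duration: 0.4)
    startTime += data.figure.rawValue
    for _ in 1 ..< numberOfBeatsInGroup {
        track.sequence.add(noteNumber: data.beatNoteNumber, velocity: vel, position: startTime, duration: 0.1)
        startTime += data.figure.rawValue
    }
}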
I am currently using FDWaveFormView to great success to display waveforms representing audio I record from AKMicrophone or AKAudioFile.
I am successfully able to highlight specific regions in the waveform, and FDWaveformView gives back a range of the samples from the audio file.
My problem now is I cannot find an appropriate method in AKPlayer that would let me play from a start sample to an end sample.
I noticed that AKSamplePlayer is now deprecated, but it did have a method: play(from: Sample, to: Sample).
My guess is that I would be able to do some math to translate Sample position to time (as a Double, as prescribed in AKPlayer); however, I have not found the appropriate math or functions to do this. Any hints?
To be very explicit in what I am trying to do, please refer to the image below:
Note for any AudioKit core members who may see this question: I know there are a variety of AudioKitUI components that may have made this easier, but only FDWaveformView has given me the functionality I need for this particular app. I'm happy to discuss further offline. Thanks again so much.
EDIT
I've come up with some code that I believe has solved it:
let startingSampleIndex = self.waveformPlot.highlightedSamples!.min()!
let endingSampleIndex = self.waveformPlot.highlightedSamples!.max()!
let millisecondsPerSample: Double = 1000 / 44100
let startingDuration: Double = (Double(startingSampleIndex) * millisecondsPerSample) / 1000
let endingDuration: Double = (Double(endingSampleIndex) * millisecondsPerSample) / 1000
print("StartSample:\(startingSampleIndex) | EndSample:\(endingSampleIndex) | milliPerSample:\(millisecondsPerSample) | StartDuration:\(startingDuration) | EndDuration:\(endingDuration)")
player.play(from: startingDuration, to: endingDuration)
The main equation is numberOfSamples * millisecondsPerSample = timeInMilliseconds; by dividing by 1000 I can normalize everything to seconds, which is what AKPlayer wants. If anyone sees something problematic here I'd love the advice, but I think this has done it! Sorry, I am still new to DSP, and so thankful for AudioKit being an incredible shepherd into this world!
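As a quick sanity check of that arithmetic (assuming 44.1 kHz audio, so the numbers here are purely illustrative), sample 88,200 should land at exactly 2.0 seconds whichever way it is computed:

// Sanity check of the sample-to-seconds conversion at an assumed 44.1 kHz rate
let sampleRate = 44_100.0
let sampleIndex = 88_200.0
let secondsDirect = sampleIndex / sampleRate                   // 2.0
let secondsViaMs = (sampleIndex * (1000 / sampleRate)) / 1000  // also 2.0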
To convert from frames to seconds you should divide by the sample rate of the audio file, not a hardcoded 44100 value:
guard let frameRange = self.waveformPlot.highlightedSamples,
      let startFrame = frameRange.min(),
      let endFrame = frameRange.max() else { return }
let startTime = Double(startFrame) / audioFile.fileFormat.sampleRate
let endTime = Double(endFrame) / audioFile.fileFormat.sampleRate
player.play(from: startTime, to: endTime)
I found the solution, essentially RTFM on DSP 101 and samples 😅:
let startingSampleIndex = self.waveformPlot.highlightedSamples!.min()!
let endingSampleIndex = self.waveformPlot.highlightedSamples!.max()!
let millisecondsPerSample: Double = 1000 / 44100
let startingDuration: Double = (Double(startingSampleIndex) * millisecondsPerSample) / 1000
let endingDuration: Double = (Double(endingSampleIndex) * millisecondsPerSample) / 1000
print("StartSample:\(startingSampleIndex) | EndSample:\(endingSampleIndex) | milliPerSample:\(millisecondsPerSample) | StartDuration:\(startingDuration) | EndDuration:\(endingDuration)")
player.play(from: startingDuration, to: endingDuration)
This is working excellently, thanks again to both FDWaveFormView and AudioKit!
I have the following custom SKAction working, but as an EaseIn instead of an EaseOut. I want it to EaseOut! I have failed miserably to correct it using various easing equations found around the web.
let duration = 2.0
let initialX = cameraNode.position.x
let customEaseOut = SKAction.customActionWithDuration(duration, actionBlock: {node, elapsedTime in
let t = Double(elapsedTime)/duration
let b = Double(initialX)
let c = Double(targetPoint.x)
let p = t*t*t*t*t
let l = b*(1-p) + c*p
node.position.x = CGFloat(l)
})
cameraNode.runAction(customEaseOut)
Any help would be much appreciated.
Thanks
You don't need to calculate it.
SKAction just has a property called timingMode:
// fall is an SKAction
fall.timingMode = .easeInEaseOut
You can choose from:
linear (default)
easeIn
easeOut
easeInEaseOut
Check the details in the API docs.
If you need to change the Apple presets, you can use timingFunction:
fall.timingFunction = { time -> Float in
return time
}
To build a custom function according to the source:
/**
A custom timing function for SKActions. Input time will be linear 0.0-1.0
over the duration of the action. Return values must be 0.0-1.0 and increasing
and the function must return 1.0 when the input time reaches 1.0.
*/
public typealias SKActionTimingFunction = (Float) -> Float
So with this information you can write:
func CubicEaseOut(_ t:Float)->Float
{
let f:Float = (t - 1);
return f * f * f + 1;
}
fall.timingFunction = CubicEaseOut
We can modify the following code to make the custom action ease out instead of ease in:
let t = Double(elapsedTime)/duration
...
let p = t*t*t*t*t
To get a better understanding of p, it is helpful to plot it as a function of t: the curve starts slowly and steepens toward the end, so the function clearly eases in over time. Changing the definition of t to
let t = 1 - Double(elapsedTime)/duration
and plotting p shows the opposite shape: the action now eases out, but it starts at 1 and ends at 0. To resolve this, change the definition of p to
let p = 1-t*t*t*t*t
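Putting both changes together (same variables and Swift 2-era syntax as in the question), the full custom action would read:

let duration = 2.0
let initialX = cameraNode.position.x
let customEaseOut = SKAction.customActionWithDuration(duration, actionBlock: { node, elapsedTime in
    // t runs from 1 down to 0 over the duration
    let t = 1 - Double(elapsedTime) / duration
    let b = Double(initialX)
    let c = Double(targetPoint.x)
    // p runs from 0 to 1, changing quickly at first and slowing down (ease-out)
    let p = 1 - t*t*t*t*t
    // Linearly interpolate between the start and target x positions using p
    let l = b * (1 - p) + c * p
    node.position.x = CGFloat(l)
})
cameraNode.runAction(customEaseOut)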
PLEASE SOMEONE HELP!
I want to have my SKEmitterNode's scale (meaning size) get larger and smaller in time with the music I have built into the application using AVAudioPlayer. Right now this is pretty much all I have for the SKEmitterNode, and it looks great:
beatParticle?.position = CGPoint(x: self.size.width * 0.5, y: self.size.height * 0.5)
var beatParticleEffectNode = SKEffectNode()
beatParticleEffectNode.addChild(beatParticle!)
self.addChild(beatParticleEffectNode)
All the looks are done in the .sks file.
Here is where I call the "updateBeatParticle" function in a continual loop; this is where I will put my code for making the particle's scale (meaning size) get larger and smaller with the music.
var dpLink : CADisplayLink?
dpLink = CADisplayLink(target: self, selector: "updateBeatParticle")
dpLink?.addToRunLoop(NSRunLoop.currentRunLoop(), forMode: NSRunLoopCommonModes)
func updateBeatParticle(){
//Put code here
}
Any idea how I can do this? I looked at some tutorials such as this: https://www.raywenderlich.com/36475/how-to-make-a-music-visualizer-in-ios
However, I can't quite get my head around it because they're using an emitterLayer and it's in Obj-C. I am also interested in any other ideas you wonderful people may have!
WARNING: The following code has not been tested. Please let me know if it works.
Firstly, it looks like you are using SpriteKit, so you could put the code needed to alter the emitter scale in the SKScene method update:, which is automatically called once per frame, much like a CADisplayLink.
Essentially all you need to do is update the emitter scale in the update: method based on the volume of the channels of your AVAudioPlayer. Note that the audio player may have multiple channels running, so you need to average the power level across all of them.
Firstly...
audioPlayer.meteringEnabled = true
Set this after you initialise your audio player, so that it will monitor the levels of the channels.
Next, add something like this in your update method.
override func update(currentTime: CFTimeInterval) {
var scale: CGFloat = 0.5
if audioPlayer.playing { // Only do this if the audio is actually playing
audioPlayer.updateMeters() // Tell the audio player to update and fetch the latest readings
let channels = audioPlayer.numberOfChannels
var power: Float = 0
// Loop over each channel and add its average power
for i in 0..<channels {
power += audioPlayer.averagePowerForChannel(i)
}
power /= Float(channels) // This will give the average power across all the channels in decibels
// Convert power in decibels to a more appropriate percentage representation
scale = CGFloat(getIntensityFromPower(power))
}
// Set the particle scale to match
emitterNode.particleScale = scale
}
The method getIntensityFromPower is used to convert the power in decibels to a more appropriate percentage representation. This method can be declared like so...
// Will return a value between 0.0 ... 1.0, based on the decibels
func getIntensityFromPower(decibels: Float) -> Float {
// The minimum possible decibel returned from an AVAudioPlayer channel
let minDecibels: Float = -160
// The maximum possible decibel returned from an AVAudioPlayer channel
let maxDecibels: Float = 0
// Clamp the decibels value
if decibels < minDecibels {
return 0
}
if decibels >= maxDecibels {
return 1
}
// This value can be adjusted to affect the curve of the intensity
let root: Float = 2
let minAmp = powf(10, 0.05 * minDecibels)
let inverseAmpRange: Float = 1.0 / (1.0 - minAmp)
let amp: Float = powf(10, 0.05 * decibels)
let adjAmp = (amp - minAmp) * inverseAmpRange
return powf(adjAmp, 1.0 / root)
}
The algorithm for this conversion was taken from this StackOverflow response https://stackoverflow.com/a/16192481/3222419.
I am trying to write this in Swift (I am in step 54). In a UICollectionViewLayout class I have a setup function:
func setup() {
var percentage = 0.0
for i in 0...RotationCount - 1 {
var newPercentage = 0.0
do {
newPercentage = Double((arc4random() % 220) - 110) * 0.0001
println(newPercentage)
} while (fabs(percentage - newPercentage) < 0.006)
percentage = newPercentage
var angle = 2 * M_PI * (1 + percentage)
var transform = CATransform3DMakeRotation(CGFloat(angle), 0, 0, 1)
rotations.append(transform)
}
}
Here is how the setup function is described in the tutorial:
First we create a temporary mutable array that we add objects to. Then we run through our loop, creating a rotation each time. We create a random percentage between -1.1% and 1.1% and then use that to create a tweaked CATransform3D. I geeked out a bit and added some logic to ensure that the percentage of rotation we randomly generate is at least 0.6% different than the one generated beforehand. This ensures that photos in a stack don't have the misfortune of all being rotated the same way. Once we have our transform, we add it to the temporary array by wrapping it in an NSValue and then rinse and repeat. After all 32 rotations are added we set our private array property. Now we just need to put it to use.
When I run the app, I get a runtime error on the while (fabs(percentage - newPercentage) < 0.006) line.
the setup function is called in prepareLayout()
override func prepareLayout() {
super.prepareLayout()
setup()
...
}
Without the do..while loop, the app runs fine. So I am wondering, why?
Turns out I had to be more type-safe:
newPercentage = Double(Int((arc4random() % 220)) - 110) * 0.0001
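The underlying issue is that arc4random() returns a UInt32, so (arc4random() % 220) - 110 underflows and traps whenever the random value is less than 110; converting to a signed Int before subtracting keeps the arithmetic signed. A small sketch of the same idea (the arc4random_uniform variant is just a common alternative, not part of the original post):

// arc4random() % 220 yields a UInt32 in 0..<220; subtracting 110 traps for values < 110.
// Converting to Int first keeps the arithmetic signed:
let signed = Int(arc4random() % 220) - 110                     // -110 ... 109
let newPercentage = Double(signed) * 0.0001

// A common alternative that also avoids modulo bias:
let alt = Double(Int(arc4random_uniform(220)) - 110) * 0.0001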
This must be a Swift bug. That code should NOT crash at runtime. It should either give a compiler error on the newPercentage = expression or it should correctly promote the types as C does.