I am making a vibration app. When the button is tapped, a specific vibration pattern plays, but I am struggling with making it repeat forever.
Here is my vibration function
func complexSuccess() {
    guard CHHapticEngine.capabilitiesForHardware().supportsHaptics else { return }
    var events = [CHHapticEvent]()
    for i in stride(from: 0, to: 100, by: speed) {
        let intensity = CHHapticEventParameter(parameterID: .hapticIntensity, value: strength)
        let sharpness = CHHapticEventParameter(parameterID: .hapticSharpness, value: sharpness)
        let event = CHHapticEvent(eventType: .hapticTransient, parameters: [intensity, sharpness], relativeTime: TimeInterval(i))
        events.append(event)
    }
    do {
        let pattern = try CHHapticPattern(events: events, parameters: [])
        let player = try engine?.makePlayer(with: pattern)
        try player?.start(atTime: 0)
    } catch {
        print("Failed to play pattern: \(error.localizedDescription).")
    }
}
For now, the loop stops at 100, but I want the pattern to go on forever. Please help me.
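For reference, one way to make a pattern repeat forever, instead of scheduling 100 events up front, is Core Haptics' advanced pattern player, which can loop a short pattern until it is explicitly stopped. Below is a minimal sketch reusing the engine, strength, sharpness, and speed names from the code above; treat it as an illustration rather than a drop-in replacement:

func playLoopingHaptic() {
    guard CHHapticEngine.capabilitiesForHardware().supportsHaptics else { return }
    let intensityParam = CHHapticEventParameter(parameterID: .hapticIntensity, value: strength)
    let sharpnessParam = CHHapticEventParameter(parameterID: .hapticSharpness, value: sharpness)
    // A single transient tap; the advanced player loops the pattern indefinitely.
    let event = CHHapticEvent(eventType: .hapticTransient, parameters: [intensityParam, sharpnessParam], relativeTime: 0)
    do {
        let pattern = try CHHapticPattern(events: [event], parameters: [])
        let player = try engine?.makeAdvancedPlayer(with: pattern)
        player?.loopEnabled = true
        player?.loopEnd = TimeInterval(speed)   // one tap every `speed` seconds, matching the original spacing
        try player?.start(atTime: CHHapticTimeImmediate)
    } catch {
        print("Failed to play pattern: \(error.localizedDescription).")
    }
}

Keep a reference to the player so you can later call player?.stop(atTime: CHHapticTimeImmediate) to end the loop.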
I am polling the Apple Watch for Core Motion data at an interval of 0.01 s (100 Hz). The purpose of the application is to see movement in real time. To capture data as quickly as possible, I leverage the didReceiveMessageData/sendMessageData functions. On the iPhone, I have a simple function that reads the data:
func session(_ session: WCSession, didReceiveMessageData messageData: Data) {
    let records: [Double] = try! NSKeyedUnarchiver.unarchivedObject(ofClasses: [NSArray.self], from: messageData) as! [Double]
}
And on an Apple Watch 6, I have a simple function that sends the data. However, sending suffers from a sporadic yet significant delay.
class MyController: WKInterfaceController, WCSessionDelegate {
    private let motion = CMMotionManager()
    private let motionQueue = OperationQueue()
    private let messagingQueue = OperationQueue()
    private let healthStore = HKHealthStore()
    private var workoutSession: HKWorkoutSession?   // referenced below; declared here so the snippet is complete
    private var stack: QuaternionStack = QuaternionStack()

    override init() {
        super.init()
        if WCSession.isSupported() {
            let session = WCSession.default
            session.delegate = self
            if session.activationState == .notActivated { session.activate() }
        }
        // Serial queues for sample handling and calculations.
        messagingQueue.qualityOfService = .userInteractive
        messagingQueue.maxConcurrentOperationCount = 1
        motionQueue.qualityOfService = .userInteractive
        motionQueue.maxConcurrentOperationCount = 1
        startGettingData()
    }

    func startGettingData() {
        // If we have already started the workout, then do nothing.
        if workoutSession != nil { return }
        if !motion.isDeviceMotionAvailable { return }
        let workoutConfiguration = HKWorkoutConfiguration()
        workoutConfiguration.activityType = .functionalStrengthTraining
        workoutConfiguration.locationType = .indoor
        do {
            workoutSession = try HKWorkoutSession(healthStore: healthStore, configuration: workoutConfiguration)
        } catch { fatalError("Unable to create the workout session!") }
        // Start the workout session and device motion updates.
        workoutSession!.startActivity(with: Date())
        motion.deviceMotionUpdateInterval = 0.01
        motion.startDeviceMotionUpdates(using: .xArbitraryZVertical, to: motionQueue) { [self] (deviceMotion: CMDeviceMotion?, _: Error?) in
            guard let motion = deviceMotion else { return }
            let attitude = motion.attitude.quaternion
            stack.push(Quaternion(x: attitude.x, y: attitude.y, z: attitude.z, w: attitude.w))
            guard let quaternions = stack.pop() else { return }
            messagingQueue.cancelAllOperations()
            let blockOperation = BlockOperation()
            blockOperation.addExecutionBlock({ [unowned blockOperation] in
                if blockOperation.isCancelled { return }
                self.sendDataToPhone(quaternions: quaternions)
            })
            messagingQueue.addOperation(blockOperation)
        }
    }

    private func sendDataToPhone(quaternions: [Quaternion]) {
        if WCSession.default.isReachable {
            var capturedQuaternions: [Double] = [Double]()
            for quat in quaternions { capturedQuaternions.append(contentsOf: [quat.x, quat.y, quat.z, quat.w]) }
            WCSession.default.sendMessageData(try! NSKeyedArchiver.archivedData(withRootObject: capturedQuaternions, requiringSecureCoding: false), replyHandler: nil, errorHandler: nil)
        }
    }
}
I've implemented a stack as follows:
struct QuaternionStack {
    private let max = 2
    private var array: [Quaternion] = []

    mutating func push(_ element: Quaternion) {
        array.append(element)
        if array.count > max { array.removeFirst() }
    }

    mutating func pop() -> [Quaternion]? {
        if array.count < max { return nil }
        var results: [Quaternion] = [Quaternion]()
        for _ in 0 ..< max { results.append(array.popLast()!) }
        results.reverse()
        array.removeAll()
        return results
    }
}
If I set QuaternionStack.max to a big number, like 10, I see no obvious throttling on the iPhone when receiving data, because I send more data but less often. However, decreasing the number degrades the performance. As an example, imagine I send every 2 incoming packets (QuaternionStack.max = 2). Sometimes a few seconds pass before the packets are received; when this happens, the Apple Watch seems to send them very quickly in an effort to catch up. Another example of this issue is when listening to music on paired Apple AirPods or receiving an incoming call: the WCSession sendMessageData from the watch becomes very inconsistent.
What must I do to increase the throughput of WCSession sendMessageData? The application I am writing requires very fast (100 Hz) and continuous motion updates.
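Not an authoritative fix, but building on the observation above that larger, less frequent payloads survive better, one approach is to batch a fixed number of samples on the watch before each send, so fewer sendMessageData calls carry the same data. A minimal sketch using the same archiving as above (the batch size is an assumption to tune, not a recommendation):

// Hypothetical batching helper for the watch-side controller.
private var pendingSamples: [Double] = []
private let samplesPerMessage = 10   // assumed batch size; tune against observed latency

private func enqueue(_ quat: Quaternion) {
    pendingSamples.append(contentsOf: [quat.x, quat.y, quat.z, quat.w])
    guard pendingSamples.count >= samplesPerMessage * 4 else { return }
    let payload = pendingSamples
    pendingSamples.removeAll(keepingCapacity: true)
    guard WCSession.default.isReachable else { return }   // drop the batch if the phone is unreachable
    if let data = try? NSKeyedArchiver.archivedData(withRootObject: payload, requiringSecureCoding: false) {
        WCSession.default.sendMessageData(data, replyHandler: nil, errorHandler: nil)
    }
}

The iPhone side already receives a flattened [Double] in this x, y, z, w layout, so the receive handler above would not need to change.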
I am trying to detect pitch from the iOS microphone with AudioKit. Here is the code:
init() {
    guard let input = engine.input else {
        fatalError()
    }
    mic = input
    filter = HighPassFilter(mic, cutoffFrequency: 200, resonance: 40)
    silence = Fader(filter, gain: 0)
    tracker = PitchTap(silence) { pitch, amp in
        DispatchQueue.main.async {
            print(pitch[0], amp[0])
            self.update(pitch[0], amp[0])
        }
    }
    engine.output = filter
}

func start() {
    recordFrequency = []
    do {
        try engine.start()
        tracker.start()
    } catch let err {
        Log(err)
    }
}
Because there is always some detection (frequencies from 20 to 200 Hz) even when I don't make any sound, I added a high-pass filter to cut out audio below 40 dB and below 200 Hz, but it doesn't seem to work. What should I do?
Thanks.
It looks like you just have some connection point issues. Try putting the tracker before you make the signal silent:
filter = HighPassFilter(mic, cutoffFrequency: 200, resonance: 40)
tracker = PitchTap(filter) { pitch, amp in
    DispatchQueue.main.async {
        print(pitch[0], amp[0])
        self.update(pitch[0], amp[0])
    }
}
// Fade the monitored signal to zero and make it the output, so the tapped
// chain stays connected to the engine without being audible.
silence = Fader(filter, gain: 0)
engine.output = silence
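If spurious low-level detections during silence are still a problem after rewiring, a common complement is to gate on the amplitude value the tap already reports, so quiet input is simply ignored. A small sketch of the same handler with a gate added (the 0.1 threshold is an arbitrary assumption to tune for your microphone and environment):

tracker = PitchTap(filter) { pitch, amp in
    DispatchQueue.main.async {
        // Ignore readings when the input is too quiet to be a real note.
        guard amp[0] > 0.1 else { return }
        self.update(pitch[0], amp[0])
    }
}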
I implemented the AudioKit "MICROPHONE ANALYSIS" example https://audiokit.io/examples/MicrophoneAnalysis/ in my app.
I want to analyze the microphone input frequency and then play the note closest to the detected frequency.
Normally the sound output is the speaker or a Bluetooth device connected to my iPhone, but after implementing the "MICROPHONE ANALYSIS" example the sound output changed to the small receiver speaker at the top of the iPhone that is normally used during phone calls.
How can I switch back to the "normal" speaker or to the connected Bluetooth device, like before?
var mic: AKMicrophone!
var tracker: AKFrequencyTracker!
var silence: AKBooster!

func initFrequencyTracker() {
    AKSettings.audioInputEnabled = true
    mic = AKMicrophone()
    tracker = AKFrequencyTracker(mic)
    silence = AKBooster(tracker, gain: 0)
}

func deinitFrequencyTracker() {
    plotTimer.invalidate()
    do {
        try AudioKit.stop()
        AudioKit.output = nil
    } catch {
        print(error)
    }
}

func initPlotTimer() {
    AudioKit.output = silence
    do {
        try AudioKit.start()
    } catch {
        AKLog("AudioKit did not start!")
    }
    setupPlot()
    plotTimer = Timer.scheduledTimer(timeInterval: 0.1, target: self, selector: #selector(updatePlotUI), userInfo: nil, repeats: true)
}

func setupPlot() {
    let plot = AKNodeOutputPlot(mic, frame: audioInputPlot.bounds)
    plot.translatesAutoresizingMaskIntoConstraints = false
    plot.alpha = 0.3
    plot.plotType = .rolling
    plot.shouldFill = true
    plot.shouldCenterYAxis = false
    plot.shouldMirror = true
    plot.color = UIColor(named: uiFarbe)
    audioInputPlot.addSubview(plot)
    // Pin the AKNodeOutputPlot to the audioInputPlot
    var constraints = [plot.leadingAnchor.constraint(equalTo: audioInputPlot.leadingAnchor)]
    constraints.append(plot.trailingAnchor.constraint(equalTo: audioInputPlot.trailingAnchor))
    constraints.append(plot.topAnchor.constraint(equalTo: audioInputPlot.topAnchor))
    constraints.append(plot.bottomAnchor.constraint(equalTo: audioInputPlot.bottomAnchor))
    constraints.forEach { $0.isActive = true }
}

@objc func updatePlotUI() {
    if tracker.amplitude > 0.1 {
        let trackerFrequency = Float(tracker.frequency)
        guard trackerFrequency < 7_000 else {
            // This is a bit of a hack because of modern MacBooks giving super high frequencies
            return
        }
        var frequency = trackerFrequency
        while frequency > Float(noteFrequencies[noteFrequencies.count - 1]) {
            frequency /= 2.0
        }
        while frequency < Float(noteFrequencies[0]) {
            frequency *= 2.0
        }
        var minDistance: Float = 10_000.0
        var index = 0
        for i in 0..<noteFrequencies.count {
            let distance = fabsf(Float(noteFrequencies[i]) - frequency)
            if distance < minDistance {
                index = i
                minDistance = distance
            }
        }
        // let octave = Int(log2f(trackerFrequency / frequency))
        frequencyLabel.text = String(format: "%0.1f", tracker.frequency)
        if frequencyTranspose(note: notesToTanspose[index]) != droneLabel.text {
            note = frequencyTranspose(note: notesToTanspose[index])
            droneLabel.text = note
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.03, execute: {
                self.prepareSinglePlayerFirstForStart(note: self.note)
                self.startSinglePlayer()
            })
        }
    }
}

func frequencyTranspose(note: String) -> String {
    var indexNote = notesToTanspose.firstIndex(of: note)!
    let chosenInstrument = UserDefaults.standard.object(forKey: "whichInstrument") as! String
    if chosenInstrument == "Bb" {
        if indexNote + 2 >= notesToTanspose.count {
            indexNote -= 12
        }
        return notesToTanspose[indexNote + 2]
    } else if chosenInstrument == "Eb" {
        if indexNote - 3 < 0 {
            indexNote += 12
        }
        return notesToTanspose[indexNote - 3]
    } else {
        return note
    }
}
It's good practice to control the session settings, so start by creating a method in your application to take care of that during initialisation.
Following up, here's an example where I set a category and the desired options:
func start() {
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
        try session.setActive(true, options: .notifyOthersOnDeactivation)
        try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
        try AudioKit.start()
    } catch {
        // your error handler
    }
}
You can call this start method where you currently make the call to AudioKit.start() in initPlotTimer.
The example above uses AVAudioSession directly, which I believe is what AKSettings wraps (please feel free to edit my answer to not mislead future readers, as I'm not looking at the AudioKit source code at the moment).
Now that the AVAudioSession approach has been shown, let's stick with the method offered by AudioKit, since that's what you're dealing with.
Here's another example using AKSettings:
func start() {
    do {
        AKSettings.channelCount = 2
        AKSettings.ioBufferDuration = 0.002
        AKSettings.audioInputEnabled = true
        AKSettings.bufferLength = .medium
        AKSettings.defaultToSpeaker = true
        // check docs for other options and settings
        try AKSettings.setSession(category: .playAndRecord, with: [.defaultToSpeaker, .allowBluetooth])
        try AudioKit.start()
    } catch {
        // your handler
    }
}
Keep in mind that you don't necessarily have to call it start or run AudioKit's start method there; I'm just spelling out the initialisation phase to make it readable for you and other use cases.
Reference:
https://developer.apple.com/documentation/avfoundation/avaudiosession/categoryoptions
https://audiokit.io/docs/Classes/AKSettings.html
In Swift 4 I am loading/holding a set of animations from .dae files in a dictionary:
var animations = [String: CAAnimation]()

loadAnimation(withKey: "stepL", sceneName: "art.scnassets/StepL", animationIdentifier: "StepL-1")
loadAnimation(withKey: "stepR", sceneName: "art.scnassets/StepR", animationIdentifier: "StepR-1")

func loadAnimation(withKey: String, sceneName: String, animationIdentifier: String) {
    let sceneURL = Bundle.main.url(forResource: sceneName, withExtension: "dae")
    let sceneSource = SCNSceneSource(url: sceneURL!, options: nil)
    if let animationObject = sceneSource?.entryWithIdentifier(animationIdentifier, withClass: CAAnimation.self) {
        // The animation will only play once
        animationObject.repeatCount = 1
        // To create smooth transitions between animations
        animationObject.fadeInDuration = CGFloat(0.8)
        animationObject.fadeOutDuration = CGFloat(0.6)
        // Store the animation for later use
        animations[withKey] = animationObject
    }
}
Later on I play an animation by adding it to my scene:
func playAnimation(key: String) {
    print("fire animation: " + key)
    scnView.scene?.rootNode.addAnimation(animations[key]!, forKey: key)
}
This seems to work when triggering them individually from touches, but I want to be able to trigger a sequence of these with very specific timing.
I've tried building a loop and sending them off to a DispatchQueue with specific timing outlined:
var delaytime = state.steptime!
for _ in 0..<state.trCount! {
    for wSide in state.walkSeq! {
        walkQ.asyncAfter(deadline: .now() + delaytime) {
            self.playAnimation(key: wSide)
        }
        delaytime += state.steptime! + 1.0
    }
}
And I've tried a similar but slightly different approach using wallDeadline:
let wt = DispatchWallTime.now() + state.pausetime!
// build sequence
var train_seq = [(name: "stepL", time: wt), (name: "stepR", time: wt + state.pausetime! + state.steptime!)]
for _ in 0..<state.trCount! {
    let lasttime = train_seq[train_seq.count - 1].time
    train_seq += [(name: "stepL", time: lasttime + state.steptime!)]
    train_seq += [(name: "stepR", time: lasttime + state.pausetime! + state.steptime!)]
}
// send it
for i in 0..<train_seq.count {
    walkQ.asyncAfter(wallDeadline: train_seq[i].time) {
        self.playAnimation(key: train_seq[i].name)
    }
}
What inevitably happens is that the first few run as expected, then around the 4th set of sequences they start to stack on top of each other and fire at the same time, no longer adhering to the timing I've defined.
What is the best way to trigger animations in a sequence on an expected timeline? Is there a way to pause/sleep a loop until the previous animation stops, then trigger the next in the series?
Thanks!
I ended up making a sequence of SCNActions in the order I needed, including waits:
var seq = [SCNAction]()
for _ in 0..<state.trCount! {
    let act1 = SCNAction.run { _ in
        self.playAnimation(key: "stepL")
    }
    seq.append(act1)
    let act2 = SCNAction.wait(duration: state.steptime!)
    seq.append(act2)
    let act3 = SCNAction.run { _ in
        self.playAnimation(key: "stepR")
    }
    seq.append(act3)
    seq.append(act2)
    let act4 = SCNAction.run { _ in
        self.playAnimation(key: "idle")
    }
    seq.append(act4)
    let act5 = SCNAction.wait(duration: state.pausetime!)
    seq.append(act5)
}
let sequence = SCNAction.sequence(seq)
scnView.scene?.rootNode.runAction(sequence, completionHandler: nil)
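As a follow-up, if the whole routine needs to run more than once, or you want to know when it has finished, the same SceneKit timeline can handle that too. A small sketch reusing the sequence and playAnimation names from above (the repeat count of 3 is an arbitrary assumption):

// Repeat the full walk routine, return to idle, and get notified at the end.
let repeated = SCNAction.repeat(sequence, count: 3)
let backToIdle = SCNAction.run { _ in self.playAnimation(key: "idle") }
scnView.scene?.rootNode.runAction(SCNAction.sequence([repeated, backToIdle])) {
    print("walk routine finished")
}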
I want to allow background audio while the app is not in focus. I currently have this code, which should allow that:
do {
    try AKSettings.setSession(category: .playback, with: .mixWithOthers)
} catch {
    print("error")
}
AKSettings.playbackWhileMuted = true
I also have the 'Audio, AirPlay, and Picture in Picture' background mode enabled in the capabilities settings. However, when I press the home button on my device the audio doesn't keep playing. What am I doing wrong? I am using AudioKit to produce sounds, if that matters.
I am using a singleton to house all of the AudioKit components which I named AudioPlayer.swift. Here is what I have in my AudioPlayer.swift singleton file:
class AudioPlayer: NSObject {
    var currentFrequency = String()
    var soundIsPlaying = false
    var leftOscillator = AKOscillator()
    var rightOscillator = AKOscillator()
    var rain = try! AKAudioFile()
    var rainPlayer: AKAudioPlayer!
    var envelope = AKAmplitudeEnvelope()

    override init() {
        super.init()
        do {
            try AKSettings.setSession(category: .playback, with: .mixWithOthers)
        } catch {
            print("error")
        }
        AKSettings.playbackWhileMuted = true
        AudioKit.output = envelope
        AudioKit.start()
    }

    func setupFrequency(left: AKOscillator, right: AKOscillator, frequency: String) {
        currentFrequency = frequency
        leftOscillator = left
        rightOscillator = right
        let leftPanner = AKPanner(leftOscillator)
        leftPanner.pan = -1
        let rightPanner = AKPanner(rightOscillator)
        rightPanner.pan = 1

        // Set up rain and rainPlayer
        do {
            rain = try AKAudioFile(readFileName: "rain.wav")
            rainPlayer = try AKAudioPlayer(file: rain, looping: true, deferBuffering: false, completionHandler: nil)
        } catch { print(error) }

        let mixer = AKMixer(leftPanner, rightPanner, rainPlayer)

        // Put mixer in sound envelope
        envelope = AKAmplitudeEnvelope(mixer)
        envelope.attackDuration = 2.0
        envelope.decayDuration = 0
        envelope.sustainLevel = 1
        envelope.releaseDuration = 0.2

        // Start AudioKit stuff
        AudioKit.output = envelope
        AudioKit.start()
        leftOscillator.start()
        rightOscillator.start()
        rainPlayer.start()
        envelope.start()
        soundIsPlaying = true
    }
}
And here is an example of one of my sound effect view controllers, which reference the AudioKit singleton to send it a certain frequency (I have about a dozen of these view controllers, each with its own frequency settings):
class CalmView: UIViewController {
    let leftOscillator = AKOscillator()
    let rightOscillator = AKOscillator()

    override func viewDidLoad() {
        super.viewDidLoad()
        leftOscillator.amplitude = 0.3
        leftOscillator.frequency = 220
        rightOscillator.amplitude = 0.3
        rightOscillator.frequency = 230
    }

    @IBAction func playSound(_ sender: Any) {
        if shared.soundIsPlaying == false {
            AudioKit.stop()
            shared.setupFrequency(left: leftOscillator, right: rightOscillator, frequency: "Calm")
        } else if shared.soundIsPlaying == true && shared.currentFrequency != "Calm" {
            AudioKit.stop()
            shared.leftOscillator.stop()
            shared.rightOscillator.stop()
            shared.rainPlayer.stop()
            shared.envelope.stop()
            shared.setupFrequency(left: leftOscillator, right: rightOscillator, frequency: "Calm")
        } else {
            shared.soundIsPlaying = false
            shared.envelope.stop()
        }
    }
}
I instantiated the AudioPlayer singleton in my ViewController.swift file.
It depends on when you are doing your configuration in relation to when AudioKit is started. If you're using AudioKit, you should be using its AKSettings to manage your session category; basically, you want not only the playback category but also mixWithOthers. By default, AKSettings.setSession does this:
/// Set the audio session type
@objc open static func setSession(category: SessionCategory,
                                  with options: AVAudioSessionCategoryOptions = [.mixWithOthers]) throws {
So you'd do something like this in your ViewController:
do {
    if #available(iOS 10.0, *) {
        try AKSettings.setSession(category: .playAndRecord, with: [.defaultToSpeaker, .allowBluetooth, .allowBluetoothA2DP])
    } else {
        // Fallback on earlier versions
    }
} catch {
    print("Errored setting category.")
}
So I think it's a matter of getting that straight. It might also help to have Inter-App Audio set up. If you still have trouble and can provide more information, I can help more, but this is as good an answer as I can muster based on the info you've given so far.
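For the background-playback goal in the original question specifically, the point above is the ordering: configure the session, then set the output, then start AudioKit. A minimal, non-authoritative sketch keeping the .playback category and .mixWithOthers option from the question's own singleton (whether AudioKit.start() needs try depends on your AudioKit 4.x version):

do {
    // Configure the session before AudioKit is started.
    try AKSettings.setSession(category: .playback, with: [.mixWithOthers])
    AKSettings.playbackWhileMuted = true
    AudioKit.output = envelope
    try AudioKit.start()
} catch {
    print("Error configuring audio session: \(error)")
}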