Migration of AKPeriodicFunction from AudioKit v4 to AudioKit v5

I've delayed migrating from AudioKit v4 to AudioKit v5 because I make pretty heavy use of AKPeriodicFunction. Has anyone implemented equivalent functionality in AudioKit v5? The migration guide suggests it should be straightforward, but if someone has gone through the process, please let me know. Thanks!
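Assuming v5 really does lack a drop-in replacement (as the question implies), a plain GCD timer covers the common "call this closure N times per second" use of AKPeriodicFunction. This is only a sketch, not AudioKit API: `PeriodicFunction` is a name I made up, and a `DispatchSourceTimer` is clocked by GCD rather than the audio render thread, so it's fine for sequencing demo notes but not sample-accurate.

```swift
import Foundation
import Dispatch

// Hypothetical stand-in for v4's AKPeriodicFunction: fires a handler
// `frequency` times per second on a GCD timer. Unlike the v4 class it is
// NOT driven by the audio render loop, so expect some drift.
final class PeriodicFunction {
    let frequency: Double                  // callbacks per second
    var interval: Double { 1.0 / frequency }

    private let handler: () -> Void
    private var timer: DispatchSourceTimer?

    init(frequency: Double, handler: @escaping () -> Void) {
        self.frequency = frequency
        self.handler = handler
    }

    func start() {
        let source = DispatchSource.makeTimerSource(queue: .global())
        source.schedule(deadline: .now(), repeating: interval)
        source.setEventHandler(handler: handler)
        source.resume()
        timer = source
    }

    func stop() {
        timer?.cancel()
        timer = nil
    }
}
```

If the callbacks must line up with audio playback (the typical AKPeriodicFunction use case), it's worth looking at driving them from a sequencer callback instead of wall-clock time.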

Related

AudioKit: create a blank sequence

I'm new to AudioKit (and Mac programming in general), but hopefully what I want to do is quite simple: the cookbook and documentation allude to the way forward, but don't give me exactly what I'm after.
I want to create a MIDI sequence from scratch and then play and save it. All the examples I've seen load a file first and then play it, and the one example I saw that did what I wanted was full of deprecated code :(
If anyone could provide some code or point me in the direction of some working code (latest build 27/05/22), I would be so grateful.
Thanks in advance,
Jes
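AudioKit v5 does ship its own sequencing class (AppleSequencer), which is the place to look first for the play-back part. For the build-from-scratch-and-save part, though, one dependency-free sketch is to emit the Standard MIDI File bytes directly with Foundation; the format is simple enough that nothing here can become deprecated. `makeMIDIFile`, `vlq`, and the note list are names and values I made up for illustration.

```swift
import Foundation

/// Variable-length quantity encoding used for MIDI delta times.
func vlq(_ value: UInt32) -> [UInt8] {
    var v = value
    var bytes: [UInt8] = [UInt8(v & 0x7F)]
    v >>= 7
    while v > 0 {
        bytes.insert(UInt8(v & 0x7F) | 0x80, at: 0)
        v >>= 7
    }
    return bytes
}

/// Builds a format-0 Standard MIDI File containing the given notes,
/// each one quarter note long, played back to back.
/// 480 ticks per quarter note; MIDI channel 1 (status bytes 0x90/0x80).
func makeMIDIFile(notes: [UInt8], tempoBPM: UInt32 = 120) -> Data {
    var track: [UInt8] = []

    // Tempo meta event: microseconds per quarter note.
    let usPerQuarter = 60_000_000 / tempoBPM
    track += vlq(0) + [0xFF, 0x51, 0x03,
                       UInt8((usPerQuarter >> 16) & 0xFF),
                       UInt8((usPerQuarter >> 8) & 0xFF),
                       UInt8(usPerQuarter & 0xFF)]

    for note in notes {
        track += vlq(0)   + [0x90, note, 100]  // note on, velocity 100
        track += vlq(480) + [0x80, note, 0]    // note off one beat later
    }
    track += vlq(0) + [0xFF, 0x2F, 0x00]       // end-of-track meta event

    func be16(_ v: UInt16) -> [UInt8] { [UInt8(v >> 8), UInt8(v & 0xFF)] }
    func be32(_ v: UInt32) -> [UInt8] {
        [UInt8(v >> 24 & 0xFF), UInt8(v >> 16 & 0xFF),
         UInt8(v >> 8 & 0xFF), UInt8(v & 0xFF)]
    }

    var bytes: [UInt8] = []
    // Header chunk: format 0, one track, 480 ticks per quarter note.
    bytes += Array("MThd".utf8) + be32(6) + be16(0) + be16(1) + be16(480)
    bytes += Array("MTrk".utf8) + be32(UInt32(track.count)) + track
    return Data(bytes)
}

// Usage: a C-E-G-C arpeggio, ready to write to disk.
let data = makeMIDIFile(notes: [60, 64, 67, 72])
// try data.write(to: URL(fileURLWithPath: "sequence.mid"))
```

The resulting `Data` opens in any MIDI-aware app, and on Apple platforms it can be handed to `AVMIDIPlayer(data:soundBankURL:)` for playback without touching any deprecated API.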

AudioKit for backend

I am working on a project and am thinking of taking the AudioKit core to the backend using Vapor.
Do any of you have suggestions?
If there are any alternatives in other programming languages, please let me know.

How to use Apple SpeechSynthesis AudioUnit with AudioKit?

I was reading the book "Learning Core Audio". Then I found this awesome library called AudioKit. :-) Everything works great until I try to use Apple's SpeechSynthesis AudioUnit.
I searched through the GitHub repo, but I cannot find an AKNode for speech synthesis. Am I missing something?
I found some Swift examples by searching for kAudioUnitSubType_SpeechSynthesis on GitHub.
So I have two questions now:
How do I add an Apple AudioUnit node into an AudioKit graph?
Is there a reason that AudioKit doesn't support a speech synthesis node? Would AudioKit accept a PR for this?
I would love to have a speech synthesizer in AudioKit, but as far as I know Apple doesn't let you hook it up to an audio processing graph. However, if you do figure out how to make it work, the whole AudioKit core team would be delighted. We would definitely accept a PR.
In iOS 16 and macOS Ventura you can now do this with AVSpeechSynthesisProviderAudioUnit: https://developer.apple.com/documentation/avfaudio/avspeechsynthesisprovideraudiounit
Here's a Twitter thread about it: https://twitter.com/ElaghaMosab/status/1560067672334733312
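For the first question in this thread, the general mechanism for putting an Apple system audio unit into an AVAudioEngine-based graph (which is what AudioKit v5 runs on) is to wrap its AudioComponentDescription in an AVAudioUnit and attach it to the engine. This is only a sketch of the wiring: the speech synthesis subtype is macOS-only, and whether it actually renders into a graph this way is exactly the open question the core-team answer raises.

```swift
import AVFoundation

// Description of Apple's macOS speech synthesis generator unit.
let desc = AudioComponentDescription(
    componentType: kAudioUnitType_Generator,
    componentSubType: kAudioUnitSubType_SpeechSynthesis,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0
)

let engine = AVAudioEngine()

// Wrap the system unit as an AVAudioUnit and splice it into the graph.
// Triggering actual speech requires the unit's speech-channel property
// API and is omitted here.
AVAudioUnit.instantiate(with: desc, options: []) { avUnit, error in
    guard let avUnit = avUnit else { return }
    engine.attach(avUnit)
    engine.connect(avUnit, to: engine.mainMixerNode, format: nil)
}
```

The same pattern works for any third-party or system AudioComponentDescription, which is how custom AUs generally get mixed into an AudioKit v5 signal chain.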

Is it possible to bring Octave functionality into an iOS app?

I have studied the C++ API that Octave provides. Can somebody suggest some methods or links on how to proceed?
How do I import those C files into Xcode and work with them?
Does Octave provide any online API?
Thanks in advance.

Tutorial on building chat functionality in Swift using Layer?

I'm looking for the Swift documentation for Layer, since I need a quick way to integrate chat functionality into my app. Many thanks!
I'm a Partner Engineer at Layer. Layer is still working on Swift documentation for LayerKit, and we hope to have something available soon. I started building a port of Layer's Quick Start project in Swift. The project is incomplete and very much a work in progress, but it will give you an idea of where to start with Layer and Swift: https://github.com/maju6406/QuickStartSwift
Have you seen this on GitHub? https://github.com/chatsecure
