I was reading the book "Learning Core Audio", and then I found this awesome library called AudioKit. :-) Everything worked great until I wanted to use Apple's SpeechSynthesis audio unit.
I searched through the GitHub repo, but I cannot find an AKNode for speech synthesis. Did I miss something?
I found some Swift examples by searching for kAudioUnitSubType_SpeechSynthesis on GitHub.
So I have two questions now:
How do I add an Apple audio unit node into an AudioKit graph?
Is there a reason AudioKit doesn't support a speech synthesis node? Would AudioKit accept a PR for this?
I would love to have a speech synthesizer in AudioKit, but as far as I know Apple doesn't let you hook it up to an audio processing graph. However, if you do figure out how to make it work, the whole AudioKit core team would be delighted. We would definitely accept a PR.
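For anyone who wants to experiment: on macOS the speech synthesis generator unit does exist, and a starting point might be instantiating it as an AVAudioUnit and trying to attach it to an AVAudioEngine. This is only an untested sketch of the idea, not a working AKNode:

```swift
import AVFoundation
import AudioToolbox

// Untested sketch: look up Apple's speech synthesis generator unit
// (macOS only) and try to attach it to an AVAudioEngine graph.
let desc = AudioComponentDescription(
    componentType: kAudioUnitType_Generator,
    componentSubType: kAudioUnitSubType_SpeechSynthesis,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0
)

let engine = AVAudioEngine()
AVAudioUnit.instantiate(with: desc, options: []) { avUnit, error in
    guard let speechUnit = avUnit else {
        print("Instantiation failed:", error ?? "unknown error")
        return
    }
    engine.attach(speechUnit)
    engine.connect(speechUnit, to: engine.mainMixerNode, format: nil)
    try? engine.start()
    // "Learning Core Audio" drives this unit by fetching its SpeechChannel
    // (kAudioUnitProperty_SpeechChannel) and calling SpeakCFString.
}
```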
In iOS 16 and macOS Ventura you can now do this with AVSpeechSynthesisProviderAudioUnit: https://developer.apple.com/documentation/avfaudio/avspeechsynthesisprovideraudiounit
Here's a Twitter thread about it: https://twitter.com/ElaghaMosab/status/1560067672334733312
I am currently in the research and prototyping stage of a project to develop a native iOS app (Swift 3) that includes a multi-channel audio player (multiple stereo MP3 files). I have found very limited information online, particularly written in Swift 3, so I thought that as I continue my research I would pose a question here.
Regarding frameworks, it seems clear from what I've looked at so far that AVFoundation is going to do the job. It's not too low-level and has a good set of functionality, including support for playing multiple audio files with AVAudioPlayer. I am planning to start prototyping something with this soon.
But I am new to Swift and to iOS development, with its huge number of libraries, so I'm wondering if I'm missing anything and whether I'm on the right track. Any answers with general information and thoughts on this will be up-voted. For an accepted answer, I'd like some sample outline code using an appropriate framework: AVFoundation, or a justified alternative.
If no answer is forthcoming, I will post my own code when I get there.
Specifically, I need from two to ten input channels, sourced from MP3 files within the project resources, each with its own individually adjustable gain, all mixed (maintaining their stereo channels) to a single output (the device) with a master gain. Some of the tracks need to loop; others don't. The tracks need to be accurately synchronised. This is just for info, and outline code covering the important points would be fine.
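To make that concrete, here is the rough shape of what I'm imagining with AVAudioEngine (an untested sketch; the file names and gain values are placeholders):

```swift
import AVFoundation

// One AVAudioPlayerNode per stereo MP3, per-track gain on each node,
// master gain on the engine's main mixer.
let engine = AVAudioEngine()
var players: [AVAudioPlayerNode] = []

func addTrack(named name: String, gain: Float, loops: Bool) throws {
    let url = Bundle.main.url(forResource: name, withExtension: "mp3")!
    let file = try AVAudioFile(forReading: url)
    let player = AVAudioPlayerNode()
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
    player.volume = gain  // individual track gain

    if loops {
        // Load the whole file into a buffer and schedule it to loop forever.
        let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                      frameCapacity: AVAudioFrameCount(file.length))!
        try file.read(into: buffer)
        player.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
    } else {
        player.scheduleFile(file, at: nil, completionHandler: nil)
    }
    players.append(player)
}

func startAll() throws {
    engine.mainMixerNode.outputVolume = 0.8  // master gain
    try engine.start()
    // Give every player the same start time for tight synchronisation.
    let start = AVAudioTime(hostTime: mach_absolute_time()
                                      + AVAudioTime.hostTime(forSeconds: 0.1))
    players.forEach { $0.play(at: start) }
}
```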
Research Notes and Resources
Apple: AVFoundation
A collection of resources relating to AVFoundation.
Apple: AVFoundation Programming Guide
This document seems encouraging at first, but actually only deals with video. It says:
There are two facets to the AVFoundation framework—APIs related to video and APIs related just to audio. The older audio-related classes provide easy ways to deal with audio. They are described in the Multimedia Programming Guide, not in this document.
The "Multimedia Programming Guide" which is also mentioned elsewhere at Apple in relation to this, is never linked and Google results point to not found pages on the Apple site. It seems to have disappeared.
Rudi Strahl: Mixing Multiple Audio Tracks with AVFoundation
Compares using AVComposition to using multiple AVPlayers. The example code is Objective-C. I'm not sure how the AVPlayers are mixed in the second solution; perhaps with AVAudioMix. Currently looking at this. The article talks a little about it but doesn't give any specifics.
Audio Session Programming Guide
This document looks at AVAudioSession which provides supporting functionality:
AVAudioSession gives you control over your app's audio behavior. You can:
Select the appropriate input and output routes for your app
Determine how your app integrates audio from other apps
Handle interruptions from other apps
Automatically configure audio for the type of app you are creating
Techotopia: Playing Audio on iOS 10 using AVAudioPlayer
Some useful information on using AVAudioPlayer.
Stack Overflow: Playing a Sound with AVAudioPlayer
Basic Swift code for playing a sound. Some answers include a little extra functionality.
Hacking with Swift: How to Play Sounds Using AVAudioPlayer
Again, covers the basics.
Sweet Tutos: How To Play Sound Files And Manage Duration Progress – AVAudioPlayer Tutorial
Updated to Swift 3. Some useful info.
Xamarin: Playing Sound with AVAudioPlayer
Written in Swift 2, I think.
Apple Video: WWDC 2013 Moving to AV Kit and AV Foundation
While not directly related, I found that the first 30 minutes of this video, which introduces developers to AV Kit and AV Foundation on OS X, provide a useful overview of the technology.
I was working on the same problem. The best I could do was to transcode the media content so it could be played using AVPlayer. Here is a draft; maybe it can help.
I have the following issue, and while I have found a few topics here discussing it, none of them actually answers my question.
I'm pretty new to iOS development. I searched the Apple documentation but didn't find anything useful.
I need to get the audio sample/buffer/stream from the headphone microphone into a manipulable variable or something like that, and then push it back to the headphones, so that I can hear my own voice while I'm talking.
I found things about AVFoundation, but nothing more.
I know it's possible to do this, but I could not find out how.
Can anybody help me further?
Since the language you are looking for is Swift, you might find something useful in EZAudio, even though that project was deprecated on June 13, 2016. Check out the example called PassThrough: it does exactly what you describe (hearing your voice while talking), and it is written in Swift. Hope this helps.
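Alternatively, a plain AVAudioEngine setup gives you the same passthrough without a third-party library. This is just an untested sketch (wear headphones, or the output will feed back into the mic):

```swift
import AVFoundation

// Keep a strong reference to the engine in real code,
// or it will be deallocated when it goes out of scope.
let engine = AVAudioEngine()

func startPassthrough() throws {
    // playAndRecord allows simultaneous mic input and headphone output.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)

    // Route the microphone straight to the output so you hear yourself.
    // Insert effect nodes between input and mixer to manipulate the samples.
    let input = engine.inputNode
    engine.connect(input, to: engine.mainMixerNode,
                   format: input.inputFormat(forBus: 0))
    try engine.start()
}
```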
I would like to modulate the signal from the mic input with a sine wave at 200 Hz (FM only). Does anyone know of any good tutorials/articles that will help me get started?
Any info is very welcome
Thanks
I suggest you start here: Audio File Stream Services Reference.
You can also find some basic tutorials here: Getting Started with Audio & Video.
The SpeakHere example app in particular could be interesting.
Hope that helps you.
The standard way to do audio processing on iOS or OS X is Core Audio. Here's Apple's overview of the framework.
However, Core Audio has a reputation for being very difficult to learn, especially if you don't have experience with C. If you still want to learn Core Audio, then this book is the way to go: Learning Core Audio.
There are simpler ways to work with audio on iOS and OS X, one of them being AudioKit, which was developed specifically so developers can quickly prototype audio without having to deal with lower-level memory management, buffers, and pointer arithmetic.
There are examples showing both FM synthesis and audio input via the microphone, so you should have everything you need :)
Full disclosure: I am one of the developers of AudioKit.
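If you want to see the core idea without any framework, here's a bare AVAudioEngine sketch of FM: a 200 Hz sine modulating a carrier oscillator's frequency. It's untested, and the carrier frequency and modulation index are arbitrary example values; to modulate the mic signal instead, you'd tap the input node and feed its samples in as the modulator:

```swift
import AVFoundation

let sampleRate = 44_100.0
let carrierFreq = 440.0   // arbitrary carrier
let modFreq = 200.0       // the 200 Hz modulator
let modIndex = 100.0      // peak frequency deviation in Hz
var carrierPhase = 0.0
var modPhase = 0.0

// AVAudioSourceNode (iOS 13+/macOS 10.15+) lets us compute samples in Swift.
let source = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        // Instantaneous frequency = carrier + deviation * sin(modulator phase)
        carrierPhase += 2.0 * .pi * (carrierFreq + modIndex * sin(modPhase)) / sampleRate
        modPhase += 2.0 * .pi * modFreq / sampleRate
        let sample = Float(sin(carrierPhase)) * 0.25
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = sample
        }
    }
    return noErr
}

let engine = AVAudioEngine()
engine.attach(source)
engine.connect(source, to: engine.mainMixerNode, format: nil)
try? engine.start()
```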
I am creating a musical app which generates some music. I have already used MIDI functions on the Mac to create a MIDI file with MIDI events (unfortunately, I don't remember the names of those functions).
I am looking for a way to create instrumental notes (MIDI or anything else) programmatically in order to play them. I would also like to have multiple channels playing those notes at the same time.
I have already tried SoundBankPlayer, but apparently it can't play multiple instruments at the same time.
Have you got an idea?
This answer might be a bit more work than you intended, but you can use Pd on iOS to do this. More precisely, you can use libpd for iOS for the synthesis, and then use any number of community-donated patches for the sound you're looking for.
In iOS 5, MusicSequence, MusicTrack, and MusicPlayer will do what you want:
http://developer.apple.com/library/ios/#documentation/AudioToolbox/Reference/MusicSequence_Reference/Reference/reference.html#//apple_ref/doc/uid/TP40009331
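A minimal sketch of that route in Swift (untested; the note values are arbitrary). By default the sequence plays through the built-in General MIDI synth, and using one track per instrument gets you multiple instruments at once:

```swift
import AudioToolbox

// Build a one-track sequence programmatically and play it.
var sequence: MusicSequence?
NewMusicSequence(&sequence)

var track: MusicTrack?
MusicSequenceNewTrack(sequence!, &track)

// Middle C, velocity 64, starting at beat 0, lasting one beat, on channel 0.
var note = MIDINoteMessage(channel: 0, note: 60, velocity: 64,
                           releaseVelocity: 0, duration: 1.0)
MusicTrackNewMIDINoteEvent(track!, 0.0, &note)

var player: MusicPlayer?
NewMusicPlayer(&player)
MusicPlayerSetSequence(player!, sequence)
MusicPlayerPreroll(player!)
MusicPlayerStart(player!)
```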
Check out the AUSampler audio unit for iOS. You'll probably have to delve into Core Audio, which has some learning curve. ;)
What frameworks are required to detect how loud someone is talking into a microphone? Also, can anyone tell me what to search for in the documentation or on Google so I can write some code? What lines of code are commonly used to detect the volume level of noise coming through the microphone? Thanks!
This seems to be a duplicate of:
Realtime microphone sound level monitoring
However, that question is old, and the accepted answer links to a deprecated library. They now recommend that you use AVAudioRecorder instead. They suggest this tutorial, and it seems to be what you are looking for.
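The gist of the AVAudioRecorder approach is metering: record (even to /dev/null, since you don't need the file), enable metering, and poll the level. An untested sketch (on iOS you would also configure an AVAudioSession for recording first):

```swift
import AVFoundation

// Record to /dev/null: we only want the meters, not the file.
let url = URL(fileURLWithPath: "/dev/null")
let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatAppleLossless),
    AVSampleRateKey: 44_100.0,
    AVNumberOfChannelsKey: 1
]

let recorder = try AVAudioRecorder(url: url, settings: settings)
recorder.isMeteringEnabled = true
recorder.record()

// averagePower is in dBFS: 0 is full scale, more negative is quieter.
Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
    recorder.updateMeters()
    print("input level: \(recorder.averagePower(forChannel: 0)) dBFS")
}
```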