We have a project that currently uses AVAudioEngine as the audio solution for mixing audio from different player nodes. For the most part the solution works exactly as we expect, except for the AVAudioUnitTimePitch node: we're finding that the quality of its time stretching is below our expectations, and we have not been able to fix it. We're currently looking at Superpowered as an alternative; however, I would like to avoid replacing our entire audio solution with the new SDK and would rather create a custom AVAudioNode that handles the time-stretching algorithm, so that we can attach it to our current implementation in place of AVAudioUnitTimePitch. I'm a bit at a loss as to how to achieve this and was wondering if anyone here has solved a similar issue, and how they approached it. Thanks!
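One possible route is to subclass AUAudioUnit and wrap it in an AVAudioUnit so it can be attached wherever AVAudioUnitTimePitch currently sits. Below is a rough sketch of the plumbing only: the bus/format setup and the time-stretch DSP itself are elided, and the four-char component codes are placeholders.

import AVFoundation
import AudioToolbox

// Skeleton of a v3 Audio Unit that could host a third-party
// time-stretch algorithm. A real subclass must also set up its
// input/output busses; only the render plumbing is shown here.
final class TimeStretchAudioUnit: AUAudioUnit {
    override var internalRenderBlock: AUInternalRenderBlock {
        { actionFlags, timestamp, frameCount, outputBusNumber,
          outputData, renderEvents, pullInputBlock in
            // 1. Pull frameCount frames from upstream via pullInputBlock.
            // 2. Run them through the time-stretch DSP.
            // 3. Write the stretched result into outputData.
            return noErr
        }
    }
}

let engine = AVAudioEngine()  // stands in for the existing engine

// Placeholder four-char codes; register under your own identifiers.
let desc = AudioComponentDescription(componentType: kAudioUnitType_Effect,
                                     componentSubType: 0x74737472,
                                     componentManufacturer: 0x44656d6f,
                                     componentFlags: 0,
                                     componentFlagsMask: 0)
AUAudioUnit.registerSubclass(TimeStretchAudioUnit.self, as: desc,
                             name: "Demo: TimeStretch", version: 1)

// Wrap it so it can be attached/connected like AVAudioUnitTimePitch:
AVAudioUnit.instantiate(with: desc, options: []) { unit, _ in
    guard let unit = unit else { return }
    engine.attach(unit)
    // engine.connect(playerNode, to: unit, format: nil)
    // engine.connect(unit, to: engine.mainMixerNode, format: nil)
}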
Working in AudioKit, I am looking to understand how people have incorporated drums. Obviously, the sampler is an option, but I am wondering if there is a built-in option similar to some of the basic synthesis options.
There are a few options. I personally like the AppleSampler/MIDISampler like in the example, but instead of using audio files you can create an EXS sampler instrument in Logic, where you can assign notes for different velocities. AppleSampler can also load AUPresets made in GarageBand and SoundFonts (SF2). The DunneAudioKit Sampler is an option if you are working with SFZ files, but I think that might be a work in progress in AudioKit 5. Loading WAV files directly into AppleSampler is also a good option if you just want one-shot sounds.
I'm assuming you're mostly talking about playback of samples, not recording.
The best built-in option I've seen (other than AppleSampler/MIDISampler) is AudioPlayer, which lets you load in a sample and play it back on demand (from an on-screen pad, etc.). MIDIListener can then help you respond to external MIDI events. It works (I have a pretty big branch in my app where I tried it), but I'm not sure it works well.
I wouldn't recommend the DunneAudioKit Sampler for drums. There is no one-shot playback (so playing the same note in quick succession will cut off the previous note, even if you mess with the release). If you're trying to build a complex/realistic acoustic drum instrument, you'll also want round-robins so that variations of the same hit can be played, which Dunne also doesn't have. It can load SFZ files, but only a very limited subset of SFZ's opcodes (so again, it's missing things like round-robins, mute groups, one-shot, etc.).
Having gone down all those roads, I would suggest starting with AppleSampler, and I would build the EXS or aupreset file in Logic or Mainstage rather than trying to build something programmatically.
If your needs are really simple, the examples in AudioKit's recently released drum pad playground are a great place to start, loading single samples onto specific notes of AppleSampler.
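For that simplest case, a rough AudioKit 5 sketch might look like the following (the file name and note number are placeholders, and API details may differ slightly between AudioKit versions):

import AudioKit
import AVFoundation

let engine = AudioEngine()
let sampler = AppleSampler()
engine.output = sampler

do {
    // Load a one-shot drum sample ("kick.wav" is a placeholder).
    if let url = Bundle.main.url(forResource: "kick", withExtension: "wav") {
        try sampler.loadAudioFile(AVAudioFile(forReading: url))
    }
    try engine.start()
    // Trigger it like a drum pad (36 = General MIDI kick drum).
    try sampler.play(noteNumber: 36, velocity: 127, channel: 0)
} catch {
    print("Sampler setup failed: \(error)")
}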
What is the standard way of saving videos using MeshcatVisualizer? I know the following works for wrappers of PyPlotVisualizer:
visualizer.start_recording()
simulator.AdvanceTo(T)
ani = visualizer.get_recording_as_animation()
But the two relevant methods are not available on MeshcatVisualizer, there don't seem to be any examples in the repo that create videos with it, and none of the methods the class does have look like promising candidates. Failing that, is there another way of saving videos of 3D visualizations?
Meshcat has an animation tool: https://github.com/rdeits/meshcat-python/blob/master/animation_demo.ipynb. You can access MeshcatVisualizer's meshcat.Visualizer instance via MeshcatVisualizer.vis. However, MeshcatVisualizer doesn't currently have anything like a MeshcatVisualizer.convert_to_video method that wraps this animation tool. Perhaps the easier route for now is screen recording.
I don't believe that meshcat offers its own recording functionality, which means the recommended workflow would be to just use your favorite screen-recording software. I've forwarded this to a few meshcat experts in case they have something better to recommend.
Update: In addition to rdeits' answer above, he had a few more details over email:

There's a built-in animation API with recording support in meshcat-python (see "Recording an Animation" in https://github.com/rdeits/meshcat-python/blob/master/animation_demo.ipynb), but AFAICT Drake's MeshcatVisualizer isn't hooked up to it. It might not be that hard to do so; the basic idea is that you can use at_frame to get a representation of a single frame of an animation that behaves like a meshcat.Visualizer. You can call set_transform on that frame, and rather than moving anything in the viewer it will instead record that action into an animation track. Then you can send the whole animation at once to the visualizer and let the browser side handle replaying and recording it.
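Based on that description and the notebook, a rough sketch of driving the animation API from Python might look like this (assuming visualizer is your MeshcatVisualizer instance; the "drake" path is a placeholder, so use whatever paths your visualizer actually publishes under):

import meshcat.transformations as tf
from meshcat.animation import Animation

# Drake's MeshcatVisualizer exposes its underlying meshcat.Visualizer
# as `.vis` (see the answer above).
vis = visualizer.vis

anim = Animation()
with anim.at_frame(vis, 0) as frame:
    # `frame` behaves like a meshcat.Visualizer, but set_transform
    # records a keyframe instead of moving anything immediately.
    frame["drake"].set_transform(tf.translation_matrix([0, 0, 0]))
with anim.at_frame(vis, 30) as frame:
    frame["drake"].set_transform(tf.translation_matrix([0, 0, 1]))

# Send the whole animation to the browser; playback and recording
# controls then appear in the viewer's controls menu.
vis.set_animation(anim)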
I'm building a small game prototype, and I'd like to be able to play simple sounds whose length/tone/pitch will vary based on what the user is doing.
This is surprisingly hard to do. Closest resource I found was:
http://www.tmroyal.com/playing-sounds-in-swift-audioengine.html
But this does not actually generate any sound on my device or on the iOS simulator.
Does anyone know of any working code to play ANY procedurally generated audio? A simple sine wave would do.
https://gist.github.com/rgcottrell/5b876d9c5eea4c9e411c
This code on the other hand works, and it's beautifully written...
Success!
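For anyone finding this later: on iOS 13 and up, AVAudioSourceNode makes a minimal sine generator fairly short. A sketch, assuming the engine's default deinterleaved Float32 format:

import AVFoundation

let engine = AVAudioEngine()
let sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
let frequency = 440.0
var phase = 0.0

// Render callback: fill each requested buffer with a sine wave.
let source = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let increment = 2.0 * Double.pi * frequency / sampleRate
    for frame in 0..<Int(frameCount) {
        let sample = Float(sin(phase))
        phase += increment
        if phase >= 2.0 * Double.pi { phase -= 2.0 * Double.pi }
        for buffer in buffers {
            let samples = UnsafeMutableBufferPointer<Float>(buffer)
            samples[frame] = sample
        }
    }
    return noErr
}

engine.attach(source)
let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!
engine.connect(source, to: engine.mainMixerNode, format: format)

do {
    try engine.start()
} catch {
    print("Engine failed to start: \(error)")
}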
You can try AudioKit.
It's an audio framework built on top of Core Audio.
In their Continuous Control example they use a simple FM oscillator with controlled parameters.
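A rough sketch of that idea in AudioKit 5, where FMOscillator now lives in the SoundpipeAudioKit package (parameter names may differ between versions):

import AudioKit
import SoundpipeAudioKit

let engine = AudioEngine()
let osc = FMOscillator(baseFrequency: 440,
                       carrierMultiplier: 1,
                       modulatingMultiplier: 1,
                       modulationIndex: 1,
                       amplitude: 0.3)
engine.output = osc

try? engine.start()
osc.start()

// Tie the parameters to game state, e.g.:
osc.baseFrequency = 660   // raise the pitch
osc.modulationIndex = 5   // brighter, more metallic timbre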
I have already found some similar discussions on this, but would like to investigate further and find out the best library and approach to use for a game made with Cocos2d v1 with the following requirements:
crossfade one soundtrack into another (the current one's volume lowers while the other one's rises)
play multiple sound effects (e.g. a different bullet sound for each enemy)
CocosDenshion seems to be the best approach for a cocos2d game (rather than using AVPlayer). Would you agree?
Thanks!
Following up on this question: I used the "CDXPropertyModifierAction.h" helper, which allowed me to reference the application's shared SimpleAudioEngine as well as the CDAudioManager.
However, with 4 tracks and several .caf effects I have a considerable memory footprint. I read @LearnCocos2D's comment and will now try integrating ObjectAL and run some benchmarks on performance and memory footprint.
I will add comments to this answer, and please feel free to do so as well to contribute.
Thanks a lot for your comments.
I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and playback the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades etc) which is based on OpenAL. From my research on the net it seems I have two options:
Keep a record of the user's inputs (i.e. which notes were played, and at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
Write my own low-level sound engine using Audio Units, specifically RemoteIO, so that I mix all the sounds manually, populate the final output buffer by hand, and can therefore save that buffer to a file. This could then be shared by email, etc.
I have implemented a RemoteIO callback for rendering the output buffer, in the hope that it would give me the previously played data, but alas the buffer is always all zeros.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after I had written tons of code and read lots of documentation for third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. What's more, they're not that hard to set up and work with. If, however, you're looking for a library to do most of the work for you, you should look for one written on top of Core Audio, not OpenAL.
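If all of your audio really does flow through a RemoteIO unit that you own (i.e. you go with option 2), one way to capture exactly what is being sent to the speakers is a post-render notify callback on that unit. A rough sketch, where remoteIOUnit stands in for your already-configured output unit and the actual file writing is left as a comment:

import AudioToolbox

// Post-render notify: fires after the output unit renders each buffer,
// so ioData holds exactly what is about to reach the hardware.
let renderNotify: AURenderCallback = { _, actionFlags, _, _, _, ioData in
    if actionFlags.pointee.contains(.unitRenderAction_PostRender),
       let ioData = ioData {
        // Append ioData's samples to an ExtAudioFile here,
        // or copy them into a ring buffer for later writing.
    }
    return noErr
}

// remoteIOUnit is assumed to be your configured RemoteIO AudioUnit.
AudioUnitAddRenderNotify(remoteIOUnit, renderNotify, nil)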
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.