How to synchronize OSC and filter envelopes by note gate in AudioKit

In other words, I need two separate ADSR envelopes: one for the OSC bank and one for the filter (cutoff). How can I synchronize these two by note press?
AMOscillatorBank(amplitude)->lowPassFilter(cutoff)->AudioKit output
Here 'amplitude' and 'cutoff' each have their own separate ADSR envelope, and the sound is generated and transformed by chaining the two.
I tried some examples in a playground, but I can only create an AMOscillatorBank(amplitude) with an ADSR and then pass it through a low-pass filter (which is not synchronized to note presses).

If you want sample accuracy you'll need to put the low-pass filter inside the DSP for the bank. We're actually doing that on the AK Synth One project, which will be open-sourced when it is finished.
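Until then, a common workaround is a control-rate approximation: drive the bank's built-in amplitude ADSR and a cutoff ramp from the same noteOn/noteOff gate. A minimal sketch, assuming AudioKit 4's AKOscillatorBank and AKLowPassFilter (the envelope times and cutoff values here are made-up placeholders, and this is only parameter-ramp accurate, not sample accurate):

import AudioKit

class SyncedSynth {
    let bank = AKOscillatorBank(waveform: AKTable(.sawtooth))
    let filter: AKLowPassFilter

    // Hypothetical filter-envelope settings (not an AudioKit ADSR type).
    let filterAttack = 0.05
    let filterRelease = 0.3
    let cutoffFloor = 200.0
    let cutoffPeak = 8_000.0

    init() throws {
        // The amplitude ADSR lives inside the bank.
        bank.attackDuration = 0.05
        bank.releaseDuration = 0.3
        filter = AKLowPassFilter(bank, cutoffFrequency: cutoffFloor)
        AudioKit.output = filter
        try AudioKit.start()
    }

    func noteOn(_ note: MIDINoteNumber) {
        bank.play(noteNumber: note, velocity: 100)
        // "Attack" stage of the cutoff envelope: ramp up on the same gate.
        filter.rampDuration = filterAttack   // rampTime in older AudioKit 4.x
        filter.cutoffFrequency = cutoffPeak
    }

    func noteOff(_ note: MIDINoteNumber) {
        bank.stop(noteNumber: note)
        // "Release" stage: ramp the cutoff back down on the same gate.
        filter.rampDuration = filterRelease
        filter.cutoffFrequency = cutoffFloor
    }
}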

Related

AKOscillator frequency range for theremin sound in iOS

I want to create a sound similar to a theremin using touch coordinates on the screen. I'm using the y axis as frequency and the x axis as amplitude.
From my brief research, I believe I can create it using AKOscillator or AKFMOscillator from the AudioKit framework (please let me know if any other oscillator works better in this case). I'm open to other frameworks like the built-in AudioToolbox (MIDINoteMessage etc.) if they can produce a theremin-like sound.
Here it says a theremin has two oscillators: one with a fixed frequency of 260 kHz and one that varies between 257-260 kHz. It superimposes their outputs (it takes the difference of them, I guess?) and outputs a frequency between 0-3 kHz.
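For reference, that superimposition is heterodyning: multiplying (or otherwise nonlinearly mixing) the two radio-frequency signals produces sum and difference components, per the product-to-sum identity

$$\sin(2\pi f_1 t)\,\sin(2\pi f_2 t) = \tfrac{1}{2}\left[\cos\big(2\pi(f_1 - f_2)t\big) - \cos\big(2\pi(f_1 + f_2)t\big)\right]$$

With $f_1 = 260$ kHz fixed and $f_2$ sweeping 257-260 kHz, the difference term sweeps the audible 0-3 kHz range while the sum term sits far above hearing and is filtered out.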
When I create sounds using AKFMOscillator with a baseFrequency between 257-260 kHz, it sounds high-pitched.
When I try a single oscillator ranging between 0-3 kHz, it sounds very robotic. How can I simulate the timbre of a theremin?
How can I make it sound better? Should I mix two oscillators? I tried mixing with AKMixer, but when both oscillators use the same frequency and amplitude, it makes no difference.
I tried mapping to the nearest note (auto-tune) and limiting the frequency to a 3-4 octave range. It sounds better, but still not as good as a theremin.
What should I use (AKOscillator, AKFMOscillator, or an oscillator bank), and with which parameters (rampDuration, baseFrequency, modulationIndex, amplitude), to produce a more theremin-like sound?
Update:
I did some more research and played with the Synth One presets. Now I know I need two oscillators mixed, both set to a saw-shaped wave. Setting the ADSR (envelope) values to specific ranges creates a richer sound (this gives it an instrumental character), and an LFO creates the wavy (or spooky) effect. Playing notes (specific frequencies) sounds good; playing every frequency in between the note frequencies does not.
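A rough sketch of that recipe, assuming AudioKit 4 (the LFO rate, depth, and detune values are guesses to illustrate the idea; Synth One runs its LFO inside the DSP, whereas this one is control-rate):

import AudioKit
import Foundation

// Two detuned saw oscillators into a mixer.
let saw = AKTable(.sawtooth)
let osc1 = AKOscillator(waveform: saw)
let osc2 = AKOscillator(waveform: saw)
let mixer = AKMixer(osc1, osc2)

AudioKit.output = mixer
try AudioKit.start()
osc1.start()
osc2.start()

// Control-rate LFO for the wavy/spooky vibrato; call update() from a
// display link or timer (~60 calls per second assumed here).
var lfoPhase = 0.0
let lfoRate = 5.5      // Hz, a typical vibrato speed
let lfoDepth = 4.0     // Hz of pitch wobble

func update(baseFrequency: Double) {
    lfoPhase += lfoRate / 60.0
    let vibrato = lfoDepth * sin(2.0 * .pi * lfoPhase)
    osc1.frequency = baseFrequency + vibrato
    osc2.frequency = baseFrequency * 1.005 + vibrato  // slight detune for richness
}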

RL: Self-Play with On-Policy and Off-Policy

I'm trying to implement self-play with PPO.
Suppose we have a game with two agents. We control one player on each side and get information like observations and rewards after each step. As far as I know, you can use the information from both the left and the right player to generate training data and optimize the model. But that is only possible for off-policy methods, isn't it?
Because with an on-policy method such as PPO, the training data is expected to be generated by the current network version, and that is usually not the case during self-play?
Thanks!
Exactly. This is the same reason you can use experience replay (replay buffers) only with off-policy methods like Q-learning: using samples that were not generated by the current policy violates the mathematical assumptions behind the gradients being backpropagated.
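The assumption is visible directly in PPO's clipped surrogate objective, which importance-weights each sample against the policy that generated it:

$$L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\big(r_t(\theta)\hat{A}_t,\ \mathrm{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}_t\big)\right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}$$

Here $\pi_{\theta_{\mathrm{old}}}$ must be the policy that actually collected the trajectory. If one side of the self-play game was driven by an older snapshot, the denominator is wrong, and the clipped ratio no longer bounds the policy update the way the method assumes.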

Controlling the phase of a signal in Pure Data

I need to figure out a way of changing the phase of a signal. The objective is to generate two signals, one of them phase-shifted, and observe the patterns when they are combined.
Below is the program I'm using so far:
As in the setup above, I need to use the same signal to generate a phase-shifted copy, then combine the two signals and observe the patterns.
Can someone help me out on this?
Thanks.
Using the right inlet of the [osc~] object is a valid way to set the phase of an oscillator, but it isn't the only, or even the most correct, way. The right inlet only accepts a float at the control level.
A more comprehensive manipulation of phase can be done at the signal level using the [phasor~], [cos~], [wrap~], and [+~] objects. Essentially, you are performing the same function as [osc~] with a table-lookup technique built from [phasor~] and [cos~]. You could also read from another table with [tabread4~] instead of [cos~].
This technique keeps your oscillators in sync. You can manipulate the phase of your oscillators with other oscillators, with table lookups, and of course still with floats (so long as the phase value is between 0 and 1, hence the [wrap~] object).
[image: phase modulation at the signal level]
Afterwards, like the other examples here, you can add the signals together and write them to corresponding tables, output the signal chain, or both.
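If it helps to see the arithmetic outside of Pd, here is the same per-sample computation as a small Swift sketch (Swift standing in for the patch; the names are made up):

import Foundation

// What the [phasor~] -> [+~ offset] -> [wrap~] -> [cos~] chain computes
// per sample: a shared ramp in [0, 1), plus a phase offset, wrapped back
// into [0, 1), then a cosine-table lookup over one period.
func oscSample(sharedRamp: Double, phaseOffset: Double) -> Double {
    let wrapped = (sharedRamp + phaseOffset).truncatingRemainder(dividingBy: 1.0)
    return cos(2.0 * .pi * wrapped)
}

// Two oscillators fed by the same phasor~ stay sample-locked; only the
// offset differs. An offset of 0.25 corresponds to a 90-degree shift.
let ramp = 0.3   // current value of the shared phasor~ ramp
let a = oscSample(sharedRamp: ramp, phaseOffset: 0.0)
let b = oscSample(sharedRamp: ramp, phaseOffset: 0.25)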
Here's how you might do the same with a custom table lookup. Of course, you'd replace sometable with your custom table's name and num-samp-in-some-table with the number of samples in your table.
[image: signal-level phase modulation with custom tables]
Hope it helps!
To change the phase of an oscillator, use the right-hand side inlet.
Quoting Johannes Kreidler's Programming Electronic Music in Pd:
3.1.2.1.3 Phase
In Pd, you can also set the membrane position at which a sound wave should begin (or to which it should jump). This is called the phase of a wave. You can set the phase in Pd in the right inlet of the "osc~" object, with numbers between 0 and 1:
A wave's entire period is encompassed by the range from 0 to 1. However, phase is often spoken of in terms of degrees, where the entire period has 360 degrees. One speaks, for example, of a "90 degree phase shift". In Pd, the input for that phase would be 0.25.
So, for instance, if you want to observe how two signals can cancel out due to destructive interference, you can try something like this:
Note that I connected a bang to adjust the phases of both signals simultaneously. This is important: while you can reset the phase of one signal to any value between 0.0 and 1.0 at any moment, the other oscillator won't be reset, so the result will be fairly random (you never know what phase the other signal will be at!). Resetting both does the trick.
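The full-cancellation case is just the identity

$$\cos(2\pi f t) + \cos(2\pi f t + \pi) = 0$$

so setting one oscillator's phase inlet to 0.5 (i.e. 180 degrees) while resetting the other to 0 at the same instant mutes the mix.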

Using a Butterworth filter in a case structure

I'm trying to use a Butterworth filter. The input data comes from an "Index Array" function (the data is acquired through DAQ, and I want to process the voltage signal, which is in an array of waveforms). When I use this filter in a case structure, it doesn't work; yet when I use the filters from the "waveform conditioning" section, there is no problem. What exactly is the difference between these two types of filters?
A little addition to my problem: the second picture is from when I tried to reassemble the initial combination, and the error occurred.
You are comparing offline filtering to online filtering.
In LabVIEW, the PtByPt (point-by-point) VIs are intended to be used in an online setting, that is, iteratively.
Each new sample that is obtained is fed directly into the VI, which stores state from previous iterations to perform the filtering.
The "normal" filter VIs are intended for offline analysis and expect an array containing the full data of the signal.
The following whitepaper explains the Point-by-Point VIs. Note that the paper is quite old, so it should explain the concepts but might otherwise be outdated.
http://www.ni.com/pdf/manuals/370152b.pdf
If VoltageBuf is an array of consecutive values of the same signal (the one you want to filter), you only need to wire VoltageBuf directly to the filter.
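The online/offline distinction is easy to see outside LabVIEW. A sketch in Swift, with a first-order lowpass standing in for the Butterworth (the names and coefficient are illustrative only):

// Online (point-by-point) style: the filter keeps state between calls,
// so you feed it one sample per loop iteration -- like the PtByPt VIs.
struct OnePoleLowpass {
    var state = 0.0
    let alpha: Double   // smoothing coefficient in (0, 1)

    mutating func process(_ x: Double) -> Double {
        state += alpha * (x - state)
        return state
    }
}

// Offline style: one call, whole signal in, whole signal out --
// like the standard filter VIs that expect the complete array.
func lowpass(_ signal: [Double], alpha: Double) -> [Double] {
    var filter = OnePoleLowpass(alpha: alpha)
    return signal.map { filter.process($0) }
}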

Using AUGraph vs. direct connections between Audio Units by means of a GCD queue

So I've gone through the Audio Unit Hosting Guide for iOS, and it hints throughout that one might always want to use an AUGraph instead of direct connections between AUs. Among the most notable reasons, it mentions a high-priority thread, being able to reconfigure the graph while it is running, and general thread safety.
My problem is that I'm trying to get as close to "making my own custom DSP effects" as possible, given that iOS does not really let you dynamically load custom code. My approach is to create a generic output unit and write the DSP code in its render callback. The problem with this approach arises if I want to chain two of these units with custom callbacks in a graph. Since a graph must have a single output AU at its head, trying to add any more output units won't fly. That is, I can't have two Generic I/O units and a Remote I/O unit in sequence in a graph.
If I really wanted to use AUGraphs, I can think of one solution along these lines:
A graph interface that internally keeps an AUGraph with a single output unit plus, for each connected node in the graph, a list of "custom callback" generic output nodes that are notionally connected to that node. Maybe it could be a class/interface over AUNode instead, but hopefully you get the idea.
If I add non-output units to this graph, it simply connects them in the usual way to the backing AUGraph.
If, however, I add a generic output node, that really means adding the node's AU to the list, and whichever node I am connecting to in the graph actually gets its input-scope/element-0 callback set to something like:
for each unit in the connected list:
    call AudioUnitRender(...) and merge the result into ioData
So the node that is "connected" to any number of those "custom" nodes basically pulls the processed output from them and passes it on to the next non-custom node.
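A sketch of that merge step in Swift, with everything simplified to mono Float32 and the actual AudioUnitRender call injected as a closure (all names here are hypothetical):

import AudioToolbox

// Pull each "custom" generic output unit and sum its block into the output,
// which is what the callback above would do with ioData.
func pullAndMerge(units: [AudioUnit],
                  frameCount: Int,
                  into output: UnsafeMutablePointer<Float>,
                  render: (AudioUnit, UnsafeMutablePointer<Float>, Int) -> OSStatus) {
    // Start from silence, then accumulate each unit's rendered block.
    for i in 0..<frameCount { output[i] = 0 }
    var scratch = [Float](repeating: 0, count: frameCount)
    for unit in units {
        scratch.withUnsafeMutableBufferPointer { buffer in
            _ = render(unit, buffer.baseAddress!, frameCount)  // wraps AudioUnitRender
        }
        for i in 0..<frameCount {
            output[i] += scratch[i]
        }
    }
}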
Sure, there might be some loose ends in the idea above, but I think it would work with a bit more thought.
Now my question is: what if, instead, I make direct connections between AUs without any AUGraph at all? Without an AUGraph in the picture, I can definitely connect many generic output units with callbacks to one final Remote I/O output, and this seems to work just fine. The thing is, kAudioUnitProperty_MakeConnection is a property, so I'm not sure I can re-set properties once an AU has been initialized. I believe it's fine if I uninitialize first. If so, couldn't I just get GCD's high-priority queue and dispatch any changes in async blocks that uninitialize, reconnect, and initialize again?
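That last idea would look roughly like the following. This is a sketch only; kAudioUnitProperty_MakeConnection, AudioUnitUninitialize, and AudioUnitInitialize are real Audio Unit APIs, but whether an async uninitialize/reconnect/initialize cycle is glitch-free under render load is exactly the open question:

import AudioToolbox
import Dispatch

// Rewire destUnit's input 0 to sourceUnit's output 0, off the render thread.
let reconfigQueue = DispatchQueue.global(qos: .userInteractive)

func reconnect(source sourceUnit: AudioUnit, to destUnit: AudioUnit) {
    reconfigQueue.async {
        // Properties generally can't change on an initialized AU, so
        // uninitialize first, set the connection, then initialize again.
        AudioUnitUninitialize(destUnit)
        var connection = AudioUnitConnection(sourceAudioUnit: sourceUnit,
                                             sourceOutputNumber: 0,
                                             destInputNumber: 0)
        AudioUnitSetProperty(destUnit,
                             kAudioUnitProperty_MakeConnection,
                             kAudioUnitScope_Input,
                             0,
                             &connection,
                             UInt32(MemoryLayout<AudioUnitConnection>.size))
        AudioUnitInitialize(destUnit)
    }
}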
