I am very new to Simulink, so this question may seem simple. I am looking for a way to sample a continuous signal every X number of seconds.
Essentially what I am doing is simulating the principle of a data acquisition unit for a demonstration I am running, but I can't seem to find a block to do this; the nearest thing I can find is the Zero-Order Hold.
What you may need is a combination of two blocks. First, a Quantizer block to discretize the input to a chosen resolution. Second, a Zero-Order Hold block to sample and hold at the chosen sampling rate.
The ordering doesn't seem to be of much importance here.
Here's an example:
Also, you can use the Rate Transition block.
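For intuition only, here is a small Python/numpy sketch of what that quantize-then-sample-and-hold pair does to a signal. It is not a Simulink model, and the resolution and sample-time values are made up:

```python
# Not Simulink, just a numpy illustration of what the Quantizer and
# Zero-Order Hold blocks do to a signal. Resolution and sample time are
# made-up values.
import numpy as np

dt = 1e-4                                  # "continuous" time grid
t = np.arange(0.0, 1.0, dt)
u = np.sin(2 * np.pi * 3 * t)              # the continuous input

resolution = 0.05                          # Quantizer interval
sample_time = 0.02                         # Zero-Order Hold sample time (20 ms)

quantized = resolution * np.round(u / resolution)

hold = int(round(sample_time / dt))        # grid points per hold interval
sampled = quantized[::hold]                # one value every sample_time seconds
y = np.repeat(sampled, hold)[: t.size]     # held constant until the next sample
```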
The Drake MultibodyPlant class has a lot of functions for calculating quantities that I am interested in recording during a simulation (e.g. momentum, center of mass, etc.). What is the best way of recording data like this? I haven't used Drake extensively, but I had a few ideas:
Option 1: Run the simulation in a loop with a defined time step (i.e. simulator.AdvanceTo(current_time + dt)) and use the multibody plant to calculate the quantities directly.
Concern: seems a bit limited (i.e. can't use a single call to AdvanceTo() to run the simulation) and may require a very small time step to get the resolution I'm looking for.
Option 2: Record available quantities from the multibody plant output ports (e.g. body spatial velocities, body poses, etc.) using a VectorLogSink block, and solve for the quantities of interest after the simulation by reconstructing a Context from the logged values and calling the multibody plant calculation functions.
Concern: not sure if this is possible, and it seems like a bit of a roundabout approach; the quantities of interest are not available during the simulation.
Option 3: Create a system block that can connect to the multibody plant to perform these computations at every internal simulation timestep. That block could then connect to a VectorLogSink block to record the data.
Concern: not sure if this is possible or where to start.
Can anyone provide some guidance on how to record multibody quantities like this?
Porting this question from this issue for continued discussion
Initial response from #jwnimmer-tri
Your assessment of option (1) is correct. If you already know a reasonable dt step size for your logging that suits your needs, it can work well. If the "interesting" times are not always on a fixed schedule, this way can be difficult.
Relatedly, there is an option (4) you hadn't found yet. The Simulator::set_monitor() can be used to set a simulation callback. You could use that for the ability to log at every simulation step, no matter how large or small the step was.
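For concreteness, here is a minimal pydrake sketch of that monitor approach. The names `diagram` and `plant` stand in for your own already-built diagram and the MultibodyPlant inside it, and the center-of-mass query is just an example of something to record:

```python
# Minimal pydrake sketch of Simulator::set_monitor(). Here `diagram` is your
# already-built diagram and `plant` is the MultibodyPlant inside it (both are
# assumed to exist); the center-of-mass query is just an example.
from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import EventStatus

times, com_positions = [], []

def monitor(root_context):
    # Pull the plant's sub-context out of the diagram's root context, then run
    # whatever MultibodyPlant queries you want to record at this step.
    plant_context = plant.GetMyContextFromRoot(root_context)
    times.append(root_context.get_time())
    com_positions.append(plant.CalcCenterOfMassPositionInWorld(plant_context))
    return EventStatus.Succeeded()   # don't interfere with the simulation

simulator = Simulator(diagram)
simulator.set_monitor(monitor)       # invoked after every step, however small
simulator.AdvanceTo(5.0)
```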
Option (2) should work as well. Assuming that the only state in your simulation is the multibody plant positions and velocities, you could log those during the simulation, and then be able to set the context state to those values offline later, and make any queries on the plant that you need. If you have extra state (e.g., controller state), this approach becomes more difficult.
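A hedged sketch of option (2) along those lines, using LogVectorOutput / VectorLogSink; `builder`, `plant`, and `diagram` are again assumed to come from your own setup, and this assumes the plant's positions and velocities are the only state you need to restore:

```python
# Hedged sketch of option (2): log the plant state with a VectorLogSink during
# the run, then replay each logged state through a plant context afterwards.
# `builder`, `plant`, and `diagram` are assumed to come from your own setup.
from pydrake.systems.analysis import Simulator
from pydrake.systems.primitives import LogVectorOutput

# While building the diagram (before builder.Build()):
state_logger = LogVectorOutput(plant.get_state_output_port(), builder)

# ... build the diagram and run the simulation as usual ...
simulator = Simulator(diagram)
simulator.AdvanceTo(5.0)

# Offline: push each logged [q; v] back into a plant context and query it.
log = state_logger.FindLog(simulator.get_context())
plant_context = plant.CreateDefaultContext()
com_trajectory = []
for k in range(log.data().shape[1]):
    plant.SetPositionsAndVelocities(plant_context, log.data()[:, k])
    com_trajectory.append(plant.CalcCenterOfMassPositionInWorld(plant_context))
```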
Option (3) is also possible, but more complicated to explain.
I developed an app a few months back for iOS devices that generates real-time harmonic rich drones. It works fine on newer devices, but it's running into buffer underruns on slower devices. I need to optimize this thing and need some mental help. Here's a super basic overview of what I'm currently doing:
Create an "Oscillator Bank" that consists of X number of harmonics (simply calculated from a given fundamental frequency. Nothing fancy here.)
Inside my DAC function that spits out samples to an iOS audio buffer, I call a "GetNextSample()" function that goes through the bank of sine oscillators, calculates the sample for each one and adds them up. Some simple additive synthesis.
Enjoy the beauty of the drone.
Again, it works great, until it doesn't. I'd like to optimize this thing so I'm not using brute-force additive synthesis of sine waves calculated in real time. If I limit the number of harmonics ("banks") to 2, it'll work on the older devices. Not cool. On the newer devices, it underruns around 50 harmonics. Not too bad. But if I want to play multiple drones at once to create some chords, that's too much processing power... so...
Should I generate waveform tables to just loop through instead of constant calculation? (I assume yes...)
Should I convert my usage of double-precision floating point to integer based calculations? (I assume yes...)
And my big algorithmic question (being pretty non-mathematical):
If I use a waveform table, how do I accurately determine how long the wave/table should be? In my experience developing this app, if I just go to the end of a period (2*PI) and start over again, resetting the phase back to 0, I get a sound artifact, since I'm forcibly offsetting the phase. In other words, I can't guarantee that one period will give me the right results...
Maybe I'm overcomplicating things... What's the standard way of doing quick, processor-friendly, real-time synthesis of multiple added sines?
I'll keep poking around in the meantime.
Thanks!
Have you tried (or can you; I'm not an iOS person) increasing the buffer size? That might give you enough headroom that you don't need any of this. Otherwise, yes, wavetable synthesis is a viable approach. You could calculate a new wavetable from the sum of all the harmonics only when a parameter changes.
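On the "how long should the table be" worry: since every partial is an integer multiple of the fundamental, exactly one period of the summed waveform fits the table, and pitch accuracy comes from a fractional phase increment rather than the table length, so wrapping never forces a phase reset. A rough Python/numpy sketch of that idea (table length and the 1/h amplitudes are arbitrary choices; on iOS the render loop would live in your audio callback):

```python
# Single-cycle wavetable plus fractional phase accumulator (illustration in
# numpy; table length and 1/h amplitudes are arbitrary choices).
import numpy as np

SAMPLE_RATE = 44100.0
TABLE_LEN = 2048   # one cycle of the summed waveform

def build_table(num_harmonics, amps=None):
    """Sum the harmonics once, offline, into a single-cycle table."""
    n = np.arange(TABLE_LEN)
    table = np.zeros(TABLE_LEN)
    for h in range(1, num_harmonics + 1):
        amp = 1.0 / h if amps is None else amps[h - 1]
        table += amp * np.sin(2.0 * np.pi * h * n / TABLE_LEN)
    return table

def render(table, frequency, num_samples, phase=0.0):
    """Read the table with a fractional phase; wrapping causes no click."""
    out = np.empty(num_samples)
    inc = TABLE_LEN * frequency / SAMPLE_RATE   # table samples per output sample
    for i in range(num_samples):
        i0 = int(phase)
        frac = phase - i0
        i1 = (i0 + 1) % TABLE_LEN
        out[i] = (1.0 - frac) * table[i0] + frac * table[i1]  # linear interp
        phase += inc
        if phase >= TABLE_LEN:
            phase -= TABLE_LEN     # keep the fractional part, so no phase reset
    return out, phase

table = build_table(num_harmonics=50)
block, phase = render(table, frequency=110.0, num_samples=512)
more, phase = render(table, frequency=110.0, num_samples=512, phase=phase)
```

The table only needs rebuilding when the harmonic amplitudes change; multiple drones are just multiple phase accumulators reading their own tables, which is far cheaper than summing sines per sample.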
I have written such a beast in golang on the server side... for starters, yes, use single-precision floating point.
To address table population, I would make sure your implementation is solid by having it synthesize a square wave. Visualize the output for each run as you give it each additional frequency (with its corresponding parameters of amplitude and phase shift)... by definition a single cycle is enough, as long as you are correctly using enough cycles to cover the time period of a sample.
It's important to leverage the knowledge that generating an output curve from an input set of sine waves (each with frequency, amplitude, and phase shift) lends itself to doing the reverse... namely, performing an FFT on that output curve to have the API give you its version of the underlying sine waves (again, each with a frequency, amplitude, and phase)... this will confirm your system is accurate (see the check sketched below).
The name of the process you are implementing is the inverse Fourier transform, and there are libraries for this; however, I too prefer rolling my own.
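For that square-wave sanity check, something along these lines works (pure numpy; the harmonic count, threshold, and 1/k amplitudes are arbitrary). The FFT should hand back exactly the odd harmonics you put in:

```python
# Sanity check: synthesize a square wave from odd harmonics at 1/k amplitude,
# then FFT the result and confirm those harmonics come back out.
import numpy as np

sr = 44100
f0 = 100.0                                   # exactly 100 cycles in 1 second
t = np.arange(sr) / sr

square = np.zeros(t.size)
for k in range(1, 30, 2):                    # odd harmonics only
    square += (1.0 / k) * np.sin(2 * np.pi * k * f0 * t)

spectrum = np.abs(np.fft.rfft(square)) / (t.size / 2)   # back to amplitudes
freqs = np.fft.rfftfreq(t.size, d=1.0 / sr)
peaks = freqs[spectrum > 0.01]
print(peaks)   # expect 100, 300, 500, ... Hz at roughly 1, 1/3, 1/5, ... amplitude
```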
I am wondering what would be the most efficient way to correct a signal if it drops out significantly during some period of time. As in the figure, the green signal dropped out between roughly 16:26 and 19:16, and I would like to elevate it to the same level as before 16:26 and after 19:16 using statistics.
Please find the figure here.
Thanks in advance!
Try the Bayesian blocks method; here is the paper: J. D. Scargle, J. P. Norris, B. Jackson, and J. Chiang (2012), arXiv:1207.5578.
It is a rather long paper, but you can skip to the place where they include a MATLAB implementation.
What it does is split your time series into time blocks in which the values fluctuate around some mean, that mean being different in each block.
Once you have the blocks, the ones which are low can be scaled up to remove the drops.
NOTE: there is a parameter ncp_prior; by varying it you can change the sensitivity of the method, so that it doesn't get fooled by the fluctuations but still reproduces the drop.
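If you end up prototyping in Python rather than MATLAB, astropy ships an implementation of Bayesian blocks. A hedged sketch of the whole idea follows; the synthetic data, the ncp_prior value, and the threshold for deciding that a block has dropped are all placeholders to tune on your own series:

```python
# Hedged sketch using astropy's Bayesian blocks implementation; synthetic data
# stand in for the real series, and ncp_prior plus the "dropped block" test
# are things to tune.
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(0)
t = np.arange(600.0)                        # e.g. one sample per minute
x = 1.0 + 0.05 * rng.standard_normal(t.size)
x[200:400] -= 0.5                           # the simulated drop-out
sigma = np.full_like(x, 0.05)

# Split the series into blocks of roughly constant mean.
edges = bayesian_blocks(t, x, sigma, fitness="measures", ncp_prior=10.0)

# Assign each sample to a block and compute block means.
idx = np.clip(np.searchsorted(edges, t, side="right") - 1, 0, len(edges) - 2)
block_means = np.array([x[idx == k].mean() for k in range(len(edges) - 1)])

# Shift any block that sits well below the typical level back up to it.
reference = np.median(block_means)
corrected = x.copy()
for k, m in enumerate(block_means):
    if m < reference - 0.2:                 # hypothetical "this block dropped" test
        corrected[idx == k] += reference - m
```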
I'm looking to improve the delay estimation portion of a Simulink model. The input is an estimated impulse response for the system. I want the index of the first sample of the impulse response where the sum of the absolute values of it and the previous elements exceeds a certain fraction of the total across the whole vector.
Here's my current solution:
The matrix sum runs along dimension 2. The Prelookup block is set to clip. This is finding the element (possibly one off, I haven't thought that through yet) where 1% of the total is reached.
This seems overly complicated, and it isn't clear what it is trying to do without some explanation. I tried coming up with a solution based on the discrete integrator/accumulator block but couldn't come up with something better. It certainly does a lot more addition than it needs to with this solution, although performance isn't really an issue right now.
Is there a simpler way to get the running sum across a vector that I could put in place of the Toeplitz->Triangular->Sum section? Is there a better way overall to perform the whole lookup?
If you have the DSP System Toolbox, there is a "Cumulative Sum" block which should be able to replace your Toeplitz, triangular matrix, and matrix sum blocks.
http://www.mathworks.com/help/dsp/ref/cumulativesum.html
If you do not have the DSP System Toolbox, I suggest coding this in a MATLAB Function block, where it should be a one-liner:
y = cumsum(x);
While you are there, you may also want to code the entire logic in the MATLAB Function block, which in cases like this is easier to code and understand.
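For reference, the entire logic is only a couple of lines: a running sum of absolute values, then the first index that reaches a fraction of the total. Here it is in Python/numpy purely to spell it out (the 1% fraction and the toy impulse response are placeholders; a MATLAB Function block version is the direct cumsum/find translation):

```python
# The "entire logic" spelled out: index of the first sample at which the
# running sum of |h| reaches a given fraction of the total.
import numpy as np

def delay_index(impulse_response, fraction=0.01):
    running = np.cumsum(np.abs(impulse_response))
    # First index where the running sum reaches fraction * total.
    return int(np.searchsorted(running, fraction * running[-1]))

h = np.concatenate([np.zeros(25), np.array([0.0, 0.6, 1.0, 0.4, 0.1])])
print(delay_index(h, fraction=0.01))   # 26: where the response "starts"
```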
I am totally new to the vdsp framework and I am trying to learn by building. My goal is for the signal to be processed in the following way:
100th order Band Pass FIR
Downsampling by factor: 2
From what I could understand from Apple's documentation the function vDSP_desamp() is what I am looking for (it can do both steps at the same time, right?)
How would I use this correctly?
Here are my thoughts:
Given an AudioBufferList *audio and an array of filter coefficients filterCoeffs with length [101]:
vDSP_desamp((float*)audio->mBuffers[0].mData, 2, &filterCoeffs, (float*)audio->mBuffers[0].mData, frames, 101);
would this be a correct use of the method?
Do I need to implement a circular buffer for this process?
Any guidance /direction /pointer to something to read would be very welcome.
thanks
Reading the documentation, vDSP_desamp() is indeed a compound decimation and FIR operation. Doing both together is a good idea as it reduces memory access and there is scope for eliminating a lot of computation.
The assumption here is that the FIR filter has been recast with a (P-1)/2 group delay. The consequence of this is that to calculate C(n) the function needs access to A(n*I + p).
Where (using the terminology of the documentation):
`A[0..x-1]`: input sample array
`C[0..n-1]`: output sample array
`P`: number of filter coefficients
`I`: Decimation factor
Clearly if you pass a CoreAudio buffer to this, it'll run off the end of the buffer by 200 input samples. At best, yielding 100 garbage samples, and at worst a SIGSEGV.
So, the simple answer is NO. You cannot use vDSP_desamp() alone.
Your options are:
Assemble the samples needed into a contiguous buffer and then call vDSP_desamp() for N output samples (see the sketch after this list). This involves copying samples from two CoreAudio buffers. If you're worried about latency, recast the FIR to use 100 previous samples; alternatively, they could come from the next buffer.
Use vDSP_desamp() for what you can, and calculate the more complex case when the filter wraps over the two buffers.
Two calls to vDSP_desamp(): one with the easy case, and another with an assembled input buffer where samples wrap adjacent CoreAudio buffers.
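To make the first option concrete, here is a small numpy illustration (not Accelerate, and the window-based coefficients are only a stand-in) of the buffer assembly: carry the last P-1 input samples from the previous CoreAudio buffer and prepend them, so the filter never reads past the end of the buffer it was handed. In the real code, the filtering and decimation of the assembled buffer would be the vDSP_desamp() call.

```python
# Illustration (numpy, not Accelerate) of assembling each incoming buffer with
# P-1 samples of history so that FIR filtering plus decimate-by-2 never reads
# past the end of the buffer it was given.
import numpy as np

P = 101                     # filter length
I = 2                       # decimation factor
h = np.hamming(P)           # stand-in for the real band-pass coefficients
h /= h.sum()

history = np.zeros(P - 1)   # carried between audio callbacks

def process_block(block):
    """block: one CoreAudio buffer's worth of samples (keep its length even)."""
    global history
    ext = np.concatenate([history, block])       # history + new samples
    # Causal FIR output for each new sample, then keep every I-th one.
    full = np.convolve(ext, h, mode="valid")     # length == len(block)
    decimated = full[::I]
    history = ext[-(P - 1):]                     # last P-1 inputs for next call
    return decimated

out = np.concatenate([process_block(np.random.randn(512)) for _ in range(4)])
```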
I don't see how you can use a circular buffer to solve this problem: you still have to deal with the case where the buffer wraps, and you still need to copy all the samples into it.
Which is faster rather depends on the size of the audio buffers presented by CoreAudio. My hunch is that for small buffers, and a small filter length, vDSP_desamp() possibly isn't worth it, but you're going to need to measure to be sure.
When I've implemented this kind of thing in the past on iOS, I've found a hand-rolled decimation and filter operation to be fairly insignificant in the grand scheme of things, and didn't bother optimizing further.