Deconvolution to obtain the unit response

I have measured two signals as time series. One of them is the input of the system and the other one is the output.
I assume that if I know the unit (impulse) response of the system, I can obtain the output by convolving this unit response with the input time series.
Conversely, is it possible to obtain the unit response if I know the input and the output over a long period? Can this unit response be obtained by deconvolution?
Or, more generally, how can I obtain the unit response function/vector if I measure some natural process variables?
Thanks all!

It is possible to estimate the transfer function. There are several methods for doing this, many of which work in the frequency domain. If you provide additional information, you can pick the method best suited to your case.
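For a rough idea of the frequency-domain route, here is a minimal Python sketch (not a specific method from the literature) that estimates the frequency response with the H1 estimator, S_uy / S_uu, using Welch-averaged spectra; the sampling rate, record length, and the synthetic system are assumptions made up for the example:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)

# Synthetic example: a known FIR "unit response" so the estimate can be checked.
fs = 100.0                        # sampling rate in Hz (assumption)
h_true = signal.firwin(31, 0.2)   # the system's impulse response (unknown in practice)

u = rng.standard_normal(20000)                      # measured input time series
y = np.convolve(u, h_true, mode="full")[:len(u)]    # measured output
y += 0.01 * rng.standard_normal(len(y))             # plus some measurement noise

# H1 estimator: H(f) = S_uy(f) / S_uu(f), averaged over many segments.
# This is far more robust to noise than a naive FFT(y) / FFT(u) division.
f, S_uu = signal.welch(u, fs=fs, nperseg=1024)
_, S_uy = signal.csd(u, y, fs=fs, nperseg=1024)
H_est = S_uy / S_uu

# Compare against the true frequency response of the synthetic system.
_, H_true = signal.freqz(h_true, worN=f, fs=fs)
print(np.max(np.abs(np.abs(H_est) - np.abs(H_true))))   # small for a good estimate
```

Once you have the frequency response estimate, an inverse FFT of it gives an estimate of the impulse (unit) response itself.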

Why is using batches to predict when applying Batch Normalization cheating?

In a post on Quora, someone says:
At test time, the layer is supposed to see only one test data point at a time, hence computing the mean / variance along a whole batch is infeasible (and is cheating).
But as long as the testing data have not been seen by the network during training, isn't it ok to use several testing images?
I mean, our network has been trained to predict using batches, so what is the issue with giving it batches?
If someone could explain what information our network gets from batches that it is not supposed to have, that would be great :)
Thank you
But as long as the testing data have not been seen by the network during training, isn't it ok to use several testing images?
First of all, it's ok to use batches for testing. Second, in test mode, batchnorm doesn't compute the mean or variance for the test batch. It takes the mean and variance it already has (let's call them mu and sigma**2), which are computed based solely on the training data. The result of batch norm in test mode is that all tensors x are normalized to (x - mu) / sigma.
At test time, the layer is supposed to see only one test data point at a time, hence computing the mean / variance along a whole batch is infeasible (and is cheating)
I just skimmed through the Quora discussion; maybe this quote has a different context. But taken on its own, it's just wrong. No matter what the batch is, all tensors will go through the same transformation, because mu and sigma are not changed during testing, just like all other variables. So there's no cheating there.
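To make this point concrete, here is a small numpy sketch of test-mode batch norm; the variable values and the epsilon are illustrative, not tied to any particular framework:

```python
import numpy as np

def batchnorm_inference(x, mu, var, gamma, beta, eps=1e-5):
    """Test-mode batch norm: uses the stored training statistics, so the
    output for each sample does not depend on the rest of the batch."""
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
mu, var = 0.3, 1.7          # running statistics collected during training (made up)
gamma, beta = 1.0, 0.0      # learned affine parameters (made up)

x_single = rng.standard_normal((1, 8))    # one test point
x_batch = rng.standard_normal((100, 8))   # a whole batch of test points

y_single = batchnorm_inference(x_single, mu, var, gamma, beta)
y_batch = batchnorm_inference(x_batch, mu, var, gamma, beta)
# x_single would map to exactly the same output if it were processed inside x_batch,
# which is why batching at test time leaks no information between samples.
```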
The claim is very simple: you train your model so it is useful for some task. In classification the task is usually that you get a single data point and output its class; there is no batch. Of course, in some practical applications you can have batches (say, many images from the same user). So that's it: it is application dependent. If you want to claim something "in general" about the learning model, you cannot assume that one has access to batches at test time, that's all.

How to evaluate a sine-sweep (chirp) signal for system identification

(for the quick reader)
Question: Am I right that the spectral analysis method for analyzing a chirp is not so beneficial for parameter estimation / model identification?
[EDIT]
My system is open-loop, with 1 input (steering wheel angle) and 2 outputs (y-acceleration and yaw rate). To find the vehicle characteristics I want to fit a linear transfer function (bicycle model) to my data.
My current method is the 'spectral analysis method': using test data to estimate the FRF and from it the transfer function, because:
For dummy data (2 transfer functions excited by a chirp steering wheel angle) this works very well: an accuracy of 99.98% when refitting the model. For real test data from a real vehicle, however, it is nowhere near correct, even if I average the data over 11 runs. Hence my confusion/question.
[will upload images of the test data tonight for clarification]
Background
I'm working on a project where I have to perform parameter identification of a car.
In simulator-based compensatory tracking experiments I would excite the 'system' (read: the human) with a multi-sine signal and use the instrumental variable method (and function fitting) to perform system identification (Fourier transforming the input and output and evaluating only the excited frequencies).
However, for a human driver this may be a bit difficult to do in the car. It is easier to provide a sine sweep (or chirp).
Unfortunately I think this input signal is not compatible with direct frequency-domain analysis, because each frequency is only excited during a specific timeframe, while the Fourier transform assumes a harmonic oscillation during the entire sample time.
I have checked some books (System Identification: A Frequency Domain Approach, System Identification: An Introduction, and …) but can't seem to get a grip on how to use the chirp signal for the estimation of the frequency response function (and thus also the transfer function).
Short answer (for the quick reader):
It depends on what you want to do. And yes, multisine signals can have favourable properties compared to chirps.
A bit longer answer:
You ask about chirp signals and their suitability for system identification / parameter estimation, so I assume you are focusing on frequency-domain identification and will not comment on time-domain methods.
If you read the book "System Identification: A Frequency Domain Approach" by Pintelon/Schoukens (try to get the second edition from 2012), you will find (cf. chapter 2) that the authors favour periodic signals over nonperiodic ones like chirps, and they do so for good reasons: periodic signals avoid major errors such as leakage.
However if your system cannot be excited by periodic signals (for whatever reason), chirp signals may be a great excitation signal. In the aviation world, test pilots are even taught to perform good chirp signals. The processing of your data may be different for chirps (take a look at chapter 7 in the Pintelon/Schoukens book).
In the end there is just one thing that makes a good excitation signal: it gives the desired estimation result. If chirps work for your application, go with them!
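If you do go with a chirp, here is a rough Python sketch of one common way to estimate the FRF from chirp data, using Welch-averaged spectra and the H1 estimator. The second-order system, sampling rate and sweep band are made up for illustration, and this is not the chapter 7 procedure from Pintelon/Schoukens:

```python
import numpy as np
from scipy import signal

fs = 100.0                     # sampling rate in Hz (assumption)
t = np.arange(0, 120, 1 / fs)  # a 2-minute sweep (assumption)

# Chirp excitation from 0.1 Hz to 5 Hz (the band is an assumption).
u = signal.chirp(t, f0=0.1, f1=5.0, t1=t[-1], method="logarithmic")

# Stand-in "vehicle": a second-order transfer function, simulated in continuous time.
sys_ct = signal.TransferFunction([8.0], [1.0, 1.2, 9.0])
_, y, _ = signal.lsim(sys_ct, U=u, T=t)
y += 0.02 * np.random.default_rng(0).standard_normal(len(y))  # measurement noise

# FRF via the H1 estimator with segment averaging (reduces noise and leakage effects).
f, S_uu = signal.welch(u, fs=fs, nperseg=2048)
_, S_uy = signal.csd(u, y, fs=fs, nperseg=2048)
H_est = S_uy / S_uu

# Only trust the estimate inside the excited band (0.1-5 Hz); outside it the
# input power S_uu is tiny and the division blows up.
band = (f >= 0.1) & (f <= 5.0)
_, H_true = signal.freqs(sys_ct.num, sys_ct.den, worN=2 * np.pi * f[band])
print(np.median(np.abs(np.abs(H_est[band]) - np.abs(H_true))))
```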
Unfortunately I think this input signal is not compatible with direct frequency-domain analysis, because each frequency is only excited during a specific timeframe, while the Fourier transform assumes a harmonic oscillation during the entire sample time.
I do not understand what you mean in this paragraph. Can you describe your problem in more detail?
P.S.: You didn't write much about your system. Is it static or dynamic? Linear or nonlinear? Open loop or closed loop? SISO or MIMO? Are you limited to frequency-domain ID? Can you repeat experiments? Each of these should be kept in mind when you decide on the excitation.

Simulink: Convert Continuous Signal to Discrete

I am very new to Simulink, so this question may seem simple. I am looking for a way to sample a continuous signal every X seconds.
Essentially what I am doing is simulating the principle of a data acquisition unit for a demonstration I am running, but I can't seem to find a block to do this; the nearest thing I can find is the Zero-Order Hold.
What you may need is a combination of two blocks. First, a Quantizer block to discretize the input to a chosen resolution. Second, a Zero-Order Hold block to sample and hold at the chosen sampling rate.
The ordering doesn't seem to be of much importance here.
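For intuition about what the Quantizer plus Zero-Order Hold combination does (outside Simulink), here is a small numpy sketch; the sample period and quantizer interval are arbitrary choices for the example:

```python
import numpy as np

fs_sim = 1000.0    # rate of the "continuous" signal in the simulation (assumption)
Ts = 0.05          # desired sample period X = 50 ms (assumption)
q = 0.1            # Quantizer interval (assumption)

t = np.arange(0, 1, 1 / fs_sim)
x = np.sin(2 * np.pi * 3 * t)          # continuous signal to be acquired

# Quantizer block: round to the nearest multiple of q.
xq = q * np.round(x / q)

# Zero-Order Hold block: keep one value per Ts and hold it until the next sample.
samples_per_hold = int(round(Ts * fs_sim))
hold_idx = (np.arange(len(t)) // samples_per_hold) * samples_per_hold
x_daq = xq[hold_idx]                   # staircase signal, like a DAQ read every Ts seconds

# Quantizing after the hold instead gives the same staircase here,
# which is why the block order doesn't matter much.
```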
Alternatively, you can use the Rate Transition block.

normalization methods for stream data

I am using the Clustream algorithm and I have figured out that I need to normalize my data. I decided to use min-max normalization, but I think the values of newly arriving data objects will be scaled differently, as the values of min and max may change. Do you think I'm correct? If so, which algorithm should I use?
Instead of computing the global min-max over the whole data, you can use a local normalization based on a sliding window (e.g. using just the last 15 seconds of data). This approach is very common for computing local mean filters in signal and image processing.
I hope this helps.
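As an illustration of the sliding-window idea, here is a minimal Python sketch that normalizes a stream with the min/max of the most recent values; the window length is an arbitrary choice:

```python
from collections import deque

def stream_minmax_normalizer(window_size=100):
    """Normalize each incoming value using the min/max of a sliding window
    of the most recent values (window_size is an arbitrary choice here)."""
    window = deque(maxlen=window_size)

    def normalize(x):
        window.append(x)
        lo, hi = min(window), max(window)
        if hi == lo:                      # degenerate window: constant values so far
            return 0.5
        return (x - lo) / (hi - lo)       # value scaled into [0, 1]

    return normalize

# Usage: feed values one at a time as they arrive from the stream.
norm = stream_minmax_normalizer(window_size=50)
for x in [3.0, 7.5, 1.2, 9.9, 4.4]:
    print(norm(x))
```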
When normalizing stream data you can use the statistical properties of the training set. During streaming you then simply clip values that are too large or too small to the stored min/max. There is no other way; it's a stream, after all.
But as a tradeoff, you can continuously collect the statistical properties of all your data and retrain your model from time to time to adapt to evolving data. I don't know Clustream, but after a short search it seems to be an algorithm designed to help with exactly such tradeoffs.

Audio Processing with Accelerate and vDSP_desamp()

I am totally new to the vDSP framework and I am trying to learn by building. My goal is for the signal to be processed in the following way:
100th order Band Pass FIR
Downsampling by factor: 2
From what I could understand from Apple's documentation the function vDSP_desamp() is what I am looking for (it can do both steps at the same time, right?)
How would I use this correctly?
Here are my thoughts:
Given an AudioBufferList *audio and an array of filter coefficients filterCoeffs with length [101]:
vDSP_desamp((float*)audio->mBuffers[0].mData, 2, &filterCoeffs, (float*)audio->mBuffers[0].mData, frames, 101);
would this be a correct use of the method?
Do I need to implement a circular buffer for this process?
Any guidance /direction /pointer to something to read would be very welcome.
thanks
Reading the documentation, vDSP_desamp() is indeed a compound decimation and FIR operation. Doing both together is a good idea as it reduces memory access and there is scope for eliminating a lot of computation.
The assumption here is that the FIR filter has been recast with a (P-1)/2 group delay. The consequence is that to calculate C(n) the function needs access to A(n*I + p).
Where (using the terminology of the documentation):
`A[0..x-1]`: input sample array
`C[0..n-1]`: output sample array
`P`: number of filter coefficients
`I`: Decimation factor
Clearly, if you pass a CoreAudio buffer to this, it'll run off the end of the buffer by 200 input samples, at best yielding 100 garbage samples and at worst a SIGSEGV.
So, the simple answer is NO. You cannot use vDSP_desamp() alone.
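To see the index arithmetic concretely, here is a small numpy stand-in for a decimating FIR of the form C[n] = sum over p of A[n*I + p] * F[p] (this is a reference model, not the vDSP API itself):

```python
import numpy as np

def decimating_fir(A, F, I, N):
    """Reference (slow) decimating FIR: C[n] = sum_p A[n*I + p] * F[p], n = 0..N-1."""
    P = len(F)
    needed = (N - 1) * I + P          # highest index touched is (N - 1) * I + (P - 1)
    assert len(A) >= needed, f"need {needed} input samples, got {len(A)}"
    return np.array([np.dot(A[n * I:n * I + P], F) for n in range(N)])

# With I = 2, P = 101 and N = frames output samples (as in the question's call),
# the last output reads input index (frames - 1) * 2 + 100, i.e. well past the
# end of a CoreAudio buffer that only holds `frames` samples.
frames = 512
F = np.ones(101) / 101                        # placeholder coefficients
A_short = np.zeros(frames)                    # one CoreAudio buffer's worth of input
# decimating_fir(A_short, F, I=2, N=frames)   # would trip the assert above
A_padded = np.zeros((frames - 1) * 2 + 101)   # buffer extended with history/lookahead
C = decimating_fir(A_padded, F, I=2, N=frames)
```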
Your options are:
Assemble the samples needed into a buffer and then call vDSP_desamp() for N output samples. This involves copying samples from two CoreAudio buffers. If you're worried about latency, you can recast the FIR to use 100 previous samples; alternatively, they could come from the next buffer.
Use vDSP_desamp() for what you can, and calculate the more complex case yourself where the filter wraps over the two buffers.
Make two calls to vDSP_desamp(): one for the easy case, and another with an assembled input buffer in which samples wrap adjacent CoreAudio buffers.
I don't see how you can use a circular buffer to solve this problem: you still have to deal with the case where the buffer wraps, and you still need to copy all samples into it.
Which is faster rather depends on the size of the audio buffers presented by CoreAudio. My hunch is that for small buffers, and a small filter length, vDSP_desamp() possibly isn't worth it, but you're going to need to measure to be sure.
When I've implemented this kind of thing in the past on iOS, I've found a hand-rolled decimation and filter operation to be fairly insignificant in the grand scheme of things, and didn't bother optimizing further.
