Generation rate is the rate at which traffic is generated at the source. It is usually counted in packets per second; expressed in bits, the unit is Mbps.
Different application types use different formulas to calculate the generation rate.
Here is a calculator for CBR, Custom, and Sensor applications: GitLink
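For example, for a CBR (constant bit rate) application the generation rate follows directly from the packet size and the inter-arrival time. A minimal sketch (the function and variable names here are my own, not taken from the calculator):

```python
def cbr_generation_rate_mbps(packet_size_bytes, inter_arrival_s):
    """Generation rate of a CBR source in Mbps: bits per packet
    times packets per second, scaled to megabits."""
    packets_per_sec = 1.0 / inter_arrival_s
    bits_per_sec = packet_size_bytes * 8 * packets_per_sec
    return bits_per_sec / 1e6

# 1460-byte packets every 100 microseconds -> 116.8 Mbps
print(cbr_generation_rate_mbps(1460, 100e-6))
```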
I'm trying to implement a hello-world algorithm in speaker recognition, GMM-UBM, on an end-device MCU. Wi-Fi, BLE, etc. are not available due to some limitations.
The UBM is pre-trained on a PC and then downloaded to flash memory that my MCU can access.
What will actually run on the MCU are only speaker enrollment and testing, also known as the GMM model adaptation procedure and the score (log-likelihood ratio) calculation.
Since the purpose of this system is to tell whether an input voice segment is from the target speaker or not, a score threshold must be selected. Currently, after speaker enrollment, I use a set of impostor voices (pre-saved in flash memory) to calculate impostor scores against the target speaker's model. Assuming the impostor scores follow a 1-D Gaussian distribution, I can then set the score threshold based on the acceptable false-alarm rate.
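For reference, that selection step amounts to something like the following (a PC-side sketch using scipy; the MCU version would be the same math in fixed point):

```python
import numpy as np
from scipy.stats import norm

def select_threshold(impostor_scores, target_far=0.01):
    """Fit a 1-D Gaussian to the impostor scores and return the
    threshold whose right-tail mass equals the acceptable
    false-alarm rate: threshold = mu + sigma * z_(1 - FAR)."""
    mu = np.mean(impostor_scores)
    sigma = np.std(impostor_scores)
    return mu + sigma * norm.ppf(1.0 - target_far)
```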
But here's the problem: the threshold-selection procedure above is time-consuming, especially on a device with a CPU clock rate below 100 MHz. Imagine having to wait a minute or even longer for a response from Google Assistant the first time you use it; that would be embarrassing.
My questions are as follows:
Is my concept right, or is there some misunderstanding?
Is this a standard procedure for enrolling a speaker and selecting a threshold?
Is there some method to reduce the computing time of the threshold-selection procedure, or even a pre-defined threshold that avoids the on-chip computation?
Thanks a billion!!
I'm working on a model for speech emotion recognition, and I'm currently in the pre-processing phase, creating a utility that can transform audio files into a feature space of fixed dimensions. I'm planning to experiment with spectrograms, mel-spectrograms, and MFCCs (including deltas and delta-deltas) as input features for convolutional neural networks. One glaring issue is that the audio is of variable length.
Now, I know that the typical method of dealing with this is to set some length and then expand all the audio files to fit it, or to truncate them; I imagine the former is preferable, because truncation loses data that could be valuable in training. So I intend to pad the audio files with zeros to expand them to some fixed length. I found the maximum duration of an audio file in my dataset and took its ceiling to get that length. After resampling to a fixed sampling rate, I will add trailing zeros so that all the audio files have a static length, as sketched below.
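Concretely, the loading step I have in mind looks roughly like this (a sketch; librosa, the target sample rate, and the maximum duration are placeholders for my actual setup):

```python
import numpy as np
import librosa

TARGET_SR = 16000                     # fixed sampling rate (placeholder)
MAX_SECONDS = 8                       # ceiling of my dataset's longest clip (placeholder)
TARGET_LEN = TARGET_SR * MAX_SECONDS

def load_fixed_length(path):
    """Resample to TARGET_SR, then zero-pad (or truncate) to TARGET_LEN."""
    y, _ = librosa.load(path, sr=TARGET_SR, mono=True)
    if len(y) < TARGET_LEN:
        y = np.pad(y, (0, TARGET_LEN - len(y)))   # trailing zeros
    else:
        y = y[:TARGET_LEN]
    return y
```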
My question is: do these extra dimensions not potentially confuse the model? I know neural networks handle feature extraction automatically, but are there any caveats I should be aware of, or perhaps some alternative method for going about this that may produce better results?
Thanks.
I'm designing a high-speed FIR filter on an FPGA. Currently my sampling rate is 3600 MSPS, but the maximum clock supported by the device is 350 MHz. Please suggest how to go about multiple instantiation or a parallel implementation of the FIR filter so that it meets the design requirement.
Also, please suggest how to pass samples to the parallel implementation.
It's difficult to answer your question based on the information you have given.
The first question I would ask myself is: can you reduce the sample rate at all? 3600 MSPS is very high. The sample rate only needs to be this high if you are truly supporting data requiring that bandwidth.
Assuming you do really need that rate, then in order to implement an FIR filter running at such a high sample rate, you will need to parallelise the architecture as you suggested. It's generally very easy to implement such a structure. An example approach is shown here:
http://en.wikipedia.org/wiki/Parallel_Processing_%28DSP_implementation%29#Parallel_FIR_Filters
Each clock cycle you will pass a parallel word into each filter section, and extract a word from the combined filter output.
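To make the structure concrete, here is a behavioral sketch in Python, not HDL; each of the P dot products in the inner loop corresponds to one pipelined MAC section in hardware, and at 3600 MSPS with a 350 MHz fabric clock you would need at least ceil(3600/350) = 11 samples per clock (a rounder factor such as 12 at 300 MHz may map more cleanly):

```python
import numpy as np

def parallel_fir(x, h, P):
    """Block FIR: each 'clock cycle' consumes a word of P input
    samples and produces P output samples, sharing one delay line."""
    N = len(h)
    state = np.zeros(N - 1)                 # last N-1 inputs of the previous block
    y = np.empty(len(x))
    for n in range(0, len(x), P):
        block = x[n:n + P]
        window = np.concatenate([state, block])
        for i in range(len(block)):         # P parallel filter sections
            y[n + i] = np.dot(h, window[i:i + N][::-1])
        state = window[len(block):]
    return y

# sanity check: the block structure matches a serial reference filter
x = np.random.randn(64)
h = np.ones(8) / 8
assert np.allclose(parallel_fir(x, h, P=11), np.convolve(x, h)[:len(x)])
```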
But only you know the requirements and constraints of your FPGA design; you will have to craft the FIR filter according to your requirements.
I was wondering if there is a way to detect the exact frequency of a BLE signal with an iPhone. I know it will be in the 2.4 GHz range, but I would like to know the difference, down to the 1 Hz range, between the transmitted frequency and the received frequency. The difference would be caused by the Doppler effect, meaning that either the central or the peripheral would have to be moving. Also, is there an exact frequency at which iPhones transmit BLE, or does it depend on the iPhone's antenna?
Bluetooth doesn't have one particular frequency it operates on. Via bluetooth.com:
Bluetooth technology operates in the unlicensed industrial, scientific and medical (ISM) band at 2.4 to 2.485 GHz, using a spread spectrum, frequency hopping, full-duplex signal at a nominal rate of 1600 hops/sec.
… adaptive hopping among 79 frequencies at 1 MHz intervals gives a high degree of interference immunity and also allows for more efficient transmission within the spectrum.
So there'll be a wide spread of frequencies in use for even a single connection to a single device. There's hardware on the market like the Ubertooth that can do packet captures and spectrum analysis.
To my knowledge, iOS doesn't offer API to find out this information. OS X does at some level, probably via SPI or an IOBluetooth API, because Apple's Hardware Tools (search for "Bluetooth") offer a way to monitor spectrum usage of Bluetooth Classic devices on OS X.
As to your desire to detect movement via the Doppler effect on the radios, my instincts say that it's going to be very, very difficult to do. I'm not sure what the exact mathematics behind it would look like, but you'll want to examine what the Doppler effect on a 2.4 GHz transmission would be at low-to-moderate rates of motion. (A higher relative speed, say over a few tens of miles an hour, will quickly make Bluetooth the wrong radio technology to use because of its low transmit power.)
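For a feel for the numbers, the non-relativistic Doppler shift is delta_f = f * v / c:

```python
C = 299_792_458.0        # speed of light, m/s
F_CARRIER = 2.44e9       # a mid-band 2.4 GHz channel

def doppler_shift_hz(speed_m_s, f_hz=F_CARRIER):
    """Doppler shift for v << c: delta_f = f * v / c."""
    return f_hz * speed_m_s / C

for v in (1.0, 10.0, 30.0):   # walking pace, ~36 km/h, ~108 km/h
    print(f"{v:5.1f} m/s -> {doppler_shift_hz(v):6.1f} Hz")
# ~8 Hz at walking pace, ~244 Hz even at 30 m/s
```

So the shift you're after is at most a few hundred hertz, while ordinary crystal tolerances of a few tens of ppm already put the carrier tens of kilohertz off nominal at 2.4 GHz, which is part of why this is so hard.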
I am interested in making a simple digital synthesizer to be implemented on an 8-bit MCU. I would like to make wavetables for accurate representations of the sound. Standard wavetables seem either to have a table for each of several frequencies, or to have a single sample read with fractional increments, with the missing data interpolated by the program to create different frequencies.
Would it be possible to create a single table for a given waveform, likely of a low frequency, and change the rate at which the program polls the table to generate different frequencies, which would then be processed? My MCU (a free one; no budget) is rather slow, so I have neither the space for lots of wavetables nor the headroom for large amounts of processing, and I am trying to skimp where I can. Has anyone seen this implementation?
You should consider using a single table with a phase accumulator and linear interpolation. See this question on DSP.SE for many useful suggestions.
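A minimal floating-point sketch of that idea (on an 8-bit MCU you would replace the float phase with a fixed-point accumulator, e.g. 8.8, using the high byte to index the table and the low byte to interpolate):

```python
import numpy as np

TABLE_SIZE = 256
TABLE = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)  # one cycle

def render(freq_hz, sample_rate, n_samples):
    """Read one fixed table at a variable rate: the phase increment
    sets the pitch, the fractional phase drives linear interpolation."""
    inc = freq_hz * TABLE_SIZE / sample_rate   # table steps per output sample
    phase = 0.0
    out = np.empty(n_samples)
    for n in range(n_samples):
        i = int(phase)
        frac = phase - i
        out[n] = TABLE[i] + frac * (TABLE[(i + 1) % TABLE_SIZE] - TABLE[i])
        phase = (phase + inc) % TABLE_SIZE
    return out
```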