Unit of digital numbers?

What is the unit of a digital number (https://en.wikipedia.org/wiki/Numerical_digit)? For example, what is the unit of the difference of two ADC values:
10 - 2 = 8 digits
10 - 2 = 8 units
10 - 2 = 8 symbols
10 - 2 = 8 ???
Or, for example, when I want to describe a slope:
Temperature example: 2 °C per second = 2 °C/sec
ADC example: 2 ??? per second = 2 ???/sec
What is correct?
Best regards
Zlatan

Numbers don't have units by default. A unit is simply a multiplied symbol that represents the "nature" of the quantity.

First of all, figure out the LSB (least significant bit) of the ADC.
Example: the ADC uses a Vref of 1.2 V and has 8 bits => LSB = 1.2 V / (2^8 - 1) ≈ 4.7 mV
A typical temperature sensor using a bipolar junction has a coefficient of about -2 mV/K. The example ADC with LSB = 4.7 mV will therefore respond with a change of 1 LSB per 2.35 K temperature decrease.
A change of 1 LSB/second means you have a change of -2.35 K per second.
If this isn't accurate enough for your application, you can use an ADC with more bits or stack several diodes acting as temperature sensors.
If you use something other than a bipolar junction, the sensitivity of the temperature sensor can be different. Just check the spec of the sensor and the ADC (and its reference, to calculate the LSB).
Parameters you need:
Reference voltage of the ADC
Number of bits of the ADC (to calculate LSB)
Temperature coefficient of the sensor
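Purely as an illustration of these steps (not part of the original answer), a minimal Python sketch assuming the example values above:
import math

V_REF = 1.2            # ADC reference voltage in volts (example value from above)
N_BITS = 8             # ADC resolution (example value from above)
SENSOR_COEFF = -2e-3   # sensor temperature coefficient in V/K (example value from above)

lsb = V_REF / (2**N_BITS - 1)        # volts per ADC count, ~4.7 mV
kelvin_per_lsb = lsb / SENSOR_COEFF  # ~ -2.35 K per count

def adc_slope_to_kelvin_per_second(lsb_per_second):
    # Convert a measured ADC slope (counts/s) into a temperature slope (K/s).
    return lsb_per_second * kelvin_per_lsb

print(adc_slope_to_kelvin_per_second(1.0))  # ~ -2.35 K/s for a 1 LSB/s change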

Related

How does an 8-bit processor interpret the 2 bytes of a 16-bit number as a single piece of information?

Assume the 16 bit no. to be 256.
So,
byte 1 = Some binary no.
byte 2 = Some binary no.
But byte 1 also represents an 8-bit number (which could be an independent decimal number), and so does byte 2.
So how does the processor know that bytes 1 and 2 represent a single number, 256, and not two separate numbers?
The processor would need to have a longer type for that. I guess you could implement a software equivalent, but for the processor these two bytes would still have individual values.
The processor could also have a special integer representation and machine instructions that handle these numbers. For example, most modern machines nowadays use two's-complement integers to represent negative numbers. In two's complement, the most significant bit is used to differentiate negative numbers, so a two's-complement 8-bit integer can have a range of -128 (1000 0000) to 127 (0111 1111).
You could easily have the most significant bit mean something else, so for example, when the MSB is 0 we have integers from 0 (0000 0000) to 127 (0111 1111); when the MSB is 1 we have integers from 256 (1000 0000) to 256 + 127 (1111 1111). Whether this is efficient or good architecture is another story.
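As an illustration only (a sketch of the usual convention, not part of the answer above), this is how software combines two bytes into one 16-bit value and splits it back; the bytes themselves carry no marker saying they belong together:
def combine(high_byte, low_byte):
    # By convention, the high byte contributes the upper 8 bits.
    return (high_byte << 8) | low_byte

def split(value16):
    # Inverse operation: recover the two bytes.
    return (value16 >> 8) & 0xFF, value16 & 0xFF

high, low = split(256)            # 256 -> high = 0x01, low = 0x00
assert combine(high, low) == 256  # the pair only means 256 because we treat it as a pair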

How to calculate 512 point FFT using 2048 point FFT hardware module

I have a 2048-point FFT IP. How can I use it to calculate a 512-point FFT?
There are different ways to accomplish this, but the simplest is to replicate the input data 4 times, to obtain a signal of 2048 samples. Note that the DFT (which is what the FFT computes) can be seen as assuming that the input signal is replicated infinitely. Thus, we are just providing a larger "view" of this infinitely long periodic signal.
The resulting FFT will have 512 non-zero values, with zeros in between. Each of the non-zero values will also be four times as large as the 512-point FFT would have produced, because there are four times as many input samples (that is, if the normalization is as commonly applied, with no normalization in the forward transform and 1/N normalization in the inverse transform).
Here is a proof of principle in MATLAB:
data = randn(1,512);
ft = fft(data); % 512-point FFT
data = repmat(data,1,4);
ft2 = fft(data); % 2048-point FFT
ft2 = ft2(1:4:end) / 4; % 512-point FFT
assert(all(ft2==ft))
(It is rather surprising that the values are exactly equal; no differences due to numerical precision appeared in this case!)
An alternative to the correct solution provided by Cris Luengo, which does not require any rescaling, is to pad the data with zeros to the required length of 2048 samples. You then get your result by reading every 2048/512 = 4th output (i.e. output[0], output[4], ... in a 0-based indexing system).
Since you mention making use of a hardware module, this could be implemented in hardware by connecting the first 512 input pins and grounding all other inputs, and reading every 4th output pin (ignoring all other output pins).
Note that this works because the FFT of the zero-padded signal is an interpolation in the frequency domain of the original signal's FFT. In this case you do not need the interpolated values, so you can just ignore them. Here's an example computing a 4-point FFT using a 16-point module (I've reduced the size of the FFT for brevity, but kept the same ratio of 4 between the two):
x = [1,2,3,4]
fft(x)
ans> 10.+0.j,
-2.+2.j,
-2.+0.j,
-2.-2.j
x = [1,2,3,4,0,0,0,0,0,0,0,0,0,0,0,0]
fft(x)
ans> 10.+0.j, 6.499-6.582j, -0.414-7.242j, -4.051-2.438j,
-2.+2.j, 1.808+1.804j, 2.414-1.242j, -0.257-2.3395j,
-2.+0.j, -0.257+2.339j, 2.414+1.2426j, 1.808-1.8042j,
-2.-2.j, -4.051+2.438j, -0.414+7.2426j, 6.499+6.5822j
As you can see in the second output, the first column (which corresponds to outputs 0, 4, 8 and 12) is identical to the desired output from the first, smaller-sized FFT.
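A sketch of the same zero-padding idea in NumPy (assuming numpy is available; the sizes 512 and 2048 mirror the question):
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(512)

ft = np.fft.fft(data)                                  # 512-point FFT
padded = np.concatenate([data, np.zeros(2048 - 512)])  # zero-pad to 2048 samples
ft2 = np.fft.fft(padded)[::4]                          # every 4th bin of the 2048-point FFT

assert np.allclose(ft, ft2)                            # no rescaling needed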

How to calculate total energy consumption using Cooja

I'm working with a wireless sensor network and need to evaluate its performance in my work. I want to measure the latency and the total energy consumption in order to find the remaining energy in each node. My problem is that I have some values of tx, rx, cpu, and cpu_idle, and I don't know how to use them to calculate what I need. I found some formulas for the calculation, but I don't understand exactly how to apply them in my case.
Energy consumed in communication:
Energy consumed by CPU:
What is the meaning of 32768, and why do we use this number? Is it a standard value?
The powertrace output is printed in timer ticks.
tx - the number of ticks the radio has been in transmit mode (ENERGEST_TYPE_TRANSMIT)
rx - the number of ticks the radio has been in receive mode (ENERGEST_TYPE_LISTEN)
cpu - the number of ticks the CPU has been in active mode (ENERGEST_TYPE_CPU)
cpu_idle - the number of ticks the CPU has been in idle mode (ENERGEST_TYPE_LPM)
The elements of the pair tx and rx are exclusive, as are cpu and idle - the system can never be in both modes at the same time. However, other combinations are possible: it can be in cpu and in tx at the same time, for example. The sum of cpu and idle is the total uptime of the system.
The duration of a timer tick is platform-dependent and defined as the RTIMER_ARCH_SECOND constant. 32768 ticks per second is a typical value of this constant - that's where the number in your equation comes from. For example:
ticks_in_tx_mode = energest_type_time(ENERGEST_TYPE_TRANSMIT);
seconds_in_tx_mode = ticks_in_tx_mode / RTIMER_ARCH_SECOND;
To compute the average current consumption (in milliamperes, mA), multiply each of tx, rx, cpu, and cpu_idle by the respective current consumption in that mode in mA (obtain the values from the datasheet of the node), sum them up, and divide by the total number of ticks (cpu + cpu_idle):
current = (tx * current_tx_mode + rx * current_rx_mode + \
cpu * current_cpu + cpu_idle * current_idle) / (cpu + cpu_idle)
To compute the charge (in millicoulombs, mC), multiply the average current consumption by the duration of the measurement (the node's uptime) in seconds:
charge = current * (cpu + cpu_idle) / RTIMER_ARCH_SECOND
To compute the power (in milliwatts, mW), multiply the average current consumption by the voltage of the system, for example 3 volts if powered from a pair of AA batteries:
power = current * voltage
Finally, to compute the energy consumption (in millijoules, mJ), multiply the power by the duration in seconds, or multiply the charge by the voltage of the system:
energy = charge * voltage
The first formula in your question computes the energy consumed by communication; the second one, the energy consumed by computation.
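A small Python sketch of the formulas above; the tick counts, current draws, and 3 V supply below are made-up placeholder values, so substitute the real numbers from your node's datasheet and powertrace output:
RTIMER_ARCH_SECOND = 32768               # ticks per second (typical value)

# powertrace output in ticks (placeholder values)
tx, rx, cpu, cpu_idle = 1200, 54000, 80000, 2500000

# current consumption per mode in mA (placeholder datasheet values)
I_TX, I_RX, I_CPU, I_IDLE = 17.4, 18.8, 1.8, 0.0545
VOLTAGE = 3.0                            # volts

uptime  = (cpu + cpu_idle) / RTIMER_ARCH_SECOND                                  # seconds
current = (tx*I_TX + rx*I_RX + cpu*I_CPU + cpu_idle*I_IDLE) / (cpu + cpu_idle)   # mA
charge  = current * uptime                                                       # mC
power   = current * VOLTAGE                                                      # mW
energy  = charge * VOLTAGE                                                       # mJ
print("avg current %.3f mA, energy %.1f mJ" % (current, energy))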
This site might be helpful to break down the numbers.
32768 Hz, or 32.768 kHz, is the clock frequency of the MSP430F247 microcontroller. The specifics are: Active mode: 321 uA @ 3 V / 1 MHz (1x10^6 Hz), and Low Power mode: 1 uA @ 3 V / 32768 Hz.

Hardware implementation for integer data processing

I am currently trying to implement a data path which processes image data expressed in grayscale as unsigned integers 0 - 255. (Just for your information, my goal is to implement a Discrete Wavelet Transform in an FPGA.)
During the data processing, intermediate values can be negative as well. As an example, one of the calculations is
result = 48 - floor((66+39)/2)
The floor function is used to guarantee integer data processing. For the above case, the result is -4, which is a number outside the range 0~255.
Having mentioned above case, I have a series of basic questions.
To deal with the negative intermediate numbers, do I need to represent all the data as 'equivalent unsigned number' in 2's complement for the hardware design? e.g. -4 d = 1111 1100 b.
If I represent the data in 2's complement for the signed numbers, will I need 9 bits as opposed to 8 bits? Or how many bits will I need to process the data properly? (With 8 bits, I cannot represent any number above 127 in 2's complement.)
How does negative number division work if I use bitwise shifting? If I want to divide the result, -4, by 4 by shifting it right by 2 bits, the result becomes 63 in decimal (0011 1111 in binary) instead of -1. How can I resolve this problem?
Any help would be appreciated!
If you can choose to use VHDL, then you can use the fixed point library to represent your numbers and choose your rounding mode, as well as allowing bit extensions etc.
In Verilog, well, I'd think twice. I'm not a Verilogger, but the arithmetic rules for mixing signed and unsigned datatypes seem fraught with foot-shooting opportunities.
Another option to consider might be MyHDL as that gives you a very powerful verification environment and allows you to spit out VHDL or Verilog at the back end as you choose.
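As a side note on the third question: dividing a two's-complement number by a power of two needs a sign-extending (arithmetic) right shift rather than a zero-filling (logical) one; in VHDL/Verilog this is what you get when the operand is declared as a signed type. A Python sketch, purely for illustration (the 9-bit width follows the second question):
WIDTH = 9  # wide enough for 0..255 and for small negative intermediates

def to_bits(value):
    # Two's-complement encoding into WIDTH bits, e.g. -4 -> 1 1111 1100
    return value & ((1 << WIDTH) - 1)

def from_bits(bits):
    # Decode a WIDTH-bit two's-complement pattern back to a Python int
    return bits - (1 << WIDTH) if bits >> (WIDTH - 1) else bits

def logical_shift_right(bits, n):
    return bits >> n                       # zero-fills the top bits: sign is lost

def arithmetic_shift_right(bits, n):
    shifted = bits >> n
    if bits >> (WIDTH - 1):                # replicate the sign bit into the top n bits
        shifted |= ((1 << n) - 1) << (WIDTH - n)
    return shifted

bits = to_bits(-4)
print(from_bits(logical_shift_right(bits, 2)))     # wrong: a large positive number
print(from_bits(arithmetic_shift_right(bits, 2)))  # -1, i.e. floor(-4 / 4)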

EEG Wavelet Analysis

I want to do a time-frequency analysis of an EEG signal. I found the GSL wavelet function for computing wavelet coefficients. How can I extract actual frequency bands (e.g. 8 - 12 Hz) from those coefficients? The GSL manual says:
For the forward transform, the elements of the original array are replaced by the discrete wavelet transform f_i -> w_{j,k} in a packed triangular storage layout, where J is the index of the level j = 0 ... J-1 and K is the index of the coefficient within each level, k = 0 ... (2^j)-1. The total number of levels is J = \log_2(n).
The output data has the following form, (s_{-1,0}, d_{0,0}, d_{1,0}, d_{1,1}, d_{2,0}, ..., d_{j,k}, ..., d_{J-1,2^{J-1}-1})
If I understand that right, an output array data[] contains at position 1 (e.g. data[1]) the amplitude of the frequency band 2^0 = 1 Hz, and
data[2] = 2^1 Hz
data[3] = 2^1 Hz
data[4] = 2^2 Hz
until
data[7] = 2^2 Hz
data[8] = 2^3 Hz
and so on ...
That means I have only the amplitudes for the frequencies 1 Hz, 2 Hz, 4 Hz, 8 Hz, 16 Hz, ... How can I get e.g. the amplitude of a frequency component oscillating at 5.3 Hz? How can I get the amplitude of a whole frequency range, e.g. the amplitude of 8 - 13 Hz? Any recommendations how to get a good time-frequency distribution?
I'm not sure how familiar you are with general signal processing, so I'll try to be clear, but not chew the food for you.
Wavelets are essentially filter banks. Each filter splits a given signal into two non-overlapping, independent high-frequency and low-frequency subbands, such that the signal can then be reconstructed by means of an inverse transform. When such filters are applied repeatedly, you get a tree of filters with the output of one fed into the next. The simplest and most intuitive way to build such a tree is as follows:
Decompose a signal into low frequency (approximation) and high frequency (detail) components
Take the low frequency component, and perform the same processing on that
Keep going until you've processed the required number of levels
The reason for this is that you can then downsample the resulting approximation signal. For example, if your filter splits a signal with sampling frequency (Fs) 48000 Hz -- which yields maximum frequency of 24000 Hz by Nyquist Theorem -- into 0 to 12000 Hz approximation component and 12001 to 24000 Hz detail component, you can then take every second sample of the approximation component without aliasing, essentially decimating the signal. This is widely used in signal and image compression.
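To make the split-and-downsample step concrete, here is a minimal sketch using a simple Haar averaging/differencing pair as the two filters (purely for illustration, not the GSL implementation):
import numpy as np

def haar_split(x):
    # One filter-bank level: low-pass (approximation) and high-pass (detail),
    # each decimated by 2. Assumes len(x) is even.
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def dwt(x, levels):
    # The classic DWT tree: keep splitting only the approximation branch.
    details = []
    for _ in range(levels):
        x, d = haar_split(x)
        details.append(d)
    return x, details

signal = np.random.randn(1024)
approx, details = dwt(signal, levels=3)
print(len(approx), [len(d) for d in details])   # 128 [512, 256, 128]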
According to this description, at level one you split your frequency content down the middle and create two separate signals. Then you take your lower frequency component and split it down the middle again. You now have three components in total: 0 to 6000 Hz, 6001 to 12000 Hz, and 12001 to 24000 Hz. Note that the two newer components are each half the bandwidth of the first detail component.
This correlates with the bandwidths you describe above (2^1 Hz, 2^2 Hz, 2^3 Hz and so on). However, using a broader definition of a filter bank, we can arrange the above tree structure as we like and it will still remain a filter bank. For example, we can feed both the approximation and the detail component into further filters, splitting each of them again into a high-frequency and a low-frequency signal.
If you look at it carefully, you see that both the high and the low frequency components are split down the middle in frequency, and as a result you get a uniform filter bank in which the bands are evenly spaced.
Notice that all bands are of the same size. By building a uniform filter bank with N levels, you end up with the responses of 2^(N-1) band-pass filters. You can fine-tune your filter bank to eventually give you the desired band (8-13 Hz).
In general, I would not advise you to do this with wavelets. You can go through some literature on designing good band-pass filters and simply build a filter that would only let through 8-13 Hz of your EEG signals. That's what I've done before and it worked quite well for me.
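For example, a sketch of such a band-pass approach in Python (assuming SciPy is available; the filter order, the zero-phase filtering, and the sampling rate fs are arbitrary example choices):
import numpy as np
from scipy.signal import butter, filtfilt

def alpha_band(eeg, fs):
    # Zero-phase Butterworth band-pass for the 8-13 Hz range.
    b, a = butter(4, [8.0, 13.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg)

fs = 256.0                                        # example EEG sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy signal
filtered = alpha_band(eeg, fs)                    # keeps mostly the 10 Hz component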

Resources