I'm working with a wireless sensor network and need to evaluate its performance. I want to measure the latency and the total energy consumption in order to find the remaining energy in each node. My problem is that I have values for tx, rx, cpu and cpu_idle, but I don't know how to use them to calculate what I need. I found some formulas for the calculation, but I don't understand exactly how to apply them in my case.
Energy consumed in communication:
Energy consumed by CPU:
What is the meaning of 32768, and why do we use this number? Is it a standard value?
The powertrace output is printed in timer ticks.
tx - the number of ticks the radio has been in transmit mode (ENERGEST_TYPE_TRANSMIT)
rx - the number of ticks the radio has been in receive mode (ENERGEST_TYPE_LISTEN)
cpu - the number of ticks the CPU has been in active mode (ENERGEST_TYPE_CPU)
cpu_idle - the number of ticks the CPU has been in idle mode (ENERGEST_TYPE_LPM)
The elements of the pair tx and rx are exclusive, as are cpu and cpu_idle - the system can never be in both modes at the same time. However, other combinations are possible: it can be in cpu and in tx at the same time, for example. The sum of cpu and cpu_idle is the total uptime of the system.
The duration of a timer tick is platform-dependent and defined by the RTIMER_ARCH_SECOND constant. 32768 ticks per second is a typical value of this constant - that's where the number in your equation comes from. For example:
ticks_in_tx_mode = energest_type_time(ENERGEST_TYPE_TRANSMIT);
seconds_in_tx_mode = ticks_in_tx_mode / RTIMER_ARCH_SECOND;
To compute the average current consumption (in milliamperes, mA), multiply each of tx, rx, cpu and cpu_idle with the respective current consumption in that mode in mA (obtain the values from the datasheet of the node), sum them up, and divide by the total number of ticks, i.e. cpu + cpu_idle:
current = (tx * current_tx_mode + rx * current_rx_mode + \
           cpu * current_cpu + cpu_idle * current_idle) / (cpu + cpu_idle)
To compute the charge (in millicoulombs, mC), multiply the average current consumption with the duration of the measurement (the node's uptime) in seconds:
charge = current * (cpu + cpu_idle) / RTIMER_ARCH_SECOND
To compute the power (in milliwatts, mW), multiply the average current consumption with the voltage of the system, for example 3 volts if powered from a pair of AA batteries:
power = current * voltage
Finally, to compute the energy consumption (in millijoules, mJ), multiply the power with the duration in seconds or multiply the charge with the voltage of the system:
energy = charge * voltage
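Putting the steps together, here is a minimal Contiki-style sketch. The current-draw values are placeholders for a Tmote Sky-class node and must be replaced with the numbers from your own node's datasheet; depending on your Contiki version you may also need to update/flush the energest counters before reading them, and on an MSP430 you may prefer fixed-point arithmetic over the floating point used here for clarity.

#include <stdio.h>
#include "contiki.h"
#include "sys/energest.h"

/* Placeholder current draws in mA -- take the real values from your
   node's datasheet (radio and MCU). */
#define CURRENT_TX   19.5
#define CURRENT_RX   21.8
#define CURRENT_CPU   1.8
#define CURRENT_LPM   0.0545
#define VOLTAGE       3.0    /* volts, e.g. a pair of AA batteries */

static void
print_energy(void)
{
  unsigned long tx  = energest_type_time(ENERGEST_TYPE_TRANSMIT);
  unsigned long rx  = energest_type_time(ENERGEST_TYPE_LISTEN);
  unsigned long cpu = energest_type_time(ENERGEST_TYPE_CPU);
  unsigned long lpm = energest_type_time(ENERGEST_TYPE_LPM);  /* cpu_idle */

  double uptime_s = (double)(cpu + lpm) / RTIMER_ARCH_SECOND;

  /* average current in mA: tick-weighted mean of the per-mode currents */
  double current = (tx * CURRENT_TX + rx * CURRENT_RX +
                    cpu * CURRENT_CPU + lpm * CURRENT_LPM) / (double)(cpu + lpm);

  double charge = current * uptime_s;   /* mC */
  double power  = current * VOLTAGE;    /* mW */
  double energy = charge * VOLTAGE;     /* mJ */

  printf("I = %f mA, Q = %f mC, P = %f mW, E = %f mJ\n",
         current, charge, power, energy);
}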
The first formula in your question computes the energy consumed by communication; the second one the energy consumed by computation.
This site might be helpful to break down the numbers.
32768 Hz, i.e. 32.768 kHz, is a clock frequency of the MSP430F247 microcontroller. Specifically: Active mode: 321 uA @ 3 V / 1 MHz (1x10^6 Hz), and Low-power mode: 1 uA @ 3 V / 32768 Hz.
I am designing a simplistic memory controller and PHY on an Artix-7 FPGA, but am having problems reading the datasheet. The timings in the memory part's datasheet (and in the JEDEC JESD79-3F document) are expressed in CK/tCK/nCK units, which are in my opinion ambiguous if the memory is not run at its nominal frequency (e.g. a clock lower than 666 MHz for a 1333 MT/s module).
If I run a 1333 MT/s module at a frequency of 300 MHz -- still allowed with DLL on, as per the datasheet speed bins, -- is the CK/tCK/nCK unit equal to 1.5 ns (from the module's native 666 MHz), or 3.33 ns (from the frequency it is actually run at)? On one hand it makes sense that certain delays are constant, but then again some delays are expressed relative to the clock edges on the CK/CK# pins (like CL or CWL).
That is to say, some timing parameters in the datasheet only change when changing speed bins. E.g. tRP is 13.5 ns for a 1333 part, which is also backwards compatible with the tRP of 13.125 ns of a 1066 part -- no matter the chosen operating frequency of the physical clock pins of the device.
But then, running a DDR3 module at 300 MHz only allows usage of CL = CWL = 5, which is again expressed in "CK" units. To my understanding, this means 5 periods of the input clock, i.e. 5 * 3.33 ns.
I suppose all I am asking is whether the "CK" (or nCK or tCK) unit is tied to the chosen speed bin (tCK = 1.5 ns when choosing DDR3-1333) or the actual frequency of the clock signal provided to the memory module by the controlling hardware (e.g. 3.3 ns for the 600 MT/s mode)?
This is the response from u/Allan-H on Reddit, which helped me reach a conclusion:
When you set the CL in the mode register, that's the number of clocks that the chip will wait before putting the data on the pins. That clock is the clock that your controller is providing to the chip (it's SDRAM, after all).
It's your responsibility to ensure that the number of clocks you program (e.g. CL=5) when multiplied by the clock period (e.g. 1.875ns) is at least as long as the access time of the RAM. Note that you program a number of clocks, but the important parameter is actually time. The RAM must have the data ready before it can send it to the output buffers.
Now let's run the RAM at a lower speed, say 312.5MHz (3.2ns period). We now have the option of programming CL to be as low as 3, since 3 x 3.2ns > 5 x 1.875ns.
BTW, since we are dealing with fractions of a ns, we also need to take the clock jitter into account.
Counterintuitively, the DRAM chip doesn't know how fast it is; it must be programmed with that information by the DRAM controller. That information might be hard coded into the controller (e.g. for an FPGA implementation) or by software which would typically read the SPD EEPROM on the DIMM to work out the speed grade then write the appropriate values into the DRAM controller.
This also explains timing values defined as e.g. "Greater of 3CK or 5ns". In this case, the memory chip cannot respond faster than 5 ns, but the internal logic also needs 3 positive clock edges on the input CK pins to complete the action defined by this example parameter.
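To make the time-versus-clocks relationship concrete, here is a small sketch; the timing numbers are purely illustrative (not taken from any particular speed-bin table), and in practice you must also pick a CL value the device actually supports at that clock rate.

#include <math.h>
#include <stdio.h>

/* Convert a time requirement into a number of CK cycles of the clock
   actually driven to the device, rounding up. */
static unsigned cycles_for_time(double t_ns, double tck_ns)
{
  return (unsigned)ceil(t_ns / tck_ns);
}

/* Some parameters are specified as "greater of nCK or ns". */
static unsigned cycles_for_time_or_clocks(double t_ns, unsigned min_ck, double tck_ns)
{
  unsigned n = cycles_for_time(t_ns, tck_ns);
  return n > min_ck ? n : min_ck;
}

int main(void)
{
  double taa_ns   = 13.5;  /* illustrative access-time requirement */
  double tck_1333 = 1.5;   /* 666 MHz clock (DDR3-1333 bin) */
  double tck_slow = 3.2;   /* 312.5 MHz clock actually driven */

  printf("CL at 666 MHz:   %u clocks\n", cycles_for_time(taa_ns, tck_1333)); /* 9 */
  printf("CL at 312.5 MHz: %u clocks\n", cycles_for_time(taa_ns, tck_slow)); /* 5 */
  printf("greater of 3CK or 5 ns at 312.5 MHz: %u clocks\n",
         cycles_for_time_or_clocks(5.0, 3, tck_slow));                       /* 3 */
  return 0;
}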
What is the unit of digital numbers (https://en.wikipedia.org/wiki/Numerical_digit)? For example, what is the unit of the difference of two ADC values:
10 - 2 = 8 digits
10 - 2 = 8 units
10 - 2 = 8 symbols
10 - 2 = 8 ???
Or, for example, when I want to describe a slope:
Temperature example: 2 °C per second = 2 °C/sec
ADC example: 2 ??? per second = 2 ???/sec
What is correct?
Best regards
Zlatan
Numbers don't have units by default. Units are simply a multiplied symbol that represents the "nature" of the quantity.
First of all figure out the LSB (least significant bit) of the ADC.
Example: The ADC uses a vref of 1.2V and has 8bit => LSB=1.2V/(2^8-1)=4.7mV
A typical temperature sensor using a bipolar junction has about -2mV/K. The example ADC with LSB=4.7mV will respond with a change of 1LSB per 2.35K temperature decrease.
A change of 1 LSB per second means a change of about -2.35 K per second.
If this isn't accurate enough for your application you can use an ADC with more bits or stack several diodes acting as temperature sensors.
If you use something other than a bipolar junction, the sensitivity of the temperature sensor can be different. Just check the specs of the sensor and of the ADC (and its reference, to calculate the LSB).
Parameters you need:
Reference voltage of the ADC
Number of bits of the ADC (to calculate LSB)
Temperature coefficient of the sensor
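To tie the parameters together, here is a small sketch using the example numbers from above (1.2 V reference, 8 bits, about -2 mV/K); it shows that the raw counts stay dimensionless until you multiply them by the LSB voltage and divide by the sensor coefficient.

#include <stdio.h>

#define VREF_V          1.2       /* ADC reference voltage */
#define ADC_BITS        8         /* ADC resolution */
#define SENSOR_V_PER_K  (-0.002)  /* bipolar junction: about -2 mV/K */

int main(void)
{
  double lsb_v = VREF_V / ((1 << ADC_BITS) - 1);       /* ~4.7 mV per count */

  /* two raw readings taken one second apart (dimensionless counts) */
  int code_old = 10, code_new = 2;
  double delta_counts = code_new - code_old;           /* -8 counts */

  double delta_volts  = delta_counts * lsb_v;          /* counts -> volts  */
  double delta_kelvin = delta_volts / SENSOR_V_PER_K;  /* volts  -> kelvin */

  printf("slope: %.0f counts/s = %.4f V/s = %.1f K/s\n",
         delta_counts, delta_volts, delta_kelvin);
  return 0;
}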
I intend to use the BeagleBone to sample a shaped signal on the order of 1 microsecond. I need to fit the signal afterwards, and therefore I would like a sampling rate of, say, 10 MHz. That seems feasible with the PRU and libpruio. The point is, looking at the ADC specifications, there seems to be a limit at 200 kHz. Is my reasoning correct?
thanks
You'll need additional hardware for a sampling rate of 10 MHz! libpruio isn't designed to work at that speed, and neither is the BBB hardware.
The ADC subsystem in the AM335x CPU is clocked at 24 MHz and needs 15 cycles per sample (14 in continuous mode). This leads to a maximum sample rate of 24 MHz / 15 ≈ 1.6 (24 MHz / 14 ≈ 1.71) MSamples/s. See SRM, chapter 12 for details.
The problem is getting the samples into the host memory. I couldn't get this working faster than ~250 kSamples/s (by CPU access - I didn't try DMA).
As long as you don't need more values than the FIFO can hold, you can sample a single line at maximum 1.7 MHz.
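A quick back-of-the-envelope sketch (the per-FIFO depth of 64 samples is my assumption; check the AM335x manuals for your silicon) shows that even a FIFO-limited burst at full ADC speed only spans a few tens of microseconds:

#include <stdio.h>

int main(void)
{
  const double adc_clock_hz = 24e6;               /* AM335x ADC clock */
  const double max_rate = adc_clock_hz / 14.0;    /* continuous mode, ~1.71 MS/s */
  const int fifo_depth = 64;                      /* assumed per-FIFO depth */

  printf("max sample rate: %.2f MSamples/s\n", max_rate / 1e6);
  printf("one FIFO filled at full speed spans %.1f us\n",
         fifo_depth / max_rate * 1e6);            /* ~37 us */
  return 0;
}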
BR
I'm reading this CPU specification: http://ark.intel.com/products/67356/Intel-Core-i7-3612QM-Processor-6M-Cache-up-to-3_10-GHz-rPGA
It says the CPU has 2 channels, so I think it has 2 memory controllers inside. Then the max memory bandwidth should be 1.6 GHz * 64 bits * 2 * 2 = 51.2 GB/s if the supported DDR3 RAM is 1600 MHz. But the specification says its max memory bandwidth is 25.6 GB/s.
I multiplied by two factors of 2 here: one for the double data rate, another for the dual memory channels.
Is this a problem with the specification, or is there something I'm misunderstanding?
Double data rate memory specs usually already take into account that the effective transfer rate is doubled: "1600 MHz memory" really runs at 800 MHz, so you can leave out one factor of 2 from your calculation.
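A quick sketch of the corrected arithmetic (the 1600 MT/s transfer rate already contains the double-data-rate factor, so only the channel count remains as a multiplier):

#include <stdio.h>

int main(void)
{
  const double transfers_per_s = 1600e6;  /* DDR3-1600: 800 MHz clock, 2 transfers per clock */
  const double bytes_per_xfer  = 64 / 8;  /* 64-bit channel = 8 bytes */
  const int channels = 2;

  double bw = transfers_per_s * bytes_per_xfer * channels;
  printf("peak bandwidth: %.1f GB/s\n", bw / 1e9);   /* 25.6 GB/s */
  return 0;
}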
I want to do a time-frequency analysis of an EEG signal. I found the GSL wavelet functions for computing wavelet coefficients. How can I extract actual frequency bands (e.g. 8 - 12 Hz) from those coefficients? The GSL manual says:
For the forward transform, the elements of the original array are replaced by the discrete wavelet transform f_i -> w_{j,k} in a packed triangular storage layout, where j is the index of the level, j = 0 ... J-1, and k is the index of the coefficient within each level, k = 0 ... (2^j)-1. The total number of levels is J = \log_2(n).
The output data has the following form, (s_{-1,0}, d_{0,0}, d_{1,0}, d_{1,1}, d_{2,0}, ..., d_{j,k}, ..., d_{J-1,2^{J-1}-1})
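For reference, here is roughly how I call it (a minimal sketch with a Daubechies-4 wavelet and n a power of two; the data[] array stands in for my EEG samples):

#include <stdio.h>
#include <gsl/gsl_wavelet.h>

int main(void)
{
  const size_t n = 256;                 /* must be a power of two */
  double data[256] = { 0.0 };           /* fill with the EEG samples */

  gsl_wavelet *w = gsl_wavelet_alloc(gsl_wavelet_daubechies, 4);
  gsl_wavelet_workspace *work = gsl_wavelet_workspace_alloc(n);

  /* in-place forward DWT: data[] now holds the packed coefficients
     (s_{-1,0}, d_{0,0}, d_{1,0}, d_{1,1}, ..., d_{J-1,2^{J-1}-1}) */
  gsl_wavelet_transform_forward(w, data, 1, n, work);

  /* detail coefficients of level j occupy indices 2^j .. 2^(j+1)-1 */
  for (int j = 0; (size_t)(1 << j) < n; j++)
    printf("level %d: data[%d] .. data[%d]\n", j, 1 << j, (1 << (j + 1)) - 1);

  gsl_wavelet_workspace_free(work);
  gsl_wavelet_free(w);
  return 0;
}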
If I understand that correctly, the output array data[] contains at position 1 (i.e. data[1]) the amplitude of the frequency band 2^0 = 1 Hz, and
data[2] = 2^1 Hz
data[3] = 2^1 Hz
data[4] = 2^2 Hz
until
data[7] = 2^2 Hz
data[8] = 2^3 Hz
and so on ...
That means I have only the amplitudes for the frequencies 1 Hz, 2 Hz, 4 Hz, 8 Hz, 16 Hz, ... How can I get, e.g., the amplitude of a frequency component oscillating at 5.3 Hz? How can I get the amplitude of a whole frequency range, e.g. the amplitude of 8 - 13 Hz? Any recommendations on how to get a good time-frequency distribution?
I'm not sure how familiar you are with general signal processing, so I'll try to be clear, but not chew the food for you.
Wavelets are essentially filter banks. Each filter splits a given signal into two non-overlapping, independent high-frequency and low-frequency subbands such that the signal can later be reconstructed by means of an inverse transform. When such filters are applied recursively, you get a tree of filters, with the output of one fed into the next. The simplest, and the most intuitive, way to build such a tree is as follows:
Decompose a signal into low frequency (approximation) and high frequency (detail) components
Take the low frequency component, and perform the same processing on that
Keep going until you've processed the required number of levels
The reason for this is that you can then downsample the resulting approximation signal. For example, if your filter splits a signal with sampling frequency (Fs) 48000 Hz -- which yields a maximum frequency of 24000 Hz by the Nyquist theorem -- into a 0 to 12000 Hz approximation component and a 12001 to 24000 Hz detail component, you can then take every second sample of the approximation component without aliasing, essentially decimating the signal. This is widely used in signal and image compression.
According to this description, at level one you split your frequency content down the middle and create two separate signals. Then you take your lower frequency component and split it down the middle again. You now get three components in total: 0 to 6000 Hz, 6001 to 12000 Hz, and 12001 to 24000 Hz. You see that the two newer components are both half the bandwidth of the first detail component, so the bands get narrower towards the low end of the spectrum (an octave-band, or dyadic, split).
This correlates with the bandwidths you describe above (2^1 Hz, 2^2 Hz, 2^3 Hz and so on). However, using a broader definition of a filter bank, we can arrange the above tree structure as we like and it will still remain a filter bank. For example, we can feed both the approximation and the detail component into further filters, splitting each of them again into a high-frequency and a low-frequency signal (a wavelet packet decomposition).
If you look at it carefully, you see that both the high and the low frequency components are split down the middle in frequency, and as a result you get a uniform filter bank.
Notice that all bands are of the same size. By building a uniform filter bank with N levels, you end up with the responses of 2^(N-1) band-pass filters. You can fine-tune your filter bank to eventually give you the desired band (8-13 Hz).
In general, I would not advise you to do this with wavelets. You can go through some literature on designing good band-pass filters and simply build a filter that would only let through 8-13 Hz of your EEG signals. That's what I've done before and it worked quite well for me.
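For instance, a minimal sketch of that band-pass approach: a single second-order IIR section with coefficients from the widely used RBJ audio-EQ-cookbook formulas. The 250 Hz sampling rate and the 8-13 Hz band edges are example values; for sharper band edges you would cascade several sections or use a proper filter-design tool.

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
  const double fs = 250.0;                /* example EEG sampling rate, Hz */
  const double f_lo = 8.0, f_hi = 13.0;   /* desired band, Hz */

  const double f0 = sqrt(f_lo * f_hi);    /* geometric centre, ~10.2 Hz */
  const double q  = f0 / (f_hi - f_lo);   /* ~2.0 */

  const double w0 = 2.0 * M_PI * f0 / fs;
  const double alpha = sin(w0) / (2.0 * q);

  /* band-pass biquad, constant 0 dB peak gain */
  const double b0 = alpha, b1 = 0.0, b2 = -alpha;
  const double a0 = 1.0 + alpha, a1 = -2.0 * cos(w0), a2 = 1.0 - alpha;

  double xm1 = 0, xm2 = 0, ym1 = 0, ym2 = 0;  /* filter state */
  double x, y;

  /* filter samples read from stdin, one value per line */
  while (scanf("%lf", &x) == 1) {
    y = (b0 * x + b1 * xm1 + b2 * xm2 - a1 * ym1 - a2 * ym2) / a0;
    xm2 = xm1; xm1 = x;
    ym2 = ym1; ym1 = y;
    printf("%f\n", y);
  }
  return 0;
}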