Contiki-NG RE-Mote board: The radio is always listening

I wanted to measure the energy consumption of my app using Energest. However, I found out that the radio is almost always listening, as the RX energest value is pretty close to the total time interval I measure over. Should I use a low-power mode on the board, so that I force the radio to deactivate and reduce energy consumption? Can I use PM0 if I really need to use the RAM? I am asking because a comment in a header file states that only PM1 gives access to the 32K RAM, and this bypasses the macro I defined: #define LPM_CONF_MAX_PM 0. Thank you.

By default, Contiki-NG uses the CSMA MAC protocol, which keeps the radio always on. For radio duty cycling, either use TSCH, or turn the radio off from the application code by calling NETSTACK_RADIO.off().
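To see how the always-on radio dominates the Energest numbers, here is a sketch of how energest-style tick counts translate into an average current. The per-state current figures are placeholders, not datasheet values; take the real ones from your board's datasheet.

```c
#include <assert.h>
#include <math.h>

/* Hypothetical per-state current draws in microamps. These are placeholder
 * figures: read the real numbers off the radio/MCU datasheet. */
#define I_CPU_UA  13000.0  /* CPU active (assumed) */
#define I_LPM_UA    600.0  /* CPU in low-power mode (assumed) */
#define I_RX_UA   20000.0  /* radio listening (assumed) */
#define I_TX_UA   24000.0  /* radio transmitting (assumed) */

/* Average current over an interval given energest-style tick counts.
 * cpu + lpm covers the whole interval; rx/tx overlap it and are added
 * on top, mirroring how Energest accounts radio time separately. */
double avg_current_ua(unsigned long cpu, unsigned long lpm,
                      unsigned long rx, unsigned long tx)
{
    unsigned long total = cpu + lpm;
    if (total == 0)
        return 0.0;
    return (cpu * I_CPU_UA + lpm * I_LPM_UA +
            rx * I_RX_UA + tx * I_TX_UA) / (double)total;
}
/* With the radio listening the whole interval:
 *   avg_current_ua(100, 900, 1000, 0) == 21840.0 uA (about 21.8 mA)
 * With the radio off:
 *   avg_current_ua(100, 900, 0, 0)    ==  1840.0 uA */
```

With the assumed figures, a node whose RX time equals the whole interval averages more than ten times the current of one with the radio off, which is why the RX energest value tracking the full measurement interval matters so much.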

Related

Distributed Hash Tables. Do I need shortcuts (even in a large system)?

I am learning about DHTs and I came upon shortcuts. It seems they are used to make routing faster and to skip going back along the chain for better performance. What I don't understand is this: suppose we have a circular DHT made of 100 servers/nodes/HTs. A key lookup arrives at server/node/HT 10 and must be sent to server/node/HT 76. When the destination is reached and the value is found, couldn't I just provide the IP of the requester (server 10) so that the destination sends the value directly back to 10, which seems to make shortcuts useless?
Thank you in advance.
Edit: Useless for returning the value. Not getting to it.
You're assuming a circular network layout and forwarding-based routing, both of which only apply to a subset of DHTs.
Anyway, the forward path would still go through all the nodes, any of which might be down or have transient network problems. As the number of hops goes up so does the cumulative error probability. Additionally it increases latency, which matters on a global scale because at least simple DHT routing algorithms don't account for physical proximity.
For the return it can also matter if reachability is asymmetrical, e.g. due to firewalls.

Difference in "Initializing Each Device"

Hi everyone. I am learning LDD3 and have a question about the statement below:
"Note that struct net_device is always put together at runtime; it cannot be set up at compile time in the same manner as a file_operations or block_device_operations structure"
So what is the root cause of this difference, i.e. the different behavior between a network driver and a char driver? Could anyone explain? Thanks!
The root cause of the difference is the nature of the data stored in these structures.
file_operations is essentially a global set of callbacks for a particular device; the callbacks have well-defined purposes (such as .open, .mmap, etc.) and are known at compile time.
It contains no volatile data fields that could change while the module is in use. So it is merely a set of callbacks known at compile time.
net_device, in turn, is a structure intended to keep data amenable to lots of runtime changes. Suffice it to say, such fields as state, mtu and many others are self-explanatory. They are not known at compile time and need to be initialised/changed throughout runtime.
In other words, the structure obeys a strict device probe/configure/start order, and the corresponding fields are initialised at the corresponding steps. Say, nobody knows the number of Rx or Tx queues at compile time. The driver calculates these values on initialisation based on restrictions and demands from the OS. For example, the same network adapter may find it feasible to allocate 1 Rx queue and 1 Tx queue on a single-core CPU system, whilst it will likely decide to set up 4 queues on a quad-core system. There is no point predicting this at compile time and initialising the struct to values which will be changed anyway.
the different behavior btw network driver and char driver??
So, to put it together, it is the difference between the purposes of the two kinds of structures: keeping static information about callbacks in one case, and maintaining volatile device state (name, tunable properties, counters, etc.) in the other. I hope you find it clear.
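A toy user-space illustration of the difference (not kernel code; all names here are made up):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* A file_operations-like table is just function pointers with well-known
 * roles, so it can be a static const object built at compile time. */
struct toy_fops {
    int (*open)(void);
    int (*release)(void);
};

static int my_open(void)    { return 0; }
static int my_release(void) { return 0; }

static const struct toy_fops my_fops = {
    .open    = my_open,     /* everything known at compile time */
    .release = my_release,
};

/* A net_device-like structure mixes identity and volatile state that only
 * exists after the hardware has been probed, so it is built at runtime. */
struct toy_netdev {
    char name[16];
    unsigned mtu;
    unsigned num_rx_queues;  /* depends on the CPU count, as in the answer */
};

struct toy_netdev *toy_netdev_alloc(unsigned ncpus)
{
    struct toy_netdev *dev = calloc(1, sizeof(*dev));
    if (dev == NULL)
        return NULL;
    snprintf(dev->name, sizeof(dev->name), "toy0");
    dev->mtu = 1500;
    dev->num_rx_queues = (ncpus >= 4) ? 4 : 1;  /* decided at probe time */
    return dev;
}
```

The const table can live in read-only memory for the life of the module, while every toy_netdev instance has to be allocated and filled in once the runtime facts (here, the CPU count) are known.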

Data recovery of QFSK signal in GNURadio

I'm pretty new to using GNU Radio and I'm having trouble recovering the data from a signal that I've saved into a file. The signal is a 56 kHz carrier, frequency-shift keyed ±200 Hz at 600 baud.
So far, I've been able to demodulate the signal that looks similar to the signal I get from the source:
I'm trying to get this into a repeating string of 1s and 0s (the whole telegram is 38 bytes long and continuously repeats). I've tried to use a clock recovery block in order to get one sample per symbol, but I'm not having much luck. Using the M&M Clock Recovery block, the whole telegram sometimes comes out correct, but it is not consistent. I've tried to adjust the omega and mu values, but it doesn't seem to help that much. I've also tried using the Polyphase Clock Sync, but I keep getting a runtime error of 'please specify a filter'. Is this asking me to add a tap? What tap would I use?
So I guess my overall question would be: what's the best way to get the telegram out of the demodulated FSK signal?
Again, pretty new at this, so please let me know if I've missed something crucial. GNU Radio flow graph below:
You're recovering the bit timing, but you're not recovering the byte boundaries – that needs to happen "one level higher", e.g. by a well-known packet format with a defined preamble that you can look for.
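A common way to recover byte boundaries is to scan the recovered bit stream for a known sync pattern. The 8-bit pattern below is made up; the real one has to come from the telegram's specification. A minimal sketch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define SYNC_WORD 0xA5u  /* hypothetical 8-bit sync pattern */

/* Slide a one-byte window over the bit stream and return the bit offset
 * where the sync word starts, or -1 if it never appears. */
int find_sync(const uint8_t *bits, size_t nbits)
{
    uint8_t window = 0;
    for (size_t i = 0; i < nbits; i++) {
        window = (uint8_t)((window << 1) | (bits[i] & 1u));
        if (i >= 7 && window == SYNC_WORD)
            return (int)(i - 7);  /* offset of the first sync bit */
    }
    return -1;
}
```

Once the sync offset is known, the repeating 38-byte telegram can be sliced out at fixed offsets from it, regardless of where the file capture happened to start.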

Contiki OS CC2538: Reducing current / power consumption

I am trying to drive down the current consumption of the contiki os running on the CC2538 development kit.
I would like to operate the device from a CR2032 with a run life of 2 years. To achieve this I would need an average current less than 100uA.
However, when I run the following examples at 3 V, I measure:
contiki/examples/hello-world = 0.4mA - 2mA
contiki/examples/er-rest-example/er-example-client = 27mA
contiki/examples/er-rest-example/er-example-server = 27mA
thingsquare websocket example = 4mA
I have also designed my own target platform based on the cc2538 and get similar results.
I have read the guide at https://github.com/contiki-os/contiki/blob/648d3576a081b84edd33da05a3a973e209835723/platform/cc2538dk/README.md
and have ensured that in the contiki-conf.h file:
- LPM_CONF_ENABLE 1
- LPM_CONF_MAX_PM 2
Can anyone give me some pointers as to how I can get the current down. It would be most appreciated.
Regards,
Shane
How did you measure the current?
You have to be aware that using a basic ampere meter to measure the current consumption of Contiki OS won't give you meaningful results. The system turns the radio on and off at a relatively high rate (8 Hz by default) in order to perform CCA. This can be too fast for an ampere meter to catch.
To get an idea of the current consumption when the device is in deep sleep (and then calculate the averaged current consumption), I'd rather put the device in the PM state before the program reaches the infinite while loop. I used the following code to do that:
lpm_enter();
REG(SYS_CTRL_PMCTL) = SYS_CTRL_PMCTL_PM2;
do { asm("wfi"::); } while(0);
leds_on(LEDS_RED); // should not reach here
while(1){
...
On the CC2538, the CCA check consumes about 10-15 mA and lasts approximately 2 ms. When the radio transmits a packet, it consumes 25 mA. Have a look at this post: Contiki UDP packet transmission duration with CC2538.
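Those figures let you estimate the averaged current. A back-of-envelope sketch using the CCA numbers above (the sleep current is an assumption; read the real value off the CC2538 datasheet):

```c
#include <assert.h>
#include <math.h>

/* Average current for a node that wakes cca_hz times per second to do a
 * CCA check lasting cca_ms milliseconds at cca_ma, and draws sleep_ma
 * the rest of the time. */
double avg_current_ma(double cca_hz, double cca_ms, double cca_ma,
                      double sleep_ma)
{
    double duty = cca_hz * (cca_ms / 1000.0);  /* fraction of time in CCA */
    return duty * cca_ma + (1.0 - duty) * sleep_ma;
}
/* 8 Hz * 2 ms at 12.5 mA alone averages 0.2 mA (200 uA), before sleep
 * current or transmissions are counted. */
```

So with the default 8 Hz channel check the CCA alone averages around 200 µA; lowering the channel-check rate is the main lever for getting under a budget like 100 µA.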
Furthermore, to save a little more current, turn off the serial com:
#define CC2538_CONF_QUIET 1
Are you using the SmartRF board? If you want to make proper current measurements with this board, you have to remove all the jumpers: P486, P487, P411 and P408. Keep only the jumpers for BTN_SEL and the RESET signals.

First Atmel Project

looking for a little help.
I'm familiar with PIC Microcontrollers but have never used Atmel.
I'm required to use an ATMEGA128 for a project at work so I've been playing around in Atmel Studio 6 the last few days.
I'm having an issue however, I can't even get an LED to blink.
I'm using the STK500 and STK501 Dev boards and the JTAGICE_MKII USB debugger/programmer.
The ATMEGA128 chip is a TQFP package that's in the socket on the STK501 board.
I'm able to program/read the chip no problems, and my code builds without error (except for when I try to use the delay functions used in the delay.h library - but that's another issue).
For now I'm just concerned with getting the IO working. I have a jumper from 2 bits of PORTD connecting to 2 of the LEDs on the STK500 board.
All I'm doing in my code is setting the port direction with the DDRx registers and then setting all the PORTD pins to 0. The LEDs remain turned on.
When I'm in debugging mode and I have the watch window open, I can break the code and the watch windows shows me that the PORTD bits are indeed all 0's, but the LEDs remain on.
So far, I hate Atmel. :)
Any ideas?
Thanks
Have you tried setting them to logic 1? It is common for LED circuits to connect the LED to Vcc via a current-limiting resistor, which means the output port has to be 0 to turn on the LED.
If you set it to 1 and the LED goes off, then that'll tell you it's an "active low" signal and you can reverse your logic accordingly.
Have you read the STK500's documentation? It is likely that the LEDs are driven active low.
There are two steps to follow. First you set the "direction" of the pins, because they can be used as input or output. To make the D register pins output pins:
DDRD = 0xFF;
This will set all pins on the D register as output pins. Do this first. Then code like:
PORTD |= 0x01;
will set the D0 pin high. And code like
PORTD ^= 0x01;
will toggle the pin.
See this tutorial for a little more info, or check in with this community. The Atmel community is vibrant and helpful.
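Putting the answers together, and assuming the active-low wiring described above, here is a host-side simulation of the port bit operations (on the real chip, ddrd/portd would be the DDRD/PORTD registers from <avr/io.h>):

```c
#include <assert.h>
#include <stdint.h>

/* Host-side stand-ins for the AVR registers, so the masking logic can be
 * checked off-target. On hardware, use DDRD/PORTD from <avr/io.h>. */
static uint8_t ddrd, portd;

/* STK500 LEDs are active low: driving the pin to 0 turns the LED on. */
void led_init(void)          { ddrd = 0xFF; portd = 0xFF; }       /* outputs, all LEDs off */
void led_on(uint8_t pin)     { portd &= (uint8_t)~(1u << pin); }  /* drive low  */
void led_off(uint8_t pin)    { portd |= (uint8_t)(1u << pin);  }  /* drive high */
void led_toggle(uint8_t pin) { portd ^= (uint8_t)(1u << pin); }
```

This also explains the original symptom: writing all zeros to PORTD drives every pin low, which turns every LED on rather than off.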
