Can a udp-client join a specific RPL instance ID in Contiki-NG?

I am using the rpl-udp example and I want to deploy 2 udp-servers that have different RPL instance IDs (using RPL-Lite). I modified the RPL_DEFAULT_INSTANCE macro, and they create 2 RPL instances. However, when I simulate using Cooja, the udp-clients join both RPL instances depending on which DIO packet they receive. Is it possible to tell a udp-client to join only a specified RPL instance?

As far as I know there is no such feature. Have you tried modifying the DIO processing function? (For example, if a udp-client receives a DIO from a non-target instance, it could simply drop it.)

Related

How to copy measurements into new measurements with different structure

Say I have measurements named:
eth0
wlan0
And I want to change these measurements to be named just "traffic" and have the previous name be a tag, so something like:
traffic - tags(ifname=eth0)
traffic - tags(ifname=wlan0)
This is a better structure because with it I should be able to retrieve either the total traffic for all interfaces or the traffic for a specific interface, whereas with the previous structure, as I understand it, I can't do that.
However, I need to write a data migration in my application that converts the past data to the new structure. How can I do that?
Do I have to rewrite each point one by one or is there a faster way to do it?
Thanks in advance!
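As far as I know, plain InfluxQL's SELECT ... INTO cannot attach a brand-new tag, so one approach is a small client-side migration: query each old measurement, rewrite its points, and write them back (e.g. with influxdb-python's client.query() and client.write_points(), which work with dicts in roughly this shape). Below is a minimal sketch of just the rewriting step; migrate_points and the field names are made up for illustration:

```python
def migrate_points(points, ifname):
    """Rewrite raw points read from a per-interface measurement
    (e.g. "eth0") into the shared "traffic" measurement, carrying
    the old measurement name as the ifname tag."""
    migrated = []
    for point in points:
        migrated.append({
            "measurement": "traffic",
            # Keep any existing tags, add the interface name as a tag.
            "tags": {**point.get("tags", {}), "ifname": ifname},
            "time": point["time"],
            "fields": point["fields"],
        })
    return migrated

# Example: one point read back from the old "eth0" measurement.
old_points = [{"time": "2020-01-01T00:00:00Z",
               "fields": {"rx_bytes": 100, "tx_bytes": 42}}]
new_points = migrate_points(old_points, "eth0")
assert new_points[0]["measurement"] == "traffic"
assert new_points[0]["tags"] == {"ifname": "eth0"}
```

After writing the rewritten points to "traffic" (and verifying them), the old per-interface measurements can be dropped.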

What is the difference between DataStream and KeyedStream in Apache Flink?

I am looking in the context of joining two streams using Flink and would like to understand how these two streams differ and affect how Flink processes them.
As a related question, I would also like to understand how CoProcessFunction would differ from a KeyedCoProcessFunction.
A KeyedStream is a DataStream that has been hash partitioned, with the effect that for any given key, every stream element for that key is in the same partition. This guarantees that all messages for a key are processed by the same worker instance. Only keyed streams can use key-partitioned state and timers.
A KeyedCoProcessFunction connects two streams that have been keyed in compatible ways -- the two streams are mapped to the same keyspace -- making it possible for the KeyedCoProcessFunction to have keyed state that relates to both streams. For example, you might want to join a stream of customer transactions with a stream of customer updates -- joining them on the customer_id. You would implement this in Flink (if doing so at a low level) by keying both streams by the customer_id, and connecting those keyed streams with a KeyedCoProcessFunction.
On the other hand, a CoProcessFunction has two inputs, but with no particular relationship between those inputs.
The Flink training has tutorials covering keyed streams and connected streams, and a related exercise/example.
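The keyBy guarantee described above can be illustrated with a toy hash partitioner (plain Python, not the Flink API; the crc32-based routing is a made-up stand-in for Flink's key-group hashing):

```python
import zlib

PARALLELISM = 4

def partition_for(key: str) -> int:
    # Deterministic stand-in for Flink's key-group hashing:
    # the partition depends only on the key.
    return zlib.crc32(key.encode("utf-8")) % PARALLELISM

# Route a small stream of (key, value) events to partitions,
# the way keyBy(customer_id) would route them to workers.
events = [("cust-1", 10), ("cust-2", 7), ("cust-1", 3), ("cust-2", 1)]
partitions = {}
for key, value in events:
    partitions.setdefault(partition_for(key), []).append((key, value))

# Every event for a given key lands in the same partition, so the
# worker owning that partition can hold all per-key state and timers.
assert len({partition_for(k) for k, _ in events if k == "cust-1"}) == 1
```

Keying both input streams of a KeyedCoProcessFunction by the same field (e.g. customer_id) means both streams use the same routing, so matching elements from either stream arrive at the same worker.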

Chord Join DHT - join protocol for second node

I have a distributed hash table (DHT) which is running on multiple instances of the same program, either on multiple machines or for testing on different ports on the same machine. These instances are started after each other. First, the base node is started, then the other nodes join it.
I am a little bit unsure how I should implement the join of the second node in a way that also works for all the other nodes (all of them, of course, run the same program) without defining all the border cases.
For a node to join, it sends a join message first, which gets passed to the correct node (here it's just the base node) and then answered with a notify message.
With these two messages, the predecessor of the base node and the successor of the joining node get set. But how does the other property on each node get set? I know that the nodes occasionally send a stabilise message to their successor, which compares the sender to its own predecessor and answers with a notify message containing that predecessor in case it differs from the sender of the message.
Now, the base node can't send a stabilise message, because it doesn't know its successor; the new node can send one, but its successor's predecessor is already valid, so nothing changes.
I am guessing, both properties should point to the other node in the end, to be fully joined.
Here is another diagram of what I think the sequence should be if a third node joins. But again: when do I update the properties based on a stabilise message, and when do I send a notify message back? In the diagram it is easy to see, but in code it is hard to decide.
The trick here is to set the successor to the same value as the predecessor if it is still NULL after the join message has been received. Everything else gets handled nicely by the rest of the protocol.
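The join/notify/stabilise sequence, including that trick, can be sketched as a minimal two-node simulation (plain Python; the interval checks for finding the correct position on the ring are omitted, and all names are made up for illustration):

```python
class Node:
    def __init__(self, nid):
        self.id = nid
        self.successor = None    # next node on the ring
        self.predecessor = None  # previous node on the ring

    def handle_join(self, joiner):
        # Simplified: assume this node is the correct one to answer.
        joiner.successor = self   # the "notify" answer to the joiner
        self.predecessor = joiner # the join sets our predecessor
        # The trick: a lone base node has no successor yet, so it
        # adopts the freshly set predecessor as its successor too.
        if self.successor is None:
            self.successor = self.predecessor

    def stabilize(self):
        # Ask our successor for its predecessor; if that node differs
        # from us, it is a better (closer) successor, so adopt it.
        p = self.successor.predecessor
        if p is not None and p is not self:
            self.successor = p
        # Tell our successor about us so it can fix its predecessor.
        self.successor.notify(self)

    def notify(self, candidate):
        # Adopt the candidate as predecessor if we have none.
        if self.predecessor is None:
            self.predecessor = candidate

base, second = Node(10), Node(20)
base.handle_join(second)  # second joins the ring via the base node
base.stabilize()          # base's stabilise sets second's predecessor
second.stabilize()        # second's stabilise finds nothing to change

# Fully joined: both properties on both nodes point at the other node.
assert base.successor is second and base.predecessor is second
assert second.successor is base and second.predecessor is base
```

With the NULL-successor trick in handle_join, the periodic stabilise/notify round brings the two-node ring into a consistent state without any further special cases.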

Mixing stores in same Xodus Environment

Would it be possible to use a PersistentEntityStore and one or more plain Store instances in the same Environment instance? I was hoping to use transactions that cover changes on such a combination.
I see potential conflicts with store names that I would have to avoid. Anything else?
It's possible to mix code using different API layers inside a single transaction. The only requirement is that the data touched by the different API layers must be isolated, i.e. disjoint sets of Store names should be used.
What are the names of the Stores used by a PersistentEntityStore? Each PersistentEntityStore has its own unique name, and the names of all Stores that represent the mapping of the entity store to the key/value layer start with "${PersistentEntityStore name}.", as specified in the source code.
Another issue is that the API is not complete for such an approach. After a StoreTransaction is created against the PersistentEntityStore, it has to be cast to PersistentStoreTransaction in order to call PersistentStoreTransaction#getEnvironmentTransaction() to get the underlying transaction:
final StoreTransaction txn = entityStore.beginTransaction();
// here is underlying Transaction instance:
final Transaction envTxn = ((PersistentStoreTransaction) txn).getEnvironmentTransaction();

Erlang scalability questions related to gen_server:call()

In Erlang/OTP, when making a gen_server:call() to a process on another node, you have to pass in the name of the node you are calling.
Let's say I have this use case:
I have two nodes, 'node1' and 'node2', running. Those nodes can make gen_server:call()s to each other.
Now let's say I add 2 more nodes, 'node3' and 'node4', and ping the others so that all nodes can see each other and make gen_server:calls to each other.
How do the Erlang pros handle dynamically adding new nodes like that, so that they know the new node names to use in the gen_server calls? Or is it a requirement to know the names of all the nodes beforehand, so that they can be hardcoded somewhere like the sys.config?
You can use:
erlang:nodes()
to get a "now" view of the node list.
Also, you can use:
net_kernel:monitor_nodes(true)
to be notified as nodes come and go (via ping / crash / etc.).
To see if your module is running on a given node, you can either call the gen_server with some kind of ping callback, or you can use the rpc module to call erlang:whereis(Name) on the foreign node.