Can a Contiki NG node be a Root of one RPL instance and a child in another RPL instance? - contiki

In Cooja, I made two nodes, each running the rpl-border-router example configured to use RPL Classic in storing multicast mode, but with different default instance IDs and different prefixes. Both border routers connect to an Ubuntu host via tunslip. I have RPL logging set to DBG. Other than changing RPL_CONF_MAX_INSTANCES to 3, all configuration parameters are at their defaults. I saw that the uIP configuration acts as a router by default, so it seems that what I put together ought to allow forwarding data along the appropriate DAG based on the prefix and RPL instance. My reading of RFC 6550 Appendix A is that using separate prefixes is allowed.
When I run the simulation in Cooja, I see the nodes exchange DIOs, each joins the other's DAG, and each has IP addresses with both prefixes. However, if I ping from the host I get no ping response, but I do see warnings from both nodes that a loop is detected. Is there something missing in my configuration that would allow the root of a DAG in one instance to participate as a child in a DAG of another instance? Or have I misunderstood the intent of multiple instances?
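For reference, the setup described above corresponds roughly to a project-conf.h along these lines. The macro names are taken from Contiki-NG's rpl-classic rpl-conf.h and the values are assumptions, so verify them against your source tree:

```c
/* project-conf.h -- sketch of the setup described above.
 * Macro names come from Contiki-NG's os/net/routing/rpl-classic/rpl-conf.h;
 * verify them against your tree. */
#ifndef PROJECT_CONF_H_
#define PROJECT_CONF_H_

/* Allow more than one RPL instance per node. */
#define RPL_CONF_MAX_INSTANCES     3

/* Storing mode with multicast support. */
#define RPL_CONF_MOP               RPL_MOP_STORING_MULTICAST

/* The default instance ID (0x1e in stock Contiki-NG) must differ
 * between the two border-router firmwares. */
#define RPL_CONF_DEFAULT_INSTANCE  0x1e

#endif /* PROJECT_CONF_H_ */
```

This assumes the firmware is built with MAKE_ROUTING = MAKE_ROUTING_RPL_CLASSIC in the Makefile so that rpl-classic (rather than rpl-lite) is used.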

Related

Is it possible to set up two mosquitto brokers on the same machine with different access to them?

Is it possible to set up two brokers on the same machine with different access to them? Or must it be one broker on one machine and the second on another?
Yes - it is possible. You will want to point each instance at a different configuration file (-c / --config-file command line option) and ensure that the configuration files do not have listeners on the same port.
The different instances will be independent (so something with a connection to one broker will not receive any messages published to a second broker unless you establish a bridge between them).
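A minimal sketch of that setup, with hypothetical file names, ports, and access settings (the password file referenced for the second broker is a placeholder you would create with mosquitto_passwd):

```shell
# Two configs, one per broker instance; ports must differ.
cat > broker-a.conf <<'EOF'
listener 1883
allow_anonymous true
EOF

cat > broker-b.conf <<'EOF'
listener 1884
password_file passwd-b
EOF

# Each broker is then launched with its own config:
#   mosquitto -c broker-a.conf
#   mosquitto -c broker-b.conf

# Sanity check that the listeners do not collide:
grep -H '^listener' broker-a.conf broker-b.conf
```

Because the two listeners are on different ports with different access settings, a client must pick a port (and matching credentials) explicitly, which gives you the "different access" separation asked about.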

overriding configuration on a running tarantool instance

Can anyone tell me whether it is possible to override individual box.cfg parameters on a running instance? For example, to add a replica. For several days I have been trying to deploy three replicas on three hosts via a Docker service stack.
When I bring the instances up by hand on each server, everything works; when deployed through the stack, they do not see each other and crash. I have tried all sorts of approaches. I set up an endpoint on the target nodes that, when queried, returns the IP of the machine on which the container is starting; if that IP matches one of those listed in SEED, the container's internal IP is substituted instead (otherwise the instance cannot connect to itself).
In theory this all works as described, but I suspect the problem is that before box.cfg is called, the instance does not bind its address. Alas, I cannot get inside the container because it never comes up. My idea is to start all three nodes with minimal settings, have each listen on the subnet, and as soon as a node finds another, write it into replication and override box.cfg. Please correct me, anyone who has experience with this.
Some of the box.cfg parameters are dynamic, for example box.cfg{listen = ...}. You can set this one from Lua code whenever you wish. In your case, if the container gets its IP address later, specify only the port in listen; that way, Tarantool will listen on all available interfaces.
The replication_source parameter is a bit trickier. You can set it dynamically, but your first (initializing) call to box.cfg must already include replication_source. This is because any instance initialized without this parameter creates its own replica set, which makes it impossible to join it to another replica set afterwards.
You can read more about Tarantool replication architecture here: https://www.tarantool.io/en/doc/latest/book/replication/repl_architecture/
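Putting the two points together, a minimal init.lua sketch might look like this. The URIs and credentials are placeholders, and the parameter is shown under the replication_source name used above (newer Tarantool versions call it replication):

```lua
-- init.lua sketch; URIs and credentials are placeholders.
box.cfg{
    -- Port only, no host: Tarantool binds all available interfaces,
    -- so the container's late-assigned IP does not matter.
    listen = 3301,

    -- Must be present in the FIRST box.cfg call, otherwise this
    -- instance bootstraps its own replica set and cannot join another.
    replication_source = {
        'replicator:pass@host-a:3301',
        'replicator:pass@host-b:3301',
        'replicator:pass@host-c:3301',
    },
}

-- Dynamic parameters can later be overridden on the running instance,
-- e.g. after discovering a new peer:
-- box.cfg{replication_source = new_source_list}
```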

Is there a way to disconnect or sandbox an instance network interface

I am looking for how I can take an existing instance and either change its network "connection" to a sandboxed network (which is easy enough to create since each project supports up to 5 networks) or start the instance with no network interface at all and just use console access. Alternatively, what is the recommended process for doing forensic investigation into an instance that is suspected to be running processes or services that should not be communicating with other instances in the project or any external address? Thanks in advance.
You can leave instances without a public IP address. Instances created this way will not be accessible by machines outside your project.
Have a look at the documentation concerning IPs.
You may also need to set up a NAT gateway so instances can communicate with outside machines.
You can use forwarding rules to discard packets from/to an instance in combination with routing.

Can I set multiple cookies for a given Erlang node?

To my understanding, if you have two different erlang clusters, each of them using a different Erlang cookie, a node belonging to the first cluster will not be able to communicate with a node belonging to the second cluster.
Does Erlang provide a mechanism to allow multiple magic cookies for a given node?
As explained here and as mentioned by @legoscia in the comments:
For a node Node1 with magic cookie Cookie to be able to connect to, or accept a connection from, another node Node2 with a different cookie DiffCookie, the function erlang:set_cookie(Node2, DiffCookie) must first be called at Node1.
Please note that connections between Erlang nodes are transitive by default, meaning that you soon end up with a fully connected cluster of Erlang nodes, which can heavily affect communication performance. An alternative approach, based on the concept of a "group of nodes", is under research.
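Concretely, the quoted rule looks like this in an erl session (node names and cookies here are made up):

```erlang
%% On node1@host, which was started with: erl -sname node1 -setcookie Cookie
%% Tell the local node to use DiffCookie when talking to node2@host:
erlang:set_cookie('node2@host', 'DiffCookie').

%% After this, a connection attempt such as
%%   net_adm:ping('node2@host').
%% can succeed even though the two nodes' own cookies differ.
```

So while a node has a single cookie of its own, it can remember a different outgoing cookie per remote node, which is what makes a node able to bridge two clusters.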

Erlang: automatic population of .hosts.erlang file?

I am using net_adm:world() to connect to nodes on other hosts, but the only way I got this to work was to manually create a .hosts.erlang file listing the names of the other hosts. If I had 10 hosts, I would have to put this file on all ten machines and update the list on all ten every time a new host is added to the cluster.
Is there no way this file can be automatically updated each time a connection to a Node on a new host is made?
Your .hosts.erlang file doesn't need to be complete or 100% correct. A node only needs to connect to one other node to learn about every other node in the cluster.
You could skip maintaining the .hosts.erlang file and use multicast UDP to dynamically discover nodes. See nodefinder for example code.
We started down the multicast UDP route but then decided to just maintain a central hosts file and use rsync to distribute it to all hosts. We restart nodes infrequently, so it hasn't been a big problem.
We use Chef to prepopulate the .hosts.erlang file for nodes that belong to a cluster. The function net_adm:world() can be used to determine the nodes that are currently part of a cluster, which does not necessarily match what is contained in .hosts.erlang, e.g., when one of the nodes is down. An alternative to net_adm:world() is the function net_adm:world_list(Hosts), which takes a list of hosts (instead of reading from .hosts.erlang) and otherwise does the same as net_adm:world() to determine the currently connected nodes.
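For reference, .hosts.erlang is just a file of host names written as quoted Erlang atoms, one term per line (the host names below are placeholders):

```erlang
%% ~/.hosts.erlang (or the node's current directory)
'host-a.example.com'.
'host-b.example.com'.
'host-c.example.com'.

%% The file-less equivalent mentioned above would be:
%%   net_adm:world_list(['host-a.example.com',
%%                       'host-b.example.com']).
```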
