In RPL, how do I block a node at the link layer itself so that further layers do not process data from that node? - contiki

I want to block a malicious node at the data-link layer itself so that the upper layers do not have to process data from it. Is there any way to block all communication from that node?
Note: I have the IP address of the identified malicious node, e.g. fe80::c30c:0:0:13.
Ideas are welcome.
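One idea, offered as a minimal sketch rather than a tested patch: filter on the sender's link-layer address in the MAC/network input path, so the frame is dropped before 6LoWPAN and RPL ever see it. The packetbuf_addr() and linkaddr_cmp() calls below are Contiki 3.x APIs (older trees use the rimeaddr_* names); the blacklist entry, the helper name, and the exact hook point are illustrative assumptions.

#include "net/packetbuf.h"
#include "net/linkaddr.h"

/* Link-layer address of the malicious node, assuming 8-byte 802.15.4
 * addresses. The link-local IID fe80::c30c:0:0:13 is the EUI-64 with
 * the universal/local bit inverted, so the first byte is 0xc1, not 0xc3. */
static const linkaddr_t blacklisted =
  {{ 0xc1, 0x0c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x13 }};

static int
sender_is_blacklisted(void)
{
  return linkaddr_cmp(packetbuf_addr(PACKETBUF_ADDR_SENDER), &blacklisted);
}

Calling sender_is_blacklisted() at the top of the network driver's input() function (e.g. sicslowpan's input) and returning early on a match drops the frame silently, so no upper layer ever processes it.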

Related

How do I get the id/name of the next node along the path the agent is moving on?

Assume that an agent is moving from Node1 to Node3 in the following network:
Node1 - PathA - Node2 - PathB - Node3
How do I access the next node the agent will pass?
The actual task is for bus agents to move along the yellow paths and pick up/drop off passengers and crews at the corresponding stands (nodes); one of the tasks requires me to acquire the "next node".
If you want full control of path-finding and nodes, check this tutorial. Fair warning: this is quite advanced and goes well beyond AnyLogic basics: https://www.benjamin-schumann.com/blog/2022/8/6/taking-control-of-your-network-agent-based-pathfinding
When you define a destination node, the only thing you can access once your agent starts moving is the destination position, using getTargetX() for instance. AnyLogic doesn't give you access to the set of nodes the agent will pass through on its way there.

How to capture a MultibodyPlant's geometry hierarchy in an output port's callback

I have a system consisting of a MultibodyPlant (and scene graph), which loads an iiwa manipulator from a URDF file, and a LeafSystem that is creating ROS TF messages corresponding to the geometry stored in the MultibodyPlant.
The LeafSystem has an input port that receives a graph query object, and an output port that produces ROS TF messages. The input port is connected to the scene graph's graph query port. The output port is connected elsewhere, which is not relevant to my problem.
The LeafSystem's output port's value is calculated by a callback that inspects the scene graph and creates a TF frame for every frame in the scene graph.
The problem I'm having is that the scene graph has no knowledge of the hierarchical relationship between bodies in the MultibodyPlant. This means that the TF messages produced contain a set of frames that are all rooted in the world frame. I would like to instead have them reflect the actual hierarchy of joints and links that is in the URDF. The MultibodyPlant has this information. What architecture or ports should I use to make the information available in my LeafSystem's output port value-calculating callback?
Can the solution to this be generic to any system that contains geometry, or does it only make sense for a MultibodyPlant? (I may not be understanding systems correctly here - I'm relatively new to Drake.)

How to calculate the number of packets sent to or forwarded by a node in the RPL protocol of ContikiOS?

In RPL, to select a trusted parent under the trust model, direct and indirect trust must be calculated. For direct trust computation, the number of packets sent to node A by node B and the number of packets forwarded by node A on behalf of node B must be counted, and I have trouble determining the number of forwarded packets. Any help with solving this problem will be useful.
You can look at the values of uip_stat.ip.sent and uip_stat.ip.forwarded. Make sure to enable uIP statistics (#define UIP_CONF_STATISTICS 1).
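For reference, a minimal sketch of reading those counters, assuming a Contiki 3.x tree (older trees use #include "net/uip.h" instead). Note that uip_stat holds node-global totals only; uIP does not break the counts down per neighbor, so the per-node figures the trust model needs (packets sent to A by B, packets forwarded by A for B) would have to be tracked in your own table, e.g. keyed on the sender's address in the forwarding path.

/* project-conf.h */
#define UIP_CONF_STATISTICS 1

/* application code */
#include <stdio.h>
#include "net/ip/uip.h"   /* declares the global uip_stat */

static void
print_ip_counters(void)
{
  /* Node-global totals: packets this node originated and packets it
   * forwarded on behalf of other nodes. */
  printf("sent: %u, forwarded: %u\n",
         (unsigned)uip_stat.ip.sent, (unsigned)uip_stat.ip.forwarded);
}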

Private Ethereum maxpeers

Steps
I have created a private node and used --maxpeers of 1 (network id = 1223123341).
Added user X's node via admin.addPeer(enode of user X) successfully (same network id and genesis).
Based on my understanding, maxpeers will limit the number of nodes that can connect to mine from the network to 1 (user X's node).
Question: if user X's node updates its --maxpeers to 5 and gives the network id and genesis file to other nodes, does that mean 5 nodes can now connect to this network? Who controls maxpeers in a private network (e.g. network id = 1223123341)?
If you want to avoid 51% attacks, you should consider running permissioned chains. One way is to keep the genesis block of your Proof-of-Work or Proof-of-Stake network private, but you would have to share it with every participant in the network, and you cannot know whether it gets leaked at some point. If it does, there is no way to stop other users from participating.
Another option is to use Proof-of-Authority networks. Both Geth and Parity support that. This allows only strictly defined nodes to seal blocks and everyone else can just use the network, but not change the set of rules defined by the authorities.
Note: I work for Parity.
The --maxpeers option controls the number of peers for that particular instance. So, yes, if node 1 has --maxpeers=1 and node 2 has --maxpeers=5, you will not be limited to just 2 nodes in the network. Nodes don't all need to know about every other node either, so node 2 may be peers with nodes 3-7 and not know anything about node 1 (in other words, with the example you provided, the total number of nodes could be even more than 5).
AFAIK, there is no configuration to limit the total number of nodes in a network, and I don't see why you would want one. You are given enough control at the node level.
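To make the per-instance behaviour concrete, here is an illustrative pair of launch commands (flag names as in stock geth, with the network id from the question; --nodiscover disables automatic discovery so peers are only added explicitly via admin.addPeer):

geth --networkid 1223123341 --maxpeers 1 --nodiscover
geth --networkid 1223123341 --maxpeers 5 --nodiscover

The first instance will hold at most 1 peer connection of its own; the second allows up to 5. Each instance enforces only its own limit; nothing in these flags caps the size of the network as a whole.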

The impact of a distributed application configuration on node discovery via net_adm:ping/1

I am experiencing different behavior with respect to net_adm:ping/1 when it is done in the context of a distributed application.
I have an application that pings a well-known node on start-up and in that way discovers all nodes in a mesh of connected nodes.
When I start this application on a single node (non-distributed configuration), net_adm:ping/1 followed by nodes/0 reports 4 other nodes (this is correct). The 4 nodes are on 2 different physical machines, so what is returned is the following: n1@machine_1, n2@machine_2, n3@machine_2, n4@machine_1 (IP addresses are actually returned, not machine_x).
When part of a two-node distributed application, on the node where the application starts, net_adm:ping/1 followed by nodes/0 reports 2 nodes, one from each machine (n1@machine_1, n2@machine_2). A second call to nodes/0 after about a 750 ms delay results in the correct 5 nodes being found. Two of the three missing nodes are required for my application to work, and so, not finding them, the application dies.
I am using Erlang R15B02.
Is the latency of the transitive node-discovery process known to be different when some of the nodes in the mesh are participating in a distributed application configuration?
The kernel application documentation describes a way to synchronize nodes at start-up, pausing the boot phase until everything is in place before moving forward. Here are the options:
sync_nodes_mandatory = [NodeName]
Specifies which other nodes must be alive in order for this node to start properly. If some node in the list does not start within the specified time, this node will not start either. If this parameter is undefined, it defaults to [].
sync_nodes_optional = [NodeName]
Specifies which other nodes can be alive in order for this node to start properly. If some node in this list does not start within the specified time, this node starts anyway. If this parameter is undefined, it defaults to the empty list.
A file using them could look as follows:
[{kernel,
  [{sync_nodes_mandatory, ['b@ferdmbp', 'c@ferdmbp']},
   {sync_nodes_timeout, 30000}]
}].
You then start the node a@ferdmbp by calling erl -sname a -config config-file-above. The downside of this approach is that each node needs its own config file.
