Private Ethereum maxpeers

Steps
I have created a private node and use --maxpeers=1 (network id = 1223123341).
I added user X's node with admin.addPeer(enode of user X) successfully (same network id and genesis).
Based on my understanding, --maxpeers will limit the number of nodes that can connect from the network to 1 node only (user X's node).
Question: if user X's node updates its --maxpeers to 5 and gives the network id and genesis file to other nodes, does that mean there can now be 5 nodes that can connect to this network? Who controls --maxpeers in a private network (e.g. network id = 1223123341)?

If you want to avoid 51% attacks, you should consider running permissioned chains. One way to do this is to keep the genesis block of your Proof-of-Work or Proof-of-Stake network private, but you would have to share it with every participant in the network, and you cannot know whether it will be leaked at some point. If it is, there is no way to stop other users from participating.
Another option is to use Proof-of-Authority networks. Both Geth and Parity support that. This allows only strictly defined nodes to seal blocks and everyone else can just use the network, but not change the set of rules defined by the authorities.
Note: I work for Parity.

The --maxpeers option controls the number of peers for that particular instance. So, yes, if node 1 has --maxpeers=1 and node 2 has --maxpeers=5, you will not be limited to just 2 nodes in the network. Nodes don't all need to know about every other node either, so node 2 may be peers with nodes 3-7 and not know anything about node 1 (in other words, with the example you provided, the total number of nodes could be even more than 5).
AFAIK, there is no configuration to limit the total number of nodes in a network, and I don't see why you would want one. You are given enough control at the node level.
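To make the per-node nature of the flag concrete, here is a minimal sketch of two nodes on the same private chain with different peer limits (the data directories and values are examples, not taken from your setup; both nodes are assumed to have been initialised from the same genesis file):
# node 1 accepts at most 1 peer
geth --datadir ./node1 --networkid 1223123341 --nodiscover --maxpeers 1
# node 2, same chain, accepts up to 5 peers
geth --datadir ./node2 --networkid 1223123341 --nodiscover --maxpeers 5
Each --maxpeers value only constrains the node it is passed to; there is no chain-wide option that caps the total number of participants.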

Related

Distributing in-memory linked list

I have a program which is built around a singly linked list. There are different programs which create some form of data, and this data is sent to this linked-list module to be added. As long as I have RAM available, the program works as intended. Periodically (about every year) I archive the entire linked list to disk (due to a requirement, I archive all of it). So far so good.
What happens if I want to add a new node to the list while RAM is full and I haven't yet archived and freed the memory? This might occur when the producer count goes up, or, regardless of producer count, more data may be created depending on where it is used, etc. I couldn't find a clear solution for scaling the in-memory linked list. There is a workaround in my head, but I don't even know if it works, so I thought it better to ask here.
When the RAM starts to get almost full, I would create a new instance of the linked-list program (just another machine in the cloud or a new physical computer on premises, whatever).
I do have a service discovery module (something like ZooKeeper); this discovery module will detect the newly created machine and add it to the list.
When the first instance is almost at its limit, it will check whether there is an available instance; if there is, it will relay the node to the next instance and update its last node's next pointer to something special. If you want to traverse the list from start to finish across all the machines, every time you come to this special node it will hold the information about which machine has the next node. Traversal will continue on the next machine that the last node points to.
Since this is not a hash map or something of that nature, I can't just replicate the service and, for example, relay the incoming request to a particular machine based on a given key.
Rather than archiving part of the old data, loading it back into RAM, and continuing like that, I thought it would be better to have the last pointer point to a different machine and continue reading from that machine. A network call seemed the better choice because this program will be used on an intranet, but I still couldn't find a solid solution on paper.
Is there such an example that I can study to try to find a better solution? Is this solution feasible?
An example:
Machine 1:
1st node : [data:x, *next: 2nd Node address],
2nd node : [data:123, *next: 3rd Node address],
...
// at this point RAM is almost full
// receive next instance's ip
(n-1)th node : [data:987, *next: nth Node address],
nth node : [data:x2t, type: LastNodeInMachine, *next: nullptr]
Machine 2:
1st node == (n+1)th node : [data:x, *next: 2nd Node address],
... and so on
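If it helps to see the idea in code, here is a minimal sketch of the "last node points to another machine" scheme in Java (all class, field, and address names are made up for illustration; the original program is not tied to any particular language):
// Sketch of a node that either links to the next node in local memory
// or names the remote machine that holds the continuation of the list.
public class ChainNode {
    String data;          // payload, e.g. "x", "123", "x2t"
    ChainNode next;       // next node on this machine, or null
    String nextMachine;   // set only on the last node of a machine,
                          // e.g. "10.0.0.7:9000" as reported by the discovery module

    boolean isLastNodeInMachine() {
        return nextMachine != null;
    }
}
Traversal walks the local next references until it reaches a node where isLastNodeInMachine() is true, then asks the machine named in nextMachine for its first node and continues there; the transport (RPC, HTTP, raw sockets) is left out of the sketch.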

How to calculate the number of packets sent to or forwarded by a node in RPL protocol of ContikiOS?

In RPL, to select the best parent with a trust model, direct and indirect trust must be calculated. For the direct trust computation, the number of packets sent to node A by node B and the number of packets forwarded by node A on behalf of node B must be counted, and I have trouble determining the number of forwarded packets. Any help in solving this problem would be useful.
You can look at the values of uip_stat.ip.sent and uip_stat.ip.forwarded. Make sure to enable uIP statistics (#define UIP_CONF_STATISTICS 1).

Configure Same offset for Kafka consumers from different groups

I have ServiceA, which produces DomainChangeEvents and commits them to a topic in Kafka; ServiceB then consumes these events from the Kafka topic and applies the changes to a read model stored in memory. Some of the DomainChangeEvents are reset events, which reset the domain to its starting point. On restart of ServiceB I want to read the ChangeEvents from the last reset onward and rebuild the domain from there.
ServiceB is launched in Docker as a replicated service.
As I want all ChangeEvents in each replica of ServiceB, I cannot give them the same group.id, or the messages will be load-balanced and I won't get all events in all replicas. How can I configure ServiceB to continue from the latest reset event after a restart?
I tried setting a random group.id on ServiceB and committing the reset message after I consume it, but after a restart I have a different group.id, so all messages are consumed from the start again.
I thought about giving different configurations to the Docker replicas, but from what I read a Docker service is configured to be identical across all replicas, so that's not an option.
A possible solution would be to store the points you want your different consumers to start from by manually committing the offsets to, for example, a database.
A table that would look like:
Topic Partition Offset
topicA 0 112
topicA 1 125
topicB 0 2313
topicB 1 2984
topicB 2 2554
Those would be your "latest reset" points, or the positions your consumers want to start from. The problem with the subscribe() method, as you correctly said, is that it depends on the group.id parameter and plays the consumer rebalancing and coordination game.
In order to consume from a fixed point (or a set of points in different partitions), you should make a call to assign() instead. With this method, you manually specify a list of partitions for your consumer. No group.id, no dynamic partition assignment, and no automatic offset loading, which is what you seem to need.
After assigning the partitions, you should make a call to seek(). With seek(), you tell the consumer from which offset you want to start reading the partition that was specified in the assign() call.
For example, to start reading from the "latest resets" from any topic, you should do something like:
// seeking the stored "latest reset" offset, e.g. topicA's partition 0
public void setStartPosition(TopicPartition partition, long offset)
{
    consumer.assign(Collections.singletonList(partition)); // e.g. partition 0
    consumer.seek(partition, offset);                      // e.g. offset 112
}
Calling this method will place your consumer at exactly the desired position in each partition. I'm not really sure if I'm answering your issue, but I hope it helps!
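As a rough sketch of how this ties back to the stored offsets, assuming the usual Kafka client and java.time.Duration imports, and that the topic, partition, and offset values come from wherever you persisted them (applyChangeEvent is a placeholder for your own rebuild logic):
// Build a consumer with no group management; offsets are handled manually.
Properties props = new Properties();
props.put("bootstrap.servers", "kafka:9092"); // example address
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("enable.auto.commit", "false");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

// Position this replica at the stored "latest reset", e.g. topicA / partition 0 / offset 112.
TopicPartition partition0 = new TopicPartition("topicA", 0);
consumer.assign(Collections.singletonList(partition0));
consumer.seek(partition0, 112L);

// Rebuild the read model from the reset point onward.
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    records.forEach(record -> applyChangeEvent(record.value()));
}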

Missing master heartbeat does not cause node to react in a CANopen system

I have a strange finding about the heartbeat-protocol in CANopen. Maybe somebody else has seen something like this and maybe it is supposed to work like this... Anyway, here's what it's about:
In CANopen there are two timeout-based life-guarding mechanisms: the first is node guarding, which I will not mention further, since it's considered old news.
The other one is called heartbeat. It is pretty simple: Any participant on the network sends a regular message stating its node ID and its state. The frequency is defined by object 0x1017sub0 and is called heartbeat-producer-time. If it is set to zero, no heartbeat is being sent.
Any other participant can then define a number of nodes it wants to find on the network plus the maximum time there may be between two consecutive heartbeat-messages. This information is stored in object 0x1016sub1..n as 32-bit entries for as many nodes as this particular node wants to listen to.
The entries consist of the node ID (bits 22 to 16) and the mentioned maximum time that may elapse between heartbeats, called the heartbeat-consumer-time (bits 15..0). Again, if the entry is zero, it is ignored.
As you may have gathered, there is no distinction between network-master (node ID 1) and slaves (node IDs 2 to 127).
So far the theory, now for my problem:
I configure one of the slave nodes in my network as a heartbeat consumer for the master, so there's an entry in object 0x1016sub1 that looks like this: 0x000107D0. Meaning that a heartbeat message from the master (node ID 1) is expected at least every two seconds (0x07D0 = 2000 ms).
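For reference, the two fields can be pulled out of that 32-bit value with simple bit arithmetic (a sketch in Java-style code, just to show where the node ID and the consumer time sit in the entry):
// Decoding the consumer heartbeat entry 0x000107D0 from object 0x1016sub1.
int entry  = 0x000107D0;
int nodeId = (entry >> 16) & 0x7F; // bits 22..16 -> 0x01, the master
int timeMs = entry & 0xFFFF;       // bits 15..0  -> 0x07D0 = 2000 ms
System.out.println("node " + nodeId + ", consumer time " + timeMs + " ms");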
I have observed that this works in two examples. If I send a master-heartbeat for a time and then stop, the node either returns to pre-operational mode or sends an appropriate emergency-message.
If I don't send any master-heartbeat-messages, I would expect that after I start the node (send it into operational mode) it takes at most two seconds for the node to either return to pre-operational mode or send an appropriate emergency-message or perhaps even both. But in the two examples I tried, nothing happened. If I never send any heartbeat, the node never expects one and just keeps on running.
The two examples are very different from each other, and I am not sure whether they perhaps use the same CANopen stack library.
Is there an explanation?
If you read the CANopen User Manual, section 1.3.1.6, page 39, you will notice that the heartbeat consumer is first activated upon receiving a heartbeat from the producer. I would assume then that, since in your example the first heartbeat is never sent, the consumer is never activated.

The impact of a distributed application configuration on node discovery via net_adm:ping/1

I am experiencing different behavior with respect to net_adm:ping/1 when it is done in the context of a distributed application.
I have an application that pings a well-known node on start-up and in that way discovers all nodes in a mesh of connected nodes.
When I start this application on a single node (non-distributed configuration), the net_adm:ping/1 followed by a nodes/0 reports 4 other nodes (this is correct). The 4 nodes are on 2 different physical machines, so what is returned is the following: n1@machine_1, n2@machine_2, n3@machine_2, n4@machine_1 (IP addresses are actually returned, not machine_x).
When part of a two-node distributed application, on the node where the application starts, the net_adm:ping/1 followed by a nodes/0 reports 2 nodes, one from each machine (n1@machine_1, n2@machine_2). A second call to nodes/0 after about a 750 ms delay results in the correct 5 nodes being found. Two of the three missing nodes are required for my application to work and so, not finding them, the application dies.
I am using R15B02
Is latency in the transitive node-discovery process known to be different when some of the nodes in the mesh are participating in a distributed application configuration?
The kernel application documentation describes a way to synchronize nodes, pausing the boot phase until everything is ready and in place. Here are the options:
sync_nodes_mandatory = [NodeName]
Specifies which other nodes must be alive in order for this node to start properly. If some node in the list does not start within the specified time, this node will not start either. If this parameter is undefined, it defaults to [].
sync_nodes_optional = [NodeName]
Specifies which other nodes can be alive in order for this node to start properly. If some node in this list does not start within the specified time, this node starts anyway. If this parameter is undefined, it defaults to the empty list.
A file using them could look as follows:
[{kernel,
  [{sync_nodes_mandatory, [b@ferdmbp, c@ferdmbp]},
   {sync_nodes_timeout, 30000}]
}].
Start the node a@ferdmbp by calling erl -sname a -config config-file-above. The downside of this approach is that each node needs its own config file.
