Maximum number of hops (path cost) in Thread / OpenThread

What is the maximum number of hops (or path cost) allowed by the Thread specification?
How many hops does OpenThread support?
I found two similar questions (link1, link2), but their answers differ (and may even be outdated).

Thread specifies a maximum path cost of 16 between routers. Link cost between routers can be 1, 2, or 4. Links to end devices are not included in the path cost calculation.
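To put that budget in hop terms (my own reading of the figures above, not additional spec text): with ideal links of cost 1 each, a route can span up to 16 / 1 = 16 router-to-router hops, while if every link had the worst usable cost of 4, the same budget is exhausted after 16 / 4 = 4 hops. OpenThread implements the Thread specification, so the same path-cost limit applies there as well.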

Related

Behavior of docker compose v3's deploy resources limits 'cpus' parameter setting (is it an absolute number or a percentage of available cores)

Folks,
With regard to docker compose v3's 'cpus' parameter setting (under 'deploy' > 'resources' > 'limits') to limit the CPUs available to a service: is it an absolute number that specifies a count of CPUs, or is it the (more useful) percentage of available CPUs?
From what I read it appears to be an absolute number, wherein, say, if a host has 4 CPUs and one were to set two services in the compose file to 0.5 each, then the two services combined can only use a maximum of 1 CPU (0.5 each), leaving the remaining 3 CPUs idle.
But thinking out loud, it seems to me that it would be nicer if this were a percentage of available cores: in the same example, each service could then use up to 2 CPUs, and the two combined could use all 4 when needed. That way, when I increase or decrease the available cores, the relative setting would not need to be modified again.
EDIT(09/10/21):
On reading this, it appears that the above can be achieved with the 'cpu-shares' setting instead of 'cpus'. Is my understanding correct?
The doc for 'cpu-shares', however, carries the following cautionary note:
"It does not guarantee or reserve any specific CPU access."
If the above is achieved with this setting, then what does it mean (what is there to lose) to not have a guarantee or reservation?
EDIT(09/13/21):
Just to summarize,
The 'cpus' parameter setting is an absolute number that refers to the number of CPUs a service has reserved for its use at all times. Correct?
The 'cpu-shares' parameter setting is a relative weight whose value is used to determine the percentage of total available CPU that a service can use, but only when there is contention. Correct?
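For concreteness, a minimal Compose-spec sketch of the two settings side by side might look like this (service names, images and numbers are made up; whether cpu_shares is accepted at the service level depends on your Compose file version and whether you deploy to Swarm, so check the Compose file reference for your setup):

# Illustrative sketch only; names and values are invented.
services:
  service-a:
    image: alpine              # placeholder image
    deploy:
      resources:
        limits:
          cpus: "0.5"          # absolute cap: at most half of one CPU, regardless of how many cores the host has
        reservations:
          cpus: "0.25"         # amount reserved for the service, where the platform honours reservations
  service-b:
    image: alpine
    cpu_shares: 512            # relative weight; only matters while CPUs are under contention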

How to specify maximum cost when running BFS Neo4j?

The docs of Neo4j data science library state:
There are multiple termination conditions supported for the traversal,
based on either reaching one of several target nodes, reaching a
maximum depth, exhausting a given budget of traversed relationship
cost, or just traversing the whole graph.
But among the algorithm-specific parameters I could not find any parameter for constraining the maximum cost of the traversal (or simply the number of relationships, if every cost is 1). The only parameters listed are startNodeId, targetNodes and maxDepth.
Any idea whether this can actually be done, or are the docs incorrect?
Here is the list of procedures and functions for your reference. As you can see, Breadth First Search is still in the Alpha tier and no estimate function is available for it yet. You can also see that functions in the Beta and Production tiers have a corresponding *.estimate function. These functions give you an idea of how much memory will be used when you run the corresponding data science procedures. An example of gds.nodeSimilarity.write.estimate can be found below.
CALL gds.nodeSimilarity.write.estimate('myGraph', {
  writeRelationshipType: 'SIMILAR',
  writeProperty: 'score'
})
YIELD nodeCount, relationshipCount, bytesMin, bytesMax, requiredMemory

nodeCount | relationshipCount | bytesMin | bytesMax | requiredMemory
----------|-------------------|----------|----------|------------------------------
9         | 9                 | 2592     | 2808     | "[2592 Bytes ... 2808 Bytes]"
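As for the original question, the closest documented knob appears to be maxDepth rather than a cost budget. A depth-limited call might look roughly like this (a sketch only: the procedure sits in the alpha tier, and the configuration key names below are the ones listed in the question, so verify both against the GDS version you are running):

// Hypothetical depth-limited BFS; there is no documented maxCost parameter.
MATCH (source:Place {name: 'Start'})          // label and property are placeholders
CALL gds.alpha.bfs.stream('myGraph', {
  startNodeId: id(source),                    // key name as listed in the question; some versions call it startNode
  maxDepth: 5                                 // stop expanding after 5 hops
})
YIELD path
RETURN path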

Precise definition of HL7(v2) field RXE-25?

Can anyone explain what the RXE-25 field in HL7 v2 means? The description is "Give strength". I have read the official explanation, but I find it ambiguous. I am not sure whether this field should contain a) the strength of a single tablet/dose form, or b) the total strength to administer.
For example, hydroxychloroquine [HCQ] is a lupus medication that comes in 200 mg tablets. Lupus patients are frequently started on 400 mg of this per day (i.e. 2 tablets).
Let's say RXE-3 ("Give Amount - Minimum") is "2", and RXE-5 ("Give Units") is "tablet". And let's assume there are multiple tablet strengths, so we don't know which dose that is. Would one put the per-tablet strength in RXE-25 (i.e. "200" mg), or instead the entire dose (2 tablets = "400" mg)?
My understanding of all the 'Give' fields is that they represent the amount given per dose. So to answer your questions:
b) the total strength to administer
AND
put the entire dose (2 tablets = "400" mg)
Actually, I found the answer hidden in another part of the HL7 documentation, specifically under the RXO segment (RXO-18 Requested Give Strength). Per that documentation, this applies to various RX_ segments.
The example given:
One way would be: "Ampicillin 250 mg capsules, 2 capsules four times a
day." In this case the give amount would be 2, the give units would be
capsules, the strength would be 250 and the strength units would
milligrams.
So it seems the GIVE STRENGTH AMOUNT (if present, and no GIVE DRUG STRENGTH VOLUME is specified) is multiplied by the GIVE AMOUNT to come up with the total dose. So I believe the answer to the example in the question would be a) 200 mg.
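Applying that reading to the hydroxychloroquine example from the question (the RXE-26 position for the strength units is my assumption of the usual v2.x layout; verify it against your version's segment table):

RXE-3   Give Amount - Minimum : 2        (tablets per administration)
RXE-5   Give Units            : tablet
RXE-25  Give Strength         : 200      (strength of one tablet/dose form)
RXE-26  Give Strength Units   : mg       (assumed position for the units)

Total administered per dose = Give Amount x Give Strength = 2 x 200 mg = 400 mg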

Why is there this Capacity Limit on Nodes and Relationships in neo4j?

I wonder why neo4j has a capacity limit on nodes and relationships. The limit on nodes and relationships is 2^35, which is a "little" bit more than the "normal" 2^32 integer. Common SQL databases, for example MySQL, store their primary key as int (2^32) or bigint (2^64). Can you explain to me the advantages of this decision? In my opinion this is a key decision point when choosing a database.
It is an artificial limit. They are going to remove it in the not-too-distant future, although I haven't heard any official ETA.
Often enough, you run into hardware limits on a single machine before you actually hit this limit.
The current option is to manually shard your graphs to different machines. Not ideal for some use cases, but it works in other cases. In the future they'll have a way to shard data automatically--no ETA on that either.
Update:
I've learned a bit more about neo4j storage internals. The reason the limits are exactly what they are is that the id numbers are stored on disk as pointers in several places (node records, relationship records, etc.). To increase the limit by another power of 2, they'd need to add 1 byte per node record and 1 byte per relationship record; the ids are currently packed as tightly as they can be without using more bytes on disk. Learn more at this great blog post:
http://digitalstain.blogspot.com/2010/10/neo4j-internals-file-storage.html
Update 2:
I've heard that in 2.1 they'll be raising these limits by around another order of magnitude.
As of neo4j 3.0, all of these constraints are removed.
Dynamic pointer compression expands Neo4j’s available address space as needed, making it possible to store graphs of any size. That’s right: no more 34 billion node limits!
For more information visit http://neo4j.com/blog/neo4j-3-0-massive-scale-developer-productivity.
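For reference, the arithmetic behind the numbers quoted above:

2^32 = 4,294,967,296    (about 4.3 billion; the plain 32-bit integer used in the MySQL comparison)
2^35 = 34,359,738,368   (about 34 billion; the old node/relationship id limit, which is where the "34 billion node limits" wording in the 3.0 announcement comes from)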

Maximum number of globally registered processes

Is there a limit to the number of processes that can be registered globally? Or is this only limited by memory / the maximum number of atoms?
Ubuntu 12.04 and Erlang R15B01.
Good question! I'd bet on the number of atoms, if you take into account the following. The Efficiency Guide has a section on system limits:
Processes
The maximum number of simultaneously alive Erlang processes is by default 32768. This limit can be raised up to at most 268435456 processes at startup (see documentation of the system flag +P in the erl(1) documentation). The maximum limit of 268435456 processes will at least on a 32-bit architecture be impossible to reach due to memory shortage.
Distributed nodes
Known nodes
A remote node Y has to be known to node X if there exist any pids, ports, references, or funs (Erlang data types) from Y on X, or if X and Y are connected. The maximum number of remote nodes simultaneously/ever known to a node is limited by the maximum number of atoms available for node names. All data concerning remote nodes, except for the node name atom, are garbage-collected.
Also, the erl manual section describes the flag you can use to alter the number of processes in your node:
+P Number
Sets the maximum number of concurrent processes for this system. Number must be in the range 16..134217727. Default is 32768.
Since you can alter the number of concurrent processes per node, but you can't alter the number of allowed atoms, and registered process names are atoms that are replicated on every node, the atom limit should be what bounds the total number of globally registered processes.
Hope it helps :)
EDIT: Actually, turns out you can change the number of allowed atoms :)
Atoms
By default, the maximum number of atoms is 1048576. This limit can be raised or lowered using the +t option.
+t size
Set the maximum number of atoms the VM can handle. Default is 1048576.
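To tie the two flags together, a quick shell session might look like the sketch below (illustrative only: the printed values are from a recent OTP release, erlang:system_info(atom_limit) did not yet exist in the R15B01 mentioned in the question, and global:registered_names/0 lists the names currently registered globally):

$ erl +P 1048576 +t 2000000
1> erlang:system_info(process_limit).    % maximum number of simultaneous processes on this node
1048576
2> erlang:system_info(atom_limit).       % maximum number of atoms the VM will allow (OTP 20+)
2000000
3> length(global:registered_names()).    % globally registered names currently in use
0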
