Contiki find neighbors

I want to find or list all of my neighbor nodes. It could be a broadcast or unicast process between nodes. How can I find them with Contiki? Are there any functions for that?

IPv6 neighbors are stored in the list ds6_neighbors. To iterate over this list you can use the following code.
For Contiki:
#include "net/ipv6/uip-ds6.h"
uip_ds6_nbr_t *nbr;
for(nbr = nbr_table_head(ds6_neighbors);
nbr != NULL;
nbr = nbr_table_next(ds6_neighbors, nbr)) {
/* process nbr here */
}
For Contiki-NG:
#include "net/ipv6/uip-ds6-nbr.h"
uip_ds6_nbr_t *nbr;
for(nbr = uip_ds6_nbr_head();
nbr != NULL;
nbr = uip_ds6_nbr_next(nbr)) {
/* process nbr here */
}
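As an illustration of what "process nbr here" might look like, here is a minimal sketch for Contiki-NG that prints each neighbor's IPv6 address and counts the entries. It assumes uiplib_ipaddr_print() from net/ipv6/uiplib.h is available in your tree, and that the ipaddr field is exposed by uip_ds6_nbr_t as in the default configuration.

#include <stdio.h>
#include "net/ipv6/uip-ds6-nbr.h"
#include "net/ipv6/uiplib.h"   /* uiplib_ipaddr_print(); assumed available */

static void print_ipv6_neighbors(void)
{
  uip_ds6_nbr_t *nbr;
  int count = 0;
  for(nbr = uip_ds6_nbr_head(); nbr != NULL; nbr = uip_ds6_nbr_next(nbr)) {
    uiplib_ipaddr_print(&nbr->ipaddr);  /* print the neighbor's IPv6 address */
    printf("\n");
    count++;
  }
  printf("%d neighbor(s) in the ds6 table\n", count);
}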
Other network layers have their own notions of neighbors. There are TSCH neighbors, RPL neighbors (called "parents"), and link layer neighbors, each in a separate list.

Related

Merge two sorted linked lists: space complexity

I am looking at the following Geeks for Geeks problem:
Given two sorted linked lists consisting of N and M nodes respectively, the task is to merge the two lists (in place) and return the head of the merged list.
Example 1
Input:
N = 4, M = 3
valueN[] = {5,10,15,40}
valueM[] = {2,3,20}
Output: 2 3 5 10 15 20 40
Explanation: After merging the two linked lists, the merged list is 2, 3, 5, 10, 15, 20, 40.
The code below is the GFG answer. I don't understand how its space complexity is O(1). We are creating a new node, so it seems it should be O(N+M).
Node* sortedMerge(Node* head1, Node* head2)
{
    struct Node* dummy = new Node(0);
    struct Node* tail = dummy;
    while (1) {
        if (head1 == NULL) {
            tail->next = head2;
            break;
        }
        else if (head2 == NULL) {
            tail->next = head1;
            break;
        }
        if (head1->data <= head2->data) {
            tail->next = head1;
            head1 = head1->next;
        }
        else {
            tail->next = head2;
            head2 = head2->next;
        }
        tail = tail->next;
    }
    return dummy->next;
}
Could someone explain how the space complexity is O(1) here? Since we are creating a new node, shouldn't it be O(N+M)?
Why should it be O(N+M) when it creates one node? The size of that node is a constant, so one node represents O(1) extra space. Creating one node has nothing to do with the size of either input list. Note that the node is created outside of the loop.
It is actually done this way to keep the code simple, but the merge could be done even without that dummy node.
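For illustration, here is one way to do the same merge with no dummy node at all, so the only extra storage is a handful of pointers. This is a sketch, not the GFG code; the Node definition is assumed to match the snippet above (which only shows its use).

#include <utility>  // std::swap

struct Node {
    int data;
    Node* next;
    Node(int d) : data(d), next(nullptr) {}
};

Node* sortedMergeNoDummy(Node* head1, Node* head2)
{
    if (head1 == nullptr) return head2;
    if (head2 == nullptr) return head1;

    // Make head1 the list that starts with the smaller value.
    if (head2->data < head1->data)
        std::swap(head1, head2);

    Node* head = head1;   // head of the merged list
    Node* tail = head1;   // last node of the merged part
    head1 = head1->next;

    while (head1 != nullptr && head2 != nullptr) {
        if (head1->data <= head2->data) {
            tail->next = head1;
            head1 = head1->next;
        } else {
            tail->next = head2;
            head2 = head2->next;
        }
        tail = tail->next;
    }
    // Append whatever remains of the non-empty list.
    tail->next = (head1 != nullptr) ? head1 : head2;
    return head;
}

Either way the running time is O(N+M), and the extra space does not grow with the input.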

Breadth first search simple improvement

In most resources, the implementation of BFS looks like this (this is the GeeksforGeeks implementation):
// printpath() and isNotVisited() are helper functions defined elsewhere
// in the GeeksforGeeks example.
void findpaths(vector<vector<int> >& g, int src,
               int dst, int v)
{
    // create a queue which stores the paths
    queue<vector<int> > q;

    // path vector to store the current path
    vector<int> path;
    path.push_back(src);
    q.push(path);

    while (!q.empty()) {
        path = q.front();
        q.pop();
        int last = path[path.size() - 1];

        // if last vertex is the desired destination
        // then print the path
        if (last == dst)
            printpath(path);

        // traverse to all the nodes connected to
        // current vertex and push new path to queue
        for (int i = 0; i < g[last].size(); i++) {
            if (isNotVisited(g[last][i], path)) {
                vector<int> newpath(path);
                newpath.push_back(g[last][i]);
                q.push(newpath);
            }
        }
    }
}
The above implementation first adds the neighbors to the queue and only checks whether a node is the destination when its path is popped.
But we could simply check whether a neighbor is the destination at the moment it is added to the queue (instead of checking when it is that node's turn), as in the sketch below. Although it is a very minor improvement, it is still better than the original. So why does everyone use the previous method for implementing BFS?
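A sketch of the variant described above, testing for the destination when a neighbor is pushed rather than when its path is popped. It keeps the same assumptions as the snippet above (the printpath and isNotVisited helpers) and handles the trivial src == dst case up front:

void findpaths_check_on_push(vector<vector<int> >& g, int src,
                             int dst, int v)
{
    queue<vector<int> > q;

    vector<int> path;
    path.push_back(src);
    if (src == dst)            // trivial path, reported once up front
        printpath(path);
    q.push(path);

    while (!q.empty()) {
        path = q.front();
        q.pop();
        int last = path[path.size() - 1];

        for (int i = 0; i < g[last].size(); i++) {
            if (isNotVisited(g[last][i], path)) {
                vector<int> newpath(path);
                newpath.push_back(g[last][i]);
                // report the path as soon as it is formed, instead of
                // waiting for it to reach the front of the queue
                if (g[last][i] == dst)
                    printpath(newpath);
                q.push(newpath);
            }
        }
    }
}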

Coding a type of random walk in Neo4j using the Traversal Framework

I'm currently working on a graph where nodes are connected via probabilistic edges. The weight on each edge defines the probability of existence of the edge.
Here is an example graph to get you started
(A)-[0.5]->(B)
(A)-[0.5]->(C)
(B)-[0.5]->(C)
(B)-[0.3]->(D)
(C)-[1.0]->(E)
(C)-[0.3]->(D)
(E)-[0.3]->(D)
I would like to use the Neo4j Traversal Framework to traverse this graph starting from (A) and return the number of nodes that have been reached based on the probability of the edges found along the way.
Important:
Each node that is reached can only be counted once. For example, if (A) reaches (B) and (C), then (C) need not reach (B). On the other hand, if (A) fails to reach (B) but reaches (C), then (C) will attempt to reach (B).
The same goes if (B) reaches (C): (C) will not try to reach (B) again.
This is a discrete time-step process: a node will only attempt to reach a neighboring node once.
To test the existence of an edge (whether we traverse it) we can generate a random number and verify if it's smaller than the edge weight.
I have already coded part of the traversal description as follows. (Here it is possible to start from multiple nodes but that is not necessary to solve the problem.)
TraversalDescription traversal = db.traversalDescription()
        .breadthFirst()
        .relationships( Rels.INFLUENCES, Direction.OUTGOING )
        .uniqueness( Uniqueness.NODE_PATH )
        .uniqueness( Uniqueness.RELATIONSHIP_GLOBAL )
        .evaluator(new Evaluator() {
            @Override
            public Evaluation evaluate(Path path) {
                // Get current node
                Node curNode = path.endNode();
                // If the current node is a start node, it has no previous relationship,
                // so just add it to the result and keep traversing
                if (startNodes.contains(curNode)) {
                    return Evaluation.INCLUDE_AND_CONTINUE;
                }
                // Otherwise...
                else {
                    // Get the current relationship
                    Relationship curRel = path.lastRelationship();
                    // Instantiate random number generator
                    Random rnd = new Random();
                    // Get a random number (between 0 and 1)
                    double rndNum = rnd.nextDouble();
                    // Traverse the edge only if the random number is below its weight "wc"
                    if (rndNum < (double) curRel.getProperty("wc")) {
                        String info = "";
                        if (curRel != null) {
                            Node prevNode = curRel.getOtherNode(curNode);
                            info += "(" + prevNode.getProperty("name") + ")-[" + curRel.getProperty("wc") + "]->";
                        }
                        info += "(" + curNode.getProperty("name") + ")";
                        info += " :" + rndNum;
                        System.out.println(info);
                        // Keep the node and keep traversing
                        return Evaluation.INCLUDE_AND_CONTINUE;
                    } else {
                        // Don't save the node in the result and stop traversing this branch
                        return Evaluation.EXCLUDE_AND_PRUNE;
                    }
                }
            }
        });
I keep track of the number of nodes reached like so:
long score = 0;
for (Node currentNode : traversal.traverse( nodeList ).nodes())
{
    System.out.print(" <" + currentNode.getProperty("name") + "> ");
    score += 1;
}
The problem with this code is that although NODE_PATH uniqueness is set, there may be cycles, which I don't want.
Therefore, I would like to know:
Is there a solution to avoid cycles and count exactly the number of nodes reached?
And ideally, is it possible (or better) to do the same thing using PathExpander, and if yes how can I go about coding that?
Thanks
This certainly isn't the best answer.
Instead of iterating on nodes() I iterate on the paths, add each endNode() to a set, and then simply take the size of the set as the number of unique nodes.
HashSet<String> nodes = new HashSet<>();
for (Path path : traversal.traverse(nodeList))
{
    Node currNode = path.endNode();
    String val = String.valueOf(currNode.getProperty("name"));
    nodes.add(val);
    System.out.println(path);
    System.out.println("");
}
score = nodes.size();
Hopefully someone can suggest a more optimal solution.
I'm still surprised, though, that NODE_PATH did not prevent cycles from forming.
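A small variation on the sketch above (my own suggestion, not from the thread): keying the set on the node id instead of the name property avoids relying on names being unique, since Node.getId() is unique per node in the database.

HashSet<Long> reached = new HashSet<>();
for (Path path : traversal.traverse(nodeList)) {
    reached.add(path.endNode().getId());  // one entry per distinct node reached
}
long score = reached.size();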

Boost graph library breadth first search yielding incorrect predecessor map

Running breadth-first search on an unweighted, directed graph on 2 vertices where each vertex is connected to the other yields a predecessor map where the source of the breadth-first search is not its own predecessor. The following program is sufficient to produce this behavior:
#include <vector>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/breadth_first_search.hpp>

using namespace boost;
using std::vector;

enum family { one, two, N };
typedef adjacency_list<vecS, vecS, directedS> Graph;
typedef graph_traits<Graph>::vertex_descriptor Vertex;

int main() {
    Graph g(N);
    const char* name[] = { "one", "two" };
    add_edge(one, two, g);
    add_edge(two, one, g);

    vector<Vertex> p(num_vertices(g));
    breadth_first_search(g, two, visitor(make_bfs_visitor(
        record_predecessors(&p[0],
                            on_tree_edge()))));
    // At this point, p[0] == 1 and p[1] == 0
    return 0;
}
This seems to contradict the Boost Graph Library documentation. More importantly, the predecessor map should represent a spanning tree of the graph breadth-first search is run on, which is not the case when the source of the search is not its own predecessor.
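Not an answer from the thread, but a sketch of one common workaround: record_predecessors with on_tree_edge only writes an entry when a tree edge is discovered, so the source's slot keeps whatever value the vector was initialized with (zero here). Initializing every vertex as its own predecessor before the call restores the p[source] == source convention.

#include <numeric>   // std::iota
#include <vector>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/breadth_first_search.hpp>

using namespace boost;
using std::vector;

enum family { one, two, N };
typedef adjacency_list<vecS, vecS, directedS> Graph;
typedef graph_traits<Graph>::vertex_descriptor Vertex;

int main() {
    Graph g(N);
    add_edge(one, two, g);
    add_edge(two, one, g);

    vector<Vertex> p(num_vertices(g));
    std::iota(p.begin(), p.end(), Vertex(0));  // p[v] = v for every vertex

    breadth_first_search(g, two, visitor(make_bfs_visitor(
        record_predecessors(&p[0], on_tree_edge()))));

    // p[two] == two (its slot was never overwritten) and p[one] == two (tree edge).
    return 0;
}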

pugixml number of child nodes

Does a pugixml node object have a number-of-child-nodes method? I cannot find it in the documentation and had to use an iterator as follows:
int n = 0;
for (pugi::xml_node ch_node = xMainNode.child("name"); ch_node; ch_node = ch_node.next_sibling("name")) n++;
There is no built-in function to compute that directly; one other approach is to use std::distance:
size_t n = std::distance(xMainNode.children("name").begin(), xMainNode.children("name").end());
Of course, this is linear in the number of child nodes; note that computing the number of all child nodes, std::distance(xMainNode.begin(), xMainNode.end()), is also linear - there is no constant-time access to child node count.
You could use an expression based on an xpath search (no efficiency guarantees, though):
xMainNode.select_nodes( "name" ).size()
Another option is a small helper that counts children with a range-based for loop:
int children_count(pugi::xml_node node)
{
    int n = 0;
    for (pugi::xml_node child : node.children()) n++;
    return n;
}
