How to Switch Source and Target Nodes? - cytoscape

Using Cytoscape 3.8.2 for desktop, I want to edit an existing edge in my network and switch the source and target nodes. I've tried editing the columns that contain the node IDs, e.g. changing
745 (directed) 178 to 178 (directed) 745
but this edit does not change the direction of the edge in my visualisation. Thanks!

Unfortunately, there is no way to switch the source and target nodes of an existing edge in your network. On the other hand, Cytoscape typically doesn't care much about edge direction, so you could override the visualization (using the bypass column) and swap how the source and target are drawn.
-- scooter

If you use the "draw edge" tool you can control which node is the source and which node is the target.
Right-click on the node you want to be the source
In the context menu that pops up expand "Add >" and click "Edge"
A line will appear connecting the mouse pointer to the source node
Click on the desired target node to create the new edge

Related

How do I get the next node id/name along the path the agent is moving?

Assume that an agent is moving from Node1 to Node3 in the following network:
Node1 - PathA - Node2 - PathB - Node3
How do I access the next node the agent will pass?
The actual task is for bus agents to move along the yellow paths and pick up / drop off passengers and crews at the corresponding stands (nodes); one of the tasks requires me to acquire the "next node".
If you want full control of path-finding and nodes, check this tutorial (fair warning: it is quite advanced and goes well beyond AnyLogic basics): https://www.benjamin-schumann.com/blog/2022/8/6/taking-control-of-your-network-agent-based-pathfinding
When you define a destination node, the only thing you can access once your agent starts moving is the destination position, using getTargetX() for instance.
However, AnyLogic doesn't give you access to the sequence of nodes the agent will traverse during its movement.
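For illustration, a minimal sketch of what agent code can see while a movement is in progress (AnyLogic/Java; assumes a bus agent that has already been sent to a destination node):

// In the agent's action code, while it is moving:
double destX = getTargetX(); // x-coordinate of the movement's final destination
double destY = getTargetY(); // y-coordinate of the movement's final destination
traceln("Heading towards (" + destX + ", " + destY + ")");
// There is no built-in getter for the intermediate network nodes the agent
// will pass through; only the final target of the current movement is exposed.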

Distributing in-memory linked list

I have a program that is built around a singly linked list. Several other programs create data and send it to this linked list module to be added. As long as RAM is available, the program works as intended. Periodically (about once a year) I archive the entire linked list to disk; due to a requirement, I archive all of it. So far so good.
What happens if I want to add a new node to the list while RAM is full and I haven't yet archived and freed the memory? This might occur when the producer count goes up, or, regardless of producer count, simply because more data is created depending on where the program is used. I couldn't find a clear solution for scaling the in-memory linked list. There is a workaround in my head, but I don't even know whether it works, so I thought it better to ask here.
When RAM starts to get almost full, I would create a new instance of the linked list program (just another machine in the cloud or a new physical computer on premises, whatever).
I do have a service discovery module (something like ZooKeeper); this discovery module will detect the newly created machine and add it to the list.
When the first instance is near its limit, it checks whether another instance is available; if there is, it relays the node to the next instance and updates its last node's next pointer to something special. If you want to traverse the list from start to finish across all the machines, then every time you reach this special node, it holds the information about which machine has the next node, and the traversal continues from that machine.
Since this is not a hash map or anything of that nature, I can't just replicate the service and, for example, route an incoming request to a particular machine based on a given key.
Rather than archiving part of the old data, loading it back into RAM, and continuing like that, I thought it would be better to have the last pointer point to a different machine and continue reading from that machine. A network call seemed acceptable because this program will be used on an intranet, but I still couldn't find a solid solution on paper.
Is there an example of this that I can study to try to find a better solution? Is this solution feasible?
An example:
Machine 1:
1st node : [data: x, *next: 2nd node address],
2nd node : [data: 123, *next: 3rd node address],
...
// at this point RAM is almost full
// receive the next instance's IP
(n-1)th node : [data: 987, *next: nth node address],
nth node : [data: x2t, type: LastNodeInMachine, *next: nullptr]
Machine 2:
1st node == (n+1)th node : [data: x, *next: 2nd node address],
... and so on
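In code, the node structure I have in mind would look roughly like this (Java; all names are hypothetical and the networking is omitted):

// A node either points to the next node on this machine, or marks the end of
// the local segment and records which machine holds the continuation.
class ListNode {
    String data;
    ListNode next;            // next node on this machine, or null
    String nextMachineAddr;   // set only on the last local node, e.g. "10.0.0.7:9000"

    boolean isLastOnMachine() {
        return next == null && nextMachineAddr != null;
    }
}

Traversal would walk the local nodes as usual; when it reaches a node where isLastOnMachine() is true, it would open a connection to nextMachineAddr and continue with the first node of that machine's segment.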

How to get all the preferred parents up to the root for a certain node in Contiki RPL classic?

I'm using Contiki 3.0 and I would like to find all the preferred parents up to the root for a certain node. For example, node 1 has preferred parent node 5, node 5 has preferred parent node 8, and node 8 is connected directly to the root.
How can I find or print these preferred parents like this: 1 -> 5 -> 8 -> root?
I'm using this code to get the preferred parent:
PRINT6ADDR(rpl_get_parent_ipaddr(dag->preferred_parent));
Many Thanks
Hanin
You cannot print this information on a node, since RPL is a distance vector protocol, not a link state protocol. A network node does not have enough information to know the full routing path to the root node; it has only a local view of the network, limited to its immediate neighbors.

Vimdiff equivalent for Select Lines

When used as a mergetool for Git, what is the equivalent in vimdiff to kdiff3's "Select Line(s) From A/B/C"? Is there a shortcut for that like Ctrl+1/2/3 in kdiff3?
Based on the Vim Reference Manual section for vimdiff, there are no built-in commands with the full functionality of Ctrl+1/2/3 in vimdiff. What I mean by "full functionality" is that in kdiff3 you could do the commands Ctrl+2, Ctrl+3, Ctrl+1 in that order, and in the merged version you end up with the diff lines from buffer B followed by the lines from buffer C followed by the lines from buffer A.
There is, however, a command for performing a more limited version of the functionality available in kdiff3. If you only want to use lines from one of your input files, then the command [count]do is available, where [count] is typically 1, 2, or 3 depending on which vim buffer you want to pull the lines from. (do stands for "diff obtain".)
For example, if you had the following merge situation:
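Roughly, the default four-pane layout git uses for the vimdiff mergetool looks like this (the contents "monkey", "pig", and "whale" and the buffer numbering are illustrative; the exact numbering can vary with your setup):

+-------------+-------------+-------------+
| 1: LOCAL    | 2: BASE     | 3: REMOTE   |
|   ...       |   ...       |   ...       |
|   monkey    |   pig       |   whale     |
+-------------+-------------+-------------+
| 4: MERGED (the file you edit and save)  |
+-----------------------------------------+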
then you could move your cursor to the merge conflict in the bottom buffer and type 1do if you wanted "monkey", 2do if you wanted "pig", or 3do if you wanted "whale".
If you do need to grab lines from multiple buffers when merging with vimdiff, then my recommendation would be to set the Git config option merge.conflictstyle to diff3 (git config merge.conflictstyle diff3) so that the common ancestor appears in the merged buffer of the file. Then just move the lines around to your liking using vim commands and delete the diff notations and any unused lines.
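With diff3-style conflicts, the conflicted region in the merged buffer looks roughly like this (the labels after the markers are placeholders; the section between ||||||| and ======= is the common ancestor):

<<<<<<< HEAD
monkey
||||||| base
pig
=======
whale
>>>>>>> other-branch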

How do I remove an extra node?

I have a group of Erlang nodes that are replicating their data through Mnesia's "extra_db_nodes"... I need to upgrade hardware and software, so I have to detach some nodes as I make my way from node to node.
How does one remove a node and still preserve the data that was inserted?
[update] Removing nodes is as important as adding them. Over time, as your cluster grows, it must also contract. If not, Mnesia is going to be busy trying to send data to nonexistent nodes, filling up queues and keeping the network busy.
[final update] After poring through the Erlang/Mnesia source code, I was able to determine that it is not possible to completely disassociate nodes. While del_table_copy removes the linkage between tables, it is incomplete. I would close this question, but none of the close descriptions are adequate.
I wish I had found this a long time ago: http://weblambdazero.blogspot.com/2008/08/erlang-tips-and-tricks-mnesia.html
basically, with a properly functioning cluster:
log in to the node to be removed
stop mnesia
mnesia:stop().
log in to a different node in the cluster
delete the schema copy
mnesia:del_table_copy(schema, 'node@host.domain').
I'm extremely late to the party, but came across this info in the doc when looking for a solution to the same problem:
"The function call
mnesia:del_table_copy(schema,
mynode#host) deletes the node
'mynode#host' from the Mnesia system.
The call fails if mnesia is running on
'mynode#host'. The other mnesia nodes
will never try to connect to that node
again. Note, if there is a disc
resident schema on the node
'mynode#host', the entire mnesia
directory should be deleted. This can
be done with mnesia:delete_schema/1.
If mnesia is started again on the the
node 'mynode#host' and the directory
has not been cleared, mnesia's
behaviour is undefined."
(http://www.erlang.org/doc/apps/mnesia/Mnesia_chap5.html#id74278)
I think the following might do what you desire:
%% Collect every table except the schema itself
AllTables = mnesia:system_info(tables),
DataTables = lists:filter(fun(Table) -> Table =/= schema end, AllTables),
%% Remove the copy of Table held by Node, if Node actually holds one
RemoveTableCopy = fun(Table, Node) ->
    Nodes = mnesia:table_info(Table, ram_copies) ++
            mnesia:table_info(Table, disc_copies) ++
            mnesia:table_info(Table, disc_only_copies),
    case lists:member(Node, Nodes) of
        true  -> mnesia:del_table_copy(Table, Node);
        false -> ok
    end
end,
[RemoveTableCopy(Tbl, 'gone@gone_host') || Tbl <- DataTables].

%% Stop Mnesia on the departing node, wipe its on-disc schema
%% (mnesia:delete_schema/1 takes a list of nodes), then drop its schema copy
rpc:call('gone@gone_host', mnesia, stop, []),
rpc:call('gone@gone_host', mnesia, delete_schema, [['gone@gone_host']]),
RemoveTableCopy(schema, 'gone@gone_host').
Though, I haven't tested it since my scenario is slightly different.
I've certainly used this method to do this (it relies on the mnesia:del_table_copy/2 approach). See removeNode/1 below:
-module(tool_bootstrap).
-export([bootstrapNewNode/1, closedownNode/0,
         finalBootstrap/0, removeNode/1]).
-include_lib("records.hrl").
-include_lib("stdlib/include/qlc.hrl").

bootstrapNewNode(Node) ->
    %% Make the given node part of the family and start the cloud on it
    mnesia:change_config(extra_db_nodes, [Node]),
    %% Now make the other node set things up
    rpc:call(Node, tool_bootstrap, finalBootstrap, []).

removeNode(Node) ->
    rpc:call(Node, tool_bootstrap, closedownNode, []),
    mnesia:del_table_copy(schema, Node).

finalBootstrap() ->
    %% Code removed to actually copy over my tables etc...
    application:start(cloud).

closedownNode() ->
    application:stop(cloud),
    mnesia:stop().
If you have replicated the table (added table copies) on nodes other than the one you're removing, then you're already fine - just remove the node.
If you wanted to be slightly tidier you'd delete the table copies from the node you're about to remove first via mnesia:del_table_copy/2.
Generally, mnesia gracefully handles node loss and detects node rejoin (rebooted nodes obtain new table copies from nodes that kept running; nodes that didn't reboot are detected as a network partition event). Mnesia does not consume CPU or network traffic for nodes that have gone down. I think, though I haven't confirmed it in the source, that mnesia won't automatically reconnect to nodes that have gone down - the node that goes down is expected to reboot (mnesia) and reconnect.
mnesia:add_table_copy/3, mnesia:move_table_copy/3 and mnesia:del_table_copy/2 are the functions you should look at for live schema management.
The extra_db_nodes parameter should only be used when initialising a new DB node - once a new node has a copy of the schema it doesn't need the extra_db_nodes parameter.

Resources