How submodules communicate - connection

I am new to OMNeT++ and there is something unclear to me. I know that we have compound modules, that inside a compound module we can nest other modules/submodules, and so on down to simple modules. I know that if I have two modules I need to use gates to pass messages between them: the "out" gate of one module connected to the "in" gate of the other through a channel. What I don't understand is how communication flows inside a compound module. I know that I must use gates in the same manner, but do I need a communication channel? I found that there is a @directIn option for gates; is this type of gate used for inner communication between modules/submodules?

Connections between submodules inside a compound module use IdealChannel by default, and they do not require the @directIn annotation. According to the OMNeT++ Simulation Manual:
@directIn should be used when the gate is an input gate that is intended for being used as a target for the sendDirect() method.


How to discover the high-performance network interface on a Linux HPC cluster?

I have a distributed program that communicates via ZeroMQ and runs on HPC clusters.
ZeroMQ uses TCP sockets, so by default on an HPC cluster the communication will go over the admin network, so I have introduced an environment variable, read by my code, to force the communication onto a particular network interface.
With InfiniBand (IB), it is usually ib0. But there are cases where another IB interface is used for the parallel file system, or on Cray systems the interface is ipogif; on some non-HPC systems it can be eth1, eno1, p4p2, em2, enp96s0f0, or whatever...
The problem is that I need to ask the administrator of each cluster for the name of the network interface to use, while codes using MPI don't need to, because MPI "knows" which network to use.
What is the most portable way to discover the name of the high-performance network interface on a Linux HPC cluster? (I don't mind writing a small MPI program for this if there is no simple way.)
There is no simple way, and I doubt a complete solution exists. For example, Open MPI comes with an extensive set of ranked network communication modules and tries to instantiate all of them, selecting in the end the one with the highest rank. The idea is that the ranks roughly reflect the speed of the underlying networks, and that if a given network type is not present, its module will fail to instantiate; faced with a system that has both Ethernet and InfiniBand, Open MPI will pick InfiniBand because its module has higher precedence. This is why larger Open MPI jobs start relatively slowly, and it is definitely not foolproof - in some cases one has to intervene and manually select the right modules, especially if a node has several network interfaces or InfiniBand HCAs and not all of them provide node-to-node connectivity. This is usually configured system-wide by the system administrator or the vendor, and it is why MPI "just works" (pro tip: in a not-so-small number of cases it actually doesn't).
You may copy the approach taken by Open MPI and develop a set of detection modules for your program. For TCP, spawn two or more copies on different nodes, list their active network interfaces and the corresponding IP addresses, match the network addresses, bind on all interfaces on one node, then try to connect to it from the other node(s). Upon successful connection, run something like the TCP version of NetPIPE to measure the network speed and latency and pick the fastest network. Once you have this information from the initial small set of nodes, it is very likely that the same interface is used on all other nodes too, since most HPC systems keep their nodes' network configuration as homogeneous as possible.
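As an illustration of the enumeration step only (sketched here in Erlang; inet:getifaddrs/0 wraps getifaddrs(3), and the "ib" name prefix is a heuristic, not a guarantee):
%% List interfaces that are up and whose names suggest IPoIB ("ib0", ...).
%% Real detection should still fall back to the bind/connect/benchmark
%% procedure described above.
candidate_ifaces() ->
    {ok, IfList} = inet:getifaddrs(),
    [Name || {Name, Opts} <- IfList,
             lists:member(up, proplists:get_value(flags, Opts, [])),
             lists:prefix("ib", Name)].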
If there is a working MPI implementation installed, you can use it to launch the test program. You may also enable debug logging in the MPI library and parse the output, but this requires that the target system has an MPI implementation supported by your log parser. Also, most MPI libraries talk to InfiniBand or whatever other high-speed network there is through its native API, and will not tell you which interface is the IP-over-whatever one, because they won't use it at all (unless configured otherwise by the system administrator).
Q: What is the most portable way to discover the name of the high-performance network interface on a Linux HPC cluster?
This sits in a gray zone: it tries to solve a multi-faceted problem spanning site-specific (technical) hardware interface naming and each site's non-technical, weakly maintained administrative preferences about which interface should be used.
As-is State:
ZeroMQ can (as per RFC 37 / ZMTP v3.0+) specify <hardware(interface)>:<port>/<service> details:
zmq_bind (server_socket, "tcp://eth0:6000/system/name-service/test");
And:
zmq_connect (client_socket, "tcp://192.168.55.212:6000/system/name-service/test");
yet it has no means, to my knowledge, of reverse-engineering the primary use of such an interface in the holistic context of the HPC site and its hardware configuration.
So your idea of first pre-testing the administrative mappings via an MPI-based tool, and then letting the ZeroMQ deployment use those externally detected configuration details (if they are indeed auto-detectable, as you assumed above) for proper (preferred) interface usage, seems sound to me.
The Safe Way to Go:
Asking the HPC-infrastructure support team (who are responsible for knowing all of the above and are trained to help scientific teams use the HPC in the most productive manner) would be my preferred way to go.
Disclaimer:
Sorry if this does not help your wish to read & auto-detect all the needed configuration details (a universal black-box HPC-ecosystem detection and auto-configuration strategy would hardly be a trivial one-liner, I guess, would it?)

ejabberd inter-module communication

I want to know whether it is possible for one module to obtain data from another module. I am using ejabberd server 15.10, and I implemented my modules in Erlang.
Here is the case:
I have a module that filters messages: mod_filter
I have another module that makes some calculations while the server is running: mod_calculate
Is it possible to get fresh data from mod_calculate every time the ejabberd server filters a message in mod_filter?
Data isn't stored in modules but in variables, and you won't have access to the internal variables that code in one module operates on unless that module somehow exposes them to the external world.
The module may have some functions already exported. Check with:
rp(mod_calculate:module_info()).
This will show you all functions exported in the module. Some of those functions may expose the variables from the module to other modules. If not, then you would need to add such functions and call them from mod_filter.
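As a rough sketch of what such a function could look like, assuming mod_calculate kept its latest results in a public ETS table (the table name, store/1, and latest/0 are made-up illustrations, not ejabberd APIs):
-module(mod_calculate).
-export([start/2, store/1, latest/0]).

%% Create a public named ETS table when the module starts.
start(_Host, _Opts) ->
    ets:new(mod_calculate_state, [named_table, public, set]),
    ok.

%% Called from the calculation code whenever new results are ready.
store(Results) ->
    ets:insert(mod_calculate_state, {latest, Results}),
    ok.

%% The getter that mod_filter can call as mod_calculate:latest().
latest() ->
    case ets:lookup(mod_calculate_state, latest) of
        [{latest, Results}] -> Results;
        [] -> undefined
    end.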
What @Amiramix states is accurate, but it's not the whole picture.
There's a low-coupling mechanism for communicating events between modules in ejabberd: the hooks and handlers concept. The link points at the MongooseIM documentation, but the mechanism is more or less the same in both codebases.
In general, one module can run a hook, which is like a function call, but depending on the registered handlers it may or may not result in some action(s) being carried out. Other modules can register handlers for whichever hooks they choose. If you're authoring the modules in question, this mechanism might give you the communication channel you need.
To make things more concrete: each time mod_filter needs some information that only mod_calculate has access to, it can run ejabberd_hooks:run_fold/4 with a custom hook name. If mod_calculate registers a handler for that hook (usually in its start function), it can return data relevant to mod_filter. Moreover, any module can implement a handler for that hook, so mod_filter and mod_calculate aren't coupled the way they would be with a direct function call (like mod_calculate:some_function(...)).
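A minimal sketch of this wiring, assuming a made-up hook name calculate_data and a hypothetical fetch_latest_results/0 helper inside mod_calculate:
%% In mod_calculate: register a handler when the module starts,
%% and remove it again on stop.
start(Host, _Opts) ->
    ejabberd_hooks:add(calculate_data, Host, ?MODULE, get_data, 50),
    ok.

stop(Host) ->
    ejabberd_hooks:delete(calculate_data, Host, ?MODULE, get_data, 50),
    ok.

%% The handler receives the accumulator and returns the folded value.
get_data(_Acc) ->
    fetch_latest_results().

%% In mod_filter: run the hook wherever fresh data is needed; 'undefined'
%% comes back unchanged if no handler is registered.
Data = ejabberd_hooks:run_fold(calculate_data, Host, undefined, []).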

Using a custom Dataflow unbounded source on DirectPipelineRunner

I'm writing a custom Dataflow unbounded data source that reads from Kafka 0.8. I'd like to run it locally using the DirectPipelineRunner. However, I'm getting the following stack trace:
Exception in thread "main" java.lang.IllegalStateException: no evaluator registered for Read(KafkaDataflowSource)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner$Evaluator.visitTransform(DirectPipelineRunner.java:700)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:219)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:215)
at com.google.cloud.dataflow.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:102)
at com.google.cloud.dataflow.sdk.Pipeline.traverseTopologically(Pipeline.java:252)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner$Evaluator.run(DirectPipelineRunner.java:662)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner.run(DirectPipelineRunner.java:374)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner.run(DirectPipelineRunner.java:87)
at com.google.cloud.dataflow.sdk.Pipeline.run(Pipeline.java:174)
This makes some sense, as I never registered an evaluator for my custom source.
Reading https://github.com/GoogleCloudPlatform/DataflowJavaSDK, it seems that only evaluators for bounded sources are registered. What's the recommended way to define and register an evaluator for a custom unbounded source?
DirectPipelineRunner currently runs over bounded input only. We are actively working on removing this restriction, and expect to release it shortly.
In the meanwhile, you can trivially turn any UnboundedSource into a BoundedSource, for testing purposes, by using withMaxNumRecords, as in the following example:
// UnboundedSource has an output type and a checkpoint-mark type parameter.
UnboundedSource<String, ?> unboundedSource = ...; // make a Kafka source
// Read.from(...).withMaxNumRecords(n) wraps the unbounded source in a
// bounded one, which DirectPipelineRunner knows how to evaluate.
PCollection<String> boundedKafkaCollection =
    p.apply(Read.from(unboundedSource).withMaxNumRecords(10));
See this issue on GitHub for more details.
Separately, there are several ongoing efforts to contribute a Kafka connector. You may want to engage with us and the other contributors about that via our GitHub repository.

Need help getting and setting call state in Avaya one-X Agent

The PCs in my area at work have Avaya one-X Agent 2.5.7.6.
I am writing a program to automate some commonly used call functions. I have the Avaya one-X Agent 2.5 API guide, and using it I have managed to interface with one-X and perform some of the functions I need (dialing numbers, answering/releasing calls, putting calls on hold).
Nevertheless, there are some additional things I need to do that the guide doesn't mention. Specifically, I need to be able to:
query and set the work state (auto-in/ready, ACW, and some of the aux modes)
transfer the current call to one of several commonly dialed numbers.
Can you point me to any documentation or links where I can find information about these operations?
The one-X Agent HTTP API does not support these operations.
This functionality can be implemented using the Avaya AES (Application Enablement Services) server-side API.
You could use DMCC (which has bindings for different languages as well as a language-agnostic XML interface), which implements the CSTA ECMA-269 industry standard.
Specifically, you'll need the Get/Set Agent State methods to control the work state and the Single Step Transfer Call method to redirect a call. You'll need DMCC version 6.x for the agent-state functionality.

Erlang general question on sockets

I have a question about a project I have to implement for my Distributed Systems course.
The project consists of designing and implementing a library that provides a reliable multicast service to user processes. All processes belong to a group, and a message sent by a member process goes to all members of the group. The sender is excluded from the recipient list.
This seems quite easy to implement in Erlang, thanks to its message-passing structure. More points are given if you use RPC calls instead of a normal socket-based implementation.
Now my question is this: one of the mandatory requirements of this project is that sockets are not kept open when there is no communication going on between processes.
Our course is taught in C, but we are free to use any language we like. Can I satisfy this constraint using Erlang nodes and RPC calls?
Thanks in advance.
Yes. The rpc module even has multicall, which takes a list of nodes and will do exactly what you described. It won't hold your sockets open when it's not using them either.
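A minimal sketch, assuming each group member is an Erlang node and exports a hypothetical my_group:handle_message/1 callback:
%% Multicast Msg to every other node via a single rpc:multicall/4;
%% nodes/0 already excludes the local (sending) node.
multicast(Msg) ->
    Members = nodes(),
    {Replies, BadNodes} = rpc:multicall(Members, my_group, handle_message, [Msg]),
    {Replies, BadNodes}.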
Despite what the other answers say, Erlang's default behavior does not satisfy your constraints.
A typical network of Erlang nodes using Erlang distribution will remain densely connected (every node connected to every other node) with TCP sockets open even when you're not using them. You will either have to use -connect_all false and manage opening/closing the connections to other nodes yourself, or you will have to develop your own distribution protocol. I would recommend the latter, especially since you are learning. The trick to make it easy is to use term_to_binary and binary_to_term.
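If you do roll your own, the one-shot core is small. A rough sketch (the port and the {packet, 4} framing are assumptions): open a TCP connection per message, ship one term, and close the socket again, so nothing stays open between messages:
%% Sender: connect, transfer a single Erlang term, disconnect.
%% {packet, 4} makes gen_tcp length-prefix each message for us.
send_term(Host, Port, Term) ->
    {ok, Sock} = gen_tcp:connect(Host, Port, [binary, {packet, 4}, {active, false}]),
    ok = gen_tcp:send(Sock, term_to_binary(Term)),
    gen_tcp:close(Sock).

%% Receiver: read one length-prefixed binary and decode it back into a term.
recv_term(Sock) ->
    {ok, Bin} = gen_tcp:recv(Sock, 0),
    binary_to_term(Bin).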
