I was wondering if there is any network protocol that deals with the transmission of executable instructions.
Host1 Host2
SEND(instructions) ---------------> EXECUTE(instructions)
Furthermore, is there any software that allows a program from Host1 to be executed on another remote machine Host2 which has the same architecture?
Yes. What you're looking for are RPC (Remote Procedure Call) protocols. Some of them, like SOAP, are open and implemented by a bunch of different libraries in various languages. Others are closed, or specific to a single language.
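To make that concrete, here is a minimal sketch using Go's standard net/rpc package, purely as an illustration of the RPC idea: Host2 registers a procedure in advance, and Host1 invokes it over TCP. Note that RPC does not ship arbitrary code across the wire; it only calls procedures that already exist on the remote host. The Executor type, the Run method, port 4321 and the "run a local command" behaviour are all invented for this example.

    // Host2: registers a procedure that remote callers can invoke over TCP.
    package main

    import (
        "net"
        "net/rpc"
        "os/exec"
    )

    // Args names a command that already exists on Host2.
    type Args struct {
        Name string
        Argv []string
    }

    // Executor is the RPC receiver; its exported methods become callable remotely.
    type Executor struct{}

    // Run executes the named command locally and returns its combined output.
    func (Executor) Run(args Args, out *string) error {
        b, err := exec.Command(args.Name, args.Argv...).CombinedOutput()
        *out = string(b)
        return err
    }

    func main() {
        if err := rpc.Register(Executor{}); err != nil {
            panic(err)
        }
        ln, err := net.Listen("tcp", ":4321")
        if err != nil {
            panic(err)
        }
        rpc.Accept(ln) // serve RPC requests until the process is killed
    }

On Host1 the call is rpc.Dial("tcp", "host2:4321") followed by client.Call("Executor.Run", Args{Name: "uname", Argv: []string{"-a"}}, &out). gRPC, SOAP, or any other RPC protocol follows the same register-then-invoke pattern.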
I have a requirement to build two applications (in Golang): the first application just receives data via UART and sends it to the second application, and the second application should receive the data and process it.
I have already completed receiving data via UART in the first application; now I'm looking for a better way to get the data from the first module to the second module. They both run as Docker containers and share the same Docker network.
I was thinking of creating a REST API in the second application so the first application would simply send the data with an HTTP call, but is there a better way to do this? Is there another option that can take advantage of the Docker network?
In general, yes, sockets are what you need: plain TCP/UDP, an HTTP server (RESTful API or not), gRPC, etc.
Or you can start another container running a message queue (NATS, Kafka, RabbitMQ, etc.) and write pub/sub logic. Or you can use a database.
Or you can mount a shared Docker volume between both containers and communicate via files.
None of these is specific to Golang; they will work with any language.
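For the plain-HTTP option, here is a minimal sketch of the receiving side in Go; the container name app2, port 8080 and the /ingest path are invented, but on a shared Docker network the service/container name resolves through Docker's built-in DNS:

    // Second application: accepts raw bytes over HTTP and hands them to processing.
    package main

    import (
        "io"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/ingest", func(w http.ResponseWriter, r *http.Request) {
            data, err := io.ReadAll(r.Body)
            if err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }
            log.Printf("received %d bytes", len(data)) // process(data) would go here
            w.WriteHeader(http.StatusNoContent)
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

The sending side is then a single call from the UART reader, e.g. http.Post("http://app2:8080/ingest", "application/octet-stream", bytes.NewReader(frame)). If you later need delivery guarantees or fan-out, swapping this for gRPC or a message queue does not change the overall shape.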
I'd like to make a small Windows Service that would be shut down most of the time, but would be automatically activated when an incoming TCP (REST?) connection arrives. I do not want the service to be running 24/7 just in case (albeit that might turn out to be the least evil).
There were projects porting inetd and xinetd to Windows, but they are all abandoned, and introducing yet another dependency for a lean program is wrong.
However, the fact that they were abandoned made me think this is now standard Windows functionality?
The Service Triggers documentation seems both incomplete and self-contradictory.
SERVICE_TRIGGER_SPECIFIC_DATA_ITEM claims that for SERVICE_TRIGGER_TYPE_NETWORK_ENDPOINT there is
A SERVICE_TRIGGER_DATA_TYPE_STRING that specifies the port, named pipe, or RPC interface for the network endpoint.
Sounds great, but it totally lacks any example of how to specify a port and nothing but the port. But then:
The SERVICE_TRIGGER structure seems to claim there is no way to "wait" on TCP/UDP connections.
SERVICE_TRIGGER_TYPE_NETWORK_ENDPOINT - The event is triggered when a packet or request arrives on a particular network protocol.
So far so good... But then.
The pTriggerSubtype member specifies one of the following values: RPC_INTERFACE_EVENT_GUID or NAMED_PIPE_EVENT_GUID. The pDataItems member specifies an endpoint or interface GUID
Dead end. You have no choice but either the Windows-specific flavor of RPC or Windows-specific named pipes. And you can only specify a GUID as a data item, not a port, as was stated above.
I wonder which part of the documentation is wrong. Can the ChangeServiceConfig2 API be used for the seemingly simple aim of starting a service in response to a TCP packet arriving at a specific port? If yes, how?
There is also SERVICE_TRIGGER_TYPE_FIREWALL_PORT_EVENT, but the scarce documentation seems to say the functionality is the opposite: the trigger is not a remote packet incoming from a client, but a local server binding to a port.
Some alternative avenues, from a quick search:
Internet Information Server/Services seems to have "Windows Process Activation Service" and "WWW Publishing Service" components, but adding a dependency on the heavy IIS feels wrong too. It can also interfere with other HTTP servers (for example WAMP systems). Imagining explaining to non-techie persons how to diagnose and resolve clashes over TCP ports makes me shiver.
I wonder, though, if that kind of on-demand service start can be done by programming only the http.sys driver, without the rest of IIS.
The COM protocol seems to have an on-demand server activation feature, and hopefully so does DCOM, but I do not want a DCOM dependency. It seems much easier today to find documentation, programs, and network administrators for maintaining plain TCP or HTTP connections than for DCOM. I fear relying on DCOM would become more and more fragile in practice, just like relying on DDE.
DCOM and NT RPC would also make the program non-portable if I later decided to use operating systems other than Windows.
Really, starting a service/daemon on an incoming network connection seems such an obvious task that there has to be out-of-the-box functionality in Windows?
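If nothing built-in covers plain TCP ports, the "least evil" mentioned above can at least be kept very lean: a tiny always-on launcher that only listens and starts the heavy service on the first incoming connection, inetd-style. A rough Go sketch, with the service name MyHeavyService and port 8080 as placeholders; note the launcher must either use a different port from the real service or proxy the traffic itself:

    // Minimal inetd-style launcher: wait for a connection, then start the real
    // Windows service via sc.exe. "MyHeavyService" and port 8080 are placeholders.
    package main

    import (
        "log"
        "net"
        "os/exec"
    )

    func main() {
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                continue
            }
            conn.Close() // the client retries against the real service once it is up
            if out, err := exec.Command("sc", "start", "MyHeavyService").CombinedOutput(); err != nil {
                log.Printf("sc start failed: %v: %s", err, out)
            }
        }
    }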
I have a distributed program which communicates with ZeroMQ that runs on HPC clusters.
ZeroMQ uses TCP sockets, so by default on HPC clusters the communications will use the admin network, so I have introduced an environment variable read by my code to force communication on a particular network interface.
With Infiniband (IB), usually it is ib0. But there are cases where another IB interface is used for the parallel file system, or on Cray systems the interface is ipogif, on some non-HPC systems it can be eth1, eno1, p4p2, em2, enp96s0f0, or whatever...
The problem is that I need to ask the administrator of the cluster the name of the network interface to use, while codes using MPI don't need to because MPI "knows" which network to use.
What is the most portable way to discover the name of the high-performance network interface on a Linux HPC cluster? (I don't mind writing a small MPI program for this if there is no simple way.)
There is no simple way, and I doubt a complete solution exists. For example, Open MPI comes with an extensive set of ranked network communication modules and tries to instantiate all of them, selecting in the end the one that has the highest rank. The idea is that the ranks somehow reflect the speed of the underlying network and that if a given network type is not present, its module will fail to instantiate, so when faced with a system that has both Ethernet and InfiniBand, it will pick InfiniBand as its module has higher precedence. This is why larger Open MPI jobs start relatively slowly, and it is definitely not foolproof - in some cases one has to intervene and manually select the right modules, especially if the node has several network interfaces or InfiniBand HCAs and not all of them provide node-to-node connectivity. This is usually configured system-wide by the system administrator or the vendor and is why MPI "just works" (pro tip: in a not-so-small number of cases it actually doesn't).
You may copy the approach taken by Open MPI and develop a set of detection modules for your program. For TCP, spawn two or more copies on different nodes, list their active network interfaces and the corresponding IP addresses, match the network addresses and bind on all interfaces on one node, then try to connect to it from the other node(s). Upon successful connection, run something like the TCP version of NetPIPE to measure the network speed and latency and pick the fastest network. Once you've gotten this information from the initial small set of nodes, it is very likely that the same interface is used on all other nodes too, since most HPC systems are as homogeneous as possible when it comes to their nodes' network configuration.
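The interface-enumeration step of that approach is cheap in any language; for example, in Go (shown only as an illustration, since your program may use ZeroMQ from a different language), listing the up, non-loopback interfaces and their addresses on each node looks like this:

    // Print every active, non-loopback interface with its addresses; these are
    // the candidates to cross-match and probe from the other node(s).
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            if ifc.Flags&net.FlagUp == 0 || ifc.Flags&net.FlagLoopback != 0 {
                continue
            }
            addrs, _ := ifc.Addrs()
            for _, a := range addrs {
                fmt.Printf("%s\t%s\n", ifc.Name, a) // e.g. "ib0   10.1.0.12/16"
            }
        }
    }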
If there is a working MPI implementation installed, you can use it to launch the test program. You may also enable debug logging in the MPI library and parse the output, but this will require that the target system has an MPI implementation supported by your log parser. Also, most MPI libraries use native InfiniBand or whatever high-speed network API there is and will not tell you which is the IP-over-whatever interface, because they won't use it at all (unless configured otherwise by the system administrator).
Q : What is the most portable way to discover the name of the high-performance network interface on a linux HPC cluster?
This seems to be in a gray zone: trying to solve a multi-faceted problem that spans site-specific hardware (technical) interface naming and the sites' non-technical, weakly administratively maintained, preferred ways of use.
As-is State:
ZeroMQ can (as per RFC 37 / ZMTP v3.0+) specify <hardware(interface)>:<port>/<service> details:
zmq_bind (server_socket, "tcp://eth0:6000/system/name-service/test");
And:
zmq_connect (client_socket, "tcp://192.168.55.212:6000/system/name-service/test");
yet it has no means, to my knowledge, to reverse-engineer the primary use of such an interface in the holistic context of the HPC site and its hardware configuration.
It seems to me that your idea of pre-testing the administrative mappings with an MPI-based tool first, and then letting the ZeroMQ deployment use these externally detected (if indeed auto-detectable, as you assumed above) configuration details for proper (preferred) interface usage, is a sound one.
The Safe Way to Go:
Asking the HPC-infrastructure Support Team (who are responsible for knowing all of the above and are trained to help scientific teams use the HPC in the most productive manner) would be my preferred way to go.
Disclaimer:
Sorry if this does not help your wish to read & auto-detect all the needed configuration details (a universal BlackBox-HPC-ecosystem detection and auto-configuration strategy would hardly be a trivial one-liner, I guess, would it?).
I am looking to write a program that will connect to many computers from a single computer. Sort of like a "Command Center" where you can monitor all the remote systems remotely on a single PC.
My plan is to have multiple Client Sockets on a form. They will connect to individual PCs remotely, so they can request information from them to display in the window. The remote PCs will be hosts. Is this possible?
Direct answer to your question: Yes, you can do that.
Long answer: Yes, you can do that but are you sure your design is correct? Are you sure you want to create parallel connections, one to each client? Probably you don't! If yes, then you probably want to run them in separate threads.
If you want to send some commands from time to time (and you are not doing some kind of constant video monitoring) why don't you just use one connection and 'switch' between clients?
I can't tell you more about the design because your question is not clear about what you want to build (what exactly you are 'monitoring').
VERY IMPORTANT!
Two important notes to take into account before designing your app (both relevant only if the remote computers are not in the LAN, i.e. you connect to them via the Internet):
If the remote computers are running as servers, you will have lots of trouble explaining to your customers (if they are connected to the Internet via a router, and they probably are) how to set up the router and the software firewall. For example, if a remote computer is listening for commands from you on port 1234, the router's firewall will BY DEFAULT block any connection attempt from a 'foreign' computer (you) to that port.
If your remote computers are running as clients, how will they know the master's IP (your IP)? Do you have a static IP?
What you actually need is one ServerSocket in the module running on your machine, to which all your remote PCs will connect through their individual ClientSocket.
You can make your design the other way round by putting the ClientSocket in the module running on your machine and the ServerSocket in the module running on the remote machine.
But then you will end up creating one ClientSocket for each ServerSocket, and what if the number of remote servers increases?
Now, if you still want to have multiple ClientSockets on your machine then, as Altar said, you will need a multi-threaded application where each thread is responsible for one ClientSocket.
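The question is about Delphi's socket components, but the shape of the design is language-neutral. As a rough illustration in Go, the "one listening socket, every remote PC connects in and reports" arrangement looks like this; port 9000 and the line-per-report format are arbitrary choices:

    // Command-centre side: one listener; each remote PC connects and sends
    // status lines. One goroutine (lightweight thread) handles each connection.
    package main

    import (
        "bufio"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":9000")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                continue
            }
            go func(c net.Conn) {
                defer c.Close()
                sc := bufio.NewScanner(c)
                for sc.Scan() {
                    log.Printf("%s: %s", c.RemoteAddr(), sc.Text())
                }
            }(conn)
        }
    }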
I would recommend Internet Direct (Indy), as its components work well in threads, and you can specify a connect timeout per connection, so your monitoring app will be able to get a 'negative' test result faster than with the default OS connect timeout.
Instead of placing them on the form, I would wrap each client in a class which runs an internal monitoring thread. More work initially, but easier to keep them independent from each other.
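The same idea sketched outside Delphi/Indy, again in Go purely for illustration: each monitored host gets its own lightweight thread, and a short per-connection timeout keeps one dead host from stalling the rest. The host list and the 2-second timeout are invented values:

    // Probe each remote host concurrently with a short connect timeout.
    package main

    import (
        "log"
        "net"
        "sync"
        "time"
    )

    func main() {
        hosts := []string{"10.0.0.11:9000", "10.0.0.12:9000"} // placeholder targets
        var wg sync.WaitGroup
        for _, h := range hosts {
            wg.Add(1)
            go func(addr string) {
                defer wg.Done()
                conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
                if err != nil {
                    log.Printf("%s: DOWN (%v)", addr, err)
                    return
                }
                conn.Close()
                log.Printf("%s: UP", addr)
            }(h)
        }
        wg.Wait()
    }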
I need to connect to a VPN server. I can't use Windows connections; my application should work independently!
I tested some components using the RAS API; they work, but they use Windows connections.
How can I do that without any dependency on Windows connections?
The problem with this question
"VPN" stands for "Virtual Private Network". It's a way to make a private network available to your computer, possibly in a secure way, so your computer can use standard IP protocols as if it were physically connected to the private network.
The operating system needs to know about that network, so of course all VPN implementations use "Windows connections". From a different perspective: when you're connected to a VPN you can open a TCP connection to an IP on the private network as if it were on your local network. Since it's the operating system's job to set up your TCP connection and route your TCP/IP packets, of course it needs to know about the VPN! If it doesn't, it'll simply forward all your requests for the given IP to its default router and fail with a "no route to destination" message (or a "time out", if your router is not kind enough to tell your system it has no idea what the private IP is).
Can it be done?
From a theoretical point of view, of course, you can bypass Windows completely, but then you'll have to "roll your own" everything. You can't use the Windows IP services; you'll have to implement your own TCP stack. I'm sure there are about a million other little things that need re-implementing.
For a starting point I'd look at OpenVPN: it's open source and available for Windows. It uses the UDP protocol as the basis for the VPN implementation, unlike the Windows VPN (which uses GRE - Generic Routing Encapsulation, protocol 47). OpenVPN itself, of course, uses a "Windows connection" to do its job, because it aims to provide a useful service, but you can use the source code as the basis for your own implementation.
I personally wouldn't even think about doing this, I'm just showing you the way and proving it's possible.
What should be done
I assume you want some kind of secure communication channel to your own service. Look into simple secure connections, tunneling protocols and proxies.
If this needs to be done for one service on one server, I'd look into a simple SSL implementation. Even better, look into using HTTPS.
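If the real requirement boils down to "a secure channel to my own service", the client side of HTTPS is almost no code at all. A sketch in Go, shown only to illustrate the idea; the URL is a placeholder for your own server:

    // Talk to your own service over HTTPS (TLS) instead of building a VPN.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        resp, err := http.Get("https://example.com/api/status") // placeholder URL
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }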
If you need to access many different services on possibly different servers on the given private network, I'd look into proxies.