Data exchange through FIROS fails - ros

On one computer, two Ubuntu virtual machines are installed. On one of them there is another virtual machine with the Fiware Orion Context Broker. Both VMs have ROS.
I am trying to make a simple publisher-subscriber ROS program that sends a message from one VM to the other through FIROS (FIROS is installed and configured). The problem is that the message from the publishing VM is sent to FIROS (or rather, the topic is shared through FIROS), but somehow it never reaches the subscribing VM, so I cannot see the message arrive.
We are using the local network, so there shouldn't be an issue with port forwarding. Moreover, rostopic list shows the fiware topics on both VMs.
Could the issue lie in using virtual machines rather than two separate PCs?
Thank you in advance.

I solved this.
There were two problems. First, the IP address of the server in config.json must be that of the machine where FIROS is running, not of the machine I wanted to send to.
Second, FIROS has to be launched last, after all the other nodes are running; only then is it able to subscribe to those topics and send the data. I was launching FIROS first, so it failed to subscribe because there was nothing to subscribe to at that moment.
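For reference, a minimal sketch of the publisher side (the topic name /test_topic and the std_msgs/String type are placeholders for whatever you have whitelisted in your FIROS configuration):

```python
#!/usr/bin/env python
# Minimal publisher sketch. "/test_topic" and std_msgs/String are
# placeholders for the topic actually whitelisted in the FIROS config.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('/test_topic', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(1)  # publish once per second
    while not rospy.is_shutdown():
        pub.publish("hello from VM 1 at %s" % rospy.get_time())
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()  # start this node first, launch FIROS afterwards
    except rospy.ROSInterruptException:
        pass
```

With this node already running, FIROS started afterwards can see the topic and forward the messages.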

Related

Docker Swarm - Route a request to ALL containers

Is there any sort of way to broadcast an incoming request to all containers in a swarm?
EDIT: More info
I have a distributed application with many docker containers. The client can send requests to the swarm and have it respond. However, in some cases, the client needs to change a state on all server instances and therefore I would either need to be able to broadcast a message or have all the Docker containers talk to each other similar to MPI, which I'm trying to avoid.
There is no built-in way to turn a unicast packet into a multicast packet, nor any common third-party way of doing so (that I've seen or heard of).
I'm not sure what "change a state on all server instances" means. Are we talking about the running state of all containers in a single service? Or the actual underlying OS? All containers in all services? etc.
Without knowing more about your use case, I'd say it's likely better to design something where the request is received by one Swarm service and then stored in a queue system, from which a backend worker picks it up and "changes the state on all server instances" for you.
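If you do go the queue/broker route, here is a rough sketch of the fan-out idea using Redis pub/sub (the Redis host name "redis", the "state-changes" channel and the apply_state() hook are all assumptions for illustration, not something Swarm provides):

```python
# Fan-out sketch using Redis pub/sub, assuming a Redis service reachable
# from every container in the swarm. Names below are illustrative only.
import json
import redis

r = redis.Redis(host="redis", port=6379)

def broadcast_state_change(new_state):
    """Called by the single service instance that received the client request."""
    r.publish("state-changes", json.dumps(new_state))

def listen_for_state_changes():
    """Runs inside every container; applies state changes as they arrive."""
    pubsub = r.pubsub()
    pubsub.subscribe("state-changes")
    for message in pubsub.listen():
        if message["type"] == "message":
            apply_state(json.loads(message["data"]))

def apply_state(state):
    # hypothetical application-specific hook
    print("applying state:", state)
```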
It depends on your specific use case. One way to do it is to run docker service update --force, which will cause all containers to restart. If your containers fetch the changed information at startup, this has the required effect.

Consul-Agent architecture .. the node-id issue after upgrading to 0.8.1 - conceptual issue?

I am not sure where the root of my problem actually comes from, so I will try to explain the bigger picture.
In short, the symptom: after upgrading Consul from 0.7.3 to 0.8.1, my agents (explained below) could no longer connect to the cluster leader due to duplicated node-ids (why that probably happens is also explained below).
I could neither fix it with https://www.consul.io/docs/agent/options.html#_disable_host_node_id nor fully understand why I run into this, and that's where the bigger picture, and maybe even different questions, comes from.
I have the following setup:
I run an application stack with about 8 containers for different services (different microservices, DB types and so on).
I use a single Consul server per stack (yes, the Consul server runs in the software stack; it has its reasons, because I need this to be offline-deployable and every stack lives for itself).
The Consul server handles the registration, service discovery and also KV/configuration.
Important/questionable: every container has a Consul agent started with "consul agent -config-dir /etc/consul.d", connecting to this one server. The configuration looks like this, including other files with the encrypt token / ACL token. Do not wonder about servicename(); it is replaced by an m4 macro at image build time.
The clients are secured by a gossip key and ACL keys
Important: All containers are on the same hardware node
The server configuration looks like this, if it matters. In addition, the ACLs look like this, and the ACL-master and client token/gossip JSON files are in that configuration folder.
Sorry if the above is TL;DR, but the reason behind all the explanation is this multi-agent setup (one agent per container).
My reasons for that:
I use Tiller to configure the containers, so a deploy gem will usually try to connect to localhost:8500. To accomplish that without making the Consul configuration extraordinarily complicated, I use this local agent, which then forwards the requests to the actual server and thus handles all the encryption-key/ACL negotiation stuff.
I use several 'consul watch' tasks on the server to trigger re-configuration; they also run against localhost:8500 without any extra configuration.
That said, the reason I run one agent per container is the simplicity of letting local services talk to the Consul backend without really knowing about authentication, as long as they connect through 127.0.0.1:8500 (as the level of security).
Final Question:
Is the multi-agent setup actually designed to be used that way? The reason I ask is that, as far as I understand, the node-id duplication issue I now get when starting 0.8.1 comes from "the host" being the same, i.e. the hardware node being identical for all Consul agents... right?
Is my design wrong, or do I just need to generate my own node-ids from now on and everything is fine?
It seems this issue was identified by HashiCorp and addressed in https://github.com/hashicorp/consul/blob/master/CHANGELOG.md#085-june-27-2017, where -disable-host-node-id has been set to true by default. The node-id is thus no longer generated from the host hardware but is a random UUID, which solves the issue I had running several Consul nodes on the same physical hardware.
So the way I deployed was fine.
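For versions before 0.8.5, one workaround is to hand each agent its own node-id explicitly. A sketch of a wrapper that does this (the UUID-per-container approach is my assumption; -node-id and -disable-host-node-id are documented Consul agent options):

```python
# Wrapper sketch: start the per-container agent with its own node-id,
# for Consul versions where -disable-host-node-id is not yet the default.
import subprocess
import uuid

node_id = str(uuid.uuid4())  # unique per container, not derived from the host
subprocess.run([
    "consul", "agent",
    "-config-dir", "/etc/consul.d",
    "-node-id", node_id,
    # on 0.8.1+ you could instead pass "-disable-host-node-id=true"
])
```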

How to ensure the receiving application is up and running in client-server communication?

I am presently working on a client-server solution to transfer files to another machine via a socket connection. Since I intend to do some evaluation on the receiving end as well, I assume that I will need to have some kind of client or server programme running there, too.
I am fairly new to the whole client-server thing and therefore have the following elementary question:
My present understanding is that the client and the server will be two independent programmes running on two different machines. How would one typically ensure that the communication partner (i.e., the server when sending from a client, and the client when sending from a server) is actually up and running on the remote machine that I want to transfer a file to?
So far, I have been looking into the following options:
1. In the sending programme, include SSH access to the remote machine and start an instance of the receiving programme on the remote machine.
2. Have the receiving programme run as a daemon process on the remote machine. This would mean that the receiving programme should always be running there. However, how would I know whether the process has crashed or has been shut down for some reason, and how would one recover from that without option 1) above?
So, my main question is: Are there any additional options that might be worth considering?
Thanks for your view on this!
Depending on how your client-server messages are set up, a ping message (I don't mean ICMP ping, but the basic idea) where the server responds with "I am alive" would help. This way at least you know the server end is running.
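A sketch of such an application-level ping over a plain TCP socket (port 5000 and the PING / "I am alive" wire format are made up for illustration):

```python
# Application-level "are you alive?" check; port and message format are
# assumptions, adapt them to your own protocol.
import socket

def server_is_alive(host, port=5000, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"PING\n")
            return sock.recv(1024).strip() == b"I am alive"
    except OSError:
        return False

if __name__ == "__main__":
    print(server_is_alive("192.168.1.20"))  # hypothetical remote machine
```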
It is not uncommon in production environments to put monitoring systems in place for this. Other options worth considering: xinetd scripts, i.e. services that get started on incoming connections.
There are probably newer ways to achieve the automatic start/restart, or start-on-connection, with systemd/systemctl, but I am not familiar enough with them to give you the specifics.
A somewhat crude but effective means may be a cron job that periodically runs a script to keep the service up.
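A sketch of the script such a cron job could run (the port check and the "receiver.service" unit name are placeholders for however your receiving programme is actually managed):

```python
# Watchdog sketch for a cron job: if nothing accepts connections on the
# service port, restart the service. Port and unit name are placeholders.
import socket
import subprocess

def port_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not port_open("127.0.0.1", 5000):
    subprocess.run(["systemctl", "restart", "receiver.service"])
```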

Can multiple ClientSocket components be placed on a Form?

I am looking to write a program that will connect to many computers from a single computer - sort of like a "Command Center" where you can monitor all the remote systems from a single PC.
My plan is to have multiple client sockets on a form. They will connect to individual PCs remotely so they can request information from them to display in the window. The remote PCs will be the hosts. Is this possible?
Direct answer to your question: Yes, you can do that.
Long answer: Yes, you can do that, but are you sure your design is correct? Are you sure you want to create parallel connections, one to each client? Probably you don't! If you do, then you probably want to run them in separate threads.
If you want to send some commands from time to time (and you are not doing some kind of constant video monitoring) why don't you just use one connection and 'switch' between clients?
I can't tell you more about the design because your question doesn't make clear what you want to build (what exactly you are 'monitoring').
VERY IMPORTANT!
Two important notices to take into account before designing your app (both relevant only if the remote computers are not on the LAN, i.e. you connect to them via the Internet):
If the remote computers are running as servers, you will have lots of trouble explaining to your customers (if they are connected to the Internet via a router, and they probably are) how to set up the router and the software firewall. For example, if a remote computer is listening for commands from you on port 1234, the firewall in the router will BY DEFAULT block any connection attempt from a 'foreign' computer (from you) to that port.
If your remote computers are running as clients, how will they know the master's IP (your IP)? Do you have a static IP?
What you actually need is one ServerSocket in the module running on your machine, to which all the remote PCs connect through their individual ClientSocket.
You can also turn the design the other way round by putting a ClientSocket in the module running on your machine and a ServerSocket in the module running on each remote machine, but then you end up creating one ClientSocket per ServerSocket - what happens when the number of remote servers increases?
Now, if you still want multiple ClientSockets on your machine, then, as Altar said, you will need a multi-threaded application where each thread is responsible for one ClientSocket.
I would recommend Internet Direct (Indy), as it works well in threads, and you can specify a connect timeout per connection, so your monitoring app will be able to get a 'negative' test result faster than with the default OS connect timeout.
Instead of placing them on the form, I would wrap each client in a class which runs an internal monitoring thread. More work initially, but easier to keep them independent from each other.
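The question is about Delphi's socket components, but the per-client wrapper with its own monitoring thread looks roughly like this sketch (shown in Python for brevity; the port, host list and poll interval are made up):

```python
# Each remote host gets its own wrapper object that polls it from a
# background thread. Port 5000 and the 10-second interval are arbitrary.
import socket
import threading
import time

class RemoteHostMonitor:
    def __init__(self, host, port=5000, interval=10.0):
        self.host = host
        self.port = port
        self.interval = interval
        self.online = False
        self._thread = threading.Thread(target=self._poll, daemon=True)

    def start(self):
        self._thread.start()

    def _poll(self):
        while True:
            try:
                with socket.create_connection((self.host, self.port), timeout=3.0):
                    self.online = True
            except OSError:
                self.online = False
            time.sleep(self.interval)

monitors = [RemoteHostMonitor(h) for h in ["10.0.0.2", "10.0.0.3"]]
for m in monitors:
    m.start()
```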

How to create a virtual environment to check network application

I want to create a client-server application to test that the socket is created and threads are running between client and server.
I want to check this for an internet application, but being on a local machine I cannot test it.
You could run a virtual machine on your real machine with a bridge-mode network connection between the two, then run the server on one and the client on the other.
However, you will not get a good picture of how well it responds under heavily loaded network conditions. For that you truly need a private network with a phantom load generator.
VirtualBox, Virtual PC, and VMware can do this. You might get better/more responses on superuser.com.
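To sanity-check the bridged connection itself, a minimal echo test you could run on the two machines (port 9000 is arbitrary): start it with "server" on one machine and "client <server-ip>" on the other.

```python
# Minimal echo test across the bridged network.
# Usage: python echo_test.py server             (on one machine)
#        python echo_test.py client <server-ip> (on the other)
import socket
import sys

PORT = 9000  # arbitrary test port

def run_server():
    with socket.create_server(("0.0.0.0", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            print("connection from", addr)
            conn.sendall(conn.recv(1024))  # echo back whatever arrives

def run_client(server_ip):
    with socket.create_connection((server_ip, PORT), timeout=5.0) as sock:
        sock.sendall(b"hello over the bridge")
        print("reply:", sock.recv(1024))

if __name__ == "__main__":
    if sys.argv[1] == "server":
        run_server()
    else:
        run_client(sys.argv[2])
```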
