I want to create a client-server application and test that the socket is created and that threads are running between the client and the server.
I want to check this as it would behave for an internet application, but with both ends on my local machine I cannot really test that.
You could run a virtual machine on your real machine with a bridged network connection between the two, then run the server on one and the client on the other.
However, you will not get a good picture of how well it responds under heavily loaded network conditions. For that you really need a private network with a load generator producing artificial traffic.
VirtualBox, Virtual PC, and VMware can all do this. You might get better/more responses on superuser.com.
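If it helps to have something concrete to run once the two machines are up, here is a minimal sketch of an echo server and client in Python (my choice of language for the sketch; the address 192.168.1.50 and port 5000 are placeholders for whatever the bridged interfaces actually get):

    # echo_server.py -- run this on one machine/VM
    import socket

    HOST = ""        # listen on all interfaces
    PORT = 5000      # arbitrary test port (placeholder)

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            print("connected by", addr)
            while True:
                data = conn.recv(1024)
                if not data:
                    break
                conn.sendall(data)   # echo back whatever arrives

    # echo_client.py -- run this on the other machine/VM
    import socket

    SERVER_IP = "192.168.1.50"   # placeholder: the bridged address of the server VM
    PORT = 5000

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((SERVER_IP, PORT))
        cli.sendall(b"hello over the bridge")
        print("echoed:", cli.recv(1024))

Run the server script on one VM and the client on the other, and the traffic crosses the bridged network instead of the loopback interface.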
Two Ubuntu virtual machines are installed on one computer. On one of them there is another virtual machine with the Fiware Orion Context Broker. Both VMs have ROS.
I am trying to make a simple publisher-subscriber ROS program that sends a message from one VM to the other through FIROS (FIROS is installed and configured). The problem is that the message from the publishing VM is being sent to FIROS (or, better said, the topic is shared through FIROS), but somehow it is not received by the subscribing VM, and therefore I cannot see the message being sent.
We are using the local network, so there shouldn't be an issue with port forwarding. Moreover, rostopic list shows that the fiware topics are present on both running VMs.
Could the issue lie in using virtual machines rather than two separate PCs?
Thank you in advance.
I solved this.
There were two problems. First, the IP address of the server in config.json must be that of the machine where FIROS is running, not of the machine I wanted to send the data to.
Second, FIROS has to be launched last, after all the other nodes are already running, so that it can subscribe to their topics and send the data. I was launching FIROS first, so it failed to subscribe because there was nothing to subscribe to at that moment.
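For anyone reproducing this, a minimal publisher and subscriber pair of the kind described above could look roughly like this in rospy (the node and topic names are my own placeholders, and FIROS would be launched only after both nodes are up):

    # talker.py -- on the publishing VM
    import rospy
    from std_msgs.msg import String

    rospy.init_node("talker")
    pub = rospy.Publisher("/test_topic", String, queue_size=10)
    rate = rospy.Rate(1)               # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data="hello through FIROS"))
        rate.sleep()

    # listener.py -- on the subscribing VM
    import rospy
    from std_msgs.msg import String

    def callback(msg):
        rospy.loginfo("received: %s", msg.data)

    rospy.init_node("listener")
    rospy.Subscriber("/test_topic", String, callback)
    rospy.spin()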
I am presently working on a client-server solution to transfer files to another machine via a socket connection. Since I intend to do some evaluation on the receiving end as well, I am assuming that I will need some kind of client or server programme running there, too.
I am fairly new to the whole client-server thing and therefore have the following elementary question:
My present understanding is that client and server will be two independent programmes running on two different machines. How would one typically ensure that the communication partner (i.e., the server when sending from a client and the client when sending from a server) is actually up and running on the remote machine that I want to transfer a file to?
So far, I have been looking into the following options:
1) In the sending programme, include SSH access to the remote machine and start an instance of the receiving programme on the remote machine.
2) Have the receiving programme run as a daemon process on the remote machine. This would mean that the receiving programme should always be running there. However, how would I know whether the process has crashed or has been shut down for some reason, and how would one recover from that without option 1) above?
So, my main question is: Are there any additional options that might be worth considering?
Thanks for your view on this!
Depending on how your client-server messages are set up, a ping message (not an ICMP ping, but the same basic idea) to which the server responds with "I am alive" would help. This way at least you know the server end is running.
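As a rough illustration in Python (the PING/"I am alive" exchange is invented for the example, not part of any existing protocol):

    import socket

    def server_is_alive(host, port, timeout=2.0):
        """Send an application-level PING and expect an 'I am alive' reply."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(b"PING\n")
                reply = s.recv(64)
                return reply.startswith(b"I am alive")
        except OSError:
            return False

Anything beyond a bare TCP connect, even a one-line exchange like this, also proves the application itself is answering and not just the port.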
It is not uncommon in production environments to put monitoring systems in place for this. Another option worth considering is xinetd, with services that get started on incoming connections.
There are probably newer ways to achieve the automatic start/restart, or start-on-connection, with systemd/systemctl, but I am not familiar enough with them to give you the specifics.
A somewhat crude but effective means may be a cron job that periodically runs a script that keeps the service up.
I am looking to write a program that will connect to many computers from a single computer, sort of like a "command center" where you can monitor all the remote systems from a single PC.
My plan is to have multiple client sockets on a form, each connecting to an individual remote PC, so they can request information from it to display in the window. The remote PCs will be the hosts. Is this possible?
Direct answer to your question: Yes, you can do that.
Long answer: Yes, you can do that, but are you sure your design is correct? Are you sure you want to create parallel connections, one to each client? You probably don't! If you do, then you probably want to run them in separate threads.
If you just want to send some commands from time to time (and are not doing some kind of constant video monitoring), why not use one connection and 'switch' between clients?
I can't tell you more about the design because your question is not clear about what you want to build (what exactly you are 'monitoring').
VERY IMPORTANT!
Two important notices to take into account before designing your app (both relevant only if the remote computers are not in the LAN, i.e. you connect to them via the Internet):
If the remote computers are running as servers, you will have a hard time explaining to your customers (who are most likely connected to the Internet via a router) how to set up the router and the software firewall. For example, if a remote computer is listening for your commands on port 1234, the router's firewall will BY DEFAULT block any connection attempt from a 'foreign' computer (you) to that port.
If your remote computers are running as clients, how will they know the master's IP (your IP)? Do you have a static IP?
What you actually need is one ServerSocket in the module running on your machine, to which all your remote PCs will connect through their individual ClientSockets.
You could design it the other way round, putting a ClientSocket in the module running on your machine and a ServerSocket in the module running on each remote machine. But then you end up creating one ClientSocket for each ServerSocket, which scales badly as the number of remote servers grows.
If you still want to have multiple ClientSockets on your machine then, as Altar said, you will need a multi-threaded application where each thread is responsible for one ClientSocket.
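A rough sketch of the recommended arrangement in Python (the port number and the message handling are placeholders for illustration): one listening socket on your machine, and one thread per remote PC that connects in.

    import socket
    import threading

    PORT = 9000   # placeholder port for the monitoring hub

    def handle_remote(conn, addr):
        """One thread per connected remote PC."""
        with conn:
            while True:
                data = conn.recv(1024)
                if not data:
                    break                        # remote PC disconnected
                print(f"{addr}: {data.decode(errors='replace')}")

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            threading.Thread(target=handle_remote, args=(conn, addr), daemon=True).start()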
I would recommend Internet Direct (Indy), as its components work well in threads and you can specify a connect timeout per connection, so your monitoring app will get a 'negative' test result faster than with the default OS connect timeout.
Instead of placing the sockets on the form, I would wrap each client in a class which runs an internal monitoring thread. More work initially, but easier to keep the connections independent from each other.
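The same idea sketched in Python rather than Indy (the class name, polling interval, and timeout are my own placeholders): each monitored host gets its own object with its own background thread and its own connect timeout.

    import socket
    import threading
    import time

    class MonitoredHost:
        """Polls one remote host from its own background thread."""

        def __init__(self, host, port, interval=10.0, timeout=5.0):
            self.host, self.port = host, port
            self.interval, self.timeout = interval, timeout
            self.reachable = False
            self._stop = threading.Event()
            self._thread = threading.Thread(target=self._run, daemon=True)
            self._thread.start()

        def _run(self):
            while not self._stop.is_set():
                try:
                    with socket.create_connection((self.host, self.port),
                                                  timeout=self.timeout):
                        self.reachable = True
                except OSError:
                    self.reachable = False
                time.sleep(self.interval)

        def stop(self):
            self._stop.set()

Whatever the language, the point is the same: the form only reads a status flag, and each connection's blocking calls stay inside that connection's own thread.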
How can I detect whether a machine is connected/available on the current network?
This has several uses of course, but my main concern here is that my application uses resources located on specific machines, and if they are not available it should not even attempt the connection and should use local resources instead.
You can try pinging the machine. Check this article: Making a PING with Delphi and the WMI.
ICMP echo request (PING) will tell you if the machine is up and reachable on the network. It will not tell you if the service you want to connect to is available on the machine (up and running).
Best bet would probably be to just attempt the connection and fall back to local resources if the connection fails.
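A minimal version of that fallback in Python (host, port, and timeout are placeholders; the same pattern applies in any language):

    import socket

    def open_resource(host, port, timeout=3.0):
        """Try the remote resource first; return None so the caller can fall back."""
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            return None   # remote unreachable or service not listening

    conn = open_resource("192.168.0.10", 8080)   # placeholder address
    if conn is None:
        print("remote unavailable, using local resources")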
Just try to use the resource, and if you get an error, use the local resource instead. The strategy you are trying to implement suffers from several problems, including the timing window between the test and the use, during which the resource may become unavailable; it also doesn't actually test the resource for availability, only some lower-order thing like a TCP port or the ICMP echo part of the stack. In general, the best way to detect whether a resource is available is just to try to use it and recover from the failures. You have to write code to handle those failures anyway, so why do it all twice?
A different strategy from trying to connect: let the server tell the clients whether the services are still available, by sending a UDP broadcast or some kind of heartbeat signal over middleware (pipes?), which the clients listen to - a publish/subscribe communication model.
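A bare-bones version of such a heartbeat in Python (the port and the timing values are arbitrary choices for the sketch): the server broadcasts periodically, and a client that hears nothing for a few seconds treats the service as unavailable.

    import socket
    import time

    HEARTBEAT_PORT = 37020   # arbitrary choice

    # server side: broadcast "service-alive" once per second
    def broadcast_heartbeat():
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            while True:
                s.sendto(b"service-alive", ("<broadcast>", HEARTBEAT_PORT))
                time.sleep(1.0)

    # client side: consider the service down if no heartbeat arrives for 5 seconds
    def heartbeat_heard():
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.bind(("", HEARTBEAT_PORT))
            s.settimeout(5.0)
            try:
                data, addr = s.recvfrom(1024)
                print("heartbeat from", addr, data)
                return True
            except socket.timeout:
                return False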
I have a bit of a situation. I've consumed about forty-eleven different tutorials/books/videos on Capistrano, and none of them touch on out-of-the-norm cases. They all assume straightforward setups -- which, in my experience, is rarely the case.
Basically my situation is as follows:
1) I am developing the application on a system at home
2) My goal is to run the application on a server at the office, running behind the company router. I have all the appropriate ports (21, 22, 80, 3000, etc.) forwarded to the machine, so all is well as far as outside communication goes.
3) I'm using GIT for version control, and I PUSH my updates to the server itself.
My confusion comes in two areas:
1) How do I identify all the appropriate roles in the Capistrano recipe? Do I base them on the external IP or the internal?
2) How do I tell Capistrano to look locally (instead of trying to bounce out) on the same machine for the GIT repository? Of course, this assumes that Capistrano does anything at all from the server.
NOTE: One of the big issues I'm facing is the fact that none of the machines in the office can access the main IP from inside the network -- supposedly a protection from DoS and various other troubles -- so if for some reason the server needs to pretend the information is on an external machine when it's really local, it won't work.
I think you need to look at the :deploy_via setting, specifically the 'copy' strategy:
http://www.capify.org/index.php/Understanding_Deployment_Strategies
Consider your home computer as the remote machine and the server as the local one; the copy strategy takes a local copy of the repository for the deployment.