I'm implementing a few devices and services with a fair bit of data running around my local network, with all publishers and most subscribers inside the firewall. Because it's easy, I'm starting with a CloudMQTT subscription. Ideally I'd like that to stay the primary broker (to service a couple of external clients), but if the internet goes down I'd like an internal broker to act as a hot backup, handling all publishing and, for internal clients, subscriptions. I'm not sure whether bridging can help?
Is there any way to implement this? In my head it's something like the way DNS works - you can have a local server, and to the extent it knows your answer it can serve it, but it has somewhere to go for further answers.
Run a broker on each site and bridge it to the CloudMQTT instance. That way local clients can always communicate, even if the internet connection is down.
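If the local broker is Mosquitto, the bridge is only a few lines of configuration. The sketch below is just an illustration: the hostname, port, credentials and the wide-open topic pattern are placeholders to replace with your own CloudMQTT instance details and the topics you actually need.

```
# mosquitto.conf on the internal broker (placeholders throughout)
connection cloudmqtt-bridge
address your-instance.cloudmqtt.com:18883    # your CloudMQTT host:port
remote_username your-user
remote_password your-password
bridge_protocol_version mqttv311

# Mirror all topics in both directions at QoS 0; narrow this in practice.
topic # both 0
```

Internal clients then talk only to the local broker; if the uplink drops they keep working, and the bridge reconnects to CloudMQTT once the internet comes back.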
I love using Prometheus for monitoring and alerting. Until now, all my targets (nodes and containers) lived on the same network as the monitoring server.
But now I'm facing a scenario where we will deploy our application stack (as a bunch of Docker containers) to several client machines in their networks. Nearly all of the client networks are behind a firewall or NAT, so scraping becomes quite difficult.
As we're still accountable for our stack, I'd like to have central monitoring, alerting and dashboards.
I was wondering what the best architecture would be if I want to implement this with Prometheus, but I couldn't find any convincing approaches. My ideas so far:
Use a Pushgateway on our side and push all data out of the client networks. As the docs state, that's not how it's intended to be used: https://prometheus.io/docs/practices/pushing/
Use a federation setup (https://prometheus.io/docs/prometheus/latest/federation/): place a Prometheus server in every client network behind a reverse proxy (to enable SSL and authentication) and aggregate the relevant metrics there. Open/forward just a single port for federation scraping.
Other more experimental setups, such as SSH tunneling (e.g. https://miek.nl/2016/february/24/monitoring-with-ssh-and-prometheus/) or a VPN!?
Thank you in advance for your help!
Nobody has posted an answer, so I will give my opinion on the second option, because that's what I think I would do in your situation.
The federation setup seems the most flexible: you have access to the data and only need to open one port for the federating server, so it should still be secure.
Another bonus of this type of setup is that even if the connection (or the firewall) stops working for one reason or another, you will still have a Prometheus instance scraping locally. You will get an alert because you won't be able to reach the server(s), but when the connection comes back the data will still be there on the local instances, so you won't have holes in the Grafana dashboards apart from during the incident itself.
The downside of this setup is that you need to maintain as many Prometheus servers as there are client networks. A solution for this would be a Packer image, or perhaps an Ansible playbook, to automate the deployment.
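For reference, the central server's side of such a federation setup is just another scrape job pointed at the client-side reverse proxy. This is only a sketch: the job name, hostname, credentials and the very broad match[] selector are placeholders you would adapt per client network.

```yaml
# prometheus.yml on the central monitoring server (one job per client network)
scrape_configs:
  - job_name: 'federate-client-a'        # placeholder name
    scrape_interval: 30s
    honor_labels: true                   # keep labels set by the remote Prometheus
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".+"}'                  # pull everything; narrow to relevant series in practice
    scheme: https                        # TLS terminated by the client-side reverse proxy
    basic_auth:
      username: federate                 # placeholder credentials checked by the proxy
      password: CHANGE_ME
    static_configs:
      - targets: ['client-a.example.com:443']   # the single forwarded port
```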
I would like to understand networking services with a large user base a bit better so that I know how to approach a project I am busy with.
The following statements that I make may be incorrect but they still lead to the question that I want to ask...
Please consider Skype and TeamViewer clients. It seems that both keep persistent network connections open to their respective servers. They use these persistent connections to initiate additional connections. Some of these connections are created by means of Hole Punching if the clients are behind NATs. They are then used for direct Peer-to-Peer communications.
Now, according to http://expandedramblings.com/index.php/skype-statistics/ there are 300 million Skype users and 4.9 million daily active users. I would assume that most of those 4.9 million users have their client apps running for most of the day. That is a lot of connections to the Skype servers open at any given time.
So to my question: is this feasible or at least acceptable? I mean, wouldn't it be better not to keep a network connection open while idle, especially when there are so many connections open to the servers at once? The only reason I can think of is that it would be the only way to properly do hole punching. Technically, how is this achieved on the server side?
Is this feasible or at least acceptable?
Feasible it certainly is; you already mention two popular apps that do it, so it is very doable in practice.
As for acceptable: to start, no internet authority (e.g. the IETF) has ever said it is unacceptable to have long-lived connections, even with low traffic.
Furthermore, the only components for which this matters are network elements that keep connection/flow state: the endpoints and so-called middleboxes like NATs and firewalls. For the client this is only one connection, and the server is usually fine-tuned by the application developers (who made this choice) themselves, so for both of these it is acceptable. For middleboxes it's simple: they have no choice; they are designed to work with all kinds of flows, including long-lived persistent connections.
I mean, wouldn't it be better not to keep a network connection open while idle, especially when there are so many connections open to the servers at once?
Not at all. First of all, it could be much slower, as you'd need to set up a full connection before each control-plane call. This is especially noticeable if your RTT is large or if the servers do complicated connection proxying/redirection for load-balancing/localization purposes.
Next to that, this would historically have made incoming calls difficult for a huge number of users. Many ISPs block (or used to block) unknown incoming connections from the internet by means of a firewall. Similarly, if you are behind a NAT device that does not support UPnP or PCP, you can't open a port on your public IP address to listen on. So you need the persistent connection even aside from hole punching.
The only reason I can think of is that it would be the only way to properly do hole punching. Technically, how is this achieved on the server side?
Technically you can't do proper hole punching as soon as the NAT devices maintain a full <src-ip, src-port, dest-ip, dest-port, protocol> (the classical 5-tuple) flow match. In that case the best you can do with 'hole punching' is to set up a proxy between the peers.
What hole punching relies on is that the NAT flow lookup only considers <src-ip, src-port, protocol> upstream and <dest-ip, dest-port, protocol> downstream when doing the translation. In that case both clients just set up a connection to the server, their IP and port get translated, and the server passes the translated address on to the other client. The other client can now start sending packets to that translated <ip, port> combination, which should work because the NAT ignores the server's IP/port. But even if the particular NAT works like this, some security device (e.g. a stateful firewall) might detect it as session hijacking and drop the traffic anyway.
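To make that flow concrete, here is a minimal client-side sketch of UDP hole punching in Java. It is only an illustration: the rendezvous host, port and the tiny "HELLO"/"ip:port" exchange are made-up placeholders for whatever signalling protocol a real server would use, and it assumes the friendlier NAT behaviour described above.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class HolePunchClient {
    public static void main(String[] args) throws Exception {
        // Placeholder rendezvous server; both peers keep using the same local
        // socket so the NAT creates (and keeps) a single mapping for it.
        InetAddress rendezvous = InetAddress.getByName("rendezvous.example.com");
        int rendezvousPort = 9999;

        DatagramSocket sock = new DatagramSocket();

        // 1. Register: the server sees our *translated* ip:port on this packet.
        byte[] hello = "HELLO".getBytes(StandardCharsets.UTF_8);
        sock.send(new DatagramPacket(hello, hello.length, rendezvous, rendezvousPort));

        // 2. The server replies with the other peer's translated endpoint ("ip:port").
        byte[] buf = new byte[512];
        DatagramPacket reply = new DatagramPacket(buf, buf.length);
        sock.receive(reply);
        String[] peer = new String(reply.getData(), 0, reply.getLength(),
                StandardCharsets.UTF_8).split(":");
        InetAddress peerAddr = InetAddress.getByName(peer[0]);
        int peerPort = Integer.parseInt(peer[1]);

        // 3. Both peers now send to each other's translated endpoint. The first
        //    outgoing packets "punch" the hole; once each NAT has seen outbound
        //    traffic to the peer, the peer's packets are let through.
        byte[] punch = "PUNCH".getBytes(StandardCharsets.UTF_8);
        for (int i = 0; i < 5; i++) {
            sock.send(new DatagramPacket(punch, punch.length, peerAddr, peerPort));
            Thread.sleep(200);
        }

        DatagramPacket fromPeer = new DatagramPacket(buf, buf.length);
        sock.receive(fromPeer);
        System.out.println("Peer says: " + new String(fromPeer.getData(), 0,
                fromPeer.getLength(), StandardCharsets.UTF_8));
    }
}
```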
Nowadays you would rather use UPnP to open up a port to listen on via your public IP, which is much easier when it is supported.
I have a strange scenario:
A webserver/appserver (Java) sends requests to many different satellite systems (on the customers' sites). Only the satellite systems can initiate connections, due to firewall rules.
The model, I think, should be something like REQ/REP, but here the REQuester would have to bind and the REPlier would have to connect.
Is this possible and a stable architecture?
Are there better solutions? (We first had WebSockets in mind...)
Remark: we don't have to use Java on both ends. To be precise, on the customer's site we have Delphi, but we could bridge it somehow.
The model, I think, should be something like REQ/REP, but here the REQuester would have to bind and the REPlier would have to connect.
This will be problematic. If the server initiated the connections, it would have to be aware of all the peers and their bind addresses. That's not a big deal for a handful of peers, but for many peers that change constantly, it's a mess.
Only satellite systems can initiate connection due to firewall rules.
If that's the case, your mileage will vary with WebSockets; google around, lots of info on this.
Are there better solutions?
Well, with ZeroMQ, one solution that comes to mind, which keeps the clients initiating the connections, is this:
The server binds with ROUTER.
Clients connect with DEALER.
This approach offers bi-directional request/reply, does not block (it's asynchronous), and eliminates the client-side bind problem mentioned in your question. Here the server binds, and either side can initiate the conversation.
I recommend reading this section in the guide; it covers extended async request/reply and message enveloping, which are important when using ROUTER/DEALER sockets.
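As a rough illustration of that pattern, here is a minimal sketch using JeroMQ (the pure-Java ZeroMQ implementation). The endpoint, port and identity string are placeholders, and the server only echoes an acknowledgement; a real deployment would track connected identities so it can initiate its own requests to specific satellites.

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

// Server side: binds a ROUTER socket; it can address any connected DEALER
// by the identity frame that ROUTER prepends to incoming messages.
public class RouterServer {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket router = ctx.createSocket(SocketType.ROUTER);
            router.bind("tcp://*:5570");                  // the one port the firewall allows

            while (!Thread.currentThread().isInterrupted()) {
                String identity = router.recvStr();       // who sent this
                String request  = router.recvStr();       // the payload
                System.out.println(identity + " -> " + request);

                // Reply (or initiate a new request later) by addressing that identity.
                router.sendMore(identity);
                router.send("ack: " + request);
            }
        }
    }
}

// Client side (the satellite, shown here in Java for brevity): connects
// outbound through the firewall with DEALER. In practice this would be a
// separate program on the customer's site.
class DealerClient {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket dealer = ctx.createSocket(SocketType.DEALER);
            dealer.setIdentity("satellite-42".getBytes(ZMQ.CHARSET)); // placeholder identity
            dealer.connect("tcp://appserver.example.com:5570");       // placeholder host

            dealer.send("status update");
            System.out.println("server replied: " + dealer.recvStr());
        }
    }
}
```

Because the satellites own the outbound connection, nothing needs to be opened in the customer firewalls beyond allowing that outbound traffic.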
I want to write an app which will run on different computers and needs all of them to communicate with each other like uTorrent (peer to peer). This app will only send text messages.
How can I do this? I mean, how do I send one message to a remote computer over the internet?
I have a website, and every app can send some information to it at startup and find information about the apps on other computers (with PHP), but I do not know how to address one computer through the internet and send the data directly to it. I can find the IP address with PHP, but it is the IP address of the router (ISP).
How does a message reach a computer? I'm wondering how to address every computer.
My brain is really stuck here. I'd really appreciate any help. Thanks.
In a peer-to-peer network there's no centralized server for transmitting the data from one client to another, so the clients must be able to act as both server and client. This means that either you use UPnP, like most modern torrent clients do, which handles the port forwarding in the router, or you manually forward a port to the computer in the router.
A centralized server (like a torrent tracker) is usually used to make the clients aware of each other's existence and to tell them where to connect. This is where your PHP script comes in, though PHP might not offer the most efficient way of doing this, assuming you're using it in combination with a webserver to serve the data over the HTTP protocol.
As for the actual text communication, you could use the Indy socket library for that. I found this example, which basically shows how to do it: http://www.ciuly.com/delphi/indy/indy-10-client-server-basic-demo/
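To sketch the tracker idea in code, here is a rough illustration (Java here purely for brevity; the same flow applies with Delphi/Indy). Everything in it is a placeholder: the PHP endpoint peers.php, its "ip:port" response format, and the assumption that each client has already registered its public IP and forwarded port with the same script.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.HttpURLConnection;
import java.net.Socket;
import java.net.URL;

public class TrackerLookupDemo {
    public static void main(String[] args) throws Exception {
        // 1. Ask the (hypothetical) PHP tracker for a peer's address. The script
        //    would have recorded each client's public IP and forwarded port.
        URL tracker = new URL("http://example.com/peers.php?name=alice");
        HttpURLConnection conn = (HttpURLConnection) tracker.openConnection();
        String peer;
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            peer = in.readLine();                 // e.g. "203.0.113.7:12345"
        }
        String host = peer.split(":")[0];
        int port = Integer.parseInt(peer.split(":")[1]);

        // 2. Connect directly to that peer (this only works if its router forwards
        //    the port, via UPnP or manual configuration) and send a text message.
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            out.println("hello from the other side");
        }
    }
}
```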
Hi,
let me make my question clear: two people using my app are connected to the internet. Both have each other's IP address and they want to chat with each other (like Y! Messenger).
I think I need to use Indy components, right? Which component should I use?
Thanks in advance
Have you looked at any of the demos on Indy's website yet?
In general, you are looking to create a "Client/Server" type application. A quick Google search for "indy client server example" pulls up lots of results, including this one: http://www.devarticles.com/c/a/Delphi-Kylix/A-Real-World-Client-Server-Application-in-Delphi/
In reality, this gets a lot more complicated when you have firewalls and NATs with private IP addresses. You will have to consider how your application will get around or through these kinds of technologies.
Similar to what Scott said, I think your biggest problem is getting the two clients talking to each other. My computers at home go through a router, which blocks all incoming connection requests (i.e. requests to start a conversation between two computers) from the internet. My computers can send connection requests OUT and start a conversation that way, but unless you modify the router (port forwarding), they cannot receive connection requests.
You need a server somewhere to which both people connect, which can then relay messages back and forth. To get really tricky, once the connection to the server is made, the two computers can be put into direct contact, but that involves UDP packets and some clever magic.
You don't have to use Indy components; you just need anything that will handle communications over the network. Any HTTP or sockets network stack will do. Indy is the de facto standard for Delphi Win32.
To do network communications, you will need to create a listener object or service on machine A and a sender object on machine B to send a network message from A to B. To send a message from B to A, you will need the reverse path as well: four objects in total to perform bidirectional comms. Some object wrappers hide this detail internally; I don't recall offhand whether Indy does.
It would probably be easiest to use a common TCP/IP protocol such as HTTP for your machine-to-machine communications. This will make it easier to get your connections through the firewalls and proxies that frequently exist between arbitrary users. To avoid conflicting with any HTTP web service that might already be running on either machine, use a custom port number with the IP address (e.g. 192.168.1.10:12345), not the standard HTTP web server port 80. This is what most IM clients do.
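To illustrate the listener/sender pairing on a custom port, here is a minimal sketch using plain Java sockets rather than Indy, purely to keep it self-contained; the address and port 12345 are just the example values from above. Each machine would run both halves to get bidirectional messaging.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ChatDemo {
    // Listener side: accepts incoming connections on the custom port and prints
    // each line of text it receives.
    static void listen(int port) throws Exception {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                try (Socket peer = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(peer.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println("received: " + line);
                    }
                }
            }
        }
    }

    // Sender side: connects to the other machine's listener and sends one message.
    static void send(String host, int port, String message) throws Exception {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            out.println(message);
        }
    }

    public static void main(String[] args) throws Exception {
        if (args.length > 0 && args[0].equals("listen")) {
            listen(12345);
        } else {
            send("192.168.1.10", 12345, "hello from machine B");
        }
    }
}
```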