Is there theoretically any way to connect nodes to a Docker swarm if they are on a private network and don't have a public IP? The swarm host has a public IP, and the nodes can reach it, as well as a discovery service, just fine, but they themselves are on private networks over which I have no control. So is this possible?
In this situation you either tunnel the requests or use Weave to create a virtual private network.
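For example, a rough sketch with Weave, assuming Weave Net is installed on every machine and PUBLIC_IP is the swarm host's address (the private nodes only make outbound connections, so they need no public IP of their own):

# on the swarm host with the public IP
weave launch

# on each private node, peer with the public host
weave launch PUBLIC_IP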
Related
I have multiple Raspberry Pis at home and would like to make a Docker Swarm cluster out of them.
Each has its own private IP on my home network.
That part is working fine.
But to make it more reliable, I would like to add a VPS to the cluster. The issue is with the networks: the Raspberry Pis are on a private network and the VPS is on a public one.
I'd like to avoid using a VPN or other such services.
Is it possible to add it to the cluster?
What is the process to do so?
I tried the following steps (sketched as commands below):
Forwarded ports 7946, 4789 and 2377 to the manager of my cluster
Initialized the swarm on the Pi4 with the public IP, specifying --listen-addr HOME_IP
Pi3 joins the cluster using the private IP
The VPS joins the cluster using the public IP, specifying --advertise-addr VPS_IP
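Roughly, the commands behind those steps were (the join token and the manager's private address are placeholders):

# on the Pi4, after forwarding the ports on the home router
docker swarm init --listen-addr HOME_IP:2377

# on Pi3, inside the home network
docker swarm join --token <worker-token> MANAGER_PRIVATE_IP:2377

# on the VPS, reaching the manager through the public IP
docker swarm join --token <worker-token> HOME_IP:2377 --advertise-addr VPS_IP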
But the overlay network is not working properly: when a service runs on the VPS, the published port does not respond on any of the Raspberry Pis, and vice versa.
Currently, I'm trying to create a Docker swarm network across hosts. We have two different network sites, and one is a closed, private network. In this closed site there is only one public IP assigned to us, and the hosts there have private IP addresses. The hosts in the other network site each have their own public IP address, so there is no problem with them.
What I want to do is connect the hosts in the closed network site (called internal hosts) with the hosts that have their own public IP addresses (called external hosts).
Because only one public IP is assigned to us for the closed network site, I mapped this public IP to one internal host, and that host became the Docker swarm manager. Then the internal hosts joined the swarm using the manager's internal IP address, and the external hosts joined using the public IP address.
For example, on the internal hosts:
docker swarm join --token ... 172.0.12.12:2377
and on the external hosts:
docker swarm join --token ... 123.123.123.123:2377
Joining succeeded, and I can see all the nodes correctly on the swarm manager with the docker node ls command. However, when I create an overlay network, the network is recognized on the external hosts but not on the internal hosts. So when I created a container on an external host and tried to ping it from an internal host, it failed.
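For reference, the test that fails was roughly this (test-net, nginx and alpine are just the names I use here):

# on the manager: create the overlay (attachable, so plain containers can join it)
docker network create --driver overlay --attachable test-net

# on an external host: start a container on the overlay
docker run -d --name web --network test-net nginx

# on an internal host: this ping gets no reply
docker run --rm --network test-net alpine ping -c 3 web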
Is this the wrong way to do it? Or is there anything I should check? Any ideas would be very helpful. Thanks!
I have a Docker swarm (connected with a Docker overlay network) with 5 hosts (4 workers and 1 manager). I will be deploying my application along with a load balancer/gateway on this swarm. So far so good, but how can I access the gateway from the internet?
1) I don't want to use port forwarding.
2) I don't want to use Docker Enterprise Edition / the HTTP Routing Mesh.
3) I don't want to use Weave Net or other third-party network plugins.
With these restrictions, is it possible to access the gateway from the internet?
If you create a swarm cluster with the overlay network driver,
Docker gives you a gateway with a private IP address attached to an interface created by the Docker daemon.
Attach a public IP to this interface (as in AWS, where an interface has a private IP address attached to it and a public IP address is then associated with it).
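One way to look at the gateway and interface this refers to (the commands are standard, the exact output depends on your setup):

# the bridge Docker creates on each swarm node for traffic leaving the overlay
docker network inspect docker_gwbridge
ip addr show docker_gwbridge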
Not sure if it's a machine or a Docker configuration problem.
I have a VM with a public IP (176.X.XXX.XXX) and a private IP (10.X.XXX.XXX), and I'd like other VMs to access my container through the private IP, as they are on the same network.
So I do
ports:
- "10.X.XXX.XXX:9200:9200"
but this exposes the port on 176.X.XXX.XXX as well, which is not desired.
And when I expose it to localhost only:
ports:
- "127.0.0.1:9200:9200"
I can't access it from other VMs on the private network.
This is most probably because of one of the following:
This is an AWS/GCP/Azure/DigitalOcean/etc. instance, in which case the cloud provider NATs the public IP address to the private IP address.
You have explicitly NATed the public IP address to the private IP address yourself for some reason.
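A quick way to check which of the two it is, assuming a Linux VM:

# list the addresses actually configured on the interfaces;
# on a NATed cloud instance only the private 10.X address will appear
ip addr show

# if the VM itself did the NAT, the rule would show up here
sudo iptables -t nat -L -n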
I'm new to Docker and I have a simple question.
I have 3 hosts running a Docker swarm with the following IPs:
192.168.0.52
192.168.0.53
192.168.0.54
Also, I've created an HTTP service with a published port 8080. As expected, the service is available at all the hosts' IPs (e.g. 192.168.0.52:8080).
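The service was created with something like this (the image and name here are just stand-ins):

docker service create --name web --replicas 3 --publish 8080:80 nginx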
Is it possible to assign a static IP address to the service (for example 192.168.0.254) and be able to reach it from any computer on my local network (192.168.0.0/24)?
That way I would have high availability for my service: if the host running the service goes down, it would be started on another host but keep the same IP.
Thanks,
Alex
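One common sketch for a floating address like this (not from the question above, and only an assumption that keepalived fits here) is to run keepalived on every host so that 192.168.0.254 moves between them; the swarm routing mesh then answers on port 8080 at whichever node currently holds the virtual IP:

# on each host (use a lower priority on the other two hosts)
cat >/etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance swarm_vip {
    state BACKUP
    interface eth0            # assumption: the LAN interface name
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.0.254/24
    }
}
EOF
systemctl restart keepalived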