I'm new to Docker and I have a simple question.
I have 3 hosts running a Docker swarm, with the following IPs:
192.168.0.52
192.168.0.53
192.168.0.54
Also, I've created an HTTP service with a published port, 8080. As expected, the service is available at every host's IP (e.g. 192.168.0.52:8080).
Is it possible to assign a static IP address to the service (for example 192.168.0.254) so that it can be reached from any computer on my local network (192.168.0.0/24)?
That way I would have high availability for my service: if the host running the service goes down, the service should be restarted on another host but keep the same IP.
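For reference, the service was created roughly like this (nginx is just a stand-in for my actual image):
docker service create --name web --replicas 1 --publish 8080:80 nginx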
Thanks,
Alex
Related
Say I have a Swarm of 3 nodes on my local system, and I create a service, say Drupal, with 3 replicas in this swarm, so each node runs one Drupal container. When I want to access it in my browser, I have to use the IP address of one of the nodes, <IP Address>:8080, to reach Drupal.
Is there a way I can set a DNS name for this service and access it using that DNS name instead of having to use an IP address and port number?
You need to configure the DNS server used by the host making the query. So if your laptop queries public DNS, you need to create a public DNS entry that resolves from the internet (on a domain you own). This should resolve to the IPs of the docker hosts running the containers, or to an LB in front of those hosts. Then you publish the port on the host to the container you want to access.
You should not try to talk directly to the container IP; container IPs are not routable from outside of the docker host. And the docker DNS used for service discovery is for container-to-container communication, which is separate from communication from outside of docker that goes through a published port.
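As a sketch of that arrangement (the domain is a placeholder, and the official drupal image stands in for the real service):
docker service create --name drupal --publish 8080:80 --replicas 3 drupal
# then create DNS records pointing at the docker hosts (or at an LB in front of them):
#   drupal.example.com.  A  <host-1-ip>
#   drupal.example.com.  A  <host-2-ip>
#   drupal.example.com.  A  <host-3-ip>
# after which http://drupal.example.com:8080 reaches the service via any node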
Currently, I'm trying to create a docker swarm network across hosts. We have two different network sites, and one of them is a closed, private network. In this closed site, there is only one public IP assigned to us, and the hosts there have private IP addresses. Hosts in the other network site each have their own public IP address, so there is no problem on that side.
What I want to do is connect the hosts in the closed network site (called internal hosts) with the hosts that have their own public IP addresses (called external hosts).
Because only one public IP is assigned to us for the closed network site, I mapped this public IP to one internal host, and this host became the docker swarm manager. Then the internal hosts joined the swarm using the internal IP address of the manager host, and the external hosts joined using the public IP address.
For example, in the internal hosts:
docker swarm join --token ... 172.0.12.12:2377
and in the external hosts:
docker swarm join --token ... 123.123.123.123:2377
Joining completed successfully, and I can see all the nodes correctly on the swarm manager using the docker node ls command. However, when I create an overlay network, the network is recognized on the external hosts but not on the internal hosts. So when I created a container on an external host and tried to ping it from an internal host, it failed.
Is this the wrong way to do it? Or is there anything I should check? Any ideas would be very helpful. Thanks!
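One thing worth checking in a setup like this is whether the ports swarm uses are reachable in both directions between every pair of nodes, and which address each node advertises to its peers. A sketch (the advertise address is a placeholder):
# swarm needs these open between all nodes:
#   2377/tcp          cluster management
#   7946/tcp and udp  node-to-node gossip
#   4789/udp          VXLAN overlay traffic
# a node can also pin the address it advertises to the other nodes:
docker swarm join --token ... --advertise-addr 203.0.113.10 123.123.123.123:2377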
I have some experience with Docker Compose and container linking. In a non-swarm environment, you could easily connect from, e.g., the web container to the db_mysql container using its name; for example, in PHP I can configure the MySQL connection as:
$dsn = 'mysql:host=db_mysql';
I am having a hard time understanding how that works with Docker in Swarm mode, especially considering the "replicas" and "load balancing" mechanisms.
Let's say I have 5 different Docker Machines, each with a different public IP, participating in a Swarm. I also have a web service and a db service, each replicated across these 5 machines (1 instance per machine).
My question is: how do I make any of the 5 web containers communicate with any of the 5 db_mysql containers, without forcing these web containers to know any Docker Machine public IPs or the fact that these containers live within a Swarm?
You use the service name. This will resolve in DNS to either a VIP or to the 5 IP addresses (one for each replica) of the service. Under the covers, the VIP uses IPVS to round-robin to one of the healthy replicas without suffering from stale DNS issues. You can also get all the replica IP addresses using tasks.<service_name>, even if you use the default VIP.
In Docker's DNS implementation, you can resolve the container name and any network alias. The network aliases include the service name with DNSRR (used by docker-compose without swarm); alternatively, the service name resolves to a VIP in swarm mode. The hostname of the container does not resolve, likely because it can change outside of the control (and therefore the knowledge) of the docker engine.
Using Docker version 19.03.5, the correct DNS name to query in order to obtain all the IP addresses of the replicas of a service is the following:
tasks.<service-name>
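A minimal sketch of both lookups, assuming an attachable overlay network (the names are placeholders, and the password is just for the example):
docker network create --driver overlay --attachable appnet
docker service create --name db_mysql --network appnet --replicas 5 --env MYSQL_ROOT_PASSWORD=example mysql:8
# from any container attached to appnet:
#   getent hosts db_mysql        # returns the single service VIP
#   getent hosts tasks.db_mysql  # returns one address per healthy replica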
How do you access a remote Docker container by its hostname?
I need to access remote Docker containers by their hostnames (or some fixed IPs) for development and testing purposes. I have tried:
looking for a DNS-based approach (I have not found any clues),
importing /etc/hosts (probably impossible),
creating tunnels (only this works, but it is very time-consuming).
It's the same as running any other process on a host, Docker or not: you access it via the host name or IP address of the host and the port the service is listening on (the first port of the docker run -p argument). Docker containers don't have externally visible individual IP addresses, any more than non-Docker HTTP or ssh daemons do.
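For example (nginx as a stand-in service, myhost as a placeholder hostname):
docker run -d -p 12345:80 nginx
curl http://myhost:12345/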
If you do have DNS infrastructure available to you, you could set up CNAME records to resolve particular service names to the specific hosts that are running them.
One solution that may help you is some sort of service registry; in the past I've used Consul with some success. You can configure Consul with some health checks or other probes ("look for an HTTP service on port 12345 that answers GET / calls"), and it will provide its own DNS service ("okay, http://whatevername.service.consul:12345/ will reach your service on whichever hosts it happens to be running on").
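A sketch of such a Consul service definition with an HTTP check (the file path and service name are illustrative):
cat > /etc/consul.d/whatevername.json <<'EOF'
{
  "service": {
    "name": "whatevername",
    "port": 12345,
    "check": { "http": "http://localhost:12345/", "interval": "10s" }
  }
}
EOF
consul reload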
Nothing in the Docker infrastructure specifically helps with this. Using /etc/hosts is distinctly not a best practice: the name-to-IP mapping needs to be kept in sync across all machines, and you'll soon start wishing you had a network service to publish it for you, which is exactly what DNS is for.
Is it possible to change the IP of docker0, or to give docker containers a static IP? By default, docker containers get addresses from the 172.17.0.0/16 range, but my network is 192.168.X.X/24. Containers running on the same server can communicate with each other, but connections from other servers fail.
How did you set up your cluster? Do you use Swarm? If so, you need to use a key/value storage backend to enable communication between two containers hosted on different hosts. Is this what you aim to do, or do you want the host to communicate with the container on the other host?
Either way, the solution is similar.
I'm rewriting a tutorial for Docker Swarm to pull-request into their Swarm docs; you may want to take a look: https://www.auzias.net/en/docker-network-multihost/
Have a nice day!
The problem can be fixed by using --network=host.
This allows your container to use the host machine's network. To access your container directly, you can change the SSH port inside the container and reach the container on that specific port number.
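Roughly (the image is hypothetical; with host networking, the container's sshd must listen on a port the host is not already using, 2222 here):
docker run -d --network=host my-ssh-image   # hypothetical image running sshd on port 2222
ssh -p 2222 user@<host-ip>                  # the container is reached via the host's own IP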
I answered a similar question here
https://stackoverflow.com/a/35359185/4094678
The difference in your case would be to create a network with subnet 192.168.X.X/24 and then assign the desired IP address to the container with --ip.
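Something along these lines (the subnet and address are examples; whether other hosts can actually route to the container still depends on your network setup):
docker network create --subnet=192.168.0.0/24 mynet
docker run -d --network=mynet --ip=192.168.0.10 nginx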
We can't change the docker0 IP address here, but we do have the option to create multiple networks.
Solution 1: start the container with the host network, using --network=host.
Solution 2: start the container exposing the required cluster port, and communicate with it from another node:
-p hostport:serviceport
Solution 3: deploy the cluster over Docker Swarm.