Our internal network has the range 172.20.0.0/16 reserved for internal purposes, and Docker by default carves its internal networks out of the 172.16.0.0/12 range. I can move the default bridge into 192.168.* by providing the bip setting to the daemon:
➜ ~ sudo cat /etc/docker/daemon.json
{
"bip": "192.168.2.1/24"
}
➜ ~ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.2.1 netmask 255.255.255.0 broadcast 0.0.0.0
However, when creating new custom networks via docker network create, or by defining them in the networks section of docker-compose.yaml, these are still created in 172.*, thus eventually clashing with 172.20.0.0/16:
➜ ~ docker network create foo
610fd0b7ccde621f87d40f8bcbed1699b22788b70a75223264bb14f7e63f5a87
➜ ~ docker network inspect foo | grep Subnet
"Subnet": "172.17.0.0/16",
➜ ~ docker network create foo1
d897eab31b2c558517df7fb096fab4af9a4282c286fc9b6bb022be7382d8b4e7
➜ ~ docker network inspect foo1 | grep Subnet
"Subnet": "172.18.0.0/16",
I understand I can provide the subnet value to docker network create, but I would rather have all such subnets created under 192.168.*.
How can one configure dockerd to do this automatically?
For anyone who found this question: it is now possible.
$ docker -v
Docker version 18.06.0-ce, build 0ffa825
Edit or create the config file for the Docker daemon:
# nano /etc/docker/daemon.json
Add lines:
{
"default-address-pools":
[
{"base":"10.10.0.0/16","size":24}
]
}
Restart dockerd:
# service docker restart
Check the result:
$ docker network create foo
$ docker network inspect foo | grep Subnet
"Subnet": "10.10.1.0/24"
It works for docker-compose too.
Your "bip": "192.168.2.1/24" works for bridge0 only. It means that any container which run without --network will use this default network.
I've been trying all the existing commands for several hours and could not fix this problem.
I used everything covered in this article: Docker - Bind for 0.0.0.0:4000 failed: port is already allocated.
I currently have one container, shown by docker ps -a (docker ps itself is empty):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ebb9289dfd1 dockware/dev:latest "/bin/bash /entrypoi…" 2 minutes ago Created TheGoodPartDocker
When I try docker-compose up -d I get the error:
ERROR: for TheGoodPartDocker Cannot start service shop: driver failed programming external connectivity on endpoint TheGoodPartDocker (3b59ebe9366bf1c4a848670c0812935def49656a88fa95be5c4a4be0d7d6f5e6): Bind for 0.0.0.0:80 failed: port is already allocated
I've tried to remove everything using: docker ps -aq | xargs docker stop | xargs docker rm
or to free the port: fuser -k 80/tcp
even deleting networks:
sudo service docker stop
sudo rm -f /var/lib/docker/network/files/local-kv.db
or just manually stopping and removing the container:
docker-compose down
docker stop 5ebb9289dfd1
docker rm 5ebb9289dfd1
Here is also my netstat output (netstat | grep 80):
unix 3 [ ] STREAM CONNECTED 20680 /mnt/wslg/PulseAudioRDPSink
unix 3 [ ] STREAM CONNECTED 18044
unix 3 [ ] STREAM CONNECTED 32780
unix 3 [ ] STREAM CONNECTED 17805 /run/guest-services/procd.sock
And docker port TheGoodPartDocker gives me no result.
I also restarted my computer, but nothing works :(.
Thanks for helping
Obviously port 80 is already occupied by some other process. You need to stop that process before you start the container. To find the process, use ss (the example below checks port 22; use 80 for your case):
$ ss -tulpn | grep 22
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1187,fd=3))
tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=1187,fd=4))
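Adapted to the port from the question (a hedged sketch; the owning process will differ per system):
$ sudo ss -tulpn | grep ':80 '
The users:(("...",pid=...)) column names the listener; stop that service or kill the PID, e.g.:
$ sudo systemctl stop nginx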
I intend to test a non-trivial Kubernetes setup as part of CI and wish to run the full system before CD. I cannot run --privileged containers and am running the docker container as a sibling to the host using docker run -v /var/run/docker.sock:/var/run/docker.sock
The basic Docker setup seems to be working inside the container:
linuxbrew@03091f71a10b:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
However, minikube fails to start inside the Docker container, reporting connection issues:
linuxbrew@03091f71a10b:~$ minikube start --alsologtostderr -v=7
I1029 15:07:41.274378 2183 out.go:298] Setting OutFile to fd 1 ...
I1029 15:07:41.274538 2183 out.go:345] TERM=xterm,COLORTERM=, which probably does not support color
...
...
...
I1029 15:20:27.040213 197 main.go:130] libmachine: Using SSH client type: native
I1029 15:20:27.040541 197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1e20] 0x7a4f00 <nil> [] 0s} 127.0.0.1 49350 <nil> <nil>}
I1029 15:20:27.040593 197 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1029 15:20:27.040992 197 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:49350: connect: connection refused
This is despite the network being linked and the port being properly forwarded:
linuxbrew@51fbce78731e:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
93c35cec7e6f gcr.io/k8s-minikube/kicbase:v0.0.27 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes 127.0.0.1:49350->22/tcp, 127.0.0.1:49351->2376/tcp, 127.0.0.1:49348->5000/tcp, 127.0.0.1:49349->8443/tcp, 127.0.0.1:49347->32443/tcp minikube
51fbce78731e 7f7ba6fd30dd "/bin/bash" 8 minutes ago Up 8 minutes bpt-ci
linuxbrew@51fbce78731e:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1e800987d562 bridge bridge local
aa6b2909aa87 host host local
d4db150f928b kind bridge local
a781cb9345f4 minikube bridge local
0a8c35a505fb none null local
linuxbrew@51fbce78731e:~$ docker network connect a781cb9345f4 93c35cec7e6f
Error response from daemon: endpoint with name minikube already exists in network minikube
The minikube container seems to be alive and well when curling from the host, and even ssh is responding:
mastercook@linuxkitchen:~$ curl https://127.0.0.1:49350
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:49350
mastercook@linuxkitchen:~$ ssh root@127.0.0.1 -p 49350
The authenticity of host '[127.0.0.1]:49350 ([127.0.0.1]:49350)' can't be established.
ED25519 key fingerprint is SHA256:0E41lExrrezFK1QXULaGHgk9gMM7uCQpLbNPVQcR2Ec.
This key is not known by any other names
What am I missing and how can I make minikube properly discover the correctly working minikube container?
Since minikube does not complete the cluster creation in this setup, kind is the better fit for running Kubernetes in a (sibling) Docker container.
Because the (sibling) container does not know enough about its own setup, the networking connections come out slightly wrong: kind (and minikube) select a loopback IP upon cluster creation, even though the actual container sits at a different IP in the host's Docker network.
To correct the networking, the (sibling) container needs to be connected to the network that actually hosts the Kubernetes image. The procedure is illustrated below:
1.) Create a Kubernetes cluster:
linuxbrew@324ba0f819d7:~$ kind create cluster --name acluster
Creating cluster "acluster" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-acluster"
You can now use your cluster with:
kubectl cluster-info --context kind-acluster
Thanks for using kind! 😊
2.) Verify whether the cluster is accessible:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:36779 was refused - did you specify the right host or port?
3.) Since the cluster cannot be reached, retrieve the control plane's IP. Note the "-control-plane" suffix appended to the cluster name:
linuxbrew@324ba0f819d7:~$ export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
4.) Update the kube config with the actual master IP:
linuxbrew@324ba0f819d7:~$ sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
5.) This IP is still not reachable from the (sibling) container. To connect the container to the correct network, retrieve the Docker network ID:
linuxbrew@324ba0f819d7:~$ export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
6.) Finally, connect the (sibling) container (whose ID should be stored in the $HOSTNAME environment variable) to the cluster's Docker network:
linuxbrew@324ba0f819d7:~$ docker network connect $MASTER_NET $HOSTNAME
7.) Verify whether the control plane is accessible after the changes:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
Kubernetes control plane is running at https://172.18.0.4:6443
CoreDNS is running at https://172.18.0.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
If kubectl returns the Kubernetes control plane and CoreDNS URLs, as shown in the last step above, the configuration has succeeded.
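For repeated CI runs, steps 3.) through 6.) can be collapsed into a small shell helper. This is a sketch under the same assumptions as above (the cluster is named acluster and $HOSTNAME holds the sibling container's ID):
#!/bin/sh
# Re-point kubeconfig at the control plane's real IP and join its Docker network.
MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
sed -i "s|^    server:.*|    server: https://$MASTER_IP:6443|" "$HOME/.kube/config"
docker network connect "$MASTER_NET" "$HOSTNAME"
kubectl cluster-info --context kind-acluster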
You can run minikube in a Docker-in-Docker container; it will use the docker driver.
docker run --name dind -d --privileged docker:20.10.17-dind
docker exec -it dind sh
/ # wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
/ # mv minikube-linux-amd64 minikube
/ # chmod +x minikube
/ # ./minikube start --force
...
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
/ # ./minikube kubectl -- run hello --image=hello-world
/ # ./minikube kubectl -- logs pod/hello
Hello from Docker!
...
Also, note that --force is needed to run minikube with the docker driver as root, which we shouldn't do according to the minikube instructions.
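To sanity-check the cluster from the same shell, the bundled kubectl can be used (a hedged example):
/ # ./minikube kubectl -- get nodes
/ # ./minikube kubectl -- get pods -A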
Before upgrading my system, I was able to successfully connect to Mongo running in a Docker container using published ports. After upgrading, as shown in Case #1, connecting via published ports no longer works for me.
Case #1
~ docker run --rm -d -p 27017:27017 mongo:3.6
2594b7e5cbf481526589d221361c853338ff55ecb32d9e02eae17383960e971a
~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2594b7e5cbf4 mongo:3.6 "docker-entrypoint.s…" 4 seconds ago Up 3 seconds 0.0.0.0:27017->27017/tcp dazzling_fermat
Robo3T Logs
Cannot connect to the MongoDB at localhost:27017.
Error:
Network is unreachable. Reason: network error while attempting to run command 'isMaster' on host 'localhost:27017'
~ sudo lsof -i -P -n | grep LISTEN
...
docker-pr 263637 root 4u IPv4 3723123 0t0 TCP *:27017 (LISTEN)
✘ ~ sudo ufw status
Status: inactive
Now I can only connect using the host networking stack.
Case #2
~ docker run --rm -d --network=host mongo:3.6
39929a8d50cc8554d256f7516d039621cd22ed8be86680ac0e1400809464b619
~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39929a8d50cc mongo:3.6 "docker-entrypoint.s…" 5 seconds ago Up 4 seconds admiring_grothendieck
Robo3T Logs
4:13:20 PM Info: Connecting to localhost:27017...
4:13:20 PM Info: Establish connection successful. Connection: localhost
Pre-upgrade:
Linux Mint 19 - Tricia,
Docker version was 19.xx something I believe.
Post Upgrade:
~ lsb_release -a
No LSB modules are available.
Distributor ID: Linuxmint
Description: Linux Mint 20
Release: 20
Codename: ulyana
~ docker --version
Docker version 20.10.7, build 20.10.7-0ubuntu1~20.04.1
I verified there are no running firewalls (UFW, etc.), and I can connect from container to container when both the server and the client are on a private Docker network. What am I missing? How can I connect using published ports again? Thanks in advance.
Docker on Linux generally uses the host's DNS and modifies your iptables to provide the connectivity between the host and container. If there's a problem with connectivity, in your case the most likely culprits are (in order of likelihood):
DNS entry missing for localhost, or the wrong IP version being targeted. Try using 127.0.0.1 or ::1 as the hostname instead.
iptables rules are missing. See Docker's packet-filtering documentation for remediations and flags that can affect this.
The container might actually have issues starting up. Check the output of docker logs <container_id> for errors after you start it. I would say this option is unlikely, as things work under host networking, but don't discount this possibility too quickly.
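A quick triage along those lines for the Mongo case above (a hedged sketch; <container_id> is the ID from docker ps):
# 1. Does anything answer on the published port at all?
curl -v telnet://127.0.0.1:27017
# 2. Are the NAT rules for the published port in place?
sudo iptables -t nat -L DOCKER -n | grep 27017
# 3. Did the container start cleanly?
docker logs <container_id>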
I have a Docker Compose v2 file which starts a container. I locally run a service on port 3001. I want to reach this service from the Docker container.
The Docker Compose file looks like this:
version: '2'
services:
my-thingy:
image: my-image:latest
#network_mode: host #DOES not help
environment:
- THE_HOST_I_WANT_TO_CONNECT_TO=http://127.0.0.1:3001
ports:
- "3010:3010"
Now, how can I reach THE_HOST_I_WANT_TO_CONNECT_TO?
What I tried is:
Setting network_mode to host. This did not work. 127.0.0.1 could not be reached.
I can also see that I can reach the host from the container if I use the local IP of the host. A quick hack would be to use something like ifconfig | grep broadcast | awk '{print $2}' to obtain the IP and substitute that in Docker Compose. Since this IP can change on reconnect and different setups can have different ifconfig results, I am looking for a better solution.
I've used another hack/workaround from comments in the docker issue #1143. Seems to Work For Me™ for the time being... Specifically, I've added the following lines to my Dockerfile:
# - net-tools contains netstat, used to discover IP of Docker host server.
# NOTE: the netstat trick is to make Docker host server accessible
# from inside Docker container under name 'dockerhost'. Unfortunately,
# as of 2016.10, there's no official/robust way to do this when Docker host
# has no public IP/DNS entry. What is used here is built based on:
# - https://github.com/docker/docker/issues/1143#issuecomment-39364200
# - https://github.com/docker/docker/issues/1143#issuecomment-46105218
# See also:
# - http://stackoverflow.com/q/38936738/98528
# - https://github.com/docker/docker/issues/8395#issuecomment-200808798
# - https://github.com/docker/docker/issues/23177
RUN apt-get update && apt-get install -y net-tools
CMD (netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2" dockerhost"}' >> /etc/hosts) && \
...old CMD...
With this, I can use dockerhost as the name of the host where Docker is installed. As mentioned above, this is based on:
https://github.com/docker/docker/issues/1143#issuecomment-39364200
(...) One way is to rely on the fact that the Docker host is reachable through the address of the Docker bridge, which happens to be the default gateway for the container. In other words, a clever parsing of ip route ls | grep ^default might be all you need in that case. Of course, it relies on an implementation detail (the default gateway happens to be an IP address of the Docker host) which might change in the future. (...)
https://github.com/docker/docker/issues/1143#issuecomment-46105218
(...) A lot of people like us are looking for a little tidbit like this
netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'
where netstat -nr means:
Netstat prints information about the Linux networking subsystem.
(...)
--route , -r
Display the kernel routing tables.
(...)
--numeric , -n
Show numerical addresses instead of trying to determine symbolic host, port or user names.
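A slightly more robust variant of the same trick parses ip route instead of netstat, as the first quoted comment suggests (a sketch; assumes iproute2 is installed in the image):
CMD (echo "$(ip route | awk '/^default/ {print $3}') dockerhost" >> /etc/hosts) && \
...old CMD...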
This is a known issue with Docker Compose: see Document how to connect to Docker host from container #1143. The suggested solution of a dockerhost entry in /etc/hosts is not implemented.
I went for the solution with a shell variable as also suggested in a comment by amcdl on the issue:
Create a LOCAL_XX_HOST variable: export LOCAL_XX_HOST="http://$(ifconfig en0 inet | grep "inet " | awk -F'[: ]+' '{ print $2 }'):3001".
Then, for example, refer to this variable in docker-compose like this:
my-thingy:
image: my-image:latest
environment:
- THE_HOST_I_WANT_TO_CONNECT_TO=${LOCAL_XX_HOST}
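As a side note, on newer Docker Engine releases (20.10+) the engine can inject the host's gateway address itself, which avoids the interface-grepping entirely; a hedged sketch for the compose file from the question:
my-thingy:
image: my-image:latest
environment:
- THE_HOST_I_WANT_TO_CONNECT_TO=http://host.docker.internal:3001
extra_hosts:
- "host.docker.internal:host-gateway"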
I have Docker version 1.10 with the embedded DNS service.
I have created two service containers in my docker-compose file. They can reach each other by hostname and by IP, but when I want to reach one of them from the host machine, it works only by IP, not by hostname.
So, is it possible to access a Docker container from the host machine by its hostname in Docker 1.10?
Update:
docker-compose.yml
version: '2'
services:
service_a:
image: nginx
container_name: docker_a
ports:
- 8080:80
service_b:
image: nginx
container_name: docker_b
ports:
- 8081:80
then I start it with the command docker-compose up --force-recreate
when I run:
docker exec -i -t docker_a ping -c4 docker_b - it works
docker exec -i -t docker_b ping -c4 docker_a - it works
ping 172.19.0.2 - it works (172.19.0.2 is docker_b's ip)
ping docker_a - fails
The result of the docker network inspect test_default is
[
{
"Name": "test_default",
"Id": "f6436ef4a2cd4c09ffdee82b0d0b47f96dd5aee3e1bde068376dd26f81e79712",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1/16"
}
]
},
"Containers": {
"a9f13f023761123115fcb2b454d3fd21666b8e1e0637f134026c44a7a84f1b0b": {
"Name": "docker_a",
"EndpointID": "a5c8e08feda96d0de8f7c6203f2707dd3f9f6c3a64666126055b16a3908fafed",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"c6532af99f691659b452c1cbf1693731a75cdfab9ea50428d9c99dd09c3e9a40": {
"Name": "docker_b",
"EndpointID": "28a1877a0fdbaeb8d33a290e5a5768edc737d069d23ef9bbcc1d64cfe5fbe312",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
}
},
"Options": {}
}
]
As answered here, there is a software solution for this; copying the answer:
There is an open source application that solves this issue, it's called DNS Proxy Server.
It's a DNS server that resolves container hostnames; when it can't resolve a hostname, it falls back to public nameservers.
Start the DNS Server
$ docker run --hostname dns.mageddo --name dns-proxy-server -p 5380:5380 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/resolv.conf:/etc/resolv.conf \
defreitas/dns-proxy-server
It will set itself as your default DNS automatically (and revert to the original when it stops).
Start your container for the test
docker-compose up
docker-compose.yml
version: '2'
services:
redis:
container_name: redis
image: redis:2.8
hostname: redis.dev.intranet
network_mode: bridge # so it can also resolve other containers' hostnames from the inside, e.g. elasticsearch
elasticsearch:
container_name: elasticsearch
image: elasticsearch:2.2
hostname: elasticsearch.dev.intranet
Now resolve your containers' hostnames
from host
$ nslookup redis.dev.intranet
Server: 172.17.0.2
Address: 172.17.0.2#53
Non-authoritative answer:
Name: redis.dev.intranet
Address: 172.21.0.3
from another container
$ docker exec -it redis ping elasticsearch.dev.intranet
PING elasticsearch.dev.intranet (172.21.0.2): 56 data bytes
It also resolves Internet hostnames:
$ nslookup google.com
Server: 172.17.0.2
Address: 172.17.0.2#53
Non-authoritative answer:
Name: google.com
Address: 216.58.202.78
Here's what I do.
I wrote a Python script called dnsthing, which listens to the Docker events API for containers starting or stopping. It maintains a hosts-style file with the names and addresses of containers. Containers are named <container_name>.<network>.docker, so for example if I run this:
docker run --rm --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql
I get this:
172.17.0.2 mysql.bridge.docker
I then run a dnsmasq process pointing at this hosts file. Specifically, I run a dnsmasq instance using the following configuration:
listen-address=172.31.255.253
bind-interfaces
addn-hosts=/run/dnsmasq/docker.hosts
local=/docker/
no-hosts
no-resolv
And I run the dnsthing script like this:
dnsthing -c "systemctl restart dnsmasq_docker" \
-H /run/dnsmasq/docker.hosts --verbose
So:
- dnsthing updates /run/dnsmasq/docker.hosts as containers stop/start.
- After an update, dnsthing runs systemctl restart dnsmasq_docker.
- dnsmasq_docker runs dnsmasq using the above configuration, bound to a local bridge interface with the address 172.31.255.253.
- The "main" dnsmasq process on my system, maintained by NetworkManager, uses this configuration from /etc/NetworkManager/dnsmasq.d/dockerdns:
server=/docker/172.31.255.253
That tells dnsmasq to pass all requests for hosts in the .docker domain to the docker_dnsmasq service.
This obviously requires a bit of setup to put everything together, but after that it seems to Just Work:
$ ping -c1 mysql.bridge.docker
PING mysql.bridge.docker (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.087 ms
--- mysql.bridge.docker ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
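The core of the approach can be approximated in a few lines of shell for readers who don't need the full script. This is a rough sketch, not the actual dnsthing code; it rebuilds the hosts file on every container start/stop event:
#!/bin/sh
HOSTS=/run/dnsmasq/docker.hosts
rebuild() {
# Emit "<ip> <name>.<network>.docker" for every running container.
docker ps -q | xargs -r docker inspect --format \
'{{range $net, $cfg := .NetworkSettings.Networks}}{{$cfg.IPAddress}} {{$.Name}}.{{$net}}.docker{{println}}{{end}}' \
| sed 's| /| |' > "$HOSTS"
systemctl restart dnsmasq_docker
}
rebuild
docker events --filter type=container --filter event=start --filter event=die |
while read -r _; do rebuild; done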
To specifically solve this problem I created a simple /etc/hosts domain injection tool that resolves names of local Docker containers on the host. Just run:
docker run -d \
-v /var/run/docker.sock:/tmp/docker.sock \
-v /etc/hosts:/tmp/hosts \
--name docker-hoster \
dvdarias/docker-hoster
You will be able to access a container by its container name, hostname, or container ID, and via the network aliases it has declared for each network.
Containers are automatically registered when they start and removed when they are paused, dead or stopped.
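A quick way to verify the registration works (a hedged example with a throwaway nginx container):
$ docker run -d --name webtest nginx
$ ping -c1 webtest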
The easiest way to do this is to add entries to your hosts file.
For Linux: add 127.0.0.1 docker_a docker_b to the /etc/hosts file.
For Mac: similar to Linux, but use the IP of the virtual machine (docker-machine ip default).
Similar to @larsks, I wrote a Python script too, but implemented it as a service. Here it is: https://github.com/nicolai-budico/dockerhosts
It launches dnsmasq with the parameter --hostsdir=/var/run/docker-hosts and updates the file /var/run/docker-hosts/hosts each time the list of running containers changes.
Once /var/run/docker-hosts/hosts changes, dnsmasq automatically updates its mapping and the container becomes available by hostname within a second.
$ docker run -d --hostname=myapp.local.com --rm -it ubuntu:17.10
9af0b6a89feee747151007214b4e24b8ec7c9b2858badff6d584110bed45b740
$ nslookup myapp.local.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: myapp.local.com
Address: 172.17.0.2
There are install and uninstall scripts. All you need is to allow your system to interact with this dnsmasq instance. I registered it in systemd-resolved:
$ cat /etc/systemd/resolved.conf
[Resolve]
DNS=127.0.0.54
#FallbackDNS=
#Domains=
#LLMNR=yes
#MulticastDNS=yes
#DNSSEC=no
#Cache=yes
#DNSStubListener=udp
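After editing the file, systemd-resolved has to be restarted to pick up the new entry; resolution can then be verified against a running container:
$ sudo systemctl restart systemd-resolved
$ nslookup myapp.local.com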