Can't ping service inside Docker container from the host machine

I'm running a container via docker-compose on Ubuntu 20.04, and I can't ping or curl the web server running inside it from the host machine that's running Docker.
I've given the container a static IP, and if I open a shell in the container I can see the service running fine and curl it as expected.
My docker-compose.yml looks like this:
version: "2.1"
services:
container:
image: imagename
container_name: container
networks:
net:
ipv4_address: 172.20.0.5
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/London
ports:
- 9000:9000
restart: unless-stopped
networks:
net:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
gateway: 172.20.0.1
But if I curl -v 172.20.0.5:9000 from the same machine, I get
* Trying 172.20.0.5:9000...
* TCP_NODELAY set
* connect to 172.20.0.5 port 9000 failed: No route to host
* Failed to connect to 172.20.0.5 port 9000: No route to host
* Closing connection 0
curl: (7) Failed to connect to 172.20.0.5 port 9000: No route to host
My best guess is that it's something to do with iptables or firewall rules, but I haven't changed those at all from the default Docker setup. With host network mode it does work, but that exposes port 9000 publicly. I want it accessible only locally, and then to put it behind a reverse proxy. Thanks.

The static IP you gave is within the network Docker created, so your host is correctly telling you that it has no route to that subnet. However, you are binding the container's port 9000 to port 9000 on your host, so you should be able to curl localhost:9000. If that doesn't work, your web server may need to listen on 0.0.0.0 rather than only on the container's loopback interface.
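A quick way to check both halves of that (my own sketch, not from the original answer; it assumes ss is available in the image, with netstat -tlnp as a fallback):
# On the host: go through the published port, not the container IP
$ curl -v http://localhost:9000
# Inside the container: see which address the server is bound to.
# 127.0.0.1:9000 means only in-container clients can connect;
# 0.0.0.0:9000 (or *:9000) means the published port will work.
$ docker exec -it container ss -tlnp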

Related

Assign IP address in docker with a bridge network which has no IP

I want to assign a static IP to a docker container.
I can assign an IP address if there is an IPAM configuration in the network definition:
version: "2.1"
services:
nginx:
image: ghcr.io/linuxserver/nginx
container_name: nginx
volumes:
- ./config:/config
ports:
- 443:443
restart: always
networks:
br-uplink:
ipv4_address: 192.168.1.2
networks:
br-uplink:
driver: bridge
name: br-uplink
ipam:
config:
- subnet: "192.168.1.0/24"
gateway: "192.168.1.1"
but if there is no IPAM, then this does not work.
$ docker compose up -d
[+] Running 1/2
⠿ Network br-uplink Created
⠋ Container nginx Creating
Error response from daemon: user specified IP address is supported only when connecting to networks with user configured subnets
So if I remove ipv4_address and the IPAM configuration, a random address is assigned.
And if I assign the address manually within the container, it works:
docker compose exec nginx bash
ip addr flush dev eth0
ip addr add 192.168.11.2/24 dev eth0
How can I make this happen automatically?
I don't want to create my own Dockerfile for this; I'd be happy if it can be done within the docker-compose.yml.
Any ideas?
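One possible direction (my own sketch, not from the thread): grant the container NET_ADMIN and run the same ip commands before handing off to the image's entrypoint, all within docker-compose.yml. The exec /init line assumes a linuxserver-style image whose normal entrypoint is /init; adjust it for your image:
services:
  nginx:
    image: ghcr.io/linuxserver/nginx
    cap_add:
      - NET_ADMIN            # required for ip addr flush/add inside the container
    entrypoint: /bin/bash
    command:
      - -c
      - |
        ip addr flush dev eth0
        ip addr add 192.168.11.2/24 dev eth0
        exec /init           # hand off to the image's usual entrypoint (assumption)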

Using custom local domain with Docker

I am running Docker using Docker Desktop on Windows.
I would like to set up a simple server.
I run it using:
$ docker run -di -p 1234:80 yahya/example-server
This works as expected and runs fine on localhost:1234.
However, I want to give it its own local domain name (e.g. api.example.test), which should only be accessible locally.
Normally for a VM setup I would edit the Windows hosts file, get the IP address of the VM (let's say it's 192.168.90.90) and add something like the following:
192.168.90.90 api.example.test
How would I do something similar in Docker?
I know you can enter an IP address for port forwarding, but if I enter any local IP I get the following error:
$ docker run -di -p 192.168.90.90:1234:80 yahya/example-server
docker: Error response from daemon: Ports are not available: exposing port TCP 192.168.90.90:80 -> 0.0.0.0:0: listen tcp 192.168.90.90:80: can't bind on the specified endpoint.
However, it does work for 10.0.0.7 for some reason (I found this IP automatically added in the hosts file after installing Docker Desktop).
$ docker run -di -p 10.0.0.7:1234:80 yahya/example-server
This essentially solves the issue, but it would become a problem again if I have more than one project.
Is there a way I can use another local IP address (preferably without an nginx proxy)?
I think there is no simple way to do this without some kind of reverse proxy.
In my dev environment I use Traefik and dnscrypt-proxy to get automatic *.test domain names for multiple projects at the same time.
First, start the Traefik proxy on ports 80 and 443; example docker-compose.yml:
---
networks:
  traefik:
    name: traefik
services:
  traefik:
    image: traefik:2.8.3
    container_name: traefik
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik
    ports:
      - 80:80
      - 443:443
    environment:
      TRAEFIK_API: 'true'
      TRAEFIK_ENTRYPOINTS_http: 'true'
      TRAEFIK_ENTRYPOINTS_http_ADDRESS: :80
      TRAEFIK_ENTRYPOINTS_https: 'true'
      TRAEFIK_ENTRYPOINTS_https_ADDRESS: :443
      TRAEFIK_ENTRYPOINTS_https_HTTP_TLS: 'true'
      TRAEFIK_GLOBAL_CHECKNEWVERSION: 'false'
      TRAEFIK_GLOBAL_SENDANONYMOUSUSAGE: 'false'
      TRAEFIK_PROVIDERS_DOCKER: 'true'
      TRAEFIK_PROVIDERS_DOCKER_EXPOSEDBYDEFAULT: 'false'
Then attach your service to the traefik network and set labels for routing (see Traefik & Docker). Example docker-compose.yml:
---
networks:
  traefik:
    external: true
services:
  example:
    image: yahya/example-server
    restart: always
    labels:
      traefik.enable: true
      traefik.docker.network: traefik
      traefik.http.routers.example.rule: Host(`example.test`)
      traefik.http.services.example.loadbalancer.server.port: 80
    networks:
      - traefik
Finally, add this to your hosts file:
127.0.0.1 example.test
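To check the routing end to end (my own verification step, not from the answer; -k is needed because Traefik serves a self-signed default certificate unless you configure one):
$ curl http://example.test
$ curl -k https://example.test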
Instead of manually adding every future domain to hosts, you can set up a local DNS resolver. I prefer to use the cloaking feature of dnscrypt-proxy for this.
You can install it using the installation instructions, then uncomment the following line in dnscrypt-proxy.toml:
cloaking_rules = 'cloaking-rules.txt'
and add to cloaking-rules.txt:
*.test 127.0.0.1
Finally, set your network connection to use 127.0.0.1 as its DNS resolver.
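A quick way to confirm the cloaking works (my own check; it assumes dig is available and dnscrypt-proxy is listening on 127.0.0.1:53 — any *.test name should then resolve to 127.0.0.1):
$ dig @127.0.0.1 anything.test +short
127.0.0.1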

Private "host" for docker compose network

Given a docker-compose file something like this:
version: "3.8"
services:
  service-one:
    ports:
      - "8881:8080"
    image: service-one:latest
  service-two:
    ports:
      - "8882:8080"
    image: service-two:latest
what happens is that service-one is exposed on the host network at port 8881 and service-two at port 8882.
What I'd like is for the network created for the docker-compose project to contain a "private host" on which service-one is exposed at port 8881 and service-two at port 8882, so that any container in the compose network can reach those services on their configured HOST_PORT via that "private host", but not on the actual docker host. That is, whatever network configuration usually bridges CONTAINER_PORT to HOST_PORT should happen privately within the compose network, without any opportunity for port conflicts on the actual host network.
I tweaked this to fit your case. The idea is to run socat in a gateway container so that neither the containers nor the images have to change (only the service names do). So, from any service-X-backend you are able to connect to:
service-one on port 8881, and
service-two on port 8882.
Tested with nginx containers.
If you wish to make some ports public, you need to publish them from the gateway itself.
version: "3.8"
services:
service-one-backend:
image: service-one:latest
networks:
- gw
service-two-backend:
image: service-two:latest
networks:
- gw
gateway:
image: debian
networks:
gw:
aliases:
- service-one
- service-two
depends_on:
- service-one-backend
- service-two-backend
command: sh -c "apt-get update
&& apt-get install -y socat
&& nohup bash -c \"socat TCP-LISTEN: 8881,fork TCP:service-one-backend:8080 2>&1 &\"
&& socat TCP-LISTEN: 8882,fork TCP:service-two-backend:8080"
networks:
gw:
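To check that the aliases resolve and socat is forwarding (my own verification, not part of the answer; it assumes curl exists in the service images — plain nginx images may only ship wget -O- as an alternative):
$ docker compose exec service-one-backend curl -s http://service-two:8882
$ docker compose exec service-two-backend curl -s http://service-one:8881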

Accessing to gitlab docker container outputs connection refused

I have a docker container running this configuration for the gitlab-ce image:
version: "3"
services:
gitlab:
hostname: gitlab.mydomain.com
image: gitlab/gitlab-ce:latest
container_name: gitlab
restart: always
ports:
- 3000:80
volumes:
- /opt/gitlab/config:/etc/gitlab
- /opt/gitlab/logs:/var/log/gitlab
- /opt/gitlab/data:/var/opt/gitlab
networks:
default:
external:
name: custom_network
When running docker ps I see my container up and running, with container port 80 mapped to host machine port 3000 as intended.
However, when running wget -O- https://172.25.0.2:3000 I get this error message:
Connecting to 172.25.0.2:3000... failed: Connection refused.
When you map a port, you access it through the mapped port using a host IP.
So, to reach port 80, use the container IP; to reach port 3000, use the host IP, localhost on the host itself, or any private interface on your host.
The command wget -O- https://172.25.0.2:3000 talks to the container directly, not through the mapped port, and requests a service listening on port 3000 inside the container. Nothing is listening there, so the result is connection refused.
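Put concretely (my own illustration; note that the question also uses https:// against a mapping that targets GitLab's plain-HTTP port 80, so test with http:// first):
# through the published port, from the host
$ wget -O- http://localhost:3000
# or directly to the container, on the container's own port
$ wget -O- http://172.25.0.2:80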

docker-compose to access port on host machine

The IP of the host machine is 192.168.0.208.
The docker-compose file is as follows:
version: '2'
services:
  zl-tigervnc:
    image: zl/dl-tigervnc:1.5
    container_name: zl_dl_tigervnc
    restart: always
    tty: true
    ports:
      - "8001:8888"
      - "6001:6006"
      - "8901:5900"
      - "10001:22"
    devices:
      - /dev/nvidia0
    volumes:
      - ~/data:/root/data
      - /var/run/docker.sock:/var/run/docker.sock
    extra_hosts:
      - "dockerhost:192.168.0.208"
A container was launched from this file. The container wants to access port 8080 on the host machine (e.g. 192.168.0.208:8080), but it doesn't work.
However, when I use port forwarding to map port 8080 on the host machine to port 8080 on the router (the router's IP is 63.25.20.83), the container can reach the host's port 8080 through the router (e.g. 63.25.20.83:8080).
I have tried many of the solutions from https://github.com/docker/docker/issues/1143, but it still does not work.
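Two things worth checking, from me rather than from the linked issue: the service on the host must listen on 0.0.0.0 (or at least on the docker bridge address), not only on 127.0.0.1, and the host firewall must allow traffic from the docker bridge. On Docker 20.10+ there is also a built-in host-gateway alias that avoids hard-coding 192.168.0.208; a sketch of the relevant fragment:
extra_hosts:
  - "host.docker.internal:host-gateway"  # Docker 20.10+: always resolves to the host's gateway IP
The container can then reach the host service at host.docker.internal:8080.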
