My Docker containers can ping but can't curl URL

I'm using Docker version 20.10.21 on Ubuntu Server 22.04.
For about a week, my Docker containers haven't been able to reach public APIs on the internet (for example, Public holidays in France). They could reach them before an apt update and upgrade was done.
At first I thought this was a Docker bridge network issue, so I tried this solution:
My docker container has no internet
Then, I tried
docker network prune
, then I tried to uninstall and reinstall Docker.
After further investigation, my diagnosis turned out to be wrong: I can ping public hostnames, but I can't curl any URL.
I don't understand why this issue suddenly appeared, and I'm out of ideas for solving it.
UPDATE:
Docker containers can't curl any URL, but my Ubuntu host can.
With the Docker host network (--network host), curl works for the given API.
Likewise, if I run the same container with Docker Desktop on my dev computer, it works fine.

I finally found the issue: the MTU of my host network interface differed from the Docker network's default value (1500).
I checked my network interface MTU:
ip a | grep mtu
Then I set the MTU for the Docker daemon in /etc/docker/daemon.json:
{
  "mtu": 1280
}
Don't forget to restart Docker afterwards:
systemctl restart docker
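To confirm the fix, you can compare the host interface's MTU with what a container now sees (a quick check, assuming the stock alpine image; any image with the ip tool works):
# Host side: note the MTU of the physical interface
ip a | grep mtu
# Container side: eth0 should now report mtu 1280
docker run --rm alpine ip link show eth0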

Related

Docker plugins malfunctioning with mdns?

I have the following set-up:
I run the Docker daemon in a VM on my MacBook (M1, macOS Monterey 12.6). The VM advertises the "docker.local" service (not entirely sure this is the correct terminology).
I then try to interact with the Docker daemon from my MacBook.
I observe the following:
user@host ~ $ DOCKER_HOST=tcp://docker.local:2375 docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
user@host ~ $ DOCKER_HOST=tcp://docker.local:2375 docker-compose ls
error during connect: Get "http://docker.local:2375/v1.24/containers/json?filters=%7B%22label%22%3A%7B%22com.docker.compose.project%22%3Atrue%7D%7D": dial tcp: lookup docker.local on 10.0.0.1:53: no such host
So when I use docker (client version 20.10.10), communication works as expected. But if I use docker-compose (version v2.14.0), I get this no such host error. I see the same behavior with docker buildx, for example.
However:
user@host ~ $ dscacheutil -q host -a name docker.local
name: docker.local
ipv6_address: fd05:60e3:4cfd:5e54:5054:ff:fe15:ff48
name: docker.local
ip_address: 192.168.205.85
So to me, it looks like the service is advertised appropriately.
So I can only assume docker and docker-compose resolve names differently.
In the case of docker-compose, it looks like the gateway is actually being used as a DNS server. However, the gateway does not know about this service, because the VM is running locally on my Mac.
Do you have any idea why this is and if there is a work-around?
I have spent quite a bit of time looking into it (tcpdumping, editing DNS settings, ...) but I'm still confused about how to make this work. The good thing is that I got to learn about mDNS (pretty cool stuff!).
Thanks in advance,
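One possible workaround (an untested sketch, not from the original post): since docker-compose appears to skip mDNS and query the gateway's DNS directly, you can pin the name in /etc/hosts on the Mac, using the address dscacheutil reported:
# Pin the VM's mDNS name so plain DNS-style lookups also succeed
echo "192.168.205.85 docker.local" | sudo tee -a /etc/hosts
DOCKER_HOST=tcp://docker.local:2375 docker-compose ls
The drawback is that the entry goes stale if the VM gets a different address.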

docker compose installed in ubuntu in wsl2 not connecting to internet with cisco vpn

I have installed docker/compose on Ubuntu Focal in WSL2. If the containers are started without compose, I am able to ping various external hosts. However, the same container, when started through compose with the VPN up, cannot ping hosts and fails with errors like 'Temporary failure in name resolution'. The problem looks to be related to DNS resolution. Has anyone seen this before?
I was able to get it working with
sudo dockerd --dns 8.8.8.8
However, why this affects only compose is not clear.
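A more persistent variant of the same fix (a sketch; replace 8.8.8.8 with your corporate DNS server if internal names must resolve) is to set the DNS server in /etc/docker/daemon.json instead of on the dockerd command line:
{
  "dns": ["8.8.8.8"]
}
Then restart the Docker daemon (e.g. sudo service docker restart inside WSL2).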

Docker container cannot access internet behind cisco vpn

My setup:
Linux Mint 20
Docker version 19.03.12
Cisco AnyConnect 4.3.05017
My Issue:
When I connect to my company's VPN I cannot access the internet through my docker containers.
e.g. running docker run -it ubuntu apt update will fail with the message
"Err:1 http://archive.ubuntu.com/ubuntu focal InRelease
Temporary failure resolving 'archive.ubuntu.com'"
Disconnecting from VPN does not fix the issue. (see workaround #2)
I have two workarounds:
Running docker with docker run -it --net=host ubuntu apt update works fine; however, that is not a suitable workaround for my company's scripts and build system. It will do for ad-hoc jobs.
Disconnecting from the VPN and running the following script (from https://github.com/moby/moby/issues/36151):
#!/bin/bash
docker system prune -a          # remove all unused Docker objects
systemctl stop docker
iptables -F                     # flush all iptables rules (including the VPN's)
ip link set docker0 down
brctl delbr docker0             # delete the docker0 bridge; Docker will recreate it
systemctl start docker
will allow it to work again. But then I don't have access to my company's internal servers, which are also needed to build our software.
I have tried these things:
Added DNS to daemon.json (My docker container has no internet)
Fixing the resolv.conf (My docker container has no internet)
https://superuser.com/questions/1130898/no-internet-connection-inside-docker-containers
Docker container can only access internet with --net=host
https://stackoverflow.com/a/35519951/9496422
and basically every other hit on the first two pages of Google results for "docker container no internet behind vpn".
To fix this, you need to enable the setting "Allow local (LAN) access when using VPN (if configured)" in Cisco AnyConnect.
(Screenshot: Cisco AnyConnect preferences window.)
However, some companies don't allow this because of their security policy.
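Another approach that sometimes helps (a sketch; it assumes AnyConnect claims routes overlapping Docker's default 172.17.0.0/16 bridge subnet) is to move Docker's networks to ranges the VPN does not claim, via /etc/docker/daemon.json:
{
  "bip": "10.200.0.1/24",
  "default-address-pools": [
    { "base": "10.201.0.0/16", "size": 24 }
  ]
}
Then restart Docker with systemctl restart docker.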

Docker Intercontainer communication on CentOS 7

I am setting up a microservices architecture using Docker for each service. I am also using the Kong API gateway, running in its own Docker container. The Docker host is CentOS 7, running in a VM with the IP 192.168.222.76.
On the host command line, I can access the starter service on port 7000 fine. However, from within the kong container, I can ping the IP address but cannot access the service. As you can see from the output below, it says "Host is unreachable".
I am starting Docker with --icc=true and --iptables=true, and I have made several suggested changes to firewalld, rich rules, etc., but I still cannot reach the other container from within the kong container.
I am starting the kong container on a named network, "kong-net"; the kong database instance is in the same Docker network, and THEY seem to be able to communicate. I have added my starter service container to the same network on startup, and still no joy. The kong container CAN access the outside world, just not other Docker containers on the same host.
Output is below:
[root@docker ~]# clear
[root@docker ~]# curl 192.168.222.76:7000/starter/hello
Hello Anonymous Person!!
[root@docker ~]# docker exec -it kong /bin/ash
# curl 192.168.222.76:7000/starter/hello
curl: (7) Failed to connect to 192.168.222.76 port 7000: Host is unreachable
# curl www.google.com
HTML returned properly...
Any help on this appreciated!
You have to reach the other container by its container name.
Try this:
docker exec -t kong curl servicename:7000/starter/hello
The Kong container and the service containers must share the same network; see the sketch below.
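If the starter service container was started outside kong-net, it can be attached afterwards (a sketch; "starter" is a hypothetical container name, and it assumes the service listens on 7000 inside the container):
docker network connect kong-net starter
docker exec -t kong curl starter:7000/starter/hello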
I was able to get ICC working by disabling firewalld altogether (stop, disable, mask with systemctl) and opening up everything in iptables. Now it's just a matter of setting up rules to block inbound access except on the API gateway and SSH.
Thanks!
I have come across this problem before. If disabling the firewall fixes the problem, DO NOT leave the firewall disabled; this is a very big security concern. The proper way to go about it is to first reactivate the firewall and then add a network masquerade:
firewall-cmd --zone=public --add-masquerade --permanent
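Since the rule is added with --permanent, it only takes effect after a reload; restarting Docker afterwards lets it reinstall its own iptables rules (a sketch of the follow-up steps):
firewall-cmd --reload
systemctl restart docker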

Why can't I curl one docker container from another via the host

I really don't understand what's going on here. I simply want to perform an HTTP request from inside one Docker container to another Docker container, via the host, using the host's public IP and a published port.
Here is my setup. I have my dev machine. And I have a docker host machine with two containers. CONT_A listens and publishes a web service on port 3000.
DEV-MACHINE
HOST (public IP = 111.222.333.444)
  CONT_A (publishes port 3000)
  CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I execute inside CONT_B
Not possible; it just times out. Ping is fine, though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly an answer to the question, but there's a more Docker-ish way of solving your problem: forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using docker swarm. You can find the full guide here, but in essence you do the following:
# Create the network
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
# Start container A
docker run -d --name=A --network=my-net producer:latest
# Start container B
docker run -d --name=B --network=my-net consumer:latest
# Magic has occurred
docker exec -it B /bin/bash
> curl A:3000   # MIND BLOWN!
Then inside container B you can just curl the hostname A and it will resolve for you (even when you start scaling, etc.).
If you're not keen on using Docker swarm, you can still use legacy Docker links:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports in your A container.
And finally, if you start moving to production... forget about links & overlay networks altogether... use Kubernetes :-) It has a more difficult initial setup, but it introduces a bunch of concepts & tools that make linking & scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access your container A using localhost; no public IP is needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with the above command, you can curl container A from container B like this:
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explain why the Docker containers behave as described in the question.
Docker is there to provide lightweight isolation of host resources for one or more containers.
The Docker network is isolated from the host network by default, and uses a bridge network (again, by default; you can also have an overlay network) for inter-container communication.
and how to fix the problem without docker networks.
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting from version 20.10, the Docker Engine also supports communicating with the Docker host via host.docker.internal on Linux.
Unfortunately, this won't work out of the box on Linux because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purposes and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you can still access the host through host.docker.internal.
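For example (a sketch; curlimages/curl is just a convenient image that ships curl), container B could reach the service published on the host's port 3000 like this:
docker run --rm --add-host=host.docker.internal:host-gateway \
    curlimages/curl http://host.docker.internal:3000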
I had a similar problem: an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use docker compose. I wanted to use curl from cron to web from time to time to execute some PHP script on one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But after some time I always got "host unreachable".
My first solution was to update /etc/hosts in the cron container and add:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container. It worked, but this is a hack, and as far as I know such manual updates are discouraged. You should use extra_hosts in docker compose, which still requires an explicit IP address instead of the container name; a sketch follows below.
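A hypothetical docker-compose.yml fragment using extra_hosts (1.2.3.4 is the example address from above and would have to match the web container's actual IP):
services:
  cron:
    extra_hosts:
      - "app1.example.com:1.2.3.4"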
I tried the custom networks solution, which from what I have seen is the correct way to deal with this, but I never succeeded. If I ever learn how to do it, I promise to update this answer.
Finally, I used curl's ability to address the server directly while passing the domain name as a Host header in a separate parameter:
curl -H'Host: app1.example.com' web/some_maintance.php
Not very beautiful, but it does work.
(here web is the name of my nginx container)
