How to disable external network calls in docker compose? [duplicate] - docker

I have a situation where I need to restrict the internet access of containers in a load-balancer network, as in the picture below.
Only container4 connects to the Internet; the other three communicate with the outside world only through container4. For example, if container1 needs SMTP support, it forwards the SMTP request to container4.
No container other than container4 should be allowed to access the Internet directly! This should be enforced at the Docker level.
I believe this is configurable at docker network creation time. Can anyone explain how to achieve this?

As found here, I got this to work with docker-compose. Save as docker-compose.yml:
version: '3'

services:
  outgoing-wont-work:
    image: alpine
    networks:
      - no-internet
    command: ping -c 3 google.com  # will crash
  internal-will-work:
    image: alpine
    networks:
      - no-internet
    command: ping -c 3 internal-and-external
  internal-and-external:
    image: alpine
    networks:
      - no-internet
      - internet
    command: ping -c 3 google.com

networks:
  no-internet:
    driver: bridge
    internal: true
  internet:
    driver: bridge
Then run docker-compose up -d; after a few seconds, docker-compose ps will show something like this:
Name Command State Ports
----------------------------------------------------------------------------------
dco_inet_internal-and-external_1 ping -c 3 google.com Exit 0
dco_inet_internal-will-work_1 ping -c 3 internal-and-ext ... Exit 0
dco_inet_outgoing-wont-work_1 ping -c 3 google.com Exit 1

Create a network with internet access:
docker network create --subnet=172.19.0.0/16 internet
Create a network with internet access blocked:
docker network create --internal --subnet 10.1.1.0/24 no-internet
To connect a docker container to the internet:
docker network connect internet container-name
To block a container's internet access:
docker network connect no-internet container-name
Note
On an internal network, ports cannot be published to the outside world; please refer to this question for more details.
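For the original question (only container4 may talk to the outside world), the same internal/external split can be combined with a relay container attached to both networks. A minimal compose sketch; the service names and the idea of running an SMTP relay in the proxy service are illustrative, not from the question:

```yaml
version: '3'

services:
  app1:                # stands in for container1; no direct internet access
    image: alpine
    networks:
      - no-internet
    # the app would be configured to send, e.g., SMTP traffic to "proxy"
  proxy:               # stands in for container4; bridges both networks
    image: alpine      # in practice an SMTP relay / forward proxy image
    networks:
      - no-internet
      - internet

networks:
  no-internet:
    internal: true
  internet: {}
```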

Another option, if you need to expose ports on a container without internet access but want to let it talk to other containers, would be to provide a bogus DNS configuration. This isn't a perfect solution, though, since it doesn't prevent direct IP access to the outside world.
docker-compose.yaml
version: '3'

services:
  service1:
    image: alpine
    command: sh -c 'ping service2 -c 1; ping google.com -c 1'
    dns: 0.0.0.0
  service2:
    image: alpine
    command: sh -c 'ping service1 -c 1; ping google.com -c 1'
    dns: 0.0.0.0
isolated> docker-compose up
Recreating isolated_service1_1 ... done
Recreating isolated_service2_1 ... done
Attaching to isolated_service2_1, isolated_service1_1
service1_1 | PING service2 (172.18.0.2) 56(84) bytes of data.
service1_1 | 64 bytes from isolated_service2_1.isolated_default (172.18.0.2): icmp_seq=1 ttl=64 time=0.038 ms
service1_1 |
service1_1 | --- service2 ping statistics ---
service1_1 | 1 packets transmitted, 1 received, 0% packet loss, time 0ms
service1_1 | rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms
service2_1 | PING service1 (172.18.0.3) 56(84) bytes of data.
service2_1 | 64 bytes from isolated_service1_1.isolated_default (172.18.0.3): icmp_seq=1 ttl=64 time=0.093 ms
service2_1 |
service2_1 | --- service1 ping statistics ---
service2_1 | 1 packets transmitted, 1 received, 0% packet loss, time 0ms
service2_1 | rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms
service1_1 | ping: google.com: Temporary failure in name resolution
service2_1 | ping: google.com: Temporary failure in name resolution
isolated_service1_1 exited with code 2
isolated_service2_1 exited with code 2

As stated in Bilal's answer, the internal network is a good solution if you do not need to expose the ports.
If you do need to expose the ports, the below solution using iptables does the job for my requirements:
docker network create --subnet 172.19.0.0/16 no-internet
sudo iptables --insert DOCKER-USER -s 172.19.0.0/16 -j REJECT --reject-with icmp-port-unreachable
sudo iptables --insert DOCKER-USER -s 172.19.0.0/16 -m state --state RELATED,ESTABLISHED -j RETURN
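Note that because --insert prepends to the chain, running the two commands in the order shown leaves the RELATED,ESTABLISHED RETURN rule above the REJECT rule, so replies to inbound connections on published ports still get out while new outbound connections are rejected. A diagnostic sketch to confirm the final rule order (assumes the standard DOCKER-USER chain created by Docker):

```shell
# List the DOCKER-USER chain with positions; the RETURN rule for
# RELATED,ESTABLISHED traffic should appear before the REJECT rule.
sudo iptables -L DOCKER-USER -n --line-numbers
```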
Then add
--network no-internet
when you run your docker container. For instance:
$ docker run -it --network no-internet ubuntu:focal /bin/bash
root@9f2181f79985:/# apt update
Err:1 http://archive.ubuntu.com/ubuntu focal InRelease
Temporary failure resolving 'archive.ubuntu.com'

Related

docker-jenkins container can't access internet

I installed a Jenkins container on Docker.
I used docker-compose with this yml file:
version: '2'

services:
  jenkins:
    image: 'bitnami/jenkins:2'
    ports:
      - '8080:8080'
      - '8443:8443'
      - '50000:50000'
    volumes:
      - 'jenkins_data:/bitnami/jenkins'
    dns:
      - '8.8.8.8'
      - '1.1.1.1'

volumes:
  jenkins_data:
    driver: local
In the logs, I found an UnknownHostException error.
jenkins_1 | 2020-03-23 17:45:06.490+0000 [id=46] INFO hudson.util.Retrier#start: The attempt #1 to do the action check updates server failed with an allowed exception:
jenkins_1 | java.net.UnknownHostException: updates.jenkins.io
...
jenkins_1 | 2020-03-23 17:45:06.490+0000 [id=46] INFO hudson.util.Retrier#start: Calling the listener of the allowed exception 'updates.jenkins.io' at the attempt #1 to do the action check updates server
jenkins_1 | 2020-03-23 17:45:06.492+0000 [id=46] INFO hudson.util.Retrier#start: Attempted the action check updates server for 1 time(s) with no success
I tried the following to resolve this error, but without success:
1. Set the 'dns' parameter:
nameserver 8.8.8.8
nameserver 1.1.1.1
2. Reset the bridge network:
systemctl stop docker
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
systemctl start docker
3. Test ping:
docker run -it bitnami/jenkins:2 ping 8.8.8.8
[FATAL tini (8)] exec ping failed: No such file or directory
docker run -it ubuntu:trusty ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. 64 bytes from 8.8.8.8:
icmp_seq=1 ttl=52 time=31.3 ms 64 bytes from 8.8.8.8: icmp_seq=2
ttl=52 time=30.8 ms
docker run -it ubuntu:trusty ping google.com
ping: unknown host google.com
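Since ping 8.8.8.8 succeeds but names do not resolve, the symptoms point at DNS rather than routing. A quick way to confirm this from a throwaway container (diagnostic sketch; busybox is just a small image that ships nslookup):

```shell
# Name resolution with the container's default DNS settings
docker run --rm busybox nslookup google.com
# Name resolution with an explicit upstream resolver
docker run --rm --dns 8.8.8.8 busybox nslookup google.com
```

If only the second command works, the daemon's default DNS configuration is the likely culprit.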
I think bitnami/jenkins may not include ping.
It's probably not a problem with the bridge, given test case 3.
I don't know what I should check next.
Can you give me some hints?
Thank you!
You are only exposing your ports on the loopback interface. Change your ports declaration from
ports:
  - '8080:8080'
  - '8443:8443'
  - '50000:50000'
to
ports:
  - '0.0.0.0:8080:8080'
  - '0.0.0.0:8443:8443'
  - '0.0.0.0:50000:50000'
This allows access to those ports on all interfaces (i.e. including from outside the host).
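Whether a published port is bound to loopback or to all interfaces can be checked on the host (diagnostic sketch, standard iproute2 tooling):

```shell
# A 0.0.0.0:8080 (or *:8080) entry means all interfaces;
# 127.0.0.1:8080 means loopback only.
ss -lnt | grep ':8080'
```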

Apache Guacamole in Docker containers: Creation of WebSocket tunnel to guacd failed

I installed Apache Guacamole using Docker on a CentOS 8.1 with Docker 19.03.
I followed the steps described here:
https://guacamole.apache.org/doc/gug/guacamole-docker.html
https://www.linode.com/docs/applications/remote-desktop/remote-desktop-using-apache-guacamole-on-docker/
I started the containers like this:
# mysql container
docker run --name guacamole-mysql -e MYSQL_RANDOM_ROOT_PASSWORD=yes -e MYSQL_ONETIME_PASSWORD=yes -d mysql/mysql-server
# guacd container
docker run --name guacamole-guacd -e GUACD_LOG_LEVEL=debug -d guacamole/guacd
# guacamole container
docker run --name guacamole-guacamole --link guacamole-guacd:guacd --link guacamole-mysql:mysql -e MYSQL_DATABASE=guacamole -e MYSQL_USER=guacamole -e MYSQL_PASSWORD=password -d -p 8080:8080 guacamole/guacamole
All went fine and I was able to access the Guacamole web interface on port 8080. I configured one VNC connection to another machine on port 5900. Unfortunately when I try to use that connection I get the following error in the web interface:
"An internal error has occurred within the Guacamole server, and the connection has been terminated..."
I had a look on the logs too and in the guacamole log I found this:
docker logs --tail all -f guacamole-guacamole
...
15:54:06.262 [http-nio-8080-exec-2] ERROR o.a.g.w.GuacamoleWebSocketTunnelEndpoint - Creation of WebSocket tunnel to guacd failed: End of stream while waiting for "args".
15:54:06.685 [http-nio-8080-exec-8] ERROR o.a.g.s.GuacamoleHTTPTunnelServlet - HTTP tunnel request failed: End of stream while waiting for "args".
I'm sure that the target machine (which is running the VNC server) is fine. I'm able to connect to it from both a VNC client and another older Guacamole which I installed previously (not using Docker).
My containers look ok too:
docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ad62aaca5627 guacamole/guacamole "/opt/guacamole/bin/…" About an hour ago Up About an hour 0.0.0.0:8080->8080/tcp guacamole-guacamole
a46bd76234ea guacamole/guacd "/bin/sh -c '/usr/lo…" About an hour ago Up About an hour 4822/tcp guacamole-guacd
ed3a590b19d3 mysql/mysql-server "/entrypoint.sh mysq…" 2 hours ago Up 2 hours (healthy) 3306/tcp, 33060/tcp guacamole-mysql
I connected to the guacamole-guacamole container and pinged the other two containers: guacamole-mysql and guacamole-guacd. Both look fine and reachable.
docker exec -it guacamole-guacamole bash
root@ad62aaca5627:/opt/guacamole# ping guacd
PING guacd (172.17.0.2) 56(84) bytes of data.
64 bytes from guacd (172.17.0.2): icmp_seq=1 ttl=64 time=0.191 ms
64 bytes from guacd (172.17.0.2): icmp_seq=2 ttl=64 time=0.091 ms
root@ad62aaca5627:/opt/guacamole# ping mysql
PING mysql (172.17.0.3) 56(84) bytes of data.
64 bytes from mysql (172.17.0.3): icmp_seq=1 ttl=64 time=0.143 ms
64 bytes from mysql (172.17.0.3): icmp_seq=2 ttl=64 time=0.102 ms
It looks like there is a communication issue between Guacamole itself and guacd, and this is where I'm completely stuck.
EDIT
I tried on CentOS 7 and I got the same issues.
I also tried this solution https://github.com/boschkundendienst/guacamole-docker-compose as suggested by @BatchenRegev, but I got the same issue again.
I've been experiencing the same issues under CentOS.
My only difference is that I'm hosting the database on a separate machine, since this is all cloud-hosted and I want to be able to destroy/rebuild the guacamole server at will.
I ended up creating a docker-compose.yml file, as that seemed to work better.
Other gotchas I came across:
make sure the guacd_hostname is the actual machine hostname and not 127.0.0.1
set SELinux to allow httpd to make network connections:
sudo setsebool -P httpd_can_network_connect 1
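To see whether the boolean is already enabled before changing it (sketch, standard SELinux tooling):

```shell
# Prints something like: httpd_can_network_connect --> off
getsebool httpd_can_network_connect
```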
My docker-compose.yml is shown below; replace all {variables} with your own, and update the file if you are using a SQL image as well.
version: "2"

services:
  guacd:
    image: "guacamole/guacd"
    container_name: guacd
    hostname: guacd
    restart: always
    volumes:
      - "/data/shared/guacamole/guacd/data:/data"
      - "/data/shared/guacamole/guacd/conf:/conf:ro"
    expose:
      - "4822"
    ports:
      - "4822:4822"
    network_mode: bridge
  guacamole:
    image: "guacamole/guacamole"
    container_name: guacamole
    hostname: guacamole
    restart: always
    volumes:
      - "/data/shared/guacamole/guacamole/guac-home:/data"
      - "/data/shared/guacamole/guacamole/conf:/conf:ro"
    expose:
      - "8080"
    ports:
      - "8088:8080"
    network_mode: bridge
    environment:
      - "GUACD_HOSTNAME={my_server_hostname}"
      - "GUACD_PORT=4822"
      - "MYSQL_PORT=3306"
      - "MYSQL_DATABASE=guacamole"
      - "GUACAMOLE_HOME=/data"
      - "MYSQL_USER=${my_db_user}"
      - "MYSQL_PASSWORD=${my_db_password}"
      - "MYSQL_HOSTNAME=${my_db_hostname}"
I had the same problem on FreeBSD 12.2. Solution:
Change the "localhost" hostname in
/usr/local/etc/guacamole-client/guacamole.properties
to the machine's address, for example:
guacd-hostname: 192.168.10.10
Next, in /usr/local/etc/guacamole-server/guacd.conf:
[server]
bind_host = 192.168.10.10
Check that /etc/guacamole/guacamole.properties has the matching line:
guacd-hostname: 192.168.10.10
Restart:
/usr/local/etc/rc.d/guacd restart
/usr/local/etc/rc.d/tomcat9 restart
With the hostname "localhost" I got:
11:01:48.010 [http-nio-8085-exec-3] DEBUG o.a.g.s.GuacamoleHTTPTunnelServlet - Internal error in HTTP tunnel.
I hope it will be useful to someone else; it works for me.

docker-compose can't connect to adjacent service via service name

I have this docker-compose.yml that basically builds my project for e2e test. It's composed of a postgres db, a backend Node app, a frontend Node app, and a spec app which runs the e2e test using cypress.
version: '3'

services:
  database:
    image: 'postgres'
  backend:
    build: ./backend
    command: /bin/bash -c "sleep 3; yarn backpack dev"
    depends_on:
      - database
  frontend:
    build: ./frontend
    command: /bin/bash -c "sleep 15; yarn nuxt"
    depends_on:
      - backend
  spec:
    build:
      context: ./frontend
      dockerfile: Dockerfile.e2e
    command: /bin/bash -c "sleep 30; yarn cypress run"
    depends_on:
      - frontend
      - backend
The Dockerfiles are simple ones based on node:8 that copy the project files and run yarn install. In the spec Dockerfile, I pass http://frontend:3000 as FRONTEND_URL.
But this setup fails at the spec command when my cypress runner can't connect to frontend with error:
spec_1 | > Error: connect ECONNREFUSED 172.20.0.4:3000
As you can see, it resolves the hostname frontend to the IP correctly, but it's not able to connect. I'm scratching my head over why I can't connect to the frontend by its service name. If I switch the command on spec to sleep 30; ping frontend, it successfully pings the container. I've tried deleting the network and letting docker-compose recreate it, and I've tried specifying expose and links on the services. All to no success.
I've set up a sample repo here if you wanna try replicating the issue:
https://github.com/afifsohaili/demo-dockercompose-network
Any help is greatly appreciated! Thank you!
Your application is listening on loopback:
$ docker run --rm --net container:demo-dockercompose-network_frontend_1 nicolaka/netshoot ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.11:35233 *:*
LISTEN 0 128 127.0.0.1:3000 *:*
From outside of the container, you cannot connect to ports that are only listening on loopback (127.0.0.1). You need to reconfigure your application to listen on all interfaces (0.0.0.0).
For your app, in the package.json, you can add (according to the nuxt faq):
"config": {
  "nuxt": {
    "host": "0.0.0.0",
    "port": "3000"
  }
},
Then you should see:
$ docker run --rm --net container:demo-dockercompose-network_frontend_1 nicolaka/netshoot ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:3000 *:*
LISTEN 0 128 127.0.0.11:39195 *:*
And instead of an unreachable error, you'll now get a 500:
...
frontend_1 | response: undefined,
frontend_1 | statusCode: 500,
frontend_1 | name: 'NuxtServerError' }
...
spec_1 | The response we received from your web server was:
spec_1 |
spec_1 | > 500: Server Error
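As an aside, nuxt also honours the HOST and PORT environment variables, so an alternative to editing package.json is setting the host from the compose file. A sketch against the frontend service from the question:

```yaml
frontend:
  build: ./frontend
  command: /bin/bash -c "sleep 15; yarn nuxt"
  environment:
    - HOST=0.0.0.0   # listen on all interfaces instead of 127.0.0.1
    - PORT=3000
```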

Can not reach mysql container running in different node with its service name, but can do with its container name

I have 2 nodes in docker swarm mode, and deployed a mysql service on one node and a mysql client on the other node with 'docker stack deploy -c composeYaml stackName'. But it turns out the mysql client cannot reach mysql by its service name 'mysql', though it can by its container name 'aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53'.
Inside the client container, I can ping and nslookup 'mysql', but I cannot reach it on port 3306:
root@ced2d59027e8:/opt/docker# ping mysql
PING mysql (10.0.2.2) 56(84) bytes of data.
64 bytes from 10.0.2.2: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 10.0.2.2: icmp_seq=2 ttl=64 time=0.052 ms
64 bytes from 10.0.2.2: icmp_seq=3 ttl=64 time=0.044 ms
64 bytes from 10.0.2.2: icmp_seq=4 ttl=64 time=0.042 ms
^C
--- mysql ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.030/0.042/0.052/0.007 ms
root@ced2d59027e8:/opt/docker# nslookup mysql
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: mysql
Address: 10.0.2.2
root@ced2d59027e8:/opt/docker# nmap -p 3306 mysql
Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-19 09:34 UTC
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 0.49 seconds
root@ced2d59027e8:/opt/docker# nmap -p 3306 10.0.2.2
Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-19 09:41 UTC
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 0.48 seconds
But if I try with the container name of 'mysql' obtained from 'docker ps', it works, and its virtual IP also works.
On node where mysql container running:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ebe25854c5b0 nysql:latest "docker-entrypoint..." 4 minutes ago Up 4 minutes 3306/tcp aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53
Back to inside the client container:
root@ced2d59027e8:/opt/docker# nmap -p 3306 aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53
Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-19 09:43 UTC
Nmap scan report for aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53 (10.0.2.3)
Host is up (0.000077s latency).
rDNS record for 10.0.2.3: aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53.aqi_backend
PORT STATE SERVICE
3306/tcp open mysql
MAC Address: 02:42:0A:00:02:03 (Unknown)
Nmap done: 1 IP address (1 host
root@ced2d59027e8:/opt/docker# nmap -p 3306 10.0.2.3
Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-19 09:37 UTC
Nmap scan report for aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53.aqi_backend (10.0.2.3)
Host is up (0.000098s latency).
PORT STATE SERVICE
3306/tcp open mysql
MAC Address: 02:42:0A:00:02:03 (Unknown)
Nmap done: 1
my compose file looks like follows:
version: '3.2'
services:
mysql:
image: mysql
ports:
- target: 3306
published: 3306
protocol: tcp
mode: ingress
environment:
MYSQL_ROOT_PASSWORD: 1234
MYSQL_DATABASE: aqitradetest
MYSQL_USER: aqidbmaster
MYSQL_PASSWORD: aqidbmaster
deploy:
restart_policy:
condition: on-failure
placement:
constraints: [node.hostname == prod-03]
networks:
- backend
mysql_client:
image: mysql_client
ports:
- "9000:9000"
deploy:
restart_policy:
condition: on-failure
delay: 10s
max_attempts: 3
window: 30s
placement:
constraints: [node.hostname == production-01]
networks:
- backend
depends_on:
- mysql
networks:
frontend:
backend:
I think you are confusing some concepts.
In swarm when you publish a port it will be published on all nodes and accessible from outside using the IP of any of your nodes and that port (or using 0.0.0.0:port from an application on any of your nodes). Playing around with these ports won't help you access the other service by servicename.
When two services are on the same network (if you define no networks all services in the same compose file join the same default network) they should be able to reach all internal ports of the other service by servicename:port.
Probably there is a problem with your compose file. I would try to make a minimal compose file where you don't publish any ports on mysql and don't define any networks, because it's easier to find an issue in a minimal compose file.
Most probably
ports:
  - target: 3306
    published: 3306
    protocol: tcp
    mode: ingress
causes the problem.
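A stripped-down compose file along those lines might look like this (sketch; image names taken from the question, no published ports and no explicit networks, so both services join the stack's default overlay network):

```yaml
version: '3.2'

services:
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: 1234
  mysql_client:
    image: mysql_client
# with no ports/networks defined, the client should reach the
# server as mysql:3306 over the stack's default network
```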
