Docker Container Refuses to NOT use Proxy for Docker Network

I'm having issues trying to get networking to work correctly in my container inside a corp domain/behind a proxy.
I've configured Docker (correctly, I think) to work through the corporate proxy for downloading images, but now my container is having trouble talking to another container on the same docker-compose network.
So far, the only fix has been to manually append the docker-compose network name to the no_proxy variable in the Docker config, but this seems wrong: it has to be repeated for every docker-compose network and requires a restart of Docker.
Here is how I configured the docker proxy settings on host:
cat << "EOF" >docker_proxy_setup.sh
#!/bin/bash
#Proxy
ActiveProxyVar=127.0.0.1:80
#Domain
corpdom=domain.org
httpproxyvar=http://$ActiveProxyVar/
httpsproxyvar=http://$ActiveProxyVar/
mkdir -p ~/.docker
cat << EOL >~/.docker/config.json
{
"proxies":
{
"default":
{
"httpProxy": "$httpproxyvar",
"httpsProxy": "$httpsproxyvar",
"noProxy": ".$corpdom,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
}
}
}
EOL
mkdir -p /etc/systemd/system/docker.service.d
cat << EOL >/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=$httpproxyvar"
Environment="HTTPS_PROXY=$httpsproxyvar"
Environment="NO_PROXY=.$corpdom,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
EOL
systemctl daemon-reload
systemctl restart docker
#systemctl show --property Environment docker
docker run hello-world
EOF
chmod +x docker_proxy_setup.sh
./docker_proxy_setup.sh
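To confirm the daemon actually picked up these settings after the restart, the check that is commented out in the script can be run directly; newer Docker releases also surface the proxy settings in docker info (just a sanity check, not part of the setup itself):
sudo systemctl show --property=Environment docker
docker info | grep -i proxy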
Basically, if I change the domain line to this:
#Domain
corpdom=domain.org,icinga_icinga-net
I am able to use curl to test the network and it works correctly, but ONLY when addressing the container as container_name.icinga_icinga-net.
For example:
This fails: curl -k -u root:c54854140704eafc https://icinga2-api:5665/v1/objects/hosts
while this succeeds: curl -k -u root:c54854140704eafc https://icinga2-api.icinga_icinga-net:5665/v1/objects/hosts
Note that using curl --noproxy seems to have no effect.
Here is some output from the container for reference. Any ideas what I can do to have containers NOT use the proxy for Docker networks (private IPv4 ranges)?
root@icinga2-web:/# ping icinga2-api
PING icinga2-api (172.30.0.5) 56(84) bytes of data.
64 bytes from icinga2-api.icinga_icinga-net (172.30.0.5): icmp_seq=1 ttl=64 time=0.138 ms
64 bytes from icinga2-api.icinga_icinga-net (172.30.0.5): icmp_seq=2 ttl=64 time=0.077 ms
^C
--- icinga2-api ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1025ms
rtt min/avg/max/mdev = 0.077/0.107/0.138/0.030 ms
root@icinga2-web:/# curl --noproxy -k -u root:c54854140704eafc https://172.30.0.5:5665/v1/objects/hosts
curl: (56) Received HTTP code 503 from proxy after CONNECT
root@icinga2-web:/# curl -k -u root:c54854140704eafc https://172.30.0.5:5665/v1/objects/hosts
curl: (56) Received HTTP code 503 from proxy after CONNECT
root@icinga2-web:/# curl -k -u root:c54854140704eafc https://icinga2-api:5665/v1/objects/hosts
curl: (56) Received HTTP code 503 from proxy after CONNECT
root@icinga2-web:/# curl -k -u root:c54854140704eafc https://icinga2-api.icinga_icinga-net:5665/v1/objects/hosts
{"results":[{"attrs":{"__name":"icinga2-api","acknowledgement":0,"acknowledgement_expiry":0,"acknowledgement_last_change":0,"action_url":"","active":true,"address":"127.0.0.1","address6":"::1","check_attempt":1,"check_command":"hostalive","check_interval":60,"check_period":"","check_timeout":null,"command_endpoint":"","display_name":"icinga2-api","downtime_depth":0,"enable_active_checks":true,"enable_event_handler":true,"enable_flapping":false,"enable_notifications":true,"enable_passive_checks":true,"enable_perfdata":true,"event_command":"","executions":null,"flapping":false,"flapping_current":0,"flapping_ignore_states":null,"flapping_last_change":0,"flapping_threshold":0,"flapping_threshold_high":30,"flapping_threshold_low":25,"force_next_check":false,"force_next_notification":false,"groups":["linux-servers"],"ha_mode":0,"handled":false,"icon_image":"","icon_image_alt":"","last_check":1663091644.161905,"last_check_result":{"active":true,"check_source":"icinga2-api","command":["/usr/lib/nagios/plugins/check_ping","-H","127.0.0.1","-c","5000,100%","-w","3000,80%"],"execution_end":1663091644.161787,"execution_start":1663091640.088944,"exit_status":0,"output":"PING OK - Packet loss = 0%, RTA = 0.05 ms","performance_data":["rta=0.055000ms;3000.000000;5000.000000;0.000000","pl=0%;80;100;0"],"previous_hard_state":99,"schedule_end":1663091644.161905,"schedule_start":1663091640.087908,"scheduling_source":"icinga2-api","state":0,"ttl":0,"type":"CheckResult","vars_after":{"attempt":1,"reachable":true,"state":0,"state_type":1},"vars_before":{"attempt":1,"reachable":true,"state":0,"state_type":1}},"last_hard_state":0,"last_hard_state_change":1663028345.921676,"last_reachable":true,"last_state":0,"last_state_change":1663028345.921676,"last_state_down":0,"last_state_type":1,"last_state_unreachable":0,"last_state_up":1663091644.161787,"max_check_attempts":3,"name":"icinga2-api","next_check":1663091703.191943,"next_update":1663091771.339701,"notes":"","notes_url":"","original_attributes":null,"package":"_etc","paused":false,"previous_state_change":1663028345.921676,"problem":false,"retry_interval":30,"severity":0,"source_location":{"first_column":1,"first_line":18,"last_column":20,"last_line":18,"path":"/etc/icinga2/conf.d/hosts.conf"},"state":0,"state_type":1,"templates":["icinga2-api","generic-host"],"type":"Host","vars":{"disks":{"disk":{},"disk /":{"disk_partitions":"/"}},"http_vhosts":{"http":{"http_uri":"/"}},"notification":{"mail":{"groups":["icingaadmins"]}},"os":"Linux"},"version":0,"volatile":false,"zone":""},"joins":{},"meta":{},"name":"icinga2-api","type":"Host"}]}
root@icinga2-web:/#
PS: I'm fairly certain this is not an issue specific to Icinga, as I've had some random proxy issues with other containers. I can also say I've tested this Icinga compose setup outside the corp domain and it worked 100% fine.
Partial Resolution!
I would still prefer a CIDR-based no_proxy that works with plain container names, without having to adjust docker-compose/.env, but I did get it to work.
A few things I did:
Added lowercase variants of the proxy variables to the Docker service drop-in:
cat << EOL >/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=$httpproxyvar"
Environment="HTTPS_PROXY=$httpsproxyvar"
Environment="NO_PROXY=.$corpdom,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
Environment="http_proxy=$httpproxyvar"
Environment="https_proy=$httpsproxyvar"
Environment="no_proxy=.$corpdom,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
EOL
Added no_proxy (in CAPS and lowercase) to the docker-compose containers and set the value in .env.
Note: both the lowercase and UPPERCASE variants should be used.
environment:
  - 'NO_PROXY=${NO_PROXY}'
  - 'no_proxy=${NO_PROXY}'
NO_PROXY=.domain.org,127.0.0.0/8,172.16.0.0/12,icinga_icinga-net
I would at least prefer to append to the existing variable, but both of the following resulted in no_proxy being set to just ,icinga_icinga-net:
NO_PROXY=$NO_PROXY,icinga_icinga-net
NO_PROXY=${NO_PROXY},icinga_icinga-net
Note: NO_PROXY was set on the host via export.
I still don't understand why it fails when using:
curl --noproxy -k -u root:c54854140704eafc https://172.30.0.4:5665/v1/objects/hosts
when I have 172.16.0.0/12 in no_proxy, which should cover 172.16.0.0 – 172.31.255.255, yet it doesn't work.
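(Aside: two details may explain this. curl's --noproxy option requires a host-list argument, so in the command above -k is silently consumed as that list, which is why --noproxy appears to do nothing; and older curl releases only do plain suffix matching on no_proxy and do not understand CIDR ranges, which is exactly what the GitLab post below is about. A corrected test, using the API container's address from the ping output above, would look roughly like this:)
# '*' disables the proxy for every host on this one request
curl --noproxy '*' -k -u root:c54854140704eafc https://172.30.0.5:5665/v1/objects/hosts
# or list the specific hosts/IPs that should bypass the proxy
curl --noproxy '172.30.0.5,icinga2-api' -k -u root:c54854140704eafc https://172.30.0.5:5665/v1/objects/hosts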
Update:
I tried setting no_proxy to the IP explicitly (no CIDR) and that worked, but it still failed with just the container name as host (no .icinga_icinga-net suffix).
This is all related to this great post:
https://about.gitlab.com/blog/2021/01/27/we-need-to-talk-no-proxy/

This is the best I can come up with, happy to reward better answers!
Docker Setup (Global):
#!/bin/bash
#Proxy
ActiveProxyVar=127.0.0.7
#Domain
corpdom=domain.org
#NoProxy
NOT_PROXY=127.0.0.0/8,172.16.0.0/12,192.168.0.0/16,10.0.0.0/8,.$corpdom
httpproxyvar=http://$ActiveProxyVar/
httpsproxyvar=http://$ActiveProxyVar/
mkdir -p ~/.docker
cat << EOL >~/.docker/config.json
{
"proxies":
{
"default":
{
"httpProxy": "$httpproxyvar",
"httpsProxy": "$httpsproxyvar",
"noProxy": "$NOT_PROXY"
}
}
}
EOL
mkdir -p /etc/systemd/system/docker.service.d
cat << EOL >/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=$httpproxyvar"
Environment="HTTPS_PROXY=$httpsproxyvar"
Environment="NO_PROXY=$NOT_PROXY"
Environment="http_proxy=$httpproxyvar"
Environment="https_proy=$httpsproxyvar"
Environment="no_proxy=$NOT_PROXY"
EOL
systemctl daemon-reload
systemctl restart docker
#systemctl show --property Environment docker
#docker run hello-world
docker-compose.yaml:
environment:
  - 'NO_PROXY=${NO_PROXY}'
  - 'no_proxy=${NO_PROXY}'
.env:
# Basically, add the docker-compose network, then each container name:
NO_PROXY=127.0.0.0/8,172.16.0.0/12,192.168.0.0/16,10.0.0.0/8,.icinga_icinga-net,icinga2-api,icinga2-web,icinga2-db,icinga2-webdb,icinga2-redis,icinga2-directordb,icinga2-icingadb,icinga2-web_director
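If in doubt about the exact network name to put in NO_PROXY (Compose prefixes it with the project name), it can be checked on the host; the icinga filter below is just the project prefix used in this example:
docker network ls --filter name=icinga
docker network inspect icinga_icinga-net --format '{{range .Containers}}{{.Name}} {{end}}'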

Related

SSL(curl) connection error in ElasticSearch setup

I have set up a 3-node Elasticsearch cluster using docker-compose, following the steps listed below.
One of the master nodes, es11, gets the error below; the same curl command works fine on the other two nodes, i.e. es12 and es13:
Error:
curl -X GET 'https://localhost:9316'
curl: (35) Encountered end of file
Below error in logs:
"stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [es13][SOMEIP:9316][internal:cluster/coordination/join]",
"Caused by: org.elasticsearch.transport.ConnectTransportException: [es11][SOMEIP:9316] handshake failed. unexpected remote node {es13}{SOMEVALUE}{SOMEVALUE
"at org.elasticsearch.transport.TransportService.lambda$connectionValidator$6(TransportService.java:468) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:95) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.transport.TransportService.lambda$handshake$9(TransportService.java:577) ~[elasticsearch-7.17.6.jar:7.17.6]",
https://localhost:9316 in a browser gives a "site can't be reached" error as well. It seems the SSL certificate created in step 4 below is having some issues on es11.
Any leads, please? Or, if I repeat step 4, do I need to copy the certs again to es12 & es13?
Below is elasticsearch.yml:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
Ports as defined in all 3 nodes' docker-compose.yml:
environment:
  - node.name=es11
  - transport.port=9316
ports:
  - 9216:9200
  - 9316:9316
1. Initialize a Docker swarm: on es11 run docker swarm init, then follow the instructions to join es12 and es13 to the swarm.
2. Create an overlay network: docker network create -d overlay --attachable elastic
3. If necessary, bring down the current cluster and remove all the associated volumes by running docker-compose down -v
4. Create SSL certificates for ES with docker-compose -f create-certs.yml run --rm create_certs
5. Copy the certs for es12 and es13 to the respective servers
6. Use busybox to create the overlay network on es12 and es13: sudo docker run -itd --name containerX --net [network name] busybox
7. Configure certs on es12 and es13 with docker-compose -f config-certs.yml run --rm config_certs
8. Start the cluster with docker-compose up -d on each server
9. Set the passwords for the built-in ES accounts by logging into the cluster (docker exec -it es11 sh), then running bin/elasticsearch-setup-passwords interactive --url localhost:9316
(As per your https://discuss.elastic.co thread:)
You cannot talk HTTP to the transport protocol port, which you have defined in transport.port. You need to talk to port 9200 in the container, which you have mapped to 9216 outside the container.
The transport port runs a binary protocol that is not HTTP accessible.
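In other words, the curl check should target the published HTTP port instead; a sketch, assuming the 9216:9200 mapping shown above and the password you set for the built-in elastic user:
curl -k -u elastic:<your-password> https://localhost:9216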

Connecting with Portainer: "resource is online but isn't responding to connection attempts"

I installed Ubuntu on an older laptop. Now Docker with Portainer is running on it, and I want to access Portainer from my main PC on the same network. When I try to connect to Portainer on the laptop where it is running (using its non-localhost address), it works fine. But when I try to connect from my PC, I get a timeout. Windows diagnostics says: "resource is online but isn't responding to connection attempts". How can I open Portainer to my local network? Or is this a problem with Ubuntu?
First, check that you have an OpenSSH server running so you can SSH in. Disable the firewall from a terminal: sudo ufw disable. Then check with ifconfig whether your network card is named eth0; if not, change it following the steps below.
We use netplan, which is the default these days, via the file /etc/netplan/00-installer-config.yaml. But before editing it you need to get the card's serial/MAC.
Find the target device's MAC/hardware address using the lshw command:
lshw -C network
You'll see some output which looks like:
root@ys:/etc# lshw -C network
*-network
description: Ethernet interface
physical id: 2
logical name: eth0
serial: dc:a6:32:e8:23:19
size: 1Gbit/s
capacity: 1Gbit/s
capabilities: ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=bcmgenet driverversion=5.8.0-1015-raspi duplex=full ip=192.168.0.112 link=yes multicast=yes port=MII speed=1Gbit/s
So then you take the serial:
dc:a6:32:e8:23:19
Note the set-name option.
This works for the wifi section as well.
If you are using cable, you can delete everything and add the example below, changing only the serial ("mac") to yours: sudo nano /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      match:
        macaddress: <YOUR MAC ID HERE>
      set-name: eth0
Then, to test this config, run:
netplan try
When you're happy with it:
netplan apply
Reboot your Ubuntu machine.
After the restart, stop the Portainer container:
sudo docker stop portainer
Remove the Portainer container:
sudo docker rm portainer
Now run it again, on the latest version:
docker run -d -p 8000:8000 -p 9000:9000 \
--name=portainer --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:2.13.1
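A narrower alternative to sudo ufw disable (a sketch, assuming the default ports from the docker run command above) is to open just the Portainer ports and then test the connection from the main PC:
# on the laptop running Portainer
sudo ufw allow 8000/tcp
sudo ufw allow 9000/tcp
# from the main PC, replacing <laptop-ip> with the laptop's LAN address
curl -I http://<laptop-ip>:9000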

Cannot connect to Docker container running in VSTS

I have a test which starts a Docker container, performs the verification (which is talking to the Apache httpd in the Docker container), and then stops the Docker container.
When I run this test locally, it runs just fine. But when it runs on hosted VSTS, i.e. on a hosted build agent, it cannot connect to the Apache httpd in the Docker container.
This is the .vsts-ci.yml file:
queue: Hosted Linux Preview
steps:
- script: |
./test.sh
This is the test.sh shell script to reproduce the problem:
#!/bin/bash
set -e
set -o pipefail
function tearDown {
docker stop test-apache
docker rm test-apache
}
trap tearDown EXIT
docker run -d --name test-apache -p 8083:80 httpd
sleep 10
curl -D - http://localhost:8083/
When I run this test locally, the output that I get is:
$ ./test.sh
469d50447ebc01775d94e8bed65b8310f4d9c7689ad41b2da8111fd57f27cb38
HTTP/1.1 200 OK
Date: Tue, 04 Sep 2018 12:00:17 GMT
Server: Apache/2.4.34 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
<html><body><h1>It works!</h1></body></html>
test-apache
test-apache
This output is exactly as I expect.
But when I run this test on VSTS, the output that I get is (irrelevant parts replaced with …).
2018-09-04T12:01:23.7909911Z ##[section]Starting: CmdLine
2018-09-04T12:01:23.8044456Z ==============================================================================
2018-09-04T12:01:23.8061703Z Task : Command Line
2018-09-04T12:01:23.8077837Z Description : Run a command line script using cmd.exe on Windows and bash on macOS and Linux.
2018-09-04T12:01:23.8095370Z Version : 2.136.0
2018-09-04T12:01:23.8111699Z Author : Microsoft Corporation
2018-09-04T12:01:23.8128664Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613735)
2018-09-04T12:01:23.8146694Z ==============================================================================
2018-09-04T12:01:26.3345330Z Generating script.
2018-09-04T12:01:26.3392080Z Script contents:
2018-09-04T12:01:26.3409635Z ./test.sh
2018-09-04T12:01:26.3574923Z [command]/bin/bash --noprofile --norc /home/vsts/work/_temp/02476800-8a7e-4e22-8715-c3f706e3679f.sh
2018-09-04T12:01:27.7054918Z Unable to find image 'httpd:latest' locally
2018-09-04T12:01:30.5555851Z latest: Pulling from library/httpd
2018-09-04T12:01:31.4312351Z d660b1f15b9b: Pulling fs layer
[…]
2018-09-04T12:01:49.1468474Z e86a7f31d4e7506d34e3b854c2a55646eaa4dcc731edc711af2cc934c44da2f9
2018-09-04T12:02:00.2563446Z % Total % Received % Xferd Average Speed Time Time Time Current
2018-09-04T12:02:00.2583211Z Dload Upload Total Spent Left Speed
2018-09-04T12:02:00.2595905Z
2018-09-04T12:02:00.2613320Z 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8083: Connection refused
2018-09-04T12:02:00.7027822Z test-apache
2018-09-04T12:02:00.7642313Z test-apache
2018-09-04T12:02:00.7826541Z ##[error]Bash exited with code '7'.
2018-09-04T12:02:00.7989841Z ##[section]Finishing: CmdLine
The key thing is this:
curl: (7) Failed to connect to localhost port 8083: Connection refused
10 seconds should be enough for Apache to start.
Why can curl not communicate with Apache on its port 8083?
P.S.:
I know that a hard-coded port like this is rubbish and that I should use an ephemeral port instead. I wanted to get it running first with a hard-coded port, because that's simpler than using an ephemeral port, and then switch to an ephemeral port as soon as the hard-coded port works. And if the hard-coded port doesn't work because the port is unavailable, the error should look different: in that case, docker run should fail because the port can't be allocated.
Update:
Just to be sure, I've rerun the test with sleep 100 instead of sleep 10. The results are unchanged, curl cannot connect to localhost port 8083.
Update 2:
When extending the script to execute docker logs, docker logs shows that Apache is running as expected.
When extending the script to execute docker ps, it shows the following output:
2018-09-05T00:02:24.1310783Z CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2018-09-05T00:02:24.1336263Z 3f59aa014216 httpd "httpd-foreground" About a minute ago Up About a minute 0.0.0.0:8083->80/tcp test-apache
2018-09-05T00:02:24.1357782Z 850bda64f847 microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard "/home/vsts/agents/2…" 2 minutes ago Up 2 minutes musing_booth
The problem is that the VSTS build agent itself runs in a Docker container. When the Docker container for Apache is started, it runs as a sibling of the VSTS build agent container, not nested inside it.
There are two possible solutions:
Replacing localhost with the ip address of the docker host, keeping the port number 8083
Replacing localhost with the ip address of the docker container, changing the host port number 8083 to the container port number 80.
Access via the Docker Host
In this case, the solution is to replace localhost with the ip address of the docker host. The following shell snippet can do that:
host=localhost
if grep '^1:name=systemd:/docker/' /proc/1/cgroup
then
apt-get update
apt-get install net-tools
host=$(route -n | grep '^0.0.0.0' | sed -e 's/^0.0.0.0\s*//' -e 's/ .*//')
fi
curl -D - http://$host:8083/
The if grep '^1:name=systemd:/docker/' /proc/1/cgroup check inspects whether the script is running inside a Docker container. If so, it installs net-tools to get access to the route command, and then parses the default gateway from the route output to get the IP address of the host. Note that this only works if the container's default gateway actually is the host.
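If the agent image ships the iproute2 ip command, the same default-gateway lookup works without installing net-tools; an equivalent sketch (not what the snippet above uses):
host=localhost
if grep -q '^1:name=systemd:/docker/' /proc/1/cgroup
then
    # the default gateway is the docker host when running inside the agent container
    host=$(ip route | awk '/^default/ {print $3}')
fi
curl -D - "http://$host:8083/"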
Direct Access to the Docker Container
After launching the docker container, its ip addresses can be obtained with the following command:
docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-id>
Replace <container-id> with your container id or name.
So, in this case, it would be (assuming that the first IP address is okay):
ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test-apache))
host=${ips[0]}
curl http://$host/
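Another option that avoids looking up the IP entirely is to run the curl check inside the Apache container's network namespace; a sketch, assuming pulling the public curlimages/curl image is acceptable:
docker run --rm --network container:test-apache curlimages/curl -D - http://localhost:80/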

Run X application in a Docker container reliably on a server connected via SSH without "--net host"

Without a Docker container, it is straightforward to run an X11 program on a remote server using SSH X11 forwarding (ssh -X). I have tried to get the same thing working when the application runs inside a Docker container on a server. When SSH-ing into a server with the -X option, an X11 tunnel is set up and the environment variable "$DISPLAY" is automatically set to typically "localhost:10.0" or similar. If I simply try to run an X application in a Docker container, I get this error:
Error: GDK_BACKEND does not match available displays
My first idea was to actually pass the $DISPLAY into the container with the "-e" option like this:
docker run -ti -e DISPLAY=$DISPLAY name_of_docker_image
This helps, but it does not solve the issue. The error message changes to:
Unable to init server: Broadway display type not supported: localhost:10.0
Error: cannot open display: localhost:10.0
After searching the web, I figured out that I could do some xauth magic to fix the authentication. I added the following:
XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -
chmod 777 $XAUTH
docker run -ti -e DISPLAY=$DISPLAY -v $XSOCK:$XSOCK -v $XAUTH:$XAUTH \
-e XAUTHORITY=$XAUTH name_of_docker_image
However, this only works if I also add "--net host" to the docker command:
docker run -ti -e DISPLAY=$DISPLAY -v $XSOCK:$XSOCK -v $XAUTH:$XAUTH \
-e XAUTHORITY=$XAUTH --net host name_of_docker_image
This is not desirable since it makes the whole host network visible for the container.
What is now missing in order to get it fully to run on a remote server in a docker without "--net host"?
I figured it out. When you are connecting to a computer with SSH and using X11 forwarding, /tmp/.X11-unix is not used for the X communication and the part related to $XSOCK is unnecessary.
Instead, any X application uses the hostname in $DISPLAY, typically "localhost", and connects using TCP. This connection is then tunneled back to the SSH client. When using "--net host" for the Docker container, "localhost" is the same for the container as for the Docker host, and therefore it works fine.
When not specifying "--net host", Docker uses the default bridge network mode. This means that "localhost" means something else inside the container than on the host, and X applications inside the container will not be able to reach the X server by referring to "localhost". So to solve this, one has to replace "localhost" with the actual IP address of the host. This is usually "172.17.0.1" or similar. Check "ip addr" for the "docker0" interface.
This can be done with a sed replacement:
DISPLAY=`echo $DISPLAY | sed 's/^[^:]*\(.*\)/172.17.0.1\1/'`
Additionally, the SSH server is commonly not configured to accept remote connections to this X11 tunnel. This must then be changed by editing /etc/ssh/sshd_config (at least in Debian) and setting:
X11UseLocalhost no
and then restart the SSH server, and re-login to the server with "ssh -X".
This is almost it, but there is one complication left. If any firewall is running on the Docker host, the TCP port associated with the X11-tunnel must be opened. The port number is the number between the : and the . in $DISPLAY added to 6000.
To get the TCP port number, you can run:
X11PORT=`echo $DISPLAY | sed 's/^[^:]*:\([^\.]\+\).*/\1/'`
TCPPORT=`expr 6000 + $X11PORT`
Then (if using ufw as firewall), open up this port for the Docker containers in the 172.17.0.0 subnet:
ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
All the commands together can be put into a script:
XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | sudo xauth -f $XAUTH nmerge -
sudo chmod 777 $XAUTH
X11PORT=`echo $DISPLAY | sed 's/^[^:]*:\([^\.]\+\).*/\1/'`
TCPPORT=`expr 6000 + $X11PORT`
sudo ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
DISPLAY=`echo $DISPLAY | sed 's/^[^:]*\(.*\)/172.17.0.1\1/'`
sudo docker run -ti --rm -e DISPLAY=$DISPLAY -v $XAUTH:$XAUTH \
-e XAUTHORITY=$XAUTH name_of_docker_image
Assuming you are not root and therefore need to use sudo.
Instead of sudo chmod 777 $XAUTH, you could run:
sudo chown my_docker_container_user $XAUTH
sudo chmod 600 $XAUTH
to prevent other users on the server from also being able to access the X server through the /tmp/.docker.xauth file you have created.
I hope this should make it properly work for most scenarios.
If you set X11UseLocalhost no, you're allowing even external traffic to reach the X11 socket: traffic directed to an external IP of the machine can reach the SSHD X11 forwarding. There are still two security mechanisms which might apply (firewall, X11 auth). Still, I'd prefer to leave a system-global setting alone when you're fiddling with a user- or even application-specific issue like in this case.
Here's an alternative how to get X11 graphics out of a container and via X11 forwarding from the server to the client, without changing X11UseLocalhost in the sshd config.
+ docker container net ns +
| |
172.17.0.1 | 172.17.0.2 |
+- docker0 --------- veth123@if5 --|-- eth0@if6 |
| (bridge) (veth pair) | (veth pair) |
| | |
| 127.0.0.1 +-------------------------+
routing +- lo
| (loopback)
|
| 192.168.1.2
+- ens33
(physical host interface)
With the default X11UseLocalhost yes, sshd listens only on 127.0.0.1 on the root network namespace. We need to get the X11 traffic from inside the docker network namespace to the loopback interface in the root net ns. The veth pair is connected to the docker0 bridge and both ends can therefore talk to 172.17.0.1 without any routing. The three interfaces in the root net ns (docker0, lo and ens33) can communicate via routing.
We want to achieve the following:
+ docker container net ns +
| |
172.17.0.1 | 172.17.0.2 |
+- docker0 --------< veth123@if5 --|-< eth0@if6 -----< xeyes |
| (bridge) (veth pair) | (veth pair) |
v | |
| 127.0.0.1 +-------------------------+
routing +- lo >--ssh x11 fwd-+
(loopback) |
v
192.168.1.2 |
<-- ssh -- ens33 ------<-----+
(physical host interface)
We can let the X11 application talk directly to 172.17.0.1 to "escape" the docker net ns. This is achieved by setting the DISPLAY appropriately: export DISPLAY=172.17.0.1:10:
+ docker container net ns+
| |
172.17.0.1 | 172.17.0.2 |
docker0 --------- veth123@if5 --|-- eth0@if6 -----< xeyes |
(bridge) (veth pair) | (veth pair) |
| |
127.0.0.1 +-------------------------+
lo
(loopback)
192.168.1.2
ens33
(physical host interface)
Now, we add an iptables rule on the host to route from 172.17.0.1 to 127.0.0.1 in the root net ns:
iptables \
--table nat \
--insert PREROUTING \
--proto tcp \
--destination 172.17.0.1 \
--dport 6010 \
--jump DNAT \
--to-destination 127.0.0.1:6010
sysctl net.ipv4.conf.docker0.route_localnet=1
Note that we're using port 6010, that's the default port on which SSHD performs X11 forwarding: It's using display number 10, which is added to the port "base" 6000. You can check which display number to use after you've established the SSH connection by checking the DISPLAY environment variable in a shell started by SSH.
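For example, inside the SSH session on the docker host (the exact display number depends on your session):
echo $DISPLAY                                                          # e.g. localhost:10.0
echo $((6000 + $(echo "$DISPLAY" | sed 's/^[^:]*:\([^.]*\).*/\1/')))   # prints 6010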
Maybe you can improve on the forwarding rule by only routing traffic from this container (veth end). Also, I'm not quite sure why the route_localnet is needed, to be honest. It appears that 127/8 is a strange source / destination for packets and therefore disabled for routing by default. You can probably also reroute traffic from the loopback interface inside the docker net ns to the veth pair, and from there to the loopback interface in the root net ns.
With the commands given above, we end up with:
+ docker container net ns +
| |
172.17.0.1 | 172.17.0.2 |
+- docker0 --------< veth123@if5 --|-< eth0@if6 -----< xeyes |
| (bridge) (veth pair) | (veth pair) |
v | |
| 127.0.0.1 +-------------------------+
routing +- lo
(loopback)
192.168.1.2
ens33
(physical host interface)
The remaining connection is established by SSHD when you establish a connection with X11 forwarding. Please note that you have to establish the connection before attempting to start an X11 application inside the container, since the application will immediately try to reach the X11 server.
There is one piece missing: authentication. We're now trying to access the X11 server as 172.17.0.1:10 inside the container. The container however doesn't have any X11 authentication, or not a correct one if you're bind-mounting the home directory (outside the container it's usually something like <hostname>:10). Use Ruben's suggestion to add a new entry visible inside the docker container:
# inside container
xauth add 172.17.0.1:10 . <cookie>
where <cookie> is the cookie set up by the SSH X11 forwarding, e.g. via xauth list.
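A sketch of the full round trip, with display number 10 assumed as above (the cookie value is whatever your session's entry shows):
# on the docker host, inside the SSH session
xauth list
# prints something like:  myhost/unix:10  MIT-MAGIC-COOKIE-1  <cookie>
# inside the container ('.' in the command above is shorthand for MIT-MAGIC-COOKIE-1)
xauth add 172.17.0.1:10 MIT-MAGIC-COOKIE-1 <cookie>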
You might also have to allow traffic ingress to 172.17.0.1:6010 in your firewall.
You can also start an application from the host inside the docker container network namespace:
sudo nsenter --target=<pid of process in container> --net su - $USER <app>
Without the su, you'll be running as root. Of course, you can also use another container and share the network namespace:
sudo docker run --network=container:<other container name/id> ...
The X11 forwarding mechanism shown above applies to the entire network namespace (actually, to everything connected to the docker0 bridge). Therefore, it will work for any applications inside the container network namespace.
In my case, I sit at "remote" and connect to a "docker_container" on "docker_host":
remote --> docker_host --> docker_container
To make debugging scripts easier with VSCode, I installed SSHD into the "docker_container", listening on port 22, mapped to another port (say 1234) on the "docker_host".
So I can connect directly with the running container via ssh (from "remote"):
ssh -Y -p 1234 appuser@docker_host.local
(where appuser is the username within the "docker_container"; I am working on my local subnet, so I can reference my server via the .local name. For external IPs, just make sure your router forwards this port to this machine.)
This creates a connection directly from my "remote" to "docker_container" via ssh.
remote --> (ssh) --> docker_container
Inside the "docker_container", I installed sshd with
sudo apt-get install openssh-server (you can add this to your Dockerfile to install at build time).
To allow X11 forwarding to work, edit the /etc/ssh/sshd_config file as such:
X11Forwarding yes
X11UseLocalhost no
Then restart ssh within the container. You should do this from a shell exec'd into the container from the "docker_host" (docker exec -ti docker_container bash), not while connected to the "docker_container" via ssh.
Restart sshd:
sudo service ssh restart
When you connect via ssh to the "docker_container", check the $DISPLAY environment variable. It should say something like
appuser@3f75a98d67e6:~/data$ echo $DISPLAY
3f75a98d67e6:10.0
Test by executing your favorite X11 graphics program from within "docker_container" via ssh (like cv2.imshow())
I use an automated approach which can be executed entirely from within the docker container.
All that is needed is to pass the DISPLAY variable to the container, and mounting .Xauthority.
Moreover, it only uses the port from the DISPLAY variable, so it will also work in cases where DISPLAY=localhost:XY.Z.
Create a file, source-me.sh, with the following content:
# Find the containers address in /etc/hosts
CONTAINER_IP=$(grep $(hostname) /etc/hosts | awk '{ print $1 }')
# Assume the docker-host IP only differs in the last byte
SUBNET=$(echo $CONTAINER_IP | sed 's/\.[^\.]*$//')
DOCKER_HOST_IP=${SUBNET}.1
# Get the port from the DISPLAY variable
DISPLAY_PORT=$(echo $DISPLAY | sed 's/.*://' | sed 's/\..*//')
# Create the correct display-name
export DISPLAY=$DOCKER_HOST_IP:$DISPLAY_PORT
# Find an existing xauth entry for the same port (DISPLAY_PORT),
# and copy everything except the display-name
# filtering out entries containing /unix: which correspond to "same-machine" connections
ENTRY=$(xauth -n list | grep -v '/unix\:' | grep "\:${DISPLAY_PORT}" | head -n 1 | sed 's/^[^ ]* *//')
# Prepend our display-name
ENTRY="$DOCKER_HOST_IP:$DISPLAY_PORT $ENTRY"
# Add the new xauth entry.
# Because our .Xauthority file is mounted, a new file
# named ${HOME}/.Xauthority-n will be created, and a warning
# is printed on std-err
xauth add $ENTRY 2> /dev/null
# replace the content of ${HOME}/.Xauthority with that of ${HOME}/.Xauthority-n
# without creating a new i-node.
cat ${HOME}/.Xauthority-n > ${HOME}/.Xauthority
Create the following Dockerfile for testing:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y xauth
COPY source-me.sh /root/
RUN cat /root/source-me.sh >> /root/.bashrc
# xeyes for testing:
RUN apt-get install -y x11-apps
Build and run:
docker build -t test-x .
docker run -ti \
-v $HOME/.Xauthority:/root/.Xauthority:rw \
-e DISPLAY=$DISPLAY \
test-x \
bash
Inside the container, run:
xeyes
To run non-interactively, you must ensure source-me.sh is sourced:
docker run \
-v $HOME/.Xauthority:/root/.Xauthority:rw \
-e DISPLAY=$DISPLAY \
test-x \
bash -c "source source-me.sh ; xeyes"

Docker remote API doesn't restart after my computer restarts

Last week I struggled to get my Docker remote API working. As it runs in a VM, I had not restarted the VM since then. Today I finally restarted the VM and the remote API is not working any more (docker and docker-compose work normally, but not the Docker remote API). My Docker init file, /etc/init/docker.conf, looks like this:
description "Docker daemon"
start on filesystem and started lxc-net
stop on runlevel [!2345]
respawn
script
/usr/bin/docker -H tcp://0.0.0.0:4243 -d
end script
# description "Docker daemon"
# start on (filesystem and net-device-up IFACE!=lo)
# stop on runlevel [!2345]
# limit nofile 524288 1048576
# limit nproc 524288 1048576
respawn
kill timeout 20
.....
.....
Last time I applied the settings indicated in this guide.
I used nmap to see whether port 4243 is open.
ubuntu@ubuntu:~$ nmap 0.0.0.0 -p-
Starting Nmap 7.01 ( https://nmap.org ) at 2016-10-12 23:49 CEST
Nmap scan report for 0.0.0.0
Host is up (0.000046s latency).
Not shown: 65531 closed ports
PORT STATE SERVICE
22/tcp open ssh
43978/tcp open unknown
44672/tcp open unknown
60366/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 1.11 seconds
As you can see, port 4243 is not open.
when I run:
ubuntu@ubuntu:~$ echo -e "GET /images/json HTTP/1.0\r\n" | nc -U
This is nc from the netcat-openbsd package. An alternative nc is available
in the netcat-traditional package.
usage: nc [-46bCDdhjklnrStUuvZz] [-I length] [-i interval] [-O length]
[-P proxy_username] [-p source_port] [-q seconds] [-s source]
[-T toskeyword] [-V rtable] [-w timeout] [-X proxy_protocol]
[-x proxy_address[:port]] [destination] [port]
I also ran this:
ubuntu@ubuntu:~$ sudo docker -H=tcp://0.0.0.0:4243 -d
flag provided but not defined: -d
See 'docker --help'.
I restarted my computer many times and tried a lot of things, with no success.
I already have a group named docker and my user is in it:
ubuntu@ubuntu:~$ groups $USER
ubuntu : ubuntu adm cdrom sudo dip plugdev lpadmin sambashare docker
Please tell me what is wrong.
Your startup script contains an invalid command:
/usr/bin/docker -H tcp://0.0.0.0:4243 -d
Instead you need something like:
/usr/bin/docker daemon -H tcp://0.0.0.0:4243
As of 1.12, this is now (but docker daemon will still work):
/usr/bin/dockerd -H tcp://0.0.0.0:4243
Please note that this is opening a port that gives remote root access without any password to your docker host.
Anyone that wants to take over your machine can run docker run -v /:/target -H your.ip:4243 busybox /bin/sh to get a root shell with your filesystem mounted at /target. If you'd like to secure your host, follow this guide to setting up TLS certificates.
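For reference, the TLS-protected variant of the same listener ends up looking roughly like this once you have generated certificates (the paths below are placeholders for wherever you store them):
/usr/bin/dockerd -H tcp://0.0.0.0:4243 \
    --tlsverify \
    --tlscacert=/etc/docker/ca.pem \
    --tlscert=/etc/docker/server-cert.pem \
    --tlskey=/etc/docker/server-key.pem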
I finally found www.ivankrizsan.se and it is working fine now. Thanks to this guy (or girl) ;).
These settings work for me on Ubuntu 16.04. Here is how to do it:
Edit this file /lib/systemd/system/docker.service and replace the line ExecStart=/usr/bin/dockerd -H fd:// with
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:4243
Save the file
Restart with: sudo service docker restart
Test with: curl http://localhost:4243/version
Result: you should see something like this:
{"Version":"1.11.0","ApiVersion":"1.23","GitCommit":"4dc5990","GoVersion" "go1.5.4","Os":"linux","Arch":"amd64","KernelVersion":"4.4.0-22-generic","BuildTime":"2016-04-13T18:38:59.968579007+00:00"}
Attention:
Remain aware that binding to 0.0.0.0 is not good for security; for better security, you should bind to 127.0.0.1 instead.
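A minimal sketch of that safer binding, reusing the ExecStart override from above (only the bind address changes; the API is then reachable only from the VM itself, or through an SSH tunnel):
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://127.0.0.1:4243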
