How can I connect (ssh) to a docker container from other computers inside my lab network? - docker

I'm trying to use the resources of other computers with python3-mpi4py, since my research involves a lot of computation.
My code and data are in a docker container.
To use MPI I have to be able to ssh directly into the docker container from other computers on the same network as the host, but I cannot ssh into it.
My setup looks like the diagram below:
| Host                |  <- On the same network ->  | Other Computers |
|   port 10000        |                             |                 |
|       ^             |                             |                 |
|       |             |                             |                 |
|       v             |                             |                 |
|   port 10000        |                             |                 |
| docker container <--|------------ ssh ------------|--               |
Can anyone teach me how to do this?

You can run an SSH server on the Host computer; then you can ssh to the Host and use a docker command such as docker exec -i -t containerName /bin/bash to get an interactive shell.
Example:
# 1. On the other computers
ssh root@host_ip
>> you are now in the Host ssh shell
# 2. On the Host ssh shell
docker exec -i -t containerName /bin/bash
>> you are now in the container's interactive shell
# 3. In the container's interactive shell
do something
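Note that ssh-to-Host plus docker exec does not give other machines direct SSH access to the container, which MPI needs. A minimal sketch of the direct approach shown in the question's diagram, assuming an image that already has openssh-server installed and a user account configured inside it (the image and user names below are illustrative, not from the question):
# on the Host: publish the container's sshd (port 22) on host port 10000
docker run -d --name mpi-node -p 10000:22 my-ssh-enabled-image
# on any other computer in the lab network: ssh straight into the container
ssh -p 10000 user@host_ip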

Related

Docker failed to load listeners, cannot assign requested address

I'm using this guide to try to run Docker using WSL2. I've got everything starting, however there is an issue when I actually try to run Docker. Once I run the command sudo dockerd -H `ifconfig eth0 | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d:` I get the following warnings and error:
WARN[2022-02-01T11:07:40.033323500-06:00] Binding to IP address without --tlsverify is insecure and gives root access on this machine to everyone who has access to your network. host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:40.033991800-06:00] Binding to an IP address, even on localhost, can also give access to scripts run in a browser. Be safe out there! host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.036303800-06:00] Binding to an IP address without --tlsverify is deprecated. Startup is intentionally being slowed down to show this message host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.043536700-06:00] Please consider generating tls certificates with client validation to prevent exposing unauthenticated root access to your network host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.044564400-06:00] You can override this by explicitly specifying '--tls=false' or '--tlsverify=false' host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.045654100-06:00] Support for listening on TCP without authentication or explicit intent to run without authentication will be removed in the next release host="tcp://169.254.77.26:2375"
failed to load listeners: listen tcp 169.254.77.26:2375: bind: cannot assign requested address
I'm not too familiar with Docker so not sure what I can adjust to make it launch properly. Any suggestions are appreciated, thanks!
I was doing exactly the same thing.
What worked for me was this comment: https://dev.to/nelsonpena/comment/1jmkb, but it was not very explicit.
I opened Windows PowerShell and ran the command
wsl --set-version Ubuntu 2
If you have another Linux distro, it would be
wsl --set-version <distroname> 2
I closed WSL, opened it again, and executed the command
echo `ifconfig eth0 | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2;exit }' | cut -f2 -d:`
and got API listen on [the IP]
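As a quick sanity check (my addition, not part of the original comment), you can confirm from PowerShell that the distro really is running under WSL 2 before retrying dockerd:
wsl --list --verbose
#   NAME      STATE           VERSION
# * Ubuntu    Running         2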

docker service create interactive mode

My Dockerfile looks as below
FROM base-rpi:latest
USER root
WORKDIR /Pwr/murata/test
RUN make
CMD ["./murata_tcp_test"]
Docker build
docker build --no-cache --rm -t m-docker .
When I run docker as below:
docker run -it --rm --name m-docker m-docker
it shows me an interactive console and allows me to select options:
****** Test application **********
Press 1 for connect
Press 2 for add a node
Press 0 for exit
Enter choice
******************************************
But in swarm mode when I do
docker service create --name m-docker m-docker:latest
it is unable to start the container, with the message below:
overall progress: 0 out of 1 tasks
1/1: preparing [=================================> ]
verify: Detected task failure
The docker service logs show that the container is started and stopped repeatedly:
docker service logs m-docker -f
m-docker.1.9gwwzx4r0isn@raspberrypi | ****** Test application **********
m-docker.1.9gwwzx4r0isn@raspberrypi | Press 1 for connect
m-docker.1.9gwwzx4r0isn@raspberrypi | Press 2 for add a node
m-docker.1.kpg4fxom4uyw@raspberrypi | ****** Test application **********
m-docker.1.kpg4fxom4uyw@raspberrypi | Press 1 for connect
m-docker.1.kpg4fxom4uyw@raspberrypi | Press 2 for add a node
m-docker.1.9gwwzx4r0isn@raspberrypi | Press 0 for exit
m-docker.1.9gwwzx4r0isn@raspberrypi | Enter choice
m-docker.1.kpg4fxom4uyw@raspberrypi | Press 0 for exit
m-docker.1.kpg4fxom4uyw@raspberrypi | Enter choice
m-docker.1.tk676t1aabmh@raspberrypi | ****** Test application **********
How can I run docker service create in interactive mode? I referred to the docker service create documentation but it does not provide any option to run in interactive mode.
That is because swarm runs containers in detached mode by default, so no TTY is allocated to interact with the container.
Did you try running
docker service create --name m-docker --tty m-docker:latest
This will allocate a pseudo-TTY:
--tty, -t    (API 1.25+)    Allocate a pseudo-TTY
See the docker service create reference for details.
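A hedged follow-up (my addition, not from the answer): you can check whether the flag actually landed in the service spec; the template path below mirrors the docker service inspect JSON and prints true once a pseudo-TTY is requested:
# check whether the service was created with a TTY
docker service inspect m-docker --format '{{.Spec.TaskTemplate.ContainerSpec.TTY}}'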

Run X application in a Docker container reliably on a server connected via SSH without "--net host"

Without a Docker container, it is straightforward to run an X11 program on a remote server using SSH X11 forwarding (ssh -X). I have tried to get the same thing working when the application runs inside a Docker container on the server. When SSH-ing into a server with the -X option, an X11 tunnel is set up and the environment variable $DISPLAY is automatically set to typically "localhost:10.0" or similar. If I simply try to run an X application inside a Docker container, I get this error:
Error: GDK_BACKEND does not match available displays
My first idea was to actually pass the $DISPLAY into the container with the "-e" option like this:
docker run -ti -e DISPLAY=$DISPLAY name_of_docker_image
This helps, but it does not solve the issue. The error message changes to:
Unable to init server: Broadway display type not supported: localhost:10.0
Error: cannot open display: localhost:10.0
After searching the web, I figured out that I could do some xauth magic to fix the authentication. I added the following:
XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -
chmod 777 $XAUTH
docker run -ti -e DISPLAY=$DISPLAY -v $XSOCK:$XSOCK -v $XAUTH:$XAUTH \
-e XAUTHORITY=$XAUTH name_of_docker_image
However, this only works if I also add "--net host" to the docker command:
docker run -ti -e DISPLAY=$DISPLAY -v $XSOCK:$XSOCK -v $XAUTH:$XAUTH \
-e XAUTHORITY=$XAUTH --net host name_of_docker_image
This is not desirable since it makes the whole host network visible for the container.
What is now missing in order to get it fully to run on a remote server in a docker without "--net host"?
I figured it out. When you are connecting to a computer with SSH and using X11 forwarding, /tmp/.X11-unix is not used for the X communication, so the part related to $XSOCK is unnecessary.
Instead, the X application uses the hostname in $DISPLAY, typically "localhost", and connects using TCP. This is then tunneled back to the SSH client. When using "--net host" for Docker, "localhost" is the same for the container as for the host, and therefore it works fine.
When not specifying "--net host", Docker uses the default bridge network mode. This means that "localhost" means something else inside the container than on the host, so X applications inside the container cannot reach the X server by referring to "localhost". To solve this, replace "localhost" with the actual IP address of the host. This is usually "172.17.0.1" or similar; check "ip addr" for the "docker0" interface.
This can be done with a sed replacement:
DISPLAY=`echo $DISPLAY | sed 's/^[^:]*\(.*\)/172.17.0.1\1/'`
Additionally, the SSH server is commonly not configured to accept remote connections to this X11 tunnel. This must then be changed by editing /etc/ssh/sshd_config (at least in Debian) and setting:
X11UseLocalhost no
and then restart the SSH server, and re-login to the server with "ssh -X".
This is almost it, but there is one complication left. If any firewall is running on the Docker host, the TCP port associated with the X11-tunnel must be opened. The port number is the number between the : and the . in $DISPLAY added to 6000.
To get the TCP port number, you can run:
X11PORT=`echo $DISPLAY | sed 's/^[^:]*:\([^\.]\+\).*/\1/'`
TCPPORT=`expr 6000 + $X11PORT`
Then (if using ufw as firewall), open up this port for the Docker containers in the 172.17.0.0 subnet:
ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
All the commands together can be put into a script:
XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | sudo xauth -f $XAUTH nmerge -
sudo chmod 777 $XAUTH
X11PORT=`echo $DISPLAY | sed 's/^[^:]*:\([^\.]\+\).*/\1/'`
TCPPORT=`expr 6000 + $X11PORT`
sudo ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
DISPLAY=`echo $DISPLAY | sed 's/^[^:]*\(.*\)/172.17.0.1\1/'`
sudo docker run -ti --rm -e DISPLAY=$DISPLAY -v $XAUTH:$XAUTH \
-e XAUTHORITY=$XAUTH name_of_docker_image
Assuming you are not root and therefore need to use sudo.
Instead of sudo chmod 777 $XAUTH, you could run:
sudo chown my_docker_container_user $XAUTH
sudo chmod 600 $XAUTH
to prevent other users on the server from also being able to access the X server if they find the /tmp/.docker.xauth file you have created.
I hope this should make it properly work for most scenarios.
If you set X11UseLocalhost no, you're allowing even external traffic to reach the X11 socket: traffic directed to an external IP of the machine can reach the SSHD X11 forwarding. There are still two security mechanisms which might apply (firewall, X11 auth). Still, I'd prefer to leave a system-global setting alone when you're fiddling with a user- or even application-specific issue like in this case.
Here's an alternative way to get X11 graphics out of a container, and via X11 forwarding from the server to the client, without changing X11UseLocalhost in the sshd config.
+ docker container net ns +
| |
172.17.0.1 | 172.17.0.2 |
+- docker0 --------- veth123@if5 --|-- eth0@if6 |
| (bridge) (veth pair) | (veth pair) |
| | |
| 127.0.0.1 +-------------------------+
routing +- lo
| (loopback)
|
| 192.168.1.2
+- ens33
(physical host interface)
With the default X11UseLocalhost yes, sshd listens only on 127.0.0.1 on the root network namespace. We need to get the X11 traffic from inside the docker network namespace to the loopback interface in the root net ns. The veth pair is connected to the docker0 bridge and both ends can therefore talk to 172.17.0.1 without any routing. The three interfaces in the root net ns (docker0, lo and ens33) can communicate via routing.
We want to achieve the following:
+ docker container net ns +
| |
172.17.0.1 | 172.17.0.2 |
+- docker0 --------< veth123@if5 --|-< eth0@if6 -----< xeyes |
| (bridge) (veth pair) | (veth pair) |
v | |
| 127.0.0.1 +-------------------------+
routing +- lo >--ssh x11 fwd-+
(loopback) |
v
192.168.1.2 |
<-- ssh -- ens33 ------<-----+
(physical host interface)
We can let the X11 application talk directly to 172.17.0.1 to "escape" the docker net ns. This is achieved by setting the DISPLAY appropriately: export DISPLAY=172.17.0.1:10:
+ docker container net ns+
| |
172.17.0.1 | 172.17.0.2 |
docker0 --------- veth123@if5 --|-- eth0@if6 -----< xeyes |
(bridge) (veth pair) | (veth pair) |
| |
127.0.0.1 +-------------------------+
lo
(loopback)
192.168.1.2
ens33
(physical host interface)
Now, we add an iptables rule on the host to route from 172.17.0.1 to 127.0.0.1 in the root net ns:
iptables \
--table nat \
--insert PREROUTING \
--proto tcp \
--destination 172.17.0.1 \
--dport 6010 \
--jump DNAT \
--to-destination 127.0.0.1:6010
sysctl net.ipv4.conf.docker0.route_localnet=1
Note that we're using port 6010, that's the default port on which SSHD performs X11 forwarding: It's using display number 10, which is added to the port "base" 6000. You can check which display number to use after you've established the SSH connection by checking the DISPLAY environment variable in a shell started by SSH.
Maybe you can improve on the forwarding rule by only routing traffic from this container (veth end). Also, I'm not quite sure why the route_localnet is needed, to be honest. It appears that 127/8 is a strange source / destination for packets and therefore disabled for routing by default. You can probably also reroute traffic from the loopback interface inside the docker net ns to the veth pair, and from there to the loopback interface in the root net ns.
With the commands given above, we end up with:
+ docker container net ns +
| |
172.17.0.1 | 172.17.0.2 |
+- docker0 --------< veth123@if5 --|-< eth0@if6 -----< xeyes |
| (bridge) (veth pair) | (veth pair) |
v | |
| 127.0.0.1 +-------------------------+
routing +- lo
(loopback)
192.168.1.2
ens33
(physical host interface)
The remaining connection is established by SSHD when you establish a connection with X11 forwarding. Please note that you have to establish the connection before attempting to start an X11 application inside the container, since the application will immediately try to reach the X11 server.
There is one piece missing: authentication. We're now trying to access the X11 server as 172.17.0.1:10 inside the container. The container however doesn't have any X11 authentication, or not a correct one if you're bind-mounting the home directory (outside the container it's usually something like <hostname>:10). Use Ruben's suggestion to add a new entry visible inside the docker container:
# inside container
xauth add 172.17.0.1:10 . <cookie>
where <cookie> is the cookie set up by the SSH X11 forwarding, e.g. via xauth list.
You might also have to allow traffic ingress to 172.17.0.1:6010 in your firewall.
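For reference, a small sketch of moving the cookie across, assuming the forwarded display is :10 (the exact xauth list output format can vary, and the cookie value here is illustrative):
# on the docker host, inside the ssh -X session:
xauth list "$DISPLAY"
# prints something like:  myhost/unix:10  MIT-MAGIC-COOKIE-1  1234abcd...
# inside the container, reuse that cookie for the 172.17.0.1:10 display name:
xauth add 172.17.0.1:10 MIT-MAGIC-COOKIE-1 1234abcd...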
You can also start an application from the host inside the docker container network namespace:
sudo nsenter --target=<pid of process in container> --net su - $USER <app>
Without the su, you'll be running as root. Of course, you can also use another container and share the network namespace:
sudo docker run --network=container:<other container name/id> ...
The X11 forwarding mechanism shown above applies to the entire network namespace (actually, to everything connected to the docker0 bridge). Therefore, it will work for any applications inside the container network namespace.
In my case, I sit at "remote" and connect to a "docker_container" on "docker_host":
remote --> docker_host --> docker_container
To make debugging scripts with VSCode easier, I installed SSHD into the "docker_container", listening on port 22, which is mapped to another port (say 1234) on the "docker_host".
So I can connect directly with the running container via ssh (from "remote"):
ssh -Y -p 1234 appuser@docker_host.local
(where appuser is the username within the "docker_container". I am working on my local subnet now, so I can reference my server via the .local mapping. For external IPs, just make sure your router maps this port to this machine.)
This creates a connection directly from my "remote" to "docker_container" via ssh.
remote --> (ssh) --> docker_container
Inside the "docker_container", I installed sshd with
sudo apt-get install openssh-server (you can add this to your Dockerfile to install it at build time; see the sketch at the end of this answer).
To allow X11 forwarding to work, edit the /etc/ssh/sshd_config file as such:
X11Forwarding yes
X11UseLocalhost no
Then restart ssh within the container. You should do this from a shell exec'd into the container from the "docker_host" (docker exec -ti docker_container bash), not while you are connected to the "docker_container" via ssh.
Restart sshd:
sudo service ssh restart
When you connect via ssh to the "docker_container", check the $DISPLAY environment variable. It should say something like
appuser@3f75a98d67e6:~/data$ echo $DISPLAY
3f75a98d67e6:10.0
Test by executing your favorite X11 graphics program from within "docker_container" via ssh (like cv2.imshow())
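If you prefer to bake all of this into the image, here is a hedged Dockerfile sketch (base image, user name, and password are illustrative, and the sed edit assumes the stock Ubuntu sshd_config layout):
FROM ubuntu
RUN apt-get update && apt-get install -y openssh-server x11-apps
# enable X11 forwarding through the container's sshd
RUN sed -i 's/^#\?X11Forwarding .*/X11Forwarding yes/' /etc/ssh/sshd_config && \
    echo 'X11UseLocalhost no' >> /etc/ssh/sshd_config
# sshd needs its privilege-separation directory and a user to log in as
RUN mkdir -p /run/sshd && useradd -m appuser && echo 'appuser:changeme' | chpasswd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Run it with the port mapping from above, e.g. docker run -d -p 1234:22 my-sshd-image, then connect with ssh -Y -p 1234 appuser@docker_host.local.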
I use an automated approach which can be executed entirely from within the docker container.
All that is needed is to pass the DISPLAY variable to the container, and mounting .Xauthority.
Moreover, it only uses the port from the DISPLAY variable, so it will also work in cases where DISPLAY=localhost:XY.Z.
Create a file, source-me.sh, with the following content:
# Find the containers address in /etc/hosts
CONTAINER_IP=$(grep $(hostname) /etc/hosts | awk '{ print $1 }')
# Assume the docker-host IP only differs in the last byte
SUBNET=$(echo $CONTAINER_IP | sed 's/\.[^\.]*$//')
DOCKER_HOST_IP=${SUBNET}.1
# Get the port from the DISPLAY variable
DISPLAY_PORT=$(echo $DISPLAY | sed 's/.*://' | sed 's/\..*//')
# Create the correct display-name
export DISPLAY=$DOCKER_HOST_IP:$DISPLAY_PORT
# Find an existing xauth entry for the same port (DISPLAY_PORT),
# and copy everything except the display-name
# filtering out entries containing /unix: which correspond to "same-machine" connections
ENTRY=$(xauth -n list | grep -v '/unix\:' | grep "\:${DISPLAY_PORT}" | head -n 1 | sed 's/^[^ ]* *//')
# Prepend our display-name
ENTRY="$DOCKER_HOST_IP:$DISPLAY_PORT $ENTRY"
# Add the new xauth entry.
# Because our .Xauthority file is mounted, a new file
# named ${HOME}/.Xauthority-n will be created, and a warning
# is printed on std-err
xauth add $ENTRY 2> /dev/null
# replace the content of ${HOME}/.Xauthority with that of ${HOME}/.Xauthority-n
# without creating a new i-node.
cat ${HOME}/.Xauthority-n > ${HOME}/.Xauthority
Create the following Dockerfile for testing:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y xauth
COPY source-me.sh /root/
RUN cat /root/source-me.sh >> /root/.bashrc
# xeyes for testing:
RUN apt-get install -y x11-apps
Build and run:
docker build -t test-x .
docker run -ti \
-v $HOME/.Xauthority:/root/.Xauthority:rw \
-e DISPLAY=$DISPLAY \
test-x \
bash
Inside the container, run:
xeyes
To run non-interactively, you must ensure source-me.sh is sourced:
docker run \
-v $HOME/.Xauthority:/root/.Xauthority:rw \
-e DISPLAY=$DISPLAY \
test-x \
bash -c "source source-me.sh ; xeyes"

Docker container only on network

I use Docker and I have multiple webapps, each of which needs a MySQL server. Currently each webapp uses its own bridge network to communicate with its MySQL server, but each MySQL server uses a different port (3306, 3307, 3308 ...).
I can't run them all on port 3306 because that port is already taken by the first webapp's MySQL.
Is it possible to run all the MySQL servers on 3306?
What I have :
| Net1 (bridge) | Net2(bridge) | Net3(bridge) | .... |
|--------------------|----------------------|--------------------|-----|
| Webapp1:80 | Webapp2:8080 | Webapp3:8081 | ... |
| Mysql:3306 | Mysql:3307 | Mysql:3308 | ... |
What I would like:
| Net1 (bridge) | Net2(bridge) | Net3(bridge) | .... |
|--------------------|----------------------|--------------------|-----|
| Webapp1:80 | Webapp2:8080 | Webapp3:8081 | ... |
| Mysql:3306 | Mysql:3306 | Mysql:3306 | ... |
How I run my containers:
docker network create --driver bridge webapp1net
docker run -d -p 3306:3306 \
  --net=webapp1net \
  --net-alias=[webapp1net] \
  -h webapp1-mysql \
  --name webapp1-mysql mysql
docker run -d -p 127.0.0.1:80:80 \
  --net=webapp1net \
  --net-alias=[webapp1net] \
  -h webapp1 \
  --name webapp1 webapp1
Thanks
Old post:
With Docker, I would like to know if it's possible to expose a container only on the network and not on the host.
Example:
I have 3 services, each on its own network, and each uses MySQL, but I don't want to change MySQL's port.
Net 1 : myapp:80 (accessible by the localhost) & MySQL:3306 (only on the network)
Net 2 : myapp:8080 (accessible by the localhost) & MySQL:3306 (only on the network)
etc.
Is it possible to do something by running MySQL on 0.0.0.0 ?
Thanks
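For reference, a minimal sketch of the usual pattern: container ports only collide when they are published to the host with -p, so each MySQL can stay on 3306 inside its own bridge network if you simply don't publish it (this assumes each webapp reaches its database through the network hostname/alias rather than through the host):
docker network create --driver bridge webapp1net
# no -p for MySQL: it is reachable as webapp1-mysql:3306, but only from webapp1net
docker run -d --net=webapp1net -h webapp1-mysql --name webapp1-mysql mysql
# only the webapp itself is published to the host
docker run -d -p 127.0.0.1:80:80 --net=webapp1net -h webapp1 --name webapp1 webapp1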

Can't Ping a Pod after Ubuntu cluster setup

I have followed the most recent instructions (updated 7th May '15) to set up a cluster on Ubuntu** with etcd and flanneld. But I'm having trouble with the network... it seems to be in some kind of broken state.
**Note: I updated the config script so that it installed 0.16.2. Also, kubectl get minions returned nothing at first, but after a sudo service kube-controller-manager restart the nodes appeared.
This is my setup:
| ServerName | Public IP | Private IP |
------------------------------------------
| KubeMaster | 107.x.x.32 | 10.x.x.54 |
| KubeNode1 | 104.x.x.49 | 10.x.x.55 |
| KubeNode2 | 198.x.x.39 | 10.x.x.241 |
| KubeNode3 | 104.x.x.52 | 10.x.x.190 |
| MongoDev1 | 162.x.x.132 | 10.x.x.59 |
| MongoDev2 | 104.x.x.103 | 10.x.x.60 |
From any machine I can ping any other machine... it's when I create pods and services that I start getting issues.
Pod
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED
auth-dev-ctl-6xah8 172.16.37.7 sis-auth leportlabs/sisauth:latestdev 104.x.x.52/104.x.x.52 environment=dev,name=sis-auth Running 3 hours
So this pod has been spun up on KubeNode3... if I try to ping it from any machine other than KubeNode3 itself, I get a Destination Net Unreachable error. E.g.
# ping 172.16.37.7
PING 172.16.37.7 (172.16.37.7) 56(84) bytes of data.
From 129.250.204.117 icmp_seq=1 Destination Net Unreachable
I can call etcdctl get /coreos.com/network/config on all four and get back {"Network":"172.16.0.0/16"}.
I'm not sure where to look from there. Can anyone help me out here?
Supporting Info
On the master node:
# ps -ef | grep kube
root 4729 1 0 May07 ? 00:06:29 /opt/bin/kube-scheduler --logtostderr=true --master=127.0.0.1:8080
root 4730 1 1 May07 ? 00:21:24 /opt/bin/kube-apiserver --address=0.0.0.0 --port=8080 --etcd_servers=http://127.0.0.1:4001 --logtostderr=true --portal_net=192.168.3.0/24
root 5724 1 0 May07 ? 00:10:25 /opt/bin/kube-controller-manager --master=127.0.0.1:8080 --machines=104.x.x.49,198.x.x.39,104.x.x.52 --logtostderr=true
# ps -ef | grep etcd
root 4723 1 2 May07 ? 00:32:46 /opt/bin/etcd -name infra0 -initial-advertise-peer-urls http://107.x.x.32:2380 -listen-peer-urls http://107.x.x.32:2380 -initial-cluster-token etcd-cluster-1 -initial-cluster infra0=http://107.x.x.32:2380,infra1=http://104.x.x.49:2380,infra2=http://198.x.x.39:2380,infra3=http://104.x.x.52:2380 -initial-cluster-state new
On a node:
# ps -ef | grep kube
root 10878 1 1 May07 ? 00:16:22 /opt/bin/kubelet --address=0.0.0.0 --port=10250 --hostname_override=104.x.x.49 --api_servers=http://107.x.x.32:8080 --logtostderr=true --cluster_dns=192.168.3.10 --cluster_domain=kubernetes.local
root 10882 1 0 May07 ? 00:05:23 /opt/bin/kube-proxy --master=http://107.x.x.32:8080 --logtostderr=true
# ps -ef | grep etcd
root 10873 1 1 May07 ? 00:14:09 /opt/bin/etcd -name infra1 -initial-advertise-peer-urls http://104.x.x.49:2380 -listen-peer-urls http://104.x.x.49:2380 -initial-cluster-token etcd-cluster-1 -initial-cluster infra0=http://107.x.x.32:2380,infra1=http://104.x.x.49:2380,infra2=http://198.x.x.39:2380,infra3=http://104.x.x.52:2380 -initial-cluster-state new
#ps -ef | grep flanneld
root 19560 1 0 May07 ? 00:00:01 /opt/bin/flanneld
So I noticed that the flannel configuration (/run/flannel/subnet.env) was different from what docker was starting up with (I have no idea how they got out of sync).
# ps -ef | grep docker
root 19663 1 0 May07 ? 00:09:20 /usr/bin/docker -d -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.85.1/24 --mtu=1472
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=172.16.60.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
Note that the docker --bip=172.16.85.1/24 was different to the flannel subnet FLANNEL_SUBNET=172.16.60.1/24.
So naturally I changed /etc/default/docker to reflect the new value.
DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.60.1/24 --mtu=1472"
But now a sudo service docker restart wasn't erroring out, so I looked at /var/log/upstart/docker.log and could see the following:
FATA[0000] Shutting down daemon due to errors: Bridge ip (172.16.85.1) does not match existing bridge configuration 172.16.60.1
So the final piece to the puzzle was deleting the old bridge and restarting docker...
# sudo brctl delbr docker0
# sudo service docker start
If sudo brctl delbr docker0 returns "bridge docker0 is still up; can't delete it", run ifconfig docker0 down and try again.
Please try this:
ip link del docker0
systemctl restart flanneld
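A further sketch (my addition, not from the answers above): since the mismatch came from /etc/default/docker drifting away from /run/flannel/subnet.env, you can regenerate the Docker options from flannel's env file instead of editing them by hand:
# read FLANNEL_SUBNET / FLANNEL_MTU written by flanneld
. /run/flannel/subnet.env
# rewrite the Docker defaults so --bip/--mtu always match flannel
echo "DOCKER_OPTS=\"-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\"" | sudo tee /etc/default/docker
sudo service docker restart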
