Windows Container swarm publishes port but cannot access it - docker

I use Windows containers and am trying to create a Docker swarm. I created three virtual machines with Hyper-V, each running Windows Server 2016. The machines' IPs are:
windocker211 192.168.1.211
windocker212 192.168.1.212
windocker219 192.168.1.219
The docker swarm nodes are:
PS C:\ConsoleZ> docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
4c0g0o0uognheugw4do1a1h7y windocker212 Ready Active
bbxot0c8zijq7xw4lm86svgwp * windocker219 Ready Active Leader
wftwpiqpqpbqfdvgenn787psj windocker211 Ready Active
I create the service with this command:
docker service create --name=demo5 -p 5005:5005 --replicas 6 192.168.1.245/cqgis/wintestcore:0.6
The Docker image is an ASP.NET Core app; the Dockerfile is:
FROM 192.168.1.245/win/aspnetcore-runtime:1.1.2
COPY . /app
WORKDIR /app
ENV ASPNETCORE_URLS http://*:5005
EXPOSE 5005/tcp
ENTRYPOINT ["dotnet", "dotnetcore.dll"]
It is created successfully:
PS C:\ConsoleZ> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
omhu7e0vo96s demo5 replicated 6/6 192.168.1.245/cqgis/wintestcore:0.6 *:5005->5005/tcp
PS C:\ConsoleZ> docker service ps demo5
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
8pihnak9a2ei demo5.1 192.168.1.245/cqgis/wintestcore:0.6 windocker212 Running Running 59 seconds ago
ut3f3b9giu4w demo5.2 192.168.1.245/cqgis/wintestcore:0.6 windocker219 Running Running 47 seconds ago
iy1xjevt67yl demo5.3 192.168.1.245/cqgis/wintestcore:0.6 windocker211 Running Running about a minute ago
q7f1gnbwslr3 demo5.4 192.168.1.245/cqgis/wintestcore:0.6 windocker212 Running Running about a minute ago
8zewaktcu32h demo5.5 192.168.1.245/cqgis/wintestcore:0.6 windocker219 Running Running about a minute ago
xq820kqwf3v9 demo5.6 192.168.1.245/cqgis/wintestcore:0.6 windocker211 Running Running 55 seconds ago
but my problem is that I can't access the site on any of the nodes via
http://192.168.1.219:5005/
http://192.168.1.211:5005/
http://192.168.1.212:5005/
When I use the command
docker run -it -p 5010:5005 192.168.1.245/cqgis/wintestcore:0.6
I can open http://192.168.1.219:5010/ and get the right result.
My docker info is:
PS C:\ConsoleZ> docker info
Containers: 4
Running: 3
Paused: 0
Stopped: 1
Images: 5
Server Version: 17.06.0-ce-rc1
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: l2bridge l2tunnel nat null overlay transparent
Log: awslogs etwlogs fluentd json-file logentries splunk syslog
Swarm: active
NodeID: bbxot0c8zijq7xw4lm86svgwp
Is Manager: true
ClusterID: 32vsgwrbn6ihvpevly71gkgxk
Managers: 1
Nodes: 3
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Root Rotation In Progress: false
Node Address: 192.168.1.219
Manager Addresses:
192.168.1.219:2377
Default Isolation: process
Kernel Version: 10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)
Operating System: Windows Server 2016 Datacenter
OSType: windows
Architecture: x86_64
CPUs: 8
Total Memory: 2.89GiB
Name: windock219
ID: 7AOY:OT6V:BTJV:NCHA:3OF5:5WR5:K2YR:CFG3:VXLD:QTMD:GA3D:ZFJ2
Docker Root Dir: C:\ProgramData\docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: -1
Goroutines: 297
System Time: 2017-06-04T19:58:20.7582294+08:00
EventsListeners: 2
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
192.168.1.245
127.0.0.0/8
Live Restore Enabled: false

I believe you need to publish the port in "host" mode (learn.microsoft.com/en-us/virtualization/windowscontainers/…). Also, it will be a one-to-one port mapping between a running container and the host, so you will not be able to run several containers on the same port. Routing mesh is not working on Windows yet.
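For reference, a rough sketch of host-mode publishing with the long --publish syntax, reusing the service name and port from the question (exact behaviour may vary with the Docker version):
docker service create --name demo5 --mode global --publish mode=host,target=5005,published=5005 192.168.1.245/cqgis/wintestcore:0.6
Global mode runs one task per node, which avoids two containers on the same node fighting over port 5005; with host mode there is no ingress load balancing, so each node only answers for the task running on it.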

There are some differences in networking between Docker for Windows containers and Docker for Linux. Windows containers use the Hyper-V networking technologies to provide the virtual networking features that Docker relies on, and that brings some restrictions that do not behave the way you would expect from the standard Docker documentation.
First, you cannot access the web site running inside your container by using the loopback address (127.0.0.1) or the host's own address (192.168.1.xxx). You always have to call it from a remote machine.
I also saw that you are using the EXPOSE instruction in your Dockerfile. It is not very self-explanatory, but EXPOSE exposes the port on networks other than the host or ingress network. That is not a problem in a non-swarm configuration, but it does not work in a swarm, so I suggest removing the EXPOSE instruction.
There are also some unsolved problems with Windows networking: sometimes the port stays in use after the container is restarted, for example after a reboot of the host system.
https://github.com/moby/moby/issues/21558
With this script you can remove all the virtual network settings:
Stop-Service docker                              # stop the Docker engine
Get-ContainerNetwork | Remove-ContainerNetwork   # remove all container networks
Get-NetNat | Remove-NetNat                       # remove the WinNAT configuration
Get-VMSwitch | Remove-VMSwitch                   # remove the Hyper-V virtual switches
Start-Service docker                             # restart Docker, which recreates the default nat network

You cannot reach a container's published port from the same machine because of a limitation of WinNAT networking, but you can reach that port with an external request.
In your example, accessing http://192.168.1.219:5005/ from a machine other than 192.168.1.219 will succeed. The URLs http://192.168.1.211:5005/ and http://192.168.1.212:5005/ will also work, provided the requests originate from outside those machines.
Using 'host' mode will also work; however, you then give up the 'routing mesh' feature, which makes the service reachable through any of the service's nodes instead of only through the single node running the task.

Related

Cannot deploy Docker Swarm stack in rootless mode, mkdir /var/lib/docker: permission denied

I've set up Docker in rootless mode under Ubuntu 20.04 and Debian 11 (in my case, using Ansible and this role). I want to deploy a simple Docker stack to the node via Docker Swarm. No other hosts are involved, just one Swarm node from the same machine, acting as a manager.
I can run this project with Docker and Docker Compose just fine, also in rootless mode. All that changes for the rootless setup is that DOCKER_HOST is overwritten in .bashrc:
export XDG_RUNTIME_DIR="/run/user/1000"
export DOCKER_HOST="unix:///run/user/1000/docker.sock"
When I deploy the stack though, none of the services can start (here is an excerpt of the status):
$ docker stack deploy -c docker-stack.yml demo-stack
$ docker stack ps demo-stack --no-trunc
jig6zyewkem2g225509x91nt5 demo-stack_db.1 registry.example.com/db:v1.20.2 bullseye Shutdown Rejected 15 seconds ago "mkdir /var/lib/docker: permission denied"
ox6x5w7du9o5ew2v70g5mfg9e demo-stack_redis.1 registry.example.com/redis:v1.20.2 bullseye Shutdown Rejected 15 seconds ago "mkdir /var/lib/docker: permission denied"
ipme447wrrsjc8jw6cpfak4hq demo-stack_web.1 registry.example.com/web:v1.20.2 bullseye Shutdown Rejected 14 seconds ago "mkdir /var/lib/docker: permission denied"
The services all error with mkdir /var/lib/docker: permission denied. I suppose that it tries to start them as if the system was using rootful Docker, but it's a rootless installation.
I guess the question is: how do I get the Swarm node (which is the very same machine) to use the correct Docker rootless configuration for launching the services? That would include using the correct DOCKER_HOST configuration.
I am unsure if this is even supposed to work. I hear that overlay networks are not supported, but I am only on one machine, so I don't really need this. I do need Swarm for its usable implementation of secrets (compared to the mock implementation from Docker Compose).
Note that I have the same setup with Docker running in (normal) rootful mode, and there, all services can be started. It's therefore not an issue with the Docker stack file itself.
More details with docker info:
Client:
Context: default
Debug Mode: false
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 12
Server Version: 20.10.13
Storage Driver: fuse-overlayfs
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: hpzsmez48acse9yo1frnx37fo
Is Manager: true
ClusterID: zkv7wsoun193kyvbxe1k3hdph
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 127.0.0.1
Manager Addresses:
127.0.0.1:2377
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
runc version: v1.0.3-0-gf46b6ba2
init version: de40ad0
Security Options:
seccomp
Profile: default
rootless
cgroupns
Kernel Version: 5.10.0-13-amd64
Operating System: Debian GNU/Linux 11 (bullseye)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.936GiB
Name: bullseye
ID: 3R5P:2UV6:FIP4:UIJV:TDNQ:35DT:DEDI:SMGN:FDUY:JSWO:FRU6:O2HF
Docker Root Dir: /home/vagrant/.local/share/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
WARNING: No cpuset support
WARNING: No io.weight support
WARNING: No io.weight (per device) support
WARNING: No io.max (rbps) support
WARNING: No io.max (wbps) support
WARNING: No io.max (riops) support
WARNING: No io.max (wiops) support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
The solution is simple: Docker Rootless does not work with Docker Swarm. You can have either, but not both.
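As a quick sanity check, you can ask the daemon you are actually talking to whether it is rootless by looking at its security options (a minimal sketch; the --format flag assumes a reasonably recent CLI):
docker info --format '{{.SecurityOptions}}'
If the output contains name=rootless, the DOCKER_HOST you point at is the rootless daemon, and docker stack deploy against it will fail as described in the question.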

Cannot use docker-compose with overlay network

I'm pretty baffled by what's going on here, but I've narrowed it down to a very small test case. Here's my docker-compose file:
version: "3.7"
networks:
cl_net_overlay:
driver: overlay
services:
redis:
image: "redis:alpine"
networks:
- cl_net_overlay
The cl_net_overlay network doesn't exist. When I run this with:
docker-compose up
It stalls for a little while, then says:
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Creating network "tmp_cl_net_overlay" with driver "overlay"
Recreating tmp_redis_1 ... error
ERROR: for tmp_redis_1 Cannot start service redis: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded
ERROR: for redis Cannot start service redis: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded
ERROR: Encountered errors while bringing up the project.
This file was working fine for me on my previous laptop. My docker and docker-compose should be up to date since this is a brand new laptop. Is there some piece of the puzzle I'm missing?
05:01:11::mlissner@gabbro::/tmp
↪ docker --version
Docker version 19.03.1, build 74b1e89
05:01:57::mlissner@gabbro::/tmp
↪ docker-compose --version
docker-compose version 1.24.1, build 4667896b
Any ideas what's going on here? I've been trying to get it to work all day and I'm feeling a little like I'm losing my mind.
Small follow up. The message says:
make sure your network options are correct and check manager logs
I have no idea how to check the manager logs. That might be a useful first step?
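For reference, on a systemd-based host the swarm "manager logs" are simply the Docker daemon logs on the manager node, so something like the following should show them (assuming Docker runs as the docker systemd service):
journalctl -u docker.service --since "10 minutes ago"
Errors from the overlay network attachment path usually show up there around the time the container fails to start.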
Another follow up, per comments. If I try to deploy this I get no logs and it's unable to start up:
05:44:32::mlissner@gabbro::~/Programming/courtlistener/docker/courtlistener
↪ docker stack deploy --compose-file /tmp/docker-compose.yml test2
Creating network test2_cl_net_overlay2
Creating service test2_redis
05:44:50::mlissner@gabbro::~/Programming/courtlistener/docker/courtlistener
↪ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
5y7o01o5mifn test2_redis replicated 0/1 redis:alpine
05:44:57::mlissner@gabbro::~/Programming/courtlistener/docker/courtlistener
↪ docker service ps 5y
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
0kbph0ie8qth test2_redis.1 redis:alpine gabbro Ready Rejected 4 seconds ago "mkdir /var/lib/docker: read-o…"
inr81c3r4un7 \_ test2_redis.1 redis:alpine gabbro Shutdown Rejected 9 seconds ago "mkdir /var/lib/docker: read-o…"
tl1h6dp90ur2 \_ test2_redis.1 redis:alpine gabbro Shutdown Rejected 14 seconds ago "mkdir /var/lib/docker: read-o…"
jacv2yvkspix \_ test2_redis.1 redis:alpine gabbro Shutdown Rejected 19 seconds ago "mkdir /var/lib/docker: read-o…"
7cm6e8snf517 \_ test2_redis.1 redis:alpine gabbro Shutdown Rejected 19 seconds ago "mkdir /var/lib/docker: read-o…"
Another idea: Running as root. Same issue.
Do you have the right plugins (see more below, from the docker info command)?
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
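A quick way to list just the network plugins is something like the following (a sketch; the Go template path assumes the standard docker info output structure):
docker info --format '{{.Plugins.Network}}'
The list should include overlay; if it does not, the engine cannot create overlay networks at all.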
It works on:
$ docker swarm init
$ docker-compose up
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Creating network "stackoverflow-57701373_cl_net_overlay" with driver "overlay"
Pulling redis (redis:alpine)...
alpine: Pulling from library/redis
9d48c3bd43c5: Pull complete
(...)
redis_1 | 1:M 29 Aug 2019 01:27:31.969 * Ready to accept connection
When:
$ docker --version
Docker version 19.03.1-ce, build 74b1e89e8a
and info:
$ docker info
Client:
Debug Mode: false
Server:
(...)
Server Version: 19.03.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: ff5mogx0ph4pgmwm2zrbhmjb4
Is Manager: true
ClusterID: vloixv7g75jflw5i1k81neul1
Managers: 1
Nodes: 1
(...)

Can't recover docker swarm from pending state

There was a crash, and now the docker swarm status is pending and the node status is Unknown. This is my docker info result:
swarm#swarm-manager-1:~$ docker info
Containers: 270
Running: 0
Paused: 0
Stopped: 270
Images: 160
Server Version: 1.12.2
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 1211
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host bridge null overlay
Swarm: pending
NodeID: d9hq8wzz6skh9pzrxzhbckm97
Is Manager: true
ClusterID: 5zgab5w50qgvvep35eqcbote2
Managers: 1
Nodes: 2
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: HIDDEN
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-91-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 6.804 GiB
Name: swarm-manager-1
ID: AXPO:VFSV:TDT3:6X7Y:QNAO:OZJN:U23R:V5S2:FU33:WUNI:CRPK:2E2C
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
This is my docker node ls result:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
9tlo3rg7tuc23xzc3am28lak1 swarm-worker-1 Unknown Active
d9hq8wzz6skh9pzrxzhbckm97 * swarm-manager-1 Unknown Active Leader
I've tried restarting the Docker engine and the VM, but it doesn't help in any way. The system is actually running: when I run docker ps on the worker it shows all the containers, but on the manager docker ps shows nothing.
Any ideas?
In my experience with Swarm, the only solution to similar trouble was to destroy the swarm. When you do this you should probably also do a docker system prune (only if there's nothing valuable that could be deleted) and a service docker restart, and then set up a new swarm.
It sucks, I know.
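A rough sketch of that teardown-and-rebuild sequence, with the manager address left as a placeholder (note that docker system prune needs Docker 1.13 or newer, while the question shows 1.12.2):
docker swarm leave --force
docker system prune
sudo service docker restart
docker swarm init --advertise-addr <manager-ip>
docker swarm join --token <worker-token> <manager-ip>:2377
Run the leave/prune/restart steps on every node, the init on the manager, and the join (with the token printed by init) on each worker.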
Instead of just rebuilding the whole swarm all at once, you can attempt to remove and re-add each node one at a time - the advantage of this is that the swarm state is not destroyed and, on larger swarms, services can continue while you fix it. This process is considerably more complicated when you don't have a quorum of managers, though.
First, note the node IDs (I'll refer to here as $WORKER_ID and $MANAGER_ID).
On manager node:
docker node update --availability drain $WORKER_ID
^ This is optional, but it's a good habit when working with live services on a swarm.
docker swarm join-token manager
^ This command will give you the join command to run on each node after it's removed. I'll refer to it as $JOIN_COMMAND below. We will demote the worker once the manager re-joins.
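For illustration, the printed join command looks roughly like this (token and address are placeholders):
docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377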
On worker:
docker swarm leave
$JOIN_COMMAND
This node is now re-joined as a manager, but I'll continue calling it the 'worker' to avoid confusion.
On manager:
docker node rm $WORKER_ID
docker node update --availability drain $MANAGER_ID
docker swarm leave -f
$JOIN_COMMAND
docker node rm $MANAGER_ID
docker node ls
Find the worker's new id (pay attention to the hostname, not the role) -> $NEW_WORKER_ID
docker node demote $NEW_WORKER_ID
Your swarm should be refreshed - if there were more nodes, the services running on each would have migrated across the swarm when you drained each node.
If it still doesn't work (and regardless), you really should consider upgrading to docker v17.06 or newer. Swarm networking was very unstable before that, causing a lot of issues stemming from race conditions.

docker installation on ubuntu in virtualbox, cannot pull images

I have Ubuntu 14.04.5 installed as the guest OS in VirtualBox 5.0.26 running on Windows 10. I am not aware of any issues with the Ubuntu installation; it seems to run fine and has a bridged internet connection, so it gets its own IP.
I have installed Docker following the directions in the Docker docs for Linux. The installation goes fine without any errors and the Docker daemon starts OK.
Here is the docker info:
root@ubuntu-z9:~# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 0
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: overlay bridge host null
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 4.2.0-27-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 10
Total Memory: 31.42 GiB
Name: ubuntu-z9
ID: 7MPO:OHFW:3OBJ:KUVX:3YCS:XP4U:RE6W:SFC3:O4KK:GJJU:M6WJ:HYLY
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
The machine can see the internet fine and access hub.docker.com from a browser.
However, when I run the simple hello-world test, the daemon hangs
root@ubuntu-z9:~# docker run hello-world
Unable to find image 'hello-world:latest' locally
with a timeout.
I can run docker-machine without any issues on the host Windows 10 machine, so I believe the issue lies in my setup of the Ubuntu machine in VirtualBox and Docker.
Here is the logging output of the Docker daemon on the Ubuntu guest machine:
$ docker pull hello-world
DEBU[0093] Calling POST /v1.24/images/create?fromImage=hello-world&tag=latest
DEBU[0093] Trying to pull hello-world from https://registry-1.docker.io v2
DEBU[0094] Increasing token expiration to: 60 seconds
ERRO[0494] Error trying v2 registry: error parsing HTTP 408 response body: invalid character '<' looking for beginning of value: "<html><body><h1>408 Request Time-out</h1>\nYour browser didn't send a complete request in time.\n</body></html>\n\n"
ERRO[0494] Attempting next endpoint for pull after error: error parsing HTTP 408 response body: invalid character '<' looking for beginning of value: "<html><body><h1>408 Request Time-out</h1>\nYour browser didn't send a complete request in time.\n</body></html>\n\n"
DEBU[0494] Skipping v1 endpoint https://index.docker.io because v2 registry was detected
ERRO[0494] Handler for POST /v1.24/images/create returned error: error parsing HTTP 408 response body: invalid character '<' looking for beginning of value: "<html><body><h1>408 Request Time-out</h1>\nYour browser didn't send a complete request in time.\n</body></html>\n\n"
Any suggestions on a way forward to diagnose or fix the issue?
Many thanks.
It was a simple issue, undoubtedly documented somewhere, but I missed it. I'm posting an answer here in case someone else has the same problem.
The VirtualBox guest OS (Ubuntu in my case) has to have a NAT network adapter, and the NAT adapter has to have higher priority than a bridged adapter (if you have one). You don't need a bridged adapter to run Docker (but if you want the VM to have an IP on your local network, then you do need to add a bridged adapter).
VirtualBox configuration examples that work to run docker:
VBox Adapter 1: NAT (eth0), VBox Adapter 2: Host-only Adapter (eth1)
VBox Adapter 1: NAT (eth0), VBox Adapter 2: Bridged Adapter (eth1)
VirtualBox configuration examples that do not work to run docker:
VBox Adapter 1: Bridged Adapter (eth0)
VBox Adapter 1: Bridged Adapter (eth0), VBox Adapter 2: NAT (eth1)
Note that in all four cases the VirtualBox Ubuntu OS has access to the internet, but Docker can only pull images when NAT has priority over the bridged interface.
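For example, a working layout can be configured from the Windows host with VBoxManage while the VM is powered off (the VM name is taken from the shell prompt above; the bridged interface name is a placeholder for whichever host adapter you use):
VBoxManage modifyvm "ubuntu-z9" --nic1 nat
VBoxManage modifyvm "ubuntu-z9" --nic2 bridged --bridgeadapter2 "<host network adapter>"
This puts NAT on eth0 and the bridged adapter on eth1, matching the second working configuration above.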

docker and DNS issues

I am trying to install a Docker registry on an Ubuntu server, but it seems Docker has issues with DNS.
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Unable to find image 'registry:2' locally
Pulling repository registry
Get https://index.docker.io/v1/repositories/library/registry/images: dial tcp: lookup index.docker.io: no such host
However, all other applications work fine. I can also do a wget on index.docker.io, so no issues there.
I am using an internal DNS server, which is a Synology NAS device.
resolv.conf of the server:
nameserver 192.168.10.2
search internal.mydomain.com
my /etc/default/docker options:
DOCKER_OPTS="--bip=192.168.11.0/24 --dns 192.168.10.2"
I am using 192.168.10.0/24 as my internal IP range; the .2 IP belongs to my NAS/DNS server.
Docker version:
Docker version 1.7.1, build 786b29d
Does anyone have a clue?
Update: changing DNS to Google solved the download issue, but now it gives an error afterwards:
Error response from daemon: Cannot start container 33757f59f942583ff949f421fb5c266e6d1c2b0fdc1363565e77febf44feb60f: invalid argument
Some additional info about my setup:
jeroen@docker01:~$ docker info
Containers: 3
Images: 22
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 28
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-32-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 2
Total Memory: 1.955 GiB
Name: docker01
ID: X6JB:IH7Z:OK5O:II5I:OJ6V:OERE:IPEM:PN6S:HDDM:G2J7:HRB2:4ZKO
WARNING: No swap limit support
I had the same issue, and I noticed that you have "--bip=192.168.11.0/24".
Try changing this to an actual IP address rather than a network address. For example, try "--bip=192.168.11.1/24".
You will have to stop Docker, remove the docker0 bridge (ip link delete docker0), and then restart using the new bip option.
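A rough sketch of those steps on Ubuntu 14.04, keeping the DNS server from the question and using the corrected bip value in /etc/default/docker:
sudo service docker stop
sudo ip link delete docker0
sudo service docker start
with /etc/default/docker containing:
DOCKER_OPTS="--bip=192.168.11.1/24 --dns 192.168.10.2"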
