Mesos Slave - Docker compose - docker

I am using Mesos version 1.0.3, installed via:
docker pull mesosphere/mesos-master:1.0.3
docker pull mesosphere/mesos-slave:1.0.3
I am using docker-compose to start mesos-master and mesos-slave. The docker-compose file:
services:
  #
  # Zookeeper must be provided externally
  #
  #
  # Mesos
  #
  mesos-master:
    image: mesosphere/mesos-master:1.0.3
    restart: always
    privileged: true
    network_mode: host
    volumes:
      - ~/mesos-data/master:/tmp/mesos
    environment:
      MESOS_CLUSTER: "mesos-cluster"
      MESOS_QUORUM: "1"
      MESOS_ZK: "zk://localhost:2181/mesos"
      MESOS_PORT: 5000
      MESOS_REGISTRY_FETCH_TIMEOUT: "2mins"
      MESOS_EXECUTOR_REGISTRATION_TIMEOUT: "2mins"
      MESOS_LOGGING_LEVEL: INFO
      MESOS_INITIALIZE_DRIVER_LOGGING: "false"
  mesos-slave1:
    image: mesosphere/mesos-slave:1.0.3
    depends_on: [ mesos-master ]
    restart: always
    privileged: true
    network_mode: host
    volumes:
      - ~/mesos-data/slave-1:/tmp/mesos
      - /sys/fs/cgroup:/sys/fs/cgroup
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      MESOS_CONTAINERIZERS: docker
      MESOS_MASTER: "zk://localhost:2181/mesos"
      MESOS_PORT: 5051
      MESOS_WORK_DIR: "/var/lib/mesos/slave-1"
      MESOS_LOGGING_LEVEL: WARNING
      MESOS_INITIALIZE_DRIVER_LOGGING: "false"
The Mesos master runs fine without any issues, but the slave fails to start with the error below. I am not sure what else is missing here.
I0811 21:38:28.952507 1 main.cpp:243] Build: 2017-02-13 08:10:42 by ubuntu
I0811 21:38:28.952599 1 main.cpp:244] Version: 1.0.3
I0811 21:38:28.952601 1 main.cpp:247] Git tag: 1.0.3
I0811 21:38:28.952603 1 main.cpp:251] Git SHA: c673fdd00e7f93ab7844965435d57fd691fb4d8d
SELinux: Could not open policy file <= /etc/selinux/targeted/policy/policy.29: No such file or directory
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO#log_env#726: Client environment:zookeeper.version=zookeeper C client 3.4.8
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO#log_env#730: Client environment:host.name=<HOST_NAME>
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO#log_env#737: Client environment:os.name=Linux
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO#log_env#738: Client environment:os.arch=3.8.13-98.7.1.el7uek.x86_64
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO#log_env#739: Client environment:os.version=#2 SMP Wed Nov 25 13:51:41 PST 2015
2017-08-11 21:38:29,063:1(0x7f4f78d0d700):ZOO_INFO#log_env#747: Client environment:user.name=(null)
2017-08-11 21:38:29,063:1(0x7f4f78d0d700):ZOO_INFO#log_env#755: Client environment:user.home=/root
2017-08-11 21:38:29,063:1(0x7f4f78d0d700):ZOO_INFO#log_env#767: Client environment:user.dir=/
2017-08-11 21:38:29,063:1(0x7f4f78d0d700):ZOO_INFO#zookeeper_init#800: Initiating client connection, host=localhost:2181 sessionTimeout=10000 watcher=0x7f4f82265e50 sessionId=0 sessionPasswd=<null> context=0x7f4f5c000930 flags=0
2017-08-11 21:38:29,064:1(0x7f4f74ccb700):ZOO_INFO#check_events#1728: initiated connection to server [127.0.0.1:2181]
2017-08-11 21:38:29,067:1(0x7f4f74ccb700):ZOO_INFO#check_events#1775: session establishment complete on server [127.0.0.1:2181], sessionId=0x15dc8b48c6d0155, negotiated timeout=10000
Failed to perform recovery: Failed to run 'docker -H unix:///var/run/docker.sock ps -a': exited with status 1; stderr='Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.22)
'
To remedy this do as follows:
Step 1: rm -f /var/lib/mesos/slave-1/meta/slaves/latest
This ensures agent doesn't recover old live executors.
The command below returns the same version for the Docker client API and the Docker server API, so I am not sure what is wrong with the setup.
docker -H unix:///var/run/docker.sock version
Client:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:18:46 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:18:46 2016
OS/Arch: linux/amd64

The Mesos slave was using Docker client API version 1.24, while the host daemon only supports API version 1.22.
It is working after setting the following environment variable for the mesos-slave container:
DOCKER_API_VERSION=1.22
The mapping between Docker release versions and API versions is listed here:
https://docs.docker.com/engine/api/v1.26/#section/Versioning
The other option is to upgrade the Docker version on the host.
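In compose terms, the fix is a one-line addition to the slave service's environment (a sketch based on the service definition in the question; the value must match whatever API version `docker version` reports for your host daemon):

```yaml
  mesos-slave1:
    environment:
      # Pin the client API version spoken to the host's Docker daemon
      DOCKER_API_VERSION: "1.22"
```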

Docker port has unwanted port declaration

I am using the latest Docker version; here is the output of docker version:
docker version
Client:
 Cloud integration: 1.0.14
 Version:           20.10.6
 API version:       1.41
 Go version:        go1.16.3
 Git commit:        370c289
 Built:             Fri Apr 9 22:46:57 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8728dd2
  Built:            Fri Apr 9 22:44:56 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
I ran a simple Python Flask image as described in https://docs.docker.com/language/python/build-images/:
docker run --publish 5000:5000 python-docker-test
My container is up and running without any problem. However, I observed an additional port declaration, as below:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8e8188fe2db3 python-docker-test "python3 -m flask ru…" 4 seconds ago Up 3 seconds 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp test_docker_python-docker-test_1
Or more specifically:
PORTS
0.0.0.0:5000->5000/tcp, :::5000->5000/tcp
Output of docker port command
~$ docker port test_docker_python-docker-test_1 5000
0.0.0.0:5000
:::5000
The question is: why do we have :::5000, or generally :::<port_num>? Can we avoid this?
The problem I have is that my bash script that fetches the output of docker port needs to be modified a bit. It's not a big deal; I am just curious whether there was some update in Docker 20.10.3.
Thanks,
Alex
0.0.0.0 is the wildcard address in IPv4.
:: is the wildcard address in IPv6.
Docker binds both so that it can receive requests on both IPv4 and IPv6 network interfaces.
To bind the port only on the IPv4 interface, you have to specify the address explicitly:
docker run --publish 0.0.0.0:5000:5000 python-docker-test
Docker doc about networking
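If the bash script only needs the IPv4 binding, another option is to filter the docker port output instead of changing how the port is published. A minimal sketch, using the sample output from the question in place of a live docker call:

```shell
# Sample `docker port <container> 5000` output from the question; in a
# real script this would be: output=$(docker port "$container" 5000)
output='0.0.0.0:5000
:::5000'

# Keep only the IPv4 line; the IPv6 wildcard entries start with "::".
ipv4=$(printf '%s\n' "$output" | grep -v '^::')
echo "$ipv4"   # prints 0.0.0.0:5000
```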

Using Docker Buildkit on Google Cloud Build

I'm trying to use BuildKit with Docker on Google Cloud Build so that I can eventually use the --secret flag. I'm using Build Enhancements for Docker as a reference.
It works on my laptop when I use the following command: DOCKER_BUILDKIT=1 docker build -t hello-world:latest .
When I run it on Cloud Build, I get the error "docker.io/docker/dockerfile:experimental not found".
Any idea how to get this to work on Cloud Build?
Here is the setup (note: I'm not using the --secret flag yet):
Dockerfile:
#syntax=docker/dockerfile:experimental
FROM node:10.15.3-alpine
RUN mkdir -p /usr/src/app && \
    apk add --no-cache tini
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production
COPY . .
RUN chown -R node:node .
USER node
EXPOSE 8080
ENTRYPOINT ["/sbin/tini", "--"]
CMD [ "node", "index.js" ]
cloudbuild.yaml:
steps:
- id: 'Build'
  name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-t', 'gcr.io/$PROJECT_ID/hello-world:latest',
    '.'
  ]
  env:
  - "DOCKER_BUILDKIT=1"
Cloud Build Log:
starting build "xxxx"
FETCHSOURCE
Fetching storage object: gs://xxxxx
Copying gs://xxxxx...
/ [0 files][ 0.0 B/ 15.3 KiB]
/ [1 files][ 15.3 KiB/ 15.3 KiB]
Operation completed over 1 objects/15.3 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
#2 [internal] load .dockerignore
#2 digest: sha256:3ce0de94c925587ad30afb764af9bef89edeb62eb891b99694aedb086ee53f50
#2 name: "[internal] load .dockerignore"
#2 started: 2019-07-24 03:21:49.153855989 +0000 UTC
#2 completed: 2019-07-24 03:21:49.195969197 +0000 UTC
#2 duration: 42.113208ms
#2 transferring context: 230B done
#1 [internal] load build definition from Dockerfile
#1 digest: sha256:82b0dcd17330313705522448d60a78d4565304d55c86f55b903b18877d612601
#1 name: "[internal] load build definition from Dockerfile"
#1 started: 2019-07-24 03:21:49.150042849 +0000 UTC
#1 completed: 2019-07-24 03:21:49.189628322 +0000 UTC
#1 duration: 39.585473ms
#1 transferring dockerfile: 445B done
#3 resolve image config for docker.io/docker/dockerfile:experimental
#3 digest: sha256:401713457b113a88eb75a6554117f00c1e53f1a15beec44e932157069ae9a9a3
#3 name: "resolve image config for docker.io/docker/dockerfile:experimental"
#3 started: 2019-07-24 03:21:49.210803849 +0000 UTC
#3 completed: 2019-07-24 03:21:49.361743084 +0000 UTC
#3 duration: 150.939235ms
#3 error: "docker.io/docker/dockerfile:experimental not found"
docker.io/docker/dockerfile:experimental not found
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
Laptop Docker version:
Client: Docker Engine - Community
Version: 18.09.2
API version: 1.39
Go version: go1.10.8
Git commit: 6247962
Built: Sun Feb 10 04:12:39 2019
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.2
API version: 1.39 (minimum version 1.12)
Go version: go1.10.6
Git commit: 6247962
Built: Sun Feb 10 04:13:06 2019
OS/Arch: linux/amd64
Experimental: false
Cloud Build Docker Version:
Step #0 - "Version": Client:
Step #0 - "Version": Version: 18.09.7
Step #0 - "Version": API version: 1.39
Step #0 - "Version": Go version: go1.10.8
Step #0 - "Version": Git commit: 2d0083d
Step #0 - "Version": Built: Thu Jun 27 17:56:17 2019
Step #0 - "Version": OS/Arch: linux/amd64
Step #0 - "Version": Experimental: false
Step #0 - "Version":
Step #0 - "Version": Server: Docker Engine - Community
Step #0 - "Version": Engine:
Step #0 - "Version": Version: 18.09.3
Step #0 - "Version": API version: 1.39 (minimum version 1.12)
Step #0 - "Version": Go version: go1.10.8
Step #0 - "Version": Git commit: 774a1f4
Step #0 - "Version": Built: Thu Feb 28 05:59:55 2019
Step #0 - "Version": OS/Arch: linux/amd64
Step #0 - "Version": Experimental: false
Update: I noticed that I was using #syntax=docker/dockerfile:experimental whereas the linked article has #syntax=docker/dockerfile:1.0-experimental. I get the same error when using 1.0-experimental.
There seems to be an issue when the "registry-mirrors" option is used in combination with BuildKit: the BuildKit frontend images fail to fetch:
https://github.com/moby/moby/issues/39120
Pulling them before doing the build seems to resolve the issue:
- name: 'gcr.io/cloud-builders/docker'
  args: ['pull', 'docker/dockerfile:experimental']
- name: 'gcr.io/cloud-builders/docker'
  args: ['pull', 'docker/dockerfile:1.0-experimental']
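Combined with the build step from the question, the workaround looks roughly like this (a sketch; it assumes the same cloudbuild.yaml layout as above):

```yaml
steps:
# Pre-pull the BuildKit frontend so the build step does not have to
# resolve it through the registry mirror.
- name: 'gcr.io/cloud-builders/docker'
  args: ['pull', 'docker/dockerfile:experimental']
- id: 'Build'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/hello-world:latest', '.']
  env:
  - 'DOCKER_BUILDKIT=1'
```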
I had a similar issue and managed to figure it out. It's not really possible to use Docker BuildKit with gcr.io/cloud-builders/docker; instead, you have to run a Docker-in-Docker daemon and run another docker build on the side with docker-compose.
Specifically, you'll need a docker-compose.yml that has:
a docker service (the Docker-in-Docker daemon)
a docker-build step that builds the image (with BuildKit enabled)
a docker auth-and-push step that authorizes Docker to push to GCR (you need to create creds.json with a service account that has GCS permission; see the bottom for details)
In order to auth and push to GCR, one needs to do docker login with creds.json. See details: https://cloud.google.com/container-registry/docs/advanced-authentication
# deploy/app/docker-compose.yml
version: '3.7'
services:
  docker:
    # docker-in-docker daemon
    image: "docker:18.09.9-dind"
    privileged: true
    volumes:
      - docker-certs-client:/certs/client
      - docker-certs-ca:/certs/ca
    expose:
      - 2376
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    networks:
      - docker-in-docker-network
  docker-build:
    image: docker:18.09.9
    working_dir: /project
    command: build -t 'gcr.io/$PROJECT_ID/<image>:<tag>' .
    privileged: true
    depends_on:
      - docker
    volumes:
      - docker-certs-client:/certs/client:ro
      - ../../:/project
    environment:
      # point the client at the docker-in-docker daemon over TLS
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_TLS_VERIFY=1
      - DOCKER_BUILDKIT=1
    networks:
      - docker-in-docker-network
  docker-push:
    image: docker:18.09.9
    working_dir: /project
    entrypoint: /bin/sh -c
    command:
      - |
        cat creds.json | docker login -u _json_key --password-stdin https://gcr.io
        docker push 'gcr.io/$PROJECT_ID/<image>:<tag>'
    privileged: true
    depends_on:
      - docker
    volumes:
      - docker-certs-client:/certs/client:ro
      - ../../:/project
    environment:
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_TLS_VERIFY=1
    networks:
      - docker-in-docker-network
volumes:
  docker-certs-ca:
  docker-certs-client:
networks:
  docker-in-docker-network:
Then in your cloud-build.yaml:
First decrypt creds.json (it must be created and encrypted beforehand); for details: https://cloud.google.com/docs/authentication/getting-started
(The push step will use the key to authorize docker login to GCR.)
Run the Docker daemon from docker-compose in detached mode (so it doesn't block the build and push steps).
Run the build step with docker-compose.
Run the auth-and-push step with docker-compose.
# cloud-build.yaml
steps:
# decrypt gcloud json secret
- name: gcr.io/cloud-builders/gcloud
  args:
  - kms
  - decrypt
  - --ciphertext-file=deploy/app/creds.json.enc
  - --plaintext-file=creds.json
  - --location=global
  - --keyring=<...>
  - --key=<...>
# run docker daemon
- name: 'docker/compose:1.24.1'
  args: ['-f', 'deploy/app/docker-in-docker-compose.yml', 'up', '-d', 'docker']
  env:
  - 'PROJECT_ID=$PROJECT_ID'
# build image
- name: 'docker/compose:1.24.1'
  args: ['-f', 'deploy/app/docker-in-docker-compose.yml', 'up', 'docker-build']
  env:
  - 'PROJECT_ID=$PROJECT_ID'
# docker auth and push to gcr
- name: 'docker/compose:1.24.1'
  args: ['-f', 'deploy/app/docker-in-docker-compose.yml', 'up', 'docker-push']
  env:
  - 'PROJECT_ID=$PROJECT_ID'
timeout: 600s

RabbitMQ Docker Container Error: Error when reading /var/lib/rabbitmq/.erlang.cookie: eacces

I am having a problem with running RabbitMQ from within Docker on Windows Server 1709 (Windows Server Core edition).
I am using docker-compose to create the rabbitmq service. If I run the docker-compose file on my local computer, everything works fine. When I run it on the Windows server (where Docker has been set up with LCOW support), I get the above-mentioned error occurring multiple times in the logs. Namely, the error is:
Error when reading /var/lib/rabbitmq/.erlang.cookie: eacces
It is worth noting that I receive this error even if I just do a manual pull of rabbitmq and a manual run with docker run -itd --rm --name rabbitmq rabbitmq:3-management
I am able to bash into the container for a short while before it crashes and exits, and I see the following:
root@localhost:~# ls -la
---------- 2 root root 20 Jan 5 12:18 .erlang.cookie
On my local host, the permissions look like this (which is correct):
root@localhost:~# ls -la
-r-------- 1 rabbitmq rabbitmq 20 Dec 28 00:00 .erlang.cookie
I can't understand why the permission structure is broken on the server.
Is it possible that this is an issue with LCOW support on Windows Server 1709 with Docker for Windows? Or is the problem with rabbitmq?
For reference, here is the docker-compose file used:
version: "3.3"
services:
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    hostname: localhost
    ports:
      - "1001:5672"
      - "1002:15672"
    environment:
      - "RABBITMQ_DEFAULT_USER=user"
      - "RABBITMQ_DEFAULT_PASS=password"
    volumes:
      - d:/docker_data/rabbitmq:/var/lib/rabbitmq/mnesia
    restart: always
For reference, here is the Docker information for the server where the error is happening.
docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 17.10.0-ee-preview-3
Storage Driver: windowsfilter (windows) lcow (linux)
LCOW:
Logging Driver: json-file
Plugins:
Volume: local
Network: ics l2bridge l2tunnel nat null overlay transparent
Log: awslogs etwlogs fluentd json-file logentries splunk syslog
Swarm: inactive
Default Isolation: process
Kernel Version: 10.0 16299 (16299.15.amd64fre.rs3_release.170928-1534)
Operating System: Windows Server Datacenter
OSType: windows
Architecture: x86_64
CPUs: 4
Total Memory: 7.905GiB
Name: ServerName
Docker Root Dir: D:\docker-root
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
docker version
Client:
Version: 17.10.0-ee-preview-3
API version: 1.33
Go version: go1.8.4
Git commit: 1649af8
Built: Fri Oct 6 17:52:28 2017
OS/Arch: windows/amd64
Server:
Version: 17.10.0-ee-preview-3
API version: 1.34 (minimum version 1.24)
Go version: go1.8.4
Git commit: b8571fd
Built: Fri Oct 6 18:01:48 2017
OS/Arch: windows/amd64
Experimental: true
I struggled with the same problem when running RabbitMQ inside an AWS ECS container.
Disclaimer: I didn't check this behavior in detail and this is only my assumption, so the stated cause may be wrong, but at least the solution works.
It feels like RabbitMQ creates the .erlang.cookie file on container start if it doesn't exist. And if the in-container user is root:
...
  rabbitmq:
    image: rabbitmq:3-management
    # set the container user to root
    user: "0:0"
...
then .erlang.cookie will be created with root permissions. But RabbitMQ starts its child processes with rabbitmq user permissions, and in this case .erlang.cookie is not writable (editable) by them.
To avoid this problem, I created a custom image with a pre-existing .erlang.cookie file using this Dockerfile:
ARG COOKIE_VALUE=SomeDefaultRandomString01
FROM rabbitmq:3.11-alpine
ARG COOKIE_VALUE
RUN printf 'log.console = true\nlog.console.level = warning\nlog.default.level = warning\nlog.connection.level = warning\nlog.channel.level = warning\nlog.file.level = warning\n' > /etc/rabbitmq/conf.d/10-logs_to_stdout.conf && \
    printf 'loopback_users.guest = false\n' > /etc/rabbitmq/conf.d/20-allow_remote_guest_users.conf && \
    printf 'management_agent.disable_metrics_collector = true' > /etc/rabbitmq/conf.d/30-disable_metrics_data.conf && \
    chown rabbitmq:rabbitmq /etc/rabbitmq/conf.d/* && mkdir -p /var/lib/rabbitmq/ && \
    echo "$COOKIE_VALUE" > /var/lib/rabbitmq/.erlang.cookie && chmod 400 /var/lib/rabbitmq/.erlang.cookie && \
    chown -R rabbitmq:rabbitmq /var/lib/rabbitmq
where the .erlang.cookie value may be any random string, but it must be the same for all nodes in the RabbitMQ cluster (extra information here).
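The cookie can be any random string, as long as every node in the cluster gets the same one. A minimal sketch for generating one and baking it into the image (the tag my-rabbitmq:3.11 is my own choice; the docker build line is shown as a comment since it needs a running Docker daemon):

```shell
# Generate a random alphanumeric cookie value.
COOKIE=$(head -c 24 /dev/urandom | base64 | tr -d '+/=')
echo "$COOKIE"

# Bake it into the image built from the Dockerfile above:
#   docker build --build-arg COOKIE_VALUE="$COOKIE" -t my-rabbitmq:3.11 .
```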

docker stack communicate between containers

I'm trying to set up a swarm using Docker, but I'm having issues with communication between containers.
I have a cluster with 5 nodes: 1 manager and 4 workers.
3 apps: redis, splash, myapp
myapp has to be on the 4 workers
redis, splash just on the manager
myapp has to be able to communicate with redis and splash
I tried using the container name, but it's not working. It resolves the container name to a different IP:
ping splash # returns a different IP than the container actually has
I am deploying the swarm using docker stack:
docker stack deploy -c docker-stack.yml myapp
Linking the containers also doesn't work.
Any ideas? Am I missing something?
root#swarm-manager:~# docker version
Client:
Version: 17.09.0-ce
API version: 1.32
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:42:18 2017
OS/Arch: linux/amd64
Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:40:56 2017
OS/Arch: linux/amd64
Experimental: false
docker-stack.yml contains:
version: "3"
services:
  splash:
    container_name: splash
    image: scrapinghub/splash
    ports:
      - 8050:8050
      - 5023:5023
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager
  redis:
    container_name: redis
    image: redis
    ports:
      - 6379:6379
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager
  myapp:
    container_name: myapp
    image: myapp_image:latest
    environment:
      REDIS_ENDPOINT: redis:6379
      SPLASH_ENDPOINT: splash:8050
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == worker
    entrypoint:
      - ping google.com
---- EDIT ----
I tried with curl also. It didn't work.
docker stack deploy -c docker-stack.yml myapp
Creating network myapp_default
Creating service myapp_splash
Creating service myapp_redis
Creating service myapp_myapp
curl http://myapp_splash:8050
curl: (7) Failed to connect to myapp_splash port 8050: No route to host
curl http://splash:8050
curl: (7) Failed to connect to splash port 8050: No route to host
What worked is using the actual container name of splash, which is some randomly generated string:
curl http://myapp_splash.d7bn0dpei9ijpba4q41vpl4zz.tuk1cimht99at9g0au8vj9lkz:8050
But this doesn't really help me.
Ping is not the proper tool for trying to reach services; for some reason it doesn't work with Docker's overlay networks. Try curl http://serviceName instead.
Other than that: containers can't be named when using stack deploy; instead, your service name (which coincidentally is the same here) is used to access another service.
I managed to get it working using curl http://tasks.splash:8050 or http://tasks.myapp_splash:8050.
I don't know what is causing this issue, though. Feel free to comment with an answer.
It seems that containers in a stack are named tasks.<service name>, so the command ping tasks.myservice works for me!
An interesting point to note: names like <stackname>_<service name> will also resolve and be ping-able, but the IP address is incorrect. This is frustrating.
(For example, if you do docker stack deploy -c my.yml AA, you'll get a name like AA_myservice which resolves to an incorrect address.)
To add to the above answer: from a network point of view, curl and ping do the same name resolution. Both will resolve the name passed to them; then curl will try to connect using the specified protocol (HTTP in the example above), while ping will send ICMP echo requests.
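Applied to the stack file from the question, using the tasks.* names means changing only the endpoint variables (a sketch; tasks.<service> resolves to the individual task IPs rather than the service VIP):

```yaml
  myapp:
    image: myapp_image:latest
    environment:
      REDIS_ENDPOINT: tasks.redis:6379
      SPLASH_ENDPOINT: tasks.splash:8050
```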

docker-proxy - Error starting userland proxy while trying to bind on 443

I'm trying to install Discourse with Docker on Ubuntu 16.04 LTS, with Apache listening on ports 80 and 443.
When I try to launch the app I get the following error:
starting up existing container
+ /usr/bin/docker start app Error response from daemon: driver failed programming external connectivity on endpoint app
(dade361e77fbf29f4d9667febe57a06f168f916148e10cc1365093d8f97026bb):
Error starting userland proxy: listen tcp 0.0.0.0:443: listen: address
already in use Error: failed to start containers: app
From what I've found, docker-proxy is the one that is trying to bind to 443.
How can I solve this?
Some details...
docker version
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 22:00:43 2016
OS/Arch: linux/amd64
Server:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 22:00:43 2016
OS/Arch: linux/amd64
docker info
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 4
Server Version: 1.11.2
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 25
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 4.4.0-28-generic
Operating System: Ubuntu 16.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 31.39 GiB
Name: sd-12345
ID: 6OLH:SAG5:VWTW:BL7U:6QYH:4BBS:QHBN:37MY:DLXA:W64E:4EVZ:WBAK
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
perhaps, stop apache? – vitr Jul 22 '16 at 2:56
^^^ This comment from vitr should be the accepted answer:
Docker cannot proxy a service from within a container to a port on the host without first stopping any service that is already using that port.
In this case, Apache must be stopped with a command such as sudo service apache2 stop.
Then docker start app can be run, and Docker should do its thing unhindered.
See the related question: docker run -> name is already in use by container
Alternatively, edit /etc/docker/daemon.json, add the following, and restart the Docker daemon:
{
"userland-proxy": false
}
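Before stopping anything, it can help to confirm what is actually holding the port (a sketch; ss comes from the iproute2 package, and on older systems netstat -tln gives similar output):

```shell
# List TCP listeners on port 443; empty output means the port is free
# and docker-proxy will be able to bind it.
listeners=$(ss -tln 2>/dev/null | grep ':443 ' || true)
echo "${listeners:-port 443 is free}"

# If Apache shows up, stop it before starting the container:
#   sudo service apache2 stop
```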
