I'm using the Docker Quickstart Terminal on Win10.
Client:
Version: 17.06.0-ce,
API version: 1.30
Go version: go1.8.3
Git commit: 02c1d87
Built: Fri Jun 23 21:30:30 2017
OS/Arch: windows/amd64
I have a simple PHP document-upload script that saves uploaded documents to a folder called '/uploads'.
I want to make the '/uploads' folder a volume attached to the php:apache container, so I can easily share its contents with the Python back-end.
I'm using the following docker-compose.yml file to build a web service with a Python back-end.
Note: the PHP script works in the php:apache environment without the volume.
$ docker volume create fileuld --opt o=uid=197609,gid=0
version: '3.2'
services:
  Python_app:
    build: ./product
    volumes:
      - ./product:/usr/src/app
    ports:
      - 5001:80
  website:
    image: php:apache
    volumes:
      - ./website:/var/www/html
      - type: volume
        source: fileuld
        target: /var/www/html/uploads
        read_only: false
    ports:
      - 5000:80
    depends_on:
      - Python_app
volumes:
  fileuld:
I get a permission error from the web service when I try to upload a document to the attached volume fileuld: failed to open stream: Permission denied in /var/www/html/upload.php
I have read some other Stack Overflow posts on this; they talk about uid and gid, and I have tried running:
$ ls -ls
Which gives the following:
4 -rw-r--r-- 1 ASUSNJHOME 197609 343 Sep 23 14:49 docker-compose.yml
32 drwxr-xr-x 1 ASUSNJHOME 197609 0 Sep 22 07:10 product/
0 drwxr-xr-x 1 ASUSNJHOME 197609 0 Sep 23 15:38 volume-example/
0 drwxr-xr-x 1 ASUSNJHOME 197609 0 Sep 22 21:32 website/
But I can't work out how to make the volume accept a document being written into it, or how to change its permissions when it is created from the docker-compose file.
Can anyone point me in the right direction?
Thanks
Michael
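A common workaround for this kind of named-volume permission error (a sketch only, assuming the php:apache image defaults; not verified on Docker Toolbox) is to chown the mount point to Apache's worker user, www-data, before starting the server:

```yaml
website:
  image: php:apache
  volumes:
    - ./website:/var/www/html
    - fileuld:/var/www/html/uploads
  # chown the named volume to Apache's worker user, then start
  # Apache the same way the image's default command does
  command: sh -c "chown -R www-data:www-data /var/www/html/uploads && apache2-foreground"
  ports:
    - 5000:80
```

Named volumes are populated and owned by root on first use, while PHP uploads are written by the www-data worker processes, which is why the write fails.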
I have been trying to bring up a Prometheus container using a docker-compose file. I have looked into the various solutions available online and none of them seems to work. Please go through my prometheus.yaml file and the docker-compose.yml file and let me know what I have configured wrongly.
My prometheus.yaml file is located at /root/prometheus/prometheus.yaml
Note: I'm trying to run Prometheus in agent mode, and I'm running the docker-compose file from /root.
Error generated:
-bash-5.0# docker-compose up
[...]
prometheus | ts=2022-05-12T14:28:25.350Z caller=main.go:447 level=error msg="Error loading config (--config.file=/etc/prometheus/prometheus.yaml)" file=/etc/prometheus/prometheus.yaml err="open /etc/prometheus/prometheus.yaml: no such file or directory"
prometheus exited with code 2
docker-compose.yml:
version: '3'

volumes:
  prometheus_data:

services:
  prometheus:
    image: prom/prometheus:v2.35.0
    container_name: prometheus
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yaml'
      - '--storage.agent.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.enable-lifecycle'
      - '--enable-feature=agent'
    expose:
      - 9090
    ports:
      - "9090:9090"
Update 1: Adding the directory tree structure below
-bash-5.0# pwd
/root
-bash-5.0# tree
.
|-- cadvisor
|-- docker-compose.yml
|-- docker-compose_1.yml
|-- prometheus
| |-- prometheus.yaml
| |-- prometheus.yml
| |-- prometheus_old.yaml
| `-- prometheus_old.yml
|-- prometheus.yaml
`-- prometheus.yml
1 directory, 9 files
-bash-5.0#
Update 2: I did some debugging and found that directories are mounted correctly, whereas files are mounted as directories.
Basically, I made the following changes to the docker-compose.yml file.
version: '3'

services:
  prometheus:
    image: prom/prometheus:v2.35.0
    container_name: prometheus
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus-test/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      #- '--storage.agent.path=/prometheus'
      - '--web.enable-lifecycle'
      #- '--enable-feature=agent'
    expose:
      - 9090
    ports:
      - "9090:9090"

volumes:
  prometheus_data:
In the above compose file I'm mounting my prometheus.yml file to a different location and letting Prometheus start with the default config file present in the image.
Later I logged into the container, checked the mounted files, and this is what I saw:
-bash-5.0# docker container exec -it prometheus sh
/prometheus $ cd /etc
/etc $ ls -ltr
total 68
-rw-r--r-- 1 root root 18774 Feb 10 2019 services
-rw-r--r-- 1 root root 494 Aug 16 2019 nsswitch.conf
-rw-r--r-- 1 root root 118 Mar 22 21:07 localtime
-rw-r--r-- 1 root root 340 Apr 11 21:49 passwd
-rw-rw-r-- 1 root root 306 Apr 11 21:49 group
-rw------- 1 root root 136 Apr 13 00:25 shadow
drwxr-xr-x 6 root root 4096 Apr 13 00:25 network
drwxr-xr-x 3 root root 4096 Apr 15 10:54 ssl
lrwxrwxrwx 1 root root 12 May 13 10:54 mtab -> /proc/mounts
-rw-r--r-- 1 root root 174 May 13 10:54 hosts
-rw-r--r-- 1 root root 13 May 13 10:54 hostname
-rw-r--r-- 1 root root 38 May 13 10:54 resolv.conf
drwxr-xr-x 3 root root 4096 May 13 11:00 prometheus-test
drwxr-xr-x 1 nobody nobody 4096 May 13 11:00 prometheus
/etc $
/etc $ cd prometheus-test/
/etc/prometheus-test $ ls
prometheus.yml
/etc/prometheus-test $ ls -ltr
total 8
drwxr-xr-x 2 root root 4096 May 13 10:54 prometheus.yml
/etc/prometheus-test $
From the above we can observe that the prometheus.yml file is mounted as a directory instead of a file. Can anyone explain why this happens?
OS: Ubuntu 20.04
This is a vm instance running on the ESXI server
-bash-5.0# docker version
Client: Docker Engine - Community
Version: 19.03.10
API version: 1.40
Go version: go1.13.10
Git commit: 9424aeaee9
Built: Thu May 28 22:16:52 2020
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 633a0ea838f10e000b7c6d6eed1623e6e988b5bb
Built: Sat May 9 16:43:52 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.3.2
GitCommit: ff48f57fc83a8c44cf4ad5d672424a98ba37ded6
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit:
-bash-5.0#
-bash-5.0#
-bash-5.0#
-bash-5.0# docker context ls
NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
-bash-5.0#
I think you may be over-complicating things. Given this docker-compose.yaml:
version: '3'

volumes:
  prometheus_data:

services:
  prometheus:
    image: prom/prometheus:v2.35.0
    container_name: prometheus
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.agent.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.enable-lifecycle'
      - '--enable-feature=agent'
And this directory structure:
$ tree .
.
├── docker-compose.yaml
└── prometheus
└── prometheus.yml
It Just Works:
$ docker-compose up
Starting prometheus ... done
Attaching to prometheus
prometheus | ts=2022-05-13T12:12:16.206Z caller=main.go:187 level=info msg="Experimental agent mode enabled."
prometheus | ts=2022-05-13T12:12:16.207Z caller=main.go:525 level=info msg="Starting Prometheus" version="(version=2.35.0, branch=HEAD, revision=6656cd29fe6ac92bab91ecec0fe162ef0f187654)"
prometheus | ts=2022-05-13T12:12:16.207Z caller=main.go:530 level=info build_context="(go=go1.18.1, user=root#cf6852b14d68, date=20220421-09:53:42)"
prometheus | ts=2022-05-13T12:12:16.207Z caller=main.go:531 level=info host_details="(Linux 5.17.5-100.fc34.x86_64 #1 SMP PREEMPT Thu Apr 28 16:02:54 UTC 2022 x86_64 02da15afa8e7 (none))"
prometheus | ts=2022-05-13T12:12:16.207Z caller=main.go:532 level=info fd_limits="(soft=1073741816, hard=1073741816)"
prometheus | ts=2022-05-13T12:12:16.207Z caller=main.go:533 level=info vm_limits="(soft=unlimited, hard=unlimited)"
prometheus | ts=2022-05-13T12:12:16.208Z caller=web.go:541 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus | ts=2022-05-13T12:12:16.208Z caller=main.go:1013 level=info msg="Starting WAL storage ..."
prometheus | ts=2022-05-13T12:12:16.212Z caller=db.go:332 level=info msg="replaying WAL, this may take a while" dir=/prometheus/wal
prometheus | ts=2022-05-13T12:12:16.213Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." http2=false
prometheus | ts=2022-05-13T12:12:16.214Z caller=db.go:383 level=info msg="WAL segment loaded" segment=0 maxSegment=1
prometheus | ts=2022-05-13T12:12:16.214Z caller=db.go:383 level=info msg="WAL segment loaded" segment=1 maxSegment=1
prometheus | ts=2022-05-13T12:12:16.215Z caller=main.go:1034 level=info fs_type=XFS_SUPER_MAGIC
prometheus | ts=2022-05-13T12:12:16.215Z caller=main.go:1037 level=info msg="Agent WAL storage started"
prometheus | ts=2022-05-13T12:12:16.215Z caller=main.go:1162 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | ts=2022-05-13T12:12:16.216Z caller=dedupe.go:112 component=remote level=info remote_name=b4f547 url=http://10.120.23.224:9090/api/v1/write msg="Starting WAL watcher" queue=b4f547
prometheus | ts=2022-05-13T12:12:16.216Z caller=dedupe.go:112 component=remote level=info remote_name=b4f547 url=http://10.120.23.224:9090/api/v1/write msg="Starting scraped metadata watcher"
prometheus | ts=2022-05-13T12:12:16.216Z caller=main.go:1199 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=809.925µs db_storage=275ns remote_storage=282.833µs web_handler=394ns query_engine=377ns scrape=211.707µs scrape_sd=43.033µs notify=609ns notify_sd=911ns rules=143ns tracing=1.98µs
prometheus | ts=2022-05-13T12:12:16.216Z caller=main.go:930 level=info msg="Server is ready to receive web requests."
prometheus | ts=2022-05-13T12:12:16.216Z caller=dedupe.go:112 component=remote level=info remote_name=b4f547 url=http://10.120.23.224:9090/api/v1/write msg="Replaying WAL" queue=b4f547
prometheus | ts=2022-05-13T12:12:22.558Z caller=dedupe.go:112 component=remote level=info remote_name=b4f547 url=http://10.120.23.224:9090/api/v1/write msg="Done replaying WAL" duration=6.341973747s
If this fails, the first diagnostic step is probably to remove the command section from docker-compose.yaml and add an entrypoint like this:
version: '3'

volumes:
  prometheus_data:

services:
  prometheus:
    image: prom/prometheus:v2.35.0
    container_name: prometheus
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus_data:/prometheus
    entrypoint:
      - sleep
      - inf
This will come up and just run sleep, allowing you to docker-compose exec into the container and explore the filesystem. If you find that /etc/prometheus is empty, that suggests that you're not running docker-compose on the same system as the docker daemon (so when it attempts to reference a host path, like ./prometheus, it doesn't find it).
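A related diagnostic trick (a sketch; exact behavior depends on the compose version): mounting the config file with the long volume syntax makes a missing host file fail loudly at startup, instead of Docker silently creating a directory in its place the way the short ./path:/path form can:

```yaml
services:
  prometheus:
    image: prom/prometheus:v2.35.0
    volumes:
      # long syntax: errors out if ./prometheus/prometheus.yaml does not
      # exist on the host, rather than auto-creating a directory
      - type: bind
        source: ./prometheus/prometheus.yaml
        target: /etc/prometheus/prometheus.yaml
```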
I tried to deploy Keycloak and its database via Docker (docker-compose).
It retries 10 times, then fails the deployment. The same docker-compose.yml file worked for me in the past, and I haven't done any OS or container updates since.
The following error and warning are thrown:
keycloak | 09:48:42,070 ERROR [org.jgroups.protocols.TCP] (ServerService Thread Pool -- 60) JGRP000034: cff2ce8f5cdf: failure sending message to e832b25e9785: java.net.SocketTimeoutException: connect timed out
keycloak | 09:48:45,378 WARN [org.jgroups.protocols.pbcast.GMS] (ServerService Thread Pool -- 60) cff2ce8f5cdf: JOIN(cff2ce8f5cdf) sent to 05bdb7a4a7f5 timed out (after 3000 ms), on try 0
My docker-compose.yml looks like this:
keycloak:
  container_name: keycloak
  image: jboss/keycloak:11.0.2
  ports:
    - 8081:8080
  environment:
    - DB_VENDOR=mariadb
    - DB_ADDR=authenticationDB
    - DB_DATABASE=keycloak
    - DB_USER=keycloak
    - DB_PASSWORD=password
    - KEYCLOAK_USER=admin
    - KEYCLOAK_PASSWORD=admin
    - JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING
    - JGROUPS_DISCOVERY_PROPERTIES=datasource_jndi_name=java:jboss/datasources/KeycloakDS,info_writer_sleep_time=500
  depends_on:
    - authenticationDB

authenticationDB:
  container_name: authenticationDB
  image: mariadb
  volumes:
    - ./keycloakDB:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: keycloak
    MYSQL_USER: keycloak
    MYSQL_PASSWORD: password
  healthcheck:
    test: ["CMD", "mysqladmin", "ping", "--silent"]
I've tried the following:
SSHing into Keycloak's container and running curl authenticationDB:3306. I got a 'no permission' error, so the containers can talk to each other.
Checking whether the database is running inside the DB container: yes, it's running.
I am running out of ideas.
Previously it retried 10 times and then successfully deployed Keycloak.
Thanks in advance,
Rosario
I would say that docker image jboss/keycloak:11.0.2 doesn't support JDBC_PING:
$ docker run --rm --entrypoint bash -ti jboss/keycloak:11.0.2 \
-c 'ls -lah /opt/jboss/tools/cli/jgroups/discovery/'
total 4.0K
drwxrwxr-x. 1 jboss root 25 Sep 15 09:01 .
drwxrwxr-x. 1 jboss root 23 Sep 15 09:01 ..
-rw-rw-r--. 1 jboss root 611 Sep 15 09:01 default.cli
vs
$ docker run --rm --entrypoint bash -ti jboss/keycloak:12.0.2 \
-c 'ls -lah /opt/jboss/tools/cli/jgroups/discovery/'
total 8.0K
drwxrwxr-x. 1 jboss root 46 Jan 19 07:27 .
drwxrwxr-x. 1 jboss root 23 Jan 19 07:27 ..
-rw-rw-r--. 1 jboss root 611 Jan 19 07:27 default.cli
-rw-rw-r--. 1 jboss root 605 Jan 19 07:27 JDBC_PING.cli
Try a newer version.
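For example (an untested sketch), bumping only the image tag in the compose file should bring in the missing JDBC_PING.cli:

```yaml
keycloak:
  container_name: keycloak
  # 12.0.2 ships JDBC_PING.cli; 11.0.2 does not
  image: jboss/keycloak:12.0.2
```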
Code
I'm trying to run a redis service defined inside a docker-compose.yml as follows:
version: '3'
services:
  redis:
    image: "redis:5-alpine"
    volumes:
      - ./redis-vol:/home/data
  app:
    build: .
    ports:
      - 8080:8080
    volumes:
      - .:/home/app/
This is the Dockerfile:
FROM python:2.7-alpine3.8
WORKDIR /home/app
COPY ./requirements.txt .
RUN apk add python2-dev build-base linux-headers pcre-dev && \
pip install -r requirements.txt
# Source files
COPY ./api.py .
COPY ./conf.ini .
CMD ["uwsgi", "--ini", "conf.ini"]
The app consists of this snippet, serving a uWSGI application on port 8080:
import uwsgi
import redis

r = redis.Redis('redis')

def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    r.append('hello', 'world!')
    r.save()
    return [b"Hello World"]
And this is the conf.ini file:
[uwsgi]
http = :8080
wsgi-file = api.py
master = true
process = 2
enable-threads = true
uid = 1001
gid = 1001
The app service is supposed to save a key:value pair through redis every time it receives a request to http://localhost:8080.
Upon a successful request, the docker-compose process returns the following log:
redis_1_bdf757fbb2bf | 1:M 26 Nov 2018 15:38:20.399 * DB saved on disk
app_1_5f729e6bcd36 | [pid: 17|app: 0|req: 1/1] 172.21.0.1 () {38 vars in 690 bytes} [Mon Nov 26 15:38:20 2018] GET / => generated 11 bytes in 8 msecs (HTTP/1.1 200) 1 headers in 44 bytes (1 switches on core 0)
redis_1_bdf757fbb2bf | 1:M 26 Nov 2018 15:38:20.998 * DB saved on disk
app_1_5f729e6bcd36 | [pid: 17|app: 0|req: 2/2] 172.21.0.1 () {40 vars in 691 bytes} [Mon Nov 26 15:38:20 2018] GET /favicon.ico => generated 11 bytes in 4 msecs (HTTP/1.1 200) 1 headers in 44 bytes (1 switches on core 0)
Problem
Despite the DB saved on disk log, the redis-vol folder is empty, and the dump.rdb file doesn't seem to be saved anywhere else.
What am I doing wrong? I've also tried to use redis:alpine as the image, but I get the following error at startup:
redis_1_bdf757fbb2bf | 1:M 26 Nov 14:57:27.003 # Can't handle RDB format version 9
redis_1_bdf757fbb2bf | 1:M 26 Nov 14:57:27.003 # Fatal error loading the DB: Invalid argument. Exiting.
And I've also tried to map the dump.rdb in the redis service as follows:
redis:
  image: "redis:5-alpine"
  volumes:
    - ./redis-vol/dump.rdb:/home/data/dump.rdb
but docker creates a folder named dump.rdb/ instead of a readable file.
If you still face the problem even after changing the volume mapping to
<your-volume>:/data
make sure to delete the previous container with
docker container prune
before restarting your container.
According to the redis documentation on the DockerHub page:
If persistence is enabled, data is stored in the VOLUME /data
So you are using the wrong volume path. You should use /data instead:
volumes:
  - ./redis-vol:/data
To be able to mount a single file into your container with docker-compose, use the absolute path of the file you want to mount from your filesystem:
redis:
  image: "redis:5-alpine"
  volumes:
    - /Users/username/dirname/redis-vol/dump.rdb:/home/data/dump.rdb
As codestation correctly pointed out, this is in the official documentation, but he suggests a bind mount instead of a volume, which has some additional cons.
In both cases the documentation also includes a statement about "persistence enabled":
$ docker run --name some-redis -d redis redis-server --appendonly yes
or in your docker-compose file:
redis:
  image: "redis:alpine"
  command: redis-server --appendonly yes
  volumes:
    - your-volume:/data
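Putting the two suggestions together, a corrected redis service for the original question might look like this (a sketch based on the answers above):

```yaml
services:
  redis:
    image: "redis:5-alpine"
    # enable AOF persistence; files land in /data, the path the
    # official image declares as its VOLUME
    command: redis-server --appendonly yes
    volumes:
      - ./redis-vol:/data
```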
I am having a problem running RabbitMQ from within Docker on Windows Server 1709 (Windows Server Core edition).
I am using docker-compose to create the rabbitmq service. If I run the docker-compose file on my local computer, everything works fine. When I run it on the Windows server (where Docker has been set to LCOW support), the error below occurs multiple times in the logs:
Error when reading /var/lib/rabbitmq/.erlang.cookie: eacces
It is worth noting that I receive this error even if I just do a manual pull of rabbitmq and a manual run with docker run -itd --rm --name rabbitmq rabbitmq:3-management
I am able to bash into the container for a short while before it crashes and exits, and I see the following:
root@localhost:~# ls -la
---------- 2 root root 20 Jan 5 12:18 .erlang.cookie
On my localhost, the permissions look like this (which is correct):
root@localhost:~# ls -la
-r-------- 1 rabbitmq rabbitmq 20 Dec 28 00:00 .erlang.cookie
I can't understand why the permission structure is broken on the server.
Is it possible that this is an issue with LCOW support on Windows Server 1709 with Docker for Windows? Or is the problem with rabbitmq?
For reference here is the docker compose file used:
version: "3.3"
services:
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    hostname: localhost
    ports:
      - "1001:5672"
      - "1002:15672"
    environment:
      - "RABBITMQ_DEFAULT_USER=user"
      - "RABBITMQ_DEFAULT_PASS=password"
    volumes:
      - d:/docker_data/rabbitmq:/var/lib/rabbitmq/mnesia
    restart: always
For reference here is the docker information where there error is happening.
docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 17.10.0-ee-preview-3
Storage Driver: windowsfilter (windows) lcow (linux)
LCOW:
Logging Driver: json-file
Plugins:
Volume: local
Network: ics l2bridge l2tunnel nat null overlay transparent
Log: awslogs etwlogs fluentd json-file logentries splunk syslog
Swarm: inactive
Default Isolation: process
Kernel Version: 10.0 16299 (16299.15.amd64fre.rs3_release.170928-1534)
Operating System: Windows Server Datacenter
OSType: windows
Architecture: x86_64
CPUs: 4
Total Memory: 7.905GiB
Name: ServerName
Docker Root Dir: D:\docker-root
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
docker version
Client:
Version: 17.10.0-ee-preview-3
API version: 1.33
Go version: go1.8.4
Git commit: 1649af8
Built: Fri Oct 6 17:52:28 2017
OS/Arch: windows/amd64
Server:
Version: 17.10.0-ee-preview-3
API version: 1.34 (minimum version 1.24)
Go version: go1.8.4
Git commit: b8571fd
Built: Fri Oct 6 18:01:48 2017
OS/Arch: windows/amd64
Experimental: true
I struggled with the same problem when running RabbitMQ inside an AWS ECS container.
Disclaimer: I didn't check this behavior in detail and this is only my assumption, so the stated cause may be wrong, but at least the solution works.
It seems RabbitMQ creates the .erlang.cookie file on container start if it doesn't exist. And if the in-container user is root:
...
rabbitmq:
  image: rabbitmq:3-management
  # set container user to root
  user: "0:0"
...
then .erlang.cookie will be created with root permissions. But RabbitMQ starts its child processes with rabbitmq user permissions, and in that case .erlang.cookie is not writable (editable) by them.
To avoid this problem, I created a custom image with the .erlang.cookie file already in place, using this Dockerfile:
ARG COOKIE_VALUE=SomeDefaultRandomString01
FROM rabbitmq:3.11-alpine
ARG COOKIE_VALUE=$COOKIE_VALUE
RUN printf 'log.console = true\nlog.console.level = warning\nlog.default.level = warning\nlog.connection.level = warning\nlog.channel.level = warning\nlog.file.level = warning\n' > /etc/rabbitmq/conf.d/10-logs_to_stdout.conf && \
printf 'loopback_users.guest = false\n' > /etc/rabbitmq/conf.d/20-allow_remote_guest_users.conf && \
printf 'management_agent.disable_metrics_collector = true' > /etc/rabbitmq/conf.d/30-disable_metrics_data.conf && \
chown rabbitmq:rabbitmq /etc/rabbitmq/conf.d/* && mkdir -p /var/lib/rabbitmq/ && \
echo "$COOKIE_VALUE" > /var/lib/rabbitmq/.erlang.cookie && chmod 400 /var/lib/rabbitmq/.erlang.cookie && \
chown -R rabbitmq:rabbitmq /var/lib/rabbitmq
where the .erlang.cookie value may be any random string, but it should be the same for all nodes in a RabbitMQ cluster (extra information here).
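The cookie can then be supplied at build time, for example from a compose file (a sketch; the build arg name matches the Dockerfile above):

```yaml
services:
  rabbitmq:
    build:
      context: .
      args:
        # must be identical on every node of the cluster
        COOKIE_VALUE: "SomeDefaultRandomString01"
    ports:
      - "5672:5672"
      - "15672:15672"
```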
I am using Mesos version 1.0.3. I installed Mesos via:
docker pull mesosphere/mesos-master:1.0.3
docker pull mesosphere/mesos-slave:1.0.3
I'm using docker-compose to start mesos-master and mesos-slave. The docker-compose file:
services:
  #
  # Zookeeper must be provided externally
  #

  #
  # Mesos
  #
  mesos-master:
    image: mesosphere/mesos-master:1.0.3
    restart: always
    privileged: true
    network_mode: host
    volumes:
      - ~/mesos-data/master:/tmp/mesos
    environment:
      MESOS_CLUSTER: "mesos-cluster"
      MESOS_QUORUM: "1"
      MESOS_ZK: "zk://localhost:2181/mesos"
      MESOS_PORT: 5000
      MESOS_REGISTRY_FETCH_TIMEOUT: "2mins"
      MESOS_EXECUTOR_REGISTRATION_TIMEOUT: "2mins"
      MESOS_LOGGING_LEVEL: INFO
      MESOS_INITIALIZE_DRIVER_LOGGING: "false"

  mesos-slave1:
    image: mesosphere/mesos-slave:1.0.3
    depends_on: [ mesos-master ]
    restart: always
    privileged: true
    network_mode: host
    volumes:
      - ~/mesos-data/slave-1:/tmp/mesos
      - /sys/fs/cgroup:/sys/fs/cgroup
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      MESOS_CONTAINERIZERS: docker
      MESOS_MASTER: "zk://localhost:2181/mesos"
      MESOS_PORT: 5051
      MESOS_WORK_DIR: "/var/lib/mesos/slave-1"
      MESOS_LOGGING_LEVEL: WARNING
      MESOS_INITIALIZE_DRIVER_LOGGING: "false"
The Mesos master runs fine without any issues, but the slave does not start, failing with the error below. I'm not sure what else is missing here.
I0811 21:38:28.952507 1 main.cpp:243] Build: 2017-02-13 08:10:42 by ubuntu
I0811 21:38:28.952599 1 main.cpp:244] Version: 1.0.3
I0811 21:38:28.952601 1 main.cpp:247] Git tag: 1.0.3
I0811 21:38:28.952603 1 main.cpp:251] Git SHA: c673fdd00e7f93ab7844965435d57fd691fb4d8d
SELinux: Could not open policy file <= /etc/selinux/targeted/policy/policy.29: No such file or directory
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO#log_env#726: Client environment:zookeeper.version=zookeeper C client 3.4.8
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO#log_env#730: Client environment:host.name=<HOST_NAME>
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO#log_env#737: Client environment:os.name=Linux
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO#log_env#738: Client environment:os.arch=3.8.13-98.7.1.el7uek.x86_64
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO#log_env#739: Client environment:os.version=#2 SMP Wed Nov 25 13:51:41 PST 2015
2017-08-11 21:38:29,063:1(0x7f4f78d0d700):ZOO_INFO#log_env#747: Client environment:user.name=(null)
2017-08-11 21:38:29,063:1(0x7f4f78d0d700):ZOO_INFO#log_env#755: Client environment:user.home=/root
2017-08-11 21:38:29,063:1(0x7f4f78d0d700):ZOO_INFO#log_env#767: Client environment:user.dir=/
2017-08-11 21:38:29,063:1(0x7f4f78d0d700):ZOO_INFO#zookeeper_init#800: Initiating client connection, host=localhost:2181 sessionTimeout=10000 watcher=0x7f4f82265e50 sessionId=0 sessionPasswd=<null> context=0x7f4f5c000930 flags=0
2017-08-11 21:38:29,064:1(0x7f4f74ccb700):ZOO_INFO#check_events#1728: initiated connection to server [127.0.0.1:2181]
2017-08-11 21:38:29,067:1(0x7f4f74ccb700):ZOO_INFO#check_events#1775: session establishment complete on server [127.0.0.1:2181], sessionId=0x15dc8b48c6d0155, negotiated timeout=10000
Failed to perform recovery: Failed to run 'docker -H unix:///var/run/docker.sock ps -a': exited with status 1; stderr='Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.22)
'
To remedy this do as follows:
Step 1: rm -f /var/lib/mesos/slave-1/meta/slaves/latest
This ensures agent doesn't recover old live executors.
The command below returns the same version for the Docker client API and the Docker server API, so I'm not sure what is wrong with the setup.
docker -H unix:///var/run/docker.sock version
Client:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:18:46 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:18:46 2016
OS/Arch: linux/amd64
The Mesos slave was using client API version 1.24.
It worked after setting the following environment variable for the mesos-slave:
DOCKER_API_VERSION=1.22
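In compose terms, that means adding the variable to the slave's environment block (a sketch of the relevant fragment):

```yaml
  mesos-slave1:
    image: mesosphere/mesos-slave:1.0.3
    environment:
      # pin the Docker CLI used by the agent to the daemon's API version
      DOCKER_API_VERSION: "1.22"
      MESOS_CONTAINERIZERS: docker
```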
The mapping between Docker release versions and API versions is listed here:
https://docs.docker.com/engine/api/v1.26/#section/Versioning
The other option is to update the Docker version on the host.