I am trying to set up Kong DB-less.
I have created a Dockerfile as below:
FROM kong
USER 0
RUN mkdir -p /kong/declarative/
COPY kong.yml /usr/local/etc/kong/kong.yml
USER kong
and a docker-compose file
version: "3.8"
networks:
kong-net:
services:
kong:
container_name: kong-dbless
build:
context: .
dockerfile: Dockerfile
restart: unless-stopped
networks:
- kong-net
environment:
- KONG_DATABASE=off
- KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl
- KONG_PROXY_ACCESS_LOG=/dev/stdout
- KONG_ADMIN_ACCESS_LOG=/dev/stdout
- KONG_PROXY_ERROR_LOG=/dev/stderr
- KONG_ADMIN_ERROR_LOG=/dev/stderr
- KONG_DECLARATIVE_CONFIG=/usr/local/etc/kong/kong.yml
ports:
- "8001:8001"
- "8444:8444"
- "80:8000"
- "443:8443"
and kong.yml is as below:
_format_version: "1.1"
_transform: true
services:
- host: mockbin.org
name: example_service
port: 80
protocol: http
routes:
- name: example_route
paths:
- /mock
strip_path: true
I run docker-compose up but I get errors in the log:
[+] Running 1/0
Container kong-dbless Created 0.0s
Attaching to kong-dbless
kong-dbless | 2022/04/29 01:31:52 [warn] 1#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong-dbless | nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong-dbless | 2022/04/29 01:31:52 [error] 1#0: init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:553: error parsing declarative config file /kong/declarative/kong.yml:
kong-dbless | /kong/declarative/kong.yml: No such file or directory
kong-dbless | stack traceback:
kong-dbless | [C]: in function 'error'
kong-dbless | /usr/local/share/lua/5.1/kong/init.lua:553: in function 'init'
kong-dbless | nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:553: error parsing declarative config file /kong/declarative/kong.yml:
kong-dbless | [C]: in function 'error'
kong-dbless | /usr/local/share/lua/5.1/kong/init.lua:553: in function 'init'
kong-dbless | init_by_lua:3: in main chunk*
Does anybody know what the problem is and how I should fix it?
I also tried the following, but it did not work:
Dockerfile
FROM kong
COPY kong.yml /
RUN cp /etc/kong/kong.conf.default /etc/kong/kong.conf
docker-compose
version: "3.8"
networks:
kong-net:
services:
kong:
container_name: kong-dbless
build:
context: .
dockerfile: Dockerfile
# restart: unless-stopped
networks:
- kong-net
healthcheck:
test: [ "CMD", "curl", "-f", "http://kong:8000" ]
interval: 5s
timeout: 2s
retries: 15
environment:
- KONG_DATABASE=off
- KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl
- KONG_PROXY_ACCESS_LOG=/dev/stdout
- KONG_ADMIN_ACCESS_LOG=/dev/stdout
- KONG_PROXY_ERROR_LOG=/dev/stderr
- KONG_ADMIN_ERROR_LOG=/dev/stderr
- KONG_DECLARATIVE_CONFIG=kong.yml
ports:
- "8001:8001"
- "8444:8444"
- "80:8000"
- "443:8443"
This is what worked for me:
FROM kong
USER 0
RUN mkdir -p /kong/declarative/
COPY declarative/kong.yml /kong/declarative/
RUN cp /etc/kong/kong.conf.default /etc/kong/kong.conf
USER kong
and this docker-compose file:
version: "3.8"
networks:
kong-net:
services:
kong:
container_name: kong
build:
context: .
dockerfile: Dockerfile
restart: unless-stopped
networks:
- kong-net
healthcheck:
test: [ "CMD", "curl", "-f", "http://kong:8000" ]
interval: 5s
timeout: 2s
retries: 15
environment:
- KONG_DATABASE=off
- KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl
- KONG_PROXY_ACCESS_LOG=/dev/stdout
- KONG_ADMIN_ACCESS_LOG=/dev/stdout
- KONG_PROXY_ERROR_LOG=/dev/stderr
- KONG_ADMIN_ERROR_LOG=/dev/stderr
- KONG_DECLARATIVE_CONFIG=/kong/declarative/kong.yml
ports:
- "8444:8444"
- "80:8000"
- "443:8443"
Related
I should see three targets in my Prometheus dashboard: one from Prometheus itself, which works, one from my self-created Node.js application called chat-api, and one from cAdvisor. For cAdvisor I get the following error when I run docker-compose up:
cadvisor | W0419 22:12:08.195849 1 sysinfo.go:203] Nodes topology is not available, providing CPU topology
cadvisor | W0419 22:12:08.196364 1 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
cadvisor | E0419 22:12:08.200398 1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
I changed the parameters in my docker-compose file, but it doesn't change anything. I'm a beginner with Docker.
docker-compose.yml:
version : '3.7'
services:
chat-api:
container_name: chat-api
build:
context: .
dockerfile: ./Dockerfile
ports:
- '4000:4000'
networks:
- cchat
restart: 'on-failure'
userdb:
image: mongo:latest
container_name: mongodb
volumes:
- userdb:/data/db
networks:
- cchat
cadvisor:
image: gcr.io/cadvisor/cadvisor
container_name: cadvisor
privileged: true
restart: always
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
- /:/rootfs:ro
- /var/run:/var/run:rw
- /sys:/sys:ro
- /var/lib/docker:/var/lib/docker:ro
devices:
- /dev/kmsg:/dev/kmsg
depends_on:
- chat-api
networks:
- cchat
prometheus:
image: prom/prometheus:latest
container_name: prometheus
restart: always
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
- prometheus-data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- '9090:9090'
depends_on:
- chat-api
networks:
- cchat
volumes:
userdb:
prometheus-data:
networks:
cchat:
prometheus.yml:
global:
scrape-interval: 5s
scrape_configs:
- job_name: 'cadvisor'
static_configs:
- targets: ['cadvisor:8080']
- job_name: 'chat-api'
static_configs:
- targets: ['chat-api:4000']
Dockerfile:
FROM node:alpine
WORKDIR .
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 4000
CMD ["node", "server.js"]
chat-api is a Node.js application built with Express.
My folder structure is shown in a screenshot (not reproduced here).
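One thing worth checking in the compose file above: the prometheus service passes --config.file=/etc/prometheus/prometheus.yml but never mounts the local prometheus.yml into the container, and the Prometheus global option is normally spelled scrape_interval (underscore) rather than scrape-interval. A rough sketch of the relevant pieces, assuming prometheus.yml sits next to docker-compose.yml (the service's other keys stay as they are):

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      # mount the scrape config so --config.file can find it inside the container
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'

and in prometheus.yml:

global:
  scrape_interval: 5s

cAdvisor exposes its metrics on port 8080 by default, so the cadvisor:8080 target is consistent with that; whether the chat-api:4000 target comes up depends on the Express app actually serving a /metrics endpoint.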
Good day!
I get an error when running the docker-compose up command.
Full error:
ERROR: Volume /home/<...>/docker/telegraf.conf:/etc/telegraf/telegraf.conf:ro has incorrect format, should be external:internal[:mode]
I have tried the following:
Changing the docker version value in docker-compose.yml to 20.10.8 and to 20; I get a version error.
Putting ./telegraf.conf:/etc/telegraf/telegraf.conf:ro in quotation marks; I get the same error.
Deleting ":ro" from ./telegraf.conf:/etc/telegraf/telegraf.conf:ro; the containers then start loading, but an error is returned in the "telegraf" block.
My OS: Ubuntu Mate (distro Ubuntu 20.04.3 LTS)
Docker version 20.10.8, build 3967b7d
Docker-compose version 1.29.2, build 5becea4c
Please tell me how to fix or work around the error so that all the components start.
Docker-compose.yml:
# docker-compose up
version: "2"
services:
postgres:
environment:
- POSTGRES_DB=goby_test
- POSTGRES_USER=postgres
- POSTGRES_HOST_AUTH_METHOD=trust
- PGDATA=/data/postgres
image: "postgres:9.6.23-alpine"
mem_limit: 128M
cpus: 0.1
ports:
- 5432:5432
healthcheck:
test: "pg_isready --username=postgres && psql --username=postgres --list"
timeout: 10s
retries: 20
influxdb:
image: influxdb:1.8-alpine
platform: linux/x86_64
mem_limit: 1024M
cpus: 0.5
ports:
- 8086:8086
environment:
- INFLUXDB_DB=influx
- INFLUXDB_ADMIN_USER=admin
- INFLUXDB_ADMIN_PASSWORD=admin
volumes:
- ./influxdb/scripts:/docker-entrypoint-initdb.d
# - influxdb_data:/var/lib/influxdb
telegraf:
image: telegraf:1.19.2-alpine
container_name: telegraf
platform: linux/x86_64
mem_limit: 128M
depends_on:
- influxdb
links:
- influxdb
ports:
- "8125:8125/udp"
volumes:
- ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
grafana:
image: grafana/grafana:8.1.1
platform: linux/x86_64
mem_limit: 128M
ports:
- 3001:3000
links:
- influxdb
environment:
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_BASIC_ENABLED=false
depends_on:
- influxdb
volumes:
# - grafana_data:/var/lib/grafana
- ./grafana/provisioning/:/etc/grafana/provisioning/
- ./grafana/dashboards/:/var/lib/grafana/dashboards/
web:
environment:
- POSTGRES_HOST=postgres
image: gobylang/todo-sample:latest
links:
- postgres
depends_on:
postgres:
condition: service_healthy
ports:
- 3000:3000
- "52022:22"
mem_limit: 64M
cpus: 0.1
entrypoint: goby server.gb --bind 0.0.0.0:3000 wsgi
tank:
image: direvius/yandex-tank
# image: ovil/ltws-tank
# environment:
# - COMPOSE_CONVERT_WINDOWS_PATHS=1
volumes:
- ../:/data
cap_add: [NET_ADMIN]
depends_on:
- web
- grafana
entrypoint: tail -f /dev/null
k6:
image: loadimpact/k6:0.33.0
ports:
- 6565:6565
environment:
- K6_OUT=influxdb=http://influxdb:8086/k6
# - COMPOSE_CONVERT_WINDOWS_PATHS=1
volumes:
- ../:/data
cap_add: [NET_ADMIN]
privileged: true
depends_on:
- web
- influxdb
- grafana
entrypoint: tail -f /dev/null
volumes:
grafana_data: {}
influxdb_data: {}
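For what it's worth, the short ./telegraf.conf:/etc/telegraf/telegraf.conf:ro form is parsed by splitting on colons, so one way to take parsing ambiguity out of the picture is the long volume syntax. That syntax needs Compose file format 3.2 or newer, so the version: "2" at the top of this file would have to be raised as well; a hedged sketch of just the telegraf service:

  telegraf:
    image: telegraf:1.19.2-alpine
    volumes:
      - type: bind               # long syntax, Compose file format 3.2+
        source: ./telegraf.conf
        target: /etc/telegraf/telegraf.conf
        read_only: true

It is also worth confirming that ./telegraf.conf really is a file sitting next to docker-compose.yml; if the host path does not exist, Docker creates a directory with that name and telegraf then fails to read its configuration.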
I have a docker compose file that holds my cypress container:
version: '3'
services:
redis:
image: redis
ports:
- "6379"
# restart: unless-stopped
networks:
main:
aliases:
- redis
postgres:
image: postgres:12
ports:
- "5432:5432"
env_file: ./.env
# restart: unless-stopped
volumes:
- pgdata:/var/lib/postgresql/data
networks:
main:
aliases:
- postgres
#access by going to localhost:16543
#when adding a server to the serve list
#the hostname is postgres
#the username is postgres
#the password is postgres
pgadmin:
image: dpage/pgadmin4
links:
- postgres
depends_on:
- postgres
env_file: ./.env
# restart: unless-stopped
ports:
- "16543:80"
networks:
main:
aliases:
- pgadmin
celery:
build:
context: .
dockerfile: Dockerfile-dev # use docker-dev because production npm installs and npm builds
command: python manage.py celery
env_file: ./.env
# restart: unless-stopped
volumes:
- .:/code
- tmp:/tmp
links:
- redis
depends_on:
- redis
networks:
main:
aliases:
- celery
web:
build:
context: .
dockerfile: Dockerfile-dev
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
- tmp:/tmp
ports:
- "8000:8000"
env_file: ./.env
# restart: unless-stopped
links:
- postgres
- redis
- celery
- pgadmin
depends_on:
- postgres
- redis
- celery
- pgadmin
networks:
main:
aliases:
- web
# Cypress container
cypress:
# the Docker image to use from https://github.com/cypress-io/cypress-docker-images
image: "cypress/included:4.0.2"
depends_on:
- web
environment:
# pass base url to test pointing at the web application
- CYPRESS_BASE_URL=http://web:8000
# share the current folder as volume to avoid copying
working_dir: /e2e
volumes:
- ./:/e2e
networks:
main:
aliases:
- cypress
volumes:
pgdata:
tmp:
networks:
main:
For some reason, when I start my server and then start Cypress using docker-compose up --exit-code-from cypress, I get the following error, which I cannot seem to debug. Please note that my server is running and all the services are on the same network, main.
Cypress could not verify that this server is running:
> http://web:8000
We are verifying this server because it has been configured as your `baseUrl`.
Cypress automatically waits until your server is accessible before running tests.
We will try connecting to it 3 more times...
====================================================================================================
(Run Starting)
┌────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Cypress: 4.0.2 │
│ Browser: Electron 78 (headless) │
│ Specs: 1 found (login/test.spec.js) │
└────────────────────────────────────────────────────────────────────────────────────────────────┘
────────────────────────────────────────────────────────────────────────────────────────────────────
Running: login/test.spec.js (1 of 1)
Browserslist: caniuse-lite is outdated. Please run the following command: `yarn upgrade`
Login Page
1) Visits Page
0 passing (671ms)
1 failing
1) Login Page Visits Page:
CypressError: cy.visit() failed trying to load:
http://127.0.0.1:8000/test/
We attempted to make an http request to this URL but the request failed without a response.
We received this error at the network level:
> Error: connect ECONNREFUSED 127.0.0.1:8000
Common situations why this would fail:
- you don't have internet access
- you forgot to run / boot your web server
- your web server isn't accessible
- you have weird network configuration settings on your computer
The stack trace for this error is:
Error: connect ECONNREFUSED 127.0.0.1:8000
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1056:14)
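Two details stand out in this output: Cypress first cannot verify http://web:8000, and the failing cy.visit() then targets http://127.0.0.1:8000/test/ rather than the configured baseUrl, which suggests the spec is visiting an absolute localhost URL instead of a relative path like /test/. On the compose side, a healthcheck on the web service at least makes it visible whether Django is accepting connections inside the shared network; a rough sketch, assuming curl exists in the Dockerfile-dev image (the service's other keys stay unchanged):

  web:
    healthcheck:
      # passes once the dev server answers on port 8000 inside the container
      test: ["CMD-SHELL", "curl -f http://localhost:8000/ || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 20

docker-compose ps then reports the web container as healthy or unhealthy, which narrows down whether the problem is the server or the test configuration.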
Docker is running and I want to run a Docker container on Windows 10. When I run docker-compose from Windows PowerShell, some download jobs complete, then an error occurs and the container cannot run. It seems that the jupyter service fails to build or to open a directory. Could anyone help me with this problem? The command line and the error are as follows:
PS C:\Users\mmva> cd C:\Users\mmva\Documents\GitHub\CerebralCortex-DockerCompose
PS C:\Users\mmva\Documents\GitHub\CerebralCortex-DockerCompose> docker-compose up
Building jupyter
Step 1/19 : FROM jupyter/jupyterhub
latest: Pulling from jupyter/jupyterhub
efd26ecc9548: Extracting [==================================================>] 51.34MB/51.34MB
a3ed95caeb02: Download complete
298ffe4c3e52: Download complete
758b472747c8: Download complete
8b9809a68afc: Download complete
93b253b5483d: Download complete
ef8136abb53c: Download complete
ERROR: Service 'jupyter' failed to build: failed to register layer: re-exec error: exit status 1: output: Failed to OpenForBackup failed in Win32: open \\?\C:\ProgramData\Docker\windowsfilter\eb9ac9d604f051d5490a876043809e7929197356387569bc50a3694b77d1b721\usr\share\man\man3\Locale::gettext.3pm.gz: The filename, directory name, or volume label syntax is incorrect. (0x1f) \\?\C:\ProgramData\Docker\windowsfilter\eb9ac9d604f051d5490a876043809e7929197356387569bc50a3694b77d1b721\usr\share\man\man3\Locale::gettext.3pm.gz
My docker version is 17.09.0-ce-win33 (13620).
I think the docker-compose file version is 3.
The content of docker-compose file:
version: '3'
# IPTABLES RULES IF NECESSARY
#-A INPUT -i br+ -j ACCEPT
#-A INPUT -i docker0 -j ACCEPT
#-A OUTPUT -o br+ -j ACCEPT
#-A OUTPUT -o docker0 -j ACCEPT
# The .env file is for production use with server-specific configurations
services:
# Frontend web proxy for accessing services and providing TLS encryption
nginx:
build: ./nginx
container_name: md2k-nginx
restart: always
volumes:
- ./nginx/site:/var/www
- ./nginx/nginx-selfsigned.crt:/etc/ssh/certs/ssl-cert.crt
- ./nginx/nginx-selfsigned.key:/etc/ssh/certs/ssl-cert.key
ports:
- "443:443"
- "80:80"
links:
- apiserver
- grafana
- jupyter
apiserver:
build: ../CerebralCortex-APIServer
container_name: md2k-api-server
restart: always
expose:
- 80
links:
- mysql
- kafka
- minio
depends_on:
- mysql
environment:
- MINIO_HOST=${MINIO_HOST:-minio}
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY:-ZngmrLWgbSfZUvgocyeH}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
- MYSQL_HOST=${MYSQL:-mysql}
- MYSQL_DB_USER=${MYSQL_ROOT_USER:-root}
- MYSQL_DB_PASS=${MYSQL_ROOT_PASSWORD:-random_root_password}
- KAFKA_HOST=${KAFKA_HOST:-kafka}
- JWT_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
- FLASK_HOST=${FLASK_HOST:-0.0.0.0}
- FLASK_PORT=${FLASK_PORT:-80}
- FLASK_DEBUG=${FLASK_DEBUG:-False}
volumes:
- ./data:/data
# Data vizualizations
grafana:
image: "grafana/grafana"
container_name: md2k-grafana
restart: always
ports:
- "3000:3000"
links:
- influxdb
environment:
- GF_SERVER_ROOT_URL=%(protocol)s://%(domain)s:%(http_port)s/grafana/
# - GF_INSTALL_PLUGINS=raintank-worldping-app,grafana-clock-panel,grafana-simple-json-datasource
volumes:
- timeseries-storage:/var/lib/grafana
# - timeseries-storage:/etc/grafana
influxdb:
image: "influxdb:alpine"
container_name: md2k-influxdb
restart: always
ports:
- "8086:8086"
volumes:
- timeseries-storage:/var/lib/influxdb
# Data Science Dashboard Interface
jupyter:
build: ./jupyterhub
container_name: md2k-jupyterhub
ports:
- 8000
restart: always
network_mode: "host"
pid: "host"
environment:
TINI_SUBREAPER: 'true'
volumes:
- ./jupyterhub/conf:/srv/jupyterhub/conf
command: jupyterhub --no-ssl --config /srv/jupyterhub/conf/jupyterhub_config.py
# Cerebral Cortex backend
kafka:
image: wurstmeister/kafka:0.10.2.0
container_name: md2k-kafka
restart: always
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_HOST_NAME: ${MACHINE_IP:-10.0.0.1}
KAFKA_ADVERTISED_PORT: 9092
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_MESSAGE_MAX_BYTES: 2000000
KAFKA_CREATE_TOPICS: "filequeue:4:1,processed_stream:16:1"
KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- data-storage:/kafka
depends_on:
- zookeeper
zookeeper:
image: wurstmeister/zookeeper
container_name: md2k-zookeeper
restart: always
ports:
- "2181:2181"
mysql:
image: "mysql:5.7"
container_name: md2k-mysql
restart: always
ports:
- 3306:3306 # Default mysql port
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-random_root_password}
- MYSQL_DATABASE=${MYSQL_DATABASE:-cerebralcortex}
- MYSQL_USER=${MYSQL_USER:-cerebralcortex}
- MYSQL_PASSWORD=${MYSQL_PASSWORD:-cerebralcortex_pass}
volumes:
- ./mysql/initdb.d:/docker-entrypoint-initdb.d
- metadata-storage:/var/lib/mysql
minio:
image: "minio/minio"
container_name: md2k-minio
restart: always
ports:
- 9000:9000 # Default minio port
environment:
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY:-ZngmrLWgbSfZUvgocyeH}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
command: server /export
volumes:
- object-storage:/export
cassandra:
build: ./cassandra
container_name: md2k-cassandra
restart: always
ports:
- 9160:9160 # Thrift client API
- 9042:9042 # CQL native transport
environment:
- CASSANDRA_CLUSTER_NAME=cerebralcortex
volumes:
- data-storage:/var/lib/cassandra
volumes:
object-storage:
metadata-storage:
data-storage:
temp-storage:
timeseries-storage:
user-storage:
log-storage
I'm trying to cap the maximum size of Docker's log files. Each container's log file should max out at 100 MB, so each container, such as edge, worker, etc., should only be allowed a log file of 100 MB.
I tried to insert:
log_opt:
max-size: 100m
At the end of my docker-compose.yml file (shown below), but I'm getting an error.
Where should I place it? Also, when I place it inside each container definition, I get an error. I read the Docker docs, but nowhere do they say exactly where to place the option.
This is my docker-compose.yml file:
version: '2.0'
services:
ubuntu:
image: ubuntu
volumes:
- box:/box
cache:
image: redis:3.0
rabbitmq:
image: rabbitmq:3-management
volumes:
- ${DATA}/rabbitmq:/var/lib/rabbitmq
ports:
- "15672:15672"
- "5672:5672"
placements-store:
image: redis:3.0
command: redis-server ${REDIS_OPTIONS}
ports:
- "6379:6379"
api:
image: ruby:2.3
command: bundle exec puma -C config/puma.rb
env_file:
- ./.env
working_dir: /app
volumes:
- .:/app/
- box:/box
expose:
- 3000
depends_on:
- cache
- placements-store
worker:
image: ruby:2.3
command: bundle exec sidekiq -C ./config/schedule.yml -q default -q high_priority,5 -c 10
env_file:
- ./.env
working_dir: /app
environment:
INSTANCE_TYPE: worker
volumes:
- .:/app/
- box:/box
depends_on:
- cache
- placements-store
sidekiq-monitor:
image: ruby:2.3
command: bundle exec thin start -R sidekiq.ru -p 9494
env_file:
- ./.env
working_dir: /app
volumes:
- .:/app/
- box:/box
depends_on:
- cache
expose:
- 9494
sneakers:
image: ruby:2.3
command: bundle exec rails sneakers:run
env_file:
- ./.env
working_dir: /app
environment:
INSTANCE_TYPE: worker
volumes:
- .:/app/
- box:/box
depends_on:
- cache
- placements-store
- rabbitmq
edge:
image: ruby:2.3
command: bundle exec thin start -R config.ru -p 3000
environment:
REDIS_URL: redis://placements-store
RACK_ENV: development
BUNDLE_PATH: /box
RABBITMQ_HOST: rabbitmq
working_dir: /app
volumes:
- ./edge:/app/
- box:/box
depends_on:
- placements-store
- rabbitmq
expose:
- 3000
proxy:
image: openresty/openresty:latest-xenial
ports:
- "80:80"
- "443:443"
volumes:
- ./config/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf
volumes:
box:
# node_modules:
# bower_components:
# client_dist:
This is what I tried; for example, inserting it under the rabbitmq service:
version: '2.0'
services:
ubuntu:
image: ubuntu
volumes:
- box:/box
cache:
image: redis:3.0
rabbitmq:
image: rabbitmq:3-management
#volumes:
# - ${DATA}/rabbitmq:/var/lib/rabbitmq
ports:
- "15672:15672"
- "5672:5672"
log_opt:
max-size: 50m
placements-store:
image: redis:3.0
command: redis-server ${REDIS_OPTIONS}
ports:
- "6379:6379"
This is the error I get:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.rabbitmq: 'log_opt'
I tried changing log_opt: to options: and got the same error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.rabbitmq: 'options'
Also the docker version is:
docker --version && docker-compose --version
Docker version 1.11.2, build b9f10c9/1.11.2
docker-compose version 1.9.0, build 2585387
UPDATE:
I tried using the logging option as the docs describe (for version 2.0):
version: '2.0'
services:
ubuntu:
image: ubuntu
volumes:
- box:/box
cache:
image: redis:3.0
rabbitmq:
image: rabbitmq:3-management
#volumes:
# - ${DATA}/rabbitmq:/var/lib/rabbitmq
ports:
- "15672:15672"
- "5672:5672"
logging:
driver: "json-file"
options:
max-size: 100m
max-file: 1
placements-store:
image: redis:3.0
command: redis-server ${REDIS_OPTIONS}
ports:
- "6379:6379"
I get this error:
ERROR: for rabbitmq  Cannot create container for service rabbitmq: json: cannot unmarshal number into Go value of type string
ERROR: Encountered errors while bringing up the project.
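The unmarshal error in the UPDATE is about YAML types rather than placement: the json-file logging driver expects its options as strings, and max-file: 1 is parsed as a number. Quoting the values, with logging nested at the same level as image and ports, is the usual shape; a sketch for the rabbitmq service:

  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"
      - "5672:5672"
    logging:
      driver: "json-file"
      options:
        max-size: "100m"   # option values must be strings
        max-file: "1"

The same logging block can be repeated on each service (edge, worker, and so on) whose log should be capped.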