I'm trying to create a docker-compose file to set up my development environment, but the following error keeps happening:
ERROR: for 36b488ac7408_pcare_redis_1 No closing quotation
ERROR: for redis No closing quotation
Traceback (most recent call last):
File "docker-compose", line 3, in <module>
File "compose\cli\main.py", line 81, in main
File "compose\cli\main.py", line 203, in perform_command
File "compose\metrics\decorator.py", line 18, in wrapper
File "compose\cli\main.py", line 1186, in up
File "compose\cli\main.py", line 1166, in up
File "compose\project.py", line 697, in up
File "compose\parallel.py", line 108, in parallel_execute
File "compose\parallel.py", line 206, in producer
File "compose\project.py", line 679, in do
File "compose\service.py", line 579, in execute_convergence_plan
File "compose\service.py", line 499, in _execute_convergence_recreate
File "compose\parallel.py", line 108, in parallel_execute
File "compose\parallel.py", line 206, in producer
File "compose\service.py", line 494, in recreate
File "compose\service.py", line 612, in recreate_container
File "compose\service.py", line 341, in create_container
File "compose\container.py", line 48, in create
File "docker\api\container.py", line 422, in create_container
File "docker\api\container.py", line 433, in create_container_config
File "docker\types\containers.py", line 703, in __init__
File "docker\utils\utils.py", line 464, in split_command
File "shlex.py", line 315, in split
File "shlex.py", line 300, in __next__
File "shlex.py", line 109, in get_token
File "shlex.py", line 191, in read_token
ValueError: No closing quotation
[17476] Failed to execute script docker-compose
Here is my docker-compose.yml:
version: '3'
services:
db:
restart: unless-stopped
image: 'postgres:13'
environment:
PGPASSWORD: '${DB_PASSWORD:-secret}'
POSTGRES_DB: '${DB_DATABASE}'
POSTGRES_USER: '${DB_USERNAME}'
POSTGRES_PASSWORD: '${DB_PASSWORD:-secret}'
healthcheck:
test: ["CMD", "pg_isready","-U","${DB_USERNAME}","-d","${DB_DATABASE}","-p","${DB_PORT}"]
interval: 5s
timeout: 5s
retries: 3
command: -p ${DB_PORT}
ports:
- "${DB_PORT}:${DB_PORT}"
redis:
image: 'redis:alpine'
restart: unless-stopped
command: 'redis-server --requirepass ${REDIS_PASSWORD} --port ${REDIS_PORT}'
healthcheck:
test: ["CMD", "redis-cli","-p","${REDIS_PORT}", "ping"]
retries: 3
timeout: 5s
interval: 5s
ports:
- "${REDIS_PORT}"
If I remove the line command: 'redis-server --requirepass ${REDIS_PASSWORD} --port ${REDIS_PORT}', docker-compose works, but I don't know what is wrong.
I have the following .env in the same folder:
DB_CONNECTION=pgsql
DB_HOST=localhost
DB_PORT=6000
DB_DATABASE=test1
DB_USERNAME=test1
DB_PASSWORD=test1
REDIS_HOST=localhost
REDIS_PASSWORD='L\Cm;=9YV$v<{4,eLs/AN4[g{dA"R4wy'
REDIS_PORT=6001
I'm on Windows 10.
Your password contains a double quote and the whole command is wrapped in single quotes, so something is definitely going wrong with quoting and escaping. The traceback ends in shlex.py: docker-compose substitutes ${REDIS_PASSWORD} into the command string and then splits that string like a shell would, so the double quote inside the password opens a quotation that is never closed, which is exactly the "No closing quotation" error.
I would also suggest considering setting the password in a redis config file instead.
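For example, the list (exec) form of command is handed to the engine as-is instead of being re-split after substitution, so shell-special characters in the password survive. A minimal sketch, untested against your exact .env:

command: ["redis-server", "--requirepass", "${REDIS_PASSWORD}", "--port", "${REDIS_PORT}"]

Or keep the password out of the compose file entirely with a config file; the mount path below follows the official redis image documentation, and the file name is illustrative:

redis:
  image: 'redis:alpine'
  command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
  volumes:
    - ./redis.conf:/usr/local/etc/redis/redis.conf

where redis.conf contains a requirepass <password> line. Note that a double quote is special in redis.conf parsing as well, so the simplest fix of all may be choosing a password without quote characters.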
Related
There is a docker-compose.yml which starts three containers: frontend, backend, and database. After starting the containers, the frontend is available, but Django cannot connect to the database; an error is thrown:
Is the server running on host "database" (10.254.0.2) and accepting
TCP/IP connections on port 5432?
After I restart the Django container (docker restart f6aea49c798e), everything works and the Django interface responds. While the problem is occurring, if I exec into the Django container, ping to the database works, and telnet database 5432 works too. What am I doing wrong? Below is my docker-compose file. I've tried setting depends_on, and I've tried adding a healthcheck, as you can see below.
CONTAINER ID   IMAGE                     COMMAND                  CREATED        STATUS                  PORTS                                       NAMES
f6aea49c798e localhost:5000/backend "python manage.py ru…" 23 hours ago Up 23 hours 0.0.0.0:8181->8000/tcp, :::8181->8000/tcp backend
53116b53c03a postgres:latest "docker-entrypoint.s…" 23 hours ago Up 23 hours (healthy) 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp database
e0b379a85516 localhost:5000/frontend "docker-entrypoint.s…" 23 hours ago Up 23 hours 0.0.0.0:81->3000/tcp, :::81->3000/tcp frontend
cc7d032bf271 registry:2 "/entrypoint.sh /etc…" 27 hours ago Up 27 hours 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp registry
---
services:
frontend:
restart: unless-stopped
container_name: frontend
image: localhost:5000/frontend
tty: True
ports:
- 81:3000
backend:
restart: always
container_name: backend
image: localhost:5000/backend
ports:
- 8181:8000
networks:
- back
depends_on:
db:
condition: service_healthy
db:
restart: always
container_name: database
image: postgres:latest
ports:
- 5432:5432
environment:
POSTGRES_DB: django
POSTGRES_USER: django
POSTGRES_PASSWORD: django
networks:
- back
healthcheck:
test: ["CMD-SHELL", "pg_isready -U django"]
interval: 5s
timeout: 5s
retries: 5
networks:
back:
driver: bridge
ipam:
config:
- subnet: 10.254.0.0/24
gateway: 10.254.0.1
aux_addresses:
db: 10.254.0.10
backend: 10.254.0.20
If I just run the containers without compose, the same thing happens.
docker network create --subnet 10.100.0.0/24 --ip-range 10.100.0.0/24 myNetwork
docker run -td --name database --network=myNetwork -p 5432:5432 localhost:5000/postgres
docker run -td --name django --network=myNetwork -p 8181:8000 localhost:5000/backend
Traceback (most recent call last):
File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 53, in wrapper
fn(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 120, in inner_run
self.check_migrations()
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 458, in check_migrations
executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 18, in __init__
self.loader = MigrationLoader(self.connection)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/loader.py", line 49, in __init__
self.build_graph()
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/loader.py", line 212, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/recorder.py", line 76, in applied_migrations
if self.has_table():
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/recorder.py", line 56, in has_table
return self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor())
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 260, in cursor
return self._cursor()
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 236, in _cursor
self.ensure_connection()
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection
self.connect()
File "/usr/local/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection
self.connect()
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 197, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py", line 185, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "database" (10.254.0.2) and accepting
TCP/IP connections on port 5432?
I'm trying to create an Airflow (1.10.9) pipeline using the puckel Docker image (I'm working with the local docker-compose.yml). Everything works well until I try to import the BigQueryToCloudStorageOperator:
from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator
I get this exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/dagbag.py", line 243, in process_file m = imp.load_source(mod_name, filepath)
File "/usr/local/lib/python3.7/imp.py", line 171, in load_source module = _load(spec)
File "<frozen importlib._bootstrap>", line 696, in _load
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/usr/local/airflow/dags/coo_dag.py", line 6, in <module> from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/operators/bigquery_to_gcs.py", line 20, in <module> from airflow.contrib.hooks.bigquery_hook import BigQueryHook
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/hooks/bigquery_hook.py", line 34, in <module> from airflow.contrib.hooks.gcp_api_base_hook import GoogleCloudBaseHook
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/hooks/gcp_api_base_hook.py", line 25, in <module> import httplib2
ModuleNotFoundError: No module named 'httplib2'
I tried to install the package apache-airflow[gcp]==1.10.9 either manually (by accessing the airflow webserver machine and running pip install) or by mounting a file (requirements.txt) as a volume, but it doesn't work.
(When I mount the file as a volume, the webserver machine doesn't start; it cannot install the requirements.)
Here is the docker-compose.yml that I'm using:
version: '3.7'
services:
postgres:
image: postgres:9.6
environment:
- POSTGRES_USER=airflow
- POSTGRES_PASSWORD=airflow
- POSTGRES_DB=airflow
logging:
options:
max-size: 10m
max-file: "3"
webserver:
image: puckel/docker-airflow:1.10.9
restart: always
depends_on:
- postgres
environment:
- LOAD_EX=y
- EXECUTOR=Local
logging:
options:
max-size: 10m
max-file: "3"
volumes:
- ./dags:/usr/local/airflow/dags
# - ./requirements.txt:/requirements.txt
ports:
- "8080:8080"
command: webserver
healthcheck:
test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
interval: 30s
timeout: 30s
retries: 3
And here is the content of the file requirements.txt:
apache-airflow[gcp]==1.10.9
To mount the requirements.txt file as a volume inside the container, the file has to be in the same directory as the docker-compose.yml file for the relative path to work. Consider correcting the indentation of the mounted volumes in the yml file as shown below.
volumes:
- ./dags:/usr/local/airflow/dags
- ./requirements.txt:/requirements.txt
I have also added some more dependencies to requirements.txt which are required for the BigQueryToCloudStorageOperator() task to work.
Below are the contents of requirements.txt:
pandas==0.25.3
pandas-gbq==0.14.1
apache-airflow[gcp]==1.10.9
In case your previous Airflow instance is already running, consider running sudo docker-compose stop first before you compose again (sudo docker-compose up).
Also, the bigquery_default connection in Airflow should be edited to add the correct GCP project_id and service account JSON key.
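Alternatively, the key can be supplied through the standard GOOGLE_APPLICATION_CREDENTIALS variable, so the GCP hooks fall back to application-default credentials when the connection has no key configured. This is only a sketch; the key file path and name are placeholders:

webserver:
  image: puckel/docker-airflow:1.10.9
  volumes:
    - ./dags:/usr/local/airflow/dags
    - ./requirements.txt:/requirements.txt
    - ./keys/service-account.json:/usr/local/airflow/gcp-key.json  # hypothetical key file
  environment:
    - LOAD_EX=y
    - EXECUTOR=Local
    - GOOGLE_APPLICATION_CREDENTIALS=/usr/local/airflow/gcp-key.json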
I'm getting this error whenever I try to initialize a Django website. I would really appreciate any help, as I've been trying to solve this issue for a long time. Thank you, everyone!
Full traceback of the error:
Traceback (most recent call last):
File "C:\Users\user\Desktop\django-saas-boilerplate-master\manage.py", line 19, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\django\core\management\__init__.py", line 401, in execute_from_command_line
utility.execute()
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\django\core\management\__init__.py", line 345, in execute
settings.INSTALLED_APPS
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\django\conf\__init__.py", line 83, in __getattr__
self._setup(name)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\django\conf\__init__.py", line 70, in _setup
self._wrapped = Settings(settings_module)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\django\conf\__init__.py", line 177, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 790, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "C:\Users\user\Desktop\django-saas-boilerplate-master\conf\settings.py", line 311, in <module>
DATABASES = {"default": env.db("DATABASE_URL")}
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\environ\environ.py", line 204, in db_url
return self.db_url_config(self.get_value(var, default=default), engine=engine)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\environ\environ.py", line 402, in db_url_config
'PORT': _cast_int(url.port) or '',
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\urllib\parse.py", line 175, in port
raise ValueError(message) from None
ValueError: Port could not be cast to integer value as '${DOCKER_POSTGRES_PORT}'
This is how the URL for Postgres is configured:
DOCKER_POSTGRES_PORT=5432
DATABASE_URL=postgresql://${PROJECT_NAME}:${DB_PASSWORD}@localhost:${DOCKER_POSTGRES_PORT}/postgres
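Note that the traceback shows the port arriving as the literal string '${DOCKER_POSTGRES_PORT}', i.e. whatever reads this .env here (django-environ) is not expanding ${...} references the way docker-compose does. A spelled-out value would sidestep that; the credentials below are placeholders:

DOCKER_POSTGRES_PORT=5432
DATABASE_URL=postgresql://myproject:mypassword@localhost:5432/postgres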
This is the docker compose file:
Version: "3.2"
volumes:
postgres_data: {}
redis_data: {}
services:
postgres:
build: ./devops/docker/postgres
restart: on-failure
container_name: ${PROJECT_NAME}_postgres
image: ${PROJECT_NAME}_postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
POSTGRES_USER: ${PROJECT_NAME}
POSTGRES_DB: ${PROJECT_NAME}
POSTGRES_PASSWORD: ${DB_PASSWORD}
command: -c fsync=off -c synchronous_commit=off -c full_page_writes=off
networks:
- default
ports:
- "5432:5432"
redis:
restart: on-failure
container_name: ${PROJECT_NAME}_redis
image: ${PROJECT_NAME}_redis
build:
context: ./devops/docker/redis/
volumes:
- redis_data:/data
networks:
- default
ports:
- "6379:6379"
backend:
image: ${PROJECT_NAME}_backend
container_name: ${PROJECT_NAME}_backend
build:
dockerfile: Dockerfile
context: ./
restart: on-failure
ports:
- "8000:8000"
working_dir: /app
environment:
- DEBUG=True
- DATABASE_URL=postgresql://${PROJECT_NAME}:${DB_PASSWORD}@postgres:5432/${PROJECT_NAME}
- ALLOWED_HOSTS=*
- SECRET_KEY=notsafeforproduction
- CORS_ORIGIN_ALLOW_ALL=True
- RQ_HOST=redis
- RQ_PORT=${DOCKER_REDIS_PORT}
- DJANGO_SETTINGS_MODULE=conf.settings
- PORT=${PORT}
- HOSTNAME=${HOSTNAME}
volumes:
- .:/app
networks:
- default
stdin_open: true
tty: true
depends_on:
- postgres
- redis
- rqworkers
rqworkers:
image: ${PROJECT_NAME}_rqworkers
container_name: ${PROJECT_NAME}_rqworkers
build:
dockerfile: Dockerfile
context: .
restart: always
working_dir: /app
environment:
- DEBUG=True
- DATABASE_URL=postgresql://${PROJECT_NAME}:${DB_PASSWORD}@postgres:5432/${PROJECT_NAME}
- ALLOWED_HOSTS=*
- SECRET_KEY=notsafeforproduction
- CORS_ORIGIN_ALLOW_ALL=True
- RQ_HOST=redis
- RQ_PORT=${DOCKER_REDIS_PORT}
- DJANGO_SETTINGS_MODULE=conf.settings
- PORT=${PORT}
- HOSTNAME=${HOSTNAME}
volumes:
- .:/app
networks:
- default
command: /bin/bash -c "python manage.py rqworker default"
stdin_open: true
tty: true
depends_on:
- postgres
- redis
networks:
default:
ipam:
driver: default
I am really confused.
My server was set up about 2 weeks ago, and I still have learning and work to do before it is complete.
The server lost power when I accidentally turned off the UPS.
After the restart, some containers are running and others are not. Most notably, Plex was not running, and Portainer wouldn't list it. So I tried composing again, starting dockerd, etc., with no real luck.
I am running
Ubuntu 20.04 LTS
Docker 20.10.02
docker-compose version 1.27.4, build 40524192
docker-py version: 4.3.1
CPython version: 3.7.7
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
Portainer 1.24.1
I can connect to Portainer and Organizr, my first two services, but homeassistant and plex don't work.
In Portainer, I have 1 stack with 5 containers running, but the primary endpoint is listed as "Down", and when I click on it I get the message: "Failure: Unable to connect to the Docker endpoint".
Just running the command "docker version" is slow and returns "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
The docker and dockerd services seem to be running. I have tried stopping and restarting them, and rebooting.
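(For reference, on Ubuntu 20.04 with the stock systemd units, the stop/restart attempts above amount to something like the following, which also surfaces the daemon's own logs:)

sudo systemctl status docker
sudo journalctl -u docker.service | tail -50
sudo systemctl restart docker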
When I try to run compose again, I get:
jn#turcotteserver:~$ docker-compose -f ~/docker/docker-compose.yml up
Traceback (most recent call last):
File "urllib3/connectionpool.py", line 677, in urlopen
File "urllib3/connectionpool.py", line 426, in _make_request
File "<string>", line 3, in raise_from
File "urllib3/connectionpool.py", line 421, in _make_request
File "http/client.py", line 1344, in getresponse
File "http/client.py", line 306, in begin
File "http/client.py", line 267, in _read_status
File "socket.py", line 589, in readinto
ConnectionResetError: [Errno 104] Connection reset by peer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "requests/adapters.py", line 449, in send
File "urllib3/connectionpool.py", line 727, in urlopen
File "urllib3/util/retry.py", line 403, in increment
File "urllib3/packages/six.py", line 734, in reraise
File "urllib3/connectionpool.py", line 677, in urlopen
File "urllib3/connectionpool.py", line 426, in _make_request
File "<string>", line 3, in raise_from
File "urllib3/connectionpool.py", line 421, in _make_request
File "http/client.py", line 1344, in getresponse
File "http/client.py", line 306, in begin
File "http/client.py", line 267, in _read_status
File "socket.py", line 589, in readinto
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "docker/api/client.py", line 205, in _retrieve_server_version
File "docker/api/daemon.py", line 181, in version
File "docker/utils/decorators.py", line 46, in inner
File "docker/api/client.py", line 228, in _get
File "requests/sessions.py", line 543, in get
File "requests/sessions.py", line 530, in request
File "requests/sessions.py", line 643, in send
File "requests/adapters.py", line 498, in send
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "bin/docker-compose", line 3, in <module>
File "compose/cli/main.py", line 67, in main
File "compose/cli/main.py", line 123, in perform_command
File "compose/cli/command.py", line 69, in project_from_options
File "compose/cli/command.py", line 132, in get_project
File "compose/cli/docker_client.py", line 43, in get_client
File "compose/cli/docker_client.py", line 170, in docker_client
File "docker/api/client.py", line 188, in __init__
File "docker/api/client.py", line 213, in _retrieve_server_version
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
[764006] Failed to execute script docker-compose
And this is what I have in my docker-compose file:
version: "3.6"
services:
portainer:
image: portainer/portainer
container_name: portainer
restart: always
command: -H unix:///var/run/docker.sock
ports:
- "9100:9000"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ${USERDIR}/docker/portainer/data:/data
- ${USERDIR}/docker/shared:/shared
environment:
- TZ=${TZ}
organizr:
container_name: organizr
hostname: organizr
restart: always
image: organizr/organizr
volumes:
- ${USERDIR}/docker/organizr:/config
- ${USERDIR}/docker/shared:/shared
ports:
- "80:80"
environment:
- fpm=true #true or false | using true will provide better performance
- branch=v2-master #v2-master or #v2-develop
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}
# mariadb:
# image: "linuxserver/mariadb"
# container_name: "mariadb"
# hostname: mariadb
# volumes:
# - ${USERDIR}/docker/mariadb:/config
# ports:
# - target: 3306
# published: 3306
# protocol: tcp
# mode: host
# restart: always
# environment:
# - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
# - PUID=${PUID}
# - PGID=${PGID}
# - TZ=${TZ}
# phpmyadmin:
# hostname: phpmyadmin
# container_name: phpmyadmin
# image: phpmyadmin/phpmyadmin
# restart: always
# links:
# - mariadb:db
# ports:
# - 8080:80
# environment:
# - PMA_HOST=mariadb
# - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
# watchtower:
# container_name: watchtower
# restart: always
# image: v2tec/watchtower
# volumes:
# - /var/run/docker.sock:/var/run/docker.sock
# command: --schedule "0 0 4 * * *" --cleanup
homeassistant:
container_name: homeassistant
restart: always
image: homeassistant/home-assistant
devices:
- /dev/ttyUSB0:/dev/ttyUSB0
- /dev/ttyUSB1:/dev/ttyUSB1
- /dev/ttyACM0:/dev/ttyACM0
volumes:
- ${USERDIR}/docker/homeassistant:/config
- /etc/localtime:/etc/localtime:ro
- ${USERDIR}/docker/shared:/shared
ports:
- "9091:8123"
privileged: true
environment:
- PUID=${PUID}
- TZ=${TZ}
plexms:
container_name: plexms
restart: always
network_mode: host
image: plexinc/pms-docker
volumes:
- ${USERDIR}/docker/plexms:/config
- /media/SSD860/Downloads/plex_tmp:/transcode
- /media/hdd1/Media:/media/media1
- /media/hdd2/Media:/media/media2
- ${USERDIR}/docker/shared:/shared
ports:
- "32400:32400/tcp"
- "3005:3005/tcp"
- "8324:8324/tcp"
- "32469:32469/tcp"
- "1900:1900/udp"
- "32410:32410/udp"
- "32412:32412/udp"
- "32413:32413/udp"
- "32414:32414/udp"
environment:
- TZ=${TZ}
- HOSTNAME="Turcotte Plex Server"
- PLEX_CLAIM="claim-b_6kMsxaERgzacA9w-6R"
- PLEX_UID=${PUID}
- PLEX_GID=${PGID}
- ADVERTISE_IP="http://192.168.3.112:32400/"
grafana:
image: grafana/grafana
container_name: grafana
restart: always
ports:
- "3000:3000"
networks:
- monitoring
volumes:
- grafana-volume:/vol01/Docker/monitoring
influxdb:
image: influxdb
container_name: influxdb
restart: always
ports:
- "8086:8086"
networks:
- monitoring
volumes:
- influxdb-volume:/vol01/Docker/monitoring
environment:
- INFLUXDB_DB=telegraf
- INFLUXDB_USER=telegraf
- INFLUXDB_ADMIN_ENABLED=true
- INFLUXDB_ADMIN_USER=admin
- INFLUXDB_ADMIN_PASSWORD=Welcome123
telegraf:
image: telegraf
container_name: telegraf
restart: always
extra_hosts:
- "influxdb:192.168.3.112"
environment:
HOST_PROC: /rootfs/proc
HOST_SYS: /rootfs/sys
HOST_ETC: /rootfs/etc
volumes:
- ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- /sys:/rootfs/sys:ro
- /proc:/rootfs/proc:ro
- /etc:/rootfs/etc:ro
# broker:
# image: eclipse-mosquitto
# volumes:
# - "./mosquitto:/mosquitto"
# networks:
# - localnet
# ports:
# - "1883:1883"
# node-red:
# image: nodered/node-red:latest
# environment:
# - TZ=Europe/Amsterdam
# ports:
# - "1880:1880"
# networks:
# - node-red-net
# volumes:
# - node-red-data:/data
networks:
monitoring:
localnet:
# node-red-net:
volumes:
# node-red-data:
grafana-volume:
external: true
influxdb-volume:
external: true
Any help would be appreciated before I just scrap everything.
I am really starting to think that Docker and docker-compose are too complicated, even with an engineering degree and a computer science degree (20 years ago things were easier, I think, lol).
I get an error when deploying 2 services in Bluemix using docker-compose:
Creating xxx
ERROR: for xxx-service 'message'
Traceback (most recent call last):
File "bin/docker-compose", line 3, in <module>
File "compose/cli/main.py", line 64, in main
File "compose/cli/main.py", line 116, in perform_command
File "compose/cli/main.py", line 876, in up
File "compose/project.py", line 416, in up
File "compose/parallel.py", line 66, in parallel_execute
KeyError: 'message'
Failed to execute script docker-compose
My docker-compose file (which runs perfectly locally) is:
yyy-service:
image: yyy
container_name: wp-docker
hostname: wp-docker
ports:
- 8080:80
environment:
WORDPRESS_DB_PASSWORD: whatever
volumes:
- "~/whatever/:/var/www/html/wp-content"
links:
- xxx-service
xxx-service:
image: xxx
container_name: wp-mysql
hostname: wp-mysql
environment:
MYSQL_ROOT_PASSWORD: whatever
MYSQL_DATABASE: whatever
MYSQL_USER: root
MYSQL_PASSWORD: whatever
volumes:
- /var/data/whatever:/var/lib/mysql
The question is very similar to this one, but I see no solution, except for trying
export COMPOSE_HTTP_TIMEOUT=300
which hasn't worked for me.
Unfortunately, docker-compose swallows the actual error message returned and gives you a stack trace of its Python internals with no information about the underlying cause.
From your compose file, my guess is that the issue is with your volumes. You've specced them to mount directories on your compute host directly into your containers. That won't work in Bluemix; instead, you need to declare the volumes as external (and create them first), then point to them.
For example, something like:
version: '2'
services:
test:
image: registry.ng.bluemix.net/ibmliberty
volumes:
- test:/tmp/data:rw
volumes:
test:
external: true
where you create the volume (in this case, "test") first with something like cf ic volume create test.
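Putting it together, the flow would look something like this, assuming cf ic init has already pointed your Docker client at the Bluemix endpoint:

cf ic volume create test
docker-compose up -d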