Is it not possible to use depends_on in Docker Swarm?
How can I achieve the same behavior?
nominatim_db:
  ports:
    - ${NOMINATIM_DB_PUBLIC_PORT:-5432}:5432
  volumes:
    - nominatim_db_conf:/bitnami/postgresql/conf/conf.d
    - nominatim_db_data:/bitnami/postgresql
  deploy:
    placement:
      constraints:
        - node.hostname == ${NOMINATIM_DB_DEPLOY_HOSTNAME}
  healthcheck:
    test:
      - /bin/sh
      - -c
      - exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
    interval: 10s
    timeout: 5s
    retries: 6
    start_period: 30s
  depends_on:
    generate_nominatim_db_conf:
      condition: service_completed_successfully
I have already seen in some searches that it is ignored there, but how else could I do this?
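For context: docker stack deploy ignores depends_on entirely (the v3 Compose file reference states this), so a condition like service_completed_successfully cannot be expressed in swarm mode. The usual workaround is to make the service wait for its dependency itself and rely on the restart policy to retry. A rough sketch against the service above; the generated config file name and the Bitnami entrypoint paths are my assumptions, not taken from the question:

nominatim_db:
  # depends_on is ignored by `docker stack deploy`, so wait for the
  # generated config to appear in the shared volume instead, and let
  # swarm's restart policy retry until it does
  entrypoint:
    - /bin/sh
    - -c
    - |
      until [ -f /bitnami/postgresql/conf/conf.d/override.conf ]; do
        echo "waiting for generate_nominatim_db_conf..."
        sleep 2
      done
      exec /opt/bitnami/scripts/postgresql/entrypoint.sh /opt/bitnami/scripts/postgresql/run.sh
  deploy:
    restart_policy:
      condition: on-failure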
Related
I have two Postgres containers, one mdillon/postgis and another postgrest/postgrest, and a Python app that depends on the healthchecks of both containers. Please help.
In the terminal, after docker-compose up:
Creating compose_postgis_1 ... done
Creating compose_postgrest_1 ... done
Error for app: Container <postgrest_container_id> is unhealthy. Then the terminal exits.
Here is the docker-compose.yml file:
services:
postgis:
image: mdillon/postgis
volumes:
- ./data:/var/lib/postgresql/data:cached
ports:
- 5432:5432
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
postgrest:
image: postgrest/postgrest
volumes:
- ./data:/var/lib/postgresql/data:cached
environment:
PGRST_DB_URI: postgres://${PGRST_DB_ANON_ROLE}:@postgis:5432/postgres
PGRST_DB_SCHEMA: ${PGRST_DB_SCHEMA}
PGRST_DB_ANON_ROLE: ${PGRST_DB_ANON_ROLE}
PGRST_DB_POOL: ${PGRST_DB_POOL}
ports:
- 3000:3000
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
app:
image: newapp
command: python main.py
ports:
- 5000:5000
depends_on:
postgis:
condition: service_healthy
postgrest:
condition: service_healthy
If you are using the official Postgres Docker image, there is an option to run Postgres on a specific port: add the environment variable PGPORT to the Postgres container to make it run on a different port. Try the one below...
services:
postgis:
image: mdillon/postgis
volumes:
- ./data:/var/lib/postgresql/data:cached
ports:
- 5432:5432
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
postgrest:
image: postgrest/postgrest
volumes:
- ./data:/var/lib/postgresql/data:cached
environment:
PGRST_DB_URI: postgres://${PGRST_DB_ANON_ROLE}:@postgis:5432/postgres
PGRST_DB_SCHEMA: ${PGRST_DB_SCHEMA}
PGRST_DB_ANON_ROLE: ${PGRST_DB_ANON_ROLE}
PGRST_DB_POOL: ${PGRST_DB_POOL}
PGPORT: 3000
ports:
- 3000:3000
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
app:
image: newapp
command: python main.py
ports:
- 5000:5000
depends_on:
postgis:
condition: service_healthy
postgrest:
condition: service_healthy
By default, a Postgres container runs on port 5432 inside the Docker network. Since you are not changing the port of the Postgres container, both containers are trying to run on the same port inside the Docker network, and because of this, one container will run and the other will not. You can check the logs of the containers for a better understanding.
Hence, adding the PGPORT env var to the container to run Postgres on a different port will resolve your issue.
I'm having some issues trying to display my local DAGs in Airflow.
I deployed Airflow with Docker, but it does not display the DAGs that I have on my local computer; it only displays the "standard" DAGs that appeared when I set up Airflow via the docker-compose.yaml file.
The path for my DAG/log files is C:\Users\taz\Documents\workspace (workspace is the folder where I have the dags and logs folders).
And here is the docker-compose.yaml:
version: '3'
x-airflow-common:
&airflow-common
# In order to add custom dependencies or upgrade provider packages you can use your extended image.
# Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
# and uncomment the "build" line below, Then run `docker-compose build` to build the images.
image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.5}
# build: .
environment:
&airflow-common-env
AIRFLOW__CORE__EXECUTOR: CeleryExecutor
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
AIRFLOW__CORE__FERNET_KEY: ''
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
_PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
AIRFLOW__CORE__DAGS_FOLDER: /opt/workspace/dags
volumes:
- ./dags: /opt/workspace/dags
- ./logs: /opt/workspace/logs
- ./plugins: /opt/workspace/plugins
user: "${AIRFLOW_UID:-50000}:0"
depends_on:
&airflow-common-depends-on
redis:
condition: service_healthy
postgres:
condition: service_healthy
services:
postgres:
image: postgres:13
environment:
POSTGRES_USER: airflow
POSTGRES_PASSWORD: airflow
POSTGRES_DB: airflow
volumes:
#- postgres-db-volume:/var/lib/postgresql/data
- ./:/workspace
healthcheck:
test: ["CMD", "pg_isready", "-U", "airflow"]
interval: 5s
retries: 5
restart: always
redis:
image: redis:latest
ports:
- 6379:6379
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 30s
retries: 50
restart: always
airflow-webserver:
<<: *airflow-common
command: webserver
ports:
- 8080:8080
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
interval: 10s
timeout: 10s
retries: 5
restart: always
airflow-scheduler:
<<: *airflow-common
command: scheduler
healthcheck:
test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
interval: 10s
timeout: 10s
retries: 5
restart: always
airflow-worker:
<<: *airflow-common
command: celery worker
healthcheck:
test:
- "CMD-SHELL"
'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
interval: 10s
timeout: 10s
retries: 5
restart: always
airflow-init:
<<: *airflow-common
command: version
environment:
<<: *airflow-common-env
_AIRFLOW_DB_UPGRADE: 'true'
_AIRFLOW_WWW_USER_CREATE: 'true'
_AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
_AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
flower:
<<: *airflow-common
command: celery flower
ports:
- 5555:5555
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
interval: 10s
timeout: 10s
retries: 5
restart: always
volumes:
postgres-db-volume:
And when trying to run it, I got the error:
Cannot start Docker Compose application. Reason: Error invoking remote method 'compose-action': Error: Command failed: docker compose --file "docker-compose.yaml" --project-name "workspace" --project-directory "C:\Users\taz\Documents\workspace" up -d services.airflow-webserver.volumes.0 type is required
You haven't set the environment variable AIRFLOW_HOME, which is why it doesn't show your DAGs: this variable is the home directory for Airflow, and your volume mount is the path to the DAGs.
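For illustration, the suggestion could look roughly like this in the x-airflow-common block; the AIRFLOW_HOME value is an assumption chosen to match the /opt/workspace mounts from the question. Note also that the volumes entries in the question have a space after the first colon, which is exactly what makes Compose fail with "volumes.0 type is required":

x-airflow-common:
  &airflow-common
  environment:
    &airflow-common-env
    AIRFLOW_HOME: /opt/workspace              # assumed value, matching the mounts below
    AIRFLOW__CORE__DAGS_FOLDER: /opt/workspace/dags
  volumes:
    # no space after the first colon, otherwise each entry parses as a
    # YAML mapping and Compose rejects it with "type is required"
    - ./dags:/opt/workspace/dags
    - ./logs:/opt/workspace/logs
    - ./plugins:/opt/workspace/plugins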
I am trying to set up Vault as a secrets backend with Airflow on my local machine with docker-compose, but I am unable to make a connection. I am building on top of the official Airflow docker-compose file. I have added Vault as a service and added VAULT_ADDR=http://vault:8200 as an environment variable for the Airflow application.
In one of my DAGs, I am trying to fetch a secret from Vault, but I am getting connection refused.
When the services are running, I can access the Vault CLI and create secrets, which means Vault is running fine. I also tried docker compose exec -- airflow-webserver curl http://vault:8200 to see if there was some issue with the DAG, but I get the same connection refused error.
I also tried docker compose exec -- airflow-webserver curl http://flower:5555 just to see if the Docker networking is working fine, and it returned the correct response from the flower service.
# example dag
from airflow.decorators import dag, task
from airflow.hooks.base import BaseHook
from airflow.utils.dates import days_ago

default_args = {
    'owner': 'BooHoo'
}

@dag(default_args=default_args, schedule_interval=None, start_date=days_ago(2), tags=['example'])
def get_secrets():
    @task()
    def get():
        conn = BaseHook.get_connection(conn_id='slack_conn_id')
        print(f"Password: {conn.password}, Login: {conn.login}, URI: {conn.get_uri()}, Host: {conn.host}")
    get()

get_secrets_dag = get_secrets()
Here's the docker compose file.
version: '3'
x-airflow-common:
&airflow-common
image: apache/airflow:2.1.0-python3.7
environment:
&airflow-common-env
AIRFLOW__CORE__EXECUTOR: CeleryExecutor
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
AIRFLOW__CORE__FERNET_KEY: ''
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
AIRFLOW__CORE__LOAD_EXAMPLES: 'false' # default is true
AIRFLOW__WEBSERVER__EXPOSE_CONFIG: 'true'
# AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
AIRFLOW__SECRETS__BACKEND: 'airflow.providers.hashicorp.secrets.vault.VaultBackend'
AIRFLOW__SECRETS__BACKEND_KWARGS: '{"connections_path": "connections", "variables_path": "variables", "mount_point": "secrets", "token": "${VAULT_DEV_ROOT_TOKEN_ID}"}'
VAULT_ADDR: 'http://vault:8200'
SLACK_WEBHOOK_URL: "${SLACK_WEBHOOK_URL}"
volumes:
- ./src/dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-50000}"
depends_on:
redis:
condition: service_healthy
postgres:
condition: service_healthy
vault:
condition: service_healthy
services:
vault:
image: vault:latest
ports:
- "8200:8200"
environment:
VAULT_ADDR: 'http://0.0.0.0:8200'
VAULT_DEV_ROOT_TOKEN_ID: "${VAULT_DEV_ROOT_TOKEN_ID}"
cap_add:
- IPC_LOCK
command: vault server -dev
healthcheck:
test: [ "CMD", "vault", "status" ]
interval: 5s
retries: 5
restart: always
postgres:
# service configuration
redis:
# service configurations
airflow-webserver:
<<: *airflow-common
command: webserver
ports:
- "8080:8080"
healthcheck:
test: [ "CMD", "curl", "--fail", "http://localhost:8080/health" ]
interval: 10s
timeout: 10s
retries: 5
restart: always
airflow-scheduler:
<<: *airflow-common
command: scheduler
healthcheck:
test: [ "CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"' ]
interval: 10s
timeout: 10s
retries: 5
restart: always
airflow-worker:
<<: *airflow-common
command: celery worker
healthcheck:
test:
- "CMD-SHELL"
'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
interval: 10s
timeout: 10s
retries: 5
restart: always
airflow-init:
<<: *airflow-common
command: version
environment:
<<: *airflow-common-env
_AIRFLOW_DB_UPGRADE: 'true'
_AIRFLOW_WWW_USER_CREATE: 'true'
_AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
_AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
flower:
<<: *airflow-common
# service configuration
volumes:
postgres-db-volume:
I think you need to specify the dev listen address in your command:
vault server -dev -dev-listen-address="0.0.0.0:8200"
or set
VAULT_DEV_LISTEN_ADDRESS to 0.0.0.0:8200
Here are the docs: https://www.vaultproject.io/docs/commands/server#dev-options
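Applied to the vault service from the compose file above, that would look roughly like this; either the flag or the environment variable alone should be enough, both are shown only for illustration:

vault:
  image: vault:latest
  ports:
    - "8200:8200"
  environment:
    VAULT_DEV_ROOT_TOKEN_ID: "${VAULT_DEV_ROOT_TOKEN_ID}"
    # alternative to the -dev-listen-address flag below
    VAULT_DEV_LISTEN_ADDRESS: "0.0.0.0:8200"
  cap_add:
    - IPC_LOCK
  # bind the dev server to all interfaces so other containers can reach it
  command: vault server -dev -dev-listen-address="0.0.0.0:8200"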
Using the docker-compose file below.
version: "2.4"
services:
test-database:
image: mcr.microsoft.com/mssql/server
build: ms-sql-db
environment:
ACCEPT_EULA: Y
SA_PASSWORD: removed
MSSQL_PID: removed
tty: true
ports:
- 31433:1433
volumes:
- database-data:/var/opt/mssql
healthcheck:
test: nc -z localhost 1433 || exit -1
interval: 10s
timeout: 5s
retries: 5
start_period: 40s
api:
image: removed/api:1.0
build:
context: .
dockerfile: Api/Dockerfile-dev
depends_on:
test-database:
condition: service_healthy
volumes:
- ./Api:/app
ports:
- 5000:5000
- 5001:5001
tty: true
healthcheck:
test: curl --fail --insecure https://localhost:5001/api/health
interval: 5s
timeout: 5s
retries: 3
start_period: 40s
ui:
build:
context: .
dockerfile: webapp/Dockerfile-dev
depends_on:
api:
condition: service_healthy
volumes:
- ./webapp:/app
- /app/node_modules
ports:
- 3000:3000
environment:
- NODE_ENV=development
healthcheck:
test: curl --fail --insecure https://localhost:3000/
interval: 5s
timeout: 5s
retries: 3
start_period: 40s
the ui container fails to start due to the health check:
test: curl --fail --insecure https://localhost:3000/
However, if I use the DNS name that docker-compose creates for the container, as shown below, it works:
test: curl --fail --insecure https://ui:3000/
I'm using a Mac and Docker Desktop.
Edit
Adding the specific issue for clarity: inside the container I can only access https://ui:3000/, not https://localhost:3000/, and I'd rather not couple the health check to the service name.
I'm using Docker 18.04 and running the WSO2 IoT server. I want to change the IP address using this tutorial: https://docs.wso2.com/display/IOTS330/Configuring+the+IP+or+Hostname . I'm using the attached docker-compose file. I create a container using
sudo docker-compose up
Then I run
sudo docker exec -it -u 0 <container-id> bash
navigate to the script directory, and execute the script.
After this, files like conf/carbon.xml were changed and everything looked good. But if I restart the container by executing
docker container restart $(docker ps -a -q)
all changes are discarded. The strange thing is that if I create a new file, e.g. in the conf directory, that file remains, even after a restart.
Can someone explain this to me?
version: '2.3'
services:
wso2iot-mysql:
image: mysql:5.7.20
container_name: wso2iot-mysql
hostname: wso2iot-mysql
ports:
- 3306
environment:
MYSQL_ROOT_PASSWORD: root
volumes:
- ./mysql/scripts:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD", "mysqladmin" ,"ping", "-uroot", "-proot"]
interval: 10s
timeout: 60s
retries: 5
wso2iot-broker:
image: wso2iot-broker:3.3.0
container_name: wso2iot-broker
hostname: wso2iot-broker
ports:
- "9446:9446"
- "5675:5675"
healthcheck:
test: ["CMD", "nc", "-z", "localhost", "9446"]
interval: 10s
timeout: 120s
retries: 5
depends_on:
wso2iot-mysql:
condition: service_healthy
volumes:
- ./broker:/home/wso2carbon/volumes/wso2/broker
wso2iot-analytics:
image: wso2iot-analytics:3.3.0
container_name: wso2iot-analytics
hostname: wso2iot-analytics
healthcheck:
test: ["CMD", "curl", "-k", "-f", "https://localhost:9445/carbon/admin/login.jsp"]
interval: 10s
timeout: 120s
retries: 5
depends_on:
wso2iot-mysql:
condition: service_healthy
volumes:
- ./analytics:/home/wso2carbon/volumes/wso2/analytics
ports:
- "9445:9445"
wso2iot-server:
image: wso2iot-server:3.3.0
container_name: wso2iot-server
hostname: wso2iot-server
healthcheck:
test: ["CMD", "curl", "-k", "-f", "https://localhost:9443/carbon/admin/login.jsp"]
interval: 10s
timeout: 120s
retries: 5
depends_on:
wso2iot-mysql:
condition: service_healthy
volumes:
- ./iot-server:/home/wso2carbon/volumes
ports:
- "443:9443"
links:
- wso2iot-mysql
As far as I know, the writable layer should be available until the container is deleted, so this behavior is surprising. But patching a running container is not the expected way of using Docker anyway. In this case, if you need to run the change-ip script, I think it would be better to create a new Docker image in which the change-ip script is executed as part of the image build.
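A sketch of that approach in Compose terms; the build context, Dockerfile contents, and image tag are all hypothetical:

services:
  wso2iot-server:
    # build a derived image instead of patching a running container;
    # the (hypothetical) Dockerfile would look something like:
    #   FROM wso2iot-server:3.3.0
    #   COPY change-ip.sh /home/wso2carbon/
    #   RUN sh /home/wso2carbon/change-ip.sh
    build:
      context: ./iot-server-custom          # hypothetical directory holding the Dockerfile
    image: wso2iot-server:3.3.0-custom-ip   # hypothetical tag for the resulting image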