docker - can't change files (wso2)

I'm using Docker 18.04 and running the WSO2 IoT server. I want to change the IP address using this tutorial: https://docs.wso2.com/display/IOTS330/Configuring+the+IP+or+Hostname . I'm using the attached docker-compose file. I create the containers using
sudo docker-compose up
Then I run
sudo docker exec -it -u 0 <container-id> bash
navigate to the script directory, and execute the script.
After this, files like conf/carbon.xml were changed and everything looks good. But if I restart the container by executing
docker container restart $(docker ps -a -q)
all changes are discarded. The strange thing is that if I create a new file, e.g. in the conf directory, that file remains, even after a restart.
Can someone explain this to me?
version: '2.3'
services:
  wso2iot-mysql:
    image: mysql:5.7.20
    container_name: wso2iot-mysql
    hostname: wso2iot-mysql
    ports:
      - 3306
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./mysql/scripts:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-uroot", "-proot"]
      interval: 10s
      timeout: 60s
      retries: 5
  wso2iot-broker:
    image: wso2iot-broker:3.3.0
    container_name: wso2iot-broker
    hostname: wso2iot-broker
    ports:
      - "9446:9446"
      - "5675:5675"
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "9446"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./broker:/home/wso2carbon/volumes/wso2/broker
  wso2iot-analytics:
    image: wso2iot-analytics:3.3.0
    container_name: wso2iot-analytics
    hostname: wso2iot-analytics
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:9445/carbon/admin/login.jsp"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./analytics:/home/wso2carbon/volumes/wso2/analytics
    ports:
      - "9445:9445"
  wso2iot-server:
    image: wso2iot-server:3.3.0
    container_name: wso2iot-server
    hostname: wso2iot-server
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:9443/carbon/admin/login.jsp"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./iot-server:/home/wso2carbon/volumes
    ports:
      - "443:9443"
    links:
      - wso2iot-mysql

As far as I know, the writable layer should survive until the container is deleted. But in any case, this is not the expected way of using Docker. If you need to run the change-ip script, it would be better to build a new Docker image in which the change-ip script is executed as part of the image build.
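A minimal sketch of that approach, building a derived image in which the script has already run. The script path is an assumption for illustration; check the actual location inside the WSO2 image:

```dockerfile
# Start from the image referenced in the compose file.
FROM wso2iot-server:3.3.0

# Hypothetical script location -- run the change-ip script at build time so
# the modified conf/carbon.xml is baked into an image layer.
RUN bash /home/wso2carbon/wso2iot-3.3.0/scripts/change-ip.sh
```

Then point the wso2iot-server service's image: (or a build: directive) at this derived image, so every container created from it already contains the changed configuration.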

Related

How to use two healthchecks in docker-compose file where the python app depends on both healthchecks?

I have two Postgres containers, one mdillon/postgis and another postgrest/postgrest, and the Python app depends on the healthchecks of both containers. Please help.
In the terminal after docker-compose up:
Creating compose_postgis_1 ... done
Creating compose_postgrest_1 ... done
Error for app: Container <postgrest_container_id> is unhealthy. And the terminal exits.
My docker-compose.yml file:
services:
  postgis:
    image: mdillon/postgis
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    ports:
      - 5432:5432
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  postgrest:
    image: postgrest/postgrest
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    environment:
      PGRST_DB_URI: postgres://${PGRST_DB_ANON_ROLE}:@postgis:5432/postgres
      PGRST_DB_SCHEMA: ${PGRST_DB_SCHEMA}
      PGRST_DB_ANON_ROLE: ${PGRST_DB_ANON_ROLE}
      PGRST_DB_POOL: ${PGRST_DB_POOL}
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    image: newapp
    command: python main.py
    ports:
      - 5000:5000
    depends_on:
      postgis:
        condition: service_healthy
      postgrest:
        condition: service_healthy
If you are using the official Postgres Docker image, there is an option to run Postgres on a specific port. You need to add the ENV variable PGPORT for the Postgres Docker container to run on a different port. Try the below one:
services:
  postgis:
    image: mdillon/postgis
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    ports:
      - 5432:5432
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  postgrest:
    image: postgrest/postgrest
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    environment:
      PGRST_DB_URI: postgres://${PGRST_DB_ANON_ROLE}:@postgis:5432/postgres
      PGRST_DB_SCHEMA: ${PGRST_DB_SCHEMA}
      PGRST_DB_ANON_ROLE: ${PGRST_DB_ANON_ROLE}
      PGRST_DB_POOL: ${PGRST_DB_POOL}
      PGPORT: 3000
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    image: newapp
    command: python main.py
    ports:
      - 5000:5000
    depends_on:
      postgis:
        condition: service_healthy
      postgrest:
        condition: service_healthy
By default, the Postgres container runs on port 5432 inside the Docker network. Since you are not changing the port of the Postgres container, both containers try to run on the same port inside the Docker network, and because of this one container will run and the other will not. You can check the Docker container logs for a better understanding.
Hence, adding the PGPORT env var so that Postgres runs on a different port will resolve your issue.

Docker error "Cannot start Docker Compose application" while trying to set up Airflow

I'm having some issues trying to display my local DAGs in Airflow.
I deploy Airflow with Docker, but the DAGs I have on my local computer are not displayed; it only shows the "standard" DAGs that came with the Airflow setup inside the docker-compose.yaml file.
The path for my DAG/log files is C:\Users\taz\Documents\workspace (workspace is the folder where I have the dags and logs folders).
And here is the "docker-compose.yaml"
version: '3'
x-airflow-common:
  &airflow-common
  # In order to add custom dependencies or upgrade provider packages you can use your extended image.
  # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
  # and uncomment the "build" line below, Then run `docker-compose build` to build the images.
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.5}
  # build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
    AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
    AIRFLOW__CORE__DAGS_FOLDER: /opt/workspace/dags
  volumes:
    - ./dags: /opt/workspace/dags
    - ./logs: /opt/workspace/logs
    - ./plugins: /opt/workspace/plugins
  user: "${AIRFLOW_UID:-50000}:0"
  depends_on:
    &airflow-common-depends-on
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      #- postgres-db-volume:/var/lib/postgresql/data
      - ./:/workspace
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "airflow"]
      interval: 5s
      retries: 5
    restart: always
  redis:
    image: redis:latest
    ports:
      - 6379:6379
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 30s
      retries: 50
    restart: always
  airflow-webserver:
    <<: *airflow-common
    command: webserver
    ports:
      - 8080:8080
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-worker:
    <<: *airflow-common
    command: celery worker
    healthcheck:
      test:
        - "CMD-SHELL"
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-init:
    <<: *airflow-common
    command: version
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
  flower:
    <<: *airflow-common
    command: celery flower
    ports:
      - 5555:5555
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
volumes:
  postgres-db-volume:
And when I try to run it, I get the error:
Cannot start Docker Compose application. Reason: Error invoking remote method 'compose-action': Error: Command failed: docker compose --file "docker-compose.yaml" --project-name "workspace" --project-directory "C:\Users\taz\Documents\workspace" up -d services.airflow-webserver.volumes.0 type is required
You haven't set the environment variable $AIRFLOW_HOME, which is why it doesn't show your DAGs: this variable is the home directory for Airflow, and your volume mount is the path to your dags.
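A minimal sketch of that suggestion, assuming the shared environment block from the compose file above (the exact value is an assumption, chosen to match the mount paths in the question):

```yaml
# Hypothetical fragment for the &airflow-common-env block: point Airflow's
# home at the mounted workspace so dags/ and logs/ resolve under it.
environment:
  AIRFLOW_HOME: /opt/workspace
  AIRFLOW__CORE__DAGS_FOLDER: /opt/workspace/dags
```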

Airflow Worker is not working when using CeleryExecutor

I'm trying to migrate from LocalExecutor to CeleryExecutor in Airflow 2.1.3 using Docker with Redis. I made separate containers for webserver, scheduler, worker, redis and database. Problem: tasks are queued but not executed.
docker-compose.yml:
version: "3.3"
services:
  redis:
    image: redis:6.0.9-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 30s
      retries: 50
    restart: always
    ports:
      - "6793:6793"
  database:
    image: postgres:12-alpine
    restart: always
    environment:
      POSTGRES_DB: airflow
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
    volumes:
      - airflow_database:/var/lib/postgresql/data/
  webserver:
    image: airflow:latest
    restart: always
    depends_on:
      - database
      - redis
    environment:
      AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@database/airflow
      GUNICORN_CMD_ARGS: --log-level WARNING
      EXECUTOR: Celery
    volumes:
      - airflow_logs:/var/log/airflow
      - airflow_data:/var/spool/airflow
      - ./airflow/dags:/usr/local/airflow/dags
      - ./airflow/plugins:/usr/local/airflow/plugins
    ports:
      - 8080:8080
      - 8888:8888
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
  flower:
    image: airflow:latest
    restart: always
    depends_on:
      - redis
    environment:
      AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@database/airflow
      EXECUTOR: Celery
    ports:
      - "5555:5555"
    command: celery flower -b "redis://redis:6379/1"
  scheduler:
    image: airflow:latest
    restart: always
    depends_on:
      - webserver
    volumes:
      - airflow_logs:/var/log/airflow
      - airflow_data:/var/spool/airflow
      - ./airflow/dags:/usr/local/airflow/dags
      - ./airflow/plugins:/usr/local/airflow/plugins
    environment:
      AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@database/airflow
      EXECUTOR: Celery
    command: scheduler
  worker:
    image: airflow:latest
    restart: always
    depends_on:
      - scheduler
    volumes:
      - airflow_logs:/var/log/airflow
      - airflow_data:/var/spool/airflow
      - ./airflow/dags:/usr/local/airflow/dags
      - ./airflow/plugins:/usr/local/airflow/plugins
    environment:
      AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@database/airflow
      EXECUTOR: Celery
    healthcheck:
      test:
        - "CMD-SHELL"
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 10s
      timeout: 10s
      retries: 5
    command: celery worker -b "redis://redis:6379/1" --result-backend "db+postgresql://airflow:airflow@database/airflow"
volumes:
  airflow_database:
  airflow_data:
  airflow_logs:
  staging_database:
Dockerfile, airflow.cfg, entrypoint.sh
All containers start normally. I tried setting celery_result_backend == broker_url == 'redis://redis:6379/1', but to no avail. Flower shows the worker that was created, but the worker container doesn't show a single line of logs. I also tried running the worker container separately; it did not help.
As I can see, there's an obvious port number inconsistency for Redis: '6793' in the redis service above and '6379' further down.
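A sketch of the fix, correcting the mapping in the redis service so both sides use Redis's default port:

```yaml
# Redis listens on 6379 by default; the compose file mapped "6793:6793",
# so nothing was reachable on the port the brokers and Flower use.
redis:
  image: redis:6.0.9-alpine
  ports:
    - "6379:6379"
```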

docker containers in different named networks: curl: (7) Failed to connect to localhost port 9080: Connection refused

Problem Definition
I am trying to use two docker-compose.yml files (each in a separate directory) on the same host machine, one for Airflow and the other for another application. I have put Airflow's containers in the same named network as my other app (see the compose files below) and confirmed using docker network inspect that the Airflow containers are in the network. However, when I curl from the airflow worker container to the my_keycloak server I get the following error:
Error
Failed to connect to localhost port 9080: Connection refused
Files
Airflow docker-compose.yml
version: '3'
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.1.0}
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
  #added working directory and scripts folder 6-26-2021 CP
  volumes:
    - ./dags:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - ./plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-50000}"
  depends_on:
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - postgres-db-volume:/var/lib/postgresql/data
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "airflow"]
      interval: 5s
      retries: 5
    restart: always
  redis:
    image: redis:latest
    ports:
      - 6379:6379
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 30s
      retries: 50
    restart: always
  airflow-webserver:
    <<: *airflow-common
    command: webserver
    #changed from default of 8080 because of clash with baton docker services 6-26-2021 CP
    ports:
      - 50309:8080
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:50309/health"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-worker:
    <<: *airflow-common
    command: celery worker
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test:
        - "CMD-SHELL"
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-init:
    <<: *airflow-common
    command: version
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
  flower:
    <<: *airflow-common
    command: celery flower
    ports:
      - 5555:5555
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
volumes:
  postgres-db-volume:
#added baton network so that airflow can communicate with baton cp 6-28-2021
networks:
  baton_docker_files_tempo:
    external: true
The other app's docker-compose file:
version: "3.7"
services:
  db:
    image: artifactory.redacted.com/docker/postgres:11.3
    ports:
      - 11101:5432
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: keycloaks156
    networks:
      - tempo
  keycloak:
    image: registry.git.redacted.com/tempo23/tempo23-server/keycloak:${TEMPO_VERSION:-develop}
    container_name: my_keycloak
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      KEYCLOAK_DEFAULT_THEME: redacted
      KEYCLOAK_WELCOME_THEME: redacted
      PROXY_ADDRESS_FORWARDING: 'true'
      KEYCLOAK_FRONTEND_URL: http://localhost:9080/auth
      DB_VENDOR: postgres
      DB_ADDR: db
      DB_USER: postgres
      DB_PASSWORD: postgres
    ports:
      - 9080:8080
    networks:
      - tempo
    depends_on:
      - db
  db-migrate:
    image: registry.git.redacted.com/tempo23/tempo23-server/db-migrate:${TEMPO_VERSION:-develop}
    command: "-url=jdbc:postgresql://db:5432/ -user=postgres -password=postgres -connectRetries=60 migrate"
    restart: on-failure:3
    depends_on:
      - db
    networks:
      - tempo
  keycloak-bootstrap:
    image: registry.git.redacted.com/tempo23/tempo23-server/server-full:${TEMPO_VERSION:-develop}
    command: ["keycloakBootstrap", "--config", "conf/single.conf"]
    depends_on:
      - db
    restart: on-failure:10
    networks:
      - tempo
  server:
    image: registry.git.redacted.com/tempo23/tempo23-server/server:${TEMPO_VERSION:-develop}
    command: [ "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005", "conf/single.conf" ]
    environment:
      AKKA_PARALLELISM_MAX: "2"
      DB_THREADPOOL_SIZE: "4"
      UNSAFE_ENABLED: "true"
      DOCKER_BIND_HOST_ROOT: "${BIND_ROOT}"
      DOCKER_BIND_CONTAINER_ROOT: "/var/lib/tempo2"
      MESSAGING_HOST: "server"
      PUBSUB_TYPE: inmem
      TEMPOJOBS_DOCKER_TAG: registry.git.redacted.com/tempo23/tempo23-server/tempojobs:${TEMPO_VERSION:-develop}
      NUM_WORKER: 1
      ASSET_CACHE_SIZE: 500M
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - "${BIND_ROOT}:/var/lib/tempo2"
    ports:
      - 2551:2551 # akka port
      - 8080:8080 # application http port
      - 8081:8081 # executor http port
      - 5005:5005 # debug port
    networks:
      - tempo
    restart: always
    depends_on:
      - db
networks:
  tempo:
Read the documentation on ports carefully: it exposes a container port on a host port. Between services in the same network you can reach a service at service-name:port, in this case keycloak:8080 instead of localhost:9080.
It does not matter where each container is defined (any docker-compose file on the same machine); the only thing that matters is the network. As you mentioned in your question, they are on the same network, so they can see each other. The misunderstanding is that the containers are isolated from each other, so instead of localhost you should use the container name in the curl.
Try running:
curl keycloak:8080
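For instance, a hedged sketch of the check (the worker container name is an assumption; adjust it to what docker ps shows). Inside the shared network, Keycloak is reached by its service name on its container-side port 8080, not the 9080 host mapping:

```shell
# Run curl from inside the Airflow worker container; use the service name
# and the container port, not localhost and the host-mapped port.
docker exec -it airflow-worker curl -f http://keycloak:8080/auth
```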

Losing all modification when container is restarted with Docker Compose

I'm using Docker Compose to run a web application. I want to make changes inside my container, modify some config files, and restart the container without losing the modifications.
I create the container using
sudo docker-compose up
Then I run
sudo docker exec -it -u 0 <container-id> bash
After changing the config files everything looks good. But if I restart the container by executing
docker container restart $(docker ps -a -q)
all changes are discarded. Can someone explain the best way to do this without losing modifications after a restart?
A useful technique here is to store a copy of the configuration files on the host and then inject them using a Docker-Compose volumes: directive.
version: '3'
services:
  myapp:
    image: me/myapp
    ports: ['8080:8080']
    volumes:
      - './myapp.ini:/app/myapp.ini'
It is fairly routine to destroy and recreate containers, and you want things to be set up so that everything is ready to go immediately once you docker run or docker-compose up.
Other good uses of bind-mounted directories like this are to give a container a place to publish log files back out, and if your container happens to need persistent data on a filesystem, giving a place to store that across container runs.
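For example, a sketch combining those patterns (the host paths and image name are illustrative, not from the question):

```yaml
# Illustrative only: inject config from the host, and bind-mount directories
# so logs and persistent data survive container recreation.
services:
  myapp:
    image: me/myapp
    volumes:
      - './myapp.ini:/app/myapp.ini'   # config injected from the host
      - './logs:/app/logs'             # container publishes logs back out
      - './data:/app/data'             # persistent data across runs
```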
docker exec is a useful debugging tool, but it is not intended to be part of your core Docker workflow.
Thanks @David Maze for your reply. In my case I have a script that changes many parameters in my app and generates an SSL certificate; after executing the script in my container I have to restart the container.
My docker-compose.yml:
version: '2.3'
services:
  wso2iot-mysql:
    image: mysql:5.7.20
    container_name: wso2iot-mysql
    hostname: wso2iot-mysql
    ports:
      - 3306
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./mysql/scripts:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-uroot", "-proot"]
      interval: 10s
      timeout: 60s
      retries: 5
  wso2iot-broker:
    image: docker.wso2.com/wso2iot-broker:3.3.0
    container_name: wso2iot-broker
    hostname: wso2iot-broker
    ports:
      - "9446:9446"
      - "5675:5675"
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "9446"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./broker:/home/wso2carbon/volumes/wso2/broker
  wso2iot-analytics:
    image: docker.wso2.com/wso2iot-analytics:3.3.0
    container_name: wso2iot-analytics
    hostname: wso2iot-analytics
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:9445/carbon/admin/login.jsp"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./analytics:/home/wso2carbon/volumes/wso2/analytics
    ports:
      - "9445:9445"
  wso2iot-server:
    image: docker.wso2.com/wso2iot-server:3.3.0
    container_name: wso2iot-server
    hostname: wso2iot-server
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:9443/carbon/admin/login.jsp"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./iot-server:/home/wso2carbon/volumes
    ports:
      - "9443:9443"
    links:
      - wso2iot-mysql
