Unable to access https://localhost:3000 from inside a Docker container using docker-compose, but can access it via the DNS name, on a Mac

Using the docker-compose file below.
version: "2.4"
services:
  test-database:
    image: mcr.microsoft.com/mssql/server
    build: ms-sql-db
    environment:
      ACCEPT_EULA: Y
      SA_PASSWORD: removed
      MSSQL_PID: removed
    tty: true
    ports:
      - 31433:1433
    volumes:
      - database-data:/var/opt/mssql
    healthcheck:
      test: nc -z localhost 1433 || exit -1
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 40s
  api:
    image: removed/api:1.0
    build:
      context: .
      dockerfile: Api/Dockerfile-dev
    depends_on:
      test-database:
        condition: service_healthy
    volumes:
      - ./Api:/app
    ports:
      - 5000:5000
      - 5001:5001
    tty: true
    healthcheck:
      test: curl --fail --insecure https://localhost:5001/api/health
      interval: 5s
      timeout: 5s
      retries: 3
      start_period: 40s
  ui:
    build:
      context: .
      dockerfile: webapp/Dockerfile-dev
    depends_on:
      api:
        condition: service_healthy
    volumes:
      - ./webapp:/app
      - /app/node_modules
    ports:
      - 3000:3000
    environment:
      - NODE_ENV=development
    healthcheck:
      test: curl --fail --insecure https://localhost:3000/
      interval: 5s
      timeout: 5s
      retries: 3
      start_period: 40s
The ui container fails to start due to the health check:
test: curl --fail --insecure https://localhost:3000/
However, if I use the DNS name that docker-compose creates for the container, as shown below, it works:
test: curl --fail --insecure https://ui:3000/
I'm using a Mac and Docker Desktop.
Edit
Adding the specific issue for clarity: inside the container I can only access https://ui:3000/, not https://localhost:3000/, and I'd rather not couple the health check to the service name.
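For what it's worth, one common cause (an assumption here, since the webapp's Dockerfile isn't shown) is a dev server that binds to a single interface rather than 0.0.0.0: the service name then resolves to the container's network IP and works, while loopback does not. Forcing the server to listen on all interfaces keeps the health check decoupled from the service name, e.g.:

```yaml
ui:
  build:
    context: .
    dockerfile: webapp/Dockerfile-dev
  environment:
    - NODE_ENV=development
    # HOST is hypothetical: many Node dev servers (CRA, webpack-dev-server,
    # Vite) read it to pick the bind address
    - HOST=0.0.0.0
  healthcheck:
    # 127.0.0.1 sidesteps localhost/IPv6 resolution quirks in the container
    test: curl --fail --insecure https://127.0.0.1:3000/
```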

Related

How to use two healthchecks in docker-compose file where the python app depends on both healthchecks?

I have two containers, one running mdillon/postgis and the other postgrest/postgrest, and a Python app that depends on both of their health checks. Please help.
In the terminal, after docker-compose up:
Creating compose_postgis_1 ... done
Creating compose_postgrest_1 ... done
ERROR: for app  Container <postgrest_container_id> is unhealthy
Then the terminal exits.
Showing the docker-compose.yml file:
services:
  postgis:
    image: mdillon/postgis
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    ports:
      - 5432:5432
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  postgrest:
    image: postgrest/postgrest
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    environment:
      PGRST_DB_URI: postgres://${PGRST_DB_ANON_ROLE}:@postgis:5432/postgres
      PGRST_DB_SCHEMA: ${PGRST_DB_SCHEMA}
      PGRST_DB_ANON_ROLE: ${PGRST_DB_ANON_ROLE}
      PGRST_DB_POOL: ${PGRST_DB_POOL}
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    image: newapp
    command: python main.py
    ports:
      - 5000:5000
    depends_on:
      postgis:
        condition: service_healthy
      postgrest:
        condition: service_healthy
If you are using the official Postgres Docker image, there is an option to run Postgres on a specific port: add the PGPORT environment variable to the Postgres container to run it on a different port. Try the version below.
services:
  postgis:
    image: mdillon/postgis
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    ports:
      - 5432:5432
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  postgrest:
    image: postgrest/postgrest
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    environment:
      PGRST_DB_URI: postgres://${PGRST_DB_ANON_ROLE}:@postgis:5432/postgres
      PGRST_DB_SCHEMA: ${PGRST_DB_SCHEMA}
      PGRST_DB_ANON_ROLE: ${PGRST_DB_ANON_ROLE}
      PGRST_DB_POOL: ${PGRST_DB_POOL}
      PGPORT: 3000
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    image: newapp
    command: python main.py
    ports:
      - 5000:5000
    depends_on:
      postgis:
        condition: service_healthy
      postgrest:
        condition: service_healthy
By default, a Postgres container runs on port 5432 inside the Docker network. Since you are not changing the Postgres container's port, both containers try to run on the same port inside the Docker network, and because of this one container will run and the other will not. You can check the containers' logs for a better understanding.
Hence, adding the PGPORT env var so that Postgres runs on a different port will resolve your issue.
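If you do move Postgres to another port this way, the health check should match it; pg_isready accepts an explicit host and port (a sketch assuming the PGPORT: 3000 value above):

```yaml
healthcheck:
  # pg_isready defaults to port 5432; point it at the PGPORT value instead
  test: ["CMD-SHELL", "pg_isready -h localhost -p 3000"]
  interval: 10s
  timeout: 5s
  retries: 5
```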

Use docker `healthcheck` to wait for `up` command to detach

I have the following docker-compose.yml excerpt:
version: "3.9"
services:
  elastic:
    image: elasticsearch:8.2.3
    container_name: elastic
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
      - xpack.security.enabled=false
    volumes:
      - es_data:/usr/share/elasticsearch/data
    ports:
      - target: 9200
        published: 9200
    healthcheck:
      test: curl -s http://elastic:9200 >/dev/null || exit 1
      interval: 5s
      timeout: 5s
      retries: 10
    networks:
      - elastic
  app:
    build: .
    working_dir: /code/app
    command: uvicorn main:app --host 0.0.0.0 --reload
    env_file: .env
    volumes:
      - ./app:/code/app
    ports:
      - target: 8000
        published: 8000
    restart: on-failure
    depends_on:
      elastic:
        condition: service_healthy
    healthcheck:
      test: curl -s http://app:8000/nexus/health >/dev/null || exit 1
      interval: 5s
      timeout: 5s
      retries: 10
    networks:
      - elastic
When running docker compose up -d I would like the command to exit only once the app health check condition is met.
I found a --wait flag in the docs, but it does not seem to work when I try to run it. I would also like to double-check that the health check test itself is valid; I am not sure whether I should use the service name or localhost in the URL.
It seems the issue was that I was using the following command:
docker-compose up --wait
The problem is that this invokes the older Compose v1 CLI; it should be the v2 plugin:
docker compose up --wait
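As a side note (behaviour may depend on your Compose v2 version), --wait implies detached mode and blocks until every service with a health check reports healthy, with the exit code reflecting whether they did:

```shell
# Compose v2: start in the background and block until all services
# with healthchecks report healthy; non-zero exit if any fails.
docker compose up --wait

# The exit status can gate follow-up steps, e.g.:
docker compose up --wait && echo "stack is healthy"
```

As for the host in the test URL: the healthcheck command runs inside the service's own container, so localhost and the service name should both work there, provided the app listens on all interfaces.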

How to set up airflow worker to allow webserver fetch logs on different machine with docker?

I recently installed Airflow 2.1.4 with Docker containers. I've successfully set up postgres, redis, the scheduler, two local workers, and flower on the same machine with docker-compose.
Now I want to expand and set up workers on other machines.
I was able to get the workers up and running: flower can find the worker node and the worker receives tasks from the scheduler correctly, but regardless of the task's result status, the task is marked as failed with an error message like the one below:
*** Log file does not exist: /opt/airflow/logs/test/test/2021-10-29T14:38:37.669734+00:00/1.log
*** Fetching from: http://b7a0154e7e20:8793/log/test/test/2021-10-29T14:38:37.669734+00:00/1.log
*** Failed to fetch log file from worker. [Errno -3] Temporary failure in name resolution
Then I tried replacing AIRFLOW__CORE__HOSTNAME_CALLABLE: 'socket.getfqdn' with AIRFLOW__CORE__HOSTNAME_CALLABLE: 'airflow.utils.net.get_host_ip_address' and got this error instead:
*** Log file does not exist: /opt/airflow/logs/test/test/2021-10-28T15:47:59.625675+00:00/1.log
*** Fetching from: http://172.18.0.2:8793/log/test/test/2021-10-28T15:47:59.625675+00:00/1.log
*** Failed to fetch log file from worker. [Errno 113] No route to host
Then I tried mapping port 8793 of the worker to its host machine (see worker_4 below); now it returns:
*** Failed to fetch log file from worker. [Errno 111] Connection refused
but it sometimes still gives the "Temporary failure in name resolution" error.
I've also tried copying the URL from the error and replacing the IP with the host machine's IP, and got this message:
Forbidden
You don't have the permission to access the requested resource. It is either read-protected or not readable by the server.
Please let me know if additional info is needed.
Thanks in advance!
Below is my docker-compose.yml for the scheduler/webserver/flower:
version: '3.4'
x-hosts: &extra_hosts
  postgres: XX.X.XX.XXX
  redis: XX.X.XX.XXX
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.1.4}
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
    AIRFLOW__CORE__DEFAULT_TIMEZONE: 'America/New_York'
    AIRFLOW__CORE__HOSTNAME_CALLABLE: 'airflow.utils.net.get_host_ip_address'
    AIRFLOW_WEBSERVER_DEFAULT_UI_TIMEZONE: 'America/New_York'
    AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:- apache-airflow-providers-slack}
  volumes:
    - ./dags:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - ./plugins:/opt/airflow/plugins
    - ./assets:/opt/airflow/assets
    - ./airflow.cfg:/opt/airflow/airflow.cfg
    - /etc/hostname:/etc/hostname
  user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-0}"
  extra_hosts: *extra_hosts
services:
  postgres:
    container_name: 'airflow-postgres'
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "airflow"]
      interval: 5s
      retries: 5
    restart: always
    ports:
      - '5432:5432'
  redis:
    image: redis:latest
    container_name: 'airflow-redis'
    expose:
      - 6379
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 30s
      retries: 50
    restart: always
    ports:
      - '6379:6379'
  airflow-webserver:
    <<: *airflow-common
    container_name: 'airflow-webserver'
    command: webserver
    ports:
      - 8080:8080
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    depends_on:
      - redis
      - postgres
  airflow-scheduler:
    <<: *airflow-common
    container_name: 'airflow-scheduler'
    command: scheduler
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    depends_on:
      - redis
      - postgres
  airflow-worker1:
    build: ./worker_config
    container_name: 'airflow-worker_1'
    command: celery worker -H worker_1
    healthcheck:
      test:
        - "CMD-SHELL"
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 10s
      timeout: 10s
      retries: 5
    environment:
      <<: *airflow-common-env
      DUMB_INIT_SETSID: "0"
    restart: always
    depends_on:
      - redis
      - postgres
    volumes:
      - ./dags:/opt/airflow/dags
      - ./logs:/opt/airflow/logs
      - ./plugins:/opt/airflow/plugins
      - ./assets:/opt/airflow/assets
      - ./airflow.cfg:/opt/airflow/airflow.cfg
    extra_hosts: *extra_hosts
  airflow-worker2:
    build: ./worker_config
    container_name: 'airflow-worker_2'
    command: celery worker -H worker_2
    healthcheck:
      test:
        - "CMD-SHELL"
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 10s
      timeout: 10s
      retries: 5
    environment:
      <<: *airflow-common-env
      DUMB_INIT_SETSID: "0"
    restart: always
    depends_on:
      - redis
      - postgres
    volumes:
      - ./dags:/opt/airflow/dags
      - ./logs:/opt/airflow/logs
      - ./plugins:/opt/airflow/plugins
      - ./assets:/opt/airflow/assets
      - ./airflow.cfg:/opt/airflow/airflow.cfg
    extra_hosts: *extra_hosts
  flower:
    <<: *airflow-common
    container_name: 'airflow_flower'
    command: celery flower
    ports:
      - 5555:5555
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    depends_on:
      - redis
      - postgres
and my docker-compose.yml for worker on another machine:
version: '3.4'
x-hosts: &extra_hosts
  postgres: XX.X.XX.XXX
  redis: XX.X.XX.XXX
x-airflow-common:
  &airflow-common
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
    AIRFLOW__CORE__DEFAULT_TIMEZONE: 'America/New_York'
    AIRFLOW__CORE__HOSTNAME_CALLABLE: 'airflow.utils.net.get_host_ip_address'
    AIRFLOW_WEBSERVER_DEFAULT_UI_TIMEZONE: 'America/New_York'
    AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
  volumes:
    - ./dags:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - ./plugins:/opt/airflow/plugins
    - ./assets:/opt/airflow/assets
    - ./airflow.cfg:/opt/airflow/airflow.cfg
    - /etc/hostname:/etc/hostname
  user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-0}"
  extra_hosts: *extra_hosts
services:
  worker_3:
    build: ./worker_config
    restart: always
    extra_hosts: *extra_hosts
    volumes:
      - ./airflow.cfg:/opt/airflow/airflow.cfg
      - ./dags:/opt/airflow/dags
      - ./assets:/opt/airflow/assets
      - ./logs:/opt/airflow/logs
      - /etc/hostname:/etc/hostname
    entrypoint: airflow celery worker -H worker_3
    environment:
      <<: *airflow-common-env
      WORKER_NAME: worker_147
    healthcheck:
      test: ['CMD-SHELL', '[ -f /usr/local/airflow/airflow-worker.pid ]']
      interval: 30s
      timeout: 30s
      retries: 3
  worker_4:
    build: ./worker_config_py2
    restart: always
    extra_hosts: *extra_hosts
    volumes:
      - ./airflow.cfg:/opt/airflow/airflow.cfg
      - ./dags:/opt/airflow/dags
      - ./assets:/opt/airflow/assets
      - ./logs:/opt/airflow/logs
      - /etc/hostname:/etc/hostname
    entrypoint: airflow celery worker -H worker_4_py2 -q py2
    environment:
      <<: *airflow-common-env
      WORKER_NAME: worker_4_py2
    healthcheck:
      test: ['CMD-SHELL', '[ -f /usr/local/airflow/airflow-worker.pid ]']
      interval: 30s
      timeout: 30s
      retries: 3
    ports:
      - 8793:8793
For this issue: "Failed to fetch log file from worker. [Errno -3] Temporary failure in name resolution"
It looks like the worker's hostname is not being resolved correctly. The webserver on the master needs to reach the worker to fetch the log and display it on the front-end page, which requires resolving the worker's hostname. Since that hostname cannot be found, add a hostname-to-IP mapping on the master, e.g. in /etc/hosts.
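One way to provide that mapping from Compose itself rather than editing /etc/hosts by hand (a sketch; the worker hostname and IP below are placeholders): extend the existing x-hosts anchor that the webserver already consumes via extra_hosts, so it can resolve the remote worker:

```yaml
x-hosts: &extra_hosts
  postgres: XX.X.XX.XXX
  redis: XX.X.XX.XXX
  # placeholder entry: the remote worker's hostname -> its host machine's IP
  worker_3: XX.X.XX.XXX
```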
You need one image that is used by all your containers except the message broker, the metadata database, and the worker monitor. The Dockerfile is below. Note also that if you use the LocalExecutor, the scheduler and the webserver must be on the same host.
Docker file:
FROM puckel/docker-airflow:1.10.9
COPY airflow/airflow.cfg ${AIRFLOW_HOME}/airflow.cfg
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
To fix it, first get the configuration file:
helm show values apache-airflow/airflow > values.yaml
Then check that fixPermissions is true. You need to enable persistent volumes:
# Enable persistent volumes
enabled: true
# Volume size for worker StatefulSet
size: 10Gi
# If using a custom storageClass, pass name ref to all statefulSets here
storageClassName:
# Execute init container to chown log directory.
fixPermissions: true
Update your installation with:
helm upgrade --install airflow apache-airflow/airflow -n ai

docker containers in different named networks: curl: (7) Failed to connect to localhost port 9080: Connection refused

Problem Definition
I am trying to use two docker-compose.yml files (each in a separate directory) on the same host machine, one for Airflow and the other for another application. I have put Airflow's containers in the same named network as my other app (see the compose files below) and confirmed with docker network inspect that the Airflow containers are in that network. However, when I curl the my_keycloak server from the airflow-worker container, I get the following error:
Error
Failed to connect to localhost port 9080: Connection refused
Files
Airflow docker-compose.yml
version: '3'
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.1.0}
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
  #added working directory and scripts folder 6-26-2021 CP
  volumes:
    - ./dags:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - ./plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-50000}"
  depends_on:
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - postgres-db-volume:/var/lib/postgresql/data
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "airflow"]
      interval: 5s
      retries: 5
    restart: always
  redis:
    image: redis:latest
    ports:
      - 6379:6379
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 30s
      retries: 50
    restart: always
  airflow-webserver:
    <<: *airflow-common
    command: webserver
    #changed from default of 8080 because of clash with baton docker services 6-26-2021 CP
    ports:
      - 50309:8080
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:50309/health"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-worker:
    <<: *airflow-common
    command: celery worker
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test:
        - "CMD-SHELL"
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-init:
    <<: *airflow-common
    command: version
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
  flower:
    <<: *airflow-common
    command: celery flower
    ports:
      - 5555:5555
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
volumes:
  postgres-db-volume:
#added baton network so that airflow can communicate with baton cp 6-28-2021
networks:
  baton_docker_files_tempo:
    external: true
The other app's docker-compose file:
version: "3.7"
services:
  db:
    image: artifactory.redacted.com/docker/postgres:11.3
    ports:
      - 11101:5432
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: keycloaks156
    networks:
      - tempo
  keycloak:
    image: registry.git.redacted.com/tempo23/tempo23-server/keycloak:${TEMPO_VERSION:-develop}
    container_name: my_keycloak
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      KEYCLOAK_DEFAULT_THEME: redacted
      KEYCLOAK_WELCOME_THEME: redacted
      PROXY_ADDRESS_FORWARDING: 'true'
      KEYCLOAK_FRONTEND_URL: http://localhost:9080/auth
      DB_VENDOR: postgres
      DB_ADDR: db
      DB_USER: postgres
      DB_PASSWORD: postgres
    ports:
      - 9080:8080
    networks:
      - tempo
    depends_on:
      - db
  db-migrate:
    image: registry.git.redacted.com/tempo23/tempo23-server/db-migrate:${TEMPO_VERSION:-develop}
    command: "-url=jdbc:postgresql://db:5432/ -user=postgres -password=postgres -connectRetries=60 migrate"
    restart: on-failure:3
    depends_on:
      - db
    networks:
      - tempo
  keycloak-bootstrap:
    image: registry.git.redacted.com/tempo23/tempo23-server/server-full:${TEMPO_VERSION:-develop}
    command: ["keycloakBootstrap", "--config", "conf/single.conf"]
    depends_on:
      - db
    restart: on-failure:10
    networks:
      - tempo
  server:
    image: registry.git.redacted.com/tempo23/tempo23-server/server:${TEMPO_VERSION:-develop}
    command: [ "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005", "conf/single.conf" ]
    environment:
      AKKA_PARALLELISM_MAX: "2"
      DB_THREADPOOL_SIZE: "4"
      UNSAFE_ENABLED: "true"
      DOCKER_BIND_HOST_ROOT: "${BIND_ROOT}"
      DOCKER_BIND_CONTAINER_ROOT: "/var/lib/tempo2"
      MESSAGING_HOST: "server"
      PUBSUB_TYPE: inmem
      TEMPOJOBS_DOCKER_TAG: registry.git.redacted.com/tempo23/tempo23-server/tempojobs:${TEMPO_VERSION:-develop}
      NUM_WORKER: 1
      ASSET_CACHE_SIZE: 500M
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - "${BIND_ROOT}:/var/lib/tempo2"
    ports:
      - 2551:2551 # akka port
      - 8080:8080 # application http port
      - 8081:8081 # executor http port
      - 5005:5005 # debug port
    networks:
      - tempo
    restart: always
    depends_on:
      - db
networks:
  tempo:
Read the documentation on ports carefully: it exposes a container port on a host port. Between services on the same network, you can reach a service at service-name:port, in this case keycloak:8080 instead of localhost:9080.
It does not matter where each container is defined (any docker-compose file on the same machine); the only thing that matters is the network. As you mentioned, they are on the same network, so they can see each other. The point of confusion is that containers are isolated from one another, so instead of localhost you should use the container name, and the container-side port, in the curl.
Try running:
curl keycloak:8080

docker - can't change files (wso2)

I'm using Docker 18.04 and running the WSO2 IoT server. I want to change the IP address using this tutorial: https://docs.wso2.com/display/IOTS330/Configuring+the+IP+or+Hostname . I'm using the attached docker-compose file. I create a container with
sudo docker-compose up
Then I run
sudo docker exec -it -u 0 <container-id> bash
navigate to the script directory, and execute the script.
After this, files like conf/carbon.xml are changed and everything looks good. But if I restart the container with
docker container restart $(docker ps -a -q)
all changes are discarded. The strange thing is that if I create a new file, e.g. in the conf directory, that file remains, even after a restart.
Can someone explain this to me?
version: '2.3'
services:
  wso2iot-mysql:
    image: mysql:5.7.20
    container_name: wso2iot-mysql
    hostname: wso2iot-mysql
    ports:
      - 3306
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./mysql/scripts:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-uroot", "-proot"]
      interval: 10s
      timeout: 60s
      retries: 5
  wso2iot-broker:
    image: wso2iot-broker:3.3.0
    container_name: wso2iot-broker
    hostname: wso2iot-broker
    ports:
      - "9446:9446"
      - "5675:5675"
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "9446"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./broker:/home/wso2carbon/volumes/wso2/broker
  wso2iot-analytics:
    image: wso2iot-analytics:3.3.0
    container_name: wso2iot-analytics
    hostname: wso2iot-analytics
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:9445/carbon/admin/login.jsp"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./analytics:/home/wso2carbon/volumes/wso2/analytics
    ports:
      - "9445:9445"
  wso2iot-server:
    image: wso2iot-server:3.3.0
    container_name: wso2iot-server
    hostname: wso2iot-server
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:9443/carbon/admin/login.jsp"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./iot-server:/home/wso2carbon/volumes
    ports:
      - "443:9443"
    links:
      - wso2iot-mysql
As far as I know, the writable layer should persist until the container is deleted, so a restart alone would not normally discard changes. Either way, this is not the expected way of using Docker: if you need to run the change-ip script, it would be better to build a new Docker image in which the script is executed as part of image creation.
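A minimal sketch of that approach (the base image tag and the script path are assumptions; substitute the actual wso2iot-server image and the location of the change-ip script from the tutorial):

```dockerfile
# Hypothetical example: bake the configuration change into the image
# instead of mutating a running container.
FROM wso2iot-server:3.3.0
# Run the vendor's change-ip script at build time so that every container
# started from this image already has the updated conf/carbon.xml.
RUN /home/wso2carbon/scripts/change-ip.sh
```

Containers started from the resulting image then survive restarts and recreation with the configuration already applied.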
