I've been trying to reach my web app, started from my docker-compose file, but nothing is appearing. It works when I serve the app locally, but not through Docker. I'm using Rust and actix-web for my backend, set to localhost on port 8080, and I'm exposing the ports in docker-compose, but it still isn't working.
My docker-compose file:
services:
  database:
    image: postgres
    restart: always
    expose:
      - 5432
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: zion
      PGDATA: /var/lib/postgresql/data/
    healthcheck:
      test: ["CMD", "pg_isready", "-d", "zion", "-U", "postgres"]
      timeout: 25s
      interval: 10s
      retries: 5
    networks:
      - postgres-compose-network
  test:
    build:
      context: .
      dockerfile: Dockerfile
    entrypoint: ./test-entrypoint.sh
    depends_on:
      database:
        condition: service_healthy
    networks:
      - postgres-compose-network
  server:
    build:
      context: .
      dockerfile: Dockerfile
    entrypoint: ./run-entrypoint.sh
    restart: always
    expose:
      - 8080
    ports:
      - 8080:8080
    depends_on:
      database:
        condition: service_healthy
    networks:
      - postgres-compose-network
networks:
  postgres-compose-network:
    driver: bridge
My backend main.rs:
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));
    log::info!("starting HTTP server at http://localhost:8080");

    let secret_key = Key::generate();
    let pool = establish_connection();
    log::info!("database connection established");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            .wrap(middleware::Logger::default())
            .wrap(IdentityMiddleware::default())
            .configure(database::routes::user::configure)
            .service(
                Files::new("/", "../frontend/dist")
                    .prefer_utf8(true)
                    .index_file("index.html"),
            )
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
The answer, as provided by @David-Maze, was to bind my app's address to 0.0.0.0 instead of 127.0.0.1.
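Concretely, the fix is the bind line in the main.rs above:

    .bind(("0.0.0.0", 8080))?

Binding to 0.0.0.0 makes the server listen on all interfaces inside the container, so the port published by docker-compose (8080:8080) can reach it; 127.0.0.1 is only reachable from within the container itself.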
Related
I created a secret "databasePassword" using the below command:
echo 123456 | docker secret create databasePassword -
Below is my yml file that creates MySQL and phpMyAdmin, where I'm trying to use this secret:
version: '3.1'
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: unless-stopped
    container_name: db-mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'false'
      MYSQL_ROOT_PASSWORD: /run/secrets/databasePassword
    ports:
      - 3306:3306
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
    secrets:
      - databasePassword
  beyond-phpmyadmin:
    image: phpmyadmin
    restart: unless-stopped
    container_name: beyond-phpmyadmin
    environment:
      PMA_HOST: db-mysql
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
    links:
      - db
    ports:
      - 8081:80
But databasePassword is not getting set to 123456; it is set to the literal string "/run/secrets/databasePassword". I also tried using docker stack deploy, but that didn't work either.
Based on some web research, I tried declaring the secrets at the end of the file like below, but that didn't work either.
version: '3.1'
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: unless-stopped
    container_name: db-mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'false'
      MYSQL_ROOT_PASSWORD: /run/secrets/databasePassword
    ports:
      - 3306:3306
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
    secrets:
      - databasePassword
  beyond-phpmyadmin:
    image: phpmyadmin
    restart: unless-stopped
    container_name: beyond-phpmyadmin
    environment:
      PMA_HOST: db-mysql
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
    links:
      - db
    ports:
      - 8081:80
secrets:
  databasePassword:
    external: true
Docker cannot know that /run/secrets/databasePassword is not a literal value of the MYSQL_ROOT_PASSWORD variable but a path to a file that you would like the secret to be read from. That's not how secrets work: they are simply made available in a /run/secrets/<secret-name> file inside the container, and to use a secret, your container needs to read it from that file.
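You can see this from inside the running container; using the container_name db-mysql from your file, something like:

    docker exec db-mysql cat /run/secrets/databasePassword

should print 123456, confirming that the secret arrives as a file rather than as an environment value.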
Fortunately for you, the mysql image knows how to do this. Simply use MYSQL_ROOT_PASSWORD_FILE instead of MYSQL_ROOT_PASSWORD:
services:
  db:
    image: mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'false'
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/databasePassword
    secrets:
      - databasePassword
  ...
secrets:
  databasePassword:
    external: true
See "Docker Secrets" in the mysql image documentation.
I just learned how to use docker-compose and I'm having some problems dockerizing my php-magento project.
My project looks like this:
app (magento)
nginx
mysql
redis
I'm getting an error when I try to execute these lines, or when I add the Redis connection to the Magento env:
Dockerfile - app
Error - without redis
Error - with redis
But if I comment these lines out, it works fine and I can execute them after the container is up.
I imagine it's something with the containers' network, but it's just a guess; I already added depends_on and made sure that app starts after db and redis.
Can someone help?
Docker-compose:
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile-app
      args:
        ...
    volumes:
      ...
    ports:
      - 1000:80
    healthcheck:
      test: ["CMD", "wait-for-it", "-h", "localhost", "-p", "80", "-t", "1", "-q"]
      interval: 1m
      timeout: 10s
      retries: 3
      start_period: 60s
    environment:
      ...
    #depends_on:
    #  - nginx
    #entrypoint: ["sleep", "1200"]
  nginx:
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - "80:80"
    restart: on-failure
    volumes:
      ...
    environment:
      VIRTUAL_HOST: localhost
    #entrypoint: ["sleep", "1200"]
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      ...
    restart: always
  database:
    image: mysql:5.7
    ports:
      - 3306:3306
    environment:
      ...
    volumes:
      ...
volumes:
  ...
Problem Definition
I am trying to use two docker-compose.yml files (each in a separate directory) on the same host machine, one for Airflow and the other for another application. I have put Airflow's containers on the same named network as my other app (see the compose files below) and confirmed using docker network inspect that the Airflow containers are in the network. However, when I curl from the airflow-worker container to the my_keycloak server, I get the following error:
Error
Failed to connect to localhost port 9080: Connection refused
Files
Airflow docker-compose.yml
version: '3'
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.1.0}
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
  #added working directory and scripts folder 6-26-2021 CP
  volumes:
    - ./dags:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - ./plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-50000}"
  depends_on:
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - postgres-db-volume:/var/lib/postgresql/data
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "airflow"]
      interval: 5s
      retries: 5
    restart: always
  redis:
    image: redis:latest
    ports:
      - 6379:6379
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 30s
      retries: 50
    restart: always
  airflow-webserver:
    <<: *airflow-common
    command: webserver
    #changed from default of 8080 because of clash with baton docker services 6-26-2021 CP
    ports:
      - 50309:8080
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:50309/health"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-worker:
    <<: *airflow-common
    command: celery worker
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test:
        - "CMD-SHELL"
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-init:
    <<: *airflow-common
    command: version
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
  flower:
    <<: *airflow-common
    command: celery flower
    ports:
      - 5555:5555
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
volumes:
  postgres-db-volume:
#added baton network so that airflow can communicate with baton cp 6-28-2021
networks:
  baton_docker_files_tempo:
    external: true
The other app's docker-compose file:
version: "3.7"
services:
db:
image: artifactory.redacted.com/docker/postgres:11.3
ports:
- 11101:5432
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: keycloaks156
networks:
- tempo
keycloak:
image: registry.git.redacted.com/tempo23/tempo23-server/keycloak:${TEMPO_VERSION:-develop}
container_name: my_keycloak
environment:
KEYCLOAK_USER: admin
KEYCLOAK_PASSWORD: admin
KEYCLOAK_DEFAULT_THEME: redacted
KEYCLOAK_WELCOME_THEME: redacted
PROXY_ADDRESS_FORWARDING: 'true'
KEYCLOAK_FRONTEND_URL: http://localhost:9080/auth
DB_VENDOR: postgres
DB_ADDR: db
DB_USER: postgres
DB_PASSWORD: postgres
ports:
- 9080:8080
networks:
- tempo
depends_on:
- db
db-migrate:
image: registry.git.redacted.com/tempo23/tempo23-server/db-migrate:${TEMPO_VERSION:-develop}
command: "-url=jdbc:postgresql://db:5432/ -user=postgres -password=postgres -connectRetries=60 migrate"
restart: on-failure:3
depends_on:
- db
networks:
- tempo
keycloak-bootstrap:
image: registry.git.redacted.com/tempo23/tempo23-server/server-full:${TEMPO_VERSION:-develop}
command: ["keycloakBootstrap", "--config", "conf/single.conf"]
depends_on:
- db
restart: on-failure:10
networks:
- tempo
server:
image: registry.git.redacted.com/tempo23/tempo23-server/server:${TEMPO_VERSION:-develop}
command: [ "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005", "conf/single.conf" ]
environment:
AKKA_PARALLELISM_MAX: "2"
DB_THREADPOOL_SIZE: "4"
UNSAFE_ENABLED: "true"
DOCKER_BIND_HOST_ROOT: "${BIND_ROOT}"
DOCKER_BIND_CONTAINER_ROOT: "/var/lib/tempo2"
MESSAGING_HOST: "server"
PUBSUB_TYPE: inmem
TEMPOJOBS_DOCKER_TAG: registry.git.redacted.com/tempo23/tempo23-server/tempojobs:${TEMPO_VERSION:-develop}
NUM_WORKER: 1
ASSET_CACHE_SIZE: 500M
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- "${BIND_ROOT}:/var/lib/tempo2"
ports:
- 2551:2551 # akka port
- 8080:8080 # application http port
- 8081:8081 # executor http port
- 5005:5005 # debug port
networks:
- tempo
restart: always
depends_on:
- db
networks:
tempo:
Read carefully the docs on ports: they publish a container port on a host port.
Between services on the same network you can simply reach a service at service-name:port, in this case keycloak:8080 instead of localhost:9080.
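For example, the mapping in the keycloak service above is:

    ports:
      - 9080:8080   # host port : container port

so localhost:9080 works from the host machine, while other containers on the same network reach it at keycloak:8080.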
No matter where each container is defined (any docker-compose file on the same machine), the only thing that matters is the network. As you mentioned in your question, they are on the same network, so they can see each other; the misunderstanding is that containers are otherwise isolated from each other, so instead of localhost you should pass the container name to curl, together with the container port.
Try running:
curl keycloak:8080
I created a Docker container using the standard "image: postgres:13", but inside the container PostgreSQL doesn't start because there is no cluster. What could be the problem?
Thx for answers!
My docker-compose:
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.0
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.0/app
    ports:
      - '${APP_PORT:-80}:80'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - pgsql
  pgsql:
    image: 'postgres:13'
    ports:
      - '${FORWARD_DB_PORT:-5432}:5432'
    environment:
      PGPASSWORD: '${DB_PASSWORD:-secret}'
      POSTGRES_DB: '${DB_DATABASE}'
      POSTGRES_USER: '${DB_USERNAME}'
      POSTGRES_PASSWORD: '${DB_PASSWORD:-secret}'
    volumes:
      - 'sailpgsql:/var/lib/postgresql/data'
    networks:
      - sail
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "${DB_DATABASE}", "-U", "${DB_USERNAME}"]
      retries: 3
      timeout: 5s
networks:
  sail:
    driver: bridge
volumes:
  sailpgsql:
    driver: local
and I get an error when trying to contact the container:
SQLSTATE[08006] [7] could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
and inside the container, when I try to start or restart postgres, I get this message:
[warn] No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning).
You should not connect through localhost but use the container name as the host name.
So change your .env to contain
DB_CONNECTION=[what the name is in the config array]
DB_HOST=pgsql
DB_PORT=5432
DB_DATABASE=laravel
DB_USERNAME=[whatever you want]
DB_PASSWORD=[whatever you want]
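To sanity-check the database side independently of Laravel, you can run pg_isready inside the pgsql container (the tool ships with the postgres image); assuming the service name pgsql from the compose file above (use docker-compose exec with Compose v1):

    docker compose exec pgsql pg_isready

If that reports "accepting connections" while Laravel still fails, the problem is the DB_HOST value in .env rather than Postgres itself.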
This question already has answers here:
Docker Compose wait for container X before starting Y
(20 answers)
Closed 2 years ago.
I have a Java Spring Boot application which uses a MySQL DB. I want to start my Spring application only after MySQL is up and running (MySQL takes 40-60 seconds to come up). Please suggest how to achieve this.
Here is the compose file:
version: "3.8"
services:
mysql:
networks:
- my-network-1
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_ROOT_USER: root
MYSQL_DATABASE: mydb
expose:
- "3306"
my-spring:
depends_on:
- mysql
build:
context: .
dockerfile: dockerfile.dockerfile
networks:
- my-network-1
expose:
- "8080"
networks:
my-network-1:
driver: overlay
Here is the Dockerfile:
FROM openjdk:8u252-jdk
ARG JAR_FILE=/somepath/jar.jar
COPY ${JAR_FILE} my.jar
ENTRYPOINT ["java","-jar","my.jar"]
Currently I am getting a connection refused error.
Thanks
Adarsha
Add this under the mysql service in your compose file:
healthcheck:
  test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
  interval: 1m30s
  timeout: 20s
  retries: 10
so your compose file should look like this:
version: "3.8"
services:
mysql:
networks:
- my-network-1
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_ROOT_USER: root
MYSQL_DATABASE: mydb
expose:
- "3306"
healthcheck:
test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"]
interval: 1m30s
timeout: 20s
retries: 10
my-spring:
depends_on:
- mysql
build:
context: .
dockerfile: dockerfile.dockerfile
networks:
- my-network-1
expose:
- "8080"
networks:
my-network-1:
driver: overlay
If the above solution doesn't work, I would recommend going through these docs: https://docs.docker.com/compose/startup-order/
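One caveat: the short form depends_on: - mysql only waits for the MySQL container to be started, not for its healthcheck to pass. If your Compose version supports the long depends_on form (file format 2.x, or recent Docker Compose releases implementing the Compose Specification; it was dropped from the 3.x format), you can gate startup on health, for example:

    my-spring:
      depends_on:
        mysql:
          condition: service_healthy

Otherwise the healthcheck alone will not delay the Spring container, and you still need a wait script or retry logic in the application.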