timeout error for basic docker-compose commands - docker

I'm seeing a weird issue where basic docker-compose commands like ps and down time out.
$ docker-compose ps
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
There's no reason why this process should take anywhere near 60 seconds, normally it takes less than ten.
I found a Stack Overflow post where docker ps hangs forever after a server restart, but docker ps works fine for me, so I think this is specific to docker-compose. I also found other instances of the same error on Docker Mall (without a solution) and on Medium, where the only advice is to increase the timeout, which doesn't help me.
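For reference, the debugging step the error message suggests would look something like this (300 is just an arbitrary example value for the timeout; it's not a fix):
COMPOSE_HTTP_TIMEOUT=300 docker-compose --verbose ps
The --verbose output at least shows which API call to the Docker daemon is the one that hangs.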
Here's my docker-compose file:
---
version: '3.7'
services:
  assets:
    build:
      context: .
      args:
        NPM_TOKEN: "${NPM_TOKEN}"
        IS_LCL: "TRUE"
    container_name: foobar_assets
    volumes:
      - .:/app:delegated
      - /app/node_modules
      - "${MESSAGING_PATH:-./node_modules/@foobar/baz}:/app/local_modules/@foobar/baz"
    ports:
      - "4005:4005"
      # - "8888:8888"
    env_file:
      - .env
      - .env-overrides
    healthcheck:
      test: curl -f http://localhost:4005/assets-manifest.json && echo 'assets are ready!'
      interval: 2s
      timeout: 1s
      retries: 100
    entrypoint: ['./rsync-entrypoint.sh']
    command: ['/usr/local/bin/npm', 'run', 'dev:assets']
    init: true
  bff:
    build:
      context: .
      args:
        NPM_TOKEN: "${NPM_TOKEN}"
        IS_LCL: "TRUE"
    volumes:
      - .:/app:delegated
      - /app/node_modules
    container_name: foobar_bff
    links:
      - redis
    ports:
      - "4010:4010"
      - "9231:9231"
    env_file:
      - .env
      - .env-overrides
    depends_on:
      - assets
    entrypoint: ['./rsync-entrypoint.sh']
    command: ['/usr/local/bin/npm', 'run', 'dev:server']
    init: true
  redis:
    image: redis:2.8
    container_name: foobar_redis
networks:
  default:
    external:
      name: lcl.foobar.io
I'm running Docker for Mac 2.1.0.3 and have tried restarting it as well as my entire Mac. This solves the problem temporarily, but then it recurs.

Related

Not all docker containers are starting on server startup/reboot

We have Ubuntu servers that automatically run Docker on startup, and in the docker-compose file the containers have restart: always. One container depends on the other two, and this is the one that doesn't boot up correctly. If I run docker-compose restart it does show as running.
When I run docker-compose logs control-php, the logs are empty.
docker-compose ps on startup results in:
control-php   docker-php-entrypoint /hom ...   Exit 127
db            docker-entrypoint.sh mysqld      Up 3306/tcp
redis         docker-entrypoint.sh redis ...   Up 6379/tcp
control-php depends on db and redis (which are both running). I did try introducing healthchecks on db and redis and adding them to the depends_on for control-php, but this didn't seem to change anything. In fact, when I ran docker-compose ps I could see the db and control-php condition was running; after some time they became healthy, but web and control-php still showed as 'Exit 127'.
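For reference, Exit 127 usually means the container's command was not found. Commands like these (using the container name from the output above) can help confirm what the entrypoint actually tried to run on boot:
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' control-php
docker logs control-php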
Here's my docker-compose config:
version: '3'
services:
  db:
    container_name: db
    image: mariadb:10.3.4
    restart: always
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_DATABASE: control
      MYSQL_ROOT_PASSWORD: password
  redis:
    container_name: redis
    image: redis:3.2.11
    restart: always
    volumes:
      - redis_data:/data
  control-php:
    container_name: control-php
    image: livebuzzevents/php:e4d22a4
    environment:
      - APP_ENV
      - NODE
      - REDIS_CLIENT="predis"
      - REDIS_HOST="redis"
    restart: always
    volumes:
      - /home/livebuzz/code/control:/var/www/control
      - /home/livebuzz/onsite-setup/control/crontab:/home/crontab
      - /home/livebuzz/onsite-setup/control/nuke-locks.php:/home/nuke-locks.php
    depends_on:
      - db
      - redis
volumes:
  db_data:
  redis_data:
Any help would be appreciated.
Thanks,
Jack

Docker image working on pull but not via the image directive in yml file?

I have a Docker image on a GitLab registry.
When I run (after logging in on the target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear work, and when I enter the container everything looks fine.
But I don't have any services running, so I had the idea to create a yml file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then fails ... exiting with code 0 - no further message.
If I add commands to my yml like php artisan config:clear, the error gets even more unclear for me: it says it cannot find artisan, and it seems as if the command is executed outside the container ... exiting with code 1. (artisan is a helper and is executed via php.)
When I call docker-compose with -d and then do docker ps, I can only see mysql running but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem is that I left a volume directive in place which overwrites my entire application with an empty directory.
You can just leave that out:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application  ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the networking of the containers by listing the networks with docker network ls,
then inspecting the Compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If they are in the same network, try using the container name instead of localhost to reach each other.
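Put together, that debugging sequence might look roughly like this (the network ID is a placeholder taken from the docker network ls output):
docker network ls
docker inspect <ComposeNetworkID>
docker-compose -f docker-compose-gitlab.yml down
docker-compose -f docker-compose-gitlab.yml up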

Getting Segmentation Fault on running hyperledger/explorer in docker container

I am getting a segmentation fault, and the container exits with code 139, when running the hyperledger-explorer Docker image.
docker-compose file for creating explorer-db:
version: "2.1"
volumes:
data:
walletstore:
pgadmin_4:
external: true
networks:
mynetwork.com:
external:
name: bikeblockchain_network
services:
explorerdb.mynetwork.com:
image: hyperledger/explorer-db:V1.0.0
container_name: explorerdb.mynetwork.com
hostname: explorerdb.mynetwork.com
restart: always
ports:
- 54320:5432
environment:
- DATABASE_DATABASE=fabricexplorer
- DATABASE_USERNAME=hppoc
- DATABASE_PASSWORD=password
healthcheck:
test: "pg_isready -h localhost -p 5432 -q -U postgres"
interval: 30s
timeout: 10s
retries: 5
volumes:
- data:/var/lib/postgresql/data
networks:
mynetwork.com:
aliases:
- postgresdb
pgadmin:
image: dpage/pgadmin4
restart: always
environment:
PGADMIN_DEFAULT_EMAIL: user#domain.com
PGADMIN_DEFAULT_PASSWORD: SuperSecret
PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION: "True"
# PGADMIN_CONFIG_LOGIN_BANNER: "Authorized Users Only!"
PGADMIN_CONFIG_CONSOLE_LOG_LEVEL: 10
volumes:
- "pgadmin_4:/var/lib/pgadmin"
ports:
- 8080:80
networks:
- mynetwork.com
docker-compose-explorer file
version: "2.1"
volumes:
data:
walletstore:
external: true
pgadmin_4:
external: true
networks:
mynetwork.com:
external:
name: bikeblockchain_network
services:
explorer.mynetwork.com:
image: hyperledger/explorer:V1.0.0
container_name: explorer.mynetwork.com
hostname: explorer.mynetwork.com
# restart: always
environment:
- DATABASE_HOST=xx.xxx.xxx.xxx
#Host is VM IP address with ports exposed for postgres. No issues here
- DATABASE_PORT=54320
- DATABASE_DATABASE=fabricexplorer
- DATABASE_USERNAME=hppoc
- DATABASE_PASSWD=password
- LOG_LEVEL_APP=debug
- LOG_LEVEL_DB=debug
- LOG_LEVEL_CONSOLE=info
# - LOG_CONSOLE_STDOUT=true
- DISCOVERY_AS_LOCALHOST=false
volumes:
- ./config.json:/opt/explorer/app/platform/fabric/config.json
- ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
- ./examples/net1/crypto:/tmp/crypto
- walletstore:/opt/wallet
- ./crypto-config/:/etc/data
command: sh -c "node /opt/explorer/main.js && tail -f /dev/null"
ports:
- 6060:6060
networks:
- mynetwork.com
error
Attaching to explorer.mynetwork.com
explorer.mynetwork.com | Segmentation fault
explorer.mynetwork.com exited with code 139
Postgres is working fine. Docker is updated to the latest version.
Fabric network being used is generated inside IBM Blockchain VS Code extension.
I too faced the same problem with the Docker images; I had success with a manual start.sh, but not with the Docker image. After some exploration, I came to know this is related to the architecture the image was built for, and there seems to be a segmentation fault issue in the latest v1.0.0 container image.
This has been fixed on the latest master branch, but it has not yet been released on Docker Hub.
Please build the Explorer container image yourself using build_docker_image.sh locally for the time being.
from the HLF forum
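As a rough sketch, assuming build_docker_image.sh sits at the root of the hyperledger/blockchain-explorer repository (paths and options may differ in your checkout), building locally looks like:
git clone https://github.com/hyperledger/blockchain-explorer.git
cd blockchain-explorer
./build_docker_image.sh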
Okay!! So I did some testing and found that if Docker is set to run on Windows login, Explorer throws a segmentation fault error, but if I manually start Docker after Windows login, it works well. Strange!!

Docker-Compose: how to wait for other service to be ready?

I have the following docker-compose file, where I need to wait for the service jhipster-registry to be up and accepting connections before starting myprogram-app.
I tried the healthcheck way, following the official doc https://docs.docker.com/compose/compose-file/compose-file-v2/
version: '2.1'
services:
  myprogram-app:
    image: myprogram
    mem_limit: 1024m
    environment:
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/config
      - SPRING_DATASOURCE_URL=jdbc:postgresql://myprogram-postgresql:5432/myprogram
      - JHIPSTER_SLEEP=0
      - SPRING_DATA_ELASTICSEARCH_CLUSTER_NODES=myprogram-elasticsearch:9300
      - JHIPSTER_REGISTRY_PASSWORD=53bqDrurQAthqrXG
      - EMAIL_USERNAME
      - EMAIL_PASSWORD
    ports:
      - 8080:8080
    networks:
      - backend
    depends_on:
      - jhipster-registry:
          "condition": service_started
      - myprogram-postgresql
      - myprogram-elasticsearch
  myprogram-postgresql:
    image: postgres:9.6.5
    mem_limit: 256m
    environment:
      - POSTGRES_USER=myprogram
      - POSTGRES_PASSWORD=myprogram
    networks:
      - backend
  myprogram-elasticsearch:
    image: elasticsearch:2.4.6
    mem_limit: 512m
    networks:
      - backend
  jhipster-registry:
    extends:
      file: jhipster-registry.yml
      service: jhipster-registry
    mem_limit: 512m
    ports:
      - 8761:8761
    networks:
      - backend
    healthcheck:
      test: "exit 0"
networks:
  backend:
    driver: "bridge"
but I get the following error when running docker-compose up:
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.myprogram-app.depends_on contains {"jhipster-registry": {"condition": "service_started"}}, which is an invalid type, it should be a string
Am I doing something wrong, or is this feature no longer supported? How can I achieve this synchronization between services?
Updated version
version: '2.1'
services:
  myprogram-app:
    image: myprogram
    mem_limit: 1024m
    environment:
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/config
      - SPRING_DATASOURCE_URL=jdbc:postgresql://myprogram-postgresql:5432/myprogram
      - JHIPSTER_SLEEP=0
      - SPRING_DATA_ELASTICSEARCH_CLUSTER_NODES=myprogram-elasticsearch:9300
      - JHIPSTER_REGISTRY_PASSWORD=53bqDrurQAthqrXG
      - EMAIL_USERNAME
      - EMAIL_PASSWORD
    ports:
      - 8080:8080
    networks:
      - backend
    depends_on:
      jhipster-registry:
        condition: service_healthy
      myprogram-postgresql:
        condition: service_started
      myprogram-elasticsearch:
        condition: service_started
    #restart: on-failure
  myprogram-postgresql:
    image: postgres:9.6.5
    mem_limit: 256m
    environment:
      - POSTGRES_USER=myprogram
      - POSTGRES_PASSWORD=tuenemreh
    networks:
      - backend
  myprogram-elasticsearch:
    image: elasticsearch:2.4.6
    mem_limit: 512m
    networks:
      - backend
  jhipster-registry:
    extends:
      file: jhipster-registry.yml
      service: jhipster-registry
    mem_limit: 512m
    ports:
      - 8761:8761
    networks:
      - backend
    healthcheck:
      test: ["CMD", "curl", "-f", "http://jhipster-registry:8761", "|| exit 1"]
      interval: 30s
      retries: 20
      #start_period: 30s
networks:
  backend:
    driver: "bridge"
The updated version gives me a different error,
ERROR: for myprogram-app Container "8ebca614590c" is unhealthy.
ERROR: Encountered errors while bringing up the project.
saying that the jhipster-registry container is unhealthy, even though it's reachable via the browser. How can I fix the healthcheck command to make it work?
Best Approach - Resilient App Starts
While Docker does support startup dependencies, the official recommendation is to update your app's start logic to test for the availability of external dependencies and retry. Besides circumventing the race condition in docker-compose up, this makes applications more robust when dependencies restart in the wild on the fly.
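As a minimal sketch of that idea at the entrypoint level (assuming curl is available in the app image and the dependency is the registry at jhipster-registry:8761; the final exec is a placeholder for however your image normally starts):
#!/bin/sh
# Keep retrying the dependency until it answers, then hand off to the real command.
until curl -fsS http://jhipster-registry:8761 > /dev/null; do
  echo "jhipster-registry not ready yet, retrying in 2s..."
  sleep 2
done
exec "$@"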
depends_on & service_healthy - Compose 1.27.0+
The condition form of depends_on is back in Docker Compose v1.27.0+ (it had been dropped from the v3 file format) as part of the Compose Specification.
Each dependency should also define a healthcheck so that it can report when it is fully set up and ready for downstream services.
version: '3.0'
services:
  php:
    build:
      context: .
      dockerfile: tests/Docker/Dockerfile-PHP
    depends_on:
      redis:
        condition: service_healthy
  redis:
    image: redis
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 1s
      timeout: 3s
      retries: 30
wait-for-it.sh
The approach recommended in Docker's docs on Control startup and shutdown order in Compose is to download wait-for-it.sh, which polls a given host:port and then executes the next command once the port is reachable.
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
db:
image: postgres
Note: this requires overriding the startup command of the image, so make sure you know what the default command was in order to keep parity with the default startup.
Further Reading
Docker Compose wait for container X before starting Y
Difference between links and depends_on in docker_compose.yml
How can I wait for a docker container to be up and running?
Docker Compose Wait til dependency container is fully up before launching
depends_on doesn't wait for another service in docker-compose 1.22.0
The documentation suggests that, in Docker Compose version 2 files specifically, depends_on: can be a list of strings, or a mapping where the keys are service names and the values are conditions. For the services where you don't have (or need) health checks, there is a service_started condition.
depends_on:
  # notice: these lines don't start with "-"
  jhipster-registry:
    condition: service_healthy
  myprogram-postgresql:
    condition: service_started
  myprogram-elasticsearch:
    condition: service_started
Depending on how much control you have over your program and its libraries, it's better still if you can arrange for the service to start without its dependencies necessarily being available (equivalently, to keep functioning if its dependencies die while the service is running), and not use the depends_on: option at all. You might return an HTTP 503 Service Unavailable error if the database is down, for instance. Another strategy that is often helpful is to exit immediately if your dependencies aren't available, and use a setting like restart: on-failure to ask the orchestrator to restart the service.
Update to version 3+.
Please follow the docs for version 3. There are several things to be aware of when using depends_on:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it. Version 3 no longer supports the condition form of depends_on.
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
I would consider using the restart_policy option for configuring your myprogram-app to restart until the jhipster-registry is up and accepting connections:
restart_policy:
  condition: on-failure
  delay: 3s
  max_attempts: 5
  window: 60s
With the new docker compose API, we can now use the new --wait option:
docker compose up --wait
If your service has a healthcheck, Docker waits until it has the "healthy" status; otherwise, it waits for the service to be started. That's why it is crucial to have relevant healthchecks for all your services.
Note that this option automatically activates the --detach option.
Check out the documentation here.
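If --wait seems to hang, it is usually one service's healthcheck that never turns healthy. A quick way to see the current health state of a suspect container (the container name is a placeholder):
docker compose ps
docker inspect --format '{{json .State.Health}}' <container-name>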
The best approach I found is to check for the desired port in the entrypoint. There are different ways to do that, e.g. wait-for-it, but I like to use this solution, which is cross-platform between alpine and bash images and doesn't download custom scripts from GitHub:
Install netcat-openbsd (works with apt and apk; see the install sketch after the snippet below). Then in the entrypoint (works with both #!/bin/bash and #!/bin/sh):
#!/bin/bash
wait_for()
{
  echo "Waiting $1 seconds for $2:$3"
  timeout $1 sh -c 'until nc -z $0 $1; do sleep 0.1; done' $2 $3 || return 1
  echo "$2:$3 available"
}
wait_for 10 db 5432
wait_for 10 redis 6379
You can also make this into a 1-liner if you don't want to print anything.
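For the install step mentioned above, the package is called netcat-openbsd on both Debian- and Alpine-based images; a hedged sketch of the corresponding Dockerfile lines (base images are placeholders):
# Debian/Ubuntu-based image
RUN apt-get update && apt-get install -y --no-install-recommends netcat-openbsd
# Alpine-based image
RUN apk add --no-cache netcat-openbsd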
Although you already got an answer, it should be mentioned that what you are trying to achieve has some nasty risks.
Ideally a service should be self-sufficient and smart enough to retry and wait for its dependencies to become available (before going down). Otherwise you are more exposed to one failure propagating to other services. Also consider that a system reboot, unlike a manual start, might ignore the dependency order.
If one service crash can cause your whole system to go down, you might have a tool to restart everything again, but it would be better to have services that can withstand that case.
After trying several approaches, IMO the simplest and most elegant option is using the jwilder/dockerize (dockerize) utility image with its -wait flag. Here is a simple example where I need a PostgreSQL database to be ready before starting my app:
version: "3.8"
services:
# Start Postgres.
db:
image: postgres
# Wait for Postgres to be joinable.
check-db-started:
image: jwilder/dockerize:0.6.1
depends_on:
- db
command: 'dockerize -wait=tcp://db:5432'
# Only start myapp once Postgres is joinable.
myapp:
image: myapp:latest
depends_on:
- check-db-started

Docker healthcheck in compose file

I'm trying to integrate the new healthcheck into my Docker setup, but I don't really know how to do it the right way :/
The problem is that my database container needs more time to start up and initialize the database than the container that starts my main application.
As a result, the main container won't start correctly because the database connection is missing.
I wrote a healthcheck.sh script to check the database container for connectivity, so the main container only starts booting once connectivity is available. But I don't know how to integrate it correctly into the Dockerfile and my docker-compose.yml.
healthcheck.sh is like:
#!/bin/bash
COUNTER=0
while [[ $COUNTER = 0 ]]; do
  mysql --host=HOST --user="user" --password="password" --database="databasename" --execute="SELECT 1";
  if [[ $? == 1 ]]; then
    sleep 1
    echo "Let's sleep again"
  else
    COUNTER=1
    echo "OK, let's go!"
  fi
done
mysql container Dockerfile:
FROM repository/mysql-5.6:latest
MAINTAINER Me
... some copies, chmod and so on
VOLUME ["/..."]
EXPOSE 3306
CMD [".../run.sh"]
HEALTHCHECK --interval=1s --timeout=3s CMD ./healthcheck.sh
docker-compose.yml like:
version: '2'
services:
  db:
    image: db image
    restart: always
    dns:
      - 10.
    ports:
      - "${MYSQL_EXTERNAL_PORT}:${MYSQL_INTERNAL_PORT}"
    environment:
      TZ: Europe/Berlin
  data:
    image: data image
  main application:
    image: application image
    restart: always
    dns:
      - 10.
    ports:
      - "${..._EXTERNAL_PORT}:${..._INTERNAL_PORT}"
    environment:
      TZ: Europe/Berlin
    volumes:
      - ${HOST_BACKUP_DIR}:/...
    volumes_from:
      - data
      - db
What do I have to do to integrate this healthcheck into my docker-compose.yml file so that it works?
Or is there another way to delay the startup of my main container?
Thanks, Markus
I believe this is similar to Docker Compose wait for container X before starting Y
Your db_image needs to support curl.
To do that, create your own db_image as:
FROM base_image:latest
RUN apt-get update
RUN apt-get install -y curl
EXPOSE 3306
Then all you should need is a docker-compose.yml that looks like this:
version: '2'
services:
  db:
    image: db_image
    restart: always
    dns:
      - 10.
    ports:
      - "${MYSQL_EXTERNAL_PORT}:${MYSQL_INTERNAL_PORT}"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:${MYSQL_INTERNAL_PORT}"]
      interval: 30s
      timeout: 10s
      retries: 5
    environment:
      TZ: Europe/Berlin
  main_application:
    image: application_image
    restart: always
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
    dns:
      - 10.
    ports:
      - "${..._EXTERNAL_PORT}:${..._INTERNAL_PORT}"
    environment:
      TZ: Europe/Berlin
    volumes:
      - ${HOST_BACKUP_DIR}:/...
    volumes_from:
      - data
      - db
In general your application should be able to cope with unavailable resources, but there are also some cases when starting up where it is pretty convenient to have one container wait for another to be "fully available". Docker itself doesn't handle that for you, but there are ways to handle the startup in the resource-using container by delaying the actual command with some script.
There is a good example of a PostgreSQL startup check that can be used in any container that needs to wait for the database to be "fully started". Please see the sample code in the Docker docs: https://docs.docker.com/compose/startup-order/
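The script on that page is roughly along these lines (a sketch based on the linked docs' PostgreSQL example; adapt the client command if your database is MySQL, as in the question):
#!/bin/sh
# wait-for-postgres.sh: block until Postgres accepts connections, then run the real command.
set -e

host="$1"
shift

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec "$@"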
Since docker-compose 1.10.0 you can specify healthchecks in your compose file: https://github.com/docker/docker.github.io/blob/master/compose/compose-file.md#healthcheck
It makes use of https://docs.docker.com/engine/reference/builder/#/healthcheck, which was introduced with Docker 1.12.
