Container db1479aa5900ccb7acb83db7e3f4f1a5351f4ab1609edca36e41cb01a39d9e0e is not running - docker

I have my docker-compose.yml with the code below, where I have added two services. It builds successfully with docker-compose up -d --build:
version: "2"
services:
expo:
build:
context: ./dev
dockerfile: Dockerfile
container_name: sis-expo
working_dir: /home/node/app
volumes:
- .:/home/node/app
environment:
- NODE_ENV=development
#- EXPO_DEVTOOLS_LISTEN_ADDRESS=0.0.0.0
- ADB_IP=192.168.1.1
- REACT_NATIVE_PACKAGER_HOSTNAME=192.168.1.33
expose:
- "19000"
- "19001"
- "19002"
ports:
- 19000:19000
- 19001:19001
- 19002:19002
tty: true
firebase:
build:
context: ./dev
dockerfile: firebase.dockerfile
container_name: sis-expo-firebase
ports:
- 4000:4000 # Emulator Suite UI
- 5000:5000 # Firebase Hosting
- 5001:5001 # Clound Functions
- 9000:9000 # Realtime Database
- 8080:8080 # Cloud Firestore
- 8085:8085 # Cloud Pub/Sub
- 9005:9005
command: "echo this is a test"
I then run docker-compose up -d to start the containers and check with docker ps, but the firebase container isn't there. When I try docker ps -a, it shows that the container exited:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
511c3cf9163c sis-mobile_firebase "docker-entrypoint.s…" About a minute ago Exited (0) 11 seconds ago sis-expo-firebase
Its logs can be viewed with docker logs <id>:
this is a test
Please let me know if you need further information; I am not sure this is enough to truly pin down my problem. Thanks a lot.
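For reference, Exited (0) is expected here: command: "echo this is a test" replaces whatever the image would normally run, the echo finishes immediately, and Docker keeps a container up only as long as its main process is running. A minimal sketch of a firebase service that stays up; the image's real entrypoint is not shown in the question, so the tail command below is only a placeholder for a long-running process:

firebase:
  build:
    context: ./dev
    dockerfile: firebase.dockerfile
  container_name: sis-expo-firebase
  ports:
    - 4000:4000
  # Either drop the `command:` override so the image's own long-running
  # entrypoint (e.g. the emulator suite) keeps the container alive, or
  # give it a process that does not exit; this placeholder just blocks.
  command: "tail -f /dev/null"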

Related

When scaling, run some docker container startup commands only once

I have the following docker-compose.yml.
version: "3.1"
services:
db:
container_name: ${MYSQL_CONTAINER}
image: mysql:5.7.30
volumes:
- ${VOLUMES_DIR}/mysql_data:/var/lib/mysql
- ./slow_log.cnf:/etc/mysql/my.cnf
- ${VOLUMES_DIR}/mysql_logs:/var/log/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_USER=${MYSQL_USER}
ports:
- ${MYSQL_PORT}:3306
entrypoint: ""
command: bash -c "chown -R mysql:mysql /var/log/mysql && exec /entrypoint.sh mysqld --default-authentication-plugin=mysql_native_password"
restart: on-failure
backend:
container_name: ${BACKEND_CONTAINER}
image: ${BACKEND_IMAGE}
depends_on:
- db
ports:
- ${BACKEND_PORT}
command: >
bash -c "command A
&& command B
&& ... "
restart: unless-stopped
I am scaling the backend service, so my startup command is sudo docker-compose -p ${COMPOSE_PROJECT_NAME} up -d --scale backend=10.
The problem I am facing is that command A and command B in the backend service run on startup of all 10 containers (i.e. they are run 10 times).
But I want command A to run only once across all backend containers, while command B should still run in every container.
Any suggestions on accomplishing this?
I'm not entirely sure that there would be an out-of-the-box solution for your requirement.
However, I can suggest a workaround: duplicate your backend service in docker-compose and run one backend service with both command A and command B, while the other backend runs only command B.
Then, when you want to scale, you scale the backend that has only command B.
version: "3.1"
services:
db:
container_name: ${MYSQL_CONTAINER}
image: mysql:5.7.30
volumes:
- ${VOLUMES_DIR}/mysql_data:/var/lib/mysql
- ./slow_log.cnf:/etc/mysql/my.cnf
- ${VOLUMES_DIR}/mysql_logs:/var/log/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_USER=${MYSQL_USER}
ports:
- ${MYSQL_PORT}:3306
entrypoint: ""
command: bash -c "chown -R mysql:mysql /var/log/mysql && exec /entrypoint.sh mysqld --default-authentication-plugin=mysql_native_password"
restart: on-failure
backend_default:
container_name: ${BACKEND_CONTAINER}
image: ${BACKEND_IMAGE}
depends_on:
- db
ports:
- ${BACKEND_PORT}
command: >
bash -c "command A
&& command B
&& ... "
restart: unless-stopped
backend:
container_name: ${BACKEND_CONTAINER}
image: ${BACKEND_IMAGE}
depends_on:
- db
ports:
- ${BACKEND_PORT}
command: >
bash -c "command B
&& ... "
restart: unless-stopped
Now you can use the scale option like below:
sudo docker-compose -p ${COMPOSE_PROJECT_NAME} up -d --scale backend=9
If there happens to be a scenario where you need only one backend running, you can use profiles in docker-compose so that backend only starts when a specific profile is passed to the docker-compose command. That means only backend_default will run if that profile is not given, and hence the effective scale is 1.
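A minimal sketch of that profiles idea; the profile name scalable is just an illustrative choice, and profiles need a Compose version recent enough to support them:

backend:
  image: ${BACKEND_IMAGE}
  profiles:
    - scalable   # only started when this profile is requested
  depends_on:
    - db
  ports:
    - ${BACKEND_PORT}
  command: >
    bash -c "command B
    && ... "
  restart: unless-stopped

Then sudo docker-compose --profile scalable -p ${COMPOSE_PROJECT_NAME} up -d --scale backend=9 starts the scalable copies, while a plain up starts only backend_default.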
Hope this helps you. Cheers 🍻 !!!
If BACKEND_IMAGE is built by you, you could run command A with a RUN line in your Dockerfile. A RUN line is executed only once, at build time (make sure that fits your needs), while ENTRYPOINT and CMD run each time a container starts. The command in the docker-compose file overrides the image's CMD.
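A minimal Dockerfile sketch of that split, assuming you control the build of ${BACKEND_IMAGE}; the base image and the two commands are placeholders taken from the question:

# hypothetical Dockerfile for ${BACKEND_IMAGE}; the base image is a placeholder
FROM some-base-image
# command A runs exactly once, when the image is built
RUN command A
# command B runs on every container start; note that a `command:` in
# docker-compose overrides this CMD
CMD ["bash", "-c", "command B && ..."]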

How to add image name to docker-compose up

I have a docker-compose file in which I use the image key in each service to name the image. For example:
version: '3'
services:
  redis:
    image: redis
    restart: unless-stopped
    ports:
      - "6379"
  worker:
    image: worker:production
    build: .
    user: root
    command: celery -A ExactEstate worker --loglevel=info
    env_file: ./.env.prod
    restart: unless-stopped
    links:
      - redis
    depends_on:
      - redis
  beats:
    image: beats:production
    build: .
    user: root
    command: celery --pidfile= -A ExactEstate beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
    env_file: ./.env.prod
    restart: unless-stopped
    links:
      - redis
    depends_on:
      - redis
  web:
    image: web:production
    build: .
    user: root
    command: daphne -b 0.0.0.0 -p 8000 ExactEstate.asgi:application
    ports:
      - "8000:8000"
    env_file: ./.env.prod
    restart: unless-stopped
    links:
      - redis
      - worker
      - beats
    depends_on:
      - redis
      - worker
      - beats
This gives a docker ps of:
0ad78269a9ce beats:production "celery --pidfile= -…" 7 minutes ago Up 7 minutes exactestate_beats_1
1a44f7c98b50 worker:production "celery -A ExactEsta…" 7 minutes ago Up 7 minutes exactestate_worker_1
f3a09723ba66 redis "docker-entrypoint.s…" 7 minutes ago Up 7 minutes 0.0.0.0:32769->6379/tcp exactestate_redis_1
Let's suppose I have also built containers from a different compose file (i.e. staging). How can I use docker-compose to bring up the exact service/image I want?
For example: docker-compose up web:production or docker-compose up web:staging
You can achieve this with environment variables. Variables for docker-compose up can be passed via a .env file or set with the export command (set on Windows); see the Docker documentation.
web:
  image: web:production
Should be changed to
web:
  image: web:${ENV}
And then you can run your application by running
$ export ENV=production && docker-compose up
Or you can create a .env file containing the line ENV=production; then you can simply run the application with docker-compose up.
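A short sketch of both ways of picking the tag, assuming a web:staging image has been built as well:

# per shell session:
export ENV=production && docker-compose up
export ENV=staging && docker-compose up

# or persist the choice in a .env file next to docker-compose.yml:
echo "ENV=production" > .env
docker-compose up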

docker-compose unable to connect to container in build process

I am trying to set up a Shopware Docker container for development. I set up a Dockerfile for the Shopware initialization process, but every time I run the build process Shopware returns this error message:
mysql -u 'root' -p'root' -h 'dbs' --port='3306' -e "DROP DATABASE IF EXISTS `shopware6dev`"
ERROR 2005 (HY000): Unknown MySQL server host 'dbs' (-2)
I think Docker sets up the default network only after all build processes are done, but I need to connect to the database before all containers are ready. The depends_on option does not help here. I hope someone has an idea how to solve this problem.
This is my docker-compose file:
version: '3'
services:
  shopwaredev:
    build:
      context: ./docker/web
      dockerfile: Dockerfile
    volumes:
      - ./log:/var/log/apache2
    environment:
      - VIRTUAL_HOST=shopware6dev.test,www.shopware6dev.test
      - HTTPS_METHOD=noredirect
    restart: on-failure:10
    depends_on:
      - dbs
  adminer:
    image: adminer
    restart: on-failure:10
    ports:
      - 8080:8080
  dbs:
    image: "mysql:5.7"
    volumes:
      - ./mysql57:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=shopware6dev
    restart: on-failure:10
  nginx-proxy:
    image: solution360/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./ssl:/etc/nginx/certs
    restart: on-failure:10
and this is my Dockerfile for the shopwaredev web container:
FROM solution360/apache24-php74-shopware6
WORKDIR /var/www/html
RUN rm index.html
RUN git clone https://github.com/shopware/development.git .
RUN cp .psh.yaml.dist .psh.yaml
RUN sed -i 's|DB_USER: "app"|DB_USER: "root"|g' .psh.yaml
RUN sed -i 's|DB_PASSWORD: "app"|DB_PASSWORD: "root"|g' .psh.yaml
RUN sed -i 's|DB_HOST: "mysql"|DB_HOST: "dbs"|g' .psh.yaml
RUN sed -i 's|DB_NAME: "shopware"|DB_NAME: "shopware6dev"|g' .psh.yaml
RUN sed -i 's|APP_URL: "http://localhost:8000"|APP_URL: "http://shopware6dev.test"|g' .psh.yaml
RUN ./psh.phar install
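One way to make this line up with the build-order reasoning above is to drop the final RUN ./psh.phar install from the Dockerfile and run the install when the container starts, once the compose network and the dbs service exist. A minimal sketch of the compose side; the exec'd Apache command is only a placeholder, since the base image's real foreground command is not shown:

  shopwaredev:
    build:
      context: ./docker/web
      dockerfile: Dockerfile
    depends_on:
      - dbs
    restart: on-failure:10
    # at container start, "dbs" is resolvable on the compose network;
    # the Apache command after the install is a placeholder
    command: bash -c "./psh.phar install && exec apachectl -D FOREGROUND"

Even then MySQL may still be initializing when the install runs, so the existing restart: on-failure:10 (or an explicit wait-and-retry) still matters.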

How to run and access web server running inside a Docker container

I have trouble understanding how Docker handles things.
I am trying to run a Node web server for development purposes. I have it defined in a docker-compose.yml and everything works fine when I run it from there. But when I run it manually from inside the container, it can't be reached from outside.
e.g. this is working fine:
node:
  image: node:10.15-stretch
  tty: true
  command: bash -c "./node_modules/.bin/encore dev-server --host 0.0.0.0 --public http://dev.local:8080 --port 8080 --disable-host-check --hot"
  working_dir: /var/www/
  volumes:
    - ${PATH_SOURCE}:/var/www/
  ports:
    - 8080:8080
The files are now accessible from http://dev.local:8080 !
But I would prefer to run it manually, only when I need it...
So I remove the command from the docker-compose.yml and run it from inside the container:
node:
  image: node:10.15-stretch
  tty: true
  working_dir: /var/www/
  volumes:
    - ${PATH_SOURCE}:/var/www/
  ports:
    - 8080:8080
docker-compose run node bash
root@1535e3c963cc:/var/www/# ./node_modules/.bin/encore dev-server --host 0.0.0.0 --public http://dev.local:8080 --port 8080 --disable-host-check --hot
The process is running fine but the files are not accessible from http://dev.local:8080 ...
I am sure there is something I am missing about Docker, but I can't find what...
Thanks for your help.
EDIT:
Here is the full config:
version: '3'
services:
  apache:
    image: httpd
    volumes:
      - ${PATH_SOURCE}/.docker/conf/apache/httpd.conf:/usr/local/apache2/conf/httpd.conf
      - ${PATH_SOURCE}/.docker/conf/apache/httpd-vhosts.conf:/usr/local/apache2/conf/extra/httpd-vhosts.conf
      - ${PATH_SOURCE}:/var/www/sadc/alarm
    ports:
      - 80:80
      - 443:443
    restart: always
    depends_on:
      - php
      - postgres
  php:
    build: .docker
    restart: always
    ports:
      - 9000:9000
    volumes:
      - ${PATH_SOURCE}/.docker/conf/php/php.ini:/etc/php/7.1/cli/php.ini
      - ${PATH_SOURCE}/.docker/conf/php/php.ini:/etc/php/7.1/fpm/php.ini
      - ${PATH_SOURCE}:/var/www/sadc/alarm
    environment:
      - PGDATESTYLE=ISO,DMY
    working_dir: /var/www/sadc/alarm
  postgres:
    image: mdillon/postgis:10
    restart: always
    environment:
      - POSTGRES_DB=${PG_DATABASE}
      - POSTGRES_USER=${PG_USERNAME}
      - POSTGRES_PASSWORD=${PG_PASSWORD}
      - PGDATESTYLE=ISO,DMY
    ports:
      - 5432:5432
    volumes:
      - sadc-alarm-pgdata:/var/lib/postgresql/data
      - ${PATH_SOURCE}:/var/www/sadc/alarm
      - ${PATH_SOURCE}/.docker/conf/postgres/initdb.sql:/docker-entrypoint-initdb.d/initdb.sql
  node:
    image: node:10.15-stretch
    tty: true
    working_dir: /var/www/sadc/alarm
    volumes:
      - ${PATH_SOURCE}:/var/www/sadc/alarm
    ports:
      - 8080:8080
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c6a394453de4 node:10.15-stretch "node" 2 hours ago Up 50 seconds 0.0.0.0:8080->8080/tcp alarm_node_1
5dcc8b936b58 httpd "httpd-foreground" 21 hours ago Up 49 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp alarm_apache_1
bb616453d0cc alarm_php "/bin/sh -c '/usr/sb…" 21 hours ago Up 49 seconds 0.0.0.0:9000->9000/tcp alarm_php_1
3af75f3a3716 mdillon/postgis:10 "docker-entrypoint.s…" 28 hours ago Up 49 seconds 0.0.0.0:5432->5432/tcp
EDIT 2
The problem is with the "docker-compose run" method...
When I run the following using "docker exec", it works:
docker exec -it node_alarm_1 bash
FINAL EDIT
OK.
So I misused the "docker-compose run" method. It is the "docker-compose exec" method that should be used, because it reuses the running container, which has its ports correctly mapped. "docker-compose run" instead runs a new container without the mapping...
docker-compose run doesn't publish the ports described in the docker-compose.yml file by default.
To fix your issue, do the following:
docker-compose run -p 8080:8080 node bash
or
docker-compose run --service-ports node bash
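Put differently, a short sketch of the three variants (the service name node matches the compose file above):

# one-off container; the ports: mapping from docker-compose.yml is NOT applied
docker-compose run node bash

# one-off container with the service's ports published
docker-compose run --service-ports node bash

# shell in the already-running container, which already has 8080 mapped
docker-compose exec node bash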

Restart docker compose with a different command

I have an application running on a server. The server rebooted, and some Docker services restarted while others did not.
docker-compose ps:
Name Command State Ports
------------------------------------------------------------------------------------------------------------
elasticsearch /usr/local/bin/docker-entr ... Up 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
kibana sh -c ./bin/kibana-plugin ... Restarting
logstash /usr/local/bin/docker-entr ... Up 5044/tcp, 9600/tcp
If I look at the logs of kibana with docker logs kibana, I see:
Plugin kbn_radar already exists, please remove before installing a new version
Found previous install attempt. Deleting...
Attempting to transfer from file:///usr/share/kibana/config/kbn_radar.zip
Transferring 3686700 bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
The problem is that kbn_radar takes a long time to install, so I want to restart the kibana service without restarting the other applications. I've tried to change my .yml file, which originally ran the commands that install the plugins:
kibana:
  image: docker.elastic.co/kibana/kibana:6.8.0
  command:
    - sh
    - -c
    - './bin/kibana-plugin install file:///usr/share/kibana/config/kbn_radar.zip && ./bin/kibana-plugin install file:///usr/share/kibana/config/ob-kb-funnel-6.8.zip && exec /usr/local/bin/kibana-docker'
So in the end, my docker-compose file was:
docker-compose.yml:
version: "3"
networks:
elasticsearch-net-624:
services:
elasticsearch-products-624-service:
image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
container_name: elasticsearch
restart: always
networks:
- elasticsearch-net-624
ports:
- "9200:9200"
- "9300:9300"
expose:
- "9200"
volumes:
- /home/docker/elastic.yml:/usr/share/elasticsearch/config/elasticsearch.yml
- /home/docker/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
- /docker/elastic/data:/usr/share/elasticsearch/data
- /docker/elastic/data/snapshots:/usr/share/elasticsearch/data/snapshots
kibana:
image: docker.elastic.co/kibana/kibana:6.8.0
command:
- sh
- -c
- 'exec /usr/local/bin/kibana-docker'
container_name: kibana
restart: always
hostname: kibana
networks:
- elasticsearch-net-624
environment:
- SERVER_NAME=kibana.localhost
- ELASTICSEARCH_URL=http://elasticsearch:9200
- ELASTICSEARCH_HOST=elasticsearch
- ELASTICSEARCH_PORT=9200
- XPACK_GRAPH_ENABLED=true
- XPACK_WATCHER_ENABLED=true
- XPACK_ML_ENABLED=true
- XPACK_MONITORING_ENABLED=true
- XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED
ports:
- "5601:5601"
expose:
- "5601"
links:
- elasticsearch-products-624-service
depends_on:
- elasticsearch-products-624-service
volumes:
- /home/docker/kibana.yml:/usr/share/kibana/config/kibana.yml
- /home/docker/ob-kb-funnel-6.8.zip:/usr/share/kibana/config/ob-kb-funnel-6.8.zip
- /home/docker/kbn_radar.zip:/usr/share/kibana/config/kbn_radar.zip
- /home/morpheus/docker/dashboard_app.js:/usr/share/kibana/src/legacy/core_plugins/kibana/public/dashboard/dashboard_app.js
logstash:
image: docker.elastic.co/logstash/logstash:6.8.0
container_name: logstash
restart: always
volumes:
- /home/docker/logstash.yml:/usr/share/logstash/config/logstash.yml
Finally I've tried to restart the service:
docker-compose -f docker-kibana.yml restart kibana
But the service keeps trying to reinstall the plugins, and if I run docker-compose ps the command is still "sh -c ./bin/kibana-plugin ..."
How can I restart a Docker service with a different command? Or restart my service without reinstalling a plugin that already exists?
I recommend that you install the plugins in an image build rather than doing everything at container start.
A simple Dockerfile to fix your issue would look somewhat like this:
FROM docker.elastic.co/kibana/kibana:6.8.0
COPY ob-kb-funnel-6.8.zip kbn_radar.zip /usr/share/kibana/config/
RUN ./bin/kibana-plugin install file:///usr/share/kibana/config/kbn_radar.zip && \
    ./bin/kibana-plugin install file:///usr/share/kibana/config/ob-kb-funnel-6.8.zip
ENTRYPOINT /usr/local/bin/kibana-docker
Next you would need to use docker-compose to build your image. We can do that by updating your service definition
kibana:
  build:
    context: ./kibana
  container_name: kibana
  restart: always
  hostname: kibana
  networks:
    - elasticsearch-net-624
  environment:
    - SERVER_NAME=kibana.localhost
    - ELASTICSEARCH_URL=http://elasticsearch:9200
    - ELASTICSEARCH_HOST=elasticsearch
    - ELASTICSEARCH_PORT=9200
    - XPACK_GRAPH_ENABLED=true
    - XPACK_WATCHER_ENABLED=true
    - XPACK_ML_ENABLED=true
    - XPACK_MONITORING_ENABLED=true
    - XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED
  ports:
    - "5601:5601"
  expose:
    - "5601"
  links:
    - elasticsearch-products-624-service
  depends_on:
    - elasticsearch-products-624-service
  volumes:
    - /home/docker/kibana.yml:/usr/share/kibana/config/kibana.yml
    - /home/morpheus/docker/dashboard_app.js:/usr/share/kibana/src/legacy/core_plugins/kibana/public/dashboard/dashboard_app.js
As you can see, in the service definition we replaced image with build. We assume that your Dockerfile for kibana resides in a folder called kibana, which also contains your plugin zip files.
Next you can run docker-compose build and it will build the required images for your compose stack.
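A short sketch of that workflow with the question's -f docker-kibana.yml file:

# build the kibana image with the plugins baked in at build time
docker-compose -f docker-kibana.yml build kibana

# recreate only the kibana service; elasticsearch and logstash keep running
docker-compose -f docker-kibana.yml up -d --no-deps kibana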
The problem is that when you run a docker-compose project or a docker stack, the containers are created with the configuration as it was at that moment. If you later change that configuration, for example the command in your case, it will not take effect on a plain restart; normally you would bring the whole compose project or stack down and up again.
However, you might try your luck with the following:
1. Edit the compose file with the command you want to run now.
2. Remove the kibana container entirely. That is, don't try to restart kibana with docker-compose; remove the container instead: docker rm -f dir_kibana
3. Run docker-compose up again. It should detect that kibana is missing and run it again.
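A compact sketch of those steps; the container name below follows the container_name: kibana from the compose file above, so adjust it to whatever docker ps actually shows:

# 1. edit docker-kibana.yml so the kibana service has the new command
# 2. remove the old container so its stale command is not reused
docker rm -f kibana
# 3. bring the project up again; compose recreates the missing kibana container
docker-compose -f docker-kibana.yml up -d kibana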
