Docker Compose will not pull all images from the compose file

I'm using Docker with a docker-compose.yml file.
I put two different services in it, which I'd like to update.
I also run Portainer and have added some other services through it:
pi@raspberrypi:~/docker $ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec830e789d38 nodered/node-red:latest "npm --no-update-not…" 8 days ago Up 6 minutes (healthy) 0.0.0.0:1880->1880/tcp, :::1880->1880/tcp docker_node-red_1
15aa942b2b94 openhab/openhab:3.1.1 "/entrypoint gosu op…" 8 days ago Up 8 days (healthy) docker_openhab_1
e805e3f527c4 portainer/portainer-ce "/portainer" 8 days ago Up 8 days 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9443->9443/tcp, :::9443->9443/tcp portainer
80990d1ad7e7 influxdb:latest "/entrypoint.sh infl…" 9 months ago Up 8 days InfluxDB
My current docker-compose.yml file looks like this:
pi@raspberrypi:~/docker $ cat docker-compose.yml
version: "2"
services:
openhab:
image: "openhab/openhab:3.1.1"
restart: always
network_mode: host
volumes:
- "/etc/localtime:/etc/localtime:ro"
- "/etc/timezone:/etc/timezone:ro"
- "./openhab_addons:/openhab/addons"
- "./openhab_conf:/openhab/conf"
- "./openhab_userdata:/openhab/userdata"
environment:
USER_ID: "1000"
GROUP_ID: "1000"
OPENHAB_HTTP_PORT: "8080"
OPENHAB_HTTPS_PORT: "8443"
EXTRA_JAVA_OPTS: "-Duser.timezone=Europe/Berlin"
services:
node-red:
image: nodered/node-red:latest
environment:
- TZ=Europe/Amsterdam
ports:
- "1880:1880"
networks:
- node-red-net
volumes:
- node-red-data:/data
devices:
- "/dev/ttyUSB0:/dev/ttyUSB0"
volumes:
node-red-data:
networks:
node-red-net:
In order to update the openhab container from 3.1.1 to 3.2.0, I changed the image name inside the compose file to openhab/openhab:3.2.0.
Afterwards I ran docker-compose pull, but it only checked whether a new image is available for node-red, not for openhab.
What is wrong?

You need to put all of the services under a single services key; that's also why the key name is plural. Because your file contains two top-level services keys, the second one overrides the first when the YAML is parsed, which is why only node-red was checked.
services:
  openhab:
    ...
  node-red:
    ...
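For reference, a merged file based on the one in the question could look like this; a minimal sketch that only moves node-red under the existing services key, bumps openhab to 3.2.0 as intended, and keeps everything else as shown above:
version: "2"
services:  # single top-level key; both services live under it
  openhab:
    image: "openhab/openhab:3.2.0"
    restart: always
    network_mode: host
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "./openhab_addons:/openhab/addons"
      - "./openhab_conf:/openhab/conf"
      - "./openhab_userdata:/openhab/userdata"
    environment:
      USER_ID: "1000"
      GROUP_ID: "1000"
      OPENHAB_HTTP_PORT: "8080"
      OPENHAB_HTTPS_PORT: "8443"
      EXTRA_JAVA_OPTS: "-Duser.timezone=Europe/Berlin"
  node-red:
    image: nodered/node-red:latest
    environment:
      - TZ=Europe/Amsterdam
    ports:
      - "1880:1880"
    networks:
      - node-red-net
    volumes:
      - node-red-data:/data
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0"
volumes:
  node-red-data:
networks:
  node-red-net:
With a single services key, docker-compose pull checks both images, and docker-compose up -d afterwards recreates the openhab container from the 3.2.0 image.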

Related

Docker - Symfony5/Mercure : Impossible to reach mercure hub

I am trying, without success, to access the Mercure hub through my browser at this URL:
http://locahost:3000 => ERR_CONNECTION_REFUSED
I use Docker for my development. Here's my docker-compose.yml:
# docker/docker-compose.yml
version: '3'
services:
database:
container_name: test_db
build:
context: ./database
environment:
- MYSQL_DATABASE=${DATABASE_NAME}
- MYSQL_USER=${DATABASE_USER}
- MYSQL_PASSWORD=${DATABASE_PASSWORD}
- MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
ports:
- "3309:3306"
volumes:
- ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
- ./database/data:/var/lib/mysql
php-fpm:
container_name: test_php
build:
context: ./php-fpm
depends_on:
- database
environment:
- APP_ENV=${APP_ENV}
- APP_SECRET=${APP_SECRET}
- DATABASE_URL=mysql://${DATABASE_USER}:${DATABASE_PASSWORD}#database:3306/${DATABASE_NAME}?serverVersion=5.7
volumes:
- ./src:/var/www
nginx:
container_name: test_nginx
build:
context: ./nginx
volumes:
- ./src:/var/www
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- ./nginx/sites/:/etc/nginx/sites-available
- ./nginx/conf.d/:/etc/nginx/conf.d
- ./logs:/var/log
depends_on:
- php-fpm
ports:
- "8095:80"
caddy:
container_name: test_mercure
image: dunglas/mercure
restart: unless-stopped
environment:
MERCURE_PUBLISHER_JWT_KEY: '!ChangeMe!'
MERCURE_SUBSCRIBER_JWT_KEY: '!ChangeMe!'
PUBLISH_URL: '${MERCURE_PUBLISH_URL}'
JWT_KEY: '${MERCURE_JWT_KEY}'
ALLOW_ANONYMOUS: '${MERCURE_ALLOW_ANONYMOUS}'
CORS_ALLOWED_ORIGINS: '${MERCURE_CORS_ALLOWED_ORIGINS}'
PUBLISH_ALLOWED_ORIGINS: '${MERCURE_PUBLISH_ALLOWED_ORIGINS}'
ports:
- "3000:80"
I have successfully executed:
docker-compose up -d
docker ps -a:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0e4a72fe75b2 dunglas/mercure "caddy run --config …" 2 hours ago Up 2 hours 443/tcp, 2019/tcp, 0.0.0.0:3000->80/tcp, :::3000->80/tcp test_mercure
724fe920ebef nginx "/docker-entrypoint.…" 3 hours ago Up 3 hours 0.0.0.0:8095->80/tcp, :::8095->80/tcp test_nginx
9e63fddf50ef php-fpm "docker-php-entrypoi…" 3 hours ago Up 3 hours 9000/tcp test_php
e7989b26084e database "docker-entrypoint.s…" 3 hours ago Up 3 hours 0.0.0.0:3309->3306/tcp, :::3309->3306/tcp test_db
I can reach http://localhost:8095 to access my Symfony app, but I don't know at which URL I am supposed to reach my Mercure hub.
Thanks for your help!
I tried for months to get Symfony + nginx + MySQL + phpMyAdmin + Mercure + Docker to work both locally for development and in production (obviously), to no avail.
While this isn't directly answering your question, the only way I can contribute is with an "answer", since I don't have enough reputation to comment; otherwise I would have done that.
If you're not tied to nginx for anything beyond serving the app, and can replace it with Caddy, I have a repo with Symfony + Caddy + MySQL + phpMyAdmin + Mercure + Docker that works with SSL both locally and in production:
https://github.com/thund3rb1rd78/symfony-mercure-website-skeleton-dockerized
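As a side note on the original question: with the ports mapping "3000:80" shown above, the hub is published on host port 3000, and Mercure serves its hub endpoint under the well-known path, so a quick reachability check from the host could look like this (the topic value is arbitrary):
$ curl -i "http://localhost:3000/.well-known/mercure?topic=test"
Depending on how anonymous subscriptions are configured, this either returns a 401 or keeps the connection open waiting for events; either response shows the hub is reachable on that port.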

How to connect the application container to the MySQL container

I have an application service and a MySQL service, but I am not able to connect the two containers; it keeps returning this error:
django.db.utils.OperationalError: (2002, "Can't connect to MySQL server on '127.0.0.1' (115)")
I have included the links in my application service, but nothing is working.
My MySQL container is up and running fine, and I can even log into it.
Here is a snapshot of the services:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cc26d09a81d1 gmasmatrix_worker:latest "/entrypoint.sh /sta…" 17 seconds ago Exited (1) 11 seconds ago gmasmatrix_celeryworker_1
749f23c37b16 gmasmatrix_application:latest "/entrypoint.sh /sta…" 18 seconds ago Exited (1) 9 seconds ago gmasmatrix_application_1
666029ad063a gmasmatrix_flower "/entrypoint.sh /sta…" 18 seconds ago Exited (1) 10 seconds ago gmasmatrix_flower_1
50ac0497e66b mysql:5.7.10 "/entrypoint.sh mysq…" 21 seconds ago Up 17 seconds 0.0.0.0:3306->3306/tcp gmasmatrix_db_1
669fbbe0a81d mailhog/mailhog:v1.0.0 "MailHog" 21 seconds ago Up 18 seconds 1025/tcp, 0.0.0.0:8025->8025/tcp gmasmatrix_mailhog_1
235a46c8d453 redis:5.0 "docker-entrypoint.s…" 21 seconds ago Up 17 seconds 6379/tcp gmasmatrix_redis_1
Docker-compose file
version: '2'
services:
  application: &application
    image: gmasmatrix_application:latest
    command: /start.sh
    volumes:
      - .:/app
    # env_file:
    #   - .env
    ports:
      - 8000:8000
    # cpu_shares: 874
    # mem_limit: 1610612736
    # mem_reservation: 1610612736
    build:
      context: ./
      dockerfile: ./compose/local/application/Dockerfile
      args:
        - GMAS_ENV_TYPE=local
    links:
      - "db"
  celeryworker:
    <<: *application
    image: gmasmatrix_worker:latest
    depends_on:
      - redis
      - mailhog
    ports: []
    command: /start-celeryworker
    links:
      - "db"
  flower:
    <<: *application
    image: gmasmatrix_flower
    ports:
      - "5555:5555"
    command: /start-flower
    links:
      - "db"
  mailhog:
    image: mailhog/mailhog:v1.0.0
    ports:
      - "8025:8025"
  redis:
    image: redis:5.0
  db:
    image: mysql:5.7.10
    environment:
      MYSQL_DATABASE: gmas_mkt
      MYSQL_ROOT_PASSWORD: pulkit1607
    ports:
      - "3306:3306"
Your application is trying to connect to 127.0.0.1, which inside Docker points to the app container itself.
Instead, you should connect to the db container. You can use Docker's built-in DNS service for this: in your application configuration, use db (the name of the MySQL service) as the host instead of localhost or 127.0.0.1.
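As a sketch of what that can look like on the compose side, assuming the Django settings read the database host from environment variables (the variable names below are illustrative, not something the project necessarily uses; the point is the value db):
services:
  application:
    environment:
      # hypothetical variable names; whatever your settings read,
      # the host value should be the service name "db", not 127.0.0.1
      - DATABASE_HOST=db
      - DATABASE_PORT=3306
    links:
      - "db"
Because the application and db services share a Docker network, the hostname db resolves to the MySQL container, whereas 127.0.0.1 always points back into the application container itself.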

Unable to install Odoo 12 Community Edition using docker compose file on an AWS EC2 instance

I am getting an "Internal Server Error" when trying to install Odoo 12 Community Edition on an AWS EC2 instance using a docker-compose file. My EC2 instance uses Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1031-aws x86_64) operating system, Docker version 18.09.0 and docker-compose version 1.23.2.
Here is my docker-compose.yml file:
version: '2'
services:
  web:
    image: odoo:12.0
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./addons:/mnt/extra-addons
  db:
    image: postgres:10
    environment:
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata
volumes:
  odoo-web-data:
  odoo-db-data:
I can see that the containers have been created, as evidenced by the following output of the docker ps -a command:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64e271db9565 odoo:12.0 "/entrypoint.sh odoo" 2 hours ago Up 2 hours 0.0.0.0:8069->8069/tcp, 8071/tcp ubuntu_web_1
10b3198e3230 postgres:10 "docker-entrypoint.s…" 2 hours ago Up 2 hours 5432/tcp ubuntu_db_1
Moreover, the "docker-compose config" command returned the following output:
services:
  db:
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_PASSWORD: odoo
      POSTGRES_USER: odoo
    image: postgres:10
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata:rw
  web:
    depends_on:
      - db
    image: odoo:12.0
    ports:
      - 8069:8069/tcp
    volumes:
      - odoo-web-data:/var/lib/odoo:rw
      - /home/ubuntu/config:/etc/odoo:rw
      - /home/ubuntu/addons:/mnt/extra-addons:rw
version: '2.0'
volumes:
  odoo-db-data: {}
  odoo-web-data: {}
What am I missing here?
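Since both containers show as Up, a reasonable first step (not a definitive fix) is to read the Odoo container's own output; the traceback behind an Internal Server Error normally shows up there:
$ docker-compose logs web
$ docker logs ubuntu_web_1
The traceback should point at the actual cause, for example a database connection problem or permissions on the mounted ./config and ./addons directories.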

Why can't I connect to my local docker-compose container on Windows 10?

I'm trying to dockerize a Python application, for which I've been following this tutorial. The tutorial is from April 2015 and still uses Docker Machine, which, judging from this answer, is no longer necessary to run Docker containers locally on Windows.
I got it working with Docker Machine before, and was able to see the web app and interact with it. But now I'm trying to get this working without Docker Machine, with Docker version 17.06.0-ce, build 02c1d87, on Windows 10.
Here's the docker-compose.yml:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
data:
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  command: "true"
postgres:
  restart: always
  image: postgres:latest
  volumes_from:
    - data
  ports:
    - "5432:5432"
I started the containers:
$ docker-compose up -d
Creating polly_data_1 ...
Creating polly_data_1 ... done
Creating polly_postgres_1 ...
Creating polly_postgres_1 ... done
Creating polly_web_1 ...
Creating polly_web_1 ... done
Creating polly_nginx_1 ...
Creating polly_nginx_1 ... done
Then, when I run docker ps, it shows the following three containers running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9b2c1048f3a5 polly_nginx "/usr/sbin/nginx" 4 seconds ago Up 3 seconds 0.0.0.0:80->80/tcp polly_nginx_1
d561ac5b901a polly_web "/usr/local/bin/gu..." 5 seconds ago Up 4 seconds 8000/tcp polly_web_1
ecb029d6ec3a postgres:latest "docker-entrypoint..." 7 seconds ago Up 5 seconds 0.0.0.0:5432->5432/tcp polly_postgres_1
(At this point, navigating to http://localhost:8000/ in Chrome already yields ERR_CONNECTION_REFUSED.)
I then ran the script to set up the database, as per the tutorial (extra //s because I'm using Git Bash on Windows 10):
$ docker-compose run web ///usr/local/bin/python create_db.py
Starting polly_data_1 ...
Starting polly_data_1 ... done
Starting polly_postgres_1 ... done
Now when I run docker ps, it shows the following four containers running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a129c12f5982 polly_web "//usr/local/bin/p..." 5 seconds ago Up Less than a second 8000/tcp polly_web_run_1
9b2c1048f3a5 polly_nginx "/usr/sbin/nginx" 16 seconds ago Up 15 seconds 0.0.0.0:80->80/tcp polly_nginx_1
d561ac5b901a polly_web "/usr/local/bin/gu..." 17 seconds ago Up 16 seconds 8000/tcp polly_web_1
ecb029d6ec3a postgres:latest "docker-entrypoint..." 19 seconds ago Up 17 seconds 0.0.0.0:5432->5432/tcp polly_postgres_1
And localhost:8000 is still refusing to connect. The web container exposes port 8000, so I don't get why I can't connect to it.
How can I get this working so I can access the web app in the web container locally?
Just change:
expose:
  - "8000"
to:
ports:
  - "8000:8000"
By the way, is http://localhost:80 not working?
Regards
It turns out, as suggested by Carlos and 200_OK in their answers and comments, that it was working as intended: it was running at port 80, not 8000.
The web service exposes port 8000 internally, inside the container network, but that port is not mapped to a port on your host machine.
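A minimal sketch of that difference, showing only the relevant keys:
web:
  expose:
    - "8000"        # reachable from other containers, but not published on the host
  # ports:
  #   - "8000:8000" # would also publish the port on the host
That is also why port 80, which nginx does publish with "80:80", answers on localhost while port 8000 does not.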
I think the problem is in your command. The option is -p, not -b.
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn -w 2 -p :8000 app:app

How to name a volume using a docker-compose.yml file?

I'm new to Docker and I'm trying to find out how to set the name of the created data volume. Currently the directory is automatically given a long hash as its name under /var/lib/docker, which is far from user friendly.
I'm attempting to set up a development environment for MODX as shown here:
https://github.com/modxcms/docker-modx
Currently my docker-compose.yml file is as follows:
web:
  image: modx
  links:
    - db:mysql
  ports:
    - 80:80
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: example
  ports:
    - 3306:3306
  command: mysqld --sql-mode=NO_ENGINE_SUBSTITUTION
myadmin:
  image: phpmyadmin/phpmyadmin
  links:
    - db:db
  ports:
    - 8080:8080
This works perfectly, but I'm unsure how to name the data volume so that I can edit its contents directly with my IDE.
(As a side question, does it have to be created under /var/lib/docker, or is there a way of setting it to a directory in my home folder?)
Update:
Thanks to the help from @juliano I've updated my docker-compose.yml file to:
version: '2'
services:
  web:
    image: modx
    volumes:
      - html:/home/muzzstick/dev/modxdev
    links:
      - db:mysql
    ports:
      - 80:80
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
    ports:
      - 3306:3306
    command: mysqld --sql-mode=NO_ENGINE_SUBSTITUTION
  myadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - db:db
    ports:
      - 8080:8080
volumes:
  html:
    external: false
Unfortunately, this seems to stop the web container from running.
The db and myadmin containers show they're running OK.
There weren't any errors; if I type docker start docker_web_1 it appears to start, but docker ps -a shows it exited as soon as it started.
Update 2
Running docker-compose up -d appears to work without issue, but as you can see below, the web container exits as soon as it's created.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a1dd6d8ac94e modx "/entrypoint.sh apach" 10 seconds ago Exited (1) 5 seconds ago docker_web_1
ee812ae858dc phpmyadmin/phpmyadmin "/run.sh phpmyadmin" 10 seconds ago Up 5 seconds 80/tcp, 0.0.0.0:8080->8080/tcp docker_myadmin_1
db496134e0cf mysql "docker-entrypoint.sh" 11 seconds ago Up 10 seconds 0.0.0.0:3306->3306/tcp docker_db_1
Update 3
OK, the error log for this container shows:
error: missing MODX_DB_HOST and MYSQL_PORT_3306_TCP environment variables
Did you forget to --link some_mysql_container:mysql or set an external db
with -e MODX_DB_HOST=hostname:port?
This error appears to be originating from https://github.com/modxcms/docker-modx/blob/master/apache/docker-entrypoint.sh#L15-L20
Could it be that linking is handled differently in docker-compose file version 2?
To create a named data volume using version 2 of the compose file format, you add a separate top-level volumes section:
version: '2'
services:
  db:
    image: postgres
    volumes:
      - amazingvolume:/var/lib/postgresql/data
volumes:
  amazingvolume:
    external: true
There you define the volume name (amazingvolume) and whether it is external, and under your service (db in this example) you define the directory it is mounted to.
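One note on external: true as used above: Compose then expects the volume to exist already and will not create it for you, so you would create it once beforehand, for example:
$ docker volume create amazingvolume
With external: false (or no external key at all), docker-compose up creates the volume itself, prefixed with the project name.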
Just search the Docker documentation for host-mounted volumes (bind mounts):
version: '2'
services:
  web:
    image: modx
    environment:
      - MYSQL_PORT_3306_TCP=3306
      - MODX_DB_HOST=mysql:3306
    volumes:
      - /home/muzzstick/dev/modxdev/html:/var/www/html
    links:
      - db:mysql
    ports:
      - 80:80
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
    ports:
      - 3306:3306
    command: mysqld --sql-mode=NO_ENGINE_SUBSTITUTION
  myadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - db:db
    ports:
      - 8080:8080
Change /var/www/html to the directory where the HTML files live inside the container. Also create the left-hand directory on your host and give read permission to all users.
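For completeness, the host-side preparation could look like this, using the path from the example above (adjust it to your own layout):
$ mkdir -p /home/muzzstick/dev/modxdev/html
$ chmod -R a+rX /home/muzzstick/dev/modxdev/html
a+rX gives all users read access and keeps directories traversable, which matches the read permission mentioned above.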
Regards
