docker-compose not starting postgres and gives error

I'm really new to Docker (and postgres) and still finding my feet. I get an error and can't seem to get one of my postgres services running, although when I start the stack I'm able to access pgadmin and airflow via the browser. I think there is some sort of conflict happening, but I'm not sure where. I have a docker-compose.yml file that starts a few containers, as well as the postgres one in question, which has the service name db:
version: '3.7'
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    logging:
      options:
        max-size: 10m
        max-file: "3"
  db:
    image: postgres:13.0-alpine
    restart: always
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: admin_user
      POSTGRES_PASSWORD: secret_password
      # PGDATA: /var/lib/postgresql/data
    volumes:
      - ./db-data:/var/lib/postgresql/data
    ports:
      - "5433:5432"
  pgadmin:
    image: dpage/pgadmin4:4.27
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin_user@test_email.com
      PGADMIN_DEFAULT_PASSWORD: test_password
      PGADMIN_LISTEN_PORT: 1111
    ports:
      - "1111:1111"
    volumes:
      - pgadmin-data:/var/lib/pgadmin
    links:
      - "db:pgsql-server"
  webserver:
    image: l/custom_airflow:1.5
    container_name: l_custom_airflow
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=n
      - EXECUTOR=Local
    logging:
      options:
        max-size: 10m
        max-file: "3"
    volumes:
      - ./dags:/usr/local/airflow/dags
      - ./db-data:/usr/local/airflow/db-data
      - ./pgadmin-data:/usr/local/airflow/pgadmin-data
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
volumes:
  db-data:
  pgadmin-data:
The relevant part is this:
  db:
    image: postgres:13.0-alpine
    restart: always
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: admin_user
      POSTGRES_PASSWORD: secret_password
      # PGDATA: /var/lib/postgresql/data
    volumes:
      - ./db-data:/var/lib/postgresql/data
    ports:
      - "5433:5432"
[I already have two versions of postgres on my local machine, and I saw that they use ports 5432 and 5433, so it looks like the latest one goes to 5433. Similarly, I have another service (airflow) that depends on an older version of postgres to run, so I assume that since that one comes first it takes 5432, and the new postgres service I want will then be mapped to 5433 - please correct me if I'm wrong]
But when I run docker-compose up -d and check my containers with docker container ls -a, I see that this particular container is continuously restarting. I ran docker logs --tail 50 --follow --timestamps pipeline_5_db_1 (the container name for the db service) and I see the following error:
2020-10-28T08:46:29.730973000Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:30.468640800Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:31.048144200Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:31.803571400Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:32.957604600Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:34.885928500Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:38.479922200Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:45.384436400Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:58.612202300Z chmod: /var/lib/postgresql/data: Operation not permitted
I googled the error and saw a couple of other SO posts, but I couldn't find a clear explanation. This post and this post are a bit unclear to me (it might be because I'm not so familiar with Docker yet), so I'm not sure how to use the responses to solve this issue.

You've got db-data defined as a named volume at the bottom of the compose file, but you're using ./db-data in the db and webserver services, which is a bind mount. You might try using the named volume instead of the shared directory in your db and webserver services, like this:
    volumes:
      - db-data:/var/lib/postgresql/data
A bind mount should also work but can be troublesome if permissions on the mounted directory aren't quite right, which might be your problem.
The above also applies to pgadmin-data where the pgadmin service is using a named volume but webserver is using the bind mount (local directory). In fact, it's not clear why the webserver would need access to those data directories. Typically, a webserver would connect to the database via port 5432 (which doesn't even need to be mapped on the host). See for instance the bitnami/airflow docs on Docker Hub.
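For illustration, here is a minimal sketch of what the db and webserver services might look like with the named volumes (only the relevant keys are shown; the data-directory mounts are dropped from webserver, which talks to the database over the network instead):
services:
  db:
    image: postgres:13.0-alpine
    restart: always
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume, created and owned by Docker
    ports:
      - "5433:5432"
  webserver:
    image: l/custom_airflow:1.5
    depends_on:
      - postgres
    volumes:
      - ./dags:/usr/local/airflow/dags     # only the DAGs; no db-data or pgadmin-data
    ports:
      - "8080:8080"
volumes:
  db-data:
  pgadmin-data:
With a named volume, Docker itself creates and owns the backing directory, so the chmod step in the postgres entrypoint does not run into host permission problems.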

Related

What are Docker networks needed for?

Please explain why a Docker network is needed. I have read some documentation, but I don't understand, in practical terms, why we need to manipulate the network. Here I have a docker-compose file in which everything works fine both with and without networks. Please explain what the practical benefit would be of uncommenting the network lines in this docker-compose file. Right now my containers interact perfectly, the ORM migrations reach the database, so why do I need networks?
version: '3.4'
services:
  main:
    container_name: main
    build:
      context: .
      target: development
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - ${PORT}:${PORT}
    command: npm run start:dev
    env_file:
      - .env
    # networks:
    #   - webnet
    depends_on:
      - postgres
  postgres:
    container_name: postgres
    image: postgres:12
    # networks:
    #   - webnet
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      PG_DATA: /var/lib/postgresql/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 1m30s
      timeout: 10s
      retries: 3
# networks:
#   webnet:
volumes:
  pgdata:
If no networks are defined, docker-compose will create a default network with a generated name. Otherwise you can manually specify the network and its name in the compose file.
You can read more at Networking in Compose
Docker networking in general is explained in the Networking overview, and here are some tutorials:
Macvlan network tutorial,
Overlay networking tutorial,
Host networking tutorial,
Bridge network tutorial.
Without any network configuration, all container IPs come from a single range (e.g. 172.17.0.0/16), and containers can reach each other by container name (Docker's internal DNS).
A simple practical use of Docker networking: when you want a number of containers isolated in their own network range, you define and attach a Docker network.
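To make the isolation concrete, here is a small sketch (the extra service and network names are invented for the example): containers can only resolve and reach services that share at least one network with them.
version: '3.4'
services:
  main:
    image: node:14-alpine        # stand-in for the app service above
    networks:
      - webnet                   # can reach postgres by service name
  postgres:
    image: postgres:12
    networks:
      - webnet
  metrics:
    image: nginx:alpine          # hypothetical extra service
    networks:
      - othernet                 # cannot resolve or connect to postgres
networks:
  webnet:
  othernet: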

Possible to access files from a different docker image within a container?

I'm trying to set up a docker-compose file for running Apache Guacamole.
The compose file has 3 services, 2 for guacamole itself and 1 database image. The problem is that the database has to be initialized before the guacamole container can use it, but the files to initialize the database are in the guacamole image. The solution I came up with is this:
version: "3"
services:
init:
image: guacamole/guacamole:latest
command: ["/bin/sh", "-c", "cp /opt/guacamole/postgresql/schema/*.sql /init/" ]
volumes:
- dbinit:/init
database:
image: postgres:latest
restart: unless-stopped
volumes:
- dbinit:/docker-entrypoint-initdb.d
- dbdata:/var/lib/postgresql/data
environment:
POSTGRES_USER: guac
POSTGRES_PASSWORD: guac
depends_on:
- init
guacd:
image: guacamole/guacd:latest
restart: unless-stopped
guacamole:
image: guacamole/guacamole:latest
restart: unless-stopped
ports:
- "8080:8080"
environment:
GUACD_HOSTNAME: guacd
POSTGRES_HOSTNAME: database
POSTGRES_DATABASE: guac
POSTGRES_USER: guac
POSTGRES_PASSWORD: guac
depends_on:
- database
- guacd
volumes:
dbinit:
dbdata:
So I have one container whose job is to copy the database initialization files into a volume, and then I mount that volume into the database container. The problem is that this creates a race condition and is ugly. Is there some elegant solution for this? Is it possible to mount the files from the guacamole image into the database container directly? I would rather avoid shipping an extra sql file alongside the docker-compose file.
Thanks in advance!
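One way to at least remove the race condition, assuming a Compose version that implements the Compose Specification's depends_on conditions, is to turn init into a one-shot job and have the database wait for it to complete; a rough sketch, untested against Guacamole:
services:
  init:
    image: guacamole/guacamole:latest
    command: ["/bin/sh", "-c", "cp /opt/guacamole/postgresql/schema/*.sql /init/"]
    restart: "no"                      # one-shot job: run the copy and exit
    volumes:
      - dbinit:/init
  database:
    image: postgres:latest
    depends_on:
      init:
        condition: service_completed_successfully   # wait until the copy job has exited cleanly
    volumes:
      - dbinit:/docker-entrypoint-initdb.d
      - dbdata:/var/lib/postgresql/data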

the database is not created during container startup with postgres

docker-compose.yml
version: '3.5'
services:
  postgres:
    container_name: postgres_container
    image: postgres:11.7
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-root}
      PGDATA: /data/postgres
    # ./init.sql (for unix systems)
    # //docker/init.sql:/docker-entrypoint-initdb.d/init.sql - for Windows
    volumes:
      - postgres:/data/postgres
      - //docker/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:80"
    networks:
      - postgres
    depends_on:
      - postgres
    restart: unless-stopped
networks:
  postgres:
    driver: bridge
volumes:
  postgres:
  pgadmin:
When the container is brought up, the script should run.
init.sql
CREATE DATABASE example;
CREATE DATABASE test;
But no databases are created; I have to create them manually through the console.
Does anyone have an idea why this happens and how to fix it? (The figure showed that the script is mounted in the container.)
Solution
I stopped and deleted all the containers.
Then deleted the volumes.
After that, I started docker-compose.yml again.
The databases were created.
Perhaps the first launch failed but the volumes were still created; then, when I corrected the file, the database creation step was not executed on the second launch, since the volumes already existed for this container. Thanks for the tip.
From the update, it appears that the database had already been started once before. Once that happens, the volume gets initialized, and once the volume has data, the image's entrypoint will not perform the initialization step again.
The solution is to stop the database, delete the volume with the bad database data, and then restart the database.
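As an extra safeguard, mounting the whole directory of init scripts instead of a single file is often more robust, particularly on Windows; a sketch based on the paths in the question (remember the scripts only run while the data directory is empty):
    volumes:
      - postgres:/data/postgres
      - //docker:/docker-entrypoint-initdb.d   # every *.sql in this directory runs on first initialization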

Connection error mariadb + wordpress with docker

I have a docker-compose where I pick up two containers, one with mariadb and one with wordpress.
The problem
I get a connection failure; apparently the connection is closed before the user can authenticate.
wp-mysql | 2019-08-09 13:21:16 18 [Warning] Aborted connection 18 to db: 'unconnected' user: 'unauthenticated' host: '172.31.0.3' (This connection closed normally without authentication)
Situation
When I go to http://localhost:8010 the wordpress service is available, but with an error connecting to the database.
The docker-compose.yml ...
version: '3'
services:
  db:
    container_name: wp-mysql
    image: mariadb
    volumes:
      - $PWD/data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: 12345678
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    ports:
      - "3307:3306"
    networks:
      - my_net
    restart: on-failure
  wp:
    depends_on:
      - db
    container_name: wp-web
    volumes:
      - "$PWD/html:/var/www/html"
    image: wordpress
    ports:
      - "8010:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
    networks:
      - my_net
networks:
  my_net:
Where is the configuration error?
Why can't the wordpress container use the user created in the mariadb container's environment?
Finally solved it.
After going around in circles, and with help from the user @JackNavaRow, the solution came out.
It was as simple as rebooting the system and deleting the volumes.
Bring the containers up again and everything works fine.
I leave it here in case anyone runs into this problem, so they don't go around in circles any longer.
It may be that the database files were corrupted by an unexpected shutdown; you can delete the database volume.
Warning: this action will drop all your database data.
You could use docker-compose down -v to remove the volumes and then execute docker-compose up -d to bring everything back up.
In your case you are not using a named volume to store the database data, so you can remove the data directory and try again:
rm -rf $PWD/data
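Separately, if your Compose version supports depends_on conditions, you can stop WordPress from connecting while MariaDB is still initializing (on first boot it loads the init SQL before serving normal connections). A hedged sketch using the credentials from the compose file above:
services:
  db:
    image: mariadb
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-p12345678"]   # root password from this compose file
      interval: 10s
      timeout: 5s
      retries: 5
  wp:
    image: wordpress
    depends_on:
      db:
        condition: service_healthy   # wait until the healthcheck passes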

What is the impact on not using volumes in my docker-compose?

I am new to docker, and what a wonderful tool it is! Following the Django tutorial, the docs provide a basic docker-compose.yml that looks similar to the following one that I've created.
version: '3'
services:
  web:
    build: .
    container_name: web
    # note: two `command:` keys would override each other in YAML, so run both steps in one command
    command: sh -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - ./src:/src
    ports:
      - "8000:8000"
    depends_on:
      - postgres
  postgres:
    image: postgres:latest
    container_name: postgres
    environment:
      POSTGRES_USER: my_user
      POSTGRES_PASSWORD: my_secret_pass!
      POSTGRES_DB: my_db
    ports:
      - "5432:5432"
However, in every single docker-compose file that I see around, the following is added:
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
What are those volumes used for? Does it mean that if I now restart my postgres container all my data is deleted, but if I had the volumes it is not?
Is my docker-compose.yml ready for production?
What are those volumes used for?
Volumes persist data from your container to your Docker host.
This:
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
means that /var/lib/postgresql/data in your container will be persisted to ./postgres-data on your Docker host.
What @Dan Lowe commented is correct: if you do docker-compose down without volumes, all the data inside your containers will be lost, but if you have volumes, the directories and files you specified will be kept on your Docker host.
For named volumes, you can see this data on your Docker host in /var/lib/docker/volumes/<your_volume_name>/_data, even after the container no longer exists (a bind mount like ./postgres-data simply lives in that directory of your project).
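For completeness, a minimal sketch of the same stack using a named volume instead of the bind mount (postgres-data is an arbitrary name; named volumes are the ones that end up under /var/lib/docker/volumes):
version: '3'
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_USER: my_user
      POSTGRES_PASSWORD: my_secret_pass!
      POSTGRES_DB: my_db
    volumes:
      - postgres-data:/var/lib/postgresql/data
volumes:
  postgres-data:   # declared here so Docker creates and manages it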
