Docker Compose freezing on database creation

I want to deploy a web application using Docker (I'm a beginner with Docker). My application requires PostgreSQL, so I decided to use docker-compose.
The docker-compose.yaml looks like this:
version: '3.8'
services:
  db:
    image: postgres
    command: "postgres -c listen_addresses='*'"
    environment:
      - POSTGRES_DB=CatsQMS
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=12345qwerty
  application:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
I also have a config.yaml file that configures my web application; it looks like this:
database_config:
  host: db
  user: postgres
  password: 12345qwerty
  port: 5432
  database: CatsQMS
  # Unimportant stuff
And when I launch it with docker-compose up, the output freezes at this point:
Recreating cats_queue_management_system_db_1 ... done
Recreating cats_queue_management_system_application_1 ... done
Attaching to cats_queue_management_system_db_1, cats_queue_management_system_application_1
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2020-05-20 13:42:51.628 UTC [1] LOG: starting PostgreSQL 12.3 (Debian 12.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-05-20 13:42:51.628 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-05-20 13:42:51.628 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-05-20 13:42:51.635 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-05-20 13:42:51.660 UTC [24] LOG: database system was shut down at 2020-05-20 13:39:45 UTC
db_1 | 2020-05-20 13:42:51.673 UTC [1] LOG: database system is ready to accept connections
Perhaps it's important; here is my Dockerfile:
FROM python:3.8
RUN mkdir /docker-entrypoint-initdb.d/
COPY ./application/sources/database/init.sql /docker-entrypoint-initdb.d/
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
RUN python3 setup.py develop
ENTRYPOINT ["start_app", "-d"]
Where have I made a mistake?

Your application and PostgreSQL database are running and should work as expected. But after starting the containers, Docker Compose attached to their output, which is what you are seeing from the db container.
You can avoid this by using the -d or --detach option with docker-compose up:
The docker-compose up command aggregates the output of each container (essentially running docker-compose logs -f). When the command exits, all containers are stopped. Running docker-compose up -d starts the containers in the background and leaves them running.
So your command looks like this:
docker-compose up -d

I don't think the issue is with the database container. It looks like it starts up normally and is listening for connections.
What is the ENTRYPOINT in your Dockerfile executing, "start_app"?
The -d argument looks like it may be starting the website as a daemon (background process) which doesn't give docker a process to hook into.
Maybe try
ENTRYPOINT ["start_app"]

Is the issue that your web app can't talk to the database? If so, it might be that a) they don't share a network, and b) you're trying to connect on port 5432, but the postgres db container is not exposing or publishing this port.
I've added a few things to your docker-compose file that might make it work...
version: '3.8'
networks:
  test:
services:
  db:
    image: postgres
    command: "postgres -c listen_addresses='*'"
    environment:
      - POSTGRES_DB=CatsQMS
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=12345qwerty
    ports:
      - 5432:5432
    networks:
      - test
  application:
    build: .
    ports:
      - 8080:8080
    networks:
      - test
    depends_on:
      - db
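One more thing worth keeping in mind: depends_on only waits for the db container to be started, not for Postgres inside it to accept connections, so the application may still need to retry its first connection. Below is a minimal sketch of that idea; it assumes the app uses psycopg2 (the question doesn't show which driver it uses), and the helper name and defaults are purely illustrative:

# Hypothetical startup helper -- assumes psycopg2, which the question does not
# show; adapt to whatever driver the application actually uses.
import time

import psycopg2


def wait_for_postgres(host="db", port=5432, user="postgres",
                      password="12345qwerty", dbname="CatsQMS",
                      retries=30, delay=1.0):
    """Poll Postgres until it accepts connections or the retries run out."""
    for _ in range(retries):
        try:
            # Same connection settings as config.yaml above.
            return psycopg2.connect(host=host, port=port, user=user,
                                    password=password, dbname=dbname)
        except psycopg2.OperationalError:
            # The db container is up, but Postgres itself may still be starting.
            time.sleep(delay)
    raise RuntimeError("Postgres never became reachable")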

Related

Docker takes a long time to start mysql

I have the following docker compose file:
version: '3.9'
services:
  db-production:
    container_name: mysql-production
    image: mysql:latest
    restart: always
    environment:
      MYSQL_HOST: localhost
      MYSQL_DATABASE: dota2learning-db
      MYSQL_ROOT_PASSWORD: toor
    ports:
      - "3306:3306"
    volumes:
      - ./data/db-production:/home/db-production
  db-testing:
    container_name: mysql-testing
    image: mysql:latest
    restart: always
    environment:
      MYSQL_HOST: localhost
      MYSQL_ROOT_PASSWORD: toor
    ports:
      - "3307:3306"
    volumes:
      - ./data/db-testing:/home/db-testing
volumes:
  data:
I also have a SQL script to load a dump of my database. The problem is that Docker takes a long time to start MySQL, so the script doesn't work.
I tried adding the following command to the docker compose file:
command: mysql --user=root --password=toor dota2learning-db < /home/db-production/dumb-db-production.sql
This command does not work because it tries to run before the MySQL server is ready.
I know this because as soon as I created the container I went into it and tried to log in to MySQL, and it wasn't available yet:
sudo docker exec -it mysql-production bash
Inside the container:
mysql --user=root --password=toor
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
I also tried to start MySQL manually:
root@4c91b5407561:/# mysqld start
2022-06-20T14:56:18.448123Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.29) starting as process 97
2022-06-20T14:56:18.451281Z 0 [ERROR] [MY-010123] [Server] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!
2022-06-20T14:56:18.451346Z 0 [ERROR] [MY-010119] [Server] Aborting
2022-06-20T14:56:18.451514Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.29) MySQL Community Server - GPL.
So adding the following command to docker compose doesn't work either:
command: mysqld start
NOTE:
I know that if I wait 1 or 2 minutes MySQL will be available to run the script, but I want to run this script automatically, not manually.
When I add the commands to docker compose, the container keeps restarting forever, because it keeps trying to execute the commands while MySQL is not available yet.
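A common way to handle this kind of startup delay is to poll the server from a small helper and only load the dump once MySQL actually answers. Here is a minimal sketch of that approach; it assumes the pymysql driver and reuses the credentials, port, and dump path from the compose file above, so treat all of the names as illustrative rather than as the asker's actual setup:

# Rough sketch: wait until MySQL accepts connections, then load the dump.
# Assumes pymysql and the host-side paths/credentials shown above.
import subprocess
import time

import pymysql


def wait_for_mysql(host="127.0.0.1", port=3306, user="root",
                   password="toor", retries=60, delay=2.0):
    """Return once MySQL accepts a connection, or raise after the retries."""
    for _ in range(retries):
        try:
            pymysql.connect(host=host, port=port, user=user,
                            password=password).close()
            return
        except pymysql.err.OperationalError:
            time.sleep(delay)  # the server process is still initializing
    raise RuntimeError("MySQL did not become ready in time")


if __name__ == "__main__":
    wait_for_mysql()
    # Load the dump through the published port only after the server is ready.
    subprocess.run(
        "mysql --host=127.0.0.1 --port=3306 --user=root --password=toor"
        " dota2learning-db < ./data/db-production/dumb-db-production.sql",
        shell=True, check=True,
    )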

Docker-compose cannot connect to the database

I am new to docker so sorry in advance if this question is going to be stupid or something.
I have a docker-compose.yaml which looks like this:
version: "3.5"
services:
platform:
image: memgraph/memgraph-platform:2.0.0
container_name: memgraph_container
restart: unless-stopped
ports:
- "7687:7687"
my_app:
build:
context: .
dockerfile: Dockerfile
container_name: something
restart: unless-stopped
command: python main.py
depends_on:
- platform
environment:
HOST: platform
PORT: 7687
The code responsible for connecting to the database looks like this:
host = os.getenv("HOST")
port = os.getenv("PORT")
conn = mgclient.connect(host=host, port=port)
These are the two commands I am running:
docker-compose up -d
docker-compose exec my_app
As far as I understand, this is what should be done for the my_app service to be able to connect to the Memgraph database. But something is wrong, and I don't get what.
Here are the logs from docker-compose logs:
something | Traceback (most recent call last):
something | File "/MemgraphProject/DataBase.py", line 12, in __init__
something | self._connection: mgclient.Connection = mgclient.connect(host=host, port=port)
something | mgclient.OperationalError: couldn't connect to host: Connection refused
The whole stack is stuck in a Restarting state for some reason and is not getting out of it:
memgraph_container /bin/sh -c /usr/bin/superv ... Restarting
something python main.py Restarting
Sometimes, if I keep running docker-compose ps, memgraph_container's state shows as Up, but it doesn't keep that state for long.
#########################################################################################
At the end of the day, all I want is to run my application successfully with docker-compose exec my_app -d, but it is not working.
If I run docker-compose run my_app to start a new container, I get the error message about the connection to the database.
But if I run docker-compose exec my_app -d, I receive this error message because of the restarting state:
Error response from daemon: Container 6fd406ab0b4249424aabd3674aa4572380ac9ddf4f81d756d075bb692c768303 is restarting, wait until the container is running
#########################################################################################
Does any of you know what I am doing wrong, and how can I fix it?
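For what it's worth, my_app will keep restarting as long as main.py exits immediately when the first connection fails; a retry loop around the same mgclient.connect() call gives Memgraph time to start. A rough sketch follows, where the helper name and defaults are illustrative, not from the post:

# Retry sketch around the connection code from the question. Note that
# environment variables are strings, so PORT is converted to an int, which
# mgclient most likely expects.
import os
import time

import mgclient


def connect_with_retry(retries=30, delay=2.0):
    host = os.getenv("HOST", "platform")
    port = int(os.getenv("PORT", "7687"))
    last_error = None
    for _ in range(retries):
        try:
            return mgclient.connect(host=host, port=port)
        except mgclient.OperationalError as exc:
            # "Connection refused" usually just means Memgraph isn't ready yet.
            last_error = exc
            time.sleep(delay)
    raise last_error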

Program in docker can't connect to PostgreSQL in docker

I run my Golang API and PostgreSQL with docker-compose.
My log with the connection refused error:
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
api_1 | Unable to connect to database: dial tcp 0.0.0.0:5432: connect: connection refused
artpaper_api_1 exited with code 1
db_1 | 2021-12-26 15:18:35.152 UTC [1] LOG: starting PostgreSQL 14.1 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
db_1 | 2021-12-26 15:18:35.152 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2021-12-26 15:18:35.152 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2021-12-26 15:18:35.216 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2021-12-26 15:18:35.329 UTC [22] LOG: database system was shut down at 2021-12-26 15:05:11 UTC
db_1 | 2021-12-26 15:18:35.515 UTC [1] LOG: database system is ready to accept connections
My config:
config := pgx.ConnConfig{
    Host:     "0.0.0.0",
    Port:     5432,
    Database: "artpaper",
    User:     "admin",
    Password: "admin1",
}
I think the mistake is in docker-compose.yml or the Dockerfile for the API, because docker ps shows the correct ports:
0.0.0.0:5432->5432/tcp artpaper_db_1
Dockerfile for API:
FROM golang:1.17.5-alpine3.15
WORKDIR /artpaper
COPY ./ ./
# download dependencies
RUN go mod download
# compile the code into one binary
RUN go build ./cmd/main/main.go
EXPOSE 8080
# run the binary
CMD [ "./main" ]
docker-compose.yml:
version: "3.3"
services:
db:
image: postgres:14.1-alpine3.15
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=artpaper
- POSTGRES_USER=admin
- POSTGRES_PASSWORD=admin1
ports:
- 5432:5432
api:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
depends_on:
- db
The password and user in the API config are the same as in docker-compose.yml.
The host the api container connects to is 0.0.0.0:5432.
In your Golang application try using: db:5432, not 0.0.0.0:5432.
version: '3.8'
services:
  db:
    image: postgres:14.1-alpine3.15
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=artpaper
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin1
    ports:
      - 5432:5432
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
  debug:
    image: postgres:14.1-alpine3.15
    command: sleep 1d
Try connecting to the database with:
docker-compose up -d
docker-compose exec debug ash -c 'psql -h db -U admin --dbname artpaper'
First: in the config, the database host should be db, as in docker-compose.yml.
Second: Postgres does not have time to start up inside the container. I added a delay in the api code before connecting to the database.

Is there something missing in docker getting-started tutorial?

I'm going through the getting-started tutorial (https://www.docker.com/101-tutorial - Docker Desktop) from Docker, and it includes this docker-compose file:
version: "3.7"
services:
app:
image: node:12-alpine
command: sh -c "yarn install && yarn run dev"
ports:
- 3000:3000
working_dir: /app
volumes:
- ./:/app
environment:
MYSQL_HOST: mysql
MYSQL_USER: root
MYSQL_PASSWORD: secret
MYSQL_DB: todos
mysql:
image: mysql:5.7
volumes:
- todo-mysql-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: todos
volumes:
todo-mysql-data:
The problem is that MySQL is not creating the "todos" database.
And then my application can't connect to it giving me this error:
app_1 | Error: ER_HOST_NOT_PRIVILEGED: Host '172.26.0.2' is not allowed to connect to this MySQL server
app_1 | at Handshake.Sequence._packetToError (/app/node_modules/mysql/lib/protocol/sequences/Sequence.js:47:14)
app_1 | at Handshake.ErrorPacket (/app/node_modules/mysql/lib/protocol/sequences/Handshake.js:123:18)
app_1 | at Protocol._parsePacket (/app/node_modules/mysql/lib/protocol/Protocol.js:291:23)
app_1 | at Parser._parsePacket (/app/node_modules/mysql/lib/protocol/Parser.js:433:10)
app_1 | at Parser.write (/app/node_modules/mysql/lib/protocol/Parser.js:43:10)
app_1 | at Protocol.write (/app/node_modules/mysql/lib/protocol/Protocol.js:38:16)
app_1 | at Socket.<anonymous> (/app/node_modules/mysql/lib/Connection.js:91:28)
app_1 | at Socket.<anonymous> (/app/node_modules/mysql/lib/Connection.js:525:10)
app_1 | at Socket.emit (events.js:310:20)
app_1 | at addChunk (_stream_readable.js:286:12)
app_1 | --------------------
app_1 | at Protocol._enqueue (/app/node_modules/mysql/lib/protocol/Protocol.js:144:48)
app_1 | at Protocol.handshake (/app/node_modules/mysql/lib/protocol/Protocol.js:51:23)
app_1 | at PoolConnection.connect (/app/node_modules/mysql/lib/Connection.js:119:18)
app_1 | at Pool.getConnection (/app/node_modules/mysql/lib/Pool.js:48:16)
app_1 | at Pool.query (/app/node_modules/mysql/lib/Pool.js:202:8)
app_1 | at /app/src/persistence/mysql.js:35:14
app_1 | at new Promise (<anonymous>)
app_1 | at Object.init (/app/src/persistence/mysql.js:34:12)
app_1 | at processTicksAndRejections (internal/process/task_queues.js:97:5) {
app_1 | code: 'ER_HOST_NOT_PRIVILEGED',
app_1 | errno: 1130,
app_1 | sqlMessage: "Host '172.26.0.2' is not allowed to connect to this MySQL server",
app_1 | sqlState: undefined,
app_1 | fatal: true
app_1 | }
If I run this command alone to spin up MySQL, the "todos" database is created:
docker run -d --network todo-app --network-alias mysql -v todo-mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=todos mysql:5.7
Is there any command that was updated or that doesn't work properly on Windows with docker-compose?
TL;DR;
Run the command
docker-compose down --volumes
to remove any problematic volume created during the tutorial's early phases, then resume your tutorial at the step Running our Application Stack.
I suppose that the tutorial you are following is this one.
If you followed it piece by piece and tried a docker-compose up -d in step 1 or 2, then you've probably created a volume without your todos database.
Just running docker-compose down with your existing docker-compose.yml won't suffice, because this is exactly what volumes are made for: a volume is the persistent storage layer of Docker.
By default all files created inside a container are stored on a writable container layer. This means that:
The data doesn’t persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.
A container’s writable layer is tightly coupled to the host machine where the container is running. You can’t easily move the data somewhere else.
Writing into a container’s writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.
Docker has two options for containers to store files in the host machine, so that the files are persisted even after the container stops: volumes, and bind mounts. If you’re running Docker on Linux you can also use a tmpfs mount. If you’re running Docker on Windows you can also use a named pipe.
Source: https://docs.docker.com/storage/
In order to remove that volume, which you probably created without your database, there is an extra flag you can add to docker-compose down: --volumes, or -v for short.
-v, --volumes Remove named volumes declared in the `volumes`
section of the Compose file and anonymous volumes
attached to containers.
Source: https://docs.docker.com/compose/reference/down/
So your fix should be as simple as:
docker-compose down --volumes
docker-compose up -d, which puts you back in the tutorial at the step Running our Application Stack
docker-compose logs -f as prompted in the rest of the tutorial
Currently your todos database is created inside your mysql container when you launch docker-compose start.
In fact, your issue comes from MySQL user permissions.
Add the line below at the end of the file that initializes the todos database:
CREATE USER 'newuser'@'%' IDENTIFIED BY 'user_password';
That line will create a user newuser and give it access from any host (%) with the password user_password.
Follow it with this line:
GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'%';
It grants all permissions on every database and every table you have to newuser from any host.
Finally, change your mysql environment variables MYSQL_USER and MYSQL_PASSWORD to the new ones you just created:
version: "3.7"
services:
app:
image: node:12-alpine
command: sh -c "yarn install && yarn run dev"
ports:
- 3000:3000
working_dir: /app
volumes:
- ./:/app
environment:
MYSQL_HOST: mysql
MYSQL_USER: newuser
MYSQL_PASSWORD: user_password
MYSQL_DB: todos
mysql:
image: mysql:5.7
volumes:
- todo-mysql-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: todos
volumes:
todo-mysql-data:

Linked container IP not in hosts

I'm trying to configure a simple LAMP app.
Here is my Dockerfile
FROM ubuntu
# ...
RUN apt-get update
RUN apt-get -yq install apache2
# ...
WORKDIR /data
And my docker-compose.yml
db:
  image: mysql
web:
  build: .
  ports:
    - 80:80
  volumes:
    - .:/data
  links:
    - db
  command: /data/run.sh
After docker-compose build & up, I was expecting to find db added to /etc/hosts (inside the web container), but it's not there.
How can this be explained? What am I doing wrong?
Note 1: At up time I see only Attaching to myapp_web_1; shouldn't I also see myapp_db_1?
Note 2: I'm using boot2docker.
Following @Alexandru_Rosianu's comment, I checked:
$ docker-compose logs db
error: database is uninitialized and MYSQL_ROOT_PASSWORD not set
Did you forget to add -e MYSQL_ROOT_PASSWORD=... ?
Now that I have set the variable MYSQL_ROOT_PASSWORD:
$ docker-compose up
Attaching to myapp_db_1, myapp_web_1
db_1 | Running mysql_install_db
db_1 | ...
I can see the whole db log, and the db host is effectively set in web's /etc/hosts.
