I have a docker-compose.yml with postgres and a web app (ghost). I would like to run a container between postgres and ghost to initialize postgres, add a database and user permissions, and exit.
My database initialization code looks something like:
ghostdb:
  extends:
    file: ./compose/ghost.yml
    service: ghostdb
  links:
    - postgres
  volumes:
    - ./ghost-db/volumes/sql:/sql
Which in turn runs
#!/bin/bash
echo Importing SQL
until pg_isready -h postgres; do
  sleep 1
done
for f in /sql/*.sql; do
  echo Importing $f
  psql -h postgres -f $f
done
I know I can extend postgres to add this functionality, but I would rather separate these two concerns. So I have two questions:
Is there a preferable pattern for initializing a database? Is it possible to run a container that exits between postgres and ghost?
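For context, one way this can be expressed is a minimal sketch assuming a recent Docker Compose that supports completion conditions in depends_on; the service name ghostdb-init and the script path /sql/import.sh are assumptions, not taken from the linked repository:
services:
  postgres:
    image: postgres
  ghostdb-init:
    # one-shot container: waits for postgres, imports the SQL, then exits
    image: postgres
    depends_on:
      - postgres
    volumes:
      - ./ghost-db/volumes/sql:/sql
    command: ["bash", "/sql/import.sh"]   # hypothetical script name
  ghost:
    image: ghost
    depends_on:
      ghostdb-init:
        condition: service_completed_successfully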
Full repository can be viewed here: https://github.com/devpaul/ghost-compose
I have a local project early in development which uses Nestjs and TypeORM to connect to a Docker postgres instance (called 'my_database_server'). Things were working on my old computer, an older MacBook Pro.
I've just migrated everything onto a new MacBook Pro with the new M2 chip (Apple silicon). I've downloaded the version of Docker Desktop that's appropriate for Apple silicon. It runs fine, it still shows 'my_database_server', it can launch that fine, and I can even use the Terminal to go into its Postgres db and see the data that existed on my old computer.
But I can't figure out how to adjust the config of my project to get it to connect to this database. I've read in other articles that because Docker is now running on Apple silicon and using emulation, the host should be different.
This is what my .env used to look like:
POSTGRES_HOST=127.0.0.1
POSTGRES_PORT=5432
POSTGRES_USER=postgres
On my new computer, the above doesn't connect. I have tried these other values for POSTGRES_HOST, many inspired by other SO posts, but these all yield Error: getaddrinfo ENOTFOUND _____ errors:
my_database_server (the container name)
docker (since I didn't use a docker-compose.yaml file - see below - I don't know what the 'service name' is in this case)
192.168.65.0/24 (the "Docker subnet" value in Docker Desktop > Preferences > Resources > Network)
Next, for some other values I tried, the code is trying to connect for a longer time, but it's getting stuck on something later in the process. With these, eventually I get Error: connect ETIMEDOUT ______:
192.168.65.0
172.17.0.2 (from another SO post, I tried the terminal command docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 78f6e532b324 - the last part being the container ID of my_database_server)
In case it helps, I originally set up this docker container using the script I found here, not using a docker-compose.yaml file. Namely, I ran this script once at the beginning:
#!/bin/bash
set -e

SERVER="my_database_server";
PW="mysecretpassword";
DB="my_database";

echo "echo stop & remove old docker [$SERVER] and starting new fresh instance of [$SERVER]"
(docker kill $SERVER || :) && \
  (docker rm $SERVER || :) && \
  docker run --name $SERVER -e POSTGRES_PASSWORD=$PW \
    -e PGPASSWORD=$PW \
    -p 5432:5432 \
    -d postgres

# wait for pg to start
echo "sleep wait for pg-server [$SERVER] to start";
sleep 3;

# create the db
echo "CREATE DATABASE $DB ENCODING 'UTF-8';" | docker exec -i $SERVER psql -U postgres
echo "\l" | docker exec -i $SERVER psql -U postgres
What should be my new db config settings?
I never figured the above problem out, but it was blocking me, so I found a different way around it.
Per other SO questions, I decided to go with the more typical route of using a docker-compose.yml file to create the Docker container. In case it helps others in this problem, this is what the main part of my docker-compose.yml looks like:
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=${DATABASE_USER}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
    container_name: postgres-db
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - "54320:5432"
I then always run this with docker-compose up -d, not starting the container through the Docker Desktop app (though after that command, you should see the new container light up in the app).
Then in .env, I have this critical part:
POSTGRES_HOST=localhost
POSTGRES_PORT=54320
I mapped Docker's internal 5432 to the localhost-accessible 54320 (a suggestion I found here). Doing "5432:5432" as other articles suggest was not working for me, for reasons I don't entirely understand.
Other articles will suggest changing the host to whatever the service name is in your docker-compose.yml (for the example above, it would be db) - this also did not work for me. I believe the "54320:5432" part maps the ports correctly so that host can remain localhost.
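As a quick sanity check of that mapping from the host (a hypothetical test, assuming psql is installed locally and the default postgres superuser exists):
psql -h localhost -p 54320 -U postgres -c '\l'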
Hope this helps others!
I am new to the Docker development environment. In my JHipster Docker environment, I run into a "relation "" already exists" error when I start the Docker image. This error occurs after the DB schema is changed. The following is the docker-compose.yml file:
version: '2'
services:
  foo-app:
    image: foo
    environment:
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - SPRING_DATASOURCE_URL=jdbc:postgresql://foo-postgresql:5432/foo
      - JHIPSTER_SLEEP=10 # gives time for the database to boot before the application
    ports:
      - 8080:8080
  foo-postgresql:
    extends:
      file: foo.yml
      service: foo-postgresql
And the foo.yml file is the following:
version: '2'
services:
  foo-postgresql:
    image: postgres:9.6.5
    # volumes:
    #   - ~/volumes/jhipster/foo/postgresql/:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=foo
      - POSTGRES_PASSWORD=
    ports:
      - 5432:5432
At this point, I can drop the DB tables, since the application isn't up but the DB is. I don't, however, see that as the right approach to DB management. I also can bring up the DB image with the command
docker run -i -t '<DB image name>' /bin/bash
I, however, can't access the DB with a command
psql -h 127.0.0.1 -p 5432 -U postgres
What is a right way to manage a DB in Docker?
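As an aside: docker run -i -t '<DB image name>' /bin/bash starts a brand-new container whose only process is bash, so no postgres server is listening inside it. To get a psql prompt inside the database container that compose already started, something like this should work (a sketch, assuming the service and user names from foo.yml above):
docker-compose exec foo-postgresql psql -U foo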
To be honest, because of the same problems you are facing, I went with a cloud DB service (AWS RDS) instead of hosting Docker containers for the DB. I still use a Docker DB container for local/QA environments because those environments do not need to be scalable.
For dev I throw both services (app/db) into a docker-compose.yml file and kick it off using a bash script. In the script I put a few sleep commands to ensure the db is up and ready, and then init the app after that. I'm using a mysql container, and usually 40 seconds is plenty of time to ensure the mysql service is up. Not a production-ready situation, but I couldn't find a nice solution for figuring out whether my db container was up and available.
In a production or HA environment with RDS the database is always available, so when a new docker container is added to the cluster there is no waiting for a dependent DB container as well.
Here's the script to start services and wait:
#!/bin/bash
## Use environment variables set in the .env file.
source ./.env

### Name the container and launch; check docker-compose.yml ###
echo "Starting Docker containers ..."
CONTAINER=${CONTAINER_NAME}
docker-compose up -d
sleep 20
echo "..."
sleep 20

# Continue with my app setup...
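If the fixed sleeps ever prove unreliable, a hedged alternative is to poll the database until it answers instead, e.g. (assuming the MySQL service is named db in docker-compose.yml and the mysql client tools are present in its image):
# wait up to ~60s for the MySQL server inside the db service to answer
for i in $(seq 1 60); do
  if docker-compose exec -T db mysqladmin ping -h localhost --silent; then
    echo "database is up"
    break
  fi
  sleep 1
done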
I'm running a Ruby on Rails application in a Docker container. I want to create and then restore a database dump in the postgres container, but I haven't been able to get it working.
Below is what I've done so far:
1) Added a bash script in the /docker-entrypoint-initdb.d folder. The script just creates the database:
psql -U docker -d postgres -c 'create database dbname;'
RESULT: Database created but rails server exited with code 0. Error: web_1 exited with code 0
2) Added a script to be executed before docker-compose up:
# Run docker db container
echo "Running db container"
docker-compose run -d db
# Sleep for 10 sec so that the container has time to start
echo "Sleep for 10 sec"
sleep 10
echo 'Copying db_dump.gz to db container'
docker cp db_dump/db_dump.gz $(docker-compose ps -q db):/
# Create database `dbname`
echo 'Creating database `dbname`'
docker exec -i $(docker-compose ps -q db) psql -U docker -d postgres -c 'create database dbname;'
echo 'importing database `dbname`'
docker exec -i $(docker-compose ps -q db) bash -c "gunzip -c /db_dump.gz | psql -U postgres dbname"
RESULT: Database created and data restored. But an extra container keeps running alongside the web application server when I run docker-compose up.
docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=docker
      - POSTGRES_USER=docker
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0' -d
    image: uname/application
    links:
      - db
    ports:
      - "3000:3000"
    depends_on:
      - db
    tty: true
Can someone please help me create and import the database?
EDIT:
I've tried one more approach: adding a POSTGRES_DB=db_name environment variable in the docker-compose.yml file so that the database is created, and then importing the dump after running the application (docker-compose up). But I get an error: web_1 exited with code 0.
I'm confused about why I'm getting this error (in the first and third approaches); something seems to be messed up in the docker-compose file.
Set up a database dump mount
You'll need to mount the dump into the container so you can access it. Something like this in docker-compose.yml:
db:
  volumes:
    - './db_dump:/db_dump'
Make a local directory named db_dump and place your db_dump.gz file there.
Start the database container
Use POSTGRES_DB in the environment (as you mentioned in your question) to automatically create the database. Start db by itself, without the rails server.
docker-compose up -d db
Import data
Wait a few seconds for the database to be available. Then, import your data (note that gunzip strips the .gz suffix, leaving /db_dump/db_dump):
docker-compose exec db gunzip /db_dump/db_dump.gz
docker-compose exec db psql -U postgres -d dbname -f /db_dump/db_dump
docker-compose exec db rm -f /db_dump/db_dump
You can also just make a script to do this import, stick that in your image, and then use a single docker-compose command to call that. Or you can have your entrypoint script check whether a dump file is present, and if so, unzip it and import it... whatever you need to do.
Start the rails server
docker-compose up -d web
Automating this
If you are doing this by hand for prep of a new setup, then you're done. If you need to automate this into a toolchain, you can do some of this stuff in a script. Just start the containers separately, doing the db import in between, and use sleep to cover any startup delays.
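A rough sketch of that automation, using pg_isready instead of fixed sleeps (service, database, and dump names are taken from the example above and may differ in your setup):
#!/bin/bash
set -e
docker-compose up -d db
# wait until postgres inside the db service accepts connections
until docker-compose exec -T db pg_isready -U postgres; do
  sleep 1
done
# import the dump, then start the rails server
docker-compose exec -T db bash -c 'gunzip -c /db_dump/db_dump.gz | psql -U postgres -d dbname'
docker-compose up -d web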
web_1 exited with code 0
Did you try checking the logs of the web_1 container? docker-compose logs web
I strongly recommend that you don't initialize your db container manually; make it happen automatically as part of container startup.
Looking at the entrypoint of the postgres image, we can just put the db_dump.gz into the /docker-entrypoint-initdb.d/ directory of the container and it will be executed automatically, so docker-compose.yml could be:
db:
  volumes:
    - './initdb.d:/docker-entrypoint-initdb.d'
And put your db_dump.gz into ./initdb.d on your local machine.
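One caveat, to the best of my knowledge: the official postgres entrypoint only runs files matching *.sh, *.sql, or *.sql.gz from that directory, and only on a fresh, empty data directory, so the dump may need renaming, for example:
mkdir -p initdb.d
cp db_dump/db_dump.gz initdb.d/db_dump.sql.gz   # .sql.gz so the entrypoint picks it up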
When you use the command
docker-compose run -d db
you start a separate, one-off container, which means you end up running three containers: one application and two databases. The container started by the command above is not part of the service; compose uses a separate db.
So instead of docker-compose run -d db, run docker-compose up -d and continue with your script.
I got it working by adding a container_name for the db container. My db container had a different name (app_name_db_1) while I was connecting to a container named db.
After setting the hard-coded container_name (db), it works.
This works:
$ docker-compose run web psql -U dbuser -h db something_development
My docker-compose.yml file has environment variables all over the place. If I run docker-compose run web env I see all kinds of tasty things I'd like to reuse in these one off commands (scripts and one-time shells).
docker-compose run web env
...
DATABASE_USER=dbuser
DATABASE_HOST=db
DATABASE_NAME=something_development
DB_ENV_POSTGRES_USER=dbuser
... many more
This won't work because my current shell evals it.
docker-compose run web psql -U ${DATABASE_USER} -h ${DATABASE_HOST} ${DATABASE_NAME}
psql: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
These environment variables are coming from an env file like .app.env as referenced by docker-compose.yml but docker-compose itself can set environment variables. Seems a shame to even type dbuser when they are right there. I've tried my normal escaping tricks.
docker-compose run web psql -U \$\{DATABASE_USER\} -h \$\{DATABASE_HOST\} \$\{DATABASE_NAME\}
... many other attempts
I totally rubber ducked on SO so I'm going to answer my own question.
The answer (and there may be many ways to do it) is to run bash with -c and use single quotes so that your local shell doesn't interpolate the string.
docker-compose run web bash -c 'psql -U ${DATABASE_USER} \
-h ${DATABASE_HOST} ${DATABASE_NAME}'
Fantastic. DRY shell commands in docker-compose.
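The same single-quote trick works for any one-off command that needs those variables, for example listing tables (a hypothetical check, reusing the variable names shown above):
docker-compose run --rm web bash -c 'psql -U ${DATABASE_USER} \
    -h ${DATABASE_HOST} -c "\dt" ${DATABASE_NAME}'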
prob not the answer you want but the way we set env vars ...
(this is one of many containers)
api:
  image: 10.20.2.139:5000/investigation-api:${apiTag}
  container_name: "api"
  links:
    - "database"
    - "ldap"
  ports:
    - 8843:8843
  environment:
    KEYSTORE_PASSWORD: "ooooo"
    KEYSTORE: "${MYVAR}"
  volumes_from:
    - certs:rw
running compose ...
MYVAR=/etc/ssl/certs/keystoke.jks docker-compose (etcetera)
typically the above line will be in a provision.sh script - cheers
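For what it's worth, newer docker-compose versions also substitute variables from a .env file placed next to docker-compose.yml, so the same values can live there instead of being prefixed onto every command (the values below are placeholders):
# .env (next to docker-compose.yml)
apiTag=latest
MYVAR=/etc/ssl/certs/keystoke.jks
With that in place, a plain docker-compose up -d picks them up.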
What I'm trying to do
I want to run a yesod web application in one docker container, linked to a postgres database in another docker container.
What I've tried
I have the following file hierarchy:
/
  api/
    Dockerfile
  database/
    Dockerfile
  docker-compose.yml
The docker-compose.yml looks like this:
database:
  build: database
api:
  build: api
  command: .cabal/bin/yesod devel # dev setting
  environment:
    - HOST=0.0.0.0
    - PGHOST=database
    - PGPORT=5432
    - PGUSER=postgres
    - PGPASS
    - PGDATABASE=postgres
  links:
    - database
  volumes:
    - api:/home/haskell/
  ports:
    - "3000:3000"
Running sudo docker-compose up fails either to start the api container at all or, just as often, with the following error:
api_1 | Yesod devel server. Press ENTER to quit
api_1 | yesod: <stdin>: hGetLine: end of file
personal_api_1 exited with code 1
If, however, I run sudo docker-compose database up & then start up the api container without using compose but instead using
sudo docker run -p 3000:3000 -itv /home/me/projects/personal/api/:/home/haskell --link personal_database_1:database personal_api /bin/bash
I can export the environment variables being set up in the docker-compose.yml file then manually run yesod devel and visit my site successfully on localhost.
Finally, I obtain a third different behaviour if I run sudo docker-compose run api on its own. This seems to start successfully but I can't access the page in my browser. By running sudo docker-compose run api /bin/bash I've been able to explore this container and I can confirm the environment variables being set in docker-compose.yml are all set correctly.
Desired behaviour
I would like to get the result I achieve from running the database in the background then manually setting the environment in the api container's shell simply by running sudo docker-compose up.
Question
Clearly the three different approaches I'm trying do slightly different things. But from my understanding of docker and docker-compose I would expect them to be essentially equivalent. Please could someone explain how and why they differ and, if possible, how I might achieve my desired result?
The error message suggests the API process is expecting input from the command line, which requires a TTY to be present in your container.
In your "manual" start, you tell docker to create a TTY in the container via the -t flag (-itv is shorthand for -i -t -v), so the API container runs successfully.
To achieve the same in docker-compose, you'll have to add a tty key to the API service in your docker-compose.yml and set it to true:
database:
  build: database
api:
  build: api
  tty: true # <--- enable TTY for this service
  command: .cabal/bin/yesod devel # dev setting
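If the process also needs to read from stdin (the -i half of -itv), the compose counterpart of that flag is stdin_open; a hedged addition to the same service:
api:
  build: api
  tty: true          # docker run -t
  stdin_open: true   # docker run -i
  command: .cabal/bin/yesod devel # dev setting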