Error: Postgres database import in docker container - ruby-on-rails

I'm running a Ruby on Rails application in a Docker container. I want to create and then restore a database dump in the postgres container, but I'm running into errors.
Below is what I've done so far:
1) Added a bash script in the /docker-entrypoint-initdb.d folder. The script just creates the database:
psql -U docker -d postgres -c 'create database dbname;'
RESULT: The database is created, but the rails server exits. Error: web_1 exited with code 0
2) Added a script to be executed before docker-compose up:
#!/bin/bash
# Run the db container
echo "Running db container"
docker-compose run -d db

# Sleep for 10 sec so that the container has time to start
echo "Sleep for 10 sec"
sleep 10

echo 'Copying db_dump.gz to db container'
docker cp db_dump/db_dump.gz $(docker-compose ps -q db):/

# Create database `dbname`
echo 'Creating database `dbname`'
docker exec -i $(docker-compose ps -q db) psql -U docker -d postgres -c 'create database dbname;'

echo 'Importing database `dbname`'
docker exec -i $(docker-compose ps -q db) bash -c "gunzip -c /db_dump.gz | psql -U postgres dbname"
RESULT: The database is created and the data restored. But docker-compose run starts a one-off container, so a second db container appears when the web application server is brought up with docker-compose up.
docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=docker
      - POSTGRES_USER=docker
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0' -d
    image: uname/application
    links:
      - db
    ports:
      - "3000:3000"
    depends_on:
      - db
    tty: true
Can someone please help me create and import the database?
EDIT:
I've tried one more approach: adding a POSTGRES_DB=db_name environment variable in the docker-compose.yml file so that the database is created automatically, then importing the dump after the application is running (docker-compose up). But I get the same error: web_1 exited with code 0.
I'm confused about why I get this error (in the first and third approaches); something seems to be off in the docker-compose file.

Set up a database dump mount
You'll need to mount the dump into the container so you can access it. Something like this in docker-compose.yml:
db:
  volumes:
    - './db_dump:/db_dump'
Make a local directory named db_dump and place your db_dump.gz file there.
Start the database container
Use POSTGRES_DB in the environment (as you mentioned in your question) to automatically create the database. Start db by itself, without the rails server.
docker-compose up -d db
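For reference, a sketch of the full db service with POSTGRES_DB added (user and password values taken from the question's compose file; POSTGRES_DB only takes effect on a fresh data directory):
db:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=docker
    - POSTGRES_USER=docker
    - POSTGRES_DB=dbname   # created automatically on first start
  volumes:
    - './db_dump:/db_dump'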
Import data
Wait a few seconds for the database to be available. Then, import your data.
docker-compose exec db gunzip /db_dump/db_dump.gz
docker-compose exec db psql -U postgres -d dbname -f /db_dump/db_dump   # gunzip strips the .gz
docker-compose exec db rm -f /db_dump/db_dump
You can also just make a script to do this import, stick that in your image, and then use a single docker-compose command to call that. Or you can have your entrypoint script check whether a dump file is present, and if so, unzip it and import it... whatever you need to do.
Start the rails server
docker-compose up -d web
Automating this
If you are doing this by hand to prep a new setup, then you're done. If you need to automate it into a toolchain, you can put this in a script: start the containers separately, do the db import in between, and use sleep (or a readiness check) to cover any startup delays, as sketched below.
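For example, a minimal sketch of such a script, assuming the db/web service names and the /db_dump mount from the steps above, and that dbname was created via POSTGRES_DB:
#!/bin/bash
set -e

docker-compose up -d db

# poll until postgres accepts connections instead of guessing a sleep time
until docker-compose exec -T db pg_isready -U docker; do
  sleep 1
done

# import into dbname; -U matches POSTGRES_USER from the compose file
docker-compose exec -T db bash -c \
  'gunzip -c /db_dump/db_dump.gz | psql -U docker -d dbname'

docker-compose up -d web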

web_1 exited with code 0
Did you try checking the logs of the web_1 container? docker-compose logs web
I strongly recommend that you don't initialize your db container manually; have it happen automatically as part of container startup.
Look at the entrypoint of the postgres image: we can just put the dump into the container's /docker-entrypoint-initdb.d/ directory and it will be executed automatically (note the entrypoint only picks up *.sh, *.sql and *.sql.gz files, so name the dump db_dump.sql.gz). docker-compose.yml could be:
db:
  volumes:
    - './initdb.d:/docker-entrypoint-initdb.d'
And put your db_dump.sql.gz into ./initdb.d on your local machine.
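A small sketch of preparing that directory, assuming the dump currently lives in db_dump/ as in the question:
mkdir -p initdb.d
# the entrypoint only runs *.sh, *.sql and *.sql.gz files,
# so give the dump an .sql.gz extension
cp db_dump/db_dump.gz initdb.d/db_dump.sql.gz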

When you use the command
docker-compose run -d db
you start a separate, one-off container, which means you end up running three containers: one application and two databases. The container started by the command above is not part of the service; compose brings up its own, separate db.
So instead of running docker-compose run -d db, run docker-compose up -d and continue with your script.

I got it working by adding a container_name for the db container. My db container had a different name (app_name_db_1) while I was connecting to a container named db.
After hard-coding container_name: db, it worked.
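In compose terms that amounts to something like this (a sketch; the rest of the db service stays as in the question):
db:
  image: postgres
  container_name: db   # fixed name instead of the generated app_name_db_1
  environment:
    - POSTGRES_PASSWORD=docker
    - POSTGRES_USER=docker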

Related

Setting up Rails console on Docker container not taking any input

I was trying to set up a Rails console in my dockerized container. The entire application has multiple components, and I have set up the orchestration using docker-compose. Here is the relevant service from my docker-compose.yml file:
app:
  image: <image_name>
  tty: true
  # mount the current directory (on the host) to /usr/src/app in the container;
  # changes in either are reflected in both the host and the container
  volumes:
    - .:/usr/src/app
  # expose the application on localhost:36081
  ports:
    - "36081:36081"
  # restart if the application stops for any reason - required for the container to
  # restart when the application fails to start because the database containers are not ready
  restart: always
  depends_on:
    - other-db
  container_name: application
  # the environment variables are used in docker/config/env_config.rb to connect
  # to the different database containers
  environment:
    - CONSOLE=$CONSOLE
My Dockerfile has the following entrypoint: ENTRYPOINT /usr/src/app/docker-entrypoint.sh
And in the docker-entrypoint.sh
#!/bin/bash
echo "waiting for all db connections to be healthy... Sleeping..."
sleep 1m

mkdir -p /usr/src/app/tmp/pids/
mkdir -p /usr/src/app/tmp/sockets/

if [ "$CONSOLE" = "Y" ]; then
  echo "Starting Padrino console"
  bundle exec padrino console
fi
When I run
export CONSOLE=Y
docker-compose -f docker-compose.yml up -d && docker attach application
The console starts up and I see >> but I cannot type in it. Where am I going wrong?
Try starting your container in interactive mode:
-i, --interactive Attach the container's STDIN
Note that -i is a docker run flag; docker-compose up doesn't accept it. The compose equivalent is stdin_open: true on the service, and you can still combine it with -d as needed.
With help from this post, I figured out that I was missing stdin_open: true in the docker-compose.yml. Adding it worked like a breeze.
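For completeness, a sketch of the relevant service keys (stdin_open is the compose counterpart of docker run -i, tty of -t):
app:
  image: <image_name>
  tty: true          # like docker run -t
  stdin_open: true   # like docker run -i; lets the attached console accept input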

DB management in Docker

I am new to the Docker development environment. In my JHipster Docker environment, I run into a "relation … already exists" error when I start the Docker image. The error occurs after the DB schema has changed. The following is the docker-compose.yml file:
version: '2'
services:
  foo-app:
    image: foo
    environment:
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - SPRING_DATASOURCE_URL=jdbc:postgresql://foo-postgresql:5432/foo
      - JHIPSTER_SLEEP=10 # gives time for the database to boot before the application
    ports:
      - 8080:8080
  foo-postgresql:
    extends:
      file: foo.yml
      service: foo-postgresql
And the foo.yml file is the following:
version: '2'
services:
  foo-postgresql:
    image: postgres:9.6.5
    # volumes:
    #   - ~/volumes/jhipster/foo/postgresql/:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=foo
      - POSTGRES_PASSWORD=
    ports:
      - 5432:5432
At this point I can drop the DB tables, since the DB is up while the application isn't. I don't see that as the right approach to DB management, however. I can also bring up the DB image with the command
docker run -i -t '<DB image name>' /bin/bash
but then I can't access the DB with
psql -h 127.0.0.1 -p 5432 -U postgres
What is the right way to manage a DB in Docker?
To be honest, because of the same problems you are facing, I went with a cloud DB service (AWS RDS) instead of hosting Docker containers for the DB. I still use a Docker DB container for local/QA environments, because those environments don't need to be scalable.
For dev I put both services (app/db) in a docker-compose.yml file and kick them off with a bash script. In the script I use a few sleep commands to ensure the db is up and ready, and init the app after that. I'm using a mysql container, and 40 seconds is usually plenty of time for the mysql service to come up. Not a production-ready situation, but I never found a nice solution for detecting whether my db container was up and available.
In a production or HA environment with RDS, the database is always available, so when a new docker container is added to the cluster there is no waiting on a dependent DB container either.
Here's the script to start services and wait:
#!/bin/bash
# Use environment variables set in the .env file.
source ./.env

# Name the container and launch; check docker-compose.yml
echo "Starting Docker containers ..."
CONTAINER=${CONTAINER_NAME}
docker-compose up -d
sleep 20
echo "..."
sleep 20
Then I continue with my app setup...
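As an alternative to fixed sleeps, one option is polling the database until it answers; a sketch assuming a mysql service named db as in the setup described above:
#!/bin/bash
source ./.env

docker-compose up -d db
# wait for mysql to actually accept connections
until docker-compose exec -T db mysqladmin ping --silent; do
  echo "waiting for mysql..."
  sleep 2
done
docker-compose up -d   # now start the rest of the services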

Using docker environment variables in docker-compose run commands

This works:
$ docker-compose run web psql -U dbuser -h db something_development
My docker-compose.yml file has environment variables all over the place. If I run docker-compose run web env, I see all kinds of tasty things I'd like to reuse in these one-off commands (scripts and one-time shells).
$ docker-compose run web env
...
DATABASE_USER=dbuser
DATABASE_HOST=db
DATABASE_NAME=something_development
DB_ENV_POSTGRES_USER=dbuser
... many more
This won't work, because my current (host) shell evaluates the variables before docker-compose runs, and they're not set there:
docker-compose run web psql -U ${DATABASE_USER} -h ${DATABASE_HOST} ${DATABASE_NAME}
psql: could not connect to server: No such file or directory
	Is the server running locally and accepting
	connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
These environment variables come from an env file like .app.env, referenced by docker-compose.yml, but docker-compose itself can also set environment variables. Seems a shame to even type dbuser when they are right there. I've tried my normal escaping tricks:
docker-compose run web psql -U \$\{DATABASE_USER\} -h \$\{DATABASE_HOST\} \$\{DATABASE_NAME\}
... and many other attempts.
I totally rubber ducked on SO so I'm going to answer my own question.
The answer (and there may be many ways to do it) is to run bash with -c and use single quotes so that your local shell doesn't interpolate the string.
docker-compose run web bash -c 'psql -U ${DATABASE_USER} \
-h ${DATABASE_HOST} ${DATABASE_NAME}'
Fantastic. DRY shell commands in docker-compose.
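The same trick works for any one-off command that needs those variables, e.g. a quick dump (a sketch, assuming the web image also ships pg_dump):
docker-compose run web bash -c \
  'pg_dump -U ${DATABASE_USER} -h ${DATABASE_HOST} ${DATABASE_NAME} > dump.sql'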
Prob not the answer you want, but here's the way we set env vars (this is one of many containers):
api:
  image: 10.20.2.139:5000/investigation-api:${apiTag}
  container_name: "api"
  links:
    - "database"
    - "ldap"
  ports:
    - 8843:8843
  environment:
    KEYSTORE_PASSWORD: "ooooo"
    KEYSTORE: "${MYVAR}"
  volumes_from:
    - certs:rw
Running compose:
MYVAR=/etc/ssl/certs/keystoke.jks docker-compose (etcetera)
Typically the above line will be in a provision.sh script - cheers
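For instance, a provision.sh sketch (the apiTag value here is a placeholder):
#!/bin/bash
export MYVAR=/etc/ssl/certs/keystoke.jks   # path from the compose file above
export apiTag=latest                       # placeholder image tag
docker-compose up -d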

Compose: running a container that exits

I have a docker-compose.yml with postgres and a web app (ghost). I would like to run a container between postgres and ghost to initialize postgres, add a database and user permissions, and exit.
My database initialization code looks something like:
ghostdb:
  extends:
    file: ./compose/ghost.yml
    service: ghostdb
  links:
    - postgres
  volumes:
    - ./ghost-db/volumes/sql:/sql
Which in turn runs
#!/bin/bash
echo "Importing SQL"
until pg_isready -h postgres; do
  sleep 1
done

for f in /sql/*.sql; do
  echo "Importing $f"
  psql -h postgres -f "$f"
done
I know I can extend the postgres image to add this functionality, but I would rather keep these two concerns separate. So I have two questions: Is there a preferable pattern for initializing a database? And is it possible to run a container that exits between postgres and ghost?
Full repository can be viewed here: https://github.com/devpaul/ghost-compose

Difference between docker-compose and manual commands

What I'm trying to do
I want to run a yesod web application in one docker container, linked to a postgres database in another docker container.
What I've tried
I have the following file hierarchy:
/
  api/
    Dockerfile
  database/
    Dockerfile
  docker-compose.yml
The docker-compose.yml looks like this:
database:
  build: database
api:
  build: api
  command: .cabal/bin/yesod devel # dev setting
  environment:
    - HOST=0.0.0.0
    - PGHOST=database
    - PGPORT=5432
    - PGUSER=postgres
    - PGPASS
    - PGDATABASE=postgres
  links:
    - database
  volumes:
    - api:/home/haskell/
  ports:
    - "3000:3000"
Running sudo docker-compose up either fails to start the api container at all or, just as often, exits with the following error:
api_1 | Yesod devel server. Press ENTER to quit
api_1 | yesod: <stdin>: hGetLine: end of file
personal_api_1 exited with code 1
If, however, I run sudo docker-compose up database & and then start the api container without compose, using
sudo docker run -p 3000:3000 -itv /home/me/projects/personal/api/:/home/haskell --link personal_database_1:database personal_api /bin/bash
I can export the environment variables set up in the docker-compose.yml file, then manually run yesod devel and visit my site successfully on localhost.
Finally, I get a third, different behaviour if I run sudo docker-compose run api on its own. This seems to start successfully, but I can't access the page in my browser. By running sudo docker-compose run api /bin/bash I've been able to explore this container, and I can confirm the environment variables set in docker-compose.yml are all correct.
Desired behaviour
I would like to achieve, simply by running sudo docker-compose up, the same result I get by running the database in the background and then manually setting up the environment in the api container's shell.
Question
Clearly the three approaches I'm trying do slightly different things, but from my understanding of docker and docker-compose I would expect them to be essentially equivalent. Could someone please explain how and why they differ and, if possible, how I might achieve my desired result?
The error message suggests the API process is reading from the command line, which requires a TTY to be present in your container.
In your "manual" start, you tell docker to create a TTY in the container via the -t flag (-itv is shorthand for -i -t -v), so the API container runs successfully.
To achieve the same in docker-compose, add a tty key to the api service in your docker-compose.yml and set it to true, as shown below. (This also explains your third behaviour: docker-compose run allocates a TTY by default, so the server starts, but run doesn't publish the service's ports unless you pass --service-ports, which is why the page was unreachable.)
database:
  build: database
api:
  build: api
  tty: true # <--- enable TTY for this service
  command: .cabal/bin/yesod devel # dev setting
