I am new to the Docker development environment. In my JHipster Docker environment, I run into a "relation "" already exists" error when I start the Docker image. This error occurs after the DB schema has changed. The following is the docker-compose.yml file:
version: '2'
services:
  foo-app:
    image: foo
    environment:
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - SPRING_DATASOURCE_URL=jdbc:postgresql://foo-postgresql:5432/foo
      - JHIPSTER_SLEEP=10 # gives time for the database to boot before the application
    ports:
      - 8080:8080
  foo-postgresql:
    extends:
      file: foo.yml
      service: foo-postgresql
And the foo.yml file is the following:
version: '2'
services:
  foo-postgresql:
    image: postgres:9.6.5
    # volumes:
    #   - ~/volumes/jhipster/foo/postgresql/:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=foo
      - POSTGRES_PASSWORD=
    ports:
      - 5432:5432
At this point, I can drop the DB tables manually, since the database is up even though the application isn't. However, I don't see that as the right approach to database management. I can also bring up the DB image with the command
docker run -i -t '<DB image name>' /bin/bash
However, I can't access the DB with the command
psql -h 127.0.0.1 -p 5432 -U postgres
What is the right way to manage a DB in Docker?
To be honest, because of the same problems you are facing, I went with a cloud DB service (AWS RDS) instead of hosting the DB in Docker containers. I still use a Docker DB container for local/QA environments because those environments do not need to be scalable.
For dev I put both services (app/db) in a docker-compose.yml file and kick it off with a bash script. In the script I add a few sleep commands to make sure the DB is up and ready, and only then initialize the app. I'm using a MySQL container, and usually 40 seconds is plenty of time for the MySQL service to come up. Not a production-ready situation, but I couldn't find a nice solution for figuring out whether my DB container was up and available (see the wait-loop sketch after the script below).
In a production or HA environment with RDS, the database is always available, so when a new Docker container is added to the cluster there is no waiting on a dependent DB container either.
Here's the script to start services and wait:
#!/bin/bash
## Use environment variables set in the .env file.
source ./.env
### Name the container and launch; check docker-compose.yml ###
echo "Starting Docker containers ..."
CONTAINER=${CONTAINER_NAME}
docker-compose up -d
sleep 20
echo "..."
sleep 20
# Continue with my app setup ...
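A sleep-free alternative is to poll the database until it answers before starting the app. Here's a minimal sketch, assuming the services are named db and app in docker-compose.yml and that MYSQL_ROOT_PASSWORD is set in .env (all of these names are placeholders for your own setup):
#!/bin/bash
source ./.env
docker-compose up -d db
# Poll MySQL inside the db container until it responds (up to ~60 seconds).
for i in $(seq 1 30); do
  if docker-compose exec -T db mysqladmin ping -uroot -p"${MYSQL_ROOT_PASSWORD}" --silent; then
    echo "Database is up."
    break
  fi
  echo "Still waiting for the database ($i) ..."
  sleep 2
done
docker-compose up -d app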
How to access the running containers during new container docker build?
I need to access the database container during the build of the application container.
docker-compose.yml:
version: '3'
services:
  db:
    build: ./db
    ports:
      - 1433:1433
    networks:
      - mynetwork
  app:
    build: ./app
    ports:
      - 8080:8080
    depends_on:
      - db
    networks:
      - mynetwork
networks:
  mynetwork: {}
I tried to bring up the db prior to building the app container, but it's not working:
docker-compose build db
docker-compose up -d db
docker-compose build app
You can't, and it's not a good idea. For example, if you run:
docker-compose build
docker-compose down -v
docker-compose up
The down step will delete all of the containers and their underlying storage (including the contents of the database); then the up step will create all new containers from existing images without re-running the Dockerfile. Even if you added a --build option, Docker's layer caching would conclude that the filesystem output of your database setup command hasn't changed, and will skip re-running that step.
You can encounter a similar problem if you docker push the built image to some registry and run it on a different host: since the image is reusable, commands from its Dockerfile won't get re-run, but it's not the same database, so the setup won't get done.
Depending on what kind of setup you're trying to do, probably the best approach is to configure your image with an entrypoint script that runs your application's database migrations and then exec "$@" to run the main container command (a sketch is included at the end of this answer). It can also work to put setup commands in the database's /docker-entrypoint-initdb.d directory, though those won't get re-run if your application's database schema changes.
At a technical level, this doesn't work because the docker build environment isn't on any particular Docker network, neither the mynetwork you manually specify nor the default network Compose creates on its own. The build sequence runs separately from running the resulting image, and it ignores most of the Docker Compose settings.
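For reference, here's a minimal sketch of such an entrypoint script; the ./migrate command is a placeholder for whatever actually runs your migrations:
#!/bin/sh
set -e

# Run the application's database migrations against the live database
# ("./migrate" is a hypothetical stand-in for your real migration tool).
./migrate

# Hand control over to the main container command (the image's CMD).
exec "$@"
In the Dockerfile you would set this script as the ENTRYPOINT and keep your normal CMD, so the migrations run every time a container starts, against whichever database that container is pointed at.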
The project is currently running in the background from this command:
docker-compose up -d
I need to make two changes to their docker-compose.yml:
Add a new container
Update a previous container to have a link to the new container
After changes are made:
NOTE the "<--" arrows for my changes
web:
  build: .
  restart: always
  command: ['tini', '--', 'rails', 's']
  environment:
    RAILS_ENV: production
    HOST: example.com
    EMAIL: admin@example.com
  links:
    - db:mongo
    - exim4:exim4.docker  # <-- Add link
  ports:
    - 3000:3000
  volumes:
    - .:/usr/src/app
db:
  image: mongo
  restart: always
exim4:  # <-------------------------------- Add new container
  image: exim4
  restart: always
  ports:
    - 25:25
  environment:
    EMAIL_USER: user@example.com
    EMAIL_PASSWORD: abcdabcdabcdabcd
After making the changes, how do I apply them? (without destroying anything)
I tried docker-compose down && docker-compose up -d but this destroyed the Mongo DB container... I cannot do that... again... :sob:
docker-compose restart says it won't recognize any changes made to docker-compose.yml
(Source: https://docs.docker.com/compose/reference/restart/)
docker-compose stop && docker-compose start sounds like it'll just start up the old containers without my changes?
Test server:
Docker version: 1.11.2, build b9f10c9/1.11.2
docker-compose version: 1.8.0, build f3628c7
The production server is likely using older versions; I'm unsure if that will be an issue.
If you just run docker-compose up -d again, it will notice the new container and the changed configuration and apply them.
But:
(without destroying anything)
There are a number of settings that can only be set at container startup time. If you change these, Docker Compose will delete and recreate the affected container. For example, links are a startup-only option, so re-running docker-compose up -d will delete and recreate the web container.
this destroyed the Mongo DB container... I cannot do that... again...
db:
image: mongo
restart: always
Add a volumes: option to this so that data is stored outside the container. You can keep it in a named volume, possibly managed by Docker Compose, which has some advantages, but a host-system directory is probably harder to accidentally destroy. You will have to delete and restart the container to change this option. But note that you will also have to delete and restart the container if, for example, there is a security update in MongoDB and you need a new image.
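As a sketch, using a host directory (the ./mongo-data path is just an example; a named volume would work as well):
db:
  image: mongo
  restart: always
  volumes:
    - ./mongo-data:/data/db   # host directory holding MongoDB's data files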
Your ideal state here is:
Actual databases (like your MongoDB container) store data in named volumes or host directories
Applications (like your Rails container) store nothing locally, and can be freely destroyed and recreated
All code is in Docker images, which can always be rebuilt from source control
Use volumes as necessary to inject config files and extract logs
If you lose your entire /var/lib/docker directory (which happens!) you shouldn't actually lose any state, though you will probably wind up with some application downtime.
Just docker-compose up -d will do the job.
The output should look something like:
> docker-compose up -d
Starting container1 ... done
> docker-compose up -d
container1 is up-to-date
Creating container2 ... done
As a side note, docker-compose is not really for production. You may want to consider docker swarm.
The key here is that up is idempotent.
If you update the configuration in docker-compose.yaml, run:
docker compose up -d
If Compose is building images before running them and you want to rebuild them:
docker compose up -d --build
I downloaded a Debian image for Docker and created a container from it.
I have successfully installed apache and mysql in this container (from /bin/bash).
I want to make this Docker container run in the background.
I have tried a lot of tutorials (I have created images with a Dockerfile) but nothing really works. Apache and mysql were run as root...
So i have launched this command:
docker run -d -p 80:80 myimagefile /bin/bash -c "while true; do sleep 10; done"
Then I attached a /bin/bash with the exec command and manually started mysql and apache2 (/etc/init.d/ scripts). When I type Ctrl-D, the bash is killed but the container stays in the background, with mysql and apache still alive!
I am wondering if this method is correct or if it is something ugly? Is there a better way to do this?
I do not want to write a Dockerfile that describes how to install apache and mysql. I have made my own image, with my application and all prerequisites.
I just want to start a container from my image and start automatically apache and mysql.
I have a second question: with my method, the container is not restarted if I reboot the physical computer. How can I start it automatically, with persistence of data?
Thanks
I would suggest running mysql and apache in separate containers. Additionally, Docker Hub already has container images that you could re-use:
https://hub.docker.com/_/mysql/
The following is an example of a docker-compose file that describes how to launch Drupal:
version: '2'
services:
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=letmein
      - MYSQL_DATABASE=drupal
      - MYSQL_USER=drupal
      - MYSQL_PASSWORD=drupal
    volumes:
      - /var/lib/mysql
  web:
    image: drupal
    depends_on:
      - db
    ports:
      - "8080:80"
    volumes:
      - /var/www/html/sites
      - /var/www/private
Run it as follows:
$ docker-compose up -d
Creating dockercompose_db_1
Creating dockercompose_web_1
This exposes Drupal on port 8080:
$ docker-compose ps
Name                    Command                        State    Ports
----------------------------------------------------------------------------------
dockercompose_db_1      docker-entrypoint.sh mysqld    Up       3306/tcp
dockercompose_web_1     apache2-foreground             Up       0.0.0.0:8080->80/tcp
Note:
When running the drupal installer, configure it to connect to a host called "db", which is the mysql container.
I'm developing a server and its client simultaneously and I'm designing them in Docker containers. I'm using Docker Compose to link them up and it works just fine for production but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.
My docker-compose-devel.yml:
server:
  image: node:0.10
client:
  image: node:0.10
  links:
    - server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
image: node:0.10
command: sleep infinity
client:
image: node:0.10
links:
- server
In window 1:
docker-compose --file docker-compose-devel.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-devel.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-devel.yml run client bash
I guess your main problem is about restarting the application when there are changes in the code.
Personally, I launch my applications in development containers using forever.
forever -w -o log/out.log -e log/err.log app.js
The -w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude changes to some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications could restart independently without the need to launch the commands yourself. And you have the possibility to open a shell to do whatever you want.
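For instance, a sketch of how that could look in docker-compose, assuming your code is mounted at /src, the entry point is app.js, and forever is available in the image (all assumptions about your setup):
server:
  image: node:0.10
  working_dir: /src
  # forever must be available in the image, e.g. installed with "npm install -g forever"
  command: forever -w -o log/out.log -e log/err.log app.js
  volumes:
    - ./src:/src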
Edit: here is a new proposal, considering that you need two interactive shells and not simply the ability to relaunch the apps on code changes.
Having two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml from the "server" app could contain this kind of information (I added different kinds of configuration for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server  # Use "docker ps" to find the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
Alternatively, the extra_hosts key may also do the trick by calling the server app through a port exposed on the host machine (see the sketch below).
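A minimal sketch of that alternative for the client, where 192.168.1.10 stands in for your host machine's IP and the server is assumed to publish port 8080 on the host (both are placeholders):
client:
  image: node:0.10
  extra_hosts:
    - "server:192.168.1.10"   # hypothetical host IP; the client then reaches the server at server:8080
  ports:
    - "80:80"
  volumes:
    - ./src:/src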
With this solution, each docker-compose.yml file could be commited in the repository of the related app.
First thing to mention: for a development environment you want to use volumes in docker-compose to mount your app into the container when it's started (at runtime). Sorry if you're already doing this, but it's not clear from your docker-compose.yml definition.
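For example, a minimal sketch (the ./server and ./client paths are assumptions about where your source lives):
server:
  image: node:0.10
  volumes:
    - ./server:/src   # mount the server source into the container at runtime
client:
  image: node:0.10
  links:
    - server
  volumes:
    - ./client:/src   # mount the client source into the container at runtime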
To answer your specific question: start your containers normally, then run docker-compose ps and you'll see the names of your containers, for example 'web_server' and 'web_client' (where web is the directory containing your docker-compose.yml file, or the project name).
Once you have the name of the container you want to connect to, you can run this command to launch bash in the container that's running your server:
docker exec -it web_server bash
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.
What I'm trying to do
I want to run a yesod web application in one docker container, linked to a postgres database in another docker container.
What I've tried
I have the following file hierarchy:
/
  api/
    Dockerfile
  database/
    Dockerfile
  docker-compose.yml
The docker-compose.yml looks like this:
database:
  build: database
api:
  build: api
  command: .cabal/bin/yesod devel  # dev setting
  environment:
    - HOST=0.0.0.0
    - PGHOST=database
    - PGPORT=5432
    - PGUSER=postgres
    - PGPASS
    - PGDATABASE=postgres
  links:
    - database
  volumes:
    - api:/home/haskell/
  ports:
    - "3000:3000"
Running sudo docker-compose up either fails to start the api container at all or, just as often, fails with the following error:
api_1 | Yesod devel server. Press ENTER to quit
api_1 | yesod: <stdin>: hGetLine: end of file
personal_api_1 exited with code 1
If, however, I run sudo docker-compose up database & and then start the api container without using Compose, instead using
sudo docker run -p 3000:3000 -itv /home/me/projects/personal/api/:/home/haskell --link personal_database_1:database personal_api /bin/bash
I can export the environment variables being set up in the docker-compose.yml file then manually run yesod devel and visit my site successfully on localhost.
Finally, I obtain a third different behaviour if I run sudo docker-compose run api on its own. This seems to start successfully but I can't access the page in my browser. By running sudo docker-compose run api /bin/bash I've been able to explore this container and I can confirm the environment variables being set in docker-compose.yml are all set correctly.
Desired behaviour
I would like to get the same result I achieve by running the database in the background and then manually setting up the environment in the api container's shell, simply by running sudo docker-compose up.
Question
Clearly the three different approaches I'm trying do slightly different things. But from my understanding of docker and docker-compose I would expect them to be essentially equivalent. Please could someone explain how and why they differ and, if possible, how I might achieve my desired result?
The error message suggests the API container is expecting input from the command line, which requires a TTY to be present in your container.
In your "manual" start, you tell docker to create a TTY in the container via the -t flag (-itv is shorthand for -i -t -v), so the API container runs successfully.
To achieve the same in docker-compose, you'll have to add a tty key to the API service in your docker-compose.yml and set it to true:
database:
  build: database
api:
  build: api
  tty: true  # <--- enable TTY for this service
  command: .cabal/bin/yesod devel  # dev setting