Share connection details with container and host - docker

My docker-compose.yml contains an application container and a database container:
app:
  links:
    - db
db:
  image: postgres
  ports:
    - "5003:5432"
Let's say that during development I want to start only the db container, and I connect to it using localhost:5003.
In production I want to start both containers, one with the application and one with the database. But now the application container needs to use db:5432 to connect to the database.
Is it possible to modify the docker-compose configuration file so that the same database URI works in both cases?

I would suggest creating multiple docker-compose files for the different environments.
You can create a base docker-compose file and add overrides for the different environments as described here: https://docs.docker.com/compose/extends/#different-environments
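For example, a minimal sketch of that layout (the file contents here are assumptions based on the question, not a drop-in configuration; myapp is a hypothetical image name):

# docker-compose.yml -- base definition, shared by all environments
version: '3'
services:
  db:
    image: postgres

# docker-compose.override.yml -- development overrides, applied automatically
version: '3'
services:
  db:
    ports:
      - "5003:5432"

# docker-compose.prod.yml -- production overrides
version: '3'
services:
  app:
    image: myapp # hypothetical application image
    links:
      - db

In development, docker-compose up db starts just the database with port 5003 published on the host; in production, docker-compose -f docker-compose.yml -f docker-compose.prod.yml up starts both containers, and the application reaches the database as db:5432.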

Related

Docker share environment variables using volumes

How can I share environment variables since the --link feature was deprecated?
The Docker documentation (https://docs.docker.com/network/links/) states:
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link. One feature that user-defined networks do not support that you can do with --link is sharing environment variables between containers. However, you can use other mechanisms such as volumes to share environment variables between containers in a more controlled way.
But how do I share environment variables by using volumes? I did not find anything about environment variables in the volumes section.
The problem I have is that I want to set a database password as an environment variable when I start the container. Another container loads data into the database and needs to connect to it with those credentials. So far the loading container discovered the password on its own by reading the environment variable. How do I do that now without --link?
Generally, you do it by explicitly providing the same environment variable to other containers. This is easy if you're using a docker-compose.yml to manage your containers, because then you can do this:
version: '3'
services:
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
  frontend:
    image: webserver
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
Then if you set MYSQL_ROOT_PASSWORD in your .env file, the same value will be provided to both the database and frontend container. If you're not using docker-compose, you can still simplify things by using an environment file. Create a file named, e.g., database.env that contains:
MYSQL_ROOT_PASSWORD=secret
Then point your containers at that using docker run --env-file database.env ....
You can't share environment variables using volumes, but you can of course share files. So another option would be to have your database container write a file containing the password to a shared volume, and then read that in your other containers.
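A rough sketch of that file-based approach (service names, the loader image, and the paths are made up for illustration):

version: '3'
services:
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - secrets:/secrets
    # a custom entrypoint wrapper would write the password to /secrets/db_password
  loader:
    image: my-loader-image # hypothetical loader image
    volumes:
      - secrets:/secrets:ro # read-only view of the shared volume
    # the loader reads /secrets/db_password at startup instead of an env variable
volumes:
  secrets: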

Running application within Docker containers

If someone knows: do there need to be separate Dockerfiles for the database and the service itself if you want to run an application within Docker containers?
It's not quite clear where to specify the external database and server name; is it in the .env file?
https://github.com/gurock/testrail-docker/blob/master/README.md
http://docs.gurock.com/testrail-admin/installation-docker/migrating-upgrading-testrail
Yes, you should run the application and the database in separate containers.
It's not quite clear where to specify the external database and server name, is it in the .env file?
You have two options to specify environment variables:
.env file
Environment variables
Place the .env file in the root of your project, next to your docker-compose.yml, and reference it in your docker-compose file:
services:
  api:
    image: 'node:6-alpine'
    env_file:
      - .env
Using environment:
environment:
  MYSQL_USER: "${DB_USER:-testrail}"
  MYSQL_PASSWORD: "${DB_PWD:-testrail}"
  MYSQL_DATABASE: "${DB_NAME:-testrail}"
  MYSQL_ROOT_PASSWORD: "${DB_ROOT_PWD:-my-secret-password}"
  MYSQL_ALLOW_EMPTY_PASSWORD: 'false'
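With the ${VAR:-default} substitutions above, a matching .env file could look like this (example values; any variable left unset falls back to its default):

DB_USER=testrail
DB_PWD=testrail
DB_NAME=testrail
DB_ROOT_PWD=my-secret-password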
does it need to be separate Dockerfile for a database and service
It's better to use the official database image, and to customize the image for your service; the link you provided is a good starting point, together with its docker-compose.yml.
Also, the docker-compose documentation is already linked there.
Theoretically you can have the application and the database running in the same container, but this has all kinds of unintended consequences: for example, if the database falls over, the application might keep running, and Docker won't notice that the database inside the container has died.
Something to wrap your mind around when running the database in a container is data persistence: the data should survive even when the container is killed or deleted, so that once you create the container again it can still access the databases and other data.
Here is a good article explaining volumes in docker in the context of running mysql in its own container with a volume to hold the data:
https://severalnines.com/database-blog/mysql-docker-containers-understanding-basics
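As a minimal sketch of that idea (names are illustrative), a named volume keeps MySQL's data directory outside the container's lifecycle:

version: '3'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql # data survives container removal
volumes:
  db-data: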
In the context of the repo you linked, it seems there is a separate Dockerfile for the database, and you have the option to use either MariaDB or MySQL; see here:
https://github.com/gurock/testrail-docker/tree/master/Dockerfiles/testrail_mariadb
and here:
https://github.com/gurock/testrail-docker/tree/master/Dockerfiles/testrail_mysql

Docker container names

I'm using Docker on a Rails project. The only way I found to reliably link services to each other is specifying container_name in docker-compose.yml:
version: '3'
services:
  db:
    container_name: sociaball_db
    ...
  web:
    container_name: sociaball_web
    ...
  sphinx:
    container_name: sociaball_sphinx
    ...
So now I can write something like this in database.yml and stop worrying about, say, the database container randomly changing its name from db to db_1:
common: &common
  ...
  host: sociaball_db
However, I can now only run these three containers at the same time. Whenever I try to run docker-compose up while some of the containers aren't down, it raises an error:
ERROR: for sociaball_db Cannot create container for service db: Conflict. The container name "/sociaball_db" is already in use by container "ee787c06db7b2a0205e3c1e552b6a5496545a78fe12d942fb792b27f3c38769c". You have to remove (or rename) that container to be able to reuse that name.
This is very inconvenient. It often forces me to explicitly delete all the containers just to make sure the names are free. Is there a way around that?
When running several containers from one compose file, there is a default network that all containers are attached to (unless specified differently).
There is no need to reference a container by its container name or hostname: docker-compose automatically sets up DNS-based service discovery, so each docker-compose service can be resolved by its service name (the key one level below services:).
So your service called web can reach your database using the name db; no need to specify a container name for this use case. For more details, see the Docker docs on networking, which also demonstrate a Rails app accessing a database.
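A sketch of the same compose file relying on service discovery instead of fixed container names (everything else unchanged):

version: '3'
services:
  db:
    ...
  web:
    ...
  sphinx:
    ...

and in database.yml:

common: &common
  ...
  host: db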

How to run docker-compose with all services linked into another Ubuntu container

I have a question about Docker. I have many containers, like:
nginx
php-fpm
mysql
nodejs
composer
...
And I want to set them up with Docker Compose on Windows 10 using the "Docker for Windows" application, but I would like to bring them all into one container, such as "Ubuntu 16.04". How can I do that?
Thanks so much, guys!
To do that, create a Dockerfile based on the "Ubuntu" image and set everything up manually, the same way you would install it on a "normal" machine.
But this goes against the purpose Docker was created for. An image, or more specifically a container based on an image, is a lightweight, isolated environment intended to handle one specific service; search for the term "microservices".
You should be using docker-compose to create and manage multiple services. Ideally there will be a container for each of your components: nginx, php-fpm, node, mysql. The containers can be linked and are accessible to each other over the network.
You can create multiple Dockerfiles, one for Node.js, another for Angular, and another for the database, and then tie them together with a single docker-compose file like this:
version: '2' # specify docker-compose version

# Define the services/containers to be run
services:
  angular: # name of the first service
    build: gamification-frontend # specify the directory of the Dockerfile
    ports:
      - "4200:4200" # specify port forwarding
  express: # name of the second service
    build: gamification-backend # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify port forwarding
    links:
      - database # link this service to the database service
  database: # name of the third service
    image: redis # specify an image to build the container from
So these are three different containers (angular, express, and database) which are linked together. To run these containers, use:
docker-compose up --build

How to configure DNS entries for Docker Compose

I am setting up a Spring application to run using Compose. The application needs to connect to ActiveMQ, either running locally for developers or to existing instances for staging/production.
I set up the following, which works great for local dev:
amq:
  image: rmohr/activemq:latest
  ports:
    - "61616:61616"
    - "8161:8161"
legacy-bridge:
  image: myco/myservice
  links:
    - amq
and in the application configuration I am declaring the AMQ connection as
broker-url=tcp://amq:61616
Running docker-compose up works great: ActiveMQ is fired up locally, and my application container starts and connects to it.
Now I need to set this up for staging/production, where the ActiveMQ instances are running on existing hardware within the infrastructure. My thought is to either use Spring profiles to handle different configurations, in which case the application configuration entry broker-url=tcp://amq:61616 would become something like broker-url=tcp://some.host.here:61616, or to find some way to create a DNS entry within my production docker-compose.yml that points an amq entry at the associated staging or production queues.
What is the best approach here, and if it is DNS, how do I set that up in Compose?
Thanks!
Using the extra_hosts flag
First thing that comes to mind is using Compose's extra_hosts flag:
legacy-bridge:
  image: myco/myservice
  extra_hosts:
    - "amq:1.2.3.4"
This will not create a DNS record, but an entry in the container's /etc/hosts file, effectively allowing you to continue using tcp://amq:61616 as your broker URL in your application.
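To verify the entry, you can print the hosts file from inside the running container (service name as defined above):

docker-compose exec legacy-bridge cat /etc/hosts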
Using an ambassador container
If you're not content with directly specifying the production broker's IP address and would like to leverage existing DNS records, you can use the ambassador pattern:
amq-ambassador:
  image: svendowideit/ambassador
  command: ["your-amq-dns-name", "61616"]
  ports:
    - 61616
legacy-bridge:
  image: myco/myservice
  links:
    - "amq-ambassador:amq"
