Docker-compose redis: start with fixture?

I am using docker-compose to create a redis container. However, I need it to start with some default key values. Is this possible?

You need to modify your docker-compose file. You could also load the keys from a file, but here is the simplest example that sets and gets a key directly in the compose file.
version: '2'
services:
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    command:
      - /bin/sh
      - -c
      - |
        nohup redis-server &
        sleep 5
        echo "adding some default key value"
        redis-cli SET docker awesome
        echo "Get docker key value"
        redis-cli GET docker
        # this will keep the container running
        tail -f /dev/null

There are several approaches, but be aware that by default services start in an arbitrary order with Docker Compose and that, even if you use depends_on, this only checks that containers are running (e.g. redis), not that they've completed some initialization process.
1. Easiest: Pre-create
See the option to run the redis image with persistent storage:
https://hub.docker.com/_/redis/
Using this approach, you'd either mount a local directory into the container's /data directory or create a (data) volume and use that. Then, you'd pre-populate the redis server by running the redis-cli against it.
One hack for doing this is to use your planned docker-compose.yml file but bring up only the redis service: docker-compose --file=/path/to/docker-compose.yaml up redis, where redis is the name of the redis service. You'll need to ensure the redis service is accessible from the host (ports: - '6379:6379', perhaps) so that the external redis-cli can reach it.
This approach works well for local-only use but does not facilitate deploying the solution elsewhere.
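As a rough sketch of this pre-create approach (the ./redis-data host directory and the key/value are placeholders, not from the question), the compose file persists /data to the host and you seed it once from the host:
docker-compose.yml
version: '2'
services:
  redis:
    image: 'redis:latest'
    # --appendonly yes makes redis write its data set to /data
    command: ['redis-server', '--appendonly', 'yes']
    ports:
      - '6379:6379'
    volumes:
      # host directory that will hold the pre-created data
      - ./redis-data:/data
Bring up only the redis service, seed it, and stop it again:
docker-compose --file=docker-compose.yml up -d redis
redis-cli -h 127.0.0.1 -p 6379 SET docker awesome
docker-compose stop redis
The ./redis-data directory now contains the fixture and can be reused on the next docker-compose up.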
2. Resilient: Test for keys
Docker Compose -- to my knowledge -- doesn't offer an elegant equivalent to Kubernetes' init containers which are run before the dependent container.
With Docker Compose, you could include an initialization (run-once) redis-cli step to populate the server, but you must then augment any clients to check that this has completed, or that this data exists, before they start (successfully).
The simplest solution for this is for the redis clients to fail and restart: always if the redis keys aren't present.
A more advanced solution would be to define a healthcheck for the existence of the redis keys and then use depends_on: ... condition: service_healthy (see link).
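A hedged sketch of that idea, assuming a run-once seed service and a fixture:ready marker key (both are inventions for illustration, not part of the question): redis only reports healthy once the marker exists, and the client waits for that.
version: '2.1'
services:
  redis:
    image: 'redis:latest'
    healthcheck:
      # healthy only once the seed job has written its marker key
      test: ['CMD-SHELL', 'redis-cli GET fixture:ready | grep -q 1']
      interval: 5s
      retries: 12
  seed:
    image: 'redis:latest'
    depends_on:
      - redis
    # run-once init job: write the default keys, then the marker
    command: sh -c 'sleep 2 && redis-cli -h redis SET docker awesome && redis-cli -h redis SET fixture:ready 1'
  client:
    image: busybox
    depends_on:
      redis:
        condition: service_healthy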
See also startup order in Docker Compose described here

Related

Expose docker port based on environment variable in compose

I am using docker compose to set up application environments. There are two distinct environments, test and production.
In a test environment, I need to expose additional ports (for debugging). These ports should remain closed in a production environment.
I would also like to use the same image and docker-compose.yml file. Using the same image is no problem, but I am struggling with the compose file. In it, I would like to open or close a port based on an environment variable.
The current setup is pretty much the standard, like this:
# ...
ports:
- "8080:8080" # HTTP Server port
- "9301:9301" # debug port
# ...
In this example, both ports are always exposed. Is it possible to expose the port 9301 only if a certain environment variable, say EXPOSE_DEBUG, is set?
You can use profiles or a second compose file.
services:
  app-prod:
    &app
    image: busybox
    profiles:
      - production
    ports:
      - 8080:8080
  app-dev:
    <<: *app
    profiles:
      - development
    ports:
      - 8080:8080
      - 9090:9090
Then you can use the command below, or the COMPOSE_PROFILES environment variable, to select the profile.
docker compose --profile <profile-name> up
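For example, with the development profile defined above, the environment-variable form would be:
COMPOSE_PROFILES=development docker compose up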
Alternatively, you can use a second compose file and override the ports.
# compose.yaml
services:
  app:
    image: busybox
    ports:
      - 8080:8080

# compose.dev.yaml
services:
  app:
    ports:
      - 8080:8080
      - 9090:9090
Then you can use the file after the main file to patch it:
docker compose -f compose.yaml -f compose.dev.yaml up
The file(s) to use can also be controlled with an environment variable, COMPOSE_FILE.
If you name the file compose.override.yaml, docker will automatically use it, so you don't have to point to it with the -f flag. Be careful that you don't add this file to your production system, if you choose to do this.
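For instance, if the compose.dev.yaml above is renamed to compose.override.yaml, the debug port gets published without any extra flags:
docker compose up   # merges compose.yaml and compose.override.yaml automatically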
You could also bind the debug port to the loopback interface so that you can only access it locally.
ports:
  - 8080:8080
  - 127.0.0.1:9090:9090
The solution I usually use in my projects is a bash script that writes the docker-compose.yml based on the value of the environment variable. You could write it in any other programming language as well.
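A minimal sketch of that approach, assuming an EXPOSE_DEBUG variable and a hypothetical generate-compose.sh script:
#!/bin/sh
# generate-compose.sh: writes docker-compose.yml, adding the debug port only when EXPOSE_DEBUG is set
{
  echo 'services:'
  echo '  app:'
  echo '    image: busybox'
  echo '    ports:'
  echo '      - "8080:8080"'
  if [ -n "$EXPOSE_DEBUG" ]; then
    echo '      - "9301:9301"'
  fi
} > docker-compose.yml
docker compose up -d
Run it as EXPOSE_DEBUG=1 ./generate-compose.sh for a test environment, or without the variable for production.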
Conditional statements (if/else) are not supported in docker compose. Your options are:
1. Use additional software like jinja-compose to add Jinja2 logic to docker-compose
2. Use two different files (dc-dev.yml and dc-prod.yml) and pass them as arguments (docker compose -f)
3. Generate docker-compose.yml programmatically yourself
4. Use profiles (I was too slow; see the answer by The Fool)
To just maintain dev/prod environments, in my opinion solution 2 is the most efficient in terms of effort.
To follow your approach:
You can set the port mappings via environment variables, defined in a .env file (or exported in your shell) before running docker compose up:
PORT1="8080:8080"
PORT2="9301:9301"
docker-compose.yml
services:
  container1:
    ports:
      - ${PORT1}
      - ${PORT2}
But afaik there is no way to omit one of them

Networking in Docker Compose file

I am writing a docker compose file for my web app. If I use 'links' to connect services with each other, do I also need to include 'ports'? And is 'depends_on' an alternative to 'links'? What is the best way to connect services with one another in a compose file?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is running on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example. If you look at the "Starting..." messages that Compose prints out when it starts, this will force server to begin starting before client begins starting, but this is not a guarantee that server is up and running and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt for example) but it has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason to not use it.

How to implement changes made to docker-compose.yml to detached running containers

The project is currently running in the background from this command:
docker-compose up -d
I need to make two changes to their docker-compose.yml:
Add a new container
Update a previous container to have a link to the new container
After changes are made:
NOTE the "<--" arrows for my changes
web:
  build: .
  restart: always
  command: ['tini', '--', 'rails', 's']
  environment:
    RAILS_ENV: production
    HOST: example.com
    EMAIL: admin@example.com
  links:
    - db:mongo
    - exim4:exim4.docker # <-- Add link
  ports:
    - 3000:3000
  volumes:
    - .:/usr/src/app
db:
  image: mongo
  restart: always
exim4: # <-------------------------------- Add new container
  image: exim4
  restart: always
  ports:
    - 25:25
  environment:
    EMAIL_USER: user@example.com
    EMAIL_PASSWORD: abcdabcdabcdabcd
After making the changes, how do I apply them? (without destroying anything)
I tried docker-compose down && docker-compose up -d but this destroyed the Mongo DB container... I cannot do that... again... :sob:
docker-compose restart says it won't recognize any changes made to docker-compose.yml
(Source: https://docs.docker.com/compose/reference/restart/)
docker-compose stop && docker-compose start sounds like it'll just start up the old containers without my changes?
Test server:
Docker version: 1.11.2, build b9f10c9/1.11.2
docker-compose version: 1.8.0, build f3628c7
Production server is likely using older versions, unsure if that will be an issue?
If you just run docker-compose up -d again, it will notice the new container and the changed configuration and apply them.
But:
(without destroying anything)
There are a number of settings that can only be set at container startup time. If you change these, Docker Compose will delete and recreate the affected container. For example, links are a startup-only option, so re-running docker-compose up -d will delete and recreate the web container.
this destroyed the Mongo DB container... I cannot do that... again...
db:
  image: mongo
  restart: always
Add a volumes: option to this so that data is stored outside the container. You can keep it in a named volume, possibly managed by Docker Compose, which has some advantages, but a host-system directory is probably harder to accidentally destroy. You will have to delete and restart the container to change this option. But note that you will also have to delete and restart the container if, for example, there is a security update in MongoDB and you need a new image.
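A hedged sketch of that change (the ./mongo-data host path is an assumption; /data/db is where the official mongo image keeps its data):
db:
  image: mongo
  restart: always
  volumes:
    # host directory, so the data survives deleting and recreating the container
    - ./mongo-data:/data/db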
Your ideal state here is:
Actual databases (like your MongoDB container) store data in named volumes or host directories
Applications (like your Rails container) store nothing locally, and can be freely destroyed and recreated
All code is in Docker images, which can always be rebuilt from source control
Use volumes as necessary to inject config files and extract logs
If you lose your entire /var/lib/docker directory (which happens!) you shouldn't actually lose any state, though you will probably wind up with some application downtime.
Just docker-compose up -d will do the job.
Output should be like
> docker-compose up -d
Starting container1 ... done
> docker-compose up -d
container1 is up-to-date
Creating container2 ... done
As a side note, docker-compose is not really for production. You may want to consider docker swarm.
The key here is that up is idempotent.
If you update the configuration in docker-compose.yaml:
docker compose up -d
If Compose builds the images before running them and you want to rebuild them:
docker compose up -d --build

Export RabbitMQ Docker image with vhost and queues

I have a rabbitMQ docker container that I started using the following command:
docker run -d --name myrabbit1 -p 15672:15672 rabbitmq:3-management
I then log in to the management plugin and create users, vhosts, queues, etc.
I want to save all those settings so they can be loaded up again. To do that I tried committing to a new image:
docker commit myrabbit1 vbrabbit:withVhostAndQueues
I then start up my new container (after stopping the old one):
docker run -d --name vbrabbit2 -p 15672:15672 -p 5672:5672 vbrabbit:withVhostAndQueues
I expect that all the queues, vhosts, etc would be saved, but they are not.
What am I missing?
Result from docker ps -a:
I want to save all those settings so they can be loaded up again
are you needing to create a copy of the container, with the same settings?
or are you just looking to docker stop myrabbit1 and then later docker start myrabbit to run the same container, again?
TL;DR
The RabbitMQ instance within the container is looking for data in a different place. The default configuration changes the data storage/load location per container creation. Thus the OP's data existed in the created "final" image, but RabbitMQ wasn't loading it.
To fix it, statically set RABBITMQ_NODENAME, which may likewise require adding another line to /etc/hosts so that RabbitMQ can affirm the node is active.
Details
This happened to me with the Docker image rabbitmq:3.8.12-management.
This is caused by RabbitMQ's default configuration, which affects how it stores data. By default RabbitMQ starts a node on a UNIX system with a name of rabbit@$HOSTNAME (see RABBITMQ_NODENAME in the config docs). In Docker, $HOSTNAME changes per container run; it defaults to the container id (e.g. something like dd84759287560).
In @jhilden's case, when the vbrabbit:withVhostAndQueues image is booted as a new container, the RABBITMQ_NODENAME becomes a different value than the one used to create and store the original vhosts, users, queues, etc. And as RabbitMQ stores data inside a directory named after the RABBITMQ_NODENAME, the existing data isn't loaded on boot of vbrabbit:withVhostAndQueues: when $HOSTNAME changes, the RABBITMQ_NODENAME changes, so the booting RabbitMQ instance cannot find any existing data (the data is there in the image, but under a different RABBITMQ_NODENAME, so it isn't loaded).
Note: I've only looked into solving this for a local development single instance cluster. If you're using RabbitMQ docker for a production deployment you'd probably need to look into customized hostnames
To fix this issue we set a static RABBITMQ_NODENAME for the container.
In docker-compose v3 file we updated from:
# Before fix
rabbitmq:
  image: "rabbitmq:$RABBITMQ_VERSION"
  container_name: rabbitmq
  ports:
    - "5672:5672"
    - "15672:15672"
    - "61613:61613"
  volumes:
    - "./etc/rabbit-plugins:/etc/rabbitmq/enabled_plugins"
    - type: volume
      source: rabbitmq-data
      target: /var/lib/rabbitmq
Into after fix:
rabbitmq:
  image: "rabbitmq:$RABBITMQ_VERSION"
  container_name: rabbitmq
  # Why do we set NODENAME and extra_hosts?
  #
  # It codifies that we're using the same RabbitMQ instance between container rebuilds.
  # If NODENAME is not set it defaults to "rabbit@$HOST" and, because $HOST is dynamically
  # created in docker, it changes per container deployment. Why is a changing host an issue?
  # Because under the hood Rabbit stores data on a per-node basis. Thus without a static
  # RABBITMQ_NODENAME the directory the data is stored within changes per restart, going
  # from "rabbit@7745942c559e" on one run to "rabbit@036834720485" on the next. Okay, but
  # why do we need extra_hosts? Well, Rabbit wants to resolve itself to affirm its
  # management UI is functioning post deployment and does that with an HTTP call. Thus to
  # resolve the static host from RABBITMQ_NODENAME we need to add it to the container's
  # /etc/hosts file.
  environment:
    RABBITMQ_NODENAME: "rabbit@staticrabbit"
  extra_hosts:
    - "staticrabbit:127.0.0.1"
  ports:
    - "5672:5672"
    - "15672:15672"
    - "61613:61613"
  volumes:
    - "./etc/rabbit-plugins:/etc/rabbitmq/enabled_plugins"
    - type: volume
      source: rabbitmq-data
      target: /var/lib/rabbitmq
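As an optional check (not part of the original answer), you can confirm that the node name stays stable across container recreations:
docker exec rabbitmq rabbitmqctl status
# should keep reporting the node as rabbit@staticrabbit after every recreation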

Using docker-compose (formerly fig) to link a cron image

I'm running a simple Rails app in Docker using docker-compose (formerly fig) like this:
docker-compose.yml
db:
  image: postgres
  volumes:
    - pgdata:/var/lib/postgresql/data
web:
  build: .
  command: bundle exec rails s -b 0.0.0.0
  volumes:
    - .:/usr/src/app
  ports:
    - "3011:3000"
  links:
    - db
Dockerfile
FROM rails:onbuild
I need to run some periodic maintenance scripts, such as database backups, pinging sitemaps to search engines, etc.
I'd prefer not to use cron on my host machine, since I want to keep the application portable, and my idea is to use docker-compose to link in an image such as https://registry.hub.docker.com/u/hamiltont/docker-cron/.
The rails official image does not have ssh enabled so I cannot just have the cron container to ssh into the web container and run the scripts.
Does docker-compose have a way for a container to gain a shell into a linked container to execute some commands?
What actually would you like to do with your containers? If you need to access some objects from a container's file system, you should just mount the volume into the ancillary container (consider the --volumes-from option).
Any SSH interaction between containers is considered bad practice (at least since Docker 1.3, when docker exec was implemented). Running more than one process inside a container (i.e. anything besides postgres or rails in your case) will result in a large overhead: in order to have sshd alongside rails you'd have to deploy something like supervisord.
But if you really need to provide some kind of nonstandard interaction between the containers, and you're sure that you really need it, I would suggest you use one of the full-featured docker client libraries (like docker-py). It will allow you to launch docker exec in a programmable way.
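For the original question (periodic backups, sitemap pings), a simpler hedged sketch is to drive docker exec from cron, either on the host or from an ancillary cron container with /var/run/docker.sock mounted; the container name myapp_web_1 and the rake task names are assumptions for illustration only:
# crontab
# nightly database backup, executed inside the already-running web container
0 2 * * * docker exec myapp_web_1 bundle exec rake db:backup
# weekly sitemap ping
0 3 * * 0 docker exec myapp_web_1 bundle exec rake sitemap:ping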
