Docker-compose: specify config file

I'm new to Docker and now I want to use docker-compose. I would like to provide the docker-compose.yml file with a config file containing host/port, possibly credentials, and other Cassandra configuration. The image is bitnami/cassandra (see its Docker Hub page).
How do I specify it in the compose file:
cassandra:
  image: bitnami/cassandra:latest

You can do it using Docker Compose environment variables (https://docs.docker.com/compose/environment-variables/#substituting-environment-variables-in-compose-files). You can also specify a separate environment file containing the variables.
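As a minimal sketch of both approaches (the CASSANDRA_* variable names follow the bitnami/cassandra documentation and should be checked against the image's README):
# docker-compose.yml
version: '3'
services:
  cassandra:
    image: bitnami/cassandra:latest
    environment:
      # substituted from the shell or an .env file next to docker-compose.yml
      - CASSANDRA_USER=${CASSANDRA_USER:-cassandra}
      - CASSANDRA_PASSWORD=${CASSANDRA_PASSWORD}
    env_file:
      # or keep all of the variables in a separate file
      - ./cassandra.env
Compose reads an .env file in the project directory automatically for ${...} substitution, while env_file injects the listed variables straight into the container's environment.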

Apparently, including a file outside of the scope of the image build, such as a config file, has been a major point of discussion (and perhaps some tension) between the community and the folks at Docker. A lot of the developers in that linked issue recommended adding config files in a folder as a volume (unfortunately, you cannot add an individual file).
It would look something like:
volumes:
  - ./config_folder:/config_folder
Where config_folder would be at the root of your project, at the same level as the Dockerfile.

If Docker Compose environment variables cannot solve your problem, you can create a new image from the original image, using COPY to override the file.
Dockerfile:
FROM original_image:xxx
COPY local_conffile in_image_dir/conf_dir/
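If you go this route, the compose service can build the derived image in place instead of pulling the original one; a sketch, assuming the Dockerfile above lives in a ./cassandra-custom directory:
cassandra:
  build: ./cassandra-custom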

Related

docker-compose: (Re)Build Dockerfile from inside docker-compose file?

Had a hard time googling this question, as most suggestions are about how to do it through the command line, which I sadly do not have access to in this environment. Is it possible to do the equivalent of
docker-compose up --build --force-recreate
From inside a docker-compose file?
The environment you describe sounds similar to Kubernetes in a couple of ways, except that it's driven by a Docker Compose YAML file. The strategies that work for Kubernetes will work here too. In Compose there's no way to put "actions" in a YAML file, or flag that a service always needs to be rebuilt or recreated. It sounds like the only thing it's possible to do in your environment is run docker-compose up -d.
The trick that I'd use here is to change the image: for a container whenever you have a change you need to deploy. That means the image tag needs to be something unique; it could be a date stamp or source control ID.
version: '3.8'
services:
  myapp:
    image: registry.example.com/myapp:20220209
Now when you have a change to your application, you (or your CI system) need to build a new copy of it, offline, and docker push it to a registry. Then change this image: value, and push the updated file to the deployment system. Compose will see that it's only running version 20220208 from yesterday and recreate that specific container.
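As a sketch of that workflow (the registry name and date-stamped tag are just placeholders):
# built offline, by you or the CI system
docker build -t registry.example.com/myapp:20220209 .
docker push registry.example.com/myapp:20220209
# then update image: in the compose file and let the deployment
# environment run its usual `docker-compose up -d`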
If you have the ability to specify environment variables, you can use that in the Compose setup
image: registry.example.com/myapp:${MYAPP_TAG:-latest}
to avoid having to physically modify the file.
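With that substitution in place, the deployment step only has to set the variable before bringing the stack up, for example:
MYAPP_TAG=20220209 docker-compose up -d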

Mount files in read-only volume (where source is in .dockerignore)

My app depends on secrets, which I have stored in the folder .credentials (e.g. .credentials/.env, .credentials/.google_api.json, etc.). I don't want these files built into the Docker image; however, they need to be visible to the Docker container.
My solution is:
Add .credentials to my .dockerignore
Mount the credentials folder in read-only mode with a volume:
# docker-compose.yaml
version: '3'
services:
  app:
    build: .
    volumes:
      - ./.credentials:/app/.credentials:ro
This is not working (I do not see any credentials inside the docker container). I'm wondering if the .dockerignore is causing the volume to break, or if I've done something else wrong?
Am I going about this the wrong way? E.g. I could just pass the .env file with docker run --env-file .env IMAGE_NAME
Edit:
My issue was to do with how I was running the image. I was doing docker-compose build and then docker run IMAGE_NAME, assuming that the volumes were built into the image. However, this seems not to be the case.
Instead, the above code works when I do docker-compose run app (where app is the service name) after building.
From the comments, the issue here is in looking at the docker-compose.yml file for your container definition while starting the container with docker run. The docker run command does not use the compose file, so no volumes were defined on the resulting container.
The build process itself creates an image in which you do not specify the source of volumes. Only the Dockerfile and your build context are used as input to the build. The rest of the compose file consists of run-time settings that apply to containers. Many projects do not even use the compose file for building the image, so for those projects all settings in the compose file are a way to define the default settings for containers being created.
The solution is to use docker-compose up -d to test your docker-compose.yml.
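In other words, build and run through Compose so the volume definitions in docker-compose.yml are actually applied; roughly:
docker-compose build
docker-compose up -d
docker-compose exec app ls -la /app/.credentials   # verify the mount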

common.py at Kiwi TCMS: how to mount it into Docker

I followed this Kiwi TCMS step, but what is really unclear to me is how you mount common.py (the main configuration file) into a working Kiwi instance.
I don't see where common.py lives in the Kiwi container, so I don't know where to mount it. Or do I have to recreate the images every time to get the new settings?
EDIT:
I've tried the Kiwi TCMS configuration settings guide and changed some settings in tcms/settings/common.py.
How do I apply those settings to the working Kiwi environment?
The config file approach
The common.py file seems to be located at tcms/settings/common.py, as per your second link:
All sensible settings are defined in tcms/settings/common.py. You will have to update some of them for your particular production environment.
If you really want to map only this file, then from the root of your project:
docker run -v $(pwd)/tcms/settings/common.py:/absolute/container/path/to/tcms/settings/common.py [other-options-here] image-name
Running the docker command with the above volume mapping will replace the file inside the docker container (/absolute/container/path/to/tcms/settings/common.py) with the one on the host (tcms/settings/common.py), so the application will run with the settings defined on the host.
If you don't know the full path to tcms/settings/common.py inside the docker container, then you need to add the Dockerfile to your question so that we can help further.
The ENV file approach
If a .env file does not already exist in the root of your project, create one and add to it all the environment variables used in common.py.
.env example:
KIWI_DB_NAME=my_db_name
KIWI_DB_USER=my_db_user
KIWI_DB_PASSWORD=my_db_password
KIWI_DB_HOST=my_db_host
KIWI_DB_PORT=my_db_port
Add to the .env file as many of the environment variables found in the Python code as you want to customize.
Start the docker container from the place where the .env file is with the flag --env-file .env, something like:
docker run --env-file .env [other-options-here] image-name
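If the instance is managed through docker-compose rather than docker run, the same .env file can be referenced with env_file; a sketch, where the service name and image tag are assumptions about your setup:
services:
  kiwi_web:
    image: kiwitcms/kiwi:latest
    env_file:
      - .env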

How to change Prometheus.yml file in the container

How can I change my /prometheus/prometheus.yml on the container itself?
I want it to track
1) my appserver - a Node application in a docker container
2) my postgres db
3) my apache and nginx web servers
I do know that one has to change the prometheus.yml file and add targets.
Generic mechanisms to change docker images are:
Mount your configuration file at the desired path (see the sketch after this list).
Create a new image by copying the config file in the new Dockerfile. Not recommended if you have to use different configs for different environments/apps.
Change the file on the running container if the application (Prometheus in this case) supports it. I know that some apps like Kibana do this. Good for debugging, not recommended for production environments.
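For the first option, a minimal sketch with the official Prometheus image, whose default config path is /etc/prometheus/prometheus.yml:
docker run -d -p 9090:9090 \
  -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus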
It's hard to be precise with an answer given the lack of details but in general, you place your modified prometheus.yml file within the Docker context and modify your Dockerfile to add the instruction
COPY prometheus.yml /path/to/prometheus.yml
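For the targets listed in the question, the added scrape configuration could look roughly like the sketch below. The job names, hostnames and ports are assumptions; Postgres and nginx/Apache don't expose Prometheus metrics natively, so they would sit behind exporters such as postgres_exporter and an nginx/Apache exporter.
# prometheus.yml (sketch)
scrape_configs:
  - job_name: 'node-app'
    static_configs:
      - targets: ['appserver:3000']
  - job_name: 'postgres'
    static_configs:
      - targets: ['postgres-exporter:9187']
  - job_name: 'nginx'
    static_configs:
      - targets: ['nginx-exporter:9113']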

Managing dev/test/prod environments with Docker

There seems to be sparse, conflicting information around on this subject. I'm new to Docker and need some help. I have several Docker containers to run an application; some require different config files for local development than they do for production. I don't seem to be able to find a neat way to automate this with Docker.
My containers that include custom config are Nginx and FreeRADIUS, and my code/data container is Laravel, which therefore requires a .env.php file (L4.2 at the moment).
I have tried Docker's environment variables in Docker Compose:
docker-compose.yml:
freeradius:
  env_file: ./env/freeradius.env
./env/freeradius.env
DB_HOST=123.56.12.123
DB_DATABASE=my_database
DB_USER=me
DB_PASS=itsasecret
Except I can't pick those variables up in /etc/freeradius/mods-enabled/sql where they need to be.
How can I get Docker to run as a 'local' container with local config, or as a 'production' container with production config, without having to actually build different containers and without having to attach to each container to manually configure them? I need it automated, as this is to eventually be used on quite a large production environment which will have a large cluster of servers with many instances.
Happy to learn Ansible if this is how people achieve this.
If you can't use environment variables to configure the application (which is my understanding of the problem), then the other option is to use volumes to provide the config files.
You can use either "data volume containers" (which are containers with the sole purpose of sharing files and directories) with volumes_from, or you can use a named volume.
Data Volume container
If you go with the "data volume container" route, you would create a container with all the environment configuration files. Every service that needs a file uses volumes_from: - configs. In dev you'd have something like:
configs:
  build: dev-configs/
freeradius:
  volumes_from:
    - configs
The dev-configs directory will need a Dockerfile to build the image, which will have a bunch of VOLUME directives for all the config paths.
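A minimal sketch of such a dev-configs/Dockerfile, with FreeRADIUS and Nginx config paths used as illustrative examples:
FROM busybox
# bake the dev config files into the image
COPY freeradius/ /etc/freeradius/mods-enabled/
COPY nginx/ /etc/nginx/conf.d/
# declare the config paths as volumes so volumes_from can share them
VOLUME /etc/freeradius/mods-enabled
VOLUME /etc/nginx/conf.d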
For production (and other environments) you can create an override file which replaces the configs service with a different container:
docker-compose.prod.yml:
configs:
  build: prod-configs/
You'll probably have other settings you want to change between dev and prod, which can go into this file as well. Then you run compose with the override file:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
You can learn more about this here: http://docs.docker.com/compose/extends/#multiple-compose-files
Named Volume
If you go with the "named volume" route, it's a bit easier to configure. On dev you create a volume with docker volume create thename and put some files into it. In your config you use it directly:
freeradius:
  volumes:
    - thename:/etc/freeradius/mods-enabled/sql
In production you'll either need to create that named volume on every host, or use a volume driver plugin that supports multihost (I believe flocker is one example of this).
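Creating and populating the named volume on the dev host might look roughly like this (the ./configs source directory is an assumption):
docker volume create thename
# copy local config files into the volume via a throwaway container
docker run --rm -v thename:/dest -v $(pwd)/configs:/src busybox cp -a /src/. /dest/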
Runtime configs using Dockerize
Finally, another option that doesn't involve volumes is to use https://github.com/jwilder/dockerize which lets you generate the configs at runtime from environment variables.
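Roughly, dockerize renders a Go template from environment variables before starting the real process; the template content, paths, and the FreeRADIUS command below are illustrative only:
# sql.tmpl
server = "{{ .Env.DB_HOST }}"
login = "{{ .Env.DB_USER }}"
password = "{{ .Env.DB_PASS }}"

# container command
dockerize -template /templates/sql.tmpl:/etc/freeradius/mods-enabled/sql freeradius -X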
