How to mount common.py into a running Kiwi TCMS Docker container

I followed the Kiwi TCMS setup steps, but what I really need to understand is how to mount common.py (the main configuration file) into a running Kiwi instance.
I don't see where common.py lives inside the Kiwi container, so I don't know where to mount it. Or do I have to rebuild the image every time to pick up new settings?
EDIT:
I've tried the Kiwi TCMS configuration settings guide and changed some settings in tcms/settings/common.py.
How do I apply those settings to the running Kiwi environment?

The config file approach
The common.py file seems to be located at tcms/settings/common.py, as per your second link:
All sensible settings are defined in tcms/settings/common.py. You will have to update some of them for your particular production environment.
If you really want to map only this file, then from the root of your project run (note that docker run -v requires an absolute host path, hence the $(pwd)):
docker run -v "$(pwd)/tcms/settings/common.py:/absolute/container/path/to/tcms/settings/common.py" [other-options-here] image-name
Running docker run with the above volume mapping replaces /absolute/container/path/to/tcms/settings/common.py inside the container with the host's tcms/settings/common.py, so the application runs with the settings defined on the host.
If you don't know the full path to tcms/settings/common.py inside the container, then add the Dockerfile to your question so that we can help further.
The ENV file approach
If a .env file does not already exist in the root of your project, create one and add all the environment variables referenced in common.py:
.env example:
KIWI_DB_NAME=my_db_name
KIWI_DB_USER=my_db_user
KIWI_DB_PASSWORD=my_db_password
KIWI_DB_HOST=my_db_host
KIWI_DB_PORT=my_db_port
Add as many environment variables to the .env file as you find referenced in the Python code and want to customize.
Start the docker container from the directory containing the .env file, passing the --env-file .env flag, something like:
docker run --env-file .env [other-options-here] image-name
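To see how such variables typically take effect, here is a minimal, hypothetical sketch of a Django-style settings module reading the KIWI_* variables. The fallback defaults below are invented for illustration; the real names and defaults live in tcms/settings/common.py and may differ.

```python
import os

def db_settings():
    """Assemble database settings from KIWI_* environment variables.

    Each value falls back to a placeholder default when the variable
    is not set; the actual defaults in tcms/settings/common.py may differ.
    """
    return {
        "NAME": os.environ.get("KIWI_DB_NAME", "kiwi"),
        "USER": os.environ.get("KIWI_DB_USER", "kiwi"),
        "PASSWORD": os.environ.get("KIWI_DB_PASSWORD", ""),
        "HOST": os.environ.get("KIWI_DB_HOST", "localhost"),
        "PORT": os.environ.get("KIWI_DB_PORT", "5432"),
    }

# A Django settings module would then do something like:
DATABASES = {"default": db_settings()}
```

With this pattern, values passed via --env-file override the defaults without rebuilding the image.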

Related

The same docker compose service in multiple folders

I have a package (bootstrap) that is included in multiple local projects. Example:
project1/:
  src/...
  tests/...
  vendor/bootstrap/...
project2/:
  src/...
  tests/...
  vendor/bootstrap/...
This package has its internal tests and static code analyzers that I want to run inside each projectX/vendor/bootstrap folder. The tests and analyzers are run from docker containers. I.e. bootstrap has docker-compose.yml with some configuration:
version: '3.7'
services:
  cli:
    build: docker/cli
    working_dir: /app
    volumes:
      - ./:/app
    tty: true
The problem is that when I run something inside project1/vendor/bootstrap, then switch to project2/vendor/bootstrap and run something there, Docker thinks that I am executing containers from project1. I believe it's because of the identical folder name, as Docker Compose generates container names as [folder-name]_[service-name]. So when I run docker-compose exec cli sh it checks whether there is a running container bootstrap_cli, but that container may have been created from the bootstrap folder of another project.
Example of docker ps:
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS   NAMES
128c3e834df4   bootstrap_cli   "docker-php-entrypoi…"   55 minutes ago   Up 55 minutes           bootstrap_cli
NAMES is the same for containers in all these projectX folders.
There is an option to add container_name: bootstrap_project1_cli, but it seems Docker Compose ignores it when searching for a running container.
So is it possible to differentiate containers of the same name and have all of them at the same time?
Have a look at this github issue:
https://github.com/docker/compose/issues/2120
There are two options to set the COMPOSE_PROJECT_NAME. Use the -p commandline flag or the COMPOSE_PROJECT_NAME environment variable. Both are documented here: https://docs.docker.com/compose/reference/overview/#compose-project-name
When you run docker-compose, it needs a project name for the containers. If you don't specify the -p option, docker-compose looks for an environment variable named COMPOSE_PROJECT_NAME. If neither is set, it defaults to the name of the current working directory. That's the behaviour you are seeing.
If you don't want to add a commandline parameter, you can specify the environment variable in your .env file inside the directory of your docker compose file. See https://docs.docker.com/compose/env-file/
docker-compose is basically a wrapper around the docker CLI. It provides basic scoping for the services inside a compose file by prefixing the containers (and networks and volumes, too) with the COMPOSE_PROJECT_NAME value. If not configured differently, this value corresponds to the directory name of the compose file. You can override it by setting the corresponding environment variable. A simple solution would be to place an .env file into each bootstrap directory that contains an instruction like
COMPOSE_PROJECT_NAME=project1_bootstrap
which will lead to auto-generated container names like e.g. project1_bootstrap_cli_1
Details:
https://docs.docker.com/compose/reference/envvars/
https://docs.docker.com/compose/env-file/
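For example, with one .env file per checkout (the project names below are illustrative), the two copies of bootstrap get distinct container scopes:

```
# project1/vendor/bootstrap/.env
COMPOSE_PROJECT_NAME=project1_bootstrap

# project2/vendor/bootstrap/.env
COMPOSE_PROJECT_NAME=project2_bootstrap
```

Running docker-compose exec cli sh inside each folder should then resolve to project1_bootstrap_cli_1 and project2_bootstrap_cli_1 respectively, so both can run at the same time.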

Mount files in read-only volume (where source is in .dockerignore)

My app depends on secrets, which I have stored in the folder .credentials (e.g. .credentials/.env, .credentials/.google_api.json, etc.). I don't want these files built into the Docker image, but they need to be visible to the container.
My solution is:
Add .credentials to my .dockerignore
Mount the credentials folder in read-only mode with a volume:
# docker-compose.yaml
version: '3'
services:
  app:
    build: .
    volumes:
      - ./.credentials:/app/.credentials:ro
This is not working (I do not see any credentials inside the docker container). I'm wondering if the .dockerignore is causing the volume to break, or if I've done something else wrong?
Am I going about this the wrong way? E.g. I could just pass the .env file with docker run --env-file .env IMAGE_NAME
Edit:
My issue was to do with how I was running the image. I was doing docker-compose build and then docker run IMAGE_NAME, assuming that the volumes were built into the image. However, that is not the case.
Instead, the above configuration works when I do docker-compose run app (where app is the service name) after building.
From the comments, the issue here is looking at the docker-compose.yml file for your container definition while starting the container with docker run. The docker run command does not use the compose file, so no volumes were defined on the resulting container.
The build process creates an image, and volume sources are not part of that image. Only the Dockerfile and your build context are used as input to the build. The rest of the compose file consists of run-time settings that apply to containers. Many projects do not even use the compose file for building the image, so for those projects all settings in the compose file are a way to define the default settings for containers being created.
The solution is to use docker-compose up -d to test your docker-compose.yml.
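To make the build-time vs. run-time split concrete, here is the same kind of compose file annotated with which keys affect which phase (a sketch, not the poster's exact file):

```yaml
# docker-compose.yaml
version: '3'
services:
  app:
    build: .   # build time: only the Dockerfile and the build context
               # feed into `docker-compose build`
    volumes:   # run time: applied only when a container is created,
               # e.g. via `docker-compose up` or `docker-compose run`
      - ./.credentials:/app/.credentials:ro
```

This is why `docker run IMAGE_NAME` saw no credentials: the volume mapping never left the compose file.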

How to change the prometheus.yml file in the container

How can I change /prometheus/prometheus.yml on the container itself?
I want it to track:
1) my app server - a Node application in a Docker container
2) my Postgres DB
3) my Apache and Nginx web servers
I do know that one has to change the prometheus.yml file and add targets.
Generic mechanisms to change docker images are:
Mount your configuration file at the desired path.
Create a new image by copying the config file in a new Dockerfile. Not recommended if you have to use different configs for different environments/apps.
Change the file on the running container if the application (Prometheus in this case) supports it. I know that some apps like Kibana do this. Good for debugging; not recommended for production environments.
It's hard to be precise with an answer given the lack of details but in general, you place your modified prometheus.yml file within the Docker context and modify your Dockerfile to add the instruction
COPY prometheus.yml /path/to/prometheus.yml
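As an illustration, a prometheus.yml with one scrape job per target might look like the following. The host names and ports are made up, and Postgres, Apache and Nginx do not expose Prometheus metrics natively, so you would also need the corresponding exporters (e.g. postgres_exporter, apache_exporter, nginx-prometheus-exporter) or an instrumented /metrics endpoint in the Node app:

```yaml
scrape_configs:
  - job_name: 'node-app'
    static_configs:
      - targets: ['appserver:3000']        # Node app exposing /metrics
  - job_name: 'postgres'
    static_configs:
      - targets: ['postgres-exporter:9187'] # postgres_exporter default port
  - job_name: 'webservers'
    static_configs:
      - targets: ['apache-exporter:9117', 'nginx-exporter:9113']
```

Whichever of the three mechanisms above you choose, this is the file you would mount, copy, or edit.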

creating a directory using docker swarm compose/yaml file

Is there a way to create a directory on the local file system via yaml file if it does not exist?
I currently am mounting a dir from my local file sys inside the container and it works. But if the dir on the file system does not exist, container launch fails as the dir cannot be mounted. How can I make this as seamless as possible and embed the dir creation logic in the swarm yaml file?
As far as I know, docker-compose doesn't permit this; you probably have to do it by hand.
But you could also use an automation tool like Puppet or Ansible to handle such a step when deploying your application, creating the appropriate directories and setting up your servers.
Here is how your tasks could look like in an ansible playbook to deploy a simple app and create a directory to mount your containers volumes on for instance :
- name: copy docker content
  copy:
    src: /path/to/app_src
    dest: /path/to/app_on_server
- name: create directory for volume
  file:
    name: /path/to/mountpoint
    state: directory
- name: start containers
  shell: docker-compose up -d --build
  args:
    chdir: /path/to/app_on_server
(Note that this snippet is here to provide a general idea of the concept, you'd probably have to set up become directives, permissions, ownership, software installation and many other steps very specific to your application)
The cleanest way would be to take the Dockerfile of, for example, the official Nginx image and add an additional RUN mkdir /my/folder to it.
Afterwards, build your own Docker image for Nginx via docker build .. Then you have a clean image which contains what you need, based on the official source.
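Following that approach, a minimal Dockerfile could look like this (the base image tag and the folder path are placeholders for your own values):

```dockerfile
FROM nginx:latest
# Create the directory inside the image so the bind mount always has a target
RUN mkdir -p /my/folder
```

Because the directory now exists in the image itself, the stack no longer depends on the host having created it first.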

Docker-compose specify config file

I'm new to docker and now I want to use docker-compose. I would like to provide the docker-compose.yml script with a config file containing host/port, maybe credentials, and other Cassandra configuration.
DockerHub link.
How to specify it in the compose script:
cassandra:
  image: bitnami/cassandra:latest
You can do it using Docker Compose environment variables (https://docs.docker.com/compose/environment-variables/#substituting-environment-variables-in-compose-files). You can also specify a separate environment file with environment variables.
Apparently, including a file outside of the scope of the image build, such as a config file, has been a major point of discussion (and perhaps some tension) between the community and the folks at Docker. A lot of the developers in that linked issue recommended adding config files in a folder as a volume (unfortunately, you cannot add an individual file).
It would look something like:
volumes:
  - ./config_folder:/config_folder
Where config_folder would be at the root of your project, at the same level as the Dockerfile.
If Docker Compose environment variables cannot solve your problem, you should create a new image from the original image, using COPY to override the file.
Dockerfile:
FROM original_image:xxx
COPY local_conffile in_image_dir/conf_dir/
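For example, the Bitnami Cassandra image documents a set of CASSANDRA_* environment variables. The exact variable names and values below are an assumption for illustration; check the image's DockerHub page for the authoritative list:

```yaml
cassandra:
  image: bitnami/cassandra:latest
  env_file:
    - ./cassandra.env          # or list the variables inline instead:
  environment:
    - CASSANDRA_USER=my_user
    - CASSANDRA_PASSWORD=my_password
```

This keeps host/port/credential configuration out of the image entirely, so the same image works across environments.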
