Airflow: how to mount airflow.cfg in a Docker container

I'm running Airflow in a Docker container and want to mount my airflow.cfg as a volume so I can quickly edit the configuration without rebuilding my image or editing it directly in the running container. I'm able to mount my airflow.cfg as a volume, and my Airflow webserver successfully reads the configuration from it on startup. However, when I edit the file on the host, the changes aren't reflected inside the Docker container.
Inside the Docker container, findmnt -M airflow.cfg returns:
TARGET SOURCE FSTYPE OPTIONS
/usr/local/airflow/airflow.cfg /dev/sda1[/host/path/airflow/airflow.cfg~//deleted] ext4 rw,relatim
From that output it seems the mount still points to the original, unedited version of airflow.cfg (note the //deleted marker on the source). Is there any workaround to allow updating the config file from the host machine?
I'm using the LocalExecutor compose file from the Puckel GitHub repo as a base. I modified it to mount airflow.cfg in the compose file instead of copying it in the Dockerfile.
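For reference, the relevant compose change looks roughly like this (the host path is an assumption; the container path matches the findmnt target above):
# docker-compose.yml (sketch): bind-mount airflow.cfg instead of COPYing it in the Dockerfile
webserver:
    volumes:
        - ./airflow.cfg:/usr/local/airflow/airflow.cfg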

I had the same issue and solved it by adding the following lines to docker-compose.yml, under the webserver service:
volumes:
    - ./config/airflow.cfg:/opt/airflow/airflow.cfg
I keep my config file in a folder called config, alongside the docker-compose.yml file.

There are many ways to quickly change the Airflow configuration inside a Docker container. Instead of editing airflow.cfg, you can set the corresponding environment variables directly; in a Docker setup it's easiest to do this in docker-compose.yml.
Then you can simply restart the compose stack for the change to take effect.
Here are some common configuration variables:
dags_folder: AIRFLOW__CORE__DAGS_FOLDER
sql_alchemy_conn: AIRFLOW__CORE__SQL_ALCHEMY_CONN
executor: AIRFLOW__CORE__EXECUTOR
All configuration variables can be found in the official docs.
Below is a snippet from my Airflow docker-compose file:
webserver:
    image: apache/airflow:1.10.12
    depends_on:
        - postgres
    environment:
        - AIRFLOW_HOME=/opt/airflow
        - AIRFLOW__CORE__DAGS_FOLDER=/opt/airflow/dags
        - AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql://airflow:airflow@postgres/airflow
        - AIRFLOW__CORE__EXECUTOR=LocalExecutor
        - AIRFLOW__CORE__FERNET_KEY=#####yourkey################
    volumes:
        - ./dags:/opt/airflow/dags
    command: webserver
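After changing a value in the environment section, recreating the service applies it. A quick sanity check might look like this (service name as in the snippet above):
# recreate the webserver so the new environment is picked up
docker-compose up -d webserver
# confirm the override is visible inside the container
docker-compose exec webserver env | grep AIRFLOW__CORE__EXECUTOR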

Related

Add a secret to a container without exposing file locations to source control

I want to add my AWS credentials file to a Docker container, so it can access AWS APIs.
The credentials file exists in my host machine at /home/user/.aws/credentials
When running the container from the command line, I can do:
docker run --rm -d -v /home/user/.aws/:/.aws:ro \
    --env AWS_CREDENTIAL_PROFILES_FILE=/.aws/credentials proj:latest
In docker compose, I can mount the .aws directory with volumes property like so:
services:
    proj:
        volumes:
            - aws_credentials:/.aws:ro
        environment:
            AWS_CREDENTIAL_PROFILES_FILE: /.aws/credentials
volumes:
    aws_credentials:
        external: true
My question is: how do I populate the external aws_credentials volume with data?
Approaches that do not work:
Use secrets instead of volumes. I am not using Docker swarm
Use config instead of volumes. I am not using Docker swarm
Use a bind mount instead of a volume. The docker-compose file gets checked into source control, and I do not want directories checked in.
services:
    proj:
        volumes:
            - /home/user/.aws/:/.aws:ro # <-- DO NOT WANT THIS IN SOURCE CONTROL
        environment:
            AWS_CREDENTIAL_PROFILES_FILE: /.aws/credentials
One answer I came up with is using environment variables like so:
services:
proj:
secrets:
- aws_credentials
environment:
AWS_CREDENTIAL_PROFILES_FILE: /run/secrets/aws_credentials
secrets:
aws_credentials:
file: ${awscredfile}
and making sure awscredfile is either set in the environment of the parent process of docker compose, or passed in an env file via the --env-file parameter to docker compose.
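For illustration, either invocation below supplies the path; the env-file name here is an arbitrary choice:
# set the variable in the shell that runs compose
awscredfile=/home/user/.aws/credentials docker compose up -d
# or keep it in an env file that stays out of source control
echo 'awscredfile=/home/user/.aws/credentials' > secrets.env
docker compose --env-file secrets.env up -d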

How to move a Docker volume to a disk location?

I have a MySQL Docker image running in a Docker container on an Ubuntu VPS. I bring up MySQL using the docker-compose up -d command with the following docker-compose.yml file:
version: "3"
services:
mysql_server:
image: mysql:8.0.21
restart: always
container_name: mysql_server
environment:
MYSQL_DATABASE: db_name
MYSQL_USER: db_username
MYSQL_PASSWORD: db_password
MYSQL_ROOT_PASSWORD: root_password
volumes:
- mysql_server_data:/var/lib/mysql
- /mysql/files/conf.d:/etc/mysql/conf.d
I am having some performance issues and would like to do the following in an attempt to improve performance.
I want the data in the mysql_server_data volume to be mounted on /mysql/data without losing any data as this instance in running in production.
I also want to mount the MySQL config file on /mysql/files so I can change the instance configuration to increase performance.
Questions
How can I change the data location of the volume from mysql_server_data to /mysql/data?
Also, how can I mount MySQL's config file on /mysql/files/conf.d to allow me to update the settings?
I tried to mount the config file like this:
volumes:
    - /mysql/files/conf.d:/etc/mysql/conf.d
But that created a directory /mysql/files/conf.d with no config file.
To move the data:
Shut down the container with docker-compose down, then, on the local file system, copy the data from the mysql_server_data volume to /mysql/data. Change the compose file to reflect the new location, and finally restart the container with docker-compose up.
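A sketch of those steps; the named volume's host path under /var/lib/docker/volumes depends on your compose project name, so treat the <project> prefix as a placeholder:
# stop the stack
docker-compose down
# copy the named volume's data to the new location
sudo mkdir -p /mysql/data
sudo cp -a /var/lib/docker/volumes/<project>_mysql_server_data/_data/. /mysql/data/
# in docker-compose.yml, replace the named volume with a bind mount:
#     - /mysql/data:/var/lib/mysql
# start again
docker-compose up -d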
To mount the config files, as per the Docker Hub documentation for MySQL: if /my/custom/config-file.cnf is the path and name of your custom configuration file, then your volume mapping is:
/my/custom:/etc/mysql/conf.d
Note that mapping the volume into the container does not bring data from the container to your local file system, but the other way around. So if you want the file to be in the container, you must first create it locally.
Use the trick suggested by Docker maintainer Sebastiaan van Stijn at https://github.com/moby/moby/issues/31417 to send the tar over stdout:
docker run --rm -v vol_name:/vol_path img_name sh -c 'tar -cOzf - /vol_path' > volume-export.tgz
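To restore that archive into a new host location such as /mysql/data, a matching extraction might look like this (the leading path component in the archive mirrors the mount path used above):
sudo mkdir -p /mysql/data
sudo tar -xzf volume-export.tgz -C /mysql/data --strip-components=1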

How to configure a Dockerfile and docker-compose for Jenkins

I'm absolutely new to Docker and Jenkins. I have a question about the configuration of the Dockerfile and docker-compose.yml file. I tried to use the simplest possible configuration to set these files up correctly. Building and pushing work fine, but the Jenkins application is not reachable on my localhost (127.0.0.1).
If I understand it correctly, it should by default be running on port 50000 (ARG agent_port=50000 in the "official" Jenkins Dockerfile). I tried 50000, 8080 and 80 as well; nothing works. Do you have any advice, please? I'm using these files: https://github.com/fdolsky321/Jenkins_Docker
The second question is: what's the best way to handle container crashes? Let's say that if the container crashes, I want to recreate a new container with the same settings. Is the best way just to create a shell script like "crash.sh" that recreates the container with the same settings, as mentioned here: https://blog.codeship.com/ensuring-containers-are-always-running-with-dockers-restart-policy/
Thank you for any advice.
docker-compose for Jenkins
docker-compose.yml
version: '2'
services:
    jenkins:
        image: jenkins:latest
        ports:
            - 8080:8080
            - 50000:50000
        # uncomment for docker in docker
        privileged: true
        volumes:
            # enable persistent volume (warning: make sure that the local jenkins_home folder is created)
            - /var/wisestep/data/jenkins_home:/var/jenkins_home
            # mount docker sock and binary for docker in docker (only works on linux)
            - /var/run/docker.sock:/var/run/docker.sock
            - /usr/bin/docker:/usr/bin/docker
Replace the ports 8080 and 50000 as needed on your host.
To recreate a new container with the same settings:
The mounted volume jenkins_home is the place where all your jobs, settings, etc. are stored.
Take a backup of the mounted jenkins_home volume whenever you create a job, or on whatever schedule suits you.
Whenever there is a crash, run Jenkins with the same docker-compose file and replace the jenkins_home folder with the backup.
Then rerun/restart Jenkins.
List the containers:
docker ps -a
Restart a container:
docker restart <Required_Container_ID_To_Restart>
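A minimal backup sketch for the bind-mounted jenkins_home used above (the archive name is arbitrary):
# archive the Jenkins home directory from the host
sudo tar -czf jenkins_home_backup.tgz -C /var/wisestep/data jenkins_home
# on recovery, restore it before starting the stack again
sudo tar -xzf jenkins_home_backup.tgz -C /var/wisestep/data
docker-compose up -d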
I've been using a docker-compose.yml that looks like the following:
version: '3.2'
volumes:
    jenkins-home:
services:
    jenkins:
        image: jenkins-docker
        build: .
        restart: unless-stopped
        ports:
            - target: 8080
              published: 8080
              protocol: tcp
              mode: host
        volumes:
            - jenkins-home:/var/jenkins_home
            - /var/run/docker.sock:/var/run/docker.sock
        container_name: jenkins-docker
My image is a locally built Jenkins image, based off of jenkins/jenkins:lts, that adds in some other components like docker itself, and I'm mounting the docker socket to allow me to run commands on the docker host. This may not be needed for your use case. The important parts for you are the ports being published, which for me is only 8080, and the volume for /var/jenkins_home to preserve the Jenkins configuration between image updates.
To recover from errors, I have restart: unless-stopped inside the docker-compose.yml to configure the container to automatically restart. If you're running this in swarm mode, that would be automatic.
I typically avoid defining a container name, but in this scenario, there will only ever be one jenkins-docker container, and I like to be able to view the logs with docker logs jenkins-docker to gather things like the initial administrator login token.
My Dockerfile and other dependencies for this image are available at: https://github.com/bmitch3020/jenkins-docker
Hyper-V with Docker for Windows:
In that case, you must be sure to port-forward any published port (like 5000).
Open Hyper-V Manager and right-click on the machine defined there: you will be able to add port-forwarding rules so that localhost:5000 reaches your VM:5000.

Docker volume mounting on Windows 8 is not working

Context
I want to run a Docker Compose application on Windows 8. I built it on Ubuntu 16.04, where it works perfectly.
This Docker Compose setup runs:
nginx
php-fpm
The two containers use volumes.
Files
My .env file:
COMPOSE_CONVERT_WINDOWS_PATHS=1
APPLICATION_PATH=//C/Users/my_user/Documents/Development/my_application
My docker-compose.yml file:
version: '2'
services:
    web:
        build: ../application-web/
        ports:
            - "80:80"
        tty: true
        # Add a volume to link php code on the host and inside the container
        volumes:
            - ${APPLICATION_PATH}:/usr/share/nginx/html/application
            - ${APPLICATION_PATH}/docker_files/docker-assistant:/usr/share/nginx/html/assistant
        # Add hostnames to allow devs to call special url to open sites
        extra_hosts:
            - "localhost:127.0.0.1"
            - "assistant.docker:127.0.0.1"
            - "application.dev:127.0.0.1"
        depends_on:
            - custom-php
        links:
            - custom-php:custom-php
    custom-php:
        build: ../application-php/
        ports:
            - "50:50"
        volumes:
            - ${APPLICATION_PATH}:/usr/share/nginx/html/application
            - ${APPLICATION_PATH}/docker_files/docker-assistant:/usr/share/nginx/html/assistant
Problem
When I run docker-compose up, everything goes well. Containers start.
But when I try to reach http://192.168.99.100 in my web browser, I get a 403 error.
My investigation shows that there are no mounted volumes in the nginx and php containers:
docker exec -it compose_web_1 bash
ls -la /usr/share/nginx/html/assistant/
shows
drwxr-xr-x 2 root root 80 May 18 15:30 .
drwxr-xr-x 2 root root 4096 May 18 16:10 ..
It seems that Docker cannot mount volumes. Why?
Other information
I am using the Docker Toolbox: https://www.docker.com/products/docker-toolbox
I know that's the right IP address because when I try to reach it in my web browser, I see my nginx container logging the request.
The environment variable APPLICATION_PATH set to //C:/Users/my_user/Documents/Development/my_application cannot work, because Docker uses the ":" character as a separator in volume declarations:
ERROR: Volume //C:/Users/my_user/Documents/Development/my_application://C:/Users/my_user/Documents/Development/my_application has incorrect format, should be external:internal[:mode]
It's not an nginx problem, because when I create an index.phtml file in the folder I am able to run it:
<?php
echo 'Hello world!';
Ok, I finally did it!
TL;DR
Follow these instructions to be able to access C:\ inside your containers.
1. Install the Docker Toolbox
Go get it here: https://www.docker.com/products/docker-toolbox
Install it.
2. Run a Hello world
Open a Docker Quickstart Terminal.
Run in it:
docker run hello-world
3. Share C:\ with Docker
Open Virtualbox
Open the configuration of the default virtual machine and go to Shared Folders
Modify or create a new shared folder by clicking the buttons on the right. Set the options to:
C:\
C
Auto mount
Permanent configuration
Then validate.
4. Activate sharing
Shutdown the default virtual machine then restart it.
5. Set your paths
e.g. if you have a .env file:
COMPOSE_CONVERT_WINDOWS_PATHS=1
APPLICATION_PATH=//C/path_from_C_to_the_folder_you_want_to_share_on_the_volume
/!\ you need to set COMPOSE_CONVERT_WINDOWS_PATHS to 1!
6. Start your Compose
In the Docker Quickstart Terminal:
Go to your Docker Compose folder, then start it:
cd /path_to_your_compose_folder
docker-compose up
Why do I have to do all that? It's so complicated!
Docker relies on Linux namespaces; without Linux, it can't work. To allow the use of Docker on Windows, Docker needs to install a Linux virtual machine, and all the containers run inside it.
The default virtual machine is created and runs within VirtualBox; that's why you have to share your folders through VirtualBox.
After sharing, the default virtual machine will have the folder mounted in it under a custom name (in the above example it's C, but it could be elephant or whatever).
Finally, Docker mounts volumes from the default virtual machine into the container: you have to use the name of the default machine's shared folder in your volume declaration (in the above example it's C, but it could be elephant or whatever).

Empty directory when mounting a volume using Docker for Windows

I am attempting to mount a volume from C:/Users into a container running on a docker-machine using the hyperv driver on Docker for Windows (Win 10 Pro). I am using the latest Docker (1.13.1), and the same version on the Hyper-V machine. I have tried switching to a local account, I have shared the drive in the Docker settings menu, and I've pretty much tried everything I could find on Google.
Running the test volume command in the settings menu works for me. At this point I presume Hyper-V does not support mounting volumes from the host, but I can't find anywhere that explicitly says volume mounting will not work with Hyper-V.
This is my docker-compose config:
networks: {}
services:
    app:
        build:
            context: C:\users\deep\projects\chat\app
        command: sleep 3600
        image: app
        links:
            - rethinkdb
            - redis
        ports:
            - 4005:4005
            - 4007:4007
        volumes:
            - /c/users/deep/projects/chat/app:/usr/src/app:rw
    redis:
        image: redis
    rethinkdb:
        image: rethinkdb:2.3.5
version: '2.0'
volumes: {}
In my Dockerfile I copy files into the container at /usr/src/app. When I bring the services up with the volume specified in the compose file, the directory is emptied; if I omit this volume mount, I can see the files that were copied into the container by the Dockerfile.
Running with verbose output when starting my services, I can see a volume path specified as 'Binds': [u'/c/users/deep/projects/chat/app:/usr/src/app:rw']. However, when I inspect the container using docker-compose inspect app, I see volumes set to null: "Volumes": null.
I presume at this point that mounting volumes into a container running inside a Hyper-V VM is not supported? Can someone confirm so that I can RIP :)
I think you just need to share the drive (C:) containing the folder from the Docker app settings.
See the "Shared Drives" paragraph from the getting started guide
I'm using mounted folders with a similar configuration and it works fine once the drive has been shared.
As odd as it seems, this happens to me often. The solution is to un-check the C drive in Docker for Windows -> Settings -> Shared Drives, apply, then check it again and apply.
You should use /c/Users, with a capital "C".
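Applied to the compose file above, that means changing the bind-mount path to the following (a sketch; the rest of the service stays unchanged):
volumes:
    - /c/Users/deep/projects/chat/app:/usr/src/app:rw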
