Ansible: copy files to Docker Volume

I run an Ansible playbook that gathers machine-specific information and stores the data in a file, one per host. So I end up with a bunch of files which should now be sent to my Docker-based application for further processing.
Actually I need to store them in a specific folder and create a volume so the container is able to read the files.
This requires the existence/creation of /tmp/incoming ...
Now if the monitor app gets moved or a second instance is needed, you'll have to access the filesystem and create the directory again.
So I'd like to create a more dynamic volume:
docker volume create --name monitor-incoming:/var/www/monitor/incoming
Now Docker containers will be able to access the volume. But can I use Ansible to "copy" the files to this remote volume? Sending them to monitor-incoming instead of /tmp/incoming?

You could use any of the following methods:
Ansible's non-SSH docker connection plugin with the standard Ansible file/copy modules. Example taken from the Ansible documentation:
- name: create jenkins container
  docker_container:
    docker_host: myserver.net:4243
    name: my_jenkins
    image: jenkins

- name: add container to inventory
  add_host:
    name: my_jenkins
    ansible_connection: docker
    ansible_docker_extra_args: "--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/client-cert.pem --tlskey=/path/to/client-key.pem -H=tcp://myserver.net:4243"
    ansible_user: jenkins
  changed_when: false

- name: create directory for ssh keys
  delegate_to: my_jenkins
  file:
    path: "/var/jenkins_home/.ssh/jupiter"
    state: directory
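Once the container is in the inventory like that, the standard copy module can push the gathered files straight into it. A minimal sketch, assuming the files sit in files/incoming/ on the Ansible controller and the container mounts the monitor-incoming volume at /var/www/monitor/incoming (both names come from the question, not from the documentation example):

- name: push gathered files into the container over the docker connection
  delegate_to: my_jenkins              # the container added to the inventory above
  copy:
    src: files/incoming/               # hypothetical directory on the Ansible controller
    dest: /var/www/monitor/incoming/   # path backed by the monitor-incoming volume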
The Ansible command module using the docker CLI. I have had more success with this method:
- name: remove temporary data-only container
  become: yes
  docker_container:
    name: your_temp_data_container
    state: absent

- name: recreate data volume
  become: yes
  docker_volume:
    name: your_data_volume
    state: present

- name: create temporary data-only container
  become: yes
  docker_container:
    name: your_temp_data_container
    image: tianon/true
    state: present
    volumes:
      - your_data_volume:/data

- name: copy folder contents to data volume via temporary data-only container
  become: yes
  command: docker cp /some_data_path_on_this_ansible_host/. your_temp_data_container:/data

- name: remove temporary data-only container
  become: yes
  docker_container:
    name: your_temp_data_container
    state: absent
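After the copy you can verify that the files actually landed in the volume with another throwaway container; a small sketch reusing the volume name from above:

- name: list the contents of the data volume
  become: yes
  command: docker run --rm -v your_data_volume:/data busybox ls -la /data
  register: volume_contents
  changed_when: false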

I don't have a docker environment to test this out, but I think you could do it from the host with a docker run on a bash image that:
Binds the file you want to copy (I assumed it is /path/to/filename/on/host) into the container. I bound it to /tmp/source inside the container.
Binds the monitor-incoming volume somewhere in the container. I chose to bind it to /tmp/destination/.
Runs a simple cp command (since the entrypoint of the bash image is bash itself, we just have to add the command to run).
Here is the command:
docker run \
--mount source=monitor-incoming,target=/tmp/destination \
--mount type=bind,source=/path/to/filename/on/host,target=/tmp/source \
bash:4.4 \
cp "/tmp/source" "/tmp/destination/path/to/destination/inside/volume/"
This is not tested, but I think something along those lines should work. Notice that if that script is used fairly frequently, you should probably have a container dedicated to that task rather than call docker run many times.
I'm not sure if there's a more direct way that would not involve running cp inside a container...
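If you would rather drive this from the playbook than run it by hand, the same one-off container fits into a command task. Untested as well; since the question has a whole directory of files, this sketch bind-mounts the /tmp/incoming directory from the question and copies its contents into the volume:

- name: copy gathered files into the monitor-incoming volume via a temporary container
  command: >
    docker run --rm
    --mount source=monitor-incoming,target=/tmp/destination
    --mount type=bind,source=/tmp/incoming,target=/tmp/source
    bash:4.4
    cp -r /tmp/source/. /tmp/destination/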

I wish we could do it with a built-in module, but until there is one, you can add a file to a named volume in a single task:
- name: actual config file is in the volume
  command:
    cmd: docker run --rm -iv your-config-volume:/v busybox sh -c 'cat > /v/config.yml'
    stdin: "{{ lookup('file', 'config.yml') | regex_replace('\\r\\n', '\\n') }}"
Here we use the command module to create a temporary container with the your-config-volume volume attached.
The sh -c 'cat > /v/config.yml' command saves all stdin into a file.
Finally, we set stdin to be the config.yml file from your role.
The regex_replace('\\r\\n', '\\n') part is only needed if you run Ansible from Windows (it strips CRLF line endings).
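The same stdin trick scales to whole directories if you stream a tar archive instead of a single file. A sketch, assuming the gathered files sit under /tmp/incoming on the Docker host and the target volume is the monitor-incoming volume from the first question:

- name: copy a directory of files into the named volume
  shell: tar -cf - -C /tmp/incoming . | docker run --rm -i -v monitor-incoming:/v busybox tar -xC /v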

Related

Copy file to docker container via Ansible

I want to copy a file to a docker container as one of my Ansible playbook steps. I create the file with the jinja2 "template" module. I can copy the file to /tmp/ and then run a command to copy it to the docker container, such as:
`docker cp /tmp/config.json my_image:/app/config/path/`
But I'm looking for a better way that doesn't go through /tmp or the like.
Ansible has a docker connection plugin that you can use to interact with existing containers in your playbook. For example, if I have a container named mycontainer:
$ docker ps
CONTAINER ID   IMAGE     COMMAND       CREATED         STATUS         PORTS     NAMES
07899303ac55   alpine    "sleep inf"   7 seconds ago   Up 2 seconds             mycontainer
I can create an Ansible inventory like this that sets the ansible_connection variable to the community.docker.docker connection plugin:
all:
  hosts:
    mycontainer:
      ansible_connection: community.docker.docker
Now I can target the container in a play like this:
- hosts: mycontainer
  gather_facts: false
  become: true
  tasks:
    - name: create target directory in container
      file:
        path: /target
        state: directory

    - name: copy a file into the container
      copy:
        src: example.file
        dest: /target/example.file
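Note that this connection plugin ships with the community.docker collection, so the collection has to be installed on the controller first, for example via a requirements file:

# requirements.yml -- install with: ansible-galaxy collection install -r requirements.yml
collections:
  - name: community.docker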

How can I set permissions on an attached Volume within a Docker container?

I'm trying to deploy the Prometheus docker container with persistent data via an NFS volume using a Docker named volume. I'm deploying with Ansible, so I'll post Ansible config, but I've executed the same using Docker's CLI commands and the issue presents in that case as well.
When I deploy the container and review the container's docker logs, I see that /etc/prometheus is shared appropriately and attached to the container. However, /prometheus, which is where the container stores its DB and metrics, gives permission denied.
According to this answer, /prometheus is required to be chowned to nobody. This doesn't seem to happen within the container upon startup.
Here's the volume creation from my Ansible role:
- name: "Creates named docker volume"
docker_volume:
volume_name: prometheus_persist
state: present
driver_options:
type: nfs
o: "addr={{ nfs_server }},rw,nolock"
device: ":{{ prometheus_nfs_path }}"
Which is equivalent to this Docker CLI command:
docker volume create -d local -o type=nfs -o o=addr={{ nfs_server }},rw -o device=:{{ prometheus_nfs_path }} prometheus_persist
Here's my container deployment stanza
- name: "Deploy prometheus container"
docker_container:
name: prometheus
# hostname: prometheus
image: prom/prometheus
restart_policy: always
state: started
ports: 9090:9090
user: ansible:docker
volumes:
- "{{ prometheus_config_path }}:/etc/prometheus"
mounts:
- source: prometheus_persist
target: /prometheus
read_only: no
type: volume
comparisons:
env: strict
Which is equivalent to this Docker CLI command:
docker run -v prometheus_persist:/prometheus -v "{{ prometheus_config_path }}:/etc/prometheus" -p 9090:9090 --name prometheus prom/prometheus
Again, the container logs upon deployment indicate permission denied on /prometheus. I've tested by mounting the prometheus_persist named volume on a generic Ubuntu container; it mounts fine, and I can touch files within it. Any advice on how to resolve this?
This turned out to be an issue with NFS squashing. The NFS server in my case is an old Synology, which doesn't allow no_root_squash to be set on its exports. However, mapping all users to admin on the NFS share resolved the issue.
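For reference, on a stock Linux NFS server that does allow it, the equivalent fix is to export the share with no_root_squash and re-export it afterwards (exportfs -ra). A hedged sketch as an Ansible task; the export path and client subnet are made up:

- name: export the Prometheus share without root squashing
  become: yes
  lineinfile:
    path: /etc/exports
    line: "/export/prometheus 10.0.0.0/24(rw,sync,no_root_squash)"   # hypothetical export and subnet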

Is it possible to run Docker command from inside an Azure Pipelines container job?

Azure Pipelines supports containerized jobs.
I tried to run a Docker command inside a container job:
pool:
  vmImage: 'ubuntu-16.04'
container: ubuntu:16.04
steps:
- script: docker ps
And I got an error saying command not found: docker, which makes sense, as running Docker from inside a Docker container is not a standard use case.
However, I need to run a job inside a container to use a specific build tool, and I also need to publish Docker images from inside that container.
Is it possible to achieve this in Azure Pipelines?
Basically, what you need to do is something like this:
Create a Dockerfile for your build agent.
Use that as a build-agent container, or host it somewhere as a real build agent. In the latter case you'd need to map the host's Docker into the container:
volumeMounts:
  - mountPath: /docker-volume
    name: docker-in-docker
  - mountPath: /var/run/docker.sock
    name: docker-volume
volumes:
  - name: docker-in-docker
    hostPath:
      path: /agent
  - name: docker-volume
    hostPath:
      path: /var/run/docker.sock
You need both the socket and the hostPath, because if you want to map directories into the container they have to be present on the host; plain docker builds are fine without folder sharing.
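If all you need is to talk to the host's Docker daemon from inside the job container, it may also be enough to hand the host's Docker socket to the container through the container resource's options field. This is untested here and assumes the image already ships the docker CLI (ubuntu:16.04 does not, so the image name below is hypothetical):

pool:
  vmImage: 'ubuntu-16.04'
container:
  image: mybuildtools:latest                                 # hypothetical image with the docker CLI installed
  options: '-v /var/run/docker.sock:/var/run/docker.sock'    # extra arguments passed when the job container is created
steps:
- script: docker ps   # talks to the host daemon through the mounted socket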

Docker volume vs Kubernetes persistent volume

The closest answer I found is this.
But what I want to know is: will the Dockerfile VOLUME command be totally ignored by Kubernetes? Or will data be persisted in two places, one in a Docker volume (on the host the pod runs on) and another in the Kubernetes PV?
The reason for asking is that I deploy some containers from Docker Hub whose images contain a VOLUME command, and I also attach a PVC to my pod. I am wondering whether a local volume (a Docker volume, not a K8s PV) will be created on the node, and whether a new volume will be created on each node my pod gets scheduled to.
On top of this, thanks to @Rico for pointing out that the -v flag and Kubernetes's mount take precedence over the Dockerfile VOLUME command, but what about the scenario below:
dockerfile VOLUME onto '/myvol'
Kubernetes mount PVC to '/anotherMyVol'
In this case, will /myvol be mounted from the local node's hard disk, silently persisting data there?
It will not be ignored unless you override it on your Kubernetes pod spec. For example, if you follow this example from the Docker documentation:
$ docker run -it container bash
root@7efcf5ef12a2:/# mount | grep myvol
/dev/nvmeXnXpX on /myvol type ext4 (rw,relatime,discard,data=ordered)
root@7efcf5ef12a2:/#
You'll see that it's mounted on the root drive of the host where the container is running on. Docker actually creates a volume on the host filesystem under /var/lib/docker/volumes (/var/lib/docker is your Docker graph directory):
$ pwd
/var/lib/docker/volumes
$ find . | grep greeting
./d0bc20d085243c39c4f386dce2f6cafcd8146128d6b0c8f9dcb27cfb61a7ecab/_data/greeting
You can override this with the -v option in Docker:
$ docker run -it -v /mnt:/myvol container bash
root@1c7211cf43d0:/# cd /myvol/
root@1c7211cf43d0:/myvol# touch hello
root@1c7211cf43d0:/myvol# exit
exit
$ pwd # <= on the host
/mnt
$ ls
hello
So on Kubernetes you can override it in the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: container
      volumeMounts:
        - name: storage
          mountPath: /myvol
  volumes:
    - name: storage
      hostPath:
        path: /mnt
        type: Directory
You need to explicitly define a PersistentVolumeClaim and/or PersistentVolume. This is not done for you.
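For completeness, here is a minimal sketch of that: a PersistentVolumeClaim plus a pod that mounts it over the image's /myvol. The size and access mode are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myvol-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: container
      volumeMounts:
        - name: storage
          mountPath: /myvol        # overrides the Dockerfile VOLUME for this path
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: myvol-claim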

Running docker container in Jenkins container, How can I set the volume from host?

I'm running a container with jenkins using "docker outside of docker". My docker compose is:
---
version: '2'
services:
  jenkins-master:
    build:
      context: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/urandom:/dev/random
      - /home/jj/jenkins/jenkins_home/:/var/jenkins_home
    ports:
      - "8080:8080"
So all containers launched from the Jenkins container are actually running on the host machine.
But when I run docker-compose inside the Jenkins container in a job that needs a volume, the path is resolved on the host instead of inside Jenkins. I mean, when I run docker-compose with
volumes:
  - .:/app
it gets mounted from /var/jenkins_home/workspace/JOB_NAME on the host, but I want it mounted from /home/jj/jenkins/jenkins_home/workspace/JOB_NAME.
Any idea how to do this cleanly?
P.S.: I did a workaround using environment variables.
Docker on the host will map the path as is from the request, and docker-compose will make the request with the path it sees inside the container. This leaves you with a few options:
Don't use host volumes in your builds. If you need volumes, you can use named volumes and stream data in and out of them through docker's stdin/stdout. That would look like:
tar -cC data . | docker run -i --rm -v app-data:/target busybox /bin/sh -c "tar -xC /target". You'd reverse the docker/tar commands to pull data back out.
Make the path on the host match that of the container. On your host, if you have access to create a symlink in /var, you can run ln -s /home/jj/jenkins/jenkins_home /var/jenkins_home and then update your compose file to use the same path (you may need to specify /var/jenkins_home/. to follow the symlink).
Make the path of the container match that of the host. This may be the easiest option, but I'm not positive it would work (depends on where compose thinks it's running). Your Dockerfile for the jenkins master can include the following:
RUN mkdir -p /home/jj/jenkins \
&& ln -s /var/jenkins_home /home/jj/jenkins/jenkins_home
ENV JENKINS_HOME /home/jj/jenkins/jenkins_home
If the easy option doesn't work, you can rebuild the image from jenkins and change the JENKINS_HOME variable to match your environment.
Make your compose paths absolute. You can add some code to set a variable:
export CUR_DIR=$(pwd | sed 's#/var/jenkins_home#/home/jj/jenkins/jenkins_home#'). Then you can set your volume with that variable:
volumes:
  - ${CUR_DIR:-.}:/app
