I want to copy a file to a Docker container as one of my Ansible playbook steps. I create the file with the Jinja2 "template" module. I can copy the file to /tmp/ and then run a command to copy it into the Docker container, such as:
`docker cp /tmp/config.json my_image:/app/config/path/`
But I'm looking for a better way that doesn't go through /tmp or a similar staging path.
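For context, the current two-step approach looks roughly like this (a sketch; the template name is a placeholder, not from the question):

# 1. Render the file to a staging path on the host (config.json.j2 is a hypothetical template)
- name: render config to /tmp
  template:
    src: config.json.j2
    dest: /tmp/config.json

# 2. Copy the staged file into the running container
#    ("my_image" here is the container name/ID passed to docker cp, as in the question)
- name: copy config into the container
  command: docker cp /tmp/config.json my_image:/app/config/path/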
Ansible has a docker connection plugin that you can use to interact with existing containers in your playbook. For example, if I have a container named mycontainer:
$ docker ps
CONTAINER ID   IMAGE     COMMAND       CREATED         STATUS         PORTS     NAMES
07899303ac55   alpine    "sleep inf"   7 seconds ago   Up 2 seconds             mycontainer
I can create an Ansible inventory like this that sets the ansible_connection variable to community.docker.docker:
all:
  hosts:
    mycontainer:
      ansible_connection: community.docker.docker
Now I can target the container in a play like this:
- hosts: mycontainer
  gather_facts: false
  become: true
  tasks:
    - name: create target directory in container
      file:
        path: /target
        state: directory

    - name: copy a file into the container
      copy:
        src: example.file
        dest: /target/example.file
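To run it (filenames here are illustrative, not from the answer), save the inventory and play, make sure the community.docker collection is installed, and point ansible-playbook at them:

# hypothetical filenames: inventory.yml holds the inventory above, playbook.yml the play
ansible-galaxy collection install community.docker
ansible-playbook -i inventory.yml playbook.yml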
I'm having trouble demonstrating that data I generate on a shared volume is persistent, and I can't figure out why. I have a very simple docker-compose file:
version: "3.9"

# Define network
networks:
  sorcernet:
    name: sorcer_net

# Define services
services:
  preclean:
    container_name: cleaner
    build:
      context: .
      dockerfile: DEESfile
    image: dees
    networks:
      - sorcernet
    volumes:
      - pgdata:/usr/share/appdata
    #command: python run dees.py

  process:
    container_name: processor
    build:
      context: .
      dockerfile: OASISfile
    image: oasis
    networks:
      - sorcernet
    volumes:
      - pgdata:/usr/share/appdata

volumes:
  pgdata:
    name: pgdata
Running the docker-compose file to keep the containers running in the background:
vscode ➜ /com.docker.devenvironments.code $ docker compose up -d
[+] Running 4/4
⠿ Network sorcer_net Created
⠿ Volume "pgdata" Created
⠿ Container processor Started
⠿ Container cleaner Started
Both images are present:
vscode ➜ /com.docker.devenvironments.code $ docker image ls
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
oasis        latest    e2399b9954c8   9 seconds ago    1.09GB
dees         latest    af09040befd5   31 seconds ago   1.08GB
and the volume shows up as expected:
vscode ➜ /com.docker.devenvironments.code $ docker volume ls
DRIVER    VOLUME NAME
local     pgdata
Running the docker container, I navigate to the volume folder. There's nothing in the folder -- this is expected.
vscode ➜ /com.docker.devenvironments.code $ docker run -it oasis
[root@049dac037802 opt]# cd /usr/share/appdata/
[root@049dac037802 appdata]# ls
[root@049dac037802 appdata]#
Since there's nothing in the folder, I create a file called "dog.txt" and recheck the folder contents. The file is there. I exit the container.
[root@049dac037802 appdata]# touch dog.txt
[root@049dac037802 appdata]# ls
dog.txt
[root@049dac037802 appdata]# exit
exit
To check the persistence of the data, I re-run the container, but the volume folder is empty again.
vscode ➜ /com.docker.devenvironments.code $ docker run -it oasis
[root@1787d76a54b9 opt]# cd /usr/share/appdata/
[root@1787d76a54b9 appdata]# ls
[root@1787d76a54b9 appdata]#
What gives? I've tried defining the volume as persistent, and I know each of the images has a folder at /usr/share/appdata.
If you want to check the persistence of the data in the containers defined in your docker-compose file, the --volumes-from flag is the way to go.
When you run
docker run -it oasis
The newly created container shares the same image, but it doesn't know anything about the volumes defined in the compose file.
In order to attach the volume to the new container, run this:
docker run -it --volumes-from $CONTAINER_NAME_CREATED_FROM_COMPOSE oasis
Now this container shares the volume pgdata.
You can go ahead and create files at /usr/share/appdata and validate their persistence.
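Alternatively (not part of the original answer, just a sketch), you can mount the named volume directly when starting a throwaway container, or inspect it from one of the running compose containers:

# Mount the existing named volume at the same path the compose services use
docker run -it -v pgdata:/usr/share/appdata oasis

# Or list the volume's contents from the already-running "processor" container
docker exec -it processor ls /usr/share/appdata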
On a Linux system I am running a simple test job from the command line using the following command:
gitlab-runner exec docker --builds-dir /home/project/buildsdir test_job
with the following job definition in .gitlab-ci.yml:
test_job:
  image: python:3.8-buster
  script:
    - date > time.dat
However, the build folder is empty after having run the job. I can only imagine that --builds-dir refers to a location inside the Docker container.
Also after having run the job successfully I am doing
docker image ls
and I do not see a recent image.
So how can I "share"/"mount" the actual build folder for the docker gitlab job to the host system so I can access all the output files?
I looked at the documentation and found nothing; the same goes for
gitlab-runner exec docker --help
I also tried to use artifacts:
test_job:
  image: python:3.8-buster
  script:
    - pwd
    - date > time.dat
  artifacts:
    paths:
      - time.dat
but that also did not help. I was not able to find the file time.dat anywhere after the completion of the job.
I also tried to use --docker-volumes:
gitlab-runner exec docker --docker-volumes /home/project/buildsdir/:/builds/project-0 test_job
gitlab-runner exec docker --docker-volumes /builds/project-0:/home/project/buildsdir/ test_job
but neither worked (job failed in both cases).
You have to configure your config.toml file, located at /etc/gitlab-runner/.
Here's the doc: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section
First add a builds_dir, then bind it to a directory on your host machine in the volumes list, like this:
builds_dir = "(your build dir)"
[runners.docker]
  volumes = ["/tmp/build-dir:(your build dir):rw"]
Azure Pipelines supports containerized jobs.
I tried to run a Docker command inside a container job:
pool:
  vmImage: 'ubuntu-16.04'

container: ubuntu:16.04

steps:
- script: docker ps
And I got an error saying command not found: docker, which makes sense, as running Docker from inside a Docker container is not a standard use case.
However, I need to run a job inside a container to use a specific build tool, and I also need to publish Docker images from inside that container.
Is it possible to achieve this in Azure Pipelines?
Basically, what you need to do is something like this:
Create a Dockerfile for your build agent.
Use that as a build agent container, or host it somewhere as a real build agent. In the latter case you'd need to map the host's Docker into the container:
volumeMounts:
  - mountPath: /docker-volume
    name: docker-in-docker
  - mountPath: /var/run/docker.sock
    name: docker-volume
volumes:
  - name: docker-in-docker
    hostPath:
      path: /agent
  - name: docker-volume
    hostPath:
      path: /var/run/docker.sock
You need both the socket and the hostPath, because if you want to map directories into the container they have to be present on the host; plain docker builds are fine without folder sharing.
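If you go the container-job route inside the pipeline itself, one way to expose the host daemon (a sketch, not from the original answer; the image name is a placeholder and must already contain the docker CLI) is to pass the socket mount through the container resource's options:

resources:
  containers:
    - container: builder
      image: mybuildtools:latest                          # placeholder image with the docker CLI installed
      options: -v /var/run/docker.sock:/var/run/docker.sock

jobs:
  - job: build
    pool:
      vmImage: 'ubuntu-16.04'
    container: builder
    steps:
      - script: docker ps   # talks to the host daemon through the mounted socket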
I run an Ansible playbook that gathers machine-specific information and stores the data in a file, one for each host. So I end up with a bunch of files which should now be sent to my Docker-based application for further processing.
Currently I need to store them in a specific folder and create a volume so the container is able to read the files.
This requires the existence/creation of /tmp/incoming ...
Now if the monitor app gets moved or a second instance is needed, you'll have to access the filesystem and create the directory.
So I'd like to create a more dynamic volume:
docker volume create --name monitor-incoming:/var/www/monitor/incoming
Now Docker containers will be able to access the volume. But can I use Ansible to "copy" the files to this remote volume? Sending them to monitor-incoming instead of /tmp/incoming?
You could use any of the following methods:
Ansible non-SSH docker connection with standard Ansible file/copy modules. Example taken from the Ansible documentation:
- name: create jenkins container
  docker_container:
    docker_host: myserver.net:4243
    name: my_jenkins
    image: jenkins

- name: add container to inventory
  add_host:
    name: my_jenkins
    ansible_connection: docker
    ansible_docker_extra_args: "--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/client-cert.pem --tlskey=/path/to/client-key.pem -H=tcp://myserver.net:4243"
    ansible_user: jenkins
  changed_when: false

- name: create directory for ssh keys
  delegate_to: my_jenkins
  file:
    path: "/var/jenkins_home/.ssh/jupiter"
    state: directory
Ansible command module using the docker cli. I have had more success with this method.
- name: remove temporary data-only container
  become: yes
  docker_container:
    name: your_temp_data_container
    state: absent

- name: recreate data volume
  become: yes
  docker_volume:
    name: your_data_volume
    state: present

- name: create temporary data-only container
  become: yes
  docker_container:
    name: your_temp_data_container
    image: tianon/true
    state: present
    volumes:
      - your_data_volume:/data

- name: copy folder contents to data volume via temporary data-only container
  become: yes
  command: docker cp /some_data_path_on_this_ansible_host/. your_temp_data_container:/data

- name: remove temporary data-only container
  become: yes
  docker_container:
    name: your_temp_data_container
    state: absent
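A quick way to confirm the copy worked (not part of the original answer) is to list the volume's contents from another throwaway container:

# mount the named volume read-only in a scratch container and list what landed in it
docker run --rm -v your_data_volume:/data:ro busybox ls -la /data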
I don't have a docker environment to test this out, but I think you could do it from the host with a docker run on a bash image that:
Binds the host path of the file you want to copy (I assumed we are copying /path/to/filename/on/host). I bound it to /tmp/source inside the container.
Binds the monitor-incoming volume somewhere in the container. I chose to bind it to /tmp/destination/.
Runs a simple cp command (since the entrypoint of the bash image is bash itself, we just have to add the command to run).
Here is the command:
docker run \
--mount source=monitor-incoming,target=/tmp/destination \
--mount type=bind,source=/path/to/filename/on/host,target=/tmp/source \
bash:4.4 \
cp "/tmp/source" "/tmp/destination/path/to/destination/inside/volume/"
This is not tested, but I think something along those lines should work. Notice that if that script is used fairly frequently, you should probably have a container dedicated to that task rather than call docker run many times.
I'm not sure if there's a more direct way that would not involve running cp inside a container...
I wish we could do it with a built-in module, but unless we have one, you can add a file into a named volume in a single task:
- name: actual config file is in the volume
  command:
    cmd: docker run --rm -iv your-config-volume:/v busybox sh -c 'cat > /v/config.yml'
    stdin: "{{ lookup('file', 'config.yml') | regex_replace('\\r\\n', '\\n') }}"
Here we use the command module to create a temporary container with the your-config-volume volume attached.
The sh -c 'cat > /v/config.yml' command saves all stdin into a file.
Finally, we set stdin to be the config.yml file from your role.
The regex_replace('\\r\\n', '\\n') part is needed only if you run from Windows, to normalize CRLF line endings.
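To verify the file actually landed in the volume (a quick check, not from the original answer), you can read it back the same way:

# read the file back out of the named volume with a throwaway container
docker run --rm -v your-config-volume:/v busybox cat /v/config.yml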
I'm running a container with jenkins using "docker outside of docker". My docker compose is:
---
version: '2'
services:
  jenkins-master:
    build:
      context: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/urandom:/dev/random
      - /home/jj/jenkins/jenkins_home/:/var/jenkins_home
    ports:
      - "8080:8080"
So all containers launched from the Jenkins container are running on the host machine.
But when I try to run docker-compose in the Jenkins container in a job that needs a volume, it takes the path from the host instead of from Jenkins. I mean, when I run docker-compose with
volumes:
  - .:/app
It gets mounted from /var/jenkins_home/workspace/JOB_NAME on the host, but I want it mounted from /home/jj/jenkins/jenkins_home/workspace/JOB_NAME.
Any idea how to do this in a "clean" way?
P.S.: I did a workaround using environment variables.
Docker on the host will map the path as is from the request, and docker-compose will make the request with the path it sees inside the container. This leaves you with a few options:
Don't use host volumes in your builds. If you need volumes, you can use named volumes and use docker I/O (stdin/stdout) to read in and out of those volumes. That would look like:
tar -cC data . | docker run -i --rm -v app-data:/target busybox /bin/sh -c "tar -xC /target"
You'd reverse the docker/tar commands to pull data back out.
Make the path on the host match that of the container. On your host, if you have access to make a symlink in /var, you can ln -s /home/jj/jenkins/jenkins_home /var/jenkins_home and then update your compose file to use the same path (you may need to specify /var/jenkins_home/. to follow the symlink).
Make the path of the container match that of the host. This may be the easiest option, but I'm not positive it would work (depends on where compose thinks it's running). Your Dockerfile for the jenkins master can include the following:
RUN mkdir -p /home/jj/jenkins \
&& ln -s /var/jenkins_home /home/jj/jenkins/jenkins_home
ENV JENKINS_HOME /home/jj/jenkins/jenkins_home
If the easy option doesn't work, you can rebuild the image from jenkins and change the JENKINS_HOME variable to match your environment.
Make your compose paths absolute. You can add some code to set a variable:
export CUR_DIR=$(pwd | sed 's#/var/jenkins_home#/home/jj/jenkins/jenkins_home#'). Then you can set your volume with that variable:
volumes:
  - ${CUR_DIR:-.}:/app
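Put together, option 4 in a Jenkins build step might look roughly like this (a sketch; the compose file name is an assumption):

# rewrite the in-container workspace path to the corresponding host path,
# then let docker-compose pick up CUR_DIR from the environment
export CUR_DIR=$(pwd | sed 's#/var/jenkins_home#/home/jj/jenkins/jenkins_home#')
docker-compose -f docker-compose.yml up -d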