Support for `volume_mount` in Nomad Podman task driver?

I am doing some proof of concept work using Nomad to orchestrate several different containers running on RHEL 8 hosts using Podman. I am using the Nomad Podman driver to execute my containers using Podman. I have shared state in the form of an Elasticsearch data directory that I mount into /usr/share/elasticsearch/data.
I initially tried to get this working by defining a host volume in the Nomad client configuration, then adding a volume stanza that references my host volume and a volume_mount stanza that references the volume in my Nomad job specification. That approach didn't work: no errors, but the mount never happened.
After some digging, I found that the Podman task driver's capabilities documentation says that volume mounts are not supported. Instead, I seem to have to use the more limited driver-specific volumes configuration.
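For reference, that driver-specific configuration looks roughly like this (a sketch; the image tag and host path are examples from my setup):

```hcl
task "elasticsearch" {
  driver = "podman"

  config {
    image = "docker.elastic.co/elasticsearch/elasticsearch:7.17.0"

    # Driver-specific bind mount: "host_path:container_path"
    volumes = [
      "/srv/es-data:/usr/share/elasticsearch/data"
    ]
  }
}
```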
So my question is this: Is the lack of support for volume mounts just a temporary shortcoming that will eventually be supported? It does appear that the Docker task driver supports volume mapping and only Podman does not, so perhaps the Podman driver is just not there yet? Or is there a specific reason why there is a difference between how Docker supports volumes and how Podman does it?

Yes, the Podman driver currently does not support host volumes defined in the Nomad client configuration.
This will work once this PR gets merged:
https://github.com/hashicorp/nomad-driver-podman/pull/152
You can build the binary with Go from this branch:
git clone https://github.com/ttys3/nomad-driver-podman
cd nomad-driver-podman
git checkout append-nomad-task-mounts
./build.sh
Replace the existing plugin binary with the newly built nomad-driver-podman and restart Nomad.
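Once that lands, the standard host-volume pattern from the question should work. A sketch of what that configuration looks like (names, paths, and the image tag are placeholders):

```hcl
# Nomad client configuration: register the host volume
client {
  host_volume "es-data" {
    path      = "/srv/es-data"
    read_only = false
  }
}

# Job specification: request the volume in the group, mount it in the task
group "es" {
  volume "es-data" {
    type      = "host"
    source    = "es-data"
    read_only = false
  }

  task "elasticsearch" {
    driver = "podman"

    volume_mount {
      volume      = "es-data"
      destination = "/usr/share/elasticsearch/data"
    }

    config {
      image = "docker.elastic.co/elasticsearch/elasticsearch:7.17.0"
    }
  }
}
```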

Related

Volume or directory shared between services and runner container in Gitlab CI

I'm specifically trying to get files from services (docker containers) in a Gitlab CI job to the runner container. I could provide more details on exactly what I'm trying to do, but I'd like to keep this question fairly generic and platform/language-agnostic.
Essentially I have the following .gitlab-ci.yml:
image: customselfhostedimage-redacted
services:
- postgres:latest
- selenium/standalone-chrome
...
There are files being downloaded in one of the service containers (selenium) which I need to gain access to from the main container being run by the Gitlab runner. Unfortunately I can not seem to find any method to create a volume mount or share of some sort between service containers and the host (※ NOTE: This was not true, see accepted answer.). Adding commands to specifically copy files from within service containers is also not an option for me. I'm aware of multiple issues requesting such functionality, such as this one:
https://gitlab.com/gitlab-org/gitlab-runner/-/issues/3207
The existence of these open issues indicates to me there is not currently a solution.
I have tried to specify volumes in config.toml, as has been suggested in comments on various Gitlab CI issues related to this subject, but this did not seem to create volume mounts on the service containers.
Is there any way to create volume mounts inside service containers accessible to the runner/runner container, or if not is there any simple solution to make files accessible from (and possibly between) service containers?
※ NOTE: This is NOT a docker-compose question, and it is NOT a docker-in-docker question.
If you self-host your runners, you can add volumes to the runner configuration, which applies to services and job containers alike.
Per the documentation:
GitLab Runner 11.11 and later mount the host directory for the defined services as well.
For example:
[runners.docker]
# ...
volumes = ["/path/to/bind/from/host:/path/to/bind/in/container:rw"]
This path will be available both in the job and in the containers defined in services:. But keep in mind that the data (and any changes to it) is shared and persisted across all jobs that use this runner.

Can Airflow running in a Docker container access a local file?

I am a newbie as far as both Airflow and Docker are concerned; to make things more complicated, I use Astronomer, and to make things worse, I run Airflow on Windows. (Not on a Unix subsystem - I could not install Docker on Ubuntu 20.04.) "astro dev start" breaks with an error, but in Docker Desktop I see, and can start, 3 Airflow-related containers. They see my DAGs just fine, but my DAGs don't see the local file system. Is this unavoidable with the Airflow + Docker combo? (It seems like a big handicap; one could only use files in the cloud.)
In general, you can mount a volume at container runtime in Docker using the -v switch with your docker run command, mapping a local folder on your host to a mount point in your container; you can then access that path from inside the container.
If you go on to use docker-compose up to orchestrate your containers, you can instead specify volumes in the docker-compose.yml file, which configures the volumes for the containers it runs.
In your case, the Astronomer docs here suggest it is possible to create a custom directive in the Astronomer docker-compose.override.yml file to mount the volumes in the Airflow containers created as part of your astro commands for your stack which should then be visible from your DAGs.
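A docker-compose.override.yml along these lines should do it (a sketch; the scheduler service name and the container path are assumptions based on the Astronomer docs, so adjust to your stack):

```yaml
# docker-compose.override.yml in the Astronomer project directory
version: "3.1"
services:
  scheduler:
    volumes:
      # Host path is a placeholder; mount the folder your DAGs read from
      - /path/on/host/shared-data:/usr/local/airflow/shared-data
```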

Docker in Docker, Building docker agents in a docker contained Jenkins Server

I am currently running Jenkins with Docker. When trying to build Docker apps, I am unsure whether I should use Docker in Docker (DinD) by binding the /var/run/docker.sock file, or instead install another instance of Docker inside my Jenkins container. I have read that anything other than binding docker.sock was previously discouraged.
I don't fully understand why we should use anything other than the host's Docker daemon, apart from not polluting it.
Sources: https://itnext.io/docker-in-docker-521958d34efd
The best solution for the "Jenkins in a Docker container needs Docker" case is to add your host as a node (agent) in Jenkins. This makes every build step (literally everything) run on your host machine. It took me a month to find the perfect setup.
Mount the Docker socket in the Jenkins container: you will lose context. The files you want to COPY into the image are located in the workspace inside the Jenkins container, while Docker is running on the host. COPY fails for sure.
Install the Docker client in the Jenkins container: you have to alter the official Jenkins image, which adds complexity. And you still lose context.
Add your host as a Jenkins node: perfect. You have the context, and you don't alter the official image.
Without completely understanding why you would need to use Docker in Docker - I suspect you need to meet some special requirements for the environment in which you build the actual image - may I suggest multi-stage builds of Docker images? You might find them useful, as they let you first build the build environment and then build the actual image (hence the name 'multi-stage build'). Check it out here: https://docs.docker.com/develop/develop-images/multistage-build/
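A minimal multi-stage Dockerfile sketch (using a Go program as an arbitrary example; only the built binary ends up in the final image):

```dockerfile
# Stage 1: full build environment with compiler and sources
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Stage 2: small runtime image, no compiler or sources
FROM alpine:3.19
COPY --from=builder /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
```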

How to mimic --device option in docker run in kubernetes

I am very new to Kubernetes and Docker. I am trying to find the config equivalent of the --device option in docker run. This option in Docker is used to add a host device to the container.
Is there an equivalent in Kubernetes that can be added to the YAML file?
Thanks
Currently we do not have a passthrough for this option in the API, though you may have some success using a hostPath volume to mount a device file in.
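To illustrate, a hostPath-based sketch (the device path and image are examples; depending on the device, the container may also need to run privileged):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: device-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      securityContext:
        privileged: true   # often required for raw device access
      volumeMounts:
        - name: dev-ttyusb0
          mountPath: /dev/ttyUSB0
  volumes:
    - name: dev-ttyusb0
      hostPath:
        path: /dev/ttyUSB0
```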

Appropriate use of Volumes - to push files into container?

I was reading Project Atomic's guidance for images, which states that the two main use cases for using a volume are:
sharing data between containers
when writing large files to disk
I have neither of these use cases in my example using an Nginx image. I intended to mount a host directory as a volume in the path of the Nginx docroot in the container. This is so that I can push changes to a website's contents onto the host rather than addressing the container directly. I feel this approach is easier since I can, for example, just add my ssh key to the host once.
My question is, is this an appropriate use of a data volume and if not can anyone suggest an alternative approach to updating data inside a container?
One of the primary reasons for using Docker is to isolate your app from the server. This means you can run your container anywhere and get the same result. This is my main use case for it.
If you look at it from that point of view, having your container depend on files on the host machine for a deployed environment is counterproductive: running the same container on a different machine may produce different output.
If you do NOT care about that, and are just using docker to simplify the installation of nginx, then yes you can just use a volume from the host system.
Think about this though...
# Dockerfile
FROM nginx
ADD . /myfiles

# docker-compose.yml
web:
  build: .
You could then use docker-machine to connect to your remote server and deploy a new version of your software with easy commands
docker-compose build
docker-compose up -d
even better, you could do
docker build -t me/myapp .
docker push me/myapp
and then deploy with
docker pull
docker run
There's a number of ways to achieve updating data in containers. Host volumes are a valid approach and probably the simplest way to achieve making your data available.
You can also copy files into and out of a container from the host. You may need to commit afterwards if you plan to stop and remove the running web server container.
docker cp /src/www webserver:/www
You can also bake files into a Docker image at build time from your Dockerfile, which is the same process as above (copy and commit), and then restart the webserver container from the new image.
COPY /src/www /www
But I think the host volume is a good choice.
docker run -v /src/www:/www webserver command
Docker data containers are also an option for mounted volumes but they don't solve your immediate problem of copying data into your data container.
If you ever find yourself thinking "I need to ssh into this container", you are probably doing it wrong.
I'm not sure I fully understand your request, but why do you need to push files into the Nginx container that way?
Manage the volume in a separate Docker container; that's my suggestion, and it is recommended by Docker.
Data volumes
A data volume is a specially-designated directory within one or more containers that bypasses the Union File System. Data volumes provide several useful features for persistent or shared data:
Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization.
Data volumes can be shared and reused among containers.
Changes to a data volume are made directly.
Changes to a data volume will not be included when you update an image.
Data volumes persist even if the container itself is deleted.
refer: Manage data in containers
As said, one of the main reasons to use Docker is to always achieve the same result. A best practice is to use a data-only container.
With docker inspect <container_name> you can find the path of the volume on the host and update the data manually, but this is not recommended;
or you can retrieve the data from an external source, such as a git repository.
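For reference, the data-only container pattern looks roughly like this (names are placeholders; the commands need a running Docker daemon, so they are shown for illustration only):

```shell
# Create a data-only container that owns the /www volume
docker create -v /www --name webdata busybox true

# Copy site content into the volume through the data container
docker cp ./src/www/. webdata:/www

# Start the web server with the volume attached
docker run -d --name webserver --volumes-from webdata nginx
```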
