Is there a way to get an optional bind mount in docker swarm - docker-swarm

I have a swarm service that bind-mounts a file that may not exist. If the file does not exist the service fails to deploy (and I get logs complaining about the missing file). I would prefer to have the service deploy anyway, just missing that mount. Is there a way to let that happen?
The file being mounted is a unix socket for a local memcached instance. The app can run without it, and we don't run memcached on every node, so I'd like the service to deploy even if the bind mount fails (for example, if the preferred node goes down and the service has to move to another node that doesn't run memcached).
I realize I could move the mount point to a directory that will always exist on every host machine, but I'd prefer to keep the bind mount exposure minimal if possible.

Recently I had a similar scenario: I set up an NFS server on one node and mounted the export on every swarm node, so the files are always available at the same path.
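A minimal stack-file sketch of that approach, using the local volume driver's NFS support so the mount is created on whichever node the task lands on; the server address, export path, and image name here are placeholders, not taken from the question:

```yaml
version: "3.8"

services:
  app:
    image: myorg/myapp:latest        # hypothetical image
    volumes:
      - shared-files:/config

volumes:
  shared-files:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw,nfsvers=4"   # illustrative NFS server address
      device: ":/exports/shared"          # illustrative export path
```

Because the volume definition travels with the stack, every node resolves the same path instead of depending on a file that may or may not exist locally.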

Related

How can I trigger a script in my host machine from a docker container?

I have a script on my host machine that needs to run every time an action occurs within a docker container (running a Django REST API). This script depends on many local files and environment variables, so it is not possible for me to simply map everything (volumes and env vars) from the host into the container so that it could be called there. It must be called from the host machine. After it executes, it will generate some output files that will be read and used from the container (through a mounted volume).
Is there any way I can achieve this? I've seen lots of comments about using the docker socket and mapping volumes, but none of them ever seems suitable for this case.
Thanks!
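One common workaround for this kind of setup is to use the shared mounted volume itself as a signal channel: the container drops a trigger file into the bind-mounted directory, and a small loop on the host watches for it and runs the local script. A hedged sketch (all paths and the host-script body are illustrative; a real setup would run the check in a loop or via inotifywait):

```shell
#!/bin/sh
# Sketch of a host-side watcher. The container and the host share WATCH_DIR
# via a bind mount; the container creates the trigger file, the host reacts.
# Everything below (paths, script body) is illustrative.
WATCH_DIR="${WATCH_DIR:-/tmp/api-triggers}"
TRIGGER="$WATCH_DIR/run-now"

run_host_script() {
  # Stand-in for the real host script: it can use any local files and env
  # vars, then write its output into the shared directory for the container.
  echo "output for container" > "$WATCH_DIR/result.txt"
}

mkdir -p "$WATCH_DIR"
touch "$TRIGGER"   # here we simulate the container signalling an action

# In production this check would run in a loop (or be driven by inotifywait).
if [ -f "$TRIGGER" ]; then
  rm -f "$TRIGGER"   # consume the trigger so it fires only once
  run_host_script
fi
```

The container only ever touches files inside the shared directory, so none of the host's local files or environment variables have to be mapped into it.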

If a user had access to the terminal in a docker container, can they do anything to destroy the hard drive it's on?

If a user had access to a root Ubuntu terminal in a docker container, can they do anything to destroy the hard drive or SSD it is on?
Link: gitlab.com/pwnsquad/term
Docker by default gives root access to containers.
A container can damage your host system only if you bypass Docker's container isolation mechanisms; otherwise the only damage can be done to the container itself, not the host.
The simplest ways to break the isolation mechanisms are the following:
using Docker's bind mounts, where you map a host path into a container path. In this case that path can be completely wiped from inside the container. Avoid bind mounts (use volumes), or mount in read-only (ro) mode to prevent this.
using networking, especially --network=host, which gives the container access to all of the host's active network services and thus may make the host vulnerable to attacks on them. In this case you can connect to services that are bound locally (to 127.0.0.1/localhost) and therefore do not expect remote connections, so they may be less protected.
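The read-only advice above can be expressed in a compose file like this (image name and paths are illustrative):

```yaml
services:
  app:
    image: myorg/myapp:latest        # hypothetical image
    # Read-only bind mount: the container can read /srv/app-config, but a
    # compromised process inside it cannot modify or delete the host files.
    volumes:
      - /srv/app-config:/etc/app:ro
    # Note: no "network_mode: host" here; the default bridge network keeps
    # the container away from services bound to the host's 127.0.0.1.
```

Keeping mounts read-only and avoiding host networking closes the two escape routes described above while still letting the container consume host-provided configuration.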

Bind mount volume between host and pod containers in Kubernetes

I have a legacy application that stores some config/stats in one of the directory on OS partition (e.g. /config/), and I am trying to run this as a stateful container in Kubernetes cluster.
I am able to run it as a container, but due to the inherently ephemeral nature of containers, whatever data my container writes to the OS partition directory /config/ is lost when the container goes down or is destroyed.
I have the Kubernetes deployment file written in such a way that the container is brought back to life, albeit as a new instance either on same host or on another host, but this new container has no access to the data written by previous instance of the container.
If it was a docker container I could get this working using bind-mounts, so that whatever data the container writes to its OS partition directory is saved on the host directory, so that any new instance would have access to the data written by previous instance.
But I could not find any alternative for this in Kubernetes.
I could use hostPath provisioning, but hostPath provisioning right now works only for single-node Kubernetes clusters.
Is there a way I could get this working in a multi-node Kubernetes cluster? Any option other than hostPath provisioning? I can get the containers to talk to each other and sync the data between nodes, but how do we bind-mount a host directory to a container?
Thanks for your help in advance!
This is what Volumes and VolumeMounts are for in your Pod definition. Your lead about hostPath is in the right direction, but you need a different volume type when you host data in a cluster (as you have seen yourself).
Take a look at https://kubernetes.io/docs/concepts/storage/volumes/ for a list of supported storage backends. Depending on your infrastructure you might find one that suits your needs, or you might need to actually create a backing service for one (i.e. an NFS server, Gluster, Ceph and so on).
If you want to add another abstraction layer to make a universal manifest that can work in different environments (i.e. with storage provided by a cloud provider, or manually provisioned depending on particular needs), you will want to get familiar with PV and PVC (https://kubernetes.io/docs/concepts/storage/persistent-volumes/). As I said, though, they are essentially an abstraction over the basic volumes, so you need to crack that first issue anyway.
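As a concrete illustration of the volume types linked above, here is a hedged sketch of an NFS-backed PersistentVolume, a claim against it, and a Pod that mounts the claim at /config; the server address, export path, and image are assumptions, not taken from the question:

```yaml
# PersistentVolume backed by an NFS export (server and path are placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: config-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10
    path: /exports/config
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
# The Pod mounts the claim at /config, so the data survives rescheduling
# to another node.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
    - name: app
      image: myorg/legacy-app:latest   # hypothetical image
      volumeMounts:
        - name: config
          mountPath: /config
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: config-pvc
```

Because the claim, not a node-local path, is what the Pod references, any node in the cluster can run a replacement instance with the previous instance's data.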

Where should docker volumes live on the host?

On the host side, should all the mount points be located in the same location? Or should they reflect the locations which are inside the containers?
For example, what is the best place to mount /var/jenkins_home on the host side in order to be consistent with its Unix filesystem?
/var/jenkins_home
/srv/jenkins_home
/opt/docker-volumes/jenkins/var/jenkins_home
Other location ?
It is entirely up to you where you mount the volume on the host. Just don't map it to any system file locations.
In my opinion, having volumes reflect the locations inside the container is not a great idea: you will have many containers, all with a similar filesystem structure, so you will never be able to isolate one container's writes from another's.
With Jenkins, since the official Jenkins Docker image runs as the user "jenkins", it would not be a bad idea to create a jenkins user on the host and map /home/jenkins on the host to /var/jenkins_home in the container.
Rather than using explicit host:container mounts, consider using named volumes. This has several benefits:
They can be shared easily into other containers
They are host-agnostic (whereas a bind mount will fail if the specific host path doesn't exist on that machine)
They can be managed as first-class citizens in the Docker world (docker volume)
You don't have to worry about where to put them on your host ;)
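A compose-style sketch of the named-volume approach for this Jenkins case (the volume name is illustrative):

```yaml
services:
  jenkins:
    image: jenkins/jenkins:lts
    volumes:
      - jenkins_home:/var/jenkins_home

volumes:
  jenkins_home: {}   # Docker decides where this lives on the host
```

The volume can then be inspected or shared by name (docker volume inspect jenkins_home), and the compose file works unchanged on any host.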

Moving Docker Containers Around

I would like to use this Docker container:
https://registry.hub.docker.com/u/cptactionhank/atlassian-confluence/dockerfile/
My concern is that if I wind up having to move this docker container to another machine (or it quits for some reason and needs to be restarted), all the data (server config and other items stored on the filesystem) will be lost. How do I ensure that this data isn't lost?
Thanks!
The first rule of Docker containers is don't locate your data inside your application container. Data that needs to persist beyond the lifetime of the container should be stored in a Docker "volume", either mounted from a host directory or from a data-only container.
If you want to be able to start containers on different hosts and still have access to your data, you need to make sure your data is available on those hosts. This problem isn't unique to Docker; it's the same problem you would have if you wanted to scale an application across hosts without using Docker.
Solutions include:
A network filesystem like NFS.
A cluster filesystem like Gluster.
A non-filesystem based data store, like a database, or something like Amazon S3.
This is not necessarily an exhaustive list but hopefully gives you some ideas.