How to mimic --device option in docker run in kubernetes - docker

I am very new to Kubernetes and Docker. I am trying to find the config equivalent of the --device option in docker run. In Docker, this option is used to add a host device to the container.
Is there an equivalent in Kubernetes that can be added to the YAML file?
Thanks

Currently we do not have a passthrough for this option in the API, though you may have some success using a hostPath volume to mount a device file in.
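For example, a minimal pod spec using a hostPath volume to expose a host device might look like the sketch below. The device path, pod name, and image are illustrative, and device access from inside the container typically also requires a privileged security context:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: device-pod          # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    securityContext:
      privileged: true      # often required for access to host devices
    volumeMounts:
    - name: dev-snd
      mountPath: /dev/snd   # where the device appears in the container
  volumes:
  - name: dev-snd
    hostPath:
      path: /dev/snd        # illustrative host device path
```

Note that, unlike --device, this grants broad privileges to the pod, so it is a workaround rather than a true equivalent.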

Related

Support for `volume_mount` in Nomad Podman task driver?

I am doing some proof of concept work using Nomad to orchestrate several different containers running on RHEL 8 hosts using Podman. I am using the Nomad Podman driver to execute my containers using Podman. I have shared state in the form of an Elasticsearch data directory that I mount into /usr/share/elasticsearch/data.
I initially tried to get this working by defining a host volume in the Nomad client configuration, then adding a volume stanza that references my host volume and a volume mount stanza that references the volume in my Nomad job specification. That approach didn't work - no errors, but the mounting never happens.
After some digging, I found that the Podman task driver's capabilities documentation says that volume mounts are not supported. Instead, I seem to have to use the more limited driver-specific volumes configuration.
So my question is this: Is the lack of support for volume mounts just a temporary shortcoming that will eventually be supported? It does appear that the Docker task driver supports volume mapping and only Podman does not, so perhaps the Podman driver is just not there yet? Or is there a specific reason why there is a difference between how Docker supports volumes and how Podman does it?
Yes, currently the driver does not support host volumes defined in the Nomad client configuration.
This will work once this PR gets merged:
https://github.com/hashicorp/nomad-driver-podman/pull/152
You can build the binary using Go from that branch:
git clone https://github.com/ttys3/nomad-driver-podman
git checkout append-nomad-task-mounts
./build.sh
Then replace the existing nomad-driver-podman binary with the newly generated one and restart Nomad.
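In the meantime, the driver-specific volumes configuration that the capabilities documentation points to can be used directly in the task config. A sketch, with the image and host path as placeholders:

```hcl
task "elasticsearch" {
  driver = "podman"

  config {
    image = "docker.io/library/elasticsearch:7.9.0"  # illustrative image

    # Driver-specific bind mount (host:container); host path is illustrative
    volumes = [
      "/srv/es-data:/usr/share/elasticsearch/data"
    ]
  }
}
```

This bypasses Nomad's host-volume abstraction entirely, so the host path must exist on every client the task can be scheduled on.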

Where docker volumes are located?

I need to know where Docker volumes are located when using Docker Machine on macOS.
The installation uses boot2docker, so a VM runs behind the scenes.
Example:
docker volume create test-data
docker volume inspect shows a path, but where can I find the actual (physical) location?
It’s inside the virtual machine and isn’t directly accessible from the host.
Debug-level commands like docker volume inspect will give you a path, but they really are only for emergency debugging and not for routine use. If you have a way to get a shell in the VM you can see that path, but you really shouldn’t be directly accessing files there, and you shouldn’t be routinely docker inspecting anything.
macOS uses a virtual machine, so it's different from Linux, where you can access volumes under /var/lib/docker/volumes.
On macOS you have to connect to the VM to find your volumes.
Suppose you use persistent data volumes in Docker and you want to access them from the command line.
If your Docker host is Linux, that's not a problem; you can find Docker volumes under the /var/lib/docker/volumes path.
However, that’s not the case when you use Docker for Mac.
Try to cd /var/lib/docker/volumes from your macOS terminal and you'll get nothing.
You see, your Mac machine isn’t a real Docker host. Docker for Mac runs a virtual machine and hides it from you to make things simple.
So, to access persistent volumes created by Docker for Mac, you need to connect on that VM.
To accomplish this, we need to use a serial terminal on the Mac. A terminal application called “screen” is going to help us.
We need to “screen into” the Docker VM by executing this command:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
You should see a blank screen; just press Enter, and after a while you should see a command-line prompt.
Now you’re inside Docker’s VM and you can cd into volumes dir by typing: cd /var/lib/docker/volumes
Profit, you got there!
If you need to transfer files from your MacOS host into Docker host you can refer to File Sharing
Hope this helps you!
If you have installed docker using snap then volumes are located at:
/var/snap/docker/common/var-lib-docker/volumes/
With the official Docker install, volumes are located at:
/var/lib/docker/volumes/
Normally, if you want to "know" where a volume lives, you would map a volume to the local filesystem. When you create a named volume you are just allocating "shared" storage. However, if you really need to know, run this command:
docker volume inspect test-data
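If you need the data at a known host location from the start, a bind mount is usually a better fit than inspecting a named volume afterwards. A sketch, with the host directory and image as placeholders:

```shell
# Bind-mount a host directory (host:container) instead of a named volume;
# the data then lives at a path you chose on the host.
docker run --rm -v "$PWD/test-data:/data" busybox ls /data
```

On Docker for Mac, bind-mounted host directories are shared into the VM for you, so this sidesteps the need to screen into the VM at all.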

Docker -v (volume mount) equivalent in kubernetes

I am looking for the Kubernetes equivalent of docker -v for mounting volumes on gcloud.
I am trying to run my container using Google Container Engine, which uses kubectl to manage clusters. In the kubectl run command I could not find any provision for mounting volumes.
kubectl run foo --image=gcr.io/project_id/myimage --port 8080
I checked their official docs but could not find any clue whatsoever.
At the moment, it's not possible to mount a persistent volume into a container using imperative ways or generator commands (run, expose). Therefore, you have to use the declarative way to get it done.
Kubernetes provides 2 abstractions for storage in a cluster: the persistent volume claim (PVC) and the persistent volume (PV). Moreover, you can use a storage class to provision persistent volumes (PV) dynamically.
persistent-volumes
storage-classes
When you write a manifest file for a deployment, you reference the PVC in a volume field, and you write a PVC manifest to claim a PV.
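A sketch of the declarative approach, using the image from the question; the PVC name, storage size, and mount path are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc               # illustrative claim name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi            # illustrative size
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: foo
        image: gcr.io/project_id/myimage
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: data
          mountPath: /data    # illustrative mount path
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: foo-pvc  # references the PVC above
```

Applying this with kubectl apply -f replaces the kubectl run invocation from the question.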

How to use docker with volume and device as an action in the Openwhisk

I have some code in Docker which polls a directory to perform some action on it.
This directory is passed using the -v option when running Docker. There are also some devices that are used, like --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl
In the wsk docs I see that to create a Docker action I use the command below:
wsk -i action create --docker
I wanted to understand how to pass the volume and device info to Docker, since the starting and stopping of this container will be managed by OpenWhisk.
Or is there some other workaround for this?
OpenWhisk does not support running Docker-based Actions with attached volumes. Users do not have any control over the storage devices.
The workaround would be to use an object store as the storage location. The OpenWhisk Action can then use an API to query, retrieve and modify data from the serverless runtime.
Old question, but I'll leave a note that for local test deployments, I have OpenWhisk running with mounted NFS network filesystems and local paths. For this, I simply added the mounts to the source (hardcoded; ENV settings could come next):
Add volume mounts -v here:
File: core/invoker/src/main/scala/whisk/core/containerpool/docker/DockerContainer.scala
val args = Seq(
  "--cpu-shares",
  cpuShares.toString,
  "--memory",
  s"${memory.toMB}m",
  "--memory-swap",
  s"${memory.toMB}m",
  "--network",
  network,
  "-v",
  "/mnt/nfs:/mnt/nfs",
  "-v",
  "/mnt/data:/mnt/data") ++
  environmentArgs ++
  dnsServers.flatMap(d => Seq("--dns", d)) ++
  name.map(n => Seq("--name", n)).getOrElse(Seq.empty) ++
  params
To build this, run the following in the openwhisk-master directory:
./gradlew distdocker
Then tag the produced invoker container so it is used in the stack:
docker tag whisk/invoker openwhisk/invoker
After a restart, you have your volumes.
BUT NOTE: This contradicts the design principle of stateless microservices and is probably not the smartest way to go. Re-check whether you can do without mounted volumes (they are not stateless).

Using SMB shares as docker volumes

I'm new to docker and docker-compose.
I'm trying to run a service using docker-compose on my Raspberry PI. The data this service uses is stored on my NAS and is accessible via samba.
I'm currently using this bash script to launch the container:
sudo mount -t cifs -o user=test,password=test //192.168.0.60/test /mnt/test
docker-compose up --force-recreate -d
Where the docker-compose.yml file simply creates a container from an image and binds its own local /home/test folder to the /mnt/test folder on the host.
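A minimal sketch of such a compose file; the service and image names are placeholders, the paths and restart policy are from the question:

```yaml
version: "3"
services:
  test:
    image: myservice:latest   # illustrative image
    restart: always
    volumes:
      - /mnt/test:/home/test  # host mount point : container path
```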
This works perfectly fine, when launched from the script. However, I'd like the container to automatically restart when the host reboots, so I specified 'always' as restart policy. In the case of a reboot then, the container starts automatically without anyone mounting the remote folder, and the service will not work correctly as a result.
What would be the best approach to solve this issue? Should I use a volume driver to mount the remote share (I'm on an ARM architecture, so my choices are limited)? Is there a way to run a shell script on the host when starting the docker-compose process? Should I mount the remote folder from inside the container?
Thanks
What would be the best approach to solve this issue?
As @Frap suggested, use systemd units to manage the mount and the service and the dependencies between them.
This document discusses how you could set up a Samba mount as a systemd unit. Under Raspbian, it should look something like:
[Unit]
Description=Mount Share at boot
After=network-online.target
Before=docker.service
RequiredBy=docker.service
[Mount]
What=//192.168.0.60/test
Where=/mnt/test
Options=credentials=/etc/samba/creds/myshare,rw
Type=cifs
TimeoutSec=30
[Install]
WantedBy=multi-user.target
Place this in /etc/systemd/system/mnt-test.mount, and then:
systemctl enable mnt-test.mount
systemctl start mnt-test.mount
The After=network-online.target line should cause systemd to wait until the network is available before trying to access this share. The Before=docker.service line will cause systemd to only launch docker after this share has been mounted. The RequiredBy=docker.service means that if you start docker.service, this share will be mounted first (if it wasn't already), and that if the mount fails, docker will not start.
This is using a credentials file rather than specifying the username/password in the unit itself; a credentials file would look like:
username=test
password=test
You could just replace the credentials option with username= and password=.
Should I mount the remote folder from inside the container?
A standard Docker container can't mount filesystems. You can create a privileged container (by adding --privileged to the docker run command line), but that's generally a bad idea (because that container now has unrestricted root access to your host).
I finally "solved" my own issue by defining a script to run in the /etc/rc.local file. It will launch the mount and docker-compose up commands on every reboot.
Being just 2 lines of code and not dependent on any particular Unix flavor, it felt to me like the most portable solution, barring a docker-only solution that I was unable to find.
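For reference, such an rc.local addition might look like the following sketch; the mount options come from the question, while the service directory path is a placeholder:

```shell
#!/bin/sh -e
# Mount the SMB share, then bring the compose stack up
mount -t cifs -o user=test,password=test //192.168.0.60/test /mnt/test
cd /home/pi/myservice && docker-compose up --force-recreate -d  # path is illustrative
exit 0
```

Unlike the systemd approach, this does not express the dependency between the mount and the container, so a failed mount will still start the service.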
Thanks all for the answers
