I am trying to map SMB network storage to Docker, in a development environment, to make it available to containers in the same way as a shared local drive, i.e. mounted for the entire Docker VM rather than for individual containers. Another application needs the network storage via SMB access, but it lives in another domain, so I can't share anything from my local drives with it. Windows network drives also don't work with Docker.
The current workaround is to open nested shells into Docker to reach the VM and then mount the network storage manually. I tried scripting this as a Windows batch file, but it stops at the first shell prompt and the subsequent "echo" lines are never fed to it as input.
docker run --rm -it --privileged --pid=host justincormack/nsenter1
echo ctr -n services.linuxkit task exec -t --exec-id foo docker-ce /bin/sh
echo mkdir host_mnt/mystorage
echo mkdir host_mnt/mystorage/Videos
echo mkdir host_mnt/mystorage/Videos/my-private-storage
echo mount -v -t cifs -o username=myname,password=p#s$w0rd,file_mode=0777,dir_mode=0777,vers=2.0,uid=1234,gid=1234 //mystorage.mycompany.com/Videos/my-private-storage /host_mnt/mystorage/Videos/my-private-storage
echo exit
echo exit
Typing these commands into the console manually (without the "echo"s) works, but requires deleting/restarting the Docker containers afterwards.
Is there any way to map a network drive to Docker easily and upon Docker startup? Or any other way to easily use an SMB resource?
I think the biggest problem you're going to face is that the entire Moby VM used for Docker for Windows has a read-only filesystem. If you attempt to do the mount directly from Moby itself, you will get an error because the helper applications for CIFS/NFS are missing:
mount: /mnt: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
In most environments, we would just install cifs-utils or nfs-common, but because it's a read-only filesystem, I can't think of a way to get that working.
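If you want to verify this on your side, you can check for the mount helpers from the VM shell opened with the nsenter1 command in the question; a minimal sketch (assuming the usual LinuxKit layout, which may vary by Docker Desktop version):
# inside the shell opened by: docker run --rm -it --privileged --pid=host justincormack/nsenter1
ls -l /sbin/mount.cifs /sbin/mount.nfs   # expect "No such file or directory" for both
mount | grep ' / '                       # shows the root filesystem and whether it is mounted read-only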
Related
How do I create a local-registry container that mounts a volume from the host machine and persists locally all the images that get pulled?
I don't want to download images more than once, unless necessary, even after the registry (or the whole Docker VM) is thrown away and recreated.
This is useful when you have a slow connection or no connectivity. It would also allow mounting a backup with pre-downloaded images as a docker volume, skipping the need for an internet connection altogether.
The latter is already possible, but this would be more convenient than having to manually docker push / docker pull onto the local registry, or to docker save / docker load each image that needs to be available there.
It's a rephrasing of this one, which wasn't reopened for lack of feedback. The main purpose is to make the answer available for search, but feel free to propose better solutions.
Here are the step-by-step instructions. Hopefully this will save time and make life easier for somebody else travelling or living in disadvantaged areas of the world where internet connections are too limited, or sometimes absent altogether, to reach the Docker world!
Instructions are for macOS and Minikube, but they can also be adapted for a VM running on Windows or via Docker Desktop.
(note: you will need to check if your virtualization technology provides automount of the system user directory)
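If it does not, Minikube can at least mount a host folder into its VM manually; a minimal sketch (the path below is a placeholder, not from the original instructions):
# keep this running in a separate terminal while the registry is in use
minikube mount /Users/myuser/docker-data:/Users/myuser/docker-data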
Configuration
First define your environment variables with the desired values. See the env vars used in the code below (PROXIED_REGISTRY, REGISTRY_USER, REGISTRY_PASSWORD, MACOS_USERNAME, PATH_WHERE_TO_PERSIST_IMAGES, etc.).
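For example (a sketch; all values below are placeholders I'm assuming, adjust them to your setup):
export PROXIED_REGISTRY="registry.example.com"       # upstream registry to proxy (or registry-1.docker.io for Docker Hub)
export REGISTRY_USER="myuser"                        # credentials for the proxied registry
export REGISTRY_PASSWORD="mypassword"
export MACOS_USERNAME="myuser"                       # your macOS account, used to build the host path
export PATH_WHERE_TO_PERSIST_IMAGES="docker-data"    # folder under /Users/${MACOS_USERNAME} where layers will be kept
export REPOSITORY="myorg"                            # used later for the test pull
export IMAGE="myimage"
export IMAGE_TAG="latest"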
On the host machine
Minikube
If using Minikube, first bind your shell to the Docker daemon of its VM
eval $(minikube docker-env)
or run the commands directly from inside the VM, via minikube ssh.
Create local registry
(note: some envs might be unnecessary; check Docker docs to see what you need)
The -v option mounts into the local registry the path where you want to persist the registry data (repository folders and image layers).
When you use Minikube, it automatically mounts the home folder from the host (/Users/ on macOS) onto the virtual machine where Docker runs.
docker run -d -p 5000:5000 \
-e STANDALONE=false \
-e "REGISTRY_LOG_LEVEL=debug" \
-e "REGISTRY_REDIRECT_DISABLE=true" \
-e MIRROR_SOURCE="https://${PROXIED_REGISTRY}" \
-e REGISTRY_PROXY_REMOTEURL="https://${PROXIED_REGISTRY}" \
-e REGISTRY_PROXY_USERNAME="${REGISTRY_USER}" \
-e REGISTRY_PROXY_PASSWORD="${REGISTRY_PASSWORD}" \
-v /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry:/var/lib/registry \
--restart=always \
--name local-registry \
registry:2
Login to your local registry
echo -n "${REGISTRY_PASSWORD}" | docker login -u "${REGISTRY_USER}" --password-stdin "localhost:5000"
(optional) Verify that the persistence directories are present
docker exec local-registry ls -la /var/lib/registry/docker/registry
ls -la /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry/docker/registry
Try to pull one image from your private registry
(to see it proxied through the registry at localhost:5000)
docker pull localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}
(optional) Verify that the image data has been synced to the local host, in the desired location
docker exec local-registry ls -la /var/lib/registry/docker/registry
ls -la /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry/docker/registry
If using Kubernetes
change the deployment spec container image to:
localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}
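If the deployment already exists, this can also be done imperatively; a minimal sketch (the deployment and container names my-app are assumptions):
kubectl set image deployment/my-app my-app=localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}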
Et voila!
You can now keep the images downloaded from your repository stored on your host machine!
If internet is available, the local registry will make sure it has the most recent version of your pulled images by requesting it from the proxied registry (private, or the Docker Hub).
And you will have a last-resort backup to run your containers even when your internet connection is too slow to re-download everything you need, or is unavailable altogether!
(really useful with Minikube, when you need to destroy your docker virtual machine)
References:
https://docs.docker.com/registry/recipes/mirror/#run-a-registry-as-a-pull-through-cache
https://minikube.sigs.k8s.io/docs/handbook/mount/#driver-mounts
I cannot start TensorFlow with the image downloaded from TensorFlow
I'm using Docker on Windows 10 and the error output said this:
WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.
To avoid this, run the container by specifying your user's userid:
$ docker run -u $(id -u):$(id -g) args...
I tried searching for the problem on Google but couldn't find anything; my experience with Docker is practically zero.
This is a warning telling you that files created in the mounted directory by the container are owned by root, so on your host you may need sudo to access or change them and may not be able to modify them as a non-root user.
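In practice, from a Linux, macOS, or WSL shell the suggestion in the warning would look roughly like this; a sketch (the image name and mount path are assumptions for illustration):
# run the container as your own user so files created in the mounted volume belong to you on the host
docker run -it --rm \
  -u $(id -u):$(id -g) \
  -v $(pwd)/notebooks:/tf/notebooks \
  tensorflow/tensorflow:latest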
A quick search shows that there are many blog posts about this; check these:
Docker creates files as root in mounted volume
Running a Docker container as a non-root user
Setup Docker for windows using windows subsystem linux
https://jtreminio.com/blog/running-docker-containers-as-current-host-user/
https://medium.com/better-programming/running-a-container-with-a-non-root-user-e35830d1f42a
https://docs.docker.com/install/linux/linux-postinstall/
I'm trying to access the file system of a container made with docker-machine. I've used the ssh command but it doesn't seem to have anything that will allow you to list files / folders (like ls).
How would one explore the files currently on a container with docker-machine?
You can create a bash session (assuming your image has bash installed) in a RUNNING container with the following command.
docker exec -ti <container_id> bash
Then you can explore the filesystem of the container.
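Putting it together with docker-machine, the flow might look like this (a sketch; <machine-name> and <container_id> are placeholders):
eval $(docker-machine env <machine-name>)   # point your local docker client at the machine
docker ps                                   # list running containers and note the CONTAINER ID
docker exec -ti <container_id> bash         # open a shell inside that container
ls -la /                                    # explore the container's filesystem from the inside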
I have a dockerized web application that I'm running in a HA setup. I have a cron setup that runs dockup every midnight to back up my important information stored on other containers. Now I would like to back up and aggregate the logs from my web application too. The problem is, how do I do that? If I use the VOLUME keyword in the Dockerfile to expose /logs to the host machine, wouldn't there be a collision, because there would be two /logs directories on the dockup container?
I have checked dockup: it does not have a /logs directory. It seems it uses /var/logs for log output.
$ docker run -it --name dockup borja/dockup bash
Otherwise, yes, it would be a problem, because the volume would be mounted under that name and the current container's processes would also log to the same folder. Not good.
Use a logging container like fluentd, which also offers writing to S3 buckets, like dockup does. A tutorial can be found here.
Tweak your container, e.g. with symbolic links for the logs, or relay the logs to a different volume (see the sketch at the end of this answer).
Access the logs not through the container but through native Docker, and either copy them to S3 yourself or run dockup on your locally mounted log file:
$ docker logs container/name > logfile.log
$ docker run --rm \
--env-file env.txt \
-v $(pwd)/logfile.log:/customlogs/logfile.txt \
--name dockup borja/dockup
Now you can use the folder /customlogs/ as your backup path inside env.txt.
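For the "relay the logs to a different volume" option above, one way to avoid the /logs collision entirely is a named volume mounted at different paths in the two containers; a sketch (image and volume names are assumptions):
docker volume create weblogs
docker run -d --name webapp -v weblogs:/logs my-webapp                    # the app keeps writing to /logs
docker run --rm --env-file env.txt -v weblogs:/customlogs borja/dockup    # dockup sees the same data under /customlogs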
So, despite Docker 1.3 now allowing easy access to external storage on OSX through boot2docker for files in /Users/, I still need to access files not in /Users/. I have a settings file in /etc/settings/ that I'd like my container to have access to. Also, the CMD in my container writes logs to /var/log in the container, which I'd rather have it write to /var/log on the host. I've been playing around with VOLUME and passing stuff in with -v at run, but I'm not getting anywhere. Googling hasn't been much help. Can someone who has this working provide help?
As boot2docker now includes the VirtualBox Guest Additions, you can share folders on the host computer (OSX) with the guest operating system (boot2docker-vm). /Users/ is automatically mounted, but you can mount/share custom folders too. In your host console (OSX):
$ vboxmanage sharedfolder add "boot2docker-vm" --name settings-share --hostpath /etc/settings --automount
Start boot2docker and ssh into it ($ boot2docker up / $ boot2docker ssh).
Choose where you want to mount the "settings-share" (/etc/settings) in the boot2docker VM:
$ sudo mkdir /settings-share-on-guest
$ sudo mount -t vboxsf settings-share /settings-share-on-guest
Assuming that /settings is the volume declared in the Docker container, add -v /settings-share-on-guest:/settings to the docker run command to mount the host directory settings-share-on-guest as a data volume.
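The resulting run command would then look roughly like this (a sketch; the image name is a placeholder):
docker run -d -v /settings-share-on-guest:/settings my-image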
Works on Windows, not tested on OSX but should work.