Map docker container folder to host folder - dev purposes - docker

Is there a way to map/mount a container folder to a host folder? Like volumes, but vice versa.
Why?
I have a large project which, including all the libraries, has 0.5M files and folders. I have several projects like this.
I would like to have the project folder only once on my disk (in the container) and not twice (local folder + container).
I know that mounting a folder from a "closed" container to the host is, from a logical point of view, nonsense - but again, this is just for dev purposes.
I also know docker creates network interfaces. I'm not a network expert, but maybe (in case docker does not support this natively) there is a way to connect from my machine into the container and map a folder to a folder (sshfs maybe? see the sketch below)?
The dream is to have a laptop with a clean host OS installation + docker. You pull a docker image, run the IDE, mount folders and start developing. No need for a local installation of npm, Python/PHP/Perl/... nor their libraries.
This is just a vision, but maybe I'm closer to reality than I imagine.
Thanks
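One hedged way to approximate the sshfs idea, assuming sshfs is installed on the host and the image already runs an SSH server (the image name, port, user and paths below are hypothetical):

    # Start a dev container that runs sshd (hypothetical image and paths).
    docker run -d --name devbox -p 2222:22 my-dev-image-with-sshd

    # Mount the container's project folder onto the host via sshfs.
    mkdir -p ~/project-mount
    sshfs -p 2222 dev@localhost:/workspace ~/project-mount

    # When done, unmount.
    fusermount -u ~/project-mount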

Related

Move docker desktop data folder (windows containers)

I'm using Docker Desktop (4.X) on Windows 10 Pro. We are building Windows applications and using Windows containers.
On our setup, the folder C:\ProgramData\Docker (images/windowsfilter/tmp & co) can grow to hundreds of GB, and I need to move this folder to an alternative location.
Again, I am using WINDOWS CONTAINERS (I do not care about WSL2- or Hyper-V-specific solutions).
I tried moving / creating a junction between
C:\ProgramData\Docker => D:\DockerData, but the Windows containers backend does not start.
If I switch back to Linux containers, everything works fine (and I know how to move the WSL2 vhdx if needed, but again, I DO NOT NEED THAT information).
Moving the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\ProgramData location BEFORE installing Docker Desktop works, but it is not an acceptable solution.
I tried configuring the data-root directory in %USERPROFILE%\.docker\windows-daemon.json, but it does not work; the Windows containers backend does not start.
Please give me a reliable way to move the C:\ProgramData\Docker folder to another location.
Unfortunately, when using Windows containers, it is not currently feasible to relocate the C:\ProgramData\Docker folder to another location. The Docker for Windows service is hard-coded to use this directory to store container images and other data.
As a workaround you might try using a symbolic link to redirect the C:\ProgramData\Docker folder to another location. This may not be a reliable approach, though, as the Docker for Windows service might not handle the symbolic link correctly, which would prevent the service from starting.
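If you want to try that workaround anyway, a minimal sketch from an elevated command prompt with Docker Desktop stopped (D:\DockerData is the target drive from the question; as noted above, the Windows containers backend may still refuse to start):

    REM Mirror the existing data to the new location, including security info.
    robocopy C:\ProgramData\Docker D:\DockerData /MIR /SEC

    REM Remove the original folder and replace it with a directory junction.
    rmdir /S /Q C:\ProgramData\Docker
    mklink /J C:\ProgramData\Docker D:\DockerData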

Is it possible to change my docker data-root directory to another server?

I installed docker inside a server, or what I usually call a VM (Virtual Machine, with an RHEL environment), and I have been thinking of using a directory on a remote Windows network share as my new data-root directory (for space reasons).
In my case, the data-root directory on my VM is located at /home/docker_new, and the path I want to use as the new data-root directory on the remote Windows network share is, for example,
\\xx.xx.x.x\a\b\c.
I did try to do my research and found that most of the solutions focus on changing the data-root directory within the same VM. Some focus on how to 'move' docker to another server. What I intend to do is just change the data-root directory to a directory on a remote Windows network share.
So, my question is:
1. Is it possible to do so?
2. If it's possible, what are the steps that I should follow?
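For reference, a minimal sketch of the mechanics on the RHEL VM, assuming the Windows share is reachable and credentials are available (username/password are placeholders; note that Docker's overlay2 storage driver generally does not work on CIFS/SMB mounts, so this is likely to fail or perform poorly in practice):

    # Mount the remote Windows share on the VM (share path taken from the question).
    sudo mkdir -p /mnt/docker-data
    sudo mount -t cifs //xx.xx.x.x/a/b/c /mnt/docker-data -o username=<user>,password=<pass>

    # Point the docker daemon at the new data-root (overwrites any existing daemon.json).
    echo '{ "data-root": "/mnt/docker-data" }' | sudo tee /etc/docker/daemon.json

    # Restart docker so the new data-root takes effect.
    sudo systemctl restart docker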

Is it possible to use a single docker volume to map to two different directories?

I am using VS Code, Remote Development Containers, and Docker to create development environments within containers. Everything works fine, but I did notice that when working with different projects, doing things such as yarn install means having to download the npm modules each time. Of course, once a container does this, they are stored in the cache, specifically /usr/local/share/.cache/yarn/v6.
When I attempted to mount that folder to the host machine, yarn install would start to fail far too often, stating that it was having trouble downloading the package due to a bad network connection (the connection was just fine). So, I created a volume instead and everything worked just fine.
The problem I am running into is that I also want to share other folders in the volume so that multiple containers use the same cache for things such as NuGet packages. I was hoping to somehow have my volume look like so:
mysharedvolume/yarn => /usr/local/share/.cache/yarn/v6
mysharedvolume/nuget => /wherever/nuget/packages/are/cached
mysharedvolume/somefile.config => /wherever/somefile.config
This does not seem to be the way volumes work in docker; all of the files are mixed up at the root of the volume (there are no subdirectories). Of course, I can't simply map the entire /usr folder or anything like that - that's crazy.
Before I go off and create different volumes for each cache and config files, is there a way to do this with a single shared volume?
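If separate volumes do turn out to be the way to go, a minimal sketch with one named volume per cache (the NuGet path below is an assumption - adjust it to wherever your packages are actually cached - and node:18 is just an example image):

    # One named volume per cache, shared by any container that mounts it.
    docker volume create yarn-cache
    docker volume create nuget-cache

    # Mount both caches into a dev container (target paths are examples).
    docker run -it --rm \
      -v yarn-cache:/usr/local/share/.cache/yarn/v6 \
      -v nuget-cache:/root/.nuget/packages \
      node:18 bash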

Use VSCode remote development on docker image without local files

Motivation
As of now, we are using five docker containers (MySQL, PHP, static...) managed by docker-compose. We only need to access one of them. We currently keep a local copy of all the data and sync it from Windows to the container, but that is very slow, and VSCode on Windows sometimes randomly locks files, causing git rebase origin/master to end in very unpleasant ways.
Desired behaviour
Use VSCode Remote Development extension to:
Edit files inside the container without any mirrored files on Windows
Run git commands (checkout, rebase, merge...)
Run build commands (make, ng, npm)
Still keep Windows, as for many developers it is the preferred platform.
Question
Is it possible to develop inside a docker container using VSCode?
I have tried to follow the official guide, but it does seem to require us to have mirrored files. We also use WSL.
As #FSCKur points out, this is the exact scenario VSCode dev containers are supposed to address, but on Windows I've found the performance to be unusable.
I've settled on running VSCode and docker inside a Linux VM on Windows, and have a 96% time saving in things like running up a server and watching code for changes, making this setup my preferred way now.
The standardisation of devcontainer.json and being able to use github codespaces if you're away from your normal dev machine make this whole setup a pleasure to use.
See https://stackoverflow.com/a/72787362/183005 for a detailed timing comparison and setup details.
This sounds like exactly what I do. My team uses Windows on the desktop, and we develop a containerised Linux app.
We use VSCode dev containers. They are an excellent solution for the scenario.
You may also be able to SSH to your docker host and code on it, but in my view this is less good because you want to keep all customisation "contained" - I have installed a few quality-of-life packages in my dev container which I'd prefer to keep out of my colleagues' environments and off the docker host.
We have access to the docker host, so we clone our source on the docker host and mount it through. We also bind-mount folders from the docker host for SQL and Redis data - but that could be achieved with docker volumes instead. IIUC, the workspace folder itself does have to be a bind-mount - in fact, no alternative is allowed in the devcontainer.json file. But since you need permission on the docker daemon anyway, this is probably achievable.
All source code operations happen in the dev container, i.e. in Linux. We commit and push from there, we edit our code there. If we need to work on the repo on our laptops, we pull it locally. No rcopy, no SCP - github is our "sync" mechanism. We previously used vagrant and mounted the source from Windows - the symlinks were an absolute pain for us, but probably anyone who's tried mounting source code from Windows into Linux will have experienced pain over some element or other.
VSCode in a dev container is very similar to the local experience. You will get bash in the terminal. To be real, you probably can't work like this without touching bash. However, you can install PSv7 in the container, and/or a 'better' shell (opinion mine) such as zsh.
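For illustration, a minimal compose-based devcontainer.json along these lines, created next to the project on the docker host (the service name and paths are made up; the source and data bind-mounts themselves live in the compose file):

    # Hypothetical .devcontainer/devcontainer.json: attach VS Code to the one
    # compose service we actually work in.
    mkdir -p .devcontainer
    cat > .devcontainer/devcontainer.json <<'EOF'
    {
      "name": "app-dev",
      "dockerComposeFile": "../docker-compose.yml",
      "service": "php",
      "workspaceFolder": "/workspace"
    }
    EOF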

Docker base images, what do they compose of?

I am trying to wrap my head around the Docker architecture, in particular figuring out what exactly a base image consists of, and in doing so I have been exploring some of the images found on Docker Hub. Specifically, when looking at the following repo, it references the centos-7.2.1511-docker.tar.xz file.
I've downloaded and examined the contents of the tar and it has your typical Linux filesystem.
As I understand it, this is not a complete Linux OS, just a replica of a Linux filesystem with all the non-essentials removed, where all other requirements are drawn from the host OS when a container is run?
My question essentially boils down to how one would go about creating that tar file. What exactly do you need? My intention is not to create one, but rather to understand what portion of files/data/dependencies comes from a target OS to create an image and what gets used from the host OS.
A Docker container is a set of processes, running in a sandbox enabled by Linux namespaces, on top of the host kernel.
A Docker image is a set of layers, which are often simply tarballs, of files that are unpacked, and made to look as if they are the root of the filesystem when used to start a container.
A Docker image could be just a single statically-linked executable! You can create your own Docker image from scratch by simply creating a tarball of a single executable and giving it to docker import, which will store it in the appropriate internal format and register it as an image.
As you can see then, a Docker image need not be much. It certainly doesn't need a kernel, or any of the components normally used for configuring the system, networking daemons, or even things like cron. Those are all left to the host.
Things that are usually available in an image are a dynamic library runtime, and files like /etc/hosts, /etc/resolv.conf, and other files which are referenced directly by libc. This allows you to add typical dynamically-linked executables which interact with the system as if they're running on a traditional OS.
I have successfully "Dockerized" a legacy CentOS 6-based VM by uninstalling as many packages as possible, then tar-ing up the filesystem (excluding directories like /proc, /sys, /dev, etc.) and importing this via docker import. Afterwards, I started a container and (sometimes forcefully) removed additional "system" packages that serve no purpose in a Docker image, like kernel, udev, etc.
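To make the single-binary case concrete, a minimal sketch (assuming a statically linked ./hello binary; the image name is made up):

    # Put a single statically linked binary into an otherwise empty root filesystem.
    mkdir -p rootfs
    cp hello rootfs/hello

    # Turn that filesystem into an image and run it.
    tar -C rootfs -c . | docker import - hello-image
    docker run --rm hello-image /hello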
This blog post goes into some of the specifics of docker save/load versus export/import:
http://tuhrig.de/difference-between-save-and-export-in-docker/

Resources