Docker install on a private network with no internet access

I have to install Docker on Windows 7 in a private network with no internet access.
I can download anything and bring it in by USB from another computer.
How do I install and use Docker?
Meaning: from installation (what to install and how to set it up) to creating the first image.
Most of the instructions I found use a proxy, and I can't use a proxy.

The installation itself involves copying docker-machine-Windows-x86_64.exe, renaming it to docker-machine.exe, and creating a VirtualBox machine with it.
The issue is that it will attempt to download boot2docker.iso (the TinyCore-based Linux image which includes Docker pre-installed).
That means you need to copy that file onto your USB key first, from boot2docker/boot2docker/releases.
From issue 539:
docker-machine create mydocker --virtualbox-boot2docker-url=file:///Users/auser/Downloads/boot2docker.iso --driver=virtualbox
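On Windows the same flag accepts a local file URL; a hypothetical example, assuming the ISO was copied from the USB key into C:\Users\auser:
docker-machine create mydocker --driver=virtualbox --virtualbox-boot2docker-url=file://C:/Users/auser/boot2docker.iso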
You will need a similar Docker setup on a machine with internet access in order to:
docker pull the images you want
docker save them
copy them onto the USB key
copy them onto your offline server, in C:\Users... (which is the only folder mounted in the boot2docker VM)
Then you need to open an SSH session:
docker-machine ssh default
Within that session, you can access the folder where the saved images were copied, and docker load them, as sketched below.
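A minimal end-to-end sketch of that transfer, assuming a hypothetical nginx image and the C:\Users\auser folder:
On the machine with internet access:
docker pull nginx:latest
docker save -o nginx.tar nginx:latest
Copy nginx.tar to the USB key, then into C:\Users\auser on the offline machine.
On the offline machine:
docker-machine ssh default
docker load -i /c/Users/auser/nginx.tar
docker images
(boot2docker mounts C:\Users as /c/Users inside the VM, hence the translated path.)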

Related

VS Code remote-container extension to Docker container - build results owned by root

I am using an Ubuntu host (22.04) with a Docker container in which I defined my build environment (compiler, toolchain, USB devices). I created a volume share so that I can access the git repo on my host inside my container.
The problem is that when I compile a project and need to do something on my host with the build artifacts (e.g. upload a binary to a web portal), the files belong to the root user (which is the only user in my Docker environment). Thus, I need to chmod specific files before I can access them on my host, which is annoying.
I tried to run the Docker image with a user name, but then VS Code is no longer able to install stuff when it connects to the container.
Is there a way to get an active user in my container and still allow the VS Code remote-container extension to install extensions on connecting to the container? Or is there a better way to avoid chmodding all build results?
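For reference, the extension's devcontainer.json supports a remoteUser setting, which makes both VS Code and your builds run as that user (so artifacts are no longer owned by root) while the VS Code server still installs under that user. A minimal sketch, assuming a hypothetical non-root user named builder already exists in the image:
{
  "name": "build-env",
  "image": "my-build-image",
  "remoteUser": "builder"
}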

Restore container after docker-desktop uninstall on Windows

I ran into some issues with Docker Desktop on Windows and performed a fresh install.
The only problem is that my images and containers are gone...
I'm only interested in recovering one specific container using TensorFlow, containing a Jupyter notebook that I should have saved.
Is there any way to restore it?
I'm sorry but you won't be able to restore it.
When you have Docker Desktop without WSL2 backend, the resources are stored under C:\ProgramData\docker and those are deleted at uninstall.
When you have Docker Desktop with WSL2 backend, the distribution where Docker is running is completely wiped out at uninstall.
I suppose you were using the official image under tensorflow/tensorflow (or the equivalent with GPU support), so next time don't forget to use a volume for the contents you'd like to persist, or even better, have a disposable container that bind-mounts your workspace.
Example:
Create a folder under C: where you want your Jupyter Workspace, let's say C:\Projects
Start the container by mounting that folder on the container and run Jupyter Notebook: docker run -it --rm -v C:\Projects:/usr/workspace -p 8888:8888 tensorflow/tensorflow:nightly-jupyter
When you access your notebook in the browser (at localhost:8888), open the /usr/workspace directory in Jupyter Notebook, so all the work you're doing will also be stored on the host in C:\Projects
When you finish, you can safely stop and delete your container, since the work is stored on the host and not only in the container.
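If you'd rather use a named volume than a bind mount (a minimal sketch with a hypothetical volume name; the notebooks then survive container deletion, but live inside Docker's own storage instead of a regular folder):
docker volume create tf-work
docker run -it --rm -v tf-work:/usr/workspace -p 8888:8888 tensorflow/tensorflow:nightly-jupyter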

Copy Docker configuration to another PC

I’m using Docker Compose in order to run ChirpStack on my Windows 10 machine. I need to reinstall the operating system, but how do I keep my working ChirpStack Docker setup without creating a new one?
If all the base images you're using are from public repos and not only saved on your machine, you only need to save your Docker configuration. Since you're using Docker Compose, you can just copy the docker-compose.yml file to an external storage medium and you're all set. Unless you have some more dependencies that only exist on your computer, that's all the files you need.
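A minimal sketch of the move, assuming a hypothetical backup drive E: and no local-only images:
On the old machine, copy the compose file to the backup drive:
copy docker-compose.yml E:\backup\
On the reinstalled machine, from the folder containing docker-compose.yml:
docker compose up -d
Keep in mind that any data stored in named volumes (a database, for example) is not captured by the compose file and would need to be exported separately.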

Where are Docker volumes located?

I need to know where Docker volumes are located when using Docker Machine on macOS.
The installation uses boot2docker, so a VM runs behind the scenes.
Example:
docker volume create test-data
docker inspect shows a path, but where can I find the specific (physical) location?
It’s inside the virtual machine and isn’t directly accessible from the host.
Debug-level commands like docker volume inspect will give you a path, but they really are only for emergency debugging and not for routine use. If you have a way to get a shell in the VM you can see that path, but you really shouldn’t be directly accessing files there, and you shouldn’t be routinely docker inspecting anything.
macOS uses a virtual machine, so it's different from Linux, where you can access volumes at /var/lib/docker/volumes.
On macOS you have to connect to that VM to find your volumes.
If you use persistent data volumes in Docker and you want to access them from the command line:
If your Docker host is Linux, that's not a problem; you can find Docker volumes under the /var/lib/docker/volumes path.
However, that's not the case when you use Docker for Mac.
Try to cd /var/lib/docker/volumes from your macOS terminal and you'll get nothing.
You see, your Mac machine isn't a real Docker host. Docker for Mac runs a virtual machine and hides it from you to make things simple.
So, to access persistent volumes created by Docker for Mac, you need to connect to that VM.
To accomplish this, we can use a serial terminal on the Mac; the "screen" utility will help us.
We need to "screen into" the Docker driver by executing:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
You should see a blank screen; just press Enter, and after a while you should see a command-line prompt.
Now you're inside Docker's VM and you can cd into the volumes dir by typing: cd /var/lib/docker/volumes
Profit, you got there!
If you need to transfer files from your macOS host into the Docker host, you can refer to File Sharing.
Hope this helps you!
If you have installed Docker using snap, then volumes are located at:
/var/snap/docker/common/var-lib-docker/volumes/
With the official Docker install, volumes are located at:
/var/lib/docker/volumes/
Normally, if you want to "know" where a volume lives, you would map a volume to the local filesystem. When you create a named volume you are just allocating "shared" storage. However, if you really need to know, run this command:
docker volume inspect test-data
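The output will look roughly like this; the Mountpoint is a path inside the Docker host (on macOS, inside the VM):
[
    {
        "CreatedAt": "...",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/test-data/_data",
        "Name": "test-data",
        "Options": {},
        "Scope": "local"
    }
]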

How can I use a local file in a container?

I'm trying to create a container to run a program. I'm using a pre-configured image and now I need to run the program. However, it's a machine learning program and I need a dataset from my computer to run it.
The file is too large to be copied to the container. It would be best if the program running in the container looked for the dataset in a local directory of my computer, but I don't know how I can do this.
Is there any way to set up this reference with some Docker command? Or using the Dockerfile?
Yes, you can do this. What you are describing is a bind mount. See https://docs.docker.com/storage/bind-mounts/ for documentation on the subject.
For example, if I want to mount a folder from my home directory into /mnt/mydata in a container, I can do:
docker run -v /Users/andy/mydata:/mnt/mydata myimage
Now, /mnt/mydata inside the container will have access to /Users/andy/mydata on my host.
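The same bind mount can also be written with the more explicit --mount syntax, which is equivalent here; note that bind mounts are specified at docker run time, not in the Dockerfile:
docker run --mount type=bind,source=/Users/andy/mydata,target=/mnt/mydata myimage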
Keep in mind, if you are using Docker for Mac or Docker for Windows, there are specific directories on the host that are allowed by default:
If you are using Docker Machine on Mac or Windows, your Docker Engine daemon has only limited access to your macOS or Windows filesystem. Docker Machine tries to auto-share your /Users (macOS) or C:\Users (Windows) directory. So, you can mount files or directories on macOS using paths under /Users.
Update July 2019:
I've updated the documentation link and naming to be correct. This type of mount is called a "bind mount". The snippet about Docker for Mac or Windows no longer appears in the documentation, but it should still apply. I'm not sure why they removed it (my Docker for Mac still has an explicit list of allowed mounting paths on the host).
