Storing local docker images on External HDD boot2docker - docker

I'm using Docker on my MacBook Air, which unfortunately has quite limited hard drive space (120 GB).
I was wondering how I could store containers on my external drive instead of in the default location (which I believe is /var/lib/docker/)?
EDIT: It is in fact not /var/lib/docker. When using boot2docker, I believe the files are stored inside the VirtualBox VM.

You can do this by changing the storage location in Docker's settings.
Go to Preferences -> Advanced and, under the storage path, change the location to your external hard drive.

After clearing out the existing folder on your MacBook, mount your external hard drive on that path:
mount -t <fstype> -o defaults /dev/<your device> /var/lib/docker/
where <your device> could be, for example, sdb1.
For use with boot2docker, the external drive has to be exposed to the VM as a VirtualBox shared folder and mounted with something like:
mount -t vboxsf -o uid=1000,gid=50 <share-name> /var/lib/docker/
where <share-name> is the name you gave the shared folder in VirtualBox.
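For the plain Linux case (not boot2docker), a rough end-to-end sketch of moving the existing data and then mounting the drive might look like this; the device name, temporary mount point, and use of systemd are assumptions, so adapt them to your setup:
sudo systemctl stop docker                      # stop the daemon before touching its data directory
sudo mkdir -p /mnt/external
sudo mount -t ext4 /dev/sdb1 /mnt/external      # temporarily mount the external drive
sudo rsync -a /var/lib/docker/ /mnt/external/   # copy the existing images/containers/volumes onto it
sudo umount /mnt/external
sudo mount -t ext4 -o defaults /dev/sdb1 /var/lib/docker
sudo systemctl start docker
docker info                                     # Docker Root Dir is still /var/lib/docker, now backed by the external drive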

Related

After I 'download' apps through Docker, do I really have them on my computer? If so, where are they stored?

For example, if I build an image from a Dockerfile which contains nginx and php-fpm, do I really have them on my computer? Where can I find the related directories?
They are stored on your computer (unless you have some special configuration, like storing them over NFS somewhere). You can see the location of your Docker images by running
docker info
and looking for the Docker Root Dir line:
Docker Root Dir: /var/lib/docker
To verify this, go to that directory; its contents are usually quite large.
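If you only want that one value, docker info also accepts a Go-template format string (assuming a reasonably recent Docker CLI):
docker info --format '{{ .DockerRootDir }}'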
This tutorial lists some common locations on various operating systems:
Ubuntu: /var/lib/docker/
Fedora: /var/lib/docker/
Debian: /var/lib/docker/
Windows: C:\ProgramData\DockerDesktop
MacOS: ~/Library/Containers/com.docker.docker/Data/vms/0/

Where are Docker volumes located?

I need to know where Docker volumes are located when using Docker Machine on macOS.
The installation uses boot2docker, so there is a VM working behind the scenes.
Example:
docker volume create test-data
docker inspect shows a path, but where can I find the specific (physical) location?
It’s inside the virtual machine and isn’t directly accessible from the host.
Debug-level commands like docker volume inspect will give you a path, but they really are only for emergency debugging and not for routine use. If you have a way to get a shell in the VM you can see that path, but you really shouldn’t be directly accessing files there, and you shouldn’t be routinely docker inspecting anything.
macOS uses a virtual machine, so it's different from Linux, where you can access volumes directly under /var/lib/docker/volumes.
On macOS you have to connect to that VM to find your volumes.
Say you use persistent data volumes in Docker and you want to access them from the command line. If your Docker host is Linux, that's not a problem; you can find Docker volumes under the /var/lib/docker/volumes path.
However, that's not the case when you use Docker for Mac.
Try to cd /var/lib/docker/volumes from your macOS terminal and you'll get nothing.
You see, your Mac machine isn’t a real Docker host. Docker for Mac runs a virtual machine and hides it from you to make things simple.
So, to access persistent volumes created by Docker for Mac, you need to connect to that VM.
In order to accomplish this, we need to use a serial terminal on Mac. There’s a terminal application called “screen” that’s going to help us.
We need to “screen into” the Docker driver by executing a command:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
You should see a blank screen; just press Enter, and after a while you should see a command-line prompt.
Now you're inside Docker's VM and you can cd into the volumes directory by typing: cd /var/lib/docker/volumes
Profit, you got there!
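From there you can look at the data of the volume created in the question; the _data subdirectory is where the local volume driver normally keeps the files:
ls /var/lib/docker/volumes
ls /var/lib/docker/volumes/test-data/_data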
If you need to transfer files from your macOS host into the Docker host, you can refer to File Sharing.
Hope this helps you!
If you have installed Docker using snap, then volumes are located at:
/var/snap/docker/common/var-lib-docker/volumes/
With the official Docker install, volumes are located at:
/var/lib/docker/volumes/
Normally, if you want to "know" where a volume lives, you would want to map a volume to the local filesystem. When you create a named volume you are just allocating "shared" storage. However, if you really need to know, run this command:
docker volume inspect test-data
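If you only want the path, the Mountpoint field can be extracted directly; just remember that on Docker for Mac this path exists inside the VM, not on the host:
docker volume inspect --format '{{ .Mountpoint }}' test-data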

Docker volume vs bind mount for external hdd

First-time Docker user here, running on a Raspberry Pi 3 (Hypriot OS). I have an external hdd attached to my Raspberry Pi to store all the files. The OS is on the SD card.
I am setting up many images on docker: sonarr, radarr, emby server and bittorrent client.
I have created all containers following the instructions on their Docker Hub pages, so I attached all of the folders using bind mounts (-v /some/path:/some/path).
Now the documentation says a volume is better because it doesn't rely on the host filesystem. Also, I am having problems because I want to hardlink files on my external hdd, but because I am using bind mounts, hardlinking from one mount to another on the same hdd doesn't seem to work. I think using only one bind mount should solve this, but I just want to get the config right now.
Is a volume an option for storing all the movies, or should I keep using bind mounts?
In the case of a volume, can I specify that the movies are stored on the external hdd? I have Docker installed on an SD card, but I need the movies on my external hdd.
I have used docker volume create --name something -o device=/myhddmount/ but I am not sure if this is OK, because docker volume inspect shows a mountpoint on the SD card. Also, when I create the volume, should I set -o type=ext4? Because according to the manual, ext4 doesn't have a device= option.
Thanks!
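For reference, one common way to point a named volume at a directory on an already-mounted external drive is the local driver's bind options; the path and volume name below are only examples:
docker volume create --driver local --opt type=none --opt o=bind --opt device=/mnt/externalhdd/movies movies
docker volume inspect will still report a mountpoint under /var/lib/docker/volumes, but with these options the data itself lives at the device path on the external drive.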

Shared folder in Docker. With Windows. Not only "C/user/" path

I'm new to Docker, I come from Vagrant.
I'm using Docker (1.9.1) inside my "D:/Works/something/DockerFirstTime" folder.
Now I create the machine with
docker-machine create first
and simple Dockerfile:
FROM ruby:2.2-onbuild
and simple Gemfile:
source 'https://rubygems.org'
gem 'rails'
Now, with this command, I want to use a shared folder (like in Vagrant) on the same hard drive as my Dockerfile:
docker run -it -v //d/Works/something/DockerFirstTime:/usr/src/app -w /usr/src/app ruby:2.2 bundle install
But it doesn't work.
How can I do this?
I know that Docker only shares the /c/User/folder, is that right?
How can I use the folder with my files, modify them with an editor in Windows, and then restart the server, like in a normal shell on a single PC or like in Vagrant?
This question and this question have a similar root problem: mounting a folder from a non-C:/ drive in boot2docker. I wrote an in-depth answer to the other question that provides the same information as the first half of @VonC's answer.
From Docker Docs:
All other paths come from your virtual machine's filesystem. [...] In the case of VirtualBox you need to make the host folder available as a shared folder in VirtualBox. Then, you can mount it using the Docker -v flag.
To get your folder mounted in a container:
This mounts your entire D:\ drive; you can simply change the file paths to be more granular and specific.
Share the directory with VBox:
This only needs to be done once.
In windows CMD:
VBoxManage sharedfolder add "boot2docker-vm" --name "d-share" --hostpath "D:\"
Mount the shared directory in your VM:
This will need to be done each time you restart the VM.
In the Boot2Docker VM terminal:
mount -t vboxsf -o uid=1000,gid=50 d-share /d
For sources and an explanation of how this works, see my full answer to the other, similar question.
After this you can use the -v/--volume flag in Docker to mount this folder, or any sub-folders or files, into containers. If you mounted your whole D:\ drive you can use that exact docker run command from your question and it should now work. If you mounted a specific part of your drive you will have to change the paths to match.
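For example, a more granular version of the same two steps, sharing only the project folder, might look like this (the share name is arbitrary):
In Windows CMD:
VBoxManage sharedfolder add "boot2docker-vm" --name "firsttime" --hostpath "D:\Works\something\DockerFirstTime"
In the Boot2Docker VM terminal:
mkdir -p /d/Works/something/DockerFirstTime
mount -t vboxsf -o uid=1000,gid=50 firsttime /d/Works/something/DockerFirstTime
Then the docker run command from the question works with the VM-side path:
docker run -it -v /d/Works/something/DockerFirstTime:/usr/src/app -w /usr/src/app ruby:2.2 bundle install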
To edit in Windows, run in Docker:
Also from Docker Docs:
Mounting a host directory can be useful for testing. For example, you can mount source code inside a container. Then, change the source code and see its effect on the application in real time.
As a VBox shared directory you should be able to see changes made from the Windows side reflected in the boot2docker vm.
You may need to restart containers to see the changes actually appear; this depends on how the program running inside the container, in your case Ruby, uses the files. If the files are compiled into an app when the container starts, for example, you will definitely need to restart the container to see the changes.
Note:
Beware the CR LF vs. LF line ending difference when writing files in Windows and reading them in Linux. Make sure your text editor is saving files with Unix line endings or else you may start to see errors caused by '^M' appended to the end of all your lines.
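If some files do end up with Windows line endings, they can be converted on the Linux side; the filename here is only an example, and the sed variant is a fallback for when dos2unix isn't installed:
dos2unix app.rb
sed -i 's/\r$//' app.rb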
I know that Docker only shares the /c/User/folder, is that right?
It does, and it is able to do so because the VirtualBox VM used for providing a Linux host for docker is sharing C:\Users.
For docker to see another folder, you would need to:
use VBoxManage sharedfolder add "VM name" --name "sharename" --hostpath "D:\Works"
then mount /D/Works within a VM session, as mentioned in "share windows folder (other than c/Users/) with docker container (using docker windows client)", and mentioned in boot2docker:
mount -t vboxsf -o uid=1000,gid=50 sharename /some/mount/location
The issue with that last alternative is described in "Introduction to boot2docker" (scroll down to the "Shared folders" section):
The main issue with vboxsf is that it does not do any sort of caching, so when you are attempting to share a large amount of small files (big git repos) or anything that is filesystem-read heavy (grunt), performance becomes a factor.
The best solution I have come up with so far is using vagrant with a customized version of boot2docker with NFS support enabled, which has very little “hacking” to get working which is nice.
And a good enough selling point for me is the speed increase by using NFS instead of vboxsf, it’s pretty staggering actually.
This is the project that I have been using https://vagrantcloud.com/yungsang/boxes/boot2docker.
The magic sauce in the volume sharing is in this line.
config.vm.synced_folder ".", "/vagrant", type: "nfs"
This tells Vagrant to share your current directory into the boot2docker VM at /vagrant, using NFS.
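Getting that box running follows the usual Vagrant workflow (add the config.vm.synced_folder line above to the generated Vagrantfile before booting), assuming Vagrant and VirtualBox are already installed:
vagrant init yungsang/boot2docker
vagrant up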
However, that project seems quite old and would need to be adapted in order to include the latest boot2docker.iso (docker 1.9.1).

Where are docker images stored by boot2docker?

I'm playing around with Docker on OS X (with boot2docker) and can't figure out where these images are being stored. I'm just curious.
This question answers it for Linux, where it's apparently /var/lib/docker/graph/<id>/layer.
I installed via Homebrew and poked around in /usr/local/Cellar/docker but it doesn't look like it's there. There's a boot2docker-vm.vmdk in ~/.boot2docker. Are the images actually stored in this VM?
CORRECT ANSWER BY CREACK BELOW.
THIS IS JUST SOME ELABORATION:
To get into the boot2docker VM:
boot2docker ssh
user: docker
pass: tcuser
I had to sudo su to get access to the directory
And voilà, there are the images.
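For example, once inside the VM (the exact directory depends on the Docker version and storage driver in use):
sudo ls /var/lib/docker/graph    # image layers on older Docker versions, as mentioned above
sudo ls /var/lib/docker          # newer versions keep layers under the storage driver's own directory, e.g. aufs or overlay2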
Boot2docker is a virtual machine (via VirtualBox), so the images are stored within the VirtualBox disk image (the vmdk file). Once booted by VirtualBox, it is Linux, and the images are stored at the place you stated.
You can store the images outside of the VirtualBox image by using the VirtualBox shared folder option.
I was able to use a folder on the C:\ drive for all the data that Docker needs.
To do so, you have to share a local folder from your host machine and mount it at /var/lib/docker inside the VM.
Set "Auto-Mount", but do not set "Read-Only".
