From what I can tell, Docker images are installed to /var/lib/docker as they are pulled. Is there a way to change this location, for example to a mounted volume like /mnt?
With recent versions of Docker, you would set the value of the data-root parameter to your custom path, in /etc/docker/daemon.json
(according to https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file).
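For example, a minimal /etc/docker/daemon.json could look like this (the path /mnt/docker is illustrative):
{
  "data-root": "/mnt/docker"
}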
With older versions, you can change Docker's storage base directory (where containers and images go) using the -g option when starting the Docker daemon (check docker --help).
You can have this setting applied automatically when Docker starts by adding it to /etc/default/docker
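For example, with a line like this in /etc/default/docker (the path is illustrative):
DOCKER_OPTS="-g /mnt/docker"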
Following advice from the comments, I used the Docker systemd documentation to improve this answer.
The procedure below doesn't require a reboot and is much cleaner.
First, create a directory and file for the custom configuration:
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo $EDITOR /etc/systemd/system/docker.service.d/docker-storage.conf
For Docker versions before 17.06-ce, paste:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --graph="/mnt"
For Docker 17.06-ce and later, paste:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --data-root="/mnt"
Alternative method through daemon.json
I recently tried the above procedure with 17.09-ce on Fedora 25 and it didn't seem to work. Instead, this simple modification in /etc/docker/daemon.json did the trick:
{
"graph": "/mnt",
"storage-driver": "overlay"
}
Whichever method you use, you have to reload the configuration and restart Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker
To confirm that Docker was reconfigured:
docker info | grep "loop file"
In recent versions (17.03) a different command is required:
docker info | grep "Docker Root Dir"
Output should look like this:
Data loop file: /mnt/devicemapper/devicemapper/data
Metadata loop file: /mnt/devicemapper/devicemapper/metadata
Or:
Docker Root Dir: /mnt
Then you can safely remove old Docker storage:
rm -rf /var/lib/docker
For new Docker versions we need to use data-root, as graph is deprecated in v17.05.0: official deprecation docs
Edit /etc/docker/daemon.json (if it doesn’t exist, create it) and include:
{
"data-root": "/new/path/to/docker-data"
}
Then restart Docker with:
sudo systemctl daemon-reload
sudo systemctl restart docker
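If the existing images and containers should be moved as well, copy them over before restarting Docker (a sketch, reusing the target path from above):
sudo systemctl stop docker
sudo rsync -axP /var/lib/docker/ /new/path/to/docker-data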
A more detailed step-by-step explanation (including moving data) using Docker storage with data-root can be found in: Blog post
For Windows there is a similar post: Windows specific
A much easier way to do so:
Stop docker service
sudo systemctl stop docker
Move existing docker directory to new location
sudo mv /var/lib/docker/ /path/to/new/docker/
Create symbolic link
sudo ln -s /path/to/new/docker/ /var/lib/docker
Start docker service
sudo systemctl start docker
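To double-check afterwards, docker info should still report /var/lib/docker, which now resolves through the symlink to the new location:
docker info | grep "Docker Root Dir"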
Since I haven't found correct instructions for doing this on Fedora (EDIT: people pointed out in the comments that this should also work on CentOS and SUSE), where /etc/default/docker isn't used, I'm adding my answer here:
You have to edit /etc/sysconfig/docker, and add the -g option in the OPTIONS variable. If there's more than one option, make sure you enclose them in "". In my case, that file contained:
OPTIONS=--selinux-enabled
so it would become
OPTIONS="--selinux-enabled -g /mnt"
After a restart (systemctl restart docker), Docker should use the new directory.
Don't use a symbolic link to move the Docker folder to /mnt (for example).
This may cause trouble with the docker rm command.
It's better to use the -g option for Docker.
On Ubuntu you can set it permanently in /etc/default/docker.io. Extend or replace the DOCKER_OPTS line.
Here is an example:
DOCKER_OPTS="-g /mnt/somewhere/else/docker/"
This solution works on Red Hat 7.2 & Docker 1.12.0
Edit the file /lib/systemd/system/docker.service in your text editor.
Add -g /path/to/docker/ at the end of the ExecStart directive. The complete line should look like this:
ExecStart=/usr/bin/dockerd -g /path/to/docker/
Execute the commands below:
systemctl daemon-reload
systemctl restart docker
Execute this command to check the Docker directory:
docker info | grep "loop file\|Dir"
If you have an /etc/sysconfig/docker file on Red Hat or Docker 1.7.1, check this answer.
In CentOS 6.5
service docker stop
mkdir /data/docker (new directory)
vi /etc/sysconfig/docker
add the following line:
other_args=" -g /data/docker -p /var/run/docker.pid"
then save the file and start Docker again:
service docker start
Docker will then keep its repository files in /data/docker.
Copy-and-paste version of the winning answer :)
Create this file with only this content:
$ sudo vi /etc/docker/daemon.json
{
"graph": "/my-docker-images"
}
Tested on Ubuntu 16.04.2 LTS with Docker 1.12.6.
The official way of doing this, based on the Post-installation steps for Linux guide and what I found while web-crawling, is as follows:
Override the docker service conf:
sudo systemctl edit docker.service
Add or modify the following lines, substituting your own values.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --graph="/mnt/docker"
Save the file. (It creates: /etc/systemd/system/docker.service.d/override.conf)
Reload the systemctl configuration.
sudo systemctl daemon-reload
Restart Docker.
sudo systemctl restart docker.service
After this, you can nuke the /var/lib/docker folder if you do not have any images there you care to back up.
For Debian/Ubuntu or Fedora, you can probably use the other answers. But if you don't have files under /etc/default/docker or /etc/sysconfig/docker, and your system is running systemd, you may want to follow this answer by h3nrik. I am using Arch, and this works for me.
Basically, you need to configure systemd to read the new docker image location as an environment variable, and pass that environment variable into the Docker daemon execution script.
For completeness, here is h3nrik's answer:
Do you have a /lib/systemd/system/docker.service file?
If so, edit it so that the Docker service uses the usual /etc/default/docker as an environment file: EnvironmentFile=-/etc/default/docker.
Then, in the /etc/default/docker file, add DOCKER_OPTS="-g /home/rseixas/Programs/Docker/images".
At the end just do a systemctl daemon-reload && systemctl restart docker.
For further information please also have a look at the documentation.
As recommended by @mbarthelemy, this can be done via the -g option when starting the Docker daemon directly.
However, if Docker is being started as a system service, it is not recommended to modify the /etc/default/docker file. There is a guideline on this located here.
The correct approach is to create an /etc/docker/daemon.json file on Linux (or Mac) systems or %programdata%\docker\config\daemon.json on Windows. If this file is not being used for anything else, the following fields should suffice:
{
"graph": "/docker/daemon_files"
}
This is assuming the new location where you want Docker to persist its data is /docker/daemon_files.
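As with the other daemon.json approaches above, restart the daemon for this to take effect (assuming a systemd-based system):
sudo systemctl restart docker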
A much simpler solution is to create a soft link pointing to wherever you want, such as (move the original /var/lib/docker out of the way first):
ln -s /mnt/whatever /var/lib/docker
It works for me on my CentOS 6.5 server.
I had Docker version 19.03.14. The link below helped me:
Check this Link
In the /etc/docker/daemon.json file I added the section below:
{
"data-root": "/hdd2/docker",
"storage-driver": "overlay2"
}
On openSUSE Leap 42.1
$ cat /etc/sysconfig/docker
## Path : System/Management
## Description : Extra cli switches for docker daemon
## Type : string
## Default : ""
## ServiceRestart : docker
#
DOCKER_OPTS="-g /media/data/installed/docker"
Note that DOCKER_OPTS was initially empty and all I did was add the argument to make Docker use my new directory.
On Fedora 26, and probably many other versions, you may encounter an error after moving your base folder location as described above. This is particularly true if you are moving it to somewhere under /home, because SELinux kicks in and prevents the Docker container from running many of its programs from under this location.
The short solution is to remove the --enable-selinux option when you add the -g parameter.
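If you'd rather keep SELinux enabled, an alternative (a sketch, assuming the new root is /home/user/docker and the semanage tool is installed) is to give the new location the same labels as the default one:
sudo semanage fcontext -a -e /var/lib/docker /home/user/docker
sudo restorecon -R /home/user/docker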
On an AWS Ubuntu 16.04 server I put the Docker images on a separate EBS volume, mounted on /home/ubuntu/kaggle/, under the docker dir.
This snippet of my initialization script worked correctly:
# where are the images initially stored?
sudo docker info | grep "Root Dir"
# ... not where I want them
# modify the configuration files to change to image location
# NOTE this generates an error
# WARNING: Usage of loopback devices is strongly discouraged for production use.
# Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
# see https://stackoverflow.com/questions/31620825/
# warning-of-usage-of-loopback-devices-is-strongly-discouraged-for-production-use
sudo sed -i 's|#DOCKER_OPTS=.*|DOCKER_OPTS="-g /home/ubuntu/kaggle/docker"|' /etc/default/docker
sudo chmod -R ugo+rw /lib/systemd/system/docker.service
sudo cp /lib/systemd/system/docker.service /etc/systemd/system/
sudo chmod -R ugo+rw /etc/systemd/system/
sudo sed -i ' s#ExecStart.*#ExecStart=/usr/bin/dockerd $DOCKER_OPTS -H fd://# ' /etc/systemd/system/docker.service
sudo sed -i '/ExecStart/a EnvironmentFile=-/etc/default/docker' /etc/systemd/system/docker.service
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo docker info | grep "Root Dir"
# now they're where I want them
For Mac users on version 17.06.0-ce-mac19, you can simply move the disk image location from the user interface: in Preferences, change the location of the disk image (by clicking Move disk image) and restart Docker. Using this approach I was able to use my external hard disk for storing Docker images.
For those looking in 2020, the following is for a Windows 10 machine:
In the global Actions pane of Hyper-V Manager, click Hyper-V Settings…
Under Virtual Hard Disks, change the location from the default to your desired location.
Under Virtual Machines, change the location from the default to your desired location, and click Apply.
Click OK to close the Hyper-V Settings page.
This blog post helped me.
Here are the steps to change the directory even after you’ve created Docker containers etc.
Note, you don’t need to edit docker.service or init.d files, as it will read the change from the .json file mentioned below.
Edit /etc/docker/daemon.json (if it doesn't exist, create it)
Add the following
{
"data-root": "/new/path/to/docker-data"
}
Stop docker
sudo systemctl stop docker
Check docker has been stopped
ps aux | grep -i docker | grep -v grep
Copy the files to the new location
sudo rsync -axPS /var/lib/docker/ /new/path/to/docker-data
Start Docker back up
sudo systemctl start docker
Check Docker has started up using the new location
docker info | grep 'Docker Root Dir'
Check everything has started up that should be running
docker ps
Leave both copies on the server for a few days to make sure no issues arise, then feel free to delete the old one:
sudo rm -r /var/lib/docker
Related
I'm trying to change the default data folder for Docker images, containers, etc. to a different path. The snap installation of Docker keeps this folder at /var/snap/docker/common/var-lib-docker.
Theoretically I could change that with the data-root option in daemon.json. But if I change daemon.json, adding "data-root": "/home/user/docker", Docker won't start due to a conflict with flags (which always include the previously described default path).
I can start Docker with my custom path if I stop it and then start it like this: sudo snap start docker.dockerd --data-root=/home/user/docker. That is not pretty, but it works. Is there a way to change the Docker snap's flags on startup, or to make it prefer the daemon.json options?
I've read this archived post, which treats the same issue on Docker version 17, but it didn't help much, just like several other materials I found online. It seems that a symbolic link may be a way, though...
I'm using docker 19.03.11, snap installed on Ubuntu 20.04.
P.S.: The new path is on a second HDD mounted as my home directory. Changing the path will save space on my system SSD.
Thanks for your attention.
From https://github.com/docker-snap/docker-snap/issues/3 and https://askubuntu.com/questions/550348/how-to-make-mount-bind-permanent, the not-perfect-but-working solution seems to be a bind mount between /var/snap/docker/common/var-lib-docker and /home/username/docker, which is the previous Docker data-root I had before installing Docker with snap.
So first, clear the data-root option in daemon.json.
Then add the bind mount at the end of /etc/fstab with the following command:
echo '/home/username/docker /var/snap/docker/common/var-lib-docker none bind' | sudo tee -a /etc/fstab
After a reboot, your Docker data root will be stored in /home/username/docker.
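To start using the bind mount right away instead of rebooting (same paths as in the fstab line):
sudo snap stop docker
sudo mount --bind /home/username/docker /var/snap/docker/common/var-lib-docker
sudo snap start docker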
I ran out of space on an Ubuntu VirtualBox VM and had to do the following:
Stop the VM and create a new Fixed Volume
Start the VM and make sure the new volume was mounted
Stop the docker service
sudo systemctl stop docker.service
sudo systemctl stop docker.socket
Copy /var/lib/docker to new volume
sudo rsync -aqxP /var/lib/docker/ /media/username/spare\ disk/docker/
Update /etc/docker/daemon.json
{
"data-root": "/media/username/spare disk/docker",
"storage-driver": "overlay2"
}
Reload systemd and start docker service
sudo systemctl daemon-reload
sudo systemctl start docker
See: https://docs.docker.com/config/daemon/systemd/#runtime-directory-and-storage-driver
I am using Docker 18.09 and I am trying to build some images for my work. The problem is that the images always end up inside the root directory, specifically under /var/lib/docker/overlay2. As there isn't enough space in my root directory, I want to change this default directory to my other disk, but none of the solutions I have found on the internet have worked for me.
I have already gone through these, but none of them are working:
https://forums.docker.com/t/how-do-i-change-the-docker-image-installation-directory/1169
https://medium.com/@ibrahimgunduz34/how-to-change-docker-data-folder-configuration-33d372669056
How to change the docker image installation directory?
The default directory for storing Docker-related data (containers, images and so on) is /var/lib/docker.
To override this default location, use the -g option when starting the Docker daemon:
dockerd -g /mnt/path/to/docker/dir
In your case, the best option is to attach some external storage to the machine at some mountpoint, and pass that mountpoint to the -g option.
Hope this helps.
Update:
The -g option is deprecated. Use the --data-root option instead. Check this.
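The equivalent of the command above with the newer flag:
dockerd --data-root /mnt/path/to/docker/dir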
I also faced a similar issue with containerd used with RKE2. I am using RKE2 with containerd, and it stores all the images under /var/lib/rancher/rke2, which kept filling up the VM's / partition.
I wanted to move the containerd root directory to a custom directory.
I changed the RKE2 agent start command in the RKE2 service file and it worked.
I created a new filesystem at /containerdata/containerd and configured the RKE2 service to point to this directory for containerd data.
Change /usr/local/lib/systemd/system/rke2-agent.service:
ExecStart=/usr/local/bin/rke2 agent --data-dir /containerdata/containerd/
Reload and restart rke2-agent.service:
systemctl daemon-reload
systemctl restart rke2-agent.service
This may cause pods to be unstable, but the system stabilizes over time.
I used the following technique when I too ran into this problem and using a different filesystem with a larger size was the only option left.
Stop the docker service.
sudo systemctl stop docker.service
Edit the docker.service file to include the new directory in the ExecStart line, as below.
ExecStart=/usr/bin/dockerd -g /u01/docker -H fd:// --containerd=/run/containerd/containerd.sock
Reload the daemon:
sudo systemctl daemon-reload
Restart docker
sudo systemctl start docker.service
Note: the docker.service file path is shown when you run systemctl stop docker.service.
A much simpler and more straightforward way out of this problem is to simply create a symbolic link to the new path once you move the existing folder, like below (the /u01 path matches the example above; adjust it to your own target):
sudo systemctl stop docker.service
sudo mv /var/lib/docker/ /u01/
sudo ln -s /u01/docker /var/lib/docker
sudo systemctl start docker.service
Purpose - What do I want to achieve?
I want to access systemctl from inside a container running on a Kubernetes node (AMI: running Debian Stretch).
Working setup:
Node AMI: kope.io/k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17
Node Directories Mounted in the container to make systemctl work:
/var/run/dbus
/run/systemd
/bin/systemctl
/etc/systemd/system
Non-working setup:
Node AMI: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
Node Directories Mounted in the container to make systemctl work:
/var/run/dbus
/run/systemd
/bin/systemctl
/etc/systemd/system
Debugging in an attempt to solve the problem
To debug this issue of the debian-stretch image not supporting systemctl with the same mounts as debian-jessie:
1) I began by spinning up an nginx deployment, mounting the above-mentioned volumes in it:
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
kubectl exec -it nginx-deployment /bin/bash
root@nginx-deployment-788f65877d-pzzrn:/# systemctl
systemctl: error while loading shared libraries: libsystemd-shared-232.so: cannot open shared object file: No such file or directory
2) As the above error showed, the file libsystemd-shared-232.so was not found. I found its actual path by looking on the node:
admin@ip-10-0-20-11:~$ sudo find / -iname 'libsystemd-shared-232.so'
/lib/systemd/libsystemd-shared-232.so
3) Mounted /lib/systemd in the nginx pod and ran systemctl again:
kubectl exec -it nginx-deployment /bin/bash
root@nginx-deployment-587d866f54-ghfll:/# systemctl
systemctl: error while loading shared libraries: libcap.so.2: cannot open shared object file: No such file or directory
4) Now systemctl was failing with a new missing-library error:
root@nginx-deployment-587d866f54-ghfll:/# systemctl
systemctl: error while loading shared libraries: libcap.so.2: cannot open shared object file: No such file or directory
5) To solve the above error, I again searched the node for libcap.so.2 and found it at the path below:
admin@ip-10-0-20-11:~$ sudo find / -iname 'libcap.so.2'
/lib/x86_64-linux-gnu/libcap.so.2
6) Seeing that the above directory was not mounted in my pod, I mounted the path below in the nginx pod:
/lib/x86_64-linux-gnu mounted in the nginx pod (deployment)
7) The nginx pod is not able to come up after adding the above mount. It fails with the error below:
$ k logs nginx-deployment-f9c5ff956-b9wn5
standard_init_linux.go:178: exec user process caused "no such file or directory"
Please suggest how to debug further, and what mounts are required to make systemctl work from inside a container in a Debian Stretch environment.
Any pointers for taking the debugging further would be helpful.
Rather than mounting some of the library files from the host, you can just install systemd in the container.
$ apt-get -y install systemd
Now, that won't necessarily make systemctl run. You will need systemd to be running in your container, which is spawned by /sbin/init on your system. /sbin/init needs to run as root, so essentially you would have to run this with the privileged flag in the pod or container security context on Kubernetes. Now, this is insecure, and there is a long history about running systemd in a container where the Docker folks were mostly against it (security) and the Red Hat folks said that it was needed.
Nevertheless, the Red Hat folks figured out a way to make it work without the privileged flag. You need:
/run mounted as a tmpfs in the container.
/sys/fs/cgroup mounted as read-only is ok.
/sys/fs/cgroup/systemd/ mounted as read/write.
Use STOPSIGNAL SIGRTMIN+3.
In Kubernetes you need an emptyDir to mount a tmpfs. The others can be mounted as host volumes.
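For plain Docker (outside Kubernetes), a run command honoring those requirements might look like the sketch below; the image name is a placeholder, and the image must contain systemd so that /sbin/init exists:
docker run -d --stopsignal SIGRTMIN+3 \
  --tmpfs /run \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  -v /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
  my-systemd-image /sbin/init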
After mounting your host's /lib directory into the container, your Pod most probably will not start, because the Docker image's /lib directory contained libraries needed by the Nginx server that should start in that container. By mounting /lib from the host, the libraries required by Nginx are no longer accessible. This results in a "no such file or directory" error when trying to start Nginx.
To make systemctl available from within the container, I would suggest simply installing it within the container, instead of trying to mount the required binaries and libraries into the container. This can be done in your container's Dockerfile:
FROM whatever
RUN apt-get update && apt-get install -y systemd
No need to mount /bin/systemd or /lib/ with this solution.
I had a similar problem where one of the lines in my Dockerfile was:
RUN apt-get install -y --reinstall systemd
but after restarting Docker, when I tried to use the systemctl command, the output was:
Failed to connect to bus: No such file or directory.
I solved this issue by adding following to my docker-compose.yml:
volumes:
- "/sys/fs/cgroup:/sys/fs/cgroup:ro"
It can also be done by:
sudo docker run -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro {other options}
Unfortunately, I can't use Docker behind my proxy. I did what Google searches suggested, and this is the error I get when I run sudo docker run hello-world:
Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: Proxy Authentication Required.
See 'docker run --help'.
This is my /etc/systemd/system/docker.service.d/http-proxy.conf file:
[Service]
Environment="HTTP_PROXY=http://user:pass#127.0.0.1:8800/"
Environment="HTTPS_PROXY=https://user:pass#127.0.0.1:8800/"
my "etc/default/docker" file :
export http_proxy="http://127.0.0.1:3128/"
export https_proxy="http://127.0.0.1:3128/"
export HTTP_PROXY="http://127.0.0.1:3128/"
export HTTPS_PROXY="http://127.0.0.1:3128/"
What is the problem?
Thank you :)
Try this:
$ sudo vim /etc/resolv.conf
# add these lines at the top, above the one for your home router
nameserver 8.8.8.8
nameserver 8.8.4.4
After saving the /etc/resolv.conf file, run:
$ sudo systemctl daemon-reload
to reload the daemon process.
Then restart Docker:
$ sudo systemctl restart docker
Docker is not available in some countries because of unfair US sanctions, which target ordinary people and startups directly...
Anyway, you can use a Docker registry mirror instead of Docker Hub.
But for creating images and containers from microservices and projects, and running them locally, check this Link.
All the best :)
I'm trying to configure Docker (version 17.03.1-ce) on Ubuntu 16.04 using the configuration file /etc/docker/daemon.json to add a host:
{
"debug": true,
"hosts": ["tcp://0.0.0.0:1234", "unix:///var/run/docker.sock"],
"dns" : ["8.8.8.8","8.8.4.4"]
}
When I try to restart Docker, it fails:
# service docker restart
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Looking at systemctl status docker.service:
Starting Docker Application Container Engine...
docker-slave-ubuntu-build dockerd[24806]: unable to configure the Docker daemon with file /etc/docker/daemon.json:
the following directives are specified both as a flag and in the configuration file:
hosts: (from flag: [fd://], from file: [tcp://0.0.0.0:4243 unix:///var/run/docker.sock])
Where can I remove the mentioned flag? Do I have to modify the maintainer's script?
For systemd, my preferred method is to deploy a simple override file (you may need to first create the directory):
$ cat /etc/systemd/system/docker.service.d/override.conf
# Disable flags to dockerd, all settings are done in /etc/docker/daemon.json
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
This removes the default -H ... flag from dockerd, along with any other options, and lets you manage Docker from the daemon.json file. It also means Docker can make changes to their startup scripts, and as long as they don't modify the ExecStart, you'll continue to receive those changes without maintaining your own copy of docker.service.
After creating this file, run systemctl daemon-reload; systemctl restart docker.
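To confirm the override is in place, you can print the unit together with its drop-ins:
systemctl cat docker.service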
It looks like this is an issue merging configuration from both the command line and configuration file. The default systemd unit file is specifying -H fd:// and it conflicts with your tcp://0.0.0.0:1234 and unix:///var/run/docker.sock.
There are a number of GitHub issues on the subject:
https://github.com/moby/moby/issues/22339
https://github.com/moby/moby/issues/21559
https://github.com/moby/moby/issues/25471
https://github.com/moby/moby/pull/27473
They don't seem to consider this a bug. But it is definitely an annoyance. A workaround is to copy the default unit file and remove the -H fd:// from it:
$ sudo cp /lib/systemd/system/docker.service /etc/systemd/system/
$ sudo sed -i 's/\ -H\ fd:\/\///g' /etc/systemd/system/docker.service
$ sudo systemctl daemon-reload
$ sudo service docker restart
I found this in the Docker docs and it worked on Docker 18.09.1 and CentOS 8:
To work around this problem, create a new file /etc/systemd/system/docker.service.d/docker.conf with the following contents, to remove the -H argument that is used when starting the daemon by default.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
Then reload
systemctl daemon-reload
The reason is:
Docker listens on a socket by default. On Debian and Ubuntu systems using systemd, this means that a host flag -H is always used when starting dockerd. If you specify a hosts entry in the daemon.json, this causes a configuration conflict (as in the above message) and Docker fails to start.
Here is the link: https://docs.docker.com/config/daemon/#troubleshoot-conflicts-between-the-daemonjson-and-startup-scripts
In my case, I tried to add both a daemon.json under /etc/docker and a *.conf file under /etc/systemd/system/docker.service.d.
It turned out it was enough to have a .conf file only (in my case called override.conf):
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375
This way I could expose the Docker socket.
I had copied the daemon.json from a website. After running:
sudo systemctl stop docker
/usr/sbin/dockerd
it showed me a better error message, stating that I had a strange invisible character in the daemon.json file.
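A quick way to catch this kind of problem before restarting the daemon is to validate the file, for example:
python3 -m json.tool /etc/docker/daemon.json
cat -A /etc/docker/daemon.json
The first command fails on invalid JSON; the second (GNU cat) makes non-printing characters visible.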