How do I change the default docker container location? [duplicate] - docker

This question already has answers here:
How to change the docker image installation directory?
(20 answers)
Closed 2 years ago.
When I run Docker, downloaded Docker images seem to be stored somewhere under /var/lib/docker.
Since disk space is limited on that partition, and I'm provisioning Docker to multiple machines at once, is there a way to change this default location, e.g. to /mnt/hugedrive/docker/?

Working solution as of Docker v18.03
I found @Alfabravo's comment to work in my situation, so credit to them (and upvoted).
However, I think it adds value to elaborate on it in an answer here:
Ensure Docker is stopped (or not started in the first place, e.g. if you've just installed it).
As root:
systemctl stop docker
(or sudo systemctl stop docker if you're not root but your user is a sudoer, i.e. belongs to the sudo group)
By default, the daemon.json file does not exist, because it is optional - it is added to override the defaults. (Reference - see the answer to: Where's docker's daemon.json? (missing))
So new installs of Docker, and setups that have never modified it, won't have it, so create it:
vi /etc/docker/daemon.json
Add the following to tell Docker to put all its files in this folder, e.g.:
{
"graph":"/mnt/cryptfs/docker"
}
and save.
(Note: According to stackoverflow user Alireza Mohamadi's comment beneath this answer on May 11 5:01: "graph option is deprecated in v17.05.0. Use data-root instead." - I haven't tried this myself yet but will update the answer when I have)
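For reference, the equivalent configuration using the newer data-root key (which I have not tested here, but which is what the current Docker documentation describes) would be:
{
"data-root":"/mnt/cryptfs/docker"
}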
Now start docker:
systemctl start docker
(as root, or prefixed with sudo otherwise)
And you will find that docker has now put all its files in the new location, in my case, under: /mnt/cryptfs/docker.
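(A quick way to check this, not part of the original steps: ask the daemon where its root directory now is.)
docker info --format '{{ .DockerRootDir }}'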
This answer from @Alfabravo is also supported by this answer to the problem: Docker daemon flags ignored
Notes and thoughts on Docker versioning
My host platform that is running docker is Ubuntu Linux 16.04.4 LTS 64bit.
I would therefore expect this solution to apply from the version current at the time of writing, v18.03, onwards. As with the other answers, there is the possibility that it stops working in some future version of Docker, if the Docker developers decide to change things in this area. But for now it works with v18.03, at least in my case, and I hope you also find it works for you.
Optional Housekeeping tip:
If you had files in the original location /var/lib/docker and you are certain you no longer need them (i.e. all the data they contain, such as databases inside containers and files, is backed up or available in another form), you can delete them to keep your machine tidy.
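A sketch of that cleanup (destructive, so only run it once you have confirmed Docker is happily using the new location and you are certain the old data is no longer needed):
# as root, after verifying the new location is in use
rm -rf /var/lib/docker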
What did NOT work - other answers here (unfortunately):
Other solutions here did not work for my situation with the version of Docker I am using (at the time of writing: v18.03).
Also note (as @AlfaBravo correctly points out in their comment to my answer) that the other answers may well have worked for different or earlier versions of Docker.
I should note that my host platform is Ubuntu Linux 16.04.4 LTS 64bit.
In all cases when attempting the other answers, I followed the process of stopping Docker before applying the solution and starting it up afterwards, as required:
https://stackoverflow.com/a/47604857/227926 - @Gerald Sabu M's solution to alter /lib/systemd/system/docker.service, changing the line to: ExecStart=/usr/bin/docker daemon -g /mnt/hugedrive/docker/ - Outcome for me: Docker still put its files in the default, original location: /var/lib/docker
I tried @Fai's comment, but the file /etc/systemd/system/docker.service.d/exec_start.conf does not exist on my system, so it would be something particular to their setup.
I also tried @Hatem Jaber's answer https://stackoverflow.com/a/32072042/227926 - but again, as with @Gerald Sabu M's answer, Docker still puts the files in the original default location of /var/lib/docker.
(I would of course like to thank them for their efforts, though).
Why I am changing the default docker location: encrypted file system for GDPR purposes:
As an aside, and perhaps useful to you, I'm running Docker inside an encrypted file system (as part of a GDPR initiative) in order to provide encryption for the Data-at-Rest state (also known as Encryption-at-Rest) and also to help with Data-in-Use (definitions).
The process of defining a GDPR data map includes, among many other things, looking at the systems where the sensitive data is stored (Reference 1: GDPR Data Map Template: An easy to use self-assessment tool for understanding how data moves through your organisation) (Reference 2: Data mapping: Where to start for GDPR compliance). By encrypting the filesystem where the database and application code are stored, as well as the swap file, the risk of residual data being left behind when deleting or moving a VM can be eliminated.
I've made use of some of the steps defined in the following links, credit to them:
Encrypting Docker containers on a Virtual Server
How To: Linux Hard Disk Encryption With LUKS [ cryptsetup Command ]
I would note that a further step of encryption is recommended: encrypting the database fields themselves - at least the sensitive fields, i.e. user data. You can probably find out about the various levels of support for this in the implementations of popular database systems. Field encryption provides defence against malicious intrusion and leakage of data while the web application is running.
Also, as another aside: to cover the 'Data-in-Motion' state of data, I am using free Let's Encrypt TLS certificates.

The best solution would be to start the docker daemon (dockerd) with a correct data root path. According to the official documentation, as of Feb 2019 there are no --graph or -g options any more; they were replaced by the single argument --data-root.
https://docs.docker.com/engine/reference/commandline/dockerd/
So you should modify your /lib/systemd/system/docker.service so that the ExecStart line includes that argument.
An example could be:
ExecStart=/usr/bin/dockerd --data-root /mnt/data/docker -H fd://
Then you should restart your docker daemon. (Keep in mind that you will no longer see your containers and images; copy the data from your old folder to the new one if you want to keep everything.)
service docker restart
Keep in mind that if you restart the docker daemon your containers will be stopped, and only those with a correct restart policy will be restarted.
Tested on Ubuntu 16.04.5 Docker version 18.09.1, build 4c52b90
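Note that /lib/systemd/system/docker.service can be overwritten by package upgrades. A sketch of an alternative that survives upgrades (assuming systemd manages Docker, and reusing the same example path) is a drop-in override:
# /etc/systemd/system/docker.service.d/override.conf
[Service]
# the empty ExecStart= clears the original command before redefining it
ExecStart=
ExecStart=/usr/bin/dockerd --data-root /mnt/data/docker -H fd://
Then reload systemd and restart the daemon:
systemctl daemon-reload
systemctl restart docker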

You can start the Docker daemon using the -g option and the directory of your choice. This sets the root of the Docker runtime.
With version 1.8, it should be something like:
docker daemon -g /path/to/directory
With earlier versions, it would be:
docker -d -g /path/to/directory
From the man page:
-g, --graph=""
Path to use as the root of the Docker runtime. Default is /var/lib/docker.

You can perform the following steps to modify the default Docker image location, i.e. /var/lib/docker:
Stop Docker
# systemctl stop docker
# systemctl daemon-reload
Change the ExecStart line in /lib/systemd/system/docker.service as follows.
FROM:
ExecStart=/usr/bin/dockerd
TO:
ExecStart=/usr/bin/docker daemon -g /mnt/hugedrive/docker/
Create a new directory and rsync the current Docker data to the new directory:
# mkdir /mnt/hugedrive/docker/
# rsync -aqxP /var/lib/docker/ /mnt/hugedrive/docker/
Now the Docker daemon can be started safely:
# systemctl start docker
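(A quick sanity check, not part of the original steps: the images and containers that existed before the move should still be listed.)
docker images
docker ps -a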

In /etc/default/docker, or whatever location it exists in on your system, change the following to something like this:
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.8.4 -g /drive/location"
If you have issues and it is ignored, apply this solution: Docker Opts in Etc Default Docker Ignored
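On systemd-based hosts, DOCKER_OPTS in /etc/default/docker is ignored unless the unit file actually references it. A sketch of the kind of drop-in the linked solution describes (file name and flags are illustrative, not canonical):
# /etc/systemd/system/docker.service.d/docker-opts.conf
[Service]
# load DOCKER_OPTS and pass it to dockerd; the leading '-' tolerates a missing file
EnvironmentFile=-/etc/default/docker
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS
Then: systemctl daemon-reload && systemctl restart docker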

Related

Docker local registry - Image naming [duplicate]

By default, if I issue command:
sudo docker pull ruby:2.2.1
it will pull from the docker.io official site by default.
Pulling repository docker.io/library/ruby
How do I change it to my private registry? That means if I issue
sudo docker pull ruby:2.2.1
it will pull from my own private registry, the output is something like:
Pulling repository my_private.registry:port/library/ruby
UPDATE: Following your comment, it is not currently possible to change the default registry, see this issue for more info.
You should be able to do this, substituting the host and port for your own:
docker pull localhost:5000/registry-demo
If the server is remote/has auth you may need to log into the server with:
docker login https://<YOUR-DOMAIN>:8080
Then running:
docker pull <YOUR-DOMAIN>:8080/test-image
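If the goal is simply to use your own registry without changing the default, the usual pattern (a sketch reusing the names from the question) is to pull, retag and push:
docker pull ruby:2.2.1
docker tag ruby:2.2.1 my_private.registry:port/library/ruby:2.2.1
docker push my_private.registry:port/library/ruby:2.2.1
# from then on, pull explicitly from the private registry
docker pull my_private.registry:port/library/ruby:2.2.1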
There is the use case of a mirror of Docker Hub (such as Artifactory or a custom one), which I haven't seen mentioned here. This is one of the most valid cases where changing the default registry is needed.
Luckily, Docker (at least version 19.03.3) allows you to set a mirror (tested in Docker CE). I don't know if this will work with additional images pushed to that mirror that aren't on Docker Hub, but I do know it will use the mirror instead. Docker documentation: https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon.
Essentially, you need to add "registry-mirrors": [] to the /etc/docker/daemon.json configuration file. So if you have a mirror hosted at https://my-docker-repo.my.company.com, your /etc/docker/daemon.json should contain:
{
"registry-mirrors": ["https://my-docker-repo-mirror.my.company.com"]
}
Afterwards, restart the Docker daemon. Now if you do a docker pull postgres:12, Docker should fetch the image from the mirror instead of directly from Docker Hub. This is much better than prepending all image names with my-docker-repo.my.company.com.
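A quick way to confirm the mirror is active after the restart (not part of the original answer):
sudo systemctl restart docker
docker info | grep -A1 "Registry Mirrors"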
It turns out this is actually possible, but not using the genuine Docker CE or EE version.
You can either use Red Hat's fork of docker with the '--add-registry' flag or you can build docker from source yourself with registry/config.go modified to use your own hard-coded default registry namespace/index.
The short answer to this is you don't, or at least you really shouldn't.
Yes, there are some container runtimes that allow you to change the default namespace, specifically those from RedHat. However, RedHat now regrets this functionality and discourages customers from using it. Docker has also refused to support this.
The reason this is so problematic is because it results in an ambiguous namespace of images. The same command run on two different machines could pull different images depending on what registry they are configured to use. Since compose files, helm templates, and other ways of running containers are shared between machines, this actually introduces a security vulnerability.
An attacker could squat on well known image names in registries other than Docker Hub with the hopes that a user may change their default configuration and accidentally run their image instead of the one from Hub. It would be trivial to create a fork of a tool like Jenkins, push the image to other registries, but with some code that sends all the credentials loaded into Jenkins out to an attacker server. We've even seen this causing security vulnerability reports this year for other package managers like PyPI, NPM, and RubyGems.
Instead, the direction of container runtimes like containerd is to make all image names fully qualified, removing the Docker Hub automatic expansion (tooling on top of containerd like Docker still apply the default expansion, so I doubt this is going away any time soon, if ever).
Docker does allow you to define registry mirrors for Docker Hub that it will query first before querying Hub, however this assumes everything is still within the same namespace and the mirror is just a copy of upstream images, not a different namespace of images. The TL;DR on how to set that up is the following in the /etc/docker/daemon.json and then systemctl reload docker:
{
"registry-mirrors": ["https://<my-docker-mirror-host>"]
}
For most, this is a non-issue (the issue to me is that the docker engine doesn't have an option to mirror non-Hub registries). The image name is defined in a configuration file, or a script, and so typing it once in that file is easy enough. And with tooling like compose files and Helm templates, the registry can be turned into a variable to allow organizations to explicitly pull images for their deploy from a configurable registry name.
If you are using the Fedora distro, you can change the file
/etc/containers/registries.conf
adding the docker.io domain.
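A sketch of what that might look like (the exact keys depend on the containers-registries.conf version shipped with your distro, and the private registry host here is only an example):
# /etc/containers/registries.conf
unqualified-search-registries = ["my-private-registry.local:5000", "docker.io"]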
Docker's official position is explained in issue #11815:
Issue 11815: Allow to specify default registries used in pull command
Resolution:
Like pointed out earlier (#11815), this would fragment the namespace, and hurt the community pretty badly, making dockerfiles no longer portable.
[the Maintainer] will close this for this reason.
Red Hat had a specific implementation that allowed it (see answer), but it was refused by the Docker upstream project. It relied on the --add-registry argument, which was set in /etc/containers/registries.conf on RHEL/CentOS 7.
EDIT:
Actually, Docker supports registry mirrors (also known as "Run a Registry as a pull-through cache").
https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon
It seems it won't be supported due to the fragmentation it would create within the community (i.e. two users would get different images pulling ubuntu:latest). You simply have to add the host in front of the image name. See this github issue to join the discussion.
(Note, this is not intended as an opinionated comment, just a very short summary of the discussion that can be followed in the mentioned github issue.)
I tried adding the following options to /etc/docker/daemon.json.
(I used CentOS 7)
"add-registry": ["192.168.100.100:5001"],
"block-registry": ["docker.io"],
After that, I restarted the docker daemon, and it is working without docker.io.
I hope this will be helpful to someone.
Earlier this could be achieved using DOCKER_OPTS in the /etc/default/docker config file, which worked on Ubuntu 14.04 and had some issues on Ubuntu 15.04. Not sure if this has been fixed.
The line below needs to go into the file /etc/default/docker on the host that runs the docker daemon. The change points to the private registry installed on your local network. Note: you will need to restart the docker service after making this change.
DOCKER_OPTS="--insecure-registry <priv registry hostname/ip>:<port>"
I'm adding to the original answer given by Guy, which is still valid today (soon 2020).
Overriding the default docker registry, like you would with Maven, is actually not a good practice.
When using Maven, you pull artifacts from the Maven Central Repository through your local repository management system, which acts as a proxy. These artifacts are plain, raw libs (jars), and it is quite unlikely that you will push jars with the same name.
On the other hand, docker images are fully operational, runnable environments, and it makes total sense to pull an image from Docker Hub, modify it and push it to your local registry management system with the same name, because it is exactly what its name says it is, just in your enterprise context. In this case, the only distinction between the two images would be precisely their path!
Hence the need for the following rule: the prefix of an image indicates its origin; by default, if an image does not have a prefix, it is pulled from Docker Hub.
I didn't see an answer for macOS, so I want to add it here.
Two methods:
Option 1 (through the Docker Desktop GUI):
Preferences -> Docker Engine -> edit the file -> Apply & Restart
Option 2:
Directly edit the file ~/.docker/daemon.json
Haven't tried, but maybe hijacking the DNS resolution process by adding a line in /etc/hosts for hub.docker.com or something similar (docker.io?) could work?

is docker has config to replace image`s repository [duplicate]


How to pull image using HELM without URL [duplicate]


How do pass in DOCKER_OPTS into docker image running from Docker for Mac?

My root problem is that I need to support a local docker registry, self-signed certs and whatnot, and after upgrading to Docker for Mac, I haven't quite been able to figure out how to pass in options, or persist options, in the docker/alpine image running via the new and shiny xhyve that got installed with Docker for Mac.
I do have the functional piece of my problem solved, but it's very manual:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
log in as root
vi /etc/init.d/docker
Append --insecure-registry foo.local.machine:5000 to DOCKER_OPTS; write file; quit vi.
/etc/init.d/docker restart
Now, if Docker is restarted from the perspective of the main OS / OS X -- say by a simple reboot of the computer -- this change and option are of course lost, and I have to go through the process again.
So, what can I do to automate this?
Am I missing where DOCKER_OPTS may be set? The /etc/init.d/docker file, internally, doesn't overwrite the env var, it appends to it, so this seems like it should be possible.
Am I missing where files may be persisted in the new docker image? I admit I'm not as familiar with it as with the older image, which I believe was boot2docker based, where I could have a persisted volume attached and an entry point from which to start these modifications.
Thank you for any help, assistance, and inspiration.
Go to the Docker Preferences (you can find the icon in the main panel), then:
Advanced -> Insecure docker registry
(Advanced settings screenshots)
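That GUI field corresponds to the insecure-registries key in the daemon configuration. A minimal sketch, assuming you go via Preferences -> Docker Engine (or ~/.docker/daemon.json) and using the registry named in the question:
{
"insecure-registries": ["foo.local.machine:5000"]
}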

How to make docker image of host operating system which is running docker itself?

I started using Docker and I can say, it is a great concept.
Everything is going fine so far.
I installed Docker on Ubuntu (my host operating system), played with images from the repository and made new images.
Question:
I want to make an image of the current (host) operating system. How shall I achieve this using Docker itself?
I am new to docker, so please ignore any silly things in my questions, if any.
I was doing maintenance on a server, one of the ones we pray never crash, and I came across a situation where I had to replace sendmail with postfix.
I could not stop the server, nor use the image available on Docker Hub, because I needed to be completely sure I would not have problems. That's why I wanted to make an image of the server.
I got to this thread and from it found ways to reproduce the procedure. Below is a description of it.
We start by building a tar file of the entire filesystem of the machine we want to clone, excluding some unnecessary and hardware-dependent directories (OK, it may not be as perfect as I intended, but it seems fine to me; you'll need to try whatever works for you), as pointed out by @Thomasleveil in this thread.
$ sudo su -
# cd /
# tar -cpzf backup.tar.gz --exclude=/backup.tar.gz --exclude=/proc --exclude=/tmp --exclude=/mnt --exclude=/dev --exclude=/sys /
Then just download the file to your machine, import the tar.gz as an image into Docker and initialize the container. Note that in the example I put the generation date (year-month-day) of the image as its tag when importing the file.
$ scp user@server-uri:path_to_file/backup.tar.gz .
$ cat backup.tar.gz | docker import - imageName:20190825
$ docker run -t -i imageName:20190825 /bin/bash
IMPORTANT: This procedure generates a completely identical image, so if you will use the generated image to distribute between developers, testers and whoever else, it is of great importance that you remove from it, or change, any reference containing restricted passwords, keys or users, to avoid security breaches.
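One way to act on this while building the archive (a sketch with example paths only; adjust it to whatever secrets your server actually holds) is to extend the tar excludes:
# exclude host keys and credentials from the archive (paths are examples)
tar -cpzf backup.tar.gz --exclude=/backup.tar.gz --exclude=/proc --exclude=/tmp \
    --exclude=/mnt --exclude=/dev --exclude=/sys \
    --exclude='/etc/ssh/ssh_host_*' --exclude=/root/.ssh /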
I'm not sure I understand why you would want to do such a thing, but that is not the point of your question, so here's how to create a new Docker image from nothing:
If you can come up with a tar file of your current operating system, then you can create a new docker image of it with the docker import command.
cat my_host_filesystem.tar | docker import - myhost
where myhost is the docker image name you want and my_host_filesystem.tar the archive file of your OS file system.
Also take a look at Docker, start image from scratch from superuser and this answer from stackoverflow.
If you want to learn more about this, searching for docker "from scratch" is a good starting point.
