I'm currently running a k8s cluster, but occasionally I run into storage issues. The following error pops up:
Failed create pod sandbox: rpc error: code = Unknown desc = failed to create a sandbox for pod "<web app>": Error response from daemon: devmapper: Thin Pool has 6500 free data blocks which is less than minimum required 7781 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior
I can resolve this by manually running docker ps -a -f status=exited -q | xargs -r docker rm -v
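In the meantime a cron entry on each node could automate that cleanup; this is only a sketch (the hourly schedule and root crontab are my assumptions):
# Remove exited containers every hour (root crontab on each node)
0 * * * * docker ps -a -f status=exited -q | xargs -r docker rm -v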
However, I want Kubernetes to do this work itself. Currently, in my kubelet config I have:
evictionHard:
imagefs.available: 15%
memory.available: "100Mi"
nodefs.available: 10%
nodefs.inodesFree: 5%
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
What am I doing wrong?
Reading the error you've posted, it seems to me you are using "devicemapper" as your storage driver.
The devicemapper storage driver is deprecated in Docker Engine 18.09, and will be removed in a future release. It is recommended that users of the devicemapper storage driver migrate to overlay2.
I suggest you use "overlay2" as your storage driver, unless you are running an unsupported OS. See the supported OS versions here.
You can check your current storage driver using the docker info command; you will get output like this:
Client:
Debug Mode: false
Server:
Containers: 21
Running: 18
Paused: 0
Stopped: 3
Images: 11
Server Version: 19.03.5
Storage Driver: devicemapper <<== See here
Pool Name: docker-8:1-7999625-pool
Pool Blocksize: 65.54kB
...
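If you only need the driver name, docker info also accepts a Go template via --format:
# Prints just the storage driver, e.g. "devicemapper" or "overlay2"
docker info --format '{{.Driver}}'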
Supposing you want to change the storage driver from devicemapper to overlay2, you need to follow these steps:
Changing the storage driver makes existing containers and images inaccessible on the local system. Use docker save to save any images you have built or push them to Docker Hub or a private registry before changing the storage driver, so that you do not need to re-create them later.
Before following this procedure, you must first meet all the prerequisites.
Stop Docker.
$ sudo systemctl stop docker
Copy the contents of /var/lib/docker to a temporary location.
$ cp -au /var/lib/docker /var/lib/docker.bk
If you want to use a separate backing filesystem from the one used by /var/lib/, format the filesystem and mount it into /var/lib/docker. Make sure to add this mount to /etc/fstab to make it permanent.
Edit /etc/docker/daemon.json. If it does not yet exist, create it. Assuming that the file was empty, add the following contents.
{
"storage-driver": "overlay2"
}
Docker does not start if the daemon.json file contains badly-formed JSON.
Start Docker.
$ sudo systemctl start docker
Verify that the daemon is using the overlay2 storage driver. Use the docker info command and look for Storage Driver and Backing filesystem.
Client:
Debug Mode: false
Server:
Containers: 35
Running: 15
Paused: 0
Stopped: 20
Images: 11
Server Version: 19.03.5
Storage Driver: overlay2 <=== HERE
Backing Filesystem: extfs <== HERE
Supports d_type: true
Extracted from Docker Documentation.
Related
I am trying to build the nodemcu firmware with Docker on a Windows 10 system.
When I try to build the nodemcu firmware, I get this error:
(...)
PRUNE libmain.a libc.a
/opt:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/nodemcu-firmware/tools/toolchains/esp8266-linux-x86_64-20190731.0/bin
xtensa-lx106-elf-ar: /opt/nodemcu-firmware/sdk/esp_iot_sdk_v3.0-e4434aa/lib/libc.a: No space left on device
Makefile:334: recipe for target '/opt/nodemcu-firmware/sdk/.pruned-3.0-e4434aa' failed
make: *** [/opt/nodemcu-firmware/sdk/.pruned-3.0-e4434aa] Error 1
make: Leaving directory '/opt/nodemcu-firmware'
I tried docker system prune but this error persists.
I tried executing docker info, but it doesn't help me:
Client:
Debug Mode: false
Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 12
Server Version: 19.03.1
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.131-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.419GiB
Name: docker-desktop
ID: RCEX:VYON:IG6T:AXCF:CSLX:3NVK:V453:DTSP:HW4Y:EKBY:PCVW:2UOJ
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 28
Goroutines: 44
System Time: 2020-12-21T02:04:22.387341144Z
EventsListeners: 1
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
When I execute docker system df, I get this result:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 4 1 673.8MB 209.8MB (31%)
Containers 1 0 0B 0B
Local Volumes 10 0 0B 0B
Build Cache 0 0 0B 0B
I am new to Docker and I don't know what to do from here. Can anyone help me?
Docker tends to need cleanup after you have worked with it for a while. Resources created in the past may no longer be in use; this is especially true if you make use of volumes.
Sometimes images are not fully created and end up as untagged dangling images, which are for the most part undesirable. So you have to remove these space-consuming leftovers.
1. docker image prune
docker image prune removes all dangling images.
2. docker volume prune
docker volume prune removes all volumes which are not attached to any containers. This should always be run before pruning containers; pruning containers first removes all containers which are not running at the time the command is run, which would leave their volumes unattached and cause them to be deleted as well. In my experience, volumes are often what consumes the most space.
3. docker container prune
For the most part this is optional, and you can choose to do it or not.
Even after deleting dangling images, there might still be images you no longer need that don't get deleted. For example, if you had older versions of the redis image and later downloaded the most recent one, you may want to remove the older version if you no longer use it. You will have to delete these manually. Use the following commands:
docker image ls
docker rmi <image id>
The first command gives you a list of all the images you have downloaded, and the second command deletes an image by its ID. Sometimes you may have to use the -f flag on the delete command to force the deletion of an image:
docker rmi -f <image id>
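If you prefer a single command over the individual prune steps above, docker system prune covers stopped containers, dangling images, unused networks, and the build cache; unused volumes are only included when you add the flag below:
# Add -a to also remove all unused (not just dangling) images; use with care
docker system prune --volumes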
Below is the overlay2 filesystem eating disk space, on Ubuntu Linux 18.04 LTS.
Server disk space: 125 GB
overlay 124G 6.0G 113G 6% /var/lib/docker/overlay2/9ac0eb938cd2a50bb87e8ed13605d3f09214fdd9c8967f18dfc3f9432701fea7/merged
overlay 124G 6.0G 113G 6% /var/lib/docker/overlay2/397b099799212060ee7a4718660aa13aba8aa1fbb92f4d88d86fbad94e572847/merged
shm 64M 0 64M 0% /var/lib/docker/containers/7ffb129016d187a61a31c33f9e468b98d0ac7ab1771b87631f6caade5b84adc6/mounts/shm
overlay 124G 6.0G 113G 6% /var/lib/docker/overlay2/df7c4acee73f7aa2536d2a8929a48241bc8e92a5f7b9cb63ab70cea731b52cec/merged
Another solution, if the above doesn't work, is to set up log rotation.
nano /etc/docker/daemon.json
If the file is not found, create it:
cat > /etc/docker/daemon.json
Add the following lines to file:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
Restart the docker daemon: systemctl restart docker
Please refer to: How to set up log rotation post-installation
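Note that these rotation settings only apply to containers created after the daemon restart. As a quick check on a new container (the container name is a placeholder):
# Shows the log driver and its options for one container
docker inspect --format '{{.HostConfig.LogConfig}}' <container-name>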
In case someone else runs into this, here's what's happening:
Your container may be writing data (logs, deployables, downloads...) to its local filesystem, and overlay2 creates a diff on each append/create/delete, so the container's filesystem keeps growing until it fills all available space on the host.
There are a few workarounds that won't require changing the storage driver:
First of all, make sure the data saved by the container can be discarded (you probably don't want to delete your database or anything similar).
Periodically stop the container, prune the system (docker system prune), and restart the container.
Make sure the container doesn't write to its local filesystem, but if you can't:
Replace any directories the container writes to with volumes or mounts.
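For example, a bind mount keeps those writes out of the container's writable layer. This is just a sketch; the image name and paths are made up:
# Writes to /app/logs land on the host path instead of growing an overlay2 diff
docker run -d --name web -v /srv/web-logs:/app/logs myimage:latest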
Follow these steps if your server is Ubuntu Linux 18.04 LTS (they should work for others too).
Docker info for Overlay2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
If you get lines like the following when you enter df -h --total:
19M /var/lib/docker/overlay2/00d82017328c49c661c78ce14550c4073c50a550fe5004911bd3488b085aea76/diff
5.9M /var/lib/docker/overlay2/00e3e4fa0cbff7c242c38cfc9501ef1a523158d69b50779e08a773e7e22a01f1/diff
44M /var/lib/docker/overlay2/0e8e7e893b2c8aa17b4875d421670e058e4d97de066c970bbeab6cba566a44ba/diff
28K /var/lib/docker/overlay2/12a4c4e4877d35e9db657e4acff32e513042cb44119cca5c43fc19ad81c3915f/diff
............
............
then make the changes as follows:
First stop docker: sudo systemctl stop docker
Next, go to /etc/docker
Check for the file daemon.json; if it is not found, create it:
cat > daemon.json
and enter the following inside:
{
"storage-driver": "aufs"
}
save and close.
Finally restart docker: sudo systemctl start docker
Check with docker info that the changes have been made:
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 0
Dirperm1 Supported: true
Changing the storage driver can help you resolve this issue.
Please check whether your Docker version supports aufs here:
Please also check your Linux distribution and which storage drivers it supports here:
I had a similar issue with the docker swarm.
docker system prune --volumes, restarting the server, and removing and recreating the swarm stack did not help.
In my case, I was hosting RabbitMQ where docker-compose config was:
services:
rabbitmq:
image: rabbitmq:.....
....
volumes:
- "${PWD}/queues/data:/var/lib/rabbitmq"
In such a case, each container restart, each server reboot, anything at all that restarts the rabbitmq container, takes more and more hard drive space.
Initial value:
ls -ltrh queues/data/mnesia/ | wc -l
61
du -sch queues/data/mnesia/
7.8G queues/data/mnesia/
7.8G total
After restart:
ls -ltrh queues/data/mnesia/ | wc -l
62
du -sch queues/data/mnesia/
8.3G queues/data/mnesia/
8.3G total
My solution was to stop rabbitmq and remove the directories in queues/data/mnesia/, then restart rabbitmq.
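As a rough sketch of that cleanup, assuming the compose service is named rabbitmq as in the config above (note this discards RabbitMQ's persisted state):
# Stop the service, clear the stale mnesia data, restart
docker-compose stop rabbitmq
sudo rm -rf queues/data/mnesia/*
docker-compose start rabbitmq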
Maybe something is wrong with my config... But if you have such an issue, it is worth checking your containers' volumes to see whether they leave some trash behind.
If you are troubled by the /var/lib/docker/overlay2 directory taking too much space (use the du command to check space usage), then the answer below may be suitable for you.
docker xxx prune commands clean up unused things, such as all stopped containers (in /var/lib/docker/containers), files in the virtual filesystems of stopped containers (in /var/lib/docker/overlay2), unmounted volumes (in /var/lib/docker/volumes), and images that don't have related containers (in /var/lib/docker/images). But none of this touches containers which are currently running.
Limiting the size of logs in the configuration will limit the size of /var/lib/docker/containers/*/*-json.log, but it doesn't involve the overlay2 directory.
You can find two folders called merged and diff in /var/lib/docker/overlay2/<hash>/. If these folders are big, that means there is high disk usage inside your containers themselves, not on the docker host. In this case, you have to attach a terminal to the relevant containers, find the high-usage locations in the containers, and apply your own solutions.
Just like Nick M said.
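A quick way to spot the offending layers described above (GNU sort is assumed for the -h flag):
# List the ten largest overlay2 diff directories
sudo du -sh /var/lib/docker/overlay2/*/diff | sort -rh | head -n 10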
All my docker containers suddenly exited. I try to restart a container and get the following error:
Error response from daemon: Cannot start container 1559: Error getting container 1559fdbc6c2ab8f12d9efe1a066880ddedb2c424d3a3ed8a1f8a2eb181e1c3ba from driver devicemapper: Error mounting '/dev/mapper/docker-253:2-33554560-1559fdbc6c2ab8f12d9efe1a066880ddedb2c424d3a3ed8a1f8a2eb181e1c3ba' on '/data/docker/devicemapper/mnt/1559fdbc6c2ab8f12d9efe1a066880ddedb2c424d3a3ed8a1f8a2eb181e1c3ba': invalid argument
Here is docker info
[root@localhost ~]# docker info
Containers: 9
Images: 189
Storage Driver: devicemapper
Pool Name: docker-253:2-33554560-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 7.367 GB
Data Space Total: 107.4 GB
Data Space Available: 54.15 GB
Metadata Space Used: 11.58 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.136 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Data loop file: /data/docker/devicemapper/devicemapper/data
Metadata loop file: /data/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-10-14)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 1
Total Memory: 3.703 GiB
Name: localhost.localdomain
ID: UBGK:AERA:AYMM:XB6P:XCOG:MUGB:NKZM:GSIY:AH25:UGN7:FUF3:ID44
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
I tried restarting docker and it still didn't work.
I cannot remove all the containers or images.
The disk had run out of space previously.
My docker version is 1.8
What can I do to recover my container?
Thank you very much for your help!
Can you post your docker info command output?
If your Docker Root Dir -
/var/lib/docker (default)
- is on the same disk that is running out of space, as you mentioned, then you wouldn't be able to pull images or run docker containers. Try to clean up some space and then you may be able to recover your container. BTW, when you say recover: are you writing any data into the container?
Any specific reason to be on an older version of Docker?
The recommended storage driver for Docker is overlay2, and FYI the devicemapper storage driver is deprecated in the 18.09 release and will be removed in a future release.
Read more about storage drivers here
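Coming back to freeing space: on an engine as old as 1.8, the prune subcommands don't exist yet, so cleanup uses the classic one-liners; only run these if the exited containers and dangling images are disposable:
# Remove exited containers, then remove dangling (untagged) images
docker rm $(docker ps -a -q -f status=exited)
docker rmi $(docker images -q -f dangling=true)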
I'm facing a weird problem when I want to build my image on Windows. I haven't used Docker for anything else, so the installation can be considered fresh. There are no volumes at all and no images yet.
When I try to build my application from my Dockerfile, it finishes with this error:
docker build ./
Sending build context to Docker daemon 1.4 GB
Error response from daemon: Error processing tar file(exit status 1): write /.bowerrc: no space left on device
I've read that you can increase the basesize of docker, but I haven't found a solution for that on Windows (why is this even limited by default?).
docker info prints some stuff, but it doesn't show anything about the basesize under "Storage Driver" at all:
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: tmpfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1
runc version: 9df8b306d01f59d3a8029be411de015b7304dd8f
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.8-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.934 GiB
Name: moby
ID: GUXQ:KPKS:PHBV:BMEF:QHHM:B2YG:MWPB:2W5H:Z3GX:27YS:QBT6:O4RV
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 13
Goroutines: 21
System Time: 2017-02-19T20:15:57.8764828Z
EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
According to some posts on the internet, this was (or is) the way to go on Linux, but it doesn't work on Windows:
docker daemon --storage-opt dm.basesize=20G
What is wrong with my docker installation and how can I increase the basesize?
I had the same problem on Windows. One of these two settings should solve it:
Increase the memory that Docker is using. If it is 2 GB, add more.
Increase "Disk image max size" - initially it is 60 GB; move it to 80 GB.
This should be enough, but it depends on the complexity of what you are building. In some cases an increase of swap would be needed; the initial swap of Docker is 1024 MB.
I reset to factory defaults, then selected: delete all images.
On the Advanced tab: increase 'Disk image max size' (I changed it to 80 GB).
Then build with docker again.
I've reinstalled Docker and now it seems to work.
This wouldn't have been the underlying issue for the OP back in 2017, but if you're experiencing this problem with Windows 10 1903, then you may be running into Docker for Windows issue #4100 ("Windows 1903 fails when storage-opt used").
On January 6th, 2020, a commit was pushed to the docker-ce GitHub repository that, from the comments (below), appears to implement a workaround to the underlying bug in Windows.
microsoft/hcsshim#718 wclayer: Work around Windows bug when expanding sandbox size
fixes microsoft/hcsshim#708 Windows Host Compute Service bug breaks docker (and other) sandboxes bigger than 20G on Windows 1903
fixes microsoft/hcsshim#624 The hcsshim on Windows 10 1903 always fails to build Docker image
fixes/addresses docker/for-win#3884 An error occurred while attempting to build Docker image (especially this comment and the next comments after: docker/for-win#3884 (comment))
fixes/addresses docker/for-win#4100 Windows 1903 fails when storage-opt used
fixes moby/moby#36831 hcsshim::PrepareLayer failed in Win32: The parameter is incorrect (moby/moby#36831 (comment))
fixes Stannieman/audacity-with-asio-builder#5 Docker won't build container
fixes MicrosoftDocs/visualstudio-docs#3523 Error when running build with storage-opts set
fixes moby/moby#39524 Docker build windows 19.03 --storage-opt size>20G
Unfortunately, no new release of Docker has yet been made that includes this commit, so those of us on 1903 and experiencing this bug are kind of stuck for the time being.
Assuming you're doing this with a dev environment, just run:
docker system prune -a -f
Caution: Do not run this on a production environment
The private registry worked well on docker 1.10.3, but I cannot pull/push images after docker was updated to 1.12.0.
I had modified /etc/sysconfig/docker as:
OPTIONS='--selinux-enabled=true --insecure-registry=myip:5000'
or
OPTIONS='--selinux-enabled=true --insecure-registry myip:5000'
but when I run pull/push, I get this error:
$ docker pull myip:5000/cadvisor
Using default tag: latest
Error response from daemon: Get https://myip:5000/v1/_ping: http: server gave HTTP response to HTTPS client
When I change docker back to 1.10.3, it still works well, as below:
$ docker pull myip:5000/cadvisor
Using default tag: latest
Trying to pull repository myip:5000/cadvisor ...
latest: Pulling from myip:5000/cadvisor
09d0220f4043: Pull complete
a3ed95caeb02: Pull complete
151807d34af9: Pull complete
14cd28dce332: Pull complete
Digest:
sha256:33b6475cd5b7646b3748097af1224de3eee3ba7cf5105524d95c0cf135f59b47
Status: Downloaded newer image for myip/cadvisor:latest
Some related information is listed below:
docker version
Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built:
OS/Arch: linux/amd64
docker info
Containers: 4
Running: 1
Paused: 0
Stopped: 3
Images: 241
Server Version: 1.12.0
Storage Driver: devicemapper
Pool Name: docker-253:0-6809-pool
Pool Blocksize: 65.54 kB
Base Device Size: 107.4 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 5.459 GB
Data Space Total: 107.4 GB
Data Space Available: 34.74 GB
Metadata Space Used: 9.912 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.138 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use '--storage-opt dm.thinpooldev' to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2016-06-09)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host overlay null bridge
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-229.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 24
Total Memory: 62.39 GiB
Name: server_3
ID: TITS:BL4B:M5FE:CIRO:5SW6:TVIV:HW36:J7OS:WLHF:46T6:2RBA:WCNV
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 21
Goroutines: 32
System Time: 2016-08-02T10:33:06.414048675+08:00
EventsListeners: 0
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
127.0.0.0/8
docker exec <registry-container> registry -version
registry github.com/docker/distribution v2.2.1
After restarting the docker daemon in debug mode, the daemon logs from reproducing my problem are listed below:
DEBU[0794] Calling POST /v1.24/images/create?fromImage=10.10.10.40%3A5000%2Fcadvisor&tag=latest
DEBU[0794] hostDir: /etc/docker/certs.d/10.10.10.40:5000
DEBU[0794] hostDir: /etc/docker/certs.d/10.10.10.40:5000
DEBU[0794] Trying to pull 10.10.10.40:5000/cadvisor from https://10.10.10.40:5000 v2
WARN[0794] Error getting v2 registry: Get https://10.10.10.40:5000/v2/: http: server gave HTTP response to HTTPS client
ERRO[0794] Attempting next endpoint for pull after error: Get https://10.10.10.40:5000/v2/: http: server gave HTTP response to HTTPS client
DEBU[0794] Trying to pull 10.10.10.40:5000/cadvisor from https://10.10.10.40:5000 v1
DEBU[0794] hostDir: /etc/docker/certs.d/10.10.10.40:5000
DEBU[0794] attempting v1 ping for registry endpoint https://10.10.10.40:5000/v1/
DEBU[0794] Fallback from error: Get https://10.10.10.40:5000/v1/_ping: http: server gave HTTP response to HTTPS client
ERRO[0794] Attempting next endpoint for pull after error: Get https://10.10.10.40:5000/v1/_ping: http: server gave HTTP response to HTTPS client
ERRO[0794] Handler for POST /v1.24/images/create returned error: Get https://10.10.10.40:5000/v1/_ping: http: server gave HTTP response to HTTPS client
DEBU[1201] clean 2 unused exec commands
What's more, I just ran a simple command to launch the private registry for testing; everything else is at its defaults:
docker run -d -p 5000:5000 --restart=always --name registry -v $(pwd)/data:/var/lib/registry registry:2
No proxy is configured. In summary, it is just a simple test environment.
I had the same issue.
This helped me:
Create or modify /etc/docker/daemon.json on the client machine
{ "insecure-registries":["myregistry.example.com:5000"] }
Restart docker daemon
sudo /etc/init.d/docker restart
For Windows users:
Add the local registry in the Docker settings (Daemon -> Insecure registries) and apply.
For Mac users:
Update the Docker preferences using the Docker icon in the top bar:
Preferences -> Daemon -> Insecure Registries [click the (+) sign] -> add <host>:<port>
Hit the "Apply & Restart" button at the bottom.
I also had the same issue and followed these steps:
1. Create the file
vi /etc/docker/daemon.json
2. Add the content below
{
"insecure-registries":["192.168.1.142:5000"]
}
3. Restart Docker
service docker restart
If you are using Windows and you get this error, you need to create a file here: "C:\ProgramData\docker\config\daemon.json"
and do the same as @Bspec mentioned above:
{ "insecure-registries":["myregistry.example.com:5000"] }
Then restart docker using PowerShell commands:
Stop-Service docker
Start-Service docker
modifying "/etc/docker/daemon.json" didn't work for me.
Putting it under "/etc/sysconfig/docker" as below, worked.
INSECURE_REGISTRY="--insecure-registry 192.168.24.1:8787"
In order to push, add the IP to the insecure registries on the client side (e.g. for Windows).
To pull, add it on the server side (in this case Ubuntu):
vim /etc/docker/daemon.json
and then restart Docker.
None of the solutions worked on Ubuntu 18.04, so I spent some time finding the root cause.
Steps to solve the issue:
sudo vi /lib/systemd/system/docker.service
# ExecStart=dockerd .... --insecure-registry=192.168.99.100:5000
sudo systemctl stop docker.service
sudo systemctl daemon-reload
sudo systemctl start docker.service
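After the restart, you can confirm the flag took effect, since docker info lists the configured registries:
# The registry should now appear under "Insecure Registries"
docker info | grep -A 3 'Insecure Registries'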
What was the issue? The dockerd options were set in the systemd unit file, so that is where the insecure registry had to be added.
I would recommend checking where exactly the dockerd options are configured, regardless of your Linux distribution, with:
sudo find /etc /lib -name 'docker*' | while read -r line; do grep dockerd $line /dev/null; done
First test locally:
docker push localhost:5000/<ImageName>
If the docker push succeeds, go to the other server and do:
sudo nano /etc/docker/daemon.json
{"insecure-registries" : ["<HostName or IP Address registry server>:5000"]}
Save and exit.
Then:
sudo systemctl daemon-reload
sudo service docker restart
Nice!
Now push from the other server:
docker tag <image id> <HostName or IP Address registry server>:5000/<ImageName>
docker push <HostName or IP Address registry server>:5000/<ImageName>
Enjoy it.