I'm trying to add a registry mirror to my Docker configuration so that my server can cache images from Docker Hub, using the following syntax:
/etc/docker/daemon.json
{
"registry-mirrors": ["https://myserver.com"]
}
I have seen the above config in Docker's official documentation, but my Ubuntu 20.04 machine does not read that file at all, even if I restart the Docker service.
You should rewrite the configuration file as follows:
{
"registry-mirrors": ["myserver.com"]
}
Remove the protocol!
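Independently of the protocol question, it is worth confirming that the file parses as JSON at all, since a malformed daemon.json is a common reason the file appears to be ignored. A minimal sketch, demonstrated on a scratch copy (assumes python3 is installed; on the real host, point it at /etc/docker/daemon.json):

```shell
# Validate a daemon.json candidate as JSON before restarting Docker.
# Demonstrated on a scratch file; use /etc/docker/daemon.json on the real host.
CONF=/tmp/daemon.json
printf '%s\n' '{"registry-mirrors": ["https://myserver.com"]}' > "$CONF"
python3 -m json.tool "$CONF" && echo "valid JSON"
```

After a successful validation and a `sudo systemctl restart docker`, `docker info --format '{{.RegistryConfig.Mirrors}}'` should list the mirror if the daemon picked up the file.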
Intro
I added a directive to daemon.json that was simply being ignored when I restarted Docker. Docker restarted without error; it just ignored my change.
Problem
I was attempting to change the default log target from json-file to syslog by APPENDING the log-driver directive to the end of /etc/docker/daemon.json (I was scripting my Docker install and so was building this file incrementally).
But no matter WHAT I did, I could not get the change to be read. The output of docker info --format '{{.LoggingDriver}}' was always json-file.
Troubleshooting
I investigated the possibility of a formatting error, as in the accepted answer, but this bore no fruit. Reading and re-reading the Docker docs, Googling; nothing cleared the error.
Solution
The problem? It looks like Docker was really finicky about the ORDER in which the "log-driver" directive appeared. After wasting hours and beating my brains in, I changed where the directive appeared in the file by PREPENDING it to the top of daemon.json, like so:
{
"log-driver": "syslog",
"default-address-pools":
[
{"base":"192.168.X.X/24","size":28}
]
}
With the directive at the TOP, the change was recognized after restarting Docker and the output of docker info --format '{{.LoggingDriver}}' was now as expected: syslog. Go figure...
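As an aside for anyone scripting the install the same way: merging keys as JSON, rather than appending text, sidesteps ordering and comma mistakes entirely. A sketch on a scratch copy with illustrative values (assumes python3 is available; on the real host, operate on /etc/docker/daemon.json with sudo):

```shell
# Build daemon.json incrementally by merging keys as JSON, not by appending
# text. Demonstrated on a scratch file with illustrative values.
CONF=/tmp/daemon.json
printf '%s\n' '{"default-address-pools":[{"base":"192.168.0.0/24","size":28}]}' > "$CONF"
python3 - "$CONF" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)
cfg["log-driver"] = "syslog"   # merge the new directive into the existing config
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF
cat "$CONF"
```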
Conclusion
It was a silly problem, but wow, did it waste some cycles figuring out how things were broken. I hope this gets folks like me, who couldn't find this solution by Googling, out of a hole.
Related
I'm trying to run docker in a partially locked-down environment, with /etc on a read-only mount point and a "/data" folder in a read/write mount point. I've added an /etc/docker/daemon.json file:
{
"data-root": "/data/docker"
}
but dockerd is failing on startup with this error:
failed to start daemon: Error saving key file: open /etc/docker/.tmp-key.json128868007: read-only file system
Can I stop dockerd from trying to write into /etc? Are there best practices for running docker on a host with read-only mounts?
EDIT: It turns out there was only one file being written: /etc/docker/key.json, which is talked about in detail here. The .tmp-key.json bit is likely part of some atomic file-write code.
It looks like only the "key.json" file is written to /etc. After some digging, I found this PR, which talks about making it configurable. As of Docker 19.03.6, the option is still available for use in the daemon.json file as "deprecated-key-path": "/path/to/file".
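Putting the two options together, a daemon.json along these lines (a sketch; the key path here is an arbitrary writable location, not a documented default) keeps everything the daemon writes off the read-only /etc:

```shell
# Sketch of a daemon.json that moves both the data root and the key file to a
# writable mount (paths are illustrative); written to a scratch path here.
cat > /tmp/daemon.json <<'EOF'
{
  "data-root": "/data/docker",
  "deprecated-key-path": "/data/docker/key.json"
}
EOF
cat /tmp/daemon.json
```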
I'm feeling really terrible atm, so any help would be really appreciated. I kept running out of space on /var when downloading Docker images, so I decided I needed to change the location where Docker stores its images. I tried several methods but had no success. First, I tried creating daemon.json in /etc/docker and pointing data-root at a place with more storage (/data2/docker). I stopped Docker, moved everything over, created the file, but no dice: the Docker daemon wouldn't start.
Then I saw this method https://stackoverflow.com/a/49743270/13034460, which involves creating a symbolic link between /var/lib/docker and the new directory (/data2/docker). I followed the instructions:
Much easier way to do so:
Stop the docker service:
sudo systemctl stop docker
Move the existing docker directory to the new location:
sudo mv /var/lib/docker/ /path/to/new/docker/
Create the symbolic link:
sudo ln -s /path/to/new/docker/ /var/lib/docker
Start the docker service:
sudo systemctl start docker
Well, this didn't work for me. I can't find the error message b/c it's too far up in my terminal, but it was along the lines of "you don't have enough storage / we don't know where to store this image". /data2/docker should have tons of storage, so that can't be the issue.
But the big problem now is that this symbolic link exists and I can't figure out how to get rid of it. I tried removing everything related to Docker on the computer, uninstalling, then reinstalling Docker (which always used to work for me if there were any issues). But when I reinstall, it won't even run the hello-world image, b/c of the link I think. I get a message:
docker: open /data2/docker/tmp/GetImageBlob289478576: no such file or directory
So... it's looking in /data2/docker because of the symbolic link (I assume), but that directory doesn't exist anymore. But neither does /var/lib/docker! All I want is to delete this link and get everything back to fresh defaults. I can worry about the storage issue another time. If I can't use Docker at all, I'm so screwed. I've tried looking in every directory to find the link using ls -l, but I can't find it. I used the exact commands referenced above when I created the link (just with my paths).
I would be so grateful to anyone who could help--I'm so lost on this. Thank you!
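For anyone who lands here with the same dangling link: the link in that recipe lives at /var/lib/docker itself, and removing a symlink only removes the link, never its (possibly missing) target. A sketch of the inspect-and-remove steps, demonstrated on a scratch link rather than the real path:

```shell
# Make a deliberately dangling symlink to show what one looks like
ln -sf /no/such/dir /tmp/docker-link
ls -l /tmp/docker-link            # the "-> /no/such/dir" part reveals the link
[ -L /tmp/docker-link ] && echo "it is a symlink"
rm /tmp/docker-link               # removes only the link (no trailing slash)
```

On the real system that translates to ls -l /var/lib/docker to confirm the link, then sudo rm /var/lib/docker (no trailing slash); a reinstall can then recreate a fresh /var/lib/docker.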
This error appears randomly when I'm working with docker-compose on Windows 10, sometimes after PyCharm has already been working with docker-compose as the interpreter.
I tried:
Making sure the docker-compose file is valid, with no tabs instead of spaces.
Using both .yml and .yaml suffixes (sometimes .yaml works and .yml doesn't; sometimes both work, or neither does).
Adding project-compose to the configuration files.
The problem is "solved" right after rebooting, and then it happens again.
Linux here.
tl;dr: the same solution as for Windows.
Check the path to the docker-compose executable:
➤ which docker-compose
/home/voy/.pyenv/shims/docker-compose
Go to File | Settings | Build, Execution, Deployment | Docker | Tools | Docker Compose Executable and paste the docker-compose executable path from above.
Restart PyCharm.
Here is a JetBrains issue about this:
https://youtrack.jetbrains.com/issue/WI-49652
And another post:
https://intellij-support.jetbrains.com/hc/en-us/community/posts/360000129510-Couldn-t-refresh-skeletons-for-remote-interpreter-Docker
They suggest multiple things. First, I had to change the docker-compose executable path: PyCharm found docker-compose.txt first, so I needed to set it to docker-compose.exe.
After this the problem still occurred from time to time, but restarting PyCharm fixed it, though it takes a few minutes to index everything and reload the project.
Line endings can also be an issue: if docker-compose.yml is set to use CRLF instead of LF, parsing can fail as well. I suggest using an .editorconfig file to control your line endings; that seemed to help too. Setting git's autocrlf to 'input' might also help if you use Windows.
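For reference, a minimal .editorconfig along those lines might look like this (a sketch; adjust the glob to your project layout):

```ini
# Force LF line endings for compose files so parsing is not tripped up by CRLF
root = true

[*.{yml,yaml}]
end_of_line = lf
```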
The slowest fix is the one posted on the forum:
Remove the PyCharm helper containers: $ docker rm -f $(docker ps -a | grep pycharm_helper | awk '{print $1}')
Invalidate caches and restart PyCharm.
No great solution yet, as far as I know, unfortunately.
I am using JupyterHub 0.9.4 with DockerSpawner.
My goal is to pass every container spawned by the spawner an additional host name, i.e. to make an additional entry in /etc/hosts.
I first tried via my docker-compose.yml file, which does not work, as the containers are created by JupyterHub.
I also tried it in the Dockerfile itself, but there it got overwritten.
I further tried it with changes in the jupyterhub_config.py file, by adding:
c.DockerSpawner.extra_create_kwargs.update({'command': '--add-host="<ip-address> <hostname>"'})
Still I do not see an entry in the /etc/hosts file in the container.
Does anyone have a clue where I have to add it?
Thanks,
Max
You can do the equivalent of docker run --add-host "foo.domain.local:192.168.1.12" ... like so:
c.DockerSpawner.extra_host_config.update({
    "extra_hosts": {
        "foo.domain.local": "192.168.1.12",
        "other.domain.local": "192.168.1.13"
    }
})
I couldn't find that in any documentation.
Hello guys, I'm trying to get my Vagrant box up, but Docker keeps throwing the error given below:
Can't find a suitable configuration file in this directory or any parent. Are you in the right directory?
The file is present at the root of my project. It was all working well, but then it just started throwing this error. Can somebody tell me what I have done that is causing it?
Well, I had this error, but it was due to Vagrant. If you are running Vagrant, then first of all enter your Vagrant machine using:
vagrant ssh
and try to find the file in there. If you don't have it there, that is the problem: the file is not being loaded into the machine, which is why you are getting this error.
My error occurred because Vagrant was not mounting the NFS partition, so the whole project was not loading into the Vagrant machine, and the docker command was run after that. Since the project was not loaded, the docker command could not find the required file.
If this is your problem, try mounting your NFS partition first.
Run:
docker-compose -f rootoftheprojectpath/docker-compose.yml up -d
Check read permissions, typos, etc. Also check that your file is not empty.
Regards