I want to start a new docker container with
docker run hello-world
but I get the error
...
Error response from daemon: oci runtime error: flag provided but not defined: -console
ERRO[0000] error getting events from daemon: net/http: request canceled
What is confusing me is that it used to work the last time I tried.
The machine I am working on is not administrated by me (but I have sudo privileges). The admin said that he did not change anything.
Some info:
docker version -> Docker version 1.13.1, build 092cba3
lsb_release -a -> ... Description: Ubuntu 16.04.6 LTS ...
I am connected to the server via ssh.
If I can provide any more info, please let me know how to generate it.
Any help appreciated.
It may be related to this bug reported in the Moby project (the components Docker is built from).
You can try updating your Docker version to a more recent one (it seems you are using 1.13, which is quite out of date).
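If the admin is open to it, one common way to upgrade on Ubuntu 16.04 is Docker's convenience script; a minimal sketch, assuming you are fine replacing the distro package with Docker's own repository build:
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ docker version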
I'm a complete newcomer to Docker, so the following questions might be a bit naive, but I'm stuck and I need help.
I'm trying to reproduce some results in research. The authors just released code along with a specification of how to build a Docker image to reproduce their results. The relevant bit is copied below:
I believe I installed Docker correctly:
$ docker --version
Docker version 19.03.13, build 4484c46d9d
$ sudo docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
However, when I try checking that my nvidia-docker installation was successful, I get the following error:
$ sudo docker run --gpus all --rm nvidia/cuda:10.1-base nvidia-smi
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: nvml error: driver not loaded\\\\n\\\"\"": unknown.
It looks like the key error is:
nvidia-container-cli: initialization error: nvml error: driver not loaded
I don't have a GPU locally and I'm finding conflicting information on whether CUDA needs to be installed before NVIDIA Docker. For instance, this NVIDIA moderator says "A proper nvidia docker plugin installation starts with a proper CUDA install on the base machine."
My questions are the following:
Can I install NVIDIA Docker without having CUDA installed?
If so, what is the source of this error and how do I fix it?
If not, how do I create this Docker image to reproduce the results?
Can I install NVIDIA Docker without having CUDA installed?
Yes, you can. The readme states that nvidia-docker only requires the NVIDIA GPU driver and the Docker engine to be installed:
Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver needs to be installed
If so, what is the source of this error and how do I fix it?
That's either because you don't have a GPU locally, it's not an NVIDIA GPU, or something went wrong when the drivers were installed. If you have a CUDA-capable GPU, I recommend following the NVIDIA guide to install the drivers. If you don't have a GPU locally, you can still build an image with CUDA, then move it somewhere with a GPU.
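For example, a minimal sketch of that build-then-move workflow (the image name is a placeholder):
$ docker build -t repro-image .    # building does not require the GPU driver
$ docker save repro-image | gzip > repro-image.tar.gz
Copy the archive to a GPU host, then:
$ docker load < repro-image.tar.gz
$ docker run --gpus all --rm repro-image nvidia-smi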
If not, how do I create this Docker image to reproduce the results?
The problem is that even if you manage to get rid of CUDA in the Docker image, there is software inside that requires it. In this case fixing the Dockerfile seems unnecessary to me: you can just ignore Docker and start fixing the code to run on the CPU.
I think in your Dockerfile you need
ENV NVIDIA_VISIBLE_DEVICES=void
then
RUN your work
and finally
ENV NVIDIA_VISIBLE_DEVICES=all
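Put together, a minimal hypothetical Dockerfile using this trick (the base image and build script are placeholders, and this assumes the nvidia runtime is the daemon's default runtime so build steps pass through its prestart hook):
FROM nvidia/cuda:10.1-base
# hide GPUs from the NVIDIA container runtime during build steps
ENV NVIDIA_VISIBLE_DEVICES=void
# hypothetical build step that does not need a GPU
RUN ./install-dependencies.sh
# make GPUs visible again for containers started from this image
ENV NVIDIA_VISIBLE_DEVICES=all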
I am using Docker on CentOS Linux release 7.8.2003 (Core) with 16 GB RAM. My Docker version is 19.03.7 and my docker-compose version is 1.23.2. I have 30+ Docker containers running on my machine.
Everything was working smoothly, but I ran into a problem. Sometimes, when I try to run a container I get this error
ERROR: for container_name Cannot start service container_name: OCI runtime create failed: container_linux.go:349:
starting container process caused "process_linux.go:319: getting the final child's pid from pipe caused \"EOF\"": unknown
When I retry 3-5 times, the container starts successfully. Sometimes I need to restart the Docker service, or the server itself, to make it work. I don't know exactly why it sometimes gives me this error and sometimes starts successfully with the same docker-compose file.
Can somebody explain this weird behavior of docker to me? Is it due to so many containers running on my machine or something else?
I had a similar issue:
OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:722: waiting for init preliminary setup caused: EOF: unknown
and the problem turned out to be the wrong version of my WSL distro, which was 1 instead of 2:
PS C:\Users\myself> wsl -l -v
  NAME      STATE      VERSION
* Ubuntu    Running    1
So I used the wsl --set-version command to upgrade it:
PS C:\Users\myself> wsl --set-version Ubuntu 2
PS C:\Users\myself> wsl -l -v
  NAME      STATE      VERSION
* Ubuntu    Running    2
Then I was able to successfully build my Docker image.
Hope this can help someone.
Came across this link, which solved the issue for me. It is apparently for WSL, but it definitely also applies to my Ubuntu 18.04 installation: the latest version(s) of Docker have this problem; a few versions back they didn't.
I am a complete newb to Docker and am running Ubuntu 18.04.6 Bionic Beaver.
docker --version reports Docker version 20.10.7, build 20.10.7-0ubuntu5~18.04.3
I'm not sure if this is the solution, but after reinstalling Docker because of various unrelated problems, I ran runc init, killed an old running dockerd process, and was able to get hello-world to run. I've wasted so much time on it that I don't want to dig for the root cause.
I have installed Docker on my machine following the official installation steps for Ubuntu. It fails at the verification step.
When I run the command docker run hello-world, it throws the following error message:
Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
See 'docker run --help'.
Below are the docker details for my machine.
Client: Docker Engine - Community
Version: 19.03.6
API version: 1.40
Go version: go1.12.16
Git commit: 369ce74a3c
Built: Thu Feb 13 01:27:49 2020
OS/Arch: linux/amd64
Experimental: false
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version: dial unix /var/run/docker.sock: connect: permission denied
If I try docker info I get the following message:
Client:
Debug Mode: false
Server:
ERROR: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info: dial unix /var/run/docker.sock: connect: permission denied
errors pretty printing info
You can simply pull and test it in this way:
$ sudo docker pull hello-world
$ sudo docker run hello-world
First check if docker is running using
sudo service docker status
If it's running, then you probably missed adding your user to the docker group. To confirm this, try the docker commands with sudo.
If you don't want to use sudo every time, follow the guide below to add your user to the docker group:
Step 2 — Executing the Docker Command Without Sudo (Optional)
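For reference, the key step from that guide boils down to the standard Docker post-install command:
$ sudo usermod -aG docker ${USER}
Then log out and back in (or run newgrp docker) so the new group membership takes effect, and verify with docker run hello-world.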
NOTE: You cannot run Docker inside WSL itself (i.e. Ubuntu on Windows), so you need to install Docker for Windows; the following guide provides the complete steps for using Docker in WSL:
Setting Up Docker for Windows and WSL to Work Flawlessly
The Docker post-installation steps were probably not executed. Basically, the currently logged-in user needs to be added to the docker group.
Just follow the instructions here from docker documentation - https://docs.docker.com/engine/install/linux-postinstall/
FYI: group membership is re-evaluated only after a reboot of Ubuntu (in 18.04). So, after following the above link, reboot the Ubuntu machine. Then try docker images, and the reported permission issue should be resolved.
This problem was solved when I upgraded my Ubuntu 19.04 to 19.10 and then reinstalled Docker.
I had a similar problem while trying to fix the error below.
root@neno88:/home/mohan# docker run hello-world
Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 10.187.215.112:53: read udp 10.187.215.103:58777->10.187.215.112:53: read: connection refused.
So the error was due to the proxy in my enterprise setup: the daemon's requests were being refused by the proxy.
WRONG way to fix it (this caused the error below):
I added the registry-1.docker.io IP to /etc/hosts, but that caused a similar error, as in this StackOverflow question:
root@neno88:/home/mohan# docker run hello-world
Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
See 'docker run --help'.
CORRECT STEPS to fix it:
Just add your proxy details to /etc/systemd/system/docker.service.d/proxy.conf (the docker.service.d folder may not exist, so create the directory first).
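A minimal sketch of that drop-in file, assuming a hypothetical enterprise proxy at proxy.example.com:3128:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"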
After adding the proxy details, check with the commands below whether the daemon successfully sees your environment variables:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ systemctl show --property=Environment docker
Refer to this doc: https://www.serverlab.ca/tutorials/containers/docker/how-to-set-the-proxy-for-docker-on-ubuntu/
I faced an issue with Docker.
The scenario is like this: we use CodeBuild + Packer + Docker to create an AMI, which is used in deployment. During this step we pull an image from Artifactory, and everything passes fine except pulling one of the layers, which is > 1 GB. After several retries it fails with the error Download failed, retrying: unknown blob and then "unexpected EOF". Have you ever faced such an issue? Any comments or advice are highly appreciated.
This was mainly because of a weak network (as I was using a mobile hotspot).
I configured the Docker daemon to reduce the number of layers it downloads concurrently:
$ dockerd --max-concurrent-downloads <int>
Here <int> is the number of docker pull layers you want to download concurrently.
The default is 3; in my case I set it to 2:
$ dockerd --max-concurrent-downloads 2 &>/dev/null
The downside of doing this is sacrificing your precious time :)
It takes time like hell.
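If you'd rather not launch dockerd by hand, the same option can be made persistent in /etc/docker/daemon.json (a sketch, assuming a systemd-managed daemon):
{
  "max-concurrent-downloads": 2
}
Then restart the daemon with sudo systemctl restart docker.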
I had this problem with a very small layer that was corrupted or broken in the registry V2 for some unknown reason. docker pull failed with "unexpected EOF" after retrying the layer (identified as "1f8fd317c5a4" in this case).
Rebuilding the image from source and trying to docker push said "layer already exists", which did not fix the issue.
I was able to delete the offending layer using curl like so:
curl -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' -sk "https://registry.local/v2/image-name/manifests/1033-develop-7e414712"
(substitute your registry for "registry.local", your image name for "image-name", and your image tag or "latest" for "1033-develop-7e414712".)
Get the complete sha256 digest for layer 1f8fd317c5a4 from the JSON output, and use it in next command:
curl -k -X DELETE "https://registry.local/v2/image-name/blobs/sha256:1f8fd317c5a406a75130dacddc02bd09a9abf44e068e2730dd8f5238666bb390"
Now you will be able to docker push registry.local/image-name:1033-develop-7e414712 to upload the layer you deleted, and everything works.
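One caveat, assuming a standard registry:2 deployment: the registry only honors DELETE requests when deletion is enabled, e.g. by starting it with:
$ docker run -d -p 5000:5000 -e REGISTRY_STORAGE_DELETE_ENABLED=true registry:2
Otherwise the DELETE call above will be rejected.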
With Docker Desktop on Windows I could not find the dockerd command, so I added the entry below to the daemon.json file and restarted the Docker service:
"max-concurrent-downloads": 1
You will find this file at C:\Users\<user-name>\.docker\daemon.json.
This pulls the layers sequentially, so it takes longer, but it is a working alternative for downloading a large image over a weak network connection.
Had the same issue due to a bad connection. These flags are described in the dockerd reference documentation.
For Linux, simply run the daemon with both flags:
$ dockerd --max-concurrent-downloads 2 --max-download-attempts 10
For Docker Desktop on Windows, open Settings -> Docker Engine and pop the following in, with the numbers that work best for you. You can see all the options in the docs mentioned above.
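As a sketch, the corresponding daemon.json entries would look something like this (the values are examples, not recommendations):
{
  "max-concurrent-downloads": 2,
  "max-download-attempts": 10
}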
Stop docker service: sudo service docker stop
Run the Docker daemon, decreasing max-concurrent-downloads to whatever suits your internet bandwidth (unfortunately 1 for me) and increasing max-download-attempts: sudo dockerd --max-concurrent-downloads 1 --max-download-attempts 10
PS: I am not a docker expert. But I believe there is a better way to do it by adding some config either to the registry or to your Docker client.
Problem: Unable to pull a Docker image; it keeps retrying and fails with EOF.
Solution: Update the Docker software, then try to pull the image again; that resolved the issue.
This does not match the situation described by OP perfectly, but I'll post it here for future reference. Docker Desktop 4.15.0 introduced a bug which caused a similar issue for me. Depending on the Docker Desktop version and command used, one of these errors would pop up:
% docker pull alpine:3.7
Error response from daemon: Get "https://registry-1.docker.io/v2/": read tcp 192.168.65.4:55694->192.168.65.5:3128: read: connection reset by peer
% docker-compose up --build
// Some stuff
=> ERROR [container_name internal] load metadata for docker.io/library/alpine:3.7 0.0s
------
> [container_name internal] load metadata for docker.io/library/alpine:3.7:
------
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to do request: Head "https://registry-1.docker.io/v2/library/alpine/manifests/3.7": unexpected EOF
% docker pull alpine:3.7
Error response from daemon: Get "https://registry-1.docker.io/v2/": unexpected EOF
The solution was to uninstall Docker Desktop and install an older version. I'm posting this here since a lot of guides and instructions recommend updating Docker Desktop to its newest version, but in my case that is exactly what caused the issue. Of course this bug will most likely be patched in a newer version at some point; I have notified Docker support about it.
Edit: It seems that there is a GitHub topic for my issue.
Docker is giving me a hard time currently. I followed these instructions to install Docker on my virtual server running Ubuntu 14.04, hosted by strato.de.
wget -qO- https://get.docker.com/ | sh
Executing this line runs me directly into this error message:
modprobe: ERROR: ../libkmod/libkmod.c:507 kmod_lookup_alias_from_builtin_file() could not open builtin file '/lib/modules/3.13.0-042stab092.3/modules.builtin.bin'
modprobe: FATAL: Module aufs not found.
Warning: current kernel is not supported by the linux-image-extra-virtual
package. We have no AUFS support. Consider installing the packages linux-image-virtual kernel and linux-image-extra-virtual for AUFS support.
After the installation was done, I installed the two mentioned packages. Now my problem is that I can't get docker to run.
service docker start
results in:
start: Job failed to start
docker -d
results in
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
ERRO[0000] 'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded.
INFO[0000] +job init_networkdriver()
WARN[0000] Running modprobe bridge nf_nat failed with message: , error: exit status 1
package not installed
INFO[0000] -job init_networkdriver() = ERR (1)
FATA[0000] Shutting down daemon due to errors: package not installed
and
docker run hello-world
results in
FATA[0000] Post http:///var/run/docker.sock/v1.18/containers/create: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
Does anybody have a clue about what dependencies could be missing? What else could have gone wrong? Are there any logs which docker provides?
I'm searching back and forth for a solution, but couldn't find one.
Just to mention, this is a fresh Ubuntu 14.04 setup. I didn't install any other services except for Java. The reason I need Docker is to use the Docker image of ShareLaTeX.
I'm thankful for any help!
Here's what I tried/found out, hoping that it will save you some time or even help you solve it.
Docker's download script is trying to identify the kernel through uname -r to be able to install the right kernel extras for your host.
I suspect two problems:
My provider (united-hoster.de), and probably yours, uses customized kernel images (e.g. 3.13.0-042stab108.2) for virtual hosts. Since the script explicitly looks for -generic in the name, the lookup fails.
While the naming problem would be easy to fix, I wasn't able to install the generic kernel extras with my hoster's custom kernel. Upgrading the kernel does not work either, since the kernel is shared with all users/vHosts on the same physical machine and changing it would affect them all (stated in a support ticket).
To get around that:
I skipped it, hoping that Docker would work without AUFS support, but it didn't.
I tried to force Docker to use devicemapper instead, but to no avail (a sketch of that approach follows below).
I see two options: get a dedicated host so you can mess with kernels and filesystems (or at least let the Docker installer do it), or install the binaries manually.
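For reference, forcing the devicemapper storage driver on an upstart-based Ubuntu 14.04 host was typically done through /etc/default/docker; a sketch, assuming your init script reads that file:
# /etc/default/docker
DOCKER_OPTS="--storage-driver=devicemapper"
Then restart with sudo service docker restart.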
You need to start Docker:
sudo start docker
and then
sudo docker run hello-world
I faced the same problem on Ubuntu 14.04 and solved it.
See the comment by Nino-K: https://github.com/docker-library/hello-world/issues/3