I went through the list of Debian packages in bio-ngs (http://blends.debian.org/med/tasks/bio-ngs) and tried to install them in a Docker container based on Ubuntu Precise.
I did it manually, by running sudo docker run [...] /bin/bash and doing apt-get install on each package.
I then tried to docker commit and docker push the container with everything in bio-ngs, and it tried to upload the 1.215 GB of data in one go.
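Roughly, the workflow looked like this (a sketch; the run flags, package name, and image name are illustrative placeholders, not my exact commands):
$ sudo docker run -it ubuntu:precise /bin/bash
# inside the container, repeated for each bio-ngs package:
apt-get update && apt-get install -y samtools
exit
$ sudo docker commit <container-id> myuser/bio-ngs
$ sudo docker push myuser/bio-ngs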
A few seconds after starting the process, it failed with this error:
4160cce5fef0: Pushing [=> ] 24.42 MB/1.215 GB 1h6m34s
2014/02/19 17:07:37 Failed to upload layer: Put https://registry-1.docker.io/v1/images/4160cce5fef01ae777b856c15a42e0a632021d1891c33f29018024e35aed60be/layer: write tcp 54.224.119.89:443: broken pipe
Any ideas?
I'm a complete newcomer to Docker, so the following questions might be a bit naive, but I'm stuck and I need help.
I'm trying to reproduce some results from a research paper. The authors released their code along with a specification of how to build a Docker image to reproduce their results. The relevant bit is copied below:
I believe I installed Docker correctly:
$ docker --version
Docker version 19.03.13, build 4484c46d9d
$ sudo docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
However, when I try to check that my nvidia-docker installation was successful, I get the following error:
$ sudo docker run --gpus all --rm nvidia/cuda:10.1-base nvidia-smi
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: nvml error: driver not loaded\\\\n\\\"\"": unknown.
It looks like the key error is:
nvidia-container-cli: initialization error: nvml error: driver not loaded
I don't have a GPU locally and I'm finding conflicting information on whether CUDA needs to be installed before NVIDIA Docker. For instance, this NVIDIA moderator says "A proper nvidia docker plugin installation starts with a proper CUDA install on the base machine."
My questions are the following:
Can I install NVIDIA Docker without having CUDA installed?
If so, what is the source of this error and how do I fix it?
If not, how do I create this Docker image to reproduce the results?
Can I install NVIDIA Docker without having CUDA installed?
Yes, you can. The README states that nvidia-docker only requires the NVIDIA GPU driver and the Docker engine to be installed:
Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver needs to be installed
If so, what is the source of this error and how do I fix it?
That's either because you don't have a GPU locally, it's not an NVIDIA one, or something went wrong when you installed the drivers. If you have a CUDA-capable GPU, I recommend following the NVIDIA guide to install the drivers. If you don't have a GPU locally, you can still build an image with CUDA, then move it somewhere that does have a GPU.
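For example, a minimal sketch of building on a GPU-less machine and moving the result (image name and host are placeholders):
$ docker build -t myimage .                      # builds fine without a GPU
$ docker save myimage | gzip > myimage.tar.gz
$ scp myimage.tar.gz user@gpu-host:~
$ ssh user@gpu-host 'gunzip -c myimage.tar.gz | docker load'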
If not, how do I create this Docker image to reproduce the results?
The problem is that even if you manage to get rid of CUDA in the Docker image, there is software inside it that requires it. In this case fixing the Dockerfile seems unnecessary to me: you could just ignore Docker and start fixing the code to run it on a CPU.
I think you need

ENV NVIDIA_VISIBLE_DEVICES=void

before your RUN steps, then

RUN <your work>

and finally

ENV NVIDIA_VISIBLE_DEVICES=all
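Put together, a minimal Dockerfile sketch of that pattern (the base image and build command are placeholders, not the paper's actual spec):
FROM nvidia/cuda:10.1-base
# hide GPUs during build so nothing tries to load the (absent) driver
ENV NVIDIA_VISIBLE_DEVICES=void
RUN ./build.sh
# make GPUs visible again at runtime
ENV NVIDIA_VISIBLE_DEVICES=all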
When trying to get (pull or run) the smtp4dev Docker image, I got the following error message: /usr/bin/docker-current: unknown blob.
I'm trying to run it from:
a CentOS VM,
with Docker version 1.13.1, build cccb291/1.13.1
Please find my terminal output below:
sudo docker run --rm -p 3001:80 -p 2525:25 rnwood/smtp4dev:3.1.0-ci2020052101
Unable to find image 'rnwood/smtp4dev:3.1.0-ci2020052101' locally
Trying to pull repository rnwood/smtp4dev ...
3.1.0-ci2020052101: Pulling from rnwood/smtp4dev
68ced04f60ab: Downloading [=======> ] 3.898 MB/27.09 MB
4ddb1a571238: Downloading [===========> ] 3.784 MB/17.06 MB
94b78a0446e2: Download complete
b48f8e1b0b06: Downloading
a41ea3d79519: Waiting
7064c9d40b9c: Waiting
/usr/bin/docker-current: unknown blob.
See '/usr/bin/docker-current run --help'.
Thanks in advance for your support.
Following the latest update of smtp4dev, it now seems to be working; no root cause found.
The best hypothesis: a problem with the network proxy, which caused fetching the Docker image to fail.
Stay tuned...
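If a proxy really was the culprit, Docker's documented way of pointing the daemon at one is a systemd drop-in; a sketch (proxy host and port are illustrative):
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
followed by sudo systemctl daemon-reload and sudo systemctl restart docker.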
If you launch docker run by itself it works; if you do this with docker-compose it doesn't:
roman@debian ~/D/O/devops> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:083de497cff944f969d8499ab94f07134c50bcf5e6b9559b27182d3fa80ce3f7
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
roman@debian ~/D/O/devops> docker-compose build app
Building app
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
roman@debian ~/D/O/devops>
OK, it's solved. I had previously installed Compose from the repository; now I installed it through pip and it's working.
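For the record, the switch amounted to something like this (a sketch; the distro package name is an assumption):
$ sudo apt-get remove docker-compose    # remove the repository package (name assumed)
$ pip install --user docker-compose
$ docker-compose --version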
Is there a way I can download a Docker image/container using, for example, Firefox, and not using the built-in docker pull?
I am blocked by the company firewall and proxy, and I can't get a hole through it.
My problem is that I cannot use Docker to get images; that is, docker save/pull and the other Docker-supplied functions are blocked by the firewall.
Just an alternative: this is what I did in my organization for the couchbase image, where I was blocked by a proxy.
On my personal laptop (OS X)
~$ docker save couchbase > couchbase.tar
~$ ls -lh couchbase.tar
-rw------- 1 vikas devops 556M 12 Dec 21:15 couchbase.tar
~$ xz -9 couchbase.tar
~$ ls -lh couchbase.tar.xz
-rw-r--r-- 1 vikas staff 123M 12 Dec 22:17 couchbase.tar.xz
Then I uploaded the compressed tarball to Dropbox and downloaded it on my work machine. For some reason Dropbox was open :)
On my work laptop (CentOS 7)
$ docker load < couchbase.tar.xz
References
https://docs.docker.com/engine/reference/commandline/save/
https://docs.docker.com/engine/reference/commandline/load/
I just had to deal with this issue myself: downloading an image using a machine with Internet access but no Docker client, for use on another restricted machine that has the Docker client but no Internet access. I posted my question to the DevOps Stack Exchange site:
Downloading Docker Images from Docker Hub without using Docker
With help from the Docker Community I was able to find a resolution to my problem. What follows is my solution.
So it turns out that the Moby Project has a shell script on the Moby GitHub account which can download images from Docker Hub in a format that can be imported into Docker:
download-frozen-image-v2.sh
The usage syntax for the script is given by the following:
download-frozen-image-v2.sh target_dir image[:tag][#digest] ...
The image can then be imported with tar and docker load:
tar -cC 'target_dir' . | docker load
To verify that the script works as expected, I downloaded an Ubuntu image from Docker Hub and loaded it into Docker:
user@host:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@host:~$ tar -cC 'ubuntu' . | docker load
user@host:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
In practice I would have to first copy the data from the Internet client (which does not have Docker installed) to the target/destination machine (which does have Docker installed):
user@nodocker:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@nodocker:~$ tar -C 'ubuntu' -cf 'ubuntu.tar' .
user@nodocker:~$ scp ubuntu.tar user@hasdocker:~
and then load and use the image on the target host:
user@hasdocker:~$ docker load -i ubuntu.tar
user@hasdocker:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
I adapted a Python script to have an OS-independent solution:
docker-drag
Use it like this, and it will create a TAR archive that you will be able to import using docker load:
python docker_pull.py hello-world
python docker_pull.py alpine:3.9
python docker_pull.py kalilinux/kali-linux-docker
Use Skopeo. It is a tool made specifically for that (and other) purposes.
After installing it, simply execute:
mkdir ubuntu
skopeo --insecure-policy copy docker://ubuntu dir:./ubuntu
Copy these files and import them as you like.
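For instance, Skopeo can also write a tarball that docker load understands directly, via its docker-archive transport (the tag and file name here are just examples):
$ skopeo copy docker://ubuntu:latest docker-archive:ubuntu.tar:ubuntu:latest
$ docker load -i ubuntu.tar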
First, check if your Docker daemon is configured for using the proxy. With boot2docker and docker-machine, for instance, this is done on docker-machine create, with the --engine-env option.
If this is just a certificate issue (i.e., Firefox does access Docker Hub), try and install that certificate:
openssl s_client -connect index.docker.io:443 -showcerts </dev/null | openssl x509 -outform PEM > docker.pem
sudo cp docker.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
sudo systemctl restart docker
sudo docker run hello-world
The other workaround (not a recommended solution) would be to access Docker Hub without relying on the certificate, using --insecure-registry:
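On a current engine that is configured in /etc/docker/daemon.json; a sketch (the registry host is illustrative, and this weakens TLS verification for that registry):
{
  "insecure-registries": ["myregistry.example.com:5000"]
}
followed by a restart of the Docker daemon.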
If the firewall is actively blocking any Docker pull, to the point you can't even access Docker Hub from Firefox, then you would need to docker save/docker load an image archive. Save it from a machine where you did access Docker Hub (and where the docker pull succeeded). Load it on your corporate machine (after approval of your IT system administrators, of course).
Note: you cannot easily "just" download an image, because it is often based on top of other images which you would need to download too. That is what docker pull does for you. And that is what docker save does too (create one archive composed of all the necessary images).
The OP Ephreal adds in the comments:
[I] didn't get my corp image to work either.
But I found that I could download the Dockerfile and recreate the image myself from scratch.
This is essentially the same as downloading the image.
So, by definition, a Docker pull client command actually needs to talk to a Docker daemon, because the Docker daemon assembles layers one by one for you.
Think of it as a POST request: it causes a mutation of state in the Docker daemon itself. You're not 'pulling' anything over HTTP when you do a pull.
You can pull all the individual layers over REST from the Docker registry, but that won't actually be the same semantics as a pull, because pull is an action that specifically tells the daemon to go and get all the layers for an image you care about.
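To illustrate, these are roughly the registry v2 API calls a pull performs under the hood (a sketch against Docker Hub, using curl and jq; ubuntu:latest is just an example):
$ TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/ubuntu:pull" | jq -r .token)
$ curl -s -H "Authorization: Bearer $TOKEN" \
    -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    https://registry-1.docker.io/v2/library/ubuntu/manifests/latest
# each layer digest listed in the manifest is then fetched from /v2/library/ubuntu/blobs/<digest>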
Another possibility might be an option for you if your company firewall (and policy) allows connecting to a remote SSH server. In that case you can simply set up an SSH tunnel and route any traffic to the Docker registry through it.
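A sketch of that idea, assuming you have SSH access to a host outside the firewall and a Docker version whose daemon honors a SOCKS proxy through HTTPS_PROXY (both of these are assumptions):
$ ssh -N -D 1080 user@outside.example.com &
# then, in a systemd drop-in for the daemon:
# Environment="HTTPS_PROXY=socks5://127.0.0.1:1080"
$ sudo systemctl daemon-reload && sudo systemctl restart docker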
The answer and solution to my original question is that I found I could download the Dockerfile and all the necessary support files and recreate the image myself from scratch. This is essentially the same as downloading the image.
This solution has been in the question and comments above; I just pinned it here.
This is no longer an issue for me, though, since my company has changed its policy and now allows docker pull commands to work.
Thanks @Ham Co for the answer.
I adapted a Golang tool to have an OS-independent solution:
golang http pull docker image
./gopull download redis
This gets a Docker-importable archive, redis.tar.
References:
https://github.com/NotGlop/docker-drag
Today I installed Docker for the first time on Fedora 21. Now I need to change the location of the Docker images folder from the default /var/lib/docker.
After copying the files (devicemapper subfolder skipped, Docker service stopped) and changing /etc/sysconfig/docker (adding the -g option), I ran the Docker service again; no problems, devicemapper/metadata was created.
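For reference, the change to /etc/sysconfig/docker amounted to something like this (the option line is my reconstruction; the directory matches the thin_check paths used later):
OPTIONS='-g /home/docker'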
Next, I'm trying to pull the first image:
docker pull centos
But this error occurred:
docker pull centos
latest: Pulling from docker.io/centos
6941bfcbbfca: Download complete
6941bfcbbfca: Error downloading dependent layers
41459f052977: Downloading [==========================> ] 41.61 MB/77.28 MB
fd44297e2ddb: Error pulling image (latest) from docker.io/centos, endpoint: https://registry-1.docker.io/v1/, Driver devicemapper failed to create image rootfs 6941bfcbbfca7f4f48becd38f2639157042bfd44297e2ddb: Error pulling image (latest) from docker.io/centos, Driver devicemapper failed to create image rootfs 6941bfcbbfca7f4f48becd38f2639157042b5cf9ab8c080f1d8b6d047380ecfc: Error running DeviceCreate (createSnapDevice) dm_task_run failed
FATA[0013] Error pulling image (latest) from docker.io/centos, Driver devicemapper failed to create image rootfs 6941bfcbbfca7f4f48becd38f2639157042b5cf9ab8c080f1d8b6d047380ecfc: Error running DeviceCreate (createSnapDevice) dm_task_run failed
If I try this without changing the location, there are no problems.
How do I fix it?
1) service docker stop
2) thin_check /home/docker/devicemapper/devicemapper/metadata
3) thin_check --clear-needs-check-flag /home/docker/devicemapper/devicemapper/metadata
4) service docker start
As seen in issue 3721, this generally is a disk space issue.
The problem is that docker rmi doesn't always work in that case:
Getting this in v1.2 on CentOS 6.5 if a disk fills up before the image finishes pulling. Unable to rmi the incomplete image.
One "nuclear" option:
removing everything in /var/lib/docker worked out. Thanks
Another cause can be a filesystem layer that is shared between two images being downloaded.
This worked for me:
mv -f /var/lib/docker/* /data/tmp
systemctl restart docker.service
docker system prune -a
This can be caused by the disk filling up. You can check this with df -h; if so, clean up unwanted files, or prune Docker with docker system prune. Once space has been freed, stop Docker, because thin_check cannot be run on live metadata:
systemctl stop docker
thin_check /var/lib/docker/devicemapper/devicemapper/metadata
Check for errors; if there are none, clear the check flag with:
thin_check --clear-needs-check-flag /var/lib/docker/devicemapper/devicemapper/metadata
Then start Docker:
systemctl start docker.service
If thin_check is not installed, on CentOS:
yum install -y device-mapper-persistent-data
or on Debian/Ubuntu:
apt-get install -y thin-provisioning-tools
Got this today.
sudo reboot
Problem solved.
I encountered another dm_task_run issue during docker import. In my case, yum erase docker.x86_64; yum install docker.x86_64; systemctl start docker.service worked.