Is there a way I can download a Docker image/container using, for example, Firefox, and not using the built-in docker pull?
I am blocked by the company firewall and proxy, and I can't get a hole through it.
My problem is that I cannot use Docker itself to get images; that is, docker pull, docker save, and other Docker-supplied functions are blocked by the firewall.
Just an alternative: this is what I did in my organization for the couchbase image when I was blocked by a proxy.
On my personal laptop (OS X)
~$ docker save couchbase > couchbase.tar
~$ ls -lh couchbase.tar
-rw------- 1 vikas devops 556M 12 Dec 21:15 couchbase.tar
~$ xz -9 couchbase.tar
~$ ls -lh couchbase.tar.xz
-rw-r--r-- 1 vikas staff 123M 12 Dec 22:17 couchbase.tar.xz
Then, I uploaded the compressed tarball to Dropbox and downloaded it on my work machine. For some reason Dropbox was open :)
On my work laptop (CentOS 7)
$ docker load < couchbase.tar.xz
References
https://docs.docker.com/engine/reference/commandline/save/
https://docs.docker.com/engine/reference/commandline/load/
I just had to deal with this issue myself: downloading an image on a restricted machine that has Internet access but no Docker client, for use on another restricted machine that has the Docker client but no Internet access. I posted my question to the DevOps Stack Exchange site:
Downloading Docker Images from Docker Hub without using Docker
With help from the Docker Community I was able to find a resolution to my problem. What follows is my solution.
So it turns out that the Moby Project has a shell script on the Moby GitHub account which can download images from Docker Hub in a format that can be imported into Docker:
download-frozen-image-v2.sh
The usage syntax for the script is given by the following:
download-frozen-image-v2.sh target_dir image[:tag][#digest] ...
The image can then be imported with tar and docker load:
tar -cC 'target_dir' . | docker load
To verify that the script works as expected, I downloaded an Ubuntu image from Docker Hub and loaded it into Docker:
user@host:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@host:~$ tar -cC 'ubuntu' . | docker load
user@host:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
In practice I would have to first copy the data from the Internet client (which does not have Docker installed) to the target/destination machine (which does have Docker installed):
user@nodocker:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@nodocker:~$ tar -C 'ubuntu' -cf 'ubuntu.tar' .
user@nodocker:~$ scp ubuntu.tar user@hasdocker:~
and then load and use the image on the target host:
user@hasdocker:~$ docker load -i ubuntu.tar
user@hasdocker:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
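To get the script itself onto the Internet-connected machine (which has no Docker), plain curl or wget is enough. A minimal sketch, assuming the script still lives in the contrib directory of the moby/moby repository:
curl -fsSL -o download-frozen-image-v2.sh https://raw.githubusercontent.com/moby/moby/master/contrib/download-frozen-image-v2.sh
chmod +x download-frozen-image-v2.sh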
I adapted a Python script to have an OS-independent solution:
docker-drag
Use it like this, and it will create a TAR archive that you will be able to import using docker load:
python docker_pull.py hello-world
python docker_pull.py alpine:3.9
python docker_pull.py kalilinux/kali-linux-docker
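On the machine that does have Docker, the resulting archive can then be loaded as usual. A minimal sketch, assuming the script wrote an archive named hello-world.tar (the actual file name depends on the image you pulled):
docker load -i hello-world.tar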
Use Skopeo. It is a tool made specifically for that (and other) purposes.
After installing it, simply execute:
mkdir ubuntu
skopeo --insecure-policy copy docker://ubuntu ./ubuntu
Copy these files and import as you like.
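If you would rather end up with a single archive that docker load accepts directly, Skopeo can also write to its docker-archive transport. A minimal sketch (the image reference, file name, and tag are just examples):
skopeo --insecure-policy copy docker://docker.io/library/ubuntu:latest docker-archive:ubuntu.tar:ubuntu:latest
docker load -i ubuntu.tar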
First, check if your Docker daemon is configured for using the proxy. With boot2docker and docker-machine, for instance, this is done on docker-machine create, with the --engine-env option.
If this is just a certificate issue (i.e., Firefox does access Docker Hub), try and install that certificate:
openssl s_client -connect index.docker.io:443 -showcerts </dev/null | openssl x509 -outform PEM > docker.pem
sudo cp docker.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
sudo systemctl restart docker
sudo docker run hello-world
The other workaround (not a recommended solution) would be to access Docker Hub without relying on the certificate, using --insecure-registry.
If the firewall is actively blocking any Docker pull, to the point you can't even access Docker Hub from Firefox, then you would need to docker save/docker load an image archive. Save it from a machine where you did access Docker Hub (and where the docker pull succeeded). Load it on your corporate machine (after approval of your IT system administrators, of course).
Note: you cannot easily "just" download an image, because it is often based on top of other images which you would need to download too. That is what docker pull does for you. And that is what docker save does too (create one archive composed of all the necessary images).
The OP Ephreal adds in the comments:
[I] didn't get my corp image to work either.
But I found that I could download the Docker file and recreate the image my self from scratch.
This is essentially the same as downloading the image.
So, by definition, a Docker pull client command actually needs to talk to a Docker daemon, because the Docker daemon assembles layers one by one for you.
Think of it as a POST request - it's causing a mutation of state, in the Docker daemon itself. You're not 'pulling' anything over HTTP when you do a pull.
You can pull all the individual layers over REST from the Docker registry, but that won't actually be the same semantics as a pull, because pull is an action that specifically tells the daemon to go and get all the layers for an image you care about.
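To illustrate what that per-layer REST access looks like, here is a sketch against the public Docker Hub registry API, using curl and jq; the library/ubuntu repository is just an example. You first obtain a pull token from the token service, then fetch the manifest, which lists the layer digests you would still have to download and assemble yourself:
# get an anonymous pull token for library/ubuntu
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/ubuntu:pull" | jq -r .token)
# fetch the image manifest; each entry under "layers" is a blob that would still need downloading
curl -s -H "Authorization: Bearer $TOKEN" \
     -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
     https://registry-1.docker.io/v2/library/ubuntu/manifests/latest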
Another possibility might be an option for you if your company firewall (and policy) allows connecting to a remote SSH server. In that case you can simply set up an SSH tunnel and route any traffic to the Docker registry through it.
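A sketch of that tunnel idea, assuming you may SSH to a host called remote-host that can reach the registry, and that your Docker daemon is recent enough to honor a socks5:// proxy URL in its environment (both are assumptions; adjust to your setup):
# open a SOCKS5 tunnel on local port 1080 through the reachable SSH server
ssh -f -N -D 1080 user@remote-host
# point the Docker daemon at the tunnel via a systemd drop-in, then restart it
sudo mkdir -p /etc/systemd/system/docker.service.d
printf '[Service]\nEnvironment="HTTPS_PROXY=socks5://127.0.0.1:1080"\n' | sudo tee /etc/systemd/system/docker.service.d/ssh-tunnel-proxy.conf
sudo systemctl daemon-reload
sudo systemctl restart docker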
The answer and solution to my original question is that I found I could download the Dockerfile and all the necessary support files and recreate the image myself from scratch. This is essentially the same as downloading the image.
This solution has been in the question and comments above; I have just pinned it here.
This is no longer an issue for me, though, since my company has changed its policy and allowed docker pull commands to work.
Thanks @Ham Co for the answer.
I adapted a Golang tool to have an OS-independent solution:
golang http pull docker image
./gopull download redis
to get a Docker-importable archive, redis.tar.
References:
https://github.com/NotGlop/docker-drag
Related
I'm trying to push docker images to artifactory as part of a CI jenkins job.
I have an Artifactory installed with url art:8080
I installed Docker on Win2016 and built my dockerfile.
Now I stuck in how to push the output image of the dockerfile.
I tried:
docker tag microsoft/windowsservercore art:8080/imageID:latest
docker push art:8080/docker-local:latest
but I get an error stating:
Get https://art:8080/v2/: dial tcp: lookup artifactory: getaddrinfow: No such host is known.
Where is the https getting from?
How do I push to the correct local docker repo in my artifactory?
Docker requires you to use HTTPS. What I do (I use Nexus, not Artifactory) is set up a reverse proxy using nginx. Here is the doc for that: https://www.jfrog.com/confluence/display/RTF/Configuring+a+Reverse+Proxy
Alternatively, you can set Docker to not require HTTPS (though this is not recommended).
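If you do go the non-HTTPS route anyway, a minimal sketch on a Linux host (assuming art:8080 is the registry endpoint; on Windows the equivalent daemon.json lives under C:\ProgramData\docker\config\) is to declare the registry as insecure and restart the daemon:
sudo tee /etc/docker/daemon.json <<'EOF'
{ "insecure-registries": ["art:8080"] }
EOF
sudo systemctl restart docker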
Since you're asking how to pull, these steps worked for an enterprise Artifactory whose CA certificates are not trusted outside the organization:
$ sudo mkdir -p /etc/docker/certs.d/docker-<artifactory-resolverhost>
$ sudo cp /tmp/ca.crt /etc/docker/certs.d/docker-<artifactory-resolverhost>
$ sudo chown root:docker /etc/docker/certs.d/docker-<artifactory-resolverhost>/ca.crt
$ sudo chmod 740 /etc/docker/certs.d/docker-<artifactory-resolverhost>/ca.crt
Here ca.crt is the Base64 chain of trusted CA certificates and <artifactory-resolverhost> is the resolver hostname of the repository, for example repo.jfrog.org if you were using the public repository. To confirm, you can ping <artifactory-resolverhost> to make sure it is reachable from your network.
Then you should be able to pull an image with your user belonging to the docker group, for example:
docker pull docker-<artifactory-resolverhost>/<repository-name>/rhel7-tomcat:8.0.18_4
You can then view the downloaded image with the command below:
docker images
Is there a gcloud API or other command line interface (CLI) to access the list of published container images in the private Google Container Registry? (That is the container registry inside a Google Cloud Platform project)
gcloud container does not seem to help:
$ gcloud container
Usage: gcloud container [optional flags] <group | command>
group may be clusters | operations
command may be get-server-config
Deploy and manage clusters of machines for running containers.
flags:
--zone ZONE, -z ZONE The compute zone (e.g. us-central1-a) for the cluster
global flags:
Run `gcloud -h` for a description of flags available to all commands.
command groups:
clusters Deploy and teardown Google Container Engine clusters.
operations Get and list operations for Google Container Engine
clusters.
commands:
get-server-config Get Container Engine server config.
I also don't want to use gcloud docker to list images because this wants to connect to a particular docker daemon that I don't have. Unless there is a way to tell gcloud docker to connect to a remote public docker daemon that can read the private containers pushed to the registry through my project.
We just released a new command to list the images in your repository! You can try it out with:
gcloud alpha container images list --repository=gcr.io/$MYREPOSITORY
If you want to see the specific tags for an image you can use:
gcloud alpha container images list-tags gcr.io/$MYREPOSITORY/$MYIMAGE
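If you only need to check whether a particular version is present, the standard gcloud list flags can narrow it down. A sketch, assuming a hypothetical tag 1.2.3:
gcloud alpha container images list-tags gcr.io/$MYREPOSITORY/$MYIMAGE --filter='tags:1.2.3' --format='get(digest)'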
The answer given by Robert Bailey is good for certain tasks, but might be missing what you specifically want to do. Nonetheless, your comments in reply to his answer are not so much faults of his answer as of your own understanding of what the commands which "fail" actually mean to do.
As far as your second comment,
Using docker I get the following error (for the reasons mentioned
above; I also edited the question): Cannot connect to the Docker daemon. Is the docker daemon running on this host?
This is a result of the docker daemon not running. Check if it's running via ps aux | grep docker. You can refer to the Docker documentation to determine how to properly install and run it.
As far as your first comment,
Using curl I get: {"errors":[{"code":"DENIED","message":"Failed to read tags for repository '<my_project>/<my_image>'"}]}. I have to
authenticate somehow to access the images in a private registry. I
don't want to use docker because that means I have to have a docker
daemon available. I only want to see if a container image with a
particular version is in the Container Registry. So what I need is an
API to the Container Registry in the Google Developer Console.
You wouldn't be able to curl the image unless it was public, as mentioned in Robert's latest comment, or unless you somehow provided some great oauth headers during the curl's invocation.
You should use gcloud docker to attempt to list the images in the registry, as you would for other docker registries. The gcloud container command group is the wrong one for your desired task. You can see below an output from gcloud version 96.0.0 (latest as of this comment) for the docker command group:
$ gcloud docker
Usage: docker [OPTIONS] COMMAND [arg...]
docker daemon [ --help | ... ]
docker [ --help | -v | --version ]
A self-sufficient runtime for containers.
Options:
--config=~/.docker Location of client config files
-D, --debug=false Enable debug mode
--disable-legacy-registry=false Do not contact legacy registries
-H, --host=[] Daemon socket(s) to connect to
-h, --help=false Print usage
-l, --log-level=info Set the logging level
--tls=false Use TLS; implied by --tlsverify
--tlscacert=~/.docker/ca.pem Trust certs signed only by this CA
--tlscert=~/.docker/cert.pem Path to TLS certificate file
--tlskey=~/.docker/key.pem Path to TLS key file
--tlsverify=false Use TLS and verify the remote
-v, --version=false Print version information and quit
Commands:
attach Attach to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on a container or image
kill Kill a running container
load Load an image from a tar archive or STDIN
login Register or log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
network Manage Docker networks
pause Pause all processes within a container
port List port mappings or a specific mapping for the CONTAINER
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart a container
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save an image(s) to a tar archive
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop a running container
tag Tag an image into a repository
top Display the running processes of a container
unpause Unpause all processes within a container
version Show the Docker version information
volume Manage Docker volumes
wait Block until a container stops, then print its exit code
Run 'docker COMMAND --help' for more information on a command.
You should use gcloud docker search gcr.io/project-id to check which images are in the repository. gcloud has your credentials, so it can talk to the private registry as long as you're authenticated as an appropriate user on the project.
Finally, as an added resource: The Cloud Platform docs have a whole article about working with Google Container Registry.
If you know the project that is hosting the images (e.g. google-containers) you can list images with
gcloud docker search gcr.io/google_containers
For an individual image (e.g. the pause image in the google-containers project), you can check the versions with
curl https://gcr.io/v2/google-containers/pause/tags/list
I've just found a far simpler way to check for specific images. Once you have authenticated gcloud, use it to generate access tokens for reading from your private registry:
curl -u "oauth2accesstoken:$(gcloud auth print-access-token)" https://gcr.io/v2/<projectName>/<imageName>/tags/list
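Since the goal was only to check whether a particular version exists, you can also grep the tag list from that same call. A sketch, with 1.2.3 standing in for the version you care about:
curl -s -u "oauth2accesstoken:$(gcloud auth print-access-token)" https://gcr.io/v2/<projectName>/<imageName>/tags/list | grep -q '"1.2.3"' && echo present || echo missing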
My best solution so far, without having a local Docker available and without being able to connect to a remote Docker (which would still require at least the local Docker client, though not a running local daemon), is to SSH into a Container Cluster instance that runs Docker, run the search there, and get the result back in my original script:
gcloud compute ssh <container_cluster_instance> -C "sudo gcloud docker search ..."
Of course, to avoid all verbose output (like SSH/terminal welcome messages), I use some arguments to silence the execution a bit:
gcloud compute ssh --ssh-flag="-q" "$INSTANCE_NAME" -o LogLevel=quiet -C "sudo gcloud docker search ..."
How can I share a folder between my Windows files and a Docker container, by mounting a volume with a simple --volume command, using Docker Toolbox on Windows?
I'm using "Docker Quickstart Terminal" and when I try this:
winpty docker run -it --rm --volume /C/Users/myuser:/myuser ubuntu
I have this error:
Invalid value "C:\\Users\\myuser\\:\\myuser" for flag --volume: bad mount mode specified : \myuser
See 'docker run --help'.
Following this, I also tried
winpty docker run -it --rm --volume "//C/Users/myuser:/myuser" ubuntu
and got
Invalid value "\\\\C:\\Users\\myuser\\:\\myuser" for flag --volume: \myuser is not an absolute path
See 'docker run --help'.
This is an improvement on the selected answer, because that answer is limited to the c:\Users folder. If you want to create a volume using a directory outside of c:\Users, this is an extension.
On Windows 7, I used Docker Toolbox. It uses VirtualBox.
Open VirtualBox
Select the machine (in my case default).
Right-click and select the Settings option
Go to Shared Folders
Include a new machine folder.
For example, in my case I have included:
**Name**: c/dev
**Path**: c:\dev
Click OK and close
Open "Docker Quickstart Terminal" and restart the docker machine.
Use this command:
$ docker-machine restart
To verify that it worked, follow these steps:
SSH to the docker machine.
Using this command:
$ docker-machine ssh
Go to the folder that you have shared/mounted.
In my case, I use this command
$ cd /c/dev
Check the owner of the folder. You could use "ls -all" and verify that the owner is "docker".
You will see something like this:
docker#default:/c/dev$ ls -all
total 92
drwxrwxrwx 1 docker staff 4096 Feb 23 14:16 ./
drwxr-xr-x 4 root root 80 Feb 24 09:01 ../
drwxrwxrwx 1 docker staff 4096 Jan 16 09:28 my_folder/
In that case, you will be able to create a volume for that folder.
You can use these commands:
docker create -v /c/dev/:/app/dev --name dev image
docker run -d -it --volumes-from dev image
or
docker run -d -it -v /c/dev/:/app/dev image
Both commands work for me. I hope this will be useful.
This is actually a known issue of the project, and there are two working workarounds:
Creating a data volume:
docker create -v //c/Users/myuser:/myuser --name data hello-world
winpty docker run -it --rm --volumes-from data ubuntu
SSHing directly into the Docker host:
docker-machine ssh default
And from there doing a classic:
docker run -it --rm --volume /c/Users/myuser:/myuser ubuntu
If you are looking for a solution that resolves all the Windows issues and makes Docker work on Windows the same way as on Linux, then see below. I tested this and it works in all cases. I'm also showing how I got there (the steps and thinking process). I've also written an article about using Docker and dealing with Docker issues here.
Solution 1: Use VirtualBox (if you think it's not a good idea, see Solution 2 below)
Open VirtualBox (you have it already installed along with the docker tools)
Create virtual machine
(This is optional; you can skip it and forward ports from the VM.) Create a second Ethernet card, bridged; this way it will receive an IP address from your network (it will have an IP like the docker machine)
Install an Ubuntu LTS release that is older than one year
Install docker
Add shared directories to the virtual machine and automount your project directories (this way you have access to the project directories from Ubuntu while still being able to work in Windows); see the sketch after this list
Done
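A command-line sketch of the shared-directory step above (the VM name ubuntu-dev, share name projects, and host path C:\projects are placeholders; VBoxManage runs on the Windows host, the rest inside the Ubuntu guest with Guest Additions installed):
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" sharedfolder add "ubuntu-dev" --name projects --hostpath "C:\projects" --automount
# inside the guest: automounted shares appear under /media/sf_<name>, readable by the vboxsf group
sudo usermod -aG vboxsf $USER
ls /media/sf_projects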
Bonus:
Everything is working the same way as on Linux
Pause/Unpause the dockerized environment whenever you want
Solution 2: Use VirtualBox (this is very similar to Solution 1, but it also shows the thinking process, which might be useful when solving similar issues)
Read that somebody moved the folders to /C/Users/Public and that it works: https://forums.docker.com/t/sharing-a-volume-on-windows-with-docker-toolbox/4953/2
Try it, and realize that it doesn't make much sense in your case.
Read entire page here https://github.com/docker/toolbox/issues/607 and try all solutions listed on page
Find this page (the one you are reading now) and try all the solutions from other comments
Find somewhere information that setting COMPOSE_CONVERT_WINDOWS_PATHS=1 environment variable might solve the issue.
Stop looking for the solution for a few months
Go back and check the same links again
Cry deeply
Feel the enlightenment moment
Open VirtualBox (you have it already installed along with the docker tools)
Create a virtual machine with a second Ethernet card, bridged; this way it will receive an IP address from your network (it will have an IP like the docker machine)
Install an Ubuntu LTS release that is very recent (not older than a few months)
Notice that automounting is not really working and the integration is broken (clipboard sharing, etc.)
Delete virtual machine
Go out and have a drink
Rent expensive car and go with high speed on highway
Destroy the car and die
Respawn in front of your PC
Install an Ubuntu LTS release that is older than one year
Try to run docker
Notice it’s not installed
Install Docker with apt-get install docker
Install the suggested docker.io
Try to run docker-compose
Notice it’s not installed
apt-get install docker-compose
Try to run your project with docker-compose
Notice that it’s old version
Check your power level (it should be over 9000)
Search for how to install the latest version of Docker and find the official guide: https://docs.docker.com/install/linux/docker-ce/ubuntu/
Uninstall the current docker-compose and docker.io
Install docker using the official guide https://docs.docker.com/install/linux/docker-ce/ubuntu/
Add shared directories to the virtual machine and automount your project directories (this way you have access to the project directory from Ubuntu, so you can run any docker command)
Done
As of August 2016, Docker for Windows now uses Hyper-V directly instead of VirtualBox, so I think it is a little different. First share the drive in Settings, then use the C: drive letter format, but with forward slashes. For instance, I created an H:\t\REDIS directory and was able to see it mounted on /data in the container with this command:
docker run -it --rm -v h:/t/REDIS:/data redis sh
The same format, using the drive letter and a colon and then forward slashes as the path separator, worked both from the Windows command prompt and from Git Bash.
I found this question googling to find an answer, but I couldn't find anything that worked. Things would seem to work with no errors being thrown, but I just couldn't see the data on the host (or vice-versa). Finally I checked out the settings closely and tried the format they show:
So first, you have to share the whole drive with the Docker VM in Settings; I think that gives the 'docker-machine' VM running in Hyper-V access to that drive. Then you have to use the format shown there, which seems to exist only in this one image and in no documentation or questions I could find on the web:
docker run --rm -v c:/Users:/data alpine ls /data
Simply using double leading slashes worked for me on Windows 7:
docker run --rm -v //c/Users:/data alpine ls /data/
Taken from here: https://github.com/moby/moby/issues/12590
Try this:
Open Docker Quickstart Terminal. If it is already open, run $ cd ~ to make sure you are in the Windows user directory.
$ docker run -it -v /$(pwd)/ubuntu:/windows ubuntu
It will work if the error was due to a typo. You will get an empty folder named ubuntu in your user directory, and you will see this folder with the name windows in your Ubuntu container.
For those using VirtualBox who prefer a command-line approach:
1) Make sure the docker-machine is not running
Docker Quickstart Terminal:
docker-machine stop
2) Create the share between Windows and the docker-machine
Windows command prompt:
(Modify the following to fit your scenario. I feed my Apache httpd container from a directory synced via Dropbox.)
set VBOX=D:\Program Files\Oracle\VirtualBox\VBoxManage.exe
set VM_NAME=default
set NAME=c/htdocs
set HOSTPATH=%DROPBOX%\htdocs
"%VBOX%" sharedfolder add "%VM_NAME%" --name "%NAME%" --hostpath "%HOSTPATH%" --automount
3) Start the docker-machine and mount the volume in a new container
Docker Quickstart Terminal:
(Again, I am starting an Apache httpd container, hence that port exposing.)
docker-machine start
docker run -d --name my-apache-container-0 -p 80:80 -v /c/htdocs:/usr/local/apache2/htdocs my-apache-image:1.0
Sharing folders between Windows 7 and a Node.js image container, using VirtualBox and Docker Toolbox
using...
Docker Quickstart Terminal [QST]
Windows Explorer [WE]
let's start...
[QST] open Docker Quickstart Terminal
[QST] stop virtual-machine
$ docker-machine stop
[WE] open a windows explorer
[WE] go to the virtualBox installation dir
[WE] open a cmd and execute...
C:\Program Files\Oracle\VirtualBox>VBoxManage sharedfolder add "default" --name
"/d/SVN_FOLDERS/X2R2_WP6/nodejs" --hostpath "\\?\d:\SVN_FOLDERS\X2R2_WP6\nodejs" --automount
Check in the Oracle VirtualBox VM that the new shared folder has appeared.
[QST] start virtual-machine
$ docker-machine start
[QST] run container nodejs
docker stop nodejs
docker rm nodejs
docker run -d -it --rm --name nodejs -v /d/SVN_FOLDERS/X2R2_WP6/nodejs:/usr/src/app -w /usr/src/app node2
[QST] open bash to the container
docker exec -i -t nodejs /bin/bash
[QST] execute dir and you will see the shared files
I solved it!
Add a volume:
docker run -d -v my-named-volume:C:\MyNamedVolume testimage:latest
Mount a host directory:
docker run -d -v C:\Temp\123:C:\My\Shared\Dir testimage:latest
I installed Docker Toolbox just now, following their webpage.
I started Docker Quickstart Terminal and see the following:
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
docker is configured to use the default machine with IP 192.168.99.100
For help getting started, check out the docs at https://docs.docker.com
bash-3.2$
But when I try to perform docker run hello-world, this is what I see:
bash-3.2$ docker run hello-world
Unable to find image 'hello-world:latest' locally
Pulling repository docker.io/library/hello-world
Network timed out while trying to connect to https://index.docker.io/v1/repositories/library/hello-world/images. You may want to check your internet connection or if you are behind a proxy.
bash-3.2$
What's wrong?
I had the same problem this morning and the following fixed it for me:
$ docker-machine restart default # Restart the environment
$ eval $(docker-machine env default) # Refresh your environment settings
It appears that this is due to the Docker virtual machine getting itself into a strange state. There is an open github issue here
I installed Docker without the Toolbox on Windows 10, so the version that requires Hyper-V to be enabled.
For Docker version 1.12 I had to go into the taskbar, right click the Docker icon, select Settings -> Network and set the DNS server to fixed, so that it uses Google's DNS server at 8.8.8.8.
Once that setting was changed, it finally worked.
The simpler solution is to add the following entry to the /etc/default/docker file:
export http_proxy="http://HOST:PORT/"
and restart the Docker service:
service docker restart
Update August 2016
Using Docker for Mac (version 1.12.0), I was seeing issues of the form:
➜ docker pull node
Using default tag: latest
Pulling repository docker.io/library/node
Network timed out while trying to connect to https://index.docker.io/v1/repositories/library/node/images. You may want to check your internet connection or if you are behind a proxy.
This was resolved by updating my MacBook Pro wireless network settings to include the following DNS entry: 8.8.8.8
For further info, please see this (dated) issue which provided the answer given here.
I ran into this problem running Docker on my Mac (host) with the Docker VM in VirtualBox 5.10. It is a networking issue. The simple fix is to add a bridged network to the VirtualBox image. You can use the included NAT config present with the VM, but you need to change the SSH port from 50375 to 2375.
sudo service docker stop
sudo service docker start
works for me; somehow, sudo service docker restart didn't work
(RHEL7)
On Windows 7, and if you believe you are behind a proxy:
Log on to the default machine
$ docker-machine ssh default
Update the profile to update the proxy settings
docker@default:~$ sudo vi /var/lib/boot2docker/profile
Append the below as appropriate:
# replace with your office's proxy environment
export "HTTP_PROXY=http://PROXY:PORT"
export "HTTPS_PROXY=http://PROXY:PORT"
# you can add more no_proxy entries for your environment.
export "NO_PROXY=192.168.99.*,*.local,169.254/16,*.example.com,192.168.59.*"
Exit
docker@default:~$ exit
Restart docker machine
docker-machine restart default
Update environment settings
eval $(docker-machine env default)
The above steps are slightly tweaked from those given in the troubleshooting guide: https://docs.docker.com/toolbox/faqs/troubleshoot/#/update-varlibboot2dockerprofile-on-the-docker-machine
I ran into this exact same problem yesterday and none of the "popular" answers (like fixing DNS to 8.8.8.8) worked for me. I eventually happened across this link, and that did the trick ... https://github.com/docker/for-win/issues/16
Between Docker for Windows, Windows 10 and Hyper-V, there seems to be a problem during the virtual network adapter creation process. Specifically, you might end up with two "vEthernet (DockerNAT)" network adapters. Check this with Get-NetAdapter "vEthernet (DockerNAT)" (in an elevated PowerShell console). If the result shows more than one adapter, you can disable and rename it with:
$vmNetAdapter = Get-VMNetworkAdapter -ManagementOS -SwitchName DockerNAT
Get-NetAdapter "vEthernet (DockerNAT)" | ? { $_.DeviceID -ne $vmNetAdapter.DeviceID } | Disable-NetAdapter -Confirm:$False -PassThru | Rename-NetAdapter -NewName "OLD"
Then open up Device Manager and delete the disabled adapter (for some reason you can do this from here, but not from the Network and Sharing Center adapters view).
I assume that you have a network problem. Are you behind a proxy? Is it possible that it filters the connection to docker.io or blocks the docker user agent?
I installed the toolbox and ran your test. It works fine, here:
docker is configured to use the default machine with IP 192.168.99.101
For help getting started, check out the docs at https://docs.docker.com
bash-3.2$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
535020c3e8ad: Pull complete
af340544ed62: Already exists
library/hello-world:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:d5fbd996e6562438f7ea5389d7da867fe58e04d581810e230df4cc073271ea52
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker Hub account:
https://hub.docker.com
For more examples and ideas, visit:
https://docs.docker.com/userguide/
bash-3.2$
On Windows 10: just right-click on the systray Docker icon -> Settings... -> Reset -> Restart Docker.
I had this same problem with boot2docker and fixed it by restarting it with:
boot2docker restart
I just ran into this today with 1.10.1 and none of the existing solutions worked. I tried to restart, upgrade, regenerate certs, ...
I noticed that I had a lot of networks created on the machine. After removing them with:
docker network ls | grep bridge | awk '{print $1}' | xargs -n1 docker network rm
The DNS started working again.
Note: You may ignore errors about pre-defined networks
If you are behind a proxy, it is not enough to set the HTTP_PROXY and HTTPS_PROXY environment variables. You should set them during machine creation.
The parameter for this is --engine-env:
docker-machine create -d "virtualbox" --engine-env HTTP_PROXY=http://<PROXY>:<PORT> --engine-env HTTPS_PROXY=<PROXY>:<PORT> dev
In my case, installing Docker on Alpine Linux, I get the error:
Network timed out while trying to connect to https://index.docker.io/v1/repositories/library/........
Using the script here:
https://github.com/docker/docker/blob/master/contrib/download-frozen-image-v2.sh
Works. It downloads the image using curl and then shows you how to untar and 'docker load' it.
I tried the above methods of static DNS at 8.8.8.8 and disabling ipv6 (I didn't understand the proxy thing) and none of them worked for me.
EDIT 9/8/2016:
I was initially using Dropbear instead of OpenSSH. Reinstalling Alpine with OpenSSH fixed the problem.
The next problem was 'ApplyLayer exit status 1 stdout: stderr: chmod /bin/mount: permission denied' error during pull.
From (nixaid.com/grsec-in-docker/):
To build the Docker image, I had to disable the following grsec
protections. Modify the /etc/sysctl.d/grsec.conf as follows:
kernel.grsecurity.chroot_deny_chmod = 0
kernel.grsecurity.chroot_deny_mknod = 0
kernel.grsecurity.chroot_caps = 0 # related to a systemd package/CAP_SETFCAP
In Alpine's case, though, the file is:
/etc/sysctl.d/00-alpine.conf
Then reboot.
Restarting Docker or recreating the image did not help. I rebooted Windows to no avail.
Astoundingly, when I ssh'ed into the running container and did curl https://index.docker.io/v1/repositories/library/hello-world/images I got a perfectly valid response.
I used the Docker Toolbox with VirtualBox on 64bit Windows 10 Pro.
The solution in my case was to uninstall the old Docker version and install the new one that uses Hyper-V instead of VirtualBox.
Now Docker works again.
If you are behind a proxy, kindly use the commands below:
sudo mkdir /etc/systemd/system/docker.service.d
cd /etc/systemd/system/docker.service.d
sudo vi http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy-server-ip:port" "NO_PROXY=localhost,127.0.0.1"
sudo systemctl daemon-reload
sudo systemctl show --property=Environment docker
sudo systemctl restart docker
Try this to see if you can fetch the latest Ubuntu image:
sudo docker run -it ubuntu bash
Unable to find image ubuntu:latest locally
latest: Pulling from library/ubuntu
b3e1c725a85f: Pull complete
4daad8bdde31: Pull complete
63fe8c0068a8: Pull complete
4a70713c436f: Pull complete
bd842a2105a8: Pull complete
Digest: sha256:7a64bc9c8843b0a8c8b8a7e4715b7615e4e1b0d8ca3c7e7a76ec8250899c397a
Status: Downloaded newer image for ubuntu:latest
It worked for me finally :)
Another scenario: if your Docker network adapter is disabled, it will fail with this error. The adapter is named "vEthernet (DockerNAT)" or similar. Apparently this adapter is somehow involved in the normal docker pull behavior. Re-enable it to solve the problem.
Create a systemd drop-in directory for the docker service:
$ sudo mkdir -p /etc/systemd/system/docker.service.d
Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Hope it helps
refer to https://docs.docker.com/network/proxy/
For me, a proxy setting without the http:// or https:// prefix works, e.g.:
PROXY:PORT
or with a / suffix and the http:// or https:// prefix, e.g.:
http://PROXY:PORT/
On Windows this happened when I moved from a work network to a home network.
To solve it, run:
docker-machine stop
docker-machine start
docker-env
"C:\Program Files\Docker Toolbox\docker-machine.exe" env | Invoke-Expression
Running out of entropy in virtualized Linux systems seems to be a common problem (e.g. /dev/random Extremely Slow?, Getting linux to buffer /dev/random). Besides using a hardware random number generator (HRNG), the use of an entropy-gathering daemon (EGD) like HAVEGED is often suggested. However, an entropy-gathering daemon cannot be run inside a Docker container; it must be provided by the host.
Using an EGD works fine for Docker hosts based on Linux distributions like Ubuntu, RHEL, etc. Getting such a daemon to work inside boot2docker, which is based on Tiny Core Linux (TCL), seems to be another story. Although TCL has an extension mechanism, an extension for an entropy-gathering daemon doesn't seem to be available.
So an EGD seems like a proper solution for running docker containers in a (production) hosting environment, but how to solve it for development/testing in boot2docker?
Since running an EGD in boot2docker seemed too difficult, I thought about simply using /dev/urandom instead of /dev/random. Using /dev/urandom is a little less secure, but still fine for most applications that are not generating long-term cryptographic keys. At least it should be fine for development/testing inside boot2docker.
I just realized that it is as simple as mounting /dev/urandom from the host as /dev/random into the container:
$ docker run -v /dev/urandom:/dev/random ...
The result is as expected:
$ docker run --rm -it -v /dev/urandom:/dev/random ubuntu dd if=/dev/random of=/dev/null bs=1 count=1024
1024+0 records in
1024+0 records out
1024 bytes (1.0 kB) copied, 0.00223239 s, 459 kB/s
At least I know how to build my own boot2docker images now ;-)
The most elegant solution I've found is running Haveged in a separate container:
docker pull harbur/haveged
docker run --privileged -d harbur/haveged
Check whether enough entropy is available:
$ cat /proc/sys/kernel/random/entropy_avail
2066
Another option is to install the rng-tools package and point it at /dev/urandom:
yum install rng-tools
rngd -r /dev/urandom
With this I didn't need to map any volume in the docker container.
Since I didn't want to modify my Docker containers for development/testing, I tried to modify the boot2docker image instead. Luckily, the boot2docker image is built with Docker and can be easily extended. So I've set up my own Docker build, boot2docker-urandom. It extends the standard boot2docker image with a udev rule found here.
Building your own boot2docker.iso image is as simple as:
$ docker run --rm mbonato/boot2docker-urandom > boot2docker.iso
To replace the standard boot2docker.iso that comes with boot2docker you need to:
$ boot2docker stop
$ boot2docker delete
$ mv boot2docker.iso ~/.boot2docker/
$ boot2docker init
$ boot2docker up
Limitation: from inside a Docker container, /dev/random still blocks. Most likely this is because Docker containers do not use the host's /dev/random directly, but the corresponding kernel device, which still blocks.
Alpine Linux may be a better choice for a lightweight Docker host. Alpine LXC & Docker images are only 5 MB (versus 27 MB for boot2docker).
I use haveged on Alpine for LXC guests and on Debian for Docker guests. It gives enough entropy to generate GPG/SSH keys and OpenSSL certificates in containers. Alpine now has an official Docker repo.
Alternatively build a haveged package for Tiny Core - there is a package build system available.
If you have this problem in a Docker container created from a self-built image that runs a Java app (e.g. created FROM tomcat:alpine) and don't have access to the host (e.g. on a managed k8s cluster), you can add the following command to your Dockerfile to use non-blocking seeding of SecureRandom:
RUN sed -i.bak \
-e "s/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g" \
-e "s/securerandom.strongAlgorithms=NativePRNGBlocking/securerandom.strongAlgorithms=NativePRNG/g" \
$JAVA_HOME/lib/security/java.security
The two regular expressions replace file:/dev/random with file:/dev/urandom and NativePRNGBlocking with NativePRNG in the file $JAVA_HOME/lib/security/java.security, which causes Tomcat to start up reasonably fast on a VM. I haven't checked whether this also works on non-Alpine-based OpenJDK images, but if the sed command fails, just check the location of the java.security file inside the container and adapt the path accordingly.
Note: in JDK 11 the path has changed to $JAVA_HOME/conf/security/java.security.
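A commonly used alternative that avoids editing java.security is to point the JVM at the non-blocking source with the java.security.egd system property at run time. A sketch, assuming your image's startup script honors the JAVA_OPTS environment variable (the official tomcat images do) and using a placeholder image name:
docker run -e JAVA_OPTS="-Djava.security.egd=file:/dev/./urandom" my-tomcat-image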