I'm looking for a way to pull the latest image in Docker vanilla after a container crashed/exited.
In my current architecture I don't have access to the Docker Engine API, only to the container itself, so I want to be able to update the container from its image after the service exits.
The Docker way to upgrade containers seems to be the following:
docker pull mysql
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always \
-e MYSQL_ROOT_PASSWORD=mypwd -v /my/data/dir:/var/lib/mysql -d mysql
But that's based on the Docker Engine CLI, and as I explained before, that's not an approach I want to take.
Is there a way to configure Docker so that the container pulls the latest image from the repository again upon restart/crash?
What you are asking for seems possible using docker service update, for which you will need Docker Swarm. With plain Docker installed on a single VM, it doesn't seem feasible.
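For example, a minimal sketch assuming a swarm service named my-mysql (the service name and environment are illustrative):

docker service create --name my-mysql --replicas 1 \
  -e MYSQL_ROOT_PASSWORD=mypwd mysql

docker service update --image mysql:latest my-mysql

Running docker service update --image makes swarm resolve and pull the image again and roll the service's tasks onto it.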
Hope this helps.
I started my Jenkins Docker image that I had saved earlier.
docker start -ai <my_container_ID>
I can see in the console that Jenkins has started, but it doesn't come up in the browser:
screenshot
The first time, I started it using the docker run command, after which Jenkins came up in the browser; I also added some jobs in it and did docker commit.
Any help would be appreciated!
From your comments I see that you started a new container from the jenkins image, made some changes, and then ran docker commit to create a new image based on it.
In order to run a new container from that image you need to use docker run with the image hash returned by the docker commit command. Example:
$ docker commit cc79f8ec407d   # hash of the container you want to commit
sha256:227efd2e30a9033e6ce288084c6452aa5a5112974ea833b559429a9ae78697a8   # new image hash returned by docker commit
$ docker run 227efd2e30a9033e6ce288084c6452aa5a5112974ea833b559429a9ae78697a8   # hash of the new image
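As a side note, you can also pass a repository:tag to docker commit so you don't have to copy the hash around (the name my-jenkins is illustrative):

$ docker commit cc79f8ec407d my-jenkins:v1
$ docker run my-jenkins:v1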
But bear in mind that when you run a new container from that image, Jenkins might not consider it a new installation, because the initialization process was already done before the commit.
The easiest way to do that is to just point your browser to the IP address of your new Virtual Machine Docker host.
You could monkey around with ifconfig or ipconfig to figure it out, but thankfully Docker Toolbox ships with a handy command via docker-machine:
docker-machine ip default
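For example (192.168.99.100 is the common Docker Toolbox default; yours may differ):

$ docker-machine ip default
192.168.99.100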
That’s the IP of your host and where your web services will be listening! Granted this IP is on your local machine and not externally reachable. If you want outside services to hit your machine, you’ll need to set up port forwarding.
Now try docker run -p 8080:8080 --name=jenkins-master jenkins
According to the docs, these two commands are the same:
docker stack deploy and docker deploy.
Is that the case, or is some information hidden somewhere?
The commands are synonyms; they hit the same backend API. Docker is in the first steps of transitioning from docker $verb commands to docker $noun $verb, so you'll also see commands like docker images from before alongside docker image ls, or docker ps alongside docker container ps.
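For instance, both forms accept a Compose file the same way; a minimal sketch with a hypothetical stack name:

docker stack deploy --compose-file docker-compose.yml mystack
docker deploy --compose-file docker-compose.yml mystack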
Is there a way I can download a Docker image/container using, for example, Firefox, and not using the built-in docker pull?
I am blocked by the company firewall and proxy, and I can't get a hole through it.
My problem is that I cannot use Docker to get images; that is, docker pull and other Docker-supplied functions are blocked by the firewall.
Just an alternative: this is what I did in my organization for the couchbase image, where I was blocked by a proxy.
On my personal laptop (OS X)
~$ docker save couchbase > couchbase.tar
~$ ls -lh couchbase.tar
-rw------- 1 vikas devops 556M 12 Dec 21:15 couchbase.tar
~$ xz -9 couchbase.tar
~$ ls -lh couchbase.tar.xz
-rw-r--r-- 1 vikas staff 123M 12 Dec 22:17 couchbase.tar.xz
Then, I uploaded the compressed tar ball to Dropbox and downloaded on my work machine. For some reason Dropbox was open :)
On my work laptop (CentOS 7)
$ docker load < couchbase.tar.xz
References
https://docs.docker.com/engine/reference/commandline/save/
https://docs.docker.com/engine/reference/commandline/load/
I just had to deal with this issue myself: downloading an image on a restricted machine with Internet access but no Docker client, for use on another restricted machine that has the Docker client but no Internet access. I posted my question to the DevOps Stack Exchange site:
Downloading Docker Images from Docker Hub without using Docker
With help from the Docker Community I was able to find a resolution to my problem. What follows is my solution.
So it turns out that the Moby Project has a shell script on the Moby GitHub account which can download images from Docker Hub in a format that can be imported into Docker:
download-frozen-image-v2.sh
The usage syntax for the script is given by the following:
download-frozen-image-v2.sh target_dir image[:tag][#digest] ...
The image can then be imported with tar and docker load:
tar -cC 'target_dir' . | docker load
To verify that the script works as expected, I downloaded an Ubuntu image from Docker Hub and loaded it into Docker:
user@host:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@host:~$ tar -cC 'ubuntu' . | docker load
user@host:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
In practice I would have to first copy the data from the Internet client (which does not have Docker installed) to the target/destination machine (which does have Docker installed):
user@nodocker:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@nodocker:~$ tar -C 'ubuntu' -cf 'ubuntu.tar' .
user@nodocker:~$ scp ubuntu.tar user@hasdocker:~
and then load and use the image on the target host:
user@hasdocker:~$ docker load -i ubuntu.tar
user@hasdocker:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
I adapted a Python script to have an OS-independent solution:
docker-drag
Use it like this, and it will create a TAR archive that you will be able to import using docker load:
python docker_pull.py hello-world
python docker_pull.py alpine:3.9
python docker_pull.py kalilinux/kali-linux-docker
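Then import the resulting archive; a sketch assuming the script produced hello-world.tar (the exact filename depends on the script version and image):

docker load -i hello-world.tar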
Use Skopeo. It is a tool made specifically for that (and other) purposes.
After installing it, simply execute:
mkdir ubuntu
skopeo --insecure-policy copy docker://ubuntu dir:./ubuntu
Copy these files and import as you like.
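Alternatively, skopeo can write a tarball that docker load understands directly, via its docker-archive transport; a sketch using the same ubuntu image:

skopeo --insecure-policy copy docker://ubuntu docker-archive:ubuntu.tar:ubuntu:latest
docker load -i ubuntu.tar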
First, check whether your Docker daemon is configured to use the proxy. With boot2docker and docker-machine, for instance, this is done on docker-machine create, with the --engine-env option.
If this is just a certificate issue (i.e., Firefox does access Docker Hub), try and install that certificate:
openssl s_client -connect index.docker.io:443 -showcerts </dev/null | openssl x509 -outform PEM > docker.pem
sudo cp docker.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
sudo systemctl restart docker
sudo docker run hello-world
The other workaround (not a recommended solution) would be to access Docker Hub without relying on the certificate, with --insecure-registry:
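A sketch of that daemon flag (not recommended; the registry host shown is illustrative, and whether it helps with Docker Hub depends on your setup):

dockerd --insecure-registry myregistry.example.com:5000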
If the firewall is actively blocking any Docker pull, to the point you can't even access Docker Hub from Firefox, then you would need to docker save/docker load an image archive. Save it from a machine where you did access Docker Hub (and where the docker pull succeeded). Load it on your corporate machine (after approval of your IT system administrators, of course).
Note: you cannot easily "just" download an image, because it is often based on top of other images which you would need to download too. That is what docker pull does for you. And that is what docker save does too (create one archive composed of all the necessary images).
The OP Ephreal adds in the comments:
[I] didn't get my corp image to work either.
But I found that I could download the Dockerfile and recreate the image myself from scratch.
This is essentially the same as downloading the image.
So, by definition, a Docker pull client command actually needs to talk to a Docker daemon, because the Docker daemon assembles layers one by one for you.
Think of it as a POST request - it's causing a mutation of state, in the Docker daemon itself. You're not 'pulling' anything over HTTP when you do a pull.
You can pull all the individual layers over REST from the Docker registry, but that won't actually be the same semantics as a pull, because pull is an action that specifically tells the daemon to go and get all the layers for an image you care about.
Another possibility, if your company firewall (and policy) allows connecting to a remote SSH server, is to set up an SSH tunnel and route any Docker registry traffic through it.
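A minimal sketch of such a tunnel (host names and the registry port are illustrative):

ssh -N -L 5000:registry.example.com:5000 user@ssh-gateway.example.com
docker pull localhost:5000/myimage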
The answer and solution to my original question is that I found I could download the Dockerfile and all the necessary support files and recreate the image myself from scratch. This is essentially the same as downloading the image.
This solution has been in the questions and comments above; I have just pinned it here.
This is no longer an issue for me, though, since my company has changed its policy and now allows docker pull commands to work.
Thanks to @Ham Co for the answer.
I adapted a golang tool to have an OS-independent solution:
golang http pull docker image
./gopull download redis
This gets a Docker-importable archive, redis.tar.
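The archive can then be loaded as usual:

docker load -i redis.tar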
References:
https://github.com/NotGlop/docker-drag
I need some tips on setting up a 'remote private Docker registry'.
The README.md for Docker-Registry mainly focuses on a private registry running on the same host; it does not specify how other machines can access it remotely (or maybe it's too complex for me to understand).
So far I found these threads:
Docker: Issue with pulling from a private registry from another server
(Still an open thread, no solution offered. Further discussion on GitHub hints at a proxy, but how does that work?)
Create a remote private registry
(Maybe closest to what I'm looking for, but what command do I need to access the registry from other machines?)
How to use your own registry (Again, this focuses on running the registry on the same host. It did mention running on port 443 or 80 so other machines can access it, but I need more detail!)
Running out of clues; any input is much appreciated!
I was able to set up a remote private registry by referring to this:
Remote access to a private docker-registry
Steps:
On the registry host, run docker run -p 5000:5000 registry
On the client host, start the Docker daemon with docker -d --insecure-registry 10.11.12.0:5000 (replace 10.11.12.0 with your own registry IP; you may want to daemonize the process so it keeps running after the shell closes.)
Edit: Alternatively, you can edit Docker's init config (/etc/sysconfig/docker for RHEL/CentOS, /etc/default/docker for Ubuntu/Debian). Add the line other_args="--insecure-registry 10.11.12.0:5000", then do a service docker restart. This is the recommended method, as it daemonizes the Docker process.
Now, try if it works:
On the client, download a busybox image: docker pull busybox
Give it a new tag: docker tag busybox 10.11.12.0:5000/busybox
Push it to the registry: docker push 10.11.12.0:5000/busybox
Verify the push: docker search 10.11.12.0:5000/busybox
Remove all local copies and pull it from your registry: docker rmi busybox 10.11.12.0:5000/busybox, then docker pull 10.11.12.0:5000/busybox
Running docker images should now show the image you just pulled from your own remote private registry.
I use a private registry in the following way:
It has the FQDN docker.mycompany.com
All images I create are named docker.mycompany.com/image1, docker.mycompany.com/image2, etc.
After that, everything works seamlessly:
Push image to registry:
docker push docker.mycompany.com/image1
Pull and run image:
docker run docker.mycompany.com/image2
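The only requirement is that images carry the registry's FQDN in their name, for example at build time (a hypothetical build):

docker build -t docker.mycompany.com/image1 .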