I'm looking for a way to pull the latest image in vanilla Docker after a container has crashed/exited.
In my current architecture I don't have access to the Docker Engine API, only to the container itself, so I want to be able to update the container from its image after the service exits.
The Docker way to upgrade containers seems to be the following:
docker pull mysql
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always \
-e MYSQL_ROOT_PASSWORD=mypwd -v /my/data/dir:/var/lib/mysql -d mysql
But that's based on the Docker Engine CLI, and as I explained before, that's not an approach I want to take.
Is there a way to configure Docker so that the container pulls the image from the repository again upon restart/crash?
What you are asking for seems possible using docker service update, for which you will need Docker Swarm. With plain Docker installed on a single VM, it doesn't seem feasible.
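A rough sketch of that swarm-based approach (the service name is illustrative, and it assumes a swarm has already been initialized with docker swarm init):
# Create the workload as a swarm service instead of a plain container;
# swarm restarts failed tasks automatically.
docker service create --name my-mysql \
  -e MYSQL_ROOT_PASSWORD=mypwd \
  --mount type=bind,src=/my/data/dir,dst=/var/lib/mysql \
  mysql
# Later, make the service re-resolve and pull the tag, redeploying its tasks:
docker service update --image mysql:latest my-mysql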
Hope this helps.
I'm building an app that makes API calls to run code inside Docker containers.
I want to run a Docker container that has Docker running inside it.
I want to create a Dockerfile that pulls other Docker images into it and then waits for API calls (on port 2376) to create, run and delete containers based on the images I pulled in the Dockerfile.
This is the Dockerfile I'm trying to create right now:
FROM docker:stable
RUN docker pull python
EXPOSE 23788
CMD tail -f /dev/null
However, when the RUN command is executed I get this error message:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I don't really know how to start Docker inside a Docker container.
The reason I need this kind of Dockerfile is so that I can then use Kubernetes to scale this part of my application.
There's a special image for this, docker:dind. See the bit about "Docker in Docker" in https://hub.docker.com/_/docker.
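A rough sketch of using it (the names are illustrative; newer docker:dind images enable TLS by default, which the empty DOCKER_TLS_CERTDIR below switches off):
docker network create dind-net
# The daemon container must run privileged.
docker run --privileged -d --name my-dind --network dind-net \
  -e DOCKER_TLS_CERTDIR="" docker:dind
# A client container can then talk to that daemon instead of the host's:
docker run --rm --network dind-net -e DOCKER_HOST=tcp://my-dind:2375 \
  docker:stable docker info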
According to the docs, docker stack deploy and docker deploy are the same command.
Is that the case, or is some information hidden somewhere?
The commands are synonyms; they hit the same backend API. Docker is in the first steps of transitioning from docker $verb commands to docker $noun $verb, so you'll also see commands like docker images from before and docker image ls, or even docker ps and now docker container ps.
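For example, assuming a swarm has been initialized and a compose file is present (and noting that docker deploy is experimental or removed in some Docker versions), both of these should deploy the same stack:
docker stack deploy --compose-file docker-compose.yml mystack
docker deploy --compose-file docker-compose.yml mystack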
I've been using Dockerfiles so often that I've forgotten how to start a new container without one.
I was reading https://docs.docker.com/engine/reference/commandline/start/ and of course it doesn't explain how to start up a new one.
docker run -it ubuntu:16.04 bash
A Dockerfile describes a Docker image, not a container.
The container is an instance of this image.
If you want to run a container without building an image (which means without creating a Dockerfile), you need to use an existing image from Docker Hub.
N.B.: Docker Hub is Docker's online registry; there are other registries like Quay, Rancher and others.
For example, if you want to test this, you can use the hello-world image found on the Docker Hub: https://hub.docker.com/_/hello-world/.
According to the documentation, to run a simple hello-world container:
$ docker run hello-world
Source: https://hub.docker.com/_/hello-world/
If you don't have the image locally, Docker will automatically pull it from the web. If you want to manually pull the image you can run the following command:
$ docker pull hello-world
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Source: https://hub.docker.com/_/hello-world/
docker start is used to start a container that already exists and is in a stopped state.
If you want to start a new container, use docker run instead. For information about docker run, see https://docs.docker.com/engine/reference/commandline/run/
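For example (the container and image names are just illustrative):
docker run -d --name web nginx   # creates and starts a brand-new container from the nginx image
docker stop web                  # stops it; the container still exists
docker start web                 # restarts that same, existing container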
I'm trying to run Docker inside a Jenkins container that is itself running in Docker (i.e. Docker-in-Docker). What I want to know is how to properly start the Docker service when booting Jenkins. The only solution I've found so far is to build my own Jenkins image based on the official Jenkins image, changing the jenkins script loaded by the entrypoint to also start up Docker:
# I've added this line just before Jenkins is started from the script:
sudo service docker start
# I've also removed "exec" from the original file, which used "exec java $JAVA_OPTS ...", but that didn't help
java $JAVA_OPTS -jar /usr/share/jenkins/jenkins.war $JENKINS_OPTS "$@"
This works when I run a new container (using docker run), but the problem is that if I do docker start on a stopped container, the Docker service is not started.
I strongly suspect that this is not the right way to start my Docker service. My plan is perhaps to use supervisord to start Jenkins and Docker separately (I suppose container linking is out of the question, since Docker should be executed as a service in the same container that Jenkins is running in?). My concern with this approach is that I'm going to lose the ENTRYPOINT specified in the Jenkins Dockerfile, which allows me to pass arguments to the Jenkins container when starting it, for example:
docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins -- <jenkins_arguments>
Does anyone have any recommendations on a good way to solve this preferably by not forking the official Jenkins image?
I'm pretty sure you cannot do that.
Docker-in-Docker doesn't mean you have to run Docker inside Docker with three levels: host > first-level container > second-level container.
In fact, you just need to share the host's Docker daemon, and it is the host that will run the other containers.
To do that, you have to mount a volume with the -v parameter:
-v /var/run/docker.sock:/var/run/docker.sock
With this, when you run docker inside your Jenkins container, the Docker client will communicate with the Docker daemon of your host in order to run new containers.
To do that, you should also run your Jenkins container with the privileged flag:
--privileged
To sum up, here is the full command line:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock --privileged myimage
And you don't need to create a new Jenkins image for that.
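As a quick sanity check (the container name is illustrative, and this assumes the docker CLI is available inside the image, either installed or bind-mounted from the host):
docker run -d --name jenkins-dood \
  -v /var/run/docker.sock:/var/run/docker.sock --privileged myimage
docker exec jenkins-dood docker ps   # lists the HOST's containers, since the client talks to the host daemon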
Hope this helps.
http://container-solutions.com/running-docker-in-jenkins-in-docker/
I read this article http://blog.docker.io/2013/09/docker-can-now-run-within-docker/ and I want to share images between my "host" docker and "child" docker. But when I run
sudo docker run -v /var/lib/docker:/var/lib/docker -privileged -t -i jpetazzo/dind
I can't connect to the "child" docker from inside the dind container.
root@5a0cbdc2b7df:/# docker version
Client version: 0.8.1
Go version (client): go1.2
Git commit (client): a1598d1
2014/03/13 18:37:49 Can't connect to docker daemon. Is 'docker -d' running on this host?
How can I share my local images between host and child docker?
You shouldn't do that! Docker assumes that it has exclusive access to /var/lib/docker, and if you (or another Docker instance) meddle with this directory, it could have unexpected results.
There are multiple solutions, depending on what you want to achieve.
If you want to be able to run Docker commands from within a container, but don't need a separate daemon, then you can share the Docker control socket with this container, e.g.:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
-t -i ubuntu bash
If you really want to run a different Docker daemon (e.g. because you're hacking on Docker and/or want to run a different version), but want to access the same images, maybe you could run a private registry in a container, and use that registry to easily share images between Docker-in-the-Host and Docker-in-the-Container.
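A rough sketch of that registry approach (the image names and port are illustrative, and a plain-HTTP registry has to be allowed via the daemon's --insecure-registry setting):
docker run -d -p 5000:5000 --name registry registry:2   # private registry served by the host daemon
docker tag myimage localhost:5000/myimage               # retag a local image for that registry
docker push localhost:5000/myimage                      # push it from the host's Docker
# then, from the Docker daemon running inside the container:
docker pull <registry-host>:5000/myimage                # <registry-host> = an address the container can reach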
Don't hesitate to give more details about your use-case so we can tell you the most appropriate solution!