According to the docs they are two same commands:
docker stack deploy and docker deploy.
Is that the case or some information is hidden somewhere?
The commands are synonyms; they hit the same backend API. Docker is in the first steps of transitioning from docker $verb commands to docker $noun $verb commands, so you'll also see commands like docker images from before and docker image ls, or even docker ps and now docker container ps.
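For illustration, both forms take the same flags and do the same thing (a quick sketch; the stack name and compose file are just examples, and a swarm must already be initialized):
docker deploy       -c docker-compose.yml mystack
docker stack deploy -c docker-compose.yml mystack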
I'm building an app that makes API calls to run code inside Docker containers.
I want to run a Docker container that has Docker running inside it.
I want to create a Dockerfile that pulls other Docker images inside it and then waits for API calls (on port 2376) to create, run, and delete containers based on the Docker images that I pulled into the Dockerfile.
This is the Dockerfile I'm trying to create right now:
FROM docker:stable
RUN docker pull python
EXPOSE 23788
CMD tail -f /dev/null
However, when the RUN command is executed, I get this error message:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I don't really know how to start Docker inside a Docker container.
The reason I need this kind of Dockerfile is so that I can then use Kubernetes to scale this part of my application.
There's a special image for this, docker:dind. See the bit about "Docker in Docker" in https://hub.docker.com/_/docker.
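A rough sketch of how that fits the use case above (the container names and ports are just examples; depending on the image version you may need the TLS setting shown to get a plain TCP listener):
# Start the Docker daemon in its own, privileged container.
# DOCKER_TLS_CERTDIR="" disables TLS so the daemon listens on plain tcp://:2375.
docker run -d --name some-docker --privileged -e DOCKER_TLS_CERTDIR="" docker:dind
# Point a client container (or your app) at that daemon via DOCKER_HOST.
docker run --rm --link some-docker:docker -e DOCKER_HOST=tcp://docker:2375 docker:stable docker info
Note that a RUN docker pull in a Dockerfile still fails, because no daemon is running at image build time; the pull has to happen once the container has started (for example from the CMD or an entrypoint script).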
I'm looking for a way to pull the latest image in Docker vanilla after a container crashed/exited.
As in my current architecture I don't have access to the Docker Engine API but only to the container itself, I want to be able to update the container based on the latest image after the service exits.
The Docker way to upgrade containers seems to be the following:
docker pull mysql
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always \
-e MYSQL_ROOT_PASSWORD=mypwd -v /my/data/dir:/var/lib/mysql -d mysql
But that's based on the Docker Engine CLI, and as I explained before, that's not an approach I want to take.
Is there a way to configure Docker so that the container pulls the latest image again upon restart/crash?
What you are asking for seems possible using docker service update, for which you will need Docker swarm mode. With plain Docker installed on a single VM, it doesn't seem feasible.
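A rough sketch of that approach (assuming a single-node swarm; the service name, password, and paths are just the examples from the question):
# Turn this node into a single-node swarm and run the container as a service.
docker swarm init
docker service create --name my-mysql --replicas 1 \
  -e MYSQL_ROOT_PASSWORD=mypwd \
  --mount type=bind,source=/my/data/dir,target=/var/lib/mysql mysql
# Later, force the service to re-pull and redeploy from the latest image.
docker service update --image mysql:latest --force my-mysql
Swarm will also restart a crashed task on its own, but re-pulling the latest image still requires the explicit docker service update.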
Hope this helps.
I noticed in the latest Docker CLI documentation that Docker CLI command list has expanded.
Where I used docker exec earlier to start an executable inside a container, now I can also use the docker container exec command.
The docker container run command is similar to docker run, etc.
So which commands are preferable now? The old syntax or the new docker container syntax? Unfortunately I couldn't find any explanation in the docs.
Also what is the difference between docker container run and docker container create commands? And between docker container stop and docker container kill? The description and syntax are very similar.
Thanks.
As Docker grew in features over time and new commands were added, the CLI needed a redesign. You should use docker container exec to stay compatible in the future, but docker exec is in fact an alias, so until someone decides to deprecate it, it will also work. If you are interested, you can start reading about this change in this PR: https://github.com/moby/moby/pull/26025
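For example, these two invocations are equivalent today (the container name is just a placeholder):
docker exec -it mycontainer sh
docker container exec -it mycontainer sh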
I've been using Dockerfiles so often that I've forgotten how to start up a new container without one.
I was reading https://docs.docker.com/engine/reference/commandline/start/ and of course it doesn't state how to start up a new one.
docker run -it ubuntu:16.04 bash
A Dockerfile describes a Docker image, not a container.
The container is an instance of this image.
If you want to run a container without building an image (which means without creating a Dockerfile), you need to use an existing image from a registry such as Docker Hub.
N.B.: Docker Hub is Docker's online repository; there are more registries, like Quay, Rancher, and others.
For example, if you want to test this, you can use the hello-world image found on the Docker Hub: https://hub.docker.com/_/hello-world/.
According to the documentation, to run a simple hello-world container:
$ docker run hello-world
Source: https://hub.docker.com/_/hello-world/
If you don't have the image locally, Docker will automatically pull it
from the web. If you want to manually pull the image you can run the
following command:
$ docker pull hello-world
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Source: https://hub.docker.com/_/hello-world/
docker start is used to start a container that already exists and is in a stopped state.
If you want to start a new container, use docker run instead. For information about docker run, please see https://docs.docker.com/engine/reference/commandline/run/
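A quick illustration of the difference (the container name and image are just examples):
# Create and start a brand-new container from an image
docker run --name web -d nginx
# Stop it, then start the same existing container again
docker stop web
docker start web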
I'm trying to run Docker inside a Jenkins container that is also running in Docker (i.e. Docker in Docker). What I want to know is how to properly start the Docker service when booting Jenkins. The only solution I've found today is to build my own Jenkins image based on the official Jenkins image but change the jenkins script loaded by the entry point to also start up Docker:
# I've added this line just before Jenkins is started from the script:
sudo service docker start
# I've also removed "exec" from the original file which used "exec java $JAVA_OPTS ..." but that didn't work
java $JAVA_OPTS -jar /usr/share/jenkins/jenkins.war $JENKINS_OPTS "$@"
This works when I run a new container (using docker run), but the problem is that if I do docker start on a stopped container, the Docker service is not started.
I strongly suspect that this is not the right way to start the Docker service. My plan is perhaps to use supervisord to start Jenkins and Docker separately (I suppose container linking is out of the question, since Docker should be executed as a service on the same container that Jenkins is running in?). My concern with this approach is that I'll lose the ENTRYPOINT specified in the Jenkins Dockerfile, which allows me to pass arguments to the Jenkins container when starting it, for example:
docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins -- <jenkins_arguments>
Does anyone have any recommendations on a good way to solve this preferably by not forking the official Jenkins image?
I'm pretty sure you cannot do that.
Docker in Docker doesn't mean you have to run Docker inside Docker with three levels: host > first-level container > second-level container.
In fact, you just need to share the Docker socket with the host, and it is your host that will run the other containers.
To do that, you have to mount the socket as a volume with the -v parameter:
-v /var/run/docker.sock:/var/run/docker.sock
With this volume mounted, when you run docker run inside your Jenkins container, the Docker client will communicate with the Docker daemon of your host in order to run the new container.
To do that, you should also run your Jenkins container with the --privileged flag:
--privileged
To sum up, here is the full command line:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock --privileged myimage
And you don't need to create a new Jenkins image for that.
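Applied to the Jenkins image from the question, that might look like this (a sketch; note that the Docker CLI binary must also be available inside the container, for example by installing it in the image or bind-mounting it from the host):
docker run -d -p 8080:8080 \
  -v /your/home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --privileged jenkins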
Hope this helps.
http://container-solutions.com/running-docker-in-jenkins-in-docker/