What is the best way to automate the deployment of a Docker image in a CI environment?
After building a simple web project using Travis CI and using a Dockerfile to build the corresponding Docker image, is there a way to automatically cause that image to be deployed to a cloud provider?
Right now, the Dockerfile pulls down the base image to the Travis build machine and builds the image based on the instructions in the Dockerfile. At this point, if the build is successful, I can push it to Docker Hub, though I have no need to keep the image there; what I envision is deploying the successfully built Docker image to a cloud provider (e.g. DigitalOcean, Linode, or AWS) and starting/running it.
While pushing directly to a host might seem ideal, I think it ignores the fact that hosts can fail, or may need to be replicated.
If you push directly to a prod host, and that host goes down, you don't have any way to start another one without re-running the entire CI pipeline.
If you push to an intermediary (the hub or a Docker registry), you can create as many hosts as you want without having to re-run the build. You can also recover on a new host very easily (the host's init script can just pull the image and start it).
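As a rough sketch of that recovery path (registry address, image name, and ports are placeholders), the init script only needs something like:
# pull the last successful build from the registry and start it
docker pull registry.example.com/myapp:latest
docker run -d --restart unless-stopped --name myapp -p 80:8080 registry.example.com/myapp:latest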
If you wanted to, you could run your own registry on the cloud provider (instead of using the hub).
For a static website, you might want to look at Surge.
Otherwise, you might want to look at the AWS Elastic Beanstalk Command Line Interface (AWS EB CLI) in combination with using Docker on AWS EB.
For using Docker with AWS EB, read this AWS blog post.
For AWS EB CLI, here is an excerpt from the AWS EB dashboard sidebar
If you want to use a command line to create, manage, and scale your
Elastic Beanstalk applications, please use the Elastic Beanstalk
Command Line Interface (EB CLI). Get Started
$ mkdir HelloWorld
$ cd HelloWorld
$ eb init -p PHP
$ echo "Hello World" > index.html
$ eb create dev-env
$ eb open
To deploy updates to your applications, use ‘eb deploy’.
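If the project is Docker-based rather than PHP, the flow is much the same; a minimal sketch, assuming the repository root contains your Dockerfile (application and environment names are placeholders):
# initialize the project with the Docker platform, then create and deploy an environment
eb init -p docker my-docker-app
eb create my-docker-env
eb deploy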
Further reading
Installing the AWS EB CLI
EB CLI Command Reference
I had this very same question.
There's a very cool Docker image called Watchtower that compares the image a running container was started from against the image with the same tag on Docker Hub. If there is an update on the Hub, Watchtower pulls the newer image and restarts the running container (retaining all the env vars etc.). It works really well for single containers that need updating.
NB: it really is as simple as running:
docker run -d \
--name watchtower \
-e REPO_USER="username" -e REPO_PASS="pass" -e REPO_EMAIL="email" \
-v /var/run/docker.sock:/var/run/docker.sock \
drud/watchtower container_to_watch --debug
I'm looking for the same thing but for the containers running as part of a docker swarm...
Why?
I'm trying to create a general-purpose solution for running docker-compose on Heroku. I want to make a one-click deployment solution through the use of Heroku Button deployment. This way, a user does not need any knowledge of git, the Heroku CLI, or Docker.
The problem.
Docker and the Docker daemon are only available when I set the stack to container. There are buildpacks that give you the docker and docker-compose CLIs, but without the Docker daemon you cannot run a Docker image. So buildpacks won't work.
With the stack set to container I can use the file heroku.yml (article). In there I define my processes. (It replaces Procfile. If I still add a Procfile to my project it will do nothing.)
I can also define a Dockerfile there to build my docker image.
When I however run the docker image the following error pops up:
2019-02-28T15:32:48.462101+00:00 app[worker.1]: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
2019-02-28T15:32:48.462119+00:00 app[worker.1]:
2019-02-28T15:32:48.462122+00:00 app[worker.1]: If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
The problem is that inside the Docker container the Docker daemon is not running. The solution to this is to mount it:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
And since you cannot use a Procfile (as noted above, heroku.yml replaces it), I cannot run that command. And if I were using a buildpack I could use a Procfile, but then the Docker daemon wouldn't be running...
I tried defining a VOLUME within the Dockerfile and the problem persists. Furthermore, a Heroku article says "Volume mounting is not supported. The filesystem of the dyno is ephemeral."
On Heroku it is possible to run a Docker image. What I am struggling with is running a Docker-in-Docker image.
Running a Docker-in-Docker image works fine on my VPS by mounting /var/run/docker.sock, but this cannot(?) be done on Heroku.
Last words:
I'm trying to make this work so that other people can easily deploy a software solution even though they are not comfortable with git, the Heroku CLI and Docker.
Unfortunately the answer to your question is: not yet.
For security reasons, Heroku does not give users the ability to run privileged containers, because the container could access host capabilities.
The documentation is pretty clear about your limitations, e.g.: no --privileged containers and no root user either, no volumes, and the disk is ephemeral.
After playing with DinD images for your concern, I came to the conclusion that trying to run Docker inside a Heroku container is not the right choice and design.
I am pretty sure what you are trying to achieve is close to what Heroku already offers its users. Offering a platform or an application where a non-developer can push and deploy applications with just a button can be very interesting in various ways, and it can be done with an application using their Platform API.
In this situation a web application (running on Heroku) may not, to my knowledge, be able to do what you want. Instead you would need a desktop application that embeds git, Docker, and your code for parsing, verifying, building and pushing your applications/components to Heroku's container registry.
In the end, if you still think you need a DinD solution, your original VPS approach is the only solution for the moment. But be aware that it may open security vulnerabilities in your system, and by the time you have closed those holes you may end up offering something very similar to Heroku's own offering.
I don't think you can run a service on Heroku that can use the docker command to start some docker container.
I want to make a one click deployment solution through the use of Heroku Button deployment.
I think you can point the Deploy button at one of your automation servers (e.g. a Jenkins instance already deployed on Heroku or another cloud) to trigger the deploy pipeline, so people don't have to interact with git, Docker, etc.
But yes, when not using popular solutions like Jenkins or CircleCI you have to deal with a lot of problems yourself, such as security and parameter handling for the login-and-deploy flow...
What I did was install the Docker CLI in my Dockerfile like this:
RUN curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.04.0-ce.tgz \
&& tar xzvf docker-17.04.0-ce.tgz \
&& mv docker/docker /usr/local/bin \
&& rm -r docker docker-17.04.0-ce.tgz
Then, in the args section for running the Docker container, I added this:
args '--user root -v /var/run/docker.sock:/var/run/docker.sock'
For further explanation of why this works, see stackoverflow.com/q/27879713/354577. This approach works well for me, though.
I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker hub to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what the most common way to do it is. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm a super noob, so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B on DigitalOcean.
Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
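Putting the two commands together with scp, a transfer from the build host to the target could look like this (user, host, and image names are placeholders):
# on the build machine: export the image, copy it over, and load it on the target
docker image save -o myapp.tar myapp:latest
scp myapp.tar deploy@server-b:/tmp/myapp.tar
ssh deploy@server-b "docker image load -i /tmp/myapp.tar"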
This save-scp-load method is rarely used, though. The common approach is to set up a private Docker registry behind your firewall and push images to or pull images from it. This doc describes how to deploy a container registry. Or you can choose a registry service provided by a third party, such as GitLab's container registry.
When using Docker repositories, you only push/pull the layers which have been changed.
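With a private registry in between, the Jenkins job on Server A pushes and Server B pulls; a rough sketch (registry address and image name are placeholders):
# on Server A, after the build
docker tag myapp:latest registry.example.com:5000/myapp:latest
docker push registry.example.com:5000/myapp:latest
# on Server B
docker pull registry.example.com:5000/myapp:latest
docker run -d --name myapp registry.example.com:5000/myapp:latest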
You can use the Docker REST API; the Jenkins HTTP Request plugin can be used to make the HTTP requests. You can also run Docker commands directly against a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
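With that variable exported, subsequent docker commands in the same shell talk to the remote daemon on Server B rather than the local one, for example:
docker ps                                  # lists containers running on the remote host
docker run -d --name myapp myapp:latest    # starts a container on the remote host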
Please be aware of the security concerns when allowing TCP traffic. More info.
Another method is to use SSH Agent Plugin in Jenkins.
We need to automate the deployment process. Let me point out the stack we use.
We have our own GitLab CE instance and a private Docker registry. On the production server, the application runs in a container. After every commit to master, GitLab CI builds the image with the code in it and pushes it to the Docker registry, and this is where the automation ends.
Deployment on the production server comes down to a few steps: stopping the current application container, pulling the newer image, and running it.
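In shell terms, those steps amount to something like this (registry address, image, and container names are placeholders):
docker pull registry.example.com/myapp:latest
docker stop myapp && docker rm myapp
docker run -d --name myapp registry.example.com/myapp:latest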
What is the best way to automate this process?
I read about a couple of solutions (but I believe there are many more):
the private Docker registry notifies the production server, which then performs all of the above steps itself (a script on the production machine managed by e.g. supervisor or something similar)
using docker-machine to remotely manage and run the containers
What is the preferred way? Or can you recommend something else?
No need to use tools like Swarm, Kubernetes, etc. It's quite a simple application. Thanks in advance.
How about installing a GitLab CI runner on your production machine? Then add a job called deploy that runs after the push to the registry on master, and pin it to that machine using GitLab CI tags.
The job simply pulls the image from the registry and restarts your service or whatever you have in place.
Something like:
deploy-job:
  stage: deploy
  tags:
    - production
  script:
    - docker login myprivateregistry.com -u $SECRET_USER -p $SECRET_PASS
    - docker pull $CI_REGISTRY_IMAGE:latest
    - docker-compose down
    - docker-compose up -d
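Registering the runner on the production machine and pinning it with the production tag is a one-time step; a sketch with placeholder URL and token (double-check the flag names against your gitlab-runner version):
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token YOUR_TOKEN \
  --executor shell \
  --tag-list production \
  --description "production deploy runner"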
I can think of four solutions:
use Watchtower on the production server: https://github.com/v2tec/watchtower
run a webhook server which is requested by your CI after it pushes the image to the registry: https://github.com/adnanh/webhook
as already mentioned, run a CI runner on production too, which finally triggers your update commands
enable the Docker API and update the container by requesting it from the CI (a rough sketch of the API calls follows below)
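For the last option, the calls against the Docker Engine API could look roughly like this (host, port, and names are placeholders; verify the endpoints against the API reference for your Docker version):
# pull the newer image on the production host
curl -X POST "http://prod-host:2375/images/create?fromImage=registry.example.com/myapp&tag=latest"
# remove the old container, then create and start a new one from the fresh image
curl -X DELETE "http://prod-host:2375/containers/myapp?force=true"
curl -X POST -H "Content-Type: application/json" \
  -d '{"Image": "registry.example.com/myapp:latest"}' \
  "http://prod-host:2375/containers/create?name=myapp"
curl -X POST "http://prod-host:2375/containers/myapp/start"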
I am very new to OpenShift Origin. I am now trying out the possibility of deploying my Docker containers in OpenShift Origin.
For that I created a very simple Docker container that adds two numbers and produces the result:
https://github.com/abrahamjaison01/openshifttest
I created a Docker image locally and a public Docker image on Docker Hub:
docker pull abrahamjaison/openshifttest
I run the docker image locally as follows:
[root@mymachine /]# docker run -it --rm abrahamjaison/openshifttest
Enter first large number
12345
Enter second large number
54321
Result of addition = 66666
Since I am completely new to OpenShift, I have no idea how to deploy this in the OpenShift environment.
I created a new project: oc new-project openshifttest
Then a new app: oc new-app docker.io/abrahamjaison/openshifttest
But then I do not know how I can access the console/terminal to provide the inputs. Also, many times when I run this, "oc status" reports that the deployment failed.
Basically I would like to know how I can deploy this Docker image on OpenShift and how I will be able to access the terminal to provide the inputs for performing the addition.
Could someone help me out with this?
OpenShift is principally for long-running services such as web applications and databases. It isn't really intended for running a Docker container that wraps a command which returns a result to the console and exits.
To get a better understanding of how OpenShift 3 is used, download and read the free eBook at:
https://www.openshift.com/promotions/for-developers.html
The closest you will get to the equivalent of docker run is the oc run command, but it sort of defeats the whole point of what OpenShift is for. You are better off using Docker on your own system for what you are describing.
A guess at the command you would use, if you really wanted to try, would be:
oc run test -i --tty --rm --image=abrahamjaison/openshifttest
As I say, though, OpenShift isn't really intended for this. oc run exists more for testing when you are having deployment problems with your applications.
Following the "Creating an Application From an Image" part, the syntax should be:
oc new-app abrahamjaison/openshifttest
By default, OpenShift will look for the image on Docker Hub.
But that supposes you have pushed the image built from your GitHub project there first: see "Store images on Docker Hub". That might be the missing step in your process.
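If that is the case, pushing the locally built image would look something like this (assuming you are logged in to that Docker Hub account and the local image tag matches your build):
docker login
docker tag openshifttest abrahamjaison/openshifttest:latest
docker push abrahamjaison/openshifttest:latest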
The interaction with oc is done with the OpenShift CLI or web console, as illustrated in the authentication page.
I have developed an app image using Docker. I am able to run the image, but now I need to deploy it on multiple servers. I came across fig, which can deploy an app on multiple servers, but it is still in the development stage and I do not know how well it works. How can I deploy my Docker image on multiple servers? Is there any tool that can be used along with Docker to deploy to multiple servers? I need some suggestions.
The simplest way is to save the image as a tarfile:
docker save [my-image-name] > my-tarfile
and load it on each target:
ssh target1 docker load < my-tarfile
ssh target2 docker load < my-tarfile
...
(adjusting for sudo, ssh credentials, etc)
You could also enable remote access for Docker and use docker -H target instead of ssh.
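For example, with the remote daemon listening on TCP (host and port are placeholders):
docker -H tcp://target1:2375 load < my-tarfile
docker -H tcp://target1:2375 run -d [my-image-name]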