I have a web application which I want to dockerise and release using Azure Pipelines.
I'm already able to build and test it, but I am struggling with releasing it.
Build Pipeline (on build server controlled with VSTS agent):
1) Build using .NET Core
2) Run unit tests using .NET Core
3) Build docker image
4) Push to registry
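Roughly, those steps boil down to the following commands (solution, image, and registry names are placeholders):
# 1) Build and 2) test with the .NET Core CLI
dotnet build MyApp.sln --configuration Release
dotnet test MyApp.sln --configuration Release
# 3) Build the Docker image, tagged with the build number
docker build -t myregistry.example.com/myapp:$BUILD_NUMBER .
# 4) Push it to the registry
docker push myregistry.example.com/myapp:$BUILD_NUMBER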
This then automatically triggers my Release Pipeline. The release pipeline has two stages: create release and deploy release.
A non-Docker scenario would be to create DLLs (create release) and deploy them to web servers (deploy release). I'm not sure if/how this applies to Docker apps.
Release Pipeline (a separate release agent running on each environment server):
Artifact (CI Build) => DEV => SIT => UAT => PROD
In each of the environments, I do:
1) stop and remove the old image/container
2) pull the 'latest' tag from the registry
3) run the new image in a Docker container.
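Concretely, each environment runs something like this (container and image names are placeholders):
# 1) Stop and remove the old container (ignore errors if nothing is running)
docker stop myapp || true
docker rm myapp || true
# 2) Pull the 'latest' tag from the registry
docker pull myregistry.example.com/myapp:latest
# 3) Run the new image
docker run -d --name myapp -p 80:80 myregistry.example.com/myapp:latest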
Question:
How do I create a single, release-specific Docker image which can then be pushed to each environment, and how do I split create release from deploy release to get this done?
Issues:
If I'm just pulling an image in each environment, then I'm only ever getting the latest tag. I am not sure whether I can tag the registry image with the release number.
The reason for tagging with a release number/name is so that I can re-deploy a previous release if necessary.
I want to avoid storing all the release images on my DEV/SIT/UAT/PROD servers. The release images need to live on the registry server so that I have a central location to pull them from.
Addressing Your Question
Your create-release job can look like this (the $CI_* variable names below follow GitLab CI conventions; in Azure Pipelines you would define equivalent pipeline variables).
echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
docker build -t $CI_REGISTRY_IMAGE -t $CI_REGISTRY_IMAGE:$VERSION .
docker push $CI_REGISTRY_IMAGE
docker push $CI_REGISTRY_IMAGE:$VERSION
Your deploy-release job can look like this.
echo $DOCKER_HUB_PASSWORD | docker login -u $DOCKER_HUB_USER --password-stdin $REGISTRY_URL
docker pull $CI_REGISTRY_IMAGE
docker pull $CI_REGISTRY_IMAGE:$VERSION
docker tag $CI_REGISTRY_IMAGE $DOCKER_HUB_USER/$IMAGE_NAME
docker tag $CI_REGISTRY_IMAGE:$VERSION $DOCKER_HUB_USER/$IMAGE_NAME:$VERSION
docker push $DOCKER_HUB_USER/$IMAGE_NAME
docker push $DOCKER_HUB_USER/$IMAGE_NAME:$VERSION
Addressing Your Issues
Yes, you can tag the registry image with a particular release number. This is standard for Docker registries including Docker Hub.
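For example, here is a minimal sketch of retagging an already-pushed image with a release number (registry host and variable names are hypothetical):
# Pull the image the build produced, add a release tag, and push the tag;
# the registry stores the layers once, so extra tags cost almost nothing.
docker pull myregistry.example.com/myapp:latest
docker tag myregistry.example.com/myapp:latest myregistry.example.com/myapp:$RELEASE_NUMBER
docker push myregistry.example.com/myapp:$RELEASE_NUMBER
Redeploying a previous release is then just a docker pull of the old tag in the target environment.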
Related
There is an ASP.NET Core API project with its sources in GitLab.
I created a GitLab CI/CD pipeline to build a Docker image and push the image to the GitLab Docker registry
(thanks to https://medium.com/faun/building-a-docker-image-with-gitlab-ci-and-net-core-8f59681a86c4).
How do I update the Docker containers on my production system after pushing the image to the GitLab Docker registry?
*By "update" I mean:
docker-compose down && docker pull && docker-compose up
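Spelled out, and assuming docker-compose.yml references the image by its registry path, that would be something like:
docker-compose pull      # fetch the new image from the GitLab registry
docker-compose down      # stop and remove the old containers
docker-compose up -d     # start containers from the updated image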
The best way to do this is to use an image puller; a lot of open-source options are available, or you can write your own in shell. There is one here. We use Swarm, and we use this hook concept, triggered from our CI/CD pipeline: once our build stage is done, we make an HTTP request to the hook URL, and Docker pulls the updated image. One disadvantage with this is that you need a daemon to watch your hook task so that it doesn't crash or go down. So my suggestion is to run this hook task as a Docker container with the restart policy set to always.
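For illustration, here is a minimal redeploy script that such a hook could execute on the production host (image and container names are hypothetical):
#!/bin/sh
# redeploy.sh: run by the webhook daemon when the CI pipeline calls the
# hook URL after pushing a new image.
docker pull registry.example.com/myapp:latest
docker stop myapp 2>/dev/null || true
docker rm myapp 2>/dev/null || true
docker run -d --name myapp registry.example.com/myapp:latest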
I have a Java application in VSTS for which a build definition has been created to generate a number of build artifacts, including an EAR file and a server configuration file. All of these build artifacts are zipped up in a final build definition task.
We now wish to create a Dockerfile which encapsulates the above build artifacts in another VSTS Docker build task. This will be done via a build definition command-line task, and it is worth pointing out that our target Docker registry is a corporate registry, not Azure.
The challenge I am now facing is how to generate the required docker image from the zipped artifact (or its contents if possible). Any ideas on how this could be achieved?
To load a Docker image from a tar archive you can use the docker load command:
$ docker load < test.tar.gz
Loaded image: test:latest
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
test         latest   769b9341d937   7 weeks ago   2.489 MB
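Note that docker load only understands archives produced by docker save; if the zip contains raw build artifacts (the EAR file, etc.) rather than a saved image, you would first build the image from a Dockerfile and can then produce such an archive. A sketch with hypothetical names:
# Unpack the build artifacts and build an image from them (assumes a
# Dockerfile in the unpacked directory), then save it as an archive
# that docker load can read:
unzip artifacts.zip -d build-context
docker build -t test:latest build-context
docker save test:latest | gzip > test.tar.gz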
After that you can push the image to your private registry:
docker login <REGISTRY_HOST>:<REGISTRY_PORT>
docker tag <IMAGE_ID> <REGISTRY_HOST>:<REGISTRY_PORT>/<APPNAME>:<APPVERSION>
docker push <REGISTRY_HOST>:<REGISTRY_PORT>/<APPNAME>:<APPVERSION>
Example:
docker login repo.company.com:3456
docker tag 769b9341d937 repo.company.com:3456/test:0.1
docker push repo.company.com:3456/test:0.1
So at the end of your build pipeline, add a Command Line task and run the above commands (changing the values to your archive location, username, password, registry URL, etc.).
We need to automate the deployment process. Let me point out the stack we use.
We have our own GitLab CE instance and a private Docker registry. On the production server, the application runs in a container. After every commit to master, GitLab CI builds an image containing the code and sends it to the Docker registry, and this is where the automation ends.
Deployment on the production server comes down to a few steps: stopping the current application container, pulling the newer image, and running it.
What is the best way to automate this process?
I read about a couple of solutions (but I believe there are many more):
the private Docker registry pings a production server, which performs all the above steps itself (a script on the production machine managed by e.g. supervisord or something similar)
using Docker Machine to remotely manage the running containers
What is the preferred way? Or you can recommend something else?
There's no need to use tools like Swarm, Kubernetes, etc. It's quite a simple application. Thanks in advance.
How about installing a GitLab CI runner on your production machine? Then add a job called deploy that runs after the push to the registry on master, and pin it to that machine using GitLab CI tags.
The job simply pulls the image from the registry and restarts your service or whatever you have in place.
Something like:
deploy-job:
  stage: deploy
  tags:
    - production
  script:
    - docker login myprivateregistry.com -u $SECRET_USER -p $SECRET_PASS
    - docker pull $CI_REGISTRY_IMAGE:latest
    - docker-compose down
    - docker-compose up -d
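The one-time setup on the production host would be registering a runner with the production tag, something like this (URL and token are placeholders):
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token <RUNNER_TOKEN> \
  --executor shell \
  --tag-list production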
I can think of four solutions:
use Watchtower on the production server (https://github.com/v2tec/watchtower); see the run sketch after this list
run a webhook server which is requested by your CI after pushing the image to the registry: https://github.com/adnanh/webhook
as already mentioned, run the CI on production too, which finally triggers your update commands
enable the Docker API and update the container by requesting it from the CI
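For the first option, running Watchtower is a single command; it watches the Docker socket and recreates containers whenever their image is updated in the registry (the polling interval below is just an example, in seconds):
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower --interval 300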
Recently, we finished a web application solution using Docker.
https://github.com/yccheok/celery-hello-world/tree/nginx (The actual solution is hosted in private repository. This example just a quick glance on how our project structure looks like)
We plan to purchase one empty Linux machine and deploy on it. We might purchase more machines in the future, but with the current traffic, one machine will be sufficient.
My plan for deployment on the single empty machine is:
git pull <from private code repository>
docker-compose build
docker-compose up -d
Since we are going to deploy to multiple machines in the near future, I was wondering: is it common practice to deploy a Docker application onto a fresh, empty machine like this?
Is there anything we can utilize from https://hub.docker.com/, without requiring us to perform git pull during the deployment stage?
You don't want to perform git pull on each machine; your intuition is correct.
Instead, you want to use a remote Docker registry (such as Docker Hub).
So the right flow, each time your source code (git repo) changes, is:
git pull from all relevant repos.
docker-compose build to build all relevant images.
docker-compose push to push all images (only changed layers) to the remote registry.
docker-compose pull on your production machines, to get the latest updated images.
docker-compose up to start all containers.
The first three steps should be done on your CI machine (for example, as a Jenkins job); steps 4-5 on your production machines.
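As a concrete sketch, assuming every service in docker-compose.yml declares an image: name under your registry (docker-compose push only pushes services that have an image: key):
# On the CI machine (steps 1-3):
git pull
docker-compose build
docker-compose push
# On each production machine (steps 4-5):
docker-compose pull
docker-compose up -d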
EDIT: one thing to consider. I think building via docker-compose is a bad idea. Consider building directly with docker build -f Dockerfile -t repo/image:tag . and, in docker-compose, just specifying the image name.
My opinion is that you should not BUILD images on production machines, because the image might end up different from what you expect, and you should limit what you do on production machines. With that being said, I would recommend:
updating the code on your local computer (development)
when you push code to git, you should use some software to build your images from your push, for example GitLab CI (a continuous integration tool)
GitLab CI will build the image, then it can run some tests on that image, and then deploy it (this built image) to production
on your production machine, just do docker-compose pull && docker-compose up -d and that is it
I strongly recommend building images on a machine other than your production machines, and using some CI tool to test your images before deploying. For example: https://docs.gitlab.com/ce/ci/README.html
Deploying onto a fresh, empty machine is fine; it is a common practice.
The best way to go about it is to make a private repo on https://hub.docker.com/ and push your images there.
Building and shipping the image
git pull
docker build
docker login
docker push repo/image
Pulling the shipped image and deploying
docker login on the server
docker pull repo/image
docker-compose up -d
Though I would recommend that you look at container scheduling using Kubernetes, and at setting up your CI/CD stack with Jenkins to automate this process; in case something bad happens, it can be a lifesaver.
We're using the Cloudbees Docker Build and Publish plugin to build Docker images in our Jenkins instance. The builds are working fine and we're pushing to Docker Hub successfully, but the images are sticking around on the Jenkins slave and causing space issues.
Is there an option to remove the images after a successful build and push? Thanks.
Like you said, you need to have --rm as an Additional Build Argument in the advanced section of the CloudBees Docker Build and Publish plugin to get rid of those intermediate containers, but the images that you build and push to the repo will still remain on your host. A simple fix would be to add a build step and execute a shell command like this to delete those images:
docker rmi ACCOUNT/IMAGE:${BUILD_NUMBER}
Assuming you're tagging your images with the Jenkins BUILD_NUMBER, you can replace it with whatever variable you use.
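If you also want to clear the dangling intermediate layers that builds leave behind, a follow-up step could run this (assuming a Docker version recent enough to have the prune subcommand):
docker image prune -f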