Continuous deployment with Docker

I'm currently working with a stack that lets me automate my integration/deployment system.
At the moment I work as follows:
I push my code to a GitHub repository
Jenkins watches the repo, builds the software and launches the unit tests
If the unit tests (or any other kind of tests) pass, it notifies Rundeck to deploy to my servers (3 in my case) by connecting over SSH and telling them: "hey guys, pull from GitHub, a new software version is available", then it restarts the concerned service and my software is now up to date
Okay, tell me if I'm wrong, but it seems to be a good solution, right?
Then I wanted to containerize my applications, and now I've got some headaches.
First solution
In fact, I was thinking about something like:
Push to GitHub
Jenkins tests and builds the Docker image
Rundeck pushes the image to Docker Hub and tells the 3 servers, over SSH, to pull the new image back from the hub and run it
Problem: each deployment would start yet another container (multiple docker runs of the same image, but with different versions :( )
Second solution
The second solution was to:
Push to GitHub
Jenkins runs the tests and tells Rundeck that they passed, without creating a "real build" (only one image, for testing)
Rundeck connects to the running container through SSH, asks it to pull the modifications, then restarts the Docker container
Problem: I am forced to run SSH inside all my containers
I don't know how to get around these problems, or what the best solution is...
Thanks for your help

I don't see any problem with solution 1.
1. Build the production version with Jenkins
2. Push it (via Jenkins) to your private Docker registry
3. Tell Rundeck/Ansible/Chef/Puppet to ask the 3 servers to pull the latest image and restart the container
However, it's highly recommended to have a strategy that follows the blue-green principle and supports rollbacks if something crashes.
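For step 3, a minimal sketch of what each of the 3 servers could be asked to run over SSH (registry, image and container names are placeholders, not taken from the question):

    # pull the newest image from the private registry
    docker pull registry.example.com/myapp:latest

    # stop and remove the previous container, if any
    docker stop myapp || true
    docker rm myapp || true

    # start a fresh container from the new image
    docker run -d --name myapp --restart unless-stopped -p 8080:8080 \
      registry.example.com/myapp:latest

This also avoids the "multiple containers of the same image, different versions" problem from the first solution, because the old container is removed before the new one starts.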

Related

Auto deploy docker images on push

First, I'm a noob with Continuous Deployment. I currently have a VPS running 3 Docker containers (Flask, MongoDB, Nginx) that I'm pulling from Docker Hub with a docker-compose file. What I want to do is auto-deploy those 3 containers when pushing some code to my GitHub repo. I think it's possible with Ansible, but I have never used it.
Can someone explain to me how to do it?
Many thanks!
Finally I will use Jenkins :)
That implies a webhook, as explained in "How to Integrate Your GitHub Repository to Your Jenkins Project" by Guy Salton
And that means your Jenkins server is accessible through an internet-facing public URL, which is not always obvious when working in a corporate environment.
GitHub Actions "Publishing Docker images" can help publish the image to DockerHub, but you still need to listen for/detect those events in order for your Jenkins to trigger the job pulling said published images.
For that, a regularly scheduled Jenkins job using regclient/regclient can help check whether the latest published SHA2 image ID has changed.
See more with "Container Registry Management with Brandon Mitchell: DevOps and Docker (Ep 108)".
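For example, such a scheduled job could compare digests with regclient's regctl command; a minimal sketch, where the image name and the file used to remember the last deployed digest are assumptions for illustration:

    # digest of the image currently published on Docker Hub
    NEW=$(regctl image digest myorg/myapp:latest)

    # digest recorded after the last deployment (path is an assumption)
    OLD=$(cat "$HOME/myapp.digest" 2>/dev/null || true)

    if [ "$NEW" != "$OLD" ]; then
      echo "$NEW" > "$HOME/myapp.digest"
      # trigger the downstream job that pulls and redeploys the image
    fi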

Using docker the way Openshift does?

I read this: How does docker compare to openshift?
But I have a question:
This is an extremely simplified description of what devs usually do with OpenShift:
Select a "pod" (let's say a JBoss/Wildfly container)
From within OpenShift you point to your GitHub repo
OpenShift would clone the repo, build it and deploy it
OpenShift presents you with a web URL to access this app on port 8080
There's of course a lot more going on but that's as simple as it gets
Is this setup doable on my own Linux box, a VM or a cloud instance (Docker container --> clone, build and deploy from a git repo)? What would I need, without messing too much with networking, domains, etc.?
From my research I see the following tools:
Kubernetes
Dokku: I see it described as "Your own Heroku"
I also keep hearing about CaaS (Containers as a Service)
I understand I would need another tool or process for the build (CI/CD) capability, and for triggering builds with git push.
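If the goal is just the clone/build/deploy part on a single box, plain Docker already covers a rough version of it. A minimal sketch, assuming the repo contains a Dockerfile (repo URL, image name and port are placeholders):

    # docker build accepts a git URL and clones it for you
    docker build -t myapp https://github.com/youruser/yourrepo.git

    # run it and expose port 8080, roughly what the OpenShift route gives you
    docker run -d --name myapp -p 8080:8080 myapp

The git-push trigger, routing and scaling are what tools like Dokku or Kubernetes then layer on top of that.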

CI/CD with Docker - what is the final deployment step?

I am developing a small website (Ruby/Sinatra) to be used internally where I work. (Simply, it crunches some source data and generates reports.)
I want to deploy it using Docker and have a setup that works in my dev environment, but I'm trying to understand the workflow for "production" deployment (we're using Jenkins).
I've read lots of articles about deployment workflows using Docker, but they all seem to stop at "and then push your image to the Docker registry". What seems to be missing is how to then take that image and actually update the application.
I appreciate that every application is likely to be different, but what is the next step? I'm aware of lots of different frameworks like Chef, Puppet, Ansible that could be used, but my question really is - how do I integrate that into my CI/CD pipeline? E.g. does a job "push" the changes to the production server, or should a Jenkins slave be running on the production server to execute a job directly on the server?
There are several orchestration tools like Docker Swarm, Kubernetes and Rancher. In Docker Swarm, for example, you create services and can update their versions in a blue-green deployment manner, even for just one instance (then there is no blue-green :) ). If you just use docker run, you should check for your running container, stop and remove it if it is running, and start your container with the newer image version.
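A minimal sketch of the Swarm variant (service name, image and replica count are placeholders):

    # create the service once
    docker service create --name myapp --replicas 2 -p 8080:8080 myregistry/myapp:1.0

    # later, roll out a new version; Swarm replaces the tasks for you
    docker service update --image myregistry/myapp:1.1 myapp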
It depends on how your application is configured to run. In my case, I have a call to "docker run" in a systemd script. It's configured to just restart if it ever stops.
So, in my Jenkinsfile, after I push the image to the registry, I do a "docker pull" (my Jenkins agent is running on the same box that the application is running on), and then a "docker stop". That causes the application to exit and then restart, which makes it pick up the new version that was just pulled, so it's now running the new version.
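In shell terms, those post-push steps could look roughly like this (image and container names are placeholders; it assumes the systemd unit's "docker run" recreates the container, e.g. via --rm, each time it restarts):

    # the Jenkins agent runs on the same box as the application
    docker pull myregistry/myapp:latest

    # stopping the container makes systemd run its "docker run" again,
    # which starts a new container from the freshly pulled image
    docker stop myapp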

Jenkins deploy into multiple openshift environments and ALM Maintenance

I'm a bit of a newbie with Jenkins, so I have this question.
I am currently working on a project about CD. We are using Jenkins to build a Docker image, push it to the registry and deploy into OpenShift afterwards... Although this process works like a charm, there is a tricky problem I'd like to solve. There is not only 1 OpenShift but 3 (and increasing) environments/regions where I want to deploy these images.
This is how we are currently doing:
Setting region tokens as secret text
$region_1 token1
$region_2 token2
$region_3 token3
Then
build $docker_image
push $docker_image to registry
deploy into Region1.ip.to.openshift:port -token $region_1
deploy into Region2.ip.to.openshift:port -token $region_2
deploy into Region3.ip.to.openshift:port -token $region_3
Thus, whenever we need to add any new "region" to the Jenkins jobs, we have to edit every job manually...
Since the number of Docker images and also the number of OpenShift regions/environments is increasing, we are looking for a way to "automate" this, or make it as easy as possible to add a new OpenShift region, since ALL the jobs (old and new ones) must deploy their images into those new environments/regions...
I have been reading documentation for a while, but Jenkins is so powerful and has so many features/options that somehow I get lost reading all the docs...
I don't know if a Pipeline process or similar would help...
Any help is welcome :)
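One way to avoid editing every job by hand is to keep the region list in a single shared location and loop over it in a shared pipeline step; a minimal shell sketch under that assumption (the file path, variable names and the oc commands shown are illustrative, not your actual setup):

    # regions.txt holds one "server token" pair per line, maintained once for all jobs
    while read -r server token; do
      oc login "$server" --token="$token"
      # reuse whatever deploy step the jobs already perform, once per region
      oc rollout latest "dc/${APP_NAME}"
    done < regions.txt

Adding a region then means adding one line to regions.txt instead of touching every job.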

How would I go about creating a Docker environment in CI with lots of services

Suppose I want to move my current acceptance-test CI environment to Docker, so I can benefit from performance improvements and also quickly set up multiple clones for slow acceptance tests.
I would have a lot of services.
The easy ones would be Postgres, MongoDB, Redis and such, which are rarely updated.
However, how would I go about it if my own product has lots of services as well - over 10-20 services that all need to work together for tests? Is it even feasible to handle this with Docker, i.e. how can CI efficiently control so many containers automatically AND make clones of them to run acceptance tests in parallel?
Also, how would I automatically update the containers easily for the CI? Would the CI simply need to rebuild every container at the start of every run with the HEAD of every service branch? Or would the CI run git pull and some update/migrate command on every service?
With VMs it's easy to control these services, but I would like to be convinced that Docker is as good or better for this as well.
I'm in the same position as you and have recently gotten this all working to my liking.
First of all, while docker is generally intended to run a single process, for testing I've found it works better for the docker container to run all services needed. There is some duplication in going this route, but you don't have to worry about shared services, like Mongo or PostgreSQL. This can be accomplished by using something like Supervisor: http://docs.docker.com/articles/using_supervisord/
The idea is to configure supervisor to start all necessary services inside the container, so they are completely isolated from other containers. In my environment, I have mongo, xvfb, chrome and firefox all running in a single container. So really, you still are running a single process (supervisor) but it starts many others.
As for adding repositories to your container, I just have the host machine checkout the code and then when I run docker, I use the -v flag to add the repo to the container. This way you don't need to rebuild the container each time. I build containers nightly with the latest code to be able to add all necessary gems for a faster 'gem install' at testing time.
Lastly I have a script as the entrypoint of the container that allows me to pass in what test I want to run.
Jenkins then just runs the docker commands and passes in the tests to run. These can be done in parallel, sequentially or any other way you like. I'm currently looking into having these tests run on slave Jenkins instances in an auto-scaling group in AWS.
Hope that helps.
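A minimal sketch of that kind of invocation (image name, mount path and the test argument are placeholders):

    # the host checks out / updates the code, then mounts it into the pre-built image;
    # the container's entrypoint script receives the test to run
    git -C /srv/repos/myapp pull
    docker run --rm -v /srv/repos/myapp:/app my-test-image spec/acceptance/login_spec.rb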
Drone is a Docker-based open-source CI, plus an online service: https://drone.io
Generally it runs builds and tests in Docker containers, and removes all containers after the build. You just need to provide a file named .drone.yml, with configuration similar to .travis.yml, to configure your build.
It will manage your services, like databases and caches, as linked containers.
For your build environment, you can use existing Docker images as templates for dependencies.
So far it supports github.com and GitLab. For your own CI system, you can use the Drone CLI only, or its web interface.
I recommend using the Jenkins Docker plugin; though it is new, it starts to expose the power of Docker inside Jenkins, and the configuration is well documented there. (Let me know if you have problems.)
The strategy I planned to use:
Create different app images to serve different services like Postgres, MongoDB, Redis and such; since they are rarely updated, they will be configured globally as "cloud" templates in advance, and each VM will have a label to indicate the service
In each Jenkins job, the image will be selected as a slave node (using that label as the name)
When the job is triggered, it will automatically start the Docker container as a slave in seconds
It should work for you.
BTW: At the time I answered (May 2014), the plugin was not mature enough, but it is the right direction.

Resources