Best practices: How to use CI/CD to deploy a Flask web app to DigitalOcean? [closed]

I am teaching myself how to use Jenkins and Docker for CI/CD and deployment to DigitalOcean. I am stuck at some steps, and I am especially interested in CI/CD best practices.
The process/pipeline I currently have:
Local code with the Flask web app and a docker-compose.yml (incl. Dockerfile)
Push the code to GitHub
CI: a local Jenkins instance (to be moved to the host later) tests the code
If the tests pass, I log in to the droplet, clone the repo, stop the running Docker container, and do docker-compose up (roughly the commands shown below)
My app is live again
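Concretely, step 4 currently looks roughly like this (the droplet address, repo URL and paths are placeholders):

    # Manual step 4, run from my machine; address, repo and paths are placeholders
    ssh root@203.0.113.10 '
      rm -rf app && git clone https://github.com/example/flask-app.git app &&
      cd app &&
      docker-compose down &&
      docker-compose up -d --build
    '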
I would like to automate step 4, and I have two potential plans for doing it (advice on best practice is appreciated!).
Plan 1:
1. Write a step in the Jenkins pipeline that will:
4.1 start a new droplet automatically
4.2 log into it with SSH
4.3 pull the code from GitHub
4.4 start it with docker-compose
4.5 reroute the floating IP to the new droplet
Plan 2:
2. Write a step in the Jenkins pipeline that will (rough sketch after this list):
4.1 build the code and push an image to "somewhere"
4.2 start a new droplet
4.3 log into the droplet with SSH
4.4 pull the image from "somewhere"
4.5 start it with docker-compose
4.6 reroute the floating IP to the new droplet
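Here is a rough sketch of how I imagine the Plan 2 steps as shell commands in a Jenkins stage; the registry name, droplet details and the doctl calls are assumptions on my side, not something I have working:

    # Plan 2 sketch -- every name, ID and IP below is a placeholder
    IMAGE=registry.example.com/flask-app:${GIT_COMMIT}   # GIT_COMMIT is set by Jenkins

    # 4.1 build and push the image "somewhere" (a registry)
    docker build -t "$IMAGE" .
    docker push "$IMAGE"

    # 4.2 start a new droplet (assumes doctl is installed and authenticated)
    doctl compute droplet create flask-app-new \
      --image docker-20-04 --size s-1vcpu-1gb --region fra1 \
      --ssh-keys "$SSH_KEY_ID" --wait

    # 4.3/4.4/4.5 log in, pull the image, start it with docker-compose
    # (assumes a docker-compose.yml referencing $IMAGE is already on the droplet)
    ssh root@"$NEW_DROPLET_IP" "docker pull $IMAGE && docker-compose up -d"

    # 4.6 point the floating IP at the new droplet
    doctl compute floating-ip-action assign "$FLOATING_IP" "$NEW_DROPLET_ID"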
I'd like to hear your opinions on the steps:
1. Which plan is better?
2. What could I do better?
3. What are the best practices that I could use?
4. Where can I push an image so that I can pull it in a new droplet?
EDIT:
I'd like to hear your answers to the following:
1. Which plan is better?
2. Why is Kubernetes better than docker-compose in a production environment?

I recommend using Kubernetes instead of docker-compose in production environments. If not Kubernetes, and you really want Docker only, then at least make it Docker Swarm.
docker-compose is not reliable for production, first of all because it only runs on a single node. If you want to scale up, you will surely have downtime, because you will be relying on vertical scaling (increasing your node's resources).
Kubernetes and Docker Swarm are orchestration tools, meaning you can add more servers to scale the app, a.k.a. horizontal scaling. Orchestration allows your containers to be scheduled onto other droplets, and they can freely communicate with containers on different droplets; docker-compose alone cannot do that. I would recommend Docker Swarm for beginners, as Kubernetes is very complicated.
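As a rough illustration of what orchestration buys you, this is approximately how the same compose file could be run on a Swarm spread across several droplets (stack and service names are examples only):

    # On the first droplet: make it a Swarm manager
    docker swarm init

    # On every additional droplet: join the cluster
    # (docker swarm init prints the exact join command and token)
    docker swarm join --token <worker-token> <manager-ip>:2377

    # Back on the manager: run the compose file as a stack
    # (requires compose file format version 3)
    docker stack deploy -c docker-compose.yml flaskapp

    # Horizontal scaling: run 3 replicas of the "web" service across the droplets
    docker service scale flaskapp_web=3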
Normally you should only have to set up your infrastructure in the cloud once; after that, your CI/CD on Jenkins does the continuous integration, continuously building images and then, at the very least, doing an automated deployment to your server.
What I am talking about here is: when you merge your code into a particular branch (e.g. master) of your source code repository, like GitHub or Bitbucket, an automatic Jenkins build runs and executes your CI/CD. So basically, every time master has an update, it also updates the image inside your droplet, so it is running the latest source code.
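For example, the Jenkins job triggered by a merge to master could boil down to something like this (the image name is a placeholder):

    # Runs on every merge to master: build, tag with the commit, push
    IMAGE=yourname/flask-app                   # placeholder Docker Hub repository
    docker build -t "$IMAGE:$GIT_COMMIT" -t "$IMAGE:latest" .
    docker push "$IMAGE:$GIT_COMMIT"
    docker push "$IMAGE:latest"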
In your case, where you are using DigitalOcean, you can create an API on your droplet that accepts webhooks to trigger the automated deployment.
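The script that such a webhook endpoint triggers on the droplet can stay very small, for example (the directory and compose setup are assumptions):

    #!/bin/sh
    # deploy.sh on the droplet -- called by the webhook handler
    set -e
    cd /opt/flask-app        # placeholder directory holding docker-compose.yml
    docker-compose pull      # fetch the image that CI just pushed
    docker-compose up -d     # recreate only the containers whose image changed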
This is the approach I can think of when using DigitalOcean. DigitalOcean is very cheap, but things are done manually, unlike on GCP and AWS. On GCP and AWS there are more approaches to deployment automation than creating your own webhook API. Regarding your last statement "if I can use Jenkins to clone the code, and run the container in the new droplet and reroute with IP floating": I think it is too much, and it is slow. This process alone may take maybe 10 whole minutes? We do deployment automation with our Kubernetes in maybe 30 seconds. Our whole CI/CD takes only 2 minutes.
On your fourth question: Docker Hub should be fine for your image repository.

Related

GitLab CI build multiple dockers + test end-to-end

I want to build and test multiple Docker images from one repo in GitLab. This monorepo holds a few microservices that work together.
We have a working docker-compose setup to aid local dev, so that's a start.
The goal is to move build + test to GitLab, run an E2E (end-to-end) test of these containers, and let GitLab upgrade our staging environment.
Building the multiple Docker images is just multiple jobs in the build stage of the (one) pipeline, and testing can happen per image, I guess, which leaves testing the whole thing with multiple containers running in a (or the?) staging environment.
How can GitLab run E2E tests on multiple containers? (Or is this unwise to begin with?)
Would we need Kubernetes for the inter-container mapping (network, volumes, but also dependencies) that docker-compose currently facilitates?
We are using a self-hosted GitLab CE instance.
Update: cut shorter, use proper terminology.
I haven't worked with PHP, and have never done a Docker build for a multi-module setup, although I tried out a quick multi-module example for Node.js. Check this repo.
Refer to its .gitlab-ci.yml, which builds two independent hello-world-style Node.js modules.
I have now adapted this to build the Docker images separately.
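In essence, the separate image builds are just independent docker build calls pointed at each module's directory, roughly like this (module and registry names here are placeholders, not the ones in the linked repo):

    # One build job per module; each module directory has its own Dockerfile
    docker build -t registry.example.com/hello-a:"$CI_COMMIT_SHA" -f module-a/Dockerfile module-a
    docker build -t registry.example.com/hello-b:"$CI_COMMIT_SHA" -f module-b/Dockerfile module-b

    # Push both so later stages (tests, deploy) can pull them
    docker push registry.example.com/hello-a:"$CI_COMMIT_SHA"
    docker push registry.example.com/hello-b:"$CI_COMMIT_SHA"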
Original answer (question changed 12th July)
You haven't mentioned what documentation you are referring to. You should be able to configure this using .gitlab-ci.yml.
AFAIK, building a Docker image should be programming-language agnostic. If you can run docker build . locally, the following documentation should help:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
You can use a GitLab pipeline for testing and building images. Refer to https://docs.gitlab.com/ee/ci/pipelines.html
Refer to this for PHP build - https://docs.gitlab.com/ee/ci/examples/php.html
Answering my own question, 2 years of experience later.
For a normal (single-project/service) repo the CI is straightforward.
For a monorepo, an end-to-end test is straightforward.
You can combine these aspects by having a repo with submodules; it's essentially both. But this can cause overhead if done continuously, and it is mostly useful for proving out beta/alpha builds and release candidates.
If you know of more options, feel free to add them here.
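For the E2E part, one minimal way to run it in a CI job (not necessarily exactly what we settled on) is to let docker-compose start everything plus a test container and report that container's exit code; the file and service names below are illustrative:

    # Start the whole stack plus an "e2e" test service defined in a dedicated
    # compose file; the job fails if the e2e container exits non-zero
    docker-compose -f docker-compose.e2e.yml up --build --exit-code-from e2e

    # Clean up containers, networks and volumes afterwards
    docker-compose -f docker-compose.e2e.yml down -v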

CI/CD with Docker - what is the final deployment step?

I am developing a small website (Ruby/Sinatra) to be used internally where I work. (Simply, it crunches some source data and generates reports.)
I want to deploy it using Docker and have a setup that works in my dev environment, but I'm trying to understand the workflow for "production" deployment (we're using Jenkins).
I've read lots of articles about deployment workflows using Docker, but they all seem to stop at "and then push your image to the Docker registry". What seems to be missing is how to then take that image and actually update the application.
I appreciate that every application is likely to be different, but what is the next step? I'm aware of lots of different frameworks like Chef, Puppet, Ansible that could be used, but my question really is - how do I integrate that into my CI/CD pipeline? E.g. does a job "push" the changes to the production server, or should a Jenkins slave be running on the production server to execute a job directly on the server?
There are several orchestration tools, like Docker Swarm, Kubernetes and Rancher. In Docker Swarm, for example, you create services and can update their versions in a blue-green deployment manner, also for just one instance (then there is no blue-green :) ). If you just use docker run, you should check for your running container, stop and remove it if it is running, and start your container again with the newer image version.
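In command form, the two options above look roughly like this (service, container and image names are examples):

    # With Docker Swarm: roll the service over to the new image version
    docker service update --image registry.example.com/myapp:v2 myapp

    # With plain docker run: replace the running container by hand
    docker pull registry.example.com/myapp:v2
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -p 80:8080 registry.example.com/myapp:v2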
It depends on how your application is configured to run. In my case, I have a call to "docker run" in a systemd script. It's configured to just restart if it ever stops.
So, in my Jenkinsfile, after I push the image to the registry, I do a "docker pull" (my Jenkins agent is running on the same box the application runs on), and then a "docker stop". That causes the application to exit and then restart, which makes it pick up the new version that was just pulled, so now it's running the new version.
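In other words, the tail end of the Jenkinsfile boils down to roughly these commands (image and container names are placeholders; systemd's restart policy does the actual restart):

    # After the image is pushed (the Jenkins agent runs on the application box):
    docker pull registry.example.com/report-app:latest   # fetch the new version
    docker stop report-app                                # the container exits; systemd
                                                          # restarts the unit, whose
                                                          # docker run picks up the
                                                          # freshly pulled image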

Jenkins deploy into multiple OpenShift environments and ALM maintenance

I'm a bit of a newbie with Jenkins, so I have this question.
I am currently working on a project about CD. We are using Jenkins to build a Docker image, push it to the registry and deploy it into OpenShift afterwards. Although this process works like a charm, there is a tricky problem I'd like to solve: there is not only one OpenShift but 3 (and increasing) environments/regions where I want to deploy these images.
This is how we are currently doing it (sketched in commands below):
Setting region tokens as secret text
$region_1 token1
$region_2 token2
$region_3 token3
Then
build $docker_image
push $docker_image to registry
deploy into Region1.ip.to.openshift:port -token $region_1
deploy into Region2.ip.to.openshift:port -token $region_2
deploy into Region3.ip.to.openshift:port -token $region_3
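To give an idea, the deploy part of each job is essentially the same commands repeated per region, something like this (the oc invocations are simplified, and the servers, ports, tokens and names are placeholders):

    # The same deploy repeated per region; adding a region means editing
    # every job to append another pair of lines like these
    oc login https://region1.ip.to.openshift:8443 --token="$region_1"
    oc rollout latest dc/my-app -n my-project

    oc login https://region2.ip.to.openshift:8443 --token="$region_2"
    oc rollout latest dc/my-app -n my-project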
Thus, in case we need to add any new "region" to the Jenkins jobs, we have to edit every job manually...
Since the number of Docker images and also the number of OpenShift regions/environments is increasing, we are looking for a way to "automate" this, or make it as easy as possible to add a new OpenShift region, since ALL the jobs (old and new ones) must deploy their images into the new environments/regions...
I have been reading documentation for a while, but Jenkins is so powerful and has so many features/options that I somehow get lost reading all the docs...
I don't know if using a Pipeline or something similar would help...
Any help is welcome :)

Continuous deployment with docker

I am currently working with a stack that allows me to automate my integration/deployment system.
Currently I work as follows:
I push my code to a GitHub repository
Jenkins watches the repo, builds the software, and launches the unit tests
If the unit tests (or any other kind of tests) pass, it notifies Rundeck to deploy to my servers (3 in my case) by connecting over SSH and telling them: "hey, you have to pull from GitHub, a new version is available"; then it restarts the concerned service and my software is now up to date
Okay, tell me if I'm wrong, but it seems to be a good solution, right?
Then I wanted to containerize my applications, and now I have some headaches.
First solution
In fact, I was wondering about something like:
Push to GitHub
Jenkins tests and builds the Docker image
Rundeck pushes it to Docker Hub and tells the 3 servers, over SSH, to pull the new image from the hub and run it
Problem: it will run in another container (multiple docker runs of the same image, but with different versions :( )
Second solution
The second solution was to :
Push to GitHub
Jenkins tests and tells Rundeck that the tests succeeded, without creating a "real build" (only one for testing)
Rundeck connects to the running container through SSH and asks it to pull the modifications, then it restarts the Docker container
Problem: I am forced to run SSH in all my containers
I don't know how to get around these problems, or what the best solution is...
Thanks for your help
I don't see any problem with solution 1.
1. Build the production version with Jenkins
2. Push it (via Jenkins) to your private Docker registry
3. Tell Rundeck/Ansible/Chef/Puppet to ask the 3 servers to pull the latest image and restart the container (see the sketch below)
However, it's highly recommended to have a strategy that considers the blue-green principle and rollbacks if something crashes.
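A bare-bones version of step 3 on each of the 3 servers could look like the following; image, container and port names are placeholders, and keeping the previous tag around is what makes a manual rollback possible:

    # Executed on each of the 3 servers by Rundeck/Ansible/etc.
    IMAGE=registry.example.com/myapp      # placeholder private-registry image
    NEW_TAG=v42                           # tag produced by the Jenkins build

    docker pull "$IMAGE:$NEW_TAG"
    docker stop myapp && docker rm myapp  # remove the old container
    docker run -d --name myapp -p 80:8000 "$IMAGE:$NEW_TAG"

    # Rollback = rerun the same docker run with the previous tag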

How do travis-ci and gitlab-ci compare? [closed]

As far as I understand, the main difference is that GitLab CI is open source (you can install it on your own server) and Travis CI isn't.
So the latter is always cloud/service-based. And it's free for open-source projects.
But then GitLab.com (the company, not the software) also has a cloud version that you don't need to install: ci.gitlab.com. And I'm guessing this version can only be used with public repositories hosted in your GitLab account.
But then, there's almost no documentation out there about running GitLab CI this way. Most of the docs I find are about installing the GitLab CI server or the runners. But how are ci.gitlab.com's runners configured? What OS do they run? Can I have Windows/Mac runners? (The software apparently supports these OSs, but it's not specified which runners are supplied by ci.gitlab.com's service.)
Edit: 29/06/2016
As the comments suggest, GitLab now offers what they call shared runners. This means that you no longer need to bring your own runner; you can use theirs instead and use it just like Travis CI, but there is a limit of 2,000 minutes of CI run time per month on the free tier.
** Previous historic answer **
GitLab CI can be used online, but you must bring your own runners. What does this mean? You need to install a piece of software on your servers which will run the tests. It's more complex than Travis.
After installing it, you have to associate it with your project and configure whether you want to run tests inside Docker or on your bare hardware. There are a few more options.
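Registering a runner after installing it is a one-off command along these lines (in current versions the binary is called gitlab-runner; the URL and token come from your project's CI/CD settings, and the values below are placeholders):

    # One-time registration of a runner against your GitLab instance
    gitlab-runner register \
      --non-interactive \
      --url https://gitlab.example.com/ \
      --registration-token YOUR_PROJECT_TOKEN \
      --executor docker \
      --docker-image alpine:latest \
      --description "my-docker-runner"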
Each time you push a commit to GitLab, a hook triggers GitLab CI and a build is sent to an available runner, which executes the build and tests and sends the test results back to the GitLab CI server.
Now, with the latest update, GitLab CI is inside GitLab, but it still works the same way.
