Auto deploy docker images on push - docker

First off, I'm a noob with Continuous Deployment. I currently have a VPS running 3 Docker containers (Flask, MongoDB, Nginx) that I pull from Docker Hub with docker-compose. What I want to do is auto-deploy those 3 containers when pushing code to my GitHub repo. I think it's possible with Ansible, but I have never used it.
Can someone explain to me how to do it?
Many thanks!

In the end I will use Jenkins :)
That implies a webhook, as explained in "How to Integrate Your GitHub Repository to Your Jenkins Project" by Guy Salton.
And that means your Jenkins server must be reachable through an internet-facing public URL, which is not always a given when working in a corporate environment.
GitHub Actions "Publishing Docker images" can help publish the image to Docker Hub, but you still need to listen for/detect those events in order for your Jenkins to trigger the job pulling said published images.
For that, a regularly scheduled Jenkins job using regclient/regclient can check whether the SHA-256 digest of the latest published image has changed.
See more in "Container Registry Management with Brandon Mitchell: DevOps and Docker (Ep 108)".
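The polling approach above could be sketched as a scheduled declarative pipeline. This is only a sketch: the image name `myuser/myapp:latest` and the downstream job name `deploy-app` are placeholders, and it assumes `regctl` is on the agent's PATH.

```groovy
// Jenkinsfile sketch: poll the registry for a new image digest on a schedule.
pipeline {
    agent any
    triggers { cron('H/15 * * * *') }   // check roughly every 15 minutes
    stages {
        stage('Check digest') {
            steps {
                script {
                    // regctl prints the current digest of the tag
                    def digest = sh(
                        script: 'regctl image digest myuser/myapp:latest',
                        returnStdout: true
                    ).trim()
                    def last = fileExists('last-digest.txt')
                        ? readFile('last-digest.txt').trim() : ''
                    if (digest != last) {
                        // remember the new digest and kick off the deployment job
                        writeFile file: 'last-digest.txt', text: digest
                        build job: 'deploy-app'
                    }
                }
            }
        }
    }
}
```

Storing the last-seen digest in the workspace keeps the job idempotent: it only triggers `deploy-app` when the tag actually points at a new image.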

Related

Deploying Multiple Microservices in different repositories to a single VM using bitbucket pipelines and docker-compose

I have a total of 8 Node.js services in 8 different repositories on Bitbucket. The services share some common code which lives in another repository called Brain. I want to create Bitbucket Pipelines in all of these repositories so that I can do the following:
Build the docker image for each service and store it in Google Container Registry
Use ssh-run or a similar runner to SSH into my GCE VM and run docker-compose pull and docker-compose up to deploy the latest versions.
Perform a zero-downtime update, i.e. keep the old containers running until the new containers are ready.
What would be the best way of doing this? Currently I'm facing the following problems:
When I push changes to the Brain repository, I need to build images for all the different services. Most of the time I push to both my Brain repository and some other service repository, so multiple images get built.
When I push changes to the Brain repository and the different service images are built, all of them try to deploy using ssh-run. I don't know if this is sustainable, and it may crash my VM.
Any suggestions would be appreciated. Thanks in advance!
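The build-push-deploy flow described above could look roughly like this in one service repo. Everything here is a placeholder sketch: the project/image names, the repository variables (`GCLOUD_KEY`, `DEPLOY_USER`, `DEPLOY_HOST`) and the pipe version are assumptions to adapt.

```yaml
# bitbucket-pipelines.yml sketch for one of the 8 service repositories
image: google/cloud-sdk:slim
pipelines:
  branches:
    master:
      - step:
          name: Build and push image to GCR
          services:
            - docker
          script:
            - echo "$GCLOUD_KEY" | docker login -u _json_key --password-stdin https://gcr.io
            - docker build -t gcr.io/my-project/my-service:$BITBUCKET_COMMIT .
            - docker push gcr.io/my-project/my-service:$BITBUCKET_COMMIT
      - step:
          name: Deploy on the GCE VM
          script:
            - pipe: atlassian/ssh-run:0.4.1
              variables:
                SSH_USER: $DEPLOY_USER
                SERVER: $DEPLOY_HOST
                COMMAND: 'cd /srv/app && docker-compose pull && docker-compose up -d'
```

To avoid eight concurrent deploys when Brain changes, one option is to let only a single repository (or a dedicated deploy repo) own the ssh-run step, and have the service pipelines stop after pushing their images.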

Is it possible to deploy a Node.js server to minikube from a local Jenkins when a commit is pushed to GitHub?

I'm new to Jenkins...
Usually we have a Jenkins CI/CD pipeline that deploys code to Kubernetes...
Similarly, will I be able to deploy locally on minikube...?
If so, please guide me...
Kubernetes runs Deployments from Docker images, and those images need to be hosted somewhere.
You can deploy a registry server and run it locally as a Docker container.
You can also follow this guide on how to Use your local registry with Kubernetes
Please, read the Kubernetes documentation for Images.
Also, here is a StackOverflow question on how to install Jenkins locally
If you have more questions, feel free to ask.
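Running that registry server locally can be sketched with a compose file; the volume name is a placeholder, and whether minikube can pull from `localhost:5000` without extra flags (e.g. the registry addon or an insecure-registry setting) depends on your setup.

```yaml
# docker-compose.yml sketch: a local Docker registry on port 5000
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    volumes:
      - registry-data:/var/lib/registry   # persist pushed images
volumes:
  registry-data:
```

You would then tag and push your build as `localhost:5000/myapp` (`docker tag myapp localhost:5000/myapp && docker push localhost:5000/myapp`) and reference that name in your Kubernetes Deployment.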

Using docker the way Openshift does?

I read this: How does docker compare to openshift?
But I have a question:
This is an extremely simplified description of what devs usually do with OpenShift:
Select a "pod" (let's say a JBoss/WildFly container)
From within OpenShift, point it to your GitHub repo
OpenShift clones the repo, builds it and deploys it
OpenShift presents you with a web URL to access the app on port 8080
There's of course a lot more going on, but that's as simple as it gets.
Is this setup doable on my own Linux box, VM or cloud instance (Docker container --> clone, build and deploy from a Git repo)? What would I need, without messing too much with networking, domains, etc.?
From my research I see the following tools:
Kubernetes
Dokku: I have seen it described as "your own Heroku"
I also keep hearing about CaaS (Containers as a Service)
I understand I would need another tool or process for the build (CI/CD) capability, and for triggering builds with git push.
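A bare-bones version of that clone-build-deploy flow, without Kubernetes or a PaaS, is a Git post-receive hook on the box you push to. This is only a sketch under assumptions: all paths, the image name `myapp`, the branch and the port are placeholders, and it assumes the repo contains a Dockerfile.

```shell
#!/bin/sh
# Sketch of .git/hooks/post-receive in a bare repo on the server.
# Pushing to this repo checks out the code, rebuilds the image and
# restarts the container (all names/paths below are placeholders).
set -e
WORKTREE=/srv/myapp/src
REPO=/srv/myapp/repo.git

# Check out the pushed branch into a plain work tree
git --work-tree="$WORKTREE" --git-dir="$REPO" checkout -f master

# Build the image from the repo's Dockerfile
docker build -t myapp:latest "$WORKTREE"

# Replace the running container with the fresh build
docker rm -f myapp 2>/dev/null || true
docker run -d --name myapp -p 8080:8080 myapp:latest
```

That covers "clone, build and deploy" on one machine; for the web-URL part you would still put Nginx (or similar) in front, which is exactly what tools like Dokku automate for you.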

Jenkins deploy into multiple openshift environments and ALM Maintenance

I'm a bit of a newbie with Jenkins, so I have this question.
I am currently working on a project about CD. We are using Jenkins to build a Docker image, push it to the registry and then deploy it to OpenShift... Although this process works like a charm, there is a tricky problem I'd like to solve: there is not only 1 OpenShift but 3 (and increasing) environments/regions where I want to deploy these images.
This is how we are currently doing:
Setting region tokens as secret text
$region_1 token1
$region_2 token2
$region_3 token3
Then
build $docker_image
push $docker_image to registry
deploy into Region1.ip.to.openshift:port -token $region_1
deploy into Region2.ip.to.openshift:port -token $region_2
deploy into Region3.ip.to.openshift:port -token $region_3
Thus, in case we need to add any new "region" to the Jenkins jobs, we have to edit every job manually...
Since the number of Docker images and also the number of OpenShift regions/environments keeps increasing, we are looking for a way to automate, or make as easy as possible, adding a new OpenShift region, since ALL the jobs (old and new ones) must deploy their images to those new environments/regions...
I have been reading the documentation for a while, but Jenkins is so powerful and has so many features/options that I somehow get lost in the docs...
I don't know if a Pipeline job or something similar would help...
Any help is welcome :)
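One way to stop editing every job by hand is to keep the region list in a single place and loop over it in a pipeline, so adding a region means adding one map entry (or one line in a shared library). A sketch under assumptions: the URLs, credential IDs, registry and `oc` resource names are placeholders to adapt.

```groovy
// Scripted Jenkinsfile sketch: deploy the same image to N OpenShift regions.
// Region URLs and credential IDs are placeholders; this map could also be
// loaded from a shared library or a config file so every job reuses it.
def regions = [
    region1: 'https://region1.ip.to.openshift:port',
    region2: 'https://region2.ip.to.openshift:port',
    region3: 'https://region3.ip.to.openshift:port',
]

node {
    stage('Build and push') {
        sh 'docker build -t registry.example.com/myapp:$BUILD_NUMBER .'
        sh 'docker push registry.example.com/myapp:$BUILD_NUMBER'
    }
    regions.each { name, url ->
        stage("Deploy ${name}") {
            // one secret-text credential per region, named by convention
            withCredentials([string(credentialsId: "token-${name}", variable: 'TOKEN')]) {
                sh "oc login ${url} --token=\$TOKEN"
                sh 'oc set image dc/myapp myapp=registry.example.com/myapp:$BUILD_NUMBER'
            }
        }
    }
}
```

With a naming convention for the tokens (`token-<region>`), a new region only needs its credential added in Jenkins plus one entry in the map.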

Continuous deployment with docker

I'm currently working with a stack that gives me some automation in my integration/deployment system.
I currently work as follows:
I push my code to a GitHub repository
Jenkins watches the repo, builds the software and launches the unit tests
If the unit tests (or any other kind of tests) pass, it notifies Rundeck to deploy to my servers (3 in my case) by connecting over SSH and saying: "hey, you have to pull from GitHub, a new version is available"; then it restarts the concerned service and my software is now up to date
Okay, tell me if I'm wrong, but that seems like a good solution, right?
Then I wanted to containerize my applications, and now I have some headaches.
First solution
In fact, I was wondering about something like:
Push to GitHub
Jenkins tests and builds the Docker image
Rundeck pushes to Docker Hub and tells the 3 servers over SSH to pull the new image from the hub and run it
Problem: it will run in another container (multiple docker runs of the same image, but with different versions :( )
Second solution
The second solution was to:
Push to GitHub
Jenkins tests and tells Rundeck that the tests succeeded, without creating a "real build" (only one for testing)
Rundeck connects to the running container through SSH and asks it to pull the modifications, then restarts the Docker container
Problem: I am forced to run SSH in all my containers
I don't know how to get around these problems, or what the best solution is...
Thanks for your help
I don't see any problem with solution 1.
1. Build the production version with Jenkins
2. Push it (via Jenkins) to your private Docker registry
3. Tell Rundeck/Ansible/Chef/Puppet to have the 3 servers pull the latest image and restart the container
However, it's highly recommended to have a deployment strategy that follows the blue-green principle and allows rollbacks if something crashes.
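Step 3 above could be expressed as a single Rundeck job dispatched to all 3 servers, so the SSH plumbing lives in Rundeck instead of in the containers. A sketch in Rundeck's YAML job-import format, with placeholder node tags, image and container names:

```yaml
# Rundeck job definition sketch (YAML import format); node filter,
# registry and container names are placeholders to adapt.
- name: deploy-latest-image
  description: Pull the image pushed by Jenkins and restart the container
  nodefilters:
    filter: 'tags: appserver'
  sequence:
    keepgoing: false
    commands:
      - exec: docker pull registry.example.com/myapp:latest
      - exec: docker stop myapp || true
      - exec: docker rm myapp || true
      - exec: docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:latest
```

For something closer to blue-green, the job would instead start the new image under a different name/port, health-check it, and only then repoint the proxy and remove the old container.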
