I read this: How does docker compare to openshift?
But I have a question:
This is an extremely simplified description of what devs usually do with OpenShift:
Select a "pod" (let's say a JBoss/Wildfly container)
From within OpenShift you point it to your GitHub repo
OpenShift would clone the repo, build it and deploy it
OpenShift presents you with a web URL to access the app on port 8080
There's of course a lot more going on, but that's as simple as it gets.
Is this setup doable on my own Linux box, a VM, or a cloud instance (Docker container --> clone, build and deploy from a git repo)? What would I need, without messing too much with networking, domains, etc.?
From my research I see the following tools:
Kubernetes
Dokku: I see it described as "your own Heroku"
I also keep hearing about CaaS (Containers as a Service)
I understand I would need another tool or process for the build (CI/CD) capability, and for triggering builds with git push.
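For instance, if Dokku fits, the flow I'm after would look roughly like this (a minimal sketch, assuming Dokku is already installed on the box; the host and app names are placeholders):

# on the server: create the app once
dokku apps:create myapp

# on my dev machine: add the Dokku host as a git remote and push
git remote add dokku dokku@my-server.example.com:myapp
git push dokku master

# Dokku builds from the repo's Dockerfile or a buildpack,
# runs the container, and exposes it behind its nginx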
Related
I have a total of 8 Node.js services in 8 different repositories on Bitbucket. The services share some common code which lives in another repository called Brain. I want to create Bitbucket Pipelines in all of these repositories so that I can do the following things:
Build the Docker image for each service and store it in Google Container Registry
Use ssh-run or a similar runner to SSH into my GCE VM and run docker-compose pull and docker-compose up to deploy the latest versions.
Perform zero-downtime updates, i.e., keep the old containers running until the new containers are ready.
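Concretely, I'm imagining a bitbucket-pipelines.yml per service roughly like this sketch (the repository variables, tags and the ssh-run pipe version are assumptions, not a working setup):

pipelines:
  default:
    - step:
        name: Build and push to GCR
        services:
          - docker
        script:
          # $GCLOUD_KEY (base64 service-account key) and $PROJECT are repository variables
          - echo $GCLOUD_KEY | base64 -d | docker login -u _json_key --password-stdin https://gcr.io
          - docker build -t gcr.io/$PROJECT/my-service:$BITBUCKET_COMMIT .
          - docker push gcr.io/$PROJECT/my-service:$BITBUCKET_COMMIT
    - step:
        name: Deploy on the GCE VM
        script:
          - pipe: atlassian/ssh-run:0.4.1
            variables:
              SSH_USER: $SSH_USER
              SERVER: $VM_HOST
              COMMAND: 'docker-compose pull && docker-compose up -d'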
What would be the best way of doing this? Currently I'm facing the following problems:
When I push changes to the Brain repository, I need to build images for all the different services. Most of the time I'm pushing both to my Brain repository and to some other service repository, so the same images end up being built multiple times.
When I push changes to the Brain repository, as the different service images are built, all of them try to deploy using ssh-run. I don't know if this is sustainable; it may crash my VM.
Any suggestions would be appreciated. Thanks in advance!
First, I'm a noob with continuous deployment. I currently have a VPS running 3 Docker containers (Flask, MongoDB, Nginx) that I pull from DockerHub with docker-compose. What I want to do is auto-deploy those 3 containers when pushing code to my GitHub repo. I think it's possible with Ansible, but I've never used it.
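For reference, the update I currently run by hand on the VPS (and would like to trigger automatically on each push) is just:

docker-compose pull     # fetch the latest images from DockerHub
docker-compose up -d    # recreate only the containers whose image changed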
Can someone explain how to do it?
Many thanks!
In the end I will use Jenkins :)
That implies a webhook, as explained in "How to Integrate Your GitHub Repository to Your Jenkins Project" by Guy Salton
And that means your Jenkins server needs to be accessible through an internet-facing public URL, which is not always obvious when working in a corporate environment.
GitHub Actions "Publishing Docker images" can help publish the image to DockerHub, but you still need to listen for/detect those events in order for your Jenkins to trigger a job pulling said published images.
For that, a regularly scheduled Jenkins job using regclient/regclient can help check whether the latest published SHA2 image ID has changed.
See more with "Container Registry Management with Brandon Mitchell: DevOps and Docker (Ep 108)".
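As a sketch of what that scheduled job could run (regctl is regclient's CLI; the image name and state-file path are placeholders):

# digest currently published on DockerHub
LATEST=$(regctl image digest docker.io/myuser/myapp:latest)
# digest we deployed last time, stored in a file
DEPLOYED=$(cat /var/lib/jenkins/myapp.digest 2>/dev/null)
if [ "$LATEST" != "$DEPLOYED" ]; then
  echo "$LATEST" > /var/lib/jenkins/myapp.digest
  # trigger the downstream job that pulls and deploys the new image
fi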
I have a Docker container which I push to GCR like gcloud builds submit --tag gcr.io/<project-id>/<name>, and when I deploy it on a GCE instance, every deploy creates a new instance and I have to remove the old one manually. The question is: is there a way to deploy containers and force the GCE instances to fetch new containers? I need exactly GCE, not Google Cloud Run or another product, because it is not an HTTP service.
I deploy the container from Google Console using the Deploy to Cloud Run button
I'm posting this Community Wiki for better visibility. In the comment section there were already a few good solutions; however, in the end the OP chose to use Cloud Run.
First, I'd like to clarify a few things.
I have a docker container which I push to GCR like gcloud builds submit
gcloud builds submit is a command to build using Google Cloud Build.
Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud Build can import source code from Cloud Storage, Cloud Source Repositories, GitHub, or Bitbucket, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives.
In this question, the OP is referring to Container Registry; however, GCP recommends using Artifact Registry, which will soon replace Container Registry.
Pushing and pulling images from Artifact Registry is explained in the Pushing and pulling images documentation. It can be done with the docker push or docker pull commands, after you have tagged the image and created an Artifact Registry repository.
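For example (a sketch; the location, project, repository and image names are placeholders):

# one-time: let docker authenticate against the Artifact Registry host
gcloud auth configure-docker us-central1-docker.pkg.dev

# tag the local image with the full registry path, then push it
docker tag myapp:latest us-central1-docker.pkg.dev/my-project/my-repo/myapp:latest
docker push us-central1-docker.pkg.dev/my-project/my-repo/myapp:latest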
Deploying on different GCP products
Regarding deploying on GCE, GKE and Cloud Run: those are GCP products which are quite different from each other.
GCE is IaaS, where you specify the amount of resources and you maintain all the software yourself (you would need to install Docker, Kubernetes, programming libs, etc.).
GKE is a hybrid: you specify the amount of resources you need, but it's customized to run containers. After creation you already have Docker, Kubernetes and the other software needed to run containers.
Cloud Run is a serverless GCP product, where you don't need to calculate the amount of needed resources or install software/libs; it's a fully managed serverless platform.
When you want to deploy a container app from Artifact Registry / Container Registry, you create another VM (GCE and GKE) or a new service (Cloud Run).
If you would like to deploy the new app on the same VM:
On GCE, you would need to pull the image and deploy it on that VM using Docker or Kubernetes (kubeadm).
On GKE you would need to create a new deployment using a command like
kubectl create deployment test --image=<location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>
and delete the old one.
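As a sketch of both variants (the image path and the container/deployment names are placeholders; on GKE, kubectl set image updates the existing deployment in place, as an alternative to creating a new one and deleting the old):

# GCE: pull the new image, then recreate the container
docker pull <location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>
docker stop myapp && docker rm myapp
docker run -d --name myapp <location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>

# GKE: roll the existing deployment onto the new image (rolling update)
kubectl set image deployment/test <containerName>=<location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>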
In Cloud Run you can deploy an app without worrying about resources or hardware; the steps are described here. You can create revisions for specific changes in the image. Cloud Run also supports CI/CD using GitHub, Bitbucket or Cloud Source Repositories; this process is well described in the GCP documentation - Continuous deployment
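For example, deploying a new revision of a Cloud Run service from an image is a single command (service name, image path and region are placeholders):

# each run of this command creates a new revision of the service
gcloud run deploy my-service \
    --image=us-central1-docker.pkg.dev/my-project/my-repo/myapp:latest \
    --region=us-central1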
Possible solutions:
Write a cloudbuild.yaml file that does this for you at each CI/CD pipeline run (see the sketch after this list)
Write a small application on GCE that subscribes to Pub/Sub notifications created by Cloud Build. You can then either pull the new container or launch a new instance.
Use Cloud Run with CI/CD.
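For the first option, a minimal cloudbuild.yaml could look like this sketch (the instance name, zone and image are placeholders, and the last step assumes Cloud Build's service account is allowed to SSH to the VM):

steps:
  # build and push the new image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/myapp']
  # SSH into the existing GCE instance and restart the container on the new image
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['compute', 'ssh', 'my-instance', '--zone=us-central1-a',
           '--command=docker pull gcr.io/$PROJECT_ID/myapp && docker stop myapp && docker rm myapp && docker run -d --name myapp gcr.io/$PROJECT_ID/myapp']
images:
  - 'gcr.io/$PROJECT_ID/myapp'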
Based on one of the OP's comments, the chosen solution was to use Cloud Run with CI/CD.
I'm currently working with a stack that lets me automate my integration/deployment system.
Currently I work as follows:
I push my code to a GitHub repository
Jenkins watches the repo, builds the software, and launches the unit tests
If the unit tests (or other kinds of tests, anyway) pass, it notifies Rundeck to deploy to my servers (3 in my case) by connecting over SSH and saying: "hey guy, you have to pull from GitHub, a new version of the software is available"; then it restarts the concerned service and my software is now up to date
Okay, tell me if I'm wrong, but it seems to be a good solution, right?
Then I wanted to containerize my applications, and now I have some headaches.
First solution
In fact, I was thinking about something like:
Push to GitHub
Jenkins tests and builds the Docker image
Rundeck pushes to Docker Hub and, over SSH, tells the 3 servers to pull the new image from the hub and run it
Problem: it will run in another container (multiple docker runs of the same image, but with different versions :( )
Second solution
The second solution was to:
Push to GitHub
Jenkins tests and tells Rundeck that the tests succeeded, without creating a "real build" (only one for testing)
Rundeck connects to the running container through SSH and asks it to pull the modifications, then restarts the Docker container
Problem: I am forced to run SSH in all my containers
I don't know how to get around these problems, or what the best solution is...
Thanks for your help
I don't see any problem with solution 1.
1. Build the production version with Jenkins
2. Push it (via Jenkins) to your private Docker registry
3. Tell Rundeck/Ansible/Chef/Puppet to ask the 3 servers to pull the latest image and restart the container.
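As a sketch, step 3 amounts to running something like this on each of the 3 servers (registry host, image and container names are placeholders):

# executed on each server by Rundeck/Ansible/... over SSH
docker pull registry.example.com/myapp:latest
docker stop myapp && docker rm myapp
docker run -d --name myapp registry.example.com/myapp:latest

Because the old container is removed before the new one starts, you don't end up with several versions of the same image running side by side, which was the problem with solution 1.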
However, it's highly recommended to have some strategy that considers the blue-green principle and rollbacks in case something crashes.
I already have a build server that generates a Docker image for an application and then puts it into cloud storage. This is not an image that can be publicly shared on the Docker index.
How can I run this application docker image in deis?
Deis is designed to build your docker image from your git repo via a buildpack or Dockerfile (although I can't find instructions on how to use a Dockerfile instead of a buildpack). This could be considered a legacy integration issue. However, the current setup of running the build service on the application cluster is not good for me, because I want my build server to be a lot more powerful than my application server. Ideally my build server would spin up on demand, although I don't bother with that right now.
We are hoping to resolve this feature request with https://github.com/deis/deis/issues/533.
Ideally we see it as "build your image with - insert CI product here - then run deis push --app=appname to deploy your docker image as an application". After that, it would be treated the same as any other application deployed to deis. Basically, deis push is to pushing docker images as git push is to pushing repositories.
In regards to documentation for deploying an application with a Dockerfile, the docs are at http://docs.deis.io/en/latest/developer/dockerfile/, though this workflow will change back to a saner deployment workflow once https://github.com/deis/deis/pull/967 is merged. There was some technical debt from v0.8.0, and Dockerfile deployments were part of it.
Deis is designed to build your docker image from your git repo via a buildpack or Dockerfile
The quote is not quite right. Deis is actually designed to build the Docker image from its own git repo. When you create a Deis application using deis create, Deis will create a new git remote named deis; that's why you run git push deis master to build your application.
So, you don't need to push your image to a public repository in order to deploy to Deis. All you need is a Dockerfile. Just put your Dockerfile in the root directory of your application and make sure to commit that file; Deis will build the application using the Dockerfile instead of a buildpack.
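In other words, the whole flow is just (a sketch; the app name is whatever deis create generates):

# in the root directory of your application
deis create                    # creates the app and a git remote named "deis"
git add Dockerfile
git commit -m "Add Dockerfile"
git push deis master           # Deis builds from the Dockerfile and deploys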
Hope this helps!