Cloud Build CI/CD & k8s files - Docker

I am using Cloud Build and a GKE Kubernetes cluster, and I have set up CI/CD from GitHub to Cloud Build.
I want to know whether it is good practice to keep the CI build file and Dockerfile in the repository, or to manage the config files separately in another repository.
Is it good to keep the CI and k8s config files in the same repository as the business logic?
What is the best way to implement CI/CD from Cloud Build to GKE while managing the CI/k8s YAML files?

Yes, you can add the deployment directives to the application repository, typically in a dedicated folder of your project, which can in turn reference a dedicated CI/CD repository.
See "kelseyhightower/pipeline-application" as an example, where:
Changes pushed to any branch except master should trigger the following actions:
build a container image tagged with the build ID suitable for deploying to a staging cluster
clone the pipeline-infrastructure-staging repo
patch the pipeline deployment configuration file with the staging container image and commit the changes to the pipeline-infrastructure-staging repo
The pipeline-infrastructure-staging repo will deploy any updates committed to the master branch.

Please keep in mind that:
The best place to store the Dockerfile and the build config file is the remote repository itself. For Dockerfiles this is supported as native Docker support in gcloud.
You can use different host repositories like:
Cloud Source Repositories
Bitbucket
GitHub
For an example structure of the build config YAML file, as well as information about Cloud Build concepts and tutorials, see the Cloud Build documentation.
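As a rough illustration, a minimal cloudbuild.yaml kept at the repository root might look like the following sketch; the image name, manifest folder, zone, and cluster name are placeholders to replace with your own:

steps:
# Build the container image from the Dockerfile in the repository
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
# Push the image to the registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
# Apply the Kubernetes manifests to the GKE cluster
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'k8s/']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'

$PROJECT_ID and $SHORT_SHA are built-in Cloud Build substitutions; the manifests under k8s/ would reference the pushed image.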

Related

Build multiple projects

I'm exploring the Jenkins world to see if it can fit my needs for this case.
I need to build two git repositories (backend and frontend). For the backend, I would need to:
Choose the branch to build from a list
Check out the branch and build the Docker image using the Dockerfile
Push to ECR
Release to a specific Kubernetes deployment
After the backend build, we have to build the frontend by doing:
Choose the branch to build from a list
Check out the branch and run an npm script to build
Deploy to an S3 folder
The build should be triggered only manually, by the project owner (who is not a developer).
Is Jenkins the right way to go? And if yes, could you point me to how you would do it?
Thanks
Yes, you can definitely implement what you need with Jenkins. There are different ways to implement each step, but here are some things you can consider using; a rough pipeline sketch follows the list.
For branch listing, you can consider a plugin like the List Git Branches Plugin.
For Docker image building and pushing, you can use the Jenkins Docker steps.
For the Kubernetes part, you can probably use a shell script, or something like Kubecli.
For the S3 part, you can use the S3 Publisher Plugin.
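To give a feel for how these pieces fit together, here is a rough declarative Jenkinsfile sketch for the backend half; the repository URL, ECR registry, credential ID, and deployment name are placeholders, and a plain string parameter stands in for the dynamic branch list:

pipeline {
    agent any
    parameters {
        // The List Git Branches plugin can populate this dynamically;
        // a plain string keeps the sketch self-contained.
        string(name: 'BRANCH', defaultValue: 'main', description: 'Branch to build')
    }
    stages {
        stage('Checkout') {
            steps {
                // Placeholder repository URL
                git branch: params.BRANCH, url: 'https://github.com/example/backend.git'
            }
        }
        stage('Build and push image') {
            steps {
                script {
                    // Docker Pipeline plugin steps; registry and credential ID are placeholders
                    docker.withRegistry('https://123456789012.dkr.ecr.us-east-1.amazonaws.com', 'ecr:us-east-1:aws-creds') {
                        def image = docker.build("backend:${params.BRANCH}-${env.BUILD_NUMBER}")
                        image.push()
                    }
                }
            }
        }
        stage('Deploy to Kubernetes') {
            steps {
                // Assumes kubectl and a kubeconfig are already available on the agent
                sh "kubectl set image deployment/backend backend=123456789012.dkr.ecr.us-east-1.amazonaws.com/backend:${params.BRANCH}-${env.BUILD_NUMBER}"
            }
        }
    }
}

The frontend job would follow the same shape, with an npm build step instead of the Docker stages and an S3 upload at the end; restricting who can trigger the job is handled with Jenkins' project-based or matrix-based authorization rather than in the pipeline itself.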

CI/CD in gitlab through mirror repository

I have been wanting to set up a CI/CD pipeline for a repository in its mirror repository. I have found articles on how to mirror a repo, but I cannot find how to set up CI/CD in the mirror and view the results in the original repo. Can anyone suggest how to do it?
how to set up CI/CD
It's the same approach as without mirroring: you just add a .gitlab-ci.yml file to your original git repository; it gets mirrored to GitLab, which then runs the pipelines for you.
how to ... view the results in original repo
You didn't specify where you host your original repo.
If it's GitHub, then you will see the GitLab pipelines on your GitHub pull requests.
It's the same as with any other CI tool integrated with GitHub.
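For reference, such a .gitlab-ci.yml can be as small as this sketch; the job name and script lines are placeholders for your real build commands:

# One build stage that runs on every push mirrored to GitLab
stages:
  - build

build-job:
  stage: build
  script:
    - echo "Building the project"
    - ./build.sh   # placeholder for the real build commands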
Links:
https://docs.gitlab.com/ee/ci/yaml/
https://docs.gitlab.com/ee/ci/ci_cd_for_external_repos/
https://docs.gitlab.com/ee/ci/ci_cd_for_external_repos/github_integration.html

Implementation of CD with Jenkins for on premise kubernetes cluster?

I have an on-premise Kubernetes cluster.
I want to set up CD with Jenkins.
I have tried two plugins which are available for kubernetes deployment.
Kubernetes Continuous Deploy Plugin
Kubernetes plugin
The question here is that the Jenkins master will require the .kube/config file to connect to the Kubernetes cluster to do deployments.
Is it best practice to copy the .kube/config file of your Kubernetes master to the Jenkins master, providing full access to your Kubernetes cluster for deployment purposes?
Please do let me know if you have any other suggestions for CD on my on-prem Kubernetes cluster.
You have two options:
1. Put all kubeconfig files in a separate git repo, say kubeVault.git, and name the files after the cluster. Accept the cluster as a build parameter in Jenkins, look up the respective kubeconfig from the kubeVault repo, use it as the target platform, and deploy the container. We have used this approach in one of our projects; you need to build some logic for it in the pipeline Groovy code.
2. Define build parameters and set them using the fields from the kubeconfig file. At build time, generate the actual kubeconfig file from the build parameters and keep it in the /tmp directory, then pass the kubeconfig location to kubectl to deploy the k8s objects/pods.
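In either case, the generated or looked-up file is just a standard kubeconfig. A minimal sketch, where the server URL, CA data, and token are the values that would come from build parameters (all values here are placeholders):

apiVersion: v1
kind: Config
clusters:
- name: target-cluster
  cluster:
    server: https://10.0.0.1:6443                 # from a build parameter
    certificate-authority-data: <BASE64_CA_CERT>  # from a build parameter
contexts:
- name: deploy-context
  context:
    cluster: target-cluster
    user: deployer
current-context: deploy-context
users:
- name: deployer
  user:
    token: <SERVICE_ACCOUNT_TOKEN>                # from a build parameter

The pipeline would then run, for example, kubectl --kubeconfig /tmp/kubeconfig apply -f deployment.yml, so only that build step ever sees the credentials.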

GitLab CI pipeline with multi-container kubernetes pod

Is this possible to set up a config, which would include:
GitLab project #1 java-container
GitLab project #2 java-container
Nginx container
Redis container
Cassandra container
Nginx exporter (Prometheus)
Redis exporter (Prometheus)
JMX exporter (Prometheus) x2
It's important to have all of this in one multi-container pod on Kubernetes (GKE), communicating via a shared volume and localhost.
I've already done all this in Kubernetes with init containers (to pull the code and compile it), and now I'm looking for a way to make this work with CI/CD.
So, if this can be done with GitLab CI, could you please point me to the right documentation or manual pages? I'm a newbie with GitLab CI and have already lost myself in dozens of articles on the internet.
Thanks in advance.
The first thing is to join all the projects that should be built with Maven and/or Docker into one common project on GitLab.
Next, add the Dockerfiles and all files needed for the Docker build into the sub-projects' folders.
Then, in the root of the common project, place the .gitlab-ci.yml and deployment.yml files.
deployment.yml should be common for all the sub-projects.
.gitlab-ci.yml should contain all the stages needed to build every sub-project. Since we don't need to rebuild everything each time some files change, we can use git tags to tell GitLab CI which stage to run in which case. This can be implemented with the only parameter:
docker-build-akka:
  stage: package
  only:
    - /^akka-.*$/
  script:
    - export DOCKER_HOST="tcp://localhost:2375"
    ...
And so on for every stage. So, if you make changes to the relevant Dockerfile or Java code, you commit and push to GitLab with a tag like akka-0.1.4, and the GitLab CI runner will run only the appropriate stages.
Likewise, if you change the README.md file or make any other change that doesn't require building the project, nothing will be rebuilt.
Also, have a look at the problem I faced running the docker build stage in Kubernetes; it might be helpful.
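Regarding the single multi-container pod, the shared deployment.yml simply lists every container in one pod spec; they then share the network namespace (localhost) and any mounted volumes. A trimmed sketch, with all image names and ports as placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-stack
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-stack
  template:
    metadata:
      labels:
        app: app-stack
    spec:
      volumes:
      - name: shared-data   # shared volume for file exchange between containers
        emptyDir: {}
      containers:
      - name: java-app-1
        image: registry.gitlab.com/group/project/java-app-1:latest  # placeholder
        volumeMounts:
        - name: shared-data
          mountPath: /shared
      - name: nginx
        image: nginx:1.15   # placeholder version
        ports:
        - containerPort: 80
        volumeMounts:
        - name: shared-data
          mountPath: /shared
      - name: redis
        image: redis:5      # placeholder version
      # the second java container, Cassandra, and the Prometheus exporters
      # are added the same way; everything in the pod talks over localhost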

Deploying an existing docker image with Deis

I already have a build server that generates a docker image for an application and then puts it into cloud storage. This is not an image that can be publicly shared on the Docker index.
How can I run this application docker image in deis?
Deis is designed to build your docker image from your git repo via a buildpack or Dockerfile (although I can't find instructions on how to use a Dockerfile instead of a buildpack). This could be considered a legacy integration issue. However, the current setup of running the build service on the application cluster is not good for me, because I want my build server to be a lot more powerful than my application server. Ideally my build server would spin up on demand, although I won't bother with that right now.
We are hoping to resolve this feature request with https://github.com/deis/deis/issues/533.
Ideally we see it as "build your image with - insert CI product here - then run deis push --app=appname to deploy your docker image as an application". After that, it would be treated the same as any other application deployed to deis. Basically, deis push is to pushing docker images as git push is to pushing repositories.
In regards to documentation for deploying an application with a Dockerfile, the docs are at http://docs.deis.io/en/latest/developer/dockerfile/, though this workflow will change back to a more sane deployment workflow once https://github.com/deis/deis/pull/967 is merged. There was some technical debt from v0.8.0, and Dockerfile deployments was one of them.
Deis is designed to build your docker image from your git repo via a buildpack or Dockerfile
The quote is not quite right. Deis is actually designed to build the docker image from its own git repo. When you create a Deis application using deis create, Deis creates a new git remote named deis; that's why you run git push deis master to build your application.
So, you don't need to push your image to a public repository in order to deploy to Deis. All you need is a Dockerfile. Just put your Dockerfile in the root directory of your application and make sure to commit that file; Deis will then build the application using the Dockerfile instead of a buildpack.
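A minimal sketch of such a Dockerfile, assuming a simple web app (base image, port, and start command are all placeholders):

# Placeholder base image
FROM python:2.7
COPY . /app
WORKDIR /app
# Deis routes traffic to the port the container exposes
EXPOSE 5000
# Placeholder start command; the app should listen on the exposed port
CMD ["python", "app.py"]

Commit it and push: git add Dockerfile, git commit -m "add Dockerfile", then git push deis master.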
Hope this will help!
