Jenkins + Kubernetes

I am wondering what the proper pattern is to integrate Jenkins and Kubernetes to satisfy the following scenario:
A developer checks in some code for a new feature
Jenkins builds the container and creates a pod for it in Kubernetes
Kubernetes assigns a proper DNS name to the pod, so that the tester can connect exactly to the pod containing that feature
The tester carries out the tests
I may be able to configure steps 1 to 2, but I am wondering if there is a way to automatically connect exactly to the pod that has the new feature I need to test.
To be clearer: the system builds the code automatically, a message is sent to the tester telling them which pod has the feature they want to test, they test the container with that feature in some way, and if everything is OK the feature is merged into master.
cheers

Sorry, not a complete answer, but what you describe sounds like the Auto DevOps feature of GitLab. You deploy a new "environment" to Kubernetes for each branch, isolated in its own namespace. I think you will be able to copy the procedure that GitLab uses:
https://www.youtube.com/watch?v=uWC2QKv15mk&t=1730s
Useful links:
DNS: http://xip.io/
GitLab CI file (don't be scared by its length): https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml
Helm chart used: https://gitlab.com/charts/auto-deploy-app
The Helm chart expects your app as a Docker container exposed on port 5000 and brings a PostgreSQL database with it.
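For orientation, here is a minimal, hypothetical .gitlab-ci.yml review job in the spirit of that template. The chart path, branch name, host names and image tag handling are assumptions of mine, not taken from the Auto-DevOps file:

```yaml
# Hypothetical review-app job: deploys the current branch into its own
# namespace/environment and exposes it under a branch-specific host name.
# Assumes GitLab's Kubernetes integration provides cluster credentials
# and that a Helm chart lives in ./chart of the repository.
deploy_review:
  stage: deploy
  image: alpine/helm:3.14.0          # any image with helm (and a kubeconfig) works
  script:
    - helm upgrade --install "review-$CI_COMMIT_REF_SLUG" ./chart
        --namespace "review-$CI_COMMIT_REF_SLUG" --create-namespace
        --set image.tag="$CI_COMMIT_SHA"
        --set ingress.host="$CI_COMMIT_REF_SLUG.example.com"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.example.com
  rules:
    - if: '$CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != "master"'
```

The environment: block is what gives each branch its own entry (and URL) in GitLab's Environments view, which is what the tester would be pointed at.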

Jenkins X will deploy your pull requests to a new preview environment and give you a URL you can connect to, so you can carry out your tests:
https://jenkins-x.io/about/features/#preview-environments

Related

How to enable Continuous Deployment to an internal server from BitBucket

I need some guidance and advice on whether / how I can implement a CI/CD pipeline for a corporate PHP Laravel application that is hosted on an internal server with limited access and no public IP. Unfortunately my DevOps knowledge is limited and, despite a lot of Google searches, I have no idea whether I am on the right track or a million miles away. Everything I have read so far uses webhooks, which as far as I can tell rely on a public IP / domain.
At present I work on the application on my laptop and push changes to Bitbucket Cloud. I have managed to create a bitbucket-pipelines.yml file that automatically builds and tests any branch that is pushed, but I then have to access the server to pull the code manually and run the various scripts as required, which I would much prefer to automate.
What options are there for implementing continuous deployment given these circumstances and limitations?
If anyone could offer some pointers then I would be very grateful.
Thanks
You can't push to a machine that is unreachable, but there are some alternatives.
Configure a bastion host so an external CI/CD runner can SSH through it into your server. Allow inbound port 22 connections from your CI/CD provider's CIDR blocks (https://ip-ranges.atlassian.com/) to the bastion host.
Set up a self-hosted CI/CD runner in the same network as the server (https://support.atlassian.com/bitbucket-cloud/docs/runners/) and use that runner for the deployment step to SSH into the server's private IP address; a rough sketch of this follows below.
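As a hedged illustration of the first two options, something like the following bitbucket-pipelines.yml could work. The branch name, runner labels, host, user and deploy commands are assumptions about your setup:

```yaml
# Hypothetical bitbucket-pipelines.yml.
# The build/test step runs on Bitbucket's cloud runners; the deploy step runs
# on a self-hosted runner inside your network (or SSHes via a bastion host).
pipelines:
  branches:
    main:
      - step:
          name: Build and test
          image: composer:2
          script:
            - composer install --no-interaction
            - vendor/bin/phpunit
      - step:
          name: Deploy to internal server
          runs-on:
            - self.hosted
            - linux
          script:
            # SSH key configured under Repository settings > SSH keys (assumed)
            - ssh deploy@app-server.internal "cd /var/www/app && git pull origin main && composer install --no-dev && php artisan migrate --force"
```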
Set up a pull-based deployment strategy. Your deployment step only registers the new desired version, e.g. by moving a release/production branch to the commit where the deploy was triggered. On the server, set up some kind of subscription to the latest release/deployment, e.g. a cron task that frequently fetches the production branch. Upon changes, restart services and run any tasks you need. ansible-pull can be handy for this purpose (see the sketch below).
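For the pull-based option, a minimal local.yml playbook that a cron-driven ansible-pull run could apply might look like this; the repository URLs, paths, branch and service name are assumptions:

```yaml
# Hypothetical local.yml, applied on the server by a cron entry such as:
#   */5 * * * * ansible-pull -U git@bitbucket.org:myteam/app-deploy.git -C production local.yml
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Check out the production branch of the application
      ansible.builtin.git:
        repo: git@bitbucket.org:myteam/laravel-app.git
        dest: /var/www/app
        version: production
      register: checkout

    - name: Restart the application service when the code changed
      ansible.builtin.service:
        name: php-fpm
        state: restarted
      when: checkout.changed
```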

Gitlab CI: How to configure cypress e2e tests with multiple server instances?

My goal is to run a bunch of e2e tests every night to check whether the code changes made the day before break core features of our app.
Our platform is an Angular app which calls 3 separate Node.js backends (auth-backend, old-backend and new-backend). We also use MongoDB as the database.
Let's assume each of the 4 projects has a branch called develop, and only this branch should be tested.
My approach would be the following:
I am running every backend plus the database in a separate Docker container.
Therefore I need to either get the latest build of each project from GitLab using SSH,
or clone the repo into the Docker container and run a build inside it.
After all projects are running on the right ports (which I'd specify somewhere), I start the npm script that runs the Cypress e2e tests.
All of that should be defined in some file. Is that even possible?
I do not have experience with GitLab CI, but I know that other CI systems can run e.g. bash scripts.
So I guess you can do the following:
Write a bash script that pulls all the repos (since GitLab can provide secret keys, you can use these to authenticate against your GitLab repos).
After all of these repos have been pulled, run the build commands for each of them.
Since some of your repos depend on each other, you may have to add a build command for exactly this use case, so that you always have a production-like state, or whatever you need.
After you have pulled and built your repos, start the servers for your backends.
Your Angular app probably uses some kind of environment variables to define which servers to send requests to, so you also have to set these in the build command/script for your app.
Then you should be able to run your tests; a rough sketch of this flow as a .gitlab-ci.yml is shown below.
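This is only a hedged sketch: the repository URLs, ports, image and npm script names are assumptions about your projects, not known values:

```yaml
# Hypothetical nightly e2e job; trigger it from a pipeline schedule.
nightly-e2e:
  image: cypress/base                # Node plus the Cypress system dependencies
  services:
    - name: mongo:6                  # the shared MongoDB
      alias: mongodb
  script:
    # Pull the develop branch of each backend (CI job token or deploy keys assumed)
    - git clone -b develop https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/group/auth-backend.git
    - git clone -b develop https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/group/old-backend.git
    - git clone -b develop https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/group/new-backend.git
    # Build and start each backend on its own port (ports are assumptions)
    - (cd auth-backend && npm ci && PORT=3001 npm start &)
    - (cd old-backend && npm ci && PORT=3002 npm start &)
    - (cd new-backend && npm ci && PORT=3003 npm start &)
    # Build the Angular app against those URLs, then run Cypress
    - npm ci
    - npm run e2e                    # assumed script: serve the app and run cypress
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```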
Personally I think that Docker is overkill for this use case. Alternatively, you could define and run a pipeline that always creates a new develop state of your backends and pushes the Docker images to your server. Then you could create a test pipeline that first starts those containers on your own server (so you do not have an "in-pipeline" server). At that point all your backends are running, and the test pipeline can run your e2e tests against them.
I would also advise not running this pipeline every night, but rather whenever the develop state of one of the linked repos changes.
If you need help setting this up, feel free to contact me.

Does Jenkins (not Jenkins X) have gitops support?

I am trying to set up Kubernetes for my company. I have looked a good amount into Jenkins X and, while I really like the roadmap, I have come to the realization that it is likely not mature enough for my company to use at this time. (The UI being in preview, a flaky command line, random IP address requirements and poor Windows support are a few of the issues that have led me to that conclusion.)
But I understand that the normal Jenkins is very mature and can run on Kubernetes. I also understand that it can have dynamically created build agents run in the cluster.
But I am not sure about gitops support. When I try to google it (gitops jenkins) I get a bunch of information that includes Jenkins X.
Is there an easy(ish) way for normal Jenkins to use GitOps? If so, how?
Update:
By GitOps, I mean something similar to what Jenkins X supports. (Meaning changes to the cluster stored in a Git repository. And merging causes a deployment.)
I mean something similar to what Jenkins X supports. (Meaning changes to the cluster stored in a Git repository. And merging causes a deployment.)
Yes, this is what Jenkins (or other CI/CD tools) do. You can declare a deployment pipeline in a Jenkinsfile that is triggered on merge (a commit to master) and have other steps for other branches if you want.
I recommend deploying with kubectl using kustomize and storing the config files in your Git repository. You parameterize the different environments, e.g. staging and production, with overlays: you might deploy with only 2 replicas in staging but with 6 replicas and more memory in production.
Using Jenkins for this, I would create a Docker agent image containing kubectl, so your pipeline steps can use the kubectl command-line tool. A sketch of what a production overlay could look like is shown below.
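For example, a hypothetical production overlay might look like this (the base layout, Deployment name and values are assumptions); the Jenkins deploy step would then run kubectl apply -k overlays/production:

```yaml
# overlays/production/kustomization.yaml (assuming a base/ directory with the
# Deployment and Service checked into the same Git repository)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
replicas:
  - name: my-app               # assumed Deployment name
    count: 6                   # more replicas than the staging overlay
patches:
  - target:
      kind: Deployment
      name: my-app
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/memory
        value: 1Gi
```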
Jenkins on Kubernetes
But I understand that the normal Jenkins is very mature and can run on Kubernetes. I also understand that it can have dynamically created build agents run in the cluster.
I have not had the best experience with this. It may work, or it may not work so well. I currently host Jenkins outside the Kubernetes cluster. I think that Jenkins X together with Tekton may be a promising upcoming solution for this, but I have not tried that setup.

Setting up the Kubernetes Plugin on Jenkins

I've been struggling with setting up the Jenkins Kubernetes plugin on Google Container Engine.
I have the plugin installed but I think all my builds are still running on master.
I haven't found any good documentation or guides on configuring this.
UPDATE
I removed the master executor from my Jenkins image. So now my builds aren't running on master, but they have no executor at all, so they don't run; they just wait in the queue forever.
You'll need to tell Jenkins how and where to run your builds by adding your Kubernetes cluster as a 'cloud' in the Jenkins configuration. Go to Manage Jenkins -> Configure System -> Cloud -> Add new cloud and select 'Kubernetes'. You'll find the server certificate key, user name and password in your local kubectl configuration (usually in ~/.kube/config). The values for 'Kubernetes URL' and 'Jenkins URL' depend on your cluster setup.
Next, you'll need to configure the Docker images that should be used to run your builds by selecting 'Add Docker Template'. Use labels to define which tasks should be run with which image! A sketch of the kind of agent pod such a template produces is shown below.
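To give an idea of what the plugin spins up, here is a hypothetical pod of the kind a pod/Docker template describes (if I remember correctly, newer plugin versions also let you supply raw YAML for the pod). The label, container name and image are assumptions; the plugin adds its own jnlp agent container alongside whatever you define:

```yaml
# Hypothetical agent pod for a pod template labelled "jnlp-slave"
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins/label: jnlp-slave
spec:
  containers:
    - name: maven
      image: maven:3.9-eclipse-temurin-17
      command: ["sleep"]
      args: ["infinity"]        # keep the container alive so build steps can run inside it
      resources:
        requests:
          cpu: 500m
          memory: 512Mi
```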
Here's a good video tutorial and here you'll find a nice tutorial which explains everything in detail.
The important bit after you've installed the plugin, set up access to your Kubernetes cluster, and set up your first Kubernetes Pod Template with a label like jnlp-slave, is that in your Jenkinsfile you need to begin with something like node('jnlp-slave') {}. Then the pod will be started when you trigger a build.
There's also a helm chart for easy deployment if that helps :)
This example might also help once you've set the plugin up too.

Continuous deployment with docker

I'm currently working with a stack that lets me automate my integration / deployment system.
Currently I work as follows:
I push my code to a GitHub repository
Jenkins watches the repo, builds the software and launches the unit tests
If the unit tests (or any other kind of tests) pass, it notifies Rundeck to deploy to my servers (3 in my case) by connecting over SSH and telling them: "hey, you have to pull from GitHub, a new version of the software is available"; it then restarts the concerned service and my software is now up to date
Okay, tell me if I'm wrong, but it seems to be a good solution, right?
Then I wanted to containerize my applications, and now I have some headaches.
First solution
In fact, I was wondering about something like:
Push to GitHub
Jenkins tests and builds the Docker image
Rundeck pushes it to Docker Hub and tells the 3 servers (over SSH) to pull the new image from the hub and run it
Problem: it will run in another container (multiple docker runs of the same image, but with different versions)
Second solution
The second solution was to:
Push to GitHub
Jenkins tests and tells Rundeck that the tests succeeded, without creating a "real build" (only one for testing)
Rundeck connects to the running container through SSH, asks it to pull the modifications, then restarts the Docker container
Problem: I am forced to run SSH in all my containers
I don't know how to get around these problems, or what the best solution is...
Thanks for your help
I don't see any problem with solution 1.
1. Build the production version with Jenkins
2. Push it (via Jenkins) to your private Docker registry
3. Tell Rundeck/Ansible/Chef/Puppet to ask the 3 servers to pull the latest image and restart the container (a minimal sketch is shown below)
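A minimal sketch of step 3 on each server, assuming a hypothetical registry, image name and a docker-compose setup on the hosts:

```yaml
# Hypothetical docker-compose.yml kept on each of the 3 servers.
# The Rundeck job (or Ansible/Chef/Puppet task) simply runs, over SSH:
#   docker compose pull && docker compose up -d
services:
  app:
    image: registry.example.com/my-app:latest   # pushed by Jenkins in step 2
    restart: always
    ports:
      - "80:8080"
```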
However, it's highly recommended to have a strategy that considers blue-green deployment and rollbacks in case something crashes.
