Run Automated Integration Tests on a Kubernetes Cluster - docker

My team has set up a Kubernetes cluster, and we have been testing it manually using the kubectl command line. These test cases cover, for example:
Pods
Services, LoadBalancer, etc.
Deployments
Horizontal Pod Autoscaling
Rollback of Deployments
Ingress Controller
Helm - the package manager for Kubernetes
Persistent Volumes and Persistent Volume Claims
DNS
Link for the above manual test cases: Kubernetes DOCS
I have found a GitHub resource for running integration tests using automation.
Please refer to this link for more information: Run Integration test cases on kubernetes
But I am unable to figure out how to run them; the process described in that GitHub resource is confusing and unclear.
I have searched blogs for a whole day, and while they were helpful, not by much. My question is: has anyone run automated integration tests on their Kubernetes cluster?
If yes, can you please share the best resources and the steps to follow?
Adding more information: I have searched everywhere and found a GitHub repository,
the official Python client library for Kubernetes, and I am going to try with that.
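For reference, here is a rough sketch of what I plan to try with that client, assuming kubectl access is already configured via kubeconfig; the namespace "default" and the deployment name "my-app" are placeholders only:

    # Minimal automated checks with the official Kubernetes Python client.
    # Assumes kubeconfig access is configured; "default" and "my-app" are placeholders.
    from kubernetes import client, config

    def test_pods_are_running(namespace="default"):
        config.load_kube_config()                  # use ~/.kube/config, like kubectl does
        core = client.CoreV1Api()
        pods = core.list_namespaced_pod(namespace).items
        assert pods, "no pods found in namespace " + namespace
        for pod in pods:
            assert pod.status.phase == "Running", pod.metadata.name + " is " + pod.status.phase

    def test_deployment_is_fully_available(name="my-app", namespace="default"):
        config.load_kube_config()
        apps = client.AppsV1Api()
        dep = apps.read_namespaced_deployment(name, namespace)
        assert dep.status.ready_replicas == dep.spec.replicas

    if __name__ == "__main__":
        test_pods_are_running()
        test_deployment_is_fully_available()
        print("basic integration checks passed")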

Give Sonobuoy a read-through. It runs conformance tests on a Kubernetes cluster and also allows you to run specific tests of your own using its plugin architecture.

Related

Run a GitLab CI pipeline in a Docker container

Absolute beginner in DevOps here. I have a GitLab repo that I would like to build, and whose tests I would like to run, in the GitLab CI pipeline.
So far, I'm only testing locally on my machine with a specific runner. There's a lot of information out there and I'm starting to get lost about what to use and how to use it.
How would I go about creating a container with the tools that I need (VS compiler, CMake, Git, etc.)?
My application depends on an SDK that only works on Windows, so I'm not sure building on another platform would work at all. How do I select a Windows-based container?
How would I use that container in the GitLab .yml file so that I can build my solution and run my tests?
Any specific documentation links or suggestions are welcome and appreciated.
How would I go about creating a container with the tools that I need (VS compiler, CMake, Git, etc.)?
You can install those tools before the pipeline script runs. I usually do this in before_script.
If there are large-ish packages that need to be installed on every pipeline run, I'd recommend that you build your own image with all the required build dependencies, push it to the GitLab container registry and then just use it as your job image.
My application depends on an SDK that only works on Windows, so I'm not sure building on another platform would work at all. How do I select a Windows-based container?
If you're using gitlab.com - Windows runners are currently in beta, but available for use.
SaaS runners on Windows are in beta and shouldn’t be used for production workloads.
During this beta period, the shared runner quota for CI/CD minutes applies for groups and projects in the same manner as Linux runners. This may change when the beta period ends, as discussed in this related issue.
If you're self-hosting - set up your own runner on Windows.
How would I use that container in the GitLab .yml file so that I can build my solution and run my tests?
This really depends on:
the previous parts (whether you're on GitLab.com or self-hosted)
how your application is built
what infrastructure you have access to
What I'm trying to say is that I don't feel I can give you a good answer without quite a bit more information.

Directory structure for project with Dockerfile, Jenkinsfile, Kubernetes deployment yaml, pip requirements.txt, and test scripts?

Would the following directory structure work?
The goal is to have Jenkins trigger off GitHub commits and run Multi-branch Pipelines that build and test containers. (I have everything running on Kubernetes, including Jenkins)
/project
    .git
    README.md
    Jenkinsfile
    /docker_image_1
        Dockerfile
        app1.py
        requirements.txt
        /unit_tests
            unit_test1.py
            unit_test2.py
    /docker_image_2
        Dockerfile
        app2.py
        requirements.txt
        /unit_tests
            unit_test1.py
            unit_test2.py
    /k8s
        /dev
            deployment.yaml
        /production
            deployment.yaml
    /component_tests
        component_tests.py
Is the k8s folder that holds the deployment.yaml files in the right place?
Are the test folders in good locations? The tests in "component_tests" will ideally do more end-to-end, integrated testing that involves multiple containers.
I see that a lot of repos have the Jenkinsfile and Dockerfile at the same directory level. What are the pros and cons of that?
There's no good answer to this question currently.
Kubernetes provides a standard API for deployment, but as a technology it relies on additional third-party tooling to manage the build part of the ALM workflow. There are lots of options available for turning your source code into a container running on Kubernetes. Each has its own consequences for how your source code is organised and how a deployment might be invoked from a CI/CD server like Jenkins.
I provide the following collection of options for your consideration, roughly categorized; it represents my current evaluation list.
"Platform as a service" tools
Tooling that manages the entire ALM lifecycle of your code. Powerful, but more complex and opinionated.
Deis workflow
Openshift
Fabric8 (See also Openshift.io)
Build and deploy tools
Tools useful for the code/test/code/retest workflow common during development. Can also be invoked from Jenkins to abstract your build process.
Draft
Forge
Kompose
Fabric8 Maven plugin (Java)
Psykube
YAML templating tools
Kubernetes YAML was never designed to be written by human beings. There are several initiatives to make this process simpler and more standardized.
Helm
Ksonnet
Deployment monitoring tools
These tools have conventions where they expect to find Kubernetes manifest files (or helm charts) located in your source code repository.
Keel
Kube-applier
Kubediff
Landscaper
Kit
CI/CD tools with k8s support
Spinnaker
Gitlab
Jenkins + Kubernetes CI plugin
Jenkins + Kubernetes plugin
This is really left largely to your preference.
In our projects we tend to split services into separate repositories rather than subfolders, but we also had a case where a bunch of Scala microservices was managed in a similar way (although the Docker images were built with the sbt Docker plugin).
One big piece of advice I would give you: in the long run, managing your Kubernetes manifests like that can become a serious pain. I went through this, and my suggestion is to use Helm charts from day one.
I assume that your "component_tests" are end-to-end tests. Other than the naming, I see no problem with that. For cases where we test solutions that span multiple repos, though, we keep those tests in a separate repo as well.
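For illustration, such an end-to-end test in component_tests.py could be as simple as exercising the deployed services over HTTP; the URLs and the /health and /items endpoints below are hypothetical placeholders for whatever your ingress or services actually expose:

    # Sketch of an end-to-end component test that spans multiple deployed containers.
    # The URLs and endpoints are placeholders only; point them at your environment.
    import os
    import requests

    APP1_URL = os.environ.get("APP1_URL", "http://app1.dev.example.com")
    APP2_URL = os.environ.get("APP2_URL", "http://app2.dev.example.com")

    def test_services_are_healthy():
        for url in (APP1_URL, APP2_URL):
            response = requests.get(url + "/health", timeout=10)
            assert response.status_code == 200, url + " returned " + str(response.status_code)

    def test_app1_talks_to_app2():
        # Write through app1, then read the result back via app2.
        created = requests.post(APP1_URL + "/items", json={"name": "smoke-test"}, timeout=10)
        assert created.status_code in (200, 201)
        listed = requests.get(APP2_URL + "/items", timeout=10).json()
        assert any(item.get("name") == "smoke-test" for item in listed)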

DevOps vs Docker

I am wondering how exactly Docker fits into CI/CD.
I understand that, with the help of containers, you can focus on code rather than on dependencies and environment. But once you check in your code, you expect tools like TeamCity, Jenkins or Bamboo to take care of the integration build, integration/unit tests and deployment to the target servers (after approvals), where you expect the same Docker container image to run the built code.
However, in all of the above, Docker is nowhere in the CI/CD cycle, though it comes into play when execution happens on the server. So why do I see articles listing it as one of the things for DevOps?
I could be wrong, as I am not a DevOps guru. Please enlighten me!
Docker is just another tool available to DevOps engineers, DevOps practitioners, or whatever you want to call them. What Docker does is encapsulate code and code dependencies in a single unit (a container) that can run anywhere the Docker engine is installed. Why is this useful? For multiple reasons, but in terms of CI/CD it can help engineers separate configuration from code, decrease the time spent on dependency management, and be used to scale (with the help of some other tools, of course). The list goes on.
For example: If I had a single code repository, in my build script I could pull in environment specific dependencies to create a Container that functionally behaves the same in each environment, as I'm building from the same source repository, but it can contain a set of environment specific certificates and configuration files etc.
Another example: If you have multiple build servers, you can create a bunch of utility Docker containers that can be used in your CI/CD Pipeline to do a certain operation by pulling down a Container to do something during a stage. The only dependency on your build server now becomes Docker Engine. And you can change, add, modify, these utility containers independent of any other operation performed by another utility container.
Having said all of that, there really is a great deal you can do to utilize Docker in your CI/CD pipelines. I think an understanding of what Docker is and what Docker can do is more important than a "how to use Docker in your CI/CD" guide. While there are some common patterns out there, it all comes down to the problem(s) you are trying to solve, and certain patterns may not apply to every use case.
Docker facilitates the notion of "configuration as code". I can write a Dockerfile that specifies a particular base image that has all the frameworks I need, along with the custom configuration files that are checked into my repository. I can then build that image using the Dockerfile, push it to my docker registry, then tell my target host to pull the latest image, and then run the image. I can do all of this automatically, using target hosts that have nothing but Linux installed on them.
This is a simple scenario that illustrates how Docker can contribute to CI/CD.
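To make that flow concrete, here is a rough sketch of the build, push, pull and run steps scripted with the Docker SDK for Python; the image name and registry are placeholders, and the same steps map directly onto docker build / push / pull / run CLI commands:

    # Sketch of the build -> push -> pull -> run cycle using the Docker SDK for Python.
    # "registry.example.com/myteam/myapp" is a placeholder; in a real pipeline the tag
    # would typically come from the git commit or build number.
    import docker

    IMAGE = "registry.example.com/myteam/myapp:latest"

    client = docker.from_env()                     # talks to the local Docker engine

    # 1. Build the image from the Dockerfile in the current directory.
    image, build_logs = client.images.build(path=".", tag=IMAGE)

    # 2. Push it to the registry (credentials already set up, e.g. via docker login).
    client.images.push(IMAGE)

    # 3. On the target host (same engine here, for illustration), pull and run the image.
    client.images.pull(IMAGE)
    client.containers.run(IMAGE, detach=True, ports={"8080/tcp": 8080}, name="myapp")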
Docker is also useful for building your applications. If you have multiple applications with different dependencies, you can avoid piling dependencies and conflicts onto your CI machine by building everything in Docker containers that carry the necessary dependencies. If you need to scale in the future, all you need is another machine running your CI tool (like a Jenkins slave) and an installation of Docker.
When using microservices this is very important. One application can depend on an old version of a framework while another needs the new version. With containers, that's not a problem.
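A small sketch of that idea with the Docker SDK for Python; the python:3 image and the workspace path are examples only, and each application could use its own build image:

    # Sketch: run an application's build/tests inside a throwaway container that carries
    # the dependencies, so the CI machine itself only needs Docker installed.
    # The image name and paths are examples only.
    import docker

    client = docker.from_env()

    output = client.containers.run(
        "python:3",                                              # per-app build image
        "sh -c 'pip install -r requirements.txt && pytest'",     # build/test command
        volumes={"/home/jenkins/workspace/app1": {"bind": "/src", "mode": "rw"}},
        working_dir="/src",
        remove=True,          # throw the container away afterwards
    )
    print(output.decode())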
Docker is a DevOps Enabler, Not DevOps Itself: Using Docker, developers can support new development, enhancement, and production support tasks easily. Docker containers define the exact versions of the software in use, which means we can decouple a developer's environment from the application that needs to be serviced or enhanced.
Without Pervasive Automation, Docker Won't Do Much for You: You can't achieve DevOps with bad code. You must first ensure that the code being delivered is of the highest quality by automating all developer code delivery tasks, such as unit testing, integration testing, automated acceptance testing (AAT), static code analysis, code review sign-offs and pull request workflow, and security analysis.
Leapfrogging to Docker without Virtualization Know-How Won't Work: Leapfrogging as an IT strategy rarely works. More often than not, new technologies bring about abstractions over existing technologies. It is true that such abstractions increase productivity, but they are not an excuse to skip the part where we must understand how a piece of technology works.
Docker is a First-Class Citizen on All Computing Platforms: This is the right time to jump on the Docker bandwagon. For the first time ever, Docker is supported on all major computing platforms in the world. There are two kinds of servers: Linux servers and Windows servers. Native Docker support for Linux has existed from day one, and since then Linux support has been steadily optimized.
Agile is a Must to Achieve DevOps: DevOps is a must to achieve Agile. The point of Agile is adding and demonstrating value iteratively to all stakeholders; without DevOps, you likely won't be able to demonstrate the value you're adding to stakeholders in a timely manner. So why is Agile also a must to achieve DevOps? It takes a lot of discipline to create a stream of continuous improvement, and an Agile framework like Scrum defines the fundamental qualities a team must possess to begin delivering iteratively.
Docker saves your organization capital and resources by containerizing your applications. Containers on a single host are isolated from each other yet share the same OS resources, which frees up RAM, CPU, storage and so on. Docker makes it easy to package an application along with all of its required dependencies in an image. For most applications there are readily available base images, and you can create a customized base image as well by writing a simple Dockerfile. You can ship this image to a central registry, from which you can pull it to deploy into various environments such as QA, STAGE and PROD. All of these activities can be automated by CI tools like Jenkins.
In a CI/CD pipeline you can expect Docker to come into the picture when the build is ready. Initially, the CI server (Jenkins) checks out the code from SCM into a temporary workspace where the application is built. Once the build artifact is ready, you can package it as an image with its dependencies; Jenkins does this by executing simple docker build commands.
Docker removes the well-known "matrix from hell" problem by making environments independent with its container technology. As an open source project, Docker changed the game by simplifying container workflows, and this has resulted in a lot of excitement around using containers in all stages of the software delivery lifecycle, from development to production.
It is not just about containers: it involves building Docker images, managing your images and dependencies in a Docker registry, deploying to an orchestration platform, and so on, and it all comes under the CI/CD process.
DevOps is a culture, methodology or procedure for delivering development very quickly. Docker is one of the tools in a DevOps culture, used to deploy applications as containers (which use fewer resources to deploy the application).
Docker just packages the developer environment to run on other systems, so that developers need not worry about their code working on their machine but not in production due to differences in environment and operating system.
It just makes the code portable to other environments.

Chef and Docker

I am a bit confused. As part of a course we are supposed to set up a CI/CD solution using Jenkins, Docker and Chef; how the flow should look is not specified.
We have been setting up Jenkins so that for every new git commit it creates a Jenkins slave that spins up the specific containers needed for a test, then tears them down and reports the result.
So I have been looking around today for information on using Chef and Docker for continuous delivery/deployment. The use case I see is the following: specify the machine deployment options in Chef - how many machines for each server, database and so on. When the Jenkins slave successfully builds and tests the application, it is time to deploy: remove any old containers, build new containers, and handle configuration and other necessary management in Chef.
I have been looking around for information on similar use cases and there does not seem to be much about it. I have been tinkering with the chef-provisioning plugin together with chef-provisioning-docker, but the information on using, for example, the Docker driver is not very intuitive. Then I stumbled across this article (https://coderanger.net/provisioning/), which basically recommends that new projects do not start with the chef-provisioning plugin.
Is there something I am missing? Is this kind of use case not that popular, or even just a bad idea? Are there any other plugins that I have missed, or another setup with Chef that is more suitable?
Cheers in advance!
This kind of purely procedural stuff isn't really what Chef is for. You would probably want to use something integrated directly with Jenkins as a plugin. Or, if you're talking about cookbook integration tests, there are the kitchen-docker and kitchen-dokken drivers, which can handle the container management for you.
EDIT: The above was not really what the question was about; new answer below.
The tool you're looking for is usually called a resource scheduler or cluster orchestrator. Chef can do this either via chef-provisioning or the docker cookbook; between those two I would use the latter. But that said, Chef is really not the best tool for this job. There is a whole generation of dedicated schedulers, including Mesos+Marathon, Kubernetes, Nomad, and docker-compose+Swarm. Of all of those, Nomad is probably the simplest, but Kubernetes has a huge community following and is growing quickly. I would consider using Chef for this an intermediate step at best.
I would suggest using container orchestration platforms like Kubernetes, Docker Swarm or Mesos. Personally, I would recommend Kubernetes, since it is the leading platform of the three listed.
Chef is a good configuration management tool, and using it to provision containers would work, but it is not the best solution. You would run into issues like managing where the containers should be provisioned, monitoring container status, and handling their failures. A platform like Kubernetes handles this for you.
This would be useful for getting some insight:
https://www.youtube.com/watch?v=24X18e4GVbk
more to read:
http://www.devoperandi.com/how-we-do-builds-in-kubernetes/

Jenkins - Docker integration

I'm looking for the best way to integrate Docker into Jenkins to execute build/test commands.
The best source I have found is this blog post:
http://blog.howareyou.com/post/62157486858/continuous-delivery-with-docker-and-jenkins-part-i
It basically suggests wrapping all execution commands with "docker run".
I would like to see a better integration via a Jenkins plugin, but I couldn't find anything in this area.
Could anyone suggest a good way to proceed? Is there any project on the way to address this?
Thanks
There's a second blog post which goes into more detail about the setup. The missing piece was dockerize, which makes the Jenkins and Docker integration painless. If you look at the Ruby app example, there's a Vagrantfile which will set everything up for you; use that as the starting point for your own setup.
Wouldn't you just create a shell script to set up the Docker environment as you normally would (or, better still, push the Docker image to a private repo), and have Jenkins pull it down and run the test suite inside Docker in daemonised mode?
Maybe you could use shared directories to drop the test output somewhere Jenkins can read it and display it?
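In that spirit, a rough sketch with the Docker SDK for Python; the image name and paths are placeholders, and Jenkins would read the results from the shared directory afterwards:

    # Sketch: run the test suite in a detached ("daemonised") container and share a
    # directory so Jenkins can pick up the results. Image name and paths are placeholders.
    import docker

    client = docker.from_env()

    container = client.containers.run(
        "myteam/app-tests:latest",                 # pre-built image from the private repo
        detach=True,                               # daemonised mode
        volumes={"/var/jenkins/test-results": {"bind": "/results", "mode": "rw"}},
    )

    result = container.wait()                      # block until the test suite finishes
    print(container.logs().decode())               # surface the output in the Jenkins console
    container.remove()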
Hopefully a point in the right direction...
This plugin might be what you are looking for.
