Configuring different build tools for a Jenkins instance in OpenShift

We provide Jenkins as a Service in our corporate environment, using OpenShift as the orchestration platform. Different teams use different tools, and different versions of those tools, in their jobs.
For instance, we have 3 different combinations of Java and Maven, 5 different versions of npm, and 2 different versions of Python.
I wanted to know: what is the best practice for configuring different tools?
Do I need to create and use a slave image for each combination and each version of a tool?
Is it good practice to keep a few simple slave images (for example, one per JDK version: 1.7, 1.8, etc.), configure the JDK, npm, Maven and Python packages as Jenkins tools, and use a persistent volume on the slave, so that during a build these packages are set up on the fly in the PVC?
Or is using tools this way in Docker slave images an anti-pattern?

I have accomplished this by creating a Git repository called jenkins; the structure of the repository looks like this:
master/
    plugins.txt
    config-stuff
agents/
    base/
    nodejs8/
    nodejs10/
    nodejs12/
    maven/
    java8/
openshift/
    templates/
        build.yaml
        deploy.yaml   (includes the deployment and the configmaps that attach the agents)
    params/
        build
        deploy
We are able to build each agent and the master independently. We place the deployment template on the OpenShift cluster, so to install Jenkins in a namespace the user only has to run oc process openshift//jenkins | oc apply -f -. However, you should also look into Helm for installing Jenkins as a Helm chart.
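To make the install flow concrete, here is a rough sketch of how such a layout might be driven from the command line; the BuildConfig names and parameter files are assumptions for illustration, not part of the setup described above.

    # Create the build objects and build the master plus one agent image
    oc process -f openshift/templates/build.yaml --param-file=openshift/params/build | oc apply -f -
    oc start-build jenkins-master          # hypothetical BuildConfig name
    oc start-build jenkins-agent-nodejs10  # hypothetical BuildConfig name

    # Deploy Jenkins (deployment plus the agent ConfigMaps) into the current namespace
    oc process -f openshift/templates/deploy.yaml --param-file=openshift/params/deploy | oc apply -f -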

In my view it is better to create separate images containing only the tools a specific kind of app needs: Java tools for Java apps, only Python tools for Python apps. You can use Docker Compose so that all the tools are available from a single host, and volume data is preserved when containers are recreated.
Compose supports variables in the compose file. You can use these variables to customize your composition for different environments, or different users.
Compose preserves all volumes used by your services. When docker-compose up runs, if it finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you’ve created in volumes isn’t lost.
Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.
For a complete example, see the Compose file reference: compose-file.
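As a minimal sketch only (image tags and volume names here are arbitrary), a compose file that keeps one Maven/JDK tool container and one Node.js tool container side by side, with their caches in named volumes, could look like this:

    version: "3"
    services:
      maven:
        image: maven:3.6-jdk-8          # arbitrary example tag
        working_dir: /workspace
        volumes:
          - m2-cache:/root/.m2          # Maven repository survives container recreation
          - ./:/workspace
      node:
        image: node:10                  # arbitrary example tag
        working_dir: /workspace
        volumes:
          - npm-cache:/root/.npm
          - ./:/workspace
    volumes:
      m2-cache:
      npm-cache: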

Related

How to start docker containers using shell commands in Jenkins

I'm trying to start two containers (each with a different image) using Jenkins shell commands. I tried installing the Docker extension in Jenkins and/or setting up Docker under Global Tool Configuration. I am also doing all this in a pipeline. After executing docker run... I'm getting a "docker: not found" error in the Jenkins console output.
I am also having a hard time finding a guide on the internet that describes exactly what I wish to accomplish. If it is of any importance, I'm trying to start a Selenium Grid and a Selenium Chrome node and then use Maven (which is configured and works correctly) to send a test suite to that node.
If you have any experience with something similar to what I wish to accomplish, please share your thoughts on the best approach to this situation.
Cheers.
That's because the Docker images that you probably create within your pipeline cannot also run (become containers) within the pipeline environment, because that environment isn't designed to also host applications.
You need to find a hosting provider for your Docker images (e.g. Azure or GCP). Once you set up the hosting part, add a step to your pipeline that uploads/pushes the image to that provider's Docker registry, or to the free public Docker Hub. Then, finally, add a step to your pipeline that sends a command to your hosting to download the image from whichever registry you chose and launch it into a container (this download-and-launch part is covered by docker run). Only at that point do you have a running app.
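In plain Docker CLI terms, those steps might look roughly like this (registry and image names are invented for illustration):

    # In the pipeline, after the image is built:
    docker build -t registry.example.com/myteam/selenium-suite:1.0 .
    docker push registry.example.com/myteam/selenium-suite:1.0

    # On the hosting environment (wherever the Docker engine actually runs):
    docker pull registry.example.com/myteam/selenium-suite:1.0
    docker run -d --name selenium-suite registry.example.com/myteam/selenium-suite:1.0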
Good luck.
Somewhat relevant (maybe it'll help you understand how some of those things work):
The docker build command is comparable to the process of producing an installer package, such as an MSI.
A Docker image is comparable to an installation package (e.g. an MSI).
The docker run command is comparable to running an installer package with the goal of installing an app. So, using the same analogy, running an MSI installs an app.
A container is comparable to an installed application. Just like an app, a Docker container can be running or stopped. This depends on the environment, which I referred to as "hosting" above.
Just like you can build an MSI package on one machine and run it on other machines, you build Docker images on one machine (the pipeline host, in your case), but you need to host them in environments that support that.

How to inject a configuration file into an openshift docker build (not a pod)

Before the docker build starts I am producing a configuration file which contains a lot of environment-specific configuration. I don't want it in my repository because it changes frequently.
OpenShift recommends that artifacts needed during a docker build be downloaded directly from within the container (see here). Since they are mostly unique for each deployment and immutably stored in the container, I would like to avoid having to first push the config file to an artifact repo just to download it again from within the container.
Additionally, they may contain sensitive data (although that should be easy to inject via environment variables).
Is there a different way to pass small configuration files (<10 KB) to the OpenShift docker build context?
Update: in this case I only use OpenShift as a docker builder and push the resulting image to an external registry. From there the container is picked up for deployment.
(The situation is that we are in the middle of a migration where the target architecture is not yet defined, but some specific parts, like OpenShift for CI/CD, must already be used.)
Is the configuration affecting the build, or just the deployment?
If it is just the deployment, use config maps or secrets. These can be used as a source for setting environment variables, or mounted as files into the container for the deployment.
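A minimal sketch of that approach, with names invented for illustration: a ConfigMap holds the environment-specific values, and the deployment consumes it both as environment variables and as a mounted file.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config                       # illustrative name
    data:
      FEATURE_X_ENABLED: "true"
      application.properties: |
        feature.x.enabled=true
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: app
      template:
        metadata:
          labels:
            app: app
        spec:
          containers:
            - name: app
              image: registry.example.com/app:latest   # illustrative image
              envFrom:
                - configMapRef:
                    name: app-config                   # valid keys become environment variables
              volumeMounts:
                - name: config
                  mountPath: /etc/app                  # keys become files under /etc/app
          volumes:
            - name: config
              configMap:
                name: app-config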

Directory structure for a project with Dockerfile, Jenkinsfile, Kubernetes deployment YAML, pip requirements.txt, and test scripts?

Would the following directory structure work?
The goal is to have Jenkins trigger off GitHub commits and run Multi-branch Pipelines that build and test containers. (I have everything running on Kubernetes, including Jenkins)
/project
    .git
    README.md
    Jenkinsfile
    /docker_image_1
        Dockerfile
        app1.py
        requirements.txt
        /unit_tests
            unit_test1.py
            unit_test2.py
    /docker_image_2
        Dockerfile
        app2.py
        requirements.txt
        /unit_tests
            unit_test1.py
            unit_test2.py
    /k8s
        /dev
            deployment.yaml
        /production
            deployment.yaml
    /component_tests
        component_tests.py
Is the k8s folder that holds the deployment.yamls in the right place?
Are the test folders in good locations? The tests in "component_tests" will ideally be doing more end-to-end, integrated testing that involves multiple containers.
I see a lot of repos that have the Jenkinsfile and Dockerfile at the same directory level. What are the pros and cons of that?
There's no good answer to this question currently.
Kubernetes provides a standard API for deployment, but as a technology it relies on additional third-party tooling to manage the build part of the ALM workflow. There are lots of options available for turning your source code into a container running on Kubernetes. Each has its own consequences for how your source code is organised and how a deployment might be invoked from a CI/CD server like Jenkins.
I provide the following collection of options for your consideration, roughly categorized; it represents my current evaluation list.
"Platform as a service" tools
Tooling that manages the entire ALM lifecycle of your code. Powerful, but more complex and opinionated.
Deis workflow
Openshift
Fabric8 (See also Openshift.io)
Build and deploy tools
Tools useful for the code/test/code/retest workflow common during development. Can also be invoked from Jenkins to abstract your build process.
Draft
Forge
Kcompose
Fabric8 Maven plugin (Java)
Psykube
YAML templating tools
Kubernetes YAML was never designed to be written by human beings. There are several initiatives to make this process simpler and more standardized.
Helm
Ksonnet
Deployment monitoring tools
These tools have conventions where they expect to find Kubernetes manifest files (or helm charts) located in your source code repository.
Keel
Kube-applier
Kubediff
Landscaper
Kit
CI/CD tools with k8s support
Spinnaker
Gitlab
Jenkins + Kubernetes CI plugin
Jenkins + Kubernetes plugin
This is really left largely to your preference.
In our projects we tend to split services into separate repositories rather than subfolders, but we also had a case where a bunch of Scala microservices was managed in a similar way (although the Docker images were built with the sbt Docker plugin).
One big piece of advice I would give you is that in the long run managing your Kubernetes manifests like that might become a serious pain. I went through this, and my suggestion is to use Helm charts from day one.
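For what that advice looks like in practice, here is a minimal, illustrative Helm sketch (chart layout, names and values are all invented): the environment-specific numbers move into values files and the manifest becomes a template.

    # myapp/values-dev.yaml (one values file per environment)
    image:
      repository: registry.example.com/myapp
      tag: "1.0.0"
    replicas: 1

    # myapp/templates/deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}-myapp
    spec:
      replicas: {{ .Values.replicas }}
      selector:
        matchLabels:
          app: {{ .Release.Name }}-myapp
      template:
        metadata:
          labels:
            app: {{ .Release.Name }}-myapp
        spec:
          containers:
            - name: myapp
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

    # Install/upgrade into an environment with the matching values file:
    #   helm upgrade --install myapp-dev ./myapp -f myapp/values-dev.yaml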
I assume that your "component_tests" are end-to-end tests. Other than the naming, I see no problem with that. For cases where we test solutions that span multiple repos, we keep the tests in a separate repo as well, though.

Service from sources inside Docker container

Short question: is it OK (are there any contradictions with Docker ideology?) to compile and start an application from sources inside a Docker container?
Assume that I have some hypothetical application. Let it be a Java web service built with Maven, located somewhere on GitHub. The specifics don't matter here.
But before starting this service, I need to set up several config files with the right parameters, which are known at deployment time. Right now I can build a fully preconfigured application package with a single Maven command, passing all the necessary configuration on the build command line.
Now assume that I need to make it a Docker container and don't have time to refactor it right now. So I have a plan: my Docker image will contain Maven and Git, and its entrypoint script will clone my Git repository, build the application and start it, passing all the necessary parameters via the environment.
Is this a suitable plan, or is it just wrong?
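Just to make the plan concrete, here is a minimal sketch of what such an image could look like (repository URL, property names, paths and the base image tag are placeholders, not a recommendation):

    # Dockerfile
    # example base image with Maven and a JDK preinstalled
    FROM maven:3.6-jdk-8
    # git is needed so the entrypoint can clone the sources at start-up
    RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
    COPY entrypoint.sh /entrypoint.sh
    RUN chmod +x /entrypoint.sh
    ENTRYPOINT ["/entrypoint.sh"]

    # entrypoint.sh
    #!/bin/sh
    set -e
    # Clone, build with parameters taken from the environment, then run
    git clone "$GIT_REPO_URL" /app
    cd /app
    mvn -B package -Dconfig.env="$APP_ENV"    # hypothetical build property
    exec java -jar target/*.jar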

DevOps vs Docker

I am wondering how exactly Docker fits into CI/CD.
I understand that with the help of containers you can focus on code rather than on dependencies/environment. But once you check in your code, you expect tools like TeamCity, Jenkins or Bamboo to take care of the integration build, integration/unit tests and deployment to target servers (after approvals), where you expect the same Docker container image to run the built code.
However, in all of the above, Docker is nowhere in the CI/CD cycle; it only comes into play when execution happens on the server. So why do I see articles listing it as one of the things for DevOps?
I could be wrong, as I am not a DevOps guru; please enlighten me!
Docker is just another tool available to DevOps engineers, DevOps practitioners, or whatever you want to call them. What Docker does is encapsulate code and its dependencies in a single unit (a container) that can be run anywhere the Docker engine is installed. Why is this useful? For multiple reasons; in terms of CI/CD it can help engineers separate configuration from code, it decreases the amount of time spent on dependency management, and it can be used to scale (with the help of some other tools, of course). The list goes on.
For example: If I had a single code repository, in my build script I could pull in environment specific dependencies to create a Container that functionally behaves the same in each environment, as I'm building from the same source repository, but it can contain a set of environment specific certificates and configuration files etc.
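A hedged sketch of that idea using a Docker build argument (the directory layout and argument name are invented for illustration):

    # Dockerfile
    FROM openjdk:8-jre
    # Which environment's certificates and config files to bake into the image
    ARG TARGET_ENV=dev
    COPY config/${TARGET_ENV}/ /etc/myapp/
    COPY certs/${TARGET_ENV}/ /etc/myapp/certs/
    COPY target/myapp.jar /opt/myapp.jar
    CMD ["java", "-jar", "/opt/myapp.jar"]

    # Built from the same repository for each environment:
    #   docker build --build-arg TARGET_ENV=staging -t myapp:staging .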
Another example: If you have multiple build servers, you can create a bunch of utility Docker containers that can be used in your CI/CD Pipeline to do a certain operation by pulling down a Container to do something during a stage. The only dependency on your build server now becomes Docker Engine. And you can change, add, modify, these utility containers independent of any other operation performed by another utility container.
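For instance, with the Jenkins Docker Pipeline plugin a stage can run inside such a throwaway utility container, so the build server only needs the Docker engine (the image and commands here are only an example):

    // Jenkinsfile, scripted pipeline excerpt
    node {
        checkout scm
        // Run this stage's work inside a utility container; nothing but the
        // Docker engine has to be installed on the build server itself.
        docker.image('node:10').inside {
            sh 'npm ci && npm run lint'
        }
    }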
Having said all of that, there really is a great deal you can do to utilize Docker in your CI/CD pipelines. I think an understanding of what Docker is, and what Docker can do, is more important than a "how to use Docker in your CI/CD" guide. While there are some common patterns out there, it all comes down to the problem(s) you are trying to solve, and certain patterns may not apply to your particular use case.
Docker facilitates the notion of "configuration as code". I can write a Dockerfile that specifies a particular base image that has all the frameworks I need, along with the custom configuration files that are checked into my repository. I can then build that image using the Dockerfile, push it to my docker registry, then tell my target host to pull the latest image, and then run the image. I can do all of this automatically, using target hosts that have nothing but Linux installed on them.
This is a simple scenario that illustrates how Docker can contribute to CI/CD.
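A rough sketch of that cycle, assuming a target host that has only Linux and the Docker engine, and an invented registry name:

    # On the CI server, after the image has been built from the Dockerfile:
    docker build -t registry.example.com/web:42 .
    docker push registry.example.com/web:42

    # Deployment step: tell the target host to pull and run the new image
    ssh deploy@web-host '
      docker pull registry.example.com/web:42 &&
      (docker rm -f web || true) &&
      docker run -d --name web -p 80:8080 registry.example.com/web:42
    '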
Docker is also useful for building your applications. If you have multiple applications with different dependencies, you can avoid having a lot of dependencies and conflicts on your CI machine by building everything in Docker containers that have the necessary dependencies. If you need to scale in the future, all you need is another machine running your CI tool (like a Jenkins slave) and an installation of Docker.
When using microservices this is very important. One application can depend on an old version of a framework while another needs the new version. With containers, that's not a problem.
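As an illustration (image tags invented), a declarative Jenkinsfile using the Docker Pipeline plugin can simply pick a different tool container per stage, so the conflicting framework versions never have to coexist on the CI machine:

    // Jenkinsfile
    pipeline {
        agent none
        stages {
            stage('Build legacy service') {
                agent { docker { image 'maven:3.5-jdk-7' } }   // older toolchain
                steps { sh 'mvn -B clean package' }
            }
            stage('Build new service') {
                agent { docker { image 'maven:3.6-jdk-11' } }  // newer toolchain
                steps { sh 'mvn -B clean package' }
            }
        }
    }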
Docker is a DevOps Enabler, Not DevOps Itself: Using Docker, developers can easily support new development, enhancement, and production support tasks. Docker containers define the exact versions of the software in use; this means we can decouple a developer's environment from the application that needs to be serviced or enhanced.
Without Pervasive Automation, Docker Won't Do Much for You: You can't achieve DevOps with bad code. You must first ensure that the code being delivered is of the highest quality by automating all developer code delivery tasks, such as unit testing, integration testing, automated acceptance testing (AAT), static code analysis, code review sign-offs and pull request workflow, and security analysis.
Leapfrogging to Docker without Virtualization Know-How Won't Work: Leapfrogging as an IT strategy rarely works. More often than not, new technologies bring about abstractions over existing technologies. It is true that such abstractions increase productivity, but they are not an excuse to skip the part where we must understand how a piece of technology works.
Docker is a First-Class Citizen on All Computing Platforms: This is the right time to jump on the Docker bandwagon. For the first time ever, Docker is supported on all major computing platforms in the world. There are two kinds of servers: Linux servers and Windows servers. Native Docker support for Linux has existed from day one, and since then Linux support has been steadily optimized.
Agile is a Must to Achieve DevOps: DevOps is a must to achieve Agile. The point of Agile is adding and demonstrating value iteratively to all stakeholders; without DevOps you likely won't be able to demonstrate that value in a timely manner. So why is Agile also a must to achieve DevOps? It takes a lot of discipline to create a stream of continuous improvement, and an Agile framework like Scrum defines the fundamental qualities a team must possess to begin delivering iteratively.
Docker saves your organization capital and resources by containerizing your applications. Containers on a single host are isolated from each other while sharing the same OS resources, which frees up RAM, CPU and storage. Docker makes it easy to package an application along with all its required dependencies into an image. For most applications there are readily available base images, and one can create a customized base image as well. We build our own custom image by writing a simple Dockerfile. We can ship this image to a central registry, from where we can pull it to deploy into various environments like QA, STAGE and PROD. All these activities can be automated by CI tools like Jenkins.
In a CI/CD pipeline you can expect Docker to come into the picture once the build is ready. Initially the CI server (Jenkins) checks out the code from SCM into a temporary workspace, where the application is built. Once you have the build artifact ready, you can package it as an image together with its dependencies. Jenkins does this by executing simple docker build commands.
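A rough sketch of that packaging step in a Jenkinsfile, using the Docker Pipeline plugin (registry URL, credentials ID and image name are placeholders):

    // Jenkinsfile, scripted pipeline
    node {
        checkout scm
        // Build the artifact in the temporary workspace first
        sh 'mvn -B clean package'
        // Package the artifact and its dependencies into an image, then push it
        def image = docker.build("myteam/myapp:${env.BUILD_NUMBER}")
        docker.withRegistry('https://registry.example.com', 'registry-credentials-id') {
            image.push()
            image.push('latest')
        }
    }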
Docker removes the well-known "matrix from hell" problem by making environments independent with its container technology. The open source Docker project changed the game by simplifying container workflows, and this has resulted in a lot of excitement around using containers in all stages of the software delivery lifecycle, from development to production.
It is not just about containers: it involves building Docker images, managing your images and dependencies in a Docker registry, deploying to an orchestration platform, etc., and it all comes under the CI/CD process.
DevOps is a culture, methodology or set of practices for delivering development work very fast. Docker is one of the tools in a DevOps culture; it deploys applications as containers, which use fewer resources.
Docker packages the developer's environment so it can run on other systems, so developers don't need to worry about code that works on their machine but not in production because of differences in environment and operating system.
It just makes the code portable to other environments.
