Mimic prod environment on CI server - jenkins

I have to set up CI/CD for my organisation.
My requirement is that the CI server (whether hosted or on-premise) should mimic the prod environment: the operating system, the /var/log directories, the nginx and php-fpm configuration, and so on. That gives us more confidence when running integration test cases.
If we set up Jenkins on an on-premise server, we can easily replicate the prod environment on the Jenkins server.
How can I do that with a hosted CI service like Travis CI, Codeship, Circle CI, etc.?

1. Operating system
If you use Ubuntu 12.04/14.04 then most commercial offerings (Travis CI, Circle CI, and I think Codeship as well) already use it, so you're fine there. If not, then I believe you'll need to use Docker to set up a container with the expected operating system.
2. Software packages and other configuration
All commercial offerings support specifying the system packages you need and installing them on build (usually with caching, so it's quicker than a normal install). Of course, if you are using a Docker container you can pre-bake it with all the packages you need.
The same goes for /var/log: just run a script in the "pre" step of the build and set up whatever you need and expect.
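For instance, a hedged .travis.yml sketch of this approach; the package names, config path and test script (ci/nginx.conf, run-integration-tests.sh) are illustrative placeholders, not part of the original question:

# .travis.yml -- illustrative sketch; adjust packages and paths to match prod
dist: trusty              # Ubuntu 14.04, matching the prod OS
sudo: required            # needed for the /var/log and nginx setup below
language: php
addons:
  apt:
    packages:
      - nginx
      - php5-fpm
before_script:
  # recreate the prod directory layout and configs before the integration tests
  - sudo mkdir -p /var/log/myapp
  - sudo cp ci/nginx.conf /etc/nginx/sites-enabled/default   # hypothetical config file
  - sudo service nginx restart
script:
  - ./run-integration-tests.sh                               # hypothetical test script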
Bonus points
The best way to handle production infrastructure is automation, so automate your infrastructure setup with a configuration management tool (Ansible is my favorite; other popular options are Salt, Chef, Puppet and CFEngine) and then reuse the same scripts to prepare the environment in your build jobs.
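As a sketch of that idea (the playbook name provision.yml is an assumption), most hosted CI services let you run the same playbook that provisions production against the build machine itself in a pre step:

# excerpt from a CI config -- reuse the playbook that provisions production
before_script:
  - pip install ansible
  - ansible-playbook -i "localhost," -c local provision.yml   # hypothetical playbook name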

Related

Run a gitlab CI pipeline in Docker container

Absolute beginner in DevOps here. I have a GitLab repo that I would like to build and whose tests I would like to run in the GitLab CI pipeline.
So far, I'm only testing locally on my machine with a specific runner. There's a lot of information out there and I'm starting to get lost as to what to use and how to use it.
How would I go about creating a container with the tools that I need (VS compiler, CMake, Git, etc.)?
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all; how do I select a Windows-based container?
How would I use that container in the .yml file in GitLab so that I can build my solution and run my tests?
Any specific documentation links or suggestions are welcomed and appreciated.
How would I go about creating a container with the tools that I need (VS compiler, CMake, Git, etc.)?
You can install those tools before the pipeline script runs. I usually do this in before_script.
If there are large-ish packages that need to be installed on every pipeline run, I'd recommend that you build your own image with all the required build dependencies, push it to GitLab's container registry and then just use it as your job image.
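A minimal .gitlab-ci.yml sketch of both options (the base image, package list and registry path are assumptions; on a Windows runner you would use a Windows base image and a package manager such as Chocolatey instead of apt):

# Option 1: install the tools on every run in before_script
build-with-before-script:
  image: ubuntu:22.04
  before_script:
    - apt-get update && apt-get install -y git cmake build-essential
  script:
    - cmake -S . -B build
    - cmake --build build

# Option 2: pre-bake the tools into your own image, push it to the project's
# container registry and reference it as the job image
build-with-custom-image:
  image: registry.gitlab.com/mygroup/myproject/build-image:latest   # hypothetical image
  script:
    - cmake -S . -B build
    - cmake --build build
    - ctest --test-dir build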
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all; how do I select a Windows-based container?
If you're using gitlab.com, Windows runners are currently in beta but available for use:
SaaS runners on Windows are in beta and shouldn’t be used for production workloads.
During this beta period, the shared runner quota for CI/CD minutes applies for groups and projects in the same manner as Linux runners. This may change when the beta period ends, as discussed in this related issue.
If you're self-hosting, set up your own runner on Windows.
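On gitlab.com, jobs are routed to the shared Windows runners via job tags; a minimal sketch (the tag names follow the beta documentation and may change after the beta ends, and for a self-hosted Windows runner you would use whatever tags you registered it with):

# job targeting GitLab.com's shared Windows runners (beta)
windows-build:
  tags:
    - shared-windows
    - windows
    - windows-1809
  script:
    - echo "running on a Windows runner"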
How would I use that container in the .yml file in GitLab so that I can build my solution and run my tests?
This really depends on:
the previous parts (whether you're using GitLab.com or self-hosted)
how your application is built
what infrastructure you have access to
What I'm trying to say is that I feel I can't give you a good answer without quite a bit more information.

Configuring different build tools for Jenkins Instance in Openshift

We provide Jenkins as a Service, using OpenShift as the orchestration platform, in our corporate environment. Different teams use different tools, and different versions of them, to configure their jobs.
For instance, we have 3 different combinations of Java and Maven, 5 different versions of npm, and 2 different versions of Python.
I wanted to know what the best practice is for configuring different tools.
Do I need to create and use a slave image for each combination and each version of a tool?
Is it good practice to keep a simple slave image per JDK version (1.7, 1.8, etc.) and configure the JDK, npm, Maven and Python packages as tools, using a persistent volume on the slave, so that during the build these packages are set up on the fly in the PVC?
Is it an anti-pattern to use tools this way in Docker slave images?
I have accomplished this by creating a git repository called jenkins; the structure of the repository looks like this:
master/
    plugins.txt
    config-stuff
agents/
    base/
    nodejs8/
    nodejs10/
    nodejs12/
    maven/
    java8/
openshift/
    templates/
        build.yaml
        deploy.yaml (this includes the deployment and configmaps to attach the agents)
    params/
        build
        deploy
We are able to build each agent independently and the master independently. We place the deployment template on the OpenShift cluster, so a user only has to run oc process openshift//jenkins | oc apply -f - to install Jenkins in a namespace. However, you should also look into Helm for installing Jenkins as a Helm chart.
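To illustrate the agent approach, a pod definition for one of the agents above might look roughly like this (the image path and resource limits are assumptions, not taken from the repository described here):

# illustrative pod template for the nodejs10 agent
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins-agent: nodejs10
spec:
  containers:
    - name: jnlp
      image: image-registry.openshift-image-registry.svc:5000/jenkins/agent-nodejs10:latest   # hypothetical image
      resources:
        limits:
          cpu: "1"
          memory: 1Gi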
In my view it is better to create separate images per toolchain: only Java tools for Java apps, only Python tools for Python apps. You can use Docker Compose so that all tools are available from a single host, and volume data is preserved when containers are recreated.
Compose supports variables in the compose file. You can use these variables to customize your composition for different environments, or different users.
Compose preserves all volumes used by your services. When docker-compose up runs, if it finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you've created in volumes isn't lost.
Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.
Here is an example of a compose file: compose-file.
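A minimal docker-compose.yml sketch of that layout (the service names, images and shared volume are illustrative):

# one service per toolchain, sharing a workspace volume
version: "3.8"
services:
  java-tools:
    image: maven:3.8-openjdk-11      # Java/Maven toolchain
    working_dir: /workspace
    volumes:
      - workspace:/workspace
  python-tools:
    image: python:3.10               # Python toolchain
    working_dir: /workspace
    volumes:
      - workspace:/workspace
volumes:
  workspace:                         # preserved across container re-creation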

DevOps vs Docker

I am wondering how exactly Docker fits into CI/CD.
I understand that, with the help of containers, you can focus on code rather than on dependencies and environment. But once you check in your code, you expect tools like TeamCity, Jenkins or Bamboo to take care of the integration build, the integration/unit tests and the deployment to target servers (after approvals), where you expect the same Docker container image to run the built code.
However, in all of the above, Docker is nowhere in the CI/CD cycle; it only comes into play when execution happens on the server. So why do I see articles listing it as one of the key tools for DevOps?
I could be wrong, as I am not a DevOps guru; please enlighten me!
Docker is just another tool available to DevOps engineers, DevOps practitioners, or whatever you want to call them. What Docker does is encapsulate code and its dependencies in a single unit (a container) that can be run anywhere the Docker engine is installed. Why is this useful? For multiple reasons; in terms of CI/CD it helps engineers separate configuration from code, decreases the amount of time spent on dependency management, and can be used to scale (with the help of some other tools, of course). The list goes on.
For example: if I had a single code repository, in my build script I could pull in environment-specific dependencies to create a container that functionally behaves the same in each environment, since I'm building from the same source repository; it can still contain a set of environment-specific certificates, configuration files, etc.
Another example: If you have multiple build servers, you can create a bunch of utility Docker containers that can be used in your CI/CD Pipeline to do a certain operation by pulling down a Container to do something during a stage. The only dependency on your build server now becomes Docker Engine. And you can change, add, modify, these utility containers independent of any other operation performed by another utility container.
Having said all of that, there really is a great deal you can do to utilize Docker in your CI/CD pipelines. I think an understanding of what Docker is and what it can do is more important than a "how to use Docker in your CI/CD" guide. While there are some common patterns out there, it all comes down to the problem(s) you are trying to solve, and certain patterns may not apply to every use case.
Docker facilitates the notion of "configuration as code". I can write a Dockerfile that specifies a particular base image that has all the frameworks I need, along with the custom configuration files that are checked into my repository. I can then build that image using the Dockerfile, push it to my docker registry, then tell my target host to pull the latest image, and then run the image. I can do all of this automatically, using target hosts that have nothing but Linux installed on them.
This is a simple scenario that illustrates how Docker can contribute to CI/CD.
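As a hedged sketch of that scenario (the registry host, image name, deploy host and the $GIT_SHA variable are placeholders, not from the original answer):

# build the image from the Dockerfile in the repo, push it to a registry,
# then tell the target host to pull and run it; illustrative only -- a real
# deployment would also stop or replace the previously running container
build_and_deploy:
  script:
    - docker build -t registry.example.com/myapp:$GIT_SHA .
    - docker push registry.example.com/myapp:$GIT_SHA
    - ssh deploy@target-host "docker pull registry.example.com/myapp:$GIT_SHA && docker run -d registry.example.com/myapp:$GIT_SHA"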
Docker is also useful for building your applications. If you have multiple applications with different dependencies, you can avoid piling up dependencies and conflicts on your CI machine by building everything in Docker containers that have the necessary dependencies. If you need to scale in the future, all you need is another machine running your CI tool (like a Jenkins slave) and an installation of Docker.
When using microservices this is very important. One application can depend on an old version of a framework while another needs the new version. With containers, that's not a problem.
Docker is a DevOps enabler, not DevOps itself: using Docker, developers can support new development, enhancement, and production support tasks easily. Docker containers define the exact versions of software in use, which means we can decouple a developer's environment from the application that needs to be serviced or enhanced.
Without pervasive automation, Docker won't do much for you: you can't achieve DevOps with bad code. You must first ensure that the code being delivered is of the highest quality by automating all developer code delivery tasks, such as unit testing, integration testing, automated acceptance testing (AAT), static code analysis, code review sign-offs and pull request workflow, and security analysis.
Leapfrogging to Docker without virtualization know-how won't work: leapfrogging as an IT strategy rarely works. More often than not, new technologies bring about abstractions over existing technologies. It is true that such abstractions increase productivity, but they are not an excuse to skip the part where we must understand how a piece of technology works.
Docker is a first-class citizen on all computing platforms: this is the right time to jump on the Docker bandwagon. For the first time ever, Docker is supported on all major computing platforms in the world. There are two kinds of servers: Linux servers and Windows servers. Native Docker support for Linux has existed from day one, and since then Linux support has only been further optimized.
Agile is a must to achieve DevOps, and DevOps is a must to achieve Agile: the point of Agile is adding and demonstrating value iteratively to all stakeholders; without DevOps you likely won't be able to demonstrate that value in a timely manner. So why is Agile also a must to achieve DevOps? It takes a lot of discipline to create a stream of continuous improvement, and an Agile framework like Scrum defines the fundamental qualities a team must possess to begin delivering iteratively.
Docker saves your organization capital and resources by containerizing your applications. Containers on a single host are isolated from each other yet share the same OS resources, which frees up RAM, CPU and storage. Docker makes it easy to package an application along with all its required dependencies into an image. For most applications there are readily available base images, and you can create a customized base image as well; you build your own custom image by writing a simple Dockerfile. You can ship this image to a central registry, from where you can pull it to deploy into various environments like QA, STAGE and PROD. All of these activities can be automated by CI tools like Jenkins.
In a CI/CD pipeline you can expect Docker to come into the picture once the build is ready. Initially the CI server (Jenkins) checks out the code from SCM into a temporary workspace where the application is built. Once you have the build artifact ready, you can package it as an image along with its dependencies; Jenkins does this by executing simple docker build commands.
Docker removes what we all know as the "matrix from hell" problem by making environments independent with its container technology. The open source Docker project changed the game by simplifying container workflows, and this has resulted in a lot of excitement around using containers in all stages of the software delivery lifecycle, from development to production.
It is not just about containers; it involves building Docker images, managing your images and dependencies in a Docker registry, deploying to an orchestration platform, and so on, and it all comes under the CI/CD process.
DevOps is a culture, methodology or set of practices for delivering development work very fast. Docker is one of the tools in that DevOps culture, used to deploy applications as containers (which use fewer resources to deploy our applications).
Docker just packages the developer's environment so it can run on other systems, so developers need not worry about code that works on their machine but fails in production due to differences in environment and operating system.
It just makes the code portable to other environments.

Deploy apps from release server

I don't like it when it comes time to release my projects to the production server. Maybe I just don't have enough experience; nobody taught me how to do this the right way.
For now I have several repos with Scala (on top of Spray). I have everything I need to build and run these projects on my local machine (of course, since I develop them). So I installed Jenkins on my production server in order to sync from git, build and run. It works for now, but I don't like it, because I need to install Jenkins on every machine where I want to run my projects. What if I want to show my project to a friend in a cafe?
So I've come up with an idea: what if I run the tests before building the app, make a portable build (e.g. with sbt-native-packager) and save it on a remote "release server"? That server just keeps these ready-to-be-launched apps.
Then I go to the production server and run a bash script that downloads the executables from the release server and runs my project on the machine.
In the future I want to:
download and run projects inside Docker containers;
keep ready-to-be-served static files for the frontend, and run a Docker container with nginx and a linked volume holding those static files.
I've heard about Nexus (http://www.sonatype.org/nexus/), which artists use to save their songs, images, and so on. I believe there should be open source projects built around an idea like mine.
Any help is appreciated!
A common anti-pattern, in my opinion, is to build the software every time you perform a deployment. You are best advised to separate the process of building from the act of deployment by introducing a binary repository manager (you've mentioned one such example, Nexus).
Best Practice - Using a Repository Manager
Binary repository manager
How can I automatically deploy a war from Nexus to Tomcat?
Only successfully tested builds get pushed to the repository, so you can treat each successful build as a mini-release. A by-product of this is that your production server does not have to have all the build software pre-installed (like Jenkins, Ant, Maven, etc.).
It should be noted that modern repository managers like Nexus and Artifactory now support Docker registries too, so you can use them for deploying Docker images as well.
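A sketch of what that could look like for the Scala/sbt setup from the question (the registry host, port, image name and version are placeholders): the CI job publishes a versioned image to the repository manager's Docker registry, and the production or demo machine only needs Docker installed to pull and run it.

# CI job sketch: publish a versioned image to the Docker registry hosted by Nexus/Artifactory
publish:
  script:
    - sbt docker:publishLocal                                    # sbt-native-packager builds the local image
    - docker tag myapp:1.0.0 nexus.example.com:8083/myapp:1.0.0
    - docker push nexus.example.com:8083/myapp:1.0.0
# on the production (or demo) machine, only Docker is required:
#   docker login nexus.example.com:8083
#   docker run -d -p 8080:8080 nexus.example.com:8083/myapp:1.0.0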
Update
A related question about Chef, a technology where there is no intermediate binary file (like a jar). In this case the software is still "released" by creating a tar distribution stored in the repository.
chef cookbook delivery - chef server vs. artifactory + berkshelf

Install Jenkins as a Service or Run it behind Apache

I understand that there are two ways of installing Jenkins:
1) Running Jenkins behind Apache (using the WAR file)
2) Installing Jenkins as a Windows service (using the Windows installer)
I am in the process of setting up CI, auto-deployment and scheduled automation runs for my project. So which kind of installation would be better in this case? I just do not want to choose the wrong one and end up recreating the jobs in the other kind.
I have a few questions:
1) If I choose to install it as a Windows service (using the Windows installer), do I still have to install a web server like IIS or Apache to access my Jenkins URL, or does Jenkins have something built in so that I do not have to add a web server to access it?
2) If Jenkins as a Windows service (using the Windows installer) needs IIS, I have steps in my project in which I have to restart IIS manually to generate NCover reports. In such cases, would Jenkins also be down?
3) The Jenkins website states the following: "In situations where you have existing web sites on your server, you may find it useful to run Jenkins (or the servlet container that Jenkins runs in) behind Apache, so that you can bind Jenkins to the part of a bigger website that you may have."
I would be hosting our application locally using IIS; in that case, should I choose to use the WAR file instead of the Windows installer?
I do not run Jenkins on Windows but I believe it's the same as on other platforms...
No, if you install Jenkins with the installer you will not need IIS or Apache.
See the answer to 1: if you don't use IIS to run Jenkins, restarting IIS won't bring down Jenkins.
It sounds like you want to run your existing site under IIS and leave Jenkins running on its own. I think the Windows installer for Jenkins will do exactly this.
I have run Jenkins in both Windows and Unix environments.
Just wanted to add more to Ben's answer:
On Windows, if you install it as a Windows service you will not need anything else; the following wiki should be more than enough:
https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+as+a+Windows+service
To add more to the 3rd point:
Normally web sites are hosted behind an Apache httpd server. If you are using one, then you can configure both the IIS web server and Jenkins accordingly.
In my previous company, we were running Jenkins as a service (with the solution proposed by Vinay above).
It worked well, and you don't have to install an application server like Apache.
The only thing you have to take care of is the user that launches the Windows service.
If your Jenkins server needs to access some resources on the network, you may have to use an LDAP user to launch your service instead of the "Local System account".
