I have a scenario where I am given AWS instances (production, staging, testing) and Docker for development, and I need to work out an infrastructure workflow. We also need to take care of continuous integration and deployment using Jenkins.
Can you please help me figure out a solid workflow to create an environment with the above tech stack?
Man, with these tools you can implement a lot of different CI and CD strategies. I recommend adding Git and Jenkins to your deck of development tools.
Start simple: first build your application, then create new Docker images and deliver them to a dev server, and only then think about how to deliver to the other environments.
After that, store these images in a private registry such as Nexus, and think about a hierarchy of images (base images for the app and ready-to-run images).
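As a hedged sketch of that hierarchy (the image names, registry URL, and paths are made up for illustration), you might keep one shared base image and build app images on top of it:

```dockerfile
# Dockerfile.base - shared runtime layer, pushed to your private registry (e.g. Nexus)
FROM eclipse-temurin:17-jre
RUN useradd --create-home app
USER app
WORKDIR /home/app
```

```dockerfile
# Dockerfile (app) - ready-to-run image built on the base from the private registry
FROM registry.example.com/myorg/base-java:17
COPY target/app.jar app.jar
CMD ["java", "-jar", "app.jar"]
```

This way the app images only change when the application changes, while the base image is rebuilt on its own cadence.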
It all depends on your goals.
What does it mean to deploy code from dev to prod environments using Jenkins? Can anyone please help? I currently have the source code in my GitLab, and I need to deploy this code from the dev environment to the prod environment.
Thanks in advance.
The source code in GitLab is just the set of files needed to build a WAR/EAR/JAR to run the application.
It's the environment files, if present, that make the application behave slightly differently in each environment (DEV/PROD). The data you see on DEV will not be the same as what you see on PROD (where the application is live), because developers tend to test and modify code/data to ensure the application works as expected. This is fine on DEV but a big no-no on PROD, as it will impact the business.
Deploying code from dev to prod environments just means building the application with the right environment files, e.g. DEV points to the xyz DB while PROD points to the abc DB.
All of this can be achieved with Jenkins, and if your project uses Maven/Gradle you can do it with a single command (a bit of googling will help you here).
If your project doesn't use Maven/Gradle, then you will have to replace the environment file on each build based on a parameter passed from Jenkins; see the rough sketch below.
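As a rough sketch (the profile name, the ENV parameter, and the file paths are assumptions for illustration), a Jenkins "Execute shell" build step could look like this:

```sh
# ENV is assumed to be a Jenkins build parameter (e.g. dev or prod).
# With Maven profiles, one command builds against the right environment:
mvn clean package -P "${ENV}"

# Without Maven/Gradle profiles, swap the environment file in before packaging:
cp "config/application-${ENV}.properties" src/main/resources/application.properties
```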
This whole process is part of DevOps culture. In simple terms it looks like this:
A developer pushes changes to source control (e.g. GitLab).
The build server (e.g. Jenkins) automatically pulls the latest changes and builds the application (i.e. creates setup files or just the binaries). Usually you also run tests (unit, integration, automation tests, etc.). If something fails, the developers get notified. This whole process is called continuous integration.
If everything went right, you can deploy your application to production. This part of the process is called continuous deployment.
It's a common strategy for web apps. For larger projects a QA team tests the software, and it gets deployed once the QA team approves it.
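A minimal declarative Jenkinsfile that mirrors those steps might look like this; the build tool, branch name, and deploy script are assumptions for illustration, not a drop-in config:

```groovy
// Sketch of a CI/CD pipeline: checkout -> build & test -> deploy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }            // pull the latest changes from GitLab
        }
        stage('Build & Test') {
            steps { sh 'mvn clean verify' }   // assumes a Maven project; use your own build tool
        }
        stage('Deploy') {
            when { branch 'main' }            // only deploy builds of the main branch
            steps { sh './deploy.sh prod' }   // hypothetical deploy script
        }
    }
    post {
        failure { echo 'Notify the developers here (mail, Slack, etc.).' }
    }
}
```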
I am wondering how exactly Docker fits into CI/CD.
I understand that with the help of containers you can focus on code rather than on dependencies/environment. But once you check in your code, you expect tools like TeamCity, Jenkins, or Bamboo to take care of the integration build, integration/unit tests, and deployment to target servers (after approvals), where you expect the same Docker container image to run the built code.
However, in all of the above, Docker is nowhere in the CI/CD cycle itself; it only comes into play when execution happens on the server. So why do I see articles listing it as one of the key pieces of DevOps?
I could be wrong, as I am not a DevOps guru. Please enlighten me!
Docker is just another tool available to DevOps engineers, DevOps practitioners, or whatever you want to call them. What Docker does is encapsulate code and its dependencies in a single unit (a container) that can be run anywhere the Docker engine is installed. Why is this useful? For multiple reasons, but in terms of CI/CD it can help engineers separate configuration from code, cut down the time spent on dependency management, and scale (with the help of some other tools, of course). The list goes on.
For example: if I had a single code repository, in my build script I could pull in environment-specific dependencies to create a container that behaves functionally the same in each environment, since I'm building from the same source repository, but that contains a set of environment-specific certificates, configuration files, etc.
Another example: if you have multiple build servers, you can create a set of utility Docker containers to use in your CI/CD pipeline, pulling one down to perform a certain operation during a stage. The only dependency on your build servers then becomes the Docker engine, and you can change, add, or modify these utility containers independently of any operation performed by another utility container.
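For that second example, assuming the Docker Pipeline plugin is installed and with a placeholder image name, a Jenkins pipeline stage can run entirely inside a throwaway utility container:

```groovy
// The build node only needs the Docker engine; the toolchain lives in the image.
stage('Build in a utility container') {
    agent { docker { image 'maven:3-eclipse-temurin-17' } }
    steps {
        sh 'mvn -B clean verify'
    }
}
```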
Having said all of that, there really is a great deal you can do to utilize Docker in your CI/CD pipelines. I think an understanding of what Docker is and what Docker can do is more important than a "how to use Docker in your CI/CD" guide. While there are some common patterns out there, it all comes down to the problem(s) you are trying to solve, and certain patterns may not apply to a certain use case.
Docker facilitates the notion of "configuration as code". I can write a Dockerfile that specifies a particular base image that has all the frameworks I need, along with the custom configuration files that are checked into my repository. I can then build that image using the Dockerfile, push it to my docker registry, then tell my target host to pull the latest image, and then run the image. I can do all of this automatically, using target hosts that have nothing but Linux installed on them.
This is a simple scenario that illustrates how Docker can contribute to CI/CD.
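A hedged sketch of that flow as shell commands (the registry, image, and host names are placeholders):

```sh
# Build the image from the Dockerfile, push it, then pull and run it on the target host.
docker build -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest
ssh deploy@target-host \
  'docker pull registry.example.com/myapp:latest && docker run -d --name myapp registry.example.com/myapp:latest'
```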
Docker is also useful for building your applications. If you have multiple applications with different dependencies, you can avoid piling conflicting dependencies onto your CI machine by building everything inside Docker containers that already have the necessary dependencies. If you need to scale in the future, all you need is another machine running your CI tool (like a Jenkins agent) and an installation of Docker.
When using microservices this is very important: one application can depend on an old version of a framework while another needs the new one. With containers that's not a problem.
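A small sketch of what that looks like on a single CI machine (the image tags and paths are illustrative):

```sh
# Two services with conflicting toolchain versions, built on the same box,
# each inside its own container - only Docker is installed on the host.
docker run --rm -v "$PWD/service-a:/src" -w /src node:14 sh -c 'npm ci && npm run build'
docker run --rm -v "$PWD/service-b:/src" -w /src node:20 sh -c 'npm ci && npm run build'
```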
Docker is a DevOps Enabler, Not DevOps Itself: Using Docker, developers can easily support new development, enhancement, and production support tasks. Docker containers define the exact versions of software in use, which means we can decouple a developer's environment from the application that needs to be serviced or enhanced.
Without Pervasive Automation, Docker Won't Do Much for You: You can't achieve DevOps with bad code. You must first ensure that the code being delivered is of the highest quality by automating all developer code-delivery tasks, such as unit testing, integration testing, automated acceptance testing (AAT), static code analysis, code-review sign-offs and pull-request workflow, and security analysis.
Leapfrogging to Docker without Virtualization Know-How Won't Work: Leapfrogging as an IT strategy rarely works. More often than not, new technologies bring about abstractions over existing technologies. It is true that such abstractions increase productivity, but they are not an excuse to skip the part where we must understand how a piece of technology works.
Docker is a First-Class Citizen on All Computing Platforms: This is the right time to jump onto the Docker bandwagon. For the first time ever, Docker is supported on all major computing platforms in the world. There are two kinds of servers: Linux servers and Windows servers. Native Docker support for Linux existed from day one; since then, Linux support has been optimized to the point of having access to pint-sized base images.
Agile is a Must to Achieve DevOps: DevOps is a must to achieve Agile. The point of Agile is adding and demonstrating value iteratively to all stakeholders; without DevOps, you likely won't be able to demonstrate that value to stakeholders in a timely manner. So why is Agile also a must to achieve DevOps? It takes a lot of discipline to create a stream of continuous improvement, and an Agile framework like Scrum defines the fundamental qualities that a team must possess to begin delivering iteratively.
Docker saves your organization capital and resources by containerizing your applications. Containers on a single host are isolated from each other yet use the same OS resources, which frees up RAM, CPU, storage, etc. Docker makes it easy to package your application along with all of its required dependencies in an image. For most applications there are readily available base images, and you can create a customized base image as well. You build your own custom image by writing a simple Dockerfile. You can then ship this image to a central registry, from where you can pull it to deploy into various environments such as QA, STAGE, and PROD. All of these activities can be automated by CI tools like Jenkins.
In a CI/CD pipeline you can expect Docker to come into the picture once the build is ready. Initially the CI server (Jenkins) checks out the code from SCM into a temporary workspace, where the application is built. Once the build artifact is ready, you can package it as an image together with its dependencies; Jenkins does this by executing simple docker build commands, as in the sketch below.
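As an illustrative sketch (the registry and image names are placeholders; BUILD_NUMBER is a standard Jenkins environment variable), the job might run:

```sh
# Package the built artifact into an image and push it to the central registry,
# then promote the very same image to other environments by re-tagging it.
docker build -t registry.example.com/myapp:"${BUILD_NUMBER}" .
docker push registry.example.com/myapp:"${BUILD_NUMBER}"
docker tag  registry.example.com/myapp:"${BUILD_NUMBER}" registry.example.com/myapp:stage
docker push registry.example.com/myapp:stage
```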
Docker removes the well-known "matrix from hell" problem by making environments independent through its container technology. The open-source Docker project changed the game by simplifying container workflows, and this has resulted in a lot of excitement around using containers in all stages of the software delivery lifecycle, from development to production.
It is not just about containers: it also involves building Docker images, managing your images and dependencies in a Docker registry, deploying to an orchestration platform, etc., and it all comes under the CI/CD process.
DevOps is a culture, methodology, or set of practices for delivering our development work very fast. Docker is one of the tools in the DevOps toolbox, used to deploy applications as containers (which use fewer resources to deploy our application).
Docker simply packages the developer's environment to run on other systems, so developers need not worry about code that works on their machine but fails in production because of differences in environment and operating system.
It just makes the code portable to other environments.
Imagine Jenkins is generating three different distributions: one in JavaScript running on Node.js, another in Python running on Apache with a Python module, and another in Java using Spring Boot. How do you write a Chef cookbook to install all of them on on-premise infrastructure running a bare-minimum Ubuntu Linux distribution? The scope of the problem involves capturing the trigger from Jenkins and then kicking off Chef cookbooks to deploy these three apps. Based on configuration, either all three apps should be deployed on the same infrastructure or on separate deployment infrastructure.
So a few things:
How you write the cookbooks is up to you entirely. I've got some examples up on http://github.com/poise/application_examples for Python, Ruby, and Node, but that's just my take on the subject. Whenever you ask "How do I do X with Chef?", the answer is always "How would you do X without Chef?", and then automate that.
How to trigger deploys from Jenkins is a bit more fuzzy than that already-very-fuzzy answer. The simplest answer is to have Jenkins SSH in to each machine and run chef-client. However, this might have security implications you don't like. You can look at more dedicated command-push systems like MCollective, SaltStack, or maybe Chef Push Jobs (though I would skip that last one). You can also just set up your nodes to converge automatically every 5 minutes, so all Jenkins does is update some data in the Chef Server to say which version to deploy and then wait 10 minutes.
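For the "simplest answer", a Jenkins post-build shell step could just converge each node over SSH (the host names and user are placeholders, and mind the security implications mentioned above):

```sh
# Run a one-off Chef converge on each target machine after a successful build.
for host in app1.example.com app2.example.com; do
  ssh deploy@"$host" 'sudo chef-client --once'
done
```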
I have a similar case to yours, using TeamCity instead of Jenkins. (But you can replicate similar behaviour)
In my case I use Policyfiles to manage my infrastructure, passing the build information as attributes so I can download artefacts in a recipe.
The main trick is having a build, triggered by the services you mention (Python, Java... whatever), that updates the attributes (build id, artefact name...) in the Policyfile and commits the result to Git.
As a recap:
The build for your services completes.
A build for your Policyfiles is triggered, updating the artefact download information.
Your usual build workflow runs.
To download the artefacts, you can use the remote_file Chef resource with a checksum, to avoid downloading the same file on every Chef run; see the sketch below.
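A hedged sketch of that recipe snippet (the path, URL, and attribute names are illustrative; the checksum is what stops Chef re-downloading an unchanged artefact):

```ruby
remote_file "/opt/myapp/#{node['myapp']['artifact_name']}" do
  source   node['myapp']['artifact_url']       # set from the build via the Policyfile attributes
  checksum node['myapp']['artifact_checksum']  # SHA-256; skips the download if the file matches
  mode     '0644'
  action   :create
end
```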
I am using Vagrant to provide 'synchronised' and standardised development/test/UAT/staging and production environments.
I am now looking at how to standardise my CI build process. I like the look of Jenkins but I am confused as to what the best way to deploy it is. Should I have it deployed in a stand-alone CI box or install it on all the various environments?
I guess I am a little confused here. Any help much appreciated, Thanks
The standard approach is a stand-alone CI server shared by the development team. This common server (at a well-known URL) provides the development dashboard for the team and the only authorized way to publish into the release repository (developers are not allowed to publish directly).
You could go for extra credit and also set up an instance of Sonar, which in my opinion is much better suited as a development dashboard, providing a richer set of metrics and also serving as a historical record for development.
Finally, Jenkins is so simple to set up that there is nothing stopping developers from running their own instances. I find that with Sonar it matters less and less where a build is actually run, once the release credentials are properly controlled. In fact this attitude is important, as it prevents the build server from turning into a delicate snowflake :-)
Update
There's a Vagrant plugin for Jenkins which might prove useful for running your current processes.
You're likely better off running Jenkins as a shared stand-alone server.
However, I highly recommend that you set up your builds in such a way that they can also be run locally on each developer's machine. This is particularly key with unit tests.
In our setup, we have a shared Jenkins server that executes all of our builds using NAnt. Each developer also has NAnt installed and can freely run the build and unit-test portions of the build. Ideally the integration tests could also be run locally, but we're not quite there yet; having them execute on the CI server still gives us proper feedback, even if it takes a little longer to get.
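As a small illustration (the build file and target names are made up), the same NAnt invocation can be run on a developer's machine and from a Jenkins build step:

```sh
# Same entry point locally and on CI; Jenkins simply calls it in a shell/batch step.
nant -buildfile:myapp.build compile unit-test
```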