Can we use Chef as a continuous deployment tool? - jenkins

I've integrated GitHub, Maven, Nexus and Chef into Jenkins. Now my question is: can we use Chef for continuous deployment? If so, how can I deploy my artifact to a staging server hosted in AWS?

The "continuous" part of that is entirely up to you, that's just a question of how often you change what versions of things are deployed where. As for the "deployment", that's usually rephrased as "is Chef a good tool for application deployment?". I personally answer yes to that (spoiler warning: I also wrote the application_* suite of community cookbooks which exist specifically to make this easier) but it's probably a minority opinion at this point. Containers rule the application world at this point, and most of those ecosystems (Kubernetes, Mesos, Nomad, maybe Swarm if I'm being generous) have their own deployment management tools/systems/whatever. But Chef can do anything a human can so that includes managing those systems too. If you don't feel ready to take the K8s plunge quite yet, then sure, you could do worse than Chef.

Related

DevOps vs Docker

I am wondering how exactly Docker fits into CI/CD.
I understand that with the help of containers you can focus on code rather than on dependencies and environment. But once you check in your code, you still expect tools like TeamCity, Jenkins or Bamboo to take care of the integration build, integration/unit tests and deployment to target servers (after approvals), where you expect the same Docker container image to run the built code.
However, in all of the above, Docker is nowhere in the CI/CD cycle itself; it only comes into play when execution happens on the server. So why do I see articles listing it as one of the things for DevOps?
I could be wrong, as I am not a DevOps guru. Please enlighten me!
Docker is just another tool available to DevOps engineers, DevOps practitioners, or whatever you want to call them. What Docker does is encapsulate code and its dependencies in a single unit (a container) that can run anywhere the Docker engine is installed. Why is this useful? For multiple reasons; in terms of CI/CD it helps engineers separate configuration from code, it cuts down the time spent on dependency management, and it can be used to scale (with the help of some other tools, of course). The list goes on.
For example: if I had a single code repository, my build script could pull in environment-specific dependencies to create a container that behaves functionally the same in each environment, since I'm building from the same source repository, while still containing a set of environment-specific certificates, configuration files and so on.
Another example: if you have multiple build servers, you can create a set of utility Docker containers to be used in your CI/CD pipeline, pulling one down to perform a particular operation during a stage. The only dependency on your build server then becomes the Docker engine, and you can change, add or modify these utility containers independently of any operation performed by another one.
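A hedged illustration of that utility-container idea: each stage borrows a tool from a public image instead of installing it on the build agent. Which images and stages you use is entirely up to you; the second line assumes a package.json with a "test" script.

    # The build agent only needs the Docker engine; each tool lives in its own image.
    docker run --rm -i hadolint/hadolint < Dockerfile           # lint the project's Dockerfile
    docker run --rm -v "$PWD":/work -w /work node:18 npm test   # run JS tests (assumes a "test" script)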
Having said all of that, there really is a great deal you can do to utilize Docker in your CI/CD pipelines. I think an understanding of what Docker is and what it can do is more important than a "how to use Docker in your CI/CD" guide. While there are some common patterns out there, it all comes down to the problem(s) you are trying to solve, and certain patterns may not apply to every use case.
Docker facilitates the notion of "configuration as code". I can write a Dockerfile that specifies a particular base image that has all the frameworks I need, along with the custom configuration files that are checked into my repository. I can then build that image using the Dockerfile, push it to my docker registry, then tell my target host to pull the latest image, and then run the image. I can do all of this automatically, using target hosts that have nothing but Linux installed on them.
This is a simple scenario that illustrates how Docker can contribute to CI/CD.
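Sketched as shell commands, that scenario looks roughly like the following; the registry and image names are placeholders, not anything from the original answer.

    # On the CI server: build an image from the checked-in Dockerfile and push it.
    docker build -t registry.example.com/myapp:"$BUILD_NUMBER" .
    docker push registry.example.com/myapp:"$BUILD_NUMBER"

    # On the target host (nothing installed but Linux and Docker): pull and run it.
    docker pull registry.example.com/myapp:"$BUILD_NUMBER"
    docker rm -f myapp 2>/dev/null || true   # stop/remove the previous container, if any
    docker run -d --name myapp -p 80:8080 registry.example.com/myapp:"$BUILD_NUMBER"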
Docker is also useful for building your applications. If you have multiple applications with different dependencies, you can avoid piling up dependencies and conflicts on your CI machine by building everything in Docker containers that carry the necessary dependencies. If you need to scale in the future, all you need is another machine running your CI tool (like a Jenkins slave) and an installation of Docker.
When using microservices this is very important. One application can depend on an old version of a framework while another needs the new version. With containers, that's not a problem.
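A hedged example of what that looks like in practice: two services with conflicting toolchains built on the same CI machine, with neither toolchain installed on it. The paths are placeholders and the Maven image tags are illustrative; pick whichever official tags match your actual versions.

    # Service A still builds on JDK 8 while service B is on JDK 17; neither JDK
    # is installed on the Jenkins agent itself.
    docker run --rm -v "$PWD/service-a":/app -w /app maven:3-jdk-8 mvn -B package
    docker run --rm -v "$PWD/service-b":/app -w /app maven:3-eclipse-temurin-17 mvn -B package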
Docker is a DevOps Enabler, Not DevOps Itself: Using Docker, developers can easily support new development, enhancement, and production support tasks. Docker containers define the exact versions of software in use, which means we can decouple a developer's environment from the application that needs to be serviced or enhanced.
Without Pervasive Automation, Docker Won't Do Much for You: You can't achieve DevOps with bad code. You must first ensure that the code being delivered is of the highest quality by automating all developer code delivery tasks, such as unit testing, integration testing, automated acceptance testing (AAT), static code analysis, code review sign-offs and pull request workflow, and security analysis.
Leapfrogging to Docker Without Virtualization Know-How Won't Work: Leapfrogging as an IT strategy rarely works. More often than not, new technologies bring abstractions over existing technologies. It is true that such abstractions increase productivity, but they are not an excuse to skip understanding how a piece of technology works.
Docker is a First-Class Citizen on All Computing Platforms: This is the right time to jump on the Docker bandwagon. For the first time ever, Docker is supported on all major computing platforms. There are two kinds of servers, Linux servers and Windows servers: native Docker support for Linux has existed from day one and has only been optimized since, and Windows now offers native container support as well.
Agile is a Must to Achieve DevOps: DevOps is also a must to achieve Agile. The point of Agile is adding and demonstrating value iteratively to all stakeholders; without DevOps you likely won't be able to demonstrate that value in a timely manner. So why is Agile also a must to achieve DevOps? It takes a lot of discipline to create a stream of continuous improvement, and an Agile framework like Scrum defines the fundamental qualities a team must possess to begin delivering iteratively.
Docker saves your organization's capital and resources by containerizing applications. Containers on a single host are isolated from each other yet share the same OS resources, which frees up RAM, CPU and storage. Docker makes it easy to package an application along with all its required dependencies into an image. For most applications there are readily available base images, and you can create customized base images as well. We build our own custom image by writing a simple Dockerfile, ship that image to a central registry, and pull it from there to deploy into various environments like QA, STAGE and PROD. All of these activities can be automated by CI tools like Jenkins.
In a CI/CD pipeline you can expect Docker to come into the picture once the build is ready. First the CI server (Jenkins) checks out the code from SCM into a temporary workspace where the application is built. Once you have the build artifact ready, you can package it as an image together with its dependencies; Jenkins does this by executing simple docker build commands.
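As a rough sketch of that stage (repository names and tags are invented for illustration), the Jenkins job bakes the Maven artifact into an image and promotes the very same image through environments by re-tagging it, so QA, STAGE and PROD all run exactly the bits that were tested.

    # Executed by Jenkins after checkout; names and tags are placeholders.
    mvn -B clean package
    docker build -t registry.example.com/myapp:"$BUILD_NUMBER" .
    docker push registry.example.com/myapp:"$BUILD_NUMBER"

    # Promotion is just another tag on the identical image
    docker tag registry.example.com/myapp:"$BUILD_NUMBER" registry.example.com/myapp:stage
    docker push registry.example.com/myapp:stage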
Docker removes what we all know as the "matrix from hell" problem by making environments independent with its container technology. The open source Docker project changed the game by simplifying container workflows, and this has resulted in a lot of excitement around using containers in all stages of the software delivery lifecycle, from development to production.
It is not just about containers: it involves building Docker images, managing your images and dependencies in a Docker registry, deploying to an orchestration platform, and so on, and it all falls under the CI/CD process.
DevOps is a culture or methodology for delivering development work very fast. Docker is one of the tools in that DevOps culture, used to deploy applications as containers, which consumes fewer resources than traditional deployment.
Docker simply packages the developer environment so it can run on other systems, so developers need not worry about code that works on their machine but fails in production because of differences in environment and operating system.
It just makes the code portable to other environments.

Chef and Docker

I am a bit confused. As part of a course we are supposed to set up a CI and CD solution using Jenkins, Docker and Chef; how the flow should look is not specified.
We have set up Jenkins so that for every new git commit it creates a Jenkins slave that spins up the specific containers needed for a test, then tears them down and reports the result.
So I have been looking around today for information on using Chef and Docker for continuous delivery/deployment. The use case I see is the following: specify in Chef the machine deployment options, how many machines for each server, database and so on. When the Jenkins slave successfully builds and tests the application, it is time to deploy: remove any old containers, build new ones, and handle configuration and other necessary management in Chef.
I have been looking around for information on similar use cases and there does not seem to be much about it. I have been tinkering with chef-provisioning and chef-provisioning-docker, but the information on using, for example, the Docker driver is not very intuitive. Then I stumbled across this article (https://coderanger.net/provisioning/), which basically recommends that new projects not start with chef-provisioning.
Is there something I am missing? Is this kind of use case not that popular, or even just stupid? Are there any other plugins I have missed, or another setup with Chef that is more suitable?
Cheers in advance!
This kind of purely procedural stuff isn't really what Chef is for. You would probably want to use something integrated directly with Jenkins as a plugin. Or, if you're talking about cookbook integration tests, there are the kitchen-docker and kitchen-dokken drivers, which can handle the container management for you.
EDIT: The above was not really what the question was about, new answer.
The tool you're looking for is usually called a resource scheduler or cluster orchestrator. Chef can do this either via chef-provisioning or the docker cookbook. Between those two I would use the latter. But that said, Chef is really not the best tool for this job. There is a whole generation of dedicated schedulers including Mesos+Marathon, Kubernetes, Nomad, and docker-compose+swarm. Of all of those, Nomad is probably the simplest but Kube has a huge community following and is growing quickly. I would consider using Chef for this an intermediary step at best.
I would suggest using container orchestration platforms like Kubernetes, Docker Swarm or Mesos. Personally I would recommend Kubernetes, since it is the leading platform of the three.
Chef is a good configuration management tool, and using it for provisioning containers would work, but it is not the best solution. You would run into issues like deciding where containers should be provisioned, monitoring container status, and handling their failures. A platform like Kubernetes handles this for you.
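For a sense of what the platform takes off your hands, here is a minimal hedged kubectl example; the image reference and replica count are placeholders. The scheduler decides where the containers land, and failed containers are replaced automatically.

    # Kubernetes chooses the nodes and keeps the desired replica count running,
    # restarting or rescheduling containers that die. The image is a placeholder.
    kubectl create deployment myapp --image=registry.example.com/myapp:42
    kubectl scale deployment myapp --replicas=3

    # Watch placement and self-healing happen without any hand-written logic
    kubectl get pods -l app=myapp --watch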
This would be useful to get some insights:
https://www.youtube.com/watch?v=24X18e4GVbk
More to read:
http://www.devoperandi.com/how-we-do-builds-in-kubernetes/

Jenkins - infrastructure provisioning

I've just finished setting up my Jenkins server but there seems to be one glaring problem that I can't find an answer to.
Shouldn't the Jenkins environment be an exact clone of the production environment to be effective? None of the tutorials I've read mention this at all. So how do people deal with it, especially if they are using a single Jenkins instance for multiple projects that may be running on different infrastructure?
I can imagine Puppet or Chef might help here, but wouldn't you be constantly re-provisioning the Jenkins server for different projects? That sounds pretty dangerous to me.
The best solution to me seems to be not to run the tests on the Jenkins server itself but to spin up a clone of the production environment and run the tests on that. But I can't find a single solitary tutorial on how this could be done on EC2, for example.
Sorry if this is a bit of a rambling question. So how does everyone else ensure an exact replica of the production environment for Jenkins to run tests on? This includes database migrations as well, now that I think about it.
Thanks.
UPDATE:
A few of the answers given seem to concern compiled languages like Java or Objective-C. In those situations I can see the logic of having the tests be platform agnostic. I should have been more specific and mentioned that I use Jenkins for LAMP stack testing, so in that situation the infrastructure is as much a component that needs testing as anything else, seeing as having PHP 5.3 on one machine is enough to break a project that requires PHP 5.4, for example.
There is another approach you can consider.
With Vagrant, you can create a completely virtual environment that simulates your production setup. This is especially useful when you want to test many environments (production, a single-node environment, different OSes, different DBs) but don't have enough bare-metal machines.
You define the appropriate Vagrant environment; then, in the Jenkins test item, you set up the right machines and execute the tests in that Vagrant environment.
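A hedged sketch of such a Jenkins test step, assuming a Vagrantfile checked into the repository that describes the production-like box; the test command is a placeholder. The job boots the simulated environment, runs the suite inside it, and tears it down again.

    # Jenkins shell step
    set -e
    trap 'vagrant destroy -f' EXIT                    # always tear the VM down, even on failure
    vagrant up --provision                            # boot and provision the simulated environment
    vagrant ssh -c "cd /vagrant && ./run-tests.sh"    # run the suite inside the VM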
Jenkins also supports a Master/Slave system (see here for details).
The Jenkins slave itself needs very little configuration and could run on your production system replica; precisely because it needs so little, it should not influence your production clone significantly.
Your master would then just delegate jobs (like your acceptance tests) to the slave. In that way you could also cover different production environments (different OSes, different DBs, etc.) by setting up a Jenkins slave for every configuration you need.
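A hedged sketch of wiring up such a slave (now usually called an agent): after creating the node in the Jenkins UI, you start the agent process on the production-replica host. The controller URL, agent name and secret below are placeholders you copy from the node's configuration page.

    # Runs on the production-replica host; only Java and this jar are needed.
    curl -sO https://jenkins.example.com/jnlpJars/agent.jar
    java -jar agent.jar \
      -url https://jenkins.example.com/ \
      -name prod-replica-agent \
      -secret "$AGENT_SECRET" \
      -workDir /var/lib/jenkins-agent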
You are probably using Jenkins to check the quality of your code, compile it, run unit tests, package it, deploy it and maybe run some integration tests.
Given the nature of all those tasks, it is probably best that your Jenkins environment is NOT like your production environment. For example, you don't want your production environment to have your compiler installed (for many reasons, security to name one).
So Jenkins is a development environment, and the fact that it doesn't exactly match your production environment should not be a concern to you.
However, I understand that you may want to deploy your packages to a production-like or even production-clone environment to run specific tests or tasks in your software lifecycle. In my opinion that issue is beyond Jenkins and concerns only the "deployment" phase of your lifecycle: it's not a Jenkins issue but an infrastructure issue that you should think about with a more general approach, and then simply tell Jenkins where to deploy.

Jenkins for Production deployment

This question arises just out of curiosity. Nowadays everyone uses Jenkins for build and deployment automation, but many of these people still shy away from using Jenkins for production deployment.
Considering Jenkins is such a nice and easy tool, I don't really understand why we don't see more of it in production deployment. Is it because of security reasons? If so, what might those security reasons be?
Or is there another reason that makes some people keep two different tools for automation: one for dev and lower environments, and another for higher environments like production?
System Admins running Production environments don't want to be obsoleted/automated out of their job by "Dev tools"
I might be a bit late here, but I would like to share an automated process that allows you to handle the entire production deployment using a single YAML file.
For more information
https://github.com/frostyaxe/Blaze-Tracker

Should Jenkins be run inside development/deployment environment or on standalone box

I am using Vagrant to provide 'synchronised' and standardised development/test/UAT/staging and production environments.
I am now looking at how to standardise my CI build process. I like the look of Jenkins, but I am confused as to the best way to deploy it: should it run on a stand-alone CI box, or be installed in all the various environments?
I guess I am a little confused here. Any help much appreciated, thanks.
The standard approach is a stand-alone CI server shared by the development team. This common server (at a well-known URL) provides the development dashboard for the team and the only authorized way to publish into the release repository (developers are not allowed to publish directly).
You could go for extra credit and also set up an instance of Sonar, which in my opinion is much better suited as a development dashboard: it provides a richer set of metrics and also serves as a historical record for development.
Finally, Jenkins is so simple to set up that there is nothing stopping developers from running their own instances. I find that with Sonar it matters less and less where a build is actually run, once the release credentials are properly controlled. In fact this attitude is important, as it prevents the build server from turning into a delicate snowflake :-)
Update
There's a vagrant plugin for Jenkins which might prove useful in running your current processes.
You're likely better off running Jenkins as a shared stand-alone server.
However, I highly recommend setting up your builds in such a way that they can also be run locally on each developer's machine. This is particularly key with unit tests.
In our setup, we have a shared Jenkins server that executes all of our builds using NAnt. Each developer also has NAnt installed and can freely run the build and unit-test portions of the build. Ideally integration tests could also be run, but we're not quite there yet; having them execute on the CI server still gives us proper feedback, even if it takes a little longer to get it.
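One hedged way to keep that property is to give the repository a single build entry point that both developers and the CI server call, so Jenkins never runs anything a developer can't reproduce locally. The script name and NAnt target names below are invented placeholders for whatever your build files define.

    #!/usr/bin/env bash
    # build.sh - one entry point used both on developer machines and by Jenkins.
    set -euo pipefail

    case "${1:-all}" in
      build) nant compile ;;                           # compile only
      test)  nant run-unit-tests ;;                    # quick local feedback
      all)   nant compile run-unit-tests package ;;    # what the CI server runs
      *)     echo "usage: $0 [build|test|all]" >&2; exit 1 ;;
    esac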
