Chef InSpec test suites to verify Jenkins build node configurations

I currently have a build farm set up using Jenkins 2.46.3 LTS: a Jenkins master installed on a physical server, with four or five virtual machine build nodes running on VirtualBox.
I am thinking about managing the configuration of the virtual machines using Chef; however, much of the software I need installed on these build nodes does not have a corresponding Chef Supermarket cookbook, and I do not feel confident enough with Chef to create my own cookbooks for managing these nodes.
I do have some experience writing InSpec tests to test a few wrapper cookbooks I maintain around some Chef Supermarket cookbooks. My question is: even if I do not have cookbooks that install this software on the nodes in the first place, is there a way to run a suite of InSpec tests against the actual nodes (as opposed to using them to test a cookbook in a sandbox environment)?
My goal is to automate all the manual checks we would do to verify the build nodes are set up correctly. Ohai seems like it may help here, since it reports what is actually on the managed node for Chef to compare against anyway.
If there is a better approach to completing this goal I would be happy to consider it as a solution.
Thanks to anybody with any experience with this! Any help or advice is welcome :)

I think you might be confusing InSpec with kitchen-inspec. The two are related but independent (well, kitchen-inspec uses InSpec, but you know what I mean). The Test Kitchen verifier runs some kind of test code against the test instances (usually VMs), but InSpec itself runs test code against whatever real servers you point it at.
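As a sketch of what that looks like in practice (the control name, package names, directory, and node address below are all assumptions for illustration, not from the question), an InSpec control for a build node can be run directly against the live machine over SSH, no cookbook or sandbox required:

```ruby
# controls/build_node.rb -- a minimal InSpec control (names are placeholders)
control 'build-node-toolchain' do
  impact 1.0
  title 'Build node has the expected toolchain'

  # Tools the build jobs depend on
  describe package('git') do
    it { should be_installed }
  end

  # Jenkins agents need a working JVM
  describe command('java -version') do
    its('exit_status') { should eq 0 }
  end

  # A writable workspace for the agent (path is an assumption)
  describe directory('/var/jenkins') do
    it { should exist }
  end
end

# Run against a real node rather than a Test Kitchen instance, e.g.:
#   inspec exec my-profile -t ssh://jenkins@build-node-01 -i ~/.ssh/id_rsa
```

The `-t` target URI is what makes this work against actual machines: the same profile used in kitchen-inspec can be pointed at any reachable host.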

Related

Can we use Chef as a continuous deployment tool?

I've integrated GitHub, Maven, Nexus, and Chef into Jenkins. Now my question is: can we use Chef for continuous deployment? If so, how can I deploy my artifact to a staging server hosted in AWS?
The "continuous" part of that is entirely up to you; that's just a question of how often you change which versions of things are deployed where. As for the "deployment" part, that's usually rephrased as "is Chef a good tool for application deployment?". I personally answer yes (spoiler warning: I also wrote the application_* suite of community cookbooks, which exist specifically to make this easier), but that's probably a minority opinion at this point. Containers rule the application world now, and most of those ecosystems (Kubernetes, Mesos, Nomad, maybe Swarm if I'm being generous) have their own deployment management tools/systems/whatever. But Chef can do anything a human can, so that includes managing those systems too. If you don't feel ready to take the K8s plunge quite yet, then sure, you could do worse than Chef.
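As a hedged sketch of what the application cookbook family mentioned above looks like in use (the deploy path and repository URL are placeholders, not from the answer):

```ruby
# A hypothetical deploy recipe built on the application community cookbook
application '/srv/myapp' do
  # Check out the code to deploy; the URL is an invented example
  git 'https://github.com/example/myapp.git'

  # Language- and framework-specific steps come from the application_*
  # family of cookbooks (e.g. application_java, application_python),
  # which add their own sub-resources inside this block.
end
```

Which version gets deployed is then just a matter of which revision or attribute the recipe reads, which is where the "continuous" cadence comes from.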

Chef and Docker

I am a bit confused. As part of a course we are supposed to set up a CI and CD solution using Jenkins, Docker, and Chef; how the flow should work is not specified.
We have been setting up Jenkins so that for every new Git commit it creates a Jenkins slave that spins up the specific containers needed for a test, then tears them down and reports the result.
So I have been looking around today for information on using Chef and Docker for continuous delivery/deployment. The use case I see is the following: specify in Chef the machine deployment options (how many machines for each server, database, and so on). When the Jenkins slave successfully builds and tests the application, it is time to deploy: remove any old containers, build new ones, and handle configuration and other necessary management in Chef.
I have been looking around for information on similar use cases and there does not seem to be much of it. I have been tinkering with chef-provisioning together with chef-provisioning-docker, but the information on using, for example, the Docker driver is not very intuitive. Then I stumbled across this article (https://coderanger.net/provisioning/), which basically recommends that new projects not start with chef-provisioning.
Is there something I am missing? Is this kind of use case just not that popular, or even just a bad idea? Are there any other plugins I have missed, or another setup with Chef that is more suitable?
Cheers in advance!
This kind of purely procedural stuff isn't really what Chef is for. You would probably want something integrated directly with Jenkins as a plugin. Or, if you're talking about cookbook integration tests, there are the kitchen-docker and kitchen-dokken drivers, which can handle the container management for you.
EDIT: The above was not really what the question was about, new answer.
The tool you're looking for is usually called a resource scheduler or cluster orchestrator. Chef can do this either via chef-provisioning or the docker cookbook. Between those two I would use the latter. But that said, Chef is really not the best tool for this job. There is a whole generation of dedicated schedulers including Mesos+Marathon, Kubernetes, Nomad, and docker-compose+swarm. Of all of those, Nomad is probably the simplest but Kube has a huge community following and is growing quickly. I would consider using Chef for this an intermediary step at best.
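For completeness, a minimal sketch of the docker cookbook approach mentioned above (the image name, attribute, and port mapping are assumptions invented for illustration):

```ruby
# Hypothetical recipe using the docker community cookbook's resources
docker_service 'default' do
  action [:create, :start]
end

# The tag attribute is an invented example of something Jenkins could
# set on the Chef Server after a successful build.
docker_image 'myorg/myapp' do
  tag node['myapp']['version']
end

docker_container 'myapp' do
  repo 'myorg/myapp'
  tag node['myapp']['version']
  port '8080:8080'
  action :run
end
```

Each converge pulls the tagged image and (re)runs the container, which is workable but, as noted, leaves scheduling and failure handling to you, which is exactly what the dedicated orchestrators do better.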
I would suggest using a container orchestration platform like Kubernetes, Docker Swarm, or Mesos. Personally I would recommend Kubernetes, since it is the leading platform of the three.
Chef is a good configuration management tool, and using it to provision containers would work, but it is not the best solution. You would run into issues like deciding where containers should be provisioned, monitoring container status, and handling their failures. A platform like Kubernetes handles this for you.
This would be useful to get some insights:
https://www.youtube.com/watch?v=24X18e4GVbk
More to read:
http://www.devoperandi.com/how-we-do-builds-in-kubernetes/

How to write a Chef cookbook for multiple applications

Imagine Jenkins is generating three different distributions: one in JavaScript running on NodeJS, another in Python running on Apache with a Python module, and another in Java using Spring Boot. How do you write a Chef cookbook to install all of them on on-premise infrastructure running a bare minimal Ubuntu Linux distribution? The scope of the problem involves capturing a trigger from Jenkins and then kicking off Chef cookbooks to deploy these three apps. Based on configuration, either all three apps should be deployed on the same infrastructure, or each on its own.
So a few things:
How you write the cookbooks is entirely up to you. I've got some examples up at http://github.com/poise/application_examples for Python, Ruby, and Node, but that's just my take on the subject. Whenever you ask "How do I do X with Chef?", the answer is always: work out how you would do X without Chef, then automate that.
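In that spirit, here is a minimal sketch of "automate what you'd do by hand" for, say, the NodeJS app (the package, path, and repository URL are placeholders, not taken from the examples repo):

```ruby
# By hand: apt-get install nodejs; git clone ...; npm install
# Automated as a recipe:
package 'nodejs'

git '/opt/myapp' do
  repository 'https://github.com/example/myapp.git'  # placeholder URL
  revision 'master'
end

execute 'npm install' do
  cwd '/opt/myapp'
  # Guard so the converge stays idempotent; a real cookbook would use
  # a finer-grained check than this.
  not_if { ::File.exist?('/opt/myapp/node_modules') }
end
```

The Python/Apache and Spring Boot apps follow the same pattern with their own packages and service definitions, and an attribute or role decides which recipes land on which node.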
How to trigger deploys from Jenkins is a bit fuzzier than that already-very-fuzzy answer. The simplest option is to have Jenkins SSH in to each machine and run chef-client. However, this may have security implications you don't like. You can look at more dedicated command-push systems like MCollective, SaltStack, or maybe Chef Push Jobs (though I would skip that last one). You can also just set up your nodes to converge automatically every 5 minutes, so that all Jenkins does is update some data in the Chef Server to say which version to deploy, and then wait 10 minutes.
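The periodic-converge option has the fewest moving parts; a minimal sketch (the schedule and binary path are assumptions) is just a cron resource in some base cookbook:

```ruby
# Converge every 5 minutes so Jenkins only ever has to update the
# Chef Server, never touch the nodes directly. Interval is an example.
cron 'chef-client' do
  minute '*/5'
  command '/usr/bin/chef-client'
end
```

The trade-off is latency: a deploy lands within one or two converge intervals rather than immediately on build completion.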
I have a similar case to yours, using TeamCity instead of Jenkins (but you can replicate similar behaviour).
In my case I use policy_files to manage my infrastructure, passing the build information as attributes so a recipe can download the artefacts.
The main trick is having a build, triggered by the services you mention (Python, Java, whatever), that updates the attributes (build id, artefact name, and so on) in the policy_file and commits the result to Git.
As a recap:
1. The build for your services completes.
2. A build for your policy_files is triggered, updating the artefact download information.
3. Your usual build workflow runs.
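A minimal Policyfile sketch of that setup (the policy name, cookbook, run list, and attribute names are all invented for illustration):

```ruby
# Policyfile.rb -- names and attributes are placeholders
name 'build_node'
default_source :supermarket

run_list 'myapp::deploy'
cookbook 'myapp', path: 'cookbooks/myapp'

# The CI build rewrites these two lines before committing and pushing
# the updated policy, so the recipe knows what to fetch.
default['myapp']['build_id'] = '2024.42'
default['myapp']['artefact'] = 'myapp-2024.42.jar'
```

Because the attributes live in the committed policy, every deploy is traceable to a Git revision.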
To download the artefacts, you can use the remote_file Chef resource, with checksum verification to avoid downloading the same file on every Chef run.
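A hedged sketch of that resource (the URL, destination path, and attribute names are placeholders):

```ruby
# Download the artefact once; the checksum guard means subsequent
# converges skip the download when the file already matches.
remote_file '/opt/myapp/myapp.jar' do
  source "https://nexus.example.com/repo/myapp-#{node['myapp']['build_id']}.jar"
  checksum node['myapp']['artefact_sha256']  # SHA-256 of the expected file
  owner 'myapp'
  mode '0644'
end
```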

jenkins - infrastructure provisioning

I've just finished setting up my Jenkins server but there seems to be one glaring problem that I can't find an answer to.
Shouldn't the Jenkins environment be an exact clone of the production environment to be effective? None of the tutorials I've read mention this at all. So how do people deal with it, especially if they are using a single Jenkins instance for multiple projects that may run on different infrastructure?
I can imagine Puppet or Chef might help here, but wouldn't you be constantly re-provisioning the Jenkins server for different projects? That sounds pretty dangerous to me.
The best solution to me seems to be to not run the tests on the Jenkins server itself, but to spin up a clone of the production environment and run the tests on that. But I can't find a single tutorial on how this could be done on EC2, for example.
Sorry if this is a bit of a rambling question. How does everyone else ensure an exact replica of the production environment for Jenkins to run tests on? This includes database migrations as well, now that I think about it.
Thanks.
UPDATE:
A few of the answers given seem to concern compiled languages like Java or Objective-C. In those situations I can see the logic of having the tests be platform agnostic. I should have been more specific and mentioned that I use Jenkins for LAMP stack testing, so the infrastructure is as much a component that needs testing as anything else, seeing as having PHP 5.3 on one machine is enough to break a project that requires PHP 5.4, for example.
There is another approach you can consider.
With Vagrant, you can create a completely virtual environment that simulates your production one. This is especially useful when you want to test many environments (production, a single-node environment, different OSes, different DBs) but don't have enough bare-metal machines.
You define the appropriate Vagrant environment; then, in the Jenkins test job, you set up the machines and execute the tests in that Vagrant environment.
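A Vagrantfile is itself Ruby, so a production-like LAMP node (relevant to the PHP version issue in the update above) can be sketched roughly like this; the box name, IP, and package list are assumptions:

```ruby
# Vagrantfile sketch for a production-like LAMP node
Vagrant.configure('2') do |config|
  config.vm.box = 'ubuntu/trusty64'          # match production's OS release
  config.vm.network 'private_network', ip: '192.168.50.10'

  # Pin the same stack versions as production, e.g. the PHP 5.4
  # requirement mentioned in the question's update.
  config.vm.provision 'shell', inline: <<-SHELL
    apt-get update
    apt-get install -y apache2 php5 mysql-server
  SHELL
end
```

Jenkins then runs `vagrant up`, executes the test suite against the VM, and destroys it afterwards.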
Jenkins also supports a Master/Slave system (see here for details).
The Jenkins slave itself does not need much configuration and could run on your production system replica; since it needs so little, it should not influence your production clone significantly.
Your master would then just delegate jobs (like your acceptance tests) to the slave. That way you can also cover different production environments (different OSes, different DBs, etc.) by setting up a Jenkins slave for every configuration you need.
You are probably using Jenkins to check the quality of your code, compile it, run unit tests, package it, deploy it and maybe run some integration tests.
Given the nature of all those tasks, it is probably best that your Jenkins environment is NOT like your production environment. For example, you don't want your production environment to have your compiler installed (for many reasons, security to name one).
So Jenkins is a development environment, and the fact that it doesn't exactly match your production environment should not be a concern to you.
However, I understand that perhaps you want to deploy your packages to a production-like or even production-clone environment to run specific tests or tasks of your software lifecycle. But in my opinion that issue is beyond Jenkins and concerns only the "deployment" phase of your lifecycle (i.e. it's not a "Jenkins" issue, but an infrastructure issue that you should think about with a more general approach, and then simply tell Jenkins where to deploy).

Should Jenkins be run inside development/deployment environment or on standalone box

I am using Vagrant to provide a 'synchronised' and standardised development/test/uat/staging and production environments.
I am now looking at how to standardise my CI build process. I like the look of Jenkins, but I am confused as to the best way to deploy it. Should it be deployed on a stand-alone CI box, or installed in all the various environments?
I guess I am a little confused here. Any help much appreciated, Thanks
The standard approach is a stand-alone CI server shared by the development team. This common server (at a well-known URL) provides the development dashboard for the team and is the only authorized way to publish into the release repository (developers are not allowed to publish directly).
You could go for extra credit and also set up an instance of Sonar, which in my opinion is much better suited as a development dashboard, providing a richer set of metrics and also serving as a historical record of development.
Finally, Jenkins is so simple to set up that there is nothing stopping developers from running their own instances. I find that with Sonar it matters less and less where a build is actually run, once the release credentials are properly controlled. In fact this attitude is important, as it prevents the build server from turning into a delicate snowflake :-)
Update
There's a Vagrant plugin for Jenkins which might prove useful for running your current processes.
You're likely better off running Jenkins as a shared stand-alone server.
However, I highly recommend that you set up your builds in such a way that they can be run on each developer's machine locally as well. This is particularly key with unit-tests.
In our setup, we have a shared Jenkins server that executes all of our builds using NAnt. Each developer also has NAnt installed and can run the build and unit-test portions freely. Ideally integration tests could also be run, but we're not quite there yet; having them execute on the CI server still gives us proper feedback, even if it takes a little longer to arrive.