Jenkins - infrastructure provisioning

I've just finished setting up my Jenkins server, but there seems to be one glaring problem that I can't find an answer to.
Shouldn't the Jenkins environment be an exact clone of the production environment to be effective? None of the tutorials I've read mention this at all. So how do people deal with it, especially if they are using a single Jenkins instance for multiple projects that may be running on different infrastructure?
I can imagine Puppet or Chef might help here, but wouldn't you be constantly re-provisioning the Jenkins server for different projects? That sounds pretty dangerous to me.
The best solution to me seems to be not to run the tests on the Jenkins server itself, but to spin up a clone of the production environment and run the tests on that. Yet I can't find a single solitary tutorial on how this could be done on EC2, for example.
Sorry if this is a bit of a rambling question. How does everyone else ensure an exact replica of the production environment for Jenkins to run tests on? This includes database migrations as well, now that I think about it.
Thanks.
UPDATE:
A few of the answers given seem to concern compiled languages like Java or Objective-C. In those situations I can see the logic of having the tests be platform agnostic. I should have been more specific and mentioned that I use Jenkins for LAMP-stack testing, so in that situation the infrastructure is as much a component that needs testing as anything else, seeing as having PHP 5.3 on one machine is enough to break a project that requires PHP 5.4, for example.

There is another approach you can consider.
With Vagrant, you can create a completely virtual environment that simulates your production setup. This is especially useful when you want to test many environments (production, a single-node environment, different OSes, different databases) but don't have enough "bare metal" machines.
You define the appropriate Vagrant environment; then, in the Jenkins job, you bring up those machines and execute the tests against the Vagrant environment.
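As a rough sketch (box names, IP addresses and package lists below are placeholders, not taken from the answer), a multi-machine Vagrantfile for a LAMP-style setup could look like this; the Jenkins job then runs vagrant up, executes the test suite against the VMs (for example via vagrant ssh web -c "phpunit"), and ends with vagrant destroy -f.

```ruby
# Vagrantfile -- minimal sketch of a production-like test environment.
# Boxes, IPs and packages are illustrative only.
Vagrant.configure("2") do |config|
  # Web node mirroring the production LAMP stack
  config.vm.define "web" do |web|
    web.vm.box = "ubuntu/trusty64"
    web.vm.network "private_network", ip: "192.168.50.10"
    web.vm.provision "shell",
      inline: "apt-get update && apt-get install -y apache2 php5 php5-mysql"
  end

  # Database node matching the production MySQL version
  config.vm.define "db" do |db|
    db.vm.box = "ubuntu/trusty64"
    db.vm.network "private_network", ip: "192.168.50.11"
    db.vm.provision "shell",
      inline: "apt-get update && apt-get install -y mysql-server"
  end
end
```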

Jenkins also supports a master/slave system (see the Jenkins documentation on distributed builds for details).
The Jenkins slave itself does not need much configuration, so it could run on a replica of your production system without influencing that clone significantly.
Your master would then just delegate the jobs (like your acceptance tests) to the slave. That way you could also cover different production environments (different OSes, different databases, etc.) by setting up a Jenkins slave for every configuration you need.
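In current Jenkins Pipeline syntax, pinning a job to such a slave is just a matter of a label expression. A minimal sketch (the label and script name are placeholders, not from the answer):

```groovy
// Declarative Pipeline sketch: the master only schedules the job,
// the stages run on whichever slave carries the matching label.
pipeline {
    agent { label 'lamp-prod-replica' }   // slave set up as a production replica
    stages {
        stage('Acceptance tests') {
            steps {
                // runs on the production-replica slave, not on the master
                sh './run-acceptance-tests.sh'
            }
        }
    }
}
```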

You are probably using Jenkins to check the quality of your code, compile it, run unit tests, package it, deploy it and maybe run some integration tests.
Given the nature of all those tasks, it is probably best that your Jenkins environment is NOT like your production environment. For example, you don't want your production environment to have your compiler installed (for many reasons, security to name one).
So Jenkins is a development environment, and the fact that it doesn't exactly match your production environment should not be a concern to you.
However, I understand that you may want to deploy your packages to a production-like or even production-clone environment to run some specific tests or tasks of your software lifecycle. In my opinion, though, that issue is beyond Jenkins and concerns only the "deployment" phase of your lifecycle (i.e. it's not a Jenkins issue, but an infrastructure issue that you should think about with a more general approach, and then simply tell Jenkins where to deploy).

Related

Should I use the production docker build when doing CI to run tests?

Should I replicate the production environment (docker image, dependencies, etc) when running tests?
The question arises because I'm not sure how to get Composer development packages like PHPUnit and PHPStan if I am replicating the prod environment.
What are best practices regarding this?
Should I replicate the production environment (docker image, dependencies, etc) when running tests?
The immediate answer to this is yes. Replicating the production environment in your test environment means you can limit the risk of issues that only show up in one specific environment biting you in production.
Having said this, there are many times when it is not appropriate to make your test environment identical to your production environment. E.g. for a web service you cannot (easily) run your CI tests against the production domain name.
Docker makes it a lot easier to use the production environment in other parts of the SDLC, e.g. development and CI. I would advise making the Docker image for your production environment available to everyone working on the project (dev, QA, etc.). There will be cases where people want to move away from the production Docker image, for example to install additional debugging tools that should not be available in production.
In summary, using your production Docker image (and dependencies) in your test/dev environments will give you greater confidence in how your product will perform in production. This will reduce your time to market and the number of environmental issues in production.
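As for the Composer dev packages (PHPUnit, PHPStan): one common way to keep the test image identical to production while still having dev tooling is a multi-stage Dockerfile in which the CI stage is built on top of the production stage. This is only a sketch under assumed image names and paths, not the asker's actual setup:

```dockerfile
# Sketch only -- base image, paths and stage names are placeholders.
# The "prod" stage is what gets deployed; the "ci" stage reuses the exact
# same runtime layers and only adds the Composer dev packages for testing.

FROM php:8.2-apache AS prod
WORKDIR /var/www/html
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader
COPY . .

FROM prod AS ci
# dev-only tooling (phpunit, phpstan) lives only in this stage
RUN composer install
CMD ["vendor/bin/phpunit"]
```

The production build would use docker build --target prod, while CI builds --target ci and runs the tests inside that image, so everything except the dev packages stays identical.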

Chef Inspec test suites to verify Jenkins build node configurations

I currently have a build farm setup using Jenkins 2.46.3 LTS. I have a Jenkins Master installed on a physical server with 4 - 5 virtual machine build nodes running on VirtualBox.
I am thinking about managing the configuration of the virtual machines using Chef, however much of the software I need installed on these build nodes does not have a corresponding Chef Supermarket Cookbook. I do not feel confident enough with Chef to create my own cookbooks for managing these nodes.
I do have some experience writing InSpec tests in order to test a few wrapper cookbooks I have for some Chef Supermarket cookbooks. My question is: even if I do not have cookbooks that install this software on the nodes in the first place, is there a way to run a suite of InSpec tests against the actual nodes (as opposed to using them to test a cookbook in a sandbox environment)?
My goal is to automate all the manual checks we would do to verify that the build nodes are set up correctly. Ohai seems like it may be good for this, as it compares the configuration Chef has with what's actually on the managed node anyway.
If there is a better approach to completing this goal I would be happy to consider it as a solution.
Thanks to anybody with any experience with this! Any help or advice is welcome :)
I think you might be confusing InSpec with kitchen-inspec. The two are related, but independent (well, kitchen-inspec uses InSpec but you know what I mean). The Test Kitchen verifier runs some kind of test code against the test instances (usually VMs), but InSpec itself runs test code against whatever (real) servers you point it at.
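So for the original goal you can point InSpec straight at each build node over SSH, with no Test Kitchen involved. A minimal sketch (package names, host names and key paths are examples, not from the question):

```ruby
# controls/build_node.rb -- example checks for a Jenkins build node.
# The packages and commands below are illustrative.

control 'build-node-jdk' do
  impact 1.0
  title 'Jenkins agents need a JDK'
  describe package('openjdk-8-jdk') do
    it { should be_installed }
  end
end

control 'build-node-git' do
  describe command('git --version') do
    its('exit_status') { should eq 0 }
  end
end
```

Run it against a live node with something like inspec exec path/to/profile -t ssh://jenkins@build-node-01 -i ~/.ssh/build_node_key, and wire that command into a Jenkins job if you want the verification itself to run automatically.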

Chef and Docker

I am a bit confused. As part of a course we are supposed to set up a CI and CD solution using Jenkins, Docker and Chef; how the flow should look is not specified.
We have been setting up Jenkins so that for every new git commit it creates a Jenkins slave that spins up the specific containers needed for a test, then tears them down and reports the result.
So I have been looking around today for information on using Chef and Docker for continuous delivery/deployment. The use case I see is the following: specify the machine deployment options in Chef, such as how many machines for each server, database and so on. When the Jenkins slave successfully builds and tests the application, it is time to deploy: remove any old containers, build new ones, and handle configuration and other necessary management in Chef.
I have been looking for information about similar use cases and there does not seem to be much out there. I have been tinkering with the chef-provisioning plugin together with chef-provisioning-docker, but the information on using, for example, the docker driver is not very intuitive. Then I stumbled across this article (https://coderanger.net/provisioning/), which basically recommends that new projects not start with chef-provisioning.
Is there something I am missing? Is this kind of use case not that popular, or even just stupid? Are there any other plugins I have missed, or another setup with Chef that is more suitable?
Cheers in advance!
This kind of purely procedural stuff isn't really what Chef is for. You would probably want to use something integrated directly with Jenkins as a plugin. Or, if you're talking about cookbook integration tests, there are the kitchen-docker and kitchen-dokken drivers, which can handle the container management for you.
EDIT: The above was not really what the question was about, new answer.
The tool you're looking for is usually called a resource scheduler or cluster orchestrator. Chef can do this either via chef-provisioning or the docker cookbook; between those two I would use the latter. That said, Chef is really not the best tool for this job. There is a whole generation of dedicated schedulers, including Mesos+Marathon, Kubernetes, Nomad, and docker-compose+swarm. Of those, Nomad is probably the simplest, but Kube has a huge community following and is growing quickly. I would consider using Chef for this an intermediate step at best.
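For completeness, here is roughly what the docker cookbook route looks like as a recipe sketch (registry, image name, tag and ports are placeholders); note that it only manages containers on a single host, which is exactly the limitation the dedicated schedulers remove:

```ruby
# Recipe sketch using the community "docker" cookbook's resources.
# Registry, image name, tag and ports are placeholders.

docker_service 'default' do
  action [:create, :start]
end

docker_image 'myapp' do
  repo 'registry.example.com/myapp'
  tag  'latest'
  action :pull
end

docker_container 'myapp' do
  repo 'registry.example.com/myapp'
  tag  'latest'
  port '8080:8080'
  restart_policy 'always'
  action :run
end
```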
I would suggest using container orchestration platforms like Kubernetes, Docker Swarm or Mesos. Personally, I would recommend Kubernetes, since it is the leading platform of the three.
Chef is a good configuration management tool, and using it for provisioning containers would work, but it is not the best solution. You would run into issues such as deciding where the containers should be provisioned, monitoring container status, and handling container failures. A platform like Kubernetes handles this for you.
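As a rough illustration of what that buys you, a minimal Deployment manifest (names and image are placeholders) is enough to have Kubernetes keep a set of replicas running, decide which nodes they land on, and reschedule them when a container or node fails:

```yaml
# Minimal Deployment sketch -- names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3            # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:latest
          ports:
            - containerPort: 8080
```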
This would be useful to get some insights:
https://www.youtube.com/watch?v=24X18e4GVbk
More to read:
http://www.devoperandi.com/how-we-do-builds-in-kubernetes/

Jenkins for Production deployment

This question arises purely out of curiosity. Nowadays almost everyone uses Jenkins for build and deployment automation, yet many of those people still shy away from using Jenkins for production deployment.
Considering Jenkins is such a nice and easy tool, I don't really understand why we don't see more of it in production deployment. Is it because of security reasons? If so, what might those security reasons be?
Or is there some other reason that makes people keep two different tools for automation: one for dev and lower environments, and another for higher environments like production?
System admins running production environments don't want to be obsoleted/automated out of their jobs by "dev tools".
I might be a bit late here, but I would like to share an automated process that lets you handle the entire production deployment using a single YAML file.
For more information
https://github.com/frostyaxe/Blaze-Tracker

Should Jenkins be run inside development/deployment environment or on standalone box

I am using Vagrant to provide 'synchronised' and standardised development, test, UAT, staging and production environments.
I am now looking at how to standardise my CI build process. I like the look of Jenkins, but I am confused about the best way to deploy it. Should I deploy it on a stand-alone CI box or install it in all the various environments?
I guess I am a little confused here. Any help much appreciated, thanks.
The standard approach is a stand-alone CI server shared by the development team. This common server (at a well-known URL) provides the development dashboard for the team and the only authorized way to publish into the release repository (developers are not allowed to publish directly).
You could go for extra credit and also set up an instance of Sonar, which in my opinion is much better suited as a development dashboard, providing a richer set of metrics and also serving as a historical record for development.
Finally, Jenkins is so simple to set up that there is nothing stopping developers from running their own instances. I find that with Sonar it matters less and less where a build is actually run, once the release credentials are properly controlled. In fact, this attitude is important, as it prevents the build server from turning into a delicate snowflake :-)
Update
There's a Vagrant plugin for Jenkins which might prove useful in running your current processes.
You're likely better off running Jenkins as a shared stand-alone server.
However, I highly recommend that you set up your builds in such a way that they can also be run locally on each developer's machine. This is particularly key with unit tests.
In our setup, we have a shared Jenkins server that executes all of our builds using NAnt. Each developer also has NAnt installed and can run the build and unit-test portions of the build freely. Ideally integration tests could also be run, but we're not quite there yet; having them execute on the CI server still gives us the proper feedback, even if it takes a little longer to arrive.
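To make the "runnable locally and on the CI server" idea concrete, a NAnt build file along these lines (project, tool and assembly names are placeholders, not taken from the answer) lets developers run "nant test" on their own machines while the Jenkins job invokes exactly the same target:

```xml
<?xml version="1.0"?>
<!-- Minimal NAnt build sketch; names and paths are illustrative only. -->
<project name="myapp" default="build">

  <target name="build">
    <!-- compile the solution -->
    <exec program="msbuild" commandline="MyApp.sln /p:Configuration=Release" />
  </target>

  <target name="test" depends="build">
    <!-- run the unit tests with the NUnit console runner -->
    <exec program="nunit-console" commandline="MyApp.Tests\bin\Release\MyApp.Tests.dll" />
  </target>

</project>
```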
