I currently have one Jenkins master set up for our continuous integration project. Several different projects will need to be built on this Jenkins instance, each with different project dependencies as well as system dependencies.
From what I have read in the Jenkins documentation, a distributed build architecture can be implemented to provide different environments needed for builds/tests:
Jenkins supports the "master/slave" mode, where the workload of building projects are delegated to multiple "slave" nodes, allowing a single Jenkins installation to host a large number of projects, or to provide different environments needed for builds/tests.
I'd like to take this approach in order to avoid taking down the continuous integration system for all projects in the event there is an issue with a single project's dependencies.
Instead, only the agent for the project whose environment has an issue would be down, and our other projects could continue to build/test without interruption.
My approach is going to be to launch Jenkins slaves/agents via SSH, each configured with what is required to build a specific project. In each job's configuration, I'll then restrict where the project can be built to the appropriate slave/agent node.
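Roughly what I have in mind, sketched with the Jenkins script console (this assumes the SSH slaves plugin is installed; the host, credentials ID, remote directory, and label names below are placeholders):

    import hudson.slaves.DumbSlave
    import hudson.plugins.sshslaves.SSHLauncher
    import jenkins.model.Jenkins

    // Launch the agent over SSH; the host and credentials ID are placeholders
    def launcher = new SSHLauncher('192.168.56.10', 22, 'project-a-ssh-key')

    // One agent per isolated build environment, identified by a label
    def agent = new DumbSlave('project-a-agent', '/home/jenkins', launcher)
    agent.setLabelString('project-a-env')
    agent.setNumExecutors(1)

    Jenkins.get().addNode(agent)

Each job would then be pinned to its environment via that label - "Restrict where this project can be run" set to project-a-env in a Freestyle job, or agent { label 'project-a-env' } in a Pipeline.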
Are there any issues in having Jenkins agents as virtual machines with resolvable IP addresses running on the same machine as the Jenkins master (as the goal is not necessarily to gain computing power, but to provide isolated environments for builds/tests)?
Should simply using VirtualBox to launch the slave/agent virtual machines, and configuring those machines with the environment necessary to build/test the specific project, be sufficient as far as the project's goals go?
Thanks to everyone in advance for any advice on how best to create isolated environments for my projects!
Related
I am a beginner user of Jenkins. I am trying to put a development process onto a DevOps pipeline that includes Jenkins, GitHub, SonarQube, and IBM UCD.
It is not a very complicated deployment process, and it uses Windows machines.
There are three environments: QA, DEV, and PROD.
I know that I need to install one IBM UCD agent for each of those three, but do I need to have three slaves in Jenkins as well, or could just one Jenkins master do the deployment for all three environments? Which way is better?
Usually for a complex deployment process companies use the "Master+Agent" scheme, but in your case there is no need to create an advanced Jenkins system with a master and agents if you can build everything on one host and you don't have any additional projects or restrictions.
From the official documentation:
It is pretty common when starting with Jenkins to have a single server which runs the master and all builds, however Jenkins architecture is fundamentally "Master+Agent". The master is designed to do co-ordination and provide the GUI and API endpoints, and the Agents are designed to perform the work. The reason being that workloads are often best "farmed out" to distributed servers. This may be for scale, or to provide different tools, or build on different target platforms. Another common reason for remote agents is to enact deployments into secured environments (without the master having direct access).
For additional information you can read the following articles: this and this.
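For illustration only, a single master driving all three environments from one Pipeline job could look roughly like the sketch below; the build/deploy commands and the manual gate are placeholders, not your actual IBM UCD integration:

    pipeline {
        agent any                                  // a single built-in node is enough here
        stages {
            stage('Build') {
                steps {
                    bat 'build.cmd'                // placeholder for your real Windows build step
                }
            }
            stage('Deploy to DEV') {
                steps {
                    bat 'deploy.cmd DEV'           // placeholder; in practice this would call IBM UCD
                }
            }
            stage('Deploy to QA') {
                steps {
                    bat 'deploy.cmd QA'
                }
            }
            stage('Deploy to PROD') {
                steps {
                    input message: 'Promote to PROD?'   // manual approval before production
                    bat 'deploy.cmd PROD'
                }
            }
        }
    }

The three IBM UCD agents live on the target environments either way; whether Jenkins itself needs extra slaves is a separate question about where the build work runs.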
I have a build procedure for 4-5 Java projects to be configured as CI using Jenkins. Would it save time to build some/all of the projects on different machines (connected as Jenkins slaves)?
Are there any other benefits of a Jenkins master-slave configuration?
Offloading work to build agents is a very good idea, as it keeps load away from the master. This allows you to build more projects in parallel (especially with dynamic agents launched in some cloud environment).
Further, it makes the systems easier to maintain, since, for example, the Java version/setup your build agents need to build and test your application does not interfere with the Jenkins master machine.
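As a rough sketch of what this looks like in practice - assuming two agents carrying example labels linux and windows, and Maven-based projects (both are assumptions) - a pipeline can farm the work out to both at once:

    pipeline {
        agent none                                // do no build work on the master itself
        stages {
            stage('Build projects in parallel') {
                parallel {
                    stage('Project A') {
                        agent { label 'linux' }   // example label
                        steps {
                            sh 'mvn -B clean verify'
                        }
                    }
                    stage('Project B') {
                        agent { label 'windows' } // example label
                        steps {
                            bat 'mvn -B clean verify'
                        }
                    }
                }
            }
        }
    }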
I would like to set up, from scratch, a build server for .NET applications using Jenkins; please note that I'm new to Jenkins CI.
Several Questions:
1) How should I decide on the build server specs? Apart from the OS, which would be Windows Server 2012, how should I decide on the RAM, CPU, and HD space?
2) Should Jenkins sit on the build machine or not? What is the recommended approach? I understood that the build server should be isolated from the Jenkins master.
3) How do I decide on the master/slave approach? When should I use only a master, and when should I use a master and one or more slaves?
4) How would you recommend running the build and deployment tasks in Jenkins: using NAnt/Python or some other scripting language?
Thanks, and sorry for the ignorance :)
Responding to each in turn:
1. You can run Jenkins as a Windows service (instructions here) and the machine can be a VM, so it doesn't have to be huge.
a) RAM and CPU: I'll put these together, as both depend on how many jobs you plan to have running at the same time. The default number of build executors is 3 but can be increased as a global config change.
b) HDD: This depends on how many jobs you plan to have. Jenkins will check out the source code (as well as the compiled output) to its home directory on a per-job basis. This can get big. I would also recommend using the ThinBackup plugin to back up the Jenkins configuration.
2. Jenkins is the build machine. A vanilla installation of Jenkins is the master. In my experience you will not need a separate slave machine unless you need to do native builds on other platforms or have LOTS and LOTS of jobs. I've seen single masters running happily with hundreds of jobs.
3. Further to 2. above, I suggest you start with a master and set up a slave later if you really need one.
4. As you have stated you are building .NET applications, you can simply install the MSBuild plugin, which should serve you well. Builds for .NET applications in Jenkins are typically Freestyle builds, so you will often be using Windows Batch build steps as well. This is also a great blog on Jenkins in a .NET environment.
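Freestyle jobs are configured in the UI rather than in code, but to give a sense of what the build boils down to, here is a rough Pipeline equivalent; the agent label, the solution name, and the assumption that nuget and msbuild are on the agent's PATH are all placeholders:

    pipeline {
        agent { label 'windows' }                 // example label for a Windows build agent
        stages {
            stage('Build') {
                steps {
                    // placeholders: point these at your actual solution and tool locations
                    bat 'nuget restore MySolution.sln'
                    bat 'msbuild MySolution.sln /p:Configuration=Release'
                }
            }
        }
    }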
I am using Jenkins for CI. I've heard that I should have a dedicated server for running Jenkins and a slave for running build tasks - is this true?
I can understand this if the machine is not powerful enough to run both Jenkins itself and the build tasks, but is there any defined technical reason for this?
Best practice is to have a separate machine for the Jenkins server, and not to use it for builds at all.
This has nothing to do with CPU power or memory resources - a build machine should have a predefined configuration, and Jenkins should not be part of it (the requirements of Jenkins may even conflict with those of the build machine). You should be able to boot / clone / upgrade / restore / trash the build machine without any impact on Jenkins.
Of course you can settle for a single machine if your resources are limited, but if you are serious about build automation, Jenkins should have its own server.
You probably don't need dedicated hardware or a VM to run a Jenkins server, because the actual Jenkins process (with no builds running) uses very few resources. But it all depends on what you want to accomplish with your Jenkins setup.
Do you want to run continuous builds across multiple platforms for multiple projects? Then using a master with slaves is the only way to go. If, on the other hand, you're running fairly simple builds for just a few projects, then you only need one machine to run the builds and the Jenkins process.
You can configure Jenkins to run multiple builds concurrently, so if you have a quad-core machine you can safely run two builds, and possibly a third once you analyze resource usage.
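The executor count lives under Manage Jenkins > Configure System ("# of executors"). As a sketch, the same setting can also be changed from the script console; the value 2 below is just an example for a quad-core machine:

    import jenkins.model.Jenkins

    // How many builds the master (built-in node) will run at the same time
    def jenkins = Jenkins.get()
    jenkins.setNumExecutors(2)   // example value; size this to your CPU and RAM
    jenkins.save()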
At my last gig, I used a quad-core machine with 8GB RAM to run:
Jenkins running Selenium builds
VirtualBox VM with Windows XP
Two instances of Tomcat, each with two applications deployed.
And the machine still had resources to spare.
I want to deploy a private cloud test infrastructure using OpenStack and Jenkins for multiple projects. I thought of creating a template for OpenStack with one Jenkins installation to use as the master. For the projects, I thought of separating them into nodes, i.e. each project would get one node. Is this a sensible structure? Or should I install one Jenkins installation per project+VM?
1) How would you organize a private multi-project test cloud infrastructure?
2) Jenkins saves configuration and job information to /var/lib/jenkins by default; how do I manage that storage for each project?
When you say node, I'm assuming you mean a machine running nova-compute and hosting VM instances. If this is the case, then I honestly wouldn't worry about trying to bind a project to a specific node - treat the entire OpenStack pool of resources you have as a global cluster, assign projects into it, and let them spin up and tear down as they need.
You will likely find it beneficial to have an image with Jenkins pre-installed as a publicly available image, assuming you want a Jenkins master per project in your cloud. If you're running Jenkins as a stand-alone item per project, an m1.medium may be sufficient, but you might find you want to use an m1.large. It all depends on what you have your Jenkins instance doing in each project.
If you want the Jenkins data to persist across destroying and recreating the Jenkins master instance, then you could use a volume and mount it specifically at /var/lib/jenkins - but you will need to coordinate Jenkins startup with having the volume attached appropriately. You may find it easier to give the Jenkins instance a larger base disk and just back up and restore the data per project if you need to destroy and recreate the Jenkins instance.
Another way to do this would be to share a master Jenkins external to your OpenStack cloud and use the JClouds Jenkins plugin to spin up Jenkins instances and slaves as you need them for projects. This doesn't provide any segregation between projects in Jenkins, which may not be to your liking based on the question above.