Can I poll a workspace repository that is located on the slave (Jenkins + RTC plugin)?

I recently started working on speeding up the build time of a relatively large code base at my company. This code base uses RTC for source code management and, after some research and experimentation, I ended up using Jenkins to automate the process. I started by creating my build server on a local machine and configured the repository through the RTC plugin, which works quite well with the "Poll SCM" option and the repository workspace option of the RTC plugin. However, I now have to move this job to the official company Jenkins server while keeping the job execution on the original local PC. I have added the PC as a Jenkins node and have had no problem reaching it through Jenkins, but my questions/assumptions are the following:
It looks like the job is executing the RTC build toolkit from the slave (or at least I had to configure the RTC toolkit path on the node).
For some reason, it looks like polling in Jenkins always looks for the repository on the master, even when I add an SCM pre-step in which I can validate that the job is running on the slave system.
My question: is there any way to ensure that the polling happens on the slave (without scripting or adding external solutions, just using the RTC plugin)? For security reasons I cannot add additional plugins to Jenkins or create anything on the master; I only have a free job to configure.
Thanks.

Related

Is it possible to integrate SonarQube, Jenkins and GitLab (all in Docker containers)?

Currently, I am working on a quality process to ensure that the code is acceptable. For that, I'm integrating Jenkins, SonarQube and GitLab, which are running on different servers (actually, in different Docker containers).
The idea is to run a SonarQube check every time code is pushed to GitLab, and to block commits, merges, and so on whenever the SonarQube check has not passed.
I have already integrated Jenkins with SonarQube, but Jenkins analyzes the code inside its own workspace, so imagine a situation where a developer needs to push changes from their laptop.
My conceptual question is simple: is it possible to integrate these technologies to do this? And, if the answer is yes, which steps are necessary?
P.S.: I don't need to see code, configuration files, and so on. I just need something like:
Configure SonarQube to work with Jenkins
Write a script to copy that file into that folder,
...
First, "in Docker" means each tool runs in its own container.
They only need to see each other through the network, which is where a Docker Engine in Swarm mode comes in.
Second "configure Jenkins to work with SonarQube"... that is what I have done in my shop, and there isn't much to it.
Once the Jenkins SonarQube plugin is installed and the address of the SonarQube server is entered, you can configure your job to call the Sonar analysis (for instance with Maven: $SONAR_MAVEN_GOAL -Dsonar.host.url=$SONAR_HOST_URL).
The analysis done in the Jenkins workspace will then be published to the SonarQube server.
A swarm server is the more modern version of this 2015 docker-compose.yml file from the marcelbirkner/docker-ci-tool-stack project.
The idea remains the same though: each element is isolated in its own container.
I haven't tried it myself, but https://gitlab.talanlabs.com/gabriel-allaigre/sonar-gitlab-plugin could be interesting in your setup.
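Not something covered above, but to make the "block merges when the check has not passed" part concrete: a minimal Python sketch that queries the SonarQube web API for the project's quality gate after the analysis and exits non-zero so that the calling Jenkins job (or GitLab pipeline step) fails. The server URL, project key, and SONAR_TOKEN variable are placeholders for this example, not values from the answer:
# Minimal sketch: check the quality gate after the analysis and fail the build if it
# did not pass. Host URL, project key, and token are hypothetical placeholders.
import os
import sys
import requests

SONAR_HOST_URL = os.environ.get("SONAR_HOST_URL", "http://sonarqube:9000")
PROJECT_KEY = "my.company:my-project"  # hypothetical project key

response = requests.get(
    f"{SONAR_HOST_URL}/api/qualitygates/project_status",
    params={"projectKey": PROJECT_KEY},
    auth=(os.environ["SONAR_TOKEN"], ""),  # token passed as the username, empty password
)
response.raise_for_status()

status = response.json()["projectStatus"]["status"]
print(f"Quality gate status: {status}")
if status != "OK":
    sys.exit(1)  # non-zero exit marks the Jenkins build (or pipeline step) as failed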

Communicate between Jenkins server without setting up master slave relation

I would like to set up a Jenkins server that would run test scripts based on successful build deployments on other Jenkins servers. For example, the QA Jenkins server is named JQA1OnMachine1 and I have three others named J2OnMachine2, J3OnMachine3 and J4OnMachine4 (different Jenkins servers on different boxes). Can JQA1OnMachine1 (the QA Jenkins) poll the others at a regular interval to see if a build was deployed successfully? If so, can anyone tell me how?
A Jenkins master/slave setup along with the Jenkins Pipeline plugin would be one of the better ways to implement this. However, since you don't want to use that approach, you can explore PSTools to remotely capture processes or files on the different servers.
Your builds could update a file on the build server after each build completes, and your QA machine could run a script with PSTools to monitor that file and trigger the QA testing based on its content.
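As an alternative, hedged illustration of the same polling idea without PSTools: every Jenkins server also exposes a JSON API, so the QA machine could poll the other servers' lastSuccessfulBuild endpoints on a schedule. The server URLs, job names, and polling interval below are placeholders for the sketch:
# Minimal sketch: poll the other Jenkins servers' JSON API for the last successful
# build number and trigger QA testing when it changes. URLs and job names are
# hypothetical placeholders.
import time
import requests

SERVERS = {
    "J2OnMachine2": "http://machine2:8080/job/deploy/lastSuccessfulBuild/api/json",
    "J3OnMachine3": "http://machine3:8080/job/deploy/lastSuccessfulBuild/api/json",
    "J4OnMachine4": "http://machine4:8080/job/deploy/lastSuccessfulBuild/api/json",
}
last_seen = {name: None for name in SERVERS}

def run_qa_tests(server, build_number):
    print(f"New successful deployment on {server}: build #{build_number}, starting QA tests")
    # ... trigger the QA test job here, e.g. via this server's own job API or CLI

while True:
    for name, url in SERVERS.items():
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
        except requests.RequestException:
            continue  # server unreachable or no successful build yet; retry next cycle
        number = response.json().get("number")
        if number is not None and number != last_seen[name]:
            last_seen[name] = number
            run_qa_tests(name, number)
    time.sleep(300)  # poll every five minutes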

How to securely backup Jenkins configuration?

I am setting up a Jenkins environment to manage workflows for Python projects. This Jenkins installation runs on a Windows 7 machine, and I need to back up the Jenkins configuration to avoid potential loss of work in case of, for example, an HDD failure.
I tried the SCM Sync Configuration plugin, but it is not compatible with the Subversion plugin I use and caused Jenkins to display only a white screen when I activated it, so it is not usable.
I also tried thinBackup. It works well but, because Jenkins runs as a local service, it is not able to save backups to a network drive (and backing up to the same drive as Jenkins is not very interesting). You would think that I just have to run Jenkins with a network user, but in that case it would not have sufficient local privileges.
I am thinking about writing a Batch (or Python) script that would use SVN to back up the Jenkins configuration, by adapting what is described on this page, but I am not very happy to write an SVN account password in a Batch (or Python) script that could potentially be seen by anybody.
So I would like to know if there is another way to achieve this Jenkins configuration backup.
Or at least, is there a way to run SVN commands without exposing a clear-text password to anybody?
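For illustration, the kind of script I have in mind would look roughly like this, with the password read from an environment variable instead of being written in the script (which still does not feel fully safe, hence my question). The paths and account name are just placeholders:
# Rough illustration only: commit the Jenkins configuration to SVN, taking the
# password from an environment variable set outside the script. Paths and account
# name are hypothetical; in practice you would limit the commit to the config files
# described on the page mentioned above.
import os
import subprocess

JENKINS_HOME = r"C:\Program Files (x86)\Jenkins"   # hypothetical install path
SVN_USER = "jenkins-backup"                        # hypothetical account
SVN_PASSWORD = os.environ["SVN_BACKUP_PASSWORD"]   # set outside the script

# Add any new configuration files to version control, then commit everything.
subprocess.run(["svn", "add", "--force", JENKINS_HOME], check=True)
subprocess.run(
    ["svn", "commit", JENKINS_HOME,
     "-m", "Automated Jenkins configuration backup",
     "--username", SVN_USER,
     "--password", SVN_PASSWORD,
     "--non-interactive"],
    check=True,
)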
The issues with the SCM Sync Configuration plugin sadden me, too. What we do with our Jenkins instances is this: we use thinBackup to run regular backups and store them in the default folder on the same HDD, and then a daily cron job rsyncs them to a folder on another HDD. Since your Jenkins runs on Windows, you could probably achieve the same using the Windows Task Scheduler and cwRsync, for example.
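If pulling in cwRsync is not possible, a small scheduled Python script could play the same mirroring role. A minimal sketch, assuming thinBackup keeps writing to a local folder and the Task Scheduler account can reach the network share (both paths are placeholders):
# Minimal sketch: mirror the thinBackup output folder to a network share, copying
# only backups that have not been mirrored yet. Schedule it with the Windows Task
# Scheduler under an account that can see the share. Paths are placeholders.
import shutil
from pathlib import Path

BACKUP_DIR = Path(r"C:\Jenkins\thinBackup")          # thinBackup's local backup folder
TARGET_DIR = Path(r"\\fileserver\backups\jenkins")   # network share

TARGET_DIR.mkdir(exist_ok=True)
for entry in BACKUP_DIR.iterdir():
    destination = TARGET_DIR / entry.name
    if not destination.exists():
        if entry.is_dir():
            shutil.copytree(entry, destination)
        else:
            shutil.copy2(entry, destination)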

Jenkins Puppet integration

My development setup is such that for every SVN check-in the code is built, unit tested, packaged and published to Artifactory. Now I want to automate my deployment process and run integration (Selenium) tests as part of this process. I am thinking of using Puppet to manage the deployment.
Is Puppet the correct tool for this?
What process should I use to trigger the Puppet master to initiate a fresh installation on the agents? I couldn't find any Jenkins plugin that would actually trigger Puppet. One option is to call
puppet apply ...
as a Jenkins post-build task.
Any suggestions welcome, thank you.
Have a look at this Selenium Jenkins article from Saucelabs, a service that automates cross-browser testing. Though they are a vendor with a service to sell, the article covers how to do Selenium testing yourself with Jenkins. It also exposes common pain points you are likely to run into with this approach.
A Puppet master doesn't serve the function of orchestrating client convergences. Take a look at Mcollective. This is a tool that will allow you to trigger puppet runs on target systems from a Jenkins agent via script commands.
Some Mcollective getting started material:
http://www.slideshare.net/PuppetLabs/presentation-16281121
http://puppetlabs.com/mcollective
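Building on the MCollective suggestion above, here is a minimal sketch of what the trigger could look like from a Jenkins build step, assuming the mco client and its puppet agent plugin are installed and configured on the Jenkins agent; the fact filter is a placeholder, not something from the answer:
# Hedged sketch: trigger a Puppet run on the target systems from a Jenkins build step,
# assuming the MCollective client (mco) and its puppet agent plugin are available on
# this Jenkins agent. The fact filter is a hypothetical example.
import subprocess
import sys

result = subprocess.run(
    ["mco", "puppet", "runonce", "--with-fact", "role=integration"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    print(result.stderr, file=sys.stderr)
    sys.exit(result.returncode)  # fail the Jenkins build if the trigger failed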

Delegate specific part of build to slave

I have a project where part of the build process is to create a native library on a remote machine. This is currently a manual process outside of the CI builds made by Jenkins.
The setup in question is that the Jenkins master server builds a Git-based Maven project, which has a dependency on a native library that can only be built on a specific machine. Jenkins can't compile this module, and because of this, it is currently a manual process.
I would like to install a Jenkins slave on the machine that creates the native library and have it return the compiled files to the Jenkins master, without handling any other parts of the build.
I am having trouble figuring out if this is even possible. The articles I have found on the subject discuss Jenkins slaves as a means of distributing the build, but I want the slave to take responsibility for a small part of the build process and nothing else. The Jenkins master should just send the build request to the slave and wait for the result, instead of trying to compile the code itself.
I do exactly the same. My setup is very similar to what Mark O'Connor and gaige are advising, and I am using the Copy Artifact plugin.
job A: produces a zip file on a Mac
job B: runs on slave B (a Windows machine), takes the zip as input and produces an MSI
Here's the important part in the config of job B:
restrict the job B on the proper slave using labels
make sure job B happens after job A
make sure artifacts from job A are sent to job B before your build
build your stuff
archive artifacts produced by job B
Delegating part of a job to a slave is something that would have to be done external to Jenkins, for example, using ssh.
However, as #kan indicates, you most likely want to extract the native library build as a separate job and then have that job execute on a particular slave, or any slave that meets specific criteria.
To do this, my suggestion would be to use Labels in the node configurations to determine which slaves can be used for building that particular job.
In Jenkins > nodes > <slave node>, use the Labels property to set one-word labels that indicate your specific requirements, such as the OS or processor type.
Then, in the jobs that are node-specific, check Restrict where this project can be run and set the Label Expression to something that meets your criteria. If the criteria is simple, it will just be a single word, if you need a boolean, you can use those as well (such as OSX&&Lion in our case).
I believe this is all in the standard version of Jenkins, without need for a special plugin. Leave me a comment if it isn't and I'll try and diagnose which plugin enables this functionality.
This problem is solved by using a binary repository manager to centralize your software artifacts. Personally I use Nexus, but it could be something as dumb as a remote file system.
The idea is to publish the built artifact after each Jenkins job (if you don't like Nexus, you could use one of the Publish over plugins) and retrieve it as a build dependency in the next job.
This approach means it no longer matters where the build executes, and has the added advantage of decoupling the build of each module component.
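To make the publish/retrieve idea concrete, here is a minimal sketch assuming an HTTP-accessible repository such as a Nexus raw repository; the repository URL, credentials, and artifact name are placeholders, not part of the answer above:
# Minimal sketch: the slave job uploads the compiled native library to a repository
# manager over HTTP, and the master job downloads it again as a build dependency.
# URL, credentials, and artifact name are hypothetical placeholders.
import os
import requests

REPO_URL = "http://nexus.example.com/repository/native-libs"  # hypothetical raw repo
AUTH = (os.environ["NEXUS_USER"], os.environ["NEXUS_PASSWORD"])
ARTIFACT = "libnative-1.2.3.so"

def publish(local_path: str) -> None:
    # Upload the built artifact (run at the end of the slave job).
    with open(local_path, "rb") as artifact:
        response = requests.put(f"{REPO_URL}/{ARTIFACT}", data=artifact, auth=AUTH)
    response.raise_for_status()

def retrieve(local_path: str) -> None:
    # Download the artifact as a dependency (run at the start of the master job).
    response = requests.get(f"{REPO_URL}/{ARTIFACT}", auth=AUTH)
    response.raise_for_status()
    with open(local_path, "wb") as artifact:
        artifact.write(response.content)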
