Jenkins: what started the build

There are two machines running Jenkins: one for building, the second for testing. If a job succeeds on the first machine, it triggers a testing job on the second machine via an HTTP request. For example:
http://<2nd_jenkins_ip>:8080/job/<job_name>/buildWithParameters?BUILD_NUMBER=167
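For illustration, the post-build step on the first machine could issue that request roughly like this (a minimal sketch; the host, job name and credentials are placeholders, and whether authentication is needed depends on your security setup):

# Sketch: trigger the downstream test job on the 2nd Jenkins machine.
# Host, job name and credentials are placeholders.
import requests

resp = requests.post(
    "http://<2nd_jenkins_ip>:8080/job/<job_name>/buildWithParameters",
    params={"BUILD_NUMBER": 167},
    auth=("ci-user", "<api-token>"),  # only if your Jenkins requires authentication
    timeout=10,
)
resp.raise_for_status()  # Jenkins normally answers 201 and queues the build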
The problem: something seems to launch some of the testing jobs automatically, but it shouldn't. I have deactivated the nightly builds, but it happened again, and I can't find the cause.
Question: Is there any way to display the IP/URL of the machine that started the build (e.g. in the console output)? If not, can I find this information elsewhere (e.g. in the Jenkins or Linux logs)?
EDIT1:
Console shows:
Started by user anonymous
Building on master in workspace <my_workspace>
Cleaning local Directory ./test_data
Checking out ...
Then the SVN checkout and the other build steps follow.

In the JENKINS_HOME directory on the server, look under jobs/<jobname>/builds/<select the build you want by date>.
In there, open the log file (no extension) with any text editor. It will usually give a more detailed cause at the top of the file.
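If you want to check many builds at once, a small script along these lines can pull the "Started by ..." cause line out of each build's log (a rough sketch; the JENKINS_HOME path and job name are assumptions):

# Sketch: print the recorded cause ("Started by ...") for every build of a job.
# The JENKINS_HOME path and job name below are assumptions for illustration.
from pathlib import Path

builds_dir = Path("/var/lib/jenkins/jobs/<job_name>/builds")
for build in sorted(p for p in builds_dir.iterdir() if p.is_dir()):
    log_file = build / "log"
    if not log_file.exists():
        continue
    with log_file.open(errors="replace") as fh:
        for line in fh:
            if line.startswith("Started by"):
                print(build.name, line.strip())
                break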
There are many ways you can prevent unwanted builds. One way is to configure an Authentication Token under the job's configuration -> Build Triggers -> Trigger builds remotely. Once a token is set, other (rogue or old) scripts cannot trigger the job without providing this token.
This however does not prevent manual triggering through the UI or other projects triggering through Jenkins' methods (not URL).
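With a token configured, a legitimate remote trigger has to pass it along, roughly like this (a sketch; the URL, job name, token and credentials are placeholders):

# Sketch: remote trigger that supplies the job's Authentication Token.
# Requests without the correct token are rejected by Jenkins.
import requests

resp = requests.post(
    "http://<2nd_jenkins_ip>:8080/job/<job_name>/buildWithParameters",
    params={"token": "<job-auth-token>", "BUILD_NUMBER": 167},
    auth=("ci-user", "<api-token>"),  # may also be needed, depending on global security
    timeout=10,
)
print(resp.status_code)  # expect 201 when queued; a missing or wrong token is refused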
I've also seen inconsistent behaviour with jobs configured on a schedule/timer, where configuration changes didn't take effect until Jenkins was restarted.

Related

What happens to data on a CI pipeline

I've been asked to create a CI pipeline for a project at work. I'm creating a load test with JMeter and Taurus, and I plan to integrate it with Jenkins to build the whole pipeline. I'm just starting in this field, and a question came to my mind:
What happens to all the data created by the load test? Does it go on to the deploy phase, or does it get deleted once the test is done? Should I clean up after the tests end?
The data is kept in the Jenkins workspace and, by default, it stays on the file system forever.
If you decide to publish the artifacts they will be available at Jenkins build dashboard via the web interface.
You might also be interested in the Jenkins Performance Plugin, which allows plotting performance trend charts and conditionally marking builds as unstable or failed depending on pass/fail thresholds.
An example configuration can be found in the How to Run a Taurus Test through Jenkins Pipelines article.
I am not completely familiar with your setup, but as far as I can see from quick research, JMeter does the same as every other testing framework and generates HTML reports. Jenkins won't delete them unless you explicitly delete them (rm file.html) or call cleanWs (clean workspace). If the job is deleted, so are the files.
So the test result file should still be present in the deploy phase. You can use a plugin to collect the result, or just archive it, or do whatever fits your workflow.
There is generally no need to clean it up (you usually configure Jenkins to delete old builds, which takes care of that).
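If the deploy phase runs on another node or outside the original workspace, one common pattern is to archive the report and fetch it back through Jenkins' artifact URL (a sketch; the Jenkins URL, job name, report path and credentials are assumptions):

# Sketch: download an archived load-test report from the last successful build.
# URL, job name, artifact path and credentials are assumptions for illustration.
import requests

url = "https://jenkins.example.com/job/load-tests/lastSuccessfulBuild/artifact/results/report.html"
resp = requests.get(url, auth=("deploy-user", "<api-token>"), timeout=30)
resp.raise_for_status()
with open("report.html", "wb") as fh:
    fh.write(resp.content)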

Jenkins: Github webhook does not trigger any job

I'm trying to configure Jenkins. I want a simple behaviour: trigger a build on a new pull request.
I cannot understand what I missed...
Jenkins version: 2.89.2
My global configuration at https://ci.mysite.fr/configure and the job configuration at https://ci.mysite.fr/job/test-back/configure are set up (screenshots omitted).
On GitHub, the webhook is sent and well received by Jenkins, and the Nginx log shows the same (screenshots omitted).
Still no build is triggered.
Help please!
Some things to check when debugging this sort of thing (a quick endpoint check is sketched after this list):
Check your Jenkins logs to see whether or not Jenkins is receiving the hook and deciding not to take action for some reason.
Check Jenkins security by clicking Manage Jenkins -> Configure Global Security. Open things up as much as you're comfortable doing and see if that changes anything.
Ensure that you're pushing changes to the master branch. For simplification, consider using ** as your branch specifier while you're getting this to work.
Ensure Git is properly configured on your Jenkins host by clicking Manage Jenkins -> Global Tool Configuration.
Make sure the user whose credentials you provided can manage hooks and pull from the repo you're interested in.
Run the job manually in Jenkins, ensure that it works.
After you run the job, it should show up as an option in Protected Branches/Required Statuses. In your repo, click Settings -> Branches, select your branch in the Branches section, check the Require Status Check to Pass before merging option, and your job should show up in the list that appears.
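To quickly rule out network or proxy problems, you can post a dummy payload to the webhook endpoint yourself and compare the response with what GitHub reports under Recent Deliveries (a sketch; the URL is a placeholder, and a genuine GitHub delivery also carries signature headers if a secret is configured):

# Sketch: check that the github-webhook endpoint is reachable at all.
# The URL is a placeholder; a real GitHub delivery also sends
# X-Hub-Signature-256 when a webhook secret is configured.
import requests

resp = requests.post(
    "https://ci.mysite.fr/github-webhook/",
    json={"zen": "connectivity test"},
    headers={"X-GitHub-Event": "ping"},
    timeout=10,
)
print(resp.status_code, resp.text[:200])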
Webhooks are arguably the most difficult Jenkins feature to test without prior experience, because of gotchas like these (and this list is probably incomplete):
New git commit / git push must be made for each pipeline build (repeating a previous one won't trigger a new build even if webhooks are already set up correctly - see below).
First build made after setting up webhook correctly must be manual (no bootstrap from the webhook itself is possible).
First build made after setting webhook correctly must succeed completely for the changes to take effect and for webhooks to start working. This will also cause Jenkins to miss all incoming requests made during the first build of a newly created pipeline.
More info
Please be warned that it is not possible to trigger a build using the same build conditions again (at least not via a webhook). You might therefore already have a correct webhook setup and not realise it works until you create a new git commit and push it to the remote repo on GitHub. If you try to repeat some old push over and over again by pressing the "Redeliver" button in the Recent Deliveries section of GitHub's Webhooks / Manage webhook page, Jenkins will never move beyond the "poke" repo stage, because it needs to detect SCM changes in order to trigger a new build:
Received PushEvent for https://github.com/mirekphd/<REPO_NAME> from <GITHUB_IP> ⇒ <JENKINS_URL>/github-webhook/
Apr 16, 2021 9:42:12 PM INFO org.jenkinsci.plugins.github.webhook.subscriber.DefaultPushGHEventSubscriber$1 run
Poked <REPO_NAME>
Apr 16, 2021 9:42:13 PM INFO com.cloudbees.jenkins.GitHubPushTrigger$1 run
SCM changes detected in <REPO_NAME>. Triggering #236
For further info on the second and third points above, see the original source.
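If you just need a fresh push to re-test the hook, an empty commit is enough to produce a new SCM change (a sketch using git through subprocess; the repository path and branch are assumptions):

# Sketch: create and push an empty commit so GitHub sends a new push event
# and Jenkins detects an SCM change. Repo path and branch are assumptions.
import subprocess

repo = "/path/to/local/clone"
subprocess.run(
    ["git", "commit", "--allow-empty", "-m", "trigger Jenkins webhook test"],
    cwd=repo, check=True,
)
subprocess.run(["git", "push", "origin", "master"], cwd=repo, check=True)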

Jenkins automation

Is there any way in Jenkins, as soon as a failed build is detected, to revert the Perforce code to the last successful build's changelist and fire a build again?
Flow:
1. As soon as we have a failed build, a notification is sent to the dev team with the check-ins that may have caused the failure.
2. Revert the recent code to the last working code and submit it.
3. Initiate a build.
It is possible, but I don't see a reason or use case for it; it is not a sound workflow and can be confusing.
But if you decide to do it, the following steps are required:
Here is an example of how to do it using Perforce source control, with the Perforce plugin for Jenkins.
Steps inside the job settings:
Before the build triggers, save the previous changelist number ($P4_CHANGELIST - 1).
Build.
Get the last error code (e.g. from a batch step).
If the code != 0, check out and build changelist $P4_CHANGELIST - 1.
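As a rough sketch of the last two steps, a post-build script could sync the workspace back to the previous changelist and queue a rebuild through the REST API (everything here - the failure flag, p4 depot path, job name and credentials - is an assumption, not the plugin's own behaviour):

# Sketch: on failure, sync back to the previous changelist and queue a rebuild.
# The BUILD_FAILED flag, depot path, job name and credentials are assumptions.
import os
import subprocess
import requests

if os.environ.get("BUILD_FAILED") == "true":  # hypothetical flag set by the job
    previous = int(os.environ["P4_CHANGELIST"]) - 1
    subprocess.run(["p4", "sync", f"//depot/project/...@{previous}"], check=True)
    requests.post(
        "http://jenkins.example.com/job/<job_name>/buildWithParameters",
        params={"P4_CHANGELIST": previous},
        auth=("ci-user", "<api-token>"),
        timeout=10,
    ).raise_for_status()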
Jenkins is not a production server. It runs tasks and, as far as I know, has no built-in option for that purpose.
What is your source code? Web apps? Something else?
What steps are you performing?
Are you running some automated tests?
My assumption is that you have some tests that may invalidate the build.
These tests should be run:
* on a mock server, to avoid deploying to your real server
* or somewhere else
That way, if the build fails, nothing is deployed.
If the build succeeds, you can deploy your project normally.
If this does not answer your question, please provide the requested information so I can understand your job process a bit better.
If you're using an artifact repository like Nexus or Artifactory to manage your project artifacts, then you could always redeploy the previous working version of your application when a failure is detected.
You're not reverting any checked-in code that potentially broke the build, but you are preserving your test environment. You can configure Jenkins to notify the user who checked in the latest erroneous change set so they can work on resolving the issue.
Jenkins also provides a rich API which allows you to delete a job, start a job, and get information about previously run jobs. You could combine some of these services with your artifact repository to achieve the behaviour you described.
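For example, a small script could look up the last successful build through the JSON API and hand its number to a redeploy job (a sketch; the job names, parameter and credentials are assumptions):

# Sketch: read the last successful build number via the JSON API and
# trigger a hypothetical redeploy job with it. All names are assumptions.
import requests

jenkins = "https://jenkins.example.com"
auth = ("ci-user", "<api-token>")

info = requests.get(
    f"{jenkins}/job/app-build/lastSuccessfulBuild/api/json",
    auth=auth, timeout=10,
).json()

requests.post(
    f"{jenkins}/job/app-redeploy/buildWithParameters",
    params={"VERSION": info["number"]},  # e.g. mapped to an artifact version in Nexus/Artifactory
    auth=auth, timeout=10,
).raise_for_status()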

Export Jenkins User Account and Security Settings?

I'm working with Jenkins servers in three different environments: Development, Staging and Production.
We work out the kinks in our Jenkins jobs in dev, test them in stage, and then finally move them to production. We do that by either replicating the job in the GUI (cut and paste) or tarring up the job directory and moving it to the next environment via the command line.
I'm wondering if the move option can be done with the service accounts that run these jobs. I can see the user account directories and config files under /var/lib/jenkins/users. What I don't see are the security settings that get applied to the user from the "Configure Global Security" screen in the GUI.
For these service accounts, we have the minimal authorization of READ on Global and READ and BUILD on Jobs.
What I'd like to be able to do is prove out a service account in dev and then promote it to Stage and Prod from the command line, rather than having to manually recreate the account in the GUI for each upstream environment. If the API key could also be moved along with it, that would be great.
Any thoughts or ideas?
User permissions are in config.xml under the Jenkins root folder, in the <authorizationStrategy> section.
This file contains other global settings, so just copying it over would not be advisable.
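If you still want to script the promotion, one approach is to copy only the per-user directory and print the relevant permission section from config.xml so it can be merged by hand (a sketch; the paths and account name are assumptions, and in recent Jenkins versions API tokens are stored hashed, so they may not transfer cleanly):

# Sketch: copy a service account's user directory and show the
# <authorizationStrategy> section of config.xml for manual review.
# Paths and the account name are assumptions for illustration.
import shutil
import xml.etree.ElementTree as ET
from pathlib import Path

src = Path("/var/lib/jenkins")       # dev JENKINS_HOME
dst = Path("/mnt/stage-jenkins")     # stage JENKINS_HOME (mounted or copied over)
account = "svc-build"

# Copy the user's own config (password hash, API token, properties).
shutil.copytree(src / "users" / account, dst / "users" / account, dirs_exist_ok=True)

# Print the global permission matrix so it can be merged by hand, not overwritten.
strategy = ET.parse(src / "config.xml").getroot().find("authorizationStrategy")
if strategy is not None:
    print(ET.tostring(strategy, encoding="unicode"))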
Just a wild thought, but why not use a master-slave configuration and trigger builds on the desired remote machine based on some "environment" parameter? You can also look through the plugins section to see if you can find something useful, such as:
the Node Label Parameter plugin, which lets you define and select the label of the node where you want the build to run
the Copy To Slave plugin, which facilitates copying files to and from a slave
That way you'll only have one job configuration, which can be executed on different environments without too much hassle.

Trigger Jenkins with Puppet

Recently our organization decided to move from using Maven/Cargo-plugin to deploy our applications to using Puppet. We still have all of our builds and test jobs in Jenkins. So what I'm trying to figure out is: how do I trigger a specific Jenkins job based on a specific line being changed in a Puppet manifest? We are using a manifest that lists all of our deployed components and their versions. If I change the version of one of the components, I want a specific test job to be triggered based on which component was changed. And eventually I will want to roll back the Puppet change if the test fails. Has anyone done something similar?
I haven't used it for your specific use case, but for a "pull" scenario where you want to monitor the contents of the Puppet manifest for changes, the Jenkins FSTrigger plugin should work for you as long as your Jenkins job can access the Puppet manifest file. You can set it up to look for changes in the entire file content, or just in a particular part of the contents.
If you want a "push" scenario to trigger a build as soon as the Puppet manifest is changed, you could write a script that runs after the changes are saved, checks which components have been changed, and triggers a build via the Jenkins CLI.
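A script along those lines could live in a commit hook or a small poller: diff the manifest, find which component's version line changed, and kick the matching test job (a sketch using the REST API rather than the CLI; the manifest format, job naming scheme and credentials are all assumptions):

# Sketch: detect changed version lines in a Puppet manifest and trigger the
# matching Jenkins test job. Manifest format, job names and credentials are
# assumptions for illustration.
import re
import subprocess
import requests

manifest = "manifests/deployed_components.pp"
diff = subprocess.run(
    ["git", "diff", "HEAD~1", "HEAD", "--", manifest],
    capture_output=True, text=True, check=True,
).stdout

# Added lines such as:  +  $webapp_version = '1.4.2'
changed = re.findall(r"^\+\s*\$(\w+)_version\s*=", diff, flags=re.MULTILINE)

for component in set(changed):
    requests.post(
        f"https://jenkins.example.com/job/test-{component}/build",
        auth=("ci-user", "<api-token>"),
        timeout=10,
    ).raise_for_status()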
