Trigger Jenkins job on checkin

My source code management tool is Perforce, and auto-deployment of the latest build is done using Jenkins. Whenever a new QA build comes in, two folders are updated within 2 minutes of each other, namely VersionNumber and AppFiles, which hold the version number and the files required for deployment, respectively.
I continuously poll for changes in the VersionNumber and AppFiles folders in order to deploy the build immediately, using the view shown below:
//depot/AppFiles/... //AutoDeployment/depot/AppFiles/...
//depot/VersionNumber/... //AutoDeployment/depot/VersionNumber/...
What happens is that Jenkins runs the job twice, since there are two checkins in Perforce, one for VersionNumber and one for AppFiles, one after the other. I actually want Jenkins to run the job only after it has synced both folders, because without either of them the auto-deployment will fail.
Is this possible?
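A minimal declarative Jenkinsfile sketch of one possible approach, assuming a single job whose Perforce workspace maps both paths; the quiet-period value, the workspace-relative folder names and the deploy step are assumptions, not part of the original setup:

pipeline {
    agent any
    options {
        // Assumption: hold the queued build for a few minutes so that the two
        // checkins (VersionNumber and AppFiles, submitted within ~2 minutes of
        // each other) are folded into a single run instead of two.
        quietPeriod(180)
    }
    stages {
        stage('Deploy') {
            steps {
                script {
                    // Guard with hypothetical workspace-relative paths: only deploy
                    // once both folders have been synced.
                    if (!fileExists('depot/VersionNumber') || !fileExists('depot/AppFiles')) {
                        error 'VersionNumber or AppFiles missing from the sync; not deploying'
                    }
                }
                echo 'Both folders synced, running deployment...'
            }
        }
    }
}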

Related

Jenkins - Copy Artifacts from upstream job built in different node

There is a job controlled by the Development team which is built on a different node. I am on the Testing team and want to take the artifacts and deploy them on a test device.
I can see those artifacts from dev are stored in some path on dev's node. Does that mean they must first be archived on the Jenkins master before I can copy them to my job?
I am using Copy Artifact plugin and constantly getting the error
Failed to copy artifacts from <dev-job> with filter: <path-in-dev-node>
*Some newbie questions, since I just moved from TeamCity.
You probably want to use: Copy Artifact plugin.
Adds a build step to copy artifacts from another project.
Consider also the Jenkins post-build step "Archive the artifacts".
If you copy from the other job's workspace, what happens if another build is in progress or the workspace is wiped? The archive step copies the artifacts from the node to the master and stores a copy along with the build logs, etc. That makes them available via the UI for as long as the build logs remain. It can take up space, though.
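A rough pipeline-syntax sketch (the job name, paths and filter are placeholders, not your actual values): the dev job archives its outputs, and the test job copies them with the Copy Artifact plugin's copyArtifacts step.

// In the dev job: archive the build outputs so a copy is stored on the master
// alongside the build record.
archiveArtifacts artifacts: 'build/output/**', fingerprint: true

// In the test job: copy those archived artifacts from the last successful dev build.
copyArtifacts projectName: 'dev-job',
              selector: lastSuccessful(),
              filter: 'build/output/**',
              target: 'incoming'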
If you do use archive artifacts, consider using the system property jenkins.model.Jenkins.buildsDir to store all the build logs (and artifacts) outside of the jobs' config directory. Some downtime and work is required to separate the two (config / logs).
You may also want to consider using a proper repository manager (Nexus / Artifactory).
Finally, you may want to learn about using a Jenkins pipeline rather than relying on chained jobs, triggers, users and so forth. Why? Because it's much more controlled and easier to maintain.
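For example, a minimal sketch (the stage names and shell commands are assumed) where build and deploy live in one pipeline, so nothing has to be copied between jobs at all:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make package'      // assumed build command; artifacts land in this workspace
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh test'  // assumed deploy script, runs against the same workspace
            }
        }
    }
}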
ps: I'm not a huge fan of artifactDeployer, but it may work for you.
pps: you may want to review this in-depth answer: Jenkins downstream job fails to find upstream artifacts

When do Jenkins workspaces get preserved?

I have a bunch of pipeline jobs, yet when they are executed, the workspaces of some get preserved while others are deleted. How does Jenkins make these decisions?
Based on my findings so far:
All jobs executed on nodes will have their workspace persisted, e.g. /home/ec2-user/workspaces/some-job
Some jobs on the master keep their workspaces, but others' workspaces disappear after the job has finished. For example, after my build job succeeded, if I ssh in I can see its workspace directory; but all my e2e jobs have no workspace.
Note I didn't use cleanWs, deleteDir, etc. in any of my pipelines.
By the way, the reason I'm looking into workspaces is that disk usage keeps increasing and I want to clean it up. I thought the workspace was overwritten each time a job runs, yet I have gotten the 'Disk space is too low' warning several times.
By default, Jenkins creates a new workspace for every build job (= run). You can see that in the path of the workspace in your console log: /here/is/the/ws#buildnumber. If you don't want that behavior, you can set it to a path which is, for instance, the same for every repo: How to set specific workspace folder for jenkins multibranch pipeline projects
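For instance, a sketch using the customWorkspace option of the node agent block (the label and the path are placeholders):

pipeline {
    // Assumption: pin the job to one node and reuse a fixed workspace path
    // instead of the default per-build path.
    agent {
        node {
            label 'linux'
            customWorkspace '/var/jenkins/workspaces/my-repo'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'pwd'   // prints /var/jenkins/workspaces/my-repo on every run
            }
        }
    }
}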
Maybe some of your jobs don't get executed on the Jenkins master, but on some connected node (via an agent directive within your Jenkinsfile or pipeline description). If that's the case, you won't see a workspace directory for this job on the Jenkins master, only on the connected node.
You would only get the build results (like artifacts, reports, etc.) under /<JENKINS_HOME>/jobs/My_Job/ on the Master.
Remember that you could also trigger a Jenkins build on a node indirectly if, for example, you run the build within a Dockerfile and have configured (within the Jenkins configuration) a specific node label for the execution of Docker builds.
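A small sketch of that indirect case, assuming a Dockerfile in the repository and a placeholder label for Docker-capable nodes:

pipeline {
    // Jenkins builds the image from the repo's Dockerfile and schedules the run on a
    // node matching the label, so the workspace ends up on that node, not the master.
    agent {
        dockerfile {
            label 'docker'           // placeholder label for Docker-capable nodes
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package'  // assumed build command, executed inside the container
            }
        }
    }
}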

Jenkins Multibranch Pipeline - Issues with deleting jobs

Use case: using a Jenkinsfile to auto-create builds for branches
Summary:
For a variety of reasons, sometimes the Jenkins master fails to connect to the SCM server. When this occurs, Jenkins deletes that job's directory on the master, because it no longer sees the branches. However, the slaves are not cleaned up, so they still have the old workspace paths (which are uniquely named based on the build # in my setup). When the Jenkins master reconnects to the SCM server, it recreates the job folder on the master, and the build counter is reset to #1.
This creates the following issues:
When a build starts, it executes on a slave. Since the master has a new counter, the job runs as build #1. But that workspace path may already exist from a previous build on that slave, so the artifact is built with content that was checked out for the original old build (i.e. Maven uses the /target directory inside the workspace, which already existed from the previous build). So the end result is an artifact that potentially has the wrong code.
This can create build storms. After the connection issues are resolved, Jenkins will see all the repositories and branches with Jenkinsfiles and start to build them. So in a setup of let's say 20 repositories with 10 branches each, this will create 200 new builds. This increases with additional repositories and branches. This is obviously not desired.
Solutions:
One quick solution I can think of is to update the Jenkinsfile to delete the workspace, if it exists, before running the job inside of it (see the Jenkinsfile sketch below these options). But this is just a workaround. I would not want to mask the connection issues, and I would like to retain the actual build history of a pipeline (not have it keep erasing itself).
Minimize connection issues. This obviously cannot always be guaranteed, though. Plus, sometimes maintenance must force servers offline. While I can structure maintenance in a way that limits or works around such issues, there will still be rare cases where downtime is required across the board. It would be best if Jenkins could handle this use case.
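A Jenkinsfile sketch of that first workaround, assuming the Workspace Cleanup plugin for cleanWs() (deleteDir() would do as well) and a Maven build:

pipeline {
    agent any
    stages {
        stage('Clean workspace') {
            steps {
                // Wipe whatever a previous build (possibly from a deleted and recreated
                // job with a reset build counter) left behind, e.g. a stale /target dir.
                cleanWs()
            }
        }
        stage('Checkout and build') {
            steps {
                checkout scm
                sh 'mvn -B clean package'   // assumed build command
            }
        }
    }
}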
I'm curious whether anyone has run into this issue, and what your thoughts are on this problem.

How do I trigger deploy after the successful build of a specific branch?

I have a Jenkins task that triggers on any changes made to a gitlab project.
There are a few situations I'd like to be able to set up; however, I'm not sure how best to accomplish them. Most of it centers around being able to do the following:
Once the job is complete, I'd like to trigger another job that takes the contents of the first job's workspace (emptying out the initial one).
I'd like a way to only run certain other jobs when the workspace contains a specific branch (automatically deploy the develop branch to a preview environment).
"to trigger another job that takes the contents of the first job's workspace" see Shared workspace plugin:
This plugin allows Jenkins jobs with the same SCM repos to share workspaces.
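For the branch-specific deployment, a rough sketch assuming a multibranch pipeline; the downstream job name, the deploy target and the branch condition are placeholders:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // assumed build command
            }
        }
        stage('Deploy develop to preview') {
            // Only runs when this (multibranch) pipeline is building the develop branch.
            when { branch 'develop' }
            steps {
                // Either deploy directly here, or trigger a separate job (hypothetical
                // name) that reaches this job's files via the shared workspace above.
                build job: 'deploy-to-preview', wait: false
            }
        }
    }
}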

Way to clone a job from one Jenkins to another

I have two Jenkins instances, both masters. Each has 5 slave Jenkins nodes. I have one job on the first Jenkins that needs to be cloned for each job.
I can clone the job on the first Jenkins and its slaves, but not on the second master Jenkins. Is there a way to clone a job from one Jenkins to another?
One more question: can I archive the job at some defined location other than the Jenkins master, maybe on a slave?
I assume you have a job called "JOB" on "Jenkins1" and you want to copy it to "Jenkins2":
curl JENKINS1_URL/job/JOB/config.xml | java -jar jenkins-cli.jar -s JENKINS2_URL create-job JOB
You might need to add a username and password if you have turned on security in Jenkins. The jenkins-cli.jar is available from your $JENKINS_URL/cli page.
Ideally you should make sure you have the same plugins installed on both Jenkins1 and Jenkins2. The more similar you can make the two Jenkins masters, the fewer problems you will have importing the job.
For the second part of your question: slaves don't store any Jenkins configuration. All configuration is done on the master. There are a lot of backup plugins; some back up the whole Jenkins, some back up just the job configuration, some back up individual jobs, export them to files, or even store/track changes in an SCM such as SVN.
So "archiving job configuration to slave" simply makes no sense. But at the end of the day, a job configuration is simply an .xml file, and you can take that file and copy it anywhere you want.
As for the first part of the question, it's unclear what you want. Do you want to clone a job automatically (as part of another job's process), programmatically (through some script) or manually (through the UI, other means)?
Edit:
Go to your JENKINS_HOME directory on the server filesystem, navigate to the jobs folder, then select the specific job folder that you want.
Copy the config.xml to the other server; this will create the same job with the same configuration (make sure your plugins are the same).
Copy the whole job_name folder if you want to preserve the history, builds, artifacts, etc.
