I'm setting up a development environment with Jenkins as the CI server (using pipelines), where the last build step in the Jenkinsfile is a deployment to staging. The idea is to have a staging environment for each branch that is pushed.
Whenever someone deletes a branch (sometimes after merging), Jenkins automatically removes its respective job.
I wonder if there is a way to run a custom script before the automatic job removal; then I would be able to connect to the staging server and stop or remove all services running for the job that is about to be deleted.
The multibranch-action-triggers-plugin might be worth a look.
This plugin enables building/triggering other jobs when a Pipeline job is created or deleted, or when a Run (also known as Build) is deleted by a Multi Branch Pipeline Job.
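For example, a cleanup job triggered by this plugin could look roughly like the sketch below. This is only an illustration, assuming the SSH Agent plugin is installed; the BRANCH_NAME parameter name, the staging-ssh-key credentials ID, and the staging.example.com host are placeholders, so check the plugin's documentation for the exact variables it passes to triggered jobs.

// Hypothetical cleanup pipeline, triggered when a branch job is deleted.
pipeline {
    agent any
    parameters {
        // Assumed parameter name; verify what the plugin actually provides
        string(name: 'BRANCH_NAME', description: 'Branch whose staging environment to tear down')
    }
    stages {
        stage('Teardown staging') {
            steps {
                sshagent(credentials: ['staging-ssh-key']) {
                    // Stop and remove the services deployed for this branch
                    sh "ssh deploy@staging.example.com 'docker compose -p ${params.BRANCH_NAME} down'"
                }
            }
        }
    }
}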
I have a microservice-based application in Bitbucket. To deploy each service I'm using Jenkins pipelines, and each Jenkinsfile is in the root directory. Whenever a commit is pushed, all the pipelines start, even if there are no changes relevant to that particular build.
How can I set up Jenkins so that it only triggers the pipeline of the particular service where changes were made and pushed?
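One common approach (a sketch, not something from the question's setup): if the services live in one repository, gate each service's stages with a changeset condition so a pipeline does real work only when files under its service's directory changed. The services/service-a/** path is a placeholder for your layout.

pipeline {
    agent any
    stages {
        stage('Build service-a') {
            // Runs only when the pushed commits touched this service's directory
            when { changeset 'services/service-a/**' }
            steps {
                sh './build.sh service-a'
            }
        }
    }
}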
I've set up and connected a Jenkins (2.249) server to my GitHub account, so it has access to my repos, and I've set up the GitHub webhook.
But I am having problems understanding how to create a multibranch pipeline job that detects when a push to my master branch happens; I then want to run SSH commands on another host to update a web server with the new code changes.
With Jenkins pipelines, I can't see how to detect when a push to master happens and then trigger the build. Is this possible with Jenkins? I have Blue Ocean installed as well.
Multibranch Pipeline jobs periodically check the SCM server for updates, and Jenkins sets an environment variable BRANCH_NAME to the current branch during execution.
If you only care about the master branch, you should use a regular Pipeline job that watches only master.
See the docs
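For example, in a multibranch Pipeline you can gate the deploy stage on the branch. This is a sketch assuming the SSH Agent plugin; the host, user, and web-server-ssh-key credentials ID are placeholders.

pipeline {
    agent any
    stages {
        stage('Deploy to web server') {
            // Executes only when the multibranch job builds master
            when { branch 'master' }
            steps {
                sshagent(credentials: ['web-server-ssh-key']) {
                    sh 'ssh deploy@web.example.com "cd /srv/app && git pull"'
                }
            }
        }
    }
}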
I have successfully automated the deployment and redeployment of the Jenkins master and slaves.
I know how to manually create pipeline jobs and add GitHub repos so that their Jenkinsfiles define the steps.
My issue is how to automate re-adding the pipeline jobs to Jenkins after it has been destroyed and redeployed, without having to manually create each pipeline job and point it to its Jenkinsfile every time.
I have seen this done before in a container environment with Chef and Docker: when redeployed or updated, it re-added all the pipelines automatically.
I want to use the UI only to confirm job status and progress and to verify settings.
I would recommend looking at the Job DSL plugin to create jobs, using a seed job to create them on initial Jenkins startup. The Jenkins Configuration as Code plugin can be used to set up any other configuration outside the jobs.
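A seed job's Job DSL script might look something like this sketch; the job name and repository URL are placeholders:

// Recreates a pipeline job that reads its Jenkinsfile from SCM,
// so nothing has to be clicked together in the UI after a redeploy.
pipelineJob('my-service-pipeline') {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://github.com/example/my-service.git')
                    }
                    branch('*/master')
                }
            }
            scriptPath('Jenkinsfile')
        }
    }
}

Running the seed job on startup (or defining it via Configuration as Code) then restores every pipeline without touching the UI.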
I am trying to run a pipeline job that gets its pipeline file from TFS, but the mapping of the workspace and the checkout are done on the master instead of the slave.
I have a Jenkins master installed on a Linux machine, and I connected a Windows machine to it as a slave. I created a pipeline job with the 'Pipeline script from SCM' option selected for TFS.
How can I make the Windows slave run that pipeline job?
The master can't run the job because it is running on Linux; it fails when it tries to map a TFS workspace in order to download the pipeline script and run it.
Even if I create another pipeline job with a hard-coded script that runs my original pipeline job, like this:
node('WIN_SLAVE') {
    build job: 'My_Pipeline'
}
It doesn't work.
And I can see in the output that the initial script (above) is in fact running on my Windows slave, but when it builds the job 'My_Pipeline' it still tries to map a workspace on the Jenkins master at its Linux path /var/jenkins/... and fails.
If the initial pipeline script ran on the Windows slave, why isn't the other pipeline script running on the same node? Why does it try again to check out the pipeline file from TFS to the Jenkins master?
How can I make the Windows slave check out the pipeline file and run it?
Here are some things to check...
Make sure you disabled the original job, or that you are completely redefining it to run on the slave, because you indicated you set up “another job” for the slave. It appears that this other job just triggers the previous job rather than defining its own specifications. When the job is run on the slave, it's just running whatever settings are in that original job.
Also, if you have the box checked to build when a change is pushed to TFS, your original job could still be trying to run every time a change is made in TFS.
Verify the slave's Remote root directory is set properly in the slave configuration under Manage Jenkins -> Manage Nodes.
Since this slave job triggers the other job you originally created on the master, it will build on the master as expected.
Instead of referencing the My_Pipeline job, change the My_Pipeline job itself to run on the slave. If the original job uses a declarative Pipeline, change the original job to run on the slave within its own settings. You can do it similarly to what you indicated above: just define the node in the original job.
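For the declarative case, pinning the original job to the slave can look like this sketch (WIN_SLAVE is the node label from the question; the stage contents are placeholders):

pipeline {
    // Run all stages, and their workspace checkouts, on the Windows slave
    agent { label 'WIN_SLAVE' }
    stages {
        stage('Build') {
            steps {
                bat 'echo building on %NODE_NAME%'
            }
        }
    }
}

Note that with 'Pipeline script from SCM' the Jenkinsfile itself is still fetched by the master, so the TFS plugin must be able to perform at least that initial checkout there.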
If the original job is a freestyle project, there is a checkbox titled Restrict where this project can be run. Check that and include the name of the slave in the Label Expression. When you run the job, it will then be restricted to the slave.
Lastly, posting the My_Pipeline job will be helpful.
I have configured a multibranch pipeline for my Bitbucket repository. My configuration is that the staging branch is automatically triggered and deployed to the staging environment.
Now I want to implement a use case where I can select one build from ‘Build history’ and deploy that build to production. Can anyone give a suggestion on how to solve this?
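One common pattern (a sketch, not from the original setup): archive the deployable artifact in the staging build, then add a separate parameterized production-deploy Pipeline that copies the artifact of a chosen build number using the Copy Artifact plugin. The my-repo/staging job name, SOURCE_BUILD parameter, and dist/** artifact path are placeholders.

// Hypothetical promotion job: the operator enters the staging build number to deploy.
pipeline {
    agent any
    parameters {
        string(name: 'SOURCE_BUILD', description: 'Staging build number to promote')
    }
    stages {
        stage('Fetch artifact') {
            steps {
                // Requires the Copy Artifact plugin
                copyArtifacts projectName: 'my-repo/staging',
                              selector: specific(params.SOURCE_BUILD),
                              filter: 'dist/**'
            }
        }
        stage('Deploy to production') {
            steps {
                sh './deploy.sh production dist/'
            }
        }
    }
}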