I have a Fastlane file in my iOS project. When I increase the version number of the project and push the code to GitHub, the pipeline is triggered every time. To solve this problem, I want to write the version number to another file and ignore that file. Will this solve my problem? Is there another solution?
I suppose your pipeline is triggered via GitHub Actions.
Look in .github/workflows/; you should have something like:
on:
  push:
You can exclude certain events from triggering the pipeline and define which ones should trigger it.
For example, to skip commits on main and run only on tags starting with v:
on:
  push:
    branches-ignore:
      - main
    tags:
      - v*
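For the original question about the version bump, you can also filter on paths, so that pushes touching only the file holding the version number don't start the workflow. A minimal sketch, assuming the version is written to a file called fastlane/version.txt (the path is just an example):

on:
  push:
    paths-ignore:
      - 'fastlane/version.txt'   # pushes that change only this file won't trigger the workflow

Note that the workflow still runs if a push changes that file together with other files.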
I have Jenkins set up using the 'GitHub Pull Request Builder' plugin and for the most part it's great. However, extra builds are being triggered that reference closed PRs.
For example, if I create PR 8, it will also build:
origin/pr/7/head
origin/pr/8/merge
How can I stop it from doing these 2 extra builds?
Check the docs here:
https://github.com/jenkinsci/ghprb-plugin#creating-a-job
If you just want to build PRs, set the refspec to +refs/pull/${ghprbPullId}/*:refs/remotes/origin/pr/${ghprbPullId}/*
I am running Jenkins version 2.85 on Kubernetes as a pod (affinity set to one worker node). I am creating jobs using the Salt Jenkins module by passing XML to it.
I am using a Jenkins Global Library for performing job execution.
My job config looks like this:
I am calling the GlobalLibrary with my parameters like repoURL, component, etc.
Things go well for weeks, and now I have landed in a weird situation where my job configurations (config.xml) get updated/reverted automatically.
Intermittently my "Build with Parameters" option disappears and I can see only "Build Now" in the Jenkins GUI. Initially I thought someone was doing this, so to track the config changes I installed the Job Config History plugin in Jenkins, and what I found is strange: someone with the username "SYSTEM" is making/reverting the changes.
This is how it looks
What I find is that the SYSTEM user reverts only the job config changes, not the pipeline.
I am not sure what's going wrong behind the scenes or how to stop or fix it. This is my production instance, so I am all the more worried.
I can see a SYSTEM user in my Jenkins,
but I cannot delete that user.
A few relevant questions I found on this, but with no answers:
Configuration of Jobs getting updated by System user on Jenkins
Jenkins SYSTEM user removes custom workspace configuration
I am not sure if this is a Jenkins bug or some plugin playing with my soul.
Need help! :(
Okay, I found the answer to this problem.
I had used properties in my Jenkins Global Library, something like this:
// Disable concurrent builds
properties([disableConcurrentBuilds()])
which overrides my external job configuration (done via Salt): when the pipeline runs, the properties step rewrites the job's config.xml, and that change shows up in the config history as the SYSTEM user.
I got the hint from this blog:
https://st-g.de/2016/12/parametrized-jenkins-pipelines
I also had this problem. For me it was solved when I changed the Build Triggers -> Build periodically setting from 'H 23 * * *' to '00 23 * * *' (as I want my build to execute every night at 23:00). The H lets Jenkins decide when to run the job somewhere between 23:00 and 23:59 to spread the load evenly. It seems Jenkins sometimes decided that it would be best to run my job on a different server and changed the parameters automatically.
In my case the issue was that the Jenkinsfile was removing the parameters I had added to the pipeline from the Jenkins console. Adding the same parameters in the Jenkinsfile (stage -> script -> properties -> parameters) solved the issue.
In a nutshell, make sure that your pipeline script uses the same configuration as your pipeline job.
Jenkins documentation on parameters: https://www.jenkins.io/doc/book/pipeline/syntax/#parameters
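As a rough sketch of what "declaring the parameters in the script" looks like in a scripted pipeline (the parameter name and default value here are just examples):

    // keep the job's parameters declared in the pipeline script itself,
    // so the Jenkinsfile and the job configuration stay in sync
    properties([
        parameters([
            string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: 'Target environment')
        ])
    ])

    node {
        echo "Deploying to ${params.DEPLOY_ENV}"
    }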
We want to use Jenkins to generate releases/deployments on specific project milestones. Is it possible to trigger a Jenkins Pipeline (defined in a Jenkinsfile or Groovy script) when a tag is pushed to a Git repository?
We host a private GitLab server, so GitHub-only solutions are not applicable in our case.
This is currently something that is sorely lacking in the pipeline / multibranch workflow. See a ticket around this here: https://issues.jenkins-ci.org/browse/JENKINS-34395
If you're not opposed to using release branches instead of tags, you might find that to be easier. For example, if you decided that all branches that start with release- are to be treated as "release branches", you can go...
if( env.BRANCH_NAME.startsWith("release-") ) {
// groovy code on release goes here
}
And if you need to use the name that comes after release-, such as release-10.1 turning into 10.1, just create a variable like so...
if( env.BRANCH_NAME.startsWith("release-") ) {
def releaseName = env.BRANCH_NAME.drop(8)
}
Both of these will probably require some method whitelisting in order to be functional.
I had the same desire and rolled my own; maybe not pretty, but it worked...
In your pipeline job, mark "This project is parameterized" and add a parameter for your tag. Then, in the pipeline script, check out the tag if it is present.
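A minimal sketch of that pipeline side, assuming a string parameter named RELEASE_TAG and a made-up repository URL:

    node {
        if (params.RELEASE_TAG?.trim()) {
            // build the tag that was passed in by the triggering job
            checkout([$class: 'GitSCM',
                      branches: [[name: "refs/tags/${params.RELEASE_TAG}"]],
                      userRemoteConfigs: [[url: 'https://gitlab.example.com/group/repo.git']]])
        } else {
            // no tag given: fall back to whatever the job normally checks out
            checkout scm
        }
        // ... build and release steps ...
    }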
Create a freestyle job that runs a script to:
Checkout
Run git describe --tags --abbrev=0 to get the latest tag.
Check that tag against a running list of builds (like in a file).
If the build hasn't occurred, trigger the pipeline job via a URL, passing your tag as a parameter (in your pipeline job under "Build Triggers", enable "Trigger builds remotely (e.g., from scripts)" and it will show the correct URL).
Add the tag to your running list of builds so it doesn't get triggered again.
Have this job run frequently.
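The steps above could also be rendered as a small scripted pipeline instead of a freestyle shell script. A rough sketch, where the job name 'release-pipeline', the parameter name RELEASE_TAG, and the marker file built-tags.txt are all assumptions, and the marker file only survives if the workspace is reused between runs:

    node {
        checkout scm
        // latest tag reachable from the checked-out branch (empty if there are no tags)
        def tag = sh(script: 'git describe --tags --abbrev=0 || true', returnStdout: true).trim()
        // running list of tags that have already been built
        def seen = fileExists('built-tags.txt') ? readFile('built-tags.txt').readLines() : []
        if (tag && !seen.contains(tag)) {
            // trigger the parameterized pipeline job with the new tag
            build job: 'release-pipeline', parameters: [string(name: 'RELEASE_TAG', value: tag)]
            writeFile file: 'built-tags.txt', text: (seen + tag).join('\n')
        }
    }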
If you use a multibranch pipeline, there is a "Discover tags" behaviour. Use that plus Spencer's solution.
Prior to Jenkins 2 I was using the Build Pipeline Plugin to build and manually deploy the application to a server.
Old configuration:
That works great, but I want to use the new Jenkins Pipeline, generated from a Groovy script (Jenkinsfile), to create the manual step.
So far I have come up with the input Jenkins step.
Jenkinsfile script used:
node {
    stage 'Checkout'
    // Get some code from repository
    stage 'Build'
    // Run the build
}
stage 'deployment'
input 'Do you approve deployment?'
node {
    //deploy things
}
But this waits for user input, noting that the build is not completed. I could add a timeout to the input, but this won't allow me to pick/trigger a build and deploy it later on:
How can I achieve the same/similar result for a manual step/trigger with the new Jenkins Pipeline as I had with the Build Pipeline Plugin?
This is a huge gap in the Jenkins Pipeline capabilities IMO. It is definitely hard to provide because a pipeline is a single job. One solution might be to "archive" the workspace as an "artifact" (tar and archive **/* as workspace.tar.gz), and then have another pipeline copy the artifact and untar it into its new workspace. This allows the second pipeline to pick up where the previous one left off. Of course there is no way to guarantee that the second pipeline cannot be executed out of turn or more than once, which is too bad. The Delivery Pipeline Plugin really shines here: you execute a new pipeline right from the view, instead of from the first job. Anyway, not much of an answer, but it's the path I'm going to try.
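A rough sketch of that hand-off, assuming the Copy Artifact plugin is installed and using made-up job names:

    // first pipeline: archive the whole workspace as a single artifact
    node {
        stage('Build') {
            // ... checkout and build ...
            sh 'tar -czf workspace.tar.gz --exclude=workspace.tar.gz .'
            archiveArtifacts artifacts: 'workspace.tar.gz'
        }
    }

    // second pipeline: restore the archived workspace and deploy
    node {
        stage('Deploy') {
            copyArtifacts projectName: 'first-pipeline', selector: lastSuccessful()
            sh 'tar -xzf workspace.tar.gz'
            // ... deploy steps ...
        }
    }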
EDIT: This plugin looks promising:
https://github.com/jenkinsci/external-workspace-manager-plugin/blob/master/doc/PIPELINE_EXAMPLES.md
I have a Jenkins server that I'd like to download build artifacts from. The problem is that the way the job is set up, the build artifact includes the job number e.g. NightlyBuild-346.tar.bz2. We like the job numbers, because they make it easy to know how old a specific build is.
This becomes problematic because I don't know the precise name of the file I'm downloading--I just know I want the last successful build. I could do something like this:
- name: download build from CI
  get_url:
    url: "https://ci.contoso.com/job/NightlyBuild/lastSuccessfulBuild/artifact/NightlyBuild-345.tar.bz2"
    dest: /tmp/NightlyBuild-345.tar.bz2
...but this will break after Jenkins finishes the next nightly build, because the artifact will become NightlyBuild-346.tar.bz2. I think I have a few options here:
Try to use wildcards in the get_url module (not so sure about that)
Download ALL artifacts from the job (there are several) as a single archive.zip and use command-line and regex magic to find the actual build artifact I care about (potential for a hot unmaintainable mess).
Use the REST API to obtain the job number for the last successful job and form the full URL. (not sure that Ansible allows me to set variables on-the-fly like that).
Are these my options? Is there a better way to go about this? I want to eventually publish to an Artifactory repository from Jenkins, and if that's the right thing to do here, I'd appreciate some pointers in that direction too.
You can query Jenkins for the build number with the uri module:
- uri:
    url: http://ci/job/NightlyBuild/lastSuccessfulBuild/buildNumber
    return_content: yes
  register: build_number_resp

- debug: msg="URL with build number http://ci/job/NightlyBuild/lastSuccessfulBuild/artifact/NightlyBuild-{{ build_number_resp.content }}.tar.bz2"
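You can then feed that number straight into get_url. A sketch, reusing the host and artifact naming from the question:

- name: download last successful build
  get_url:
    url: "http://ci/job/NightlyBuild/lastSuccessfulBuild/artifact/NightlyBuild-{{ build_number_resp.content }}.tar.bz2"
    dest: "/tmp/NightlyBuild-{{ build_number_resp.content }}.tar.bz2"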
Since Ansible 2.0 the maven_artifact module is available. The module supports Maven version coordinates via the version parameter. Use it like this:
- maven_artifact:
    group_id: junit
    artifact_id: junit
    dest: /tmp/junit-latest.jar
    version: latest
    repository_url: http://your-artifactory