I have a Maven Jenkins job which builds to a directory called 'build.x86_64'. All of the artifacts are built into this directory. For some reason, if I enable SCM polling, this directory gets deleted after the build completes. I can't see anything in the console output which says it is deleting the target.
Jenkins does, however, keep the build artifacts in its own configured directory
/var/lib/jenkins/jobs/[my job]/builds
I have a downstream job which needs the artifacts, but they keep getting deleted.
If I turn off SCM polling and use the 'Build Now' option in the GUI, it doesn't delete the build directory. I can't see anything in the configuration which could cause this. The Jenkins job is cloned from one with the same configuration. The problem does not occur in the job I cloned from.
This was caused by a misconfiguration of the Source Code Management section of the Jenkins config. Under the Additional Behaviours section I had added 'Clean before checkout'. It should have been 'Clean after checkout'.
Related
I have an SVN repo and a certain Jenkins job for the stuff therein. Using the Jenkins SVN plugin's "include regions" feature, I can configure Jenkins to poll changes in certain folders or file types. But that is for triggering the job. When the actual job starts to execute, how do I know which files' changes triggered the build?
I can easily grep the answer out of the svn log in a shell script if there is only one commit that triggers the build. But if an unknown number of commits caused my Jenkins job to start, I'm in trouble.
I'm asking this because I want my Jenkins job to run certain analysis ONLY for those files whose change triggered the build.
Multiple commits pushed at once can also trigger a single build, so you are in that situation already. My suggestion: maintain a file in the job's workspace in which, at the end of every build, you save the revision that was just built. At the start of your script, diff from that saved revision to the current HEAD and check for changes in your files as per your constraints, then run the analysis only if the conditions are met. Hope this helps.
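A minimal sketch of that idea (untested; the LAST_BUILT_REV file name and the *.c filter are placeholders you would adapt to your own setup):
#!/bin/sh
# Runs inside the job's SVN working copy ($WORKSPACE).
set -e
REV_FILE="$WORKSPACE/LAST_BUILT_REV"                       # our own bookkeeping file
CURRENT_REV=$(svn info | awk '/^Revision:/ {print $2}')
if [ -f "$REV_FILE" ]; then
  LAST_REV=$(cat "$REV_FILE")
  # Files changed between the previously built revision and the current one.
  CHANGED_FILES=$(svn diff --summarize -r "$LAST_REV":"$CURRENT_REV" . | awk '{print $NF}')
else
  CHANGED_FILES=""                                         # first run: no baseline yet
fi
for f in $CHANGED_FILES; do
  case "$f" in
    *.c) echo "analysing $f" ;;                            # replace with the real analysis call
  esac
done
echo "$CURRENT_REV" > "$REV_FILE"                          # baseline for the next build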
I assume this may be resolved by now, but just in case: there is also the Last Changes plugin for Jenkins.
https://github.com/jenkinsci/last-changes-plugin
It makes a diff between what was previously in that environment and what is about to be pushed, and gives you the result.
I have Perforce managing our source. I have an application that uses Perforce as the back end. I have also set up an automated test tool that runs my application and performs automated tests. I want Jenkins to trigger the test every time there is a change in the source code. However, my Jenkins instance messes up the workspace root: it creates its own workspace root, and that causes my application to fail. Jenkins actually overwrites the Perforce client's workspace root. So every time I try to get Jenkins to work, I have to go and edit the workspace root in Perforce and reset it to the required value. I have tried both letting Jenkins manage the workspace and clearing the option to do so, but have failed. Is there any way to make Jenkins use my workspace (root) settings and not change them?
Jenkins 'owns' the Perforce workspace used for the build, hence it sets the root.
Your application should ideally build and run independently of its location. However, there is the 'Advanced Project Options' --> 'Use custom workspace' configuration option in Jenkins.
Jenkins must have its own Perforce workspace, separate from your own development workspace. Use a Template workspace to mirror the options, or create a Manual workspace for Jenkins' own use.
Please note that there are two Perforce plugins for Jenkins, p4 and perforce; the documentation for the p4 plugin is located here.
The setup is used to build and deploy to Adobe AEM.
The Master Build job pulls from a git repository, builds and packages, runs the tests, and then fires downstream jobs that should use the built packages from the upstream job.
The issue is that the downstream job fails with the message:
Unable to access upstream artifacts area /var/lib/jenkins/jobs/PROJECTNAME-Master-Branch/builds/2014-10-22_11-33-46/archive. Does source project archive artifacts?
It seems to me that somehow the CopyArtifacts plugin, triggered by the downstream job, is looking for the artifacts in the wrong location. The correct location would be
/var/lib/jenkins/jobs/PROJECTNAME-Master-Branch/workspace/PROJECTNAME-*/**/*.jar,/var/lib/jenkins/jobs/PROJECTNAME-Master-Branch/workspace/PROJECTNAME-*/**/*.zip
But then, it complains about
java.io.IOException: Expecting Ant GLOB pattern, but saw '/var/lib/jenkins/jobs/PROJECTNAME-Master-Branch/workspace/PROJECTNAME-*/**/*.jar,/var/lib/jenkins/jobs/PROJECTNAME-Master-Branch/workspace/PROJECTNAME-*/**/*.zip'. See http://ant.apache.org/manual/Types/fileset.html for syntax
The downstream job copies artifacts from another project, and the build to copy from was set to either "Upstream build that triggered this job" or "Copy from workspace of latest completed build". Neither works.
Any ideas?
TL;DR
You are trying to use artifacts without archiving them first.
You are trying to use absolute paths, but they should be relative to $WORKSPACE and/or "archive location".
Full Answer
You are misunderstanding the concept of "Artifacts" as it relates to Jenkins.
What are Jenkins Artifacts
Artifacts are files that are specifically preserved after the build with the help of the Archive the Artifacts post-build action.
When the build runs, it runs within:
$WORKSPACE, which on the filesystem usually resides within
$JENKINS_HOME/jobs/$JOB_NAME/workspace
Inside there, you can have your SCM checkout folders, temporary build files, final built files, binaries, etc.
The contents of $WORKSPACE are volatile; you should never rely on them outside of the build timeframe (and downstream jobs are outside of the build timeframe). The contents of $WORKSPACE can differ between master/slave nodes, and can be deleted at any time by an admin or by an SCM update/cleanup/checkout.
It's also important to understand that there is only one $WORKSPACE for the whole Job.
But now pay attention to your Build History: there are several entries in that list, referenced by build number (#) and date timestamp.
These are stored under:
$JENKINS_HOME/jobs/$JOB_NAME/builds/$BUILD_ID
with $BUILD_ID being the date-timestamp of the build, like 2014-10-22_11-33-46
The $WORKSPACE contains the information relevant to the current or last build (and the problem is: you can never be sure whether it's "current" or "last");
The builds folder contains a record of all past (retained) build executions (this is what makes up the Build History list on your left), per build.
By default, it contains only what Jenkins itself needs: build.xml copy, changelog information, console log. When you go to URL http://$JENKINS_URL/job/$JOB_NAME/[nn]/ where [nn] is a numeric job build/run number (#), it's reading this information from the builds folder on the filesystem.
To preserve the artifacts of a build (to avoid them being overwritten by the next build or a wiped workspace, or simply to access older builds), you need to Archive the Artifacts (using the post-build action of that title). When you archive the artifacts, you indicate which files within $WORKSPACE you want to preserve. When Jenkins does the archiving, it will place those files (keeping paths relative to $WORKSPACE preserved) into:
$JENKINS_HOME/jobs/$JOB_NAME/builds/$BUILD_ID/archive/.
This way, you can have multiple sets of artifacts preserved for previous builds, not just "latest/last" from $WORKSPACE.
For the sake of completeness, I will mention that Jenkins's "permalinks", such as http://$JENKINS_URL/job/$JOB_NAME/lastSuccessfulBuild and /lastFailedBuild, etc., are in fact symlinks on the filesystem to one of the preserved builds/$BUILD_ID folders.
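As an illustration of the layout described above (the job name, build id, and artifact path below are hypothetical examples):
$JENKINS_HOME/jobs/MyJob/
    config.xml                                 # job configuration
    workspace/                                 # $WORKSPACE: volatile build area
    builds/
        2014-10-22_11-33-46/                   # one preserved build ($BUILD_ID)
            build.xml                          # what Jenkins itself needs
            changelog.xml
            log                                # the console output
            archive/                           # present only if artifacts were archived
                PROJECTNAME-core/target/PROJECTNAME-core-1.0.jar
        lastSuccessfulBuild -> 2014-10-22_11-33-46   # permalink symlink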
Lastly, you control how many build runs and how many artifacts are retained (they can be configured separately) through the "Discard old builds" checkbox in the job configuration. By default, all are retained, but if you start retaining artifacts, you need to think about hard-disk space capacity.
Solutions to your problem
So with the information above, and looking at your error messages, you should now see that the Copy Artifacts plugin is correctly looking for artifacts under the /archive/ section of a build.
You should also notice that Copy Artifacts plugin does not let you pick "current build" when selecting which build to copy from. It has permalinks (like "last successful" or "last build"), and specific build numbers, all of which translate to preserved builds under $JENKINS_HOME/jobs/$JOB_NAME/builds/$BUILD_ID/archive/
Even "Upstream Build that triggered this job" will link to a specific $BUILD_ID.
In either of the options below:
Configuration for Archiving Artifacts is relative to $WORKSPACE.
Configuration for Copy Artifacts is relative to "archive location", that is $JENKINS_HOME/jobs/$JOB_NAME/builds/$BUILD_ID/archive/.
Since "Copy Artifacts" is relative to "archive location", and "archive location" is relative to $WORKSPACE, then for all intensive purposes, the relative paths of both configurations can be same and relative to $WORKSPACE
Option 1
First Archive the Artifacts with the post-build action, otherwise you have nothing to copy from.
If you have your files in the root of $WORKSPACE, it should be:
PROJECTNAME-*/**/*.jar,PROJECTNAME-*/**/*.zip
(Note: no full paths in here; a consolidated sketch of both configurations follows at the end of this option.)
Then use Upstream Build that triggered this job for Copy Artifacts selection.
For the Artifacts to copy field use either:
** or blank to copy all archived artifacts, or
PROJECTNAME-*/**/*.jar,PROJECTNAME-*/**/*.zip (same as the archiving section)
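Putting Option 1 together, the two configuration fields end up looking roughly like this (the project names are the placeholders from the question):
Upstream job, post-build action "Archive the artifacts":
    Files to archive:   PROJECTNAME-*/**/*.jar,PROJECTNAME-*/**/*.zip    (resolved against $WORKSPACE)
Downstream job, build step "Copy artifacts from another project":
    Project name:       PROJECTNAME-Master-Branch
    Which build:        Upstream build that triggered this job
    Artifacts to copy:  PROJECTNAME-*/**/*.jar,PROJECTNAME-*/**/*.zip    (resolved against .../builds/$BUILD_ID/archive/)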
Option 2
If you don't want to archive, you can use $WORKSPACE directly, with Copy from workspace of latest completed build; however, you must ensure that no second upstream build can run while the downstream build is executing, else you risk getting a partial file from a partial build, because, as previously explained, $WORKSPACE is volatile.
Again, for the Copy Artifacts step, under the Artifacts to copy field, use a path relative to $WORKSPACE, that is:
PROJECTNAME-*/**/*.jar,PROJECTNAME-*/**/*.zip
Option 3
If you really want to copy the whole $WORKSPACE between different jobs, use either
Clone Workspace SCM plugin or
Shared Workspace plugin
The fix may be this simple: disable or remove the Compress Artifacts plugin and restart Jenkins.
This workaround was deduced from a long-standing bug report: "Copy Artifacts Plugin" should support ArtifactManager.
The solution lies in the configuration of the builds.
The root cause sits in the configuration of the downstream job. Once "Copy from workspace of latest completed build" is chosen for the build to copy from, and the path of artifacts to copy is set to a relative path, such as projectname-*/**/*.jar,projectname-*/**/*.zip, the build succeeds.
Furthermore, in the parent job's configuration, the downstream job needs to be allowed to copy artifacts: the Projects to allow copy artifacts field should specify the downstream job.
Edit: Now I see that you responded in the meantime. Great answer, and it basically clears up some of the questions I had.
The one unclear thing about Option 1 is that the archiving of the files only happens at the end of the parent job, after the downstream builds have already been triggered and completed:
Waiting for the completion of projectname-Deploy
projectname-Deploy #19 completed. Result was SUCCESS
Waiting for the completion of projectname-Deploy
projectname-Deploy #20 completed. Result was SUCCESS
Build step 'Trigger/call builds on other projects' changed build result to SUCCESS
Strings match run condition: string 1=[lab2b], string 2=[both]
Run condition [Strings match] preventing perform for step [BuilderChain]
Archiving artifacts
Once I changed the approach to Option 2 it worked for me, but I would like to understand the first option as well.
I have created some build configurations with snapshot and artifact dependencies (to create a build chain). The configurations are always executed on the same agent, so the upload of artifacts to the master is not necessary.
Is that possible in TeamCity? Can I somehow avoid the upload of artifacts to the master and instead pass the artifacts directly to the next build configuration in the chain?
Thanks in advance.
Martin
There is no way to disable the upload of artifacts to TeamCity, as build agents are allowed to come and go.
But since v8.1, the TeamCity build agent caches artifacts while uploading them, so they are not re-downloaded when the next build starts.
I have a Maven job in Jenkins. Normally, at the end of the build, the artifacts are deployed to Artifactory via a Jenkins post-build action.
But if I make a release build, I get an error from Jenkins in this case.
So, is there a possibility to avoid deploying the artifacts at the end of a release build?
Let me be more precise about the error. The Maven goals are 'clean install'. I need the post-build action for deploying to Artifactory in a 'normal' job. If I make a release of this artifact via the M2 Release Plugin, the deployment of the released artifacts is done by the M2 Release Plugin itself. But at the end of the job, the post-build action tries to deploy the artifact with the old SNAPSHOT version, which is not allowed by Artifactory.
The Jenkins M2 Release plugin (which uses Maven's maven-release-plugin). If you have created a Maven job (instead of a Freestyle one), then in the M2 Release section of the job's configuration you'll see that the goals are:
-Dresume=false release:prepare release:perform
If you replace them with the following, the M2 Release plugin won't call the deploy goal, which is initiated by the release:perform goal by default.
-Dresume=false release:prepare release:perform -Darguments="-Dmaven.deploy.skip=true"
In my case, I didn't want the artifacts to go to Artifactory as soon as the release:prepare and release:perform goals completed, so the above helped. But even though the Jenkins job has a "Deploy to Artifactory" post-build action targeting either the snapshot or release repositories (depending upon what kind of build you have, i.e. an automated/manually run build job, or one started by running Perform Maven Release), it never called the post-build action.
This can be good in the sense that I can now run a deployment using the generated release artifacts in an environment, and if the deployment and some IT tests are successful, then I can upload the artifacts to Artifactory. The downside is: if your deployment depends upon fetching the new artifact from Artifactory/Nexus (i.e. somewhere in the deploy script's logic), then you can't have that working until you copy the artifacts from one job to another child job.
Apart from that, the maven deploy goal requires valid <repository> / <snapshotRepository> settings in either settings.xml or pom.xml, where the <id> you specify for each of those sections, which are defined under the <distributionManagement> section, must match the <id> of a <server> section defined in settings.xml; a sketch of this follows below.
One can define/set the value of that section to use a non-release repository which is higher in order (for artifact resolution) than a snapshot repository, i.e. use libs-alpha-local or libs-stage-local, and then let the maven deploy goal deploy the artifacts to Artifactory/Nexus.
Later, upon successful deployments to higher environments (like QA/PRE, etc.), you can move the artifact from alpha/stage to libs-release-local.
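As a sketch of that wiring (the repository ids and URL here are hypothetical; libs-release-local/libs-snapshot-local are typical Artifactory repository names):
pom.xml:
  <distributionManagement>
    <repository>
      <id>company-releases</id>
      <url>https://artifactory.example.com/artifactory/libs-release-local</url>
    </repository>
    <snapshotRepository>
      <id>company-snapshots</id>
      <url>https://artifactory.example.com/artifactory/libs-snapshot-local</url>
    </snapshotRepository>
  </distributionManagement>
settings.xml:
  <servers>
    <server>
      <id>company-releases</id>   <!-- must match the <id> in distributionManagement -->
      <username>deployer</username>
      <password>...</password>
    </server>
  </servers>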
The IS_M2RELEASEBUILD Boolean variable, which comes with the M2 Release plugin, can be used in a conditional step to deploy here or there, or not at all.
In the configuration of the 'Maven release build' you can set, in the advanced mode, a 'Release environment variable' (the default is IS_M2RELEASEBUILD). Later, in the post-build action that publishes artifacts, you can check whether this environment variable is set and skip the deployment if it is.
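For example, a sketch of such a check in a conditional shell step (this assumes the default variable name; the deploy command is a placeholder for whatever your job actually runs):
# Skip the extra deployment when this build was started by the M2 Release plugin.
if [ "${IS_M2RELEASEBUILD:-false}" = "true" ]; then
  echo "Release build: the M2 Release plugin already deployed the artifacts, skipping."
else
  mvn deploy    # placeholder for your normal deploy/publish step
fi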
I'm thinking you may want to create a separate Jenkins job just for your release builds, and under the post-build action run a different set of Maven commands that just package the artifact and do not deploy it to Artifactory. That being said, if other applications depend on an artifact you do not want to release, this may be causing versioning problems.
You should take a look at the Artifactory Jenkins plugin. 1. It deploys without errors. 2. It has built-in release functionality. 3. It will provide you with unique buildInfo functionality for saving build information together with artifacts in Artifactory. https://wiki.jenkins-ci.org/display/JENKINS/Artifactory+Plugin
If I can assume you're using the M2 Release Plugin, then there's another issue.
Skipping the deployment after a release would be an unnecessary workaround, since I've seen this work. You should try to fix the root cause instead.
It would help if you could provide more info about the error.