Jenkins + Gradle + Artifactory: Couldn't read generated build info

I'm trying to push my artifacts to Artifactory from a Jenkins Pipeline that calls the Gradle tool.
I am following the examples published on GitHub:
Example1
Example2
My Jenkins Pipeline script:
stage('Perform Gradle Release') {
    //ssh-agent required to perform GIT push (when tagging the branch on release)
    sshagent([git_credential]) {
        sh "./gradlew clean release unSnapshotVersion -Prelease.useAutomaticVersion=true -Prelease.releaseVersion=${release_version} -Prelease.newVersion=${development_version}"
    }
    // Create an Artifactory server instance
    def server = Artifactory.server('my-artifactory')
    // Create and set an Artifactory Gradle Build instance:
    def rtGradle = Artifactory.newGradleBuild()
    rtGradle.resolver server: server, repo: 'libs-release'
    rtGradle.deployer server: server, repo: 'libs-release-local'
    //Use Gradle Wrapper
    rtGradle.useWrapper = true
    //Creates buildinfo
    def buildInfo = Artifactory.newBuildInfo()
    buildInfo.env.capture = true
    buildInfo.env.filter.addInclude("*")
    // Run Gradle:
    rtGradle.run rootDir: "./", buildFile: 'build.gradle', tasks: 'clean artifactoryPublish', buildInfo: buildInfo
    // Publish the build-info to Artifactory:
    server.publishBuildInfo buildInfo
}
My Gradle build file is very light; I'm just using the Gradle Release Plugin to perform the release.
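For reference, a minimal build.gradle along those lines might look like this (a sketch only; the plugin version and coordinates are assumptions, the release/unSnapshotVersion tasks and -Prelease.* properties above match the net.researchgate release plugin):

plugins {
    id 'java'
    id 'maven-publish'
    id 'net.researchgate.release' version '2.6.0'   // Gradle Release Plugin; version is an assumption
}

group = 'com.example'   // hypothetical coordinates; version lives in gradle.properties and is bumped by the release plugin

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
}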
When executing the pipeline, it fails with this message:
:artifactoryPublish
BUILD SUCCESSFUL
Total time: 17.451 secs
ERROR: Couldn't read generated build info at : /tmp/generated.build.info4898776990575217114.json
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
hudson.model.Run$RunnerAbortedException
at org.jfrog.hudson.pipeline.Utils.getGeneratedBuildInfo(Utils.java:188)
at org.jfrog.hudson.pipeline.steps.ArtifactoryGradleBuild$Execution.run(ArtifactoryGradleBuild.java:127)
at org.jfrog.hudson.pipeline.steps.ArtifactoryGradleBuild$Execution.run(ArtifactoryGradleBuild.java:96)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousStepExecution.start(AbstractSynchronousStepExecution.java:40)
...
Finished: FAILURE
When I check on the server, there is no such file /tmp/generated.build.info4898776990575217114.json (the user has of course permission to write to /tmp).
Thanks for your help.
[EDIT] It is weird, but I found some files named "buildInfo2408849984051060030.properties" containing the information. The name is not the same, nor is the format, and these files are stored on my Jenkins machine, not on the slave executing the pipeline.

Thanks @tamir-hadad, it has indeed been fixed in 2.8.2.

Related

Conan Artifactory Jenkins Integration fails

We have a problem with the Jenkins Artifactory Plug-in when using Conan.
Basically, we created our scripted pipeline (following the example in https://github.com/jfrog/project-examples/blob/master/jenkins-examples/pipeline-examples/scripted-examples/conan-example/Jenkinsfile). It sets up the Artifactory server, creates the conanClient, creates a Conan package (by running the "conan create" command through the conanClient, since our project is built via a conanfile.py recipe), uploads the package to our Artifactory instance, and finally we want to clear the Conan cache (through the "conan remove * -f" command).
But this last step always fails. The problem seems to be caused by the conanClient, which implicitly invokes the "conan_build_info" command after the "conan remove * -f". The "conan_build_info" command fails since it apparently requires the package cache, which is, of course, already cleared. Is this a conanClient bug (maybe because, as we read in the documentation, the conan_build_info command is not recommended: https://docs.conan.io/en/latest/reference/commands/misc/conan_build_info.html), or are we missing something?
Is there a way to perform the package creation and upload via the conanClient and clear the Conan cache without causing the pipeline to fail?
It seems to me this is a rather serious bug, since package creation and upload via CI (Jenkins) is a fundamental use case... and of course, at the end, the Conan cache must be cleared...
Here is our Jenkinsfile:
node() {
    // Obtain an Artifactory server instance, defined in Jenkins --> Manage:
    def server = Artifactory.server "artifactory_server"
    // Create a local build-info instance:
    def buildInfo = Artifactory.newBuildInfo()
    buildInfo.name = "our git master pipeline"
    // Create a conan client instance:
    def conanClient = Artifactory.newConanClient()
    // Add a new repository named 'conan-local' to the conan client.
    // The 'remote.add' method returns a 'serverName' string, which is used later in the script:
    String serverName = conanClient.remote.add server: server, repo: "conan-local"
    // We enable strict ABI dependency propagation
    conanClient.run(command: "config set general.default_package_id_mode=package_revision_mode", buildInfo: buildInfo)
    conanClient.run(command: "config set general.revisions_enabled=1", buildInfo: buildInfo)
    conanClient.run(command: "config set general.full_transitive_package_id=1", buildInfo: buildInfo)
    stage('Checkout') {
        // checkout from our repo...
    }
    stage('Build Release') {
        // Run a conan build. The 'buildInfo' instance is passed as an argument to the 'run' method:
        conanClient.run(command: "create ./project_dir channel/channel", buildInfo: buildInfo)
    }
    stage('Upload') {
        // Create an upload command. The 'serverName' string is used as a conan 'remote', so that the artifacts are uploaded into it:
        String command = "upload our_packet/*#*/master --all -r ${serverName} --confirm"
        // Run the upload command, with the same build-info instance as an argument:
        conanClient.run(command: command, buildInfo: buildInfo)
    }
    stage("Clear Conan Cache") {
        // Clean all conan cache
        String command = "remove * -f"
        // Run the remove command, with the same build-info instance as an argument:
        conanClient.run(command: command, buildInfo: buildInfo)
    }
    stage('Publish build info') {
        // Publish the build-info to Artifactory:
        server.publishBuildInfo buildInfo
    }
}
The error from the Jenkins log:
[out_project] $ cmd.exe /C "conan remove "*" -f && exit %%ERRORLEVEL%%"
[out_project] $ cmd.exe /C "conan_build_info D:\jenkins\workspace\out_project#tmp\artifactory\conan.tmp8862200414119894885\conan_log.log --output D:\jenkins\workspace\out_project#tmp\artifactory\conan1112119574977543529build-info && exit %%ERRORLEVEL%%"
ERROR: [Errno 2] No such file or directory: 'D:\\.conan\\efa5c5\\1\\conaninfo.txt'
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.lang.RuntimeException: Conan build failed
at org.jfrog.hudson.pipeline.common.Utils.launch(Utils.java:256)
at org.jfrog.hudson.pipeline.common.executors.ConanExecutor.execute(ConanExecutor.java:132)
at org.jfrog.hudson.pipeline.common.executors.ConanExecutor.collectConanBuildInfo(ConanExecutor.java:180)
at org.jfrog.hudson.pipeline.common.executors.ConanExecutor.execCommand(ConanExecutor.java:101)
at org.jfrog.hudson.pipeline.scripted.steps.conan.RunCommandStep$Execution.runStep(RunCommandStep.java:50)
at org.jfrog.hudson.pipeline.scripted.steps.conan.RunCommandStep$Execution.runStep(RunCommandStep.java:37)
at org.jfrog.hudson.pipeline.ArtifactorySynchronousNonBlockingStepExecution.run(ArtifactorySynchronousNonBlockingStepExecution.java:42)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
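A possible workaround sketch (an assumption, not something confirmed by the plugin docs): since the implicit build-info collection is only needed for the create and upload commands, the cache cleanup could be run as a plain bat/sh step outside the conanClient, so the Artifactory plugin never invokes conan_build_info against an already-cleared cache:

stage("Clear Conan Cache") {
    // Hypothetical variant: bypass conanClient.run for the cleanup so no build-info collection is triggered.
    // Caveat: this assumes the plain step sees the same Conan home; if the conanClient uses a dedicated
    // CONAN_USER_HOME, that variable would have to be set for this step as well.
    bat 'conan remove "*" -f'   // use sh instead of bat on a Linux agent
}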

Jenkins pipeline build error: Cannot open assembly 'MSBuild.exe': No such file or directory

I want to use a Jenkins pipeline and SonarQube to analyze code. Both Jenkins and SonarQube are running in Docker. I followed the manual https://docs.sonarqube.org/latest/analysis/scan/sonarscanner-for-jenkins to create a pipeline job:
node {
    stage('SCM') {
        checkout(XXXX)
    }
    stage('Build + SonarQube analysis') {
        def sqScannerMsBuildHome = tool 'msbuild'
        withSonarQubeEnv('sonar') {
            sh "mono ${sqScannerMsBuildHome}/SonarScanner.MSBuild.exe begin /k:myKey"
            sh 'mono MSBuild.exe /t:Rebuild'
            sh "mono ${sqScannerMsBuildHome}/SonarScanner.MSBuild.exe end"
        }
    }
}
When I built this pipeline, I got an error like this:
Cannot open assembly 'MSBuild.exe': No such file or directory.
Here is the build log:
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build + SonarQube analysis)
[Pipeline] tool
[Pipeline] withSonarQubeEnv
Injecting SonarQube environment variables using the configuration: sonar
[Pipeline] {
[Pipeline] sh
+ mono /var/jenkins_home/tools/hudson.plugins.sonar.MsBuildSQRunnerInstallation/msbuild/SonarScanner.MSBuild.exe begin /k:myKey
SonarScanner for MSBuild 5.0.4
Using the .NET Framework version of the Scanner for MSBuild
Pre-processing started.
Preparing working directories...
07:36:24.082 Updating build integration targets...
07:36:24.993 Fetching analysis configuration settings...
07:36:25.387 Provisioning analyzer assemblies for cs...
07:36:25.388 Installing required Roslyn analyzers...
07:36:25.592 Provisioning analyzer assemblies for vbnet...
07:36:25.592 Installing required Roslyn analyzers...
07:36:25.655 Pre-processing succeeded.
[Pipeline] sh
+ mono MSBuild.exe /t:Rebuild
Cannot open assembly 'MSBuild.exe': No such file or directory.
[Pipeline] }
WARN: Unable to locate 'report-task.txt' in the workspace. Did the SonarScanner succeeded?
[Pipeline] // withSonarQubeEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 2
Finished: FAILURE
I checked the /var/jenkins_home/tools/hudson.plugins.sonar.MsBuildSQRunnerInstallation/msbuild directory and can't find an MSBuild.exe file.
(Screenshot of the directory contents: https://i.stack.imgur.com/2UcdK.png)
How can I solve this problem, or how can I use Jenkins and SonarQube to analyze a C# project?
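A sketch of one possible fix, assuming Mono 5+ with its bundled msbuild driver is available in the Jenkins container: MSBuild.exe is not a file shipped in the scanner directory, so instead of running it as an assembly, the build step can call Mono's own msbuild command between the scanner's begin and end steps (the solution path is a placeholder):

stage('Build + SonarQube analysis') {
    def sqScannerMsBuildHome = tool 'msbuild'
    withSonarQubeEnv('sonar') {
        sh "mono ${sqScannerMsBuildHome}/SonarScanner.MSBuild.exe begin /k:myKey"
        // 'msbuild' is the build driver that ships with Mono; MySolution.sln is a placeholder for the actual solution
        sh 'msbuild /t:Rebuild MySolution.sln'
        sh "mono ${sqScannerMsBuildHome}/SonarScanner.MSBuild.exe end"
    }
}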

I am getting [0m[[0m[0minfo[0m] [0m[0m in the Jenkins console when building an sbt Jenkins pipeline project

I have an sbt project in Jenkins, created as a Jenkins pipeline project. I have installed sbt in Jenkins, checked the auto-install checkbox, and selected version 1.2.8.
Here is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Reload') {
            steps {
                echo "Reloading..."
                //sh "sbt reload"
                sh "${tool name: 'sbt1.2.8', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt compile"
            }
        }
    }
}
And here are the sbt settings in Jenkins.
Here are the Jenkins console logs:
+ /var/lib/jenkins/tools/org.jvnet.hudson.plugins.SbtPluginBuilder_SbtInstallation/sbt1.2.8/bin/sbt compile
[0m[[0m[0minfo[0m] [0m[0mLoading settings for project interpret-backend-project-jenkinsfile-build from plugins.sbt ...[0m
[0m[[0m[0minfo[0m] [0m[0mLoading project definition from /var/lib/jenkins/workspace/interpret-backend-project-jenkinsfile/project[0m
[0m[[0m[0minfo[0m] [0m[0mLoading settings for project interpret-backend-project-jenkinsfile from build.sbt ...[0m
[0m[[0m[0minfo[0m] [0m[0mSet current project to interpret (in build file:/var/lib/jenkins/workspace/interpret-backend-project-jenkinsfile/)[0m
[0m[[0m[0minfo[0m] [0m[0mExecuting in batch mode. For better performance use sbt's shell[0m
[0m[[0m[32msuccess[0m] [0m[0mTotal time: 4 s, completed Nov 25, 2020, 4:53:05 PM[0m
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[Checks API] No suitable checks publisher found.
Finished: SUCCESS
Why is [0m[[0m[0minfo[0m] [0m[0m displayed, and how can I fix this?
These are ANSI color codes. You can either embrace them to have colorful logs or disable them.
To enable colors in Jenkins you can modify your pipeline definition:
pipeline {
    ...
    options {
        ansiColor('xterm')
    }
    ...
}
This is a reference to the AnsiColor plugin.
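Applied to the Jenkinsfile from the question, a sketch could look like this (assuming the AnsiColor plugin is installed and the tool name 'sbt1.2.8' is kept):

pipeline {
    agent any
    options {
        // Render the escape sequences as colors instead of printing them raw
        ansiColor('xterm')
    }
    stages {
        stage('Reload') {
            steps {
                echo "Reloading..."
                sh "${tool name: 'sbt1.2.8', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt compile"
            }
        }
    }
}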
If you can't use this plugin and want to disable colors in the sbt logs, you can do that by modifying the sbt.color option, for example by launching sbt with -Dsbt.color=false (I see that you can add this in the UI) or by adding it to the SBT_OPTS environment variable:
pipeline {
    ...
    environment {
        SBT_OPTS = "${SBT_OPTS} -Dsbt.color=false"
    }
    ...
}
Check out the sbt docs and take a look at the sbt.ci option as well; it should be set automatically on Jenkins.

Problem with Connecting Jenkins to SonarQube

I have a Jenkins pipeline that periodically pulls from GitLab and builds different repos, builds a multi-component platform, and runs and tests it. Now I have installed a SonarQube server on the same machine (Ubuntu 18.04) and I want to connect my Jenkins to SonarQube.
In Jenkins:
I set up the SonarQube scanner under Global Tool Configuration as below:
I generated a token in SonarQube, and in the Jenkins configuration I set up the server as below, BUT I couldn't find any place to insert the token (and I think this is the problem):
In the Jenkins pipeline, this is how I added a stage for SonarQube:
stage('SonarQube analysis') {
    steps {
        script {
            scannerHome = tool 'SonarQube';
        }
        withSonarQubeEnv('SonarQube') {
            sh "${scannerHome}/bin/sonar-scanner"
        }
    }
}
But this fails with the logs below and ERROR: script returned exit code 127:
[Pipeline] { (SonarQube analysis)
[Pipeline] script
[Pipeline] {
[Pipeline] tool
Invalid tool ID
[Pipeline] }
[Pipeline] // script
[Pipeline] withSonarQubeEnv
Injecting SonarQube environment variables using the configuration: SonarQube
[Pipeline] {
[Pipeline] sh
+ /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQube/bin/sonar-scanner
/var/lib/jenkins/workspace/wws-full-test#tmp/durable-2c68acd1/script.sh: 1: /var/lib/jenkins/workspace/wws-full-test#tmp/durable-2c68acd1/script.sh: /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQube/bin/sonar-scanner: not found
[Pipeline] }
WARN: Unable to locate 'report-task.txt' in the workspace. Did the SonarScanner succeeded?
[Pipeline] // withSonarQubeEnv
[Pipeline] }
[Pipeline] // stage
And when I check my Jenkins tools on disk, the Sonar scanner installation is not there:
$ ls /var/lib/jenkins/tools/
jenkins.plugins.nodejs.tools.NodeJSInstallation
Can someone please let me know how I can connect Jenkins to SonarQube?
Create a token and add it to Jenkins to be able to connect to SonarQube.
You have to create a project in SonarQube and use its key as a parameter:
sh """
${scannerHome}/bin/sonar-scanner \
-Dsonar.projectKey=your_project_key_created_in_sonarqube_as_project \
-Dsonar.sources=. \
"""

Get gradle variables in jenkins pipeline script

I'm trying to migrate my build pipelines to the "Pipeline plugin" using Groovy build scripts.
My pipelines are usually:
Test (gradle)
IntegrationTest (gradle)
Build (gradle)
Publish (artifactory)
I would like to use the Gradle variables like version, group, etc. in my Jenkins build script to publish to the correct folders in Artifactory, something the Artifactory plugin would have taken care of for me in the past. How can this be achieved?
For a single gradle project I use something like this:
node('master') {
    def version = 1.0
    def gitUrl = 'some.git'
    def projectRoot = ""
    def group = "dashboard/frontend/"
    def artifactName = "dashboard_ui"
    def artifactRepo = "ext-release-local"

    stage "git"
    git branch: 'develop', poll: true, url: "${gitUrl}"

    dir(projectRoot) {
        sh 'chmod +x gradlew'
        stage "test"
        sh './gradlew clean test'
        stage "build"
        sh './gradlew build createPom'
        stage "artifact"
        def server = Artifactory.server('artifactory_dev01')
        def uploadSpec = """{
            "files": [
                {
                    "pattern": "build/**.jar",
                    "target": "${artifactRepo}/$group/${artifactName}/${version}/${artifactName}-${version}.jar"
                },
                {
                    "pattern": "pom.xml",
                    "target": "${artifactRepo}/$group/${artifactName}/${version}/${artifactName}.pom"
                }
            ]
        }"""
        def buildInfo1 = server.upload spec: uploadSpec
        server.publishBuildInfo buildInfo1
    }
}
For future reference, here is an example with the more modern declarative pipeline:
pipeline {
    agent any
    stages {
        stage('somestage') {
            steps {
                script {
                    def version = sh(
                        script: "./gradlew properties -q | grep \"version:\" | awk '{print \$2}'",
                        returnStdout: true
                    ).trim()
                    sh "echo Building project in version: $version"
                }
            }
        }
    }
}
see also:
Gradle plugin project version number
How to do I get the output of a shell command executed using into a variable from Jenkinsfile (groovy)?
I think you actually have two different approaches to tackle this problem:
1. Get version/group from a shell script
Find a way to get the version from the Gradle build tool (e.g. gradle getVersion(), but I'm not familiar with Gradle) and then use a shell script step to capture it. If the Gradle command to get the version were gradle getVersion(), you would do this in your pipeline:
def projectVersion = sh script: "gradle getVersion()", returnStdout: true
def projectGroup = sh script: "gradle getGroup()", returnStdout: true
and then just inject your $projectVersion and $projectGroup variables into your current pipeline.
2. Configure your Gradle build script to publish to Artifactory
This is the reverse approach, which I personally prefer: instead of giving Artifactory all your Gradle project information, just give Gradle your Artifactory settings and use a Gradle task to publish to Artifactory.
JFrog has good documentation for this solution in their Working with Gradle section. Basically, you will follow these steps:
Generate a compliant Gradle build script from Artifactory using the Gradle Build Script Generator and include it in your project build script
Use the Gradle task gradle artifactoryPublish to simply publish your current artifact to Artifactory
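For illustration, a rough sketch of the kind of build script the generator produces (the plugin version, server URL, and property names here are assumptions):

plugins {
    id 'java'
    id 'maven-publish'
    id 'com.jfrog.artifactory' version '4.24.23'   // version is an assumption
}

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
}

artifactory {
    contextUrl = 'https://artifactory.example.com/artifactory'   // hypothetical server URL
    publish {
        repository {
            repoKey = 'libs-release-local'
            username = project.findProperty('artifactory_user')       // assumed gradle.properties entries
            password = project.findProperty('artifactory_password')
        }
        defaults {
            publications('mavenJava')
        }
    }
}

// Publishing is then simply: ./gradlew artifactoryPublish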
For others who Googled their way here: if you have the Pipeline Utility Steps plugin and store what you need in your gradle.properties file, you can do something like this in the environment block:
MY_PROPS = readProperties file:"${WORKSPACE}/gradle.properties"
MY_VERSION = MY_PROPS['version']
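A minimal end-to-end sketch of that approach (assuming the Pipeline Utility Steps plugin is installed and gradle.properties contains version= and group= entries):

pipeline {
    agent any
    stages {
        stage('Read gradle.properties') {
            steps {
                script {
                    // readProperties is provided by the Pipeline Utility Steps plugin
                    def props = readProperties file: "${env.WORKSPACE}/gradle.properties"
                    env.PROJECT_VERSION = props['version']
                    env.PROJECT_GROUP = props['group']
                }
                sh 'echo "Publishing $PROJECT_GROUP:$PROJECT_VERSION"'
            }
        }
    }
}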
