Conan Artifactory Jenkins Integration fails - jenkins

We have a problem with the Jenkins Artifactory plug-in when using Conan.
We created a scripted pipeline (following the example at https://github.com/jfrog/project-examples/blob/master/jenkins-examples/pipeline-examples/scripted-examples/conan-example/Jenkinsfile) which sets up the Artifactory server, creates the conanClient, creates a Conan package (by running "conan create" through the conanClient, since our project is built via a conanfile.py recipe), uploads the package to our Artifactory instance, and finally clears the Conan cache (with the "conan remove * -f" command).
This last step always fails. The problem seems to be caused by the conanClient, which implicitly invokes the "conan_build_info" command after "conan remove * -f". The "conan_build_info" command fails because it apparently requires the package cache, which has, of course, just been cleared. Is this a conanClient bug (perhaps because, as we read in the documentation, the conan_build_info command is not recommended for use: https://docs.conan.io/en/latest/reference/commands/misc/conan_build_info.html), or are we missing something?
Is there a way to create and upload a package via the conanClient and then clear the Conan cache without causing the pipeline to fail?
This looks like a serious bug to me, since package creation and upload via CI (Jenkins) is a fundamental use case, and at the end of a build the Conan cache must of course be cleared.
Here is our Jenkinsfile:
node()
{
    // Obtain an Artifactory server instance, defined in Jenkins --> Manage:
    def server = Artifactory.server "artifactory_server"

    // Create a local build-info instance:
    def buildInfo = Artifactory.newBuildInfo()
    buildInfo.name = "our git master pipeline"

    // Create a conan client instance:
    def conanClient = Artifactory.newConanClient()

    // Add a new repository named 'conan-local' to the conan client.
    // The 'remote.add' method returns a 'serverName' string, which is used later in the script:
    String serverName = conanClient.remote.add server: server, repo: "conan-local"

    // We enable strict ABI dependency propagation:
    conanClient.run(command: "config set general.default_package_id_mode=package_revision_mode", buildInfo: buildInfo)
    conanClient.run(command: "config set general.revisions_enabled=1", buildInfo: buildInfo)
    conanClient.run(command: "config set general.full_transitive_package_id=1", buildInfo: buildInfo)

    stage('Checkout')
    {
        // checkout from our repo...
    }

    stage('Build Release')
    {
        // Run a conan build. The 'buildInfo' instance is passed as an argument to the 'run' method:
        conanClient.run(command: "create ./project_dir channel/channel", buildInfo: buildInfo)
    }

    stage('Upload')
    {
        // Create an upload command. The 'serverName' string is used as a conan 'remote', so that the artifacts are uploaded to it:
        String command = "upload our_packet/*@*/master --all -r ${serverName} --confirm"
        // Run the upload command, with the same build-info instance as an argument:
        conanClient.run(command: command, buildInfo: buildInfo)
    }

    stage("Clear Conan Cache")
    {
        // Clean the whole conan cache:
        String command = "remove * -f"
        // Run the remove command, with the same build-info instance as an argument:
        conanClient.run(command: command, buildInfo: buildInfo)
    }

    stage('Publish build info')
    {
        // Publish the build-info to Artifactory:
        server.publishBuildInfo buildInfo
    }
}
The error from Jenkins log:
[out_project] $ cmd.exe /C "conan remove "*" -f && exit %%ERRORLEVEL%%"
[out_project] $ cmd.exe /C "conan_build_info D:\jenkins\workspace\out_project@tmp\artifactory\conan.tmp8862200414119894885\conan_log.log --output D:\jenkins\workspace\out_project@tmp\artifactory\conan1112119574977543529build-info && exit %%ERRORLEVEL%%"
ERROR: [Errno 2] No such file or directory: 'D:\\.conan\\efa5c5\\1\\conaninfo.txt'
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.lang.RuntimeException: Conan build failed
at org.jfrog.hudson.pipeline.common.Utils.launch(Utils.java:256)
at org.jfrog.hudson.pipeline.common.executors.ConanExecutor.execute(ConanExecutor.java:132)
at org.jfrog.hudson.pipeline.common.executors.ConanExecutor.collectConanBuildInfo(ConanExecutor.java:180)
at org.jfrog.hudson.pipeline.common.executors.ConanExecutor.execCommand(ConanExecutor.java:101)
at org.jfrog.hudson.pipeline.scripted.steps.conan.RunCommandStep$Execution.runStep(RunCommandStep.java:50)
at org.jfrog.hudson.pipeline.scripted.steps.conan.RunCommandStep$Execution.runStep(RunCommandStep.java:37)
at org.jfrog.hudson.pipeline.ArtifactorySynchronousNonBlockingStepExecution.run(ArtifactorySynchronousNonBlockingStepExecution.java:42)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
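One possible workaround, sketched below (untested, and not a documented plugin feature): run the final cleanup as a plain bat/sh step instead of through conanClient.run, so that the plugin never gets a chance to invoke conan_build_info afterwards. Note that the conanClient appears to work in its own temporary Conan home (see the conan.tmp path in the log above), so the CONAN_USER_HOME value below is an assumption that would need to be adapted:

stage("Clear Conan Cache")
{
    // Untested sketch: bypass the conanClient so that no implicit
    // 'conan_build_info' call follows the cache removal.
    // ASSUMPTION: CONAN_USER_HOME must point at the cache the client actually used.
    withEnv(["CONAN_USER_HOME=${env.WORKSPACE}"]) {
        bat 'conan remove "*" -f'   // use 'sh' instead of 'bat' on a Linux agent
    }
}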

Related

Unspecified Jenkinsfile Null Pointer Error in Workflow Steps

I encountered the following error in a Jenkinsfile pipeline I was building:
java.lang.NullPointerException
at org.jenkinsci.plugins.workflow.steps.CoreStep$Execution.run(CoreStep.java:80)
at org.jenkinsci.plugins.workflow.steps.CoreStep$Execution.run(CoreStep.java:67)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Finished: FAILURE
The main issue is that I'm not sure exactly what is going on: all the log mentions is a null pointer error, so I can't tell where it comes from, and I can't find anything more specific.
Here's my Jenkinsfile:
#!groovy
node {
    withEnv(["WORKSPACE=${pwd()}"]) { // Setting Workspace to the current directory
        stage('Clone repository...') {
            checkout scm // Let checkout automagically handle pulling in all the names we need and whatnot
        }
        stage('Building WAR...') {
            step(withMaven(
                // Maven installation declared in the Jenkins "Global Tool Configuration"
                maven: 'Maven 3.6.0') {
                // Run the maven build
                sh 'mvn clean install' // Same as running on local
                sh 'mv ${WORKSPACE}/target/QUserService.war ${WORKSPACE}/target/QUserService_War-QUserService-${BRANCH_NAME}-${BUILD_NUMBER}.war'
                // For above line, 'mv' is the Linux command to rename/move files, which is needed for the UCD script
            }
            // withMaven will discover the generated Maven artifacts, JUnit Surefire & FailSafe & FindBugs reports...
            )
        }
    }
}
So - first, you don't need to define WORKSPACE. It is defined for you by Jenkins. You can convince yourself of this by running sh 'set' on a Linux agent.
Next, you don't need to check out the project. It will already be there (assuming you're using a pipeline project).
Next, you don't need to wrap withMaven in a step call. In a scripted pipeline, the body of a stage is Groovy script; step isn't required.
node {
    stage('Building WAR...') {
        withMaven(
            maven: 'Maven 3.5.0') {
            // Run the maven build
            sh 'mvn clean install' // Same as running on local
        }
    }
}
I took out the move step and its comment just to make the example clearer.
I didn't get a null pointer error. See if removing the step call (along with the other changes above) makes the NPE go away. If not, I'd suggest attaching the console output so we can see where this happens.

How do I use Jenkins to build a private GitHub Rust project with a private GitHub dependency?

I have a private GitHub Rust project that depends on another private GitHub Rust project, and I want to build the main one with Jenkins. I have called the organization Organization and the dependency package subcrate in the code below.
My Jenkinsfile looks something like
pipeline {
    agent {
        docker {
            image 'rust:latest'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh "cargo build"
            }
        }
        etc...
    }
}
I have tried the following in Cargo.toml to reference the dependency, and it works fine on my machine:
[dependencies]
subcrate = { git = "ssh://git@ssh.github.com/Organization/subcrate.git", tag = "0.1.0" }
When Jenkins runs, I get the following error:
+ cargo build
    Updating registry `https://github.com/rust-lang/crates.io-index`
    Updating git repository `ssh://git@github.com/Organization/subcrate.git`
error: failed to load source for a dependency on `subcrate`
Caused by:
  Unable to update ssh://git@github.com/Organization/subcrate.git?tag=0.1.0#0623c097
Caused by:
  failed to clone into: /usr/local/cargo/git/db/subcrate-3e391025a927594e
Caused by:
  failed to authenticate when downloading repository
  attempted ssh-agent authentication, but none of the usernames `git` succeeded
Caused by:
  error authenticating: no auth sock variable; class=Ssh (23)
script returned exit code 101
How can I get Cargo to access this GitHub repository? Do I need to inject the GitHub credentials onto the slave? If so, how can I do this? Is it possible to use the same credentials Jenkins uses to checkout the main crate in the first place?
I installed the ssh-agent plugin and updated my Jenkinsfile to look like this
pipeline {
    agent {
        docker {
            image 'rust:latest'
        }
    }
    stages {
        stage('Build') {
            steps {
                sshagent(credentials: ['id-of-github-credentials']) {
                    sh "ssh -vvv -T git@github.com"
                    sh "cargo build"
                }
            }
        }
        etc...
    }
}
I get the error
+ ssh -vvv -T git@github.com
No user exists for uid 113
script returned exit code 255
Okay, I figured it out: the "No user exists for uid" error is caused by a mismatch between the users in the host /etc/passwd and the container /etc/passwd. This can be fixed by mounting /etc/passwd into the container:
agent {
    docker {
        image 'rust:latest'
        args '-v /etc/passwd:/etc/passwd'
    }
}
Then
sshagent(credentials: ['id-of-github-credentials']) {
    sh "cargo build"
}
works just fine.
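Putting the two fixes together, the whole pipeline would look something like the sketch below (assembled from the snippets above; the credentials ID and stage layout are placeholders):

pipeline {
    agent {
        docker {
            image 'rust:latest'
            // Mount the host's /etc/passwd so the container knows the Jenkins user's uid:
            args '-v /etc/passwd:/etc/passwd'
        }
    }
    stages {
        stage('Build') {
            steps {
                // 'id-of-github-credentials' is the ID of an SSH key credential stored in Jenkins:
                sshagent(credentials: ['id-of-github-credentials']) {
                    sh "cargo build"
                }
            }
        }
    }
}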

SonarQube Scanner fails in a Jenkins pipeline due to command not found

I'd like to run SonarQube Scanner from a Jenkins pipeline, and I followed the documentation.
Judging from the error, the scanner is present but some commands it relies on are not found. My Jenkins instance runs in a Docker container.
Jenkins version : 2.46.1
SonarQube Scanner : 2.6.1
+ /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQube_Scanner/bin/sonar-scanner
/var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQube_Scanner/bin/sonar-scanner: line 56: which: command not found
/var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQube_Scanner/bin/sonar-scanner: line 66: exec: : not found
In the sonar-scanner script, there is this block:
if [ -n "$JAVA_HOME" ]
then
    java_cmd="$JAVA_HOME/bin/java"
else
    java_cmd="$(which java)"
fi
Given that my JAVA_HOME was unset, the script fell back to calling which, and that command is not installed inside my container.
As a workaround, I set the JAVA_HOME environment variable.
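For example, something along these lines (a sketch; the 'jdk8' tool name is an assumption that has to match a JDK installation configured under "Global Tool Configuration"):

node {
    // 'jdk8' must match a JDK installation name configured in Jenkins:
    def javaHome = tool name: 'jdk8', type: 'jdk'
    def scannerHome = tool name: 'SonarQubeScanner', type: 'hudson.plugins.sonar.SonarRunnerInstallation'
    // With JAVA_HOME set, the sonar-scanner script no longer needs 'which':
    withEnv(["JAVA_HOME=${javaHome}"]) {
        sh "${scannerHome}/bin/sonar-scanner -e -Dsonar.host.url=..."
    }
}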
Make sure the PATH is complete, or check if resetting it is enough
def sonarqubeScannerHome = tool name: 'SonarQubeScanner', type: 'hudson.plugins.sonar.SonarRunnerInstallation'
withEnv(["PATH=/usr/bin: ..."]) {
    // Your call to Sonar:
    sh "${sonarqubeScannerHome}/bin/sonar-scanner -e -Dsonar.host.url=..."
}
I used the setup from "Execute SonarQube Scanner within Jenkins 2 Pipeline", but since version 2.5 of the SonarQube plugin there is official support for Jenkins pipeline:
def scannerHome = tool 'SonarQube Scanner 2.8'
withEnv(["PATH=/usr/bin: ..."]) {
    withSonarQubeEnv('My SonarQube Server') {
        sh "${scannerHome}/bin/sonar-scanner"
    }
}

Jenkins + Gradle + Artifactory: Couldn't read generated build info

I'm trying to push my artifacts to Artifactory from a Jenkins pipeline that calls the Gradle tool.
I am following the examples published on GitHub:
Example1
Example2
My Jenkins Pipeline script:
stage('Perform Gradle Release') {
    // ssh-agent required to perform git push (when tagging the branch on release):
    sshagent([git_credential]) {
        sh "./gradlew clean release unSnapshotVersion -Prelease.useAutomaticVersion=true -Prelease.releaseVersion=${release_version} -Prelease.newVersion=${development_version}"
    }
    // Create an Artifactory server instance:
    def server = Artifactory.server('my-artifactory')
    // Create and set an Artifactory Gradle Build instance:
    def rtGradle = Artifactory.newGradleBuild()
    rtGradle.resolver server: server, repo: 'libs-release'
    rtGradle.deployer server: server, repo: 'libs-release-local'
    // Use the Gradle wrapper:
    rtGradle.useWrapper = true
    // Create a buildInfo instance:
    def buildInfo = Artifactory.newBuildInfo()
    buildInfo.env.capture = true
    buildInfo.env.filter.addInclude("*")
    // Run Gradle:
    rtGradle.run rootDir: "./", buildFile: 'build.gradle', tasks: 'clean artifactoryPublish', buildInfo: buildInfo
    // Publish the build-info to Artifactory:
    server.publishBuildInfo buildInfo
}
My Gradle file is very light; I'm just using the Gradle Release Plugin to perform the release.
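For reference, a minimal build.gradle in that spirit might look like the sketch below (an illustration only, not the actual file; the plugin version, group, and release configuration are assumptions):

plugins {
    id 'java'
    // Gradle Release Plugin; the version here is an assumption for illustration:
    id 'net.researchgate.release' version '2.6.0'
}

group = 'com.example' // placeholder

release {
    // Tag the release in git using the project version:
    tagTemplate = '$version'
}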
When executing the pipeline, it fails with this message:
:artifactoryPublish
BUILD SUCCESSFUL
Total time: 17.451 secs
ERROR: Couldn't read generated build info at : /tmp/generated.build.info4898776990575217114.json
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
hudson.model.Run$RunnerAbortedException
at org.jfrog.hudson.pipeline.Utils.getGeneratedBuildInfo(Utils.java:188)
at org.jfrog.hudson.pipeline.steps.ArtifactoryGradleBuild$Execution.run(ArtifactoryGradleBuild.java:127)
at org.jfrog.hudson.pipeline.steps.ArtifactoryGradleBuild$Execution.run(ArtifactoryGradleBuild.java:96)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousStepExecution.start(AbstractSynchronousStepExecution.java:40)
...
Finished: FAILURE
When I check on the server, there is no such file as /tmp/generated.build.info4898776990575217114.json (the user does, of course, have permission to write to /tmp).
Thanks for your help.
[EDIT] It is weird, but I found some files named "buildInfo2408849984051060030.properties" containing the information. The name is not the same, nor is the format, and these files are stored on my Jenkins master, not on the slave executing the pipeline.
Thanks @tamir-hadad, it has indeed been fixed in 2.8.2.

Using the sh step on Windows

TL;DR: I want to use the sh step even though Jenkins is running on Windows. I do not want to use the bat step, unless you can show me how to easily reproduce what I need using bat.
I've been converting some old Jenkins jobs over to 2.x Pipeline script. One of my jobs uses the "Publish over SSH plugin" to:
Send artifacts to a remote server
Exec a set of commands on the remote server
I wanted to replicate this in Pipeline Script so I've done the following:
stage('Deploy') {
    withCredentials([[$class: 'FileBinding', credentialsId: 'bitbucket-key-file', variable: 'SSHKEY']]) {
        sh '''
scp -i "$SSHKEY" dsub.tar.gz tprmbbuild@192.168.220.57:dsubdeploy
scp -i "$SSHKEY" deployDsubUi.sh tprmbbuild@192.168.220.57:dsubdeploy
ssh -i "$SSHKEY" -o StrictHostKeyChecking=no 192.168.220.57 <<- EOF
DEPLOY_DIR=/home/tprmbbuild/dsubdeploy
echo '*** dos2unix using sed'
sed -e 's/\r$//' $DEPLOY_DIR/deployDsubUi.sh > $DEPLOY_DIR/deployDsubUi-new.sh
mv $DEPLOY_DIR/deployDsubUi-new.sh $DEPLOY_DIR/deployDsubUi.sh
chmod 755 $DEPLOY_DIR/deployDsubUi.sh
echo '*** Deploying Dsub UI'
$DEPLOY_DIR/deployDsubUi.sh $DEPLOY_DIR/dsub.tar.gz
EOF'''
    }
}
Problem is, I get this stack trace when executing my build:
[Pipeline] sh
[E:\Jenkins\jenkins_home\workspace\tpr-ereg-ui-deploy@2] Running shell script
1 [main] sh 3588 E:\Jenkins\tools\Git_2.10.1\usr\bin\sh.exe: *** fatal error - add_item ("\??\E:\Jenkins\tools\Git_2.10.1", "/", ...) failed, errno 1
Stack trace:
Frame Function Args
000FFFF9BB0 0018005C24E (0018023F612, 0018021CC39, 000FFFF9BB0, 000FFFF8B30)
000FFFF9BB0 001800464B9 (000FFFFABEE, 000FFFF9BB0, 1D2345683BEC046, 00000000000)
000FFFF9BB0 001800464F2 (000FFFF9BB0, 00000000001, 000FFFF9BB0, 4A5C3A455C3F3F5C)
000FFFF9BB0 001800CAA8B (00000000000, 000FFFFCE00, 001800BA558, 1D234568CAFA549)
000FFFFCC00 00180118745 (00000000000, 00000000000, 001800B2C5E, 00000000000)
000FFFFCCC0 00180046AE5 (00000000000, 00000000000, 00000000000, 00000000000)
00000000000 00180045753 (00000000000, 00000000000, 00000000000, 00000000000)
000FFFFFFF0 00180045804 (00000000000, 00000000000, 00000000000, 00000000000)
End of stack trace
Agreed with "it is my belief it is failing to spawn the shell". It is trying to run "E:\Jenkins\tools\Git_2.10.1\usr\bin\sh.exe" (using Windows backslash syntax). Unless a shell executable (sh.exe) is present in that directory, it will fail.
Powershell (or Cmd Shell):
If you are open to using batch files, you would have to install/configure the required binaries (ssh, scp). Everything else falls into place (I see that you are channeling commands to a remote machine using ssh; I assume that the remote server is Linux/Unix based).
Alternatives:
You can use Cygwin, or run Linux in VirtualBox (or any software that emulates Linux on Windows). But running just 3 commands may not be worth the trouble (it will definitely be fruitful if you plan to convert/write more shell scripts in the future).
You can use "bat" instead of "sh" on Windows.
Also use two backslashes to escape the path string correctly. See the example below:
node {
    currentBuild.result = "SUCCESS"
    try {
        stage('Checkout') {
            checkout scm
        }
        stage('Convert to Binary RPD') {
            bat "D:\\oracle\\Middleware\\user_projects\\domains\\bi\\bitools\\bin\\biserverxmlexec -D .\\RPD -P Gl081Reporting -O .\\GLOBI.rpd"
        }
        stage('Notify') {
            echo 'sending email'
            // send to email
            emailext(
                subject: "SUCCESS: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
                body: """$PROJECT_NAME - Build # $BUILD_NUMBER - $BUILD_STATUS:
Check console output at $BUILD_URL to view the results.""",
                to: "girish.lakshmanan@abc.co.uk girish.la@gmail.com"
            )
        }
    }
    catch (err) {
        currentBuild.result = "FAILURE"
        throw err
    }
}
You should be able to run the scp.exe from the Git installation directly within a batch script.
There is no "here document" for batch as far as I know, but you could just put the script to be run on the server into a separate file.
(Untested)
stage('Deploy') {
    withCredentials([[$class: 'FileBinding', credentialsId: 'bitbucket-key-file', variable: 'SSHKEY']]) {
        bat '''
            E:\\Jenkins\\tools\\Git_2.10.1\\usr\\bin\\scp.exe -i "%SSHKEY%" dsub.tar.gz tprmbbuild@192.168.220.57:dsubdeploy
            E:\\Jenkins\\tools\\Git_2.10.1\\usr\\bin\\scp.exe -i "%SSHKEY%" deployDsubUi.sh tprmbbuild@192.168.220.57:dsubdeploy
            E:\\Jenkins\\tools\\Git_2.10.1\\usr\\bin\\ssh.exe -i "%SSHKEY%" -o StrictHostKeyChecking=no 192.168.220.57 < server_script.sh
        '''
    }
}
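Since batch has no here-document, one way to keep everything in the Jenkinsfile is to generate that separate file on the fly before the Deploy stage (a sketch; the script body is just the remote commands from the question):

stage('Prepare remote script') {
    // Write the commands to be executed on the remote server into server_script.sh,
    // which the batch step above feeds to ssh via stdin redirection:
    writeFile file: 'server_script.sh', text: '''DEPLOY_DIR=/home/tprmbbuild/dsubdeploy
echo '*** Deploying Dsub UI'
$DEPLOY_DIR/deployDsubUi.sh $DEPLOY_DIR/dsub.tar.gz
'''
}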
