My Jenkins pipeline fails with a NullPointerException when I use the JaCoCo plugin. If I comment out the JaCoCo step in the Jenkinsfile, no error is thrown.
The Jenkins log indicates that the error is thrown after "End of Pipeline".
Below are the log message and the Jenkinsfile. Any idea why this error is thrown?
[Pipeline] // node
[Pipeline] End of Pipeline
java.lang.NullPointerException
at org.jenkinsci.plugins.workflow.steps.CoreStep$Execution.run(CoreStep.java:87)
at org.jenkinsci.plugins.workflow.steps.CoreStep$Execution.run(CoreStep.java:70)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Git clone project') {
            steps {
                git branch: 'sandbox', url: 'https://<repo url>'
                sh 'git branch -a'
            }
        }
        stage('Test TMS') {
            steps {
                dir('TestManagementService') {
                    sh 'pwd'
                    sh './gradlew test'
                    step(
                        jacoco(
                            execPattern: '**/build/jacoco/**.exec',
                            classPattern: '**/build/classes/java/main',
                            sourcePattern: '**/src',
                            inclusionPattern: 'com/testMgmt/**',
                        )
                    )
                }
            }
            post {
                always {
                    junit '**/build/test-results/test/TEST-*.xml'
                }
            }
        }
    } // end of stages
}
The issue was caused by incorrectly wrapping the jacoco() call in a step() statement.
Incorrect usage:
step(
    jacoco(
        execPattern: '**/build/jacoco/**.exec',
        classPattern: '**/build/classes/java/main',
        sourcePattern: '**/src',
        inclusionPattern: 'com/testMgmt/**',
    )
)
Correct usage (jacoco() should not be wrapped in step()):
jacoco(
    execPattern: '**/build/jacoco/**.exec',
    classPattern: '**/build/classes/java/main',
    sourcePattern: '**/src',
    inclusionPattern: 'com/testMgmt/**',
)
I need to launch a few instances on AWS using a Terraform script, and I am automating the whole process with Jenkins.
pipeline {
    agent any
    tools {
        terraform 'terraform'
    }
    stages {
        stage('Git Checkout') {
            steps {
                git branch: 'main', credentialsId: 'gitlab id', url: 'https://gitlab.com/ndey1/kafka-infra'
            }
        }
        stage('Terraform init') {
            steps {
                sh 'cd terraform-aws-ec2-with-vpc'
                sh 'terraform init'
            }
        }
        stage('Terraform plan') {
            steps {
                sh 'terraform plan'
            }
        }
        stage('Terraform apply') {
            steps {
                sh 'terraform apply --auto-approve'
            }
        }
    }
}
But while running the Jenkins job (pipeline), it throws this error:
+ cd terraform-aws-ec2-with-vpc
[Pipeline] sh
+ terraform init
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Terraform plan)
[Pipeline] tool
[Pipeline] envVarsForTool
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
+ terraform plan

Error: No configuration files

Plan requires configuration to be present. Planning without a configuration
would mark everything for destruction, which is normally not what is
desired. If you would like to destroy everything, run plan with the
-destroy option. Otherwise, create a Terraform configuration file (.tf
file) and try again.
All the Terraform code is in the same repo, named "kafka-infra", but it still says there are no configuration files in the directory. terraform init runs successfully; the error appears in the "Terraform plan" stage.
This answer was edited per @NoamHelmer's suggestions in the comments.
You can use the dir step and set it to the directory of the cloned repo, as by default Jenkins runs commands in the workspace directory.
stage('Terraform init') {
    steps {
        dir("terraform-aws-ec2-with-vpc") { // this was added
            sh 'terraform init'
        }
    }
}
The same dir block should be added to all the stages.
Alternatively, you could use multiline shell scripts:
steps {
    sh '''
        cd terraform-aws-ec2-with-vpc
        terraform init
    '''
}
As for the style of the configuration, there are multiple (arguably better) ways of doing it. For example, you could use an environment variable instead of hardcoding the directory in which to execute the Terraform code.
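The environment-variable approach could be sketched like this (TF_DIR is a hypothetical name, not something the plugin requires):

```groovy
pipeline {
    agent any
    environment {
        // Define the Terraform directory once, reuse it in every stage
        TF_DIR = 'terraform-aws-ec2-with-vpc'
    }
    stages {
        stage('Terraform init') {
            steps {
                dir("${TF_DIR}") {
                    sh 'terraform init'
                }
            }
        }
        stage('Terraform plan') {
            steps {
                dir("${TF_DIR}") {
                    sh 'terraform plan'
                }
            }
        }
    }
}
```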
[1] https://www.jenkins.io/doc/pipeline/tour/environment/
[2] https://www.jenkins.io/doc/pipeline/steps/workflow-basic-steps/#dir-change-current-directory
I am trying to create a Jenkinsfile that uses an Ant script to build a Salesforce project. I need to provide some build properties to execute the Ant script.
Here is my Jenkinsfile:
pipeline {
    agent any
    environment {
        def antVersion = 'Ant 1.9.16'
    }
    stages {
        stage('Checkout') {
            steps {
                git credentialsId: '******', url: 'https://********/test_repo.git', branch: 'develop'
            }
        }
        stage('Execute Ant Script') {
            steps {
                withEnv([
                    ["ANT_HOME=${tool antVersion}"], ["readFile('./abc.txt').split('\n') as List"]
                ]) {
                    sh 'ant compile'
                }
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
The Jenkins build fails with the following error:
java.lang.ClassCastException: org.jenkinsci.plugins.workflow.steps.EnvStep.overrides expects class java.lang.String but received class java.util.ArrayList
at org.jenkinsci.plugins.structs.describable.DescribableModel.coerce(DescribableModel.java:492)
Caused: java.lang.IllegalArgumentException: Could not instantiate {overrides=[[ANT_HOME=C:\Users\********\Documents\Softwares\apache-ant-1.9.16-bin], [readFile('./abc.txt').split('
') as List]]} for org.jenkinsci.plugins.workflow.steps.EnvStep
at org.jenkinsci.plugins.structs.describable.DescribableModel.instantiate(DescribableModel.java:334)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:302)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:193)
Finished: FAILURE
Can anyone help me resolve this?
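The ClassCastException says withEnv received an ArrayList where it expects strings: the call above passes nested lists (and a quoted expression as a literal string) instead of a flat List of NAME=value entries. A sketch of a flattened call, assuming abc.txt contains one NAME=value pair per line:

```groovy
// withEnv expects a flat List<String> of "NAME=value" entries,
// so concatenate the tool path entry with the lines read from the file
withEnv(["ANT_HOME=${tool antVersion}"] +
        (readFile('./abc.txt').split('\n') as List)) {
    sh 'ant compile'
}
```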
I am running a Jenkins pipeline and trying to upload a Docker image, but it fails on the Artifactory command.
This is a snippet of my Jenkinsfile stage:
stage("Build docker image") {
    steps {
        container('docker') {
            sh 'docker -v'
            script {
                def rtServer = Artifactory.server "artifactory"
                def rtDocker = Artifactory.docker server: rtServer
                docker.build("app", "--build-arg JAR_FILE=app.jar -f Dockerfile .")
                def buildInfo = rtDocker.push '<companyname>.jfrog.io/app', 'docker-snapshot-local'
            }
        }
    }
}
This fails after docker.build with the following message:
[Pipeline] newBuildInfo
[Pipeline] dockerPushStep
expected to call org.jfrog.hudson.pipeline.common.types.Docker.push but wound up catching dockerPushStep; see: https://jenkins.io/redirect/pipeline-cps-method-mismatches/
[Pipeline] }
Jenkins EOF log:
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from ip-XX-XX-XX-XX.ec2.internal/XX.XX.XX.XX:42790
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at org.jfrog.hudson.pipeline.common.docker.utils.DockerAgentUtils.getImageIdFromAgent(DockerAgentUtils.java:291)
at org.jfrog.hudson.pipeline.common.executors.DockerExecutor.execute(DockerExecutor.java:59)
at org.jfrog.hudson.pipeline.scripted.steps.DockerPushStep$Execution.run(DockerPushStep.java:104)
at org.jfrog.hudson.pipeline.scripted.steps.DockerPushStep$Execution.run(DockerPushStep.java:71)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47)
at hudson.security.ACL.impersonate(ACL.java:290)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at ...io.netty.channel.epoll.AbstractEpollChannel.doConnect(AbstractEpollChannel.java:713)
at io.netty.channel.epoll.EpollDomainSocketChannel.doConnect(EpollDomainSocketChannel.java:87)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.connect(AbstractEpollChannel.java:555)
at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1366)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:545)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:530)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.connect(CombinedChannelDuplexHandler.java:497)
at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47)
at io.netty.channel.CombinedChannelDuplexHandler.connect(CombinedChannelDuplexHandler.java:298)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:545)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:530)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:512)
at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:1024)
at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:259)
at io.netty.bootstrap.Bootstrap$3.run(Bootstrap.java:252)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:335)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
The "expected to call..." message is just a warning and you can ignore it; it may disappear in the next version of workflow-cps-plugin. More information about it can be found here.
As for the error, first try to follow the instructions in HAP-1241.
I would like to use the Parameterized Remote Trigger plugin to run the remote project (all branches) and monitor only one of the branches for its status.
node('') {
    triggerRemoteJob abortTriggeredJob: true, auth: CredentialsAuth(credentials: 'E2E'), job: 'http://localhost:8080/job/test-projectF', maxConn: 5, useCrumbCache: true, useJobInfoCache: true
}
This is my code to trigger the remote project, but I end up with an exception:
[Pipeline] {
[Pipeline] triggerRemoteJob
################################################################################################################
Parameterized Remote Trigger Configuration:
- job: http://localhost:8080/job/test-projectF
- auth: 'Credentials Authentication' as user 'admin' (Credentials ID 'E2E')
- parameters:
- blockBuildUntilComplete: true
- connectionRetryLimit: 5
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
hudson.remoting.ProxyException: net.sf.json.JSONException: JSONObject["property"] is not a JSONArray.
at net.sf.json.JSONObject.getJSONArray(JSONObject.java:1986)
at org.jenkinsci.plugins.ParameterizedRemoteTrigger.RemoteBuildConfiguration.isRemoteJobParameterized(RemoteBuildConfiguration.java:1086)
at org.jenkinsci.plugins.ParameterizedRemoteTrigger.RemoteBuildConfiguration.performTriggerAndGetQueueId(RemoteBuildConfiguration.java:637)
at org.jenkinsci.plugins.ParameterizedRemoteTrigger.pipeline.RemoteBuildPipelineStep$Execution.run(RemoteBuildPipelineStep.java:263)
at org.jenkinsci.plugins.ParameterizedRemoteTrigger.pipeline.RemoteBuildPipelineStep$Execution.run(RemoteBuildPipelineStep.java:239)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution$1$1.call(SynchronousNonBlockingStepExecution.java:51)
at hudson.security.ACL.impersonate(ACL.java:290)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution$1.run(SynchronousNonBlockingStepExecution.java:48)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
However, the same command with a branch name appended to the job URL ends in SUCCESS: http://localhost:8080/job/test-projectF/job/master
What is the best way to build a dynamically created branch on the remote Jenkins and track its status?
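Since the branch-specific URL works, one workaround is to build the job URL from the branch you want to monitor (a sketch; branchName and how it is populated are assumptions, not from the plugin docs):

```groovy
node('') {
    // branchName is a hypothetical variable/parameter holding the branch to monitor
    def branchName = 'master'
    triggerRemoteJob(
        abortTriggeredJob: true,
        auth: CredentialsAuth(credentials: 'E2E'),
        // A multibranch project exposes each branch as a nested job
        job: "http://localhost:8080/job/test-projectF/job/${branchName}",
        maxConn: 5,
        useCrumbCache: true,
        useJobInfoCache: true
    )
}
```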
I have installed Jenkins on Docker and created a declarative pipeline from SCM. The Jenkinsfile is on GitHub and contains the following code:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
Now whenever I build the Jenkins job, I get the following error:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] End of Pipeline
groovy.lang.MissingPropertyException: No such property: pipeline for class: groovy.lang.Binding
at groovy.lang.Binding.getVariable(Binding.java:63)
jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
Finished: FAILURE
When I paste the code from the Jenkinsfile on GitHub directly into Jenkins, it builds successfully. I am not sure what the issue is; the same setup had worked earlier (this is a fresh Jenkins install on Docker).
It worked for me after upgrading the Script Security plugin to v1.46 (the latest at the time).