I am trying to upload files to an Artifactory repository using my Jenkinsfile.
The pipeline fails at the Artifactory upload stage.
I'm using JFrog Artifactory plugin version 3.15.4.
#Pipeline Script
rtUpload (
    serverId: 'app1-artifactory',
    spec: """{
        "files": [
            {
                "pattern": "*app*.tar.gz",
                "target": "myrepo/${Artifactory_Directory}/${Bld_ID}/artifacts/"
            },
            {
                "pattern": "*app*.tar.Z",
                "target": "myrepo/${Artifactory_Directory}/${Bld_ID}/artifacts/"
            }
        ]
    }"""
)
#Error
java.lang.Exception: Error occurred during operation, please refer to logs for more information.
at org.jfrog.build.extractor.producerConsumer.ProducerConsumerExecutor.start(ProducerConsumerExecutor.java:85)
at org.jfrog.build.extractor.clientConfiguration.util.spec.SpecsHelper.uploadArtifactsBySpec(SpecsHelper.java:101)
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:166)
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to mymachine
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1800)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:356)
at hudson.remoting.Channel.call(Channel.java:1001)
at hudson.FilePath.act(FilePath.java:1159)
at hudson.FilePath.act(FilePath.java:1148)
at org.jfrog.hudson.pipeline.common.executors.GenericUploadExecutor.execute(GenericUploadExecutor.java:56)
at org.jfrog.hudson.pipeline.declarative.steps.generic.UploadStep$Execution.runStep(UploadStep.java:39)
at org.jfrog.hudson.pipeline.declarative.steps.generic.UploadStep$Execution.runStep(UploadStep.java:24)
at org.jfrog.hudson.pipeline.ArtifactorySynchronousNonBlockingStepExecution.run(ArtifactorySynchronousNonBlockingStepExecution.java:54)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused: java.lang.RuntimeException: Failed uploading artifacts by spec
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:168)
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:117)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3317)
at hudson.remoting.UserRequest.perform(UserRequest.java:211)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:376)
at hudson.remoting.InterceptingExecutorService.lambda$wrap$0(InterceptingExecutorService.java:78)
at hudson.remoting.InterceptingExecutorService$$Lambda$8/0x000000008800fe60.call(Unknown Source)
at java.util.concurrent.FutureTask.run(FutureTask.java:277)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:825)
Sometimes, when the pipeline is rebuilt, it runs without errors and the files are uploaded to the repo successfully. What could be the issue?
This seems to be an open issue: https://issues.jenkins.io/browse/JENKINS-66424
Downgrading the Artifactory plugin to version 3.10.5 is the workaround until the issue gets resolved.
This is an already-open issue on the Jenkins issue tracker:
https://issues.jenkins.io/browse/JENKINS-66424
Please check the link.
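If you need to stay on the affected plugin version for a while, a stopgap (not a fix for the underlying issue) is to wrap the upload in Jenkins' built-in retry step, since the failure is intermittent. A sketch based on the spec from the question; the downgrade described above remains the actual workaround:
retry(3) {
    rtUpload (
        serverId: 'app1-artifactory',
        spec: """{
            "files": [
                {
                    "pattern": "*app*.tar.gz",
                    "target": "myrepo/${Artifactory_Directory}/${Bld_ID}/artifacts/"
                },
                {
                    "pattern": "*app*.tar.Z",
                    "target": "myrepo/${Artifactory_Directory}/${Bld_ID}/artifacts/"
                }
            ]
        }"""
    )
}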
Related
Jenkins Master : 2.282
Kubernetes plugin 1.28.5
After upgrading the Kubernetes plugin to version 1.28.6, we found that some of our pipelines that use a YAML pod template from a shared library started failing.
Our shared library agent definition:
agent {
    kubernetes {
        cloud "kube-cloud"
        inheritFrom 'jnlp-slave-terraform'
        yaml libraryResource('k8s/tf.yaml')
    }
}
The failing flows use initContainers in the YAML file.
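For illustration, a stripped-down pod template with an initContainer, similar in shape to what our resource defines, inlined here instead of loaded via libraryResource (the container names and images are placeholders, not our actual k8s/tf.yaml):
agent {
    kubernetes {
        cloud "kube-cloud"
        yaml '''
apiVersion: v1
kind: Pod
spec:
  initContainers:
    - name: init-workspace          # placeholder init container
      image: busybox:1.32
      command: ["sh", "-c", "echo preparing workspace"]
  containers:
    - name: terraform               # placeholder build container
      image: hashicorp/terraform:0.14.7
      command: ["cat"]
      tty: true
'''
    }
}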
I think this PR broke it: https://github.com/jenkinsci/kubernetes-plugin/pull/929
Attached is the error we get when trying to launch new pods:
Also: java.lang.Throwable: launched here
at hudson.slaves.SlaveComputer._connect(SlaveComputer.java:283)
at hudson.model.Computer.connect(Computer.java:435)
at hudson.slaves.CloudRetentionStrategy.start(CloudRetentionStrategy.java:73)
at org.jenkinsci.plugins.durabletask.executors.OnceRetentionStrategy.start(OnceRetentionStrategy.java:83)
at org.jenkinsci.plugins.durabletask.executors.OnceRetentionStrategy.start(OnceRetentionStrategy.java:46)
at hudson.model.AbstractCIBase.updateComputer(AbstractCIBase.java:161)
at hudson.model.AbstractCIBase.access$000(AbstractCIBase.java:43)
at hudson.model.AbstractCIBase$2.run(AbstractCIBase.java:223)
at hudson.model.Queue._withLock(Queue.java:1383)
at hudson.model.Queue.withLock(Queue.java:1260)
at hudson.model.AbstractCIBase.updateComputerList(AbstractCIBase.java:206)
at jenkins.model.Jenkins.updateComputerList(Jenkins.java:1632)
at jenkins.model.Nodes$2.run(Nodes.java:139)
at hudson.model.Queue._withLock(Queue.java:1383)
at hudson.model.Queue.withLock(Queue.java:1260)
at jenkins.model.Nodes.addNode(Nodes.java:135)
at jenkins.model.Jenkins.addNode(Jenkins.java:2155)
at hudson.slaves.NodeProvisioner.lambda$update$6(NodeProvisioner.java:256)
at hudson.model.Queue._withLock(Queue.java:1383)
at hudson.model.Queue.withLock(Queue.java:1260)
at hudson.slaves.NodeProvisioner.update(NodeProvisioner.java:225)
at hudson.slaves.NodeProvisioner.access$900(NodeProvisioner.java:64)
at hudson.slaves.NodeProvisioner$NodeProvisionerInvoker.doRun(NodeProvisioner.java:823)
at hudson.triggers.SafeTimerTask.run(SafeTimerTask.java:91)
at jenkins.security.ImpersonatingScheduledExecutorService$1.run(ImpersonatingScheduledExecutorService.java:67)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
We have created an issue as well: https://issues.jenkins.io/browse/JENKINS-65074
I am using Jenkins version 2.236 (but I also tried 2.263.1), vSphere plugin version 2.24, and vSphere Client version 6.7.0.
When I start my job and check the Jenkins logs, I see the error shown below.
The .vmtx file is available in the vSphere client (I can download it, delete it, or move it by hand with no problem), and I can also clone a VM from the template manually, but the plugin cannot access it and fails to clone with this error.
How can I avoid this problem?
Unexpected exception encountered while provisioning agent dynamic3fofguaybl6yiwkjfe7jk5zm9
com.vmware.vim25.CannotAccessVmConfig
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at com.vmware.vim25.ws.XmlGenDom.fromXml(XmlGenDom.java:253)
at com.vmware.vim25.ws.XmlGenDom.fromXml(XmlGenDom.java:363)
at com.vmware.vim25.ws.XmlGenDom.fromXml(XmlGenDom.java:363)
at com.vmware.vim25.ws.XmlGenDom.fromXml(XmlGenDom.java:363)
at com.vmware.vim25.ws.XmlGenDom.fromXml(XmlGenDom.java:356)
at com.vmware.vim25.ws.XmlGenDom.fromXML(XmlGenDom.java:233)
at com.vmware.vim25.ws.XmlGenDom.fromXML(XmlGenDom.java:124)
at com.vmware.vim25.ws.SoapClient.unMarshall(SoapClient.java:253)
at com.vmware.vim25.ws.WSClient.invoke(WSClient.java:96)
at com.vmware.vim25.ws.VimStub.retrieveProperties(VimStub.java:106)
at com.vmware.vim25.mo.PropertyCollector.retrieveProperties(PropertyCollector.java:98)
at com.vmware.vim25.mo.ManagedObject.retrieveObjectProperties(ManagedObject.java:146)
at com.vmware.vim25.mo.ManagedObject.getCurrentProperty(ManagedObject.java:167)
at com.vmware.vim25.mo.Task.getTaskInfo(Task.java:51)
Caused: org.jenkinsci.plugins.vsphere.tools.VSphereException: vSphere Error: Couldn't clone "ubuntu-18.04-dynamic-node-for-test". Clone task ended with status error.
Unable to access the virtual machine configuration: Unable to access file [esxi-datastore_nvme] ubuntu-18.04-dynamic-node-for-test/ubuntu-18.04-dynamic-node-for-test.vmtx
at org.jenkinsci.plugins.vsphere.tools.VSphere.newVSphereException(VSphere.java:1151)
at org.jenkinsci.plugins.vsphere.tools.VSphere.cloneOrDeployVm(VSphere.java:287)
at org.jenkinsci.plugins.vSphereCloudSlaveTemplate.provision(vSphereCloudSlaveTemplate.java:428)
at org.jenkinsci.plugins.vSphereCloudSlaveTemplate.provision(vSphereCloudSlaveTemplate.java:403)
at org.jenkinsci.plugins.vSphereCloud$VSpherePlannedNode.provisionNewNode(vSphereCloud.java:534)
at org.jenkinsci.plugins.vSphereCloud$VSpherePlannedNode.access$100(vSphereCloud.java:496)
at org.jenkinsci.plugins.vSphereCloud$VSpherePlannedNode$1.call(vSphereCloud.java:510)
at org.jenkinsci.plugins.vSphereCloud$VSpherePlannedNode$1.call(vSphereCloud.java:506)
at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:71)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I want to stash one file from one machine and unstash it on another machine, so I have the following pipeline code:
post {
    success {
        script {
            stash allowEmpty: true, includes: 'Installer/My.war', name: 'MyWar', useDefaultExcludes: false
        }
        node(SERVER1) {
            ws("workspace/RedmineAndReviewboardProject/SVNCheckout") {
                script {
                    sh '''mkdir -p "Installer"'''
                    dir('/Installer') {
                        unstash 'MyWar'
                    }
                }
            }
        }
    }
}
Here the stash is executed on one machine and the unstash on another machine, but it results in the following error:
java.io.IOException: Failed to extract MyWar.tar.gz
at hudson.FilePath.readFromTar(FilePath.java:2608)
at hudson.FilePath.access$500(FilePath.java:211)
at hudson.FilePath$UntarRemote.invoke(FilePath.java:585)
at hudson.FilePath$UntarRemote.invoke(FilePath.java:576)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3069)
at hudson.remoting.UserRequest.perform(UserRequest.java:211)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to 192.168.136.30
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:356)
at hudson.remoting.Channel.call(Channel.java:955)
at hudson.FilePath.act(FilePath.java:1069)
at hudson.FilePath.act(FilePath.java:1058)
at hudson.FilePath.untar(FilePath.java:571)
at org.jenkinsci.plugins.workflow.flow.StashManager.unstash(StashManager.java:165)
at org.jenkinsci.plugins.workflow.support.steps.stash.UnstashStep$Execution.run(UnstashStep.java:76)
at org.jenkinsci.plugins.workflow.support.steps.stash.UnstashStep$Execution.run(UnstashStep.java:63)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
... 4 more
Caused by: java.nio.file.AccessDeniedException: /Installer
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
at java.nio.file.Files.createDirectory(Files.java:674)
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
at java.nio.file.Files.createDirectories(Files.java:767)
at hudson.FilePath.mkdirs(FilePath.java:3256)
at hudson.FilePath.readFromTar(FilePath.java:2592)
... 12 more
What has gone wrong here?
As #vinWin suggests, your problem seems to be related to accessing the Installer directory.
While in the pipeline you create a directory at workspace/RedmineAndReviewboardProject/SVNCheckout/Installer, in the dir step you try to access a different one, the absolute path /Installer.
On one hand, were you trying to refer to $WORKSPACE on SERVER1? If so, in the ws step you should use $WORKSPACE/RedmineAndReviewboardProject/SVNCheckout. Otherwise, you will be working at $WORKSPACE/workspace/RedmineAndReviewboardProject/SVNCheckout.
For the dir step, since you are already located at the root of the workspace defined above, using Installer without the leading / will suffice.
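Putting that together, a minimal sketch of the corrected unstash part, based on the snippet in the question and keeping its SERVER1 node and workspace path:
node(SERVER1) {
    ws("workspace/RedmineAndReviewboardProject/SVNCheckout") {
        script {
            // create the target directory inside the workspace
            sh 'mkdir -p Installer'
            // relative path: unstash into <workspace>/Installer instead of the absolute /Installer
            dir('Installer') {
                unstash 'MyWar'
            }
        }
    }
}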
A weird problem I came across: I run my slave as a Docker image, sometimes as a Docker container on the master node and other times on ECS (Fargate) using the Amazon Elastic Container Service plugin.
I run the piece of code below:
publishLambda(
    awsAccessKeyId: "${env.AWS_ACCESS_KEY_ID}",
    awsSecretKey: "${env.AWS_SECRET_ACCESS_KEY}",
    awsRegion: "${lambda_config.region}",
    functionARN: lambda_name,
    functionAlias: "DEV"
)
It works fine when I'm running the slave as a Docker container, but when it runs on ECS I get the following error after the Lambda gets published successfully. I suspect it's something in the hudson.remoting API when it tries to send the response back across the network.
In my opinion, hudson.remoting should behave the same irrespective of where the containers are running. How am I getting such discrepancies?
java.io.NotSerializableException:
com.xti.jenkins.plugin.awslambda.publish.LambdaPublishServiceResponse
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at hudson.remoting.UserRequest._serialize(UserRequest.java:264)
at hudson.remoting.UserRequest.serialize(UserRequest.java:273) Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to
JNLP4-connect connection from
ec2-18-224-68-207.us-east-2.compute.amazonaws.com/18.224.68.207:40038
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at com.xti.jenkins.plugin.awslambda.publish.LambdaPublishBuildStep.perform(LambdaPublishBuildStep.java:58)
at com.xti.jenkins.plugin.awslambda.publish.LambdaPublishBuildStep.perform(LambdaPublishBuildStep.java:46)
at org.jenkinsci.plugins.workflow.steps.CoreStep$Execution.run(CoreStep.java:80)
at org.jenkinsci.plugins.workflow.steps.CoreStep$Execution.run(CoreStep.java:67)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Caused: java.io.IOException: Unable to serialize
com.xti.jenkins.plugin.awslambda.publish.LambdaPublishServiceResponse#4ec0e00f
at hudson.remoting.UserRequest.serialize(UserRequest.java:275)
at hudson.remoting.UserRequest.perform(UserRequest.java:223)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:93)
Caused: java.lang.RuntimeException
at com.xti.jenkins.plugin.awslambda.publish.LambdaPublishBuildStep.perform(LambdaPublishBuildStep.java:66)
at com.xti.jenkins.plugin.awslambda.publish.LambdaPublishBuildStep.perform(LambdaPublishBuildStep.java:46)
at org.jenkinsci.plugins.workflow.steps.CoreStep$Execution.run(CoreStep.java:80)
at org.jenkinsci.plugins.workflow.steps.CoreStep$Execution.run(CoreStep.java:67)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748) Finished: FAILURE
The reason for this was that the class LambdaPublishServiceResponse in the AWS Lambda plugin did not implement java.io.Serializable.
Since the response from the slave running on ECS (Fargate) has to be serialized over the remoting channel to be retrieved by the master, this change had to be made. I have raised a PR: https://github.com/XT-i/aws-lambda-jenkins-plugin/pull/100
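For illustration only, this is the general pattern rather than the plugin's actual source: any object returned from the agent to the master over hudson.remoting must implement java.io.Serializable. A minimal sketch (the fields here are placeholders):
import java.io.Serializable;

// Sketch only: the real class lives in the AWS Lambda plugin; the fields below are placeholders.
public class LambdaPublishServiceResponse implements Serializable {
    private static final long serialVersionUID = 1L;

    private final boolean success;

    public LambdaPublishServiceResponse(boolean success) {
        this.success = success;
    }

    public boolean isSuccess() {
        return success;
    }
}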
We have the following steps in our Jenkinsfile (trying to upload artifacts to our Artifactory server):
def server = script.Artifactory.server("our-artifactory-server-id")
def uploadSpec = """{
    "files": [
        {
            "pattern": "${sourcePath}",
            "target": "${targetPath}"
        }
    ]
}"""
server.upload(uploadSpec)
This used to work until we updated to a newer version of Artifactory. Ever since the update, we get the following error when running the build job:
java.io.IOException: Failed to deploy file. Status code: 400
at org.jfrog.build.extractor.clientConfiguration.client.ArtifactoryBuildInfoClient.uploadFile(ArtifactoryBuildInfoClient.java:656)
at org.jfrog.build.extractor.clientConfiguration.client.ArtifactoryBuildInfoClient.deployArtifact(ArtifactoryBuildInfoClient.java:343)
at org.jfrog.build.extractor.clientConfiguration.util.spec.SpecsHelper.deploy(SpecsHelper.java:291)
at org.jfrog.build.extractor.clientConfiguration.util.spec.SpecsHelper.uploadArtifactsBySpec(SpecsHelper.java:65)
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:189)
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:130)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2750)
at hudson.remoting.UserRequest.perform(UserRequest.java:181)
at hudson.remoting.UserRequest.perform(UserRequest.java:52)
at hudson.remoting.Request$2.run(Request.java:336)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at ......remote call to docker-bc26fb0b91c4(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1554)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:281)
at hudson.remoting.Channel.call(Channel.java:839)
at hudson.FilePath.act(FilePath.java:987)
Caused: java.io.IOException: remote file operation failed: /home/jenkins/workspace/tration_feature_jenkinsfile-ANARWI2SDBPRVZNIYHCS6XKXIAD2SZ5ZTHM6DRXHYSARAQHPWEMQ at hudson.remoting.Channel#4c39a5aa:docker-bc26fb0b91c4
at hudson.FilePath.act(FilePath.java:994)
at hudson.FilePath.act(FilePath.java:976)
at org.jfrog.hudson.pipeline.executors.GenericUploadExecutor.execution(GenericUploadExecutor.java:52)
at org.jfrog.hudson.pipeline.steps.UploadStep$Execution.run(UploadStep.java:65)
at org.jfrog.hudson.pipeline.steps.UploadStep$Execution.run(UploadStep.java:46)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47)
at hudson.security.ACL.impersonate(ACL.java:260)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
Some background regarding our setup:
Jenkins version 2.69
Artifactory version 5.8.4
Artifactory plugin version 2.14.0
the error started to appear with a recent update of Artifactory
the Artifactory log shows no output for the error
we are sitting behind a proxy, but no_proxy is set correctly, at least we can curl https://... to our Artifactory host
we have self-signed certificates for Artifactory, but they should be properly added to the java truststore and the system truststore, since we can open URLs in java apps as well as with curl without any issues.
Any idea how we could debug this issue?
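To narrow it down, the same deploy can be reproduced outside Jenkins with a plain HTTP PUT against Artifactory; if that also returns a 400, the problem is likely in the URL or target path rather than in the plugin itself. A sketch (host, repository, path, and credential variables are placeholders), which could also be run as a step in the pipeline:
sh '''
    # Placeholders: adjust host, repository, path, and credentials to your environment.
    # The response body printed by curl usually contains the reason for the 400.
    curl -sS -u "$ARTIFACTORY_USER:$ARTIFACTORY_PASSWORD" -T your-artifact.tar.gz "https://your-artifactory-host/artifactory/myrepo/some/path/your-artifact.tar.gz"
'''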
I had a very similar issue and found the solution in the configuration of the Artifactory Jenkins plugin (Manage Jenkins --> Configure System --> Artifactory).
What I did was change the Artifactory server URL from:
https://<artifactorydomain.com>
to the new URL (adding the /artifactory context path):
https://<artifactorydomain.com>/artifactory
Hope this helps.
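The same applies if the server is defined in the pipeline itself rather than in the global configuration: the URL passed to Artifactory.newServer also needs to include the /artifactory context path. A sketch reusing the upload spec from the question (the host and credentials ID are placeholders):
// Define the server inline; note the /artifactory context path at the end of the URL.
def server = Artifactory.newServer url: 'https://artifactorydomain.com/artifactory', credentialsId: 'my-artifactory-credentials'
// The upload spec itself stays the same; only the base URL changes.
server.upload(uploadSpec)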