Kicked off our pipeline (id: 2015-05-24_21_45_40-9907197960194601343), and it failed with the error below. It looks like a problem with reading side inputs.
Did something change in the service?
May 25, 2015, 2:53:09 PM
(ac187ee1445808a5): java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Premature EOF at
com.google.cloud.dataflow.sdk.runners.worker.SideInputUtils$ReaderIterator.hasNext(SideInputUtils.java:127) at com.google.common.collect.TransformedIterator.hasNext(TransformedIterator.java:43)
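For context, reads that go through SideInputUtils$ReaderIterator typically happen when a ParDo iterates one of its side inputs. A minimal sketch with the Dataflow 1.x SDK (all names here are illustrative, not our actual pipeline code):

    import com.google.cloud.dataflow.sdk.transforms.DoFn;
    import com.google.cloud.dataflow.sdk.transforms.ParDo;
    import com.google.cloud.dataflow.sdk.transforms.View;
    import com.google.cloud.dataflow.sdk.values.PCollectionView;

    PCollectionView<Iterable<String>> sideView =
        sidePCollection.apply(View.<String>asIterable());

    mainPCollection.apply(ParDo.withSideInputs(sideView).of(new DoFn<String, String>() {
        @Override
        public void processElement(ProcessContext c) {
            // Each iteration pulls side-input data through the worker's
            // SideInputUtils$ReaderIterator.hasNext, where the error above surfaced.
            for (String s : c.sideInput(sideView)) {
                // process s ...
            }
            c.output(c.element());
        }
    }));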
Update: Another job with the same problem: 2015-05-25_00_50_13-9599484476849123094
Update: Another job with the same problem: 2015-06-04_18_22_47-8603825216881045184
We haven't seen this error since; it seems to have been fixed on the Dataflow service side.
I removed some old jobs from the Jenkins Job DSL file that was used to create them. When the seed job runs and tries to delete the now-unreferenced jobs, it fails with a stack overflow error.
Here is an excerpt from the error message:
Unreferenced items:
GeneratedJob{name='...'}
GeneratedJob{name='...'}
... about 20 more Unreferenced jobs listed here ...
java.lang.RuntimeException: java.io.IOException: Remote call on JNLP4-connect connection from ***.***.***.net/***.***.**.**:***** failed
at hudson.plugins.tfs.model.Server.execute(Server.java:237)
at hudson.plugins.tfs.model.Workspaces.getListFromServer(Workspaces.java:36)
at hudson.plugins.tfs.model.Workspaces.populateMapFromServer(Workspaces.java:45)
at hudson.plugins.tfs.model.Workspaces.exists(Workspaces.java:71)
at hudson.plugins.tfs.actions.RemoveWorkspaceAction.remove(RemoveWorkspaceAction.java:25)
at hudson.plugins.tfs.TeamFoundationServerScm.processWorkspaceBeforeDeletion(TeamFoundationServerScm.java:465)
at hudson.scm.SCM.processWorkspaceBeforeDeletion(SCM.java:245)
at hudson.model.AbstractProject.performDelete(AbstractProject.java:358)
at hudson.model.AbstractItem.delete(AbstractItem.java:775)
at hudson.model.Job.delete(Job.java:675)
at com.cloudbees.hudson.plugins.folder.AbstractFolder.delete(AbstractFolder.java:1176)
at javaposse.jobdsl.plugin.ExecuteDslScripts.updateGeneratedJobs(ExecuteDslScripts.java:460)
at javaposse.jobdsl.plugin.ExecuteDslScripts.perform(ExecuteDslScripts.java:361)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:79)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1818)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: java.io.IOException: Remote call on JNLP4-connect connection from ***.***.***.net/***.***.**.**:***** failed
at hudson.remoting.Channel.call(Channel.java:963)
at hudson.plugins.tfs.model.Server.execute(Server.java:233)
... 22 more
Caused by: java.lang.StackOverflowError
at com.microsoft.tfs.util.listeners.StandardListenerList$ListenerNode.addListener(StandardListenerList.java:304)
at com.microsoft.tfs.util.listeners.StandardListenerList$ListenerNode.addListener(StandardListenerList.java:304)
at com.microsoft.tfs.util.listeners.StandardListenerList$ListenerNode.addListener(StandardListenerList.java:304)
... about 500 more lines like this ...
at com.microsoft.tfs.util.listeners.StandardListenerList$ListenerNode.addListener(StandardListenerList.java:304)
at com.microsoft.tfs.util.listeners.StandardListenerList$ListenerNode.addListener(StandardListenerList.java:304)
ERROR: java.io.IOException: Remote call on JNLP4-connect connection from ***.***.***.net/***.***.**.**:***** failed
Finished: FAILURE
I attempted to fix things by deleting the jobs through the Jenkins UI, but got the same stack overflow error there on a couple of the affected jobs. Changing the source control setting from Team Foundation Server to None, saving the job, and then deleting it worked, and the jobs are now cleared out.
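The same workaround can be applied in bulk from the Jenkins script console. A minimal sketch (the Groovy console accepts this Java-style syntax; the job-name prefix is a hypothetical placeholder to select the affected jobs):

    import jenkins.model.Jenkins;
    import hudson.model.AbstractProject;
    import hudson.scm.NullSCM;

    for (AbstractProject job : Jenkins.getInstance().getAllItems(AbstractProject.class)) {
        if (job.getName().startsWith("old-tfs-")) { // hypothetical prefix for the affected jobs
            job.setScm(new NullSCM()); // detach TFS so delete() skips the workspace cleanup that overflows
            job.save();
            job.delete();
        }
    }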
I went back and ran the seed job again, but it still fails with the same message.
What causes this stack overflow error in Jenkins while trying to delete jobs?
The stack trace is below:
Evacuated stdout
Starting Selenium nodes on ci2
March 18, 2019 11:04:00 AM org.jenkinsci.remoting.util.AnonymousClassWarnings warn
WARN: Attempt to (de-)serialize anonymous class org.jenkinsci.plugins.gitclient.Git$1; see: https://jenkins.io/redirect/serialization-of-anonymous-classes/
March 18, 2019 11:04:03 AM org.jenkinsci.remoting.util.AnonymousClassWarnings warn
WARN: Attempt to (de-)serialize anonymous class org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1; see: https://jenkins.io/redirect/serialization-of-anonymous-classes/
Slave JVM has not reported exit code. Is it still running?
[03/18/19 11:04:06] Launch failed - cleaning up connection
ERROR: Connection terminated
java.io.EOFException
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2681)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3156)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:862)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:358)
at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
at hudson.remoting.Command.readFrom(Command.java:140)
at hudson.remoting.Command.readFrom(Command.java:126)
at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:36)
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
[03/18/19 11:04:06] [SSH] Connection closed.
I am using JDK 8 and Jenkins 2.164.1 on Ubuntu 16.04.6.
How can I fix this?
This issue has been bugging us as well. There seems to be a temporary workaround, not a solution: set the remote root directory in the slave node configuration on the Jenkins server to a new path, and remoting will take care of the rest. However, the issue seems to recur now and then, and we don't know the root cause yet. Any word is more than welcome.
WebLogic is not coming up and gives the following stack trace. Can anyone help solve this?
<Jun 20, 2018 1:04:27,029 PM UTC> <Critical> <WebLogicServer> <BEA-000386> <Server subsystem failed. Reason: A MultiException has 4 exceptions. They are:
1. java.lang.ExceptionInInitializerError
2. java.lang.IllegalStateException: Unable to perform operation: post construct on weblogic.rjvm.RJVMService
3. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.protocol.ProtocolRegistrationService errors were found
4. java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.protocol.ProtocolRegistrationService
A MultiException has 4 exceptions. They are:
1. java.lang.ExceptionInInitializerError
2. java.lang.IllegalStateException: Unable to perform operation: post construct on weblogic.rjvm.RJVMService
3. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.protocol.ProtocolRegistrationService errors were found
4. java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.protocol.ProtocolRegistrationService
at org.jvnet.hk2.internal.Collector.throwIfErrors(Collector.java:89)
at org.jvnet.hk2.internal.ClazzCreator.resolveAllDependencies(ClazzCreator.java:250)
at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:358)
at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:487)
at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:305)
Truncated. see log file for complete stacktrace
Caused By: java.lang.ExceptionInInitializerError
at weblogic.utils.net.AddressUtils.getIPForLocalHost(AddressUtils.java:163)
at weblogic.rjvm.JVMID.setLocalID(JVMID.java:278)
at weblogic.rjvm.RJVMService.setJVMID(RJVMService.java:72)
at weblogic.rjvm.RJVMService.start(RJVMService.java:54)
at weblogic.server.AbstractServerService.postConstruct(AbstractServerService.java:76)
Truncated. see log file for complete stacktrace
Caused By: java.lang.NullPointerException
at weblogic.utils.net.AddressUtils$AddressMaker.getAllAddresses(AddressUtils.java:62)
at weblogic.utils.net.AddressUtils$AddressMaker.<clinit>(AddressUtils.java:45)
at weblogic.utils.net.AddressUtils.getIPForLocalHost(AddressUtils.java:163)
at weblogic.rjvm.JVMID.setLocalID(JVMID.java:278)
at weblogic.rjvm.RJVMService.setJVMID(RJVMService.java:72)
Truncated. see log file for complete stacktrace
>
The WebLogic Server encountered a critical failure
Reason: Assertion violated
Stopping Derby server...
Derby server stopped.
It turned out there was a network interface resolution problem inside the Docker container that was causing this.
Check the following points for resolution (a quick diagnostic is sketched below):
1) /etc/hosts should contain an entry for localhost (verify with cat /etc/hosts)
2) the docker0 interface should be up
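This small, self-contained check roughly approximates what WebLogic needs at startup: resolving the local host and enumerating usable network interfaces. If it fails or shows docker0 down inside the container, the NullPointerException in AddressUtils above is the expected symptom. The class name is illustrative:

    import java.net.InetAddress;
    import java.net.NetworkInterface;
    import java.util.Collections;

    public class LocalhostCheck { // illustrative name
        public static void main(String[] args) throws Exception {
            // Point 1: localhost must resolve via /etc/hosts
            InetAddress local = InetAddress.getLocalHost();
            System.out.println("Localhost resolves to: " + local.getHostAddress());
            // Point 2: interfaces (including docker0) must be enumerable and up
            for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                System.out.println(nif.getName() + " up=" + nif.isUp());
            }
        }
    }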
I am running a batch pipeline with the Apache Beam 2.2 SDK via the Cloud Dataflow service. There are 751 text files that I parse using the TextIO.readAll() transform, deserialize, and write to a date-partitioned table in BigQuery.
The first thing I noticed is that autoscaling was not really kicking in and left the pipeline at 15 workers, even though I was able to push throughput a lot higher by, for example, manually setting the number of workers to 250.
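For reference, a minimal sketch of the pipeline shape described above, against the Beam 2.2 Java SDK (ParseFn, filePatterns, and the table spec are hypothetical placeholders, not the actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.TextIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.ParDo;

    Pipeline p = Pipeline.create(options);
    p.apply(Create.of(filePatterns))   // the 751 input file patterns on GCS
     .apply(TextIO.readAll())          // expand the patterns and read every matched file
     .apply(ParDo.of(new ParseFn()))   // hypothetical DoFn<String, TableRow> doing the deserialization
     .apply(BigQueryIO.writeTableRows()
         .to("project:dataset.table$20180101") // illustrative date-partition decorator
         .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
         .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
    p.run();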
My pipeline fails with the following stack trace:
(abed94a6f5139e21): java.io.IOException: Failed to close some writers
at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles.finishBundle(WriteBundlesToFiles.java:248)
Suppressed: java.io.IOException: com.google.api.client.googleapis.json.GoogleJsonResponseException: 503 Service Unavailable
Service Unavailable
at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.waitForCompletionAndThrowIfUploadFailed(AbstractGoogleAsyncWriteChannel.java:431)
at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.close(AbstractGoogleAsyncWriteChannel.java:289)
at org.apache.beam.sdk.io.gcp.bigquery.TableRowWriter.close(TableRowWriter.java:81)
at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles.finishBundle(WriteBundlesToFiles.java:242)
at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles$DoFnInvoker.invokeFinishBundle(Unknown Source)
at org.apache.beam.runners.core.SimpleDoFnRunner.finishBundle(SimpleDoFnRunner.java:187)
at com.google.cloud.dataflow.worker.SimpleParDoFn.finishBundle(SimpleParDoFn.java:407)
at com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.finish(ParDoOperation.java:60)
at com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:76)
at com.google.cloud.dataflow.worker.DataflowWorker.executeWork(DataflowWorker.java:330)
at com.google.cloud.dataflow.worker.DataflowWorker.doWork(DataflowWorker.java:302)
at com.google.cloud.dataflow.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:251)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:135)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:115)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:102)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 503 Service Unavailable
Service Unavailable
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:432)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel$UploadOperation.call(AbstractGoogleAsyncWriteChannel.java:357)
... 4 more
Should I try with even more workers or split the work across several pipelines?
Thanks to the comment by jkff, it worked flawlessly after setting --maxNumWorkers=250 (15 appears to be the default maximum).
The error was transient; Dataflow retried it several times and in the end the pipeline ran successfully.
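For anyone hitting the same cap, the option can be passed as --maxNumWorkers=250 on the command line or set programmatically. A minimal sketch:

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
        .withValidation()
        .as(DataflowPipelineOptions.class);
    options.setMaxNumWorkers(250); // lets autoscaling go past the 15-worker ceiling observed above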
Task I am doing
I am trying to run JUnit tests using Gradle in Jenkins. I am using PowerMock (1.4.12) + Mockito (1.9.5) for mocking, along with JUnit 4 (4.11) and Java 8.
The problem
I get the following error in the Jenkins console: java.lang.RuntimeException: java.io.IOException: invalid constant type: 15
The JUnit report shows the stack trace below:
java.lang.IllegalStateException: Failed to transform class with name amdocs.APILink.backend.services.arCrgAdjnRef00. Reason: null
at org.powermock.core.classloader.MockClassLoader.loadMockClass(MockClassLoader.java:266)
at org.powermock.core.classloader.MockClassLoader.loadModifiedClass(MockClassLoader.java:180)
at org.powermock.core.classloader.DeferSupportingClassLoader.loadClass(DeferSupportingClassLoader.java:68)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
You should probably not use libraries and tools that are five years old when newer versions exist. Here, "invalid constant type: 15" comes from the Javassist version bundled with that PowerMock release: constant pool tag 15 (CONSTANT_MethodHandle) was added to the class-file format after that Javassist was written, so it cannot parse classes compiled for Java 8. Upgrading PowerMock to a release that understands Java 8 (and pulls in a newer Javassist) should fix it.
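A sketch of the dependency upgrade in build.gradle (these particular versions are suggestions known to support Java 8 with the Mockito 1.x API, not verified minimums):

    dependencies {
        testCompile 'junit:junit:4.12'
        testCompile 'org.mockito:mockito-core:1.10.19'
        testCompile 'org.powermock:powermock-module-junit4:1.6.6'
        testCompile 'org.powermock:powermock-api-mockito:1.6.6'
    }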