Docker: Illegal access: this web application instance has been stopped already - docker

A Docker setup with Apache, Tomcat, and Elasticsearch images starts and runs without issue, but after a restart with docker-compose down/up, the following error occurs:
29-Nov-2018 16:43:22.063 INFO [elasticsearch[_client_][scheduler][T#1]] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1348)
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1336)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1195)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1156)
at org.elasticsearch.common.util.concurrent.ThreadContext.preserveContext(ThreadContext.java:273)
at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.wrapRunnable(EsThreadPoolExecutor.java:159)
at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:89)
at org.elasticsearch.threadpool.ThreadPool$ThreadedRunnable.run(ThreadPool.java:455)
at org.elasticsearch.threadpool.ThreadPool$LoggingRunnable.run(ThreadPool.java:419)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Any idea how to avoid this error? Thanks in advance.
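The thread name elasticsearch[_client_][scheduler] in the log suggests that an Elasticsearch client created inside the webapp keeps scheduling work after Tomcat has stopped the application. A minimal sketch of one common mitigation, assuming the webapp builds its own client (the EsClientHolder factory below is hypothetical), is to close the client from a ServletContextListener so its threads stop before the classloader is released:
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import org.elasticsearch.client.Client;

@WebListener
public class ElasticsearchClientLifecycle implements ServletContextListener {

    private Client client;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        client = EsClientHolder.create(); // hypothetical factory that builds the client
        sce.getServletContext().setAttribute("esClient", client);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Closing the client stops its scheduler threads before Tomcat releases
        // the webapp classloader, so they no longer trigger the "stopped already" error.
        if (client != null) {
            client.close();
        }
    }
}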

Related

Error when binding two JanusGraph instances to the same graph

I'm trying to deploy two instances of JanusGraph (v0.3.1) that insert data into the same keyspace of a ScyllaDB backend. To do that, I deploy two JanusGraph containers with Docker. The first one starts without errors and creates the keyspace in my ScyllaDB, but the second one fails with the errors below.
My JanusGraph containers run on one cluster while my backend runs on another. I also tried restarting a single container after the Scylla keyspace had already been created, and I hit the same problem.
I also found a clean.groovy script that force-closes the open graph instances, but nothing works...
5316 [main] INFO org.janusgraph.diskstorage.Backend - Initiated backend operations thread pool of size 8
5648 [main] WARN org.apache.tinkerpop.gremlin.server.GremlinServer - Graph [graph] configured at [/janusgraph-config/janusgraph.properties] could not be instantiated and will not be available in Gremlin Server. GraphFactory message: GraphFactory could not instantiate this Graph implementation [class org.janusgraph.core.JanusGraphFactory]
java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [class org.janusgraph.core.JanusGraphFactory]
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:82)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:70)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:104)
at org.apache.tinkerpop.gremlin.server.util.DefaultGraphManager.lambda$new$0(DefaultGraphManager.java:57)
at java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671)
at org.apache.tinkerpop.gremlin.server.util.DefaultGraphManager.<init>(DefaultGraphManager.java:55)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:80)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:120)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:84)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:343)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:78)
... 13 more
Caused by: org.janusgraph.core.JanusGraphException: A JanusGraph graph with the same instance id [0a0a01657-node11] is already open. Might required forced shutdown.
at org.janusgraph.graphdb.database.StandardJanusGraph.<init>(StandardJanusGraph.java:165)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:160)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:131)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:111)
... 18 more
5651 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
5800 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized GremlinExecutor and preparing GremlinScriptEngines instances.
7837 [gremlin-server-exec-1] ERROR org.apache.tinkerpop.gremlin.jsr223.DefaultGremlinScriptEngineManager - Could not create GremlinScriptEngine for gremlin-groovy
java.lang.IllegalStateException: javax.script.ScriptException: javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
at org.apache.tinkerpop.gremlin.jsr223.DefaultGremlinScriptEngineManager.lambda$createGremlinScriptEngine$16(DefaultGremlinScriptEngineManager.java:464)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:270)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at org.apache.tinkerpop.gremlin.jsr223.DefaultGremlinScriptEngineManager.createGremlinScriptEngine(DefaultGremlinScriptEngineManager.java:450)
at org.apache.tinkerpop.gremlin.jsr223.DefaultGremlinScriptEngineManager.getEngineByName(DefaultGremlinScriptEngineManager.java:219)
at org.apache.tinkerpop.gremlin.jsr223.CachedGremlinScriptEngineManager.lambda$getEngineByName$0(CachedGremlinScriptEngineManager.java:57)
at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
at org.apache.tinkerpop.gremlin.jsr223.CachedGremlinScriptEngineManager.getEngineByName(CachedGremlinScriptEngineManager.java:57)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.lambda$eval$0(GremlinExecutor.java:263)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.script.ScriptException: javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.eval(GremlinGroovyScriptEngine.java:397)
at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:264)
at org.apache.tinkerpop.gremlin.jsr223.DefaultGremlinScriptEngineManager.lambda$createGremlinScriptEngine$16(DefaultGremlinScriptEngineManager.java:460)
... 24 more
Caused by: javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.eval(GremlinGroovyScriptEngine.java:713)
at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.eval(GremlinGroovyScriptEngine.java:395)
... 26 more
Caused by: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:66)
at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:51)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:310)
at Script1.run(Script1.groovy:16)
at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.eval(GremlinGroovyScriptEngine.java:690)
... 27 more
7841 [main] WARN org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Could not initialize gremlin-groovy GremlinScriptEngine as init script could not be evaluated
java.util.concurrent.CompletionException: java.lang.IllegalArgumentException: gremlin-groovy is not an available GremlinScriptEngine
at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.lambda$new$4(ServerGremlinExecutor.java:141)
at java.util.LinkedHashMap$LinkedKeySet.forEach(LinkedHashMap.java:559)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:136)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:120)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:84)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:343)
Caused by: java.lang.IllegalArgumentException: gremlin-groovy is not an available GremlinScriptEngine
at org.apache.tinkerpop.gremlin.jsr223.CachedGremlinScriptEngineManager.registerLookUpInfo(CachedGremlinScriptEngineManager.java:95)
at org.apache.tinkerpop.gremlin.jsr223.CachedGremlinScriptEngineManager.getEngineByName(CachedGremlinScriptEngineManager.java:58)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.lambda$eval$0(GremlinExecutor.java:263)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I think all those errors are caused by this:
Caused by: org.janusgraph.core.JanusGraphException: A JanusGraph graph with the same instance id [0a0a01657-node11] is already open. Might required forced shutdown.
And this is the clean.groovy script:
graph = JanusGraphFactory.open('janusgraph.properties')
mgmt = graph.openManagement()
instances = mgmt.getOpenInstances()
instances.iterator().findAll {
    !it.contains('current')
}.each {
    mgmt.forceCloseInstance(it)
}
mgmt.commit()
So, has anybody solved this graph problem?
Thanks in advance.
I finally got the answer! You can use ConfiguredGraphFactory, but it failed for me. So I started the first container normally, and for the second one I added another setting in the janusgraph.properties file:
graph.replace-instance-if-exists=true
And it works perfectly!
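For reference, a minimal sketch of setting the same option programmatically instead of through janusgraph.properties; the storage backend and hostname below are assumptions:
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class OpenGraph {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "cql")                 // ScyllaDB is accessed over CQL
                .set("storage.hostname", "scylla.example.com") // hypothetical host
                .set("graph.replace-instance-if-exists", true) // same fix as the properties entry
                .open();
        // ... use the graph ...
        graph.close();
    }
}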

Dataflow read GCS error

Does the following exception look familiar to anyone? The exact same pipeline and data worked last week, but it failed a couple of times today with the same exception. I didn't see any footprint of my code in the stack trace and am wondering what it might be related to... for example, a GCS read quota?
And since it ran fine on the direct runner, how can I debug these kinds of Dataflow exceptions on Dataflow itself?
{
insertId: "7289985381136617647:828219:0:906922"
jsonPayload: {
exception: "java.io.IOException: Failed to advance reader of source: gs://fiona_dataflow/tmp/BigQueryExtractTemp/5c813875537d4c1a89b74a800bb37c50/000000000864.avro range [0, 808559590)
at com.google.cloud.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:605)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:398)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:193)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
at com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:383)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:355)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:286)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:134)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:114)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:101)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at com.geotab.bigdata.streaming.mapserver.backfill.MapServerBatchBeamApplication.lambda$main$fd9fc9ef$1(MapServerBatchBeamApplication.java:82)
at org.apache.beam.sdk.io.gcp.bigquery.BigQuerySourceBase$1.apply(BigQuerySourceBase.java:211)
at org.apache.beam.sdk.io.gcp.bigquery.BigQuerySourceBase$1.apply(BigQuerySourceBase.java:205)
at org.apache.beam.sdk.io.AvroSource$AvroBlock.readNextRecord(AvroSource.java:579)
at org.apache.beam.sdk.io.BlockBasedSource$BlockBasedReader.readNextRecord(BlockBasedSource.java:223)
at org.apache.beam.sdk.io.FileBasedSource$FileBasedReader.advanceImpl(FileBasedSource.java:473)
at org.apache.beam.sdk.io.OffsetBasedSource$OffsetBasedReader.advance(OffsetBasedSource.java:267)
at com.google.cloud.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:602)
... 14 more
"
job: "2018-04-23_07_30_32-17662367668739576363"
logger: "com.google.cloud.dataflow.worker.WorkItemStatusClient"
message: "Uncaught exception occurred during work unit execution. This will be retried."
stage: "s19"
thread: "27"
work: "1213589185295287945"
worker: "mapserverbatchbeamapplica-04230730-s20x-harness-713d"

How to handle "an unhandled error caused by the Dataflow SDK" (corrupted gz as input)

Is there a way to deal with "an unhandled error caused by the Dataflow SDK"?
Specifically, we have a Dataflow job that takes a list of gz files (in GCS) as input, and produces some output.
Once in a while one of the gz files may be corrupted, and the job fails because of it.
We are wondering if there is a way to handle this -- specifically, we want the job to ignore such corrupted file(s) and proceed.
It is not clear whether we can catch an exception thrown due to the corrupted gz file (it appears to be handled inside the Dataflow SDK itself, which causes the job to fail).
(For Google Dataflow team: Here is a specific dataflow job id: 2017-04-02_05_08_20-5491890758767473661.)
Update: Here's the stack trace we got from the logging UI.
(778029c78ed61ff2): java.io.IOException: Failed to advance reader of source: StaticValueProvider{value=gs://aaa.gz} range [0, 9223372036854775807)
at com.google.cloud.dataflow.sdk.runners.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:544)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:425)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:217)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.start(ReadOperation.java:182)
at com.google.cloud.dataflow.sdk.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:69)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.executeWork(DataflowWorker.java:284)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.doWork(DataflowWorker.java:220)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:170)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.doWork(DataflowWorkerHarness.java:192)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:172)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:159)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
at org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream.read(GzipCompressorInputStream.java:278)
at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
at com.google.cloud.dataflow.sdk.io.TextIO$TextSource$TextBasedReader.tryToEnsureNumberOfBytesInBuffer(TextIO.java:1077)
at com.google.cloud.dataflow.sdk.io.TextIO$TextSource$TextBasedReader.findSeparatorBounds(TextIO.java:1011)
at com.google.cloud.dataflow.sdk.io.TextIO$TextSource$TextBasedReader.readNextRecord(TextIO.java:1043)
at com.google.cloud.dataflow.sdk.io.CompressedSource$CompressedReader.readNextRecord(CompressedSource.java:482)
at com.google.cloud.dataflow.sdk.io.FileBasedSource$FileBasedReader.advanceImpl(FileBasedSource.java:536)
at com.google.cloud.dataflow.sdk.io.OffsetBasedSource$OffsetBasedReader.advance(OffsetBasedSource.java:287)
at com.google.cloud.dataflow.sdk.runners.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:541)
... 14 more
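One workaround, sketched below as an assumption rather than a confirmed fix, is to bypass the SDK's built-in decompression and read each file manually inside a DoFn, so a corrupted archive can be caught and skipped. The sketch uses the newer Apache Beam FileIO API (the question itself uses the older Dataflow 1.x SDK) and a hypothetical file pattern:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.channels.Channels;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.Compression;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SkipCorruptGzip {
    private static final Logger LOG = LoggerFactory.getLogger(SkipCorruptGzip.class);

    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(FileIO.match().filepattern("gs://my-bucket/input/*.gz"))         // hypothetical pattern
         .apply(FileIO.readMatches().withCompression(Compression.UNCOMPRESSED))  // keep raw bytes
         .apply(ParDo.of(new DoFn<FileIO.ReadableFile, String>() {
             @ProcessElement
             public void process(ProcessContext c) {
                 FileIO.ReadableFile file = c.element();
                 // Decompress manually so an EOFException from a truncated archive
                 // only skips this file instead of failing the whole job.
                 try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                         new GZIPInputStream(Channels.newInputStream(file.open())),
                         StandardCharsets.UTF_8))) {
                     String line;
                     while ((line = reader.readLine()) != null) {
                         c.output(line);
                     }
                 } catch (IOException e) {
                     LOG.warn("Skipping corrupted file " + file.getMetadata().resourceId(), e);
                 }
             }
         }));

        p.run();
    }
}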

Android build failing in Jenkins due to connectedAndroidTestDebug TimeoutException

My current Jenkins build is failing when trying to deploy tests on a real Android device.
I have found some people having a similar issue on Travis, but I could not find many others reporting it on Jenkins.
As you can see in the error output by Gradle, this is caused by a TimeoutException triggered when attempting to install the tests:
:library:connectedAndroidTestDebug
11:20:37 E/Device: Error during Sync: timeout.
Unable to install /Users/Shared/Jenkins/Home/jobs/WORKSPACE_PATH/build/outputs/apk/library-debug-androidTest-unaligned.apk
com.android.ddmlib.InstallException
at com.android.ddmlib.Device.installPackage(Device.java:853)
at com.android.builder.testing.ConnectedDevice.installPackage(ConnectedDevice.java:91)
at com.android.builder.internal.testing.SimpleTestCallable.call(SimpleTestCallable.java:143)
at com.android.builder.internal.testing.SimpleTestCallable.call(SimpleTestCallable.java:49)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.android.ddmlib.TimeoutException
at com.android.ddmlib.AdbHelper.read(AdbHelper.java:769)
at com.android.ddmlib.SyncService.doPushFile(SyncService.java:695)
at com.android.ddmlib.SyncService.pushFile(SyncService.java:380)
at com.android.ddmlib.Device.syncPackageToDevice(Device.java:1069)
at com.android.ddmlib.Device.installPackage(Device.java:844)
... 9 more
com.android.builder.testing.ConnectedDevice > runTests[Nexus 4 - 4.4.4] FAILED
com.android.builder.testing.api.DeviceException: com.android.ddmlib.InstallException
at com.android.builder.testing.ConnectedDevice.installPackage(ConnectedDevice.java:95)
null
com.android.builder.testing.api.DeviceException: com.android.ddmlib.InstallException
at com.android.builder.testing.ConnectedDevice.installPackage(ConnectedDevice.java:95)
at com.android.builder.internal.testing.SimpleTestCallable.call(SimpleTestCallable.java:143)
at com.android.builder.internal.testing.SimpleTestCallable.call(SimpleTestCallable.java:49)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.android.ddmlib.InstallException
at com.android.ddmlib.Device.installPackage(Device.java:853)
at com.android.builder.testing.ConnectedDevice.installPackage(ConnectedDevice.java:91)
... 8 more
Caused by: com.android.ddmlib.TimeoutException
at com.android.ddmlib.AdbHelper.read(AdbHelper.java:769)
at com.android.ddmlib.SyncService.doPushFile(SyncService.java:695)
at com.android.ddmlib.SyncService.pushFile(SyncService.java:380)
at com.android.ddmlib.Device.syncPackageToDevice(Device.java:1069)
at com.android.ddmlib.Device.installPackage(Device.java:844)
... 9 more
:library:connectedAndroidTestDebug FAILED
FAILURE: Build failed with an exception.
I tried invoking the Gradle wrapper, which made no difference compared to specifying the version in the job configuration. I tried Gradle 2.5-rc-1, Gradle 2.3, and Gradle 2.2.1; the last one worked for two builds and then started returning the same timeout error.
I also tried setting the environment variable:
export ADB_INSTALL_TIMEOUT=10
And setting the com.android.ddmlib.DdmPreferences.setTimeOut parameter in build.gradle:
com.android.ddmlib.DdmPreferences.setTimeOut(60000)
I also tried using the adbOptions 'timeOutInMs' setting, which didn't work either:
android {
    adbOptions {
        timeOutInMs 60000 // set timeout to 1 minute
    }
}
It looks like this is an ongoing issue on code.google.com; the following two threads seem to be related to it:
Issue 1: com.android.builder.testing.ConnectedDevice.getDeviceConfig has a hard coded timeout value that is too low
Issue 2: Allow the user to increase the ADB timeout
Any ideas on what else I could try?
Thank you!
I finally fixed it by restarting the adb server and then issuing any other adb command, for example requesting the list of devices.
Just run the following commands from the terminal:
adb kill-server
adb devices
I hope it helps!

After installing the GitHub plugin into Jenkins, it is not responding after restart and shows errors

Even after removing the GitHub plugin from Jenkins, it is still not working. Jenkins is running under the Tomcat 7 server.
hudson.util.HudsonFailedToLoad: org.jvnet.hudson.reactor.ReactorException: java.io.IOException: Unable to read /usr/share/tomcat7/.jenkins/config.xml
at hudson.WebAppMain$3.run(WebAppMain.java:237)
Caused by: org.jvnet.hudson.reactor.ReactorException: java.io.IOException: Unable to read /usr/share/tomcat7/.jenkins/config.xml
at org.jvnet.hudson.reactor.Reactor.execute(Reactor.java:269)
at jenkins.InitReactorRunner.run(InitReactorRunner.java:44)
at jenkins.model.Jenkins.executeReactor(Jenkins.java:914)
at jenkins.model.Jenkins.<init>(Jenkins.java:813)
at hudson.model.Hudson.<init>(Hudson.java:83)
at hudson.model.Hudson.<init>(Hudson.java:79)
at hudson.WebAppMain$3.run(WebAppMain.java:225)
Caused by: java.io.IOException: Unable to read /usr/share/tomcat7/.jenkins/config.xml
at hudson.XmlFile.unmarshal(XmlFile.java:165)
at jenkins.model.Jenkins$16.run(Jenkins.java:2642)
at org.jvnet.hudson.reactor.TaskGraphBuilder$TaskImpl.run(TaskGraphBuilder.java:169)
at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:282)
at jenkins.model.Jenkins$7.runTask(Jenkins.java:903)
at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:210)
at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:117)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
If you see a Java stack trace, you can usually find what the problem is by looking at the "Caused by:" line. Here is yours:
Caused by: org.jvnet.hudson.reactor.ReactorException: java.io.IOException: Unable to read /usr/share/tomcat7/.jenkins/config.xml
Jenkins is unable to read its configuration file. This could be due to one of the following (a quick check for all three is sketched after the list):
The file doesn't exist
The file isn't readable by the user that runs the Jenkins process (note that it needs to be writable as well)
The JENKINS_HOME environment variable isn't set correctly for the user that runs the Jenkins process
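A small diagnostic sketch that checks all three conditions; the JENKINS_HOME default is taken from the path in the stack trace, and the class name is hypothetical. Run it as the same user that runs Tomcat:
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CheckJenkinsConfig {
    public static void main(String[] args) {
        // Default taken from the stack trace; override by exporting JENKINS_HOME.
        Path config = Paths.get(
                System.getenv().getOrDefault("JENKINS_HOME", "/usr/share/tomcat7/.jenkins"),
                "config.xml");
        System.out.println("config.xml path : " + config);
        System.out.println("exists          : " + Files.exists(config));
        System.out.println("readable        : " + Files.isReadable(config));
        System.out.println("writable        : " + Files.isWritable(config));
        // If any of these print false for the user running Tomcat, fix the ownership or
        // permissions of the .jenkins directory, or point JENKINS_HOME at the right place.
    }
}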
