I've installed Ignite using a Docker image from Docker Hub. The Ignite server node starts correctly, but I get the following exception when trying to update a cache:
[SEVERE][rest-#35%null%][GridCacheCommandHandler] Failed to execute cache command: GridRestCacheRequest [cacheName=null, cacheFlags=0, ttl=null, super=GridRestRequest [destId=null, clientId=466b7ff5-c303-452e-8f2d-97d59c753de5, addr=null, cmd=CACHE_PUT]]
class org.apache.ignite.IgniteCheckedException: Failed to find cache for given cache name (null for default cache): null
at org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler.localCache(GridCacheCommandHandler.java:754)
at org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler.executeCommand(GridCacheCommandHandler.java:677)
at org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler.handleAsync(GridCacheCommandHandler.java:468)
at org.apache.ignite.internal.processors.rest.GridRestProcessor.handleRequest(GridRestProcessor.java:264)
at org.apache.ignite.internal.processors.rest.GridRestProcessor.access$100(GridRestProcessor.java:87)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:153)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[14:57:18,637][SEVERE][rest-#35%null%][GridRestProcessor] Failed to handle request: CACHE_PUT
class org.apache.ignite.IgniteCheckedException: Failed to find cache for given cache name (null for default cache): null
at org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler.localCache(GridCacheCommandHandler.java:754)
at org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler.executeCommand(GridCacheCommandHandler.java:677)
at org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler.handleAsync(GridCacheCommandHandler.java:468)
at org.apache.ignite.internal.processors.rest.GridRestProcessor.handleRequest(GridRestProcessor.java:264)
at org.apache.ignite.internal.processors.rest.GridRestProcessor.access$100(GridRestProcessor.java:87)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:153)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Any ideas what is wrong?
You should create the cache before you start using it. Use the getOrCreateCache method.
You can read more in the documentation and check this example, which uses the cache API.
There are also a lot of examples in Apache Ignite for various use cases.
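For example, here is a minimal sketch that creates a cache before putting data into it (the cache name "myCache" and the key/value types are placeholders, and a default node configuration is assumed):
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class CreateCacheExample {
    public static void main(String[] args) {
        // Start an Ignite node (or connect as a client, depending on your configuration).
        try (Ignite ignite = Ignition.start()) {
            // getOrCreateCache creates the cache if it does not exist yet,
            // so subsequent operations (including REST CACHE_PUT) can find it.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
            cache.put(1, "value");
        }
    }
}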
I am trying to connect to a Hive instance installed on a cloud instance using Apache Beam on Dataflow. When I run the pipeline, I get the exception below. It happens when I access the database through Apache Beam. I have seen many related questions, but none of them are about Apache Beam or Google Dataflow.
(c9ec8fdbe9d1719a): java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot create PoolableConnectionFactory (Method not supported)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:289)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:261)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:55)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:43)
at com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:78)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:125)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:105)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:92)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot create PoolableConnectionFactory (Method not supported)
at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Read$ReadFn$auxiliary$8CR0LcYI.invokeSetup(Unknown Source)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:65)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:47)
at com.google.cloud.dataflow.worker.runners.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:100)
at com.google.cloud.dataflow.worker.runners.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:70)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.createParDoOperation(MapTaskExecutorFactory.java:365)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:278)
... 14 more
Caused by: java.sql.SQLException: Cannot create PoolableConnectionFactory (Method not supported)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2294)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2039)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1533)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Read$ReadFn.setup(JdbcIO.java:377)
Caused by: java.sql.SQLException: Method not supported
at org.apache.hive.jdbc.HiveConnection.isValid(HiveConnection.java:898)
at org.apache.commons.dbcp2.DelegatingConnection.isValid(DelegatingConnection.java:918)
at org.apache.commons.dbcp2.PoolableConnection.validate(PoolableConnection.java:283)
at org.apache.commons.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:357)
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2307)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2290)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2039)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1533)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Read$ReadFn.setup(JdbcIO.java:377)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Read$ReadFn$auxiliary$8CR0LcYI.invokeSetup(Unknown Source)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:65)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:47)
at com.google.cloud.dataflow.worker.runners.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:100)
at com.google.cloud.dataflow.worker.runners.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:70)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.createParDoOperation(MapTaskExecutorFactory.java:365)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:278)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:261)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:55)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:43)
at com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:78)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:125)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:105)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:92)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
With the same connection string and driver files, I can connect to this instance using a plain Java JDBC program.
I have been debugging this for a while now and am unable to find a solution. Can anyone please give me any ideas on this?
Please see the code snippet connecting to Hive below:
PCollection<Customer> collection = dataflowPipeline.apply(JdbcIO.<Customer>read()
    .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration
        .create("org.apache.hive.jdbc.HiveDriver", "jdbc:hive2://<external IP of compute instance>:10000/dbtest")
        .withUsername("username").withPassword("password"))
    .withQuery(
        "select c_customer_id,c_first_name,c_last_name,c_preferred_cust_flag,c_birth_day from dbtest.customer")
    .withRowMapper(new JdbcIO.RowMapper<Customer>() {
        @Override
        public Customer mapRow(ResultSet resultSet) throws Exception {
            Customer customer = new Customer();
            customer.setC_customer_id(resultSet.getString("c_customer_id"));
            customer.setC_first_name(resultSet.getString("c_first_name"));
            customer.setC_last_name(resultSet.getString("c_last_name"));
            customer.setC_preferred_cust_flag(resultSet.getString("c_preferred_cust_flag"));
            customer.setC_birth_day(resultSet.getInt("c_birth_day"));
            return customer;
        }
    }).withCoder(AvroCoder.of(Customer.class)));
Apache Commons DBCP's BasicDataSource uses the isValid method to validate a connection, which was not implemented by older versions of the Hive JDBC driver; see JDBC to hive connection fails on invalid operation isValid().
However, the method is implemented in versions after Hive 2.1.0: https://github.com/apache/hive/commit/2d2ab0942482a6ce1523dd9dd0f4094865e93b28
Can you use a newer version of the Hive JDBC driver?
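As a quick way to check whether a driver upgrade took effect, here is a minimal sketch (the URL and credentials are placeholders) that calls isValid directly on a Hive connection, which is the same call DBCP2 makes when validating pooled connections:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class HiveIsValidCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://<hive-host>:10000/dbtest", "username", "password")) {
            try {
                // A driver that implements isValid() returns true/false here.
                System.out.println("isValid(5) returned: " + conn.isValid(5));
            } catch (SQLException e) {
                // Older Hive JDBC drivers throw "Method not supported" here.
                System.out.println("Driver does not implement isValid(): " + e.getMessage());
            }
        }
    }
}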
After upgrading a service that uses Akka + Akka Cluster Sharding to the newly released Akka (2.5.0), we started encountering issues starting the system in Docker + Alpine Linux. From what I can infer, Akka Cluster Sharding is configured to use Akka Distributed Data (which is no longer experimental as of 2.5.0), and that in turn uses LMDB (which requires GCC + glibc, neither of which is available in Alpine Linux).
My questions are as follows:
1) Is there any standard alternative supported by Akka instead of LMDB?
2) Is there any way to get LMDB to work in Alpine Linux?
Stack Trace:
[ERROR] [04/20/2017 13:42:19.014] [lotus-akka.actor.default-dispatcher-5] [akka://lotus/system/sharding/replicator/durableStore] Error relocating /tmp/lmdbjava-native-library-5972006786989102785.so: __fprintf_chk: symbol not found
akka.actor.ActorInitializationException: akka://lotus/system/sharding/replicator/durableStore: exception during creation
at akka.actor.ActorInitializationException$.apply(Actor.scala:191)
at akka.actor.ActorCell.create(ActorCell.scala:600)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:454)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:476)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282)
at akka.dispatch.Mailbox.run(Mailbox.scala:223)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at akka.util.Reflect$.instantiate(Reflect.scala:65)
at akka.actor.ArgsReflectConstructor.produce(IndirectActorProducer.scala:96)
at akka.actor.Props.newActor(Props.scala:213)
at akka.actor.ActorCell.newActor(ActorCell.scala:555)
at akka.actor.ActorCell.create(ActorCell.scala:581)
... 7 more
Caused by: java.lang.UnsatisfiedLinkError: Error relocating /tmp/lmdbjava-native-library-5972006786989102785.so: __fprintf_chk: symbol not found
at jnr.ffi.provider.jffi.NativeLibrary.loadNativeLibraries(NativeLibrary.java:87)
at jnr.ffi.provider.jffi.NativeLibrary.getNativeLibraries(NativeLibrary.java:70)
at jnr.ffi.provider.jffi.NativeLibrary.getSymbolAddress(NativeLibrary.java:49)
at jnr.ffi.provider.jffi.NativeLibrary.findSymbolAddress(NativeLibrary.java:59)
at jnr.ffi.provider.jffi.AsmLibraryLoader.generateInterfaceImpl(AsmLibraryLoader.java:158)
at jnr.ffi.provider.jffi.AsmLibraryLoader.loadLibrary(AsmLibraryLoader.java:89)
at jnr.ffi.provider.jffi.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:43)
at jnr.ffi.LibraryLoader.load(LibraryLoader.java:325)
at jnr.ffi.LibraryLoader.load(LibraryLoader.java:304)
at org.lmdbjava.Library.<clinit>(Library.java:95)
at org.lmdbjava.Env$Builder.open(Env.java:406)
at org.lmdbjava.Env$Builder.open(Env.java:430)
at akka.cluster.ddata.LmdbDurableStore.<init>(DurableStore.scala:131)
... 16 more
I finally managed to solve this problem. Cluster sharding attempts to use durable storage by default (the default is LMDB). For cluster sharding without remember-entities, durable storage is not required.
Hence, the solution was to disable durable storage for cluster sharding by adding the following configuration:
akka.cluster.sharding.distributed-data.durable.keys = []
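This setting normally goes in application.conf. As a minimal sketch (the actor system name "lotus" is taken from the log above; everything else is assumed), the same override can also be applied programmatically when creating the actor system:
import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class NoDurableShardingApp {
    public static void main(String[] args) {
        // Override the durable keys so cluster sharding does not start the LMDB durable store.
        Config overrides = ConfigFactory.parseString(
                "akka.cluster.sharding.distributed-data.durable.keys = []");
        ActorSystem system = ActorSystem.create("lotus",
                overrides.withFallback(ConfigFactory.load()));
    }
}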
Is there a way to deal with "an unhandled error caused by the Dataflow SDK"?
Specifically, we have a Dataflow job that takes a list of gz files (in GCS) as input, and produces some output.
Once in a while one of the gz files may be corrupted, and the job fails because of it.
We are wondering if there is a way to handle this -- specifically, we want to make it so that the job would ignore such corrupted file(s) and proceed.
It is not clear whether we can catch the exception thrown for the corrupted gz file (it appears to be handled inside the Dataflow SDK itself, causing the job to fail).
(For the Google Dataflow team: here is a specific Dataflow job ID: 2017-04-02_05_08_20-5491890758767473661.)
Update: Here's the stack trace we got from the logging UI.
(778029c78ed61ff2): java.io.IOException: Failed to advance reader of source: StaticValueProvider{value=gs://aaa.gz} range [0, 9223372036854775807)
at com.google.cloud.dataflow.sdk.runners.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:544)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation$SynchronizedReaderIterator.advance(ReadOperation.java:425)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:217)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.start(ReadOperation.java:182)
at com.google.cloud.dataflow.sdk.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:69)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.executeWork(DataflowWorker.java:284)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.doWork(DataflowWorker.java:220)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:170)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.doWork(DataflowWorkerHarness.java:192)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:172)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:159)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
at org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream.read(GzipCompressorInputStream.java:278)
at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
at com.google.cloud.dataflow.sdk.io.TextIO$TextSource$TextBasedReader.tryToEnsureNumberOfBytesInBuffer(TextIO.java:1077)
at com.google.cloud.dataflow.sdk.io.TextIO$TextSource$TextBasedReader.findSeparatorBounds(TextIO.java:1011)
at com.google.cloud.dataflow.sdk.io.TextIO$TextSource$TextBasedReader.readNextRecord(TextIO.java:1043)
at com.google.cloud.dataflow.sdk.io.CompressedSource$CompressedReader.readNextRecord(CompressedSource.java:482)
at com.google.cloud.dataflow.sdk.io.FileBasedSource$FileBasedReader.advanceImpl(FileBasedSource.java:536)
at com.google.cloud.dataflow.sdk.io.OffsetBasedSource$OffsetBasedReader.advance(OffsetBasedSource.java:287)
at com.google.cloud.dataflow.sdk.runners.worker.WorkerCustomSources$BoundedReaderIterator.advance(WorkerCustomSources.java:541)
... 14 more
My current Jenkins build fails when trying to deploy tests to a real Android device.
I have found some people having a similar issue on Travis, but I could not find many others reporting it in Jenkins.
As you can see in the error output by Gradle, this is caused by a TimeoutException triggered when attempting to install the tests:
:library:connectedAndroidTestDebug
11:20:37 E/Device: Error during Sync: timeout.
Unable to install /Users/Shared/Jenkins/Home/jobs/WORKSPACE_PATH/build/outputs/apk/library-debug-androidTest-unaligned.apk
com.android.ddmlib.InstallException
at com.android.ddmlib.Device.installPackage(Device.java:853)
at com.android.builder.testing.ConnectedDevice.installPackage(ConnectedDevice.java:91)
at com.android.builder.internal.testing.SimpleTestCallable.call(SimpleTestCallable.java:143)
at com.android.builder.internal.testing.SimpleTestCallable.call(SimpleTestCallable.java:49)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.android.ddmlib.TimeoutException
at com.android.ddmlib.AdbHelper.read(AdbHelper.java:769)
at com.android.ddmlib.SyncService.doPushFile(SyncService.java:695)
at com.android.ddmlib.SyncService.pushFile(SyncService.java:380)
at com.android.ddmlib.Device.syncPackageToDevice(Device.java:1069)
at com.android.ddmlib.Device.installPackage(Device.java:844)
... 9 more
com.android.builder.testing.ConnectedDevice > runTests[Nexus 4 - 4.4.4] FAILED
com.android.builder.testing.api.DeviceException: com.android.ddmlib.InstallException
at com.android.builder.testing.ConnectedDevice.installPackage(ConnectedDevice.java:95)
null
com.android.builder.testing.api.DeviceException: com.android.ddmlib.InstallException
at com.android.builder.testing.ConnectedDevice.installPackage(ConnectedDevice.java:95)
at com.android.builder.internal.testing.SimpleTestCallable.call(SimpleTestCallable.java:143)
at com.android.builder.internal.testing.SimpleTestCallable.call(SimpleTestCallable.java:49)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.android.ddmlib.InstallException
at com.android.ddmlib.Device.installPackage(Device.java:853)
at com.android.builder.testing.ConnectedDevice.installPackage(ConnectedDevice.java:91)
... 8 more
Caused by: com.android.ddmlib.TimeoutException
at com.android.ddmlib.AdbHelper.read(AdbHelper.java:769)
at com.android.ddmlib.SyncService.doPushFile(SyncService.java:695)
at com.android.ddmlib.SyncService.pushFile(SyncService.java:380)
at com.android.ddmlib.Device.syncPackageToDevice(Device.java:1069)
at com.android.ddmlib.Device.installPackage(Device.java:844)
... 9 more
:library:connectedAndroidTestDebug FAILED
FAILURE: Build failed with an exception.
I tried invoking the Gradle wrapper, which made no difference compared to specifying the version in the job configuration. I tried Gradle 2.5-rc-1, Gradle 2.3, and Gradle 2.2.1; the last one worked for two builds and then started returning this same timeout error.
I also tried setting the environment variable:
export ADB_INSTALL_TIMEOUT=10
I also tried setting the com.android.ddmlib.DdmPreferences.setTimeOut param in build.gradle:
com.android.ddmlib.DdmPreferences.setTimeOut(60000)
I also tried using the adbOptions 'timeOutInMs', which didn't work either:
android {
    adbOptions {
        timeOutInMs 60000 // set timeout to 1 minute
    }
}
It looks like this is an ongoing issue on code.google.com; the following two threads seem to be connected to it:
Issue 1: com.android.builder.testing.ConnectedDevice.getDeviceConfig has a hard coded timeout value that is too low
Issue 2: Allow the user to increase the ADB timeout
Any ideas on what else I could try?
Thank you!
I finally fixed it by restarting the adb server and then issuing any other adb command, for example requesting the list of devices.
Just run the following commands from the terminal:
adb kill-server
adb devices
I hope it helps!
I am trying to extract Twitter data using Flume, but I am getting the following error:
15/04/08 23:16:36 ERROR node.PollingPropertiesFileConfigurationProvider: Unhandled error
java.lang.NoSuchMethodError: twitter4j.conf.Configuration.isStallWarningsEnabled()Z
at twitter4j.TwitterStreamImpl.<init>(TwitterStreamImpl.java:60)
at twitter4j.TwitterStreamFactory.<clinit>(TwitterStreamFactory.java:40)
at com.cloudera.flume.source.TwitterSource.<init>(TwitterSource.java:64)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at java.lang.Class.newInstance(Class.java:433)
at org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:42)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:327)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I have used the flume-sources-1.0-SNAPSHOT.jar from Cloudera. The TwitterAgent runs into the error mentioned above.
Is there any workaround for it?
Thanks in advance.
This is clearly a dependency error: the flume-sources library expects a version of Twitter4J that isn't present, hence the NoSuchMethodError. I would suggest that you pull in the right versions, which would be 1.6.0-SNAPSHOT for the Twitter source and 3.0.3 for Twitter4J. You should consult Flume's pom.xml, which has all the version info you need.
Note that you should use the most current versions possible, as old implementations will not work: Twitter broke their old APIs in the meantime.
Hope this helps.
This is an issue with the fully qualified class name in your agent .conf file.
In the older versions, the class name is com.cloudera.flume.source.TwitterSource.
In the latest version of Flume, the TwitterSource is already shipped, so there is no need to download it separately.
The class name has changed to org.apache.flume.source.twitter.TwitterSource.
Change the class name carefully and it will definitely work for you.
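For example, the relevant line in the agent configuration might look like this (the agent and source names "TwitterAgent" and "Twitter" are placeholders; only the type value matters):
# Placeholder agent/source names; only the type value matters here.
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource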