I've been using the Enyim Memcached client for .NET, trying to connect to a server running on AppHarbor. The relevant parts of my configuration file look like this:
<enyim.com>
  <log factory="Enyim.Caching.DiagnosticsLogFactory, Enyim.Caching" />
  <memcached protocol="Binary">
    <servers>
      <add address="8d593f28-37d7-4c4f-a702-aa7687a85ea1.memcacher.com" port="11211" />
    </servers>
    <authentication
      type="Enyim.Caching.Memcached.PlainTextAuthenticator, Enyim.Caching"
      userName="changed to post on stack overflow"
      password="changed to post on stack overflow"
      zone="AUTHZ"
    />
  </memcached>
</enyim.com>
My connection keeps timing out. Any ideas what's going on here? Here are the logs from the Enyim client:
2012-01-21 18:56:08 [ERROR] 7 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Could not init pool. - System.TimeoutException: Could not connect to 50.19.210.46:11211
at Enyim.Caching.Memcached.PooledSocket.ConnectWithTimeout(Socket socket, IPEndPoint endpoint, Int32 timeout)
at Enyim.Caching.Memcached.PooledSocket..ctor(IPEndPoint endpoint, TimeSpan connectionTimeout, TimeSpan receiveTimeout)
at Enyim.Caching.Memcached.MemcachedNode.CreateSocket()
at Enyim.Caching.Memcached.Protocol.Binary.BinaryNode.CreateSocket()
at Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl.CreateSocket()
at Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl.InitPool()
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Mark as dead was requested for 50.19.210.46:11211
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - FailurePolicy.ShouldFail(): True
2012-01-21 18:56:08 [WARN] 7 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Marking node 50.19.210.46:11211 as dead
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.DefaultServerPool - Node 50.19.210.46:11211 is dead.
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.DefaultServerPool - Starting the recovery timer.
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.DefaultServerPool - Timer started.
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Acquiring stream from pool. 50.19.210.46:11211
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Pool is dead or disposed, returning null. 50.19.210.46:11211
UPDATE:
Turns out the reason I can't connect to the memcached server is that it's only accessible from AppHarbor's environment. So for anyone else who runs across this: you need to use a local memcached service when developing locally, then simply change the credentials when deploying (which AppHarbor actually does automatically for you). Problem resolved.
AppHarbor Memcacher buckets are only accessible from AppHarbor application servers. The documentation has been amended to clearly reflect this.
You should use a locally installed memcached server for testing.
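For local development, a minimal sketch of such a config might look like the following (this is an assumption, not from the original poster: the authentication block is dropped and the server entry points at a default local memcached install on 127.0.0.1:11211; adjust to your setup):
<enyim.com>
  <memcached protocol="Binary">
    <servers>
      <!-- assumes memcached is installed locally and listening on its default port -->
      <add address="127.0.0.1" port="11211" />
    </servers>
  </memcached>
</enyim.com>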
Related
I have a Ktor app. It works fine when I run it in development mode. I package it in a Docker image by copying over what the Gradle application plugin produces. That also works fine on my local machine (8 cores). But now the strange part: when I do exactly the same thing on a rented V-Server, also running Ubuntu 20.04 like my local system, Ktor is incredibly slow.
docker-compose logs server:
server | 2021-08-24 08:00:23.337 [main] INFO ktor.application - Autoreload is disabled because the development mode is off.
server | 2021-08-24 08:25:35.048 [main] INFO ktor.application - Autoreload is disabled because the development mode is off.
server | 2021-08-24 09:18:48.246 [main] INFO c.e.e.s.TemplateStore - Starting to parse Sentences
server | 2021-08-24 09:18:48.345 [main] INFO c.e.e.s.TemplateStore - Finished parsing sentences
server | 2021-08-24 09:18:48.346 [main] INFO ktor.application - Responding at http://0.0.0.0:8080
server | 2021-08-24 09:18:48.347 [main] INFO ktor.application - Application started in 3193.32 seconds.
Application started in 3193.32 seconds
The source code can be found here: https://github.com/1-alex98/whatisthat . It has a docker-compose.yml defining the whole Docker setup that gets started.
Local system: 32 GB RAM + 8 cores. V-Server: 4 GB RAM + 2 cores (htop shows plenty of resources are free).
I am looking for ideas on what in the world could cause this behavior, or for ways to debug it.
Update:
Seems to read a file forever:
"main" #1 prio=5 os_prio=0 cpu=652.14ms elapsed=173.92s tid=0x00007f01d4016000 nid=0xe runnable [0x00007f01dace6000]
java.lang.Thread.State: RUNNABLE
at java.io.FileInputStream.readBytes(java.base@11.0.12/Native Method)
at java.io.FileInputStream.read(java.base@11.0.12/FileInputStream.java:279)
at java.io.FilterInputStream.read(java.base@11.0.12/FilterInputStream.java:133)
at sun.security.provider.NativePRNG$RandomIO.readFully(java.base@11.0.12/NativePRNG.java:424)
at sun.security.provider.NativePRNG$RandomIO.ensureBufferValid(java.base@11.0.12/NativePRNG.java:526)
at sun.security.provider.NativePRNG$RandomIO.implNextBytes(java.base@11.0.12/NativePRNG.java:545)
- locked <0x00000000c7571158> (a java.lang.Object)
at sun.security.provider.NativePRNG$Blocking.engineNextBytes(java.base@11.0.12/NativePRNG.java:268)
at java.security.SecureRandom.nextBytes(java.base@11.0.12/SecureRandom.java:751)
at kotlin.random.AbstractPlatformRandom.nextBytes(PlatformRandom.kt:47)
at kotlin.random.Random.nextBytes(Random.kt:260)
at com.example.routes.websocket.WebsocketRoutingKt.<clinit>(WebsocketRouting.kt:40)
at com.example.plugins.RoutingKt$routing$1.invoke(Routing.kt:13)
at com.example.plugins.RoutingKt$routing$1.invoke(Routing.kt:11)
at io.ktor.routing.Routing$Feature.install(Routing.kt:106)
at io.ktor.routing.Routing$Feature.install(Routing.kt:88)
at io.ktor.application.ApplicationFeatureKt.install(ApplicationFeature.kt:68)
at io.ktor.routing.RoutingKt.routing(Routing.kt:129)
at com.example.plugins.RoutingKt.routing(Routing.kt:11)
at com.example.ApplicationKt$main$1.invoke(Application.kt:18)
at com.example.ApplicationKt$main$1.invoke(Application.kt:14)
at io.ktor.server.engine.internal.CallableUtilsKt.executeModuleFunction(CallableUtils.kt:50)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$launchModuleByName$1.invoke(ApplicationEngineEnvironmentReloading.kt:317)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$launchModuleByName$1.invoke(ApplicationEngineEnvironmentReloading.kt:316)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.avoidingDoubleStartupFor(ApplicationEngineEnvironmentReloading.kt:341)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.launchModuleByName(ApplicationEngineEnvironmentReloading.kt:316)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.access$launchModuleByName(ApplicationEngineEnvironmentReloading.kt:30)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$instantiateAndConfigureApplication$1.invoke(ApplicationEngineEnvironmentReloading.kt:304)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$instantiateAndConfigureApplication$1.invoke(ApplicationEngineEnvironmentReloading.kt:295)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.avoidingDoubleStartup(ApplicationEngineEnvironmentReloading.kt:323)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.instantiateAndConfigureApplication(ApplicationEngineEnvironmentReloading.kt:295)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.createApplication(ApplicationEngineEnvironmentReloading.kt:136)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.start(ApplicationEngineEnvironmentReloading.kt:268)
at io.ktor.server.netty.NettyApplicationEngine.start(NettyApplicationEngine.kt:174)
at com.example.ApplicationKt.main(Application.kt:21)
at com.example.ApplicationKt.main(Application.kt)
It is a freshly rented server, but I guess something is wrong with it.
docker-compose being slow and my program not starting turned out to be caused by insufficient (poor-quality) input to /dev/urandom. Installing https://github.com/smuellerDD/jitterentropy-rngd resolved the problem.
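If you want to confirm on a given host that blocking randomness is the culprit before installing an entropy daemon, a small stand-alone check (my own sketch, not part of the project) is to time a strong SecureRandom request, which on Linux JVMs typically resolves to the blocking NativePRNG seen in the thread dump above:
import java.security.SecureRandom;

public class EntropyCheck {
    public static void main(String[] args) throws Exception {
        byte[] buf = new byte[32];
        long start = System.nanoTime();
        // getInstanceStrong() usually maps to NativePRNGBlocking on Linux,
        // so this call stalls when the kernel entropy pool is starved
        SecureRandom.getInstanceStrong().nextBytes(buf);
        System.out.printf("32 strong random bytes in %.2f s%n",
                (System.nanoTime() - start) / 1e9);
    }
}
On a healthy host this prints well under a second; on the starved V-Server it should hang for a long time, mirroring the 3193-second startup.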
I get this error when starting AlwaysOn SQL. I've tried many things but the result is still the same. Any ideas why?
I'm using 1 cluster, 1 analytics + search datacenter, and 2 Ubuntu 16.04 nodes.
INFO [ALWAYSON-SQL] 2019-02-14 11:36:01,348 ALWAYSON-SQL AlwaysOnSqlRunner.scala:304 - Shutting down AlwaysOn SQL.
INFO [ALWAYSON-SQL] 2019-02-14 11:36:01,617 ALWAYSON-SQL AlwaysOnSqlRunner.scala:328 - Set status to stopped
INFO [ALWAYSON-SQL] 2019-02-14 11:36:01,620 ALWAYSON-SQL AlwaysOnSqlRunner.scala:382 - Reserve port for AlwaysOn SQL
INFO [ALWAYSON-SQL] 2019-02-14 11:36:04,621 ALWAYSON-SQL AlwaysOnSqlRunner.scala:375 - Release reserved port
INFO [ALWAYSON-SQL] 2019-02-14 11:36:04,622 ALWAYSON-SQL AlwaysOnSqlRunner.scala:805 - Set InCluster token to DseFs client
INFO [ForkJoinPool-1-worker-1] 2019-02-14 11:36:04,650 AlwaysOnSqlRunner.scala:740 - dsefs server heartbeat response: pong
INFO [ForkJoinPool-1-worker-3] 2019-02-14 11:36:04,757 AlwaysOnSqlRunner.scala:704 - Create DseFs directory /var/log/spark/alwayson_sql
INFO [ForkJoinPool-1-worker-3] 2019-02-14 11:36:04,758 AlwaysOnSqlRunner.scala:805 - Set InCluster token to DseFs client
ERROR [ForkJoinPool-1-worker-3] 2019-02-14 11:36:04,788 AlwaysOnSqlRunner.scala:722 - Failed to check dsefs directory alwayson_sql
com.datastax.bdp.fs.model.AccessDeniedException: Insufficient permissions to path /
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:258)
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:232)
at spray.json.JsValue.convertTo(JsValue.scala:31)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:48)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:44)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:465)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at java.lang.Thread.run(Thread.java:748)
INFO [ALWAYSON-SQL] 2019-02-14 11:36:04,788 ALWAYSON-SQL AlwaysOnSqlRunner.scala:247 - ALWAYSON-SQL caused an exception in state RUNNING : com.datastax.bdp.fs.model.AccessDeniedException: Insufficient permissions to path /
com.datastax.bdp.fs.model.AccessDeniedException: Insufficient permissions to path /
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:258)
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:232)
at spray.json.JsValue.convertTo(JsValue.scala:31)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:48)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:44)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:465)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at java.lang.Thread.run(Thread.java:748)
I have seen this problem too! It was a permissions problem in dsefs! To fix it, log in with the root Cassandra user and change the permissions of your alwayson log directory to the alwayson user.
I set up the release plugin on my Grails project and successfully ran it on my localhost.
When I try to set up the same build in Jenkins, the build hangs indefinitely. The last thing in the output before it hangs is the checkCommitNeeded step.
Anything I can do to figure out what's going wrong?
I have set -Prelease.useAutomaticVersion=true and the two version params as switches, as mentioned in the plugin docs.
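For reference, the invocation that works on my localhost looks roughly like this (the release.releaseVersion and release.newVersion property names come from the plugin docs; the version numbers here are just placeholders):
./gradlew release -Prelease.useAutomaticVersion=true -Prelease.releaseVersion=1.2.0 -Prelease.newVersion=1.2.1-SNAPSHOT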
Update
On the researchgate Gitter, Christian Gonzalez mentioned that Jenkins is detecting another commit caused by the release plugin, and getting itself stuck in a loop. For Git, an additional behavior can be added to ignore changes committed by the plugin. However, my project is using SVN.
Update
Below is a snippet of the output after adding -d:
11:12:48.907 [DEBUG] [org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter] Executing actions for task ':checkCommitNeeded'.
11:12:48.908 [INFO] [org.gradle.api.Project] Running [svn, status] in [/var/lib/jenkins/jobs/MyTeam/jobs/MyProject/jobs/MyProject-release/workspace]
11:12:48.924 [INFO] [org.gradle.api.Project] Running [svn, status] produced output: []
11:12:48.926 [DEBUG] [org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter] Finished executing task ':checkCommitNeeded'
11:12:48.926 [INFO] [org.gradle.execution.taskgraph.AbstractTaskPlanExecutor] :checkCommitNeeded (Thread[Daemon worker,5,main]) completed. Took 0.02 secs.
11:12:48.926 [DEBUG] [org.gradle.internal.operations.DefaultBuildOperationWorkerRegistry] Worker root.3 completed (0 in use)
11:12:48.926 [DEBUG] [org.gradle.internal.operations.DefaultBuildOperationWorkerRegistry] Worker root.4 started (1 in use).
11:12:48.926 [INFO] [org.gradle.execution.taskgraph.AbstractTaskPlanExecutor] :checkUpdateNeeded (Thread[Daemon worker,5,main]) started.
11:12:48.927 [LIFECYCLE] [class org.gradle.internal.buildevents.TaskExecutionLogger] :myproject:checkUpdateNeeded
11:12:48.927 [DEBUG] [org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter] Starting to execute task ':checkUpdateNeeded'
11:12:48.927 [DEBUG] [org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter] Determining if task ':checkUpdateNeeded' is up-to-date
11:12:48.927 [INFO] [org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter] Executing task ':checkUpdateNeeded' (up-to-date check took 0.0 secs) due to:
Task has not declared any outputs.
11:12:48.927 [DEBUG] [org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter] Executing actions for task ':checkUpdateNeeded'.
11:12:48.928 [INFO] [org.gradle.api.Project] Running [svn, status, -q, -u] in [/var/lib/jenkins/jobs/MyTeam/jobs/MyProject/jobs/MyProject-release/workspace]
11:12:51.477 [DEBUG] [org.gradle.launcher.daemon.server.Daemon] DaemonExpirationPeriodicCheck running
11:12:51.479 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Waiting to acquire shared lock on daemon addresses registry.
11:12:51.480 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Lock acquired.
11:12:51.481 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Releasing lock on daemon addresses registry.
11:13:01.477 [DEBUG] [org.gradle.launcher.daemon.server.Daemon] DaemonExpirationPeriodicCheck running
11:13:01.477 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Waiting to acquire shared lock on daemon addresses registry.
11:13:01.478 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Lock acquired.
11:13:01.480 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Releasing lock on daemon addresses registry.
11:13:11.477 [DEBUG] [org.gradle.launcher.daemon.server.Daemon] DaemonExpirationPeriodicCheck running
11:13:11.477 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Waiting to acquire shared lock on daemon addresses registry.
11:13:11.477 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Lock acquired.
11:13:11.479 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Releasing lock on daemon addresses registry.
...
The last 4 lines are repeated over and over.
I faced the same issue. For me, the reason was a wrong setup configuration for the project, for example a wrong GitHub URL (missing the .git extension), an incorrect Poll SCM config, etc.
The fix for me was to restart the Jenkins server, correct the settings under 'Manage' for the project, and build again.
I managed to get everything working with the local master and two remote workers. Now I want to connect to a remote master that has the same remote workers. I have tried different combinations of settings in /etc/hosts and other recommendations found on the Internet, but nothing has worked.
The main class is:
public static void main(String[] args) {
    ScalaInterface sInterface = new ScalaInterface(CHUNK_SIZE,
            "awsAccessKeyId",
            "awsSecretAccessKey");

    SparkConf conf = new SparkConf().setAppName("POC_JAVA_AND_SPARK")
            .setMaster("spark://spark-master:7077");

    org.apache.spark.SparkContext sc = new org.apache.spark.SparkContext(conf);

    sInterface.enableS3Connection(sc);

    org.apache.spark.rdd.RDD<Tuple2<Path, Text>> fileAndLine =
            (RDD<Tuple2<Path, Text>>) sInterface.getMappedRDD(sc, "s3n://somebucket/");
    org.apache.spark.rdd.RDD<String> pInfo =
            (RDD<String>) sInterface.mapPartitionsWithIndex(fileAndLine);

    JavaRDD<String> pInfoJ = pInfo.toJavaRDD();
    List<String> result = pInfoJ.collect();

    String miscInfo = sInterface.getMiscInfo(sc, pInfo);
    System.out.println(miscInfo);
}
It fails at:
List<String> result = pInfoJ.collect();
The error I am getting is:
1354 [sparkDriver-akka.actor.default-dispatcher-3] ERROR akka.remote.transport.netty.NettyTransport - failed to bind to spark-master/192.168.0.191:0, shutting down Netty transport
1354 [main] WARN org.apache.spark.util.Utils - Service 'sparkDriver' could not bind on port 0. Attempting port 1.
1355 [main] DEBUG org.apache.spark.util.AkkaUtils - In createActorSystem, requireCookie is: off
1363 [sparkDriver-akka.actor.default-dispatcher-3] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
1364 [sparkDriver-akka.actor.default-dispatcher-3] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
1364 [sparkDriver-akka.actor.default-dispatcher-5] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
1367 [sparkDriver-akka.actor.default-dispatcher-4] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
1370 [sparkDriver-akka.actor.default-dispatcher-6] INFO Remoting - Starting remoting
1380 [sparkDriver-akka.actor.default-dispatcher-4] ERROR akka.remote.transport.netty.NettyTransport - failed to bind to spark-master/192.168.0.191:0, shutting down Netty transport
Exception in thread "main" 1382 [sparkDriver-akka.actor.default-dispatcher-6] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
1382 [sparkDriver-akka.actor.default-dispatcher-6] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
java.net.BindException: Failed to bind to: spark-master/192.168.0.191:0: Service 'sparkDriver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
1383 [sparkDriver-akka.actor.default-dispatcher-7] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
1385 [delete Spark temp dirs] DEBUG org.apache.spark.util.Utils - Shutdown hook called
Thank you kindly for your help!
Setting the environment variable SPARK_LOCAL_IP=127.0.0.1 solved this for me.
I had this problem when my /etc/hosts file was mapping the wrong IP address to my local hostname.
The BindException in your logs complains about the IP address 192.168.0.191. I assume your machine's hostname resolves to that address, but it's not the actual IP address your network interface is using. It should work fine once you fix that.
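If you cannot (or would rather not) touch /etc/hosts, an alternative sketch is to set the driver address explicitly when building the SparkConf in the main method from the question; spark.driver.host is a standard Spark property, but whether it fully replaces the SPARK_LOCAL_IP environment variable on your Spark version is an assumption worth verifying:
SparkConf conf = new SparkConf().setAppName("POC_JAVA_AND_SPARK")
        .setMaster("spark://spark-master:7077")
        // advertise an address that actually belongs to one of this machine's
        // network interfaces (127.0.0.1 mirrors the SPARK_LOCAL_IP=127.0.0.1 fix)
        .set("spark.driver.host", "127.0.0.1");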
I had Spark working on my EC2 instance. I started a new web server, and to meet its requirements I had to change the hostname to the EC2 public DNS name, i.e.
hostname ec2-54-xxx-xxx-xxx.compute-1.amazonaws.com
After that my Spark could not work and showed the error below:
16/09/20 21:02:22 WARN Utils: Service 'sparkDriver' could not bind on port 0. Attempting port 1.
16/09/20 21:02:22 ERROR SparkContext: Error initializing SparkContext.
I solved it by setting SPARK_LOCAL_IP as below:
export SPARK_LOCAL_IP="localhost"
and then just launched the Spark shell as below:
$SPARK_HOME/bin/spark-shell
Possibly your master is running on a non-default port. Can you post your submit command?
Have a look at https://spark.apache.org/docs/latest/spark-standalone.html#connecting-an-application-to-the-cluster
For the last two years I have been using this configuration for database connection pooling in Tomcat:
<Resource auth="Container"
driverClassName="com.mysql.jdbc.Driver"
logAbandoned="true"
maxActive="100"
maxIdle="30"
maxWait="10000"
name="jdbc/maindb"
password="xxxxx"
removeAbandoned="true"
removeAbandonedTimeout="60"
type="javax.sql.DataSource"
url="jdbc:mysql://localhost:3306/maindb?zeroDateTimeBehavior=convertToNull"
connectionProperties="useEncoding=true;"
username="sqladmin" validationQuery="select 1"/>
On the production server, for the last month with this configuration, Tomcat suddenly stops responding to any requests and they time out. There are no errors in the logs, but as soon as I shut down Tomcat a flood of error logs appears, which seems to show some kind of deadlock in the database connections.
To rectify it, I used the database connection pooling configuration from http://tomcat.apache.org/tomcat-8.0-doc/jdbc-pool.html . After switching to this configuration I now face two problems on production: either a table lock occurs even though I am using the InnoDB engine, or some queries start returning an empty result set even when the query is perfectly fine.
<Resource name="jdbc/maindb"
auth="Container"
type="javax.sql.DataSource"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
testWhileIdle="true"
testOnBorrow="true"
testOnReturn="false"
validationQuery="SELECT 1"
validationInterval="30000"
timeBetweenEvictionRunsMillis="30000"
maxActive="100"
minIdle="10"
maxWait="10000"
initialSize="10"
removeAbandonedTimeout="60"
removeAbandoned="true"
logAbandoned="true"
minEvictableIdleTimeMillis="30000"
jmxEnabled="true"
jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;
org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer"
username="sqladmin"
password="xxxxx"
driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/maindb"/>
With the first configuration, after shutting down Tomcat the following error logs start appearing:
04-Feb-2015 20:44:46.048 INFO [main] org.apache.catalina.core.StandardServer.await A valid shutdown command was received via the shutdown port. Stopping the Server instance.
04-Feb-2015 20:44:46.049 INFO [main] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["http-apr-8080"]
04-Feb-2015 20:44:46.100 INFO [main] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["ajp-apr-8009"]
04-Feb-2015 20:44:46.151 INFO [main] org.apache.catalina.core.StandardService.stopInternal Stopping service Catalina
04-Feb-2015 20:44:46.157 INFO [localhost-startStop-2] org.apache.catalina.core.StandardWrapper.unload Waiting for 81 instance(s) to be deallocated for Servlet [dispatcher]
04-Feb-2015 20:44:47.158 INFO [localhost-startStop-2] org.apache.catalina.core.StandardWrapper.unload Waiting for 81 instance(s) to be deallocated for Servlet [dispatcher]
04-Feb-2015 20:44:48.160 INFO [localhost-startStop-2] org.apache.catalina.core.StandardWrapper.unload Waiting for 81 instance(s) to be deallocated for Servlet [dispatcher]
04-Feb-2015 20:44:48.260 INFO [localhost-startStop-2] org.springframework.context.support.AbstractApplicationContext.doClose Closing WebApplicationContext for namespace 'dispatcher-servlet': startup date [Tue Feb 03 18:26:26 UTC 2015]; parent: Root WebApplicationContext
04-Feb-2015 20:44:48.307 INFO [localhost-startStop-2] org.springframework.context.support.AbstractApplicationContext.doClose Closing Root WebApplicationContext: startup date [Tue Feb 03 18:26:24 UTC 2015]; root of context hierarchy
04-Feb-2015 20:44:48.310 INFO [localhost-startStop-2] org.springframework.scheduling.concurrent.ExecutorConfigurationSupport.shutdown Shutting down ExecutorService 'taskExecutor'
04-Feb-2015 20:44:48.329 WARNING [localhost-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [ROOT] is still processing a request that has yet to finish. This is very likely to create a memory leak. You can control the time allowed for requests to finish by using the unloadDelay attribute of the standard Context implementation. Stack trace of request processing thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
org.apache.tomcat.dbcp.pool2.impl.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:582)
org.apache.tomcat.dbcp.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:439)
org.apache.tomcat.dbcp.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:360)
org.apache.tomcat.dbcp.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:118)
org.apache.tomcat.dbcp.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1412)
com.myproj.dao.ConnectionPool.getConnection(ConnectionPool.java:41)
And in the second case, the following error log appears while performing a certain operation:
22-Jan-2015 16:36:04.077 SEVERE [http-apr-8080-exec-2] com.myproj.dao.cart.impl.VisitorCartDaoImpl.addCartItem Lock wait timeout exceeded; try restarting transaction
java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:996)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3887)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3823)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2435)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2582)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2530)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1907)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2141)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2077)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2062)
at com.myproj.dao.cart.impl.VisitorCartDaoImpl.addCartItem(VisitorCartDaoImpl.java:96)
Is there anything wrong with the configuration or with the database? I am using MySQL 5.6 as the database; in production, MySQL runs on Amazon RDS. I am using Tomcat 8.0.15.
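For context on the first failure mode: the shutdown stack trace above shows request threads parked inside GenericObjectPool.borrowObject, i.e. waiting for a free connection, which is the classic symptom of connections being borrowed but never returned to the pool. Below is a minimal sketch of a DAO method that guarantees the connection goes back, using the jdbc/maindb resource defined above (the class, table, and column names are hypothetical, not taken from the application):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CartItemDao {
    private final DataSource dataSource;

    public CartItemDao() throws NamingException {
        // look up the pool configured as jdbc/maindb in context.xml
        this.dataSource = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/maindb");
    }

    public void addCartItem(long cartId, long itemId) throws SQLException {
        // try-with-resources closes the statement and returns the connection
        // to the pool even if executeUpdate() throws
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO cart_item (cart_id, item_id) VALUES (?, ?)")) {
            ps.setLong(1, cartId);
            ps.setLong(2, itemId);
            ps.executeUpdate();
        }
    }
}
If every code path returns its connection this way, the removeAbandoned/logAbandoned settings in both configurations should also stop having anything to report, which is one way to tell whether a leak was the real cause.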