I keep seeing this log message whenever I use connections from a HikariCP pool.
[com.zaxxer.hikari.pool.PoolElf] : HikariPool-0 - Reset (autoCommit) on connection com.mysql.jdbc.JDBC4Connection#1c9b0314
[com.zaxxer.hikari.pool.PoolElf] : HikariPool-0 - Reset (autoCommit) on connection com.mysql.jdbc.JDBC4Connection#1c9b0314
[com.zaxxer.hikari.pool.PoolElf] : HikariPool-0 - Reset (autoCommit) on connection com.mysql.jdbc.JDBC4Connection#1c9b0314
[com.zaxxer.hikari.pool.PoolElf] : HikariPool-0 - Reset (autoCommit) on connection com.mysql.jdbc.JDBC4Connection#1c9b0314
[com.zaxxer.hikari.pool.PoolElf] : HikariPool-0 - Reset (autoCommit) on connection com.mysql.jdbc.JDBC4Connection#1c9b0314
[com.zaxxer.hikari.pool.PoolElf] : HikariPool-0 - Reset (autoCommit) on connection com.mysql.jdbc.JDBC4Connection#1c9b0314
[com.zaxxer.hikari.pool.PoolElf] : HikariPool-0 - Reset (autoCommit) on connection com.mysql.jdbc.JDBC4Connection#1c9b0314
What does that mean? Is it something I should worry about and fix, or is it normal? I'm trying to understand what is actually happening here.
It means one of two things:
- the pool is configured for auto-commit, but application code is setting autoCommit=false on connections and then returning them to the pool, or
- the pool is configured for non-auto-commit, but application code is setting autoCommit=true on connections and then returning them to the pool.
HikariCP resets autoCommit to the pool default whenever a connection is returned with a different autoCommit mode. In general this reset can have a negative impact on performance, sometimes quite a large one.
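For illustration, here is a minimal sketch of the first case, assuming a pool configured with autoCommit=true (the class and method names are illustrative):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class TransferService {
    // The pool default is autoCommit=true, but this method flips it off
    // and never restores it before the connection goes back to the pool.
    public void transfer(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false); // diverges from the pool default
            // ... transactional work ...
            conn.commit();
        } // close() returns the connection; HikariCP logs "Reset (autoCommit)"
    }
}

If the application always works transactionally, configuring the pool default to match (for example, calling setAutoCommit(false) on the HikariConfig) avoids the reset on every return.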
We are using Testcontainers to spin up Kafka containers for testing an app that creates a Kafka Streams StateStore and a Kafka consumer to process messages. The tests pass and we can both produce and consume messages, but they continuously print annoying WARN messages:
(kafka-admin-client-thread)([]) WARN - NetworkClient - [AdminClient clientId=test_clientId] Connection to node 1 (localhost/127.0.0.1:55304) could not be established. Broker may not be available.
It seems the AdminClient is scheduled to poll every second, and this produces substantial log files.
After enabling DEBUG logging, I observe the following errors:
[2022-09-16T22:48:11,747Z](test_x-GlobalStreamThread)([]) DEBUG - NetworkClient - [Consumer clientId=test_x-global-consumer, groupId=null] Node 1 disconnected.
[2022-09-16T22:48:11,747Z](test_x-GlobalStreamThread)([]) WARN - NetworkClient - [Consumer clientId=test_x-global-consumer, groupId=null] Connection to node 1 (localhost/127.0.0.1:55304) could not be established. Broker may not be available.
[2022-09-16T22:48:11,747Z](test_x-GlobalStreamThread)([]) DEBUG - ConsumerNetworkClient - [Consumer clientId=test_x-global-consumer, groupId=null] Cancelled request with header RequestHeader(apiKey=FETCH, apiVersion=11, clientId=test_x-global-consumer, correlationId=11) due to node 1 being disconnected
[2022-09-16T22:48:11,747Z](test_x-GlobalStreamThread)([]) INFO - FetchSessionHandler - [Consumer clientId=test_x-global-consumer, groupId=null] Error sending fetch request (sessionId=1421844635, epoch=INITIAL) to node 1:
org.apache.kafka.common.errors.DisconnectException: null
[2022-09-16T22:48:11,792Z](kafka-producer-network-thread | producer-1)([]) DEBUG - NetworkClient - [Producer clientId=producer-1] Initialize connection to node localhost:55304 (id: 1 rack: null) for sending metadata request
[2022-09-16T22:48:11,792Z](kafka-admin-client-thread | adminclient-1)([]) DEBUG - NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:55304 (id: 1 rack: null) using address localhost/127.0.0.1
[2022-09-16T22:48:11,792Z](kafka-producer-network-thread | producer-1)([]) DEBUG - NetworkClient - [Producer clientId=producer-1] Initiating connection to node localhost:55304 (id: 1 rack: null) using address localhost/127.0.0.1
[2022-09-16T22:48:11,793Z](kafka-producer-network-thread | producer-1)([]) DEBUG - Selector - [Producer clientId=producer-1] Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.Net.pollConnect(Native Method)
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:219)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:530)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:544)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
at java.base/java.lang.Thread.run(Thread.java:833)
[2022-09-16T22:48:11,793Z](kafka-admin-client-thread | adminclient-1)([]) DEBUG - Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.Net.pollConnect(Native Method)
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:219)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:530)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:544)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1293)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1224)
at java.base/java.lang.Thread.run(Thread.java:833)
[2022-09-16T22:48:11,793Z](kafka-producer-network-thread | producer-1)([]) DEBUG - NetworkClient - [Producer clientId=producer-1] Node 1 disconnected.
[2022-09-16T22:48:11,793Z](kafka-admin-client-thread | adminclient-1)([]) DEBUG - NetworkClient - [AdminClient clientId=adminclient-1] Node 1 disconnected.
Similar logs also come from the GlobalStreamThread:
[2022-09-16T22:48:11,746Z](test_x-GlobalStreamThread)([]) DEBUG - NetworkClient - [Consumer clientId=test_x-global-consumer, groupId=null] Initiating connection to node localhost:55304 (id: 1 rack: null) using address localhost/127.0.0.1
[2022-09-16T22:48:11,747Z](test_x-GlobalStreamThread)([]) DEBUG - Selector - [Consumer clientId=test_x-global-consumer, groupId=null] Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.Net.pollConnect(Native Method)
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:219)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:530)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:544)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1296)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
at org.apache.kafka.streams.processor.internals.GlobalStreamThread$StateConsumer.pollAndUpdate(GlobalStreamThread.java:237)
at org.apache.kafka.streams.processor.internals.GlobalStreamThread.run(GlobalStreamThread.java:284)
The Docker port mapping looks like this:
0.0.0.0:55306->2181/tcp, 0.0.0.0:55305->9092/tcp, 0.0.0.0:55304->9093/tcp
The Testcontainers container is started as:
KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"))
.withNetwork(Network.SHARED)
Testcontainers maps KAFKA_LISTENERS here:
https://github.com/testcontainers/testcontainers-java/blob/master/modules/kafka/src/main/java/org/testcontainers/containers/KafkaContainer.java#L49
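For reference, the clients are wired to the container roughly like this (a sketch assuming the standard Testcontainers API; the actual test setup is abbreviated):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.containers.Network;
import org.testcontainers.utility.DockerImageName;

KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"))
        .withNetwork(Network.SHARED);
kafka.start();

// getBootstrapServers() returns the host-mapped address, e.g.
// PLAINTEXT://localhost:55304 (the ephemeral port mapped to listener 9093),
// so clients must read it after start() rather than hard-coding a port.
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());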
Could you help me understand what is happening here, and how to provide the correct port mappings via Testcontainers so that these threads work correctly?
I am trying to make an iOS app and a macOS app communicate with each other. Everything works until the "invite peer" step. I get the errors below:
2022-07-18 17:45:10.066594+0800 iRemoteMac[42755:5561655] [connection] nw_socket_handle_socket_event [C6:1] Socket SO_ERROR [54: Connection reset by peer]
2022-07-18 17:45:10.067320+0800 iRemoteMac[42755:5558565] [MCNearbyServiceAdvertiser] PeerConnection connectedHandler (advertiser side) - error [Unable to connect].
2022-07-18 17:45:10.067366+0800 iRemoteMac[42755:5558565] [MCNearbyServiceAdvertiser] PeerConnection connectedHandler remoteServiceName is nil.
2022-07-18 17:45:20.050295+0800 iRemoteMac[42755:5562197] [MCNearbyServiceAdvertiser] Data from peer [My Dev,721645DB] received with error Connection closed.
Any ideas what causes this error? I don't see where remoteServiceName comes from.
Thanks
It turned out I hadn't set the MCNearbyServiceAdvertiserDelegate correctly, which is why I couldn't invite the peer. However, I sometimes still get this error even though the connection can be established.
I have the following setup:
Keycloak running in Docker, public interface mapped to 127.0.0.1:8180, internal address keycloak-n:8080
Quarkus running in Docker, public interface mapped to 127.0.0.1:8080
Both run in the same Docker network and can communicate
An external AuthzClient (not in Docker) that uses the token to communicate with Quarkus
Everything works if the client and Quarkus are outside of Docker and communicate with Keycloak via the same interface. As soon as Quarkus runs in Docker, I can't get it to work.
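For context, the port mappings correspond to something like the following (hypothetical docker commands sketching the setup above; image names are elided):

docker network create appnet
docker run --name keycloak-n --network appnet -p 127.0.0.1:8180:8080 <keycloak-image>
docker run --network appnet -p 127.0.0.1:8080:8080 <quarkus-app-image>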
I've tried many changes so far. On Keycloak I set the frontendUrl via the JBoss CLI:
/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.frontendUrl,value="http://127.0.0.1:8180/auth")
My current Quarkus config (OIDC part) looks like this:
# OIDC Configuration
quarkus.oidc.auth-server-url=http://keycloak-n:8080/auth/realms/quarkus
quarkus.oidc.client-id=backend-service
quarkus.oidc.credentials.secret=85174256-b231-4385-9fa9-257dd0d27bf0
quarkus.oidc.token.lifespan-grace=20
quarkus.oidc.introspection-path=.well-known/openid-configuration
quarkus.oidc.jwks-path=.well-known/jwks.json
quarkus.oidc.token.issuer=http://127.0.0.1:8180/auth/realms/quarkus
# Enable Policy Enforcement
quarkus.keycloak.policy-enforcer.enable=true
If I remove the token issuer, I get an issuer validation failure from Vert.x. With the current configuration the initial authentication works, but then I get a Connection refused (Connection refused) from the PolicyEnforcer, because it tries to communicate with 127.0.0.1. The stack trace is:
2020-08-03 05:43:27,933 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Releasing connection [{}->http://keycloak-n:8080][null]
2020-08-03 05:43:27,933 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Pooling connection [{}->http://keycloak-n:8080][null]; keep alive indefinitely
2020-08-03 05:43:27,933 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Notifying no-one, there are no waiting threads
2020-08-03 05:43:27,944 DEBUG [org.apa.htt.imp.con.tsc.ThreadSafeClientConnManager] (executor-thread-1) Get connection: {}->http://127.0.0.1:8180, timeout = 0
2020-08-03 05:43:27,944 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) [{}->http://127.0.0.1:8180] total kept alive: 1, total issued: 0, total allocated: 1 out of 20
2020-08-03 05:43:27,944 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) No free connections [{}->http://127.0.0.1:8180][null]
2020-08-03 05:43:27,944 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Available capacity: 20 out of 20 [{}->http://127.0.0.1:8180][null]
2020-08-03 05:43:27,944 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Creating new connection [{}->http://127.0.0.1:8180]
2020-08-03 05:43:27,944 DEBUG [org.apa.htt.imp.con.DefaultClientConnectionOperator] (executor-thread-1) Connecting to 127.0.0.1:8180
2020-08-03 05:43:27,945 DEBUG [org.apa.htt.imp.con.DefaultClientConnection] (executor-thread-1) Connection org.apache.http.impl.conn.DefaultClientConnection#6ba49b73 closed
2020-08-03 05:43:27,946 DEBUG [org.apa.htt.imp.con.DefaultClientConnection] (executor-thread-1) Connection org.apache.http.impl.conn.DefaultClientConnection#6ba49b73 shut down
2020-08-03 05:43:27,946 DEBUG [org.apa.htt.imp.con.tsc.ThreadSafeClientConnManager] (executor-thread-1) Released connection is not reusable.
2020-08-03 05:43:27,946 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Releasing connection [{}->http://127.0.0.1:8180][null]
2020-08-03 05:43:27,946 DEBUG [org.apa.htt.imp.con.DefaultClientConnection] (executor-thread-1) Connection org.apache.http.impl.conn.DefaultClientConnection#6ba49b73 closed
2020-08-03 05:43:27,946 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Notifying no-one, there are no waiting threads
2020-08-03 05:43:27,947 ERROR [org.key.ada.aut.PolicyEnforcer] (executor-thread-1) Could not lazy load resource with path [/hello/find/1] from server: java.lang.RuntimeException: Could not find resource
at org.keycloak.authorization.client.util.Throwables.retryAndWrapExceptionIfNecessary(Throwables.java:91)
at org.keycloak.authorization.client.resource.ProtectedResource.find(ProtectedResource.java:232)
at org.keycloak.authorization.client.resource.ProtectedResource.findByMatchingUri(ProtectedResource.java:291)
at org.keycloak.adapters.authorization.PolicyEnforcer$PathConfigMatcher.matches(PolicyEnforcer.java:268)
at org.keycloak.adapters.authorization.AbstractPolicyEnforcer.getPathConfig(AbstractPolicyEnforcer.java:351)
at org.keycloak.adapters.authorization.AbstractPolicyEnforcer.authorize(AbstractPolicyEnforcer.java:72)
at io.quarkus.keycloak.pep.runtime.KeycloakPolicyEnforcerAuthorizer.apply(KeycloakPolicyEnforcerAuthorizer.java:45)
at io.quarkus.keycloak.pep.runtime.KeycloakPolicyEnforcerAuthorizer.apply(KeycloakPolicyEnforcerAuthorizer.java:29)
at io.quarkus.vertx.http.runtime.security.HttpAuthorizer$1$1$1.run(HttpAuthorizer.java:68)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:2046)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1578)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1452)
at org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:29)
at org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:29)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.jboss.threads.JBossThread.run(JBossThread.java:479)
Caused by: java.lang.RuntimeException: Error executing http method [GET]. Response : null
at org.keycloak.authorization.client.util.HttpMethod.execute(HttpMethod.java:106)
at org.keycloak.authorization.client.util.HttpMethodResponse$3.execute(HttpMethodResponse.java:68)
at org.keycloak.authorization.client.resource.ProtectedResource$5.call(ProtectedResource.java:226)
at org.keycloak.authorization.client.resource.ProtectedResource$5.call(ProtectedResource.java:222)
at org.keycloak.authorization.client.resource.ProtectedResource.find(ProtectedResource.java:230)
... 15 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:609)
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:121)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:144)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:134)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:605)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:440)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:835)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at org.keycloak.authorization.client.util.HttpMethod.execute(HttpMethod.java:84)
... 19 more
2020-08-03 05:43:27,951 DEBUG [org.key.ada.aut.AbstractPolicyEnforcer] (executor-thread-1) Checking permissions for path [http://127.0.0.1:8080/hello/find/1] with config [null].
2020-08-03 05:43:27,951 DEBUG [org.key.ada.aut.AbstractPolicyEnforcer] (executor-thread-1) Could not find a configuration for path [/hello/find/1]
Is there any real example of how to configure such a scenario? I already tried setting the frontendUrl to the internal address; that actually works at runtime, but then the web frontend is no longer accessible.
UPDATE:
From the front-end code (abbreviated):
// Load the client configuration from the classpath and obtain an access token
java.io.InputStream stream = Thread.currentThread().getContextClassLoader()
        .getResourceAsStream("META-INF/keycloak.json");
AuthzClient auth = AuthzClient.create(stream);
AccessTokenResponse response = auth.obtainAccessToken(user, password);
final String accessToken = response.getToken();
...
// Attach the token to outgoing requests
requestContext.getHeaders().add(HttpHeaders.AUTHORIZATION, AUTH_HEADER_PREFIX + accessToken);
...
and config in keycloak.json is
{
"realm": "quarkus",
"auth-server-url": "http://localhost:8180/auth/",
"ssl-required": "external",
"resource": "backend-service",
"verify-token-audience": true,
"credentials": {
"secret": "85174256-b231-4385-9fa9-257dd0d27bf0"
},
"confidential-port": 0,
"policy-enforcer": {}
}
Many thanks
So the following setup works for me:
frontendUrl: external-docker-ip --> NOT localhost!
set via the JBoss CLI, e.g.:
/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.frontendUrl,value="http://172.20.48.1:8180/auth")
## Quarkus config
quarkus.oidc.auth-server-url=http://internal_keycloak_docker_IP:8080/auth/realms/quarkus
quarkus.oidc.token.issuer=http://external-docker-ip:8180/auth/realms/quarkus
## client JSON file
"auth-server-url": "http://external-docker-ip:8180/auth/"
I have been using Flume for a while now, and I have an agent and a collector running on the same machine.
Configuration
agent: exec("/usr/bin/tail -n +0 -F /path/to/file") | agentE2ESink("hostname", 35855)
collector: collectorSource(35855) | collector(10000) { collectorSink("/hdfs/path/to/sink","name") }
I am facing these issues on the agent node:
2012-06-04 19:13:33,625 [naive file wal consumer-27] INFO debug.InsistentOpenDecorator: open attempt 0 failed, backoff (1000ms): Failed to open thrift event sink to hostname:35855 : java.net.ConnectException: Connection refused
2012-06-04 19:13:34,625 [logicalNode hostname-19] ERROR connector.DirectDriver: Expected ACTIVE but timed out in state OPENING
2012-06-04 19:13:34,632 [naive file wal consumer-27] INFO debug.InsistentOpenDecorator: open attempt 1 failed, backoff (2000ms): Failed to open thrift event sink to hostname:35855 : java.net.ConnectException: Connection refused
2012-06-04 19:13:36,635 [naive file wal consumer-27] INFO debug.InsistentOpenDecorator: open attempt 2 failed, backoff (4000ms): Failed to open thrift event sink to hostname:35855 : java.net.ConnectException: Connection refused
and then empty ACKs are sent continuously:
2012-06-04 19:19:56,960 [Roll-TriggerThread-0] INFO endtoend.AckListener$Empty: Empty Ack Listener began 20120604-191956958+0530.881565921235084.00000026
2012-06-04 19:20:07,043 [Roll-TriggerThread-0] INFO hdfs.SeqfileEventSink: closed /tmp/flume-user1/agent/hostname/writing/20120604-191956958+0530.881565921235084.00000026
I don't understand why the connection is refused. Are there any system-level changes that need to be made?
Note: the collector is listening on the port, but the agent is unable to send data through port 35855.
Can anyone help me with this problem?
Thanks
If you are running both the agent and the collector on the same box, you should be using localhost as the address.
agentE2ESink("localhost", 35855)
I've been using the Enyim Memcached client for .NET, trying to connect to a server running on AppHarbor. The relevant parts of my configuration file look like this:
<enyim.com>
<log factory="Enyim.Caching.DiagnosticsLogFactory, Enyim.Caching" />
<memcached protocol="Binary">
<servers>
<add address="8d593f28-37d7-4c4f-a702-aa7687a85ea1.memcacher.com" port="11211" />
</servers>
<authentication
type="Enyim.Caching.Memcached.PlainTextAuthenticator, Enyim.Caching"
userName="changed to post on stack overflow"
password="changed to post on stack overflow"
zone="AUTHZ"
/>
</memcached>
</enyim.com>
My connection keeps timing out. Any ideas what's going on here? Here are the logs from the Enyim client:
2012-01-21 18:56:08 [ERROR] 7 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Could not init pool. - System.TimeoutException: Could not connect to 50.19.210.46:11211
at Enyim.Caching.Memcached.PooledSocket.ConnectWithTimeout(Socket socket, IPEndPoint endpoint, Int32 timeout)
at Enyim.Caching.Memcached.PooledSocket..ctor(IPEndPoint endpoint, TimeSpan connectionTimeout, TimeSpan receiveTimeout)
at Enyim.Caching.Memcached.MemcachedNode.CreateSocket()
at Enyim.Caching.Memcached.Protocol.Binary.BinaryNode.CreateSocket()
at Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl.CreateSocket()
at Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl.InitPool()
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Mark as dead was requested for 50.19.210.46:11211
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - FailurePolicy.ShouldFail(): True
2012-01-21 18:56:08 [WARN] 7 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Marking node 50.19.210.46:11211 as dead
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.DefaultServerPool - Node 50.19.210.46:11211 is dead.
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.DefaultServerPool - Starting the recovery timer.
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.DefaultServerPool - Timer started.
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Acquiring stream from pool. 50.19.210.46:11211
2012-01-21 18:56:08 [DEBUG] 7 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Pool is dead or disposed, returning null. 50.19.210.46:11211
UPDATE:
It turns out I can't connect to the memcached server because it's only accessible from AppHarbor's environment. So for anyone else who runs across this: use a local memcached service when developing locally, then simply change the credentials when deploying (which AppHarbor actually does automatically for you). Problem resolved.
AppHarbor Memcacher buckets are only accessible from AppHarbor application servers. The documentation has been amended to clearly reflect this.
You should use a locally installed memcached server for testing.
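For local development, a minimal configuration could look like this (a sketch assuming a default memcached install listening on 127.0.0.1:11211, with the authentication section dropped because the local daemon is unauthenticated):

<enyim.com>
  <memcached protocol="Binary">
    <servers>
      <add address="127.0.0.1" port="11211" />
    </servers>
  </memcached>
</enyim.com>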