I have a Mule application which exposes a REST web service:
<inbound-endpoint exchange-pattern="request-response" address="${webservice.url}" doc:name="Generic"></inbound-endpoint>
I am using Jersey resources for the REST component.
The code works fine in QA, but in UAT it throws the following error:
ERROR org.mule.exception.DefaultSystemExceptionStrategy - Caught exception in Exception Strategy: Connection reset
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:209) ~[?:1.8.0_111]
at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[?:1.8.0_111]
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) ~[?:1.8.0_111]
at java.io.BufferedInputStream.read(BufferedInputStream.java:265) ~[?:1.8.0_111]
at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78) ~[commons-httpclient-3.1.jar:?]
at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106) ~[commons-httpclient-3.1.jar:?]
at org.mule.transport.http.HttpServerConnection.readLine(HttpServerConnection.java:245) ~[mule-transport-http-3.8.1.jar:3.8.1]
at org.mule.transport.http.HttpServerConnection.getRequestLine(HttpServerConnection.java:557) ~[mule-transport-http-3.8.1.jar:3.8.1]
at org.mule.transport.http.HttpRequestDispatcherWork.run(HttpRequestDispatcherWork.java:67) ~[mule-transport-http-3.8.1.jar:3.8.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Telnet to the port works. Cron jobs on the server are working fine; only the REST methods fail.
Connection reset means that the other side closed the connection. Given that the application works in one environment, I assume that the problem is environmental. You have to find out what is different between the environments to cause the problem. Is there anything between the client and the Mule application? A load balancer, proxy, etc.? A difference in timeouts? Performing a network traffic capture on both sides could help isolate the problem.
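If it helps narrow it down, a simple client-side probe run from the same network zone as the failing UAT clients can show whether the reset happens while connecting or while reading. This is only a diagnostic sketch (the URL and timeout values are placeholders, assuming Java 8):

import java.net.HttpURLConnection;
import java.net.SocketException;
import java.net.URL;

public class EndpointProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder: point this at the value resolved for ${webservice.url} in UAT.
        URL url = new URL("http://uat-host:8081/api/resource");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);  // fail fast if nothing is listening
        conn.setReadTimeout(60000);    // long enough to rule out a client-side read timeout
        conn.setRequestMethod("GET");
        try {
            System.out.println("HTTP status: " + conn.getResponseCode());
        } catch (SocketException e) {
            // A reset here means something between this probe and Mule closed the socket.
            System.out.println("Socket error: " + e.getMessage());
        } finally {
            conn.disconnect();
        }
    }
}

Comparing the probe's behaviour in QA and UAT, together with the traffic captures, usually points at the intermediary that drops the connection.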
Related
We are deploying several services in Docker Swarm (16 services in total) on a single master node. Most of these services are developed in Quarkus; some of them are compiled in native mode, others as regular JVM builds because of their dependencies.
When the services are in use everything works fine, but if they sit idle for more than 15 minutes we start to receive this message:
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:186)
at java.net.SocketInputStream.read(SocketInputStream.java:140)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at org.jboss.resteasy.client.jaxrs.engines.ManualClosingApacheHttpClient43Engine.invoke(ManualClosingApacheHttpClient43Engine.java:302)
It doesn't matter whether it is a native build or a JVM build; the issue is reproduced in all of them.
This message appears when trying to consume an API that is also built in Quarkus; it is consumed through Swarm's internal network, using the service name to resolve its location.
To consume the API we build a jar in which we implement the service interfaces as indicated in the guide https://quarkus.io/guides/rest-client, additionally using org.eclipse.microprofile.faulttolerance for the retry, roughly as sketched below.
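One of those client interfaces looks roughly like this (the name, path and retry values here are placeholders, not the real ones):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

// The target host is resolved through Swarm's internal DNS using the service name
// configured for this client; "items-api" is a placeholder config key.
@Path("/items")
@RegisterRestClient(configKey = "items-api")
public interface ItemsClient {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    @Retry(maxRetries = 3, delay = 500) // MicroProfile Fault Tolerance retry
    String getItems();
}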
Your connection is lost because of the lack of communication through the socket while the services are idle. I found a few solutions here: Apache HttpClient throws java.net.SocketException: Connection reset if I use it as singletone
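The common idea in those solutions is to stop reusing pooled connections that have been idle long enough for the other side to close them. A minimal sketch with plain Apache HttpClient 4.5 (the timeout values are placeholders); with the Quarkus REST client the pool is managed by RESTEasy, so the equivalent tuning goes through its client configuration rather than building the HttpClient yourself:

import java.util.concurrent.TimeUnit;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class HttpClientFactory {

    // Shared (singleton) client that discards pooled connections before an idle
    // timeout on the server or an intermediary has a chance to close them.
    public static CloseableHttpClient newClient() {
        return HttpClients.custom()
                .evictIdleConnections(30, TimeUnit.SECONDS)   // drop connections idle for more than 30s
                .evictExpiredConnections()                    // honour Keep-Alive expiry headers
                .setConnectionTimeToLive(1, TimeUnit.MINUTES) // never reuse a connection older than 1 minute
                .build();
    }
}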
Apparently for no specific reason, and with nothing in the neo4j logs, our application is getting this:
2019-01-30 14:15:08,715 WARN com.calenco.core.content3.ContentHandler:177 - Unable to acquire connection from the pool within configured maximum time of 60000ms
org.neo4j.driver.v1.exceptions.ClientException: Unable to acquire connection from the pool within configured maximum time of 60000ms
at org.neo4j.driver.internal.async.pool.ConnectionPoolImpl.processAcquisitionError(ConnectionPoolImpl.java:192)
at org.neo4j.driver.internal.async.pool.ConnectionPoolImpl.lambda$acquire$0(ConnectionPoolImpl.java:89)
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at org.neo4j.driver.internal.util.Futures.lambda$asCompletionStage$0(Futures.java:78)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:745)
The neo4j server is still running and answering requests from either its web browser console or the cypher-shell CLI. Also, restarting our application re-acquires the connection to neo4j with no issue.
Our application connects to neo4j once when it's started and then keeps that connection open for as long as it's running, opening and closing sessions against that connection as needed to fulfill the received requests (roughly the pattern sketched below).
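A simplified sketch of that pattern, assuming the 1.7 Java driver (URI, credentials and the query are placeholders):

import org.neo4j.driver.v1.AuthTokens;
import org.neo4j.driver.v1.Driver;
import org.neo4j.driver.v1.GraphDatabase;
import org.neo4j.driver.v1.Session;
import org.neo4j.driver.v1.StatementResult;

public class Neo4jAccess implements AutoCloseable {

    // Created once at application start-up and kept for its whole lifetime.
    private final Driver driver;

    public Neo4jAccess(String uri, String user, String password) {
        this.driver = GraphDatabase.driver(uri, AuthTokens.basic(user, password));
    }

    public long countNodes() {
        // Sessions are short-lived; try-with-resources returns the underlying
        // connection to the pool even if the query fails.
        try (Session session = driver.session()) {
            StatementResult result = session.run("MATCH (n) RETURN count(n) AS c");
            return result.single().get("c").asLong();
        }
    }

    @Override
    public void close() {
        driver.close();
    }
}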
It's the second time in less than a month that we've seen the above exception thrown.
Any ideas?
Thanks in advance
I'm running Keycloak, the Keycloak Security Proxy, and a UI application in a Docker Compose network. When I try to access the web page, I get a login page, which I can use - but instead of being successfully redirected, I get the following error:
> Aug 03, 2018 1:13:24 PM org.keycloak.adapters.OAuthRequestAuthenticator resolveCode
ERROR: failed to turn code into token
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
I get this error no matter what kind of application I proxy, or whether I run it within Docker Compose or simply as a node. It probably also appears when I try to use the Python adapters instead of the security proxy.
The whole network runs behind a company proxy; could this be the reason?
Considering that the code seems to be sent (see below), it seems Keycloak can at least verify the user. But I'm stumped on how to solve the problem. Does anyone have any ideas?
http://localhost:8080/?state=84736978-afe6-43eb-a554-aedf86717415&session_state=8a231709-5ef3-45fd-8e36-103e521ba49e&code=eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..4GewkGISgYEXeGPuCxupsA.V939JivWRaNltjnjT4r2CJGT4oj1HEX9iXycJFoAb_qhI4ietRc5Z2wQO6ekF9MOZ0VtMcLAyX0zASY-NPEcf3byX0INP-2zJDSF4TOEXNbMbMnVeKFgmLgQKDseUsl1ieofPVY7df8QVvpTs98VAw2_g2XwTsLemBcpxfalvMRBwViN6PyJI8A-gJJToolyDafHbzIco7bH4X4y5bzZsUh5yB6ZUMy0goBkAV_KPLepnA8X2OjEJef8GHyqgHVi.QQtjD-E_MZq72hb4g0BEbw
My proxy.json file is:
{
    "target-url": "http://localhost:7005",
    "bind-address": "0.0.0.0",
    "http-port": "8080",
    "applications": [
        {
            "base-path": "/",
            "adapter-config": {
                "realm": "realm",
                "resource": "realm_ui",
                "auth-server-url": "http://localhost:8800/auth",
                "ssl-required": "external",
                "credentials": {
                    "secret": "secret"
                },
                "confidential-port": 0
            },
            "constraints": [
                {
                    "pattern": "/*",
                    "roles-allowed": [
                        "user"
                    ]
                }
            ]
        }
    ]
}
In Keycloak:
Access Type: confidential
Standard Flow Enabled: ON
Direct Access Grants Enabled: ON
Valid Redirect URIs: *
After searching for a while, I found the solution. It was a networking problem.
Keycloak OpenIDConnect Authentication flow follows 3 steps, as explained here: https://www.keycloak.org/docs/3.3/server_admin/topics/sso-protocols/oidc.html
Step 1 & 2 were completed, but upon receiving the temporary code from the browser the application was unable to connect with Keycloak. In step 1&2 it is always the browser connecting to application or Keycloak, not them speaking with each other.
This happened, because within my docker-compose file I declared networks that overwrote the automatic binding to 0.0.0.0 of Keycloak and the proxy. Additionally, the auth-server-url to connect to Keycloak must be true for the browser as well as the docker container of the Keycloak security proxy.
Make sure the credential secret and auth-server-url values are the same in the keycloak.json and proxy.json files. Also, try removing "confidential-port": 0 from the proxy.json file.
I don't think the company proxy is the reason for this error.
Our application is able to connect to the RabbitMQ server, messages are received in the message listeners, and everything works fine. But when RabbitMQ is restarted we get the exception below in the logs and messages are no longer received in the listeners. Once we restart our application container as well, everything starts working again. We cannot restart our application in the Production environment, and we want the application to recover its connection to RabbitMQ once the RabbitMQ server is back up. Can someone please help?
2018-05-10 09:10:01,561[SimpleAsyncTaskExecutor-34]|DEBUG|org.springframework.beans.factory.support.DefaultListableBeanFactory|13-org.springframework.beans-3.1.4.RELEASE|Returning cached instance of singleton bean 'org.springframework.amqp.core.Queue#3'
2018-05-10 09:10:01,560[SimpleAsyncTaskExecutor-27]|DEBUG|org.springframework.amqp.rabbit.listener.BlockingQueueConsumer|435-wrap_file_._jetstream_thirdparty_spring-rabbit-1.3.6.RELEASE.jar-0.0.0|Closing Rabbit Channel: null
2018-05-10 09:10:01,563[SimpleAsyncTaskExecutor-34]|DEBUG|org.springframework.amqp.rabbit.listener.BlockingQueueConsumer|435-wrap_file_._jetstream_thirdparty_spring-rabbit-1.3.6.RELEASE.jar-0.0.0|Starting consumer Consumer: tags=[[]], channel=null, acknowledgeMode=MANUAL local queue size=0
2018-05-10 09:10:01,560[SimpleAsyncTaskExecutor-32]|DEBUG|org.springframework.beans.factory.support.DefaultListableBeanFactory|13-org.springframework.beans-3.1.4.RELEASE|Returning cached instance of singleton bean 'org.springframework.amqp.core.Queue#1'
2018-05-10 09:10:01,559[SimpleAsyncTaskExecutor-29]|WARN|org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer|435-wrap_file_._jetstream_thirdparty_spring-rabbit-1.3.6.RELEASE.jar-0.0.0|Consumer raised exception, processing can restart if the connection factory supports it
org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused
at org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:54)[435:wrap_file_._jetstream_thirdparty_spring-rabbit-1.3.6.RELEASE.jar:0]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:195)[435:wrap_file_._jetstream_thirdparty_spring-rabbit-1.3.6.RELEASE.jar:0]
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createConnection(CachingConnectionFactory.java:371)[435:wrap_file_._jetstream_thirdparty_spring-rabbit-1.3.6.RELEASE.jar:0]
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils$1.createConnection(ConnectionFactoryUtils.java:80)[435:wrap_file_._jetstream_thirdparty_spring-rabbit-1.3.6.RELEASE.jar:0]
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.doGetTransactionalResourceHolder(ConnectionFactoryUtils.java:130)[435:wrap_file_._jetstream_thirdparty_spring-rabbit-1.3.6.RELEASE.jar:0]
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.getTransactionalResourceHolder(ConnectionFactoryUtils.java:67)[435:wrap_file_._jetstream_thirdparty_spring-rabbit-1.3.6.RELEASE.jar:0]
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:365)[435:wrap_file_._jetstream_thirdparty_spring-rabbit-1.3.6.RELEASE.jar:0]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1009)[435:wrap_file_._jetstream_thirdparty_spring-rabbit-1.3.6.RELEASE.jar:0]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_80]
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)[:1.7.0_80]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)[:1.7.0_80]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)[:1.7.0_80]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)[:1.7.0_80]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)[:1.7.0_80]
at java.net.Socket.connect(Socket.java:579)[:1.7.0_80]
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:625)[:1.7.0_80]
at com.rabbitmq.client.impl.SocketFrameHandlerFactory.create(SocketFrameHandlerFactory.java:50)[432:com.rabbitmq.client:4.3.0]
at com.rabbitmq.client.impl.recovery.RecoveryAwareAMQConnectionFactory.newConnection(RecoveryAwareAMQConnectionFactory.java:61)[432:com.rabbitmq.client:4.3.0]
at com.rabbitmq.client.impl.recovery.AutorecoveringConnection.init(AutorecoveringConnection.java:99)[432:com.rabbitmq.client:4.3.0]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:925)[432:com.rabbitmq.client:4.3.0]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:884)[432:com.rabbitmq.client:4.3.0]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:842)[432:com.rabbitmq.client:4.3.0]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1026)[432:com.rabbitmq.client:4.3.0]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:191)[435:wrap_file_._jetstream_thirdparty_spring-rabbit-1.3.6.RELEASE.jar:0]
... 7 more
This exception seems weird to me because, if RabbitMQ is refusing connections, how is the connection established when we restart the application? This issue is giving me a very hard time.
Version 1.3.6 is far too old and out of support. It would be great if you could consider upgrading to the latest version: https://projects.spring.io/spring-amqp/, or at least to 1.7.7. Also read this: https://docs.spring.io/spring-amqp/docs/2.0.3.RELEASE/reference/html/_reference.html#auto-recovery.
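For reference, reconnection after a broker restart is driven by the Spring AMQP listener container itself, and the auto-recovery section linked above recommends disabling the 4.x Java client's own automatic recovery when running Spring AMQP versions before 1.7.1, so the two mechanisms don't conflict. A minimal sketch (host, queue name and listener are placeholders):

import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

public class RabbitListenerConfig {

    public static SimpleMessageListenerContainer container(MessageListener listener) {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("rabbit-host"); // placeholder host
        // amqp-client 4.x enables its own automatic recovery by default; with Spring AMQP
        // versions before 1.7.1 it should be switched off and the container left to recover.
        connectionFactory.getRabbitConnectionFactory().setAutomaticRecoveryEnabled(false);

        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueueNames("my.queue"); // placeholder queue
        container.setMessageListener(listener);
        container.setRecoveryInterval(10000); // wait 10s between reconnection attempts while the broker is down
        return container;
    }
}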
I'm trying to get the most basic example for Spring Cloud Dataflow running on CloudFoundry.
I've followed the steps here: http://docs.spring.io/spring-cloud-dataflow-admin-cloudfoundry/docs/current-SNAPSHOT/reference/htmlsingle/#getting-started to make the admin app available in my org/space.
Then I tried to create the most basic example from http://cloud.spring.io/spring-cloud-dataflow/, namely to create the "ticktock" stream:
dataflow:>stream create ticktock --definition "time | log" --deploy
I can see that both apps, ticktock-time and ticktock-log, are created in the space, the required "redis" service is bound to them, and they try to start. Unfortunately they don't start completely because they have problems accessing the "redis" service. In the log we find:
Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'outputBindingLifecycle'; nested exception is org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle'; nested exception is org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
which is eventually caused by:
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.ConnectException: Connection refused
Am I missing some configuration step in between?
Alexander
There seems to be an issue with our deployer using the master branch of the Java buildpack. Try these settings for the Dataflow Server:
cf set-env s-c-dataflow-server CLOUDFOUNDRY_BUILDPACK https://github.com/cloudfoundry/java-buildpack.git#v3.6
cf restage s-c-dataflow-server
Also, be aware that we currently launch apps using "streamname-module" as part of the URL, so unless you use unique stream names you might collide with other users and get a "400 Bad Request" error.