How to print all the HTTP/REST requests that hit WildFly - wildfly-8

If I hit WildFly with 100 requests, it prints both the inbound and outbound messages in server.log.
But if I hit it with 1000 requests, around 5% of the requests get 'Connection refused' and are not printed in server.log.
So please let me know if there is a way to log every request that hits WildFly, even before WildFly adds it to its queue.
We are using Undertow and WildFly 8.1.

Proxy WildFly with any front-end HTTP server (e.g., Apache or Nginx). This way you can keep track of all the requests via your front-end server's access log.
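For example, a minimal Nginx front end could look like the sketch below (the listen port, log path, and WildFly address are assumptions; adjust them to your environment). Each incoming request is written to the access log before it is proxied, so even requests that WildFly later refuses still show up there.
server {
    listen 80;
    # every request is logged here, before it is handed to WildFly
    access_log /var/log/nginx/wildfly_access.log;
    location / {
        proxy_pass http://127.0.0.1:8080;   # WildFly/Undertow HTTP listener
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}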

Related

Unable to get the response from Angular 7 services in production environment

I am new to Angular and developed an application using Angular CLI 7.
When I run the application on my local system, I get the response from the service and everything works fine.
But when I deployed the application to the production server, I was unable to get the response from the service: the service takes too long to respond and I get an HttpErrorResponse with status 'Unknown Error'.
We are using Spring microservices for the API calls that return the response data.
I am using proxy.conf.json for the services because the URL serving the Angular app is different from the service URL.
proxy.conf.json:
{
  "/api/*": {
    "target": "http://wsd185erd986.test.com/api",
    "secure": false,
    "logLevel": "debug",
    "changeOrigin": true
  }
}
I changed package.json to pass proxy.conf.json via proxyConfig, and included the response headers in the service.
Does anyone know how to configure these proxy settings for an Angular production build? Do we need to include any headers in the service calls?
HttpErrorResponse - A response that represents an error or failure, either from a non-successful HTTP status, an error while executing the request, or some other failure which occurred during the parsing of the response.
So, as per the docs, this error is thrown in multiple cases: either there is an error at the server end and the server sends an error response, or there was some issue in parsing the response.
Please check the Spring Boot API request logs to see what response code is sent back.
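If such request logs are not already enabled on the backend, a minimal sketch for turning them on (assuming a Spring Boot service on the embedded Tomcat; these are standard Spring Boot properties, not taken from the question) is:
# application.properties
logging.level.org.springframework.web=DEBUG
server.tomcat.accesslog.enabled=true
server.tomcat.accesslog.directory=logs
The access log will then record each incoming request and the status code returned, which you can compare against the 'Unknown Error' seen in the browser.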
You can also check the API with a standalone client (like Postman) to see if there is some issue.
As an aside, you should not be using the Angular development server in production; it is meant for development only. Typically you host the Angular production files (they are merely static resources) on any web server (like Apache or Nginx) and then either use that server as a proxy (by adding its proxy configuration) or enable CORS on the services.
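As an illustration of that proxy option, a hedged sketch of an Nginx site serving the production build and forwarding /api to the backend might look like this (the root path and listen port are assumptions; the backend host mirrors the one in proxy.conf.json above):
server {
    listen 80;
    root /var/www/my-angular-app/dist;   # output of ng build --prod
    index index.html;
    location / {
        try_files $uri $uri/ /index.html;   # let the Angular router handle client-side routes
    }
    location /api/ {
        proxy_pass http://wsd185erd986.test.com/api/;
        proxy_set_header Host $host;
    }
}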

ruby/rails grpc server restart without disconnecting clients

I am using the following code snippet to start a gRPC server, which works fine. But whenever I need to deploy new code to the server, what is the right way to restart it? Should I just kill the server process and let the client handle the error? Or is there a way to enable a master/worker mode like Unicorn does?
s = GRPC::RpcServer.new
s.run_till_terminated
There is no built-in support in ruby-gRPC for rolling out new deployments.
However, it should be possible for applications with multiple server instances to do rolling restarts. Note that if gRPC connects to a server, starts making RPCs to it, and that server then gets shut down, gRPC will internally notice that the connection went bad and will try to make its next RPC on a new connection (the default behavior is to perform the next RPC on the next resolved address that can be successfully connected to, which might mean reconnecting to the same address for which the connection just broke). Note too that gRPC servers use SO_REUSEPORT by default, so one could potentially run multiple servers on the same port.
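As a sketch of the shutdown side of such a rolling restart (the port, MyService, and the signal handling below are assumptions, not part of the original answer), you could stop the server gracefully on SIGTERM so in-flight RPCs finish while clients reconnect to another instance:
require 'grpc'

s = GRPC::RpcServer.new
s.add_http2_port('0.0.0.0:50051', :this_port_is_insecure)
s.handle(MyService.new)  # MyService stands in for your generated service implementation

# On SIGTERM (what most deploy tools send), stop the server gracefully so
# in-flight RPCs can complete; doing the stop in a separate thread keeps
# blocking work out of the trap handler.
trap('TERM') { Thread.new { s.stop } }

s.run_till_terminated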

Neo4j Enterprise 3.2 browser does not connect

I am trying to learn Neo4j using the trial Enterprise version, but the browser is not able to connect. The service is running, but when I try to log in via the browser at http://localhost:7474/browser/ the error is:
N/A: WebSocket connection failure. Due to security constraints in your
web browser, the reason for the failure is not available to this Neo4j
Driver. Please use your browsers development console to determine the
root cause of the failure. Common reasons include the database being
unavailable, using the wrong connection URL or temporary network
problems. If you have enabled encryption, ensure your browser is
configured to trust the certificate Neo4j is configured to use.
WebSocket readyState is: 3
In the console the error is:
WebSocket is already in CLOSING or CLOSED state.
I am using Chrome and the neo4j.conf is:
# Bolt connector
dbms.connector.bolt.enabled=true
#dbms.connector.bolt.tls_level=OPTIONAL
dbms.connector.bolt.listen_address=:7687
# HTTP Connector. There must be exactly one HTTP connector.
dbms.connector.http.enabled=true
#dbms.connector.http.listen_address=:7474
# HTTPS Connector. There can be zero or one HTTPS connectors.
dbms.connector.https.enabled=true
#dbms.connector.https.listen_address=:7473
I understand from this issue that version 3.2 only allows Bolt, and I have tried playing with the conf, but so far no luck. Is there a way to get the local connection going over Bolt?
Thank you in advance, Paola

Rails app not responding to Postman requests

My locally running Rails app (on localhost:3000) responds to requests in the browser or from curl, but it does not respond to requests from the desktop Postman client, which immediately gives the generic "Could not get any response". Any idea what could be causing this?
For this you can use ngrok. It provides a tunnel which can easily be used with Postman or any other such service. Download it from here and run the tunnel as
./ngrok http 3000
or you can use lvh.me:3000 if your request comes from the same machine.

Request to external service times out on Heroku web process but works in console process

I have a Rails 4 application running on Heroku. For one type of request I make an HTTP call to an external service and then return the response to the client.
As I see from the logs, the request to the external service takes too long, resulting in Heroku's H12 error, where it sends back a 503 after 30 seconds. The HTTP request that I make to the external service eventually fails with a Net::ReadTimeout after some more time (60 seconds).
However, if I run heroku run console and make the same HTTP call (through the same Ruby code), it works just fine. The request completes in about a second or two at most.
I am unable to understand why this request times out when run from the web process while it works seamlessly in heroku run console.
I am running Puma as my web server. I followed the guidelines given here: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server
I also tried the basic WEBrick server to see if that helps, but to no avail.
Has anyone faced this issue? Any hints on how to debug this?
