Request to external service times out on Heroku web process but works in console process - ruby-on-rails

I have a Rails 4 application running on Heroku. For one type of request I make an HTTP call to an external service and then return its response to the client.
As I can see from the logs, the request to the external service takes too long, resulting in Heroku's H12 error, where the router returns a 503 after 30 seconds. The HTTP request I am making to the external service eventually comes back with a Net::ReadTimeout after some more time (60 seconds).
However, if I run heroku run console and make the same HTTP call (through the same Ruby code), it works just fine. The request completes in about a second or two at most.
I am unable to understand why this request times out when run from the web process while it works seamlessly in heroku run console.
I am running Puma as my web server. I followed the guidelines given here: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server
I also tried the basic WEBrick server to see if that would help, but to no avail.
Has anyone faced this issue? Any hints on how to debug this?
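One thing worth trying while debugging is to set explicit timeouts on the outbound call, well below Heroku's 30-second router limit, so the failure surfaces in your own logs instead of as an H12. A minimal sketch, assuming plain Net::HTTP and a placeholder URL:

    require "net/http"

    # Placeholder endpoint -- substitute the real external service URL.
    uri = URI("https://external-service.example.com/api/resource")

    # Fail fast: keep the open and read timeouts well under Heroku's 30 s router
    # limit instead of relying on Net::HTTP's default 60 s read timeout.
    Net::HTTP.start(uri.host, uri.port,
                    use_ssl: uri.scheme == "https",
                    open_timeout: 5,
                    read_timeout: 20) do |http|
      response = http.get(uri.request_uri)
      puts response.code
    end

This won't explain why the web dyno behaves differently from the one-off console dyno, but it turns a silent 30-second hang into an error you can see and log.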

Related

Err max clients reached Redis/Sidekiq/Rails

I have been stuck on this issue for the past 3 days and am unsure where to look now.
I have a simple Sidekiq implementation in my Rails app.
I am working with: Rails 4.2.0, Sidekiq 4.1.2, Redis 3.0.6
The production app is running live on Heroku, and I have 1 worker dyno and 1 web dyno.
The issue is as follows, and I am unsure how to approach it or what I did to cause it.
When I run redis-cli on Heroku I can see the clients that I have running. At most I have 2 or 3 clients running at any given time. I can easily kill the clients with
CLIENT KILL TYPE normal
So that's all fine and dandy. The part where things get a little tricky is when I fire up my server locally while working in development. All of a sudden redis-cli shows that I have 19 clients running, which results in this being logged:
Err max clients reached
My assumption is that somehow I am locally directing Sidekiq to work off the production Redis URL. I have to admit what I know about Redis and Sidekiq is limited, but I do have a basic understanding of how it should work.
Any help or guidance would be appreciated.
Try using sidekiq -c 3 to limit your concurrency.
This ended up being a configuration error. Just in case anyone stumbles upon this question, hopefully this will help them avoid overlooking something like I did.
This issue was happening only when I fired up my local server, so I knew it had something to do with my local setup. I noticed that in my production redis-cli I was seeing clients with my local IP in the ADDR column.
This led me to believe that my local machine was pushing clients to my production Redis server. Looking at my logs when I fired up my Procfile, I saw the production Redis URL there, which confirmed it.
Finally, after searching through my code, I discovered that I had actually added that URL to my .env file, so when I fired up my server it was using the production Redis URL. I changed it to the appropriate address for local development in my .env file (redis://127.0.0.1:6379) and everything is now working as normal.
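For reference, a minimal sketch of keeping the connection configurable per environment; the REDIS_URL variable name and the initializer path are assumptions on my part, not from the original post:

    # config/initializers/sidekiq.rb -- sketch; falls back to a local Redis
    # when REDIS_URL is not set, so development never touches production.
    redis_url = ENV.fetch("REDIS_URL", "redis://127.0.0.1:6379/0")

    Sidekiq.configure_server do |config|
      config.redis = { url: redis_url }
    end

    Sidekiq.configure_client do |config|
      config.redis = { url: redis_url }
    end

With this in place, the production URL only needs to exist as a config var on Heroku, not in a local .env file.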

How can I automatically restart my Heroku app when there's a server error?

I have a Rails 4.2 app running on Heroku. Occasionally there is an issue that causes most incoming requests to return a server error. For example, there could be a memory leak or a max database connection issue. How can I set up a script or service to automatically restart the server when it detects errors?
I think this service could ping the app every few minutes, and if it detects an error, it should confirm there's really a problem and then run heroku restart. How could this be set up?
After Googling this topic, I came across Neptune.io, which seems to provide a useful service for this task.
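If you would rather roll your own, a rough sketch of a watchdog script run from cron every few minutes could look like the following; the app name and URL are placeholders, not from the original post:

    #!/usr/bin/env ruby
    # Watchdog sketch: ping the app, confirm the failure, then restart the dynos.
    # "myapp" and the URL below are hypothetical placeholders.
    require "net/http"

    APP_URL  = URI("https://myapp.herokuapp.com/")
    APP_NAME = "myapp"

    def healthy?
      Net::HTTP.get_response(APP_URL).is_a?(Net::HTTPSuccess)
    rescue StandardError
      false
    end

    # Double-check before restarting, as the question suggests.
    unless healthy?
      sleep 30
      system("heroku restart --app #{APP_NAME}") unless healthy?
    end

The script needs the Heroku CLI installed and authenticated wherever cron runs it.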

502 Bad Gateway error for Rails production environment?

When I deploy my Rails app via Jenkins, I get a 502 Bad Gateway error when sending form data. But when I run the same app locally in all three environments, it works properly.
Updated Question:
My Rails app works properly on my local machine in all environments (test, dev, prod).
But when I deploy it via Jenkins CI, I get the above error when submitting form data to another server.
The problem was that I had configured my Unicorn server with
timeout 5
but the response from the service call was taking longer than that, so I increased the timeout:
timeout 15
Now it is working.
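For context, these directives live in the Unicorn config file passed via -c. A minimal sketch, assuming a config/unicorn.rb (the worker count is illustrative):

    # config/unicorn.rb -- sketch; only the timeout matters for this issue
    worker_processes 2   # illustrative; tune to the host
    preload_app true
    timeout 15           # was 5; the slow upstream call needs more headroom

Note that Unicorn's timeout hard-kills the worker, so it should stay comfortably above the slowest expected upstream response.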

403 errors from load balancer while new instances are booting

I have a RoR app running on Elastic Beanstalk. I have occasionally been seeing 403 errors from Passenger for a while. Most of the time one instance is running, but this is increased to 3 or 4 instances during busy periods of the day.
Session stickiness is not turned on.
I have noticed that when a new server is started, the ELB sends requests to it before bundle install has finished.
If I SSH to the newly started server, I can see in /var/app/current/ that the app has not yet been installed, and if I run top it looks like Bundler is running and compiling things with cc1, etc.
/var/app/support/log/passenger.log shows that requests to valid URLs within my Rails app are being received and responded to with a 404, which is hardly surprising because the app isn't there yet.
After 5-10 minutes all of the compiling is complete and the app files appear in /var/app/current and all is well.
This doesn't seem quite right to me. How do I set up the ELB / my Rails app so that the ELB can tell when the app is ready to receive requests?
I found the answer to this. There was no application health check URL set. In that case the ELB only pings the instance to see if it's healthy, i.e. it checks that the instance has booted rather than whether Rails is up and running. Setting the health check URL to '/login/' fixed it for me, because this returns a 404 until Rails is running and a 200 afterwards.
Elastic Beanstalk demands 2 correct responses before it deems an instance healthy, and it checks the instance every 5 minutes. This means an instance can take a while to start serving requests: boot time + waiting for the next poll from the ELB + 5 minutes before it sees any real traffic.
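Instead of reusing '/login/', a dedicated endpoint works the same way. This is a hypothetical sketch, not part of the original setup; any route that 404s until Rails has booted and returns 200 afterwards has the same effect:

    # config/routes.rb -- hypothetical /healthcheck route for the ELB to poll
    Rails.application.routes.draw do
      get "/healthcheck", to: proc { [200, { "Content-Type" => "text/plain" }, ["OK"]] }
    end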

Prevent development.log backlog from delaying server response

Ever notice this Rails dev server behavior?
1. Start rails server
2. Run some commands in rails console or a script, appending content to development.log
3. Make a request to the dev server you started in step #1
4. The server hangs, because it is busy writing out to the terminal all the new text from step #2
Is there a way to stop that? It's especially annoying with the combination of an infrequently-accessed server and a frequently-polling background process.
I understand that the theoretical solution is to make the Rails dev server write to its log asynchronously, but I'm hoping someone's already written something so I don't have to hack it up myself.
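One workaround (my assumption, not something from the question) is to give console and runner sessions their own log file in development, so the server process has nothing extra to replay to the terminal:

    # config/environments/development.rb -- sketch of a workaround (assumption):
    # console sessions log to log/console.log instead of development.log, so the
    # dev server's terminal output stays limited to its own requests.
    Rails.application.configure do
      if defined?(Rails::Console)
        config.logger = ActiveSupport::Logger.new(Rails.root.join("log", "console.log"))
      end
    end

This doesn't make the logging asynchronous, but it removes the backlog the server would otherwise print before responding.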
