I am running Rails applications with nginx + Passenger.
After nginx starts serving, I can access the app, but after some time, maybe an hour or half a day, it gives me the following message:
Internal server error
An error occurred while starting the web application. It sent an unknown response type "".
Then I need to reboot the server to get nginx serving normally again.
My server is running on AliYun and it only has 512 MB of memory. Is that too small to run Passenger?
Or is something wrong with the configuration?
This is only a workaround, and you should find the actual problem (by monitoring memory usage, CPU usage, open file handles, etc.), but until then you can use the passenger_max_requests directive.
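For example, in the nginx config (a minimal sketch; the server name and paths are placeholders for your own):

server {
    listen 80;
    server_name example.com;         # placeholder
    root /var/www/myapp/public;      # placeholder app path
    passenger_enabled on;
    # Restart each application process after it has served 1000
    # requests, which papers over slow memory leaks until you find them.
    passenger_max_requests 1000;
}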
My problem is similar to "AWS: None of the Instances are sending data" but has a slightly different error message.
I have a Rails application running on ElasticBeanstalk, and it appears to be running correctly. Periodically, Enhanced Health Monitoring sends me error messages such as:
Environment health has transitioned from Ok to Degraded. 20.0 % of the
requests are failing with HTTP 5xx.
where the percentage varies up to 100%. Even though I've made no changes, a minute later I get a followup message telling me that everything is back to normal:
Environment health has transitioned from Degraded to Ok.
I've downloaded the full logs from ElasticBeanstalk but I don't know exactly where to look (there are around 20 different log files in various directories).
I'm currently using the free AWS tier with the smallest instances of database, server, etc. Could this be the cause? Which of the log files should I be looking in, and what should I be looking for?
I run Rails apps on Elastic Beanstalk and have found it helpful to think about Beanstalk as a computer (in this case an Amazon EC2 instance) running your Rails app and a web server (either Passenger or Puma). When you get a 500 error, it could be because your Rails app didn't deploy properly, in which case Passenger or Puma will return an error, or because your app deployed properly but encountered an error just like it might on your local machine.
In either case, to diagnose an error, download the full logs from your AWS console (open the correct app environment and then choose Logs > Request Logs > Full logs > Download). Deployment errors are harder to diagnose, but I recommend starting by looking in var-XX/logs/log/eb-activity.log. I suspect your error is coming from your rails app itself, in which case I recommend looking in var-XX/app/support/logs/passenger.log and production.log. To find a 500 error, search for "500 Internal" and then treat the error like you would any other rails error.
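For example, once the downloaded bundle is unzipped, something like this (paths follow the var-XX layout above):

# search the bundle for 5xx responses and the Rails errors behind them
grep -rn "500 Internal" var-*/app/support/logs/passenger.log
grep -rn "Completed 500" var-*/app/support/logs/production.log   # Rails logs "Completed 500 Internal Server Error"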
You can go to the EC2 instance, run the application just like you would on your local machine, and see the logs.
You can ssh into your EC2 instance using the command eb ssh and go to the /opt/python/ directory (it will be different for Ruby or other programming languages).
/opt/python/run is the directory where you will find the version of your application that is run from the EC2 instance. Look for the venv and app directories inside the run directory.
Note: The above folder structure is for Python, but a similar post-deployment folder structure exists for other programming languages. Just look for the standard deployment directory structure for your language.
For Python:
/opt/python: Root of where your application will end up.
/opt/python/current/app: The current application that is hosted in the environment.
/opt/python/on-deck/app: The app is initially put in on-deck and then, after the deployment is complete, it will be moved to current. If you are getting failures in your container_commands, check out the on-deck folder and not the current folder.
/opt/python/current/env: All the env variables that eb will set up for you. If you are trying to reproduce an error, you may first need to source /opt/python/current/env to get things set up as they would be when eb deploy is running.
/opt/python/run/venv: The virtual env used by your application; you will also need to run source /opt/python/run/venv/bin/activate if you are trying to reproduce an error (see the sketch after this list).
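Putting those together, a minimal reproduction session might look like this (the manage.py step is hypothetical and assumes a Django app):

# inside an eb ssh session, recreate the deployed environment
source /opt/python/current/env             # load the env variables eb sets up
source /opt/python/run/venv/bin/activate   # activate the deployed virtualenv
cd /opt/python/current/app
python manage.py check                     # hypothetical: any command that exercises the app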
I know it is a little late, but I wanted to share the trick I use to find the error: I connect via SSH and then, once in the app directory, I try to run rails console. It usually fails, but it normally shows you the error you're making. This little trick has saved my life several times. Hope it helps!
I have a Rails 4.2 app running on Heroku. Occasionally there is an issue that causes most incoming requests to get a server error. For example, there could be a memory leak or a max database connection issue. How can I set up a script or service to automatically restart the server when it detects errors?
I think this service could ping the app every few minutes and if it detects an error, it should confirm there's really a problem and then run heroku restart. How could this be set up?
After Googling this topic, I came across Neptune.io, which seems to provide a useful service for this task.
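If you'd rather roll your own, here is a minimal sketch run from cron every few minutes (it assumes the Heroku CLI is installed and authenticated; the URL and app name are placeholders):

#!/bin/sh
# hypothetical watchdog: ping the app, confirm the failure, then restart
URL="https://myapp.herokuapp.com/"   # placeholder
APP="myapp"                          # placeholder Heroku app name

check() { curl -sf -o /dev/null --max-time 10 "$URL"; }

if ! check; then
  sleep 30                 # confirm it's really down and not a blip
  check || heroku restart --app "$APP"
fi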
I'm trying to get a Bitnami Rails stack running with Nginx and 5 Thin app servers.
I have the Thin app servers running OK and I've got Nginx started and it's connected to the 5 Thin servers.
But something is returning "The service is not available. Please try again later." HTML when I access my app from a browser, and I don't know where the code is that's producing that message.
I have the Nginx server listening on port 80.
This is my nginx.conf file: https://dl.dropbox.com/u/35302780/nginx.conf
Thanks for the help!
Your config seems fine, except for those two if statements: if is evil inside a location block. You may want to fix that.
It also seems that you are using a third-party upstream module. Try uninstalling that module and using the standard upstream module.
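For reference, a minimal sketch of the standard setup (ports and paths are placeholders; try_files replaces the if (-f ...) checks):

# standard upstream block pointing at the 5 Thin servers
upstream thin_cluster {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    server 127.0.0.1:3004;
}

server {
    listen 80;
    root /opt/bitnami/apps/myapp/public;   # placeholder document root

    location / {
        # serve the file if it exists on disk, otherwise hand off to Thin
        try_files $uri $uri/index.html @thin;
    }

    location @thin {
        proxy_pass http://thin_cluster;
    }
}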
At work we're running some high-traffic sites in Rails. We often get a problem with the following being spammed in the nginx error log:
2011/05/24 11:20:08 [error] 90248#0: *468577825 connect() to unix:/app_path/production/shared/system/unicorn.sock failed (61: Connection refused) while connecting to upstream
Our setup is nginx on the frontend server (load balancing), and unicorn on our 4 app servers. Each unicorn is running with 8 workers. The setup is very similar to the one GitHub uses.
Most of our content is cached; when the request hits nginx, it looks for the page in memcached and serves it if it can find it - otherwise the request goes to Rails.
I can solve the above issue - sometimes - by doing a pkill of the unicorn processes on the servers, followed by:
cap production unicorn:check (removing all the pids)
cap production unicorn:start
Do you guys have any clue as to how I can debug this issue? We don't have any significantly high load on our database server when these problems occur.
Something killed your unicorn process on one of the servers, or it timed out. Or you have an old app server in your upstream app_server { } block that is no longer valid. Nginx will retry it from time to time. The default is to retry another upstream if it gets a connection error, so hopefully your clients didn't notice anything.
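That retry behavior comes from proxy_next_upstream; here is a sketch making the default explicit (the socket path is taken from the error above, the rest are placeholders):

upstream app_server {
    server unix:/app_path/production/shared/system/unicorn.sock fail_timeout=0;
}

server {
    location / {
        # on a connection error or timeout, nginx tries the next
        # upstream server; "error timeout" is the default value
        proxy_next_upstream error timeout;
        proxy_pass http://app_server;
    }
}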
I don't think this is an nginx issue for me; restarting nginx didn't help. It seems to be gunicorn... A quick and dirty way to avoid this is to recycle the gunicorn instances when the system is not being used, say at 1 AM, if that is an acceptable maintenance window. I run gunicorn as a service that will come back up if killed, so a pkill script takes care of the recycle/respawn:
# Upstart job: keep gunicorn running and respawn it if it is killed
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 10 5    # give up if it respawns more than 10 times in 5 seconds
exec /var/web/proj/server.sh
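The recycle itself can then be a cron entry (a sketch; upstart brings the service right back up):

# crontab: kill gunicorn nightly at 1 AM and let the respawn rule restart it
0 1 * * * /usr/bin/pkill gunicorn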
I am starting to wonder if this is at all related to memory allocation. I have MongoDB running on the same system and it reserves all the memory for itself but it is supposed to yield if other applications require more memory.
Other things worth trying are getting rid of eventlet or other dependent modules when running gunicorn. uWSGI can also be used as an alternative to gunicorn.
Background
I'm running a Ruby on Rails application that has to serve a lot of static files as well.
My setup currently is:
Debian Linux Lenny 5.0
Apache 2.2.9
Passenger 2.2.10
The problem
Everything runs fine. I see Apache processes spinning up, Passenger instances get created, and everything works fast and snappy.
Then, after some time, Apache does not respond to requests any more. Clients do get a connection and are "waiting for a response", but none ever comes.
I cannot manually reproduce this problem. Sometimes it occurs a few hours after a restart, other times it takes a few days to happen. Here's what I found:
Apache processes are up; Passenger is there, but it does not have any instances spun up (probably because instances die after a period of inactivity)
There are no error messages or problems in /var/log/syslog or /var/log/messages, nothing in Apache's access and error logs, and nothing in my Rails production log. Nothing.
When I stop and start apache everything is back to normal.
Does anyone have any clues as to what's happening here? And how can it be resolved?
Due to an enormous load on static files, we decided to host the static files on a separate server (later Amazon S3 + CloudFront) for performance reasons.
My current guess is that Apache couldn't cope with the large number of requests for static files while also running Passenger. The current setup is Nginx + Unicorn for the Rails application and S3 + CloudFront for the static files.
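For reference, a minimal sketch of that kind of Nginx + Unicorn setup (the socket path and names are placeholders):

upstream unicorn {
    server unix:/var/www/myapp/shared/unicorn.sock fail_timeout=0;  # placeholder
}

server {
    listen 80;
    # static assets live on S3/CloudFront, so nginx only proxies
    # dynamic requests through to the Unicorn workers
    location / {
        proxy_set_header Host $host;
        proxy_pass http://unicorn;
    }
}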