Configuring Ruby on Rails to run under Apache on Windows 64-bit - ruby-on-rails

I've inherited the maintenance and development of a Ruby on Rails site that runs on Ruby 1.8.7 and Rails 2.3.2. While we try to deploy to Linux servers using Passenger as much as possible, my boss has told me that we must be able to deploy to Windows at times for our clients.
I have installed my Rails app fine and it works perfectly when I test with the WEBrick server. I have also installed Apache 2.2, which is serving up generic HTML pages perfectly. However, when I try to run my Rails app under Apache I get a 503 Service Temporarily Unavailable error.
There is no error listed in the Apache logs, but when I check the RoR logs I do see
127.0.0.1 - - [09/Aug/2012:10:31:02 +1000] "GET / HTTP/1.1" 503 323
127.0.0.1 - - [09/Aug/2012:10:31:02 +1000] "GET /favicon.ico HTTP/1.1" 503 323
and
[Thu Aug 09 10:31:06 2012] [error] proxy: BALANCER: (balancer://mmapscluster). All workers are in error state
[Thu Aug 09 10:31:07 2012] [error] proxy: BALANCER: (balancer://mmapscluster). All workers are in error state
As you may have guessed, we are running Mongrel instances behind an Apache proxy balancer for performance reasons.
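The proxy section of the httpd.conf follows the usual mod_proxy_balancer pattern, roughly like this sketch (the Mongrel ports and member count shown here are placeholders, not my exact values):
# mod_proxy_balancer pointing Apache at the local Mongrel instances
<Proxy balancer://mmapscluster>
  BalancerMember http://127.0.0.1:3000
  BalancerMember http://127.0.0.1:3001
</Proxy>
ProxyPass / balancer://mmapscluster/
ProxyPassReverse / balancer://mmapscluster/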
When I removed all of the proxying from the Apache configuration (incidentally, restarting Apache is not enough to pick up the proxy config changes - I had to reboot the entire machine), I got a seemingly endless list of the following Apache errors:
[notice] Parent: Created child process 1944
[notice] Child 1944: Child process is running
[notice] Parent: child process exited with status 255 -- Restarting.
[notice] Apache/2.2.15 (Win32) configured -- resuming normal operations
I have gone round and round on this and I've checked my config against a working installation that we have but I cannot see any differences in the setup. The only real difference is that the working one is running on a 32-bit machine and the failing one is running on a 64-bit machine.
Could this be the problem? Has anybody else had any similar types of problems running Apache on 64-bit machines?

Related

Why does my Amazon EC2 server go down every day for a few minutes? Errors 503 and 502. It's a Rails app

I don't know which error causes the problem. By the time I look, the server is already down with a 503 error. In the Google Chrome log, I have the following error:
503 Service Unavailable: Back-end server is at capacity
While the server is down, I can't connect via SSH to look at the error log. After a few minutes the server works again and I can get to the nginx error log.
In the log, I have common errors like:
ActiveRecord::RecordNotFound (Couldn't find Attachment with 'id'=4240)
I know how to solve those, and I think those errors are not the problem.
But I have this error too:
Sending 502 response: application did not send a complete response
Process (pid=31880, group=/home/ubuntu/........./current/public) no longer exists! Detaching it from the pool.
I think this is the problem, but the causes and solutions I found on the internet do not appear to fix it.
This problem started after I created a Load Balancer and began using HTTPS.
Before that, this problem never happened.
About my server and app:
Amazon Ec2 instance;
Using Classic Load Balancer (with Amazon Certificate Manager in https port);
Using Route 53;
Not using an Elastic IP;
OS: Ubuntu 14.04.2 LTS
ruby -v: 2.2.2p95 (2015-04-13 revision 50295) [x86_64-linux]
rails -v: Rails 4.2.3
nginx -v: nginx/1.8.0
passenger -v: Phusion Passenger version 5.0.10
Load Balancer Health Check is set up like this:
Ping Target
HTTP:80/index.html
Timeout 5 seconds
Interval 30 seconds
Unhealthy threshold 5
Healthy threshold 5
Health Check Information:
I took this screenshot from the Load Balancer MONITORING tab. It shows the Unhealthy Hosts (Count). Why was my host unhealthy?
SOLUTION
In my case, the problem was the assets precompile task.
My app has a lot of assets, and when I deployed with Capistrano, precompiling them exhausted the server.
On top of that, the assets were sometimes only precompiled after the deploy, during page load. That task is very slow and returns 502, 503 and 504 errors.
It also brings the servers down, because CPU utilization goes to 100% and the average latency climbs as well.
To solve it, I removed the assets precompile task from Capistrano. I precompile the assets on my local PC and commit them all to the master branch in Git. When I run cap production deploy, the precompile task does not run. More details in this post.
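To make the change concrete, here is a minimal sketch assuming Capistrano 3 with the capistrano-rails gem (the exact requires in your Capfile may differ); simply not loading the gem's assets tasks means cap production deploy never runs assets:precompile on the server:
# Capfile (sketch)
require 'capistrano/setup'
require 'capistrano/deploy'
require 'capistrano/bundler'
require 'capistrano/rails/migrations'
# require 'capistrano/rails/assets'   # intentionally left out: assets are precompiled
#                                     # locally and committed, so the server never runs
#                                     # assets:precompile during a deploy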
I also made some changes to my Load Balancer Health Check settings:
Ping Target HTTP:80/elb/index.html (I created this folder and file inside the public folder; see the sketch after this list)
Timeout 5 seconds
Interval 30 seconds
Unhealthy threshold 2
Healthy threshold 10
Idle timeout: 65 seconds (equal to my nginx timeout)
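Creating the new ping target is just a static file under public/, something like this sketch (the file contents are arbitrary; nginx serves it without touching the Rails app):
mkdir -p public/elb
echo "OK" > public/elb/index.html   # static health-check page served straight from public/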
With this, I hope the assets precompile task never runs on the server again.

Apache 2.4.18 denies Rails app (with Apache & Passenger)

I have an Ubuntu server which runs some Rails apps with Apache & Phusion Passenger.
Every Rails app stopped working when I upgraded Apache to 2.4.18.
The server always responds with a 403 status (Forbidden), but the public/ directory is not private (I can access any .html, .jpg, etc. file inside it over HTTP).
In Apache's error.log there are entries like:
No matching DirectoryIndex (none) found, and server-generated directory index forbidden by Options directive
Does anyone have any idea?
UPDATE:
I managed to solve this issue by reinstalling (purge, then install) Apache 2.4 and Passenger (and all their dependencies). It now runs fine with Apache 2.4.18 and Passenger 5.
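For anyone hitting the same 403 after moving to Apache 2.4: the vhost for a Passenger app typically needs the 2.4-style access directive on the public/ directory. A minimal sketch, with a placeholder server name and paths:
<VirtualHost *:80>
  # server name and paths below are placeholders
  ServerName example.com
  # DocumentRoot must point at the Rails app's public/ directory for Passenger
  DocumentRoot /var/www/myapp/public
  <Directory /var/www/myapp/public>
    # Apache 2.4 replacement for the old "Allow from all"
    Require all granted
    Options -MultiViews
  </Directory>
</VirtualHost>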

Problems connecting to rails server

I'm working at getting a very basic ruby/rails environment up and running, and I am presently unable to connect to it.
I'm working on a MacBook (fully up to date), and I've got an Ubuntu VM running that I'm using as my test server.
I have port forwarding set up for connecting to the VM (hypervisor = VirtualBox); the equivalent VBoxManage commands are sketched after the list below:
8083 to 80 (http)
9307 to 3306 (mysql)
2223 to 22 (ssh)
3010 to 3000 (WEBrick)
(Screenshot: VM port-forwarding setup screen)
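# same forwards as the GUI rules above; "ubuntu-test" is a placeholder VM name
# (use "VBoxManage controlvm ... natpf1 ..." instead if the VM is already running)
VBoxManage modifyvm "ubuntu-test" --natpf1 "http,tcp,,8083,,80"
VBoxManage modifyvm "ubuntu-test" --natpf1 "mysql,tcp,,9307,,3306"
VBoxManage modifyvm "ubuntu-test" --natpf1 "ssh,tcp,,2223,,22"
VBoxManage modifyvm "ubuntu-test" --natpf1 "webrick,tcp,,3010,,3000"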
I have not set up a firewall on the server.
Using this, I can ssh in, and I can hit "non-rails" sites just fine via Apache. What I can't do is get a basic "hello-world" screen up for the rails environment. I CAN get there if I cut out the VM and start WEBrick ("rails s") directly from the Mac environment and then go to localhost's port 3000, but when I attempt the same thing on the VM and then try to hit port 3010 the same way, I get back a "no data received" response.
WEBrick appears to be running just fine
myserver$ rails s
=> Booting WEBrick
=> Rails 4.2.5.1 application starting in development on http://localhost:3000
=> Run `rails server -h` for more startup options
=> Ctrl-C to shutdown server
[2016-02-08 09:01:30] INFO WEBrick 1.3.1
[2016-02-08 09:01:30] INFO ruby 2.1.5 (2014-11-13) [x86_64-linux-gnu]
[2016-02-08 09:01:30] INFO WEBrick::HTTPServer#start: pid=3756 port=3000
Turning Apache on/off does not change anything beyond changing the accessibility of the content served by Apache.
I've tried monitoring "log/development.log" in my rails application, and I've also poked around to see if I noticed any clues in any of the various log files under "/var/log", but nothing's jumping out at me.
I am not doing anything fancy here. I just want to get to the "Welcome Aboard!" screen for rails. I thought that getting started would be as simple as setting up the port-forwarding, creating the rails-application, and hitting it from my browser; it's been a few hours now and I'm getting nowhere with this. Apologies if this has a super easy solution that I'm just not seeing.
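One possible cause, judging from the WEBrick banner above: Rails 4.2 binds the development server to localhost only, so a connection forwarded to the VM's NAT interface never reaches it. A quick way to rule that out is to bind WEBrick to all interfaces (a sketch, run inside the VM):
rails s -b 0.0.0.0
# then browse to http://localhost:3010 on the Mac, which forwards to guest port 3000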

Strange issue with unicorn and nginx caused 502 error

We have a Ruby on Rails application running on a VPS. Last night nginx went down and responded with "502 Bad Gateway". The nginx error log contained lots of the following messages:
2013/10/02 00:01:47 [error] 1136#0: *1 connect() to
unix:/app_directory/shared/sockets/unicorn.sock failed (111:
Connection refused) while connecting to upstream, client:
5.10.83.46, server: www.website.com, request: "GET /resource/206 HTTP/1.1", upstream:
"http://unix:/app_directory/shared/sockets/unicorn.sock:/resource/206",
host: "www.website.com"
These errors started suddenly; the previous error messages were from 5 days earlier.
So the problem was in the unicorn server. I opened the unicorn error log and found only some info messages that had nothing to do with the problem. The production log was useless too.
I tried to restart the server via service nginx restart, but it didn't help. There were also no stray unicorn processes hanging around.
The problem was solved when I redeployed the application. That is strange, because I had deployed the same version of the application 10 hours before the server went down.
I'm looking for any suggestions on how to prevent such 'magic' cases in the future. I appreciate any help you can provide!
It looks like your unicorn server wasn't running when nginx tried to access it.
This can be caused by a VPS restart, an exception in the unicorn process, or the unicorn process being killed due to low free memory (IMHO a VPS restart is the most likely reason).
Check unicorn with:
ps aux | grep unicorn
You can also check the server uptime with:
uptime
Then you can:
add a script that starts unicorn on VPS boot
add it as a service
run a monitoring process such as monit (a sketch follows below)
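A minimal monit sketch for that last option; the pid file location, app path, and memory limit are assumptions modelled on the socket path above, not taken from the original setup:
# /etc/monit/conf.d/unicorn (sketch)
check process unicorn with pidfile /app_directory/shared/pids/unicorn.pid
  start program = "/bin/bash -lc 'cd /app_directory/current && bundle exec unicorn -c config/unicorn.rb -E production -D'"
  stop program = "/bin/bash -lc 'kill -QUIT $(cat /app_directory/shared/pids/unicorn.pid)'"
  if totalmem > 300 MB for 2 cycles then restart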

Reducing Memory Usage in Spree

I checked my applications, and they're using a huge amount of memory, which is crashing my server.
Here's my ps:
RSS COMMAND
1560 sshd: shadyfront#pts/0
1904 -bash
1712 PassengerNginxHelperServer /home/shadyfront/webapps/truejersey/gems/gems/p
8540 Passenger spawn server
612 nginx: master process /home/shadyfront/webapps/truejersey/nginx/sbin/nginx
1368 nginx: worker process
94796 Rails: /home/shadyfront/webapps/truejersey/True-Jersey
1580 PassengerNginxHelperServer /home/shadyfront/webapps/age_of_revolt/gems/gem
8152 Passenger spawn server
548 nginx: master process /home/shadyfront/webapps/age_of_revolt/nginx/sbin/ng
1240 nginx: worker process
92196 Rack: /home/shadyfront/webapps/age_of_revolt/Age-of-Revolt
904 ps -u shadyfront -o rss,command
Is this abnormally large for an e-commerce application?
If you are on Linux, you can use
ulimit
http://ss64.com/bash/ulimit.html
Not sure why it is eating your memory, though.
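For example, a sketch that caps the address space of processes started from the current shell (the limit value here is arbitrary):
ulimit -v 1048576   # limit virtual memory to roughly 1 GB (the value is in kilobytes)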
If you're using a 64-bit OS then it's fairly normal.
RSS COMMAND
89824 Rack: /var/www/vhosts/zmdev.net/zmdev # RefineryCMS on Passenger
148216 thin server (0.0.0.0:5000) # Redmine
238856 thin server (0.0.0.0:3000) # Spree after a couple of weeks
140260 thin server (0.0.0.0:3000) # Spree after a fresh reboot
All of these are on 64-bit OSes; there are significant memory reductions using a 32-bit OS.
Here's the exact same Spree application running under WEBrick in my dev environment on 32-bit Ubuntu:
RSS COMMAND
58904 /home/chris/.rvm/rubies/ruby-1.9.2-p180/bin/ruby script/rails s
