Precompiled assets not loading in Rails app deployed to AWS Elastic Beanstalk

This is a question about deploying a Rails application on AWS Elastic Beanstalk. When I run eb logs in the console, I get the following error in /var/log/nginx/error.log:
2017/06/04 06:02:08 [error] 31759#0:
*1 open() "/var/app/current/public/assets/trumbowyg.min.css" failed
(2: No such file or directory), client: 172.31.79.121, server: _,
request: "GET /assets/trumbowyg.min.css HTTP/1.1",
host: "kanttly-dev.kfzi8ynhke.us-east-1.elasticbeanstalk.com",
referrer: "http://kanttly-dev.kfzi8ynhke.us-east-1.elasticbeanstalk.com/"
However, when I eb ssh into the instance and run cd /var/app/current/public/assets/ && ls, I can see the precompiled file in the directory:
trumbowyg.min-65157a3a7fa7c31aa4e2b9e7036c1e389339f4f7964eece797770708df9d2ca1.css
I would be glad if anyone could explain what the problem is, since the asset (trumbowyg.min.css) is already precompiled, and how I can get the precompiled asset to load. Thank you!
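Note that the log shows the browser requesting the bare filename (/assets/trumbowyg.min.css) while only the digested file exists on disk, which usually means the stylesheet is referenced by a hardcoded path rather than through the Rails asset helpers. A minimal sketch of the helper-based reference, assuming the layout is app/views/layouts/application.html.erb and the file sits under app/assets or vendor/assets (both assumptions):

<%# app/views/layouts/application.html.erb %>
<%# The helper emits the digested path that actually exists in public/assets, %>
<%# instead of the bare filename nginx is failing to find. %>
<%= stylesheet_link_tag "trumbowyg.min", media: "all" %>

If instead some third-party code requests the undigested name directly, one workaround is to keep a plain copy of the file under public/ so nginx can serve it as-is.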

Related

Rails app migrating to AWS Elastic Beanstalk :: Bad Gateway (502)

So I'm migrating from Heroku to AWS Elastic Beanstalk and testing the waters. I'm following this documentation:
AWS Docs :: Deploy Rails app to AWS
However, after following the documentation I keep receiving a 502 Bad Gateway error.
Here are the specs of my app:
Rails 4.1.8
Ruby 2.1.7
Server: Puma
So I checked /var/log/nginx/error.log and here is what I see:
2015/11/24 06:44:12 [crit] 2689#0: *4719 connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream, client: 172.31.13.129, server: _, request: "GET / HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/", host: "my-app-env-mympay5afd.elasticbeanstalk.com"
From this AWS Forum thread it appears as though Puma is not starting correctly.
So the three log files that I have taken a look at are:
/var/log/eb-activity.log
/var/log/eb-commandprocessor.log
/var/log/eb-version-deployment.log
and none of them seem to indicate any errors except for the "secret_key_base" error, which I fixed (using the eb setenv SECRET_KEY_BASE=[some_special_key] command).
One thing that could hint at the source of the issue is /var/log/nginx/rotated/error.log1448330461.gz has the following content
2015/11/24 01:06:55 [warn] 2680#0: duplicate MIME type "text/html" in /etc/nginx/nginx.conf:39
2015/11/24 01:06:55 [warn] 2680#0: conflicting server name "localhost" on 0.0.0.0:80, ignored
But those seem to be warnings rather than showstoppers.
Are there any other files that I should be taking a look at?
As another point of reference, I've looked at this SO Post which would seem to imply that I need to enable SSL in order for all of this to work.
Thanks in advance!
Got it.
In my production.rb I had a force_ssl setting, but I hadn't set up SSL yet since I was just starting out.
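For reference, a minimal sketch of the setting in question (the comment and surrounding context are mine, not from the original config):

# config/environments/production.rb
# With no SSL terminator set up, force_ssl redirects every request (including
# the load balancer's health check) to https, which surfaces as 502s.
config.force_ssl = false # was true; disable until SSL is actually configured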

AWS Beanstalk - Passenger Standalone not serving web pages after Rails 4.2.1 migration

My Rails 3.2.21 app was running fine on AWS Beanstalk under Passenger Standalone 4.0.53. I migrated the app to Rails 4.2.1 and got it passing all tests on my local development machine (Ubuntu, WEBrick). I deployed it to Beanstalk (aws.push), the deploy succeeds (copied from /ondeck to /current) and: nothing. I browse to the site and see a blank page. No 404 error, nothing.
On the old version, I had precompiled assets locally. This time, I let Beanstalk run bundle exec rake assets:precompile (as the webapp user). I see in eb-activity.log that this succeeded, and I see those assets in /var/app/current/public/assets. But this isn't about not serving public assets; it's about not serving anything.
So yes, there is a /public directory as required by Passenger. There is also a /tmp directory. And here's the config.ru that Rails created:
require ::File.expand_path('../config/environment', __FILE__)
run Rails.application
Also in eb-activity.log, I see this message that seems to indicate Passenger is working:
+ service passenger restart
=============== Phusion Passenger Standalone web server started ===============
PID file: /var/app/support/pids/passenger.pid
Log file: /var/app/support/logs/passenger.log
Environment: production
Accessible via: http://0.0.0.0/
Serving in the background as a daemon.
But when I run curl -o - http://0.0.0.0 from the console, it just drops to the next line; no headers or other web content are returned.
I can run rails c to start a Rails console on the AWS machine. Works fine, has database access.
I've re-run eb init on my development machine to make sure it connects correctly, although the git aws.push has always worked to upload a new version.
I terminated the EC2 instance and let the load balancer start a new instance. That built successfully. The first time it came up (and only then), I saw some error messages in /var/app/support/logs/passenger.log:
[ 2015-06-17 01:03:54.8281 2556/7fd8941b8740 agents/Watchdog/Main.cpp:538 ]: Options: { [redacted] }
[ 2015-06-17 01:03:55.2388 2559/7ff254abe740 agents/HelperAgent/Main.cpp:650 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.2555/generation-0/request
[ 2015-06-17 01:03:55.9207 2567/7fcb54570740 agents/LoggingAgent/Main.cpp:321 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.2555/generation-0/logging
[ 2015-06-17 01:03:55.9209 2556/7fd8941b8740 agents/Watchdog/Main.cpp:728 ]: All Phusion Passenger agents started!
2015/06/17 01:03:57 [error] 2575#0: *3 "/var/app/current/public/index.html" is not found (2: No such file or directory), client: 127.0.0.1, server: _, request: "HEAD / HTTP/1.1", host: "0.0.0.0"
2015/06/17 01:04:02 [error] 2575#0: *4 "/var/app/current/public/app_check/index.html" is not found (2: No such file or directory), client: 172.31.21.97, server: _, request: "GET /app_check/ HTTP/1.1", host: "172.31.25.47"
[ 2015-06-17 01:06:10.4000 21317/7f433c999740 agents/Watchdog/Main.cpp:538 ]: Options: { [redacted] }
[ 2015-06-17 01:06:10.4047 21320/7f414cdd5740 agents/HelperAgent/Main.cpp:650 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.21316/generation-0/request
[ 2015-06-17 01:06:10.4090 21325/7f87c44f2740 agents/LoggingAgent/Main.cpp:321 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.21316/generation-0/logging
[ 2015-06-17 01:06:10.4092 21317/7f433c999740 agents/Watchdog/Main.cpp:728 ]: All Phusion Passenger agents started!
Usually, passenger.log is empty, even after I try to access the site.
I just tried setting the log level to DEBUG. After Passenger restarted, it printed this in passenger.log:
[ 2015-06-18 17:52:28.3921 631/7f42fc9ce700 Pool2/SmartSpawner.h:298 ]: Preloader for /var/app/current started on PID 666, listening on unix:/tmp/passenger.1.0.627/generation-0/backends/preloader.p23dhh
Why isn't Passenger serving web pages? How can I troubleshoot this?
Update 6/18/2015
Tried pre-compiling assets on development machine and then deploying. Didn't help.
Update 6/19/2015
I now have a Rails 4.2.1 test app successfully running under the same Beanstalk environment (64bit Amazon Linux 2014.09 v1.1.0 running Ruby 2.0 - Passenger Standalone). I see that on the test app, there are four Passenger processes:
PassengerHelper (running as webapp)
PassengerLogging (running as webapp)
PassengerWatchdog (running as root)
PassengerHelper (running as root)
The pid returned by service passenger status corresponds to the PassengerHelper running as root.
On the failing live app, there is only one Passenger process:
PassengerHelper (running as webapp)
The pid returned by service passenger status does not correspond to an active process.
So apparently, three of the four Passenger processes are crashing after they start. So far I have not been able to find any corresponding error logs, so I don't know why they crash.
I finally figured out how to increase the logging level for Passenger Standalone (blogged here). From the log, I could see that the web server was responding to the Beanstalk health check with 301 redirects. That meant that the load balancer thought the app was dead, so it was sending 503 errors back to the browser, which displayed a blank page.
The 301 redirect tipped me to the force_ssl configuration. I had it configured as follows in production.rb:
config.force_ssl = true
config.ssl_options = { exclude: proc { |env| env['PATH_INFO'].start_with?('/app_check') } }
The second line, following this post, is supposed to allow the health check (which only works over http) to bypass force_ssl. However, the exclude option has been removed in Rails 4.2 per this commit. Without the exclude, the health check failed and the load balancer wouldn't serve the app.
The default Elastic Beanstalk behavior, if you don't specify a health check URL, is to simply ping the server on port 22. However I prefer that the health check actually confirm that the web server is working by loading a web page.
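For reference, the health check URL can be pointed at a real page through an .ebextensions option; a sketch, assuming the health check route is /app_check/ as in the logs above (the file name is arbitrary):

# .ebextensions/healthcheck.config
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /app_check/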
I've implemented this workaround:
Remove config.force_ssl and config.ssl_options from production.rb.
Add the following to ApplicationController (app/controllers/application_controller.rb; force_ssl is a controller-level method), assuming that the health check is served by StaticPagesController#health_check:
force_ssl if: :require_ssl? # see private method below for conditions

private

def require_ssl?
  Rails.env.production? && !(controller_name == "static_pages" && action_name == "health_check")
end
I may still have a Passenger or Nginx issue since I still only have one Passenger process running, and its PID doesn't match the PID from service passenger status. But at least the web server is serving my site again.

Rails file upload issue due to Phusion Passenger tmp folder rotation

I have a problem with a Rails app running on a newly set up server. The app runs on Ubuntu/nginx/Passenger.
OS: Ubuntu 12.04.2 LTS
Web server: nginx 1.5.4
Passenger: 4.0.24
When uploading larger files through the Rails app, the app throws a "502 Bad Gateway" error as soon as the upload finishes. This only happens for bigger files.
The error log from nginx looks like this:
2014/10/07 14:03:21 [crit] 24511#0: *339 connect() to /home/.../tmp_passenger/passenger.1.0.14062/generation-15/request failed (2: No such file or directory) while connecting to upstream, client: xxx.xxx.xxx.xxx, server: www.xxx.com, request: "POST /path HTTP/1.1", upstream: "passenger:/home/.../tmp_passenger/passenger.1.0.14062/generation-15/request:", host: "www.example.com", referrer: "http://www.example.com/path"
Now, looking at /home/.../tmp_passenger/passenger.1.0.14062/generation-15, the generation-N folder seems to rotate exactly every 60 seconds, from .../generation-11 to .../generation-12 and so forth.
Obviously, any upload will fail if it takes longer than a minute, or if it hits this rotation by chance on a shorter upload, since Passenger uses this generation-N directory as its tmp storage.
I really don't know where this rotation of the generation-N folders happens or is configured, but I'm pretty sure that's what I need to fix this annoying issue.
From Googling I have the impression that either:
systemd-tmpfiles-clean.service is responsible for the rotation (but how would I change the 60-second interval in that case?), or
the respawning of Passenger somehow triggers a rotation of the generation-N folder, although the PID stays the same, so I don't think a full Passenger restart is responsible.
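A few hedged diagnostics for finding out what is touching the directory (standard Linux tooling; the elided /home/... path needs to be filled in with the real one):

# Look for tmpwatch/tmpreaper-style cleanup jobs
grep -r tmp /etc/cron.d /etc/cron.daily /etc/cron.hourly
# See which processes currently hold the tmp dir open
sudo lsof +D /home/.../tmp_passenger
# Audit writes and attribute changes to the directory (requires auditd)
sudo auditctl -w /home/.../tmp_passenger -p wa
sudo ausearch -f /home/.../tmp_passenger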

Upload a file on server [internal error 500]

In Ruby on Rails I have written code to upload a file to Amazon (S3). When I run the code from localhost I am able to upload the file successfully, but when I try to upload it from Swagger I get an internal error 500. I checked the log file and found the following error:
2013/12/23 09:34:05 [crit] 1705#0: *315335 open() "/tmp/passenger-standalone.1627/client_body_temp/0000000007" failed (2: No such file or directory), client: 10.29.36.248, server: _, request: "POST /v1/models/GTAG2/modelfirmware.json?api_key=5rx2mR3muK1mCydYerw3 HTTP/1.1", host: "dev-api-3.elasticbeanstalk.com"
Can anyone tell me how to fix this bug? In my S3 account the folder and bucket are available.
Sounds like you may be encountering issues similar to one of the following:
Phusion-Passenger Issue #654 where something (a daemon) is deleting (cleaning) /tmp files while they are still in use.
Issue Uploading Files from Rails app hosted on Elastic Beanstalk
I'd check to make sure nothing is running on your system that could be deleting/cleaning files under /tmp while Passenger is running.
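On Amazon Linux (which Elastic Beanstalk runs), the usual suspect is the daily tmpwatch cron job. A hedged sketch of excluding Passenger's temp directories from it, assuming tmpwatch's -X/--exclude-pattern option (check the man page on your AMI; the pattern and retention are assumptions):

# /etc/cron.daily/tmpwatch -- add an exclude pattern so the cleaner
# skips Passenger's live temp directories under /tmp
/usr/sbin/tmpwatch -X '/tmp/passenger*' 10d /tmp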

rails 500 error no production log entry

I have installed a new Rails application on the same server as another application. The original is running fine, but the new application gives me the infamous "We're sorry, but something went wrong" page, and there is no entry in the production log. The last entries in the production log are from my migration using rake.
I have found that if I run cap deploy:cold it will work, but the next update then fails again with no sock file, and cap deploy:cold seems to be required on each update.
edit: If I run /etc/init.d/unicorn_taxidata restart I get "couldn't reload"; if I run it again immediately, it starts fine.
The app works in development. Environment is nginx, Unicorn, PostgreSQL, Rails 4.0.0, Ruby 2.0.0p195.
I have this error in my nginx error log:
[crit] 889#0: *65 connect() to unix:/tmp/unicorn.myapp.sock failed (2: No such file or directory) while connecting to upstream, client: 1.123.13.26, server: myapp.com.au, request: "GET /login HTTP/1.1", upstream: "http://unix:/tmp/unicorn.myapp.sock:/login"
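The log shows the socket living under /tmp, which tmp-cleanup daemons are known to purge (as in the Passenger questions above). A hedged sketch of keeping it elsewhere in the Unicorn config (all paths here are assumptions):

# config/unicorn.rb -- keep the socket and pid out of /tmp so cleanup
# daemons can't delete them between deploys
shared_dir = "/var/www/myapp/shared"
listen "#{shared_dir}/sockets/unicorn.myapp.sock", backlog: 64
pid "#{shared_dir}/pids/unicorn.pid"
stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stdout.log"

The nginx upstream would then need to point at the same socket path.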
