I have been handed a Ruby project that creates a document and serves it to the user. When I try to access the file in a local environment, it is delivered correctly (this is the code that does so):
filepath = Rails.root.join("public", @records.document.url)
send_file(filepath)
So I know the file is constructed correctly, and sending it to the user with send_file works, at least in a local environment.
But when it's deployed on the production server (running on Amazon EC2 with Ubuntu, deployed with Dokku) I get a 500 Internal Server Error:
ActionController::MissingFile (Cannot read file *path of the file*)
A few things I'm noticing: running find / -iname "*filename*" tells me the file is stored in both var/lib/docker/overlay2/*container_name*/merged/app/public/filename and var/lib/docker/overlay2/*container_name*/diff/app/public/filename, but the result of joining Rails.root with the filename is app/public/filename. Do I need to pass send_file the whole filepath?
I googled for a couple of hours, and it seems nginx has no access to the public folder because it's running on the host machine while the app is inside a container? How would I know if that is the case, and if so, how should I serve the file?
The person who originally wrote the code told me to use OpenURI.open_uri() but googling it doesn't seem to turn up anything applicable to the situation.
Nothing you're doing here actually makes sense - it sounds like you're just following misinformation down a bunch of rabbit holes.
The way this is supposed to work is that the files in /public - not /app/public - are served directly by the HTTP server (NGINX or Apache) in production, and by your Rails application in development (so you don't have to configure a local HTTP server). The /app directory is for your application code and uncompiled assets. Do not serve files from there - ever.
The /public directory is used for your compiled assets and for things like robots.txt, the default error pages, and various icons. Serving the files directly from your HTTP server is far more efficient than serving them through your Rails application. You can do a litmus test to see if serving static assets is working by sending curl -v YOUR_URL/robots.txt.
If this isn't working in production, you need to check your NGINX configuration. There is no shortage of guides on how to serve static files with NGINX and Docker.
Serving files with a Rails controller and send_data / send_file should only be done when it's actually needed:
The file is not a static file or something that can be compiled at deploy time.
You need to provide access control to the files with your application (see the sketch after this list).
You're proxying files from another source.
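For the access-control case, a minimal sketch of such a controller (the Document model, its file_path column, and the current_user helper are assumptions for illustration, not from the question):

class DocumentsController < ApplicationController
  def download
    # Scoping the lookup to current_user is what enforces access control here;
    # if the record doesn't belong to the user, find raises
    # ActiveRecord::RecordNotFound, which Rails renders as a 404 in production.
    document = current_user.documents.find(params[:id])
    send_file Rails.root.join("storage", document.file_path),
              type: "application/pdf",
              disposition: "attachment"
  end
end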
Related
I'm trying to deploy to production (on a local machine) a Rails 5.2 app which uses webpacker for asset management (I have totally replaced the asset pipeline).
Everything seems ok: as part of my deployment process I run the webpacker:compile task and both JS and CSS are compiled in the public/packs folder.
However, the assets aren't loaded by the app even though they are correctly linked.
Am I missing anything here?
I have tried to load other files in the /public folder (e.g. robots.txt) via the browser, but they are not available either. I get the "The page you were looking for doesn't exist." error message.
In production, Rails by default expects to be behind a reverse proxy server like nginx that will serve all static files from public more efficiently.
Also, for low loads, the built-in file server can be enabled as a quick fix, in production.rb:
config.public_file_server.enabled = true
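If you're on Rails 5 or later, the generated production.rb keys this off an environment variable, so the quick fix can be toggled per deployment without editing code:

# config/environments/production.rb (the generated default in Rails 5+)
config.public_file_server.enabled = ENV["RAILS_SERVE_STATIC_FILES"].present?

Then start the server with RAILS_SERVE_STATIC_FILES=1 rails s -e production.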
So we have resources in the grails-app/assets folder, e.g. JavaScript files, stylesheets, and other documents.
Some of these documents are user docs which need to be updatable in production mode. When you add something to this assets folder, the Grails app doesn't detect the change until the app is redeployed, which causes the folder to be reprocessed.
Is there any way to detect these changes in production systems, or an alternate location other than the assets folder where Grails would pick up this new/updated file without a redeployment?
In production the most advisable approach is to have an Apache HTTPD or an NGINX server as a front end where you put the static assets. In both cases you will need to configure a reverse proxy on NGINX or mod_jk (depending on your Java container).
You may also consider storing large assets in a repository like S3 (if the app will run on the Internet).
I have an action that generates a PDF file and saves it to /public/output.pdf.
When I set
config.serve_static_assets = false
this file can't be found.
What's wrong?
From the documentation:
"config.serve_static_assets configures Rails itself to serve static
assets. Defaults to true, but in the production environment is turned
off as the server software (e.g. Nginx or Apache) used to run the
application should serve static assets instead. Unlike the default
setting, set this to true when running (absolutely not recommended!) or
testing your app in production mode using WEBrick. Otherwise you won't
be able to use page caching, and requests for files that exist regularly
under the public directory will anyway hit your Rails app."
Which means that if you set this to false, Rails will not serve any assets from your public folder, as it is assumed that a front-end web server (Apache/nginx) will handle it. This lessens the load on Rails, as the front-end server is much, much more efficient at serving files directly.
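In the PDF case above, one way to sidestep the static file server entirely is to stream the generated file through the app with send_file. A minimal sketch (the action name, target path, and build_pdf helper are assumptions for illustration):

def output
  # Write the PDF somewhere non-public and stream it from the action,
  # so it is served by Rails regardless of config.serve_static_assets.
  pdf_path = Rails.root.join("tmp", "output.pdf")
  File.binwrite(pdf_path, build_pdf) # build_pdf is a hypothetical generator
  send_file pdf_path, type: "application/pdf", disposition: "inline"
end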
After testing, I came to this conclusion:
When using the command
rails s -e production
Rails will only serve the static files. Any other file created after you compile your assets will not be found.
To handle this, you need to run your application behind a web server like Apache, Nginx, or another; these web servers will serve these files for you.
This may look obvious, but not to a beginner.
I'm upgrading to Rails 3.1 and I need to have the /images directory be an alias to /assets. Is this possible? The reason being I don't want emails that I have already sent out to clients, which have direct links to files in /images, to break.
Is this possible at the web server level? I'm on nginx.
You can do this in nginx:
location /images {
    alias /usr/share/rails_app/public/assets/images;
}
Though I think the bigger problem will be when you run
rake assets:precompile
It will add an MD5 hash string to your image filenames. This hash string is added to force browsers to download changed images instead of using the browser cache. Since the names of the images will be different, it might make more sense to host the old images in a static directory with nginx.
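The hash is just an MD5 digest of the file contents appended to the filename, which is why every changed image gets a new name. An illustrative sketch (the path and resulting digest are placeholders):

require "digest/md5"

# precompile emits names like logo-908e25f4bf641868d8683022a5b62f54.png,
# where the digest is computed from the (processed) file contents:
digest = Digest::MD5.file("app/assets/images/logo.png").hexdigest
"logo-#{digest}.png"

For emails sent after the upgrade, using the standard asset helpers (image_tag, asset_path) in the templates inserts these digested paths automatically; only the already-sent /images links need the alias above.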
Using Rails 3.0.7 and git, deploying with capistrano. I'm using different machines as web and app servers. I cannot deploy the application code to the web server, only the static assets--basically the public/ folder.
This would seem common but no luck searching for a best practice.
Is anything built around capistrano to handle this case? Otherwise I'm thinking the solution would be to add tasks to create the structure, but scp the public directory from the app server.
So I assume there's a business reason you can't deploy the app to the other server?
If there isn't then just deploy the whole code
and configure your web server to just serve the public folder.
(in Apache/Passenger the configs would be exactly the same; you just wouldn't enable Passenger on the static server)
That is the only simple way to do it...
otherwise you're going to cause yourself a load of headaches.
Nevertheless I'm going to make up a way to solve this.
If you do need to deploy just the static code,
then I suggest you create two repositories:
the app (e.g. git@myserver:app.git)
the static files (e.g. git@myserver:static.git)
Now in your app, include git@myserver:static.git as a submodule mounted at public/
Having done this, you should search standard capistrano recipes for deploying with git submodules (in particular I guess you'll want to store a local cache of the submodules, update it, then git submodule init somehow with that)
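For the submodule fetch itself, Capistrano 2's git SCM has a flag you can set (the repository URL follows the naming above):

# config/deploy.rb (Capistrano 2)
set :scm, :git
set :repository, "git@myserver:app.git"
set :git_enable_submodules, 1 # also checks out public/, the static.git submodule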
You can then have two capistrano recipes
I suggest you check out capistrano multi-stage... defining app and static as two stages
You can therefore just specify git@myserver:app.git as the repository for "app"
and git@myserver:static.git as the repository for "static"
then a simple cap app deploy:migrations && cap static deploy should do it.
but remember these will not be simultaneous.
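A sketch of what those two stage files might look like (Capistrano 2 multistage assumed; hostnames are placeholders):

# config/deploy.rb
set :stages, %w(app static)
require "capistrano/ext/multistage"

# config/deploy/app.rb
set :repository, "git@myserver:app.git"
role :app, "appserver.example.com"

# config/deploy/static.rb
set :repository, "git@myserver:static.git"
role :web, "webserver.example.com"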
I too wish there were more established practices published. We've done ours based on the Django book, which recommends making your public app directory a networked directory.
This is much better, as scp only works if your public directory is static. Many apps will write things to the public directory, e.g. images generated on the fly. These files also need to be copied to the web server immediately.
I recommend using NFS, a Samba share, or similar, so that your public directory is actually just a networked folder; when you write to it, it's like writing to the remote folder.
To integrate it into capistrano we do the following:
create this networked folder in shared/public
After deploy:update_code (sketched below):
* move the content from current/public to shared/public (overwriting files as needed)
* remove or rename current/public, then symlink current/public to shared/public
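A minimal sketch of that hook in Capistrano 2 syntax (paths and roles are assumptions):

after "deploy:update_code", "deploy:link_shared_public"

namespace :deploy do
  task :link_shared_public, :roles => :app do
    # copy anything the new release shipped in public/ into the shared folder
    run "cp -rf #{release_path}/public/. #{shared_path}/public/"
    # replace the release's public/ with a symlink to the shared networked folder
    run "rm -rf #{release_path}/public && ln -s #{shared_path}/public #{release_path}/public"
  end
end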
Downsides:
* doesn't remove old files (like someone earlier said)
* no real rollback option (apart from redeploying older version)
The best approach I've come up with is in fact to scp the files over to the web server.