Accessing Rails Active Storage: "No route matches" error - ruby-on-rails

I am trying to host a Rails server as a backend.
Nginx redirects to localhost:3000 when the route starts with /api.
In my case, 'etl.robust.best/api' goes to the host's localhost:3000, where Rails is hosted.
The problem is that I cannot access images from Active Storage.
Instead I get a "No route matches" error.
config/storage.yml
I can access images from Active Storage when testing on my computer.
How do I fix this?
This is my development.rb file.
This is the Nginx config:

You should check what your Rails.root is. You can do that in a rails console launched inside your app. Then check config/storage.yml and see whether the path defined there for local storage matches the actual path. For example, this would point to /storage inside your app:
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>
More info in the Rails Active Storage guide.
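A quick sanity check from that console (standard Active Storage calls; the example output is illustrative only):
Rails.root
# => #<Pathname:/var/www/my_app>   (should match where the app actually lives on the server)

Rails.application.config.active_storage.service
# => :local   (the service key your environment file points at)

ActiveStorage::Blob.service
# => the configured Disk service; its root should resolve to <Rails.root>/storage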

Related

Rails Active Storage KeyError: Missing configuration for the Active Storage service

UPDATE:
If I run the same action twice in the console, it fails the first time and WORKS the second time!
This is happening to us on an upgrade from Rails 5.2 to 7.0.3.
Everything was working before and I know the YAML files are correct.
Any Active Storage operation gives us:
KeyError: Missing configuration for the  Active Storage service. Configurations available for the local, test, amazon, and amazonDemo services.
storage.yml:
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>
test:
  service: Disk
  root: <%= Rails.root.join("tmp/storage") %>
In test.rb:
config.active_storage.service = :test
(We set those correctly in all of our environments... like I said, it was working before the upgrade.)
The thing that is interesting to me is the double space in the error message. It appears that the active storage service is not truly being set. I've looked at the Edge Guides and in every tutorial I can find. It doesn't feel like any other configurations should be necessary. I'm officially stumped.
Ruby: 2.7.6
Rails: 7.0.3
I did find one similar issue here:
https://github.com/rails/rails/issues/43145
Our blob fixtures did not have service_name set. It was complaining about the blobs having no service.
I had a similar issue after upgrading to 6.1. The new Active Storage update added a new column called service_name.
I added service_name: test to active_storage_blobs.yml in the fixtures folder and it worked.
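For real records created before the upgrade (rather than test fixtures), a one-off backfill along these lines should work; this is only a sketch using standard Active Storage APIs (Rails 6.1+), run from rails console or a data migration:
# Backfill the new service_name column on blobs created before the upgrade.
# ActiveStorage::Blob.service.name is the service key configured for this
# environment (e.g. :local) -- assumes Rails 6.1+ where services carry a name.
configured_service = ActiveStorage::Blob.service.name

ActiveStorage::Blob.unscoped
                   .where(service_name: nil)
                   .update_all(service_name: configured_service)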

How to fix ActionText image attachment link resulting in error 404

I am trying to run a very simple CMS for a page with Action Text on my own server.
In development it all works fine. The attachment is uploaded and I can see it after saving my page model.
When switching to production the upload still works and I can see the file on the local file system, but viewing the page shows a broken image tag.
The link for the image looks like
http://example.com/rails/active_storage/representations/SIGNED_ID/myimage.png
On my local production the link is:
http://localhost:3000/rails/active_storage/disk/gsid/myimage.png?content_type=image%2Fpng&disposition=inline%3B+filename%3D%22myimage.png%22%3B+filename%2A%3DUTF-8%27%27myimage.png
I have provided a secret_key_base, whitelisted my host, and done all the other things needed to get the application running in production.
I am using:
Unicorn
Nginx
Ruby 2.6.5
Rails 6.0.2.1
My storage service is "Disk", and all files and folders inside, including RAILS_ROOT, are owned by the user running the application.
storage.yml:
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>
routes.rb:
Rails.application.routes.draw do
  resources :pages, param: :seo_url, path: 'seite'
  get '/index', to: 'application#home', as: :home
  root to: 'application#home'
end
production.rb:
# Store uploaded files on the local file system (see config/storage.yml for options).
config.active_storage.service = :local
Does anyone have a clue why this does not work?
I was able to solve my problem. I had misconfigured Nginx: I pasted the wrong config snippet for static image delivery into the config.
I removed the lines from the Nginx config. Everything is working fine now.
This is actually so embarrassing!

Sitemap generation does not save file to storage

I'm just getting a sitemap going with the sitemap_generator gem and am having trouble generating a sitemap in production.
Running the rake command rake sitemap:refresh in development creates the sitemap.xml.gz file in the public folder. I navigate to localhost:3000/sitemap.xml.gz and it downloads the zipped file.
When I run it in production (Heroku-like command line with Dokku on a Digital Ocean VM) I get:
+ sitemap.xml.gz 6 links / 450 Bytes
Sitemap stats: 6 links / 1 sitemaps / 0m00s
Pinging with URL 'https://www.myapp.com/sitemap.xml.gz':
Successful ping of Google
Successful ping of Bing
It appears the file has been created, so I navigate to www.myapp.com/sitemap.xml.gz and get a 404 response.
The server says:
ActionController::RoutingError (No route matches [GET] "/sitemap.xml.gz"):
It appears that this request is hitting the Rails stack when it should be served by Nginx. I just checked to see if the file exists:
FileTest.exists?("public/sitemap.xml.gz")
It returns false, so it seems the sitemap is not actually saved to disk. Is there a possibility my file system is read-only right now? How could I test that?
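One way to test that from a production console (plain Ruby File calls, nothing app-specific):
public_dir = Rails.root.join("public")

File.writable?(public_dir)
# => false would point to a read-only (or ephemeral) public directory

# Or attempt an actual write -- on a read-only filesystem this raises Errno::EROFS:
File.write(public_dir.join("write_test.txt"), "ok")
File.delete(public_dir.join("write_test.txt"))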
With the new dokku docker-options plugin, you can mount persistent storage (a volume) from your host machine into your container.
First create a local directory in your host machine.
mkdir <path/to/dir>
Then add the following docker-options in dokku
dokku docker-options:add <app> deploy,run -v <path/to/host/dir>:<path/to/container/public/sub/dir>:rw
In your config/sitemap.rb file, add the following lines:
SitemapGenerator::Sitemap.public_path = 'public/sitemap/'
SitemapGenerator::Sitemap.sitemaps_path = 'sitemap/'
The sitemap:refresh rake task should then write into the sitemap subfolder within the public folder.
This would also allow sitemap_generator to ping the search engine with the right address to your sitemap.xml.gz file.
Feel free to give this a try.
I believe this is a Dokku-related "issue". Dokku uses Heroku buildpacks, and this yields a read-only file system, as on Heroku.
I'd be curious to know if there's a way to modify that behavior in Dokku (seems unlikely if using Heroku buildpacks), but that's a bit out of my league.
I think the best solution to this problem is the same as on Heroku - using Amazon S3.
The sitemap_generator gem has docs on how to set this up.
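As a sketch of that setup (S3Adapter ships with sitemap_generator but needs the fog-aws gem; the bucket and credential environment variable names below are placeholders):
# config/sitemap.rb
SitemapGenerator::Sitemap.default_host = "https://www.myapp.com"

# Upload the generated sitemaps to S3 instead of the (read-only) local filesystem.
SitemapGenerator::Sitemap.adapter = SitemapGenerator::S3Adapter.new(
  fog_provider:          "AWS",
  aws_access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
  aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
  fog_directory:         ENV["S3_BUCKET"],
  fog_region:            ENV["AWS_REGION"]
)

# Tell search engines where the sitemap actually lives.
SitemapGenerator::Sitemap.sitemaps_host = "https://#{ENV['S3_BUCKET']}.s3.amazonaws.com/"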

How should secret files be pushed to an EC2 Ruby on Rails application using amazon web services with their elastic beanstalk?

I add the files to a Git repository and push to GitHub, but I want to keep my secret files out of the repository. I'm deploying to AWS using:
git aws.push
The following files are in the .gitignore:
/config/database.yml
/config/initializers/omniauth.rb
/config/initializers/secret_token.rb
Following this link I attempted to add an S3 file to my deployment:
http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html
Quoting from that link:
Example Snippet
The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp:
sources:
  /etc/myapp: http://s3.amazonaws.com/mybucket/myobject
Following those directions I uploaded a file to an S3 bucket and added the following to a private.config file in the .ebextensions directory:
sources:
  /var/app/current/: https://s3.amazonaws.com/mybucket/config.tar.gz
That config.tar.gz file will extract to:
/config/database.yml
/config/initializers/omniauth.rb
/config/initializers/secret_token.rb
However, when the application is deployed, the config.tar.gz file on the S3 host is never copied or extracted. I still receive errors that the database.yml couldn't be located, and the EC2 log has no record of the config file. Here is the error message:
Error message:
No such file or directory - /var/app/current/config/database.yml
Exception class:
Errno::ENOENT
Application root:
/var/app/current
The "right" way to do what I think that you want to do is to use IAM Roles. You can see a blog post about it here: http://aws.typepad.com/aws/aws-iam/
Basically, it allows you to launch an EC2 instance without putting any personal credential on any configuration file at all. When you launch the instance it will be assigned the given role (a set of permissions to use AWS resources), and a rotating credential will be put on the machine automatically with Amazon IAM.
In order to have the .ebextension/*.config files be able to download the files from S3, they would have to be public. Given that they contain sensitive information, this is a Bad Idea.
You can launch an Elastic Beanstalk instance with an instance role, and you can give that role permission to access the files in question. Unfortunately, the file: and sources: sections of the .ebextension/*.config files do not have direct access to use this role.
You should be able to write a simple script using the AWS::S3::S3Object class of the AWS SDK for Ruby to download the files, and use a command: instead of a sources:. If you don't specify credentials, the SDK will automatically try to use the role.
You would have to add a policy to your role which allows you to download the files you are interested in specifically. It would look like this:
{
"Statement": [
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mybucket/*"
}
]
}
Then you could do something like this in your .config file:
files:
  /usr/local/bin/downloadScript.rb: http://s3.amazonaws.com/mybucket/downloadScript.rb
commands:
  01-download-config:
    command: ruby /usr/local/bin/downloadScript.rb http://s3.amazonaws.com/mybucket/config.tar.gz /tmp
  02-unzip-config:
    command: tar xvf /tmp/config.tar.gz
    cwd: /var/app/current
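A rough sketch of what that downloadScript.rb might look like, using the v1 AWS SDK for Ruby mentioned above; the bucket and key come from the example URL, and the argument parsing is deliberately minimal:
#!/usr/bin/env ruby
# Usage: ruby downloadScript.rb http://s3.amazonaws.com/mybucket/config.tar.gz /tmp
require 'aws-sdk'   # the v1 SDK, which provides the AWS::S3 / S3Object classes
require 'uri'

url, destination = ARGV
path = URI.parse(url).path.sub(%r{\A/}, "")       # e.g. "mybucket/config.tar.gz"
bucket_name, key = path.split("/", 2)

s3 = AWS::S3.new                                  # no credentials given: the SDK falls back to the instance role
object = s3.buckets[bucket_name].objects[key]

File.open(File.join(destination, File.basename(key)), "wb") do |file|
  object.read { |chunk| file.write(chunk) }       # stream the object to disk
end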
It is possible (and easy) to store sensitive files in S3 and copy them to your Beanstalk instances automatically.
When you create a Beanstalk application, an S3 bucket is automatically created. This bucket is used to store app versions, logs, metadata, etc.
The default aws-elasticbeanstalk-ec2-role that is assigned to your Beanstalk environment has read access to this bucket.
So all you need to do is put your sensitive files in that bucket (either at the root of the bucket or in any directory structure you desire), and create a .ebextension config file to copy them over to your EC2 instances.
Here is an example:
# .ebextensions/sensitive_files.config
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["elasticbeanstalk-us-east-1-XXX"] # Replace with your bucket name
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role" # This is the default role created for you when creating a new Beanstalk environment. Change it if you are using a custom role
files:
  /etc/pki/tls/certs/server.key: # This is where the file will be copied on the EC2 instances
    mode: "000400" # Apply restrictive permissions to the file
    owner: root # Or nodejs, or whatever suits your needs
    group: root # Or nodejs, or whatever suits your needs
    authentication: "S3Auth"
    source: https://s3-us-west-2.amazonaws.com/elasticbeanstalk-us-east-1-XXX/server.key # URL to the file in S3
This is documented here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-storingprivatekeys.html
Using environment variables is a good approach. Reference passwords from the environment, so in a YAML file:
password: <%= ENV['DATABASE_PASSWORD'] %>
Then set them on the instance directly with eb or the console.
You may be worried about having such sensitive information readily available in the environment. If a process compromises your system, it can probably obtain the password no matter where it is. This approach is used by many PaaS providers such as Heroku.
From their security documents: Amazon EC2 supports TrueCrypt for file encryption and SSL for data in transit. Check out these documents:
Security Whitepaper
Features
Risk and Compliance
Best Practices
You can launch a server instance with an encrypted disk, or you can use a private repo (I think this costs money on GitHub, but there are alternatives).
I think the best way is not to hack AWS (set hooks, upload files). Just use ENV variables.
Use the dotenv gem for development (e.g. <%= ENV['LOCAL_DB_USERNAME'] %> in config/database.yml) and the AWS console to set the variables in Beanstalk.
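A minimal sketch of that setup, assuming the dotenv-rails flavour of the gem:
# Gemfile -- load the .env values only in development and test
group :development, :test do
  gem "dotenv-rails"
end

# .env (git-ignored), for example:
#   LOCAL_DB_USERNAME=postgres
#   LOCAL_DB_PASSWORD=secret
# database.yml then reads <%= ENV['LOCAL_DB_USERNAME'] %> locally, while on
# Beanstalk the same variables come from the environment configuration instead.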
I know this is an old post, but I couldn't find another answer anywhere, so I burned the midnight oil to come up with one. I hope it saves you several hours.
I agree with the devs who posted about how much of a PITA it is to force devs to put ENV vars in their local dev database.yml. I know the dotenv gem is nice, but you still have to maintain the ENV vars, which adds to the time it takes to bring up a new station.
My approach is to store a database.yml file on S3 in the bucket created by EB, and then use a .ebextensions config file to create a script in the server's pre-hook directory, so it is executed after the unzip to the staging directory but before asset compilation, which, of course, blows up without a database.yml.
The .config file is:
# .ebextensions/sensitive_files.config
# Create a prehook command to copy database.yml from S3
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/03_copy_database.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      set -xe
      EB_APP_STAGING_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_staging_dir)
      echo EB_APP_STAGING_DIR is ${EB_APP_STAGING_DIR} >/tmp/copy.log
      ls -l ${EB_APP_STAGING_DIR} >>/tmp/copy.log
      aws s3 cp s3://elasticbeanstalk-us-east-1-XXX/database.yml ${EB_APP_STAGING_DIR}/config/database.yml >>/tmp/copy.log 2>>/tmp/copy.log
Notes
Of course the XXX in the bucket name is a sequence number created by EB. You'll have to check S3 to see the name of your bucket.
The name of the script file I create is important. These scripts are executed in alphabetical order so I was careful to name it so it sorts before the asset_compilation script.
Obviously, redirecting output to /tmp/copy.log is optional.
The post that helped me the most was at Customizing ElasticBeanstalk deployment hooks
posted by Kenta#AWS. Thanks Kenta!

Restricting file access on server with Rails Nginx and passenger

I have a Rails application (with Nginx and Passenger) that saves video files on the server. How can I restrict access to those files to logged-in users who have permission to view them? I believe that when I try to access a file such as www.mysite.com/videos/video1.flv it bypasses Rails, correct? So do I have to do something at the Nginx level to restrict that? A link to an article with instructions would be great. I'm using Rails 2.3.8.
You can check the user's credentials in Rails and then issue an X-Accel-Redirect header to Nginx to make it serve the file from a hidden (internal) directory (see http://wiki.nginx.org/NginxXSendfile).
There's a walkthrough here:
http://ramblingsonrails.com/how-to-protect-downloads-but-still-have-nginx-serve-the-files
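As a rough sketch of the Rails side (Rails 2.3-style syntax; the controller, the login filter, and the permission check are hypothetical names to replace with your own):
class VideosController < ApplicationController
  before_filter :login_required                 # hypothetical auth filter

  def show
    video = Video.find(params[:id])             # hypothetical model holding the stored filename
    return head(:forbidden) unless current_user.can_watch?(video)  # hypothetical permission check

    # /protected_videos/ is an Nginx location marked `internal`,
    # mapped to the directory where the .flv files actually live.
    response.headers["X-Accel-Redirect"] = "/protected_videos/#{video.filename}"
    response.headers["Content-Type"]     = "video/x-flv"
    render :nothing => true
  end
end
Nginx then streams the file itself, so the download never ties up a Rails/Passenger worker.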
