How to add host to a Carrierwave url? - ruby-on-rails

I'm using Carrierwave on a Document model.
class Document
  mount_uploader :file, DocumentUploader
end
and am trying to send an email with document as attachment
class DocumentMailer < ActionMailer::Base
  def distribute(recipient, document)
    filename = document.file.file.original_filename
    attachments[filename] = File.read(document.file.url)
    mail(
      to: recipient.email,
      subject: "Document attached"
    )
  end
end
In tests, the Mailer is raising an error
Errno::ENOENT:
No such file or directory # rb_sysopen - /uploads/document/file/2/my_attachment.jpg
I can resolve this error in the test suite by calling path instead of url in DocumentMailer, which returns the full filesystem path
attachments[ filename ] = File.read(document.file.path)
# /Users/AHH/code/myapp/tmp/uploads/document/file/2/my_attachment.jpg
However, this causes the method to fail in production. Carrierwave is using fog to store files on S3, and so I need the full url to assign an attachment to DocumentMailer.
Why do the tests fail when using file.url? I assume it is because the url has no host. So how do I ensure that Carrierwave applies a host to file.url in the test environment?

This is a side effect of how you store files in development/test versus production: in production, url is actually a URL, probably pointing at an S3 host, but locally it's a path to a file, relative to wherever you've specified uploads to be stored. Unfortunately, CarrierWave doesn't seem to have a graceful way of handling this, at least as far as I've seen, so I ended up doing something like this in our spec helper:
config.before do
  allow_any_instance_of(DocumentUploader).to receive(:url) do |uploader|
    uploader.path
  end
end
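If all you want is for url to include a host (the literal question), CarrierWave also exposes an asset_host setting. A minimal sketch, assuming a test-only branch in the initializer and a placeholder host:

# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  if Rails.env.test?
    # hypothetical host; use whatever serves your public/ directory in test
    config.asset_host = "http://localhost:3000"
  end
end

Note this only makes file.url absolute; File.read still cannot open an HTTP URL directly, which is why the stub above falls back to path.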

Related

Defining where Paperclip stores the attachment locally when running Spec tests

I'm using Amazon S3 to store Paperclip attachments in all non-test environments.
For test specifically, I use a local path/url setup to avoid interacting with S3 remotely:
Paperclip::Attachment.default_options[:path] =
  ":rails_root/public/system/:rails_env/:class/:attachment/:id_partition/:filename"
Paperclip::Attachment.default_options[:url] =
  "/system/:rails_env/:class/:attachment/:id_partition/:filename"
I define my attachment as follows in the model
has_attached_file :source_file, use_timestamp: false
In my production code I need to access the file using Model.source_file.url, because .url returns the fully qualified remote Amazon S3 path to the file. This works fine in non-test environments.
However, in my test environment I can't use .url, because Paperclip creates and stores the file under the path defined by :path above, so I need to use .path. If I use .url I get this error:
Errno::ENOENT:
No such file or directory # rb_sysopen - /system/test/imports/source_files/000/000/030/sample.txt
which makes sense, because Paperclip didn't store the file there.
How do I get Paperclip in my test environment to store/create my file under the :url path so I can use .url correctly?
Edit: If it helps, in test I create the attachment from a locally stored fixture file
factory :import do
  source_file { File.new(Rails.root + "spec/fixtures/files/sample.tsv") }
end
Edit 2: Setting :path and :url to the same path in the initializer might seem like a quick fix, but I'm working on a larger app with several contributors, so I don't have the luxury of doing that or breaking anyone else's specs. Plus, it looks like Thoughtbot themselves recommend this setup, so there should be a "proper" way to get it working as is.
Thanks!
Have you tried using s3proxy in your test environment to simulate S3, instead of having Paperclip write directly to local files?
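If a full S3 simulation is more than you need, another option is to mirror the CarrierWave stub from the first answer and point Paperclip's url at path in specs. An untested sketch, assuming RSpec with allow_any_instance_of:

# spec/rails_helper.rb
config.before do
  allow_any_instance_of(Paperclip::Attachment).to receive(:url) do |attachment, style, _options|
    # delegate to the local path Paperclip actually wrote to
    attachment.path(style || attachment.default_style)
  end
end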

Stubbing Paperclip downloads from S3 in RSpec

I am using Paperclip/RSpec and StackOverflow has helped me successfully stub file uploads to S3 using this code:
spec/rails_helper.rb
config.before(:each) do
  allow_any_instance_of(Paperclip::Attachment).to receive(:save).and_return(true)
end
This is working great.
On my model I have two Paperclip fields:
class MyModel < ActiveRecord::Base
  has_attached_file :pdf
  has_attached_file :resource
end
My code uses the #copy_to_local_file method (Docs) to retrieve a file from S3.
#copy_to_local_file takes two params: the style (:original, :thumbnail, etc) and the local file path to copy to.
Example:
MyModel.resource.copy_to_local_file(:original, local_file.path)
When the system under test tries to access MyModel#pdf#copy_to_local_file or MyModel#resource#copy_to_local_file, I originally got errors like the following:
No Such Key - cannot copy /email_receipts/pdfs/000/000/001/original/email_receipt.eml.pdf to local file /var/folders/4p/1mm86g0n58x7d9rvpy88_s9h0000gn/T/receipt20150917-4906-13evk95.pdf
No Such Key - cannot copy /email_receipts/resources/000/000/001/original/email_receipt.eml to local file /var/folders/4p/1mm86g0n58x7d9rvpy88_s9h0000gn/T/resource20150917-4906-1ysbwr3.eml
I realize these errors were happening because uploads to S3 are stubbed, so when it encounters MyModel#pdf#copy_to_local_file or MyModel#resource#copy_to_local_file it tries to grab a file in S3 that isn't there.
Current Solution:
I've managed to quash the errors above, but I feel it's not a complete solution and gives my tests a false sense of security. My half-solution is to stub this method in the following way:
spec/rails_helper.rb
before(:each) do
  allow_any_instance_of(Paperclip::Storage::S3).to receive(:copy_to_local_file)
end
While this does stub out the #copy_to_local_file method and removes the errors, it doesn't actually write any content to the local file that is provided as the second argument to #copy_to_local_file, so it doesn't quite simulate the file being downloaded from S3.
Question:
Is there a way to stub #copy_to_local_file AND have it write the contents of a canned file in my spec/factories/files directory to the local file (its second argument)?
Or am I overthinking this? Is this something I shouldn't be worrying about?
You don't need to worry about whether the 'downloaded' files actually exist in your tests. You've decided to stub out Paperclip, so do it completely, by stubbing out both #save and #copy_to_local_file. You may also need to stub out reads of downloaded files from the filesystem.
All this stubbing raises the possibility of integration errors, so you should probably also write a feature spec (using a headless browser driver like Poltergeist) that actually uploads and downloads something and reads it from the filesystem.
That said, you can do anything you want in an RSpec stub by passing it a block:
allow_any_instance_of(Paperclip::Storage::S3).to receive(:copy_to_local_file) do |instance, style, local_dest_path|
  # note: with allow_any_instance_of, the first block argument is the receiving instance
  # write a file here, or do anything you like
end
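To go further and have the stub actually produce a file (the original question), the block can copy a canned fixture into place. A sketch, where the fixture path and name are hypothetical stand-ins for something in your spec/factories/files directory:

require 'fileutils'

allow_any_instance_of(Paperclip::Storage::S3).to receive(:copy_to_local_file) do |_instance, _style, local_dest_path|
  # simulate the S3 download by copying a canned file to the requested destination
  FileUtils.cp Rails.root.join("spec", "factories", "files", "canned.pdf"), local_dest_path
end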

Carrierwave can't find file when testing with Capybara

I'm trying to write an integration test that involves a file uploaded with Carrierwave. I have the following configuration:
CarrierWave.configure do |config|
  if Rails.env.test?
    config.storage = :file
    config.enable_processing = false
  else
    # other configs
  end
end
And my uploader has the store path set to:
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
The outcome of this is that the file gets stored in public/uploads. But when I try to access this file via the doc.url method on my model, it returns a path like /uploads/..., and when running my integration specs I get the following error:
unable to open file `/uploads/doc/paper/1/img_1.png'
I can confirm that the file does exist in the public/uploads directory. How do I get Carrierwave to return the correct path to the uploaded image? I could patch it together with Rails.root.join('public', doc.url), but that would break in production, where we're uploading to S3.
After investigating some more, I realized that the problem is that we are trying to access the file server side. Client side, everything works fine, because the page uses relative paths and Rails resolves them as asset paths. Server side, there is nothing to tell Ruby where to look. In dev and production we use S3, so the url is the same either way. It feels like a bit of a hack, but where we need to access the image on the server, we did this:
path = Rails.env.test? ? doc.img.path : doc.url
file = File.open(path)
I wasn't able to find an environment agnostic way to handle this.
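One possibly environment-agnostic alternative, assuming your CarrierWave version delegates #read from the uploader to the storage backend (both the file and fog stores implement it), is to read the contents through the uploader instead of opening a path or URL yourself:

# works with local file storage and fog/S3 alike, if #read is delegated
contents = doc.img.read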

Rails: allow download of files stored on S3 without showing the actual S3 URL to user

I have a Rails application hosted on Heroku. The app generates and stores PDF files on Amazon S3. Users can download these files for viewing in their browser or to save on their computer.
The problem I am having is that although downloading these files is possible via the S3 URL (like "https://s3.amazonaws.com/my-bucket/F4D8CESSDF.pdf"), that is obviously NOT a good way to do it. It is not desirable to expose so much information about the backend to the user, not to mention the security issues that arise.
Is it possible to have my app somehow retrieve the file data from S3 in a controller, then create a download stream for the user, so that the Amazon URL is not exposed?
You can create your S3 objects as private and generate temporary public urls for them with the url_for method (aws-s3 gem). This way you don't stream files through your app servers, which is more scalable. It also allows session-based authorization (e.g. Devise in your app), tracking of download events, etc.
In order to do this, change direct links to S3-hosted files into links to a controller action which creates a temporary url and redirects to it, like this:
class HostedFilesController < ApplicationController
  def show
    s3_name = params[:id] # sanitize name here, restrict access to only some paths, etc
    AWS::S3::Base.establish_connection!( ... )
    url = AWS::S3::S3Object.url_for(s3_name, YOUR_BUCKET, :expires_in => 2.minutes)
    redirect_to url
  end
end
Hiding the Amazon domain in download urls is usually done with DNS aliasing. You need to create a CNAME record aliasing your subdomain, e.g. downloads.mydomain, to s3.amazonaws.com. Then you can pass the :server option to AWS::S3::Base.establish_connection!(:server => "downloads.mydomain", ...) and the S3 gem will use it when generating links.
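For example, a sketch against the old aws-s3 gem API used above, with placeholder credentials and domain:

AWS::S3::Base.establish_connection!(
  :access_key_id     => ENV["AWS_ACCESS_KEY_ID"],
  :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"],
  :server            => "downloads.mydomain.com"
)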
Yes, this is possible: just fetch the remote file with Rails and either store it temporarily on your server or send it directly from the buffer. The drawback is, of course, that you need to fetch the whole file before you can serve it to the user. See this thread for a discussion; their solution is something like this:
# environment.rb
require 'open-uri'

# controller
def index
  # NB: validate/whitelist params[:file] before opening it
  data = open(params[:file])
  send_data data.read, :filename => params[:name], ...
end
This issue is also somewhat related.
First you need to create a CNAME for your domain, as explained here.
Second, you need to create a bucket with the same name that you put in the CNAME.
And to finish, you need to add these configurations in your config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  ...
  config.asset_host = 'http://bucket_name.your_domain.com'
  config.fog_directory = 'bucket_name.your_domain.com'
  ...
end

When using Paperclip on Rails 3, some characters (# and ~) get erased or altered in the file name when uploading

I'm not sure if this is a Paperclip issue; I tried it on GitLab and the same thing happened.
I have a back end for an iOS app written in Rails, and when I upload an image file with the # character in the filename, it gets erased upon uploading. If I have a file named
aaa#2x.jpg
it gets saved as
aaa2x.jpg
Also, ~ gets converted into a _.
This is a problem because iOS apps presume that retina-ready images are named with the #2x suffix.
I can regex the file name post upload, change it in the database, and rename the file, but that seems like an odd hack. Anyone have any idea what's happening? How can I have the file name saved properly to begin with?
According to this article: http://en.wikipedia.org/wiki/HFS_Plus, you should be able to use any character, including NUL in file names. But OS APIs may limit some characters for legacy reasons.
It could be a server or client issue. Try to debug your application and check the file name provided in request.request_parameters; it should contain the valid file name.
If you are going to use uploaded file names in URLs, you should transliterate them before upload; this will also resolve your problem. To do this you can use this extension:
module TransliteratePaperclip
  def transliterate_file_name(paperclip_file)
    paperclip_file = [paperclip_file] unless paperclip_file.is_a?(Enumerable)
    paperclip_file.each do |file|
      filename = read_attribute("#{file}_file_name")
      if filename.present?
        extension = File.extname(filename).gsub(/^\.+/, '')
        filename = filename.gsub(/\.#{extension}$/, '')
        self.send(file).instance_write(:file_name, "#{filename.parameterize}.#{extension.parameterize}")
      end
    end
  end
end

# include the extension
ActiveRecord::Base.send(:include, TransliteratePaperclip)
Put this code in /config/initializers/paperclip_transliterate.rb, and add this to your Paperclip model:
before_post_process { |c| transliterate_file_name(:file) }
where :file is the attribute defined by has_attached_file.
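For what it's worth, parameterize makes the stored name URL-safe rather than preserving the special characters, so under this extension a hypothetical aaa#2x.jpg would come out as aaa-2x.jpg:

"aaa#2x".parameterize    # => "aaa-2x"
"some~file".parameterize # => "some-file"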
