I have an app that works fine on my development machine, but on my production server it serves a broken link to an image handled by the Paperclip gem.
The production environment is Linux (Debian), Apache and Passenger, and I am deploying with Capistrano.
The app is stored at the following path (a symlink that points to the public folder of the current version of the app deployed by Capistrano):
/var/www/apps/root/appname
However, when I try to access it on the production server, the Apache error log shows this as the path it is looking in:
/var/www/apps/root/system
The correct path, however, is:
/var/www/apps/appname/shared/system
One option available to me is to create a symlink in root that directs system to the correct path, but I don't want to do this in case I want to deploy another app in the same root dir.
The URL for this request is generated by Rails, but Apache is what serves the static resource (image files), so I have tried placing the following in my config/environments/production.rb:
ENV["RAILS_RELATIVE_URL_ROOT"] = '/appname/'
This has resolved all the other path issues I've been experiencing, but when Rails generates the URL (via the Paperclip gem), it doesn't seem to use it.
How can I make Paperclip use the right path, and only use it in production?
I have a workaround: add this as one of your initializers:
config/initializers/paperclip.rb
Paperclip::Attachment.class_eval do
  def url(style_name = default_style, options = {})
    if options == true || options == false # Backwards compatibility.
      res = @url_generator.for(style_name, default_options.merge(:timestamp => options))
    else
      res = @url_generator.for(style_name, default_options.merge(options))
    end
    # Prepend the relative URL root before res, minus the final /
    # (site_relative_url is a custom config value, e.g. '/appname/')
    Rails.application.config.site_relative_url[0..-2] + res
  end
end
At the moment Paperclip doesn't work with ENV['RAILS_RELATIVE_URL_ROOT']. You can follow the issue here:
https://github.com/thoughtbot/paperclip/issues/889
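Until that issue is fixed, another option (instead of monkey-patching) is to bake the sub-URI into Paperclip's default :url pattern. A minimal sketch, assuming the app is mounted at /appname as in the question; the pattern below is Paperclip's stock URL with the prefix added, not something from the original answer:

# config/environments/production.rb
Rails.application.configure do
  # Prefix Paperclip's default URL pattern with the sub-URI the app is
  # served from, so Apache resolves /appname/system/... correctly.
  config.paperclip_defaults = {
    url: "/appname/system/:class/:attachment/:id_partition/:style/:filename"
  }
end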
I have a Rails 4 application that allows users to upload videos using the jQuery Dropzone plugin and the Paperclip gem. Each uploaded video is encoded into multiple formats and uploaded to Amazon S3 in the background using the delayed_paperclip, av-transcoder and sidekiq gems.
All works fine with most videos, but with a larger file, e.g. 1.1 GB, after the upload reaches what seems like the end of the Dropzone progress bar, it returns an Nginx 504 Gateway Time-out.
As far as the server goes, the Rails app runs on Nginx + Passenger on a couple of servers that are behind a load balancer (Nginx used there too). I do not have timeouts set in the upstream section of the load balancer, client_max_body_size is set to 2000M (both on the load balancer and the servers), and I've tried setting passenger_pool_idle_time to a large value (600) and send_timeout (600s); nothing made any difference.
Note: when making those changes, I made them in the host files of both servers as well as the load balancer, and always restarted Nginx afterwards.
I've also read several answers regarding similar problems, like this one and this one, but I still can't figure this out; Google wasn't much more helpful either.
Some extra notes for those unfamiliar with the whole paperclip/delayed_paperclip process: the file is uploaded to the server, and then the operation is done as far as the user is concerned; in the background, the post-processing of the videos (encoding/uploading to S3) is pushed to Redis as a job, and Sidekiq processes it whenever it has time/resources.
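For those who want to picture that setup, here is a minimal sketch of the kind of model wiring described above (class, attachment and style names are hypothetical, not taken from the question):

class Video < ActiveRecord::Base
  # Paperclip attachment; av-transcoder handles the encoded styles
  has_attached_file :source, styles: { mp4: { format: "mp4" } }
  do_not_validate_attachment_file_type :source # validation skipped for brevity
  # delayed_paperclip: defer post-processing to a background (Sidekiq) job
  process_in_background :source
end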
What could be causing this issue? How can I debug this and solve it?
UPDATE
Thanks to Sergey's answer I was able to solve the issue. Since I was restricted to a specific version of Paperclip, I couldn't update to the newest version that has the fix, so I'll leave here what I ended up doing.
In the engine that I use to handle the uploads, I added the following code to the engine_name.rb file to override the Paperclip methods that needed fixing:
Paperclip::AbstractAdapter.class_eval do
  def copy_to_tempfile(src)
    link_or_copy_file(src.path, destination.path)
    destination
  end

  def link_or_copy_file(src, dest)
    Paperclip.log("Trying to link #{src} to #{dest}")
    FileUtils.ln(src, dest, force: true) # overwrite existing
    @destination.close
    @destination.open.binmode
  rescue Errno::EXDEV, Errno::EPERM, Errno::ENOENT => e
    Paperclip.log("Link failed with #{e.message}; copying link #{src} to #{dest}")
    FileUtils.cp(src, dest)
  end
end
Paperclip::AttachmentAdapter.class_eval do
  def copy_to_tempfile(source)
    if source.staged?
      link_or_copy_file(source.staged_path(@style), destination.path)
    else
      source.copy_to_local_file(@style, destination.path)
    end
    destination
  end
end
Paperclip::Storage::Filesystem.class_eval do
  def flush_writes #:nodoc:
    @queued_for_write.each do |style_name, file|
      FileUtils.mkdir_p(File.dirname(path(style_name)))
      begin
        move_file(file.path, path(style_name))
      rescue SystemCallError
        File.open(path(style_name), "wb") do |new_file|
          while chunk = file.read(16 * 1024)
            new_file.write(chunk)
          end
        end
      end
      unless @options[:override_file_permissions] == false
        resolved_chmod = (@options[:override_file_permissions] &~ 0111) || (0666 &~ File.umask)
        FileUtils.chmod(resolved_chmod, path(style_name))
      end
      file.rewind
    end
    after_flush_writes # allows attachment to clean up temp files
    @queued_for_write = {}
  end

  private

  def move_file(src, dest)
    # Support hardlinked files
    if File.identical?(src, dest)
      File.unlink(src)
    else
      FileUtils.mv(src, dest)
    end
  end
end
I faced a similar issue a while ago. Maybe my experience will help.
We had an m3.medium instance on Amazon with 4 GB of memory.
Users could upload large video files. We faced 504 errors when uploading files larger than 400 MB.
While monitoring and logging the upload process, it became apparent that Paperclip creates four files per attachment, so all the instance's resources were spent on file system work.
There is a description of this problem here:
https://github.com/thoughtbot/paperclip/issues/1642
The proposed solution is to use links instead of copies where possible. You can see the appropriate code changes here:
https://github.com/arnonhongklay/paperclip/commit/cd80661df18d7cd112944bfe26d90cb87c928aad
However, two days ago Paperclip was updated to version 5.2.0, which implements a similar solution.
It now creates only one file per attachment, so the file system is no longer overloaded; after updating to 5.2.0 we stopped receiving 504 errors.
Conclusion:
If you're restricted to an older Paperclip version for some reason, use the monkey patch from the link above.
Otherwise, update Paperclip to version 5.2.0. That should help.
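For the second option, the Gemfile change is just a version bump (a sketch; pin however your project prefers):

# Gemfile
gem "paperclip", ">= 5.2.0" # release that links temp files instead of copying

Then run bundle update paperclip.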
I have a Rails app with Paperclip, and I use Google Cloud Storage. So far so good.
To avoid having both development and production use the same storage, I decided to change the default Paperclip path to one based on the environment, so that every environment has its own directory. Then I consistently moved the old images from the default Paperclip path to the new ones.
The problem is that now old images give a 404, whereas any new image I upload works properly. Is there any way to fix that?
Here are the previous settings:
module MyApp
  class Application < Rails::Application
    config.paperclip_defaults = {
      storage: :fog,
      fog_public: true,
      fog_directory: 'myapp-01',
      fog_credentials: {
        google_storage_access_key_id: ENV['GOOGLE_STORAGE_ID'],
        google_storage_secret_access_key: ENV['GOOGLE_STORAGE_SECRET'],
        provider: 'Google'
      }
    }
  end
end
I overrode the defaults with the following settings:
path: ":rails_env/:class/:attachment/:id_partition/:style/:filename",
url: "/:rails_root/:class/:attachment/:id_partition/:style/:filename"
My guess is that it's not sufficient to update the Paperclip config with the new path and move all the images to the new directory; you also need to update the old records...
If you wonder, the old records point to root/images/?123456789.
Your guess is right. Changing the config is not enough; you need to move the files. This is better left to a rake task or background job. I have some code for S3, but it should give you an idea of how to implement it for Google:
def old_key(image, file_name_field)
  # Previous `:path`: '/:class/:attachment/:id/:style/:filename'
  klass = self.class.to_s.pluralize.downcase
  attachment = image.pluralize
  "#{klass}/#{attachment}/#{id}/original/#{send(file_name_field)}"
end

def re_path(image)
  file_name_field = "#{image}_file_name"
  return if send(file_name_field).blank?
  old_object = bucket.object(old_key(image, file_name_field))
  return unless old_object.exists?
  Rails.logger.warn "Re-saving image attachment #{self.class}/#{id}"
  send "#{image}=", URI.parse(old_object.public_url)
  save
end
I'm basically building the old path using my own interpolation, finding the object in S3 (hence the key/object lingo) and re-downloading every image from S3. Be careful with this, since you might incur extra costs for downloading rather than just moving, if that's something Google allows.
Then I just called this method on every image for every object:
Object.find_each { |o| o.re_path(:logo); o.re_path(:background) }
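Since the answer recommends a rake task, here is a minimal sketch of one way to wrap those calls (the task name is made up, and Object stands in for your model, as in the answer):

# lib/tasks/paperclip_migrate.rake
namespace :paperclip do
  desc "Re-save attachments under the new environment-scoped path"
  task migrate_paths: :environment do
    # find_each batches records so large tables don't exhaust memory
    Object.find_each do |o|
      o.re_path(:logo)
      o.re_path(:background)
    end
  end
end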
I'm trying to attach a file to an email. The file is in assets/downloads/product.pdf
In the mailer, I have:
attachments["product.pdf"] = File.read(ActionController::Base.helpers.asset_path("product.pdf"))
I've tried:
attachments["product.pdf"] = File.read(ActionController::Base.helpers.asset_url("product.pdf"))
...and even:
attachments["product.pdf"] = File.read(ActionController::Base.helpers.compute_asset_host("product.pdf") + ActionController::Base.helpers.compute_asset_path("product.pdf"))
I always get the same error:
EmailJob crashed!
Errno::ENOENT: No such file or directory - //localhost:3000/assets/product.pdf
...or a variation on the theme. Yet when I use asset_url in a view, or just put the URL in the browser, it works:
http://localhost:3000/assets/product.pdf
I've also tried using straight up:
File.read("app/assets/downloads/product.pdf")
File.read("downloads/product.pdf")
...which works in the dev environment but not on the staging server (Heroku). The error is still:
Errno::ENOENT: No such file or directory - downloads/product-market-fit-storyboard.pdf
Also tried:
File.read("/downloads/product.pdf")
File.read("http://lvh.me:3000/assets/product.pdf")
...neither of which works at all.
Ideas?
You should use syntax like this. It works for me, and it may work for you too:
File.open(Dir.glob("#{Rails.root}/app/assets/downloads/product.pdf").first, "r")
When using a mailer, you shouldn't use the asset pipeline. The asset pipeline would be useful if you wanted a link to a file inside your email. When rendering an email, Action Mailer has access to files in the app directory.
Please read about attachments in the Action Mailer guide. As you can see, you just need to pass the path to a file, not a URL:
attachments['filename.jpg'] = File.read('/path/to/filename.jpg')
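Applied to the question, a minimal sketch might look like this (the mailer class and method names are hypothetical; the file path comes from the question):

class ProductMailer < ActionMailer::Base
  def product_email(recipient)
    # Build an absolute filesystem path instead of an asset URL
    attachments["product.pdf"] =
      File.read(Rails.root.join("app", "assets", "downloads", "product.pdf"))
    mail(to: recipient, subject: "Your product PDF")
  end
end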
I'm trying to set up Amazon S3 support to store images in the cloud with refinerycms.
I created the bucket at https://console.aws.amazon.com/s3/
I named it like the app, 'bee-barcelona', and it says it is in region US Standard.
In ~/config/initializers/refinery/images.rb I entered all the data (where 'xxx' stands for the actual keys I entered):
# Configure S3 (you can also use ENV for this)
# The s3_backend setting by default defers to the core setting for this but can be set just for images.
config.s3_backend = Refinery::Core.s3_backend
config.s3_bucket_name = ENV['bee-barcelona']
config.s3_access_key_id = ENV['xxx']
config.s3_secret_access_key = ENV['xxx']
config.s3_region = ENV['xxx']
Then I applied the changes to heroku with:
heroku config:add S3_KEY=xxx S3_SECRET=xxx S3_BUCKET=bee-barcelona S3_REGION=us-standard
But still, in the app I only get "Sorry, something went wrong" when I try to upload.
What did I miss?
What a sad error. I didn't think about that option till I went for a 10 km run…
I had the app set up as "beekeeping".
My bucket on Amazon was named "bee-barcelona".
I did register the correct bucket in the app. Still, Refinery kept going to another person's bucket, named "beekeeping". With my secret key, there was no way my files would end up there.
I created a new app and a new bucket, all with crazy names, BUT they are the same on Amazon S3 and in Git!
Now it works like a charm.
What a very rare situation...
The way I did it was as follows:
Create a bucket in region US Standard!
Did you see that? US Standard, not Oregon, not anywhere else.
Add gems to Gemfile
gem "fog"
gem "unf"
gem "dragonfly-s3_data_store"
In config/application.rb
config.assets.initialize_on_precompile = true
In config/environments/production.rb
Refinery::Core.config.s3_backend = true
In config/environments/development.rb
Refinery::Core.config.s3_backend = false
Configure S3 for Heroku (production) and local storage for development. In config/initializers/refinery/core.rb:
if Rails.env.production?
  config.s3_backend = true
else
  config.s3_backend = false
end
config.s3_bucket_name = ENV['S3_BUCKET']
config.s3_region = ENV['S3_REGION']
config.s3_access_key_id = ENV['S3_ACCESS_KEY']
config.s3_secret_access_key = ENV['S3_SECRET_KEY']
Add the variables to Heroku:
heroku config:add S3_ACCESS_KEY=xxxxxx S3_SECRET_KEY=xxxxxx S3_BUCKET=bucket-name-here S3_REGION=us-east-1
I had a lot of issues because I previously had S3_REGION=us-standard. This is wrong. For a US Standard bucket, set it as shown:
S3_REGION=us-east-1
This worked flawlessly for me on Rails 4.2.1 and Refinery 3.0.0. Also, make sure you are using exactly the same names for the variables. Sometimes it says S3_KEY instead of S3_ACCESS_KEY, or S3_SECRET instead of S3_SECRET_KEY. Just make sure you have the same ones in your files and in your Heroku variables.
I am trying to check if a file exists in my rails application.
I am running Ruby 1.8.6 and Rails 2.1.2 on Windows XP.
So the problem is that the FileTest.exists? method doesn't seem to be working. I have simplified the code to this point:
if FileTest.exists?("/images/header.jpg")
  render :text => "yes"
else
  render :text => "no <img src='/images/header.jpg' />"
end
If I do that, the system displays "no" and then includes the image, which displays correctly because /images/header.jpg exists.
I tried FileTest.exists?, FileTest.exist?, File.exists?, File.exist? and nothing seems to be working.
What am I doing wrong?
Thanks
I'm guessing it's because you're asking whether a file "header.jpg" exists in a directory "images" off the root directory of your system (which on Windows I'd assume is "C:\"). Try using the full filesystem path (from the filesystem root) to the images directory rather than the URL path.
In particular, as pointed out by @Brian, you should use:
FileTest.exists?(RAILS_ROOT + "/public/images/header.jpg")          # Rails < 3.0
FileTest.exists?(Rails.root.join("public", "images", "header.jpg")) # Rails >= 3.0
Add RAILS_ROOT to the filename that you're checking before calling exists?, and remember that URL paths like /images map to public/images on disk.
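Putting it together with the question's simplified code, a sketch of the corrected check (assuming the standard Rails 2 layout, where /images URLs are served from public/images):

if FileTest.exists?(File.join(RAILS_ROOT, "public", "images", "header.jpg"))
  render :text => "yes"
else
  render :text => "no <img src='/images/header.jpg' />"
end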