I'm using Rails 4.2 and Paperclip 4.2.1. I have several Rails models with Paperclip attachments, so I set a default config for Paperclip in development.rb like this:
config.paperclip_defaults = config_for(:paperclip_settings)
and set paperclip_settings.yml to:
development:
  :storage: :s3
  :s3_credentials:
    :bucket: "xxx"
    :access_key_id: "yyy"
    :secret_access_key: "zzz"
  :url: ':s3_path_url'
  :path: '/:attachment/:id_partition/:style/:filename'
and I've added one more attachment, only for one model, that must be saved locally:

has_attached_file :faq, :storage => :filesystem

but it still tries to save this attachment to S3. So the question is: how can I separate the S3 and filesystem configs and pick one per has_attached_file, or just fill in all the needed properties on this one attachment (a sketch of that second option follows below)?
Thanks!
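For what it's worth, a hedged sketch of the second option, filling in all needed properties on this one attachment: per-attachment options passed to has_attached_file override config.paperclip_defaults, so this should keep :faq on the local filesystem. The :path and :url values are assumptions using Paperclip's stock filesystem layout, not taken from the original config:

# Sketch only: explicit filesystem options on this attachment override the S3 defaults.
class Faq < ActiveRecord::Base
  has_attached_file :faq,
    :storage => :filesystem,
    :path    => ":rails_root/public/system/:attachment/:id_partition/:style/:filename",
    :url     => "/system/:attachment/:id_partition/:style/:filename"

  # Paperclip 4.x requires a content-type validation (or an explicit opt-out).
  do_not_validate_attachment_file_type :faq
end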
Related
I am struggling to access files on S3 with Carrierwave.
In my uploader file doc_uploader.rb I have the following code
storage :file

def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
to upload the "doc" model, defined as follows:
class Doc < ActiveRecord::Base
  belongs_to :user
  mount_uploader :doc, DocUploader
end
To access the uploaded file I have the following lines of code in a controller:
@doc = current_user.docs.order("created_at").last # last file uploaded by user
io = open("#{Rails.root}/public" + @doc.doc.url)
Everything works perfectly locally. Now I want to move my files to S3. In the uploader I use fog and replace
storage :file
by
storage :fog
I adjust my config file carrierwave.rb and uploading works perfectly. However, to access the file I try to use
@doc = current_user.docs.order("created_at").last
io = open("#{@doc.doc.url}")
and I get the following error
No such file or directory # rb_sysopen - /uploads/doc/doc/11/the_uploaded_file.pdf
Could anyone give me the right syntax to access the file on S3 please? Thanks.
When accessing the asset through the console, it gives you only the path; you might need to append the protocol & host to @doc.doc.url, something like:
io = open("http://example.com#{#doc.doc.url}")
Or you can set the asset host in the environment you need, but this is not really necessary:
config.asset_host = 'http://example.com'
This only applies if you are using the console; in any web view this will not apply, as carrierwave seems to handle it.
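Another option (not from the original answer, just a hedged sketch): CarrierWave uploaders delegate read to the underlying storage, so you can pull the file's contents without building a URL at all, whether the storage is :file or :fog:

require 'stringio'

# Sketch: #read on the mounted uploader returns the stored file's bytes from
# whichever backend is configured, so no protocol/host handling is needed.
@doc = current_user.docs.order("created_at").last
io = StringIO.new(@doc.doc.read)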
I am building a rails app with carrierwave and fog for attachment storage. In my test environment, I am using fog local storage.
I am looking for a way to get the full attachment path with this configuration.
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider: 'Local',
    local_root: '/Users/me/fog',
    endpoint: '/Users/me/fog',
  }
  config.fog_directory = 'test.myapp.com'
  config.fog_public = false
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' }
end
When I use any other storage options (like AWS S3), I can get the full url to an attachment just by doing my_object.my_attachment_url or my_object.my_attachment.path.
However, when using Local storage, I only get a path relative to my configuration options, like my_object/my_attachment/1/test.jpg.
Is there any way through carrierwave or fog to get the full path to this local file?
For my example, the output I am looking for would be: /Users/me/fog/test.myapp.com/my_object/my_attachment/1/test.jpg
For me, the answer was modifying the carrierwave uploader class.
I had
def store_dir
  "#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
which worked fine for AWS S3 as all the S3 specific info was inserted before this string. However, to get this to work with fog Local as well, I added:
if Rails.env.test?
  def base_path
    "#{File.expand_path(CONFIG.fog_local_root)}/#{CONFIG.fog_directory}/"
  end
else
  def base_path
    ''
  end
end
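With that in place, the full local path from the question can be assembled from the uploader's own pieces. This is only a sketch of how they combine, using the mounted names from the question:

# Sketch: base_path is the method defined above; #path on the mounted uploader
# returns the relative path, so joining the two gives the absolute local path.
uploader  = my_object.my_attachment
full_path = File.join(uploader.base_path, uploader.path)
# => "/Users/me/fog/test.myapp.com/my_object/my_attachment/1/test.jpg"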
I followed this tutorial:
http://lifesforlearning.com/uploading-images-with-carrierwave-to-s3-on-rails/
I had a working carrierwave uploader which was storing files to disk.
What I did step by step:
1) added the fog gem and ran bundle install and bundle update
2) in config/initializers I created an r3.rb file with this:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'AWS',
    :aws_access_key_id => 'mykey',
    :aws_secret_access_key => 'mysecretkey',
    :region => 'us-west-2' # Change this for different AWS region.
  }
  config.fog_directory = "bucket-main"
end
I ran rails s and tried to save a photo, but as you can see in the screenshot my bucket is empty, so the files must still be stored on my disk.
What do I do now?
Update: I changed storage to fog.
Here is my PhotoUploader class code:
# encoding: utf-8
class PhotoUploader < CarrierWave::Uploader::Base
  storage :fog

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
And now I get this error:
hostname "bucket-main.bucket-main.s3-us-west-1.amazonaws.com" does not
match the server certificate (OpenSSL::SSL::SSLError)
I eventually solved my problem by updating
bundle update fog
and
bundle update carrierwave
Try adding path_style to your fog_credentials and setting the fog_directory:
config.fog_credentials = {
  ...
  :path_style => true
}
config.fog_directory = 'bucket-main'
I just spent a few hours tracking down the cause of this error, which I was also getting:
hostname "bucket-main.bucket-main.s3-us-west-1.amazonaws.com" does not match the server certificate (OpenSSL::SSL::SSLError)
The odd thing is how the bucket name is repeated twice in the hostname. It turned out I had configured the wrong region name. Notice in your config.fog_credentials you have
:region => 'us-west-2'
...but the hostname in the exception has s3-us-west-1? If your bucket is in one AWS region, but you configure a different region in your Fog credentials, Fog will try to follow a redirect from AWS, and somehow the bucket name gets doubled up in this situation. Fog produces a warning about the redirect, but Carrierwave ends up hiding this from you.
Set :region in your Fog credentials to where the bucket actually is in AWS, and the "does not match the server certificate" exception will stop happening.
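A hedged sketch of the fix, assuming (from the s3-us-west-1 hostname in the exception) that the bucket actually lives in us-west-1:

# Sketch: the :region must match the bucket's real AWS region; us-west-1 is
# inferred from the hostname in the SSL error and is an assumption.
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'mykey',
    :aws_secret_access_key => 'mysecretkey',
    :region                => 'us-west-1' # was 'us-west-2' in the original initializer
  }
  config.fog_directory = "bucket-main"
end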
I'm getting the following error, AWS::Errors::MissingCredentialsError in LocationsController#create, with paperclip and the AWS gem.
more details on the exception:
Missing Credentials. Unable to find AWS credentials.
You can configure your AWS credentials a few different ways:
* Call AWS.config with :access_key_id and :secret_access_key
* Export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to ENV
I'm currently running this code in a development environment on my machine. Here is the development.rb:
Gmaps::Application.configure do
  # Settings specified here will take precedence over those in config/application.rb

  # In the development environment your application's code is reloaded on
  # every request. This slows down response time but is perfect for development
  # since you don't have to restart the web server when you make code changes.
  config.cache_classes = false

  # Log error messages when you accidentally call methods on nil.
  config.whiny_nils = true

  # Show full error reports and disable caching
  config.consider_all_requests_local = true
  config.action_controller.perform_caching = false

  # Don't care if the mailer can't send
  config.action_mailer.raise_delivery_errors = false

  # Print deprecation notices to the Rails logger
  config.active_support.deprecation = :log

  # Only use best-standards-support built into browsers
  config.action_dispatch.best_standards_support = :builtin

  # Raise exception on mass assignment protection for Active Record models
  config.active_record.mass_assignment_sanitizer = :strict

  # Log the query plan for queries taking more than this (works
  # with SQLite, MySQL, and PostgreSQL)
  config.active_record.auto_explain_threshold_in_seconds = 0.5

  # Do not compress assets
  config.assets.compress = false

  # Expands the lines which load the assets
  config.assets.debug = true

  # Amazon S3 settings for Paperclip uploads
  Paperclip::Attachment.default_options.merge!({
    storage: :s3,
    s3_credentials: {
      access_key_id: ENV['key_id'],
      secret_access_key: ENV['key'],
      bucket: "#{ENV['bucket']}-#{Rails.env}"
    },
    url: ":s3_domain_url",
    path: "/:class/:attachment/:id_partition/:style/:filename"
  })
end
Here is my model:
class Location < ActiveRecord::Base
  geocoded_by :address
  after_validation :geocode

  has_attached_file :picture,
    :styles => { :medium => "300x300>", :thumb => "100x100>" }
end
Thanks for taking the time to look at this.
It's obviously not finding your S3 credentials. I can't find anywhere in the Paperclip docs where it says you should specify your S3 credentials in the application configuration.
Generally, you can put it in the model as described here:
http://rubydoc.info/gems/paperclip/Paperclip/Storage/S3
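As a hedged illustration of what the linked docs describe, the credentials can go straight into has_attached_file on the model. The ENV variable names below are carried over from the development.rb above and are assumptions about your setup:

# Sketch only: per-model S3 configuration for Paperclip, as the Storage::S3 docs describe.
class Location < ActiveRecord::Base
  geocoded_by :address
  after_validation :geocode

  has_attached_file :picture,
    :styles => { :medium => "300x300>", :thumb => "100x100>" },
    :storage => :s3,
    :s3_credentials => {
      :access_key_id     => ENV['key_id'],
      :secret_access_key => ENV['key'],
      :bucket            => "#{ENV['bucket']}-#{Rails.env}"
    }
end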
There are several threads related to this that could help:
AWS::Errors::MissingCredentialsError using paperclip and aws-s3 in rails 3.1
Troubles setting up Paperclip + AWS S3 for image storing in our Rails3/Heroku App
Try Using Just the AWS Gem
I'd like you to try using just the AWS gem config and see if the credentials work:
config/initializers/aws.rb
# Make the connection.
AWS.config(
  access_key_id: ENV['key_id'],
  secret_access_key: ENV['key']
)
S3 = AWS::S3.new.buckets['your-bucket-name']
And then open the console and try the following:
bundle exec rails c
S3.objects.first
What is the output?
Essentially, get this working first and then deal with the abstraction of having the Paperclip Gem as well.
The fix for me was to rearrange the order of my gems in my gemfile:
gem 'aws-sdk' must precede gem 'paperclip'
THIS:
gem 'aws-sdk'
gem 'paperclip'
NOT THIS:
gem 'paperclip'
gem 'aws-sdk'
Discovered this by a stroke of luck.
I have just recently set up my rails 3.2 app to use the carrierwave gem and upload files to S3. What I don't see is the ability to use a different bucket per uploader. Does anyone know if this is a possibility?
The bucket is specified via the fog_directory config. This configuration option is defined on the uploader and can simply be overridden with your own method.
Just add the following to your uploader:
def fog_directory
# your bucket name here
end
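Put together, a hedged sketch of one uploader pointed at its own bucket (the class and bucket names are placeholders, not from the question):

# Sketch: only this uploader writes to 'my-avatars-bucket'; other uploaders keep
# using the bucket from CarrierWave.configure.
class AvatarUploader < CarrierWave::Uploader::Base
  storage :fog

  def fog_directory
    'my-avatars-bucket'
  end
end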
The carrierwave wiki explains how to use a separate s3 bucket for each uploader:
def initialize(*)
  super
  self.fog_credentials = {
    :provider => 'AWS',                        # required
    :aws_access_key_id => 'YOURAWSKEYID',      # required
    :aws_secret_access_key => 'YOURAWSSECRET', # required
  }
  self.fog_directory = "YOURBUCKET"
end
Multiple buckets are not currently supported by CarrierWave. You can separate files between uploaders by adding prefixes (folders) to the store_dir, as in the sketch below. Pull requests are welcome though if this is something you'd like to work on!
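A hedged sketch of that prefix approach, with a single shared bucket and a per-uploader folder (the "documents" prefix is illustrative):

# Sketch: all uploaders share one bucket, but each one stores under its own
# top-level folder by prefixing store_dir.
class DocUploader < CarrierWave::Uploader::Base
  storage :fog

  def store_dir
    "documents/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end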