I'm having a problem with carrierwave + fog + S3 behind Amazon CloudFront. With the following setup I can upload files to S3, but after upload, the S3 object URLs I get from my Rails app don't use the asset_host-based format. I'm expecting URLs like this:
https://mycloudfrontname.cloudfront.net/uploads/myfile.mp3
But they all come back in this format instead:
https://mybucketname.s3.amazonaws.com/uploads/myfile.mp3
What might be wrong here?
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'XXXX',
    :aws_secret_access_key => 'XXXX',
    :region                => 'us-east-1'
  }
  config.fog_directory  = 'mybucketname'
  config.asset_host     = 'https://mycloudfrontname.cloudfront.net'
  config.fog_public     = false
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' }
end
UPDATE:
I found this code in CarrierWave's lib/carrierwave/storage/fog.rb. If we set asset_host as in the snippet above, this should work, right? Or is there some other configuration I need as well?
def public_url
  if host = @uploader.asset_host
    if host.respond_to? :call
      "#{host.call(self)}/#{path}"
    else
      "#{host}/#{path}"
    end
  else
    # AWS/Google optimized for speed over correctness
    case @uploader.fog_credentials[:provider]
    when 'AWS'
      # if directory is a valid subdomain, use that style for access
      if @uploader.fog_directory.to_s =~ /^(?:[a-z]|\d(?!\d{0,2}(?:\d{1,3}){3}$))(?:[a-z0-9]|\.(?![\.\-])|\-(?![\.])){1,61}[a-z0-9]$/
        "https://#{@uploader.fog_directory}.s3.amazonaws.com/#{path}"
      else
        # directory is not a valid subdomain, so use path style for access
        "https://s3.amazonaws.com/#{@uploader.fog_directory}/#{path}"
      end
    when 'Google'
      "https://commondatastorage.googleapis.com/#{@uploader.fog_directory}/#{path}"
    else
      # avoid a get by just using local reference
      directory.files.new(:key => path).public_url
    end
  end
end
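The catch isn't visible in public_url itself: the url method in the same file only calls public_url when fog_public is true. Paraphrased from memory of the same CarrierWave era (~0.8), so treat this as a sketch rather than the exact source:
def url
  if !@uploader.fog_public
    authenticated_url   # signed S3 URL; asset_host is never consulted
  else
    public_url          # the asset_host-aware branch shown above
  end
end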
Change config.fog_public to true and add config.asset_host = 'YOUR_CDN_ADDRESS'. asset_host doesn't work when fog_public is false.
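A minimal sketch of the adjusted initializer, reusing the names from the question:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'XXXX',
    :aws_secret_access_key => 'XXXX',
    :region                => 'us-east-1'
  }
  config.fog_directory  = 'mybucketname'
  config.asset_host     = 'https://mycloudfrontname.cloudfront.net'
  config.fog_public     = true # asset_host is only honoured for public files
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' }
end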
In your environment file you need to set the asset host. Just add the line below to your config/environments/production.rb file and you should be okay. You may also want to make sure you're using the latest versions of the carrierwave and fog gems.
-- config/environments/production.rb
Myapp::Application.configure do
  # Use Content Delivery Network for assets
  config.action_controller.asset_host = 'https://mycloudfrontname.cloudfront.net'
end
Don't use asset_host. The asset_host setting is for files served by the Rails asset helpers; CarrierWave files are handled differently. The config you are looking for is config.fog_host (note that in later CarrierWave versions fog_host was replaced by asset_host):
config.fog_host = 'https://mycloudfrontname.cloudfront.net'
Related
I am currently migrating a 10-year-old on-prem application to Heroku. We have a good amount of data on our servers.
The dragonfly/refinerycms data is stored at public/system/refinery/...
Images are stored under the refinery folder, e.g. images/2021/11/25/486ucenknk_image.png; the same goes for resources.
But when I set the following in images.rb
config.s3_datastore = true
files obviously start saving to the S3 bucket, but under a different path:
2021/11/25/02/01/37/5f6e0f21-658c-4cf2-9edc-da7cb8575ab8/images.png
meaning the time is included in the folder path as well.
I tried changing this path in many places but couldn't, and changing url_format doesn't seem to affect the store location at all.
I have attached the config for both files.
config/initializers/dragonfly.rb
# config/initializers/dragonfly.rb
require 'dragonfly/s3_data_store'

# Configure
Dragonfly.app.configure do
  protect_from_dos_attacks true
  secret "Some secret"
  url_format "/media/:job/:name"

  datastore :s3,
            bucket_name: ENV['S3_BUCKET'],
            access_key_id: ENV['S3_KEY'],
            secret_access_key: ENV['S3_SECRET'],
            url_scheme: 'https'
end

# Logger
Dragonfly.logger = Rails.logger

# Mount as middleware
Rails.application.middleware.use Dragonfly::Middleware

# Add model functionality
if defined?(ActiveRecord::Base)
  ActiveRecord::Base.extend Dragonfly::Model
  ActiveRecord::Base.extend Dragonfly::Model::Validations
end

Excon.defaults[:write_timeout] = 500
config/initializers/refinery/images.rb
# config/initializers/refinery/images.rb
# encoding: utf-8
Refinery::Images.configure do |config|
  # Configure S3 (you can also use ENV for this)
  # The s3_backend setting by default defers to the core setting for this but can be set just for images.
  config.s3_datastore = false
  config.s3_bucket_name = ENV['S3_BUCKET']
  config.s3_access_key_id = ENV['S3_KEY']
  config.s3_secret_access_key = ENV['S3_SECRET']
  config.s3_region = 'us-east-1'

  # Configure Dragonfly
  config.dragonfly_verify_urls = false
  config.datastore_root_path = "/refinery/images"
end
If anyone has encountered a problem like this before, please help. Thanks in advance.
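For what it's worth, Dragonfly generates the timestamped path itself when no path is given; the mechanism it documents for controlling the stored key is a storage_options block on the accessor. Since Refinery defines its accessors internally, treat the following as a sketch of the plain Dragonfly mechanism (the filename-based key is an assumption for illustration) rather than a drop-in Refinery fix:
class Image < ActiveRecord::Base
  dragonfly_accessor :image do
    storage_options do |attachment|
      # "refinery/images" mirrors datastore_root_path from the question;
      # pinning :path here bypasses the generated timestamped key.
      { path: "refinery/images/#{attachment.name}" }
    end
  end
end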
Sometimes I am away from the internet and still need to work on upload pages. CarrierWave Direct seems to force storage :fog, with no way of overriding it in development.
Is it possible to tell CarrierWave Direct to use local storage (:file) and simply fall back to CarrierWave's development config settings?
Setting storage :file in the CarrierWave initializer under the development config settings doesn't work: carrierwave_direct raises "is not a recognized provider" from <%= direct_upload_form_for @uploader do |f| %>.
I have attempted to work around carrierwave_direct, but between it forcing :fog, expecting a redirect URL, and expecting the direct_upload_form_for form helper, carrierwave_direct is pretty much in charge.
Using storage :file in development would be a welcome feature for the carrierwave_direct gem. Does anyone know how to do this cleanly?
I think it can be done as follows:
CarrierWave.configure do |config|
  if Rails.env.development? || Rails.env.test?
    config.storage = :file
    config.asset_host = ENV["dev_url"]
  else
    config.fog_provider = 'fog/aws' # required
    config.fog_credentials = {
      provider:              'AWS',            # required
      aws_access_key_id:     ENV["aws_id"],    # required
      aws_secret_access_key: ENV["aws_key"],   # required
      region:                ENV["aws_zone"]   # optional, defaults to 'us-east-1'
    }
    config.fog_directory = ENV["aws_bucket"]   # required
    config.max_file_size = 600.megabytes       # defaults to 5.megabytes
    config.use_action_status = true
    config.fog_public = false                  # optional, defaults to true
    config.fog_attributes = { cache_control: "public, max-age=#{365.day.to_i}" } # optional, defaults to {}
  end
end
And in your uploader, add:
class SomeUploader < CarrierWave::Uploader::Base
  if Rails.env.development? || Rails.env.test?
    def store_dir
      "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
    end
  end
end
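If carrierwave_direct still forces :fog because the uploader includes its module, one workaround I've seen (untested here, so consider it an assumption) is to include the module only outside development and test, keeping direct uploads in production while plain CarrierWave :file storage works locally:
class SomeUploader < CarrierWave::Uploader::Base
  # Only pull in carrierwave_direct (and its :fog requirement) where S3
  # uploads actually happen; local environments use plain CarrierWave.
  include CarrierWaveDirect::Uploader unless Rails.env.development? || Rails.env.test?
end
Note that any views calling direct_upload_form_for would also need an environment check.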
So this seems like it should be quite easy... everyone says just to use config.asset_host. When I set that, though, all the links inside my app still point to S3.
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => AWS_ACCESS_KEY_ID,
    :aws_secret_access_key => AWS_SECRET_ACCESS_KEY,
    :region                => 'us-east-1'
  }
  config.fog_authenticated_url_expiration = 3.hours
  config.asset_host = "http://xyz123.cloudfront.net"
  config.fog_directory = S3_BUCKET_NAME
  config.fog_public = false
  config.fog_attributes = {
    'Cache-Control' => "max-age=#{1.year.to_i}"
  }
end
Here is how I call my files:
image_tag book.attachments.first.filename.file.authenticated_url(:thumb175)
It looks to me like public_url prepends the proper host, but it takes zero arguments... so how am I supposed to pass the proper response-content-disposition, response-content-type, and link expiry time?
I had the same problem and spent far too long figuring out the answer! It turns out that when you set fog_public = false, CarrierWave ignores config.asset_host. You can demonstrate this by setting config.fog_public = true: your URLs will now be CloudFront URLs rather than S3 URLs. This issue has been raised before:
https://github.com/carrierwaveuploader/carrierwave/issues/1158
https://github.com/carrierwaveuploader/carrierwave/issues/1215
In a recent project I was happy using CarrierWave to handle uploads to S3, but wanted it to return a signed CloudFront URL when using Model.attribute_url. I came up with the following (admittedly ugly) workaround that I hope others can benefit from or improve upon:
Add the 'cloudfront-signer' gem to your project and configure it per its instructions. Then override /lib/carrierwave/uploader/url.rb with the following in a new file in config/initializers (note the multiple insertions of AWS::CF::Signer.sign_url):
module CarrierWave
  module Uploader
    module Url
      extend ActiveSupport::Concern
      include CarrierWave::Uploader::Configuration
      include CarrierWave::Utilities::Uri

      ##
      # === Parameters
      #
      # [Hash] optional, the query params (only AWS)
      #
      # === Returns
      #
      # [String] the location where this file is accessible via a url
      #
      def url(options = {})
        if file.respond_to?(:url) and not file.url.blank?
          file.method(:url).arity == 0 ? AWS::CF::Signer.sign_url(file.url) : AWS::CF::Signer.sign_url(file.url(options))
        elsif file.respond_to?(:path)
          path = encode_path(file.path.gsub(File.expand_path(root), ''))

          if host = asset_host
            if host.respond_to? :call
              AWS::CF::Signer.sign_url("#{host.call(file)}#{path}")
            else
              AWS::CF::Signer.sign_url("#{host}#{path}")
            end
          else
            AWS::CF::Signer.sign_url((base_path || "") + path)
          end
        end
      end
    end # Url
  end # Uploader
end # CarrierWave
Then override /lib/carrierwave/storage/fog.rb by adding the following to the bottom of the same initializer file:
require "fog"
module CarrierWave
module Storage
class Fog < Abstract
class File
include CarrierWave::Utilities::Uri
def url
# Delete 'if statement' related to fog_public
public_url
end
end
end
end
end
Lastly, in config/initializers/carrierwave.rb:
config.asset_host = "http://d12345678.cloudfront.net"
config.fog_public = false
That's it. You can now use Model.attribute_url and it will return a signed CloudFront URL to a private file uploaded by CarrierWave to your S3 bucket.
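For completeness, the cloudfront-signer configuration described in that gem's README looks roughly like the sketch below; the key path and key pair ID are placeholders you obtain from your AWS account:
# config/initializers/cloudfront_signer.rb (sketch; values are placeholders)
AWS::CF::Signer.configure do |config|
  config.key_path        = Rails.root.join('config', 'cloudfront-private-key.pem').to_s
  config.key_pair_id     = 'APKAEXAMPLEKEYID' # CloudFront key pair id from AWS
  config.default_expires = 3600               # seconds a signed URL stays valid
end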
I think you found this out for yourself, but public URLs will not expire. If you want that, you'll need to use authenticated URLs. For public URLs I think you can simply take the URL and append whatever query params you would like, at least for now. If that works well for you, we can certainly see about patching things to do the right thing.
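To illustrate the appending suggestion (a sketch reusing the model from earlier in this thread; note that S3 only honours response-content-* overrides on signed requests, so plain appending is mainly useful for params your CDN or app interprets):
require 'cgi'

url    = book.attachments.first.filename.file.public_url
params = {
  'response-content-disposition' => 'attachment; filename="myfile.mp3"',
  'response-content-type'        => 'audio/mpeg'
}
query  = params.map { |k, v| "#{k}=#{CGI.escape(v)}" }.join('&')
"#{url}?#{query}"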
I just pulled an older version of a Rails project I'm working on from GitHub and am having problems with carrierwave for image upload. I used the figaro gem to store my secret keys, so they were not in the files I pulled down (figaro puts them in an application.yml file that is listed in .gitignore). I added the figaro configuration back, but carrierwave still refuses to work. I even tried putting the keys into the carrierwave configuration directly to see if it was something with figaro, but no luck.
My config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_KEY'],
    :region                => 'us-east-1',
  }
  config.fog_directory  = 'bucketname'
  config.fog_public     = true
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' }
end
I am pretty sure my keys are stored properly in my development environment, but I have no idea why carrierwave is not working like it did before.
Thanks!
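One quick way to rule figaro in or out is to confirm the variables are actually present at runtime. In a Rails console (names assumed to match the initializer above):
# Both should print the actual values, not nil; nil means figaro's
# config/application.yml isn't being loaded (or uses different key names).
puts ENV['AWS_KEY_ID'].inspect
puts ENV['AWS_SECRET_KEY'].inspect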
I have two CarrierWave uploaders in my application: ImageUploader uploads locally and ImageRemoteUploader uploads to Amazon S3 storage using fog. ImageUploader has storage set to :file and ImageRemoteUploader has storage set to :fog. This setup works fine, but things change when I start setting up my RSpec tests.
The problem arises when I change ImageRemoteUploader to use :file storage during testing. I do this in my fog initialization file, /config/initializers/fog.rb:
CarrierWave.configure do |config|
  if Rails.env.test?
    config.storage = :file
    config.enable_processing = false
  else
    config.fog_credentials = {
      :provider              => 'AWS',      # required
      :aws_access_key_id     => 'XXXXXXXX', # required
      :aws_secret_access_key => 'XXXXXX',   # required
      :region                => 'XXXX'      # optional, defaults to 'us-east-1'
    }
    config.fog_directory = 'xxx'            # required
    config.fog_public = true
  end
end
When I do this, I get an ArgumentError ("is not a recognized storage provider") exception from CarrierWave. When I use the fog credentials (i.e. I don't set config.storage to :file), the tests work as expected.
Carrierwave 0.7.1, Rails 3.2.8, Ruby 1.9.3, Rspec 2.10
Thanks.
I'd try moving the config.storage and config.enable_processing lines into config/initializers/carrierwave.rb, as recommended in the CarrierWave docs.
Fog also has its own mocking support, which is enabled by running Fog.mock! before the examples. This might be a better approach.
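A minimal sketch of the Fog.mock! approach, assuming RSpec and the bucket name from the initializer above:
# spec/spec_helper.rb (sketch) -- run Fog in its in-memory mock mode
require 'fog'

Fog.mock!

RSpec.configure do |config|
  config.before(:suite) do
    # The mock backend starts empty, so create the bucket the uploader
    # expects; credentials can be anything in mock mode.
    storage = Fog::Storage.new(
      :provider              => 'AWS',
      :aws_access_key_id     => 'fake',
      :aws_secret_access_key => 'fake'
    )
    storage.directories.create(:key => 'xxx') # fog_directory from the initializer
  end
end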