I'm using CarrierWave + S3 to store my records' images in S3.
The problem is that when I retrieve 250 records from a JSON file, loading gets extremely slow because each version of each image of each record has to be signed individually:
"url":"https://xxxxxx.s3.eu-west-3.amazonaws.com/uploads/product/images/1156/1.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256\u0026X-Amz-Credential=xxxxx-west-3%2Fs3%2Faws4_request\u0026X-Amz-Date=20220921T134602Z\u0026X-Amz-Expires=604800\u0026X-Amz-SignedHeaders=host\u0026X-Amz-Signature=xxxxxx
How can I retrieve the images quickly, without having to sign each one?
My Carrierwave config file:
CarrierWave.configure do |config|
  config.storage = :aws
  config.aws_bucket = ENV['S3_BUCKET_NAME'] # for AWS-side bucket access permissions config, see section below
  config.aws_acl = 'private'

  # Optionally define an asset host for configurations that are fronted by a
  # content host, such as CloudFront.

  # The maximum period for authenticated_urls is only 7 days.
  config.aws_authenticated_url_expiration = 60 * 60 * 24 * 7

  # Set custom options such as cache control to leverage browser caching.
  # You can use either a static Hash or a Proc.
  config.aws_attributes = -> { {
    expires: 1.week.from_now.httpdate,
    cache_control: 'max-age=604800'
  } }

  config.aws_credentials = {
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    region: ENV['AWS_REGION'], # Required
    stub_responses: Rails.env.test? # Optional, avoid hitting S3 actual during tests
  }

  # Optional: Signing of download urls, e.g. for serving private content through
  # CloudFront. Be sure you have the `cloudfront-signer` gem installed and
  # configured:
  # config.aws_signer = -> (unsigned_url, options) do
  #   Aws::CF::Signer.sign_url(unsigned_url, options)
  # end
end
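For illustration, the slowdown comes from URL generation rather than from the config itself: with aws_acl = 'private', every call to #url returns a freshly signed URL. A minimal sketch of the kind of rendering that triggers it (the controller, model, and version names below are assumed, not taken from the app):

# Hypothetical index action: each #url call produces its own presigned URL,
# so 250 records with several versions each means hundreds of signatures per request.
def index
  products = Product.limit(250)
  render json: products.map { |product|
    {
      id: product.id,
      image: product.image.url,        # signed URL for the original
      thumb: product.image.thumb.url   # signed URL for a :thumb version
    }
  }
end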
Related
I am currently migrating a 10-year-old on-prem application to Heroku.
We have a good amount of data on our servers.
Dragonfly/RefineryCMS data is stored at public/system/refinery/...
Images are stored under the refinery folder, e.g. images/2021/11/25/486ucenknk_image.png, and the same is the case for resources.
But when I set the following in images.rb:
config.s3_datastore = true
the files start saving to the S3 bucket, as expected, but with a different path:
2021/11/25/02/01/37/5f6e0f21-658c-4cf2-9edc-da7cb8575ab8/images.png
That is, the date and time are included in the folder structure as well.
I have tried changing this path in many places without success. I also tried changing url_format, but it does not seem to affect the store location at all.
I have attached the config files for both.
config/initializers/dragonfly.rb
# config/initializers/dragonfly.rb
require 'dragonfly/s3_data_store'
# Configure
Dragonfly.app.configure do
  protect_from_dos_attacks true
  secret "Some secret"

  url_format "/media/:job/:name"

  datastore :s3,
            bucket_name: ENV['S3_BUCKET'],
            access_key_id: ENV['S3_KEY'],
            secret_access_key: ENV['S3_SECRET'],
            url_scheme: 'https'
end
# Logger
Dragonfly.logger = Rails.logger
# Mount as middleware
Rails.application.middleware.use Dragonfly::Middleware
# Add model functionality
if defined?(ActiveRecord::Base)
  ActiveRecord::Base.extend Dragonfly::Model
  ActiveRecord::Base.extend Dragonfly::Model::Validations
end
Excon.defaults[:write_timeout] = 500
config/initializers/refinery/images.rb
# config/initializers/refinery/images.rb
# encoding: utf-8
Refinery::Images.configure do |config|
  # Configure S3 (you can also use ENV for this)
  # The s3_backend setting by default defers to the core setting for this but can be set just for images.
  config.s3_datastore = false
  config.s3_bucket_name = ENV['S3_BUCKET']
  config.s3_access_key_id = ENV['S3_KEY']
  config.s3_secret_access_key = ENV['S3_SECRET']
  config.s3_region = 'us-east-1'

  # Configure Dragonfly
  config.dragonfly_verify_urls = false
  config.datastore_root_path = "/refinery/images"
end
If anyone has encountered a problem like this before, please help me. Thanks in advance.
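For reference, the date/time/UUID segments are what Dragonfly's S3 data store generates by default when no explicit path is supplied, and datastore_root_path only prepends a prefix to that generated key. Dragonfly's model layer can supply the key itself via storage_options. A minimal sketch, assuming a plain Dragonfly-backed model rather than Refinery's own image class, with a hypothetical key layout:

class Photo < ApplicationRecord
  dragonfly_accessor :image do
    # Override the default date/time/uuid key with a custom layout (hypothetical).
    storage_options do |attachment|
      { path: "refinery/images/#{id}/#{attachment.name}" }
    end
  end
end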
Carrierwave is returning a JSON response like this:
"url": "/mys3bucket/uploads/entrees/photo/32/4c312e9aed37a59319096a03_1.jpg",
I need the absolute URL. The images are hosted on Amazon S3. How can I get the absolute URL?
My temporary hack is to add the following to the CarrierWave initializer:
config.asset_host = "s3.#{ENV.fetch('AWS_REGION')}.amazonaws.com/mybucket"
CarrierWave uses the combination of the filename and the settings specified in your uploader class to generate the proper URL. This allows you to easily swap out the storage backend without making any changes to your core application.
That said, you cannot store the full URL. What you can do is set CarrierWave's asset_host config setting based on the environment.
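For example, something along these lines in the initializer (the env var name and host are assumed, and the objects are assumed to be publicly readable); with an asset host set, the uploader's url method returns an absolute URL instead of the relative path shown in the question:

CarrierWave.configure do |config|
  # Hypothetical host; point it at the bucket endpoint or at a CDN in front of it.
  config.asset_host = ENV['S3_ASSET_HOST'] # e.g. "https://mys3bucket.s3.amazonaws.com"
end

# uploader.url  =>  "https://mys3bucket.s3.amazonaws.com/uploads/entrees/photo/32/4c312e9aed37a59319096a03_1.jpg"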
What storage are you using in production? Here is my configuration, and it works very well. Hope it helps.
CarrierWave.configure do |config|
  config.root = Rails.root

  if Rails.env.production?
    config.storage = :fog
    config.fog_credentials = {
      provider: "AWS",
      aws_access_key_id: ENV["AWS_ACCESS_KEY_ID"],
      aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
      region: ENV["S3_REGION"]
    }
    config.fog_directory = ENV["S3_BUCKET_NAME"]
    # config.asset_host = ENV["S3_ASSET_HOST"]
  else
    config.storage = :file
    # config.asset_host = ActionController::Base.asset_host
  end
end
I'm currently fighting to get S3 uploads working via CarrierWave, carrierwave-aws, and Figaro.
But I keep getting
SocketError in OffersController#create
getaddrinfo: Name or service not known
I've tried changing the asset host to '127.0.0.1', but that still produces this error.
carrierwave.rb
CarrierWave.configure do |config|
  config.storage = :aws
  config.aws_bucket = ENV.fetch('S3_BUCKET_NAME')
  config.aws_acl = 'public-read'

  # Optionally define an asset host for configurations that are fronted by a
  # content host, such as CloudFront.
  config.asset_host = 'localhost'

  # The maximum period for authenticated_urls is only 7 days.
  config.aws_authenticated_url_expiration = 60 * 60 * 24 * 7

  # Set custom options such as cache control to leverage browser caching
  config.aws_attributes = {
    expires: 1.week.from_now.httpdate,
    cache_control: 'max-age=604800'
  }

  config.aws_credentials = {
    access_key_id: ENV.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY'),
    region: ENV.fetch('AWS_REGION') # Required
  }
end
Gemfile
# Figaro
gem "figaro"
# Carrierwave Integration
gem 'carrierwave'
# Carrierwave AWS
gem 'carrierwave-aws'
Any help on this would be fantastic.
Try removing config.asset_host = 'localhost' from your CarrierWave.configure block. It's optional and mainly used to point at a third-party asset host such as CloudFront.
So remove config.asset_host = 'localhost' and you are done.
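In other words, keep asset_host only when a real content host sits in front of the bucket; a sketch with a made-up CloudFront domain:

# Only set this when the bucket is fronted by a CDN (domain below is hypothetical):
config.asset_host = 'https://d111111abcdef8.cloudfront.net'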
I have just migrated from Paperclip to CarrierWave and managed to get uploading to S3 working locally on my machine. But after deploying the Rails application to my server (which uses Ubuntu and Passenger with nginx; it was a mission getting it to work), when I try to upload an image it tries to save it to public/uploads/..., which results in a permission denied error. I have looked and searched everywhere to find out why it's not working, and have found nothing.
My Uploader file:
class AvatarUploader < CarrierWave::Uploader::Base
  include CarrierWave::Compatibility::Paperclip

  storage :fog

  def store_dir
    "/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
fog.rb
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    provider: 'AWS',                   # required
    aws_access_key_id: '*******',      # required
    aws_secret_access_key: '********', # required
    region: 'ap-southeast-2',          # optional, defaults to 'us-east-1'
  }
  config.fog_directory = 'publicrant' # required
  # config.fog_public = false
  config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" } # optional, defaults to {}
end
OK, so after hours of googling and failing miserably to find a solution, it turns out that in a production environment CarrierWave did need to write the file temporarily to uploads/tmp before pushing it to the S3 bucket.
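If writing under public/ isn't an option, CarrierWave's cache_dir can instead be pointed at a location the app user can write to; a minimal sketch, with the target path assumed:

CarrierWave.configure do |config|
  # Hypothetical path: keep the temporary upload cache under tmp/ (usually writable)
  # instead of the default uploads/tmp below the public root.
  config.cache_dir = "#{Rails.root}/tmp/uploads"
end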
It seems you don't have read permissions for other users (o+r). Check it using the command:
namei -lm <absolute path to your current/public>
and grant read permissions:
chmod o+r <directory>
In your case, I think it will be the /home/<user> directory.
I am attempting to use Carrierwave with Amazon S3 in my Rails app, and I keep getting the error
"Excon::Errors::Forbidden (Expected(200) <=> Actual(403 Forbidden)."
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.
I also receive the warning
"[WARNING] fog: the specified s3 bucket name() is not a valid dns name, which will negatively impact performance. For details see: http://docs.amazonwebservices.com/AmazonS3/latest/dev/BucketRestrictions.html"
config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider: 'AWS',
    aws_access_key_id: ENV["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key: ENV["AWS_ACCESS_KEY"]
  }
  config.fog_directory = ENV["AWS_BUCKET"]
end
My bucket name is "buildinprogress"
I've double checked that my access key ID and access key are correct.
How can I fix this error?
It is a problem with fog/Excon, which kept throwing random errors for me too.
My fix was to remove gem 'fog' and replace it with gem 'carrierwave-aws' instead.
Then, in your *_uploader.rb change
storage :fog ---> storage :aws
and update your carrierwave.rb file, e.g.:
CarrierWave.configure do |config|
  config.storage = :aws # required
  config.aws_bucket = ENV['S3_BUCKET'] # required
  config.aws_acl = :public_read

  config.aws_credentials = {
    access_key_id: ENV['S3_KEY'],       # required
    secret_access_key: ENV['S3_SECRET'] # required
  }

  config.aws_attributes = {
    cache_control: "max-age=#{365.days.to_i}",
    expires: 'Tue, 29 Dec 2015 23:23:23 GMT'
  }
end
For more info, check out the carrierwave-aws GitHub page.
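For completeness, here is roughly what the uploader side of that change looks like; the class name and store_dir below are assumed, and only the storage line is the actual change:

class ImageUploader < CarrierWave::Uploader::Base
  storage :aws # was `storage :fog` before switching to carrierwave-aws

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end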