I have just migrated from Paperclip to CarrierWave and successfully got uploading to S3 working locally on my machine. But after deploying the Rails application to my server (which runs Ubuntu with Passenger and nginx, and was a mission to get working), any image upload tries to save to public/uploads/..., which fails with a permission denied error. I have looked and searched everywhere to find out why it's not working, and have found nothing.
My Uploader file:
class AvatarUploader < CarrierWave::Uploader::Base
  include CarrierWave::Compatibility::Paperclip

  storage :fog

  def store_dir
    "/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
fog.rb
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    provider: 'AWS',               # required
    aws_access_key_id: '*******',  # required
    aws_secret_access_key: '********', # required
    region: 'ap-southeast-2',      # optional, defaults to 'us-east-1'
  }
  config.fog_directory = 'publicrant' # required
  # config.fog_public = false
  config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" } # optional, defaults to {}
end
OK, so after hours of Googling and failing miserably to find a solution, it turns out that in a production environment CarrierWave does need to write the file temporarily to uploads/tmp before it pushes it to the S3 bucket.
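If the application user cannot write beneath public/uploads, one workaround (a minimal sketch; the tmp path here is an assumption, adjust it to your deploy layout) is to point CarrierWave's cache directory at Rails' own tmp directory, which is normally writable:
# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  # Cache uploads under Rails.root/tmp instead of public/uploads/tmp,
  # since tmp/ is usually writable by the app user on Passenger setups.
  config.cache_dir = Rails.root.join('tmp', 'uploads')
end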
It seems like other users don't have read permissions (o+r). Check them using the command:
namei -lm <absolute path to your current/public>
and grant read permissions:
chmod o+r <directory>
In your case, I think it will be the /home/<user> directory.
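For example (hypothetical paths; substitute your own deploy directory):
namei -lm /home/deploy/myapp/current/public
chmod o+r /home/deploy
chmod o+x /home/deploy   # directories also need execute (traverse) permission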
I'm trying to set up Carrierwave and Fog to handle image and file uploads on a rails app that I have hosted on AWS' Elastic Beanstalk.
I'm a little confused on how to properly set up the Fog config.
I tried using my AWS access and secret keys (commented out in the example below). That threw an error in my EB CLI (ERROR: NotAuthorizedError - Operation Denied. The security token included in the request is invalid.)
I'm trying to use IAM instead of having my access/secret keys in my Ruby code. Can anyone tell me how to set this up properly?
Here's my config file:
CarrierWave.configure do |config|
  # Use local storage if in development or test
  if Rails.env.development? || Rails.env.test?
    CarrierWave.configure do |config|
      config.storage = :file
    end
  end

  # Use AWS storage if in production
  if Rails.env.production?
    CarrierWave.configure do |config|
      config.storage = :fog
    end
  end

  config.fog_credentials = {
    :provider => 'AWS',                      # required
    # :aws_access_key_id => 'My Access',     # required
    # :aws_secret_access_key => 'My Secret', # required
    :use_iam_profile => true,
    :region => 'eu-west-2'                   # optional, defaults to 'us-east-1'
  }
  config.fog_directory = 'elasticbeanstalk-us-west-2-XXXXXXXXXX' # required
  # config.fog_host = 'https://assets.example.com' # optional, defaults to nil
  config.fog_public = false # optional, defaults to true
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' } # optional, defaults to {}
end
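As an aside, the two nested CarrierWave.configure calls are redundant; a single block with a conditional does the same thing. A sketch, keeping the same bucket, region, and IAM setting:
CarrierWave.configure do |config|
  if Rails.env.production?
    config.storage = :fog
    config.fog_credentials = {
      provider: 'AWS',
      use_iam_profile: true, # use the instance's IAM role instead of access keys
      region: 'eu-west-2'
    }
    config.fog_directory = 'elasticbeanstalk-us-west-2-XXXXXXXXXX'
    config.fog_public = false
  else
    config.storage = :file # local storage in development and test
  end
end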
This is a setup that works for me:
config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws' # required
  config.fog_credentials = {
    provider: 'AWS',                                     # required
    aws_access_key_id: ENV['aws_access_key_id'],         # required
    aws_secret_access_key: ENV['aws_secret_access_key'], # required
    # region: 'Singapore',    # optional, defaults to 'us-east-1'
    # host: 's3.example.com', # optional, defaults to nil
    # endpoint: 'olucube-images.s3-website-ap-southeast-1.amazonaws.com', # optional, defaults to nil
  }
  config.fog_directory = ENV['fog_directory'] # required
  # config.fog_public = false # optional, defaults to true
  # config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" } # optional, defaults to {}
end
and I used the Figaro gem to hold my credentials, as follows:
config/application.yml
aws_access_key_id: 'XXXXXXXXXXXXXXXXXXXX'
aws_secret_access_key: 'XXXXXXXXXXXXXXXXXX'
fog_directory: 'myAppName'
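Figaro loads these keys into ENV at boot, which is what the config above reads; it also exposes its own accessor (key names here match the application.yml above):
ENV['aws_access_key_id'] # => 'XXXXXXXXXXXXXXXXXXXX'
Figaro.env.fog_directory # => 'myAppName'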
This was a bit of a wild ride. I had a hard time figuring out the Figaro gem. It's probably simple, but I didn't really understand it. So as a test, I put my keys directly in the code. It still didn't work.
I pushed my code to GitHub (publicly) and didn't think much of it. I was going to change the keys just in case, but before I could, someone found my code on GitHub and gained access to my AWS account. They started a bunch of EC2 instances and racked up $3000 worth of usage in a few hours!
My AWS account got suspended, and I'm still dealing with having the charges reversed.
Anyway, I found out that you can actually set environment variables in the Elastic Beanstalk web interface, under Configuration → Software Configuration. So I did that instead of using Figaro (much safer, in my opinion). Now it works great. I simplified my CarrierWave config file to only use AWS, reading the environment variables set in EB. Here's the file:
# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    provider: 'AWS',
    aws_access_key_id: ENV['S3_KEY'],
    aws_secret_access_key: ENV['S3_SECRET'],
    region: ENV['S3_REGION']
  }
  config.fog_directory = ENV['S3_BUCKET']
  config.fog_public = false
  config.storage = :fog
end
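For reference, the same variables can also be set from the EB CLI instead of the web console (variable names match the config above; the values are placeholders):
eb setenv S3_KEY=AKIA... S3_SECRET=... S3_REGION=us-west-2 S3_BUCKET=my-bucket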
I changed my uploader files to use fog too. Here's an example:
# app/uploaders/image_uploader.rb
class ImageUploader < CarrierWave::Uploader::Base
  storage :fog

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  def extension_white_list
    %w(jpg jpeg gif png)
  end
end
Everything works great now. I hope this helps someone else.
I tried the solutions and answers that already exist here, but they did not solve my problem, so I'm opening a new thread.
I set up a test server with these technologies on my own home server, and there it persists to Google Cloud Storage perfectly. When I deploy to a Google Cloud server with the same configuration, it does not work.
I've ruled out problems with the Fog credentials, because they work in the pre-production environment.
I've also tried file permissions of 644 and 755.
When I upload a photo, rather than being persisted to Google Cloud Storage, it lands in the root project folder with a filename such as RackMultipart20160414-4440-ry1g0r.jpg, and another copy is stored in the public/uploads/tmp folder.
I'm out of ideas. My settings are:
config/initializers/carrier_wave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'Google',
    :google_storage_access_key_id => Figaro.env.google_storage_access_key_id,
    :google_storage_secret_access_key => Figaro.env.google_storage_secret_access_key
  }
  config.fog_directory = 'uploads-prod'
  config.fog_public = true
  config.storage = :fog
  config.root = Rails.root.join('public')
end
content_uploader.rb
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
Any idea or solution?!
Thank you
I followed this tutorial:
http://lifesforlearning.com/uploading-images-with-carrierwave-to-s3-on-rails/
I had a working CarrierWave uploader which was storing files to disk.
What I did, step by step:
1) Added the fog gem and ran bundle install and bundle update.
2) In config/initializers I created an r3.rb file with this:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'AWS',
    :aws_access_key_id => 'mykey',
    :aws_secret_access_key => 'mysecretkey',
    :region => 'us-west-2' # Change this for a different AWS region.
  }
  config.fog_directory = "bucket-main"
end
I ran rails s and tried to save a photo, but my bucket remained empty, so the files must still be being stored on my disk.
What do I do now?
Update: I changed the storage to :fog.
Here is my PhotoUploader class code:
# encoding: utf-8
class PhotoUploader < CarrierWave::Uploader::Base
  storage :fog

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
And now I get this error:
hostname "bucket-main.bucket-main.s3-us-west-1.amazonaws.com" does not
match the server certificate (OpenSSL::SSL::SSLError)
I eventually solved my problem by running:
bundle update fog
and
bundle update carrierwave
Try adding path_style to your Fog credentials, and set fog_directory to just the bucket name. With path_style, Fog builds path-style URLs (s3.amazonaws.com/bucket-main/...) instead of virtual-hosted ones (bucket-main.s3.amazonaws.com/...), which avoids TLS certificate mismatches on the bucket hostname:
config.fog_credentials = {
  ...
  :path_style => true
}
config.fog_directory = 'bucket-main'
I just spent a few hours tracking down the cause of this error, which I was also getting:
hostname "bucket-main.bucket-main.s3-us-west-1.amazonaws.com" does not match the server certificate (OpenSSL::SSL::SSLError)
The odd thing is how the bucket name is repeated twice in the hostname. It turned out I had configured the wrong region name. Notice in your config.fog_credentials you have
:region => 'us-west-2'
...but the hostname in the exception has s3-us-west-1? If your bucket is in one AWS region but you configure a different region in your Fog credentials, Fog will try to follow a redirect from AWS, and somehow the bucket name gets doubled up in this situation. Fog produces a warning about the redirect, but CarrierWave ends up hiding it from you.
Set :region in your Fog credentials to where the bucket actually is in AWS, and the "does not match the server certificate" exception will stop happening.
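If you're not sure where the bucket actually lives, you can ask AWS directly (this uses the AWS CLI; the output shown is illustrative):
aws s3api get-bucket-location --bucket bucket-main
# => { "LocationConstraint": "us-west-1" }
Then set :region to that value in your Fog credentials.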
I have a Rails app that uses CarrierWave for file uploads. It has been working fine, but I want to start using Amazon S3 for my image storage. I am getting this error:
ArgumentError ( is not a recognized storage provider):
app/controllers/salons_controller.rb:52:in `update'
I have made sure I have the latest gems for Carrierwave and Fog. This is in my Gemfile:
gem 'carrierwave'
gem 'aws-sdk'
gem 'fog'
fog.rb looks like:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'AWS',
    :aws_access_key_id => 'MYACCESSKEY',
    :aws_secret_access_key => 'MYSECRETACCESSKEY',
    :region => 'us-east-1'
  }
  config.fog_directory = 'andrunix'
  config.fog_public = true
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' }
end
The Uploader class looks like:
class SalonImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::RMagick

  storage :fog

  def store_dir
    # "andrunix" is the bucket name on S3
    "andrunix/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
If I change the storage back to 'file', it works fine. Setting storage to 'fog' generates this error.
OK, I'm an idiot. :)
At some point, I don't know when, I added a fog.rb file with my CarrierWave configuration to the lib/carrierwave/storage directory. I got desperate, paid for a RailsCasts subscription so I could watch episode #383 (http://railscasts.com/episodes/383-uploading-to-amazon-s3?autoplay=true), and at 3:02 I found the error of my ways: the CarrierWave configuration needs to be placed in config/initializers/carrierwave.rb.
I don't know where I got the other location, but once I moved the config to the proper place, everything was good.
I just ran into the same problem, and people should be aware that any typo in the config file config/initializers/carrierwave.rb leads to that error.
I have two carrierwave uploaders in my application. ImageUploader is for uploading locally and ImageRemoteUploader for uploading to Amazon S3 storage using fog. ImageUploader has storage set to :file and ImageRemoteUploader has storage set to :fog. This setup works fine, but when I start to set up my rspec tests, things change.
The problem arises when I change ImageRemoteUploader to use :file storage during testing. I do this in my Fog initialization file, config/initializers/fog.rb, which looks like:
CarrierWave.configure do |config|
  if Rails.env.test?
    config.storage = :file
    config.enable_processing = false
  else
    config.fog_credentials = {
      :provider => 'AWS',                 # required
      :aws_access_key_id => 'XXXXXXXX',   # required
      :aws_secret_access_key => 'XXXXXX', # required
      :region => 'XXXX'                   # optional, defaults to 'us-east-1'
    }
    config.fog_directory = 'xxx'          # required
    config.fog_public = true
  end
end
When I do this, I get an "ArgumentError ( is not a recognized storage provider)" exception from CarrierWave. When I use the Fog credentials (that is, I don't set config.storage to :file), the tests work as expected.
Carrierwave 0.7.1, Rails 3.2.8, Ruby 1.9.3, Rspec 2.10
Thanks.
I'd try moving the config.storage and config.enable_processing lines into config/initializers/carrierwave.rb, as recommended in the CarrierWave docs.
Fog also has its own mocking support, which is enabled by running Fog.mock! before the examples. This might be a better approach.
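A minimal sketch of the Fog.mock! approach (the bucket name and credentials are placeholders; the mock is in-memory, so the bucket must be created before the examples run):
# spec/spec_helper.rb
require 'fog'

Fog.mock! # route all Fog calls to an in-memory mock instead of AWS

RSpec.configure do |config|
  config.before(:suite) do
    # The mock starts empty, so create the bucket the uploader expects.
    storage = Fog::Storage.new(
      :provider => 'AWS',
      :aws_access_key_id => 'fake',
      :aws_secret_access_key => 'fake'
    )
    storage.directories.create(:key => 'xxx')
  end
end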