Rails engine not uploading to S3 with CarrierWave

I am creating a mountable Rails engine with rails plugin new my_plugin --mountable
It took quite some work to figure out, but it is supposed to upload files to S3 with CarrierWave; the upload appears to succeed, yet nothing actually lands in the bucket.
CarrierWave generated the uploader with rails g uploader photo
The file looks like this:
# my_engine/app/uploaders/my_engine/photo_uploader.rb
# encoding: utf-8
module MyEngine
  class PhotoUploader < CarrierWave::Uploader::Base
    # Choose what kind of storage to use for this uploader:
    storage :file
    # storage :fog

    # Override the directory where uploaded files will be stored.
    # This is a sensible default for uploaders that are meant to be mounted:
    def store_dir
      "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
    end
  end
end
The model mounts the uploader with mount_uploader :photo, PhotoUploader:
module PdfGeneratorEngine
  class Assemble < ActiveRecord::Base
    attr_accessible :color, :photo, :qr_code_url, :text

    mount_uploader :photo, PhotoUploader
  end
end
My CarrierWave config file is this:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'MY_ACCESS_KEY',
    :aws_secret_access_key => 'MY_SECRET_KEY',
    :region                => 'eu-west-1'
  }
  config.fog_directory = 'my.bucket.com'
  config.fog_host = 'https://s3-eu-west-1.amazonaws.com/my.bucket.com'
  config.storage = :fog
  config.s3_use_ssl = true
  config.fog_public = true
end
First of all it complains about fog_host; it is satisfied if that is renamed to asset_host. Next it trips over s3_use_ssl, even though a fix for that was merged on CarrierWave's GitHub; the host is already defined with https://, so I don't see why that line is necessary anyway.
After that it reports success, but when I check for the file (with a daemon), there's nothing there.
What did I miss? Or is there an issue with CarrierWave and Rails mountable engines?

In your photo_uploader.rb, comment out storage :file and uncomment storage :fog:
# storage :file
storage :fog
--
Look at your fog.rb; it's inconsistent with what is given here:
carrierwave#using-amazon-s3
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',                        # required
    :aws_access_key_id     => 'xxx',                        # required
    :aws_secret_access_key => 'yyy',                        # required
    :region                => 'eu-west-1',                  # optional, defaults to 'us-east-1'
    :hosts                 => 's3.example.com',             # optional, defaults to nil
    :endpoint              => 'https://s3.example.com:8080' # optional, defaults to nil
  }
  config.fog_directory  = 'name_of_directory'                    # required
  config.fog_public     = false                                  # optional, defaults to true
  config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
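Applied to the question's values, a corrected initializer might look like the following sketch (bucket and region are taken from the question; asset_host replaces the unsupported fog_host, and the s3_use_ssl line is dropped since the host already specifies https):

CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'MY_ACCESS_KEY',
    :aws_secret_access_key => 'MY_SECRET_KEY',
    :region                => 'eu-west-1'
  }
  config.fog_directory = 'my.bucket.com'
  # asset_host supersedes fog_host in recent CarrierWave versions
  config.asset_host = 'https://s3-eu-west-1.amazonaws.com/my.bucket.com'
  config.fog_public = true
  config.storage = :fog
end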

Okay, so there is a bit of a problem with CarrierWave.
I quickly set up RightAws instead, and now it uploads to S3 and my daemon can find the file.
In my uploader I added:
s3 = RightAws::S3Interface.new('MY_KEY', 'MY_SECRET_KEY')
s3.put('my.bucket.com', assemble.photo.identifier, params[:assemble][:photo])
Thanks for your help, Nishant. CarrierWave would be a lot slicker and nicer, but it currently does not work; there is an issue on their GitHub about using it in Rails engines.

Related

Fog/Carrierwave config for Rails app on AWS Elastic Beanstalk

I'm trying to set up CarrierWave and Fog to handle image and file uploads on a Rails app that I have hosted on AWS Elastic Beanstalk.
I'm a little confused about how to properly set up the Fog config.
I tried using my AWS access and secret keys (commented out in the example below). That threw an error in my EB CLI (ERROR: NotAuthorizedError - Operation Denied. The security token included in the request is invalid.)
I'm trying to use IAM instead of having my access/secret keys in my Ruby code. Can anyone tell me how to set this up properly?
Here's my config file:
CarrierWave.configure do |config|
  # Use local storage if in development or test
  if Rails.env.development? || Rails.env.test?
    CarrierWave.configure do |config|
      config.storage = :file
    end
  end
  # Use AWS storage if in production
  if Rails.env.production?
    CarrierWave.configure do |config|
      config.storage = :fog
    end
  end
  config.fog_credentials = {
    :provider => 'AWS',                       # required
    # :aws_access_key_id => 'My Access',      # required
    # :aws_secret_access_key => 'My Secret',  # required
    :use_iam_profile => true,
    :region => 'eu-west-2'                    # optional, defaults to 'us-east-1'
  }
  config.fog_directory = 'elasticbeanstalk-us-west-2-XXXXXXXXXX' # required
  # config.fog_host = 'https://assets.example.com'               # optional, defaults to nil
  config.fog_public = false                                      # optional, defaults to true
  config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
This is a setup that works for me:
config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'                          # required
  config.fog_credentials = {
    provider: 'AWS',                                       # required
    aws_access_key_id: ENV['aws_access_key_id'],           # required
    aws_secret_access_key: ENV['aws_secret_access_key'],   # required
    # region: 'Singapore',                                 # optional, defaults to 'us-east-1'
    # host: 's3.example.com',                              # optional, defaults to nil
    # endpoint: 'olucube-images.s3-website-ap-southeast-1.amazonaws.com', # optional, defaults to nil
  }
  config.fog_directory = ENV['fog_directory']              # required
  # config.fog_public = false                              # optional, defaults to true
  # config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" } # optional, defaults to {}
end
and I used the Figaro gem to hold my credentials as follows:
config/application.yml
aws_access_key_id: 'XXXXXXXXXXXXXXXXXXXX'
aws_secret_access_key: 'XXXXXXXXXXXXXXXXXX'
fog_directory: 'myAppName'
This was a bit of a wild ride. I had a hard time figuring out the Figaro gem. It's probably simple, but I didn't really understand it, so for a test I put my keys directly in the code. It still didn't work.
I pushed my code to GitHub (publicly) and didn't think much of it. I was going to change the keys just in case, but before I could, someone found my code on GitHub and gained access to my AWS account. They started a bunch of EC2 instances and racked up $3000 worth of usage in a few hours!
My AWS account got suspended, and I'm still dealing with having the charges reversed.
Anyway, I found out that you can actually set environment variables in the Elastic Beanstalk web interface, under Configuration → Software Configuration. So I did that instead of using Figaro (much safer, IMO), and now it works great. I simplified my CarrierWave config file to only use AWS, reading the environment variables from EB. Here's the file:
# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    provider: 'AWS',
    aws_access_key_id: ENV['S3_KEY'],
    aws_secret_access_key: ENV['S3_SECRET'],
    region: ENV['S3_REGION']
  }
  config.fog_directory = ENV['S3_BUCKET']
  config.fog_public = false
  config.storage = :fog
end
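(As a side note, the same environment variables can also be set from a terminal with the EB CLI, e.g. eb setenv S3_KEY=xxx S3_SECRET=xxx S3_REGION=us-west-2 S3_BUCKET=xxx, assuming the CLI is configured for your environment.)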
I changed my uploader files to use fog too. Here's an example:
# app/uploaders/image_uploader.rb
class ImageUploader < CarrierWave::Uploader::Base
  storage :fog

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  def extension_white_list
    %w(jpg jpeg gif png)
  end
end
Everything works great now. I hope this helps someone else.

Can CarrierWave Direct upload to local storage?

Sometimes I am away from the internet and still need to work on upload pages. CarrierWave Direct seems to force storage :fog, with no way of overriding it in development.
Is it possible to tell CarrierWave Direct to use local storage (:file) and simply fall back to CarrierWave's development config settings?
Setting storage :file in the CarrierWave initializer under the development config settings doesn't work; carrierwave_direct errors with "is not a recognized provider" from <%= direct_upload_form_for @uploader do |f| %>.
I have attempted to work around carrierwave_direct, but between forcing :fog, expecting a redirect URL, and expecting the direct_upload_form_for form method, carrierwave_direct is pretty much in charge.
Using storage :file in development would be a welcome feature for the carrierwave_direct gem. Does anyone know how to cleanly do this?
I think it can be done as follows:
CarrierWave.configure do |config|
  if Rails.env.development? || Rails.env.test?
    config.storage = :file
    config.asset_host = ENV["dev_url"]
  else
    config.fog_provider = 'fog/aws'              # required
    config.fog_credentials = {
      provider: 'AWS',                           # required
      aws_access_key_id: ENV["aws_id"],          # required
      aws_secret_access_key: ENV["aws_key"],     # required
      region: ENV["aws_zone"]                    # optional, defaults to 'us-east-1'
    }
    config.fog_directory = ENV["aws_bucket"]     # required
    config.max_file_size = 600.megabytes         # defaults to 5.megabytes
    config.use_action_status = true
    config.fog_public = false                    # optional, defaults to true
    config.fog_attributes = { cache_control: "public, max-age=#{365.day.to_i}" } # optional, defaults to {}
  end
end
And in your uploader, add:
class SomeUploader < CarrierWave::Uploader::Base
  if Rails.env.development? || Rails.env.test?
    def store_dir
      "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
    end
  end
end

CarrierWave + Fog + caching

Scenario: a few users on the site have previously uploaded a logo for their site. Recently we changed the dimensions of this logo and would like all accounts to reflect the change (we've also removed retina_rails from our app). So we plan a migration that removes retina_rails and, at the same time, loops through each account and re-uploads the logos so they are all normalized.
Currently, this is what the migration looks like:
class RemoveRetinaDimensionsFromAccounts < ActiveRecord::Migration
  def change
    remove_column :accounts, :retina_dimensions, :text
  end

  ActsAsTenant.configure.require_tenant = false
  Account.all.each do |account|
    if account.logo?
      account.logo.cache_stored_file!
      account.logo.retrieve_from_cache!(account.logo.cache_name)
      account.logo.recreate_versions!(:small, :small)
      account.save!
    end
  end
  ActsAsTenant.configure.require_tenant = true
end
This is what our carrierwave.rb file looks like:
CarrierWave.configure do |config|
  if Rails.env.test?
    config.storage = :file
    config.enable_processing = false
  elsif Rails.env.development?
    config.storage = :file
    config.cache_dir = "#{Rails.root}/tmp/uploads"
  elsif Rails.env.staging?
    config.storage = :fog
    config.cache_dir = "#{Rails.root}/tmp/uploads"
    config.fog_credentials = {
      :provider => 'AWS',                                                        # required
      :aws_access_key_id => Rails.application.secrets.aws_access_key_id,         # required
      :aws_secret_access_key => Rails.application.secrets.aws_secret_access_key, # required
      :region => 'us-west-2'                                                     # optional, defaults to 'us-east-1'
    }
    config.fog_directory = 'blvd-staging' # required
    config.fog_public = false
  end
end
I've tried to follow the advice in https://github.com/carrierwaveuploader/carrierwave/wiki/How-to%3A-Recreate-and-reprocess-your-files-stored-on-fog but it is not working. I've verified that files are being written to the cache, but when I call retrieve_from_cache! I'm unable to retrieve them (the cached file does not have a name).
This is what my cached files look like:
tmp/
  uploads/
    ##########-#####-####
Thank you.
It turns out I had not placed the code inside the migration's change method, so it was never being executed.
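For illustration, the working version, with the loop moved inside change, would look something like this (same model and uploader names as the question; a sketch, not the exact migration that was run):

class RemoveRetinaDimensionsFromAccounts < ActiveRecord::Migration
  def change
    remove_column :accounts, :retina_dimensions, :text

    # Reprocess logos as part of the same migration run.
    ActsAsTenant.configure.require_tenant = false
    Account.all.each do |account|
      if account.logo?
        account.logo.cache_stored_file!
        account.logo.retrieve_from_cache!(account.logo.cache_name)
        account.logo.recreate_versions!(:small)
        account.save!
      end
    end
    ActsAsTenant.configure.require_tenant = true
  end
end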

CarrierWave image upload to S3: "hostname does not match certificate" error

I first got CarrierWave working by following the directions in this RailsCast:
http://railscasts.com/episodes/253-carrierwave-file-uploads
Then I hooked up S3 by following the directions here:
http://railgaadi.wordpress.com/2012/06/03/saving-files-in-amazon-s3-using-carrierwave-and-fog-gem/
My image_uploader.rb file:
class ImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::RMagick

  storage :fog

  def store_dir
    "development/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  version :iphone do
    process :resize_to_limit => [320, 160]
  end
end
And my fog.rb file:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'AWS',               # required
    :aws_access_key_id => 'xxx',      # required
    :aws_secret_access_key => 'xxx',  # required
  }
  config.fog_directory = 'goodlife.carrierwave' # required
end
This is the error I'm getting:
hostname "goodlife.carrierwave.s3-us-west-1.amazonaws.com" does not match the server certificate
Any advice? Thanks!
Adding :path_style => true to config.fog_credentials worked for me. I learned it from an answer to
Amazon S3 - hostname does not match the server certificate (OpenSSL::SSL::SSLError) + rails.
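In this question's fog.rb, that would look something like the following sketch (same placeholders as above; path-style requests put the bucket in the URL path, s3.amazonaws.com/bucket/key, instead of the hostname, so a dotted bucket name no longer has to match Amazon's *.s3.amazonaws.com wildcard certificate):

CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'AWS',
    :aws_access_key_id => 'xxx',
    :aws_secret_access_key => 'xxx',
    # Use path-style requests so the dotted bucket name is part of the
    # path rather than the hostname, avoiding the certificate mismatch.
    :path_style => true
  }
  config.fog_directory = 'goodlife.carrierwave'
end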
Is goodlife.carrierwave the name of your bucket?
Edit:
Remove the period from your bucket name. That should fix it.
From Amazon:
If you want to access a bucket by using a virtual hosted-style
request, for example, http://mybucket.s3.amazonaws.com over SSL, the
bucket name cannot include a period (.).
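In other words, a hypothetical rename of the bucket from goodlife.carrierwave to goodlife-carrierwave (updating config.fog_directory to match) would also satisfy the certificate.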

RSpec and CarrierWave: when changing config.storage to :file for testing, I get an "is not a recognized storage provider" ArgumentError

I have two CarrierWave uploaders in my application. ImageUploader uploads locally, and ImageRemoteUploader uploads to Amazon S3 storage using fog. ImageUploader has storage set to :file and ImageRemoteUploader has storage set to :fog. This setup works fine, but things change when I start to set up my RSpec tests.
The problem arises when I change ImageRemoteUploader to use :file storage during testing. I do this in my fog initialization file,
/config/initializers/fog.rb, which looks like:
CarrierWave.configure do |config|
  if Rails.env.test?
    config.storage = :file
    config.enable_processing = false
  else
    config.fog_credentials = {
      :provider => 'AWS',                 # required
      :aws_access_key_id => 'XXXXXXXX',   # required
      :aws_secret_access_key => 'XXXXXX', # required
      :region => 'XXXX'                   # optional, defaults to 'us-east-1'
    }
    config.fog_directory = 'xxx'          # required
    config.fog_public = true
  end
end
When I do this, I get an "is not a recognized storage provider" ArgumentError from CarrierWave. When I use the fog credentials (i.e. I don't set config.storage to :file), the test works as expected.
CarrierWave 0.7.1, Rails 3.2.8, Ruby 1.9.3, RSpec 2.10.
Thanks.
I'd try moving the config.storage and config.enable_processing lines into config/initializers/carrierwave.rb, as recommended in the CarrierWave docs.
Fog also has its own mocking support, which is enabled by running Fog.mock! before the examples. This might be a better approach.
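For the Fog.mock! route, a minimal sketch (assuming RSpec and the fog gem the question already uses, wired up in spec/spec_helper.rb):

# spec/spec_helper.rb
require 'fog'

RSpec.configure do |config|
  config.before(:suite) do
    # Send all fog storage calls to in-memory mocks instead of real AWS,
    # so uploads in specs never touch S3.
    Fog.mock!
  end
end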
