Using Paperclip, Fog, and Ceph - ruby-on-rails

I'm writing a Rails 3 app that uses Paperclip to transcode a video file attachment into a bunch of other formats, and then to store the resulting files. It all works fine for local storage, but I am trying to make it work using Paperclip's Fog support to store files in a bucket on our own Ceph cluster. However, I can't seem to find the right configuration options to make Fog talk to my Ceph server.
Here is a snippet from my Rails class:
has_attached_file :videofile,
  :storage => :fog,
  :fog_credentials => { :aws_access_key_id => 'xxx', :aws_secret_access_key => 'xxx', :provider => 'AWS' },
  :fog_public => true,
  :url => ":id/:filename",
  :fog_directory => 'replay',
  :fog_host => 'my-hostname'
Writes using this setup fail because Paperclip attempts to save to Amazon S3 rather than the host I've provided. I have a non-Rails / non-Paperclip toy script working just fine:
conn = Fog::Storage.new({
  :aws_access_key_id => 'xxx',
  :aws_secret_access_key => 'xxx',
  :host => 'my-hostname',
  :path_style => true,
  :provider => "AWS",
})
This correctly connects to my local Ceph server. So I suspect there is something I'm not configuring in Paperclip properly - but what?
Here's the relevant hunk from fog.rb that I think is causing the connection to only go to AWS:
def host_name_for_directory
  if @options[:fog_directory].to_s =~ Fog::AWS_BUCKET_SUBDOMAIN_RESTRICTON_REGEX
    "#{@options[:fog_directory]}.s3.amazonaws.com"
  else
    "s3.amazonaws.com/#{@options[:fog_directory]}"
  end
end

It turns out the error was just from an improperly configured Ceph cluster. For anyone who finds this thread, Paperclip will work out of the box as long as you:
- have wildcard DNS set up properly for your Ceph frontend;
- have Ceph configured to recognize that wildcard domain;
- pass :host in :fog_credentials, set to the FQDN of the Ceph frontend;
- pass :fog_host, which apparently needs to be the URL for your bucket, e.g. https://bucket.ceph-server.foobar.com (see the sketch below).
I don't think it is documented anywhere that you can use :host, but it works.
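For reference, a minimal sketch of what the combined Paperclip options might look like under those assumptions (hostnames, bucket name, and keys are placeholders; Paperclip hands :fog_credentials straight to Fog::Storage.new, which is why :host and :path_style behave here just as they do in the toy script):
has_attached_file :videofile,
  :storage => :fog,
  :fog_credentials => {
    :provider              => 'AWS',
    :aws_access_key_id     => 'xxx',
    :aws_secret_access_key => 'xxx',
    :host                  => 'ceph-server.foobar.com',  # FQDN of the Ceph frontend
    :path_style            => true
  },
  :fog_directory => 'replay',
  :fog_public    => true,
  :url           => ":id/:filename",
  :fog_host      => 'https://replay.ceph-server.foobar.com'  # URL used for generated links to the bucket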

Related

Rails 4 x AWS S3: "This content should also be served over HTTPS."

In my Rails 4 app, I use the paperclip gem to allow users to upload images.
The images are stored on AWS S3.
Here is my configuration in config/environments/production.rb:
config.paperclip_defaults = {
  :storage => :s3,
  :s3_credentials => {
    :bucket => ENV['S3_BUCKET_NAME'],
    :access_key_id => ENV['AWS_ACCESS_KEY_ID'],
    :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
  }
}
This was working perfectly fine before I implemented an SSL certificate.
Now that my app — in production — is set with HTTPS, I get the following error in the console:
Mixed Content: The page at 'https://www.domain.com/' was loaded over HTTPS, but requested an insecure image 'http://s3.amazonaws.com/app/model/images/000/000/003/small_thumb/Profile_Picture.png?1448899439'. This content should also be served over HTTPS.
This does not "break" the app, but I would like to make things run properly.
How can I fix this?
Tell Paperclip to produce HTTPS URLs by adding this option to your Paperclip options hash:
:s3_protocol => :https
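With the configuration from the question, that would be something like the following (only the :s3_protocol line is new; everything else is unchanged):
config.paperclip_defaults = {
  :storage => :s3,
  :s3_protocol => :https,  # generate https:// attachment URLs
  :s3_credentials => {
    :bucket => ENV['S3_BUCKET_NAME'],
    :access_key_id => ENV['AWS_ACCESS_KEY_ID'],
    :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
  }
}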

Paperclip/Rails not reading ENV variables with AWS information

I am trying to use Paperclip with AWS S3. I have set the secret keys and bucket name in my .bash_profile, but Rails/Paperclip cannot seem to read them, as I get the following error when I try to upload an image in development...
'missing required :bucket option'
If I replace ENV['S3_BUCKET_NAME'] with the actual name, I then get the following...
'Missing Credentials. Unable to find AWS credentials'
Here is the set-up in the model...
has_attached_file :image,
  :styles => { :medium => "300x300>", :thumb => "100x100>" },
  :storage => :s3,
  :s3_credentials => {
    :bucket => ENV['S3_BUCKET_NAME'],
    :access_key_id => ENV['AWS_ACCESS_KEY_ID'],
    :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
  },
  :s3_host_name => 's3-eu-west-1.amazonaws.com'
I have tried putting the credentials in the Rails development config file, but I get the same errors. Do I need to tell Rails where to look for the ENV variables, or am I doing something else wrong here? It seems others are having similar issues, but I cannot find a solution.
Thanks for reading.
Just store the AWS bucket, key, and key ID in a YAML file, then create a new file in config/initializers that reads the YAML contents and sets the corresponding ENV variables. Don't forget to restart your Rails server.
Example
google_client = YAML.load_file("#{Rails.root.join('config/google_client.yml')}")
ENV['GOOGLE_APP_NAME'] = google_client['APP_NAME']
ENV['GOOGLE_CLIENT_ID'] = google_client['CLIENT_ID']
ENV['GOOGLE_CLIENT_SECRET'] = google_client['CLIENT_SECRET']
ENV['GOOGLE_CLIENT_SCOPE'] = google_client['CLIENT_SCOPE']
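Applying the same pattern to the S3 variables from the question might look like this (the config/aws.yml filename and its keys are placeholders, not something Paperclip requires):
# config/initializers/aws.rb
aws = YAML.load_file(Rails.root.join('config/aws.yml'))

ENV['S3_BUCKET_NAME']        = aws['S3_BUCKET_NAME']
ENV['AWS_ACCESS_KEY_ID']     = aws['AWS_ACCESS_KEY_ID']
ENV['AWS_SECRET_ACCESS_KEY'] = aws['AWS_SECRET_ACCESS_KEY']
with config/aws.yml containing those three keys and their values.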
Typically, we do not push the AWS info to GitHub; we just SSH to the remote server and place the YAML file in the config folder of the Rails app. That way only people with access to your server can read your AWS credentials, rather than everyone with access to the GitHub repo.
Tell me if this solves your problem.

S3 integration with Spree Commerce on Heroku

I know this is a topic that has been covered before, but I am currently unable to get this working. I have a vanilla local installation of the Spree Commerce engine (version 2.3.3), which I have deployed to Heroku with a view to getting it working before embarking on any customisation.
I have followed the advice of Daniel Pritchett in a similar thread and used the configuration he suggests at https://gist.github.com/dpritchett/c86f6b617d784f943096, so I have a spree_images_paperclip.rb file that looks like this:
Spree.config do |config|
  attachment_config = {
    s3_credentials: {
      access_key_id: ENV["AWS_ACCESS_KEY_ID"],
      secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
      bucket: ENV["AWS_DEV_BUCKET"],
    },
    s3_host_name: 's3-eu-west-1.amazonaws.com',
    storage: :s3,
    s3_headers: { "Cache-Control" => "max-age=31557600" },
    s3_protocol: "https",
    bucket: ENV["AWS_DEV_BUCKET"],
    styles: {
      mini: "48x48>",
      small: "100x100>",
      product: "240x240>",
      large: "600x600>"
    },
    path: ":rails_root/public/spree/products/:id/:style/:basename.:extension",
    default_url: "/spree/products/:id/:style/:basename.:extension",
    default_style: "product",
  }
  attachment_config.each do |key, value|
    Spree::Image.attachment_definitions[:attachment][key.to_sym] = value
  end
end unless Rails.env.test?
All config variables are set in a separate YAML file. This all seems to work fine locally, but when I deploy to Heroku, it crashes when I attempt to upload an image through the Spree admin console. Unfortunately, the generated Heroku logs are very unhelpful, showing only a 500 internal server error.
Does anyone have a good explanation as to why this is not working, or alternatively know where a good, up-to-date guide exists? Spree has yet to update its guide since removing S3 support from the admin.
Thanks in advance!
Paul
As of version 2.4, Spree has removed support for paperclip and S3 configuration, so you have to configure the relevant gems and tools separately. In the code you provide in your question, you are still trying to assign configuration data to Spree's config, instead of to Paperclip's.
In config/application.rb:
config.paperclip_defaults = {
:storage => :s3,
:s3_credentials => {
:bucket => ENV['S3_BUCKET'],
:access_key_id => ENV['S3_ACCESS_KEY'],
:secret_access_key => ENV['S3_SECRET']
}
}
Also, be sure to include the aws-sdk gem in your Gemfile to enable Paperclip's S3 support; Spree no longer includes it for you.
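For example (the version constraint here is an assumption based on Paperclip releases prior to 5.x only supporting the v1 aws-sdk API; check what your Paperclip version expects):
# Gemfile
gem 'aws-sdk', '< 2.0'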
(If you are using Heroku, they have a guide specifically for configuring Paperclip)

Amazon s3 bucket name warning from Fog gem

I get this warning whenever I start the Rails server or console:
[WARNING] fog: the specified s3 bucket name(hesaplabakalim-production/assets/new_opengraph) is not a valid dns name, which will negatively impact performance.
My Fog configuration looks like this:
connection = Fog::Storage.new({
  :provider => 'AWS',
  :aws_access_key_id => "dummy",
  :aws_secret_access_key => "dummy"
})

$directory = connection.directories.create(
  :key => "dummy/assets/new_opengraph",
  :public => true
)
What I actually want is to create a bucket named dummy and then write into its assets/new_opengraph folder, but I could not find how to do that in the Fog documentation.
I searched the fog gem's GitHub issues and found this issue and solution:
"on the empty folder, I have used zero byte files that we hide in the console that gives the semblance of creating empty folders. Cheap trick but works."
https://github.com/fog/fog/issues/1370
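In other words, use only the bucket name as the directory key and push the "folder" into the object keys. A sketch of that workaround (bucket and key names are placeholders):
connection = Fog::Storage.new({
  :provider => 'AWS',
  :aws_access_key_id => "dummy",
  :aws_secret_access_key => "dummy"
})

# The directory key must be a plain, DNS-valid bucket name -- no slashes.
directory = connection.directories.create(:key => "dummy", :public => true)

# "Folders" on S3 are just key prefixes; a zero-byte object makes the
# prefix show up in consoles that display folders.
directory.files.create(
  :key => "assets/new_opengraph/.keep",
  :body => "",
  :public => true
)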

aws-s3 error: AWS::S3::MissingAccessKey error, but keys have been defined?

I'm pretty new to Ruby on Rails. I've recently deployed an app on Heroku and have tried to add an attachment function to the app via Paperclip.
I've followed all the steps for adding aws-s3 to my app. Here was my initial code:
user.rb (model)
has_attached_file :avatar,
  :styles => { :small => "70x70>" },
  :storage => :s3,
  :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
  :path => ":attachment/:id/:style/:basename.:extension"

validates_attachment_size :avatar, :less_than => 1.megabytes
validates_attachment_content_type :avatar, :content_type => ['image/jpeg', 'image/png']
s3.yml (located in the config folder; note: all of these buckets exist on my AWS S3 account)
development:
  bucket: my_avatar-dev
  access_key_id: amazonaccesskey
  secret_access_key: amazon_secret_access_key

test:
  bucket: myapp_avatar-test
  access_key_id: amazonaccesskey
  secret_access_key: amazon_secret_access_key

production:
  bucket: myapp_avatar-pro
  access_key_id: amazonaccesskey
  secret_access_key: amazon_secret_access_key
gemfile
gem 'aws-s3'
When running this configuration, I would get a 500 error page when loading my app. Running the Heroku logs showed the following error: AWS::S3::MissingAccessKey (You did not provide both required access keys.
So I followed some advice and defined the key and secret_key as environment variables to heroku, using the following line of code:
heroku config:add S3_KEY=amazonaccesskey S3_SECRET=amazon_secret_key
I then added an initializer that tests the environment and loads the credentials either from the ENV keys or from the .yml file, depending on the environment; the code is as follows:
initializers/s3.rb
if Rails.env == "production"
  # set credentials from ENV hash
  S3_CREDENTIALS = { :access_key_id => ENV['S3_KEY'], :secret_access_key => ENV['S3_SECRET'], :bucket => "myapp_avatar-pro" }
else
  # get credentials from YML file
  S3_CREDENTIALS = Rails.root.join("config/s3.yml")
end
The user.rb model was then updated to the following:
has_attached_file :avatar, :storage => :s3, :s3_credentials => S3_CREDENTIALS
I then deployed to Heroku and tested the app, but I still keep getting the same error (500 page) and error code: AWS::S3::MissingAccessKey (You did not provide both required access keys.
How is this possible if I have defined the variables in Heroku? Is there something I am missing? Is it possible it's something with the gem? Also, I'm using HAML for styling... not sure that matters at all, but just in case it does. I'm quite lost, so any help would be greatly appreciated. Thank you so much!
Having just worked through the same problem and trawled a number of similar posts, I found that any of the possible configurations above (declaring all of the hashes in the model, using the .yml, or using the initializer) work fine from my dev environment and on Heroku, as long as the S3 bucket is of the US Standard type. The choice is just about how DRY you want to be.
When I originally set S3 up, I used a European bucket. This gave me the spurious error message:
AWS::S3::MissingAccessKey (You did not provide both required access keys.
I note from the AWS site http://docs.amazonwebservices.com/general/latest/gr/index.html?rande.html
that AWS uses a specific endpoint address for each region to reduce latency, and am guessing (because I am a novice coder) that US Standard is either the default or hard-coded into the aws-s3 plugin. (Maybe someone can edit this up into a more complete answer?)
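If you do want to keep a non-US-Standard bucket, the usual workaround is to point Paperclip at the regional endpoint via :s3_host_name, as the earlier question in this thread does (this assumes a Paperclip version that supports that option; eu-west-1 is just an example region):
has_attached_file :avatar,
  :storage => :s3,
  :s3_credentials => S3_CREDENTIALS,
  :s3_host_name => 's3-eu-west-1.amazonaws.com'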
I solved this problem with this:
:s3_credentials => {
  :access_key_id => 'mykey',
  :secret_access_key => 'mykey'
}
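Hard-coding the keys is fine for confirming the diagnosis, but the same hash can read from the Heroku config vars set earlier so the keys stay out of version control:
:s3_credentials => {
  :access_key_id => ENV['S3_KEY'],
  :secret_access_key => ENV['S3_SECRET']
}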
