I see this error:
undefined method `root' for AWS::Rails:Module
The corresponding line in my controller:
directory_name = Rails.root.join('public', @curAdmin.name)
This worked fine until I recently added the aws-sdk gem to my application to push static files and assets over to my S3 bucket.
Now it seems like when I call "Rails" the application thinks I'm referring to an AWS class method.
I don't know how I tripped the system up to do this.
This will happen if you refer to Rails within the AWS namespace. You should be able to back out of the namespace by prepending :: to the module, i.e. ::Rails.root.join('public', @curAdmin.name)
I don't know why I tripped everything up - but if I remove the include at the top of the controller:
include AWS
And then I refer to such methods directly as:
s3 = AWS::S3.new
bucket = s3.buckets['my_bucket_here']
It works ok. I'd still like to know what I did wrong.
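For the record, the collision comes from that include: including a module adds its nested constants to the constant lookup path, so the bare name Rails resolves to AWS::Rails (a module the aws-sdk gem defines for its Rails integration) before reaching the top-level Rails. A minimal sketch of the effect (the controller name is hypothetical):

require 'aws-sdk'

class AdminsController < ApplicationController
  include AWS  # pulls AWS's nested constants into the lookup path

  def show
    Rails.root    # resolves to AWS::Rails => undefined method `root'
    ::Rails.root  # the leading :: forces the top-level Rails module
  end
end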
I'm using Amazon S3 to store Paperclip attachments in all non-test environments.
For test specifically, I use a local path/URL setup to avoid interacting with S3 remotely:
Paperclip::Attachment.default_options[:path] =
  ":rails_root/public/system/:rails_env/:class/:attachment/:id_partition/:filename"
Paperclip::Attachment.default_options[:url] =
  "/system/:rails_env/:class/:attachment/:id_partition/:filename"
I define my attachment as follows in the model:
has_attached_file :source_file, use_timestamp: false
In my production code I need to access the file using model.source_file.url, because .url returns the fully qualified remote Amazon S3 path to the file. This generally works fine in non-test environments.
However, in my test environment I can't use .url, because Paperclip creates and stores the file under the path defined by :path above, so I need to use .path there. If I use .url I get this error:
Errno::ENOENT:
No such file or directory @ rb_sysopen - /system/test/imports/source_files/000/000/030/sample.txt
which makes sense, because Paperclip didn't store the file there.
How do I get Paperclip in my test environment to store/create the file under the :url path so I can use .url correctly?
Edit: If it helps, in test I create the attachment from a locally stored fixture file:
factory :import do
  source_file { File.new(Rails.root + "spec/fixtures/files/sample.tsv") }
end
Edit2: Setting :path and :url to the same path in the initializer might seem like a quick fix, but I'm working on a larger app with several contributors, so I don't have the luxury of doing that or breaking anyone else's specs. Plus, it looks like Thoughtbot themselves recommend this setup, so there should be a "proper" way to get it working as is.
Thanks!
Have you tried using s3proxy in your test environment to simulate S3, instead of having Paperclip write directly to local files?
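Another workaround, not from the thread: branch on the attachment's storage backend so callers never need to know which environment they're in. A sketch, assuming Paperclip's options hash carries the :storage key (the helper name is made up):

# In the model: use the local path under filesystem storage (test),
# and the S3 URL everywhere else.
def source_file_location
  source_file.options[:storage] == :filesystem ? source_file.path : source_file.url
end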
My users store an external image URL (http://their-site.com/photo.jpg) in - for example - @user.external_image. I'm trying to write a method for the User class that takes that URL and saves the image to S3 using Carrierwave.
So on the above @user, I'd like to run @user.save_to_s3 and have it "upload" the image to S3. I've tried to do this by mounting an uploader on :s3_image in the User class and writing the following method:
def save_to_s3
  self.remote_s3_image_url = self.external_image
  save
end
But I get the following error when I call that method on a @user record:
"ArgumentError: Missing required arguments: aws_access_key_id, aws_secret_access_key"
So it's getting close, but it's not retrieving my S3 credentials - even though they're set. I'd appreciate any thoughts or suggestions.
The problem turned out to be unrelated to Carrierwave or Fog. It was an oversight on my part: the ENV variables I'd set (in my app's .env file) were not being loaded into the bootstrapped Rails environment (e.g. rails console). Once I added http://github.com/bkeepers/dotenv (which solves precisely that issue) to my bundle, the save_to_s3 method worked.
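For context, a typical CarrierWave/fog initializer reads those credentials from ENV, which is why a missing .env loader surfaces as the "Missing required arguments" error above. A sketch, where the ENV key names are assumptions:

# config/initializers/carrierwave.rb (sketch; ENV key names assumed)
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
  }
  config.fog_directory = ENV['S3_BUCKET']  # bucket name, assumed
end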
Since I'm currently using nginx to serve public/uploads only on an assets subdomain, and I'm also using client-side templates (eco) to render the images (so I can't use the image_tag or image_url helper methods provided by Rails), I need model.image_url (provided by CarrierWave) to return a URL on that same domain.
Here is what I had tried (in config/initializers/carrierwave.rb):
CarrierWave.configuration do |config|
  config.assets_host = "http://assets.lvh.me:3000"
end
But when I try this setting, Rails raises an error:
undefined method `assets_host=' for CarrierWave::Uploader::Base:Class (NoMethodError)
The CarrierWave README only describes this setting in the fog section, so I'm wondering: does it only work with fog? Or did I miss something?
Thanks for the help.
You should use asset_host (version > 0.7.0); see the change commit on GitHub:
https://github.com/jnicklas/carrierwave/commit/7046c93d6b23cffef9f171a5f7f0dd14267a7057#lib/carrierwave/uploader/configuration.rb
CarrierWave.configuration do |config|
  config.asset_host = "http://assets.lvh.me:3000"
end
In later versions this changed to
CarrierWave.configure do |config|
  ...
end
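Putting both corrections together, the working initializer on a newer CarrierWave should presumably look like this (sketch):

# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.asset_host = "http://assets.lvh.me:3000"
end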
I'm trying to write a general Amazon S3 uploader (it will be used mostly for images) for my Rails project. I was able to set up the environment in my console following http://amazon.rubyforge.org/.
I could follow the guide in the console fine. However, I ran into trouble when I applied it to my Rails project. When I try to access my new view, I get the following error:
NameError in UploadsController#new
uninitialized constant UploadsController::Bucket
Here is my controller:
class UploadsController < ApplicationController
  require 'aws/s3'

  def new
    photo_bucket = Bucket.find('photos')
    @photos = photo_bucket.objects
  end

  def create
    file = 'black-flowers.jpg'
    S3Object.store(file, open(file), 'photos')
  end
end
In my controller, my new action will contain the form for the upload and a list of all the photos.
My create action will just save the file. I haven't figured out how the params from the form will be passed into the controller yet, so ignore the 'black-flowers.jpg' line.
My guess is that I did not establish a connection in the controller.
How do I establish a connection or fix this error?
Thanks for looking.
Bucket is not a top level constant in this case. You probably want the AWS::S3::Bucket constant for new, and I'd assume something similar for S3Object.
Note that you may also want to look into the Fog library for this.
The fact that you haven't figured out how params will be passed in implies that you may also want to work through the Rails tutorials without S3 first.
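To make that concrete, with the aws-s3 gem from that guide the controller needs a connection plus fully qualified constants, along these lines (a sketch; the ENV key names are assumptions):

class UploadsController < ApplicationController
  require 'aws/s3'

  def new
    # Connect before touching buckets; credentials assumed to live in ENV.
    AWS::S3::Base.establish_connection!(
      :access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
      :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
    )
    @photos = AWS::S3::Bucket.find('photos').objects
  end
end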
I had a similar issue; it was solved by just checking that all required files were present and restarting the server.
I have uploaded a file to S3 using Paperclip, and the upload process works fine.
Now I want to download it. In my model I have set :s3_host_alias. Since the file is private, trying to fetch it using Paperclip's url method gives me an Access Denied error.
If I use the S3Object.url_for method instead, the URL returned is s3.amazonaws.com/mybucket/path_of_file.
I don't want that s3.amazonaws.com shown in the URL, which is why I used :s3_host_alias in my model and created a CNAME in my DNS server. Now if I use @object.url directly, it gives the correct URL but throws Access Denied, I guess because the access key and signature are not passed.
Is there a way to fetch a private file from S3 with Paperclip using the canonical URL?
I don't use Paperclip, but yes, you can sign an S3 request using a virtual hostname.
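To illustrate what that signing involves (my sketch, not Paperclip's API): with the SigV2-era query-string authentication these gems used, a CNAME-style URL can be signed by hand. Note that with a CNAME the bucket is named after the full hostname, and that full name goes into the signed resource:

require 'openssl'
require 'base64'
require 'cgi'

# Sketch: sign a GET for a private object behind a CNAME/virtual host.
# bucket_host doubles as the bucket name (S3's CNAME convention).
def signed_url(bucket_host, key, access_key_id, secret_access_key, expires_in = 300)
  expires = Time.now.to_i + expires_in
  string_to_sign = "GET\n\n\n#{expires}\n/#{bucket_host}/#{key}"
  signature = Base64.strict_encode64(
    OpenSSL::HMAC.digest('sha1', secret_access_key, string_to_sign)
  )
  "http://#{bucket_host}/#{key}?AWSAccessKeyId=#{access_key_id}" \
    "&Expires=#{expires}&Signature=#{CGI.escape(signature)}"
end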
I had this problem using Paperclip and the AWS::S3 gem. Paperclip set up everything fine for non-authenticated requests. But falling back to AWS::S3 to generate an authenticated URL didn't use the S3 host alias.
You can pass AWS::S3 a server option on connect, but I didn't need or want a connection just to get the URL. I also couldn't see a way to set it via configuration (so it would apply outside of a connection). Even glancing at the source, it looks like it's non-configurable.
So, I created a monkey patch. My Ruby-fu (and maybe my OO-fu) aren't super high, so there may be a better way to do this, but it works for what I need. Basically, I pass url_for an :s3_host_alias param in the options hash; if it's present, the monkey patch uses it and also removes the bucket from the generated path.
So....
You can create this one-line file, RAILS_ROOT/config/initializers/load_patches.rb, to load all patches in RAILS_ROOT/lib/patches:
Dir[File.join(Rails.root, 'lib', 'patches', '**', '*.rb')].sort.each { |patch| require(patch) }
Then create the file RAILS_ROOT/lib/patches/aws.rb with this code:
http://pastie.org/1622881
And you can call for an authenticated URL with something along these lines (Configuration is a custom class for storing, natch, configuration values):
AWS::S3::S3Object.url_for(
  media.path(style || media.default_style),
  media.bucket_name,
  :expires_in    => expires_in,
  :use_ssl       => false,
  :s3_host_alias => Configuration.s3_host_alias
)