I'm trying to set up Amazon S3 support to store images in the cloud with RefineryCMS.
I created the bucket at https://console.aws.amazon.com/s3/
I named it after the app, 'bee-barcelona', and it says it is in region US Standard.
In ~/config/initializers/refinery/images.rb I entered all the data (where 'xxx' stands for the actual keys I entered):
# Configure S3 (you can also use ENV for this)
# The s3_backend setting by default defers to the core setting for this but can be set just for images.
config.s3_backend = Refinery::Core.s3_backend
config.s3_bucket_name = ENV['bee-barcelona']
config.s3_access_key_id = ENV['xxx']
config.s3_secret_access_key = ENV['xxx']
config.s3_region = ENV['xxx']
Then I applied the changes to Heroku with:
heroku config:add S3_KEY=xxx S3_SECRET=xxx S3_BUCKET=bee-barcelona S3_REGION=us-standard
But still, in the app I only get "Sorry, something went wrong" when I try to upload.
What did I miss?
What a sad error. I didn't think about that option till I went for a 10 km run…
I had the app set up to be "beekeeping"
My bucket on Amazon was named "bee-barcelona"
I did register the correct bucket in the app. Still, Refinery kept trying to go to another person's bucket, named "beekeeping". With my secret key there was no way my files would end up there.
I created a new app and a new bucket, all with crazy names, BUT! They are the same on Amazon S3 and in Git!
Now it works like a charm.
What a very rare situation...
The way I did it was as follows:
Create a bucket in region US-STANDARD!
Did you see that? US-STANDARD, not Oregon, not anywhere else.
Add these gems to the Gemfile:
gem "fog"
gem "unf"
gem "dragonfly-s3_data_store"
In config/application.rb
config.assets.initialize_on_precompile = true
In config/environments/production.rb
Refinery::Core.config.s3_backend = true
In config/environments/development.rb
Refinery::Core.config.s3_backend = false
Configure S3 for Heroku (production) and local storage for development, in config/initializers/refinery/core.rb:
if Rails.env.production?
  config.s3_backend = true
else
  config.s3_backend = false
end
config.s3_bucket_name = ENV['S3_BUCKET']
config.s3_region = ENV['S3_REGION']
config.s3_access_key_id = ENV['S3_ACCESS_KEY']
config.s3_secret_access_key = ENV['S3_SECRET_KEY']
Add the variables to Heroku:
heroku config:add S3_ACCESS_KEY=xxxxxx S3_SECRET_KEY=xxxxxx S3_BUCKET=bucket-name-here S3_REGION=us-east-1
I had a lot of issues because I previously had S3_REGION=us-standard. This is WRONG: "US Standard" is just the console's name for the us-east-1 region, so set it as shown:
S3_REGION=us-east-1
This worked flawlessly for me on Rails 4.2.1 and Refinery 3.0.0. Also, make sure you are using exactly the same names for the variables everywhere. Some guides say S3_KEY instead of S3_ACCESS_KEY, or S3_SECRET instead of S3_SECRET_KEY. Just make sure the names in your files match your Heroku variables.
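Running heroku config will print exactly what is set. And if you want the initializer to tolerate either naming convention, here is a defensive sketch (the fallback names are assumptions, not something Refinery requires):

# config/initializers/refinery/core.rb — a sketch; falls back to the
# alternate variable names in case your Heroku config uses those instead
config.s3_access_key_id     = ENV['S3_ACCESS_KEY'] || ENV['S3_KEY']
config.s3_secret_access_key = ENV['S3_SECRET_KEY'] || ENV['S3_SECRET']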
I have a Rails app with Paperclip, and I use Google Cloud Storage. So far so good.
To avoid having development and production share the same storage, I decided to change the default Paperclip path to one based on the environment, so that every environment has its own directory. Then I moved the old images from the default Paperclip path to the new ones.
The problem is that now old images give a 404, whereas any new image I upload works properly. Is there any way to fix that?
Here are the previous settings:
module MyApp
  class Application < Rails::Application
    config.paperclip_defaults = {
      storage: :fog,
      fog_public: true,
      fog_directory: 'myapp-01',
      fog_credentials: {
        google_storage_access_key_id: ENV['GOOGLE_STORAGE_ID'],
        google_storage_secret_access_key: ENV['GOOGLE_STORAGE_SECRET'],
        provider: 'Google'
      }
    }
  end
end
I overrode the defaults with the following settings:
path: ":rails_env/:class/:attachment/:id_partition/:style/:filename",
url: "/:rails_root/:class/:attachment/:id_partition/:style/:filename"
My guess is that it's not sufficient to update the Paperclip config with the new path and move all images to the new directory. You also need to update the old records...
If you wonder, the old records point to root/images/?123456789.
Your guess is right. Changing the config is not enough; you also need to move the files. This is better left to a rake task or background job. I have some code for S3, but it should give you an idea of how to implement it for Google:
def old_key(image, file_name_field)
  # Previous `:path`: '/:class/:attachment/:id/:style/:filename'
  klass = self.class.to_s.pluralize.downcase
  attachment = image.pluralize
  "#{klass}/#{attachment}/#{id}/original/#{send(file_name_field)}"
end

def re_path(image)
  file_name_field = "#{image}_file_name"
  return if send(file_name_field).blank?

  old_object = bucket.object(old_key(image, file_name_field))
  return unless old_object.exists?

  Rails.logger.warn "Re-saving image attachment #{self.class}/#{id}"
  send "#{image}=", URI.parse(old_object.public_url)
  save
end
I'm basically building the old path using my own interpolation, finding the object in S3 (hence the key/object lingo) and re-downloading every image from S3. Be careful with this, since you might incur extra cost for downloading rather than just moving, if moving is something Google allows.
Then I just called this method on every image for every object:
Object.find_each { |o| o.re_path(:logo); o.re_path(:background) }
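The bucket helper used in re_path isn't shown above; a minimal sketch with aws-sdk v2 (the ENV names here are assumptions, and credentials are picked up from the default chain):

def bucket
  # Memoize a handle to the bucket; region and bucket name are placeholders
  @bucket ||= Aws::S3::Resource.new(region: ENV['AWS_REGION']).bucket(ENV['S3_BUCKET'])
end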
I was stuck all afternoon on checking whether an uploaded file exists on AWS S3 or not. I'm using Ruby on Rails and the gem called aws-sdk, v2.
First of all - the file exists in the bucket, it is located here:
test_bucket/users/10/file_test.pdf
There's no typo, this is the exact path. Also, the bucket + credentials are set up correctly.
And here's how I try to check the existence of the file:
config = {
  region: 'us-west-1',
  bucket: AWS_S3_CONFIG['bucket'],
  key: AWS_S3_CONFIG['access_key_id'],
  secret: AWS_S3_CONFIG['secret_access_key']
}

Aws.config.update(
  region: config[:region],
  credentials: Aws::Credentials.new(config[:key], config[:secret]),
  s3: { region: 'us-east-1' }
)
bucket = Aws::S3::Resource.new.bucket(config[:bucket])
puts bucket.object("file_test.pdf").exists?
The output is always false.
I also tried puts bucket.object("test_bucket/users/10/file_test.pdf").exists?, but still false.
Also, I tried making the file public in the AWS S3 dashboard, but no success, still false. The file is visible when I click the generated link.
But when I check with aws-sdk whether the file exists, the output is still false.
What am I doing wrong?
Thank you.
You need to pass the full path of the object, not including the bucket name: users/10/file_test.pdf
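In other words, the key is everything after the bucket name. With the paths from the question:

# Key within the bucket, without the bucket name
puts bucket.object("users/10/file_test.pdf").exists?
# => true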
In my Rails 4 application I'm trying to download and then re-upload a regular png file to my S3 bucket using the aws-sdk gem (gem 'aws-sdk', '~> 2').
In the development environment, the code works fine. But if I run rails s -e production, or if I test the upload on my Heroku instance, I get the following error when I exercise the image upload functionality:
Seahorse::Client::NetworkingError (Connection reset by peer):
app/helpers/aws_s3.rb:73:in `upload_to_s3'
app/controllers/evaluations_controller.rb:19:in `test'
My upload_to_s3 method mentioned in the trace looks like:
def upload_to_s3(folder_name)
  url = "http://i.imgur.com/WKeQQox.png"
  filename = "ss-" + DateTime.now.strftime("%Y%d%m-%s") + "-" + SecureRandom.hex(4) + ".png"
  full_bucket_path = Pathname(folder_name.to_s).join(filename).to_s
  file = save_to_tempfile(url, filename)
  s3 = Aws::S3::Resource.new(access_key_id: ENV["IAM_ID"], secret_access_key: ENV["IAM_SECRET"], region: 'us-east-1')
  s3_file = s3.bucket(ENV["BUCKET"]).object(full_bucket_path)
  s3_file.upload_file(file.path)
  raise s3_file.public_url.to_s.inspect
end
The environment variables are the same between both environments. I don't really know where else to turn to debug this. Why would it work in development, but not in production? I have a feeling I'm missing something pretty obvious.
UPDATE:
Let's simplify this further, since I'm not getting much feedback.
s3 = Aws::S3::Resource.new
bucket = s3.bucket(ENV["BUCKET"])
bucket.object("some_file.txt").put(body:'Hello World!')
The above works in my development environment, but not in production. In production it fails when I call put(body: 'Hello World!'). I know this is probably related to write permissions or something, but again, I've checked my environment variables and they are identical. Is there some configuration I'm not aware of that I should be checking?
I've tried using a new IAM user. I also temporarily copied the entire contents of development.rb over to production.rb just to see if the configuration for the development or production was affecting it, to no avail. I ran bundle update as well. Again, no luck.
I wish the error was more descriptive, but it just says Seahorse::Client::NetworkingError (Connection reset by peer) no matter what I try.
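If anyone wants more detail than the bare NetworkingError, aws-sdk v2 can log the raw HTTP exchange via its wire-trace option; a sketch:

require 'logger'

# Very verbose: logs every request/response aws-sdk sends on the wire
Aws.config.update(
  logger: Logger.new($stdout),
  http_wire_trace: true
)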
Well, I never found the solution to this problem and had to resort to other options since I was on a deadline. I'm assuming it's a bug on Amazon's end or in the aws-sdk gem, because I have checked my configuration many times and it is correct.
My workaround was to use the fog gem, which is actually very handy. After adding gem 'fog' to my Gemfile and running bundle install, my code now looks like this:
require 'open-uri' # needed so open(url) can fetch a remote file

def upload_to_s3(folder_name)
  url = "http://i.imgur.com/WKeQQox.png" # restored from the aws-sdk version above; it was missing here
  filename = "ss-" + DateTime.now.strftime("%Y%d%m-%s") + "-" + SecureRandom.hex(4) + ".png"
  full_bucket_path = Pathname(folder_name.to_s).join(filename).to_s
  image_contents = open(url).read
  connection = Fog::Storage.new({
    :provider => 'AWS',
    :aws_access_key_id => ENV["AWS_ACCESS_KEY_ID"],
    :aws_secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"]
  })
  directory = connection.directories.get(ENV["BUCKET"])
  file = directory.files.create(key: full_bucket_path, public: true)
  file.body = image_contents
  file.save
  return file.public_url
end
It's simple enough and was a breeze to implement. I wish I knew what was going wrong with the aws-sdk gem, but for anyone else who has problems, give fog a go.
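Calling it is unchanged; with a hypothetical folder name:

# Returns the S3 public URL of the uploaded file
image_url = upload_to_s3('screenshots')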
I am using Rails, CarrierWave and Heroku, but right now I don't have an S3 account, so I used this configuration:
How to: Make Carrierwave work on Heroku
It worked very well for files uploaded by users, but it didn't work for files uploaded through seeds.
I am using this syntax:
book.cover = File.open(File.join(Rails.root, 'photo.jpg'))
book.save!
Try doing this instead:
file = File.open(File.join(Rails.root, 'photo.jpg'))
book.cover = file
file.close
book.save!
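For seeds specifically, the block form of File.open gets the same effect more tidily, since it closes the handle for you. A sketch using the names from the question:

# db/seeds.rb — Book and photo.jpg are the names from the question
book = Book.new
File.open(Rails.root.join('photo.jpg')) do |f|
  book.cover = f # CarrierWave copies the file into its cache on assignment
end
book.save!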
I have an app that works fine on my development machine, but on my production server it serves a broken link to an image handled by the Paperclip gem.
The production environment is Linux (Debian), Apache and Passenger, and I am deploying with Capistrano.
The app is stored in (a symlink that points to the public folder of the current version of the app deployed using capistrano):
/var/www/apps/root/appname
However, when I try to access it on the production server, the Apache error log shows this as the path it is looking in:
/var/www/apps/root/system
The correct path, however, is:
/var/www/apps/appname/shared/system
One option available to me is to create a symlink in root that directs system to the correct path, but I don't want to do this in case I want to deploy another app in the same root dir.
The URL for this request is generated by Rails, but Apache is what fetches the static resource (image files), so I have tried placing the following in my config/environments/production.rb:
ENV["RAILS_RELATIVE_URL_ROOT"] = '/appname/'
This has resolved all the other path issues I've been experiencing, but when Rails generates the URL (via the Paperclip gem), it doesn't seem to use it.
How can I make Paperclip use the right path, and only in production?
I have a workaround: add this as an initializer,
config/initializers/paperclip.rb
Paperclip::Attachment.class_eval do
  def url(style_name = default_style, options = {})
    if options == true || options == false # Backwards compatibility.
      res = @url_generator.for(style_name, default_options.merge(:timestamp => options))
    else
      res = @url_generator.for(style_name, default_options.merge(options))
    end
    # Prepend the site-relative URL root to res, minus its trailing slash
    Rails.application.config.site_relative_url[0..-2] + res
  end
end
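Note that site_relative_url is not a standard Rails or Paperclip setting; the patch assumes you define it yourself, for example in config/application.rb (keep the trailing slash, since the patch strips the last character):

# config/application.rb — custom setting consumed by the patch above
config.site_relative_url = '/appname/'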
At the moment Paperclip doesn't work with ENV['RAILS_RELATIVE_URL_ROOT']. You can follow the issue here:
https://github.com/thoughtbot/paperclip/issues/889