I'm having a fair bit of trouble with a Rails app that uses Paperclip for attachments stored in S3 and delivered with CloudFront. I've configured Paperclip with a timestamp interpolation. Everything works as expected on staging. In production, however, the filename actually stored uses a timestamp that is one second off, and I have no idea why.
Paperclip Configuration
if configatron.aws.enabled
  Paperclip::Attachment.default_options[:storage] = :s3
  Paperclip::Attachment.default_options[:s3_credentials] = {
    access_key_id: configatron.aws.access_key,
    secret_access_key: configatron.aws.secret_key
  }
  Paperclip::Attachment.default_options[:protocol] = 'https'
  Paperclip::Attachment.default_options[:bucket] = configatron.aws.bucket
  Paperclip::Attachment.default_options[:s3_host_alias] = configatron.aws.cloudfront_host
  Paperclip::Attachment.default_options[:url] = ":s3_alias_url"

  # Change the default path
  Paperclip::Attachment.default_options[:path] = "images/:class/:attachment/:id_partition/:style-:timestamp-:filename"

  Paperclip.interpolates(:timestamp) do |attachment, style|
    attachment.instance_read(:updated_at).to_i
  end
end
Example record
> alumni = Alumni.last
> alumni.updated_at.to_i
=> 1428719699
> alumni.thumb.url
=> "http://d2kektcjb0ajja.cloudfront.net/images/alumnis/thumbs/000/000/664/original-1428719699-diana-zeng-image.jpg?1428719699"
Notice that in the above IRB session, the updated_at timestamp matches the timestamp in the filename, which is correct. However, the file actually stored in S3 is thumb-1428719698-diana-zeng-image.jpg; notice that its timestamp is one second off. This means the URL above is not found.
This only happens in production; on our staging environment it works perfectly. I have no idea why this would happen.
Can anyone help?
Thanks!
Leonard
Resolved by upgrading Rails to >= 4.2.1.
The problem is that Rails 4.2.0 preserves fractional seconds on timestamps, so the timestamp interpolated when the file is written and the one read back later can differ by a second once the value is rounded to the nearest whole second; that rounding is what causes the drift.
https://github.com/thoughtbot/paperclip/issues/1772
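To see why a fractional timestamp can drift like this, compare truncation with rounding on the same value; a minimal sketch with illustrative numbers (the off-by-one appears whenever one side truncates and the other rounds):
# Illustration only: #to_i truncates sub-second precision, while rounding to
# the nearest second can land one second later for the same instant.
t = Time.at(1428719698.7)
t.to_i        # => 1428719698  (truncated)
t.round.to_i  # => 1428719699  (rounded to the nearest second)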
I've been stuck all afternoon on checking whether an uploaded file exists on AWS S3. I use Ruby on Rails and the aws-sdk gem, v2.
First of all - the file exists in the bucket, it is located here:
test_bucket/users/10/file_test.pdf
There's no typo, this is the exact path. Also, the bucket + credentials are set up correctly.
And here's how I try to check the existence of the file:
config = {
  region: 'us-west-1',
  bucket: AWS_S3_CONFIG['bucket'],
  key: AWS_S3_CONFIG['access_key_id'],
  secret: AWS_S3_CONFIG['secret_access_key']
}

Aws.config.update(
  region: config[:region],
  credentials: Aws::Credentials.new(config[:key], config[:secret]),
  s3: { region: 'us-east-1' }
)

bucket = Aws::S3::Resource.new.bucket(config[:bucket])
puts bucket.object("file_test.pdf").exists?
The output is always false.
I also tried puts bucket.object("test_bucket/users/10/file_test.pdf").exists?, but still false.
Also, I tried making the file public in the AWS S3 dashboard, but no success; still false. The file is visible when clicking the generated link.
But when I check whether the file exists using aws-sdk, the output is still false.
What am I doing wrong?
Thank you.
You need to pass the full key of the object, not including the bucket name: users/10/file_test.pdf
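For example, a minimal sketch assuming aws-sdk v2 and the bucket from the question:
require 'aws-sdk'  # aws-sdk v2

# Region is whatever your bucket actually lives in.
s3 = Aws::S3::Resource.new(region: 'us-west-1')
bucket = s3.bucket('test_bucket')

# The key is relative to the bucket, so the bucket name is not repeated in it.
puts bucket.object('users/10/file_test.pdf').exists?  # => true once the key is correct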
I am using Amazon S3, Rails 4, and the fog gem. I have an Amazon bucket called uipstudy with 100 folders, each containing about 20 images. I use the following to get all the images in a specific folder (in my application_helper.rb, which is included in application_controller.rb).
def get_files(image_folder)
  connection = Fog::Storage.new(
    provider: 'AWS',
    aws_access_key_id: '######',
    aws_secret_access_key: '#######'
  )
  connection.directories.get('uipimages', prefix: image_folder).files.map do |file|
    file.key
  end
end
In my controller I have this (in this example I am looking in the folder "1" of the uipstudy bucket):
# Amazon solution:
@images = get_files('1')
@images.each do |image|
  image = "https://s3.amazonaws.com/uipstudy/#{image}"
  @image_array << image
end
The problem is that it's returning the files inside the folder labelled "1", but also those inside 10, 11, 12, 13, etc. I assumed the prefix was an exact match, but apparently it is not. Is there a way to enforce that the prefix returns exactly the folder specified?
I think you should be able to make a small change in your script to get the behavior you want. Simply append a forward slash to the prefix so that it clearly shows you want things that are like a directory instead of any/all things that begin with a particular character.
So, that would get you something like:
directory = connection.directories.get('uipimages', prefix: image_folder + '/')
directory.files.map do |file|
  file.key
end
(I just split it into two statements to make it easier to read.)
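Putting that together with the helper from the question, a sketch (the ENV-based credentials are just stand-ins for the masked keys):
def get_files(image_folder)
  connection = Fog::Storage.new(
    provider: 'AWS',
    aws_access_key_id: ENV['AWS_ACCESS_KEY_ID'],         # stand-in for the masked key
    aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']  # stand-in for the masked secret
  )
  # Trailing slash: only keys under "1/" match, not "10/", "11/", and so on.
  connection.directories.get('uipimages', prefix: "#{image_folder}/").files.map(&:key)
end

get_files('1')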
Below is my solution using the aws-sdk gem.
# initialize s3 client
s3 = AWS::S3.new
bucket = s3.buckets[ENV['AWS_BUCKET']]

# regex for ipa files in _inbox folder
regex = %r{_inbox/(?:[^/]+/)*[^/]+\.ipa}i

# get and process ipa files
bucket.objects.select { |o| o.key.match(regex) }.each do |ipa|
  # process each matching .ipa object here
end
In my Rails 4 application I'm trying to download and then upload a regular PNG file to my S3 bucket using the aws-sdk gem (gem 'aws-sdk', '~> 2').
In the development environment the code works fine. But if I run rails s -e production, or if I test the upload on my Heroku instance, I get the following error when exercising the image upload functionality:
Seahorse::Client::NetworkingError (Connection reset by peer):
app/helpers/aws_s3.rb:73:in `upload_to_s3'
app/controllers/evaluations_controller.rb:19:in `test'
My upload_to_s3 method mentioned in the trace looks like this:
def upload_to_s3(folder_name)
  url = "http://i.imgur.com/WKeQQox.png"
  filename = "ss-" + DateTime.now.strftime("%Y%d%m-%s") + "-" + SecureRandom.hex(4) + ".png"
  full_bucket_path = Pathname(folder_name.to_s).join(filename).to_s
  file = save_to_tempfile(url, filename)

  s3 = Aws::S3::Resource.new(access_key_id: ENV["IAM_ID"], secret_access_key: ENV["IAM_SECRET"], region: 'us-east-1')
  s3_file = s3.bucket(ENV["BUCKET"]).object(full_bucket_path)
  s3_file.upload_file(file.path)

  raise s3_file.public_url.to_s.inspect
end
The environment variables are the same between both environments. I don't really know where else to turn to debug this. Why would it work in development, but not in production? I have a feeling I'm missing something pretty obvious.
UPDATE:
Let's simplify this further since I'm not getting very much feedback.
s3 = Aws::S3::Resource.new
bucket = s3.bucket(ENV["BUCKET"])
bucket.object("some_file.txt").put(body:'Hello World!')
The above totally works in my development environment, but not my production environment. In production it faults when I call put(body:'Hello World!'). I know this is probably related to write permissions or something, but again, I've checked my environment variables, and they are identical. Is there some configuration that I'm not aware of that I should be checking?
I've tried using a new IAM user. I also temporarily copied the entire contents of development.rb over to production.rb just to see if the configuration for the development or production was affecting it, to no avail. I ran bundle update as well. Again, no luck.
I wish the error was more descriptive, but it just says Seahorse::Client::NetworkingError (Connection reset by peer) no matter what I try.
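A minimal debugging sketch, assuming aws-sdk v2's http_wire_trace option (not something tried in the original post): it logs every HTTP request and response, so the point of the reset shows up in the output.
require 'aws-sdk'
require 'logger'

# Send wire-level HTTP traces to stdout; remove once the problem is found.
Aws.config.update(
  http_wire_trace: true,
  logger: Logger.new($stdout)
)

s3 = Aws::S3::Resource.new(region: 'us-east-1')
s3.bucket(ENV["BUCKET"]).object("some_file.txt").put(body: 'Hello World!')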
Well, I never found the solution to this problem and had to resort to other options since I was on a deadline. I'm assuming it is a bug on Amazon's end or in the aws-sdk gem, because I have checked my configuration many times and it is correct.
My workaround was to use the fog gem, which is actually very handy. After adding gem 'fog' to my Gemfile and running bundle install, my code now looks like this:
def upload_to_s3(folder_name)
  url = "http://i.imgur.com/WKeQQox.png"
  filename = "ss-" + DateTime.now.strftime("%Y%d%m-%s") + "-" + SecureRandom.hex(4) + ".png"
  full_bucket_path = Pathname(folder_name.to_s).join(filename).to_s
  image_contents = open(url).read

  connection = Fog::Storage.new({
    :provider => 'AWS',
    :aws_access_key_id => ENV["AWS_ACCESS_KEY_ID"],
    :aws_secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"]
  })

  directory = connection.directories.get(ENV["BUCKET"])
  file = directory.files.create(key: full_bucket_path, public: true)
  file.body = image_contents
  file.save

  return file.public_url
end
This is simple enough and was a breeze to implement. I wish I knew what was going wrong with the aws-sdk gem, but for anyone else who has problems, give fog a go.
I'm trying to set up Amazon S3 support to store images in the cloud with refinerycms.
I created the bucket at https://console.aws.amazon.com/s3/
I named it after the app, 'bee-barcelona', and it says it is in region US Standard.
In ~/config/initializers/refinery/images.rb I entered all the data (where 'xxx' stands for the actual keys I entered):
# Configure S3 (you can also use ENV for this)
# The s3_backend setting by default defers to the core setting for this but can be set just for images.
config.s3_backend = Refinery::Core.s3_backend
config.s3_bucket_name = ENV['bee-barcelona']
config.s3_access_key_id = ENV['xxx']
config.s3_secret_access_key = ENV['xxx']
config.s3_region = ENV['xxx']
Then I applied the changes to heroku with:
heroku config:add S3_KEY=xxx S3_SECRET=xxx S3_BUCKET=bee-barcelona S3_REGION=us-standard
But still, in the app I only get "Sorry, something went wrong" when I try to upload.
What did I miss?
What a sad error. I didn't think about that option till I went for a 10 km run…
I had the app set up to be "beekeeping"
My bucket on Amazon was named "bee-barcelona"
I did register the correct bucket in the app. Still, Refinery kept trying to go to another person's bucket, named "beekeeping". With my secret key, there was no way my files would end up there.
I created a new app and a new bucket, all with crazy names, BUT they are the same on Amazon S3 and in Git!
Now it works like a charm.
What a very rare situation...
The way I did it was as follows:
Create a bucket in region US Standard!
Did you see that? US Standard, not Oregon, not anywhere else.
Add gems to Gemfile
gem "fog"
gem "unf"
gem "dragonfly-s3_data_store"
In config/application.rb
config.assets.initialize_on_precompile = true
In config/environments/production.rb
Refinery::Core.config.s3_backend = true
In config/environments/development.rb
Refinery::Core.config.s3_backend = false
Configure S3 for Heroku (production) and local storage for development. In config/initializers/refinery/core.rb
if Rails.env.production?
  config.s3_backend = true
else
  config.s3_backend = false
end

config.s3_bucket_name = ENV['S3_BUCKET']
config.s3_region = ENV['S3_REGION']
config.s3_access_key_id = ENV['S3_ACCESS_KEY']
config.s3_secret_access_key = ENV['S3_SECRET_KEY']
Add variables to heroku:
heroku config:add S3_ACCESS_KEY=xxxxxx S3_SECRET_KEY=xxxxxx S3_BUCKET=bucket-name-here S3_REGION=us-east-1
I had a lot of issues because I previously had S3_REGION=us-standard. This is WRONG. For a US Standard bucket, set the region as shown:
S3_REGION=us-east-1
This worked flawlessly for me on Rails 4.2.1 and Refinery 3.0.0. Also, make sure you are using exactly the same names for the variables everywhere. Some guides say S3_KEY instead of S3_ACCESS_KEY, or S3_SECRET instead of S3_SECRET_KEY; just make sure the names in your files match your Heroku variables.
I am creating a Rails app, hosted on Heroku, that allows the user to generate animated GIFs on the fly based on an original JPG hosted somewhere on the web (think of it as a crop-and-resize app). I tried Paperclip but, AFAIK, it does not handle dynamically generated files. I am using the aws-sdk gem, and this is a code snippet from my controller:
im = Magick::Image.read(@animation.url).first
fr1 = im.crop(@animation.x1, @animation.y1, @animation.width, @animation.height, true)
str1 = fr1.to_blob
fr2 = im.crop(@animation.x2, @animation.y2, @animation.width, @animation.height, true)
str2 = fr2.to_blob
list = Magick::ImageList.new
list.from_blob(str1)
list.from_blob(str2)
list.delay = @animation.delay
list.iterations = 0
That is the basic creation of a two-frame animation. RMagick can generate a GIF on my development machine with this line:
list.write("#{Rails.public_path}/images/" + @animation.filename)
I tried uploading the list structure to S3:
# upload to Amazon S3
s3 = AWS::S3.new
bucket = s3.buckets['mybucket']
obj = bucket.objects[@animation.filename]
obj.write(:single_request => true, :content_type => 'image/gif', :data => list)
But RMagick's ImageList does not have a size method that I can use to specify that. I tried "precompiling" the GIF into another Magick::Image:
anim = Magick::Image.new(@animation.width, @animation.height)
anim.format = "GIF"
list.write(anim)
But Rails crashes with a segmentation fault:
/path/to/my_controller.rb:103: [BUG] Segmentation fault ruby 1.8.7 (2010-01-10 patchlevel 249) [universal-darwin11.0]
Abort trap: 6
Line 103 corresponds to list.write(anim).
So right now I have no idea how to do this and would appreciate any help I receive.
As per @mga's request in his answer to his original question...
A non-filesystem-based approach is pretty simple:
rm_image = Magick::Image.from_blob(params[:image][:datafile].read)[0]
# [0] because from_blob returns an array
# the blob, presumably, can have multiple images' data in it

a_thumbnail = rm_image.resize_to_fit(150, 150)
# just as an example of doing *something* with it before writing

s3_bucket.objects['my_thumbnail.jpg'].write(a_thumbnail.to_blob, { :acl => :public_read })
Voila! Reading an uploaded file, manipulating it with RMagick, and writing it to S3 without ever touching the filesystem.
Since this project is hosted on Heroku, I cannot use the permanent filesystem, which is why I was trying to do everything in code. I found that Heroku does have a temporarily writable folder: http://devcenter.heroku.com/articles/read-only-filesystem
This works just fine in my case since I don't need the file after this request.
The resulting code:
im = Magick::Image.read(@animation.url).first
fr1 = im.crop(@animation.x1, @animation.y1, @animation.width, @animation.height, true)
fr2 = im.crop(@animation.x2, @animation.y2, @animation.width, @animation.height, true)

list = Magick::ImageList.new
list << fr1
list << fr2
list.delay = @animation.delay
list.iterations = 0

# gotta package the file
list.write("#{Rails.root}/tmp/#{@animation.filename}.gif")

# upload to Amazon S3
s3 = AWS::S3.new
bucket = s3.buckets['mybucket']
obj = bucket.objects[@animation.filename]
obj.write(:file => "#{Rails.root}/tmp/#{@animation.filename}.gif")
It would be interesting to know if a non-filesystem-writing solution is possible.
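One untested possibility, sketched below: serialize the ImageList to a blob with an explicit GIF format and hand the bytes straight to S3. The assumption is that ImageList#to_blob honors a format set in its block for multi-frame lists; a later answer reports that a plain to_blob call returned only the first frame, so treat this as a guess rather than a confirmed fix.
# Hedged sketch, not verified: force the GIF coder when serializing, then
# upload the raw bytes with the same aws-sdk v1 API as above.
gif_data = list.to_blob { self.format = 'GIF' }

s3 = AWS::S3.new
bucket = s3.buckets['mybucket']
bucket.objects[@animation.filename].write(gif_data, content_type: 'image/gif')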
I am updating this answer for AWS SDK version 2, which should be:
rm_image = Magick::Image.from_blob(params[:image][:datafile].read)[0]
# [0] because from_blob returns an array
# the blob, presumably, can have multiple images' data in it

a_thumbnail = rm_image.resize_to_fit(150, 150)
# just as an example of doing *something* with it before writing

s3 = Aws::S3::Resource.new
bucket = s3.bucket('mybucket')
obj = bucket.object('filename')
obj.put(body: a_thumbnail.to_blob)
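If the thumbnail is meant to be publicly readable, the v2 resource can also set the ACL and hand back a URL (an assumption about intent, not part of the original answer):
obj.acl.put(acl: 'public-read')  # only if the object should be public
obj.public_url                   # => "https://mybucket.s3.amazonaws.com/filename"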
I think there are a few things going on here. First, the documentation for RMagick is sub-par, and it's easy to get sidetracked. The code you're using to generate the GIF can be a little simpler. I cooked up a very contrived example here:
#!/usr/bin/env ruby
require 'rubygems'
require 'RMagick'
# read in source file
im = Magick::Image.read('foo.jpg').first
# make two slightly different frames
fr1 = im.crop(0, 100, 300, 300, true)
fr2 = im.crop(0, 200, 300, 300, true)
# create an ImageList
list = Magick::ImageList.new
# add our images to it
list << fr1
list << fr2
# set some basic values
list.delay = 100
list.iterations = 0
# write out an animated gif to the filesystem
list.write("foo.gif")
This code works: it reads in a JPG I have locally and writes out a two-frame animation. Obviously I've hardcoded some values here, but there's no reason this shouldn't work for you. I am running Ruby 1.9.2 and probably a different version of RMagick, but this is basic code.
The second issue is totally unrelated -- is it possible to upload an image generated in IM to S3 without actually hitting the filesystem? Basically, will this ever work:
obj.write(:single_request => true, :content_type => 'image/gif', :data => list)
I'm not sure whether it is or not. I experimented with calling list.to_blob, but it only output the first frame, and as a JPG, although I didn't spend much time on it. You might be able to fool list.write into outputting somewhere else, but rather than going down that road I would personally just write out the file, unless that is impossible for some reason.