I have a scenario where the photos I upload have to be stored in AWS S3 buckets and then referenced in an email, but after upgrading Rails and the corresponding gems I can no longer store the images in S3. I upgraded aws-s3 from 0.6.1 to 0.6.3, aws-sdk from 1.3.5 to 1.3.9, right_aws from 3.0.0 to 3.0.5, and finally Rails from 3.2.1 to 4.2.6.
I have tested with puts statements and every method is reached, but I suspect there may be a syntax change in the upload method around @type (here @type is one of the two bucket folder names, photo_screenshots and indicator_screenshots).
Please help me.
This is my lib/screenshot.rb:
class Screenshot
  attr_reader :user_id, :report_id, :type

  def initialize(user_id, report_id, type)
    @user_id, @report_id, @type = user_id, report_id, type
    capture
    resize(500, 686) if @type == 'report_screenshots'
    upload
    delete_local_copy
  end

  def capture
    if Rails.env.production?
      phantom = Rails.root.join('vendor/javascripts/phantomjs_linux/bin/phantomjs')
      url = Rails.application.config.custom.domain_url + "users/#{@user_id}/reports/#{@report_id}"
    end
    js = Rails.root.join("vendor/javascripts/#{@type}.js")
    image = Rails.root.join("public/#{@type}/#{@report_id}.png")
    `/bin/bash -c "DISPLAY=:0 #{phantom} #{js} #{url} #{image}"`
  end

  def resize(width, height)
    path = "public/#{@type}/#{@report_id}.png"
    img = Magick::Image::read(path).first
    # img.thumbnail!(width, height)
    img.change_geometry("#{width}x#{height}") do |cols, rows, img|
      img.thumbnail!(cols, rows)
    end
    img.write(path)
  end

  def upload
    file_name = Rails.root.join("public/#{@type}/#{@report_id}.png")
    s3config = YAML.load_file(Rails.root.join('config', 's3.yml'))[Rails.env]
    s3 = RightAws::S3.new(s3config["access_key_id"], s3config["secret_access_key"])
    @type == 'report_screenshots' ? s3.bucket("my_project.#{Rails.env}", true).put("#{@type}/#{@report_id}.png", File.open(file_name), {}, 'public-read', { 'content-type' => 'image/png' }) : s3.bucket("my_project.#{Rails.env}", true).put("indicator_screenshots/#{@report_id}.png", File.open(file_name), {}, 'public-read', { 'content-type' => 'image/png' })
    report = Report.find(@report_id)
    @type == 'report_screenshots' ? report.update_attribute(:report_screenshot_at, Time.now) : report.update_attribute(:indicator_screenshot_at, Time.now)
  end

  def delete_local_copy
    file_name = Rails.root.join("public/#{@type}/#{@report_id}.png")
    File.delete(file_name)
  end

  def self.delete_s3_copy(report_id, type)
    s3config = YAML.load_file(Rails.root.join('config', 's3.yml'))[Rails.env]
    s3 = RightAws::S3.new(s3config["access_key_id"], s3config["secret_access_key"])
    s3.bucket("my_project.#{Rails.env}").key("#{type}/#{report_id}.png").delete
  end
end
Whenever I click to send a test email, this is what happens in the controller:
def send_test_email
  if @report.photos.empty?
    Rails.env.development? ? Screenshot.new(@user.id, @report.id, Rails.application.config.custom.indicator_screenshot_bucket) : Screenshot.delay.new(@user.id, @report.id, Rails.application.config.custom.indicator_screenshot_bucket)
  else
    Rails.env.development? ? Screenshot.new(@user.id, @report.id, "photo_screenshots") : Screenshot.delay.new(@user.id, @report.id, "photo_screenshots")
  end
  ReportMailer.delay.test_report_email(@user, @report)
  respond_to do |format|
    format.json { render :json => { :success => true, :report_id => @report.id, :notice => 'Test email was successfully sent!' } }
  end
end
This is the RAILS_ENV=production log:
New RightAws::S3Interface using shared connections mode
Opening new HTTPS connection to my_project.production.s3.amazonaws.com:443
Opening new HTTPS connection to s3.amazonaws.com:443
2016-09-26T10:48:46+0000: [Worker(delayed_job host:ip-172-31-24-139 pid:8769)] Job Screenshot.new (id=528) FAILED (16 prior attempts) with Errno::ENOENT: No such file or directory @ rb_sysopen - /var/www/html/project/my_project/public/photo_screenshots/50031.png
2016-09-26T10:48:46+0000: [Worker(delayed_job host:ip-172-31-24-139 pid:8769)] Job Screenshot.new (id=529) RUNNING
2016-09-26T10:48:46+0000: [Worker(delayed_job host:ip-172-31-24-139 pid:8769)] Job Screenshot.new (id=529) FAILED (16 prior attempts) with Magick::ImageMagickError: unable to open file `public/report_screenshots/50031.png' @ error/png.c/ReadPNGImage/3733
2016-09-26T10:48:46+0000: [Worker(delayed_job host:ip-172-31-24-139 pid:8769)] 2 jobs processed at 1.6978 j/s, 2 failed
This is the AWS production log:
New RightAws::S3Interface using shared connections mode
2016-09-26T16:00:30+0530: [Worker(host:OSI-L-0397 pid:7117)] Job Screenshot.new (id=50) FAILED (6 prior attempts) with Errno::ENOENT: No such file or directory @ rb_sysopen - /home/abcuser/Desktop/project/my_project/public/photo_screenshots/10016.png
2016-09-26T16:00:30+0530: [Worker(host:OSI-L-0397 pid:7117)] Job Screenshot.new (id=51) RUNNING
2016-09-26T16:00:30+0530: [Worker(host:OSI-L-0397 pid:7117)] Job Screenshot.new (id=51) FAILED (6 prior attempts) with Magick::ImageMagickError: unable to open file `public/report_screenshots/10016.png' @ error/png.c/ReadPNGImage/3667
2016-09-26T16:00:30+0530: [Worker(host:OSI-L-0397 pid:7117)] 2 jobs processed at 0.2725 j/s, 2 failed
You can try a simpler approach: upload the images to a single fixed bucket, using a different folder (key prefix) for each object or application. S3 limits the number of buckets you can create, whereas there is no limit on the number of objects inside a bucket.
The code below uploads an image for a user to S3 using the aws-sdk gem. The bucket and the uploaded image are made public so that the uploaded images are directly accessible. It takes as input the complete path of the image, the folder it should be uploaded into, and the user_id of the user it belongs to:
def save_screenshot_to_s3(image_location, folder_name, user_id)
  service = AWS::S3.new(:access_key_id => ACCESS_KEY_ID,
                        :secret_access_key => SECRET_ACCESS_KEY)
  bucket_name = "app-images"
  if service.buckets.include?(bucket_name)
    bucket = service.buckets[bucket_name]
  else
    bucket = service.buckets.create(bucket_name)
  end
  bucket.acl = :public_read
  key = folder_name.to_s + "/" + File.basename(image_location)
  s3_file = service.buckets[bucket_name].objects[key].write(:file => image_location)
  s3_file.acl = :public_read
  user = User.where(id: user_id).first
  user.image = s3_file.public_url.to_s
  user.save
end
For handling the screenshot part: in your capture method you have done something like this:
`/bin/bash -c "DISPLAY=:0 #{phantom} #{js} #{url} #{image}"`
Is the /bin/bash wrapper really required? Change it to the code below and it should work:
`DISPLAY=:0 "#{phantom}" "#{js}" "#{url}" "#{image}"`
Leave it as it is if the change breaks something else.
Since you already know the final image location (image), pass it directly to save_screenshot_to_s3 and you should be able to save it. This will also save the image URL on the user if you pass your user_id as described in the method above; see the sketch below.
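For illustration only, here is a minimal sketch of how the Screenshot#upload method from the question could delegate to this helper, assuming save_screenshot_to_s3 is available to the class and that @type, @report_id and @user_id are set as in the original code (none of this is part of the original answer):

# Hypothetical rewrite of Screenshot#upload: the PNG written by capture is at a
# known path, so hand that path, the folder name (@type) and the user id
# straight to save_screenshot_to_s3 from the answer above.
def upload
  image_path = Rails.root.join("public/#{@type}/#{@report_id}.png").to_s
  save_screenshot_to_s3(image_path, @type, @user_id)
end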
TL;DR
How do you input file paths into the AWS S3 API Ruby client, and have them interpreted as images, not string literal file paths?
More Details
I'm using the Ruby AWS S3 client to upload images programmatically. I have taken this code from their example startup code and barely modified it myself. See https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/s3-example-upload-bucket-item.html
def object_uploaded?(s3_client, bucket_name, object_key)
  response = s3_client.put_object(
    body: "tmp/cosn_img.jpeg", # is always interpreted literally
    acl: "public-read",
    bucket: bucket_name,
    key: object_key
  )
  if response.etag
    return true
  else
    return false
  end
rescue StandardError => e
  puts "Error uploading object: #{e.message}"
  return false
end
# Full example call:
def run_me
  bucket_name = 'cosn-images'
  object_key = "#{order_number}-trello-pic_#{list_config[:ac_campaign_id]}.jpeg"
  region = 'us-west-2'
  s3_client = Aws::S3::Client.new(region: region)

  if object_uploaded?(s3_client, bucket_name, object_key)
    puts "Object '#{object_key}' uploaded to bucket '#{bucket_name}'."
  else
    puts "Object '#{object_key}' not uploaded to bucket '#{bucket_name}'."
  end
end
This runs and uploads an object to AWS, but it uploads just the file path string from body, not the actual file itself (clicking on the resulting attachment link shows only the file path).
As far as I can see from the Client documentation, this should work. https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.html#put_object-instance_method
Also, manually uploading this file through the frontend does work just fine, so it has to be an issue in my code.
How are you supposed to let AWS know that it should interpret that file path as a file path, and not just as a string literal?
You have two issues:
1. You have trailing commas at the end of your variable assignments in object_uploaded? that are affecting how your variables are stored. Remove these.
2. You need to reference the file as a File object, not as a file path. Like this:
image = File.open("#{Rails.root}/tmp/cosn_img.jpeg")
See full code below:
def object_uploaded?(image, s3_client, bucket_name, object_key)
  response = s3_client.put_object(
    body: image,
    acl: "public-read",
    bucket: bucket_name,
    key: object_key
  )
  puts response
  if response.etag
    return true
  else
    return false
  end
rescue StandardError => e
  puts "Error uploading object: #{e.message}"
  return false
end
def run_me
  image = File.open("#{Rails.root}/tmp/cosn_img.jpeg")
  bucket_name = 'cosn-images'
  object_key = "#{order_number}-trello-pic_#{list_config[:ac_campaign_id]}.jpeg"
  region = 'us-west-2'
  s3_client = Aws::S3::Client.new(region: region)

  if object_uploaded?(image, s3_client, bucket_name, object_key)
    puts "Object '#{object_key}' uploaded to bucket '#{bucket_name}'."
  else
    puts "Object '#{object_key}' not uploaded to bucket '#{bucket_name}'."
  end
end
Their docs seem a bit weird and not straightforward, but it seems you might need to pass in a file/IO object instead of the path.
The Ruby docs have an example like this:
s3_client.put_object(
  :bucket_name => 'mybucket',
  :key => 'some/key',
  :content_length => File.size('myfile.txt')
) do |buffer|
  File.open('myfile.txt') do |io|
    buffer.write(io.read(length)) until io.eof?
  end
end
Or another option from the AWS Ruby SDK docs, under "Streaming a file from disk":
File.open('/source/file/path', 'rb') do |file|
  s3.put_object(bucket: 'bucket-name', key: 'object-key', body: file)
end
I created an ActiveJob to process my CarrierWave uploads. However, when I upload more than one image, I get the following error for the second file:
Errno::ENOENT (No such file or directory @ rb_sysopen - C:/Users/tdavi/AppData/Local/Temp/RackMultipart20180830-392-z2s2i.jpg)
Here's the code in my controller:
if @post.save
  files = params[:post_attachments].map { |p|
    { image: p['photo'][:image].tempfile.path, description: p['photo'][:description] }
  }
  ProcessPhotosJob.perform_later(@post.id, files.to_json)
  format.html { render :waiting }
end
And my ActiveJob
require 'json'

class ProcessPhotosJob < ApplicationJob
  queue_as :default

  def perform(post_id, photos_json)
    post = Post.friendly.find(post_id)
    photos = JSON.parse photos_json
    photos.each do |p|
      src_file = File.new(p['image'])
      post.post_attachments.create!(:photo => src_file, :description => p['description'])
    end
    post.processed = true
    post.save
  end
end
When I upload only one file, it works okay.
You should not pass a Tempfile to queued jobs.
First of all, Tempfiles can be deleted automatically by Ruby once they are garbage collected (see the Tempfile docs for an explanation).
If you would like to upload file(s) and process them later in the background, I would suggest you check this question; the basic idea is to copy each upload to a location that survives the request before enqueuing the job, as sketched below.
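For illustration only, here is a minimal sketch of that idea applied to the controller from the question. The tmp/pending_uploads directory and the file naming are assumptions, not part of the original code; any persistent storage that outlives the request would do:

# In the controller: copy each uploaded tempfile somewhere persistent before
# enqueuing the job, and pass the persistent path instead of the tempfile path.
# (FileUtils and SecureRandom are available in a Rails app.)
if @post.save
  files = params[:post_attachments].map do |p|
    tempfile = p['photo'][:image].tempfile
    persistent_path = Rails.root.join('tmp', 'pending_uploads',
                                      "#{SecureRandom.uuid}#{File.extname(tempfile.path)}")
    FileUtils.mkdir_p(File.dirname(persistent_path))
    FileUtils.cp(tempfile.path, persistent_path)
    { image: persistent_path.to_s, description: p['photo'][:description] }
  end
  ProcessPhotosJob.perform_later(@post.id, files.to_json)
  format.html { render :waiting }
end

The job can then open each copied file, attach it, and delete the copy once it has been processed.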
I am using CarrierWave with 3 separate models to upload photos to S3. I kept the default settings for the uploader, which was to store photos in the root of the S3 bucket. I then decided to store them in sub-directories like avatars/, items/, etc. based on the model they were uploaded from...
Then I noticed that files of the same name were being overwritten, and when I deleted a model record the photo wasn't being deleted.
I've since changed the store_dir from an uploader-specific setup like this:
def store_dir
  "items"
end
to a generic one which stores photos under the model ID (I use Mongo, FYI):
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
Here comes the problem. I am trying to move all the photos already in S3 into the proper "directory" within S3. From what I've read, S3 doesn't have directories per se. I'm having trouble with the rake task: since I changed the store_dir, CarrierWave is now looking for all the previously uploaded photos in the wrong directory.
namespace :pics do
  desc "Fix directory location of pictures on s3"
  task :item_update => :environment do
    connection = Fog::Storage.new({
      :provider              => 'AWS',
      :aws_access_key_id     => 'XXXX',
      :aws_secret_access_key => 'XXX'
    })

    directory = connection.directories.get("myapp-uploads-dev")

    Recipe.all.each do |l|
      if l.images.count > 0
        l.items.each do |i|
          if i.picture.path.to_s != ""
            new_full_path = i.picture.path.to_s
            filename = new_full_path.split('/')[-1].split('?')[0]
            thumb_filename = "thumb_#{filename}"
            original_file_path = "items/#{filename}"
            puts "attempting to retrieve: #{original_file_path}"
            original_thumb_file_path = "items/#{thumb_filename}"
            photo = directory.files.get(original_file_path) rescue nil
            if photo
              puts "we found: #{original_file_path}"
              photo.expires = 2.years.from_now.httpdate
              photo.key = new_full_path
              photo.save
              thumb_photo = directory.files.get(original_thumb_file_path) rescue nil
              if thumb_photo
                puts "we found: #{original_thumb_file_path}"
                thumb_photo.expires = 2.years.from_now.httpdate
                thumb_photo.key = "/uploads/item/picture/#{i.id}/#{thumb_filename}"
                thumb_photo.save
              end
            end
          end
        end
      end
    end
  end
end
So I'm looping through all the Recipes, looking for items with photos, determining the old CarrierWave path, and trying to update it with the new one based on the store_dir change. I thought that if I simply updated photo.key with the new path it would work, but it doesn't.
What am I doing wrong? Is there a better way to accomplish this?
Here's what I did to get this working...
namespace :pics do
  desc "Fix directory location of pictures"
  task :item_update => :environment do
    connection = Fog::Storage.new({
      :provider              => 'AWS',
      :aws_access_key_id     => 'XXX',
      :aws_secret_access_key => 'XXX'
    })

    bucket = "myapp-uploads-dev"
    puts "Using bucket: #{bucket}"

    Recipe.all.each do |l|
      if l.images.count > 0
        l.items.each do |i|
          if i.picture.path.to_s != ""
            new_full_path = i.picture.path.to_s
            filename = new_full_path.split('/')[-1].split('?')[0]
            thumb_filename = "thumb_#{filename}"
            original_file_path = "items/#{filename}"
            original_thumb_file_path = "items/#{thumb_filename}"
            puts "attempting to retrieve: #{original_file_path}"

            # copy original item
            begin
              connection.copy_object(bucket, original_file_path, bucket, new_full_path, 'x-amz-acl' => 'public-read')
              puts "we just copied: #{original_file_path}"
            rescue
              puts "couldn't find: #{original_file_path}"
            end

            # copy thumb
            begin
              connection.copy_object(bucket, original_thumb_file_path, bucket, "uploads/item/picture/#{i.id}/#{thumb_filename}", 'x-amz-acl' => 'public-read')
              puts "we just copied: #{original_thumb_file_path}"
            rescue
              puts "couldn't find thumb: #{original_thumb_file_path}"
            end
          end
        end
      end
    end
  end
end
Perhaps not the prettiest thing in the world, but it worked.
You need to interact with the S3 objects directly to move them. You'll probably want to look at copy_object and delete_object in the Fog gem, which is what CarrierWave uses to interact with S3 (a small sketch follows the links below).
https://github.com/fog/fog/blob/8ca8a059b2f5dd2abc232dd2d2104fe6d8c41919/lib/fog/aws/requests/storage/copy_object.rb
https://github.com/fog/fog/blob/8ca8a059b2f5dd2abc232dd2d2104fe6d8c41919/lib/fog/aws/requests/storage/delete_object.rb
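As a rough illustration, here is a minimal copy-then-delete "move" of a single object using Fog's copy_object and delete_object, assuming a connection and bucket set up as in the rake task above; the key names are placeholders:

# Minimal "move" of one S3 object with Fog: copy it to the new key, then delete
# the old one. `connection` is a Fog::Storage AWS connection as created above.
bucket  = "myapp-uploads-dev"
old_key = "items/some-picture.png"                          # placeholder key
new_key = "uploads/item/picture/123/some-picture.png"       # placeholder key

connection.copy_object(bucket, old_key, bucket, new_key, 'x-amz-acl' => 'public-read')
connection.delete_object(bucket, old_key)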
Paperclip is a great upload plugin for Rails. Storing uploads on the local filesystem or on Amazon S3 seems to work well. I'd just as soon store files on the local host, but S3 is required for this app as it will be hosted on Heroku.
How would I go about getting all of my uploads/attachments from S3 in a single zipped download?
Getting a zip of files from the local filesystem seems straightforward. It's getting the files from S3 that has me puzzled. I think it may have something to do with the way that rubyzip handles files referenced by URL. I've tried various approaches but can't seem to avoid errors.
format.zip {
  registrations_with_attachments = Registration.find_by_sql('SELECT * FROM registrations WHERE abstract_file_name NOT LIKE ""')
  headers['Cache-Control'] = 'no-cache'
  tmp_filename = "#{RAILS_ROOT}/tmp/tmp_zip_" <<
                 Time.now.to_f.to_s <<
                 ".zip"
  # rubyzip gem version 0.9.1
  # rdoc http://rubyzip.sourceforge.net/
  Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE) do |zip|
    # get all of the attachments
    # attempt to get files stored on S3
    # FAIL
    registrations_with_attachments.each { |e| zip.add("abstracts/#{e.abstract.original_filename}", e.abstract.url(:original, false)) }
    # => No such file or directory - http://s3.amazonaws.com/bucket/original/abstract.txt
    # Should note that these files in the S3 bucket are publicly accessible. No ACL.

    # works with local storage. Thanks to Henrik Nyh
    # registrations_with_attachments.each { |e| zip.add("abstracts/#{e.abstract.original_filename}", e.abstract.path(:original)) }
  end
  send_data(File.open(tmp_filename, "rb+").read, :type => 'application/zip', :disposition => 'attachment', :filename => tmp_filename.to_s)
  File.delete tmp_filename
}
You almost certainly want to use e.abstract.to_file.path instead of e.abstract.url(...).
See:
Paperclip::Storage::S3#to_file (should return a Tempfile)
Tempfile#path
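Applied to the loop from the question, that suggestion would look roughly like this. This is a sketch only; note that the Tempfile returned by to_file must stay referenced until the zip is actually written, which is exactly the pitfall the next answer describes:

# Sketch: download each attachment from S3 via Paperclip's to_file and add its
# local path to the zip. Keep the Tempfile objects alive until the zip is closed.
tempfiles = []
Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE) do |zip|
  registrations_with_attachments.each do |e|
    file = e.abstract.to_file(:original)   # Tempfile downloaded from S3
    tempfiles << file
    zip.add("abstracts/#{e.abstract.original_filename}", file.path)
  end
end
tempfiles.each(&:close!)  # clean up the temp copies once the zip is written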
UPDATE
From the Paperclip changelog, new in 3.0.1:
API CHANGE: #to_file has been removed. Use the #copy_to_local_file method instead.
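On Paperclip 3.0.1 and later, a rough equivalent using copy_to_local_file might look like this; the local temp path handling under RAILS_ROOT/tmp is an assumption, not taken from the answer:

# Sketch for Paperclip >= 3.0.1: copy each S3 attachment to a local path first,
# then add that path to the zip.
Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE) do |zip|
  registrations_with_attachments.each do |e|
    local_path = File.join("#{RAILS_ROOT}/tmp", e.abstract.original_filename) # assumed temp location
    e.abstract.copy_to_local_file(:original, local_path)
    zip.add("abstracts/#{e.abstract.original_filename}", local_path)
  end
end
# The local copies can be deleted after the zip file has been sent.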
@vlard's solution is OK. However, I've run into some issues with to_file: it creates a tempfile, and the garbage collector sometimes deletes the file before it has been added to the zip. Therefore I was getting random Errno::ENOENT: No such file or directory errors.
So I'm using the following code now (I've kept the initial variable names for consistency with the initial question):
format.zip {
  registrations_with_attachments = Registration.find_by_sql('SELECT * FROM registrations WHERE abstract_file_name NOT LIKE ""')
  headers['Cache-Control'] = 'no-cache'
  # please note that using the nanoseconds option in strftime reduces the risk of collisions
  # when 2 or more users initiate the download at the same time
  tmp_filename = "#{RAILS_ROOT}/tmp/tmp_zip_" <<
                 Time.now.strftime('%Y-%m-%d-%H%M%S-%N').to_s <<
                 ".zip"
  # rubyzip gem version 0.9.4
  zip = Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE)
  zip.close

  registrations_with_attachments.each { |e|
    file_to_add = e.file.to_file
    zip = Zip::ZipFile.open(tmp_filename)
    zip.add("abstracts/#{e.abstract.original_filename}", file_to_add.path)
    zip.close
    puts "added #{file_to_add.path} to #{tmp_filename}" # force the garbage collector to keep file_to_add until after the file has been added to the zip
  }

  send_data(File.open(tmp_filename, "rb+").read, :type => 'application/zip', :disposition => 'attachment', :filename => tmp_filename.to_s)
  File.delete tmp_filename
}
I am running a Rails app that uses Paperclip to take care of file attachments and image resizing, etc. The app is currently hosted on EngineYard cloud, and all attachments are stored in their EBS. I am thinking about using S3 to handle all Paperclip attachments.
Does anyone know of a good and safe way to do this migration? Many thanks!
You could work up a rake task that iterates over your attachments and pushes each to S3. I used this one a while back with attachment_fu; it wouldn't be too different with Paperclip. It uses the aws-s3 gem.
Basically the process is:
1. Select the files from the database that need to be moved
2. Push them to S3
3. Update the database to reflect that the file is no longer stored locally (this way you can do them in batches and don't need to worry about pushing the same file twice).
@attachments = Attachment.stored_locally

@attachments.each do |attachment|
  base_path = RAILS_ROOT + '/public/assets/'
  attachment_folder = ((attachment.respond_to?(:parent_id) && attachment.parent_id) || attachment.id).to_s
  full_filename = File.join(base_path, ("%08d" % attachment_folder).scan(/..../), attachment.filename)

  require 'aws/s3'
  AWS::S3::Base.establish_connection!(
    :access_key_id     => S3_CONFIG[:access_key_id],
    :secret_access_key => S3_CONFIG[:secret_access_key]
  )

  AWS::S3::S3Object.store(
    'assets/' + attachment_folder + '/' + attachment.filename,
    File.open(full_filename),
    S3_CONFIG[:bucket_name],
    :content_type => attachment.content_type,
    :access => :private
  )

  if AWS::S3::Service.response.success?
    # Update the database
    attachment.update_attribute(:stored_on_s3, true)

    # Remove the file from the local filesystem
    FileUtils.rm full_filename

    # Remove the directory as well if it is now empty
    Dir.rmdir(File.dirname(full_filename)) if (Dir.entries(File.dirname(full_filename)) - ['.', '..']).empty?
  else
    puts "There was a problem uploading " + full_filename
  end
end
I found myself in the same situation and took bensie's code and made it work for myself - this is what I came up with:
require 'aws/s3'

# Ensure you do the following:
#   export AMAZON_ACCESS_KEY_ID='your-access-key'
#   export AMAZON_SECRET_ACCESS_KEY='your-secret-word-thingy'
AWS::S3::Base.establish_connection!

@failed = []
@attachments = Asset.all # the Asset paperclip attachment is: has_attached_file :attachment....

@attachments.each do |asset|
  begin
    puts "Processing #{asset.id}"
    base_path = RAILS_ROOT + '/public/'
    attachment_folder = ((asset.respond_to?(:parent_id) && asset.parent_id) || asset.id).to_s
    styles = asset.attachment.styles.keys
    styles << :original
    styles.each do |style|
      full_filename = File.join(base_path, asset.attachment.url(style, false))
      AWS::S3::S3Object.store(
        'attachments/' + attachment_folder + '/' + style.to_s + "/" + asset.attachment_file_name,
        File.open(full_filename),
        "swellnet-assets",
        :content_type => asset.attachment_content_type,
        :access => (style == :original ? :private : :public_read)
      )
      if AWS::S3::Service.response.success?
        puts "Stored #{asset.id}[#{style.to_s}] on S3..."
      else
        puts "There was a problem uploading " + full_filename
      end
    end
  rescue
    puts "Error with #{asset.id}"
    @failed << asset.id
  end
end

puts "Failed uploads: #{@failed.join(", ")}" unless @failed.empty?
Of course, if you have multiple models you will need to adjust as necessary...