Intermittent CarrierWave S3 403 SignatureDoesNotMatch errors - ruby-on-rails

I am getting seemingly random errors when uploading files to S3 from my app on Heroku. I am using jquery-file-upload to upload pictures to a tmp/ directory in my bucket via CORS, with this code:
def url
  temp_url = AWS::S3::S3Object.url_for(
    s3_key,
    S3_CONFIG['bucket'],
    use_ssl: true)
  puts temp_url
  temp_url
  # temp_url.to_s.encode_signs
end
def delete_photo_from_s3
  begin
    photo = AWS::S3::S3Object.find(s3_key, S3_CONFIG['bucket'])
    photo.delete
  rescue Exception => e
    Rails.logger.error e.message
  end
end
private

def s3_key
  parent_url = self[:uri]
  # If the url is nil, there's no need to look in the bucket for it
  return nil if parent_url.nil?
  # This will give you the last part of the URL, the 'key' param you need,
  # but it's URL encoded, so you'll need to decode it
  object_key = parent_url.split(/\//)
  "#{object_key[3]}/#{object_key[4]}/#{object_key[5]}"
end
From there I am using CarrierWave to upload and process these images. However, sometimes the uploads fail silently and I see 403 Forbidden errors in my S3 bucket logs. I'm not sure what is causing this.
The processing happens in a Qu background job, which attaches the image to CarrierWave through the remote_attachment_url setter. Here is my background task:
class PhotoUploader
  def self.perform(finding_id, photo_id)
    begin
      finding = Finding.find(finding_id)
      photo = Photo.find(photo_id)
      upload = finding.uploads.build
      # attached_picture = photo.temp_image_url || photo.url
      upload.remote_attachment_url = photo.url
      if upload.save!
        Rails.logger.debug "#{Time.now}: Photo #{photo_id} saved to finding..."
        photo.set(:delete_at => 1.hour.from_now) # UTC, same as GMT (not local time!)
        photos = Photo.where(:processing => true, :delete_at.lte => Time.now.utc) # Query for UTC time, same type as previous line (also not local time!)
        finding.unset(:temp_image)
        if photos
          photos.each do |photo|
            photo.destroy
            Rails.logger.debug "Photo #{photo.id} - #{photo.uri} destroyed."
          end
        end
      else
        raise "Could not save to s3!"
      end
    rescue Exception => e
      Rails.logger.debug "#{Time.now}: PH01 - Error processing photo #{photo_id}, trying again... :: #{e.message}"
      retry
    end
  end
end
This works sometimes, but not always, which is really weird.
I end up getting a bunch of these errors in my S3 logs:
fc96aee492e463ff67c0a9835c23c81a09c4c36a53cdf297094ded3a7d02c62f actionlog-development [02/Dec/2012:20:27:18 +0000] 71.205.197.214 - 625CEFB5DB7867A7 REST.GET.OBJECT tmp/4f75d2fb4e484f2ffd000001/apcm_photomix1_0022.jpg "GET /actionlog-development/tmp/4f75d2fb4e484f2ffd000001/apcm_photomix1_0022.jpg?AWSAccessKeyId=AKIAI___ZA6A&Expires=1354480332&Signature=4wPc+nT84WEdOuxS6+Ry4iMNkys= HTTP/1.1" 403 SignatureDoesNotMatch 895 - 8 - "-" "Ruby" -
I have read about this a lot, and it seems people sometimes hit this issue when there are unescaped '+' characters in the signature. I'm not sure if this is a CarrierWave, Fog, or AWS::S3 issue.
If you could provide any assistance with this, it would be greatly appreciated.
Thanks.

Better to use the v4 signature; that should prevent this kind of error. Just add the option signature_version: :v4 to the url_for call:
temp_url = AWS::S3::S3Object.url_for(
  s3_key,
  S3_CONFIG['bucket'],
  use_ssl: true,
  signature_version: :v4)
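If you are stuck on the old gem and cannot switch signature versions, a stopgap is to percent-encode the '+' characters in the Signature query parameter yourself before handing the URL out. A minimal sketch (escape_plus_in_signature is a hypothetical helper, not part of any gem; S3 decodes a literal '+' in a query string as a space, which is what produces SignatureDoesNotMatch):

require 'uri'

def escape_plus_in_signature(url)
  uri = URI.parse(url)
  # Only rewrite the Signature value; '+' must travel as %2B.
  uri.query = uri.query.split('&').map do |pair|
    key, value = pair.split('=', 2)
    key == 'Signature' ? "#{key}=#{value.gsub('+', '%2B')}" : pair
  end.join('&')
  uri.to_s
end

temp_url = escape_plus_in_signature(temp_url)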

It is a problem with Fog and Excon.
See this answer for how to fix it, or switch to a better solution that uses the actual aws-sdk:
Library   Disk Space   Lines of Code   Boot Time (s)   Runtime Deps   Develop Deps
fog       28.0M        133469          0.693           9              11
aws-sdk   5.4M         90290           0.098           3              8*
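For comparison, with the modern SDK a presigned GET URL comes back fully escaped. A sketch assuming aws-sdk-s3 (v3) and the bucket/key from the question's log:

require 'aws-sdk-s3'

# The SDK signs the request and escapes the resulting URL itself.
client = Aws::S3::Client.new(region: 'us-east-1')
signer = Aws::S3::Presigner.new(client: client)
url = signer.presigned_url(:get_object,
                           bucket: 'actionlog-development',
                           key: 'tmp/4f75d2fb4e484f2ffd000001/apcm_photomix1_0022.jpg',
                           expires_in: 3600)
puts url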

Related

Why is AWS uploading literal file paths, instead of uploading images?

TL;DR
How do you input file paths into the AWS S3 API Ruby client, and have them interpreted as images, not string literal file paths?
More Details
I'm using the Ruby AWS S3 client to upload images programmatically. I have taken this code from their example startup code and barely modified it myself. See https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/s3-example-upload-bucket-item.html
def object_uploaded?(s3_client, bucket_name, object_key)
  response = s3_client.put_object(
    body: "tmp/cosn_img.jpeg", # is always interpreted literally
    acl: "public-read",
    bucket: bucket_name,
    key: object_key
  )
  if response.etag
    return true
  else
    return false
  end
rescue StandardError => e
  puts "Error uploading object: #{e.message}"
  return false
end

# Full example call:
def run_me
  bucket_name = 'cosn-images'
  object_key = "#{order_number}-trello-pic_#{list_config[:ac_campaign_id]}.jpeg"
  region = 'us-west-2'
  s3_client = Aws::S3::Client.new(region: region)
  if object_uploaded?(s3_client, bucket_name, object_key)
    puts "Object '#{object_key}' uploaded to bucket '#{bucket_name}'."
  else
    puts "Object '#{object_key}' not uploaded to bucket '#{bucket_name}'."
  end
end
This works and is able to upload to AWS, but it is uploading just the file path from the body, not the actual file itself.
[Screenshot: file path shown when you click on the attachment link]
As far as I can see from the Client documentation, this should work. https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.html#put_object-instance_method
Also, manually uploading this file through the frontend does work just fine, so it has to be an issue in my code.
How are you supposed to let AWS know that it should interpret that file path as a file path, and not just as a string literal?
You have two issues:
1. You have commas at the end of your variable assignments in object_uploaded? that are impacting the way your variables are being stored. Remove these.
2. You need to reference the file as a File object type, not as a file path. Like this:
image = File.open("#{Rails.root}/tmp/cosn_img.jpeg")
See full code below:
def object_uploaded?(image, s3_client, bucket_name, object_key)
  response = s3_client.put_object(
    body: image,
    acl: "public-read",
    bucket: bucket_name,
    key: object_key
  )
  puts response
  if response.etag
    return true
  else
    return false
  end
rescue StandardError => e
  puts "Error uploading object: #{e.message}"
  return false
end

def run_me
  image = File.open("#{Rails.root}/tmp/cosn_img.jpeg")
  bucket_name = 'cosn-images'
  object_key = "#{order_number}-trello-pic_#{list_config[:ac_campaign_id]}.jpeg"
  region = 'us-west-2'
  s3_client = Aws::S3::Client.new(region: region)
  if object_uploaded?(image, s3_client, bucket_name, object_key)
    puts "Object '#{object_key}' uploaded to bucket '#{bucket_name}'."
  else
    puts "Object '#{object_key}' not uploaded to bucket '#{bucket_name}'."
  end
end
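One caveat worth adding to the code above: File.open without a block leaves the file handle open until garbage collection. The block form (same s3_client, bucket_name, and object_key as in run_me) closes it automatically:

File.open("#{Rails.root}/tmp/cosn_img.jpeg", 'rb') do |image|
  # The handle is closed when the block exits, even if the upload raises.
  object_uploaded?(image, s3_client, bucket_name, object_key)
end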
Their docs seem a bit weird and not straightforward, but it seems that you might need to pass in a file/IO object instead of the path.
The ruby docs here have an example like this:
s3_client.put_object(
  :bucket_name => 'mybucket',
  :key => 'some/key',
  :content_length => File.size('myfile.txt')
) do |buffer|
  File.open('myfile.txt') do |io|
    buffer.write(io.read(16 * 1024)) until io.eof? # read in fixed-size chunks
  end
end
or another option in the aws ruby sdk docs, under "Streaming a file from disk":
File.open('/source/file/path', 'rb') do |file|
  s3.put_object(bucket: 'bucket-name', key: 'object-key', body: file)
end
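A further option, assuming aws-sdk (v3) and the same bucket as above: the resource interface's upload_file accepts a path directly and switches to multipart uploads for large files:

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(region: 'us-west-2')
obj = s3.bucket('cosn-images').object('cosn_img.jpeg')
# upload_file streams the file from disk rather than uploading the path string.
obj.upload_file('/tmp/cosn_img.jpeg', acl: 'public-read')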

Could not upload images to AWS S3 bucket in Rails

I have a scenario where the photos I upload must be stored in AWS S3 buckets and the images are then pulled into emails, but after upgrading Rails and the corresponding gems I could no longer store the images in S3. I upgraded aws-s3 from 0.6.1 to 0.6.3, aws-sdk from 1.3.5 to 1.3.9, right_aws from 3.0.0 to 3.0.5, and finally Rails from 3.2.1 to 4.2.6.
I have tested with puts statements; every method is reached, but I suspect a syntax change in the upload method around @type (here @type is one of the two names, photo_screenshots and indicator_screenshots).
Please help me.
This is my lib/screenshot.rb:
class Screenshot
  attr_reader :user_id, :report_id, :type

  def initialize(user_id, report_id, type)
    @user_id, @report_id, @type = user_id, report_id, type
    capture
    resize(500, 686) if @type == 'report_screenshots'
    upload
    delete_local_copy
  end

  def capture
    if Rails.env.production?
      phantom = Rails.root.join('vendor/javascripts/phantomjs_linux/bin/phantomjs')
      url = Rails.application.config.custom.domain_url + "users/#{@user_id}/reports/#{@report_id}"
    end
    js = Rails.root.join("vendor/javascripts/#{@type}.js")
    image = Rails.root.join("public/#{@type}/#{@report_id}.png")
    `/bin/bash -c "DISPLAY=:0 #{phantom} #{js} #{url} #{image}"`
  end

  def resize(width, height)
    path = "public/#{@type}/#{@report_id}.png"
    img = Magick::Image::read(path).first
    # img.thumbnail!(width, height)
    img.change_geometry("#{width}x#{height}") do |cols, rows, img|
      img.thumbnail!(cols, rows)
    end
    img.write(path)
  end

  def upload
    file_name = Rails.root.join("public/#{@type}/#{@report_id}.png")
    s3config = YAML.load_file(Rails.root.join('config', 's3.yml'))[Rails.env]
    s3 = RightAws::S3.new(s3config["access_key_id"], s3config["secret_access_key"])
    if @type == 'report_screenshots'
      s3.bucket("my_project.#{Rails.env}", true).put("#{@type}/#{@report_id}.png", File.open(file_name), {}, 'public-read', { 'content-type' => 'image/png' })
    else
      s3.bucket("my_project.#{Rails.env}", true).put("indicator_screenshots/#{@report_id}.png", File.open(file_name), {}, 'public-read', { 'content-type' => 'image/png' })
    end
    report = Report.find(@report_id)
    @type == 'report_screenshots' ? report.update_attribute(:report_screenshot_at, Time.now) : report.update_attribute(:indicator_screenshot_at, Time.now)
  end

  def delete_local_copy
    file_name = Rails.root.join("public/#{@type}/#{@report_id}.png")
    File.delete(file_name)
  end

  def self.delete_s3_copy(report_id, type)
    s3config = YAML.load_file(Rails.root.join('config', 's3.yml'))[Rails.env]
    s3 = RightAws::S3.new(s3config["access_key_id"], s3config["secret_access_key"])
    s3.bucket("my_project.#{Rails.env}").key("#{type}/#{report_id}.png").delete
  end
end
Whenever I click on send an email, this is what happens:
controller:
def send_test_email
  if @report.photos.empty?
    Rails.env.development? ? Screenshot.new(@user.id, @report.id, Rails.application.config.custom.indicator_screenshot_bucket) : Screenshot.delay.new(@user.id, @report.id, Rails.application.config.custom.indicator_screenshot_bucket)
  else
    Rails.env.development? ? Screenshot.new(@user.id, @report.id, "photo_screenshots") : Screenshot.delay.new(@user.id, @report.id, "photo_screenshots")
  end
  ReportMailer.delay.test_report_email(@user, @report)
  respond_to do |format|
    format.json { render :json => { :success => true, :report_id => @report.id, :notice => 'Test email was successfully sent!' } }
  end
end
This is RAILS_ENV=production log:
New RightAws::S3Interface using shared connections mode
Opening new HTTPS connection to my_project.production.s3.amazonaws.com:443
Opening new HTTPS connection to s3.amazonaws.com:443
2016-09-26T10:48:46+0000: [Worker(delayed_job host:ip-172-31-24-139 pid:8769)] Job Screenshot.new (id=528) FAILED (16 prior attempts) with Errno::ENOENT: No such file or directory @ rb_sysopen - /var/www/html/project/my_project/public/photo_screenshots/50031.png
2016-09-26T10:48:46+0000: [Worker(delayed_job host:ip-172-31-24-139 pid:8769)] Job Screenshot.new (id=529) RUNNING
2016-09-26T10:48:46+0000: [Worker(delayed_job host:ip-172-31-24-139 pid:8769)] Job Screenshot.new (id=529) FAILED (16 prior attempts) with Magick::ImageMagickError: unable to open file `public/report_screenshots/50031.png' @ error/png.c/ReadPNGImage/3733
2016-09-26T10:48:46+0000: [Worker(delayed_job host:ip-172-31-24-139 pid:8769)] 2 jobs processed at 1.6978 j/s, 2 failed
This is AWS production log:
New RightAws::S3Interface using shared connections mode
2016-09-26T16:00:30+0530: [Worker(host:OSI-L-0397 pid:7117)] Job Screenshot.new (id=50) FAILED (6 prior attempts) with Errno::ENOENT: No such file or directory @ rb_sysopen - /home/abcuser/Desktop/project/my_project/public/photo_screenshots/10016.png
2016-09-26T16:00:30+0530: [Worker(host:OSI-L-0397 pid:7117)] Job Screenshot.new (id=51) RUNNING
2016-09-26T16:00:30+0530: [Worker(host:OSI-L-0397 pid:7117)] Job Screenshot.new (id=51) FAILED (6 prior attempts) with Magick::ImageMagickError: unable to open file `public/report_screenshots/10016.png' @ error/png.c/ReadPNGImage/3667
2016-09-26T16:00:30+0530: [Worker(host:OSI-L-0397 pid:7117)] 2 jobs processed at 0.2725 j/s, 2 failed
You can try a simpler approach: upload to one fixed bucket, with a different folder for each object or application. S3 limits the number of buckets you can create, whereas there is no limit on the content inside a bucket.
The code below uploads an image for a user to S3 using the aws-sdk gem. The bucket and the uploaded image are made public so that the uploaded images are directly accessible. Its inputs are the complete path of the image, the folder it should be uploaded into, and the user_id it should be uploaded for.
def save_screenshot_to_s3(image_location, folder_name, user_id)
  service = AWS::S3.new(:access_key_id => ACCESS_KEY_ID,
                        :secret_access_key => SECRET_ACCESS_KEY)
  bucket_name = "app-images"
  if service.buckets.include?(bucket_name)
    bucket = service.buckets[bucket_name]
  else
    bucket = service.buckets.create(bucket_name)
  end
  bucket.acl = :public_read
  key = folder_name.to_s + "/" + File.basename(image_location)
  s3_file = service.buckets[bucket_name].objects[key].write(:file => image_location)
  s3_file.acl = :public_read
  user = User.where(id: user_id).first
  user.image = s3_file.public_url.to_s
  user.save
end
As for the screenshot part: in your capture method you have done something like this.
`/bin/bash -c "DISPLAY=:0 #{phantom} #{js} #{url} #{image}"`
Is the /bin/bash wrapper really required? Change it to the code below and it should work.
`DISPLAY=:0 "#{phantom}" "#{js}" "#{url}" "#{image}"`
Leave it as is if that breaks something else.
Since you already know the final image location, image, pass it directly to save_screenshot_to_s3 and you should be able to save the screenshot. This will also save the image path on the user if you pass your user_id as specified in the method.
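Separately, both logs fail with Errno::ENOENT, i.e. the local PNG does not exist by the time the delayed job runs. A guard in upload makes that failure explicit; a minimal sketch against the Screenshot class above:

def upload
  file_name = Rails.root.join("public/#{@type}/#{@report_id}.png")
  # Fail loudly if PhantomJS never wrote the file (the ENOENT in the logs),
  # instead of letting File.open raise halfway into the S3 call.
  raise "Screenshot was not captured: #{file_name}" unless File.exist?(file_name)
  # ... continue with the RightAws::S3 put as before ...
end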

How to upload using CarrierWave an http_basic_authentication protected file

I'm trying to upload a file that is protected under http_basic_authentication with CarrierWave. Here is the tested code:
hs = House.new
hs.remote_house_url = "http://username:password@127.0.0.1:3000/houses/export.csv"
hs.save!
I'm expecting the file to be uploaded, but I get the following:
(13.2ms) BEGIN
(0.8ms) ROLLBACK
ActiveRecord::RecordInvalid: The validation failed : House could not download file: userinfo not supported. [RFC3986]
from /Users/htaidirt/.rvm/gems/ruby-2.1.1/gems/activerecord-4.0.0/lib/active_record/validations.rb:57:in `save!'
I know the problem is passing the HTTP basic authentication credentials (username and password) inside the URL, thanks to the "userinfo not supported" message. But what is the right way to do it? Thanks.
I just encountered a similar problem. OpenURI does not allow you to supply basic auth credentials as part of the URL; instead it should be like:
open("http://www.your-website.net",
     http_basic_authentication: ["user", "password"])
(which I found here: http://blog.andreamostosi.name/2013/04/open-uri-and-basic-authentication/)
CarrierWave does not seem to support this by default. For now, I have monkey patched the CarrierWave::Uploader::Download::RemoteFile class to add the required basic auth. I will try to submit a better version of this as a pull request so it can hopefully be added to the gem, but for now I created config/initializers/overrides.rb with the contents:
# Add basic auth to CarrierWave
module CarrierWave
  module Uploader
    module Download
      class RemoteFile
        private

        def file
          if @file.blank?
            @file = Kernel.open(@uri.to_s, http_basic_authentication: ["USERNAME", "PASSWORD"])
            @file = @file.is_a?(String) ? StringIO.new(@file) : @file
          end
          @file
        rescue Exception => e
          raise CarrierWave::DownloadError, "could not download file: #{e.message}"
        end
      end
    end
  end
end
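If you would rather not monkey patch, an alternative sketch (assuming House mounts its uploader as house, per the remote_house_url call above) is to download the file yourself with open-uri's basic-auth option and hand CarrierWave a real file:

require 'open-uri'
require 'tempfile'

# Fetch with credentials, so CarrierWave never sees a userinfo URL.
downloaded = open("http://127.0.0.1:3000/houses/export.csv",
                  http_basic_authentication: ["username", "password"])
tmp = Tempfile.new(["export", ".csv"])
tmp.binmode
tmp.write(downloaded.read)
tmp.rewind

hs = House.new
hs.house = tmp # assign the tempfile directly to the mounted uploader
hs.save!
tmp.close!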

Ruby on Rails - How to Save Remote File over HTTPS and Basic Authentication

I am using a supplier's API, and the response they send to our server includes a URL to a file; upon trying to save this file locally, I fail miserably.
def self.create_file_new(filename, ext, url)
  require 'open-uri'
  file = Tempfile.new(filename + ext)
  file.binmode
  # data = open(url).read
  # data = open(url, :http_basic_authentication => [username, password])
  file << open(url, :http_basic_authentication => [username, password]).read
  # file.write CGI::unescape(data)
  file.close
  file = File.open(file.path)
  return file
end
I was originally getting an OpenURI::HTTPError (401 Unauthorised), but I have since created a file named bypass_ssl_verification_for_open_uri in app/initializers containing the following:
# Make open-uri work with https
OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE
which I found whilst Googling on how to fix it.
I then started to get this error message: NoMethodError (undefined method 'tr' for #<StringIO:0xb5b728c4>). I tried creating another file (cgi_escape_fix.rb in app/initializers) containing this:
require 'cgi'

class << CGI
  alias_method :orig_escape, :escape

  def escape(str)
    orig_escape(str.to_str)
  end
end
I also found this on my Google travels, but it doesn't seem to have solved anything, so I commented out the file.write CGI::unescape(data) call to try a different way, but still no joy.
Now I am just getting a plain 500 Internal Server Error in the log, with no useful information.
The file I'm attempting to save will always be a pdf.
Ruby 1.8.7
Rails 2.3.14
Got it to work with the following (two new initializer scripts removed):
file = Tempfile.new(filename + ext)
file.binmode
file << open(url, :http_basic_authentication => [username, password]).read
file.close
file = File.open(file.path)
return file
I should also mention that the file is being passed to the attachment_fu plugin, in case anyone else has problems with it.
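If you can move past Ruby 1.8.7, open-uri also accepts per-call SSL options, which avoids globally redefining OpenSSL::SSL::VERIFY_PEER the way the deleted initializer did. A sketch using the same filename, ext, url, username, and password as the method above:

require 'open-uri'
require 'openssl'

file = Tempfile.new(filename + ext)
file.binmode
# ssl_verify_mode scopes the relaxed verification to this one request.
file << open(url,
             :http_basic_authentication => [username, password],
             :ssl_verify_mode => OpenSSL::SSL::VERIFY_NONE).read
file.close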

Migrating paperclip S3 images to new url/path format

Is there a recommended technique for migrating a large set of paperclip S3 images to a new :url and :path format?
The reason for this is because after upgrading to rails 3.1, new versions of thumbs are not being shown after cropping (previously cached version is shown). This is because the filename no longer changes (since asset_timestamp was removed in rails 3.1). I'm using :fingerprint in the url/path format, but this is generated from the original, which doesn't change when cropping.
I was intending to insert :updated_at in the url/path format, and update attachment.updated_at during cropping, but after implementing that change all existing images would need to be moved to their new location. That's around half a million images to rename over S3.
At this point I'm considering copying them to their new location first, then deploying the code change, then moving any images which were missed (ie uploaded after the copy), but I'm hoping there's an easier way... any suggestions?
I had to change my Paperclip path in order to support image cropping, and I ended up creating a rake task to help out.
namespace :paperclip_migration do
  desc 'Migrate data'
  task :migrate_s3 => :environment do
    # Make sure that all of the models have been loaded so any attachments are registered
    puts 'Loading models...'
    Dir[Rails.root.join('app', 'models', '**/*')].each { |file| File.basename(file, '.rb').camelize.constantize }

    # Iterate through all of the registered attachments
    puts 'Migrating attachments...'
    attachment_registry.each_definition do |klass, name, options|
      puts "Migrating #{klass}: #{name}"
      klass.find_each(batch_size: 100) do |instance|
        attachment = instance.send(name)
        unless attachment.blank?
          attachment.styles.each do |style_name, style|
            old_path = interpolator.interpolate(old_path_option, attachment, style_name)
            new_path = interpolator.interpolate(new_path_option, attachment, style_name)
            # puts "#{style_name}:\n\told: #{old_path}\n\tnew: #{new_path}"
            s3_copy(s3_bucket, old_path, new_path)
          end
        end
      end
    end
    puts 'Completed migration.'
  end

  #############################################################################

  private

  # Paperclip configuration
  def attachment_registry
    Paperclip::AttachmentRegistry
  end

  def s3_bucket
    ENV['S3_BUCKET']
  end

  def old_path_option
    ':class/:id_partition/:attachment/:hash.:extension'
  end

  def new_path_option
    ':class/:attachment/:id_partition/:style/:filename'
  end

  def interpolator
    Paperclip::Interpolations
  end

  # S3
  def s3
    AWS::S3.new(access_key_id: ENV['S3_KEY'], secret_access_key: ENV['S3_SECRET'])
  end

  def s3_copy(bucket, source, destination)
    source_object = s3.buckets[bucket].objects[source]
    destination_object = source_object.copy_to(destination, { metadata: source_object.metadata.to_h })
    destination_object.acl = source_object.acl
    puts "Copied #{source}"
  rescue Exception => e
    puts "*Unable to copy #{source} - #{e.message}"
  end
end
Didn't find a feasible method for migrating to a new url format. I ended up overriding Paperclip::Attachment#generate_fingerprint so it appends :updated_at.
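A sketch of that override, reconstructed from the description (assuming Paperclip ~> 3.x, where Attachment#generate_fingerprint and instance_read exist; method names may differ in other versions):

# config/initializers/paperclip_fingerprint.rb
module Paperclip
  class Attachment
    alias_method :orig_generate_fingerprint, :generate_fingerprint

    # Append updated_at so cropping (which touches the record) changes
    # the fingerprint, and therefore the URL, of regenerated thumbs.
    def generate_fingerprint(source)
      "#{orig_generate_fingerprint(source)}#{instance_read(:updated_at).to_i}"
    end
  end
end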
