Rails 4, Fog, Amazon S3 - retrieving all the images as an array from a specific folder in a bucket

I am using Amazon S3, Rails 4, and the Fog gem. I have an Amazon bucket called uipstudy with 100 folders, each containing about 20 images. I use the following to get all the images in a specific folder (in my application_helper.rb, which is included in application_controller.rb).
def get_files(image_folder)
  connection = Fog::Storage.new(
    provider: 'AWS',
    aws_access_key_id: '######',
    aws_secret_access_key: '#######'
  )
  connection.directories.get('uipimages', prefix: image_folder).files.map do |file|
    file.key
  end
end
In my controller I have the following; in this example I am looking in the folder "1" in the uipstudy bucket.
# Amazon solution:
@image_array = []
@images = get_files('1')
@images.each do |image|
  image = "https://s3.amazonaws.com/uipstudy/#{image}"
  @image_array << image
end
The problem is that it's returning the files inside the folder labelled "1", but also those inside 10, 11, 12, 13, and so on. I assumed the prefix was an exact match, but apparently it is not. Is there a way to make it match only the folder specified in the prefix?

I think you should be able to make a small change in your script to get the behavior you want. Simply append a forward slash to the prefix so that it clearly shows you want things that are like a directory instead of any/all things that begin with a particular character.
So, that would get you something like:
directory = connection.directories.get('uipimages', prefix: image_folder + '/')
directory.files.map do |file|
  file.key
end
(I just split it into two statements to make it easier to read.)
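The underlying reason is that an S3 prefix is a plain string match against object keys, not a directory lookup. A tiny plain-Ruby illustration with hypothetical key names:
keys = ['1/a.jpg', '10/b.jpg', '11/c.jpg', '1/d.jpg']
# A bare prefix matches from the start of the key, so '1' also catches 10/, 11/, ...
keys.select { |k| k.start_with?('1') }  # => ["1/a.jpg", "10/b.jpg", "11/c.jpg", "1/d.jpg"]
# Appending the slash makes the prefix behave like a directory
keys.select { |k| k.start_with?('1/') } # => ["1/a.jpg", "1/d.jpg"]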

Below is my solution using the aws-sdk gem.
# initialize the S3 client
s3 = AWS::S3.new
bucket = s3.buckets[ENV['AWS_BUCKET']]
# regex for .ipa files in the _inbox folder
regex = %r{_inbox/(?:[^/]+/)*[^/]+\.ipa}i
# get and process the .ipa files
bucket.objects.select { |o| o.key.match(regex) }.each do |ipa|
  # ... process each matching object here
end
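For anyone on the newer aws-sdk-s3 gem, the same folder-scoped listing would look roughly like this (the bucket and folder names follow the first question above; the region and everything else are assumptions, so treat it as a sketch):
require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-east-1')
# A prefix ending in '/' restricts the listing to keys inside folder "1".
# Note: list_objects_v2 returns at most 1000 keys per call, so large folders need pagination.
resp = s3.list_objects_v2(bucket: 'uipstudy', prefix: '1/')
keys = resp.contents.map(&:key)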

Related

Rails - Resave All Models for S3 Migration

rails 6.1.3.2
aws-sdk-s3 gem
I currently have a rails app in production that uses ActiveStorage to attach image data to a wrapper Image model. It's currently using the local strategy to save images to disk and I am migrating it to S3. I am not using paperclip or anything similar.
I succeeded in setting it up. Currently it is set to use local primarily and have S3 as a mirror so that I can write to two places during the migration. However the documentation says that it will only save new images to S3 upon create and update of a record. I would like to "re-save" all models in production to force the migration to happen. Does anyone know how to do this?
Looks like it was already answered!
If you happen to be stuck with only access to the Rails Console like I was, this solution worked perfectly. If you copy-paste this code into the console, it will begin to produce output of the S3 uploads. After 5k of those, I was done. An immense thank you to Tayden for the solution.
all_services = [ActiveStorage::Blob.service.primary, *ActiveStorage::Blob.service.mirrors]
# Iterate through each blob
ActiveStorage::Blob.all.each do |blob|
  # Select the services where the file exists
  services = all_services.select { |service| service.exist? blob.key }
  # Skip this blob if the file doesn't exist anywhere
  next unless services.present?
  # Select the services where the file doesn't exist
  mirrors = all_services - services
  # Open the local file (if one exists)
  local_file = File.open(services.find { |service| service.is_a? ActiveStorage::Service::DiskService }.path_for(blob.key)) if services.select { |service| service.is_a? ActiveStorage::Service::DiskService }.any?
  # Upload the local file to the mirrors (if one exists)
  mirrors.each do |mirror|
    mirror.upload blob.key, local_file, checksum: blob.checksum
  end if local_file.present?
  # If no local file exists, download a remote copy and upload it to the mirrors (thanks @Rystraum)
  services.first.open blob.key, checksum: blob.checksum do |temp_file|
    mirrors.each do |mirror|
      mirror.upload blob.key, temp_file, checksum: blob.checksum
    end
  end unless local_file.present?
end
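One caveat if you have a lot of blobs: ActiveStorage::Blob.all.each loads every record into memory at once. A batched variant (just a sketch; the per-blob body is the same mirroring logic as above) would use find_each:
# find_each fetches blobs in batches of 1000 by default, keeping memory flat
ActiveStorage::Blob.find_each do |blob|
  # ... same per-blob mirroring logic as above ...
end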

How to specify a prefix when uploading to S3 using activestorage's direct upload?

With a standard S3 configuration:
AWS_ACCESS_KEY_ID: [AWS ID]
AWS_BUCKET: [bucket name]
AWS_REGION: [region]
AWS_SECRET_ACCESS_KEY: [secret]
I can upload a file to S3 (using direct upload) with this Rails 5.2 code (only relevant code shown):
form.file_field :my_asset, direct_upload: true
This will effectively put my asset in the root of my S3 bucket, upon submitting the form.
How can I specify a prefix (e.g. "development/", so that I can mimic a folder on S3)?
2022 update: as of Rails 6.1 (check this commit), this is actually supported:
user.avatar.attach(key: "avatars/#{user.id}.jpg", io: io, content_type: "image/jpeg", filename: "avatar.jpg")
My current workaround (at least until ActiveStorage introduces the option to pass a path to the has_one_attached and has_many_attached macros) on S3 is to use the move_to method.
So I'm letting ActiveStorage save the image to S3 as it normally does right now (at the top of the bucket), then moving the file into a folder structure.
The move_to method copies the file into the folder structure you pass and then deletes the object that was put at the root of the bucket. This way your file ends up where you want it.
So, for instance, if we were storing driver details (name and drivers_license), save them as you're already doing so that they sit at the top of the bucket.
Then implement the following (I put mine in a helper):
module DriversHelper
  def restructure_attachment(driver_object, new_structure)
    old_key = driver_object.image.key
    begin
      # Passing S3 configs
      config = YAML.load_file(Rails.root.join('config', 'storage.yml'))
      s3 = Aws::S3::Resource.new(
        region: config['amazon']['region'],
        credentials: Aws::Credentials.new(config['amazon']['access_key_id'], config['amazon']['secret_access_key'])
      )
      # Fetching the licence's Aws::S3::Object
      old_obj = s3.bucket(config['amazon']['bucket']).object(old_key)
      # Moving the license into the new folder structure
      old_obj.move_to(bucket: config['amazon']['bucket'], key: "#{new_structure}")
      update_blob_key(driver_object, new_structure)
    rescue => ex
      driver_helper_logger.error("Error restructuring license belonging to driver with id #{driver_object.id}: #{ex.full_message}")
    end
  end

  private

  # The new structure becomes the new ActiveStorage Blob key
  def update_blob_key(driver_object, new_key)
    blob = driver_object.image_attachment.blob
    begin
      blob.key = new_key
      blob.save!
    rescue => ex
      driver_helper_logger.error("Error reassigning the new key to the blob object of the driver with id #{driver_object.id}: #{ex.full_message}")
    end
  end

  def driver_helper_logger
    @driver_helper_logger ||= Logger.new("#{Rails.root}/log/driver_helper.log")
  end
end
It's important to update the blob key so that references to the key don't return errors.
If the key is not updated, any function attempting to reference the image will look for it in its former location (the top of the bucket) rather than in its new one.
I'm calling this function from my controller as soon as the file is saved (that is, in the create action) so that it looks seamless even though it isn't.
While this may not be the best way, it works for now.
FYI: Based on the example you gave, the new_structure variable would be new_structure = "development/#{driver_object.image.key}".
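For context, a minimal sketch of what that controller call might look like (the controller, params, and redirect are my illustration, not part of the original answer):
class DriversController < ApplicationController
  include DriversHelper

  def create
    @driver = Driver.new(driver_params)
    if @driver.save
      # Move the freshly uploaded image into the desired folder structure
      restructure_attachment(@driver, "development/#{@driver.image.key}")
      redirect_to @driver
    else
      render :new
    end
  end
end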
I hope this helps! :)
Thank you, Sonia, for your answer.
I tried your solution and it works great, but I encountered problems with overwriting attachments: I often got an IntegrityError while doing it. I think this, along with checksum handling, may be why the Rails core team doesn't want to add a pass-a-pathname feature; it would require changing the entire logic of the upload method.
The ActiveStorage::Attached#create_from_blob method can also accept an ActiveStorage::Blob object, so I tried a different approach:
1. Create a Blob manually, with a key that represents the desired file structure, and upload the attachment to it.
2. Attach the created Blob with the usual ActiveStorage method.
In my usage, the solution looked something like this:
def attach(file) # method for attaching in the model
  blob_key = destination_pathname(file)
  blob = ActiveStorage::Blob.find_by(key: blob_key.to_s)
  unless blob
    blob = ActiveStorage::Blob.new.tap do |blob|
      blob.filename = blob_key.basename.to_s
      blob.key = blob_key
      blob.upload file
      blob.save!
    end
  end
  # Attach method from ActiveStorage
  self.file.attach blob
end
By passing a full pathname as the Blob's key, I got the desired file structure on the server.
Sorry, that’s not currently possible. I’d suggest creating a bucket for Active Storage to use exclusively.
The above solution will still raise an IntegrityError; you need to use File.open(file). Thanks for the idea, though.
class History < ApplicationRecord
  has_one_attached :gs_history_file

  def attach(file) # method for attaching in the model
    blob_key = destination_pathname(file)
    blob = ActiveStorage::Blob.find_by(key: blob_key.to_s)
    unless blob
      blob = ActiveStorage::Blob.new.tap do |blob|
        blob.filename = blob_key.to_s
        blob.key = blob_key
        # blob.byte_size = 123123
        # blob.checksum = Time.new.strftime("%Y%m%d-") + Faker::Alphanumeric.alpha(6)
        blob.upload File.open(file)
        blob.save!
      end
    end
    # Attach method from ActiveStorage
    self.gs_history_file.attach blob
  end

  def destination_pathname(file)
    "testing/filename-#{Time.now}.xlsx"
  end
end

Rails uploading to Amazon S3 bucket

I have two different buckets in the same Amazon S3 account. I can upload fine to one but not the other.
pictures.rb
def self.set_s3_direct_post
  return S3_BUCKET.presigned_post(key: "uploads/#{SecureRandom.uuid}/${filename}", success_action_status: '201', acl: 'public-read')
end
aws.rb
S3_BUCKET = Aws::S3::Resource.new.bucket(ENV['S3_FIRST_BUCKET'])
Changing the bucket (using S3_SECOND_BUCKET instead of S3_FIRST_BUCKET) results in a failed upload. I notice that the URL for a failed upload has the form:
https://s3.amazonaws.com/%E2%80%9Csecond_bucket%E2%80%9D
but a successful upload looks like:
https://first_bucket.s3.amazonaws.com
How do I control this??
Unbelievable. I edited my ~/.profile file in TextEdit. The addition of the line:
export S3_SECOND_BUCKET="second_bucket"
put slightly curly double-quotes around second_bucket. It turns out these are different from the straight quotes Sublime Text 2 uses (like above), and the TextEdit versions weren't being recognised as quotes. Cheers TextEdit, you t**t.
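If you suspect the same trap, here's a quick sanity check you can run in irb or the Rails console (just a sketch):
# Flag any environment variables whose values contain curly double-quotes
ENV.select { |_name, value| value =~ /[\u201C\u201D]/ }
# => {"S3_SECOND_BUCKET"=>"“second_bucket”"} if the smart quotes snuck in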

Sanitising a download URL with params being passed to send_file

I have numerous files in the directory app/assets/downloadables/ and I want authenticated users to be able to download arbitrary files by name (passed in params) from that directory or any of its subdirectories.
How can I sanitise the input to send_file to prevent users from accessing arbitrary files elsewhere on the server?
At the moment I'm looking at doing something like this, but I'm not confident that it's safe:
DOWNLOADS_ROOT = File.join(Rails.root, "app", "assets", "downloadables")
# e.g. request.path = "/downloads/subdir/file1.pdf"
file = request.path.gsub(/^\/downloads\//,'').gsub('..','').split('/')
# send file /app/assets/downloadables/subdir/file1.pdf
send_file File.join(DOWNLOADS_ROOT, file)
Would this sufficiently protect against app-wide arbitrary file access or are there improvements or a different approach that would be better?
I found an answer to this here: http://makandracards.com/makandra/1083-sanitize-user-generated-filenames-and-only-send-files-inside-a-given-directory
This file needed to be created per the link:
app/controllers/shared/send_file_inside_trait.rb
module ApplicationController::SendFileInsideTrait
  as_trait do
    private

    def send_file_inside(allowed_path, filename, options = {})
      path = File.expand_path(File.join(allowed_path, filename))
      if path.match Regexp.new('^' + Regexp.escape(allowed_path))
        send_file path, options
      else
        raise 'Disallowed file requested'
      end
    end
  end
end
To be used as follows in controllers:
send_file_inside File.join(Rails.root, 'app', 'assets', 'downloadables'), request.path.gsub(/^\/downloads\//,'').split('/')
The magic happens where the computed allowed_path + filename is passed through expand_path, which resolves any directory-traversal segments and returns an absolute path. That resolved path is then compared against allowed_path to ensure the requested file resides within the allowed path and/or its subdirectories.
Note that this solution requires the Modularity gem v2 https://github.com/makandra/modularity
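If you'd rather not depend on Modularity, the same expand_path containment check works as a plain controller concern. A minimal sketch (the module name and the :forbidden response are my choices, not from the linked card):
# app/controllers/concerns/safe_downloads.rb (hypothetical)
module SafeDownloads
  private

  # Serve a file only if its resolved path stays inside allowed_path
  def send_file_inside(allowed_path, filename, options = {})
    allowed_path = File.expand_path(allowed_path)
    path = File.expand_path(File.join(allowed_path, filename))
    if path.start_with?(allowed_path + File::SEPARATOR)
      send_file path, options
    else
      head :forbidden
    end
  end
end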

Heroku - how to write into "tmp" directory?

I need to use the tmp folder on Heroku (Cedar) to write some temporary data. I am trying to do it this way:
open("#{Rails.root}/tmp/#{result['filename']}", 'wb') do |file|
file.write open(image_url).read
end
But this produces the error:
Errno::ENOENT: No such file or directory - /app/tmp/image-2.png
This code runs properly on localhost, but I cannot make it work on Heroku.
What is the proper way to save some files to the tmp directory on Heroku (Cedar stack)?
Thank you
EDIT:
I am running a method with Delayed Job that needs to have access to the tmp file.
EDIT2:
What I am doing:
files.each_with_index do |f, index|
  unless f.nil?
    result = JSON.parse(buffer)
    filename = "#{Time.now.to_i.to_s}_#{result['filename']}" # thumbnail name
    thumb_filename = "#{Rails.root}/tmp/#{filename}"
    image_url = f.file_url + "/convert?rotate=exif"
    open("#{Rails.root}/tmp/#{result['filename']}", 'wb') do |file|
      file.write open(image_url).read
    end
    img = Magick::Image.read(image_url).first
    target = Magick::Image.new(150, 150) do
      self.background_color = 'white'
    end
    img.resize_to_fit!(150, 150)
    target.composite(img, Magick::CenterGravity, Magick::CopyCompositeOp).write(thumb_filename)
    key = File.basename(filename)
    s3.buckets[bucket_name].objects[key].write(:file => thumb_filename)
    # save the path to the new thumbnail in the database
    f.update_attributes(:file_url_thumb => "https://s3-us-west-1.amazonaws.com/bucket/#{filename}")
  end
end
I have information about the images in my database; the images themselves are stored in an Amazon S3 bucket, and I need to create thumbnails for them. So I go through the images one by one: load the image, save it temporarily, resize it, and then upload the thumbnail to the S3 bucket.
But this procedure doesn't seem to work on Heroku, so how could I do it (my app runs on Heroku)?
Is /tmp included in your git repo? Removed in your .slugignore? The directory may just not exist out on Heroku.
Try tossing in a quick mkdir before the write:
Dir.mkdir(File.join(Rails.root, 'tmp'))
Or even in an initializer or something...
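For what it's worth, FileUtils.mkdir_p is a slightly safer choice here, since it creates missing parents and won't raise if the directory already exists (a minimal sketch):
require 'fileutils'

# Create tmp/ (and any missing parents) without failing if it already exists
FileUtils.mkdir_p(Rails.root.join('tmp'))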
Here's an elegant way:
f = File.new("tmp/filename.txt", 'w')
f << "hi there"
f.close
Dir.entries("#{Dir.pwd}/tmp") # see your newly created file in /tmp
Don't forget that whenever your app restarts (for any reason, including ones outside your control), your files will be deleted; they are only stored ephemerally.
Try it with heroku restart and you will see that the new file you created is no longer there.
