Upload files to multiple buckets - ruby-on-rails

I created a Rails application that uploads files to an S3 bucket through CarrierWave.
At the moment I upload them to one bucket, but I want to upload them to two buckets in two regions at the same time.
How can I do that?

You can create an upload method and send your bucket name as an argument. A quick and dirty version would look something like:
def upload_file(specific_bucket = nil)
  if specific_bucket
    # upload to specific_bucket
  else
    BUCKET_LIST.each do |bucket|
      # send file to bucket
    end
  end
end
Store your bucket list in an appropriate location:
BUCKET_LIST = ['bucket-name-one', 'bucket-name-two']
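Since a CarrierWave uploader is normally configured for a single fog bucket, one pragmatic option is to push the file to each extra bucket with the AWS SDK inside that loop. A rough sketch, assuming the aws-sdk-s3 gem and placeholder bucket/region names:

require 'aws-sdk-s3'

# Placeholder bucket/region pairs - replace with your own.
BUCKET_TARGETS = [
  { bucket: 'my-app-uploads-us', region: 'us-east-1' },
  { bucket: 'my-app-uploads-eu', region: 'eu-west-1' }
]

def upload_to_all_buckets(path, key)
  BUCKET_TARGETS.each do |target|
    # S3 clients are region-scoped, so build one per target region.
    client = Aws::S3::Client.new(region: target[:region])
    File.open(path, 'rb') do |io|
      client.put_object(bucket: target[:bucket], key: key, body: io)
    end
  end
end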

Related

Use existing image on S3 if it exists from Rails seeds

I am trying to improve a set of Rails seeds which currently upload all their file attachments to S3 regardless of whether they already exist there. This results in very slow seeds and an obvious waste of S3 resources. It looks like this:
factory :logo_attachment do
  file do
    filename = Dir.glob(Rails.root.join('test', 'fixtures', 'files', 'logos', '*.{jpg,png,gif,svg}')).sample
    extname = File.extname(filename)[1..-1]
    mime_type = Mime::Type.lookup_by_extension(extname)
    content_type = mime_type.to_s
    fixture_file_upload(filename, content_type)
  end
end
Attached with:
after(:create) do |organisation|
  organisation.logo = create(:logo_attachment, attacher: organisation)
end
How can I improve this state of affairs so that it reuses pre-existing images on S3? I would have liked to use local images only for this process, but the application is heavily tied to S3.
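One possible approach is to check whether the object already exists on S3 before doing any upload work and reuse it if so. A rough sketch, assuming the aws-sdk-s3 gem, a made-up bucket name, and that the S3 key can be derived from the fixture's basename:

require 'aws-sdk-s3'

# Placeholder region/bucket and key scheme - adjust to the uploader's store_dir.
S3_BUCKET = Aws::S3::Resource.new(region: 'us-east-1').bucket('my-seed-bucket')

def existing_s3_key(filename)
  key = "logos/#{File.basename(filename)}"
  # exists? issues a cheap HEAD request instead of re-uploading the file
  S3_BUCKET.object(key).exists? ? key : nil
end

filename = Dir.glob(Rails.root.join('test', 'fixtures', 'files', 'logos', '*.{jpg,png,gif,svg}')).sample
if (key = existing_s3_key(filename))
  # reuse the remote file, e.g. by recording its key/URL instead of attaching again
  logo_url = S3_BUCKET.object(key).public_url
else
  # fall back to the existing fixture_file_upload path
end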

Download file in service object with ActiveStorage

A user uploads a document and this gets stored in Azure with ActiveStorage. The next step is that the backend processes this and therefore I have a service object to do this. So I need to download the file from Azure to the tmp folder within the Rails app. How do I download the file? I cannot use rails_blob_url because it is not available in a service object, only in controllers and views.
When I still used Paperclip I did something like this:
require 'open-uri'
file = Rails.root.join('tmp', user.attachment_file_name)
name = user.attachment_file_name
download = open(user.attachment.url)
download_result = IO.copy_stream(download, file)
How can I do something similar with ActiveStorage?
You can use ActiveStorage::Blob#open:
Downloads the blob to a tempfile on disk. Yields the tempfile.
Given this example from the guides:
class User < ApplicationRecord
  has_one_attached :avatar
end
You can do this with:
user.avatar.open do |tempfile|
  # do something with the file
end
If it's has_many_attached, you of course need to loop through the attachments.
See:
Active Storage Overview
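In a service object this could look roughly like the sketch below (the class name and tmp handling are illustrative, not from the original answer). The tempfile is removed when the block returns, so copy it into tmp/ inside the block if you need it afterwards:

class ProcessAttachmentService
  def initialize(user)
    @user = user
  end

  def call
    @user.avatar.open do |tempfile|
      # Copy the downloaded blob into the app's tmp folder before the
      # tempfile is cleaned up at the end of the block.
      destination = Rails.root.join('tmp', @user.avatar.filename.to_s).to_s
      IO.copy_stream(tempfile, destination)
      destination
    end
  end
end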

Active Storage, specify a Google bucket directory?

I've been implementing an Active Storage Google strategy on Rails 5.2. At the moment I am able to upload files using the Rails console without problems; the only thing I am missing is whether there is a way to specify a directory inside a bucket. Right now I am uploading as follows:
bk.file.attach(io: File.open(bk.source_dir.to_s), filename: "file.tar.gz", content_type: "application/x-tar")
The configuration in my storage.yml:
google:
  service: GCS
  project: my-project
  credentials: <%= Rails.root.join("config/myfile.json") %>
  bucket: bucketname
But in my bucket there are different directories, such as bucketname/department1 and so on. I've been through the documentation and have not found a way to specify further directories, and all my uploads end up at the top level of the bucket.
Sorry, I’m afraid Active Storage doesn’t support that. You’re intended to configure Active Storage with a bucket it can use exclusively.
Maybe you can try metaprogramming, something like this:
Create config/initializers/active_storage_service.rb to add a set_bucket method to ActiveStorage::Service:
module Methods
  def set_bucket(bucket_name)
    # update the config bucket
    config[:bucket] = bucket_name
    # update the current bucket
    @bucket = client.bucket(bucket_name, skip_lookup: true)
  end
end
ActiveStorage::Service.class_eval { include Methods }
Update your bucket before uploading or downloading files:
ActiveStorage::Blob.service.set_bucket "my_bucket_name"
bk.file.attach(io: File.open(bk.source_dir.to_s), filename: "file.tar.gz", content_type: "application/x-tar")
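If all you need is for the object to land under a prefix such as department1/, note that GCS "directories" are just key prefixes, so another option is to bypass Active Storage for that particular upload and use the google-cloud-storage gem directly. A sketch, with the project, credentials, bucket, and local path as placeholders mirroring the question:

require 'google/cloud/storage'

storage = Google::Cloud::Storage.new(
  project_id: 'my-project',
  credentials: Rails.root.join('config/myfile.json').to_s
)
bucket = storage.bucket('bucketname')

# The "directory" is simply part of the object name.
bucket.create_file('/path/to/file.tar.gz', 'department1/file.tar.gz',
                   content_type: 'application/x-tar')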

Retrieving files from AWS S3 in Ruby

I uploaded multiple PDF files under the path user/pdf/ in AWS S3, so the path for each file looks like user/pdf/file1.pdf, user/pdf/file2.pdf, etc.
In my website (Angular front end, Rails backend), I'm trying to do four things:
1) Retrieve the files under a certain path (user/pdf/).
2) Make a view that lists the names of the files retrieved from that path.
3) Let users click the name of a file to open it via an S3 endpoint.
4) Delete a file by clicking a button.
I was looking into the AWS S3 docs, but I could not find the related API calls. I would love some help performing the actions above.
You should review the Ruby S3 SDK documentation.
Listing objects from a bucket:
# enumerate ALL objects in the bucket (even if the bucket contains
# more than 1k objects)
bucket.objects.each do |obj|
  puts obj.key
end
# enumerate at most 20 objects with the given prefix
bucket.objects.with_prefix('photos/').each(:limit => 20) do |photo|
  puts photo.key
end
Getting an object:
# makes no request, returns an AWS::S3::S3Object
obj = bucket.objects['key']
Deleting an object:
bucket.objects.delete('abc')
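Note that the snippets above use the older aws-sdk v1 interface (with_prefix, AWS::S3::S3Object). With the current aws-sdk-s3 gem (v3), the equivalent calls for the listed tasks would look roughly like this, with the region and bucket name as placeholders:

require 'aws-sdk-s3'

bucket = Aws::S3::Resource.new(region: 'us-east-1').bucket('my-bucket')

# 1) list the objects under a prefix
bucket.objects(prefix: 'user/pdf/').each do |summary|
  puts summary.key
end

# 3) build a presigned URL the front end can link to (expires in 1 hour)
url = bucket.object('user/pdf/file1.pdf').presigned_url(:get, expires_in: 3600)

# 4) delete an object
bucket.object('user/pdf/file1.pdf').delete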

Carrierwave + Fog + S3 remove file without going through a model

I am building an application that has a chat component. The application allows users to upload files to the chat. The chat is all JavaScript, but I wanted to use CarrierWave for the uploads because I am using it elsewhere in the application. I handle the uploads through AJAX so that I can get into Rails land and let CarrierWave take over.
I have been able to get the chat to successfully upload the files to the correct location in my S3 bucket. The thing I can't figure out is how to delete the files. Here is my code that uploads the files - this is the method that is called from the route that the AJAX call hits.
def upload
  file = File.open(params[:file_0].tempfile)
  uploader = ChatUploader.new
  uploader.store!(file)
end
There is little to no documentation for CarrierWave on how to upload files without going through a model, and basically NO documentation on how to remove files without going through a model. I assume it is possible, though - I just need to know what to call. So I guess my question is: how do I delete files?
UPDATE (11/23)
I got the code to save and delete files from S3 using these methods:
# code to save the file
def upload
  file = File.open(params[:file_0].tempfile)
  uploader = ChatUploader.new
  uploader.store!(file)
  uploader.store_path
end

# code to remove files
def remove_file
  file = params[:file]
  uploader = ChatUploader.new
  uploader.retrieve_from_store!(file)
  uploader.remove!
end
My only issue now is that the filename for the uploaded file is not correct. It saves all files with a "RackMultipart" prefix followed by some numbers that look like a date, time, and identifier (example: RackMultipart20141123-17740-1tq4j1g). I need to use the original filename, plus maybe a timestamp for uniqueness.
I believe it has something to do with these two lines:
file = File.open(params[:file_0].tempfile)
and
uploader.store!(file)
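A likely fix for the filename (a sketch, not verified against this app): pass the uploaded file object itself to store! instead of re-opening its tempfile. CarrierWave uses original_filename when the stored object responds to it, whereas File.open(tempfile) only exposes the RackMultipart temp name:

# code to save the file, keeping the client's original filename
def upload
  uploader = ChatUploader.new
  # params[:file_0] is an ActionDispatch::Http::UploadedFile, which responds
  # to original_filename, so CarrierWave stores it under that name.
  uploader.store!(params[:file_0])
  uploader.store_path
end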
