Convert and store to S3 with REST API / InkFilepicker - ruby-on-rails

I have a Rails app on Heroku. From the server side (using InkFilepicker's REST API), I would like to convert a file, save it to my S3 bucket, and store the S3 URL on my model.
Concretely: given an image (https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG), I want to convert it (https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG/convert?w=200&h=150&fit=clip) and store the converted image in my S3 bucket.
EDIT
Here is what I did in the end:

require 'open-uri'

after_save :save_thumbnail_url_to_s3

def save_thumbnail_url_to_s3
  convert_options = {
    fit: 'clip',
    h: 500,
    w: 500
  }

  # Download the converted image from Filepicker (open-uri returns a Tempfile)
  file = open("#{self.url}/convert?#{convert_options.to_query}")

  # Write the file into the S3 bucket (aws-sdk v1 API)
  amazon = AWS::S3.new(access_key_id: ENV['AWS_ACCESS_KEY_ID'], secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'])
  bucket = amazon.buckets[ENV['AWS_BUCKET']]
  object = bucket.objects[s3_media_path]
  written_file = object.write(file, acl: :public_read) # or :authenticated_read

  self.update_column :thumbnail_url, written_file.public_url.to_s
end

If you are using the filepicker.io API you can convert your file with the API, then use open-uri as below to create a file stream that can be sent to S3. The Tempfile it returns behaves like Ruby's File API:
[3] pry(main)> require 'open-uri'
=> true
[4] pry(main)> file = open("https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG/convert?...")
=> #<Tempfile:...>
[5] pry(main)> file.class
=> Tempfile

You can simply use the aws-s3 gem: https://github.com/marcel/aws-s3
But be careful: Heroku's filesystem is essentially read-only, so you will only be able to work with temp files.
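In practice that means downloading into a Tempfile (or under the app's tmp/ directory) and uploading from there. A minimal sketch of the pattern, assuming `file` is the open-uri stream from above:

require 'tempfile'

# Tempfiles live in the dyno's ephemeral tmp dir, which is writable on Heroku
Tempfile.create(['converted', '.png']) do |tmp|
  tmp.binmode
  tmp.write(file.read)
  tmp.rewind
  # upload `tmp` to S3 here; the file is removed when the block exits
end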

Related

How to specify server-side S3 encryption via ActiveStorage?

Through Paperclip I was able to specify server-side encryption for S3, and also specify a content type (for a wonky file) like this:

has_attached_file :attachment,
  s3_permissions: :private,
  s3_server_side_encryption: 'AES256',
  s3_headers: lambda { |attachment|
    {
      'Content-Type' => 'text/csv; charset=utf-16le'
    }
  }
Where would I specify something similar when using has_one_attached in Active Storage?
As you can see in Active Storage's S3Service, options under the upload key are passed transparently to the Aws::S3::Object#put method. This is also true for Rails 5.2.
So you just need to specify the server_side_encryption key in your storage.yml like this:
amazon:
  service: S3
  bucket: mybucket
  # ... other properties ...
  upload:
    server_side_encryption: "AES256"
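To confirm the option took effect you can head an uploaded object and inspect its encryption attribute. A quick sketch, with placeholder bucket and key names:

require 'aws-sdk-s3'

client = Aws::S3::Client.new(region: ENV['AWS_REGION'])
resp = client.head_object(bucket: 'mybucket', key: 'some-active-storage-key')
resp.server_side_encryption #=> "AES256" once the upload option is applied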

Rails image upload API that returns the URL without saving data in the database

I want to create an API for image upload using CarrierWave with S3. Normal file upload is working, but I want an API that uploads an image and returns the image URL, without saving the name in a database table.
How can I do this?
You need to include the following gem in your Gemfile:
gem 'aws-sdk', '~> 3'
Then use the code below in an API controller:
require 'aws-sdk-s3'

class Api::V1::UploaderController < ApplicationController
  def create
    file = uploader_params[:image_url]
    file_name = file.original_filename

    # Build an S3 resource from environment credentials
    s3 = Aws::S3::Resource.new(region: ENV['AWS_REGION'],
                               access_key_id: ENV['AWS_ACCESS_KEY'],
                               secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'])
    obj = s3.bucket(ENV['AWS_BUCKET_NAME']).object(file_name)
    obj.upload_file(file.tempfile, acl: 'public-read')

    # Return the public URL without persisting anything to the database
    # (the original used a custom `success` helper here)
    render json: { url: obj.public_url }, status: :ok
  end

  private

  def uploader_params
    params.permit(:image_url)
  end
end
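A hypothetical request spec for that endpoint; it assumes a POST /api/v1/uploader route and an RSpec setup with a photo.jpg fixture, none of which is shown in the question:

require 'rails_helper'

RSpec.describe 'image upload API', type: :request do
  it 'returns the S3 public URL without touching the database' do
    post '/api/v1/uploader',
         params: { image_url: fixture_file_upload('photo.jpg', 'image/jpeg') }

    expect(response).to have_http_status(:ok)
    expect(JSON.parse(response.body)['url']).to include('.s3.')
  end
end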

Amazon Elastic Transcoder with Shrine

I am working on an app that requires uploading videos. I added Shrine with S3 storage.
Up to here everything is working. Now I need to transcode the videos, so I added the following code to the video uploader file:
class VideoUploader < Shrine
  plugin :processing
  plugin :versions

  process(:store) do |io|
    transcoder = Aws::ElasticTranscoder::Client.new(
      access_key_id: ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
      region: 'us-east-1',
    )

    pipeline = transcoder.create_pipeline(
      name: "name",
      input_bucket: "bucket",
      output_bucket: "bucket",
      role: "arn:aws:iam::XXXXX:role/Elastic_Transcoder_Default_Role",
    )
    # a local variable, not a constant -- assigning a constant inside a block warns on every upload
    pipeline_id = pipeline[:pipeline][:id]

    transcode_hd = transcoder.create_job(
      pipeline_id: pipeline_id,
      input: {
        key: "cache/" + io.id,
        frame_rate: "auto",
        resolution: "auto",
        aspect_ratio: "auto",
        container: 'auto'
      },
      outputs: [{
        key: "store/" + io.id,
        preset_id: "1351620000001-000010",
      }]
    )
  end
end
The transcoding is working: it transcodes the newly uploaded file from the cache folder and puts the result in the store folder under the same name.
The issue now is attaching this file to the record in the database. As of now, the record is updated with a different name, and a new 0 MB file is created in the store folder.
How can I attach the results of processing as Shrine's uploaded file for storage?
The process(:store) block expects you to return a file for Shrine to upload to permanent storage, so this flow won't work with Amazon Elastic Transcoder, because the transcoder is now the one uploading the cached file to permanent storage.
You can delay the transcoding request into a background job, poll the transcoding job every N seconds, then build a Shrine::UploadedFile from the results and update the record. Something like the following should work:
# superclass for all uploaders that use Amazon Elastic Transcoder
class TranscoderUploader < Shrine
  plugin :backgrounding
  Attacher.promote { |data| TranscodeJob.perform_async(data) }
end

class VideoUploader < TranscoderUploader
  plugin :versions
end

class TranscodeJob
  include Sidekiq::Worker

  def perform(data)
    attacher = TranscoderUploader::Attacher.load(data)
    cached_file = attacher.get #=> #<Shrine::UploadedFile>

    # create the transcoding job here (as in the question), using
    # `cached_file.id` as the input key; the result is `job`
    transcoder.wait_until(:job_complete, id: job.id)
    response = transcoder.read_job(id: job.id)
    output = response.output

    versions = {
      video: attacher.shrine_class::UploadedFile.new(
        "id" => cached_file.id,
        "storage" => "store",
        "metadata" => {
          "width" => output.width,
          "height" => output.height,
          # ...
        }
      ),
      ...
    }

    attacher.swap(versions)
  end
end
If you're by any chance interested in making a Shrine plugin for Amazon Elastic Transcoder, take a look at shrine-transloadit, which provides an integration for Transloadit. Transloadit uses practically the same flow as Amazon Elastic Transcoder, except it works with webhooks rather than polling for the response.
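For a webhook-style flow with Elastic Transcoder itself, the usual route is to have the pipeline publish job-state changes to an SNS topic and point that topic at an HTTP endpoint. A rough sketch; the controller and the promotion logic are assumptions, not part of Shrine:

class TranscoderNotificationsController < ApplicationController
  skip_before_action :verify_authenticity_token

  def create
    # real SNS delivery wraps the payload in an envelope and requires
    # subscription confirmation; both are omitted in this sketch
    message = JSON.parse(request.body.read)
    if message['state'] == 'COMPLETED'
      # look up the attacher from the job's input key and build the
      # Shrine::UploadedFile versions here, as TranscodeJob does above
    end
    head :ok
  end
end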

S3 link expiry with aws-sdk

I have been using the aws-sdk gem for uploading files with Rails, and I'm now getting the generated link. This link expires after one hour (I think that's the default), but I need the link to be public. Is there any way to prevent the link from expiring? I tried this:
AWS.config(access_key_id: 'XXXXXXXXXX',
           secret_access_key: 'XXXXXXX')
s3 = AWS::S3.new
my_bucket = s3.buckets['xxx/xxxx/xxxx']
object = my_bucket.objects[filename]
puts object.url_for(:read).to_s
Set your file's access permission to public-read when you upload it:
s3 = Aws::S3::Resource.new(
  credentials: Aws::Credentials.new('akid', 'secret'),
  region: 'us-west-1'
)
obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/source/file/path', acl: 'public-read')
obj.public_url
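If the object needs to stay private instead, the v3 SDK can mint a presigned URL with a longer (but bounded) expiry; S3 caps expires_in at one week:

url = obj.presigned_url(:get, expires_in: 7 * 24 * 3600) # 604800 seconds is the maximum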

Zip up all Paperclip attachments stored on S3

Paperclip is a great upload plugin for Rails. Storing uploads on the local filesystem or Amazon S3 seems to work well. I'd just as soon store files on the localhost, but S3 is required for this app since it will be hosted on Heroku.
How would I go about getting all of my uploads/attachments from S3 in a single zipped download?
Getting a zip of files from the local filesystem seems straightforward. It's getting the files from S3 that has me puzzled. I think it may have something to do with the way rubyzip handles files referenced by URL. I've tried various approaches but can't seem to avoid errors.
format.zip {
  registrations_with_attachments = Registration.find_by_sql('SELECT * FROM registrations WHERE abstract_file_name NOT LIKE ""')
  headers['Cache-Control'] = 'no-cache'
  tmp_filename = "#{RAILS_ROOT}/tmp/tmp_zip_" <<
                 Time.now.to_f.to_s <<
                 ".zip"

  # rubyzip gem version 0.9.1
  # rdoc http://rubyzip.sourceforge.net/
  Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE) do |zip|
    # get all of the attachments
    # attempt to get files stored on S3
    # FAIL
    registrations_with_attachments.each { |e| zip.add("abstracts/#{e.abstract.original_filename}", e.abstract.url(:original, false)) }
    # => No such file or directory - http://s3.amazonaws.com/bucket/original/abstract.txt
    # Note that these files in the S3 bucket are publicly accessible. No ACL.

    # works with local storage. Thanks to Henrik Nyh
    # registrations_with_attachments.each { |e| zip.add("abstracts/#{e.abstract.original_filename}", e.abstract.path(:original)) }
  end

  send_data(File.open(tmp_filename, "rb+").read, :type => 'application/zip', :disposition => 'attachment', :filename => tmp_filename.to_s)
  File.delete tmp_filename
}
You almost certainly want to use e.abstract.to_file.path instead of e.abstract.url(...).
See:
Paperclip::Storage::S3#to_file (should return a Tempfile)
Tempfile#path
UPDATE
From the changelog:
New in 3.0.1:
API CHANGE: #to_file has been removed. Use the #copy_to_local_file method instead.
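On Paperclip >= 3.0.1 the loop would therefore become something like this sketch, where the Tempfile naming is arbitrary:

require 'tempfile'

registrations_with_attachments.each do |e|
  tmp = Tempfile.new(['abstract', File.extname(e.abstract.original_filename)])
  e.abstract.copy_to_local_file(:original, tmp.path) # downloads from S3 to a local path
  zip.add("abstracts/#{e.abstract.original_filename}", tmp.path)
end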
@vlard's solution is OK. However, I've run into some issues with to_file: it creates a tempfile, and the garbage collector sometimes deletes the file before it is added to the zip. I was therefore getting random Errno::ENOENT: No such file or directory errors.
So I'm using the following code now (I've kept the initial code's variable names for consistency with the initial question):
format.zip {
  registrations_with_attachments = Registration.find_by_sql('SELECT * FROM registrations WHERE abstract_file_name NOT LIKE ""')
  headers['Cache-Control'] = 'no-cache'

  # Note that using the nanoseconds option in strftime reduces the risk of
  # collisions when 2 or more users initiate the download at the same time
  tmp_filename = "#{RAILS_ROOT}/tmp/tmp_zip_" <<
                 Time.now.strftime('%Y-%m-%d-%H%M%S-%N').to_s <<
                 ".zip"

  # rubyzip gem version 0.9.4
  zip = Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE)
  zip.close

  registrations_with_attachments.each { |e|
    file_to_add = e.file.to_file
    zip = Zip::ZipFile.open(tmp_filename)
    zip.add("abstracts/#{e.abstract.original_filename}", file_to_add.path)
    zip.close

    # force the garbage collector to keep file_to_add until after it has been added to the zip
    puts "added #{file_to_add.path} to #{tmp_filename}"
  }

  send_data(File.open(tmp_filename, "rb+").read, :type => 'application/zip', :disposition => 'attachment', :filename => tmp_filename.to_s)
  File.delete tmp_filename
}
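Note that in rubyzip >= 1.0 the Zip::ZipFile constant was renamed to Zip::File, so on a modern version of the gem the open call becomes:

Zip::File.open(tmp_filename, Zip::File::CREATE) do |zip|
  # same add logic as above
end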
