Amazon Elastic Transcoder with Shrine - ruby-on-rails

I am working on an app that requires uploading videos. I added Shrine with S3 storage.
Up to this point everything is working. Now I need to transcode the videos, so I added the following code to the video uploader file:
class VideoUploader < Shrine
  plugin :processing
  plugin :versions

  process(:store) do |io|
    transcoder = Aws::ElasticTranscoder::Client.new(
      access_key_id: ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
      region: 'us-east-1',
    )

    pipeline = transcoder.create_pipeline(options = {
      :name => "name",
      :input_bucket => "bucket",
      :output_bucket => "bucket",
      :role => "arn:aws:iam::XXXXX:role/Elastic_Transcoder_Default_Role",
    })
    PIPELINE_ID = pipeline[:pipeline][:id]

    transcode_hd = transcoder.create_job({
      :pipeline_id => PIPELINE_ID,
      :input => {
        :key => "cache/" + io.id,
        :frame_rate => "auto",
        :resolution => "auto",
        :aspect_ratio => "auto",
        :container => 'auto'
      },
      :outputs => [{
        :key => "store/" + io.id,
        :preset_id => "1351620000001-000010",
      }]
    })
  end
end
The transcoding works: it takes the newly uploaded file from the cache folder and puts the transcoded result in the store folder under the same name.
The issue now is attaching this file to the record in the database. As of now the record is updated with a different name, and a new 0 MB file is created in the store folder.
How can I attach the results of processing into Shrine's uploaded file for storage?

The process(:store) block expects you to return a file for Shrine to upload to permanent storage, so this flow won't work with Amazon Elastic Transcoder, because Elastic Transcoder is now the one that uploads the cached file to permanent storage.
You can delay the transcoding request into a background job, poll the transcoding job every N seconds, then create a Shrine::UploadedFile from the results and update the record. Something like the following should work:
# superclass for all uploaders that use Amazon Elastic Transcoder
class TranscoderUploader < Shrine
  plugin :backgrounding
  Attacher.promote { |data| TranscodeJob.perform_async(data) }
end

class VideoUploader < TranscoderUploader
  plugin :versions
end

class TranscodeJob
  include Sidekiq::Worker

  def perform(data)
    attacher = TranscoderUploader::Attacher.load(data)
    cached_file = attacher.get #=> #<Shrine::UploadedFile>

    # create transcoding job, use `cached_file.id`

    transcoder.wait_until(:job_complete, id: job.id)
    response = transcoder.read_job(id: job.id)
    output = response.output

    versions = {
      video: attacher.shrine_class::UploadedFile.new(
        "id" => cached_file.id,
        "storage" => "store",
        "metadata" => {
          "width" => output.width,
          "height" => output.height,
          # ...
        }
      ),
      ...
    }

    attacher.swap(versions)
  end
end
If you're by any chance interested in making a Shrine plugin for Amazon Elastic Transcoder, take a look at shrine-transloadit, which provides an integration for Transloadit. Transloadit uses practically the same flow as Amazon Elastic Transcoder, except that it works with webhooks rather than polling for the response.
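For reference, a webhook-style flow might look roughly like the sketch below. This is not the shrine-transloadit API; the route, the notification payload shape, and the idea of passing the record's class, id and attachment name through the transcoding job's user metadata are all assumptions, and SNS envelope verification is omitted for brevity.
class TranscoderNotificationsController < ApplicationController
  # Hypothetical route: post "/transcoder/notifications", to: "transcoder_notifications#create"
  skip_before_action :verify_authenticity_token

  def create
    payload = JSON.parse(request.body.read)

    # Assumes the job was created with user metadata identifying the record.
    meta     = payload["userMetadata"]
    record   = meta["record_class"].constantize.find(meta["record_id"])
    attacher = record.send(:"#{meta['name']}_attacher")

    # Assumes a single output whose key/width/height are reported back.
    output = payload["outputs"].first
    versions = {
      video: attacher.shrine_class::UploadedFile.new(
        "id"       => output["key"],
        "storage"  => "store",
        "metadata" => { "width" => output["width"], "height" => output["height"] }
      )
    }
    attacher.swap(versions)

    head :ok
  end
end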

Related

Rails Fog create a file based on a remote link

I have Rails application and use fog gem to upload files to cloud storage (Rackspace). As of now I have been successfully uploading local files to cloud storage.
@service = Fog::Storage.new(options)
directory = @service.directories.new :key => 'test'
directory.files.create :key => path, :body => file, :content_type => content_type
I have a new requirement now: I want to take a remote link (public URL) and have it uploaded to cloud storage. Is there a way to achieve this without downloading it locally or loading the whole thing into memory?
I'm looking for something like this:
directory.files.create :key => path, :body => 'url-to-remote-file', :content_type => content_type
A stream-based approach would also be quite helpful.
Thanks.
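One possible direction, sketched below: open the remote URL with open-uri and hand the resulting IO to Fog as the body. This is only an illustration under the assumption that Fog accepts an IO-like object for :body (it accepts File objects); note that open-uri spools large responses to a Tempfile on disk rather than holding them fully in memory, so the file is still downloaded.
require 'open-uri'

# Hypothetical helper: streams a remote file into cloud storage via Fog.
def upload_remote_file(directory, path, remote_url, content_type)
  URI.open(remote_url) do |io|
    directory.files.create(
      :key          => path,
      :body         => io,            # IO-like object, not the whole body in memory
      :content_type => content_type
    )
  end
end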

Shrine gem with Rails: generate versions with upload endpoint?

I use the Shrine gem with Rails 5. I enabled the upload_endpoint, versions, processing and recache plugins. I expected to get the generated versions in the upload endpoint response.
class VideoUploader < Shrine
  plugin :processing
  plugin :versions
  plugin :recache
  plugin :upload_endpoint
  plugin :upload_endpoint, rack_response: -> (uploaded_file, request) do
    # ??? I expected uploaded_file to have thumbnail version here ???
    body = { data: uploaded_file.data, url: uploaded_file.url }.to_json
    [201, { "Content-Type" => "application/json" }, [body]]
  end

  process(:recache) do |io, context|
    versions = { original: io }
    io.download do |original|
      screenshot = Tempfile.new(["screenshot", ".jpg"], binmode: true)
      movie = FFMPEG::Movie.new(original.path)
      movie.screenshot(screenshot.path)
      screenshot.open # refresh file descriptors
      versions[:thumbnail] = screenshot
    end
    versions
  end
end
Why does the process(:recache) callback happen only when saving the whole record? And how do I make it generate versions right after the direct upload?
The :recache action only happens when you assign a file to a model instance, and only after validation has succeeded. So the recache plugin is not what you want here.
Whenever Shrine uploads a file, it includes an :action parameter in that upload, and this is what's matched when you register a process block. It's not currently documented, but the upload_endpoint includes action: :upload, so just use process(:upload):
process(:upload) do |io, context|
  # ...
end
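Adapting the recache block from the question, the version generation can move into the :upload action. A sketch, reusing the same FFMPEG screenshot logic shown above (assuming streamio-ffmpeg is available):
process(:upload) do |io, context|
  versions = { original: io }

  io.download do |original|
    screenshot = Tempfile.new(["screenshot", ".jpg"], binmode: true)
    movie = FFMPEG::Movie.new(original.path)
    movie.screenshot(screenshot.path)
    screenshot.open # refresh file descriptors
    versions[:thumbnail] = screenshot
  end

  versions
end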
In your :rack_response block, uploaded_file will now be a hash of uploaded files, so you won't be able to call #data on it. But you can just include them in the hash directly, and they should automatically convert to JSON.
plugin :upload_endpoint, rack_response: -> (uploaded_file, request) do
  body = { data: uploaded_file, url: uploaded_file[:original].url }.to_json
  [201, { "Content-Type" => "application/json" }, [body]]
end

Could not upload images to AWS S3 bucket in Rails

I have a scenario where the photos I upload have to be stored in AWS S3 buckets and then pulled into an email, but after upgrading Rails and the corresponding gems I can no longer store the images in S3. I upgraded aws-s3 from 0.6.1 to 0.6.3, aws-sdk from 1.3.5 to 1.3.9, right_aws from 3.0.0 to 3.0.5, and finally Rails from 3.2.1 to 4.2.6.
I have tested by adding puts statements and execution reaches all the methods, but I suspect there may be a syntax change in the upload method around @type (here @type is one of the two names, photo_screenshots and indicator_screenshots).
Please help me.
This is my lib/screenshot.rb:
class Screenshot
  attr_reader :user_id, :report_id, :type

  def initialize(user_id, report_id, type)
    @user_id, @report_id, @type = user_id, report_id, type
    capture
    resize(500, 686) if @type == 'report_screenshots'
    upload
    delete_local_copy
  end

  def capture
    if Rails.env.production?
      phantom = Rails.root.join('vendor/javascripts/phantomjs_linux/bin/phantomjs')
      url = Rails.application.config.custom.domain_url + "users/#{@user_id}/reports/#{@report_id}"
    end
    js = Rails.root.join("vendor/javascripts/#{@type}.js")
    image = Rails.root.join("public/#{@type}/#{@report_id}.png")
    `/bin/bash -c "DISPLAY=:0 #{phantom} #{js} #{url} #{image}"`
  end

  def resize(width, height)
    path = "public/#{@type}/#{@report_id}.png"
    img = Magick::Image::read(path).first
    # img.thumbnail!(width, height)
    img.change_geometry("#{width}x#{height}") do |cols, rows, img|
      img.thumbnail!(cols, rows)
    end
    img.write(path)
  end

  def upload
    file_name = Rails.root.join("public/#{@type}/#{@report_id}.png")
    s3config = YAML.load_file(Rails.root.join('config', 's3.yml'))[Rails.env]
    s3 = RightAws::S3.new(s3config["access_key_id"], s3config["secret_access_key"])
    @type == 'report_screenshots' ? s3.bucket("my_project.#{Rails.env}", true).put("#{@type}/#{@report_id}.png", File.open(file_name), {}, 'public-read', { 'content-type' => 'image/png' }) : s3.bucket("my_project.#{Rails.env}", true).put("indicator_screenshots/#{@report_id}.png", File.open(file_name), {}, 'public-read', { 'content-type' => 'image/png' })
    report = Report.find(@report_id)
    @type == 'report_screenshots' ? report.update_attribute(:report_screenshot_at, Time.now) : report.update_attribute(:indicator_screenshot_at, Time.now)
  end

  def delete_local_copy
    file_name = Rails.root.join("public/#{@type}/#{@report_id}.png")
    File.delete(file_name)
  end

  def self.delete_s3_copy(report_id, type)
    s3config = YAML.load_file(Rails.root.join('config', 's3.yml'))[Rails.env]
    s3 = RightAws::S3.new(s3config["access_key_id"], s3config["secret_access_key"])
    s3.bucket("my_project.#{Rails.env}").key("#{type}/#{report_id}.png").delete
  end
end
Whenever I click on send an email, this is what happens:
controller:
def send_test_email
  if @report.photos.empty?
    Rails.env.development? ? Screenshot.new(@user.id, @report.id, Rails.application.config.custom.indicator_screenshot_bucket) : Screenshot.delay.new(@user.id, @report.id, Rails.application.config.custom.indicator_screenshot_bucket)
  else
    Rails.env.development? ? Screenshot.new(@user.id, @report.id, "photo_screenshots") : Screenshot.delay.new(@user.id, @report.id, "photo_screenshots")
  end

  ReportMailer.delay.test_report_email(@user, @report)

  respond_to do |format|
    format.json { render :json => { :success => true, :report_id => @report.id, :notice => 'Test email was successfully sent!' } }
  end
end
This is the RAILS_ENV=production log:
New RightAws::S3Interface using shared connections mode
Opening new HTTPS connection to my_project.production.s3.amazonaws.com:443
Opening new HTTPS connection to s3.amazonaws.com:443
2016-09-26T10:48:46+0000: [Worker(delayed_job host:ip-172-31-24-139 pid:8769)] Job Screenshot.new (id=528) FAILED (16 prior attempts) with Errno::ENOENT: No such file or directory @ rb_sysopen - /var/www/html/project/my_project/public/photo_screenshots/50031.png
2016-09-26T10:48:46+0000: [Worker(delayed_job host:ip-172-31-24-139 pid:8769)] Job Screenshot.new (id=529) RUNNING
2016-09-26T10:48:46+0000: [Worker(delayed_job host:ip-172-31-24-139 pid:8769)] Job Screenshot.new (id=529) FAILED (16 prior attempts) with Magick::ImageMagickError: unable to open file `public/report_screenshots/50031.png' @ error/png.c/ReadPNGImage/3733
2016-09-26T10:48:46+0000: [Worker(delayed_job host:ip-172-31-24-139 pid:8769)] 2 jobs processed at 1.6978 j/s, 2 failed
This is AWS production log:
New RightAws::S3Interface using shared connections mode
2016-09-26T16:00:30+0530: [Worker(host:OSI-L-0397 pid:7117)] Job Screenshot.new (id=50) FAILED (6 prior attempts) with Errno::ENOENT: No such file or directory @ rb_sysopen - /home/abcuser/Desktop/project/my_project/public/photo_screenshots/10016.png
2016-09-26T16:00:30+0530: [Worker(host:OSI-L-0397 pid:7117)] Job Screenshot.new (id=51) RUNNING
2016-09-26T16:00:30+0530: [Worker(host:OSI-L-0397 pid:7117)] Job Screenshot.new (id=51) FAILED (6 prior attempts) with Magick::ImageMagickError: unable to open file `public/report_screenshots/10016.png' @ error/png.c/ReadPNGImage/3667
2016-09-26T16:00:30+0530: [Worker(host:OSI-L-0397 pid:7117)] 2 jobs processed at 0.2725 j/s, 2 failed
You can try a simpler approach: upload images to a single fixed bucket, with a different folder for each object or application. S3 limits the number of buckets you can create, whereas there is no limit on the content inside a bucket.
The code below uploads an image for a user to S3 using the aws-sdk gem. The bucket and the uploaded image are made public so the uploaded images are directly accessible. It takes as input the complete path of the image, the folder it should be uploaded into, and the user_id of the user it belongs to.
def save_screenshot_to_s3(image_location, folder_name, user_id)
  service = AWS::S3.new(:access_key_id => ACCESS_KEY_ID,
                        :secret_access_key => SECRET_ACCESS_KEY)
  bucket_name = "app-images"

  if service.buckets.include?(bucket_name)
    bucket = service.buckets[bucket_name]
  else
    bucket = service.buckets.create(bucket_name)
  end
  bucket.acl = :public_read

  key = folder_name.to_s + "/" + File.basename(image_location)
  s3_file = service.buckets[bucket_name].objects[key].write(:file => image_location)
  s3_file.acl = :public_read

  user = User.where(id: user_id).first
  user.image = s3_file.public_url.to_s
  user.save
end
For handling the screenshot part: in your capture method you have done something like this.
`/bin/bash -c "DISPLAY=:0 #{phantom} #{js} #{url} #{image}"`
Is the /bin/bash wrapper really required? Change it to the code below and it should work.
`DISPLAY=:0 "#{phantom}" "#{js}" "#{url}" "#{image}"`
Leave it as it is if changing that breaks something else.
Since you already know the final image location (image), pass it directly to save_screenshot_to_s3 and you should be able to save it. This will also save the image URL on the user, provided you pass your user_id as specified in the method; see the usage sketch below.
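A rough usage sketch inside the Screenshot class from the question (the @type, @report_id and @user_id instance variables come from that class; replacing the upload body this way is just an illustration and drops the Report timestamp updates):
def upload
  # The PNG that `capture` wrote to disk
  image = Rails.root.join("public/#{@type}/#{@report_id}.png").to_s

  # Store it in the fixed bucket under a folder named after the screenshot
  # type, and record the public URL on the user.
  save_screenshot_to_s3(image, @type, @user_id)
end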

Paperclip multiple storage

I want to move my assets folder to Amazon S3, and since it is quite large, during the transition I need to upload files both to my local storage and to Amazon S3 through Paperclip.
Is there a way to configure Paperclip to store uploaded files both on the filesystem and on Amazon S3?
Maybe you'd benefit from this:
http://airbladesoftware.com/notes/asynchronous-s3/
What you'll have to do is first upload to your local storage, and then "asynchronously" upload to S3.
This is typically done through the likes of Resque or DelayedJob (as the tutorial demonstrates), and will require you to run some sort of third-party processing engine on your server (typically Redis or similar).
From the tutorial:
### Models ###

class Person < ActiveRecord::Base
  has_attached_file :local_image,
    path: ":rails_root/public/system/:attachment/:id/:style/:basename.:extension",
    url: "/system/:attachment/:id/:style/:basename.:extension"

  has_attached_file :image,
    styles: {large: '500x500#', medium: '200x200#', small: '70x70#'},
    convert_options: {all: '-strip'},
    storage: :s3,
    s3_credentials: "#{Rails.root}/config/s3.yml",
    s3_permissions: :private,
    s3_host_name: 's3-eu-west-1.amazonaws.com',
    s3_headers: {'Expires' => 1.year.from_now.httpdate,
                 'Content-Disposition' => 'attachment'},
    path: "images/:id/:style/:filename"

  after_save :queue_upload_to_s3

  def queue_upload_to_s3
    Delayed::Job.enqueue ImageJob.new(id) if local_image? && local_image_updated_at_changed?
  end

  def upload_to_s3
    self.image = local_image.to_file
    save!
  end
end

class ImageJob < Struct.new(:image_id)
  def perform
    image = Image.find image_id
    image.upload_to_s3
    image.local_image.destroy
  end
end

### Views ###

# app/views/people/edit.html.haml
# ...
= f.file_field :local_image

# app/views/people/show.html.haml
- if @person.image?
  = image_tag @person.image.expiring_url(20, :small)
- else
  = image_tag @person.local_image.url, size: '70x70'

Convert and store to S3 with REST API / InkFilepicker

I have a Rails app on Heroku. From the server side (using the InkFilepicker REST API), I would like to convert a file, save it to my S3 bucket, and store the S3 URL on my model.
Concretely: Given an image (https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG) I want to convert it (https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG/convert?w=200&h=150&fit=clip) and store the converted image to my S3 bucket.
EDIT
Here is what I did at the end:
after_save :save_thumbnail_url_to_s3

def save_thumbnail_url_to_s3
  convert_options = {
    fit: 'clip',
    h: 500,
    w: 500
  }
  file = open("#{self.url}/convert?#{convert_options.to_query}")

  # Writing file into S3 bucket
  amazon = AWS::S3.new(access_key_id: ENV['AWS_ACCESS_KEY_ID'], secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'])
  bucket = amazon.buckets[ENV['AWS_BUCKET']]
  object = bucket.objects[s3_media_path]
  written_file = object.write(file, acl: :public_read) # :authenticated_read
  self.update_column :thumbnail_url, written_file.public_url.to_s
end
If you are using the filepicker.io API, you can convert your file with the API and then use open-uri, as below, to create a file stream that can be sent to S3. The resulting Tempfile behaves like the File API in Ruby.
[3] pry(main)> require 'open-uri'
=> true
[4] pry(main)> file = open("https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG/convert?...")
=> #
[5] pry(main)> file.class
=> Tempfile
You can simply use the aws-s3 gem: https://github.com/marcel/aws-s3
But be careful: Heroku's filesystem is effectively read-only, so you will only be able to work with temp files.
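For example, a rough sketch of working through a temp file on Heroku. The helper name is hypothetical; it relies on open-uri (which spools the download to a temporary directory, writable on Heroku) and the aws-s3 gem's classic S3Object API, with the bucket taken from ENV['AWS_BUCKET'] as in the EDIT above.
require 'open-uri'
require 'aws/s3'

# Hypothetical helper: downloads the converted image to a Tempfile and
# stores it in S3 using the aws-s3 gem.
def store_converted_image(convert_url, key)
  AWS::S3::Base.establish_connection!(
    access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
  )

  URI.open(convert_url) do |tempfile|
    AWS::S3::S3Object.store(key, tempfile, ENV['AWS_BUCKET'], access: :public_read)
  end
end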
