I have a Rails app which uploads videos to an AWS S3 bucket using its CORS configuration. When the upload completes and the Rails video object is created, an Elastic Transcoder job is created to encode the video to .mp4 format and generate a thumbnail image, and AWS SNS is enabled to send push notifications when the job is complete.
The process works nicely and I receive an SNS notification when the job finishes. I can fetch the video URL just fine, but the notification only contains the thumbnail pattern rather than the actual filename.
Below is a typical notification I receive from AWS SNS. NB: this is from the outputs hash.
{"id"=>"1", "presetId"=>"1351620000001-000040", "key"=>"uploads/video/150/557874e9-4c67-40f0-8f98-8c59506647e5/IMG_0587.mp4", "thumbnailPattern"=>"uploads/video/150/557874e9-4c67-40f0-8f98-8c59506647e5/{count}IMG_0587", "rotate"=>"auto", "status"=>"Complete", "statusDetail"=>"The transcoding job is completed.", "duration"=>10, "width"=>202, "height"=>360}
As you can see, thumbnailPattern contains just the file pattern to use, not the actual file created.
Does anyone know how I can get the URLs of the files created via Elastic Transcoder and SNS?
transcoder.rb # => I create a new transcoder object when a video has been saved
class Transcoder < Video
  def initialize(video)
    @video = video
    @directory = "uploads/video/#{@video.id}/#{SecureRandom.uuid}/"
    @filename = File.basename(@video.file, File.extname(@video.file))
  end

  def create
    transcoder = AWS::ElasticTranscoder::Client.new(region: "us-east-1")
    options = {
      pipeline_id: CONFIG[:aws_pipeline_id],
      input: {
        key: @video.file.split("/")[3..-1].join("/"), # slice off the amazon.com bit
        frame_rate: "auto",
        resolution: 'auto',
        aspect_ratio: 'auto',
        interlaced: 'auto',
        container: 'auto'
      },
      outputs: [
        {
          key: "#{@filename}.mp4",
          preset_id: '1351620000001-000040',
          rotate: "auto",
          thumbnail_pattern: "{count}#{@filename}"
        }
      ],
      output_key_prefix: @directory
    }
    job = transcoder.create_job(options)
    @video.job_id = job.data[:job][:id]
    @video.save!
  end
end
VideosController #create
class VideosController < ApplicationController
  def create
    @video = current_user.videos.build(params[:video])
    respond_to do |format|
      if @video.save
        transcode = Transcoder.new(@video)
        transcode.create
        format.html { redirect_to videos_path, notice: 'Video was successfully uploaded.' }
        format.json { render json: @video, status: :created, location: @video }
        format.js
      else
        format.html { render action: "new" }
        format.json { render json: @video.errors, status: :unprocessable_entity }
      end
    end
  end
end
It doesn't appear that the actual names of the thumbnails are passed back, either in SNS notifications or in the response to the create-job request:
http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/create-job.html#create-job-examples
http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/notifications.html
Because the base path/name of your thumbnails is known, and the sequence number always starts at 00001, you can iterate from there to determine whether (and how many of) the thumbnails exist once the job completes. Use HEAD requests against the objects in S3 to determine their presence; it's about 10x cheaper than doing a LIST request.
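A minimal sketch of that probing loop with the Ruby SDK (the helper names are hypothetical, and the five-digit zero-padded {count} expansion plus the .jpg extension are assumptions; check your preset's thumbnail format):

```ruby
begin
  require 'aws-sdk-s3' # only needed for the HEAD calls below
rescue LoadError
  # SDK gem not installed; the key-expansion helper still works
end

# Expand Elastic Transcoder's {count} placeholder: sequence numbers are
# zero-padded to five digits and start at 00001.
def thumbnail_key(pattern, count, ext = 'jpg')
  "#{pattern.sub('{count}', format('%05d', count))}.#{ext}"
end

# Probe S3 with HEAD requests (head_object) until a thumbnail is missing,
# returning the keys that exist.
def existing_thumbnails(client, bucket, pattern)
  keys = []
  1.step do |n|
    key = thumbnail_key(pattern, n)
    begin
      client.head_object(bucket: bucket, key: key) # HEAD, not LIST
    rescue Aws::S3::Errors::NotFound
      break
    end
    keys << key
  end
  keys
end
```

With the notification above, you would call existing_thumbnails with your bucket name and the thumbnailPattern value after receiving the Complete status.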
Four years have passed since the last reply. A new Cold War has risen and there is plenty of political tension, but Amazon still hasn't fixed this issue.
As a workaround I found another solution: the transcoded files (video/thumbnail) are usually placed into a new bucket, or at least under some prefix. I created a new S3 event for ObjectCreated (All) on the target bucket with that prefix specified, and connected it to a pre-created SNS topic. This topic pings my backend's endpoint twice: first when the video is transcoded, and second when the thumbnail is created. Using a regexp it is quite easy to distinguish which is which.
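For example, a small classifier along these lines (a sketch; the five-digit {count} prefix and the extensions are assumptions from my setup, so adjust the patterns to your output naming):

```ruby
# Decide whether an S3 ObjectCreated notification refers to the transcoded
# video or to a generated thumbnail, based only on the object key.
def classify_output(key)
  case File.basename(key)
  when /\A\d{5}.*\.(?:jpg|png)\z/ then :thumbnail # e.g. 00001IMG_0587.jpg
  when /\.mp4\z/                  then :video
  else                                 :unknown
  end
end
```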
Related
I have an Android app and a Rails app. When an order is placed from Android, it calls the create-order API in the Rails app. Whenever a create order succeeds in the Rails app, I want to perform a print action that prints the order bill using the data sent from Android.
But the controller I created only gives a response in either JSON or HTML format, i.e.
class API::V1::OrdersController < Api::ApiController
  def create
    @order = Order.create(
      item: params[:item],
      quantity: params[:quantity],
      price: params[:price]
    )
    if @order.persisted?
      respond_to do |format|
        format.json { render json: { notice: 'Order successfully created.' } }
        format.html { render :print }
      end
    end
  end
end
This only returns the response 'Order successfully created.' when JSON is used, or the HTML page written in 'print.html.slim', to the Android app.
But I want to print an order bill page from 'print.html.slim' when an order is persisted.
print.html.slim
table.table
  thead
    th Item Name
    th Quantity
    th Price
  tbody
    tr
      td = @order.item
      td = @order.quantity
      td = @order.price
Is there any way I can do this?
What you have and what you seek are two different things.
Since you have set up your Rails server as an API endpoint, you can only return JSON values. For what you need, you should accept the JSON data in your Android app and do what you wish with it there (like displaying the confirmed order).
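A sketch of that idea (the payload shape and helper name are assumptions): return the bill data itself as JSON, and let the Android client format and print it on the device.

```ruby
require 'json'

# Hypothetical helper: serialize the order attributes the Android client
# needs to render and print the bill locally.
def bill_payload(order)
  {
    notice: 'Order successfully created.',
    order: {
      item: order[:item],
      quantity: order[:quantity],
      price: order[:price]
    }
  }.to_json
end
```

In the controller that would be roughly format.json { render json: bill_payload(params) }, while the Slim template stays available for browser clients.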
I'm using the aws-sdk v2 for Ruby and I find the methods available on objects really limiting. I created a bucket reference like so:
client = Aws::S3::Client.new(region: 'us-west-2')
s3 = Aws::S3::Resource.new(client: client)
S3_BUCKET = s3.bucket(ENV['AWS_BUCKET'])
I've found that the only method available for writing an object to my bucket is put. However, I don't see a 'success_action_status' option for this method. I've deployed my app to Elastic Beanstalk. Locally I can write to this bucket, but when I try to write from my EB app it doesn't work, and I'm working blind trying to figure out what's happening. Any info to help determine where my PUT request is going wrong would be helpful.
Here's what my method looks like now:
def create
  username = params[:user][:user_alias]
  key = "uploads/#{username}"
  obj = S3_BUCKET.object(key)
  obj.put({
    acl: 'public-read',
    body: params[:user][:image_uri],
  })
  @user = User.new(user_params)
  if @user.save
    render json: @user, status: :created, location: @user
  else
    render json: @user.errors, status: :unprocessable_entity
  end
end
Here's the documentation I'm referring to for PUT methods: http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Object.html#put-instance_method
It doesn't seem like you're putting the image file, only the URI. The docs say this about the put method's body option:
body: source_file, # file/IO object, or string data
You'll have to read or open the image file in order to actually upload the image:
obj.put({
  acl: 'public-read',
  body: File.open(params[:user][:image_uri], 'rb'), # binary mode for image data
})
Then you can check the success status with exists?
I have a Rails app hosted on Heroku. Here's the situation: a user should be able to upload a PDF (an instance of Batch) to our app using S3; a user should also be able to take the S3 web address of the uploaded PDF and split it up into more PDFs using HyPDF, by specifying the file path and the desired pages to be split out (to create instances of Essay).
All of this is happening in the same POST request to /essays.
Here's the code I've been working with today:
def create
  if params[:essay].class == String
    batch_id = params[:batch_id].gsub(/[^\d]/, '').to_i
    break_up_batch(params, batch_id)
    redirect_to Batch.find(batch_id), notice: 'Essays were successfully created.'
  else
    @essay = Essay.new(essay_params)
    respond_to do |format|
      if @essay.save
        format.html { redirect_to @essay, notice: 'Essay was successfully created.' }
        format.json { render :show, status: :created, location: @essay }
      else
        format.html { render :new }
        format.json { render json: @essay.errors, status: :unprocessable_entity }
      end
    end
  end
end
# this is a private method
def break_up_batch(params, batch_id)
  essay_data = []
  # create a separate essay for each grouped essay
  local_batch = File.open(Rails.root.join('tmp').to_s + "temppdf.pdf", 'wb') do |f|
    f.binmode
    f.write HTTParty.get(Batch.find(batch_id).document.url).parsed_response
    f.path
  end
  params["essay"].split("~").each do |data|
    data = data.split(" ")
    hypdf_url = HyPDF.pdfextract(
      local_batch,
      first_page: data[1].to_i,
      last_page: data[2].to_i,
      bucket: 'essay101',
      public: true
    )
    object = { student_name: data[0], batch_id: batch_id, url: hypdf_url[:url] }
    essay_data << object
  end
  essay_data.each { |essay| Essay.create(essay) }
  File.delete(local_batch)
end
I can't get the file to show up on Heroku (I'm checking with heroku run bash and ls tmp), so when the method runs, a blank file is uploaded to S3. I've written some jQuery to populate a hidden field, which is why there's the funky splitting in the middle of the code.
Because of Heroku's ephemeral filesystem, I'd highly recommend getting that file off your filesystem as fast as possible. Perhaps using the following:
User uploads to S3 (preferably direct: https://devcenter.heroku.com/articles/direct-to-s3-image-uploads-in-rails)
Kick off a background worker to fetch the file and do the processing necessary in-memory
If the user needs to be informed when the file is properly processed, set a "status" field in your DB and allow the front-end app to poll the web server for updates. Show "Processing" to the user until the background worker changes its status.
This approach also lets your web process respond quickly without tying up resources and potentially triggering an H12 (request timeout) error.
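A sketch of the in-memory fetch for step 2 (a hypothetical helper using only the standard library; the opener seam is an assumption to make it easy to stub):

```ruby
require 'open-uri'
require 'stringio'

# Download a file straight into memory and hand back an IO object, so the
# worker never touches Heroku's ephemeral filesystem.
def fetch_pdf(url, opener: ->(u) { URI.open(u, 'rb', &:read) })
  StringIO.new(opener.call(url))
end
```

A background worker could pass the resulting IO to any processing step that accepts one; libraries that insist on a filesystem path would still need a Tempfile.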
Turns out using the File class wasn't the right way to go about it. But using Tempfile works!
def break_up_batch(params, batch_id, current_user)
  essay_data = []
  # create a separate essay for each grouped essay
  tempfile = Tempfile.new(['temppdf', '.pdf'], Rails.root.join('tmp'))
  tempfile.binmode
  tempfile.write HTTParty.get(Batch.find(batch_id).document.url).parsed_response
  tempfile.close
  save_path = tempfile.path
  params["essay"].split("~").each do |data|
    data = data.split(" ")
    hypdf_url = HyPDF.pdfextract(
      save_path,
      first_page: data[1].to_i,
      last_page: data[2].to_i,
      bucket: 'essay101',
      public: true
    )
    object = { student_name: data[0], batch_id: batch_id, url: hypdf_url[:url] }
    essay_data << object
  end
  essay_data.each do |essay|
    saved_essay = Essay.create(essay)
    saved_essay.update_attributes(:company_id => current_user.company_id) if current_user.company_id
  end
  tempfile.unlink
end
I'm in need of some guidance. I'm using the 6px.io API to resize images. The conversion can take a couple of seconds, and 6px sends me a POST callback when the image has been stored in our S3 bucket. After the callback, once the image is resized and saved, we'd like to download the file from our S3 bucket to the user's browser.
If the resized image is already in our S3 bucket and does not need to be processed by 6px, we use send_data to download the file to the user. Unfortunately I can't use send_data in the callback controller action to initiate the download. How is this done in Rails?
User story:
Click medium resize of image menu item of dropdown.
Rails controller action builds JSON with post callback url and sends to 6px.
6px returns JSON with input and output information plus the status (complete/fail).
File starts to download to user's computer/browser ***This is what I need help with.
Example code (this is for work, can't post the real code):
class FakeExampleController < ApplicationController
  def convert_image
    # code omitted
    @fake_example = FakeExample.find_by(:id)
    image_conversion(params)
  end

  def callback
    # width, height, type variable code omitted
    if params['status'] == 'complete'
      converted_file = FakeConvertedImage.create!(filename: @new_file_name, attachment_id: @fake_example.id, format: type, width: width, height: height)
      send_converted_file(:url => "OUR_S3_BUCKET", :filename => "#{converted_file.filename}", :type => "#{type}", :disposition => 'attachment') # This does not work to download the file to the user's browser.
    else
      raise "Image Conversion Failed"
    end
  end

  def image_conversion(params)
    callback_url = fake_example_url.gsub(/localhost:3000/, '********.ngrok.com')
    image_converter = ImageConverter.new(params) # This is a wrapper I made for the 6px-ruby gem
    image_converter
      .convert_type(@fake_example.mime_type_for_6px(params[:mimetype]))
      .resize_images
      .save("OUR_S3_BUCKET_URL", callback_url)
  end

  def send_converted_file(opts = {})
    other_opts = opts.select { |opt, v| /(filename|type|disposition)/ === opt }
    response = { url: opts.fetch(:url), :opts => other_opts }
    send_data(open(URI.encode(response[:url].to_s)).read, response[:opts])
  end
end
I don't get any errors. I just need advice on how to download the file saved in our S3 bucket to the user.
Two things I want:
a) I want to be able to save a record in the db only if the API call succeeds
b) I want to execute the API call only if the db record saves successfully
The goal, is to keep data stored locally (in the DB) consistent with that of the data on Stripe.
@payment = Payment.new(...)
begin
  Payment.transaction do
    @payment.save!
    stripe_customer = Stripe::Customer.retrieve(manager.customer_id)
    charge = Stripe::Charge.create(
      amount: @plan.amount_in_cents,
      currency: 'usd',
      customer: stripe_customer.id
    )
  end
# https://stripe.com/docs/api#errors
rescue Stripe::CardError, Stripe::InvalidRequestError, Stripe::APIError => error
  @payment.errors.add :base, 'There was a problem processing your credit card. Please try again.'
  render :new
rescue => error
  render :new
else
  redirect_to dashboard_root_path, notice: 'Thank you. Your payment is being processed.'
end
The above will work because, if the record doesn't save (the @payment.save! call), the rest of the code doesn't execute.
But what if I needed the @payment object saved after the API call, because I need to assign it values from the API results? Take for example:
@payment = Payment.new(...)
begin
  Payment.transaction do
    stripe_customer = Stripe::Customer.retrieve(manager.customer_id)
    charge = Stripe::Charge.create(
      amount: @plan.amount_in_cents,
      currency: 'usd',
      customer: stripe_customer.id
    )
    @payment.payment_id = charge[:id]
    @payment.activated_at = Time.now.utc
    @payment.save!
  end
# https://stripe.com/docs/api#errors
rescue Stripe::CardError, Stripe::InvalidRequestError, Stripe::APIError => error
  @payment.errors.add :base, 'There was a problem processing your credit card. Please try again.'
  render :new
rescue => error
  render :new
else
  redirect_to dashboard_root_path, notice: 'Thank you. Your payment is being processed.'
end
You'll notice @payment.save! happens after the API call. This could be a problem: the API call runs before the DB tries to save the record, which could mean a successful API call but a failed DB commit.
Any ideas / suggestions for this scenario?
You can't execute API => DB and DB => API at the same time (that sounds like an infinite chain of conditions); at least I can't imagine how to achieve this workflow. I understand your data consistency needs, so I propose:
Check if the record is valid with @payment.valid? (probably via a custom method like valid_without_payment?)
Run the API call
Save the record (with payment_id) only if the API call succeeds
Alternatively:
Save the record without payment_id
Run the API call
Update the record with payment_id (from the API response) if the call succeeds
Run a task (script) periodically (cron) to find inconsistent instances (where(payment_id: nil)) and delete them
I think both options are acceptable and your data will remain consistent.
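A sketch of the first option (all names hypothetical; a real implementation should rescue the specific Stripe error classes rather than StandardError):

```ruby
# Validate the record locally first, call the payment API, and only persist
# once the charge has succeeded. Returns true on success, false otherwise.
def charge_and_save(payment, charger:)
  return false unless payment.valid_without_payment? # custom pre-check
  charge = charger.call                              # e.g. Stripe::Charge.create(...)
  payment.payment_id = charge[:id]
  payment.activated_at = Time.now.utc
  payment.save!
  true
rescue StandardError
  false # real app: rescue Stripe::CardError, Stripe::InvalidRequestError, ...
end
```

Passing the API call in as a callable keeps the ordering rule (validate, then charge, then save) in one place and makes the flow easy to exercise with a stub.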