Given that I have the following data for an event as a hash in Ruby on Rails:
event = {
  start_at: Time.now,
  end_at: Time.now + 3600,
  summary: 'My meeting',
  description: 'A zoom meeting about food',
  event_url: 'https://food.example.com',
  formatted_address: ''
}
How do I deliver that info to a user as a dynamically generated iCal/ICS file?
The icalendar gem works well: https://github.com/icalendar/icalendar
I use it to generate both Outlook (.vcs) and iCal (.ics) versions. Works great.
cal = Icalendar::Calendar.new
filename = "Foo at #{foo.name}"
if params[:format] == 'vcs'
cal.prodid = '-//Microsoft Corporation//Outlook MIMEDIR//EN'
cal.version = '1.0'
filename += '.vcs'
else # ical
cal.prodid = '-//Acme Widgets, Inc.//NONSGML ExportToCalendar//EN'
cal.version = '2.0'
filename += '.ics'
end
cal.event do |e|
e.dtstart = Icalendar::Values::DateTime.new(foo.start_at, tzid: foo.time_zone)
e.dtend = Icalendar::Values::DateTime.new(foo.end_at, tzid: foo.course.time_zone)
e.summary = foo.summary
e.description = foo.description
e.url = event_url(foo)
e.location = foo.formatted_address
end
send_data cal.to_ical, type: 'text/calendar', disposition: 'attachment', filename: filename
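For comparison, and to make the wire format concrete, here is a hand-rolled sketch of the ICS text for the question's hash. This assumes UTC timestamps and no characters needing RFC 5545 escaping; the icalendar gem handles escaping, line folding, and time zones, so prefer it in real code.

```ruby
# Build a minimal ICS document by hand from the question's event hash.
# The to_ics helper is illustrative, not part of the gem.
event = {
  start_at: Time.now,
  end_at: Time.now + 3600,
  summary: 'My meeting',
  description: 'A zoom meeting about food',
  event_url: 'https://food.example.com',
  formatted_address: ''
}

def to_ics(event)
  fmt = ->(t) { t.utc.strftime('%Y%m%dT%H%M%SZ') }
  [
    'BEGIN:VCALENDAR',
    'VERSION:2.0',
    'PRODID:-//Example//NONSGML ExportToCalendar//EN',
    'BEGIN:VEVENT',
    "DTSTART:#{fmt.call(event[:start_at])}",
    "DTEND:#{fmt.call(event[:end_at])}",
    "SUMMARY:#{event[:summary]}",
    "DESCRIPTION:#{event[:description]}",
    "URL:#{event[:event_url]}",
    "LOCATION:#{event[:formatted_address]}",
    'END:VEVENT',
    'END:VCALENDAR'
  ].join("\r\n") # RFC 5545 requires CRLF line endings
end

ics = to_ics(event)
```

In a controller you would deliver it the same way as above: send_data ics, type: 'text/calendar', disposition: 'attachment', filename: 'event.ics'.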
Related
In my app I currently support upload from a direct download URL: either the user inputs a URL, or one is generated from a Box file-picker widget. I do this with a Net::HTTP request, writing each segment to the filesystem as it arrives.
Now I want to switch to storing files from a URL in S3, for files too big to hold in memory.
Below is a snippet I am currently working on:
queue = Queue.new
up_url = presigned_url_from_aws
down_uri = remote_download_url
producer = Thread.new do
# stream the file from the url,
# (code based on something currently working)
Net::HTTP.start(down_uri.host, down_uri.port, :use_ssl => (down_uri.scheme == 'https')) {|http|
http.request_get(down_uri.path) {|res|
res.read_body {|seg|
queue << seg
update_progress()
}
}
}
end
consumer = Thread.new do
# turn queue input into body_stream ?
end
# Use presigned url to upload file to aws
Net::HTTP.start(up_url.host) do |http|
http.send_request("PUT", up_url.request_uri, body_stream, {
# This is required, or Net::HTTP will add a default unsigned content-type.
"content-type" => "",
})
end
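The missing consumer in the snippet above can be sketched by bridging the Queue to an IO via IO.pipe: a thread drains the queue into the pipe's writer end, and the reader end becomes the request body. This is a sketch under that assumption; the names (queue, reader, writer, consumer) are illustrative.

```ruby
# Bridge a Queue of downloaded segments to an IO suitable for body_stream.
reader, writer = IO.pipe
queue = Queue.new

consumer = Thread.new do
  # Queue#close makes deq return nil once the queue is drained,
  # which ends this loop.
  while (seg = queue.deq)
    writer.write(seg)
  end
  writer.close # signals EOF to whoever is reading the pipe
end
```

Then, in the PUT block, set the request's body_stream to reader and supply either a Content-Length or a Transfer-Encoding: chunked header, since Net::HTTP requires one of the two for a streamed body.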
I eventually found a solution that worked. As before, this code is in a ProgressJob class. I used the AWS multipart upload API. I created a queue for segments, a producer thread to put segments into the queue, a consumer thread to take segments out of the queue for further processing, and a controller thread to close the queue at the right time. In the consumer thread, I accumulated segments in a StringIO object until each part but the last was at least 5 MB (the minimum size for an upload part), and sent the parts to S3 as they were completed, so as not to fill up disk or memory. There were a lot of gotchas, but below is the working code I ended up with, in case it helps someone else:
require 'tempfile'
require 'open-uri'
require 'fileutils'
require 'net/http'
require 'aws-sdk-s3'
class CreateDatafileFromRemoteJob < ProgressJob::Base
Thread.abort_on_exception=true
FIVE_MB = 1024 * 1024 * 5
def initialize(dataset_id, datafile, remote_url, filename, filesize)
@remote_url = remote_url
@dataset_id = dataset_id
@datafile = datafile
@filename = filename
@filesize = filesize # string, because it is used in display
if filesize.to_f < 4000
progress_max = 2
else
progress_max = (filesize.to_f / 4000).to_i + 1
end
super progress_max: progress_max
end
def perform
more_segs_to_do = true
upload_incomplete = true
@datafile.binary_name = @filename
@datafile.storage_root = Application.storage_manager.draft_root.name
@datafile.storage_key = File.join(@datafile.web_id, @filename)
@datafile.binary_size = @filesize
@datafile.save!
if IDB_CONFIG[:aws][:s3_mode]
upload_key = @datafile.storage_key
upload_bucket = Application.storage_manager.draft_root.bucket
if Application.storage_manager.draft_root.prefix
upload_key = "#{Application.storage_manager.draft_root.prefix}#{@datafile.storage_key}"
end
client = Application.aws_client
if @filesize.to_f < FIVE_MB
web_contents = open(@remote_url) {|f| f.read}
Application.storage_manager.draft_root.copy_io_to(@datafile.storage_key, web_contents, nil, @filesize.to_f)
upload_incomplete = false
else
parts = []
seg_queue = Queue.new
mutex = Mutex.new
segs_complete = false
segs_todo = 0
segs_done = 0
begin
upload_id = aws_mulitpart_start(client, upload_bucket, upload_key)
seg_producer = Thread.new do
uri = URI.parse(@remote_url)
Net::HTTP.start(uri.host, uri.port, :use_ssl => (uri.scheme == 'https')) {|http|
http.request_get(uri.path) {|res|
res.read_body {|seg|
mutex.synchronize {
segs_todo = segs_todo + 1
}
seg_queue << seg
update_progress
}
}
}
mutex.synchronize {
segs_complete = true
}
end
seg_consumer = Thread.new do
part_number = 1
partio = StringIO.new("", 'wb+')
while seg = seg_queue.deq # wait for queue to be closed in controller thread
partio << seg
if partio.size > FIVE_MB
partio.rewind
mutex.synchronize {
etag = aws_upload_part(client, partio, upload_bucket, upload_key, part_number, upload_id)
parts_hash = {etag: etag, part_number: part_number}
parts.push(parts_hash)
}
part_number = part_number + 1
partio.close unless partio.closed?
partio = StringIO.new("", 'wb+')
end
mutex.synchronize {
segs_done = segs_done + 1
}
end
# upload last part, less than 5 MB
mutex.synchronize {
partio.rewind
etag = aws_upload_part(client, partio, upload_bucket, upload_key, part_number, upload_id)
parts_hash = {etag: etag, part_number: part_number}
parts.push(parts_hash)
Rails.logger.warn("Another part bites the dust: #{part_number}")
partio.close unless partio.closed?
aws_complete_upload(client, upload_bucket, upload_key, parts, upload_id)
upload_incomplete = false
}
end
controller = Thread.new do
while more_segs_to_do
sleep 0.9
mutex.synchronize {
if segs_complete && ( segs_done == segs_todo)
more_segs_to_do = false
end
}
end
seg_queue.close
end
rescue Exception => ex
Rails.logger.warn("something went wrong during multipart upload")
Rails.logger.warn(ex.class)
Rails.logger.warn(ex.message)
ex.backtrace.each do |line|
Rails.logger.warn(line)
end
Application.aws_client.abort_multipart_upload({
bucket: upload_bucket,
key: upload_key,
upload_id: upload_id,
})
raise ex
end
end
else
filepath = "#{Application.storage_manager.draft_root.path}/#{@datafile.storage_key}"
dir_name = File.dirname(filepath)
FileUtils.mkdir_p(dir_name) unless File.directory?(dir_name)
File.open(filepath, 'wb+') do |outfile|
uri = URI.parse(@remote_url)
Net::HTTP.start(uri.host, uri.port, :use_ssl => (uri.scheme == 'https')) {|http|
http.request_get(uri.path) {|res|
res.read_body {|seg|
outfile << seg
update_progress
}
}
}
end
upload_incomplete = false
end
while upload_incomplete
sleep 1.3
end
end
def aws_mulitpart_start(client, upload_bucket, upload_key)
start_response = client.create_multipart_upload({
bucket: upload_bucket,
key: upload_key,
})
start_response.upload_id
end
def aws_upload_part(client, partio, upload_bucket, upload_key, part_number, upload_id)
part_response = client.upload_part({
body: partio,
bucket: upload_bucket,
key: upload_key,
part_number: part_number,
upload_id: upload_id,
})
part_response.etag
end
def aws_complete_upload(client, upload_bucket, upload_key, parts, upload_id)
response = client.complete_multipart_upload({
bucket: upload_bucket,
key: upload_key,
multipart_upload: {parts: parts, },
upload_id: upload_id,
})
end
end
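The heart of the job above is the "accumulate until at least 5 MB" rule. Isolated from AWS and the threading, that buffering logic can be sketched (and unit-tested) on its own; each_part and the block-based collector are illustrative names, not part of the job:

```ruby
require 'stringio'

FIVE_MB = 1024 * 1024 * 5

# Accumulate streamed segments into parts of at least min_size bytes,
# yielding each completed part. The final part may be smaller, mirroring
# the S3 multipart rule that only the last part may be under the minimum.
def each_part(segments, min_size: FIVE_MB)
  buffer = StringIO.new(+'', 'wb+')
  segments.each do |seg|
    buffer << seg
    if buffer.size >= min_size
      buffer.rewind
      yield buffer.read
      buffer = StringIO.new(+'', 'wb+')
    end
  end
  if buffer.size > 0
    buffer.rewind
    yield buffer.read # last (possibly short) part
  end
end
```

In the real job, the yield would be replaced by the upload_part call, and the part number and etag collected for complete_multipart_upload.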
I'm completely new to Ruby on Rails, but I think I might be missing something obvious. I'm currently working on a web app that scrapes auction websites. The bones of the app were created by someone else. I'm currently trying to add new website scrapes, but they don't seem to be working.
I have read through some of the Nokogiri documentation, checked that the scraped information is indeed not being written to the database (the seeded URLs being targeted are there when I check via the rails console), and used the Chrome extension CSS Selector Tester to check that I am targeting the correct CSS selectors. The record ids are correct when I check via the rails console.
I have put what I think are the important sections of code below, but I might be missing something that I don't realise is important.
The websites I'm having issues with are Lot-art.com & Lot-Tissimo.com
Any help will be much appreciated.
Seeded URLs
Source.create(name: "Auction.fr", query_template: "https://www.auction.fr/_en/lot/search/?contexte=futures&tri=date_debut%20ASC&query={query}&page={page}")
Source.create(name: "Invaluable.co.uk", query_template: "https://www.invaluable.co.uk/search/api/search-results?keyword={query}&size=1000")
Source.create(name: "Interencheres.com", query_template: "http://www.interencheres.com/en/recherche/lot?search%5Bkeyword%5D={query}&page={page}")
Source.create(name: "Gazette-drouot.com", query_template: "http://catalogue.gazette-drouot.com/html/g/recherche.jsp?numPage={page}&filterDate=1&query={query}&npp=100")
Source.create(name: "Lot-art.com", query_template: "http://www.lot-art.com/auction-search/?form_id=lot_search_form&page=1&mq=&q={query}&ord=recent")
Source.create(name: "Lot-tissimo.com", query_template: "https://lot-tissimo.com/en/cmd=s&lwr=&ww={query}&xw=&srt=SN&wg=EUR&page={page}")
Scheduler code
require 'rufus-scheduler'
require 'nokogiri'
require 'mechanize'
require 'open-uri'
require "net/https"
s = Rufus::Scheduler.singleton
s.interval '1m' do
setting = Setting.find(1)
agent = Mechanize.new
agent.user_agent_alias = 'Windows Chrome'
agent.cookie_jar.load(File.join(Rails.root, 'tmp/cookies.yaml'))
List.all.each do |list|
number_of_new_items = 0
list.actions.each do |action|
url = action.source.query_template.gsub('{query}', action.list.query)
case action.source.id
when 1 # Auction.fr
20.downto(1) do |page|
doc = Nokogiri::HTML(open(url.gsub('{page}', page.to_s)))
doc.css("div.list-products > ul > li").reverse.each do |item_data|
price = 0
if item_data.at_css("h3.h4.adjucation.ft-blue") && /Selling price : ([\d\s]+) €/.match(item_data.at_css("h3.h4.adjucation.ft-blue").text)
price = /Selling price : ([\d\s]+) €/.match(item_data.at_css("h3.h4.adjucation.ft-blue").text)[1].gsub(" ", "")
end
item = action.items.new(
title: item_data.at_css("h2").text.strip,
url: item_data.at_css("h2 a")["href"],
picture: item_data.at_css("div.image-wrap.lazy div.image img")["src"],
price: price,
currency: "€"
)
ActiveRecord::Base.logger.silence do # This disables log writing
if item.save
number_of_new_items = number_of_new_items + 1
end
end
end
end
when 97 # Lot-Tissimo.com
5.downto(1) do |page|
doc = Nokogiri::HTML(open(url.gsub('{page}', page.to_s)))
doc.css("#inhalt > .objektliste").reverse.each do |item_data|
# price = 0
# if item_data.at_css("h3.h4.adjucation.ft-blue") && /Selling price : ([\d\s]+) €/.match(item_data.at_css("h3.h4.adjucation.ft-blue").text)
# price = /Selling price : ([\d\s]+) €/.match(item_data.at_css("h3.h4.adjucation.ft-blue").text)[1].gsub(" ", "")
# end
item = action.items.new(
title: item_data.at_css("div.objli-desc").text.strip,
url: item_data.at_css("td.objektliste-foto a")["href"],
picture: item_data.at_css("td.objektliste-foto a#lot_link img")["src"],
price: price,
currency: "€"
)
ActiveRecord::Base.logger.silence do # This disables log writing
if item.save
number_of_new_items = number_of_new_items + 1
end
end
end
end
when 2 # Invaluable.co.uk
doc = JSON.parse(open(url).read)
doc["itemViewList"].reverse.each do |item_data|
puts item_data["itemView"]["photos"]
item = action.items.new(
title: item_data["itemView"]["title"],
url: "https://www.invaluable.co.uk/buy-now/" + item_data["itemView"]["title"].parameterize + "-" + item_data["itemView"]["ref"],
picture: item_data["itemView"]["photos"] != nil ? item_data["itemView"]["photos"].first["_links"]["medium"]["href"] : nil,
price: item_data["itemView"]["price"],
currency: item_data["itemView"]["currencySymbol"]
)
ActiveRecord::Base.logger.silence do # This disables log writing
if item.save
number_of_new_items = number_of_new_items + 1
end
end
end
when 3 # Interencheres.com
# doc = Nokogiri::HTML(open(url))
5.downto(1) do |page|
doc = Nokogiri::HTML(open(url.gsub('{page}', page.to_s)))
doc.css("div#lots_0 div.ligne_vente").reverse.each do |item_data|
price = 0
item = action.items.new(
title: item_data.at_css("div.ph_vente div.des_vente p a").text.strip,
url: "http://www.interencheres.com" + item_data.at_css("div.ph_vente div.des_vente p a")["href"],
picture: item_data.at_css("div.ph_vente div.gd_ph_vente img")["src"],
price: price,
currency: "€"
)
ActiveRecord::Base.logger.silence do # This disables log writing
if item.save
number_of_new_items = number_of_new_items + 1
end
end
end
end
when 4 # Gazette-drouot.com
5.downto(1) do |page|
# doc = Nokogiri::HTML(open(url.gsub('{page}', page.to_s)))
doc = agent.get(url.gsub('{page}', page.to_s))
# doc = agent.get(url)
doc.css("div#recherche_resultats div.lot_recherche").reverse.each do |item_data|
price = 0
picture = item_data.at_css("img.image_thumb_recherche") ? item_data.at_css("img.image_thumb_recherche")["src"] : nil
item = action.items.new(
title: item_data.at_css("#des_recherche").text.strip.truncate(140),
url: "http://catalogue.gazette-drouot.com/html/g/" + item_data.at_css("a.lien_under")["href"],
picture: picture,
price: price,
currency: "€"
)
ActiveRecord::Base.logger.silence do # This disables log writing
if item.save
number_of_new_items = number_of_new_items + 1
end
end
end
end
when 69 # Lot-art.com
doc = agent.get(url)
doc.css("div.lot_list_holder").reverse.each do |item_data|
price = 0
item = action.items.new(
title: item_data.at_css("div.lot_list_body a")[0].text.strip.truncate(140),
url: item_data.at_css("div.lot_list_body")["href"],
picture: item_data.at_css("a.lot_list_thumb img")["src"],
price: price,
currency: "€"
)
ActiveRecord::Base.logger.silence do # This disables log writing
if item.save
number_of_new_items = number_of_new_items + 1
end
end
end
end
end
if number_of_new_items > 0 && setting.notifications_per_hour > setting.notifications_this_hour && setting.pushover_app_token.present? && setting.pushover_user_key.present?
url = URI.parse("https://api.pushover.net/1/messages.json")
req = Net::HTTP::Post.new(url.path)
req.set_form_data({
:token => setting.pushover_app_token,
:user => setting.pushover_user_key,
:message => "#{number_of_new_items} new items on #{list.name}!",
:url_title => "Check the list",
:url => "http://spottheauction.com/lists/#{list.id}"
})
res = Net::HTTP.new(url.host, url.port)
res.use_ssl = true
res.verify_mode = OpenSSL::SSL::VERIFY_PEER
res.start {|http| http.request(req) }
end
end
agent.cookie_jar.save(File.join(Rails.root, 'tmp/cookies.yaml'))
end
s.cron '0 * * * *' do
setting = Setting.find(1)
setting.notifications_this_hour = 0
setting.save
end
new just initializes an instance but doesn't save the instance. Do you actually call save somewhere?
You have two options:
Call save on the item:
item = action.items.new(
# ...
)
item.save
Or use create instead of new:
item = action.items.create(
# ...
)
In case someone else comes across this: I got the scraping of lot-art.com to work. It seemed that I was lacking specificity in the CSS selector for Nokogiri to pull the correct data.
I am still having continuing issues with lot-tissimo, although that appears to stem from something else, as other scrapers such as Scrapinghub's Portia spiders have issues with it too.
I have set prawn in my rails app to generate:
format.pdf do
pdf = SalesByDayPdf.new(@daily_salesnp, @amount_total, @discount_total, @grand_total)
pdf.render_file "daily_sales.pdf"
send_data pdf.render, filename: 'daily_sales.pdf', type: 'application/pdf', disposition: 'inline'
end
And this is my SalesByDayPdf
def initialize(daily_salesnp, grand_total, discount_total, amount_total)
super()
@daily_salesnp = daily_salesnp
@amount_total = amount_total
@discount_total = discount_total
@grand_total = grand_total
header
text_content
table_content
footer
end
This works fine.
Now I want to send this pdf from action mailer. I have set it in my DailySalesMailer as:
def send_daily_sale(daily_salesnp, grand_total, discount_total, amount_total)
@daily_salesnp = daily_salesnp
@amount_total = amount_total
@discount_total = discount_total
@grand_total = grand_total
attachments["daily_sales.pdf"] = SalesByDayPdf.new(daily_salesnp, grand_total, discount_total, amount_total)
mail(:to => "email@gmail.com", :subject => 'Sales by Day Report')
end
So basically I copied the pdf generator in mailer and passed same arguments defined in my controller.
But I'm getting:
wrong number of arguments (given 1, expected 4)
What am I doing wrong?
The solution is to call render on the Prawn document, so the attachment receives the PDF bytes rather than the document object itself:
attachments["daily_sales.pdf"] = SalesByDayPdf.new(daily_salesnp, grand_total, discount_total, amount_total).render
I'm currently using the icalendar gem to create a new ical calendar and then send it via the mandrill_mailer gem as an attachment. I've tried a variety of different methods - so far I believe I've gotten closest with:
Event.rb
require 'base64'
def self.export_events(user)
@event = Event.last
@calendar = Icalendar::Calendar.new
event = Icalendar::Event.new
event.summary = @event.title
event.dtstart = @event.start_time.strftime("%Y%m%dT%H%M%S")
event.dtend = @event.end_time.strftime("%Y%m%dT%H%M%S")
event.description = @event.desc
event.location = @event.location
@calendar.add_event(event)
encoded_cal = Base64.encode64(@calendar.to_ical)
CalendarMailer.send_to_ical(user, encoded_cal).deliver
end
calendar_mailer.rb
class CalendarMailer < MandrillMailer::TemplateMailer
default from: "blah@blah.com"
# iCal
def send_to_ical(user, encoded_cal)
mandrill_mail template: "ical-file",
subject: "Your iCal file",
to: { email: user.email, name: user.name },
inline_css: true,
async: true,
track_clicks: true,
attachments: [
{
type: "text/calendar",
content: encoded_cal,
name: "calendar.ics",
}
]
end
end
I know my mailer stuff is set up correctly since I'm able to send other types of transactional emails successfully. Also, according to this S.O. post I can't send it directly as a .ics file which is why I'm sending the base64 encoded version of it. Here is the error I keep getting regardless of what I do (whether it's the above or creating a tmp file and opening/reading the newly created tmp file in calendar_mailer.rb):
TypeError: no implicit conversion of nil into String
from /usr/local/rvm/rubies/ruby-2.0.0-p481/lib/ruby/2.0.0/base64.rb:38:in `pack'
from /usr/local/rvm/rubies/ruby-2.0.0-p481/lib/ruby/2.0.0/base64.rb:38:in `encode64'
from /usr/local/rvm/gems/ruby-2.0.0-p481@rails-4.0.2/gems/mandrill_mailer-0.4.13/lib/mandrill_mailer/core_mailer.rb:263:in `block in mandrill_attachment_args'
from /usr/local/rvm/gems/ruby-2.0.0-p481@rails-4.0.2/gems/mandrill_mailer-0.4.13/lib/mandrill_mailer/core_mailer.rb:258:in `map'
from /usr/local/rvm/gems/ruby-2.0.0-p481@rails-4.0.2/gems/mandrill_mailer-0.4.13/lib/mandrill_mailer/core_mailer.rb:258:in `mandrill_attachment_args'
from /usr/local/rvm/gems/ruby-2.0.0-p481@rails-4.0.2/gems/mandrill_mailer-0.4.13/lib/mandrill_mailer/template_mailer.rb:191:in `mandrill_mail'
from /Users/alansalganik/projects/glyfe/app/mailers/calendar_mailer.rb:8:in `send_to_ical'
from /usr/local/rvm/gems/ruby-2.0.0-p481@rails-4.0.2/gems/mandrill_mailer-0.4.13/lib/mandrill_mailer/core_mailer.rb:283:in `call'
from /usr/local/rvm/gems/ruby-2.0.0-p481@rails-4.0.2/gems/mandrill_mailer-0.4.13/lib/mandrill_mailer/core_mailer.rb:283:in `method_missing'
from (irb):763
from /usr/local/rvm/gems/ruby-2.0.0-p481@rails-4.0.2/gems/railties-4.1.1/lib/rails/commands/console.rb:90:in `start'
from /usr/local/rvm/gems/ruby-2.0.0-p481@rails-4.0.2/gems/railties-4.1.1/lib/rails/commands/console.rb:9:in `start'
from /usr/local/rvm/gems/ruby-2.0.0-p481@rails-4.0.2/gems/railties-4.1.1/lib/rails/commands/commands_tasks.rb:69:in `console'
from /usr/local/rvm/gems/ruby-2.0.0-p481@rails-4.0.2/gems/railties-4.1.1/lib/rails/commands/commands_tasks.rb:40:in `run_command!'
from /usr/local/rvm/gems/ruby-2.0.0-p481@rails-4.0.2/gems/railties-4.1.1/lib/rails/commands.rb:17:in `<top (required)>'
from bin/rails:4:in `require'
Thanks in advance.
Probably not the best code in the world, but an example:
class Outlook
def self.create_cal
@calendar = Icalendar::Calendar.new
event = Icalendar::Event.new
event.summary = "SUMMARY"
event.dtstart = Time.now.strftime("%Y%m%dT%H%M%S")
event.dtend = (Time.now + 1.hour).strftime("%Y%m%dT%H%M%S")
event.description = "DESC"
event.location = "Holborn, London WC1V"
@calendar.add_event(event)
return @calendar.to_ical
end
end
And
ics_file = Outlook.create_cal
mandrill_mail(
(...)
attachments: [
{ content: ics_file, name: 'ical.ics', type: 'text/calendar' }
]
)
I have the following problem. Sounds are hidden from the public folder, because only certain users should have access to the sound files. So I made a method that acts like a sound URL, but first checks whether the current user is allowed to access the file.
The file gets sent by the send_data method. The problem is that it works quite slowly, if it works at all. The developer of the jPlayer plugin, which I use to play the sound, told me that I should accept byte-range requests to make it work properly.
How can I do this within a Rails controller when sending the file with send_data or send_file?
Thanks,
Markus
I've been able to serve up the files with some success using send_file. Although I have one hitch: seeking to an earlier part of the song causes a new request, which makes the song restart from 0:00 instead of the true location on the seek bar. This is what I have working for me so far:
file_begin = 0
file_size = @media.file_file_size
file_end = file_size - 1
if !request.headers["Range"]
status_code = "200 OK"
else
status_code = "206 Partial Content"
match = request.headers['range'].match(/bytes=(\d+)-(\d*)/)
if match
file_begin = match[1]
file_end = match[1] if match[2] && !match[2].empty?
end
response.header["Content-Range"] = "bytes " + file_begin.to_s + "-" + file_end.to_s + "/" + file_size.to_s
end
response.header["Content-Length"] = (file_end.to_i - file_begin.to_i + 1).to_s
response.header["Last-Modified"] = @media.file_updated_at.to_s
response.header["Cache-Control"] = "public, must-revalidate, max-age=0"
response.header["Pragma"] = "no-cache"
response.header["Accept-Ranges"]= "bytes"
response.header["Content-Transfer-Encoding"] = "binary"
send_file(DataAccess.getUserMusicDirectory(current_user.public_token) + @media.sub_path,
:filename => @media.file_file_name,
:type => @media.file_content_type,
:disposition => "inline",
:status => status_code,
:stream => 'true',
:buffer_size => 4096)
Here is my version. I use the gem 'ogginfo-rb' to calculate the duration, which is required to serve Ogg files properly.
P.S. I always have three formats: WAV, MP3, and Ogg.
the_file = File.open(file_path)
file_begin = 0
file_size = the_file.size
file_end = file_size - 1
if request.headers['Range']
status_code = :partial_content
match = request.headers['range'].match(/bytes=(\d+)-(\d*)/)
if match
file_begin = match[1]
file_end = match[1] if match[2] and not match[2].empty?
end
response.headers['Content-Range'] = "bytes #{file_begin}-#{file_end.to_i + (match[2] == '1' ? 1 : 0)}/#{file_size}"
else
status_code = :ok
end
response.headers['Content-Length'] = (file_end.to_i - file_begin.to_i + 1).to_s
response.headers['Last-Modified'] = the_file.mtime.to_s
response.headers['Cache-Control'] = 'public, must-revalidate, max-age=0'
response.headers['Pragma'] = 'no-cache'
response.headers['Accept-Ranges'] = 'bytes'
response.headers['Content-Transfer-Encoding'] = 'binary'
require 'ogginfo-rb'
ogginfo = Ogg::Info::open(the_file.path.gsub(/\.mp3|\.wav/, '.ogg'))
duration = ogginfo.duration.to_f
response.headers['Content-Duration'] = duration.to_s
response.headers['X-Content-Duration'] = duration.to_s
send_file file_path,
filename: "#{call.id}.#{ext}",
type: Mime::Type.lookup_by_extension(ext),
status: status_code,
disposition: 'inline',
stream: 'true',
buffer_size: 32768
I used Garrett's answer and modified it (including one or two bug fixes). I also used send_data instead of reading from a file:
def stream_data data, options={}
range_start = 0
file_size = data.length
range_end = file_size - 1
status_code = "200"
if request.headers["Range"]
status_code = "206"
request.headers['range'].match(/bytes=(\d+)-(\d*)/).try do |match|
range_start = match[1].to_i
range_end = match[2].to_i unless match[2]&.empty?
end
response.header["Content-Range"] = "bytes #{range_start}-#{range_end}/#{file_size}"
end
response.header["Content-Length"] = (range_end - range_start + 1).to_s
response.header["Accept-Ranges"] = "bytes"
send_data(data[range_start..range_end],
filename: options[:filename],
type: options[:type],
disposition: "inline",
status: status_code)
end
Another amended version: I was trying to download a zip file as binary content, and this is what worked for me:
def byte_range_response(request, response, content)
file_begin = 0
file_size = content.bytesize
file_end = file_size - 1
status_code = '206 Partial Content'
match = request.headers['range'].match(/bytes=(\d+)-(\d*)/)
if match
file_begin = match[1]
file_end = match[2] if match[2] && !match[2].empty?
end
content_length = file_end.to_i - file_begin.to_i + 1
response.header['Content-Range'] = 'bytes ' + file_begin.to_s + '-' + file_end.to_s + '/' + file_size.to_s
response.header['Content-Length'] = content_length.to_s
response.header['Cache-Control'] = 'public, must-revalidate, max-age=0'
response.header['Pragma'] = 'no-cache'
response.header['Accept-Ranges']= 'bytes'
response.header['Content-Transfer-Encoding'] = 'binary'
send_data get_partial_content(content, content_length, file_begin.to_i), type: 'application/octet-stream', status: status_code
end
def get_partial_content(content, content_length, offset)
test_file = Tempfile.new(['test-file', '.zip'])
test_file.binmode
test_file.write(content) # write, not puts: puts appends a newline and corrupts binary data
partial_content = IO.binread(test_file.path, content_length, offset)
test_file.close
test_file.unlink
partial_content
end