ActiveStorage not generating "content disposition inline" urls

I've configured Rails to serve images inline (actually over-configured, since it's the default anyway):
config.active_storage.content_types_allowed_inline << "image/png"
config.active_storage.content_types_to_serve_as_binary -= ["image/png"]
Then, in the console, running
app.rails_blob_url(model.thumbnail, disposition: :inline)
which generates http://localhost:3000/.../1668042826_thumbnail.png?disposition=inline
This URL still responds with "content-disposition: attachment", causing the file to be downloaded rather than displayed inline.
This can be confirmed with
curl -I http://localhost:3000/.../1668042826_thumbnail.png?disposition=inline
which shows that it redirects to
https://our-bucket.eu-west-1.amazonaws.com/key?response-content-disposition=attachment...
How do we generate a response with the correct content disposition?
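One check that narrows this down (a hedged console sketch, not a fix; model.thumbnail is the attachment above) is to confirm the blob's stored content type and whether that type is in the allow-list ActiveStorage actually uses at runtime:
blob = model.thumbnail.blob
blob.content_type
# => expected "image/png"; a blob stored with a different or missing type is
#    typically forced to an attachment disposition regardless of the URL param
ActiveStorage.content_types_allowed_inline.include?(blob.content_type)
ActiveStorage.content_types_to_serve_as_binary.include?(blob.content_type)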

Related

How can I make an uploaded file keep its original `Content-Type` on Amazon S3?

TL;DR
How can I upload an image and keep its original Content-Type, or generate a signed or public URL that forces the correct type for my file?
In more detail:
I have a problem with S3 (I'm actually using MinIO, which is compatible with the S3 protocol) in a Rails app with
gem 'aws-sdk-s3', '~> 1.96'
I created the following method to handle the uploaded file (in the Rails app) and send it to MinIO.
def upload_file(file)
  object_key = "#{Time.now.to_i}-#{file.original_filename}"
  object = @s3_bucket.object(object_key)
  object.upload_file(Pathname.new(file.path))
  object
end
This is my uploaded file, with the correct Content-Type, before it is sent to MinIO.
# file
#<ActionDispatch::Http::UploadedFile:0x00007f47918ef708
 @content_type="image/jpeg",
 @headers=
  "Content-Disposition: form-data; name=\"images[]\"; filename=\"image_test.jpg\"\r\nContent-Type: image/jpeg\r\n",
 @original_filename="image_test.jpg",
 @tempfile=#<File:/tmp/RackMultipart20220120-9-gc3x7n.jpg>>
And here is my file on MinIO, with the incorrect type ("binary/octet-stream").
I need to send it to another service and get an upload URL with the correct Content-Type.
So, how can I upload an image and keep its original Content-Type, or generate a signed or public URL that forces the correct type for my file?
You could use the put_object method on the bucket instance, which accepts a hash of options, one of which is content_type (Reference):
def upload_file(file)
  object_key = "#{Time.now.to_i}-#{file.original_filename}"
  @s3_bucket.put_object({
    key: object_key,
    body: File.open(file.path, "rb"),            # an IO; the SDK also accepts a String
    content_type: "some/content_type"            # e.g. file.content_type to preserve the original
  })
end
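For completeness, a minimal alternative sketch that keeps the question's upload_file call: Aws::S3::Object#upload_file forwards extra options to the underlying put_object (or multipart upload), so the original content type reported by the ActionDispatch upload can be passed straight through.
def upload_file(file)
  object_key = "#{Time.now.to_i}-#{file.original_filename}"
  object = @s3_bucket.object(object_key)
  # forward the browser-reported content type so MinIO/S3 stores it
  object.upload_file(file.path, content_type: file.content_type)
  object
end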

How to upload a html file in s3 via Rails

So, I am trying to upload an HTML file to AWS S3. The file is uploaded, but it doesn't render as an HTML file in the browser.
def upload_coverage_s3
  path_to_file = Rails.root.to_s + '/public/coverage/index.html'
  file = File.open(path_to_file)
  aws_path = "test_coverage/#{Time.now.to_i}/index.html"
  uploadObj = AwsHelper.upload_to_s3_html(aws_path, file)
  uploadObj[:url]
end
def self.upload_to_s3_html(path, file)
  if path.nil? || path.blank?
    puts 'Cannot upload. Path is empty.'
    return
  end
  obj = S3_BUCKET.objects[path]
  obj.write(
    file: file,
    content_type: "text/html",
    acl: :public_read
  )
  upload = {
    url: obj.public_url.to_s,
    name: obj.key
  }
  upload
end
All I get is a white screen with a loading GIF.
I followed this link:
Upload HTML file to AWS S3 and then serving it instead of downloading
as I want similar functionality: uploading an HTML file and then serving it as HTML instead of downloading it.
PS:
I also uploaded that HTML file manually to my S3 bucket, and the issue is the same.
How do I resolve this?
Doesn't S3 support HTML file uploads?
You are only uploading the HTML file and none of its dependencies.
It seems you are uploading test coverage results. Usually index.html is just the entry point, and your test coverage tool generates a lot more files.
You need to upload all the other resources as well, and depending on how they are loaded it may or may not work.
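A rough sketch of that idea (untested; it reuses the S3_BUCKET constant and obj.write call from the question, and the extension-to-content-type map is a minimal assumption to extend as needed): upload everything under public/coverage, not just index.html, so the report's CSS and JavaScript resolve.
require 'pathname'

CONTENT_TYPES = {
  '.html' => 'text/html',
  '.css'  => 'text/css',
  '.js'   => 'application/javascript',
  '.png'  => 'image/png',
  '.gif'  => 'image/gif',
  '.json' => 'application/json'
}.freeze

def self.upload_coverage_dir(prefix)
  root = Rails.root.join('public', 'coverage')
  Dir.glob("#{root}/**/*").reject { |p| File.directory?(p) }.each do |path|
    key = "#{prefix}/#{Pathname.new(path).relative_path_from(root)}"
    obj = S3_BUCKET.objects[key]
    obj.write(
      file: File.open(path),
      content_type: CONTENT_TYPES.fetch(File.extname(path), 'application/octet-stream'),
      acl: :public_read
    )
  end
  S3_BUCKET.objects["#{prefix}/index.html"].public_url.to_s
end
Called like upload_coverage_dir("test_coverage/#{Time.now.to_i}"), it returns the public URL of the entry-point index.html.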

Convert paperclip pdf from S3 to base64 (Rails)

I'm sending a base64 of a PDF to an external API endpoint in a Rails app.
This occurs regularly with different PDFs for different users. I'm currently using the Paperclip gem.
The problem is getting the PDF into a format that I can then convert to base64.
The code below works if I start with a local PDF and .read it, but not when the file comes from S3.
Code:
def self.get_pdf(upload_id)
  # get URL for file in S3 (for directly accessing the PDF in browser)
  # `.generic` implemented via `has_attached_file :generic` in model
  # `.expiring_url` is paperclip syntax for generating a URL
  s3_url = Upload
    .find(upload_id)
    .generic
    .expiring_url(100)
  # open file from URL
  file = open(s3_url)
  # read file
  pdf = File.read(file)
  # convert to base64
  base64 = Base64.encode64(File.open(pdf, "rb").read)
end
Error:
OpenURI::HTTPError (404 Not Found):
Ideally this can just occur in memory instead of actually downloading the file.
Streaming in a base64 from S3 while streaming out the API request would be awesome, but I don't think that's an option here.
UPDATE:
Signed URLs from Cyberduck + Michael's answer work.
Paperclip URLs + Michael's answer result in the error below.
Error:
The specified key does not exist.
Unfortunately I need to use Paperclip so I can generate links and download PDFs on the fly, based on the uploads table records in my db.
Is there a technicality about Paperclip links I don't understand?
require 'net/http'
require 'base64'

base64 = Base64.encode64(get_me(s3_url).body).gsub("\n", '')

def get_me(url)
  uri = URI(url)
  req = Net::HTTP::Get.new(uri)
  req['Any_header_you_might_need'] = 'idem'
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.request(req)
  end
  return res
end
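For reference, a usage sketch against the Paperclip URL from the question (upload_id and the :generic attachment are the question's names). Base64.strict_encode64 produces the same result as encode64 plus the gsub, without emitting newlines in the first place.
s3_url = Upload.find(upload_id).generic.expiring_url(100)
res = get_me(s3_url)
# guard against the 404 case described above before encoding the body
raise "unexpected response: #{res.code}" unless res.is_a?(Net::HTTPSuccess)
base64 = Base64.strict_encode64(res.body)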

How to get file content from post rails

I have a Java client that sends form POST requests with a video file.
On the server I get the following POST:
Parameters: {"video"=>#<ActionDispatch::Http::UploadedFile:0x007f26783b49d0
 @original_filename="video", @content_type=nil,
 @headers="Content-Disposition: form-data; name=\"video\"; filename=\"video\"\r\n",
 @tempfile=#<Tempfile:/tmp/RackMultipart20160405-3-106c9nr>>, "id"=>"36"}
I am trying to save the file to S3 using the following lines.
I know the connection and the actual saving work, because I tried it with a base64 string as the parameter and it worked well.
body = params[:video].tempfile
video_temp_file = write_to_file(body)
VideoUploader.new.upload_video_to_s3(video_temp_file, params[:id].to_s+'.mp4')
On S3 I see empty files, or files of 24 bytes.
Where am I going wrong?
Edit: I am using CarrierWave:
def write_to_file(content)
  thumbnail_file = Tempfile.new(['video', '.mp4'])
  thumbnail_file.binmode # note that the tempfile must be in binary mode
  thumbnail_file.write content
  thumbnail_file.rewind
  thumbnail_file
end
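No answer is shown above, but given the ~24-byte result, a plausible culprit (an assumption, not confirmed) is that write_to_file receives the Tempfile object itself, so thumbnail_file.write content writes the object's to_s string rather than the video bytes. A minimal sketch of that fix, reusing the question's names:
# read the upload's bytes before handing them to write_to_file
body = params[:video].tempfile.read            # or simply params[:video].read
video_temp_file = write_to_file(body)
VideoUploader.new.upload_video_to_s3(video_temp_file, "#{params[:id]}.mp4")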

Opening File in Ruby returning empty file

I am currently trying to store a PDF in a hash for an API call in Ruby. The path to the PDF is stored as:
/Users/myUserName/Desktop/REPOSITORY/path_to_file.pdf
I am using a block to store the file in the hash, like so:
File.open(pdf_location, "rb") do |file|
  params = {
    # other irrelevant entries
    :document => file
  }
  pdf_upload_request('post', params, headers)
end
I am receiving a 400 error from the server saying that my document is empty, and when I do `puts file.read`, it is empty. However, when I visit the file path, it's clear that the file is not empty. Am I missing something here? Any help would be greatly appreciated. Thank you!
Edit:
I recorded my HTTP request with VCR; here it is:
request:
  method: post
  uri: request_uri
  body:
    encoding: US-ASCII
    string: ''
  headers:
    Authorization:
    - Bearer 3ZOCPnwfoN7VfdGh7k4lrBuEYs4gN1
    Content-Type:
    - multipart/form-data; boundary=-----------RubyMultipartPost
    Content-Length:
    - '246659'
So I don't think the issue is with me sending the file with multipart encoding.
Update:
The file paths to the PDF are generated from a URL and stored in the tmp folder of my application. They are generated through this method:
def get_temporary_pdf(chrono_detail, recording, host)
  auth_token = User.find(chrono_detail.user_id).authentication_token
  # pdf_location = "https://54.84.224.252/recording/5/analysis.pdf/?token=Ybp37kw7HrSt8NyyPnBZ"
  pdf_location = host + '/recordings/' + recording.id.to_s + '/analysis.pdf/?format=pdf&download=true&token=' + auth_token
  filename = "Will" + '_' + recording.id.to_s + '_' + Date.new.to_s + '.pdf'
  Thread.new do
    File.open(Rails.root.join("tmp", filename), "wb") do |file|
      file.write(open(pdf_location, { ssl_verify_mode: OpenSSL::SSL::VERIFY_NONE }).read)
    end
  end
  Rails.root.join("tmp", filename)
end
They are then used in the API call:
client.upload_document(patient_id, file_path, description)
I can see them physically in my tmp folder and can view them with Preview. Everything seems to work. But as a test, I changed file_path to point to a different PDF:
/Users/myUsername/Desktop/example.pdf
Using this file path worked. The PDF was uploaded to the third-party system correctly; I can physically see it there. Do you think this means there is an issue with the tmp folder, or with how I generate the temporary PDFs?
Most likely, the API is expecting a POST with Content-Type: multipart/form-data. Just sending the file handle (which document: file does) won't work, as the file handle is only relevant to your local Ruby process; and even sending the binary string as a parameter won't work, since your content-type isn't properly set to encode a file.
Since you're already using HTTParty, though, you can fix this by using HTTMultiParty:
require 'httmultiparty'

class SomeClient
  include HTTMultiParty
  base_uri 'http://localhost:3000'
end

SomeClient.post('/', :query => {
  :foo => 'bar',
  :somefile => File.new('README.md')
})
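Adapted to this question (a sketch: the '/documents' path is a placeholder, while :document and pdf_location come from the asker's code), HTTMultiParty switches to a multipart POST as soon as it sees a File object in the params:
SomeClient.post('/documents', :query => {
  :document => File.new(pdf_location)   # a File object triggers multipart encoding
})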
Try this:
file = File.read(pdf_location)
params = {
  # other irrelevant entries
  document: file
}
headers = {}
pdf_upload_request('post', params, headers)
Not sure, but maybe you need to close the file first...
So the issue arose from the multithreading I used to avoid timeout errors. The file path would get generated and referenced in the API call before anything was actually written to the document.
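A minimal sketch of that fix inside get_temporary_pdf (pdf_location and filename are the method's own variables; URI.open is the modern open-uri form of the bare open call used above): keep a handle to the writer thread and join it, or drop the thread entirely, so the PDF exists on disk before the path is used.
require 'open-uri'
require 'openssl'

writer = Thread.new do
  File.open(Rails.root.join("tmp", filename), "wb") do |file|
    file.write(URI.open(pdf_location, ssl_verify_mode: OpenSSL::SSL::VERIFY_NONE).read)
  end
end
writer.join # block until the download has finished
Rails.root.join("tmp", filename)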
