How to upload an HTML file to S3 via Rails

I am trying to upload an HTML file to my AWS S3 bucket. The file is uploaded, but it doesn't render as an HTML file in the browser.
def upload_coverage_s3
  path_to_file = Rails.root.to_s + '/public/coverage/index.html'
  file = File.open(path_to_file)
  aws_path = "test_coverage/#{Time.now.to_i}/index.html"
  uploadObj = AwsHelper.upload_to_s3_html(aws_path, file)
  uploadObj[:url]
end
def self.upload_to_s3_html(path, file)
  if path.nil? || path.blank?
    puts 'Cannot upload. Path is empty.'
    return
  end
  obj = S3_BUCKET.objects[path]
  obj.write(
    file: file,
    content_type: "text/html",
    acl: :public_read
  )
  upload = {
    url: obj.public_url.to_s,
    name: obj.key
  }
  upload
end
All I am getting is a white screen with a loading GIF.
I followed this link:
Upload HTML file to AWS S3 and then serving it instead of downloading
as I want similar functionality: uploading an HTML file and then serving it as an HTML file instead of downloading it.
PS:
I also uploaded that HTML file manually to my S3 bucket, and the issue is the same.
How do I resolve this? Does S3 not support HTML file uploads?

You are only uploading an HTML file and no other dependencies.
It seems you are uploading test coverage results. Usually index.html is just the entry point, and your test coverage tool generates many more files (stylesheets, scripts, images).
You need to upload all the other resources as well; depending on how they are loaded, it may or may not work. A sketch of uploading the whole directory follows.
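For illustration, here is a minimal sketch that uploads every file under public/coverage rather than just index.html. It reuses the question's S3_BUCKET constant and v1-style write API; the method name and the content-type mapping are assumptions, not part of the original code.

# Sketch: upload the whole coverage directory, preserving relative paths.
COVERAGE_CONTENT_TYPES = {
  '.html' => 'text/html',
  '.css'  => 'text/css',
  '.js'   => 'application/javascript',
  '.png'  => 'image/png',
  '.gif'  => 'image/gif'
}.freeze

def upload_coverage_dir_to_s3(prefix)
  base = Rails.root.join('public', 'coverage')
  Dir.glob("#{base}/**/*").select { |p| File.file?(p) }.each do |local_path|
    # Keep the same layout under the S3 prefix as on disk.
    relative_key = Pathname.new(local_path).relative_path_from(base).to_s
    File.open(local_path) do |f|
      S3_BUCKET.objects["#{prefix}/#{relative_key}"].write(
        file: f,
        content_type: COVERAGE_CONTENT_TYPES.fetch(File.extname(local_path), 'application/octet-stream'),
        acl: :public_read
      )
    end
  end
end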

Related

How to make Rails get data from AWS S3 chunk by chunk and send it to the browser as a PDF download file?

I am confused about how to get a file from AWS S3 without writing it to disk, perhaps as a tempfile that is deleted automatically. A friend told me to stream the data chunk by chunk and send it to the browser as a downloadable file.
So here is my code for downloading the file:
def download(key)
  File.open('filename', 'wb') do |file|
    s3.get_object(bucket: 'bucket-test', key: key) do |chunk|
      send_data(chunk, :type => 'application/pdf', :disposition => 'inline')
    end
  end
end
It fails with an error about a Seahorse object that cannot be converted to a string, and I don't actually understand that.
How do I actually stream the data (a PDF file) from AWS and send it to the browser as a downloadable PDF? Is my code not doing what I intended?
Thank you kindly.
Just retrieve the whole file into memory and then send it out:
response = s3.get_object(bucket: 'bucket-test', key: key)
send_data(response.body.read, type: 'application/pdf', disposition: 'inline')
This method also has the benefit that it retries network errors, so it's more resilient than the chunked method, which disables retries on error.
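Wrapped in a full controller action, this approach might look like the following sketch (the controller name, the bucket name, and the NoSuchKey handling are assumptions, not part of the original answer):

class DocumentsController < ApplicationController
  def download
    s3 = Aws::S3::Client.new
    # Without a block, get_object buffers the whole body in memory and
    # retries transient network errors automatically.
    response = s3.get_object(bucket: 'bucket-test', key: params[:key])
    send_data response.body.read,
              filename: File.basename(params[:key]),
              type: 'application/pdf',
              disposition: 'inline'
  rescue Aws::S3::Errors::NoSuchKey
    head :not_found
  end
end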

Do I have to download an image before uploading it to S3?

I have a Rails app with embedded images. I want to upload these images to S3 and serve them from there instead of from the original source. Do I have to download the image to my server before uploading it to S3?
Short answer: If you're scraping someone's content, then...yes, you need to pull the file down before uploading it to S3.
Long answer: If the other site (the original source) is working with you, you can give them a Presigned URL that they can use to upload to your S3 bucket.
From Amazon's docs: https://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjectPreSignedURLRubySDK.html
# Uploading an object using a presigned URL, SDK for Ruby - Version 3.
require 'aws-sdk-s3'
require 'net/http'

s3 = Aws::S3::Resource.new(region: 'us-west-2')
obj = s3.bucket('BucketName').object('KeyName')
# Replace BucketName with the name of your bucket.
# Replace KeyName with the name of the object you are creating or replacing.

url = URI.parse(obj.presigned_url(:put))

body = "Hello World!"
# This is the contents of your object. In this case, it's a simple string.

Net::HTTP.start(url.host) do |http|
  http.send_request("PUT", url.request_uri, body, {
    # This is required, or Net::HTTP will add a default unsigned content-type.
    "content-type" => "",
  })
end

puts obj.get.body.read
# This will print out the contents of your object to the terminal window.
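For the scraping case in the short answer, a minimal download-then-upload sketch could look like the following (the method name, bucket, key, and region are placeholders, not part of the original answer):

require 'aws-sdk-s3'
require 'open-uri'

def mirror_image_to_s3(source_url, bucket:, key:)
  s3 = Aws::S3::Resource.new(region: 'us-west-2')
  # open-uri streams the remote file into a Tempfile/StringIO in memory or tmp.
  URI.open(source_url) do |remote|
    s3.bucket(bucket).object(key).put(
      body: remote,
      content_type: remote.content_type # preserve the original MIME type
    )
  end
end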

Convert Paperclip PDF from S3 to base64 (Rails)

I'm sending a base64 of a PDF to an external API endpoint in a Rails app.
This occurs regularly with different PDFs for different users. I'm currently using the Paperclip gem.
The problem is getting the PDF into a format that I can then convert to base64.
Below works if I start with a PDF locally and .read it, but not when it comes from S3.
Code:
def self.get_pdf(upload_id)
  # get URL for file in S3 (for directly accessing the PDF in browser)
  # `.generic` implemented via `has_attached_file :generic` in model
  # `.expiring_url` is paperclip syntax for generating a URL
  s3_url = Upload
    .find(upload_id)
    .generic
    .expiring_url(100)
  # open file from URL
  file = open(s3_url)
  # read file
  pdf = File.read(file)
  # convert to base64
  base64 = Base64.encode64(File.open(pdf, "rb").read)
end
Error:
OpenURI::HTTPError (404 Not Found):
Ideally this can just occur in memory instead of actually downloading the file.
Streaming in a base64 from S3 while streaming out the API request would be awesome, but I don't think that's an option here.
UPDATE:
- Signed URLs from Cyberduck + Michael's answer work.
- Paperclip URLs + Michael's answer result in the error below.
Error:
The specified key does not exist.
Unfortunately I need to use Paperclip so I can generate links and download PDFs on the fly, based on the uploads table records in my db.
Is there a technicality about Paperclip links I don't understand?
base64 = Base64.encode64(get_me(s3_url).body).gsub("\n", '')

def get_me(url)
  uri = URI(url)
  req = Net::HTTP::Get.new(uri)
  req['Any_header_you_might_need'] = 'idem'
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.request(req)
  end
  return res
end
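Putting this together with the question's method, get_pdf could be reduced to the following sketch (Upload, .generic, and .expiring_url come from the question's Paperclip setup; get_me is the helper above):

require 'base64'

def self.get_pdf(upload_id)
  s3_url = Upload.find(upload_id).generic.expiring_url(100)
  # No local file is written; the response body stays in memory.
  Base64.encode64(get_me(s3_url).body).gsub("\n", '')
end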

Multiple simultaneous remote file downloads to server in Ruby on Rails

How can multiple files be downloaded to the server simultaneously, and how can I check the download progress (I mean the time left)?
In my Rails application I have multiple text fields for remote file URLs like www.example.com/abc.pdf, and all these files should be downloaded to a temp_uploads folder.
For this I have written code in my remotefiles controller like below:
def remotefiles
  params[:reference_file_url].each do |rfile|
    temp_file = File.new("/public/temp_uploads", "w")
    open(temp_file, 'wb') do |file|
      file << open(rfile).read
    end
  end
end
where rfile is the remote file URL.
I have also defined a remotefiles route so it can be called via AJAX.
My AJAX call sends the form data in serialized format, and this controller downloads all the files one by one.
With this code I have to wait until all the files are downloaded to the folder, which is obviously not acceptable (the client asked me to reduce the wait time by downloading all files simultaneously).
Is there any way to get this done? All my code is custom and doesn't use any gems.
For local file upload:
http://blueimp.github.io/jQuery-File-Upload/
For remote file download:
You can use a gem called Sidekiq to write a background job that downloads each file over HTTP, then updates Redis with the status so the browser can poll for it via AJAX (a worker sketch follows the snippet below).
To download the file you can use HTTParty:
require "httparty"
File.open("myfile.txt", "wb") do |f|
f.write HTTParty.get(remote_file_url).response
end
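And here is a minimal sketch of the suggested Sidekiq worker (the job class name, the filename scheme, and the progress-tracking comment are assumptions for illustration):

require 'sidekiq'
require 'httparty'

class RemoteFileDownloadJob
  include Sidekiq::Worker

  def perform(remote_file_url)
    # Derive a local filename from the URL and save into temp_uploads.
    filename = File.basename(URI.parse(remote_file_url).path)
    path = Rails.root.join('public', 'temp_uploads', filename)
    File.open(path, 'wb') do |f|
      f.write HTTParty.get(remote_file_url).body
    end
    # Write a status key to Redis here so the browser can poll progress via AJAX.
  end
end

# Enqueue one job per URL so the downloads run concurrently:
# params[:reference_file_url].each { |url| RemoteFileDownloadJob.perform_async(url) }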
Let me answer the parallel file download part of the question. You can use a library like Typhoeus or em-http-request for that:
# Typhoeus
hydra = Typhoeus::Hydra.new
params[:reference_file_url].each do |rfile|
  request = Typhoeus::Request.new(rfile, followlocation: true)
  request.on_complete do |response|
    # do_something_with response (see the sketch below)
  end
  hydra.queue(request)
end
hydra.run
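For this question, the on_complete callback could write each response body into temp_uploads, for example (a sketch; the filename scheme is an assumption):

request.on_complete do |response|
  if response.success?
    # Derive a local filename from the requested URL and save the body.
    filename = File.basename(URI.parse(response.effective_url).path)
    File.binwrite(Rails.root.join('public', 'temp_uploads', filename), response.body)
  end
end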

What does this error mean in Rails: "Errno::ENOENT in AssetsController#get ...No such file or directory...."

I'm writing an app to store files on Amazon S3. I'm pretty close: I'm able to save and retrieve files using a URL. However, since these links are public, I'm trying to use the following method in my AssetsController to retrieve stored files from S3.
As links these files can be viewed/accessed in the browser, but if I use this code:
# This action will let the users download the files (after a simple authorization check)
def get
  asset = current_user.assets.find_by_id(params[:id])
  if asset
    # Parse the URL for special characters first before downloading
    data = open("#{URI.parse(URI.encode(asset.uploaded_file.url))}")
    # then again, use the "send_data" method to send the above binary "data" as a file.
    send_data data, :filename => asset.uploaded_file_file_name
  else
    flash[:error] = "Access Violation"
    redirect_to assets_path
  end
end
I'm getting this error in my browser:
Errno::ENOENT in AssetsController#get
No such file or directory - http://s3.amazonaws.com/BUCKETNAME/assets/29/FILENAME.jpeg?1339979591
When I click on the resource in the S3 management console while logged in, the file is shown in my browser, and its link is
https://s3.amazonaws.com/BUCKETNAME/assets/29/FILENAME.jpeg?AWSAccessKeyId=XXXXXXXXXXX&Expires=1340003832&Signature=XXXXXXXXXXXXXXXXXX&x-amz-security-token=XXXXXXXXX//////////XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
So it does exist but can't be accessed through my app.
Here is my Application Trace from my browser:
app/controllers/assets_controller.rb:99:in `initialize'
app/controllers/assets_controller.rb:99:in `open'
app/controllers/assets_controller.rb:99:in `get'
Any clues on what's going on?
Thanks
You can also just redirect the user to the file on S3.
Just try
redirect_to asset.uploaded_file.url
instead of send_file. send_file expects a path to a local file, which is then used by the webserver.
If you've set s3_permissions => :private, then you need to call
redirect_to asset.uploaded_file.expiring_url(10)
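Applied to the question's action, that change looks like this sketch (the model and attachment names come from the question):

def get
  asset = current_user.assets.find_by_id(params[:id])
  if asset
    # Let S3 serve the file directly; expiring_url works for private files too.
    redirect_to asset.uploaded_file.expiring_url(10)
  else
    flash[:error] = "Access Violation"
    redirect_to assets_path
  end
end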
It's also interesting that your error message uses http while the working console link uses https; you can also try adding the following option to your model's has_attached_file:
:s3_protocol => 'https'
Hope this helps.
