Rails - Is it possible to give external image a local url? - ruby-on-rails

Because it is not possible to upload files to a Heroku app, I am using Amazon S3 storage for images.
The problem is that the image URL points to the Amazon S3 server.
I would like it to be mydomain.com/myimage.png instead of s3.amazon.com/bucket/myimage.png.
How can I show an image stored on Amazon S3 when someone visits, for example, mydomain.com/myimage.png?

My solution (I only use PNG images):
In routes:
match "/images/vind/:style/:id/:basename.png" => "public#image_proxy", :via => :get
In the public controller:
require 'open-uri' # needed so open() can fetch a URL

def image_proxy
  image_url = "http://s3-eu-west-1.amazonaws.com/konkurrencerher#{request.path}"
  # Cache aggressively so Heroku's reverse proxy can serve repeat requests.
  response.headers['Cache-Control'] = "public, max-age=#{84.hours.to_i}"
  response.headers['Content-Type'] = 'image/png'
  response.headers['Content-Disposition'] = 'inline'
  render :text => open(image_url, "rb").read
end

You can create a Rack middleware that looks for a pattern in the URL (/images/* or *.png) and, when there's a match, acts as a proxy: it requests the image from S3 and serves the content it receives.
Make sure you set the caching headers correctly, so that Heroku's reverse proxy will cache it and serve it quickly.
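A minimal sketch of such a middleware, assuming a hardcoded bucket host and a /images/*.png pattern (the class name, IMAGE_PATTERN, and S3_HOST are all placeholders to adapt):

require 'net/http'

class ImageProxy
  IMAGE_PATTERN = %r{\A/images/.+\.png\z}  # adjust to your URL scheme
  S3_HOST = "https://s3-eu-west-1.amazonaws.com/your-bucket"

  def initialize(app)
    @app = app
  end

  def call(env)
    path = env["PATH_INFO"]
    return @app.call(env) unless path.match?(IMAGE_PATTERN)

    # Fetch the image from S3 and relay it with long-lived cache headers,
    # so Heroku's reverse proxy (or a CDN) can cache it.
    res = Net::HTTP.get_response(URI("#{S3_HOST}#{path}"))
    if res.is_a?(Net::HTTPSuccess)
      [200,
       { "Content-Type"  => res["Content-Type"] || "image/png",
         "Cache-Control" => "public, max-age=#{24 * 60 * 60}" },
       [res.body]]
    else
      [404, { "Content-Type" => "text/plain" }, ["Not found"]]
    end
  end
end

Register it with config.middleware.use ImageProxy in config/application.rb.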

Related

Shrine - Derivation Endpoint and full path URLs with Cloudfront

I'm using Shrine with Rails to upload my images directly to S3; they are then served using Cloudfront.
My setup is working great, but I'm stuck on two issues:
1) User uploads a new Image via Uppy. They click "Create Image Page" which creates a new "page" object. The user is taken to a blank page, while the Image derivative is being created in a background process using Sidekiq. How can we have the image "pop-in" via JS once the derivative is successfully created and promoted to /store? Is there a callback we can pick up on, or is it possible to keep trying to find the derivative via JS until it exists?
2) Using the derivation_endpoint plugin, all of the image URLs are relative. Is there a way to serve these as absolute URLs from Cloudfront/a custom domain? Here's an example using Rails:
Ruby code:
<%= image_tag(image.image_file.derivation_url(:banner, 800, 300)) %>
Resulting HTML:
<img src="/derivations/images/banner/800/300/...">
How can I instead serve this resulting URL:
<img src="http://abc123.cloudfront.net/derivations/images/banner/800/300/...">
Was this intentional to prevent DoS? Thanks.
Adding the host option to "derivation_endpoint" causes the following error returned as XML:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>...</RequestId>
<HostId>...</HostId>
</Error>
Could this be my Cloudfront access settings? Maybe CORS.
Here is my derivation_endpoint setup in image_uploader.rb
plugin :derivation_endpoint, host: "http://abc123.cloudfront.net", prefix: "derivations", secret_key: ..., upload: true
routes.rb
mount Shrine.presign_endpoint(:cache) => "s3/params"
mount PhotoUploader.derivation_endpoint => "derivations/images"
config/initializers/shrine.rb
Shrine.plugin :url_options, store: { host: "http://abc123.cloudfront.net" }
Since the background job will update the record with derivatives, the easiest would be to poll the "show" route of the record (i.e. GET /images/:id) on the client side. In this case the route could have an alternative JSON response via respond_to.
Alternatively, you could send a message from the background job to the client once processing is finished, via WebSockets or something like message_bus.
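For the polling approach, here's a minimal sketch of the show action (the Image model name and route are assumptions; the image_file attachment and the :banner derivation come from the question):

# app/controllers/images_controller.rb (hypothetical controller)
class ImagesController < ApplicationController
  def show
    @image = Image.find(params[:id])
    respond_to do |format|
      format.html
      # The client polls this JSON (e.g. once a second) until ready is true,
      # then swaps the image into the page.
      format.json do
        ready = @image.image_file_attacher.stored?
        render json: {
          ready: ready,
          url: ready ? @image.image_file.derivation_url(:banner, 800, 300) : nil
        }
      end
    end
  end
end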
You can configure the :host option for the derivation_endpoint plugin:
Shrine.plugin :derivation_endpoint, host: "https://cloudfront.net"

Do I have to download an image before uploading it to S3?

I have a Rails app with embedded images. What I want is to upload these images to S3 and serve them from there instead of from the original source. Do I have to download the image to my server before uploading it to S3?
Short answer: if you're scraping someone else's content, then...yes, you need to pull the file down before uploading it to S3.
Long answer: if the other site (the original source) is working with you, you can give them a presigned URL that they can use to upload to your S3 bucket.
From Amazon's docs: https://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjectPreSignedURLRubySDK.html
# Uploading an object using a presigned URL (SDK for Ruby, version 3).
require 'aws-sdk-s3'
require 'net/http'

s3 = Aws::S3::Resource.new(region: 'us-west-2')

# Replace BucketName with the name of your bucket and KeyName with the
# name of the object you are creating or replacing.
obj = s3.bucket('BucketName').object('KeyName')

url = URI.parse(obj.presigned_url(:put))

# The contents of your object. In this case, a simple string.
body = "Hello World!"

# Use TLS, since the presigned URL is https.
Net::HTTP.start(url.host, url.port, use_ssl: url.scheme == 'https') do |http|
  http.send_request("PUT", url.request_uri, body, {
    # This is required, or Net::HTTP will add a default unsigned content-type.
    "content-type" => "",
  })
end

# Print the contents of your object to the terminal.
puts obj.get.body.read
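And for the short-answer case (scraping, with no cooperation from the source), a minimal sketch of pulling the file down and re-uploading it; the bucket, key, and source URL here are hypothetical:

require 'aws-sdk-s3'
require 'open-uri'

s3 = Aws::S3::Resource.new(region: 'us-west-2')
obj = s3.bucket('my-bucket').object('images/remote.jpg')

# open-uri yields an in-memory IO (or a Tempfile for large responses),
# so nothing needs to be written to disk explicitly.
URI.open('https://example.com/remote.jpg') do |remote|
  obj.put(body: remote.read, content_type: remote.content_type)
end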

Convert paperclip pdf from S3 to base64 (Rails)

I'm sending a base64 of a PDF to an external API endpoint in a Rails app.
This occurs regularly with different PDFs for different users. I'm currently using the Paperclip gem.
The problem is getting the PDF into a format that I can then convert to base64.
Below works if I start with a PDF locally and .read it, but not when it comes from S3.
Code:
def self.get_pdf(upload_id)
  # Get the URL for the file in S3 (for directly accessing the PDF in a browser).
  # `.generic` is implemented via `has_attached_file :generic` in the model;
  # `.expiring_url` is Paperclip syntax for generating a URL.
  s3_url = Upload
    .find(upload_id)
    .generic
    .expiring_url(100)
  # open the file from the URL
  file = open(s3_url)
  # read the file
  pdf = File.read(file)
  # convert to base64
  base64 = Base64.encode64(File.open(pdf, "rb").read)
end
Error:
OpenURI::HTTPError (404 Not Found):
Ideally this would happen entirely in memory instead of actually downloading the file.
Streaming base64 in from S3 while streaming the API request out would be awesome, but I don't think that's an option here.
UPDATE:
Signed URLs from Cyberduck + Michael's answer work.
Paperclip URLs + Michael's answer result in the error below.
Error:
The specified key does not exist.
Unfortunately I need to use Paperclip so I can generate links and download PDFs on the fly, based on the uploads table records in my db.
Is there a technicality about Paperclip links that I don't understand?
require 'net/http'
require 'base64'

def get_me(url)
  uri = URI(url)
  req = Net::HTTP::Get.new(uri)
  req['Any_header_you_might_need'] = 'idem'
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.request(req)
  end
end

base64 = Base64.encode64(get_me(s3_url).body).gsub("\n", '')
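If the aws-sdk-s3 gem is configured, another option (an assumption on my part, not part of the answer above) is to skip the URL round-trip entirely and read the object body straight into memory; with Paperclip's S3 storage, the attachment's .path is the object key:

require 'aws-sdk-s3'
require 'base64'

# 'my-bucket' and the region are placeholders for your own settings.
client = Aws::S3::Client.new(region: 'eu-west-1')
upload = Upload.find(upload_id)
object = client.get_object(bucket: 'my-bucket', key: upload.generic.path)
base64 = Base64.encode64(object.body.read).gsub("\n", '')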

Using an S3 presigned URL to upload a file that will then have public-read access

I am using Ruby on Rails and the AWS gem.
I can get pre-signed URLs for upload and download.
But when I generate the URL there is no file yet, so setting the ACL to 'public-read' on the download URL doesn't work.
The use case is this: 1) the server provides the user a path for uploading content to my bucket, which is not readable without credentials; 2) that content later needs to be public: readable by anyone.
To clarify:
I am not uploading the file; I am providing a URL for my users to upload with. At that time, I also want to give the user a URL that is readable by the public. It seems like it would be easier if I uploaded the file myself. Also, the read URL must never expire.
When you generate a pre-signed URL for a PUT object request, you can specify the key and the ACL the uploader must use. If I wanted the user to upload an object to my bucket with the key "files/hello.txt" and the file should be publicly readable, I can do the following:
s3 = Aws::S3::Resource.new
obj = s3.bucket('bucket-name').object('files/hello.txt')
put_url = obj.presigned_url(:put, acl: 'public-read', expires_in: 3600 * 24)
#=> "https://bucket-name.s3.amazonaws.com/files/hello.txt?X-Amz-..."
obj.public_url
#=> "https://bucket-name.s3.amazonaws.com/files/hello.txt"
I can give the put_url to someone else. This URL will allow them to PUT an object to the URL. It has the following conditions:
The PUT request must be made within the given expiration. In the example above I specified 24 hours. The :expires_in option may not exceed 1 week.
The PUT request must specify the HTTP header of 'x-amz-acl' with the value of 'public-read'.
Using the put_url, I can upload an object using Ruby's Net::HTTP:
require 'net/http'
uri = URI.parse(put_url)
request = Net::HTTP::Put.new(uri.request_uri, 'x-amz-acl' => 'public-read')
request.body = 'Hello World!'
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
resp = http.request(request)
Now that the object has been uploaded by someone else, I can make a vanilla GET request to the #public_url. This could be done by a browser, curl, wget, etc.
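For completeness, the same GET in Ruby:

require 'net/http'

# No signature or credentials needed once the object is public-read.
puts Net::HTTP.get(URI(obj.public_url))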
You have two options:
Set the ACL on the object to 'public-read' when you PUT the object. This allows you to use the public URL, without a signature, to GET the object.
Let the ACL on the object default to private and provide pre-signed GET URLs to users. These expire, so you have to generate new URLs as needed. A pre-signed URL allows someone without credentials to send a GET request for the object.
Upload a public object and generate a public URL:
require 'aws-sdk'

s3 = Aws::S3::Resource.new
obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/path/to/file', acl: 'public-read')
obj.public_url
#=> "https://bucket-name.s3.amazonaws.com/key"

Upload a private object and generate a GET URL that is good for one hour:
s3 = Aws::S3::Resource.new
obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/path/to/file')
obj.presigned_url(:get, expires_in: 3600)
#=> "https://bucket-name.s3.amazonaws.com/key?X-Amz-Algorithm=AWS4-HMAC-SHA256&..."

What does this error mean in Rails: "Errno::ENOENT in AssetsController#get ...No such file or directory...."

I'm writing an app to store files on Amazon S3. I'm pretty close: I'm able to save and retrieve files using a URL. However, since these links are public, I'm trying to use the following method in my AssetsController to retrieve stored files from S3.
As links these files can be viewed/accessed in the browser, but if I use this code:
# This action lets users download files (after a simple authorization check).
def get
  asset = current_user.assets.find_by_id(params[:id])
  if asset
    # Parse the URL for special characters first, then download.
    data = open("#{URI.parse(URI.encode(asset.uploaded_file.url))}")
    # Then use "send_data" to send the binary "data" as a file.
    send_data data, :filename => asset.uploaded_file_file_name
  else
    flash[:error] = "Access Violation"
    redirect_to assets_path
  end
end
I'm getting this error in my browser:
Errno::ENOENT in AssetsController#get
No such file or directory - http://s3.amazonaws.com/BUCKETNAME/assets/29/FILENAME.jpeg?1339979591
When I click on the resource on the S3 site while logged into the S3 management console, the file is shown in my browser, and its link is
https://s3.amazonaws.com/BUCKETNAME/assets/29/FILENAME.jpeg?AWSAccessKeyId=XXXXXXXXXXX&Expires=1340003832&Signature=XXXXXXXXXXXXXXXXXX&x-amz-security-token=XXXXXXXXX//////////XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
So it does exist, but it can't be accessed through my app.
Here is my Application Trace from my browser:
app/controllers/assets_controller.rb:99:in `initialize'
app/controllers/assets_controller.rb:99:in `open'
app/controllers/assets_controller.rb:99:in `get'
Any clues on what's going on?
Thanks
You can also just redirect the user to the file on S3.
Just try
redirect_to asset.uploaded_file.url
instead of reading the file and calling send_data. (Note that send_file, which you might reach for instead, expects a path to a local file, which is then served by the webserver.)
If you've set s3_permissions => :private, then you need to call
redirect_to asset.uploaded_file.expiring_url(10)
It's also interesting that your error message shows http while the working console link uses https - you can also try adding the following option to your model's has_attached_file:
:s3_protocol => 'https'
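Putting that together, the whole get action can shrink to a redirect; this is just a sketch of the advice above, using expiring_url for the private case:

def get
  asset = current_user.assets.find_by_id(params[:id])
  if asset
    # For private attachments (s3_permissions => :private), redirect to a
    # short-lived signed URL instead of proxying the bytes through Rails.
    redirect_to asset.uploaded_file.expiring_url(10)
  else
    flash[:error] = "Access Violation"
    redirect_to assets_path
  end
end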
Hope this helps.
