How to specify server-side S3 encryption via ActiveStorage? - ruby-on-rails

Through paperclip I was able to specify server side encryption for S3, and also specify a content type (for a wonky file) like this:
has_attached_file :attachment,
  s3_permissions: :private,
  s3_server_side_encryption: 'AES256',
  s3_headers: lambda { |attachment|
    {
      'Content-Type' => 'text/csv; charset=utf-16le'
    }
  }
Where would I specify the same when using has_one_attached in Active Storage?

As you can see in Active Storage's S3Service, upload options are passed transparently from the upload key to the Aws::S3::Object#put method. This is also true for Rails 5.2.
So you just need to specify the server_side_encryption key in your storage.yml this way:
amazon:
  service: S3
  bucket: mybucket
  # * other properties *
  upload:
    server_side_encryption: "AES256"
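For the content-type half of the question, Active Storage doesn't take a content type in storage.yml; you pass it per attachment when you attach the file. A minimal sketch (the Report model and file names here are hypothetical):

```ruby
class Report < ApplicationRecord
  has_one_attached :attachment
end

# Attach with an explicit content type, e.g. for a CSV needing a charset hint
report.attachment.attach(
  io: File.open("data.csv"),
  filename: "data.csv",
  content_type: "text/csv; charset=utf-16le"
)
```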

Related

Paperclip custom interpolation (aka custom path for AWS S3)

I have three questions related to paperclip and AWS S3.
1) In my model which has paperclip, I have following code:
has_attached_file :attachment,
  :url => "/songs/:user_id/:basename.:extension",
  :path => "/songs/:user_id/:basename.:extension"
What's the difference between URL and PATH?
2) What is :basename.:extension?
3) Let's say there are two models: User and File. User has many File. Paperclip path and url are configured in File model.
In config/initializers/paperclip.rb, I put below code:
Paperclip.interpolates :user_id do |attachment, style|
  attachment.instance.criteria.user_id
end
I confirm that above code is working fine. My file gets saved at songs/5/song.mp3. I would like to save the mp3 file at songs/user_id_5/song.mp3. I tried doing below but it doesn't work.
Paperclip.interpolates :user_id do |attachment, style|
  'user_id_' + attachment.instance.criteria.user_id
end
How do I make it work the way I want?
In S3 terms, the path is the key of your object and the url is your S3 endpoint.
From docs
url: There are four options for the S3 url. You can choose to have the bucket's name placed domain-style (bucket.s3.amazonaws.com) or path-style (s3.amazonaws.com/bucket). You can also specify a CNAME (which requires the CNAME to be specified as :s3_alias_url). You can read more about CNAMEs and S3 at docs.amazonwebservices.com/AmazonS3/latest/index.html?VirtualHosting.html. Normally, this won't matter in the slightest and you can leave the default (which is path-style, or :s3_path_url). But in some cases paths don't work and you need to use the domain-style (:s3_domain_url). Anything else here will be treated like path-style.
path: This is the key under the bucket in which the file will be stored. The URL will be constructed from the bucket and the path. This is what you will want to interpolate. Keys should be unique, like filenames, and despite the fact that S3 (strictly speaking) does not support directories, you can still use a / to separate parts of your file name.
You can configure the bucket and credentials in your config and just pass the path (i.e. where to store the file) when you call the method:
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket: 'mybucket',
    access_key_id: ENV.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY'),
    s3_region: 'aws_region_id',
  }
}
I don't know
You need string interpolation:
Paperclip.interpolates :user_id do |attachment, style|
  "user_id_#{attachment.instance.criteria.user_id}"
end
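The reason the + version fails when user_id is an Integer is that String#+ does not coerce its argument, while interpolation calls #to_s on the value. A quick illustration:

```ruby
user_id = 5

# String#+ refuses non-String operands, so concatenation raises:
begin
  'user_id_' + user_id
rescue TypeError => e
  puts e.class # TypeError
end

# Interpolation calls #to_s, so it works for integers too:
puts "user_id_#{user_id}" # user_id_5
```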

Rails paperclip S3 attachment not deleted (bad request)

I'm developing a rails API. I use the paperclip gem to store images in Amazon S3. I'm just using my own access key for the bucket, without any added policies. The attachments are correctly uploaded and stored in S3, but when I destroy a record, the attachments are not deleted. I also tried deleting the attachment alone, and that gave the following error:
[AWS S3 400 0.382023 0 retries] head_object(:bucket_name=>"my-bucket-name",:key=>"the/url/to/the/image.jpg") AWS::S3::Errors::BadRequest AWS::S3::Errors::BadRequest
In my model:
has_attached_file :main_image
validates_attachment :main_image, presence: true,
  content_type: { content_type: %w(image/jpeg image/png) },
  size: { in: 0..1.megabytes }
In my configuration:
# Paperclip config
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket: ENV.fetch('AWS_S3_BUCKET'),
    access_key_id: ENV.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY'),
    s3_region: ENV.fetch('AWS_S3_REGION'),
  }
}
The app is running on Heroku. Is this a permissions issue? Note that I'm using the aws-sdk gem version 1.66.
This is a permissions issue with AWS S3, since you are able to upload but not delete. Did you create an AWS IAM user to generate an access key and secret key? If so, can you paste your policy?
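For reference, a minimal IAM policy that permits both uploading and deleting objects might look like this (the bucket name is a placeholder; your actual policy may differ):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
```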

Ignore s3_host_alias in paperclip conditionally?

We use CloudFront for our images hosted on S3 through Paperclip; however, it does some aggressive caching, and part of our code needs fresh data (some image manipulation).
Is there any way of overriding s3_host_alias on calling the url?
Everything I've found so far regarding this topic talks about adding cloudfront, not about ignoring it; and even then, everything is systemwide.
Our paperclip config:
# Paperclip Config
Paperclip.options[:command_path] = "/usr/bin/"
config.paperclip_defaults = {
  storage: :s3,
  s3_protocol: :https,
  url: ':s3_alias_url',
  default_url: "https://#{SETTINGS['s3']['bucket']}.s3.amazonaws.com/missing/:class/:attachment/:style.png",
  s3_host_alias: SETTINGS['s3']['cdn_url'],
  s3_credentials: {
    bucket: SETTINGS['s3']['bucket'],
    access_key_id: SETTINGS['s3']['access_key_id'],
    secret_access_key: SETTINGS['s3']['secret_access_key']
  }
}
This might be dirty, but you can try this:
# Set the value in your controller
Photo.my_custom_attr = "My custom value"

# In your model
class Photo < ActiveRecord::Base
  cattr_accessor :my_custom_attr
  Paperclip.interpolates :my_custom_attr do |attachment, style|
    Photo.my_custom_attr
  end
end
I had this same need, so I did some digging through the Paperclip code. Here's what I found:
The S3 storage module still registers the interpolation you're looking for; it just isn't used when you call #url on the attachment, because you probably have something like url: ':s3_alias_url' in your call to has_attached_file. So what you can do is run the Paperclip interpolator manually by calling interpolate.
For example:
Paperclip::Interpolations.interpolate(':s3_path_url', User.first.avatar, :original)
You can also substitute the other interpolations the s3 module defines for ':s3_path_url', e.g. ':s3_domain_url'.

Convert and store to S3 with REST API / InkFilepicker

I have a Rails app on heroku. From the server side (using the REST API of InkFilepicker), I would like to convert a file, save it to my S3 bucket and store the S3 url to my model.
Concretely: Given an image (https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG) I want to convert it (https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG/convert?w=200&h=150&fit=clip) and store the converted image to my S3 bucket.
EDIT
Here is what I did at the end:
after_save :save_thumbnail_url_to_s3

def save_thumbnail_url_to_s3
  convert_options = {
    fit: 'clip',
    h: 500,
    w: 500
  }
  # open comes from open-uri (require 'open-uri') and returns a Tempfile
  file = open("#{self.url}/convert?#{convert_options.to_query}")

  # Write the file into the S3 bucket (aws-sdk v1 API)
  amazon = AWS::S3.new(access_key_id: ENV['AWS_ACCESS_KEY_ID'], secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'])
  bucket = amazon.buckets[ENV['AWS_BUCKET']]
  object = bucket.objects[s3_media_path]
  written_file = object.write(file, acl: :public_read) # or :authenticated_read
  self.update_column :thumbnail_url, written_file.public_url.to_s
end
If you are using the filepicker.io API, you can convert your file with the API and then use open-uri, as below, to create a file stream that can be sent to S3. The resulting Tempfile behaves like Ruby's File API:
[3] pry(main)> require 'open-uri'
=> true
[4] pry(main)> file = open("https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG/convert?...")
=> #
[5] pry(main)> file.class
=> Tempfile
You can simply use the aws-s3 gem: https://github.com/marcel/aws-s3
But be careful: Heroku's filesystem is ephemeral, so you will only be able to work with temp files.

How to allow users to upload to s3, yet not use own server resources

How is it possible to allow users to upload images on a website while the actual uploading is done entirely on Amazon's servers (so as not to burden your own servers with upload throughput)?
Can someone explain how this is performed?
That is, when a user wants to upload an image, instead of streaming the file to my server and then from my server to Amazon's S3 service, it bypasses my server altogether and goes straight to Amazon.
You can check out these docs provided by Amazon.
You can implement the process by using a SWF uploader, or this gem.
CarrierWave can be used with CarrierWaveDirect to upload images directly to S3. This will also allow you to process the image in a background job.
However, if you want to completely eliminate both the upload and processing burden from your dynos, check out Cloudinary which is unique in that it does all image processing on their servers as well as providing storage for them.
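Under the hood, the browser-to-S3 flow described above boils down to your server signing a POST policy that the browser then submits directly to S3 along with the file. A rough, stdlib-only sketch of the legacy (signature v2) signing step, with placeholder bucket and credentials (modern AWS SDKs generate this for you via presigned POSTs):

```ruby
require 'base64'
require 'json'
require 'openssl'
require 'time'

# Build and sign the POST policy the browser's upload form will carry.
# bucket, key_prefix, and secret_key are placeholders for illustration.
def s3_post_policy(bucket:, key_prefix:, secret_key:, expires_at:)
  policy = {
    'expiration' => expires_at.utc.iso8601,
    'conditions' => [
      { 'bucket' => bucket },
      ['starts-with', '$key', key_prefix]
    ]
  }
  encoded   = Base64.strict_encode64(JSON.generate(policy))
  signature = Base64.strict_encode64(OpenSSL::HMAC.digest('sha1', secret_key, encoded))
  { policy: encoded, signature: signature }
end

fields = s3_post_policy(
  bucket: 'my-bucket', key_prefix: 'uploads/',
  secret_key: 'SECRET', expires_at: Time.now + 3600
)
# fields[:policy] and fields[:signature] go into hidden inputs of the
# form whose action points at the bucket; S3 verifies them on receipt.
```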
If you're using Paperclip, can't you just do the following?
Create an s3.yml file in config:
development:
  bucket: bucket-dev
  access_key_id: xxx
  secret_access_key: xxx
test:
  bucket: bucket-test
  access_key_id: xxx
  secret_access_key: xxx
production:
  bucket: bucket-pro
  access_key_id: xxx
  secret_access_key: xxx
# Paperclip
has_attached_file :photo,
  :styles => {
    :thumb => "100x100#",
    :small => "400x400>" },
  :storage => :s3,
  :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
  :path => "/:style/:id/:filename"