Change permissions for files in S3 using Dragonfly - ruby-on-rails

In my Rails project I use Dragonfly to upload files and store them in S3.
Initially I pass {'x-amz-acl' => 'private'} for uploaded files and use private URLs with an expiration date.
Is there an easy way to change the ACL to 'public-read' after a file has been uploaded to S3?

I use the aws/s3 gem. Handling permissions can be done with something like this:
S3Object.store(
  'kiss.jpg',
  data,
  'marcel',
  :access => :public_read
)
In your case, you would use S3Object.find and then change the policy. The gem is documented here.
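The code above uses the legacy aws/s3 gem. For reference, with the modern aws-sdk-s3 gem the same after-the-fact ACL change would look roughly like this (bucket and key names are made up, and the call needs valid AWS credentials to run):

```ruby
require 'aws-sdk-s3'  # gem "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: 'us-east-1')

# Flip an already-uploaded object from private to public-read
# without re-uploading its data:
s3.put_object_acl(bucket: 'marcel', key: 'kiss.jpg', acl: 'public-read')
```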

Related

Active Storage Uploads to S3 with encryption. How to remove encryption and save file as is in storage

When I upload a file with a multipart API request to the Rails server, the server uploads the file to S3. But when I go to S3, the file is encrypted, which I don't want.
Also, when I get the URL in jbuilder with
json.url Rails.application.routes.url_helpers.rails_blob_url(document.doc)
the URL I get is a Rails path that redirects to the S3 image with a key to show the image.
It would be better if I could put the S3 link directly in the file URL.
Active Storage doesn't store a file under the provided filename; instead it uses Rails' has_secure_token key.
So even if you look at a file stored locally (if you're using the default local service), it will be saved under a random alphanumeric string.
If you want to explicitly upload with AWS server-side encryption, you might update the service config like this:
amazon:
  service: S3
  bucket: somebucketname
  upload:
    server_side_encryption: AES-256
Another possibility is to implement a custom ActiveStorage::Service and see if that works. If somebody does it, please share! :D
Or else just go with CarrierWave!
Ref -
ActiveStorage::Blob
ActiveStorage::Service
If you are using Rails Active Storage and don't want files encrypted by default, change storage.yml so no server_side_encryption is sent. Note that an unquoted nil in YAML is parsed as the string "nil", not Ruby's nil, so it is safest to omit the upload options entirely:
amazon:
  service: S3
  bucket: somebucketname
Or, if you are using the aws-sdk-s3 gem, here is the link to the official guide for aws-sdk-s3.
You can also change the encryption of existing files from AWS itself. Here is the documentation.
For the URL part you can call object.images[0].service_url. This will return something like https://yourbucketname.s3.eu-central-1.amazonaws.com/yr2r6bo1ai1g9yavg7p9480ptdy4?response-content-disposition=inline%3B%20filename%3D%22images-museum_5_3.png%22%3B%20filename%2A%3DUTF-8%27%27images-museum_5_3.png&response-content-type=image%2Fjpeg&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJJWMS7WXXZREOOCQ%2F20200413%2Feu-central-1%2Fs3%2Faws4_request&X-Amz-Date=20200413T123825Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=14bf4d26d9b3f3c98c29d84c647fc6fa88d903e157a0d41e0ad209937ffaf92b
You can store the first part as the object URL; it is the part of the URL that does not change.
That part is https://yourbucketname.s3.eu-central-1.amazonaws.com/yr2r6bo1ai1g9yavg7p9480ptdy4
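Stripping the query string (the signed, expiring part) with Ruby's stdlib URI class yields that stable object URL; a minimal sketch using a shortened version of the example URL above:

```ruby
require 'uri'

signed = "https://yourbucketname.s3.eu-central-1.amazonaws.com/yr2r6bo1ai1g9yavg7p9480ptdy4" \
         "?response-content-type=image%2Fjpeg&X-Amz-Expires=604800"

uri = URI.parse(signed)
# Keep only scheme, host, and path; drop the signed query string.
stable_url = "#{uri.scheme}://#{uri.host}#{uri.path}"
# → "https://yourbucketname.s3.eu-central-1.amazonaws.com/yr2r6bo1ai1g9yavg7p9480ptdy4"
```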

Can I use zipline gem to download from s3 without model associations with paperclip or carrierwave

I want to allow my user to download a bundle of files that are stored on s3 using the zipline gem. The files are already hosted on an s3 server but they aren't there as part of a paperclip or carrierwave attachment in my app. Will I need to create some records in my database to sort of trick zipline into thinking they are paperclip attachments, or is there a way I can send the zip file without bothering with an attachment gem? At the moment, trying to download the files with zipline doesn't throw an error message at all. It just seems to skip right over and nothing downloads.
See the part of the zipline README where an enumerator is used to include remote files in the ZIP. It uses absolute URLs; to generate those from your S3 objects you will need presigned URLs (which zipline passes on to Curb):
Aws::S3::Bucket.new(your_bucket_name).object(your_key).presigned_url(:get)
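Put together, a controller along these lines should work. This is a sketch, not a drop-in solution: the bucket name, keys, and controller name are assumptions, it requires the aws-sdk-s3 and zipline gems, and it only runs with valid AWS credentials.

```ruby
require 'aws-sdk-s3'  # gem "aws-sdk-s3"

class BundlesController < ApplicationController
  include Zipline  # provides the #zipline streaming helper

  def show
    bucket = Aws::S3::Bucket.new('your_bucket_name')
    keys   = ['reports/a.pdf', 'reports/b.pdf']  # hypothetical S3 keys

    # zipline accepts [url_or_io, filename] pairs; presigned URLs let it
    # stream private objects without any attachment-gem records.
    files = keys.map do |key|
      [bucket.object(key).presigned_url(:get), File.basename(key)]
    end
    zipline(files, 'bundle.zip')
  end
end
```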

Rails + Paperclips + Rackspace CloudFiles with the Private CDN

I have a Rails application that uses Paperclip to handle uploaded files and we are currently hosted by Rackspace.
The application is currently hosted on a single server and I am building out a more scalable solution with load balancers, application servers, and a separate database server. The last thing I need is a solution for the uploaded assets. I have tried to use Rackspace's Cloud Files, but it seems the only way to use Paperclip with Cloud Files is to put the files on the public CDN, which I can't use: a user needs to be authenticated to access the files. Before I turn to Amazon S3, since they have the option for temporary URLs, does anyone know how to use Cloud Files with Paperclip and require authentication to access the files?
Any help, tips, google searches, links, or solutions would be greatly appreciated.
As it happens, Cloud Files also supports the generation of temporary URLs, and it appears that Paperclip does allow you to make use of it. Just generate the URL from your Attachment with #expiring_url instead of #url in your views:
= image_tag #organization.logo.expiring_url(Time.now.to_i + 100, :original).gsub(/^http:/, "https")
Paperclip will only generate http URLs, but since Rackspace's temporary URLs don't include the scheme in their checksums, you can use a gsub call to turn the result into an https URL. Also, note that the first argument to #expiring_url is an absolute timestamp (in seconds since the epoch).
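The two details above (the absolute expiry timestamp and the http-to-https swap) can be seen in isolation with plain Ruby; the temporary URL here is made up for illustration:

```ruby
# Absolute expiry: seconds since the epoch, one hour from now.
expires_at = Time.now.to_i + 3600

# Hypothetical temporary URL of the shape Paperclip/fog would return:
http_url = "http://storage101.example.clouddrive.com/v1/container/logo.png" \
           "?temp_url_sig=abc123&temp_url_expires=#{expires_at}"

# The signature does not cover the scheme, so upgrading it is safe:
https_url = http_url.gsub(/^http:/, "https:")
```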
Expiring URLs for Rackspace only made it into fog somewhat recently -- v1.18.0 -- so if you're using an older version, you may need to upgrade fog to take advantage of them:
bundle update fog
Paperclip also supports generating obfuscated URLs, which looks interesting, but would be less secure, since the server wouldn't expire it.
You can add the key like this:
class Rackspace
  def self.add_temp_url_key
    require 'fog'
    puts "Creating Storage Service"
    begin
      service = Fog::Storage.new(
        :provider => 'rackspace',
        :rackspace_username => ENV['FOG_USERNAME'],
        :rackspace_api_key => ENV['FOG_API_KEY'],
        :rackspace_region => ENV['RACKSPACE_REGION'].to_sym
      )
      service.post_set_meta_temp_url_key(ENV['RACKSPACE_TEMP_URL_KEY'])
      puts "X-Account-Meta-Temp-Url-Key successfully set to #{ENV['RACKSPACE_TEMP_URL_KEY']}"
    rescue => e
      puts "Unable to set X-Account-Meta-Temp-Url-Key - #{e.inspect}"
      puts e.backtrace
    end
  end
end

How to store file in amazon which expire in one hour using carrierwave?

I recently implemented file uploading using CarrierWave with Amazon S3 storage.
I want the generated S3 URL to be valid for only one hour; after that the link should expire.
How can I do this using CarrierWave?
The way to handle this is to use a presigned URL for the S3 file. Once you upload the file using CarrierWave, you access the actual S3 URL and use AWS::S3 to presign it with an expiration time. For example, if the key (file name) in your S3 bucket is "my_file", you could do this:
# Your Model
def presigned_url
  s3 = AWS::S3.new
  bucket = s3.buckets["MyBucket"]
  object = bucket.objects["my_file"]
  object.url_for(:read, secure: true, expires: 1.hour)
end
The URL returned will be valid for 1 hour and then will never work again.
To use this, you will need the aws-sdk gem (version 1, which provides the AWS::S3 and buckets API shown above; the similarly named aws-s3 gem has a different interface) in your Gemfile:
# Gemfile
gem "aws-sdk"

CKEditor gem with Paperclip and Amazon S3

I'm using CKEditor and I've configured it to work with Paperclip but I can't tell it to store files in S3, so it's storing them using Paperclip but on the local filesystem.
So I was wondering if there is some way to tell Paperclip to explicitly use S3 every time it's used.
I know how I can configure Paperclip with S3 on certain models (pretty easy, described on the paperclip github wiki). I'm deploying on Heroku that's why I can't write to the local filesystem.
One way is to see what the ckeditor install generator does.
For example, if you're using ActiveRecord as the ORM, take a look at the templates used for the models that use Paperclip here.
The generator copies these templates into your app/models/ckeditor folder. You can edit them and configure them as needed for Paperclip to use S3.
For ActiveRecord, the models are:
/app/models/ckeditor/attachment_file.rb
/app/models/ckeditor/picture.rb
Keep in mind that this approach could give you extra work in the future if the ckeditor gem is updated and the update process needs to overwrite these models.
Otherwise, you can use Paperclip's default options. In your Paperclip initializer (/config/initializers/paperclip.rb) use:
Paperclip::Attachment.default_options.merge!(
  # your S3 options here
)
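A sketch of what those defaults might look like, using Paperclip's standard S3 options; the bucket name and environment variable names are assumptions, and s3_region requires Paperclip 5+ with aws-sdk v2:

```ruby
# config/initializers/paperclip.rb
Paperclip::Attachment.default_options.merge!(
  storage: :s3,
  s3_region: ENV['AWS_REGION'],
  s3_credentials: {
    access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
  },
  bucket: ENV['S3_BUCKET']
)
```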
For CarrierWave, you can generate the uploader and configure S3 (or whatever you want) there.
class CkeditorAttachmentFileUploader < CarrierWave::Uploader::Base
  include Ckeditor::Backend::CarrierWave

  # Choose what kind of storage to use for this uploader:
  if Rails.env.production?
    storage :fog
  else
    storage :file
  end

  # ...
end
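For storage :fog to reach S3, CarrierWave also needs fog credentials in an initializer. A sketch, assuming the fog-aws gem and credentials in environment variables (names are illustrative):

```ruby
# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_provider    = 'fog/aws'  # gem "fog-aws"
  config.fog_credentials = {
    provider:              'AWS',
    aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    region:                ENV['AWS_REGION']
  }
  config.fog_directory = ENV['S3_BUCKET']  # bucket name
end
```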
It's pretty straightforward. You can use this post to get started. Alternatively, you can look at this similar question for further details.
