Rails paperclip S3 attachment not deleted (bad request) - ruby-on-rails

I'm developing a rails API. I use the paperclip gem to store images in Amazon S3. I'm just using my own access key for the bucket, without any added policies. The attachments are correctly uploaded and stored in S3, but when I destroy a record, the attachments are not deleted. I also tried deleting the attachment alone, and that gave the following error:
[AWS S3 400 0.382023 0 retries] head_object(:bucket_name=>"my-bucket-name",:key=>"the/url/to/the/image.jpg") AWS::S3::Errors::BadRequest AWS::S3::Errors::BadRequest
In my model:
has_attached_file :main_image
validates_attachment :main_image, presence: true,
  content_type: { content_type: %w(image/jpeg image/png) },
  size: { in: 0..1.megabytes }
In my configuration:
# Paperclip config
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket: ENV.fetch('AWS_S3_BUCKET'),
    access_key_id: ENV.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY'),
    s3_region: ENV.fetch('AWS_S3_REGION'),
  }
}
The app is running on Heroku. Is this a permissions issue? Note that I'm using the aws-sdk gem version 1.66.

This is a permissions issue with AWS S3, since you are able to upload but not delete. Did you create an AWS IAM user to generate an access key and secret key? If so, can you paste your policy?
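For reference, a minimal sketch of an IAM policy that allows upload, read, and delete on a single bucket (the bucket name is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}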

Related

Can I add Shrine upload credentials to the model

I have a multi-tenant site built on Rails 5; each of the tenants adds their own S3 credentials, so any uploads that happen on their tenant site get uploaded to their own S3 account.
The problem I have at the moment is that Shrine seems to only let me add S3 credentials in the initializer. This works great, but I would like to add it to the model so that I can dynamically populate the S3 credentials depending on which tenant is being used at the time. Does anyone know any way Shrine can help me?
I managed to do this with Paperclip, but it came with other problems, such as background processing.
You could define all the storages in the initializer:
Shrine.storages = {
  first_storage: Shrine::Storage::S3.new(
    bucket: "my-first-bucket",   # required
    region: "eu-west-1",         # required
    access_key_id: "abc",
    secret_access_key: "xyz"
  ),
  second_storage: Shrine::Storage::S3.new(
    bucket: "my-second-bucket",  # required
    region: "eu-east-1",         # required
    access_key_id: "efg",
    secret_access_key: "uvw"
  )
}
Note: this is not all of the storage configuration; both the :cache and the :store storages should also be defined.
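For example, they could be registered alongside the tenant storages (a sketch; the prefix option just keeps cached uploads under a separate key prefix):
Shrine.storages[:cache] = Shrine::Storage::S3.new(
  prefix: "cache",  # temporary storage for uploads awaiting promotion
  bucket: "my-first-bucket",
  region: "eu-west-1",
  access_key_id: "abc",
  secret_access_key: "xyz"
)
Shrine.storages[:store] = Shrine.storages[:first_storage]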
And then use them in the models:
class Photo
  include ImageUploader::Attachment(:image)
end

photo = Photo.new
photo.image_attacher.upload(io, :first_storage)
photo.image_attacher.upload(other_io, :second_storage)
See the Shrine attacher's doc page and source code.
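Building on that, per-tenant routing might look like this (the tenant association and its storage_key attribute are assumptions for illustration, not part of Shrine):
class Photo
  include ImageUploader::Attachment(:image)

  belongs_to :tenant

  # Upload to whichever storage was registered for this tenant,
  # e.g. :first_storage or :second_storage.
  def upload_for_tenant(io)
    image_attacher.upload(io, tenant.storage_key.to_sym)
  end
end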

How to specify server-side S3 encryption via ActiveStorage?

Through Paperclip I was able to specify server-side encryption for S3, and also specify a content type (for a wonky file), like this:
has_attached_file :attachment,
  s3_permissions: :private,
  s3_server_side_encryption: 'AES256',
  s3_headers: lambda { |attachment|
    {
      'Content-Type' => 'text/csv; charset=utf-16le'
    }
  }
Where would I specify something similar when using has_one_attached in ActiveStorage?
As you can see in Active Storage's S3Service, upload options are passed transparently from the upload key to the Aws::S3::Object#put method. This also holds for Rails 5.2.
So you just need to specify the server_side_encryption key in your storage.yml this way:
amazon:
  service: S3
  bucket: mybucket
  # ...other properties...
  upload:
    server_side_encryption: "AES256"
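For the content-type half of the question, a content type can also be passed when attaching a file with Active Storage (a sketch; the record, attachment name, and file are placeholders):
record.attachment.attach(
  io: File.open("export.csv"),
  filename: "export.csv",
  content_type: "text/csv; charset=utf-16le"
)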

Paperclip: Choose between saving files local or S3 at runtime

I'm using Paperclip to save files, and I have successfully configured it to save files directly to Amazon S3. But in some situations I need files to be saved only locally. My question is: how can I do this?
Here is my example Paperclip configuration:
Paperclip.interpolates(:upload_url) { |attachment, style| "#{ENV.fetch('UPLOAD_PROTOCOL', 'http')}://#{ENV.fetch('UPLOAD_DOMAIN', 'localhost:3000')}/uploads/:class/:attachment/:id_:style.:extension" }

Paperclip::Attachment.default_options.merge!(
  storage: :s3,
  s3_region: ENV['CEPH_REGION'],
  s3_protocol: 'http',
  s3_host_name: ENV['CEPH_HOST_NAME'],
  s3_credentials: {
    access_key_id: ENV['CEPH_ACCESS_KEY_ID'],
    secret_access_key: ENV['CEPH_SECRET_KEY'],
  },
  s3_options: {
    endpoint: ENV['CEPH_END_POINT'],
    force_path_style: true
  },
  s3_permissions: 'public-read',
  bucket: ENV['CEPH_BUCKET'],
  url: ':s3_path_url',
  path: ':class/:id/:basename.:extension',
  use_timestamp: false
)
module Paperclip
  # Wraps a raw string in a StringIO that responds to
  # original_filename, so Paperclip will accept it as an upload.
  def self.string_to_io(options)
    data = StringIO.new(options[:data])
    data.class.class_eval { attr_accessor :original_filename }
    data.original_filename = options[:original_file_name]
    data
  end
end
You could use a lambda, as shown in Paperclip's dynamic configuration docs: https://github.com/thoughtbot/paperclip#dynamic-configuration
It would look something like this:
class YourModel < ActiveRecord::Base
  has_attached_file :file,
    storage: lambda { |attachment| attachment.instance.use_s3? ? :s3 : :filesystem }
end
Then you would have to add a use_s3? method to your model to decide whether the file should be stored locally or on S3.
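A minimal sketch of such a predicate (the published? flag is an assumption for illustration):
class YourModel < ActiveRecord::Base
  has_attached_file :file,
    storage: lambda { |attachment| attachment.instance.use_s3? ? :s3 : :filesystem }

  # Hypothetical rule: keep drafts on local disk and push
  # published records' files to S3.
  def use_s3?
    published?
  end
end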

How to allow users to upload to s3, yet not use own server resources

How is it possible to allow users to upload images on a website while the actual upload is handled entirely by Amazon's servers (so as not to burden your own servers with upload throughput)?
Can someone explain how this is performed?
i.e. when a user uploads an image, instead of streaming the file to my server and then from my server to Amazon's S3 service, it bypasses my server altogether and goes straight to Amazon.
You can check out these docs provided by Amazon.
You can implement the process by using a SWF uploader, or this gem.
CarrierWave can be used with CarrierWaveDirect to upload images directly to S3. This will also allow you to process the image in a background job.
However, if you want to completely eliminate both the upload and processing burden from your dynos, check out Cloudinary which is unique in that it does all image processing on their servers as well as providing storage for them.
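For the direct-to-S3 flow itself, here is a minimal sketch using the aws-sdk-s3 gem (v3) to mint a presigned URL the browser can PUT the file to (bucket, region, and key are placeholders):
require "aws-sdk-s3"

# Generate a short-lived upload URL; the client sends the file
# straight to S3 without it ever passing through your server.
object = Aws::S3::Resource.new(region: "us-east-1")
                          .bucket("my-bucket")
                          .object("uploads/image.jpg")
upload_url = object.presigned_url(:put, expires_in: 3600)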
If you're using Paperclip, can't you just do the following?
Create an s3.yml file in config:
development:
  bucket: bucket-dev
  access_key_id: xxx
  secret_access_key: xxx

test:
  bucket: bucket-test
  access_key_id: xxx
  secret_access_key: xxx

production:
  bucket: bucket-pro
  access_key_id: xxx
  secret_access_key: xxx
# Paperclip
has_attached_file :photo,
  :styles => {
    :thumb => "100x100#",
    :small => "400x400>"
  },
  :storage => :s3,
  :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
  :path => "/:style/:id/:filename"

Paperclip Fail with Heroku

What are some possible reasons that image upload with Paperclip works on my local machine but not when deployed to Heroku?
When it's deployed to Heroku, the image won't save.
As far as I know you can't write persistently to Heroku's filesystem (dyno filesystems are ephemeral), so I am assuming that is your problem. It makes sense to use something like Amazon S3 for image storage. Take a look at this: Amazon S3 in Heroku
Once you have configured your S3, you want to change Paperclip's has_attached_file to something like this:
has_attached_file :my_picture,
  :styles => { :medium => "275x275>" },
  :storage => :s3,
  :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
  :path => "user/:attachment/:style/:id.:extension"
Where s3.yml would be the configuration file where you define access keys, buckets...
It should look something like this:
production:
  access_key_id: [Your Key]
  secret_access_key: [Your Secret]
  bucket: [Your bucket name]
Here's another guide/article, written by one of Paperclip's developers, that explains in detail how to integrate Paperclip with Heroku and S3.
