Can I add Shrine upload credentials to the model - ruby-on-rails

I have a multi-tenant site built on Rails 5. Each tenant adds their own S3 credentials, so any uploads that happen on their tenant site go to their own S3 account.
The problem I have at the moment is that Shrine only seems to let me add S3 credentials in the initializer. This works great, but I would like to set them from the model so that I can dynamically populate the S3 credentials depending on which tenant is being used at the time. Does anyone know of a way Shrine can do this?
I managed to do this with Paperclip, but it came with other problems, such as background processing.

You could define all the storages in the initializer:
Shrine.storages = {
  first_storage: Shrine::Storage::S3.new(
    bucket: "my-first-bucket",  # required
    region: "eu-west-1",        # required
    access_key_id: "abc",
    secret_access_key: "xyz"
  ),
  second_storage: Shrine::Storage::S3.new(
    bucket: "my-second-bucket", # required
    region: "eu-east-1",        # required
    access_key_id: "efg",
    secret_access_key: "uvw"
  )
}
Note: this is not the complete storages setup; both the :cache and the :store storages should also be defined.
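To expand on that note, here is a hedged sketch of a fuller per-tenant setup, with the usual :cache/:store pair plus one extra store per tenant. The Tenant model, its credential columns, and the key naming are assumptions made for this sketch, not part of the answer itself:

# config/initializers/shrine.rb -- a sketch; the Tenant model and its
# credential columns are assumptions made for illustration.
require "shrine"
require "shrine/storage/s3"

default_s3 = { bucket: "default-bucket", region: "eu-west-1",
               access_key_id: "abc", secret_access_key: "xyz" }

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: "cache", **default_s3), # temporary storage
  store: Shrine::Storage::S3.new(**default_s3)                   # permanent storage
}

# One additional store per tenant, keyed by the tenant's slug.
Tenant.find_each do |tenant|
  Shrine.storages[:"#{tenant.slug}_store"] = Shrine::Storage::S3.new(
    bucket:            tenant.s3_bucket,
    region:            tenant.s3_region,
    access_key_id:     tenant.s3_access_key_id,
    secret_access_key: tenant.s3_secret_access_key
  )
end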
And then use them in the models:
class Photo
  include ImageUploader::Attachment(:image)
end

photo = Photo.new
photo.image_attacher.upload(io, :first_storage)
photo.image_attacher.upload(other_io, :second_storage)
See the Shrine attacher's documentation page and source code for more details.
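Building on that, here is a hedged sketch of how the storage key could be chosen per tenant at request time. The current_tenant helper and the :"<slug>_store" key naming follow the assumptions from the earlier sketch, and the attacher calls are per my reading of the attacher docs, so verify them there:

# Sketch: upload to the storage registered for the tenant handling the request.
photo = Photo.new
storage_key = :"#{current_tenant.slug}_store"                  # assumed key naming
uploaded_file = photo.image_attacher.upload(file, storage_key)
photo.image_attacher.set(uploaded_file)                        # attach the uploaded file
photo.save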

Related

Rails carrierwave link generated different from s3 storage link

I created a Rails API, but I have a problem with image upload.
I'm using CarrierWave; the picture upload works, but I get a wrong link.
Example:
This is the link I find in the RESTful API:
https://s3.eu-west-2.amazonaws.com/gpsql/uploads/driver/picture/35/imagename.png
But when I check S3 storage, I find a different link:
https://s3.eu-west-2.amazonaws.com/gpsql/gpsql/gpsql/uploads/driver/picture/35/imagename.png
This is the initializer for CarrierWave with S3:
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'  # required
  config.fog_credentials = {
    provider:              'AWS',  # required
    aws_access_key_id:     '...',  # required
    aws_secret_access_key: '...',  # required
    region:                'us-west-2',
    path_style:            true,
  }
  config.fog_directory  = 'gpsql'  # required
  config.asset_host     = 'https://s3.eu-west-2.amazonaws.com/gpsql'
  config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" } # optional, defaults to {}
end
In the picture uploader:
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
How can I fix the link that is shown in the RESTful API? Also, why does the bucket name appear so many times in the Amazon link, instead of something straightforward like link/bucketname/image.png?
The first link, the one from the RESTful API, doesn't work at all; I get access denied or key not found. The second one, from Amazon S3, works without any problem.
One of the problems is this:
config.asset_host = 'https://s3.eu-west-2.amazonaws.com/gpsql'
It should be:
config.asset_host = 'https://s3.eu-west-2.amazonaws.com'
Anyway, I don't know why it's repeated twice...
So, if you can, you should fix it in the configuration and move the folder in S3 to the proper place.
If you can't move it, I would try changing the store dir to "gpsql/gpsql/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}".
I'm not sure if that works, but that would be my first step.
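A hedged sketch of the corrected initializer, following the suggestion above. The credentials are placeholders, and the region is assumed to match the bucket's actual region:

CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    provider:              'AWS',
    aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    region:                'eu-west-2',  # assumed to match the bucket's region
    path_style:            true,
  }
  config.fog_directory = 'gpsql'                               # the bucket name lives here only
  config.asset_host    = 'https://s3.eu-west-2.amazonaws.com'  # no bucket segment here
end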

Paperclip custom interpolation (aka custom path for AWS S3)

I have three questions related to Paperclip and AWS S3.
1) In my model which uses Paperclip, I have the following code:
has_attached_file :attachment,
  :url  => "/songs/:user_id/:basename.:extension",
  :path => "/songs/:user_id/:basename.:extension"
What's the difference between URL and PATH?
2) What is :basename.:extension?
3) Let's say there are two models: User and File. User has many Files. The Paperclip path and url are configured in the File model.
In config/initializers/paperclip.rb, I put the code below:
Paperclip.interpolates :user_id do |attachment, style|
  attachment.instance.criteria.user_id
end
I confirm that the above code works fine; my file gets saved at songs/5/song.mp3. I would like to save the mp3 file at songs/user_id_5/song.mp3 instead. I tried the code below, but it doesn't work.
Paperclip.interpolates :user_id do |attachment, style|
  'user_id_' + attachment.instance.criteria.user_id
end
How do I make it work the way I want?
In S3 terms, the path is the key of your item and the url is your S3 endpoint.
From the docs:
url: There are four options for the S3 url. You can choose to have the bucket's name placed domain-style (bucket.s3.amazonaws.com) or path-style (s3.amazonaws.com/bucket). You can also specify a CNAME (which requires the CNAME to be specified as :s3_alias_url). You can read more about CNAMEs and S3 at docs.amazonwebservices.com/AmazonS3/latest/index.html?VirtualHosting.html. Normally, this won't matter in the slightest and you can leave the default (which is path-style, or :s3_path_url). But in some cases paths don't work and you need to use the domain-style (:s3_domain_url). Anything else here will be treated like path-style.
path: This is the key under the bucket in which the file will be stored. The URL will be constructed from the bucket and the path. This is what you will want to interpolate. Keys should be unique, like filenames, and despite the fact that S3 (strictly speaking) does not support directories, you can still use a / to separate parts of your file name.
You can configure the bucket and credentials in your config and just pass the path (i.e. where to store the file) when you call the method:
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket:            'mybucket',
    access_key_id:     ENV.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY'),
    s3_region:         'aws_region_id',
  }
}
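With those defaults in place, the model might then only need the path; a hedged sketch, reusing the interpolation from the question:

# Sketch: with bucket/credentials set in paperclip_defaults, the model
# only specifies where the file should live inside the bucket.
has_attached_file :attachment,
  :path => "/songs/:user_id/:basename.:extension"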
As for question 2, I don't know.
For question 3, you need string interpolation:
Paperclip.interpolates :user_id do |attachment, style|
  "user_id_#{attachment.instance.criteria.user_id}"
end
Your original version fails because 'user_id_' + ... tries to concatenate a String with an Integer, whereas "#{}" converts the id to a string.

Rails paperclip S3 attachment not deleted (bad request)

I'm developing a Rails API. I use the Paperclip gem to store images in Amazon S3. I'm just using my own access key for the bucket, without any added policies. The attachments are correctly uploaded and stored in S3, but when I destroy a record, the attachments are not deleted. I also tried deleting the attachment alone, and that gave the following error:
[AWS S3 400 0.382023 0 retries] head_object(:bucket_name=>"my-bucket-name",:key=>"the/url/to/the/image.jpg") AWS::S3::Errors::BadRequest AWS::S3::Errors::BadRequest
In my model:
has_attached_file :main_image
validates_attachment :main_image, presence: true,
  content_type: { content_type: %w(image/jpeg image/png) },
  size: { in: 0..1.megabytes }
In my configuration:
# Paperclip config
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket:            ENV.fetch('AWS_S3_BUCKET'),
    access_key_id:     ENV.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY'),
    s3_region:         ENV.fetch('AWS_S3_REGION'),
  }
}
The app is running on Heroku. Is this a permissions issue? Note that I'm using the aws-sdk gem version 1.66.
This is a permissions issue with AWS S3, since you are able to upload but not delete. Did you create an AWS IAM user to generate an Access Key and Secret Key? If so, can you paste your policy?
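For reference, a hedged sketch of an IAM policy that allows uploading, reading, and deleting objects in the bucket (the bucket name is a placeholder; adjust to your setup):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket-name"
    }
  ]
}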

Ignore s3_host_alias in paperclip conditionally?

We use CloudFront for our images hosted on S3 through Paperclip; however, it has some aggressive caching, and part of our code needs fresh data (some image manipulation).
Is there any way of overriding s3_host_alias when calling the url?
Everything I've found so far on this topic talks about adding CloudFront, not about ignoring it; and even then, everything is system-wide.
Our paperclip config:
# Paperclip Config
Paperclip.options[:command_path] = "/usr/bin/"
config.paperclip_defaults = {
  storage:       :s3,
  s3_protocol:   :https,
  url:           ':s3_alias_url',
  default_url:   "https://#{SETTINGS['s3']['bucket']}.s3.amazonaws.com/missing/:class/:attachment/:style.png",
  s3_host_alias: SETTINGS['s3']['cdn_url'],
  s3_credentials: {
    bucket:            SETTINGS['s3']['bucket'],
    access_key_id:     SETTINGS['s3']['access_key_id'],
    secret_access_key: SETTINGS['s3']['secret_access_key']
  }
}
This might be dirty but you can try this.
# Set the value in your controller
Photo.my_custom_attr = "My custom value"

# In your model
class Photo < ActiveRecord::Base
  cattr_accessor :my_custom_attr

  Paperclip.interpolates :my_custom_attr do |attachment, style|
    Photo.my_custom_attr
  end
end
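A hedged sketch of how that interpolation might then be referenced, for example in the attachment's url option so the host can be swapped per request (the attribute name and URL layout here are made up for illustration):

# Sketch: the controller-set value is substituted into the generated URL
# via the :my_custom_attr interpolation registered above.
has_attached_file :image,
  url: ":my_custom_attr/:class/:attachment/:id/:style/:filename"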
I had this same need so I did some digging through the paperclip code and here's what I found:
The S3 storage module still registers the interpolation you're looking for; it just isn't used when you call #url on the attachment, since you probably have something like url: ':s3_alias_url' set in your call to has_attached_file (or, as in your config, in paperclip_defaults). So, what you can do is use the Paperclip interpolator manually by calling interpolate.
For example:
Paperclip::Interpolations.interpolate(':s3_path_url', User.first.avatar, :original)
You can also substitute the other interpolations the s3 module defines for ':s3_path_url', e.g. ':s3_domain_url'.
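If this is needed in more than one place, a small helper could wrap that call; a hedged sketch (the method name is made up for illustration):

# Sketch: return a plain S3 URL for an attachment, bypassing the
# CloudFront alias that #url would otherwise use via :s3_alias_url.
def fresh_s3_url(attachment, style = :original)
  Paperclip::Interpolations.interpolate(':s3_path_url', attachment, style)
end

fresh_s3_url(User.first.avatar)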

S3 link expiry with aws-sdk

I have been using the aws-sdk gem for uploading files with Rails, and I can get the generated link, but the link expires after one hour (I think that's the default). I need this link to be public, so is there any way to prevent the link from expiring? This is what I tried:
AWS.config(:access_key_id => 'XXXXXXXXXX',
           :secret_access_key => 'XXXXXXX')

s3 = AWS::S3.new
my_bucket = s3.buckets['xxx/xxxx/xxxx']
object = my_bucket.objects[filename]
puts object.url_for(:read).to_s
Set your file's access permission to public-read:
s3 = Aws::S3::Resource.new(
  credentials: Aws::Credentials.new('akid', 'secret'),
  region: 'us-west-1'
)

obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/source/file/path', acl: 'public-read')
obj.public_url
This link will help you
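Since the question uses the older aws-sdk v1 API, here is a hedged sketch of the equivalent approach there, per my recollection of the v1 interface (verify against its docs):

# Sketch for aws-sdk v1: upload with a public-read ACL and use the
# permanent public URL instead of a presigned, expiring one.
object = my_bucket.objects[filename]
object.write(file_data, :acl => :public_read)
puts object.public_url.to_s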
