S3 link expires with aws-sdk - ruby-on-rails

I have been using the gem 'aws-sdk' to upload files with Rails. I can get the generated link, but it expires after one hour (I think that's the default). I need the link to be public, so is there any way to prevent it from expiring? I tried this:
AWS.config(:access_key_id => 'XXXXXXXXXX',
           :secret_access_key => 'XXXXXXX')
s3 = AWS::S3.new
my_bucket = s3.buckets['xxx/xxxx/xxxx']
object = my_bucket.objects[filename]
puts object.url_for(:read).to_s

Set your file's access permission to public-read when uploading:
s3 = Aws::S3::Resource.new(
  credentials: Aws::Credentials.new('akid', 'secret'),
  region: 'us-west-1'
)
obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/source/file/path', acl: 'public-read')
obj.public_url
This link will help you
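If the object needs to stay private instead, another option (not from the answer above) is to lengthen the presigned URL's lifetime. A minimal sketch against the v1 AWS::S3 API from the question, assuming url_for accepts an :expires option in seconds; note that presigned URLs still cannot be made permanent, which is why public-read is the usual answer:

# Hedged sketch: longer-lived presigned URL with the v1 API from the question.
# :expires is assumed to take a number of seconds from now.
object = my_bucket.objects[filename]
url = object.url_for(:read, :expires => 7 * 24 * 3600) # roughly one week
puts url.to_s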

Related

Can I add Shrine upload credentials to the model

I have a multi-tenant site built on Rails 5. Each tenant adds their own S3 credentials, so any uploads that happen on their tenant site go to their own S3 account.
The problem at the moment is that Shrine seems to only let me set S3 credentials in the initializer. This works, but I would like to set them on the model so that I can populate the S3 credentials dynamically depending on which tenant is being used at the time. Does anyone know a way Shrine can do this?
I managed to do this with Paperclip, but it came with other problems such as background processing.
You could define all the storages in the initializer:
Shrine.storages = {
  first_storage: Shrine::Storage::S3.new(
    bucket: "my-first-bucket",  # required
    region: "eu-west-1",        # required
    access_key_id: "abc",
    secret_access_key: "xyz"
  ),
  second_storage: Shrine::Storage::S3.new(
    bucket: "my-second-bucket", # required
    region: "eu-east-1",        # required
    access_key_id: "efg",
    secret_access_key: "uvw"
  )
}
Note: this is not the complete storages configuration - both the :cache and the :store storages should also be defined (see the sketch below).
And then use them in the models:
class Photo
  include ImageUploader::Attachment(:image)
end

photo = Photo.new
photo.image_attacher.upload(io, :first_storage)
photo.image_attacher.upload(other_io, :second_storage)
See Shrine attacher's doc page and source code
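For completeness, since the note above says :cache and :store must also be present in Shrine.storages, here is a rough sketch of how they could sit alongside the per-tenant storages (bucket names, regions and credentials here are made up):

Shrine.storages = {
  # required by Shrine itself
  cache: Shrine::Storage::S3.new(prefix: "cache", bucket: "shared-cache-bucket",
                                 region: "eu-west-1",
                                 access_key_id: "abc", secret_access_key: "xyz"),
  store: Shrine::Storage::S3.new(bucket: "shared-store-bucket",
                                 region: "eu-west-1",
                                 access_key_id: "abc", secret_access_key: "xyz"),
  # one storage per tenant, as in the answer above
  first_storage: Shrine::Storage::S3.new(bucket: "my-first-bucket",
                                         region: "eu-west-1",
                                         access_key_id: "abc", secret_access_key: "xyz"),
  second_storage: Shrine::Storage::S3.new(bucket: "my-second-bucket",
                                          region: "eu-east-1",
                                          access_key_id: "efg", secret_access_key: "uvw")
}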

Rails carrierwave link generated different from s3 storage link

I created a Rails API but I have a problem with image upload.
I'm using CarrierWave; the picture upload works, but I get a wrong link.
Example:
This is the link I find in the RESTful API:
https://s3.eu-west-2.amazonaws.com/gpsql/uploads/driver/picture/35/imagename.png
But when I check the S3 storage I find a different link:
https://s3.eu-west-2.amazonaws.com/gpsql/gpsql/gpsql/uploads/driver/picture/35/imagename.png
This is the CarrierWave initializer for S3:
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws' # required
  config.fog_credentials = {
    provider:              'AWS', # required
    aws_access_key_id:     '...', # required
    aws_secret_access_key: '...', # required
    region:                'us-west-2',
    path_style:            true
  }
  config.fog_directory  = 'gpsql' # required
  config.asset_host     = 'https://s3.eu-west-2.amazonaws.com/gpsql'
  config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" } # optional, defaults to {}
end
In the picture uploader:
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
How can I fix the link that is shown in the RESTful API? And why does the bucket name appear so many times in the Amazon link, instead of something straightforward like link/bucketname/image.png?
The first link, from the RESTful API, doesn't work at all (I get access denied or key not found); the second one, from Amazon S3, works without any problem.
One of the problems is this:
config.asset_host = 'https://s3.eu-west-2.amazonaws.com/gpsql'
it should be
config.asset_host = 'https://s3.eu-west-2.amazonaws.com'
Anyway, I don't know why it's repeated twice...
So, if you can, you should fix it in the configuration and move the folder in S3 to the proper place.
If you can't move it, I would try to change the store dir to "gpsql/gpsql/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}".
I'm not sure if that works, but that would be my first step.
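If moving the existing files in S3 is not practical, the alternative above amounts to making store_dir match where the files actually ended up. A sketch only: the uploader class name is assumed, and whether the doubled gpsql prefix is right depends on how the existing objects were stored.

class PictureUploader < CarrierWave::Uploader::Base
  storage :fog

  # Point generated URLs at the location the files actually live in,
  # matching the second URL from the question.
  def store_dir
    "gpsql/gpsql/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end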

"hostname does not match the server certificate" with aws-sdk-ruby when accessing S3 objects

I have created a new S3 bucket on AWS.
I have a certificate issue that I didn't have with the same code on my original bucket.
Here is the code:
AWS.config(access_key_id: AWS_ACCESS_KEY_ID,
           secret_access_key: AWS_SECRET_ACCESS_KEY,
           region: S3_REGION)
s3 = AWS::S3.new
bucket = s3.buckets[S3_BUCKET_NAME]
@resp = bucket.objects.with_prefix('categories/' + @category.id.to_s + "/")
@resp.each do |item|
end
This returns the following error when @resp.each is executed:
hostname does not match the server certificate (OpenSSL::SSL::SSLError)
The ENV variables were updated with the new region and bucket name.
Uploading images works.
@resp returns an AWS::S3::ObjectCollection:0x007f815e099d18.
My bucket name doesn't contain dots.
Is there something to configure on AWS S3 to avoid this error?
I was having the same issue, and I solved it by doing:
Aws::S3::Client.new(
  :access_key_id     => 'YOUR_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
  :region            => 'YOUR_REGION',
  :force_path_style  => true
)
Basically, by also forcing path-style URLs.
Let me know if it works!
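The question itself uses the older AWS::S3 (v1) interface rather than Aws::S3::Client. If you stay on v1, it appears to expose an equivalent setting named s3_force_path_style; a hedged sketch (verify the option name against the v1 docs):

# Same fix with the v1 API from the question: force path-style addressing
# so the request hostname matches the s3.<region>.amazonaws.com certificate.
AWS.config(access_key_id: AWS_ACCESS_KEY_ID,
           secret_access_key: AWS_SECRET_ACCESS_KEY,
           region: S3_REGION,
           s3_force_path_style: true) # assumed v1 equivalent of :force_path_style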

Convert and store to S3 with REST API / InkFilepicker

I have a Rails app on Heroku. From the server side (using the REST API of InkFilepicker), I would like to convert a file, save it to my S3 bucket, and store the S3 URL on my model.
Concretely: given an image (https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG), I want to convert it (https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG/convert?w=200&h=150&fit=clip) and store the converted image in my S3 bucket.
EDIT
Here is what I did at the end:
require 'open-uri'

after_save :save_thumbnail_url_to_s3

def save_thumbnail_url_to_s3
  convert_options = {
    fit: 'clip',
    h: 500,
    w: 500
  }
  file = open("#{self.url}/convert?#{convert_options.to_query}")

  # Write the converted file into the S3 bucket
  amazon = AWS::S3.new(access_key_id: ENV['AWS_ACCESS_KEY_ID'],
                       secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'])
  bucket = amazon.buckets[ENV['AWS_BUCKET']]
  object = bucket.objects[s3_media_path]
  written_file = object.write(file, acl: :public_read) # or :authenticated_read
  self.update_column :thumbnail_url, written_file.public_url.to_s
end
If you are using the filepicker.io API, you can convert your file with the API and then use open-uri, as below, to create a file stream that can be sent to S3. The Tempfile it returns behaves like Ruby's File API.
[3] pry(main)> require 'open-uri'
=> true
[4] pry(main)> file = open("https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG/convert?...")
=> #
[5] pry(main)> file.class
=> Tempfile
You can simply use the aws-s3 gem: https://github.com/marcel/aws-s3
But be careful: Heroku is read-only oriented, so you will only be able to work with temp files.
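To illustrate the temp-file constraint, here is a small sketch (not from the original answers) that pulls the converted image into a Tempfile and uploads it with the v1 API used in the question's EDIT; the S3 key and local path are made up:

require 'open-uri'
require 'tempfile'

# A Heroku dyno can only reliably write temp files, so buffer the
# converted image in a Tempfile before pushing it to S3.
tmp = Tempfile.new(['thumbnail', '.png'])
tmp.binmode
tmp.write(open("https://www.filepicker.io/api/file/hFHUCB3iTxyMzseuWOgG/convert?w=200&h=150&fit=clip").read)
tmp.rewind

amazon = AWS::S3.new(access_key_id: ENV['AWS_ACCESS_KEY_ID'],
                     secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'])
object = amazon.buckets[ENV['AWS_BUCKET']].objects['thumbnails/example.png'] # made-up key
object.write(tmp, acl: :public_read)
tmp.close!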

Amazon s3 bucket name warning from Fog gem

I get this warning whenever I start the Rails server or console:
[WARNING] fog: the specified s3 bucket name(hesaplabakalim-production/assets/new_opengraph) is not a valid dns name, which will negatively impact performance.
My fog configuration looks like this:
connection = Fog::Storage.new({
  :provider              => 'AWS',
  :aws_access_key_id     => "dummy",
  :aws_secret_access_key => "dummy"
})

$directory = connection.directories.create(
  :key    => "dummy/assets/new_opengraph",
  :public => true
)
I actually need to create a bucket named dummy and then go into the assets/new_opengraph folder, but I could not find how to do this in the fog documentation.
I searched the fog gem's GitHub page and found this issue and solution:
"on the empty folder, I have used zero byte files that we hide in the console that gives the semblance of creating empty folders. Cheap trick but works."
https://github.com/fog/fog/issues/1370
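Since S3 has no real folders, the usual fix is to create the bucket under its plain name and put the path into each object's key. A rough sketch reusing the question's connection (the object key and file path are made up):

# Create (or look up) the bucket by its DNS-safe name only - no slashes.
directory = connection.directories.create(
  :key    => "dummy",
  :public => true
)

# "Folders" are just key prefixes: uploading under assets/new_opengraph/
# makes that folder appear in the S3 console.
directory.files.create(
  :key    => "assets/new_opengraph/example.png",
  :body   => File.open("/tmp/example.png"),
  :public => true
)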
