I uploaded multiple PDF files to the path user/pdf/ in AWS S3, so the key for each file looks like user/pdf/file1.pdf, user/pdf/file2.pdf, etc.
In my website (Angular front end and Rails back end), I'm trying to do four things:
1) Retrieve the files under a certain path (user/pdf/).
2) Make a view that lists the names of the files retrieved from that path.
3) Let users click a file name to open the file via its S3 endpoint.
4) Delete a file by clicking a button.
I was looking through the AWS S3 docs, but I could not find the related API calls. I would love some help performing the above actions.
You should review the Ruby S3 SDK docs (the examples below use the aws-sdk v1 interface).
listing objects from a bucket
# enumerate ALL objects in the bucket (even if the bucket contains
# more than 1k objects)
bucket.objects.each do |obj|
  puts obj.key
end

# enumerate at most 20 objects with the given prefix
bucket.objects.with_prefix('photos/').each(:limit => 20) do |photo|
  puts photo.key
end
getting an object
# makes no request, returns an AWS::S3::S3Object
obj = bucket.objects['key']
deleting an object
bucket.objects.delete('abc')
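For point 3 (letting users open a file from the front end), the same SDK can build a URL for an object. A minimal sketch, assuming aws-sdk v1 and a placeholder key; use a signed URL if the objects are private:
# build a link the Angular front end can open
obj = bucket.objects['user/pdf/file1.pdf']
url = obj.url_for(:read, :expires => 10 * 60)   # signed URL valid for 10 minutes
# or, if the object is publicly readable:
public_url = obj.public_url
Your Rails endpoint can return these URLs as JSON alongside the file names, and the Angular view can render them as links.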
I created a Rails application that uploads files through CarrierWave to an S3 bucket.
I currently upload them to one bucket, and I want to upload them to two buckets in different regions at the same time.
How can I do that?
You can create an upload method and pass your bucket name as an argument. A quick and dirty version would look something like:
def upload_file(specific_bucket = nil)
  if specific_bucket
    # upload to specific_bucket
  else
    BUCKET_LIST.each do |bucket|
      # send file to bucket
    end
  end
end
Store your bucket list in an appropriate location, for example an initializer:
BUCKET_LIST = ['bucket_name_one', 'bucket_name_two']
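If CarrierWave (via fog) already handles the upload to the first bucket, another approach is to copy the stored object to the second bucket afterwards with the aws-sdk gem. A minimal sketch, assuming aws-sdk v1, placeholder bucket names, and that uploader.store_path is the S3 key CarrierWave used:
# after CarrierWave has stored the file in the primary bucket
s3 = AWS::S3.new
key = uploader.store_path                         # assumption: the key CarrierWave stored under
source = s3.buckets['primary-bucket'].objects[key]
source.copy_to(key, :bucket_name => 'secondary-bucket')
The copy is performed server side by S3; it works across regions as long as your credentials can read the source bucket and write the target bucket.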
I'm making an Angular-Rails web app. I successfully retrieve files from a certain path in AWS S3.
Let's say I run the code below:
@files = bucket.objects.with_prefix('pdf/folder/')
@files.each(:limit => 20) do |file|
  puts file.key
end
file.key prints pdf/folder/file1.pdf, pdf/folder/file2.pdf, etc.
I do not want the whole path, just the file names, like file1.pdf, file2.pdf, etc.
Is a regex the only way, or is there an API call for this in the AWS S3 SDK? I was reading the docs and could not find a related function.
The call you want is probably File.basename:
puts File.basename(file.key)
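Applied to the loop from the question, that looks like:
@files.each(:limit => 20) do |file|
  puts File.basename(file.key)   # prints file1.pdf, file2.pdf, ...
end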
I have a Paperclip setup and I am migrating my files to a different location. Originally the files were stored on my server and given a filename based on the ID of the created record and the original ID. Now I'm moving them to S3 and want to update the filenames to work appropriately. I set up my Paperclip config like so:
:path => ":class/:attachment/:hash-:style.:extension",
:url => ":s3_domain_url",
:hash_secret => SECRET,
:hash_data => ":class/:attachment/:id/:updated_at"
I updated the original records' filenames to be unique and moved the files over to my S3 instance. Unfortunately I am now unable to pull the files down from S3, and I think it is because Paperclip is using the wrong path for the filenames, one based on the path default that is now set in my config file. I want to update my files' file_name fields so that the path is correct for the new files and I can download them appropriately. Is there a way to call Paperclip's hashing function directly, based on my secret and hash_data, so I can update those file_name fields and pull those records? Everything uploaded since the move from my original servers seems to work appropriately.
Say you have a model User with an attachment named profile_pic.
Go into the Rails console (e.g. rails c) and get an object for the model that has the attachment, e.g. u = User.find(100).
Now type u.profile_pic.url to get the URL, or u.profile_pic_file_name to get the filename.
To see the effect of other options (for example your old options) you can do:
p = u.profile_pic # gets the paperclip attachment for profile_pic
puts p.url # gets the current url
p.options.merge!(url: '/blah/:class/:attachment/:id_partition/:style/:filename')
puts p.url # now shows url with the new options
Similarly p.path will show the local file path with whatever options you pick.
Long story short, something like:
User.where('created_at < ?', some_date).map do |x|
  "#{x.id} #{x.profile_pic_file_name} #{x.profile_pic.path}"
end
should give you what you want :)
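As for calling the hashing function directly: recent Paperclip versions expose it on the attachment as hash_key, though the exact method name and digest can vary by version, so treat this as an assumption to verify against your Paperclip release:
u = User.find(100)
p = u.profile_pic
# :hash in the path is an HMAC of the interpolated :hash_data, keyed with :hash_secret;
# Paperclip computes it internally via the attachment's hash_key method
puts p.hash_key(:original)
Comparing that value against the keys actually stored in S3 should tell you whether the hash or some other path segment is what changed.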
I am building an application that has a chat component. The application allows users to upload files to the chat. The chat is all JavaScript, but I wanted to use CarrierWave for the uploads because I am using it elsewhere in the application. I handle the uploads through AJAX so that I can get into Rails land and let CarrierWave take over.
I have been able to get the chat to upload the files to the correct location in my S3 bucket. The thing I can't figure out is how to delete the files. Here is the code that uploads the files; this is the method called from the route that the AJAX call hits.
def upload
  file = File.open(params[:file_0].tempfile)
  uploader = ChatUploader.new
  uploader.store!(file)
end
There is little to no documentation with CarrierWave on how to upload files without going through a model, and basically no documentation on how to remove files without going through a model. I assume it is possible, though; I just need to know what to call. So I guess my question is: how do I delete files?
UPDATE (11/23)
I got the code to save and delete files from S3 using these methods:
# code to save the file
def upload
  file = File.open(params[:file_0].tempfile)
  uploader = ChatUploader.new
  uploader.store!(file)
  uploader.store_path
end

# code to remove files
def remove_file
  file = params[:file]
  uploader = ChatUploader.new
  uploader.retrieve_from_store!(file)
  uploader.remove!
end
My only issue now is that the filename for the uploaded file is not correct. It saves all files with a "RackMultipart" prefix followed by numbers that look like a date, a time, and an identifier (example: RackMultipart20141123-17740-1tq4j1g). I need to use the original filename, plus maybe a timestamp for uniqueness.
I believe it has something to do with these two lines:
file = File.open(params[:file_0].tempfile)
and
uploader.store!(file)
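One likely cause (an assumption, not verified against your setup): File.open(params[:file_0].tempfile) hands CarrierWave the raw temp file, whose name is the RackMultipart... string, so the uploader never sees the original filename. Passing the uploaded file object straight to store! lets CarrierWave read original_filename:
def upload
  uploader = ChatUploader.new
  uploader.store!(params[:file_0])   # the uploaded file object responds to original_filename
  uploader.store_path
end
For uniqueness, a common pattern is to override filename in the uploader class, e.g. prepending a timestamp to the sanitized original name.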
In my Rails 3 application, I would like to export static JSON data to an Amazon S3 bucket, which can later be retrieved and parsed by an AJAX call from said application.
The JSON will be generated from the app's database.
My design requirements probably only need something like a rake task to initiate the export to S3. Every time the rake task runs, it will overwrite the files. Preferably the file name will correspond to the ID of the record the JSON data is generated from.
Does anyone have any experience with this and can point me in the right direction?
This can be accomplished with the aws-sdk gem.
Your task could be broken into two basic steps: 1) generate a temporary local file with your JSON data, 2) upload it to S3. A very basic, procedural example of this:
require 'aws-sdk'
# generate local file
record = Record.find(1)
file_name = "my-json-data-#{record.id}"
local_file_path = "/tmp/#{file_name}"
File.open(local_file_path, 'w') do |file|
  file.write(record.to_json)
end

# upload to S3
s3 = AWS::S3.new(
  :access_key_id => 'YOUR_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY')
bucket = s3.buckets['my-s3-bucket-key']
object = bucket.objects[file_name]
object.write(Pathname.new(local_file_path))
Check out the S3Object docs for more info.
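To drive this from a rake task as the question describes, the same steps can be wrapped in a task. A minimal sketch (the task name, model name, and bucket key are placeholders; each run overwrites the existing objects, and it writes the JSON string directly rather than going through a temp file, which S3Object#write also accepts):
# lib/tasks/export_json.rake
namespace :export do
  desc 'Export each record as JSON to S3'
  task :json => :environment do
    s3 = AWS::S3.new
    bucket = s3.buckets['my-s3-bucket-key']

    Record.find_each do |record|
      key = "my-json-data-#{record.id}"
      bucket.objects[key].write(record.to_json)   # overwrites any existing object with this key
    end
  end
end
Run it with rake export:json; your AJAX call can then fetch each object by the record's ID.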