Rails uploading to Amazon S3 bucket

I have two different buckets in the same Amazon S3 account. I can upload fine to one but not the other.
pictures.rb
def self.set_s3_direct_post
  # Note: ${filename} is substituted by S3 at upload time, not by Ruby interpolation.
  S3_BUCKET.presigned_post(key: "uploads/#{SecureRandom.uuid}/${filename}", success_action_status: '201', acl: 'public-read')
end
aws.rb
S3_BUCKET = Aws::S3::Resource.new.bucket(ENV['S3_FIRST_BUCKET'])
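While debugging, the returned post object exposes exactly where the browser will upload (a minimal console sketch; Picture is assumed to be the model from pictures.rb above):
post = Picture.set_s3_direct_post
post.url    # => "https://first_bucket.s3.amazonaws.com" when the bucket name is clean
post.fields # => the form fields (key, policy, signature, ...) to submit with the upload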
Trying to change the bucket (with S3_SECOND_BUCKET instead of S3_FIRST_BUCKET) results in a failed upload. I notice that the URL for a failed upload is in the form:
https://s3.amazonaws.com/%E2%80%9Csecond_bucket%E2%80%9D
but a successful upload looks like:
https://first_bucket.s3.amazonaws.com
How do I control this?

Unbelievable. I edited my ~/.profile file in TextEdit. The addition of the line:
export S3_SECOND_BUCKET="second_bucket"
put slightly curly double quotes around second_bucket. It turns out these are different from the straight quotes Sublime Text 2 uses (like above), so the TextEdit versions weren't being recognised as quotes by the shell and became part of the bucket name — exactly the %E2%80%9C/%E2%80%9D you can see URL-encoded in the failed upload URL above. Cheers TextEdit, you t**t.
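A quick way to catch this in a Rails console (a minimal sketch; the inline values show what the broken ~/.profile line produces):
ENV['S3_SECOND_BUCKET']             # => "\u201Csecond_bucket\u201D" — curly quotes baked into the value
ENV['S3_SECOND_BUCKET'].ascii_only? # => false once smart quotes have crept in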


Rails - Resave All Models for S3 Migration

rails 6.1.3.2
aws-sdk-s3 gem
I currently have a Rails app in production that uses ActiveStorage to attach image data to a wrapper Image model. It's currently using the local strategy to save images to disk, and I am migrating it to S3. I am not using Paperclip or anything similar.
I succeeded in setting it up. Currently it is set to use local storage primarily, with S3 as a mirror, so that I can write to two places during the migration. However, the documentation says it will only save new images to S3 on create and update of a record. I would like to "re-save" all models in production to force the migration to happen. Does anyone know how to do this?
Looks like it was already answered!
If you happen to be stuck with only access to the Rails Console like I was, this solution worked perfectly. If you copy-paste this code into the console, it will begin to produce output of the S3 uploads. After 5k of those, I was done. An immense thank you to Tayden for the solution.
all_services = [ActiveStorage::Blob.service.primary, *ActiveStorage::Blob.service.mirrors]

# Iterate through each blob
ActiveStorage::Blob.all.each do |blob|
  # Select the services where the file already exists
  services = all_services.select { |service| service.exist? blob.key }

  # Skip this blob if the file doesn't exist anywhere
  next unless services.present?

  # The remaining services are the mirrors still missing the file
  mirrors = all_services - services

  # Open the local file if a disk service holds it (assigned unconditionally so a
  # previous iteration's file is never reused)
  disk_service = services.find { |service| service.is_a? ActiveStorage::Service::DiskService }
  local_file = disk_service ? File.open(disk_service.path_for(blob.key)) : nil

  # Upload the local file to the mirrors (if one exists)
  mirrors.each do |mirror|
    local_file.rewind
    mirror.upload blob.key, local_file, checksum: blob.checksum
  end if local_file.present?

  # If no local file exists, download a remote copy and upload it to the mirrors (thanks @Rystraum)
  services.first.open blob.key, checksum: blob.checksum do |temp_file|
    mirrors.each do |mirror|
      temp_file.rewind
      mirror.upload blob.key, temp_file, checksum: blob.checksum
    end
  end unless local_file.present?

  local_file&.close
end
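On Rails 6.1 the mirror service can also do the copying itself. A shorter sketch, assuming ActiveStorage::Blob.service is the configured Mirror service (its mirror method uploads a blob to any mirror that doesn't have it yet):
ActiveStorage::Blob.find_each do |blob|
  ActiveStorage::Blob.service.mirror(blob.key, checksum: blob.checksum)
end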

Ruby on Rails copy S3 file with special characters (%C5) in path

I have a problem with the attachment system on my web page. I store attachments on Amazon S3 using Paperclip. I have an option to copy an attachment to a new file. Everything works fine until there are Polish special characters in the title, like ŁĄKA.jpg. Then I get an error:
Saving error: Appendix Paperclip::Errors::NotIdentifiedByImageMagickError
/Users/michal/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/activerecord-4.2.5/lib/active_record/validations.rb:79:in `raise_record_invalid'
/Users/michal/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/activerecord-4.2.5/lib/active_record/validations.rb:43:in `save!'
My code:
instance.appendixes.select { |a| a.temporary? && !a.appendix.exists? }.each do |a|
  a.appendix = S3File.new(a.s3path)
  a.process = false
  a.appendix_url = nil
  puts "CREATING NEW FILE from (temporary?) appendix: #{a.id}, path: #{a.s3path}, is_public: #{a.is_public}, determine_is_public: #{a.determine_is_public}"
  a.is_public = a.determine_is_public
  logger.debug("CREATING NEW FILE from (temporary?) appendix: #{a.id}, path: #{a.s3path}, is_public: #{a.is_public}, determine_is_public: #{a.determine_is_public}")
  a.save! # because of delayed_job
end
I'm getting the error on a.save! when the path is like appendixes/appendixes/242/original/%25C5%2581A%25CC%25A8KA.jpg, but it works like a charm when it is appendixes/appendixes/243/original/laka.jpg or any other file name without Polish letters. Has anybody had this kind of problem, or does anyone have suggestions on how to fix it?
OK, I found what was wrong. I had to replace the last part of a.s3path with the original name (łąka.jpg) and everything works fine. So when I have:
S3File.new("appendixes/appendixes/243/original/łąka.jpg")
it works and finds the correct file on the S3 server.
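For reference, the broken path from the error is just the original name double-escaped, so it can be recovered with two rounds of unescaping (a sketch; %25 is itself an escaped %):
require 'cgi'

encoded = "appendixes/appendixes/242/original/%25C5%2581A%25CC%25A8KA.jpg"
CGI.unescape(CGI.unescape(encoded))
# => "appendixes/appendixes/242/original/ŁĄKA.jpg" (UTF-8, ogonek as a combining mark)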

Rails 4, Fog, Amazon S3 - retrieving all the images as an array from a specific folder in a bucket

I am using Amazon S3, Rails 4, and the Fog gem. I have an Amazon bucket called uipstudy with 100 folders, each containing about 20 images. I use the following to get all the images in a specific folder (in my application_helper.rb, which is included in application_controller.rb).
def get_files(image_folder)
  connection = Fog::Storage.new(
    provider: 'AWS',
    aws_access_key_id: '######',
    aws_secret_access_key: '#######'
  )
  connection.directories.get('uipimages', prefix: image_folder).files.map do |file|
    file.key
  end
end
In my controller I have this. In this example I am looking in the folder "1" in the uipstudy bucket.
# Amazon solution:
@images = get_files('1')
@images.each do |image|
  image = "https://s3.amazonaws.com/uipstudy/#{image}"
  @image_array << image
end
The problem is that it's returning the files inside the folder labelled "1", but also those in 10, 11, 12, 13, etc. I assumed that the prefix was an exact match, but it appears not. Is there a way to enforce that the prefix matches exactly the folder specified?
I think you should be able to make a small change in your script to get the behavior you want. Simply append a forward slash to the prefix so that it clearly shows you want things that are like a directory instead of any/all things that begin with a particular character.
So, that would get you something like:
directory = connection.directories.get('uipimages', prefix: image_folder + '/')
directory.files.map do |file|
  file.key
end
(I just split it into two statements to make it easier to read.)
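Folding the slash into the helper from the question might look like this (a sketch; reading the credentials from ENV is my addition):
def get_files(image_folder)
  connection = Fog::Storage.new(
    provider: 'AWS',
    aws_access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
  )
  prefix = image_folder.end_with?('/') ? image_folder : "#{image_folder}/"
  connection.directories.get('uipimages', prefix: prefix).files.map(&:key)
end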
Below is my solution using the aws-sdk gem.
# initialize the S3 client
s3 = AWS::S3.new
bucket = s3.buckets[ENV['AWS_BUCKET']]

# regex for .ipa files in the _inbox folder
regex = %r{_inbox/(?:[^/]+/)*[^/]+\.ipa}i

# get and process the .ipa files
bucket.objects.select { |o| o.key.match(regex) }.each do |ipa|
  # ... process each matching object here
end
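For comparison, the same prefix listing with the modern aws-sdk-s3 resource interface (a sketch; the bucket name and region are placeholders):
s3 = Aws::S3::Resource.new(region: 'us-east-1')
s3.bucket('uipstudy').objects(prefix: '1/').map(&:key)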

Retrieve CarrierWave file uploaded to Amazon S3

Using Rails 3.2 and CarrierWave, and recently switched to storing on Amazon S3. My setup and uploads are all working fine.
1. I have image_uploader.rb to upload and store images. Displaying them all works fine
2. I have file_uploader.rb to upload and store files. I've even taken it a step further to upload ZIP files and extract a version so that both the ZIP file and TXT files are stored in the correct place on S3.
My problem is that I need to run a method on the TXT file. In the past, I used storage :file.
With that I was able to:
Dir.chdir("public/uploads/")
import_file = Dir['*.TXT'].first
f = File.new(import_file)
Now that I'm using storage :fog, I can't seem to retrieve/File.new/open the file.
I see the file with the usual commands:
@upload1.team_file                      # stored file
@upload1.team_file.url                  # URL
@upload1.team_file_url(:data_file).to_s # version created
I've been poring over all kinds of very limited leads on retrieving and/or opening the file, but everything I try seems to return errors, such as:
Errno::ENOENT: No such file or directory - https://teamfiles.s3.amazonaws.com/data_files…
Thoughts on the difference here between retrieving and USING a file from Amazon S3? Thanks!
Pulling from multiple threads, APIs, etc. I'm answering my own question with what I've found. I welcome any corrections or improvements:
To retrieve CarrierWave files uploaded to Amazon S3, you have to understand that open(@upload.file_url) or File.open(@upload.file_url) does NOT open the file, it only opens the PATH to the file. (ref: Ruby OpenURI)
I use: open_uri_url = open(@upload.file_url)
You then have to find the specific file in that path that you want. For me, that means finding a ZIP file that was uploaded to Amazon S3 and extracting the specific file within it that I want, identified by a unique *.ABC extension:
zip_content_file = Zip::File.open(open_uri_url).map { |content| content if content.to_s.split('.').last == "ABC" }.compact.first
Now, from here, where to extract to? I create a unique directory under the Rails tmp directory, extract the file there, use it, and then delete the directory:
tmp_directory = "tmp/extracts/#{@upload.parent_id}/"
FileUtils.mkdir_p(tmp_directory) unless File.directory?(tmp_directory)
extract = zip_content_file.extract(tmp_directory + zip_content_file.to_s)
Now, with the file extracted from the ZIP stored on Amazon S3, I can open it, read it, etc.:
f = File.new(tmp_directory + extract.to_s)
# ... and once finished, delete the directory as mentioned above:
FileUtils.rm_rf(tmp_directory)
I hope this helps with CarrierWave, Amazon S3, and ZIP files, and using them once uploaded.

Ruby: reading files from S3 with open-URI

I'm having some problems reading a file from S3. I want to be able to load the ID3 tags remotely, but using open-URI doesn't work; it gives me the following error:
ruby-1.8.7-p302 > c=TagLib2::File.new(open(URI.parse("http://recordtemple.com.s3.amazonaws.com/music/745/original/The%20Stranger.mp3?1292096514")))
TypeError: can't convert Tempfile into String
from (irb):8:in `initialize'
from (irb):8:in `new'
from (irb):8
However, if I download the same file and put it on my desktop (i.e., no need for open-URI), it works just fine.
c=TagLib2::File.new("/Users/momofwombie/Desktop/blah.mp3")
Is there something else I should be doing to read a remote file?
UPDATE: I just found this link, which may explain a little bit, but surely there must be some way to do this...
Read header data from files on remote server
You might want to check out AWS::S3, a Ruby library for Amazon's Simple Storage Service.
Do an AWS::S3::S3Object.find for the file and then use about to retrieve the metadata.
This solution assumes you have the AWS credentials and permission to access the S3 bucket that contains the files in question.
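A minimal sketch with that gem (hash-rocket syntax for the Ruby 1.8.7 era; the bucket name is my guess from the URL's host, and the ENV names are placeholders):
require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => ENV['AMAZON_ACCESS_KEY_ID'],
  :secret_access_key => ENV['AMAZON_SECRET_ACCESS_KEY']
)

song = AWS::S3::S3Object.find('music/745/original/The Stranger.mp3', 'recordtemple.com')
song.about # => metadata (content-type, content-length, etag, ...)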
TagLib2::File.new doesn't take a file handle, which is what you are passing to it when you use open without a read.
Add on read and you'll get the contents of the URL, but TagLib2::File doesn't know what to do with that either, so you are forced to read the contents of the URL, and save it.
I also noticed you are unnecessarily complicating your use of OpenURI. You don't have to parse the URL using URI before passing it to open. Just pass the URL string.
require 'open-uri'

fname = File.basename($0) << '.' << $$.to_s
File.open(fname, 'wb') do |fo|
  fo.print open("http://recordtemple.com.s3.amazonaws.com/music/745/original/The%20Stranger.mp3?1292096514").read
end
c = TagLib2::File.new(fname)
# do more processing...
File.delete(fname)
I don't have TagLib2 installed but I ran the rest of the code and the mp3 file downloaded to my disk and is playable. The File.delete would clean up afterwards, which should put you in the state you want to be in.
This solution isn't going to work much longer: Paperclip > 3.0.0 has removed to_file. I'm using S3 and Heroku. What I ended up doing was copying the file to a temporary location and parsing it from there. Here is my code:
dest = Tempfile.new(upload.spreadsheet_file_name)
dest.binmode
upload.spreadsheet.copy_to_local_file(:default_style, dest.path)
file_loc = dest.path
...
CSV.foreach(file_loc, :headers => true, :skip_blanks => true) do |row|
  # ...
end
This seems to work instead of open-URI:
Mp3Info.open(mp3.to_file.path) do |mp3info|
  puts mp3info.tag.artist
end
Paperclip has a to_file method that downloads the file from S3.
