Can't read image path: No such file or directory @ rb_sysopen - /assets/ - ruby-on-rails

I have an image named test.jpg in my assets/images folder.
I'm trying to read the image in my controller:
image_path = ActionController::Base.helpers.asset_path("test.jpg")
image_data = File.read(image_path)
I get the following error:
No such file or directory @ rb_sysopen - /assets/test-80cc818a7ee3f5d3cab23996fb09f4685b38b78258084a1ff23eca1c626646f6.jpg
Any ideas? Why is it appending that code to my image url? Can I get rid of it so it can read the original image url?
Thanks!

So, basically asset_path returns a URL path relative to the root domain, i.e. "/assets/test-{SHADigest}.jpg".
That is not a file-system path that File.read will accept; on Unix systems you would effectively be trying to open the file system at /assets/xyz, as if it were a path like /home/xyz.
What you can do is either read the raw file from the file system (ignoring anything the asset pipeline may or may not be set up to do):
image_path = Rails.root.join("app", "assets", "images", "test.jpg")
image_data = File.read(image_path) # Maybe check with File.exist?(image_path) as well.
Or you can read it from the app's own web server with any HTTP client (RestClient, Net::HTTP or the like), and use asset_url instead of asset_path.
This won't work in the rails console unless the server happens to be running and is reachable on config.asset_host from wherever the rails console is running.
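A rough sketch of that second option (assuming config.asset_host is set so asset_url can build a full URL, and that the app server is actually serving the asset):
require "net/http"

# Full URL, e.g. "http://localhost:3000/assets/test-<digest>.jpg"
image_url  = ActionController::Base.helpers.asset_url("test.jpg")
image_data = Net::HTTP.get(URI(image_url)) # fetch the fingerprinted asset over HTTP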

Related

Rails 4, Fog, Amazon s3 - retrieving all the images as an array from a specific folder in a bucket.

I am using Amazon S3, Rails 4, and the Fog gem. I have an Amazon bucket called uipstudy with 100 folders, each containing about 20 images. I use the following to get all the images in a specific folder (in my application_helper.rb, which is included in application_controller.rb).
def get_files(image_folder)
  connection = Fog::Storage.new(
    provider: 'AWS',
    aws_access_key_id: '######',
    aws_secret_access_key: '#######'
  )
  connection.directories.get('uipimages', prefix: image_folder).files.map do |file|
    file.key
  end
end
In my controller I have this (in this example I am looking in the folder "1" in the uipstudy bucket):
# Amazon solution:
@images = get_files('1')
@images.each do |image|
  image = "https://s3.amazonaws.com/uipstudy/#{image}"
  @image_array << image
end
The problem is that it's returning the files inside the folder labelled "1", but also those in 10, 11, 12, 13, etc. I assumed that the prefix was absolute, but it appears not. Is there a way to enforce that the prefix matches exactly the folder specified?
I think you should be able to make a small change in your script to get the behavior you want. Simply append a forward slash to the prefix so that it clearly indicates you want things that look like a directory, rather than anything that merely begins with a particular string.
So, that would get you something like:
directory = connection.directories.get('uipimages', prefix: image_folder + '/')
directory.files.map do |file|
  file.key
end
(I just split it into two statements to make it easier to read.)
Below is my solution using the aws-sdk gem.
# initialize s3 client
s3 = AWS::S3.new
bucket = s3.buckets[ENV['AWS_BUCKET']]
# regex for ipa files in _inbox folder
regex = %r{_inbox/(?:[^/]+/)*[^/]+\.ipa}i
# get and process ipa files
bucket.objects.select { |o| o.key.match(regex) }.each do |ipa|

Attach a pdf in asset pipeline using ActionMailer Rails 4

I'm trying to attach a file to an email. The file is in assets/downloads/product.pdf
In the mailer, I have:
attachments["product.pdf"] = File.read(ActionController::Base.helpers.asset_path("product.pdf"))
I've tried:
attachments["product.pdf"] = File.read(ActionController::Base.helpers.asset_url("product.pdf"))
...and even:
attachments["product.pdf"] = File.read(ActionController::Base.helpers.compute_asset_host("product.pdf") + ActionController::Base.helpers.compute_asset_path("product.pdf"))
I always get the same error:
EmailJob crashed!
Errno::ENOENT: No such file or directory - //localhost:3000/assets/product.pdf
...or a variation on the theme. Yet when I use asset_url in a view, or just put the URL in the browser, it works:
http://localhost:3000/assets/product.pdf
I've also tried using straight up:
File.read("app/assets/downloads/product.pdf")
File.read("downloads/product.pdf")
...which works in the dev environment but not on the staging server (Heroku). The error is still:
Errno::ENOENT: No such file or directory - downloads/product-market-fit-storyboard.pdf
Also tried:
File.read("/downloads/product.pdf")
File.read("http://lvh.me:3000/assets/product.pdf")
...don't work at all.
Ideas?
You should use syntax like this. It works for me and may work for you as well.
File.open(Dir.glob("#{Rails.root}/app/assets/downloads/product.pdf").first, "r")
When using a mailer, you shouldn't use the asset pipeline. The asset pipeline would be useful if you wanted a link to a file inside your email. When rendering an email, Action Mailer has access to files in the app directory.
Please read about attachments in the Action Mailer guide. As you can see, you just need to pass the path to a file, not a URL:
attachments['filename.jpg'] = File.read('/path/to/filename.jpg')
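In this case that would look something like the following (a minimal sketch, assuming the PDF really lives under app/assets/downloads/ as described in the question):
# Read the PDF straight from the file system and attach its bytes
attachments["product.pdf"] = File.read(Rails.root.join("app", "assets", "downloads", "product.pdf"))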

Sanitising a download URL with params being passed to send_file

I have numerous files in a directory app/assets/downloadables/ and I want authenticated users to be able to download arbitrary files by name in params within that directory or any subdirectories.
How can I sanitise the input to send_file to prevent users from accessing arbitrary files on the server?
At the moment I'm looking at doing something like this, but I'm not confident that it's safe:
DOWNLOADS_ROOT = File.join(Rails.root, "app", "assets", "downloadables")
# e.g. request.path = "/downloads/subdir/file1.pdf"
file = request.path.gsub(/^\/downloads\//,'').gsub('..','').split('/')
# send file /app/assets/downloadables/subdir/file1.pdf
send_file File.join(DOWNLOADS_ROOT, file)
Would this sufficiently protect against app-wide arbitrary file access or are there improvements or a different approach that would be better?
I found an answer to this here: http://makandracards.com/makandra/1083-sanitize-user-generated-filenames-and-only-send-files-inside-a-given-directory
This file needed to be created per the link:
# app/controllers/shared/send_file_inside_trait.rb
module ApplicationController::SendFileInsideTrait
  as_trait do
    private

    def send_file_inside(allowed_path, filename, options = {})
      path = File.expand_path(File.join(allowed_path, filename))
      if path.match Regexp.new('^' + Regexp.escape(allowed_path))
        send_file path, options
      else
        raise 'Disallowed file requested'
      end
    end
  end
end
To be used as follows in controllers:
send_file_inside File.join(Rails.root, 'app', 'assets', 'downloadables'), request.path.gsub(/^\/downloads\//,'').split('/')
The magic happens where the calculated path of allowed_path + filename is passed through File.expand_path, which resolves any directory-traversal segments (such as "..") and returns an absolute path. That resolved path is then compared against allowed_path to ensure that the requested file resides within the allowed path and/or its sub-directories.
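To illustrate why the check works (the paths below are hypothetical examples, not from the question):
File.expand_path(File.join("/app/assets/downloadables", "subdir/file1.pdf"))
# => "/app/assets/downloadables/subdir/file1.pdf"  (passes the prefix check)
File.expand_path(File.join("/app/assets/downloadables", "../../../etc/passwd"))
# => "/etc/passwd"                                 (fails the prefix check, so the request is rejected)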
Note that this solution requires the Modularity gem v2 https://github.com/makandra/modularity

retrieve carrierwave file uploaded to Amazon S3

Using Rails 3.2, carrier wave, and recently switched to store on Amazon S3. My setup and uploads are all working fine.
1. I have image_uploader.rb to upload and store images. Displaying them all works fine
2. I have file_uploader.rb to upload and store files. I've even taken it a step further to upload ZIP files and extract a version so that both the ZIP file and TXT files are stored in the correct place on S3.
My problem is I run a method on the TXT file. In the past, I used storage :file
With that I was able to:
Dir.chdir("public/uploads/")
import_file = Dir['*.TXT'].first
f = File.new(import_file)
Now that I'm using storage :fog, I can't seem to retrieve/File.new/open the file.
I see the file with the usual commands:
@upload1.team_file # stored file
@upload1.team_file.url # url
@upload1.team_file_url(:data_file).to_s # version created
I've been poring through all kinds of very limited leads on retrieving and/or opening the file, but everything I try seems to return errors such as:
Errno::ENOENT: No such file or directory - https://teamfiles.s3.amazonaws.com/data_files…
Thoughts on the difference here of retrieving and USING a file from AmazonS3? Thanks!
Pulling from multiple threads, APIs, etc. I'm answering my own question with what I've found. I welcome any corrections or improvements:
To retrieve carrierwave files uploaded to Amazon S3, you have to understand that open(@upload.file_url) or File.open(@upload.file_url) does NOT open the file itself; it only opens the PATH to the file. (ref: Ruby OpenURI)
I use: open_uri_url = open(@upload.file_url)
You then have to find the specific file you want within that path. In my case, I find the ZIP file that was uploaded to Amazon S3 and extract the specific file I want within it, identified by a unique *.ABC extension:
zip_content_file = Zip::File.open(open_uri_url).map{|content| content if content.to_s.split('.').last == "ABC"}.compact.first
Now, from here, where to extract to?? I create a unique directory in the Rails tmp directory to extract the file to, use it and then delete the directory:
tmp_directory = "tmp/extracts/#{@upload.parent_id}/"
FileUtils.mkdir_p(tmp_directory) unless File.directory?(tmp_directory)
extract = zip_content_file.extract(tmp_directory + zip_content_file.to_s)
Now, with the file found in the Amazon S3-stored ZIP file and extracted, I can open it, read it, etc.:
f = File.new(tmp_directory + extract.to_s)
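Once I'm done with the extracted file, I clean up the temporary directory (a hedged sketch; the original only mentions deleting the directory without showing the call):
f.close
FileUtils.rm_rf(tmp_directory) # remove the per-upload extract directory when finished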
I hope this helps with Carrierwave, AmazonS3, ZIP files and using them once uploaded.

ruby reading files from S3 with open-URI

I'm having some problems reading a file from S3. I want to be able to load the ID3 tags remotely, but using open-URI doesn't work, it gives me the following error:
ruby-1.8.7-p302 > c=TagLib2::File.new(open(URI.parse("http://recordtemple.com.s3.amazonaws.com/music/745/original/The%20Stranger.mp3?1292096514")))
TypeError: can't convert Tempfile into String
from (irb):8:in `initialize'
from (irb):8:in `new'
from (irb):8
However, if I download the same file and put it on my desktop (i.e. no need for open-URI), it works just fine.
c=TagLib2::File.new("/Users/momofwombie/Desktop/blah.mp3")
Is there something else I should be doing to read a remote file?
UPDATE: I just found this link, which may explain a little bit, but surely there must be some way to do this...
Read header data from files on remote server
You might want to check out AWS::S3, a Ruby library for Amazon's Simple Storage Service.
Do an AWS::S3::S3Object.find for the file and then use about to retrieve the metadata.
This solution assumes you have the AWS credentials and permission to access the S3 bucket that contains the files in question.
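Something along these lines (a rough sketch against the legacy aws-s3 gem; the bucket name and object key below are guesses based on the URL in the question):
require 'aws/s3'

AWS::S3::Base.establish_connection!(
  access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)

# find(key, bucket), then read the metadata headers without downloading the body
object = AWS::S3::S3Object.find('music/745/original/The Stranger.mp3', 'recordtemple.com')
object.about # => content-length, content-type and other headers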
TagLib2::File.new doesn't take a file handle, which is what you are passing to it when you use open without a read.
Add on read and you'll get the contents of the URL, but TagLib2::File doesn't know what to do with that either, so you are forced to read the contents of the URL, and save it.
I also noticed you are unnecessarily complicating your use of OpenURI. You don't have to parse the URL using URI before passing it to open. Just pass the URL string.
require 'open-uri'
fname = File.basename($0) << '.' << $$.to_s
File.open(fname, 'wb') do |fo|
  fo.print open("http://recordtemple.com.s3.amazonaws.com/music/745/original/The%20Stranger.mp3?1292096514").read
end
c = TagLib2::File.new(fname)
# do more processing...
File.delete(fname)
I don't have TagLib2 installed but I ran the rest of the code and the mp3 file downloaded to my disk and is playable. The File.delete would clean up afterwards, which should put you in the state you want to be in.
This solution isn't going to work much longer. Paperclip > 3.0.0 has removed to_file. I'm using S3 & Heroku. What I ended up doing was copying the file to a temporary location and parsing it from there. Here is my code:
dest = Tempfile.new(upload.spreadsheet_file_name)
dest.binmode
upload.spreadsheet.copy_to_local_file(:default_style, dest.path)
file_loc = dest.path
...
CSV.foreach(file_loc, :headers => true, :skip_blanks => true) do |row|
This seems to work instead of open-URI:
Mp3Info.open(mp3.to_file.path) do |mp3info|
  puts mp3info.tag.artist
end
Paperclip has a to_file method that downloads the file from S3.
