I have been trying to download a file using Net::SFTP and I keep getting an error.
The file is partially downloaded, and is only 2.1 MB, so it's not a huge file. I removed the loop over the files and even tried just downloading the one file and got the same error:
yml = YAML.load_file Rails.root.join('config', 'ftp.yml')
Net::SFTP.start(yml["url"], yml["username"], password: yml["password"]) do |sftp|
  sftp.dir.glob(File.join('users', 'import'), '*.csv').each do |f|
    sftp.download!(File.join('users', 'import', f.name), Rails.root.join('processing_files', 'download_files', f.name), read_size: 1024)
  end
end
NoMethodError: undefined method `close' for #<Pathname:0x007fc8fdb50ea0>
from /[my_working_ap_dir]/gems/net-sftp-2.1.2/lib/net/sftp/operations/download.rb:331:in `on_read'
I have prayed to Google all I can and am not getting anywhere with it.
Rails.root returns a Pathname object, but it looks like the sftp code doesn't check whether it got a Pathname or a File handle; it just runs with it. When it runs into entry.sink.close it crashes, because Pathname doesn't implement close.
Pathnames are great for manipulating paths to files and directories, but they're not substitutes for file handles. You could probably tack .to_s onto the Pathname, which returns a plain String.
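For instance, a sketch based on the call in the question:

# Stringify the Pathname before handing it to Net::SFTP
local = Rails.root.join('processing_files', 'download_files', f.name).to_s
sftp.download!(File.join('users', 'import', f.name), local, read_size: 1024)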
Here's a summary of the download call from the documentation that hints that the expected parameters should be a String:
To download a single file from the remote server, simply specify both the
remote and local paths:
downloader = sftp.download("/path/to/remote.txt", "/path/to/local.txt")
I suspect that if I dig into the code, it will check whether the parameters are strings and, if not, assume they are IO handles.
See ri Net::SFTP::Operations::Download for more info.
Here's an excerpt from the current download! code, and you can see how the problem occurred:
def download!(remote, local=nil, options={}, &block)
  require 'stringio' unless defined?(StringIO)
  destination = local || StringIO.new
  result = download(remote, destination, options, &block).wait
  local ? result : destination.string
end
local was passed in as a Pathname. The code checks whether something was passed in, but not what that something is. If nothing is passed in, it substitutes something with IO-like features, which is what StringIO provides for the in-memory caching.
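To illustrate that in-memory path: with no local destination, download! buffers the file into the StringIO and hands back its contents as a String.

# No local given: returns the remote file's contents as a String
data = sftp.download!("/path/to/remote.txt")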
Apparently you can't use the result of Rails.root.join directly, which was causing the problem. It is really stupid though, because it would still download part of the file before failing.
Changed:
sftp.download!(File.join('users', 'import', f.name), Rails.root.join('processing_files', 'download_files', f.name))
To:
sftp.download!(File.join('users', 'import', f.name), File.join('processing_files', 'download_files', f.name))
The remote argument can be a Pathname object, while local, when given, should be a String or else an object that responds to the #write method.
Below is the working code:
local_stringified_path = Rails.root.join('processing_files', f.name).to_s
sftp.download!(Pathname.new('/users/import'), local_stringified_path)
For all the curious minds, please read below to understand this behaviour.
The issue NoMethodError: undefined method `close' for #<Pathname:0x007fc8fdb50ea0> happens exactly here, in the #on_read method; below is a snippet of the statements concerned.
if response.eof?
  update_progress(:close, entry)
  entry.sink.close # ERRORED-OUT LINE: ideally, at EOF, the file IO handle is supposed to be closed
WHAT IS entry.sink?
We already know that the #download! method takes two args, as below:
sftp.download!(remote, local)
The given args remote and local are converted to an Entry object here:
[Entry.new(remote, local, recursive?)]
and Entry is nothing but a Struct:
Entry = Struct.new(:remote, :local, :directory, :size, :handle, :offset, :sink)
Okay, then what is the sink attribute? We will jump to that right away.
Once the remote file has been opened for reading, the #on_open method sets this sink attribute to a file handle. Find the snippet below:
entry.sink = entry.local.respond_to?(:write) ? entry.local : ::File.open(entry.local, "wb")
The File.open fallback actually happens only when the given local object doesn't implement its own #write method. In our scenario, Pathname objects do respond to #write, so the Pathname itself becomes the sink.
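You can check this yourself in IRB (Ruby 2.1+, where Pathname#write exists; the path is made up):

require 'pathname'

path = Pathname.new('/tmp/example.csv')
path.respond_to?(:write) # => true, so the Pathname itself becomes the sink
path.respond_to?(:close) # => false, so entry.sink.close raises NoMethodError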
Below are some snippets of the console output I inspected between multiple download-chunk calls while debugging this, which show entry and entry.sink holding the objects discussed above.
Here I chose my remote to be a Pathname object and local to be a String path, which gives entry.sink a proper File value, and the download succeeds:
0> entry
=> #<struct Net::SFTP::Operations::Download::Entry remote=#<Pathname:214010463.xml>, local="214010463.xml", directory=nil, size=nil, handle="1", offset=32000, sink=#<File:214010463.xml>>
0> entry.sink
=> #<File:214010463.xml>
Related
I'm currently trying to make a level loading system for a game.
function love.filedropped(file)
  ofile=io.open(file:getFilename(),"r")
  io.input(ofile)
  file:close
  levelc=io.read()
  for i=1,levelc do
    levels[i]=io.read()
    print levels[i]
  end
levelc should be the first line of the file, and file:getFilename is the file to open (path included). The project gives an error message on startup, and I've used a similar structure before, but for an output. The error is at line 30, which is the levelc=io.read().
I've tried changing the name of the file pointer (it was "f" before, now "ofile"), and I've tried using io.read("*l") instead of io.read(), but I get the same result.
EDITS:
- this is a love.filedropped(file) callback
- I need to open other files from a .txt later and I don't really understand how to do that
The parameter given by love.filedropped is a DroppedFile.
In your case, File:lines() could be helpful.
For example:
function love.filedropped(file)
  -- Open for reading
  file:open("r")
  -- Iterate over the lines
  local i = 0
  for line in file:lines() do
    i = i + 1
    levels[i] = line
    print(i, levels[i]) -- Notice the parentheses missing in your code
  end
  -- Close the file
  file:close()
end
Notice that love2d usually only allows reading/writing files within the save or working directory. Dropped files are an exception.
Unrelated to this answer but things I noticed in your code:
- Use locals; ofile should be local
- file:close() requires parentheses, as it's a function call
- Same for the print
- The filedropped callback has no end
You mentioned reading other files too. To do so, you can either:
- use love.filesystem.newFile and a similar approach as before, or
- use the recommended one-liner love.filesystem.lines.
I am trying to check whether a particular pdf file exists on AWS S3 using aws-sdk gem (version 2) inside ruby on rails application.
I have the AWS connection established and currently using exists? method:
puts @bucket.objects(prefix: "path/sample_100.pdf").exists?
On running the above statement, I get the NoMethodError below:
undefined method 'exists?' for Aws::Resources::Collection
I checked a few documents, but they weren't of much help. Is there any other way to achieve this?
Thanks in advance
I'm not a Ruby developer myself, but I might be able to suggest something.
The usual way to check whether an object exists in Amazon S3 is using the HEAD Object operation. Basically, it returns the metadata (but no content) of an object if it exists, or a 404 error if it doesn't. It's like GET Object, but without the contents of the object.
I just looked up in the AWS SDK for Ruby API Reference and found this method:
http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Client.html#head_object-instance_method
Take a look at that, it's probably what you are looking for.
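For example, a minimal sketch against aws-sdk v2 (the bucket and key names are made up):

require 'aws-sdk'

s3 = Aws::S3::Client.new
begin
  s3.head_object(bucket: 'my-bucket', key: 'path/sample_100.pdf')
  puts 'object exists'
rescue Aws::S3::Errors::NotFound
  puts 'object does not exist'
end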
I'd recommend using the much simpler S3 gem (https://github.com/qoobaa/s3) if you only need to deal with S3. You'll be able to do it this way:
object = bucket.objects.find("example.pdf")
As mentioned by Bruno, you can use head_object to get info on the file, without actually fetching it. If it is not found (or other problems, such as permissions), an exception will be raised. So if head_object returns, the file exists.
Here's a file that exists:
> head = s3.head_object(bucket: bucket, key: path)
=> #<struct Aws::S3::Types::HeadObjectOutput last_modified=2020-06-05 16:18:05 +0000, content_length=553, etc...>
And here's one that does not exist, and the exception it raises:
> path << '/not-really'
=> "example/file/not-really"
> head = s3.head_object(bucket: bucket, key: path)
Aws::S3::Errors::NotFound
Traceback (most recent call last):
1: from (irb):18
Aws::S3::Errors::NotFound ()
And here's how you can roll your own s3_exists? method:
def s3_exists?(bucket, path)
  s3.head_object(bucket: bucket, key: path)
  true
rescue
  false
end
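Usage is then a simple boolean check (the bucket and key here are hypothetical, and s3 is assumed to resolve to an Aws::S3::Client in scope):

s3_exists?('my-bucket', 'path/sample_100.pdf') # => true or false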
I want to change the behaviour of my ESP module if one of my parameters was changed before it was restarted. I mean something like this:
if (????) then
  print("default value")
else
  print("modified value")
end
First I thought of writing a flag to a file, but that causes an error during boot if the file doesn't exist yet.
Any better idea?
If you want to store values beyond reboot you have to store them in some non-volatile memory. So using a file is a good way as you already suggested.
Unfortunately, you did not provide the error message you get when it does not exist yet, and you did not say whether it's the flag or the file that is missing.
What you have to do is handle the error. If the file does not exist, ask the user to create a new one, or create a file with default content from your program.
The same goes for the flag: if the file does not contain a flag yet, use a default value or ask the user to provide one.
It's not bad or wrong to get errors, as long as you learn from them or handle them properly.
io.open(filename[,mode]) returns nil plus an error message in case of an error.
So simply do something like:
local fileName = "C:\\superfile.txt"
local fileHandle, errorMsg = io.open(fileName)
if not fileHandle then
print("File access error: ", errorMsg)
-- add some error handling here
end
So in case you don't have that file, you'll get:
File access error: C:\superfile.txt: No such file or directory
I have been using open_uri to pull down an ftp path as a data source for some time, but suddenly found that I'm getting nearly continual "530 Sorry, the maximum number of allowed clients (95) are already connected."
I am not sure whether my code is faulty or someone else is accessing the server, and unfortunately there's no way for me to know for sure who's at fault.
Essentially I am reading FTP URI's with:
def self.read_uri(uri)
  begin
    uri = open(uri).read
    uri == "Error" ? nil : uri
  rescue OpenURI::HTTPError
    nil
  end
end
I'm guessing that I need to add some additional error handling code in here...
I want to be sure that I take every precaution to close down all connections so that my connections are not the problem in question; however, I thought that open_uri + read would take this precaution, versus using the Net::FTP methods.
The bottom line is I've got to be 100% sure that these connections are being closed and I don't somehow have a bunch of open connections lying around.
Can someone please advise on correctly using read_uri to pull in an FTP file with a guarantee that it closes the connection? Or should I shift the logic over to Net::FTP, which could yield more control over the situation if open_uri is not robust enough?
If I do need to use the Net::FTP methods instead, is there a read method that I should be familiar with, versus pulling the file down to a tmp location and then reading it (as I'd much prefer to keep it in a buffer vs the fs if possible)?
I suspect you are not closing the handles. OpenURI's docs start with this comment:
It is possible to open http/https/ftp URL as usual like opening a file:
open("http://www.ruby-lang.org/") {|f|
f.each_line {|line| p line}
}
I looked at the source and the open_uri method does close the stream if you pass a block, so, tweaking the above example to fit your code:
uri = ''
open("http://www.ruby-lang.org/") {|f|
  uri = f.read
}
Should get you close to what you want.
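Applied to the original method, a sketch (the block form guarantees the handle is closed even if the read raises):

def self.read_uri(uri)
  content = open(uri) { |f| f.read }
  content == "Error" ? nil : content
rescue OpenURI::HTTPError
  nil
end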
Here's one way to handle exceptions:
# The list of URLs to pass in, to check if one times out or is refused.
urls = %w[
  http://www.ruby-lang.org/
  http://www2.ruby-lang.org/
]

# the method
def self.read_uri(urls)
  content = ''
  open(urls.shift) { |f| content = f.read }
  content == "Error" ? nil : content
rescue OpenURI::HTTPError
  retry if urls.any?
  nil
end
Try using a block:
data = open(uri){|f| f.read}
My application accepts file uploads, with some metadata being stored in the DB, and the file itself on the file system. I am trying to make the metadata visible in the application before the file upload and post-processing are finished, but because saves are transactional, I have had no success. I have tried the callbacks and calling create_or_update() instead of save(), all to no avail. Is there a way to do this without re-writing the guts of ActiveRecord::Base? I've even attempted naming the method make() instead of save(), but perplexingly that had no effect.
The code below "works" fine, but the database is not modified until everything else is finished.
def save(upload)
  uploadFile = upload['datafile']
  originalName = uploadFile.original_filename
  self.fileType = File.extname(originalName)
  create_or_update()

  # write the file
  File.open(self.filePath, "wb") { |f| f.write(uploadFile.read) }

  begin
    musicFile = TagLib::File.new(self.filePath())
    self.id3Title = musicFile.title
    self.id3Artist = musicFile.artist
    self.id3Length = musicFile.length
  rescue TagLib::BadFile => exc
    logger.error("Failed to id track: \n #{exc}")
  end

  if self.fileType == '.mp3'
    convertToOGG()
  end

  create_or_update()
end
Any ideas would be quite welcome, thanks.
Have you considered processing the file upload as a background task? Save the metadata as normal and then perform the upload and post-processing using Delayed Job or similar. This Railscast has the details.
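A rough sketch of that approach, assuming the delayed_job gem; the Track model and process_upload! method are made-up names:

# Persist the metadata right away, queue the slow work for a worker.
class Track < ActiveRecord::Base
  def process_upload!
    # read ID3 tags, convert to OGG, etc.
  end
end

track = Track.create!(file_type: '.mp3') # the row is committed immediately
track.delay.process_upload!              # delayed_job enqueues the method call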
You're getting the meta-data from the file, right? So is the problem that the conversion to OGG is taking too long, and you want the data to appear before the conversion?
If so, John above has the right idea -- you're going to need to accept the file upload, and schedule a conversion to occur sometime in the future.
The main reason why is that your rails thread will process the OGG conversion and can't respond to any other web-requests until it's complete. Blast!
Some servers compensate for this by having multiple rails threads, but I recommend a background queue (use BJ if you host yourself, or Heroku's background jobs if you host there).