I have a block of code that creates Tempfiles:
@tmp_file = Tempfile.new("filename")
I keep them closed after creation:
@tmp_file.close unless @tmp_file.closed?
When I need to add data to a temp file, I open it and append as below:
def add_row_to_file(row)
  @tmp_file.open                        # reopen; the read position starts at the beginning
  @tmp_file.read                        # read to EOF so the print below appends
  @tmp_file.print(row.to_json + "\n")
end
All is well, but for testing I have stubbed Tempfile as below, and it raises an error when the test reaches add_row_to_file(row):
buffers = {}
Tempfile.stub(:new) do |file_name|
  buffer = StringIO.new
  buffers[file_name] = buffer
end
The error message is:
Failure/Error: ]],
NoMethodError:
private method `open' called for #<StringIO:0x00000010b867c0>
I want to keep the temp files closed on creation because there is an OS-level limit on the number of open files (I have to upload a lot of tempfiles to S3),
but in the tests I have a problem with StringIO's private open method.
Any idea how to solve this problem? Thanks.
I have a workaround, which is to skip closing the StringIO in the test environment:
@tmp_file.close unless @tmp_file.closed? || Rails.env.test?
and to update add_row_to_file(row) as below:
def add_row_to_file(row)
  @tmp_file.open unless Rails.env.test?
  @tmp_file.read unless Rails.env.test?
  @tmp_file.print(row.to_json + "\n")
end
Apart from the workaround above, if we want 100% code coverage in tests we can try the following.
Do not close the stream if it is a StringIO:
@tmp_file.close unless @tmp_file.is_a?(StringIO)
so we don't need to reopen it in tests:
def add_row_to_file(row)
  # Tempfile responds to to_path, so File.open can reopen it in append mode
  @tmp_file = File.open(@tmp_file, 'a+') unless @tmp_file.is_a?(StringIO)
  @tmp_file.print(row.to_json + "\n")
end
This way we can test the Tempfile logic without creating real files in the test environment, while keeping 100% code coverage.
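With this in place, a spec can inspect what was written through the buffers hash from the stub above (a sketch; row and the "filename" key are placeholders matching the code above):
it "appends one JSON line per row" do
  add_row_to_file(row)
  expect(buffers["filename"].string).to eq(row.to_json + "\n")
end
Since the StringIO is never closed in tests, StringIO#string returns the accumulated buffer directly.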
This is my code (adapted from "How do I temporarily redirect stderr in Ruby?", which can't be used as-is because a native extension writes to stdout directly):
def silence_stdout(log = '/dev/null')
  orig = $stdout.dup
  $stdout.reopen(File.new(log, 'w'))
  begin
    yield
  ensure
    $stdout = orig
  end
end

silence_stdout('ttt.log') do
  # do something
end
But I have a problem: the file is only filled once puma stops (Ctrl+C).
Should I perhaps close the file? I do not understand how to do it; all my attempts to close it end with "log writing failed. closed stream" or "no block given (yield)".
Any advice?
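A sketch of one possible fix (untested with puma): turn off buffering on the redirected stream, and restore at the file-descriptor level, since a plain $stdout = orig does not undo the reopen for code writing straight to fd 1:
def silence_stdout(log = '/dev/null')
  orig = $stdout.dup
  file = File.new(log, 'w')
  $stdout.reopen(file)
  $stdout.sync = true       # flush every write so the log fills as it happens
  begin
    yield
  ensure
    $stdout.reopen(orig)    # restore the original file descriptor
    orig.close
    file.close              # release the log file handle explicitly
  end
end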
I'm new to ruby and rspec.
I have a module that interacts with S3 in ruby.
In my code I:
create a new S3 instance: s3 = AWS::S3.new()
Get my bucket: @s3bucket = s3.buckets[@bucket]
Retrieve my S3Object: object = @s3bucket.objects[key]
Finally, I save the object to a local file:
File.open(local_filename, 'wb') do |s3file|
  object.read do |chunk|
    return completed if stop?
    s3file.write(chunk)
  end
end
My code works well, but I'm having problems unit testing it;
specifically, I'm having problems mocking the object.read do |chunk| part.
No matter what I try, chunk comes out empty.
Can some one help?
Thanks.
Try this:
s3_object = class_double("S3Object")
allow(s3_object).to receive(:read).and_return(<whatever you want>)
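Since object.read in the question yields chunks to a block rather than returning them, stubbing a return value alone leaves chunk empty; RSpec's and_yield can simulate the chunked reads (a sketch):
allow(s3_object).to receive(:read).and_yield("first chunk").and_yield("second chunk")
Each and_yield invokes the block once, so the write happens per chunk just like a real streamed read.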
If you need to store API responses in your tests without making repeated calls, check out https://github.com/vcr/vcr
If you want to mock this:
create a new S3 instance: s3 = AWS::S3.new()
you do:
allow(AWS::S3).to receive(:new).and_return(<the return value of the method>)
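To drive the whole chain from the question, you can keep stubbing each collaborator with plain doubles (a sketch; the bucket and key names are made up):
s3_object = double("S3Object")
allow(s3_object).to receive(:read).and_yield("chunk-1").and_yield("chunk-2")
bucket = double("Bucket", objects: { "some/key" => s3_object })
allow(AWS::S3).to receive(:new).and_return(double("S3", buckets: { "my-bucket" => bucket }))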
You can use VCR as suggested previously, but you may run into issues on a team if you and a colleague run your tests at the same time after both deleting the same cassette.
I have a Rails application where users upload audio files. I want to send them to a third-party server, and I need to connect to the external server using WebSockets, so I need my Rails application to be a WebSocket client.
I'm trying to figure out how to properly set that up. I'm not committed to any gem just yet, but the 'faye-websocket' gem looks promising. I even found a similar answer in "Sending large file in websocket before timeout"; however, using that code doesn't work for me.
Here is an example of my code:
@message = Array.new
EM.run {
  ws = Faye::WebSocket::Client.new("wss://example_url.com")
  ws.on :open do |event|
    File.open('path/to/audio_file.wav', 'rb') do |f|
      ws.send(f.gets)
    end
  end
  ws.on :message do |event|
    @message << [event.data]
  end
  ws.on :close do |event|
    ws = nil
    EM.stop
  end
}
When I use that, I get an error from the recipient server:
No JSON object could be decoded
This makes sense, because I don't believe it's properly formatted for faye-websocket. Their documentation says:
send(message) accepts either a String or an Array of byte-sized integers and sends a text or binary message over the connection to the other peer; binary data must be encoded as an Array.
I'm not sure how to accomplish that. How do I load binary into an array of integers with Ruby?
I tried modifying the send command to use the bytes method:
File.open('path/to/audio_file.wav', 'rb') do |f|
  ws.send(f.gets.bytes)
end
But now I receive this error:
Stream was 19 bytes but needs to be at least 100 bytes
I know my file is 286KB, so something is wrong here. I get confused as to when to use File.read vs. File.open vs. File.new.
Also, maybe this gem isn't the best for sending binary data. Does anyone have success sending binary files in Rails with websockets?
Update: I did find a way to get this working, but it is terrible for memory. For other people who want to load small files, you can simply use File.binread and the unpack method:
ws.on :open do |event|
  f = File.binread 'path/to/audio_file.wav'
  ws.send(f.unpack('C*'))
end
However, if I use that same code on a mere 100MB file, the server runs out of memory. It depletes the entire available 1.5GB on my test server! Does anyone know how to do this in a memory-safe manner?
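One sketch that at least bounds memory use is to stream the file in fixed-size chunks (whether the receiving server reassembles multiple binary frames into a single upload depends on its protocol; this only addresses the memory side):
ws.on :open do |event|
  File.open('path/to/audio_file.wav', 'rb') do |f|
    until f.eof?
      chunk = f.read(65_536)   # 64 KB at a time instead of the whole file
      ws.send(chunk.bytes)     # an Array of integers is sent as a binary frame
    end
  end
end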
Here's my take on it:
# do only once when initializing Rails:
require 'iodine/client'
Iodine.force_start!

# this sets the callbacks.
# on_message is always required by Iodine.
options = {}
options[:on_message] = Proc.new do |data|
  # this will never get called
  puts "incoming data ignored? for:\n#{data}"
end
options[:on_open] = Proc.new do
  # believe it or not - this variable belongs to the websocket connection.
  @started_upload = true
  # set a task to send the file,
  # so the on_open initialization doesn't block incoming messages.
  Iodine.run do
    # read the file in chunks and write each one to the websocket.
    File.open('filename', 'rb') do |f|
      buffer = String.new # recycle the String's allocated memory
      write f.read(65_536, buffer) until f.eof?
      @started_upload = :done
    end
    # close the connection
    close
  end
end
options[:on_close] = Proc.new do |data|
  # can we notify the user that the file was uploaded?
  if @started_upload == :done
    # we did it :-)
  else
    # what happened?
  end
end

# will not wait for a connection:
Iodine::Http.ws_connect "wss://example_url.com", options
# OR
# will wait for a connection, raising errors if it failed.
Iodine::Http::WebsocketClient.connect "wss://example_url.com", options
It's only fair to mention that I'm Iodine's author, which I wrote for use in Plezi (a RESTful WebSocket real-time application framework you can use standalone or within Rails)... so I'm super biased ;-)
I would avoid gets, because its result could be the whole file or a single byte, depending on the location of the next End Of Line (EOL) marker... read gives you better control over each chunk's size.
I'm developing a system in Python, and one piece of functionality I need is the ability to have console output go to both the console and a user-specified file, replicating the Diary function in MATLAB. I have the following, which works perfectly well in both IDLE on Windows and the Python command line on Ubuntu (this all exists inside a module that gets loaded):
import sys

class diaryout(object):
    def __init__(self):
        self.terminal = sys.stdout
        self.save = None

    def __del__(self):
        try:
            self.save.flush()
            self.save.close()
        except:
            # do nothing, just catch the error; maybe self was instantiated but never opened
            pass
        self.save = None

    def dclose(self):
        self.__del__()

    def write(self, message):
        self.terminal.write(message)
        self.save.write(message)

    def dopen(self, outfile):
        self.outfile = outfile
        try:
            self.save = open(self.outfile, "a")
        except Exception as e:
            # just pass the error up so the Diary function can handle it
            raise e

this_diary = None  # module-level state so the isinstance check works on the first call

def Diary(outfile = None):
    global this_diary
    if outfile is None:
        # None passed, so close the diary file if one is open
        if isinstance(this_diary, diaryout):
            sys.stdout = this_diary.terminal  # set stdout back to the real stdout
            this_diary.dclose()               # flush and close the file
            this_diary = None                 # "delete" it
    else:
        # file passed, so open it and set it as the output
        this_diary = diaryout()  # instantiate
        try:
            this_diary.dopen(outfile)  # open & test that it opened
        except IOError:
            this_diary = None  # uninstantiate, since we already instantiated
            raise IOError("Can't open %s for append!" % outfile)
        except TypeError:
            this_diary = None
            raise TypeError("Invalid input detected - must be string filename or None: %s" % Diary.__doc__)
        sys.stdout = this_diary  # set stdout to the diary object
Far superior to both IDLE and the plain Python command line, I'm using IPython; herein lies my problem. I can turn on the "diary" perfectly fine with no error, but the display on the console gets garbled, and the output file becomes similarly garbled (the attached screenshot shows this). Everything goes back to normal when I undo the redirection with Diary(None). I have tried editing the code so that it never even writes to the file, with no effect. It seems almost as if something is forcing an unsupported character set, or something else I don't understand.
Anyone have an idea about this?
I have been using open_uri to pull down an ftp path as a data source for some time, but suddenly found that I'm getting nearly continual "530 Sorry, the maximum number of allowed clients (95) are already connected."
I am not sure if my code is at fault or if someone else is accessing the server; unfortunately, there's seemingly no way for me to know for sure who's to blame.
Essentially I am reading FTP URI's with:
def self.read_uri(uri)
  begin
    uri = open(uri).read
    uri == "Error" ? nil : uri
  rescue OpenURI::HTTPError
    nil
  end
end
I'm guessing that I need to add some additional error handling code in here...
I want to be sure that I take every precaution to close down all connections so that mine are not the problem in question; however, I thought that open-uri + read would take this precaution, unlike the Net::FTP methods.
The bottom line is I've got to be 100% sure that these connections are being closed and that I don't somehow have a bunch of open connections lying around.
Can someone please advise as to correctly using read_uri to pull in ftp with a guarantee that it's closing the connection? Or should I shift the logic over to Net::FTP which could yield more control over the situation if open_uri is not robust enough?
If I do need to use the Net::FTP methods instead, is there a read method that I should be familiar with vs pulling it down to a tmp location and then reading it (as I'd much prefer to keep it in a buffer vs the fs if possible)?
I suspect you are not closing the handles. OpenURI's docs start with this comment:
It is possible to open http/https/ftp URL as usual like opening a file:
open("http://www.ruby-lang.org/") {|f|
f.each_line {|line| p line}
}
I looked at the source and the open_uri method does close the stream if you pass a block, so, tweaking the above example to fit your code:
uri = ''
open("http://www.ruby-lang.org/") {|f|
  uri = f.read
}
That should get you close to what you want.
Here's one way to handle exceptions:
# The list of URLs to pass in, to check if one times out or is refused.
urls = %w[
  http://www.ruby-lang.org/
  http://www2.ruby-lang.org/
]

# the method
def self.read_uri(urls)
  content = ''
  open(urls.shift) { |f| content = f.read }
  content == "Error" ? nil : content
rescue OpenURI::HTTPError
  retry if urls.any?
  nil
end
Try using a block:
data = open(uri){|f| f.read}
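If you do move to Net::FTP, a minimal sketch that keeps the download in memory (assuming anonymous FTP and a URI that parses cleanly; the name read_ftp_uri is made up) might look like:
require 'net/ftp'
require 'uri'

def self.read_ftp_uri(uri)
  uri = URI.parse(uri)
  content = ''
  Net::FTP.open(uri.host) do |ftp|  # the block form closes the connection for you
    ftp.login                       # anonymous login
    # a nil local file keeps the download in memory instead of on disk
    ftp.getbinaryfile(uri.path, nil) { |chunk| content << chunk }
  end
  content == "Error" ? nil : content
rescue Net::FTPError
  nil
end
The explicit block form guarantees the control connection is closed even if the transfer raises.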