Connecting Ruby (Rails) to Node.js through a pipe

I have a Rails app that needs to make use of a JavaScript library on the server. Up until now I have been running system commands from Rails to Node.js whenever this is necessary. However, I have a particularly computationally intensive task that has made it necessary to cache data to speed it up. I also have to pass large inputs to the node program, and as a result I've hit the buffer size limit on input to the node program. I am currently just sending the input to separate node processes multiple times, in chunks small enough to fit in the buffer, but this is causing performance problems because I no longer get to take advantage of caching across as many runs.

I would like to use a pipe to do this, but my pipe hits the buffer as well, and I don't know how to empty it. So far I have:
# ruby file
output = []
node_pipe = IO.popen("nodejs /home/user/node_program.js", "w+")
10_000.times do |time|
  node_pipe.write("a lot of stuff")
  # here I would like to read contents and push contents to output array but still be
  # able to write to the same process in the next loop to take advantage of the cache
end
// node_program.js
var input = process.stdin;
var cache = {};

input.resume();
input.on('data', function(chunk) {
  cache[chunk] = library_function(chunk);
  console.log(String(other_library_function(chunk)));
});
Any suggestions?

Related

Read a pickle from another pipeline in Beam?

I'm running batch pipelines in Google Cloud Dataflow. I need to read objects in one pipeline that another pipeline has previously written. The easiest way to serialise those objects is pickle / dill.
The writing works well, writing a number of files, each with a pickled object. When I download the file manually, I can unpickle the file. Code for writing: beam.io.WriteToText('gs://{}', coder=coders.DillCoder())
But the reading breaks every time, with one of the errors below. Code for reading: beam.io.ReadFromText('gs://{}*', coder=coders.DillCoder())
Either...
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 266, in load
obj = pik.load()
File "/usr/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
KeyError: '\x90'
...or...
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 423, in find_class
return StockUnpickler.find_class(self, module, name)
File "/usr/lib/python2.7/pickle.py", line 1124, in find_class
__import__(module)
ImportError: No module named measur
(the class of the object sits in a path with measure, though not sure why it misses the last character there)
I've tried using the default coder, and a BytesCoder, and pickling & unpickling as a custom task in the pipeline.
My working hypothesis is that the reader is splitting the file by line, and so treating a single pickle (which has newlines within it) as multiple objects. If so, is there a way of avoiding that?
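That hypothesis is easy to check locally: a pickle payload can legitimately contain newline bytes, which is exactly what a line-oriented reader would split on. A quick illustration, independent of Beam (the sample dict is just made up):

import pickle

# A pickled object whose payload happens to contain the newline byte 0x0A.
payload = pickle.dumps({'text': 'line one\nline two', 'n': 10})

# True for payloads like this one, so a line-based reader would see
# several "records" inside a single pickle.
print(b'\n' in payload)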
I could attempt to build a reader myself, but I'm hesitant since this seems like a well-solved problem (e.g. Beam already has a format to move objects from one pipeline stage to another).
Tangentially related: How to read blob (pickle) files from GCS in a Google Cloud DataFlow job?
Thank you!
ReadFromText is designed to read newline-separated records in text files, and hence is not suitable for your use-case. Implementing FileBasedSource is not a good solution either, since it's designed for reading large files with multiple records (and usually splits these files into shards for parallel processing). So, in your case, the current best solution for the Python SDK is to implement a source yourself. This can be as simple as a ParDo that reads files and produces a PCollection of records. If your ParDo produces a large number of records, consider adding an apache_beam.transforms.util.Reshuffle step after it, which will allow runners to parallelize the following steps better. For the Java SDK we have FileIO, which already provides transforms to make this a bit easier.
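As a rough sketch of that ParDo approach (the DoFn name, the bucket pattern, and the assumption that each matched file holds whole pickles are mine, not part of Beam):

import pickle
import apache_beam as beam
from apache_beam.io.filesystems import FileSystems
from apache_beam.transforms.util import Reshuffle

class ReadPickledFiles(beam.DoFn):
    # Hypothetical DoFn (not a Beam built-in): expands a glob and yields
    # every pickled object found in each matching file.
    def process(self, file_pattern):
        for metadata in FileSystems.match([file_pattern])[0].metadata_list:
            with FileSystems.open(metadata.path) as handle:
                while True:
                    try:
                        yield pickle.load(handle)
                    except EOFError:
                        break

with beam.Pipeline() as p:
    objects = (
        p
        | beam.Create(['gs://my-bucket/pickles/*'])  # placeholder pattern
        | beam.ParDo(ReadPickledFiles())
        | Reshuffle())  # lets the runner rebalance the downstream steps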
Encoding as string_escape escapes the newlines, so the only newlines that Beam sees are those between pickles:
class DillMultiCoder(DillCoder):
    """
    Coder that allows multi-line pickles to be read.

    After an object is pickled, the bytes are encoded as `unicode_escape`,
    meaning newline characters (`\n`) aren't in the string.

    Previously, the presence of newline characters confused the Dataflow
    reader, as it can't discriminate between a new object and a newline
    within a pickle string.
    """
    def _create_impl(self):
        return coder_impl.CallbackCoderImpl(
            maybe_dill_multi_dumps, maybe_dill_multi_loads)


def maybe_dill_multi_dumps(o):
    # in Py3 this needs to be `unicode_escape`
    return maybe_dill_dumps(o).encode('string_escape')


def maybe_dill_multi_loads(o):
    # in Py3 this needs to be `unicode_escape`
    return maybe_dill_loads(o.decode('string_escape'))
For large pickles, I also needed to set the buffer size much higher, to 8 MB - with the previous buffer size (8 kB), a 120 MB file spun for 2 days of CPU time:
class ReadFromTextPickle(ReadFromText):
    """
    Same as ReadFromText, but with a really big buffer. With the standard 8KB
    buffer, large files can be read on a loop and never finish.

    Also added DillMultiCoder
    """
    def __init__(
            self,
            file_pattern=None,
            min_bundle_size=0,
            compression_type=CompressionTypes.AUTO,
            strip_trailing_newlines=True,
            coder=DillMultiCoder(),
            validate=True,
            skip_header_lines=0,
            **kwargs):
        # needs commenting out, not sure why
        # super(ReadFromTextPickle, self).__init__(**kwargs)
        self._source = _TextSource(
            file_pattern,
            min_bundle_size,
            compression_type,
            strip_trailing_newlines=strip_trailing_newlines,
            coder=coder,
            validate=validate,
            skip_header_lines=skip_header_lines,
            buffer_size=8000000)
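With that in place, the read side of the pipeline can use the new transform in place of the original ReadFromText call (a sketch; the pattern placeholder is copied from the question):

import apache_beam as beam

with beam.Pipeline() as p:
    # DillMultiCoder is already the default coder of ReadFromTextPickle
    objects = p | ReadFromTextPickle('gs://{}*')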
Another approach would be to implement a PickleFileSource inherited from FileBasedSource and call pickle.load on the file - each call would yield a new object. But there's a bunch of complication around offset_range_tracker that looked like more lift than strictly necessary.
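For completeness, a minimal sketch of that alternative, following the PickleFileSource naming from the paragraph above. Marking the source unsplittable to sidestep the offset_range_tracker bookkeeping is my assumption, not something from the original answer:

import pickle
from apache_beam.io import filebasedsource

class PickleFileSource(filebasedsource.FileBasedSource):
    # Yields every pickled object found in each matched file.
    def __init__(self, file_pattern):
        # splittable=False avoids having to honour sub-file offset ranges,
        # at the cost of one bundle per file.
        super(PickleFileSource, self).__init__(file_pattern, splittable=False)

    def read_records(self, file_name, offset_range_tracker):
        with self.open_file(file_name) as handle:
            while True:
                try:
                    yield pickle.load(handle)
                except EOFError:
                    return

# usage: p | beam.io.Read(PickleFileSource('gs://{}*'))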

Why does VkAccessFlagBits include both read bits and write bits?

In vulkan.h, every instance of VkAccessFlagBits appears in a pair that contains a srcAccessMask and a dstAccessMask:
VkAccessFlags srcAccessMask;
VkAccessFlags dstAccessMask;
In every case, according to my understanding, the purpose of these masks is to help designate two sets of operations, such that results of operations in the first set will be visible to operations in the second set. For instance, write operations occurring prior to a barrier should not get hung up in caches but should instead propagate all the way to locations from which they can be read after the barrier. Or something like that.
The access flags come in both READ and WRITE forms:
/* ... */
VK_ACCESS_SHADER_READ_BIT = 0x00000020,
VK_ACCESS_SHADER_WRITE_BIT = 0x00000040,
VK_ACCESS_COLOR_ATTACHMENT_READ_BIT = 0x00000080,
VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT = 0x00000100,
/* ... */
But it seems to me that srcAccessMask should probably always be some sort of VK_ACCESS_*_WRITE_BIT combination, while dstAccessMask should always be a combination of VK_ACCESS_*_READ_BIT values. If that is true, then the READ/WRITE distinction is identical to and implicit in the src/dst distinction, and so it should be good enough to just have VK_ACCESS_SHADER_BIT etc., without READ_ or WRITE_ variants.
Why are there READ_ and WRITE_ variants, then? Is it ever useful to specify that some read operations must fully complete before some other operations have begun? Note that all operations using VkAccessFlagBits produce (I think) execution dependencies as well as memory dependencies. It seems to me that the execution dependencies should be good enough to prevent earlier reads from receiving values written by later writes.
While writing this question I encountered a statement in the Vulkan specification that provides at least part of an answer:
Memory dependencies are used to solve data hazards, e.g. to ensure that write operations are visible to subsequent read operations (read-after-write hazard), as well as write-after-write hazards. Write-after-read and read-after-read hazards only require execution dependencies to synchronize.
This is from the section 6.4. Execution And Memory Dependencies. Also, from earlier in that section:
The application must use memory dependencies to make writes visible before subsequent reads can rely on them, and before subsequent writes can overwrite them. Failure to do so causes the result of the reads to be undefined, and the order of writes to be undefined.
From this I surmise that, yes, the execution dependencies produced by the Vulkan commands that involve these access flags probably do free you from ever having to put a VK_ACCESS_*_READ_BIT into a srcAccessMask field--but that you might in fact want to have READ_ flags, WRITE_ flags, or both in some of your dstAccessMask fields, because apparently it's possible to use an explicit dependency to prevent read-after-write hazards in such a way that write-after-write hazards are NOT prevented. (And maybe vice-versa?)
Like, maybe your Vulkan will sometimes decide that a write does not actually need to be propagated all the way through a particular cache to its final specified destination for the sake of a subsequent read operation, IF Vulkan happens to know that that read operation will simply read from that same cache, saving some time? But then a second write might happen, and write to a different cache, and there'll be two caches left in a race (with the choice of winner undefined) to send their two values to the same spot. Or something? Maybe my mental model of these caches is entirely wrong.
It is fairly solidly established, at least, that memory barriers are confusing.
Let's go over all the possibilities:
read–read — well yeah, that one is pretty useless. Khronos seems to agree (#131) that it is a pointless value in src (basically equivalent to 0).
read–write — an execution dependency should be sufficient to synchronize without this. Khronos seems to agree (#131) that it is a pointless value in src (basically equivalent to 0).
write–read — that's the obvious and most common one.
write–write — similar reasoning to write–read above: without it, the order of the writes would be undefined. It is a bit pointless for most situations to write something you haven't even read in between, but hey, now you have a way to synchronize it.
You can provide a bitmask combining several of these flags in both src and dst, in which case it makes sense to have both kinds of flags so the driver can sort the dependencies out for you. (I don't expect performance overhead from this at the API level, so it is allowed as a convenience.)
From an API design perspective, it could mean adding a different enum for srcAccess. But perhaps the _READ variants could simply be forbidden in srcAccess through "Valid Usage", which makes this argument weak. The src == READ variants might have been kept because they are benign.

Does Ruby on Rails have read stream for files?

Does Rails have a way to implement read streams like Node.js for file reading?
i.e.
fs.createReadStream(__dirname + '/data.txt');
as opposed to
fs.readFile(__dirname + '/data.txt');
Where I see ruby has
file = File.new("data.txt")
I am unsure of the equivalent in Ruby/Rails for creating a stream and would like to know if this is possible. The reason I ask is memory management: a stream will be delivered piece by piece, as opposed to one whole file.
If you want to read a file in Ruby piece-by-piece, there are a host of methods available to you.
IO#each_line/IO::foreach, also implemented in File, iterate over each line of the file. Neither reads the whole file into memory; instead, both simply read up until the next newline, return it, and pause reading, barring a possible buffer.
IO#read/IO::read takes a length parameter, which allows you to tell it to read up to length bytes from the file. This will only read that many, and not the whole thing.
IO::binread does the same as IO::read, but will open the file in binary mode.
IO#readpartial appears to be very similar or identical to IO#read, but is also worth looking at.
IO#getc and IO#gets both read from the file until they reach the end of what they'll return, as far as I can tell.
There are several more worth looking at as well.

ChromeWorker to write a huge file

In my extension, I need to write a huge file (say around 20 gigs) to the disk. Currently I am doing it in the main thread, but file creation is a very expensive operation. I was about to move the whole file creation process to a ChromeWorker, but based on https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Functions_and_classes_available_to_workers I cannot have access to nsIFile from a ChromeWorker.
So my questions are:
1. Is it possible to access Cc, Ci, and Cu from within a ChromeWorker?
2. If not, what would be the most efficient way to create and fill large files in Firefox? Note that I need to write the file based on segments and offsets (Ci.nsISeekableStream).
It's not possible to access nsIFile from a ChromeWorker. But nsIFile is a horrible, synchronous option anyway.
Go with OS.File: https://developer.mozilla.org/en-US/docs/Mozilla/JavaScript_code_modules/OSFile.jsm
On that page go to the link for usage on workers: https://developer.mozilla.org/docs/Mozilla/JavaScript_code_modules/OSFile.jsm/OS.File_for_workers
On the main thread, OS.File returns promises.
In a worker they are synchronous. Wrap your OS.File calls in the worker with a try-catch: when an error occurs (like OS.File.remove with the ignoreAbsent option set to false), the catch will receive the OS.File.Error object.
Great move to ChromeWorker btw! I'm a huge fan of ChromeWorkers. I wrote a simple example of jsm using chromeworker here: https://github.com/Noitidart/jpm-chromeworker
For segments, you'll have to OS.File.open and then call .setPosition() on the return value; then you can read a certain number of bytes from that position, or write, or whatever. It's awesome stuff. OS.File is the new and recommended way to do file operations. It's been around a while now, since about Firefox 29 or before that.

Suds is not reusing cached WSDLs and XSDs, although I expect it to

I'm pretty sure suds is not caching my WSDLs and XSDs like I expect it to. Here's how I know that cached objects are not being used:
It takes about 30 seconds to create a client: client = Client(url)
The logger entries show consistent digestion of the XSD and WSDL files during the entire 30 seconds
Wireshark is showing consistent TCP traffic to the server storing the XSD and WSDL files during the entire 30 seconds
I see the files in the cache being updated each time I run my program
I have a small program that creates a suds client, sends a single request, gets the response, then ends. My expectation is that each time I run the program, it should fetch the WSDL and XSD files from the file cache, not from the URLs. Here's why I think that:
client.options.cache.duration is set to ('days', 1)
client.options.cache.location is set to c:\docume~1\mlin\locals~1\temp\suds and I see the cache files being generated and re-generated each time I run the program
For a moment I thought that maybe the cache is not reused between runs of a program, but I don't think a file cache would be used if that were the case, because an in-memory cache would do just fine
Am I misunderstanding how suds caching is supposed to work?
The problem is in the suds library itself. In cache.py, although ObjectCache.get() is always getting a valid file pointer, it's hitting an exception (EOFError) doing pickle.load(fp). When that happens, the file is just downloaded again.
Here's the sequence of events:
DocumentReader.open():
Trying http://172.28.50.249/wsdl/billingServices/v3.0/RequestScrubAddress.wsdl
Loading ObjectCache 51012453-document
Loading pickled object...
Exception raised:
Got None from cache
Downloading... Done
Saving FileCache 51012453-document... Done
So it doesn't really matter that the new cache file was saved, because the same thing happens the next time I run. This happens for ALL of the WSDL and XSD files.
I fixed that problem by opening the cache file in binary mode when reading and writing. Specifically, the changes I made were in cache.py:
1) In FileCache.put(), change this line:
f = self.open(fn, 'w')
to
f = self.open(fn, 'wb')
2) In FileCache.getf(), change this line:
return self.open(fn)
to
return self.open(fn, 'rb')
I don't know the codebase well enough to know if these changes are safe, but it is pulling the objects from the file cache, the service is still running successfully, and loading the client went from 16 seconds down to 2.5 seconds. Much better if you ask me.
Hopefully this fix, or something similar, can be introduced back into the suds main line. I've already sent this to the suds mailing list (fedora-suds-list at redhat dot com).
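Until a fix lands upstream, the same change can be applied at program start without editing the installed library, by wrapping the FileCache.open() helper that the snippets above call into. This is only a sketch built on that assumption; I haven't verified it against every suds version:

from suds.cache import FileCache

_original_open = FileCache.open

def _open_binary(self, fn, *args):
    # Force binary mode so pickled documents round-trip intact
    # (the same effect as the 'wb' / 'rb' edits to cache.py above).
    mode = args[0] if args else 'r'
    if 'b' not in mode:
        mode += 'b'
    return _original_open(self, fn, mode)

FileCache.open = _open_binary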
