Where does stderr go on G-WAN?

Where does stderr go on G-WAN, for example, if I have this ruby script:
require 'time'
START_TIME = Time.now
ENV.each do |k,v|
puts "#{k} => #{v} <br/>"
end
RUNTIME = Time.now - START_TIME
puts "<br/>%.2f ms" % (RUNTIME*1000.0)
$stderr.puts "Test 123"
exit(200)
There is no "Test 123" string in the logs/* directory or in the console when that script is requested.

Where does stderr go on G-WAN?
If I remember correctly, stdin, stderr and stdout are closed in daemon mode because they are unreachable anyway once G-WAN is detached from its parent terminal.
But in 'interactive' mode, where G-WAN still shows console output such as compilation errors and 'graceful' crash reports (servlet crashes that do not crash the server), these file descriptors are left open.
This can easily be checked in any G-WAN instance using scripts that are loaded as modules (C, C++, C#, Java, PH7, Objective-C, etc.), that is, when the scripts share G-WAN's memory space.
But when a CGI process is used instead, because the script's runtime was not loaded as a module (Ruby, Perl, etc.), things are different: G-WAN runs an external process and pipes its stdout.
In this latter case, G-WAN would need another pipe to stream stderr. This was not done, which is why you see nothing when writing to stderr from a Ruby script.
Using more pipes would consume more file descriptors, and consume them faster under server load, so unless you have a compelling use for this feature that is worth sharing with us, implementing it probably does not make much sense.
Ultimately, all scripted languages should use modules loaded in G-WAN's memory space, because this allows far higher performance and concurrency.
Therefore, it is only a matter of time (and help from the Ruby community) before G-WAN can use stderr at no additional cost with Ruby.
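In the meantime, a workaround sketch (my suggestion, not a G-WAN feature): since only stdout is piped back, redirect Ruby's stderr into stdout, or into a log file of your own choosing, before writing to it:
$stderr.reopen($stdout)                    # stderr now rides the piped stdout
# ...or keep it separate in a file (the path is just an example):
# $stderr.reopen("/tmp/gwan-ruby-errors.log", "a")
$stderr.puts "Test 123"                    # now reaches the response or the file
Note that folding stderr into stdout puts the text into the HTTP response body, so the file variant is usually preferable for diagnostics.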

Related

Lua: how to block the io library by blacklisting strings? and a sandbox dilemma

I want to block the io library so that community created scripts don't have access to it.
I could simply create a Lua state without it, but here's the dilemma: community scripts should still be able to use the io library by calling loadfile() on libraries created by the dev team that have wrapped io functions in them.
I found no way to achieve this duality: blocking functions/libraries from community scripts while still allowing those scripts to run the offending functions/libraries when they are wrapped (for sanitization purposes) in another dev-maintained library that community scripts can load with loadfile(). I'm resorting to the ugly method of blacklisting certain strings, so that if a script contains them, it doesn't run. BTW, the blacklist is checked on the C++ side, where the script to run is a string variable that is fed to the VM only if it's clean.
If I blacklist...
"_G", "io.", "io .", "io}", "io }", "io ", "=io", "= io", "{io", "{ io", ",io", ", io", " io"
...is it still possible to call io library functions, or do I have everything covered? The reason blocking _G is a must is this:
a = "i"
b = "o"
boom = _G[a..b]
I want to know if another 'hack' is possible. Also, I welcome alternatives on how I can achieve the aforementioned duality without blacklisting strings.
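To the direct question: no, a substring blacklist cannot cover everything. A sketch of one bypass, assuming the standard package library is present in the state; note that neither "io" nor "_G" appears anywhere in the source:
-- "\105\111" is just the string "io" written with decimal escapes
local hidden = package.loaded["\105\111"]
hidden.write("escaped the blacklist")
Any escape mechanism of the language (string escapes, string.char, concatenation) defeats string filtering, which is why an environment-based approach is safer.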
Write your own loadfile function that checks the location of the loaded files (presumably all dev-maintained libraries live in a defined location) and adds the io library to the environment available to the loaded scripts (using the env parameter of load/loadfile in Lua 5.2+). The sandbox itself won't have access to the io library, but the dev libraries will.
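A minimal sketch of that idea (Lua 5.2+; DEV_DIR and community_source are hypothetical names, not from the question):
local DEV_DIR = "devlibs/"               -- where dev-maintained libraries live

-- Trusted dev libraries get full access, io included.
local trusted_env = setmetatable({}, { __index = _G })

-- Community scripts get a whitelist: no io, no os, no _G.
local sandbox_env = {
  print = print, pairs = pairs, ipairs = ipairs, pcall = pcall,
  string = string, table = table, math = math,
}

-- Replacement loadfile: only dev libraries may be loaded, and they
-- run in trusted_env, so their wrapped io functions keep working.
function sandbox_env.loadfile(path)
  if type(path) ~= "string" or path:sub(1, #DEV_DIR) ~= DEV_DIR then
    return nil, "loadfile is restricted to " .. DEV_DIR
  end
  return loadfile(path, "t", trusted_env)  -- "t": refuse precompiled chunks
end

-- community_source: the script text received from users (hypothetical)
local chunk, err = load(community_source, "=community", "t", sandbox_env)
if chunk then pcall(chunk) else print(err) end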

rails: how to open the ENV in binary mode?

One day my application declared all passwords invalid.
After a tedious search, the problem was found: a cipher initialization vector (just a bunch of random bits) is given to the application via ENV, and Rails had decided to convert this string (which is arbitrary binary data) to UTF-8.
I'm doing basically this, before server start:
ENV["RAILS_ACC_VEC"] = "\xB3n%-\x9E^\xE1\x93 \x17\xEER\x1B\n\x84S"
Rack::Server.start( ...
and later
if Rails.env != "production"
salt = "dummy"
else
salt = ENV["RAILS_ACC_VEC"]
end
The bitstring should be 128 bits long. But it happened to be 176 bits long and contained valid UTF-8. (Obviously, the cipher routines failed utterly with that.)
The application currently runs on Rails 4.2.8 and Ruby 2.4, with the default encoding.
The reason for the problem was found: usually the application is started by the server or from deploy, with no locale in the environment. This time it was started from a console, and that console happened to be set to ISO 8859.
The consequence is also clear: one needs to take care that the application is always started with a definite locale in the ENV, either LC_CTYPE=C (equivalent to no locale) or, maybe better, UTF-8 (in case the application keeps the default config.encoding).
What I am now trying to figure out is when and why Ruby/Rails does such things.
I know that transcoding may happen with an IO object, but there the intended charset can be specified when opening.
It may make some sense that, if the system appears to run in ISO 8859 while Rails itself runs with UTF-8, the ENV may need transcoding when moved from outside to inside. But that holds true only for content that is actually language text, and not everything in the ENV is.
So, how can the ENV be opened in binary mode?
The more ambitious question, then, is: are there more evil dangers of this kind lurking in the Encoding feature?
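For reference, a minimal sketch of the mechanism as I understand it (not an authoritative answer): Ruby tags ENV values with the locale encoding, and force_encoding merely reinterprets that tag without touching the bytes, so it cannot undo a transcoding that already happened before the process saw the value:
env_val = ENV["RAILS_ACC_VEC"]
puts env_val.encoding                               # follows the locale, e.g. UTF-8
raw = env_val.dup.force_encoding(Encoding::BINARY)  # retag as ASCII-8BIT, bytes unchanged
puts raw.bytesize                                   # 16 for a healthy 128-bit IV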
You should not store binary data in the system environment. Operating systems are not designed for that: environment variables are expected to be text, I know of no OS that officially supports arbitrary binary values there, and a null byte (\x00) almost certainly cannot be stored. Allowing raw binary in the environment would also be a security risk, inviting buffer-overflow exploits in other programs that read it. Try a search for 'posix env binary'.
You should store your IV as Base64-encoded text whenever you store it in the environment:
ENV['IV'] = 'VGhpcyBjYW4gYmUgYmluYXJ5Lg==' # from Ruby...
export IV=VGhpcyBjYW4gYmUgYmluYXJ5Lg==     # ...or from the shell
...
require 'base64'
iv = Base64.decode64 ENV['IV']             # back to the raw bytes

Oneliner to load Lua script from online (Gist) and run in current context

I have a Lua REPL, and I would like to run a Lua script file stored as plain text at HTTPS://URL. I understand os.execute() can run OS commands, so we could use curl etc. to grab the script and then load() it. Is it possible to do that in the Lua REPL with a single line?
Note: if you're going to run source code directly from the web, at least use https, to avoid easy MitM attacks.
To give this question an answer, since Egor will probably not post it as such:
(loadstring or load)(io.popen("wget -qO- https://i.imgur.com/91HtaFp.gif"):read"*a")()
For why this prints Hello world:
loadstring or load is there for compatibility with different Lua versions, as the functions loadstring and load were merged at some point (5.2, I believe). io.popen executes its first argument in the shell and returns a file handle to its stdout.
The "gif" from Egor is not really a GIF (open this in your browser: view-source:https://i.imgur.com/91HtaFp.gif) but a plain text file that contains this text:
GIF89a=GIF89a
print'Hello world'
Basically, a GIF starts with GIF89a, and the =GIF89a afterwards is just there to produce valid Lua (a harmless global assignment), meaning you don't have to use imgur or GIFs; you can just as well use raw gists or GitHub.
Now, it's rather unlikely that os.execute is available in a sandbox when io.popen is not, but if it is, you can achieve a one-liner (though a drastically longer one) using os.execute and temporary files.
Let's first write this out, because in a single line it will be a bit complex:
(function(u,f)
-- get a temp file name, Windows prefixes those with a \, so remove that
f=f or os.tmpname():gsub('^\\','')
-- run curl, make it output into our temp file
os.execute(('curl -s "%s" -o "%s"'):format(u,f))
-- load/run temp file
loadfile(f)()
os.remove(f)
end)("https://i.imgur.com/91HtaFp.gif");
And you can easily condense that into a single line by removing comments, tabs and newlines:
(function(u,f)f=f or os.tmpname():gsub('^\\','')os.execute(('curl -s "%s" -o "%s"'):format(u,f))loadfile(f)()os.remove(f)end)("https://i.imgur.com/91HtaFp.gif");

In Erlang, how can I independently capture stdout and stderr of a subprocess?

I'm trying to figure out how to pull the stdout and stderr from a system subprocess in Erlang. (Not to be confused with an Erlang process.) The gotcha is that I'm trying to pull the output of the two streams independently.
open_port/2 seems to get me most of the way there; however, it doesn't seem to provide a way to differentiate between the two streams. There is the stderr_to_stdout option, but that's not what I want; I want data from both streams, but with the ability to distinguish them.
Any suggestions? Thanks.
Update: I'm ideally looking for a solution that will work on both Windows and Linux.
Try this:
Path = filename:join(["./priv", "log", "log_file_name"]),
{ok, F} = file:open(Path, [write]),
group_leader(F, self()),
erlang:display("Anything this process outputs now gets redirected").
You may want to try erlexec. As its documentation explains, it allows separate control over stdout and stderr, and in general it's much more flexible than open_port/2 for managing OS processes from Erlang.
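A sketch of erlexec's sync mode, as described in its documentation (verify the options against the version you install; the shell command is POSIX, so Windows needs adapting):
%% capture both streams separately in one synchronous call
{ok, _} = application:ensure_all_started(erlexec),
{ok, Result} = exec:run("echo out; echo err 1>&2", [sync, stdout, stderr]),
Out = proplists:get_value(stdout, Result),   %% e.g. [<<"out\n">>]
Err = proplists:get_value(stderr, Result),   %% e.g. [<<"err\n">>]
In async mode you can instead pass {stdout, self()} and {stderr, self()} and receive {stdout, OsPid, Data} and {stderr, OsPid, Data} messages, which keeps the streams distinguishable as they arrive.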

Connecting Ruby(Rails) to Nodejs through a pipe

I have a Rails app that needs to make use of a JavaScript library on the server. Up until now I have been running system commands from Rails to Node.js whenever this was necessary. However, I have a particularly computationally intensive task that has made it necessary to cache data to speed it up. I also have to pass large inputs to the Node program, and as a result I've hit the buffer size limit of inputs to it.
I am currently just sending the input to separate Node processes multiple times, in chunks small enough to fit in the buffer, but this is causing performance problems because I no longer get to take advantage of caching across as many runs. I would like to use a pipe instead, but my pipe hits the buffer as well, and I don't know how to empty it. So far I have...
#ruby file
output=[]
node_pipe=IO.popen("nodejs /home/user/node_program.js","w+")
10_000.times do |time|
node_pipe.write("a lot of stuff")
#here I would like to read contents and push contents to output array but still be
#able to write to the same process in the next loop to take advantage of the cache
end
//node_program.js
var input=process.stdin;
var cache={};
input.resume();
input.on('data',function(chunk){
cache[chunk]=library_function(chunk);
console.log(String(other_library_function(chunk)));
});
Any suggestions?
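One direction worth trying, sketched under assumptions the thread doesn't confirm: make the exchange line-based (the Node side would have to print exactly one newline-terminated reply per input line), then read each reply before writing the next request, so neither pipe buffer can fill up.
#ruby file
output = []
IO.popen("nodejs /home/user/node_program.js", "w+") do |pipe|
  10_000.times do
    pipe.puts "a lot of stuff"  # one newline-delimited request...
    output << pipe.gets         # ...then block for the matching reply line
  end
end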

Resources