Rhino: load multiple scripts on the command line with the -e 'load([...])' option

I'm trying to run multiple scripts with Rhino shell using the command:
java org.mozilla.javascript.tools.shell.Main -e 'load(["script_a.js", "script_b.js"])'
And here is the error:
js: Couldn't read source file "script_a.js,script_b.js: script_a.js,script_b.js (No such file or directory)".
It looks like Rhino receives the two script names as a single string because of the Bash interpreter. As far as I know, special characters enclosed in single quotes should not be interpreted.
I've tried a lot of different combinations with no luck. What am I missing?

I'm sure you know about this, but still, you may consider using:
java org.mozilla.javascript.tools.shell.Main -e 'load("script_a.js");load("script_b.js");'
Or override the load function, which is not recommended. Or try something like this:
Resolving modules using require.js and Java/Rhino
require.config({
  baseUrl: "js/app"
});
require(["a", "b"], function(a, b) {
  print('modules loaded');
});
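If I remember correctly, the square brackets in the load([filename, ...]) documentation denote optional repeated arguments, not a JavaScript array, so passing the file names as separate strings should also work:
java org.mozilla.javascript.tools.shell.Main -e 'load("script_a.js", "script_b.js")'
The Rhino shell also accepts a repeatable -f option that executes each named script in order:
java org.mozilla.javascript.tools.shell.Main -f script_a.js -f script_b.js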

Related

How do I run a beam file compiled by Elixir or Erlang?

I have installed Erlang/OTP and Elixir, and compiled the HelloWorld program into a BEAM using the command:
elixirc test.ex
Which produced a file named Elixir.Hello.beam
How do I run this file?
Short answer: no way to know for sure without also knowing the contents of your source file :)
There are a few ways to run Elixir code. This answer will be an overview of various workflows that can be used with Elixir.
When you are just getting started and want to try things out, launching iex and evaluating expressions one at a time is the way to go.
iex(5)> Enum.reverse [1,2,3,4]
[4, 3, 2, 1]
You can also get help on Elixir modules and functions in iex. Most of the functions have examples in their docs.
iex(6)> h Enum.reverse
def reverse(collection)
Reverses the collection.
[...]
When you want to put some code into a file to reuse it later, the recommended (and de facto standard) way is to create a mix project and start adding modules to it. But perhaps you would like to know what's going on under the covers before relying on mix to perform common tasks like compiling code, starting applications, and so on. Let me explain.
The simplest way to put some expressions into a file and run it would be to use the elixir command.
x = :math.sqrt(1234)
IO.puts "Your square root is #{x}"
Put the above fragment of code into a file named simple.exs and run it with elixir simple.exs. The .exs extension is just a convention to indicate that the file is meant to be evaluated (and that is what we did).
This works up until the point you want to start building a project. Then you will need to organize your code into modules. Each module is a collection of functions. It is also the minimal compilation unit: each module is compiled into a .beam file. Usually people have one module per source file, but it is also fine to define more than one. Regardless of the number of modules in a single source file, each module will end up in its own .beam file when compiled.
defmodule M do
  def hi(name) do
    IO.puts "Hello, #{name}"
  end
end
We have defined a module with a single function. Save it to a file named mymod.ex. We can use it in multiple ways:
launch iex and evaluate the code in the spawned shell session:
$ iex mymod.ex
iex> M.hi "Alex"
Hello, Alex
:ok
evaluate it before running some other code. For example, to evaluate a single expression on the command line, use elixir -e <expr>. You can "require" (basically, evaluate and load) one or more files before it:
$ elixir -r mymod.ex -e 'M.hi "Alex"'
Hello, Alex
compile it and let the code loading facility of the VM find it
$ elixirc mymod.ex
$ iex
iex> M.hi "Alex"
Hello, Alex
:ok
In that last example we compiled the module which produced a file named Elixir.M.beam in the current directory. When you then run iex in the same directory, the module will be loaded the first time a function from it is called. You could also use other ways to evaluate code, like elixir -e 'M.hi "..."'. As long as the .beam file can be found by the code loader, the module will be loaded and the appropriate function in it will be executed.
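For instance, with Elixir.M.beam sitting in the current directory, this should print the greeting:
$ elixir -e 'M.hi "Alex"'
Hello, Alex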
However, this was all about trying to play with some code examples. When you are ready to build a project in Elixir, you will need to use mix. The workflow with mix is more or less as follows:
$ mix new myproj
* creating README.md
* creating .gitignore
* creating mix.exs
[...]
$ cd myproj
# 'mix new' has generated a dummy test for you
# see test/myproj_test.exs
$ mix test
Add new modules in the lib/ directory. It is customary to prefix all module names with your project name. So if you take the M module we defined above and put it into the file lib/m.ex, it'll look like this:
defmodule Myproj.M do
  def hi(name) do
    IO.puts "Hello, #{name}"
  end
end
Now you can start a shell with the Mix project loaded in it.
$ iex -S mix
Running the above will compile all your source files and put them under the _build directory. Mix will also set up the code path for you so that the code loader can locate .beam files in that directory.
Evaluating expressions in the context of a mix project looks like this:
$ mix run -e 'Myproj.M.hi "..."'
Again, no need to compile anything. Most mix tasks will recompile any changed files, so you can safely assume that any modules you have defined are available when you call functions from them.
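For example, with the Myproj.M module defined above, this should print the greeting:
$ mix run -e 'Myproj.M.hi "Alex"'
Hello, Alex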
Run mix help to see all available tasks and mix help <task> to get a detailed description of a particular task.
To specifically address the question:
$ elixirc test.ex
will produce a file named Elixir.Hello.beam, if the file defines a Hello module.
If you run elixir or iex from the directory containing this file, the module will be available. So:
$ elixir -e Hello.some_function
or
$ iex
iex(1)> Hello.some_function
Assume that I write an Elixir program like this:
defmodule PascalTriangle do
  # Each entry of the next row is the sum of the two entries above it.
  defp next_row(m) do
    for(x <- -1..(Map.size(m) - 1), do: {x + 1, Map.get(m, x, 0) + Map.get(m, x + 1, 0)})
    |> Map.new()
  end

  def draw(1), do: (IO.puts(1); %{0 => 1})

  def draw(n) do
    new_map = draw(n - 1) |> next_row()
    new_map |> Map.values() |> Enum.join(" ") |> IO.puts()
    new_map
  end
end
The module PascalTriangle can be used like this: PascalTriangle.draw(8)
When you use elixirc to compile the .ex file, it will create a file called Elixir.PascalTriangle.beam.
From command line, you can execute the beam file like this:
elixir -e "PascalTriangle.draw(8)"
You will see output like this (the first eight rows of Pascal's triangle):
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1

wkhtmltopdf attempting to load from http rather than file

Here's an odd little problem that's led me to post my first question on SO. I am using wkhtmltopdf to convert an HTML document to a PDF as part of a Rails app. To do so, I am rendering the Rails web page to a static HTML file in a temp directory, copying a static header, footer and images to the same temp directory, then executing wkhtmltopdf using "system".
This works perfectly in Development and Test environments. In my Staging env, it does not. I suspected permissions at first, but the first couple of parts of that process (creating the HTML static files and copying them to the directory) are working. I can run wkhtmltopdf from the command line in that temp directory and get the expected outcome. Finally, I ran wkhtmltopdf via both "system" and backticks through the Rails console in staging environment, and here's what I get as output:
> `wkhtmltopdf --footer-html tmp/invoices/footer.html --header-html tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in tmp/invoices/test.html tmp/invoices/this.pdf`
Loading pages (1/6)
QPainter::begin(): Returned false ] 10%
Error: Unable to write to destination
Error: Failed loading page http://tmp/invoices/test.html (sometimes it will work just to ignore this error with --load-error-handling ignore) => ""
Notice that last bit. I'm pointing to local files, but it's looking for them via http. OK, I think, maybe I need to be explicit and feed it the file:// protocol so it doesn't look for http. So I try this:
> system("wkhtmltopdf --footer-html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/footer.html --header-html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/this.pdf")
Loading pages (1/6)
Error: Failed loading page file://library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html (sometimes it will work just to ignore this error with --load-error-handling ignore)
=> false
Notice that this one fails with a lowercase "l" on Library. What the heck? (And no, it doesn't get any better with the recommendation to ignore the error with that switch.)
Any ideas? Is there a Rails or Ruby setting that would cause system commands to get rewritten? Is there an option I can add to wkhtmltopdf to make sure it loads from local file? I'm quite baffled. Thanks!
I have had success when using the absolute file path (notice the extra slash after the file://)
wkhtmltopdf --footer-html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/footer.html --header-html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/this.pdf
The format is the same on Windows.
Unix path:
file:///absolute/path/to/file
Windows path:
file:///C:/absolute/path/to/file
In the latest wicked_pdf 0.11 I found a bug.
For example, in:
C:\Ruby193\lib\ruby\gems\1.9.1\gems\wicked_pdf-0.11.0\lib\wicked_pdf.rb
at line 198, I changed:
options[hf][:html][:url] = "file://#{tf.path}"
to:
options[hf][:html][:url] = "file:///#{tf.path}"
(that is, // changed to ///). After that change, wicked_pdf worked again.
Take a look at the wicked_pdf gem.
You can add a PDF mime type and then whatever page you want pdf'd, just tack on a .pdf to the URL.
I am using this in prod and it works quite well.
No need to call wkhtmltopdf directly.
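As a rough sketch of that setup (the controller here is a hypothetical example; render :pdf => ... is the gem's documented interface):
# config/initializers/mime_types.rb -- only needed if :pdf isn't registered already
Mime::Type.register "application/pdf", :pdf

# app/controllers/invoices_controller.rb -- hypothetical controller
class InvoicesController < ApplicationController
  def show
    respond_to do |format|
      format.html
      format.pdf do
        render :pdf => "invoice"  # wicked_pdf renders the view through wkhtmltopdf
      end
    end
  end
end
With that in place, requesting /invoices/1.pdf should return the PDF version of the page.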

check syntax of ruby/jruby script using jruby -c syntax check (inside ruby code)

I am searching for a way to create a temp file from Ruby code, append some Ruby code to it, and then pass that temp file's path to jruby -c to check for syntax errors.
Currently I am trying the following approach:
script_file = File.new("#{Rails.root}/test.rb", "w+")
script_file.print(content)
script_file.close
command = "#{RUBY_PATH} -c #{Rails.root}/test.rb"
eval(command);
new_script_file.close
When I inspect the command variable, it correctly shows jruby -c {ruby file path}. But when I execute the above piece of code, I get the following error:
Completed 500 Internal Server Error in 41ms
SyntaxError ((eval):1: dhunknown regexp options - dh):
Let me know if anyone has any idea on this.
Thanks,
Dean
eval evaluates the string as Ruby code, not as a command-line invocation. Since your command is not valid Ruby syntax, you get this exception.
If you want to launch a command in Ruby, you can use %x{} or backticks:
output1 = `ls`
output2 = %x{ls}
Both forms will return the output of the launched command as a String, if you want to process it. If you want this output to be directly displayed in the user terminal, you can use system():
system("ls")

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built Nagios from source, and have used yum to install all needed dependencies into this root.)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing This Question, which had practically the same problem I'm having with check_url, I decided to open a new question on the subject because:
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
# 'check_url' command definition
define command{
    command_name    check_url
    command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give 1 more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Run the following:
./check_url_status -U some-domain.com
When I run the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
    host_name               {my-shared-web-server}
    service_description     URL: somedomain.com
    check_command           check_url!somedomain.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
I was making things WAY too complicated.
The built-in / installed by default plugin, check_http, can accomplish what I wanted and more. Here's how I have accomplished this:
My Service Definition:
define service{
    host_name               myers
    service_description     URL: my-url.com
    check_command           check_http_url!http://my-url.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
My Command Definition:
define command{
    command_name    check_http_url
    command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
A better way to monitor URLs is WebInject, which can be used with Nagios.
The problem below is caused by a missing Perl utils package; try installing it:
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can write a script plugin yourself. It's easy: you only have to check the URL with something like:
`curl -Is $URL -k | grep HTTP | cut -d ' ' -f2`
$URL is what you pass to the script as a parameter.
Then check the result: if the code is greater than 399 you have a problem; otherwise everything is OK. Then set the right exit code and message for Nagios.
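A minimal sketch of such a plugin, following the Nagios convention of exit code 0 for OK and 2 for CRITICAL (the status-code extraction is the curl line from above):
#!/bin/bash
# Hypothetical minimal Nagios plugin: checks the HTTP status code of the URL in $1.
URL="$1"
STATUS=$(curl -Is "$URL" -k | grep HTTP | cut -d ' ' -f2)

if [ -n "$STATUS" ] && [ "$STATUS" -lt 400 ]; then
    echo "OK - $URL returned $STATUS"
    exit 0
else
    echo "CRITICAL - $URL returned ${STATUS:-no response}"
    exit 2
fi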

how to invoke java.exe in bash under windows in cygwin with space in path

I tried to invoke java inside a bash script on Windows (Win XP) using Cygwin.
However, the path to java.exe contains spaces.
Only literally putting something like this in bash worked:
/cygdrive/c/Program\ Files/Java/jdk1.5.0_10/bin/java -cp "$TOOL_HOME" DateParse "$DATE" "$FORMAT"
My attempts to put the java path into a variable failed:
export JAVA_EXE="/cygdrive/c/Program\ Files/Java/jdk1.5.0_10/bin/java"
$JAVA_EXE -cp "$TOOL_HOME" DateParse "$DATE" "$FORMAT"
Different combinations with cygpath, quotes, and brackets also did not work. I can't find the right combination.
Put quotes around $JAVA_EXE:
"$JAVA_EXE" -cp "$TOOL_HOME" DateParse "$DATE" "$FORMAT"
The problem is that every time a variable is expanded, it's also broken into words at spaces, UNLESS you put quotes around it. So if you don't want things broken at spaces, you need quotes.
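A quick illustration of the word splitting (the -version flag here is just a stand-in for real arguments):
JAVA_EXE="/cygdrive/c/Program Files/Java/jdk1.5.0_10/bin/java"
$JAVA_EXE -version     # fails: bash splits the path into "/cygdrive/c/Program" and "Files/Java/..."
"$JAVA_EXE" -version   # works: the quotes keep the path as a single word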
Another alternative is to always use short (DOS) names for things, which don't allow spaces. To see what the short name is, run
cygpath -d "$JAVA_EXE"
To convert that back to a Unix-style Cygwin path, use:
cygpath -u "$(cygpath -d "$JAVA_EXE")"
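Putting the two together, a sketch of how the script could resolve the path once up front (the long path is the one from the question; 8.3 short names contain no spaces):
JAVA_EXE=$(cygpath -u "$(cygpath -d "/cygdrive/c/Program Files/Java/jdk1.5.0_10/bin/java")")
"$JAVA_EXE" -cp "$TOOL_HOME" DateParse "$DATE" "$FORMAT"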
Thank you for your ideas. It worked in the proper combination. The issue was that I was escaping the space character and at the same time putting JAVA_EXE in quotes:
export JAVA_EXE="/cygdrive/c/Program\ Files/Java/jdk1.5.0_10/bin/java"
"$JAVA_EXE" -cp "$TOOL_HOME" DateParse "$DATE" "$FORMAT"
produced this error:
line 30: /cygdrive/c/Program\ Files/Java/jdk1.5.0_10/bin/java: No such file or directory
On the other hand, converting to DOS 8.3 form did not work either:
cannot create short name of \\?\C:\Program\ Files\Java\jdk1.5.0_10\bin\java
Finally, putting JAVA_EXE in quotes, but without escaping the space in the path, worked fine for me:
export JAVA_EXE="/cygdrive/c/Program Files/Java/jdk1.5.0_10/bin/java"
"$JAVA_EXE" -cp "$TOOL_HOME" DateParse "$DATE" "$FORMAT"
