When I run a.pl
...
use xmlparser;
...
I get an error message like the one below:
#0localhost.localdomain:/home/sylee/work] perl a.pl
Can't locate xmlparser.pm in @INC (@INC contains: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .) at a.pl line 4.
BEGIN failed--compilation aborted at a.pl line 4.
I've found a lot of suggested fixes like yum install "perl(XML::Parser)", etc.,
but none of them work. By now I've almost given up.
Could you please help me solve this problem?
My system is CentOS 5, 64-bit.
Update 1
a.pl looks like this:
# Load PERL libraries
# ------------------------------------------------------------------------------
use strict;
use warnings;
use lib 'bin/lib'; # Collapse namespace 'lib::'
use xmlparser; # Load the XML parser module
How can I check whether use lib 'bin/lib'; points to the correct directory?
That does not sound like the name of a Perl module.
Try:
use XML::Parser;
More than anything try reading the XML::Parser documentation on CPAN.
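As a quick sanity check on CentOS (a sketch, assuming the module you actually need is XML::Parser from CPAN, as suggested above):
$ yum install "perl(XML::Parser)"
$ perl -MXML::Parser -e 'print "$XML::Parser::VERSION\n"'   # prints a version number if the module loads
If the one-liner prints a version, then use XML::Parser; in a.pl will work. Note that use lib 'bin/lib'; only adds the relative path bin/lib to @INC, so it only helps if you keep a private copy of a module under bin/lib relative to the directory you run the script from.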
Related
This is basically the same issue as in mingw ld cannot find some library which is exist in the search path, MinGW linker can't find MPICH2 libraries - and I'm aware that there are heaps of posts on StackOverflow regarding the issue of static and dynamic linking with MinGW - but I couldn't find anything that explains how I can troubleshoot.
I am building a project with a huge linker command line (via g++) on MinGW, in an MSYS2 shell (git-bash.exe). The process fails with, among other errors:
/z/path/to/../../../../i686-w64-mingw32/bin/ld.exe: cannot find -lssl
I add -Wl,--verbose to the g++ linker call (to be passed to ld), and for -L/z/path/to/libs/openssl/lib/mingw -lssl I can see:
...
attempt to open /z/path/to/libs/openssl/lib/mingw/libssl.a failed
...
attempt to open /z/path/to/libs/openssl/lib/mingw/ssl.dll failed
attempt to open /z/path/to/libs/openssl/lib/mingw\libssl.a failed
...
But this is weird, because the file exists?
$ file /z/path/to/libs/openssl/lib/mingw/libssl.a
/z/path/to/libs/openssl/lib/mingw/libssl.a: current ar archive
(... and it was built with the same compiler on the same machine)?
Weirdly, once it attempts to open with forward slash .../libssl.a, once with backslash ...\libssl.a - but at least the first path checks out in a bash shell, as shown above?
It gets even worse if I try to specify -l:libssl.a -- or if I specify -L/z/path/to/libs/openssl/lib/mingw -Wl,-Bstatic -lssl -- instead; then all attempts to open are with a backslash:
...
attempt to open /z/path/to/scripts/other/build/openssl/build/mingw/lib\libssl.a failed
attempt to open /z/path/to/libs/openssl/lib/mingw\libssl.a failed
...
To top it all off, if I look it up manually through the command line using ld, it is found ?!:
$ ld -L/z/path/to/libs/openssl/lib/mingw -lssl --verbose
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.dll.a failed
attempt to open Z:/path/to/libs/openssl/lib/mingw/ssl.dll.a failed
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
Does anyone have an idea why this happens, and how can I get ld to finally find these libraries? Or rather - how can I troubleshoot, and understand why these libraries are not found, when they exist at the paths where ld tries to open them?
OK, found something more - not sure if this is a bug; but my problem is that I'm actually reading arguments from a file (otherwise I get g++: Argument list too long). So, to simulate that:
$ echo " -Wl,--verbose -L/z/path/to/libs/openssl/lib/mingw -lssl -lcrypto " > tcmd3
$ g++ @tcmd3 2>&1 | grep succeeded | grep ssl
# nothing
$ g++ `cat tcmd3` 2>&1 | grep succeeded | grep ssl
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
attempt to open Z:/path/to/libs/openssl/lib/mingw/libcrypto.a succeeded
... it turns out, if the very same arguments are fed on the command line, then static library lookup succeeds - but if the arguments are read from a file through the @ at-sign, then static library lookup fails?! Unfortunately, I cannot use that on my actual project, since even with cat, I'd still get g++: Argument list too long ... So how can I fix this?
MSYS has special handling of directories as arguments when they are used in the shell. This translates e.g. /<drive_letter>/blabla to the proper Windows-style paths. This is to accommodate Unix programs that don't handle Z:-style directory roots.
What you see here is that MSYS isn't performing this interpretation for strings read from a file. When you think about it, it's very logical, but as you have experienced first-hand, also sometimes annoying.
Long story short: don't put Unix style paths in files with command arguments. Instead, pass them through e.g. cygpath -w, which works in MSYS2 (which should be the MSYS that Git for Windows 2+ comes with).
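For example, something along these lines should work (a sketch; the paths are the placeholders from above, and cygpath -m produces the mixed Z:/... form while -w produces the backslashed one):
$ LIBDIR=$(cygpath -m /z/path/to/libs/openssl/lib/mingw)
$ echo " -Wl,--verbose -L$LIBDIR -lssl -lcrypto " > tcmd3
$ g++ @tcmd3 2>&1 | grep succeeded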
Ok, with some more experiments, I noticed that:
-L/z/path/to/libs/openssl/lib/mingw, the Unix path specification, tends to fail - while if we specify the same, except starting with a Windows drive letter, that is:
-LZ:/path/to/libs/openssl/lib/mingw, then things work - also from an arguments file with the @ at-sign:
$ echo " -Wl,--verbose -LZ:/path/to/libs/openssl/lib/mingw -lssl -lcrypto " > tcmd3
$ g++ @tcmd3 2>&1 | grep succeeded | grep ssl
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
attempt to open Z:/path/to/libs/openssl/lib/mingw/libcrypto.a succeeded
I guess, since the shell is MSYS2/git-bash.exe, entering full POSIX paths like /z/... on the shell is not a problem, because the shell will convert them - but in a file, there is nothing to convert them, so we must use the Windows/MinGW convention to specify them...
When changing Spinach step defined in a Spinach steps file, it is useful to run all those features which use that step.
e.g:
I have step 'I have an empty array' do..
defined in features/steps/test_how_spinach_works.rb
I would like to run spinach for every .feature file which includes:
I have an empty array.
Assuming you use bash:
Install ack.
Update your ~/.ackrc to include Spinach features:
--type-set=spinach=.feature
Add the following to your bashrc:
function ack-spinach() {
ack --spinach --print0 -l "$1" | xargs -0 spinach
}
You may now run all the features with:
$ ack-spinach 'I have an empty array'
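If you would rather not install ack, a similar helper with plain GNU grep should also do the job (a sketch; it assumes your features live under features/):
function grep-spinach() {
  grep -rlZ --include='*.feature' "$1" features/ | xargs -0 spinach
}
$ grep-spinach 'I have an empty array'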
I have installed Erlang/OTP and Elixir, and compiled the HelloWorld program into a BEAM using the command:
elixirc test.ex
Which produced a file named Elixir.Hello.beam
How do I run this file?
Short answer: no way to know for sure without also knowing the contents of your source file :)
There are a few ways to run Elixir code. This answer will be an overview of various workflows that can be used with Elixir.
When you are just getting started and want to try things out, launching iex and evaluating expressions one at a time is the way to go.
iex(5)> Enum.reverse [1,2,3,4]
[4, 3, 2, 1]
You can also get help on Elixir modules and functions in iex. Most of the functions have examples in their docs.
iex(6)> h Enum.reverse
def reverse(collection)
Reverses the collection.
[...]
When you want to put some code into a file to reuse it later, the recommended (and de facto standard) way is to create a mix project and start adding modules to it. But perhaps, you would like to know what's going on under the covers before relying on mix to perform common tasks like compiling code, starting applications, and so on. Let me explain that.
The simplest way to put some expressions into a file and run it would be to use the elixir command.
x = :math.sqrt(1234)
IO.puts "Your square root is #{x}"
Put the above fragment of code into a file named simple.exs and run it with elixir simple.exs. The .exs extension is just a convention to indicate that the file is meant to be evaluated (and that is what we did).
This works up until the point you want to start building a project. Then you will need to organize your code into modules. Each module is a collection of functions. It is also the minimal compilation unit: each module is compiled into a .beam file. Usually people have one module per source file, but it is also fine to define more than one. Regardless of the number of modules in a single source file, each module will end up in its own .beam file when compiled.
defmodule M do
def hi(name) do
IO.puts "Hello, #{name}"
end
end
We have defined a module with a single function. Save it to a file named mymod.ex. We can use it in multiple ways:
launch iex and evaluate the code in the spawned shell session:
$ iex mymod.ex
iex> M.hi "Alex"
Hello, Alex
:ok
evaluate it before running some other code. For example, to evaluate a single expression on the command line, use elixir -e <expr>. You can "require" (basically, evaluate and load) one or more files before it:
$ elixir -r mymod.ex -e 'M.hi "Alex"'
Hello, Alex
compile it and let the code loading facility of the VM find it
$ elixirc mymod.ex
$ iex
iex> M.hi "Alex"
Hello, Alex
:ok
In that last example we compiled the module which produced a file named Elixir.M.beam in the current directory. When you then run iex in the same directory, the module will be loaded the first time a function from it is called. You could also use other ways to evaluate code, like elixir -e 'M.hi "..."'. As long as the .beam file can be found by the code loader, the module will be loaded and the appropriate function in it will be executed.
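For example, continuing with mymod.ex from above:
$ elixirc mymod.ex            # produces Elixir.M.beam in the current directory
$ elixir -e 'M.hi "Alex"'     # the code loader finds Elixir.M.beam here and loads it
Hello, Alex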
However, this was all about trying to play with some code examples. When you are ready to build a project in Elixir, you will need to use mix. The workflow with mix is more or less as follows:
$ mix new myproj
* creating README.md
* creating .gitignore
* creating mix.exs
[...]
$ cd myproj
# 'mix new' has generated a dummy test for you
# see test/myproj_test.exs
$ mix test
Add new modules in the lib/ directory. It is customary to prefix all module names with your project name. So if you take the M module we defined above and put it into the file lib/m.ex, it'll look like this:
defmodule Myproj.M do
def hi(name) do
IO.puts "Hello, #{name}"
end
end
Now you can start a shell with the Mix project loaded in it.
$ iex -S mix
Running the above will compile all your source files and put them under the _build directory. Mix will also set up the code path for you so that the code loader can locate .beam files in that directory.
Evaluating expressions in the context of a mix project looks like this:
$ mix run -e 'Myproj.M.hi "..."'
Again, no need to compile anything. Most mix tasks will recompile any changed files, so you can safely assume that any modules you have defined are available when you call functions from them.
Run mix help to see all available tasks and mix help <task> to get a detailed description of a particular task.
To specifically address the question:
$ elixirc test.ex
will produce a file named Elixir.Hello.beam, if the file defines a Hello module.
If you run elixir or iex from the directory containing this file, the module will be available. So:
$ elixir -e Hello.some_function
or
$ iex
iex(1)> Hello.some_function
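For instance, if test.ex happened to contain a module like this (purely hypothetical, since we don't know your actual source):
$ cat > test.ex <<'EOF'
defmodule Hello do
  def world, do: IO.puts "Hello, world!"
end
EOF
$ elixirc test.ex             # produces Elixir.Hello.beam
$ elixir -e 'Hello.world'
Hello, world!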
Assume that I write an Elixir program like this:
defmodule PascalTriangle do
defp next_row(m), do: for(x <- (-1..Map.size(m)-1), do: { (x+1), Map.get(m, x, 0) + Map.get(m, x+1, 0) } ) |> Map.new
def draw(1), do: (IO.puts(1); %{ 0 => 1})
def draw(n) do
(new_map = draw(n - 1) |> next_row ) |> Map.values |> Enum.join(" ") |> IO.puts
new_map
end
end
The module PascalTriangle can be used like this: PascalTriangle.draw(8)
When you use elixirc to compile the .ex file, it will create a file called Elixir.PascalTriangle.beam.
From command line, you can execute the beam file like this:
elixir -e "PascalTriangle.draw(8)"
You will see the first eight rows of Pascal's triangle printed to the console.
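For reference, this is what PascalTriangle.draw(8) prints (computed from the module above):
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1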
I want use erl_tidy to format erlang code, including escript files.
But I get this error when I format one escript file (source) after adding -module(erl_pprint).:
1> erl_tidy:file("erl_pprint").
erl_pprint: error: cannot determine module name.
** exception exit: error
But when I remove the shebang line #!/usr/bin/env escript, formatting works fine.
So how can I format the code while keeping the shebang line?
You can't treat an escript file as a normal module and give it to erl_tidy. Perhaps you can drop the leading line(s) with "tail -n+2 erl_pprint > /tmp/erl_pprint.erl", run erl_tidy on the temp file, and then rebuild the script with "cat escript-header.txt /tmp/erl_pprint.erl > erl_pprint.new", where escript-header.txt is a file you create containing the leading shebang line (or lines).
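A rough shell sketch of that workaround (it assumes the -module(erl_pprint). attribute is already in the file, as in your experiment; erl_tidy:file/1 rewrites the temp file in place and keeps a backup):
$ head -n 1 erl_pprint > escript-header.txt        # save the shebang line
$ tail -n +2 erl_pprint > /tmp/erl_pprint.erl      # the rest now parses as an ordinary module
$ erl -noshell -eval 'erl_tidy:file("/tmp/erl_pprint.erl"), init:stop()'
$ cat escript-header.txt /tmp/erl_pprint.erl > erl_pprint.new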
I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built nagios from source, and have used yum to install into this root all dependencies needed, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing This Question, which had practically the same problem I'm having with check_url, I decided to open up a new question on the subject because
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
# 'check_url' command definition
define command{
command_name check_url
command_line $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give 1 more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Ran the following:
./check_url_status -U some-domain.com
When I run the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
host_name {my-shared-web-server}
service_description URL: somedomain.com
check_command check_url!somedomain.com
max_check_attempts 5
check_interval 3
retry_interval 1
check_period 24x7
notification_interval 30
notification_period workhours
}
I was making things WAY too complicated.
The built-in / installed-by-default plugin, check_http, can accomplish what I wanted and more. Here's how I did it:
My Service Definition:
define service{
host_name myers
service_description URL: my-url.com
check_command check_http_url!http://my-url.com
max_check_attempts 5
check_interval 3
retry_interval 1
check_period 24x7
notification_interval 30
notification_period workhours
}
My Command Definition:
define command{
command_name check_http_url
command_line $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
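You can also sanity-check the command by hand before wiring it into Nagios (host and URL below are placeholders; adjust the libexec path to wherever your plugins live):
$ /usr/local/nagios/libexec/check_http -I my-shared-web-server -u http://my-url.com
$ echo $?        # 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN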
A better way to monitor URLs is by using WebInject, which can be used with Nagios.
The problem below is because you don't have the Perl package utils; try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can make a script plugin. It is easy: you only have to check the URL with something like:
`curl -Is $URL -k| grep HTTP | cut -d ' ' -f2`
$URL is what you pass to the script as a parameter.
Then check the result: if the code is greater than 399 you have a problem; otherwise everything is OK. Then set the right exit code and message for Nagios.
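A minimal sketch of such a plugin (the script name, messages, and threshold are just an example built around the curl line above):
#!/bin/bash
# check_url_code.sh <url> -- hypothetical Nagios plugin sketch
URL="$1"
CODE=$(curl -Is -k "$URL" | grep HTTP | cut -d ' ' -f2)
if [ -z "$CODE" ]; then
    echo "CRITICAL - no HTTP response from $URL"
    exit 2
elif [ "$CODE" -ge 400 ]; then
    echo "CRITICAL - $URL returned HTTP $CODE"
    exit 2
else
    echo "OK - $URL returned HTTP $CODE"
    exit 0
fi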