Ruby roo not finding complex file names - fileutils

I am using roo to process some Excel files with complex file names (e.g. "Patient Status Up-Date-V2 051812.xlsx"). The files are found, with proper escaping, by OS commands, but not by Ruby roo (which uses fileutils):
ls -lt Patient\ Status\ Up-Date-V2\ 051812.xlsx
shows:
-rw-r--r-- 1 hamid hamid 128770 May 22 09:22 Patient Status Up-Date-V2 051812.xlsx
but
ruby -rubygems ./findbi.rb Patient\ Status\ Up-Date-V2\ 051812.xlsx
gives:
/usr/local/lib/ruby/gems/1.8/gems/roo-1.10.1/lib/roo/excelx.rb:103:in `initialize': file Patient\ Status\ Up-Date-V2\ 051812.xlsx does not exist (IOError)
I have tried many variations of escaping (on "-", for example), permission changes, running as root, etc., to no avail. Line 103 of excelx.rb is:
raise IOError, "file #{@filename} does not exist"
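For what it's worth, the backslashes printed inside the error message suggest that the escaped string itself is reaching roo rather than the plain filename. Below is a minimal sketch of how the argument should be consumed, assuming findbi.rb opens the workbook along these lines (the real script isn't shown; roo 1.10.1 exposes a top-level Excelx class):

require 'rubygems'
require 'roo'

# The shell strips the backslashes before Ruby starts, so ARGV[0] already
# holds the plain name "Patient Status Up-Date-V2 051812.xlsx". Pass it
# through untouched; escaping it again inside the script reproduces this
# exact IOError, backslashes and all.
filename = ARGV[0]
xlsx = Excelx.new(filename)
xlsx.default_sheet = xlsx.sheets.first
puts "#{xlsx.last_row} rows in #{File.basename(filename)}"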
Please help before I pull out the Perl!
Thanks,
Hamid.

Related

Apoc failed to find a compatible version

I'm brand new to Neo4j. I just downloaded and installed the Neo4j Desktop application, working offline, and noticed that the Install button for the plugins never becomes enabled.
After creating a graph DB and trying to install the APOC plugin manually with the latest (compatible) jar file, it apparently fails to load.
I am using Neo4j Desktop 1.1.17 offline + server 3.5.2 + the APOC 3.5.0.2 jar in the plugins folder.
I've followed the online docs and updated neo4j.conf, authorizing things in there:
dbms.security.procedures.unrestricted=apoc.*
dbms.security.procedures.whitelist=apoc.*
I restarted things, but still with no success. What am I doing wrong here? It seems like quite a basic issue, but as there is no stupid question... any hint is appreciated.
Thanks for your feedback,
Best regards
I have neo4j server (not desktop) version 3.5.4.
I downloaded apoc 3.5.0.3 which if memory serves was a zip archive. After unzipping, I copied the one jar into my plugins directory.
I modified the config file as you indicated. I used commas to separate the entries.
I did not update the whitelist parameter which remains commented out in my config file.
Next I restarted neo4j and the apoc procedures seem to work.
Have a look at my transcript below for the details of my setup:
gmc@linux-ihon:/usr/local/neo4j-community-3.5.4> ls -l plugins
total 14808
-rw-r--r-- 1 gmc users 13695353 Apr 18 09:51 apoc-3.5.0.3-all.jar
-rw-r--r-- 1 gmc users 1459334 Apr 11 00:34 graph-algorithms-algo-3.5.4.0.jar
-rw-r--r-- 1 gmc users 2217 Apr 3 18:09 README.txt
gmc@linux-ihon:/usr/local/neo4j-community-3.5.4> grep whitelist conf/neo4j.conf
#dbms.security.procedures.whitelist=apoc.coll.*,apoc.load.*
gmc@linux-ihon:/usr/local/neo4j-community-3.5.4> grep unrestricted conf/neo4j.conf
#dbms.security.procedures.unrestricted=my.extensions.example,my.procedures.*
dbms.security.procedures.unrestricted=apoc.*,algo.*
gmc@linux-ihon:~> cypher-shell --username neo4j
password: ****
Connected to Neo4j 3.5.4 at bolt://localhost:7687 as user neo4j.
Type :help for a list of available commands or :exit to exit the shell.
Note that Cypher queries must end with a semicolon.
neo4j> call apoc.help("apoc.help");
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| type | name | text | signature | roles | writes |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "procedure" | "apoc.help" | "Provides descriptions of available procedures. To narrow the results, supply a search string. To also search in the description text, append + to the end of the search string." | "apoc.help(proc :: STRING?) :: (type :: STRING?, name :: STRING?, text :: STRING?, signature :: STRING?, roles :: LIST? OF STRING?, writes :: BOOLEAN?)" | NULL | NULL |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row available after 31 ms, consumed after another 1 ms
neo4j>
FWIW, the graph algorithms procedures also work.
Is it possible that you have two installations and modified the one that isn't running?
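If apoc.help comes back empty instead, a hedged way to confirm whether any APOC procedures were registered at all (standard Neo4j 3.5 Cypher, run from cypher-shell or the browser):

// Count the APOC procedures the running server actually loaded;
// zero means the jar in plugins/ was never picked up.
CALL dbms.procedures() YIELD name
WHERE name STARTS WITH 'apoc'
RETURN count(*) AS apocProcedures;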

TinyOS not compiling/uploading to TelosB mote

I'm running into a problem attempting to upload the "Blink" app onto the motes. I can't seem to run the command make telosb reinstall bsl,/dev/ttyUSB0 or make telosb reinstall while in the apps/Blink directory, which is preventing me from moving on with my project. I've tried as a regular user, superuser, and root. I've outlined below the responses from the various commands. [Running Ubuntu 16.04, TinyOS 2.1.2, ncc version 1.4.2, nescc version 1.3.6]
(A) With root and the make telosb command I get back:
mkdir -p build/telosb
compiling BlinkAppC to a telosb binary
ncc -o build/telosb/main.exe -Os -fnesc-separator=__ -Wall -Wshadow -Wnesc-all -target=telosb -fnesc-cfile=build/telosb/app.c -board= -DDEFINED_TOS_AM_GROUP=0x22 -DIDENT_APPNAME=\"BlinkAppC\" -DIDENT_USERNAME=\"root\" -DIDENT_HOSTNAME=\"liam-Latitude-E\" -DIDENT_USERHASH=0x9236fe46L -DIDENT_TIMESTAMP=0x59384a62L -DIDENT_UIDHASH=0xdc08609fL BlinkAppC.nc -lm
compiled BlinkAppC to build/telosb/main.exe
2538 bytes in ROM
56 bytes in RAM
msp430-objcopy --output-target=ihex build/telosb/main.exe build/telosb/main.ihex
writing TOS image
(B) With a regular user and the make telosb command I get back:
mkdir -p build/telosb
/bin/sh: 1: cannot create build/telosb/ident_flags.txt: Permission denied
/home/liam/tinyos-main/support/make/ident_flags.extra:13: recipe for target 'ident_cache' failed
make: *** [ident_cache] Error 2
(C) With a superuser and the sudo make telosb command I get back:
make: *** No rule to make target 'telosb'. Stop.
(D) With root and the make telosb reinstall command I get back:
cp build/telosb/main.ihex build/telosb/main.ihex.out
found mote on /dev/ttyUSB0 (using bsl,auto)
installing telosb binary using bsl
tos-bsl --telosb -c /dev/ttyUSB0 -r -e -I -p build/telosb/main.ihex.out
MSP430 Bootstrap Loader Version: 1.39-goodfet-8
Mass Erase...
Transmit default password ...
Invoking BSL...
Transmit default password ...
Current bootstrap loader version: 1.61 (Device ID: f16c)
Changing baudrate to 38400 ...
MSP430 Bootstrap Loader Version: 1.39-goodfet-8
Mass Erase...
Transmit default password ...
Invoking BSL...
Transmit default password ...
Current bootstrap loader version: 1.61 (Device ID: f16c)
Changing baudrate to 38400 ...
Traceback (most recent call last):
  File "/usr/bin/tos-bsl", line 1918, in <module>
    main(0);
  File "/usr/bin/tos-bsl", line 1843, in main
    speed=speed,
  File "/usr/bin/tos-bsl", line 1218, in actionStartBSL
    self.actionChangeBaudrate(speed) #change baudrate
  File "/usr/bin/tos-bsl", line 1345, in actionChangeBaudrate
    self.serialport.setBaudrate(baudrate)
AttributeError: 'Serial' object has no attribute 'setBaudrate'
/home/liam/tinyos-main/support/make/msp/bsl.extra:45: recipe for target 'program' failed
make: *** [program] Error 1
(E) Whereas with a regular user and make telosb reinstall I get back:
cp build/telosb/main.ihex build/telosb/main.ihex.out
cp: cannot create regular file 'build/telosb/main.ihex.out': Permission denied
/home/liam/tinyos-main/support/make/msp/msp.rules:92: recipe for target 'setid' failed
make: *** [setid] Error 1
I have been all over the internet and online forums and haven't found a fix yet. (The "Permission denied" errors in (B) and (E) are consistent with the build/ directory having been created by root in (A), so a regular user cannot write to it.) Researching (D), I found that a pyserial update may have renamed or removed the setBaudrate method, but I'm not sure how to change that either.
Thank you for your time and help!
edit: added ncc and nescc versions.
This Stack Overflow answer is applicable to TinyOS.
So it turns out that pyserial 3.0.1 is not compatible with the tos-bsl tool TinyOS uses to program the mote. I'm fairly certain that pyserial 3.0.1 came with the package for TinyOS. To fix this, use the command sudo pip install "pySerial>=2.0,<=2.99999". For me, it kicked back:
The directory '/home/liam/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/liam/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting pySerial<=2.99999,>=2.0
Installing collected packages: pySerial
Found existing installation: pyserial 3.0.1
DEPRECATION: Uninstalling a distutils installed project (pySerial) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling pyserial-3.0.1:
Successfully uninstalled pyserial-3.0.1
Successfully installed pySerial-2.7
Despite the deprecation warning, the Blink app then uploaded and worked fine.
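The downgrade works because pyserial 3.0 removed the setBaudrate() method that tos-bsl calls. As an alternative that keeps pyserial 3.x installed, the script itself could be patched; a hedged sketch of the one-line change at the line 1345 shown in the traceback:

# /usr/bin/tos-bsl, in actionChangeBaudrate (around line 1345).
# pyserial <= 2.7 exposed a setter method:
#   self.serialport.setBaudrate(baudrate)
# pyserial >= 3.0 removed it; assign the baudrate property instead:
self.serialport.baudrate = baudrate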

wkhtmltopdf attempting to load from http rather than file

Here's an odd little problem that's led me to post my first question on SO. I am using wkhtmltopdf to convert an HTML document to a PDF as part of a Rails app. To do so, I am rendering the Rails web page to a static HTML file in a temp directory, copying a static header, footer and images to the same temp directory, then executing wkhtmltopdf using "system".
This works perfectly in the Development and Test environments. In my Staging environment, it does not. I suspected permissions at first, but the first couple of parts of the process (creating the static HTML files and copying them to the directory) are working. I can run wkhtmltopdf from the command line in that temp directory and get the expected outcome. Finally, I ran wkhtmltopdf via both "system" and backticks through the Rails console in the staging environment, and here's what I get as output:
> `wkhtmltopdf --footer-html tmp/invoices/footer.html --header-html tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in tmp/invoices/test.html tmp/invoices/this.pdf`
Loading pages (1/6)
QPainter::begin(): Returned false ] 10%
Error: Unable to write to destination
Error: Failed loading page http://tmp/invoices/test.html (sometimes it will work just to ignore this error with --load-error-handling ignore) => ""
Notice that last bit. I'm pointing to local files, but it's looking for them via http. OK, I think, maybe I need to be explicit and feed it the file:// protocol so it doesn't look for http. So I try this:
> system("wkhtmltopdf --footer-html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/footer.html --header-html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/this.pdf")
Loading pages (1/6)
Error: Failed loading page file://library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html (sometimes it will work just to ignore this error with --load-error-handling ignore)
=> false
Notice that this one fails with a lowercase "l" on Library. What the heck? (And no, it doesn't get any better with the recommendation to ignore the error with that switch.)
Any ideas? Is there a Rails or Ruby setting that would cause system commands to get rewritten? Is there an option I can add to wkhtmltopdf to make sure it loads from local file? I'm quite baffled. Thanks!
I have had success when using the absolute file path (notice the extra slash after the file://)
wkhtmltopdf --footer-html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/footer.html --header-html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/this.pdf
It is the same on Windows.
Unix path:
file:///absolute/path/to/file
Windows path:
file:///C:/absolute/path/to/file
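In a Rails app, one way to sidestep relative paths entirely is to build the file:// URLs from Rails.root. A minimal sketch, assuming the tmp/invoices layout from the question:

# Build absolute file:// URLs so wkhtmltopdf never falls back to http://.
# Rails.root is a Pathname, so each join produces an absolute path and the
# interpolation below yields the required file:/// form.
dir    = Rails.root.join("tmp", "invoices")
header = "file://#{dir.join('header.html')}"
footer = "file://#{dir.join('footer.html')}"
input  = "file://#{dir.join('test.html')}"
output = dir.join("this.pdf").to_s   # the output argument is a plain path

system("wkhtmltopdf",
       "--header-html", header, "--footer-html", footer,
       "-s", "Letter", "-L", "0in", "-R", "0in", "-T", "0.5in", "-B", "1in",
       input, output)

Passing the arguments to system individually also keeps the shell from reinterpreting the command line.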
In the latest wicked_pdf 0.11 I found a bug.
Example: C:\Ruby193\lib\ruby\gems\1.9.1\gems\wicked_pdf-0.11.0\lib\wicked_pdf.rb
At line 198 I changed
options[hf][:html][:url] = "file://#{tf.path}"
to
options[hf][:html][:url] = "file:///#{tf.path}"
(i.e. changed // to ///). After that change, wicked_pdf worked again.
Take a look at the wicked_pdf gem.
You can add a PDF MIME type and then, for whatever page you want PDF'd, just tack .pdf onto the URL.
I am using this in prod and it works quite well.
No need to call wkhtmltopdf directly.
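A hedged sketch of that setup (the controller and names are illustrative, not from the original answer):

# config/initializers/mime_types.rb -- register the PDF MIME type
Mime::Type.register "application/pdf", :pdf

# app/controllers/invoices_controller.rb (hypothetical controller)
class InvoicesController < ApplicationController
  def show
    @invoice = Invoice.find(params[:id])
    respond_to do |format|
      format.html
      format.pdf do
        # wicked_pdf renders show.pdf.erb through wkhtmltopdf behind the scenes
        render pdf: "invoice_#{@invoice.id}"
      end
    end
  end
end

With that in place, requesting /invoices/42.pdf returns the rendered PDF.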

vim nerdtree files show up with * appended [duplicate]

Possible Duplicate:
gVim displays every file with an asterisk on the right (and bold)?
I'm using Vim with the NERDTree plugin for my Rails projects, and some of the files show up with a * appended to the filename. They are also a different color from the other files.
edit.html.erb*
index.html.erb
show.html.erb*
What does the * mean?
The key is the executable bit. For example, if you do this:
$ touch no_exec_file exec_file
$ chmod -v u+x exec_file
$ ls -lF
total 0
-rwxr--r-- 1 reoo reoo 0 2012-09-19 19:14 exec_file*
-rw-r--r-- 1 reoo reoo 0 2012-09-19 19:14 no_exec_file
You can see the '*' after exec_file. Now, if you open Vim, you can see the '*' symbol next to exec_file again.
So, the NERDTree plugin shows the '*' symbol for files that can be executed by the user.
It means that your files are executable, i.e. you gave them permission to be executed, or they are files like .exe, for example.
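If the execute bits are unintended (they often appear after a checkout on Windows or on a FAT/NTFS mount), they can be cleared in bulk; a hedged example for the Rails views directory:

$ find app/views -type f -name '*.erb' -exec chmod a-x {} +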

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built Nagios from source and used yum to install all needed dependencies into this root, etc.)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing this question, which had practically the same problem I'm having with check_url, I decided to open a new question on the subject because:
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
# 'check_url' command definition
define command{
        command_name    check_url
        command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give it one more shot at figuring out a solution. I found the check_url_status plugin and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Ran the following:
./check_url_status -U some-domain.com
When I run the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
        host_name               {my-shared-web-server}
        service_description     URL: somedomain.com
        check_command           check_url!somedomain.com
        max_check_attempts      5
        check_interval          3
        retry_interval          1
        check_period            24x7
        notification_interval   30
        notification_period     workhours
}
I was making things WAY too complicated.
The built-in / installed by default plugin, check_http, can accomplish what I wanted and more. Here's how I have accomplished this:
My Service Definition:
define service{
        host_name               myers
        service_description     URL: my-url.com
        check_command           check_http_url!http://my-url.com
        max_check_attempts      5
        check_interval          3
        retry_interval          1
        check_period            24x7
        notification_interval   30
        notification_period     workhours
}
My Command Definition:
define command{
        command_name    check_http_url
        command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
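The command can be exercised by hand before wiring it into Nagios; a hedged example (the host and URL are illustrative, and the plugin path assumes the usual $USER1$ location):

/usr/local/nagios/libexec/check_http -I my-shared-web-server -u http://my-url.com

check_http prints an OK/WARNING/CRITICAL status line and sets the matching exit code, which is exactly what Nagios consumes.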
A better way to monitor URLs is WebInject, which can be used with Nagios.
The problem below is due to the Perl package utils being missing; try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can write a script plugin. It is easy; you only have to check the URL with something like:
`curl -Is $URL -k | grep HTTP | cut -d ' ' -f2`
$URL is what you pass to the script as a parameter.
Then check the result: if the code is greater than 399 you have a problem; otherwise everything is OK. Then set the right exit code and message for Nagios.
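A minimal sketch of such a plugin (the file name check_url_code.sh is hypothetical; the exit codes follow the Nagios convention of 0 = OK, 2 = CRITICAL, 3 = UNKNOWN):

#!/bin/sh
# check_url_code.sh <url> -- report CRITICAL for HTTP codes above 399
URL="$1"
[ -n "$URL" ] || { echo "UNKNOWN - usage: $0 <url>"; exit 3; }
CODE=`curl -Is "$URL" -k | grep HTTP | cut -d ' ' -f2`
if [ -z "$CODE" ]; then
    echo "CRITICAL - no HTTP response from $URL"
    exit 2
elif [ "$CODE" -gt 399 ]; then
    echo "CRITICAL - $URL returned HTTP $CODE"
    exit 2
fi
echo "OK - $URL returned HTTP $CODE"
exit 0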
