Here's an odd little problem that's led me to post my first question on SO. I am using wkhtmltopdf to convert an HTML document to a PDF as part of a Rails app. To do so, I am rendering the Rails web page to a static HTML file in a temp directory, copying a static header, footer and images to the same temp directory, then executing wkhtmltopdf using "system".
This works perfectly in the Development and Test environments. In my Staging environment, it does not. I suspected permissions at first, but the first parts of that process (creating the static HTML files and copying them to the directory) are working. I can run wkhtmltopdf from the command line in that temp directory and get the expected outcome. Finally, I ran wkhtmltopdf via both "system" and backticks through the Rails console in the staging environment, and here's what I get as output:
> `wkhtmltopdf --footer-html tmp/invoices/footer.html --header-html tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in tmp/invoices/test.html tmp/invoices/this.pdf`
Loading pages (1/6)
QPainter::begin(): Returned false ] 10%
Error: Unable to write to destination
Error: Failed loading page http://tmp/invoices/test.html (sometimes it will work just to ignore this error with --load-error-handling ignore) => ""
Notice that last bit. I'm pointing to local files, but it's looking for them via http. OK, I think, maybe I need to be explicit and feed it the file:// protocol so it doesn't look for http. So I try this:
> system("wkhtmltopdf --footer-html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/footer.html --header-html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/this.pdf")
Loading pages (1/6)
Error: Failed loading page file://library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html (sometimes it will work just to ignore this error with --load-error-handling ignore)
=> false
Notice that this one fails with a lowercase "l" on Library. What the heck? (And no, it doesn't get any better with the recommendation to ignore the error with that switch.)
Any ideas? Is there a Rails or Ruby setting that would cause system commands to get rewritten? Is there an option I can add to wkhtmltopdf to make sure it loads from local file? I'm quite baffled. Thanks!
I have had success when using the absolute file path (notice the extra slash after the file://)
wkhtmltopdf --footer-html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/footer.html --header-html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/this.pdf
This is the same on Windows.
Unix path:
file:///absolute/path/to/file
Windows path:
file:///C:/absolute/path/to/file
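If the command is built inside the Rails app, one way to get the absolute file:/// form is to derive it from Rails.root. This is just a minimal sketch, not the asker's actual code; the variable names are illustrative and the paths mirror the ones in the question:
# Rails.root is already an absolute path starting with "/", so "file://#{...}"
# yields the required three slashes (file:///Library/...).
invoice_dir = Rails.root.join("tmp", "invoices")
header = "file://#{invoice_dir.join('header.html')}"
footer = "file://#{invoice_dir.join('footer.html')}"
source = "file://#{invoice_dir.join('test.html')}"
target = invoice_dir.join("this.pdf").to_s
cmd = [
  "wkhtmltopdf",
  "--header-html", header,
  "--footer-html", footer,
  "-s", "Letter", "-L", "0in", "-R", "0in", "-T", "0.5in", "-B", "1in",
  source, target
]
# Passing an argument list to system avoids shell quoting issues; returns true/false.
system(*cmd)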
In the latest wicked_pdf (0.11) I found one bug.
Example:
C:\Ruby193\lib\ruby\gems\1.9.1\gems\wicked_pdf-0.11.0\lib\wicked_pdf.rb
On line 198, I changed:
options[hf][:html][:url] = "file://#{tf.path}"
to:
options[hf][:html][:url] = "file:///#{tf.path}"
(that is, // becomes ///). After that change, wicked_pdf worked again.
Take a look at the wicked_pdf gem.
You can add a PDF MIME type, and then for whatever page you want as a PDF, just tack .pdf onto the URL.
I am using this in prod and it works quite well.
No need to call wkhtmltopdf directly.
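A minimal sketch of that setup, assuming wicked_pdf with a typical controller; InvoicesController and the model here are hypothetical, not something from the question:
# config/initializers/mime_types.rb
# Register the PDF MIME type if your Rails version doesn't already.
Mime::Type.register "application/pdf", :pdf unless Mime::Type.lookup_by_extension(:pdf)

# app/controllers/invoices_controller.rb
class InvoicesController < ApplicationController
  def show
    @invoice = Invoice.find(params[:id])
    respond_to do |format|
      format.html
      # Requesting /invoices/42.pdf renders this action through wicked_pdf / wkhtmltopdf.
      format.pdf do
        render pdf: "invoice_#{@invoice.id}"   # the name of the generated PDF
      end
    end
  end
end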
Earlier today, I was trying to generate a certificate with a DNSName entry in the SubjectAltName extension:
$ openssl req -new -subj "/C=GB/CN=foo" -addext "subjectAltName = DNS:foo.co.uk" \
-addext "certificatePolicies = 1.2.3.4" -key ./private-key.pem -out ~/req.pem
This command led to the following error message:
name is expected to be in the format /type0=value0/type1=value1/type2=... where characters may be escaped by \. This name is not in that format: 'C:/Program Files/Git/C=GB/CN=foo'
problems making Certificate Request
How can I stop Git Bash from treating this string parameter as a filepath, or at least stop it from making this alteration?
The release notes to the Git Bash 2.21.0 update today mentioned this as a known issue. Fortunately, they also described two solutions to the problem:
If you specify command-line options starting with a slash, POSIX-to-Windows path conversion will kick in converting e.g. "/usr/bin/bash.exe" to "C:\Program Files\Git\usr\bin\bash.exe". When that is not desired -- e.g. "--upload-pack=/opt/git/bin/git-upload-pack" or "-L/regex/" -- you need to set the environment variable MSYS_NO_PATHCONV temporarily, like so:
MSYS_NO_PATHCONV=1 git blame -L/pathconv/ msys2_path_conv.cc
Alternatively, you can double the first slash to avoid POSIX-to-Windows path conversion, e.g. "//usr/bin/bash.exe".
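Applied to the command from the question, the first workaround would look something like this (assuming the openssl bundled with Git for Windows):
$ MSYS_NO_PATHCONV=1 openssl req -new -subj "/C=GB/CN=foo" -addext "subjectAltName = DNS:foo.co.uk" \
    -addext "certificatePolicies = 1.2.3.4" -key ./private-key.pem -out ~/req.pem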
Using MSYS_NO_PATHCONV=1 can be problematic if your script accesses files.
Prefixing with a double forward slash doesn't work for the specific case of OpenSSL, as it causes the first DN segment key to be read as "/C" instead of "C", so OpenSSL drops it, outputting:
req: Skipping unknown attribute "/C"
Instead, I used a function that detects if running on bash for Windows, and prefixes with a "dummy" segment if so:
# If running on bash for Windows, any argument starting with a forward slash is automatically
# interpreted as a drive path. To stop that, you can prefix with 2 forward slashes instead
# of 1 - but in the specific case of openssl, that causes the first CN segment key to be read as
# "/O" instead of "O", and is skipped. We work around that by prefixing with a spurious segment,
# which will be skipped by openssl
function fixup_cn_subject() {
    local result="${1}"
    case $OSTYPE in
        msys|win32) result="//XX=x${result}" ;;
    esac
    echo "$result"
}
# Usage example
MY_SUBJECT=$(fixup_cn_subject "/C=GB/CN=foo")
Found a workaround by passing a dummy value as the first attribute, for example: -subj '//SKIP=skip/C=gb/CN=foo'
I had the same issue using bash, but running the exact same command in Powershell worked for me. Hopefully this will help someone.
This is basically the same issue as in mingw ld cannot find some library which is exist in the search path, MinGW linker can't find MPICH2 libraries - and I'm aware that there are heaps of posts on StackOverflow regarding the issue of static and dynamic linking with MinGW - but I couldn't find anything that explains how I can troubleshoot.
I am building a project with a huge linker command line (via g++) on MinGW, in an MSYS2 shell (git-bash.exe). The process fails with, among others:
/z/path/to/../../../../i686-w64-mingw32/bin/ld.exe: cannot find -lssl
I add -Wl,--verbose to the g++ linker call (to be passed on to ld), and for -L/z/path/to/libs/openssl/lib/mingw -lssl I can see:
...
attempt to open /z/path/to/libs/openssl/lib/mingw/libssl.a failed
...
/z/path/to/libs/openssl/lib/mingw/ssl.dll failed
attempt to open /z/path/to/libs/openssl/lib/mingw\libssl.a failed
...
But this is weird, because the file exists?
$ file /z/path/to/libs/openssl/lib/mingw/libssl.a
/z/path/to/libs/openssl/lib/mingw/libssl.a: current ar archive
(... and it was built with the same compiler on the same machine)?
Weirdly, one attempt opens with a forward slash (.../libssl.a) and another with a backslash (...\libssl.a) - but at least the first path checks out in a bash shell, as shown above?
It gets even worse if I try to specify -l:libssl.a -- or if I specify -L/z/path/to/libs/openssl/lib/mingw -Wl,-Bstatic -lssl -- instead; then all attempts to open are with a backslash:
...
attempt to open /z/path/to/scripts/other/build/openssl/build/mingw/lib\libssl.a failed
attempt to open /z/path/to/libs/openssl/lib/mingw\libssl.a failed
...
To top it all off, if I look it up manually on the command line using ld, it is found?!:
$ ld -L/z/path/to/libs/openssl/lib/mingw -lssl --verbose
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.dll.a failed
attempt to open Z:/path/to/libs/openssl/lib/mingw/ssl.dll.a failed
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
Does anyone have an idea why this happens, and how can I get ld to finally find these libraries? Or rather - how can I troubleshoot, and understand why these libraries are not found, when they exist at the paths where ld tries to open them?
OK, found something more - not sure if this is a bug; but my problem is that I'm actually reading arguments from a file (otherwise I get g++: Argument list too long). So, to simulate that:
$ echo " -Wl,--verbose -L/z/path/to/libs/openssl/lib/mingw -lssl -lcrypto " > tmcd3
$ g++ #tcmd3 2>&1 | grep succeeded | grep ssl
# nothing
$ g++ `cat tcmd3` 2>&1 | grep succeeded | grep ssl
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
attempt to open Z:/path/to/libs/openssl/lib/mingw/libcrypto.a succeeded
... it turns out, if the very same arguments are fed on the command line, then static library lookup succeeds - but if the arguments are read from a file through the @ at-sign, then static library lookup fails?! Unfortunately, I cannot use that on my actual project, since even with cat, I'd still get g++: Argument list too long ... So how can I fix this?
MSYS has special handling of directories passed as arguments when they are used in the shell. It translates e.g. /<drive_letter>/blabla to the proper Windows-style path. This is to accommodate Unix programs that don't handle Z:-style directory roots.
What you see here is that MSYS isn't performing this interpretation for strings read from a file. When you think about it, it's very logical, but as you have experienced first-hand, also sometimes annoying.
Long story short: don't put Unix style paths in files with command arguments. Instead, pass them through e.g. cygpath -w, which works in MSYS2 (which should be the MSYS that Git for Windows 2+ comes with).
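For example, something like this should produce a response file the linker can read; it is only a sketch using the paths from the question, and it uses cygpath -m (the "mixed" Z:/... form) rather than -w (the backslashed Z:\... form):
# Convert the POSIX path to a Windows-style path before writing the response file.
libdir=$(cygpath -m /z/path/to/libs/openssl/lib/mingw)   # -> Z:/path/to/libs/openssl/lib/mingw
echo " -Wl,--verbose -L$libdir -lssl -lcrypto " > tcmd3
g++ @tcmd3 2>&1 | grep succeeded | grep ssl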
Ok, with some more experiments, I noticed that:
-L/z/path/to/libs/openssl/lib/mingw, the Unix path specification, tends to fail - while if we specify the same, except starting with a Windows drive letter, that is:
-LZ:/path/to/libs/openssl/lib/mingw, then things work - also from an arguments file with the @ at-sign:
$ echo " -Wl,--verbose -LZ:/path/to/libs/openssl/lib/mingw -lssl -lcrypto " > tmcd3
$ g++ #tcmd3 2>&1 | grep succeeded | grep ssl
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
attempt to open Z:/path/to/libs/openssl/lib/mingw/libcrypto.a succeeded
I guess, since the shell is MSYS2/git-bash.exe, entering full POSIX paths on the shell with /z/... is not a problem, because the shell will convert them - but in a file, there is nothing to convert them, so we must use the Windows/MinGW convention to specify them...
On my MacBook (OSX Mountain Lion), I used to use this Pandoc command to convert Markdown to PDF:
$ markdown2pdf -N -o pandoc_output.pdf --xetex --toc --template=mytemplate.tex myfile.md
But markdown2pdf no longer works, and the --xetex option (as in markdown2pdf -N -o ../../Desktop/pandoc_output.pdf --xetex --toc --template=mytemplate-headers-garamond_date.tex) is deprecated.
If I do this:
$ pandoc -N -o Desktop/pandoc_output.pdf --xetex --toc --template=mytemplate.tex myfile.md
I get this:
pandoc: unrecognized option `--xetex'
But if I take out --xetex and do this:
$ pandoc -N -o Desktop/pandoc_output.pdf --toc --template=mytemplate.tex myfile.md
then I get this:
pandoc: Error producing PDF from TeX source.
! Package hyperref Error: Wrong driver option `xetex',
(hyperref) because XeTeX is not detected.
See the hyperref package documentation for explanation.
Type H <return> for immediate help.
...
l.3925 \ProcessKeyvalOptions{Hyp}
What's the solution?
Try --latex-engine=xelatex instead of --xetex
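Applied to the command from the question, that becomes, for example:
$ pandoc -N -o Desktop/pandoc_output.pdf --latex-engine=xelatex --toc --template=mytemplate.tex myfile.md
(Note that on pandoc 2.x and later the option is spelled --pdf-engine=xelatex.)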
The prior answers to this question were helpful to me, as I had installed pandoc a couple of years ago, but never TeX Live. Consequently, I had no idea if I had installed it correctly, so putting in the entire path helped me see that it was working, as follows:
pandoc --latex-engine=/usr/local/texlive/2012basic/bin/universal-darwin/xelatex
This is the default install location for the BasicTeX setup, which you download from the Pandoc install page.
I had also forgotten about using pandoc -D Latex > my-latex-template.tex to generate a template. After giving a .tex template instead of my .html one (which had caused a "you don't have BEGIN {" error), I got a PDF. In other words, the default template worked.
Also, I had inaccurately entered -t pdf (not shown above) to set PDF as an output format, but this was not correct. The output format is LaTeX, which is then translated to PDF. It is not necessary to specify an output format with the -t option.
I hope this record of my minor stumbles saves someone some time.
See the pandoc User's Guide (or man page) for the --latex-engine option.
I am searching for a way, in Ruby code, to create a temp file, append some Ruby code to it, and then pass that temp file's path to jruby -c to check for syntax errors.
Currently I am trying the following approach:
script_file = File.new("#{Rails.root}/test.rb", "w+")
script_file.print(content)
script_file.close
command = "#{RUBY_PATH} -c #{Rails.root}/test.rb"
eval(command);
new_script_file.close
When I inspect the command variable, it correctly shows jruby -c {ruby file path}. But when I execute the above piece of code, I get the following error:
Completed 500 Internal Server Error in 41ms
SyntaxError ((eval):1: dhunknown regexp options - dh):
Let me know if anyone has any idea on this.
Thanks,
Dean
eval evaluates the string as Ruby code, not as a command-line invocation. Since your command is not valid Ruby syntax, you get this exception.
If you want to launch a command in Ruby, you can use %x{} or backticks:
output1 = `ls`
output2 = %x{ls}
Both forms will return the output of the launched command as a String, if you want to process it. If you want this output to be displayed directly in the user's terminal, you can use system():
system("ls")
I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built Nagios from source, and have used yum to install all needed dependencies into this root, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing this question, which had practically the same problem I'm having with check_url, I decided to open up a new question on the subject because
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
# 'check_url' command definition
define command{
        command_name    check_url
        command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give 1 more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Run the following:
./check_url_status -U some-domain.com
When I ran the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
        host_name               {my-shared-web-server}
        service_description     URL: somedomain.com
        check_command           check_url!somedomain.com
        max_check_attempts      5
        check_interval          3
        retry_interval          1
        check_period            24x7
        notification_interval   30
        notification_period     workhours
}
I was making things WAY too complicated.
The built-in plugin check_http (installed by default) can accomplish what I wanted and more. Here's how I accomplished this:
My Service Definition:
define service{
        host_name               myers
        service_description     URL: my-url.com
        check_command           check_http_url!http://my-url.com
        max_check_attempts      5
        check_interval          3
        retry_interval          1
        check_period            24x7
        notification_interval   30
        notification_period     workhours
}
My Command Definition:
define command{
        command_name    check_http_url
        command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
A better way to monitor URLs is by using WebInject, which can be used with Nagios.
The problem below is because you don't have the Perl module utils.pm; try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can write a script plugin. It is easy; you only have to check the URL with something like:
curl -Is $URL -k | grep HTTP | cut -d ' ' -f2
$URL is what you pass to the script command as a parameter.
Then check the result: if the code is greater than 399 you have a problem; otherwise everything is OK. Then use the right exit code and message for Nagios, as in the sketch below.
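A minimal sketch of such a plugin; the file name check_url_code.sh is just illustrative, and it reads the status code with curl's -w '%{http_code}' instead of grepping the header line, but the idea is the same:
#!/bin/bash
# check_url_code.sh <url> -- report a URL's HTTP status code to Nagios.
# Nagios exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.
URL="$1"

# -I: HEAD request, -s: silent, -k: skip TLS verification,
# -o /dev/null: discard the headers, -w: print only the status code.
CODE=$(curl -Isk -o /dev/null -w '%{http_code}' "$URL")

if [ -z "$CODE" ] || [ "$CODE" -eq 0 ]; then
    echo "CRITICAL - could not reach $URL"
    exit 2
elif [ "$CODE" -gt 399 ]; then
    echo "CRITICAL - $URL returned HTTP $CODE"
    exit 2
else
    echo "OK - $URL returned HTTP $CODE"
    exit 0
fi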