I'm on OS X Lion using the wkhtmltopdf that came with Homebrew (0.9.9), and all of a sudden I cannot generate PDFs anymore. When I tack .pdf onto the end of a URL, the wkhtmltopdf process fires up, but it never completes. I suspect it's an issue with wkhtmltopdf itself, because the process doesn't complete when I run it via the CLI either. When I issue the following command, I get a test.pdf file, but the process never finishes, even though it says "Done".
curl google.com | wkhtmltopdf - test.pdf
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   219  100   219    0     0   1685      0 --:--:-- --:--:-- --:--:--  5475
Loading page (1/2)
Printing pages (2/2)
Done
Is there something I can do to force the process to complete?
Turns out it had nothing to do with the CLI; it was actually due to Jammit running in development. Disabling Jammit fixes the PDFKit hanging issue.
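For anyone hitting the same hang, here's a minimal sketch of one way to keep Jammit out of development, assuming Jammit is loaded through Bundler (the group names are illustrative; adjust to your Gemfile):
# Gemfile: load Jammit everywhere except development (hypothetical grouping)
group :production, :staging do
  gem 'jammit'
end
Then re-run bundle install and restart the app server.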
I'm building a Docker image on Ubuntu 16.04.
It contains a curl command that downloads a file. The build passes for small files, but when I download a large one (40 GB), it fails with the following error:
Step 35/68 : RUN curl -L ${PBF_URL} --create-dirs -o /srv/nominatim/src/data.osm.pbf
---> Running in 9fb68ab31988
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 39.9G  100 39.9G    0     0  8855k      0  1:18:55  1:18:55 --:--:--  9.8M
Error processing tar file(exit status 1): unexpected EOF
Here is a link to the Dockerfile that I'm building:
https://github.com/merlinnot/nominatim-docker/blob/master/Dockerfile
I build it on a powerful server with 50 GB of RAM and 10 cores. I tried tinkering with memory parameters like --memory-swap -1 --memory 32g, but it didn't really help.
I would like to point out that I'm not downloading a tar file, and I have no intention of uncompressing anything.
The file itself is fine; I tried downloading it separately and it works great.
Any ideas on how I could solve this problem?
The tar command is used to package the new layer of the image, and in Docker 18.06 there appears to be an 8 GB limit in that step. I'd recommend:
Following the issue on GitHub to be notified when it's been resolved: https://github.com/moby/moby/issues/37581
Moving your data out of the image. Your image should contain the application binaries and libraries, but not the data itself. The data should be mounted into the running container as a volume, as sketched below.
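A minimal sketch of that second option, reusing the paths from the question (the image tag nominatim is hypothetical):
# build the image without the 40 GB download baked into a layer
docker build -t nominatim .
# download the data on the host instead of in the Dockerfile
curl -L "$PBF_URL" --create-dirs -o /srv/nominatim/src/data.osm.pbf
# mount the host directory into the container as a volume
docker run -v /srv/nominatim/src:/srv/nominatim/src nominatim
This keeps every image layer small and sidesteps the 8 GB layer limit entirely.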
I'm running long (multi-hour) rsync backup tasks in Jenkins.
rsync prints progress output to the log. When watching in a terminal, the last line constantly "refreshes", i.e. rsync keeps printing over it. But when Jenkins runs the task, it doesn't show that info.
Here is what I see in Jenkins while the task is running:
And here is what I see after it's completed (and that's what I want to see live, while it's running):
sending incremental file list
35-openMeet-flat.vmdk
131,072 0% 0.00kB/s 0:00:00
9,437,184 0% 8.88MB/s 1:38:26
21,757,952 0% 10.30MB/s 1:24:49
32,899,072 0% 10.40MB/s 1:23:58
44,302,336 0% 10.49MB/s 1:23:12
55,443,456 0% 10.92MB/s 1:19:55
66,191,360 0% 10.56MB/s 1:22:40
78,118,912 0% 10.73MB/s 1:21:17
How can I configure Jenkins to print complete output while the task is running?
P.S. I'd be happy even if I just had to find and watch some Jenkins log file, but currently I can't find anything.
For example, I tried this:
slavik@ubhome:/var/lib/jenkins/jobs/backup ESXI VM/builds/34$ tail log
sending incremental file list
35-openMeet.vmdk
532 100% 0.00kB/s 0:00:00 (xfr#1, to-chk=0/1)
sent 434 bytes received 41 bytes 190.00 bytes/sec
total size is 532 speedup is 1.12
Comparing file sizes...
Sizes are different, calculating delta checksums, can take a while, time for a coffee...
sending incremental file list
35-openMeet-flat.vmdk
slavik@ubhome:/var/lib/jenkins/jobs/backup ESXI VM/builds/34$
But as you can see, it doesn't show the last lines with progress data. It looks like it holds them in memory and doesn't write them to disk until there is an actual newline.
Sure, this is over a year later, but I stumbled upon this question in hopes of finding the answer. Failing that, I managed to find a solution and thought I'd share it for the rest of the internet.
What you want to do is "unbuffer" the rsync command. There are a couple of projects out there that can "unbuffer" things (unbuffered, expect's unbuffer command), but these didn't work quite right for me. unbuffered was close, but threw a bunch of unwanted newlines into my log (after using the -c flag, which prevented the duplicate lines everywhere).
What ended up working was piping my rsync command through stdbuf with the flags shown below. Now my Jenkins console is nice and pretty and keeps up with rsync's progress:
rsync -avz --progress <source> <destination> | stdbuf -oL tr '\r' '\n'
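Here, tr turns each carriage-return progress update into its own line, and stdbuf -oL makes tr line-buffered so Jenkins sees each line as soon as it's printed. A quick way to sanity-check the pipeline outside Jenkins, with printf standing in for rsync's carriage-return updates:
printf 'file.vmdk\r 1%%\r 2%%\rdone\n' | stdbuf -oL tr '\r' '\n'
Each \r-terminated update comes out as its own line, which is exactly what the Jenkins log needs.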
I am having trouble installing Grails via GVM. I installed GVM via the instructions on GVM's website, and it appears it was installed correctly - restarting the terminal and running gvm help produces a list of possible commands. However, when I go to install Grails (or Groovy), I get the following output in the terminal:
$ gvm install grails
Downloading: grails 2.3.2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (1) Protocol [http not supported or disabled in libcurl
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of /Users/neilpoulin/.gvm/archives/grails-2.3.2.zip or
/Users/neilpoulin/.gvm/archives/grails-2.3.2.zip.zip, and cannot find /Users/neilpoulin/.gvm/archives/grails-2.3.2.zip.ZIP, period.
Stop! The archive was corrupt and has been removed! Please try installing again.
I looked to make sure zip, unzip and curl were found:
$ which zip
/usr/bin/zip
$ which unzip
/usr/bin/unzip
$ which curl
/usr/bin/curl
Prior to this, the only thing I have done with Grails/Groovy is run the example project included on the Grails website (http://grails.org/learn > step 2).
What am I missing here? Is there some configuration of libcurl I need to change? Any help is much appreciated!
On investigation, it seems to be down to inconsistent versions (and behaviour) of MongoDB between our dev and prod environments. This was resulting in our prod server returning an array of URLs on the download request (i.e. [theurl]), which is where the stray bracket in the curl error comes from. This was working perfectly in our dev environment, but started serving the array when the release was promoted to prod. Hope this makes sense!
I was having this problem persist. For me, it turned out that there were corrupt candidate caches left behind by GVM during a previously failed install.
gvm flush candidates
set things back to rights here.
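If a half-downloaded archive is still lying around, a hedged sequence that also clears it before retrying (the archive path is taken from the error output above):
rm -f ~/.gvm/archives/grails-2.3.2.zip
gvm flush candidates
gvm install grails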
I am trying to install RVM, but I get this error.
Last login: Thu Mar 31 15:41:06 on ttys000
G-Mac-5:~ macbookpro$ bash < <(curl -B http://rvm.beginrescueend.com/install/rvm)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5335  100  5335    0     0   7948      0 --:--:-- --:--:-- --:--:-- 14113
bash: line 114: git: command not found
bash: line 115: git: command not found
ERROR: Unable to clone the RVM repository, attempted both git:// and https://
Do I need to install Git first? And what's the difference between zsh and bash?
There really should be better diagnostic messages when something about the installation fails. Phusion Passenger is an example of how to do it right, where not only are rigorous tests done prior to the installation attempt, but any problems are explained with cut-and-pasteable examples on how to fix them. They're even customized to the particular OS you're using, differentiating between apt-get and yum among other things.
If you're missing Git, you won't get very far with RVM; install it first, as sketched below.
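A minimal sketch for the package managers mentioned above; pick the line that matches your OS (on Mac OS X, Homebrew or Xcode's command line tools also provide Git):
sudo apt-get install git   # Debian/Ubuntu
sudo yum install git       # RHEL/CentOS/Fedora
brew install git           # Mac OS X with Homebrew
Then re-run the bash < <(curl ...) install command from the question.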
I admit I've cobbled together a mostly-working production setup on Ubuntu with Capistrano from the official docs (which seem dated and make a lot of assumptions) and various blog posts of varying outdatedness. Anyway, the last annoying hang-up is that indexing works when I do it by hand (and on deploy, I'm pretty sure), but doesn't work from cron.
Here's my crontab:
$ crontab -l
# m h dom mon dow command
* * * * * cd /var/www/app/current && /usr/local/bin/rake RAILS_ENV=production thinking_sphinx:index >> /var/www/app/current/log/cron.log 2>&1
Here is the log output (this actually appears 3 times per call):
Sphinx cannot be found on your system. You may need to configure the following
settings in your config/sphinx.yml file:
* bin_path
* searchd_binary_name
* indexer_binary_name
For more information, read the documentation:
http://freelancing-god.github.com/ts/en/advanced_config.html
This is what happens when I run the same command by hand (it also works with logging):
$ cd /var/www/app/current && /usr/local/bin/rake RAILS_ENV=production thinking_sphinx:index
(in /var/www/app/releases/20100729042739)
Generating Configuration to /var/www/app/releases/20100729042739/config/production.sphinx.conf
Sphinx 0.9.9-release (r2117)
Copyright (c) 2001-2009, Andrew Aksyonoff
using config file '/var/www/app/releases/20100729042739/config/production.sphinx.conf'...
indexing index 'app_core'...
collected 5218 docs, 3.9 MB
collected 5218 attr values
sorted 0.0 Mvalues, 100.0% done
sorted 0.7 Mhits, 100.0% done
total 5218 docs, 3898744 bytes
total 0.616 sec, 6328760 bytes/sec, 8470.28 docs/sec
distributed index 'app' can not be directly indexed; skipping.
total 3 reads, 0.008 sec, 1110.2 kb/call avg, 2.6 msec/call avg
total 15 writes, 0.016 sec, 540.4 kb/call avg, 1.0 msec/call avg
rotating indices: succesfully sent SIGHUP to searchd (pid=20101).
Also relevant:
$ which rake
/usr/local/bin/rake
$ which indexer
/usr/local/bin/indexer
The error is somewhat common, but it smells funny that it works fine from the command line; I suspect something else is weird. I have two other mission-critical cron jobs that run rake tasks that look identical and run fine, so I'm not sure what's different about this one. Any help would be greatly appreciated!
P.S. Is there an authoritative deploy config for this with current Capistrano and TS versions? It seems everyone rolls their own, and the official docs seem to be as idiosyncratic as the blog posts out there.
Is the crontab owned by the same user as the one you are logged in as when you run things manually?
Since it seems like a clear PATH issue, and cron runs with a restricted PATH (i.e. not what's in your .profile), try adding this to the top of your crontab file.
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin
Or, if you don't want to modify cron's PATH, you could symlink the files you need into /usr/sbin, which is likely in the PATH by default; a sketch follows.
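A hedged sketch of that symlink alternative, using the binary location from the which output in the question (searchd is assumed to live alongside indexer):
sudo ln -s /usr/local/bin/indexer /usr/sbin/indexer
sudo ln -s /usr/local/bin/searchd /usr/sbin/searchd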
I can confirm I had a similar error to #kbighorse's, where the commands ran fine manually on the command line but did not run from the cron job. I did not receive any errors, but the log file would only output the directory the Sphinx command was being run from. Once I added the following PATH variable from #jdl to the top of the crontab file, the cron job ran properly:
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin