Error "Too many open files" in pdflatex - latex

When compiling a LaTeX document with 15 or so packages and about five includes, pdflatex throws a "too many open files" error. All includes are ended with \endinput. Any ideas what might cause the error?
The error seems to depend on how many packages are used (no surprise...); however, this is not the first time I've used this many packages, yet I've never encountered such an error before.
@axel_c: This is not about Linux. As you may or may not know, LaTeX is also available on Windows (which just happens to be what I'm using right now).

Try inserting
\let\mypdfximage\pdfximage
\def\pdfximage{\immediate\mypdfximage}
before \documentclass.
See also these threads from the pdftex mailing list:
Error message: Too many open files.
Too many files open

Type
ulimit -n
to get the maximum number of open files. To change it to e.g. 2048, type
ulimit -S -n 2048
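As a quick check that the limit really is the culprit, you can raise it in the same shell session and recompile (a sketch for a Unix-like shell; mydocument.tex is a placeholder name):
ulimit -n                  # show the current soft limit on open files
ulimit -S -n 2048          # raise the soft limit for this shell session only
pdflatex mydocument.tex    # recompile in the same shell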

What is this command giving you:
$ ulimit -n
You might want to increase it by editing the /etc/security/limits.conf file.
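For a persistent, per-user limit, an entry along these lines should work (a sketch; 'myuser' and the values are placeholders, and the column layout follows the comments at the top of limits.conf):
# /etc/security/limits.conf
# <domain>    <type>    <item>     <value>
myuser        soft      nofile     2048
myuser        hard      nofile     4096
The new limits take effect at the next login session.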

This could be caused by a low value in your 'max open file descriptors' kernel configuration. Assuming you're using Linux, you can run this command to find out the current limit:
cat /proc/sys/fs/file-max
If the limit is low (say, 1024 or so, which is the default in some Linux distros), you could try raising it by editing /etc/sysctl.conf:
fs.file-max = 65536
Details may differ depending on your Linux distribution, but a quick Google search will let you fix it easily.
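A minimal sketch of that sequence on a typical Linux box (the 65536 value follows the snippet above; sysctl -p reloads /etc/sysctl.conf without a reboot):
cat /proc/sys/fs/file-max                                   # check the current kernel-wide limit
echo "fs.file-max = 65536" | sudo tee -a /etc/sysctl.conf   # persist the new value
sudo sysctl -p                                              # apply it immediately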

Related

RStudio Server C-Stack memory allocation setting in rsession.conf

I'm trying to increase the C-stack size in RStudio Server 0.99 on CentOS 6 by editing the /etc/rstudio/rserver.conf file as follows:
rsession-stack-limit-mb=20
But "rstudio-server verify-installation" returns this message:
The option 'rsession-stack-limit-mb' is deprecated and will be discarded.
If I put this setting in /etc/rstudio/rsession.conf, I get this message:
unrecognised option 'rsession-stack-limit-mb'
Can someone help me find the right configuration?
Thanks in advance
Diego
I guess you use the free version of RStudio Server. According to https://github.com/rstudio/rstudio/blob/master/src/cpp/server/ServerOptions.cpp, it seems like you need a commercial version if you'd like to manage memory limits in RStudio Server.
Or, you can use the "ulimit" command on CentOS, e.g., "ulimit -s 20000". Then, run R from the Linux command line or in batch mode.
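A minimal sketch of that workaround (analysis.R is a placeholder script name; 20000 is the stack size in KB from the suggestion above):
ulimit -s 20000          # raise the stack size limit for this shell session
R CMD BATCH analysis.R   # run the script in batch mode so R inherits the new limit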

texvc does not render latex math in Mediawiki

I have the Math extension installed in my MediaWiki 1.19. After I updated Ubuntu Server from 12.04 to 14.04 something seems to have messed it up and it has stopped working. Basically I get the following error when I try to display anything between the <math> and </math> tags:
Failed to parse (PNG conversion failed; check for correct installation
of latex and dvipng (or dvips + gs + convert))
I have tried the common troubleshooting that one can find online regarding this issue, and have recompiled texvc to check if that fixed the issue. The texvc executable in the extensions/Math/math directory seems to do its job when invoked from the command line. I have obviously checked that all the other executables (latex, dvipng, etc.) work as they should.
When I try to render math from my wiki, the corresponding *.tex file is created in images/tmp with the correct latex code in it, but nothing else happens.
The problem seems to be related to texvc having trouble invoking latex and dvipng.
What could be causing this issue and how can I fix it?
Well, I figured it out. Basically, any shell command is passed through a security filter. So in practice, texvc is executed by MediaWiki through bin/ulimit4.sh:
#!/bin/bash
ulimit -t $1 -v $2 -f $3
eval "$4"
where $4 is the actual texvc command being run and $2 is the amount of memory allowed for this process. The default is 102400 KB (exactly 100 MB), which seems to be insufficient for this process to run. The amount of memory can be set in LocalSettings.php with the variable $wgMaxShellMemory. In my case I set it to 300 MB, $wgMaxShellMemory = 307200;, which seems to be enough.
Why this small process of generating a png needs so much memory I do not know.
The reason this stopped working after updating to Ubuntu 14.04 probably has to do with the newly shipped versions of latex, dvipng, convert, etc. requiring more memory than the versions that came with Ubuntu 12.04.
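As a rough way to confirm the memory cap was the problem, you can mimic the wrapper from the command line and run texvc under different limits. This is a sketch only: the texvc argument list here follows its usual tmpdir/outdir/tex/encoding form and is an assumption, as are the directories and the sample formula.
( ulimit -v 102400; ./texvc /tmp /tmp 'a^2+b^2=c^2' utf-8 )   # old 100 MB cap: likely to fail
( ulimit -v 307200; ./texvc /tmp /tmp 'a^2+b^2=c^2' utf-8 )   # raised 300 MB cap: should render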

grep command that works on Ubuntu, but not on Fedora

I clean hacked WordPress installations for clients on a regular basis, and I have a set of scripts that I have written to help me track down where the sites are hacked. One code snippet that I use regularly worked fine on Ubuntu, but since switching to Fedora on Friday it has quit behaving as expected. The command is this:
grep -Iri --exclude "*.js" "eval\s*(" * | grep -rivf ~/safeevals.txt >../foundevals.txt;
What is supposed to happen (and did happen when I was using Ubuntu): grep through all non-binary files, excluding JavaScript includes, for all occurrences of the eval() function, then perform a negative match, line by line, against all known occurrences of the eval() function in a vanilla installation of WordPress (the patterns of which are in ~/safeevals.txt).
What is actually happening: the first part works fine, as I ran it separately and it did find all instances of eval() in the installation. However, instead of grepping through those results, the second grep after the pipe is re-grepping through all of the files and returning a negative match against ~/safeevals.txt (i.e. pretty much every line of every file in the installation).
Any idea why the second grep isn't acting on the piped data, or what I need to do to fix it? Thanks.
-Michael
Just tested on my Debian box: apparently, grep -r likes to assume a default argument of '.'. I am really wondering whether that behaviour is valid. Anyway, I guess dropping the -r option from the second grep command will fix it.
Edit: grep -r defaulting to $PWD seems to be a recent change in grep; see this discussion on unix.stackexchange and the link there to the commit in the upstream grep code repository.
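With -r dropped from the second grep and everything else unchanged from the original command, the pipeline would look like this:
grep -Iri --exclude "*.js" "eval\s*(" * | grep -ivf ~/safeevals.txt > ../foundevals.txt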

Where is my /etc/sysctl.conf file? Postgresql Fatal could not create shared memory segment

My goal is to install and fully set up PostgreSQL by following the RailsCast video.
P.S. I am on Mountain Lion 10.8.
$ brew install postgresql
seems okay.
$ initdb /usr/local/var/postgres
OKs, OKs, then...
FATAL: could not create shared memory segment: Cannot allocate memory
DETAIL: Failed system call was shmget(key=1, size=2072576, 03600).
HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory or swap space, or exceeded your kernel's SHMALL parameter. You can either reduce the request size or reconfigure the kernel with larger SHMALL. To reduce the request size (currently 2072576 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
So, like a good young SO grasshopper, I start googling and come to this SO post:
PostgreSQL installation error -- Cannot allocate memory
The suggested answer in that post led me to this answer: http://willbryant.net/software/mac_os_x/postgres_initdb_fatal_shared_memory_error_on_leopard
$ sudo sysctl -w kern.sysv.shmall=65536
Password:
kern.sysv.shmall: 1024 -> 65536
$ sudo sysctl -w kern.sysv.shmmax=16777216
kern.sysv.shmmax: 4194304 -> 16777216
Looks like everything worked so far, but in order to protect my changes across reboots, I need to update my /etc/sysctl.conf file. The problem is that I can't find it!
How do I locate this file? From my peanut-sized understanding of computers, there is no such file path, and if there were, what comes before the /etc? It certainly is not on my desktop. All I get is "no such file exists", but I don't know how to find this file.
Embarrassing. I was trying to cd into the file itself. Just do $ cd /etc.
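On OS X the file typically does not exist until you create it, but it is read at boot if present. A sketch that persists the two values from the sysctl commands above:
sudo tee -a /etc/sysctl.conf <<EOF
kern.sysv.shmall=65536
kern.sysv.shmmax=16777216
EOF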

AIX 6.1, tar issue

On AIX 6.1, I use Java to execute a tar command to extract a tar package. One strange thing I ran into is that some files with long names in the tar package failed to be extracted to where they should be; instead they end up in the current working folder, and the owner of these files is not correct either.
I googled and found that there are many posts suggesting using GNU tar instead to avoid long file name issues, but I am sure this is not the same issue as the one I ran into.
Does anyone know why this happens? Any tips are much appreciated. Thanks.
The man pages are pretty instructive on this topic. Probably your tar file is not strictly POSIX compatible. On AIX:
The prefix buffer can be a maximum of 155 bytes and the name buffer can
hold a maximum of 100 bytes. If the path name cannot be split into
these two parts by a slash, it cannot be archived.
The Linux man page for GNU tar says it can handle a variety of tar file format variants. One of these is the 'ustar' POSIX standard, which appears to be the one handled by AIX tar. There is a separate gnu format, which is the default for GNU tar.
I'd suspect you're opening a GNU tar archive with a tar tool which only understands the POSIX standard, and it can't quite cope.
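If that is the case, extracting the archive with GNU tar instead of the native AIX tar should work around it. A sketch, assuming GNU tar has been installed from the AIX Toolbox (where it is commonly available as gtar or /opt/freeware/bin/tar; archive.tar is a placeholder):
gtar -tvf archive.tar | head   # list the first few entries to check the archive reads cleanly
gtar -xvf archive.tar          # extract with GNU tar, which understands the gnu long-name format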
