texvc does not render LaTeX math in MediaWiki

I have the Math extension installed in my MediaWiki 1.19. After I upgraded Ubuntu Server from 12.04 to 14.04, something seems to have broken it and it has stopped working. Basically, I get the following error when I try to display anything between the <math> and </math> tags:
Failed to parse (PNG conversion failed; check for correct installation
of latex and dvipng (or dvips + gs + convert))
I have tried the common troubleshooting steps that one can find online for this problem, and have recompiled texvc to check whether that fixed it. The texvc executable in the extensions/Math/math directory does its job when invoked from the command line, and I have obviously checked that all the other executables (latex, dvipng, etc.) work as they should.
When I try to render math from my wiki, the corresponding *.tex file is created in images/tmp with the correct latex code in it, but nothing else happens.
The problem seems to be related to texvc having trouble invoking latex and dvipng.
What could be causing this issue and how can I fix it?

Well, I figured it out. Basically, any shell command is passed through a security filter. So in practice, texvc is executed by MediaWiki through bin/ulimit4.sh:
#!/bin/bash
# $1 = CPU time limit (seconds), $2 = virtual memory limit (KB), $3 = file size limit (KB)
ulimit -t $1 -v $2 -f $3
# $4 = the actual command to run
eval "$4"
where $4 is the actual texvc command being run and $2 is the amount of memory allowed for this process. The default is 102400 KB (exactly 100 MB), which seems to be insufficient for this process to run. The amount of memory can be set in LocalSettings.php with the variable $wgMaxShellMemory. In my case I set it to 300 MB, $wgMaxShellMemory = 307200;, which seems to be enough.
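To verify that the memory cap is really the culprit, the same limits can be reproduced by hand; a rough sketch, assuming latex is on your PATH and test.tex is some sample file (both names are just examples):
bash -c 'ulimit -v 102400; latex test.tex'   # the default 100 MB cap, under which it fails
bash -c 'ulimit -v 307200; latex test.tex'   # the raised 300 MB cap, which should succeed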
Why this small process of generating a png needs so much memory I do not know.
The reason this stopped working after updating to Ubuntu 14.04 probably has to do with a newly shipped version of latex, dvipng, convert, etc. requiring more memory than the version that came with Ubuntu 12.04.

Related

Running a PyCharm interpreter using nvidia-docker2

I'm working on Ubuntu 20. I've installed docker and nvidia-docker2. In PyCharm, I've followed the JetBrains guide, but the advanced steps aren't consistent with what I see in my setup. I use PyCharm Professional 2022.2.
In the step where the run options are configured, I additionally put --runtime=nvidia and --gpus=all.
Step 4 finishes the same as in the guide (almost, but it seems that doesn't affect anything; more on that later), and in step 5 I manually put the path to the interpreter in the virtual environment I created using the Dockerfile.
That way I am able to run nvidia-smi and correctly see the GPU, but I don't see any of the packages I installed during the Dockerfile build.
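As a sanity check outside PyCharm, the same run options can be exercised with plain docker run; this is only a sketch, with <your-image> standing for whatever the Dockerfile builds:
docker run --rm --runtime=nvidia --gpus=all <your-image> nvidia-smi
If this prints the GPU table, the container side works and the issue is confined to the interpreter configuration.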
There is another, slightly different way to connect the interpreter, in which I do see the packages, but I can't run the nvidia-smi command and torch.cuda.is_available() returns False.
That other way is, instead of doing it as in the guide, to press the little down arrow to the left of the Add Interpreter button, click Show all, and then press the + button and choose Docker. This results in the difference in functionality mentioned above, and also in a different interpreter path being displayed. [The original post included screenshots of the two interpreter paths, the run results of each method on the same code, and the Dockerfile.]
Has anyone configured this correctly who can help?
Thank you in advance.
P.S.: if I run the Docker container from Services and enter its terminal, the nvidia-smi command works fine, importing torch works, and torch.cuda.is_available() returns True.
P.S.2:
What has worked for me for now is to change the Dockerfile to install torch directly with pip, without creating a conda environment; see the sketch below.
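A hypothetical minimal sketch of such a Dockerfile (the base image tag and package list are assumptions, not the original file):
# any CUDA-enabled base image should do; this tag is only an example
FROM nvidia/cuda:11.7.1-runtime-ubuntu20.04
RUN apt-get update && apt-get install -y python3 python3-pip
# torch installed directly with pip, no conda environment
RUN pip3 install torch torchvision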
Then I set the path to the python2.7 interpreter, and I can run the code but not debug it.
When running, the result is as expected (the packages list is still empty, as shown before, but it works; I guess my IDE somehow cannot access the packages list of the remote interpreter in that case, I don't know why).
The debugger, however, outputs an error. [Error screenshot omitted.]
Any suggestions for the debugger issue also will be welcome, although it is a different issue.
Please update to 2022.2.1 as it looks like a known regression that has been fixed.
Let me know if it still does not work well.

Trying to port application to docker nanoserver container. Running exe fails with exit code -1073741515 (Dependency missing)

I'm currently trying to port my image optimizer application to a NanoServer docker image. One of the tools my image optimizer uses is truepng.exe. (Can be downloaded here: http://x128.ho.ua/clicks/clicks.php?uri=TruePNG_0625.zip)
I simply created a nanoserver container and mounted a folder that contained truepng.exe:
docker run --rm -it -v C:\data:C:\data mcr.microsoft.com/windows/nanoserver:2004-amd64
When I run truepng.exe, I expect some output about missing command line arguments, which is what I get on my local machine:
C:\MyLocalWindowsMachine>truepng
TruePNG 0.6.2.5 : PNG Optimizer
by x128 (2010-2017)
x128@ua.fm
...
However when I call this from inside the nanoserver docker container I basically see no output:
C:\data>truepng
C:\data>echo %ERRORLEVEL%
-1073741515
As you can see above, the exit code is -1073741515 (0xC0000135, STATUS_DLL_NOT_FOUND). According to this, that typically means a dependency is missing.
I then downloaded https://github.com/lucasg/Dependencies to see the dependencies of truepng:
It seems it depends on five DLLs. Looking these up, I found that there's apparently something called 'reverse forwarders': https://cloudblogs.microsoft.com/windowsserver/2015/11/16/moving-to-nano-server-the-new-deployment-option-in-windows-server-2016/
According to the following post though they should already be included in nanoserver: https://social.technet.microsoft.com/Forums/en-US/5b36a6d3-84c9-4940-8b7a-9e2a38468291/reverse-forwarders-package-in-tp5?forum=NanoServer
After all this investigation I've also been playing around with manually copying the DLLs from my local machine (System32) into the container, without any success (it just kept breaking other things, like the copy command, which required me to recreate the container). I've also copied the files from SysWOW64, but that didn't help either.
I'm currently quite stranded on how to proceed, as I'm not even sure whether the tool is missing dependencies or something else is going on. Is there a way to investigate which DLLs are missing once a tool is started?
Kind regards,
Devedse
Edit 1: Idea from @CherryDT
I tried running gflags (https://social.msdn.microsoft.com/Forums/en-US/f004a7e5-9024-4555-9ada-e692fbc3160d/how-to-start-quotloader-snapsquot?forum=vcgeneral) which gave the following output:
C:\data>"C:\data\gflags.exe" /i TruePNG.exe +sls
Current Registry Settings for TruePNG.exe executable are: 00000000
After this I tried running Dbgview.exe; however, this never resulted in a log file being written:
C:\data>"C:\data\DebugView\Dbgview.exe" /v /l debugview-log.txt /g /n
C:\data>
I also started TruePNG.exe again, but again, no log file was written.
I tried querying the EventLogs using a dotnet core application, but this resulted in the following exception:
Unhandled exception. System.InvalidOperationException: Cannot open log Application on computer '.'. This function is not supported on this system.
at System.Diagnostics.EventLogInternal.OpenForRead(String currentMachineName)
at System.Diagnostics.EventLogInternal.GetEntryAtNoThrow(Int32 index)
at System.Diagnostics.EventLogEntryCollection.GetEntryAtNoThrow(Int32 index)
at System.Diagnostics.EventLogEntryCollection.EntriesEnumerator.MoveNext()
at EventLogReaderTest.ConsoleApp.Program.Main(String[] args) in C:\data\EventLogReaderTest.ConsoleApp\Program.cs:line 22
Windows Nano Server is tiny and only supports 64-bit applications, tools, and agents. The missing dependency in this case is the entire x86 emulation layer (WoW64), as TruePNG is a 32-bit application.
Windows Server Core contains WoW64 and other components missing from Nano Server. Use a Windows Server Core image instead.
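A quick way to confirm this from inside a container is to check for the SysWOW64 directory, which holds the 32-bit (WoW64) subsystem; a simple cmd sketch:
if exist C:\Windows\SysWOW64 (echo WoW64 present) else (echo WoW64 missing)
Nano Server should print "WoW64 missing", while Server Core prints "WoW64 present".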
Example command:
docker run --rm -it -v C:\Temp:C:\Temp mcr.microsoft.com/windows/servercore:2004 C:\Temp\TruePNG.exe
Yields the expected output:
TruePNG 0.6.2.5 : PNG Optimizer
by x128 (2010-2017)
x128@ua.fm
TruePNG {options} files
options:
/f# PNG delta filters 0=None, 1=Sub, 2=Up, 3=Average, 4=Paeth, 5=Mixed
/fe PNG extra filters, overrides /f switch
/i# PNG interlace method 0=None, 1=Adam7 (default input)
/g# PNG gamma 0=Remove, 1=Apply & Remove, 2=Keep (default)
[...]

grep command that works on Ubuntu, but not on Fedora

I clean hacked WordPress installations for clients on a regular basis, and I have a set of scripts that I have written to help me track down where the sites were hacked. One code snippet that I use regularly worked fine on Ubuntu, but since I switched to Fedora on Friday it has quit behaving as expected. The command is this:
grep -Iri --exclude "*.js" "eval\s*(" * | grep -rivf ~/safeevals.txt >../foundevals.txt;
What is supposed to happen (and did happen when I was using Ubuntu): grep through all non-binary files, excluding JavaScript includes, for all occurrences of the eval() function, then perform a negative match, line by line, against all known occurrences of the eval() function in a vanilla installation of WordPress (the patterns of which are in ~/safeevals.txt).
What is actually happening: the first part works fine; I ran it separately and it did find all instances of eval() in the installation. However, instead of grepping through those results, after the pipe it is re-grepping through all of the files, returning a negative match against ~/safeevals.txt (i.e. pretty much every line of every file in the installation).
Any idea why the second grep isn't acting on the piped data, or what I need to do to fix it? Thanks.
-Michael
Just tested on my Debian box: apparently, grep -r assumes a default argument of . (the current directory). I am really wondering whether that behaviour is valid. Anyway, dropping the -r option from the second grep command will fix it.
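In other words, the corrected pipeline is:
grep -Iri --exclude "*.js" "eval\s*(" * | grep -ivf ~/safeevals.txt >../foundevals.txt;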
Edit: rgrep defaulting to $PWD seems to be a recent change in grep, see this discussion on unix.stackexchange and the link there to the commit in the upstream grep code repository.

ToolTwist Controller hangs while generating images

While generating a large site using the ToolTwist Controller, the server hangs. Using ps -ef I can see that there is an ImageMagick convert command that never seems to finish. If I kill the convert process, the generation continues.
If I get the full convert command from the log file or from ps, I can run it from the command line with no problem. Each time I run the generate process in the Controller, it gets stuck in a different place.
The hangs are sporadic, occurring maybe once every 1,000 images.
I'm running OSX 10.7.3 on a Macbook Pro.
This is a known bug in ImageMagick - see http://www.imagemagick.org/discourse-server/viewtopic.php?f=3&t=19962
The solution is to define an environment variable:
export MAGICK_THREAD_LIMIT=1
You'll need to do this before starting the Controller's tomcat server.
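For example, in the shell used to launch Tomcat (startup.sh here stands for however you normally start it):
export MAGICK_THREAD_LIMIT=1
./bin/startup.sh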

Error "Too many open files" in pdflatex

When compiling a LaTeX document with 15 or so packages and about five includes, pdflatex throws a "too many open files" error. All includes end with \endinput. Any ideas what might cause the error?
The error seems to depend on how many packages are used (no surprise...); however, this is not the first time I have used this many packages, and I have never encountered such an error before.
@axel_c: This is not about Linux. As you may or may not know, LaTeX is also available on Windows (which just happens to be what I'm using right now).
Try inserting
\let\mypdfximage\pdfximage
\def\pdfximage{\immediate\mypdfximage}
before \documentclass. This makes pdfTeX read and embed each image immediately, instead of keeping the file handle open until the document is finalized, so the open-file limit is no longer exhausted.
See also these threads from the pdftex mailing list:
Error message: Too many open files.
Too many files open
Type
ulimit -n
to get the maximum number of open files. To change it to e.g. 2048, type
ulimit -S -n 2048
What is this command giving you?
$ ulimit -n
You might want to increase the limit by editing the /etc/security/limits.conf file.
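For example, by adding lines like these (the user name and the values are placeholders):
# format: <domain> <type> <item> <value>
youruser  soft  nofile  4096
youruser  hard  nofile  8192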
This could be caused by a low value in your 'max open file descriptors' kernel configuration. Assuming you're using Linux, you can run this command to find out current limit:
cat /proc/sys/fs/file-max
If the limit is low (say, 1024 or so, which is the default in some Linux distros), you could try raising it by editing /etc/sysctl.conf:
fs.file-max = 65536
Details may differ depending on your Linux distribution, but a quick google search will let you fix it easily.
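After editing the file, the new value can be applied without a reboot:
sudo sysctl -p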
