Can a LyX/LaTeX file be too large to create a PDF?

I've been working on my bachelor's thesis in LyX for about a month without encountering any problems, and today, all of a sudden, creating a PDF makes LyX load indefinitely; at some point it even asks me if I want to stop the PDF creation since it is taking so long. Am I doing something wrong? I have about 100 pages, and the PDFs I created lately have been around 100 MB, since they hold a lot of very high-resolution images.

In case anyone is struggling with the "convert" functionality in LyX, here is some additional info:
Initially I struggled to get EPS files to load and display on screen, as well as to export them to a PDF file. I saw that the latest LyX install already had all the "convert blah-blah $$ii $$o" commands predefined, and it was still not working.
Here is what worked for me:
sudo mv /etc/ImageMagick-6/policy.xml /etc/ImageMagick-6/policy.xmlout
There are two parts to this:
a) ImageMagick needs to be installed on the machine, as it provides most of the converters. The following terminal command checks whether ImageMagick is installed on your system:
identify -version
b) The ImageMagick tools, "convert" being one of them, must be "allowed" to run. You need to relax some default security policies for that; that is what the renaming of the policy file above does. Detailed information is given in an answer to this question on the Ask Ubuntu forum.
Note: this security policy relaxation is not recommended for web-server machines. Only desktop users should take the risk.
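If you prefer not to disable the policy file entirely, a narrower alternative is to edit it and only grant rights to the coders LyX needs. This is just a sketch; the exact pattern names (EPS, PS, PDF) and the policy file path can differ between ImageMagick versions:
sudo sed -i 's/rights="none" pattern="EPS"/rights="read|write" pattern="EPS"/' /etc/ImageMagick-6/policy.xml
sudo sed -i 's/rights="none" pattern="PDF"/rights="read|write" pattern="PDF"/' /etc/ImageMagick-6/policy.xml
Each command rewrites one <policy domain="coder" .../> line so that reading and writing that format is allowed again.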

Related

ghostscript pdf/a conversion problem on ubuntu 18.10 and docker

I am using Ghostscript to convert pdf1.3 to pdf/a-1b using this command:
gs -dPDFA -dBATCH -dNOPAUSE -dNOOUTERSAVE -sColorConversionStrategy=sRGB -sDEVICE=pdfwrite -sOutputFile=output.pdf PDFA_def.ps input.pdf
The PDFA_def.ps is customized to use the sRGB ICC profile. Apart from that change, it is the standard def file that comes with GS 9.26.
Now comes the tricky part:
1- running this conversion locally on Ubuntu 18.10 with GS 9.26 works fine and I get a valid PDF/A
2- running the same command in a Docker container (Ubuntu 18.10, GS 9.26) creates a PDF/A as well, which is also considered to be valid
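(For anyone reproducing this, one way to check PDF/A-1b conformance is the veraPDF command-line validator, if it is installed; the file name here is just a placeholder:
verapdf --flavour 1b output.pdf
This reports whether the file passes the PDF/A-1b rules.)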
However, in the first scenario I can process the file using mustang (https://github.com/ZUGFeRD/mustangproject) to create a valid electronic invoice. In the second scenario (Docker container) this fails, since the file is not considered a valid PDF by mustang.
Checking both PDF files, I would have expected them to be identical, since I am running the same conversion on them. However, they are not. The PDF created in the Docker container is 10 bytes smaller and shows some different meta information in the file itself.
I suspect that there must be some "hidden dependencies" that make GS act differently on my host system compared to the Docker container, but it feels entirely wrong and I am running out of means to debug further.
Does anyone know whether GS has some more dependencies that might cause the same command to produce different results?
The answer is 'maybe'. It depends on how Ghostscript was built for starters.
I assume you are using a package, and not building from source yourself, in which case there are a number of dependencies, including FreeType, LibJPEG, JBIG2dec, LibTIFF, JPEG-XR, LCMS2, LibPNG, OpenJPEG, Expat, zlib, and potentially IJS, CUPS and X-Windows, depending on what devices were built in.
Any or all of these could be system shared libraries instead of being built using the specific version shipped by Artifex. They could also be different versions on the two systems.
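One quick way to see which of those are pulled in as system shared libraries, and to compare the two environments, is to list the dynamic dependencies of the binary on both the host and in the container:
gs --version
ldd "$(which gs)"
If the library lists or versions differ between the two systems, that would at least confirm the builds are not identical.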
That said, I think it's 'unlikely' that this is your problem. However, without seeing the PDF files I can't tell you why there is a difference. Differences in the metadata are to be expected, since that includes a date/time stamp.
I'd really need to see examples of the original and the two output PDF files to be able to comment further.
[Edit]
Looking at the files they have been produced compressed (unsurprisingly) which can obviously lead to differences in size if there are small differences in the input streams. So the first task was to decompress the files.
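One way to decompress the streams so the two files can be compared line by line is with qpdf, if it is available (Ghostscript or mutool could be used instead); the file names are placeholders:
qpdf --qdf --object-streams=disable host.pdf host_qdf.pdf
qpdf --qdf --object-streams=disable docker.pdf docker_qdf.pdf
diff host_qdf.pdf docker_qdf.pdf
The --qdf mode rewrites the PDFs with uncompressed, normalized streams, which makes a textual diff meaningful.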
That done, I see there are, essentially, no differences between them. One of the operating systems is using a time zone 7 hours behind UTC, the other is in UTC, so where one of the systems is time-stamping with (e.g.)
2019-04-26T19:49:01Z
The other is using
2019-04-26T12:51:35-07:00
So instead of Z (for UTC) you get -07:00 which is where the extra 10 bytes are coming from. Other than that, the Unique IDs are (as you would imagine) different, the Length values for the streams are different for the streams containing dates, and the startxref is different because the streams are different lengths.
Both files claim to be compatible with PDF/A-1b. In short, I can see no real differences between them. Since you are using a tool to further process the files, I'd suggest you take the PDF file from your working system and try processing it on the non-working system, and vice versa; it seems to me that the problem may be the later processing rather than the PDF file itself. Perhaps you have different versions of that tool on the two systems.
For what it may be worth, Ghostscript can be induced into creating a ZUGFeRD file directly, see this bug report and this commit to the repository.

Saving Python code in Anaconda

I am having problems saving Python code in Anaconda. I write code, go to File > Save As, and the file is saved, but when I open it, it is empty: no code. I read that IDLE does not save code and erases it when I close Anaconda.
I have searched in books and YouTube tutorials and found nothing; I could not find this topic. I can find advanced topics, but not this one.
Thank you for your help!
Best,
Tiberiu
This somewhat depends on what OS you are on. I can speak from my experience. I would highly recommend using PyCharm as an IDE.
But more fundamentally than that, let's talk about saving files. On Mac OS X or Ubuntu 14.04 (or the like), let's say you want to create a Python file. One way is to do the following in a terminal:
nano hello.py
This opens up a text editor whose instructions for use are on the bottom of the screen. On Windows you could do:
notepad hello.py
In both cases you then write your code. Let's say the content was:
# content of hello.py
print("Hello World!")
Then you need to save the file and execute it with Python.
Which brings us to the Python issue.
Once you have installed Anaconda, and assuming that there are no other Python installations on your computer, the Anaconda Python should be the active Python on the system.
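A quick way to confirm that, and to run the file you just saved, is from the same terminal (the path shown is only what a default Anaconda install typically looks like):
which python        # should point into the Anaconda install, e.g. ~/anaconda3/bin/python
python --version
python hello.py
On Windows, use "where python" in the Anaconda Prompt instead of "which python".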
Suffice to say that there are other ways of saving files rather than using IDLE. Personally I have found PyCharm to be a much better IDE for writing Python code.
To address the IDLE issue more directly: once you type idle in the terminal/command prompt to launch an IDLE session, the IDLE Shell will likely open up. Perhaps you are trying to save this, in which case you will only save the Shell session.
So go to File - New, write your code in the new window, and then Save. This should work without any problem (it does on my system). Good luck! Hope this helps.

chromebook crouton install raring with cli-extra, so no pdf view in emacs?

Total newbie questions here. I installed raring with the cli-extra option, then installed emacs24, and then installed texlive. Now, with the C-c C-c command, I can compile .tex documents in Emacs, but I don't get a view of the result.
My question is: is this because I never installed an X interface, or because I am accessing Emacs through Secure Shell in Chrome OS?
Thanks so much, please correct any misunderstandings I have. Anyone have suggestions where I can look to find out?
Without a desktop environment there isn't anything that will be able to display images (the PDF).
An example of this is when you have images like PNGs in your .emacs.desktop and open it in no-window mode (-nw): Emacs gives an error, as Linux terminals don't support images.
If you want this functionality, I'd recommend either loading a desktop environment like Xfce, or opening the PDF in Chrome OS; hopefully it will update automatically when the PDF is re-rendered, and otherwise you should just be able to refresh to update it.
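If you go the Xfce route, crouton can add it to an existing chroot. A rough sketch, assuming crouton was downloaded to ~/Downloads and the chroot is named raring (adjust both to your setup):
sudo sh ~/Downloads/crouton -u -n raring -t xfce
sudo startxfce4
The first command updates the existing chroot with the xfce target; the second starts the Xfce session, from which a PDF viewer (or Emacs itself under X) can display the compiled PDF.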

not enough space for environment appears when executing ".exe" file

I am trying to use an application called CLUT.exe which is an old application for MS-DOS that can be used to reindex NTX files for DBF databases.
(This is not the main topic, but I am writing it in case someone wants to test the app and doesn't entirely trust the content.)
The problem starts when trying to run the command-line version through the console (cmd.exe), and this error appears:
C:\>CLUT.exe [arg1] [arg2] [arg3]
run-time error R6009
- not enough space for environment
So, according to what I've searched, this could be a possible solution:
http://support.microsoft.com/default.aspx?scid=kb;en-us;230205
but it doesn't work, and every alternative that I found over the internet to solve this is the same.
Another alternative could be to right-click the .exe file, go to Properties, then the Memory tab, and increase the Initial environment memory from Auto to the max value, but that doesn't work either.
Well, I am stuck, and no "possible" solution is working for me. If someone is interested, knows more about this issue and wants to test, you can download the application from here (click the green "Free Download" button):
http://www.filebasket.com/free/Development-Clipper-programming-language/clut-exe/13996.html
or directly from my DropBox:
https://dl.dropbox.com/u/15208254/stackoverflow/clut_214.rar
Just so you know, I am using Windows 7, and the CLUT.exe application is a Clipper-based app (an old programming language) that should run under the Windows console (cmd.exe).
Wikipedia does mention other DOS emulators but, oddly, doesn't mention BOCHS.
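If you end up trying DOSBox, probably the most common of those emulators, a session might look roughly like this (the mount path and the arguments are placeholders, not tested against CLUT itself):
dosbox -c "mount c c:\tools\clut" -c "c:" -c "clut.exe [arg1] [arg2] [arg3]"
Each -c option is executed inside the emulator in order: mount the folder containing CLUT.exe as drive C:, switch to it, and run the program. Running inside a real DOS environment may also avoid the environment-space issue the Windows console is hitting.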
Reindexing NTX files is not a difficult thing to do, and can be done with tools other than CLUT. For example, many of the utilities listed on this part of Download32 could be used. Otherwise, you could write your own using Harbour Project or xHarbour. Or contact me off list and I'll cook up something in Clipper 5.3.
LATER
If I read the README correctly for CLUT, it's a replacement for the DBU utility that comes with Clipper 5.x. I can supply you with a build of that if you're unsuccessful with other approaches.

Optimizing command line GIMP

I am running a script-fu macro using GIMP from the command line. However, it is quite slow to start up and run: about 20-25 seconds. I think a lot of this time is spent on startup, loading all the plugins and such. What are some ways to optimize GIMP on the command line? Is there any way to keep it always running?
Some promising options from the GIMP docs (some of which you may already be using):
--no-interface: Run without a user interface.
--no-data: Do not load patterns, gradients, palettes, or brushes. Often useful in non-interactive situations where start-up time is to be minimized.
--no-fonts: Do not load any fonts. This is useful to load GIMP faster for scripts that do not use fonts, or to find problems related to malformed fonts that hang GIMP.
--no-splash: Do not show the splash screen while starting.
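Combined, a batch invocation could look something like this; the script-fu call here is only a placeholder for whatever your macro actually is:
gimp -i --no-data --no-fonts --no-splash -b '(my-script-fu-macro "input.png")' -b '(gimp-quit 0)'
-i is the short form of --no-interface, each -b passes one batch command, and the final (gimp-quit 0) makes GIMP exit instead of waiting for more input.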
The GIMP FAQ:
The GIMP takes too long to load - how can I speed it up?
The main things are to make sure you are running at least version 1.0, and make sure you compiled with optimization on, debugging turned off, and the shared memory and X shared memory options turned on.
Or, buy a faster system with more memory. 8^)
This question on SuperUser addresses slow GIMP startup time in general and recommends:
Rebuild the font cache file by deleting C:\Documents and Settings\<username>\.fonts-cache1 and then opening GIMP.
Check for slow-loading plugins by starting up with --verbose and seeing where it hangs. Then remove problematic plugins by renaming them in C:\Program Files\GIMP-2.0\lib\gimp\<version>\plug-ins. Alternately, remove all plugins by renaming the whole plugins folder.
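For the second check, the start-up trace is simply:
gimp --verbose
and the last plugin name printed before a long pause is the likely culprit.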
Not so much a solution as a different possibility for the future, but have you considered not using GIMP?
GIMP is first and foremost a GUI-based app. If you're doing a lot of repetitive image manipulation from the command line, you might be better off with a tool like ImageMagick that's designed expressly for such use. I don't know how complex your script-fu scripts are, or how easily they could be translated to ImageMagick's (admittedly complex) syntax, but you definitely wouldn't have problems with long startup time.
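As a purely illustrative example (not a translation of your script-fu macro), a resize-and-sharpen step in ImageMagick is a single short-lived command with no plugin loading at startup:
convert input.png -resize 50% -unsharp 0x1 output.png
Whether your actual processing maps onto ImageMagick operators this directly depends on what the macro does.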
You could use the "Script-Fu Server":
Image window > Main menu > Filters > Script-Fu > Start Server.
You will be presented with a popup asking for the port to run it on. There is also help provided in the same popup, which also describes the protocol used by the server.
