So after writing a large .tex file that uses many packages, I want to archive everything: not just the .tex and .jpg files, but also the .sty files.
This is because options in the .sty files sometimes change, and then I can no longer compile the document.
The "problem" is that, since I use Ubuntu, all the packages are already installed system-wide.
I don't want to have to copy them manually.
Is there a program that can do this automatically?
Thanks.
See https://texfaq.org/FAQ-filesused, quote:
All the files used by this document
When you’re sharing a document
with someone else (perhaps as part of a co-development cycle) it’s as
well to arrange that both correspondents have the same set of
auxiliary files, as well as the document in question. Your
correspondent obviously needs the same set of files (if you use the
url package, she has to have url too, for example). But
suppose you have a bug-free version of the shinynew package but her
copy is still the unstable original; until you both realise what is
happening, such a situation can be very confusing.
The simplest solution is the LaTeX \listfiles command. This places a
list of the files used and their version numbers in the log file. If
you extract that list and transmit it with your file, it can be used
as a check-list in case problems arise.
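For example, adding the single line

    \listfiles

at the top of the preamble makes the log end with a *File List* table of every class and package loaded, together with their version numbers.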
Note that \listfiles only registers things that are input by the
“standard” LaTeX mechanisms (\documentclass, \usepackage,
\include, \includegraphics and so on). The \input command, as
modified by LaTeX and used, with LaTeX syntax, as:
\input{mymacros}
records file details for mymacros.tex, but if you use TeX primitive
syntax for \input, as:
\input mymacros
mymacros.tex won’t be recorded, and so won’t be listed by \listfiles
— you’ve bypassed the mechanism that records its use.
The snapshot package helps the owner of a LaTeX document obtain
a list of the external dependencies of the document, in a form that
can be embedded at the top of the document. The intended use of the
package is the creation of archival copies of documents, but it has
application in document exchange situations too.
The bundledoc system uses the snapshot package to produce an archive
(e.g., tar.gz or zip) of the files needed by your document; it
comes with configuration files for use with TeX Live (Unix) and MiKTeX.
It’s plainly useful when you’re sending the first copy of a document.
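As a rough sketch of the workflow (assuming a TeX Live system; the file names here are illustrative): load snapshot at the very top of the document, before \documentclass,

    \RequirePackage{snapshot}

run LaTeX once so that snapshot writes a dependency list (main.dep for a main.tex), then run

    bundledoc --config=texlive.cfg main.dep

to collect the listed files into an archive.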
The mkjobtexmf tool finds which files are used in a “job”, either via the
-recorder option of TeX, or by using the (Unix) command strace to
keep an eye on what TeX is doing. The files thus found are copied (or
linked) to a directory which may then be saved for transmission or
archiving.
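A typical invocation (the exact option names are from memory, so treat them as an assumption and check the manual) looks like

    mkjobtexmf --jobname main --cmd-tex pdflatex

which compiles main.tex with -recorder enabled and copies the files it used into an output directory named after the job.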
LaTeX log files indicate all loaded files as follows:
Files specified using absolute paths are shown as ($PATH followed by whitespace, i.e. a space or a newline (I think TeX forbids whitespace in paths; certainly paths with whitespace are a pain to pass to \input);
local paths are shown the same way, except with a leading dot: (.$PATH followed by whitespace;
fonts are shown within <...>.
You can easily scrape these filenames out of the .log file.
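For instance, a rough Python sketch of such a scraper (the regular expressions encode the conventions above and may need tuning for your logs):

    import re

    # TeX logs are not always valid UTF-8, so read them forgivingly.
    with open("main.log", encoding="latin-1") as f:
        log = f.read()

    # Loaded files appear as "(/abs/path" or "(./rel/path" followed by
    # whitespace or a closing parenthesis; fonts appear inside <...>.
    files = sorted(set(re.findall(r"\((\.?/[^\s()]+)", log)))
    fonts = sorted(set(re.findall(r"<([^<>\s]+)>", log)))

    for name in files + fonts:
        print(name)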
How does clangd know where a function definition is when only one file has been indexed through the LSP (Language Server Protocol) message textDocument/didOpen?
This question is based off of the assumption that there is no compile_commands.json file for clangd to work with.
To the best of my knowledge clangd will partially index(?) a given file when clangd receives the LSP message textDocument/didOpen with no compile_commands.json file in the workspace(?).
Thus the index of the file being partially indexed will only reside in memory.
So how is clangd aware of definitions outside of the partially indexed file when it has no awareness of any outside files?
Or is it aware?
Or is it made aware of other files by some heuristic that looks at the relative path, or at the includes (#include "filename.hpp") that reside in directories such as root-project-dir/src and the likes thereof?
I've been reading about the compilation process; I understand some of the earlier concepts like parsing, but I stop short of understanding how the executable file is created at the end.
In the examples I've seen around, the "compiler" takes input in the form of a language defined by a BNF grammar and, upon parsing it, outputs assembly.
Is the executable file literally just that assembly in binary form? I feel like this can't be the case, given that there are applications for making executables from assembly.
If this isn't answerable (i.e. it's too complex for the Stack Overflow format), I'd be totally happy with links/books so I can educate myself.
The compiler (or more specifically, the linker) creates the executable.
The format of the file generally varies depending on the operating system.
There are currently two main formats, ELF and COFF:
http://en.wikipedia.org/wiki/Executable_and_Linkable_Format
http://en.wikipedia.org/wiki/COFF
If you understand the concept of a structure, this is the same, only within a file. Each file has a first structure called a header, and from there you can access the other structures as required.
In most cases, only the resulting binary code is saved in these files, although you often find debug information too. Some formats could save the source along with the code, but nowadays only the necessary references to the source are saved.
With dynamic linking, you also find symbol tables that include the actual symbol name. Otherwise, only relocation tables would be required.
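To make the "structures in a file" idea concrete, here is a small Python sketch (assuming a 64-bit little-endian ELF binary such as /bin/ls on a typical Linux system) that reads the fields at the very start of the header:

    import struct

    with open("/bin/ls", "rb") as f:
        # The ELF header starts with a 16-byte identification block:
        # bytes 0-3 are the magic, ident[4] is the class (1 = 32-bit,
        # 2 = 64-bit) and ident[5] the byte order (1 = little-endian).
        ident = f.read(16)
        assert ident[:4] == b"\x7fELF", "not an ELF file"
        # For a 64-bit little-endian file the next fields are
        # e_type (2 bytes), e_machine (2), e_version (4), e_entry (8).
        e_type, e_machine, e_version, e_entry = struct.unpack("<HHIQ", f.read(16))

    print("type:", e_type, "machine:", e_machine, "entry:", hex(e_entry))

From the entry point and the section/program header tables that follow, a loader or debugger can reach every other structure in the file.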
On the Amiga we also had the possibility of defining code in a "segment". Only one segment could be loaded at a time; once you were done with a segment, you could unload it and load another. Yet in the end the concepts were similar: structures in a file.
Microsoft offers a PDF about the COFF format. I could not find it on their website just now, but it looks like others have it. ELF has many links in the Wikipedia page so you should be able to find a PDF to get started.
Not all, but some compilers (gcc, etc.) go from the high-level language to assembly language and then spawn the assembler. The assembler reads the assembly language, generates machine code, and produces an object file which, as you guessed, contains more than just the machine code bits.
If you think about it for a second, you may realize that a variable or function defined in another source file lives in another object file, so until link time one object doesn't know how to get at that external function. This means two things: 1) the machine code is not finished, since patching up external addresses is not done until link time; 2) there needs to be some information in the object file that defines what public items this object file provides and what external items are missing (names of functions, for example, which are obviously not embedded in the machine code). So the objects contain machine code in various states of completion, as well as other data needed by the linker.
The linker then...links...the objects together into one program with everything resolved: it basically completes all the machine code and puts the fragments of machine code (from the separate objects) into one place. Then it has to save all that on disk in some format, and typically that format is not just raw machine code: the file has extra stuff in it, starting with a header, and then a way to define each binary blob and where it needs to live in memory before executing. When you run a program from the command line of your operating system, or by double-clicking it in a file manager GUI, the operating system knows how to read that file format, extract the blobs of binary, place those blobs in RAM as defined by the file format, and then start executing at the place the file format defines.
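As an illustration of those stages (assuming gcc and the binutils tool nm are installed; the file names are made up), the following Python script builds two objects, shows the unresolved external in one of them, and then links:

    import subprocess

    # Hypothetical two-file program: main.c calls add(), defined in util.c.
    open("util.c", "w").write("int add(int a, int b) { return a + b; }\n")
    open("main.c", "w").write("int add(int, int);\nint main(void) { return add(1, 2); }\n")

    subprocess.run(["gcc", "-c", "main.c"], check=True)  # compile + assemble -> main.o
    subprocess.run(["gcc", "-c", "util.c"], check=True)  # -> util.o

    # nm lists 'add' with type 'U' (undefined) in main.o: that machine
    # code is not finished until the linker patches in the address.
    subprocess.run(["nm", "main.o"], check=True)

    # The link step resolves 'add' and writes a complete executable.
    subprocess.run(["gcc", "main.o", "util.o", "-o", "prog"], check=True)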
a.out, ELF, COFF, Intel HEX and Motorola S-record are all popular formats, as well as raw binary, which some toolchains can produce. The GNU tools will default to one of them (COFF, ELF, EXE or a.out), and objcopy is then used to convert from one to another, or at least from the default one to the others; its help shows what your possible choices are. Then simply Google those formats, or look them up on Wikipedia, to find their definitions. Intel HEX and Motorola S-record are good ones to start with on Wikipedia, then perhaps ELF.
If you want to produce a native executable file, you have two options: you can assemble the binary form yourself, or you can translate your program to another language and use its compiler to produce the executable.
I have some shared projects that are under version control (concretely svn and bazaar, but I'm looking for a general solution), but the datasets the projects use are not (they are too big and are shared by different projects).
In the source code I need to "store" the path to the dataset somewhere. The path may differ for each user, so hardcoding it is definitely a bad idea (as always, I guess).
My current workaround is to hardcode the name of a text file (say "dataPath.txt") in which the actual path is stored; this file is not under version control (each project contributor creates his own file with his customised info).
The solution is, however, quite fragile:
1) if some contributor adds the file to version control, it is annoying;
2) when I export the "executable", I need to ship the file along with it, since it is expected to be in the same directory (relative path).
In my concrete case I'm using Java, so I find this question relevant (even if I've never used properties), but I would like to know whether there are more general techniques that can be reused across different programming languages.
Write your program so that it accepts the path to the dataset as a command-line argument. Make sure that either a) there are sensible defaults if the dataset path is not specified, or b) the program exits gracefully if no dataset is provided. There is no need to hard-code dataset paths in the source. Then you'd invoke the program e.g. like this (of course you can use any other command-line option character you like :-) ):
prog -d /path/to/dataset
In general, providing such settings in a config file is a good idea. With Java, properties help (as pointed out in the SO question you linked). In other languages I'd probably use a JSON-formatted settings file -- parsing libraries are available.
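A minimal sketch of that combination in Python (the option and file names here are only examples):

    import argparse, json, os, sys

    parser = argparse.ArgumentParser()
    parser.add_argument("-d", "--dataset", help="path to the dataset")
    args = parser.parse_args()

    # Priority: command line first, then an unversioned settings file,
    # then exit gracefully.
    path = args.dataset
    if path is None and os.path.exists("settings.json"):
        with open("settings.json") as f:
            path = json.load(f).get("dataset")
    if path is None:
        sys.exit("No dataset given; use -d PATH or create settings.json.")

    print("Using dataset at", path)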
I want to use isabelle build -D xxx to produce a LaTeX .tex file out of an Isabelle .thy file.
But Isabelle checks all the theory dependencies, and all the related .thy files must be involved.
Is it possible to casually use a .thy file that has syntax errors to produce a .tex file? In fact I only need a part of it to write a paper.
Does that mean you want to write a paper based on a faulty or incomplete formal theory?
The Isabelle document preparation system was intended to publish formal theories that actually work out, with nice typography so that this does not look like "code". So all the defaults are for producing LaTeX from well-formed and checked theories.
Nonetheless, there are numerous ways to get unofficial LaTeX output from the system. A very basic mechanism is the latex print mode. Various diagnostic commands of Isabelle allow such print mode specifications in round parentheses, e.g. like this:
thm (latex) exI exE
or
print_statement (latex) exI exE
You can do this interactively and copy-paste the output into your raw tex file. You need to ensure that it gets proper surroundings with environments from the isabelle.sty file.
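For instance (assuming the standard setup where your document loads isabelle.sty), the pasted output typically goes inside

    \begin{isabelle}
    ...
    \end{isabelle}

so that the Isabelle symbol and layout macros are in effect.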
To the best of my knowledge, no. The LaTeX generation requires the file to be processed successfully, e.g. due to notation (latex) commands, and due to antiquotations.
If you only need parts of your file, simply copy’n’paste it from the generated .tex file or, if you want something more automated, have a look at the Generate TeX Snippets wiki page.
Scenario: I have a main Latex file (main.tex) in which I include a subfile (appendix.tex) using the subfiles package.
Role of appendix.tex: It further includes all the appendices as subfiles kept in an appendix subfolder, so that I just need to include the appendix.tex in the main.tex file.
Current Situation: I have to manually list the appendices in appendix.tex which can be cumbersome to manage.
Target: I want to create a foreach loop kind of thing in the appendix.tex file such that it looks in the appendix subfolder and includes each of the .tex files present in it.
Question: How can this be done?
This can be implemented relatively easily with python.sty from here. It requires you to make sure that the style file and Python are available on all machines where you plan to compile this document, but it should be more portable than using shell scripts or preprocessors like cpp.
Probably easiest done externally via a shell script. Provide some more info on your OS (Win/Apple/Linux) and someone will no doubt provide the necessary script.
I did it the other way, with a Python preprocessor for LaTeX. My preprocessor generates tables and allows raw Python to be put into the LaTeX file. Since python.sty requires that LaTeX be compiled with shell escape enabled, this may be a better way.
I can post the preprocessor if there is interest.
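For what it's worth, the core of such a generator can be tiny. Here is a minimal Python sketch (assuming the appendices live in an appendix/ subfolder and are included with \subfile, as in the question):

    import os

    appendix_dir = "appendix"
    # One \subfile line per .tex file in the subfolder, in sorted order.
    names = sorted(f for f in os.listdir(appendix_dir) if f.endswith(".tex"))
    with open("appendix.tex", "w") as out:
        for name in names:
            stem = os.path.splitext(name)[0]
            out.write("\\subfile{%s/%s}\n" % (appendix_dir, stem))

Run it before compiling, and appendix.tex always matches the folder contents.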