Problem
I write down my university lectures in LaTeX (which is really convenient for this purpose), and I want the .tex files to be compiled to PDF automatically.
I have a couple of .tex files in my repository, laid out like this:
.
├── .gitlab-ci.yml
└── lectures
    ├── math
    │   ├── differentiation
    │   │   ├── lecture_math_diff.tex
    │   │   ├── chapter_1.tex
    │   │   └── chapter_2.tex
    │   └── integration
    │       ├── lecture_math_int.tex
    │       ├── chapter_1.tex
    │       └── chapter_2.tex
    └── physics
        └── mechanics
            ├── lecture_physics_mech.tex
            ├── chapter_1.tex
            └── chapter_2.tex
So a main file, for example lecture_math_diff.tex, uses
\include{chapter_1}
\include{chapter_2}
to pull in the chapters and form the whole lecture.
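For reference, a complete minimal main file of this shape might look like the following (the document class and the absence of a preamble are assumptions for illustration):

```latex
\documentclass{article}
\begin{document}
% each \include pulls in one chapter file from the same directory
\include{chapter_1}
\include{chapter_2}
\end{document}
```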
As a result, I want my build artifacts to be PDFs laid out like this:
├── math
│   ├── lecture_math_diff.pdf
│   └── lecture_math_int.pdf
└── physics
    └── lecture_physics_mech.pdf
What can be done here? Do I have to write a shell script to collect all the .tex files, or can GitLab runners handle this?
One approach would be to use a short script (e.g. Python or Bash) that runs latexmk to generate the PDF files.
latexmk is a Perl script that compiles LaTeX files automatically. A short introduction can be found here.
With Python 3, the script could look like the following:
# filename: make_lectures.py
import os

# configuration:
keyword_for_main_tex = "lecture"

if __name__ == "__main__":
    tex_root_directory = os.getcwd()
    for root, _, files in os.walk("."):
        for file_name in files:
            # check if the file name ends with ".tex" and starts with the keyword
            if file_name.endswith(".tex") and file_name.startswith(keyword_for_main_tex):
                os.chdir(root)  # go into the directory
                os.system("latexmk -lualatex " + file_name)  # run latexmk on the main file
                os.chdir(tex_root_directory)  # go back to the root (relative paths)
This script assumes that only the files to be compiled to PDF start with the keyword lecture (as in the question). The if statement that selects the files to build could, however, be extended to a more elaborate comparison, such as matching regular expressions.
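For example, the simple prefix/suffix check could be swapped for a regular expression; the pattern below is only an illustration of the idea and would need to be adapted to your actual naming scheme:

```python
import re

# Hypothetical stricter filter: accepts lecture_math_diff.tex, rejects
# chapter files and backup files. The pattern itself is an assumption.
main_file_pattern = re.compile(r"^lecture_[a-z]+_[a-z]+\.tex$")

for name in ["lecture_math_diff.tex", "chapter_1.tex", "lecture_math_diff.tex.bak"]:
    print(name, bool(main_file_pattern.match(name)))
# lecture_math_diff.tex True
# chapter_1.tex False
# lecture_math_diff.tex.bak False
```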
latexmk is called with the command line flag -lualatex here to demonstrate how to configure the build process globally. A local configuration (per project) is possible with .latexmkrc files, which are read and processed by latexmk.
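A minimal per-project .latexmkrc might look like this (a sketch; in recent latexmk versions $pdf_mode = 4 selects lualatex, and the output directory name is just an example):

```
# .latexmkrc (in the project directory)
$pdf_mode = 4;        # build the PDF with lualatex
$out_dir  = "build";  # keep auxiliary files out of the source tree
```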
If we call latexmk as a shell command, we have to make sure it is installed on our GitLab runner (along with TeX Live). If Docker container runners are registered (see here how it is done), you just need to specify the name of an image from Docker Hub, which leads to the example gitlab-ci.yml file below:
compile_latex_to_pdf:
  image: philipptempel/docker-ubuntu-tug-texlive:latest
  script: python3 make_lectures.py
  artifacts:
    paths:
      - ./*.pdf
    expire_in: 1 week
Feel free to change the image to any other image you like (e.g. blang/latex:latest). Note that the artifacts extraction assumes that no other PDF files are in the repository.
A final remark: I did not try it, but it should also be possible to install texlive and latexmk directly on the gitlab runner (if you have access to it).
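Alternatively, the installation can happen inside the CI job itself on top of a plain base image. A sketch (the Debian package names are assumptions; depending on what your documents use, more texlive-* packages may be needed):

```yaml
compile_latex_to_pdf:
  image: debian:stable-slim
  before_script:
    - apt-get update
    - apt-get install -y --no-install-recommends latexmk texlive-latex-recommended python3
  script: python3 make_lectures.py
```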
You can have a look at https://github.com/reallyinsane/mathan-latex-maven-plugin. With the maven or gradle plugin you can also use "dependencies" for your projects.
Related
Here is my issue: I want to include/exclude specific files when building a service called django. But when playing with a Dockerfile.dockerignore file, I do not get the result I want, and I can't figure out which syntax to adopt in order to do so.
Repo structure is the following:
.
└── root_dir
    ├── django
    │   ├── Dockerfile
    │   ├── Dockerfile.dockerignore
    │   └── stuff
    ├── documentation
    │   └── stuff
    ├── useless dir
    └── another useless dir
My docker-compose.yml used to make the build is the following:
version: '3.5'
services:
  [...]
  django:
    build:
      context: $ROOT_DIR
      dockerfile: $ROOT_DIR/django/Dockerfile
  [...]
As you can see, context is root_dir and not django (because I need to copy a few things from documentation dir at build time).
What I need when building my image:
the django dir
the documentation dir
What I want to ignore when building my image:
everything but the django and documentation dirs
Dockerfile and Dockerfile.dockerignore
and a few other things (virtual envs, *.pyc, etc., but we'll stick with Dockerfile and Dockerfile.dockerignore for this question!)
What my Dockerfile.dockerignore looks like in order to do so:
# Ignore everything
*
# Allows django and documentation rep (context is root of project)
!/django
!/documentation
# Ignoring some files
**/Dockerfile
**/Dockerfile.dockerignore
But my issue is that the files I want to ignore are still present in my image, and this is not good for me. I think I have tried every syntax possible (**/, */*/, ...) but I can't find one that suits my need.
Many thanks if you can help me with that!
EDIT: for those wondering about that Dockerfile.dockerignore file, you can have a look here: https://docs.docker.com/engine/reference/commandline/build/#use-a-dockerignore-file
The Docker ignore file should be named .dockerignore.
You can check the documentation here https://docs.docker.com/engine/reference/builder/#dockerignore-file
Running this command before docker build fixed it for me:
export DOCKER_BUILDKIT=1
FYI: I did not need to install 'buildkit' first.
I have a complex config search path consisting of multiple locations where each location looks similar to this:
├── conf
│   └── foo
│       ├── foo.yaml
│       └── bar.yaml
└── files
    ├── foo.txt
    └── bar.txt
with foo.yaml:
# @package _group_
path: "../../files/foo.txt"
and bar.yaml:
# @package _group_
path: "../../files/bar.txt"
Now the problem is: how do I find the correct location of the files specified in the configurations? I am aware of the to_absolute_path() method provided by Hydra, but it interprets the path relative to the directory in which the application was started. However, I would like to interpret that path relative to the position of the configuration file. I cannot do this manually in my code, because I don't know how Hydra resolved the configuration file or where exactly it came from.
Is there some mechanism to determine the location of a config file from hydra? I really want to refrain from putting hard coded absolute paths in my configurations.
You can't get the path of a config file. In fact, it may not be a file at all (such as the case for Structured Configs), or it can be inside a python wheel (even in a zipped wheel).
You can do something like
path = os.path.join(os.path.dirname(__file__), "relative_path_from_config")
You can also use APIs designed for loading resource files from Python modules.
Here is a good answer on the topic.
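For instance, if the config directory ships alongside your Python module, a relative path can be resolved against the module's own location rather than the working directory. A minimal sketch (the relative layout "files/..." next to the module is an assumption):

```python
import os

def resolve_next_to_module(rel_path):
    """Resolve rel_path relative to this module's directory,
    not the current working directory."""
    base = os.path.dirname(os.path.abspath(__file__))
    return os.path.normpath(os.path.join(base, rel_path))

# e.g. a data file assumed to be shipped next to this module:
print(resolve_next_to_module("files/foo.txt"))
```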
I've an external dependency declared in WORKSPACE as a new_git_repository and provided a BUILD file for it.
proj/
├── BUILD
├── external
│ ├── BUILD.myDep
│ └── code.bzl
└── WORKSPACE
In the BUILD.myDep file, I want to load the nearby code.bzl, but when I load it (load("//:external/code.bzl", "some_func")) Bazel tries to load @myDep//:external/code.bzl instead!
Of course that is not a target in the @myDep repository, but in my local workspace.
It seems I rubber-ducked Stack Overflow, since the solution appeared while writing the question!
The solution is to explicitly mention the local workspace when loading the .bzl file:
Suppose we have declared the name in the WORKSPACE as below:
workspace(name = "local_proj")
Now, instead of load("//:external/code.bzl", "some_func"), just load it explicitly as a local workspace file:
load("@local_proj//:external/code.bzl", "some_func")
NOTE: When using this trick, just be careful about potential dependency loops (i.e. loading a generated file that is itself produced by a rule depending on the same external repo!)
So I've documented my whole API with Swagger Editor, and now I have my .yaml file. I'm really confused about how to take that and generate all the Node.js scaffolding, so that the functions are already defined and I just fill them in with the appropriate code.
Swagger Codegen generates server stubs and client SDKs for a variety of languages and frameworks, including Node.js.
To generate a Node.js server stub, run codegen with the -l nodejs-server argument.
Windows example:
java -jar swagger-codegen-cli-2.2.2.jar generate -i petstore.yaml -l nodejs-server -o .\PetstoreServer
You get:
.
├── api
│   └── swagger.yaml
├── controllers
│   ├── Pet.js
│   ├── PetService.js
│   ├── Store.js
│   ├── StoreService.js
│   ├── User.js
│   └── UserService.js
├── index.js
├── package.json
├── README.md
└── .swagger-codegen-ignore
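Once you start filling in the generated stubs, the .swagger-codegen-ignore file can keep a later regeneration from overwriting your edits. A sketch (it uses .gitignore-style patterns; the listed paths are just examples of files you might have edited):

```
# .swagger-codegen-ignore — files codegen should not overwrite on regeneration
controllers/PetService.js
controllers/StoreService.js
```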
I have this setup:
├── bin
│   ├── all.dart
│   └── details
│       ├── script1.dart
│       ├── script2.dart
│       └── ...
all.dart simply imports script1.dart and script2.dart and calls their main functions. The goal is to have a bunch of scripts under details that can be run individually, plus a separate all.dart script that runs them all at once. This makes debugging individual scripts simpler while still allowing everything to run in one go.
all.dart
import 'details/script1.dart' as script1;
import 'details/script2.dart' as script2;
main() {
  script1.main();
  script2.main();
}
script1.dart
main() => print('script1 run');
script2.dart
main() => print('script2 run');
So, this is working and I see the print statements expected when running all.dart but I have two issues.
First, I have to softlink packages under details. Apparently pub does not propagate the packages softlinks down to subfolders. Is this expected, or is there a workaround?
Second, there are errors flagged in all.dart at the point of the second import statement. The analyzer error is:
The imported libraries 'script1.dart' and 'script2.dart' should not have the same name ''
So my guess is that, since I'm importing other scripts as if they were libraries, and since they do not have a library script[12]; statement at the top, they both have the same name - the empty name?
Note: Originally I had all of these under lib, and I could run them as scripts by specifying a suitable --package-root on the command line, even though they were libraries with a main. But to debug I need to run in Dart Editor, which is why I'm moving them to bin. Perhaps the editor should allow libraries under lib with a main to be run as scripts, since they run outside the editor just fine? The distinction between script and library seems a bit unnecessary (other scripting languages allow files to be both).
How do I clean this up?
I'm not sure what the actual question is.
If a library has no library statement, then the empty string is used as its name.
Just add a library statement with a unique name to fix this.
Adding symlinks to subdirectories solves the problem with the imports for scripts in subdirectories.
I do this regularly.
It was mentioned several times on dartbug.com that symlinks should go away entirely, but I have no idea how long this will take.
I have never tried to put script files with a main in lib, but it is against the package layout conventions, and I guess that is why DartEditor doesn't support it.