Here is my issue: I want to include/exclude specific files when building a service called django. But when playing with the Dockerfile.dockerignore file, I do not get the result I want, and I can't figure out which syntax to adopt in order to do so.
Repo structure is the following:
.
└── root_dir
    ├── django
    │   ├── Dockerfile
    │   ├── Dockerfile.dockerignore
    │   └── stuff
    ├── documentation
    │   └── stuff
    ├── useless dir
    └── another useless dir
My docker-compose.yml used to make the build is the following:
version: '3.5'
services:
  [...]
  django:
    build:
      context: $ROOT_DIR
      dockerfile: $ROOT_DIR/django/Dockerfile
  [...]
As you can see, context is root_dir and not django (because I need to copy a few things from documentation dir at build time).
What I need when building my image:
the django directory
the documentation directory
What I want to ignore when building my image:
everything but django and documentation dirs
Dockerfile and Dockerfile.dockerignore
and a few other things (virtual env, *.pyc, etc., but we'll stick with Dockerfile and Dockerfile.dockerignore for this question!)
What my Dockerfile.dockerignore looks like in order to do so:
# Ignore everything
*
# Allows django and documentation rep (context is root of project)
!/django
!/documentation
# Ignoring some files
**/Dockerfile
**/Dockerfile.dockerignore
But my issue is that the files I want to ignore are still present in my image! And this is not good for me. I think I have tried every syntax possible (**/, */*/, ...) but I can't find one that suits my need.
Many thanks if you can help me with that!
EDIT: for those wondering about that Dockerfile.dockerignore file, you can have a look here: https://docs.docker.com/engine/reference/commandline/build/#use-a-dockerignore-file
The Docker ignore file should be named .dockerignore.
You can check the documentation here: https://docs.docker.com/engine/reference/builder/#dockerignore-file
Running this command before the 'docker build' fixed it for me:
export DOCKER_BUILDKIT=1
FYI: I did not need to install 'buildkit' first.
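Since the build in the question goes through docker-compose rather than docker build directly, a rough equivalent (assuming a Compose version that supports delegating builds to the Docker CLI via COMPOSE_DOCKER_CLI_BUILD) would be:
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
docker-compose build django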
I have a complex config search path consisting of multiple locations where each location looks similar to this:
├── conf
│   └── foo
│       ├── foo.yaml
│       └── bar.yaml
└── files
    ├── foo.txt
    └── bar.txt
with foo.yaml:
# @package _group_
path: "../../files/foo.txt"
and bar.yaml:
# @package _group_
path: "../../files/bar.txt"
Now the problem is: how do I find the correct location of the files specified in the configurations? I am aware of the to_absolute_path() method provided by hydra, but it interprets the path relative to the directory in which the application was started. However, I would like to interpret that path relative to the position of the configuration file. I cannot do this manually in my code, because I don't know how hydra resolved the configuration file or where exactly it came from.
Is there some mechanism to determine the location of a config file from hydra? I really want to refrain from putting hard coded absolute paths in my configurations.
You can't get the path of a config file. In fact, it may not be a file at all (as is the case for Structured Configs), or it can be inside a Python wheel (even in a zipped wheel).
You can do something like
path = os.path.join(os.path.dirname(__file__), "relative_path_from_config")
You can also use APIs designed for loading resource files from Python modules.
Here is a good answer on the topic.
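For example, a minimal sketch using importlib.resources (Python 3.9+), where the package name my_package and the resource path files/foo.txt are placeholder assumptions:
from importlib.resources import files

# Read a resource shipped inside the package, regardless of whether the
# package is installed as plain files, a wheel, or a zipped wheel.
# "my_package" and "files/foo.txt" are placeholder names.
text = (files("my_package") / "files" / "foo.txt").read_text()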
I have an external dependency declared in WORKSPACE as a new_git_repository, and I have provided a BUILD file for it.
proj/
├── BUILD
├── external
│   ├── BUILD.myDep
│   └── code.bzl
└── WORKSPACE
In the BUILD.myDep file, I want to load code.bzl nearby, but when I load it (load("//:external/code.bzl", "some_func")) bazel tries to load @myDep//:external/code.bzl instead!
Of course it's not a target in the @myDep repository, but in my local workspace.
It seems I rubber-ducked Stack Overflow, since the solution appeared while writing the question!
However, the solution is to explicitly mention the local workspace when loading the .bzl file:
Suppose we have declared the name in the WORKSPACE as below:
workspace(name = "local_proj")
Now instead of load("//:external/code.bzl", "some_func"), just load it explicitly as a local workspace file:
load("#local_proj//:external/code.bzl", "some_func")
NOTE: When using this trick just be careful about potential dependency loops (i.e. loading a generated file that itself is produced by a rule depending on the same external repo!)
So I've documented my whole API with Swagger Editor, and now I have my .yaml file. I'm really confused about how I take that and generate the whole Node.js scaffolding so that all those functions are already defined and I just fill them in with the appropriate code.
Swagger Codegen generates server stubs and client SDKs for a variety of languages and frameworks, including Node.js.
To generate a Node.js server stub, run codegen with the -l nodejs-server argument.
Windows example:
java -jar swagger-codegen-cli-2.2.2.jar generate -i petstore.yaml -l nodejs-server -o .\PetstoreServer
You get:
.
├── api
|   └── swagger.yaml
├── controllers
|   ├── Pet.js
|   ├── PetService.js
|   ├── Store.js
|   ├── StoreService.js
|   ├── User.js
|   └── UserService.js
├── index.js
├── package.json
├── README.md
└── .swagger-codegen-ignore
I'm trying to use kitchen-terraform to verify a terraform module I'm building. This particular module is a small piece in a larger infrastructure. It depends on some pieces of the network being available and will then be used later to spin up additional servers and whatnot.
I'm curious if there's a way with kitchen-terraform to create some pieces of infrastructure before the module under test runs and to also add in some extra pieces that aren't part of the module proper.
In this particular case, the module is creating a new VPC with some peering connections with an existing VPC, security groups, and subnets. I want to verify that the peering connections were established correctly as well as spin up some ec2 instances to verify the status of the network.
Does anyone have examples of doing something like this?
I'm curious if there's a way with kitchen-terraform to create some pieces of infrastructure before the module under test runs and to also add in some extra pieces that aren't part of the module proper.
You can do all of this. Your .kitchen.yml specifies where the terraform code to execute lives:
provisioner:
  name: terraform
  directory: path/to/terraform/code
  variable_files:
    - path/to/terraform/variables.tfvars
More to the point, create a main.tf in a test location that builds all the infrastructure you want, including the modules. The order of execution will be controlled by the dependencies of the resources themselves.
Assuming you are testing in the same repo as your module, maybe arrange something like this:
├── .kitchen.yml
├── Gemfile
├── Gemfile.lock
├── README.md
├── terraform
│   └── my_module
│       ├── main.tf
│       └── variables.tf
└── test
    ├── main.tf
    └── terraform.tfvars
The actual .kitchen.yml will include this:
provisioner:
  name: terraform
  directory: test
  variable_files:
    - test/variables.tfvars
  variables:
    access_key: <%= ENV['AWS_ACCESS_KEY_ID'] %>
    secret_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
And your test/main.tf will instantiate the module along with any other code under test.
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
...
module "my_module" {
name = "foo"
source = "../terraform/my_module"
...
}
resource "aws_instance" "test_instance_1" {
...
}
Problem
I write down lectures at university in LaTeX (which is really convenient for this purpose), and I want the tex files to automatically compile to PDF.
I have a couple of .tex files in my repository like this:
.
├── .gitlab-ci.yml
└── lectures
    ├── math
    |   ├── differentiation
    |   |   ├── lecture_math_diff.tex
    |   |   ├── chapter_1.tex
    |   |   └── chapter_2.tex
    |   └── integration
    |       ├── lecture_math_int.tex
    |       ├── chapter_1.tex
    |       └── chapter_2.tex
    └── physics
        └── mechanics
            ├── lecture_physics_mech.tex
            ├── chapter_1.tex
            └── chapter_2.tex
So the main file, for example lecture_math_diff.tex, uses
\include{chapter_1}
\include{chapter_2}
to form the whole lecture.
And as a result, I want to have my build artifacts in PDF like this:
├── math
|   ├── lecture_math_diff.pdf
|   └── lecture_math_int.pdf
└── physics
    └── lecture_physics_mech.pdf
What can be done here? Do I have to write an sh script to collect all the tex files, or use GitLab runners?
One approach would be to use a short script (e.g. Python or Bash) and to run latexmk to generate the PDF files.
latexmk is a Perl script which compiles LaTeX files automatically. A short introduction can be found here.
With Python 3 the script could look like the following:
# filename: make_lectures.py
import os

# configuration:
keyword_for_main_tex = "lecture"

if __name__ == "__main__":
    tex_root_directory = os.getcwd()
    for root, _, files in os.walk("."):
        for file_name in files:
            # check if the file name ends with `tex` and starts with the keyword
            if file_name[-3:] == "tex" and file_name[0:len(keyword_for_main_tex)] == keyword_for_main_tex:
                os.chdir(root)  # go into the directory
                os.system("latexmk -lualatex " + file_name)  # run latexmk on the main file
                os.chdir(tex_root_directory)  # go back to the root directory in case of relative paths
This script assumes that only files to be compiled to PDF start with the keyword lecture (as in the question). But the if statement that checks which files to build could also be extended to a more elaborate comparison, such as matching regular expressions, as sketched below.
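For example, a rough sketch of such a regex-based check (the pattern itself is just an illustrative assumption):
import re

# Hypothetical pattern: any .tex file whose name starts with "lecture",
# e.g. lecture_math_diff.tex; adjust the regular expression as needed.
main_tex_pattern = re.compile(r"^lecture.*\.tex$")

if main_tex_pattern.match(file_name):
    ...  # hand this file to latexmk as in the script above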
latexmk is called with the command line flag -lualatex here to demonstrate how to configure the build process globally. A local configuration possibility (per project) is given with .latexmkrc files, which are read and processed by latexmk.
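For instance, a minimal per-project .latexmkrc could look like this (assuming a latexmk version where $pdf_mode = 4 selects lualatex; check your latexmk documentation):
# .latexmkrc
$pdf_mode = 4;   # 4 = build PDFs with lualatex instead of pdflatex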
If we call latexmk as a shell command, we have to make sure that it is installed on our GitLab runner (along with TeX Live). If Docker container runners are registered (see here for how that is done), then you just need to specify the name of an image from Docker Hub, which leads to the example .gitlab-ci.yml file below:
compile_latex_to_pdf:
  image: philipptempel/docker-ubuntu-tug-texlive:latest
  script: python3 make_lectures.py
  artifacts:
    paths:
      - ./*.pdf
    expire_in: 1 week
Feel free to change the image to any other image you like (e.g. blang/latex:latest). Note that the artifacts extraction assumes that no other PDF files are in the repository.
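If the generated PDFs end up in nested directories (as in the lectures tree above), or if other PDFs might appear, the artifacts glob can be tightened. On GitLab versions that support double-star globs in artifacts paths (an assumption worth checking for your instance), something like this should work:
artifacts:
  paths:
    - lectures/**/*.pdf
  expire_in: 1 week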
A final remark: I did not try it, but it should also be possible to install TeX Live and latexmk directly on the GitLab runner (if you have access to it).
You can have a look at https://github.com/reallyinsane/mathan-latex-maven-plugin. With the Maven or Gradle plugin you can also use "dependencies" for your projects.