I have some common configuration files that are identical across many projects. Managing them is painful because they are simply duplicated, and when I modify one file I have to repeat the change in every project.
I want to find a solution with GitLab CI/CD to share common configuration files across different projects.
I saw in the GitLab documentation that you can include other GitLab CI/CD YAML files. But I don't want to share only the YAML configuration file; I want to share all the configuration files that are the same across my projects.
I found an idea but I don't know if it's a good one.
Imagine a project named "A" and a project named "Common Configurations Files".
The "Common Configurations Files" project will contain all the common configurations files in a "config" folder and a gitlab ci/cd file with a function that can copy the files to the root folder.
Project A will include files and will call the before_script step to copy the files to the root folder.
I don't know if it can work and I don't know how to do it.
I saw two other options but I don't know if they are suitable:
Gitlab artifact
git submodule
Both options, either working with a submodule or artifacts, should work for your case. Using submodules would add a little bit more complexity as you would need to keep the submodules updated in all your dependent repositories.
Another option that would be suitable here is GitLab generic packages. If you have a repository that includes all your configuration files, I would suggest creating a pipeline for this repository which versions/tags the repo upon changes. After versioning, I would zip the content up and push it to this project's package registry.
curl --header "PRIVATE-TOKEN: <your_access_token>" \
--upload-file path/to/file.txt \
"https://gitlab.example.com/api/v4/projects/24/packages/generic/my_package/0.0.1/dependencies.zip"
In all your dependent projects, in the before_script you could just download and extract the zip file.
curl --header "PRIVATE-TOKEN: <your_access_token>" \
"https://gitlab.example.com/api/v4/projects/24/packages/generic/my_package/0.0.1/dependencies.zip"
The example in the docs looks like this and might help you get started.
image: curlimages/curl:latest

stages:
  - upload
  - download

upload:
  stage: upload
  script:
    - 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file path/to/file.txt "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/my_package/0.0.1/file.txt"'

download:
  stage: download
  script:
    - 'wget --header="JOB-TOKEN: $CI_JOB_TOKEN" ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/my_package/0.0.1/file.txt'
I have two projects: the first is SOURCES (a technical base) and the second is an Ant-based application.
I have created a gitlab-ci.yml in the second project, but the build pipeline always fails because the SOURCES libraries are not referenced.
In my build.xml, sources are referenced like this: ${SOURCES_DIRRECTORY}/folder/lib/library.jar
In the gitlab-ci.yml I can set SOURCES_DIRRECTORY, but pointing it at the URL of the other project doesn't seem to work.
Adding the sources to a local lib folder is not an option (so as not to duplicate them).
Is it possible to reference another GitLab project from my CI?
Otherwise, how can I retrieve and use the artifacts generated by the SOURCES project to obtain the libraries in my application project?
Thank you for your answers
You can use the GitLab CI APIs to download the artifacts made in the other project.
I do something similar in one of my projects and it works for me:
curl -k --location --output artifacts.zip --header "JOB-TOKEN: $CI_JOB_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/artifacts/devel/download?job=$STAGE_NAME"
In this case I download the artifacts produced by another branch of the same project, so the variables are already set by default and I can use JOB-TOKEN. In your case you should instead use PRIVATE-TOKEN, which is an access token with the api scope. You should have something like this:
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/1/jobs/artifacts/main/download?job=test"
I have made a ROS workspace and, inside it, a package.
I did catkin_make and everything is working well.
I would like to give this package (or should I give the entire workspace?) to another person.
I am thinking of giving him a zip file of the files and folders (it contains launch files, Python scripts, RViz files, etc.), so I expect he will unzip it on his machine.
I would like him to be able to run the launch files without problems.
What does he need to do for this? (Of course he will have ROS installed, that is not a problem.)
I am thinking he should perhaps do source devel/setup.bash, but is this enough?
When sharing a workspace with somebody, only the source space src has to be shared. It should contain all your packages with their launch files (*.launch), Python (*.py) and C++ nodes (*.cpp, *.hpp), YAML configuration files (*.yaml), RViz configurations (*.rviz), robot descriptions (*.urdf, *.xacro), and describe how each node should be compiled in a CMakeLists.txt. Additionally, you are supposed to keep track of all the Debian packages you install inside the package.xml file of each package.
If for some obscure reason there are things to be done that can't be accommodated in the standard installation instructions given above, I will actually write a bash script that performs these steps for me and add it either to the package itself or to the workspace. This way I can also automate more complex steps such as installing OpenCV or modifying the .bashrc. Here is a small example of what such a minimal script (I generally name them install_dependencies.sh) might look like:
#!/bin/bash
# Get current workspace
WS_DIR="$(dirname "$(dirname "$(readlink -fm "$0")")")"
# Check if script is run as root '$ sudo ...'
if ["$EUID" -ne 0]
then
echo "Error: This script has to be run as root '$ sudo ./install_dependencies.sh'
exit 1
fi
echo "Installing dependencies..."
# Modify .bashrc
echo "- Modifying '~/.bashrc'..."
echo "source ${WS_DIR}/devel/setup.bash" >> ~/.bashrc
echo ""
echo "Dependencies installed."
If for some reason even that is not possible, I always make sure to document it properly, either in a Markdown *.md read-me in a /doc folder inside your package, in the read-me.md inside the base folder of your repository, or inside the root folder of your workspace.
The receiver then only has to (see the condensed command sketch after this list):
Create a new workspace
Copy or clone the package files to its src folder
Install all the Debian package dependencies listed in the package.xml files with $ rosdep install
(If any: Execute the bash scripts I created by hand $ sudo ./install_dependencies.sh or perform the steps given in the documentation)
Build the workspace with $ catkin_make or $ catkin build from catkin-tools
Source the current environment variables with $ source devel/setup.bash
Make sure that the Python nodes are executable either by $ chmod +x <filename> or right-clicking the corresponding Python nodes (located in src or scripts of your package), selecting Properties/Permissions and enabling Allow executing file as program.
Run the desired Python or C++ nodes ($ rosrun <package_name> <executable_name>) and launch files ($ roslaunch <package_name> <launch_file_name>)
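Condensed into shell commands, those receiver steps might look roughly like the sketch below (the workspace path, package name and launch file name are placeholders):
# Sketch of the receiver's steps; workspace path, package name and
# launch file name are placeholders.
mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/src
# copy or clone the shared package(s) into src here
cd ~/catkin_ws
rosdep install --from-paths src --ignore-src -r -y   # install package.xml dependencies
catkin_make                                          # or: catkin build
source devel/setup.bash
chmod +x src/<package_name>/scripts/*.py             # make Python nodes executable
roslaunch <package_name> <launch_file_name>.launch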
It is up to you whether to share the code as a compressed file, in the form of a Git repository, or in a more advanced way (see below), but I will introduce some best practices in the following paragraphs that will pay off in the long run with larger packages.
Sharing a package or sharing a workspace?
One can either share a single package or an entire workspace. I personally think that most of the time one should share the entire workspace instead of the package alone even if you only cloned the other packages from a public Github repo. This might save the receiver a lot of headache e.g. when checking out the wrong branch.
Version control with Git
Arguably the best way to arrange your packages is by using Git. I'd actually make a repository for every package you create (if a couple of packages are virtually inseparable you can also bundle them into a single Git repo, or better, use metapackages). Then create an additional repository for your workspace and include your own packages and packages from other sources as submodules. This allows your code to be modular and re-usable: You can share only a package or the entire workspace!
As a first step I always add a .gitignore file to each package repository which excludes *.pyc files and another one to the workspace repository that ignores the build, devel and install folders.
You can add a particular repository as submodule to your workspace Git repository by opening a console inside the src folder of your workspace repository and typing
$ git submodule add -b <branch_name> <git_url_to_package> <optional_directory_rename>
Note that you can actually track a particular branch of a repository that you include as a submodule. In case you need a submodule at some point follow this guide.
If you share the workspace repository with someone, they will need access to each individual submodule repository, and they will have to not only clone the repository but also pull in the submodules with
$ git clone --recurse-submodules <git_url_to_workspace_repository>
and potentially update them to the latest commit with
$ git submodule update --remote
After these two steps they should have a full version of the repository with submodules and they should be able to progress with the steps listed in the section above.
Unit testing and continuous integration
Before sharing a repository you will have to verify that everything is working correctly. This can take a decent amount of time, in particular if the code base is large and you are modifying it frequently. In the ideal case you would have to install it on a brand new machine or inside a virtual box in order to make sure that the set-up works which would take quite some time. This is where unit testing comes into play: For every class and function you program you will write a test. This way you can simply run these tests and make sure everything is working correctly. Generally these unit tests will be performed automatically and the working branches merged continuously. Generally the test routines are written with the libraries Boost::Test (C++), GoogleTest (generally used in ROS with C++), unittest (for Python) and QtTest (for GUIs). For ROS launch files there is additionally rostest. How this can be done in ROS is described here and here.
ROSjects
If you do not even want the person you are sending the code to to go through the hassle of setting it up, you might consider sending them a ROSject. A ROSject is an online virtual ROS environment (by the guys behind The Construct, the main source of ROS courses and ROS tutorials on Youtube) that can be created and shared very easily from your existing Git repository, as can be seen here. The simulation runs entirely in the cloud on a virtual machine. This way the potential for failure is very low, but it is not an option if your code is supposed to run on hardware and not only in simulation.
Docker
If your installation procedure is complex, you might as well use a container such as Docker.
More information about using Docker in combination with ROS can be found here. The Docker container might introduce a bit of overhead, though, and it is probably not an option for code which should have real-time priority in combination with a real-time patched operating system.
Debian or snap package
Another way of sending somebody a ROS package is by packing it into a Debian or snap package. This process takes a while and is particularly favourable if you want to give your code to a large number of users who should be able to use the code out of the box. Instructions on how this can be done for Debian packages can be found here and here, while a guide for snap can be found here.
I have a Dockerfile and a tex file in my repository. I use GitHub Actions to build an image (Ubuntu 18.10 with packages for PDFLaTeX) and run a container, which takes main.tex and produces main.pdf with PDFLaTeX. So far everything seems to work OK, but the problem is I can't copy the PDF from the container to the repository. I tried using docker cp:
docker cp pdf-creator:/main.tex .
But it doesn't seem to work, as pdf doesn't appear in my repository. Can you think of any other way to solve this?
The docker cp command copies a file into the local filesystem. In the context of a GitHub action, this is just whatever virtual environment is being used to run your code: it has nothing to do with your repository.
The only way to add something to your repository is to git add the file, git commit the change, and git push the change to your repository (which of course requires providing your Action with the necessary credentials to push changes to your repository, probably using a GitHub Secret).
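If you did want to commit the generated PDF back, the shell steps inside the workflow might look roughly like this (a sketch that assumes the checkout step already ran and that the job's token is allowed to push to the current branch):
# Sketch only: commit the generated PDF back to the repository.
# Assumes actions/checkout already ran and pushing is permitted.
git config user.name "github-actions"
git config user.email "github-actions@users.noreply.github.com"
git add main.pdf
git commit -m "Add generated PDF"
git push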
But rather than adding the file to your repository, maybe you want to look at support for Artifacts? This lets you save files generated as part of your workflow and make them available for Download.
The workflow step would look something like:
- name: Archive generated PDF file
  uses: actions/upload-artifact@v2
  with:
    name: main.pdf
    path: /main.pdf
See the linked docs for more information.
We are thinking of moving our CI from Jenkins to GitLab. We have several projects that share the same build workflow. Right now we use a shared library where the pipelines are defined, and the Jenkinsfile inside each project only calls a method defined in the shared library, which defines the actual pipeline. So changes only have to be made at a single point and affect several projects.
I am wondering if the same is possible with GitLab CI? As far as I have found out, it is not possible to define the gitlab-ci.yml outside the repository. Is there another way to define a pipeline and share this config with several projects to simplify maintenance?
GitLab 11.7 introduces new include methods, such as include:file:
https://docs.gitlab.com/ee/ci/yaml/#includefile
include:
  - project: 'my-group/my-project'
    ref: master
    file: '/templates/.gitlab-ci-template.yml'
This will allow you to create a new project on the same GitLab instance which contains a shared .gitlab-ci.yml.
First let me start by saying: thank you for asking this question! It triggered me to search for a solution (again) after I had often wondered myself whether this was even possible. We also have 20-30 projects that are nearly identical and have .gitlab-ci.yml files of about 400-500 lines that each have to be changed whenever one thing changes.
So I found a working solution:
Inspired by the Auto DevOps .gitlab-ci.yml template GitLab itself created, where they use one template job to define all the functions used and load them in every before_script, I came up with the following setup.
Multiple project repos (project-1, project-2) requiring a shared set of CI jobs / functions
Functions script containing all shared functions in separate repo
Files
So, using a shared CI jobs script:
#!/bin/bash

function list_files {
  ls -lah
}

function current_job_info {
  echo "Running job $CI_JOB_ID on runner $CI_RUNNER_ID ($CI_RUNNER_DESCRIPTION) for pipeline $CI_PIPELINE_ID"
}
A common and generic .gitlab-ci.yml:
image: ubuntu:latest

before_script:
  # Install curl
  - apt-get update -qqq && apt-get install -qqqy curl
  # Get shared functions script
  - curl -s -o functions.sh https://gitlab.com/giix/demo-shared-ci-functions/raw/master/functions.sh
  # Set permissions
  - chmod +x functions.sh
  # Run script and load functions
  - . ./functions.sh

job1:
  script:
    - current_job_info
    - list_files
You could copy-paste your file from project-1 to project-2 and it would be using the same shared Gitlab CI functions.
These examples are intentionally verbose for demonstration purposes; optimize them any way you like.
Lessons learned
So after applying the construction above on a large scale (40+ projects) I want to share some lessons learned so you don't have to find out the hard way:
Version (tag / release) your shared CI functions script; otherwise changing one thing can make all pipelines fail at once (see the pinned-download sketch after this list).
Using different Docker images could cause an issue with the requirement for bash to load the functions (e.g. I use some Alpine-based images for CLI-tool-based jobs that only have sh by default).
Use project-based CI/CD secret variables to personalize build jobs for projects, like environment URLs etc.
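For the versioning point above, pinning the before_script download to a tag instead of master could look roughly like this (the tag name is a placeholder; the repository URL is the demo one used above):
# Sketch only: download the shared functions from a fixed tag ("v1.0.0" is
# a placeholder) so later changes cannot break existing pipelines.
curl -s -o functions.sh "https://gitlab.com/giix/demo-shared-ci-functions/raw/v1.0.0/functions.sh"
chmod +x functions.sh
. ./functions.sh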
Since GitLab version 12.6, it's possible to define an external .gitlab-ci.yml file.
To customize the path:
Go to the project's Settings > CI / CD.
Expand the General pipelines section.
Provide a value in the Custom CI configuration path field.
Click Save changes.
...
If the CI configuration will be hosted on an external site, the URL link must end with .yml:
http://example.com/generate/ci/config.yml
If the CI configuration will be hosted in a different project within
GitLab, the path must be relative to the root directory in the other
project, with the group and project name added to the end:
.gitlab-ci.yml#mygroup/another-project
my/path/.my-custom-file.yml#mygroup/another-project
Use the include feature (available from GitLab 10.6):
https://docs.gitlab.com/ee/ci/yaml/#include
So, I always wanted to post what we came up with, and here it is now:
Right now we use a mixed approach of @stefan-van-gastel's idea of a shared CI library and the relatively new include feature of GitLab 11.7. We are very satisfied with this approach, as we can now manage our build pipeline for 40+ repositories in a single repository.
I have created a repository called ci_shared_library containing
a shell script for every single build job containing the execution logic for the step.
a pipeline.yml file containing the whole pipeline config. In the before_script we clone the ci_shared_library to /tmp/shared to be able to execute the scripts.
stages:
  - test
  - build
  - deploy
  - validate

services:
  - docker:dind

before_script:
  # Clear existing shared library
  - rm -rf /tmp/shared
  # Get shared library
  - git clone https://oauth2:${GITLAB_TOKEN}@${SHARED_LIBRARY} /tmp/shared
  - cd /tmp/shared && git checkout master && cd $CI_PROJECT_DIR
  # Set permissions
  - chmod -R +x /tmp/shared
  # open access to registry
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

test:
  stage: test
  script:
    - /tmp/shared/test.sh

build:
  stage: build
  script:
    - /tmp/shared/build.sh
  artifacts:
    paths:
      - $CI_PROJECT_DIR/target/RPMS/x86_64/*.rpm
    expire_in: 3h
  only:
    - develop
    - /release\/.*/

deploy:
  stage: deploy
  script:
    - /tmp/shared/deploy.sh
  artifacts:
    paths:
      - $CI_PROJECT_DIR/tmp/*
    expire_in: 12h
  only:
    - develop
    - /release\/.*/

validate:
  stage: validate
  script:
    - /tmp/shared/validate.sh
  only:
    - develop
    - /release\/.*/
Every project that wants to use this pipeline config has to have a .gitlab-ci.yml. In this file the only thing to do is to include the shared pipeline.yml file from the ci_shared_library repo.
# .gitlab-ci.yml
include:
  - project: 'ci_shared_library'
    ref: master
    file: 'pipeline.yml'
With this approach, really everything regarding the pipeline lives in one single repository and is reusable. We have the whole pipeline template in one file, but I think it would even be possible to split this up so that every single job lives in its own YAML file. This way it would be more flexible, and one could create default jobs that can be combined differently for projects that have similar jobs but do not all need every job...
With GitLab 13.5 (October 2020), the include feature is even more useful:
Validate expanded GitLab CI/CD configuration with the API
Writing and debugging complex pipelines is not a trivial task. You can use the include keyword to help reduce the length of your pipeline configuration files.
However, if you wanted to validate your entire pipeline via the API previously, you had to validate each included configuration file separately which was complicated and time consuming.
Now you have the ability to validate a fully-expanded version of your pipeline configuration through the API, with all the include configuration included.
Debugging large configurations is now easier and more efficient.
See Documentation and Issue.
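As a rough illustration, validating the project's configuration with all includes expanded goes through the project-level CI Lint API; the call might look something like the following, with the project ID and token as placeholders (check the linked documentation for the exact path and parameters):
# Sketch only: ask GitLab for the merged, fully-expanded CI configuration.
# Project ID 24 and the access token are placeholders.
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/24/ci/lint"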
And:
See GitLab 13.6 (November 2020)
Include multiple CI/CD configuration files as a list
Previously, when adding multiple files to your CI/CD configuration using the include:file syntax, you had to specify the project and ref for each file. In this release, you now have the ability to specify the project, ref, and provide a list of files all at once. This prevents you from having to repeat yourself and makes your pipeline configuration less verbose.
See Documentation and Issue.
You could look into the concept of Dynamic Child pipeline.
It has evolved with GitLab 13.2 (July 2020):
Dynamically generate Child Pipeline configurations with Jsonnet
We released Dynamic Child Pipelines back in GitLab 12.9, which allow you to generate an entire .gitlab-ci.yml file at runtime.
This is a great solution for monorepos, for example, when you want runtime behavior to be even more dynamic.
We’ve now made it even easier to create CI/CD YAML at runtime by including a project template that demonstrates how to use Jsonnet to generate the YAML.
Jsonnet is a data templating language that provides functions, variables, loops, and conditionals that allow for fully parameterized YAML configuration.
See documentation and issue.
I'm getting started with Laravel and Jenkins. First time using either of these technologies. I have Laravel installed and got the welcome page showing. I now want to install Jenkins. I was looking at this tutorial but that installs Laravel differently. I've used composer to install Laravel so I'm not entirely sure how I should do the "configure the build" step:
Configure build
Now clone my github repository laravel-jenkins which is the boilerplate for all the config files and the Jenkins job.
cd /var/www
git clone git://github.com/modess/laravel-jenkins.git
mv laravel-jenkins/* laravel/
cd /var/www/laravel
Now you should have these files in your Laravel directory as well:
build/
- code-browser/
- coverage/
- logs/
- pdepend/
- phpcs.xml (PHP Code Sniffer config)
- phpmd.xml (PHP Mess Detector config)
build.xml (build config)
config.xml (Jenkins job config)
phpunit-bootstrap.php (PHPUnit bootstrap script)
phpunit.xml.dist (PHPUnit config)
Can anyone offer any suggestions on how best I proceed with this?
There is no issue with the fact that you installed Laravel via Composer as opposed to the git clone recommended in the article (your approach is actually the recommended approach according to the Laravel docs).
To answer your primary question: the instructions have you git clone the author's public GitHub repo, which provides files that will be moved over into your Laravel project (they are unique files, so nothing from Laravel will be overwritten). Here is a breakdown:
For reference: I am assuming you followed his example and your Laravel project is located in /var/www/laravel.
"cd /var/www" - This will bring you to the parent folder of your Laravel project (there should only be laravel/ and an index file of some sort in this folder)
"git clone git://github.com/modess/laravel-jenkins.git" - This will pull all the files from the authors public git repo, if you feel more comfortable you can download as ZIP directly from Github and upload via SFTP or FTP (or just use wget). These files will be located in a newly created folder located at /var/www/laravel-jenkins
"mv laravel-jenkins/* laravel/" - This command just moves all files and folders from /var/www/laravel-jenkins/ to /var/www/laravel (if the files being moved already exist in /var/www/laravel they will be overwritten)
That is all that needs to be done to have the author's "Jenkins/Laravel Boilerplate" active on your Laravel install.
IMPORTANT NOTE: You said you were using Laravel 5 and the author's instructions are specifically for Laravel 4; there is a very high possibility that this tutorial will not work in a Laravel 5 project, as the file structures between Laravel 4 and 5 have many differences.