I've got a problem with gitlab-ci-multi-runner. I have several stages in my setup, let's say build and test. Build works fine, but the test stage fails because of some infrastructure issue. I then fix the cause of the failure and want to retry only the last stage, assuming the cache between stages is still alive. But it fails again because the cache is empty. Here is an example to demonstrate my layout:
stages:
  - build
  - test

build_step:
  stage: build
  tags:
    - docker
  cache:
    key: ${CI_PIPELINE_ID}
    untracked: true
    paths:
      - bld/
  script:
    - rm -rf bld
    - mkdir -p bld
    - cd bld
    - touch build_here

test:
  stage: test
  cache:
    key: ${CI_PIPELINE_ID}
    untracked: true
    paths:
      - bld/
  tags:
    - docker
  script:
    - cd bld
    - ls -all
Here is my gitlab-runner version:
# gitlab-ci-multi-runner --version
Version: 9.5.1
Git revision: 96b34cc
Git branch: 9-5-stable
GO version: go1.8.3
Built: Wed, 04 Oct 2017 16:26:27 +0000
OS/Arch: linux/amd64
Thanks for your help!
Cache is provided on a best-effort basis; to pass data between jobs you need to use artifacts, as explained in the documentation:
cache - Use for temporary storage for project dependencies. Not useful for keeping intermediate build results, like jar or apk files. Cache was designed to be used to speed up invocations of subsequent runs of a given job, by keeping things like dependencies (e.g., npm packages, Go vendor packages, etc.) so they don't have to be re-fetched from the public internet. While the cache can be abused to pass intermediate build results between stages, there may be cases where artifacts are a better fit.
artifacts - Use for stage results that will be passed between stages. Artifacts were designed to upload some compiled/generated bits of the build, and they can be fetched by any number of concurrent Runners. They are guaranteed to be available and are there to pass data between jobs. They are also exposed to be downloaded from the UI.
You need to use artifacts, together with dependencies, to get what you want.
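For example, here is a minimal sketch of the layout from the question rewritten to use artifacts and dependencies (the expire_in value is only an illustration):
stages:
  - build
  - test

build_step:
  stage: build
  tags:
    - docker
  script:
    - rm -rf bld
    - mkdir -p bld
    - touch bld/build_here
  artifacts:
    paths:
      - bld/           # upload the build output so later jobs can fetch it
    expire_in: 1 week  # illustrative retention period

test:
  stage: test
  tags:
    - docker
  dependencies:
    - build_step       # download only build_step's artifacts into this job
  script:
    - cd bld
    - ls -al
Because artifacts are stored on the GitLab server rather than on each runner, retrying only the test job will still download bld/ from the build_step artifacts.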
Related
I am rewriting my CircleCI config. Everything was in a single job and working well, but for good reasons I want more structure.
Now I have two jobs, build and test, and I want the second job to reuse the machine exactly as the build job left it.
I will later have a third and fourth job.
Ideally there would be a built-in CircleCI option to say that I want to reuse the previous machine/executor.
Other options are workspaces, which save data on the CircleCI side, or building and deploying my own Docker image that represents the machine state after the build job.
What is the easiest way to achieve this?
Currently, my YAML basically contains:
jobs:
  build:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - checkout
      - node/install:
          install-yarn: true
          node-version: '16.13'
      - other-long-commands
  test:
    # NOT GOOD: need an executor
    steps:
      - run:
          name: 'test'
          command: 'npx cypress run'
          environment:
            TEST_SUITE: SMOKE

workflows:
  build-and-test:
    jobs:
      - build
      - smoke:
          requires:
            - build
This can't be done. Workspaces are the solution instead.
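For reference, a minimal sketch of how workspaces could be wired into the config above (job names, steps and the persisted path are only illustrative):
jobs:
  build:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - checkout
      - run: yarn install
      - persist_to_workspace:  # save the working directory for downstream jobs
          root: .
          paths:
            - .
  test:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - attach_workspace:      # restore whatever the build job persisted
          at: .
      - run:
          name: 'test'
          command: 'npx cypress run'

workflows:
  build-and-test:
    jobs:
      - build
      - test:
          requires:
            - build
The workspace is copied between jobs by CircleCI, so the test job starts from the files the build job persisted rather than from a fresh checkout.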
My follow-up would be: why do you need two jobs? Depending on your use case, pulling steps out into reusable commands might help, or even an orb (see the sketch below).
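To illustrate the reusable-command idea, here is a rough sketch (the command name and its steps are made up):
version: 2.1
commands:
  install-and-build:        # reusable step sequence, callable like a built-in step
    steps:
      - checkout
      - run: yarn install
      - run: yarn build
jobs:
  test:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - install-and-build   # reuse the command instead of splitting into two jobs
      - run: npx cypress run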
I am trying to create a base image for my repo that is optionally rebuilt when branches (merge requests) make changes to dependencies.
Let's say I have this pipeline configuration:
stages:
  - Test
  - Build

variables:
  image: main

Changes A:
  stage: Test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
      changes:
        - path/to/a
  script:
    - docker build -t a .
    - docker push a
    - echo 'image=a' > dotenv
  artifacts:
    reports:
      dotenv: dotenv

Build:
  stage: Build
  image: $image
  script:
    - echo build from $image
Let's say I push to a new branch and the first commit changes /path/to/a: the Docker image is built and pushed, the dotenv is updated, and the Build job successfully uses image=a.
Now, let's say I push a new commit to the same branch. However, the new commit does not change /path/to/a, so the Changes A job does not run. The Build stage then pulls the "wrong" default image=main, while I would like it to still pull image=a since it builds on top of the previous commit.
Any ideas on how to deal with this?
Is there a way to make rules.changes refer to origin/main?
Any other ideas on how to achieve what I am trying to do?
Is there a way to make rules.changes refer to origin/main?
Yes, there is, since GitLab 15.3 (August 2022):
Improved behavior of CI/CD changes with new branches
Configuring CI/CD jobs to run on pipelines when certain files are changed by using rules: changes is very useful with merge request pipelines.
It compares the source and target branches to see what has changed, and adds jobs as needed.
Unfortunately, changes does not work well with branch pipelines.
For example, if the pipeline runs for a new branch, changes has nothing to compare to and always returns true, so jobs might run unexpectedly.
In this release we’re adding compare_to to rules:changes for both jobs and workflow:rules, to improve the behavior in branch pipelines.
You can now configure your jobs to check for changes between the new branch and the defined comparison branch.
Jobs that use rules:changes:compare_to will work the way you expect, comparing against the branch you define.
This is useful for monorepos, where many independent jobs could be configured to run based on which component in the repo is being worked on.
See Documentation and Issue.
You can use it only as part of a job, and it must be combined with rules:changes:paths.
Example:
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        paths:
          - Dockerfile
        compare_to: 'refs/heads/branch1'
In this example, the docker build job is only included when the Dockerfile has changed relative to refs/heads/branch1 and the pipeline source is a merge request event.
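Adapted to the setup in the question, this could look roughly as follows (a sketch assuming main is the branch you want to compare against):
Changes A:
  stage: Test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
      changes:
        paths:
          - path/to/a
        compare_to: 'refs/heads/main'  # compare against main instead of the triggering commit
  script:
    - docker build -t a .
    - docker push a
    - echo 'image=a' > dotenv
  artifacts:
    reports:
      dotenv: dotenv
With this rule, any branch that has touched path/to/a relative to main keeps running Changes A and re-exporting image=a on later commits, even when those commits do not change path/to/a themselves.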
There is a project setting that defines how your MR pipelines are set up. It only works for merge requests and can be found in Settings -> Merge requests, under the Merge options section:
Each commit individually (nothing checked)
This means each commit is treated on its own, and changes checks are done against the triggering commit by itself.
Enable merged results pipelines
This merges your MR with the target branch before running the CI jobs. It also evaluates all the changes within the MR as a whole, not commit by commit.
Merge trains
This is a whole different chapter and not relevant for this use case, but for completeness I have to mention it: see https://gitlab.com/help/ci/pipelines/merge_trains.md
What you are looking for is option 2, merged results pipelines. But as I said, this only works in merge request pipelines and not in general pipelines, so you would also need to adapt your rules to something like:
rules:
  - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    changes:
      - path/to/a
  - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
I'm trying to create two different actions within Travis CI. The first action is to execute a script on every push on every branch. This is currently working as desired. The second is to trigger a different script only when I run git push origin --tags. In short:
Execute script1 always (currently working)
Execute script2 when tags are pushed
Here is what I'm trying:
language: python
python:
  - 3.7
matrix:
  include:
    - python: 3.7
      sudo: true
install:
  - pip install -r requirements.txt
script: # Always want this to happen
  - invoke package
branches:
  only:
    - master
    - /^x\/.*/
deploy: # Want this to occur on git push origin --tags
  provider: script
  script: invoke release
  on:
    tags: true
The deploy section is not being triggered, and I can find no evidence of the invoke release script being invoked.
Update:
It may be due to the way I'm pushing tags? I'm seeing this log in Travis now:
Skipping a deployment with the script provider because this is not a tagged commit
Solved it via this GitHub issue. I changed the deploy section to this:
deploy:
  provider: script
  script: invoke release
  on:
    tags: true
    all_branches: true
but I had to remove the branches section. The deployment script was invoked nonetheless.
Inside the .travis.yml configuration file, what is the practical difference between the before_install, install, before_script and script options?
I have found no documentation explaining the differences between these options.
You don't need to use these sections, but if you do, you communicate the intent of what you're doing:
before_install:
  # execute all of the commands which need to be executed
  # before installing dependencies
  - composer self-update
  - composer validate

install:
  # install all of the dependencies you need here
  - composer install --prefer-dist

before_script:
  # execute all of the commands which need to be executed
  # before running actual tests
  - mysql -u root -e 'CREATE DATABASE test'
  - bin/doctrine-migrations migrations:migrate

script:
  # execute all of the commands which
  # should make the build pass or fail
  - vendor/bin/phpunit
  - vendor/bin/php-cs-fixer fix --verbose --diff --dry-run
See, for example, https://github.com/localheinz/composer-normalize/blob/0.8.0/.travis.yml.
The difference is in the state of the job when something goes wrong.
Git 2.17 (Q2 2018) illustrates that in commit 3c93b82 (08 Jan 2018) by SZEDER Gábor (szeder).
(Merged by Junio C Hamano -- gitster -- in commit c710d18, 08 Mar 2018)
That illustrates the practical difference between the before_install, install, before_script and script options:
travis-ci: build Git during the 'script' phase
Ever since we started building and testing Git on Travis CI (522354d: Add Travis CI support, 2015-11-27, Git v2.7.0-rc0), we build Git in the
'before_script' phase and run the test suite in the 'script' phase
(except in the later introduced 32 bit Linux and Windows build jobs,
where we build in the 'script' phase).
Contrarily, the Travis CI practice is to build and test in the
'script' phase; indeed Travis CI's default build command for the
'script' phase of C/C++ projects is:
./configure && make && make test
The reason why Travis CI does it this way and why it's a better
approach than ours lies in how unsuccessful build jobs are
categorized. After something went wrong in a build job, its state can
be:
'failed', if a command in the 'script' phase returned an error.
This is indicated by a red 'X' on the Travis CI web interface.
'errored', if a command in the 'before_install', 'install', or
'before_script' phase returned an error, or the build job exceeded
the time limit.
This is shown as a red '!' on the web interface.
This makes it easier, both for humans looking at the Travis CI web
interface and for automated tools querying the Travis CI API, to
decide when an unsuccessful build is our responsibility requiring
human attention, i.e. when a build job 'failed' because of a compiler
error or a test failure, and when it's caused by something beyond our
control and might be fixed by restarting the build job, e.g. when a
build job 'errored' because a dependency couldn't be installed due to
a temporary network error or because the OSX build job exceeded its
time limit.
The drawback of building Git in the 'before_script' phase is that one
has to check the trace log of all 'errored' build jobs, too, to see
what caused the error, as it might have been caused by a compiler
error.
This requires additional clicks and page loads on the web interface and additional complexity and API requests in automated tools.
Therefore, move building Git from the 'before_script' phase to the
'script' phase, updating the script's name accordingly as well.
'ci/run-builds.sh' now becomes basically empty, remove it.
Several of our build job configurations override our default 'before_script' to do nothing; with this change our default 'before_script' won't do
anything, either, so remove those overriding directives as well.
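As a rough illustration of that layout (a hypothetical .travis.yml, not Git's actual configuration), building and testing in the 'script' phase could look like this:
language: c
before_install:
  # provision external dependencies; a failure here shows up as 'errored'
  - sudo apt-get update -qq
  - sudo apt-get install -y libssl-dev zlib1g-dev
script:
  # build and test together; a failure here shows up as 'failed'
  - make
  - make test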
I test on three different Node versions (mainly to alert me to any compatibility issues that might arise if I were forced to switch to another version in production):
sudo: false
language: node_js
node_js:
  - iojs
  - '0.12'
  - '0.10'
deploy:
  skip_cleanup: true
  provider: script
  script: ./deploy.sh
  on:
    branch: master
matrix:
  allow_failures:
    - node_js: iojs
But that means my ./deploy.sh script is run three times, from three different containers! I obviously only want one of the successful builds to be deployed. The other builds are just for catching Node issues.
Is there a way to configure it so it only runs my deploy script after one of the jobs? Maybe another setting under on:?
The docs for the script provider don't cover this.
What about setting a node: '0.10' option under on:? Like so:
deploy:
  skip_cleanup: true
  provider: script
  script: ./deploy.sh
  on:
    branch: master
    node: '0.10'
This should run the deploy job only on the node: '0.10' target.
From the official Travis deployment docs:
jdk, node, perl, php, python, ruby, scala, go: For language runtimes
that support multiple versions, you can limit the deployment to happen
only on the job that matches the desired version.
You could try using a conditional release.
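For example, a conditional release could be sketched like this (the condition value is only an illustration; any shell condition Travis can evaluate would work):
deploy:
  skip_cleanup: true
  provider: script
  script: ./deploy.sh
  on:
    branch: master
    # deploy only from the job running the desired runtime
    condition: $TRAVIS_NODE_VERSION = "0.10"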