I'm looking for a way to run a "cleanup" job/pipeline/etc when a GitLab merge request is closed (whether merged or not).
The issue is this: we create a feature deployment on our cluster whenever a merge request is opened. Currently, I have no mechanism for detecting when an MR is closed, so over time these old 'feature deployments' accumulate on the cluster.
I could write a manual cleanup script (look at all deployed features on the cluster and remove the ones whose MRs no longer exist), but that is going to be a bit hairy and error-prone. I was hoping GitLab has a way to use its really nice pipeline and job features for this type of cleanup.
We use GitLab environments for review apps. Environments can be automatically stopped after x weeks with environment:auto_stop_in. This has the advantage that even when MRs stay open for months because they were forgotten, the data is still cleaned up after x weeks (we use 2 weeks).
In the stop job's script you can do whatever you need to clean things up. In our case that is a helm uninstall (a sketch of such a script follows the pipeline config below).
.gitlab-ci.yml
deploy_review:
  stage: deploy
  script: "./deploy_review.sh"
  environment:
    auto_stop_in: 2 weeks
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: stop_review_app
    deployment_tier: development
  resource_group: deploy-review-$CI_COMMIT_REF_SLUG

stop_review_app:
  stage: after_deploy
  when: manual
  only:
    - branches
  variables:
    GIT_STRATEGY: none
  script: "./stop_review_app.sh"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
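For reference, a minimal sketch of what stop_review_app.sh might contain in a setup like ours (the Helm release name and namespace are assumptions, not taken from the original answer):

#!/usr/bin/env bash
# Hypothetical cleanup script run by the stop_review_app job.
set -euo pipefail

# Assumes deploy_review.sh created a Helm release named after the branch slug.
helm uninstall "review-${CI_COMMIT_REF_SLUG}" --namespace review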
I am rewriting my CircleCI config. Everything used to be in a single job and was working well, but for some good reasons I want more structure.
Now I have two jobs, build and test, and I want the second job to reuse the machine exactly where the build job stopped.
I will later add a third and fourth job.
Ideally there would be a built-in CircleCI way to say that a job should reuse the previous machine/executor.
Other options are workspaces, which save data on the CircleCI machine, or building and deploying my own Docker image that represents the machine after the build job.
What is the easiest way to achieve what I want to do?
Currently, my YAML basically looks like this:
jobs:
  build:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - checkout
      - node/install:
          install-yarn: true
          node-version: '16.13'
      - other-long-commands
  test:
    # NOT GOOD: need an executor
    steps:
      - run:
          name: 'test'
          command: 'npx cypress run'
          environment:
            TEST_SUITE: SMOKE

workflows:
  build-and-test:
    jobs:
      - build
      - test:
          requires:
            - build
It can't be done; workspaces are the solution instead.
My follow-up would be: why do you need two jobs? Depending on your use case, pulling steps out into reusable commands might help, or even an orb.
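For reference, a rough sketch of the workspace approach (the install command and the second executor's image are assumptions):

jobs:
  build:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - checkout
      - run: yarn install --frozen-lockfile
      # Persist the checkout plus installed dependencies for downstream jobs.
      - persist_to_workspace:
          root: .
          paths:
            - .
  test:
    # A workspace does not carry the executor, so this job declares its own image.
    docker:
      - image: cypress/base:14.16.0
    steps:
      # Restore everything the build job persisted.
      - attach_workspace:
          at: .
      - run:
          name: 'test'
          command: 'npx cypress run'
          environment:
            TEST_SUITE: SMOKE

workflows:
  build-and-test:
    jobs:
      - build
      - test:
          requires:
            - build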
I have a monorepo project that is deployed to 3 environments - testing, staging and production. Deploys to testing come from the next branch, while staging and production from the master branch. Testing deploys should run automatically on every commit to next (but I'm also fine with having to trigger them manually), but deploys from the master branch should be triggered manually. In addition, every deploy may consist of a client push and server push (depending on the files changed). The commands to deploy to each of the hosts are exactly the same, the only thing changing is the host itself and the environment variables.
Therefore I have 2 questions:
Can I make Bitbucket prompt me for the deployment target when I manually trigger the pipeline, thus basically letting me choose the set of env variables to inject into the fixed sequence of commands? I've seen a screenshot of this in a tutorial, but I lost it and haven't been able to find it since.
Can I have parallel sequences of commands? I'd like the server and the client push to run simultaneously, but each of them has different steps. Or do I need to merge them into the same step with multiple scripts to achieve that?
Thank you for your help.
The answer to both of your questions is 'Yes'.
The feature that makes it possible is called custom pipelines. Here is a neat doc that demonstrates how to use them.
There is a parallel keyword which you can use to define parallel steps. Check out this doc for details.
If I'm not misinterpreting the description of your setup, your final pipeline should look very similar to this:
pipelines:
  custom:
    deploy-to-staging-or-prod:  # As you say the steps are the same, only variable values will define the destination.
      - variables:  # List variable names under here, and Bitbucket will prompt you to supply their values.
          - name: VAR1
          - name: VAR2
      - parallel:
          - step:
              script:
                - ./deploy-client.sh
          - step:
              script:
                - ./deploy-server.sh
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
Update:
If you need to use Deployments instead of providing each variable separately, you can use a manual trigger type:
definitions:
  steps:
    - step: &RunTests
        script:
          - ./run-tests.sh
    - step: &DeployFromMaster
        script:
          - ./deploy-from-master.sh

pipelines:
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
    master:
      - step: *RunTests
      - parallel:
          - step:
              <<: *DeployFromMaster
              deployment: staging
              trigger: manual
          - step:
              <<: *DeployFromMaster
              deployment: production
              trigger: manual
The key docs for understanding this pipeline are still this one and this one for YAML anchors. Keep in mind that I introduced the 'RunTests' step on purpose: since a pipeline is triggered on a commit, you can't make the first step manual, so it acts as a stopper before the deploy steps, which can only be manual due to your requirements.
I am trying to create a base image for my repo that is optionally rebuilt when branches (merge requests) make changes to dependencies.
Let's say I have this pipeline configuration:
stages:
  - Test
  - Build

variables:
  image: main

Changes A:
  stage: Test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
      changes:
        - path/to/a
  script:
    - docker build -t a .
    - docker push a
    - echo 'image=a' > dotenv
  artifacts:
    reports:
      dotenv: dotenv

Build:
  stage: Build
  image: $image
  script:
    - echo build from $image
Let's say I push to a new branch and the first commit changes /path/to/a: the Docker image is built and pushed, the dotenv is written, and the Build job successfully uses image=a.
Now, let's say I push a new commit to the same branch. This commit does not change /path/to/a, so the Changes A job does not run, and the Build stage pulls the "wrong" default image=main, while I would like it to still pull image=a since the branch builds on top of the previous commit.
Any ideas on how to deal with this?
Is there a way to make rules.changes refer to origin/main?
Any other ideas on how to achieve what I am trying to do?
Is there a way to make rules.changes refer to origin/main?
Yes, there is, since GitLab 15.3 (August 2022):
Improved behavior of CI/CD changes with new branches
Configuring CI/CD jobs to run on pipelines when certain files are changed by using rules: changes is very useful with merge request pipelines.
It compares the source and target branches to see what has changed, and adds jobs as needed.
Unfortunately, changes does not work well with branch pipelines.
For example, if the pipeline runs for a new branch, changes has nothing to compare to and always returns true, so jobs might run unexpectedly.
In this release we’re adding compare_to to rules:changes for both jobs and workflow:rules, to improve the behavior in branch pipelines.
You can now configure your jobs to check for changes between the new branch and the defined comparison branch.
Jobs that use rules:changes:compare_to will work the way you expect, comparing against the branch you define.
This is useful for monorepos, where many independent jobs could be configured to run based on which component in the repo is being worked on.
See Documentation and Issue.
You can use it only as part of a job, and it must be combined with rules:changes:paths.
Example:
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        paths:
          - Dockerfile
        compare_to: 'refs/heads/branch1'
In this example, the docker build job is only included when the Dockerfile has changed relative to refs/heads/branch1 and the pipeline source is a merge request event.
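Applied to the Changes A job from the question, a rough sketch (assuming the intended comparison base is the default branch, here refs/heads/main) could look like:

Changes A:
  stage: Test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
      changes:
        paths:
          - path/to/a
        compare_to: 'refs/heads/main'
  script:
    - docker build -t a .
    - docker push a
    - echo 'image=a' > dotenv
  artifacts:
    reports:
      dotenv: dotenv

With this, the job runs whenever path/to/a differs from the default branch, not only when the triggering commit touches it, so later commits on the same branch still rebuild and export image=a.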
There is a project setting that defines how your MR pipelines are set up. It only applies to merge requests and can be found in Settings -> Merge Requests under the Merge options section:
Each commit individually (nothing checked)
This means each commit is treated on its own, and changes checks are done against the triggering commit by itself.
Enable merged results pipeline
This will merge your MR with the target branch before running the CI jobs. It therefore evaluates all the changes in the MR as a whole rather than commit by commit.
Merge trains
This is a whole different chapter and not relevant for this use case, but for completeness I have to mention it: see https://gitlab.com/help/ci/pipelines/merge_trains.md
What you are looking for is option 2, merged results pipelines. But as I said, this only works in merge request pipelines and not general pipelines, so you would also need to adapt your rules to something like:
rules:
  - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    changes:
      - path/to/a
  - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
I use CircleCI and the pipeline is as follows:
build
test
build app & nginx Docker images and push them to a GitLab registry
deploy Docker stack to the development server (currently the Swarm manager)
I just pushed my develop branch to my repository and, despite a success message from CircleCI, was greeted by the default "Symfony 4 new Controller page" on the development server.
I logged in via SSH and executed the following (output shown for the application service):
docker stack ps my-development-stack --format "{{.Name}} {{.Image}} {{.CurrentState}}"
my-stack_app.1 gitlab-image:latest-develop Running 33 minutes ago
On my GitLab repository's registry, the application image was "Last Updated" 41 minutes ago. The service's image has apparently been refreshed before the latest version was pushed.
Is this a common issue/error?
How could (or should) I fix this timing issue?
Can CircleCI help with this?
Perhaps it is best (though not ideal) to introduce a delay between build and deploy; you can refer to this example: CircleCI Delay Between Jobs.
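If you go that route, a rough sketch of a dedicated wait job (the job names, image and 120-second delay are assumptions, not from the original answer) might look like:

jobs:
  wait-for-registry:
    docker:
      - image: cimg/base:stable
    steps:
      # Crude fixed delay to give the registry time to serve the freshly pushed image.
      - run: sleep 120

workflows:
  build-test-deploy:
    jobs:
      - build
      - test:
          requires:
            - build
      - wait-for-registry:
          requires:
            - test
      - deploy-dev:
          requires:
            - wait-for-registry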
I found a workaround using a CircleCI scheduled workflow triggered by a cron expression. I scheduled a nightly build workflow which runs every day at midnight.
Sample of my config.yml file:
# Beginning of the config.yml
# ...
workflows:
  version: 2
  # Push workflow
  # ...
  # Nightly build workflow
  nightly-dev-deploy:
    triggers:
      - schedule:
          cron: "0 0 * * *"
          filters:
            branches:
              only:
                - develop
    jobs:
      - build
      - test:
          requires:
            - build
      - deploy-dev:
          requires:
            - test
Read more about scheduled workflows, with a nightly build example, in the official CircleCI documentation.
This looks more like a workaround to me. I'd be glad to hear how you avoid this issue, which could lead to a better answer to the question.
We're developing an application which consists of three parts:
Backend (Java EE) (A)
Frontend (vuejs) (B)
Admin frontend (React) (C)
The current status quo for each of the above:
Maintained in its own Git repository
Has its own docker-compose.yml
Has its own Jenkinsfile
The Jenkinsfile for each component includes a "Deploy" stage which basically just runs the following command:
docker stack deploy -c docker-compose.yml $stackName
This approach, however, doesn't feel "right". We're struggling with questions like:
How can we deploy the "complete application"? Our first guess was to use a separate docker-compose.yml which contains the services of A, B and C.
But where would we keep this file? Definitely not in one of the Git repos, as it doesn't belong there. A fourth repo?
How could we start the deployment of this combined docker-compose file if there are changes in one of the repos for A, B or C?
We're aware that these questions might not be very specific, but they show our confusion regarding this topic.
Do you have any good practices for orchestrating these three service components?
Well, one way to do that is to make the 3 deployments separate pipelines; then, as the last step in each application's pipeline, you would just trigger the particular deployment job. For example, for the backend:
stage("deploy backend") {
steps {
build 'deploy backend'
}
}
Then a separate pipeline to deploy all the apps would just do:
stage("deploy all") {
steps {
build 'deploy backend'
build 'deploy frontend'
build 'deploy admin frontend'
}
}
An open question is where you would keep the docker-compose.yml.
I'm assuming that automatic deployment is available only for your master branch, so I would still keep it in each project. You would also need an additional Jenkins configuration file for the deployment pipeline, meaning you would have a simple pipeline job 'deploy backend' pointing to this new Jenkins configuration file in the master branch of 'backend'. But then it all depends on your Git flow.
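For illustration, a minimal sketch of what such a deployment-only Jenkins configuration file for the backend could look like (the stack name is an assumption; the deploy command is the one from the question):

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // Same deploy command the component Jenkinsfiles already run.
                sh 'docker stack deploy -c docker-compose.yml backend-stack'
            }
        }
    }
}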