Good morning,
I can't find any useful docs to get started with environment variables in Bitbucket.
My starting point: a .env file containing APP_ENV=$APP_ENV. I commit to the dev-master branch, where a repository variable APP_ENV | dev is set. Something goes wrong after the commit, because this variable isn't replaced. Isn't it the default behaviour for this variable to be replaced after a commit?
The second question: I want to set APP_ENV | prod after merging into the production build, but since the first case doesn't work, prod doesn't work either. And where can I set branch-specific variables?
Thanks for your ideas and time.
.env file
APP_ENV=$APP_ENV
pipelines.yml
pipelines:
  branches:
    dev-master:
      - step:
          script:
            - 'sed -i "s/APP_ENV/$APP_ENV/" .env'
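For what it's worth, Bitbucket never rewrites files on commit: repository variables are only injected as environment variables into pipeline steps, so the substitution has to happen in your script. Note also that the sed pattern above matches the first occurrence on the line, which is the key name, turning APP_ENV=$APP_ENV into dev=$APP_ENV. A minimal sketch of a corrected step, assuming the placeholder in .env is the literal text $APP_ENV:
pipelines:
  branches:
    dev-master:
      - step:
          script:
            # escape the $ so sed targets the $APP_ENV placeholder,
            # not the APP_ENV key name on the left-hand side
            - 'sed -i "s/\$APP_ENV/$APP_ENV/" .env'
For the branch-specific part, one option is deployment variables: define an environment (e.g. production) with its own APP_ENV value under Repository settings > Deployments, and add deployment: production to the step that runs on your production branch; the sed line itself stays identical.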
Related
I have a monorepo project that is deployed to 3 environments - testing, staging and production. Deploys to testing come from the next branch, while staging and production from the master branch. Testing deploys should run automatically on every commit to next (but I'm also fine with having to trigger them manually), but deploys from the master branch should be triggered manually. In addition, every deploy may consist of a client push and server push (depending on the files changed). The commands to deploy to each of the hosts are exactly the same, the only thing changing is the host itself and the environment variables.
Therefore I have 2 questions:
Can I make Bitbucket prompt me for the deployment target when I manually trigger the pipeline, basically letting me choose the set of env variables to inject into the fixed sequence of commands? I've seen a screenshot of this in a tutorial, but I lost it and haven't been able to find it since.
Can I have parallel sequences of commands? I'd like the server and the client push to run simultaneously, but both of them have different steps. Or do I need to merge those into the same step with multiple scripts to achieve that?
Thank you for your help.
The answer to both of your questions is 'Yes'.
The feature that makes it possible is called custom pipelines. Here is a neat doc that demonstrates how to use them.
There is a parallel keyword which you can use to define parallel steps. Check out this doc for details.
If I'm not misinterpreting the description of your setup, your final pipeline should look very similar to this:
pipelines:
  custom:
    deploy-to-staging-or-prod: # As you say the steps are the same, only variable values will define the destination.
      - variables: # List variable names under here, and Bitbucket will prompt you to supply their values.
          - name: VAR1
          - name: VAR2
      - parallel:
          - step:
              script:
                - ./deploy-client.sh
          - step:
              script:
                - ./deploy-server.sh
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
UPD
If you need to use Deployments instead of providing each variable separately, you can utilise the manual trigger type:
definitions:
  steps:
    - step: &RunTests
        script:
          - ./run-tests.sh
    - step: &DeployFromMaster
        script:
          - ./deploy-from-master.sh
pipelines:
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
    master:
      - step: *RunTests
      - parallel:
          - step:
              <<: *DeployFromMaster
              deployment: staging
              trigger: manual
          - step:
              <<: *DeployFromMaster
              deployment: production
              trigger: manual
Key docs for understanding this pipeline are still this one and this one for YAML anchors. Keep in mind that I introduced the 'RunTests' step on purpose: since a pipeline is triggered on a commit, you can't make the first step manual.
It acts as a stopper before the deploy steps, which can only be manual due to your requirements.
I am trying to create a base image for my repo that is optionally re-built when branches (merge requests) make changes to dependencies.
Let's say I have this pipeline configuration:
stages:
  - Test
  - Build

variables:
  image: main

Changes A:
  stage: Test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
      changes:
        - path/to/a
  script:
    - docker build -t a .
    - docker push a
    - echo 'image=a' > dotenv
  artifacts:
    reports:
      dotenv: dotenv

Build:
  stage: Build
  image: $image
  script:
    - echo build from $image
Let's say I push to a new branch and the first commit changes /path/to/a: the Docker image is built and pushed, the dotenv is updated, and the Build job successfully uses image=a.
Now, let's say I push a new commit to the same branch. However, the new commit does not change /path/to/a, so the Changes A job does not run. The Build stage then pulls the "wrong" default image=main, while I would like it to still pull image=a, since the branch builds on top of the previous commit.
Any ideas on how to deal with this?
Is there a way to make rules.changes refer to origin/main?
Any other ideas on how to achieve what I am trying to do?
Is there a way to make rules.changes refer to origin/main?
Yes, there is, since GitLab 15.3 (August 2022):
Improved behavior of CI/CD changes with new branches
Configuring CI/CD jobs to run on pipelines when certain files are changed by using rules: changes is very useful with merge request pipelines.
It compares the source and target branches to see what has changed, and adds jobs as needed.
Unfortunately, changes does not work well with branch pipelines.
For example, if the pipeline runs for a new branch, changes has nothing to compare to and always returns true, so jobs might run unexpectedly.
In this release we’re adding compare_to to rules:changes for both jobs and workflow:rules, to improve the behavior in branch pipelines.
You can now configure your jobs to check for changes between the new branch and the defined comparison branch.
Jobs that use rules:changes:compare_to will work the way you expect, comparing against the branch you define.
This is useful for monorepos, where many independent jobs could be configured to run based on which component in the repo is being worked on.
See Documentation and Issue.
You can use it only as part of a job, and it must be combined with rules:changes:paths.
Example:
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        paths:
          - Dockerfile
        compare_to: 'refs/heads/branch1'
In this example, the docker build job is only included when the Dockerfile has changed relative to refs/heads/branch1 and the pipeline source is a merge request event.
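Applied to the setup from the question, a sketch (assuming merge request pipelines are used and the default branch is named main) could look like:
Changes A:
  stage: Test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      changes:
        paths:
          - path/to/a
        # compare against the default branch instead of the previous
        # commit, so the job also runs on later commits of the branch
        # as long as path/to/a differs from main
        compare_to: 'refs/heads/main'
  script:
    - docker build -t a .
    - docker push a
    - echo 'image=a' > dotenv
  artifacts:
    reports:
      dotenv: dotenv
This way the job fires on every pipeline of the branch where path/to/a still differs from main, not only on the commit that touched it.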
There is a project setting which defines how your MR pipelines are set up. It only applies to merge requests and can be found in Settings -> Merge Requests under the section Merge options:
Each commit individually (nothing checked)
Each commit is treated on its own, and changes checks are done against the triggering commit by itself.
Enable merged results pipelines
This merges your MR with the target branch before running the CI jobs. It also evaluates all your changes within the MR as a whole, not commit-wise.
Merge trains
This is a whole different chapter and not relevant for this use case, but for completeness I have to mention it: see https://gitlab.com/help/ci/pipelines/merge_trains.md
What you are looking for is option 2, merged results pipelines. But as I said, this only works in merge request pipelines and not general pipelines. So you would also need to adapt your rules to something like:
rules:
  - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    changes:
      - path/to/a
  - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
While every page about environment variables on Azure Pipelines seems to talk about setting variables before the pipeline starts, I want to set (change) the PATH variable in one step and use it in a later step.
But using
steps:
  - script: source ./export_variables.sh
    displayName: "export variables"
  - script: echo $PATH
    displayName: "verify"
    condition: succeeded()
where ./export_variables.sh contains something like
#!/bin/bash
export PATH=abc/def/bin:$PATH
does not fulfill the task: In the verify step PATH does not contain abc/def/bin.
What has to be changed so that updates of $PATH persist into later steps?
I have had the same issue. It was solved with the following lines in the YAML
steps:
  - bash: |
      export PATH="${PATH}:${HOME}/.local/bin"
      echo "##vso[task.setvariable variable=PATH;]${PATH}"
This gives me the variable set for all subsequent steps in the job. Note that this will overwrite PATH, so use with care :)
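For completeness, a minimal sketch of the whole sequence (the ${HOME}/.local/bin path is just an example). A variable set this way only becomes visible in the steps that follow, and Azure Pipelines also has a dedicated task.prependpath logging command that avoids overwriting PATH:
steps:
  - bash: |
      # task.setvariable affects subsequent steps, not the current one
      echo "##vso[task.setvariable variable=PATH;]${PATH}:${HOME}/.local/bin"
    displayName: "extend PATH"
  - bash: |
      # the updated PATH is visible here
      echo "$PATH"
    displayName: "verify"
  - bash: |
      # alternative: prepend a directory to PATH without replacing it
      echo "##vso[task.prependpath]${HOME}/.local/bin"
    displayName: "prepend instead"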
Within .gitlab-ci.yml I've created a new variable under script: by using $CI_COMMIT_SHA and modifying it. When I echo the variable it returns the proper value. However, I'm not having any success passing it along to Docker. What am I not doing right?
Ultimately, I would like to access this custom variable inside my container.
build:
  script:
    # converts commit SHA to UNIX time
    - export COMMIT_TIME_UNIX=$(git show -s --format=%ct $CI_COMMIT_SHA)
    - echo $COMMIT_TIME_UNIX
You would need to check whether, when the same script is executed in a Docker/container environment, it is still running in the right Git repository path.
You can add, before the first export:
pwd
git status
env | grep GIT
That way, you can check that you are running Git commands where you should, and whether there is any GIT_xxx environment variable which might influence said commands.
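If the end goal is to see the variable inside the container image itself, one common approach (a sketch; the image and argument names are illustrative) is to pass it as a build argument, since exported shell variables do not cross the docker build boundary on their own:
.gitlab-ci.yml:
build:
  script:
    - export COMMIT_TIME_UNIX=$(git show -s --format=%ct $CI_COMMIT_SHA)
    # hand the value over to the image build explicitly
    - docker build --build-arg COMMIT_TIME_UNIX=$COMMIT_TIME_UNIX -t myimage .
Dockerfile:
# make the build arg available as an env variable at runtime
ARG COMMIT_TIME_UNIX
ENV COMMIT_TIME_UNIX=$COMMIT_TIME_UNIX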
In my travis script I have the following:
after_success:
- ember build --environment=production
- ember build --environment=staging --output-path=dist-staging
After both of these are built, I conditionally deploy the appropriate one to S3, based on the current git branch.
It works, but it would save time if I only built the one I actually need. What is the easiest way to build based on the branch?
Use the test command, as used here:
after_success:
  - test $TRAVIS_BRANCH = "master" &&
    ember build
All Travis env variables are available here.
You can execute a shell script in after_success and check the current branch using Travis environment variables:
#!/bin/bash
# [[ ]] is bash syntax, so the shebang must be bash, not sh
if [[ "$TRAVIS_BRANCH" != "master" ]]; then
  echo "We're not on the master branch."
  # analyze current branch and react accordingly
  exit 0
fi
Put the script somewhere in the project and use it like:
after_success:
- ./scripts/deploy_to_s3.sh
There might be other Travis variables useful to you; they are listed here.
With the following entry the script will only be executed if it is not a PR and the branch is master.
after_success:
- 'if [ "$TRAVIS_PULL_REQUEST" = "false" -a "$TRAVIS_BRANCH" = "master" ]; then bash doit.sh; fi'
It is not enough to evaluate TRAVIS_BRANCH. TRAVIS_BRANCH is set to master when a PR against master is created by a fork.
See also the description of TRAVIS_BRANCH on https://docs.travis-ci.com/user/environment-variables/:
for push builds, or builds not triggered by a pull request, this is the name of the branch
for builds triggered by a pull request this is the name of the branch targeted by the pull request
for builds triggered by a tag, this is the same as the name of the tag (TRAVIS_TAG)
If you work with tags you have to consider TRAVIS_TAG as well. If TRAVIS_TAG is set, TRAVIS_BRANCH is set to the value of TRAVIS_TAG.
after_success:
- if [ "$TRAVIS_PULL_REQUEST" = "false" -a \( "$TRAVIS_BRANCH" = "master" -o -n "$TRAVIS_TAG" \) ]; then doit.sh; fi
I would say the above solutions are good because they transfer to non-Travis-CI build systems as well, but TravisCI has a built-in feature similar to this:
stages:
  - name: deploy
    # require the branch name to be master (note for PRs this is the base branch name)
    if: branch = master
Although I could not get it to work with after_success, the following page has a section on "Testing Conditions", which I didn't get around to setting up.
https://docs.travis-ci.com/user/conditional-builds-stages-jobs/