I have defined the following CI/CD variable (VAULT_PATH) in my GitLab project.
As you can see in the image, the variable is environment-scoped, so in order to access its value within my jobs ($VAULT_PATH) I have added the environment property to each job.
job_build_preprod:
  environment: preprod
  script:
    - echo $VAULT_PATH

job_deploy_preprod:
  environment: preprod
  script:
    - echo $VAULT_PATH

job_build_production:
  environment: production
  script:
    - echo $VAULT_PATH

job_deploy_production:
  environment: production
  script:
    - echo $VAULT_PATH
The problem I am facing with this approach is that my "build" jobs are being tagged as deployment jobs (because I am adding the environment property) when they are not.
But if I do not add the environment property, I cannot access the environment scoped variable that I need.
So, is there another way to access environment scoped variables within jobs without using the environment property?
I need to use them within build jobs, but I do not want GitLab to tag those build jobs as deployments to the environment.
Check out the available actions inside environment: https://docs.gitlab.com/ee/ci/yaml/#environmentaction.
There are a few actions you can use that won't trigger a deployment.
E.g. for a build job you can use prepare:
job_build_preprod:
  script:
    - echo $VAULT_PATH
  environment:
    name: preprod
    action: prepare
    url: https://test.com
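For comparison, here is a minimal sketch (reusing the job names from the question) of how the actual deploy job can stay as it is; with no explicit action, the environment action defaults to start, so that job is still tracked as a deployment:

job_deploy_preprod:
  script:
    - echo $VAULT_PATH
  environment:
    name: preprod  # no action specified, so it defaults to `start` and creates a deployment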
I have a monorepo project that is deployed to 3 environments - testing, staging and production. Deploys to testing come from the next branch, while deploys to staging and production come from the master branch. Testing deploys should run automatically on every commit to next (but I'm also fine with having to trigger them manually), whereas deploys from the master branch should be triggered manually. In addition, every deploy may consist of a client push and a server push (depending on the files changed). The commands to deploy to each of the hosts are exactly the same; the only things that change are the host itself and the environment variables.
Therefore I have 2 questions:
Can I make Bitbucket prompt me for the deployment target when I manually trigger the pipeline, basically letting me choose the set of env variables to inject into the fixed sequence of commands? I've seen a screenshot of this in a tutorial, but I lost it and haven't been able to find it since.
Can I have parallel sequences of commands? I'd like the server and the client push to run simultaneously, but both of them have different steps. Or do I need to merge those into the same step with multiple scripts to achieve that?
Thank you for your help.
The answer to both of your questions is 'Yes'.
The feature that makes it possible is called custom pipelines. Here is a neat doc that demonstrates how to use them.
There is a parallel keyword which you can use to define parallel steps. Check out this doc for details.
If I'm not misinterpreting the description of your setup, your final pipeline should look very similar to this:
pipelines:
  custom:
    deploy-to-staging-or-prod: # As you say the steps are the same, only variable values will define the destination.
      - variables: # List variable names under here, and Bitbucket will prompt you to supply their values.
          - name: VAR1
          - name: VAR2
      - parallel:
          - step:
              script:
                - ./deploy-client.sh
          - step:
              script:
                - ./deploy-server.sh
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
UPD
If you need to use Deployments instead of providing each variable separately, you can utilise the manual trigger type:
definitions:
  steps:
    - step: &RunTests
        script:
          - ./run-tests.sh
    - step: &DeployFromMaster
        script:
          - ./deploy-from-master.sh

pipelines:
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
    master:
      - step: *RunTests
      - parallel:
          - step:
              <<: *DeployFromMaster
              deployment: staging
              trigger: manual
          - step:
              <<: *DeployFromMaster
              deployment: production
              trigger: manual
Key docs for understanding this pipeline are still this one and this one for YAML anchors. Keep in mind that I introduced a 'RunTests' step on purpose: since a pipeline is triggered on a commit, you can't make the first step manual. It acts as a stopper for the deploy steps, which can only be manual due to your requirements.
I have been ordered to migrate a dotnet build from Bamboo to Jenkins. I used a Freestyle job to run a PowerShell script, using the PowerShell plugin, and successfully built it. However, I need to add a version number to the built artifact. The Bamboo job uses:
~\.dotnet\tools\dotnet-lambda.exe package -pl $fullDir -f "netcoreapp3.1" -o Payment.${bamboo.majorVersion}.${bamboo.minorVersion}.${bamboo.revisionVersion}.${bamboo.buildNumber}.zip
I went into Jenkins Configuration and, under Global Properties, created environment variables named buildNumber, majorVersion, minorVersion and revisionVersion, gave them values, and in the Build part of the Freestyle job I used:
~\.dotnet\tools\dotnet-lambda.exe package -pl $fullDir -f "netcoreapp3.1" -o Payment.${env.majorVersion}.${env.minorVersion}.${env.revisionVersion}.${env.buildNumber}.zip
However, the name of the built artifact is: Payment.....zip
How can I pass the variable values?
Is there a way to auto-increment the revisionNumber and buildNumber, instead of hard-coding them?
I'm very new to both Bamboo and Jenkins. Any help would be extremely helpful.
Regards
Ramesh
Personally, I'd not configure these things globally, as they seem job-specific. Nevertheless:
Install the Environment Injector plugin. You now have two options:
General tab
[ X ] Prepare an environment for the run
Build Environment tab
[ X ] Inject environment variables to the build process
Set the "Properties Content" (that's an environment variable).
In your shell step (no need to preface with ${env....}):
Execute Shell step:
#!sh -
echo ${FOO}.${BUILD_NUMBER}
echo ${LABEL}
Output:
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting contributions.
Building in workspace C:\Users\jenkins\.jenkins\workspace\Foo
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content
FOO=bar
[EnvInject] - Variables injected successfully.
[Foo] $ sh - C:\Users\jenkins\AppData\Local\Temp\jenkins281351632631450693.sh
bar.8
Finished: SUCCESS
You'll also see at the bottom of the Execute Shell step a link to ${JENKINS_URL}/env-vars.html, which lists the variables available to shell scripts, including BUILD_NUMBER; use that in lieu of buildNumber.
The plugin also supports the same configuration at both the Global and the Node level.
You can also have individual build steps to inject / change variables between job steps (we use this to set a specific JAVA_HOME for the SonarQube step).
You will also see an [Environment variables] button on the LH side of each build log to inspect what you ran with (see below).
If you add the Build With Parameters plugin, you can be prompted to supply values for variables when triggering the job; these can be consumed in the same fashion without re-configuring the job (it won't remember them, but you will see a [Parameters] button on the LH side of each build log to inspect what you ran with).
The Version Number plugin can provide greater flexibility if, say, you want to auto-increment and the "BUILD_NUMBER" option is too limiting. It offers a BUILDS_ALL_TIME variable, can use the above-defined variables or hard-coded constants to assemble a version label, and can optionally control when the counter is incremented (e.g. only increment on successful builds). E.g.:
[ X ] Prepare an environment for the run
Properties Content
FOO=bar
[ X ] Create a formatted version number
Environment Variable Name [ BUILD-${FOO}.${BUILDS_ALL_TIME} ]
Skip Builds worse than [ SUCCESS ]
Execute Shell step:
#!sh -
echo ${FOO}.${BUILD_NUMBER}
echo ${LABEL}
Output:
[Foo] $ sh - C:\Users\jenkins\AppData\Local\Temp\jenkins4160554383847615506.sh
bar.10
BUILD-bar.2
I have a Gatsby project that is integrated with a Jenkins CI/CD pipeline. I define a variable in the Jenkins pipeline like so:
environment {
    my_env = "${env.GIT_BRANCH}"
}
I have pipelines that run from the dev and master branches of the repo hosting my Gatsby project. I want to use this variable in my Gatsby config file so that when I run a pipeline gatsby will pull content from either the dev or master environments of the CMS I'm using.
The problem is Gatsby seems only able to read environment variables from .env files out of the box. I am not sure how to get it to read variables from something that's not a .env file but is also stored in the root (in this case, a Jenkinsfile). Is there any workaround for this?
If you set your environment variable in the Jenkinsfile and are using the same agent, then you can just access that variable using env.<variable name>, as shown below:
environment {
    MY_ENV_VAR = "myvalue"
}

// you can access using:
env.MY_ENV_VAR
The Travis docs state that build stages are a way to group jobs, running the jobs in each stage in parallel, but running one stage after another sequentially.
I know that all jobs in a stage are run in parallel, but do these tests run in the same instance, i.e. do they share the same env variables?
Say I have 3 tests under a stage.
- stage: 'Tests'
  name: 'Test1'
  script: ./dotest1
-
  name: 'Test2'
  script: ./dotest2
-
  name: 'Test3'
  script: ./dotest3
If I set export bleh_credential=$some_credential in test1, does it get carried over to test2? It seems like it shouldn't, as they run in parallel, correct? If that's the case, can I set a stage-wide env variable, or should I set them every time I run a new test?
No, jobs are all run in new containers, so nothing in the job process can be shared between them. If you need some persistence between them, Travis requires you to use an external storage system like S3. Read more about it here: https://docs.travis-ci.com/user/build-stages/#data-persistence-between-stages-and-jobs
I would set the env vars for each job, perhaps using YAML anchors for the defaults: https://gist.github.com/bowsersenior/979804
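A minimal sketch of that idea, reusing the names from the question (the credential variables are placeholders, not verified config):

jobs:
  include:
    - stage: 'Tests'
      name: 'Test1'
      env: &test_env  # define the shared variable list once and anchor it
        - bleh_credential=$some_credential  # placeholder names from the question
      script: ./dotest1
    - name: 'Test2'
      env: *test_env  # re-use the anchored list; each job still runs in its own fresh instance
      script: ./dotest2
    - name: 'Test3'
      env: *test_env
      script: ./dotest3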
Is there a way to export environment variables from one stage to the next in GitLab CI? I'm looking for something similar to the job artifacts feature, only for environment variables instead of files.
Let's say I'm configuring the build in a configure stage and want to store the results as (secret, protected) environment variables for the next stages to use. I could save the configuration in files and store them as job artifacts, but I'm concerned about secrets being made available in files that can be downloaded by everyone.
Since GitLab 13 you can inherit environment variables like this:
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo $BUILD_VERSION # => hello
  dependencies:
    - build
Note: for GitLab < 13.1 you should enable this first in the GitLab Rails console:
Feature.enable(:ci_dependency_variables)
Although it's not exactly what you wanted, since it uses artifacts:reports:dotenv artifacts, GitLab recommends doing the below in their guide 'Pass an environment variable to another job':
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo "$BUILD_VERSION" # Output is: 'hello'
  needs:
    - job: build
      artifacts: true
I believe using the needs keyword is preferable over the dependencies keyword (as used in hd-deman's top answer) since:
When a job uses needs, it no longer downloads all artifacts from previous stages by default, because jobs with needs can start before earlier stages complete. With needs you can only download artifacts from the jobs listed in the needs: configuration.
Furthermore, you could minimise the risk by setting the build's artifacts:expire_in time to be very small.
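For example, a minimal sketch that just adds expire_in to the build job from above:

build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env
    expire_in: 10 minutes  # the dotenv artifact is deleted shortly after the pipeline finishes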
No, this feature is not here yet, but there is already an issue for this topic.
My suggestion would be to save the variables in a file and cache it, as caches are not downloadable and the file will be removed when the job finishes.
If you want to be 100% sure, you can delete it manually; see the clean_up stage.
e.g.
cache:
  paths:
    - save_file

stages:
  - job_name_1
  - job_name_2
  - clean_up

job_name_1:
  stage: job_name_1
  script:
    - (your_task) >> save_file

job_name_2:
  stage: job_name_2
  script:
    - cat save_file | do_something_with_content

clean_up:
  stage: clean_up
  script:
    - rm save_file
  when: always
You want to use Artefacts for this.
stages:
  - job_name_1
  - job_name_2
  - clean_up

job_name_1:
  stage: job_name_1
  script:
    - (your_task) >> save_file
  artifacts:
    paths:
      - save_file
    # Hint: You can set an expiration for them too.

job_name_2:
  stage: job_name_2
  needs:
    - job: job_name_1
      artifacts: true
  script:
    - cat save_file | do_something_with_content