How to avoid interpolation of sensitive variables in Jenkins

I have a variable in my environment block, something like this:
PACT_ARGUMENTS = "--pacticipant ${APP_NAME} \
--broker-base-url ${PACT_BROKER_URL} \
--broker-username ${PACT_BROKER_BASIC_CREDENTIALS_USR} \
--broker-password ${PACT_BROKER_BASIC_CREDENTIALS_PSW} \
--version ${GIT_COMMIT}"
I have two stages where I use it like this:
stage('Can I Deploy to Dev') {
    agent none
    steps {
        sh 'docker run --rm ${PACT_CLI_IMAGE} broker can-i-deploy ${PACT_ARGUMENTS} --to ${PACT_DEFAULT_ENV}'
    }
}
stage('Create Dev Version Tag') {
    agent none
    steps {
        sh 'docker run --rm ${PACT_CLI_IMAGE} broker create-version-tag ${PACT_ARGUMENTS} --tag ${PACT_DEFAULT_ENV}'
    }
}
It works fine, but I am getting notifications in Jenkins saying: "The following steps that have been detected may have insecure interpolation of sensitive variables".
The solution that I used to have is:
stage('Can I Deploy to Dev') {
    agent none
    steps {
        sh 'docker run --rm ${PACT_CLI_IMAGE} broker can-i-deploy \
            --pacticipant ${APP_NAME} \
            --broker-base-url ${PACT_BROKER_URL} \
            --broker-username ${PACT_BROKER_BASIC_CREDENTIALS_USR} \
            --broker-password ${PACT_BROKER_BASIC_CREDENTIALS_PSW} \
            --version ${GIT_COMMIT} \
            --to ${PACT_DEFAULT_ENV}'
    }
}
but I chose to extract those arguments into a variable because it looks a bit cleaner. I tried wrapping PACT_ARGUMENTS in single quotes, but then the whole thing is treated as a literal string. Any suggestions on how to handle this scenario?

As @daggett suggested here, I changed the double quotes around the PACT_ARGUMENTS variable to single quotes, like this:
PACT_ARGUMENTS = '--pacticipant $APP_NAME \
--broker-base-url $PACT_BROKER_URL \
--broker-username $PACT_BROKER_BASIC_CREDENTIALS_USR \
--broker-password $PACT_BROKER_BASIC_CREDENTIALS_PSW \
--version $GIT_COMMIT'
Then I used double quotes for the sh step, like this:
stage('Can I Deploy to Dev') {
    agent none
    steps {
        sh "docker run --rm ${PACT_CLI_IMAGE} broker can-i-deploy ${PACT_ARGUMENTS} --to ${PACT_DEFAULT_ENV}"
    }
}

It is quite simple: you need to let the shell resolve them as environment variables.
I have changed the username and password below to be expanded by the shell as environment variables; you can do the same for the others if required.
stage('Can I Deploy to Dev') {
    agent none
    steps {
        sh 'docker run --rm ${PACT_CLI_IMAGE} broker can-i-deploy \
            --pacticipant ${APP_NAME} \
            --broker-base-url ${PACT_BROKER_URL} \
            --broker-username \$PACT_BROKER_BASIC_CREDENTIALS_USR \
            --broker-password \$PACT_BROKER_BASIC_CREDENTIALS_PSW \
            --version ${GIT_COMMIT} \
            --to ${PACT_DEFAULT_ENV}'
    }
}
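What changes between the two approaches is which interpreter resolves the credential. A minimal sketch outside Jenkins (SECRET is a stand-in for the bound credential, not a Jenkins variable):

```shell
export SECRET='s3cr3t'
# Groovy-style "..." interpolation: the value is baked into the command
# text itself, which is what Jenkins warns about (it can leak into logs)
interpolated="echo got $SECRET"
# shell-style '...' : the command text keeps $SECRET literal, and the
# shell that finally runs it reads the value from its environment
literal='echo got $SECRET'
echo "command text: $interpolated"
echo "command text: $literal"
bash -c "$literal"
```

The single-quoted variant still prints the secret's value at run time, but the command text that Jenkins logs never contains it.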

Related

How to run parallel tests with cypress in Jenkins

I am trying to execute my tests in parallel, but it is not working.
I have this config in a .sh file called from the Jenkinsfile:
docker run --rm -i --name integration-tests-$p --network=host $DOCKER_VOLUME -e CYPRESS_INCLUDE_TAGS=$TAG_TO_INCLUDES -e CYPRESS_EXCLUDE_TAGS=disabled -w $WORKING_DIR cypress/included:10.10.0 --record --group 4x-electron --key keyId --parallel --ci-build-id $BUILD_NUMBER
and this config in the Jenkins pipeline file:
stage('Integration Tests') {
    when {
        expression { return RUN_E2E != '' }
    }
    steps {
        sh "$RUSH hc -t web" // Check if web is running
        sh "$RUSH integration-tests --tags=smoke --to manager"
    }
}
I need to run these tests in parallel. Can anyone help?

Running sonarqube as container with same network as host

I am trying to run a SonarQube scanner container built from the Dockerfile below:
FROM node:15-buster
################
# Install java #
################
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive \
apt-get -y install openjdk-11-jre-headless && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
############################
# Install SonarQube client #
############################
WORKDIR /root
RUN apt-get install -y curl grep sed unzip
RUN curl --insecure -o ./sonarscanner.zip -L https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.4.0.2170-linux.zip
RUN unzip -q sonarscanner.zip
RUN rm sonarscanner.zip
RUN mv sonar-scanner-4.4.0.2170-linux sonar-scanner
ENV SONAR_RUNNER_HOME=/root/sonar-scanner
ENV PATH $PATH:/root/sonar-scanner/bin
# Include Sonar configuration and project paths
COPY ./sonar/sonar-runner.properties ./sonar-scanner/conf/sonar-scanner.properties
# Ensure Sonar uses the provided Java for musl instead of a borked glibc one
RUN sed -i 's/use_embedded_jre=true/use_embedded_jre=false/g' /root/sonar-scanner/bin/sonar-scanner
My SonarQube link is not reachable. I confirmed all the network checks, including reachability from my Jenkins host, and that is fine; it is only from the SonarQube scanner container that the link is unreachable:
ERROR: SonarQube server [https://sonar.***.com] can not be reached
Below is my Jenkinsfile stage for Sonarqube:
stage('SonarQube') {
    agent {
        dockerfile {
            filename 'sonar/Dockerfile'
            args '-u root:root'
        }
    }
    steps {
        withCredentials([string(credentialsId: 'trl-mtr-sonar-login', variable: 'SONAR_LOGIN')]) {
            script {
                sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
            }
        }
    }
}
The 'withCredentials' step is used in the snippet above. I would like to attach the container to the same network as the host. While browsing I found the manual commands to do this, as well as the docker.image(...).inside() approach, but I still cannot consolidate it all into my pipeline for SonarQube:
# Start a container attached to a specific network
docker run --network [network] [container]
# Attach a running container to a network
docker network connect [network] [container]
I also created the stage below, but it fails as well:
stage('SonarTests') {
    steps {
        docker.image('sonar/Dockerfile').inside('-v /var/run/docker.sock:/var/run/docker.sock --entrypoint="" --net bridge') {
            sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
        }
    }
}
Could someone please assist here?
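For what it's worth, since the Jenkins host can already reach the server, one hedged direction is to request host networking directly in the dockerfile agent via args (an untested sketch, using the standard docker run --network flag; steps unchanged):

```groovy
stage('SonarQube') {
    agent {
        dockerfile {
            filename 'sonar/Dockerfile'
            // assumption: host networking lets the container reuse the
            // host's DNS and routing, which already reach the SonarQube URL
            args '-u root:root --network host'
        }
    }
    // steps as in the stage above
}
```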

Jenkins pipeline how to use string interpolation correctly

In my docker run, I would like to pass a secret as an env variable. And that works just fine. However, I also have a second variable "foo" that I would like to echo inside my container. This does not work. How can I get the variable "foo" to expand inside the triple single quotes?
stages {
    stage('test') {
        steps {
            script {
                def foo = "bar"
                sh '''
                    docker run \
                        --rm \
                        --env secret=$ENV_SECRET \
                        python:3.8.12 /bin/bash -c \
                        "echo $foo"
                '''
            }
        }
    }
}
The syntax for interpolating Groovy variables into a Groovy string involves double quotes ("). If your ENV_SECRET is an environment variable bound from a withCredentials block, then you want to interpolate it within the shell interpreter instead. Therefore, we can update your shell step to:
sh """
    docker run \
        --rm \
        --env secret=\$ENV_SECRET \
        python:3.8.12 /bin/bash -c \
        'echo $foo'
"""
Your syntax would have interpolated shell variables instead. Note that both will interpolate environment variables, but with different interpreters.
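The effect can be sketched outside Jenkins with plain shell variables standing in for the Groovy ones (foo and ENV_SECRET below are stand-ins, not Jenkins bindings):

```shell
export ENV_SECRET='s3cr3t'
foo='bar'
# this double-quoted string mimics Groovy's """ interpolation:
# $foo is expanded immediately, while \$ENV_SECRET survives as a
# literal $ENV_SECRET for the inner shell to expand later
cmd="echo name=$foo secret=\$ENV_SECRET"
bash -c "$cmd"
```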

Ansible ad-hoc inventory not working when executed in a shell command in a Jenkins pipeline?

Ansible v2.11.x
I have a Jenkins pipeline that does this. All the $VARIABLES are passed in from the job's parameters.
withCredentials([string(credentialsId: "VAULT_PASSWORD", variable: "VAULT_PASSWORD")]) {
    stage("Configure $env.IP_ADDRESS") {
        sh """
            ansible-playbook -i \\"$env.IP_ADDRESS,\\" \
                -e var_host=$env.IP_ADDRESS \
                -e web_branch=$env.WEB_BRANCH \
                -e web_version=$env.WEB_VERSION \
                site.yml
        """
    }
}
My playbook is this
---
- hosts: "{{ var_host | default('site') }}"
  roles:
    - some_role
I have a group_vars/all.yml file meant to be used by ad-hoc inventories like this. When I run the pipeline, I simply get the following, and the run does nothing:
22:52:29 + ansible-playbook -i "10.x.x.x," -e var_host=10.x.x.x -e web_branch=development -e web_version=81cdedd6fe-20210811_2031 site.yml
22:52:31 [WARNING]: Could not match supplied host pattern, ignoring: 10.x.x.x
If I go on the build node and execute exactly the same command, it works. I can also execute the same command on my Mac, and it works too.
So why does the ad-hoc inventory not work when executed in the pipeline?
This post gave me a clue
The correct syntax that worked for me is
withCredentials([string(credentialsId: "VAULT_PASSWORD", variable: "VAULT_PASSWORD")]) {
    stage("Configure $env.IP_ADDRESS") {
        sh """
            ansible-playbook -i $env.IP_ADDRESS, \
                -e var_host=$env.IP_ADDRESS \
                -e web_branch=$env.WEB_BRANCH \
                -e web_version=$env.WEB_VERSION \
                site.yml
        """
    }
}
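The likely cause: the doubly escaped \\" survived both Groovy and the outer shell, so ansible-playbook received a host pattern containing literal quote characters, which cannot match the bare IP. A small sketch of the argv difference (with a hypothetical IP):

```shell
# show() prints the single argument it receives, bracketed for clarity
show() { printf 'pattern=[%s]\n' "$1"; }
show \"10.0.0.1,\"   # broken: the pattern includes literal quotes
show 10.0.0.1,       # fixed: bare pattern; the trailing comma makes it a list
```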

How do I set docker-credential-ecr-login in my PATH before anything else in GitLab CI

I'm using AWS ECR to host a private Docker image, and I would like to use it in GitLab CI.
According to the documentation, I need to set up docker-credential-ecr-login to fetch the private image, but I have no idea how to do that before anything else. This is my .gitlab-ci.yml file:
image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest

tests:
  stage: test
  before_script:
    - echo "before_script"
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
  script:
    - echo "script"
    - bundle install
    - bundle exec rspec
  allow_failure: true # for now as we do not have tests
Thank you.
I confirm the feature at stake is not yet available in GitLab CI; however, I've recently seen that it is possible to implement a generic workaround to run a dedicated CI script within a container taken from a private Docker image.
The template file .gitlab-ci.yml below is adapted from the OP's example, using the Docker-in-Docker approach I suggested in this other SO answer, itself inspired by the GitLab CI docs dealing with dind:
stages:
  - test

variables:
  IMAGE: "0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest"
  REGION: "ap-northeast-1"

tests:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    # GIT_STRATEGY: none # uncomment if "git clone" is unneeded for this job
  before_script:
    - ': before_script'
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $(aws ecr get-login --no-include-email --region "$REGION")
    - docker pull "$IMAGE"
  script:
    - ': script'
    - |
      docker run --rm -v "$PWD:/build" -w /build "$IMAGE" /bin/bash -c "
        export PS4='+ \e[33;1m($CI_JOB_NAME # line \$LINENO) \$\e[0m '  # optional
        set -ex
        ## TODO insert your multi-line shell script here ##
        echo \"One comment\"  # quotes must be escaped here
        : A better comment
        echo $PWD   # interpolated outside the container
        echo \$PWD  # interpolated inside the container
        bundle install
        bundle exec rspec
        ## (cont'd) ##
      "
    - ': done'
  allow_failure: true # for now as we do not have tests
This example assumes the Docker $IMAGE contains the /bin/bash binary, and relies on the so-called block style of YAML.
The above template already contains comments, but to be self-contained:
You need to escape double quotes if your Bash commands contain them, because the whole code is surrounded by docker run … " and ";
You also need to escape local Bash variables (cf. the \$PWD above), otherwise these variables will be resolved prior running the docker run … "$IMAGE" /bin/bash -c "…" command itself.
I replaced the echo "stuff" commands with their more concise colon counterpart:
set -x
: stuff
: note that these three shell commands do nothing
: but printing their args thanks to the -x option.
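The trace effect can be checked directly; under set -x, the colon's arguments appear in the xtrace output (on stderr) while the command itself does nothing:

```shell
# capture the xtrace output of a colon "comment"
# (bash's PS4 prompt defaults to '+ ')
trace=$(bash -c 'set -x; : building step one' 2>&1)
echo "$trace"
```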
[Feedback is welcome as I can't directly test this config (I'm not an AWS ECR user), but I'm puzzled by the fact the OP's example contained at the same time some apt and apk commands…]
Related remark on a pitfall of set -e
Beware that the following script is buggy:
set -e
command1 && command2
command3
Namely, write instead:
set -e
command1 ; command2
command3
or:
set -e
( command1 && command2 )
command3
To be convinced about this, you can try running:
bash -e -c 'false && true; echo $?; echo this should not be run'
→ 1
→ this should not be run
bash -e -c 'false; true; echo $?; echo this should not be run'
→ (no output: the shell exits at the failing false)
bash -e -c '( false && true ); echo $?; echo this should not be run'
→ (no output: the failing subshell triggers the exit)
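The cases above can be condensed into one runnable sketch (run is a local helper introduced here, not part of the original answer):

```shell
# run a snippet under bash -e and report its exit status
run() { bash -e -c "$1" 2>/dev/null; echo "exit=$?"; }
run 'false && true; echo reached'   # pitfall: execution keeps going
run 'false; echo reached'           # correct abort: echo never runs
```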
From the GitLab documentation: in order to interact with your AWS account, GitLab CI/CD pipelines require both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be defined in your GitLab settings under Settings > CI/CD > Variables. Then add this to your before_script:
image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest

tests:
  stage: test
  before_script:
    - echo "before_script"
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $( aws ecr get-login --no-include-email )
  script:
    - echo "script"
    - bundle install
    - bundle exec rspec
  allow_failure: true # for now as we do not have tests
Also, you had a typo: it is awscli, not awsclir. Then add the build, test, and push steps accordingly.
I think you have a logic error here: the image in the build configuration is the CI script runner image, not the image you build and deploy.
You shouldn't need your application image there at all, since the runner image only provides utilities and the connection to GitLab CI; it normally shouldn't carry any of your project's dependencies.
Please check examples like this one https://gist.github.com/jlis/4bc528041b9661ae6594c63cd2ef673c to see how to do it the correct way.
I faced the same problem using the docker executor mode of the GitLab runner.
SSHing into the EC2 instance showed that docker-credential-ecr-login was present in /usr/bin/. To make it available inside the build container, I had to mount it into the GitLab runner container:
gitlab-runner register -n \
--url '${gitlab_url}' \
--registration-token '${registration_token}' \
--template-config /tmp/gitlab_runner.template.toml \
--executor docker \
--tag-list '${runner_name}' \
--description 'gitlab runner for ${runner_name}' \
--docker-privileged \
--docker-image "alpine" \
--docker-disable-cache=true \
--docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
--docker-volumes "/cache" \
--docker-volumes "/usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login" \
--docker-volumes "/home/gitlab-runner/.docker:/root/.docker"
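The same volume mounts could also be carried in the file passed to --template-config; an untested sketch of /tmp/gitlab_runner.template.toml (section names follow the runner's config.toml format):

```toml
[[runners]]
  [runners.docker]
    volumes = [
      "/var/run/docker.sock:/var/run/docker.sock",
      "/cache",
      "/usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login",
      "/home/gitlab-runner/.docker:/root/.docker"
    ]
```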
More information on this thread as well: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1583#note_375018948
We have a similar setup where we need to run CI jobs based on an image hosted on ECR.
Steps to follow:
Follow this guide: https://github.com/awslabs/amazon-ecr-credential-helper
The gist of the link above, if you are on Amazon Linux 2:
sudo amazon-linux-extras enable docker
sudo yum install amazon-ecr-credential-helper
Open ~/.docker/config.json on your GitLab runner in the vi editor and paste this code into it:
{
  "credHelpers": {
    "aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
  }
}
source ~/.bashrc
systemctl restart docker
Also remove any references to DOCKER_AUTH_CONFIG from your GitLab Settings > CI/CD > Variables.
That's it.
