how to use secret variables from Azure variable group in pipeline? - docker

We have the following variables defined in a variable group in Azure:
"DATABASE": {
"isSecret": null,
"value": "smdb_all"
},
"HOSTNAME": {
"isSecret": null,
"value": "localhost"
},
"PASSWORD": {
"isSecret": true,
"value": null
},
"ROOT_PASSWORD": {
"isSecret": true,
"value": null
},
"USER": {
"isSecret": null,
"value": "IntegrationTest"
}
}
These group variables are then read in an Azure pipeline:
steps:
  - powershell: |
      # Get variables from group
      $be_common=az pipelines variable-group variable list --group-id <some_id> --output json | ConvertFrom-Json
      echo "$be_common"
      $user = $be_common.USER.value
      $root_password = $be_common.ROOT_PASSWORD.value
      $database = $be_common.DATABASE.value
      $password = $be_common.PASSWORD.value
      echo "Get single values"
      echo "$user"
      echo "$database"
      echo "$password"
      echo "$root_password"
      echo "##vso[task.setvariable variable=task_user]$user"
      echo "##vso[task.setvariable variable=task_database]$database"
      echo "##vso[task.setvariable variable=task_password]$password"
      echo "##vso[task.setvariable variable=task_root_password]$root_password"
      echo "End"
    env:
      AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)
    displayName: 'Get backend common variables'
  - task: Docker@2
    displayName: Login to ACR
    inputs:
      command: login
      containerRegistry: ${{ parameters.dockerRegistryServiceConnection }}
  - task: Docker@2
    displayName: Build an image to container registry
    inputs:
      command: build
      repository: ${{ parameters.imageRepository }}
      dockerfile: ${{ parameters.dockerfilePath }}
      buildContext: $(Build.SourcesDirectory)
      containerRegistry: ${{ parameters.dockerRegistryServiceConnection }}
      tag: '$(Build.BuildId)'
      arguments: '--build-arg CURRENT_ROOT_PASSWORD=$(task_root_password) --build-arg CURRENT_USER=$(task_user) --build-arg CURRENT_PASSWORD=$(password) --build-arg CURRENT_DATABASE=$(task_database)'
The problem is that, because the root password and the password are secret variables, after the conversion "task_root_password" and "task_password" are empty strings. This is the log of the pipeline (the pipeline builds and pushes a MariaDB Docker image):
createdAt:2023-02-01T15:04:26Z; layerSize:0B; createdBy:/bin/sh -c #(nop) ENV MARIADB_DATABASE=smdb_all; layerId:sh
createdAt:2023-02-01T15:04:25Z; layerSize:0B; createdBy:/bin/sh -c #(nop) ENV MARIADB_PASSWORD=; la
createdAt:2023-02-01T15:04:24Z; layerSize:0B; createdBy:/bin/sh -c #(nop) ENV MARIADB_USER=IntegrationTest; layerId:sh
createdAt:2023-02-01T15:04:23Z; layerSize:0B; createdBy:/bin/sh -c #(nop) ENV MARIADB_ROOT_PASSWORD=; la
What should I do so that the actual values of the password and the root_password are passed as build arguments?
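One point worth noting, and the question's own JSON already hints at it: az pipelines variable-group variable list never returns the values of secret variables (they come back as null), which is why $password and $root_password end up empty. A minimal sketch of an alternative, assuming the variable group is linked to the pipeline under a hypothetical name "be-common": secrets from a linked group can be referenced directly as macro variables in task inputs (they are expanded at runtime and masked in logs), so the az CLI step and the task.setvariable round-trip are not needed for this.

variables:
  - group: be-common   # hypothetical group name; link the group instead of querying it with the az CLI

steps:
  - task: Docker@2
    displayName: Build an image to container registry
    inputs:
      command: build
      repository: ${{ parameters.imageRepository }}
      dockerfile: ${{ parameters.dockerfilePath }}
      buildContext: $(Build.SourcesDirectory)
      containerRegistry: ${{ parameters.dockerRegistryServiceConnection }}
      tags: '$(Build.BuildId)'
      # Secret variables from the linked group are referenced directly in the task input
      arguments: '--build-arg CURRENT_ROOT_PASSWORD=$(ROOT_PASSWORD) --build-arg CURRENT_USER=$(USER) --build-arg CURRENT_PASSWORD=$(PASSWORD) --build-arg CURRENT_DATABASE=$(DATABASE)'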

Related

how to set/use env variable in github actions?

I would like to use a path location in several steps of a GitHub Actions workflow. I tried to follow the "DAY_OF_WEEK" example, but my test failed:
name: env_test
on:
  workflow_dispatch:
  push:
    branches: [ "main" ]
env:
  ILOC: d:/a/destination/
jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Instalation Location
        shell: cmd
        run: |
          echo "just iloc"
          echo "${ILOC}"
          echo "env"
          echo env.ILOC
          mkdir "${ILOC}"
Here is the relevant part of the log:
Run echo "just iloc"
echo "just iloc"
echo "${ILOC}"
echo "env"
echo env.ILOC
mkdir "${ILOC}"
shell: C:\Windows\system32\cmd.EXE /D /E:ON /V:OFF /S /C "CALL "{0}""
env:
ILOC: d:/a/local
"just iloc"
"${ILOC}"
"env"
env.ILOC
'#' is not recognized as an internal or external command,
operable program or batch file.
'#' is not recognized as an internal or external command,
operable program or batch file.
Error: Process completed with exit code 1.
So how do I properly set a GitHub Actions global variable?
It looks like you are setting it properly; only the way it's accessed needs attention:
name: learn_to_use_actions

on:
  push

env:
  MYVAR: d:/a/destination/
jobs:
  learn:
    name: use env
    runs-on: ubuntu-latest
    steps:
      - name: print env
        if: ${{ env.MYVAR == 'TEST' }}
        run: |
          echo "the var was detected to be test = " $MYVAR

      - name: other print env
        if: ${{ env.MYVAR != 'TEST' }}
        run: |
          echo "the var was NOT detected to be test = " $MYVAR
When using it inside a conditional, the whole test has to be wrapped in an expression, like:
${{ env.MYVAR != 'TEST' }}
When using it in a command, it looks like a *nix environment variable:
echo "the var was detected to be test = " $MYVAR

GitHub Actions: How to get contents of VERSION file into environment variable?

In my Docker project's repo, I have a VERSION file that contains nothing more than the version number.
1.2.3
In Travis, I'm able to cat the file to an environment variable, and use that to tag my build before pushing to Docker Hub.
---
env:
  global:
    - USER=username
    - REPO=my_great_project
    - VERSION=$(cat VERSION)
What is the equivalent of that in GitHub Actions? I tried this, but it's not working.
name: Test
on:
  ...
  ...
env:
  USER: username
  REPO: my_great_project
jobs:
  build_ubuntu:
    name: Build Ubuntu
    runs-on: ubuntu-latest
    env:
      BASE: ubuntu
    steps:
      - name: Check out the codebase
        uses: actions/checkout@v2
      - name: Build the image
        run: |
          VERSION=$(cat VERSION)
          docker build --file ${BASE}/Dockerfile --tag ${USER}/${REPO}:${VERSION} .
  build_alpine:
    name: Build Alpine
    runs-on: ubuntu-latest
    env:
      BASE: alpine
    ...
    ...
    ...
I've also tried this, which doesn't work.
- name: Build the image
  run: |
    echo "VERSION=$(cat ./VERSION)" >> $GITHUB_ENV
    docker build --file ${BASE}/Dockerfile --tag ${USER}/${REPO}:${VERSION} .
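A likely reason this second attempt fails (assuming standard GITHUB_ENV behaviour): variables appended to $GITHUB_ENV only become available in subsequent steps, not in the step that writes them, so ${VERSION} is still empty on the docker build line. Within a single step, a plain shell variable is enough, for example:

- name: Build the image
  run: |
    VERSION=$(cat VERSION)                       # plain shell variable, usable immediately in this step
    echo "VERSION=$VERSION" >> "$GITHUB_ENV"     # also export it for later steps
    docker build --file "${BASE}/Dockerfile" --tag "${USER}/${REPO}:${VERSION}" .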
I went down the road that Benjamin W. was talking about with having VERSION in my environment vs just in that specific step.
This worked for me to set the variable in one step, then use it in separate steps.
- name: Set variables
  run: |
    VER=$(cat VERSION)
    echo "VERSION=$VER" >> $GITHUB_ENV
- name: Build Docker Image
  uses: docker/build-push-action@v2
  with:
    context: .
    file: ${{ env.BASE_DIR }}/Dockerfile
    load: true
    tags: |
      ${{ env.USER }}/${{ env.REPO }}:${{ env.VERSION }}
      ${{ env.USER }}/${{ env.REPO }}:latest
As I want to reuse env vars between jobs, this is how I do it. I wish I could find a way to minimize this code.
In this example, I use vars from my Dockerfile, but it will work with any file.
pre_build:
  runs-on: ubuntu-20.04
  steps:
    ...
    -
      name: Save variables to disk
      run: |
        cat $(echo ${{ env.DOCKERFILE }}) | grep DOCKERHUB_USER= | head -n 1 | grep -o '".*"' | sed 's/"//g' > ~/varz/DOCKERHUB_USER
        cat $(echo ${{ env.DOCKERFILE }}) | grep GITHUB_ORG= | head -n 1 | grep -o '".*"' | sed 's/"//g' > ~/varz/GITHUB_ORG
        cat $(echo ${{ env.DOCKERFILE }}) | grep GITHUB_REGISTRY= | head -n 1 | grep -o '".*"' | sed 's/"//g' > ~/varz/GITHUB_REGISTRY
        echo "$(cat ~/varz/DOCKERHUB_USER)/$(cat ~/varz/APP_NAME)" > ~/varz/DKR_PREFIX
    -
      name: Set ALL variables for this job | à la sauce GitHub Actions
      run: |
        echo "VERSION_HASH_DATE=$(cat ~/varz/VERSION_HASH_DATE)" >> $GITHUB_ENV
        echo "VERSION_HASH_ONLY=$(cat ~/varz/VERSION_HASH_ONLY)" >> $GITHUB_ENV
        echo "VERSION_CI=$(cat ~/varz/VERSION_CI)" >> $GITHUB_ENV
        echo "VERSION_BRANCH=$(cat ~/varz/VERSION_BRANCH)" >> $GITHUB_ENV
    -
      name: Show variables
      run: |
        echo "${{ env.VERSION_HASH_DATE }} < VERSION_HASH_DATE"
        echo "${{ env.VERSION_HASH_ONLY }} < VERSION_HASH_ONLY"
        echo "${{ env.VERSION_CI }} < VERSION_CI"
        echo "${{ env.VERSION_BRANCH }} < VERSION_BRANCH"
    -
      name: Upload variables as artifact
      uses: actions/upload-artifact@master
      with:
        name: variables_on_disk
        path: ~/varz
test_build:
  needs: [pre_build]
  runs-on: ubuntu-20.04
  steps:
    ...
    -
      name: Job preparation | Download variables from artifact
      uses: actions/download-artifact@master
      with:
        name: variables_on_disk
        path: ~/varz
    -
      name: Job preparation | Set variables for this job | à la sauce GitHub Actions
      run: |
        echo "VERSION_HASH_DATE=$(cat ~/varz/VERSION_HASH_DATE)" >> $GITHUB_ENV
        echo "VERSION_HASH_ONLY=$(cat ~/varz/VERSION_HASH_ONLY)" >> $GITHUB_ENV
        echo "VERSION_BRANCH=$(cat ~/varz/VERSION_BRANCH)" >> $GITHUB_ENV
        echo "BRANCH_NAME=$(cat ~/varz/BRANCH_NAME)" >> $GITHUB_ENV

Jenkins won't use arguments after variable substitution

I'm trying to use a variable related to the SHA of a Gemfile. The problem is that when I use it in a sh command, the other arguments won't be interpreted.
So, for example:
docker build ${VAR} .
will result in an error stating that "docker build" requires exactly 1 argument, since the " ." of the command is not being interpreted.
Here is the code that tries to pull an image, builds it and publish it:
def GEMFILE_SHA = ""
pipeline {
  .....
  stages {
    stage("Build Docker Image and Push to Artifactory - Snapshot Repository") {
      steps {
        container("docker") {
          script {
            GEMFILE_SHA = sh(returnStdout: true, script: "sha256sum Gemfile | cut -d ' ' -f1 | head -n1", label: "Set Gemfile sha")
          }
          sh script: "docker login -u ${DOCKER_REGISTRY_CREDS_USR} -p ${DOCKER_REGISTRY_CREDS_PSW} ${DOCKER_REGISTRY_URL}", label: "Docker Login."
          catchError(buildResult: 'SUCCESS', stageResult: 'SUCCESS') {
            sh script: "docker pull ${DOCKER_REGISTRY_URL}/${DOCKER_REPO}:${GEMFILE_SHA}", label: "Pull Cached Image."
          }
          sh script: "docker build --network=host --no-cache -t ${DOCKER_REGISTRY_URL}/${DOCKER_REPO}:${GEMFILE_SHA} .", label: "Build Docker Image."
          sh script: "docker push ${DOCKER_REGISTRY_URL}/${DOCKER_REPO}:${GEMFILE_SHA}", label: "Push Docker Image."
        }
      }
    }
  }
}
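A likely cause, not confirmed in the question: sh(returnStdout: true, ...) returns the command's output including its trailing newline, so GEMFILE_SHA expands to something like "abc123\n" and the newline splits the interpolated docker build command, leaving the final "." on a line of its own. Trimming the captured output is the usual fix, e.g.:

script {
    // trim() removes the trailing newline that returnStdout keeps in the captured output
    GEMFILE_SHA = sh(returnStdout: true, script: "sha256sum Gemfile | cut -d ' ' -f1 | head -n1", label: "Set Gemfile sha").trim()
}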

Jenkins docker_login ansible playbook : Permission denied

I would like to copy a Docker image into a Docker registry with Jenkins.
When I execute the Ansible playbook I get:
"msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"
I suppose that Ansible runs under the jenkins user, because of this link and because of the log file:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
Because the Ansible playbook tries to do a docker_login, I understand that the jenkins user needs to be able to connect to Docker.
So I added jenkins to the docker group:
I don't understand why the permission is denied.
The whole Jenkins log:
TASK [Log into Docker registry]
************************************************
task path: /var/jenkins_home/workspace/.../build_docker.yml:8
Using module file /usr/lib/python2.7/dist-
packages/ansible/modules/core/cloud/docker/docker_login.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502 `" && echo ansible-tmp-1543388409.78-179785864196502="` echo ~/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpFASoHo TO /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/ /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py; rm -rf "/var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"api_version": null,
"cacert_path": null,
"cert_path": null,
"config_path": "~/.docker/config.json",
"debug": false,
"docker_host": null,
"email": null,
"filter_logger": false,
"key_path": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"reauthorize": false,
"registry_url": "https://registry.docker....si",
"ssl_version": null,
"timeout": null,
"tls": null,
"tls_hostname": null,
"tls_verify": null,
"username": "jenkins"
},
"module_name": "docker_login"
},
"msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"
}
to retry, use: --limit @/var/jenkins_home/workspace/.../build_docker.retry
The whole Ansible playbook:
---
- hosts: localhost
  vars:
    git_branch: "{{ GIT_BRANCH|default('development') }}"
  tasks:
    - name: Log into Docker registry
      docker_login:
        registry_url: https://registry.docker.....si
        username: ...
        password: ....
If anyone has the same problem, I found the solution.
My registry doesn't have a valid HTTPS certificate, so you need to add
{
  "insecure-registries" : [ "https://registry.docker.....si" ]
}
inside /etc/docker/daemon.json
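For the permission-denied error itself (a sketch, assuming the playbook runs locally as the jenkins user against the host's Docker daemon): the jenkins user needs access to the Docker socket, and new group membership only takes effect for processes started after the change, so Jenkins has to be restarted. If Jenkins itself runs in a container, the socket must be mounted into it and the user inside the container needs a matching group.

# Hypothetical commands, assuming Docker and a systemd-managed Jenkins on the same host
sudo usermod -aG docker jenkins      # add the jenkins user to the docker group
sudo systemctl restart jenkins       # group changes are only picked up by newly started processes
ls -l /var/run/docker.sock           # expect group "docker" with rw access, e.g. srw-rw---- root docker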

Travis-CI does not add deploy section

I followed the Travis-CI documentation for creating multiple deployments and for notifications.
So this is my config (the end has the deploy and notifications sections):
sudo: required # is required to use docker service in travis
language: node_js
node_js:
  - 'node'
services:
  - docker
before_install:
  - npm install -g yarn --cache-min 999999999
  - "/sbin/start-stop-daemon --start --quiet --pidfile /tmp/custom_xvfb_99.pid --make-pidfile --background --exec /usr/bin/Xvfb -- :99 -ac -screen 0 1280x1024x16"
# Use yarn for faster installs
install:
  - yarn
# Init GUI
before_script:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"
  - sleep 3 # give xvfb some time to start
script:
  - npm run test:single-run
cache:
  yarn: true
  directories:
    - ./node_modules
before_deploy:
  - npm run build:backwards
  - docker --version
  - pip install --user awscli # install aws cli w/o sudo
  - export PATH=$PATH:$HOME/.local/bin # put aws in the path
deploy:
  - provider: script
    script: scripts/deploy.sh ansyn/client-chrome.v.44 $TRAVIS_COMMIT
    on:
      branch: travis
  - provider: script
    script: scripts/deploy.sh ansyn/client $TRAVIS_TAG
    on:
      tags: true
notifications:
  email: false
But in Travis (View Config) this translates to the following, with no deploy and no notifications:
{
  "sudo": "required",
  "language": "node_js",
  "node_js": "node",
  "services": [
    "docker"
  ],
  "before_install": [
    "npm install -g yarn --cache-min 999999999",
    "/sbin/start-stop-daemon --start --quiet --pidfile /tmp/custom_xvfb_99.pid --make-pidfile --background --exec /usr/bin/Xvfb -- :99 -ac -screen 0 1280x1024x16"
  ],
  "install": [
    "yarn"
  ],
  "before_script": [
    "export DISPLAY=:99.0",
    "sh -e /etc/init.d/xvfb start",
    "sleep 3"
  ],
  "script": [
    "npm run test:single-run"
  ],
  "cache": {
    "yarn": true,
    "directories": [
      "./node_modules"
    ]
  },
  "before_deploy": [
    "npm run build:backwards",
    "docker --version",
    "pip install --user awscli",
    "export PATH=$PATH:$HOME/.local/bin"
  ],
  "group": "stable",
  "dist": "trusty",
  "os": "linux"
}
Try changing
script: scripts/deploy.sh ansyn/client $TRAVIS_TAG
to
script: sh -x scripts/deploy.sh ansyn/client $TRAVIS_TAG
This will show in detail whether or not the script is being executed. I also looked into the build after those changes; it fails on the step below:
Step 4/9 : COPY ./dist /opt/ansyn/app
You need to change your deploy section to
deploy:
  - provider: script
    script: sh -x scripts/deploy.sh ansyn/client-chrome.v.44 $TRAVIS_COMMIT
    skip_cleanup: true
    on:
      branch: travis
  - provider: script
    script: sh -x scripts/deploy.sh ansyn/client $TRAVIS_TAG
    skip_cleanup: true
    on:
      tags: true
So that the dist folder is there during deploy and not cleaned up
