How to test a git-crypt encrypted repo with Jenkins

The tests I run with Jenkins (multi-branch pipeline) on my repo use an encrypted file (keys.py), managed with git-crypt. Locally I decrypt it with git-crypt unlock, but I cannot simply add that step to the Jenkinsfile because of how the command works:

1. gpg decrypts the symmetric key used for encrypting my file (i.e. .git-crypt/keys/default/0/xxxx.gpg). That symmetric key is encrypted with RSA, so my private key is needed to decrypt it, and the private key itself is protected by a passphrase you are prompted to enter.
2. git-crypt then decrypts keys.py with the recovered symmetric key.

To get rid of the prompt, run the git-crypt steps manually, passing your passphrase to gpg as a command-line argument and the decrypted symmetric key to git-crypt unlock. A few extra tricks, such as the Jenkins environment variable $WORKSPACE, make this easier:
gpg --no-tty --passphrase YOUR_PASSPHRASE_GOES_HERE --output $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg --decrypt $WORKSPACE/.git-crypt/keys/default/0/YOUR_KEY_FILE_GOES_HERE.gpg && git-crypt unlock $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg
This raises a second issue: running the command a second time fails as well, because the repo should only be unlocked while it is still locked. To solve that, first check whether the file containing the decrypted symmetric key already exists; it is only generated by the previous step. We end up with a stage that looks like:
stage('Unlock repo') {
    steps {
        script {
            sh("[ -f $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg ] || gpg --no-tty --passphrase YOUR_PASSPHRASE_GOES_HERE --output $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg --decrypt $WORKSPACE/.git-crypt/keys/default/0/YOUR_KEY_FILE_GOES_HERE.gpg && git-crypt unlock $WORKSPACE/.git-crypt/keys/default/0/decrypted.gpg")
        }
    }
}
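The "[ -f … ] ||" guard used above is a general idempotence pattern: the command's own output file doubles as the marker that it already ran. A minimal sketch of the pattern outside Jenkins, with a stand-in function instead of the real gpg/git-crypt calls:

```shell
# Sketch of the idempotence guard: the expensive command runs only while
# its output file is absent, so re-running the stage is a no-op.
# "unlock" is a placeholder for the gpg + git-crypt pipeline.
marker=$(mktemp -d)/decrypted.gpg
unlock() { echo "unlocking"; touch "$marker"; }
[ -f "$marker" ] || unlock   # first run: prints "unlocking" and creates the file
[ -f "$marker" ] || unlock   # second run: guard short-circuits, nothing happens
```

The same shape works for any once-per-workspace setup step, as long as the guarded command reliably creates the file it is tested against.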

I've built another solution for git-crypt by creating a separate container image that ships git-crypt, and invoking it in stages before and after the main build step:
pipeline {
    environment {
        // $HOME is not set in build-agent
        JAVA_TOOL_OPTIONS = '-Duser.home=/home/jenkins/'
    }
    agent {
        label 'docker'
    }
    stages {
        stage('Decrypt') {
            agent {
                docker {
                    image 'wjung/jenkins-git-crypt:latest'
                    registryUrl 'https://index.docker.io/v1/'
                    registryCredentialsId 'docker-hub'
                }
            }
            steps {
                withCredentials([file(credentialsId: 'git-crypt-key', variable: 'FILE')]) {
                    sh 'cd $WORKSPACE; git-crypt unlock $FILE'
                }
            }
        }
        stage('Build docker image') {
            agent {
                docker {
                    image 'maven:3-jdk-11'
                    args '-v /services/maven/m2:/home/jenkins/.m2 -v /services/maven/m2/cache:/home/jenkins/.cache'
                }
            }
            steps {
                configFileProvider([configFile(fileId: 'mvn-setting-xml', variable: 'MAVEN_SETTINGS')]) {
                    sh 'mvn -s $MAVEN_SETTINGS -B -Dmaven.test.skip clean deploy'
                }
            }
        }
        stage('Lock dir') {
            agent {
                docker {
                    image 'wjung/jenkins-git-crypt:latest'
                    registryUrl 'https://index.docker.io/v1/'
                    registryCredentialsId 'docker-hub'
                }
            }
            steps {
                sh 'cd $WORKSPACE; git-crypt lock'
            }
        }
    }
}
The encryption key is exported from the repository with git-crypt export-key TMPFILE and then added to Jenkins as a secret file with the id git-crypt-key.

Related

Jenkins Pipeline: Run the step when you see a new file or when it changes

I have a Laravel application that needs the yarn command at initialization, and afterwards only when certain files change.
With the code below I can detect when package.json has changed, but I need a suggestion for also running it on the very first build (on that build, the file, together with all the others, looks like a new file from the Jenkinsfile's perspective).
Thanks!
Current try:
stage("Install NodeJS dependencies") {
    when {
        changeset "package.json"
    }
    agent {
        docker {
            image 'node:14-alpine'
            reuseNode true
        }
    }
    steps {
        sh 'yarn --silent'
        sh 'yarn dev --silent'
    }
}

Use GITHUB_ACCESS_TOKEN in Jenkins inside docker for authentication

I've followed the solution provided by @biolauri in this post for using GITHUB_ACCESS_TOKEN in Jenkins inside a Docker container. Below is my stage code:
stage('Push git tag') {
    agent { label 'docker' }
    steps {
        script {
            try {
                container = docker.build("git", "-f git.dockerfile .")
                container.inside {
                    withCredentials([usernamePassword(
                        credentialsId: "<credential-name-stored-in-jenkins>",
                        usernameVariable: "GITHUB_APP",
                        passwordVariable: "GITHUB_ACCESS_TOKEN")]) {
                        withEnv(["GITHUB_TOKEN=$GITHUB_ACCESS_TOKEN"]) {
                            sh "git tag ${version_g}"
                            sh "git push origin ${version_g}"
                        }
                    }
                }
            } catch (Exception e) {
                // sh "git tag -d ${version_g} || true"
                throw e
            }
        }
    }
}
But I am still getting this error:
fatal: could not read Username for 'https://github.com': No such
device or address
What am I doing wrong here?
Just to make sure I am using the correct GitHub App ID, I echoed it, and it is indeed correct. I also echoed the generated GITHUB_ACCESS_TOKEN, and it does look like a generated token. But the Docker image that is built does not seem to pick up the GITHUB_TOKEN environment variable, so git push cannot authenticate.
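One distinction worth checking here: withEnv/export makes a variable visible to child processes, but git itself never reads GITHUB_TOKEN; something (a credential helper, or credentials embedded in the remote URL) has to hand the token to git. A hedged sketch of that distinction; the helper function below is hypothetical, not a real git helper:

```shell
# An exported variable is inherited by child processes, as with withEnv...
export GITHUB_TOKEN=dummy-token
sh -c 'echo "child sees: $GITHUB_TOKEN"'

# ...but git only authenticates if a credential helper (or the URL)
# actually supplies the token. Hypothetical helper output format:
fake_helper() { printf 'username=x-access-token\npassword=%s\n' "$GITHUB_TOKEN"; }
fake_helper
```

So the error above ("could not read Username") is consistent with git never being told to use the token, even though the variable is set correctly.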

Jenkins pipeline script global variable

I am learning Jenkins and am working on a sample pipeline:
pipeline {
    agent any
    stages {
        stage('Stage1') {
            steps {
                bat '''
                cd C:/Users/roger/project/
                python -u script1.py
                '''
            }
        }
        stage('Stage2') {
            steps {
                bat '''
                cd C:/Users/roger/project/abc/
                python -u script2.py
                '''
            }
        }
        stage('Stage3') {
            steps {
                bat '''
                cd C:/Users/roger/project/abc/new_dir/
                python -u demo.py
                '''
            }
        }
    }
}
Is there a way to store the project's base path, C:/Users/roger/project/, in a variable, so that new paths can be appended to it instead of writing the whole path out? How could I write the stages above so that I don't have to repeat the same base path in every stage?
You have several options. The easiest is to define the parameter inside the environment directive, which makes it available to all stages in the pipeline and also loads it into the execution environment of any interpreter step such as sh, bat and powershell, so the scripts you execute see it as an environment variable. In addition, the environment directive supports credential parameters, which is very useful.
In your case it will look like:
pipeline {
    agent any
    environment {
        BASE_PATH = 'C:/Users/roger/project/'
    }
    stages {
        stage('Stage1') {
            steps {
                // Using the parameter as a runtime environment variable, with the bat %VAR% syntax
                bat '''
                cd %BASE_PATH%
                python -u script1.py
                '''
            }
        }
        stage('Stage2') {
            steps {
                // Using Groovy string interpolation to construct the command with the parameter value
                bat """
                cd ${env.BASE_PATH}abc/
                python -u script2.py
                """
            }
        }
    }
}
Another option is a global variable defined above the pipeline block, which behaves like any Groovy variable and is available to all stages in your pipeline (but not to the execution environment of interpreter steps).
Something like:
BASE_PATH = 'C:/Users/roger/project/'
pipeline {
    agent any
    stages {
        stage('Stage1') {
            steps {
                // Using the variable inside a dir step to change directory
                dir(BASE_PATH) {
                    bat 'python -u script1.py'
                }
            }
        }
        stage('Stage2') {
            steps {
                // Using Groovy string interpolation to construct the command with the variable's value
                bat """
                cd ${BASE_PATH}abc/
                python -u script2.py
                """
            }
        }
    }
}

Inject Jenkins Variable to maven using Declarative Pipeline

I am unable to express the circled functionality from the attached image in declarative pipeline syntax.
PS: I am new to this; I searched other answers but none matches my requirements.
For example, if there is a parameter in Jenkins named VERSION, the maven command should become:
clean deploy -B -s pathtosettings.xml -DVERSION=valueinparameter
Below is my current code.
Note: I want ALL the parameters passed automatically; -DVERSION=${params.VERSION} doesn't help me.
pipeline {
    agent any
    stages {
        stage('Checkout Scm') {
            steps {
                git 'ssh://git@XXXXXXXXXXXXXXXXXXXXXXXXX.git'
            }
        }
        stage('Maven Build 0') {
            steps {
                configFileProvider([configFile(fileId: '0c0631a5-6510-4b4a-833d-4b80fa67d5f3', targetLocation: 'settings.xml', variable: 'SETTINGS_XML')]) {
                    withMaven {
                        sh "mvn clean deploy -B -s ${SETTINGS_XML}"
                    }
                }
            }
        }
    }
    tools {
        jdk 'JDK_1.8'
    }
    parameters {
        string(name: 'VERSION', defaultValue: '3_12_0', description: 'version to be in maven')
    }
}
First, I don't think you need targetLocation for this.
To access your parameter's value, use the params prefix.
This is how I use configFileProvider to make it work:
configFileProvider([configFile(fileId: 'configFileId', variable: 'SETTINGS_XML')]) {
    sh "mvn clean deploy -s \$SETTINGS_XML -B -DVERSION=$params.VERSION"
}
With this, the variable that references the settings file is not interpolated by Groovy, it is used correctly in my pipeline, and the version is substituted into the command. Don't forget to use a 'Maven settings.xml' type of file in the configFileProvider.
steps {
    script {
        foo = ""
        params.each { param ->
            foo = "${foo} -D${param.key}=${param.value} "
        }
    }
    configFileProvider([configFile(fileId: 'XXXX', targetLocation: 'settings.xml', variable: 'SETTINGS_XML')]) {
        withMaven {
            sh "mvn clean deploy -B -s ${SETTINGS_XML} ${foo}"
        }
    }
}
This is the only approach I found.
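Stripped of the Groovy, the params.each loop above is just string concatenation of -Dkey=value flags. A sketch of the same idea in plain shell; the parameter names VERSION and ENV are placeholders:

```shell
# Build a "-Dkey=value" flag string from name=value pairs, mirroring the
# Groovy params.each loop. VERSION/ENV are placeholder parameter names.
flags=""
for kv in VERSION=3_12_0 ENV=dev; do
    flags="$flags -D$kv"
done
echo "mvn clean deploy$flags"   # prints: mvn clean deploy -DVERSION=3_12_0 -DENV=dev
```

The Groovy version does exactly this over the pipeline's params map, so every parameter is forwarded without naming each one.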

Pass variables between Jenkins stages

I want to pass a variable that I read in stage A to stage B. In some examples people write it to a file, but I guess that is not really a nice solution. I tried writing it to an environment variable, but wasn't really successful with that. How can I set this up properly?
To get it working I tried a lot of things, and read that I should start the shell with """ instead of ''' and escape variables as \${foo}.
Below is my pipeline:
#!/usr/bin/env groovy
pipeline {
    agent { node { label 'php71' } }
    environment {
        packageName = 'my-package'
        packageVersion = ''
        groupId = 'vznl'
        nexus_endpoint = 'http://nexus.devtools.io'
        nexus_username = 'jenkins'
        nexus_password = 'J3nkins'
    }
    stages {
        // Package dependencies
        stage('Install dependencies') {
            steps {
                sh '''
                echo Skip composer installation
                #composer install --prefer-dist --optimize-autoloader --no-interaction
                '''
            }
        }
        // Unit tests
        stage('Unit Tests') {
            steps {
                sh '''
                echo Running PHP code coverage tests...
                #composer test
                '''
            }
        }
        // Create artifact
        stage('Package') {
            steps {
                echo 'Create package refs'
                sh """
                mkdir -p ./build/zpk
                VERSIONTAG=\$(grep 'version' composer.json)
                REGEX='"version": "([0-9]+.[0-9]+.[0-9]+)"'
                if [[ \${VERSIONTAG} =~ \${REGEX} ]]
                then
                    env.packageVersion=\${BASH_REMATCH[1]}
                    /usr/bin/zs-client packZpk --folder=. --destination=./build/zpk --name=${env.packageName}-${env.packageVersion}.zpk --version=${env.packageVersion}
                else
                    echo "No version found!"
                    exit 1
                fi
                """
            }
        }
        // Publish ZPK package to Nexus
        stage('Publish packages') {
            steps {
                echo "Publish ZPK Package"
                sh "curl -u ${env.nexus_username}:${env.nexus_password} --upload-file ./build/zpk/${env.packageName}-${env.packageVersion}.zpk ${env.nexus_endpoint}/repository/zpk-packages/${groupId}/${env.packageName}-${env.packageVersion}.zpk"
                archive includes: './build/**/*.{zpk,rpm,deb}'
            }
        }
    }
}
As you can see the packageVersion which I read from stage Package needs to be used in stage Publish as well.
Overall tips against the pipeline are of course always welcome as well.
A problem in your code is that you are assigning the environment variable inside the sh step. That step executes in its own isolated process, which inherits the parent's environment variables, but the only ways to pass data back to the parent are STDOUT/STDERR and the exit code. Since you want a string value, it is best to echo the version from the sh step and assign it to a variable within the script context.
If you reuse the node, the script context persists and the variable is available in the subsequent stage. A working example is below. Note that putting this inside a parallel block is likely to fail, since the version variable could be written to by multiple processes.
#!/usr/bin/env groovy
pipeline {
    environment {
        AGENT_INFO = ''
    }
    agent {
        docker {
            image 'alpine'
            reuseNode true
        }
    }
    stages {
        stage('Collect agent info') {
            steps {
                echo "Current agent info: ${env.AGENT_INFO}"
                script {
                    def agentInfo = sh script: 'uname -a', returnStdout: true
                    println "Agent info within script: ${agentInfo}"
                    AGENT_INFO = agentInfo.replace("\n", "")
                    env.AGENT_INFO = AGENT_INFO
                }
            }
        }
        stage("Print agent info") {
            steps {
                script {
                    echo "Collected agent info: ${AGENT_INFO}"
                    echo "Environment agent info: ${env.AGENT_INFO}"
                }
            }
        }
    }
}
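Applied to the original composer.json case, the returnStdout pattern boils down to making the shell snippet print only the version, so the Groovy side can capture it. A sketch with a throwaway composer.json (file contents are made up for illustration):

```shell
# Print only the version field from composer.json, so a Jenkins sh step
# with returnStdout: true could capture it into a Groovy variable.
dir=$(mktemp -d)
cat > "$dir/composer.json" <<'EOF'
{ "name": "demo/app", "version": "1.2.3" }
EOF
version=$(grep -o '"version": *"[0-9][0-9.]*"' "$dir/composer.json" | grep -o '[0-9][0-9.]*')
echo "$version"   # prints: 1.2.3
```

In the pipeline this whole snippet would sit inside sh script: '...', returnStdout: true, with the Groovy assignment happening in the script block rather than inside the shell.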
Another option, which doesn't involve script and stays purely declarative, is to stash the values in a small temporary environment file.
You can then use this stash (a temporary cache that lives only for the run) even if the workload is sprayed out across parallel or distributed nodes.
Something like:
pipeline {
    agent any
    stages {
        stage('first stage') {
            steps {
                // Write out any environment variables you like to a temporary file
                sh 'echo export FOO=baz > myenv'
                // Stash it away for later use
                stash 'myenv'
            }
        }
        stage('later stage') {
            steps {
                // Unstash the temporary file and apply it
                unstash 'myenv'
                // Use the unstashed vars
                sh 'source myenv && echo $FOO'
            }
        }
    }
}
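Stripped of the Jenkins stash/unstash machinery, the handoff in that pipeline is just an env file written by one step and sourced by a later one:

```shell
# Stage 1: write variables to a file (this is what stash/unstash would
# carry between workspaces or nodes)
dir=$(mktemp -d)
echo 'export FOO=baz' > "$dir/myenv"

# Stage 2: source the file and use the variables
. "$dir/myenv"
echo "$FOO"   # prints: baz
```

Because the file travels through the stash rather than the workspace, this also works when the two stages run on different agents.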
