Jenkins hashicorp-vault-plugin empty result

I tried the Jenkins Pipeline Example mentioned here: https://plugins.jenkins.io/hashicorp-vault-plugin
node {
    // define the secrets and the env variables
    def secrets = [
        [$class: 'VaultSecret', path: 'secret/testing', secretValues: [
            [$class: 'VaultSecretValue', envVar: 'testing', vaultKey: 'value_one'],
            [$class: 'VaultSecretValue', envVar: 'testing_again', vaultKey: 'value_two']]],
        [$class: 'VaultSecret', path: 'secret/another_test', secretValues: [
            [$class: 'VaultSecretValue', envVar: 'another_test', vaultKey: 'value']]]
    ]
    // optional configuration, if you do not provide this the next higher configuration
    // (e.g. folder or global) will be used
    def configuration = [$class: 'VaultConfiguration',
                         vaultUrl: 'http://my-very-other-vault-url.com',
                         vaultCredentialId: 'my-vault-cred-id']
    // inside this block your credentials will be available as env variables
    wrap([$class: 'VaultBuildWrapper', configuration: configuration, vaultSecrets: secrets]) {
        sh 'echo $testing'
        sh 'echo $testing_again'
        sh 'echo $another_test'
    }
}
So I installed hashicorp-vault-plugin 2.2.0 in Jenkins 2.173 and started a Vault (v1.1.1) Docker Container using
docker run -d --name vaulttest -p 80:8200 --cap-add=IPC_LOCK -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' vault
Next, I configured a token credential within Jenkins using the token "myroot".
I created the secrets within Vault (using the web UI):
secret/testing: value_one, value_two
secret/another_test: value
First of all, there is an error in the example: with the paths "secret/testing" and "secret/another_test", the plugin fails with a 404 error:
Invalid path for a versioned K/V secrets engine. See the API docs for the appropriate API endpoints to use. If using the Vault CLI, use 'vault kv get' for this operation.
This can be fixed by using the paths "secret/data/testing" and "secret/data/another_test" (see https://issues.jenkins-ci.org/browse/JENKINS-44900).
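With that fix applied, the secrets definition from the example reads:

```groovy
// same as the plugin's example, but with the KV v2 'data/' path segment
def secrets = [
    [$class: 'VaultSecret', path: 'secret/data/testing', secretValues: [
        [$class: 'VaultSecretValue', envVar: 'testing', vaultKey: 'value_one'],
        [$class: 'VaultSecretValue', envVar: 'testing_again', vaultKey: 'value_two']]],
    [$class: 'VaultSecret', path: 'secret/data/another_test', secretValues: [
        [$class: 'VaultSecretValue', envVar: 'another_test', vaultKey: 'value']]]
]
```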
When the job then runs, the variables appear to be empty:
[Pipeline] sh
+ echo
[Pipeline] sh
+ echo
[Pipeline] sh
+ echo
The connection definitely works, because when I provide invalid credentials or invalid paths I receive errors.
Retrieving the secrets directly also returns a valid response:
/ # vault kv get secret/testing
====== Metadata ======
Key              Value
---              -----
created_time     2019-04-17T05:31:23.581020191Z
deletion_time    n/a
destroyed        false
version          3

====== Data ======
Key          Value
---          -----
value_one    HUGO
value_two    BETTY
What am I missing here?

As seen in https://issues.jenkins-ci.org/browse/JENKINS-52646, Vault KV v2 returns a different JSON response.
So you have to use
def secrets = [
    [$class: 'VaultSecret', path: 'secret/data/testing', secretValues: [
        [$class: 'VaultSecretValue', envVar: 'testing', vaultKey: 'data']]]
]
to retrieve the correct JSON response.
The resulting JSON response can then be passed to readJSON:
def result = readJSON text: testing
echo result.value_one
echo result.value_two
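Put together, the workaround (as I understand the linked issue; the configuration map is the one from the question's example) looks like:

```groovy
// KV v2 wraps the stored keys in a 'data' object, so with plugin 2.2.0
// we fetch the whole object and parse it ourselves with readJSON
def secrets = [
    [$class: 'VaultSecret', path: 'secret/data/testing', secretValues: [
        [$class: 'VaultSecretValue', envVar: 'testing', vaultKey: 'data']]]
]
wrap([$class: 'VaultBuildWrapper', configuration: configuration, vaultSecrets: secrets]) {
    // env var 'testing' now holds the secret's keys as a JSON object
    def result = readJSON text: testing
    echo result.value_one
    echo result.value_two
}
```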

Related

How to explicitly parse username and password in docker.withRegistry() method of the dockerhub plugin used in a Jenkins Declarative pipeline

I'm trying to push a Docker image from Jenkins to DockerHub using a declarative pipeline. The DockerHub's credentials are stored in Vault. And, I wish to use the Docker plugin in my pipeline's syntax.
The following attempts were successful:
If I store DockerHub's credentials in Jenkins, the pipeline works fine with the following code snippet:
stage('Publish the Docker Image on DockerHub') {
    steps {
        script {
            docker.withRegistry('', 'dockerhub-credentials') {
                dockerImage.push()
            }
        }
    }
}
If I store DockerHub's credentials in Vault and use shell commands to log in, the pipeline also works successfully with the code snippet below:
stage('Publish the Docker Image on DockerHub') {
    steps {
        withVault(
            configuration: [
                timeout: 60,
                vaultCredentialId: 'vault-jenkins-approle-creds',
                vaultUrl: 'http://172.31.32.203:8200'
            ],
            vaultSecrets: [[
                engineVersion: 2,
                path: 'secret/credentials/dockerhub',
                secretValues: [
                    [envVar: 'DOCKERHUB_USERNAME', vaultKey: 'username'],
                    [envVar: 'DOCKERHUB_PASSWORD', vaultKey: 'password']
                ]
            ]]
        ) {
            script {
                sh "docker login -u $DOCKERHUB_USERNAME -p $DOCKERHUB_PASSWORD"
                sh "docker push <docker-hub-repo>"
            }
        }
    }
}
Now, my query is how to pass the username and password credentials (obtained in 2) to the docker.withRegistry() method (used in 1)?
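One possibility (an untested sketch, not from the original thread) is to keep withVault from the second snippet and perform the registry login manually, since docker.withRegistry() only accepts a Jenkins credentials ID, not raw username/password values:

```groovy
// inside the withVault block from snippet 2, where the Vault-sourced
// env vars DOCKERHUB_USERNAME / DOCKERHUB_PASSWORD are available
script {
    // --password-stdin keeps the password out of the process list
    sh 'echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin'
    docker.withRegistry('') {
        dockerImage.push()
    }
}
```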

Retrieving secret from HashiCorp Vault in jenkins pipeline

I am trying to retrieve a HashiCorp Vault secret and use it in a Jenkins pipeline. I managed to connect to HashiCorp Vault, but the pipeline fails to retrieve the secret saved in Vault.
Pipeline output:
Started by user admin
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/test_pipeline
[Pipeline] {
[Pipeline] withVault
Retrieving secret: my.secrets/data/dev
Access denied to Vault Secrets at 'my.secrets/data/dev'
[Pipeline] {
[Pipeline] sh
+ echo
[Pipeline] }
[Pipeline] // withVault
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Pipeline:
The key heslo exists at the my.secrets/data/dev path.
node {
    def secrets = [
        [path: 'my.secrets/data/dev', engineVersion: 2, secretValues: [
            [envVar: 'value', vaultKey: 'heslo']
        ]]
    ]
    def configuration = [vaultUrl: 'http://10.47.0.235:8200/',
                         vaultCredentialId: 'b0467c75-24e4-4307-9a35-f7da364f6285',
                         engineVersion: 2]
    withVault([configuration: configuration, vaultSecrets: secrets]) {
        sh 'echo $value'
    }
}
my jenkins-policy.hcl file for approle method to access vault from jenkins:
path "my.secrets/data/dev" {
capabilities = [ "read" ]
}
Thank you in advance
Remove the "data" segment from the "path" definition:
path: 'my.secrets/dev'
You must include "data" in the policy path, but not when retrieving the secret.
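In other words, the Vault policy keeps the data/ segment while the plugin configuration drops it (with engineVersion: 2, the plugin inserts data/ itself). The corrected secrets definition would be:

```groovy
// policy stays: path "my.secrets/data/dev" { capabilities = ["read"] }
def secrets = [
    [path: 'my.secrets/dev', engineVersion: 2, secretValues: [
        [envVar: 'value', vaultKey: 'heslo']
    ]]
]
```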

Need to change a shell command into groovy

I've managed to get a list of which Docker images have been deployed, but it has to be written in Groovy.
I have the following:
sh script: '''
    export PATH="$PATH":"${WORKSPACE}"
    for docker_image in interface data keycloak artifactory ; do
        DOCKERHOST=`echo ${DOCKERURL}/images-rancher/${docker_image} | sed 's!^localhost://!!g'`
        DOCKERVERSION=`docker image ls ${DOCKERHOST} --format '{{ json .Tag }}' | head -1`
        echo "${DOCKERHOST} - ${DOCKERVERSION}"
    done
'''
Changing it into groovy:
def image = [ "interface", "data" , "keycloak", "artifactory" ]
.
.
.
for docker-image in image
println docker-image
How would you put that in a groovy script?
Thanks
Here's how you can get most of the way to using Groovy instead of bash. The doRegexManipulation() function is left as an exercise for you to implement.
Note that the docker image ls sh step is still required, and cannot be translated to "pure" Groovy.
withEnv(["PATH=${env.PATH}:${env.WORKSPACE}"]) {
    def images = [ "interface", "data", "keycloak", "artifactory" ]
    for (String docker_image : images) {
        def DOCKERHOST = doRegexManipulation("${DOCKERURL}/images-rancher/$docker_image")
        def DOCKERVERSION = sh(
            script: """docker image ls '${DOCKERHOST}' --format '{{ json .Tag }}' | head -1""",
            returnStdout: true,
        ).trim() // trim the trailing newline from the shell output
        echo "${DOCKERHOST} - ${DOCKERVERSION}"
    }
}
If you wanted to, you can go one step further and replace the head -1 part with Groovy code, since that can be done in Groovy as well.
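For example, that head -1 replacement might look like this (a sketch using Groovy's String.readLines(), keeping the rest of the loop as above):

```groovy
// Sketch: drop '| head -1' from the shell command and take the first
// line of the output in Groovy instead.
def output = sh(
    script: """docker image ls '${DOCKERHOST}' --format '{{ json .Tag }}'""",
    returnStdout: true,
).trim()
def DOCKERVERSION = output ? output.readLines()[0] : ''
```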
The withEnv step is documented here. It is used to set environment variables for a block of Groovy code, thereby making those environment variables available to any child processes spawned in the block of Groovy code.

Jenkins build failed because a variable has no value

I have a Jenkinsfile like the one below:
stage("Deploy artifact for k8s sync") {
    sh '''
    ns_exists=`kubectl get ns | grep ${target_cluster}`
    if [ -z "$ns_exists" ]
    then
        echo "No namespace ${consider_namespace} exists in the cluster ${target_cluster}"
        echo "Creating namespace ${consider_namespace} in the cluster ${target_cluster}"
        kubectl apply "some yaml file"
    else
        if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "serviceaccounts" ]
        then
            echo "Applying source serviceaccounts on target cluster ${target_cluster}"
            kubectl "some yaml file"
        fi
        if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "secrets" ]
        then
            echo "Applying source secrets on target cluster ${target_cluster}"
            kubectl "some yaml file"
        fi
        if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "configmaps" ]
        then
            echo "Applying source configmaps on target cluster ${target_cluster}"
            kubectl apply -f ${BUILD_NUMBER}-${source_cluster}-${consider_namespace}-configmaps.yaml
        fi
However, when I run, it fails with the error like below:
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy artefact for k8s sync)
[Pipeline] sh
+ kubectl get ns
+ grep test-central-eks
+ ns_exists=
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
I'm wondering how to resolve it, and why it fails in the first place.
As far as I can see, you have a couple of things that make this fail.
From your pipeline extract I can see that you are using variables that are not defined in the script, so I guess they are environment variables or come from a previous step. In order to use them, you need string interpolation; in your case, a triple-double-quoted string: https://groovy-lang.org/syntax.html#_triple_double_quoted_string
stage("Deploy artifact for k8s sync")
{
sh """
On the other hand, grep's exit code is 1 when there is no match. If you need to continue when there is no match, you can use a command block with a true fallback:
ns_exists=`kubectl get ns | { grep ${target_cluster} || true; }`
Finally, a good way to catch these problems is to replay the pipeline with debugging in the shell blocks, by adding set -x at the beginning of the sh block.
Please note as well that stderr is not printed; if you need it, you have to redirect it to stdout (and read it via the returnStdout: true option of Jenkins' sh step) or to a custom file.
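Putting those suggestions together, the start of the stage could look like this (a sketch assembled from the points above, not tested against a real cluster):

```groovy
stage("Deploy artifact for k8s sync") {
    // triple double quotes so Groovy interpolates ${target_cluster} etc.
    sh """
        set -x  # trace each shell command while debugging
        # grep exits 1 on no match; '|| true' keeps the script going
        ns_exists=\$(kubectl get ns | { grep ${target_cluster} || true; })
        if [ -z "\$ns_exists" ]; then
            echo "No namespace ${consider_namespace} exists in the cluster ${target_cluster}"
        fi
    """
}
```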

How to run sidecar container in jenkins pipeline running inside kubernetes

I need to build and run some tests using a fresh database. I thought of using a sidecar container to host the DB.
I've installed Jenkins using Helm inside my Kubernetes cluster, following Google's own tutorial.
I can launch simple 'hello world' pipelines, which start on a new pod.
Next, I tried Jenkins's documentation for running an instance of MySQL as a sidecar:
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
At first it complained that docker was not found, and the internet suggested using a custom Jenkins agent image with Docker installed.
Now, if I run the pipeline, it just hangs in the loop waiting for the DB to be ready.
Disclaimer: New to jenkins/docker/kubernetes
Eventually I found this method.
It relies on the Kubernetes pipeline plugin, and allows running multiple containers in the agent pod while sharing resources.
Note that label should not be an existing label; otherwise, when you run the job, your podTemplate will be unable to find the container you made. With this method you are creating a new set of containers in an entirely new pod.
// example label definition (not in the original answer): any label not already in use
def label = "testpod-${UUID.randomUUID().toString()}"
def databaseUsername = 'app'
def databasePassword = 'app'
def databaseName = 'app'
def databaseHost = '127.0.0.1'
def jdbcUrl = "jdbc:mariadb://$databaseHost/$databaseName".toString()

podTemplate(
    label: label,
    containers: [
        containerTemplate(
            name: 'jdk',
            image: 'openjdk:8-jdk-alpine',
            ttyEnabled: true,
            command: 'cat',
            envVars: [
                envVar(key: 'JDBC_URL', value: jdbcUrl),
                envVar(key: 'JDBC_USERNAME', value: databaseUsername),
                envVar(key: 'JDBC_PASSWORD', value: databasePassword),
            ]
        ),
        containerTemplate(
            name: "mariadb",
            image: "mariadb",
            envVars: [
                envVar(key: 'MYSQL_DATABASE', value: databaseName),
                envVar(key: 'MYSQL_USER', value: databaseUsername),
                envVar(key: 'MYSQL_PASSWORD', value: databasePassword),
                envVar(key: 'MYSQL_ROOT_PASSWORD', value: databasePassword)
            ],
        )
    ]
) {
    node(label) {
        stage('Checkout') {
            checkout scm
        }
        stage('Waiting for environment to start') {
            container('mariadb') {
                sh """
                    while ! mysqladmin ping --user=$databaseUsername --password=$databasePassword -h$databaseHost --port=3306 --silent; do
                        sleep 1
                    done
                """
            }
        }
        stage('Migrate database') {
            container('jdk') {
                sh './gradlew flywayMigrate -i'
            }
        }
        stage('Run Tests') {
            container('jdk') {
                sh './gradlew test'
            }
        }
    }
}
Alternatively, you could use the kubectl CLI (with YAML manifest files) to create those MySQL and CentOS pods, services, and other k8s objects, and run the tests against the MySQL database using the MySQL service's DNS name.
This is how we have tested new database deployments.
