Retrieving a secret from HashiCorp Vault in a Jenkins pipeline - jenkins

I am trying to retrieve a HashiCorp Vault secret and use it in a Jenkins pipeline. I managed to connect to HashiCorp Vault, but the pipeline fails to retrieve the secret saved in Vault.
Pipeline output:
Started by user admin
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/test_pipeline
[Pipeline] {
[Pipeline] withVault
Retrieving secret: my.secrets/data/dev
Access denied to Vault Secrets at 'my.secrets/data/dev'
[Pipeline] {
[Pipeline] sh
+ echo
[Pipeline] }
[Pipeline] // withVault
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Pipeline:
The key heslo exists at the my.secrets/data/dev path.
node {
    def secrets = [
        [path: 'my.secrets/data/dev', engineVersion: 2, secretValues: [
            [envVar: 'value', vaultKey: 'heslo']
        ]]
    ]
    def configuration = [vaultUrl: 'http://10.47.0.235:8200/',
                         vaultCredentialId: 'b0467c75-24e4-4307-9a35-f7da364f6285',
                         engineVersion: 2]
    withVault([configuration: configuration, vaultSecrets: secrets]) {
        sh 'echo $value'
    }
}
My jenkins-policy.hcl file for the AppRole method used to access Vault from Jenkins:
path "my.secrets/data/dev" {
  capabilities = [ "read" ]
}
Thank you in advance

Remove the "data" from the "path" definition:
path: 'my.secrets/dev'
You must use the "data" in the policy path but not when retrieving the secret.
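With that change, the secrets definition from the question would look like this (a minimal sketch; the rest of the pipeline stays as posted, and with engineVersion: 2 the plugin handles the data/ segment of the API path itself):

def secrets = [
    [path: 'my.secrets/dev', engineVersion: 2, secretValues: [
        [envVar: 'value', vaultKey: 'heslo']
    ]]
]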

Related

How to explicitly pass a username and password to the docker.withRegistry() method of the Docker Hub plugin used in a Jenkins declarative pipeline

I'm trying to push a Docker image from Jenkins to Docker Hub using a declarative pipeline. The Docker Hub credentials are stored in Vault, and I wish to use the Docker plugin in my pipeline syntax.
The following attempts were successful:
If I store the Docker Hub credentials in Jenkins, the pipeline works fine with the following code snippet:
stage('Publish the Docker Image on DockerHub') {
    steps {
        script {
            docker.withRegistry('', 'dockerhub-credentials') {
                dockerImage.push()
            }
        }
    }
}
If I store the Docker Hub credentials in Vault and use shell commands to log in, the pipeline also works successfully with the code snippet below:
stage('Publish the Docker Image on DockerHub') {
    steps {
        withVault(
            configuration: [
                timeout: 60,
                vaultCredentialId: 'vault-jenkins-approle-creds',
                vaultUrl: 'http://172.31.32.203:8200'
            ],
            vaultSecrets: [[
                engineVersion: 2,
                path: 'secret/credentials/dockerhub',
                secretValues: [
                    [envVar: 'DOCKERHUB_USERNAME', vaultKey: 'username'],
                    [envVar: 'DOCKERHUB_PASSWORD', vaultKey: 'password']
                ]
            ]]
        ) {
            script {
                sh "docker login -u $DOCKERHUB_USERNAME -p $DOCKERHUB_PASSWORD"
                sh "docker push <docker-hub-repo>"
            }
        }
    }
}
Now, my query is how to pass the username + password credentials (obtained in 2) into the docker.withRegistry() method (used in 1)?
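One point worth noting: docker.withRegistry() takes a registry URL and a Jenkins credentials ID (as in snippet 1), not raw username/password values, so the Vault-provided variables cannot be dropped into it directly. A hedged workaround, sketched below by reusing the withVault block from attempt 2, is to log in via the shell and then call push() on the image, which reuses the stored login:

stage('Publish the Docker Image on DockerHub') {
    steps {
        withVault(
            configuration: [
                timeout: 60,
                vaultCredentialId: 'vault-jenkins-approle-creds',
                vaultUrl: 'http://172.31.32.203:8200'
            ],
            vaultSecrets: [[
                engineVersion: 2,
                path: 'secret/credentials/dockerhub',
                secretValues: [
                    [envVar: 'DOCKERHUB_USERNAME', vaultKey: 'username'],
                    [envVar: 'DOCKERHUB_PASSWORD', vaultKey: 'password']
                ]
            ]]
        ) {
            script {
                // --password-stdin keeps the password off the command line and out of the process list
                sh 'echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin'
                // docker push reuses the credentials stored by docker login
                dockerImage.push()
            }
        }
    }
}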

Unable to login to Docker from Jenkins environment

I have the script below. With legacy Jenkins and just the Docker plugin installed, I was able to fetch the node image without credentials.
pipeline {
    agent {
        docker { image 'node:12.22.1' }
    }
}
Now, it fails miserably with the error below:
$ docker login -u user#gmail.com -p ******** https://index.docker.io/v1/
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized: incorrect username or password
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: docker login failed
Finished: FAILURE
I tried setting up the environment with Docker credentials like below:
pipeline {
    agent {
        docker { image 'node:12.22.1' }
    }
    environment {
        registryCredential: 'dockerhub_id' // created in global credentials
    }
}
Still no luck. A good example of how to dockerize my login app would help unblock me.
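For what it's worth, the declarative docker agent accepts registry options directly, so one sketch (assuming the dockerhub_id global credential holds a valid Docker Hub username and password) would be:

pipeline {
    agent {
        docker {
            image 'node:12.22.1'
            // registry to authenticate against and the Jenkins credential to use
            registryUrl 'https://index.docker.io/v1/'
            registryCredentialsId 'dockerhub_id'
        }
    }
    stages {
        stage('Check') {
            steps {
                sh 'node --version'
            }
        }
    }
}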

Jenkins build failed because of no value for a variable

I have a Jenkinsfile like the one below:
stage("Deploy artifact for k8s sync")
{
sh '''
ns_exists=`kubectl get ns | grep ${target_cluster}`
if [ -z "$ns_exists" ]
then
echo "No namespace ${consider_namespace} exists in the cluster ${target_cluster}"
echo "Creating namespace ${consider_namespace} in the cluster ${target_cluster}"
kubectl apply "some yaml file"
else
if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "serviceaccounts" ]
then
echo "Applying source serviceaccounts on target cluster ${target_cluster}"
kubectl "some yaml file"
fi
if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "secrets" ]
then
echo "Applying source secrets on target cluster ${target_cluster}"
kubectl "some yaml file"
fi
if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "configmaps" ]
then
echo "Applying source configmaps on target cluster ${target_cluster}"
kubectl apply -f ${BUILD_NUMBER}-${source_cluster}-${consider_namespace}-configmaps.yaml
fi
However, when I run it, it fails with the error below:
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy artefact for k8s sync)
[Pipeline] sh
+ kubectl get ns
+ grep test-central-eks
+ ns_exists=
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
I'm wondering how to resolve it, and why it fails in the first place.
As far as I can see, there are a couple of things that make this fail.
From your pipeline extract I can see that you are using variables that are not defined in the script, so I guess they are environment variables or come from a previous step. To use them you need string interpolation, in your case a triple-double-quoted string: https://groovy-lang.org/syntax.html#_triple_double_quoted_string
stage("Deploy artifact for k8s sync")
{
sh """
On the other hand, grep exits with code 1 when there is no match. If you need to continue even when there is no match, you can wrap the grep in a command block with a true fallback:
ns_exists=`kubectl get ns | { grep ${target_cluster} || true; }`
Finally, a good way to catch these problems is to replay the pipeline with debugging enabled in the shell blocks by adding set -x at the beginning of the sh block (see the combined sketch below).
Please note as well that stderr is not printed; if you need it, you have to redirect it to stdout and read it via the returnStdout: true option of the Jenkins sh step, or redirect it to a custom file.
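Putting those suggestions together, a minimal sketch of the start of the sh block might look like this (assuming target_cluster and consider_namespace are Groovy variables such as pipeline parameters, which is what the interpolation advice above relies on):

stage("Deploy artifact for k8s sync")
{
    // Triple double quotes let Groovy interpolate ${target_cluster} and friends;
    // set -x echoes each command, "|| true" keeps the step alive when grep finds
    // no match, and \$ns_exists is escaped so the shell, not Groovy, expands it.
    sh """
        set -x
        ns_exists=`kubectl get ns | { grep ${target_cluster} || true; }`
        if [ -z "\$ns_exists" ]
        then
            echo "No namespace ${consider_namespace} exists in the cluster ${target_cluster}"
        fi
    """
}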

How to Access the Application after the kubernetes deployment

I'm new to the Kubernetes tool. I'm trying to deploy an Angular application using Docker + Kubernetes; here is the Jenkins script below.
stage('Deploy') {
    container('kubectl') {
        withCredentials([kubeconfigFile(credentialsId: 'KUBERNETES_CLUSTER_CONFIG', variable: 'KUBECONFIG')]) {
            def kubectl
            kubectl = "kubectl --kubeconfig=${KUBECONFIG} --context=demo"
            echo 'deployment to PRERELEASE!'
            sh "kubectl config get-contexts"
            sh "kubectl -n demo get pods"
            sh "${kubectl} apply -f ./environment/pre-release -n=pre-release"
        }
    }
}
}
Please find the Jenkins output below:
/home/jenkins/agent/workspace/DevOps-CI_future-master-fix
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] container
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $KUBECONFIG
[Pipeline] {
[Pipeline] echo
deploy to deployment!!
[Pipeline] echo
deploy to PRERELEASE!
[Pipeline] sh
+ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* demo kubernetes kubernetes-admin demo
kubernetes-admin@kubernetes kubernetes kubernetes-admin
[Pipeline] sh
+ kubectl -n demo get pods
NAME READY STATUS RESTARTS AGE
worker-f99adee3-dedd-46ca-bc0d-6b24391e5865-qkd47-mwl3v 5/5 Running 0 26s
[Pipeline] sh
+ kubectl '--kubeconfig=****' '--context=demo' apply -f ./environment/pre-release '-n=pre-release'
deployment.apps/frontend-deploy unchanged
service/frontend unchanged
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
Now the question is: after the deployment I am not able to see the pods and deployments on the master machine using the commands below. Can someone please help me with how to access the application after the successful deployment?
kubectl get pods
kubectl get services
kubectl get deployments
You're setting the namespace to pre-release when running "${kubectl} apply -f ./environment/pre-release -n=pre-release".
To get pods in this namespace, use: kubectl get pods -n pre-release.
Namespaces are a way to separate different virtual clusters inside your single physical Kubernetes cluster. See https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ for more detail.
You are creating the resources in a namespace called pre-release using the -n option when you run the following command.
kubectl '--kubeconfig=****' '--context=demo' apply -f ./environment/pre-release '-n=pre-release'
deployment.apps/frontend-deploy unchanged
You need to list the resources in the same namespace:
kubectl get pods -n pre-release
kubectl get services -n pre-release
kubectl get deployments -n pre-release
By default, kubectl performs the requested operation in the default namespace. If you want to set your current namespace to pre-release, so that you do not need to append -n pre-release to every kubectl command, you can run the following command:
kubectl config set-context --current --namespace=pre-release

Jenkins hashicorp-vault-plugin empty result

I tried the Jenkins Pipeline Example mentioned here: https://plugins.jenkins.io/hashicorp-vault-plugin
node {
    // define the secrets and the env variables
    def secrets = [
        [$class: 'VaultSecret', path: 'secret/testing', secretValues: [
            [$class: 'VaultSecretValue', envVar: 'testing', vaultKey: 'value_one'],
            [$class: 'VaultSecretValue', envVar: 'testing_again', vaultKey: 'value_two']]],
        [$class: 'VaultSecret', path: 'secret/another_test', secretValues: [
            [$class: 'VaultSecretValue', envVar: 'another_test', vaultKey: 'value']]]
    ]
    // optional configuration, if you do not provide this the next higher configuration
    // (e.g. folder or global) will be used
    def configuration = [$class: 'VaultConfiguration',
                         vaultUrl: 'http://my-very-other-vault-url.com',
                         vaultCredentialId: 'my-vault-cred-id']
    // inside this block your credentials will be available as env variables
    wrap([$class: 'VaultBuildWrapper', configuration: configuration, vaultSecrets: secrets]) {
        sh 'echo $testing'
        sh 'echo $testing_again'
        sh 'echo $another_test'
    }
}
So I installed hashicorp-vault-plugin 2.2.0 in Jenkins 2.173 and started a Vault (v1.1.1) Docker Container using
docker run -d --name vaulttest -p 80:8200 --cap-add=IPC_LOCK -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' vault
Next I configured a token credential within Jenkins using token "myroot"
I created the secrets within Vault (using the web UI):
testing
  value_one
  value_two
another_test
  value
First of all, there is an error in the example: when using the paths "secret/testing" and "secret/another_test", the plugin fails with a 404 error:
Invalid path for a versioned K/V secrets engine. See the API docs for the appropriate API endpoints to use. If using the Vault CLI, use 'vault kv get' for this operation."
This can be fixed by using the paths "secret/data/testing" and "secret/data/another_test" (see https://issues.jenkins-ci.org/browse/JENKINS-44900).
When the job is then called, the variables seem to be empty:
[Pipeline] sh
+ echo
[Pipeline] sh
+ echo
[Pipeline] sh
+ echo
The connection definitely works, because when I provide invalid credentials or invalid paths I receive errors.
Also, retrieving the secrets directly returns a valid response:
/ # vault kv get secret/testing
====== Metadata ======
Key Value
--- -----
created_time 2019-04-17T05:31:23.581020191Z
deletion_time n/a
destroyed false
version 3
====== Data ======
Key Value
--- -----
value_one HUGO
value_two BETTY
What am I missing here?
As seen in https://issues.jenkins-ci.org/browse/JENKINS-52646, Vault KV v2 returns a different JSON response.
So you have to use
def secrets = [
    [$class: 'VaultSecret', path: 'secret/data/testing', secretValues: [
        [$class: 'VaultSecretValue', envVar: 'testing', vaultKey: 'data']]]
]
to retrieve the correct JSON response.
The resulting JSON response can then be passed to readJSON:
def result = readJSON text: testing
echo result.value_one
echo result.value_two
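Putting the two snippets together, a sketch of the complete block might look like the following (assuming the readJSON step from the Pipeline Utility Steps plugin is available and that the bare testing reference resolves to the environment variable set by the wrapper):

node {
    def secrets = [
        [$class: 'VaultSecret', path: 'secret/data/testing', secretValues: [
            [$class: 'VaultSecretValue', envVar: 'testing', vaultKey: 'data']]]
    ]
    def configuration = [$class: 'VaultConfiguration',
                         vaultUrl: 'http://my-very-other-vault-url.com',
                         vaultCredentialId: 'my-vault-cred-id']
    wrap([$class: 'VaultBuildWrapper', configuration: configuration, vaultSecrets: secrets]) {
        // 'testing' holds the whole "data" object as a JSON string; parse it and read the keys
        def result = readJSON text: testing
        echo result.value_one
        echo result.value_two
    }
}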
