Helm Set Docker Image Tag Dynamically - Jenkins

I am pushing Docker images to our private registry via Jenkins with the following command:
def dockerImage = docker.build("repo/myapp:${env.BUILD_NUMBER}")
(BUILD_NUMBER increases after every build.)
Because I am new to Helm, I cannot decide how I should set the tag for images in values.yaml.
I would like to deploy my app to multiple environments such as:
dev
test
prod
Let's say I was able to deploy my app via Helm to dev, and the latest BUILD_NUMBER is:
100 for dev
101 for test
102 for prod
What should be the tag value, then?
image:
  repository: registry/myrepo/image
  tag:

You should put some tag into your values.yaml to act as the default tag. Every Helm chart has one; you can check the official Helm charts for examples.
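For example, a minimal values.yaml could carry a default that your pipeline then overrides (the repository name here is illustrative):
image:
  repository: myregistry/myrepo/myapp
  tag: latest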
Now, you have two options for handling the different environments.
Option 1: Command line parameters
While installing your Helm Chart, you can specify the tag name dynamically with --set. For example:
$ helm install --set image.tag=12345 <your-chart-name>
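Note that this example uses Helm 2 syntax; in Helm 3 the release name comes first as a positional argument, so the equivalent would be something like:
$ helm install my-release <your-chart-name> --set image.tag=12345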
Option 2: Separate values.yaml files
You can store separate values.yaml files in your repository, like:
values.dev.yaml
values.prod.yaml
Then, apply the matching values file from your Jenkins pipeline, as shown below.
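For example, assuming a release called myapp and a chart directory named mychart, a dev deploy could look like:
$ helm upgrade --install myapp ./mychart -f values.dev.yaml --set image.tag=${BUILD_NUMBER}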

I just ran into this issue with GitHub Actions. As the other answer already noted, set image.tag in values.yaml to some default; I use latest.
The problem is that helm upgrade only rolls out a change if the image tag is different, so latest isn't unique enough for Helm to do the rolling update. I use the commit SHA in GitHub Actions as my unique version tag. You could tag the image like this:
def dockerImage = docker.build("repo/myapp:${env.NAME}-${env.BUILD_NUMBER}")
Then in your helm command just add a --set:
helm upgrade <helm-app> <helm-chart> --set image.tag=${env.NAME}-${env.BUILD_NUMBER}
This command will override whatever value is in values.yaml. Keep in mind that the --set value must follow the structure of your values.yaml, so in this case image is a top-level object with a property named tag:
values.yaml:
image:
  name:
  tag:
  pullPolicy:
  port:
replicaCount:
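Putting it together, a scripted Jenkins pipeline stage could pass the build number straight through to Helm; this is a sketch, with the release and chart names as placeholders:
stage('Deploy') {
    // Build and push the image tagged with this build's number
    def dockerImage = docker.build("repo/myapp:${env.BUILD_NUMBER}")
    dockerImage.push()
    // Roll out the same tag via Helm; --set overrides the default in values.yaml
    sh "helm upgrade --install myapp ./mychart --set image.tag=${env.BUILD_NUMBER}"
}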

Maybe it's late, but I hope this helps someone with a similar query.
I had a similar situation and was looking for options. It was painful, as helm package in Helm 3 doesn't come with the --set option that exists in version 2.
Solution:
Implemented with the Python jinja package, along with environment variables, in the steps below.
Create a values.yaml.j2 file inside your chart directory, containing your values along with templates as below.
name: {{ APPLICATION | default("SampleApp") }}
labelname: {{ APPLICATION | default("SampleApp") }}
image:
  imageRepo: "SampleApp"
  imageTag: {{ APPLICATION_IMAGE_TAG | default("1.0.27") }}
Dependency packages (in container):
sh 'yum -y install python3'
sh 'yum -y install python3-pip'
sh 'yum -y install python-setuptools'
sh 'python3 -m pip install jinja-cli'
Sample environment variables in your build pipeline:
APPLICATION= "AppName"
APPLICATION_VERSION= '1.0'
APPLICATION_CHART_VERSION= '1.0'
APPLICATION_IMAGE_TAG= "1.0.${env.BUILD_NUMBER}"
Now in your pipeline, before packaging the chart, render the template with a single jinja command like below.
sh "jinja CHART_DIR/values.yaml.j2 -X APP.* -o CHART_DIR/values.yaml"
helm package CHART_DIR
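With the sample variables above (and BUILD_NUMBER at, say, 42), the rendered values.yaml would come out roughly as:
name: AppName
labelname: AppName
image:
  imageRepo: "SampleApp"
  imageTag: 1.0.42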
Done!

Related

Using latest Docker image in BitBucket pipe command

I have a Bitbucket pipeline that runs a pipe using a version number tag as follows:
script:
  - mkdir meta
  - pipe: myteam/bladepackager-pipeline:1.0.8
    variables: ...
I would prefer to have it automatically resolve the latest tagged version of the Docker image, so I tried:
script:
  - mkdir meta
  - pipe: myteam/bladepackager-pipeline:latest
    variables: ...
But I get an error message from my BitBucket pipeline run that says
Your pipe name is in an invalid format. Check the name of the pipe and try again.
Is there a way to specify latest rather than a specific tag?
The tag latest is itself just a tag; it does not mean the most recently pushed tag. So if you want to use images with this tag, you have to build and push the Docker image with that tag.
The
- pipe: aaa/bbbb:1.2.3
syntax refers to a git repository hosted on Bitbucket, e.g. bitbucket.org/aaa/bbbb, whereas the
- pipe: docker://registry.example.com/aaa/bbbb:tag
syntax refers to a docker image in any registry.
The :latest tag can only be used with the docker syntax. For the bare pipe syntax I guess you can only try git refs; maybe :main or :master would be valid? I never managed to make it work, so please report back if you succeed.
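So if your pipe image lives in a Docker registry, a line like the following should accept latest (the registry path here is hypothetical):
- pipe: docker://docker.io/myteam/bladepackager-pipeline:latest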

GitLabCI Kaniko on shared runner "error checking push permissions -- make sure you entered the correct tag name"

This similar question is not applicable because I am not using Kubernetes or my own registered runner.
I am attempting to build a Ruby-based image in my GitLabCI pipeline in order to have my gems pre-installed for use by subsequent pipeline stages. In order to build this image, I am attempting to use Kaniko in a job that runs in the .pre stage.
build_custom_dockerfile:
  stage: .pre
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    IMAGE_TAG: ${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}
  script:
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${CI_REGISTRY_USER}\",\"password\":\"${CI_REGISTRY_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context ${CI_PROJECT_DIR} --dockerfile ${CI_PROJECT_DIR}/dockerfiles/custom/Dockerfile --destination ${CI_REGISTRY_IMAGE}:${IMAGE_TAG}
This is of course based on the official GitLabCI Kaniko documentation.
However, when I run my pipeline, this job returns an error with the following message:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: registries must be valid RFC 3986 URI authorities: registry.gitlab.com
The Dockerfile path is correct and through testing with invalid Dockerfile paths to the --dockerfile argument, it is clear to me this is not the source of the issue.
As far as I can tell, I am using the correct pipeline environment variables for authentication and following the documentation for using Kaniko verbatim. I am running my pipeline jobs with GitLab's shared runners.
According to this issue comment from May, others were experiencing a similar issue, which was then resolved by reverting to the debug-v0.16.0 Kaniko image. Likewise, I changed the image name line to name: gcr.io/kaniko-project/executor:debug-v0.16.0, but this resulted in the same error message.
Finally, I tried creating a generic user to access the registry, using a deployment key as indicated here. Via the GitLabCI environment variables project settings interface, I added two variables corresponding to the username and key, and substituted these variables in my pipeline script. This resulted in the same error message.
I tried several variations on this approach, including renaming these custom variables to "CI_REGISTRY_USER" and "CI_REGISTRY_PASSWORD" (the predefined variables). I also made sure neither of these variables was marked as "protected". None of this solved the problem.
I have also tried running the tutorial script verbatim (without custom image tag), and this too results in the same error message.
Has anyone had any recent success in using Kaniko to build Docker images in their GitLabCI pipelines? It appears others are experiencing similar problems but as far as I can tell, no solutions have been put forward and I am not certain whether the issue is on my end. Please let me know if any additional information would be useful to diagnose potential problem sources. Thanks all!
I ran into this issue many times before, forgetting that the variable was set to protected and thus will only be exported to protected branches.
Hey, I got it working, but it was quite a hassle to figure out.
The credentials I had to use were my Git username and password, not the registry user/password!
Here is what my gitlab-ci.yml looks like (of course you would want to replace everything with variables; a sketch of that follows the block):
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - k8s
  script:
    - echo "{\"auths\":{\"registry.mydomain.de/myusername/mytag\":{\"username\":\"myGitusername\",\"password\":\"myGitpassword\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination registry.mydomain.de/myusername/mytag:$CI_COMMIT_SHORT_SHA
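For reference, the same job with the hard-coded values swapped for variables might look like this; an untested sketch, where GIT_USERNAME and GIT_PASSWORD are custom CI/CD variables you would define yourself in the project settings:
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - k8s
  script:
    # GIT_USERNAME / GIT_PASSWORD are assumed custom variables, not GitLab predefined ones
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$GIT_USERNAME\",\"password\":\"$GIT_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA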

Jenkins helm installation does not remove plugins

I performed an installation of Jenkins on GKE, using the official Helm chart.
I initially pass a list of plugins to the corresponding key
- plugin1
- plugin2
- plugin3
- plugin4
and perform helm upgrade --recreate-pods --force --tls --install
I then take out some of the plugins from the above list and run the same helm command again, e.g. with
- plugin1
- plugin2
However, Jenkins keeps all the plugins from the initial list.
Is this the expected behavior?
Yes, it is the expected behavior.
To change this behavior you should set the parameter master.overwritePlugins to true.
Example:
helm upgrade --set master.overwritePlugins=true --recreate-pods --force --install
From the Helm chart documentation:
| Parameter | Description | Default |
| master.overwritePlugins | Overwrite installed plugins on start. | false |
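If you prefer not to pass --set on every upgrade, the same setting can also live in a values file; a minimal sketch, following the key from the chart documentation above:
master:
  overwritePlugins: true
$ helm upgrade -f values.yaml --recreate-pods --force --install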

Trigger step in Bitbucket pipelines

I have a CI pipeline in Bitbucket which is building, testing and deploying an application.
The thing is that after the deploy I want to run selenium tests.
Selenium tests are in an another repository in Bitbucket and they have their own pipeline.
Is there a trigger step in the Bitbucket pipeline to trigger a pipeline when a previous one has finished?
I do not want to do a fake push to the test repository to trigger those tests.
The most "correct" way I can think of doing this is to use the Bitbucket REST API to manually trigger a pipeline on the other repository, after your deployment completes.
There are several examples of how to create a pipeline here: https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Bworkspace%7D/%7Brepo_slug%7D/pipelines/#post
Copying the first example, here is how to trigger a pipeline for the latest commit on master:
$ curl -X POST -is -u username:password \
-H 'Content-Type: application/json' \
https://api.bitbucket.org/2.0/repositories/jeroendr/meat-demo2/pipelines/ \
-d '
{
  "target": {
    "ref_type": "branch",
    "type": "pipeline_ref_target",
    "ref_name": "master"
  }
}'
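If the Selenium suite is defined as a custom pipeline, the same endpoint also accepts a selector per the API reference above; a sketch, where run-selenium is a hypothetical custom pipeline name:
$ curl -X POST -is -u username:password \
-H 'Content-Type: application/json' \
https://api.bitbucket.org/2.0/repositories/jeroendr/meat-demo2/pipelines/ \
-d '
{
  "target": {
    "type": "pipeline_ref_target",
    "ref_type": "branch",
    "ref_name": "master",
    "selector": {
      "type": "custom",
      "pattern": "run-selenium"
    }
  }
}'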
According to their official documentation there is no "easy way" to do that, because jobs are isolated in the scope of one repository. Yet you can achieve your task in the following way:
create a docker image with the minimum required setup for executing your tests inside
upload it to Docker Hub (or some other registry if you have one)
use that docker image in the last step of your pipeline, after deploy, to execute the tests
Try out the official Bitbucket pipeline trigger component: https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/trigger-pipeline
You can run it in an after-deploy step:
script:
  - pipe: atlassian/trigger-pipeline:4.1.7
    variables:
      BITBUCKET_USERNAME: $BITBUCKET_USERNAME
      BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
      REPOSITORY: 'your-awesome-repo'
      ACCOUNT: 'teams-in-space'
@BigGinDaHouse I did something more or less like you say.
My step is built on top of a docker image with headless Chrome, npm and git.
I followed the steps below:
I set a private key for the remote repo in the original repo, base64-encoded (see the documentation). The public key is set on the remote repo under the SSH Access option in the Bitbucket menu.
In the pipeline step I decode it, write it to a file, and change its permissions to 400.
I add this key inside the docker image with ssh-add.
Then I am able to do a git clone, followed by npm install and npm test.
NOTE: entry.sh is there because I am starting the headless browser.
- step:
    image: kimy82/headless-selenium-npm-git
    script:
      - echo $key_in_env_variable_in_bitbucket | base64 --decode > priv_key
      - chmod 400 ./priv_key
      - eval `ssh-agent -s`
      - ssh-agent $(ssh-add priv_key; git clone git@bitbucket.org:project.git)
      - cd project
      - nohup bash /usr/bin/entry.sh >> out.log &
      - npm install
      - npm test
Top answers (this and this) are correct, they work.
Just adding that we found out (after a LOT of trial and error) that the user executing the pipeline must have WRITE permissions on the repo where the pipeline is invoked (even though his app password permissions were set to "WRITE" for repos and pipelines...)
Also, this works for executing pipelines in Bitbucket's cloud or on-premise, through local runners.
(Answering as I am lacking reputation for commenting)

Jenkins Workflow CD with Kubernetes

To clarify, this is not a question about running Jenkins in Kubernetes; this is about deploying to Kubernetes from Jenkins.
I have recently settled on using Jenkins (and the workflow/pipeline plugin) to orchestrate our delivery process. Currently, I'm using the imperative style to deploy as per below:
stage 'Deploy to Integ'
// Clean up old releases
sh "kubectl delete svc,deployment ${serviceName} || true"
def cmd = """kubectl run ${serviceName} --image=${dockerRegistry}/${serviceName}:${env.BUILD_NUMBER} --replicas=2 --port=${containerPort} --expose --service-overrides='{ "spec": { "type": "LoadBalancer" }}' """
// execute shell for the command above
sh cmd
This works well because ${env.BUILD_NUMBER} persists through the pipeline, making it easy for me to ensure the version I deploy is the same all the way through. The problem is that this imperative approach isn't scalable, so I would like to use the declarative approach, with the definition in VCS.
Unfortunately, the declarative approach comes with the adverse effect of needing to explicitly state the version of the image (to be deployed) in the yaml. One way around this might be to use the latest tag; however, this comes with its own risks. For example, let's take the scenario where I'm about to deploy latest to production and a new version gets tagged latest. The new latest may not have gone through testing.
I could get into changing the file programmatically, but that feels rather clunky, and doesn't help developers who have the file checked out to understand what is latest.
What have you done to solve this issue? Am I missing something obvious? What workflow are you using?
In my yaml file (server.origin.yml), I set my image as image-name:$BUILD_NUMBER
Then I run: envsubst < ./server.origin.yml > ./server.yml
This command replaces the string $BUILD_NUMBER with the value of the environment variable.
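A minimal sketch of that approach, assuming server.origin.yml is a Kubernetes Deployment manifest and the image name is illustrative:
# server.origin.yml contains a line like:
#   image: registry.example.com/myapp:$BUILD_NUMBER
envsubst < ./server.origin.yml > ./server.yml
kubectl apply -f ./server.yml
Note that envsubst (from the gettext package) only substitutes variables present in the environment, so BUILD_NUMBER must be exported in the shell step.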
