My company has a small pipeline library that we implicitly load for every build. Is there a way to transparently overload the node {} block of every build?
My specific case is that I'm provisioning Kubernetes slaves with the Kubernetes plugin, and I want to provide a default YAML template while allowing users to pick another template or override specific values, e.g.:
node {
    // Gets you a Pod with a DinD engine with a low CPU/Mem request/limit
}
Optionally overridden by name:
node('2-core') {
    // Gets you a Pod with a DinD engine with a 2-CPU / higher memory request/limit
}
Or overridden with a template:
import com.foo.utils.PodTemplates

slaveTemplates = new PodTemplates()
slaveTemplates.bigPod {
    node {
        // Big node
    }
}
Or:
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
labels:
some-label: some-label-value
spec:
containers:
- name: redis
image: redis
"""
) {
node (label) {
// Same small pod as before PLUS a redis container
}
}
This seems trickiest, since you want the values of the parent to override the values of the child.
You can do this, but, in my opinion, it will lead to confusing behavior and possibly strange error cases.
For example:
echo.groovy
def call(String string) {
    steps.echo "Calling step echo: $string"
}
Jenkinsfile
echo 'hello'
Output:
Calling step echo: hello
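The same trick can, in principle, be applied to node itself via a vars/node.groovy in the shared library that wraps the original step in a default pod template. The following is only a rough sketch, assuming the kubernetes plugin's podTemplate step and POD_LABEL variable, with a made-up library resource holding the default YAML:

// vars/node.groovy -- hypothetical override; resource path and defaults are illustrative
def call(Closure body) {
    // No label given: wrap the body in a default low-resource DinD pod.
    podTemplate(yaml: libraryResource('com/foo/utils/default-dind-pod.yaml')) {
        steps.node(POD_LABEL) {
            body()
        }
    }
}

def call(String label, Closure body) {
    // An explicit label such as '2-core' selects a pre-configured template instead.
    steps.node(label) {
        body()
    }
}

As noted above, shadowing a core step like this can easily confuse anyone reading the Jenkinsfile.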
There is a blog post here that demonstrates this in a little more depth.
CloudBees offers paid support for some pipeline restriction tools that might solve your use case.
The heaviest way to accomplish this is, of course, to write a plugin.
Related
I'm creating an EventListener for my repo on Bitbucket Cloud and saw in the current example in the Tekton documentation that the Bitbucket interceptor only supports Bitbucket Server.
I've created the EventListener and it looks like this:
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: bitbucket-el
spec:
  serviceAccountName: tekton-triggers-admin
  triggers:
    - name: bitbucket-triggers
      interceptors:
        - bitbucket:
            secretRef:
              secretName: bitbucket-secret
              secretKey: secretToken
            eventTypes:
        - cel:
            filter: "header.match('X-Event-Key', 'repo:push')"
            overlays:
              - key: extensions.tag_name
                expression: "split(body.ref, '/')[2]"
              - key: extensions.mangledtag
                expression: "split(split(body.ref, '/')[2], '.')[0]+'-'+split(split(body.ref, '/')[2], '.')[1]+'-'+split(split(body.ref, '/')[2], '.')[2]"
      bindings:
        - ref: bitbucket-binding
      template:
        ref: bitbucket-template
I pass it the token (bitbucket-secret) generated from the Bitbucket Cloud consumer secret, following this doc: https://support.atlassian.com/bitbucket-cloud/docs/use-oauth-on-bitbucket-cloud/
I used basic auth on the Ingress and the webhook returned 401 Unauthorized; after removing the basic auth and triggering the webhook with a push, I'm now seeing 403 Forbidden.
Check the attached screenshot for illustration.
Thank you in advance
I spent a lot of time on this issue and finally fixed it by using a CEL expression interceptor, as follows.
In this Trigger, we use an overlay to add "X-Hub-Signature" to the body of the payload. The expression value (1234567 here) doesn't matter and can be anything; we are just adding the HMAC to the body so that we don't get an error.
Note: by default, there is no interceptor for Bitbucket Cloud.
apiVersion: triggers.tekton.dev/v1beta1
kind: Trigger
metadata:
  name: energy
spec:
  serviceAccountName: pipeline
  interceptors:
    - ref:
        name: "cel"
      params:
        - name: "filter"
          value: "header.match('X-Event-Key', 'repo:push')"
        - name: "overlays"
          value:
            - key: X-Hub-Signature
              expression: "1234567"
  bindings:
    - ref: energy
  template:
    ref: energy
I am trying to achieve the same, starting a build when a PR merge has been done in BitBucket cloud.
I was able to create the EventListener resource, but my pipeline is not triggered after merging a PR.
Looking at your example, I still have some questions:
How are the Git repository and the secret configured?
How can you specify a specific branch?
I was looking for a complete example but it seems like Tekton is just ignoring Bitbucket Cloud as a VCS ...
Kind regards,
Bregt
I'm having a hard time using custom Elasticsearch ingest pipelines with Filebeat's Docker autodiscovery. I've started out with custom processors in my filebeat.yml file, however I would prefer to shift this to custom ingest pipelines I've created.
Firstly, here is my configuration using custom processors that works to provide custom grok-like processing for my Servarr app Docker containers (identified by applying a label to them in my docker-compose.yml file). The processor copies the 'message' field to 'log.original', uses dissect to extract 'log.level', 'log.logger' and overwrite 'message'. The final processor is a JavaScript function used to convert the log.level to lowercase (overkill perhaps, but humour me).
Filebeat configuration:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true

processors:
  - if:
      equals:
        docker.container.labels.co_elastic_logs/custom_processor: servarr
    then:
      - copy_fields:
          fields:
            - from: message
              to: log.original
          fail_on_error: false
          ignore_missing: true
      - dissect:
          tokenizer: "[%{log.level}] %{log.logger}: %{message}"
          field: message
          target_prefix: ""
          overwrite_keys: true
          ignore_failure: true
      - script:
          lang: javascript
          id: lowercase
          source: >
            function process(event) {
              var level = event.Get("log.level");
              if (level != null) {
                event.Put("log.level", level.toString().toLowerCase());
              }
            }

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: '*************'

setup.kibana.host: 'kibana:5601'

logging.json: true
logging.metrics.enabled: false
Excerpt from docker-compose.yml file...
lidarr:
  image: ghcr.io/linuxserver/lidarr:latest
  container_name: lidarr
  labels:
    co.elastic.logs/custom_processor: "servarr"
And an example log line (in json):
{"log":"[Info] DownloadDecisionMaker: Processing 100 releases \n","stream":"stdout","time":"2021-08-07T10:10:49.125702754Z"}
This works well, and achieves my aims of extracting fields, but ideally I'd like to use Elasticsearch's (more powerful) ingest pipelines instead, and live with a cleaner filebeat.yml, so I created a working ingest pipeline "filebeat-7.13.4-servarr-stdout-pipeline" like so (ignore the fact that for now, this only does the grokking):
[
  {
    "grok": {
      "field": "message",
      "patterns": [
        "\\[%{LOGLEVEL:log.level}\\] %{WORD:log.logger}: %{GREEDYDATA:message}"
      ],
      "trace_match": true,
      "ignore_missing": true
    }
  }
]
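(For reference, a processor list like this becomes a named pipeline via the ingest API, roughly as follows; the description text is just illustrative, and it can equally be created through the Kibana UI:)

PUT _ingest/pipeline/filebeat-7.13.4-servarr-stdout-pipeline
{
  "description": "Grok Servarr container logs",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "\\[%{LOGLEVEL:log.level}\\] %{WORD:log.logger}: %{GREEDYDATA:message}"
        ],
        "trace_match": true,
        "ignore_missing": true
      }
    }
  ]
}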
I tested the pipeline against existing documents (not ones that have had my custom processing applied, I should note). The pipeline worked against all the documents I tested it against in the Kibana interface.
So now I come to shift my Filebeat config to use this pipeline for containers with my custom_processor label. This is the filebeat.yml I came up with, which is apparently valid and works for the most part, but doesn't apply the grokking:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true
      appenders:
        - type: config
          condition.equals:
            docker.container.labels.co_elastic_logs/custom_processor: servarr
          config:
            pipeline: filebeat-7.13.4-servarr-stdout-pipeline

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: '*************'

setup.kibana.host: 'kibana:5601'

logging.json: true
logging.metrics.enabled: false
If I use Filebeat's built-in modules for my other containers, such as nginx, by applying labels as in the example below, the built-in module pipelines are used:
nginx-repo:
  image: nginx:latest
  container_name: nginx-repo
  mem_limit: 2048m
  environment:
    - VIRTUAL_HOST=repo.***.***.***,repo
    - VIRTUAL_PORT=80
    - HTTPS_METHOD=noredirect
  networks:
    - default
    - proxy
  labels:
    co.elastic.logs/module: "nginx"
    co.elastic.logs/fileset.stdout: "access"
    co.elastic.logs/fileset.stderr: "error"
What am I doing wrong here? The logs still end up in Elasticsearch and Kibana, and are processed, but my grok isn't applied, new fields aren't created, and the 'message' field is unchanged.
EDIT: In response to one of the comments linking to a post on the elastic forums, which suggested both the path(s) and the pipeline need to be made explicit, I tried the following filebeat.yml autodiscovery excerpt, which also fails to work (but is apparently valid config):
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true
      appenders:
        - type: config
          condition:
            equals:
              docker.container.labels.co_elastic_logs/custom_processor: "servarr"
          config:
            - type: docker
              containers:
                ids:
                  - "${data.docker.container.id}"
                stream: all
                paths:
                  - /var/lib/docker/containers/${data.docker.container.id}/${data.docker.container.id}-json.log
              pipeline: filebeat-7.13.4-servarr-stdout-pipeline
I tried with the docker.container.labels.co_elastic_logs/custom_processor value both quoted and unquoted. I have the same behaviour where the logs end up in Elasticsearch / Kibana, but they are processed as if they skipped my ingest pipeline.
We're using Kubernetes instead of Docker with Filebeat, but our config might still help you out.
We have autodiscover enabled and send all pod logs to a common ingest pipeline, except for logs from any Redis pod. Those use the Redis module and are sent to Elasticsearch via one of two custom ingest pipelines, depending on whether they're normal Redis logs or slowlog Redis logs. This is configured in the following block:
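A trimmed-down sketch of that block (the label, paths and pipeline names here are placeholders, not our real values):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        # Redis pods: apply the Redis module and route each fileset to its own pipeline
        - condition:
            contains:
              kubernetes.labels.app: "redis"
          config:
            - module: redis
              log:
                input:
                  type: container
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log
                  pipeline: "redis-log-pipeline"        # placeholder name
              slowlog:
                enabled: true
                input:
                  pipeline: "redis-slowlog-pipeline"    # placeholder name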
All other detected pod logs are sent to a common ingest pipeline using the following catch-all configuration in the "output" section:
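A sketch of that catch-all (host, credentials and pipeline name are again placeholders):

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  username: "elastic"
  password: "****"
  # Any event whose input did not set its own pipeline falls through to this one.
  pipeline: "kubernetes-common-pipeline"    # placeholder name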
Something else that we do is add the name of the ingest pipeline to ingested documents using the "set" processor:
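A sketch of such a processor, with illustrative field and pipeline names, placed at the start of the pipeline's processor list:

{
  "set": {
    "field": "event.ingest_pipeline",
    "value": "kubernetes-common-pipeline"
  }
}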
This has proven to be really helpful when diagnosing whether or not a pipeline was actually executed when viewing an event document in Kibana.
Problem
I want to set a parameter conditionally based on which branch triggered the pipeline. If the triggered branch was feature/automated-testing, I would like to set a parameter equal to "True". See the code below.
Parts of my pipeline.yml file look like this:
trigger:
  branches:
    include:
      - feature/automated-testing

...

# Global variables for the pipeline
variables:
  - name: "triggerRepoName"
    value: "$(Build.SourceBranchName)"

stages:
  # common stage. Docker build, tag and push
  - stage: BuildDockerImage
    displayName: "Build docker image"
    variables:
      ...
    jobs:
      - template: /templates/pipelines/my-prject.yml#templates
        parameters:
          ${{ if eq( variables.triggerRepoName, 'feature/automated-testing') }}:
            runTests: "True"
          ${{ if ne(variables.triggerRepoName, 'feature/automated-testing') }}:
            runTests: "False"
Question
When I push from the branch feature/automated-testing and echo the variable runTests in the Dockerfile, it is blank. Is there something wrong with my syntax in the conditional statement?
I believe the error is in the way the variable is set conditionally, so I have chosen not to include the Dockerfile or the other template .yml files used.
Please change variables.triggerRepoName to variables['triggerRepoName']. It should solve your issue.
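With that change, the parameter block from the pipeline above becomes:

    jobs:
      - template: /templates/pipelines/my-prject.yml#templates
        parameters:
          ${{ if eq(variables['triggerRepoName'], 'feature/automated-testing') }}:
            runTests: "True"
          ${{ if ne(variables['triggerRepoName'], 'feature/automated-testing') }}:
            runTests: "False"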
The documentation of the Jenkins Kubernetes Plugin states:
Unlike scripted k8s template, declarative templates do not inherit from parent template. You need to explicitly declare the inheritance if necessary.
Plugin Readme
Unfortunately there is no example of how to explicitly state the inheritance from the build's main template. I tried using the label, but then the inheritance seems to be ignored.
def parentLabel = "my-project-${UUID.randomUUID().toString()}"

pipeline {
    agent {
        kubernetes {
            label parentLabel
            yamlFile "jenkins-agent.yml"
            // a global template in the cloud configuration
            inheritFrom "docker"
        }
    }
    stages {
        // .. stages using the above agent
        stage('Test Container') {
            agent {
                kubernetes {
                    label "xcel-spring-stage-${UUID.randomUUID().toString()}"
                    inheritFrom parentLabel
                    yaml """
apiVersion: v1
kind: Pod
metadata:
  namespace: build
  labels:
    project: x-celerate-spring-application
spec:
  containers:
    - name: spring-application
      # defined in previous stages, skipped for brevity
      image: ${env.IMAGE_NAME}:${version}.${env.BUILD_NUMBER}
"""
                }
            }
        }
    }
}
How / by what template name can I reference the template declared at the top of the pipeline in an inheritFrom statement in the stage agent declaration to actually define the inheritance explicitly?
The documentation about default inheritance has been updated; it now states:
You need to explicitly declare the inheritance if necessary using the field inheritFrom.
There are two examples in the documentation:
podTemplate(inheritFrom: 'mypod', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.8.1-jdk-11')
]) {
    node(POD_LABEL) {
        …
    }
}
or in a declarative pipeline:
pipeline {
    agent {
        kubernetes {
            inheritFrom 'mypod'
            yaml '''
spec:
  containers:
    - name: maven
      image: maven:3.8.1-jdk-11
'''
            …
        }
    }
    stages {
        …
    }
}
Our Jenkins setup consists of master nodes and different / dedicated worker nodes for running jobs in dev, test and prod environments. How do I go about creating scripted pipeline code that allows users to select an environment (possibly from the master node) and, depending upon the environment selected, executes the rest of the job on the selected node? Here is my initial thought:
stage('Select environment ') {
    script {
        def userInput = input(id: 'userInput', message: 'Merge to?',
            parameters: [[$class: 'ChoiceParameterDefinition', defaultValue: 'strDef',
                description: 'describing choices', name: 'Env', choices: "dev\ntest\nprod"]
            ])
        println(userInput);
    }
    echo "Environment here ${params.Env}" // prints null here
    stage("Build") {
        node(${params.Env}) { // schedule job based upon the environment selected earlier
            echo "My test here"
        }
    }
}
Am I on the right path, or should I be looking at something else?
Another follow-up question: the job that runs on the worker node also requires additional user input. Is there a way to combine the user input in one go so that users are not prompted with multiple input screens?
If you pass the environment as a build parameter when kicking off the job, and you have appropriate labels on your nodes, you could do something like:
agent = params.WHAT_NODE
agentLabels = "deploy && ${agent}"

pipeline {
    agent { label agentLabels }
    ....
}
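If the job does not define that parameter yet, it can be declared in the pipeline itself; a sketch (WHAT_NODE matches the snippet above, the choice values are assumptions):

// goes inside the pipeline { } block shown above
parameters {
    choice(name: 'WHAT_NODE', choices: ['dev', 'test', 'prod'], description: 'Environment / node label to run on')
}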
I ended up doing the following for a scripted pipeline:
The code for selecting the environment can run on any node (whether master or a slave with an agent running). The selected value can be injected into an environment variable (env.Env in the snippet below).
node {
    stage('Select Environment') {
        env.Env = input(id: 'userInput', message: 'Select Environment',
            parameters: [[$class: 'ChoiceParameterDefinition',
                defaultValue: 'strDef',
                description: 'describing choices',
                name: 'Env',
                choices: "jen-dev-worker\njen-test-worker\njen-prod-worker"]
            ])
        println(env.Env);
    }
    stage('Display Environment') {
        println(env.Env);
    }
}
The following code snippet ensures that the script is executed on the environment selected in the previous step. It requires Jenkins workers with the labels jen-dev-worker, jen-test-worker, and jen-prod-worker to be available.
node(env.Env) {
    echo "Hello world, I am running on ${env.Env}"
}