Is there any Kubernetes agent in Tekton Pipelines similar to the Jenkins agent?

I am new to Tekton Pipelines. I am migrating a Jenkinsfile to Tekton Pipelines.
I have the following Jenkinsfile code:
pipeline {
    libraries {
        lib 'operator-pipeline-library'
    }
    parameters {
        string(name: 'OPERATOR_CATALOG_REPO', description: 'operator images that are in the catalog', defaultValue: 'operator-3rdparty-dev-local.net')
        string(name: 'OPERATOR_INDEX', description: 'Artifactory', defaultValue: 'operator-3rdparty-catalog-dev-local.net')
    }
    agent {
        kubernetes {
            cloud 'openshift'
            yamlFile 'operator-mirror-pod.yaml'
        }
    }
}
I want to know how to rewrite the following
agent {
    kubernetes {
        cloud 'openshift'
        yamlFile 'operator-pod.yaml'
    }
}
in Tekton pipelines

When I want to do something with Tekton, the first thing I would recommend checking is the Tekton Catalog. They have a GitHub repository.
In your case, they have the openshift-client task that could be useful.
Depending on how your Jenkins was configured, you may have to reproduce its credentials configuration as Kubernetes Secrets. And if you are executing commands against the local cluster, make sure your PipelineRun uses a ServiceAccount with enough privileges to create objects.
As a more generic answer to "how do I translate Jenkins Groovy into Tekton": you don't. You rewrite everything from scratch. And while it may be the beginning of a long road, at the end of it you're done with Groovy. You can script in Python, Ruby, shell, or whatever runtime you have in your pods, all of which are better options in terms of performance, maintenance, ...
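That said, as a rough illustration of the direction (not a drop-in translation), here is a minimal sketch of what that agent block could become: the pod spec you kept in operator-pod.yaml turns into the step images of a Task, the Jenkins parameters become params, and the ServiceAccount is set on the run object. Apart from the parameter names and defaults (taken from your Jenkinsfile), every name, image and script below is an illustrative assumption:
# Minimal illustrative sketch, not a drop-in translation. Task/TaskRun names,
# the step image and the script body are assumptions; only the param names and
# defaults come from the original Jenkinsfile.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: operator-mirror              # hypothetical name
spec:
  params:
    - name: OPERATOR_CATALOG_REPO
      default: operator-3rdparty-dev-local.net
    - name: OPERATOR_INDEX
      default: operator-3rdparty-catalog-dev-local.net
  steps:
    - name: mirror
      # whatever image you previously declared in operator-pod.yaml
      image: registry.redhat.io/openshift4/ose-cli:latest
      script: |
        #!/usr/bin/env bash
        echo "mirroring from $(params.OPERATOR_CATALOG_REPO) to $(params.OPERATOR_INDEX)"
---
# A TaskRun (or PipelineRun) references the Task and carries the ServiceAccount:
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: operator-mirror-run          # hypothetical name
spec:
  serviceAccountName: pipeline       # a ServiceAccount with enough privileges
  taskRef:
    name: operator-mirror
  params:
    - name: OPERATOR_CATALOG_REPO
      value: operator-3rdparty-dev-local.net
    - name: OPERATOR_INDEX
      value: operator-3rdparty-catalog-dev-local.net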

Related

Teamcity Shared Library and Stash/Unstash Like Jenkins

I am currently a Jenkins user and am exploring TeamCity.
In Jenkins we have the concept of shared libraries, which basically lets generic Groovy code be reused across different Jenkins pipelines. This avoids rewriting the same functionality in each Jenkinsfile (DRY, don't repeat yourself), hides implementation complexity, and keeps pipelines short and easier to understand.
Example:
There could be a repository holding all the Groovy functions, like:
Repo: http://github.com/DEVOPS/Utilities.git (repo Utilities)
Sample Groovy script: GitUtils.groovy, with the functions below
public void setGitConfig(String userName, String email) {
    sh "git config --global user.name ${userName}"
    sh "git config --global user.email ${email}"
}
public void gitPush(String branchName) {
    sh "git push origin ${branchName}"
}
In the Jenkinsfile we can just call this function like below (of course we need to configure Jenkins so it knows the repo URL of the shared library, and give it a name):
Pipeline
// name of shared library given in Jenkins
@Library('utilities') _
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                // log.info 'Starting'
                script {
                    def gitUtils = new GitUtils()
                    gitUtils.setGitConfig("Ray", "Ray@rayban.com")
                }
            }
        }
    }
}
And that's it: anyone wanting the same function just has to include the library in their Jenkinsfile and use it in the pipeline.
Questions:
Can we migrate the same thing to TeamCity, and if yes, how can it be done? We do not want to spend a lot of time on rewriting.
Jenkins also supports stashing and unstashing of the workspace between stages; is a similar concept present in TeamCity?
Example:
pipeline {
    agent any
    stages {
        stage('Git checkout') {
            steps {
                stash includes: '/root/hello-world/*', name: 'mysrc'
            }
        }
        stage('maven build') {
            agent { label 'slave-1' }
            steps {
                unstash 'mysrc'
                sh label: '', script: 'mvn clean package'
            }
        }
    }
}
As for reusing common TeamCity Kotlin DSL libraries, this can be done via Maven dependencies. For that you have to declare them in the pom.xml file within your DSL code. You can also consider using JitPack if your DSL library code is hosted on GitHub, for example, and you do not want to handle building it separately and publishing its Maven artifacts.
Although with a migration from Jenkins to TeamCity you will most likely have to rewrite the common library (if you still need one at all), as the TeamCity project model and DSL are quite different from what you have in Jenkins.
Speaking of stashing/unstashing workspaces, this can be covered either by artifact rules and artifact dependencies (as described here: https://www.jetbrains.com/help/teamcity/artifact-dependencies.html) or by repository clone mirroring on agents.
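For illustration only, a minimal Kotlin DSL sketch of the artifact-rules/artifact-dependencies approach (the build configuration names and artifact paths are assumptions, not taken from your project):
// Illustrative sketch; build configuration names and paths are assumptions.
import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.maven

object GitCheckout : BuildType({
    name = "Git checkout"
    vcs {
        root(DslContext.settingsRoot)
    }
    // publish the checked-out sources as a build artifact (the "stash")
    artifactRules = "hello-world/** => sources.zip"
})

object MavenBuild : BuildType({
    name = "Maven build"
    dependencies {
        // pull the sources published by GitCheckout (the "unstash")
        artifacts(GitCheckout) {
            artifactRules = "sources.zip!** => ."
        }
    }
    steps {
        maven {
            goals = "clean package"
        }
    }
})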

Docker in Jenkins on OpenShift

When using Docker in a Jenkins pipeline I got an error: "docker: command not found".
How can I start Docker on OpenShift Jenkins?
Also, I have no permissions to open the plugins page in Jenkins.
The easiest way is to use the Jenkins installation provided by OpenShift.
The doc for Jenkins in OpenShift is here: https://docs.openshift.com/container-platform/4.8/openshift_images/using_images/images-other-jenkins.html
To build Docker images, the proposed solution is to use "source-to-image" (s2i) and/or a "BuildConfig" to manage "docker builds".
The doc also explains how to manage Jenkins authentication/authorization when Jenkins is integrated with OCP (OAuth + role bindings to OpenShift).
Bonus points: Jenkins comes configured with all the required plugins (Kubernetes, OCP, ...) and it is automatically upgraded when you upgrade OCP.
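For example, a minimal BuildConfig using the docker strategy could look like the following sketch (the repository URL and image name are placeholders, not values from your setup):
# Illustrative sketch; repository URL and image name are placeholders.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-image
spec:
  source:
    git:
      uri: https://github.com/example/repo.git   # repo containing a Dockerfile
  strategy:
    dockerStrategy: {}
  output:
    to:
      kind: ImageStreamTag
      name: my-image:latest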
Using Jenkins on OpenShift, you won't find any Docker runtime on your Jenkins master, and probably not on the slaves either: CRI-O is the preferred runtime (and the only runtime, as of 4.x).
If you want to start containers, you're supposed to use one of the OpenShift or Kubernetes plugins to create Pods. In OpenShift, you won't need to configure or install anything on Jenkins itself, aside from writing your pipeline. As long as your Jenkins ServiceAccount has edit or admin privileges over a Project, it will be able to manage containers in the corresponding Namespace.
pipeline {
    agent {
        node { label 'maven' }
    }
    stages {
        stage('clone') {
            steps {
                // clone the repository serving your YAMLs
                checkout scm
            }
        }
        stage('create') {
            steps {
                script {
                    openshift.withCluster() {
                        openshift.withProject() {
                            try {
                                timeout(5) {
                                    objectsFromTemplate = openshift.process("-f", "/path/to/clone/directory/deploy.yaml",
                                        '-p', "FRONTNAME=${params.buildHash}",
                                        '-p', "LDAP_IMAGE_TAG=${params.buildHash}",
                                        '-p', "ROOT_DOMAIN=${params.rootDomain}")
                                    echo "The template will create ${objectsFromTemplate.size()} objects"
                                    for (o in objectsFromTemplate) { o.metadata.labels["${templateSel}"] = "${templateMark}-${params.buildHash}" }
                                    created = openshift.create(objectsFromTemplate)
                                    created.withEach { echo "Created ${it.name()} from template with labels ${it.object().metadata.labels}" }
                                }
                            } catch(e) {
                                // DoSomething
                            }
                        }
                    }
                }
            }
        }
    }
}
If you don't have full access to your Jenkins admin interface, check the RoleBindings in that namespace: maybe you're not admin?
That won't prevent you from creating your own podTemplates (using ConfigMaps), credentials (using Secrets) or jobs/Jenkinsfiles (using BuildConfigs). You may still install additional plugins into Jenkins by adding to or changing the content of the INSTALL_PLUGINS environment variable in your Jenkins deployment.
Also note that the Jenkins ServiceAccount token can be used to authenticate against Jenkins as admin, working around the Jenkins OpenShift OAuth integration.

How can I load the Docker section of my Jenkins pipeline (Jenkinsfile) from a file?

I have multiple pipelines using Jenkinsfiles that retrieve a Docker image from a private registry. I would like to be able to load the Docker-specific information into the pipelines from a file, so that I don't have to modify all of my Jenkinsfiles when the Docker label or credentials change. I attempted to do this using the example Jenkinsfile below:
def common
pipeline {
    agent none
    options {
        timestamps()
    }
    stages {
        stage('Image fetch') {
            steps {
                script {
                    common = load('/home/jenkins/workspace/common/docker_image')
                    common.fetchImage()
                }
            }
        }
    }
}
With docker_image containing:
def fetchImage() {
    agent {
        docker {
            label 'target_node'
            image 'registry-url/image:latest'
            alwaysPull true
            registryUrl 'https://registry-url'
            registryCredentialsId 'xxxxxxx-xxxxxx-xxxx'
        }
    }
}
I got the following error when I executed the pipeline:
Required context class hudson.FilePath is missing Perhaps you forgot
to surround the code with a step that provides this, such as:
node,dockerNode
How can I do this using a declarative pipeline?
There are a few issues with this:
You can allocate a node only at the top level:
pipeline {
    agent ...
}
Or you can use a per-stage node allocation like so:
pipeline {
    agent none
    ....
    stages {
        stage("My stage") {
            agent ...
            steps {
                // run my steps on this agent
            }
        }
    }
}
You can check the docs here
The steps are supposed to be executed on the allocated node (or in some cases they can be executed without allocating a node at all).
Declarative Pipeline and Scripted Pipeline are two different things. Yes, it's possible to mix them, but scripted pipeline is meant to either abstract some logic into a shared library, or to provide you a way to be a "hard core master ninja" and write your own fully custom pipeline using the scripted pipeline and none of the declarative sugar.
I am not sure how your Docker <-> Jenkins connection is set up, but you would probably be better off if you installed a plugin and used agent templates to provide the agents you need.
If you have a Docker Swarm you can install the Docker Swarm Plugin and then in your pipeline you can just configure pipeline { agent { label 'my-agent-label' } }. This will automatically provision your Jenkins with an agent in a container which uses the image you specified.
If you have exposed /var/run/docker.sock to your Jenkins, then you could use Yet Another Docker Plugin, which has the same concept.
This way you can move the agent configuration into the agent template, and your pipeline will only use a label to get the agent it needs.
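If you would rather keep the Docker details in the Jenkinsfile itself, here is a minimal declarative sketch (reusing the placeholder values from your docker_image file) that declares the docker agent on the stage instead of loading it from a scripted function:
pipeline {
    agent none
    options {
        timestamps()
    }
    stages {
        stage('Image fetch') {
            // declarative per-stage agent: Jenkins allocates a node matching the
            // label and pulls the image before the steps run
            agent {
                docker {
                    label 'target_node'
                    image 'registry-url/image:latest'
                    alwaysPull true
                    registryUrl 'https://registry-url'
                    registryCredentialsId 'xxxxxxx-xxxxxx-xxxx'
                }
            }
            steps {
                // these steps run inside the pulled container
                sh 'cat /etc/os-release'
            }
        }
    }
}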

How to run multiple Jenkins pipelines for multiple branches

I have a scenario where in I have a frontend repository with multiple branches.
Here's my repo vs application structure.
I have a single Jenkinsfile like below:
parameters {
    string(name: 'CUSTOMER_NAME', defaultValue: 'customer_1')
}
stages {
    stage('Build') {
        steps {
            sh '''
                yarn --mutex network
                /usr/local/bin/grunt fetch_and_deploy:$CUSTOMER_NAME -ac test
                /usr/local/bin/grunt collect_web'''
        }
    }
}
The above Jenkinsfile is the same for all customers, so I would like to understand the best way to build for multiple customers using the same Jenkinsfile and run different pipelines based on the parameter $CUSTOMER_NAME.
I am not sure if I understood your problem. But I guess you could use a shared pipeline library: https://jenkins.io/doc/book/pipeline/shared-libraries/
You can put the build step in the library and call it with CUSTOMER_NAME as parameter.
(Please note: a shared pipeline library must be stored in a separate Git repository!)
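As a rough sketch (the library name and the buildCustomer step are illustrative assumptions, not an existing library), the build logic could live in a shared library var, and each pipeline would only pass its customer parameter:
// vars/buildCustomer.groovy in the shared library repository (illustrative)
def call(String customerName) {
    sh """
        yarn --mutex network
        /usr/local/bin/grunt fetch_and_deploy:${customerName} -ac test
        /usr/local/bin/grunt collect_web
    """
}
And then in each Jenkinsfile:
// name of the shared library as configured in Jenkins (placeholder)
@Library('customer-pipeline-library') _
pipeline {
    agent any
    parameters {
        string(name: 'CUSTOMER_NAME', defaultValue: 'customer_1')
    }
    stages {
        stage('Build') {
            steps {
                buildCustomer(params.CUSTOMER_NAME)
            }
        }
    }
}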

Do I have to use a node block in Declarative Jenkins pipelines?

I was reading about the best practices of a Jenkins pipeline.
I have created a declarative pipeline which is not executing parallel jobs and I want to run everything on the same slave.
I use:
agent {
    label 'xxx'
}
The rest of my pipeline looks like:
pipeline {
    agent {
        label 'xxx'
    }
    triggers {
        pollSCM pipelineParams.polling
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages {
        stage('stage1') {
            steps {
                xxx
            }
        }
        stage('stage2') {
            steps {
                xxx
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        failure {
            xxx
        }
        success {
            xxx
        }
    }
}
Now I read the best practices here.
Point 4 is telling:
Do: All Material Work Within a Node
Any material work within a pipeline should occur within a node block.
Why? By default, the Jenkinsfile script itself runs on the Jenkins
master, using a lightweight executor expected to use very few
resources. Any material work, like cloning code from a Git server or
compiling a Java application, should leverage Jenkins distributed
builds capability and run an agent node.
I suspect this is for scripted pipelines.
Now my questions are:
Do I ever have to create a node inside a stage in a declarative pipeline (it is possible) or do I have to use agent inside the stage when I want to run my stage on another specific agent?
My current pipeline uses a label that matches 4 agents. My whole pipeline is always executed on one agent (which is what I want), but I would have expected it to execute stage1 on slaveX and maybe stage2 on slaveY. Why is this not happening?
The documentation is quite misleading.
What the documentation is suggesting is to take advantage of distributed builds. Distributed builds are activated by using either the agent or the node block.
The agent should be used when you want to run the pipeline almost exclusively on one node. The node block allows for more flexibility, as it lets you specify where a granular task should be executed.
If you are running the pipeline on some agent and you wrap a step in a node block pointing at the same agent, there won't be any effect except that a new executor will be allocated to the step wrapped in node.
There is no obvious benefit in doing so. You will simply be consuming executors that you don't need.
In conclusion, you are already using distributed builds when using agent and this is what the documentation is vaguely recommending.
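To make question 1 concrete, here is a minimal sketch (the labels are placeholders): the single pipeline-level agent you already declare is exactly why every stage runs on the same node, and a stage only moves to another specific agent if you give that stage its own agent block:
pipeline {
    // all stages run on a node matching this label unless a stage overrides it
    agent {
        label 'xxx'
    }
    stages {
        stage('stage1') {
            steps {
                echo 'runs on the pipeline-level agent'
            }
        }
        stage('stage2') {
            // stage-level agent: only this stage runs elsewhere
            agent {
                label 'other-label'   // placeholder
            }
            steps {
                echo 'runs on a different agent'
            }
        }
    }
}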
