In jenkins-kubernetes-plugin, how to generate labels in a Pod template that are based on a pattern

Set-Up
I am using the jenkins-kubernetes-plugin to run our QE jobs. The QE jobs are executed over multiple pods, and each pod has a static set of labels like testing and chrome.
Issue:
In these QE jobs, there is one port, say 7900, that I want to expose through a Kubernetes Ingress controller.
The issue is that we have multiple pods running from the same pod template, and they all have the same set of labels. For the Ingress controller to work, I want these pods to have some labels that follow a pattern.
For example, POD1 would have a label chrome-1, POD2 a label chrome-2, and so on.
Is this possible?

This is not currently possible directly, but you could use Groovy in the pipeline to customize it, e.g. add the build ID as a label; see the sketch below.
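For example, a minimal scripted-pipeline sketch using the podTemplate step's yaml parameter could derive a per-build pod label from the build ID. (The container image, the app label key, and the shell step are illustrative assumptions; only port 7900 comes from the question.)

// Hedged sketch: give each pod a label derived from the Jenkins build ID.
// POD_LABEL is provided by recent versions of the kubernetes plugin.
podTemplate(yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: chrome-${env.BUILD_ID}    # e.g. chrome-1, chrome-2, ... per build
spec:
  containers:
  - name: chrome
    image: selenium/node-chrome
    ports:
    - containerPort: 7900
""") {
    node(POD_LABEL) {
        // QE steps run here; a Service/Ingress can now select app=chrome-<build id>
        sh 'echo running QE tests'
    }
}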

Related

How to define a component/step using training operators such as TFJob in a Kubeflow pipeline

I know there is a way to use the TFJob operator via kubectl, like the example here (https://www.kubeflow.org/docs/components/training/tftraining/):
kubectl create -f https://raw.githubusercontent.com/kubeflow/training-operator/master/examples/tensorflow/simple.yaml
But I don't know how to incorporate it into a Kubeflow pipeline. A normal component/job is defined via the @component decorator or a ContainerOp and is a Kubernetes Job kind which runs in a Pod, but I don't know how to define a component with a special training operator such as TFJob, so that my code runs as
apiVersion: "kubeflow.org/v1"
kind: TFJob
rather than:
apiVersion: "kubeflow.org/v1"
kind: Job
in kubernetes.
P.S.: there is an example here: https://github.com/kubeflow/pipelines/blob/master/components/kubeflow/launcher/sample.py
but I don't see where it specifies TFJob.
The example you reference leverages some code that actually creates a TFJob (look at the folder containing your example):
TFJob is instantiated here: https://github.com/kubeflow/pipelines/blob/master/components/kubeflow/launcher/src/launch_tfjob.py#L97
...and created as Kubernetes resource here: https://github.com/kubeflow/pipelines/blob/master/components/kubeflow/launcher/src/launch_tfjob.py#L126
...the previous code is accessed by a Kubeflow component as specified here: https://github.com/kubeflow/pipelines/blob/master/components/kubeflow/launcher/component.yaml
...which is imported into your referenced example here: https://github.com/kubeflow/pipelines/blob/master/components/kubeflow/launcher/sample.py#L60
The general question you raised is still the subject of ongoing discussions. Using tfjob_launcher_op appears to be the currently recommended way. Alternatively, some people natively use ResourceOps to simulate your kubectl create call.

Labelling Openshift Build transient pods

I have a simple S2I build running in an OpenShift 3.11 project that outputs to an ImageStream.
The build is working fine, and the resulting images are tagged correctly and available in the stream. My issue is that each build spins up a transient pod to handle the actual build. I would like to label these pods. This project is shared between multiple teams, and we have several scripts that differentiate pods based on a label.
Right now each one automatically gets labelled like so:
openshift.io/build.name: <buildname>-<buildnum>
That's fine, and I don't want to get rid of that label; I just want to add an additional custom label (something like owner: <teamname>). How can I do that?
You can add your custom label using the imageLabels section in the BuildConfig, as follows. Further information is available under Output Image Labels.
spec:
  output:
    to:
      kind: "ImageStreamTag"
      name: "your-image:latest"
    imageLabels:
    - name: "owner"
      value: "yourteamname"

How can I spawn multiple instances of a container using Kubernetes?

I have a container (Service C) which listens for certain user events and, based on the input, needs to spawn one or more instances of another container (Service X).
From your use-case description, it looks like a Deployment is what you are looking for: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/. By using Deployments you can dynamically scale the number of pod replicas.

How can I select some projects to be eligible to be built by slaves

I've successfully created my first Jenkins slave, but I don't want it to be used by all the projects, so I've chosen the "Only selected projects" option.
But how can I choose the projects?
I've looked at the Jenkins management pages and at a project's configuration, and I don't see the needed setting.
You tie jobs to slaves in each job's configuration: go to ${jenkins url}/job/${job name}/configure and look for the Restrict where this project can be run field in the general settings.
You can type the names of slaves there or, even better, use labels assigned to slaves. You can use logical expressions like || and &&, too.
To expand on ameba's answer, you can set labels for a node in the node configuration settings, and therefore label nodes with the toolchains or tools that you require.
Then, in a Jenkins pipeline, you can do the following:
node('TOOL label') {
    stage('build using TOOL') {
    }
}
The node step tells Jenkins to find a node with that label and to use it for the following block of code.

Simple way to temporarily exclude a job from running on a node in a label group

I want to be able to temporarily exclude a specific job from running on a node in a label group.
jobA, jobB, and jobC are tied to run on the label general.
nodeA, nodeB, and nodeC have the label general on them.
Let's say that jobA starts to fail consistently on nodeA.
The only solutions that I see today are taking nodeA offline for all jobs, or reconfiguring many jobs or nodes, which is pretty time-consuming. We are using Job DSL to configure the jobs, so changing the job configuration requires a check-in.
An ideal situation for us would be to have a configuration on the node:
Exclude job with name: jobA
Is there some easy way to configure that jobA should temporarily run only on nodeB and nodeC, while jobB and jobC still run on all nodes in the label general?
Create a parameterized job to run some job-dsl configuration. Make one of the parameters a "Choice" listing the job names that you might want to change.
Another parameter would select a label defining the node(s) you want to run the job on. (You can have more than one label on a node).
The Job DSL script then updates the job's label (a sketch of this is shown after the script below).
This Groovy script will enable/disable all jobs in a folder:
// "State" job parameter (choice, DISABLED|ENABLED)
def targetState = ('DISABLED'.equalsIgnoreCase(State))
// "Folder" job parameter (choice or free-text)
def targetFolderPath = Folder.trim()

// findFolder: helper assumed to resolve the folder item by its path (not shown in the answer)
def folder = findFolder(jenkins, targetFolderPath)
println "Setting all jobs in '${folder.name}' to disabled=${targetState}"
for (job in folder.getAllJobs()) {
    job.disabled = targetState
    println "updated job: ${job.name}"
}
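For the label-update part described above, a minimal Job DSL sketch could look like the following (the job name and labels come from the question; the shell step is an illustrative placeholder):

// Hypothetical Job DSL sketch: re-seed jobA so it temporarily avoids nodeA.
job('jobA') {
    // label() sets "Restrict where this project can be run"
    label('general && !nodeA')
    steps {
        shell('./run-qe-tests.sh')
    }
}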
I just came across the same issue: I want the job to run on nodes with one label, say "labelA", but not on nodes with another label, "labelB".
We may try this:
node('labelA && !labelB') {
}
Refer to: https://www.jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#node-allocate-node
You can also use the NodeLabel Parameter Plugin in jobA. With this plugin you can define the nodes on which the job is allowed to be executed. Just add a node parameter and select all nodes except nodeA.
https://wiki.jenkins-ci.org/display/JENKINS/NodeLabel+Parameter+Plugin
For a simple, quick exclude (what I think the original question refers to as "The only solutions that I see today are ... reconfigure ... jobs or nodes"), see this other answer: https://stackoverflow.com/a/29611255/598656
To stop using a node with a given label, one strategy is to simply change the label. E.g. suppose the label is
BUILDER
changing the label to
-BUILDER
will preserve information for the administrator but any job using BUILDER as the label will not select that node.
To allow a job to run on the node, you can change the node selection to
BUILDER||-BUILDER
A useful paradigm when shuffling labels around.
NOTE that jobs may still select using the prior label for a period of time.
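As a minimal scripted-pipeline illustration of the expression above (the stage body is a placeholder):

// Jobs that should keep using the renamed ("parked") node select both labels,
// as described in the answer above.
node('BUILDER || -BUILDER') {
    stage('build') {
        echo "Running on ${env.NODE_NAME}"
    }
}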
