Allocate resources for Kubeflow pipeline using pipeline params

I would like to be able to create a Kubeflow pipeline that allows users to set the allocated resources for a run. The end result would be something like this:
[Screenshot: Kubeflow "Create Run" UI with fields for setting resource allocation.]
Definition of the pipeline params is possible; however, the syntax of the pipeline params does not match the validation regex used by Kubeflow to preprocess its YAML definition.
As an example, using the parameter values in the screenshot, I can hard-code the resources allocated to the pipeline by adding this to the pipeline's YAML definition:
resources:
  limits: {nvidia.com/gpu: 1}
  requests: {cpu: 16, memory: 32G}
However, what I want to do is use the pipeline's parameters to define these allocations for each run. Something like:
resources:
  limits: {nvidia.com/gpu: '{{inputs.parameters.gpu_limit}}'}
  requests: {cpu: '{{inputs.parameters.cpu_request}}', memory: '{{inputs.parameters.memory_request}}'}
When I use the second definition of pipeline resources, creation of the pipeline fails because Kubeflow cannot parse these resource parameters: the input parameter syntax '{{inputs.parameters.parameter}}' does not match the regular expression ^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$.
{
  "error_message": "Error creating pipeline: Create pipeline failed: Failed to get parameters from the workflow: InvalidInputError: Failed to parse the parameter.: error unmarshaling JSON: while decoding JSON: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'",
  "error_details": "Error creating pipeline: Create pipeline failed: Failed to get parameters from the workflow: InvalidInputError: Failed to parse the parameter.: error unmarshaling JSON: while decoding JSON: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'"
}
Has anyone found a workaround for this issue, or am I trying to force Kubeflow to do something it isn't built for? Defining and using pipeline parameters like I have in the second example works for other portions of the pipeline definition (e.g. args or commands to run in the Docker container).
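For reference, here is a minimal sketch of the kind of pipeline definition where the parameters do work when passed as container arguments (this assumes the kfp v1 Python SDK; the image name and parameter names are placeholders):

# Minimal sketch, assuming the kfp v1 SDK; the image and parameter names
# are placeholders for illustration only.
from kfp import compiler, dsl

@dsl.pipeline(name='train', description='Pipeline with run-time parameters')
def train_pipeline(cpu_request: str = '16',
                   memory_request: str = '32G',
                   gpu_limit: int = 1):
    # Passing the parameters as container arguments works: they are rendered
    # as '{{inputs.parameters.cpu_request}}' etc. in the compiled workflow YAML.
    dsl.ContainerOp(
        name='train',
        image='my-registry/train:latest',
        arguments=['--cpus', cpu_request,
                   '--memory', memory_request,
                   '--gpus', gpu_limit],
    )

if __name__ == '__main__':
    compiler.Compiler().compile(train_pipeline, 'train_pipeline.tar.gz')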

This just cannot be done in the current version of Kubeflow Pipelines. It is a limitation, but you cannot change resources from within the pipeline itself.
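What the SDK does support is setting resources at compile time. Here is a rough sketch (assuming the kfp v1 SDK; the op name and image are placeholders) of how the hard-coded resources block from the first example is usually produced:

# Rough sketch, assuming the kfp v1 SDK; the op name and image are placeholders.
# These values are baked into the compiled YAML, so they cannot be changed per
# run from the "Create Run" UI, which is exactly the limitation described above.
from kfp import dsl

@dsl.pipeline(name='train-with-resources')
def train_pipeline():
    train_op = dsl.ContainerOp(name='train', image='my-registry/train:latest')
    train_op.set_cpu_request('16')
    train_op.set_memory_request('32G')
    train_op.set_gpu_limit(1)

Compiling this produces a resources block with the limits/requests shown in the first YAML snippet above.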

Related

Upload pipeline on kubeflow

I am currently trying to set up a Kubeflow pipeline. My use case requires that the configuration for pipelines be provided as a YAML/JSON structure. Looking into the documentation for submitting pipelines, I came across this paragraph:
Each pipeline is defined as a Python program. Before you can submit a pipeline to the Kubeflow Pipelines service, you must compile the pipeline to an intermediate representation. The intermediate representation takes the form of a YAML file compressed into a .tar.gz file.
Is it possible to upload/submit a pipeline to Kubeflow via a JSON representation or any other representation instead of a compressed file (tar.gz)? Is there a way to bypass the filesystem persistence of files (zips and tar.gz) and store them in a database as a YAML/JSON representation?
When you compile your Python pipeline code, the result is a compressed file containing a YAML file. You can extract the YAML file from the archive and add its contents to your database table.
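For instance, a small sketch of pulling the YAML out of the compiled archive (this uses only Python's standard tarfile module; 'pipeline.tar.gz' is a placeholder name):

# Small sketch using the standard library; 'pipeline.tar.gz' is a placeholder.
# The compiled archive typically contains a single YAML file.
import tarfile

with tarfile.open('pipeline.tar.gz', 'r:gz') as archive:
    member = archive.getmembers()[0]  # the single YAML entry
    yaml_text = archive.extractfile(member).read().decode('utf-8')

# yaml_text can now be stored in a database column as-is.
print(yaml_text[:200])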
Later, if you want to upload it to Kubeflow, use the following code:
import kfp

pipeline_file_path = 'pipelines.yaml'  # extract it from your database
pipeline_name = 'Your Pipeline Name'

client = kfp.Client()
pipeline = client.pipeline_uploads.upload_pipeline(
    pipeline_file_path, name=pipeline_name)

Is there any way to dynamically convert a Jenkinsfile into a Drone.yml pipeline configuration?

I currently have some existing Jenkinsfiles from an older Jenkins CI/CD pipeline configuration. I've started migrating services to Drone CI recently, but I'm not quite sure how some of the Jenkins (Groovy) commands translate to Drone's YAML syntax.
Example (redacted / sample):
// ...
stage('version')
choice = new ChoiceParameterDefinition('VERSION', ['x', 'y', 'z'] as String[], '...')
def type = input(id: 'type', message: 'Select one', parameters: [choice])
stage('Tag') {
    sh "./some-script/.sh -t ${type}"
}
// ...
Is there anything that could do the conversion automatically? The DroneCI docs are pretty vague and don't cover many important pipeline design aspects (at least not from what I've found).
Unfortunately this is impossible to achieve in DroneCI by the same means. This is because Jenkins allows entering input from the UI when running a pipeline, while DroneCI does not.
You can however specify properties such as the version number in a different file that the pipeline can identify and process accordingly.

Fail stage on failed expressions in Spinnaker pipeline stage

I am working on a Spinnaker pipeline. I noticed there is an option called Fail stage on failed expressions when editing a stage through the web UI. I didn't find any explanation about it in the docs; could somebody give an example of it?
This option is used to fail the stage if any pipeline expression (like ${...}) inside it fails to be processed, even if the stage itself was successful.
For example, for the Evaluate Variables stage this option is enabled by default.

Jenkins scripted pipeline or declarative pipeline

I'm trying to convert my old project-based workflow to a pipeline based on Jenkins. While going through the docs I found there are two different syntaxes, named scripted and declarative; the declarative syntax was released fairly recently (end of 2016). Although there is a new syntax release, Jenkins still supports the scripted syntax as well.
Now, I'm not sure in which situation each of these two types would be the best match. So will declarative be the future of the Jenkins pipeline?
Can anyone share some thoughts about these two syntax types?
When Jenkins Pipeline was first created, Groovy was selected as the foundation. Jenkins has long shipped with an embedded Groovy engine to provide advanced scripting capabilities for admins and users alike. Additionally, the implementors of Jenkins Pipeline found Groovy to be a solid foundation upon which to build what is now referred to as the "Scripted Pipeline" DSL.
As it is a fully featured programming environment, Scripted Pipeline offers a tremendous amount of flexibility and extensibility to Jenkins users. The Groovy learning-curve isn’t typically desirable for all members of a given team, so Declarative Pipeline was created to offer a simpler and more opinionated syntax for authoring Jenkins Pipeline.
The two are both fundamentally the same Pipeline sub-system underneath. They are both durable implementations of "Pipeline as code." They are both able to use steps built into Pipeline or provided by plugins. Both are able to utilize Shared Libraries.
Where they differ however is in syntax and flexibility. Declarative limits what is available to the user with a more strict and pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted provides very few limits, insofar that the only limits on structure and syntax tend to be defined by Groovy itself, rather than any Pipeline-specific systems, making it an ideal choice for power-users and those with more complex requirements. As the name implies, Declarative Pipeline encourages a declarative programming model. Whereas Scripted Pipelines follow a more imperative programming model.
Copied from Syntax Comparison
Another thing to consider is that declarative pipelines have a script() step, which can run any scripted-pipeline code. So my recommendation would be to use declarative pipelines, and if needed use script() blocks for the scripted parts. That way you get the best of both worlds.
I made the switch to declarative recently from scripted with the kubernetes agent. Up until July '18 declarative pipelines didn't have the full ability to specify kubernetes pods. However with the addition of the yamlFile step you can now read your pod template from a yaml file in your repo.
This then lets you use e.g. vscode's great kubernetes plugin to validate your pod template, then read it into your Jenkinsfile and use the containers in steps as you please.
pipeline {
    agent {
        kubernetes {
            label 'jenkins-pod'
            yamlFile 'jenkinsPodTemplate.yml'
        }
    }
    stages {
        stage('Checkout code and parse Jenkinsfile.json') {
            steps {
                container('jnlp') {
                    script {
                        inputFile = readFile('Jenkinsfile.json')
                        config = new groovy.json.JsonSlurperClassic().parseText(inputFile)
                        containerTag = env.BRANCH_NAME + '-' + env.GIT_COMMIT.substring(0, 7)
                        println "pipeline config ==> ${config}"
                    } // script
                } // container('jnlp')
            } // steps
        } // stage
    } // stages
} // pipeline
As mentioned above, you can add script blocks. Below is an example pod template with custom jnlp and docker containers.
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-pod
spec:
  containers:
    - name: jnlp
      image: jenkins/jnlp-slave:3.23-1
      imagePullPolicy: IfNotPresent
      tty: true
    - name: rsync
      image: mrsixw/concourse-rsync-resource
      imagePullPolicy: IfNotPresent
      tty: true
      volumeMounts:
        - name: nfs
          mountPath: /dags
    - name: docker
      image: docker:17.03
      imagePullPolicy: IfNotPresent
      command:
        - cat
      tty: true
      volumeMounts:
        - name: docker
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker
      hostPath:
        path: /var/run/docker.sock
    - name: nfs
      nfs:
        server: 10.154.0.3
        path: /airflow/dags
Declarative appears to be the more future-proof option and the one that people recommend. It's the only one the Visual Pipeline Editor can support, and it supports validation. It also ends up having most of the power of scripted, since you can fall back to scripted in most contexts. Occasionally someone comes up with a use case where they can't quite do what they want with declarative, but this is generally people who have been using scripted for some time, and these feature gaps are likely to close in time.
More context: https://jenkins.io/blog/2017/02/03/declarative-pipeline-ga/
The Jenkins documentation properly explains and compares both the types.
To quote:
"Scripted Pipeline offers a tremendous amount of flexibility and extensibility to Jenkins users. The Groovy learning-curve isn’t typically desirable for all members of a given team, so Declarative Pipeline was created to offer a simpler and more opinionated syntax for authoring Jenkins Pipeline.
The two are both fundamentally the same Pipeline sub-system underneath."
Read more here: https://jenkins.io/doc/book/pipeline/syntax/#compare
The declarative pipeline is defined within a block labelled ‘pipeline’ whereas the scripted pipeline is defined within a ‘node’.
Syntax: a declarative pipeline is structured into 'stages' and 'steps'.
If the build fails, the declarative one gives you the option to restart the build from that stage again, which is not possible with the scripted option.
If there is any syntax issue, the declarative one will notify you as soon as you build the job, whereas the scripted one will pass the stages that are okay and only throw an error on the stage that is not.
You can also refer to this, a very good read: https://e.printstacktrace.blog/jenkins-scripted-pipeline-vs-declarative-pipeline-the-4-practical-differences/
(by Szymon Stepniak, https://stackoverflow.com/users/2194470/szymon-stepniak?tab=profile)
I also have this question, which brought me here. Declarative pipeline certainly seems like the preferred method, and I personally find it much more readable, but I'm trying to convert a mid-complexity Freestyle job to declarative and I've found at least one plugin, the Build Blocker plugin, that I can't get to run even in a script block in a step (I've tried putting the corresponding blockOn command everywhere with no luck; the return error is usually "No such DSL method 'blockOn' found among steps"). So I think plugin support is a separate issue even with the script block (someone please correct me if I'm wrong about this). I've also had to use the script block several times to get what I consider simple behaviors to work, such as setting the build display name.
Based on my experience, I'm leaning towards redoing my work as scripted, since plugin support for declarative still isn't where we need it. That's unfortunate, as I agree declarative seems the most future-proof option, and it is officially supported. Maybe consider how many plugins you intend to use before making a choice.

How can I test my jenkins ci pipelines locally?

I am writing a number of CI scripts for Jenkins pipelines. A frequently occurring pattern is:
dir("path/to/stuff"){
do_stuff()
}
I would like to 'test-run' these scripts to achieve a (very) short feedback loop. But I immediately run into the fact that this dir method is not an 'official' groovy method.
$ groovy ci/test-ci-scripts.groovy
Caught: groovy.lang.MissingMethodException: No signature of method:
test-ci-scripts.dir() is applicable for argument types: ....
What do I need to import to get this running?
Just use single quotes:
dir('path') {
    // some block
}
This works fine for me. (You can find dir in the Jenkins Pipeline Snippet Generator.)
