I'm currently building my choices depending on the available agents and their labels in a pipeline:
import hudson.model.labels.LabelAtom
import jenkins.model.Jenkins

def loadConfigurations() {
    def configurations = []
    def jenkins = Jenkins.instance
    def onlineComputers = jenkins.computers.findAll { it.online }
    def availableLabels = onlineComputers.collect {
        it.assignedLabels.collect { LabelAtom.escape(it.expression) }
    }.flatten().unique(false)

    def lineage16Configurations = ['samsung:klte:lineage:16.0']
    if (availableLabels.containsAll(['lineage', '16.0'])) {
        configurations.addAll(lineage16Configurations)
    }
    return configurations
}
def configurations = loadConfigurations()
pipeline {
    agent { label 'master' }
    parameters {
        choice name: 'CONFIG', choices: configurations, description: 'Configuration containing vendor, device, OS and its version. Each separated by a colon.'
    }
    //...
Now, let's say all agents are offline: when I query the remote access API I don't get up-to-date choices, because they're only updated when a build starts. Is there any existing way to retrieve them through the remote access API, or do I need to write my own plugin which adds a new endpoint to it?
I've already tried the Active Choices Parameter and the Extended Choice Parameter, without success: neither displays any choices in the API.
I was playing around with the Extensible Choice Parameter and created a pull request which exposes the choiceList to the remote access API. The API then returned updated choices without building the job.
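For reference, this is a hedged sketch of how I check what the remote access API currently exposes for a job's choices (the Jenkins URL, job name and user:apitoken are placeholders, not values from this setup):
import groovy.json.JsonSlurper

// Sketch only: query the job's parameter definitions via the JSON remote access API.
def cmd = ["curl", "-s", "-u", "user:apitoken",
           "https://jenkins.example.com/job/my-job/api/json?tree=property[parameterDefinitions[name,choices]]"]
def json = new JsonSlurper().parseText(cmd.execute().text)
def paramDefs = json.property.find { it.parameterDefinitions }?.parameterDefinitions
paramDefs.each { println "${it.name}: ${it.choices}" }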
We have two environments, qa and dev, which are configured as parameters in the Jenkinsfile. The Role-based Authorization Strategy plugin is enabled, and there are two groups of users, qa and dev (same as the environments). The problem is that qa users can start builds with the dev environment. Is there any way to restrict this behavior?
Here is a simple example:
pipeline {
    agent any
    parameters {
        choice(name: 'environment', choices: ['dev', 'qa'])
    }
    stages {
        stage('test') {
            steps {
                script {
                    // BUILD_USER_GROUPS is illustrative here, not a real built-in variable
                    if (params.environment == 'dev' && !(env.BUILD_USER_ID in env.BUILD_USER_GROUPS)) {
                        echo "User ${env.BUILD_USER_ID} can not start a build on the DEV environment"
                    } else if (params.environment == 'qa' && !(env.BUILD_USER_ID in env.BUILD_USER_GROUPS)) {
                        echo "User ${env.BUILD_USER_ID} can not start a build on the QA environment"
                    } else {
                        echo "You can run the job, you are in the proper group for this environment"
                    }
                }
            }
        }
    }
}
The example is not real and may not work, but I hope it makes clear what I want to accomplish.
P.S. The documentation for this is not great, and I can't find many examples on the web.
Instead of blocking (or failing) the execution after it has started, you can take a different approach and prevent an unauthorized user from even starting the build with irrelevant parameters (the dev environment in this case).
To do so you can use the Extended Choice Parameter plugin, which lets you create a select list (multi or single select) based on the return value of a Groovy script.
Then you can use the following script:
def buildUserGroup = ["group1", "group 2", "group3"]
def environments = ['qa'] // This default value will be available to everyone

// All the groups that the current logged-in user is a member of
def currentUserGroups = hudson.model.User.current().getAuthorities()
if (currentUserGroups.any { buildUserGroup.contains(it) }) {
    environments.add("dev") // Add relevant environments according to groups
}
return environments
This way you can define logic that adds environments according to group membership, and adjust it to your needs. A user who builds the job won't even see the environments he is not allowed to build, which gives you the restriction you need.
In a Pipeline job, with your requirements, the configuration can be simplified and will look like:
pipeline {
    agent any
    parameters {
        extendedChoice(name: 'environment', type: 'PT_SINGLE_SELECT', description: 'Environment type', visibleItemCount: 10,
                       groovyScript: "return hudson.model.User.current().getAuthorities().contains('dev') ? ['dev','qa'] : ['qa']")
    }
    stages {
        stage('test') {
            ....
        }
    }
}
Update:
If you are using the Role-based Authorization Strategy and want to use the above solution with roles instead of groups, you can use the following code (based on this script) in your parameter:
def environments = ['qa'] // This default value will be available to everyone
def userID = hudson.model.User.current().id // The current user id
def authStrategy = jenkins.model.Jenkins.instance.getAuthorizationStrategy()
def permissions = authStrategy.roleMaps.inject([:]) { map, it -> map + it.value.grantedRoles }

// Validate that the current user is in the 'dev' role
if (permissions.any { it.key.name == 'dev' && it.value.contains(userID) }) {
    environments.add("dev") // Add relevant environments according to roles
}
return environments
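The same role-based script can also be embedded straight into the Jenkinsfile; a sketch reusing the extendedChoice parameters from the example above (the role name 'dev' is illustrative):
pipeline {
    agent any
    parameters {
        extendedChoice(name: 'environment', type: 'PT_SINGLE_SELECT', description: 'Environment type', visibleItemCount: 10,
                       groovyScript: '''
                           def environments = ['qa']
                           def userID = hudson.model.User.current().id
                           def authStrategy = jenkins.model.Jenkins.instance.getAuthorizationStrategy()
                           def permissions = authStrategy.roleMaps.inject([:]) { map, it -> map + it.value.grantedRoles }
                           if (permissions.any { it.key.name == 'dev' && it.value.contains(userID) }) {
                               environments.add('dev')
                           }
                           return environments
                       ''')
    }
    // stages as in the example above
}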
I'm using a shared library to build CI/CD pipelines in Jenkins. In my case, some of the stages need to send execution info through web APIs, so we need to add the stage ID of the current stage to the API calls.
How can I access the stage ID, similar to ${STAGE_NAME}?
I use the Pipeline REST API Plugin as well as the HTTP Request Plugin.
Your methods in the Jenkinsfile can look like:
@NonCPS
def getJsonObjects(String data) {
    return new groovy.json.JsonSlurperClassic().parseText(data)
}

def getStageFlowLogUrl() {
    def buildDescriptionResponse = httpRequest httpMode: 'GET', url: "${env.BUILD_URL}wfapi/describe", authentication: 'mtuktarov-creds'
    def buildDescriptionJson = getJsonObjects(buildDescriptionResponse.content)
    def stageDescriptionId = false
    buildDescriptionJson.stages.each { it ->
        if (it.name == env.STAGE_NAME) {
            stageDescriptionId = it.id
        }
    }
    return stageDescriptionId
}
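Called from a stage, the resolved ID can then be attached to the outgoing request; a usage sketch (the tracker URL and payload are placeholders of mine, not part of the original setup):
stage('Build') {
    steps {
        script {
            def stageId = getStageFlowLogUrl()
            // Send the current stage id to an external API (placeholder endpoint)
            httpRequest httpMode: 'POST',
                        url: "https://ci-tracker.example.com/stages/${stageId}",
                        contentType: 'APPLICATION_JSON',
                        requestBody: groovy.json.JsonOutput.toJson([job: env.JOB_NAME, build: env.BUILD_NUMBER])
        }
    }
}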
The question is old, but I found a solution: use some code from the pipeline-stage-view plugin (it looks like it is installed in Jenkins by default).
We can take the current job (a WorkflowRun) and pass it as an argument to com.cloudbees.workflow.rest.external.RunExt.create, and voilà: we have an object that contains info about the stages and the time spent executing them.
The full code looks like this:
import com.cloudbees.workflow.rest.external.RunExt
import com.cloudbees.workflow.rest.external.StageNodeExt
def getCurrentBuildStagesDuration() {
    LinkedHashMap stagesInfo = [:]
    def buildObject = RunExt.create(currentBuild.getRawBuild())
    for (StageNodeExt stage : buildObject.getStages()) {
        stagesInfo.put(stage.getName(), stage.getDurationMillis())
    }
    return stagesInfo
}
The function will return something like:
{SomeStage1=7, SomeStage2=1243, SomeStage3=5}
Tested with a Jenkins shared library on Jenkins 2.303.1.
Hope it helps someone )
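And if what you need is the stage ID rather than durations, the same object can be filtered by the current stage name; a sketch, assuming StageNodeExt inherits getId() from FlowNodeExt (which matches my reading of the plugin source):
import com.cloudbees.workflow.rest.external.RunExt

// Sketch: resolve the id of the currently running stage by its name.
def getCurrentStageId() {
    def buildObject = RunExt.create(currentBuild.getRawBuild())
    def stage = buildObject.getStages().find { it.getName() == env.STAGE_NAME }
    return stage?.getId()
}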
I have followed Create an Azure Key Vault-backed secret scope to integrate Databricks with Key Vault, and everything works OK. Unfortunately this requires manual intervention, which breaks our 'fully automated infrastructure' approach. Is there any way to automate this step?
UPDATE: You create a Databricks-backed secret scope using the Databricks CLI (version 0.7.1 and above). Alternatively, you can use the Secrets API.
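That Secrets API call is scriptable; a minimal sketch in Groovy (the workspace URL, token and scope settings are placeholders, not values from this environment):
import groovy.json.JsonOutput

// Sketch: create a Databricks-backed secret scope via the Secrets API.
def workspaceUrl = 'https://adb-1234567890123456.7.azuredatabricks.net'
def token = 'dapiXXXXXXXXXXXXXXXX'

def conn = new URL("${workspaceUrl}/api/2.0/secrets/scopes/create").openConnection()
conn.requestMethod = 'POST'
conn.setRequestProperty('Authorization', "Bearer ${token}")
conn.setRequestProperty('Content-Type', 'application/json')
conn.doOutput = true
conn.outputStream.withWriter {
    it << JsonOutput.toJson([scope: 'my-scope', initial_manage_principal: 'users'])
}
println "Response: ${conn.responseCode}"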
It does not appear that Azure Key Vault-backed secret scope creation has a publicly available API call, unlike Databricks-backed secret scope creation. This is confirmed by the 'Note' on the secret scopes doc page:
Creating an Azure Key Vault-backed secret scope is supported only in the Azure Databricks UI. You cannot create a scope using the Secrets CLI or API.
A request for the feature you are asking for was made last year, but no ETA was given.
I took a look at the request made by the UI page. While the form data is simple enough, the headers and security measures make programmatic access impractical. If you are dead set on automating this part, you could use one of those tools which automate the cursor around the screen and click things for you.
Now it is possible, but you can't use a service principal token; it must be a user token, which hinders automation.
Refer to Microsoft Docs:
https://learn.microsoft.com/en-us/azure/databricks/security/secrets/secret-scopes#create-an-azure-key-vault-backed-secret-scope-using-the-databricks-cli
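Driven from Groovy in the same style as the curl examples elsewhere in this thread, the CLI call looks roughly like this (a sketch based on the flags in the linked docs; the scope name, resource ID and DNS name are placeholders, and the CLI must be authenticated with a user AAD token as noted above):
// Sketch: create a Key Vault-backed scope through the Databricks CLI.
def cmd = ["databricks", "secrets", "create-scope",
           "--scope", "my-kv-scope",
           "--scope-backend-type", "AZURE_KEYVAULT",
           "--resource-id", "/subscriptions/.../providers/Microsoft.KeyVault/vaults/my-kv",
           "--dns-name", "https://my-kv.vault.azure.net/"]
println cmd.execute().text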
You can use the Databricks Terraform provider to create a secret scope backed by Azure Key Vault. But because of Azure limitations it must be done with a user's AAD token (usually obtained via the Azure CLI). Here is a working snippet that creates a secret scope from an existing Key Vault:
terraform {
  required_providers {
    databricks = {
      source  = "databrickslabs/databricks"
      version = "0.2.9"
    }
  }
}

provider "azurerm" {
  version  = "2.33.0"
  features {}
}

data "azurerm_databricks_workspace" "example" {
  name                = var.workspace_name
  resource_group_name = var.resource_group
}

provider "databricks" {
  azure_workspace_resource_id = data.azurerm_databricks_workspace.example.id
}

data "azurerm_key_vault" "example" {
  name                = var.keyvault_name
  resource_group_name = var.resource_group
}

resource "databricks_secret_scope" "example" {
  name = data.azurerm_key_vault.example.name

  keyvault_metadata {
    resource_id = data.azurerm_key_vault.example.id
    dns_name    = data.azurerm_key_vault.example.vault_uri
  }
}

variable "resource_group" {
  type        = string
  description = "Resource group to deploy"
}

variable "workspace_name" {
  type        = string
  description = "The name of the Databricks workspace"
}

variable "keyvault_name" {
  type        = string
  description = "The name of the Key Vault"
}
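One operational note on the snippet above (my assumption, following from the user-AAD-token remark): sign in with az login as a user account before running terraform apply, since the Databricks provider picks up the user's AAD token from the Azure CLI session.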
I'm quite new to Jenkins, Groovy and all that, so forgive me if this sounds dumb.
I'm using the Active Choices plugin, and inside the Groovy script of one of the AC parameters I want to use a different plugin, Artifactory, to fetch a file and display each of its lines as an option.
try {
    def server = Artifactory.newServer url: 'http://localhost:8081/artifactory/', username: 'user', password: 'pass'
    def downloadSpec = """{
        "files": [
            {
                "pattern": "example-repo-local/file.txt",
                "target": "example/"
            }
        ]
    }"""
    server.download(downloadSpec)
    String text = readFile("example/file.txt")
    return text.tokenize("\n")
} catch (Exception e) {
    return [e]
}
However, the Active Choices parameter doesn't seem to recognize other plugins, and it can't find the Artifactory property:
groovy.lang.MissingPropertyException: No such property: Artifactory for class: Script1
My question is - do I need to import the plugin somehow? If so, how do I determine what to import?
There is an option to specify an "Additional classpath" for an Active Choices parameter, but the plugin contains 75 jar files in its WEB-INF/lib directory (just specifying artifactory.jar doesn't seem to change anything).
Just a note - the pipeline recognizes the Artifactory plugin and it works fine: I can successfully connect, retrieve the file and read it.
I couldn't find any reasonable way to run the Artifactory plugin there, so I think the better option is to use curl and the Artifactory API. For example, here is my Active Choices parameter based on a JSON file from Artifactory:
import groovy.json.JsonSlurper
def choices = []
def response = ["curl", "-k", "https://artifactory/app/file.json"].execute().text
def list = new JsonSlurper().parseText( response )
list.each { choices.push(it.name) }
return choices
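Since the original file was plain text rather than JSON, the same approach works with one line per choice; a sketch (the URL and user:password are placeholders):
// Sketch: fetch a plain-text file and offer each of its lines as a choice.
def response = ["curl", "-k", "-u", "user:password",
                "https://artifactory/example-repo-local/file.txt"].execute().text
return response.tokenize("\n")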
I am trying to run some AJAX on the frontend of a Jenkins build configuration page; to be exact, I want a dropdown selection of available databases on a remote server. How do I accomplish this?
In case you need a dynamic list, you can use the Active Choices parameter and add a Groovy script that generates the list from a remote API call.
Below is an example I use to generate a list of VPCs from AWS:
#!groovy
def sout = new StringBuffer(), serr = new StringBuffer()
def process = ["aws", "ec2", "describe-vpcs", "--query", "Vpcs[*].[Tags[?Key==`Name`].Value]"].execute()
process.consumeProcessOutput(sout, serr)
process.waitFor()

// One VPC name per output line
def s3_vpcs = sout.tokenize('\n')
def vpcs = []
for (vpc in s3_vpcs) {
    vpcs.add(vpc)
}
return vpcs
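One thing worth adding (my own suggestion, not part of the original script): after the process.waitFor() line above, check the exit code so the parameter degrades gracefully when the aws CLI is missing or misconfigured:
// Sketch: surface CLI errors as a single dummy choice instead of an empty list.
if (process.exitValue() != 0) {
    return ["<error: ${serr.toString().trim()}>"]
}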