Referencing folder-level scoped credentials in a declarative Jenkins pipeline

Question
I have the following script as part of a declarative pipeline in Jenkins:
stages {
    stage('sql') {
        steps {
            step([
                $class: 'SQLPlusRunnerBuilder',
                credentialsId: "sis-database-prod-schema-test",
                instance: "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=db_${ENVIRONMENT}.int.excelsior.edu)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=${ENVIRONMENT})))",
                scriptType: 'userDefined',
                script: '',
                scriptContent: "select * from dual",
                customOracleHome: '/usr/lib/oracle/12.2/client64'
            ])
        }
    }
}
You will notice I am referencing the credential ID sis-database-prod-schema-test. When I scope this credential globally, this script works. However, when I scope the credential at the folder level, I get the following error:
ERROR: Invalid credentials [sis-database-prod-schema-test]. Failed to initialize credentials or load user and password
Here is a screenshot of my folder-level scope configuration
Additional Information
When I scope the credential at the folder level, I can see it in a configuration drop-down ONLY when I am in the appropriate folder. So, in my mind, the scope configuration is correct but the referencing (in the code) is wrong.
The entry I have highlighted is the sis-database-prod-schema-test credential ID. The one below it (sis-test-database-prod-schema) is a global credential unrelated to this question.
Edit: This Was a Known Issue
This is a known bug that the author was unable to fix. The relevant code is here. You can issue a pull request to fix the bug.

I don't know if you still care, but I just submitted a pull request for this that got accepted. This problem should now be fixed.
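For illustration, the underlying distinction is which context the credential ID is resolved against: a lookup against the Jenkins root only finds globally scoped credentials, while a lookup in the context of the job also walks its enclosing folders. A hypothetical Groovy sketch using the Credentials API (this is not the plugin's actual code; build stands for the running build):
import com.cloudbees.plugins.credentials.CredentialsProvider
import com.cloudbees.plugins.credentials.common.StandardUsernamePasswordCredentials
import hudson.security.ACL

// Resolving against Jenkins.instance only sees globally scoped credentials.
// Passing the build's job as the context makes the provider walk up through
// the enclosing folders, so folder-scoped credentials are found as well.
def creds = CredentialsProvider.lookupCredentials(
        StandardUsernamePasswordCredentials.class,
        build.getParent(),   // the job, including its enclosing folders
        ACL.SYSTEM,
        Collections.emptyList())
def match = creds.find { it.id == 'sis-database-prod-schema-test' }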

Related

How to fetch SSM Parameters from two different accounts using AWS CDK

I have a scenario where I'm using CodePipeline to deploy my cdk project from a tools account to several environment accounts.
The way my pipeline is deploying is by running cdk deploy from within a CodeBuild job.
My team has decided to use SSM Parameter Store to store configuration, and we ended up with some parameters living in the environment account, for example the VPC_ID (resources/vpc/id), which I can read at deployment time via ssm.StringParameter.valueForStringParameter.
However, other parameters live in the tools account, such as the account IDs of my environment accounts (environment/nonprod/account/id) and other global config. I'm having trouble fetching those values.
At the moment, the only way I could think of was to read all those values in a previous step and load them into the context.
Is there a more elegant approach for this problem? I was hoping I could specify in which account to get the SSM values from. Any ideas?
Thank you.
As you already stated, there is no native support for that. I am also using CodePipeline in cross-account deployments, so all the automation parameters and product-specific parameters are stored in a secured account, and CodePipeline deploys the resources using CloudFormation as an action provider.
Cross-account resolution of SSM parameters isn't supported, so in the end I added an extra stage to my CodePipeline: a CodeBuild project that runs a script in a containerized environment, and the script then "syncs" the parameters from the automation account to the destination account.
As part of your pipeline, I would add a preliminary step to execute a Lambda. That Lambda can then execute whatever queries you wish to obtain whatever metadata/config that is required. The output from that Lambda can then be passed in to the CodeBuild step.
e.g. within the Lambda:
import * as AWS from 'aws-sdk';
import { CodePipelineEvent, Context } from 'aws-lambda';

export class ConfigFetcher {
    codepipeline = new AWS.CodePipeline();

    async fetchConfig(event: CodePipelineEvent, context: Context): Promise<void> {
        // Retrieve the Job ID from the Lambda action
        const jobId = event['CodePipeline.job'].id;
        // now get your config by executing whatever queries you need, even cross-account, via the SDK
        // we assume that the answer ends up in the variable someValue
        const someValue = 'placeholder';
        const params = {
            jobId: jobId,
            outputVariables: {
                MY_CONFIG: someValue,
            },
        };
        // now tell CodePipeline you're done
        await this.codepipeline.putJobSuccessResult(params).promise().catch(err => {
            console.error('Error reporting build success to CodePipeline: ' + err);
            throw err;
        });
        // make sure you have some sort of catch wrapping the above to post a failure to CodePipeline
        // ...
    }
}

const configFetcher = new ConfigFetcher();

exports.handler = async function fetchConfigMetadata(event: CodePipelineEvent, context: Context): Promise<void> {
    return configFetcher.fetchConfig(event, context);
};
Assuming that you create your pipeline using CDK, then your Lambda step will be created using something like this:
const fetcherAction = new LambdaInvokeAction({
    actionName: 'FetchConfigMetadata',
    lambda: configFetcher,
    variablesNamespace: 'ConfigMetadata',
});
Note the use of variablesNamespace: we need to refer to this later in order to retrieve the values from the Lambda's output and insert them as env variables into the CodeBuild environment.
Now our CodeBuild definition, again assuming we create using CDK:
new CodeBuildAction({
    // ...
    environmentVariables: {
        MY_CONFIG: {
            type: BuildEnvironmentVariableType.PLAINTEXT,
            value: '#{ConfigMetadata.MY_CONFIG}',
        },
    },
});
We can call the variable whatever we want within CodeBuild, but note that ConfigMetadata.MY_CONFIG needs to match the namespace and output value of the Lambda.
You can have your lambda do anything you want to retrieve whatever data it needs - it's just going to need to be given appropriate permissions to reach across into other AWS accounts if required, which you can do using role assumption. Using a Lambda as a pipeline step will be a LOT faster than using a CodeBuild step in the pipeline, plus it's easier to change: if you write your Lambda code in Typescript/JS or Python, you can even use the AWS console to do in-place edits whilst you test that it executes correctly.
AFAIK there is no native way to achieve what you described. If there is a way, I'd like to know too. I believe you can use a CloudFormation custom resource backed by Lambda for this purpose.
You can pass parameters to the lambda request and get information back from the lambda response.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html, https://www.2ndwatch.com/blog/a-step-by-step-guide-on-using-aws-lambda-backed-custom-resources-with-amazon-cfts/ and https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html for more information.
This question is a year old, but a simpler method I found for retrieving parameters from your tools/deployment account is to specify them as env variables in your buildspec file. CodeBuild will always pull these from whatever account your job is running in (which in this question's scenario would be the tools account).
To pull parameters from your target environment accounts, it's best to use the CDK SSM approach suggested by the question author.

How to provide keychainPwd in hudson.util.Secret format in Xcode Integration plugin while working with Blue Ocean?

We created a Jenkins Pipeline as Code using Blue Ocean, and it was working fine until recently.
Now, when we try to pass the keychainPwd parameter for the Xcode integration plugin in Blue Ocean, it gives us errors, and our Blue Ocean pipeline for iOS is not working.
We tried to use the credential's secret key and pass it as a parameter, but it is not working.
environment {
    Keychain_pwd_id = credentials('test')
}
Here, the 'test' secret key was created.
We tried the following as well:
keychainPwd: hudson.util.Secret.fromString("${Keychain_pwd_id}")
pipeline {
    environment {
        Keychain_pwd_id = credentials('test')
    }
    stages {
        stage('Xcode Build') {
            steps {
                xcodeBuild(buildIpa: true, bundleID: 'com.xxx.xxxxxxxxxx', cleanBeforeBuild: true,
                        configuration: 'Release', developmentTeamID: 'xxxxxxxx',
                        developmentTeamName: 'xxxxxxxxxxxxxxxxxxxxx', ipaExportMethod: 'enterprise',
                        ipaName: 'xxxxxxxxxxx', ipaOutputDirectory: 'build', keychainName: 'login',
                        keychainPath: '${HOME}/Library/Keychains/login.keychain',
                        keychainPwd: "${Keychain_pwd_id}", manualSigning: true,
                        provisioningProfiles: [[provisioningProfileAppId: 'xxxxxxxxxxxxxxxxxxx',
                                                provisioningProfileUUID: 'xxxxxxxxxxxxxxxxxxxxxxxxxx']],
                        unlockKeychain: true, xcodeSchema: 'xxxxxxxxxxxxxxxx')
            }
        }
    }
}
Expecting "class hudson.util.Secret" for parameter "keychainPwd" but got "${keychainPwd}" of type class java.lang.String instead # line 12, column 407.
I'm currently working through the same issue. It seems the xcodebuild plugin was updated recently to require a hudson.util.Secret for this parameter.
I was able to get this building with the following answers: How do i compare user inputed password to credentials passphrase
Jenkins CI Pipeline Scripts not permitted to use method groovy.lang.GroovyObject
The change from the first link is what you're looking for, but you may need to approve your script via the info in the second link.
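Concretely, the fix amounts to wrapping the string in hudson.util.Secret.fromString, as already shown above. A minimal sketch of the relevant part of the pipeline (untested; the other xcodeBuild parameters are elided as in the question, and the call may need script approval as noted):
pipeline {
    agent any
    environment {
        Keychain_pwd_id = credentials('test')
    }
    stages {
        stage('Xcode Build') {
            steps {
                xcodeBuild(
                    // ... other parameters as in the question ...
                    // wrap the String in the hudson.util.Secret the plugin now expects
                    keychainPwd: hudson.util.Secret.fromString("${Keychain_pwd_id}")
                )
            }
        }
    }
}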

How do you disable Jenkins CSRF with script?

I'm having issues disabling the CSRF protection in an automated fashion. I want to disable it with a Groovy init script, or just in a property file, before the Jenkins master starts. I'm not sure why I'm getting a crumb issue; I assume it has to do with the exposed LB in K8s/AWS. I'm using an AWS ELB to expose pods, and it's causing a CSRF exception in the crumb check; I also sometimes get a reverse-proxy warning when I go to Manage Jenkins.
My research said I could enable the expanded proxy compatibility or disable the CSRF checking, but I haven't found the Groovy or config files where these settings live.
My current groovy init script is as follows:
import hudson.security.csrf.DefaultCrumbIssuer
import jenkins.model.Jenkins
def j = Jenkins.instance;
j.setCrumbIssuer(null); // I've also tried setting a new crumb issuer here as well.
j.save();
System.setProperty("hudson.security.csrf.CrumbFilter", "false");
System.setProperty("hudson.security.csrf", "false");
System.setProperty("hudson.security.csrf.GlobalCrumbIssuerConfiguration", "false");
I can't seem to find the reference as to how to disable this property or enable the Enable proxy compatibility property either.
(The relevant options on the global security form: Crumb Algorithm: Default Crumb Issuer, with an "Enable proxy compatibility" checkbox.)
I intercepted the request sent when I click Apply on the configuration page, and judging from the JSON payload the setting is:
"hudson-security-csrf-GlobalCrumbIssuerConfiguration": {
"csrf": {
"issuer": {
"value": "0",
"stapler-class": "hudson.security.csrf.DefaultCrumbIssuer",
"$class": "hudson.security.csrf.DefaultCrumbIssuer",
"excludeClientIPFromCrumb": true
}
}
},
I'm not sure what these mean or how I'm supposed to set them.
If you really need to (temporarily) disable CSRF, it can be done with Groovy:
import jenkins.model.Jenkins
def instance = Jenkins.instance
instance.setCrumbIssuer(null)
It should be enabled again afterwards by setting the default CrumbIssuer, as mentioned in the Jenkins wiki:
import hudson.security.csrf.DefaultCrumbIssuer
import jenkins.model.Jenkins
def instance = Jenkins.instance
instance.setCrumbIssuer(new DefaultCrumbIssuer(true))
instance.save()
N.B.: it's not enough to set the flag to enable CSRF protection via the GUI afterwards; you need to check the crumb algorithm, too.
I stumbled on this question while I was tearing my hair out trying to figure out more or less the same thing (in my case, I needed to know how the proxy compatibility option mapped to Jenkins' config.xml). In the HTML source for the form, there's this helpful bit of info (truncated for brevity):
<label>Enable proxy compatibility</label><a helpURL="/descriptor/hudson.security.csrf.DefaultCrumbIssuer/help/excludeClientIPFromCrumb"><img /></a>
excludeClientIPFromCrumb is a constructor parameter on DefaultCrumbIssuer, as the javadocs expose: http://javadoc.jenkins-ci.org/hudson/security/csrf/DefaultCrumbIssuer.html. I just needed to flip that value in my config.xml - my confusion stemmed from how the label for the field in the UI differed from the name of the constructor argument.
For your case, if you want to enable CSRF protection using the default crumb provider with "enable proxy compatibility" turned on, in your script you can do
j.setCrumbIssuer(new DefaultCrumbIssuer(true));
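Combined into a complete init script (a sketch, assuming it is placed in $JENKINS_HOME/init.groovy.d/ so it runs at startup):
import hudson.security.csrf.DefaultCrumbIssuer
import jenkins.model.Jenkins

def j = Jenkins.instance
// true corresponds to "Enable proxy compatibility" (excludeClientIPFromCrumb),
// keeping CSRF protection on while tolerating a proxy/ELB rewriting client IPs
j.setCrumbIssuer(new DefaultCrumbIssuer(true))
j.save()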
Instead of disabling CSRF protection, you can simply add a crumb to your request so that you won't get that error anymore. Please go through this link to see how, and this link for more info. Hope this helps.
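The links are not reproduced here, but as a hypothetical standalone illustration (not from the original answer), fetching a crumb and sending it with a request might look like this in Groovy. Note that on recent Jenkins versions the crumb is tied to the web session, so the session cookie from the crumb request must be reused:
def jenkins = 'http://localhost:8080'                      // assumed Jenkins URL
def auth = 'user:apitoken'.bytes.encodeBase64().toString() // assumed credentials

// ask the crumb issuer for "<field>:<value>"
def crumbConn = new URL("${jenkins}/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)").openConnection()
crumbConn.setRequestProperty('Authorization', "Basic ${auth}")
def (field, value) = crumbConn.inputStream.text.split(':', 2)
def session = crumbConn.getHeaderField('Set-Cookie')

// send the crumb (and session cookie) along with the actual request
def post = new URL("${jenkins}/job/myjob/build").openConnection()
post.requestMethod = 'POST'
post.setRequestProperty('Authorization', "Basic ${auth}")
post.setRequestProperty(field, value)
if (session) { post.setRequestProperty('Cookie', session) }
println post.responseCode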

Bitbucket Jenkins plugin constructs wrong push URL

We use Bitbucket server and want to trigger a Jenkins build whenever something is pushed to Bitbucket.
I tried to set up everything according to this page:
https://wiki.jenkins.io/display/JENKINS/BitBucket+Plugin
So I created a Post Webhook in Bitbucket, pointing at the Jenkins Bitbucket plugin's endpoint.
Bitbucket successfully notifies the plugin when a push occurs. According to the Jenkins logs, the plugin then iterates over all jobs where "Build when a change is pushed to BitBucket" is checked, and tries to match that job's repo URL to the URL of the push that occurred.
So, if the repo URL is
https://jira.mycompany.com/stash/scm/PROJ/project.git, the plugin tries to match it against
https://jira.mycompany.com/stash/PROJ/project, which obviously fails.
As per official info from Atlassian, Bitbucket cannot be prevented from inserting the "/scm/" part in the path.
The corresponding code in the Bitbucket Jenkins plugin is in class com.cloudbees.jenkins.plugins.BitbucketPayloadProcessor:
private void processWebhookPayloadBitBucketServer(JSONObject payload) {
    JSONObject repo = payload.getJSONObject("repository");
    String user = payload.getJSONObject("actor").getString("username");
    String url = "";
    if (repo.getJSONObject("links").getJSONArray("self").size() != 0) {
        try {
            // The "self" link points at the repository's Bitbucket page, e.g.
            // https://host/projects/PROJ/repos/project/browse
            URL pushHref = new URL(repo.getJSONObject("links").getJSONArray("self").getJSONObject(0).getString("href"));
            // Rebuild a checkout-style URL by replacing everything from
            // "projects" onwards with the repo's lowercased full name;
            // note this drops the "/scm/" segment of the real clone URL.
            url = pushHref.toString().replaceFirst(new String("projects.*"), new String(repo.getString("fullName").toLowerCase()));
            String scm = repo.has("scmId") ? repo.getString("scmId") : "git";
            probe.triggerMatchingJobs(user, url, scm, payload.toString());
        } catch (MalformedURLException e) {
            LOGGER.log(Level.WARNING, String.format("URL %s is malformed", url), e);
        }
    }
}
In the JSON payload that Bitbucket sends to the plugin, the actual checkout URL doesn't appear, only the link to the repository's Bitbucket page. The above method from the plugin appears to construct the checkout URL from that URL by removing everything after and including projects/ and adding the "full name" of the repo, resulting in the above wrong URL.
Is this a bug in the Jenkins plugin? If so, how can the plugin work for anyone?
I found the reason for the failure.
The issue is that the Bitbucket plugin for Jenkins does account for the /scm part in the path, but only if it's the first part after the host name.
If your Bitbucket server instance is configured not under its own domain but under a path of another service, matching the checkout URLs will fail.
Example:
https://bitbucket.foobar.com/scm/PROJ/myproject.git will work,
https://jira.foobar.com/stash/scm/PROJ/myproject.git will not work.
Someone who also had this problem has already created a fix for the plugin, the pull request for which is pending: JENKINS-49177: Now removing first occurrence of /scm
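For illustration, the essence of that fix (a hypothetical sketch, not the pull request's actual diff) is to strip the first /scm segment from the checkout URL before comparing, using the example URLs above:
// e.g. https://jira.foobar.com/stash/scm/PROJ/myproject.git
//   -> https://jira.foobar.com/stash/PROJ/myproject.git
String checkoutUrl = "https://jira.foobar.com/stash/scm/PROJ/myproject.git"
String normalized = checkoutUrl.replaceFirst("/scm/", "/")
assert normalized == "https://jira.foobar.com/stash/PROJ/myproject.git"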

Jenkins job DSL plugin - hidden parameter

I am using the Jenkins hidden parameter plugin, but I can't find the syntax to write it in DSL as I do with other parameters.
For example:
https://jenkinsci.github.io/job-dsl-plugin/#method/javaposse.jobdsl.dsl.helpers.BuildParametersContext.activeChoiceParam
Is there any way to use a hidden parameter in the DSL?
Job DSL has no built-in support for the Hidden Parameter plugin, so it's not mentioned in the API viewer. But it's supported by the Automatically Generated DSL:
job('example') {
    parameters {
        wHideParameterDefinition {
            name('FOO')
            defaultValue('bar')
            description('lorem ipsum')
        }
    }
}
Before using the declarative pipeline syntax (described in jenkinsci/pipeline-model-definition-plugin), you would have used:
the groovy-based DSL plugin
in combination with the JENKINS Mask Passwords Plugin (PR 755)
But with the pure DSL pipeline syntax, this is not yet supported (April 2017).
PR 34 (a secret step) has been rejected.
The following issues are still open:
"JENKINS-27386: Access credentials value from workflow Groovy script" (when to be implemented in a DSL pipeline)
"JENKINS-27398: Pipeline-as-Code CredentialsProvider for a job" (which would at least allow you tu use credentials as a workaround to access secret values)
The last issue, though, points to JENKINS-29922 (Promote delegates of metasteps to top-level functions, deprecate $class) and adds the comment:
JENKINS-29922 is implemented, so assuming a #Symbol is defined for each credentials kind, and a credentials step is marked metaStep, you could write more simply:
usernamePassword id: 'hipchat-login', username: 'bob', password: 'abc/def+GHI0123='
hipchat server: …, message: …, credentialsId: 'hipchat-login'
or even allow the id to be generated, and return it from the step:
hipchat server: …, message: …, credentialsId: usernamePassword(username: 'bob', password: 'abc/def+GHI0123=')
While that is encrypted, it is not exactly "hidden", though.
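For completeness: the usual workaround for keeping secret values out of the build log is the Credentials Binding plugin's withCredentials step. A minimal scripted-pipeline sketch, assuming a username/password credential with ID 'hipchat-login' as in the quoted example:
node {
    // usernamePassword binds the two halves of the credential to environment
    // variables for the duration of the block; Jenkins masks their values
    // if they appear in the build log.
    withCredentials([usernamePassword(credentialsId: 'hipchat-login',
                                      usernameVariable: 'HIPCHAT_USER',
                                      passwordVariable: 'HIPCHAT_PASS')]) {
        sh './notify-hipchat.sh'   // hypothetical script that reads the two variables
    }
}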
