Jenkins EC2 plugin SSH keys - jenkins

I have a Groovy script which configures the AWS EC2 plugin with the required data. I am able to configure all the other inputs. I need to supply a private key in the same region; is there any way I can generate and configure this key in the Groovy script? I followed the document and template below.
https://gist.github.com/vrivellino/97954495938e38421ba4504049fd44ea
https://github.com/jenkinsci/ec2-plugin/blob/master/src/main/java/hudson/plugins/ec2/SlaveTemplate.java

This will help you get the Jenkins private keys:
import jenkins.model.Jenkins
import hudson.plugins.ec2.EC2Cloud

// Find the first EC2 cloud configured on this controller
EC2Cloud cloud = Jenkins.instance.clouds.find { it instanceof EC2Cloud }
// The key pair the plugin uses to launch agents
def key_pair = cloud.getKeyPair()
def private_key_text = key_pair.keyMaterial
// The AWS secret key is stored encrypted and has to be decrypted first
def secret_key = hudson.util.Secret.decrypt(cloud.getSecretKey()).toString()
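If you need the key on disk, for example to SSH into an agent with ssh -i, here is a minimal follow-up sketch (the /tmp path is only an example):
// Write the recovered key material to a file readable only by its owner,
// suitable for `ssh -i /tmp/ec2-agent-key.pem ec2-user@<agent-host>`
def keyFile = new File('/tmp/ec2-agent-key.pem')
keyFile.text = private_key_text
keyFile.setWritable(false, false)  // make the file read-only
keyFile.setReadable(false, false)  // clear read permission for everyone
keyFile.setReadable(true, true)    // grant read back to the owner only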

I am not sure if this is the right answer to your question, but this is where Google led me when I wanted to decipher the private key for the EC2 Jenkins plugin.
This worked for me with Jenkins 2.190.2.
import hudson.plugins.ec2.AmazonEC2Cloud
def cloud = Jenkins.instance.clouds.find { it instanceof AmazonEC2Cloud }
println cloud.getKeyPair().keyMaterial
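The question also asks about generating the key pair itself. A hedged sketch, reusing the cloud object from above and the AWS SDK v1 classes bundled with the EC2 plugin; the key pair name is a placeholder, and you would still have to paste the returned key material into the cloud's private key field:
import com.amazonaws.services.ec2.model.CreateKeyPairRequest

// connect() returns the plugin's authenticated AmazonEC2 client for the
// region this cloud is configured against
def ec2 = cloud.connect()
def result = ec2.createKeyPair(new CreateKeyPairRequest().withKeyName('jenkins-agent-key'))
// AWS returns the private key material only once, at creation time
println result.keyPair.keyMaterial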

Related

Using the AWS CDK to deploy bucket, build frontend, then move files to bucket in a Code Pipeline

I'm using the AWS CDK to deploy code and infrastructure from a monorepo that includes both my front- and backend logic (along with the actual CDK constructs). I'm using the CDK Pipelines library to kick off a build on every commit to my main git branch. The pipeline should:
1. Deploy all the infrastructure, which at the moment is just an API Gateway with an endpoint powered by a Lambda function, and an S3 bucket that will hold the built frontend.
2. Configure and build the frontend by providing the API URL that was just created.
3. Move the built frontend files to the S3 bucket.
My Pipeline is in a different account than the actual deployed infrastructure. I've bootstrapped the environments and set up the correct trust policies. I've succeeded in the first two points by creating the constructs and saving the API URL as a CfnOutput. Here's a simplified version of the Stack:
class MyStack extends Stack {
  constructor(scope, id, props) {
    super(scope, id, props);
    const api = new aws_apigateway.LambdaRestApi(this, id, {
      handler: lambda,
    });
    this.apiURL = new CfnOutput(this, 'api_url', { value: api.url });
    const bucket = new aws_s3.Bucket(this, 'frontend-bucket', {
      bucketName: 'frontend-bucket',
      ...
    });
    this.bucketName = new CfnOutput(this, 'bucket_name', {
      exportName: 'frontend-bucket-name',
      value: bucket.bucketName
    });
  }
}
Here's my pipeline stage:
export class MyStage extends Stage {
  public readonly apiURL: CfnOutput;
  public readonly bucketName: CfnOutput;

  constructor(scope, id, props) {
    super(scope, id, props);
    const backendStack = new MyStack(this, 'demo-stack', props);
    this.apiURL = backendStack.apiURL;
    this.bucketName = backendStack.bucketName;
  }
}
And finally here's my pipeline:
export class MyPipelineStack extends Stack {
  constructor(scope, id, props) {
    super(scope, id, props);
    const pipeline = new CodePipeline(this, 'pipeline', { ... });
    const infrastructure = new MyStage(...);
    // I can use my output to configure my frontend build with the right URL to the API.
    // This seems to be working, or at least I don't receive an error.
    const frontend = new ShellStep('FrontendBuild', {
      input: source,
      commands: [
        'cd frontend',
        'npm ci',
        'VITE_API_BASE_URL="$AWS_API_BASE_URL" npm run build'
      ],
      primaryOutputDirectory: 'frontend/dist',
      envFromCfnOutputs: {
        AWS_API_BASE_URL: infrastructure.apiURL
      }
    });
    // Now I need to move the built files to the S3 bucket.
    // I cannot get the name of the bucket, however; it errors with the message:
    // "No export named frontend-bucket-name found. Rollback requested by user."
    const bucket = aws_s3.Bucket.fromBucketAttributes(this, 'frontend-bucket', {
      bucketName: infrastructure.bucketName.importValue,
      account: 'account-the-bucket-is-in'
    });
    const s3Deploy = new customPipelineActionIMade(frontend.primaryOutput, bucket);
    const postSteps = pipelines.Step.sequence([frontend, s3Deploy]);
    pipeline.addStage(infrastructure, {
      post: postSteps
    });
  }
}
I've tried everything I can think of to allow my pipeline to access that bucket name, but I always get the same thing: "No export named frontend-bucket-name found. Rollback requested by user." The value doesn't seem to get exported from my stack, even though I'm doing something very similar for the API URL in the frontend build step.
If I take away the exportName of the bucket and try to access the CfnOutput value directly, I get a "dependency cannot cross stage boundaries" error.
This seems like a pretty common use case - deploy infrastructure, then configure and deploy a frontend using those constructs - but I haven't been able to find anything that outlines this process. Any help is appreciated.

Access Jenkins credentials bindings from inside a Jenkins job DSL script

I'm not creating a new job.
I want to access a Jenkins secret string binding from inside a job DSL script. I haven't been able to find examples of this.
If I have a secret string binding in Jenkins named "my-secret-string" how do I get the value of that in a DSL script? I want the DSL to make REST calls and other things using secrets I have securely stored in Jenkins.
I can't use credentials('<idCredentials>') because I'm not creating a new job or anything; I want to use those secret values in the DSL script itself.
I don't understand the scenario. You are not creating a new job, but you are still inside a job? What does that mean? My understanding is that you defined a secret text credential in Jenkins and you want to access it from a job. This is a standard scenario:
withCredentials([string(credentialsId: 'my-secret-string', variable: 'mySecretStringVar')]) {
    println mySecretStringVar
}
From the Jenkins Script Console or a Groovy script, depending on where the credentials are located:
def getFolderCredsScript(def pipelineFolder, def credId) {
    // Find the folder by name, then search its folder-scoped credential store
    def folders = jenkins.model.Jenkins.instance
        .getAllItems(com.cloudbees.hudson.plugins.folder.Folder.class)
        .findAll { it.name.equals(pipelineFolder) }
    for (folder in folders) {
        def folderAbs = com.cloudbees.hudson.plugins.folder.AbstractFolder.class.cast(folder)
        def property = folderAbs.getProperties()
            .get(com.cloudbees.hudson.plugins.folder.properties.FolderCredentialsProvider.FolderCredentialsProperty.class)
        if (property != null) {
            for (cred in property.getCredentials()) {
                if (cred.id == credId) {
                    return "${cred.username}:${cred.password}"
                }
            }
        }
    }
}
def getGlobalCredsScript(def credId) {
    // Look up all global username/password credentials
    def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(
        com.cloudbees.plugins.credentials.common.StandardUsernameCredentials.class,
        Jenkins.instance, null, null)
    for (cred in creds) {
        if (cred.id == credId) {
            return "${cred.username}:${cred.password}"
        }
    }
}
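A quick usage sketch; the folder name and credential IDs below are placeholders:
// Hypothetical IDs - replace with your own folder name and credential IDs
println getGlobalCredsScript('my-global-cred-id')
println getFolderCredsScript('my-team-folder', 'my-folder-cred-id')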
I found this question when trying to figure out how to set authenticationToken in my Jenkins DSL. You can't use withCredentials or a credentials call there, since authenticationToken only accepts a string. The answer I found is to wrap the build/seed file: it can use withCredentials and pass the credential in as a string, like this:
Jenkinsfile.build
withCredentials([
    string(credentialsId: 'deploy-trigger-token', variable: 'TRIGGER_TOKEN'),
]) {
    jobDsl targets: ".jenkins/deploy_${env.INSTANCE}_svc.dsl",
        ignoreMissingFiles: true,
        additionalParameters: [
            trigger_token: env.TRIGGER_TOKEN
        ]
}
Then in your DSL file:
pipelineJob("Deploy Service") {
    ...
    authenticationToken(trigger_token)
    ...
}
So to answer your question: you are correct that you can't directly access the credential in your DSL. Instead, you do it in the seed build file, which passes it in as an additionalParameters variable.

Configure Bitbucket plugin to avoid hardcoding of secure variables

I have developed an Atlassian Bitbucket plugin which globally listens for pushes/PRs and sends repository details to databases using a REST API.
I need to configure the REST API URL and credentials so that my plugin can make API calls. Currently I have hardcoded the REST API URL and credentials in my plugin's properties file, which I don't like, because every time I need to build a package targeting my test environment or production, I have to change them. I also don't like keeping credentials in the source code.
What is the best way to add a configuration screen to a Bitbucket plugin? I would like to have a form for URL, username and password (once I have installed the plugin) and update the storage in Bitbucket only once. If I need to restart Bitbucket, I do not want to lose the saved data.
I tried to search for how to configure a Bitbucket plugin, but I could not find an easy way. I do see multiple approaches, for example adding a "Configure" button which opens a servlet to take user input; that seems very cryptic to me. I also see many recommendations for templates, for example Velocity, Soy, etc., which confused me a lot.
Since I am new to plugin development, I have not been able to explore these properly. Looking for some help.
I have a solution for this case:
In pom.xml, add this library:
<dependency>
    <groupId>com.atlassian.plugins</groupId>
    <artifactId>atlassian-plugins-core</artifactId>
    <version>5.0.0</version>
    <scope>provided</scope>
</dependency>
Create a new abc-server.properties in the resources/ folder with the following content:
server.username=YOUR_USERNAME
server.password=YOUR_PASSWORD
Read the values from abc-server.properties in your service class as follows:
import com.atlassian.plugin.util.ClassLoaderUtils;
...
final Properties p = new Properties();
final InputStream is = ClassLoaderUtils.getResourceAsStream("abc-server.properties", this.getClass());
try {
    if (is != null) {
        p.load(is);
        String username = p.getProperty("server.username");
        String password = p.getProperty("server.password");
    }
} catch (IOException e) {
    e.printStackTrace();
}
Please try to implement it. Thanks!
One possibility for a simple configuration file is to read somefile.properties from the Bitbucket home directory; this way the config file will survive application updates.
Create somefile.properties in BITBUCKET_HOME
server.username=YOUR_USERNAME
server.password=YOUR_PASSWORD
Read the properties in your plugin class like this:
// imports
import com.atlassian.bitbucket.server.StorageService;
import com.atlassian.plugin.spring.scanner.annotation.imports.ComponentImport;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

private final StorageService storageService;

// StorageService injected via constructor injection
public SomePlugin(@ComponentImport final StorageService storageService) {
    this.storageService = storageService;
}

Properties p = new Properties();
File file = new File(storageService.getHomeDir().toString(), "somefile.properties");
FileInputStream fileInputStream;
try {
    fileInputStream = new FileInputStream(file);
    p.load(fileInputStream);
    String username = p.getProperty("server.username");
    String password = p.getProperty("server.password");
} catch (IOException e) {
    // handle exception
}

job-dsl - How to pass credentials while creating jobs from gitlab repo branches?

I am creating jobs for each application branch from GitHub.
I am not sure how to pass the credentials for the repo link.
import groovy.json.*

def project = 'app-ras'
def branchApi = new URL("https://gitlab.etctcssd.com/sdadev/${project}/branches")
def branches = new JsonSlurper().parse(branchApi.newReader())
branches.each {
    def branchName = it.name
    def jobName = "${project}-${branchName}".replaceAll('/', '-')
    job(jobName) {
        scm {
            git("https://gitlab.etctcssd.com/sdadev/${project}.git", branchName)
        }
    }
}
Our project is a secured project in GitLab, so how can I pass the credentials in this case?
I am sure it would redirect to a login page, but I am not sure how to handle this. Any help would be greatly appreciated.
I hope it will work in the following way:
import groovy.json.JsonSlurper

def project = 'app-ras'
def branchApi = new URL("https://gitlab.etctcssd.com/sdadev/${project}/branches")
def branches = new JsonSlurper().parse(branchApi.newReader())
branches.each {
    def branchName = it.name
    String jobName = "${project}-${branchName}".replaceAll('/', '-')
    job(jobName) {
        scm {
            git {
                branch(branchName)
                remote {
                    url("https://gitlab.etctcssd.com/sdadev/${project}.git")
                    credentials("HERE")
                }
            }
        }
    }
}
Try substituting HERE with plain credentials (a kind of access token) or with a credential ID (of type Secret text) defined under Jenkins -> Credentials.
Also, are you using GitLab or GitHub?
EDIT
So as far as I understood, you have problems with fetching the branch names, not with the Jenkins DSL. Here you can see how to fetch branches from GitLab. In Groovy it can be done in the following way:
// Authenticate against GitLab with a personal access token via the PRIVATE-TOKEN header
URLConnection connBranches = new URL("https://gitlab.etctcssd.com/sdadev/${project}/branches").openConnection()
connBranches.setRequestProperty("PRIVATE-TOKEN", "PASTE TOKEN VALUE HERE")
new JsonSlurper().parse(new BufferedReader(new InputStreamReader(connBranches.getInputStream())))
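Putting the two snippets together, a sketch of how the seed script's loop would then start from the authenticated response (the token value stays a placeholder):
import groovy.json.JsonSlurper

def project = 'app-ras'
// Authenticated fetch, as above
URLConnection connBranches = new URL("https://gitlab.etctcssd.com/sdadev/${project}/branches").openConnection()
connBranches.setRequestProperty("PRIVATE-TOKEN", "PASTE TOKEN VALUE HERE")
def branches = new JsonSlurper().parse(new BufferedReader(new InputStreamReader(connBranches.getInputStream())))
branches.each {
    def branchName = it.name
    // ...define job(jobName) with credentials("HERE") as in the answer above
}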

How to set an environment variable in Jenkins DSL using the Credentials Binding plugin?

I have created a credential in Jenkins called AZURE_CLIENT_ID. I have the "Credentials Binding Plugin" installed.
If I create a job manually in the UI, I am able to select the binding I would like for the environment and select my "Secret text" type.
I want to replicate this in my Jobs DSL script. I have found the following snippet which is very close to what I want to do:
job('example-2') {
    wrappers {
        credentialsBinding {
            usernamePassword('PASSWORD', 'jarsign-keystore')
        }
    }
}
However, the credential I want to inject is of the Secret text type, and I cannot find the function to bind it with, i.e. what to use instead of usernamePassword. Does anyone know what this should be, please?
'Secret text' kind credentials are retrieved with string() in the credentialsBinding context.
For example:
job('example') {
    wrappers {
        credentialsBinding {
            string('SECRETWORD', 'name_of_credential')
        }
    }
}
Documentation at: https://jenkinsci.github.io/job-dsl-plugin/#method/javaposse.jobdsl.dsl.helpers.wrapper.WrapperContext.credentialsBinding
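A short usage sketch, assuming a shell build step consumes the bound variable (the endpoint URL is a placeholder):
job('example') {
    wrappers {
        credentialsBinding {
            string('SECRETWORD', 'name_of_credential')
        }
    }
    steps {
        // The secret is exposed to the build as the SECRETWORD environment variable
        shell('curl -H "Authorization: Bearer $SECRETWORD" https://example.invalid/api')
    }
}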
