I have a scenario where I'm using CodePipeline to deploy my CDK project from a tools account to several environment accounts.
The pipeline deploys by running cdk deploy from within a CodeBuild job.
My team has decided to use SSM Parameter Store to store configuration, and we ended up with some parameters living in the environment accounts, for example the VPC ID (resources/vpc/id), which I can read at deployment time via ssm.StringParameter.valueForStringParameter.
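For reference, that deploy-time lookup of an environment-account parameter looks something like this (a minimal sketch; the parameter name follows the example above):

    import * as ssm from '@aws-cdk/aws-ssm';

    // Resolved during deployment against the account/region the stack targets
    const vpcId = ssm.StringParameter.valueForStringParameter(this, 'resources/vpc/id');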
However, other parameters live in the tools account, such as the account IDs of my environment accounts (environment/nonprod/account/id) and other global config. I'm having trouble fetching those values.
At the moment, the only approach I could think of was adding a previous step that reads all those values and loads them into the context values.
Is there a more elegant approach to this problem? I was hoping I could specify which account to read the SSM values from. Any ideas?
Thank you.
As you already stated, there is no native support for that. I also use CodePipeline for cross-account deployments, so all the automation and product-specific parameters are stored in a secured account, and CodePipeline deploys the resources using CloudFormation as an action provider.
Cross-account resolution of SSM parameters isn't supported, so in the end I added an extra stage to my CodePipeline: a CodeBuild project that runs a script in a containerized environment, and the script "syncs" the parameters from the automation account to the destination account.
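The script itself isn't shown here, but a minimal sketch of such a sync step, assuming the job runs in the automation account and can assume a writer role in the destination account (the role ARN and parameter names are placeholders), could look like this:

    import * as AWS from 'aws-sdk';

    // Placeholder list of parameters to copy across accounts
    const PARAMETER_NAMES = ['/environment/nonprod/account/id'];

    async function syncParameters(destinationRoleArn: string): Promise<void> {
      // Reads use the job's own credentials (the automation account)
      const sourceSsm = new AWS.SSM();

      // Writes go through a role assumed in the destination account
      const sts = new AWS.STS();
      const assumed = await sts.assumeRole({
        RoleArn: destinationRoleArn,
        RoleSessionName: 'ssm-parameter-sync',
      }).promise();

      const destinationSsm = new AWS.SSM({
        accessKeyId: assumed.Credentials!.AccessKeyId,
        secretAccessKey: assumed.Credentials!.SecretAccessKey,
        sessionToken: assumed.Credentials!.SessionToken,
      });

      for (const name of PARAMETER_NAMES) {
        const result = await sourceSsm.getParameter({ Name: name }).promise();
        await destinationSsm.putParameter({
          Name: name,
          Value: result.Parameter!.Value!,
          Type: 'String',
          Overwrite: true,
        }).promise();
      }
    }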
As part of your pipeline, I would add a preliminary step that executes a Lambda. That Lambda can then run whatever queries you wish to obtain the metadata/config that is required. The output from that Lambda can then be passed into the CodeBuild step.
e.g. within the Lambda:
import * as AWS from 'aws-sdk';
import { CodePipelineEvent, Context } from 'aws-lambda';

export class ConfigFetcher {
  codepipeline = new AWS.CodePipeline();

  async fetchConfig(event: CodePipelineEvent, context: Context): Promise<void> {
    // Retrieve the job ID from the Lambda action
    const jobId = event['CodePipeline.job'].id;

    // Now get your config by executing whatever queries you need,
    // even cross-account, via the SDK.
    // We assume that the answer is in the variable someValue.
    const params = {
      jobId: jobId,
      outputVariables: {
        MY_CONFIG: someValue,
      },
    };

    // Now tell CodePipeline you're done
    await this.codepipeline.putJobSuccessResult(params).promise().catch(err => {
      console.error('Error reporting build success to CodePipeline: ' + err);
      throw err;
    });
    // Make sure you have some sort of catch wrapping the above
    // to post a failure result to CodePipeline.
    // ...
  }
}

const configFetcher = new ConfigFetcher();

exports.handler = async function fetchConfigMetadata(event: CodePipelineEvent, context: Context): Promise<void> {
  return configFetcher.fetchConfig(event, context);
};
Assuming that you create your pipeline using CDK, then your Lambda step will be created using something like this:
const fetcherAction = new LambdaInvokeAction({
actionName: 'FetchConfigMetadata',
lambda: configFetcher,
variablesNamespace: 'ConfigMetadata',
});
Note the use of variablesNamespace: we need to refer to it later in order to retrieve the values from the Lambda's output and insert them as environment variables into the CodeBuild environment.
Now our CodeBuild definition, again assuming we create it using CDK:
new CodeBuildAction({
  // ...
  environmentVariables: {
    MY_CONFIG: {
      type: BuildEnvironmentVariableType.PLAINTEXT,
      value: '#{ConfigMetadata.MY_CONFIG}',
    },
  },
});
We can call the variable whatever we want within CodeBuild, but note that ConfigMetadata.MY_CONFIG needs to match the Lambda's variables namespace and output variable name.
You can have your Lambda do anything you want to retrieve whatever data it needs; it just has to be given appropriate permissions to reach across into other AWS accounts if required, which you can do using role assumption. Using a Lambda as a pipeline step will be a LOT faster than using a CodeBuild step in the pipeline, and it's easier to change: if you write your Lambda code in TypeScript/JS or Python, you can even use the AWS console to make in-place edits while you test that it executes correctly.
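For example, the cross-account part inside the Lambda could be a simple role assumption followed by an SSM read (a sketch; the role ARN is a placeholder and must trust the Lambda's execution role):

    import * as AWS from 'aws-sdk';

    async function readToolsAccountParameter(name: string): Promise<string> {
      const sts = new AWS.STS();
      const assumed = await sts.assumeRole({
        RoleArn: 'arn:aws:iam::111111111111:role/config-reader', // placeholder
        RoleSessionName: 'config-fetcher',
      }).promise();

      // SSM client scoped to the assumed role's (tools account) credentials
      const ssm = new AWS.SSM({
        accessKeyId: assumed.Credentials!.AccessKeyId,
        secretAccessKey: assumed.Credentials!.SecretAccessKey,
        sessionToken: assumed.Credentials!.SessionToken,
      });

      const result = await ssm.getParameter({ Name: name }).promise();
      return result.Parameter!.Value!; // e.g. feed this into someValue above
    }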
AFAIK there is no native way to achieve what you described; if there is one, I'd like to know too. I believe you can use a CloudFormation custom resource backed by a Lambda for this purpose.
You can pass parameters in the Lambda request and get information back from the Lambda response.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html, https://www.2ndwatch.com/blog/a-step-by-step-guide-on-using-aws-lambda-backed-custom-resources-with-amazon-cfts/ and https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html for more information.
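With CDK's custom-resources module, a minimal sketch could look like this (the role ARN and parameter name are placeholders; assumedRoleArn is what makes the SDK call cross-account):

    import { AwsCustomResource, AwsCustomResourcePolicy, PhysicalResourceId } from '@aws-cdk/custom-resources';

    const lookup = new AwsCustomResource(this, 'ToolsAccountParam', {
      onUpdate: {
        service: 'SSM',
        action: 'getParameter',
        parameters: { Name: '/environment/nonprod/account/id' },
        assumedRoleArn: 'arn:aws:iam::111111111111:role/ssm-read-role', // placeholder
        physicalResourceId: PhysicalResourceId.of('ToolsAccountParam'),
      },
      policy: AwsCustomResourcePolicy.fromSdkCalls({
        resources: AwsCustomResourcePolicy.ANY_RESOURCE,
      }),
    });

    const toolsAccountId = lookup.getResponseField('Parameter.Value');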
This question is a year old, but a simpler method I found for retrieving parameters from your tools/deployment account is to specify them as env variables in your buildspec file. CodeBuild will always pull these from whatever account your job is running in (which in this question's scenario would be the tools account).
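For example, in your buildspec (a sketch; the parameter name follows the question, and CodeBuild resolves it in the account the build runs in):

    version: 0.2
    env:
      parameter-store:
        TOOLS_ACCOUNT_ID: /environment/nonprod/account/id
    phases:
      build:
        commands:
          - echo "deploying with tools-account config $TOOLS_ACCOUNT_ID"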
To pull parameters from your target environment accounts, it's best to use the CDK SSM approach suggested by the question author.
I am trying to programmatically add mentions of users who are members of TFS groups in the discussion area of work items. We were using version 1.0 with TFS 2017 Update 2 with success:
#{id.DisplayName}
However, after upgrading to TFS 2017 Update 3, emails are no longer sent for the notifications. We also tried all of the "user ids" we could find on the TeamFoundationIdentity object for the solutions found here:
VSTS - uploading via an excel macro and getting #mentions to work
So how can we get emails for #mentions to work again in TFS 2017.3?
Update: 9/11/2018
Verified that the service account fails to send emails, while my account running the same code does send emails for mentions:
using (var connection = new VssConnection(collectionUri, cred))
using (var client = connection.GetClient<WorkItemTrackingHttpClient>())
{
    var wi = new JsonPatchDocument
    {
        new JsonPatchOperation()
        {
            Operation = Operation.Add,
            Path = "/fields/System.History",
            Value = $"#{id.DisplayName} <br/>"
        }
    };

    using (var response = client.UpdateWorkItemAsync(wi, workItemId, suppressNotifications: false))
    {
        response.Wait();
    }
}
We solved it by dropping the WorkItemTrackingHttpClient and going back to loading the SOAP WorkItemStore as the user who submitted the changes, instead of the service account. It would be nice if we could use impersonation of a user with TFS's WebApi.
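For reference, the SOAP-side impersonation looks roughly like this (a sketch from the classic TFS client OM; the identity values are illustrative, and the service account needs the "Make requests on behalf of others" permission):

    var impersonated = new TfsTeamProjectCollection(
        collectionUri,
        new IdentityDescriptor(IdentityConstants.WindowsType, @"DOMAIN\submitting.user"));
    var store = impersonated.GetService<WorkItemStore>();

    var workItem = store.GetWorkItem(workItemId);
    workItem.History = $"#{id.DisplayName} <br/>";
    workItem.Save(); // notifications now go out as the impersonated user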
I use the Parameterized Trigger Plugin for Jenkins to trigger a Multibranch Pipeline project (RED Outlook Addin). After the build has finished, I want to copy the artifacts via the Copy Artifact Plugin.
I add a build step "Copy artifacts from another project" with the project name "RED Outlook Addin/${CIOS_BRANCH_NAME}", because I get the branch name as a parameter. This works if I specify a build number like "12", but if I set the build number to $TRIGGERED_BUILD_NUMBER_RED_Outlook_Addin_${CIOS_BRANCH_NAME}, I get this error: "Unable to find project for artifact copy."
How can I use the $TRIGGERED_BUILD_NUMBER_ parameter with the specified branch?
Thanks for the help,
Chris
You could query the JSON API of your Jenkins server, for example using the HTTP Request plugin:
import groovy.json.JsonSlurper

@NonCPS
def parseJson(String text) {
    def sup = new JsonSlurper()
    def json = sup.parseText(text)
    sup = null // JsonSlurper isn't serializable; clear it before returning
    return json
}
def getLastStableBuildNumber(String project, String branchName = 'master') {
    def response = httpRequest url: "http://jenkins/job/${project}/job/${branchName}/lastStableBuild/api/json", validResponseCodes: '200'
    def json = parseJson(response.content)
    return json.number
}
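The returned number can then be fed into the Copy Artifact step, e.g. (a sketch assuming a pipeline job and the plugin's specific() selector):

    def buildNumber = getLastStableBuildNumber('RED Outlook Addin', env.CIOS_BRANCH_NAME)
    copyArtifacts(
        projectName: "RED Outlook Addin/${env.CIOS_BRANCH_NAME}",
        selector: specific("${buildNumber}")
    )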
Looking for ways to trigger a "perform Maven release" job from another Jenkins job. It could be a REST API or a plugin that can do it. I saw posts about the "Parameterized Trigger" plugin, which might be able to do this, but I can't see a way to do it, so I need real examples of how to try it.
Thanks!
This task has been open in Jenkins' Jira since July 2015 with no movement yet.
Since this is the case, I suggest using an HTTP POST to accomplish this task. To do this, you will need to do the following:
Install the HTTP Request Plugin
Create an httpUser (or use an existing one) with the appropriate matrix permissions, then grab its API token: Jenkins -> People -> httpUser -> Configure -> API Token -> Show API Token...
Jenkins -> Manage Jenkins -> Configure System -> HTTP Request -> Basic/Digest Authentication -> Add -> create a Global HTTP Authentication Key with the information from step 2
Create a "parent" job that will trigger other Jenkins job(s) via the M2-Release-Plugin and configure it as follows:
This build is parameterized
releaseVersion (Text Parameter)
developmentVersion (Text Parameter)
(add other parameters as required, look in the doSubmit method for details)
Build -> Add build step -> HTTP Request
URL (should have this format) = http://JenkinsServerName/job/JenkinsJobName/m2release/submit
HTTP mode = POST
Advanced...
Authorization -> Authenticate = select the Authenticate option created in step 3
Headers -> Custom headers -> Add
Header = Content-type
Value = application/x-www-form-urlencoded
Body -> Pass build params to URL? = Yes
Request body = (your parameters from step 5 and a json parameter object with any additional parameters required)
Response body in console? = Yes
These are the steps I followed to have one Jenkins job trigger an m2release on another job in my environment. Hopefully this helps others and should I lose my notes or memory, I can refer to this post as well.
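If you drive this from a pipeline rather than a freestyle job, the same POST can be expressed with the HTTP Request plugin's pipeline step (a sketch; the authentication key name is the one you created in step 3, and the exact form fields come from the doSubmit method mentioned above):

    httpRequest(
        url: 'http://JenkinsServerName/job/JenkinsJobName/m2release/submit',
        httpMode: 'POST',
        authentication: 'm2release-auth', // global HTTP authentication key
        customHeaders: [[name: 'Content-type', value: 'application/x-www-form-urlencoded']],
        requestBody: "releaseVersion=${params.releaseVersion}&developmentVersion=${params.developmentVersion}",
        consoleLogResponseBody: true
    )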
In my Jenkins I have installed a new plugin to see next-execution details:
Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Next+Executions
I can see it in the Jenkins dashboard successfully, but how can I access its details through the REST API, the way we do for everything else in Jenkins?
I am using Java to access Jenkins via REST API.
Thanks
UPDATED 2016-09-20: the REST API is supported from release 1.0.12:
<jenkinsurl>/view/<viewname>/widgets/<n>/api/json?pretty=true
See details in the ticket JENKINS-36210.
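A quick way to try the endpoint (a sketch; the view name and widget index are placeholders for your setup):

    import json
    import urllib2

    url = "http://jenkins/view/All/widgets/0/api/json?pretty=true"
    print(json.load(urllib2.urlopen(url)))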
Below is left for reference.
Though the REST API didn't exist at the time, I'm sharing an HTML-parsing Python code sample for reference.
It uses lxml to parse the page and build a list of the data; the key code segment is here:
import ssl
import urllib2
import lxml.html

CTX = ssl.create_default_context()  # SSL context for urlopen

def get_next_executions(url):
    status = []
    html = urllib2.urlopen(url, context=CTX).read()
    # using BeautifulSoup4 instead of lxml would be nicer, but it is not available by default
    html2 = lxml.html.fromstring(html)
    div = html2.get_element_by_id("next-exec")  # key id !!
    result = lxml.html.tostring(div)
    tree = lxml.html.fromstring(result)  # ugly, but it works
    trs = tree.xpath('/html/body/div/div/table/tr')
    for tr in trs:
        tds = tr.xpath("td")
        url = tds[0].xpath("a/@href")[0]
        jobname = tds[0].text_content()
        datetime = tds[1].text_content()
        status.append((datetime, jobname, url))
    return status
See details in https://gist.github.com/larrycai/6828c959f57105ca93239ca6aa4fc6fa