How to tag stacks deployed by CDK Pipelines? - aws-cdk

When using a CloudFormationCreateUpdateStackAction in a CodePipeline (@aws-cdk/aws-codepipeline), you can pass in the tags via the templateConfiguration property.
This tags all resources created by the deployed stack.
However, this stack action is abstracted away when you use the CDK Pipelines library (aws-cdk-lib.pipelines).
How do you achieve this with CDK Pipelines?

From the source code here and here, it appears that the pipelines.CodePipeline construct simply applies the stack's tags to the templateConfiguration.
If your stacks are added in a cdk.Stage subclass, you can apply tags the usual way:
cdk.Tags.of(stack).add('DeployContext', 'PipelineStage');
If your app reuses stack code in different scope contexts (i.e., a stack may or may not be in a cdk.Stage context), set pipeline-only tags like this:
// identify whether a stack has a pipeline stage ancestor
const stage = !cdk.App.isApp(cdk.Stage.of(myStack)) ? cdk.Stage.of(myStack) : undefined;

// add tags for pipeline deployments only
if (stage?.stageName) {
  cdk.Tags.of(myStack).add('StageName', stage.stageName);
}

Related

In Jenkins Pipeline return a value (map or list) from shared library in vars folder

I have a Jenkins Pipeline which invokes a Groovy function inside the vars folder.
a) Is it the right approach to return a value, say a list or map, from the script and access it in the pipeline?
b) Even if it's not the right approach, is there a way to achieve this functionality?
For utility functions I would use the src folder; however, I don't see a reason why it shouldn't work with the vars folder.
From within the Jenkinsfile call:
def result = yourClass { yourArg }
and add a return value to the definition in vars, like:
def call(body) { return true }
Yes, there is nothing wrong with that approach. Many method calls in shared libraries return values to be manipulated in the pipeline.
You return the value just like you would for any other method call. In a declarative pipeline, you can't assign that return value to anything unless you are in a script {} block. But you could always use a GString to print it.

Unable to run multiple Pipelines in desired order by creating template in Apache Beam

I have two separate pipelines, say 'P1' and 'P2'. As per my requirement, I need to run P2 only after P1 has completely finished its execution, and I need to get this entire operation done through a single template.
Basically, the template gets created the moment a run() call is encountered, say p1.run().
So as far as I can see, I would need to handle the two pipelines with two different templates, but that would not satisfy my requirement of strictly ordered pipeline execution.
Another way I could think of is calling p1.run() inside a ParDo of P2 and making P2's run() wait until P1's run() finishes. I tried this but got stuck at the exception given below.
java.io.NotSerializableException: PipelineOptions objects are not serializable and should not be embedded into transforms (did you capture a PipelineOptions object in a field or in an anonymous class?). Instead, if you're using a DoFn, access PipelineOptions at runtime via ProcessContext/StartBundleContext/FinishBundleContext.getPipelineOptions(), or pre-extract necessary fields from PipelineOptions at pipeline construction time.
Is it not possible at all to call the run() of one pipeline inside a transform, say a ParDo, of another pipeline?
If this is the case, then how do I satisfy my requirement of running two different pipelines in sequence through a single template?
A template can contain only a single pipeline. In order to sequence the execution of two separate pipelines each of which is a template, you'll need to schedule them externally, e.g. via some workflow management system (such as what Anuj mentioned, or Airflow, or something else - you might draw some inspiration from this post for example).
We are aware of the need for better sequencing primitives in Beam within a single pipeline, but do not have a concrete design yet.
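For what it's worth, when you launch the jobs from your own main program rather than from a template, the ordering itself is easy to express: run P1, block on its result, then run P2. A minimal sketch (class and variable names are illustrative, not from the question):
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class SequentialRunner {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();

    Pipeline p1 = Pipeline.create(options);
    // ... build P1's transforms here ...

    Pipeline p2 = Pipeline.create(options);
    // ... build P2's transforms here ...

    // Block until P1 has completely finished before launching P2.
    PipelineResult r1 = p1.run();
    r1.waitUntilFinish();

    p2.run().waitUntilFinish();
  }
}
As the answer above notes, this does not turn the two pipelines into a single template; it only covers the case where you control the launcher yourself.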

How to get PipelineOptions in composite PTransform in Beam 2.0?

After upgrading to Beam 2.0, the Pipeline class doesn't have the getOptions() method anymore.
I have a composite PTransform that relies on getting the options in its expand method:
public class MyCompositeTransform extends PTransform<PBegin, PDone> {
  @Override
  public PDone expand(PBegin input) {
    Pipeline pipeline = input.getPipeline();
    MyPipelineOptions options = pipeline.getOptions().as(MyPipelineOptions.class);
    ...
  }
}
In Beam 2.0 there doesn't seem to be a way to access the PipelineOptions at all in the expand method.
What's the alternative?
Pablo's answer is right on. I want to also clarify that there is a major change in how PipelineOptions are managed.
You can use them to parse and pass around the arguments to your main program (or whatever code builds your pipeline) but these are technically independent from the PipelineOptions that configure how your pipeline is run.
In Beam, the Pipeline is fully constructed first, and only afterwards do you choose a PipelineRunner and PipelineOptions to control how the pipeline is run. The pipeline itself does not actually have options.
If you do want the behavior of your PTransform (not its expansion) to use some option that is obtained dynamically, you should make your PTransform accept a ValueProvider, like this example in WriteFiles, and you can define a pipeline option that returns a ValueProvider, like here in ValueProviderTest.
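A minimal sketch of that pattern, with made-up names (MyOptions, getGreeting and MyTransform are illustrative, not taken from WriteFiles or ValueProviderTest):
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;

// Pipeline option exposed as a ValueProvider so it can be supplied at runtime.
public interface MyOptions extends PipelineOptions {
  ValueProvider<String> getGreeting();
  void setGreeting(ValueProvider<String> value);
}

// The transform receives the ValueProvider at construction time instead of
// reading PipelineOptions inside expand().
class MyTransform extends PTransform<PCollection<String>, PCollection<String>> {
  private final ValueProvider<String> greeting;

  MyTransform(ValueProvider<String> greeting) {
    this.greeting = greeting;
  }

  @Override
  public PCollection<String> expand(PCollection<String> input) {
    return input.apply(ParDo.of(new GreetFn(greeting)));
  }

  // Static DoFn so only the ValueProvider itself gets serialized.
  static class GreetFn extends DoFn<String, String> {
    private final ValueProvider<String> greeting;

    GreetFn(ValueProvider<String> greeting) {
      this.greeting = greeting;
    }

    @ProcessElement
    public void processElement(ProcessContext c) {
      // The actual value is only resolved at execution time.
      c.output(greeting.get() + ", " + c.element());
    }
  }
}
At pipeline construction time you would then wire it up with something like new MyTransform(options.as(MyOptions.class).getGreeting()).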

Create SVNClientManager on slaves

In my Jenkins plugin this code is used to create an instance of SVNClientManager:
final SVNClientManager svnm = SubversionSCM.createSvnClientManager(build.getProject());
It works fine on the master, but to support slaves I have to change it from
SubversionSCM.createSvnClientManager(AbstractProject)
to
SubversionSCM.createSvnClientManager(ISVNAuthenticationProvider)
According to the documentation, these steps are required to get an instance of ISVNAuthenticationProvider:
Therefore, to access ISVNAuthenticationProvider, you need to call this method on the master, then pass the object to the slave side, then call SubversionSCM.createSvnClientManager(ISVNAuthenticationProvider) on the slave.
But I have no clue how to implement it. How do I ensure that a method is called on the master? Please provide a short example (maybe based on the default plugin "HelloWorldBuilder").
After hours of testing I found it out by myself. Use the Hudson main instance to ensure that "createAuthenticationProvider" is called on the master. I put this functionality in a separate method of the plugin:
private ISVNAuthenticationProvider createAuthenticationProvider(AbstractProject context) {
  return Hudson.getInstance().getDescriptorByType(SubversionSCM.DescriptorImpl.class)
      .createAuthenticationProvider(context);
}
During the execution of the plugin you can generate a valid AuthenticationProvider by calling the method:
createAuthenticationProvider(build.getProject())
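To run the second half on the slave, one possible shape is to wrap the slave-side work in a remoting Callable. This is a rough sketch only: the helper names are made up, MasterToSlaveCallable needs a reasonably recent Jenkins core (older cores would use a plain hudson.remoting.Callable), and the exact return type of createSvnClientManager depends on the Subversion plugin version.
import hudson.Launcher;
import hudson.model.AbstractBuild;
import hudson.scm.SubversionSCM;
import jenkins.security.MasterToSlaveCallable;
import org.tmatesoft.svn.core.auth.ISVNAuthenticationProvider;
import org.tmatesoft.svn.core.wc.SVNClientManager;

// Hypothetical helper inside the plugin's Builder, called from
// perform(AbstractBuild, Launcher, BuildListener):
private void runSvnWorkOnSlave(AbstractBuild<?, ?> build, Launcher launcher) throws Exception {
  // Step 1: obtain the authentication provider on the master (method shown above).
  ISVNAuthenticationProvider authProvider = createAuthenticationProvider(build.getProject());
  // Step 2: ship it over the channel and create the client manager on the slave.
  launcher.getChannel().call(new SvnWork(authProvider));
}

// Static so that only the provider, not the whole Builder, is serialized to the slave.
private static final class SvnWork extends MasterToSlaveCallable<Void, Exception> {
  private final ISVNAuthenticationProvider authProvider;

  SvnWork(ISVNAuthenticationProvider authProvider) {
    this.authProvider = authProvider;
  }

  @Override
  public Void call() throws Exception {
    // This code runs on the node executing the build.
    SVNClientManager svnm = SubversionSCM.createSvnClientManager(authProvider);
    // ... perform SVN operations with svnm ...
    return null;
  }
}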

How to get executing job's name?

In my application I have a static class (singleton) that needs to be initialized with some environmental variables that are used throughout my layers; I'm calling it my applicationContext. That in turn has customer and user contexts.
As each job runs it modifies these customer and user contexts depending on the situation.
The problem I have is that when two jobs fire concurrently they might overwrite each other's contexts, therefore I need to keep multiple user and customer contexts alive for each running job, and I need to be able to pick the right context by somehow knowing what the current job is.
Is it possible to somehow get information about the currently executing quartz.net job?
I'm envisioning something like this where "currentQuartzJob.Name" is made up and is the part I'm missing:
public class MyContextImpl : IApplicationContext {
    private Dictionary<string, subContexts> allCustomerContexts;

    public subContexts CurrentContext
    {
        get { return allCustomerContexts[currentQuartzJob.Name]; }
    }
}
edit:
I don't think it's possible to do what I wanted, that is to be able to get the executing job's name in a class that doesn't know about Quartz.Net.
What I really needed was a way to keep a different context for each job. I managed to do that by looking at the executing thread's ID as they seem to differ for each running job.
Try this:
public void Execute(IJobExecutionContext context)
{
    var yourJobName = context.JobDetail.Key.Name;
}
Given your statement above: "The problem I have is that when two jobs fire concurrently they might overwrite each other's contexts", you may want to reduce the complexity of your application by making sure that jobs do not fire concurrently. This can be achieved by implementing the IStatefulJob interface instead of the usual IJob interface: http://quartznet.sourceforge.net/tutorial/lesson_3.html
Alternatively, if that is not an option, you can query the Scheduler object for the currently executing jobs via the IScheduler.GetCurrentlyExecutingJobs() method. This method returns an IList of those jobs, but note the following remarks when using that method (from the version 1.0.3 API):
This method is not cluster aware. That is, it will only return Jobs currently executing in this Scheduler instance, not across the entire cluster.
Note that the list returned is an 'instantaneous' snap-shot, and that as soon as it's returned, the true list of executing jobs may be different. Also please read the doc associated with JobExecutionContext- especially if you're using remoting.
I'm not entirely sure of what you're doing with your application, but I will say that the name of the currently executing job is definitely available during job execution.
Inside the IJob Execute() method:
// implementation in IJob
public void Execute(JobExecutionContext context)
{
    // get name here
    string jobName = context.JobDetail.Name;
}
Is that what you're looking for?
