How to add multiple CodeCommit source repositories to a CdkPipeline - aws-cdk

I am trying to create a CdkPipeline with multiple CodeCommit source repositories.
I followed the instructions from cdkworkshop to successfully create a self-mutating pipeline with a single CodeCommit repository, but I cannot figure out how to add more packages (CodeCommit repositories) to the source stage.
I did see the examples at https://docs.aws.amazon.com/cdk/api/latest/docs/aws-codepipeline-actions-readme.html#build--test, but they do not provide CDK's self-mutating capability.
This example, https://docs.aws.amazon.com/cdk/latest/guide/codepipeline_example.html#codepipeline_example_stack, seems a bit more promising, but it looks too manual.
Any help would be appreciated.

Late to the response, but here is the solution from the AWS documentation.
If you are using a ShellStep for your initial step, use the additional_inputs property:
additional_inputs (Optional[Mapping[str, IFileSetProducer]]) – Additional FileSets to put in other directories. Specifies a mapping from directory name to FileSets. During the script execution, the FileSets will be available in the directories indicated. The directory names may be relative. For example, you can put the main input and an additional input side-by-side with the following configuration:

const script = new pipelines.ShellStep('MainScript', {
  commands: ['npm ci', 'npm run build', 'npx cdk synth'],
  input: pipelines.CodePipelineSource.gitHub('org/source1', 'main'),
  additionalInputs: {
    '../siblingdir': pipelines.CodePipelineSource.gitHub('org/source2', 'main'),
  },
});

Default: No additional inputs
The example above shows how to add an additional GitHub repository as an input and map it to the directory ../siblingdir in your ShellStep.
This will work for some types of Steps (for example, CodeBuildStep), but it's recommended to check the documentation to confirm.
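Since the question is about CodeCommit rather than GitHub, the equivalent wiring in the Python API (which the quote above comes from) might look roughly like this; the repository names, branch, and synth commands here are placeholders, not part of the original question:

from aws_cdk import aws_codecommit as codecommit
from aws_cdk import pipelines

# Inside your pipeline stack (self is the Stack). Repository names are hypothetical.
infra_repo = codecommit.Repository.from_repository_name(self, "InfraRepo", "my-infra-repo")
app_repo = codecommit.Repository.from_repository_name(self, "AppRepo", "my-app-repo")

pipeline = pipelines.CodePipeline(self, "Pipeline",
    synth=pipelines.ShellStep("Synth",
        # Primary source: the repository containing the CDK app (keeps self-mutation).
        input=pipelines.CodePipelineSource.code_commit(infra_repo, "main"),
        # Second CodeCommit repository, checked out next to the primary input.
        additional_inputs={
            "../app": pipelines.CodePipelineSource.code_commit(app_repo, "main"),
        },
        commands=["npm ci", "npm run build", "npx cdk synth"],
    ),
)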

Related

How can I pass a pointer to a file in helm upgrade command?

I have a truststore file (a binary file) that I need to provide during helm upgrade. This file is different for each target environment (dev, qa, staging, or prod), so I can only provide it at deployment time. helm upgrade --set-file does not take a binary file. This seems to be the issue described here: https://github.com/helm/helm/issues/3276. The truststore files are stored in the Jenkins credential store.
As the command itself is described below:
--set-file stringArray set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
it is also important to know "The Format and Limitations of --set" from the Helm docs.
The error you see: Error: failed parsing --set-file data... means that the file you are trying to use does not meet the requirements. See the example below:
--set-file key=filepath is another variant of --set. It reads the file and uses its content as a value. An example use case is to inject multi-line text into values without dealing with indentation in YAML. Say you want to create a brigade project with a certain value containing 5 lines of JavaScript code; you might write a values.yaml like:
defaultScript: |
  const { events, Job } = require("brigadier")
  function run(e, project) {
    console.log("hello default script")
  }
  events.on("run", run)
Being embedded in YAML, this makes it harder for you to use IDE features, testing frameworks, and other tooling that supports writing code. Instead, you can use --set-file defaultScript=brigade.js with brigade.js containing:
const { events, Job } = require("brigadier")
function run(e, project) {
  console.log("hello default script")
}
events.on("run", run)
I hope it helps.
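For completeness, the flag from the quoted docs is passed on the upgrade command line roughly like this (the release and chart names are placeholders):

helm upgrade my-release ./my-chart --set-file defaultScript=brigade.js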

Jenkins Shared Library - Importing classes from the /src folder in /vars

I am trying to write a Jenkins Shared Library for my CI process. I'd like to reference a class from the /src folder inside a global function defined in the /vars folder, since that would allow me to put most of the logic in classes instead of in the global functions. I am following the repository structure documented in the official Jenkins documentation:
Jenkins Shared Library structure
Here's a simplified example of what I have:
/src/com/example/SrcClass.groovy
package com.example

class SrcClass {
    def aFunction() {
        return "Hello from src folder!"
    }
}
/vars/classFromVars.groovy
import com.example.SrcClass

def call(args) {
    def sc = new SrcClass()
    return sc.aFunction()
}
Jenkinsfile
@Library('<lib-name>') _
pipeline {
    ...
    post {
        always {
            classFromVars()
        }
    }
}
My goal was for the global classes in the /vars folder to act as a sort of public facade and to use it in my Jenkinsfile as a custom step without having to instantiate a class in a script block (making it compatible with declarative pipelines). It all seems pretty straightforward to me, but I am getting this error when running the classFromVars file:
<root>\vars\classFromVars.groovy: 1: unable to resolve class com.example.SrcClass
@ line 1, column 1.
   import com.example.SrcClass
   ^
1 error
I tried running the classFromVars class directly with the groovy CLI locally and on the Jenkins server and I have the same error on both environments. I also tried specifying the classpath when running the /vars script, getting the same error, with the following command:
<root>>groovy -cp <root>\src\com\example vars\classFromVars.groovy
Is what I'm trying to achieve possible? Or should I simply put all of my logic in the /vars class and avoid using the /src folder?
I have found several repositories on GitHub that seem to indicate this is possible, for example this one: https://github.com/fabric8io/fabric8-pipeline-library, which uses the classes in the /src folder in many of the classes in the /vars folder.
As @Szymon Stepniak pointed out, the -cp parameter in my groovy command was incorrect: it should point at the classpath root (<root>\src) rather than at the package directory, so that the package com.example resolves beneath it. It now works locally and on the Jenkins server. I have yet to explain why it wasn't working on the Jenkins server, though.
I found that when I wanted to import a class from my shared library into a script step in /vars, I needed to do it like this:

// Thanks to '_', the classes are imported automatically.
// MUST have the '@' at the beginning, otherwise it will not work.
// When "@BRANCH" is not given, the default branch of the git repo is used.
@Library('my-shared-library@BRANCH') _

// Only by calling them can you tell whether they exist or not.
def exampleObject = new example.GlobalVars()
// Then call methods or attributes from the class.
exampleObject.runExample()

Substitutions and Java library lifecycle

I've got a Java project I'm converting to Bazel.
As is typical with Java projects, there are property files with placeholders that need to be resolved/substituted at build time.
Some of the values can be hardcoded in a BUILD or BZL file:
BUILD_PROPERTIES = { "pom.version": "1.0.0", "pom.group.id": "com.mygroup"}
Some of the variables are "stamps" (e.g. BUILD_TIMESTAMP, GIT_REVISION, etc.); the source for these variables is volatile-status.txt and stable-status.txt.
I must generate a POM for publishing, so I use @bazel_common//tools/maven:pom_file in my BUILD file
(assume that I need ALL the values described above for my pom template):
_local_build_properties = {}
_local_build_properties.update(BUILD_PROPERTIES)

# somehow add workspace status properties?
# add / override
_local_build_properties.update({
    "pom.project.name": "my-submodule",
    "pom.project.description": "My submodule description",
    "pom.artifact.id": "my-submodule",
})

# Variable placeholders in the pom template are wrapped with {}
_pom_substitutions = {"{" + k + "}": v for (k, v) in _local_build_properties.items()}

pom_file(
    name = "my_submodule_pom",
    targets = [
        "//my-submodule",
    ],
    template_file = "//:pom_template.xml",
    substitutions = _pom_substitutions,
)
So, my questions are:
1. How do I get key-value pairs from volatile-status.txt / stable-status.txt into the dictionary I need for pom_file.substitutions?
2. pom_file depends on java_library so that it can write its dependencies into the POM. How do I update the jar generated by java_library with the pom?
3. Once I have the pom and the updated jar containing the pom, how do I publish to a Maven repo?
When I look at existing code, for example rules_docker, it seems that the implementation always bails out to a local executable (shell | python | go) to do the real work of substitution, jar manipulation, and image publication. Am I trying to do too much in BUILD and BZL files? Should I be thinking, "Ultimately, what do I need to pass to local shell/python/go scripts to get real build work done?"
(Answered on the bazel-discuss group)
Hi,
1. You can't get these values from Starlark. You need a genrule to read the stable/volatile files and do the substitutions using an external tool like 'sed'.
2. A file cannot be both an input and an output of an action, i.e. you can't update the .jar from which you generate the pom. The action has to produce a new .jar file.
3. I don't know -- how would you publish outside of Bazel? Is there a tool to do so? Can you write a genrule / Starlark rule to wrap this tool?
Cheers, László
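For the first point, a rough sketch of the genrule approach (assuming stamp = 1, which exposes bazel-out/stable-status.txt and bazel-out/volatile-status.txt to the command; the GIT_REVISION key and the {pom.git.revision} placeholder are made-up examples, not from the question):

# Sketch only: substitute {pom.git.revision} in the generated POM with the
# GIT_REVISION value emitted by your --workspace_status_command.
genrule(
    name = "stamped_pom",
    srcs = [":my_submodule_pom"],
    outs = ["pom.xml"],
    cmd = "REV=$$(grep '^GIT_REVISION ' bazel-out/stable-status.txt | cut -d' ' -f2); " +
          "sed \"s|{pom.git.revision}|$$REV|g\" $(location :my_submodule_pom) > $@",
    stamp = 1,  # makes the status files readable from this rule's command
)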

How to write Bazel rules that work with external repositories?

The Bazel Starlark API does strange things with files in external repositories. I have the following Starlark snippet:
print(ctx.genfiles_dir)
print(ctx.genfiles_dir.path)
print(output_filename)
ret = ctx.new_file(ctx.genfiles_dir, output_filename)
print(ret.path)
It is creating the following output:
DEBUG: build_defs.bzl:292:5: <derived root>
DEBUG: build_defs.bzl:293:5: bazel-out/k8-fastbuild/genfiles
DEBUG: build_defs.bzl:294:5: google/protobuf/descriptor.upb.c
DEBUG: build_defs.bzl:296:5: bazel-out/k8-fastbuild/genfiles/external/com_google_protobuf/google/protobuf/descriptor.upb.c
That extra external/com_google_protobuf comes seemingly out of nowhere, and it makes my rule fail:
I tell protoc to generate into ctx.genfiles_dir.path (which is bazel-out/k8-fastbuild/genfiles).
So protoc generates bazel-out/k8-fastbuild/genfiles/google/protobuf/descriptor.upb.c
Bazel fails because I didn't generate bazel-out/k8-fastbuild/genfiles/external/com_google_protobuf/google/protobuf/descriptor.upb.c
Likewise, when I try to call file.short_path on a source file from an external repository, I get a result like ../com_google_protobuf/google/protobuf/descriptor.proto. This seems quite unhelpful, so I just wrote some manual code to strip off the leading ../com_google_protobuf/.
Am I missing something? How can I write this rule in a way that doesn't feel like I'm fighting Bazel the whole time?
Am I missing something?
The basic problem, as you already realized, is that you have two path "namespaces": the one that protoc sees (i.e. import paths) and the one that Bazel sees (i.e. the path you pass to declare_file()).
Two things to note:
1) All paths declared with declare_file() get the path <bin dir>/<package path incl. workspace>/<path you passed to declare_file()>.
2) All actions are executed from <bin dir> (unless output_to_genfiles = True, in which case this switches to <gen dir>, as in your example).
Trying to solve the exact same problem you encountered, I resorted to stripping the known path from the output_file's path to determine which directory to pass to protoc:
# This code is run from the context of the external protobuf dependency
proto_path = "google/a/b.proto"
output_file = ctx.actions.declare_file(proto_path)
# output_file.path would be `<gen_dir>/external/protobuf/google/a/b.proto`

# Strip the known proto_path from output_file.path
protoc_prefix = output_file.path[:-len(proto_path)]
print(protoc_prefix)  # Prints: <gen_dir>/external/protobuf

command = "{protoc} {proto_paths} {cpp_out} {plugin} {plugin_options} {proto_file}".format(
    ...
    cpp_out = "--cpp_out=" + protoc_prefix,
    ...
)
Alternatives
You may also be able to construct the same path with ctx.bin_dir, ctx.label.workspace_name, ctx.label.package, and ctx.label.name.
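A sketch of that alternative (untested; it assumes the output is declared under the genfiles tree, as in the example above):

# Sketch: rebuild the same prefix from ctx fields instead of string-stripping.
def _protoc_prefix(ctx):
    parts = [ctx.genfiles_dir.path]       # e.g. bazel-out/k8-fastbuild/genfiles
    if ctx.label.workspace_name:          # non-empty only for external repositories
        parts.append("external/" + ctx.label.workspace_name)
    if ctx.label.package:                 # empty when the BUILD file is at the repo root
        parts.append(ctx.label.package)
    return "/".join(parts)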
Misc.
proto_library recently gained an attribute strip_import_prefix. When used, the above is not correct, as all dependent files are symlinked into a new directory from which they have the relative paths declared with strip_import_prefix.
The path format is:
<bin dir>/<repo>/<package>/_virtual_imports/<label name>/<path imported in .proto files>
i.e.
<bin dir>/external/protobuf/_virtual_imports/b_proto/google/a/b.proto
Assuming you are building an external repo called protobuf, which contains a BUILD file at its root with a target named b_proto, which in turn, relies on a proto_library wrapping google/a/b.proto AND uses the strip_import_prefix attribute.

Does Bazel need external-repo BUILD files to be in $WORKSPACE_ROOT/external?

I made a repository for glfw with this:
load("#bazel_tools//tools/build_defs/repo:git.bzl", "new_git_repository")
new_git_repository(
name = "glfw",
build_file = "BUILD.glfw",
remote = "https://github.com/glfw/glfw.git",
tag = "3.2.1",
)
I put BUILD.glfw in the WORKSPACE root. When I built, I saw:
no such package '@glfw//': Not a regular file: [snipped]/external/BUILD.glfw
I moved BUILD.glfw to external/BUILD.glfw and it seems to work, but I couldn't find documentation about this. The docs about new_git_repository say that build_file "...is a label relative to the main workspace."; I don't see anything about 'external' there.
This is due to an inconsistent semantic difference between the native and (newer) Skylark versions of new_git_repository. To use the native new_git_repository, comment out or remove the load statement:
# load("#bazel_tools//tools/build_defs/repo:git.bzl", "new_git_repository")
Assuming that new_git_repository has the same problem that http_archive has, per Bazel issue 6225 you need to refer to the BUILD file for glfw as @//:BUILD.glfw.
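Putting the two suggestions together, the Skylark version might be invoked like this (a sketch, untested):

load("@bazel_tools//tools/build_defs/repo:git.bzl", "new_git_repository")

new_git_repository(
    name = "glfw",
    # Label form from Bazel issue 6225: "@//" refers to the main workspace root.
    build_file = "@//:BUILD.glfw",
    remote = "https://github.com/glfw/glfw.git",
    tag = "3.2.1",
)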
