AWS CDK Pipeline - action before synth

I have a standard CDK pipeline and use the output of mvn package to build a Docker container. That container is used as a DockerImageAsset and gets deployed to different environments, so mvn package must run before cdk synth.
While this works, I don't like that mvn package runs within the Synth action; I would prefer a separate action before it that also publishes test results from unit tests etc.
Is there a way to get an action before Synth?
This is the current code:
const pipeline = new CodePipeline(this, 'Pipeline', {
  dockerEnabledForSynth: true,
  dockerEnabledForSelfMutation: true,
  crossAccountKeys: true,
  synth: new ShellStep('Synth', {
    input: CodePipelineSource.gitHub('OWNER/repo', 'main', {
      authentication: SecretValue.secretsManager('GITHUB_TOKEN'),
    }),
    commands: [
      './mvnw package',
      'npm ci',
      'npm run build',
      'npx cdk synth',
    ],
  }),
});
...
const dockerImage = new DockerImageAsset(this, 'Image', {
  directory: '......'
});
...

There is another CDK construct library for AWS CodePipeline that is lower-level and unopinionated, which should allow you to add stages before the synth stage.
There is also a buildPipeline method on the higher-level pipeline construct you are currently using (the CodePipeline construct) that builds the lower-level pipeline (the Pipeline construct). Note that once you call this method, you can't modify the higher-level construct anymore. After building the lower-level pipeline, you might be able to call addStage on it and pass the placement property to add a stage prior to the synth stage, like so:
pipeline.buildPipeline()
const builtPipeline = pipeline.pipeline
builtPipeline.addStage({
  stageName: 'Maven',
  placement: {
    rightBefore: builtPipeline.stages[1]
  }
})
I have not personally tried this method, so I cannot guarantee it; but if that doesn't work, converting the CDK code to the lower-level construct library and using that from the beginning should. From the docs for the higher-level construct library:
CDK Pipelines is an opinionated construct library. [...] If you want or need more control, we recommend you drop down to using the aws-codepipeline construct library directly.

Related

TeamCity Shared Library and Stash/Unstash Like Jenkins

I am currently a Jenkins user and am exploring TeamCity.
In Jenkins we have the concept of shared libraries, which extract generic Groovy code into a library that different Jenkins pipelines can reuse. This avoids rewriting the same functionality in each Jenkinsfile (DRY: don't repeat yourself), hides implementation complexity, and keeps pipelines short and easier to understand.
Example:
There could be a repository having all the Groovy functions like:
Repo: http://github.com/DEVOPS/Utilities.git (repo Utilities)
Sample Groovy script: GitUtils.groovy with the functions below
public void setGitConfig(String userName, String email) {
    sh "git config --global user.name ${userName}"
    sh "git config --global user.email ${email}"
}

public void gitPush(String branchName) {
    sh "git push origin ${branchName}"
}
In a Jenkinsfile we can then just call these functions as below (of course, we need to configure Jenkins with the shared library's repo URL and give the library a name):
// name of the shared library given in Jenkins
@Library('utilities') _
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                // log.info 'Starting'
                script {
                    def gitUtil = new GitUtils()
                    gitUtil.setGitConfig("Ray", "Ray@rayban.com")
                }
            }
        }
    }
}
And that's it: anyone who wants the same functionality just has to include the library in their Jenkinsfile and use it in the pipeline.
Questions:
Can we migrate the same approach to TeamCity, and if yes, how can it be done? We do not want to spend a lot of time rewriting.
Jenkins also supports stashing and unstashing the workspace between stages; is a similar concept present in TeamCity?
Example:
pipeline {
    agent any
    stages {
        stage('Git checkout') {
            steps {
                stash includes: '/root/hello-world/*', name: 'mysrc'
            }
        }
        stage('maven build') {
            agent { label 'slave-1' }
            steps {
                unstash 'mysrc'
                sh label: '', script: 'mvn clean package'
            }
        }
    }
}
As for reusing common TeamCity Kotlin DSL libraries, this can be done via Maven dependencies: declare the library in the pom.xml file within your DSL code. You can also consider using JitPack if your DSL library code is hosted on GitHub, for example, and you do not want to handle building it separately and publishing its Maven artifacts.
That said, when migrating from Jenkins to TeamCity you will most likely have to rewrite the common library (if you still need one at all), as the TeamCity project model and DSL are quite different from what you have in Jenkins.
Speaking of stashing/unstashing workspaces, this may be covered either by artifact rules and artifact dependencies (as described here: https://www.jetbrains.com/help/teamcity/artifact-dependencies.html) or by repository clone mirroring on agents.

How to run multiple Jenkins pipelines for multiple branches

I have a scenario wherein I have a frontend repository with multiple branches.
Here's my repo vs. application structure.
I have a single Jenkinsfile like below:
pipeline {
    agent any
    parameters {
        string(name: 'CUSTOMER_NAME', defaultValue: 'customer_1')
    }
    stages {
        stage('Build') {
            steps {
                sh '''
                    yarn --mutex network
                    /usr/local/bin/grunt fetch_and_deploy:$CUSTOMER_NAME -ac test
                    /usr/local/bin/grunt collect_web'''
            }
        }
    }
}
The above Jenkinsfile is the same for all customers, so I would like to understand the best way to build multiple customers using the same Jenkinsfile, with different pipelines based on the parameter $CUSTOMER_NAME.
I am not sure I understood your problem, but I guess you could use a shared pipeline library: https://jenkins.io/doc/book/pipeline/shared-libraries/
You can put the build step in the library and call it with CUSTOMER_NAME as a parameter, as sketched below.
(Please note: a shared pipeline library must be stored in a separate Git repository!)
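For illustration, a minimal library step might look like this (the file name vars/buildCustomer.groovy and the step name buildCustomer are hypothetical sketches, not part of the answer):

// vars/buildCustomer.groovy in the shared library repository
def call(String customerName) {
    // The common build steps from the Jenkinsfile, parametrized per customer.
    sh """
        yarn --mutex network
        /usr/local/bin/grunt fetch_and_deploy:${customerName} -ac test
        /usr/local/bin/grunt collect_web
    """
}

Each customer's pipeline would then just call buildCustomer(params.CUSTOMER_NAME) after loading the library.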

Jenkins pipeline groovy testing in shell

Can Jenkins pipeline scripts be tested using groovysh or groovy scriptname to run tests for validation, without using the Jenkins UI?
For example for a simple script
pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                sh '''
                    env
                '''
            }
        }
    }
}
Running a test like this, depending on the subset of the script, gives:
No signature of method: * is applicable for argument types
groovysh_evaluate.pipeline()
or for
stage('test') {
    sh '''
        env
    '''
}
reports:
No signature of method: groovysh_evaluate.stages()
or simply
sh '''
env
'''
reports:
No signature of method: groovysh_evaluate.sh()
The question may be: which imports are required, and how can they be installed outside of a Jenkins installation?
Why would anyone want to do this?
To simplify and shorten iteration over test cases, validate library versions without modifying Jenkins installations, and cover other unit and functional test scenarios.
JenkinsPipelineUnit is what you're looking for.
This testing framework lets you write unit tests on the configuration and conditional logic of the pipeline code, by providing a mock execution of the pipeline. You can mock built-in Jenkins commands, job configurations, see the stacktrace of the whole execution and even track regressions.
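As a minimal sketch of such a test (assuming the declarative script above is saved as Jenkinsfile, JenkinsPipelineUnit is on the test classpath, and the class name is hypothetical):

import com.lesfurets.jenkins.unit.declarative.DeclarativePipelineTest
import org.junit.Before
import org.junit.Test

class JenkinsfileTest extends DeclarativePipelineTest {

    @Before
    void setUp() {
        super.setUp()
        // Mock the 'sh' step so the script can run outside Jenkins.
        helper.registerAllowedMethod('sh', [String]) { String cmd -> println cmd }
    }

    @Test
    void pipelineRunsSuccessfully() {
        runScript('Jenkinsfile')   // mock-executes the pipeline
        printCallStack()           // prints every step invocation
        assertJobStatusSuccess()
    }
}

This runs with plain JUnit (e.g. via Gradle or Maven), no Jenkins installation required.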

Multi-branch configuration with externally-defined Jenkinsfile

I have an open-source project that resides on GitHub and is built using a build farm controlled by Jenkins.
I want to build it branch-wise using a pipeline, but I don't want to store the Jenkinsfile inside the code. Is there a way to accomplish this?
I have encountered the same issue as you. While the idea of having the build process as part of the code is good, a Jenkinsfile can include information that is not intrinsic to the project build itself, but rather specific to the build environment instance, which may change.
The way I accomplished this is:
1. Encapsulate the core build process in a single script (build.py or build.sh). This may call specific build tools like Make, CMake, Ant, etc.
2. Tell Jenkins via the Jenkinsfile to call a function defined in a single global library.
3. Define the global Jenkins build function to call the build script (e.g. build.py) with appropriate environment settings, for example using custom tools and setting up the PATH.
So for step 2, create a Jenkinsfile in your project containing just the line
build_PROJECTNAME()
where PROJECTNAME is based on the name of your project.
Then use the Pipeline Shared Groovy Libraries Plugin and create a Groovy script in the shared library repository called vars/build_PROJECTNAME.groovy containing the code that sets up the environment and calls the project build script (e.g. build.py):
def call() {
    node('linux') {
        stage("checkout") {
            checkout scm
        }
        stage("build") {
            withEnv([
                "PATH+CMAKE=${tool 'CMake'}/bin",
                "PATH+PYTHON=${tool 'Python-3'}",
                "PATH+NINJA=${tool 'Ninja'}",
            ]) {
                sh 'python build.py'
            }
        }
    }
}
First of all, why do you not want a Jenkinsfile in your code? The pipeline is just as much part of the code as your build file is.
Other than that, you can load Groovy files to be evaluated as a pipeline script. You can do this from a different location with the "Pipeline script from SCM" option and then check out the actual code, but this will force you to manually take care of the branch builds.
Another option would be to have a very basic Jenkinsfile that merely checks out an external pipeline.
You would get something like this:
node {
    deleteDir()
    git env.flowScm
    def flow = load 'pipeline.groovy'
    stash includes: '**', name: 'flowFiles'
    stage 'Checkout'
    checkout scm // shorthand for checking out the "from scm" repository
    flow.runFlow()
}
where the pipeline.groovy file containing the actual pipeline would look like this:
def runFlow() {
    // your pipeline code
}

// Has to end with 'return this;' in order to be used as a library
return this;

Create reusable Jenkins pipeline script

I have been considering using Jenkins pipeline scripts recently; one question is that I can't figure out a smart way to create internal reusable utils code. Imagine I have a common function helloworld that will be used by lots of pipeline jobs, so I would like to create a utils.jar and inject it into the job classpath.
I notice Jenkins has a similar concept with the global library, but my concerns regarding this plugin are:
Since it is a plugin, we need to install/upgrade it through the Jenkins plugin manager, and it may require a reboot to apply the change. This is not what I want to see, since utils change and are added all the time; we hope they could be available immediately.
Secondly, it is the official Jenkins shared lib; I don't want to (or they will not allow us to) put private code into the Jenkins repo.
Any good idea?
Shared Libraries (docs) allow you to make your code accessible to all your pipeline scripts. You don't have to build a plugin for that, and you don't have to restart Jenkins.
E.g. this is my library and this is a Jenkinsfile that calls this common function.
EDIT (Feb 2017):
The library can be accessed through Jenkins' internal Git server, or deployed through other means (e.g. via Chef) to the workflow-lib/ directory within the jenkins user's home directory (still possible, but very unhandy).
The global library can be configured through the following means:
an @Library('github.com/...') annotation in the Jenkinsfile pointing to the URL of the shared library repo;
configured on the folder level of Jenkins jobs;
configured in the Jenkins configuration as a global library, with the advantage that the code is trusted, i.e., not subject to script security.
A mix of the first and last methods would be a shared library that is not explicitly loaded, but then requested only by its name in the Jenkinsfile: @Library('mysharedlib').
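For illustration, a Jenkinsfile using that last variant might start like this (the sayHello step is a hypothetical example that would live in vars/sayHello.groovy of the library):

// Load the globally configured shared library by name.
@Library('mysharedlib') _

// Call a step defined in the library's vars/sayHello.groovy.
sayHello('world')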
Depending on how often you plan on reusing your code, you could also load a function (or a set of functions) as part of another pipeline.
node {
    // ...your pipeline code...
    git 'http://urlToYourGit/projectContainingYourScript'
    pipeline = load 'global-functions.groovy'
    pipeline.helloworld() // call one of your defined functions
    // ...some other pipeline code...
}
This solution might seem a little bit cumbersome compared to StephenKing's, but what I like about it is that my global functions are all committed to Git, and anybody can easily modify them without (almost) any knowledge of Jenkins, just the basics of Groovy.
In the Groovy script you are loading, make sure you add return this at the very end. This is what allows you to make calls later; otherwise, when you set pipeline = load 'global-functions.groovy', the variable will be set to null.
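For example, a minimal global-functions.groovy could look like this (a sketch around the helloworld function from the question):

def helloworld() {
    echo 'Hello world!'
}

// Required so that 'load' returns this script object instead of null,
// which is what makes pipeline.helloworld() possible afterwards.
return this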
Here is the solution that we are currently using in order to re-use Jenkinsfile code:
node {
    curl_cmd = "curl -H 'Accept: application/vnd.github.v3.raw' -H 'Authorization: token ${env.GITHUB_TOKEN}' https://raw.githubusercontent.com/example/foobar/master/shared/Jenkinsfile > Jenkinsfile.tmp"
    sh "${curl_cmd}"
    load 'Jenkinsfile.tmp'
}
It might be a bit ugly, but it works reliably, and in addition it also allows us to insert some repository-specific code before or after the shared code.
I prefer creating a buildRepo() method that I invoke from the repositories. Its signature is def call(givenConfig = [:]), so it can also be invoked with parameters, like:
buildRepo([
    "npm": [
        "cypress": false
    ]
])
I keep parameters at an absolute minimum and try to rely on conventions rather than configuration.
I provide a default configuration which is also where I put documentation:
def defaultConfig = [
    /**
     * The Jenkins node, or label, that will be allocated for this build.
     */
    "jenkinsNode": "BUILD",
    /**
     * All config specific to the NPM repo type.
     */
    "npm": [
        /**
         * Whether or not to run Cypress tests, if there are any.
         */
        "cypress": true
    ]
]
def effectiveConfig = merge(defaultConfig, givenConfig)

println "Configuration is documented here: https://whereverYouHos/getConfig.groovy"
println "Default config: " + defaultConfig
println "Given config: " + givenConfig
println "Effective config: " + effectiveConfig
Using the effective configuration, and the content of the repository, I create a build plan.
...
derivedBuildPlan.npm.cypress = effectiveConfig.npm.cypress && packageJSON.devDependencies.cypress
...
The build plan is the input to some build methods like:
node(buildPlan.jenkinsNode) {
    stage("Install") {
        sh "npm install"
    }
    stage("Build") {
        sh "npm run build"
    }
    if (buildPlan.npm.tslint) {
        stage("TSlint") {
            sh "npm run tslint"
        }
    }
    if (buildPlan.npm.eslint) {
        stage("ESlint") {
            sh "npm run eslint"
        }
    }
    if (buildPlan.npm.cypress) {
        stage("Cypress") {
            sh "npm run e2e:cypress"
        }
    }
}
I wrote a blog post about this on Jenkins.io:
https://www.jenkins.io/blog/2020/10/21/a-sustainable-pattern-with-shared-library/
