How to execute commands needed to prepare an artifact - aws-cdk

Is there a best practice for executing a preparation command within a CDK stack?
For example, if I was creating a Lambda and I wanted to run serverless package before deployment, is there a way to have the CDK execute that command when necessary? I was reading the documentation and it seems like maybe Construct#prepare would be appropriate?
Basically all I need to run is a child_process.execSync.
Any help appreciated!

Two approaches that I would recommend:
1. Use a wrapper script (e.g. scripts section of package.json):
{
  "name": "your-app",
  "scripts": {
    "deploy": "serverless package && cdk deploy"
  }
}
2. Use cdk.json:
{
  "app": "serverless package && npx ts-node bin/ci.ts"
}
Use the first if your project already uses package.json as an entrypoint for common operations. The second one is more generic but I dislike it because it's a bit more "hidden" (people who are new to CDK may never notice that it's there).
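If you'd rather keep the preparation step in code, the child_process.execSync the question mentions can simply run at the top of the app entrypoint that cdk.json points to. A minimal sketch, assuming CDK v2 (aws-cdk-lib) and a hypothetical CiStack class in lib/ci-stack.ts:

// bin/ci.ts
import { execSync } from 'child_process';
import * as cdk from 'aws-cdk-lib';
import { CiStack } from '../lib/ci-stack'; // hypothetical stack class

// Runs on every synth/deploy, before any construct is instantiated.
execSync('serverless package', { stdio: 'inherit' });

const app = new cdk.App();
new CiStack(app, 'ci-stack');

The trade-off is the same as with the cdk.json approach: the command runs on every synth, but at least it is visible in the app source rather than hidden in configuration.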

Related

How do I build lambdas in different languages using AWS CDK Pipelines

I'm setting up a CDK project that has some lambdas in JavaScript and Python, and I'm trying to figure out the best way to build these functions, as I would normally pass the build command like this:
// Install dependencies, build and run cdk synth
commands: [
  'npm ci',
  'npm run build',
  'npx cdk synth'
]
or
buildCommand: 'npm run build'
The only thing I can think of is to create a build.sh file inside each lambda: for the ones in JS I'd add npm run build, and for the ones in Python pip install -r requirements.txt. But I don't know whether this is good practice, or whether there's a better way to accomplish this.
What you need is a bundling Docker container. You can
either configure the bundling yourself using the bundling option of Code.fromAsset(),
or use the PythonFunction and NodejsFunction constructs, which provide standard bundling for Python and Node.js, respectively.
This AWS blog post gives some more examples of bundling.
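For illustration, a minimal sketch of the construct route, assuming CDK v2; the LambdaStack name and the lambdas/js and lambdas/py paths are placeholders, and PythonFunction lives in the separate @aws-cdk/aws-lambda-python-alpha package:

import * as path from 'path';
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { PythonFunction } from '@aws-cdk/aws-lambda-python-alpha';
import { Construct } from 'constructs';

export class LambdaStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // JavaScript/TypeScript lambda: NodejsFunction bundles with esbuild,
    // so no per-lambda build.sh is needed
    new NodejsFunction(this, 'JsFunction', {
      entry: path.join(__dirname, '../lambdas/js/index.ts'), // hypothetical path
    });

    // Python lambda: requirements.txt is installed inside a Docker
    // bundling container at synth time
    new PythonFunction(this, 'PyFunction', {
      entry: path.join(__dirname, '../lambdas/py'), // hypothetical path
      runtime: lambda.Runtime.PYTHON_3_9,
      index: 'handler.py',
      handler: 'main',
    });
  }
}

Since bundling happens during cdk synth, the single npx cdk synth step in the pipeline covers both languages.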

Yocto - Why can't I override the build task?

I'm playing with my own yocto layer/recipes to figure out how everything works together, and I'm seeing some very confusing behavior related to the build task. Here is my test recipe called condtest:
LICENSE = "GPLv2+ & LGPLv2+"
DISTRO = ""

do_print() {
    echo "print"
}
addtask print

do_fetch() {
    echo "fetch"
}
addtask fetch before build

do_build() {
    echo "build"
}
addtask build

do_compile() {
    :
}
addtask compile
So if I run bitbake -c fetch condtest, I see "fetch" echoed, exactly as I would expect. However, when I run bitbake -c build condtest, bitbake does not echo "build" and instead begins fetching and compiling a bunch of packages. What confuses me further is that if I add the -e flag to the two commands, their output is nearly identical, so I'm not sure why bitbake appears to start building entirely different recipes with the default build task instead of using the build task that I overrode in my recipe.
The base bbclass file (meta/classes/base.bbclass) sets:
do_build[noexec] = "1"
which means the content of the function is not executed; it is just a placeholder task for the dependency graph. This is why you never see output from the build task.
As mentioned in other answers, there are default dependencies which are why other recipes execute when you try and run "standard" tasks like do_build.
The other packages are built because there are build-time dependencies (such dependencies are not needed for the fetch task). The content of your build task is not relevant; the dependencies are stored elsewhere (see the BitBake User Manual, section Build Dependencies, for more information). You can generate a graph of the dependencies using the -g flag in the bitbake invocation (see the official docs).
If you want to disable default dependencies, check the documentation for the variable INHIBIT_DEFAULT_DEPS.
It wasn't part of your question, but I see these glitches in your recipe:
You don't have to call addtask for standard tasks; they are already defined (and documented) in the reference manual.
If you want to skip a task but preserve the dependency chain, you can use do_compile[noexec] = "1".
The DISTRO variable definition belongs in the global configuration, not in a recipe.
Edit: I didn't answer why "build" is not echoed; see Richard's answer for the explanation.

What's the best way to bulk update Jenkins projects?

We have hundreds of Jenkins projects (mostly created from a few templates), and we often need to make the same change to all of them. E.g. today I need to add a post-build step to delete the workspace at the end; next I'll need to change the step that copies the build result to a shared drive so that it goes to a Nexus repository instead.
What's the best way to apply such kind of bulk change to Jenkins projects?
You could use Configuration Slicing Plugin which is designed to do this.
It supports many configuration options.
The REST API is quite powerful. The following sequence worked for me:
In a loop over all relevant projects (the list of projects is available via e.g. /api/xml?tree=jobs[name]):
download config.xml via /job/{name}/config.xml
edit using your favorite scripted xml editor (mine was xmlstarlet)
upload new config xml via /job/{name}/config.xml
Some random notes:
do *BACKUP* before doing anything
I probably could post some bash script example if anyone is interested
Good luck!
EDIT> Example bash script:
#!/bin/bash
jenkinsUrlBase='http://user:token@jenkins'

callJenkins() { # funcPath
    curl --silent --show-error -g "${jenkinsUrlBase}${1}"
}

postJenkinsFile() { # funcPath fileName
    curl --silent --show-error -g -d "@${2}" "${jenkinsUrlBase}${1}"
}

callJenkins '/api/xml?tree=jobs[name]' | xmlstarlet sel -t -v '//hudson/job/name' | while read projectName ; do
    echo "Processing ${projectName}..."
    origFile="${projectName}_old.xml"
    newFile="${projectName}_new.xml"
    callJenkins "/job/${projectName}/config.xml" > "$origFile"
    echo " - Updating artifactory url..."
    cat "$origFile" \
        | xmlstarlet ed -P -u '//maven2-moduleset/publishers/org.jfrog.hudson.ArtifactoryRedeployPublisher/details/artifactoryUrl' -v "http://newServer/artifactory" \
        > "${newFile}"
    if false ; then
        echo " - Committing new config file..."
        postJenkinsFile "/job/${projectName}/config.xml" "$newFile"
    else
        echo " - Dry run: not committing new config file"
    fi
done
Groovy is by far the best way to bulk update jobs. You may have to do a little digging into the jenkins / plugin api to figure out what api calls to make, but the script console (http://yourJenkinsUrl/script) provides an easy way to play around with the code until you get it right.
To get you started, you can add / remove post-build steps by calling the getPublishersList() method on a job and then calling the add / remove methods.
def publishersList = Jenkins.instance.getJob("JobName").getPublishersList()
publishersList.removeAll { it.class == whatever.plugin.class }
publishersList.add(new PluginConstructor())
If you're not sure what publisher class you need to delete the workspace, I would suggest manually adding the desired configurations to one job, and then run getPublishersList() from the script console on that job. You will see the class you are working with in the list, and then you can go look at the api to see what is required to construct it.
You can then iterate through all your jobs and add the publisher doing something like this:
Jenkins.instance.getView("All Jobs").items.each { job ->
    // Maybe some logic here to filter out specific jobs
    job.getPublishersList().add(new PluginConstructor())
}
Alternatively, you can use the Jenkins CLI or the REST API, but in order to update post-build actions you will have to modify the project configuration XML (which isn't trivial to do programmatically) and then overwrite the job configuration with the new file.
You can edit the config.xml file with your favorite text tool (I use Python) and then reload the jenkins configuration.
In my setup the jobs are stored in ~/.jenkins/jobs/*/config.xml.
See: https://wiki.jenkins-ci.org/display/JENKINS/Administering+Jenkins
Here is a small example to update foo to bar:
</com.cwctravel.hudson.plugins.extended__choice__parameter.ExtendedChoiceParameterDefinition>
<hudson.model.StringParameterDefinition>
  <name>additional_requirements</name>
  <description>foo</description>
  ...
Script:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import, division, unicode_literals, print_function

import sys

from lxml import etree


def change_parameter_description(config_xml_path, parameter_name, new_description):
    tree = etree.parse(config_xml_path)
    for tag in tree.findall('.//hudson.model.StringParameterDefinition'):
        name_tag = tag.find('./name')
        if not name_tag.text == parameter_name:
            continue
        description = tag.find('./description')
        description.text = new_description
    tree.write(config_xml_path)


for config_xml_path in sys.argv[1:]:
    change_parameter_description(config_xml_path, 'additional_requirements', 'bar')
In this small example a regex would work, but if things span several lines, it is better to work with xml tools :-)
The other answers are great, but if you use pipelines, I'd suggest using Pipeline Shared Libraries.
We have all our jobs in a git repository. To develop a new feature, we try it out in a branch, since it is possible to point just one job to a specific branch. When we need to update the jobs, we just merge into master. The jobs are treated as code, with a proper release process.

VSCode Task to run ant buildfile located anywhere

I have a huge project spread across multiple source directories, developed over the last 15 years using Eclipse with custom external-tools configurations to launch ant tasks from build.xml files anywhere inside the source directories (a big mess, I know!).
As the everyday work is mostly XML- and JavaScript-based, I thought of VSCode as a lightweight alternative (Eclipse is, e.g., unable to deal with large XML files without exceeding its heap space). Task Runners look to me like a great way to integrate the ant builds into the editor, and they are also advertised as capable of running ant builds:
Examples are Make, Ant, Gulp, Jake, Rake and MSBuild to name only a few.
I am able to run ant builds with the build.xml in the root folder. This is however not how the project is structured.
Is there a way to run the task command (ant, in my case) from a directory different to the workspace root?
I think of something like git's GIT_WORK_TREE environment variable or a way to perform two commands (like cd {{build.xml folder}} && ant). My current tasks.json is
{
  "version": "0.1.0",
  "command": "ant",
  "isShellCommand": true,
  "showOutput": "silent",
  "args": ["all", "jar"],
  "problemMatcher": "" // I'm also not sure what to put here,
                       // but that's another question
}
(I'm on windows, btw -- but come from linux/osx and am kinda new to the ways thinks work here.)
You can define the cwd the command should run in. This is done like this:
{
  "version": "0.1.0",
  "command": "ant",
  "isShellCommand": true,
  "options": {
    "cwd": "My folder to run in"
  }
}
See https://code.visualstudio.com/Docs/editor/tasks_appendix for a definition of the tasks.json file.

How to execute package for one submodule only on Jenkins?

I have an sbt project with 4 modules: module-a, module-b, module-c, module-d.
Each module can be packaged as a WAR. I want to set up a deployment on Jenkins that would build only one of the 4 modules and deploy it to a container.
In detail, I want to have 4 Jenkins jobs - job-a, job-b, job-c, job-d, each building only the defined module (a to d).
For now, I am using clean update test package as the command for the Jenkins sbt build, but this results in packaging all 4 modules, which is not necessary.
I already tried project -module-a clean update test package, but with no luck.
You may also like to execute project-scoped clean and test tasks as follows:
sbt module-a/clean module-a/test
This solution is slightly shorter, and it is clearer which project the commands apply to.
You don't need to execute the update task, since it's implicitly executed by test, as described in the output of inspect tree test.
There's a way to make it cleaner with an alias. Use the following in the build.sbt:
addCommandAlias("jenkinsJob4ModuleA", "; module-a/clean; module-a/test")
With the alias, execute jenkinsJob4ModuleA to have the same effect as the above solution.
Quote the argument to project, i.e. project module-a, and don't use a dash before the name of the submodule.
The entire command line for the Jenkins job would then be as follows:
./sbt "project module-a" clean update test
