I'm trying to create a state machine with a BatchSubmitJob in AWS CDK, with dynamic environment variables in the BatchContainerOverrides. I was thinking of something like this:
container_overrides = sfn_tasks.BatchContainerOverrides(
    environment={
        "TEST.$": "$.dynamic_from_payload"
    }
)
return sfn_tasks.BatchSubmitJob(self.scope,
    id="id",
    job_name="name",
    job_definition_arn="arn",
    job_queue_arn="arn",
    container_overrides=container_overrides,
    payload=sfn.TaskInput.from_object({
        "dynamic_from_payload.$": "$.input.some_variable"
    })
)
However, upon deployment, CDK adds "Name" and "Value" to the state machine definition, and the value is now static. This is the relevant part of the state machine definition as seen in the console:
"Environment": [
{
"Name": "TEST.$",
"Value": "$.dynamic_from_payload"
}
]
But I need to have it like this:
"Environment": [
{
"Name": "TEST",
"Value.$": "$.dynamic_from_payload"
}
]
I also tried using "Ref::", as done here for the command parameters: AWS Step and Batch Dynamic Command. But this doesn't work either.
I also looked into escape hatches, overwriting the CloudFormation template. But I don't think that is applicable here, since the generated statemachine definition string is basically one large string.
I can think of two solutions, both of which don't make me happy: override the statemachine definition string with escape hatches with a copy in which "Value" is replaced on certain conditions (probably with regex) OR put a lambda in the statemachine that will create and trigger the batch job and a lambda that will poll if the job is finished.
Long story short: Does anyone have an idea of how to use dynamic environment variables with a BatchSubmitJob in CDK?
You can use the aws_cdk.aws_stepfunctions.JsonPath class:
container_overrides = sfn_tasks.BatchContainerOverrides(
    environment={
        "TEST": sfn.JsonPath.string_at("$.dynamic_from_payload")
    }
)
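With JsonPath.string_at, the synthesized state machine definition should render the override with a "Value.$" key, i.e. the shape you were after, roughly:

"Environment": [
    {
        "Name": "TEST",
        "Value.$": "$.dynamic_from_payload"
    }
]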
Solved thanks to K. Galens!
I ended up with a Pass state with intrinsic functions to format the value and the aws_cdk.aws_stepfunctions.JsonPath for the BatchSubmitJob.
So something like this:
sfn.Pass(scope,
    id="id",
    result_path="$.result",
    parameters={"dynamic_from_payload.$": "States.Format('static_sub_part/{}', $.dynamic_sub_part)"}
)
...
container_overrides = sfn_tasks.BatchContainerOverrides(
    environment={
        "TEST": sfn.JsonPath.string_at("$.result.dynamic_from_payload")
    }
)
I would like to pass a variable to some of my build rules, e.g. this Webpack step:
load("#npm//webpack-cli:index.bzl", webpack = "webpack_cli")
webpack(
name = "bundle",
args = [
"--config",
"$(execpath webpack.config.js)",
"--output-path",
"$(#D)",
],
data = [
"index.html",
"webpack.config.js",
"#npm//:node_modules",
] + glob([
"src/**/*.js",
]),
env = {
"FOO_BAR": "abc",
},
output_dir = True,
)
Some builds will be done with FOO_BAR=abc and others with a different value. I don't know the full set of possible values!
I don't think that --action_env is applicable here since it is not a genrule.
I would also like to be able to set a default value in my BUILD script.
How can I accomplish this with Bazel?
If you knew the set of values, then the usual tools are config_setting() and select(), but since you don't know the possible values, then that won't work here.
It looks like webpack is actually an npm_package_bin or nodejs_binary underneath:
# Generated helper macro to call webpack-cli
def webpack_cli(**kwargs):
    output_dir = kwargs.pop("output_dir", False)
    if "outs" in kwargs or output_dir:
        npm_package_bin(tool = "@npm//webpack-cli/bin:webpack-cli", output_dir = output_dir, **kwargs)
    else:
        nodejs_binary(
            entry_point = { "@npm//:node_modules/webpack-cli": "bin/cli.js" },
            data = ["@npm//webpack-cli:webpack-cli"] + kwargs.pop("data", []),
            **kwargs
        )
and in both cases env will do make-variable substitution:
https://bazelbuild.github.io/rules_nodejs/Built-ins.html#nodejs_binary-env
https://bazelbuild.github.io/rules_nodejs/Built-ins.html#npm_package_bin-env
So if you at least know what the variables will be, you can do something like:
env = {
    "FOO_BAR": "$(FOO_BAR)",
},
and use --define=FOO_BAR=123 from the Bazel command line.
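For the default value you asked about, one option (an assumption about your setup rather than something from the rules_nodejs docs linked above) is to put a default --define in your project's .bazelrc; a --define passed on the command line then takes precedence:

# .bazelrc: default make-variable value, overridable with --define=FOO_BAR=... on the command line
build --define=FOO_BAR=abc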
nodejs_binary has additional attributes related to environment variables:
https://bazelbuild.github.io/rules_nodejs/Built-ins.html#nodejs_binary-configuration_env_vars
https://bazelbuild.github.io/rules_nodejs/Built-ins.html#nodejs_binary-default_env_vars
If the number of environment variables to set, or the name of the environment variable itself, is not known, then you might need to open a feature request with rules_nodejs.
I'm trying to grok the rules surrounding variables in Groovy/Jenkinsfiles/declarative syntax.
The generic webhook trigger captures HTTP POST content and makes it available as variables in your Jenkinsfile. E.g.:
pipeline {
    agent any
    triggers {
        GenericTrigger(
            genericVariables: [
                [ key: "POST_actor_name", value: "\$.actor.name" ]
            ],
            token: "foo"
        )
    }
    stages {
        stage( "Set up" ) {
            steps {
                script {
                    echo "env var ${env.actor_name}"
                    echo "global var ${actor_name}"
                }
            }
        }
    }
}
If the HTTP POST content contains a JSON object with an actor_name field valued "foo", then this prints:
env var foo
global var foo
If the HTTP POST content does not contain the JSON field actor_name, then this prints
env var null
...then asserts/aborts with a No such property error.
Jenkins jobs also have a "this project is parameterized" setting, which seems to introduce yet another way to inject variables into your Jenkinsfile. The following Jenkinsfile prints a populated, parameterized build variable, an unpopulated one, and an intentionally nonexistent variable:
pipeline {
    agent any
    stages {
        stage( "Set up" ) {
            steps {
                script {
                    echo "1 [${env.populated_var}]"
                    echo "2 [${env.unpopulated_var}]"
                    echo "3 [${env.dontexist}]"
                    echo "4 [${params.populated_var}]"
                    echo "5 [${params.unpopulated_var}]"
                    echo "6 [${params.dontexist}]"
                    echo "7 [${populated_var}]"
                    echo "8 [${unpopulated_var}]"
                    echo "9 [${dontexist}]"
                }
            }
        }
    }
}
The result is:
1 [foo]
2 []
3 [null]
4 [foo]
5 []
6 [null]
7 [foo]
8 []
...then asserts/aborts with a No such property error.
The pattern I can ascertain is:
env.-scoped variables will be NULL if they come from unpopulated HTTP POST content.
env.-scoped variables will be empty strings if they come from unpopulated parameterized build variables.
env.-scoped variables will be NULL if they are nonexistent among parameterized build variables.
Referencing global-scoped variables will assert if they come from unpopulated HTTP POST content.
Referencing global-scoped variables will be empty strings if they come from unpopulated parameterized build variables.
params.-scoped variables will be NULL if they are nonexistent among parameterized build variables.
params.-scoped variables will be empty strings if they come from unpopulated parameterized build variables.
I have a few questions about this - I believe they are reasonably related, so I am including them in this one post:
What is the underlying pattern/logic behind when a variable is NULL and when it is an empty string?
Why are variables available in different "scopes": env., params., and globally, and what is their relationship (why are they not always 1:1)?
Is there a way for unpopulated values in parameterized builds to be null-valued variables in the Jenkinsfile instead of empty strings?
Context: in my first Jenkinsfile project, I made use of variables populated by HTTP POST content. Through this, I came to associate a value's absence with the corresponding .env variable's null-ness. Now, I'm working with variables coming from parameterized build values, and when a value is not populated, the corresponding .env variable isn't null -- it's an empty string. Therefore, I want to understand the pattern behind when and why these variables are null versus empty, so that I can write solid and simple code to handle absence/non-population of values from both HTTP POST content and parameterized build values.
The answer is a bit complicated.
For 1 and 2:
First of all, pipeline, stage, steps, ... are Groovy classes. Everything in there is defined as an object/variable.
env is an object that holds pretty much everything,
params holds all parameters ;)
They are both a Map: if you access an existing but empty value, it's empty; if you access a non-existing one, it's null.
The globals are variables themselves, and if you try to access a non-existing one, the compiler complains.
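To see that distinction outside Jenkins, here is a plain-Groovy sketch (the keys are made up to mirror the examples above):

// env and params behave like Maps: an existing-but-empty key vs. a missing key
def vars = [populated_var: 'foo', unpopulated_var: '']

assert vars['unpopulated_var'] == ''   // key exists with an empty value -> empty string
assert vars['dontexist'] == null       // key does not exist -> null

println "unpopulated [${vars['unpopulated_var']}]"   // prints: unpopulated []
println "dontexist [${vars['dontexist']}]"           // prints: dontexist [null]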
For 3:
You can define "default" parameter:
pipeline {
    agent any
    stages {
        stage( "Set up" ) {
            steps {
                script {
                    params = setConfig(params);
                }
            }
        }
    }
}

def merge(Map lhs, Map rhs) {
    return rhs.inject(lhs.clone()) { map, entry ->
        if (map[entry.key] instanceof Map && entry.value instanceof Map) {
            map[entry.key] = merge(map[entry.key], entry.value)
        } else {
            map[entry.key] = entry.value
        }
        return map
    }
}

def setConfig(givenConfig = [:]) {
    def defaultConfig = [
        "populated_var": "",
        "unpopulated_var": "",
        "dontexist": ""
    ];
    effectiveConfig = merge(defaultConfig, givenConfig);
    return effectiveConfig
}
I am able to add/modify DevOps release definitions through a combination of CLI and CLI REST methods. The release definition object does not include (as far as I can tell) a property that controls the variable group scope. The release definition itself takes an array of variable group IDs, but there is also the scope of the variable group within the context of the release definition. Where is that?
Is there support for accessing the variable group scope property in the CLI or CLI REST interface? The image below shows the interface from the portal in Azure. Selecting the ellipsis (...) you can "change scope", where a list of stages is displayed. You then save that and then save the release definition.
I captured Fiddler output, but the POST body was huge and not very helpful; I didn't see anything related to a list of scopes. But obviously this can be done; I'm just not sure about doing so via CLI or REST.
Edit: Here is a view of the script. There is no "scope", which should contain a list of environment names, anywhere in the release definition that I can see. Each environment name (aka stage) contains a number of variable groups associated with each environment.
$sourcedefinition = getreleasedefinitionrequestwithpat $reldefid $personalAccesstoken $org $project | select -Last 1

Write-Host "Root VariableGroups: " $sourcedefinition.variableGroups
$result = @()

# search each stage in the pipeline
foreach($item in $sourcedefinition.environments)
{
    Write-Host ""
    Write-Host "environment name: "$item.name
    Write-Host "environment variable groups: "$item.variableGroups
}
To help clarify, the scope I seek cannot be in the environments collection as this is specific to each element (stage). The scope is set at the release definition level for a given variable group (again refer to the image).
I use this API to get the definition of my release, and I find that the values of variableGroups in ReleaseDefinition and in ReleaseDefinitionEnvironment are different when the scopes are different.
So if we want to change the scope via the REST API, we just need to change the variableGroups and update the definition. We can use this API to update the definition.
Edit:
For example, I want to change my scope from Release to Stage, I use the API like below:
PUT https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/definitions?api-version=6.1-preview.4
Request Body: (I get this from the first Get Definitions API Response Body and make some changes to use it)
{
    "source": "userInterface",
    "revision": 6,
    ...
    "lastRelease": {
        "id": 1,
        ...
    },
    ...
    "variables": {
    },
    "variableGroups": [],
    "environments": [
        {
            "name": "Stage 1",
            ...
            "variables": {
            },
            "variableGroups": [
                4
            ],
            ...
        }
    ],
    ...
}
Note:
Please use your own newer revision.
The id value in lastRelease is your release definitionId.
Specify the stage name in environments name.
The variableGroups value in environments is the id of the variable group you want to change scope.
I have a multi-branch pipeline job set to build by Jenkinsfile every minute if new changes are available from the git repo. I have a step that deploys the artifact to an environment if the branch name is of a certain format. I would like to be able to configure the environment on a per-branch basis without having to edit Jenkinsfile every time I create a new such branch. Here is a rough sketch of my Jenkinsfile:
pipeline {
    agent any
    parameters {
        string(description: "DB name", name: "dbName")
    }
    stages {
        stage("Deploy") {
            steps {
                deployTo "${params.dbName}"
            }
        }
    }
}
Is there a Jenkins plugin that will let me define a default value for the dbName parameter per branch in the job configuration page? Ideally something like the mock-up below:
The values should be able to be reordered to set priority. The plugin stops checking for matches after the first one. Matching can be exact or regex.
If there isn't such a plugin currently, please point me to the closest open-source one you can think of. I can use it as a basis for coding a custom plugin.
A possible plugin you could use as a starting point for a custom plugin is the Dynamic Parameter Plugin
Here is a workaround:
Using the Jenkins Config File Provider plugin, create a config JSON with parameters defined in it per branch. Example:
{
    "develop": {
        "dbName": "test_db",
        "param2": "value"
    },
    "master": {
        "dbName": "prod_db",
        "param2": "value1"
    },
    "test_branch_1": {
        "dbName": "zss_db",
        "param2": "value2"
    },
    "default": {
        "dbName": "prod_db",
        "param2": "value3"
    }
}
In your Jenkinsfile:
final commit_data = checkout(scm)
BRANCH = commit_data['GIT_BRANCH']

configFileProvider([configFile(fileId: '{Your config file id}', variable: 'BRANCH_SETTINGS')]) {
    def config = readJSON file: "$BRANCH_SETTINGS"
    def branch_config = config."${BRANCH}"
    if (branch_config) {
        echo "using config for branch ${BRANCH}"
    }
    else {
        branch_config = config.default
    }
    echo branch_config.'dbName'
}
You can then use branch_config.'dbName', branch_config.'param2', etc. You can even set it to a global variable and then use it throughout your pipeline.
The config file can easily be edited via the Jenkins UI (provided by the plugin) to provision for new branches/params in the future. This doesn't need access to any non-sandbox methods.
Not really an answer to your question, but possibly a workaround...
I don't know what the rest of your parameter list looks like, but if it is a static list, you could potentially have your static list with a "use Default" option as the first one.
When the job is run, if the value is "use Default", then gather the default from a file stored in the SCM branch and use that.
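A minimal sketch of that idea, assuming the default is checked into the branch as a plain text file (the file path, stage name, and dbName parameter are illustrative assumptions):

stage( "Resolve DB name" ) {
    steps {
        script {
            DB_NAME = params.dbName
            if (DB_NAME == "use Default") {
                // fall back to the per-branch default stored in the repository
                DB_NAME = readFile("ci/db-name.txt").trim()
            }
        }
    }
}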
I'd like to switch to XML syntax in Sublime Text 2 using some key binding, for example Ctrl+Shift+X.
There is a command for that; I can successfully execute it from the console:
view.set_syntax_file("Packages/XML/XML.tmLanguage")
I tried this binding, but it doesn't work:
{ "keys": ["ctrl+shift+x"], "command": "set_syntax_file", "args" : {"syntax_file" : "Packages/XML/XML.tmLanguage" }}
The API reference for the set_syntax_file command can be found here.
Any ideas?
Try this:
{ "keys": ["ctrl+shift+x"], "command": "set_file_type", "args" : {"syntax" : "Packages/XML/XML.tmLanguage" } }
set_syntax_file is an API command, so to use it I created a simple plug-in.
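For completeness, a minimal sketch of such a plug-in (the file name and command name here are my own choices, not from the Sublime Text docs):

# Packages/User/set_xml_syntax.py
import sublime_plugin

class SetXmlSyntaxCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        # call the view API directly, since set_syntax_file is an API method, not a registered command
        self.view.set_syntax_file("Packages/XML/XML.tmLanguage")

It can then be bound with { "keys": ["ctrl+shift+x"], "command": "set_xml_syntax" }.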