How to check if value is defined in rego? - open-policy-agent

I want to check if a variable is defined in OPA policy.
> subject
1 error occurred: 1:1: rego_unsafe_var_error: var subject is unsafe
Is there a function to check if a variable is defined?

You can use the show command to display the active module definition.
> show
package repl
b = 45534543
a = 3423
sites = [
    {"name": "prod"},
    {"name": "smoke1"},
    {"name": "dev"},
]
You will find the list of defined variables here.
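If you need the check inside a policy rather than in the REPL, one common pattern (a minimal sketch, not part of the answer above; input.subject is a hypothetical field) is a rule with a default, so it evaluates to false instead of raising an unsafe/undefined error:

package example

# true only when input.subject is defined and not null
default subject_defined = false

subject_defined {
    input.subject != null
}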

Related

How to define string based on host os in bazel rule definition?

I have the following rule definition:
helm_action = rule(
    attrs = {
        …
        "cluster_aliases": attr.string_dict(
            doc = "key value pair matching for creating a cluster alias where the name used to invoke a cluster alias is different than the actual cluster's name",
            default = DEFAULT_CLUSTER_ALIASES,
        ),
        …
    },
    …
)
I'd like the DEFAULT_CLUSTER_ALIASES value to be based on the host OS, but
DEFAULT_CLUSTER_ALIASES = {
    "local": select({
        "@platforms//os:osx": "docker-desktop",
        "@platforms//os:linux": "minikube",
    }),
}
errors with:
Error in string_dict: expected value of type 'string' for dict value element, but got select({"@platforms//os:osx": "docker-desktop", "@platforms//os:linux": "minikube"}) (select)
How do I go about defining DEFAULT_CLUSTER_ALIASES based on the host OS?
Judging from https://github.com/bazelbuild/bazel/issues/2045, selecting based on the host OS is not possible.
When you create a rule or macro, it is evaluated during the loading phase, before command-line flags are parsed. Bazel needs to know the default value in your build rule helm_action during the loading phase, but it can't, because it hasn't yet parsed the command line or analysed the build graph.
The command line is parsed and select statements are evaluated during the analysis phase. As a broad rule, if your select statement isn't in a BUILD.bazel file, it's not going to work. So the easiest way to achieve what you are after is to create a macro that wraps your rule and injects the default, e.g.
# helm_action.bzl

# Add an '_' prefix to your rule to make the rule private.
_helm_action = rule(
    attrs = {
        …
        "cluster_aliases": attr.string_dict(
            doc = "key value pair matching for creating a cluster alias where the name used to invoke a cluster alias is different than the actual cluster's name",
            # Remove the default attribute.
        ),
        …
    },
    …
)

# Wrap your rule in a publicly exported macro.
def helm_action(**kwargs):
    _helm_action(
        # Instantiate your rule with a select; name and the other
        # attributes pass straight through from the macro's kwargs.
        cluster_aliases = DEFAULT_CLUSTER_ALIASES,
        **kwargs
    )
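For this to work, DEFAULT_CLUSTER_ALIASES itself must now carry the select(). Note that select() cannot be nested inside a dict value as in the original attempt; it has to wrap the whole dict. A sketch of the default in helm_action.bzl:

# The select() wraps entire dicts and is resolved during the analysis
# phase, after the macro has expanded into the consuming BUILD file.
DEFAULT_CLUSTER_ALIASES = select({
    "@platforms//os:osx": {"local": "docker-desktop"},
    "@platforms//os:linux": {"local": "minikube"},
})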
It's important to note the difference between a macro and a rule. A macro is a way of generating a set of targets using other build rules, and when used in a BUILD file it expands out to roughly the equivalent of its contents. You can check this by querying a target with the --output build flag, e.g.
load(":helm_action.bzl", "helm_action")
helm_action(
name = "foo",
# ...
)
You can query the output using the command:
bazel query //:foo --output build
This will demonstrate that the select statement is being copied into the BUILD file.
A good example of this approach is in the rules_docker repository.
EDIT: The question was clarified, so I've got an updated answer below but will keep the above answer in case it is useful to others.
A simple way of achieving what you are after is to use Bazel's toolchain API. This is a very flexible API and is what most language rulesets in Bazel use, e.g.
Create a BUILD file with your toolchains:
# //helm:BUILD.bazel
load(":helm_toolchains.bzl", "helm_toolchain")

toolchain_type(name = "toolchain_type")

helm_toolchain(
    name = "osx",
    cluster_aliases = {
        "local": "docker-desktop",
    },
)

toolchain(
    name = "osx_toolchain",
    toolchain = ":osx",
    toolchain_type = ":toolchain_type",
    exec_compatible_with = ["@platforms//os:macos"],
    # Optionally use to restrict target platforms too.
    # target_compatible_with = []
)

helm_toolchain(
    name = "linux",
    cluster_aliases = {
        "local": "minikube",
    },
)

toolchain(
    name = "linux_toolchain",
    toolchain = ":linux",
    toolchain_type = ":toolchain_type",
    exec_compatible_with = ["@platforms//os:linux"],
)
Register your toolchains so that Bazel knows what to look for:
# //:WORKSPACE
# the rest of your workspace...
register_toolchains("//helm:all")
# You may need to register your execution platforms too...
# register_execution_platforms("//your_platforms/...")
Implement the toolchain backend:
# //helm:helm_toolchains.bzl
HelmToolchainInfo = provider(fields = ["cluster_aliases"])

def _helm_toolchain_impl(ctx):
    toolchain_info = platform_common.ToolchainInfo(
        helm_toolchain_info = HelmToolchainInfo(
            cluster_aliases = ctx.attr.cluster_aliases,
        ),
    )
    return [toolchain_info]

helm_toolchain = rule(
    implementation = _helm_toolchain_impl,
    attrs = {
        "cluster_aliases": attr.string_dict(),
    },
)
Update helm_action to use toolchains. e.g.
def _helm_action_impl(ctx):
    cluster_aliases = ctx.toolchains["@your_repo//helm:toolchain_type"].helm_toolchain_info.cluster_aliases
    #...

helm_action = rule(
    _helm_action_impl,
    attrs = {
        #…
    },
    toolchains = ["@your_repo//helm:toolchain_type"],
)
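With the toolchains registered, a consuming target needs no per-OS wiring; Bazel resolves the matching toolchain from the execution platform. A hypothetical consumer BUILD file:

# BUILD.bazel (consumer, hypothetical)
load("//helm:helm_action.bzl", "helm_action")

helm_action(
    name = "deploy_local",
    # cluster_aliases comes from whichever toolchain Bazel resolves,
    # so nothing OS-specific is needed here.
)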

Get the variable value if variable name is stored as string in Groovy Script

I am completely new to Groovy and Jenkins. I have some predefined variables in a Groovy script (of a Jenkins pipeline) and need to pick one of them dynamically based on job/user input.
The example context of the requirement is provided below.
Here the variable env is my input, and based on it I should get the correct userid.
env = "dev" //Input
def stg_userid = "abc"
def dev_userid = "xyz"
uid_var_name = "${env}_userid"
print "${uid_var_name}" // It is giving "dev_userid"
print 'abc' if we give 'stg' for env ;
print 'xyz' if we give 'dev' for env
I tried searching online for the dynamic variable name use case in Groovy, but didn't get anything useful.
Usually this is a question of a complex variable (a Map) that holds the parameters for all possible environments, and you can get the section of this configuration by environment name:
env = "dev"
def config = [
dev: [
user: 'aaa',
url: 'aaa-url'
],
stg: [
user: 'zzz',
url: 'zzz-url'
]
]
def uid_var_name = config[env].user // returns "aaa"
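A usage sketch with the same config map: the lookup works for any environment key, and Groovy's Elvis operator gives a fallback for unknown environment names (the fallback choice here is hypothetical):

assert config['stg'].user == 'zzz' // pick another section by key

// fall back to the dev section if env is not a known key
def section = config[env] ?: config.dev
println section.user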

AWS CDK StateMachine BatchSubmitJob with dynamic environment variables

I'm trying to create a statemachine with a BatchSubmitJob in AWS CDK with dynamic environment variables in the BatchContainerOverrides. I was thinking about something like this:
container_overrides = sfn_tasks.BatchContainerOverrides(
    environment={
        "TEST.$": "$.dynamic_from_payload"
    }
)

return sfn_tasks.BatchSubmitJob(
    self.scope,
    id="id",
    job_name="name",
    job_definition_arn="arn",
    job_queue_arn="arn",
    container_overrides=container_overrides,
    payload=sfn.TaskInput.from_object({
        "dynamic_from_payload.$": "$.input.some_variable"
    })
)
However, upon deployment, CDK adds "Name" and "Value" keys to the statemachine definition, but the value is now static. This is part of the statemachine definition as seen in the console:
"Environment": [
{
"Name": "TEST.$",
"Value": "$.dynamic_from_payload"
}
]
But I need to have it like this:
"Environment": [
{
"Name": "TEST",
"Value.$": "$.dynamic_from_payload"
}
]
I also tried using "Ref::", as done for the command parameters in AWS Step and Batch Dynamic Command, but this doesn't work either.
I also looked into escape hatches, overwriting the CloudFormation template, but I don't think that is applicable here, since the generated statemachine definition is basically one large string.
I can think of two solutions, neither of which makes me happy: override the statemachine definition string with escape hatches, using a copy in which "Value" is replaced under certain conditions (probably with regex), OR put a lambda in the statemachine that creates and triggers the batch job, plus a lambda that polls whether the job is finished.
Long story short: Does anyone have an idea of how to use dynamic environment variables with a BatchSubmitJob in CDK?
You can use the aws_cdk.aws_stepfunctions.JsonPath class:
container_overrides = sfn_tasks.BatchContainerOverrides(
    environment={
        "TEST": sfn.JsonPath.string_at("$.dynamic_from_payload")
    }
)
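With JsonPath.string_at, the synthesized statemachine definition should come out in the desired form, with the path moved to a Value.$ key:

"Environment": [
    {
        "Name": "TEST",
        "Value.$": "$.dynamic_from_payload"
    }
]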
Solved thanks to K. Galens!
I ended up with a Pass state that uses intrinsic functions to format the value, plus aws_cdk.aws_stepfunctions.JsonPath for the BatchSubmitJob.
So something like this:
sfn.Pass(
    scope,
    id="id",
    result_path="$.result",
    parameters={"dynamic_from_payload.$": "States.Format('static_sub_part/{}', $.dynamic_sub_part)"}
)
...
container_overrides = sfn_tasks.BatchContainerOverrides(
    environment={
        "TEST": sfn.JsonPath.string_at("$.result.dynamic_from_payload")
    }
)

How can I define an owner to an empty_dir using container_image or container_layer from bazel rules_docker?

From the PR that implemented empty_dirs, it seems there's support for defining dir owners (with the names argument) and mode in the add_empty_dir method of the TarFile class.
But the container_image rule (and container_layer) supports only mode.
This works:
container_image(
    name = "with_empty_dirs",
    empty_dirs = [
        "etc",
        "foo",
        "bar",
    ],
    mode = "0o777",
)
But this returns an error: "ERROR: (...) no such attribute 'names' in 'container_image_' rule":
container_image(
    name = "with_empty_dirs",
    empty_dirs = [
        "etc",
        "foo",
        "bar",
    ],
    names = "nginx",
)
Do we need to “write a customized container_image” if we want to add support for owner of empty_dirs?
In a BUILD file, the attribute you're looking for is ownername. See the pkg_tar reference documentation for more details. Also, I don't think you can pass it directly to container_image; you have to create a separate pkg_tar first, like this:
pkg_tar(
    name = "with_empty_dirs_tar",
    empty_dirs = [
        "etc",
        "foo",
        "bar",
    ],
    ownername = "nginx.nginx",
)

container_image(
    name = "with_empty_dirs",
    tars = [":with_empty_dirs_tar"],
)
In general, container_image exposes a subset of pkg_tar's attributes directly to cover simple cases of adding files, but for complex use cases you should create a pkg_tar yourself for full access to all of its features for adding/arranging files and setting their ownership.
The names you see in that PR is a variable in a Python helper tool that the BUILD file rules use as part of their implementation. There's a layer of abstraction between what you write in a BUILD file and that Python code.

Is it possible to set the variable group scope using DevOps CLI or via REST

I am able to add/modify DevOps release definitions through a combination of CLI and CLI REST methods. The release definition object does not include (as far as I can tell) a property that controls the variable group scope. The release definition itself takes an array of variable group IDs, but there is also the scope of the variable group within the context of the release definition. Where is that?
Is there support for accessing the variable group scope property in the CLI or CLI REST interface? The image below shows the interface from the portal in Azure. Selecting the ellipsis (...) you can "change scope", where a list of stages is displayed. You then save that change and then save the release definition.
I captured Fiddler output, but the POST body was huge and not very helpful; I didn't see anything related to a list of scopes. But obviously this can be done; I'm just not sure about doing so via CLI or REST.
Edit: Here is a view of the script. There is no "scope" (which should contain a list of environment names) anywhere in the release definition that I can see. Each environment name (aka stage) contains a number of variable groups associated with that environment.
$sourcedefinition = getreleasedefinitionrequestwithpat $reldefid $personalAccesstoken $org $project | select -Last 1
Write-Host "Root VariableGroups: " $sourcedefinition.variableGroups

$result = @()

# search each stage in the pipeline
foreach ($item in $sourcedefinition.environments)
{
    Write-Host ""
    Write-Host "environment name: " $item.name
    Write-Host "environment variable groups: " $item.variableGroups
}
To help clarify: the scope I seek cannot be in the environments collection, as that is specific to each element (stage). The scope is set at the release definition level for a given variable group (again, refer to the image).
I used this API to get the definition of my release and found that the values of variableGroups in the ReleaseDefinition and in a ReleaseDefinitionEnvironment differ when the scopes are different.
So if we want to change the scope via the REST API, we just need to change the variableGroups and update the definition. We can use this API to update the definition.
Edit:
For example, to change my scope from Release to Stage, I use the API like below:
PUT https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/definitions?api-version=6.1-preview.4
Request Body: (I get this from the first Get Definitions API Response Body and make some changes to use it)
{
    "source": "userInterface",
    "revision": 6,
    ...
    "lastRelease": {
        "id": 1,
        ...
    },
    ...
    "variables": {},
    "variableGroups": [],
    "environments": [
        {
            "name": "Stage 1",
            ...
            "variables": {},
            "variableGroups": [
                4
            ],
            ...
        }
    ],
    ...
}
Note:
Please use your own, newer revision.
The id value in lastRelease is your release definitionId.
Specify the stage name in the environments name.
The variableGroups value under environments is the id of the variable group whose scope you want to change.
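As a sketch of sending the update from the same PowerShell context the question uses (variable names here are hypothetical; $updatedDefinitionJson is the modified body from the earlier GET):

# Hypothetical sketch: PUT the modified definition back using PAT auth.
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$personalAccessToken"))
$headers = @{ Authorization = "Basic $token" }
$url = "https://vsrm.dev.azure.com/$org/$project/_apis/release/definitions?api-version=6.1-preview.4"
Invoke-RestMethod -Method Put -Uri $url -Headers $headers `
    -ContentType "application/json" -Body $updatedDefinitionJson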
