Is there a way I can use for_each to create resources conditionally in Terraform? Or any Terraform functions to refactor this?

count = var.create_monitoring_role && var.monitoring_interval > 0 ? 1 : 0
Is there a way I can use for_each to create resources conditionally in Terraform, or any Terraform functions to refactor this?
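For reference, one common refactor is to drive for_each with a set that is either empty or contains a single placeholder element, so the resource is created zero or one times. A minimal sketch, assuming a hypothetical RDS monitoring role (the resource type, names, and the referenced policy document are illustrative, not from the question):
resource "aws_iam_role" "monitoring" {
  # one instance when both conditions hold, zero otherwise
  for_each = toset(var.create_monitoring_role && var.monitoring_interval > 0 ? ["this"] : [])

  name               = "rds-monitoring-role"
  assume_role_policy = data.aws_iam_policy_document.monitoring_assume.json  # illustrative
}

# Downstream references then use the key, e.g. aws_iam_role.monitoring["this"].arn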

Related

deleting resources during CDK deploy?

How can I delete unused resources during a CDK deploy? I'm creating a VPC with:
vpc = ec2.Vpc(self, id = "myTestVPC", max_azs = 3,
    cidr = "10.120.0.0/16",
    subnet_configuration = [ presentation_nc, logic_nc, data_nc ])
So I wind up with 9 subnets - 3 public and 6 private. I want to use 1 routing table for the public subnets and 1 for the private. I'm doing that by iterating over vpc.private_subnets and vpc.public_subnets, and associating the routing tables with ec2.CfnSubnetRouteTableAssociation. That part works.
Now I have 7 unused routing tables that I want to get rid of. How can I do that?
Set the removal policy to DESTROY, then either remove the stack from CloudFormation or remove the code from the CDK application:
vpc.applyRemovalPolicy(RemovalPolicy.DESTROY)
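Since the question uses the Python bindings, the same call would presumably be spelled in snake_case; a minimal sketch (CDK v2 import shown, adjust for v1):
from aws_cdk import RemovalPolicy  # CDK v1: from aws_cdk.core import RemovalPolicy

# Tell CloudFormation to delete the resource (instead of retaining it) when it
# is removed from the stack or the stack itself is deleted.
vpc.apply_removal_policy(RemovalPolicy.DESTROY)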

parallel steps on different nodes on declarative jenkins pipeline cannot access variables defined in global scope

Let me preface this by saying that I don't yet fully understand how jenkins DSL / groovy handles namespace, scope, variables etc.
In order to keep my code DRY I put repeated command sequences into variables.
It turns out the variable script below is not readable by the code in doParallelStuff. Why is that? Is there a way to share global variables defined in the script (or elsewhere) among both the main pipeline steps and the doParallelStuff code?
def script = """\
#/bin/bash
python xyz.py
"""
def doParallelStuff() {
tests["1"] = {
node {
stage('ps1') {
sh script
}
}
}
tests["2"] = {
node {
stage('ps2') {
sh script
}
}
}
parallel tests
}
pipeline {
stages {
stage("myStage") {
steps {
script {
sh script
doParallelStuff()
}
}
}
}
}
The actual steps are a bit more complicated, but this causes an error like the following to be thrown:
hudson.remoting.ProxyException: groovy.lang.MissingPropertyException: No such property: script for class: WorkflowScript
When you define a variable outside of the pipeline directive using the def keyword, you are defining it in the local scope of the main script. Because the pipeline keyword is actually a method that is executed in the main script, it can access that variable, as they are defined and executed in the same scope (they are actually transformed into a separate class).
When you define a function outside of the pipeline directive, that function has its own variable scope, which is separate from the scope of the main script, and therefore it cannot access a variable defined with def at the top level.
To solve it, you can define the variable without the def keyword, which changes the scope in which the variable is created: without def (in a Groovy script, not a class) the variable is added to the script's global variables (the Binding), which makes it accessible from any function or code within the Groovy script. You can read more in the following question: What is the difference between defining variables using def and without?
So in your case, you want a variable that is available both to the pipeline code itself and to the defined functions. It needs to be available anywhere in the script as a global variable, so just define it without the def keyword, and it should do the trick:
script = """\
#!/bin/bash
python xyz.py
"""

Conditionally create a Bazel rule based on --config

I'm working on a problem in which I only want to create a particular rule if a certain Bazel config has been specified (via '--config'). We have been using Bazel since 0.11 and have a bunch of build infrastructure that works around former limitations in Bazel. I am incrementally porting us up to newer versions. One of the features that was missing was compiler transitions, and so we rolled our own using configs and some external scripts.
My first attempt at solving my problem looks like this:
load("#rules_cc//cc:defs.bzl", "cc_library")
# use this with a select to pick targets to include/exclude based on config
# see __build_if_role for an example
def noop_impl(ctx):
pass
noop = rule(
implementation = noop_impl,
attrs = {
"deps": attr.label_list(),
},
)
def __sanitize(config):
if len(config) > 2 and config[:2] == "//":
config = config[2:]
return config.replace(":", "_").replace("/", "_")
def build_if_config(**kwargs):
config = kwargs['config']
kwargs.pop('config')
name = kwargs['name'] + '_' + __sanitize(config)
binary_target_name = kwargs['name']
kwargs['name'] = binary_target_name
cc_library(**kwargs)
noop(
name = name,
deps = select({
config: [ binary_target_name ],
"//conditions:default": [],
})
)
This almost gets me there, but the problem is that if I want to build a library as an output, then it becomes an intermediate dependency, and therefore gets deleted or never built.
For example, if I do this:
build_if_config(
name="some_lib",
srcs=[ "foo.c" ],
config="//:my_config",
)
and then I run
bazel build --config my_config //:some_lib
Then libsome_lib.a does not make it to bazel-out, although if I define it using cc_library, then it does.
Is there a way that I can just create the appropriate rule directly in the macro instead of creating a noop rule and using a select? Or another mechanism?
Thanks in advance for your help!
As I noted in my comment, I was misunderstanding how Bazel figures out its dependencies. The create a file section of The Rules Tutorial explains some of the details, and I followed along here for some of my solution.
Basically, the problem was not that the built files were not sticking around, it was that they were never getting built. Bazel did not know to look in the deps attribute and build those things: it seems I had to create an action which uses the deps, and then register the resulting output by returning a (list of) DefaultInfo.
Below is my new noop_impl function
def noop_impl(ctx):
    if len(ctx.attr.deps) == 0:
        return None

    # ctx.attr has the attributes of this rule
    dep = ctx.attr.deps[0]

    # DefaultInfo is apparently some sort of globally available
    # class that can be used to index Target objects
    infile = dep[DefaultInfo].files.to_list()[0]

    outfile = ctx.actions.declare_file('lib' + ctx.label.name + '.a')
    ctx.actions.run_shell(
        inputs = [infile],
        outputs = [outfile],
        command = "cp %s %s" % (infile.path, outfile.path),
    )

    # we can also instantiate a DefaultInfo to indicate what output
    # we provide
    return [DefaultInfo(files = depset([outfile]))]
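With that implementation, the earlier invocation should now produce an archive named after the noop target (a sketch; the exact output root can vary by setup):
bazel build --config my_config //:some_lib
# expected output, per the answer above: bazel-bin/libsome_lib.a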

IF / ELSE statement inside .yml file

Is there a way to use IF/ELSE inside the .yml file?
I wanted to define env variables if it's not a pull request.
Something like this idea:
env:
  matrix:
    if ($TRAVIS_PULL_REQUEST) {
      - BROWSER='chrome_linux' BUILD='default'
      - BROWSER='chrome_linux' BUILD='nocompat'
      - BROWSER='firefox_linux' BUILD='default'
      - BROWSER='firefox_linux' BUILD='nocompat'
    }
    else {
      - BROWSER='phantomjs' BUILD='default'
    }
Is this possible?
I don't think this particular case would work: TRAVIS_PULL_REQUEST is defined on the build worker, while the build matrix must be constructed before the job is handed off to the worker.
I suggest writing a wrapper script that reads TRAVIS_PULL_REQUEST and sets the environment variables correctly, or doing something like this in before_install:
[ "${TRAVIS_PULL_REQUEST}" != "false" ] && BROWSER='chrome_linux' BUILD='default' || true

redis lua script vs. single calls

I have the following setup:
2 different data structures: Sets and Strings
They are in different namespaces: *:collections:*, *:resources:*
The client doesn't know about this, and I try to get both namespaces every time.
Based on exists I decide which data structure to finally get.
All calls to Redis are done asynchronously (vert.x redis-mod).
Now I have to decide if I execute this as lua script or as single commands.
The lua script I came up with:
local path = KEYS[1]
local resourcesPrefix = ARGV[1]
local collectionsPrefix = ARGV[2]

if redis.call('exists', resourcesPrefix..path) == 1 then
    return redis.call('get', resourcesPrefix..path)
elseif redis.call('exists', collectionsPrefix..path) == 1 then
    return redis.call('smembers', collectionsPrefix..path)
else
    return "notFound"
end
Are there any pros and cons for single calls or lua script?
Yes, a Lua script is the best solution here, especially when called via EVALSHA:
You are working with Redis asynchronously, so Lua helps you reduce the amount of code and improves readability.
The Lua approach is faster because it reduces network communication.
I think you can write your code with just two commands; you do not need exists in your code.
local path = KEYS[1]
local resourcesPrefix = ARGV[1]
local collectionsPrefix = ARGV[2]
local ret

ret = redis.call('get', resourcesPrefix..path)
if ret then
    return ret
end

ret = redis.call('smembers', collectionsPrefix..path)
if ret then
    return ret
end

return "notFound"
It looks like a good use of Redis Lua scripting to me. The execution time will be short (long scripts are to be avoided, as they are blocking). It avoids making multiple calls, so it reduces the total network communication time. So I think it's a better solution than single calls if you make many of these calls, especially if you use EVALSHA to have the script cached in Redis.
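For completeness, caching and invoking the script with EVALSHA could look roughly like this from redis-cli (the file name, key, and prefix values are made up for the example; vert.x redis-mod exposes equivalent commands):
# load the script once and keep its SHA1 digest
SHA=$(redis-cli SCRIPT LOAD "$(cat get_resource_or_collection.lua)")
# 1 key (the path), followed by the two prefix arguments
redis-cli EVALSHA "$SHA" 1 some/path resources: collections: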
