Setting up extra context with kitchen-terraform

I'm trying to use kitchen-terraform to verify a terraform module I'm building. This particular module is a small piece in a larger infrastructure. It depends on some pieces of the network being available and will then be used later to spin up additional servers and whatnot.
I'm curious if there's a way with kitchen-terraform to create some pieces of infrastructure before the module under test runs and to also add in some extra pieces that aren't part of the module proper.
In this particular case, the module is creating a new VPC with some peering connections to an existing VPC, security groups, and subnets. I want to verify that the peering connections were established correctly, as well as spin up some EC2 instances to verify the status of the network.
Does anyone have examples of doing something like this?

I'm curious if there's a way with kitchen-terraform to create some pieces of infrastructure before the module under test runs and to also add in some extra pieces that aren't part of the module proper.
You can do all of this. Your .kitchen.yml specifies where the Terraform code to execute lives:
provisioner:
  name: terraform
  directory: path/to/terraform/code
  variable_files:
    - path/to/terraform/variables.tfvars
More to the point, create a main.tf in a test location that builds all the infrastructure you want, including the modules. The order of execution will be controlled by the dependencies of the resources themselves.
Assuming you are testing in the same repo as your module, maybe arrange something like this:
├── .kitchen.yml
├── Gemfile
├── Gemfile.lock
├── README.md
├── terraform
│   └── my_module
│       ├── main.tf
│       └── variables.tf
└── test
    ├── main.tf
    └── terraform.tfvars
The actual .kitchen.yml will include this:
provisioner:
  name: terraform
  directory: test
  variable_files:
    - test/terraform.tfvars
  variables:
    access_key: <%= ENV['AWS_ACCESS_KEY_ID'] %>
    secret_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
And your test/main.tf will instantiate the module along with any other code under test.
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
...
module "my_module" {
name = "foo"
source = "../terraform/my_module"
...
}
resource "aws_instance" "test_instance_1" {
...
}
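To give an idea of how the extra pieces can consume the module's outputs, here is a minimal sketch of such a test instance, assuming the module exports a subnet ID output named subnet_id and that a test_ami_id variable exists (both hypothetical):
# Hypothetical sketch: place a test instance in a subnet created by the
# module so you can verify connectivity across the peering connection.
resource "aws_instance" "test_instance_2" {
  ami           = "${var.test_ami_id}"            # hypothetical variable
  instance_type = "t2.micro"
  subnet_id     = "${module.my_module.subnet_id}" # assumes the module outputs this
}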

Related

Is it possible to specify paths in a configuration file that are relative to the configuration file location?

I have a complex config search path consisting of multiple locations where each location looks similar to this:
├── conf
│   └── foo
│       ├── foo.yaml
│       └── bar.yaml
└── files
    ├── foo.txt
    └── bar.txt
with foo.yaml:
# @package _group_
path: "../../files/foo.txt"
and bar.yaml:
# @package _group_
path: "../../files/bar.txt"
Now the problem is: how do I find the correct location of the files specified in the configurations? I am aware of the to_absolute_path() method provided by Hydra, but it interprets the path relative to the directory in which the application was started. However, I would like to interpret that path relative to the location of the configuration file. I cannot do this manually in my code, because I don't know how Hydra resolved the configuration file or where exactly it lives.
Is there some mechanism to determine the location of a config file from Hydra? I really want to refrain from putting hard-coded absolute paths in my configurations.
You can't get the path of a config file. In fact, it may not be a file at all (as is the case for Structured Configs), or it can live inside a Python wheel (even a zipped one).
You can do something like this, where __file__ is the path of the Python module that knows where the data files live:
path = os.path.join(os.path.dirname(__file__), "relative_path_from_config")
You can also use the APIs designed for loading resource files from Python modules.
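For instance, a minimal sketch using importlib.resources from the standard library (Python 3.9+; the package and file names here are placeholders):
from importlib.resources import files

# Locate a data file relative to the package that ships it, independent of
# the current working directory (this also works from zipped distributions).
resource = files("my_package").joinpath("files/foo.txt")
text = resource.read_text()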

.bzl file in external dependencies

I have an external dependency declared in WORKSPACE as a new_git_repository, and I have provided a BUILD file for it.
proj/
├── BUILD
├── external
│   ├── BUILD.myDep
│   └── code.bzl
└── WORKSPACE
In the BUILD.myDep file, I want to load the nearby code.bzl, but when I load it with load("//:external/code.bzl", "some_func"), Bazel tries to load @myDep//:external/code.bzl instead!
Of course that's not a target in the @myDep repository, but in my local workspace.
It seems I rubber-ducked Stack Overflow, since the solution appeared while I was writing the question!
The solution is to explicitly mention the local workspace when loading the .bzl file:
Suppose we have declared the name in the WORKSPACE as below:
workspace(name = "local_proj")
Now instead of load("//:external/code.bzl", "some_func"), just load it explicitly as a local workspace file:
load("@local_proj//:external/code.bzl", "some_func")
NOTE: When using this trick just be careful about potential dependency loops (i.e. loading a generated file that itself is produced by a rule depending on the same external repo!)
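For context, the injected BUILD.myDep can then use the loaded symbol like this (a sketch; some_func and its argument are placeholders for whatever code.bzl actually defines):
# external/BUILD.myDep -- the build file injected into @myDep
load("@local_proj//:external/code.bzl", "some_func")

# Placeholder usage of the loaded macro/rule.
some_func(name = "generated")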

terraform validate error: The argument "region" is required, but was not set

I wrote a custom RDS module for my development team to consume for deploying RDS instances. I am using Bitbucket for source control, and I am trying to integrate a Bitbucket pipeline to run terraform validate on my .tf files to validate syntax before making the module consumable to the devs. terraform init runs fine, but when I run terraform validate I get the following error: Error: Missing required argument. The argument "region" is required, but was not set. After looking at the documentation, I am confused why this command would check for a declared provider when it is not actually deploying anything. I am admittedly new to writing modules. Perhaps this isn't the right command for what I want to accomplish?
Terraform version: v0.12.7
AWS Provider version: 2.24
bitbucket-pipelines.yml:
image: hashicorp/terraform:full

pipelines:
  branches:
    master:
      - step:
          script:
            - terraform version
            - terraform init
            - terraform validate
Module tree:
├── CHANGELOG.md
├── README.md
├── bitbucket-pipelines.yml
├── main.tf
├── modules
│   ├── db_instance
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── db_option_group
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── db_parameter_group
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── db_subnet_group
│       ├── README.md
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── outputs.tf
└── variables.tf
The situation you've hit here is the bug described in Terraform issue #21408, where validation is checking that the provider configuration is complete even though you're intending to write a module that will inherit a provider.
There are two main workarounds for this at the time of writing. The easiest one-shot workaround is to set the environment variable AWS_DEFAULT_REGION to any valid AWS region; it will then be used as the value for region, allowing validation to pass.
To make that reproducible, you can use a test configuration, which can serve as a test bed for the module when you are developing it alone, outside the context of a particular caller. To do this, make a directory tests/simple (or really anything you like; the name doesn't matter) and put in it a test.tf file containing something like this:
provider "aws" {
region = "us-east-1"
}
module "under_test" {
source = "../.."
# Any arguments the module requires
}
You can then switch into that test directory and use the normal Terraform workflow to validate the whole configuration together:
cd tests/simple
terraform init
terraform validate
A nice benefit of this general idea of test configurations is that you can potentially also use them for end-to-end testing by running terraform plan or terraform apply with a suitable set of environment variables set, and you can have multiple test configurations to test different permutations of options and make sure they all pass validation and, if you do end-to-end testing, that they all work.
Even once that Terraform issue is fixed, test configurations will remain a nice technique for ensuring that your module works as a child module, separately from whether it's valid in isolation.
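A module layout with several such test configurations might look like this (the directory names here are illustrative):
├── main.tf
├── variables.tf
├── outputs.tf
└── tests
    ├── simple
    │   └── test.tf
    └── with_peering
        └── test.tf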
I ran into the same problem even though I had provided a region in my provider configuration.
After some digging I came across this thread on Terraform's discussion board. The problem, it seems, is that for some undocumented reason Terraform expects the AWS_DEFAULT_REGION environment variable to be set to a region value (e.g. "us-west-1"). Setting it to a valid region solved the problem for me.
You can find more information about the environment variables Terraform reads in its documentation.
Hope this helps.
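For example, in the pipeline script (the region value is arbitrary as far as validation is concerned; this assumes a POSIX shell):
export AWS_DEFAULT_REGION=us-east-1
terraform init
terraform validate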
One or more of your Terraform resources has no region configured. To handle this without the AWS_DEFAULT_REGION environment variable, or if you work with multiple regions, you can use provider aliases in your resources to specify the region. For example:
provider "aws" {
region = "us-east-1"
alias = "us"
}
...
resource "aws_cloudwatch_log_metric_filter" "hk_DBrecoverymode-UAT" {
provider = aws.us
...
}

Rebar eunit skips all app tests if root app is not included

My problem is that I can't run eunit tests for a single app or module without including the root app. My directory layout looks a bit like this:
├── apps
│   ├── app1
│   └── app2
├── deps
│   ├── amqp_client
│   ├── meck
│   ├── rabbit_common
│   └── ranch
├── rebar.config
├── rel
└── src
    ├── rootapp.app.src
    ├── rootapp.erl
    ├── rootapp.erl
    └── rootapp.erl
Now, what I can do is:
$ rebar eunit skip_deps=true
which runs the tests for all apps. Also, I can do:
$ cd apps/app1/
$ rebar eunit skip_deps=true
which runs the tests for app1 (I have a rebar.config in apps/app1 as well).
However, if I try
$ rebar eunit skip_deps=true apps=app1
it does... nothing. No output. Trying verbose mode gives me:
$ rebar -vv eunit skip_deps=true apps=app1
DEBUG: Consult config file "/Users/myuser/Development/erlang/rootapp/rebar.config"
DEBUG: Rebar location: "/usr/local/bin/rebar"
DEBUG: Consult config file "/Users/myuser/Development/erlang/erlactive/src/rootapp.app.src"
DEBUG: Skipping app: rootapp
When I include the root app, it works:
$ rebar eunit skip_deps=true apps=rootapp,app1
Despite the fact that I actually want to test app1, not rootapp, this is really inconvenient, since the SublimeErl plugin for Sublime Text 2 always sets apps to the app containing the module under test. The tests will therefore always fail, because no tests will run at all.
Long story short: Is there something I can configure in any of the rebar.config files to make it possible to run the tests for one app in /apps without including the root app?
Personally, I prefer to put the main app into its own OTP-compliant folder in apps. Just create a new app rootapp in apps and include it in your rebar.config:
{sub_dirs, ["apps/app1",
            "apps/app2",
            "apps/rootapp"]}.
You might also have to include the apps directory into your lib path:
{lib_dirs, ["apps"]}.
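Putting the two settings together, the top-level rebar.config would then contain something like this (a sketch; your existing deps and other options stay as they are):
%% rebar.config (top level)
{lib_dirs, ["apps"]}.
{sub_dirs, ["apps/app1",
            "apps/app2",
            "apps/rootapp"]}.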
You might want to have a look at Fred Hebert's blog post “As bad as anything else”.
With this setup you should be able to run:
rebar skip_deps=true eunit
which will run all eunit tests of the apps in apps.

How to generate the ActionScript 3 source code of an OpenLaszlo LZX SWF runtime app

When developing OpenLaszlo applications, it's sometimes useful to generate the ActionScript 3 source code of an application written in lzx, e.g. when you want to compile OpenLaszlo into an Adobe AIR application.
What is the simplest way to generate the ActionScript 3 source code into a predefined folder?
The lzc command line tool, which can be found in $LPS_HOME/WEB-INF/lps/server/bin/, has an option for that:
--lzxonly
    for as3 runtime, emit intermediate as files,
    but don't call backend as3 compiler
By default the OpenLaszlo compiler will generate the ActionScript 3 code into the system-specific Java temp folder, but the $JAVA_OPTS environment variable can be used to change that folder.
Here's an example of how the command can be used in combination with $JAVA_OPTS on Linux:
a) Create a simple LZX file, e.g.
<canvas>
  <button text="Hello world" />
</canvas>
and save it as test.lzx.
b) Set the $JAVA_OPTS variable
The following syntax is for Linux or OS X:
export JAVA_OPTS="-Djava.io.tmpdir=./tmp -Xmx1024M"
c) Compile the LZX into ActionScript 3
> lzc --lzxonly test.lzx --runtime=swf10
Compiling: test.lzx to test.swf10.swf
The tmp folder will then contain the generated ActionScript 3 files:
tmp
├── lzccache
└── lzswf9
    └── build
        └── test
            ├── app.swf
            ├── build.sh
            ├── LzApplication.as
            ├── $lzc$class_basebutton.as
            ├── $lzc$class_basecomponent.as
            ├── $lzc$class_basefocusview.as
            ├── $lzc$class_button.as
            ├── $lzc$class__componentmanager.as
            ├── $lzc$class_focusoverlay.as
            ├── $lzc$class__m2u.as
            ├── $lzc$class__m2v.as
            ├── $lzc$class__m2w.as
            ├── $lzc$class__m2x.as
            ├── $lzc$class__m2y.as
            ├── $lzc$class__m2z.as
            ├── $lzc$class__m30.as
            ├── $lzc$class__m31.as
            ├── $lzc$class__mm.as
            ├── $lzc$class__mn.as
            ├── $lzc$class__mo.as
            ├── $lzc$class__mp.as
            ├── $lzc$class_statictext.as
            ├── $lzc$class_style.as
            ├── $lzc$class_swatchview.as
            ├── LZC_COMPILER_OPTIONS
            ├── LzPreloader.as
            └── LzSpriteApplication.as
The folder structure follows the scheme {JAVA_TEMP_FOLDER}/lzswf9/build/{LZX_FILENAME_WITHOUT_EXTENSION}, therefore in our case:
tmp/lzswf9/build/test/
The main application file is LzSpriteApplication.as, and you can look into the build.sh file to get an idea of how the Flex SDK's mxmlc command is used to compile the generated source code.
