I want to use the output variables of one resource/module as input to another resource/module. Is that possible? Here I want the output value from 'outputs.tf' in the root to be used as an input in 'main.tf' of the module.
root
├── main.tf
├── vars.tf
├── outputs.tf
└── module
    ├── main.tf
    └── vars.tf
Of course you can, and there is nothing special you need to do. Here is an example:
.
├── main.tf
├── rg
│   ├── output.tf
│   └── rg.tf
└── vnet
    ├── output.tf
    └── vnet.tf
You create the modules rg and vnet as usual and set the outputs you need. Here I set the outputs rg_name and rg_location in the module rg, and declare the matching variables rg_name and rg_location in the module vnet. Then main.tf looks like this:
provider "azurerm" {
features {}
}
module "rg" {
source = "./rg"
rg_name = "charlesTerraform"
}
module "vnet" {
source = "./vnet"
rg_name = module.rg.rg_name
rg_location = module.rg.rg_location
}
output "vnet" {
value = module.vnet.vnet
}
You see, I use the outputs of the module rg as inputs for the module vnet. Hope it helps you understand Terraform modules.
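For completeness, a minimal sketch of what the module files might contain (the resource name azurerm_resource_group.rg and the exact contents are assumptions, not shown in the original answer):
rg/output.tf

output "rg_name" {
  value = azurerm_resource_group.rg.name
}

output "rg_location" {
  value = azurerm_resource_group.rg.location
}

vnet/vnet.tf (variable declarations only)

variable "rg_name" {
  type = string
}

variable "rg_location" {
  type = string
}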
Update:
It works the same way with the structure you describe. You just need to pass the value you need into the module as an input. For example:
resource "azurerm_resource_group" "example" {
name = "xxxxxx"
location = "xxxx"
}
module "vnet" {
source = "./modules"
resource_group = azurerm_resource_group.example.name
}
This is just an example, but it shows you how to achieve it.
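For this to work, the module must declare a matching input variable, for example in modules/vars.tf (a sketch following the file layout from the question):

variable "resource_group" {
  type = string
}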
You can also use the terraform output command to read output values and feed them into other tools as variables. See the Terraform docs: https://www.terraform.io/cli/commands/output
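For example, assuming the root output vnet defined above, after terraform apply you could read it from the command line:

terraform output vnet          # human-readable value
terraform output -json vnet    # JSON, convenient for scripts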
The following is a minimal reproducer for an infinite recursion error when building a NixOS configuration:
(import <nixpkgs/nixos>) {
  configuration = { pkgs, ... }: {
    options = builtins.trace "Building a system with system ${pkgs.system}" {};
  };
  system = "x86_64-linux";
}
When evaluated it fails as follows, unless the reference to pkgs.system is removed:
$ nix-build
error: infinite recursion encountered
at /Users/charles/.nix-defexpr/channels/nixpkgs/lib/modules.nix:496:28:
495| builtins.addErrorContext (context name)
496| (args.${name} or config._module.args.${name})
| ^
497| ) (lib.functionArgs f);
If we look at the implementation of nixos/lib/eval-config.nix:33, we see that the value passed for the system argument is set as an overridable default in pkgs. Does this mean we can't access it until later in the evaluation process?
(In the real-world use case, I'm introspecting a flake -- investigating someFlake.packages.${pkgs.system} to find packages for which to generate configuration options.)
This has been cross-posted to NixOS Discourse; see https://discourse.nixos.org/t/accessing-target-system-when-building-options-for-a-module/
In order for the module system to construct the configuration, it needs to know which config and options items exist, at least to the degree necessary to produce the root attribute set of configuration.
The loop is as follows:
1. Evaluate the attribute names in config
2. Evaluate the attribute names of the options
3. Evaluate pkgs (your code)
4. Evaluate config._module.args.pkgs (the definition of the module argument)
5. Evaluate the attribute names in config (loop)
It can be broken by removing or reducing the dependency on pkgs.
For instance, you could define your "dynamic" options as type = attrsOf foo instead of enumerating each item from your flake as an individual option.
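A rough sketch of that idea (the option name myPackages and the element type are assumptions):

{ lib, ... }: {
  # With attrsOf, the attribute names come from the config definitions
  # rather than from option declarations, so declaring the option does
  # not have to force pkgs.
  options.myPackages = lib.mkOption {
    type = lib.types.attrsOf lib.types.package;
    default = { };
  };
}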
Another potential solution is to move the option definitions into a submodule. A submodule without attrsOf, as in attrsOf (submodule x), is generally quite useless, but it may create the necessary indirection that separates your dynamic pkgs-dependent options from the module fixpoint that has pkgs.
(import <nixpkgs/nixos>) {
  configuration = { pkgs, lib, ... }: {
    options.foo = lib.mkOption {
      type = lib.types.submodule {
        options = builtins.trace "Building a system with system ${pkgs.system}" { };
      };
      default = { };
    };
  };
  system = "x86_64-linux";
}
nix-repl> config.foo
trace: Building a system with system x86_64-linux
{ }
As an alternate approach for cases where avoiding the recursion isn't feasible, one can use specialArgs when invoking nixos/lib/eval-config.nix to pass a final value that cannot be overridden through the module system:
let
  configuration = { pkgs, forcedSystem, ... }: {
    options = builtins.trace "Building a system with system ${forcedSystem}" {};
  };
in
(import <nixpkgs/nixos/lib/eval-config.nix>) {
  modules = [ configuration ];
  system = "x86_64-linux";
  specialArgs = { forcedSystem = "x86_64-linux"; };
}
I am currently using a Jenkins shared library from my jobs without issues.
Right now I am trying to do some refactoring: there is a chunk of code to determine which AWS account to use in almost every tool we currently have in the library.
I created the following file, getaccount.groovy:
class GetAccount {
    def getAccount(accountName) {
        def awsAccount = "abcd"
        return awsAccount
    }
}
Then I am trying to do this from within one of the other Groovy scripts:
def getaccount = load 'getaccount.groovy'
def awsAccount = getaccount.getAccount(account)
But that does not work, since it looks for that file in the current working directory, not in the library directory.
I am unable to figure out the best way to call another class from within a library that is already being used.
The Jenkins load step is meant to load an external Groovy file that is available in the job workspace. It will not work if you try to load a Groovy script from a Jenkins shared library, because the shared library is never checked out into the job workspace.
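For contrast, load works only for a file that actually exists in the workspace, e.g. one checked out with the job. A sketch, assuming getaccount.groovy defines getAccount as a script-level method and ends with return this so the returned object exposes it:

node {
    checkout scm
    // getaccount.groovy must be present in the workspace for load to find it
    def acct = load 'getaccount.groovy'
    def awsAccount = acct.getAccount('xyz')
}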
If you instead follow the standard shared library structure shown below, it can be done like this:
shared-library
├── src
│   └── org
│       └── any
│           └── GetAccount.groovy
└── vars
    └── aws.groovy
GetAccount.groovy
package org.any

class GetAccount {
    def getAccount(accountName) {
        def awsAccount = "abcd"
        return awsAccount
    }
}
aws.groovy
import org.any.GetAccount

def call() {
    def x = new GetAccount()
    // make use of val and proceed with your further logic
    def val = x.getAccount('xyz')
}
In your Jenkinsfile (declarative or scripted) you can use both of the shared library's Groovy files, like this:
Make use of aws.groovy:
Scripted pipeline:
node {
    stage('deploy') {
        aws()
    }
}
Declarative pipeline:
pipeline {
    agent any
    stages {
        stage('deploy') {
            steps {
                aws()
            }
        }
    }
}
Make use of GetAccount.groovy:
Scripted pipeline:
import org.any.GetAccount

node {
    stage('deploy') {
        def x = new GetAccount()
        // make use of val and proceed with your further logic
        def val = x.getAccount('xyz')
    }
}
Declarative pipeline:
import org.any.GetAccount

pipeline {
    agent any
    stages {
        stage('deploy') {
            steps {
                script {
                    def x = new GetAccount()
                    // make use of val and proceed with your further logic
                    def val = x.getAccount('xyz')
                }
            }
        }
    }
}
I'm experiencing some behavior in Jenkins shared libraries, and it would be great if someone could explain it to me.
First issue
Let's say I have a file in the vars directory:
// MultiMethod.groovy
def Foo() { ... }
def Bar() { ... }
Now if I want to use those functions from the pipeline, what I did was:
// Jenkinsfile
@Library('LibName') _

pipeline {
    ...
    steps {
        script {
            // Method (1): this will work
            def var = new MultiMethod()
            var.Foo()
            var.Bar()

            // Method (2): this will not work
            MultiMethod.Foo()
        }
    }
}
(Here "method (1)" and "method (2)" refer to the two ways of calling the functions in the Groovy script; don't confuse them with the functions themselves.)
So it works only if I instantiate this MultiMethod with the new operator.
But if I name the file multiMethod (camel-cased) instead of MultiMethod, I can use method (2) to call the functions in the script, and that works fine. Can someone explain this behavior?
Second issue
Based on the example above: if I have the Groovy file named MultiMethod (we saw earlier that I can use its methods if I instantiate it with new), I can't seem to instantiate an object of MultiMethod when loading the library dynamically, like this:
// Jenkinsfile
pipeline {
    ...
    steps {
        script {
            // Method (1): this will not work
            library 'LibName'
            def var = new MultiMethod()
            var.Foo()
            var.Bar()
        }
    }
}
If I try to do so, I get this:
Running in Durability level: MAX_SURVIVABILITY
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 11: unable to resolve class multiMethod
# line 11, column 32.
def mult = new multiMethod()
^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:958)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUni
...
EDIT
I notice that if I do this:
// Jenkinsfile
pipeline {
    ...
    steps {
        script {
            library 'LibName'
            MultiMethod.Foo()
            MultiMethod.Bar()
        }
    }
}
It does work!
Last Question
One more question, if I may. I noticed that people tend to write
return this
at the end of their scripts in the vars directory. Can someone explain what it is good for? I'd be happy if someone could explain how this mechanism works, i.e. why those scripts are turned into global variables.
Thanks
First Question Answer
It's because Jenkins defines that convention for shared libraries. There is a good explanation in the official Jenkins documentation, and it works if you follow the standard, as in the example below.
Make sure you are following this folder structure:
shared-library
├── src
│   └── org
│       └── any
│           └── MultiMethod.groovy
└── vars
    └── multiMethod.groovy
multiMethod.groovy
def foo() {
    echo "Hello foo from vars/multiMethod.groovy"
}

def bar() {
    echo "Hello bar from vars/multiMethod.groovy"
}
Once you have this and have configured your shared library accordingly, you can make use of multiMethod.groovy in your Jenkinsfile like below:
Jenkinsfile
@Library('jenkins-shared-library') _

pipeline {
    agent any
    stages {
        stage('log') {
            steps {
                script {
                    multiMethod.foo()
                    multiMethod.bar()
                }
            }
        }
    }
}
Why does it work this way? It is explained in the Jenkins shared library documentation (Extending with Shared Libraries).
But to make use of src/org/any/MultiMethod.groovy in the src folder, you have to instantiate the class and call its methods. Below is my example.
MultiMethod.groovy
package org.any

class MultiMethod {
    def steps

    MultiMethod(steps) {
        this.steps = steps
    }

    def foo() {
        steps.echo "Hello foo from src/org/any/MultiMethod.groovy"
    }

    def bar() {
        steps.echo "Hello bar from src/org/any/MultiMethod.groovy"
    }
}
Jenkinsfile
@Library('jenkins-shared-library') _
import org.any.MultiMethod

pipeline {
    agent any
    stages {
        stage('log') {
            steps {
                script {
                    def x = new MultiMethod(this)
                    x.foo()
                    x.bar()
                }
            }
        }
    }
}
Second Question Answer
Your second question is a duplicate of this post, where I have tried to explain it and give an example. Please take a look.
Last Question Answer
It's not necessary to return this from a Jenkins global variable defined in vars. If you write:
vars/returnThisTest.groovy
def helloWorld() {
    echo "Hello EveryOne"
}
or
def helloWorld() {
    echo "Hello EveryOne"
}

return this
both are the same, and from the Jenkinsfile you can just call returnThisTest.helloWorld(). But return this becomes useful when you want to keep state in the global variable and chain calls, as in the example from the Jenkins documentation.
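A sketch of one such scenario (the file and method names are assumed for illustration, not the exact documentation example), where each setter returns this so calls can be chained and state is kept between calls:

// vars/acme.groovy (hypothetical)
def setName(value) {
    name = value    // kept in the script binding between calls
    return this     // returning this enables call chaining
}

def setAge(value) {
    age = value
    return this
}

def caution(message) {
    echo "Hello, ${name} (${age})! CAUTION: ${message}"
}

In a pipeline you could then write acme.setName('Alice').setAge(30).caution('The kettle is boiling').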
I have a singleton class
@Singleton
class CustomerBundleSingleton {
    def grailsApplication = Holders.getGrailsApplication()
    String projName

    private CustomerBundleSingleton() {
        projName = // line 10: how to get the sub-project name here ???
    }
}
application.properties // in the main (running) project
-----------------------
app.name = MyNewProject

application.properties // located in the sub-project
-----------------------
app.name = MySubProject
I tried grailsApplication.metadata['app.name'] at line 10, but it returns "MyNewProject", whereas I want the app name of the sub-project where CustomerBundleSingleton is located. Is there something like grailsApplication.current.metadata['app.name'] that would give me back MySubProject instead of MyNewProject?
I have 3 suggestions, depending on your requirements and your 'bundling'.
1) You don't have a bundle marker/descriptor
Assuming that you know the sub-project (Grails plugin) name, your life gets easier, since you don't have to loop through all plugins.
You can probably use something along these lines:
// Plugin name is 'hibernate' in this example
import org.codehaus.groovy.grails.plugins.PluginManagerHolder

def hibernateVersion = PluginManagerHolder.pluginManager.getGrailsPlugin('hibernate').version

// Loop through all plugins:
// PluginManagerHolder.pluginManager.getAllPlugins()
2) Using custom plugin properties to look up plugins of interest
Another strategy, if you must look up the bundle dynamically:
Create a custom marker property in each of your plugin descriptors:
def specialProperty = "whatever"
Then, inside your CustomerBundleSingleton:
PluginManagerHolder.pluginManager.getAllPlugins().each {
    if (it.properties.specialProperty) {
        def subProjectName = it.name
        def subProjectVersion = it.version
    }
}
3) Custom bundle info resolution
You may also want to consider storing some metadata via META-INF/MANIFEST.MF or a similar mechanism.
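A hedged sketch of reading such metadata (the attribute name is an assumption, and a real implementation would need to locate the specific bundle's manifest rather than the first one on the classpath):

import java.util.jar.Manifest

// Read an attribute from the first MANIFEST.MF found on the classpath
def url = this.class.classLoader.getResource("META-INF/MANIFEST.MF")
def manifest = new Manifest(url.openStream())
def subProjectName = manifest.mainAttributes.getValue("Implementation-Title")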
Hope it helps...
I have a configuration parameter in my BuildConfig.groovy that requires me to list a set of directories for each plugin I have. I'd like to create a simple method that generates all of those directories for me. I know Groovy config files are Groovy code, but I can't seem to find any information on the preferred way to create helper methods (like the built-in mavenRepo, grailsPlugins(), grailsHome(), etc.). If I want to create my own helper method, where would I put it so I can call it like so:
someProperty = myHelperMethod()
A bit of an update: I wrote this method directly in my BuildConfig.groovy file and it works! But I'd like to move it out and organize it better. How do I write such methods and expose them to config files like BuildConfig.groovy?
def pluginDirs() {
    List dirs = []
    String trash = "[:]/"
    grails.plugin.location.each { name, plugin ->
        if (plugin.startsWith(trash)) {
            plugin = plugin.substring(trash.length())
        }
        dirs << "${plugin}/src/groovy" <<
                "${plugin}/grails-app/controllers" <<
                "${plugin}/grails-app/domain" <<
                "${plugin}/grails-app/services" <<
                "${plugin}/grails-app/taglib" <<
                "${plugin}/grails-app/utils"
    }
    return dirs
}
Extra credit: some of these plugins use ${basedir} to define their path, but when I run one of my scripts ${basedir} isn't defined, so it puts [:] trash in my URLs. That's why that ugly code is in there.