I recently switched my logback configuration file from logback.xml to logback.groovy. Using a DSL with Groovy is more versatile than XML for this sort of thing.
I need to analyse this file programmatically, as I analysed the previous XML file (there are innumerable XML parsing tools). I realise this will be imperfect: a DSL config file sits on top of the object it configures and must be executed, so its results are inevitably dynamic, whereas an XML file is static.
If you want to include one Groovy file in another file there are solutions. This one worked for me.
But I'm struggling to find what I need from the results.
If I put a function like this in the DSL file ...
def greet() {
    println "hello world"
}
... not only can I execute it (config.greet() as below), but I can also see it listed when I go
GroovyShell shell = new GroovyShell()
def config = shell.parse( logfileConfigPath.toFile() )
println "config.class.properties ${config.class.properties}"
But if I put a line like this in the DSL file...
def MY_CONSTANT = "XXX"
... I have no idea how to find it and get its value (it is absent from the confusing and copious output from config.class.properties).
PS: printing out config.properties just gives this:
[class:class logback, binding:groovy.lang.Binding@564fa2b]
... and yes, I did look at config.binding.properties: there was nothing.
further thought
My question is, more broadly, about what tools (if any) are available for analysing Groovy DSL configuration files. Given that such a file is pretty meaningless without the underlying object it configures (an object implementing org.gradle.api.Project in the case of Gradle; I don't know what class it is in the case of logback), you would think there would need to be instrumentation to hitch up such an object and then observe the effects of the config file in a controlled, observable way. If Groovy DSL config files are to be as versatile as their XML counterparts, surely you need something along those lines? NB I have a suspicion that org.gradle.tooling.model.GradleProject or org.gradle.tooling.model.ProjectModel might serve that purpose. Unfortunately, at the current time I am unable to get GradleConnector working, as detailed here.
I presume there is nothing of this kind for logback, and at the moment I have no knowledge of its DSL or configurable object, or the latter's class or interface...
The use of def creates a local variable in the execution of the script that is not available in the binding of the script; see this. Even dropping def will not expose MY_CONSTANT in the binding because parsing the script via GroovyShell.parse() does not interpret/execute the code.
To expose MY_CONSTANT in config's binding, change def MY_CONSTANT = "XXX" to MY_CONSTANT = "XXX" and execute the config script via config.run().
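A minimal sketch, using the same logfileConfigPath as in the question:

GroovyShell shell = new GroovyShell()
def config = shell.parse(logfileConfigPath.toFile())

// parse() only compiles the script; run() executes it,
// so the assignment MY_CONSTANT = "XXX" actually takes effect
config.run()

// without 'def', the assignment lands in the script's binding
println config.binding.getVariable('MY_CONSTANT')   // prints XXX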
Working on my 6th or 7th Jenkins script now, I've already noticed that they share a bit of code (essentially just the same Groovy subroutines over and over again). I'd rather not continue like that and would like to learn some best practices.
It seems that "Shared Libraries" are the thing to do. (Or is there a better way when you just want to share Groovy code, not script steps etc.?)
Those scripts are part of a larger repo (that contains the source of the entire project, including the other scripts), stored in a subfolder Jenkins/Library with this structure:
Jenkins/Library
+- vars
| common_code.groovy
There is only a vars folder, no src. The documentation said
For Shared Libraries which only define Global Variables (vars/), or a Jenkinsfile which only needs a Global Variable, the annotation pattern @Library('my-shared-library') _ may be useful for keeping code concise. In essence, instead of annotating an unnecessary import statement, the symbol _ is annotated.
so I concluded that I wouldn't need a src folder and can do with vars alone.
The library is made available via "Configure Jenkins" > "Global Pipeline Libraries" with SourcePath set to "/Jenkins/Library/" and is brought in with the statement @Library('{name}') _ as the first line of the script.
However, when attempting to use the library, I get the error shown in the subject.
What's the problem? (I already searched around and found this instance of the problem, but that doesn't seem to fit for my issue - unless I misunderstood something.)
To reference the library by name, you should set the same name in your Jenkins settings:
Name.
An identifier you pick for this library, to be used in the @Library annotation. An environment variable library.THIS_NAME.version will also be set to the version loaded for a build (whether that comes from the Default version here, or from an annotation after the @ separator).
The '{name}' parameter inside @Library() means you should add a library with exactly that name; it is not interpolated like "${name}", and no such built-in variable exists anyway.
If you wish to give your library the same name as your Jenkins pipeline, you could use the env.JOB_NAME variable, or check all environment and pre-defined variables:
println env.getEnvironment()
Or check job parameters only:
println params
Now step-by-step instructions:
Create your library, for example from Git SCM.
Put your library code into the project, e.g. <project_root_folder>/vars/common_code.groovy. You don't need the additional path Jenkins/Library. Also, you have named your file in 'snake case' style, which is not usual for Groovy:
The vars directory hosts scripts that define global variables accessible from Pipeline. The basename of each *.groovy file should be a Groovy (~ Java) identifier, conventionally camelCased.
So in 'camel case' your file should be named commonCode.groovy.
Write your library code:
// vars/commonCode.groovy

// Define your method
def call() {
    // do some stuff
    return 'Some message'
}
Write your pipeline. Example of scripted pipeline:
#!/usr/bin/env groovy
// yourPipeline.groovy file in your project
@Library('jenkins-shared-library') _
// Get the message from the method in your library
def messageText = commonCode() as String
println messageText
If you wish to define some global variables this answer also may help you.
PS: Using the 'vars' folder allows you to load everything from it at once. If you wish to load code dynamically, use an import from the 'src' folder.
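For illustration, a class under src is imported explicitly (the package and class names here are assumptions, not from the question):

// src/org/example/CommonCode.groovy in the shared library
package org.example

class CommonCode implements Serializable {
    String greet(String who) { return "Hello, ${who}" }
}

// and in the pipeline script:
@Library('jenkins-shared-library') import org.example.CommonCode

println new CommonCode().greet('world')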
How do you automatically parse ansible warnings and errors in your jenkins pipeline jobs?
I greatly enjoy the power of leveraging Ansible in Jenkins when it works. Upon a failure, the hunt to locate the actual error can be challenging.
I use WarningsNG, which supports custom parsers (and allows their programmatic generation).
Do you know of any plugins or addons that already transform these logs into the kind of charts WarningsNG produces?
I figured I'd ask before I go off into deep regex land and make my own.
One good way to achieve this seems to be the following:
select an existing structured-output Ansible callback plugin (json, junit and yaml are all viable). I selected junit, as I can play with the format to get a really nice view into the playbook with errors reported in a very obvious way.
fork that GPL file (yes, so be careful with that license) to augment it with the following:
store output as a file
implement the missing callback methods (the three plugins mentioned above do not implement the v2...item callbacks)
forward events to the default or debug callback to ensure operators see something when they execute the plan
add a secrets cleaner - if you use the Jenkins credentials-binding-plugin it will hide secrets from the console, but it will not hide secrets within stored files. You'll need to handle that in your playbook or via some Groovy code (if Groovy, try{...} finally { clean } seems a good pattern; see the sketch after this list)
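A minimal sketch of that try/finally pattern (the file name ansible-junit.xml and the credential id are assumptions, not from this setup):

// scrub a bound secret from the stored callback output, even when the play fails
withCredentials([string(credentialsId: 'deploy-token', variable: 'SECRET')]) {
    try {
        sh 'ansible-playbook site.yml'   // the callback plugin stores its output as a file
    } finally {
        // single quotes so the shell, not Groovy, expands $SECRET
        sh 'sed -i "s/$SECRET/****/g" ansible-junit.xml'
    }
}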
Snippet - forwarding to the default callback
from ansible.plugins.callback.default import CallbackModule as CallbackModule_default
...

class CallbackModule(CallbackBase):

    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'stdout'
    CALLBACK_NAME = 'json'

    def __init__(self, display=None):
        super(CallbackModule, self).__init__(display)
        self.default_callback = CallbackModule_default()

    ...

    def v2_on_file_diff(self, result):
        self.default_callback.v2_on_file_diff(result)
        # ... do whatever you'd want to ensure the content appears in the json file
All the tutorials that I have come across regarding writing a declarative pipeline suggest to include the stages and steps in the Jenkinsfile.
But I have noticed one of my seniors writing it the opposite way. He uses the Jenkinsfile just for defining all the properties, i.e. his Jenkinsfile is just a properties file, nothing more, nothing less.
For defining the pipeline he makes use of the shared-library concept: he writes his pipeline code in a file in the vars folder. I am not able to guess the wisdom behind this approach.
Nowhere on the internet did I come across anything similar.
Any guidance in this regard is highly appreciated. I am a beginner in the Jenkins world.
As illustrated in Extending with Shared Libraries, that approach (which I am using as well) allows you to:
keep a Jenkinsfile content to a minimum
enforce a standard way of doing a particular job (as coded in the shared library)
That shared library becomes a template of a process for which you provide only values in your Jenkinsfile before delegating the actual execution to the pre-defined library.
The OP Asif Kamran Malick notes that the documentation does include:
There is also a “builder pattern” trick using Groovy’s Closure.DELEGATE_FIRST, which permits Jenkinsfile to look slightly more like a configuration file than a program, but this is more complex and error-prone and is not recommended.
He then asks:
Why did the blogger prefer that way when it's actually discouraged in the official doc?
I checked, and we are also using Closure.DELEGATE_FIRST.
The reason is in the part "permits Jenkinsfile to look slightly more like a configuration file than a program".
This avoids us having to define a JSON block, and keeps the parameters as a series of key=value lines, which is easier to read.
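As a minimal, runnable plain-Groovy sketch of that builder trick (the names capture, config1 and config2 are illustrative, not from our library):

// property assignments inside the closure are captured into a map
def capture(Closure block) {
    def config = [:]
    block.resolveStrategy = Closure.DELEGATE_FIRST
    block.delegate = config      // unresolved assignments land in the map
    block()
    return config
}

def settings = capture {
    config1 = 'value1'
    config2 = 'value2'
}
assert settings.config1 == 'value1'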
A call to a shared library is then:
#!/usr/bin/env groovy
#Library("MyLibraries") _
MyLibrary {
    config1 = 'value1'
    config2 = 'value2'
    ...
}
{
    anotherConfigA = 'valueA'
    anotherConfigB = 'valueB'
    ...
    astep(
        ...
    )
}
Then your Jenkins pipeline template in MyLibraries/vars/MyLibrary.groovy can use those closure blocks:
def call(Closure configBlock, Closure body) {
    def config = [:]
    configBlock.resolveStrategy = Closure.DELEGATE_FIRST
    configBlock.delegate = config
    configBlock()

    astep(
        ...
    ) {
        if (body) { body() }
    }
}
I have a script which saves some files at a given location. It works fine, but when I send this code to someone else, he has to change the paths in the code. That is not comfortable for someone who does not know what is in the code, and it forces me to explain every time where and how the code should be changed.
I want to read this path into a variable taken from a configuration file. Then it will be easier for everyone to change just this config file and nothing in my code. But I have never done this before and could not find any information on how to do it on the internet.
PS: I do not have any code yet and am asking about a general solution, but it is really difficult to find anything good about DXL on the internet, especially since I'm new to it. Maybe some of you already do this or have an idea how it could be done?
DXL has a perm to read the complete content of a file into a variable: string readFile(string) (or Buffer readFile(string)).
You can split the output by \n and then use a regular expression to find all lines that match the pattern
^\s*([^;#].*)\s*=\s*(.*)\s*$
(i.e. key = value - where comment lines start with ; or #)
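To illustrate the parsing logic, here is a sketch in Groovy (the language used elsewhere on this page; in DXL, the readFile perm and the regular expression above play the same roles, and the file name paths.cfg and key outputDir are assumptions):

// collect key = value pairs, skipping ; and # comment lines
def config = [:]
new File('paths.cfg').readLines().each { line ->
    def m = line =~ /^\s*([^;#].*?)\s*=\s*(.*?)\s*$/
    if (m) { config[m[0][1]] = m[0][2] }
}
println config['outputDir']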
But in DOORS I prefer using DOORS modules as configuration modules. Object Heading can be the key, Object Text can be the value.
Hardcode the full name of the configuration module into your DXL file and the user can modify the behaviour of the application.
The advantage over a file is that you need not make assumptions on where the config file is to be stored on the file system.
It really depends on your situation. You are going to need to be a little more specific about what you mean by "they need to change the paths in the code". What are these paths to? Are they DOORS module paths, are they paths to local/network files, or are the something else entirely?
Like user3329561 said, you COULD use a DOORS module as a configuration file. I wouldn't recommend it though, simply because that is not what DOORS modules were designed for. DOORS is fully capable of reading system files in one line at a time as well as all at once, but I can't recommend that option either until I know what types of paths you want to load and why.
I suspect that there is a better solution for your problem that will present itself once more information is provided.
I had the same problem: I needed to specify the path of the configuration file used in my DXL script.
I solved this issue by passing the directory path as a parameter to DOORS.exe as follows:
"...\DOORS\9.3\bin\doors.exe" -dxl "string myVar = \"Hello World\""
Then, in my DXL script, the variable myVar is a global variable.
In general, I have my DSL as a plugin and I want to create a new app that uses my DSL,
so I tried to write this code:
JsonParser p = new JsonParser();
IParseResult r = p.parse(new StringReader("{}"));
// once that works, it will be the file data instead of {}
But when I do the parse, the nodeModelBuilder is null and the following line throws an exception:
return doParse(ruleName, in, nodeModelBuilder.get(), 0);
I'm not sure how to initialise nodeModelBuilder.
I'm sure I'm missing some steps, but I'm not quite familiar with the Xtext process.
Thanks!
You already read the following answer on the Eclipse Forum. You need to create an IParser instance by injecting it. All its dependencies then get injected as well. The necessary bindings are described in your JsonRuntimeModule. Xtext uses Guice, and these modules glue everything together. This pattern is called Dependency Injection.
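A minimal sketch (in Groovy for brevity; JsonStandaloneSetup is the setup class Xtext generates for a DSL named Json, as discussed below):

// let Guice build the parser with all its dependencies,
// including the node model builder that was null before
def injector = new JsonStandaloneSetup().createInjectorAndDoEMFRegistration()
def parser = injector.getInstance(org.eclipse.xtext.parser.IParser)
def result = parser.parse(new StringReader('{}'))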
... I want to create a new app that uses my DSL
So you want to use your Json DSL in standalone mode.
My suggestion:
Create a minimal Eclipse IApplication with a CLI that reads and parses an input file. The advantage of an Eclipse IApplication is that you can easily deploy a headless version of your DSL runtime. [1]
Have a look at your JsonInjectorProvider and the ParseHelper [2] from Xtext's JUnit support for examples of how to use your DSL and the Xtext runtime in standalone mode.
[1] http://www.eclipsezone.com/eclipse/forums/t99762.html
[2] org.eclipse.xtext.junit.util.ParseHelper
You are not supposed to call parser directly. See:
http://wiki.eclipse.org/Xtext/FAQ#How_do_I_load_my_model_in_a_standalone_Java_application.C2.A0.3F
The code should look like:
Injector injector = new MyDslStandaloneSetup().createInjectorAndDoEMFRegistration();
XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);
resourceSet.addLoadOption(XtextResource.OPTION_RESOLVE_ALL, Boolean.TRUE);
Resource resource = resourceSet.getResource(URI.createFileURI("/../../some.json"), true); // org.eclipse.emf.common.util.URI, not java.net.URI
Model modelRootElement = (Model) resource.getContents().get(0);
Replace MyDsl with 'JsonParser' or 'Json' or whatever your DSL's name is. Look for the class JsonStandaloneSetup or JsonParserStandaloneSetup in your DSL source code. This class is generated when you create the Xtext project (or when you run the workflow for the first time, I am not sure now). Replace Model with whatever your root element type is. It must be an EObject subclass.
The parsing/validation/AST building is done by the resource.getContents() call. Not very intuitive, I know. It is because you have to initialise a context, all sorts of contexts in fact: the Guice context, the EMF context, and perhaps others, all encapsulated in the StandaloneSetup (and RuntimeModule). The context is similar to a Spring application context.
You need to use StandaloneSetup to run in standalone mode.
See this tutorial for help.