Salt Stack: Requiring packages to be installed before a formula is executed

This is more of a clarification request.
I have two formulas that rely on a bunch of packages being installed by the init.sls formula before they're run.
At the moment I have something like the below. I was wondering if anyone could confirm this is the correct approach, or offer a better one.
init.sls
install_packages:
  pkg.installed:
    - pkgs:
      - foo
      - bar
    - require_in:
      - sls: brrap
      - sls: blah
So would doing this ensure that the above packages are installed before brrap.sls and blah.sls are executed?
Thanks

Yes, using the require_in requisite in your example will ensure the packages are installed before brrap.sls and blah.sls are executed.
All of the _in requisites work in the same way: they result in a normal requisite in the targeted state. The require_in statement is particularly useful when assigning a require in separate sls files.
As long as the brrap.sls and blah.sls states do not need to be aware of the additional components (the foo and bar packages) that require them when they are set up, your configuration is fine. If brrap.sls and blah.sls always require the foo and bar packages to be installed, it might be more straightforward to declare a require requisite from within those states instead.
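As a sketch, a hypothetical brrap.sls could include init.sls and require the package state directly (the configure_brrap ID and file paths are placeholders, not from the original question):
include:
  - init   # the sls that declares install_packages

configure_brrap:
  file.managed:
    - name: /etc/brrap.conf
    - source: salt://brrap/files/brrap.conf
    - require:
      - pkg: install_packages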
Take for instance the following state http.sls:
httpd:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
On some of your minions you might want to use the http.sls, and on others you might want to use the http.sls and the php.sls states:
include:
  - http

php:
  pkg.installed:
    - name: php
    - require_in:
      - service: httpd
Now the httpd server will only start if php is verified to be installed.
Take a look at the requisites and other global state arguments documentation for the full example.

Related

With Jenkins Job Builder (JJB) what's the preferred way to inject values into a static set of job configuration files?

I have a working set of JJB YAML files successfully creating jobs and folders.
I now want to make certain values I use inside those YAML files configurable, i.e. when running jenkins-jobs test|update -r jobfolder I want to set values for folder prefixes (so as not to damage existing production jobs), branch names, nodes, etc.
I don't want to use JJB's defaults approach for this since I'm already using it for configuration in a different place, and it results in conflicts when used in projects and jobs together.
The ideal way of doing this that I can think of would be to call JJB like this:
jenkins-jobs test|update --define "folder-prefix=experimental/,node=test-node" -r jobfolder
Giving me variables I can use in the actual job definition files.
Since this option seemingly doesn't exist, I'm currently trying to provide files which contain those variables and somehow 'inject' them in my project.
Those are the approaches I can think of:
1 - having different configuration folders with YAML files inside, which I would use like this:
jenkins-jobs test -r experimental-config:jobfolder
jenkins-jobs test -r production-config:jobfolder
with experimental-config and production-config being folders with additional files containing my configuration I can switch between.
But unfortunately I don't know how I would reference values I've defined in different yaml files. Is that even possible?
2 - having include files like described in the documentation
While that sounds promising I didn't manage to actually make this run. I tried to turn the following 'configuration header' I'm already using:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: "experimental/"
    node: "test-node"
[Rest of the file making use of dynamic-config]
into something making use of the !include statement like this:
!include: dynamic-config.yaml.inc
[Rest of the file making use of stuff defined in dynamic-config.yaml.inc]
giving me a seemingly unrelated parser error:
yaml.parser.ParserError: expected '<document start>', but found '<block sequence start>'
in "/home/me/my/project.yml", line 11, column 1
so I tried this snippet, which looks more like the example by putting it inside an existing element:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    !include: dynamic-config.yaml.inc
giving me a different error but still an error:
yaml.scanner.ScannerError: while scanning a simple key
in "/home/me/my/project.yml", line 7, column 5
could not find expected ':'
in "/home/me/my/project.yml", line 8, column 5
In both cases it makes no difference whether or not the specified include file exists, which makes me doubt you can just 'include' a file like this at all.
What am I doing wrong here? Is there a more obvious / straightforward way to customize a jenkins-jobs run?
Update:
I somehow managed to use the !include tag for individual items now, like this:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: !include: job-configs/active/folder-prefix.inc
    branch-name: !include: job-configs/active/branch-name.inc
    node-name: !include: job-configs/active/node-name.inc
But I wasn't able to put the whole dynamic-config element (with the anchor) into an include file yet.
2nd update:
Looks like I'm trying something similar to the person in this question.
Can someone confirm, that this is currently still a problem? What's the JJB way of handling this?

Nice way to display Terraform Plan output in Jenkins

I am working on a requirement where I need to show terraform plan output in a nicer way so that users can understand what resources it is going to create, etc.
Currently we are printing terraform plan output in the jenkins console in json format, but we need this output in a graphical format, like what the Blue Ocean plugin provides in jenkins. I researched various plugins but did not find any.
Is there any Jenkins plugin which shows the terraform plan as some kind of visual representation, such as a graph, or any alternative where we can show the terraform plan output in a way users can easily understand? We are looking for a good visual representation.
Here is a link for output options of the terraform plan command. I don't think there are any visual representations regarding the plan. We are using json format and find it very readable.
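If you want something machine-readable to feed into a renderer, the plan can also be saved and converted with terraform show; these are standard Terraform CLI commands, the file names are just placeholders:
terraform plan -out=tfplan.bin
terraform show -json tfplan.bin > tfplan.json   # machine-readable plan for post-processing
terraform show tfplan.bin                       # plain human-readable rendering of the same plan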
You can install the AnsiColor plugin and enable the 'Color ANSI Console Output' checkbox (under Build Environments) in job configuration.
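In a pipeline job the equivalent is the ansiColor wrapper step that the same plugin provides; a minimal sketch, with the wrapped command as a placeholder:
ansiColor('xterm') {
    sh 'terraform plan -var-file=config/qa.tfvars'
}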
I found a few workarounds that are not jenkins specific but still work nicely in a pipeline. For simple stuff you can pipe through dot and render it using:
terraform graph -var-file=config/$TARGET.tfvars | dot -Tpng > qa-graph.png
It ain't pretty, but it works...
Now: graphviz will take that dot output and render it a little nicer. It even gives you some pretty sexy rendering options - but even though you can put lipstick on a pig, it's still a pig! The problem is the graph, itself... I think programmers have been trying to solve this for like 30 or 40 years. The '(not-so) old-school' solution is here --> stackoverflow://1494492/graphviz-how-to-go-from-dot-to-a-graph
Currently, I have been playing with graph beautifier which takes the output of tf-graph and renders it as javascript. I just push the html to nginx instead of the output from graphviz. Then, I just print the url to the output of jenkins, and open the link in a new tab. I know it's kinda-hacky, but I couldn't find any cleaner way - which brought me here, today...
ANYWAY: depending on how modular your tf-code is, the graph-beautifier creates a nice little spider-web graph [screenshot of graph-beautifier]
Below is a bash snippet from the end of my Jenkinsfile which rendered that screenshot
#------------------------------------------------------------------------------
# * VISUALIZER: data analysis tool to help make deployments & audits faster
# creates a tf-graph file & publish to the web using local nginx server
#> requires nginx & graphviz -- https://graphviz.org
#------------------------------------------------------------------------------
WEBHOST='myserver.example.com'
WEBROOT='/usr/share/nginx/html'
WEBPATH="graphviz/$BRANCH"
WEBDEST="$WEBROOT/$WEBPATH"
WEBPAGE="index.html"
# * create the target dir if it doesn't exist
[ -d $WEBDEST ] || mkdir -vp $WEBDEST
##------------------------------------------------------------------------------
## 21-1124JN - unhappy with just an image I switched to an interactive visualizer
## ? -- https://github.com/pcasteran/terraform-graph-beautifier
##------------------------------------------------------------------------------
terraform graph | /usr/share/go/bin/terraform-graph-beautifier \
    --exclude="module.root.provider" \
    --output-type=cyto-html \
    --embed-modules=true \
    > $WEBDEST/$WEBPAGE
echo -en "\n\n tf-graph --> http://$WEBHOST/$WEBPATH \n\n"
##------------------------------------------------------------------------------
## NOTE: the beautifier is just a POC. it's something I have used before and
##> threw together in a few hours for demo. There are newer projects that
##> are actively maintained & seem to have a lot cleaner ways of working with
##> larger data-sets & graphs. One is called the Rover - Terraform Visualizer
##> and is available here --> https://github.com/im2nguyen/rover check it out
##------------------------------------------------------------------------------
last but not least (as I mention in my code comments) is the Rover Terraform Visualizer - a new one that came out of the brain of one of the hashicorp-kids which is super-promising and has recently been released to open source. It is also written in go, but I haven't gotten it to work yet in my pipeline. Check out his demo.

Automated ansible output parsing in jenkins pipeline

How do you automatically parse ansible warnings and errors in your jenkins pipeline jobs?
I greatly enjoy the power of leveraging ansible in jenkins when it works. Upon a failure, the hunt to locate the actual error can be challenging.
I use WarningsNG, which supports custom parsers (and allows their programmatic generation).
Do you know of any plugins or addons that already transform these logs into the kind of charts WarningsNG produces?
I figured I'd ask as I go off into deep regex land and make my own.
One good way to achieve this seems to be the following:
select an existing structured-output ansible callback plugin (json, junit and yaml are all viable). I selected junit as I can play with the format to get a really nice view into the playbook, with errors reported in a very obvious way (see the ansible.cfg sketch after this list for wiring the plugin in).
fork that GPL file (yes, so be careful with that license) to augment with the following:
store output as file
implement the missing callback methods (the three mentioned above do not implement the v2...item callbacks)
forward events to the default or debug callback to ensure operators see something when they execute the plan
add a secrets cleaner - if you use the jenkins credentials-binding-plugin it will hide secrets from the console, but it will not hide secrets within stored files. You'll need to handle that in your playbook or via some groovy code (if groovy, try{...} finally { clean } seems a good pattern)
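To actually use the fork you point Ansible at it in ansible.cfg; a minimal sketch, where the directory and callback name are placeholders for whatever your fork registers as CALLBACK_NAME:
[defaults]
callback_plugins = ./callback_plugins
stdout_callback  = my_junit_fork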
Snippet - forwarding to the default callback
from ansible.plugins.callback.default import CallbackModule as CallbackModule_default
...
class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'stdout'
    CALLBACK_NAME = 'json'

    def __init__(self, display=None):
        super(CallbackModule, self).__init__(display)
        self.default_callback = CallbackModule_default()

    ...

    def v2_on_file_diff(self, result):
        self.default_callback.v2_on_file_diff(result)
        # ... do whatever you'd want to ensure the content appears in the json file

How does TYPO3 Neos decide which Settings.yaml to use?

I use one Neos installation for multiple domains with different content.
Duplicating the package TYPO3.NeosDemoTypo3Org, removing the node-identifier and doing some replacements got me nearly everything I need.
But only the first Settings.yaml found in Packages/Sites/ seems to be parsed. All changes to the Settings.yaml found in other packages (Test1 and Test2 in the following example) are ignored.
Packages/Sites/TYPO3.NeosDemoTypo3Org/Configuration/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://TYPO3.NeosDemoTypo3Org/Private/Form/'
Packages/Sites/UDF.Test1/Configuration/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test1/Private/Form/'
Packages/Sites/UDF.Test2/Configuration/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test2/Private/Form/'
When I delete the first Settings.yaml (Packages/Sites/UDF.Test2/Configuration/Settings.yaml), the next Settings.yaml in alphabetical order (Packages/Sites/UDF.Test1/Configuration/Settings.yaml) is used for all 3 site packages. When I also delete this file, the Settings.yaml from UDF.Test2 is used, and so on.
It would be awesome if somebody could enlighten me. I am new to Flow and Neos and any help is welcome. RTFM, I know, but as described here I have to believe that it should work the way I did it?
Alternative way?
Is it possible not to set the savePath in the site package configuration, but in the common settings ./Packages/Application/TYPO3.Form/Configuration/Settings.yaml instead?
I see a {#package} placeholder in
### BASE ELEMENTS ###
# NAMING: base class for everything is RENDERABLE
'TYPO3.Form:Base':
  renderingOptions:
    templatePathPattern: 'resource://{#package}/Private/Form/{#type}.html'
but this doesn't work here
TYPO3:
  Form:
    yamlPersistenceManager:
      #savePath: '%FLOW_PATH_DATA%Forms/'
      savePath: 'resource://{#package}/Private/Form/'
As you see I am not really experienced with this stuff, but I am very motivated.
All Settings.yaml are used, but the settings are merged in order of the package loading.
The loading order of packages again is based on their dependencies.
All three packages probably have the same dependencies, so they are loaded one after the other (I would need to check in which order): the third Settings.yaml is loaded, then the second is loaded and overwrites the third, then the first is loaded and again overwrites the second. Every settings path can only be set once; that's why.
In any case, what you are trying to achieve probably won't work. This is one of the things we have to fix (site package dependent configuration).
A possible workaround is either using a common package with the form configuration and just setting the savePath to that package, or using different subcontexts (like Production/Domain1 and Production/Domain2) and setting this value differently per subcontext; you could then select the subcontext by domain (as the sites are triggered by domain anyway).
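A rough sketch of the subcontext variant, assuming the usual Flow conventions (package, context names and path are placeholders): context-specific settings live in files like Configuration/Production/Domain1/Settings.yaml and are picked up when the application runs with FLOW_CONTEXT=Production/Domain1, so each domain can override just the savePath:
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test1/Private/Form/'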

update module on import to python interpreter

In short
How do I force the python interpreter to load the most up-to-date version of my module's code every time I make changes to the module code?
Or at least reload the last modified version by typing
>>> from myModule import *
into the console, without the need to restart the whole python console and set everything up again and again any time I make some changes? This is extremely unpleasant behavior for debugging.
--------- LONGER STORY -----------
I tried to delete the .pyc file and import the module again - but it has no effect. It doesn't even create the .pyc file again - so I expect it completely ignores my "import" command if the module is already loaded.
this also does not help:
>>> mymodule.myfunc() # the old version
>>> del mymodule # unload mymodule from the python console / interpreter
... # now I removed the .pyc
... # now I make some modifications in mymodule.myfunc() code
>>> mymodule.myfunc() # module is unknown now ... OK
>>> import mymodule # try to load the modified version
>>> mymodule.myfunc() # still the old version :((((( How can it remember?
I have also tried Spyder, which has this feature called "User Module Deleter (UMD)"
http://pythonhosted.org/spyder/console.html#reloading-modules-the-user-module-deleter-umd
which I thought should do exactly this, but it seems it doesn't (yes, I checked that it is turned on).
Maybe I'm missing something - can somebody explain to me how it is supposed to be used?
Is this somehow affected by the fact that the imported module is not in the "Working directory" but on the PYTHONPATH?
(Spyder dev here) I think at the moment you are not able to reload a module directly in the console (but we are considering to change this in the future).
The idea about UMD is that it will reload your modules but only if you run a file from the editor that imports them. It doesn't work if you want to reload them directly in the console.
Let's say you developed a module; you are probably then using it in a different script that (most likely) you'll be writing in our editor and sending to run in our console. UMD is a little bit of magic that reloads it for you when that happens.
Maybe useful for others: in Spyder 3.0.0, modules can be reloaded via Tools -> Update modules names list.
It worked for me.
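Outside Spyder, the standard-library way to pick up edits in an already running interpreter is importlib.reload (Python 3; on Python 2 the builtin reload() does the same). A minimal sketch with a placeholder module name - note that names pulled in via "from mymodule import *" are not refreshed by the reload and have to be re-imported afterwards:
>>> import importlib
>>> import mymodule
>>> mymodule.myfunc()            # old version
>>> # ... edit mymodule.py in your editor ...
>>> importlib.reload(mymodule)   # re-executes the module's code in place
>>> mymodule.myfunc()            # new version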
