Puppet 3 | read values from a different YAML file

So I'm using Puppet 3, and I have X.yaml and Y.yaml. X.yaml contains profiles::resolv_conf::nameservers: [ '1.1.1.1', '8.8.8.8', '2.2.2.2' ]. I want to use that [ '1.1.1.1', '8.8.8.8', '2.2.2.2' ] list as the value of the servers: key in Y.yaml:
'dns_test':
  plugin_type: 'dns_query'
  options:
    'servers': ['1.1.1.1', '8.8.8.8', '2.2.2.2']
    'domains': ['google.com']
    'record_type': 'A'
    'timeout': 5
  tags:
    'input_source': 'dns_query'
By doing this I want to make sure that when someone changes the value of profiles::resolv_conf::nameservers:, the value in this telegraf plugin changes too.
I tried multiple solutions, and the one that came closest was:
'dns_test':
  plugin_type: 'dns_query'
  options:
    'servers': "%{hiera('profiles::resolv_conf::nameservers')}"
    'domains': ['google.com']
    'record_type': 'A'
    'timeout': 5
  tags:
    'input_source': 'dns_query'
but the problem is that Puppet added extra quotes to the value, so the final value in the plugin conf was:
"["1.1.1.1", "2.2.2.2", "8.8.8.8"]" instead of ["1.1.1.1", "2.2.2.2", "8.8.8.8"]

TL;DR: You can't.
From the current docs and the Puppet documentation archive, I confirm that no version of the %{hiera} interpolation function or its replacement, %{lookup}, ever supported interpolating values other than strings. That's expressed in the current docs like so:
The lookup and hiera interpolation functions look up a key and return
the resulting value. The result of the lookup must be a string; any
other result causes an error.
(Emphasis added)
What you're looking for would be supported by Hiera 5's %{alias} function, provided that the data are available somewhere else in the same hierarchy (which is also a requirement for %{hiera}). Since you're stuck on Puppet 3, however, you're probably on Hiera 2, and certainly not later than Hiera 3.
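For contrast, here is a minimal sketch of what that would look like in Y.yaml under Hiera 5 (Puppet 4.9 and later; not available on Puppet 3). Unlike %{hiera}, %{alias} preserves the value's data type, provided the interpolation is the entire string:

# Hiera 5 only -- shown for contrast, not usable on Puppet 3
'dns_test':
  plugin_type: 'dns_query'
  options:
    'servers': "%{alias('profiles::resolv_conf::nameservers')}"
    'domains': ['google.com']
    'record_type': 'A'
    'timeout': 5
  tags:
    'input_source': 'dns_query'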
"But wait!" You may say. "I'm getting a successful interpolation, but the data are just munged". Specifically, you wrote:
the problem is that Puppet added extra quotes to the value
Since %{hiera()} interpolates only strings, it is not surprising that you got a string value, given that you got a value at all. I do find it a bit surprising that Puppet did not throw an error, but I'm not prepared to comment further on that without a minimal reproducible example that demonstrates the behavior.

Related

dask - read_json into dataframe ValueError

A minimal example here: I have a JSON file xaa.json whose contents look like this (two rows from the Stack Overflow archive):
[
{"Id": 11, "Body": "<p>Given a specific <code>DateTime</code> value", "Title": "Calculate relative time in C#", "Comments": "There is the .net package https://github.com/NickStrupat/TimeAgo which pretty much does what is being asked."},
{"Id": 7888, "Body": "<p>You need to use an <code>ifstream</code> if you just want to read (use an <code>ofstream</code> to write, or an <code>fstream</code> for both).</p>
<p>To open a file in text mode, do the following:</p>
<pre><code>ifstream in(\\"filename.ext\\", ios_base::in); // the in flag is optional
</code></pre>
<p>To open a file in binary mode, you just need to add the \\"binary\\" flag.</p>
<pre><code>ifstream in2(\\"filename2.ext\\", ios_base::in | ios_base::binary );
</code></pre>
<p>Use the <code>ifstream.read()</code> function to read a block of characters (in binary or text mode). Use the <code>getline()</code> function (it's global) to read an entire line.</p>
", "Title": null, "Comments": "+1 for noting that the global getline() function is to be used instead of the member function."}
]
I want to load such json files into a dask dataframe. I use:
so_posts_df = dd.read_json('./xaa.json', orient='columns').compute()
I get this error:
ValueError: Unexpected character found when decoding object value
After looking into the contents, I figured that the \\" sequences were causing it. So when I removed them (the editor - IntelliJ - said it was clean and nice-looking JSON) and ran the same read_json, it was able to read into a df and display the rows nicely.
So, I have 2 questions: (a) what are the valid values for read_json's "errors" argument? (b) How can I properly preprocess the JSON file before reading it into a dask dataframe? The presence of double quotes and the double-escaping seems to be causing an issue.
[This may not be a dask issue at all...]
This also fails with pandas.read_json. I recommend first trying to get things to work well with Pandas, and then try the same workload with dask dataframe. You will likely get much better support when asking Pandas questions.
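A minimal sketch of that advice, assuming the file parses once the double-escaping is fixed. The standard json module pinpoints the first offending escape with an exact line and column, so it makes a good first check before involving pandas or dask:

import json
import pandas as pd
import dask.dataframe as dd

# Step 1: validate the raw file; json.JSONDecodeError reports the exact
# position of the first bad escape sequence.
with open('./xaa.json') as f:
    json.load(f)

# Step 2: reproduce with pandas alone -- same parsing machinery,
# clearer error messages than the dask wrapper.
pdf = pd.read_json('./xaa.json', orient='columns')

# Step 3: only then move the same call to dask.
so_posts_df = dd.read_json('./xaa.json', orient='columns').compute()
print(so_posts_df.head())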

Sending IFS File to Outq Prints Line of "#" Symbols

I am attempting to send a file from IFS to an outq on our AS/400 system. Whenever I do, I get exactly what I send, as well as a line of "#" symbols of varying lengths appended to the end.
Here's the command I'm using:
qsh cmd('cat -c /path/test.txt | Rfile -wbQ -c "ovrprtf file(qprint) outq(*LIBL/ABCD) devtype(*USERASCII) rplunprt(*no) splfname(test) hold(*no)" qprint')
The contents of test.txt are just Hello World!
The output I get when I send the command is
Hello World!####################################################################
I have not found any posts online about a similar problem, and have tried changing values and looking for additional switches to get it to work. Nothing I'm doing seems to fix the issue.
Is there a command or switch that I am missing, or is something I have in there already causing this?
EDIT:
I found this documentation which is the first time I've seen this issue mentioned, but it's not very helpful:
“Messages for a Take Action command might consist of a long string of "at" symbols (#) in a pop-up message. (The Reflex automation Take Action command, which is configured in situations, does not have this problem.) A resolution for this problem is under construction. This problem might be resolved by the time of the product release. If you see this problem, contact IBM Software Support.”
The only differences are: 1) this is not a pop-up message, it's printed. 2) I don't believe we use Tivoli Monitoring, although I could be wrong.
Assuming we do use Tivoli Monitoring, what would the solution be? There's no additional documentation past that, and I am not a system administrator, so I can't really make the call to IBM Software Support myself. And assuming we DON'T use it, what else could cause this issue?
I get different results, yet similar. I created a test.txt with Windows Explorer, put in Hello, world!, saved it and tried the script. I got gibberish for the 'Hello, world!' and then the line of # symbols.
My system is 7.3 TR5, CCSID 37 (US English) and my IFS file is CCSID 1252 (Windows English). Results did not change if I used a stream file of CCSID 819 (US ASCII).
I didn't have any luck modifying Rfile switches.
I found that removing devtype(*userascii) produced printed output in plain English without the # symbols. Do you really need *USERASCII? I would think that would be more for a pre-formatted 'print-ready' file like Postscript or the like.
EDIT: some more things to try
I don't understand why *USERASCII is adding those # symbols; it looks like a translation issue.
I tried this and still got the extra ###... You might have to play with the TOCCSID() parameter. Although a failure, it did give me an idea: what if those # symbols are EBCDIC spaces being sent as-is to the *USERASCII print stream? All we'd need is a way to send only the number of bytes in the stream file, without any padding.
CRTPF FILE(QTEMP/PRTSTMF) RCDLEN(132)
CPY OBJ('/path/test.txt') TOOBJ('/qsys.lib/qtemp.lib/prtstmf.file/prtstmf.mbr') replace(*yes)
ovrprtf file(qprint) outq(*LIBL/prt3812) devtype(*USERASCII) rplunprt(*no) splfname(test) hold(*no)
cpyf prtstmf qprint
The data in QTEMP/PRTSTMF is in ASCII; DSPPFM shows that much. It also shows a bunch of spaces: after all, it is a fixed length file. My next step was to write an RPG program to read the stream file and print it, but Scott Klement already did that: http://www.scottklement.com/PrtStmf.zip
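As a quick byte-level illustration of that hypothesis (Python used here purely for demonstration): an EBCDIC space is byte 0x40, which is a printable character to an ASCII-oriented device, so fixed-record padding would show up as visible glyphs. The exact glyph depends on the device's code page, which may be why '#' appears rather than '@':

pad = b"\x40" * 8               # eight EBCDIC (CCSID 37) spaces
print(pad.decode("cp037"))      # decoded as EBCDIC: eight blanks
print(pad.decode("ascii"))      # decoded as ASCII:  '@@@@@@@@'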
This works on my system:
ovrprtf file(qsysprt) outq(*LIBL/abcd) devtype(*USERASCII) rplunprt(*no) splfname(test) hold(*no)
prtstmf stmf('/path/test.txt') outq(abcd)

Redis: Atomic get and conditional set

I'd like to perform an atomic GET in Redis, and if the value returned is equal to some expected value, I'd like to do a SET, but I want to chain all of this together as one atomic operation. (I'm trying to set a flag that indicates whether any process is writing data to disk, as only one process may be permitted to do so.)
Is it possible to accomplish this with Redis?
I have seen documentation on MULTI operations, but I haven't seen conditional operations in MULTI operations. Any suggestions others can offer on this would be greatly appreciated!
You can do both the GET and SET operations on the Redis server itself using Lua scripts. They're atomic and allow you to add logic too.
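A minimal compare-and-set sketch of that idea, using redis-py for illustration (the key and value names are made up). The script runs atomically on the server, so no other client can interleave between the GET and the SET:

import redis

r = redis.Redis()

# Set KEYS[1] to ARGV[2] only if its current value equals ARGV[1].
CAS_SCRIPT = """
if redis.call('GET', KEYS[1]) == ARGV[1] then
    redis.call('SET', KEYS[1], ARGV[2])
    return 1
end
return 0
"""
cas = r.register_script(CAS_SCRIPT)

r.set('disk-writer', 'idle')
# Claim the flag only if no other process holds it.
acquired = cas(keys=['disk-writer'], args=['idle', 'busy'])
print(bool(acquired))   # True on the first call; False once it is 'busy'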
I ended up using redlock-py, an implementation of the redlock algorithm that the Redis docs recommend for creating write locks: https://redis.io/topics/distlock. The linked article is fantastic reading for anyone looking to create similar write locks in Redis.
redis-if - a Lua script for "conditional transactions", more convenient than WATCH + MULTI.
You can pass any combination of conditions and follow-up commands as a JSON object:
const Redis = require('ioredis')
const redis = new Redis()

redis.defineCommand('transaction', { lua: require('redis-if').script, numberOfKeys: 0 })

await redis.set('custom-state', 'initialized')
await redis.set('custom-counter', 0)

// this call will change state and do another unrelated operation (increment) atomically
let success = await redis.transaction(JSON.stringify({
  if: [
    // apply changes only if this process has acquired a lock
    [ 'initialized', '==', [ 'sget', 'custom-state' ] ]
  ],
  exec: [
    [ 'set', 'custom-state', 'finished' ],
    [ 'incr', 'custom-counter' ]
  ]
}))
With this script we removed all custom scripting from our projects.
I came across this post looking for a similar type of function, but I didn't see any options that appealed to me. I opted instead to write a small module in Rust that provides this exact type of operation:
https://github.com/KennethWilke/redis-setif
With this module you would do this via:
SETIF <key> <expected> <new>
HSETIF <key> <field> <expected> <new>
You can do this with the SET command and these 2 arguments, which according to the docs:
GET - return the old string stored at key, or nil if key did not
exist.
NX - Only set the key if it does not already exist.
Since Redis doesn't execute any command while another command is running, you get the 2 operations in an atomic manner.
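A hedged sketch of that combination with redis-py, with two caveats: combining NX with GET requires Redis 7.0 or newer, and NX conditions on the key being absent, not on it holding a particular expected value:

import redis

r = redis.Redis()

# Atomically: set the flag only if the key does not exist, and return
# whatever value (or None) was stored there before the call.
old = r.set('disk-writer', 'busy', nx=True, get=True)
if old is None:
    print('flag acquired')            # key was absent; we now hold it
else:
    print('already held by:', old)    # someone else holds the flag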

Ruby - extract info from JSON with variable loop iteration

I have a JSON response which is stored as a string in "BQresponse"
{"kind":"bigquery#queryResponse", "schema":{"fields":[{"name":"Revenue", "type":"INTEGER", "mode":"NULLABLE"}, {"name":"Country", "type":"STRING", "mode":"NULLABLE"}]}, "jobReference":{"projectId":"curious-idea-532", "jobId":"job_S5rTcY2vwEu-amtrxb8NRPWiynU"}, "totalRows":"3", "rows":[{"f":[{"v":"100"}, {"v":"Ireland"}]}, {"f":[{"v":"200"}, {"v":"Netherlands"}]}, {"f":[{"v":"50"}, {"v":"Singapore"}]}], "totalBytesProcessed":"0", "jobComplete":true, "cacheHit":true}
I am trying to convert this into a two line response (for later export to CSV), looking exactly like this:
Country||Sum of Revenue|,Ireland,Netherlands,Singapore
Revenue,100,200,50
So far, I've extracted the first parts, like so:
puts BQresponse[/#{D1_mark1}(.*?)#{D1_mark2}/m, 1]+"||"+BQresponse[/#{M1_mark1}(.*?)#{M1_mark2}/m, 1]
Next I need to extract "Ireland,Netherlands,Singapore". However I cannot use the same approach as above, since there may be more or fewer values as the string is updated (maybe only 2, or 5 countries).
The string includes a part that says "totalRows":"3" - this 3 is the number of expected countries, and I suppose it could be used in a loop/for-each of some sort. But I'm not sure how best to approach this.
The number values on the second line face the exact same issue (each country has a number). The "Revenue" on the second line is simply a repeat of "Revenue" on the first line, with "Sum_of_" removed.
Appreciate suggestions on what direction to head in.
Also, this is valid JSON, so if I'm completely off track and it would be easier to parse the string as JSON first, that's okay too.
Thanks!
There's an awesome gem for this, json2csv, that I've had to use before.
To try it out, I'd save a sample JSON response down into a file called sample.json, and then in your terminal you can run:
json2csv convert sample.json
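Since you mentioned that parsing the string as JSON is acceptable, here is a minimal Ruby sketch of that route (assuming BQresponse holds the raw string shown above). It adapts automatically to however many rows "totalRows" reports, so no manual loop count is needed:

require 'json'

data = JSON.parse(BQresponse)

# Each row is {"f" => [{"v" => revenue}, {"v" => country}]},
# matching the column order declared in schema.fields.
revenues  = data['rows'].map { |row| row['f'][0]['v'] }
countries = data['rows'].map { |row| row['f'][1]['v'] }

puts (['Country||Sum of Revenue'] + countries).join(',')
puts (['Revenue'] + revenues).join(',')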

URLTrigger plugin. Need examples for TXT-RegEx or XML-XPath

So, I'm trying to use the plugin https://wiki.jenkins-ci.org/display/JENKINS/URLTrigger+Plugin.
I want to trigger my Jenkins job when the text "Last build (#40), 17 hr ago" in the response of the provided URL changes (the build number will be different after each build).
So I made following configurations:
1. Build trigger: Set [URLTrigger] - Poll with a URL.
2. Specified URL to another Jenkins: http://mydomain:8080/job/MasterJobDoNothing/
3. Set Inspect URL content option
4. Set Monitor the contents of a TEXT response
5. Set following regular expression: ^Last build[.]*
6. Set Schedule every minute: * * * * *
7. Trigger the job on another Jenkins
Actual result: My job wasn't triggered.
Then I tried to deal with XML/XPath and specify
8. Set Monitor the contents of an XML response
9. Set XPath: //*[@id="side-panel"] (also tried with one "/")
Actual result: the same.
Tell me please, what am I doing wrong? Please provide examples of RegEx or XPath if possible.
Thanks, Dima
I managed to trigger reliably with the regex setting.
The regex pattern is matched against each line of the input; no need to use ^ or $, since it always matches from line start to line end.
The plugin compares the contents of the matched lines and triggers if they differ. It also compares the count of matched lines and triggers if the count differs.
The plugin uses the matches() method of java.util.regex.Matcher, so the regex pattern should conform to it (it's fairly normal regex).
As for your example,
Last build.*
may work.
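A quick illustration of why the original pattern failed, using Python's re.fullmatch as a stand-in for Matcher.matches() (whole-line matching); the line text is the example from the question:

import re

line = 'Last build (#40), 17 hr ago'

# '.*' matches any characters, so the whole line matches:
print(bool(re.fullmatch(r'Last build.*', line)))      # True

# '[.]*' matches only literal dots, so the full-line match fails:
print(bool(re.fullmatch(r'^Last build[.]*$', line)))  # False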
Refs:
Reference of the regex Pattern: http://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html
Reference of Matcher: http://docs.oracle.com/javase/7/docs/api/java/util/regex/Matcher.html#matches()
The regex trigger source code: github.com/jenkinsci/urltrigger-plugin/blame/master/src/main/java/org/jenkinsci/plugins/urltrigger/content/TEXTContentType.java
I'd recommend using the "RSS for all" link as the trigger URL instead, and /feed/entry[1] as the XPath expression for the XML response content nature.
PS: I was using PathEnq to debug the XPath expression.
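To sketch why /feed/entry[1] works (the feed XML below is a made-up, trimmed-down stand-in for Jenkins' Atom output): the first entry describes the most recent build, so its content changes exactly when a new build finishes:

import xml.etree.ElementTree as ET

feed = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>MasterJobDoNothing #41 (stable)</title></entry>
  <entry><title>MasterJobDoNothing #40 (stable)</title></entry>
</feed>"""

ns = {'atom': 'http://www.w3.org/2005/Atom'}
first = ET.fromstring(feed).find('atom:entry/atom:title', ns)
print(first.text)   # 'MasterJobDoNothing #41 (stable)' -- differs per build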
