I would like to store the message of an alert in InfluxDB using influxDBOut. Is it possible?
Here is my TICK script:
batch
    |query('SELECT mean(value) as value FROM "metrics"."autogen"."__MEASUREMENT__"')
        .period(15m)
        .every(5s)
        .groupBy(*)
        .fill(0)
    |alert()
        .id('[METRICS] - {{ .Name }}')
        .message('{{ .ID }} changed state to {{ .Level }} [{{ .Time }}] => The metric is {{ index .Fields "value" }} in the last 15m.')
        .info(lambda: TRUE)
        .warn(lambda: "value" < __WARN_THRESHOLD__)
        .crit(lambda: "value" < __CRIT_THRESHOLD__)
        .stateChangesOnly()
        .levelField('Severity')
    |influxDBOut()
        .database('alerts')
        .retentionPolicy('autogen')
        .measurement('__MEASUREMENT__')
        .tag('Condition', 'Low')
Thank you in advance.
Unfortunately there currently isn't a way to achieve a result like this. If this functionality is particularly important to you, I'd recommend opening up a feature request on Kapacitor detailing your use case.
Michael definitely knows this far better than I do. Yes, there is no straightforward way at the moment. However, that doesn't mean it isn't doable.
What you're trying to do here is a typical software dev problem:
1. Open a file
2. Read its content
3. Format it
4. Write it somewhere else
You can handle this sort of problem in any scripting language that supports the points above. The only tricky part is probably #4, as not every scripting language has an InfluxDB database driver, but you can still use curl commands to perform the writes.
What you could do is the following (a rough sketch follows the list):
1. Modify your TICK script to output the alert to a file; see the log() method of the alert node.
2. Write a simple script that watches for new entries written by the log() functionality.
3. Parse the file.
4. Format the data so that it can be inserted into a measurement.
5. Set up a scheduler such as Unix's cron to run your script periodically.
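As a rough sketch of steps 2 to 4 in Python: the log path, measurement name, and write URL below are all assumptions, and a real script would also remember how far into the file it has already read.

import json
import requests

LOG_FILE = "/tmp/alerts.log"  # wherever alert().log(...) writes to
INFLUX_WRITE_URL = "http://localhost:8086/write?db=alerts"

def escape(value):
    # Double quotes inside a line-protocol string field must be escaped
    return value.replace('"', '\\"')

points = []
with open(LOG_FILE) as f:
    for raw in f:
        alert = json.loads(raw)  # one JSON alert event per line
        # Build a line-protocol point; the server assigns "now" as the
        # timestamp, since this sketch doesn't convert alert["time"]
        points.append('alert_messages,level={} message="{}"'.format(
            alert["level"], escape(alert["message"])))

if points:
    resp = requests.post(INFLUX_WRITE_URL, data="\n".join(points))
    resp.raise_for_status()

Run it from cron (step 5) and the alert messages become queryable in the alerts database.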
Hope it helps.
I need some sort of scheduling mechanism to make a task happen at x:y (12:00 for example) in Tcl.
The scenario is a router running OpenWrt with Tcl 8.6.10, with limited RAM and storage, where I have a sort of IRC client "bot" (using socket to connect). The "bot" was just a bare-bones client that I modified to suit my needs. Most things work fine, except that I have no easy way to schedule things. I wanted something like eggdrop's "bind time", where the binding looks like "bind time flag "cron-style string" caller".
The "bot" scheme is like:
Main Tcl script:
<info+code to connect to IRC>
<while loop>
<some code in case of IRC disconnection>
<list of files with tcl code aka sub-scripts>
<usage of source based from a list of the filenames>
<code for error handling>
<end of while loop>
The list of files is source filelist.tcl, where filelist.tcl is a set var {filename1.tcl filename2.tcl...}. Each filenameX.tcl has some basic code to respond to the IRC server or IRC input from channels and reply to channels.
I can make some sort of schedule if I gate execution on something like if {[clock format [clock seconds] -format "%H:%M"]=="12:00"} {code to execute} and hopefully wait for a server ping/pong, but that can lead to repeated code inside the if body.
I've been looking around and found a package called cron, but I don't know how to use it correctly because there are not many examples. I also don't know how to use vwait properly, and I don't want vwait to hang the bot waiting for a value to change. I also read about Tcl threads for possible parallel execution.
So I need some code inside a sub-script that looks like this (package cron style):
#beginning of file
#add a task specifying hour and minute
task-at "12:00" procname
proc procname {optional} {
    <some code to be executed at a specific hour+minute>
}
#end of file
I also don't know how to use the after command for this.
How can I accomplish what I want?
Thanks for the replies, and yes, it would help if I studied event loops and coroutines, which probably comes next.
Some time has passed since I posted the question, and I kinda sorted the thing out by creating a sub-script in a folder named scripts with the following structure:
#beginning of the script
#initialize the flag the first time this script is sourced
if {![info exists executed]} {set executed "no"}
#the following clock instruction returns for example: Tuesday 22:14
switch -glob -- [clock format [clock seconds] -format "%A %H:%M"] {
    "*12:00" - "*12:01" {
        #Basic example of sending a message to the irc channel when it's midday
        #$fd is the IRC socket opened by the main script
        if {$executed=="no"} {
            puts $fd "PRIVMSG #CODE :It's midday right now."
            flush $fd
            set executed "yes"
        }
    }
    #...more time comparisons and code
    default {set executed "no"}
}
#end of script
And the script is near the top of the list of scripts to be loaded, so if I wish to send some command downstream at a given time, the command can be executed.
There are two timings ("*12:00" and "*12:01") because the bot reacts, at minimum, only to the IRC server's ping, which happens every 90 seconds, so it may skip a minute.
This is not a proper answer, just a workaround.
I'm using LabVIEW and its VISA capabilities to control a Keithley 2635A source meter. Whenever I try to identify the device, it works just fine, both in reading and writing.
viWRITE(*IDN?) /* VISA subVI to send the command to the machine */
viREAD /* VISA subVI to read output */
However, as soon as I set the voltage (or current), the instrument does so. Then I send the command to perform a measurement, but I'm not able to read that data; I get the error
VISA: (Hex 0xBFFF0015) Timeout expired before operation completed.
After that, I cannot read the *IDN? output anymore either.
The source meter is connected to the PC via a National Instruments GPIB-USB-HS adapter.
EDIT: I forgot to add, this happens in the VISA Interactive Control program as well.
OK, apparently the documentation is not very clear. What the smua.measure.X() command (where X is the needed parameter) does is, of course, write the measurement outcome to a buffer. In order to read that buffer, however, a plain viREAD is not sufficient.
So basically the answer was to simply add a print command: this way I have
viWRITE[print(smua.measure.X())];
viREAD[]
And I don't have the error anymore. Not sure why such a command is needed, but that's that. Thank you all for your time answering me.
As @Tom Blodget mentions in the comments, the machine may not have any response to read after you set the voltage. The *IDN? string is both command and query: you write the command *IDN? and read the result. Some commands have no response to read. Here's a quick test to see whether you should be reading from the instrument. The following code is in Python (using pyvisa; the GPIB address is an example); I made up the GPIB command to set voltage.
import pyvisa

# Open the instrument; the GPIB address is an example, use your own
rm = pyvisa.ResourceManager()
sm = rm.open_resource('GPIB0::26::INSTR')

# Prints out IDN
print(sm.query('*IDN?'))
# Prints out current voltage (change this to your actual command)
print(sm.query('SOUR:VOLT?'))
# Set a new voltage; a write, so there is nothing to read back
sm.write('SOUR:VOLT 1V')
# Read the new voltage
print(sm.query('SOUR:VOLT?'))
Note that question-marked GPIB commands and query(...) are used when you expect a response from the instrument. The instrument won't give a response for a write-only command. query is a combination of write(...) and read(...). If you're using LabVIEW, you may have to do the write and the read separately.
If you need verification that the machine received your instruction and acted on it, most instruments support the following common commands (a short sketch follows the list):
*OPC? queries whether the operation is complete
SYST:ERR? queries whether any error was generated
You can also add a question mark ? to the end of the GPIB command used to set the voltage, turning it into a query that reads the value back.
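For instance, a minimal verification sketch with pyvisa (the address and voltage command are still made up):

import pyvisa

rm = pyvisa.ResourceManager()
sm = rm.open_resource('GPIB0::26::INSTR')  # example address

sm.write('SOUR:VOLT 1V')        # set-type command: nothing to read back
print(sm.query('*OPC?'))        # returns "1" once the operation completes
print(sm.query('SYST:ERR?'))    # e.g. 0,"No error" if all went well
print(sm.query('SOUR:VOLT?'))   # read the value back by appending the ?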
I am writing a small program in Progress that needs to write an error message to the system's standard error. What ways, as simple as possible, can I use to print to standard error?
I am using OpenEdge 11.3.
When on Windows (10.2B+) you can use .NET:
System.Console:Error:WriteLine ("This is an error message") .
together with
prowin32 2> stderr.out
Progress doesn't provide a way to write to stderr; the easiest way I can think of is to OUTPUT THROUGH an external program that takes stdin and echoes it to stderr.
You could look into LOG-MANAGER:WRITE-MESSAGE. It won't log to standard output or standard error, but to a client-specific log. This log should be monitored in any case (specifically if the client is an application server).
From the documentation:
For an interactive or batch client, the WRITE-MESSAGE( ) method writes the log entries to the log file specified by the LOGFILE-NAME attribute or the Client Logging (-clientlog) startup parameter. For WebSpeed agents and AppServer servers, the WRITE-MESSAGE() method writes the log entries to the server log file. For DataServers, the WRITE-MESSAGE() method writes the log entries to the log file specified by the DataServer Logging (-dslog) startup parameter.
LOG-MANAGER:WRITE-MESSAGE("Got here, x=" + STRING(x), "DEBUG1").
Will write this in the log:
[04/12/05#13:19:19.742-0500] P-003616 T-001984 1 4GL DEBUG1 Got here, x=5
There are quite a lot of options regarding the LOG-MANAGER system, what messages to display, where the file is placed, etc.
There is no easy way, but in Unixen you can always do something like this using OUTPUT THROUGH (untested):
output through "cat >&2" no-echo unbuffered.
Alternatively -- and this is tested -- if you just want error messages from a batch-mode program to go to standard out then
output through "tee" ...
...definitely works.
I want to generate the EDoc documentation for one Erlang module; here is my code: http://ideone.com/TFktht
I want to order the exported API functions, but I cannot get it to work.
generate_log_statistics/4 Analyse the log files and draw graph.
prepare_sipp_cmd/3 Shows SIPp online help documentation of what config parameters there are.
prepare_systeminfo_filenames/1 Prepare the log files according which blades you want to monitor.
run_scenario/6 Run the SIPp scenario.
start_collect_system_infos/1 Start collecting the logs.
stop_collect_system_infos/1 Stop collecting the logs.
These exported API functions are not ordered in the EDoc output.
I want:
prepare_sipp_cmd/3
prepare_systeminfo_filenames/1
start_collect_system_infos/1
run_scenario/6
stop_collect_system_infos/1
generate_log_statistics/4
To order functions
Put this in your rebar.config:
{edoc_opts, [{sort_functions, true}]}.
Or you can generate single doc files with something like:
edoc:file("src/foo.erl", [{dir, "doc"}, {sort_functions, true}]).
"doc" being the output directory.
To achieve specific order
Turn off function sorting, as above, but with {sort_functions, false} and order the functions in your source as required.
For debugging purposes it would be great to be able to see just what you are getting back from the map method. Is this possible in Ruby?
To do this in the Mongo shell, you can define your own debug version of the emit() function to print trace information.
function emit(k, v) {
print("emit");
print(" k:" + k + " v:" + tojson(v));
}
Check out Troubleshooting MapReduce in the MongoDB docs for more info.
I know the Mongo documentation recommends defining your own emit function, but I find it easier to use print() directly in my map and reduce functions while I watch Mongo's log.
Just put print() calls in your code, run tail -f /var/log/mongodb/mongodb.log, then run your code. You should see the print() output in the log.
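As an illustration, here is a minimal sketch using Python with pymongo 3.x (map_reduce was removed in pymongo 4.x); the events collection and its status field are made up, and the print() output lands in mongod's log:

from pymongo import MongoClient
from bson.code import Code

db = MongoClient("localhost", 27017)["test"]

mapper = Code("""
function () {
    // Logged to mongod's log, not to the client
    print("map: _id=" + this._id + " status=" + JSON.stringify(this.status));
    emit(this.status, 1);
}
""")

reducer = Code("""
function (key, values) {
    print("reduce: key=" + key + " values=" + JSON.stringify(values));
    return Array.sum(values);
}
""")

# Results go to the "status_counts" collection; watch the log with
# tail -f /var/log/mongodb/mongodb.log while this runs
result = db.events.map_reduce(mapper, reducer, "status_counts")
print(list(result.find()))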
Here are a few of the benefits:
Ability to debug the reduce() function (defining your own emit() doesn't help there)
No need to define the emit() function every time you fire up the mongo console
Write your code in your editor instead of going back and forth between console and IDE
Ability to do code generation and variable interpolation in your native language