Help with Nagios check_postgres.pl custom query - monitoring

I have a Postgres function that returns two columns:
result (int), data (text)
If I run the query from within Postgres it returns the proper values, and if I run it from the Linux command line like this:
/usr/local/nagios/libexec/check_postgres.pl -H $HOSTADDRESS$ -u postgres -db monitordb --action=custom_query --critical=1 --query="SELECT * from ops_get_status();"
it also returns the proper values - at least it seems to, and I don't get any errors.
But when I insert it into commands.cfg and watch it through the Nagios frontend,
it returns (null).
The log file doesn't contain any detailed information for debugging this. So, what can I do to get to the bottom of this issue? Any help greatly appreciated.

I just had this same problem. Removing the semi-colon from the end of the query got it working.
Mail archives reference here:
https://mail.endcrypt.com/pipermail/check_postgres/2011-February/000726.html
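
For reference, the command definition in commands.cfg would then look roughly like this (a sketch; the command_name is illustrative). Nagios treats a semicolon in its object configuration files as the start of a comment, which is presumably why the truncated query returned (null):

define command{
    command_name    check_pg_ops_status
    command_line    /usr/local/nagios/libexec/check_postgres.pl -H $HOSTADDRESS$ -u postgres -db monitordb --action=custom_query --critical=1 --query="SELECT * from ops_get_status()"
    }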

Shot in the dark, but try enclosing the query in single quotes. The * might be somehow getting expanded.
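For example (the single quotes around the query are the only change):

/usr/local/nagios/libexec/check_postgres.pl -H $HOSTADDRESS$ -u postgres -db monitordb --action=custom_query --critical=1 --query='SELECT * from ops_get_status()'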

A bit late, but I also got a similar error with a custom query, but it turns out you need to return a column called 'result' and it must be an integer.
For example:
check_postgres.pl --db=db-name --host=x.x.x.x --dbuser=db-user --action=custom_query --critical=10 --warning=5 --query="SELECT count(id) as result from your-table"
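
For reference, a minimal sketch of a function shaped like the one in the question above - the body is illustrative; what matters to check_postgres.pl is the integer column named result:

CREATE OR REPLACE FUNCTION ops_get_status()
RETURNS TABLE(result integer, data text) AS $$
    SELECT 1, 'everything ok'::text;
$$ LANGUAGE sql;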

Related

How to extend tAssertCatcher to catch more errors than its default in Talend?

I have a job which, when I run it, gives me this:
[ERROR] 11:47:54 org.talend.components.snowflake.runtime.SnowflakeRowStandalone- Query execution has
failed. Please validate your query.
net.snowflake.client.jdbc.SnowflakeSQLException: Execution error in store procedure
SP_GENERAL:
Numeric value '' is not recognized
but when I try to catch this error, I can't.
I tried tAssertCatcher, tLogCatcher, tStatCatcher - and nothing worked.
Could anybody help, please?
OK, finally I got a solution.
tAssertCatcher does not catch errors from stored procedures, so I created a joblet which contains the tAssertCatcher and, in addition, an input with a schema almost identical to tAssertCatcher's:
moment, pid, project, job, language, origin, status, substatus, errorCode, errorMessage.
I then connected the tDBRow component whose errors I want to catch to the joblet with a Reject row, passing errorCode and errorMessage - these become the exception and its description.
Inside the joblet I used tMap to fill variable values into almost all the columns (pid, status, jobName, projectName); the rest I passed hardcoded.
Now, every time there is an error in a stored procedure, I get two detailed records: one that I created manually, and another one that is UNEXPECTED-EXCEPTION.
[Screenshot: the additional part in the joblet]

DLV predicate not being derived

I have a simple DLV program consisting of a few predicates and derivation rules. One of the rules is not being activated and I have no clue why, since apparently all the predicates exist. I have to admit I am no expert in DLV, and a bit rusty since the last time I used it, so please forgive me if this is too obvious :-/
Among others, I have this rule:
knows(ps, chunk(v, ps, pd)) :- value(v),
knows(ps, v),
connected(ps, pd).
And here you can see what I get after executing the code:
./dlv -nofinitecheck model.edb rules.idb
{participant(p1), participant(p2), participant(p3), value(v1),
value(r1), value(v2), value(r2), value(v3), value(r3),
connected(p1,p2), connected(p1,p3), connected(p2,p3), knows(p1,v1),
knows(p1,r1), knows(p2,v2), knows(p2,r2), knows(p3,v3), knows(p3,r3)}
Since I have "value(v1)" and "knows(p1,v1)" and "connected(p1,p2)", I was expecting the output of the program should contain "knows(p1, chunk(v1, p1, p2))".
Can anyone explain me why this is not happening?
Edit: I have removed all the rules and created just this single one:
chunk(v, ps) :- value(v), participant(ps).
But this rule is not being activated either! What's the problem? I have tried the simplest one:
chunk(v) :- value(v).
and no activation. What am I missing?
OK, I just realised my mistake. The problem is that I was using lowercase letters for the variables rather than capital letters... Sorry, as I said, I am rusty!
So, just for the record: rather than chunk(v) :- value(v), it should be something like chunk(V) :- value(V).
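Applying the same fix to the original rule from the question: in DLV, terms that start with a lowercase letter are constants, while variables must start with an uppercase letter, so the original rule silently matched nothing. It should read:

knows(Ps, chunk(V, Ps, Pd)) :- value(V),
    knows(Ps, V),
    connected(Ps, Pd).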

py2neo Error returning data from Cypher query

My simple code is retrieving attributes from nodes in neo4j.
results = graph.cypher.execute("MATCH (m)-[:AB]->(a) "
                               "RETURN m.searchField as origin, a.searchField as destination "
                               "LIMIT {limit}", {"limit": 100})
nodes = []
rels = []
i = 0
for r in results:
    print(r)
    ent1 = {"title": r.origin, "label": "entity"}
but the server returns "NameError("global name 'searchField' is not defined",)". Certainly I missed something, but I'm puzzled that it is the searchField inside the Cypher query that is the object of the error.
This is still with py2neo 2.0.8.
Thanks for any pointer, hj
Later editing:
Thanks for taking the time to look at this question. Two things further puzzle me about this error:
1. The Cypher query is fine, and returns the result I expect in neo4j-shell without problem.
2. The code seems to work fine when I run bottle standalone (run(port=8080) in main), but fails when I run it as WSGI under an Apache server. I am wondering if it is a problem with the running user, or with the context in some part of the code.
Do you have a property called searchField on the node(s)?
If not, the query will fail.
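A quick way to check that from py2neo (a sketch, assuming py2neo 2.x against a Neo4j 2.x server, where has() was the property-existence predicate):

from py2neo import Graph

graph = Graph()  # assumes the default http://localhost:7474/db/data/

# count the nodes that actually carry a searchField property
for record in graph.cypher.execute(
        "MATCH (m) WHERE has(m.searchField) RETURN count(m) AS n"):
    print(record.n)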
BTW, it is easier to use a string for the query like so:
query = '''
MATCH (m)-[:AB]->(a)
RETURN m.searchField as origin, a.searchField as destination
LIMIT {limit}
'''
result = graph.cypher.execute(query, limit=100)
Got it to work! It was unrelated to the code, but I did not know that any update to Python code served through WSGI requires at least an Apache reload:
sudo service apache2 reload
With that I obtain the same (and correct) behavior as with the direct server. The error was the result of an old version of the code... newbie mistake!
Thanks and sorry for the hassle, hj

Neo4j: Java API IndexHits<Node>.size() is 0

I'm trying to use the Java API for Neo4j but I seem to be stuck at IndexHits. If I query the DB with Cypher using
START n=node:types(type="Process") RETURN n;
I get all 2087 nodes of type "Process".
In my application I have the following lines
Index<Node> nodeIndex = db.index().forNodes("types");
IndexHits<Node> hits = nodeIndex.get("type", "Process");
System.out.println("Node index size: " + hits.size());
which leads my console to spit out a value of 0. Here, db is of course an instance of GraphDatabaseService.
I expected an object that included all 2087 nodes. What am I doing wrong?
The .size() question is just the prelude to my iterator
for(Node process : hits) { ... }
but that does not do much when hits.size() == 0. According to http://api.neo4j.org/1.9.2/org/neo4j/graphdb/index/IndexHits.html this should be possible, provided there is something in hits.
Thanks in advance for your help.
I figured it out. Man, I feel so embarrassed...
It so happens that I had set DB_PATH to point at my default data folder, whereas the actual store is in the data folder plus graph.db. When I ran the code with the corrected DB_PATH, I got an error saying that a lock file was in place because the Neo4j server was running. After shutting it down, it worked perfectly.
So, if you happen to see the following error, just stop the server and run the code again:
Caused by: org.neo4j.kernel.StoreLockException: Could not create lock file
at org.neo4j.kernel.StoreLocker.checkLock(StoreLocker.java:74)
at org.neo4j.kernel.StoreLockerLifecycleAdapter.start(StoreLockerLifecycleAdapter.java:40)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:491)
I found on several forums that you cannot run the Neo4j server and use the Java API to query it at the same time.
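For reference, the whole working lookup looks roughly like this as a standalone embedded program (a sketch against the Neo4j 1.9 embedded API; the store path is illustrative, and the server must not be running at the same time):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.graphdb.index.Index;
import org.neo4j.graphdb.index.IndexHits;

public class ProcessLookup {
    public static void main(String[] args) {
        // point at the graph.db directory itself, not its parent data folder
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabase("/var/lib/neo4j/data/graph.db");
        try {
            Index<Node> nodeIndex = db.index().forNodes("types");
            IndexHits<Node> hits = nodeIndex.get("type", "Process");
            try {
                System.out.println("Node index size: " + hits.size());
                for (Node process : hits) {
                    // work with each node here
                }
            } finally {
                hits.close(); // release the underlying index resources
            }
        } finally {
            db.shutdown(); // also releases the store lock file
        }
    }
}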

What is the correct syntax to use when trying to create a Data Source View to a linked server?

I have tried several statements, but this one at least returns data. However, I get the error message: "Deferred prepare could not be prepared. Incorrect syntax near ')'. Incorrect syntax near the keyword 'DECLARE'." The following statement executed when creating the named query:
SELECT [vwStatistics].*
FROM
(
***THIS IS MY QUERY***
DECLARE @SQL1 VARCHAR(500)
SET @SQL1 = 'SELECT *
FROM OPENQUERY(PORTAL, ''SELECT DeviceID, Date, Count
FROM printer_stats.Statistics
GROUP BY DeviceID'')'
EXEC (@SQL1)
***END OF MY QUERY***
)
AS [vwStatistics] (Microsoft.AnalysisServices.Controls)
I am new to linked servers and to SSAS. This is our company's first cube built from a linked server. My query does run in Management Studio and creates an SSRS report, but it is slow.
Any suggestions would be helpful; there is not much info on the web about the syntax for this situation. What I have found mostly suggests changes on the server (e.g. make sure OPENROWSET is enabled, and reinstall the OWC component), and I do not have that capability.
This is what we found to work:
SELECT DeviceID, CAST(statsdt AS CHAR) AS sdt, Count
FROM OPENQUERY(PORTAL, 'select * from (select DeviceID, CAST(Date AS CHAR) statsdt, Count from printer_stats.Statistics) as pstats')
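
For completeness, this is roughly how the working query slots into the named-query wrapper from the question - a sketch assembled from the snippets above, not the exact statement used. The underlying issue is that a Data Source View named query must be a single SELECT statement, so a DECLARE/EXEC batch can never appear inside the derived table:

SELECT [vwStatistics].*
FROM
(
    SELECT DeviceID, CAST(statsdt AS CHAR) AS sdt, Count
    FROM OPENQUERY(PORTAL, 'select * from (select DeviceID, CAST(Date AS CHAR) statsdt, Count from printer_stats.Statistics) as pstats')
) AS [vwStatistics]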
