Siddhi polling interval on SQS source not working

Hi, I'm new to Siddhi and have set up an app with an SQS source. I've tried setting the polling interval to 10 seconds and the max number of messages to 1.
However, I just get a constant stream of messages, as if the values are simply being ignored.
Here's my code with sensitive info removed:
@App:name('SQS TEST')
@App:description('Description of the plan')

@source(type = 'sqs',
    queue = '',
    access.key = '',
    secret.key = '',
    region = '',
    polling.interval = '10000',
    wait.time = '20',
    max.number.of.messages = '1',
    delete.messages = 'false',
    number.of.parallel.consumers = '1',
    @map(type = 'json', fail.on.missing.attribute = 'false', enclosing.element = '$.entries',
        @attributes(val1 = 'val1', val2 = 'val2')))
define stream inStream (val1 string, val2 string);
@sink(type = 'log', prefix = 'Cafe Feed:')
define stream log_received (val1 string, val2 string);
from inStream
select val1,val2
insert into log_received;
Any help would be much appreciated, thank you.

The default behavior of the SQS source is to pull messages from the SQS queue until there are no messages left in the queue. The Amazon API lets developers define the number of messages retrieved per request, and the 'max.number.of.messages' property above exposes that setting; it does not limit the overall rate of consumption.
To overcome this issue you can either set the queue's visibility timeout to a larger value, so that already-read messages are not returned again, or delete each message after reading it from the queue.
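If removing messages once they have been read is acceptable, the simplest change is to flip delete.messages in the source definition so the same messages stop reappearing; the visibility timeout, by contrast, is configured on the queue in AWS, not in the Siddhi app. A sketch of the question's source with only that property changed (the angle-bracket placeholders stand in for the redacted values):

@source(type = 'sqs',
    queue = '<queue url>',
    access.key = '<access key>',
    secret.key = '<secret key>',
    region = '<region>',
    polling.interval = '10000',
    wait.time = '20',
    max.number.of.messages = '1',
    delete.messages = 'true',
    number.of.parallel.consumers = '1',
    @map(type = 'json', fail.on.missing.attribute = 'false', enclosing.element = '$.entries',
        @attributes(val1 = 'val1', val2 = 'val2')))
define stream inStream (val1 string, val2 string);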

Related

New Relic alert when application stops

I have an application deployed on PCF with a New Relic service bound to it. In New Relic I want to get an alert when my application is stopped. I don't know whether that is possible or not. If it is, can someone tell me how?
Edit: I don't have access to New Relic Infrastructure
Although an 'app not reporting' alert condition is not built into New Relic Alerts, it's possible to rig one using NRQL alerts. Here are the steps:
Go to New Relic Alerts and begin creating a NRQL alert condition:
NRQL alert conditions
Query your app with:
SELECT count(*) FROM Transaction WHERE appName = 'foo'
Set your threshold to:
Static
sum of query results is below x
at least once in y minutes
The query runs once per minute. If the app stops reporting, the condition turns the null results of count into 0 and then sums them. When that sum goes below your threshold, you get a notification. I recommend using the preview graph to determine how low you want your transaction count to get before receiving a notification. Here's some good information:
Relic Solution: NRQL alerting with “sum of the query results”
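For reference, the same condition can also be defined in code via Terraform. This is only a sketch: the resource and policy names are hypothetical, exact field names depend on the New Relic provider version, and the "sum of query results" option maps differently across provider versions:

resource "newrelic_nrql_alert_condition" "app_not_reporting" {
  // hypothetical policy reference
  policy_id = newrelic_alert_policy.example.id
  name      = "foo stopped reporting"

  nrql {
    query = "SELECT count(*) FROM Transaction WHERE appName = 'foo'"
  }

  critical {
    operator              = "below"
    threshold             = 1
    threshold_duration    = 300             // "y minutes", in seconds
    threshold_occurrences = "at_least_once" // "at least once in y minutes"
  }
}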
Basically you need to create a New Relic alert with conditions that check whether the application is available. Specifically, you can use the 'Host not reporting' alert condition.
The Host not reporting event triggers when data from the Infrastructure agent does not reach the New Relic collector within the time frame you specify.
You could do something like this:
// ...
aggregation_method = "cadence" // use "cadence" for process monitoring, otherwise it might not alert
// ...
nrql {
  // Limitation: only works for processes with ONE instance; otherwise use just uniqueCount() and set a loss-of-signal (LoS) condition
  query = "SELECT filter(uniqueCount(hostname), WHERE processDisplayName LIKE 'cdpmgr') OR -1 FROM ProcessSample WHERE GENERIC_CONDITIONS FACET hostname, entityGuid as 'entity.guid'"
}
critical {
  operator              = "below"
  threshold             = 0
  threshold_duration    = 5 * 60
  threshold_occurrences = "ALL"
}
Previous solution (it turned out not to be that robust):
// ...
critical {
  operator              = "below"
  threshold             = 0.0001
  threshold_duration    = 600
  threshold_occurrences = "ALL"
}
nrql {
  query = "SELECT percentage(uniqueCount(entityAndPid), WHERE commandLine LIKE 'yourExecutable.exe') FROM ProcessSample FACET hostname"
}
This calculates the fraction of processes matching your executable against all other processes on the host.
If the process is not running, the percentage drops to 0. On a system running a vast number of processes it could fall below 0.0001, but that is very improbable.
The advantage here is that the alert stays active even if the process slips out of your current alert time window after it stopped; this prevents the alert from auto-recovering (compared to just filtering with WHERE).

ADO.Net connection executing DB2 stored procedure returning pooled output parameter values using iSeries connection

[EDIT - This issue is resolved. The problem had to do with uninitialized out parameters on the stored procedure.]
Why would I need to turn connection pooling off to get this to work correctly???
[EDIT - connection pooling released a shared connection memory area on the AS400]
In my MVC web app I call a DB2 Stored Procedure (SP).
This SP has multiple in and out parameters similar to this pseudo code:
CreatePO(#REQNO[in], #PO[out], #Approver[out], #ErrorMsg[out])
My app writes data to tables used by this SP during its processing so when all the data is in place I call the SP and it tries to create a PO.
If the PO creation fails there will be an error message in the #ErrorMsg out parameter. In these cases the #PO and #Approver parameters should be blank.
Here's what happens in sequence:
1) I try to create my first PO but there is a problem...
CreatePO(100, blank, blank, blank)
which results in...
CreatePO(100, blank, blank, 'unable to determine approver')
2) I successfully create the 2nd PO...
CreatePO(101, blank, blank, blank)
CreatePO(101, 'P1234', 'JJONES', blank)
3) I try to re-create a PO for #REQNO 100
CreatePO(100, blank, blank, blank)
CreatePO(100, 'P1234', 'JJONES', 'unable to determine approver')
Step 3 has conflicting out parameters. The app pool is returning the PO and Approver from Step 2 along with the error message.
If I recycle my IIS app pool then the results are back to what happened in Step #1.
I am able to get the expected results if I add "pooling=false" to the connection string. But why would output parameters be affected in this manner by connection pooling? This seems more like a bug than some sort of desirable caching behavior.
If I don't paste my code someone will get bent out of shape so here it is...
(Look at the end of the top two lines)
'Dim cs As String = "DataSource=mydb;UserID=myuser;Password=mypassword;Naming=System;ConnectionTimeout=180; DefaultIsolationLevel=ReadUncommitted;AllowUnsupportedChar=True;CharBitDataAsString=True; TransactionCompletionTimeout=0;pooling=false"
Dim cs As String = "DataSource=mydb;UserID=myuser;Password=mypassword;Naming=System;ConnectionTimeout=180; DefaultIsolationLevel=ReadUncommitted;AllowUnsupportedChar=True;CharBitDataAsString=True; TransactionCompletionTimeout=0;"
Using conn As New iDB2Connection(cs)
    conn.Open()
    Dim cmd As New iDB2Command()
    cmd.Connection = conn
    cmd.CommandType = CommandType.StoredProcedure
    cmd.CommandText = "BF6360CL"
    ' Input parameters. Note: .DbType takes the provider-neutral System.Data.DbType;
    ' DbType.StringFixedLength is the CHAR equivalent (SqlDbType is the SQL Server
    ' enum and does not belong here).
    cmd.Parameters.Add(New iDB2Parameter With {.ParameterName = "#REQNO", .DbType = DbType.StringFixedLength, .Size = 7, .Value = model.RO})
    ' Output parameters
    Dim opo = New iDB2Parameter With {.ParameterName = "#POORDER", .DbType = DbType.StringFixedLength, .Size = 7, .Direction = ParameterDirection.Output}
    cmd.Parameters.Add(opo)
    Dim oApprover = New iDB2Parameter With {.ParameterName = "#APPROVER", .DbType = DbType.StringFixedLength, .Size = 10, .Direction = ParameterDirection.Output}
    cmd.Parameters.Add(oApprover)
    Dim oStatus = New iDB2Parameter With {.ParameterName = "#STATUS", .DbType = DbType.StringFixedLength, .Size = 3, .Direction = ParameterDirection.Output}
    cmd.Parameters.Add(oStatus)
    Dim oErr = New iDB2Parameter With {.ParameterName = "#ERROR", .DbType = DbType.StringFixedLength, .Size = 1, .Direction = ParameterDirection.Output}
    cmd.Parameters.Add(oErr)
    ' Return value
    Dim oRetval = New iDB2Parameter With {.ParameterName = "#RETURN_VALUE", .DbType = DbType.StringFixedLength, .Size = 10, .Direction = ParameterDirection.ReturnValue}
    cmd.Parameters.Add(oRetval)
    cmd.ExecuteNonQuery()
    model.PO = opo.Value
    model.Approver = oApprover.Value
    model.Status = oStatus.Value
    model.Err = oErr.Value
End Using
Return model
So the big question is this:
Why on earth would connection pooling be responsible for out parameter values???
Could this be a bug in the IBM iSeries iDB2Connection implementation?
It looked as if the IIS application pool was caching stored procedure output parameters by name and returning a cached value when nulls were detected. This happened with both ODBC and iSeries connections. When I recycled the application pool the cached value went away. I added “pooling=false;” to the connection string and these cached values no longer appeared.
My boss asked me to try calling the stored procedure using iSeries Navigator just to see what the out parameters contain. Boy was I surprised.
It turned out that the Stored Procedure (SP) was at fault after all. I sat with the AS400 RPG developer this morning and watched them debug the SP. The problem had to do with uninitialized memory.
Here's the definition of the SP:
BF6360CL(#REQNO, #USER, #ENVIRONMENT, #PO[out], #Approver[out], #Status[out], #Error[out])
I then reset my connection to the AS400 in iSeries Navigator and the output parameters reset back to
4 = S2.RETU
5 = RN_VAR0000
etc... (note how a leftover string from adjacent storage, apparently 'S2.RETURN_VAR0000', is split across parameters 4 and 5).
The AS400 developer is making changes now to initialize the variables. When they're done I expect to be able to change my program back to use connection pooling.
When I reset the IIS App Pool it reset my connection to the database. This seemed to release allocated memory on the AS400.
If anyone has more specifics about Connections and AS400 output parameter memory please share.
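For anyone who lands here with the same symptom: the server-side fix amounts to assigning every output parameter a definite value at the top of the procedure, before any branch can return without setting it. A minimal SQL PL sketch of the idea (the real procedure here is RPG, and the parameter names below are illustrative only):

BODY: BEGIN
    -- Initialize every OUT parameter up front, so an error path
    -- cannot return whatever was left over in that storage.
    SET PO_ORDER  = '';
    SET APPROVER  = '';
    SET PO_STATUS = '';
    SET ERROR_MSG = '';
    -- ... rest of the procedure ...
END BODY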

Can RxJS be used in a pull-based way?

The examples in the RxJS README seem to suggest we have to subscribe to a source. In other words: we wait for the source to send events. In that sense, sources seem to be push-based: the source decides when it creates new items.
This contrasts, however, with iterators, where strictly speaking new items need only be created when requested, i.e., when a call is made to next(). This is pull-based behavior, also known as lazy generation.
For instance, a stream could return all Wikipedia pages for prime numbers. The items are only generated when you ask for them, because generating all of them upfront is quite an investment, and maybe only 2 or 3 of them might be read anyway.
Can RxJS also have such pull-based behavior, so that new items are only generated when you ask for them?
The page on backpressure seems to indicate that this is not possible yet.
Short answer is no.
RxJS is designed for reactive applications so as you already mentioned if you need pull-based semantics you should be using an Iterator instead of an Observable. Observables are designed to be the push-based counterparts to the iterator, so they really occupy different spaces algorithmically speaking.
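To make that duality concrete, here is a minimal sketch of the two consumption models (TypeScript-style and simplified; these are not the actual RxJS type definitions):

// Pull: the consumer decides when the next value is produced.
interface PullSource<T> {
  next(): { value: T; done: boolean };
}

// Push: the producer decides when values are delivered;
// the consumer only registers a callback.
interface PushSource<T> {
  subscribe(onNext: (value: T) => void): void;
}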
Obviously, I can't say this will never happen, because that is something the community will decide. But as far as I know 1) the semantics for this case just aren't that good and 2) this runs counter to the idea of reacting to data.
A pretty good synopsis can be found here. It is for Rx.Net but the concepts are similarly applicable to RxJS.
A controlled observable, from the page you referenced, can change a push observable into a pull-based one.
var controlled = source.controlled();
// this callback will only be invoked after controlled.request()
controlled.subscribe(n => {
  log("controlled: " + n);
  // do some work, then signal for next value
  setTimeout(() => controlled.request(1), 2500);
});
controlled.request(1);
A truly synchronous iterator is not possible, as it would block when the source was not emitting.
In the snippet below, the controlled subscriber only gets a single item when it signals, and it does not skip any values.
var output = document.getElementById("output");
var log = function (str) {
  output.value += "\n" + str;
  output.scrollTop = output.scrollHeight;
};

var source = Rx.Observable.timer(0, 1000);
source.subscribe(n => log("source: " + n));

var controlled = source.controlled();
// this callback will only be invoked after controlled.request()
controlled.subscribe(n => {
  log("controlled: " + n);
  // do some work, then signal for next value
  setTimeout(() => controlled.request(1), 2500);
});
controlled.request(1);

<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/2.5.2/rx.all.js"></script>
<body>
  <textarea id="output" style="width:150px; height: 150px"></textarea>
</body>
I'm quite late to the party, but it's actually very simple to combine generators with observables. You can pull a value from a generator function by syncing it with a source observable:
import { interval } from 'rxjs'
import { map } from 'rxjs/operators'

const fib = fibonacci()

// each tick pulls exactly one value from the generator
interval(500).pipe(
  map(() => fib.next().value) // .value unwraps the IteratorResult
)
.subscribe(console.log)
Generator implementation for reference:
function* fibonacci() {
  let v1 = 1
  let v2 = 1
  while (true) {
    const res = v1
    v1 = v2
    v2 = v1 + res
    yield res
  }
}

SP execution time

I commented out the whole body of my SP except for the parameter declarations. The SP body is something like the one below; note that all other parts of the body are commented out.
    OUT PO_ERROR INTEGER,
    IN  PI_CURRENT_DATE INTEGER,
    IN  PI_USER_ID DECIMAL(15),
    IN  PI_BID DECIMAL(15),
    IN  PI_AID DECIMAL(15),
    IN  PI_UUID VARCHAR(36),
    IN  PI_XML XML,
    OUT PO_VERSION INTEGER,
    OUT PO_ERROR_MSG INTEGER,
    OUT PO_BID DECIMAL(15),
    OUT PO_STEP INTEGER
SPECIFIC ESPNAME1
RESULT SETS 1
MODIFIES SQL DATA
NOT DETERMINISTIC
NULL CALL
LANGUAGE SQL
BODY: BEGIN
    DECLARE L_SQLCODE INT DEFAULT 0;
    DECLARE SQLCODE INTEGER DEFAULT 0;
    DECLARE L_AID INTEGER DEFAULT 0;
    DECLARE L_BNO INTEGER DEFAULT 0;
    DECLARE L_BID INTEGER DEFAULT 0;
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
        SET L_SQLCODE = SQLCODE;
    DECLARE CONTINUE HANDLER FOR NOT FOUND
        SET L_SQLCODE = SQLCODE;
    SET PO_ERROR = 0;
    SET PO_STEP = 0;
    SET PO_ERROR_MSG = 0;
    COMMIT;
END BODY
Question: I run the SP with the specified input parameters, and every time its execution time is in the range of 140 ms to 180 ms. I think that is a lot for an SP with an empty body. What is wrong here? Does this time include the connection setup time? If so, how can I measure the SP execution time without the connection time?
Note that I tried removing PI_XML from the input parameters, because I thought the XML input might be increasing the execution time, but nothing changed; the execution time stays in that range.
It's a lot easier to measure the elapsed time of just the stored procedure part if you capture the start and end times inside the procedure itself. One way to accomplish this is to temporarily add a couple of output parameters to it, like this:
CREATE PROCEDURE ...
    OUT PO_START TIMESTAMP,
    OUT PO_END TIMESTAMP )
...
BODY: BEGIN
    SET PO_START = CURRENT TIMESTAMP;
    ... -- Rest of the procedure
    SET PO_END = CURRENT TIMESTAMP;
END BODY
In a do-nothing procedure such as the one you're currently testing, I'd be surprised if PO_START and PO_END differ by more than a handful of milliseconds. The rest of the elapsed time could be caused by any of the following:
Client opens a database connection and authenticates
Database was not already activated
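Once the procedure returns, the gap between the two timestamps is the true in-procedure time; everything beyond that is connection, network, and client overhead. A sketch of computing the gap in SQL (the timestamp literals stand in for returned PO_START and PO_END values; interval code 1 asks TIMESTAMPDIFF for microseconds):

-- elapsed time inside the procedure, in microseconds
VALUES TIMESTAMPDIFF(
    1,
    CHAR(TIMESTAMP '2024-01-01 00:00:00.152000'
       - TIMESTAMP '2024-01-01 00:00:00.001000')
);
-- returns 151000, i.e. about 151 ms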

How is a Documentum method timeout enforced?

I have a Documentum dm_method:
create dm_method object
    set object_name = 'xxxxxxxxxxx',
    set method_verb = 'xxx.yyy.Foo',
    set method_type = 'java',
    set launch_async = false,
    set use_method_server = true,
    set run_as_server = true,
    set timeout_min = 60,
    set timeout_max = 600,
    set timeout_default = 500
It is invoked via a dm_job with a period of 600 seconds.
But my method can run for more than 600 seconds (depending on the size of the input data produced by users).
What happens when timeout_max is exceeded for a dm_method implemented in Java?
Does the DFC job manager send Thread.interrupt()?
Does DFC wait for the job to finish and only log a warning?
I didn't find a detailed description in the Documentum documentation.
See the discussion at https://forums.opentext.com/forums/discussion/153860/how-documentum-method-timeout-performed:
Actually, it's possible that the Java method will continue running in the JMS after timeout. However, the Content Server will already have closed the OutputStream where the method can write the response. So you will most likely see errors in the log, and also in the job object if the method was called by a job. Depending on what the method does, it might actually be able to complete whatever it needs to do.
However, you should try to set the default timeout to a value that will give your job enough time to complete cleanly.
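Given that, if the method legitimately needs more time, the practical fix is to raise the timeouts on the dm_method object. A DQL sketch (the values are examples; timeout_default must stay within timeout_min/timeout_max):

UPDATE dm_method OBJECTS
    SET timeout_default = 3000,
    SET timeout_max = 3600
WHERE object_name = 'xxxxxxxxxxx'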
