What is the difference between automatic and commit transaction modes in WMB 8.0 compute node? - messagebroker

What is the difference between automatic and commit transaction modes in a WMB 8.0 compute node?
We have developed a message flow with a compute node which inserts records into Oracle DB tables. The flow looks like
MQInputNode (Out terminal) --- > Compute Node
MQInputNode (Catch terminal) --- > Error handling flow
The properties which we have set in that flow are as below.
MQ Input node has transaction property as "Yes"
Compute node transaction property as "Automatic"
MQ Input Node catch terminal is connected to already developed error handling sub flow
In compute node, we are just parsing the message and inserting the records into tables.
Consider the scenario where the message has 2 records: the first record is valid and the second record is invalid.
When we set the transaction property to "Automatic", the first record is inserted and committed properly even though the insertion of the second record throws an exception. We consider it a successful flow because we have caught that exception and handled it properly using the MQ Input catch terminal.
But when we set the transaction property to "Commit", even the first record is not inserted. WMB developers who work with me have told me that "Commit" is a node-level property, so when the second record is inserted it throws an exception, and the first record is rolled back from the DB.
I have gone through the WMB Info Center. Nowhere is it mentioned whether Automatic or Commit is node level, or whether an exception in that node will roll back the records already inserted.
Please clarify.

The setting controls both the transactionality and the transaction scope.
If set to "Automatic" then the transactionality is inherited from the Input Node. In this case the input node is set to "Yes" so you get transactionality with the scope fo the transaction covering the entire invocation of the message flow.
When set to "commit" the scope of the transaction is the compute node itself and work will be committed as processing exits the compute node (by exiting in this case I mean the compute node completing, not exiting via propagate into another node).

Automatic: you'll lose the data inserted into the database if your transaction hits a failure after the DB operation (it gets rolled back).
Commit: the DB data will be committed even if there is a failure later in your transaction.

Related

FireDAC ApplyUpdates without clearing the Delta

Is it possible to call ApplyUpdates on a FireDAC query in cached updates mode without clearing its Delta?
The reason to do so is because I have 4 FDQuerys that I want to save together or cancel together if any of them raises an error. I could use a single transaction to roll back all changes if any problem happens, but that would leave the Delta empty for every FDQuery whose ApplyUpdates was successful.
So I would like to call some kind of ApplyUpdates that doesn't clear the Delta, and only if all the FDQuerys' ApplyUpdates are successful would I commit the transaction and call CommitUpdates on every FDQuery to clear their Deltas. But if one of them fails, the changes of every FDQuery would still remain in their Deltas, so I roll back the transaction and the user can still fix the data and try to save it again.
Update: As #Brian has commented, setting the property UpdateOptions.AutoCommitUpdates to False does the trick and doesn't clear the Delta.
Setting UpdateOptions.AutoCommitUpdates to False will leave the Deltas alone when ApplyUpdates is called. The current help for AutoCommitUpdates is lacking, however, and describes the False setting incorrectly in a confusing way. It should read more like:
AutoCommitUpdates controls the automatic committing of cached updates.
Specifies whether the automatic committing of updates is enabled or
disabled.
If AutoCommitUpdates is True, then all the successfully updated records applied by the call to ApplyUpdates are automatically marked as unchanged. You do not need to explicitly call CommitUpdates.
If AutoCommitUpdates is False, then your application must explicitly call CommitUpdates to mark all changed records as unchanged.
I put in a ticket to fix the help: RSP-31141

select from system$stream_has_data returns error - parameter must be a valid stream name... hmm?

I'm trying to see if there is data in a stream, and I provided the exact stream name as follows:
Select SYSTEM$STREAM_HAS_DATA('STRM_EXACT_STREAM_NAME_GIVEN');
But I get an error:
SQL compilation error: Invalid value ['STRM_EXACT_STREAM_NAME_GIVEN'] for function 'SYSTEM$STREAM_HAS_DATA', parameter 1: must be a valid stream name
1) Any idea why? How can this error be resolved?
2) Would it hurt to resume a set of tasks (alter task resume;) without knowing whether the corresponding stream has data in it or not? I believe that if there is (delta) data in the stream, the task will load it; if not, the task won't do anything.
3) Any idea how to modify/update a stream that shows up as 'STALE'? Or should just loading fresh data into the table associated with the stream set the stream to 'NOT STALE', i.e. stale = false? And what if loading the associated table does not update the state of the task? (That is what appears to be happening currently in my case.)
1) It doesn't look like you have a stream by that name. Try running SHOW STREAMS; to see what streams you have active in the database/schema that you are currently using.
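For example, a quick sketch (the MY_DB.MY_SCHEMA qualifier below is only a placeholder for your actual database and schema):
-- List the streams visible in the current database/schema
SHOW STREAMS;
-- If the stream lives elsewhere, either qualify the name ...
SELECT SYSTEM$STREAM_HAS_DATA('MY_DB.MY_SCHEMA.STRM_EXACT_STREAM_NAME_GIVEN');
-- ... or switch context first
USE SCHEMA MY_DB.MY_SCHEMA;
SELECT SYSTEM$STREAM_HAS_DATA('STRM_EXACT_STREAM_NAME_GIVEN');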
2) If your task has a WHEN clause that validates against the SYSTEM$STREAM_HAS_DATA result, then resuming a task and letting it run on schedule only hits against your global services layer (no warehouse credits), so there is no harm there.
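For reference, a minimal sketch of such a task (the warehouse, schedule, target table, and column names are assumptions, not taken from your setup):
-- The body only runs when the stream actually has data
CREATE OR REPLACE TASK load_from_stream_task
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('STRM_EXACT_STREAM_NAME_GIVEN')
AS
  INSERT INTO target_table (col1, col2)
  SELECT col1, col2 FROM STRM_EXACT_STREAM_NAME_GIVEN;
ALTER TASK load_from_stream_task RESUME;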
3) STALE means that the stream data wasn't used by a DML statement in a long time (I think it's 14 days by default, or, if the data retention period is longer than 14 days, the longer of the two). Loading more data into the stream's source table doesn't help that. Running a DML statement will, but since the stream is stale, doing so may have bad consequences. Streams are meant to be used for frequent DML, so not running DML against a stream for longer than 14 days is very uncommon.
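If you do need to clear the stale state, the usual options look something like the sketch below (table and column names are placeholders; note that recreating a stream resets its offset, so any unconsumed changes are lost):
-- Consuming the stream in a DML statement advances its offset
INSERT INTO target_table (col1, col2)
SELECT col1, col2 FROM STRM_EXACT_STREAM_NAME_GIVEN;
-- For a stream that is already stale, recreate it so it starts tracking changes again
CREATE OR REPLACE STREAM STRM_EXACT_STREAM_NAME_GIVEN ON TABLE source_table;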

Getting "user" in the MergeValidationListener

I'm writing a plugin for Gerrit that temporarily locks one branch (or more) for all but one user (in order to perform a merge of two branches, for example).
I'm using Gerrit 2.11.3.
My problem is, that I can't get the current user in my implementation of the MergeValidationListener interface.
The code is:
private final Provider<CurrentUser> user;
...
user.get()
The exception is:
1) Error in custom provider, com.google.inject.OutOfScopeException: No user on merge thread
at com.google.gerrit.server.util.ThreadLocalRequestContext$1.provideCurrentUser(ThreadLocalRequestContext.java:56) (via modules: com.google.gerrit.server.config.GerritGlobalModule -> com.google.gerrit.server.util.ThreadLocalRequestContext$1)
while locating com.google.gerrit.server.CurrentUser
at com.google.gerrit.server.plugins.PluginGuiceEnvironment$2.configure(PluginGuiceEnvironment.java:534) (via modules: com.google.gerrit.server.plugins.PluginGuiceEnvironment$1 -> com.google.gerrit.server.plugins.PluginGuiceEnvironment$2)
while locating com.google.gerrit.server.CurrentUser
1 error
at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1025)
Caused by: com.google.inject.OutOfScopeException: No user on merge thread
My understanding is that the actual submit (or merge) is performed in a separate thread that has no user context.
Is there some way to get hold of the actual user who clicks the "Submit" button or issues the SSH command?
Maybe there is some other listener that can get user information and is able to prevent the submit?
As a last possibility I would consider an upgrade to a later version.

How to get previous executed sql in informix

In my ESQL program, when an SQL statement fails and generates an exception, I want to print the SQL that generated the exception. For that I need to find out how to get the previously executed SQL. I am running Informix 11.5.
I tried the following, but nothing works:
select * from sysmaster:sysconblock where cbl_sessionid in (select dbinfo('SessionId') from sysmaster:syssqlstat);
SELECT scs_sqlstatement FROM sysmaster:syssqlcurses WHERE scs_sessionid in (select dbinfo('SessionId') from sysmaster:syssqlstat);
All of these get the SQL of the query itself. For example, if I run select * from sysmaster:sysconblock, it shows "select * from sysmaster:sysconblock" as the last executed statement. Is there any way to get this in Informix? And is it possible to do it in an ESQL program?
Many Thanks
You're on the right track, but if you're using the same connection to run those SQL statements, then of course their successful execution obliterates the information from the previous statement. (In fact it's almost a perfect example of a heisenbug.)
What you need to do is create a second connection to the database, and use that to interrogate sysmaster content for the main connection that failed.
Connect to database for main program processing.
Identify SessionID and capture to a variable.
Connect to sysmaster database with a fresh connection.
Start processing on main connection.
When main connection processing fails with an error, use the secondary connection, with the SessionID as a parameter, to obtain the SQL etc. (see the sketch below).
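A rough sketch of the two queries involved, reusing the sysmaster columns from the question (the bind placeholder is only illustrative; how you bind it depends on your ESQL code):
-- On the main connection, right after connecting: capture the session id
SELECT DBINFO('sessionid') FROM systables WHERE tabid = 1;
-- On the secondary connection, once the main connection hits an error:
SELECT scs_sqlstatement
  FROM sysmaster:syssqlcurses
 WHERE scs_sessionid = ?;  -- bind the captured session id here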
Hope that's helpful.

how to send HL7 message using mirth by reading data from my database

I'm having a problem in sending (creating) an HL7 message using Mirth.
I want to read data from my patient table in SQLSERVER 2008 and, using that data,
I want to send a message to my destination connector, a file writer. I want my messages to get saved in the file writer's output directory.
So far I'm able to generate the message, but the size of the output file in my destination directory is increasing as the channel's polling time goes on.
Have I done something wrong in the transformer mapping?
UPDATE:
The size of the output file in my destination directory IS increasing. (My .txt file starts at 1 KB and grows to 900 KB and so on.) This is happening because the same data is getting generated again and again, multiple times. For example, my generated message has one set of segments (MSH, PID, PV1, ORM) for one row of data in my database, but the same MSH, PID, PV1 and ORM are getting generated multiple times.
If you are seeing the same data generated in your output directory multiple times, the most likely cause is that you are not doing anything to indicate to your database that a given record has been processed.
For example, if you have 1 record in your database: ["John", "Smith", "12134" ...] on the first poll, you will generate 1 message. If on the second poll you also have a second record ["Fred", "Jones", "98371" ...], you will generate TWO messages - one for John Smith and one for Fred Jones. And so on.
The key is to use the "Run On-Update Statement" of your Database Reader (Source) connector to update the database table you are polling with an indication that a given record has been processed. This ensures that the same record is not processed multiple times.
This requires that your source table have some kind of column to indicate the record has been processed. Mirth will not keep track of this for you - you must do it manually.
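A minimal sketch of that idea for SQL Server (the flag column and the selected column names are assumptions; the ${id} variable syntax in the On-Update statement depends on your Mirth version, so check the answer below and your Mirth documentation):
-- Add a flag column to the polled table
ALTER TABLE patient ADD processed CHAR(1) NOT NULL DEFAULT 'N';
-- Database Reader (Source) query: only pick up unprocessed rows
SELECT id, first_name, last_name FROM patient WHERE processed = 'N';
-- "Run On-Update Statement": mark the row as handled so it is not read again
UPDATE patient SET processed = 'Y' WHERE id = ${id};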
You can't have a file reader as a destination, so I assume you mean file writer. You say that "the size of my file in my destination is increasing." Is that a typo? Do you mean NOT increasing?
If it is increasing, then your messages are getting generated and you can view them to start your next round of troubleshooting...
If not, then you should look at the message log in the dashboard to see what is happening on a message-by-message basis - that would be the next place to troubleshoot.
You have to have a way of distinguishing which records to pull from the database by filtering on some sort of status flag or possibly a timestamp. Then, you have to use some sort of On-Update statement to mark these same records as processed.
i.e.
Select id, patient, result from results where status_flag='N'
or
Select * from results where status_flag = 'N' and created_date >= '9/25/2012'
Then, in either a transformer step or the On-Update section of your Source, you would do something like:
Update results
set status_flag = 'Y' where id=$(id)
If you do not do something like this and you have Mirth polling at a certain interval, it will just keep pulling the same records over and over.
You have to set your connector type to Database Reader in the source.
You have to set your connector type to File Writer in the destination.
Then you can write your data to a file for which you have write access.
While creating the HL7 template, you have to use the following code in the outbound message template:
MSH|^~\&|||
Thanks
Krishna
