Not able to read next record while browsing KSDS in CICS - COBOL

I am trying to read a VSAM KSDS file sequentially using STARTBR and READNEXT. I am able to read the 1st record. After processing the 1st record I was expecting READNEXT to return the 2nd record from the VSAM file, but only the first record is read again. Could you please help here? I am using the same lines 2 times: once after STARTBR and once when reading the next record after the 1st.
MOVE LENGTH OF WS-INPUT-DATA TO X01-KEY1-LENGTH
EXEC CICS READNEXT DATASET(X01-INPUT-NAME)
     INTO(WS-INPUT-DATA)
     RIDFLD(X01-KEY1)
     LENGTH(X01-KEY1-LENGTH)
     RESP(X-RESP)
END-EXEC

From the description it looks like the RIDFLD has changed between the 1st and 2nd requests, possibly cleared. That will cause the browse to be repositioned to look for the next record after the new value passed in RIDFLD.
Make sure that on the 2nd READNEXT the RIDFLD still contains the value returned by the 1st READNEXT.
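As a rough sketch (the DFHRESP checks and the PROCESS-RECORD paragraph are illustrative, not taken from your program), the browse should be driven like this, with nothing moved to X01-KEY1 between READNEXT calls:
MOVE LOW-VALUES TO X01-KEY1
EXEC CICS STARTBR DATASET(X01-INPUT-NAME)
     RIDFLD(X01-KEY1)
     RESP(X-RESP)
END-EXEC
PERFORM UNTIL X-RESP NOT = DFHRESP(NORMAL)
    MOVE LENGTH OF WS-INPUT-DATA TO X01-KEY1-LENGTH
    EXEC CICS READNEXT DATASET(X01-INPUT-NAME)
         INTO(WS-INPUT-DATA)
         RIDFLD(X01-KEY1)
         LENGTH(X01-KEY1-LENGTH)
         RESP(X-RESP)
    END-EXEC
    IF X-RESP = DFHRESP(NORMAL)
        *> X01-KEY1 now holds the key of the record just read;
        *> leave it untouched so the next READNEXT continues from here
        PERFORM PROCESS-RECORD
    END-IF
END-PERFORM
EXEC CICS ENDBR DATASET(X01-INPUT-NAME)
     RESP(X-RESP)
END-EXEC
If PROCESS-RECORD (or anything it performs) moves a value into X01-KEY1, the next READNEXT will reposition the browse to that new key, which matches the behaviour you are seeing.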

Related

How does the increment operation on Firebase Realtime Database work?

I was looking at the Firebase documentation and saw that you can increment a value in the database with the following line:
Database.database().reference().child("Post").setValue([
"number" : ServerValue.increment(10)
])
The documentation also says that "the increment operation occurs directly on the database server", which I don't really understand. What is the difference between this operation and an operation like:
// We have previously retrieved the value of number which we have stored in a variable
Database.database().reference().child("Post").setValue([
"number" : numberOldValue + 10
])
Instead of you fetching the value from the server and doing that little "atomic" action of adding one integer to another yourself, increment lets you just say by how much you want the value on the server to be incremented. It works on the server side, so you don't need to worry about getting the current value at all; if it changes a millisecond before you send your request, the server still applies the increment to the latest value.
Extra info: it is also much faster than a transaction.
Transaction vs. ServerValue
When working with data that could be corrupted by concurrent modifications, such as incremental counters, you can use a transaction operation. Using a transaction prevents the increment from being wrong when multiple users star the same post at the same time or when the client has stale data.
The benefit of ServerValue.increment(10) is that you do not have to grab the current value yourself: the server reads the current value and adds the amount you send automatically.
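For comparison, here is the same +10 done with a transaction in the iOS SDK. This is only a sketch; it assumes "number" is a direct child of "Post", as in the snippets above.
import FirebaseDatabase

// The block may be re-run if another client changes the value while it executes;
// that retry is how a transaction protects the counter from concurrent writes.
Database.database().reference().child("Post").child("number")
    .runTransactionBlock { (currentData: MutableData) -> TransactionResult in
        let current = currentData.value as? Int ?? 0
        currentData.value = current + 10
        return TransactionResult.success(withValue: currentData)
    }
With ServerValue.increment(10) that whole read-modify-write round trip happens on the server instead, which is why it is both simpler and faster.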

how to send HL7 message using mirth by reading data from my database

I'm having a problem sending (creating) an HL7 message using Mirth.
I want to read data from my patient table in SQL Server 2008 and, using that data,
I want to send a message to my destination connector, a file writer. I want my messages to get saved in the file writer's output directory.
So far I'm able to generate the message, but the size of the output file in my destination directory is increasing as the channel's polling time goes on.
Have I done something wrong in the transformer mapping?
UPDATE:
The size of the output file in my destination directory IS increasing (my .txt file starts at 1 KB and grows to 900 KB and so on). This is happening because the same data is getting generated again and again, multiple times. For example, my generated message has one MSH, PID, PV1 and ORM group for one row of data in my database, but the same MSH, PID, PV1 and ORM are getting generated multiple times.
If you are seeing the same data generated in your output directory multiple times, the most likely cause is that you are not doing anything to indicate to your database that a given record has been processed.
For example, if you have 1 record in your database: ["John", "Smith", "12134" ...] on the first poll, you will generate 1 message. If on the second poll you also have a second record ["Fred", "Jones", "98371" ...], you will generate TWO messages - one for John Smith and one for Fred Jones. And so on.
The key is to use the "Run On-Update Statement" of your Database Reader (Source) connector to update the database table you are polling with an indication that a given record has been processed. This ensures that the same record is not processed multiple times.
This requires that your source table have some kind of column to indicate the record has been processed. Mirth will not keep track of this for you - you must do it manually.
You can't have a file reader as a destination, so I assume you mean file writer. You say that "the size of my file in my destination is increasing." Is that a typo? Do you mean NOT increasing?
If it is increasing, then your messages are getting generated and you can view them to start your next round of troubleshooting...
If not, then you should look at the message log in the dashboard to see what is happening on a message-by-message basis - that would be the next place to troubleshoot.
You have to have a way of distinguishing which records to pull from the database, by filtering on some sort of status flag or possibly a timestamp. Then you have to use some sort of On-Update statement to mark those same records as processed.
For example:
Select id, patient, result from results where status_flag='N'
or
Select * from results where status_flag = 'N' and created_date >= '9/25/2012'
Then, in either a transformer step or the On-Update section of your Source, you would do something like:
Update results
set status_flag = 'Y' where id=$(id)
If you do not do something like this and you have Mirth polling at a certain interval, it will just keep pulling the same records over and over.
You have to set your connector type to Database Reader in the source.
You have to set your connector type to File Writer in the destination,
and you can write your data to any file you have write access to.
While creating the HL7 template, you have to use the following code in the outbound message template:
MSH|^~\&|||
Thanks
Krishna

Parsing a CSV for Database Insertion when Formatted Incorrectly

I recently wrote a mailing platform for one of our employees to use. The system runs great, scales great, and is fun to use. However, it is currently inoperable due to a bug that I can't figure out how to fix (fairly inexperienced developer).
The process goes something like this...
Upload a CSV file to a specific FTP directory.
Go to the import_mailing_list page.
Choose a CSV file within the FTP directory.
Name and describe what the list contains.
Associate file headings with database columns.
Then, the back-end loops over each line of the file, associating the values with a heading, and importing these values into a database.
This all works wonderfully, except in a specific case, when a raw CSV is not correctly formatted. For example...
fname, lname, email
Bob, Schlumberger, bob#bob.com
Bobbette, Schlumberger
Another, Record, goeshere#email.com
As you can see, there is a missing comma on line two. This would cause an error when attempting to pull "valArray[3]" (or valArray[2], in the case of every language but mine).
I am looking for the most efficient solution to keep this error from happening. Perhaps I should check the array length, and compare it to the index we're going to attempt to pull, before pulling it. But to do this for each and every value seems inefficient. Anybody have another idea?
Our stack is ColdFusion 8/9 and MySQL 5.1. This is why I refer to the array index as [3].
There's ArrayIsDefined(array, elementIndex), or ArrayLen(array)
seems inefficient?
You gotta code what you need to code, forget about inefficiency. Get it right before you get it fast (when needed).
I suppose if you are looking for another way of doing this (instead of checking the array length each time, although that really doesn't sound that bad to me), you could wrap each line insert attempt in a try/catch block. If it fails, then stuff the failed row in a buffer (including the line number and error message) that you could then display to the user after the batch has completed, so they could see each of the failed lines and why they failed. This has the advantages of 1) not having to explicitly check the array length each time and 2) catching other errors that you might not have anticipated beforehand (maybe a value is too long for your field, for example).
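A rough CFML sketch of the length check mentioned above (the variable names are invented; the real import loop will differ):
<!--- valArray is assumed to hold one parsed CSV row; column 3 is the email --->
<cfif ArrayIsDefined(valArray, 3)>
    <cfset email = Trim(valArray[3])>
<cfelse>
    <cfset email = "">
    <!--- or append the row to a "failed rows" array to report back to the user --->
</cfif>
Either way works; the try/catch version simply also catches problems that a length check cannot see, such as a value too long for its column.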

CAB file API clarification

Since I'm not really seeing any content anywhere that doesn't point back to the original Microsoft documents on this matter, or source code that doesn't really seem to answer the questions I'm having, I thought I might ask a few things here. (The Delphi tag is there because that's the dev environment for the code I'm writing from this.)
That said, I had a few questions the API document wasn't answering. First one: fdi_notify messages. What is "my responsibility" when coding handlers for these: fdintCABINET_INFO, fdintPARTIAL_FILE, fdintNEXT_CABINET, fdintENUMERATE? I'll illustrate what I mean with an example. For fdintCLOSE_FILE_INFO, "my responsibility" is to close the file related to the handle given to me and set the file's date and time according to the data passed in fdi_notify.
I figure I'm missing something since my code isn't handling extracting spanned CAB files...any thoughts on how to do this?
What you're more than likely running into is that FDICopy only reads the cab you passed in. It will use fdintNEXT_CABINET to get spanned data for any files you extract in response to fdintCOPY_FILE, but it only calls fdintCOPY_FILE for files that start on that first cab.
To get a directory listing for the entire set, you need to call FDICopy in a loop. Every time you get an fdintCABINET_INFO event, save off the psz1 parameter (the next cab name). When FDICopy returns, check that value: if it's an empty string you're done; if not, call FDICopy again with the next cab as the new path.
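In C terms (the question is tagged Delphi, but the FDI structures are the same) that loop looks roughly like this; FDICreate, its file and memory callbacks, and error handling are omitted:
/* Sketch: walk a spanned cabinet set by calling FDICopy once per cabinet,
   remembering the "next cabinet" name reported by fdintCABINET_INFO. */
#include <windows.h>
#include <fdi.h>

static char g_nextCab[MAX_PATH];    /* filled in by the notification callback */

static INT_PTR DIAMONDAPI notify(FDINOTIFICATIONTYPE fdint, PFDINOTIFICATION pfdin)
{
    switch (fdint)
    {
    case fdintCABINET_INFO:
        /* psz1 is the name of the next cabinet in the set ("" on the last one) */
        lstrcpynA(g_nextCab, pfdin->psz1, MAX_PATH);
        return 0;
    case fdintNEXT_CABINET:
        /* 0 = the cabinet FDI asked for is present; -1 = give up */
        return (pfdin->fdie == FDIERROR_NONE) ? 0 : -1;
    default:
        return 0;                   /* e.g. fdintCOPY_FILE: 0 means "skip this file" */
    }
}

static void list_whole_set(HFDI hfdi, const char *firstCab, char *cabPath)
{
    char cab[MAX_PATH];
    lstrcpynA(cab, firstCab, MAX_PATH);
    while (cab[0] != '\0')
    {
        g_nextCab[0] = '\0';
        FDICopy(hfdi, cab, cabPath, 0, notify, NULL, NULL);
        lstrcpynA(cab, g_nextCab, MAX_PATH);    /* empty string ends the loop */
    }
}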
fdintCABINET_INFO: The only responsibility for this is returning 0 to continue processing. You can use the information provided (the path of the next cabinet, next disk, path name, and set ID), but you don't need to.
fdintPARTIAL_FILE: Depending on how you're processing your cabs, you can probably ignore this. You'll only see it for the second and later images in a set, and it's to tell you that the particular entry is continued from a previous cab. If you started at the first cab in the set you'll have already seen an fdintCOPY_FILE for the file. If you're processing random .cabs, you won't really be able to use it either, since you won't have the start of the file to extract.
fdintNEXT_CABINET: You can use this to prompt the user for a new directory for the next cabinet, but for simple spanning support just return 0 if the passed-in filename is valid or -1 if it isn't. If you return 0 and the cab isn't valid, or is the wrong one, this will get called again. The easiest approach (if you don't request a new disk/directory) is just to check pfdin^.fdie. If it's FDIError_None, this is the first time it has been called for the requested cab, so you can return 0. If it's anything else, FDI has already tried to open the requested cab at least once, so you can return -1 as an error.
fdintENUMERATE: I think you can ignore this. It isn't covered in the documentation, and the two cab libraries I've looked at don't use it. It may be a leftover from a previous API version.

Dynamic READ ...RECORD INVALID KEY not working properly in COBOL. How to fix it?

A COBOL program with FILE-CONTROL like so:
SELECT D-FLAT-FILE ASSIGN TO DFLAT-FILE
       ORGANIZATION IS INDEXED
       ACCESS MODE IS SEQUENTIAL
       FILE STATUS IS RECORD-STAT
       RECORD KEY IS D_KEY OF D-FLAT-FILE DESCENDING WITH DUPLICATES.
SELECT C-MAST-FILE ASSIGN TO CMAST-FILE
       ORGANIZATION IS INDEXED
       ACCESS MODE IS DYNAMIC
       FILE STATUS IS RECORD-STAT
       RECORD KEY IS C_KEY OF C-MAST-FILE.
reads a record from the first flat file like so:
PROCESSING.
READ D-FLAT-FILE NEXT RECORD
    AT END .... END-READ.
and reads a record on the second DYNAMIC file like so:
READ C-MAST-FILE RECORD
INVALID KEY
GO TO PROCESSING.
All works well except for 1 case. If the 1st record from the 1st flat file does not match any record on the 2nd dynamic file, the program goes into an infinite loop instead of doing GO TO PROCESSING. I checked the manuals and everything is as per the manual (it is VAX COBOL). What am I missing?
Best practice is to use a different FILE STATUS variable for each file. In your case you haven't shown us enough context to see the problem. But if you are in a loop looking at RECORD-STAT, then it is possible that the failed read from C-MAST-FILE is giving you grief.
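A minimal sketch of that suggestion (the status field names here are invented):
SELECT D-FLAT-FILE ASSIGN TO DFLAT-FILE
       ORGANIZATION IS INDEXED
       ACCESS MODE IS SEQUENTIAL
       FILE STATUS IS D-FLAT-STAT
       RECORD KEY IS D_KEY OF D-FLAT-FILE DESCENDING WITH DUPLICATES.
SELECT C-MAST-FILE ASSIGN TO CMAST-FILE
       ORGANIZATION IS INDEXED
       ACCESS MODE IS DYNAMIC
       FILE STATUS IS C-MAST-STAT
       RECORD KEY IS C_KEY OF C-MAST-FILE.
*> and in WORKING-STORAGE:
01  D-FLAT-STAT    PIC XX.
01  C-MAST-STAT    PIC XX.
Then drive the sequential loop off D-FLAT-STAT only, and test C-MAST-STAT (for example "23", record not found) after the random READ, so a failed read of C-MAST-FILE can never disturb the condition that controls the flat-file loop.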
