I have a VSAM file with customer details, and the customer number is one of the fields. In CICS the user has to enter a customer number. Only if the customer number is present in the VSAM file should the next map be sent.
How do I validate the customer number from the VSAM file?
CUSTOMER NO sounds as if it is a number, so you should first validate that it is numeric.
To check whether it exists, you can use the CICS READ command (see the CICS READ documentation), i.e.
EXEC CICS READ
     FILE(..)
     INTO(data-area)
     RIDFLD(data-area)
     ...
END-EXEC
where RIDFLD is the record-key
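A minimal sketch of how that check might look, with hypothetical file, field, and paragraph names (none of them come from the question): read with the customer number as RIDFLD, then test the RESP value and only send the next map when the record was found.

EXEC CICS READ
     FILE('CUSTMAST')            *> hypothetical VSAM file name
     INTO(WS-CUSTOMER-REC)       *> record layout of the customer file
     RIDFLD(WS-CUST-NO)          *> customer number keyed in on the map
     RESP(WS-RESP)
END-EXEC

EVALUATE WS-RESP
   WHEN DFHRESP(NORMAL)
      PERFORM SEND-NEXT-MAP      *> customer exists - send the next map
   WHEN DFHRESP(NOTFND)
      PERFORM SEND-ERROR-MAP     *> not on file - redisplay with an error
   WHEN OTHER
      PERFORM HANDLE-ERROR       *> any other condition
END-EVALUATE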
I would suggest finding an existing program where you work (I am assuming you are not a student) and using it as an example. These days it is rare to write a program from scratch on the mainframe; there is nearly always an existing example you can look at and copy.
Also, you should show us what you have tried!
I am trying to read a VSAM KSDS file sequentially using STARTBR and READNEXT. I am able to read the 1st record, but after processing it I expected READNEXT to return the 2nd record; instead only the first record is read again. Could you please help here? I am using the same lines twice: once after the STARTBR and once when reading the next record after the 1st.
MOVE LENGTH OF WS-INPUT-DATA TO X01-KEY1-LENGTH
EXEC CICS READNEXT DATASET(X01-INPUT-NAME)
INTO(WS-INPUT-DATA)
RIDFLD(X01-KEY1)
LENGTH(X01-KEY1-LENGTH)
RESP(X-RESP)
END-EXEC
From the description it looks like the RIDFLD has changed between the 1st and 2nd requests, possibly cleared; that will cause the browse to be repositioned to look for the next record after the new value passed in RIDFLD.
Make sure that on the 2nd READNEXT that the RIDFLD has the value returned by the 1st READNEXT.
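A rough sketch of the browse pattern using the names from the question; the key point is that X01-KEY1 is set once before the STARTBR and then left untouched, so each READNEXT continues from the record it has just returned (PROCESS-RECORD is a hypothetical paragraph):

MOVE LOW-VALUES TO X01-KEY1
EXEC CICS STARTBR DATASET(X01-INPUT-NAME)
     RIDFLD(X01-KEY1)
     RESP(X-RESP)
END-EXEC

PERFORM UNTIL X-RESP NOT = DFHRESP(NORMAL)
    MOVE LENGTH OF WS-INPUT-DATA TO X01-KEY1-LENGTH
    EXEC CICS READNEXT DATASET(X01-INPUT-NAME)
         INTO(WS-INPUT-DATA)
         RIDFLD(X01-KEY1)
         LENGTH(X01-KEY1-LENGTH)
         RESP(X-RESP)
    END-EXEC
    IF X-RESP = DFHRESP(NORMAL)
       PERFORM PROCESS-RECORD
    END-IF
END-PERFORM

EXEC CICS ENDBR DATASET(X01-INPUT-NAME) END-EXEC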
I have a report that uses a stored procedure as its dataset. I have a simple CASE statement in my stored procedure that manipulates a set of captions based on the values in the record.
CASE
WHEN CWLI.CAPTION='Hypo Tax – Medical'
THEN 'Medical Tax'
WHEN CWLI.CAPTION='Hypo Tax Social'
THEN 'Social Tax'
ELSE CWLI.CAPTION
END AS CAPTION,
CASE
WHEN CWD.CAPTION='Hypo Tax – Medical'
THEN 'Medical Tax'
WHEN CWD.CAPTION='Hypo Tax Social'
THEN 'Social Tax'
ELSE CWD.CAPTION
END AS CWDCAPTION,
I see the expected results when I execute the stored procedure in SSMS.
However, in my report I still see "Hypo Tax Social"
I have closed BIDS. I deleted the dataset and added it back as a new dataset. I have also edited my RSReportDesigner.config file to set CacheDataForPreview to false, and deleted my .data file.
What could be going wrong?
I searched for .rdl.data files and the search did not return anything. Even though I had changed my RSReportDesigner.config file to Add Key="CacheDataForPreview" Value="false", I would still expect to find residual .data files. Nevertheless, I rolled this change back to Add Key="CacheDataForPreview" Value="true".
I deleted my data source and my dataset and recreated them from within VSS. I then created an external tool to delete these files for me. I get this message when I use my tool:
Could Not Find C:\Users\Pennie\Documents\Visual Studio 2008\Projects\Client*.rdl.data
I still see the correct results (the new CASE captions) in SSMS and the old captions in my report.
I really want to handle this in the Stored Procedure, but I have a deliverable.
Can I manipulate the captions in the expression of my tablix? Currently the expression looks at the values of two fields: Fields!CAPTION.Value, which is at the company level of my application (these could be blank), and Fields!CWDCAPTION.Value, which is at the system level (these will never be blank).
Here is my current expression without the additional case:
=Iif(Fields!CAPTION.Value="",Fields!CWDCAPTION.Value,Fields!CAPTION.Value)
When I try to add an additional Iif to this expression, it just returns True or False.
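For reference, a nested Iif along these lines returns the caption text rather than True/False; the literal caption values are copied from the CASE statement above and may need adjusting:

=Iif(Fields!CAPTION.Value = "",
     Iif(Fields!CWDCAPTION.Value = "Hypo Tax Social", "Social Tax", Fields!CWDCAPTION.Value),
     Iif(Fields!CAPTION.Value = "Hypo Tax Social", "Social Tax", Fields!CAPTION.Value))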
Thanks again.
Pennie
Well, this lesson has taught me quite a bit about caching and .data files, but the issue was that the idiot developer (me) was calling my test SP in my report. My test SP did not contain the CASE statement. Wish I had more exciting news, but the truth is easy to prove ;-)
I'm all set.
Pennie
Sounds like you added the CASE statement after you initially created the dataset. If that is the case, you need to re-create the dataset. Just to be clear: not only delete the dataset from the report, but delete the dataset itself as well.
I want to generate an IDOC file to make shop orders available to the R/3 SAP system. My question concerns BELNR in E2EDK01. As we don't have the possibility to use web services and BAPIs, we can only make the data available via files.
My actual questions are:
Do I need a [BELNR] in the IDOC file at all?
If yes, where do I get this [BELNR] from?
E2EDK01-BELNR is the order number of the ordering party; normally it is not required, but it is useful.
Example:
The customer sends an order and is unsure whether the order reached you, so he sends the order again. Now you have two similar orders, and you can't tell whether they really are two separate orders or two IDocs for the same order.
If E2EDK01-BELNR is filled with the order number of the customer's system, you can decide how to proceed (e.g. ignore the 2nd IDoc).
If I understand your question correctly, you create the order from your shop system. You could offer an (optional) field like 'Your order number' and use that. If the same order number (per customer) is used again, you can issue a warning ("Order X has already been ordered").
If you don't want this field, you could use the session ID to identify duplicate postings.
I'm having a problem sending (creating) an HL7 message using Mirth.
I want to read data from my patient table in SQL Server 2008 and, using that data,
send a message to my destination connector, a File Writer, so that my messages get saved in the File Writer's output directory.
So far I'm able to generate the message, but the size of the output file in my destination directory keeps increasing as the channel keeps polling.
Have I done something wrong in the transformer mapping?
UPDATE:
The size of the output file in my destination directory IS increasing (my .txt file starts at 1 KB and grows to 900 KB and so on). This is happening because the same data is being generated again and again, multiple times. For example, my generated message has one MSH, PID, PV1, and ORM group for one row of data in my database, but the same MSH, PID, PV1, and ORM are being generated multiple times.
If you are seeing the same data generated in your output directory multiple times, the most likely cause is that you are not doing anything to indicate to your database that a given record has been processed.
For example, if you have 1 record in your database: ["John", "Smith", "12134" ...] on the first poll, you will generate 1 message. If on the second poll you also have a second record ["Fred", "Jones", "98371" ...], you will generate TWO messages - one for John Smith and one for Fred Jones. And so on.
The key is to use the "Run On-Update Statement" of your Database Reader (Source) connector to update the database table you are polling with an indication that a given record has been processed. This ensures that the same record is not processed multiple times.
This requires that your source table have some kind of column to indicate the record has been processed. Mirth will not keep track of this for you - you must do it manually.
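A minimal sketch of that pattern, assuming a processed_flag column on the patient table (the table and column names are illustrative, and the placeholder style follows the $(...) example further down):

-- Source query (Database Reader): pick up only rows not yet processed
SELECT patient_id, first_name, last_name
FROM   patient
WHERE  processed_flag = 'N'

-- Run On-Update Statement: mark each row as processed once it has been read
UPDATE patient
SET    processed_flag = 'Y'
WHERE  patient_id = $(patient_id)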
You can't have a file reader as a destination, so I assume you mean file writer. You say that "the size of my file in my destination is increasing." Is that a typo? Do you mean NOT increasing?
If it is increasing, then your messages are getting generated and you can view them to start your next round of troubleshooting...
If not, then you should look at the message log in the dashboard to see what is happening on a message-by-message basis - that would be the next place to troubleshoot.
You have to have a way of distinguishing which records to pull from the database, by filtering on some sort of status flag or possibly a time-stamp. Then, you have to use some sort of On-Update statement to mark those same records as processed.
i.e.
Select id, patient, result from results where status_flag='N'
or
Select * from results where status_flag = 'N' and created_date >= '9/25/2012'
Then, in either a transformer step or the On-Update section of your Source, you would do something like:
Update results
set status_flag = 'Y' where id=$(id)
If you do not do something like this and you have Mirth polling at a certain interval, it will just keep pulling the same records over and over.
You have to set your connector type to Database Reader in the source.
You have to set your connector type to File Writer in the destination.
Then you can write your data to a file to which you have write access.
While creating the HL7 template, you have to use the following code in the outbound message template:
MSH|^~\&|||
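Beyond the template, the mapping itself usually happens in a JavaScript transformer step on the destination. A rough sketch, assuming hypothetical column names from the patient table (msg is the row delivered by the Database Reader, tmp is the outbound HL7 template, which should already contain skeleton MSH/PID/PV1 segments):

// Map database columns onto the outbound HL7 template
tmp['PID']['PID.3']['PID.3.1'] = msg['patient_id'].toString();    // hypothetical column
tmp['PID']['PID.5']['PID.5.1'] = msg['last_name'].toString();     // hypothetical column
tmp['PID']['PID.5']['PID.5.2'] = msg['first_name'].toString();    // hypothetical column
tmp['PV1']['PV1.2']['PV1.2.1'] = msg['patient_class'].toString(); // hypothetical column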
Thanks
Krishna
A Cobol program with file-control like so:
SELECT D-FLAT-FILE ASSIGN TO DFLAT-FILE
ORGANIZATION IS INDEXED
ACCESS MODE IS SEQUENTIAL
FILE STATUS IS RECORD-STAT
RECORD KEY IS D_KEY OF D-FLAT-FILE DESCENDING WITH DUPLICATES.
SELECT C-MAST-FILE ASSIGN TO CMAST-FILE
ORGANIZATION IS INDEXED
ACCESS MODE IS DYNAMIC
FILE STATUS IS RECORD-STAT
RECORD KEY IS C_KEY OF C-MAST-FILE.
reads a record from the first flat file like so:
PROCESSING.
READ D-FLAT-FILE NEXT RECORD
AT END ....END READ.
and reads a record on the second DYNAMIC file like so:
READ C-MAST-FILE RECORD
INVALID KEY
GO TO PROCESSING.
All works well except for one case. If the 1st record from the 1st flat file does not match any record on the 2nd dynamic file, the program goes into an infinite loop instead of doing GO TO PROCESSING. I checked the manuals and everything is as per the manual (it is VAX COBOL). What am I missing?
Best practice is to use a different FILE STATUS variable for each file. In your case you haven't shown us enough context to see the problem. But if you are in a loop looking at RECORD-STAT, then it is possible that the failed read from C-MAST-FILE is giving you grief.
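A sketch of that change, reusing the SELECT statements from the question but giving each file its own status field (the new names are only examples):

SELECT D-FLAT-FILE ASSIGN TO DFLAT-FILE
    ORGANIZATION IS INDEXED
    ACCESS MODE IS SEQUENTIAL
    FILE STATUS IS D-FLAT-STAT
    RECORD KEY IS D_KEY OF D-FLAT-FILE DESCENDING WITH DUPLICATES.

SELECT C-MAST-FILE ASSIGN TO CMAST-FILE
    ORGANIZATION IS INDEXED
    ACCESS MODE IS DYNAMIC
    FILE STATUS IS C-MAST-STAT
    RECORD KEY IS C_KEY OF C-MAST-FILE.

With D-FLAT-STAT and C-MAST-STAT defined as separate PIC XX fields in WORKING-STORAGE, a '23' (record not found) on C-MAST-FILE can no longer be mistaken for the status of D-FLAT-FILE when the loop around PROCESSING tests the status after the READ ... NEXT.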