I have a Coplat message in the following format:
DEB1234567890 5 CODE1 5 2007020610471COPLAT0
(... other data here ....)
DEB1234567890 5 CODE2 5 2007020610471COPLAT0
(... other data here ....)
FIN00000245
As you can see, the message above has two DEB sections.
I want to create a Copaym message that can be mapped to that Coplat message. Here is an example of a Copaym message with one BGM segment:
UNB+UNOC:2+1234567890:5+CODE1'
UNH+1000000+COPAYM:0:4.2:RT'
BGM+903:ZZZ+1000000'
(... other data here ....)
UNT+62:1000000'
UNZ+2+091000000'
I want to create two messages like this one to be translated into the Coplat message above, meaning that after translation I must get two DEB sections with CODE1 and CODE2 respectively. I tried this:
UNB+UNOC:2+1234567890:5+CODE1'
UNH+1000000+COPAYM:0:4.2:RT'
BGM+903:ZZZ+1000000'
(... other data here ....)
UNT+62:1000000'
UNZ+2+091000000'
UNB+UNOC:2+1234567890:5+CODE2'
UNH+1000000+COPAYM:0:4.2:RT'
BGM+903:ZZZ+1000000'
(... other data here ....)
UNT+62:1000000'
UNZ+2+091000000'
but I got a syntax error, so I put all the data inside one UNB envelope, which works, but the generated Coplat has just one DEB section, with CODE1. This is the message:
UNB+UNOC:2+1234567890:5+CODE1'
UNH+1000000+COPAYM:0:4.2:RT'
BGM+903:ZZZ+1000000'
(... other data here ....)
UNT+62:1000000'
UNH+1000000+COPAYM:0:4.2:RT'
BGM+903:ZZZ+1000000'
(... other data here ....)
UNT+62:1000000'
UNZ+2+091000000'
One UNB segment with two UNH messages inside.
Can anyone explain how to build a Copaym message so that, after translation, I get two DEB sections with CODE1 and CODE2?
That's the correct format; you just have to specify different interchange control references (and message reference numbers) for the two messages, for example:
UNB+UNOC:2+1234567890:5+CODE1'
UNH+1000000+COPAYM:0:4.2:RT'
BGM+903:ZZZ+1000000'
(... other data here ....)
UNT+62:1000000'
UNZ+2+091000000'
UNB+UNOC:2+1234567890:5+CODE2'
UNH+1000001+COPAYM:0:4.2:RT'
BGM+903:ZZZ+1000001'
(... other data here ....)
UNT+62:1000001'
UNZ+2+091000001'
I am trying to load a parameter table.
I get error messages when opening the parameter table and trying to load a txt file (created with Excel and saved as a tab-delimited txt) via Treatment -> Import Variable Table -> Group.
I tried using the advice given here: How to use table loader in ztree?
But I cannot import the parameter table generated.
The error messages say, e.g.:
Syntax error: line 1 (or above)
Error in period 0; subject 1
The parameter table in z-Tree is a special table, and (if I am not mistaken) it is not meant to be exported or imported.
I just assumed you would like to have a special matching structure. (If you are planning to do something else, my answer might not be relevant.)
If you want to manage the Group variable from a file, you can create a table, say MATCHING, and load an external file in the same way as described in the post you linked to. For instance, something like this:
Period Subject Group
1 1 3
1 2 3
1 3 2
...
2 1 2
2 2 1
2 3 3
and you can add a program (subjects.do) like the following under the Background stage:
Group = MATCHING.find(Subject == :Subject & Period == :Period, Group);
Just make sure you define the group for each subject and each period: if the program cannot find a valid entry for a subject and period, it will cause trouble.
Note: If you are using z-Tree 4, it seems that the variables need to be initialized first. This can be done by adding a program under the table; in z-Tree 3, this is not necessary.
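For what it's worth, a minimal sketch of such an initialization program (my assumption; the exact form z-Tree 4 expects may differ) would just assign defaults under the MATCHING table so the columns exist before the import:
// hypothetical program placed under the MATCHING table (z-Tree 4 only)
Period = 0;
Subject = 0;
Group = 0;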
I store data as XML files in Data Lake Store, where each folder corresponds to one source system.
At the end of every day, I would like to run some kind of log analytics to find out how many new XML files were stored in Data Lake Store under each folder. I have enabled Diagnostic Logs and also added the OMS Log Analytics suite.
I would like to know the best way to produce this report.
It is possible to build an aggregate report (and even create an alert/notification). Using Log Analytics, you can create a query that searches for any instance where a file is written to your Azure Data Lake Store, based on either a common root path or a file-naming convention:
AzureDiagnostics
| where ( ResourceProvider == "MICROSOFT.DATALAKESTORE" )
| where ( OperationName == "create" )
| where ( Path_s contains "/webhdfs/v1/##YOUR PATH##")
Alternatively, the last line could also be:
| where ( Path_s contains ".xml")
...or a combination of both.
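For example, combining both filters (keeping the ##YOUR PATH## placeholder from above) might look like this:
AzureDiagnostics
| where ( ResourceProvider == "MICROSOFT.DATALAKESTORE" )
| where ( OperationName == "create" )
| where ( Path_s contains "/webhdfs/v1/##YOUR PATH##" )
| where ( Path_s contains ".xml" )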
You can then use this query to create an alert that notifies you at a given interval (e.g. every 24 hours) of the number of files that were created.
Depending on what you need, you can format the query in these ways:
If you use a common file naming, you can find a match where the path contains said file naming.
If you use a common path, you can find a match where the path matches the common path.
If you want to be notified of all the instances (not just specific ones), you can use an aggregating query and an alert that fires when a threshold is reached or exceeded (i.e. 1 or more events):
AzureDiagnostics
| where ( ResourceProvider == "MICROSOFT.DATALAKESTORE" )
| where ( OperationName == "create" )
| where ( Path_s contains ".xml")
| summarize AggregatedValue = count(OperationName) by bin(TimeGenerated, 24h), OperationName
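Since the question asks for counts per source-system folder, a variation of this query (a sketch, assuming each source system is the first path segment under /webhdfs/v1/) could extract the folder name and group by it:
AzureDiagnostics
| where ( ResourceProvider == "MICROSOFT.DATALAKESTORE" )
| where ( OperationName == "create" )
| where ( Path_s contains ".xml" )
| parse Path_s with "/webhdfs/v1/" Folder "/" *
| summarize AggregatedValue = count() by Folder, bin(TimeGenerated, 24h)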
With the query, you can create the alert by following the steps in this blog post: https://azure.microsoft.com/en-gb/blog/control-azure-data-lake-costs-using-log-analytics-to-create-service-alerts/.
Let us know if you have more questions or need additional details.
I am using Rails 5 and the flash.
Here is a method in my controller.
The query takes some 10-15 seconds to execute no matter how the DB is tuned, due to the large volume of data to be processed.
I want to show a flash message saying something like "Processing ..." before @result is handed over to the view of the same name to be rendered.
def monthly_measurements
  if user_signed_in?
    @query = "
      SELECT to_char(scheduled_on, 'yyyy-mm') as year_month,
        get_measurements_count_by_year_month(to_char(scheduled_on, 'yyyy-mm'), NULL) as monthly_total,
        get_measurements_count_by_year_month(to_char(scheduled_on, 'yyyy-mm'), 'Completed') as monthly_completed,
        get_measurements_count_by_year_month(to_char(scheduled_on, 'yyyy-mm'), 'Not Completed') as monthly_not_completed
      FROM measurements
      GROUP BY year_month
      ORDER BY year_month DESC;
    "
    @result = Measurement.connection.execute(@query)
  else
    flash[:alert] = "Not signed in"
    redirect_to root_url
  end
end
I solved the problem by making the intermediate "Processing ..." message unnecessary: I reduced the controller query's processing time by an order of magnitude.
I denormalized the measurements table by adding a column year_month of type text. The column is populated with the yyyy-mm part of scheduled_on for each row, and year_month is indexed (sketched below).
The total number of records in the measurements table is 120K+.
I also replaced, wherever necessary, every occurrence of to_char(scheduled_on, 'yyyy-mm') in functions and queries with year_month.
Everything (all related reports) now runs perfectly, in a split second.
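A minimal sketch of that denormalization (assuming PostgreSQL; the index name is illustrative):
-- add the denormalized column, backfill it, and index it
ALTER TABLE measurements ADD COLUMN year_month text;
UPDATE measurements SET year_month = to_char(scheduled_on, 'yyyy-mm');
CREATE INDEX index_measurements_on_year_month ON measurements (year_month);
The controller query can then select and group by year_month directly instead of calling to_char(scheduled_on, 'yyyy-mm') on every row.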
I've defined a macro for running batches of custom tables:
DEFINE !xtables (myvars=!CMDEND)
CTABLES
/VLABELS VARIABLES=!myvars retailer total DISPLAY=LABEL
/TABLE !myvars [C][COLPCT.COUNT PCT40.0, TOTALS[UCOUNT F40.0]] BY retailer [c] + total [c]
/SLABELS POSITION=ROW
/CRITERIA CILEVEL=95
/CATEGORIES VARIABLES=!myvars ORDER=D KEY=COLPCT.COUNT (!myvars) EMPTY=INCLUDE TOTAL=YES LABEL='Base' POSITION=AFTER
/COMPARETEST TYPE=PROP ALPHA=.05 ADJUST=BONFERRONI ORIGIN=COLUMN INCLUDEMRSETS=YES CATEGORIES=ALLVISIBLE MERGE=YES STYLE=SIMPLE SHOWSIG=NO
!ENDDEFINE.
I can then run a series of commands to execute these in one batch:
!XTABLES MYVARS=q1.
!XTABLES MYVARS=q2.
!XTABLES MYVARS=q3.
However, if a table has the same row and column, Custom Tables freezes:
!XTABLES MYVARS=retailer.
The culprit appears to be SLABELS. I hadn't encountered this problem before v24.
I tried replicating a CTABLES spec as close as possible to yours and found that VLABELS does not like the same variable being specified twice.
GET FILE="C:\Program Files\IBM\SPSS\Statistics\23\Samples\English\Employee data.sav".
CTABLES /VLABELS VARIABLES=Gender Gender DISPLAY=LABEL
/TABLE Gender[c][COLPCT.COUNT PCT40.0, TOTALS[UCOUNT F40.0]]
BY Gender[c] /SLABELS POSITION=ROW
/CATEGORIES VARIABLES=Gender ORDER=D KEY=COLPCT.COUNT(Gender) .
Which yields an error message:
VLABELS: Text GENDER. The same keyword, option, or subcommand is used more than once.
The macro has a parameter named MYVARS, which suggests that more than one variable can be listed; however, if you do that, it will generate an invalid command. That is something else to watch out for. I can see the infinite loop in V24; in V23, an error message is produced.
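For illustration (a hypothetical call, not from the original post), invoking the macro with two variables:
!XTABLES MYVARS=q1 q2.
expands !myvars everywhere it appears, producing fragments such as /TABLE q1 q2 [C]..., and a space-separated variable list is not a valid CTABLES dimension expression (variables must be combined with + or >).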
I am calling a web service that returns a comma-separated dataset with a varying number of columns and multiple text-qualified rows (the first row denotes the column names). I need to insert each row into a database while concatenating the columns that vary.
The data is returned like so:
"Email Address","First Name","Last Name", "State","Training","Suppression","Events","MEMBER_RATING","OPTIN_TIME","CLEAN_CAMPAIGN_ID"
"scott#example.com","Scott","Staph","NY","Campaigns and activism","Social Media","Fundraiser",1,"2012-03-08 17:17:42","Training"
There can be up to 60 columns between State and MEMBER_RATING, and the data in those fields is to be concatenated and inserted into a single database column. The first four fields and the last three fields in the list will always be the same. I'm unsure of the best way to tackle this.
I am not sure if this solution fits your needs; I hope so. It's a Perl script that joins all fields except the first four and the last three with a hyphen surrounded by spaces (" - "). It uses a non-standard module, Text::CSV_XS, that must be installed using CPAN or a similar tool.
Content of infile:
"Email Address","First Name","Last Name","State","Training","Suppression","Events","MEMBER_RATING","OPTIN_TIME","CLEAN_CAMPAIGN_ID"
"scott#example.com","Scott","Staph","NY","Campaigns and activism","Social Media","Fundraiser",1,"2012-03-08 17:17:42","Training"
Content of script.pl:
use warnings;
use strict;
use Text::CSV_XS;

# allow_whitespace strips the spaces that follow commas in the header row
my $csv = Text::CSV_XS->new({
    allow_whitespace => 1,
});

open my $fh, q[<], $ARGV[0] or die qq[Open: $!\n];

while ( my $row = $csv->getline( $fh ) ) {
    # join every field except the first four and the last three with ' - '
    my $concat = join q[ - ], (@$row)[4 .. @$row - 4];
    # replace those middle fields with the single concatenated value
    splice @$row, 4, scalar @$row - (3 + 4), $concat;
    $csv->print( \*STDOUT, $row );
    print qq[\n];
}
Run it like:
perl script.pl infile
With following output:
"Email Address","First Name","Last Name",State,"Training - Suppression - Events",MEMBER_RATING,OPTIN_TIME,CLEAN_CAMPAIGN_ID
scott@example.com,Scott,Staph,NY,"Campaigns and activism - Social Media - Fundraiser",1,"2012-03-08 17:17:42",Training