Oracle RTF template for eText - delete last row - BI Publisher

I have created an RTF template for generating an eText file in BI Publisher. I set NEW RECORD CHARACTER to "Carriage Return" because I need each record to start on a new row. The problem is that after all the XML data has been processed there is an empty row left at the end of the file, and I have had no success removing it.
Example:
firstname1;lastname1;
firstname2;lastname2;   <- the file should end here, with no newline after the last record
Any ideas please?
Template example:
<LEVEL> G_DATA
<MAXIMUM LENGTH>  <FORMAT>  <DATA>
<NEW RECORD>      ReportData
50                Alpha     FIRST_NAME
                            ';'
50                Alpha     LAST_NAME
                            ';'
<END LEVEL> G_DATA
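If the template itself can't suppress the final new-record character, one workaround (a post-processing step outside BI Publisher, not a template feature) is to strip the trailing newline from the generated file after the fact. A minimal sketch in Python, with an illustrative file path:

```python
# Remove the trailing carriage return / line feed left after the last record.
# Post-processing workaround; the path argument is illustrative.
def strip_trailing_newline(path: str) -> None:
    with open(path, "rb") as f:
        data = f.read()
    data = data.rstrip(b"\r\n")  # drop any final CR/LF characters
    with open(path, "wb") as f:
        f.write(data)
```

This could be scheduled as a small post-burst step after the eText output is written.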

Related

SoftwareAG webMethods EDI mapping question: how to map one record into multiple records

I am trying to map one record into multiple records using a webMethods Designer flow service, i.e. one row converted into several rows.
Please help me write a webMethods flow service to map the following using LOOP, REPEAT, MAP, etc.
SourceRecord                          TargetRecord
DT (record initiator) (1..1)          DTM (record initiator) (1..many times)
DateFields:
  OrderDate                           DTM_01
  SalesDate                           DTM_02
  ExpireDate
Sample Input data ( element delimiter "," and segment terminator newline)
DT,20200914,20200916,20230913 <-- where DT is record initiator "," is element separator
and orderDate = 20200914
SalesDate = 20200916
ExpireDate = 20230913
Desired Output Data ( multiple rows) ( DTM is record initiator element delimiter "*" and segment terminator newline)
DTM*002*20200914 <-- 002 is qualifier for OrderDate
DTM*007*20200916 <-- 007 is the qualifier for SalesDate
DTM*036*20230913 <-- 036 is the qualifier for ExpireDate
There is not enough information. Do you have one string with one record of input data? Do you have a list of strings or a document list? Most likely the record comes from a flat file?
Is the output a string list or document list?
Anyway, the simple solution to your question (assuming a single input record) is to tokenize the input string with pub.string:tokenize and map the resulting tokens to the output by index, concatenating each with its preset qualifier.
Then build your output string from that string list using pub.string:makeString, with a newline as the separator (in the Designer input field the newline shows up as the cursor sitting on the second line).
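The same tokenize-then-join logic can be sketched outside webMethods; here in Python, emulating pub.string:tokenize and pub.string:makeString. The 002/007/036 qualifiers come from the question; everything else is illustrative:

```python
# Qualifiers for OrderDate, SalesDate, ExpireDate (from the question).
QUALIFIERS = ["002", "007", "036"]

def dt_to_dtm(record: str) -> str:
    tokens = record.split(",")          # like pub.string:tokenize with "," delimiter
    assert tokens[0] == "DT", "expected DT record initiator"
    dates = tokens[1:]
    # Pair each date with its qualifier and build one DTM segment per date.
    lines = [f"DTM*{q}*{d}" for q, d in zip(QUALIFIERS, dates)]
    return "\n".join(lines)             # like pub.string:makeString with newline separator

print(dt_to_dtm("DT,20200914,20200916,20230913"))
# DTM*002*20200914
# DTM*007*20200916
# DTM*036*20230913
```

In the flow service the loop body would be a MAP step doing the qualifier concatenation for each tokenized element.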

Escaping Special Characters in Import for Neo4j

In my where clause I am trying to escape a special character "#" by following the manual's recommendation regarding backticks when creating a node:
WHERE line.`The #` IS NOT NULL AND line.`Person's First/Last Name` IS NOT NULL
However, when I do this, I get message:
No data returned, and nothing was changed.
Am I escaping the header values ("The #" and "Person's First/Last Name") properly?
This example code works for me, so it doesn't look like your problem is the backtick escaping.
CREATE (n:TestNode { `The #`:"123", `Person's First/Last Name`:"john johnson" });
MATCH (line)
WHERE line.`The #` IS NOT NULL AND line.`Person's First/Last Name` IS NOT NULL
RETURN line.`The #`, line.`Person's First/Last Name`;
line.`The #`    line.`Person's First/Last Name`
123             john johnson
Returned 1 row in 128 ms

Display only a part of a string

I'm selecting an email address but I don't want to display the full email, only the part before the '@'. I know how to display a fixed number of characters, but how do I display everything up to the '@' symbol?
Thank you.
Recent versions of Informix SQL have the CHARINDEX() function, which can be used to locate where the '@' symbol appears:
SELECT LEFT(email_addr, CHARINDEX('@', email_addr)-1)
CHARINDEX() will return 0 if not found, otherwise the ordinal position of the located string. My testing found that LEFT() doesn't complain about being passed 0 or -1, so it's safe to execute this as is, you don't have to verify that you get something back from CHARINDEX() first.
CREATE TEMP TABLE ex1
(
    email_addr VARCHAR(60)
) WITH NO LOG;

INSERT INTO ex1 VALUES ('ret@example.com.au');
INSERT INTO ex1 VALUES ('emdee@gmail.com');
INSERT INTO ex1 VALUES ('unknown');
INSERT INTO ex1 VALUES (NULL);
INSERT INTO ex1 VALUES ('@bademail');

SELECT LEFT(email_addr, CHARINDEX('@', email_addr)-1) FROM ex1;
... produces:
(expression)
ret
emdee
5 row(s) retrieved.
If you have an older version of Informix that doesn't support CHARINDEX(), you'll have to iterate through the string character by character until you find the '@' symbol.

Parsing a CSV file with rows of varying lengths

I am calling a web service that returns a comma-separated dataset with varying columns and multiple text-qualified rows (the first row gives the column names). I need to insert each row into a database, concatenating the columns that vary.
The data is returned like so
"Email Address","First Name","Last Name", "State","Training","Suppression","Events","MEMBER_RATING","OPTIN_TIME","CLEAN_CAMPAIGN_ID"
"scott@example.com","Scott","Staph","NY","Campaigns and activism","Social Media","Fundraiser",1,"2012-03-08 17:17:42","Training"
There can be up to 60 columns between State and Member_Rating, and the data in those fields are to get concatenated and inserted into one database column. The first four fields and the last three fields in the list will always be the same. I'm unsure the best way to tackle this.
I am not sure if this solution fits your needs; I hope so. It's a Perl script that joins all fields except the first four and the last three with " - " (a hyphen surrounded by spaces). It uses a non-core module, Text::CSV_XS, which must be installed from CPAN or a similar tool.
Content of infile:
"Email Address","First Name","Last Name","State","Training","Suppression","Events","MEMBER_RATING","OPTIN_TIME","CLEAN_CAMPAIGN_ID"
"scott@example.com","Scott","Staph","NY","Campaigns and activism","Social Media","Fundraiser",1,"2012-03-08 17:17:42","Training"
Content of script.pl:
use warnings;
use strict;
use Text::CSV_XS;

my $csv = Text::CSV_XS->new({
    allow_whitespace => 1,
});

open my $fh, q[<], $ARGV[0] or die qq[Open: $!\n];

while ( my $row = $csv->getline( $fh ) ) {
    # Join every field except the first four and the last three with " - ",
    # then replace those middle fields with the single concatenated value.
    my $concat = join q[ - ], (@$row)[4 .. @$row - 4];
    splice @$row, 4, scalar @$row - (3 + 4), $concat;
    $csv->print( \*STDOUT, $row );
    print qq[\n];
}
Run it like:
perl script.pl infile
With following output:
"Email Address","First Name","Last Name",State,"Training - Suppression - Events",MEMBER_RATING,OPTIN_TIME,CLEAN_CAMPAIGN_ID
scott@example.com,Scott,Staph,NY,"Campaigns and activism - Social Media - Fundraiser",1,"2012-03-08 17:17:42",Training

Data collection task

I have data that follows this kind of pattern:

ID     Name1  Name2  Name3   Name4   .....
41242  MCJ5X  TUAW   OXVM4   Kcmev 1
93532  AVEV2  WCRB3  LPAQ 2  DVL2
...

As of now this is just formatted in a spreadsheet with about 6000 lines. What I need to do is create a new row for each Name after Name1 and associate it with the ID on its current row. For example, see below:
ID     Name1
41242  MCJ5X
41242  TUAW
41242  OXVM4
41242  Kcmev 1
93532  AVEV2
93532  WCRB3
93532  LPAQ 2
93532  DVL2
Any ideas how I could do this? I feel like it shouldn't be too complicated, but I'm not sure of the best approach. Whether it's a script or some function, I'd really appreciate the help.
If possible, you might want to use a CSV file. These files are plain text and most spreadsheet programs can open and modify them (I know Excel and OpenOffice can). If you go with this approach, your algorithm will look something like this:

read everything into a string array
create a 1-to-many data structure (maybe a Dictionary<string, List<string>> or a list of (string, string) tuples)
loop over each line of the file
    split the current line on the ','s and loop over the pieces
        if this is the first piece, add a new item to the 1-to-many structure with the current piece as the Id
        otherwise, add this piece to the "many" (name) part of the last item in the structure
create a new CSV file or open the old one for writing
output the "ID, Name1" header row
loop over each 1-many item in the data collection
    loop over the many items in the current 1-many item
        output the 1 (id) + "," + the current many item (current name)
You could do this in just about any language. If it's a one-time script, then Python, Ruby, or PowerShell (depending on your platform) would probably be a good choice.
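The steps above can be sketched in Python using the standard csv module (file names and the empty-cell handling are illustrative assumptions):

```python
import csv

# Unpivot "ID, Name1, Name2, ..." rows into one (ID, Name) pair per row,
# following the algorithm described above. Paths are illustrative.
def unpivot(in_path: str, out_path: str) -> None:
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        next(reader)                      # skip the original multi-name header
        writer.writerow(["ID", "Name1"])  # single-name header for the output
        for row in reader:
            row_id, *names = row
            for name in names:
                if name:                  # skip empty trailing cells in short rows
                    writer.writerow([row_id, name])
```

Saving the spreadsheet as CSV first, then running this over it, produces the two-column layout shown in the question.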
