How to get Data Header in eText File Template - BI Publisher

I am trying to get a header on my data table; however, when I create another group, it repeats the header the same number of times as my data.
So if I have 10 rows of data I get 10 header rows.
I have attached my eText file.
https://drive.google.com/file/d/1im-iGEIhZuhjlb2XbeD5FoNmQ6y3raz4/view?usp=sharing
Could you please advise me?

We would have to see your data XML, but it looks like you also have 10 WORKER nodes in it; that could be why the header repeats.
Instead of WORKER, try G_1, or whatever the root of your data XML is.
Another option is to try WORKER[position()=1], so the header only prints for the first worker.
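The idea behind both suggestions can be illustrated outside BI Publisher. This is a minimal sketch using Python's stdlib ElementTree and a made-up G_1/WORKER data shape: an expression anchored at WORKER matches once per worker (one header per row), while anchoring at the single root, or adding a positional predicate, matches only once.

```python
# Sketch of the XPath anchoring issue, with a hypothetical data XML:
# one G_1 root containing 10 WORKER nodes.
import xml.etree.ElementTree as ET

xml = "<G_1>" + "<WORKER><NAME>x</NAME></WORKER>" * 10 + "</G_1>"
root = ET.fromstring(xml)

workers = root.findall("WORKER")       # matches every worker -> 10 headers
first_only = root.findall("WORKER[1]") # positional predicate -> 1 header

print(len(workers), len(first_only))
```

ElementTree uses `[1]` where full XPath writes `[position()=1]`, but the effect is the same: the header level fires once instead of once per repeating node.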

Related

How to load data from a text file to a table in an ODI 12c mapping?

I am trying to load data from a text file to a table. The mapping executes successfully, but no data is loaded into the staging table. I am using LKM File to SQL and SQL Control Append as the IKM. The staging table is created, but no rows are loaded into it; the insert count is 0.
From what I can see, you have not created the datastore properly. For a text file, you have to manually set the correct options: whether the file is fixed-width or delimited, and if delimited, which separator it uses (comma, pipe, etc.). After setting these, view the data to confirm the columns are parsed and that each datatype is correct, then create the source file datastore in the model.

Align imported Data with manually added data

Example File: https://docs.google.com/spreadsheets/d/1Ht_2QMGMbHmlxSPoOiLX2vw78IL1wp3VhpOOc66eMqY/edit#gid=0
We are filling in Points 1-4 manually. The data in A:C is sorted by column C and will change every now and then. The problem I am noticing is that the rows in A:C move, but D:G stays in the same place.
We want to use this file to fill in the data, since it's our main file, so extending the initial =query to also take D:G into account is not an option.
Would there be any other way to "link" D:G to the corresponding values in A:C?
Looking at your sheet, I noticed you tried a VLOOKUP formula.
Please try the following formula:
=INDEX(IFERROR(VLOOKUP(A1:A;Blad1!A2:I;{6\7\8\9};0)))
Of course your approach causes problems: you're trying to map manual data to data that is bound to change. You can't expect the manual data to move or change in sync when the imported data changes.
You could probably make it work if the imported data never changes order and only has new rows appended. Even then, it breaks if any of the imported rows is deleted.
There are only two ways I can see to make this work:
Map your manual data as part of the original sheet your other data is imported from. In other words, make D:G part of the source of A:C, if possible. This is the best approach: it works even when some imported rows are deleted or changed.
Don't sort A:C at the source. Simply append new rows and import as-is. Keep your Blad1 sheet as the local source sheet, and add your manual data to D:G there. Then create a new sheet for sorting or anything else you'd like, and use this new sheet to feed your Blad2 sheet. This doesn't work if some of the imported rows are deleted or changed.
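The core idea behind the VLOOKUP/INDEX formula above can be sketched in plain Python (the identifiers and values here are hypothetical): store the manual D:G values keyed by the row's identifier from column A, not by row position, so re-sorting the imported data cannot jumble them.

```python
# Imported rows as (key, value) pairs, arriving in one order...
imported = [("id3", "c"), ("id1", "a"), ("id2", "b")]
# ...and manual notes keyed by the identifier column, not by position.
manual = {"id1": "note-1", "id2": "note-2", "id3": "note-3"}

# When the source re-sorts, rebuild the aligned view with a lookup --
# which is exactly what VLOOKUP does in the sheet.
resorted = sorted(imported)
aligned = [(key, val, manual.get(key, "")) for key, val in resorted]
```

However the import reorders itself, each manual note stays attached to its key rather than to a row number.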

Why does my InfluxDB insert a new row of data only every 10 seconds?

I wrote a timed task in C# to insert one row of data per second, but I found that only one row is inserted every 10 seconds.
I also noticed that new insert requests within that 10-second window only update the same row of data instead of inserting a new one.
What setting causes this, and how do I change it?
The InfluxDB version is 2.2; I downloaded it from the website and started it directly without changing any configuration.
You are probably using the query builder, which aggregates data (it prepares a query with aggregation); there is a window-period setting for this in the InfluxDB v2 web GUI.
Setting the period to 1s, or writing your own query without any aggregation, should solve your problem.
What's more, writing data to InfluxDB with the same tag keys, the same timestamp, and the same field name will overwrite the existing value, so the behaviour you describe is normal.
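The overwrite rule can be modelled with a toy sketch (this is an illustration of the semantics, not the real storage engine): a point's identity is its measurement plus tag set plus field key plus timestamp, and writing the same identity again replaces the value instead of adding a row.

```python
# Toy model of InfluxDB write semantics: last write wins per point identity.
store = {}

def write(measurement, tags, field, value, timestamp):
    # Identity = measurement + tag set + field key + timestamp.
    key = (measurement, frozenset(tags.items()), field, timestamp)
    store[key] = value  # same identity -> overwrite, not a new row

write("cpu", {"host": "a"}, "usage", 1.0, 100)
write("cpu", {"host": "a"}, "usage", 2.0, 100)  # same identity: overwrites
write("cpu", {"host": "a"}, "usage", 3.0, 101)  # new timestamp: new point
```

After these three writes the store holds two points, with the value at timestamp 100 being 2.0 — the same thing the questioner saw when per-second writes landed on the same effective identity.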

Essbase Hyperion: add more than 100 data rows to a rule file in bulk

I already have a rule file (e.g. rule MM01), and I need to add more data rows to one dimension in it, like below.
For example, I want to add 100 more rows of data in the "Replace" and "With" columns.
Do I have to add the 100 rows one by one, entering them manually? Or is there another way to add bulk data to a rule file?
Nope, you just have to type them in.
If new items keep popping up in your source data, you might consider one of the following:
put your source text file into a SQL table and make your load rule read from the table (or, even better, try to load directly from the tables that generated the text file)
(assuming you have the data load automated via MaxL) add a PowerShell script that does the renames before you load the data
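The pre-load rename step could be scripted in any language, not just PowerShell. Here is a sketch in Python under assumed formats: a CSV of Replace,With pairs (standing in for the rule file's 100+ rows) applied to a tab-delimited data file before Essbase loads it.

```python
# Hypothetical pre-load step: apply a bulk Replace/With mapping to the
# source data instead of typing 100+ rows into the rule file by hand.
import csv

def apply_renames(mapping_csv_lines, data_lines):
    # Each mapping line is "old,new"; build a lookup table from it.
    mapping = {row[0]: row[1] for row in csv.reader(mapping_csv_lines)}
    out = []
    for line in data_lines:
        fields = line.split("\t")
        # Rename any field that appears in the mapping; keep the rest.
        out.append("\t".join(mapping.get(f, f) for f in fields))
    return out

renamed = apply_renames(["OLD1,NEW1"], ["OLD1\tJan\t100"])
```

Run as part of the automated MaxL load, this keeps the rule file small while the mapping lives in an easily edited file or table.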

Manual entries in a Google spreadsheet do not match when the data gets updated

I have a Google spreadsheet with some columns of data written by a Python script. After the last data column, I have added three more columns whose values are entered manually. The Python script runs daily, updating the data in the spreadsheet. My issue is that whenever the script updates the data, the data in the last three manual columns gets jumbled. This is because the order of the rows returned by the script's SQL query is different every time. We could use ORDER BY to keep the order the same, but if new rows are added or existing rows are deleted from the database, that would not work either.
As stated in this related thread, I think it's expected behavior, because the imported data is dynamic while the data you are adding is static.
The idea is that you don't add any columns to the sheet that receives the imported data, since that data is dynamic.
Instead, you create a new sheet and select the data from the sheet that holds the imported data.
In this case, the Notes sheet selects the imported data by order number; the other columns are then pulled from the ImportedData sheet with the =vlookup() function and displayed, and you enter the required note for that record there.
You may check the link above for more information.
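The two-sheet pattern can be sketched in Python with hypothetical field names: the Notes sheet keys each manual entry by order number, so the rebuilt view stays aligned even when the script's query returns rows in a new order or drops a row entirely.

```python
# Imported rows as the daily script might return them: reordered,
# and with order 6 deleted upstream.
imported = [{"order": 7, "amount": 30},
            {"order": 5, "amount": 10}]
# Manual notes kept in a separate "sheet", keyed by order number.
notes = {5: "paid", 6: "pending", 7: "refund"}

# Rebuild the combined view by key lookup (the =vlookup() step),
# never by row position.
view = [(row["order"], row["amount"], notes.get(row["order"], ""))
        for row in imported]
```

The note for the deleted order 6 simply stops appearing in the view; nothing shifts onto the wrong row.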
