Here is the scenario: I am combining the queries of 10 tables into one data model. From the sample XML of that same data model I have developed 5 different reports in RTF templates, but each report needs to be filtered by a different parameter, and I don't see exactly how to achieve that.
For example: the 1st report should be filtered by booking-date, the 2nd by category-id='1001', and the 3rd by category-id in ('2001','2003','2004', ...).
You can give the filter criteria within square brackets in your for-each loops, e.g.:
<?for-each:root[category-id='1001']?>
This will pick up only those nodes that meet the criterion. Of course, the actual command will depend on the schema of your data.
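For the three example filters above, the predicates could look like this (a sketch; booking-date and category-id are the element names from the example, and p_booking_date is a hypothetical report parameter, declared in the template with <?param@begin:p_booking_date?>):
<?for-each:root[booking-date=$p_booking_date]?>
<?for-each:root[category-id='1001']?>
<?for-each:root[category-id='2001' or category-id='2003' or category-id='2004']?>
XPath 1.0 has no "in" operator, so a value list becomes a chain of or comparisons. All five reports can then share the same data model and differ only in the predicate used in each RTF template's for-each loop.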
I have a list with clients and data about their orders (date, amount ordered, price etc.).
Each sheet is named [m]/[year], e.g. 1/23 for January 2023.
I am trying to create a separate sheet with statistics that would give me a better overview of the data in the first sheet. I'd like to filter it by period, category and location, which can be selected from a drop-down menu.
I have this formula, which takes the sheet name from Period and uses the Category and Location filter keywords to sum the totals from all orders corresponding to the selected filters.
=SUMIFS(INDIRECT($A2&"!"&"F2:F");INDIRECT($A2&"!"&"C2:C");$B$2;INDIRECT($A2&"!"&"D2:D");$C$2)
This works perfectly. However, I would like to implement an "All" option for the filters too.
Summing up everything at the same time wouldn't be such a problem, but the filters can have any combination of "All" options selected, which adds up to 8 possible combinations across the three filters.
My thought process is to create 8 different branches of SUMIFS nested in an IFS function.
Is there a simpler, more elegant way of doing filters in Google Sheets? I am not just looking for the solution, I need a pointer in the right direction so that I can read up on it and learn it.
Here's a generalized example with some sample data to handle your expected scenario:
=sumif(A:A,if(D2="All","<>All",D2),B:B)
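The same trick can be applied per criterion inside your original formula, so no IFS branching over the 8 combinations is needed. A sketch, assuming the drop-downs in $B$2 and $C$2 contain the literal text "All" when everything should be included, and that no data value is itself "All":
=SUMIFS(INDIRECT($A2&"!"&"F2:F");INDIRECT($A2&"!"&"C2:C");IF($B$2="All";"<>All";$B$2);INDIRECT($A2&"!"&"D2:D");IF($C$2="All";"<>All";$C$2))
When a filter is set to "All", its criterion becomes "<>All", which matches every row, so that condition effectively drops out.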
My company is using BI Publisher for some data dumps and I know BI Publisher isn't really designed for that, but this is what I have to use.
I have two files with over 100 fields each. Is there a way to add every field to the report or do I have to add each field individually?
If you have a data model export, you can use the BIP designer in Word to add a table and select multiple fields for the output. The wizard will generate the xdo code for you.
Use E-Text output. It is designed for EFT (Electronic Funds Transfer) in the Payments module, but you can use it to export a CSV (comma-separated values) file or a fixed-width file, both of which can be opened in Excel. The E-Text output is fully documented in the BI Publisher user guides. Some advanced stuff is a bit harder to accomplish, but it is not terribly difficult; a simple CSV file should be quick and easy to create. You will need to list every field: there's no "give me everything" command.
It's actually to your benefit to use those outputs for larger data extracts: when you use Word RTF tables, the output Excel files are VERY large due to the way BI Publisher formats the cells.
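For reference, here is a rough outline of a delimiter-based E-Text template (a sketch loosely following the eText chapter of the BI Publisher guides; the level and field names are hypothetical, the column set is abbreviated, and in the actual RTF file these commands live in Word tables):
<TEMPLATE TYPE>         DELIMITER_BASED
<OUTPUT CHARACTER SET>  iso-8859-1
<NEW RECORD CHARACTER>  Carriage Return
<LEVEL>             G_EMPLOYEE
<MAXIMUM LENGTH>    <FORMAT>    <DATA>
30                  Alpha       EMPLOYEE_NAME
10                  Number      SALARY
There is one row per output field, so this is where every one of your 100+ fields has to be listed.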
I'm designing a business analysis report using Crystal Reports XI with an Oracle stored procedure as the data source. The report contains a crosstab with one row field (on the left) and summarized values under the selling station names.
The requirement is to have multiple attribute columns on the left, like Product ID, Product Name, Product Color, Product Size, Product Sold Date etc., and, at the end, the summarized values. What I've done so far is a crosstab with only one column at the left and then the summarized values.
Here is a sample of the crosstab as required.
I've done plenty of R&D but didn't find any appropriate solution.
The output of report is required to match the format provided by business user.
The solution I devised is as follows:
A crosstab aggregates and jointly displays the distribution of two or more variables by tabulating their results against one dimension. The problem was how to increase the number of dimensions on the row side. Since this goes against the logic of a crosstab, I modified my stored procedure to concatenate the dimensions into one single string and created the crosstab against that. The dimensions are separated by a delimiter such as '~'; you can use another one for better readability.
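For illustration, the concatenated row dimension in the stored procedure's query could look like this (a sketch; the table and column names are hypothetical, modelled on the attribute list above):
SELECT product_id
       || '~' || product_name
       || '~' || product_color
       || '~' || product_size
       || '~' || TO_CHAR(product_sold_date, 'YYYY-MM-DD') AS row_dim,
       selling_station,
       qty_sold
FROM   product_sales;
The crosstab then uses ROW_DIM as its single row field, SELLING_STATION as the column field, and summarizes QTY_SOLD as before.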
I'm currently working with Google Sheets to import data from Contact Form 7 in WordPress. All the data is coming over fine, but I wanted to see about formatting it in a more user-friendly fashion. I've simplified the example a bit, but the gist is that the form I created allows the user to request multiple versions of a graphic file with different wording as needed, up to 5 (my example has just 2 for simplicity's sake).
All the data is imported using the CF7 variables, and ideally I wanted to clean this up a bit. My idea was to create a second sheet that pulls the data submitted on the first sheet into a more user-friendly format, since I intend to use it as a work form for a designer to create the requested graphic once the data is received. With each request the name/department/email/date all stay the same, but I'd like to display the version and line 1 and 2 data on another line. Is it possible to reorganize data like this on the fly, so that when a new form submission adds data to sheet 1, sheet 2 updates with the properly formatted info?
Is this even possible to do? I did some looking online, but didn't find anything that really related to this type of data manipulation.
Solution:
Here's what ended up working for my example:
=ArrayFormula(QUERY({
Sheet1!A2:D,Sheet1!E2:G,ROW(Sheet1!A2:A);
IFERROR(LEN(Sheet1!A2:D)/0),Sheet1!H2:J,ROW(Sheet1!A2:A);
IFERROR(LEN(Sheet1!A2:D)/0),Sheet1!K2:M,ROW(Sheet1!A2:A);
IFERROR(LEN(Sheet1!A2:D)/0),Sheet1!N2:P,ROW(Sheet1!A2:A);
IFERROR(LEN(Sheet1!A2:D)/0),Sheet1!Q2:S,ROW(Sheet1!A2:A)
},"select Col1,Col2,Col3,Col4,Col5,Col6,Col7 where Col5<>'' order by Col8",1))
Yes, it's possible.
One way is to use arrays and the QUERY function.
For simplicity, let's say that:
Columns A and B have the general information of the order
Columns C and D have the data for version 1
Columns E and F have the data for version 2
Columns G and H have the data for version 3
On the output sheet, add the headers.
Below them, add a formula like the following:
=ArrayFormula(QUERY({A2:B,C2:D,ROW(A2:A);IFERROR(LEN(A2:B)/0),E2:F,ROW(A2:A);IFERROR(LEN(A2:B)/0),G2:H,ROW(A2:A)},"select Col1,Col2,Col3,Col4 where Col3<>'' order by Col5"))
References start on row 2 to skip the headers, so they aren't included on the output sheet.
ROW(A2:A) is used to keep the original row order; it's the column referred to as Col5 in the order by clause of the QUERY function.
IFERROR(LEN(A2:B)/0) is a "trick" used to "hide" the order's general information on the second and following rows for the same order: the division by zero yields an error, and IFERROR turns it into a blank.
It's assumed that lookup-choice-1 will never be empty.
NOTES:
If more columns were added, the column numbers should be updated accordingly
Don't use the order by clause to sort the result by the general-information columns, because that would break the "trick" used to hide the "labels". If you need to apply a sort, do it before applying the above formula; you could do this by sorting the source range through the Data > Sort range... feature, so the data is sorted before it's transformed.
See also
Sort and filter your data, an official help article describing Data > Sort range...
Hi, after performing a GroupByKey on a KV PCollection, I need to:
1) Make every element in that PCollection a separate individual PCollection.
2) Insert the records in those individual PCollections into a BigQuery Table.
Basically my intention is to create a dynamic date partition in the BigQuery table.
How can I do this?
An example would really help.
For Google Dataflow to be able to perform the massive parallelisation which makes it one of a kind (as a service on the public cloud), the job flow needs to be predefined before it is submitted to the Google Cloud console. Every time you execute the jar file that contains your pipeline code (which includes the pipeline options and the transforms), a JSON file with the description of the job is created and submitted to the Google Cloud platform. The managed service then uses this to execute your job.
The use case mentioned in the question demands that the input PCollection be split into as many PCollections as there are unique dates. For the split, the tuple tags needed to split the collection would have to be created dynamically, which is not possible at this time. Creating tuple tags dynamically is not allowed because it would prevent the job-description JSON file from being created up front, which defeats the design and purpose with which Dataflow was built.
I can think of a couple of solutions to this problem (each with its own pros and cons):
Solution 1 (a workaround for the exact use case in the question):
Write a Dataflow transform that takes the input PCollection and, for each element in the input:
1. Checks the date of the element.
2. Appends the date to a pre-defined BigQuery table name as a decorator (in the format yyyyMMdd).
3. Makes an HTTP request to the BQ API to insert the row into the table named with the decorator.
You will have to take the cost perspective into account with this approach, because there is a single HTTP request for every element, rather than the one BQ load job that would have handled it had we used the BigQueryIO Dataflow SDK module.
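A minimal sketch of such a transform, assuming the Dataflow 1.x Java SDK and the google-api-services-bigquery client (the project, dataset and table names and the "date" field are hypothetical, and the streaming-insert call is used in place of a raw HTTP POST):

import java.util.Collections;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.BigqueryScopes;
import com.google.api.services.bigquery.model.TableDataInsertAllRequest;
import com.google.api.services.bigquery.model.TableRow;
import com.google.cloud.dataflow.sdk.transforms.DoFn;

// Inserts each element into a date-decorated BigQuery table: one request per row.
class InsertWithDateDecoratorFn extends DoFn<TableRow, Void> {
  private transient Bigquery bigquery;

  @Override
  public void startBundle(Context c) throws Exception {
    // Build an authorized BigQuery client once per bundle, not once per element.
    GoogleCredential credential =
        GoogleCredential.getApplicationDefault().createScoped(BigqueryScopes.all());
    bigquery = new Bigquery.Builder(GoogleNetHttpTransport.newTrustedTransport(),
        JacksonFactory.getDefaultInstance(), credential)
        .setApplicationName("date-partitioned-insert").build();
  }

  @Override
  public void processElement(ProcessContext c) throws Exception {
    TableRow row = c.element();
    // 1. Check the date of the element (assumes a "date" field like "2017-01-15").
    String day = ((String) row.get("date")).replace("-", "");
    // 2. Append the date to the table name as a partition decorator, e.g. orders$20170115.
    String tableId = "orders$" + day;
    // 3. Insert the row into that table through the BQ API.
    TableDataInsertAllRequest request = new TableDataInsertAllRequest().setRows(
        Collections.singletonList(new TableDataInsertAllRequest.Rows().setJson(row)));
    bigquery.tabledata().insertAll("my-project", "my_dataset", tableId, request).execute();
  }
}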
Solution 2 (best practice that should be followed in this type of use case):
1. Run the Dataflow pipeline in streaming mode instead of batch mode.
2. Define a time window of whatever duration is suitable to the scenario in which it is used.
3. For the `PCollection` in each window, write it to a BQ table with the decorator being the date of the time window itself.
You will have to consider rearchitecting your data source to send data to Dataflow in real time, but you will have a dynamically date-partitioned BigQuery table with the results of your data processing available in near real time.
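As a sketch, with the Dataflow 1.x Java SDK this per-window write can use the BigQueryIO.Write variant that takes a function from window to table name (the project, dataset and table names are hypothetical; the $yyyyMMdd suffix is the partition decorator, and a sharded name like orders_20170115 would work similarly):

import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import com.google.cloud.dataflow.sdk.io.BigQueryIO;
import com.google.cloud.dataflow.sdk.transforms.SerializableFunction;
import com.google.cloud.dataflow.sdk.transforms.windowing.BoundedWindow;
import com.google.cloud.dataflow.sdk.transforms.windowing.FixedWindows;
import com.google.cloud.dataflow.sdk.transforms.windowing.IntervalWindow;
import com.google.cloud.dataflow.sdk.transforms.windowing.Window;
import com.google.cloud.dataflow.sdk.values.PCollection;
import org.joda.time.Duration;
import org.joda.time.format.DateTimeFormat;

class WriteByDay {
  // Windows the stream into daily windows and routes each window to its own table.
  static void writeByDay(PCollection<TableRow> rows, TableSchema schema) {
    rows.apply(Window.<TableRow>into(FixedWindows.of(Duration.standardDays(1))))
        .apply(BigQueryIO.Write
            .to(new SerializableFunction<BoundedWindow, String>() {
              @Override
              public String apply(BoundedWindow window) {
                // Name the destination after the day the window covers.
                String day = DateTimeFormat.forPattern("yyyyMMdd")
                    .print(((IntervalWindow) window).start());
                return "my-project:my_dataset.orders$" + day;
              }
            })
            .withSchema(schema)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
  }
}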
References -
Google Big Query Table Decorators
Google Big Query Table insert using HTTP POST request
How job description files work
Note: please ask in the comments and I will elaborate the answer with code snippets if needed.