How to create multiple BI Publisher reports from the same ESS job in Oracle Cloud

I am trying to run an ESS job that will create two files as output. My requirement is to create an extract file, plus a second file that shows how many records are present in the extract file. I know how to create the extract file by writing the select query, but how do I find the count from the first select query and send it to another file? Both files should be generated when I run the same ESS job.
One report should be created after the other.

If you haven't seen it, I believe this document on creating report jobs in BI Publisher can assist: https://docs.oracle.com/cd/E28280_01/bi.1111/e22257/create_rpt_jobs.htm#BIPUG188
Specifically, see section 4.3.
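On the data-model side, one common approach is to define two data sets in the same data model, so that a single job run can feed both outputs. A minimal sketch, assuming a hypothetical EMPLOYEES extract (the table, columns, and predicate are all placeholders):

```sql
-- Data set 1: the extract itself (hypothetical table and columns).
SELECT emp.employee_id,
       emp.first_name,
       emp.last_name
FROM   employees emp
WHERE  emp.status = 'ACTIVE';

-- Data set 2: the record count, reusing the same predicate so the
-- count file always agrees with the extract file.
SELECT COUNT(*) AS record_count
FROM   employees emp
WHERE  emp.status = 'ACTIVE';
```

Each data set can then be mapped to its own output so the count file is produced alongside (or after) the extract.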

Related

Avoid reading the same file multiple times using Telegraf and the file input plugin

I need to read CSV files inside a folder. New CSV files are generated every time a user submits a form. I'm using the "file" input plugin to read the data and send it to InfluxDB. These steps are working fine.
The problem is that the same file is read multiple times, once every data-collection interval. I was thinking of a solution where I could move a file to a different folder after it has been read, but I couldn't do that with Telegraf's "exec" output plugin.
P.S.: I can't change the way the CSV files are generated.
Any ideas on how to avoid reading the same CSV file multiple times?
As you discovered, the file input plugin reads entire files at each collection interval.
My suggestion is to use the directory monitor input plugin instead. It reads the files in a directory, monitors the directory for new files, and only parses those that have not already been picked up. There are also configuration settings in that plugin that make it easier to control when new files are read.
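For reference, a minimal sketch of what that configuration might look like (the directory paths are placeholders, and the CSV parser settings need to match your files):

```toml
[[inputs.directory_monitor]]
  ## Directory to watch for new CSV files (placeholder path).
  directory = "/data/csv_drop"
  ## Files are moved here once fully read, so each file is only read once.
  finished_directory = "/data/csv_done"
  ## Parse each file as CSV; adjust to your files' actual layout.
  data_format = "csv"
  csv_header_row_count = 1
```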
Another option is the tail input plugin, which tails a file and only reads new updates to that file as they arrive. However, I think the directory monitor plugin is more likely what you're after in your scenario.
Thanks!

How can I turn spool requests into PDF files on the application server?

I'm currently doing invoicing and printing setup on an SAP demo system. I've managed to create Smart Forms based on the standard ones. The problem starts with printing using the FPCOPARA transaction and LP01 as the output device. I was able to generate a spool (and view it as well), but nothing was printed and no actual file was produced.
I just want to have a file from that Smart Form stored in AL11 and be able to archive it later on. Do you have an idea of how I can proceed with this?
Thanks
We actually have an in-house-developed program for this exact task. I don't have permission to publish its source code, but it involves the following steps (a rough sketch follows the list):
Reading the list of spool requests from database table TSP01.
Using the function module RSTS_GET_ATTRIBUTES to obtain the type of the spool request.
Calling the function module CONVERT_OTFSPOOLJOB_2_PDF or CONVERT_ABAPSPOOLJOB_2_PDF, depending on the type determined by the previous function module. They return a table containing the content of the spool request in PDF format.
Writing the table returned by the previous function modules to a file using the ABAP statements OPEN DATASET and TRANSFER.
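A rough ABAP sketch of those steps (not our actual program; error handling is omitted, the target path is a placeholder, and the parameter lists should be checked in SE37 before use):

```abap
REPORT zspool_to_pdf.

PARAMETERS: p_rqid TYPE tsp01-rqident OBLIGATORY,  " spool request number
            p_file TYPE rlgrap-filename LOWER CASE
                   DEFAULT '/tmp/spool.pdf'.       " placeholder AL11 path

DATA: ls_tsp01   TYPE tsp01,
      lv_objtype TYPE rststype-type,
      lt_pdf     TYPE TABLE OF tline,
      ls_pdf     TYPE tline,
      lv_bytes   TYPE i.

* Step 1: read the spool request's attributes from TSP01.
SELECT SINGLE * FROM tsp01 INTO ls_tsp01 WHERE rqident = p_rqid.

* Step 2: determine whether the spool holds OTF or ABAP list data.
CALL FUNCTION 'RSTS_GET_ATTRIBUTES'
  EXPORTING
    authority = 'SP01'
    client    = ls_tsp01-rqclient
    name      = ls_tsp01-rqo1name
    part      = 1
  IMPORTING
    objtype   = lv_objtype.

* Step 3: convert to PDF with the matching function module.
IF lv_objtype(3) = 'OTF'.
  CALL FUNCTION 'CONVERT_OTFSPOOLJOB_2_PDF'
    EXPORTING
      src_spoolid   = p_rqid
    IMPORTING
      pdf_bytecount = lv_bytes
    TABLES
      pdf           = lt_pdf.
ELSE.
  CALL FUNCTION 'CONVERT_ABAPSPOOLJOB_2_PDF'
    EXPORTING
      src_spoolid   = p_rqid
    IMPORTING
      pdf_bytecount = lv_bytes
    TABLES
      pdf           = lt_pdf.
ENDIF.

* Step 4: write the PDF lines to the application server (visible in AL11).
OPEN DATASET p_file FOR OUTPUT IN BINARY MODE.
LOOP AT lt_pdf INTO ls_pdf.
  TRANSFER ls_pdf TO p_file.
ENDLOOP.
CLOSE DATASET p_file.
```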

Best way to migrate single project from Jira 5.0 to Jira 8?

I want to move a single project from a Jira 5.0 instance to a new Jira 8.0 instance that is already used for other projects, so the process must not bring in configurations, workflows, etc., nor alter the existing projects.
I'm only interested in importing issues and related data:
title, description, etc. (obviously)
attachments (images, files, whatever)
issue links
issue type (with mapping to new types in case they don't match)
... (other properties that I'm forgetting right now)
I've just started researching the topic and have already found several options, though it's not clear whether they're all available to me, mostly due to the starting Jira version. They are:
Export to CSV and import the CSV
Export to XML
Import from JSON (though I've yet to find a JSON export)
Rest API
Import project from backup
... and surely others
Of course I'd like the most complete yet least error-prone method, though if resorting to the REST API turns out to be the only way to be sure I import everything I want, I'm ready to write a script/program.
So, what should I choose?
P.S.: I'm not sure if this fits this community, is there a more proper one?
The easiest way is to do a CSV export and grab all the attachments (jira_home/data/attachments). Then copy the attachments into jira_home/import on the new instance. You'll need to edit the export file so that the attachment names and paths match, in order to import them successfully.
The last step is to import the CSV into your Jira 8 instance.
I suggest trying this on a dev/stage environment first, because there are many small details that can affect the import.
Some useful data is here:
https://confluence.atlassian.com/adminjiraserver/importing-data-from-csv-938847533.html
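If you do end up scripting against the REST API option mentioned in the question, a minimal sketch of pulling issues out of the old instance might look like this (Jira 5.0 already ships REST API v2; the base URL, credentials, and project key "PROJ" are placeholders):

```python
import requests

BASE = "https://old-jira.example.com"  # placeholder: the Jira 5.0 instance
AUTH = ("admin", "secret")             # placeholder credentials

def fetch_issues(jql, page_size=50):
    """Yield every issue matching the JQL query, page by page."""
    start = 0
    while True:
        resp = requests.get(
            f"{BASE}/rest/api/2/search",
            params={
                "jql": jql,
                "startAt": start,
                "maxResults": page_size,
                "fields": "summary,description,issuetype,attachment,issuelinks",
            },
            auth=AUTH,
        )
        resp.raise_for_status()
        data = resp.json()
        yield from data["issues"]
        start += page_size
        if start >= data["total"]:
            break

# Example: dump key and summary for every issue in the project.
for issue in fetch_issues("project = PROJ ORDER BY key"):
    print(issue["key"], issue["fields"]["summary"])
```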

Neo4j Desktop - Difference between Database and File?

I'm new to Neo4j and have just opened the Desktop application. When starting a project, the dashboard lists "Add Database" and "Add File". If data is stored in a database, then what is a file for? I don't get what a file does. Offhand, when I click on each (either the sample database or the sample file), they both open the database browser, so that doesn't help me understand the difference either.
You would use Add File to reference a file containing a Cypher query (or a series of Cypher queries).
When you open a file that you've saved here, it opens a browser window (associated with the currently running database) and pastes the file contents into the query box.
So this is a more portable way to keep important queries saved in files (such as already-established Cypher scripts) that you expect to run often or reuse/test across databases.
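For example, a saved file might contain nothing more than a small reusable script like this (hypothetical contents, just to illustrate what "Add File" holds):

```cypher
// A tiny setup-and-check script you might rerun across databases.
CREATE (:Person {name: 'Alice'})-[:KNOWS]->(:Person {name: 'Bob'});
MATCH (a:Person)-[:KNOWS]->(b:Person)
RETURN a.name AS who, b.name AS knows;
```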
Add Database is used to create a new database instance (technically "DBMS" would be the better term, since this has nothing to do with Neo4j's multi-database features). You can select the version of Neo4j to use for the database, and configure and manage it as needed.

Kettle over kettle transform file in Pentaho CDE

I'm facing an issue with the kettle over kettle transform step in Pentaho CDE. I have created a transformation file, and on its own it works perfectly.
In the properties of the kettle over kettle transform step there is an option to select the transformation file, but when I browse I can only see three folders: home, public, etc.
So where do I have to keep my transformation file so that I can access it from "select transformation file"?
You can create a separate folder/directory (e.g. Transformations) inside any of the already-present directories (say: Admin). Next, refresh your repository/cache and you will be able to see the files. Link it from your CDE.
Ideally, when building a project, I keep a separate folder named with my project initials, say PROJECT, and create the rest of the sub-folders inside it. This helps keep the project's code separated.
Hope this helps :)
Edit:
Post Pentaho version 5, the files in the User Console cannot be accessed from your local file system. The only way to upload or download a file is either to load it from the User Console or to execute commands from the command line; check the link below. The files are stored internally in the Jackrabbit repository of the Pentaho BI server.
http://infocenter.pentaho.com/help/index.jsp?topic=%2Fadmin_guide%2Ftask_import_export_repository.html
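As an illustration of the command-line route, uploading a local transformation with the bundled import-export script might look roughly like this (a sketch only: the paths, credentials, and repository folder are placeholders, and the exact flags should be checked against your Pentaho version):

```sh
./import-export.sh --import \
  --url=http://localhost:8080/pentaho \
  --username=admin --password=password \
  --charset=UTF-8 \
  --path=/public/PROJECT/Transformations \
  --file-path=/local/path/my_transformation.ktr
```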
Extra note: if you still want to access the files, there is a REST API that handles most of the Pentaho BI server's capabilities. You may also check this link: http://help.pentaho.com/Documentation/5.2/0R0/070/010/0A0/0Q0#
