bulk insert using OCI - oracle-call-interface

I am currently inserting records one by one into a table from C++ code using OCI. The data is in a hashmap of structs; I iterate over the elements of the map, binding the attributes of the struct to the columns of a record in the table, e.g.:
define insert query
use OCIBindByName() for all the columns of the record
iterate over map
    assign bind variables from the attributes of the struct
    OCIStmtExecute()
end
This is pretty slow, so I'd like to speed it up by doing a bulk insert. What is a good way to do this? Should I use an array of structs to insert all the records in one OCIStmtExecute()? Do you have any example code which shows how to do this?

This is how I implemented it in OCI*ML. In summary, the way to do it (say, for a table with one column of integers) is:
1. malloc() a block of memory of sizeof(int) × the number of rows and populate it. This could simply be an array.
2. Call OCIBindByPos() with that pointer for valuep and the size of one element (sizeof(int)) for value_sz.
3. Call OCIStmtExecute() with iters set to the number of rows from step 1.
In my experience, speedups of 100× are certainly possible.
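To make that concrete, here is a minimal sketch in C of those three steps (not the actual OCI*ML code). It assumes the service-context, error and statement handles (svchp, errhp, stmtp) are already allocated and a session is established; the table and column names are made up for illustration:
#include <string.h>
#include <oci.h>

/* Sketch: insert nrows integers with a single OCIStmtExecute().
 * Assumes svchp/errhp/stmtp are already allocated and a session exists;
 * error handling is reduced to returning the OCI status code. */
static sword bulk_insert_ints(OCISvcCtx *svchp, OCIError *errhp,
                              OCIStmt *stmtp, int *values, ub4 nrows)
{
    const char *sql = "INSERT INTO t (n) VALUES (:1)";  /* table/column invented */
    OCIBind *bindp = NULL;
    sword rc;

    rc = OCIStmtPrepare(stmtp, errhp, (const OraText *)sql,
                        (ub4)strlen(sql), OCI_NTV_SYNTAX, OCI_DEFAULT);
    if (rc != OCI_SUCCESS) return rc;

    /* Bind the whole block once: valuep points at the array,
     * value_sz is the size of ONE element, not of the whole block. */
    rc = OCIBindByPos(stmtp, &bindp, errhp, 1,
                      values, (sb4)sizeof(int), SQLT_INT,
                      NULL, NULL, NULL, 0, NULL, OCI_DEFAULT);
    if (rc != OCI_SUCCESS) return rc;

    /* One round trip: iters = number of rows from step 1. */
    return OCIStmtExecute(svchp, stmtp, errhp, nrows, 0,
                          NULL, NULL, OCI_DEFAULT);
}
A commit (e.g. OCITransCommit()) is still needed afterwards, and in real code the status of every OCI call should be checked with OCIErrorGet().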

What you probably want is a bulk (array) insert. Array inserts are done with array binds: you bind the columns of the first row (the first structure of the array) and set the jumps, which are generally the size of the structure, so OCI can step from one row's data to the next. After this you can execute the statement once with the number of array elements. Binding and executing row by row creates overhead, hence bulk inserts are used.
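Here is a hedged sketch of that array-of-structures variant in C; the struct, table and column names are invented, handle setup and error checking are omitted, and the "jump" sizes are supplied through OCIBindArrayOfStruct():
#include <string.h>
#include <oci.h>

/* Illustrative row layout; field and table names are invented. */
struct row {
    int  id;
    char name[64];   /* null-terminated for SQLT_STR */
};

/* Bind the members of the FIRST array element, tell OCI how far to jump
 * to reach the next row (sizeof(struct row)), then execute once. */
static void bulk_insert_rows(OCISvcCtx *svchp, OCIError *errhp,
                             OCIStmt *stmtp, struct row *rows, ub4 nrows)
{
    const char *sql = "INSERT INTO people (id, name) VALUES (:id, :name)";
    OCIBind *bid = NULL, *bname = NULL;

    OCIStmtPrepare(stmtp, errhp, (const OraText *)sql, (ub4)strlen(sql),
                   OCI_NTV_SYNTAX, OCI_DEFAULT);

    OCIBindByName(stmtp, &bid, errhp, (const OraText *)":id",
                  (sb4)strlen(":id"),
                  &rows[0].id, (sb4)sizeof(rows[0].id), SQLT_INT,
                  NULL, NULL, NULL, 0, NULL, OCI_DEFAULT);
    OCIBindByName(stmtp, &bname, errhp, (const OraText *)":name",
                  (sb4)strlen(":name"),
                  rows[0].name, (sb4)sizeof(rows[0].name), SQLT_STR,
                  NULL, NULL, NULL, 0, NULL, OCI_DEFAULT);

    /* The "jumps": the distance from one row's value to the next row's
     * value is simply the size of the structure. */
    OCIBindArrayOfStruct(bid,   errhp, sizeof(struct row), 0, 0, 0);
    OCIBindArrayOfStruct(bname, errhp, sizeof(struct row), 0, 0, 0);

    /* Single execute covering all nrows elements of the array. */
    OCIStmtExecute(svchp, stmtp, errhp, nrows, 0, NULL, NULL, OCI_DEFAULT);
}
Since the question's data lives in a hashmap of structs, the map contents would first be copied into such a contiguous array before binding.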

bulk insert example.txt
by
{
    delimiter=','   // or any delimiter specified in your text files
    size=200kb      // or the size of your text file
}

Use DPL (Direct Path Loading).
Refer to docs.oracle.com for more info.

Related

Write to BQ one field of the rows of a PColl - Need the entire row for table selection

I have a problem:
My PColl is made of rows in this format:
{'word':'string','table':'string'}
I want to write only the words into BigQuery; however, I need the table field to be able to select the right table in BigQuery.
This is how my pipeline looks:
tobq = (input
    | 'write names to BigQuery ' >> beam.io.gcp.bigquery.WriteToBigQuery(
        table=compute_table_name, schema=compute_schema,
        insert_retry_strategy='RETRY_ON_TRANSIENT_ERROR',
        create_disposition=beam.io.gcp.bigquery.BigQueryDisposition.CREATE_IF_NEEDED,
        write_disposition=beam.io.gcp.bigquery.BigQueryDisposition.WRITE_APPEND)
)
The function compute_table_name accesses an element and returns the table field. Is there a way to write into BQ just the words while still having this table selection mechanism based on rows?
Many thanks!
Normally the best approach with a situation like this in BigQuery is to use the ignoreUnknownValues parameter in ExternalDataConfiguration. Unfortunately Apache Beam doesn't yet support enabling this parameter while writing to BigQuery, so we must find a workaround, as follows:
Pass a Mapping of IDs to Tables as a table_side_input
This solution only works if identical word values are guaranteed to map to the same table each time, or if there is some kind of unique identifier for your elements. It is a bit more involved than the ignoreUnknownValues approach would be, but it relies only on the Beam model instead of having to touch the BigQuery API.
The solution involves making use of table_side_input to dynamically pick which table to place an element in, even if the element is missing the table field. The basic idea is to create a dict of ID:table (where ID is either the unique ID or just the word field). Creating this dict can be done with CombineGlobally, combining all elements into a single dict.
Meanwhile, you use a transform to drop the table field from your elements before the WriteToBigQuery transform. Then you pass the dict into the table_side_input parameter of WriteToBigQuery, and write a callable table parameter that checks with the dict to figure out which table to use, instead of the table field.

How do I force the SmartTable to load all items?

My SAP UI5 view contains a SmartTable that is bound to an entity set in an ODataModel in read-only mode.
The table limits the number of items it displays to 100, verifiable by the ?$skip=0&$top=100 parameters it appends to the data request sent to the server.
However, my users would like to have the table load and display all items, without paging or having to press some "More" button.
Is there a way to override the SmartTable's default behavior? For example by setting some growingSize property to "Infinity"? Or by modifying the aggregation binding? Or by adding annotations to the OData service?
Since you did not specify the number of expected items or the table you're using, here are some general considerations and a few possible solutions.
The type of table
There are a few things to consider between the varying types of tables you can use; there is some advice from SAP itself in the design guidelines:
Do not use the responsive table if: You expect the table to contain more than around 1,000 rows. Try using the analytical table or grid table instead; they are easier to handle, perform better, and are optimised for handling large numbers of items.
Ways around it
The first option I can think of: if you're using a responsive table and you expect fewer than 1,000 rows, the scroll-to-load feature might be of interest; it loads more entries when the user reaches the bottom of the current list.
There are ways to increase the default size of 100, either through the table, using the growingThreshold property, or through the declaration of the OData model, using the sizeLimit setting.
If you're using a grid table then the scroll-to-load works slightly differently, since it doesn't display all rows at the same time. Instead, it destroys the current lines to display new lines, which is why it's recommended for (very) large datasets.
Second, if none of those solutions work for your users, you could alternatively fetch the count of the current list (including filters) first, so you can set an accurate threshold on the table before displaying results. If done correctly, your OData service should return a count from /myservice/MyEntity/$count?$filters... when queried. CDS-based services do this automatically; older services will need to implement it separately.
Last, if you know the list never exceeds a certain number of lines, you could set the growingThreshold parameter on the table to that number, and then you don't have to worry about fetching an accurate count first.
How all of this is implemented depends a bit on how you create the smart table (elements, manually, etc.), so I'm not sure how to provide usable example code.
You can achieve this by using a formatter function that returns how many entries are in your model.
<SmartTable growingThreshold="{path: 'yourModel>/', formatter: '.formatter.sizeCalculator'}">
In the formatter file, which is usually found in the model folder:
sizeCalculator: function (oModel) {
    let count = 0;
    for (const key in oModel) {
        count++; // count each entry in the bound data
    }
    return count;
}

Sqlite : How many parameters can there be in an 'in' clause

I'd like to perform the following:
delete from images where image_address not in (<a long list>)
How long can this list be? (I'm guessing I might have to think of another way).
If you are using parameters (?), the maximum number is 999 by default.
If you are creating the SQL statement dynamically by inserting the values directly (which is a bad thing to do for strings), there is no upper limit on the length of such a list. However, there is a limit on the length of the entire SQL statement, which is one million bytes by default.
If you cannot guarantee that your query does not exceed these limits, you must use a temporary table (see LS_dev's answer).
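If you want to see which limits your particular SQLite build enforces (they can be changed at compile time or lowered at run time), you can query them through the C API. A small sketch, assuming db is an already opened sqlite3* handle:
#include <stdio.h>
#include <sqlite3.h>

/* Passing -1 as the new value leaves a limit unchanged and
 * just returns its current setting. */
void print_limits(sqlite3 *db)
{
    printf("max bound parameters  : %d\n",
           sqlite3_limit(db, SQLITE_LIMIT_VARIABLE_NUMBER, -1));
    printf("max SQL length (bytes): %d\n",
           sqlite3_limit(db, SQLITE_LIMIT_SQL_LENGTH, -1));
}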
If you have a long list, I would suggest two approaches:
First solution:
Add all data to temporary table:
CREATE TEMP TABLE lng_list(image_address);
-- Insert all you elements in lng_list table
-- ...
DELETE FROM images WHERE image_address NOT IN (SELECT image_address FROM lng_list);
Make sure to do this inside a transaction to get good performance.
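A sketch of this pattern using the SQLite C API, assuming db is an open sqlite3* handle and addresses/n_addresses hold the list of image addresses to keep (error handling trimmed for brevity):
#include <sqlite3.h>

/* Keep only the images whose address appears in 'addresses'.
 * All inserts are wrapped in one transaction for performance. */
int delete_missing_images(sqlite3 *db, const char **addresses, int n_addresses)
{
    sqlite3_stmt *ins = NULL;
    int rc;

    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    sqlite3_exec(db, "CREATE TEMP TABLE lng_list(image_address)", NULL, NULL, NULL);

    rc = sqlite3_prepare_v2(db, "INSERT INTO lng_list(image_address) VALUES (?)",
                            -1, &ins, NULL);
    if (rc != SQLITE_OK) return rc;

    for (int i = 0; i < n_addresses; i++) {
        sqlite3_bind_text(ins, 1, addresses[i], -1, SQLITE_STATIC);
        sqlite3_step(ins);      /* expect SQLITE_DONE */
        sqlite3_reset(ins);
    }
    sqlite3_finalize(ins);

    sqlite3_exec(db,
        "DELETE FROM images WHERE image_address NOT IN "
        "(SELECT image_address FROM lng_list)", NULL, NULL, NULL);
    sqlite3_exec(db, "DROP TABLE lng_list", NULL, NULL, NULL);
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
    return SQLITE_OK;
}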
Second solution:
(REMOVED: only works for IN, not NOT IN...)
Performance should be fairly good with either of these solutions.

Insert extra row into UITableView like Tweetbot iOS

I have an NSFetchedResultsController as the data source of my UITableView. It displays some entities with a predicate from my database. I'm trying to find an elegant solution to insert a utility row between my data rows. I don't want to create a fake entity in my database because I don't want to mix View and Model. But I need the ability to recreate this utility row (e.g. on another application launch). Any suggestions?
It should look something like this:
Your best bet, in my opinion, is to use a section header or footer for that "utility" row. In the case of Tweetbot, they're most likely caching results locally and then merging in data when the plus button is tapped. Your table would take multiple data sets as arrays (an array of arrays), treat each separate array as a chunk, and put each chunk into its own section.
Whichever way you implement it, you'll want to wrap your results from the database with some sort of metadata. I think you're going to have to move away from a fetched results controller, unless you use a separate instance for each chunk and keep track of the date range for each chunk.

Mapping flat fields to sequential records

I have a source schema that defines a "ShippingCharge" and a "DiscountAmount". My destination schema is an EDI X12 850 message.
I need to create two "fake" iterations for the SAC loop. I need a way to define that the first iteration uses the ShippingCharge and the second uses the DiscountAmount. There are a few additional "default values" that I need to set on SAC01 that also depend on the iteration (1 or 2).
What functoid should I be using? Any suggestions?
Have you tried the Table Looping functoid? You can use the table looping functoid to define multiple rows using input links (ShippingCharge and DiscountAmount) and constants (the SAC01 values). The output would then loop through these rows and create the two SACLoop1 elements.
You will need to use the Table Extractor functoid as well to deal with each data value in the table.
Complete instructions on using Table Looping and Table Extractor can be found here: http://msdn.microsoft.com/en-us/library/aa559310%28v=bts.20%29.aspx
