Insert a lot of data that depends on the previously inserted data - ruby-on-rails

I am trying to store a "navigation" path in the database.
The paths are stored in the logfile as a string, something like "a1 b1 c1 d1", where each element is a "token".
For each token I want to store the path to it. As an example, I can have:
a1 -> b1 -> c1
a1 -> b1 -> c2
a1 -> b2 -> c2
So if I ask for all the subtokens of a1 I will get [b1 => 2, b2 => 1] in a token => count format.
This way I can get all the subtokens for a given token and the "usage count" for each of those subtokens.
It is possible to have
a1 -> b1 -> c1
g1 -> h1 -> b1
But for me, those two b1 are not the same token, so they should not share the same count.
There should not be a LOT of distinct tokens, but there will be a lot of entries in the logfile, so I expect a big count value for those tokens.
I am representing the data like this (sqlite3):
id; parent_id; token; count
where the parent_id is a FK to the same table.
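In SQL terms, the table and the "subtokens of a1" query look roughly like this (a sketch only; the table name is illustrative):
-- the described table (sqlite3)
CREATE TABLE tokens (
  id        INTEGER PRIMARY KEY AUTOINCREMENT,
  parent_id INTEGER REFERENCES tokens(id), -- NULL for the first token of a path
  token     TEXT    NOT NULL,
  count     INTEGER NOT NULL DEFAULT 0
);
-- all subtokens of a given token, with their usage counts
SELECT token, count FROM tokens WHERE parent_id = :a1_id;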
My issue is: I have around 50k entries in my log, and there can be more.
I am inserting the data into the database using the following procedure (one step is sketched in SQL just below):
Search for an entry that has the parent_id + token (for the first token the parent_id is null).
EXISTS: update the count.
DOESN'T EXIST: create an entry.
Save the id of the updated/new entry as the parent_id.
Repeat until there are no more tokens to consume.
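One such step, written as raw SQL (a sketch only; :parent_id and :token stand for bound values):
-- try to bump the count of an existing entry (IS also matches a NULL parent_id)
UPDATE tokens SET count = count + 1
WHERE parent_id IS :parent_id AND token = :token;
-- create the entry only if the UPDATE touched nothing (SQLite's changes() = 0)
INSERT INTO tokens (parent_id, token, count)
SELECT :parent_id, :token, 1 WHERE (SELECT changes()) = 0;
-- fetch the id to carry forward as the next parent_id
SELECT id FROM tokens WHERE parent_id IS :parent_id AND token = :token;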
With 50k entries and an average of 4 tokens per entry, that gives 200k tokens to process.
It does not write a lot of data to the database, as many of those tokens repeat, even though the same token can appear with different parent_ids.
The issue is that it is too slow. I cannot insert in chunks, because each step depends on the id of an existing entry or of a newly created one. Worse, I also need to update the count.
I was thinking of using some sort of tree to store this data, but there is the problem that old records need to be preserved, and the new data needs to be counted on top of the existing data.
I could build the tree from the database and then update it with the current data, but it feels like an overcomplicated solution for the problem.
Does anyone have any idea on how to optimize the insertion of this data?
I am using rails (active record) + sqlite 3.

Related

Sybase compare columns with duplicate row ids

So far I have a query with a result set (in a temp table) with several columns, but I am only concerned with four. One is a customer ID (varchar), one is Date (smalldatetime), one is Amount (money) and the last is Type (char). I have multiple rows with the same customer ID and want to evaluate them based on Date, Amount and Type. For example:
Customer ID   Date      Amount   Type
A             1-1-10    200      blue
A             1-1-10    400      green
A             1-2-10    400      green
B             1-11-10   100      blue
B             1-11-10   100      red
For all occurrences of A I want to compare them and identify only one: first by earliest date, then by greatest Amount, then, if still tied, by comparing Types. I would then return one row for each customer.
I would provide some of the query but I am at home now after spending two days trying to get a correct result. It looks something like this:
(query to populate #tempTable)
GROUP BY customer_id
HAVING date_cd =
(SELECT MIN(date_cd)
FROM order_table ot
WHERE ot.customerID = #tempTable.customerID
)
OR date_cd IS NULL
I assumed the HAVING would result in only one row per customer_id. This did not end up being the case, since there were some ties.
I am not sure I can do the OR - there are some NULL values here - and it does not account for stepping to the next comparison if the dates are all the same anyway. I am not seeing a way to avoid some row-by-row processing of the temp table with some kind of IF or WHILE loop.
As I write this, I am thinking maybe I should use #tempTable.date_cd in the HAVING clause instead of looking at the original table, but that should return the same dates?
Am I on the right track or is there something missing? Suggestions? More info??
Try the query below:
select *
from #tempTable
group by customer_id
having isnull(date_cd, '1900/01/01') = min(isnull(date_cd, '1900/01/01'))
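The HAVING above only handles the date. For the full tie-break described in the question (earliest date, then greatest Amount, then Type), one possible sketch - assuming the temp table columns are named customerID, date_cd, amount and type, and that "comparing Types" means keeping the alphabetically first one - is a NOT EXISTS self-comparison (NULL dates would still need the isnull() treatment shown above):
select t.*
from #tempTable t
where not exists (
    select 1
    from #tempTable u
    where u.customerID = t.customerID
      and (   u.date_cd < t.date_cd
           or (u.date_cd = t.date_cd and u.amount > t.amount)
           or (u.date_cd = t.date_cd and u.amount = t.amount and u.type < t.type))
)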

Using SharePoint's Data Query Webpart to link two lists

I have two SharePoint lists: A & B. List A has a column where the user can add multiple references (displayed as hyperlinks) for each entry to entries in B:
A:                         B:
... | RefB   | ...         Name | OtherColumns...
-----------------          -----------------------
... | B1     | ...         B1   |
... | B2,B3  | ...         B2   |
... | B1,B3  | ...         B3   |
Now I want to display all entries from list B that are referenced by a (specific) entry in A. I.e.: I set the filter to [Entry 2] and the web part displays all the stuff from entries B2 and B3. Is this even possible?
I think the problem you've got, which is ruining some of the ways I'm thinking of solving it, is that the RefB column is multi-valued. You may have some joy doing filtering with the DataView, but it might get messy fast as you try to split RefB on the comma and compare against the resulting array of values.
I think the problem could be made easier by having only a single value in the RefB column.
Three solutions come to mind.
Have only one value in RefB per item in Table A and repeat the other fields in Table A. You'd have to accept some data redundancy and would need to be careful with data entry.
The normal relational-database way of solving your data redundancy problem would be to have a third table joining table A to table B (a sketch follows this list). If you're not familiar with relational database techniques, there are lots of straightforward tutorials on data normalisation on the net. While there's some more work, it may lead to a cleaner solution. Be careful when trying to fake a relational database within SharePoint though - it's not meant for relational data. You may be better off using a SQL database.
Put everything in one table, though I think you've already ruled this one out.
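A minimal sketch of the join-table approach (the second solution above), with hypothetical names, written as a plain SQL schema rather than SharePoint lists:
-- list A, list B, and a junction table holding the references
CREATE TABLE ListA  (a_id INT PRIMARY KEY /* , other A columns */);
CREATE TABLE ListB  (b_id INT PRIMARY KEY, name VARCHAR(50));
CREATE TABLE ARefsB (a_id INT REFERENCES ListA(a_id),
                     b_id INT REFERENCES ListB(b_id),
                     PRIMARY KEY (a_id, b_id));
-- "all B entries referenced by a given A entry" is then a plain join
SELECT b.* FROM ListB b JOIN ARefsB r ON r.b_id = b.b_id WHERE r.a_id = 2;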

ISQL Perform instruction: after editadd editupdate of table vs. after add update of table

INFORMIX-SQL 7.3 Perform Screens:
According to documentation, in an "after editadd editupdate of table" control block, its instructions are executed before the row is added or updated to the table, whereas in an "after add update of table" control block, its instructions are executed after the row has been added or updated to the table. Supposedly, this would mean that any instructions which would alter values of field-tags linked to table.columns would not be committed to the table, but field-tags linked to displayonly fields will change?
However, when using "after add update of table", I placed instructions which alter values for field-tags linked to table.columns and their displayed and committed values also changed! I would have thought that an "after add update of table" would only alter displayonly fields.
TABLES
customer
transaction
branch
interest
dates
ATTRIBUTES
[...]
q = transaction.trx_type, INCLUDE=("E","C","V","P","T"), ...;
tb = transaction.trx_int_table,
LOOKUP f1 = ta_days1_f,
t1 = ta_days1_t,
i1 = ta_int1,
[...]
JOINING *interest.int_table, ...;
[...]
INSTRUCTIONS
customer MASTER OF transaction
transaction MASTER OF customer
delimiters ". ";
AFTER QUERY DISPLAY ADD UPDATE OF transaction
if z = "E" then let q = "E"
if z = "C" then let q = "C"
if z = "1" then let q = "E"
[...]
END
Is 'z' a column in the transaction table?
Is the trouble that the value in 'z' is causing a change in the value of 'q' (aka transaction.trx_type), and the modified value is being stored in the database?
Is the value in 'z' part of the transaction table?
Have you verified that the value in the DB is indeed changed - using the Query Language option or a simple (default) form?
It might look as if it is because the instruction is also used AFTER DISPLAY, so when the values are retrieved from the DB, the value displayed in 'q' would be the mapped value corresponding to the value stored in 'z'. You would have to inspect the raw data to get past that mapping.
If this is not the problem, please:
Amend the question to show where 'z' comes from.
Also describe exactly what you do and see.
Confirm that the data in the database, as opposed to on the screen, is amended.
Please can you see whether this table plus form behaves the same for you as it does for me?
Table Transaction
CREATE TABLE TRANSACTION
(
trx_id SERIAL NOT NULL,
trx_type CHAR(1) NOT NULL,
trx_last_type CHAR(1) NOT NULL,
trx_int_table INTEGER NOT NULL
);
Form
DATABASE stores
SCREEN SIZE 24 BY 80
{
trx_id [f000]
trx_type [q]
trx_last_type [z]
trx_int_table [f001 ]
}
END
TABLES
transaction
ATTRIBUTES
f000 = transaction.trx_id;
q = transaction.trx_type, UPSHIFT, AUTONEXT,
INCLUDE=("E","C","V","P","T");
z = transaction.trx_last_type, UPSHIFT, AUTONEXT,
INCLUDE=("E","C","V","P","T","1");
f001 = transaction.trx_int_table;
INSTRUCTIONS
AFTER ADD UPDATE DISPLAY QUERY OF transaction
IF z = "E" THEN LET q = "E"
IF z = "C" THEN LET q = "C"
IF z = "1" THEN LET q = "E"
END
Experiments
[The parenthesized number is automatically generated by IDS/Perform.]
Add a row with data (1), V, E, 23.
Observe that the display is: 1, E, E, 23.
Exit the form.
Observe that the data in the table is: 1, V, E, 23.
Reenter the form and query the data.
Update the data to: (1), T, T, 37.
Observe that the display is: 1, T, T, 37.
Exit the form.
Observe that the data in the table is: 1, T, T, 37.
Reenter the form and query the data.
Update the data to: (1), P, 1, 49
Observe that the display is: 1, E, 1, 49.
Exit the form.
Observe that the data in the table is: 1, P, 1, 49.
Reenter the form and query the data.
Observe that the display is: 1, E, 1, 49.
Choose 'Update', and observe that the display changes to: 1, P, 1, 49.
I did the 'Observe that the data in the table is' steps using:
sqlcmd -d stores -e 'select * from transaction'
This generated lines like these (reflecting different runs):
1|V|E|23
1|P|1|49
That is my SQLCMD program, not Microsoft's upstart of the same name. You can do more or less the same thing with DB-Access, except it is noisier (13 extraneous lines of output) and you would be best off writing the SELECT statement in a file and providing that as an argument:
$ echo "select * from transaction" > check.sql
$ dbaccess stores check
Database selected.
trx_id trx_type trx_last_type trx_int_table
1 P 1 49
1 row(s) retrieved.
Database closed.
$
Conclusions
This is what I observed on Solaris 10 (SPARC) using ISQL 7.50.FC1; it matches what the manual describes, and is also what I suggested in the original part of the answer might be the trouble - what you see on the form is not what is in the database (because of the INSTRUCTIONS section).
Do you see something different? If so, then there could be a bug in ISQL that has been fixed since. Technically, ISQL 7.30 is out of support, I believe. Can you upgrade to a more recent version than that? (I'm not sure whether 7.32 is still supported, but you should really upgrade to 7.50; the current release is 7.50.FC4.)
Transcribing commentary before deleting it:
Up to a point, it is good that you replicate my results. The bad news is that in the bigger form we have different behaviour. I hope that ISQL validates all limits - things like number of columns etc. However, there is a chance that they are not properly validated, given the bug, or maybe there is a separate problem that only shows with the larger form. So, you need to ensure you have a supported version of the product and that the problem reproduces in it. Ideally, you will have a smaller version of the table (or, at least, of the form) that shows the problem, and maybe a still smaller (but not quite as small as my example) version that shows the absence of the problem.
With the test case (table schema and Perform screen that shows the problem) in hand, you can then go to IBM Tech Support with "Look - this works correctly when the form is small; and look, it works incorrectly when the form is large". The bug should then be trackable. You will need to include instructions on how to reproduce the bug similar to those I gave you. And there is no problem with running two forms - one simple and one more complex and displaying the bug - in parallel to show how the data is stored vs displayed. You could describe the steps in terms of 'Form A' and 'Form B', with Form A being Absolutely OK and Form B being Believed to be Buggy. So, add a record with certain values in Form B; show what is displayed in Form B after; show what is stored in the database in Form A after too; show that they are not different when they should be.
Please bear in mind that those who will be fixing the issue have less experience with the product than either you or me - so keep it as simple as possible. Remove as many attributes as you can; leave comments to identify data types etc.

Combining table, web service data in Grails

I'm trying to figure out the best approach to display combined tables based on matching logic and input search criteria.
Here is the situation:
We have a table of customers stored locally. The fields of interest are ssn, first name, last name and date of birth.
We also have a web service which provides the same information. Some of the customers from the web service are the same as those in the local table, some are different.
SSN is not required in either.
I need to combine this data to be viewed on a Grails display.
The criteria for combination are 1) match on SSN. 2) For any remaining records, exact match on first name, last name and date of birth.
There's no need at this point for soundex or approximate logic.
It looks like what I should do is extract all the records from both inputs into a single collection, somehow making it a set keyed on SSN, and then remove the blank SSNs.
This will handle the SSN matching (once I figure out how to make that a set).
Then, I need to go back to the original two input sources (cached in a collection to prevent a re-read) and remove any records that exist in the SSN set derived previously.
Then, create another set based on first name, last name and date of birth - again if I can figure out how to make a set.
Then combine the two derived collections into a single collection. The collection should be sorted for display purposes.
Does this make sense? I think the search criteria will limit the number of records pulled in, so I can do this in memory.
Essentially, I'm looking for some ideas on how the Grails code would look for achieving the above logic (assuming this is a good approach). The local customer table is a domain object, while what I'm getting from the WS is an array list of objects.
Also, I'm not entirely clear on how the maxresults, firstResult, and order used for the display would be affected. I think I need to read in all the records which match the search criteria first, do the combining, and display from the derived collection.
The traditional Java way of doing this would be to copy both the local and remote objects into TreeSet containers with a custom comparator, first for SSN, second for name/birthdate.
This might look something like:
def localCustomers = Customer.list()
def remoteCustomers = RemoteService.get()
TreeSet ssnFilter = new TreeSet(new ClosureComparator({c1, c2 -> c1.ssn <=> c2.ssn}))
ssnFilter.addAll(localCustomers)
ssnFilter.addAll(remoteCustomers)
TreeSet nameDobFilter = new TreeSet(new ClosureComparator({c1, c2 -> c1.firstName + c1.lastName + c1.dob <=> c2.firstName + c2.lastName + c2.dob}))
nameDobFilter.addAll(ssnFilter)
def filteredCustomers = nameDobFilter as List
At this point, filteredCustomers has all the records, except those that are duplicates by your two criteria.
Another approach is to filter the lists by sorting and doing a fold operation, combining adjacent elements if they match. This way, you have an opportunity to combine the data from both sources.
For example:
def combineByNameAndDob(customers) {
    customers.sort { c1, c2 ->
        (c1.firstName + c1.lastName + c1.dob) <=>
        (c2.firstName + c2.lastName + c2.dob)
    }.inject([]) { cs, c ->
        if (cs && c.equalsByNameAndDob(cs[-1])) {
            cs[-1].combine(c) // combine the attributes of both records
            cs
        } else {
            cs << c
        }
    }
}

DELPHI - help needed with a ClientDataSet

I have a ClientDataSet with the following data
IDX   EVENT   BRANCH_ID   BRANCH
1     E1      7           B7
2     E2      5           B5
3     E3      7           B7
4     E4      1           B1
5     E5      2           B2
6     E6      7           B7
7     E7      1           B1
I need to transform this data into
IDX   EVENT   BRANCH_ID   BRANCH
1     E1      7           B7
2     E2      5           B5
4     E4      1           B1
5     E5      2           B2
The only fields of importance are BRANCH_ID and BRANCH,
and BRANCH_ID must be unique.
As there is a lot of data, I do not want to have copies of it.
QUESTION:
Can you suggest a way to transform the data using a cloned version of the original data?
Cloning won't allow you to actually change data in a clone without having the same change reflected in the original, so if that's what you want, you might rethink the cloning idea.
Cloning does give you a separate cursor into the clone and allows you to filter and index (i.e. order) it independently of the master clientdataset. From the data you've provided it looks like you want to filter some branch data and order by branch_id. You can accomplish that by setting up a new filter and index on the clone. Here's a good article that includes examples of how to do that:
http://edn.embarcadero.com/article/29416
Taking a second look at your question, it seems all you'd need to do is set up a unique index on branch_id on the cloned dataset. The linked article above has info on how to set up an index; check the docs on the ClientDataSet.AddIndex function for more details and for info on setting the index to show only unique values - if I recall, it may just mean you set branch_id as the primary key.
I can't think of a slick way to do this, but you could index on BRANCH_ID, add an fkInternalCalc boolean field to your dataset, then initialize that field to True on the first row of each branch (using group state or manually) and then filter the clone on the value of the field. You'd have to update the field on data changes though.
I have a feeling that a better solution would be to have a master dataset with a row for each branch.
You don't provide many details about your use case so I'll try to give you some hints:
"A lot of data" suggests that you might have it from a SQL backend. Using a 'SELECT DISTINCT...' or 'SELECT ... GROUP BY BRANCH_ID' (or similar syntax depending on what SQL backend do you) will give the desired result with ease and speed. Please confirm and I'll give you more details.
As the others said, a simple 'clone' won't work. The simplest (and perhaps quickest) solution, assuming that the branches are usually few in number relative to the data, is to keep an index outside of your dataset. If you really want to filter your original data, then add a status field (e.g. boolean) to your data and put a flag (e.g. True) on the first occurrence.
Pseudocode:
(Let's assume that:
your ClientDataSet is cds1,
your cds1 has a status field cds1Status (boolean) - this is optional, needed only if you want to sort/filter/search the cds1,
you have lIndex, which is a TStringList.)
var
  cVal: string;
  nDummy: Integer;
begin
  lIndex.Clear;
  lIndex.Sorted := True;
  cds1.DisableControls;
  try
    cds1.First;
    while not cds1.Eof do // scan the dataset
    begin
      cVal := cds1Branch_ID.AsString;
      cds1.Edit; // we update the Status field either way
      if lIndex.Find(cVal, nDummy) then // nDummy is required by Find but not used
      begin // already in the index
        cds1Status.AsBoolean := False; // "no, this isn't the 1st occurrence"
      end
      else
      begin // not found - add it
        lIndex.Add(cVal); // update the index
        cds1Status.AsBoolean := True; // mark the first occurrence
      end;
      cds1.Post; // save the change to the status field
      cds1.Next;
    end; // scan
  finally
    cds1.EnableControls; // housekeeping
  end;
end;
//WARNING! - NOT tested. I wrote it from my head but I think that you got the idea...
...Depending on what you are trying to accomplish (which would be the best thing to share with us) and what selectivity you have on BRANCH_ID, perhaps the Status field approach isn't needed at all. If you have a very low selectivity on that field (selectivity = no. of unique values / no. of records), perhaps it's much faster to have a new dataset and copy only the unique values into it, rather than putting each record of the original cds into Edit + Post states. (Changing dataset states is a costly operation, especially if your cds is linked to remote data storage - i.e. a server.)
hth,
PS: My solution is intended to be mostly simple. Also, you can test with lIndex.Sorted := False and use lIndex.IndexOf instead of Find; in some (rare) cases that is better - it depends on your data. If you want to complicate things and speed is really a concern, you can implement a full-blown B-tree index to do your searches (libraries are available). Also, you can use the index engine of the CDS, index BRANCH_ID and do many 'Locate' calls on a clone, but because your selectivity is clearly < 1, scanning the cds's entire index should theoretically be slower than a scan on a unique index, especially if your custom-made index is tailored to your data type, structure, distribution etc.
just my2c
