I have 18 different pairs of table column names like:
name_1, surname_1, ... name_18, surname_18
I would like to generate 18 inserts with Informix SPL using something like:
define _counter Int;
define _name_1 varchar(20);
define _surname_1 varchar(20);
...
define _name_18 varchar(20);
define _surname_18 varchar(20);
select name_1, surname_1, ..., name_18, surname_18
into _name_1, _surname_1, ..., _name_18, _surname_18
from names where name_id = 1;
for _counter = 1 to 18 loop
insert into person(name, surname) values (_name_+_counter, _surname_+_counter);
end loop
If I try this I get a syntax error. I am stuck with this terrible table design. Could you please advise whether there is a similar/correct way of accomplishing this?
Given the clearer outline of the question, I think you have to forgo the loop. The best you can do is either 18 consecutive INSERT statements, or 18 calls to a stored procedure that executes one statement on each call.
Informix SPL does not have an array type, and you can only really use the loop with an array. (I have seen loops with a CASE statement inside, one case for each iteration of the loop; they're seldom a good solution to a problem, and it isn't a sensible solution to this situation.)
I will repeat an observation from my previous comments: the design of a table with 18 pairs of columns is very sub-optimal. However, it appears that you are trying to transfer data from this sub-optimal schema to a more sensible one with one row per name.
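For illustration, the "18 consecutive INSERT statements" option might look like this (a sketch only; it uses the column names from the question and copies just the row with name_id = 1):
INSERT INTO Person(Name, Surname) SELECT Name_1, Surname_1 FROM Names WHERE name_id = 1;
INSERT INTO Person(Name, Surname) SELECT Name_2, Surname_2 FROM Names WHERE name_id = 1;
-- ... one statement per pair, up to ...
INSERT INTO Person(Name, Surname) SELECT Name_18, Surname_18 FROM Names WHERE name_id = 1;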
You could also consider using an 18-way UNION:
INSERT INTO Person(Name, Surname)
SELECT Name_1, Surname_1 FROM Names -- WHERE name_id = 1
UNION
SELECT Name_2, Surname_2 FROM Names -- …
UNION …
SELECT Name_18, Surname_18 FROM Names -- …
If the requirement is truly to have just the row where name_id = 1, you will need to add that criterion to each of the 18 SELECT clauses within the UNION SELECT statement. There are other ways to add that filter condition, with different sets of trade-offs at the source code level (and perhaps different trade-offs in the optimizer). Informix does not (yet) support CTEs (common table expressions, aka WITH clauses), which is a pity in this context.
Note that the code shown transfers all the data from Names into Person in a single SQL statement. This might well be the closest to optimal process overall.
Maybe something like this is what you want (using Informix 12.10FC6; not sure if it will work on previous versions):
CREATE PROCEDURE copy_paste_names (p_name_id INTEGER);
    DEFINE l_query_string VARCHAR(255);
    DEFINE iter INT;

    FOR iter IN (1 TO 18 STEP 1)
        LET l_query_string = 'INSERT INTO person (name, surname) SELECT name_' || iter || ', surname_' || iter || ' FROM names WHERE name_id = ' || p_name_id || ';';
        EXECUTE IMMEDIATE l_query_string;
    END FOR;
END PROCEDURE;
I assume that the names table will always have 18 pairs of columns named name_? and surname_?.
This procedure will just blindly try to copy each pair of name_?, surname_? columns from the names table into a new row in the person table. There aren't any checks to verify that there are actually values to be copied.
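Assuming the procedure compiles on your version, you would call it once per source row, for example (the name_id value of 1 is just an illustration):
EXECUTE PROCEDURE copy_paste_names(1);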
I'm modding a game. I'd like to optimize my code if possible for a frequently called function. The function will look into a dictionary table (consisting of an estimated 10-100 entries). I'm considering two patterns: a) direct reference and b) lookup with ipairs:
PATTERN A
tableA = { ["moduleName.propertyName"] = { some stuff } } -- the key is a string with dot inside, hence the quotation marks
result = tableA["moduleName.propertyName"]
PATTERN B
function lookup(type)
    local result
    for i, obj in ipairs(tableB) do
        if obj.type == type then  -- compare against the argument, e.g. "moduleName.propertyName"
            result = obj
            break
        end
    end
    return result
end
***
tableB = {
    [1] = {
        type = "moduleName.propertyName",
        -- ... some stuff ...
    }
}
result = lookup("moduleName.propertyName")
Which pattern should be faster on average? I'd expect the 'native' referencing to be faster (it is certainly much neater), but maybe this is a silly assumption? I'm able to sort tableB (to some extent) in order of lookup frequency, whereas (as I understand it) tableA will have an arbitrary internal order in Lua by default, even if I declare the keys in the proper order.
A lookup table will always be faster than searching a table every time.
For 100 elements that's one indexing operation compared to up to 100 loop cycles, iterator calls, conditional statements...
It is questionable, though, whether you would experience a difference in your application with so few elements.
So if you build that data structure for this purpose only, go with a look-up table right away.
If you already have this data structure for other purposes and you just want to look something up once, traverse the table with a loop.
If you have this structure already and you need to look values up more than once, build a look up table for that purpose.
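A minimal sketch of that last case, assuming entries shaped like tableB from the question (lookupByType is just an illustrative name):
-- Build a key -> entry index once ...
local lookupByType = {}
for _, obj in ipairs(tableB) do
    lookupByType[obj.type] = obj
end

-- ... then every later lookup is a single indexing operation.
local result = lookupByType["moduleName.propertyName"]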
I am trying to bulk insert multiple records simultaneously into a KDB+ database:
> trades:([]time:`datetime$();side:`symbol$();qty:`float$();price:`float$();exch:`symbol$();sym:`symbol$())
> t: .z.z / intentionally the same time
> `trades insert (t t;`buy `sell;10 10;10 10;`exch `exch;`sym `sym)
However, it raises an error at the sym column:
'sym
[0] `depths insert (t t;`buy `sell;10 10;10 10; `exch `exch;`sym `sym)
^
I have no idea what I could be doing wrong here, but it seems to be value-invariant, i.e. it always raises an error on the last column irrespective of the value provided.
Could someone please advise me how I should go about inserting bulk records into kdb+ with a time index as depicted above.
Thanks
In your original insert statement, you had spaces between `sym `sym, `exch `exch and `buy `sell. The spaces between the symbols make it an apply or index instead of the list you desire.
Additionally, because you have specified your qty and price columns as float, you have to specify the numbers as floats when you are inserting into the trades table.
The following line should accomplish what you are intending to do:
`trades insert (2#t;`buy`sell;10 10f;10 10f;`exch`exch;`sym`sym)
Lastly, I would recommend changing the schema for the qty column to int/long, as quantity generally does not require decimal points.
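For example, the revised schema might look like this (a sketch; the insert would then use plain longs such as 10 10 for qty):
trades:([]time:`datetime$();side:`symbol$();qty:`long$();price:`float$();exch:`symbol$();sym:`symbol$())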
Hope this helps!
Daniel is on the money. To expand on his answer, q will collate space-separated numeric values into a single list object, and even then the type specifier must only be present on the last item. Further details on list creation can be found here.
q)a:10f 10f
'10f
q)a:10 10f
Secondly, it's common for those learning kdb to encounter type errors when appending to tables. The problem in this case is that kdb is not promoting a list of homogeneous atoms to a wider type (which is expected behaviour). The following is a useful little lambda for letting you know where you are going wrong when performing insert or upsert operations:
q)trades:([]time:`datetime$();side:`symbol$();qty:`float$();price:`float$();exch:`symbol$();sym:`symbol$())
q)rows:(t,t;`buy`sell;10 10;10 10;`exch`exch;`sym`sym)
q)insertTest:{[tab;rows] m:0!meta tab; wh: where not m[`t] ~' rt:.Q.ty each rows; #[flip;;enlist] `item`currType`expectedType!(m[`c] wh;rt wh; m[`t] wh)}
q)insertTest[trades;rows]
item currType expectedType
---------------------------
qty j f
price j f
I have a complex query that contains more than one place where the same primary key value must be substituted. It looks like this:
select Foo.Id,
Foo.BearBaitId,
Foo.LinkType,
Foo.BugId,
Foo.GooNum,
Foo.WorkOrderId,
(case when Goo.ZenID is null or Goo.ZenID=0 then
IsNull(dbo.EmptyToNull(Bar.FanName),dbo.EmptyToNull(Bar.BazName))+' '+Bar.Strength else
'#'+BarZen.Description end) as Description,
Foo.Init,
Foo.DateCreated,
Foo.DateChanged,
Bug.LastName,
Bug.FirstName,
Goo.BarID,
(case when Goo.ZenID is null or Goo.ZenID=0 then
IsNull(dbo.EmptyToNull(Bar.BazName),dbo.EmptyToNull(Bar.FanName))+' '+Bar.Strength else
'#'+BarZen.Description end) as BazName,
GooTracking.Status as GooTrackingStatus
from
Foo
inner join Bug on (Foo.BugId=Bug.Id)
inner join Goo on (Foo.GooNum=Goo.GooNum)
left join Bar on (Bar.Id=Goo.BarID)
left join BarZen on (Goo.ZenID=BarZen.ID)
inner join GooTracking on(Goo.GooNum=GooTracking.GooNum )
where (BearBaitId = :aBaitid)
UNION
select Foo.Id,
Foo.BearBaitId,
Foo.LinkType,
Foo.BugId,
Foo.GooNum,
Foo.WorkOrderId,
Foo.Description,
Foo.Init,
Foo.DateCreated,
Foo.DateChanged,
Bug.LastName,
Bug.FirstName,
0,
NULL,
0
from Foo
inner join Bug on (Foo.BugId=Bug.Id)
where (LinkType=0) and (BearBaitId= :aBaitid )
order by BearBaitId,LinkType desc, GooNum
When I try to use an integer parameter on this non-trivial query, I cannot get it to work. I get this error:
Error
Incorrect syntax near ':'.
The query works fine if I take out the :aBaitid and substitute a literal 1.
Is there something else I can do to this query above? When I test with simple tests like this:
select * from foo where id = :anid
These simple cases work fine. The component is TADOQuery, and it works fine until you add any :parameters to the SQL string.
Update: when I use the following code at runtime, the parameter substitutions are actually done (some glitch in the ADO components is worked around) and a different error surfaces:
adoFooContentQuery.Parameters.FindParam('aBaitId').Value := 1;
adoFooContentQuery.Active := true;
Now the error changes to:
Incorrect syntax near the keyword 'inner'.
Note again, that this error goes away if I simply stop using the parameter substitution feature.
Update2: The accepted answer suggests I have to find two different copies of the parameter with the same name, which bothered me so I reworked the query like this:
DECLARE #aVar int;
SET #aVar = :aBaitid;
SELECT ....(long query here)
Then I used #aVar throughout the script where needed, to avoid the repeated use of :aBaitId. (If the number of times the parameter value is used changes, I don't want to have to find all parameters matching a name, and replace them).
I suppose a helper-function like this would be fine too: SetAllParamsNamed(aQuery:TAdoQuery; aName:String;aValue:Variant)
FindParam only finds one parameter, while you have two with the same name. A Delphi dataset adds each occurrence as a separate parameter to its collection of parameters.
It should work if you loop through all parameters, check whether the name matches, and set the value of each one that matches, although I normally choose to give each repeated parameter a follow-up number to distinguish between them.
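A minimal sketch of such a loop, wrapped in the helper suggested in the question (SetAllParamsNamed is not a standard routine, just an illustration; assumes uses SysUtils, ADODB):
procedure SetAllParamsNamed(aQuery: TADOQuery; const aName: string; const aValue: Variant);
var
  i: Integer;
begin
  // Set every parameter whose name matches, not just the first one FindParam returns.
  for i := 0 to aQuery.Parameters.Count - 1 do
    if SameText(aQuery.Parameters[i].Name, aName) then
      aQuery.Parameters[i].Value := aValue;
end;

// Usage:
//   SetAllParamsNamed(adoFooContentQuery, 'aBaitId', 1);
//   adoFooContentQuery.Active := True;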
I have a table of strings. I'd like an easy way to remove all of the duplicates of the table.
So if the table is {a, b, c, c, d, e, e} , after this operation it would be {a, b, c, d, e}
Alternatively, and probably preferably, is there a way to add an element to a table, but only if it is not already contained within the table.
<\noobquestion>
What I normally do for this is index the table on the string, so for example:
tbl[mystring1] = 1
tbl[mystring2] = 1
etc.
When you add a string you simply use the lines above and duplicates will be taken care of. You can then use a for ... pairs do loop to read the data.
If you want to count the number of occurrences, use something like:
if tbl[mystring1] == nil then
    tbl[mystring1] = 1
else
    tbl[mystring1] = tbl[mystring1] + 1
end
At the end of the addition cycle, if you need to turn the table around, you can simply use something like:
newtbl = {}
for s, c in pairs(tbl) do
    table.insert(newtbl, s)
end
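Putting those pieces together, here is a small sketch that removes duplicates from an array-style table of strings (the function name is just for illustration):
local function removeDuplicates(list)
    local seen = {}
    local result = {}
    for _, s in ipairs(list) do
        if not seen[s] then  -- keep only the first occurrence of each string
            seen[s] = true
            table.insert(result, s)
        end
    end
    return result
end

-- removeDuplicates({"a", "b", "c", "c", "d", "e", "e"}) --> {"a", "b", "c", "d", "e"}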
It sounds like you're trying to implement a Set, a collection of unique elements. This article might help you: http://www.lua.org/pil/13.1.html
The simplest way is using the tables as keys, not as values, in your "container table".
Let's call the container table values. You must currently be doing something similar to this for adding elements to it:
table.insert(values, value)
And you parse values like this:
for i,v in ipairs(values) do
    -- v contains the internal values
end
In order to have the tables just once, you can insert them this other way:
values[value] = 1
This will ensure that the inserted values (strings, tables, numbers, whatever) are included just once, because they will be 'overwritten'.
Then you can parse values like this:
for k,_ in pairs(values) do
    -- k contains the internal tables
end
I have a table with many fields, and additionally several Boolean fields (e.g. BField1, BField2, BField3 etc.).
I need to make a select query which will return all fields except the Boolean ones, plus a new virtual field (e.g. FirstTrueBool) whose value will equal the name of the first TRUE Boolean field.
For example, say I have BField1 = False, BField2 = True, BField3 = True, BField4 = False; in that case the SQL query should set [FirstTrueBool] to "BField2". Is that possible?
Thank you in advance.
P.S. I use Microsoft Access (MDB) Database and Jet Engine.
If you want to keep the current architecture (a mix of 'x' status fields and 'y' non-status fields) you have (as far as I can see now) only the option to use IIF:
Select MyNonStatusField1, /* other non-status fields here */
IIF([BField1], "BField1",
IIF([BField2], "BField2",
...
IIF([BFieldLast], "BFieldLast", "#No Flag#")
))))) -- put as many parentheses as needed to close the nested IIFs
From MyTable
Of course you can add any Where clause you like.
EDIT:
Alternatively you can use the following trick:
Set the fields to null when the flag is false, and put the order number in them (i.e. "1" for BField1, "2" for BField2, etc.) when the flag is true. Be sure that the status fields are strings (i.e. Varchar(2) or, better, Char(2) in SQL terminology).
Then you can use the COALESCE function to return the first non-null value from the status fields, which will be the index number as a string. Then you can prepend to this string any text you like (for example "BField"), and you will end up with something like:
Select "BField" || Coalesce(BField1, BField2, BField3, BField4) /*etc. (add as many fields you like) */
From MyTable
Much clearer IMHO.
HTH
You would be better off using a single 'int' column as a bitset (provided you have up to 32 flags) to represent the columns.
e.g. see SQL Server: Updating Integer Status Columns (it's SQL Server, but the same technique applies equally well to MS Access).
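For illustration, in SQL Server syntax (the dialect of the linked article), reading the first set flag back out of such a column might look like this sketch (the Status column, the Id column and the bit assignments are assumptions, not part of the question's schema):
-- Assumes an int column Status where bit value 1 = BField1, 2 = BField2, 4 = BField3, ...
SELECT Id,
       CASE
         WHEN Status & 1 <> 0 THEN 'BField1'
         WHEN Status & 2 <> 0 THEN 'BField2'
         WHEN Status & 4 <> 0 THEN 'BField3'
         ELSE '#No Flag#'
       END AS FirstTrueBool
FROM MyTable;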