Rails add multiple columns after a specific column using sqlite3 [duplicate] - ruby-on-rails

It seems that reordering columns in an SQLite3 table is not straightforward. At least the SQLite Manager in Firefox does not support this feature. For example, move column2 to column3 and move column5 to column2. Is there a way to reorder columns in an SQLite table, either with SQLite management software or with a script?

This isn't a trivial task in any DBMS. You would almost certainly have to create a new table with the order that you want and move your data from one table to the other. There is no ALTER TABLE statement to reorder the columns, so neither SQLite Manager nor any other tool will offer a way of doing this in the same table.
If you really want to change the order, you could do:
Assuming you have tableA:
create table tableA(
col1 int,
col3 int,
col2 int);
You could create a tableB with the columns sorted the way you want:
create table tableB(
col1 int,
col2 int,
col3 int);
Then move the data to tableB from tableA:
insert into tableB
SELECT col1,col2,col3
FROM tableA;
Then remove the original tableA and rename tableB to tableA:
DROP table tableA;
ALTER TABLE tableB RENAME TO tableA;
sqlfiddle demo
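The four steps above can also be scripted end-to-end; here is a minimal sketch using Python's built-in sqlite3 module, reusing the example table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Original table with the columns in the "wrong" order.
cur.execute("CREATE TABLE tableA (col1 int, col3 int, col2 int)")
cur.execute("INSERT INTO tableA VALUES (1, 3, 2)")

# 1. Create a new table with the desired column order.
cur.execute("CREATE TABLE tableB (col1 int, col2 int, col3 int)")
# 2. Copy the data across, naming the columns explicitly.
cur.execute("INSERT INTO tableB SELECT col1, col2, col3 FROM tableA")
# 3. Drop the original and 4. rename the new table.
cur.execute("DROP TABLE tableA")
cur.execute("ALTER TABLE tableB RENAME TO tableA")
con.commit()

print([c[1] for c in cur.execute("PRAGMA table_info(tableA)")])
# → ['col1', 'col2', 'col3']
```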

You can always order the columns however you want to in your SELECT statement, like this:
SELECT column1,column5,column2,column3,column4
FROM mytable
WHERE ...
You shouldn't need to "order" them in the table itself.

The order in sqlite3 does matter. Conceptually, it shouldn't, but try this experiment to prove that it does:
CREATE TABLE SomeItems (
identifier INTEGER PRIMARY KEY NOT NULL,
filename TEXT NOT NULL, path TEXT NOT NULL,
filesize INTEGER NOT NULL, thumbnail BLOB,
pickedStatus INTEGER NOT NULL,
deepScanStatus INTEGER NOT NULL,
basicScanStatus INTEGER NOT NULL,
frameQuanta INTEGER,
tcFlag INTEGER,
frameStart INTEGER,
creationTime INTEGER
);
Populate the table with about 20,000 records where thumbnail is a small jpeg. Then do a couple of queries like this:
time sqlite3 Catalog.db 'select count(*) from SomeItems where filesize = 2;'
time sqlite3 Catalog.db 'select count(*) from SomeItems where basicScanStatus = 2;'
It does not matter how many records are returned: on my machine, the first query takes about 0m0.008s and the second takes 0m0.942s. That is a massive difference, and the reason is the BLOB: filesize is stored before the BLOB in each row and basicScanStatus after it, so reaching basicScanStatus means reading past every thumbnail.
We've now moved the Blob into its own table, and our app is happy.
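The effect is easy to reproduce. The sketch below is a reduced version of the schema above (fewer columns, and a 2 KB dummy blob standing in for a real JPEG) using Python's built-in sqlite3 module, so the absolute times will differ from the shell timings quoted:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""CREATE TABLE SomeItems (
    identifier INTEGER PRIMARY KEY NOT NULL,
    filesize INTEGER NOT NULL,        -- stored before the blob
    thumbnail BLOB,
    basicScanStatus INTEGER NOT NULL  -- stored after the blob
)""")

blob = b"\x00" * 2000  # dummy stand-in for a small JPEG
cur.executemany("INSERT INTO SomeItems VALUES (?, ?, ?, ?)",
                ((i, 2, blob, 2) for i in range(20_000)))
con.commit()

# Time a count() filtered on a column before vs. after the blob.
for col in ("filesize", "basicScanStatus"):
    t0 = time.perf_counter()
    cur.execute(f"SELECT count(*) FROM SomeItems WHERE {col} = 2")
    print(col, round(time.perf_counter() - t0, 4), "s")
```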

You can reorder them using DB Browser for SQLite (formerly SQLite Browser).

Related

How to insert CDC Data from a stream to another table with dynamic column names

I have a Snowflake stored procedure and I want to use "insert into" without hard coding column names.
INSERT INTO MEMBERS_TARGET (ID, NAME)
SELECT ID, NAME
FROM MEMBERS_STREAM;
This is what I have, and the column names are hard-coded. The query should copy data from MEMBERS_STREAM to MEMBERS_TARGET. The stream has more columns, such as
METADATA$ACTION | METADATA$ISUPDATE | METADATA$ROW_ID
which I am not intending to copy.
I don't know of a way to skip the METADATA columns without hard-coding the names. However, if you don't want that data, maybe the easiest thing to do is to add those columns to your target, INSERT using a SELECT *, and later in the stored procedure set them to NULL.
Alternatively, earlier in your stored procedure, run an ALTER TABLE ... ADD COLUMN to add the columns, INSERT using SELECT *, and then afterwards run an ALTER TABLE ... DROP COLUMN to remove them. That way your table structure stays the same, although briefly it will have some extra columns.
A SELECT * is not usually recommended, but it's the easiest alternative I can think of.
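If hard-coding must be avoided, the statement text can be assembled inside the procedure from the stream's column list, filtering out the METADATA$ columns. A language-neutral sketch of the string construction in Python (the helper name is invented, and in a real procedure the column list would come from INFORMATION_SCHEMA.COLUMNS rather than being passed in):

```python
def build_insert(target, source, columns):
    """Build an INSERT ... SELECT that skips Snowflake stream metadata columns.

    `columns` would come from e.g. INFORMATION_SCHEMA.COLUMNS in a real
    procedure; here it is passed in directly for illustration.
    """
    data_cols = [c for c in columns if not c.upper().startswith("METADATA$")]
    col_list = ", ".join(data_cols)
    return f"INSERT INTO {target} ({col_list}) SELECT {col_list} FROM {source}"

print(build_insert("MEMBERS_TARGET", "MEMBERS_STREAM",
                   ["ID", "NAME", "METADATA$ACTION", "METADATA$ISUPDATE"]))
# → INSERT INTO MEMBERS_TARGET (ID, NAME) SELECT ID, NAME FROM MEMBERS_STREAM
```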

Can we alter the dbspace of an Informix table?

Suppose I have the following schema.
create table tb1
(col1 Integer,
col2 varchar(50)
) in dbspace1 extent size 1000 next 500 lock mode row;
and I want to change the dbspace of the above table to dbspace2. After the alteration, the table schema should look as follows.
create table tb1
(col1 Integer,
col2 varchar(50)
) in dbspace2 extent size 1000 next 500 lock mode row;
Is this possible? If so, what is the command?
On the face of it, the ALTER FRAGMENT statement and its INIT clause allow you to write:
ALTER FRAGMENT ON TABLE tb1 INIT IN dbspace2;
The keyword TABLE is required; you could specify an index instead.
I've not actually experimented to prove that it works, but the syntax diagram certainly allows it.

INSERT INTO with unknown number of columns in scriptella

I have to back up a table whose number of columns can change. When my ETL script starts, it doesn't know the column count. How can I create the INSERT INTO table VALUES (?1, ?2, ...) statement on the fly?
Regards,
Jacek
Depending on the database, you can use CREATE TABLE ... AS SELECT (or similar) to back up the table. Example:
CREATE TABLE new_table AS (SELECT * FROM old_table);
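SQLite, for example, accepts the same idea (without the parentheses around the SELECT); a minimal sketch via Python's built-in sqlite3 module, with an invented three-column table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE old_table (a int, b text, c real)")
cur.execute("INSERT INTO old_table VALUES (1, 'x', 2.5)")

# The backup copies every column, however many old_table happens to have.
cur.execute("CREATE TABLE new_table AS SELECT * FROM old_table")

print(cur.execute("SELECT * FROM new_table").fetchall())
# → [(1, 'x', 2.5)]
```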

SQLite table taking time to fetch the records in LIKE query

Scenario: the database is SQLite (records in the database need to be encrypted, hence the SQLCipher API for iOS is used).
There is a table in the database named partnumber with schema as follows:
CREATE TABLE partnumber (
objid varchar PRIMARY KEY,
description varchar,
make varchar,
model varchar,
partnumber varchar,
SSOKey varchar,
PMOKey varchar
)
This table contains approximately 80K records.
There are 3 text fields in the UI view, in which the user can enter search terms; the search runs as soon as each letter is typed.
3 text fields are: txtFieldDescription, txtFieldMake and txtFieldModel.
Suppose the user first enters the search term 'monitor' in txtFieldDescription. The queries executed with each letter are:
1.
SELECT DISTINCT description COLLATE NOCASE
FROM partnumber WHERE description LIKE '%m%'
2.
SELECT DISTINCT description COLLATE NOCASE
FROM partnumber WHERE description LIKE '%mo%'
3.
SELECT DISTINCT description COLLATE NOCASE
FROM partnumber WHERE description LIKE '%mon%'
4.
SELECT DISTINCT description COLLATE NOCASE
FROM partnumber WHERE description LIKE '%moni%'
5.
SELECT DISTINCT description COLLATE NOCASE
FROM partnumber WHERE description LIKE '%monit%'
6.
SELECT DISTINCT description COLLATE NOCASE
FROM partnumber WHERE description LIKE '%monito%'
7.
SELECT DISTINCT description COLLATE NOCASE
FROM partnumber WHERE description LIKE '%monitor%'
So far so good. Now suppose the user wants to search by model (txtFieldDescription still contains 'monitor'), so the user clicks on txtFieldModel. As soon as the user clicks on it, this query is fired:
SELECT DISTINCT model COLLATE NOCASE
FROM partnumber WHERE description LIKE '%monitor%'
This query returns all the models for the records whose description contains 'monitor' (at any position).
Now, if the user wants to search for all the models containing the word 'sony' (the description field still contains 'monitor'), the queries executed with each letter are:
1.
SELECT DISTINCT model COLLATE NOCASE
FROM partnumber WHERE model LIKE '%s%' AND description LIKE '%monitor%'
2.
SELECT DISTINCT model COLLATE NOCASE
FROM partnumber WHERE model LIKE '%so%' AND description LIKE '%monitor%'
3.
SELECT DISTINCT model COLLATE NOCASE
FROM partnumber WHERE model LIKE '%son%' AND description LIKE '%monitor%'
4.
SELECT DISTINCT model COLLATE NOCASE
FROM partnumber WHERE model LIKE '%sony%' AND description LIKE '%monitor%'
Now, if the user clicks on txtFieldMake and enters the search term '1980', the queries fired are:
1.
SELECT DISTINCT make COLLATE NOCASE
FROM partnumber WHERE make LIKE '%1%'
AND model LIKE '%sony%' AND description LIKE '%monitor%'
2.
SELECT DISTINCT make COLLATE NOCASE
FROM partnumber WHERE make LIKE '%19%'
AND model LIKE '%sony%' AND description LIKE '%monitor%'
3.
SELECT DISTINCT make COLLATE NOCASE
FROM partnumber WHERE make LIKE '%198%'
AND model LIKE '%sony%' AND description LIKE '%monitor%'
4.
SELECT DISTINCT make COLLATE NOCASE
FROM partnumber WHERE make LIKE '%1980%'
AND model LIKE '%sony%' AND description LIKE '%monitor%'
Here, the delay in moving from txtFieldDescription to txtFieldModel, or from txtFieldModel to txtFieldMake, is too large; in txtFieldModel and txtFieldMake the letters typed appear only after 5 or 6 seconds (once the query has been processed), so the cursor hangs.
On analyzing this, I learned that a wildcard before the search term in LIKE (as in '%monitor%') slows execution, because a leading wildcard prevents SQLite from using the index. In the worst case there are three such LIKE predicates combined with AND, so execution time is bound to increase further.
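The claim about leading wildcards can be checked directly with EXPLAIN QUERY PLAN; a minimal sketch using Python's built-in sqlite3 module (the exact plan wording varies between SQLite versions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE partnumber (description varchar)")
cur.execute("CREATE INDEX index_description ON partnumber (description COLLATE NOCASE)")

def plan(query):
    # The last column of each EXPLAIN QUERY PLAN row is the human-readable detail.
    return [row[-1] for row in cur.execute("EXPLAIN QUERY PLAN " + query)]

# Trailing wildcard only: SQLite can rewrite this as a range search on the index.
print(plan("SELECT description FROM partnumber WHERE description LIKE 'mon%'"))
# Leading wildcard: no index can narrow this; SQLite falls back to a full scan.
print(plan("SELECT description FROM partnumber WHERE description LIKE '%mon%'"))
```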
SOME ADDITIONAL INFORMATION:
The total number of records is ~80K.
The SELECT query runs each time against the full partnumber table (~80K rows).
Results of some queries I ran:
Sqlite> SELECT count(DISTINCT description COLLATE NOCASE) from partnumber;
Result is: 2599
Sqlite> SELECT count(DISTINCT make COLLATE NOCASE) from partnumber;
Result is: 7129
Sqlite> SELECT count(DISTINCT model COLLATE NOCASE) from partnumber;
Result is: 64644
Sqlite> SELECT count(objid) from partnumber;
Result is: 82135
Indices are created as follows:
CREATE INDEX index_description
ON partnumber (description collate nocase)
CREATE INDEX index_make
ON partnumber (make collate nocase)
CREATE INDEX index_model
ON partnumber (model collate nocase)
SOME ALTERNATIVES TO INCREASE PERFORMANCE:
Since there are only 2,599 distinct descriptions and 7,129 distinct makes, the table could be split: one table containing the DISTINCT description COLLATE NOCASE output (2,599 rows) and one containing the DISTINCT make COLLATE NOCASE output (7,129 rows). For model this won't help, because its 64,644 distinct values are nearly equal to the 82,135 total records.
But the problem with this approach is that I don't know how I would search these tables, what columns each must have, or how many tables to create. What if the user enters a description, then a model, and then a new description?
Since the result of the SELECT is displayed in a UITableView and the user sees at most 5 rows at a time, we could limit the result to 500 rows and fetch the next 500 as the user scrolls, and so on until the last matching record.
But the problem here is that although I need only 500 records, I would still have to scan the entire table (~80K records). So I need a query that first searches only the top 10% of the table and returns 500 rows at a time from it until that 10% is exhausted, then moves to the next 10%, and so on until all 80,000 records are searched (searching in chunks of 10%).
The 80K-record table could be split into 4 tables of 20K records each, searched simultaneously in different background threads, with the result sets combined. But I don't know how to run the queries in 4 threads (a sort of batch execution), when to combine the results, or how to know that all the threads have finished.
If LIKE '%monitor%' could be replaced with another function that returns the same result but executes faster without bypassing the index, execution might get faster. If anyone can suggest such a function in SQLite, I can pursue this approach.
If you can help me implement any of these alternatives, or suggest any other solution, I would be able to speed up my query. And please don't tell me to enable FTS (full-text search) in SQLite, because I have already tried it but I don't know the exact steps. Thanks a lot for reading this question so patiently.
EDIT:
Hey All, I got some success. I modified my select query to look like this:
select distinct description collate nocase as description from partnumber where rowid BETWEEN 1 AND (select max(rowid) from partnumber) AND description like '%a%' order by description;
And bingo, the search time was like never before. But the problem now is that when I run EXPLAIN QUERY PLAN like this, it shows a temp B-tree being used for DISTINCT, which I don't want:
explain query plan select distinct description collate nocase as description from partnumber where rowid BETWEEN 1 AND (select max(rowid) from partnumber) AND description like '%a%' order by description;
Output:
0|0|0|SEARCH TABLE partnumber USING INTEGER PRIMARY KEY (rowid>? AND rowid<?) (~15625 rows)
0|0|0|EXECUTE SCALAR SUBQUERY 1
1|0|0|SEARCH TABLE partnumber USING INTEGER PRIMARY KEY (~1 rows)
0|0|0|USE TEMP B-TREE FOR DISTINCT
EDIT:
Sorry guys. The above approach (using rowid for searching) takes more time on the device than the original one. I have tried removing the DISTINCT and ORDER BY keywords, but it was of no use. It still takes ~8-10 seconds on the iPhone. Please help me out.
Anshul,
I know you said "pls dont tell me to enable FTS (Full Text Searching) in sqlite because I have already tried doing this, but I dont know the exact steps"; however, FTS is the only way you are going to get this to perform well. There is no magic that will make a full table scan fast. I'd suggest reading up on FTS, taking the time to learn it, and then using it: http://sqlite.org/fts3.html.
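For anyone stuck on the "exact steps", the basics are short. A minimal FTS sketch using Python's built-in sqlite3 and an FTS4 table (the sample rows are invented, and FTS availability depends on how SQLite was compiled):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# An FTS virtual table holding copies of the searchable columns.
cur.execute("CREATE VIRTUAL TABLE partnumber_fts USING fts4(description, make, model)")
cur.executemany(
    "INSERT INTO partnumber_fts (description, make, model) VALUES (?, ?, ?)",
    [("LCD monitor 19in", "1980", "Sony Trinitron"),
     ("laser printer", "1995", "HP LaserJet")])

# MATCH uses the full-text index; 'mon*' is a prefix query, which replaces
# the index-defeating LIKE '%mon%' for word-prefix searches.
rows = cur.execute(
    "SELECT model FROM partnumber_fts WHERE description MATCH 'mon*'").fetchall()
print(rows)
# → [('Sony Trinitron',)]
```

Note that FTS matches on token boundaries: 'mon*' finds word prefixes such as 'monitor', but not arbitrary infixes the way LIKE '%mon%' does, which is usually what a search-as-you-type UI wants anyway.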

optimize query for last-auto-inc value

Sybase Advantage Database
I am running this query:
INSERT INTO nametable
SELECT * FROM nametable WHERE [indexkey]=32;
UPDATE nametable Set FieldName=1
WHERE [IndexKey]=(SELECT max([indexKey]) FROM nametable);
The purpose is to copy a given record into a new record, and then update the newly created record with some new values. The "indexKey" is declared as autoinc and is the primary key of the table.
I am not sure if this can be achieved in a single statement with better speed; suggestions appreciated.
It can be achieved with a single statement, but that makes the code more susceptible to schema changes. Suppose there are 2 additional columns in the table besides the FieldName and indexKey columns. Then the following statement achieves your objective:
INSERT INTO nametable ( FieldName, Column2, Column3 )
SELECT 1, Column2, Column3 FROM nametable WHERE [indexkey]=32
However, if the table structure changes, this statement will need to be updated accordingly.
BTW, your original implementation is not safe in multi-user scenarios. The max([indexKey]) in the UPDATE statement may not be the key generated by your INSERT; another user could have inserted a row between the two statements. To keep your original approach, you should use the LastAutoInc() scalar:
UPDATE nametable Set FieldName=1
WHERE [IndexKey] = LastAutoInc( STATEMENT )
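The same race, and the race-free fix, can be sketched in any engine that exposes its last generated key. Here it is in SQLite via Python's built-in sqlite3, where cursor.lastrowid plays the role of Advantage's LastAutoInc(STATEMENT) (the table layout is an invented stand-in for nametable):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE nametable ("
            "indexKey INTEGER PRIMARY KEY AUTOINCREMENT, "
            "FieldName int, Column2 int)")
cur.execute("INSERT INTO nametable (FieldName, Column2) VALUES (0, 7)")

# Copy row 1 into a new row...
cur.execute("INSERT INTO nametable (FieldName, Column2) "
            "SELECT FieldName, Column2 FROM nametable WHERE indexKey = 1")
new_key = cur.lastrowid  # the key *this* statement generated, race-free

# ...then update only the copy, never relying on max(indexKey).
cur.execute("UPDATE nametable SET FieldName = 1 WHERE indexKey = ?", (new_key,))
con.commit()

print(cur.execute("SELECT indexKey, FieldName, Column2 FROM nametable").fetchall())
# → [(1, 0, 7), (2, 1, 7)]
```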
