I want to insert thousands of records into a table. Right now I am using
INSERT INTO myTable ("id", "values") VALUES ("1", "test")
INSERT INTO myTable ("id", "values") VALUES ("2", "test")
INSERT INTO myTable ("id", "values") VALUES ("3", "test")
queries like these to insert the records one by one, but it takes a long time to execute. Now I want to insert all the records with one query:
INSERT INTO myTable ("id", "values") VALUES
("1", "test"),
("2", "test"),
("3", "test"),
.....
.....
("n", "test")
But this query is not working with SQLite. Can you please give me some guidance on how to solve this problem?
Thanks.
Please refer to my answer here.
There is no query in SQLite (at least in older versions) that supports your multi-row structure; this is what I use to insert thousands of records into the db. Performance is good. You can give it a try. :)
INSERT INTO table_name (col1, col2)
SELECT 'R1.value1', 'R1.value2'
UNION SELECT 'R2.value1', 'R2.value2'
UNION SELECT 'R3.value1', 'R3.value2'
UNION SELECT 'R4.value1', 'R4.value2'
You can follow this link on SO for more information. But be careful about the number of insertions you want to make, because there is a limitation for this kind of usage (see here).
INSERT INTO 'tablename'
SELECT 'data1' AS 'column1', 'data2' AS 'column2'
UNION SELECT 'data3', 'data4'
UNION SELECT 'data5', 'data6'
UNION SELECT 'data7', 'data8'
As a further note, SQLite only seems to support up to 500 such UNION SELECTs per query, so if you are trying to insert more data than that you will need to break it up into blocks of 500 rows.
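Worth noting: newer versions of SQLite (3.7.11 and later) do accept the multi-row VALUES syntax from the question directly, and with either form the biggest speedup for bulk inserts comes from wrapping the batch in a single transaction. A minimal sketch, reusing the table from the question:

BEGIN TRANSACTION;
INSERT INTO myTable ("id", "values") VALUES
    ('1', 'test'),
    ('2', 'test'),
    ('3', 'test');
COMMIT;

Without the explicit transaction, each INSERT is committed (and synced to disk) on its own, which is usually what makes row-by-row inserts so slow.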
I have two tables:
Table 1. Activity
Table 2. ActivityAvailability
Relation Type: HasManyRelation (table 1 HasManyRelation to table2)
I am not able to use withGraphJoined because it has a limit clause issue with pagination, as the objection.js docs say. So I'm using withGraphFetched.
Activity.query().withGraphFetched('activityAvailabilityData').limit(10);
The main difference is that withGraphFetched uses multiple queries under the hood to fetch the result while withGraphJoined uses a single query and joins to fetch the results.
The under-the-hood queries for withGraphFetched:
Query 1:
select `activity`.`id`, `name`, `type`, `city`, `state`, `address`, `latitude`, `longitude` from `activity` limit ?
Query 2 (the question marks are replaced with the activity IDs returned by Query 1):
select `start_age`, `end_age`, `start_date`, `end_date`, `start_time`, `end_time` from `activity_availability` where `activity_availability`.`activity_id` in (?, ?, ?, ?)
Now the first problem arises: if I want 10 records at a time via the limit, I get them as long as there is no where clause in the second query. But if a where clause is added on the child table (ActivityAvailability), it may eliminate some of those 10 records before they are returned.
So I solved it using joins along with withGraphFetched.
Activity.query().withGraphFetched('activityAvailabilityData').join('activity_availability', 'activity.id', '=', 'activity_availability.activity_id')
But this solution also has a drawback.
Whenever a where clause is added on the child table, it also has to be added on the parent query, just because I used joins. This is becoming difficult to manage.
So please let me know if there is another approach?
It seems that it is not straightforward to reorder columns in an SQLite3 table. At least the SQLite Manager in Firefox does not support this feature. For example, moving column2 to position 3 and column5 to position 2. Is there a way to reorder columns in an SQLite table, either with a SQLite management tool or a script?
This isn't a trivial task in any DBMS. You would almost certainly have to create a new table with the columns in the order that you want and move your data from one table to the other. There is no ALTER TABLE statement to reorder columns, so neither in SQLite Manager nor anywhere else will you find a way of doing this within the same table.
If you really want to change the order, you could do:
Assuming you have tableA:
create table tableA(
col1 int,
col3 int,
col2 int);
You could create a tableB with the columns sorted the way you want:
create table tableB(
col1 int,
col2 int,
col3 int);
Then move the data to tableB from tableA:
insert into tableB
SELECT col1,col2,col3
FROM tableA;
Then remove the original tableA and rename tableB to TableA:
DROP table tableA;
ALTER TABLE tableB RENAME TO tableA;
sqlfiddle demo
You can always order the columns however you want to in your SELECT statement, like this:
SELECT column1,column5,column2,column3,column4
FROM mytable
WHERE ...
You shouldn't need to "order" them in the table itself.
The order in sqlite3 does matter. Conceptually, it shouldn't, but try this experiment to prove that it does:
CREATE TABLE SomeItems (
identifier INTEGER PRIMARY KEY NOT NULL,
filename TEXT NOT NULL, path TEXT NOT NULL,
filesize INTEGER NOT NULL, thumbnail BLOB,
pickedStatus INTEGER NOT NULL,
deepScanStatus INTEGER NOT NULL,
basicScanStatus INTEGER NOT NULL,
frameQuanta INTEGER,
tcFlag INTEGER,
frameStart INTEGER,
creationTime INTEGER
);
Populate the table with about 20,000 records where thumbnail is a small jpeg. Then do a couple of queries like this:
time sqlite3 Catalog.db 'select count(*) from SomeItems where filesize = 2;'
time sqlite3 Catalog.db 'select count(*) from SomeItems where basicScanStatus = 2;'
It does not matter how many records are returned: on my machine, the first query takes about 0m0.008s and the second query takes 0m0.942s. Massive difference, and the reason is the BLOB; filesize comes before the BLOB in the row and basicScanStatus comes after it.
We've now moved the Blob into its own table, and our app is happy.
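For reference, the split can look roughly like this (the table and column names below are illustrative, not our exact schema), with the thumbnail column then dropped from SomeItems by rebuilding the table as described in the answer above:

-- keep the large, rarely-needed BLOB out of the main row
CREATE TABLE SomeItemThumbnails (
    itemId INTEGER PRIMARY KEY REFERENCES SomeItems(identifier),
    thumbnail BLOB
);

-- scans of SomeItems no longer have to step over the BLOBs;
-- a thumbnail is read only when explicitly requested
SELECT thumbnail FROM SomeItemThumbnails WHERE itemId = ?;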
You can reorder them using the SQLite Browser.
Let me begin by apologizing for what may have been a confusing title. I am just beginning my data analyst journey. I am working in BigQuery with an Extreme Storm dataset (TABLE1) that has fields for LAT, LONG, and STATE. There are null values in the latitude and longitude fields that I want to replace with general LAT/LONG values from a State Information dataset (TABLE2) that also contains LAT, LONG, and STATE values. In TABLE1 each record has a unique EVENT_ID and there are 1.4m rows. In TABLE2 each STATE is a unique record.
I've tried:
Update TABLE1
SET TABLE1.BEGIN_LAT=TABLE2.latitude
From TABLE1
INNER JOIN TABLE2
ON TABLE1.STATE = TABLE2.STATE
WHERE TABLE1.BEGIN_LAT IS NULL
I am getting an error because TABLE1 contains multiple rows with the same STATE and I am trying to use it as my primary key. I know what I am doing wrong but can't figure out how to do it the correct way. Is what I am trying to do possible in BigQuery?
Any help would be appreciated. Even advice on how to ask questions! :)
Thank you.
I believe you need aliases in your query: one for TABLE1 in the UPDATE and one for TABLE1 in the FROM. You can then add a condition to the WHERE clause to also match on EVENT_ID, like this:
UPDATE TABLE1 TABLE1_U
SET TABLE1_U.BEGIN_LAT=TABLE2.latitude
FROM TABLE1 TABLE1_F
INNER JOIN TABLE2
ON TABLE1_F.STATE = TABLE2.STATE
WHERE TABLE1_U.BEGIN_LAT IS NULL AND TABLE1_U.EVENT_ID = TABLE1_F.EVENT_ID
Also, I would prefer to run a SELECT query instead of the UPDATE and save the query results to a new table.
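Something along these lines (the mydataset prefix and the TABLE1_filled output name are placeholders, not your real names) would produce a new table with the gaps filled:

CREATE TABLE mydataset.TABLE1_filled AS
SELECT
  t1.* EXCEPT (BEGIN_LAT),
  COALESCE(t1.BEGIN_LAT, t2.latitude) AS BEGIN_LAT
FROM mydataset.TABLE1 AS t1
LEFT JOIN mydataset.TABLE2 AS t2
  ON t1.STATE = t2.STATE;

COALESCE keeps the original latitude where it exists and falls back to the state-level value only where BEGIN_LAT is NULL; the LEFT JOIN is safe because each STATE is a unique record in TABLE2. The same pattern works for the longitude column.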
I am writing a hive query to join two tables; table1 and table2. In the result I just need all columns from table1 and no columns from table2.
I know the solution where I can select all the columns manually by specifying table1.column1, table1.column2, and so on in the SELECT statement. But I have about 22 columns in table1. Also, I have to do the same for multiple other tables, and it's a painful process.
I tried using "SELECT table1.*", but I get a parse exception.
Is there a better way to do it?
From Hive 0.13 onwards, the following query syntax works:
SELECT a.* FROM a JOIN b ON (a.id = b.id)
This query will select all columns from a. So instead of typing all the column names (making the query cumbersome), it is a better idea to use tablealias.*
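The same alias.* trick applies however many tables you join; for example, a query like this (the table and column names are made up for illustration) still returns only the first table's columns:

SELECT t1.*
FROM table1 t1
JOIN table2 t2 ON (t1.id = t2.id)
JOIN table3 t3 ON (t1.id = t3.t1_id);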
I am trying a sqlite select query statement as below:
SELECT IndicatorText
FROM Table
where IndicatorID in('13','25','64','52','13','25','328')
AND RubricID in('1','1','1','1','1','1','6')
This gives output, but duplicate values are not displayed. I want to display all the values of IndicatorText even when they are duplicates.
Please help me with this query.
The two IN conditions are evaluated individually.
To check both values at once, you could concatenate them so that you have a single string to compare:
SELECT IndicatorText
FROM MyTable
WHERE IndicatorID || ',' || RubricID IN (
'13,1', '25,1', '64,1', '52,1', '13,1', '25,1', '328,6')
However, doing this operation on the column values prevents the query optimizer from using indexes, so this query will be slow if the table is big.
To allow optimizations, create a temporary table with the desired values, and join that with the original table:
SELECT IndicatorText
FROM MyTable
NATURAL JOIN (SELECT 13 AS IndicatorID, 1 AS RubricID UNION ALL
SELECT 25, 1 UNION ALL
SELECT 64, 1 UNION ALL
SELECT 52, 1 UNION ALL
SELECT 13, 1 UNION ALL
SELECT 25, 1 UNION ALL
SELECT 328, 6)
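If the list of pairs is long or reused, an explicit temporary table works the same way; a small sketch (the WantedPairs name is arbitrary, and the multi-row VALUES needs SQLite 3.7.11 or later):

CREATE TEMPORARY TABLE WantedPairs (IndicatorID, RubricID);
INSERT INTO WantedPairs VALUES
    (13, 1), (25, 1), (64, 1), (52, 1), (13, 1), (25, 1), (328, 6);

SELECT IndicatorText
FROM MyTable
NATURAL JOIN WantedPairs;

The duplicated pairs are kept on purpose so that matching rows are returned once per occurrence, just as with the UNION ALL subquery above.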