Using sqlite3 in iOS Projects

This is how I currently structure the methods in my project that use sqlite3 queries to return NSObjects for use in my iOS project:
at the launch of my application, each table is checked to see whether it needs to be created
the application contains no DROP TABLE .. or ALTER TABLE .. queries
My questions are:
Should I check whether a table exists every time I am about to run a sqlite3 query?
Should I use CREATE TABLE IF NOT EXISTS .. rather than checking whether a table exists with a separate query such as:
SELECT name FROM sqlite_master WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%'
UNION ALL
SELECT name FROM sqlite_temp_master WHERE type IN ('table','view')
ORDER BY 1;
and then iterating over the returned names to check whether the table exists?
I want to create the least amount of overhead, and I am not using Core Data.
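For reference, a minimal sketch of the CREATE TABLE IF NOT EXISTS approach (the persons table and its columns are hypothetical):
-- A no-op if the table already exists, so it is safe to run at every launch
CREATE TABLE IF NOT EXISTS persons (
    id INTEGER PRIMARY KEY,
    name TEXT
);
Because the statement does nothing when the table already exists, running it once per table at launch means individual queries never need a separate existence check against sqlite_master.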
Also, while updating records, I've noticed that if I supply the name of a column as a value, it takes the value of that column:

Related

PowerBI - Join DirectQuery and Imported tables in DAX

I have a DirectQuery table (Weather) which is sourced from an Azure SQL server. I would like to join this with an Imported table (Buckles) from an Excel sheet sourced from SharePoint Online.
Both tables have a UID field that is made up of a concatenation between a SiteID and timestamp. The UID field is named differently for each table.
I have created a One-To-Many relationship between the two tables.
I have tried to create a new DAX table using a NATURALINNERJOIN on Weather and Buckles but I get this error:
"No common join columns detected. The join function 'NATURALINNERJOIN' requires at-least one common join column."
I am confident it is not a problem with the underlying data because I've created a new imported Excel table (Test) with a selection of the data from Weather and I'm able to successfully create the join on Test and Buckles.
Is the joining of DirectQuery and Imported tables supported? I feel like this may be a type casting issue, but as far as I can see, both UID fields are set as Text.
The UID field is named differently for each table.
I suspect this is the issue. NATURALINNERJOIN looks for matching column names, and if the two tables have no common column names it returns exactly this error, so renaming one of the UID columns so that both tables share a column name should allow the join.
Note that if you create a calculated DAX table from a DirectQuery source, I don't think that table will still act like DirectQuery. If I understand correctly, it will be materialized into your model, and any DAX that references the calculated table no longer points back to the SQL server (and consequently only updates when the calculated table is rebuilt).

How to avoid primary key column duplication in PostgreSQL v 10?

I am running a script to update a table structure. My problem is with the primary key column (id): the query creates a new id column each time I run my script.
This is the first time that I have tried to write a script to update a database structure.
The database was created by an old version of an application, and now we want to release a new version. To achieve this, I wrote a script which, in summary, creates tables if they don't exist, adds columns to tables if they don't exist, deletes old indexes and creates new ones, and so on.
The problem happens when the script adds the primary key to the tables. In the database, the tables already have a primary key column of type integer, but my query does not detect it and creates a new column with the same name and data type, at least that is what I see in pgAdmin 4.8.
Originally the primary key columns were created using the type serial, so PostgreSQL automatically created a sequence and uses it for the primary key.
How can I avoid the duplicated primary key columns?
Originally the column was created as follows:
create table mytable(
    id serial,
    ...
);
And if I look at the table, the column looks like this, which means that PostgreSQL created a sequence mytable_id_seq and uses it to auto-increment the primary key column:
id integer NOT NULL DEFAULT nextval('mytable_id_seq'::regclass)
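For context, id serial is roughly shorthand for the following DDL (a sketch based on the PostgreSQL documentation, using the names from the question), which is where that sequence comes from:
-- what "id serial" expands to behind the scenes
CREATE SEQUENCE mytable_id_seq;
CREATE TABLE mytable (
    id integer NOT NULL DEFAULT nextval('mytable_id_seq')
);
-- tie the sequence's lifetime to the column
ALTER SEQUENCE mytable_id_seq OWNED BY mytable.id;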
But after I execute the following query and look at the table, it has a new column with the same name and data type as the one shown above.
ALTER TABLE public.mytable ADD COLUMN IF NOT EXISTS id serial;
I am expecting to see only one column no matter how many times I execute the query.

How to delete a specific table's column using FMDB?

I am trying to delete the column last_name from Persons using FMDB:
let query = "ALTER TABLE Persons DROP COLUMN last_name;"
try FMDBHelper.database.executeUpdate(query, values: nil)
But it fails with this error:
DB Error: 1 "near "DROP": syntax error".
SQLite does not support DROP COLUMN in ALTER TABLE.
You can only rename tables and add columns.
If you need to remove a column, create a new table, copy the data into it, drop the old table, and rename the new table to its intended name.
Reference: http://www.sqlite.org/lang_altertable.html
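A minimal sketch of that workaround for the Persons table from the question (the remaining columns, id and first_name, are an assumption):
BEGIN TRANSACTION;
-- 1. Create a replacement table without last_name
CREATE TABLE Persons_new (
    id INTEGER PRIMARY KEY,
    first_name TEXT
);
-- 2. Copy the surviving columns across
INSERT INTO Persons_new (id, first_name)
SELECT id, first_name FROM Persons;
-- 3. Drop the old table and rename the new one into place
DROP TABLE Persons;
ALTER TABLE Persons_new RENAME TO Persons;
COMMIT;
Each statement can be run through FMDB's executeUpdate, just like the ALTER TABLE attempt in the question.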
Please note that I flagged your question as a possible duplicate; I will provide an answer to make things clearer.
I think you are missing a point, which is that FMDB is (as its repo description says):
This is an Objective-C wrapper around SQLite
Keep in mind that since FMDB is built on top of SQLite, this is not a limitation of the library itself; it is a consequence of how SQLite's ALTER TABLE works.
The SQLite ALTER TABLE statement is limited to renaming a table and adding a new column to an existing table:
SQLite supports a limited subset of ALTER TABLE. The ALTER TABLE
command in SQLite allows the user to rename a table or to add a new
column to an existing table.
http://www.sqlite.org/lang_altertable.html
To achieve what you are looking for, you could check the answers to Delete column from SQLite table.

Most efficient way to duplicate a row in Sqlite for iOS

What is the most efficient way to exactly duplicate a row in a SQLite3 database, except with an updated primary key?
Use an INSERT .. SELECT where both the insert and the select reference the same relation. The entire operation then occurs as a single statement inside SQLite itself, which cannot be beaten for "efficiency".
It is easiest with an auto-generated primary key (just don't select the PK column), although an appropriate natural key can be assigned as well if one can be derived.
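A minimal sketch, assuming a persons table whose primary key is an auto-assigned INTEGER PRIMARY KEY named id, with hypothetical columns name and age:
-- Omit the PK from the column list so SQLite assigns a fresh one
INSERT INTO persons (name, age)
SELECT name, age FROM persons WHERE id = 42;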
See SQLite INSERT SELECT Query Results into Existing Table?

2 column table, ignore duplicates on mass insert postgresql

I have a join table in Rails which is just a two-column table of ids.
In order to mass insert into this table, I use
ActiveRecord::Base.connection.execute("INSERT INTO myjointable (first_id,second_id) VALUES #{values}")
Unfortunately this gives me errors when there are duplicates. I don't need to update any values; I simply want to move on to the next insert if a duplicate exists.
How would I do this?
As an FYI, I have searched Stack Overflow, and most of the answers are a bit advanced for me to understand. I've also checked the PostgreSQL documentation and played around in the Rails console, but still to no avail. I can't figure this one out, so I'm hoping someone else can tell me what I'm doing wrong.
The closest statement I've tried is:
INSERT INTO myjointable (first_id,second_id) SELECT 1,2
WHERE NOT EXISTS (
SELECT first_id FROM myjointable
WHERE first_id = 1 AND second_id IN (...))
Part of the problem with this statement is that it only inserts one value at a time, whereas I want a statement that mass inserts. Also, the second_id IN (...) section can include up to 100 different values, so I'm not sure how slow that will be.
Note that for the most part there should not be many duplicates, so I am not sure whether mass inserting into a temporary table and selecting distinct values is a good idea.
Edit to add context:
The reason I need a mass insert is that I have a many-to-many relationship between two models, one of which is never populated by a form. I have stocks, and stock price histories. The stock price histories are never created in a form; they are mass-inserted by pulling the data from Yahoo Finance with its API. I use the activerecord-import gem to mass insert the stock price histories (i.e. Model.import columns, values), but I can't write jointable.import columns, values because I get "jointable is an undefined local variable".
I ended up using a WITH clause to select my values and give them a name, then inserting those values and using WHERE NOT EXISTS to effectively skip any items that are already in my database.
So far it looks like it is working...
WITH withqueryname(first_id, second_id) AS (VALUES (1,2), (3,4), (5,6) ...etc)
INSERT INTO jointablename (first_id, second_id)
SELECT * FROM withqueryname
WHERE NOT EXISTS (
    SELECT first_id FROM jointablename
    WHERE first_id = 1
      AND second_id IN (1,2,3,4,5,6 ..etc))
You can replace the VALUES list with a variable; mine was VALUES #{values}.
You can also replace the second_id IN list with a variable; mine was second_id IN #{variable}.
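As an aside, on PostgreSQL 9.5 and later the same skip-the-duplicates behaviour is available directly via ON CONFLICT DO NOTHING (a sketch, assuming a unique constraint or primary key exists on (first_id, second_id)):
INSERT INTO myjointable (first_id, second_id)
VALUES (1,2), (3,4), (5,6)
ON CONFLICT (first_id, second_id) DO NOTHING;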
Here's how I'd tackle it: Create a temp table and populate it with your new values. Then lock the old join values table to prevent concurrent modification (important) and insert all value pairs that appear in the new table but not the old one.
One way to do this is by doing a left outer join of the old values onto the new ones and filtering for rows where the old join table values are null. Another approach is to use an EXISTS subquery. The two are highly likely to result in the same query plan once the query optimiser is done with them anyway.
Example, untested (since you didn't provide an SQLFiddle or sample data) but should work:
BEGIN;
CREATE TEMPORARY TABLE newjoinvalues(
    first_id integer,
    second_id integer,
    PRIMARY KEY (first_id, second_id)
);
-- Now populate `newjoinvalues` with multi-valued inserts or COPY
COPY newjoinvalues(first_id, second_id) FROM stdin;
LOCK TABLE myjoinvalues IN EXCLUSIVE MODE;
INSERT INTO myjoinvalues
SELECT n.first_id, n.second_id
FROM newjoinvalues n
LEFT OUTER JOIN myjoinvalues m ON (n.first_id = m.first_id AND n.second_id = m.second_id)
WHERE m.first_id IS NULL AND m.second_id IS NULL;
COMMIT;
This won't update existing values, but you can do that fairly easily too with a second query that does an UPDATE ... FROM while still holding the write lock on the table.
Note that the lock mode specified above will not block SELECTs, only writes like INSERT, UPDATE and DELETE, so the table can still be queried while the process is ongoing; you just can't modify it.
If you can't accept that, an alternative is to run the update in SERIALIZABLE isolation (which only works properly for this purpose in Pg 9.1 and above). This will cause the query to fail whenever a concurrent write occurs, so you have to be prepared to retry it over and over again. For that reason it's likely better to just live with locking the table for a while.
