SSDT SQL DACPAC deploy to an existing production server fails (Azure DevOps)

I am using an Azure DevOps release pipeline with the SQL Deploy task. I have a DACPAC file to deploy, but since the last deployment I have (1) changed column datatypes, (2) deleted some columns on a table that already has data, and (3) added some FK columns. Because there is already production data, the deploy fails. What is the right approach in these scenarios?
The column [dbo].[TableSample].[ColumnSample] is being dropped, data loss could occur.
The type for column Description in table [dbo].[Table2] is currently NVARCHAR (1024) NULL but is being changed to NVARCHAR (100) NOT NULL. Data loss could occur.
The type for column Id in table [dbo].[Table3] is currently UNIQUEIDENTIFIER NOT NULL but is being changed to INT NOT NULL.
There is no implicit or explicit conversion.
The column [sampleColumnId] on table [dbo].[Table4] must be added, but the column has no default value and does not allow NULL values. If the table contains data, the ALTER script will not work.
To avoid this issue you must either: add a default value to the column, mark it as allowing NULL values, or enable the generation of smart-defaults as a deployment option.

Your "data loss" warnings can likely be suppressed with the "allow data loss" publish option (the SqlPackage property BlockOnPossibleDataLoss=false). The tool is just warning you that you are dropping columns or shrinking a data length.
Your "Table3" change simply is not going to work while keeping the data: GUIDs will not fit in an INT column. You might need to look at dropping and re-creating the table, or renaming the current Id column to something else (OldId, maybe) and adding a new Id of type INT, probably with IDENTITY(1,1).
As for the last column: you are trying to add a NOT NULL column with no default to an existing table that contains rows. Either allow NULLs, put a named default value on the column, or enable the GenerateSmartDefaults deployment option so the column can be added.
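To make the ideas above concrete, here is a sketch of a manual pre-deployment migration (the table and column names come from the error messages above; the constraint name, identity seed, and default value of 0 are assumptions to adapt to your schema):

```sql
-- Table3: keep the old GUID key under a new name and add a new INT identity key.
EXEC sp_rename 'dbo.Table3.Id', 'OldId', 'COLUMN';

ALTER TABLE dbo.Table3
    ADD Id INT IDENTITY(1,1) NOT NULL;

-- Table4: give the new NOT NULL column a named default so it can be
-- added to a table that already contains rows.
ALTER TABLE dbo.Table4
    ADD sampleColumnId INT NOT NULL
        CONSTRAINT DF_Table4_sampleColumnId DEFAULT (0);
```

Any foreign keys referencing the old GUID key would also need to be repointed at the new INT key before the DACPAC publish runs.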

Related

Adding Generated Columns Crashes MariaDB

I am experiencing a strange bug with generated columns and MariaDB running in a Docker container.
The image I'm using is mariadb:10.
I have been trying to add generated columns. The first column I add works fine; the second one I add crashes the container and destroys the table.
Here's the first column that is working:
ALTER TABLE program
ADD is_current tinyint AS (
IF (
status IN ('active', 'suspended')
AND start_date >= NOW(),
1,
0
)
);
This one works just fine. The following SQL crashes the container:
ALTER TABLE program
ADD is_covered tinyint AS (
IF (
status IN ('active', 'suspended')
AND start_date <= NOW(),
1,
0
)
);
After restarting the container, I get the following errors:
SELECT * FROM program;
[42S02][1932] Table 'my_local.program' is marked as crashed and should be repaired
repair table my_local.program;
Table 'my_local.program' doesn't exist in engine / operation failed
Following the directions from this question, I checked in the container for the existence of the ibdata1 file. It exists, as do the table's .ibd and .frm files.
I have not been able to fix this; I had to drop the table and re-create it and re-import the data.
If anyone has any suggestions, I'd love to hear them.
Checking the reference for MySQL 8 for generated columns I find that
Literals, deterministic built-in functions, and operators are
permitted. A function is deterministic if, given the same data in
tables, multiple invocations produce the same result, independently of
the connected user. Examples of functions that are nondeterministic
and fail this definition: CONNECTION_ID(), CURRENT_USER(), NOW().
This is also true of MySQL 5.7.
When I attempted to create your generated column with MySQL 8 I got this message:
Error Code: 3763. Expression of generated column 'is_covered' contains a disallowed function: now.
I note, however, that you are using mariadb:10. Although it is derived from MySQL, MariaDB is now effectively a different product.
The MariaDB reference for generated columns says (for 10.2.1 onwards):
Non-deterministic built-in functions are supported in expressions for not indexed VIRTUAL generated columns.
Non-deterministic built-in functions are not supported in expressions for PERSISTENT or indexed VIRTUAL generated columns.
So, if you have MySQL you can't do this at all. If you have MariaDB 10.2.1+ you should be able to do it, with certain limitations.
In any case, you should get an error message, not a crashed table. I suggest you check the MariaDB bug reports, and submit one if this is not already there.
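Per the MariaDB documentation quoted above, the column should be accepted as long as it stays VIRTUAL (not PERSISTENT) and is not indexed. A sketch, using the question's own definition with the storage kind made explicit:

```sql
-- VIRTUAL columns permit non-deterministic functions such as NOW()
-- in MariaDB 10.2.1+; declaring it PERSISTENT would be rejected.
ALTER TABLE program
ADD is_covered tinyint AS (
    IF (
        status IN ('active', 'suspended')
        AND start_date <= NOW(),
        1,
        0
    )
) VIRTUAL;
```

Note that a VIRTUAL column is recomputed on every read, so the NOW() comparison reflects the time of the query, not the time of insertion.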

What would cause Postgres to lose track of the next ID, and how could I fix it?

I mysteriously got an error in my Rails app locally:
PG::UniqueViolation: ERROR: duplicate key value violates unique constraint "users_pkey"
DETAIL: Key (id)=(45) already exists.
The strange thing is that I didn't specify 45 as the ID. This number came from Postgres itself, which then also complained about it. I know this because when I tried again I got the error with 46. The brute-force fix I used was to just repeat the insertion until it worked, thereby bringing Postgres' idea of the table's next available ID into line with reality.
500.times { User.create({employee_id: 1010101010101, blah_blah: "blah"}) rescue nil }
Since employee_id has a unique constraint, any attempt after the first successful one fails on that constraint, and any attempt before it fails because Postgres tried to use an already-taken id (the table's primary key).
So the brute-force approach works, but it's inelegant and it leaves me wondering what could have caused the database to get into this state. It also leaves me wondering how to check to see whether the production database is similarly inconsistent, and how to fix it (short of repeating the brute-force "fix").
Finding your Sequence
The first step to updating your sequence object is to figure out what the name of your sequence is. To find this, you can use the pg_get_serial_sequence() function.
SELECT pg_get_serial_sequence('table_name','id');
This will output something like public.person_id_seq, which is the relation name (regclass).
In Postgres 10+ there is also a pg_sequences view that you can use to find all sorts of information related to your sequences. The last_value column will show you the current value of the sequence:
SELECT * FROM pg_sequences;
Updating your Sequence
Once you have the sequence name, there are a few ways you can reset the sequence value:
1 - Use setval()
SELECT setval('public.person_id_seq',1020); -- Next value will be 1021
SELECT setval('public.person_id_seq',1020, False); -- Next value will be 1020
2 - Use ALTER SEQUENCE (RESTART WITH)
ALTER SEQUENCE person_id_seq RESTART WITH 1030;
In this case, the value you provide (e.g. 1030) will be the next value returned, so technically the sequence is being reset to <YOUR VALUE> - 1.
3 - Use ALTER SEQUENCE (START WITH, RESTART)
ALTER SEQUENCE person_id_seq START WITH 1030;
ALTER SEQUENCE person_id_seq RESTART;
Using this method is preferred if you need to repeatedly restart to a specific value. Subsequent calls to RESTART will reset the sequence to 1030 in this example.
This sort of thing happens when rows with explicit IDs are inserted into the table. Since the IDs are specified, Postgres doesn't advance its sequence on insert, and the sequence falls out of date with the data in the table. This can happen when rows are inserted manually, copied in from a CSV file, replicated in, and so on.
To avoid the issue, simply let Postgres always assign the IDs and never specify one yourself. However, if you've already gotten into this state and need to fix the sequence, you can do so with setval() or the ALTER SEQUENCE command (using RESTART WITH).
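Combining the two steps, a common one-liner resets the sequence to the table's current maximum, so the next generated key is MAX(id) + 1 (shown here for a hypothetical users table with an id primary key):

```sql
-- Look up the sequence behind users.id and set it to the current MAX(id);
-- COALESCE handles the empty-table case.
SELECT setval(
    pg_get_serial_sequence('users', 'id'),
    COALESCE(MAX(id), 1)
) FROM users;
```

Running the same query with a read-only SELECT of last_value from pg_sequences first is a safe way to check whether the production database is similarly out of sync before changing anything.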

Implementing a unique surrogate key in Advantage Database Server

I've recently taken over support of a system which uses Advantage Database Server as its back end. For some background, I have years of database experience but have never used ADS until now, so my question is purely about how to implement a standard pattern in this specific DBMS.
There's a stored procedure which has been previously developed which manages an ID column in this manner:
#ID = (SELECT ISNULL(MAX(ID), 0) FROM ExampleTable);
#ID = #ID + 1;
INSERT INTO Example_Table (ID, OtherStuff)
VALUES (#ID, 'Things');
--Do some other stuff.
UPDATE ExampleTable
SET AnotherColumn = 'FOO'
WHERE ID = #ID;
My problem is that I now need to run this stored procedure multiple times in parallel. As you can imagine, when I do this, the same ID value is getting grabbed multiple times.
What I need is a way to consistently create a unique value which I can be sure will be unique even if I run the stored procedure multiple times at the same moment. In SQL Server I could create an IDENTITY column called ID, and then do the following:
INSERT INTO ExampleTable (OtherStuff)
VALUES ('Things');
SET #ID = SCOPE_IDENTITY();
ADS has autoinc which seems similar, but I can't find anything conclusively telling me how to return the value of the newly created value in a way that I can be 100% sure will be correct under concurrent usage. The ADS Developer's Guide actually warns me against using autoinc, and the online help files offer functions which seem to retrieve the last generated autoinc ID (which isn't what I want - I want the one created by the previous statement, not the last one created across all sessions). The help files also list these functions with a caveat that they might not work correctly in situations involving concurrency.
How can I implement this in ADS? Should I use autoinc, some other built-in method that I'm unaware of, or do I genuinely need to do as the developer's guide suggests, and generate my unique identifiers before trying to insert into the table in the first place? If I should use autoinc, how can I obtain the value that has just been inserted into the table?
You use LastAutoInc(STATEMENT) with autoinc.
From the documentation (under Advantage SQL->Supported SQL Grammar->Supported Scalar Functions->Miscellaneous):
LASTAUTOINC(CONNECTION|STATEMENT)
Returns the last used autoinc value from an insert or append. Specifying CONNECTION will return the last used value for the entire connection. Specifying STATEMENT returns the last used value for only the current SQL statement. If no autoinc value has been updated yet, a NULL value is returned.
Note: Triggers that operate on tables with autoinc fields may affect the last autoinc value.
Note: SQL script triggers run on their own SQL statement. Therefore, calling LASTAUTOINC(STATEMENT) inside a SQL script trigger would return the lastautoinc value used by the trigger's SQL statement, not the original SQL statement which caused the trigger to fire. To obtain the last original SQL statement's lastautoinc value, use LASTAUTOINC(CONNECTION) instead.
Example: SELECT LASTAUTOINC(STATEMENT) FROM System.Iota
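Putting the pieces together, the original procedure could be reworked along these lines (a sketch only, assuming ID has been converted to an autoinc field and is omitted from the insert; verify the STATEMENT scoping behavior inside scripts against the documentation quoted above):

```sql
INSERT INTO ExampleTable (OtherStuff)
VALUES ('Things');

-- Grab the autoinc value generated by the insert above;
-- STATEMENT scope avoids picking up values generated by other connections.
#ID = (SELECT LASTAUTOINC(STATEMENT) FROM System.Iota);

UPDATE ExampleTable
SET AnotherColumn = 'FOO'
WHERE ID = #ID;
```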
Another option is to use GUIDs.
(I wasn't sure but you may have already been alluding to this when you say "or do I genuinely need to do as the developer's guide suggests, and generate my unique identifiers before trying to insert into the table in the first place." - apologies if so, but still this info might be useful for others :) )
The use of GUIDs as a surrogate key allows either the application or the database to create a unique identifier, with a guarantee of no clashes.
Advantage 12 has built-in support for a GUID datatype:
GUID and 64-bit Integer Field Types
Advantage server and clients now support GUID and Long Integer (64-bit) data types in all table formats. The 64-bit integer type can be used to store integer values between -9,223,372,036,854,775,807 and 9,223,372,036,854,775,807 with no loss of precision. The GUID (Global Unique Identifier) field type is a 16-byte data structure. A new scalar function NewID() is available in the expression engine and SQL engine to generate new GUID. See ADT Field Types and Specifications and DBF Field Types and Specifications for more information.
http://scn.sap.com/docs/DOC-68484
For earlier versions, you could store the GUIDs as a char(36). (Think about your performance requirements here of course.) You will then need to do some conversion back and forth in your application layer between GUIDs and strings. If you're using some intermediary data access layer, e.g. NHibernate or Entity Framework, you should be able to at least localise the conversions to one place.
If some part of your logic is in a stored procedure, you should be able to use the newid() or newidstring() function, depending on the type of the backing column:
INSERT INTO Example_Table (ID, OtherStuff)
VALUES (newid(), 'Things');

Executing an insert operation on a table with autoinc fields with FireDac TFDCommand and retrieving the generated values

I'm trying to do an insert operation in a table with autoinc fields, and I'm using FireDac TFDCommand for that. So, the record is getting successfully inserted on db, but how to get the generated values for autoinc fields?
Note: TFDConnection lets me get the last auto-generated value, but the table generates two autoinc fields. I could get the primary key and select the record from the db, but that would be another round trip, which I need to avoid.
Any idea?
The only way seems to be to parse the TFDConnection.Messages property after the insert occurs. Some DBMSs, like SQL Server, return messages as an additional result set.
To enable messages processing, set ResourceOptions.ServerOutput to True.
If the messages coming from your database server don't include any last-inserted-key information, I fear the only solution is another query to retrieve the last ID.
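If the backend is SQL Server, another option that avoids a second round trip is to return both generated values from the INSERT itself with an OUTPUT clause (a sketch; the table and column names are hypothetical):

```sql
-- Returns both autoinc columns as a result set from the same statement,
-- so the client can open the command as a query and read them directly.
INSERT INTO ExampleTable (OtherStuff)
OUTPUT INSERTED.Id, INSERTED.SecondAutoValue
VALUES ('Things');
```

On the FireDac side this would mean executing the statement through a component that returns a cursor (e.g. TFDQuery.Open) rather than a plain ExecuteNonQuery-style call.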

dbExpress/No key specified

I am working on a database program, using the dbExpress components (Delphi 7). The data is retrieved from the database via the following components: TSQLDataSet -> TDataSetProvider -> TClientDataSet -> TDatasource -> TDBEdit. Until now, the form has worked correctly. The query in the TSQLDataset is
select id, name, byteken, timeflag from scales where id = :p1
I added a large (2048) varchar field to the database table; when I add this field to the above query (and connect either a TDBMemo or a TDBRichEdit to the TDatasource), I receive the following message when I try to edit the value in the new text field:
Unable to find record. No key specified.
I get the same error when there is no TDBMemo on the form (but with the varchar field in the query). As soon as I remove the varchar field from the query, everything works properly again.
What could be the cause of this problem?
==== More information ====
I have now defined persistent fields in the form. The field which holds the key to the table has its provider flags set to [pfInUpdate,pfInWhere,pfInKey], whereas all the other fields have their flags as [pfInUpdate,pfInWhere]. This doesn't solve the problem.
The persistent fields were defined on the clientdataset. When I defined them on the TSQLDataSet, the error message about 'no key specified' does not occur. The program still puts out this error message (which I neglected to mention earlier):
EDatabase error: arithmetic exception, numeric overflow or string truncation
The large string field has the correct value in 'displaywidth' and 'size'.
==== Even more information ====
I rewrote the form to use non-data aware components. One query retrieves the data from the database (using exactly the same query string as I am using in the TSQLDataSet); the data is then transferred to the controls. After the user presses the OK button on the form, the data is passed back to the database via another query which performs an update or an insert. As this works correctly, I don't see what the problem is with the data aware components.
==== Yet another snippet of information ====
I found this question on Stack Overflow which seems to address a similar issue. I changed the query to be
select id, name, byteken, timeflag,
cast (constext as varchar (2048)) as fconstext
from scales
where id = :p1
and set the dbMemo's datafield to be 'fconstext'. After adding text to the dbMemo, the 'applyupdates' call now fails with the following message
column unknown 'fconstext'
despite the fact that there is a persistent field created with that name.
I don't know whether this helps or simply muddies the water.
==== More information, 23 April ====
I dropped the field from the database table, then added it back. The program as written works fine as long as the string being entered into the problematic data field is less than about 260 chars. I added ten characters at a time several times without problem until the string length was 256. Then I added some more characters (without counting), tried to save - and got the error. From this point on, trying to add even one more character causes the error message (which comes at the 'applyupdates' method of the clientdataset).
Originally, the field contained 832 characters, so there is not a hard limit to the number of characters which I can successfully store. But once the error message appears, it always appears, as if the database remembers that there is an error.
==== More information, 24 April ====
Once again, I dropped the field from the database then added it back; the character set is WIN1251, for reasons which are not clear to me now (I don't need Cyrillic characters). The maximum number of characters which I can enter using data-aware controls seems to be about 280, regardless of how the field itself is defined.
I have since moved to using non-data aware controls in the real program where this problem occurs, and I can assure you that this limit does not exists there. Thus I am fairly sure that the problem is not due to a mismatch in character size, as has been suggested. Don't forget that I am using Delphi 7, which does not have unicode strings. I think that there is a bug in one of the components, but as I'm using old versions, I imagine that the problem has been solved, but not in the versions which I use.
==== Hopefully final edit, 25/04/12 ====
Following mosquito's advice, I created a new database whose default character set is WIN1252 (UTF-8 did not appear as a choice and and anyway my programs are not unicode). In this clean database I defined the one table, where the 'constext' string's character set was also defined as WIN1252. I ran the data-aware version of the problematic form and was able to enter text without problem (currently over 1700 characters).
It would seem, thus, that the problem was created by having one character set defined for the database and one for the field. I don't know how to check in retrospect what the default character set of the database was defined as, so I can't confirm this.
I now have the small problem of defining a new database (there are 50+ tables) and copying the data from the original database. As this database serves the customer's flagship product, I am somewhat wary of doing this....
Check the UpdateMode property of the provider. If it is set to upWhereChanged or upWhereKeyOnly, you need a key in the database table for it to work properly.
For the "Unable to find record. No key specified." error: while designing, change
select id, name, byteken, timeflag from scales where id = :p1
to
select id, name, byteken, timeflag from scales where id = 245
using an existing id.
As for casts like
cast (constext as varchar (2048))
if a column's definition is altered, existing CASTs to that column's type may become invalid.
Arithmetic exception, numeric overflow, or string truncation
String truncation
This happens when a concatenated string doesn't fit the underlying CHAR or VARCHAR datatype's size. If the result goes into a table column, perhaps it's a valid error, or maybe you really need to increase the column size. The same goes for intermediary values stored in stored-procedure or trigger variables.
Character transliteration failed
This happens when you have data in database stored in one character set, but the transliteration to required character set fails. There are various points where character set transliteration occurs. There is an automatic one:
Every piece of data you retrieve from database (via SELECT or otherwise) is transliterated from character set of database table's column to connection character set. If character sets are too different, there will be two translations: first from column charset to Unicode and then from Unicode to the connection charset.
Also, you can request transliteration manually by CASTing the column to another charset, example:
CAST(column_name AS varchar(100) character set WIN1251).
The reason that transliteration can fail is that simply some characters don't exist in certain character sets. For example, WIN1252 doesn't contain any Cyrillic characters, so if you use connection charset WIN1252 and try to SELECT from a column with Cyrillic characters, you may get such error.
In modern times, it is best to use Unicode or UTF-8 in your applications with a UTF8 connection character set. Also make sure you use at least Firebird 2.0, which has UTF8 support.
Wrong order of parameters when using DotNetFirebird
The order in which parameters are added to an FbCommand when using DotNetFirebird can cause the -303 exception with the hint "Arithmetic exception, numeric overflow, or string truncation". The order of the parameters has to match the order of the params in the stored procedure; otherwise the exception is thrown. Example (.NET, C#, DotNetFirebird, using FirebirdSql.Data.FirebirdClient):
FbCommand CMD = new FbCommand("TBLTEXT_ADDTEXT", cnn);
CMD.Parameters.Add("TEXT1", FbDbType.VarChar, 600).Value = strText1;
CMD.Parameters.Add("TEXT2", FbDbType.VarChar, 600).Value = strText2;
CMD.CommandType = CommandType.StoredProcedure;
CMD.ExecuteNonQuery();
If the order of the parameters inside the procedure "TBLTEXT_ADDTEXT" differs from the order in which you're adding parameters to the FbCommand object, you'll receive the -303 error.
No'am Newman said: "But once the error message appears, it always appears, as if the database remembers that there is an error."
It doesn't remember; the database is damaged!
As long as you cannot change your database character set and keep experimenting by dropping and adding fields to a damaged table, the problem will be hard to solve.
1. For every new test, create a new database (tip: create one and copy it as many times as needed).
2. The original field may contain Cyrillic characters even though it was filled with plain text; you cannot see them, but they are there.
3. Use varchar(8191) and a database PAGE_SIZE of 8192; the actual maximum VARCHAR length with UTF8 is 8191.
CREATE DATABASE statement:
CREATE DATABASE localhost:mybase
USER SYSDBA
PASSWORD masterkey
PAGE_SIZE 8192
DEFAULT CHARACTER SET UTF8;
SET NAMES ISO8859_1;
CREATE TABLE scales (
ID ...,
byteken VARCHAR(8191) COLLATE DE_DE,
....
Collations
There is no default collation, so you should define a collation for every field that will be used for sorting (ORDER BY) or comparison (UPPER):
You can also specify the collation with the ORDER BY clause:
ORDER BY LASTNAME COLLATE FR_CA, FIRSTNAME COLLATE FR_CA
or with the WHERE clause:
WHERE LASTNAME COLLATE FR_CA = :lastnametosearch
Unicode
Firebird 2.0 and above. There is now a UTF8 character set that correctly handles Unicode strings in UTF-8 format. The Unicode collation algorithm has been implemented, so you can now use UPPER() and the new LOWER() function without needing to specify a collation.
