I have a database on SQL Server 2008 R2. One of its tables has a column that should have been of type ntext, but it was designed as a text column. Unicode data sent to it was therefore saved as question marks (unrecognized characters).
I have since changed the column type to ntext. Is there a way to restore the saved data? I thought about tracking the captured data that was sent to the stored procedure and restoring it manually, but I searched and found no result.
Any ideas?
No, the data is lost unless you still have the original data. Once unicode data is written to a non-unicode column, it is lost. This is demonstrated in my answer here: Determine varchar content in nvarchar columns
Also note that ntext is deprecated; you should use nvarchar(max):
ntext , text, and image data types will be removed in a future version of Microsoft SQL Server. Avoid using these data types in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max), and varbinary(max) instead.
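To see why the data is unrecoverable, here is a small Python sketch (not SQL Server itself, just an illustration of the same code-page conversion that happens when Unicode text is stored in a non-Unicode column): characters outside the column's code page are replaced with '?', and nothing of the originals survives.

```python
# Unicode text forced through a single-byte code page (like a text
# column under a WIN1252 collation) loses unsupported characters.
original = "Привет"  # Cyrillic sample text
stored = original.encode("cp1252", errors="replace")  # what the column keeps
restored = stored.decode("cp1252")

print(restored)              # '??????' - the question marks from the question
print(restored == original)  # False - the original characters are gone
```

Changing the column type afterwards cannot help, because the '?' bytes are all that was ever written.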
Related
I've recently taken over support of a system which uses Advantage Database Server as its back end. For some background, I have years of database experience but have never used ADS until now, so my question is purely about how to implement a standard pattern in this specific DBMS.
There's a stored procedure which has been previously developed which manages an ID column in this manner:
@ID = (SELECT ISNULL(MAX(ID), 0) FROM ExampleTable);
@ID = @ID + 1;
INSERT INTO ExampleTable (ID, OtherStuff)
VALUES (@ID, 'Things');
--Do some other stuff.
UPDATE ExampleTable
SET AnotherColumn = 'FOO'
WHERE ID = @ID;
My problem is that I now need to run this stored procedure multiple times in parallel. As you can imagine, when I do this, the same ID value is getting grabbed multiple times.
What I need is a way to consistently create a unique value which I can be sure will be unique even if I run the stored procedure multiple times at the same moment. In SQL Server I could create an IDENTITY column called ID, and then do the following:
INSERT INTO ExampleTable (OtherStuff)
VALUES ('Things');
SET @ID = SCOPE_IDENTITY();
ADS has autoinc which seems similar, but I can't find anything conclusively telling me how to return the value of the newly created value in a way that I can be 100% sure will be correct under concurrent usage. The ADS Developer's Guide actually warns me against using autoinc, and the online help files offer functions which seem to retrieve the last generated autoinc ID (which isn't what I want - I want the one created by the previous statement, not the last one created across all sessions). The help files also list these functions with a caveat that they might not work correctly in situations involving concurrency.
How can I implement this in ADS? Should I use autoinc, some other built-in method that I'm unaware of, or do I genuinely need to do as the developer's guide suggests, and generate my unique identifiers before trying to insert into the table in the first place? If I should use autoinc, how can I obtain the value that has just been inserted into the table?
You use LastAutoInc(STATEMENT) with autoinc.
From the documentation (under Advantage SQL->Supported SQL Grammar->Supported Scalar Functions->Miscellaneous):
LASTAUTOINC(CONNECTION|STATEMENT)
Returns the last used autoinc value from an insert or append. Specifying CONNECTION will return the last used value for the entire connection. Specifying STATEMENT returns the last used value for only the current SQL statement. If no autoinc value has been updated yet, a NULL value is returned.
Note: Triggers that operate on tables with autoinc fields may affect the last autoinc value.
Note: SQL script triggers run on their own SQL statement. Therefore, calling LASTAUTOINC(STATEMENT) inside a SQL script trigger would return the lastautoinc value used by the trigger's SQL statement, not the original SQL statement which caused the trigger to fire. To obtain the last original SQL statement's lastautoinc value, use LASTAUTOINC(CONNECTION) instead.
Example: SELECT LASTAUTOINC(STATEMENT) FROM System.Iota
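The STATEMENT scope is the key point: it gives you the value generated by your own insert, not the last one generated anywhere on the server. As an analogy (SQLite via Python's sqlite3, not ADS itself), `cursor.lastrowid` behaves the same way — it is scoped to the statement you just executed:

```python
import sqlite3

# Statement-scoped retrieval of a generated key, analogous to
# inserting and then calling LASTAUTOINC(STATEMENT) in ADS.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ExampleTable "
    "(ID INTEGER PRIMARY KEY AUTOINCREMENT, OtherStuff TEXT)"
)

cur = conn.cursor()
cur.execute("INSERT INTO ExampleTable (OtherStuff) VALUES ('Things')")
new_id = cur.lastrowid  # the id created by the previous statement only

# Safe to use for the follow-up update, even under concurrent inserts.
conn.execute("UPDATE ExampleTable SET OtherStuff = 'FOO' WHERE ID = ?", (new_id,))
print(new_id)  # 1
```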
Another option is to use GUIDs.
(I wasn't sure but you may have already been alluding to this when you say "or do I genuinely need to do as the developer's guide suggests, and generate my unique identifiers before trying to insert into the table in the first place." - apologies if so, but still this info might be useful for others :) )
The use of GUIDs as a surrogate key allows either the application or the database to create a unique identifier, with a guarantee of no clashes.
Advantage 12 has built-in support for a GUID datatype:
GUID and 64-bit Integer Field Types
Advantage server and clients now support GUID and Long Integer (64-bit) data types in all table formats. The 64-bit integer type can be used to store integer values between -9,223,372,036,854,775,807 and 9,223,372,036,854,775,807 with no loss of precision. The GUID (Global Unique Identifier) field type is a 16-byte data structure. A new scalar function NewID() is available in the expression engine and SQL engine to generate new GUID. See ADT Field Types and Specifications and DBF Field Types and Specifications for more information.
http://scn.sap.com/docs/DOC-68484
For earlier versions, you could store the GUIDs as a char(36). (Think about your performance requirements here of course.) You will then need to do some conversion back and forth in your application layer between GUIDs and strings. If you're using some intermediary data access layer, e.g. NHibernate or Entity Framework, you should be able to at least localise the conversions to one place.
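The application-side conversion is straightforward; here is a minimal Python sketch (uuid is the standard-library module; the variable names are just illustrative) showing that a GUID round-trips losslessly through its 36-character string form:

```python
import uuid

# Generate a GUID in the application layer and store it as a
# char(36)-style string; converting back is lossless.
new_key = uuid.uuid4()
as_string = str(new_key)            # canonical 36-char form, fits char(36)
print(len(as_string))               # 36

round_tripped = uuid.UUID(as_string)
print(round_tripped == new_key)     # True - nothing lost either way
```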
If some part of your logic is in a stored procedure, you should be able to use the newid() or newidstring() function, depending on the type of the backing column:
INSERT INTO Example_Table (ID, OtherStuff)
VALUES (newid(), 'Things');
There is a Java Swing application which uses an Informix database. I have user rights granted for the Swing application (i.e. no source code), and read only access to a mirror of the database.
Sometimes I need to find a database column, which is backing a GUI element (TextBox, TableField, Label...). What would be best approach to find out which database column and table is holding the data shown e.g. in a TextBox?
My general approach is to capture the state of the database. Commit a change using the GUI and then capture the state of the database again. Then I need to examine the difference. I've already tried:
Use the nrows field of systables: didn't work, because the number in nrows does not seem to be a real-time representation of the row count.
Create a script with SELECT COUNT(*) ... for all tables: didn't work because there are too many tables (> 5000). I also tried to optimize by removing empty tables, but there are still too many left.
Is there a simple solution that I'm missing?
Please look at the Change Data Capture API and check whether it suits your needs.
There probably isn't a simple solution.
You probably need to build yourself a map of the database, or a data dictionary for it. It sounds as though you can eliminate many of the tables from consideration since they're empty — at least for a preliminary pass. If you're dealing with information in a text box, the chances are it is some sort of character data; you can analyze which (non-empty) tables contain longer character strings, and they'd be the primary targets of your searches. If the schema is badly designed with lots of VARCHAR(255) columns even though the columns normally only hold short strings, life is more difficult. Over time, you can begin to classify tables and columns so that you end up knowing where to look for parts of the application.
One problem to beware of: the tabid in informix.systables isn't necessarily as stable as you'd like. Your data dictionary needs to record its own dd_tabid for the table it describes, and can store the last known tabid from informix.systables, but it needs to be ready to find a new tabid value on occasion. You should probably only mark data in your dictionary for logical deletion.
To some extent, this assumes you can create a database in which to record this information. If you can't create an Informix database, you may have to use something else (MySQL, or SQLite, perhaps) to store the data dictionary. Alternatively, go to your DBA team and ask them for the information. Unless you're trying something self-evidently untoward, they're likely to help (but politics can get in the way — I've no idea how collegial your teams are).
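As a rough illustration of the snapshot-and-diff idea once you have narrowed the candidates down (SQLite via Python here purely for demonstration — the table names are made up, and snapshotting full contents is only feasible for a short candidate list, not 5000 tables):

```python
import sqlite3

def snapshot(conn, tables):
    """Capture the full contents of each candidate table as a set of rows."""
    return {t: set(conn.execute(f"SELECT * FROM {t}")) for t in tables}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, note TEXT)")
conn.execute("CREATE TABLE audit (id INTEGER, msg TEXT)")

candidates = ["customer", "audit"]  # pre-filtered: non-empty, character-heavy
before = snapshot(conn, candidates)

# ... commit a change through the GUI here; simulated with a direct insert ...
conn.execute("INSERT INTO customer VALUES (1, 'text typed into the TextBox')")

after = snapshot(conn, candidates)
changed = [t for t in candidates if before[t] != after[t]]
print(changed)  # ['customer'] - the table backing the GUI element
```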
I am working on a database program, using the dbExpress components (Delphi 7). The data is retrieved from the database via the following components: TSQLDataSet -> TDataSetProvider -> TClientDataSet -> TDatasource -> TDBEdit. Until now, the form has worked correctly. The query in the TSQLDataset is
select id, name, byteken, timeflag from scales where id = :p1
I added a large (2048) varchar field to the database table; when I add this field to the above query (and connect either a TDBMemo or a TDBRichEdit to the TDatasource), I receive the following message when I try to edit the value in the new text field
Unable to find record. No key specified.
I get the same error when there is no TDBMemo on the form (but with the varchar field in the query). As soon as I remove the varchar field from the query, everything works properly again.
What could be the cause of this problem?
==== More information ====
I have now defined persistent fields in the form. The field which holds the key to the table has its provider flags set to [pfInUpdate,pfInWhere,pfInKey], whereas all the other fields have their flags as [pfInUpdate,pfInWhere]. This doesn't solve the problem.
The persistent fields were defined on the clientdataset. When I defined them on the TSQLDataSet, the error message about 'no key specified' does not occur. The program still puts out this error message (which I neglected to mention earlier):
EDatabase error: arithmetic exception, numeric overflow or string truncation
The large string field has the correct value in 'displaywidth' and 'size'.
==== Even more information ====
I rewrote the form to use non-data aware components. One query retrieves the data from the database (using exactly the same query string as I am using in the TSQLDataSet); the data is then transferred to the controls. After the user presses the OK button on the form, the data is passed back to the database via another query which performs an update or an insert. As this works correctly, I don't see what the problem is with the data aware components.
==== Yet another snippet of information ====
I found this question on Stack Overflow which seems to address a similar issue. I changed the query to be
select id, name, byteken, timeflag,
cast (constext as varchar (2048)) as fconstext
from scales
where id = :p1
and set the dbMemo's datafield to be 'fconstext'. After adding text to the dbMemo, the 'applyupdates' call now fails with the following message
column unknown 'fconstext'
despite the fact that there is a persistent field created with that name.
I don't know whether this helps or simply muddies the water.
==== More information, 23 April ====
I dropped the field from the database table, then added it back. The program as written works fine as long as the string being entered into the problematic data field is less than about 260 chars. I added ten characters at a time several times without problem until the string length was 256. Then I added some more characters (without counting), tried to save - and got the error. From this point on, trying to add even one more character causes the error message (which comes at the 'applyupdates' method of the clientdataset).
Originally, the field contained 832 characters, so there is not a hard limit to the number of characters which I can successfully store. But once the error message appears, it always appears, as if the database remembers that there is an error.
==== More information, 24 April ====
Once again, I dropped the field from the database then added it back; the character set is WIN1251, for reasons which are not clear to me now (I don't need Cyrillic characters). The maximum number of characters which I can enter using data-aware controls seems to be about 280, regardless of how the field itself is defined.
I have since moved to using non-data-aware controls in the real program where this problem occurs, and I can assure you that this limit does not exist there. Thus I am fairly sure that the problem is not due to a mismatch in character size, as has been suggested. Don't forget that I am using Delphi 7, which does not have unicode strings. I think that there is a bug in one of the components; as I'm using old versions, I imagine that the problem has since been fixed, but not in the versions which I use.
==== Hopefully final edit, 25/04/12 ====
Following mosquito's advice, I created a new database whose default character set is WIN1252 (UTF8 did not appear as a choice, and anyway my programs are not unicode). In this clean database I defined the one table, where the 'constext' string's character set was also defined as WIN1252. I ran the data-aware version of the problematic form and was able to enter text without problem (currently over 1700 characters).
It would seem, thus, that the problem was created by having one character set defined for the database and one for the field. I don't know how to check in retrospect what the default character set of the database was defined as, so I can't confirm this.
I now have the small problem of defining a new database (there are 50+ tables) and copying the data from the original database. As this database serves the customer's flagship product, I am somewhat wary of doing this....
Check the UpdateMode property of the provider. If it is set to upWhereChanged or upWhereKeyOnly you need a key in the database table to work properly.
Unable to find record. No key specified.
To check this at design time, change
select id, name, byteken, timeflag from scales where id = :p1
to
select id, name, byteken, timeflag from scales where id = 245
using an existing id while designing.
Regarding casts such as
cast (constext as varchar (2048))
note that if a column's definition is altered, existing CASTs to that column's type may become invalid.
Arithmetic exception, numeric overflow, or string truncation
String truncation
It happens when the concatenated string doesn't fit the underlying CHAR or VARCHAR datatype size. If the result goes into a table column, perhaps it's a valid error, or maybe you really need to increase the column size. The same goes for intermediary values stored in stored procedure or trigger variables.
Character transliteration failed
This happens when you have data in database stored in one character set, but the transliteration to required character set fails. There are various points where character set transliteration occurs. There is an automatic one:
Every piece of data you retrieve from database (via SELECT or otherwise) is transliterated from character set of database table's column to connection character set. If character sets are too different, there will be two translations: first from column charset to Unicode and then from Unicode to the connection charset.
Also, you can request transliteration manually by CASTing the column to another charset, example:
CAST(column_name AS varchar(100) character set WIN1251).
The reason that transliteration can fail is that simply some characters don't exist in certain character sets. For example, WIN1252 doesn't contain any Cyrillic characters, so if you use connection charset WIN1252 and try to SELECT from a column with Cyrillic characters, you may get such error.
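The same failure can be reproduced outside Firebird. For example, in Python a strict conversion to WIN1252 (cp1252) fails on Cyrillic input, much like a charset CAST or an automatic transliteration to the connection character set:

```python
# Transliteration fails when the target character set has no
# representation for a character: WIN1252 contains no Cyrillic.
text = "Привет"
try:
    text.encode("cp1252")  # strict conversion, like a charset CAST
except UnicodeEncodeError as exc:
    print("transliteration failed:", exc.reason)
```

With a Unicode target (UTF8) the same conversion succeeds, which is why the advice below recommends UTF8 throughout.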
In modern times, it is best to use Unicode or UTF8 in your applications and a UTF8 connection character set. And make sure you use at least Firebird 2.0, which has UTF8 support.
Wrong order of parameters when using DotNetFirebird
The order in which Parameters are added to a FbCommand when using DotNetFirebird might cause the -303 exception with the hint "Arithmetic exception, numeric overflow, or string truncation". The order of the parameters has to fit the order of the params in the stored procedure - otherwise the exception will be thrown. Example (.NET, C#, DotNetFirebird (using FirebirdSql.Data.FirebirdClient;))
FbCommand CMD = new FbCommand("TBLTEXT_ADDTEXT", cnn);
CMD.Parameters.Add("TEXT1", FbDbType.VarChar, 600).Value = strText1;
CMD.Parameters.Add("TEXT2", FbDbType.VarChar, 600).Value = strText2;
CMD.CommandType = CommandType.StoredProcedure;
CMD.ExecuteNonQuery();
If the order of the parameters inside the procedure "TBLTEXT_ADDTEXT" differs from the order in which you're adding parameters to the FbCommand object, you'll receive the -303 error.
No'am Newman said: "But once the error message appears, it always appears, as if the database remembers that there is an error."
It doesn't remember; the database is damaged!
As long as you are not able to change your database character set and keep experimenting with dropping and adding fields to a damaged table, it will be hard to solve the problem.
1. For every new test, create a new database (tip: create one and copy it x times).
2. The field was set with plain text, but Cyrillic characters were stored in the original field; you cannot see them, but they are there.
3. Use varchar(8191) and set the database PAGE_SIZE to 8192; the actual maximum VARCHAR length with UTF8 is 8191 characters.
CREATE DATABASE statement:
CREATE DATABASE localhost:mybase
USER SYSDBA
PASSWORD masterkey
PAGE_SIZE 8192
DEFAULT CHARACTER SET UTF8;
SET NAMES ISO8859_1;
CREATE TABLE scales (
ID ...,
byteken VARCHAR(8191) COLLATE DE_DE,
....
Collations
There is no default collation. So you should define a collation for every field that is to be used for sorting (ORDER BY) or comparing (UPPER):
You can also specify the collation with the ORDER BY clause:
ORDER BY LASTNAME COLLATE FR_CA, FIRSTNAME COLLATE FR_CA
or with the WHERE clause:
WHERE LASTNAME COLLATE FR_CA = :lastnametosearch
Unicode
Firebird 2.0 and above: there is now a UTF8 character set that correctly handles Unicode strings in UTF-8 format. The Unicode collation algorithm has been implemented, so you can now use UPPER() and the new LOWER() function without needing to specify a collation.
If I change the type of a field in my database via a Ruby on Rails migration, from string to text, will I lose the data in the field?
As far as I remember, SQLite uses the declared column type only as a loose hint (type affinity); it doesn't restrict what a column can hold (that's why you can also store text in an int field if you want). So no, it shouldn't remove any data, because the change is only superficial.
No guarantees though, it's been a while since I last worked with SQLite ;-)
This page explains SQLite's typing system nicely.
You will not lose data. String and text in SQLite are the same. There are really only five types in SQLite (NULL, INTEGER, REAL, TEXT, BLOB). Even if your field originally contained binary data (BLOB) and the database type was changed to TEXT the data is unchanged unless you store new data.
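You can verify this behaviour directly, for example via Python's sqlite3 module: values keep the storage class they were inserted with, and an INTEGER column happily holds text that cannot be converted to a number.

```python
import sqlite3

# SQLite declared types are affinities, not constraints: a value that
# cannot be converted to the column's affinity is stored as-is, and
# changing the declared type later does not rewrite existing rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.execute("INSERT INTO t VALUES (42), ('hello')")  # text in an int column

rows = conn.execute("SELECT n, typeof(n) FROM t").fetchall()
print(rows)  # [(42, 'integer'), ('hello', 'text')]
```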
Let's say I have 'myStoredProcedure' that takes in an Id as a parameter, and returns a table of information.
Is it possible to write a SQL statement similar to this?
SELECT
MyColumn
FROM
Table-ify('myStoredProcedure ' + @MyId) AS [MyTable]
I get the feeling that it's not, but it would be very beneficial in a scenario I have with legacy code & linked server tables
Thanks!
You can use a table-valued function in this way.
Here are a few tricks...
No it is not - at least not in any official or documented way - unless you change your stored procedure to a TVF.
However, there are ways (read: hacks) to do it. All of them basically involve a linked server and using OPENQUERY - for example, see here. Do note that this is quite fragile, as you need to hardcode the name of the server - so it can be problematic if you have multiple SQL Server instances with different names.
Here is a pretty good summary of the ways of sharing data between stored procedures http://www.sommarskog.se/share_data.html.
Basically it depends what you want to do. The most common ways are creating the temporary table prior to calling the stored procedure and having it fill it, or having one permanent table that the stored procedure dumps the data into which also contains the process id.
Table Valued functions have been mentioned, but there are a number of restrictions when you create a function as opposed to a stored procedure, so they may or may not be right for you. The link provides a good guide to what is available.
SQL Server 2005 and SQL Server 2008 change the options a bit. SQL Server 2005+ make working with XML much easier. So XML can be passed as an output variable and pretty easily "shredded" into a table using the XML functions nodes and value. I believe SQL 2008 allows table variables to be passed into stored procedures (although read only). Since you cited SQL 2000 the 2005+ enhancements don't apply to you, but I mentioned them for completeness.
Most likely you'll go with a table valued function, or creating the temporary table prior to calling the stored procedure and then having it populate that.
While working on the project, I used the following to insert the results of xp_readerrorlog (afaik, returns a table) into a temporary table created ahead of time.
INSERT INTO [tempdb].[dbo].[ErrorLogsTMP]
EXEC master.dbo.xp_readerrorlog
From the temporary table, select the columns you want.