I noticed the schema has changed on an Informix database for a check constraint in the CREATE TABLE statements. Will this be a problem for my application reading from and writing to this table if there are no other differences in the field names, data types, etc.?
Example of the original:
check (cs_addl IN ('y' ,'n' )),
Example of the new schema:
check (cs_addl IN ('y' ,'n' )) constraint "informix".cs_check4,
TL;DR — There's no problem and no change of behaviour.
The constraint name appears in the 'wrong' place compared to standard SQL, but that has no effect on the behaviour of the constraint. (You can find more information at GitHub — SQL specifications for SQL-92, SQL-99, SQL-2003). It just gives you a more convenient name to use if you ever need to drop or disable the constraint.
Even NOT NULL constraints formally have names; a name is created for you if you (like everyone else) don't name it.
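That name is what you would use if you ever need to drop the constraint; a sketch of the Informix syntax (the table name here is hypothetical, the constraint name comes from the new schema dump):

```sql
-- Hypothetical table name; cs_check4 is the system-assigned name from the dump.
ALTER TABLE cs_table DROP CONSTRAINT cs_check4;
```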
I have a generic table in a global area and I want to use it in a SELECT FROM. Is this possible, or is there a way to do this?
Example Code:
FIELD-SYMBOLS: <gt_data> TYPE STANDARD TABLE.
CLASS-DATA: mo_data TYPE REF TO data.
CREATE DATA mo_data LIKE lt_data.
ASSIGN mo_data->* TO <gt_data>.
<gt_data> = lt_data.
SELECT data~matnr,
mbew~malzeme_deger
FROM zmm_ddl_mbew AS mbew
INNER JOIN @<gt_data> AS data ON data~matnr EQ mbew~matnr
INTO TABLE @DATA(lt_mbew).
If the generic table you are asking about is an internal table, which the code snippet suggests, then:
No, I don't think you can build a join across two different sources.
Unless there are some new kernel developments, SELECT statements are converted to database SQL statements.
The ABAP 7.5 documentation of the SELECT statement lists the FROM "data_source" as dbtab, view, or cds_entity as the possible sources.
Even if it was possible there are still other generic options that may make more sense. If the source internal data is small enough, then you can build a generic where clause to solve the problem.
Select from DBTAB where (string_cond).
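Sketched out, that generic WHERE approach might look like this (the table and field names are illustrative, not from a real system, and it assumes the internal table is small):

```abap
" Build a dynamic WHERE condition from the keys in a (small) internal table.
DATA lv_cond TYPE string.
LOOP AT lt_keys INTO DATA(lv_matnr).
  IF lv_cond IS INITIAL.
    lv_cond = |matnr = '{ lv_matnr }'|.
  ELSE.
    lv_cond = |{ lv_cond } OR matnr = '{ lv_matnr }'|.
  ENDIF.
ENDLOOP.

SELECT matnr, malzeme_deger
  FROM zmm_ddl_mbew
  WHERE (lv_cond)
  INTO TABLE @DATA(lt_result).
```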
If the size of the internal table is so large that you end up with half the data in memory and half on a DB, there may be a better generic solution anyway.
No, it is not possible. From the SELECT datasource help:
If the FROM clause is specified statically, the internal table cannot be a generically typed formal parameter or a generically typed field symbol. Objects like this can only be specified in a dynamic FROM clause and must represent a matching internal table at runtime.
The above rule remains valid whether itab joined with dbtab or not.
I would like to know how I can prevent a duplicate entry (based on my own client/project definition of what that means-below), in an AppSheet mobile app connected to Google Sheets.
AppSheet talks a lot about UNIQUEID(), which it also encourages using and designating as the KEY field. Row number is another possibility.
This is fine for the KEY in the sense of its purpose is to be unique, meaningless, and uniquely identify a record, and relate to other tables.
However, it doesn't prevent a duplicate ("duplicate" again, as defined by my client's business rules and process) from occurring. I mean, I assume UNIQUEID() theoretically would, but that's abstract theory, because it would only produce unique ones anyway.
My table has these columns: [FACILITY NUMBER] and [TIMESTAMP] (date and time of event). We consider it a duplicate event, and want to DISALLOW the adding of such a record to this table, if the second record has the same DATE (time irrelevant) with the same FACILITY. (We just do one facility per day, ever.)
In AppSheet, how can I create some logic that disallows the add based on that criteria? I even basically know some ways I would do it; it just seems like I can't find a place to "put" it. I created an expression that perfectly evaluates to TRUE or FALSE and nothing else (by referencing whether or not the FACILITY NUMBER on the new record being added is in a SLICE which I've defined as today's entries). I wanted to place this expression in another (random) field's Valid_If. To me it seemed like that would meet the platform documentation: the other random field would be considered valid only if the expression evaluated to true. But instead AppSheet thought I wanted to convert the entire [other random column] to a dependent dropdown.
Please help! I will cry tears of joy when AppSheet introduces FORM events and RECORD events that can be hooked into at the time of keying, saving, etc.
I'm surprised to see this question here on Stack Overflow --- most AppSheet questions are at http://community.appsheet.com.
The brief answer is that you are doing the right thing providing a Valid_If constraint. Your constraint is of the form IN([_THIS], <list>), so AppSheet is doing the "smart" thing by automatically converting that list into a dropdown of allowed values. From your post, it appears that you may instead want to say NOT(IN([_THIS], <list>)) -- thereby saying that the value [_THIS] is valid as long as it is not in the list specified (making sure it is not a duplicate).
Old question, but in case someone stumbles upon the same:
The (not so simple) answer is given in https://help.appsheet.com/en/articles/961274-list-expressions-and-aggregates.
From the reference:
NOT(IN([_THIS], SELECT(Customers[State], NOT(IN([CustomerId], LIST([_THISROW].[CustomerId]))))))
When used as the Valid_If condition for the State column, it ensures that every customer has a unique value for State. In this example, we assume that CustomerId is the key for the Customers table.
This could be written more schematic like this:
NOT(IN([_THIS], SELECT(<TableName>[<UniqueColumnName>], NOT(IN([<KeyColumnName>], LIST([_THISROW].[<KeyColumnName>]))))))
Technically it says:
Get me a list of the current values of the <UniqueColumnName> column of the <TableName> table
Ignore the value of the current row (identified by [_THISROW] and looking into the <KeyColumnName> column)
Check if the given value exists in the resulting list
This statement has to be defined - with the correct values for <TableName>, <UniqueColumnName> & <KeyColumnName> - as the Valid_If statement.
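The same validity rule, sketched in plain Python just to make the logic concrete (the names and data are illustrative, nothing here is AppSheet API):

```python
# Sketch of the Valid_If logic: a new value is valid iff it does not appear
# among the OTHER rows' values (the current row, found by key, is excluded,
# so re-saving a record with its own value stays valid).

def is_valid(rows, this_row_key, this_value, key_col="id", value_col="state"):
    """Mirror NOT(IN([_THIS], SELECT(..., NOT(IN([key], LIST(_THISROW.key))))))."""
    other_values = [r[value_col] for r in rows if r[key_col] != this_row_key]
    return this_value not in other_values

rows = [
    {"id": 1, "state": "CA"},
    {"id": 2, "state": "NY"},
]
# Editing row 2: keeping its own value "NY" is valid (own row is excluded)...
print(is_valid(rows, 2, "NY"))   # True
# ...but changing it to "CA" would duplicate row 1's value.
print(is_valid(rows, 2, "CA"))   # False
```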
I am very new to EF so my descriptions may not make sense. Please ask me to clarify anything and I'll do my best to find the answer. In an existing application we are using EF4. 95% of our string columns in our db are varchar, but we do have 5% being nvarchar. In the edmx file, I see the columns have the proper Unicode property set to true or false. Then we use .tt file to generate our entity classes. The problem is that the generated queries are trying to convert everything to unicode which is obviously slowing down all of our queries.
I found the following answers here but I don't believe they will help me. The first is using ColumnAttribute but from what I can see, this was not available until v4.1. The second seems like it overrides on a global level (although I don't understand where). Because we do have some nvarchar columns, I don't think this will work either. I've also seen use of AsNonUnicode() method. I have not fully researched if this is available in v4 because that seems like it needs to be used specifically every time we send a query. This is a large application so this would be a huge undertaking. Are these my only options here? Am I missing something? Any advice is appreciated.
Entity Framework Data Annotations Set StringLength VarChar
EF Code First - Globally set varchar mapping over nvarchar
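For context, the per-query AsNonUnicode approach mentioned above looks like this in EF4 (a sketch; the context and entity names are hypothetical). EntityFunctions lives in System.Data.Objects in the .NET 4 version of System.Data.Entity:

```csharp
using System.Data.Objects; // EF4: home of EntityFunctions

// Hypothetical model names. AsNonUnicode marks the parameter as non-Unicode,
// so SQL Server can compare against the varchar column without an implicit
// conversion to nvarchar (which defeats the index on that column).
var match = context.Customers
                   .Where(c => c.Code == EntityFunctions.AsNonUnicode(code))
                   .ToList();
```

As noted, applying this to every query in a large application is the drawback.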
Assume a Rails app connected to a Postgres database has a table called 'Party', which can have fewer than five well-defined party_types such as 'Person' or 'Organization'.
Would you store the party_type in the Party table (e.g. party.party_type = 'Person') or normalize it (e.g. party.party_type = 1 and party_type.id = 1 / party_type.name = 'Person')? And why?
If the party types can be defined in code, I'd definitely go with the names ("Person" etc.).
If you expect such types to be added dynamically by an admin/user, and you have a GUI for it, then model it as its own table and reference it like party.party_type = 1.
Of course there will be a DB storage/performance difference between "1" and "Person", but that's too minor to consider when the app is not that big.
There are two issues here:
Are you treating these types generically or not?1
Do you display the type to the user?
If the answer to (1) is "yes", then just adding a row in the table is clearly preferable to changing a constraint and/or your application code.
If the answer to (2) is "yes", then storing a human-readable label in the database may be preferable to translating to human-readable text in the application code.
So in a nutshell, you'd probably want to have a separate table. On the other hand, if all types are known in advance and you just use them to drive specific paths of your application logic without directly displaying to user, then separate table may be superfluous - just define the appropriate CHECK to restrict the field to valid values and clearly document each value.
1 In other words, can you add a new type and the logic of your application will continue to work, or you also need to change the application logic?
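The CHECK route from the last paragraph, sketched in Postgres (the table and constraint names are illustrative; the values come from the question):

```sql
-- Keeps the small, fixed set of types documented in the schema itself.
ALTER TABLE parties
  ADD CONSTRAINT parties_party_type_check
  CHECK (party_type IN ('Person', 'Organization'));
```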
We are building ASP.NET MVC3 web applications using Visual Studio, SQL Server 2008 R2 & EF Code First 4.1.
Quite often we have smaller, what we call, "lookup" tables. For example a "Status" table contain an "Id" and a "Name". As the application grows these tables become quite frequent and I would like to know the best way to "group" these lesser important tables away from the crux of the application.
It has been suggested to me to add a prefix like "LkStatus" to help me, but what about moving all the lookup tables out of dbo and into their own schema?
Can anyone see any drawbacks in this method?
Thanks Paul
No drawbacks with this method. I'm a fan of schemas personally. I'd use Lookup though
To change your table schema, you have two ways:
ALTER SCHEMA Lookup TRANSFER dbo.SomeTable
or
ALTER AUTHORIZATION ON dbo.SomeTable TO Lookup
This is going to come down to preference. There really isn't a "gotcha" either way. I prefer a table prefix but wouldn't be bothered either way; we use LU_*. As long as either option is enforced, maintenance down the line will be easy.
Since the tables are small, what about grouping them together into a single table? Instead of using the table name as a pseudo-key, use a real key. For example, you could have a table called Lookup, with an Id, Type, Name and Value, where Type = 'Status' for your status values. Setting the clustered index to (Type, Name) would physically group all rows of the same type together, which would make it fast to read them all as a group, if needed.
If your Names can have different data types, add an extra column for each required type: one for integers, one for strings, one for floats, etc. You can do something similar using an XML column; the T-SQL takes just a little more effort.
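A sketch of that combined table in T-SQL (the column sizes and index names are assumptions):

```sql
CREATE TABLE dbo.Lookup (
    Id    INT IDENTITY(1, 1) NOT NULL,
    Type  VARCHAR(50)  NOT NULL,  -- e.g. 'Status'
    Name  VARCHAR(100) NOT NULL,
    Value INT          NULL,
    CONSTRAINT PK_Lookup PRIMARY KEY NONCLUSTERED (Id)
);

-- Clustering on (Type, Name) stores all rows of one lookup type contiguously.
CREATE UNIQUE CLUSTERED INDEX IX_Lookup_Type_Name
    ON dbo.Lookup (Type, Name);
```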