Is there a system table where an Informix database stores column descriptions?
I know how to do it in SQL Server and Oracle, but not in Informix...
There are several methods to obtain information about Informix database objects (like tables or columns) using the Informix catalog tables.
You can get the info directly from the catalog with a basic SQL SELECT statement, for example:
SELECT TRIM(c.colname) colname,
       CASE
           WHEN MOD(coltype,256)=0 THEN 'CHAR'
           WHEN MOD(coltype,256)=1 THEN 'SMALLINT'
           WHEN MOD(coltype,256)=2 THEN 'INTEGER'
           WHEN MOD(coltype,256)=3 THEN 'FLOAT'
           WHEN MOD(coltype,256)=4 THEN 'SMALLFLOAT'
           WHEN MOD(coltype,256)=5 THEN 'DECIMAL'
           WHEN MOD(coltype,256)=6 THEN 'SERIAL'
           WHEN MOD(coltype,256)=7 THEN 'DATE'
           WHEN MOD(coltype,256)=8 THEN 'MONEY'
           -- needs more entries --
           ELSE TO_CHAR(coltype)
       END AS Type,
       BITAND(coltype,256)=256 AS NotNull
FROM systables AS t
JOIN syscolumns AS c ON t.tabid = c.tabid
WHERE t.tabtype = 'T'
  AND t.tabname = 'customer'
ORDER BY c.colno;
Note that the CASE section will need an entry for each Informix data type.
Descriptions of all the catalog tables can be found at:
https://www.ibm.com/docs/en/informix-servers/14.10?topic=reference-system-catalog-tables
https://www.ibm.com/docs/en/informix-servers/14.10?topic=tables-systables
https://www.ibm.com/docs/en/informix-servers/14.10?topic=tables-syscolumns
Another method is to use the INFO SQL statement, as described here:
https://www.ibm.com/docs/en/informix-servers/14.10?topic=statements-info-statement
Something like:
informix@DESKTOP:~/IDS$ dbaccess stores7 -
Database selected.
> info columns for customer;
Column name          Type        Nulls

customer_num         serial      no
fname                char(15)    yes
lname                char(15)    yes
company              char(20)    yes
address1             char(20)    yes
address2             char(20)    yes
city                 char(15)    yes
state                char(2)     yes
zipcode              char(5)     yes
phone                char(18)    yes
>
Alternatively, you can use any metadata method provided by your API (like JDBC's DatabaseMetaData.getColumns()).
I have tried to use a lookup join but I get this problem:
SELECT
  e.isFired,
  e.eventMrid,
  e.createDateTime,
  r.id AS eventReference_id,
  r.type
FROM Event e
JOIN EventReference FOR SYSTEM_TIME AS OF e.createDateTime AS r
  ON r.id = e.eventReference_id;
[ERROR] Could not execute SQL statement. Reason: org.apache.flink.table.api.ValidationException: Event-Time Temporal Table Join requires both primary key and row time attribute in versioned table, but no row time attribute can be found.
Whether the Flink SQL planner interprets that query as a temporal join or as a lookup join depends on the type of the table on the right-hand side. In this case I guess you haven't used a lookup source, and the error indicates that the row time attribute of the versioned table isn't defined correctly (see the sketch after the lists below).
Temporal (time-versioned) joins require
an equality predicate on the primary key of the versioned table
a time attribute
and lookup joins require
a lookup source connector (e.g., JDBC, HBase, Hive, or something custom)
an equality join predicate
a processing time attribute used in combination with FOR SYSTEM_TIME AS OF (to prevent needing to update the join results)
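For reference, here is roughly what the versioned table on the right-hand side needs to declare so that the planner can perform an event-time temporal join. This is only a sketch: the update_time column, the upsert-kafka connector, and all of its options are assumptions on my part, not taken from your setup.
CREATE TABLE EventReference (
    id          STRING,
    `type`      STRING,
    update_time TIMESTAMP(3),
    -- primary key: required for a versioned table
    PRIMARY KEY (id) NOT ENFORCED,
    -- row time attribute: this is the piece the error message says is missing
    WATERMARK FOR update_time AS update_time - INTERVAL '5' SECOND
) WITH (
    'connector' = 'upsert-kafka',
    'topic' = 'event-reference',
    'properties.bootstrap.servers' = 'localhost:9092',
    'key.format' = 'json',
    'value.format' = 'json'
);
With a definition along those lines (or a deduplicated view over an append-only source), FOR SYSTEM_TIME AS OF e.createDateTime can be resolved as an event-time temporal join.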
I've searched for a bit but wasn't able to find a specific question on this. How do I obtain all the column names in a table whose names contain a specific string? Specifically, if a column name matches LIKE '%bal%', I would like to write a query that returns that column name and any others that meet that criterion, in Sybase.
Edit: The Sybase RDBMS is Sybase IQ.
Updated based on the OP's additional comments: the question is for a Sybase IQ database.
I don't have a Sybase IQ database in front of me at the moment but we should be able to piece together a workable query based on IQ's system tables/views:
IQ system views
IQ system tables
The easier query will use the system view SYSCOLUMNS:
select cname
from SYS.SYSCOLUMNS
where tname = '<table_name>'
and cname like '%<pattern_to_match>%'
Or going against the system tables SYSTABLE and SYSCOLUMN:
select c.column_name
from SYS.SYSTABLE t
join SYS.SYSCOLUMN c
on t.table_id = c.table_id
where t.table_name = '<table_name>'
and c.column_name like '%<pattern_to_match>%'
NOTE: The Sybase ASE query (below) will probably also work since the referenced (ASE) system tables (sysobjects, syscolumns) also exist in SQL Anywhere/IQ products as a (partial) attempt to provide (ASE) T-SQL compatibility.
Assuming this is Sybase ASE then a quick join between sysobjects and syscolumns should suffice:
select c.name
from dbo.sysobjects o
join dbo.syscolumns c
on o.id = c.id
and o.type in ('U','S') -- 'U'ser or 'S'ystem table
where o.name = '<table_name>'
and c.name like '%<portion_of_column_name>%'
For example, let's say we want to find all columns in the sysobjects table where the column name contains the string 'trig':
select c.name
from dbo.sysobjects o
join dbo.syscolumns c
on o.id = c.id
and o.type in ('U','S')
where o.name = 'sysobjects'
and c.name like '%trig%'
order by 1
go
----------
deltrig
instrig
seltrig
updtrig
Is it possible for a Sumo Logic user to define data source values inside a query and use them in a subquery condition?
For example, in SQL one can use literal data as a source table.
-- example in MySQL
SELECT * FROM (
SELECT 1 as `id`, 'Alice' as `name`
UNION ALL
SELECT 2 as `id`, 'Bob' as `name`
-- ...
) as literal_table
I wonder if Sumo Logic also has this kind of functionality.
I believe combining such literal data with subqueries would make users' lives easier.
I believe the equivalent in a Sumo Logic query would be using the save operator to create a lookup table in a subquery: https://help.sumologic.com/05Search/Subqueries#Reference_data_from_child_query_using_save_and_lookup
Basically something like this:
_sourceCategory=katta
[subquery:(_sourceCategory=stream explainJSONPlan.ETT) error
| where !(statusmessage="Finished successfully" or statusmessage="Query canceled" or isNull(statusMessage))
| count by sessionId, statusMessage
| fields -_count
| save /explainPlan/neededSessions
| compose sessionId keywords]
| parse "[sessionId=*]" as sessionId
| lookup statusMessage from /explainPlan/neededSessions on sessionid=sessionid
Where /explainPlan/neededSessions is your literal data table that you select from later on in the query (using lookup).
You can define a lookup table from some static map/dictionary that you don't update very often (you can even point it at a file on the internet in case you change the mapping more frequently).
Then you can use the | lookup operator; there is nothing subquery-specific about it.
Disclaimer: I am currently employed by Sumo Logic.
I have a scenario with 5 different tables:
Table 1 - Product, Columns - ProductId, BatchNummer, Status, GroupId, OrderNummer
Table 2 - ProductGroup, Columns - GroupId, ProductType, Description
Table 3 - Electronics, Columns - EId, Description, BatchNummer, OrderNummer, OrderData
Table 4 - Manual, Columns - MId, Description, Status, OrderNummer, ProcessStep
Table 5 - ProcessedProduct, Columns - same as Product with one extra datetime column
Now, according to the business flow, I need to go through all the data in the Product table and check whether the underlying table (Electronics or Manual, which depends on the ProductType column of ProductGroup) has the OrderNummer value; if it does, insert a record into table 5 "ProcessedProduct", else skip the record.
For this requirement, I want to create a procedure. But I am stuck on how to determine which underlying table (Electronics/Manual) I have to refer to and how this can be achieved.
Moreover, how should I write the loop for inserting the records?
Note: I cannot change the tables schema.
With a PL/SQL procedure you could switch inside a LOOP, but you don't need an imperative algorithm just to check whether OrderNummer exists in either Electronics or Manuals.
Supposing the detail table is chosen by a ProductType value of either 'Electronics' or 'Manuals', you could:
INSERT INTO ProcessedProduct (ProductId, BatchNummer, Status, GroupId, OrderNummer, TS)
SELECT ProductId, BatchNummer, Status, GroupId, OrderNummer, SYSDATE
FROM Product p
INNER JOIN ProductGroup pg USING (GroupId)
WHERE EXISTS (
SELECT NULL FROM Electronics e
WHERE p.OrderNummer = e.OrderNummer
AND pg.ProductType = 'Electronics'
UNION
SELECT NULL FROM Manuals m
WHERE p.OrderNummer = m.OrderNummer
AND pg.ProductType = 'Manuals');
Plain SQL is generally the fastest way, and WHERE EXISTS is usually the fastest condition.
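If you still want to package this as a stored procedure (for scheduling, for example), a minimal sketch, reusing the table and column names assumed above with a made-up procedure name, would simply wrap that statement:
CREATE OR REPLACE PROCEDURE process_products AS
BEGIN
  -- Same set-based INSERT as above: no explicit loop over Product is needed.
  INSERT INTO ProcessedProduct (ProductId, BatchNummer, Status, GroupId, OrderNummer, TS)
  SELECT ProductId, BatchNummer, Status, GroupId, OrderNummer, SYSDATE
  FROM Product p
  INNER JOIN ProductGroup pg USING (GroupId)
  WHERE EXISTS (
    SELECT NULL FROM Electronics e
    WHERE p.OrderNummer = e.OrderNummer
    AND pg.ProductType = 'Electronics'
    UNION
    SELECT NULL FROM Manuals m
    WHERE p.OrderNummer = m.OrderNummer
    AND pg.ProductType = 'Manuals');
END process_products;
/
Commit handling is left to the caller; the key point is that a single INSERT ... SELECT replaces a row-by-row loop.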
I have a column of type varchar in my Postgres database which I meant to be integer... and now I want to change it. Unfortunately, this doesn't seem to work using my Rails migration:
change_column :table1, :columnB, :integer
Which seems to output this SQL:
ALTER TABLE table1 ALTER COLUMN columnB TYPE integer
So I tried doing this:
execute 'ALTER TABLE table1 ALTER COLUMN columnB TYPE integer USING CAST(columnB AS INTEGER)'
but the cast doesn't work in this instance because some of the column values are null...
any ideas?
Error:
PGError: ERROR: invalid input syntax for integer: ""
: ALTER TABLE table1 ALTER COLUMN columnB TYPE integer USING CAST(columnB AS INTEGER)
Postgres v8.3
It sounds like the problem is that you have empty strings in your table. You'll need to handle those, probably with a case statement, such as:
execute %{ALTER TABLE "table1" ALTER COLUMN columnB TYPE integer USING CAST(CASE columnB WHEN '' THEN NULL ELSE columnB END AS INTEGER)}
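An equivalent, slightly shorter alternative (my suggestion, not part of the original answer) is NULLIF, which turns empty strings into NULL before the cast; it can be passed to execute in the same way:
-- NULLIF(columnB, '') yields NULL for empty strings, so the cast no longer fails
ALTER TABLE "table1" ALTER COLUMN columnB TYPE integer USING CAST(NULLIF(columnB, '') AS INTEGER);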
Update: completely rewritten based on the updated question.
NULLs shouldn't be a problem here.
Tell us your PostgreSQL version and your error message.
Besides, why are you quoting identifiers? Be aware that unquoted identifiers are converted to lowercase (default behaviour), so there might be a problem with "columnB" in your query: it appears quoted first, then unquoted in the cast.
Update: Before converting a column to integer, you must be sure that all your values are convertible. In this case, it means that columnB should contain only digits (or NULL).
You can check this with something like:
select columnB from table1 where not columnB ~ E'^[0-9]+$';
If you want your empty strings to be converted to NULL integers, then first run:
UPDATE table1 SET columnB = NULL WHERE columnB = '';
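Putting those two steps together (both can be run through execute in the migration), the full sequence for the table and column from the question would look like this:
-- 1. Turn empty strings into NULLs so they no longer break the cast
UPDATE table1 SET columnB = NULL WHERE columnB = '';
-- 2. Re-run the conversion; NULL values pass through the cast unchanged
ALTER TABLE table1 ALTER COLUMN columnB TYPE integer USING CAST(columnB AS INTEGER);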