I have created two tables in Snowflake.
create or replace TRANSIENT TABLE TESTPARENT (
COL1 NUMBER(38,0) NOT NULL,
COL2 VARCHAR(16777216) NOT NULL,
COL3 VARCHAR(16777216) NOT NULL,
constraint UNIQ_COL3 unique (COL3)
);
create or replace TRANSIENT TABLE TESTCHILD3 (
COL_A NUMBER(38,0) NOT NULL,
COL_B NUMBER(38,0) NOT NULL,
ABCDEF VARCHAR(16777216) NOT NULL,
constraint FKEY_1 foreign key (COL_A, COL_B) references TEST_DB.PUBLIC.TESTPARENT1(COL1,COL2),
constraint FKEY_2 foreign key (ABCDEF) references TEST_DB.PUBLIC.TESTPARENT(COL3)
);
Now I want to execute a query and see the names of the columns involved in the FKEY_2 foreign key on table TESTCHILD3, but there doesn't seem to be any table or view that keeps this information. I can find the column names for UNIQUE and PRIMARY KEY constraints, but there is nothing for FOREIGN KEYs.
EDIT
I have already tried INFORMATION_SCHEMA.TABLE_CONSTRAINTS, along with INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS and all the other system views. No luck. Only DESC TABLE gives me some information relating constraints to columns, but even there the FOREIGN KEY constraint information is missing.
SHOW IMPORTED KEYS IN TABLE <fk_table_name>;
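For the tables in the question, assuming they live in TEST_DB.PUBLIC, that would look something like the sketch below; the RESULT_SCAN step is optional, and the quoted lowercase output column names are worth double-checking against your own SHOW output:
-- List all foreign keys defined on the child table
show imported keys in table TEST_DB.PUBLIC.TESTCHILD3;
-- Optionally filter the previous result down to FKEY_2
select "fk_name", "fk_column_name", "pk_table_name", "pk_column_name"
from table(result_scan(last_query_id()))
where "fk_name" = 'FKEY_2';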
Updated answer:
I was checking on something unrelated and noticed a very efficient way to list all primary and foreign keys:
show exported keys in account; -- Foreign keys
show primary keys in account;
When you limit the call to a table, it appears you have to request the foreign keys that point to the parent table:
show exported keys in table "DB_NAME"."SCHEMA_NAME"."PARENT_TABLE";
You can check the documentation for how to limit the show command to a specific database or schema, but this returns rich information in a table very quickly.
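For instance, with the database and schema names assumed from the question, the scope can be a database or a schema rather than the whole account:
show exported keys in database TEST_DB;       -- FKs whose parent tables are in TEST_DB
show exported keys in schema TEST_DB.PUBLIC;  -- FKs pointing at parent tables in this schema
show imported keys in schema TEST_DB.PUBLIC;  -- FKs defined on child tables in this schema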
Maybe you can try querying this view: INFORMATION_SCHEMA.TABLE_CONSTRAINTS
Note: TABLE_CONSTRAINTS only displays objects for which the current role for the session has been granted access privileges.
For more see: https://docs.snowflake.net/manuals/sql-reference/info-schema/table_constraints.html
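A minimal query along those lines (note that, as the question points out, this view lists the constraints but not the columns they cover):
select constraint_name, constraint_type, table_name
from TEST_DB.INFORMATION_SCHEMA.TABLE_CONSTRAINTS
where table_name = 'TESTCHILD3';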
I keep getting a cryptic "Syntax error" on this statement for an Informix database.
CREATE TABLE Historial
(
id_evento SERIAL PRIMARY KEY CONSTRAINT Historial_claves_primarias,
foranea_CI_Persona INT REFERENCES Personas (CI) CONSTRAINT Historial_fk_Personas_CI,
IP varchar(20) NOT NULL CONSTRAINT ip_vacia,
query lvarchar(1000) NOT NULL CONSTRAINT Historial_query_vacia,
fecha_hora DATETIME NOT NULL CONSTRAINT fecha_historial_vacio
);
Can someone help me and point out what it is? I seriously can't see it.
I have just found out that I'm supposed to give the precision (the qualifier) for the DATETIME column, as shown below, but with a cryptic error like that, what do you expect?
drop table if exists Historial;
CREATE TABLE Historial
(
id_evento SERIAL PRIMARY KEY CONSTRAINT Historial_claves_primarias,
foranea_CI_Persona INT REFERENCES Personas (CI) CONSTRAINT Historial_fk_Personas_CI,
IP varchar(20) NOT NULL CONSTRAINT ip_vacia,
query lvarchar(1000) NOT NULL CONSTRAINT Historial_query_vacia,
fecha_hora datetime year to minute not null constraint Historial_fecha_vacia
);
I am using Entity Framework 6.1.3 and a database first approach.
It is a small database with a many-to-many relationship between Tags and BoxedItems; a table named ItemsTags holds the relationship.
I get an exception when using the scaffolded code to delete a BoxedItem:
db.BoxedItems.Remove(boxedItem);
db.SaveChanges();
SqlException: The DELETE statement conflicted with the REFERENCE
constraint "FK_ItemsTags_Items". The conflict occurred in database
"TimeBox", table "dbo.ItemsTags", column 'IdItem'.
The code for the relationship table is below. The PK for BoxedItem needs to be a Guid, and for Tags it is an INT IDENTITY (1, 1).
CREATE TABLE [dbo].[ItemsTags] (
[IdItem] UNIQUEIDENTIFIER NOT NULL,
[IdTag] INT NOT NULL,
CONSTRAINT [PK_ItemsTags] PRIMARY KEY CLUSTERED ([IdItem] ASC, [IdTag] ASC),
CONSTRAINT [FK_ItemsTags_Tags] FOREIGN KEY ([IdTag]) REFERENCES [dbo].[Tags] ([Id]),
CONSTRAINT [FK_ItemsTags_Items] FOREIGN KEY ([IdItem]) REFERENCES [dbo].[BoxedItems] ([Id])
);
Would the EF auto-generated code work out of the box if my BoxedItem PK were an INT IDENTITY (1, 1)? EF seems to like that better for auto-generated code.
Is there a smarter way to delete the BoxedItem, other than a custom SQL statement?
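For reference, one option that would let the scaffolded Remove/SaveChanges succeed without touching ItemsTags manually is to have SQL Server cascade the delete. This is only a sketch of that option, not necessarily what the EF scaffolding expects:
-- Recreate the FK so deleting a BoxedItem also removes its ItemsTags rows
ALTER TABLE [dbo].[ItemsTags] DROP CONSTRAINT [FK_ItemsTags_Items];
ALTER TABLE [dbo].[ItemsTags] ADD CONSTRAINT [FK_ItemsTags_Items]
    FOREIGN KEY ([IdItem]) REFERENCES [dbo].[BoxedItems] ([Id])
    ON DELETE CASCADE;
With a database-first model, the EDMX would then need to be refreshed from the database so EF knows about the cascade.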
Table creation failed because a foreign key constraint was not correctly formed.
The first CREATE TABLE, for my_seeking, works fine. When I try to run the CREATE for contactsTOseeking, I get the following error message:
Error Code: 1005. Can't create table 'db_ferguson.contactstoseeking'
(errno: 150)
CREATE TABLE IF NOT EXISTS `my_seeking` (
`id` INT NOT NULL ,
`seeking` VARCHAR(45) NULL ,
PRIMARY KEY (`id`) )
ENGINE = InnoDB
;
CREATE TABLE IF NOT EXISTS `contactsTOseeking` (
`id` INT NOT NULL ,
`seek` VARCHAR(45) NOT NULL ,
UNIQUE INDEX `id_UNIQUE` (`id` ASC) ,
UNIQUE INDEX `seek_UNIQUE` (`seek` ASC) ,
CONSTRAINT `fk_contactsTOseeking_my_seeking1`
FOREIGN KEY (`id` )
REFERENCES `mydb`.`my_seeking` (`id` )
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
A search of the web shows that error 150 is tied to my foreign key constraint:
"Cannot create table. If the error message refers to error 150, table creation failed because a foreign key constraint was not correctly formed. If the error message refers to error –1, table creation probably failed because the table includes a column name that matched the name of an internal InnoDB table."
This code was generated by MySQL Workbench, and this is my first attempt at foreign keys. I tried coding it manually and had issues, so I thought I would let Workbench do it. Whatever the problem is, I can't see it, or it is beyond my current skill set.
Did you try removing the 'mydb' prefix in the REFERENCES clause? It points to a table in the 'mydb' database.
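For what it's worth, a corrected version of the child table, assuming both tables live in the same database (db_ferguson), might look like this:
CREATE TABLE IF NOT EXISTS `contactsTOseeking` (
  `id` INT NOT NULL,
  `seek` VARCHAR(45) NOT NULL,
  UNIQUE INDEX `id_UNIQUE` (`id` ASC),
  UNIQUE INDEX `seek_UNIQUE` (`seek` ASC),
  CONSTRAINT `fk_contactsTOseeking_my_seeking1`
    FOREIGN KEY (`id`)
    REFERENCES `my_seeking` (`id`)  -- no `mydb` prefix: reference the table in the current database
    ON DELETE NO ACTION
    ON UPDATE NO ACTION)
ENGINE = InnoDB;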
I have an application that I know would make a great cube and would be useful for more than the standard flat Reporting Services report. We're about to jump into BI stuff with a consultant, but I'd like to give it a shot before we do, mostly so I know something of what we're going to do.
The application tracks surveys in nursing homes across the country. They can be annual, complaint, or several other types of survey; they have penalties associated with the tags given, and they have documentation associated with them.
What I'd like to do is come up with a way that will allow us to leverage the data we have: how many tags in Florida for the month of June? How many facilities were on time delivering their documentation? How many annual (surprise) surveys happened in the 1st quarter of this year compared to last year?
I'm including the schemas in hopes that someone will be able to tell me not only what is dim and what is fact, but what data goes where. I figure that'll be a great start.
Anything would be really helpful. I'm trying to get a small data mart set up while I'm poring through the Data Warehouse Lifecycle Toolkit by Kimball.
Thanks!
M#
The Entity table, a list of all of our facilities. The primary key is a five-letter code denoting the building.
CREATE TABLE [dbo].[Entity](
[entID] [varchar](10) NOT NULL,
[entShortName] [varchar](150) NULL,
[entNumericID] [int] NOT NULL,
[orgID] [int] NOT NULL,
[regionID] [int] NOT NULL,
[portID] [int] NOT NULL,
[busTypeID] [int] NOT NULL,
[adpID] [varchar](50) NULL,
[eHealthDataID] [varchar](50) NULL,
[updateDate] [datetime] NULL CONSTRAINT [DF_Entity_updateDate] DEFAULT (getdate()),
[powProID] [int] NULL,
[regionReportingID] [int] NULL,
[regionPresEmail] [varchar](300) NULL,
[regionClinDirEmail] [varchar](300) NULL,
CONSTRAINT [PK_EntityNEW] PRIMARY KEY CLUSTERED
(
[entID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 75) ON [PRIMARY]
) ON [PRIMARY]
Survey Main
CREATE TABLE [dbo].[surveyMain](
[surveyID] [int] IDENTITY(1,1) NOT NULL,
[surveyDateFac] AS (([facility]+'-')+CONVERT([varchar],[surveyDate],(101))),
[surveyDate] [datetime] NOT NULL,
[surveyType] [int] NOT NULL,
[surveyBy] [int] NULL,
[facility] [varchar](10) NOT NULL,
[originalSurvey] [int] NULL,
[exitDate] [datetime] NULL,
[dpnaDate] AS (dateadd(month,(3),[exitDate])),
[clearedTags] [varchar](1) NULL,
[substantiated] [varchar](1) NULL,
[firstRevisit] [int] NULL,
[secondRevisit] [int] NULL,
[thirdRevisit] [int] NULL,
[fourthRevisit] [int] NULL,
[updated] [datetime] NULL CONSTRAINT [DF_surveyMain_updated] DEFAULT (getdate()),
CONSTRAINT [PK_tagSurvey] PRIMARY KEY CLUSTERED
(
[surveyID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
Survey Types:
CREATE TABLE [dbo].[surveyTypes](
[surveyTypeID] [int] IDENTITY(1,1) NOT NULL,
[surveyTypeDesc] [varchar](100) NOT NULL,
CONSTRAINT [PK_surveyTypes] PRIMARY KEY CLUSTERED
(
[surveyTypeID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Survey Files
CREATE TABLE [dbo].[surveyFiles](
[surveyFileID] [int] IDENTITY(1,1) NOT NULL,
[surveyID] [int] NOT NULL,
[surveyFilesTypeID] [int] NOT NULL,
[documentDate] [datetime] NOT NULL,
[responseDate] [datetime] NULL,
[receiptDate] [datetime] NULL,
[dateCertain] [datetime] NULL,
[fileName] [varchar](250) NULL,
[fileUpload] [image] NULL,
[fileDesc] [varchar](100) NULL,
[updated] [datetime] NOT NULL CONSTRAINT [DF_surveyFiles_updated] DEFAULT (getdate()),
CONSTRAINT [PK_surveyFiles] PRIMARY KEY CLUSTERED
(
[surveyFileID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 75) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
Survey Fines
CREATE TABLE [dbo].[surveyFines](
[surveyFinesID] [int] IDENTITY(1,1) NOT NULL,
[surveyID] [int] NULL,
[surveyFinesTypeID] [int] NULL,
[dateRecommended] [datetime] NULL,
[dateImposed] [datetime] NULL,
[totalFineAmt] [varchar](100) NULL,
[wasImposed] [varchar](3) NULL,
[dateCleared] [datetime] NULL,
[comments] [varchar](500) NULL,
[updated] [datetime] NOT NULL CONSTRAINT [DF_surveyFines_updated] DEFAULT (getdate()),
CONSTRAINT [PK_surveyFines] PRIMARY KEY CLUSTERED
(
[surveyFinesID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 75) ON [PRIMARY]
) ON [PRIMARY]
Survey Tags
CREATE TABLE [dbo].[surveyTags](
[seq] [int] IDENTITY(1,1) NOT NULL,
[surveyID] [int] NOT NULL,
[tagDescID] [int] NOT NULL,
[tagStatus] [int] NULL,
[scopesev] [varchar](5) NOT NULL,
[comments] [varchar](1000) NULL,
[clearedDate] [datetime] NULL,
[updated] [datetime] NULL CONSTRAINT [DF_surveyTags_updated] DEFAULT (getdate()),
CONSTRAINT [PK_tagMain] PRIMARY KEY CLUSTERED
(
[seq] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
What I'd like to do is come up with a way that will allow us to leverage the data we have - how many tags in Florida for the month of June? How many facilities were on time delivering their documentation? How many annual (surprise) surveys happened in the 1st quarter of this year compared to last year?
A dimension is a measurement range. The measurement range can be continuous, like dates, or discrete, like facilities. In your questions, the dimensions are facility and date, date/time, and date, respectively.
The only way you can answer the question "How many tags in Florida for the month of June?" is to associate tags with facilities and tags with dates.
The only way you can answer the question "How many facilities were on time delivering their documentation?" is to associate documentation delivery with facility and date due with facility.
You should follow this same analytical process with the rest of the questions or queries you expect the data warehouse to answer.
A fact is an entity or an object. A tag is a fact. Documentation delivery is a fact. Facts are almost always immutable in a data warehouse once they're loaded.
As to your schema, I'd have to study it more to give specific recommendations, but in general you want to use a star schema. The center of each star is your facts, entities, and objects. The tables that make up the points of the star are your dimension tables.
The first thing you need to do is separate your facts and your dimensions. None of your entity tables should contain dates, location codes, or whatever else you determine is a dimension. However, your fact tables will contain foreign keys to date tables, location tables, or other dimension tables.
You'll probably also need summary tables. Summary tables contain the same columns as your fact tables, with the addition of one or more sums across different dimensions. As an example, the question "How many tags in Florida for the month of June?" can be answered much quicker if you already have the sum of the tags for Florida (or, more properly, each facility in Florida) for the month (or each of the days) of June, 2010.
The period that you sum for depends on the mixture of queries that you expect. In your data warehouse, day might be too short a period. In other words, it's just as quick to do the summary in SQL as it is to select the summary row.
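A hedged sketch of what such a summary table (and the query it speeds up) might look like; the table and column names are assumptions, since the question's schema has no tag/facility summary yet:
-- Assumed monthly summary of tags by facility
CREATE TABLE [dbo].[sumTagsByFacilityMonth](
    [facility]  [varchar](10) NOT NULL,
    [stateCode] [char](2)     NOT NULL,
    [yearMonth] [char](6)     NOT NULL,  -- e.g. '201006'
    [tagCount]  [int]         NOT NULL,
    CONSTRAINT [PK_sumTagsByFacilityMonth] PRIMARY KEY ([facility], [yearMonth])
);
-- "How many tags in Florida for the month of June?" then becomes:
SELECT SUM(tagCount)
FROM dbo.sumTagsByFacilityMonth
WHERE stateCode = 'FL' AND yearMonth = '201006';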
You'll need a calendar table too. A calendar table makes questions like "How many annual (surprise) surveys happened in the 1st quarter of this year compared to (the 1st quarter of) last year?" much easier to query.
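A minimal calendar (date) table along those lines, with a comparative query sketched against the question's surveyMain table; the calendar column names and the survey-type id are assumptions, and the date type assumes SQL Server 2008 or later:
CREATE TABLE [dbo].[dimCalendar](
    [calendarDate]    [date]    NOT NULL PRIMARY KEY,
    [calendarYear]    [int]     NOT NULL,
    [calendarQuarter] [tinyint] NOT NULL,
    [calendarMonth]   [tinyint] NOT NULL
);
-- Annual surveys in Q1 of this year vs. Q1 of last year
SELECT c.calendarYear, COUNT(*) AS annualSurveys
FROM dbo.surveyMain s
JOIN dbo.dimCalendar c ON c.calendarDate = CAST(s.surveyDate AS date)
WHERE c.calendarQuarter = 1
  AND c.calendarYear IN (YEAR(GETDATE()), YEAR(GETDATE()) - 1)
  AND s.surveyType = 1  -- assumed id of the 'Annual' survey type
GROUP BY c.calendarYear;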
This is quite a task for a support forum, so I will focus on just one part of the problem.
It seems that one survey can consist of several visits, so I would suggest a factSurveyVisit with a grain of one visit event. The SurveyID column acts as a degenerate dimension in this model and is common to all visits from the same survey. SurveyVisitSequenceID is a unique auto-incrementing integer and is used to simplify linking the two bridge tables, for documents and tags, to the fact table.
You could also promote a survey into a full dimension, dimSurvey, to add some notes etc.; use SurveyID as the link.
I did not tackle fines here; for those I would suggest a factFine table which would have its own links to dimDate, dimTime, dimFacility, etc., so that reports regarding fines ($$) can be produced fast without joining to most of the visit-related tables. There should also be a bridge table joining factFine to factSurveyVisit, provided fines are related to each visit and not to a completed survey.
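A rough T-SQL sketch of the factSurveyVisit grain described above; all dimension key column names here are assumptions:
CREATE TABLE [dbo].[factSurveyVisit](
    [SurveyVisitSequenceID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
    [SurveyID]      [int] NOT NULL,  -- degenerate dimension, shared by all visits of one survey
    [DateKey]       [int] NOT NULL,  -- link to dimDate
    [TimeKey]       [int] NULL,      -- link to dimTime
    [FacilityKey]   [int] NOT NULL,  -- link to dimFacility
    [SurveyTypeKey] [int] NOT NULL   -- link to a survey-type dimension
);
-- Bridge tables for documents and tags would then carry SurveyVisitSequenceID
-- plus a key into dimDocument / dimTag, as described above.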
EDIT
Just noticed that your tag table has a clearedDate, so admittedly I do not understand the tagging in this business. In the model, dimTag is just a list of available tags. There may be one more table, factFacilityStatus, linking dimFacility and dimTag and tracking tag status for each facility.
It looks like you have multiple Fines, Files and Tags for each survey.
I would expect four fact tables, with the facts in each looking like they are largely datetime data (although those are often modelled as roles of a date and/or time dimension). I've made a couple of notes here, but flags are generally going to live in dimensions:
SurveyMain
SurveyFine (wasImposed is in a dimension linked to this fact, totalFineAmt is a fact in this table)
SurveyFile
SurveyTag
They would all share a Survey dimension, and I would go ahead and share an Entity/Facility dimension in each one. You could snowflake through the Survey dimension, but that defeats the most beneficial point of star models: letting you get to all the data directly instead of going through bridge tables.
You have the option of putting the survey type in its own dimension (or a junk dimension, perhaps) or having it accessed through the Survey dimension (not through a snowflake). That's typical with dimensional modeling: you don't need to follow your entities. You just need to avoid the too-many-dimensions and too-few-dimensions traps and watch the cardinality of your dimensions, especially if you've accidentally included a degenerate dimension, like an invoice number, which changes with every fact and so needs to be stored in the fact table.
Actually, it's sometimes easier to build your star models by doing the typical joins in your 3NF schema that create your typical flat reporting views, and then simply taking those flat rows and turning them into stars. (That's how little relevance the entity-relationship model really has to the dimensional model.) So you might join surveyMain to surveyTypes and surveyFines on your current normalized keys and look at all the columns; this would be the basis for the SurveyFine fact table. Ditto for the other fact tables I identified. The shared stuff would be candidates for shared dimensions. Entity is a good candidate for a conformed dimension (i.e. it's going to be shared between these survey models and other models related to your enterprise, like HR or accounting models).
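As an illustration, the kind of flat reporting view that could seed the SurveyFine star might be built from the question's tables roughly like this (the column choices are just a sample):
CREATE VIEW dbo.vwSurveyFineFlat AS
SELECT  f.surveyFinesID,
        m.surveyID,
        m.facility,          -- future Entity/Facility dimension
        t.surveyTypeDesc,    -- future Survey (or survey-type) dimension attribute
        m.surveyDate,
        f.dateImposed,
        f.wasImposed,        -- likely a dimension attribute
        f.totalFineAmt       -- the measure kept in the fact table
FROM dbo.surveyFines f
JOIN dbo.surveyMain  m ON m.surveyID = f.surveyID
JOIN dbo.surveyTypes t ON t.surveyTypeID = m.surveyType;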
I would set up SurveyFines, SurveyTag and SurveyFiles fact tables; they are all different grains of facts, and they each represent the lowest grain.
They would all have Date, Entity and Survey dimensions with them.
I would then set up pre-aggregated metric tables for those metrics which might need to combine all three facts.
If you would like me to elaborate, feel free to ask. I'm in a bit of a rush today.
(continuing...)
It would appear to me that your users want to pivot the measurable data (number of files, date files were sent, sum of fines). They want to look at those metrics by attributes of the survey. That's why I suggest a Survey dimension.
Considering your comment below, I might then build a pre-aggregated metric table with columns along these lines:
Date (the date I loaded the metric table)
SurveyDimID
EntityDimID
NumTagsAssigned
NumFilesRequested
NumFilesReceived
NumFines
TotalFines
etc...
I would load this table every day with the full set of active survey data from my fact tables (sketched below). This allows the users to go back and forth through history to see how the surveys came in.
I suppose at some point the entire survey process is complete; at that point those records would no longer be included in the metric load. (They would remain in the facts.)
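A hedged sketch of that pre-aggregated metric table and its daily load; all table, key and flag names here are assumptions, and the column list is abbreviated:
CREATE TABLE dbo.metricSurveyDaily(
    LoadDate        date  NOT NULL,  -- the date the metric row was loaded
    SurveyDimID     int   NOT NULL,
    EntityDimID     int   NOT NULL,
    NumTagsAssigned int   NOT NULL,
    NumFilesReceived int  NOT NULL,
    NumFines        int   NOT NULL,
    TotalFines      money NOT NULL,
    CONSTRAINT PK_metricSurveyDaily PRIMARY KEY (LoadDate, SurveyDimID, EntityDimID)
);
-- Daily load: summarize the active surveys from the (assumed) fact tables
INSERT INTO dbo.metricSurveyDaily
        (LoadDate, SurveyDimID, EntityDimID, NumTagsAssigned, NumFilesReceived, NumFines, TotalFines)
SELECT  CAST(GETDATE() AS date),
        s.SurveyDimID,
        s.EntityDimID,
        (SELECT COUNT(*) FROM dbo.factSurveyTag  t  WHERE t.SurveyDimID  = s.SurveyDimID),
        (SELECT COUNT(*) FROM dbo.factSurveyFile fl WHERE fl.SurveyDimID = s.SurveyDimID),
        (SELECT COUNT(*) FROM dbo.factSurveyFine fn WHERE fn.SurveyDimID = s.SurveyDimID),
        (SELECT COALESCE(SUM(fn.FineAmount), 0) FROM dbo.factSurveyFine fn WHERE fn.SurveyDimID = s.SurveyDimID)
FROM dbo.dimSurvey s
WHERE s.IsActive = 1;  -- assumed flag marking surveys still in progress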