I have a timestamp with time zone field named statdate, and an entry looks like this: 2021-11-17 12:47:54-08. I want to create a field with just the time of day expressed locally, so it would look like 12:47:54. (This data was recorded on an iPhone, and it was 12:28 PST.) (Go to the bottom of the post for the solution using views, from @AdrianKalver.)
select *,statdate :: timestamp :: time as stattime from table
works in pgAdmin, and an example result is 12:47:54, as desired. How do I make this an ALTER TABLE?
ALTER TABLE tablename add COLUMN stattime timestamp generated always AS (select *,statdate :: timestamp :: time as stattime from tablename) stored;
is the wrong syntax.
ALTER TABLE tablename add COLUMN stattime timestamp generated always AS ( EXTRACT(HOUR FROM statdate) || ':' || EXTRACT(MINUTE FROM statdate) || ':' || EXTRACT(SECOND FROM statdate)) stored;
fails with ERROR: generation expression is not immutable, which I'm presuming is a type problem, although Postgres can concatenate strings and numbers with this syntax.
I just tried something else:
ALTER TABLE tablename ADD COLUMN stattime timestamp GENERATED ALWAYS AS (
    CAST(EXTRACT(HOUR FROM statdate) AS text) || ':' ||
    CAST(EXTRACT(MINUTE FROM statdate) AS text) || ':' ||
    CAST(EXTRACT(SECOND FROM statdate) AS text)
) STORED;
-- ERROR: generation expression is not immutable
I'm using the hours and minutes for a graph, and I can't get into the middle of the Chartkick call. I could do it in Highcharts, but I think it will be simpler to create the view and use that. The Rails/Chartkick call looks like
<%= line_chart TableName.where(statdate: start..current_date).pluck(:statdate, :y_axis) %>
and I can't break that apart, so I'll go with creating a view.
What's the right way to do this? I've looked here and at the PostgreSQL docs and am not having much luck.
Following the comments, the solution:
CREATE OR REPLACE VIEW public.view_bp_with_time AS
SELECT id,
       statdate,
       statdate :: time AS stattime,
       y_axis
FROM   table_name
ORDER  BY statdate;
Now to bring it into Rails. Not as straightforward as I thought. And I'm off the computer for the next week.
Per here:
https://www.postgresql.org/docs/current/sql-createtable.html
GENERATED ALWAYS AS ( generation_expr ) STORED
This clause creates the column as a generated column. The column cannot be written to, and when read the result of the specified expression will be returned.
The keyword STORED is required to signify that the column will be computed on write and will be stored on disk.
The generation expression can refer to other columns in the table, but not other generated columns. Any functions and operators used must be immutable. References to other tables are not allowed.
Basically, the cast from timestamptz to timestamp is not immutable, because time zones are involved: the result depends on the session's TimeZone setting.
For more information see:
https://www.postgresql.org/docs/14/xfunc-volatility.html
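You can see this in the system catalogs. A quick check (a sketch; 'i' = immutable, 's' = stable, 'v' = volatile):

-- look up the function that implements the timestamptz -> timestamp cast
-- and its volatility
SELECT p.proname, p.provolatile
FROM   pg_cast c
JOIN   pg_proc p ON p.oid = c.castfunc
WHERE  c.castsource = 'timestamptz'::regtype
AND    c.casttarget = 'timestamp'::regtype;
-- provolatile comes back 's' (stable): the result depends on the TimeZone
-- setting, so it cannot be used in a generated column.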
Either:
Create a view that does the conversion.
Include it in your query as you show for the pgAdmin4 example.
Create a timestamp field on the table and either add the value to that field as part of INSERT/UPDATE, or add a trigger that does that (see the sketch below).
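If you go the trigger route, a minimal sketch (untested; set_stattime and trg_set_stattime are made-up names, the column is declared time since the value is a time of day, and EXECUTE FUNCTION needs Postgres 11 or later):

ALTER TABLE tablename ADD COLUMN stattime time;

CREATE OR REPLACE FUNCTION set_stattime()
RETURNS trigger AS $$
BEGIN
    -- the cast runs at write time, using the session's TimeZone setting
    NEW.stattime := (NEW.statdate::timestamp)::time;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_set_stattime
BEFORE INSERT OR UPDATE OF statdate ON tablename
FOR EACH ROW EXECUTE FUNCTION set_stattime();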
I currently have the following tables in a database:
create table map (
    id   bigint not null unique,
    zone box    not null,
    ...
    primary key(id)
);

create table other_map (
    id   bigint not null unique,
    zone box    not null,
    ...
    primary key(id),
    foreign key(id) references map(id)
);
I don't want to allow a new row to be inserted in other_map if there is a row in map whose id is equal to the new entry's id and their zone attributes overlap. I found this answer, which explains how to detect overlapping boxes, but I'd like to know how to (best) apply that in Postgres.
This is what I've come up with so far, using a trigger and a stored procedure:
CREATE OR REPLACE FUNCTION PROC_other_map_IU()
RETURNS TRIGGER AS $PROC_other_map_IU$
DECLARE
    id   bigint;
    zone box;
BEGIN
    SELECT map.id, map.zone INTO id, zone
    FROM   map
    WHERE  map.id = NEW.id;

    IF zone = NEW.zone THEN
        RAISE EXCEPTION '%''s zone overlaps with existing %''s zone', NEW.id, id;
    END IF;
    RETURN NEW;
END;
$PROC_other_map_IU$ LANGUAGE plpgsql;

CREATE TRIGGER TR_other_map_IU
AFTER INSERT OR UPDATE ON other_map
FOR EACH ROW EXECUTE PROCEDURE PROC_other_map_IU();
Now obviously this is wrong, because it simply checks if the zone attributes are equal.
Thank you in advance for your input! Cheers!
Took me a while, but Postgres' geometric functions and operators (more specifically the overlap operator, &&) do exactly what I wanted:
IF zone && NEW.zone THEN
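Plugged into the trigger function from the question, a corrected version might look like this (a sketch; the local variable is renamed to map_zone to avoid shadowing the column name, and when no map row matches, the NULL result of && simply falls through):

CREATE OR REPLACE FUNCTION PROC_other_map_IU()
RETURNS TRIGGER AS $PROC_other_map_IU$
DECLARE
    map_zone box;
BEGIN
    SELECT map.zone INTO map_zone
    FROM   map
    WHERE  map.id = NEW.id;

    IF map_zone && NEW.zone THEN   -- && is the geometric "overlaps" operator
        RAISE EXCEPTION 'zone of other_map row % overlaps with the zone of map row %', NEW.id, NEW.id;
    END IF;
    RETURN NEW;
END;
$PROC_other_map_IU$ LANGUAGE plpgsql;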
I'm trying to come up with a PostgreSQL schema for host data that's currently in an LDAP store. Part of that data is the list of hostnames a machine can have, and that attribute is generally the key that most people use to find the host records.
One thing I'd like to get out of moving this data to an RDBMS is the ability to set a uniqueness constraint on the hostname column so that duplicate hostnames can't be assigned. This would be easy if hosts could only have one name, but since they can have more than one it's more complicated.
I realize that the fully-normalized way to do this would be to have a hostnames table with a foreign key pointing back to the hosts table, but I'd like to avoid having everybody need to do joins for even the simplest query:
select hostnames.name,hosts.*
from hostnames,hosts
where hostnames.name = 'foobar'
and hostnames.host_id = hosts.id;
I figured using PostgreSQL arrays could work for this, and they certainly make the simple queries simple:
select * from hosts where names #> '{foobar}';
When I set a uniqueness constraint on the hostnames attribute, though, it of course treats the entire list of names as the unique value instead of each name. Is there a way to make each name unique across every row instead?
If not, does anyone know of another data-modeling approach that would make more sense?
The righteous path
You might want to reconsider normalizing your schema. It is not necessary for everyone to "join for even the simplest query". Create a VIEW for that.
Table could look like this:
CREATE TABLE hostname (
hostname_id serial PRIMARY KEY
, host_id int REFERENCES host(host_id) ON UPDATE CASCADE ON DELETE CASCADE
, hostname text UNIQUE
);
The surrogate primary key hostname_id is optional. I prefer to have one. In your case hostname could be the primary key. But many operations are faster with a simple, small integer key. Create a foreign key constraint to link to the table host.
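For completeness, the referenced host table might look like this (hypothetical; only host_id matters for the foreign key, and host must exist before hostname is created):

CREATE TABLE host (
    host_id serial PRIMARY KEY
    -- ... other host attributes
);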
Create a view like this:
CREATE VIEW v_host AS
SELECT h.*
, array_agg(hn.hostname) AS hostnames
-- , string_agg(hn.hostname, ', ') AS hostnames -- text instead of array
FROM host h
JOIN hostname hn USING (host_id)
GROUP BY h.host_id; -- works in v9.1+
Starting with Postgres 9.1, the primary key in the GROUP BY covers all columns of that table in the SELECT list. From the release notes for version 9.1:
Allow non-GROUP BY columns in the query target list when the primary
key is specified in the GROUP BY clause
Queries can use the view like a table. Searching for a hostname will be much faster this way:
SELECT *
FROM host h
JOIN hostname hn USING (host_id)
WHERE hn.hostname = 'foobar';
That is, provided you have an index on host(host_id), which should be the case, since it should be the primary key. Plus, the UNIQUE constraint on hostname(hostname) implements the other needed index automatically.
In Postgres 9.2+ a multicolumn index would be even better if you can get an index-only scan out of it:
CREATE INDEX hn_multi_idx ON hostname (hostname, host_id);
Starting with Postgres 9.3, you could use a MATERIALIZED VIEW, circumstances permitting. Especially if you read much more often than you write to the table.
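A minimal sketch of that variant, reusing the definition of v_host (mv_host is a made-up name; remember that a materialized view must be refreshed to pick up later writes):

CREATE MATERIALIZED VIEW mv_host AS
SELECT h.*
     , array_agg(hn.hostname) AS hostnames
FROM   host h
JOIN   hostname hn USING (host_id)
GROUP  BY h.host_id;

-- after changes to host / hostname:
REFRESH MATERIALIZED VIEW mv_host;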
The dark side (what you actually asked)
If I can't convince you of the righteous path, here is some assistance for the dark side:
Here is a demo how to enforce uniqueness of hostnames. I use a table hostname to collect hostnames and a trigger on the table host to keep it up to date. Unique violations raise an exception and abort the operation.
CREATE TABLE host(hostnames text[]);
CREATE TABLE hostname(hostname text PRIMARY KEY); -- pk enforces uniqueness
Trigger function:
CREATE OR REPLACE FUNCTION trg_host_insupdelbef()
  RETURNS trigger
  LANGUAGE plpgsql AS
$func$
BEGIN
   -- split UPDATE into DELETE & INSERT
   IF TG_OP = 'UPDATE' THEN
      IF OLD.hostnames IS NOT DISTINCT FROM NEW.hostnames THEN
         RETURN NEW;  -- exit, nothing to do
      END IF;
   END IF;

   IF TG_OP IN ('DELETE', 'UPDATE') THEN
      DELETE FROM hostname h
      USING  unnest(OLD.hostnames) d(x)
      WHERE  h.hostname = d.x;

      IF TG_OP = 'DELETE' THEN
         RETURN OLD;  -- exit, we are done
      END IF;
   END IF;

   -- control only reaches here for INSERT or UPDATE (with actual changes)
   INSERT INTO hostname(hostname)
   SELECT h
   FROM   unnest(NEW.hostnames) h;

   RETURN NEW;
END
$func$;
Trigger:
CREATE TRIGGER host_insupdelbef
BEFORE INSERT OR DELETE OR UPDATE OF hostnames ON host
FOR EACH ROW EXECUTE FUNCTION trg_host_insupdelbef();
SQL Fiddle with test run.
Use a GIN index on the array column host.hostnames and array operators to work with it:
Why isn't my PostgreSQL array index getting used (Rails 4)?
Check if any of a given array of values are present in a Postgres array
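A minimal sketch of that index and a containment query that can use it (the index name is made up):

CREATE INDEX host_hostnames_gin_idx ON host USING gin (hostnames);

-- can use the GIN index:
SELECT * FROM host WHERE hostnames @> '{foobar}';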
In case anyone still needs what was in the original question (note that an exclusion constraint with GiST over integer[] needs the intarray extension for the operator class):
CREATE EXTENSION IF NOT EXISTS intarray;

CREATE TABLE testtable(
    id   serial PRIMARY KEY,
    refs integer[],
    EXCLUDE USING gist( refs WITH && )
);
INSERT INTO testtable( refs ) VALUES( ARRAY[100,200] );
INSERT INTO testtable( refs ) VALUES( ARRAY[200,300] );
and this would give you:
ERROR: conflicting key value violates exclusion constraint "testtable_refs_excl"
DETAIL: Key (refs)=({200,300}) conflicts with existing key (refs)=({100,200}).
Checked in Postgres 9.5 on Windows.
Note that this creates an index using the && operator. So when you are working with testtable, checking ARRAY[x] && refs can be several times faster than x = ANY( refs ).
P.S. Generally I agree with the answer above. In 99% of cases you'd prefer a normalized schema. Please try to avoid hacky stuff in production.
Is there a way to create a Postgres stored function (using PL/pgSQL, to be able to set input parameters) that returns a custom data set?
I've tried to do something like this, following the official manual:
CREATE FUNCTION extended_sales(p_itemno int)
RETURNS TABLE(quantity int, total numeric) AS $$
BEGIN
    RETURN QUERY
    SELECT s.quantity, s.quantity * s.price
    FROM   sales s
    WHERE  s.itemno = p_itemno;
END;
$$ LANGUAGE plpgsql;
but the result is a single column containing the composite type (quantity, total), whereas I need two columns, quantity and total.
At a guess you're running:
SELECT extended_sales(1);
This will return a composite type column. If you want it expanded, you must instead run:
SELECT * FROM extended_sales(1);
Also, as @a_horse_with_no_name notes, a PL/pgSQL function is completely unnecessary here. Presumably this is a simplified example?
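For illustration, the same function as a plain SQL function (a sketch under the same sales table assumption; no PL/pgSQL needed when there is no procedural logic):

CREATE FUNCTION extended_sales(p_itemno int)
RETURNS TABLE(quantity int, total numeric) AS $$
    SELECT s.quantity, s.quantity * s.price
    FROM   sales s
    WHERE  s.itemno = p_itemno;
$$ LANGUAGE sql STABLE;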
In future please include:
Your PostgreSQL version; and
The exact SQL you ran and the exact output you got
I have 14 fields which are similar, and I search for the string 'A' in each of them. After that, I would like to order by the "position" field.
-- some SET commands to remove a lot of useless output
def col='col01'
select '&col' "Fieldname",
&col "value",
position
from oneTable
where &col like '%A%'
/
-- then for the second field, I only have to type two lines
def col='col02'
/
...
def col='col14'
/
This writes out all the fields which contain 'A'. The problem is that the results are not ordered by position.
If I use UNION between the queries, I cannot take advantage of the substitution variables (&col), and I would have to write a Unix shell script (ksh) to do the replacement instead. The problem is of course that the database connection details would have to be hard-coded in that script (and connection handling is not easy stuff).
If I use a REFCURSOR with OPEN, I cannot group the result sets together. I only have one query at a time and cannot make a UNION of them (print refcursor1 union refcursor2 and print refcursor1+refcursor2 raise an exception, and select * from refcursor1 union select * from refcursor2 does not work either).
How can I concatenate the results into one big REFCURSOR? Or use a UNION between two distinct runs ('/') of my query, something like holding the results while typing new definitions of the variables?
Thank you for any advice.
Does this answer your question?
CREATE TABLE #containingAValueTable
(
    FieldName  VARCHAR(10),
    FieldValue VARCHAR(1000),
    position   int
)
def col='col01'
INSERT INTO #containingAValueTable
(
FieldName , FieldValue, position
)
SELECT '&col' "Fieldname",
&col "value",
position
FROM yourTable
WHERE &col LIKE '%A%'
/
-- then for the second field, I only have to type two lines
def col='col02'
INSERT INTO...
/
def col='col14'
/
select * from #containingAValueTable order by position
DROP TABLE #containingAValueTable
But I'm not totally sure about your use of the 'substitution variable' called "col" (and I only have SQL Server to test my query on, so I used explicit field names).
Edit: sorry for the '#' character; we use it so often in SQL Server for temporary tables that I didn't even know it was SQL Server specific (moreover, I think it's mandatory in SQL Server when creating a temporary table). Whatever, I'm happy I could be useful to you.
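Since the question is actually Oracle SQL*Plus, a rough equivalent there would be a global temporary table rather than a # table (a sketch, untested; the table name is made up):

CREATE GLOBAL TEMPORARY TABLE containing_a_value (
    fieldname  VARCHAR2(10),
    fieldvalue VARCHAR2(1000),
    position   NUMBER
) ON COMMIT PRESERVE ROWS;

def col='col01'
INSERT INTO containing_a_value (fieldname, fieldvalue, position)
SELECT '&col', &col, position
FROM   oneTable
WHERE  &col LIKE '%A%'
/
-- repeat the def / INSERT pair for col02 .. col14, then:
SELECT * FROM containing_a_value ORDER BY position;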
I have a column of type varchar in my Postgres database which I meant to be integers... and now I want to change them. Unfortunately, this doesn't seem to work using my Rails migration:
change_column :table1, :columnB, :integer
Which seems to output this SQL:
ALTER TABLE table1 ALTER COLUMN columnB TYPE integer
So I tried doing this:
execute 'ALTER TABLE table1 ALTER COLUMN columnB TYPE integer USING CAST(columnB AS INTEGER)'
but the cast doesn't work in this instance because some of the values in the column are null...
Any ideas?
Error:
PGError: ERROR: invalid input syntax for integer: ""
: ALTER TABLE table1 ALTER COLUMN columnB TYPE integer USING CAST(columnB AS INTEGER)
Postgres v8.3
It sounds like the problem is that you have empty strings in your table. You'll need to handle those, probably with a CASE statement, such as:
execute %{ALTER TABLE "table1" ALTER COLUMN columnB TYPE integer USING CAST(CASE columnB WHEN '' THEN NULL ELSE columnB END AS INTEGER)}
Update: completely rewritten based on updated question.
NULLs shouldn't be a problem here.
Tell us your PostgreSQL version and your error message.
Besides, why are you quoting identifiers? Be aware that unquoted identifiers are folded to lowercase (default behaviour), so there might be a problem with "columnB" in your query: it appears quoted first, and unquoted in the cast.
Update: Before converting a column to integer, you must be sure that all your values are convertible. In this case, it means that columnB should contain only digits (or NULL).
You can check this by something like
select columnB from table where not columnB ~ E'^[0-9]+$';
If you want your empty strings to be converted to NULL integers, then first run:
UPDATE table set columnB = NULL WHERE columnB = '';
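After that, the ALTER TABLE from the question should go through unchanged, since only NULLs and digit strings remain:

ALTER TABLE table1 ALTER COLUMN columnB TYPE integer USING CAST(columnB AS INTEGER);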