SQL*Plus: how to copy a table with a timestamp column - sqlplus

I am trying to copy data from a remote database in SQL*Plus.
I have a source table like so
table_name: source_table
seqId | createDate
1 | 10-SEP-02 02.10.10.123000 AM
2 | 10-SEP-01 02.10.10.123000 AM
with the seqId being a number and createDate being a timestamp.
I attempted to copy the data to my table with:
COPY FROM &user_name/&password#&database
REPLACE x_source_table USING
SELECT *
FROM source_table;
but it throws an invalid data error.
I then attempted to cast createDate to a DATE in the COPY statement with
COPY FROM &user_name/&password#&database
REPLACE x_source_table USING
SELECT cast(createDate as Date) as createDate
FROM source_table;
in an attempt to copy only createDate, but it did not work either.

Since the TIMESTAMP datatype is not supported by the COPY command, you can't use COPY. I found this in the documentation: "The COPY command is not being enhanced to handle datatypes or features introduced with, or after Oracle8. The COPY command is likely to be made obsolete in a future release." Here's the 10g documentation: https://docs.oracle.com/cd/B19306_01/server.102/b14357/apb.htm.
You will have to look for an alternative method. Can you do this or a variation instead:
truncate table x_dest_table;
insert into x_dest_table
select <column_list>
from x_source_table;  -- or from x_source_table@your_db_link if you have a database link
commit;
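For example, here is a rough sketch of the database-link route using the table names from the question (the link name remote_src and the connection details are placeholders you would substitute); INSERT ... SELECT over a link has no problem with TIMESTAMP columns:
-- Placeholder names: remote_src, src_user, src_password and 'src_tns' are illustrative only.
create database link remote_src
  connect to src_user identified by src_password
  using 'src_tns';

truncate table x_source_table;

insert into x_source_table (seqId, createDate)
select seqId, createDate
from source_table@remote_src;

commit;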
Or maybe use export/import? Or get a CSV of the source table and use SQL*Loader (sqlldr) to load it?
At least you have some options. Let us know what you end up doing.

Related

How to copy data from Netezza DEFINITION_SCHEMA [ignoring the bytea error]

I am trying to analyse the code used in the stored procs on our Netezza server.
The first step is to get the definitions/code contained in the stored procs - this can easily be done by either of the following:
Using the system views
select
PROCEDURE,
PROCEDURESOURCE
from _v_procedure
where
PROCEDURE = 'MY_PROC'
;
Or using the base table [view points to this table]
select
PRONAME,
PROSRC as PROCEDURESOURCE
from
DEFINITION_SCHEMA."_T_PROC" P
where
PRONAME= 'MY_PROC'
Now, once I run some analysis on the PROCEDURESOURCE column and try to write this information out to a table, I always get the following error:
ERROR: Type 'bytea' not supported by IBM Netezza SQL
An easy way to replicate this error is simply to do the following
create table MY_SCHEMA.TEST_TMP as
with rs as
(
select
PRONAME,
PROSRC
from
DEFINITION_SCHEMA."_T_PROC" P
where
PRONAME = 'MY_PROC'
)
select * from rs
I have determined that there is a column in DEFINITION_SCHEMA."_T_PROC" of type bytea (column name = PROBIN)
I am, however, not selecting this column, so I am not sure why I am getting this error.
Can anyone help with a workaround on how to copy PROCEDURESOURCE into a new table and bypass the 'bytea' error?
Thanks
3 suggestions:
1) Sometimes the ‘limit all’ trick helps: What are the benefits of using LIMIT ALL in a subquery?
2) Alternatively, do a ‘create external table’ and put your data into a file, then another statement to read it back from the file
3) My last guess is that you may be able to explicitly cast the column to a more benign data type (VARCHAR() or similar); see the sketch below.
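As a rough sketch of suggestion 3, combined with the LIMIT ALL trick from suggestion 1, and assuming VARCHAR(64000) is wide enough for your procedure source (adjust the length to suit):
create table MY_SCHEMA.TEST_TMP as
select * from
(
    select
        PRONAME,
        cast(PROSRC as varchar(64000)) as PROCEDURESOURCE
    from
        DEFINITION_SCHEMA."_T_PROC" P
    where
        PRONAME = 'MY_PROC'
    limit all
) rs
;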

TFDQuery and SQLite: Type mismatch for field, expecting: LargeInt actual: WideString

Using Delphi 10.2, SQLite and TeeChart. My SQLite database has two fields, created with:
CREATE TABLE HistoryRuntime ('DayTime' DateTime, Device1 INTEGER DEFAULT (0));
I access the table using a TFDQuery called qryGrpahRuntime with the following SQL:
SELECT DayTime AS TheDate, Sum(Device1) As DeviceTotal
FROM HistoryRuntime
WHERE (DayTime >= "2017-06-01") and (DayTime <= "2017-06-26")
Group by Date(DayTime)
Using the Field Editor in the Delphi IDE, I can add two persistent fields, getting TheDate as a TDateTimeField and DeviceTotal as a TLargeIntField.
I run this query in a program to create a TeeChart, which I created at design time. As long as the query returns some records, all this works. However, if there are no records for the requested dates, I get an EDatabaseError exception with the message:
qryGrpahRuntime: Type mismatch for field 'DeviceTotal', expecting: LargeInt actual: Widestring
I have done plenty of searching on the web for solutions on how to prevent this error on an empty query, but have had no luck with anything I found. From what I can tell, SQLite defaults to the wide string field when no data is returned. I have tried using CAST in the query and it did not seem to make any difference.
If I remove the persistent fields, the query will open without problems on an empty return set. However, in order to use the TeeChart editor in the IDE, it appears I need persistent fields.
Is there a way I can make this work with persistent fields, or am I going to have to throw out the persistent fields and then add the TeeChart Series at runtime?
This behavior is described in the Adjusting FireDAC Mapping chapter of FireDAC's SQLite manual:
For an expression in a SELECT list, SQLite avoids type name information. When the result set is not empty, FireDAC uses the value data types from the first record. When empty, FireDAC describes those columns as dtWideString. To explicitly specify the column data type, append ::<type name> to the column alias:
SELECT count(*) as "cnt::INT" FROM mytab
So modify your command e.g. this way (I used BIGINT, but you can use any pseudo data type that maps to a 64-bit signed integer data type and is not auto incrementing, which corresponds to your persistent TLargeIntField field):
SELECT
DayTime AS "TheDate",
Sum(Device1) AS "DeviceTotal::BIGINT"
FROM
HistoryRuntime
WHERE
DayTime BETWEEN {d 2017-06-01} AND {d 2017-06-26}
GROUP BY
Date(DayTime)
P.S. I did a small optimization by using the BETWEEN operator (which evaluates the column value only once), and used an escape sequence for the date constants (which, in a real application, you would presumably replace with parameters; included just out of curiosity).
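If you do switch to parameters, the statement could look something like this (FromDate and ToDate are hypothetical parameter names you would fill via ParamByName before opening the query):
SELECT
  DayTime AS "TheDate",
  Sum(Device1) AS "DeviceTotal::BIGINT"
FROM
  HistoryRuntime
WHERE
  DayTime BETWEEN :FromDate AND :ToDate
GROUP BY
  Date(DayTime)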
This data type hinting is parsed by the FDSQLiteTypeName2ADDataType procedure, which takes and parses the column name in the format <column name>::<type name> via its AColName parameter.

Passing a string saved in a bash variable with an apostrophe to psql query using a bash script

In a Bash script, I am reading in a list of strings from a text file that may contain apostrophes ('). Each string in the list is saved to a Bash environment variable that is passed to my psql query. I have tried everything so far, but when I loop through the list and encounter an apostrophe, my query still fails.
Here is a snippet of the code that fails:
SELECT * FROM table_1 WHERE id = $myid AND name = '$namelist';
namelist is the file that has the entries which may contain apostrophes.
Thanks for your help
Use a prepared SQL statement to avoid SQL injection.
You may also need a solution from this post.
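For example, here is a minimal sketch using psql variables (the file name lookup.sql and the variable names myid / myname are hypothetical); psql's :'var' syntax emits the value as a safely quoted literal, so embedded apostrophes cannot break the statement:
-- lookup.sql, invoked from the script as, e.g.:
--   psql -d mydb -v myid="$myid" -v myname="$namelist" -f lookup.sql
-- :myid is substituted as-is (it is expected to be numeric);
-- :'myname' is substituted as a properly quoted string literal.
SELECT *
FROM table_1
WHERE id = :myid
  AND name = :'myname';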

How can I prevent SQL injections during CSV uploads?

I've just started learning about Rails security, and I'm wondering how I can avoid security issues while allowing users to upload CSV files into our database. We're using Postgres' "copy from stdin" functionality to upload the data from the CSV into a temp table, which is then used for upserts into another table. This is the basic code (thanks to this post):
conn = ActiveRecord::Base.connection_pool.checkout
raw = conn.raw_connection
raw.exec("COPY temp_table (col1, col2) FROM STDIN DELIMITER '|'")
# read column values from the CSV line by line in the following format:
# attributes = {column_1: 'column 1 data', column_2: 'column 2 data'}
# line = "#{attributes.values.join('|')}\n"
raw.put_copy_data line
# wrap up copy process & insert into & update primary table
I am wondering what I can or should do to sanitize the column values. We're using Rails 3.2 and Postgres 9.2.
No action is required; COPY never interprets the values as SQL syntax. Malformed CSV will produce an error due to bad quoting / incorrect column count. If you're sending your own data line-by-line you should probably exclude a line containing a single \. followed by a newline, but otherwise it's rather safe.
PostgreSQL doesn't sanitize the data in any way, it just handles it safely. So if you accept a string ');DROP TABLE customer;-- in your CSV it's quite safe in COPY. However, if your application reads that out of the database, assumes that "because it came from the database not the user it's safe," and interpolates it into an SQL string you're still just as stuffed.
Similarly, incorrect use of PL/pgSQL functions where EXECUTE is used with unsafe string concatenation will create problems. You must use format with the %I or %L specifiers, use quote_literal / quote_ident, or (for literals) use EXECUTE ... USING.
This is not just true of COPY, it's the same if you do an INSERT of the manipulated data then use it unsafely after reading it back from the DB.
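To illustrate the dynamic-SQL point, here is a minimal, purely illustrative PL/pgSQL sketch (the function name and column are made up): the identifier goes through format's %I and the value is bound with USING, so neither piece of user input can be interpreted as SQL:
CREATE OR REPLACE FUNCTION count_rows_by_name(p_table text, p_name text)
RETURNS bigint AS $$
DECLARE
  result bigint;
BEGIN
  -- %I quotes p_table as an identifier; p_name is passed as a bound
  -- parameter via USING rather than concatenated into the string.
  EXECUTE format('SELECT count(*) FROM %I WHERE name = $1', p_table)
    INTO result
    USING p_name;
  RETURN result;
END;
$$ LANGUAGE plpgsql;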

Reading Informix-SE audit trail log tables

INFORMIX-SQL 7.32 (SE):
I've created an audit trail "a_trx" for my transaction table to know who has added or updated rows in this table and when, with a snapshot of the row contents. According to the documentation, an audit table is created with the same schema as the table being audited, plus the following audit info header columns prefixed:
table a_trx
a_type char(2) {record type: aa = added, dd = deleted,
rr = before update image, ww = after update image.}
a_time integer {internal time value.}
a_process_id smallint {Process ID that changed record.}
a_usr_id smallint {User ID that changed record.}
a_rowid integer {Original rowid.}
[...] {Same columns as table being audited.}
So then I proceeded to generate a default perform screen for a_trx, but could not locate a_trx in my table selection. I aborted and ls'd the .dbs directory and did not see a_trx.dat or a_trx.idx, but found a_trx, which appears to be in .dat format according to
my disk editor utility. Is there any other method for accessing this .dat clone, or do I have to trick the engine by renaming it to a_trx.dat, creating an .idx companion for it, tweaking SYSTABLES, SYSCOLUMNS, etc., to be able to access this audit table like any other table? And what is the internal time value of a_time: the number of seconds since 12/31/1899?
The audit logs are not C-ISAM files; they are plain log files. IIRC, they are created with '.aud' as a suffix. If you get to choose the suffix, then you would create it with a '.dat' suffix, making sure the name does not conflict with any table name.
You should be able to access them as if they were a table, but you would have to create a table (data file) and the index file to match the augmented schema, and then arrange for the '.aud' file to refer to the same location as the '.dat' file - presumably via a link or possibly a symbolic link. You can specify where the table is stored in the CREATE TABLE statement in SE.
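As a very rough sketch, and only if I have the SE syntax right: the audit-header columns below come from the schema quoted in the question, the trailing comment stands for the audited table's own columns, and the table name and IN clause path are purely illustrative:
-- Sketch only: a regular SE table with the augmented audit schema, with its
-- data file placed at a path you choose; you would then arrange for the
-- existing a_trx audit file to be (hard-)linked to that .dat location.
create table a_trx_copy
(
    a_type        char(2),
    a_time        integer,
    a_process_id  smallint,
    a_usr_id      smallint,
    a_rowid       integer
    -- ... plus the same columns as the audited transaction table
) in "/path/to/db.dbs/a_trx_copy";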
The time is a Unix time stamp - the number of seconds since 1970-01-01T00:00:00Z.
