Reading Informix-SE audit trail log tables

INFORMIX-SQL 7.32 (SE):
I've created an audit trail "a_trx" for my transaction table to know who added or updated rows in this table and when, with a snapshot of the row contents. According to the documentation, an audit table is created with the same schema as the table being audited, plus the following audit-info header columns prefixed:
table a_trx
a_type       char(2)  {record type: aa = added, dd = deleted,
                       rr = before-update image, ww = after-update image.}
a_time       integer  {internal time value.}
a_process_id smallint {process ID that changed the record.}
a_usr_id     smallint {user ID that changed the record.}
a_rowid      integer  {original rowid.}
[...]                 {same columns as the table being audited.}
So I proceeded to generate a default perform screen for a_trx, but could not locate a_trx in my table selection. I aborted, ls'd the .dbs directory, and did not see a_trx.dat or a_trx.idx, but I did find a_trx, which appears to be in .dat format according to my disk editor utility. Is there any other method for accessing this .dat clone, or do I have to trick the engine by renaming it to a_trx.dat, creating an .idx companion for it, and tweaking SYSTABLES, SYSCOLUMNS, etc. to be able to access this audit table like any other table? Also, what is the internal time value of a_time: the number of seconds since 12/31/1899?

The audit logs are not C-ISAM files; they are plain log files. IIRC, they are created with '.aud' as a suffix. If you get to choose the suffix, then you would create it with a '.dat' suffix, making sure the name does not conflict with any table name.
You should be able to access them as if they were tables, but you would have to create a table (data file) and the index file to match the augmented schema, and then arrange for the '.aud' file to refer to the same location as the '.dat' file, presumably via a hard link or possibly a symbolic link. You can specify where the table is stored in the CREATE TABLE statement in SE.
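A minimal sketch of that idea, assuming a hypothetical database directory /usr/data/mydb.dbs and a transaction table with just two columns, trx_no and trx_amount (the real column list must match your audited table exactly, and the pathname form for the IN clause should be checked against the SE manual):

CREATE TABLE a_trx_tab
(
    a_type       CHAR(2),       { aa/dd/rr/ww record type }
    a_time       INTEGER,       { Unix timestamp of the change }
    a_process_id SMALLINT,      { process ID that changed the record }
    a_usr_id     SMALLINT,      { user ID that changed the record }
    a_rowid      INTEGER,       { original rowid }
    trx_no       INTEGER,       { hypothetical audited columns... }
    trx_amount   DECIMAL(12,2)  { ...replace with your real ones }
) IN "/usr/data/mydb.dbs/a_trx_tab";

{ Then link the engine's audit file to the .dat file the new table expects, e.g.
  ln /usr/data/mydb.dbs/a_trx /usr/data/mydb.dbs/a_trx_tab.dat }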
The time is a Unix time stamp - the number of seconds since 1970-01-01T00:00:00Z.
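As a worked example, an a_time of 1234567890 decodes to 2009-02-13 23:31:30 UTC. SE 7.32 itself has no conversion function that I know of, but if the data ever lands on a later Informix Dynamic Server, DBINFO can do it in SQL; a sketch against the hypothetical a_trx_tab table above (DBINFO('utc_to_datetime', ...) is an IDS feature and may well not exist in SE):

SELECT a_type, a_usr_id,
       DBINFO('utc_to_datetime', a_time) AS changed_at  { convert Unix seconds to DATETIME }
  FROM a_trx_tab;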

Related

How to upload Polygons from GeoPandas to Snowflake?

I have a geometry column of a geodataframe populated with polygons and I need to upload these to Snowflake.
I have been exporting the geometry column of the geodataframe to a file and have tried both CSV and GeoJSON formats, but so far I either get an error or the staging table winds up empty.
Here's my code:
design_gdf['geometry'].to_csv('polygons.csv', index=False, header=False, sep='|', compression=None)
import sqlalchemy
from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL
engine = create_engine(
    URL(<Snowflake Credentials Here>)
)
with engine.connect() as con:
    con.execute("PUT file://<path to polygons.csv> @~ AUTO_COMPRESS=FALSE")
Then on Snowflake I run
create or replace table DB.SCHEMA.DESIGN_POLYGONS_STAGING (geometry GEOGRAPHY);
copy into DB.SCHEMA."DESIGN_POLYGONS_STAGING"
from @~/polygons.csv
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = '|' SKIP_HEADER = 1 compression = None encoding = 'iso-8859-1');
Generates the following error:
"Number of columns in file (6) does not match that of the corresponding table (1), use file format option error_on_column_count_mismatch=false to ignore this error File '#~/polygons.csv.gz', line 3, character 1 Row 1 starts at line 2, column "DESIGN_POLYGONS_STAGING"[6] If you would like to continue loading when an error is encountered, use other values such as 'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option. For more information on loading options, please run 'info loading_data' in a SQL client."
Can anyone identify what I'm doing wrong?
Inspired by @Simeon_Pilgrim's comment, I went back to Snowflake's documentation, where I found an example of converting a string literal to a GEOGRAPHY:
https://docs.snowflake.com/en/sql-reference/functions/to_geography.html#examples
select to_geography('POINT(-122.35 37.55)');
My polygons looked more like strings describing polygons than actual GEOGRAPHYs, so I decided I needed to treat them as strings and then call TO_GEOGRAPHY() on them.
I quickly discovered that they needed to be explicitly enclosed in single quotes and copied into a VARCHAR column in the staging table. This was accomplished by modifying the CSV export code:
import csv
design_gdf['geometry'].to_csv(<path to polygons.csv>,
index=False, header=False, sep='|', compression=None, quoting=csv.QUOTE_ALL, quotechar="'")
The staging table now looks like:
create or replace table DB.SCHEMA."DESIGN_POLYGONS_STAGING" (geometry VARCHAR);
I ran into further problems copying into the staging table related to the presence of a polygons.csv.gz file I must have uploaded in a previous experiment. I deleted this file using:
remove @~/polygons.csv.gz
Finally, converting the staging table to GEOGRAPHY:
create or replace table DB.SCHEMA."DESIGN_GEOGRAPHY" (geometry GEOGRAPHY);
insert into DB.SCHEMA."DESIGN_GEOGRAPHY"
select to_geography(geometry)
from DB.SCHEMA."DESIGN_POLYGONS_STAGING"
and I wound up with a DESIGN_GEOGRAPHY table with a single column of GEOGRAPHYs in it. Success!!!
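A quick sanity check (a hypothetical follow-up, not part of the original workflow) is to read a few rows back as WKT:

select ST_ASWKT(geometry)
from DB.SCHEMA."DESIGN_GEOGRAPHY"
limit 5;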

inserting a row having no values for Primary Key columns fails

For the following table:
I run the following stored procedure:
I'm redirected to "Results" tab and seeing nothing. Then if I click on "refresh" icon (below Results tab), then I get the dialog saying:
SQLCODE = -625 validation error for column ID, value "* null *"
And of course, nothing is added...
As far as I understand, Firebird expects some value for RC_ID (which is my PK and should, in principle, be incremented automatically). If I also supply a value for RC_ID, it works fine.
So what should I do to make a plain "insert" work without these errors?
The problem is that you are not setting a value for the primary key. Contrary to your expectation, primary keys are not automatically incremented; that is the case in every database I'm aware of. You always need to mark the column as identity, auto-increment, or generated to get that behavior, although some tools (table builders) may apply this for you by default.
If you are using Firebird 3, you can define your column as GENERATED BY DEFAULT AS IDENTITY (see Identity Column Type in the Firebird 3 release notes). For earlier Firebird versions the best way is to define a sequence (also known as generator) with a before insert trigger that populates the primary key column.
For more details on how to define an identity column (or define the trigger), see my answer on this question: Easiest way to create an auto increment field in Firebird database.
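A minimal sketch of both options, using the table and column names from the question (RESERVATIONCATEGORY, RC_ID, RC_NAAM); the column sizes are guesses, and the identity form needs Firebird 3 or later:

-- Firebird 3+: identity column
create table RESERVATIONCATEGORY (
    RC_ID   integer generated by default as identity primary key,
    RC_NAAM varchar(100)
);

-- Firebird 2.x: sequence (generator) plus a before-insert trigger
create sequence GEN_RESERVATIONCATEGORY_ID;

set term ^ ;
create trigger RESERVATIONCATEGORY_BI for RESERVATIONCATEGORY
active before insert position 0
as
begin
  if (new.RC_ID is null) then
    new.RC_ID = next value for GEN_RESERVATIONCATEGORY_ID;
end^
set term ; ^

With either of these in place, an INSERT that leaves RC_ID out (or passes NULL for it) gets its key filled in automatically.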
In Firebird, auto-increment does not work the way it does in MySQL, so sending a value for RC_ID was a must...
I found some working examples based on this idea:
create a generator
assign it to the column (PK)
call GEN_ID with that generator, like this:
begin
  insert into RESERVATIONCATEGORY (RC_ID, RC_NAAM)
  values (
    GEN_ID(GEN_RESERVATIONCATEGORY_ID, 1), 'selam'
  );
  suspend;
end

App Inventor Fusion Table Column calling

I developed an app based on App Inventor and Fusion Tables. When I want to update the total money by adding some amount to the already existing total, I get an error.
When I use a SELECT command to get information from the Fusion Table, it returns the number together with the column name. When I try to add the two values, I get the following error.
[screenshot: error message]
The result from a Fusion Table always includes the header row.
With your example SELECT statement, the result is:
TotalPaid
5000
Obviously, adding a value to that result produces an error, because you can only add numeric values.
You first have to extract the value (in your example, 5000) from the result. Convert the result into a list using the split block, splitting at \n (newline), then select the 2nd item using the select list item block.
Note: to be able to update something in a table, you need the ROWID; see also the SQL Reference Documentation of the Fusion Tables API.
For UPDATE statements, the first step is to get the ROWID of the row to be updated with a SELECT statement; the second step is to do the UPDATE itself.
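A sketch of those two steps in Fusion Tables SQL, with a hypothetical table id <tableId>, a Name column used to find the row, and made-up values (in App Inventor these statements would be sent through the FusiontablesControl component):

SELECT ROWID FROM <tableId> WHERE Name = 'John'
UPDATE <tableId> SET TotalPaid = 7000 WHERE ROWID = '1001'

The addition itself (the existing 5000 plus the new payment) has to be done in the app after extracting the value, because the UPDATE takes a literal value.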

Check if value in source flat file exists in target table with Informatica

I have a mapping that filters out a number of IDs from a source flat file and then inserts them into a target table. I want to add a condition to check whether the ID exists in the target table; if the ID doesn't exist, the row should be written to an error file. How can I get this done? I know we can use a dynamic lookup, but that will only insert into or update the target table, which is not what I want.
Do a normal lookup on the target. If the return value is null, then route it to the error file using a router.
Since you want to write the unmatched rows to the error file, use DD_REJECT in an Update Strategy transformation based on the output from the lookup,
e.g. IIF(ISNULL(col_1), DD_REJECT, DD_INSERT)
where col_1 is the output port from the lookup.
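For illustration only, here is the same existence check expressed in plain SQL, with source_ids standing in for the flat file and target_table for the target (both names are placeholders):

-- Rows from the source with no matching ID in the target;
-- these are the ones the mapping routes to the error file.
SELECT s.id
FROM   source_ids s
LEFT JOIN target_table t ON t.id = s.id
WHERE  t.id IS NULL;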

Given the hexadecimal code of a character, how to convert it to the corresponding character in CL program?

I need to find a particular entry in a journal using a CL program. The way I locate it is to run DSPJRNE to put the journal entries into an output file, then use OPNQRYF to filter out the desired one. The file is uniquely keyed, so my plan is to compare the journal entry data with the key. The problem is that one of the key fields is a packed decimal, so in the journal entry it appears as raw bytes and displays as strange symbols. In order to compare the strings, I need to convert the packed-decimal key into the corresponding characters. How can I achieve this in CL? If it is not possible in CL, what about RPG?
To answer your immediate question, the CVTCH MI instruction will convert hex to char, but I would not go that route in either CL or RPG. Rather, I would take James' advice, with a few additional steps:
DSPJRNE OUTFILE(QTEMP/DSPJRNE)
QRY input file DSPJRNE, output file QRYJRNE, select only JOESD
CRTDUPOBJ PRODUCTION_FILE QTEMP/JRNF DATA(*NO)
CPYF QRYJRNE JRNF FMTOPT(*NOCHK)
This will give you an externally described file with the exact same layout as your production file. You can query that, etc.
If you are pulling journal entries for a specific file you can dump them into an externally described file with a clever use of SQL:
CREATE TABLE QTEMP/QADSPJRN LIKE QSYS/QADSPJRN
ALTER TABLE QTEMP/QADSPJRN DROP COLUMN JOESD
CREATE TABLE QTEMP/DSPJRNE AS (SELECT * FROM QTEMP/QADSPJRN, FILE-LIB/FILE)
WITH NO DATA
DSPJRNE ... OUTPUT(*OUTFILE) OUTFILFMT(*TYPE1) OUTFILE(QTEMP/DSPJRNE)
ENDDTALEN(*CALC)
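Once the outfile is populated, the entry-specific data carries the production record layout, so the packed key can be compared directly in SQL. A hedged example, with the production file's key column written as KEYFLD and 12345 as a made-up key value (substitute your real names):

SELECT JOSEQN, JOENTT, JODATE, JOTIME
FROM   QTEMP/DSPJRNE
WHERE  KEYFLD = 12345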
