I support FileNet applications and generally focus on performance improvement techniques. We often face issues related to query optimization. Typically we get the queries from the DBA, and these are DB SQL statements fired at the database level, whereas the application code issues CE SQL, not DB SQL. I am aware that the CE parses CE SQL into the underlying DB SQL. I am trying to figure out whether, given the DB SQL, I can find the corresponding CE SQL that was fired, or write a code or script into which I enter the CE SQL and the corresponding DB SQL gets generated. I would appreciate any pointers on this, as I am really stuck.
You need to enable Trace Logging for the DB subsystem. This is done through the Trace Control tab of Domain configuration in ACCE. Then you will be able to see database queries in p8_server_trace.log.
For convenience you might want to enable tracing for the SRCH subsystem as well. Then original and generated queries will go hand in hand.
Detailed info on Trace Logging is available in the FileNet P8 documentation.
The way to capture CE SQL queries is to turn on auditing for the object class you are interested in and select Query Event as the event. Then, every time a query is performed, an event object is created. This object has a property called QueryText which contains the CE query that was performed. You can use the creation time or some other information in the query to match it to your database query.
The query events can be queried using ACCE or accessed programmatically via the API class com.filenet.api.events.QueryEvent.
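For the programmatic route, here is a minimal sketch using the P8 Java API. The connection URL, object store name, and credentials are placeholders, and it assumes query auditing has already been enabled as described above:

import com.filenet.api.collection.RepositoryRowSet;
import com.filenet.api.core.Connection;
import com.filenet.api.core.Domain;
import com.filenet.api.core.Factory;
import com.filenet.api.core.ObjectStore;
import com.filenet.api.query.RepositoryRow;
import com.filenet.api.query.SearchSQL;
import com.filenet.api.query.SearchScope;
import com.filenet.api.util.UserContext;

import javax.security.auth.Subject;
import java.util.Iterator;

public class DumpQueryEvents {
    public static void main(String[] args) {
        // Placeholder endpoint and credentials; adjust to your environment
        Connection conn = Factory.Connection.getConnection("http://ceserver:9080/wsi/FNCEWS40MTOM/");
        Subject subject = UserContext.createSubject(conn, "ceuser", "cepassword", null);
        UserContext.get().pushSubject(subject);
        try {
            Domain domain = Factory.Domain.fetchInstance(conn, null, null);
            ObjectStore os = Factory.ObjectStore.fetchInstance(domain, "MyObjectStore", null);
            // Pull the CE SQL text and creation time of each query audit event
            SearchSQL sql = new SearchSQL("SELECT QueryText, DateCreated FROM QueryEvent");
            RepositoryRowSet rows = new SearchScope(os).fetchRows(sql, null, null, null);
            for (Iterator<?> it = rows.iterator(); it.hasNext();) {
                RepositoryRow row = (RepositoryRow) it.next();
                System.out.println(row.getProperties().getDateTimeValue("DateCreated")
                        + "  " + row.getProperties().getStringValue("QueryText"));
            }
        } finally {
            UserContext.get().popSubject();
        }
    }
}

The DateCreated value is what you would match against the timestamps of the DB SQL captured on the database side.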
Be aware that on a busy system a lot of query events can be generated!
We have several data feeds that run every evening using SSIS packages with SQL table data sources. Part of this standard process is a data engine we've built using stored procedures that run for each data feed and return that customer's data based on their specific parameters. The engine dumps the data into a SQL table, where it is retrieved by the package and then remains until the next evening's run.
Maybe two weeks ago we started to intermittently get the following error running these stored procedures (which are executed using SQL Agent):
"INSERT EXEC failed because the stored procedure altered the schema of the target table. [SQLSTATE 42000] (Error 556). The step failed."
These stored procedures have been running for months, some even years, without being changed. These errors just started intermittently occurring. Inside the stored procedure we have a temporary table that receives the engine data, and a table that is dropped and re-created using a statement like this:
SELECT field1, field2 INTO sqlTable FROM #tempTable
I looked for a SQL Server update or something that may have changed and caused these errors all of a sudden, but can't find anything. It has occurred in several different stored procedures, intermittently, all with this same kind of structure, but I can't identify any particular reason. It will happen one night and not another, to one stored procedure and not another just like it. Any idea what could cause this?
We are running Microsoft SQL Server 2016 Standard 64-bit (13.0.4604.0) on Windows Server 2016 Datacenter 10.0 (Build 14393: ) (Hypervisor). This is all on a VM in the Azure environment.
If you are using INSERT ... EXEC and have Query Store enabled, that might be the reason.
Query Store periodically runs an automatic cleanup.
This has turned out to cause problems when a stored procedure calls another stored procedure using the INSERT ... EXEC syntax.
This is only an issue with SQL Server 2016.
For more details and possible solution, see: https://support.microsoft.com/en-us/help/4465511/error-556-insert-exec-failed-stored-procedure-altered-table-schema
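If you want to confirm whether Query Store is involved before following the KB article's guidance, here is a hedged JDBC sketch. The server, database name, and credentials are placeholders, and it requires Microsoft's JDBC driver on the classpath; sys.database_query_store_options is the catalog view that reports Query Store state:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CheckQueryStore {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection string; point it at the database the jobs run in
        String url = "jdbc:sqlserver://myserver;databaseName=MyDb;user=appuser;password=secret";
        try (Connection c = DriverManager.getConnection(url);
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(
                     "SELECT actual_state_desc FROM sys.database_query_store_options")) {
            if (rs.next() && !"OFF".equals(rs.getString(1))) {
                // Check the KB article for the current fix; disabling Query Store
                // (ALTER DATABASE [MyDb] SET QUERY_STORE = OFF) is one stopgap
                System.out.println("Query Store is " + rs.getString(1));
            }
        }
    }
}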
I am using Neo4JClient to connect to my Neo4j database and execute Cypher queries. My goal is to check the performance of the queries I send to the database. The problem is that I have to check it on the DB side, so I can't use Stopwatch in .NET. The queries have to be executed using Neo4JClient. I don't need to know execution times for specific queries; an average over the last 1000 queries, for example, would be enough.
I can use only Neo4J Community Edition.
Thanks in advance!
Neo4j Enterprise Edition has the capability to log slow queries taking longer than a given threshold, see the config settings containing querylog on http://neo4j.com/docs/stable/configuration-settings.html.
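For reference, the relevant block of neo4j.properties looks roughly like the sketch below. The setting names follow the 2.x-era documentation linked above and should be treated as assumptions; later releases renamed them (e.g. dbms.logs.query.enabled in 3.x), and they are Enterprise-only, so on Community Edition you would have to time queries around the client instead:

# Enterprise-only query logging (2.x-era names; verify against your version)
dbms.querylog.enabled=true
# log only queries slower than this threshold
dbms.querylog.threshold=500ms
dbms.querylog.filename=data/log/queries.log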
We have recently converted our JD Edwards EnterpriseOne system from an AS400/DB2 platform to Windows & SQL Server. In the old system we had an RPG/CL program that would transfer data from an AS400 library to the accounting system for further processing. The end users needed to initiate this process, so it was executed via a menu command.
To replicate this behavior after the conversion, I created a stored procedure in SQL Server 2008 R2 that inserts records into the SQL Server database from the AS400 via a linked server and then updates the records on the AS400 to indicate that they have been processed. To allow the end users to execute this process, I created an SSRS (2005) report that executes the stored procedure.
When the SSRS report is executed interactively, we intermittently get the error 'For security reasons DTD is prohibited in this XML document', which from my research is caused by SSRS running out of memory.
Does anyone know of another/better way to transfer the data?
The transfer/update in the stored procedures is essentially:
INSERT INTO [SQL DEST TABLE]
SELECT *
FROM [AS400 Linked Server/Table]
UPDATE OPENQUERY (AS400_LINK, 'MY SELECT QUERY')
SET FLAG = PROCESSED;
You should get better database performance if you allow a server to perform work with its own data where possible, rather than having to transmit it back and forth.
I will make a few guesses about the circumstances, and if you correct me, I will gladly adjust the answer to fit your situation. This sounds like you are extracting data from an accounting transaction table in DB2, and that when done, you want to update the flag in those same records. That might indicate that the records could stay in that table essentially forever, or perhaps that some other process clears them out. There is no WHERE on your SELECT, so I will assume they do get removed. I will assume that we don't know if more records might get added to the transaction table at any time, including any period between extracting the info and updating the flag.
I wonder if you could update the flag immediately upon extraction, before the records have actually been processed by SQL Server. Would this be allowed logistically, and within your business requirements?
Suppose you...
1. extract the DB2 unprocessed transaction data into a workfile,
2. transfer the workfile to SQL Server,
3. perform whatever processing you want to do in SQL Server,
4. tell DB2 to update the transaction table based on the workfile.
So in DB2, step 1 might look like
INSERT INTO workfile
SELECT *
FROM transactions
WHERE flag = unprocessed
During step 3, your SQL Server job could update the flag in the workfile to an error status, for any records that SQL Server cannot process properly.
Step 4 on DB2 could be
UPDATE transactions
SET flag = processed
WHERE transid IN (SELECT transid
FROM workfile
WHERE flag <> error
)
Hopefully, processing errors on SQL Server will be fairly rare. If that process updates transactions in the workfile only on an error, it should be faster than transmitting updates for each success. The UPDATE statement above should also run faster on DB2, since it is driven by the workfile on the same server, rather than by data transmitted back up to DB2.
Hi, I am using Neo4j in my application and my structure is as follows:
I am using Embedded Graph API
I have several databases that I point to using a pool that I maintain in my application, e.g. db1, db2, db3, ..... db100
When I want to access a particular database I point to it using new EmbeddedGraphDatabase("Path to db(n)")
The problem is that as the connection pool count grows, the RAM consumed by the application keeps increasing and eventually brings the application down.
So I am thinking of migrating from Neo4j to some other database.
Additionally only a small part of my database is utilizing the graph structure.
One way to migrate is to write a script for it. Is there any better option?
My other question is: which database would best preserve my structure?
Another option I am considering is to keep part of my data in Neo4j and move the rest to some other database.
If anything is unclear I can clarify.
Thanks in advance.
An EmbeddedGraphDatabase instance is not the equivalent of a "connection" in SQL. It's designed to run a long time (days, months). Hence starting/stopping is costly.
What is the use case for having hundreds of separate databases in the same JVM?
Lots of small databases will perform poorly, as the graph database is designed to hold the whole data model on a single host.
Do you run a single JVM per database?
You can control the amount of memory used by Neo4j by providing the correct properties for memory mapping; you can also use the GCR cache from neo4j-enterprise and control it via the cache-size properties.
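As a rough sketch against the 1.x embedded API the question is using; the setting names and values are assumptions that vary by Neo4j version, and the GCR cache requires neo4j-enterprise:

import java.util.HashMap;
import java.util.Map;
import org.neo4j.kernel.EmbeddedGraphDatabase;

public class TunedDb {
    public static EmbeddedGraphDatabase open(String path) {
        Map<String, String> config = new HashMap<String, String>();
        // Shrink the memory-mapped store buffers, since each of the many
        // small databases otherwise claims the defaults for itself
        config.put("neostore.nodestore.db.mapped_memory", "10M");
        config.put("neostore.relationshipstore.db.mapped_memory", "20M");
        config.put("neostore.propertystore.db.mapped_memory", "20M");
        // Enterprise-only GCR cache with explicit size caps
        config.put("cache_type", "gcr");
        config.put("node_cache_size", "50M");
        config.put("relationship_cache_size", "50M");
        return new EmbeddedGraphDatabase(path, config);
    }
}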
I think it still makes sense to keep the graph part in Neo4j and only move the non-graphy part.
I'm using Java DB (Derby).
I want to import a public view of my data into another database (also Java DB).
I want to pass this data and save it into the other database. I'm having trouble since the general rule is one connection per database.
Help would be much appreciated.
You need two connections, one to each database.
If you want the two operations to be a single unit of work, you should use XA JDBC drivers so you can do two-phase commit. You'll also need a JTA transaction manager.
This is easy to do with Spring.
SELECT from one connection; INSERT into the other. Just standard JDBC is what I'm thinking. You'll want to batch your INSERTs and checkpoint them if you have a lot of rows so you don't build up a huge rollback segment.
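A hedged sketch of that pattern with plain JDBC against embedded Derby; the database names, the table, its columns, and the batch size are all assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CopyBetweenDerbyDbs {
    public static void main(String[] args) throws SQLException {
        // One connection per database: the "two connections" from the answer above
        try (Connection src = DriverManager.getConnection("jdbc:derby:sourceDb");
             Connection dst = DriverManager.getConnection("jdbc:derby:targetDb")) {
            dst.setAutoCommit(false); // commit in batches instead
            try (Statement sel = src.createStatement();
                 ResultSet rs = sel.executeQuery("SELECT id, name FROM public_view");
                 PreparedStatement ins = dst.prepareStatement(
                         "INSERT INTO public_view (id, name) VALUES (?, ?)")) {
                int pending = 0;
                while (rs.next()) {
                    ins.setInt(1, rs.getInt(1));
                    ins.setString(2, rs.getString(2));
                    ins.addBatch();
                    // Checkpoint every 1000 rows to keep the rollback segment small
                    if (++pending % 1000 == 0) {
                        ins.executeBatch();
                        dst.commit();
                    }
                }
                ins.executeBatch();
                dst.commit();
            }
        }
    }
}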
I'd wonder why you have to duplicate data this way. "Don't Repeat Yourself" would be a good argument against it. Why do you think you need it in two places like this?