I recently migrated from Delphi 7 with SQL Server 2000 to Delphi 2010 with SQL Server 2008. I am using dbExpress.
After installing the new version I have found that on sites with a lot of data the system has become slow and unstable.
Can anyone tell me if there is a known issue between dbExpress and SQL Server 2008? Please help!
By running a Profiler trace you can see whether you have any bottlenecks on SQL Server. The default trace template (with the TextData column included for the RPC:Completed event) should be good enough to start with.
The profiler trace can be analysed to see what takes the longest time. You can easily load the trace into a table and analyse it there. Note that when loaded into a table, the duration column is in microseconds. See the function fn_trace_gettable for a quicker way of loading a trace file into a table.
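For example, a minimal sketch (the trace file path and the table name are placeholders):
-- Load a saved trace file into a table for analysis
SELECT *
INTO TraceResults
FROM sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);

-- Longest-running RPC:Completed events first (Duration is in microseconds here)
SELECT TOP 20 TextData, Duration / 1000.0 AS DurationMs, CPU, Reads, Writes
FROM TraceResults
WHERE EventClass = 10   -- RPC:Completed
ORDER BY Duration DESC;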
A common cause for poor performance, especially after a major change, is bad indexing.
Since SQL Server 2005 the optimiser keeps track, in in-memory structures, of the indexes it would like to have had available. These can be queried through the dynamic management views sys.dm_db_missing_index_details, sys.dm_db_missing_index_groups and sys.dm_db_missing_index_group_stats.
Here is a simple SQL sample to create your own missing-index report, including the basic code to generate each missing index.
select
d.statement
, d.equality_columns
, d.inequality_columns
, d.included_columns
, s.user_seeks Seeks
, s.last_user_seek
, cast (s.avg_total_user_cost as decimal (9,2)) Cost
, s.avg_user_impact [%]
, 'CREATE INDEX MissingIndex_ ON ' + d.statement + '('
+ case when equality_columns IS NOT NULL then equality_columns else '' end
+ case when equality_columns IS NOT NULL AND inequality_columns IS NOT NULL then ', ' else '' end
+ case when inequality_columns IS NOT NULL then inequality_columns else '' end
+ ')'
+ case when included_columns IS NOT NULL then ' INCLUDE (' + included_columns + ')' else '' end
AS SQL
from sys.dm_db_missing_index_details d
INNER JOIN sys.dm_db_missing_index_groups g ON d.index_handle = g.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats s ON g.index_group_handle = s.group_handle
I was wondering if someone could help me with the error message I am getting from Snowflake. I am trying to create a stored procedure that will loop through 125 files in S3 and copy them into the corresponding tables in Snowflake. The table names are the same as the CSV file names. In the example I only have 2 file names set up (if someone knows a better way than having to list all 125, that would be extremely helpful).
The error message I am getting is the following:
syntax error line 5 at position 11 unexpected '1'.
syntax error line 6 at position 22 unexpected '='. (line 4)
CREATE OR REPLACE PROCEDURE load_data_S3(file_name VARCHAR,table_name VARCHAR)
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
FOR i IN 1 to 2 LOOP
CASE i
WHEN 1 THEN
SET file_name = 'file1.csv';
SET table_name = 'FILE1';
WHEN 2 THEN
SET file_name = 'file2.csv';
SET table_name = 'FILE2';
--WILL LIST THE REMAINING 123 WHEN STATEMENTS
ELSE
-- Do nothing
END CASE;
COPY INTO table_name
FROM #externalstg/file_name
FILE_FORMAT = (type='csv');
END LOOP;
RETURN 'Data loaded successfully';
END;
$$;
There are various ways to list the files in a stage (see the post here). You can loop through the resultset and run COPY INTO on each record.
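Here is a rough sketch of that approach in Snowflake Scripting. The stage name @externalstg and the file-to-table naming convention (file1.csv loads FILE1) come from the question; the rest is an assumption rather than a drop-in solution. Note that COPY INTO does not accept variables as identifiers, so the statement is built as a string and run through EXECUTE IMMEDIATE.
CREATE OR REPLACE PROCEDURE load_data_S3()
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
DECLARE
    file_list  RESULTSET;
    file_name  VARCHAR;
    table_name VARCHAR;
    copy_stmt  VARCHAR;
BEGIN
    -- List the staged files and capture the listing as a resultset
    -- (assumes LIST can be executed from inside the block)
    LIST @externalstg;
    file_list := (SELECT "name" AS file_path
                  FROM TABLE(RESULT_SCAN(LAST_QUERY_ID())));

    FOR rec IN file_list DO
        -- e.g. '.../file1.csv' -> file name 'file1.csv' -> table 'FILE1'
        file_name  := SPLIT_PART(rec.file_path, '/', -1);
        table_name := UPPER(SPLIT_PART(file_name, '.', 1));

        -- Build the COPY INTO dynamically because identifiers cannot be variables
        copy_stmt := 'COPY INTO ' || table_name ||
                     ' FROM @externalstg/' || file_name ||
                     ' FILE_FORMAT = (TYPE = ''CSV'')';
        EXECUTE IMMEDIATE copy_stmt;
    END FOR;

    RETURN 'Data loaded successfully';
END;
$$;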
1. The stored procedure:
create procedure sp_count_demo(
i_user_id varchar(30)
)
returning integer as num_of_row ;
define p_count integer ;
set isolation to dirty read ;
let p_count = 0 ;
select count(*)
into p_count
from some_table a
where a.user_id = i_user_id
;
return p_count;
end procedure ;
2. The procedure in (1) will be called from Java web apps that use a connection pool.
3. Do I need to set the isolation level back to its previous value before returning the result (i.e. to prevent another process that reuses the connection from inheriting the "dirty read" isolation level)?
4. What is the default isolation level?
5. Where/how can I get the default value for the isolation level?
Thanks in advance
Since a connection pool is in use, the stored procedure should return the isolation level to its previous setting in order to avoid unexpected results when another app uses the same connection. The default isolation level depends on the logging mode of the database:
For an unlogged database it will effectively be "Dirty Read" (shown as NL by the onstat -g ses command).
For a mode ANSI database it will be "Repeatable Read."
For other logged databases it will be "Committed Read."
The onconfig parameter USELASTCOMMITTED can also be used to change how the default isolation level is used. More information on that can be found in the Knowledge Center (search on USELASTCOMMITTED).
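Putting that together, here is a minimal sketch of the procedure from the question with the isolation level restored before it returns. It assumes a logged, non-ANSI database, so the level being restored is Committed Read; adjust the final SET ISOLATION if your default differs.
create procedure sp_count_demo(
    i_user_id varchar(30)
)
returning integer as num_of_row ;

    define p_count integer ;

    -- relax the isolation level just for this query
    set isolation to dirty read ;

    select count(*)
      into p_count
      from some_table a
     where a.user_id = i_user_id ;

    -- put the pooled connection back to the assumed default before returning
    set isolation to committed read ;

    return p_count ;

end procedure ;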
It is possible for a session to find out its current isolation level using a query against the sysmaster database. This query was run on Informix 12.10 but should also be valid for 11.70:
select tx.isolevel
from sysmaster:systxptab tx, sysmaster:sysrstcb r, sysmaster:sysscblst s
where s.address = r.scb and tx.owner = r.address
and s.sid = dbinfo("sessionid");
It returns the isolation level as an integer, which is an internal value; for example, Committed Read has the value 2. I don't believe the mapping of isolation level to integer value is published, so you will need to experiment by setting different levels for a session and then running the above query.
Delphi XE2 + Zeos 7.0.3 Stable + Firebird 1.0
I am porting an old app from Delphi 5 + IBX and got this problem:
I have a table in which one of the fields is auto-calculated:
NUMERIC(18,2)
COMPUTED BY (( (VAL_ITENS +
VAL_SERVICO +
TAXAENTRADA +
VAL_COUVERT +
VAL_ESTACION +
VAL_CONSUM +
VAL_TAXA-DESCONTO_V) - ((VAL_ITENS*DESCONTO_P)/100)))
With IBX it is calculated fine; with ZeosLib it does not get calculated, even using the same database file and server.
Is there a way to force this calculation to happen? I have tried to update the field from the program, but it is read-only.
ANSWER: I had some problems with Zeos and Firebird, so I assumed ALL the problems were related to Zeos, but that is not the case; the problem was that one of the fields was NULL, so the result was being calculated as NULL.
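For anyone hitting the same symptom: a COMPUTED BY expression evaluates to NULL as soon as any of its operands is NULL. A minimal clean-up sketch, using a hypothetical table name MY_TABLE together with the column names from the question:
/* MY_TABLE is a placeholder; the columns come from the COMPUTED BY expression above. */
UPDATE MY_TABLE SET VAL_COUVERT = 0 WHERE VAL_COUVERT IS NULL;
UPDATE MY_TABLE SET VAL_ESTACION = 0 WHERE VAL_ESTACION IS NULL;
/* ... repeat for the other columns referenced by the computed field ... */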
Does anybody know if there is a command string size limitation in Firebird?
When executing a small "insert" script it works perfectly, but when the script has a lot of lines it returns the following error: "Unexpected end of command - line X, column Y".
Interestingly, the line and column numbers vary depending on the actual script size.
I'm using Firebird 2.5
Here is the script being executed:
set term ^ ;
EXECUTE BLOCK AS BEGIN
insert into TABLE (COLUMNA) values (13);
...
insert into TABLE (COLUMNA) values (14);
END^
set term ; ^
Firebird 2.5 and earlier have a limit of 64 kilobytes for the query text; in Firebird 3.0 this limit was increased to 10 MB when the new API is used. An EXECUTE BLOCK is a single query, so its text must not exceed 64 kilobytes.
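As a workaround you can split the script into several smaller EXECUTE BLOCK statements so that each block's text stays well under 64 KB, or drop the wrapper entirely and let isql execute each INSERT as a separate statement. A minimal sketch, reusing the placeholder table and column names from the question:
set term ^ ;
EXECUTE BLOCK AS BEGIN
    insert into TABLE (COLUMNA) values (13);
    /* ... first batch of inserts, keeping this block's text under 64 KB ... */
END^
EXECUTE BLOCK AS BEGIN
    /* ... next batch of inserts ... */
    insert into TABLE (COLUMNA) values (14);
END^
set term ; ^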
What is an effective way to implement an in-memory semantic web triple store using the basic .NET collection classes in F#?
Are there any F# examples or projects already doing this?
There is also SemWeb, a C# library that provides its own SQL-based triple store: http://razor.occams.info/code/semweb/
I'm working on a new C# library for RDF called dotNetRDF and have just released the latest alpha: http://www.dotnetrdf.org.
Here's an equivalent program to the one spoon16 showed:
open System
open VDS.RDF
open VDS.RDF.Parsing
open VDS.RDF.Query
//Get a Graph and fill it from a file
let g = new Graph()
let parser = new TurtleParser()
parser.Load(g, "test.ttl")
//Place into a Triple Store and query
let store = new TripleStore()
store.Load(g)
let results = store.ExecuteQuery("SELECT ?s ?p ?o WHERE {?s ?p ?o} LIMIT 10") :?> SparqlResultSet
//Output the results
Console.WriteLine(results.Count.ToString() + " Results")
for result in results.Results do
    Console.WriteLine(result.ToString())
//Wait for user to hit enter so they can see the results
Console.ReadLine() |> ignore
My library currently supports my own SQL databases, AllegroGraph, 4store, Joseki, Sesame, Talis and Virtuoso as backing stores.
Check out LinqToRdf which, in addition to simple VS.NET hosted modeling tools, provides a full LINQ query provider and round-trips data when dealing with in-memory databases:
var ctx = new MusicDataContext(@"http://localhost/linqtordf/SparqlQuery.aspx");
var q = (from t in ctx.Tracks
where t.Year == "2006" &&
t.GenreName == "History 5 | Fall 2006 | UC Berkeley"
orderby t.FileLocation
select new {t.Title, t.FileLocation}).Skip(10).Take(5);
foreach (var track in q)
{
Console.WriteLine(track.Title + ": " + track.FileLocation);
}
Intellidimension offers a .NET based in-memory triple store as part of their Semantics SDK. They often provide free licenses for research/education if you contact them.
I use their technology every day from C# and PowerShell, though, and I really enjoy it.
//disclaimer: really the first time I have used F# so this may not be any good...
//but it does work
open Intellidimension.Rdf
open System.IO
let rdfXml = File.ReadAllText(@"C:\ontology.owl")
let gds = new GraphDataSource()
gds.Read(rdfXml) |> ignore
let tbl = gds.Query("select ?s ?p ?o where {?s ?p ?o} limit 10")
System.Console.Write(tbl.RowCount)
System.Console.ReadLine() |> ignore
Aduna's Sesame framework has been ported to .NET. Here is sample code that shows how to use F# to connect to a Sesame HTTP repository: http://debian.fmi.uni-sofia.bg/~toncho/myblog/archives/309-Using-F-to-connect-to-a-Sesame-repository.html
I know this doesn't directly answer your question, but instead of developing your own triple store you could use 4store, which is a stable, proven triple store, and write a .NET client for it.