Hi
I have a database with a huge amount of data, and there are several master-detail relationships within it. I wanted to build views / stored procedures so that data fetching gets much faster. I know I can use indexes, but I saw that there is a limitation in SQL Server Express edition, which is the version I am using. How can I do this in the Express edition? Please guide me.
Creating a stored procedure won't, by itself, speed up query execution. Note also that SQL Server Express does support ordinary indexes; the Express limitations are around things like database size and SQL Server Agent, not indexing.
Making sure that your foreign keys are indexed is the first step.
The definitive answer to any question of query performance comes from looking at the query execution plan. Make sure you don't have any table scans (or clustered index scans over large tables); you want to see index seeks on the join columns.
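As a concrete illustration (a minimal sketch; the table and column names are hypothetical), indexing the foreign key on the detail side of a master-detail pair looks like this:

-- Hypothetical master-detail pair: dbo.Orders (master), dbo.OrderDetails (detail).
-- Index the foreign key column that the join uses:
CREATE NONCLUSTERED INDEX IX_OrderDetails_OrderId
    ON dbo.OrderDetails (OrderId);

This works in every edition, including Express. Afterwards, check the actual execution plan to confirm the join now performs an index seek rather than a scan.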
For database management, my team is currently using an RDBMS-based solution (MSSQL, to be exact), but we expect to move to Cassandra soon as we're expecting a huge bump in traffic.
The application logic is currently decoupled from the insertion logic: the application only calls specific procedures in SQL, which run some data validations and make the corresponding insertions.
I want to do something similar in Cassandra. However, I am unable to find anything that could aid me in doing so. UDFs are not useful, as they are mostly used in SELECT queries. I'd appreciate the community's help/advice on this, thanks!
The closest feature to a stored procedure in Cassandra is a batch, as it allows you to "bundle" the different DML statements associated with an insert, update, or delete.
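For example, a logged batch is commonly used to keep two denormalized tables in step. This is a minimal CQL sketch; the orders_by_id and orders_by_customer tables are hypothetical:

BEGIN BATCH
  INSERT INTO orders_by_id (order_id, customer_id, total)
    VALUES (550e8400-e29b-41d4-a716-446655440000, 42, 99.50);
  INSERT INTO orders_by_customer (customer_id, order_id, total)
    VALUES (42, 550e8400-e29b-41d4-a716-446655440000, 99.50);
APPLY BATCH;

Note that a batch only groups statements; it doesn't run logic, so any validation your SQL procedures currently perform would have to move into the application layer.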
If you are moving from an RDBMS to Cassandra, one of the biggest challenges is adjusting to the required data modeling, and more specifically, to denormalization of the data. The data model is the key factor in the success (or failure) of any Cassandra implementation, and because of that, you can find several resources on the web (to mention the basics: the eBay blog, DataStax Academy's data modeling course).
Good luck with your implementation!
I'm working against a DB2 database with a DotNet application. Some parts of the system are built and maintained in AS400 LANSA; other parts are DotNet.
We currently have the issue that we maintain a lot of summary tables that need to be updated with values from many different tables. This causes our data to be out of sync all the time, and we need to run scripts each night to check for and correct errors.
These tables are supposedly required, since they claim it is impossible to do joins or use views in LANSA.
Is this correct? Are there alternatives I can suggest to them to avoid these problems?
If you use LANSA with RDMLX, then you can use SELECT_SQL to incorporate JOINs as needed.
By Views, do you mean logical files? Those get used as needed by the database engine automatically. If you mean a VIEW created off multiple physical files, then these can be replicated in LANSA using pre-determined Join Fields. Have a look at the LANSA documentation about these.
You could use something like this:
#MYSQLST := ('SELECT {write your SQL select statement here}')
Select_Sql Fields({my fields}) Using(#MYSQLST)
Worth reviewing the LANSA documentation on the use of Free Format SELECT_SQL
Programming Language: Delphi 6
SQL Server in The Back end.
Problem:
The application used to hit the DB each time we needed something, and ended up hitting it more than 2,000 times to fetch certain things, which made the application slow. This happened across a lot of tables, each with a different structure and a different number of columns. So I'm trying to reduce the number of calls.
We can have about 4000 records at a time from each table.
Proposed Solution:
Let’s get all the data from DB at once and use it when we need it so we don’t have to keep hitting the DB.
How Solution is turning out so far:
This version of Delphi doesn't have a dictionary class, so we already have a dictionary implementation built on a string list (let us assume that implementation is good).
Solution 1:
Store this in a dictionary that we created with:
A unique field as a key.
And add the rest of the data as strings in the string list, separated like this:
FieldName1:FieldValue1,FieldName2:FieldValue2,…
We might have to create about 2,000 string lists to map data to keys.
I took a look at the following link:
How Should I Implement a Huge but Simple Indexed StringList in Delphi?
It looks like they could move to a different DB, which isn't possible for me.
Is this a sane solution?
Solution 2:
Store this in a dictionary whose values are lists.
Each list will hold Delphi records.
Records can't be added to a TList directly, so I took a look at this link:
Delphi TList of records
Solution 3:
Or, given that I'm using TADOQuery, should I use Seek or Locate to find my records?
Please advise on the best way to do this.
Requirements:
We need random access to this data.
Insertion of data will happen only once, when we fetch all the data per table as we require it.
We only need to read the data; we don't have to modify it.
We constantly need to search by primary key.
In addition to changing the application, we have already done good indexing on the DB to take care of things from the DB side. This is more about making things run well from the application.
This sounds like a perfect use case for TClientDataSet. It's an in-memory dataset that can be indexed, filtered, and searched easily; it can hold any information you can retrieve from the database using a SQL statement; and it has pretty good performance over a few thousand reasonable-sized rows of data. (The link above is to the current documentation, as I don't have one available for the Delphi 6 docs. They should be very similar, although I don't recall which specific version added the ability to directly include MidasLib in your uses clause to eliminate distributing Midas.dll with your app.)
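As a rough illustration (a minimal sketch only, not verified against Delphi 6; it assumes an existing TADOQuery named ADOQuery1 and hypothetical ID and Name fields), loading the rows once and then doing keyed in-memory lookups could look like this:

// uses DB, DBClient, Provider, Dialogs
procedure LoadAndLookup;
var
  Provider: TDataSetProvider;
  CDS: TClientDataSet;
begin
  Provider := TDataSetProvider.Create(nil);
  CDS := TClientDataSet.Create(nil);
  try
    Provider.DataSet := ADOQuery1;   // source query, executed once
    CDS.SetProvider(Provider);
    CDS.Open;                        // pulls all rows into memory
    CDS.IndexFieldNames := 'ID';     // in-memory index on the key field
    if CDS.FindKey([42]) then        // fast keyed lookup, no DB round trip
      ShowMessage(CDS.FieldByName('Name').AsString);
  finally
    CDS.Free;
    Provider.Free;
  end;
end;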
Cary Jensen wrote a series of articles about it a few years back that you might find useful. The first one can be found in A ClientDataset in Every Database Application; the others in the series are linked from it.
Hi all,
We know that the CommandType property of a SqlCommand object has three options: TableDirect, Text, and StoredProcedure ("SP").
Knowing that SPs have benefits over the other two options, my question is: do you create lots of SPs in your own systems?
Or what solution do you use instead of creating SPs?
Thank you
Aside from creating stored procedures, you can use an Object Relational Mapper (ORM),
such as:
LINQ to SQL
NHibernate
Entity Framework
Data Access: SPs vs ORMs
Choose the best way that suits you.
In all production systems, I've used SPs and plain ADO.NET to access the data. The systems ranged from 100-300 tables with about 500-1,000 stored procedures.
Most of the Data Access code is generated using a tool. I've posted the source code and sample application on my blog if you're interested in using/modifying it. The tool can generate over 100,000 lines of code in about 20-25 seconds going against a database with about 750 stored procedures.
Data Access Layer - Code Gen
Of course, if you're not familiar with databases, data modeling/design, and stored procedures, you're probably better off using LINQ to SQL or EF4 (Entity Framework version 4) or similar. If you need brute-force performance, then core ADO.NET along with stored procedures is the way to go.
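For reference, calling a stored procedure through plain ADO.NET looks roughly like this (a minimal sketch; the dbo.Customer_GetById procedure, its parameter, and the returned Name column are hypothetical):

using System;
using System.Data;
using System.Data.SqlClient;

class Demo
{
    static void Main()
    {
        // placeholder connection string; substitute your own
        var connectionString = "Server=.;Database=MyDb;Integrated Security=true";
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.Customer_GetById", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure; // rather than CommandType.Text
            cmd.Parameters.Add("@CustomerId", SqlDbType.Int).Value = 42;
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader["Name"]);
            }
        }
    }
}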
Re: your first question
When you go down the path of stored procedures, the number of stored procedures begins to grow continually for the life of the project. Outside of the basic CRUD operations, each stored procedure tends to be tightly bound to a particular problem and not very re-usable. A rule of thumb is that I can expect 8-12 stored procedures for each data table (excluding reference or code tables, such as the list of states or countries).
The very large number of procs makes naming conventions very important so that you can find anything without constantly visually re-scanning the whole list of 400-500 procs.
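For example, a Table_Action naming convention keeps all of a table's procs sorted together in the object list (a sketch against a hypothetical Customer table; its siblings would be Customer_Insert, Customer_Update, Customer_Delete, and so on):

CREATE PROCEDURE dbo.Customer_GetById
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CustomerId, Name, Email
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;
END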
Re: your second question
There are a lot of ugly things that happen with SQL written inside strings in C# or VB.NET: it's error-prone, hard to read, etc.
LINQ, NHibernate, and many others exist, but the "concept count" (the number of things you need to learn to start being productive) is much higher than learning how to write a good stored procedure executor in C#.
I try to make sure that stored procedures are only created for database functionality - not business logic.
It's Database Functionality when I have some database architecture that's a bit obscure and I want to hide that from callers.
It's Business Logic when it is simply the way in which my application adds or updates data, or how much validation it does, etc.
Does anyone know of an example of how to dynamically define and build ADO MD (ActiveX Data Objects Multidimensional) catalogs and cube definitions with a set of data other than a database?
Background: we have a huge amount of data in our application that we export to a database and then query using the usual SQL joins, groups, sums etc to produce reports. The data in the application is originally in objects and arrays. The problem is the amount of data is so large the export can take > 2 hours. So I am trying to figure out a good way of querying the objects in memory, either by a custom OLAP algorithm or library, or ADO MD. But I haven't been able to find an example of using ADO MD without a database behind it.
We are using Delphi 2010, so we would use the ADO ActiveX objects, but I imagine ADOMD.NET is similar. I realize that if the application data were already stored in a database, the problem would solve itself. Also, if Delphi had LINQ capability, I could query the objects and arrays that way.
The export takes 2 hours? Some companies have dealt with worse! On a nightly basis!
We used to schedule the transfer of data into the warehouse at 3am, then trigger the cube processing jobs around 6am... then struggle to make 9am availability.
One thing that improved efficiency was making sure that only new data was dealt with, not old values that remained unchanged. For example, our warehouse held restaurant sales for the last 5 years, so there was no need to reload any rows over a month old as they would be the same!
Is it possible to export your entire application's data into a SQL database just once, and then each day after that just add a little bit of new data, or re-export just a subsection of the application? That will help load times.
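A sketch of that kind of incremental load in SQL (all table and column names are hypothetical, and it assumes rows are never modified after they are written):

-- Append only rows newer than anything already in the warehouse
INSERT INTO warehouse.Sales (SaleId, StoreId, SaleDate, Amount)
SELECT s.SaleId, s.StoreId, s.SaleDate, s.Amount
FROM staging.Sales AS s
WHERE s.SaleDate > (SELECT COALESCE(MAX(w.SaleDate), '19000101')
                    FROM warehouse.Sales AS w);

Loading only the new slice each day keeps the nightly window small, exactly as with the restaurant-sales example above.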