How can I display the query count when rendering a view? I would like to show a text message like:
Processed with 8 queries.
In PHP I do it like this:
function query($query)
{
    // Increase the query counter by 1
    global $nb_queries, $timer_queries;
    $nb_queries++;
    // Time the execution (timer() returns the current time in seconds)
    $beginning = timer();
    // Execute the query and save the result
    $req_return = mysql_query($query);
    // Add this query's duration to the running total
    $timer_queries += round(timer() - $beginning, 6);
    return $req_return;
}
How can I do something similar in Rails?
Just found a gem that resolves my question:
Query Diet https://github.com/makandra/query_diet
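If you would rather not add a dependency, a minimal hand-rolled sketch using Rails' instrumentation API could look like the following; sql.active_record is the real ActiveSupport::Notifications event, while the initializer name and the thread-local keys are my own invention:

# config/initializers/query_counter.rb
# Count each SQL query and its duration for the current thread/request.
ActiveSupport::Notifications.subscribe('sql.active_record') do |_name, start, finish, _id, payload|
  next if payload[:name] == 'SCHEMA' # skip Rails' internal schema lookups
  Thread.current[:query_count] = (Thread.current[:query_count] || 0) + 1
  Thread.current[:query_time]  = (Thread.current[:query_time] || 0.0) + (finish - start)
end

# app/controllers/application_controller.rb -- reset the counters per request
class ApplicationController < ActionController::Base
  before_action do
    Thread.current[:query_count] = 0
    Thread.current[:query_time]  = 0.0
  end
end

The layout can then print: Processed with <%= Thread.current[:query_count] %> queries.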
I am executing the following SQL statement in Informix Data Studio 12.1. It takes around 50 to 60 ms to execute (for one day's date range).
SELECT
sum( (esrt.service_price) * (esrt.confirmed_qty + esrt.pharmacy_confirm_quantity) ) AS net_amount
FROM
episode_service_rendered_tbl esrt,
patient_details_tbl pdt,
episode_details_tbl edt,
ms_mat_service_header_sp_tbl mmshst
WHERE
esrt.patient_id = pdt.patient_id
AND edt.patient_id = pdt.patient_id
AND esrt.episode_id = edt.episode_id
AND mmshst.material_service_sp_id = esrt.material_service_sp_id
AND mmshst.bill_heads_id = 1
AND esrt.delete_flag = 1
AND esrt.customer_sp_code != '0110000006'
AND pdt.patient_category_id IN(1001,1002,1003,1004,1005,1012,1013)
AND edt.episode_type ='ipd'
AND esrt.generated_date BETWEEN '2017-06-04' AND '2017-06-04';
When I try to execute the same query from within a function, it takes around 35 to 40 seconds.
Please find the code below.
CREATE FUNCTION sb_pharmacy_account_summary_report_test1(START_DATE DATE, END_DATE DATE)
RETURNING VARCHAR(100), DECIMAL(10,2);

DEFINE v_sale_credit_amt DECIMAL(10,2);

BEGIN
    SELECT
        sum( (esrt.service_price) * (esrt.confirmed_qty + esrt.pharmacy_confirm_quantity) ) AS net_amount
    INTO
        v_sale_credit_amt
    FROM
        episode_service_rendered_tbl esrt,
        patient_details_tbl pdt,
        episode_details_tbl edt,
        ms_mat_service_header_sp_tbl mmshst
    WHERE
        esrt.patient_id = pdt.patient_id
        AND edt.patient_id = pdt.patient_id
        AND esrt.episode_id = edt.episode_id
        AND mmshst.material_service_sp_id = esrt.material_service_sp_id
        AND mmshst.bill_heads_id = 1
        AND esrt.delete_flag = 1
        AND esrt.customer_sp_code != '0110000006'
        AND pdt.patient_category_id IN (1001,1002,1003,1004,1005,1012,1013)
        AND edt.episode_type = 'ipd'
        AND esrt.generated_date BETWEEN START_DATE AND END_DATE;

    RETURN 'SALE CREDIT', '' WITH RESUME;
    RETURN 'IP SB Credit Amount', v_sale_credit_amt;
END
END FUNCTION;
Can someone tell me the reason for this time variation?
In very easy words:
When you create a function, the SQL is parsed and stored, together with its optimization information, in the database. When you call the function, the optimizer already knows about the SQL and simply executes it. So optimization is done only once, when you create the function.
When you run the SQL directly, the optimizer parses it, optimizes it and then executes it, every time you run it.
This explains the time difference.
I would say the difference in time is due to the parametrized query.
The first SQL has hard-coded date values; the one in the SPL routine has parameters. That may cause a different query plan (e.g. which index to follow) to be applied to the query in the SPL than to the one executed from Data Studio.
You can try getting the query plan (using SET EXPLAIN) from the first SQL and then use directives in the SPL to force the engine to use that same path.
Have a look at:
https://www.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.perf.doc/ids_prf_554.htm
It explains how to use optimizer directives to speed up queries.
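For example, a sketch of that approach (SET EXPLAIN and the {+ ... } directive syntax are real Informix features; the index name below is made up, so substitute one that actually exists on the table):

-- 1) Capture the plan of the fast, hard-coded query; it is written to sqexplain.out
SET EXPLAIN ON;

-- 2) Inside the SPL routine, pin the same access path with an inline directive,
--    e.g. forcing a (hypothetical) index on generated_date; remaining filters
--    omitted for brevity:
SELECT {+ INDEX(esrt esrt_generated_date_ix) }
    sum( (esrt.service_price) * (esrt.confirmed_qty + esrt.pharmacy_confirm_quantity) )
INTO v_sale_credit_amt
FROM episode_service_rendered_tbl esrt, patient_details_tbl pdt,
     episode_details_tbl edt, ms_mat_service_header_sp_tbl mmshst
WHERE esrt.patient_id = pdt.patient_id
  AND edt.patient_id = pdt.patient_id
  AND esrt.episode_id = edt.episode_id
  AND mmshst.material_service_sp_id = esrt.material_service_sp_id
  AND esrt.generated_date BETWEEN START_DATE AND END_DATE;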
All the examples I can find do something like this:
<g:paginate controller="Book" action="list" total="${bookInstanceTotal}" />
and the total attribute is "required" according to the documentation.
This works fine for very simple examples with small record sets (e.g. a few hundred rows).
If there are, say, 100k rows returned because the user entered wide search criteria, then I certainly don't want to read them all just to find the total for pagination; I don't want to transfer all 100k rows from the db to the Grails server, and I don't want to repeat this each time they hit the next page. I want to use MySQL's LIMIT/OFFSET (or similar) to bring back only the small number of rows required.
Is this possible, or do I really have to work out the total (by reading all the records, or by doing a separate count and then reading the records)?
I always prefer to use criteria for pagination.
An example of using criteria:
def c = Account.createCriteria()
def results = c.list(max: 10, offset: 10) {
    like("holderFirstName", "Fred%")
    and {
        between("balance", 500, 1000)
        eq("branch", "London")
    }
    order("holderLastName", "desc")
}
This example is taken from the Grails documentation, where you can read more about criteria.
With this criteria call you will get at most 10 results. But the important part is that you can also get the total count for the same criteria by using
results.totalCount
You don't read all the records from the db and load them in Grails to get the total; you load only the 10 (or however many) records you display on each page, plus one count query for the totalCount.
It works like this. Let's say you display 10 records per page and have 100K records in the db, and the UI passes max and offset params:
params.max = Math.min(params.int('max') ?: 10, 100) // default 10, capped at 100
params.offset = params.int('offset') ?: 0
def list = Domain.list(params)
When the max option is specified, Domain.list() returns a PagedResultList, whose getTotalCount() method fires a count query and returns the total count.
You then render the view like this:
render(view: "list", model: [list: list, totalCount: list.totalCount])
So you are not loading all the records from the database: you load just the 10 records for the page and execute one extra count query to get totalCount.
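On the GSP side, the totalCount from the model plugs straight into the paginate tag shown in the question, for example:

<g:paginate controller="book" action="list" total="${totalCount}" max="10" />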
Is there any way to limit the number of results that are returned by a CKQuery?
In SQL, it is possible to run a query like SELECT * FROM Posts LIMIT 10,15. Is there anything like the last part of the query, LIMIT 10,15 in CloudKit?
For example, I would like to load the first 5 results, then, once the user scrolls down, load the next 5 results, and so on. In SQL it would be LIMIT 0,5, then LIMIT 5,5, and so on.
One thing that would work is a loop, but it would be very expensive: I would have to select all of the values from iCloud and then loop through them to figure out which 5 to select. I anticipate a lot of different posts in the database, so I would like to load only the ones that are needed.
I'm looking for something like this:
var limit: NSLimitDescriptor = NSLimitDescriptor(5, 10)
query.limit = limit
CKContainer.defaultContainer().publicCloudDatabase.addOperation(CKQueryOperation(query: query))
// fetch values and return them
You submit your CKQuery to a CKQueryOperation. CKQueryOperation has the concepts of a cursor and a resultsLimit; together they let you fetch your query results in batches. As described in the documentation:
To perform a new search:
1) Initialize a CKQueryOperation object with a CKQuery object containing
the search criteria and sorting information for the records you want.
2) Assign a block to the queryCompletionBlock property so that you can
process the results and execute the operation.
If the search yields many records, the operation object may deliver a
portion of the total results to your blocks immediately, along with a
cursor for obtaining the remaining records. If a cursor is provided,
use it to initialize and execute a separate CKQueryOperation object
when you are ready to process the next batch of results.
3) Optionally configure the return results by specifying values for the
resultsLimit and desiredKeys properties.
4) Pass the query operation object to the addOperation: method of the
target database to execute the operation against that database.
So it looks like:
var q = CKQuery(/* ... */)
var qop = CKQueryOperation(query: q)
qop.resultsLimit = 5
qop.queryCompletionBlock = { (c: CKQueryCursor!, e: NSError!) -> Void in
    if nil != c {
        // There is more to fetch; create another operation from the cursor
        var newQop = CKQueryOperation(cursor: c!)
        newQop.resultsLimit = qop.resultsLimit
        newQop.queryCompletionBlock = qop.queryCompletionBlock
        // Hang on to it, if we must
        qop = newQop
        // Submit
        ....addOperation(qop)
    }
}
....addOperation(qop)
In most cases in ZF2 I would build a paginator like this:
public function fetchAll()
{
    $resultSet = $this->tableGateway->select();
    return $resultSet;
}

$iteratorAdapter = new \Zend\Paginator\Adapter\Iterator($this->getSampleTable()->fetchAll());
$paginator = new \Zend\Paginator\Paginator($iteratorAdapter);
The problem with this approach is that it does not limit the query result: fetchAll() returns all the rows, producing a lot of traffic between the db and PHP.
In pure PHP I would do pagination like this:
$PAGE_NUM = $_GET['page_num'];
$result_num = mysql_query("SELECT * FROM table WHERE some condition");
$result = mysql_query("SELECT * FROM table WHERE some condition LIMIT $PAGE_NUM, 20");
echo do_paginator($result_num, 20, $PAGE_NUM); // this just makes the classic < 1 2 3 4 5 >
The advantage is that I fetch from the db only the data I need for the current page, thanks to MySQL's LIMIT clause. This translates into good performance on queries that would otherwise return too many records.
Can I use the ZF2 paginator with an effective limit on the results, or do I need to write my own paginator?
Use the Zend\Paginator\Adapter\DbSelect adapter instead; it does exactly what you're asking for, applying LIMIT and OFFSET to the query you give it and fetching only those records.
Additionally this adapter does not fetch all records from the database in order to count them. Instead, the adapter manipulates the original query to produce a corresponding COUNT query. Paginator then executes that COUNT query to get the number of rows. This does require an extra round-trip to the database, but this is many times faster than fetching an entire result set and using count(), especially with large collections of data.
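A minimal sketch of that, assuming a configured Zend\Db adapter in $dbAdapter and an illustrative table name:

use Zend\Db\Sql\Select;
use Zend\Paginator\Adapter\DbSelect;
use Zend\Paginator\Paginator;

$select = new Select('my_table');              // illustrative table name
$select->where(['some_field' => $someValue]);  // your conditions

$paginator = new Paginator(new DbSelect($select, $dbAdapter));
$paginator->setCurrentPageNumber((int) $pageNum) // page number from the request
          ->setItemCountPerPage(20);             // becomes LIMIT 20 with the right OFFSET

foreach ($paginator as $row) {
    // only the current page's rows were fetched from the database
}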
I am working on an application built on ASP.NET MVC 3.0 that displays data in an MVC WebGrid.
I am using LINQ to get the records from the entities into an EntityViewModel, so I have to convert each record from entity to EntityViewModel.
I have 30K records to display in the grid, and for each record there are 3 flags for which it has to go to 3 other tables, check whether the record exists, set true or false accordingly, and display that in the grid.
I display 10 records at a time, but it is very slow because I fetch all the records and store them in my application.
Paging is in place (I mean to say: only 10 records are displayed in the WebGrid), but all the records get loaded into the application, which takes 15-20 seconds. I have checked where this time is being spent: it is in the "painting" step (where every record is compared with 3 other tables).
I have converted the LINQ query to SQL, and I can see the SQL query executes in under 2 seconds. From this I can confidently say I do not want to spend time on SQL indexing, as the query itself is fast enough.
I have two options to implement:
1) Caching for MVC
2) Paging (where I fetch only the first ten records).
I want to go with the paging technique for the performance improvement.
Now my question is: how do I pass the number 10 (the number of records) to the service method so that it brings back only ten records? And how do I get the next 10 records when the user clicks the next page?
I would post the code, but I cannot, as it contains some sensitive data.
Any example of how to tackle this situation? Many thanks.
If you're using SQL Server 2005 or later you could use ROW_NUMBER() in your stored procedure:
http://msdn.microsoft.com/en-us/library/ms186734(v=SQL.90).aspx
Or, if you just want to do it in LINQ, try the Skip() and Take() methods.
It's as simple as:
int page = 2;
int pageSize = 10;
var pagedStuff = query.Skip((page - 1) * pageSize).Take(pageSize);
You should always, always, always limit the number of rows you get from the database. Unbounded reads kill applications: 30k turns into 300k, and then you are just destroying your SQL Server.
Jfar is on the right track with .Skip() and .Take(). The LINQ to SQL engine (and most entity frameworks) will convert these to SQL that returns a limited result set. However, this doesn't preclude caching the results as well; I recommend doing that too. The fastest trip to SQL Server is the one you don't have to take. :) I do something like this, where my controller method handles paged or un-paged results and caches whatever comes back from SQL:
[AcceptVerbs("GET")]
[OutputCache(Duration = 360, VaryByParam = "*")]
public ActionResult GetRecords(int? page, int? items)
{
int limit = items ?? defaultItemsPerPage;
int pageNum = page ?? 0;
if (pageNum <= 0) { pageNum = 1; }
ViewBag.Paged = (page != null);
var records = null;
if (page != null)
{
records = myEntities.Skip((pageNum - 1) * limit).Take(limit).ToList();
}
else
{
records = myEntities.ToList();
}
return View("GetRecords", records);
}
If you call it with no params, you get the entire result set (/GetRecords). Calling it with params gets you the restricted set (/GetRecords?page=3&items=25).
You could extend this method further by adding .Contains() or .StartsWith() filtering.
If you do decide to go the custom stored procedure route, I'd recommend using TOP and ROW_NUMBER() to restrict results rather than a temp table.
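For example, a minimal ROW_NUMBER() paging sketch; it assumes the same @SearchStr/@Page/@RecsPerPage parameters as the procedure below, and the People/Surname names are illustrative:

WITH Numbered AS
(
    SELECT p.*, ROW_NUMBER() OVER (ORDER BY p.Surname) AS RowNum
    FROM People p
    WHERE p.Surname LIKE '%' + @SearchStr + '%'
)
SELECT * FROM Numbered
WHERE RowNum BETWEEN (@Page - 1) * @RecsPerPage + 1 AND @Page * @RecsPerPage
ORDER BY RowNum;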
Personally I would create a custom stored procedure to do this and then call it through LINQ to SQL, e.g.:
CREATE PROCEDURE [dbo].[SearchData]
(
    @SearchStr NVARCHAR(50),
    @Page int = 1,
    @RecsPerPage int = 50,
    @rc int OUTPUT
)
AS
SET NOCOUNT ON
SET FMTONLY OFF

DECLARE @TempFound TABLE
(
    UID int IDENTITY NOT NULL,
    PersonId UNIQUEIDENTIFIER
)

INSERT INTO @TempFound
(
    PersonId
)
SELECT PersonId FROM People WHERE Surname LIKE '%' + @SearchStr + '%'

SET @rc = @@ROWCOUNT

-- Calculate the final offset for paging --
DECLARE @FirstRec int, @LastRec int
SELECT @FirstRec = (@Page - 1) * @RecsPerPage
SELECT @LastRec = (@Page * @RecsPerPage + 1)

-- Final select --
SELECT p.* FROM People p INNER JOIN @TempFound tf
    ON p.PersonId = tf.PersonId
WHERE (tf.UID > @FirstRec) AND (tf.UID < @LastRec)
The @rc parameter is the total number of records found.
You obviously have to adapt it to your own table, but it should run extremely fast.
To bind it to an object in LINQ to SQL, you just have to make sure that the final select's fields match the fields of the object it is to be bound to.
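Calling it from LINQ to SQL could then look roughly like this (MyDataContext is a made-up name; the designer generates one method per mapped procedure and surfaces OUTPUT parameters as ref parameters):

using (var db = new MyDataContext())
{
    int? totalCount = 0;
    // The generated method returns the page of rows; @rc comes back via ref.
    var people = db.SearchData("smith", 2, 50, ref totalCount).ToList();
    // totalCount now holds the total match count for building pager links.
}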