LINQ, Skip and Take against Azure SQL Database Not Working - asp.net-mvc

I'm pulling a paged dataset on an ASP.NET MVC3 application which uses jQuery to get data for endless scroll paging via a $.ajax call. The backend is an Azure SQL database. Here is the code:
[Authorize]
[OutputCache(Duration = 0, NoStore = true)]
public PartialViewResult Search(int page = 1)
{
    int batch = 10;
    int fromRecord = 1;
    int toRecord = batch;
    if (page != 1)
    {
        //note these are correctly passed and set
        toRecord = (batch * page);
        fromRecord = (toRecord - (batch - 1));
    }
    IQueryable<TheTable> query;
    query = context.TheTable.Where(m => m.Username == HttpContext.User.Identity.Name)
        .OrderByDescending(d => d.CreatedOn)
        .Skip(fromRecord).Take(toRecord);
    //this should always be the batch size (10)
    //but seems to concatenate the previous results ???
    int count = query.ToList().Count();
    //results
    //call #1, count = 10
    //call #2, count = 20
    //call #3, count = 30
    //etc...
    PartialViewResult newPartialView = PartialView("Dashboard/_DataList", query.ToList());
    return newPartialView;
}
The data returned from each jQuery $.ajax call continues to GROW on each subsequent call rather than returning only 10 records per call, so the results contain all of the earlier calls' data as well. I've also added 'cache: false' to the $.ajax call. Any ideas on what is going wrong here?

The values you're passing to Skip and Take are wrong.
The argument to Skip should be the number of records you want to skip, which should be 0 on the first page.
The argument to Take should be the number of records you want to return, which will always be equal to batch.
Your code needs to be:
int batch = 10;
int fromRecord = 0;
if (page != 1)
{
    fromRecord = batch * (page - 1);
}

query = context.TheTable.Where(m => m.Username == HttpContext.User.Identity.Name)
    .OrderByDescending(d => d.CreatedOn)
    .Skip(fromRecord)
    .Take(batch);

Related

Reactive way of implementing 'standard pagination'

I am just starting with Spring Reactor and want to implement something that I would call 'standard pagination'; I don't know if there is a technical term for this. Basically, no matter what start and end date is passed to the method, I want to return the same amount of data, evenly distributed.
This will be used for some chart drawing in the future.
I figured out a rough version with an algorithm that does exactly that; unfortunately, before I can filter the results I need to either count() or take the last index() and block to get this number.
This block is surely not the reactive way to do this, and it also makes the flux call the DB twice for the data (or am I missing something?).
Is there any operator that can help me get the result from count() somehow down the stream for further usage? It would need to be computed anyway before the stream can be processed, but it would get rid of calling the DB twice.
I am using the MongoDB reactive driver.
Flux<StandardEntity> results = Flux.from(
        mongoCollectionManager.getCollection(channel)
            .find(and(gte("lastUpdated", begin), lte("lastUpdated", end))))
    .map(d -> new StandardEntity(d.getString("price"), d.getString("lastUpdated")));

Long lastIndex = results
    .count()
    .block();

final double standardPage = 10.0D;
final double step = lastIndex / standardPage;
final double[] counter = {0.0D};

return
    results
        .take(1)
        .mergeWith(
            results
                .skip(1)
                .filter(e -> {
                    if (lastIndex > standardPage)
                        if (counter[0] >= step) {
                            counter[0] = counter[0] - step + 1;
                            return true;
                        } else {
                            counter[0] = counter[0] + 1;
                            return false;
                        }
                    else
                        return true;
                }));

Neo4j : Difference between cypher execution and Java API call?

Neo4j : Enterprise version 3.2
I see a tremendous difference between the following two calls in terms of speed. Here are the settings and the query/API.
Page cache: 16g | Heap: 16g
Number of rows/nodes -> 600K
Cypher code (ignore syntax errors, if any) | Time taken: 50 sec.
using periodic commit 10000
load csv with headers from 'file:///xyx.csv' as row
create (n:ObjectTension) set n = row
From Java (session pool, with 15 sessions at a time as an example):
Thread_1 : Time Taken : 8 sec / 10K
Map<String, Object> pList = new HashMap<String, Object>();
try (Transaction tx = Driver.session().beginTransaction()) {
    for (int i = 0; i < 10000; i++) {
        pList.put(i, i * i);
        params.put("props", pList);
        String query = "Create(n:Label {props})";
        // String query = "create(n:Label) set n = {props})";
        tx.run(query, params);
    }
Thread_2 : Time taken is 9 sec / 10K
Map<String, Object> pList = new HashMap<String, Object>();
try (Transaction tx = Driver.session().beginTransaction()) {
    for (int i = 0; i < 10000; i++) {
        pList.put(i, i * i);
        params.put("props", pList);
        String query = "Create(n:Label {props})";
        // String query = "create(n:Label) set n = {props})";
        tx.run(query, params);
    }
.
.
.
Thread_3 : Basically the above code is reused. It's just an example.
Thread_N where N = (600K / 10K)
Hence, the overall time taken is around 2 ~ 3 mins.
The questions are the following:
How does the CSV load work internally? Does it open a single session with multiple transactions within?
Or
Does it create multiple sessions based on the "using periodic commit 10000" parameter, so that 600K/10000 = 60 sessions? etc.
What's the best way to write via Java?
The idea is to achieve the same write performance via Java as with the CSV load, which loads 12000 nodes in ~5 seconds, or even better.
Your Java code is doing something very different from your Cypher code, so it really makes no sense to compare processing times.
You should change your Java code to read from the same CSV file. File IO is fairly expensive, but your Java code is not doing any.
Also, whereas your pure Cypher query is creating nodes with a fixed (and presumably relatively small) number of properties, your Java pList is growing in size with every loop iteration -- so each Java loop creates nodes with anywhere between 1 and 10K properties! This may be the main reason why your Java code is much slower.
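For reference, here is a rough (untested) sketch of what reading the same CSV file from Java could look like, using the same Driver instance as in your snippets and committing every 10K rows the way USING PERIODIC COMMIT does. The file name, comma delimiter, and header row are assumptions taken from your LOAD CSV query:
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;

import org.neo4j.driver.v1.Session;
import org.neo4j.driver.v1.Transaction;

// Rough sketch (untested): read the same CSV that LOAD CSV uses and create one node per row,
// committing every 10K rows. Assumes a comma-separated file with a header row and no quoted commas.
try (Session session = Driver.session();
     BufferedReader reader = new BufferedReader(new FileReader("xyx.csv"))) {
    String[] header = reader.readLine().split(",");   // column names become property keys
    Transaction tx = session.beginTransaction();
    int rowsInTx = 0;
    String line;
    while ((line = reader.readLine()) != null) {
        String[] cols = line.split(",");
        Map<String, Object> props = new HashMap<String, Object>();
        for (int i = 0; i < header.length && i < cols.length; i++) {
            props.put(header[i], cols[i]);
        }
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("props", props);
        tx.run("CREATE (n:ObjectTension) SET n = {props}", params);
        if (++rowsInTx == 10000) {   // commit every 10K rows, like USING PERIODIC COMMIT 10000
            tx.success();
            tx.close();
            tx = session.beginTransaction();
            rowsInTx = 0;
        }
    }
    tx.success();   // commit whatever is left in the last batch
    tx.close();
}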
[UPDATE 1]
If you want to ignore the performance difference between using and not using a CSV file, the following (untested) code should give you an idea of what similar logic would look like in Java. In this example, the i loop assumes that your CSV file has 10 columns (you should adjust the loop to use the correct column count). Also, this example gives all the nodes the same properties, which is OK as long as you have not created a contrary uniqueness constraint.
Session session = Driver.session();

Map<String, Object> pList = new HashMap<String, Object>();
for (int i = 0; i < 10; i++) {
    pList.put(Integer.toString(i), i * i);
}
Map<String, Map> params = new HashMap<String, Map>();
params.put("props", pList);

String query = "create (n:Label) set n = {props}";
for (int j = 0; j < 60; j++) {
    try (Transaction tx = session.beginTransaction()) {
        for (int k = 0; k < 10000; k++) {
            tx.run(query, params);
        }
        tx.success();   // mark the transaction as successful so it commits on close
    }
}
[UPDATE 2 and 3, copied from chat and then fixed]
Since the Cypher planner is able to optimize, the actual internal logic is probably a lot more efficient than the Java code I provided (above). If you want to also optimize your Java code (which may be closer to the code that Cypher actually generates), try the following (untested) code. It sends 10000 rows of data in a single run() call, and uses the UNWIND clause to break it up into individual rows on the server.
Session session = Driver.session();

Map<String, Integer> pList = new HashMap<String, Integer>();
for (int i = 0; i < 10; i++) {
    pList.put(Integer.toString(i), i * i);
}
List<Map<String, Integer>> rows = Collections.nCopies(10000, pList);
Map<String, List> params = new HashMap<String, List>();
params.put("rows", rows);

String query = "UNWIND {rows} AS row CREATE (n:Label) SET n = row";
for (int j = 0; j < 60; j++) {
    try (Transaction tx = session.beginTransaction()) {
        tx.run(query, params);
        tx.success();   // mark the transaction as successful so it commits on close
    }
}
You can also try creating the nodes using the Java API, instead of relying on Cypher:
createNode - http://neo4j.com/docs/java-reference/current/javadocs/org/neo4j/graphdb/GraphDatabaseService.html#createNode-org.neo4j.graphdb.Label...-
setProperty - http://neo4j.com/docs/java-reference/current/javadocs/org/neo4j/graphdb/PropertyContainer.html#setProperty-java.lang.String-java.lang.Object-
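For example, a minimal (untested) sketch with the embedded Java API could look like the following. Note that it assumes an embedded GraphDatabaseService instance named graphDb, which is a different deployment model from the driver-based code above:
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;

// Minimal sketch (untested): create 10K nodes in one transaction with the embedded Java API.
// graphDb is assumed to be an embedded GraphDatabaseService instance.
try (Transaction tx = graphDb.beginTx()) {
    for (int i = 0; i < 10000; i++) {
        Node n = graphDb.createNode(Label.label("Label"));
        n.setProperty("value", i * i);
    }
    tx.success();   // mark the transaction as successful so it commits on close
}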
Also, as the previous answer mentioned, the props variable has different values in your two cases.
Additionally, notice that on every iteration you are re-parsing the query (String query = "Create(n:Label {props})";), unless it is optimized away by Neo4j itself.

I need to get more than 100 pages in my query

I want to get all the video information possible from YouTube for my project. I know that the page limit is 100.
Here is my code:
ArrayList<String> videos = new ArrayList<>();
int videosTotales = 0;
int i = 1;
String peticion = "http://gdata.youtube.com/feeds/api/videos?category=Comedy&alt=json&max-results=50&page=" + i;
URL oracle = new URL(peticion);
URLConnection yc = oracle.openConnection();
BufferedReader in = new BufferedReader(new InputStreamReader(
        yc.getInputStream()));
String inputLine = in.readLine();
while (in.readLine() != null)
{
    inputLine = inputLine + in.readLine();
}
System.out.println(inputLine);
JSONObject jsonObj = new JSONObject(inputLine);
JSONObject jsonFeed = jsonObj.getJSONObject("feed");
JSONArray jsonArr = jsonFeed.getJSONArray("entry");
while (i <= 100)
{
    for (int j = 0; j < jsonArr.length(); j++) {
        videos.add(jsonArr.getJSONObject(j).getJSONObject("id").getString("$t"));
        System.out.println("Numero " + videosTotales + jsonArr.getJSONObject(j).getJSONObject("id").getString("$t"));
        videosTotales++;
    }
    i++;
}
When the program finishes, I have 5000 videos per category, but I need much more, much much more, and the limit is page = 100.
So, how can I get more than 10 million videos?
Thank you!
Are those 5000 also unique IDs?
I see the use of max-results=50, but not a start-index parameter in your URL.
There is a limit on the results you can get per request. There is also a limit on the number of requests that you can send within some time interval. By checking the status code of the response and any error message you can find these limits, as they may change over time.
Besides the category parameter, use some other parameters too. For instance, you may vary the q parameter (used with some keywords) and/or the order parameter to get a different result set.
See the documentation for the available parameters.
Note that you are using API version 2, which is deprecated. There is an API version 3.
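For example, here is a rough (untested) sketch of paging through the v2 feed with the start-index parameter mentioned above; the class name FeedPager, the category, and the loop bound are only for illustration:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

// Rough sketch (untested): walk the deprecated gdata v2 feed with start-index,
// advancing by max-results on each request. start-index is 1-based.
public class FeedPager {
    public static void main(String[] args) throws Exception {
        int maxResults = 50;
        for (int startIndex = 1; startIndex <= 951; startIndex += maxResults) {
            String peticion = "http://gdata.youtube.com/feeds/api/videos?category=Comedy&alt=json"
                    + "&max-results=" + maxResults + "&start-index=" + startIndex;
            URLConnection yc = new URL(peticion).openConnection();
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(yc.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }
            // parse body with JSONObject/JSONArray as in the question; check the status code
            // and error message to see when the result window or quota limit is reached
            System.out.println("Fetched page starting at index " + startIndex);
        }
    }
}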

Is it possible to use an offset when using the Symfony sfPager class?

I would like to disregard the first n items in a pager list of items, i.e. they are being used elsewhere in the design.
So my pager list needs to be as such:
Page 1: Items 8 - 17
Page 2: Items 18 - 27
Page 3: Items 28 - 37
...
However, setting an offset or limit in the criteria object does nothing. I presume they are used by the pager itself.
Is it possible to add an offset to the pager class in some other way?
OK, I have got around the problem by modifying sfPropelPager.class.php and putting it into a new class file, which I have named atPropelPagerOffset.class.php.
It works in exactly the same way apart from taking an extra parameter, $offset.
So the top of the file looks like this:
protected
    $criteria = null,
    $peer_method_name = 'doSelect',
    $peer_count_method_name = 'doCount',
    $offset = 0;

public function __construct($class, $maxPerPage = 10, $offset = 0)
{
    parent::__construct($class, $maxPerPage);
    $this->setCriteria(new Criteria());
    $this->tableName = constant($class.'Peer::TABLE_NAME');
    $this->offset = $offset;
}
Then I have made this tiny change around line 50:
$c->setOffset($offset+$this->offset);
Works a treat!
A simpler solution would be a custom select method:
$pager->setPeerMethod('doSelectCustom');
and then put your logic in the model Peer Class:
public static function doSelectCustom($c)
{
    $c2 = clone $c;
    $offset = $c2->getOffset();
    $limit = $c2->getLimit();
    $someCustomVar = someClass::someMethod();
    if ($offset == 0) // we are in the first page
    {
        $c2->setLimit($limit - $someCustomVar);
        $c2->add(self::SOMECOLUMN, false);
    }
    else
    {
        $c2->setOffset($offset - $someCustomVar);
    }
    return self::doSelectRS($c2); // or doSelect if you wanna retrieve objects
}

Amcharts rendering data incorrectly

I have a setup where I am using amCharts, which is fed data via appendData from an AJAX call. The call goes to a URL which simply renders Time.now as the X value and 8 lines using the function 2cos(x/2) + 2n (n is the line number). The AJAX request is made every 1 second.
The backend is always correct and always returns a single point, unless it is asked for a duplicate X, in which case it throws an error. The error causes the request not to complete and therefore appendData is not called.
Does anybody have any idea what is going wrong with amCharts? It seems to be an issue only with appendData (which I need in order to simulate a sliding window).
The JavaScript code is below. It assumes that the page creates a line chart with 8 graphs and passes it to setup_chart_loader. Netcordia.rapid_poller.updateChart is used to update the chart via the AJAX request.
Ext.ns("Netcordia.rapid_poller");
Netcordia.rapid_poller.refresh_rate = 1; //seconds
Netcordia.rapid_poller.pause = false; //causes the AJAX to suspend
Netcordia.rapid_poller.chart = null;
Netcordia.rapid_poller.stop = false;
/* This function does everything that is required to get the chart data correct */
Netcordia.rapid_poller.setup_chart_loader = function(chart){
assert(Netcordia.rapid_poller.displaySizeInMinutes,"No display size");
assert(Netcordia.rapid_poller.delta_url, "Data URL is empty");
assert(Netcordia.rapid_poller.delta_params, "No Data params");
if(typeof(chart) !== 'object'){
chart = document.getElementById(chart);
}
Netcordia.rapid_poller.chart = chart;
// 5 seconds raw polling
var maxPoints = Netcordia.rapid_poller.displaySizeInMinutes * 60 / 5;
var count = 0;
var lastUpdate = '';
debug("max number of points: "+maxPoints);
debug('creating updateChart function');
Netcordia.rapid_poller.updateChart = function(){
debug("Sending Data request");
var params = {last: lastUpdate, max: 1}; //maxPoints};
//I have to do this otherwise amcharts get a lot of data and only renders
// one item, then the counts is off
if(lastUpdate === ''){params['max'] = maxPoints;}
if (Netcordia.rapid_poller.pause){
alert("pausing");
params['historical'] = 1;
params['max'] = maxPoints;
}
Ext.apply(params, Netcordia.rapid_poller.delta_params);
//this might need to be moved to within the Ajax request
// incase things start piling up
if(!Netcordia.rapid_poller.stop){
setTimeout(Netcordia.rapid_poller.updateChart,1000*Netcordia.rapid_poller.refresh_rate);
} else {
debug("skipping next poll");
return;
}
Ext.Ajax.request({
url: Netcordia.rapid_poller.delta_url,
baseParams: Netcordia.rapid_poller.delta_params,
params: params,
success: function(response){
//if(Netcordia.rapid_poller.pause){
// debug("Data stopped");
// return;
//}
var json = Ext.util.JSON.decode(response.responseText);
lastUpdate = json.lastUpdate;
if( json.count === 0 ){
debug("no data to append");
return;
}
debug("appending "+json.count);
var remove = (count + json.count) - maxPoints;
if(remove <= 0){ remove = 0; }
count += json.count;
if(count > maxPoints){ count = maxPoints; }
debug("removing "+remove);
debug("count: "+count);
if(Netcordia.rapid_poller.pause){
alert("Pausing for historical");
//append a zero point and delete the existing data
// amcharts can leak extra points onto the screen so deleting
// twice the number is
chart.appendData("00:00:00;0;0;0;0;0;0;0;0",(count*2).toString());
count = json.count;
remove = 1;
Netcordia.rapid_poller.stop = true;
}
chart.appendData(json.lines.toString(),remove.toString());
}
});
};
};
The Rails code that returns the data is as follows:
def get_delta
  max = 1
  begin
    current = Time.parse(params[:last])
  rescue
    current = Time.now
  end
  if params[:historical]
    max = params[:max].to_i || 10
    current = Time.at(current.to_i - (max/2))
  end
  logger.info(current.to_i)
  logger.info(max)
  n = current.to_i
  m = n + max - 1
  data = (n..m).collect do |x|
    logger.info "For Point: #{x}"
    point = Math.cos(x/2)
    data = [Time.at(x).strftime("%H:%M:%S")]
    for i in (1..8)
      data.push(2*point + (2*i))
    end
    data.join(";")
  end
  render :json => {count: data.size, lastUpdate: Time.now.strftime('%Y-%m-%d %H:%M:%S'), lines: data.join("\n")}
end
This seems to be a bug in amCharts itself.
This forum post has the developer's answer.
