How to get the last offset in Jira Active Objects pagination?

public List<Forest> getForest(int limit, int offset) {
    try {
        return newArrayList(ao.find(Forest.class, Query.select().order("ID DESC").limit(limit).offset(offset)));
    } catch (Exception e) {
        return null;
    }
}
The above code works fine, but it does not return the total number of rows, so I cannot compute the last page index. I have already implemented First, Next & Previous.

I guess you need to "count" your records:
Query query = Query.select().order("ID DESC");
int count = ao.count(Forest.class, query);
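Once you have the count, the offset of the last page follows from the page size. A minimal sketch, assuming limit is the same page size passed to getForest:
// Round the last row index down to a page boundary.
int lastPageOffset = (count == 0) ? 0 : ((count - 1) / limit) * limit;
List<Forest> lastPage = getForest(limit, lastPageOffset);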

Related

calculate column total in Vaadin 7 Grid

In Vaadin 7, is there an easy way to calculate the numeric total for selected columns? I know how to do it for Vaadin 8, defined here. But since Vaadin 7 uses a container, I am trying to think of the best way to do it. Currently, this is the best way I can think of, based on the documentation here. Code is a rough draft, so I expect there are some syntax problems. Treat it more as pseudo code, if possible.
Map<Object, Double> totals = new HashMap<>();
for (Iterator<?> i = container.getItemIds().iterator(); i.hasNext();) {
    Object itemId = i.next();
    Item item = container.getItem(itemId);
    for (Object totalCol : totalColumns) {
        // getItemProperty() returns a Property; we need its value.
        Object columnVal = item.getItemProperty(totalCol).getValue();
        Double total = totals.get(totalCol);
        if (!(total instanceof Double))
            total = 0.0;
        if (columnVal instanceof Double) {
            total += (Double) columnVal;
        } else if (columnVal instanceof Long) {
            total += (Long) columnVal;
        } else if (columnVal instanceof Integer) {
            total += (Integer) columnVal;
        } else if (columnVal instanceof String) {
            try {
                Long value = Long.parseLong((String) columnVal);
                total += value;
            } catch (NumberFormatException e) {
                try {
                    Double value = Double.parseDouble((String) columnVal);
                    total += value;
                } catch (NumberFormatException e1) {
                    // Not numeric; skip this value.
                }
            }
        }
        totals.put(totalCol, total);
    }
}
/* At this point, go through the totals Map and set each value on the correct footer
 * column with the correct text formatting. This part is easy, and clearly documented,
 * so leaving it off this code example.
 */
By the way, the above idea works; my question is more about whether this is the best approach or not.
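For reference, the footer step the poster left out might look roughly like this in Vaadin 7; a sketch assuming a Grid named grid and plain decimal formatting:
// Write each computed total into an appended footer row; the column's
// property id doubles as the footer cell key.
Grid.FooterRow footer = grid.appendFooterRow();
for (Map.Entry<Object, Double> entry : totals.entrySet()) {
    footer.getCell(entry.getKey()).setText(String.format("%.2f", entry.getValue()));
}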

Safe way to access List index

I'm a newbie to Dart.
I'm having trouble finding an easy-to-read way to "safely" access a List element at an arbitrary index:
final List<String> myList = <String>[];
myList.add("something");
// ...
String? myGetter(int index) {
  // "heavy" way
  if (index < myList.length) {
    return myList[index];
  }
  return null;
}
If I go with the regular [index] or elementAt(index) and the index is out of bounds, it throws a RangeError.
Is there a method that returns null when the index cannot be reached?
Sorry if this is a double post, but I tried to find the info without any success. Also, I'm not sure whether there is an (un)official Slack/Discord for asking these kinds of "easy" questions.
Dart lists do not allow invalid indices. There is no built-in way to get a null when trying. Not in the platform libraries.
You can create your own helper function (like you already do):
T? tryGet<T>(List<T> list, int index) =>
    index < 0 || index >= list.length ? null : list[index];
(Remember to check for negative indices too).
As suggested, you can also add it as an extension method:
extension ListGetExtension<T> on List<T> {
  T? tryGet(int index) =>
      index < 0 || index >= this.length ? null : this[index];
}
which may make it more pleasant to work with.
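For example, combined with the ?? operator (reusing the question's myList):
final value = myList.tryGet(5) ?? "fallback";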
(I recommend against doing something bad and then catching the error, at least when you can easily check up-front whether it's bad or not).
You can define an extension method that catches the RangeError and returns null:
void main() {
  print([1, 2].get(3)); // displays null
}

extension SafeLookup<E> on List<E> {
  E? get(int index) {
    try {
      return this[index];
    } on RangeError {
      return null;
    }
  }
}
You can try this:
void main() {
  List<int> teste = [1, 2, 3, 4];
  print(teste.get(1)); // 2
}

extension ListExtension<E> on List<E> {
  E? get(int index) {
    // Return the element only when the index is within bounds.
    return index >= 0 && index < length ? this[index] : null;
  }
}
According to the documentation:
throws a RangeError if index is out of bounds.
So you can use a try-catch block:
String? myGetter(int index) {
  try {
    return myList[index];
  } on RangeError {
    // Called when the index is out of bounds.
    return null;
  }
}
If you want to be extra cautious, I guess you could put a generic catch at the end (to catch all kinds of throws that are not a RangeError), but in a simple getter like this I think that would not be necessary:
[...]
} catch (e) {
  // No specified type; handles all other types of errors/exceptions.
  return null;
}
[...]

Keeping data between ActionResults

I am populating a list based on data returned from a stored procedure, this first occurs in the SpecificArea ActionResult:
public ActionResult SpecificArea(ModelCallDetails call, int id = 0)
{
    ReturnSpecificAreas(call, id);
    return PartialView("SpecificArea", listCallDetails);
}
When the list is displayed, each row is an action link, which sends the data to SpecificAreaWorker:
[HttpGet]
public ActionResult SpecificAreaWorker(ModelCallDetails call, int id)
{
    TempData["StringOfIds"] = StringOfIds;
    ReturnSpecificAreas(call, id);
    if (ResponseMessage == "Successful")
    {
        return PartialView("SpecificArea", listCallDetails);
    }
    else
    {
        return RedirectToAction("ViewCall");
    }
}
I want to collect the id of each row that is clicked and store the ids in a list in the model, so that I can build a string of ids. However, each time a row in the table is clicked the model is refreshed, and I no longer have the list of ids.
public void ReturnSpecificAreas(ModelCallDetails call, int id)
{
    SelectedAffectedServiceID = id;
    call.AffectedServiceList.Add(SelectedAffectedServiceID);
    foreach (int item in call.AffectedServiceList)
    {
        if (TempData["StringOfIds"] != null)
        {
            StringOfIds = TempData["StringOfIds"].ToString();
            StringOfIds += string.Join(",", call.AffectedServiceList.ToArray());
        }
        else
        {
            StringOfIds += string.Join(",", call.AffectedServiceList.ToArray());
        }
    }
}
I have tried to maintain the data in TempData, but couldn't get this to work. Will the TempData refresh each time the action link is clicked? Is there a better way to achieve this?
I believe you are using MVC5? If so, use System.Web.HttpContext, which gives you the current request context.
To save:
System.Web.HttpContext.Current.Application["StringOfIds"] = StringOfIds; // Saves globally
System.Web.HttpContext.Current.Session["StringOfIds"] = StringOfIds; // Saves per session
To retrieve:
StringOfIds = (string) System.Web.HttpContext.Current.Application["StringOfIds"]; // Retrieves from global memory
StringOfIds = (string) System.Web.HttpContext.Current.Session["StringOfIds"]; // Retrieves from session memory
Good luck.

Facing Critical Performance issue in Primefaces 4 & 5

I am working on a project that deals with heavy data sets. I am using Primefaces 4 & 5, Spring and Hibernate. I have to display very large data sets, at minimum 3000 rows with 100 columns, with features such as sorting, filtering, row expansion, etc. My problem is that my application takes 8 to 10 minutes to render the whole page, and the other features (sorting, filtering) also take a lot of time. My client is not happy at all. I could use pagination for this, but my client does not want paging, so I decided to use liveScroll. Unfortunately, I failed to implement liveScroll with or without lazy loading, as there were bugs in PF regarding liveScroll. I also posted this question here earlier, but no solution was found.
This performance issue is very critical and a show stopper for me. To show 3000 rows with 100 columns, the size of the page being loaded is ~10 MB.
I have measured the time consumed by the various JSF lifecycle phases. Using a PhaseListener, I figured out that it is the browser that takes the time, parsing the response rendered by JSF; completing all the phases takes my application only 25 seconds.
At a minimum, I want to improve the performance of my project. Please share any ideas or suggestions that could help overcome this problem.
Note: There are no database manipulations in the getters and setters, and no complex business logic.
UPDATE:
This is my datatable without lazy loading:
<p:dataTable
    style="width:100%"
    id="cdTable"
    selection="#{controller.selectedArray}"
    resizableColumns="true"
    draggableColumns="true"
    var="cd"
    value="#{controller.cdDataModel}"
    editable="true"
    editMode="cell"
    selectionMode="multiple"
    rowSelectMode="add"
    scrollable="true"
    scrollHeight="650"
    rowKey="#{cd.id}"
    rowIndexVar="rowIndex"
    styleClass="screenScrollStyle"
    liveScroll="true"
    scrollRows="50"
    filterEvent="enter"
    widgetVar="dt4"
>
Here everything works except filtering: once I filter, the first page is displayed, but I can no longer sort or live-scroll the datatable. Note that I tested this in Primefaces 5.
Second approach
With lazy loading and the same datatable:
1) When I add rows="100", liveScroll happens, but there are problems with row editing and row expansion; filtering & sorting work.
2) When I remove rows, liveScroll works with row editing, row expansion, etc., but filtering & sorting don't work.
My LazyDataModel is as follows:
public class MyDataModel extends LazyDataModel<YData>
{
    private static final long serialVersionUID = 1L;

    private List<YData> datasource;

    // Constructor names must match the class name to compile.
    public MyDataModel() {
    }

    public MyDataModel(List<YData> datasource) {
        this.datasource = datasource;
    }

    @Override
    public List<YData> load(int first, int pageSize,
            List<SortMeta> multiSortMeta, Map<String, Object> filters) {
        System.out.println("multisort wala load");
        return super.load(first, pageSize, multiSortMeta, filters);
    }
    @Override
    public YData getRowData(String rowKey) {
        // In a real app, a more efficient way, like a query by rowKey, should be
        // implemented to deal with huge data
        // List<YData> yList = (List<YData>) getWrappedData();
        for (YData y : datasource)
        {
            System.out.println("datasource :" + datasource.size());
            if (y.getId() != null)
            {
                // Compare boxed Longs with equals(); == only compares references.
                if (y.getId().equals(Long.valueOf(rowKey)))
                {
                    return y;
                }
            }
        }
        return null;
    }
    @Override
    public Object getRowKey(YData y) {
        return y.getId();
    }

    @Override
    public void setRowIndex(int rowIndex) {
        /*
         * The following is in the ancestor (LazyDataModel):
         * this.rowIndex = rowIndex == -1 ? rowIndex : (rowIndex % pageSize);
         */
        if (rowIndex == -1 || getPageSize() == 0) {
            super.setRowIndex(-1);
        }
        else {
            super.setRowIndex(rowIndex % getPageSize());
        }
    }
    @Override
    public List<YData> load(int first, int pageSize, String sortField, SortOrder sortOrder, Map<String, Object> filters) {
        List<YData> data = new ArrayList<YData>();
        System.out.println("sort order : " + sortOrder);
        // filter
        for (YData yInfo : datasource) {
            boolean match = true;
            for (Iterator<String> it = filters.keySet().iterator(); it.hasNext();) {
                try {
                    String filterProperty = it.next();
                    String filterValue = String.valueOf(filters.get(filterProperty));
                    Field yField = yInfo.getClass().getDeclaredField(filterProperty);
                    yField.setAccessible(true);
                    String fieldValue = String.valueOf(yField.get(yInfo));
                    if (filterValue == null || fieldValue.startsWith(filterValue)) {
                        match = true;
                    }
                    else {
                        match = false;
                        break;
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                    match = false;
                }
            }
            if (match) {
                data.add(yInfo);
            }
        }
        // sort
        if (sortField != null) {
            Collections.sort(data, new LazySorter(sortField, sortOrder));
        }
        int dataSize = data.size();
        this.setRowCount(dataSize);
        // paginate
        if (dataSize > pageSize) {
            try {
                return data.subList(first, first + pageSize);
            }
            catch (IndexOutOfBoundsException e) {
                return data.subList(first, first + (dataSize % pageSize));
            }
        }
        else {
            return data;
        }
    }
    @Override
    public int getRowCount() {
        return super.getRowCount();
    }
}
I am fed up with these issues; this has become a show stopper for me. I have even tried Primefaces 5.
If your data is loaded from a database, I suggest you build a better LazyDataModel, like:
public class ElementiLazyDataModel<T> extends LazyDataModel<T> implements Serializable {

    private Service<T> abstractFacade;

    public ElementiLazyDataModel(Service<T> abstractFacade) {
        this.abstractFacade = abstractFacade;
    }

    public Service<T> getAbstractFacade() {
        return abstractFacade;
    }

    public void setAbstractFacade(Service<T> abstractFacade) {
        this.abstractFacade = abstractFacade;
    }

    @Override
    public List<T> load(int first, int pageSize, String sortField, SortOrder sortOrder, Map<String, Object> filters) {
        PaginatedResult<T> pr = abstractFacade.findRange(new int[]{first, first + pageSize}, sortField, sortOrder, filters);
        setRowCount(new Long(pr.getTotalItems()).intValue());
        return pr.getItems();
    }
}
The service is some kind of backend communication (like an EJB) injected into the ManagedBean that uses this model.
The service method for pagination may look like this:
@Override
public PaginatedResult<T> findRange(int[] range, String sortField, SortOrder sortOrder, Map<String, Object> filters) {
    final Query query = getEntityManager().createQuery("select x from " + entityClass.getSimpleName() + " x")
            .setFirstResult(range[0]).setMaxResults(range[1] - range[0] + 1);
    // Add filter, sort, etc.
    final Query queryCount = getEntityManager().createQuery("select count(x) from " + entityClass.getSimpleName() + " x");
    // Add filter, sort, etc.
    Long rowCount = (Long) queryCount.getSingleResult();
    List<T> resultList = query.getResultList();
    return new PaginatedResult<T>(resultList, rowCount);
}
Note that you have to run a paginated query. With JPA, as above, the ORM builds the query for you; if you don't use an ORM, you have to write the paginated query yourself (for Oracle, look at TOP-N queries, for example: http://oracle-base.com/articles/misc/top-n-queries.php).
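For instance, a classic Oracle ROWNUM-based TOP-N pagination might look like this (table and column names are placeholders):
SELECT *
  FROM (SELECT t.*, ROWNUM rn
          FROM (SELECT * FROM my_table ORDER BY id DESC) t
         WHERE ROWNUM <= :first_row + :page_size)
 WHERE rn > :first_row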
Remember that your return object must also contain the total record count, obtained with a fast count:
public class PaginatedResult<T> implements Serializable {

    private List<T> items;
    private long totalItems;

    public PaginatedResult() {
    }

    public PaginatedResult(List<T> items, long totalItems) {
        this.items = items;
        this.totalItems = totalItems;
    }

    public List<T> getItems() {
        return items;
    }

    public void setItems(List<T> items) {
        this.items = items;
    }

    public long getTotalItems() {
        return totalItems;
    }

    public void setTotalItems(long totalItems) {
        this.totalItems = totalItems;
    }
}
All this is useful only if your database table is set up correctly: pay attention to the execution plans of the resulting queries and add the right indexes.
I hope this gives you some hints to improve your performance.
Finally, remind your end user that the human eye can't take in more than 10-20 records at once, so it is quite useless to have thousands of records on one page.
You have used the default load implementation that is used in the Primefaces showcases. This is not the correct implementation for your case, where you load your data from a database.
The load method should build the correct query, taking into consideration:
1) the filter fields that are used, for example (ideally with bind parameters rather than string concatenation):
String query = "select e from Entity e where lower(e.f1) like lower('" + filters.get(key) + "%') and ..."; // etc. for the other fields
2) the sorting columns that are used, for example:
query.append(" order by e.").append(sortField).append(sortOrder == SortOrder.ASCENDING ? " asc" : " desc"); // etc. for the other columns
3) the total count for your query, WITH the filters from 1) attached to it. For example:
Long totalCount = (Long) entityManager.createQuery("select count(e) from Entity e where lower(e.f1) like lower('filterKey1%') and lower(e.f2) like lower('filterKey2%') ...").getSingleResult();
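Putting those three pieces together, a database-backed load might look roughly like this; a sketch assuming a JPA EntityManager, a hypothetical Entity class, and bind parameters instead of string concatenation:
@Override
public List<Entity> load(int first, int pageSize, String sortField,
                         SortOrder sortOrder, Map<String, Object> filters) {
    // Build the WHERE clause once and reuse it for both the data and count queries.
    StringBuilder where = new StringBuilder();
    int p = 0;
    for (String key : filters.keySet()) {
        where.append(where.length() == 0 ? " where " : " and ")
             .append("lower(e.").append(key).append(") like lower(:p").append(p++).append(")");
    }
    String order = (sortField == null) ? ""
            : " order by e." + sortField + (sortOrder == SortOrder.ASCENDING ? " asc" : " desc");

    TypedQuery<Entity> dataQuery = entityManager.createQuery(
            "select e from Entity e" + where + order, Entity.class);
    TypedQuery<Long> countQuery = entityManager.createQuery(
            "select count(e) from Entity e" + where, Long.class);

    p = 0;
    for (String key : filters.keySet()) {
        String pattern = filters.get(key) + "%"; // prefix match, as in the examples above
        dataQuery.setParameter("p" + p, pattern);
        countQuery.setParameter("p" + p++, pattern);
    }

    setRowCount(countQuery.getSingleResult().intValue());
    return dataQuery.setFirstResult(first).setMaxResults(pageSize).getResultList();
}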

sqlbulkcopy mem. management

I'm using SqlBulkCopy to copy some data tables into a database table. However, because the files I'm copying sometimes run in excess of 600 MB, I keep running out of memory.
I'm hoping to get some advice about managing the table size before I commit it to the database, so I can free up some memory to continue writing.
Here are some examples of my code (some columns and rows eliminated for simplicity):
SqlBulkCopy sqlbulkCopy = new SqlBulkCopy(ServerConfiguration); // Define the server configuration
System.IO.StreamReader rdr = new System.IO.StreamReader(fileName);

Console.WriteLine("Counting number of lines...");
Console.WriteLine("{0}, Contains: {1} Lines", fileName, countLines(fileName));

DataTable dt = new DataTable();
sqlbulkCopy.DestinationTableName = "[dbo].[buy.com]"; // Define the target table the data will be copied to
dt.Columns.Add("PROGRAMNAME");
dt.Columns.Add("PROGRAMURL");
dt.Columns.Add("CATALOGNAME");
dt.Columns.Add("LASTUPDATED"); // These two columns are referenced below,
dt.Columns.Add("NAME");        // so they must be defined as well.

string inputLine = "";
int i, k = 0;
DataRow row; // Declare a row, which will be added to the above data table

while ((inputLine = rdr.ReadLine()) != null) // Read while the line is not null
{
    i = 0;
    Console.Write("\rWriting Line: {0}", k);
    string[] arr = inputLine.Split('\t'); // Split the line read by the stream reader (tab delimited)
    row = dt.NewRow();
    row["PROGRAMNAME"] = arr[i++];
    row["PROGRAMURL"] = arr[i++];
    row["CATALOGNAME"] = arr[i++];
    row["LASTUPDATED"] = arr[i++];
    row["NAME"] = arr[i++];
    dt.Rows.Add(row);
    k++;
}

// Set the timeout, 600 seconds (10 minutes) given table size--damn that's a lota hooch
sqlbulkCopy.BulkCopyTimeout = 600;
try
{
    sqlbulkCopy.WriteToServer(dt);
}
catch (Exception e)
{
    Console.WriteLine(e);
}
sqlbulkCopy.Close(); // Release the resources
dt.Dispose();

Console.WriteLine("\nDB Table Written: \"{0}\" \n\n", sqlbulkCopy.DestinationTableName.ToString());
I continued to have problems getting SqlBulkCopy to work, and I realized I needed to do more work on each record before it was entered into the database, so I developed a simple LINQ to SQL method to do record-by-record updates; that way I could edit other information and create more record information as it ran.
The problem: this method runs pretty slowly (even on a Core i3 machine); any ideas on how to speed it up (threading?)? On a single processor core with 1 GB of memory, it crashes or sometimes takes 6-8 hours to write the same amount of data as one SqlBulkCopy call that takes a few moments. It does manage memory better, though.
while ((inputLine = rdr.ReadLine()) != null) // Read while the line is not null
{
    Console.Write("\rWriting Line: {0}", k);
    string[] arr = inputLine.Split('\t');

    /* items */
    if (fileName.Contains(",,"))
    {
        Item = Table(arr);

        /* Check whether the item is already in the db before queuing the insert;
         * otherwise the pending insert lingers in the change set. */
        bool exists = table.tables.Where(u => u.ProductID == Item.ProductID).Any();

        /* Commit */
        if (!exists)
        {
            table.tables.InsertOnSubmit(Item);
            try
            {
                table.SubmitChanges();
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
                // Make some adjustments.
                // ...
                // Try again.
                table.SubmitChanges();
            }
        }
    }
}
With this helper method:
public static class extensionMethods
{
    /// <summary>
    /// Method that provides the T-SQL EXISTS call for any IQueryable (thus extending Linq).
    /// </summary>
    /// <remarks>Returns whether or not the predicate conditions exist at least one time.</remarks>
    public static bool Exists<TSource>(this IQueryable<TSource> source, Expression<Func<TSource, bool>> predicate)
    {
        return source.Where(predicate).Any();
    }
}
Try setting the BatchSize property to 1000: this will batch up the insert in 1000-record batches rather than the whole lot. You can tweak this value to find what is optimal. I have used SqlBulkCopy for similarly sized data and it works well.
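For example, on top of the question's setup:
SqlBulkCopy sqlbulkCopy = new SqlBulkCopy(ServerConfiguration);
sqlbulkCopy.DestinationTableName = "[dbo].[buy.com]";
sqlbulkCopy.BatchSize = 1000;      // Send rows in 1000-row batches instead of one huge batch
sqlbulkCopy.BulkCopyTimeout = 600; // Keep the generous timeout from the question
sqlbulkCopy.WriteToServer(dt);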
Faced with the same issue, I found that the problem causing the OutOfMemoryException was the maximum quantity of rows held in DataTable.Rows.
I solved it by recreating the table after every 500,000 rows.
Hope my solution will be helpful:
var myTable = new System.Data.DataTable();
myTable.Columns.Add("Guid", typeof(Guid));
myTable.Columns.Add("Name", typeof(string));

int counter = 0;

foreach (var row in rows)
{
    ++counter;

    if (counter < 500000)
    {
        myTable.Rows.Add(
            new object[]
            {
                row.Value.Guid,
                row.Value.Name
            });
    }
    else
    {
        using (var dbConnection = new SqlConnection("Source=localhost;..."))
        {
            dbConnection.Open();
            using (var s = new SqlBulkCopy(dbConnection))
            {
                s.DestinationTableName = "MyTable";

                foreach (var column in myTable.Columns)
                    s.ColumnMappings.Add(column.ToString(), column.ToString());

                try
                {
                    s.WriteToServer(myTable);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                }
                finally
                {
                    s.Close();
                }
            }
        }

        // Start a fresh table for the next batch; note that any rows still left in
        // myTable when the loop ends need one final WriteToServer call.
        myTable = new System.Data.DataTable();
        myTable.Columns.Add("Guid", typeof(Guid));
        myTable.Columns.Add("Name", typeof(string));
        myTable.Rows.Add(
            new object[]
            {
                row.Value.Guid,
                row.Value.Name
            });

        counter = 0;
    }
}
