Facing critical performance issue in PrimeFaces 4 & 5 - jsf-2

I am working on a project that deals with heavy data sets. I am using PrimeFaces 4 & 5, Spring and Hibernate. I have to display very large data sets, e.g. at least 3000 rows with 100 columns, with features such as sorting, filtering, row expansion, etc. My problem is that my application takes 8 to 10 minutes to show the whole page, and the other functionality (sorting, filtering) is also very slow. My client is not happy at all. I could use pagination, but my client does not want paging. So I decided to use live scrolling, but I failed to implement liveScroll with or without lazy loading, as there were bugs in PrimeFaces regarding liveScroll. I have also posted this question here earlier, but no solution was found.
This performance issue is critical and a show stopper for me. For 3000 rows with 100 columns, the size of the page being loaded is ~10 MB.
I have measured the time consumed by the various JSF lifecycle phases. Using a PhaseListener I figured out that it is the browser that takes the time to parse the response rendered by JSF; completing all the phases takes my application only 25 seconds.
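For reference, this is a minimal sketch of the kind of PhaseListener I used for the measurement (class name and message format are illustrative; it is registered in faces-config.xml):
// Logs the duration of every JSF lifecycle phase.
// Not thread-safe; good enough for a single-user measurement.
import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;
import javax.faces.event.PhaseListener;

public class TimingPhaseListener implements PhaseListener {

    private long start;

    @Override
    public void beforePhase(PhaseEvent event) {
        start = System.currentTimeMillis();
    }

    @Override
    public void afterPhase(PhaseEvent event) {
        System.out.println(event.getPhaseId() + " took "
                + (System.currentTimeMillis() - start) + " ms");
    }

    @Override
    public PhaseId getPhaseId() {
        return PhaseId.ANY_PHASE; // listen to all six phases
    }
}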
At a minimum I want to improve the performance of my project. Please share any idea or suggestion that could help to overcome this problem.
Note: there are no database manipulations in the getters and setters, and no complex business logic.
UPDATE :
This is my datatable without lazyload:
<p:dataTable
    style="width:100%"
    id="cdTable"
    selection="#{controller.selectedArray}"
    resizableColumns="true"
    draggableColumns="true"
    var="cd"
    value="#{controller.cdDataModel}"
    editable="true"
    editMode="cell"
    selectionMode="multiple"
    rowSelectMode="add"
    scrollable="true"
    scrollHeight="650"
    rowKey="#{cd.id}"
    rowIndexVar="rowIndex"
    styleClass="screenScrollStyle"
    liveScroll="true"
    scrollRows="50"
    filterEvent="enter"
    widgetVar="dt4"
>
Here everything works except filtering: once I filter, the first page is displayed, but I am unable to sort or live-scroll the datatable. Note that I have tested this in PrimeFaces 5.
2nd approach
With lazy load on the same datatable:
1) When I add rows="100", live scrolling works, but there are problems with row editing and row expansion; filtering and sorting work.
2) When I remove rows, live scrolling works with row editing, row expansion, etc., but filtering and sorting don't work.
My LazyDataModel is as follows:
public class MyDataModel extends LazyDataModel<YData>
{
    @Override
    public List<YData> load(int first, int pageSize,
            List<SortMeta> multiSortMeta, Map<String, Object> filters) {
        System.out.println("multisort load");
        return super.load(first, pageSize, multiSortMeta, filters);
    }
    private static final long serialVersionUID = 1L;

    private List<YData> datasource;

    public MyDataModel() {
    }

    public MyDataModel(List<YData> datasource) {
        this.datasource = datasource;
    }
    @Override
    public YData getRowData(String rowKey) {
        // In a real app, a more efficient way, like a query by rowKey,
        // should be implemented to deal with huge data
        // List<YData> yList = (List<YData>) getWrappedData();
        System.out.println("datasource : " + datasource.size());
        for (YData y : datasource) {
            // compare by value; == on boxed Longs compares references
            if (y.getId() != null && y.getId().equals(Long.valueOf(rowKey))) {
                return y;
            }
        }
        return null;
    }
    @Override
    public Object getRowKey(YData y) {
        return y.getId();
    }

    @Override
    public void setRowIndex(int rowIndex) {
        /*
         * The following is in the ancestor (LazyDataModel):
         * this.rowIndex = rowIndex == -1 ? rowIndex : (rowIndex % pageSize);
         */
        if (rowIndex == -1 || getPageSize() == 0) {
            super.setRowIndex(-1);
        }
        else {
            super.setRowIndex(rowIndex % getPageSize());
        }
    }
    @Override
    public List<YData> load(int first, int pageSize, String sortField, SortOrder sortOrder, Map<String, Object> filters) {
        List<YData> data = new ArrayList<YData>();
        System.out.println("sort order : " + sortOrder);

        // filter
        for (YData yInfo : datasource) {
            boolean match = true;
            for (Iterator<String> it = filters.keySet().iterator(); it.hasNext();) {
                try {
                    String filterProperty = it.next();
                    String filterValue = String.valueOf(filters.get(filterProperty));
                    Field yField = yInfo.getClass().getDeclaredField(filterProperty);
                    yField.setAccessible(true);
                    String fieldValue = String.valueOf(yField.get(yInfo));
                    if (filterValue == null || fieldValue.startsWith(filterValue)) {
                        match = true;
                    }
                    else {
                        match = false;
                        break;
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                    match = false;
                }
            }
            if (match) {
                data.add(yInfo);
            }
        }

        // sort
        if (sortField != null) {
            Collections.sort(data, new LazySorter(sortField, sortOrder));
        }

        int dataSize = data.size();
        this.setRowCount(dataSize);

        // paginate
        if (dataSize > pageSize) {
            try {
                return data.subList(first, first + pageSize);
            }
            catch (IndexOutOfBoundsException e) {
                return data.subList(first, first + (dataSize % pageSize));
            }
        }
        else {
            return data;
        }
    }
    @Override
    public int getRowCount() {
        return super.getRowCount();
    }
}
I am fed up with these issues; they have become a show stopper for me. I have even tried PrimeFaces 5.

If your data is loaded from a database, I suggest you implement a better LazyDataModel, like:
public class ElementiLazyDataModel<T> extends LazyDataModel<T> implements Serializable {

    private Service<T> abstractFacade;

    public ElementiLazyDataModel(Service<T> abstractFacade) {
        this.abstractFacade = abstractFacade;
    }

    public Service<T> getAbstractFacade() {
        return abstractFacade;
    }

    public void setAbstractFacade(Service<T> abstractFacade) {
        this.abstractFacade = abstractFacade;
    }

    @Override
    public List<T> load(int first, int pageSize, String sortField, SortOrder sortOrder, Map<String, Object> filters) {
        PaginatedResult<T> pr = abstractFacade.findRange(new int[]{first, first + pageSize},
                sortField, sortOrder, filters);
        setRowCount((int) pr.getTotalItems());
        return pr.getItems();
    }
}
The service is some kind of backend communication (like an EJB) injected into the managed bean that uses this model.
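For example, the wiring in the managed bean could look roughly like this (a sketch; the bean name, the scope and the Elemento entity are placeholders):
@ManagedBean
@ViewScoped
public class ElementiBean implements Serializable {

    @EJB
    private Service<Elemento> service; // the injected backend facade

    private LazyDataModel<Elemento> model;

    @PostConstruct
    public void init() {
        // the p:dataTable's value attribute points at this lazy model
        model = new ElementiLazyDataModel<>(service);
    }

    public LazyDataModel<Elemento> getModel() {
        return model;
    }
}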
The service method for pagination may look like this:
@Override
public PaginatedResult<T> findRange(int[] range, String sortField, SortOrder sortOrder, Map<String, Object> filters) {
    final Query query = getEntityManager().createQuery("select x from " + entityClass.getSimpleName() + " x")
            .setFirstResult(range[0]).setMaxResults(range[1] - range[0] + 1);
    // Add filter, sort, etc.
    final Query queryCount = getEntityManager().createQuery("select count(x) from " + entityClass.getSimpleName() + " x");
    // Add filter, sort, etc.
    Long rowCount = (Long) queryCount.getSingleResult();
    List<T> resultList = query.getResultList();
    return new PaginatedResult<T>(resultList, rowCount);
}
Note that you have to make the query itself paginated. With JPA like this, the ORM does it for you; if you don't use an ORM you have to write the paginated query yourself. For Oracle, look at TOP-N queries, for example: http://oracle-base.com/articles/misc/top-n-queries.php
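For example, without an ORM, a ROWNUM-based range query on Oracle might be issued like this (a sketch; table and column names are placeholders):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public final class OraclePagination {

    // Classic Oracle ROWNUM idiom: returns rows (first, first + pageSize]
    // of MY_TABLE ordered by MY_SORT_COLUMN (both names are placeholders).
    static List<String> findRange(Connection con, int first, int pageSize) throws SQLException {
        String sql =
            "select * from ("
          + "  select x.*, rownum rn from ("
          + "    select * from MY_TABLE order by MY_SORT_COLUMN"
          + "  ) x where rownum <= ?"
          + ") where rn > ?";
        List<String> rows = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, first + pageSize); // upper bound (inclusive)
            ps.setInt(2, first);            // lower bound (exclusive)
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows.add(rs.getString(1)); // map columns to your entity here
                }
            }
        }
        return rows;
    }
}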
Remember that your return object must also contain the total record count, obtained with a fast count query:
public class PaginatedResult<T> implements Serializable {

    private List<T> items;
    private long totalItems;

    public PaginatedResult() {
    }

    public PaginatedResult(List<T> items, long totalItems) {
        this.items = items;
        this.totalItems = totalItems;
    }

    public List<T> getItems() {
        return items;
    }

    public void setItems(List<T> items) {
        this.items = items;
    }

    public long getTotalItems() {
        return totalItems;
    }

    public void setTotalItems(long totalItems) {
        this.totalItems = totalItems;
    }
}
All this is useful only if your database table is set up correctly: pay attention to the execution plans of the possible queries and add the right indexes.
I hope this gives you some hints to improve your performance.
Finally, remind your end user that the human eye can't take in more than 10-20 records at once, so it is rather useless to show thousands of records on one page.

You have used the default load implementation that is used in the PrimeFaces showcase. This is not the correct implementation for your case, where you load your data from a database.
The load method should build the correct query, taking into consideration:
1) the filter fields that are used, for example:
String query = "select e from Entity e where lower(e.f1) like lower('" + filters.get(key) + "%')" + ...; // etc. for the other fields
2) the sorting columns that are used, for example:
query.append(" order by ").append(sortField).append(" ").append(sortOrder == SortOrder.ASCENDING ? "asc" : "desc"); // etc. for the other columns
3) the total count of your query, with the filters from 1) attached to it. For example:
Long totalCount = (Long) entityManager.createQuery("select count(e) from Entity e where lower(e.f1) like lower('filterKey1%') and lower(e.f2) like lower('filterKey2%') ...").getSingleResult();
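Putting 1) to 3) together, a sketch of such a load() override with bound parameters (the Entity class, the field names and the injected entityManager are placeholders; binding values instead of concatenating them avoids the injection risk of the snippets above):
@Override
public List<Entity> load(int first, int pageSize, String sortField,
                         SortOrder sortOrder, Map<String, Object> filters) {
    // field names come from the table definition, so whitelist them if needed
    StringBuilder jpql = new StringBuilder("select e from Entity e where 1 = 1");
    for (String field : filters.keySet()) {
        jpql.append(" and lower(e.").append(field).append(") like :").append(field);
    }
    if (sortField != null) {
        jpql.append(" order by e.").append(sortField)
            .append(sortOrder == SortOrder.DESCENDING ? " desc" : " asc");
    }
    TypedQuery<Entity> query = entityManager.createQuery(jpql.toString(), Entity.class);
    for (Map.Entry<String, Object> f : filters.entrySet()) {
        // bound, case-insensitive prefix match
        query.setParameter(f.getKey(), f.getValue().toString().toLowerCase() + "%");
    }
    // run the matching count query (same where clause, no order by) for setRowCount(...)
    return query.setFirstResult(first).setMaxResults(pageSize).getResultList();
}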

Related

Save a list in a property in an entity in xodus

I have not found how to save a general list of primitive types, e.g. ints or strings, in a property of an entity. I might have missed something obvious...
https://github.com/JetBrains/xodus/wiki/Entity-Stores describes that "only Java primitive types, Strings, and ComparableSet values can be used by default".
It seems not hard to convert an Iterable into a ComparableSet. However, it is a Set.
I will take a look at PersistentEntityStore.registerCustomPropertyType() to see if that helps. It just feels wrong to do that merely to save a list of integers.
Links seem to be able to serve as a way of saving a list of Entitys, but there appears to be no addProperty() counterpart to addLink().
I'd appreciate it if someone could share a way or a workaround for this, or explain why this is not supported.
Thanks
As mentioned in the comments, one workaround I came up with was to create a ComparableList, by adapting code from ComparableSet.
The idea was to make a list that can convert to and from an ArrayByteIterable, and register it with .registerCustomPropertyType(). To do that, two classes are needed: ComparableList and ComparableListBinding. I'm sharing the iteration I used at the bottom. By the way, I made them immutable, in contrast to the mutable ComparableSet. The newly implemented types should be registered once, in a transaction on the store, before they are used.
That should allow you to store and retrieve a list. However, the items in a ComparableList do not get indexed the way they would when saved in a ComparableSet -- there is some special treatment of ComparableSet in the entity store implementation. So without modifying the library, indexing would work only with hacks like creating another property just to index the values.
I was considering implementing a different entity store that could better support lists, on top of the xodus key-value store, bypassing the xodus entity store entirely. That might be a better solution to the list issue we are talking about here.
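For reference, registering and using the new type looks roughly like this (a sketch; the store path, type and property names are made up):
PersistentEntityStore store = PersistentEntityStores.newInstance("/path/to/store");

// register the custom property type once, before any value of that type is stored
store.executeInTransaction(txn ->
        store.registerCustomPropertyType(txn, ComparableList.class, ComparableListBinding.INSTANCE));

store.executeInTransaction(txn -> {
    Entity e = txn.newEntity("MyType");
    // the whole list is stored as a single property value (and is not indexed per item)
    e.setProperty("numbers", new ComparableList<>(ImmutableList.of(1, 2, 3)));
});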
ComparableList:
@SuppressWarnings("unchecked")
public class ComparableList<T extends Comparable<T>> implements Comparable<ComparableList<T>>,
        Iterable<T> {

    @Nonnull
    private final ImmutableList<T> list;

    public ComparableList(@Nonnull final Iterable<T> iterable) {
        list = ImmutableList.copyOf(iterable);
    }

    @Override
    public int compareTo(@Nonnull final ComparableList<T> other) {
        final Iterator<T> thisIt = list.iterator();
        final Iterator<T> otherIt = other.list.iterator();
        while (thisIt.hasNext() && otherIt.hasNext()) {
            final int cmp = thisIt.next().compareTo(otherIt.next());
            if (cmp != 0) {
                return cmp;
            }
        }
        if (thisIt.hasNext()) {
            return 1;
        }
        if (otherIt.hasNext()) {
            return -1;
        }
        return 0;
    }

    @NotNull
    @Override
    public Iterator<T> iterator() {
        return list.iterator();
    }

    @Nullable
    public Class<T> getItemClass() {
        final Iterator<T> it = list.iterator();
        return it.hasNext() ? (Class<T>) it.next().getClass() : null;
    }

    @Override
    public String toString() {
        return list.toString();
    }
}
ComparableListBinding:
@SuppressWarnings({"unchecked", "rawtypes"})
public class ComparableListBinding extends ComparableBinding {

    public static final ComparableListBinding INSTANCE = new ComparableListBinding();

    private ComparableListBinding() {}

    @Override
    public ComparableList readObject(@NotNull final ByteArrayInputStream stream) {
        final int valueTypeId = stream.read() ^ 0x80;
        final ComparableBinding itemBinding = ComparableValueType.getPredefinedBinding(valueTypeId);
        final ImmutableList.Builder<Comparable> builder = ImmutableList.builder();
        while (stream.available() > 0) {
            builder.add(itemBinding.readObject(stream));
        }
        return new ComparableList(builder.build());
    }

    @Override
    public void writeObject(@NotNull final LightOutputStream output,
                            @NotNull final Comparable object) {
        final ComparableList<? extends Comparable> list = (ComparableList) object;
        final Class itemClass = list.getItemClass();
        if (itemClass == null) {
            throw new ExodusException("Attempt to write empty ComparableList");
        }
        final ComparableValueType type = ComparableValueType.getPredefinedType(itemClass);
        output.writeByte(type.getTypeId());
        final ComparableBinding itemBinding = type.getBinding();
        list.forEach(o -> itemBinding.writeObject(output, o));
    }

    /**
     * De-serializes a {@linkplain ByteIterable} entry to a {@code ComparableList} value.
     *
     * @param entry {@linkplain ByteIterable} instance
     * @return de-serialized value
     */
    public static ComparableList entryToComparableList(@NotNull final ByteIterable entry) {
        return (ComparableList) INSTANCE.entryToObject(entry);
    }

    /**
     * Serializes a {@code ComparableList} value to an {@linkplain ArrayByteIterable} entry.
     *
     * @param object value to serialize
     * @return {@linkplain ArrayByteIterable} entry
     */
    public static ArrayByteIterable comparableListToEntry(@NotNull final ComparableList object) {
        return INSTANCE.objectToEntry(object);
    }
}

Select2 dropdown isn't showing all items in Response

I have a problem with Select2: it doesn't show all items, only a subset. I don't see any method on Select2Choice that shows all items. Can someone give me a pointer on how to show all the items?
Here is the code:
originStationDropDown = new Select2Choice<>("originDgfStation",
        new PropertyModel<Station>(this, "originStation"),
        new StationsProvider(originCountryDD, productDD));
ComponentHelper.addLabelAndComponent(originStationDropDown, this, "originStation.label", ComponentOptions.REQUIRED);

private class StationsProvider extends ChoiceProvider<Station> {

    private Select2Choice<Country> countryDD;
    private DropDownChoice<Product> productDD;

    public StationsProvider(Select2Choice<Country> countryDD, DropDownChoice<Product> productDD) {
        this.countryDD = countryDD;
        this.productDD = productDD;
    }

    @Override
    public void query(String codeNameFragment, int i, Response<Station> response) {
        if (codeNameFragment == null || "".equals(codeNameFragment)) {
            List<Station> stations = stationDao.findByCountryAndProduct(
                    countryDD.getModel().getObject(), productDD.getModel().getObject(), "code");
            for (Station station : stations) {
                response.add(station);
            }
        } else {
            response.addAll(stationDao.findByCountryAndProductAndFragment(
                    countryDD.getModel().getObject(), productDD.getModel().getObject(), codeNameFragment));
        }
        System.out.println(response.size());
    }

    @Override
    public void toJson(Station station, JSONWriter jsonWriter) throws JSONException {
        jsonWriter.key("id").value(station.getId()).key("text").value(station.getNameWithCode());
    }

    @Override
    public Collection<Station> toChoices(Collection<String> collection) {
        List<Station> stations = new ArrayList<>();
        List<Station> stationList = stationDao.findAll();
        for (String id : collection) {
            for (Station station : stationList) {
                if (station.getId().equals(Long.valueOf(id))) {
                    stations.add(station);
                }
            }
        }
        return stations;
    }
}
You don't explain which items are shown and which are not.
I will guess that only the first N items are always shown. The second parameter of the #query() method is the int page (named i in your code). This parameter should be used to paginate the results, i.e. you should not always return 10000 items and let the JavaScript deal with them; you should return items 0-20, then 21-40, 41-60, etc.
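A paginated #query() could look roughly like this (a sketch; the page size is my choice, and ideally the DAO would do the slicing in the database query itself):
@Override
public void query(String codeNameFragment, int page, Response<Station> response) {
    int pageSize = 20;
    List<Station> all = stationDao.findByCountryAndProductAndFragment(
            countryDD.getModel().getObject(), productDD.getModel().getObject(), codeNameFragment);
    int from = page * pageSize;
    int to = Math.min(from + pageSize, all.size());
    if (from < all.size()) {
        response.addAll(all.subList(from, to));
    }
    // tell Select2 whether there are more pages to fetch
    response.setHasMore(to < all.size());
}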

Rate limiting based on user plan in Spring Cloud Gateway

Say my users subscribe to a plan. Is it possible, then, using Spring Cloud Gateway, to rate-limit user requests based upon the subscription plan? Given there are Silver and Gold plans, would it let Silver subscriptions have a replenishRate/burstCapacity of 5/10 and Gold ones 50/100?
I naively thought of passing a new instance of RedisRateLimiter (see below, I construct a new one with the 5/10 settings) to the filter, but I would need to get the information about the user from the request somehow in order to find out whether it is a Silver or Gold plan.
@Bean
public RouteLocator myRoutes(RouteLocatorBuilder builder) {
    return builder.routes()
        .route(p -> p
            .path("/get")
            .filters(f ->
                f.requestRateLimiter(r ->
                    r.setRateLimiter(new RedisRateLimiter(5, 10))))
            .uri("http://httpbin.org:80"))
        .build();
}
Am I trying to achieve something that is even possible with Spring Cloud Gateway? What other products would you recommend looking at for this purpose, if any?
Thanks!
Okay, it is possible by creating a custom rate limiter on top of the RedisRateLimiter class. Unfortunately the class has not been architected for extensibility, so the solution is somewhat "hacky": I could only decorate the normal RedisRateLimiter and duplicate some of its code:
@Primary
@Component
public class ApiKeyRateLimiter implements RateLimiter {

    private Log log = LogFactory.getLog(getClass());

    // How many requests per second do you want a user to be allowed to do?
    private static final int REPLENISH_RATE = 1;

    // How much bursting do you want to allow?
    private static final int BURST_CAPACITY = 1;

    private final RedisRateLimiter rateLimiter;
    private final RedisScript<List<Long>> script;
    private final ReactiveRedisTemplate<String, String> redisTemplate;

    @Autowired
    public ApiKeyRateLimiter(
            RedisRateLimiter rateLimiter,
            @Qualifier(RedisRateLimiter.REDIS_SCRIPT_NAME) RedisScript<List<Long>> script,
            ReactiveRedisTemplate<String, String> redisTemplate) {
        this.rateLimiter = rateLimiter;
        this.script = script;
        this.redisTemplate = redisTemplate;
    }

    // These two methods are the core of the rate limiter.
    // Their purpose is to come up with rate limits for a given API key (or user ID).
    // It is up to the implementor to return limits based upon the API key passed in.
    private int getBurstCapacity(String routeId, String apiKey) {
        return BURST_CAPACITY;
    }

    private int getReplenishRate(String routeId, String apiKey) {
        return REPLENISH_RATE;
    }

    public Mono<Response> isAllowed(String routeId, String apiKey) {
        int replenishRate = getReplenishRate(routeId, apiKey);
        int burstCapacity = getBurstCapacity(routeId, apiKey);
        try {
            List<String> keys = getKeys(apiKey);
            // The arguments to the Lua script. time() returns unixtime in seconds.
            List<String> scriptArgs = Arrays.asList(replenishRate + "", burstCapacity + "",
                    Instant.now().getEpochSecond() + "", "1");
            Flux<List<Long>> flux = this.redisTemplate.execute(this.script, keys, scriptArgs);
            return flux.onErrorResume(throwable -> Flux.just(Arrays.asList(1L, -1L)))
                    .reduce(new ArrayList<Long>(), (longs, l) -> {
                        longs.addAll(l);
                        return longs;
                    })
                    .map(results -> {
                        boolean allowed = results.get(0) == 1L;
                        Long tokensLeft = results.get(1);
                        Response response = new Response(allowed,
                                getHeaders(tokensLeft, replenishRate, burstCapacity));
                        if (log.isDebugEnabled()) {
                            log.debug("response: " + response);
                        }
                        return response;
                    });
        }
        catch (Exception e) {
            /*
             * We don't want a hard dependency on Redis to allow traffic. Make sure to set
             * an alert so you know if this is happening too much. Stripe's observed
             * failure rate is 0.01%.
             */
            log.error("Error determining if user allowed from redis", e);
        }
        return Mono.just(new Response(true, getHeaders(-1L, replenishRate, burstCapacity)));
    }

    private static List<String> getKeys(String id) {
        String prefix = "request_rate_limiter.{" + id;
        String tokenKey = prefix + "}.tokens";
        String timestampKey = prefix + "}.timestamp";
        return Arrays.asList(tokenKey, timestampKey);
    }

    private HashMap<String, String> getHeaders(Long tokensLeft, long replenish, long burst) {
        HashMap<String, String> headers = new HashMap<>();
        headers.put(RedisRateLimiter.REMAINING_HEADER, tokensLeft.toString());
        headers.put(RedisRateLimiter.REPLENISH_RATE_HEADER, Long.toString(replenish));
        headers.put(RedisRateLimiter.BURST_CAPACITY_HEADER, Long.toString(burst));
        return headers;
    }

    @Override
    public Map getConfig() {
        return rateLimiter.getConfig();
    }

    @Override
    public Class getConfigClass() {
        return rateLimiter.getConfigClass();
    }

    @Override
    public Object newConfig() {
        return rateLimiter.newConfig();
    }
}
So, the route would look like this:
@Component
public class Routes {

    @Autowired
    ApiKeyRateLimiter rateLimiter;

    @Autowired
    ApiKeyResolver apiKeyResolver;

    @Bean
    public RouteLocator theRoutes(RouteLocatorBuilder b) {
        return b.routes()
            .route(p -> p
                .path("/unlimited")
                .uri("http://httpbin.org:80/anything?route=unlimited")
            )
            .route(p -> p
                .path("/limited")
                .filters(f ->
                    f.requestRateLimiter(r -> {
                        r.setKeyResolver(apiKeyResolver);
                        r.setRateLimiter(rateLimiter);
                    })
                )
                .uri("http://httpbin.org:80/anything?route=limited")
            )
            .build();
    }
}
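The ApiKeyResolver is not shown above; a minimal sketch that resolves the key from a request header (the X-Api-Key header name is my assumption) could be:
@Component
public class ApiKeyResolver implements KeyResolver {

    @Override
    public Mono<String> resolve(ServerWebExchange exchange) {
        // fall back to a fixed key when the header is missing
        String apiKey = exchange.getRequest().getHeaders().getFirst("X-Api-Key");
        return Mono.just(apiKey != null ? apiKey : "anonymous");
    }
}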
Hope it saves a work day for somebody...

Recommendation engine using Apache Spark MLlib showing zero recommendations after processing all operations

I am a newbie when it comes to implementing ML algorithms. I wanted to implement a recommendation engine and learned after a little experimenting that collaborative filtering can be used for this. I am using Apache Spark for it. I got help from one of the blogs and tried to implement the same locally. Please find below the code that I tried out. Every time I execute it, the count of recommendations that gets printed is always zero. I don't see any evident error as such. Could someone please help me understand this? Also, please feel free to provide any other reference that can be consulted in this regard.
package mllib.example;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.mllib.recommendation.ALS;
import org.apache.spark.mllib.recommendation.MatrixFactorizationModel;
import org.apache.spark.mllib.recommendation.Rating;
import scala.Tuple2;
public class RecommendationEngine {
    public static void main(String[] args) {
        // Create Java Spark context
        SparkConf conf = new SparkConf().setAppName("Recommendation System Example")
                .setMaster("local[2]").set("spark.executor.memory", "1g");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read user-item rating file. Format: userId,itemId,rating
        JavaRDD<String> userItemRatingsFile = sc.textFile(args[0]);
        System.out.println("Count is " + userItemRatingsFile.count());

        // Read item description file. Format: itemId,itemName,otherFields,...
        JavaRDD<String> itemDescritpionFile = sc.textFile(args[1]);
        System.out.println("itemDescritpionFile Count is " + itemDescritpionFile.count());

        // Map file to Rating(user,item,rating) tuples
        JavaRDD<Rating> ratings = userItemRatingsFile.map(new Function<String, Rating>() {
            public Rating call(String s) {
                String[] sarray = s.split(",");
                return new Rating(Integer.parseInt(sarray[0]),
                        Integer.parseInt(sarray[1]), Double.parseDouble(sarray[2]));
            }
        });
        System.out.println("Ratings RDD Object " + ratings.first().toString());

        // Create tuples (itemId, itemDescription), used later to get item names from itemIds
        JavaPairRDD<Integer, String> itemDescritpion = itemDescritpionFile.mapToPair(
                new PairFunction<String, Integer, String>() {
                    @Override
                    public Tuple2<Integer, String> call(String t) throws Exception {
                        String[] s = t.split(",");
                        return new Tuple2<Integer, String>(Integer.parseInt(s[0]), s[1]);
                    }
                });
        System.out.println("itemDescritpion RDD Object " + itemDescritpion.first().toString());

        // Build the recommendation model using ALS
        int rank = 10; // 10 latent factors
        int numIterations = Integer.parseInt(args[2]); // number of iterations
        MatrixFactorizationModel model = ALS.trainImplicit(JavaRDD.toRDD(ratings),
                rank, numIterations);

        // Create user-item tuples from ratings
        JavaRDD<Tuple2<Object, Object>> userProducts = ratings
                .map(new Function<Rating, Tuple2<Object, Object>>() {
                    public Tuple2<Object, Object> call(Rating r) {
                        return new Tuple2<Object, Object>(r.user(), r.product());
                    }
                });

        // Calculate the itemIds not rated by a particular user, say user with userId = 1
        JavaRDD<Integer> notRatedByUser = userProducts.filter(new Function<Tuple2<Object, Object>, Boolean>() {
            @Override
            public Boolean call(Tuple2<Object, Object> v1) throws Exception {
                if (((Integer) v1._1).intValue() != 0) {
                    return true;
                }
                return false;
            }
        }).map(new Function<Tuple2<Object, Object>, Integer>() {
            @Override
            public Integer call(Tuple2<Object, Object> v1) throws Exception {
                return (Integer) v1._2;
            }
        });

        // Create user-item tuples for the items that are not rated by the user, with user id 1
        JavaRDD<Tuple2<Object, Object>> itemsNotRatedByUser = notRatedByUser
                .map(new Function<Integer, Tuple2<Object, Object>>() {
                    public Tuple2<Object, Object> call(Integer r) {
                        return new Tuple2<Object, Object>(0, r);
                    }
                });

        // Predict the ratings of the items not rated by the user
        JavaRDD<Rating> recomondations = model.predict(itemsNotRatedByUser.rdd()).toJavaRDD().distinct();

        // Sort the recommendations by rating in descending order
        recomondations = recomondations.sortBy(new Function<Rating, Double>() {
            @Override
            public Double call(Rating v1) throws Exception {
                return v1.rating();
            }
        }, false, 1);
        System.out.println("recomondations Total is " + recomondations.count());

        // Get top 10 recommendations
        JavaRDD<Rating> topRecomondations = sc.parallelize(recomondations.take(10));

        // Join top 10 recommendations with item descriptions
        JavaRDD<Tuple2<Rating, String>> recommendedItems = topRecomondations.mapToPair(
                new PairFunction<Rating, Integer, Rating>() {
                    @Override
                    public Tuple2<Integer, Rating> call(Rating t) throws Exception {
                        return new Tuple2<Integer, Rating>(t.product(), t);
                    }
                }).join(itemDescritpion).values();
        System.out.println("recommendedItems count is " + recommendedItems.count());

        // Print the top recommendations for user 1
        recommendedItems.foreach(new VoidFunction<Tuple2<Rating, String>>() {
            @Override
            public void call(Tuple2<Rating, String> t) throws Exception {
                System.out.println(t._1.product() + "\t" + t._1.rating() + "\t" + t._2);
            }
        });
    }
}
Also, I see that this job runs for a really long time, and it creates a model every time. Is there a way I can create the model once, persist it, and load the same model for subsequent predictions? Can we by any chance improve the speed of execution of this job?
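On the persistence part, I read that MLlib's MatrixFactorizationModel supports save and load, roughly like this (a sketch; the path is a placeholder):
// after training: persist the model to any Hadoop-compatible path
model.save(sc.sc(), "target/tmp/myALSModel");

// in a later run: load it back instead of calling ALS.trainImplicit again
MatrixFactorizationModel sameModel = MatrixFactorizationModel.load(sc.sc(), "target/tmp/myALSModel");
JavaRDD<Rating> predictions = sameModel.predict(itemsNotRatedByUser.rdd()).toJavaRDD();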
Thanks in Advance

How to sort Vectors in BlackBerry using SimpleSortingVector?

I am having trouble sorting a Vector for my BlackBerry app using SimpleSortingVector. My data does not get sorted; it stays the same.
Here is what I have so far...
MyComparator class
private Vector vector = new Vector(); // assume this vector is already populated with Record elements

SimpleSortingVector ssv = new SimpleSortingVector();
ssv.setSortComparator(new Comparator() {
    public int compare(Object o1, Object o2) {
        Record o1C = (Record) o1;
        Record o2C = (Record) o2;
        return o1C.getName().compareTo(o2C.getName());
    }

    public boolean equals(Object obj) {
        return compare(this, obj) == 0;
    }
});

for (int i = 0; i < vector.size(); i++) {
    Record record = (Record) vector.elementAt(i);
    // when elements are added to this vector, it is supposed to be sorted automatically
    ssv.addElement(record);
}
class Record
public class Record {

    String name;
    int price;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getPrice() {
        return price;
    }

    public void setPrice(int price) {
        this.price = price;
    }
}
SimpleSortingVector does not sort by default. This was unexpected for me the first time I ran into it, too, given the name of the class.
You can do one of two things. Call SimpleSortingVector.setSort(true) to make sure the vector is always sorted after each change. This is surprisingly not turned on by default.
Or, you can call SimpleSortingVector.reSort() after adding all the elements to the vector, to do the sort in one batch operation.
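Applied to the code in the question, that looks roughly like this (a sketch, reusing the comparator from above):
SimpleSortingVector ssv = new SimpleSortingVector();
ssv.setSortComparator(comparator); // the anonymous Comparator from the question
ssv.setSort(true);                 // option 1: keep the vector sorted on every addElement()

for (int i = 0; i < vector.size(); i++) {
    ssv.addElement(vector.elementAt(i));
}

// option 2: leave setSort off and sort once after bulk-adding:
// ssv.reSort();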
