Rate limiting based on user plan in Spring Cloud Gateway

Say my users subscribe to a plan. Is it possible, using Spring Cloud Gateway, to rate limit user requests based on the subscription plan? Given there are Silver and Gold plans, would it let Silver subscriptions have a replenishRate/burstCapacity of 5/10 and Gold 50/100?
I naively thought of passing a new instance of RedisRateLimiter (see below, where I construct one with 5/10 settings) to the filter, but I would need to get information about the user from the request somehow in order to find out whether it is a Silver or a Gold plan.
@Bean
public RouteLocator myRoutes(RouteLocatorBuilder builder) {
    return builder.routes()
        .route(p -> p
            .path("/get")
            .filters(f ->
                f.requestRateLimiter(r -> {
                    r.setRateLimiter(new RedisRateLimiter(5, 10));
                }))
            .uri("http://httpbin.org:80"))
        .build();
}
Am I trying to achieve something that is even possible with Spring Cloud Gateway? What other products would you recommend looking at for this purpose, if any?
Thanks!

Okay, it is possible by creating a custom rate limiter on top of the RedisRateLimiter class. Unfortunately the class was not designed for extensibility, so the solution is somewhat "hacky": I could only decorate the normal RedisRateLimiter and duplicate some of its code:
@Primary
@Component
public class ApiKeyRateLimiter implements RateLimiter {

    private final Log log = LogFactory.getLog(getClass());

    // How many requests per second do you want a user to be allowed to do?
    private static final int REPLENISH_RATE = 1;

    // How much bursting do you want to allow?
    private static final int BURST_CAPACITY = 1;

    private final RedisRateLimiter rateLimiter;
    private final RedisScript<List<Long>> script;
    private final ReactiveRedisTemplate<String, String> redisTemplate;

    @Autowired
    public ApiKeyRateLimiter(
            RedisRateLimiter rateLimiter,
            @Qualifier(RedisRateLimiter.REDIS_SCRIPT_NAME) RedisScript<List<Long>> script,
            ReactiveRedisTemplate<String, String> redisTemplate) {
        this.rateLimiter = rateLimiter;
        this.script = script;
        this.redisTemplate = redisTemplate;
    }

    // These two methods are the core of the rate limiter.
    // Their purpose is to come up with the rate limits for a given API key (or user ID).
    // It is up to the implementor to return limits based on the API key passed in.
    private int getBurstCapacity(String routeId, String apiKey) {
        return BURST_CAPACITY;
    }

    private int getReplenishRate(String routeId, String apiKey) {
        return REPLENISH_RATE;
    }

    public Mono<Response> isAllowed(String routeId, String apiKey) {
        int replenishRate = getReplenishRate(routeId, apiKey);
        int burstCapacity = getBurstCapacity(routeId, apiKey);
        try {
            List<String> keys = getKeys(apiKey);

            // The arguments to the Lua script. time() returns unixtime in seconds.
            List<String> scriptArgs = Arrays.asList(replenishRate + "", burstCapacity + "",
                    Instant.now().getEpochSecond() + "", "1");
            Flux<List<Long>> flux = this.redisTemplate.execute(this.script, keys, scriptArgs);

            return flux.onErrorResume(throwable -> Flux.just(Arrays.asList(1L, -1L)))
                    .reduce(new ArrayList<Long>(), (longs, l) -> {
                        longs.addAll(l);
                        return longs;
                    })
                    .map(results -> {
                        boolean allowed = results.get(0) == 1L;
                        Long tokensLeft = results.get(1);
                        Response response = new Response(allowed, getHeaders(tokensLeft, replenishRate, burstCapacity));
                        if (log.isDebugEnabled()) {
                            log.debug("response: " + response);
                        }
                        return response;
                    });
        }
        catch (Exception e) {
            /*
             * We don't want a hard dependency on Redis to allow traffic. Make sure to set
             * an alert so you know if this is happening too much. Stripe's observed
             * failure rate is 0.01%.
             */
            log.error("Error determining if user allowed from redis", e);
        }
        return Mono.just(new Response(true, getHeaders(-1L, replenishRate, burstCapacity)));
    }

    private static List<String> getKeys(String id) {
        String prefix = "request_rate_limiter.{" + id;
        String tokenKey = prefix + "}.tokens";
        String timestampKey = prefix + "}.timestamp";
        return Arrays.asList(tokenKey, timestampKey);
    }

    private HashMap<String, String> getHeaders(Long tokensLeft, long replenish, long burst) {
        HashMap<String, String> headers = new HashMap<>();
        headers.put(RedisRateLimiter.REMAINING_HEADER, tokensLeft.toString());
        headers.put(RedisRateLimiter.REPLENISH_RATE_HEADER, Long.toString(replenish));
        headers.put(RedisRateLimiter.BURST_CAPACITY_HEADER, Long.toString(burst));
        return headers;
    }

    @Override
    public Map getConfig() {
        return rateLimiter.getConfig();
    }

    @Override
    public Class getConfigClass() {
        return rateLimiter.getConfigClass();
    }

    @Override
    public Object newConfig() {
        return rateLimiter.newConfig();
    }
}
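To answer the original question about Silver (5/10) vs Gold (50/100), the two lookup methods above are the place to plug in plan-aware limits. A minimal sketch, assuming a hypothetical PlanService that maps an API key to its subscription plan (that service is an assumption, not part of Spring Cloud Gateway):

// Hypothetical lookup abstraction; how you resolve a plan from an API key is up to you.
public interface PlanService {
    String planFor(String apiKey); // e.g. "SILVER" or "GOLD"
}

// Inside ApiKeyRateLimiter, replace the constant-returning methods with plan-aware ones:
private int getReplenishRate(String routeId, String apiKey) {
    return "GOLD".equals(planService.planFor(apiKey)) ? 50 : 5;
}

private int getBurstCapacity(String routeId, String apiKey) {
    return "GOLD".equals(planService.planFor(apiKey)) ? 100 : 10;
}

Here planService would be another constructor-injected collaborator of ApiKeyRateLimiter.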
So, the route would look like this:
@Component
public class Routes {

    @Autowired
    ApiKeyRateLimiter rateLimiter;

    @Autowired
    ApiKeyResolver apiKeyResolver;

    @Bean
    public RouteLocator theRoutes(RouteLocatorBuilder b) {
        return b.routes()
            .route(p -> p
                .path("/unlimited")
                .uri("http://httpbin.org:80/anything?route=unlimited")
            )
            .route(p -> p
                .path("/limited")
                .filters(f ->
                    f.requestRateLimiter(r -> {
                        r.setKeyResolver(apiKeyResolver);
                        r.setRateLimiter(rateLimiter);
                    })
                )
                .uri("http://httpbin.org:80/anything?route=limited")
            )
            .build();
    }
}
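The ApiKeyResolver referenced above is not shown in the original answer. A minimal sketch, assuming the API key is carried in an X-API-KEY request header (the header name and fallback value are illustrative):

import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

@Component
public class ApiKeyResolver implements KeyResolver {

    @Override
    public Mono<String> resolve(ServerWebExchange exchange) {
        // Resolve the rate-limiting key from a request header; requests without
        // a key all fall into a single shared "anonymous" bucket.
        String apiKey = exchange.getRequest().getHeaders().getFirst("X-API-KEY");
        return Mono.just(apiKey != null ? apiKey : "anonymous");
    }
}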
Hope it saves a work day for somebody...

Related

Publish FluxSink of different object types

I have an RSocket endpoint that responds with a Flux:
@MessageMapping("responses")
Flux<?> deal(@Payload String message) {
    return myService.generateResponses(message);
}
The responses can be any of three different types of objects, produced asynchronously using the following code (if it worked):
public Flux<?> generateResponses(String request) {
    // Setup response sinks
    final FluxProcessor publish = EmitterProcessor.create().serialize();
    final FluxSink<Response1> sink1 = publish.sink();
    final FluxSink<Response2> sink2 = publish.sink();
    final FluxSink<Response3> sink3 = publish.sink();
    // Get async responses: starts a new thread that gathers responses and updates the sinks
    new MyResponses(request, sink1, sink2, sink3);
    // Return the Flux
    Flux<?> output = Flux
        .from(publish
        .log());
    return output;
}
The problem is that when I populate the sinks with different objects only the first sink is actually publishing back to the subscriber.
public class MyResponses extends CacheListenerAdapter {

    private FluxSink<Response1> sink1;
    private FluxSink<Response2> sink2;
    private FluxSink<Response3> sink3;

    // Constructor is omitted for brevity

    @Override
    public void afterCreate(EntryEvent event) {
        if (event.getNewValue() instanceof Response1) {
            Response1 r1 = (Response1) event.getNewValue();
            sink1.next(r1);
        }
        if (event.getNewValue() instanceof Response2) {
            Response2 r2 = (Response2) event.getNewValue();
            sink2.next(r2);
        }
        if (event.getNewValue() instanceof Response3) {
            Response3 r3 = (Response3) event.getNewValue();
            sink3.next(r3);
        }
    }
}
If I make the sinks of type <?> then there's a .next error:
The method next(capture#2-of ?) in the type FluxSink<capture#2-of ?> is not applicable for the arguments (Response1)
Is there a better approach to this requirement?
The reason this did not work with different objects was to do with Spring Boot Data Geode serialization of the underlying object types. The way to get the Flux of objects to work was to use a single sink of type <Object>:
public Flux<Object> generateResponses(String request) {
    // Setup the Flux
    EmitterProcessor<Object> emitter = EmitterProcessor.create();
    FluxSink<Object> sink = emitter.sink(FluxSink.OverflowStrategy.LATEST);
    // Get async responses: starts a new thread that gathers responses and updates the sink
    new MyResponses(request, sink);
    // Setup an output Flux to publish the input Flux
    Flux<Object> out = Flux
        .from(emitter
        .log(log.getName()));
    return out;
}
The event handler then uses the single sink:
public class MyResponses extends CacheListenerAdapter {

    private FluxSink<Object> sink;

    // Constructor is omitted for brevity

    @Override
    public void afterCreate(EntryEvent event) {
        if (event.getNewValue() instanceof Response1) {
            Response1 r1 = (Response1) event.getNewValue();
            sink.next(r1);
        }
        if (event.getNewValue() instanceof Response2) {
            Response2 r2 = (Response2) event.getNewValue();
            sink.next(r2);
        }
        if (event.getNewValue() instanceof Response3) {
            Response3 r3 = (Response3) event.getNewValue();
            sink.next(r3);
        }
    }
}
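As a side note, EmitterProcessor has been deprecated since Reactor 3.4; the same single-sink idea can be expressed with the Sinks API. A sketch under that assumption (the class name is illustrative):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

public class ResponsePublisher {

    // One sink accepts all three response types, so a single Flux publishes everything.
    private final Sinks.Many<Object> sink = Sinks.many().multicast().onBackpressureBuffer();

    public Flux<Object> responses() {
        return sink.asFlux().log();
    }

    public void publish(Object response) {
        // tryEmitNext reports failures via its result instead of throwing.
        sink.tryEmitNext(response);
    }
}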

How to configure Micronaut and Micrometer to write ILP directly to InfluxDB?

I have a Micronaut application that uses Micrometer to report metrics to InfluxDB with the micronaut-micrometer project. Currently it is using the Statsd Registry provided via the io.micronaut.configuration:micronaut-micrometer-registry-statsd dependency.
I would like to instead output metrics in Influx Line Protocol (ILP), but the micronaut-micrometer project does not offer an Influx Registry currently. I tried to work around this by importing the io.micrometer:micrometer-registry-influx dependency and configuring an InfluxMeterRegistry manually like this:
@Factory
public class MyMetricRegistryConfigurer implements MeterRegistryConfigurer {

    @Bean
    @Primary
    @Singleton
    public MeterRegistry getMeterRegistry() {
        InfluxConfig config = new InfluxConfig() {
            @Override
            public Duration step() {
                return Duration.ofSeconds(10);
            }

            @Override
            public String db() {
                return "metrics";
            }

            @Override
            public String get(String k) {
                return null; // accept the rest of the defaults
            }
        };
        return new InfluxMeterRegistry(config, Clock.SYSTEM);
    }

    @Override
    public boolean supports(MeterRegistry meterRegistry) {
        return meterRegistry instanceof InfluxMeterRegistry;
    }
}
When the application runs, the metrics are exposed on my /metrics endpoint as I would expect, but nothing gets written to InfluxDB. I confirmed that my local InfluxDB accepts metrics at the expected localhost:8086/write?db=metrics endpoint using curl. Can anyone give me some pointers to get this working? I'm wondering if I need to manually define a reporter somewhere...
After playing around for a bit, I got this working with the following code:
@Factory
public class InfluxMeterRegistryFactory {

    @Bean
    @Singleton
    @Requires(property = MeterRegistryFactory.MICRONAUT_METRICS_ENABLED,
              value = StringUtils.TRUE, defaultValue = StringUtils.TRUE)
    @Requires(beans = CompositeMeterRegistry.class)
    public InfluxMeterRegistry getMeterRegistry() {
        InfluxConfig config = new InfluxConfig() {
            @Override
            public Duration step() {
                return Duration.ofSeconds(10);
            }

            @Override
            public String db() {
                return "metrics";
            }

            @Override
            public String get(String k) {
                return null; // accept the rest of the defaults
            }
        };
        return new InfluxMeterRegistry(config, Clock.SYSTEM);
    }
}
I also noticed that an InfluxMeterRegistry will be available out of the box in micronaut-micrometer as of v1.2.0.
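To verify that metrics actually reach InfluxDB, here is a hedged sketch of a controller that records a counter through the injected registry (the controller and metric names are illustrative, not from the original post); with the 10-second step configured above, the counter should appear in the metrics database shortly after the first request:

import io.micrometer.core.instrument.MeterRegistry;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

@Controller("/ping")
public class PingController {

    private final MeterRegistry meterRegistry;

    public PingController(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    @Get
    public String ping() {
        // Each call increments a counter that the InfluxMeterRegistry flushes on its step interval.
        meterRegistry.counter("ping.requests").increment();
        return "pong";
    }
}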

Recommendation Engine using Apache Spark MLlib showing zero recommendations after processing all operations

I am a newbie when it comes to implementing ML algorithms. I wanted to build a recommendation engine and, after a little experimenting, learned that collaborative filtering can be used for this. I am using Apache Spark for it. I got help from one of the blogs and tried to implement the same locally. Below is the code I tried out. Every time I execute it, the count of recommendations that gets printed is always zero. I don't see any evident error as such. Could someone please help me understand this? Also, please feel free to point me to any other reference on this topic.
package mllib.example;

import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.mllib.recommendation.ALS;
import org.apache.spark.mllib.recommendation.MatrixFactorizationModel;
import org.apache.spark.mllib.recommendation.Rating;
import scala.Tuple2;

public class RecommendationEngine {

    public static void main(String[] args) {
        // Create Java spark context
        SparkConf conf = new SparkConf().setAppName("Recommendation System Example").setMaster("local[2]").set("spark.executor.memory", "1g");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read user-item rating file. format - userId,itemId,rating
        JavaRDD<String> userItemRatingsFile = sc.textFile(args[0]);
        System.out.println("Count is " + userItemRatingsFile.count());

        // Read item description file. format - itemId, itemName, Other Fields,..
        JavaRDD<String> itemDescritpionFile = sc.textFile(args[1]);
        System.out.println("itemDescritpionFile Count is " + itemDescritpionFile.count());

        // Map file to Ratings(user,item,rating) tuples
        JavaRDD<Rating> ratings = userItemRatingsFile.map(new Function<String, Rating>() {
            public Rating call(String s) {
                String[] sarray = s.split(",");
                return new Rating(Integer.parseInt(sarray[0]), Integer
                        .parseInt(sarray[1]), Double.parseDouble(sarray[2]));
            }
        });
        System.out.println("Ratings RDD Object " + ratings.first().toString());

        // Create tuples(itemId,ItemDescription), will be used later to get names of item from itemId
        JavaPairRDD<Integer, String> itemDescritpion = itemDescritpionFile.mapToPair(
                new PairFunction<String, Integer, String>() {
                    @Override
                    public Tuple2<Integer, String> call(String t) throws Exception {
                        String[] s = t.split(",");
                        return new Tuple2<Integer, String>(Integer.parseInt(s[0]), s[1]);
                    }
                });
        System.out.println("itemDescritpion RDD Object " + ratings.first().toString());

        // Build the recommendation model using ALS
        int rank = 10; // 10 latent factors
        int numIterations = Integer.parseInt(args[2]); // number of iterations
        MatrixFactorizationModel model = ALS.trainImplicit(JavaRDD.toRDD(ratings),
                rank, numIterations);
        //ALS.trainImplicit(arg0, arg1, arg2)

        // Create user-item tuples from ratings
        JavaRDD<Tuple2<Object, Object>> userProducts = ratings
                .map(new Function<Rating, Tuple2<Object, Object>>() {
                    public Tuple2<Object, Object> call(Rating r) {
                        return new Tuple2<Object, Object>(r.user(), r.product());
                    }
                });

        // Calculate the itemIds not rated by a particular user, say user with userId = 1
        JavaRDD<Integer> notRatedByUser = userProducts.filter(new Function<Tuple2<Object, Object>, Boolean>() {
            @Override
            public Boolean call(Tuple2<Object, Object> v1) throws Exception {
                if (((Integer) v1._1).intValue() != 0) {
                    return true;
                }
                return false;
            }
        }).map(new Function<Tuple2<Object, Object>, Integer>() {
            @Override
            public Integer call(Tuple2<Object, Object> v1) throws Exception {
                return (Integer) v1._2;
            }
        });

        // Create user-item tuples for the items that are not rated by user, with user id 1
        JavaRDD<Tuple2<Object, Object>> itemsNotRatedByUser = notRatedByUser
                .map(new Function<Integer, Tuple2<Object, Object>>() {
                    public Tuple2<Object, Object> call(Integer r) {
                        return new Tuple2<Object, Object>(0, r);
                    }
                });

        // Predict the ratings of the items not rated by user for the user
        JavaRDD<Rating> recomondations = model.predict(itemsNotRatedByUser.rdd()).toJavaRDD().distinct();

        // Sort the recommendations by rating in descending order
        recomondations = recomondations.sortBy(new Function<Rating, Double>() {
            @Override
            public Double call(Rating v1) throws Exception {
                return v1.rating();
            }
        }, false, 1);
        System.out.println("recomondations Total is " + recomondations.count());

        // Get top 10 recommendations
        JavaRDD<Rating> topRecomondations = sc.parallelize(recomondations.take(10));

        // Join top 10 recommendations with item descriptions
        JavaRDD<Tuple2<Rating, String>> recommendedItems = topRecomondations.mapToPair(
                new PairFunction<Rating, Integer, Rating>() {
                    @Override
                    public Tuple2<Integer, Rating> call(Rating t) throws Exception {
                        return new Tuple2<Integer, Rating>(t.product(), t);
                    }
                }).join(itemDescritpion).values();
        System.out.println("recommendedItems count is " + recommendedItems.count());

        // Print the top recommendations for user 1.
        recommendedItems.foreach(new VoidFunction<Tuple2<Rating, String>>() {
            @Override
            public void call(Tuple2<Rating, String> t) throws Exception {
                System.out.println(t._1.product() + "\t" + t._1.rating() + "\t" + t._2);
            }
        });
    }
}
Also, I see that this job runs for a really long time, and it creates the model every time. Is there a way I can create the model once, persist it, and load it for consecutive predictions? Can we by any chance improve the execution speed of this job?
Thanks in Advance
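On the persistence question: MatrixFactorizationModel supports save and load, so the model can be trained once and reused for later predictions. A minimal sketch that reuses the JavaSparkContext and RDDs from the code above (the path is illustrative):

// Train once, then persist the factor matrices (path is illustrative).
model.save(sc.sc(), "/tmp/recommendation-model");

// In later runs, skip training and load the persisted model instead.
MatrixFactorizationModel savedModel =
        MatrixFactorizationModel.load(sc.sc(), "/tmp/recommendation-model");
JavaRDD<Rating> laterRecommendations = savedModel.predict(itemsNotRatedByUser.rdd()).toJavaRDD();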

How can I convert an object to JSON in a Rabbit reply?

I have two applications communicating with each other using RabbitMQ.
I need to send (from app1) an object to a listener (in app2), and after some processing (on the listener) it answers me with another object. Right now I am receiving this error:
ClassNotFound
I am using this config for Rabbit in both applications:
@Configuration
public class RabbitConfiguration {

    public final static String EXCHANGE_NAME = "paymentExchange";
    public final static String EVENT_ROUTING_KEY = "eventRoute";
    public final static String PAYEMNT_ROUTING_KEY = "paymentRoute";
    public final static String QUEUE_EVENT = EXCHANGE_NAME + "." + "event";
    public final static String QUEUE_PAYMENT = EXCHANGE_NAME + "." + "payment";
    public final static String QUEUE_CAPTURE = EXCHANGE_NAME + "." + "capture";

    @Bean
    public List<Declarable> ds() {
        return queues(QUEUE_EVENT, QUEUE_PAYMENT);
    }

    @Autowired
    private ConnectionFactory rabbitConnectionFactory;

    @Bean
    public AmqpAdmin amqpAdmin() {
        return new RabbitAdmin(rabbitConnectionFactory);
    }

    @Bean
    public DirectExchange exchange() {
        return new DirectExchange(EXCHANGE_NAME);
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory);
        r.setExchange(EXCHANGE_NAME);
        r.setChannelTransacted(false);
        r.setConnectionFactory(rabbitConnectionFactory);
        r.setMessageConverter(jsonMessageConverter());
        return r;
    }

    @Bean
    public MessageConverter jsonMessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    private List<Declarable> queues(String... nomes) {
        List<Declarable> result = new ArrayList<>();
        for (int i = 0; i < nomes.length; i++) {
            result.add(newQueue(nomes[i]));
            if (nomes[i].equals(QUEUE_EVENT))
                result.add(makeBindingToQueue(nomes[i], EVENT_ROUTING_KEY));
            else
                result.add(makeBindingToQueue(nomes[i], PAYEMNT_ROUTING_KEY));
        }
        return result;
    }

    private static Binding makeBindingToQueue(String queueName, String route) {
        return new Binding(queueName, DestinationType.QUEUE, EXCHANGE_NAME, route, null);
    }

    private static Queue newQueue(String nome) {
        return new Queue(nome);
    }
}
I send the message using this:
String response = (String) rabbitTemplate.convertSendAndReceive(RabbitConfiguration.EXCHANGE_NAME,
RabbitConfiguration.PAYEMNT_ROUTING_KEY, domainEvent);
And I wait for the response, casting it to the expected object.
This communication is between two different applications using the same Rabbit server.
How can I solve this?
I expected Rabbit to convert the message to JSON on the send operation and do the same on the reply, so I created an object corresponding to the reply JSON.
Please show the configuration for the listener. You should make sure the listener container there is supplied with the Jackson2JsonMessageConverter as well, so that the __TypeId__ header is carried back with the reply.
Also see the Spring AMQP JSON sample for some help.
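For reference, a minimal sketch of what that listener-side configuration in app2 might look like (the class name is illustrative, not from the original post):

import org.springframework.amqp.rabbit.annotation.EnableRabbit;
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.support.converter.Jackson2JsonMessageConverter;
import org.springframework.amqp.support.converter.MessageConverter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableRabbit
public class ListenerConfiguration {

    @Bean
    public MessageConverter jsonMessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // Use the JSON converter on the listener side too, so replies carry
        // the __TypeId__ header and can be converted back on the sending side.
        factory.setMessageConverter(jsonMessageConverter());
        return factory;
    }
}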

Facing a critical performance issue in PrimeFaces 4 & 5

I am working on a project that deals with heavy data sets. I am using PrimeFaces 4 & 5, Spring and Hibernate. I have to display very large data sets, e.g. at least 3000 rows with 100 columns, with various features such as sorting, filtering, row expansion, etc. My problem is that my application takes 8 to 10 minutes to show the whole page, and the other functionality (sorting, filtering) also takes a lot of time. My client is not happy at all. I could use pagination for this, but my client does not want paging. So I decided to use live scrolling, but unfortunately I failed to implement it with or without lazy loading, as there were bugs in PrimeFaces regarding live scroll. I have also posted this question here earlier, but no solution was found.
This performance issue is critical and a show stopper for me. To show 3000 rows with 100 columns, the size of the page being loaded is ~10 MB.
I have measured the time consumed by the various JSF lifecycle phases; using a PhaseListener I figured out that it is the browser that takes the time to parse the response rendered by JSF. Completing all the phases took my application only 25 seconds.
At a minimum I want to increase the performance of my project. Please share any idea or suggestion that could help overcome this problem.
Note: there are no database manipulations in the getters and setters, and no complex business logic.
UPDATE:
This is my datatable without lazy loading:
<p:dataTable
style="width:100%"
id="cdTable"
selection="#{controller.selectedArray}"
resizableColumns="true"
draggableColumns="true"
var="cd"
value="#{controller.cdDataModel}"
editable="true"
editMode="cell"
selectionMode="multiple"
rowSelectMode="add"
scrollable="true"
scrollHeight="650"
rowKey="#{cd.id}"
rowIndexVar="rowIndex"
styleClass="screenScrollStyle"
liveScroll="true"
scrollRows="50"
filterEvent="enter"
widgetVar="dt4"
>
Here everything is working except filtering. Once I filter, the first page is displayed but I am unable to sort or live-scroll the datatable. Note that I tested this in PrimeFaces 5.
2nd approach
With lazy loading on the same datatable:
1) When I add rows="100", live scrolling works, but there are problems with row editing and row expansion; filtering & sorting work.
2) When I remove rows, live scrolling works along with row editing, row expansion, etc., but filtering & sorting don't work.
My LazyDataModel is as follows:
public class MyDataModel extends LazyDataModel<YData> {

    private static final long serialVersionUID = 1L;

    private List<YData> datasource;

    public MyDataModel() {
    }

    public MyDataModel(List<YData> datasource) {
        this.datasource = datasource;
    }

    @Override
    public List<YData> load(int first, int pageSize,
            List<SortMeta> multiSortMeta, Map<String, Object> filters) {
        System.out.println("multisort wala load");
        return super.load(first, pageSize, multiSortMeta, filters);
    }

    @Override
    public YData getRowData(String rowKey) {
        // In a real app, a more efficient way like a query by rowKey should be
        // implemented to deal with huge data
        // List<YData> yList = (List<YData>) getWrappedData();
        for (YData y : datasource) {
            System.out.println("datasource :" + datasource.size());
            if (y.getId() != null) {
                if (y.getId() == (new Long(rowKey))) {
                    return y;
                }
            }
        }
        return null;
    }

    @Override
    public Object getRowKey(YData y) {
        return y.getId();
    }

    @Override
    public void setRowIndex(int rowIndex) {
        /*
         * The following is in ancestor (LazyDataModel):
         * this.rowIndex = rowIndex == -1 ? rowIndex : (rowIndex % pageSize);
         */
        if (rowIndex == -1 || getPageSize() == 0) {
            super.setRowIndex(-1);
        }
        else {
            super.setRowIndex(rowIndex % getPageSize());
        }
    }

    @Override
    public List<YData> load(int first, int pageSize, String sortField, SortOrder sortOrder, Map<String, Object> filters) {
        List<YData> data = new ArrayList<YData>();
        System.out.println("sort order : " + sortOrder);

        // filter
        for (YData yInfo : datasource) {
            boolean match = true;
            for (Iterator<String> it = filters.keySet().iterator(); it.hasNext();) {
                try {
                    String filterProperty = it.next();
                    String filterValue = String.valueOf(filters.get(filterProperty));
                    Field yField = yInfo.getClass().getDeclaredField(filterProperty);
                    yField.setAccessible(true);
                    String fieldValue = String.valueOf(yField.get(yInfo));
                    if (filterValue == null || fieldValue.startsWith(filterValue)) {
                        match = true;
                    }
                    else {
                        match = false;
                        break;
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                    match = false;
                }
            }
            if (match) {
                data.add(yInfo);
            }
        }

        // sort
        if (sortField != null) {
            Collections.sort(data, new LazySorter(sortField, sortOrder));
        }

        int dataSize = data.size();
        this.setRowCount(dataSize);

        // paginate
        if (dataSize > pageSize) {
            try {
                List<YData> subList = data.subList(first, first + pageSize);
                return subList;
            }
            catch (IndexOutOfBoundsException e) {
                return data.subList(first, first + (dataSize % pageSize));
            }
        }
        else {
            return data;
        }
    }

    @Override
    public int getRowCount() {
        return super.getRowCount();
    }
}
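For reference, the LazySorter used in the load method above is not shown in the original post; a minimal reflection-based sketch, similar to the PrimeFaces showcase version (an assumption, not the poster's actual class):

import java.lang.reflect.Field;
import java.util.Comparator;
import org.primefaces.model.SortOrder;

public class LazySorter implements Comparator<YData> {

    private final String sortField;
    private final SortOrder sortOrder;

    public LazySorter(String sortField, SortOrder sortOrder) {
        this.sortField = sortField;
        this.sortOrder = sortOrder;
    }

    @Override
    @SuppressWarnings("unchecked")
    public int compare(YData a, YData b) {
        try {
            // Read the sort field from both rows via reflection and compare them.
            Field field = YData.class.getDeclaredField(sortField);
            field.setAccessible(true);
            Comparable<Object> valueA = (Comparable<Object>) field.get(a);
            Object valueB = field.get(b);
            int result = valueA.compareTo(valueB);
            // Flip the result for descending order.
            return SortOrder.DESCENDING.equals(sortOrder) ? -result : result;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}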
I am fed up with these issues and they have become a show stopper for me. I have even tried PrimeFaces 5.
If your data is loaded from a DB, I suggest you write a better LazyDataModel like:
public class ElementiLazyDataModel<T> extends LazyDataModel<T> implements Serializable {

    private Service<T> abstractFacade;

    public ElementiLazyDataModel(Service<T> abstractFacade) {
        this.abstractFacade = abstractFacade;
    }

    public Service<T> getAbstractFacade() {
        return abstractFacade;
    }

    public void setAbstractFacade(Service<T> abstractFacade) {
        this.abstractFacade = abstractFacade;
    }

    @Override
    public List<T> load(int first, int pageSize, String sortField, SortOrder sortOrder, Map<String, Object> filters) {
        PaginatedResult<T> pr = abstractFacade.findRange(new int[]{first, first + pageSize}, sortField, sortOrder, filters);
        setRowCount(new Long(pr.getTotalItems()).intValue());
        return pr.getItems();
    }
}
The service is some kind of backend communication (like an EJB) injected into the managed bean that uses this model; a sketch of that wiring is shown below.
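For illustration only (the bean, entity, and service names are assumptions, not from the original answer):

import java.io.Serializable;
import javax.annotation.PostConstruct;
import javax.faces.view.ViewScoped;
import javax.inject.Inject;
import javax.inject.Named;
import org.primefaces.model.LazyDataModel;

@Named
@ViewScoped
public class ElementiBean implements Serializable {

    @Inject
    private Service<Elemento> elementoService; // backend facade, e.g. an EJB

    private LazyDataModel<Elemento> model;

    @PostConstruct
    public void init() {
        // The datatable binds its value to this lazy model; every scroll,
        // sort or filter action triggers load(...) with the new range.
        model = new ElementiLazyDataModel<>(elementoService);
    }

    public LazyDataModel<Elemento> getModel() {
        return model;
    }
}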
The pagination service itself may look like this:
@Override
public PaginatedResult<T> findRange(int[] range, String sortField, SortOrder sortOrder, Map<String, Object> filters) {
    final Query query = getEntityManager().createQuery("select x from " + entityClass.getSimpleName() + " x")
            .setFirstResult(range[0]).setMaxResults(range[1] - range[0] + 1);
    // Add filter, sort, etc.
    final Query queryCount = getEntityManager().createQuery("select count(x) from " + entityClass.getSimpleName() + " x");
    // Add filter, sort, etc.
    Long rowCount = (Long) queryCount.getSingleResult();
    List<T> resultList = query.getResultList();
    return new PaginatedResult<T>(resultList, rowCount);
}
Note that you have to do a paginated query (with JPA like this, the ORM does the query for you; if you don't use an ORM you have to write the paginated query yourself, for Oracle look at Top-N queries, for example: http://oracle-base.com/articles/misc/top-n-queries.php).
Remember that your returned object must also contain the total record count, obtained with a fast count query:
public class PaginatedResult<T> implements Serializable {

    private List<T> items;
    private long totalItems;

    public PaginatedResult() {
    }

    public PaginatedResult(List<T> items, long totalItems) {
        this.items = items;
        this.totalItems = totalItems;
    }

    public List<T> getItems() {
        return items;
    }

    public void setItems(List<T> items) {
        this.items = items;
    }

    public long getTotalItems() {
        return totalItems;
    }

    public void setTotalItems(long totalItems) {
        this.totalItems = totalItems;
    }
}
All this is useful if your database table is correctly set up; pay attention to the execution plan of the resulting queries and add the right indexes.
Hope this gives some hints to improve your performance.
In the end, remind your end user that the human eye can't see more than 10-20 records at once, so it is quite useless to have thousands of records on one page.
You have used the default load implementation that is used in the PrimeFaces showcases. This is not the correct implementation for your case, where you load your data from a database.
The load method should build the correct query, taking into account:
1) the filter fields that are used, for example:
String query = "select e from Entity e where lower(e.f1) like lower('" + filters.get(key) + "%') and ..."; // etc. for the other fields
2) the sorting columns that are used, for example:
query.append(" order by e.").append(sortField).append(sortOrder == SortOrder.ASCENDING ? " asc" : " desc"); // etc. for the other columns
3) the total count of your query WITH 1) attached to it, for example:
Long totalCount = (Long) entityManager.createQuery("select count(e) from Entity e where lower(e.f1) like lower('filterKey1%') and lower(e.f2) like lower('filterKey2%') ...").getSingleResult();
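One caution: concatenating filter values directly into the JPQL string, as in the examples above, is open to injection and breaks on quotes. A hedged sketch of the same idea with bound parameters, meant to slot into the load/findRange implementation shown earlier (entity name and field handling are illustrative):

// Build the filter clause with named parameters instead of concatenated values.
StringBuilder jpql = new StringBuilder("select e from Entity e where 1 = 1");
Map<String, Object> params = new HashMap<>();
int i = 0;
for (Map.Entry<String, Object> filter : filters.entrySet()) {
    String param = "p" + i++;
    jpql.append(" and lower(e.").append(filter.getKey()).append(") like lower(:").append(param).append(")");
    params.put(param, filter.getValue() + "%");
}
if (sortField != null) {
    jpql.append(" order by e.").append(sortField)
        .append(sortOrder == SortOrder.ASCENDING ? " asc" : " desc");
}
Query query = entityManager.createQuery(jpql.toString());
params.forEach(query::setParameter);
query.setFirstResult(first).setMaxResults(pageSize);
List<?> page = query.getResultList();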
