I have a fairly simple cache configuration:
<cache name="MyCache"
maxElementsInMemory="200000"
eternal="false"
timeToIdleSeconds="43200"
timeToLiveSeconds="43200"
overflowToDisk="false"
diskPersistent="false"
memoryStoreEvictionPolicy="LRU"
/>
I create my cache in the following way:
private Ehcache myCache =
CacheManager.getInstance().getEhcache("MyCache");
I use my cache like this:
public MyResponse processRequest(MyRequest request) {
Element element = myCache.get(request);
if (element != null) {
return (MyResponse)element.getValue();
} else {
MyResponse response = remoteService.process(request);
myCache.put(new Element(request, response));
return response;
}
}
Every 10,000 calls to the processRequest() method, I log stats about my cache like this:
logger.debug("Cache name: " + myCache.getName());
logger.debug("Max elements in memory: " + myCache.getMaxElementsInMemory());
logger.debug("Memory store size: " + myCache.getMemoryStoreSize());
logger.debug("Hit count: " + myCache.getHitCount());
logger.debug("Miss count: " + myCache.getMissCountNotFound());
logger.debug("Miss count (because expired): " + myCache.getMissCountExpired());
I see a good number of hits, which tells me that it's working.
However, what I'm seeing is that after a couple of hours, getMemoryStoreSize() starts to exceed getMaxElementsInMemory(). It keeps growing, and eventually it renders the JVM unstable, because the GC starts doing full GCs nonstop to reclaim memory (and I have a pretty large heap cap set). When I profiled the heap, it pointed to the LRU policy's SpoolingLinkedHashMap taking most of the space.
I do have a lot of requests hitting this cache, and my theory is that Ehcache's LRU algorithm is perhaps not keeping up with evicting elements when the store is full. I tried the LFU policy and it also let the memory store go over maxElementsInMemory.
I then started looking at the Ehcache code to see if I could prove my theory (inside LruMemoryStore$SpoolingLinkedHashMap):
private boolean removeLeastRecentlyUsedElement(Element element) throws CacheException {
//check for expiry and remove before going to the trouble of spooling it
if (element.isExpired()) {
notifyExpiry(element);
return true;
}
if (isFull()) {
evict(element);
return true;
} else {
return false;
}
}
From here it looks OK, so I then looked at the evict() method:
protected final void evict(Element element) throws CacheException {
boolean spooled = false;
if (cache.isOverflowToDisk()) {
if (!element.isSerializable()) {
if (LOG.isDebugEnabled()) {
LOG.debug(new StringBuffer("Object with key ").append(element.getObjectKey())
.append(" is not Serializable and cannot be overflowed to disk"));
}
} else {
spoolToDisk(element);
spooled = true;
}
}
if (!spooled) {
cache.getCacheEventNotificationService().notifyElementEvicted(element, false);
}
}
This looks like it doesn't actually evict (despite the name) but rather relies on the caller to evict. So I looked at the implementation of the put() method, and I don't see it calling it. I'm clearly missing something here and would appreciate some help on this.
Thanks!
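For context on that last point: SpoolingLinkedHashMap extends java.util.LinkedHashMap, and it is LinkedHashMap.put() itself that invokes removeEldestEntry() after every insertion, which is why put() never calls the eviction method directly. A minimal sketch of that general JDK mechanism (not Ehcache's actual code):
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the hook SpoolingLinkedHashMap plugs into: after each put(),
// LinkedHashMap calls removeEldestEntry(); returning true drops the
// eldest (least recently used) entry.
class LruMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruMap(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true yields LRU ordering
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}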
Your configuration looks fine to me. The only thing you need is to use the right key for caching.
Do not put the complete request object in as your cache key; put some unique value from your request object instead. For example:
MyResponse response = remoteService.process(request);
myCache.put(new Element(request.getCustomerID(), response));
return response;
This should work for you. The reason your caching is not working is that each time your request object is a new object, it never finds the response in the cache, so it keeps adding to the cache.
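If you do switch to a scalar key, note that both the lookup and the store must use the same key. A minimal sketch of the reworked method (getCustomerID() is the hypothetical accessor from the example above; substitute whatever uniquely identifies the request):
// Sketch: get() and put() must agree on the key. getCustomerID() is a
// hypothetical accessor; use whatever uniquely identifies the request.
public MyResponse processRequest(MyRequest request) {
    Element element = myCache.get(request.getCustomerID());
    if (element != null) {
        return (MyResponse) element.getValue();
    }
    MyResponse response = remoteService.process(request);
    myCache.put(new Element(request.getCustomerID(), response));
    return response;
}
Alternatively, if the request object itself stays the key, MyRequest must implement equals() and hashCode() consistently, or every lookup will miss.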
The maxElementsInMemory attribute is deprecated; use maxEntriesLocalHeap instead.
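For example, the equivalent of the configuration above on Ehcache 2.5+ would look roughly like this (a sketch; overflowToDisk/diskPersistent were also replaced in newer versions and, since both were false above, are simply dropped here):
<cache name="MyCache"
maxEntriesLocalHeap="200000"
eternal="false"
timeToIdleSeconds="43200"
timeToLiveSeconds="43200"
memoryStoreEvictionPolicy="LRU"
/>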
I'm trying to load a large-ish (1000 lines, 68k) text file using
final String enString = await rootBundle.loadString('res/string/string_en.json');
The Dart class function AssetBundle.loadString that loads the string is
Future<String> loadString(String key, { bool cache = true }) async {
final ByteData data = await load(key);
if (data == null)
throw FlutterError('Unable to load asset: $key');
// 50 KB of data should take 2-3 ms to parse on a Moto G4, and about 400 μs
// on a Pixel 4.
if (data.lengthInBytes < 50 * 1024) {
return utf8.decode(data.buffer.asUint8List());
}
// For strings larger than 50 KB, run the computation in an isolate to
// avoid causing main thread jank.
return compute(_utf8decode, data, debugLabel: 'UTF8 decode for "$key"');
}
Looking at the code above, if the file is bigger than 50k, as mine is, an isolate is used.
As a test, I cut my file in half (so 32k) and it loaded in a second (not using the isolate). But, unedited, the function hangs when the isolate is used.
My file is just a simple JSON file of key-value pairs. Here are the first few lines:
{
"ctaButtonConfirm": "Confirm",
"ctaButtonContinue": "Continue",
"ctaButtonReview": "Review",
"balance": "Balance",
"totalBalance": "Total Balance",
"transactions": "Transactions",
:
So it seems like it hangs when the isolate is used?
EDIT
Based on the loadString code above, I wrote an extension function that doesn't use an isolate, and it works fine. So it looks like the isolate doesn't like my file?
extension AssetBundleX on AssetBundle {
Future<String> loadStringWithoutIsolate(String key) async {
final ByteData data = await load(key);
return utf8.decode(data.buffer.asUint8List());
}
}
You can't access rootBundle from a spawned isolate, so use the main isolate instead.
Or, as the docs for compute() note: "This is useful for operations that take longer than a few milliseconds, and which would therefore risk skipping frames. For tasks that will only take a few milliseconds, consider SchedulerBinding.scheduleTask instead."
So you can try SchedulerBinding.scheduleTask instead.
I use BatchInserters.batchDatabase to create an embedded Neo4j 2.1.5 database. When I only put a small amount of data in it, everything works fine.
But if I increase the amount of data put in, Neo4j fails to persist the latest properties set with setProperty. I can read those properties back with getProperty before I call shutdown, but when I load the database again with new GraphDatabaseFactory().newEmbeddedDatabase, those properties are lost.
The strange thing is that Neo4j doesn't report any error or throw an exception, so I have no clue what's going wrong or where. Java should have enough memory to handle both the small database (2.66 MiB; 3,000 nodes, 3,000 relationships) and the big one (26.32 MiB; 197,267 nodes, 390,659 relationships).
It's hard for me to extract a running example to show the problem, but I can do so if it helps. Here are the main steps I take:
def createDataBase(rules: AllRules) {
// empty the database folder
deleteFileOrDirectory(new File(mainProjectPathNeo4j))
// Create an index on some properties
db = new GraphDatabaseFactory().newEmbeddedDatabase(mainProjectPathNeo4j)
engine = new ExecutionEngine(db)
createIndex()
db.shutdown()
// Fill the database
db = BatchInserters.batchDatabase(mainProjectPathNeo4j)
//createBatchIndex
try {
// Every function loads some data
loadAllModulesBatch(rules)
loadAllLinkModulesBatch(rules)
loadFormalModulesBatch(rules)
loadInLinksBatch()
loadHILBatch()
createStandardLinkModules(rules)
createStandardLinkSets(rules)
// validateModel shows the problem
validateModel(rules)
} catch {
// I want to see if my environment (BIRT) is catching any exceptions
case _ => val a = 7
} finally {
db.shutdown()
}
}
validateModel updates some properties of already-created nodes:
def validateModule(srcM: GenericModule) {
srcM.node.setProperty("isValidated", true)
assert(srcM.node == Neo4jScalaDataSource.testNode)
assert(srcM.node eq Neo4jScalaDataSource.testNode)
assert(srcM.node.getProperty("isValidated").asInstanceOf[Boolean])
}
When I finally use Cypher to get some data back, the properties set by validateModel are missing:
class Neo4jScalaDataSet extends ScriptedDataSetEventAdapter {
override def beforeOpen(...) {
result = Neo4jScalaDataSource.engine.profile(
"""
MATCH (fm:FormalModule {isValidated: true}) RETURN fm.fullName as fullName, fm.uid as uid
""");
iter = result.iterator()
}
override def fetch(...) = {
if (iter.hasNext()) {
for (e <- iter.next().entrySet()) {
row.setColumnValue(e.getKey(), e.getValue())
}
count += 1;
row.setColumnValue("count", count)
return true
} else {
logger.log(Level.INFO, result.executionPlanDescription().toString())
return super.fetch(dataSet, row)
}
}
batchDatabase indeed causes this problem.
I have switched to BatchInserters.inserter and now everything works just fine.
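For anyone hitting the same issue, a minimal sketch of the BatchInserters.inserter approach (the property names are taken from the question; the store path is a placeholder):
import java.util.Collections;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserters;

// Sketch: a BatchInserter only guarantees the store is persisted once
// shutdown() completes, so always call it, e.g. in a finally block.
BatchInserter inserter = BatchInserters.inserter("path/to/db"); // placeholder path
try {
    long nodeId = inserter.createNode(
            Collections.<String, Object>singletonMap("fullName", "example"));
    inserter.setNodeProperty(nodeId, "isValidated", true);
} finally {
    inserter.shutdown(); // flushes and persists the store
}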
I have a web app that has a Timer that fires a poll to get data every 3 seconds. It works fine for about 2.5 minutes, then Chromium crashes.
My Dart request code looks like this:
HttpRequest.getString('data/get_load_history_recent.json')
.then((e) => _recentHistoryResponse(e))
.catchError((e) => _recentHistoryError(e));
Can you think of any reasons why this would happen? I assume it's a memory leak...
Edit:
Here is my _recentHistoryResponse()
void _recentHistoryResponse(String data)
{
Map obj = JSON.decode(data);
if(obj['status'] == 'success')
{
List processes = obj['data']['processes'];
List newItems = new List();
List oldIdsArray = new List();
int length = appDataDic.load_history_list.length;
for(HistoryDataVO oldVO in appDataDic.load_history_list)
{
oldIdsArray.add(oldVO.loadID);
}
for(Map process in processes)
{
HistoryDataVO dataVO = new HistoryDataVO();
dataVO.loadID = process['loadID'];
dataVO.time = process['time'];
dataVO.loadType = process['loadType'];
dataVO.fileName = process['fileName'];
dataVO.label = process['label'];
dataVO.description = process['description'];
dataVO.count = process['count'];
dataVO.progress = process['progress'];
dataVO.loadTask = process['loadTask'];
// Check if the item is currently in the list
if(length >= 1)
{
if(!LoadHistoryHelper.exists(oldIdsArray, dataVO.loadID))
{
dataVO.isNew = true;
}
}
newItems.add(dataVO);
}
appDataDic.load_history_list.clear();
appDataDic.load_history_list.addAll(newItems);
}
}
I have commented out the exists check !LoadHistoryHelper.exists(oldIdsArray, dataVO.loadID) (because this seemed like the obvious place), but the VM still crashes.
Also, I have taken this same code and put it into an isolated app; the only real difference in the poll check is that appDataDic.load_history_list is just an @observable List, not an ObservableList.
Edit 2:
OK, so I have discovered that Map obj = JSON.decode(data); causes the crash. I was reading in a JavaScript forum that timeouts cause memory not to be released (I had never thought of this, but it makes sense); is this true? Can anyone think of a better way to do this? Can I invoke the garbage collector directly? I'm running out of ideas.
There's another question here suggesting a memory leak in HttpRequest; however, I'm not able to find anything in the Dart issue tracker. If you think this might be a real memory leak, it might be worth raising a bug.
I'm currently trying Neo4j 2.0.0 M3 and am seeing some strange behaviour. In my unit tests, everything works as expected (using a newImpermanentDatabase), but in the real thing, I do not get results from graphDatabaseService.findNodesByLabelAndProperty.
Here is the code in question:
ResourceIterator<Node> iterator = graphDB
.findNodesByLabelAndProperty(Labels.User, "EMAIL_ADDRESS", emailAddress)
.iterator();
try {
if (iterator.hasNext()) { // => returns false
return iterator.next();
}
} finally {
iterator.close();
}
return null;
This returns no results. However, when running the following code, I see that my node is there (the MATCH!!!!!!!!! is printed). I also have an index set up via the schema (although, if I read the API correctly, this seems not necessary but is important for performance; an index-creation sketch follows the printout code below):
ResourceIterator<Node> iterator1 = GlobalGraphOperations.at(graphDB).getAllNodesWithLabel(Labels.User).iterator();
while (iterator1.hasNext()) {
Node result = iterator1.next();
UserDao.printoutNode(emailAddress, result);
}
And UserDao.printoutNode:
public static void printoutNode(String emailAddress, Node next) {
System.out.print(next);
ResourceIterator<Label> iterator1 = next.getLabels().iterator();
System.out.print("(");
while (iterator1.hasNext()) {
System.out.print(iterator1.next().name());
}
System.out.print("): ");
for(String key : next.getPropertyKeys()) {
System.out.print(key + ": " + next.getProperty(key).toString() + "; ");
if(emailAddress.equals( next.getProperty(key).toString())) {
System.out.print("MATCH!!!!!!!!!");
}
}
System.out.println();
}
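For reference, the schema index mentioned above would be created along these lines with the 2.0 API (a sketch, not the question's actual setup code):
import java.util.concurrent.TimeUnit;
import org.neo4j.graphdb.Transaction;

// Sketch of schema index creation in Neo4j 2.0. Population runs in the
// background, so wait for the index before relying on lookups.
Transaction tx = graphDB.beginTx();
try {
    graphDB.schema().indexFor(Labels.User).on("EMAIL_ADDRESS").create();
    tx.success();
} finally {
    tx.finish();
}
Transaction waitTx = graphDB.beginTx();
try {
    graphDB.schema().awaitIndexesOnline(10, TimeUnit.SECONDS);
    waitTx.success();
} finally {
    waitTx.finish();
}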
I already debugged through the code, and what I found out is that the call passes via InternalAbstractGraphDatabase.map2Nodes to a DelegatingIndexProxy.getDelegate and ends up in the IndexReader.Empty class, which returns IteratorUtil.EMPTY_ITERATOR, hence false for iterator.hasNext().
Any ideas what I am doing wrong?
Found it:
I had only included neo4j-kernel:2.0.0-M03 on the classpath. The moment I added neo4j-cypher:2.0.0-M03, everything worked fine.
Hope this answer helps save some time for other users.
@Neo4j: it would be nice if an exception were thrown instead of just returning nothing.
@Ricardo: I wanted to, but I was not allowed to yet, as my reputation wasn't good enough as a new SO user.
I want to store position coords (latitude, longitude) in a table in my MySQL DB by querying a URL similar to this one: http://locationstore.com/postlocation.php?latitude=var1&longitude=var2 every ten seconds. The PHP script works like a charm, and getting the coords on the device isn't a problem either, but making the request to the server is proving to be the hard part. My code goes like this:
public class LocationHTTPSender extends Thread {
public void run() {
for (;;) {
try {
//fetch latest coordinates
coords = this.coords();
//reset url
this.url="http://locationstore.com/postlocation.php";
// create uri
uri = URI.create(this.url);
FireAndForgetDestination ffd = null;
ffd = (FireAndForgetDestination) DestinationFactory.getSenderDestination
("MyContext", uri);
if(ffd == null)
{
ffd = DestinationFactory.createFireAndForgetDestination
(new Context("MyContext"), uri);
}
ByteMessage myMsg = ffd.createByteMessage();
myMsg.setStringPayload("doesnt matter");
((HttpMessage) myMsg).setMethod(HttpMessage.POST);
((HttpMessage) myMsg).setQueryParam("latitude", coords[0]);
((HttpMessage) myMsg).setQueryParam("longitude", coords[1]);
((HttpMessage) myMsg).setQueryParam("user", "1");
int i = ffd.sendNoResponse(myMsg);
ffd.destroy();
System.out.println("Lets sleep for a while..");
Thread.sleep(10000);
System.out.println("woke up");
} catch (Exception e) {
// TODO Auto-generated catch block
System.out.println("Exception message: " + e.toString());
e.printStackTrace();
}
}
}
}
I haven't run this code to test it, but I would be suspicious of this call:
ffd.destroy();
According to the API docs:
Closes the destination. This method cancels all outstanding messages,
discards all responses to those messages (if any), suspends delivery
of all incoming messages, and blocks any future receipt of messages
for this Destination. This method also destroys any persistable
outbound and inbound queues. If Destination uses the Push API, this
method will unregister associated push subscriptions. This method
should be called only during the removal of an application.
So, if you're seeing the first request succeed (at least sometimes), and subsequent requests fail, I would try removing that call to destroy().
See the BlackBerry docs example for this here.
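Based on that, a minimal sketch of the loop reusing only the calls from the question (destination created once, destroy() deferred to application removal):
// Sketch: create the destination once, outside the loop, and do not
// call ffd.destroy() per iteration -- per the docs quoted above, it
// should only be called when the application is being removed.
FireAndForgetDestination ffd = (FireAndForgetDestination)
        DestinationFactory.getSenderDestination("MyContext", uri);
if (ffd == null) {
    ffd = DestinationFactory.createFireAndForgetDestination(
            new Context("MyContext"), uri);
}
for (;;) {
    coords = this.coords(); // fetch latest coordinates, as above
    ByteMessage myMsg = ffd.createByteMessage();
    ((HttpMessage) myMsg).setMethod(HttpMessage.POST);
    ((HttpMessage) myMsg).setQueryParam("latitude", coords[0]);
    ((HttpMessage) myMsg).setQueryParam("longitude", coords[1]);
    ((HttpMessage) myMsg).setQueryParam("user", "1");
    ffd.sendNoResponse(myMsg);
    Thread.sleep(10000); // wrap in try/catch as in the original
}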
OK, so I finally got it running cheerfully. The problem was with the transport selection: even though this example reported WAP2 (among others) as an available transport on my device, running the network diagnostics tool showed only BIS as available. It also gave me the connection parameters that I needed to append to the end of the URL (;deviceside=false;ConnectionUID=GPMDSEU01;ConnectionType=mds-public). The code ended up like this:
for (;;) {
try {
coords.refreshCoordinates();
this.defaultUrl();
this.setUrl(stringFuncs.replaceAll(this.getUrl(), "%latitude%", coords.getLatitude() + ""));
this.setUrl(stringFuncs.replaceAll(this.getUrl(), "%longitude%", coords.getLongitude() + ""));
cd = cf.getConnection(this.getUrl());
if (cd != null) {
try {
HttpConnection hc = (HttpConnection)cd.getConnection();
final int i = hc.getResponseCode();
hc.close();
} catch (Exception e) {
}
}
// sleep
Thread.sleep(15000);
} catch (Exception e) {
} finally {
// close connections
// set objects to null
}
}
Thanks for your help @Nate, it's been very much appreciated.