Fetching breeze metadata is slow in Chrome, fast in IE - breeze

It appears that fetching metadata for my model is extremely slow in Chrome, but fast in IE.
My DbContext contains around 35 entities with lots of navigation properties, and each entity I add adds to the delay. Currently the delay is around 20 seconds, starting AFTER the query has returned the raw metadata, and it's entirely CPU-bound: the processor works heavily while memory usage stays stable. I've got an i7 processor and ample memory.
I know there are differences in how the JavaScript engines in these two browsers are tuned, with Chrome's JIT compiler being optimised for floating-point operations (which is why WebGL graphics are so much faster in Chrome than in IE) - could this be having an impact on the work that fetchMetadata has to do?
Has anyone else noticed this slowness? Could it be that my relationships are wrong? Once the delay is over everything works, though, so I doubt that relationships are the problem.

Found the problem and solution!
Thank you for taking the time to look at this. After your reply I decided to strip the whole project down to basics, where I could reproduce the problem and look for any interference.
This was an older project in which I had implemented Breeze. The project used standard jQuery post/get methods to fetch data from MVC, and since dates and times have always been a problem when posting and receiving JSON data from MVC, I had this code in my startup script:
// Add dataFilter to jQuery ajax calls to translate dates
$.ajaxSettings.dataFilter = function (data, type) {
    //if (type === 'json') {
    // convert things that look like Dates into a UTC Date string and completely replace them.
    data = data.replace(/(.*?")(\\\/Date\([0-9\-]+\)\\\/)(")/g,
        function (fullMatch, $1, $2, $3) {
            try {
                return $1 + new Date(parseInt($2.substr(7))) + $3;
            }
            catch (e) { }
            // something miserable happened, just return the original string
            return $1 + $2 + $3;
        });
    //}
    return data;
};
After removing this code (since Breeze handles dates properly), everything works as normal. This type of code may be common in other older projects that had to deal with dates; I know I got the snippet above from WiredPrairie, and I'm sure others will run into this problem too.
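If the legacy endpoints still need the conversion, one option is to scope it to the individual requests instead of patching $.ajaxSettings globally, so that large payloads such as Breeze's metadata pass through untouched. A minimal sketch (the helper name and the simplified regex are illustrative, not from the original project):

// Hypothetical helper: run the date conversion only for calls that need it,
// leaving $.ajaxSettings alone so Breeze's metadata request is unaffected.
function getJsonWithDates(url) {
    return $.ajax({
        url: url,
        dataType: "json",
        dataFilter: function (data) {
            // "\/Date(1234567890)\/" -> ISO date string (still valid JSON)
            return data.replace(/"\\\/Date\((-?\d+)\)\\\/"/g, function (match, ticks) {
                return JSON.stringify(new Date(parseInt(ticks, 10)));
            });
        }
    });
}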

Dmitri,
I can't repro this, so I am wondering if there isn't something else involved. Have you tried Firefox as well?

Is there any limit for source in typeahead?

I am using the jQuery Typeahead plugin from RunningCoder. Typeahead works well when I have a few records in my source, but it does not work when my source has around 500 records.
It is not related to the result count, which can be managed with the maxItem parameter. There is also no issue getting the JSON string from the server, as I can print it without any problem.
I know that ideally I should not have the records pre-loaded in my page and should instead search based on the input, but in my case hitting the server for the search is not an option, and I want to search the static data I have in my view. Here is my code:
$.typeahead({
    input: "#List .typeahead",
    minLength: 3,
    templateValue: "{{Text}}",
    display: ["Text", "Subtext"],
    emptyTemplate: 'No results for "{{query}}"',
    template: '<span>' +
        '<span class="result" id="{{Value}}">{{Text}}</span>' +
        '</span>',
    source: {
        Issuer: {
            data: @Html.Raw(Model.EveryThing)
        }
    }
});
In the code above, if Model.EveryThing has 40-50 records it works fine, but it does not work with around 500 records.
ADDITIONAL INFO:
After figuring out the issue, I would like to explain it a bit, as this may help someone. With the code above you can search the list on two fields, i.e. Text and Subtext, but the user will see only Text in the results and can then select from the matching options. This is very useful when you want to search on more than one field but show just one.
I figured it out after creating sample data on my own rather than relying on the server response. The issue is not the length of the result, but null entries in it.
In my data there are a few objects with Subtext as NULL, and that causes the issue. I fixed it by replacing the NULL with an empty string, and it works as expected now.
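For example, the data can be sanitised on the client before it is handed to the plugin. A minimal sketch (rawData stands in for the deserialised Model.EveryThing array):

// Replace null display fields with empty strings so the plugin's string
// matching never hits a null (property names match the template above).
var safeData = rawData.map(function (item) {
    return {
        Value: item.Value,
        Text: item.Text || "",
        Subtext: item.Subtext || ""
    };
});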

Firefox add-on webRequest.addListener misbehaving

I want to examine HTTP requests in an extension for Firefox. To begin figuring out how to do what I want, I figured I'd just log everything and see what comes up:
webRequest.onResponseStarted.addListener(
    (stuff) => { console.log(stuff); },
    { urls: [/^.*$/] }
);
The domain is insignificant, and I know the regex works; I verified it in the console. When running this code I get no logging. When I take out the filter parameter I get every request:
webRequest.onResponseStarted.addListener(
    (stuff) => { console.log(stuff); }
);
Cool, I'm probably doing something wrong, but I can't see what.
Another approach is to manually filter on my own:
var webRequest = Components.utils.import("resource://gre/modules/WebRequest.jsm", {});
var makeRequest = function (type) {
    webRequest[type].addListener(
        (stuff) => {
            console.log(!stuff.url.match(/google.com.*/));
            if (!stuff.url.match(/google.com.*/))
                return;
            console.log(type);
            console.log(stuff);
        }
    );
};
makeRequest("onBeforeRequest");
makeRequest("onBeforeSendHeaders");
makeRequest("onSendHeaders");
makeRequest("onHeadersReceived");
makeRequest("onResponseStarted");
makeRequest("onCompleted");
With the console.log above the if, I can see the regex returning true when I want it to and the code making it past the if. When I remove that console.log, the code past the if no longer executes.
My question, then, is: how do I get the filtering parameter to work, or, if it is indeed broken, how can I get the code past the if to execute? Obviously this is a fire hose, and to begin searching for a solution I will need to reduce the data.
Thanks
urls must be a string or an array of match patterns; regular expressions are not supported.
WebRequest.jsm uses resource://gre/modules/MatchPattern.jsm. It is easy to confuse this with the util/match-pattern add-on SDK API, which does support regular expressions.
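For instance, the first listener can be written with a match pattern instead of a regex. A minimal sketch ("<all_urls>" matches every URL the API can observe; a host pattern narrows it down):

// Match patterns are strings, not RegExp objects.
webRequest.onResponseStarted.addListener(
    (stuff) => { console.log(stuff); },
    { urls: ["<all_urls>"] }    // or e.g. ["*://*.google.com/*"]
);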

Sales order total different from actual total

I just need to know whether any of you have experienced this issue with sales order documents in Acumatica ERP 4.2.
The header-level total is wrong when compared to the total of the lines. Is there any way we can recalculate the totals in code, as I couldn't find a fix from Acumatica yet?
If the document is not yet closed, you can just modify a quantity or add/remove a line so the totals are recalculated.
If the document is closed, I do not see any possible way except changing the data in the DB.
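If it has to happen in code, something along these lines might work. This is a sketch built from standard graph-extension patterns (the RecalculateTotals name is made up, and it assumes an extension on the sales order entry graph like the one shown below); it is not a verified fix:

// Push each line back through the Transactions cache so the framework
// re-runs its line-level events and rebuilds the header totals.
public void RecalculateTotals()
{
    foreach (SOLine line in Base.Transactions.Select())
    {
        SOLine copy = Base.Transactions.Cache.CreateCopy(line) as SOLine;
        Base.Transactions.Update(copy);
    }
    Base.Save.Press();
}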
I am adding my recent experience to this topic in the hope it might help others.
Months ago I wrote the code shown below, anticipating it would be needed when called by RESTful services. It was clearly not needed and, even worse, was simply written and forgotten...
The code was in a SalesOrderEntryExt graph extension.
Removing the code block resolved the doubling of the Order Total.
It's also an example of backing out custom code until you find the problem.
protected void _(Events.RowInserted<SOLine> e, PXRowInserted del)
{
    // call the base BLC event handler...
    del?.Invoke(e.Cache, e.Args);
    SOLine row = e.Row;
    if (!Base.IsExport) return;
    if (row != null && row.OrderQty > 0m)
    {
        // via RESTful API, raise event
        SOLine copy = Base.Transactions.Cache.CreateCopy(row) as SOLine;
        copy.OrderQty = 0m;
        Base.Transactions.Cache.RaiseRowUpdated(row, copy);
    }
}

DataNucleus Memory/Cache Handling for large update/insert

We are running an application in a Spring context, using DataNucleus as our ORM and MySQL as our database.
Our application has a daily job that imports a data feed into our database. The feed translates into around 1 million rows of inserts/updates. The performance of the import starts out very good, but degrades over time (as the number of executed queries increases), and at some point the application freezes or stops responding. We have to wait for the whole job to complete before the application responds again.
This behaviour looks very much like a memory leak to us, and we have been looking hard at our code to catch any potential problem, but the problem didn't go away. One interesting thing we found in the heap dump is that org.datanucleus.ExecutionContextThreadedImpl (or its HashSet/HashMap) holds 90% of our memory (5 GB) during the import (I have attached screenshots of the dump below). My research on the internet suggests this is the Level 1 cache (not sure if I am correct). My question is: during a large import, how can I limit/control the size of the Level 1 cache? Can I perhaps ask DN not to cache during my import?
If that's not the L1 cache, what is the possible cause of my memory issue?
Our code uses a transaction for every insert to prevent locking large chunks of data in the database. It calls the flush method every 2,000 inserts.
As a temporary fix, we moved our import process to run overnight when no one is using the app. Obviously, this cannot go on forever. Could someone at least point us in the right direction so that we can do more research and, hopefully, find a fix?
It would also be good if someone has experience decoding heap dumps.
Your help would be very much appreciated by all of us here. Many thanks!
https://s3-ap-southeast-1.amazonaws.com/public-external/datanucleus_heap_dump.png
https://s3-ap-southeast-1.amazonaws.com/public-external/datanucleus_dump2.png
Code below - the caller of this method does not have a transaction. This method processes one import object per call, and we need to process around 100K of these objects daily:
@Override
@PreAuthorize("hasUserRole('ROLE_ADMIN')")
@Transactional(propagation = Propagation.REQUIRED)
public void processImport(ImportInvestorAccountUpdate account, String advisorCompanyKey) {
    ImportInvestorAccountDescriptor invAccDesc = account.getInvestorAccount();
    InvestorAccount invAcc = getInvestorAccountByImportDescriptor(invAccDesc, advisorCompanyKey);
    try {
        ParseReportingData parseReportingData = ctx.getBean(ParseReportingData.class);
        String baseCCY = invAcc.getBaseCurrency();
        Date valueDate = account.getValueDate();
        ArrayList<InvestorAccountInformationILAS> infoList = parseReportingData
                .getInvestorAccountInformationILAS(null, invAcc, valueDate, baseCCY);
        InvestorAccountInformationILAS info = infoList.get(0);
        PositionSnapshot snapshot = new PositionSnapshot();
        ArrayList<Position> posList = new ArrayList<Position>();
        Double totalValueInBase = 0.0;
        double totalQty = 0.0;
        for (ImportPosition importPos : account.getPositions()) {
            Asset asset = getAssetByImportDescriptor(importPos.getTicker());
            PositionInsurance pos = new PositionInsurance();
            pos.setAsset(asset);
            pos.setQuantity(importPos.getUnits());
            pos.setQuantityType(Position.QUANTITY_TYPE_UNITS);
            posList.add(pos);
        }
        snapshot.setPositions(posList);
        info.setHoldings(snapshot);
        log.info("persisting a new investorAccountInformation(source:"
                + invAcc.getReportSource() + ") on " + valueDate
                + " of InvestorAccount(key:" + invAcc.getKey() + ")");
        persistenceService.updateManagementEntity(invAcc);
    } catch (Exception e) {
        throw new DataImportException(invAcc == null ? null : invAcc.getKey(), advisorCompanyKey,
                e.getMessage());
    }
}
Do you use the same PersistenceManager for the entire job?
If so, you may want to close it and create a new one once in a while.
If not, this could be the L2 cache. What setting do you have for datanucleus.cache.level2.type? I think it's a weak map by default. You may want to try none for testing.
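A minimal sketch of the first suggestion, assuming a plain JDO-style PersistenceManagerFactory (the ImportRecord type and the toEntity helper are placeholders for the application code above):

// Recreate the PersistenceManager periodically so its L1 cache
// (the per-PM object cache) cannot grow without bound.
PersistenceManager pm = pmf.getPersistenceManager();
int count = 0;
for (ImportRecord record : records) {
    pm.currentTransaction().begin();
    pm.makePersistent(toEntity(record));
    pm.currentTransaction().commit();
    if (++count % 2000 == 0) {
        pm.close();                       // drops everything the PM was tracking
        pm = pmf.getPersistenceManager();
    }
}
pm.close();

To rule out the L2 cache, datanucleus.cache.level2.type=none can be set in the persistence properties for a test run.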

Grails flushing not working

I'm working on processing a large CSV file and I found this article about batch importing: http://naleid.com/blog/2009/10/01/batch-import-performance-with-grails-and-mysql/. I tried to do the same, but it seems to have no effect.
Should the instances be visible in the database after each flush? Right now there are either 0 or all of the entities when I query 'SELECT COUNT(*) FROM TABLE1', so it looks like the instances are committed all at once.
I also noticed that the import works quickly the first time, into a blank table, but when the table is full and each entity has to be either updated or saved as new, the whole process is enormously slow. That seems to be mainly because memory is not being reclaimed: free memory drops to 1 MB or less and the app gets stuck. Is that because the session isn't being flushed?
My code for importing is here:
public void saveAll(List<MedicalInstrument> listMedicalInstruments) {
    log.info("start saving")
    for (int i = 0; i < listMedicalInstruments.size(); i++) {
        def medicalInstrument = listMedicalInstruments.get(i)
        def persistedMedicalInstrument = MedicalInstrument.findByCode(medicalInstrument.code)
        if (persistedMedicalInstrument) {
            persistedMedicalInstrument.properties = medicalInstrument.properties
            persistedMedicalInstrument.save()
        } else {
            medicalInstrument.save()
        }
        if ((i + 1) % 100 == 0) {
            cleanUpGorm()
            if ((i + 1) % 1000 == 0) {
                log.info("saved ${i} entities")
            }
        }
    }
    cleanUpGorm()
}
protected void cleanUpGorm() {
    log.info("cleaning GORM")
    def session = sessionFactory.currentSession
    session.flush()
    session.clear()
    propertyInstanceMap.get().clear()
}
Thank you very much for any help!
Regards,
Lojza
P.S.: my JVM has 252.81 MB of memory in total, but it's only a testing environment for me and 3 other people.
I had a similar problem once. Then I realized it was because I was doing the work in a Grails service, which is transactional by default. Every call to a method in the service was itself wrapped in a transaction, which made the changes to the database linger until the method completed, so the intermediate results were never flushed where they were visible.
In my experience they still won't necessarily be visible in the database until they are all committed at the end. I'm using Oracle, and the only way I've been able to get them to commit in batches (and so become visible in the database) is by creating a separate transaction for each batch and closing it out after the flush. That, however, resulted in errors at the end of the process on the final flush. I didn't have time to figure that out, but using the process above I NEVER had issues, no matter how large the data load was.
Using this method still helps tremendously - it really does flush and clear the session. You can watch your memory utilization to see that.
As for updating a table that already has records: do you have indexes on the table? Sometimes indexes slow down mass inserts/updates like this because the database has to keep the index fresh. Maybe disable the indexes before the import/update and re-enable them when it is done?
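A minimal sketch of the non-transactional service with a per-batch transaction, as described above (the class name and batch size are illustrative):

// A service that opts out of the default Grails transaction so each batch
// commits - and becomes visible - on its own.
class MedicalInstrumentImportService {
    static transactional = false   // otherwise every method runs in one big transaction

    def sessionFactory

    void saveAll(List<MedicalInstrument> instruments) {
        instruments.collate(100).each { batch ->
            MedicalInstrument.withTransaction {
                batch.each { it.save() }
            }
            // flush and clear the session between batches, as in cleanUpGorm()
            def session = sessionFactory.currentSession
            session.flush()
            session.clear()
        }
    }
}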
