I am using the GraphAware PHP client for Neo4j.
When running a query with a "large" parameter object (about 200 lines in pretty print; field values are 30 characters max), it freezes.
The $queryparams object looks like
{
    "data": {
        "someproperty": 30000,
        "anotherproperty": "stringentry",
        <about 200 more like this here, partially nested>
    }
}
where:
- everything is inside the "data" wrapper
- most of the 200 entries are garbage that the query never uses
The line
$queryresult = $client->run($query, $queryparams);
becomes long-running and is eventually timed out by nginx. I tried
try {
    $queryresult = $client->run($query, $queryparams);
} catch (Neo4jException $e) {
    return "error";
}
to no avail.
Running the same query with the same parameters in the Neo4j browser, I get my results instantly.
Any ideas about what is causing the problem? Is it GraphAware?
EDIT: I posted too fast; this was unexpected to me: there is a field "0": ... somewhere in $queryparams, inside the garbage I mentioned. That is what is causing the problem. Is this intended behaviour?
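If the garbage can't be removed at the source, one workaround is to whitelist the parameters the query actually uses before calling run(). A minimal sketch in plain PHP, assuming the query only reads the two example properties shown above (the $allowedKeys list is mine):

// Keep only the parameters the Cypher query actually references;
// this drops numeric keys like "0" along with the other unused entries.
$allowedKeys = ['someproperty' => true, 'anotherproperty' => true];
$queryparams['data'] = array_intersect_key($queryparams['data'], $allowedKeys);

$queryresult = $client->run($query, $queryparams);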
Related
I load Airtable records via their API:
const base = airtable.base(item.baseId);
const tableRecords = [];

base("Dishes")
    .select({})
    .eachPage(
        function page(records, fetchNextPage) {
            tableRecords.push(...records);
            // To fetch the next page of records, call `fetchNextPage`.
            // If there are more records, `page` will get called again.
            // If there are no more records, `done` will get called.
            fetchNextPage();
        },
        function done(err) {
            if (err) {
                console.error(err);
                return;
            }
            console.log("##Done", tableRecords.length);
        }
    );
and as a result, I receive 2202 records. But in the table UI I see 2271 records, and when I export to CSV I see the same 2271 as well.
The code is pretty basic; I even removed the view setting to make sure it's not a presentational issue.
Google did not help me (nothing related). Did anyone face the same issue? Any solution?
NB: for sure I already compared both lists and found the items I'm missing, but looking at those items I see nothing special about them. So I know what I'm missing, but not why.
const base = airtable.base(item.baseId);
base("Dishes")
    .select({})
    .all()
    .then((records) => console.log("##Done", records.length))
    .catch((err) => console.error(err));
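For what it's worth, select().all() pages through the table internally (calling fetchNextPage for you) and resolves with the complete record list, so it takes the hand-rolled pagination loop out of the equation when hunting for the missing records.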
I am using jQuery Typeahead from RunningCoder. Typeahead works well if I have a few records in my source, but does not work if my source has around 500 records.
It is not related to the result count, which can be managed by the maxItem parameter. Also, there is no issue in getting the JSON string from the server, as I can print it without any issue.
I know that ideally I should not have the records pre-loaded in my page and should search based on the input, but in my case hitting the server for search is not an option, and I want to perform the search on the static data I have in my view. Here is my code:
$.typeahead({
    input: "#List .typeahead",
    minLength: 3,
    templateValue: "{{Text}}",
    display: ["Text", "Subtext"],
    emptyTemplate: 'No results for "{{query}}"',
    template: '<span>' +
        '<span class="result" id="{{Value}}">{{Text}}</span>' +
        '</span>',
    source: {
        Issuer: {
            data: @Html.Raw(Model.EveryThing)
        }
    }
});
In my code above, if Model.EveryThing has 40-50 records then it works fine, but it does not work for around 500 records.
ADDITIONAL INFO:
After figuring out the issue, I would like to explain it a bit, as this may help someone. Using the above code, you can search the list based on two fields, i.e. Text and Subtext, but the user will see only Text in the result and can then select from the matching options. This is very useful if you want to perform the search on more than one field but show just one field.
Figured it out after creating sample data on my own rather than relying on the server response. The issue is not the length of the result, but null entries in it.
In my data, a few objects have Subtext as NULL, and that causes the issue. I fixed it by replacing the NULL with an empty string, and this works as expected now.
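To illustrate the fix on the client side, here is a minimal sketch that sanitizes the data before handing it to Typeahead; the sample records and the sanitize step are mine, while the Value/Text/Subtext field names match the code above:

// Sample data shaped like Model.EveryThing, including a NULL Subtext
// that would break the matcher.
var raw = [
    { Value: "1", Text: "Alpha", Subtext: null },
    { Value: "2", Text: "Beta", Subtext: "B-side" }
];

// Replace null searchable fields with empty strings so Typeahead
// never runs string operations on null.
var sanitized = raw.map(function (item) {
    item.Subtext = item.Subtext === null ? "" : item.Subtext;
    return item;
});

// Then feed the cleaned array to the source:
// source: { Issuer: { data: sanitized } }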
It appears that fetching metadata for my model is extremely slow in Chrome, but fast in IE.
My DbContext contains around 35 entities with lots of navigation properties, and each entity I add adds to the delay. Currently the delay is around 20 seconds, starting AFTER the query has returned the raw metadata, and it's entirely CPU-bound; memory usage stays stable. I've got an i7 processor and ample memory.
I know there are differences in how the JavaScript engines in these two browsers are tuned, with Chrome's JIT compiler being optimised for floating-point operations (which is why WebGL graphics are dramatically faster in Chrome than in IE) - could this be having an impact on the work which fetchMetadata has to do?
Has anyone else noticed this slowness? Could it be that my relationships are wrong? Once the delay is over everything works, though, so I doubt the relationships are the problem.
Found the problem and solution!
Thank you for taking the time to look at this. After your reply, I decided to strip the whole project down to basics, where I could reproduce the problem and look for any interference.
This was an older project in which I had implemented Breeze. The project used standard jQuery post/get methods to get data from MVC, and since dates and times have always been a problem when posting and receiving JSON data from MVC, I had this code in my startup script:
// Add datafilter to jQuery ajax calls to translate dates
$.ajaxSettings.dataFilter = function (data, type) {
    //if (type === 'json') {
    // convert things that look like Dates into a UTC Date string and completely replace them.
    data = data.replace(/(.*?")(\\\/Date\([0-9\-]+\)\\\/)(")/g,
        function (fullMatch, $1, $2, $3) {
            try {
                return $1 + new Date(parseInt($2.substr(7))) + $3;
            }
            catch (e) { }
            // something miserable happened, just return the original string
            return $1 + $2 + $3;
        });
    //}
    return data;
};
After removing this code (since Breeze handles dates properly), everything works as normal. This type of code may be common in other older projects which had to deal with dates; I know I got the above snippet from WiredPrairie, and I'm sure others will also run into this problem.
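If the date conversion is still needed for the remaining legacy jQuery calls, one option (a sketch of my own, not part of the original fix) is to attach the filter per request instead of patching $.ajaxSettings globally, so Breeze's own requests are left alone; the /legacy/endpoint URL is hypothetical:

// Same conversion as above, but opt-in per call instead of global.
function convertMsDates(data) {
    return data.replace(/(.*?")(\\\/Date\([0-9\-]+\)\\\/)(")/g,
        function (fullMatch, $1, $2, $3) {
            try {
                return $1 + new Date(parseInt($2.substr(7))) + $3;
            } catch (e) {
                // something miserable happened, just return the original string
                return $1 + $2 + $3;
            }
        });
}

$.ajax({
    url: "/legacy/endpoint", // hypothetical legacy MVC endpoint
    dataFilter: convertMsDates
});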
Dmitri,
I can't repro this, so I am wondering if there isn't something else involved. Have you tried Firefox as well?
I'm working on processing a large CSV file, and I found this article about batch import: http://naleid.com/blog/2009/10/01/batch-import-performance-with-grails-and-mysql/. I tried to do the same, but it seems to have no effect.
Should the instances be visible in the database after each flush? Right now there are either 0 or all of the entities when I query 'SELECT COUNT(*) FROM TABLE1', so it looks like the instances are committed all at once.
I also noticed that the import works quickly when importing into a blank table for the first time, but when the table is full and each entity has to be either updated or saved as new, the whole process is enormously slow. That's mainly because memory is not being freed: it decreases to 1 MB or less and the app gets stuck. So is it because the session is not being flushed?
My code for importing is here:
public void saveAll(List<MedicalInstrument> listMedicalInstruments) {
    log.info("start saving")
    // iterate over every instrument (size(), not size() - 1, so the last one isn't skipped)
    for (int i = 0; i < listMedicalInstruments.size(); i++) {
        def medicalInstrument = listMedicalInstruments.get(i)
        def persistedMedicalInstrument = MedicalInstrument.findByCode(medicalInstrument.code)
        if (persistedMedicalInstrument) {
            persistedMedicalInstrument.properties = medicalInstrument.properties
            persistedMedicalInstrument.save()
        } else {
            medicalInstrument.save()
        }
        if ((i + 1) % 100 == 0) {
            cleanUpGorm()
            if ((i + 1) % 1000 == 0) {
                log.info("saved ${i + 1} entities")
            }
        }
    }
    cleanUpGorm()
}

protected void cleanUpGorm() {
    log.info("cleaning GORM")
    def session = sessionFactory.currentSession
    session.flush()
    session.clear()
    propertyInstanceMap.get().clear()
}
Thank you very much for any help!
Regards,
Lojza
P.S.: my JVM memory has 252.81 MB in total, but it's only testing environment for me and 3 other people.
I had a similar problem once. Then I realized it was because I was doing it in a Grails service, which is transactional by default. So every call to a method in the service was itself wrapped in a transaction, which made the changes to the database linger around until the method completed, meaning the results were not flushed in between.
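A minimal sketch of that change, assuming a standard Grails service: static transactional = false and withTransaction are stock Grails/GORM, the batch size and service shape are mine, and cleanUpGorm() refers to the method from the question:

class MedicalInstrumentImportService {
    // Opt out of the implicit transaction that wraps every service method.
    static transactional = false

    def sessionFactory

    void saveAll(List<MedicalInstrument> instruments) {
        // Commit each batch in its own short transaction so the rows
        // become visible in the database while the import is running.
        instruments.collate(100).each { batch ->
            MedicalInstrument.withTransaction {
                batch.each { it.save() }
            }
            cleanUpGorm() // flush/clear as defined in the question
        }
    }
}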
In my experience they still won't necessarily be visible in the database until they are all committed at the end. I'm using Oracle, and the only way I've been able to get them to commit in batches (so they are visible in the database) is by creating a separate transaction for each batch and closing it out after the flush. That, however, resulted in errors at the end of the process on the final flush. I didn't have time to figure that out, but using the process above I never had issues, no matter how large the data load was.
Using this method still helps tremendously - it really does flush and clear the session. You can watch your memory utilization to confirm that.
As for updating a full table: do you have indexes on it? Indexes can slow down mass inserts/updates like this because the database has to keep the index fresh. Maybe disable the indexes before the import/update and re-enable them when it is done?
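For example, in MySQL (the database used in the article linked above) that could look like the sketch below; note that DISABLE KEYS only affects non-unique indexes on MyISAM tables, and medical_instrument is just the default GORM table name for the domain class above:

ALTER TABLE medical_instrument DISABLE KEYS;
-- run the bulk import/update here
ALTER TABLE medical_instrument ENABLE KEYS;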
I'm building an nsIProtocolHandler implementation in Delphi. (more here)
And it's working already. Data the module builds gets streamed over an nsIInputStream. I've got all the nsIRequest, nsIChannel and nsIHttpChannel methods and properties working.
I've started testing and I run into something strange. I have a page "a.html" with this simple HTML:
<img src="a.png">
Both "xxm://test/a.html" and "xxm://test/a.png" work in Firefox, and give above HTML or the PNG image data.
The problem is with displaying the HTML page: the image doesn't get loaded. When I debug, I see:
- NewChannel gets called for a.png (when Firefox is processing an OnDataAvailable notice on a.html),
- NotificationCallbacks is set (I only need to keep a reference, right?),
- the RequestHeader "Accept" is set to "image/png,image/*;q=0.8,*/*;q=0.5",
- but then the channel object is released (most probably due to a zero reference count).
Looking at other requests, I would expect some other properties to get set (such as LoadFlags or OriginalURI) and AsyncOpen to get called, from where I can start responding to the request.
Does anybody recognise this? Am I doing something wrong? Perhaps with LoadFlags or the LoadGroup? I'm not sure when to call AddRequest and RemoveRequest on the LoadGroup, and peeking at nsHttpChannel and nsBaseChannel I'm not sure whether it's better to call RemoveRequest early or late (before or after OnStartRequest or OnStopRequest).
Update: Checked on the brand-new Firefox 3.5; still the same.
Update: To try to further isolate the issue, I tried "file://test/a1.html" with <img src="xxm://test/a.png" /> and still only get the above sequence of events. If I'm supposed to add this secondary request to a load group to get AsyncOpen called on it, I have no idea where to get a reference to one.
There's more: I find only one instance of the "Accept" string that gets added to the request headers; it queries for nsIHttpChannelInternal right after creating a new channel, but I don't even get this QueryInterface call through... (I posted it here)
Me again.
I am going to quote the same stuff from nsIChannel::asyncOpen():
"If asyncOpen returns successfully, the channel is responsible for keeping itself alive until it has called onStopRequest on aListener or called onChannelRedirect."
If you go back to nsViewSourceChannel.cpp, there's one place where loadGroup->AddRequest is called and two places where loadGroup->RemoveRequest is called.
nsViewSourceChannel::AsyncOpen(nsIStreamListener *aListener, nsISupports *ctxt)
{
    NS_ENSURE_TRUE(mChannel, NS_ERROR_FAILURE);

    mListener = aListener;

    /*
     * We want to add ourselves to the loadgroup before opening
     * mChannel, since we want to make sure we're in the loadgroup
     * when mChannel finishes and fires OnStopRequest()
     */
    nsCOMPtr<nsILoadGroup> loadGroup;
    mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
    if (loadGroup)
        loadGroup->AddRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this), nsnull);

    nsresult rv = mChannel->AsyncOpen(this, ctxt);

    if (NS_FAILED(rv) && loadGroup)
        loadGroup->RemoveRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                 nsnull, rv);

    if (NS_SUCCEEDED(rv)) {
        mOpened = PR_TRUE;
    }

    return rv;
}
and
nsViewSourceChannel::OnStopRequest(nsIRequest *aRequest, nsISupports *aContext,
                                   nsresult aStatus)
{
    NS_ENSURE_TRUE(mListener, NS_ERROR_FAILURE);

    if (mChannel)
    {
        nsCOMPtr<nsILoadGroup> loadGroup;
        mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
        if (loadGroup)
        {
            loadGroup->RemoveRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                     nsnull, aStatus);
        }
    }

    return mListener->OnStopRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                    aContext, aStatus);
}
Edit:
I have no clue about how Mozilla works, so I have to guess from reading some code. From the channel's point of view, once the original file is loaded, its job is done. If you want to load the secondary items linked in the file, like an image, you have to implement that in the listener. See TestPageLoad.cpp. It implements a crude parser and retrieves child items upon OnDataAvailable:
NS_IMETHODIMP
MyListener::OnDataAvailable(nsIRequest *req, nsISupports *ctxt,
                            nsIInputStream *stream,
                            PRUint32 offset, PRUint32 count)
{
    //printf(">>> OnDataAvailable [count=%u]\n", count);
    nsresult rv = NS_ERROR_FAILURE;
    PRUint32 bytesRead = 0;
    char buf[1024];

    if (ctxt == nsnull) {
        bytesRead = 0;
        rv = stream->ReadSegments(streamParse, &offset, count, &bytesRead);
    } else {
        while (count) {
            PRUint32 amount = PR_MIN(count, sizeof(buf));
            rv = stream->Read(buf, amount, &bytesRead);
            count -= bytesRead;
        }
    }

    if (NS_FAILED(rv)) {
        printf(">>> stream->Read failed with rv=%x\n", rv);
        return rv;
    }
    return NS_OK;
}
The important thing is that it calls streamParse(), which looks at the src attributes of img and script elements and calls auxLoad(), which creates a new channel with a new listener and calls AsyncOpen():
uriList->AppendElement(uri);
rv = NS_NewChannel(getter_AddRefs(chan), uri, nsnull, nsnull, callbacks);
RETURN_IF_FAILED(rv, "NS_NewChannel");
gKeepRunning++;
rv = chan->AsyncOpen(listener, myBool);
RETURN_IF_FAILED(rv, "AsyncOpen");
Since it passes another MyListener instance in there, that listener can in turn load more child items ad infinitum, like a Russian-doll situation.
I think I found it (myself); take a close look at this page. Why it doesn't highlight that the UUID has changed over versions isn't clear to me, but it would explain why things fail when (or just prior to) calling QueryInterface on nsIHttpChannelInternal.
With the new(er) UUID, I'm getting better results. As I mentioned in an update to the question, I've posted this on bugzilla.mozilla.org; I'm curious what response I will get there.