Neo4j GraphSAGE training does not log anything

I am extracting graph embeddings by training the GraphSAGE algorithm on a large graph of 82,339,589 nodes and 219,521,164 edges. When I check with the ":queries" command, the query is listed as running. The algorithm started 6 days ago. When I look at the logs with "docker logs xxx", the last entries are:
2021-12-01 12:03:16.267+0000 INFO Relationship Store Scan (RelationshipScanCursorBasedScanner): Imported 352,492,468 records and 0 properties from 16247 MiB (17,036,668,320 bytes); took 59.057 s, 5,968,663.57 Relationships/s, 275 MiB/s (288,477,487 bytes/s) (per thread: 1,492,165.89 Relationships/s, 68 MiB/s (72,119,371 bytes/s))
2021-12-01 12:03:16.269+0000 INFO [neo4j.BoltWorker-3 [bolt] [/10.0.0.6:56143] ] LOADING
INFO [neo4j.BoltWorker-3 [bolt] [/10.0.0.6:56143] ] LOADING Actual memory usage of the loaded graph: 8602 MiB
INFO [neo4j.BoltWorker-3 [bolt] [/10.0.0.6:64076] ] GraphSageTrain :: Start
Is there a way to see detailed logs about the training process? And is it normal for training to take 6 days on a graph of this size?

It is normal for GraphSAGE to take a long time compared to FastRP or Node2Vec. Starting in GDS 1.7, you can use
CALL gds.beta.listProgress(jobId: String)
YIELD
  jobId,
  taskName,
  progress,
  progressBar,
  status,
  timeStarted,
  elapsedTime
If you call it without passing in a jobId, it returns a list of all running jobs. If you call it with a jobId, it gives you details about that specific job.
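For example, a no-argument call that lists everything currently running (YIELD columns taken from the signature above) would look something like this:
CALL gds.beta.listProgress()
YIELD jobId, taskName, progress, status
RETURN jobId, taskName, progress, status;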
This query will summarize the details for job 03d90ed8-feba-4959-8cd2-cbd691d1da6c.
CALL gds.beta.listProgress("03d90ed8-feba-4959-8cd2-cbd691d1da6c")
YIELD taskName, status
RETURN taskName, status, count(*)
Here's the documentation for progress logging. The system monitoring procedures might also be helpful to you.
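If you also want to confirm the Bolt query itself is still alive, the monitoring procedures can be called directly; a minimal sketch, assuming a Neo4j 4.x server where dbms.listQueries is available:
CALL dbms.listQueries()
YIELD queryId, query, elapsedTimeMillis
RETURN queryId, query, elapsedTimeMillis;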

Related

Can the additional time it takes to execute the first instance of a query be eliminated in neo4j?

Whatever steps I take, the first execution of any query I make in Neo4j always takes longer than any subsequent execution of the same query. So I guess something other than the store is being cached.
I'm using the latest community container image for 3.5 (3.5.20 at the time of writing)
I have plenty of memory to cache absolutely everything if I want to
I'm using well documented warm-up strategies in order to (allegedly) prime the page cache
The database details...
I run CALL apoc.monitor.store(); and it tells me the size of each store: -
+------------------------------------------------------------------------------------------------------------+
| logSize | stringStoreSize | arrayStoreSize | relStoreSize | propStoreSize | totalStoreSize | nodeStoreSize |
+------------------------------------------------------------------------------------------------------------+
| 1224 | 148165607424 | 3016515584 | 26391839040 | 42701007672 | 241318778285 | 2430128610 |
+------------------------------------------------------------------------------------------------------------+
I run CALL apoc.warmup.run(true, true, true); (before running any queries). It takes about 15 minutes and displays a summary of what it's done. The text it outputs is not easily parsed in its raw form so I've summarised salient parts of it below. Basically it tells me the number of pages loaded for each store, and these are: -
nodePages 296,719
relPages 3,234,294
relGroupPages 4,580
propPages 5,233,608
stringPropPages 18,086,620
arrayPropPages 368,225
indexPages 2,235,227
---
Total 29,459,273
With a page size of 8,192 bytes per page, that's 29,459,273 × 8,192 ≈ 241.3 GB, or approximately 225 GiB of pages for the displayed stores
I have enough physical memory and I have already set NEO4J_dbms_memory_pagecache_size to 250G
I set NEO4J_dbms_memory_heap_initial__size and NEO4J_dbms_memory_heap_max__size to 8G
So (allegedly) the page cache is "warm" and I have enough physical memory.
Query timings...
I run my query, which returns 1,813 records, and I execute the same query several times in order to illustrate the issue. I see the following (typical) timings: -
1. 1,821 mS
2. 75 mS
3. 60 mS
4. 51 mS
5. 48 mS
6. 42 mS
7. 38 mS
8. 36 mS
9. 36 mS
The actual values are dependent on the query but the first execution of every query is always significantly longer than the second.
ADDENDUM (16/Jul).
Just to be clear, using apoc.warmup.run does help.
If I don't use it, the first query is much longer still.
Having just restarted the DB (without a warm-up) the first query took 7,829mS. The 2nd was 116mS, the third 66mS
So, warm-up or not, the first query is always longer.
Question...
What's going on?
Can I do anything more to reduce the initial query time?
Oh, and using the query as the warm up is not the answer - I don't know what queries will be used
Not sure why apoc.warmup.run does not speed up your initial query, but you could just try using an initial query invocation as your "warmup" instead.

Google cloud dataflow : Shutting down JVM after 8 consecutive periods of measured GC thrashing

I am using Google Cloud Dataflow to do some transformations. I am reading about 3 million records from GBQ, performing a transformation, and writing the transformed results to GCS. While doing this, the Dataflow job fails with the following error:
Error:
Shutting down JVM after 8 consecutive periods of measured GC thrashing
Workflow failed. Causes: S20:Read GBQ/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/Read+Read GBQ/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/GroupByWindow+Read GBQ/Reshuffle.ViaRandomKey/Reshuffle/ExpandIterable+Read GBQ/Reshuffle.ViaRandomKey/Values/Values/Map+Read GBQ/ReadFiles+Read GBQ/PassThroughThenCleanup/ParMultiDo(Identity)+Read GBQ/PassThroughThenCleanup/View.AsIterable/ParDo(ToIsmRecordForGlobalWindow)+transform+Split results/ParMultiDo(Partition)+Write errors/WriteFiles/RewindowIntoGlobal/Window.Assign+Write errors/WriteFiles/WriteShardedBundlesToTempFiles/ApplyShardingKey+Write errors/WriteFiles/WriteShardedBundlesToTempFiles/GroupIntoShards/Reify+Write errors/WriteFiles/WriteShardedBundlesToTempFiles/GroupIntoShards/Write+Write entities Gzip/WriteFiles/WriteShardedBundlesToTempFiles/ApplyShardingKey+Write entities Gzip/WriteFiles/WriteShardedBundlesToTempFiles/GroupIntoShards/Reify+Write entities Gzip/WriteFiles/WriteShardedBundlesToTempFiles/GroupIntoShards/Write failed., A work item was attempted 4 times without success. Each time the worker eventually lost contact with the service. The work item was attempted on:
DataConverterOptions options = PipelineOptionsFactory.fromArgs(args).withValidation()
        .as(DataConverterOptions.class);
Pipeline p = Pipeline.create(options);

EntityCreatorFn entityCreatorFn = EntityCreatorFn.newWithGCSMapping(options.getMapping(),
        options.getWithUri(), options.getLineNumberToResult(), options.getIsPartialUpdate(),
        options.getQuery() != null);

PCollectionList<String> resultByType = p
        .apply("Read GBQ", BigQueryIO.read(
                (SchemaAndRecord elem) -> elem.getRecord().get("lineNumber") + "|" + elem.getRecord().get("sourceData"))
                .fromQuery(options.getQuery()).withoutValidation()
                .withCoder(StringUtf8Coder.of()).withTemplateCompatibility())
        .apply("transform", ParDo.of(entityCreatorFn))
        .apply("Split results", Partition.of(2, (Partition.PartitionFn<String>) (elem, numPartitions) -> {
            if (elem.startsWith(PREFIX_ERROR)) {
                return PARTITION_ERROR;
            }
            return PARTITION_SUCCESS;
        }));

FileIO.Sink sink = TextIO.sink();
resultByType.get(0).apply("Write entities Gzip", FileIO.write().to(options.getOutput())
        .withCompression(Compression.GZIP).withNumShards(options.getShards()).via(sink));
resultByType.get(1).apply("Write errors", TextIO.write().to(options.getErrorOutput()).withoutSharding());
p.run();
Shutting down JVM after 8 consecutive periods of measured GC thrashing. Memory is used/total/max = 109/301/2507 MB, GC last/max = 54.00/54.00 %, #pushbacks=0, gc thrashing=true.
Does 'EntityCreatorFn.newWithGCSMapping' cache elements in memory by any chance? It seems like one of the steps in your pipeline is consuming too much memory (note that Dataflow cannot parallelize processing of a single element of a DoFn). I suggest adjusting your pipeline or trying out highmem machines. If the problem persists, please consider contacting Google Cloud Support with the relevant job IDs, etc.
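If you do try highmem workers, the machine type is just a pipeline option; a minimal sketch, with n1-highmem-4 as an example size rather than a recommendation:
// Pass the machine type when launching the pipeline:
//   --runner=DataflowRunner --workerMachineType=n1-highmem-4
// or set it programmatically on the options object built in the question
// (requires org.apache.beam.runners.dataflow.options.DataflowPipelineOptions):
DataflowPipelineOptions dataflowOptions = options.as(DataflowPipelineOptions.class);
dataflowOptions.setWorkerMachineType("n1-highmem-4");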

Neo4j slow concurrent merges

I have been experiencing some extremely bad slowdowns in Neo4j, and having spent a few days on the issue now, I still can't figure out why. I'm really hoping someone here can help. I've also tried the neo slack support group already, but to no avail.
My setup is as follows: the back-end is a django app that connects through the official drivers (pip package neo4j-driver==1.5.0) to a dockerized Neo4j Enterprise 3.2.3 instance. The data we write is added in infrequent bursts of around 15 concurrent merges to the same portion of the graph, and is triggered when a user interacts with some part of our product (each interaction causing a separate merge).
Each merge operation is the following query:
MERGE (m:main:entity:person {user: $user, task: $task, type: $type, text: $text})
ON CREATE SET m.source = $list, m.created = $timestamp, m.task_id = id(m)
ON MATCH SET m.source = CASE
                          WHEN $source IN m.source THEN m.source
                          ELSE m.source + $list
                        END
SET m.modified = $timestamp
RETURN m.task_id AS task_id;
A PROFILE of this query looks like this. As you can see, the individual processing time is in the ms range. We have tested running this 100+ times in quick succession with no issues. We have a Node key configured as in this schema.
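For readers without access to the linked schema, a node key over the merge properties would look roughly like this in 3.x Enterprise syntax (label and property names taken from the query above; treat it as an illustration rather than the exact constraint):
CREATE CONSTRAINT ON (m:person) ASSERT (m.user, m.task, m.type, m.text) IS NODE KEY;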
The running system however seems to seize up and we see execution times for these queries hit as high as 2 minutes! A snapshot of the running queries looks like this.
Does anyone have any clues as to what may be going on?
Further system info:
ls data/databases/graph.db/*store.db* | du -ch | tail -1
249.6M total
find data/databases/graph.db/schema/index -regex '.*/native.*' | du -hc | tail -1
249.6M total
ps
1 root 297:51 /usr/lib/jvm/java-1.8-openjdk/jre/bin/java -cp /var/lib/neo4j/plugins:/var/lib/neo4j/conf:/var/lib/neo4j/lib/*:/var/lib/neo4j/plugins/* -server -Xms8G -Xmx8G -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:+AlwaysPr
printenv | grep NEO
NEO4J_dbms_memory_pagecache_size=4G
NEO4J_dbms_memory_heap_maxSize=8G
The machine has 16GB total memory and there is nothing else running on it.

Dataflow - Approx Unique on unbounded source

I'm getting unexpected results streaming in the cloud.
My pipeline looks like:
SlidingWindow(60min).every(1min)
.triggering(Repeatedly.forever(
AfterWatermark.pastEndOfWindow()
.withEarlyFirings(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(30)))
)
)
.withAllowedLateness(15sec)
.accumulatingFiredPanes()
.apply("Get UniqueCounts", ApproximateUnique.perKey(.05))
.apply("Window hack filter", ParDo(
if(window.maxTimestamp.isBeforeNow())
c.output(element)
)
)
.toJSON()
.toPubSub()
If that filter isn't there, I get 60 windows per output. Apparently because the pubsub sink isn't window aware.
So in the examples below, if each time period is a minute, I'd expect to see the unique count grow until 60 minutes when the sliding window closes.
Using DirectRunner, I get expected results:
t1: 5
t2: 10
t3: 15
...
tx: growing unique count
In dataflow, I get weird results:
t1: 5
t2: 10
t3: 0
t4: 0
t5: 2
t6: 0
...
tx: wrong unique count
However, if my unbounded source has older data, I'll get normal looking results until it catches up at which point I'll get the wrong results.
I was thinking it had to do with my window filter, but removing that didn't change the results.
If I do a Distinct() then Count().perKey(), it works, but that slows my pipeline considerably.
What am I overlooking?
[Update from the comments]
ApproximateUnique inadvertently resets its accumulated value when the result is extracted. This is incorrect when the value is read more than once, as happens when windows fire multiple times. Fix (will be in version 2.4): https://github.com/apache/beam/pull/4688

Erlang/OTP - Timing Applications

I am interested in benchmarking different parts of my program for speed. I have tried using info(statistics) and erlang:now().
I need to know down to the microsecond what the average speed is. I don't know why I am having trouble with a script I wrote.
It should be able to start anywhere and end anywhere. I ran into a problem when I tried starting it on a process that may be running up to four times in parallel.
Is there anyone who already has a solution to this issue?
EDIT:
Willing to give a bounty if someone can provide a script to do it. It needs to work across multiple spawned processes. I cannot accept a function like timer, at least in the implementations I have seen; it only traverses one process, and even then some major editing is necessary for a full test of a full program. Hope I made it clear enough.
Here's how to use eprof, likely the easiest solution for you:
First you need to start it, like most applications out there:
23> eprof:start().
{ok,<0.95.0>}
Eprof supports two profiling modes. You can call it and ask to profile a certain function, but we can't use that because other processes will mess everything up. We need to manually start profiling and tell it when to stop (this is why you won't have an easy script, by the way).
24> eprof:start_profiling([self()]).
profiling
This tells eprof to profile everything that will be run and spawned from the shell. New processes will be included here. I will run some arbitrary multiprocessing function I have, which spawns about 4 processes communicating with each other for a few seconds:
25> trade_calls:main_ab().
Spawned Carl: <0.99.0>
Spawned Jim: <0.101.0>
<0.100.0>
Jim: asking user <0.99.0> for a trade
Carl: <0.101.0> asked for a trade negotiation
Carl: accepting negotiation
Jim: starting negotiation
... <snip> ...
We can now tell eprof to stop profiling once the function is done running.
26> eprof:stop_profiling().
profiling_stopped
And we want the logs. Eprof will print them to screen by default. You can ask it to also log to a file with eprof:log(File). Then you can tell it to analyze the results. We tell it to collapse the run time from all processes into a single table with the option total (see the manual for more options):
27> eprof:analyze(total).
FUNCTION CALLS % TIME [uS / CALLS]
-------- ----- --- ---- [----------]
io:o_request/3 46 0.00 0 [ 0.00]
io:columns/0 2 0.00 0 [ 0.00]
io:columns/1 2 0.00 0 [ 0.00]
io:format/1 4 0.00 0 [ 0.00]
io:format/2 46 0.00 0 [ 0.00]
io:request/2 48 0.00 0 [ 0.00]
...
erlang:atom_to_list/1 5 0.00 0 [ 0.00]
io:format/3 46 16.67 1000 [ 21.74]
erl_eval:bindings/1 4 16.67 1000 [ 250.00]
dict:store_bkt_val/3 400 16.67 1000 [ 2.50]
dict:store/3 114 50.00 3000 [ 26.32]
And you can see that most of the time (50%) is spent in dict:store/3. 16.67% is taken in outputting the result, and another 16.67% is taken by erl_eval (this is what you get by running short functions in the shell: parsing them takes longer than running them).
You can then start going from there. That's the basics of profiling run times with Erlang. Handle it with care: eprof can be quite a load, especially on a production system or for functions that run for too long.
You can use eprof or fprof.
The normal way to do this is with timer:tc. Here is a good explanation.
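A minimal sketch of timer:tc, where the lists:seq call is just a placeholder for whatever you want to measure:
%% timer:tc(Module, Function, Args) returns {TimeInMicroseconds, Result}.
{Time, _Result} = timer:tc(lists, seq, [1, 100000]),
io:format("took ~p microseconds~n", [Time]).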
I can recommend you this tool: https://github.com/virtan/eep
You will get something like this https://raw.github.com/virtan/eep/master/doc/sshot1.png as a result.
Step-by-step instructions for profiling all processes on a running system:
On target system:
1> eep:start_file_tracing("file_name"), timer:sleep(20000), eep:stop_tracing().
$ scp -C $PWD/file_name.trace desktop:
On desktop:
1> eep:convert_tracing("file_name").
$ kcachegrind callgrind.out.file_name
