Using a Gremlin script with Neo4j, I am trying to find all paths between two nodes, descending at most 10 levels down. But all I get as a response from the REST API is a
java.lang.ArrayIndexOutOfBoundsException: -1
Here is the script:
x = g.v(2)
y = g.v(6)
x.both.loop(10){!it.object.equals(y)}.paths
I looked through the documentation, but couldn't find anything relevant for this use case.
In Gremlin, the argument to loop is the number of steps back that you wish to go, and the closure is evaluated to determine when to break out of the loop. In this case, because you have loop(10), it's going to go back way too far, to a point where the pipeline is not defined. With respect to the closure, you'll need to check not only whether the object is the one in question (in which case you should stop), but also whether or not you've done 10 loops already.
What you really want is something like this:
x.both.loop(1){!it.object.equals(y) && it.loops < 10}.paths
However, I should add that if there is a cycle in the graph, this will gladly traverse the cycle over and over and result in far too many paths. You can apply some clever filter and sideEffect to avoid visiting nodes multiple times.
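To make that bookkeeping concrete, here is the same idea sketched in plain Python (an analogy only, not Gremlin; the graph, node ids, and helper function are all invented for illustration): enumerate paths of at most 10 hops while refusing to revisit a node already on the current path.
from collections import deque

def paths_between(adjacency, start, target, max_depth=10):
    # adjacency maps node -> neighbours in both directions, like the "both" step
    results = []
    queue = deque([(start, [start])])
    while queue:
        node, path = queue.popleft()
        if node == target:
            results.append(path)
            continue
        if len(path) - 1 >= max_depth:  # already used up all the hops
            continue
        for nxt in adjacency.get(node, ()):
            if nxt not in path:  # the "avoid visiting nodes multiple times" part
                queue.append((nxt, path + [nxt]))
    return results

# Toy graph standing in for the vertices around g.v(2) and g.v(6)
graph = {2: [3, 4], 3: [2, 5], 4: [2, 5], 5: [3, 4, 6], 6: [5]}
print(paths_between(graph, 2, 6))  # [[2, 3, 5, 6], [2, 4, 5, 6]]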
For more information see the Loop Pattern Page on the Gremlin Wiki.
I am working on a fairly complex application that makes use of the Dask framework, and I am trying to increase its performance. To that end I am looking at the diagnostics dashboard. I have two use cases: in the first I have a 1GB Parquet file split into 50 parts, and in the second I have the first part of the above file, split over 5 parts, which is what was used for the following charts:
The red node is called "memory:list" and I do not understand what it is.
When running the bigger input this seems to block the whole operation.
Finally this is what I see when I go inside those nodes:
I am not sure where I should start looking to understand what is generating this memory:list node, especially given that there is no stack button inside the task, as there usually would be. Any suggestions?
Red nodes are in memory. So this computation has occurred, and the result is sitting in memory on some machine.
It looks like the type of the piece of data is a Python list object. Also, the name of the task is list-159..., so this is probably the result of calling the built-in list function.
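For intuition, here is a hypothetical minimal reproduction (the function and data are invented, not taken from the question) showing how gathering delayed results with the built-in list produces a task whose key starts with list-, which is what the dashboard shows as the red node:
import dask

def load_part(i):
    # Stand-in for reading one parquet partition.
    return list(range(i, i + 3))

parts = [dask.delayed(load_part)(i) for i in range(5)]
gathered = dask.delayed(list)(parts)  # a task named after the list builtin
print(gathered.key)        # a key in the "list-159..." style
print(gathered.compute())  # the plain Python list object held in memory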
I have an SPSS Modeler stream which is used and updated every week to generate a certain dataset. The raw data for this stream is also renewed on a weekly basis.
In part of this stream, there is a chunk of nodes that has to be modified and updated manually every week; the sequence of this part is: Type Node => Restructure Node => Aggregate Node
To simplify the explanation of those nodes' roles, I drew an image of them as below.
Because the original raw data changes on a weekly basis, the range of the Unit value above always varies, sometimes more than 6 (maybe 100), other times fewer than 6 (maybe 3). That is why, until now, somebody has had to modify and update that chunk of nodes every week. *The Unit value has a certain upper limit (300 for now)
However, we are now aiming to run this stream automatically, without any human intervention, so that part of the stream needs to be customized to work perfectly on its own. Any help is appreciated, thanks!
In order to automate this, I suggest trying global nodes combined with CLEM scripts inside the execution (default script). I have a stream that calculates the first date and the last date, and those variables are used to rename files at the end of execution. I think you could use something similar, as explained here:
1) Create Derive nodes to bring in the unit values used in the weekly stream
2) Save this information in a table named 'count_variable'
3) Use a Global node named Global with a query similar to this:
#GLOBAL_MAX(variable created in (2)) (this only records the number of variables: step 2 created a table with only one value, so GLOBAL_MAX will simply bring back the number of variables).
4) The query inside the execution tab will be similar to this:
execute count_variable
var tabledata
var fn
set tabledata = count_variable.output
set count_variable = value tabledata at 1 1
execute Global
5) You can now use the number of variables just by using the already created "count_variable"
It's not easy to explain just by typing, but I hope to have been helpful.
I think there is a better, simpler, and more effective (yet risky, due to the node's requirements on its input data) solution to your problem. It is called the Transpose node, and it does exactly that - pivots your table. It is only available from version 18.1 on, though. Here's an example:
https://developer.ibm.com/answers/questions/389161/how-does-new-feature-partial-transpose-work-in-sps/
The .split_off method on std::collections::LinkedList is described as having O(n) time complexity. From the docs:
pub fn split_off(&mut self, at: usize) -> LinkedList<T>
Splits the list into two at the given index. Returns everything after the given index, including the index.
This operation should compute in O(n) time.
Why not O(1)?
I know that linked lists are not trivial in Rust. There are several resources going into the hows and whys, like this book and this article among several others, but I haven't had the chance to dive into those or the standard library's source code yet.
Is there a concise explanation about the extra work needed when splitting a linked list in (safe) Rust?
Is this the only way? And if not, why was this implementation chosen?
The method LinkedList::split_off(&mut self, at: usize) first has to traverse the list from the start (or the end) to the position at, which takes O(min(at, n - at)) time. The actual split-off is a constant-time operation (as you said). And since this min() expression is confusing, we can replace it with n, which is legal because min(at, n - at) ≤ n. Thus: O(n).
Why was the method designed like that? The problem goes deeper than this particular method: most of the LinkedList API in the standard library is not really useful.
Due to its cache unfriendliness, a linked list is often a bad choice to store sequential data. But linked lists have a few nice properties which make them the best data structure for a few, rare situations. These nice properties include:
Inserting an element in the middle in O(1), if you already have a pointer to that position
Removing an element from the middle in O(1), if you already have a pointer to that position
Splitting the list into two lists at an arbitrary position in O(1), if you already have a pointer to that position
Notice anything? The linked list is designed for situations where you already have a pointer to the position that you want to do stuff at.
Rust's LinkedList, like many others, just stores a pointer to the start and the end. To have a pointer to an element inside the linked list, you need something like an iterator; in our case, that's IterMut. An iterator over a collection can function like a pointer to a specific element and can be advanced carefully (i.e. not with a for loop). And in fact, there is IterMut::insert_next, which allows you to insert an element in the middle of the list in O(1). Hurray!
But this method is unstable. And methods to remove the current element or to split the list off at that position are missing. Why? Because of the vicious circle that is:
1) LinkedList lacks almost all features that make linked lists useful at all
2) Thus (nearly) everyone recommends not to use it
3) Thus (nearly) no one uses LinkedList
4) Thus (nearly) no one cares about improving it
5) Goto 1
Please note that there are a few brave souls occasionally trying to improve the situation. There is the tracking issue about insert_next, where people argue that Iterator might be the wrong concept to perform these O(1) operations and that we want something like a "cursor" instead. And here someone suggested a bunch of methods to be added to IterMut (including cut!).
Now someone just has to write a nice RFC and someone needs to implement it. Maybe then LinkedList won't be nearly useless anymore.
Edit 2018-10-25: someone did write an RFC. Let's hope for the best!
Edit 2019-02-21: the RFC was accepted! Tracking issue.
Maybe I'm misunderstanding your question, but in a linked list, the links of each node have to be followed to proceed to the next node. If you want to get to the third node, you start at the first, follow its link to the second, then finally arrive at the third.
This traversal's complexity is proportional to the target node index n, because n nodes are processed/traversed, so it's a linear O(n) operation, not a constant-time O(1) one. The part where the list is "split off" is of course constant time, but the overall split operation is dominated by the O(n) cost of getting to the split-off point before the split can even be made.
One way in which it could be O(1) would be if a pointer existed to the node after which the list is split off, but that is different from specifying a target node index. Alternatively, an index could be kept mapping the node index to the corresponding node pointer, but it would be extra space and processing overhead in keeping the index updated in sync with list operations.
pub fn split_off(&mut self, at: usize) -> LinkedList<T>
Splits the list into two at the given index. Returns everything after the given index, including the index.
This operation should compute in O(n) time.
The documentation is either:
unclear, if n is supposed to be the index,
pessimistic, if n is supposed to be the length of the list (the usual meaning).
The proper complexity, as can be seen in the implementation, is O(min(at, n - at)) (whichever is smaller). Since at must be smaller than n, the documentation is correct that O(n) is a bound on the complexity (reached for at = n / 2); however, such a loose bound is unhelpful.
That is, the fact that list.split_off(5) takes the same time if list.len() is 10 or 1,000,000 is quite important!
As to why this complexity, this is an inherent consequence of the structure of doubly-linked list. There is no O(1) indexing operation in a linked-list, after all. The operation implemented in C, C++, C#, D, F#, ... would have the exact same complexity.
Note: I encourage you to write a pseudo-code implementation of a linked-list with the split_off operation; you'll realize this is the best you can get without altering the data-structure to be something else.
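Taking up that suggestion, here is a minimal Python sketch (a hypothetical illustration, not the standard library's actual implementation) of a doubly linked list whose split_off must first walk to the index:
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None
        self.len = 0

    def push_back(self, value):
        node = Node(value)
        if self.tail is None:
            self.head = node
        else:
            self.tail.next = node
            node.prev = self.tail
        self.tail = node
        self.len += 1

    def split_off(self, at):
        # Sketch only: assumes 0 < at < self.len (the real method handles the edges).
        # Step 1, the traversal, is O(min(at, len - at)): walk from the nearer end.
        if at <= self.len - at:
            node = self.head
            for _ in range(at):                 # at steps from the front
                node = node.next
        else:
            node = self.tail
            for _ in range(self.len - at - 1):  # len - at - 1 steps from the back
                node = node.prev
        # Step 2, the split itself, is O(1): just rewire a few pointers.
        tail = DoublyLinkedList()
        tail.head, tail.tail, tail.len = node, self.tail, self.len - at
        self.tail, self.len = node.prev, at
        node.prev.next = None
        node.prev = None
        return tail

lst = DoublyLinkedList()
for v in range(10):
    lst.push_back(v)
back = lst.split_off(5)
print(lst.len, back.len)  # 5 5
Only the traversal in step 1 depends on the index and the length; the pointer surgery in step 2 is constant time. That is why the cost is O(min(at, n - at)), conservatively documented as O(n), and why nothing faster is possible without already holding a pointer to the split point.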
Has anyone implemented a capped data structure of any kind in Redis? I'm working on building something like a news feed. The feed will wind up being manipulated and read from very frequently, and holding it in a sorted set in Redis would be cheap and perfect for my use case. The only issue is that I only ever need n items per feed, and I'm worried about memory overflow, so I'd like to ensure each feed never gets above n items. It seems pretty trivial to make a capped sorted collection in Redis with Lua:
redis-cli EVAL "$(cat update_feed.lua)" 1 feeds:some_feed thing_to_add n "$(date +%s)"
Where update_feed.lua looks something like this (untested; note that os.time() is not available inside Redis' Lua sandbox, so the timestamp is passed in as an argument):
-- KEYS[1] = feed key; ARGV[1] = member; ARGV[2] = cap n; ARGV[3] = timestamp score
redis.call('ZADD', KEYS[1], ARGV[3], ARGV[1])
local num = redis.call('ZCARD', KEYS[1])
if num > tonumber(ARGV[2]) then
    -- drop the lowest-ranked (oldest) entries so only n remain
    redis.call('ZREMRANGEBYRANK', KEYS[1], 0, num - tonumber(ARGV[2]) - 1)
end
That's not bad at all, and pretty cheap, but it seems like such a basic thing that it could be done much more cheaply by instantiating the sorted set with only n buckets to begin with. I can't find a way to do that in Redis, so I guess my question is: did I miss something, and if I didn't, why is there no structure for this in Redis? Even if it just ran the basic Lua script I described, this seems like a typical enough use case that it ought to be offered as an option on Redis data structures.
You can use LTRIM if it is a list.
Excerpt from the documentation.
LPUSH mylist someelement
LTRIM mylist 0 99
This pair of commands will push a new element on the list, while making sure that the list will not grow larger than 100 elements. This is very useful when using Redis to store logs for example. It is important to note that when used in this way LTRIM is an O(1) operation because in the average case just one element is removed from the tail of the list.
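Here is the same pattern from application code, sketched with the redis-py client (the client choice and function name are assumptions, not part of the question):
import redis

r = redis.Redis()  # assumes a Redis server on localhost

def log_event(key, element, cap=100):
    # LPUSH then LTRIM in one round trip; the list never grows past `cap`.
    pipe = r.pipeline()
    pipe.lpush(key, element)
    pipe.ltrim(key, 0, cap - 1)
    pipe.execute()

log_event("mylist", "someelement")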
I use sorted sets myself for this. I too thought about using lists, but then I found that manipulating the INSIDE of a list is fairly expensive -- O(n) -- while manipulating the inside of a sorted set is O(log n).
That's what sealed the deal for me: will you ever be manipulating the inside of the set? If so, stick with sorted sets and just flush the oldest entries whenever you have to, just like you were thinking.
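A minimal sketch of that flush-the-oldest idea, again assuming the redis-py client (function name and cap value are invented for illustration):
import time
import redis

r = redis.Redis()  # assumes a Redis server on localhost

def push_to_feed(feed_key, item, cap):
    # Add the item scored by the current time, then delete the lowest-ranked
    # (oldest) entries so that at most `cap` items remain.
    pipe = r.pipeline()
    pipe.zadd(feed_key, {item: time.time()})
    pipe.zremrangebyrank(feed_key, 0, -(cap + 1))
    pipe.execute()

push_to_feed("feeds:some_feed", "thing_to_add", 1000)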
Detecting cycles in a singly linked list is a well-known problem. I know that this question has been asked a zillion times all over the internet. The reason why I am asking it again is that I thought of a solution which I did not encounter in other places. (I admit I haven't searched that deeply either.)
My solution is:
Given a linked list and a pointer to some node, break the link between node and node->next().
Then start at node->next() and traverse until either you hit an end (which means there was no loop) or you reach node, which means there was a loop.
Is there anything wrong/good about the above solution?
Note: Do join the link back once you are done.
That will work to detect complete cycles (i.e., cycles with a period of the whole list), e.g.:
A -> B -> C -> D -> A
But what if we have a cycle somewhere else in the list?
e.g.,
A -> B -> C -> D -> E -> C
I can't see that your algorithm will detect the cycle in this case: starting from B, the traversal would spin around the C -> D -> E cycle forever, never reaching either an end or the node A where the link was broken.
Keep in mind that to detect the first case, we need not even break the link. We could just traverse the list and keep comparing the next link for each node with the head element to see if we'd started back at the start yet (or hit the end).
I guess the most trivial approach (not necessarily the best, but one that everybody should know how to implement in Java in a few lines of code) is to build a hash set of the nodes, adding them one by one until you find one that you have already seen. It takes extra memory, though.
If you can mark nodes, start marking them until you find one you marked before (the hash map is essentially an external marker).
And check the usual graph theory books...
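Here is a concrete sketch of that hash-set approach (in Python rather than Java, for brevity; the Node type is made up), which also catches the partial cycle from the earlier answer:
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    # Hash-set approach: remember every node seen; meeting one again means a cycle.
    # O(n) time, O(n) extra memory, and the list is never modified.
    seen = set()
    node = head
    while node is not None:
        if node in seen:  # hashed by identity, so duplicate values are fine
            return True
        seen.add(node)
        node = node.next
    return False

# The partial cycle from above: A -> B -> C -> D -> E -> C
a, b, c, d, e = (Node(v) for v in "ABCDE")
a.next, b.next, c.next, d.next, e.next = b, c, d, e, c
print(has_cycle(a))  # True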
You are not allowed to break a link, even if you join it back at the end. What if other programs read the list at the same time?
The algorithm must not damage the list while working on it.