I'm trying to use Propel's NestedSet feature. However, I'm missing something about how to insert nodes so that the tree stays balanced as it is created (i.e. it gets filled in horizontally).
Say I have these elements:
root
r1c1 r1c2
r2c1 r2c2
I want to insert r2c3 as the 1st child of r1c2 (i.e. fill row 2 before starting on row 3).
My first stab at this was to create this function:
function where(User $root, $depth = 0)
{
    $num = $root->getNumberOfDescendants();
    if ($num < 2)
        return $root;
    foreach ($root->getChildren() as $d)
    {
        if ($d->getNumberOfChildren() < 2)
        {
            return $d;
        }
    }
    foreach ($root->getChildren() as $d)
    {
        return where($d, $depth + 1);
    }
}
However, this will insert the child under r2c1, rather than at r1c2 as I want.
Is there a way to insert an entry into the tree at the next available spot somehow?
TIA
Mike
OK, thanks to http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/, I found that this algorithm will do what I want:
function where($root)
{
    $num = $root->getNumberOfDescendants();
    if ($num < 2)
        return $root;
    $finder = DbFinder::from('User')->
        where('LeftId', '>=', $root->getLeftId())->
        where('RightId', '<=', $root->getRightId())->
        whereCustom('user.RightId = user.LeftId + ?', 1, 'left')->
        whereCustom('user.RightId = user.LeftId + ?', 3, 'right')->
        combine(array('left', 'right'), 'or')->
        orderBy('ParentId');
    return $finder->findOne();
}
It basically executes this SQL:
SELECT u.*
FROM user u
WHERE u.LEFT_ID >= $left AND u.RIGHT_ID <= $right AND
(u.RIGHT_ID = u.LEFT_ID+1 OR u.RIGHT_ID = u.LEFT_ID+3)
ORDER BY u.PARENT_ID
LIMIT 1
A leaf has RIGHT = LEFT + 1; a node with one child has RIGHT = LEFT + 3. By ordering on PARENT_ID we find the highest available node in the tree; ordering on LEFT_ID or RIGHT_ID instead does not balance the tree.
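For completeness, the node returned by where() is then just the attachment point for the new entry. A minimal usage sketch, assuming the nested-set behaviour in use exposes insertAsFirstChildOf() (adjust to whatever insert method your Propel/plugin version actually provides):

$parent = where($root);

$node = new User();
// insertAsFirstChildOf() is assumed here from Propel's nested-set behaviour;
// insertAsLastChildOf() would fill the row just as well.
$node->insertAsFirstChildOf($parent);
$node->save();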
I am looking at the following Geeks for Geeks problem:
Given two sorted linked lists consisting of N and M nodes respectively, the task is to merge both of the lists (in place) and return the head of the merged list.
Example 1
Input:
N = 4, M = 3
valueN[] = {5,10,15,40}
valueM[] = {2,3,20}
Output: 2 3 5 10 15 20 40
Explanation: After merging the two linked
lists, we have merged list as 2, 3, 5,
10, 15, 20, 40.
The answer below is the GFG answer. I don't understand how its space complexity is O(1). We are creating a new node, so it seems it should be O(m+n).
Node* sortedMerge(Node* head1, Node* head2)
{
    struct Node* dummy = new Node(0);
    struct Node* tail = dummy;
    while (1) {
        if (head1 == NULL) {
            tail->next = head2;
            break;
        }
        else if (head2 == NULL) {
            tail->next = head1;
            break;
        }
        if (head1->data <= head2->data) {
            tail->next = head1;
            head1 = head1->next;
        }
        else {
            tail->next = head2;
            head2 = head2->next;
        }
        tail = tail->next;
    }
    return dummy->next;
}
Could someone explain how the space complexity is O(1) here, given that we are creating a new node?
Why should it be O(m+n) when it creates one node? The size of that node is a constant, so one node represents O(1) space complexity. Creating one node has nothing to do with the size of either of the input lists. Note that the node is created outside of the loop.
It is actually done this way to keep the code simple, but the merge could be done even without that dummy node.
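For illustration, here is a sketch of the same merge without the dummy node (a hypothetical variant, not from the GFG article); it still uses O(1) extra space, just with a little more case handling up front:

Node* sortedMergeNoDummy(Node* head1, Node* head2)
{
    // Empty-list cases: nothing to merge.
    if (head1 == NULL) return head2;
    if (head2 == NULL) return head1;

    // Pick the smaller head as the head of the merged list.
    Node* head;
    if (head1->data <= head2->data) { head = head1; head1 = head1->next; }
    else                            { head = head2; head2 = head2->next; }

    // Relink the remaining nodes in sorted order, reusing the existing nodes.
    Node* tail = head;
    while (head1 != NULL && head2 != NULL) {
        if (head1->data <= head2->data) { tail->next = head1; head1 = head1->next; }
        else                            { tail->next = head2; head2 = head2->next; }
        tail = tail->next;
    }
    // Append whatever is left of the non-empty list.
    tail->next = (head1 != NULL) ? head1 : head2;
    return head;
}

Either way, only a constant number of pointers is used, regardless of N and M.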
I'm currently working on a graph where nodes are connected via probabilistic edges. The weight on each edge defines the probability of existence of the edge.
Here is an example graph to get you started
(A)-[0.5]->(B)
(A)-[0.5]->(C)
(B)-[0.5]->(C)
(B)-[0.3]->(D)
(C)-[1.0]->(E)
(C)-[0.3]->(D)
(E)-[0.3]->(D)
I would like to use the Neo4j Traversal Framework to traverse this graph starting from (A) and return the number of nodes that have been reached based on the probability of the edges found along the way.
Important:
Each node that is reached can only be counted once. -> If (A) reaches (B) and (C), then (C) need not reach (B). On the other hand if (A) fails to reach (B) but reaches (C) then (C) will attempt to reach (B).
The same goes if (B) reaches (C), (C) will not try and reach (B) again.
This is a discrete time-step process: a node will only attempt to reach a neighboring node once.
To test the existence of an edge (whether we traverse it) we can generate a random number and verify if it's smaller than the edge weight.
I have already coded part of the traversal description as follows. (Here it is possible to start from multiple nodes but that is not necessary to solve the problem.)
TraversalDescription traversal = db.traversalDescription()
    .breadthFirst()
    .relationships( Rels.INFLUENCES, Direction.OUTGOING )
    .uniqueness( Uniqueness.NODE_PATH )
    .uniqueness( Uniqueness.RELATIONSHIP_GLOBAL )
    .evaluator(new Evaluator() {
        @Override
        public Evaluation evaluate(Path path) {
            // Get the current node
            Node curNode = path.endNode();
            // If the current node is a start node it has no previous relationship,
            // so just add it to the result and keep traversing
            if (startNodes.contains(curNode)) {
                return Evaluation.INCLUDE_AND_CONTINUE;
            }
            // Otherwise...
            else {
                // Get the current relationship
                Relationship curRel = path.lastRelationship();
                // Instantiate random number generator
                Random rnd = new Random();
                // Get a random number (between 0 and 1)
                double rndNum = rnd.nextDouble();
                // The edge "exists" if the random number is smaller than the weight wc
                if (rndNum < (double) curRel.getProperty("wc")) {
                    String info = "";
                    if (curRel != null) {
                        Node prevNode = curRel.getOtherNode(curNode);
                        info += "(" + prevNode.getProperty("name") + ")-[" + curRel.getProperty("wc") + "]->";
                    }
                    info += "(" + curNode.getProperty("name") + ")";
                    info += " :" + rndNum;
                    System.out.println(info);
                    // Keep node and keep traversing
                    return Evaluation.INCLUDE_AND_CONTINUE;
                } else {
                    // Don't save node in result and stop traversing
                    return Evaluation.EXCLUDE_AND_PRUNE;
                }
            }
        }
    });
I keep track of the number of nodes reached like so:
long score = 0;
for (Node currentNode : traversal.traverse( nodeList ).nodes())
{
    System.out.print(" <" + currentNode.getProperty("name") + "> ");
    score += 1;
}
The problem with this code is that, although NODE_PATH uniqueness is set, cycles can still occur, which I don't want.
Therefore, I would like to know:
Is there a solution to avoid cycles and count exactly the number of nodes reached?
And ideally, is it possible (or better) to do the same thing using PathExpander, and if yes how can I go about coding that?
Thanks
This certainly isn't the best answer.
Instead of iterating on nodes() I iterate on the paths, and add the endNode() to a set and then simply get the size of the set as the number of unique nodes.
HashSet<String> nodes = new HashSet<>();
for (Path path : traversal.traverse(nodeList))
{
    Node currNode = path.endNode();
    String val = String.valueOf(currNode.getProperty("name"));
    nodes.add(val);
    System.out.println(path);
    System.out.println("");
}
score = nodes.size();
Hopefully someone can suggest a more optimal solution.
I'm still surprised, though, that NODE_PATH did not prevent cycles from forming.
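A note on that last point: NODE_PATH only guarantees a node does not repeat within a single path, not across the whole traversal, which is why revisits can still happen. If global uniqueness is acceptable, one possible PathExpander-based variant (an untested sketch against the embedded traversal API, using the same Rels.INFLUENCES relationship type and "wc" property as above, plus java.util.List/ArrayList) could flip the weighted coin while expanding and rely on NODE_GLOBAL so every node is expanded and counted at most once:

PathExpander<Object> probabilisticExpander = new PathExpander<Object>() {
    private final Random rnd = new Random();

    @Override
    public Iterable<Relationship> expand(Path path, BranchState<Object> state) {
        List<Relationship> accepted = new ArrayList<>();
        for (Relationship rel : path.endNode()
                .getRelationships(Direction.OUTGOING, Rels.INFLUENCES)) {
            // Keep the edge only if the random draw falls under its weight.
            if (rnd.nextDouble() < (double) rel.getProperty("wc")) {
                accepted.add(rel);
            }
        }
        return accepted;
    }

    @Override
    public PathExpander<Object> reverse() {
        return this; // reverse traversal is not used here
    }
};

TraversalDescription traversal = db.traversalDescription()
        .breadthFirst()
        .expand(probabilisticExpander)
        .uniqueness(Uniqueness.NODE_GLOBAL);

long score = 0;
for (Node node : traversal.traverse(nodeList).nodes()) {
    score += 1; // each node can only appear once under NODE_GLOBAL
}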
I want to randomly populate a grid in Lua using a list of possible items, which is defined as follows:
-- Items
items = {}
items.glass = {}
items.glass.color = colors.blue
items.brick = {}
items.brick.color = colors.red
items.grass = {}
items.grass.color = colors.green
So the keys of the table are "glass", "brick" and "grass".
How do I randomly select one of these keys if they are not addressable by a numeric index?
Well, I kind of got a workaround, but I would be open to any better suggestions.
The first solution consists of having a secondary table which serves as an index to the first table:
item_index = {"grass", "brick", "glass"}
Then I can randomly pick a key from this index table (board is a matrix that stores the randomly chosen entry from item_index):
local index = math.random(1,3)
board[i][j] = item_index[index]
After which I can get details of the original list as follows:
items[board[y][x]].color
The second solution, which I have decided on, involves adding the defined elements as array elements to the original table:
-- Items
items = {}
items.glass = {}
items.glass.color = colors.blue
table.insert(items, items.glass) --- Add item as array item
items.brick = {}
items.brick.color = colors.red
table.insert(items, items.brick) --- Add item as array item
items.grass = {}
items.grass.color = colors.green
table.insert(items, items.grass) --- Add item as array item
Then, I can address the elements directly using an index:
local index = math.random(1,3)
board[i][j] = items[index]
And they can be retrieved directly without the need for an additional lookup:
board[y][x].color
Although your second method gives more concise syntax, I think the first is easier to maintain. I can't test here, but I think you can get the best of both; won't this work:
local items = {
    glass = {
        color = colors.blue,
    },
    brick = {
        color = colors.red,
    },
    grass = {
        color = colors.green,
    },
}
local item_index = {"grass", "brick", "glass"}
local index = math.random(1,3)
board[i][j] = items[item_index[index]]
print('color:', board[i][j].color)
If your table is not too big, you can just break off at a random point. This method assumes that you know the number of entries in the table (which is not the same as the #table value if the table has non-numeric keys).
So find the length of the table, then break at random(1, length(table)), like so:
local items = {} ....
items.grass.color = colors.green

local numitems = 0 -- find the size of the table
for k, v in pairs(items) do
    numitems = numitems + 1
end

local randval = math.random(1, numitems) -- get a random point
local randentry
local count = 0
for k, v in pairs(items) do
    count = count + 1
    if count == randval then -- stop at the randomly chosen entry
        randentry = { key = k, val = v }
        break
    end
end
The good: you don't have to keep track of the keys; it can be any table, and you don't need to maintain a separate index.
The bad and ugly: it is O(n), with two linear passes, so it is not at all ideal if you have a big table.
The above answers assume you know what all of the keys are, which isn't something I was able to do earlier today. My solution:
function table.randFrom( t )
    local choice = "F" -- placeholder; always replaced on the first iteration
    local n = 0
    for i, o in pairs(t) do
        n = n + 1
        if math.random() < (1/n) then
            choice = o
        end
    end
    return choice
end
Explanation: we can't use table.getn( t ) to get the size of the table, so we track it as we go. The first item will have a 1/1=1 chance of being picked; the second 1/2 = 0.5, and so on...
If you expand this to N items, the Nth item will have a 1/N chance of being chosen. The first item is always chosen initially, and it survives every later replacement with probability (1 - 1/2)(1 - 1/3)...(1 - 1/N) = (1/2)(2/3)...((N-1)/N) = 1/N, which is equal to the chance of the last item being chosen.
Thus, each item in the array has an equal likelihood of being chosen; it is uniformly random. This also runs in O(n) time, which isn't great but it's the best you can do if you don't know your index names.
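Using it with the items table from the question would then look something like this (remember to seed the generator once at startup; board, i and j are just the grid variables from the question):

math.randomseed(os.time()) -- seed once, e.g. at program start

-- pick a random item definition without knowing the keys in advance
board[i][j] = table.randFrom(items)
print(board[i][j].color)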
How do I make a query in AX with advanced filtering (in X++)?
I want to build filter criteria on the SalesTable form that show SalesTable.SalesId == "001" || SalesLine.LineAmount == 100.
So the result should show sales order 001 and any other sales orders that have at least one SalesLine with LineAmount == 100.
Jan's solution works fine if sales order '001' should only be selected if it has sales lines. If it doesn't have lines it won't appear in the output.
If it is important to you that sales order '001' should always appear in the output even if it doesn't have sales lines, you can do it via union as follows:
static void AdvancedFiltering(Args _args)
{
    Query                   q;
    QueryRun                qr;
    QueryBuildDataSource    qbds;
    SalesTable              salesTable;
    ;
    q = new Query();
    q.queryType(QueryType::Union);

    qbds = q.addDataSource(tablenum(SalesTable), identifierstr(SalesTable_1));
    qbds.fields().dynamic(false);
    qbds.fields().clearFieldList();
    qbds.fields().addField(fieldnum(SalesTable, SalesId));
    qbds.addRange(fieldnum(SalesTable, SalesId)).value(queryValue('001'));

    qbds = q.addDataSource(tablenum(SalesTable), identifierstr(SalesTable_2), UnionType::Union);
    qbds.fields().dynamic(false);
    qbds.fields().clearFieldList();
    qbds.fields().addField(fieldnum(SalesTable, SalesId));

    qbds = qbds.addDataSource(tablenum(SalesLine));
    qbds.relations(true);
    qbds.joinMode(JoinMode::ExistsJoin);
    qbds.addRange(fieldnum(SalesLine, LineAmount)).value(queryValue(100));

    qr = new QueryRun(q);
    while (qr.next())
    {
        salesTable = qr.get(tablenum(SalesTable));
        info(salesTable.SalesId);
    }
}
The AX select statement supports exists join such as:
while select salesTable
    exists join salesLine
    where salesLine.SalesId == salesTable.SalesId &&
          salesLine.LineAmount == 100
X++ does not support an exists clause as a subquery in the where clause, so it is not possible to combine the exists with an or condition directly in a select statement.
However AX supports query expressions in a query.
Therefore your query should be possible to express like this:
static void TestQuery(Args _args)
{
    SalesTable st;
    QueryRun qr = new QueryRun(new Query());
    QueryBuildDataSource qst = qr.query().addDataSource(tableNum(SalesTable));
    QueryBuildDataSource qsl = qst.addDataSource(tableNum(SalesLine));
    str qstr = strFmt('((%1.SalesId == "%2") || (%3.LineAmount == %4))',
                      qst.name(), queryValue("001"),
                      qsl.name(), queryValue(100));

    qsl.relations(true); // Link on SalesId
    qsl.joinMode(JoinMode::ExistsJoin);
    qsl.addRange(fieldNum(SalesLine, RecId)).value(qstr);

    info(qstr);           // This is the query expression
    info(qst.toString()); // This is the full query

    while (qr.next())
    {
        st = qr.get(tableNum(SalesTable));
        info(st.SalesId);
    }
}
However, if sales order 001 does not contain lines, it will not be selected.
Other than that the output is as you requested:
((SalesTable_1.SalesId == "001") || (SalesLine_1.LineAmount == 100))
SELECT FIRSTFAST * FROM SalesTable
EXISTS JOIN FIRSTFAST * FROM SalesLine WHERE SalesTable.SalesId =
SalesLine.SalesId AND ((((SalesTable_1.SalesId == "001") ||
(SalesLine_1.LineAmount == 100))))
001
125
175
I would like to disregard the first n items in a pager list of items, i.e. they are being used elsewhere in the design.
So my pager list needs to be as such:
Page 1: Items 8 - 17
Page 2: Items 18 - 27
Page 3: Items 28 - 37
...
However, setting an offset or limit in the criteria object does nothing. I presume they are used by the pager itself.
Is it possible to add a offset to the pager class in some other way?
OK, I have got around the problem by modifying sfPropelPager.class.php and putting it into a new class file, which I have named atPropelPagerOffset.class.php.
It works in exactly the same way, apart from taking an extra parameter, $offset.
So the top of the file looks like:
protected
    $criteria               = null,
    $peer_method_name       = 'doSelect',
    $peer_count_method_name = 'doCount',
    $offset                 = 0;

public function __construct($class, $maxPerPage = 10, $offset = 0)
{
    parent::__construct($class, $maxPerPage);

    $this->setCriteria(new Criteria());
    $this->tableName = constant($class.'Peer::TABLE_NAME');
    $this->offset = $offset;
}
Then I have made this tiny change around line 50
$c->setOffset($offset+$this->offset);
Works a treat!
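Usage is then the same as with the stock sfPropelPager, plus the extra offset argument (a hypothetical sketch; 'Item' is just a stand-in for whatever Propel model you are paging):

// Skip the first 7 records, so page 1 starts at item 8 as described above.
$pager = new atPropelPagerOffset('Item', 10, 7);
$pager->setPage($request->getParameter('page', 1));
$pager->init();
$items = $pager->getResults();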
A simpler solution would be a custom select method:
$pager->setPeerMethod('doSelectCustom');
and then put your logic in the model Peer Class:
public static function doSelectCustom($c)
{
    $c2 = clone $c;
    $offset = $c2->getOffset();
    $limit  = $c2->getLimit();
    $someCustomVar = someClass::someMethod();

    if ($offset == 0) // we are on the first page
    {
        $c2->setLimit($limit - $someCustomVar);
        $c2->add(self::SOMECOLUMN, false);
    }
    else
    {
        $c2->setOffset($offset - $someCustomVar);
    }

    return self::doSelectRS($c2); // or doSelect if you want to retrieve objects
}