XPath 2.0: Leading '/' cannot select the root node of the tree containing the context item: the context item is not a node (Saxon)

Trying to check that one value (once tokenized) doesn't match another value in my document:
/foo/bar/baz/tokenize(value,',')[not(. =(/foo/biz/value/string(),'bing'))]
Specifically, I'm checking that each token of /foo/bar/baz/value (which is 'ding,dong,bing') doesn't match /foo/biz/value/string() or the literal 'bing'.
But I'm getting "Leading '/' cannot select the root node of the tree containing the context item: the context item is not a node"
Is there any way that I can do this in XPath, or do I need to get out into XQuery and start to worry about variables?

Given that you're using Saxon, you can take advantage of the fact that XPath 3.0 allows you to bind variables:
let $foo := /foo return $foo/bar/baz/tokenize(value,',')
[not(. =($foo/biz/value/string(),'bing'))]
or you could pull the expression out of the predicate:
let $exceptions := (/foo/biz/value/string(),'bing')
return /foo/bar/baz/tokenize(value,',')[not(. = $exceptions)]
If you want pure XPath 2.0 you can achieve the same with an ugly "for" binding:
for $foo in /foo return $foo/bar/baz/tokenize(value,',')
[not(. =($foo/biz/value/string(),'bing'))]
If you're in XSLT, of course, you can use current().
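For example, here is a minimal XSLT 2.0 sketch using the question's sample paths (so the element names are assumptions): inside the predicate the context item is a string token, but current() still refers to the baz element being processed, so root(current()) gets you back to the containing document:
<xsl:for-each select="/foo/bar/baz">
  <xsl:copy-of select="tokenize(value, ',')[not(. = (root(current())/foo/biz/value/string(), 'bing'))]"/>
</xsl:for-each>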

Nix maintain original order of a set

Assumptions:
You have yq and nix installed, on NixOS or some other Linux distro.
Question:
Can Nix maintain the original ordering of a set? For example, if I create a sample.nix file:
{ pkgs }:
let
  dockerComposeConfig = {
    version = "1.0";
    services = {
      srv1 = { name = "srv1"; };
      srv2 = { name = "srv2"; };
    };
  };
in pkgs.writeTextFile {
  name = "docker-compose.json";
  text = builtins.toJSON dockerComposeConfig;
}
When I build and convert the output to YAML (below), I notice that the set has been alphabetized by Nix. Is there a workaround that keeps my JSON in the same order as intended by a Docker user, i.e. such that the `dockerComposeConfig` attributes remain in the order they were created?
# Cmd1
nix-build -E "with import <nixpkgs> {}; callPackage ./sample.nix {}"
# Cmd2
cat /nix/store/SOMEHASH-docker-compose.json | yq r - --prettyPrint
Nix attribute sets don't have an ordering to their attributes; they are represented as a sorted array in memory. Canonicalizing values helps with reproducibility.
If it's really important, you could write a function that turns an ordered list of key/value pairs into a JSON object as a Nix string, but that's not going to be as easy to use as builtins.toJSON. I'd consider the JSON as "compiled build output" and not worry too much about aesthetics.
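A rough sketch of that idea, using a hypothetical toOrderedJSON helper; note it only preserves order at the level you spell out explicitly, since nested values still go through builtins.toJSON and get sorted:
# Hypothetical helper: render an ordered list of { name, value } pairs as a
# JSON object string, preserving the list order at this level only.
let
  toOrderedJSON = pairs:
    "{"
    + builtins.concatStringsSep ","
        (map (p: builtins.toJSON p.name + ":" + builtins.toJSON p.value) pairs)
    + "}";
in toOrderedJSON [
  { name = "version"; value = "1.0"; }
  { name = "services"; value = { srv1 = { name = "srv1"; }; }; }
]
# => {"version":"1.0","services":{"srv1":{"name":"srv1"}}}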
Side note: Semantically, they are not even created in any order. The Nix language is declarative: a Nix expression (excluding derivations) describes something that is, not how to create it, although it may be defined in terms of functions.
This is necessary for Nix's laziness to be effective.

Pattern comprehension and neo4j-embedded

I have a problem when using a pattern comprehension with neo4j-embedded (version 3.5.3).
For example, this kind of query works perfectly fine with Neo4j Enterprise 3.5.3, but does not work with neo4j-embedded:
MATCH (myNode:MyNode {myId:'myid'})
MATCH path = ( (myNode) -[*0..]- (otherNode:MyNode) )
WHERE
ALL(n in nodes(path) where [ (n)<--(state:MyState) | state.isConnected ][0] = true)
RETURN myNode, otherNode
The error I get when using neo4j-embedded is difficult to understand and looks like an internal error:
org.neo4j.driver.v1.exceptions.DatabaseException: This expression should not be added to a logical plan:
VarExpand(myNode, BOTH, OUTGOING, List(), otherNode, UNNAMED62, VarPatternLength(0,None), ExpandInto, UNNAMED62_NODES, UNNAMED62_RELS, Equals(ContainerIndex(PatternComprehension(None,RelationshipsPattern(RelationshipChain(NodePattern(Some(Variable( UNNAMED62_NODES)),List(),None,None),RelationshipPattern(Some(Variable( REL136)),List(),None,None,INCOMING,false,None),NodePattern(Some(Variable(state)),List(LabelName(MyState)),None,None))),None,Property(Variable(state),PropertyKeyName(isConnected))),Parameter( AUTOINT1,Integer)),True()), True(), List((Variable(n),Equals(ContainerIndex(PatternComprehension(None,RelationshipsPattern(RelationshipChain(NodePattern(Some(Variable(n)),List(),None,None),RelationshipPattern(Some(Variable( REL136)),List(),None,None,INCOMING,false,None),NodePattern(Some(Variable(state)),List(LabelName(MyState)),None,None))),None,Property(Variable(state),PropertyKeyName(isConnected))),Parameter( AUTOINT1,Integer)),True())))) {
LHS -> CartesianProduct() {
LHS -> Selection(Ands(Set(In(Property(Variable(myNode),PropertyKeyName(myId)),ListLiteral(List(Parameter( AUTOSTRING0,String))))))) {
LHS -> NodeByLabelScan(myNode, LabelName(MyNode), Set()) {}
}
RHS -> NodeByLabelScan(otherNode, LabelName(MyNode), Set()) {}
}
}
Any ideas?
It was quite a complicated issue, but here is the full explanation.
First, I found that it was not specific to neo4j-embedded. The internal error was raised because of an assert in Neo4j which triggers an exception only if the -ea (enable assertions) JVM flag is set, and that flag is set only when running tests with Maven or from an IDE.
Drilling down into Neo4j's code on GitHub, I also found that this assert was added because of some concerns about recursive pattern comprehensions. (The commit is here: https://github.com/neo4j/neo4j/commit/dfbe8ce397f7b72cf7d9b9ff1500f24a5c4b70b0)
In my case, I do use pattern comprehensions but not recursively, so I think everything should be fine, except when unit testing :)
I submitted the problem to Neo4j's support, and they will provide a fix in a future release.
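In the meantime, if the assert only bites during tests, one possible workaround (assuming it is Maven Surefire that enables -ea for the test JVM) is to turn assertions off for test runs:
<!-- Possible workaround: disable assertions (-ea) for the Surefire test JVM only. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <enableAssertions>false</enableAssertions>
  </configuration>
</plugin>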

How can I include the current workspace name in the default argument value of a rule?

Let's say I have a rule:
blah = rule(
    attrs = {
        "foo": attr.string(default = "#HELP#"),
    },
)
I want the default value of foo to contain the name of the workspace that invokes the rule. How can I accomplish this?
(Note: An acceptable approach is to leave a placeholder in the value and replace it when the rule uses the attribute, but I can't figure out how to get the current workspace there either. The closest I can find is ctx.label.workspace_root, but that is empty for the "main" workspace, and e.g. external/foo for other things.)
ctx.workspace_name does not give the correct answer. For example, if I print("'%s' -> '%s'" % (ctx.label.workspace_root, ctx.workspace_name)), I get results like:
'externals/foo' -> 'main'
'externals/bar' -> 'main'
...which is wrong; those should be 'foo' and 'bar', not 'main' ('main' being my main/root workspace). Note that labels from those contexts are e.g. '@foo//:foo', so Bazel does apparently know the correct workspace name.
You can use a placeholder attribute and then use ctx.workspace_name in the implementation.
def _impl(ctx):
    print("ws: %s" % ctx.workspace_name)

blah = rule(
    implementation = _impl,
)
As far as getting the workspace name, this seems sub-optimal, but also seems to work:
def _workspace(ctx):
    """Compute name of current workspace."""
    # Check for a meaningful workspace_root.
    workspace = ctx.label.workspace_root.split("/")[-1]
    if len(workspace):
        return workspace
    # If workspace_root is empty, assume we are the root workspace.
    return ctx.workspace_name
Per Kristina's answer and comment in the original question, this can then be used to replace a placeholder in the parameter value.
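Putting it together, a hedged sketch that keeps the question's #HELP# placeholder and reuses the _workspace helper above (names are illustrative, not a canonical API):
def _impl(ctx):
    # Swap the placeholder for the workspace name computed at analysis time.
    foo = ctx.attr.foo.replace("#HELP#", _workspace(ctx))
    print("resolved foo: %s" % foo)

blah = rule(
    implementation = _impl,
    attrs = {
        "foo": attr.string(default = "#HELP#"),
    },
)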

How do I do a "getElementsByTagName" using the Dart Petitparser XML parser?

I may have overlooked it somewhere, but what is the nice way to get all elements of a specific name (similar to the old getElementsByTagName) via the Dart version of PetitParser?
I managed to load an XML file and parse it successfully using PetitParser, but now I want to go through all nodes with a specific name (e.g. the "importantData" nodes in the sample below).
result.value.length also seems very high (16654) compared to the 665 "importantData" nodes from my test XML file, which live under result.value.children[1].children.
<xml>
  <toplevel>
    <importantData>
      <attribute1>Value</attribute1>
      <attribute2>Value</attribute2>
    </importantData>
    <importantData>
      <attribute1>Value</attribute1>
      <attribute2>Value</attribute2>
    </importantData>
    <importantData>
      <attribute1>Value</attribute1>
      <attribute2>Value</attribute2>
    </importantData>
    ...
  </toplevel>
</xml>
XmlNode is an Iterable<XmlNode> over all its children.
If root is the parsed root node of your XML tree, you can write:
for (var node in root) {
  if (node is XmlElement && node.name.local == 'importantData') {
    // do something with the node
  }
}
If you are more into functional programming, you can use the following expression returning an iterable over all elements in question:
root.where((node) => node is XmlElement && node.name.local == 'importantData')
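If you want something closer to getElementsByTagName, you could wrap that expression in a small helper (a sketch only; it assumes the same XmlNode/XmlElement types used above and whatever iteration order the parser provides):
// Hypothetical convenience wrapper around the expression above.
Iterable<XmlNode> elementsNamed(XmlNode root, String name) =>
    root.where((node) => node is XmlElement && node.name.local == name);

// Usage: var important = elementsNamed(root, 'importantData');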

How to handle un-named parameters when one can be piped

I want my PowerShell script to be able to handle two parameter sets, as shown below.
Set 1:
Param1: GroupName via pipe
Param2: FilePath
Called like: "GROUPNAME" | script.ps1 FilePath
Set 2:
Param1: GroupName
Param2: FilePath
Called like: script.ps1 GroupName FilePath
In both cases both arguments are mandatory.
I have tried everything I can think of and the closest I think I have gotten is this:
[CmdletBinding(DefaultParameterSetName="Pipe")]
param (
    [Parameter(Mandatory=$true, Position=0, ValueFromPipeline=$false, HelpMessage="AD Group Name", ParameterSetName="Param")]
    [Parameter(Mandatory=$true, ValueFromPipeline=$true, HelpMessage="AD Group Name", ParameterSetName="Pipe")]
    [ValidateNotNullOrEmpty()]
    [String]$GroupName,

    [Parameter(Mandatory=$true, Position=1, ValueFromPipeline=$false, HelpMessage="Path to CSV", ParameterSetName="Param")]
    [Parameter(Mandatory=$true, Position=0, ValueFromPipeline=$false, HelpMessage="Path to CSV", ParameterSetName="Pipe")]
    [ValidateNotNullOrEmpty()]
    [String]$FilePath
)
This does not work, as it always expects the second argument at position 1; any ideas?
You don't need two parameter sets. ValueFromPipeline=$true makes the function accept input from the pipeline, but doesn't require that it come from the pipeline - it can be specified as an argument just as well.
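A minimal sketch of that single-parameter-set approach, reusing the question's parameter names; note that in the pipeline case you pass -FilePath by name so the positional slot stays free for the pipeline-bound $GroupName:
[CmdletBinding()]
param (
    [Parameter(Mandatory=$true, Position=0, ValueFromPipeline=$true, HelpMessage="AD Group Name")]
    [ValidateNotNullOrEmpty()]
    [String]$GroupName,

    [Parameter(Mandatory=$true, Position=1, HelpMessage="Path to CSV")]
    [ValidateNotNullOrEmpty()]
    [String]$FilePath
)

process {
    # Works as: .\script.ps1 GROUPNAME data.csv
    # or as:    "GROUPNAME" | .\script.ps1 -FilePath data.csv
    Write-Verbose "Group: $GroupName, CSV: $FilePath"
}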
