Whilst attempting to perform AST-based type-2 clone detection, I encountered the following:
class TestEnum {
    enum Level {
        LOW,
        MEDIUM,
        HIGH
    }
}
When visiting the AST obtained with createAstFromFile from a file that contains only the example above, the class and enum nodes representing lines 1 and 2 have the default unknown source location (|unknown:///|), whereas lines 3 through 5 are represented by enumConstants with their source location set as expected.
What causes the class and enum nodes to have no actual source location set? What is a valid approach to finding all AST nodes which start on a specific line?
I'm working on a project that requires me to add a model through the Parser (which requires the plant to be of the same scalar type as the arrays used) before setting the position of the model in said plant and taking distance queries. These queries only work when the query object generated from the scene graph is of type float.
I've run into a problem where setting the position doesn't work because the array being used is of type AutoDiff. A possible solution would be converting the float plant to AutoDiff with plant.ToAutoDiffXd(), but this only creates a copy of the plant without coupling it to the scene graph (and in turn the query object) from which the queries are derived. Taking queries with a query object generated from the original plant would then fail to reflect the new position passed to the AutoDiff copy.
Is there a way to create a new scene graph from the already finalized symbolic copy of the original plant, so that I can perform the queries with it?
A couple of thoughts:
Don't just convert the plant to autodiff. Convert the whole diagram. That will give you a converted, connected network.
You're stuck with the current workflow. Presumably, your proximity geometries are specified in your parsed file (as <collision> tags). The parsing process is ephemeral. The declaration is consumed, passed through MultibodyPlant into SceneGraph. If there is no SceneGraph at parse time, all knowledge of the declared collision geometry is forgotten.
So, the typical workflow is:
Create a float-valued diagram.
Scalar convert it to an AutoDiff-valued diagram.
Keep both around to serve the different roles.
We don't have a tutorial that directly shows scalar converting an entire diagram, but it's akin to what is shown in this MultibodyPlant-specific tutorial. Just call ToScalarType() on the Diagram root.
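A minimal pydrake sketch of that workflow (the URDF path is a placeholder, and Parser.AddModels / GetSubsystemByName assume a reasonably recent Drake):

from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.framework import DiagramBuilder

# 1. Build the float-valued diagram: plant and scene graph wired together.
builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.0)
Parser(plant).AddModels("model.urdf")  # <collision> geometry flows into scene_graph here
plant.Finalize()
diagram = builder.Build()

# 2. Scalar-convert the *whole* diagram; the AutoDiff plant stays connected
#    to the AutoDiff scene graph, so geometry queries keep working.
diagram_ad = diagram.ToAutoDiffXd()
plant_ad = diagram_ad.GetSubsystemByName(plant.get_name())

# 3. Keep both diagrams around to serve their different roles.
context_ad = diagram_ad.CreateDefaultContext()
plant_context_ad = plant_ad.GetMyContextFromRoot(context_ad)
# Set AutoDiff-valued positions in plant_context_ad and evaluate the distance
# queries against diagram_ad; use the original float diagram for everything else.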
I'm attempting to do some graph normalization following the URDNA2015 algorithm.
If I'm understanding the spec, blank nodes should have labels like _:c14nX where X is an incrementing counter.
I can produce a graph that has blank nodes with these labels, but when serializing the graph to NTRIPLES these run through NodeFmtLib#encodeBNodeLabel which performs some encoding -- at the very least always prefixes the resulting node with 'B'. For example c14n92 -> Bc14n92 or _:c14n92 -> BX5FX3Ac14n92 due to hex encoding.
My serialization code is very basic currently:
StringWriter sw = new StringWriter();
RDFDataMgr.write(sw, normalizedGraph, Lang.NTRIPLES);
What is the suggested way of having finer control over this serialization?
EDIT:
One approach I found that works, but I'm not sure if it is the recommended way:
RDFWriterRegistry.register(RDFFormat.NTRIPLES_UTF8, new CustomWriterGraphRIOTFactory());
Then implement a chain of classes that override:
WriterGraphRIOTFactory
NTriplesWriter
StreamRDFLib
WriterStreamRDFPlain
NodeFormatter
to ultimately get to a place of overriding formatBNode:
import org.apache.jena.atlas.io.AWriter;
import org.apache.jena.atlas.lib.CharSpace;   // package locations per Jena 3.x; adjust for your version
import org.apache.jena.riot.out.NodeFormatterNT;

public class CustomNodeFormatter extends NodeFormatterNT {
    public CustomNodeFormatter(CharSpace charSpace) {
        super(charSpace);
    }

    @Override
    public void formatBNode(AWriter w, String label) {
        // Emit the label verbatim, bypassing NodeFmtLib#encodeBNodeLabel.
        w.print(label);
    }
}
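With the writer factory registered as above, the original RDFDataMgr.write(sw, normalizedGraph, Lang.NTRIPLES) call should then emit the blank node labels exactly as they appear in the graph, rather than the encoded Bc14nX / BX5FX3Ac14nX forms.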
The Jena writers work on graphs, and a graph is a set of triples - unordered. As triples can be deleted and re-added, order isn't easy to preserve even in a single-threaded program, because changes to the graph may reorder hash tables.
If you are doing this from JSON-LD - Jena currently uses jsonld-java - check whether the JSON-LD parsing is in a consistent order and labelling.
If you want to respect the order in the syntax of other formats, look at parsing files to a StreamRDF object (the parser output stream), as well as having a custom FactoryRDF (which controls the label used for each blank node; you could, for example, make them 1, 2, 3 at this point).
RDFParser.create().source(...).factory(FactoryRDF).parse(StreamRDF);
Note that when producing output without control of the input, the order of the output may change from run to run, as blank nodes get different ids on each parser run.
I have wanted to study TensorFlow for a long time, so I am reading its source code, but I am just at the beginning. For example:
I can't find the deeper-level implementation of the function.
Where is the Shape function actually computed?
The code fragment that you have shown is an automatically generated piece of code that adds a "Shape" operation to the graph. The string "Shape" in the arguments to _op_def_lib.apply_op() determines the operation-type of the node. The standard operation types are registered in C++ source code, in the tensorflow/core/ops/ directory of the TensorFlow source code. In particular, the "Shape" operation is registered in tensorflow/core/ops/array_ops.cc. These registrations are used to define the types of the inputs to, attrs of, and outputs from each operation, and the Python wrappers are generated from these registrations.
The first time you run a subgraph containing that node (i.e. in a call to tf.Session.run()), TensorFlow will look up the appropriate kernel that implements the operation on a particular device. (For example, there are often separate kernels for CPU and GPU implementations of operations.) The standard kernel implementations are registered in C++ source code, in the tensorflow/core/kernels/ directory of the TensorFlow source code. In particular, the "Shape" kernels are registered in tensorflow/core/kernels/shape_ops.cc. The kernel registration names a class that implements the kernel, which must be a subclass of tensorflow::OpKernel, and in this case is the tensorflow::ShapeOp class. The constructor is called when the subgraph runs for the first time, and the Compute() method is called each time the operation runs.
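As a small illustration, here is a sketch using the TF 1.x-style API the question refers to (the placeholder shape and input values are just for demonstration):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
s = tf.shape(x)   # adds a node of operation-type "Shape" to the graph
print(s.op.type)  # -> "Shape", the string registered in array_ops.cc

with tf.Session() as sess:
    # The first run triggers the kernel lookup; ShapeOp::Compute() executes here.
    print(sess.run(s, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # -> [1 3]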
I tried to follow this tutorial on using ELKI with pre-computed distances for clustering.
http://elki.dbs.ifi.lmu.de/wiki/HowTo/PrecomputedDistances
I used the following set of command line options:
-dbc.filter FixedDBIDsFilter -dbc.startid 0 -algorithm clustering.OPTICS
-algorithm.distancefunction external.FileBasedDoubleDistanceFunction
-distance.matrix /path/to/matrix -optics.minpts 5 -resulthandler ResultWriter
ELKI fails with a configuration error saying a dbc.in file is needed for the computation.
The following configuration errors prevented execution:
No value given for parameter "dbc.in":
Expected: The name of the input file to be parsed.
No value given for parameter "parser.distancefunction":
Expected: Distance function used for parsing values.
My question is: what is the dbc.in file? Why should I provide it in addition to the distance matrix file, since the pairwise distance matrix completely specifies all the information about the point cloud? (Also, I don't have access to any information other than the pairwise distances.)
What should I do about dbc.in? Should I override it, or specify some dummy information, etc.? Kindly help me understand.
This is documented in the ELKI HowTos:
http://elki.dbs.ifi.lmu.de/wiki/HowTo/PrecomputedDistances
Using without primary data
-dbc DBIDRangeDatabaseConnection -idgen.count 100
However, there is a bug (a patch is on the HowTo page, and will be in the next release), so right now you can't fully use this; as a workaround, you can use a text file that enumerates the objects.
The reason for this is that ELKI is designed to work on multi-relational data; it's not just processing matrices. Some algorithms may, for example, need a geographic representation of an object, some measurements for this object, and a label for evaluation. That is three relations.
What the DBIDRange data source essentially does is create a single "fake" relation that is just the DBIDs 0 to 99. On algorithms that don't need actual data, but only distances (e.g. LOF or DBSCAN or OPTICS), it is sufficient to have object IDs and a distance matrix.
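Putting the pieces together, the invocation would look something like this (a sketch only; the application class and exact parameter names may vary between ELKI versions):

java -cp elki.jar de.lmu.ifi.dbs.elki.application.KDDCLIApplication \
  -dbc DBIDRangeDatabaseConnection -idgen.count 100 \
  -algorithm clustering.OPTICS \
  -algorithm.distancefunction external.FileBasedDoubleDistanceFunction \
  -distance.matrix /path/to/matrix \
  -optics.minpts 5 -resulthandler ResultWriter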
I am trying to adapt a monophone-based recogniser to a specific speaker. I am using the recipe given in HTKBook 3.4.1, section 3.6.2. I am getting stuck on the HHEd part, which I am invoking like so:
HHEd -A -D -T 1 -H hmm15/hmmdefs -H hmm15/macros -M classes regtree.hed monophones1eng
The error I end up with is as follows:
ERROR [+999] Components missing from Base Class list (2413 3375)
ERROR [+999] BaseClass check failed
The folder classes contains the file global which has the following contents:
~b "global"
<MMFIDMASK> *
<PARAMETERS> MIXBASE
<NUMCLASSES> 1
<CLASS> 1 {*.state[2-4].mix[1-25]}
The hmmdefs file within hmm15 had some mixture components missing (I am using 25 mixture components per state of each phone). I tried to "fill in the blanks" by adding mixture components with random mean and variance values but zero weights. This has had no effect.
The hmms are left-right hmms with 5 states (3 emitting), each state modelled by a 25 component mixture. Each component in turn is modelled by an MFCC with EDA components. There are 46 phones in all.
My questions are:
1. Is the way I am invoking HHEd correct? Can it be invoked in the above manner for monophones?
2. I know that the base class list (rtree.base) must contain every single mixture component, but where do I find these missing mixture components?
NOTE: Please let me know in case more information is needed.
Edit 1: The file regtree.hed contains the following:
RN "models"
LS "stats_engOnly_3_4"
RC 32 "rtree"
The way you invoke HHEd looks fine. The components are missing because they have become defunct. To deal with defunct components, read HTKBook 3.4.1, Section 8.4, page 137.
Questions:
- What does regtree.hed contain?
- How much data (in hours) are you using? 25 mixtures might be excessive.
You might want to use a more gradual increase in mixtures - MU +1 or MU +2 at a time - and limit the total number of mixtures (a guess: 3-8, depending on the amount of training data).
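For instance, a gradual increase could use an HHEd edit script like this (a sketch of the usual HTKBook mixture-splitting procedure; re-estimate with HERest for a few iterations after each increment):

MU +2 {*.state[2-4].mix}

Repeat the HHEd/HERest cycle until the target number of mixtures is reached.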