In Thymeleaf 2.1.6 we had AbstractSingleAttributeModifierAttrProcessor. What are the classes I need to use if we are migrating to Thymeleaf 3.0.6? This class is not there any more. I see that many improvements were made in 3 with respect to processors - https://github.com/thymeleaf/thymeleaf/issues/400 and https://github.com/thymeleaf/thymeleaf/issues/399
Thanks
From
https://www.thymeleaf.org/doc/tutorials/2.1/extendingthymeleaf.pdf
Special kinds of processors
Although processors can execute on any node in the DOM tree, there are two specific kinds of processors that can benefit from performance improvements inside the Thymeleaf execution engine: attribute processors and element processors.
Attribute Processors
Those processors (implementations of IProcessor) which getMatcher() method returns a matcher implementing the org.thymeleaf.processor.IAttributeNameProcessorMatcher interface are considered “attribute processors”.
Because of the type of matchers they define, these processors are triggered when a DOM element (usually an XML/XHTML/HTML5 tag) contains an attribute with a specific name. For example, most processors in the Standard Dialects act like this, defining matchers for attributes like th:text, th:each, th:if, etc.
For the sake of simplicity, Thymeleaf offers an utility abstract class from which attribute processors can extend: org.thymeleaf.processor.attr.AbstractAttrProcessor. This class already returns as matcher an implementation of IAttributeNameProcessorMatcher and makes it easier to create this kind of processors.
According to the PDF's front page, this documentation refers to
Project version: 3.0.9.RELEASE
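If it helps as a pointer: the matcher-based classes quoted above belong to the 2.x API. In the Thymeleaf 3.0 API, attribute processors are typically written by extending org.thymeleaf.processor.element.AbstractAttributeTagProcessor and modifying the tag through the IElementTagStructureHandler. A rough sketch only - the processor name, dialect prefix and attribute name below are made up for illustration:

import org.thymeleaf.context.ITemplateContext;
import org.thymeleaf.engine.AttributeName;
import org.thymeleaf.model.IProcessableElementTag;
import org.thymeleaf.processor.element.AbstractAttributeTagProcessor;
import org.thymeleaf.processor.element.IElementTagStructureHandler;
import org.thymeleaf.templatemode.TemplateMode;

public class MyAttrProcessor extends AbstractAttributeTagProcessor {

    public MyAttrProcessor(final String dialectPrefix) {
        super(TemplateMode.HTML, // template mode this processor applies to
              dialectPrefix,     // e.g. "my", so the processor matches my:attr
              null, false,       // no element name restriction
              "attr", true,      // the attribute name to match, prefixed with the dialect prefix
              1000,              // precedence
              true);             // remove the processed attribute from the output
    }

    @Override
    protected void doProcess(final ITemplateContext context,
                             final IProcessableElementTag tag,
                             final AttributeName attributeName,
                             final String attributeValue,
                             final IElementTagStructureHandler structureHandler) {
        // Roughly what an "attribute modifier" processor did in 2.1:
        // set/replace an attribute on the current tag based on the processed attribute's value.
        structureHandler.setAttribute("title", attributeValue);
    }
}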
I added map(), reduce() and where(qlint: string) to a Spring4D fork of mine.
While I was programming these functions, I found out that there is a difference in the behaviour of the lists when they are created in different ways.
If I create them with TList<TSomeClass>.create, the objects in the enumerables are of the type TSomeClass.
If I create them with TCollections.CreateList<TSomeClass>, the objects in the enumerables are of the type TObject.
So the question is:
Is there a downside to using TList<TSomeClass>.create?
Or in other words: why should I use TCollections.CreateList<TSomeClass>?
btw: with TCollections.CreateList I got a TObjectList and not a TList, so it should be called TCollections.CreateObjectList... but that's another story.
Depending on the compiler version, many of the Spring.Collections.TCollections.Create methods apply something the compiler itself is unable to do: folding the implementation into only a very slim generic class. Some methods do that from XE on, some only since XE7 (the GetTypeKind intrinsic function makes it possible to do the type resolution at compile time - see the parameterless TCollections.CreateList<T> for example).
This greatly reduces the binary size if you are creating many different types of IList<T> (where T is a class or interface type) because they are folded into TFolded(Object|Interface)List<T>. However, via the interface you still access the items as the type you specified, and the ElementType property also returns the correct type and not just TObject or IInterface. On Berlin it adds less than 1K for every different object list, while it would add around 80K if the folding were not applied, due to all the internal classes involved for the different operations you can call on an IList<T>.
As for TCollections.CreateList<T> returning an IList<T> that is backed by a TFoldedObjectList<T> when T is a class: that is completely as designed. Since OwnsObject was passed as False, it has the exact same behavior as a TList<T>.
The Spring4D collections are interface based, so it does not matter what class is behind an interface as long as it behaves according to the contract of the interface.
Make sure that you only carry the lists around as IList<T> and not TList<T> - you can create them both ways (with the benefits I mentioned before when using the TCollections methods). In our own application some places are still using the constructor of the classes while many other places are using the static methods from Spring.Collections.TCollections.
BTW:
I saw the activity in your fork, and in my opinion there is no need to implement Map/Reduce because that functionality is already there. Since the Spring4D collections are modelled after .NET, the methods are called Select and Aggregate (see Spring.Collections.TEnumerable). They are not available on IEnumerable<T> directly, though, because interfaces cannot have generic parameterized methods.
After going through the OData documentation, I still do not understand the meaning of <FunctionImport>.
What is it used for?
Someone said that "Function imports are used to perform custom operations on a JPA entity in addition to CRUD operations. For example, consider a scenario where you would like to check the availability of an item to promise on the sales order line items. ATP check is a custom operation that can be exposed as a function import in the schema of OData service."
But I think the above requirement can also be achieved with a general <Function>, right?
What is the difference between <FunctionImport> and <Function> exactly?
I do appreciate anyone's help!
Thanks
There are three types of functions in OData:
1. Functions that are bound to something (e.g. an entity). Example:
GET http://host/service/Products(1)/Namespace.GetCategories()
Such a function is defined in the metadata using the <Function> element with its IsBound attribute set to true.
2. Unbound functions. They are usually used in queries, e.g.:
GET http://host/service/Products?$filter=Name eq Namespace.GetTheLongestProductName()
Such a function is defined in the metadata using the <Function> element with its IsBound attribute set to false.
3. Function imports. They are functions that can be invoked at the service root, e.g.:
GET http://host/service/GetMostExpensiveProduct()
Conceptually they are a little similar to static functions in programming languages, and they are defined in the metadata using the <FunctionImport> element.
A similar distinction applies to <Action> and <ActionImport> as well.
OK, I got the answer myself.
<OData Version 4.0 Part 1: Protocol Plus Errata 02>:
Operations allow the execution of custom logic on parts of a data model. Functions are operations that do not have side effects and may support further composition, for example, with additional filter operations, functions or an action. Actions are operations that allow side effects, such as data modification, and cannot be further composed in order to avoid non-deterministic behavior. Actions and functions are either bound to a type, enabling them to be called as members of an instance of that type, or unbound, in which case they are called as static operations. Action imports and function imports enable unbound actions and functions to be called from the service root.
I'm wondering how Grails handles memory usage and the loading (fetching) of domain objects by GORM methods like:
findAllWhere
findAllBy
list
...
Are they (or their proxies) fully loaded into memory?
If I traverse them with each/every/any, are they backed by an iterator that loads them lazily?
Should I prefer createCriteria() {...}.scroll() for better memory usage?
Assuming that we ignore the different behavior of each type of DB driver GORM uses, we can find the answers in the code and the documentation.
The dynamic finders are provided to Domain classes by org.grails.datastore.gorm.GormStaticApi and the finder classes within the org.grails.datastore.gorm.finders package.
Reviewing these classes, we can see that queries which return multiple results are always assembled as a DetachedCriteria and always invoke the criteria.list() method; this means that the whole result set is assembled and held in memory. Traversing the results with Groovy's collection methods won't make any difference, because you're essentially invoking those methods on the returned result list.
As to the question "How much of the result Domain is loaded?" - this depends on the composition of the domain, but you may assume that the fields of the Domain are loaded and that any associations are lazy by default.
In scenarios that require better memory usage you can certainly self-compose the criteria in conjunction with result projections and use scroll (note that this feature depends on the type of DB).
In extreme cases I even bypass GORM and work directly with the DB driver.
The Inferring a JVM Model section of the Xtext documentation (http://www.eclipse.org/Xtext/documentation.html#_17) starts by saying:
"In many cases, you will want your DSLs concepts to be usable as Java elements. E.g. an Entity will become a Java class and should be usable as such".
In the example above, how can I use the generated Entity class outside of Xbase, i.e. in real Java code in a different project from the Xtext one?
What I am essentially asking is whether the Java classes created by the model inferrer can actually be used as real Java classes, which can have their methods called and their fields accessed from Java code in an altogether different project, and if so, how this can be done.
Going through the documentation has led me to fear that the generated "Java classes" are only Xbase types, only referenceable in an Xtext context, and are therefore not real Java classes...
The Xbase compiler can compile all Xbase expressions to plain Java code, usable anywhere Java code can be used.
If you add your own elements to the language, you have to extend the generator to also support these elements - for this reason you define your own JVMModelInferrer.
The basic Xtext compiler then executes the JVMModelInferrer and calculates the JVM model, which might (or might not) contain Xbase expressions as well; this JVM model can then be generated into Java-compilable (and thus Java-reusable) code.
If you want to test this functionality, simply generate the Xtext Domain Model example (available from the New... wizards in the Xtext/Examples category) and evaluate the results: when you edit your domain model, Xtext automatically generates the usable Java code (provided the required dependencies are set).
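For example, assuming your DSL declared an entity that the JVMModelInferrer maps to a plain Java class Person with a name property (the class and package names here are made up for illustration), a completely separate Java project that has the generated sources - or a jar built from them - on its classpath can use it like any hand-written class:

import com.example.entities.Person; // hypothetical package produced by your JVMModelInferrer

public class EntityClient {
    public static void main(String[] args) {
        Person person = new Person();   // an ordinary Java class, nothing Xbase-specific remains
        person.setName("Alice");        // accessors generated for the DSL property
        System.out.println(person.getName());
    }
}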
I'm using an interpreter for my domain-specific language rather than a compiler (despite the performance cost). I'm struggling to understand some of the concepts, though:
Suppose I have a DSL (in XML style) for a game so that developers can make building objects easily:
<building>
<name> hotel </name>
<capacity> 10 </capacity>
</building>
The DSL script is parsed - then what happens?
Does it execute an existing method for creating a new building? As I understand it, it does not simply transform the DSL into a lower-level language (as that would then need to be compiled).
Could someone please describe what an interpreter would do with the resulting parsed tree?
Thank you for your help.
Much depends on your specific application details. For example, are name and capacity required? I'm going to give a fairly generic answer, which might be a bit overkill.
Assumptions:
All nested properties are optional
There are many nested properties, possibly of varying depth
This invites 2 ideas: structuring your interpreter as a recursive descent parser and using some sort of builder for your objects. In your specific example, you'd have a BuildingBuilder that looks something like (in Java):
public class BuildingBuilder {
    private String name;
    private int capacity;

    public BuildingBuilder() { }
    public BuildingBuilder setName(String name) { this.name = name; return this; }
    public BuildingBuilder setCapacity(int capacity) { this.capacity = capacity; return this; }
    // ... setters for any further nested properties
    public Building build() { return new Building(name, capacity); } // assumes a matching Building constructor
}
Now, when your parser encounters a building element, use the BuildingBuilder to build a building. Then add that object to whatever context the DSL applies to (city.addBuilding(building)).
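For instance, using the DOM parser built into the JDK, the interpreting step could look roughly like this (a sketch only; Building, BuildingBuilder and City are the hypothetical types from this answer):

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import java.io.File;

public class BuildingInterpreter {

    // Parses the DSL script and applies it to the game context (the City).
    public void interpret(File script, City city) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(script);
        NodeList buildings = doc.getElementsByTagName("building");
        for (int i = 0; i < buildings.getLength(); i++) {
            city.addBuilding(interpretBuilding((Element) buildings.item(i)));
        }
    }

    // "Executes" one <building> element by driving the BuildingBuilder.
    private Building interpretBuilding(Element buildingElement) {
        BuildingBuilder builder = new BuildingBuilder();
        NodeList children = buildingElement.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child.getNodeType() != Node.ELEMENT_NODE) {
                continue; // skip the whitespace text nodes between tags
            }
            String value = child.getTextContent().trim();
            switch (child.getNodeName()) {
                case "name":     builder.setName(value);                       break;
                case "capacity": builder.setCapacity(Integer.parseInt(value)); break;
                default:         break; // unknown property: ignore or report an error
            }
        }
        return builder.build();
    }
}

So the interpreter never produces lower-level code; it simply walks the parsed tree and calls the corresponding methods (the builder, city.addBuilding(...)) as it goes.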
Note that if name and capacity are the only properties and are always required, you can just create a building by passing the two parameters directly. You can also construct the building and set the properties directly as they are encountered, instead of using the builder (the Builder pattern is nice when you have many properties and those properties are both immutable and optional).
If this is in a non-object-oriented context, you'll end up implementing some sort of buildBuilding function that takes the current context and the inner XML of the building element. Effectively, you are building a recursive descent parser by hand, with an XML library providing the actual parsing of the individual elements.
However you implement it, you will probably appreciate having a direct semantic mapping between XML elements in your DSL and methods/objects in your interpreter.