GeneXus Extension: List properties of all Generators in a KB - (PropertyName, PropertyValue) - sdk

I need to iterate through all the properties of all the generators of a KB in a GeneXus Extension.
I would like to understand how the KB / Version / Environment / Models / Generator are modeled in Gx16 and Gx17.
Does anyone have an example in C# of how to list the properties of a generator?
To list the properties of the KB, I am using the code:
foreach (Property kbp in UIServices.KB.CurrentKB.Properties.Properties)
{
    string kbpvalue = "";
    if (kbp.Value != null)
        kbpvalue = kbp.Value.ToString();
    writer.AddTableData(new string[] { "KB", kbp.Name, kbpvalue, kbp.IsDefault.ToString() });
}
I need the equivalent to Generator properties.

All properties are handled through the PropertiesObject class itself, or through subclasses of it.
Accessing KB properties is pretty straightforward: given a KnowledgeBase instance, the KB properties are under its Properties property, e.g.
PropertiesObject kbProps = UIServices.KB.CurrentKB.Properties;
Version and Environment properties are modeled in the KBModel class, which is a subclass of PropertiesObject. The difference between a Version and an Environment is that the former has its Type set to Design, while the latter has it set to Prototype.
For a UI package, the most convenient way to access the active Version and the active Environment is:
KBModel design = UIServices.KB.CurrentModel;
KBModel target = design.Environment.TargetModel;
Here design is a KBModel whose Type property is Design, and it represents the version properties; target is also a KBModel, but its Type property is Prototype. In the SDK you may find that the name target, referring to a prototype KBModel, is used interchangeably with working.
DataStore and Generator properties are another matter. Both concepts are represented by KBObjects, so there is a DataStoreCategory object and a GeneratorCategory object. The peculiarity of these objects is that they don't have many properties of their own. The most commonly used DataStore and Generator properties depend on the target Environment. In order to access those Environment-dependent properties, you can use the GxDataStore and GxGenerator classes, which can be requested from the corresponding model part, e.g.
KBModel design = UIServices.KB.CurrentModel;
KBModel target = design.Environment.TargetModel;
foreach (GxGenerator gen in target.Parts.Get<GeneratorsPart>().Generators)
{
    // gen.Properties ...
}
foreach (GxDataStore ds in target.Parts.Get<DataStoresPart>().DataStores)
{
    // ds.Properties ...
}
The properties in GxDataStore and GxGenerator are loaded dynamically based on the type of data store or generator. That's the reason the properties sit under a Properties property instead of GxDataStore and GxGenerator inheriting from PropertiesObject themselves.
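Putting that together with the loop from the question: assuming gen.Properties behaves like the KB's PropertiesObject (so its Property items can be enumerated the same way), a sketch of the equivalent generator listing could look like this. The .Properties.Properties enumeration and the Name/Value/IsDefault members are carried over from the KB example above as an assumption, and writer is the same writer used in the question.

KBModel design = UIServices.KB.CurrentModel;
KBModel target = design.Environment.TargetModel;

foreach (GxGenerator gen in target.Parts.Get<GeneratorsPart>().Generators)
{
    // Assumption: gen.Properties is a PropertiesObject, so it exposes the same
    // Properties collection that the KB example above enumerates.
    foreach (Property genProp in gen.Properties.Properties)
    {
        string value = genProp.Value != null ? genProp.Value.ToString() : "";
        writer.AddTableData(new string[] { "Generator", genProp.Name, value, genProp.IsDefault.ToString() });
    }
}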
One last thing to mention: all these names are valid in GX 17. For generators in particular, there was a big rename in the APIs from GX 16 to 17, because the names used in the previous version did not align properly with the conceptual model. The details of everything that was renamed from GX 16 to 17 are summarized here.

Related

Google Dart visibility of methods and properties

Just to avoid some misunderstanding: I know that Google Dart handles things at the library level and that private properties and methods can be identified with an underscore prefix.
Is this still up to date as of 2017? Are there any plans for adding object-level visibility keywords like private, protected or public?
I don't just want to do something random; I'm rather interested in best practices. The way I see it: if I don't want class one to see what class two has, then both must be in different libraries, and those libraries are then part of a bigger package.
libraries = privacy between classes
packages = privacy between files
What about fine-grained control of privacy? I mean, maybe there is one thing I want private. What about using visibility with inheritance? A protected keyword can be really valuable.
Here's a little example in one file:
class one {
  int n = 1;
  one() {
    var test = new two(n);
    print(test.showNumber());
  }
}

class two {
  int n = 2;
  two(n) {
    this.n += n;
  }
  int showNumber() {
    return n;
  }
}
As it stands now, both classes can do what they want.
Dart still has only library-level privacy.
Library-level privacy with identifiers starting with an underscore is also enforced at runtime.
The analyzer provides some additional features during static analysis which are ignored at runtime though.
By convention, libraries inside lib/src are also considered private and should not be imported from other packages. The linter, a plugin for the analyzer, notifies about violations; it seems to be part of the analyzer itself by now.
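A minimal sketch of what that library-level privacy looks like in practice (the package name and file layout here are made up):

// lib/counter.dart
class Counter {
  int _count = 0;              // library-private: invisible outside this library
  int get value => _count;
  void increment() => _count++;
}

// bin/main.dart - a different library
import 'package:my_app/counter.dart';

void main() {
  var c = new Counter();
  c.increment();
  print(c.value);     // 1
  // print(c._count); // fails: '_count' is library-private (statically and at runtime)
}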
The meta package provides some annotations that are supported by the analyzer.
@protected produces a warning if the annotated public members are referenced by code from other libraries that is not within subclasses.
@visibleForTesting produces a warning if the annotated public members are referenced by code that is not within the test directory (of the same package, I assume). Not sure if the analyzer actually warns about violations yet; otherwise it's planned to do that.
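For example, a quick sketch of how @protected is meant to be used (assuming the meta package is listed as a dependency; the class names are made up):

import 'package:meta/meta.dart';

class Base {
  @protected
  void update() {}            // analyzer warns when this is called from outside a subclass
}

class Child extends Base {
  void refresh() => update(); // fine: called from within a subclass
}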
As far as I remember there are open issues for more rules, but they are not implemented yet.
From @lrn's comment below:
One reason for the current design is that Dart allows dynamic calls, even in strong mode. That means that o.foo() cannot be rejected based on "class level privacy" without retaining and checking extra information at runtime. With library/lexical based privacy, it's absolutely clear whether o.foo() or o._foo() is allowed (it always is, it's just that the latter can only refer to the _foo name of the same library). If Dart only had static resolution of identifiers, then it could use static information (like a private declaration) to reject at compile time without a runtime overhead.

Create object from Map - Dart

I have a class with a large number of properties that map to some JSON data I've parsed into a Map object elsewhere. I'd like to be able to instantiate a class by passing in this map:
class Card {
  String name, layout, mana_cost, cmc, type, rarity, text, flavor, artist,
      number, power, toughness, loyalty, watermark, border,
      timeshifted, hand, life, release_date, starter, original_text, original_type,
      source, image_url, set, set_name, id;
  int multiverse_id;
  List<String> colors, names, supertypes, subtypes, types, printings, variations, legalities;
  List<Map> foreign_names, rulings;

  // This doesn't work: `this[key] = ...` would need an operator []=,
  // and fields can't be looked up by a string name without reflection.
  Card.fromMap(Map card) {
    for (var key in card.keys) {
      this[key] = card[key];
    }
  }
}
I'd prefer to not have to assign everything manually. Is there a way to do what I'm trying to do?
I don't think there is a good way to do it in the language itself.
Reflection would be one approach but it's good practice to avoid it in the browser because it can cause code bloat.
There is the reflectable package that limits the negative size impact of reflection and provides almost the same capabilities.
I'd use the code generation approach, where you use tools like build or source_gen to generate the code that assigns the values.
built_value is a package that uses that approach. This might even work directly for your use case.
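For completeness, a rough sketch of the reflection approach mentioned above using dart:mirrors (VM only, so not something you'd ship to the browser; the field list is trimmed and written in the pre-null-safety style of the question):

import 'dart:mirrors';

class Card {
  String name;
  int multiverse_id;

  Card.fromMap(Map card) {
    var mirror = reflect(this);
    card.forEach((key, value) {
      // Sets the field whose name matches the map key.
      mirror.setField(new Symbol(key), value);
    });
  }
}

void main() {
  var card = new Card.fromMap({'name': 'Some Card', 'multiverse_id': 3});
  print(card.name); // Some Card
}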

What is the preferred way of getting a value in Swift, var vs. func?

What's the preferred way of getting a value in Swift?
Using a read-only variable
var getString: String? {
    return "Value"
}
or using a function?
func getString() -> String? {
    return "Value"
}
Also, is there a performance difference between the two?
First, neither of these would be appropriate names. They should not begin with get. (There are historical Cocoa meanings for a get prefix that you don't mean, and so even if you mean "go out to the internet and retrieve this information" you'd want to use something like fetch, but certainly not in the case you've given.)
These issues are addressed in various sections of the Swift API Design Guidelines. First, a property is a property, whether it is stored or computed. So there is no difference in design between:
let someProperty: String?
and
var someProperty: String? { return "string" }
You should not change the naming just because it's computed. We can then see in the guidelines:
The names of other types, properties, variables, and constants should read as nouns.
Furthermore, as discussed in The Swift Programming Language:
Properties associate values with a particular class, structure, or enumeration. Stored properties store constant and variable values as part of an instance, whereas computed properties calculate (rather than store) a value.
So if this is best thought of as a value associated with the type (one of its "attributes"), then it should be a property (computed or stored). If it is something that is not really "associated" with the type (something that the caller expects this type to retrieve from elsewhere for instance), then it should be a method. Again from the Design Guidelines:
Document the complexity of any computed property that is not O(1). People often assume that property access involves no significant computation, because they have stored properties as a mental model. Be sure to alert them when that assumption may be violated.
If "stored properties as a mental model" doesn't match what you mean to express, then it probably shouldn't be a property in the first place (and you need to document the discrepancies if you make it a property anyway). So, for instance, accessing a property should generally have no visible side effects. And if you read from a property immediately after writing to it, you should get back the value you wrote (again, as a general mental model without getting into the weeds of multi-threaded programming).
If you use a method, it can often result in a different appropriate name. See the "Strive for Fluent Usage" section of the Design Guidelines for more on that. There are several rules for selecting good method names. As a good example of when to use properties vs methods, consider the x.makeIterator(), i.successor() and x.sorted() examples and think about why these are methods and why they're named as they are. This is not to say there is exactly one answer in all cases, but the Design Guidelines will give you examples of what the Swift team intends.
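As a small, made-up illustration of that property-vs-method split:

struct Deck {
    private var cards = ["A", "K", "Q"]

    // Reads like a noun, is O(1) and has no side effects: model it as a computed property.
    var topCard: String? {
        return cards.last
    }

    // Mutates state and produces a new value on each call: model it as a method.
    mutating func drawCard() -> String? {
        return cards.popLast()
    }
}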
With no discernible difference in performance, make the choice for readability:
When an attribute behaves like a variable, use a property. Your example falls into this category.
When reading an attribute changes object state, use a function. This includes
Attributes that behave like a factory, i.e. returns new objects when you access them
Attributes that produce new values, such as random number generators
Peripheral readers
Input iterators
Of course, if the attribute is computed based on one or more arguments, you have no other choice but to use a function.
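For instance, a toy example of the "produces new values" case:

struct DiceRoller {
    var sides = 6

    // Each call produces a different value, so a property would be misleading here.
    func roll() -> Int {
        return Int.random(in: 1...sides)
    }
}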
Just as a note: If you want to use both getters and setters in Swift you can do as follows:
// Backing storage for the setter (not shown in the original snippet).
private var myPrivateString = ""

var myString: String {
    get {
        return "My string"
    }
    set {
        self.myPrivateString = newValue
    }
}
This way you can access your value as if it were a regular variable, but you can do some "under-the-hood magic" in your getters and setters.

Why should I use TCollections.CreateList<T> and not TList<T>.Create?

I added map(), reduce() and where(qlint : string) to a Spring4D fork of mine.
While I was programming these functions, I found out that there is a difference in the behaviour of the lists when they are created in different ways.
If I create them with TList<TSomeClass>.Create, the objects in the enumerables are of the type TSomeClass.
If I create them with TCollections.CreateList<TSomeClass>, the objects in the enumerables are of the type TObject.
So the question is:
Is there a downside to using TList<TSomeClass>.Create?
Or in other words: why should I use TCollections.CreateList<TSomeClass>?
By the way: with TCollections.CreateList I got a TObjectList and not a TList. So it should be called TCollections.CreateObjectList... but that's another story.
Depending on the compiler version, many of the Spring.Collections.TCollections.Create methods apply something the compiler is unable to do on its own: folding the implementation into only a very slim generic class. Some methods do that from XE on, some only since XE7 (the GetTypeKind intrinsic function makes it possible to do the type resolution at compile time - see the parameterless TCollections.CreateList<T> for example).
This greatly reduces the binary size if you are creating many different types of IList<T> (where T are classes or interfaces) because it folds them into TFolded(Object|Interface)List<T>. However, via the interface you are accessing the items as the type you specified, and the ElementType property returns the correct type and not just TObject or IInterface. On Berlin it adds less than 1K for every different object list, while it would add around 80K if the folding were not applied, due to all the internal classes involved for the different operations you can call on an IList<T>.
As for TCollections.CreateList<T> returning an IList<T> that is backed by a TFoldedObjectList<T> when T is a class: that is completely by design. Since OwnsObjects is passed as False, it has the exact same behavior as a TList<T>.
The Spring4D collections are interface based so it does not matter what class is behind an interface as long as it behaves accordingly to the contract of the interface.
Make sure that you only carry the lists around as IList<T> and not TList<T> - you can create them both ways (with the benefits I mentioned before when using the TCollections methods). In our own application some places are still using the constructor of the classes while many other places are using the static methods from Spring.Collections.TCollections.
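A minimal sketch of that usage pattern (TCustomer is just a stand-in for your own class):

uses
  Spring.Collections;

procedure ListCustomers;
var
  Customers: IList<TCustomer>;
begin
  // Behind the interface this is a folded object list; OwnsObjects is False,
  // so it behaves exactly like a TList<TCustomer> created directly.
  Customers := TCollections.CreateList<TCustomer>;
  Customers.Add(TCustomer.Create);

  // Carry the list around only as IList<TCustomer>, never as a concrete class.
  Writeln(Customers.Count);
end;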
BTW:
I saw the activity in your fork and, imo, there is no need to implement Map/Reduce because that is already there. Since the Spring4D collections are modelled after .NET, they are called Select and Aggregate (see Spring.Collections.TEnumerable). They are not available on IEnumerable<T> directly though, because Delphi interfaces cannot have generic parameterized methods.

Access static Java variables from js code in Nashorn engine

While trying to port old code running on the Rhino engine to Nashorn in Java 8, I ran into trouble: static properties/methods cannot be accessed from the running JS script. If I use Rhino, it runs perfectly. I don't know what happens with the implementation of the new Nashorn engine.
import javax.script.*;

public class StaticVars {
    public static String myname = "John\n";

    public static void main(String[] args) {
        try {
            ScriptEngine engine;
            ScriptEngineManager manager = new ScriptEngineManager();
            engine = System.getProperty("java.version").startsWith("1.8")
                    ? manager.getEngineByName("Nashorn")     // j1.8_u51
                    : manager.getEngineByName("JavaScript"); // j1.7
            engine.put("staticvars", new StaticVars());
            engine.eval("print(staticvars.myname);");
            // prints "John" if run with Java 7
            // prints "undefined" if run with Java 8
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
In Nashorn, you can't access class static members through class instances. There are multiple ways to get at statics. You can obtain a type object that acts as both a constructor and as a static namespace, much like a type name acts in Java:
var StaticVars = Java.type("StaticVars"); // use your full package name if you have it
print(StaticVars.myname);
Or, pass in a java.lang.Class object and use the .static pseudo-property to access the statics:
engine.put("StaticVarsClass", StaticVars.class);
followed by:
var StaticVars = StaticVarsClass.static;
print(StaticVars.myname);
in the script. In general, .static is the inverse operation to .class:
var BitSet = Java.type("java.util.BitSet");
var bitSetClass = BitSet.class; // produces a java.lang.Class object, just like in Java
print(BitSet === bitSetClass.static); // prints true
var bitSet = new BitSet(); // type object creates a new bitset when used as a constructor
var thisWontWork = new bitSetClass(); // java.lang.Class can't be used as a constructor.
As you can see, we distinguish three concepts:
the runtime class objects, which are instances of java.lang.Class. They aren't special, and you can only use the Class API on them (.getSuperclass(), .getName(), etc.)
instances of classes themselves (normal objects that you can access instance members on)
type objects, which are both namespaces for static members of classes they represent, as well as constructors. The closest equivalent in Java to them is the name of the class as used in source code; in JavaScript they are actual objects.
This actually produces the least ambiguity, as everything is in its place, and it maps most closely to Java platform idioms.
This is not the way JS should work. I think this is a design bug in Nashorn. Assume you have a mixed util passing vars from some Java runtime system to the JS script. This object contains one static method fmtNumber(someString) and one object method jutil.getFormVar(someString). The users don't need to know that Java is serving this platform. You simply tell them jutil is a "system hook" belonging to the framework foo. As a user of this framework I don't care whether it's static or not. I am a JS developer, I don't know about static or not. I want to script something real quick. This is how the code looks in Rhino:
var x = jutil.getFormVar("x");
print(jutil.fmtNumber(x));
Now in Nashorn I have to distinguish between them. Even worse, I have to educate my users to distinguish between them and teach them Java terms, which they might not know, because that is what an abstraction layer is all about: a self-contained system without the need to know the underlying mechanisms. This distinction is way too much cognitive overload, and it only considers the use case of Java developers scripting for themselves, which they probably won't do because they already know a good language called Java. You are thinking from your implementation as a Java developer, when instead you should think about how to use the power of the Java platform in the background, hiding all the nasty details from JS developers. What would a web developer say if he had to distinguish between static and non-static parts of the browser's C++ implementation?
