After going through the OData documentation, I still do not understand the meaning of <FunctionImport>.
What is that used for?
Someone said that "Function imports are used to perform custom operations on a JPA entity in addition to CRUD operations. For example, consider a scenario where you would like to check the availability of an item to promise on the sales order line items. An ATP check is a custom operation that can be exposed as a function import in the schema of an OData service."
But I think the above requirement can also be achieved with a general <Function>, right?
What is the difference between <FunctionImport> and <Function> exactly?
I do appreciate anyone's help!
Thanks
There are three types of functions in OData:
Functions that are bound to something (e.g. an entity). An example would be
GET http://host/service/Products(1)/Namespace.GetCategories()
Such a function is defined in the metadata using the <Function> element with its IsBound attribute set to true.
Unbound functions. They are usually used in queries, e.g.
GET http://host/service/Products?$filter=Name eq Namespace.GetTheLongestProductName()
Such a function is defined in the metadata using the <Function> element with its IsBound attribute set to false.
Function imports. These are functions that can be invoked at the service root, e.g.
GET http://host/service/GetMostExpensiveProduct()
Conceptually they are somewhat similar to static functions in programming languages, and they are defined in the metadata using the <FunctionImport> element.
The same distinction applies to <Action> and <ActionImport> as well.
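As a rough sketch of what the three cases can look like in V4 metadata (the Product and Category types here are invented for illustration):

<Schema Namespace="Namespace" xmlns="http://docs.oasis-open.org/odata/ns/edm">
  <!-- 1. Bound function: callable on a Product instance -->
  <Function Name="GetCategories" IsBound="true">
    <Parameter Name="product" Type="Namespace.Product" />
    <ReturnType Type="Collection(Namespace.Category)" />
  </Function>
  <!-- 2. Unbound function: usable inside query expressions -->
  <Function Name="GetTheLongestProductName">
    <ReturnType Type="Edm.String" />
  </Function>
  <!-- 3. Unbound function exposed at the service root via an import -->
  <Function Name="GetMostExpensiveProduct">
    <ReturnType Type="Namespace.Product" />
  </Function>
  <EntityContainer Name="Container">
    <FunctionImport Name="GetMostExpensiveProduct"
                    Function="Namespace.GetMostExpensiveProduct" />
  </EntityContainer>
</Schema>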
OK, I found the answer myself.
From "OData Version 4.0 Part 1: Protocol Plus Errata 02":
Operations allow the execution of custom logic on parts of a data model. Functions are operations that do not have side effects and may support further composition, for example, with additional filter operations, functions or an action. Actions are operations that allow side effects, such as data modification, and cannot be further composed in order to avoid non-deterministic behavior. Actions and functions are either bound to a type, enabling them to be called as members of an instance of that type, or unbound, in which case they are called as static operations. Action imports and function imports enable unbound actions and functions to be called from the service root.
I added map(), reduce() and where(qlint : string) to a Spring4D fork of mine.
While I was programming these functions, I found out that there is a difference in the behaviour of the lists when they are created in different ways.
If I create them with TList<TSomeClass>.Create, the objects in the enumerables are of the type TSomeClass.
If I create them with TCollections.CreateList<TSomeClass>, the objects in the enumerables are of the type TObject.
So the question is:
Is there a downside to using TList<TSomeClass>.Create?
Or in other words: why should I use TCollections.CreateList<TSomeClass>?
BTW: with TCollections.CreateList I got a TObjectList and not a TList, so it should be called TCollections.CreateObjectList... but that's another story.
Depending on the compiler version, many of the Spring.Collections.TCollections.Create methods apply an optimization the compiler itself is unable to: folding the implementation into only a very slim generic class. Some methods do that from XE on, some only since XE7 (the GetTypeKind intrinsic function makes it possible to do the type resolution at compile time - see the parameterless TCollections.CreateList<T> for example).
This greatly reduces the binary size if you are creating many different types of IList<T> (where T is a class or interface type) because it folds them into TFolded(Object|Interface)List<T>. However, via the interface you access the items as the type you specified, and the ElementType property returns the correct type, not just TObject or IInterface. On Berlin it adds less than 1K for every different object list, while it would add around 80K if the folding were not applied, due to all the internal classes involved for the different operations you can call on an IList<T>.
As for TCollections.CreateList<T> returning an IList<T> that is backed by a TFoldedObjectList<T> when T is a class: that is completely as designed. Since OwnsObjects was passed as False, it has the exact same behavior as a TList<T>.
The Spring4D collections are interface based, so it does not matter what class is behind an interface as long as it behaves according to the contract of the interface.
Make sure that you only carry the lists around as IList<T> and not TList<T> - you can create them both ways (with the benefits I mentioned before when using the TCollections methods). In our own application some places are still using the constructors of the classes, while many other places are using the static methods from Spring.Collections.TCollections.
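A minimal sketch of that advice (TSomeClass is just a stand-in; assumes Spring4D's Spring.Collections unit):

uses
  Spring.Collections;

type
  TSomeClass = class
  end;

procedure Demo;
var
  list: IList<TSomeClass>;  // carry it around as IList<T>, not TList<T>
begin
  // Folded implementation behind the interface, but the items are
  // still typed as TSomeClass.
  list := TCollections.CreateList<TSomeClass>;
  list.Add(TSomeClass.Create);
  // OwnsObjects is False here, just as with TList<TSomeClass>.Create,
  // so the caller frees the items.
  list[0].Free;
end;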
BTW:
I saw the activity in your fork and IMO there is no need to implement map/reduce because that is already there. Since the Spring4D collections are modelled after .NET, they are called Select and Aggregate (see Spring.Collections.TEnumerable). They are not available on IEnumerable<T> directly, though, because Delphi interfaces cannot have generic parameterized methods.
It sometimes happens that there are lines of code or methods that can't produce any mutants that a relevant test would kill. (For instance, I may be using a null object pattern, and some of the implemented methods are not relevant in production, so any implementation (even a throwing one) would be correct.)
It would be nice to be able to tell PIT to skip them (so that the mutation coverage is more relevant), but I couldn't find a way to do it in the documentation.
Is there a way to do it?
PIT currently has five mechanisms by which code can be filtered out.
By class, using the excludedClasses parameter
By method, using excludedMethods
Using a custom mutation filter
By adding an annotation named Generated, CoverageIgnore, or DoNotMutate with at least class-level retention (N.B. javax.annotation.Generated has only source retention and will not work; see the sketch after this list)
The arcmutate base extensions allow mutation to be filtered using code comments or external text files
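A minimal sketch of option 4, assuming a home-grown annotation (PIT recognises the annotation name, so your own DoNotMutate with CLASS retention is enough; AuditLog is an invented example class):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Class-level retention is the key: source retention is invisible to PIT.
@Retention(RetentionPolicy.CLASS)
@interface DoNotMutate {}

class AuditLog {
    @DoNotMutate // PIT will not generate mutants inside this method
    void log(String message) {
        // intentionally a no-op in production
    }
}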
For your use case it sounds like options 1, 4 or 5 would fit.
Option 2 only allows for a method to be filtered in all classes (this is most commonly used to prevent mutations in toString or hashCode methods).
Option 3 is a little involved, but would allow you (for example) to filter out methods with a certain annotation.
An aside.
I don't think I follow your example of the null object pattern.
A null object needs to implement all methods of an interface, and it is expected that they will be called. If they were to throw, this would break the pattern.
In the most common version of the pattern the methods would be empty, so there would be nothing to mutate apart from return values.
This behaviour would be worth describing with tests: if your null object fails to return whatever values are considered neutral, that would cause a problem.
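To illustrate with a made-up example (Discount and NoDiscount are invented names): even a trivial null object has neutral behaviour worth pinning down with a test.

interface Discount {
    double apply(double price);
}

// Null object: "no discount" is the neutral behaviour.
class NoDiscount implements Discount {
    @Override
    public double apply(double price) {
        return price; // a mutant returning 0.0 here should be killed
    }
}

A test asserting that new NoDiscount().apply(10.0) returns 10.0 is enough to kill the return-value mutants.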
I've read about the great capabilities of Type Providers, such as static typing when querying JSON documents, so I imagine I can create what I currently have in mind with this technology.
Let's say I want to give a consumer of my TypeProvider library Foo a way to create a type Bar, which will have the following precondition for each of its methods: check the mutable state of a boolean disposed field, and if it's true, throw an ObjectDisposedException.
Would this be possible? How could one define such an implementation of this high-level type creator?
A couple of years back, Keith Battocchi published a project called ILBuilder. Among other things, ILBuilder contains a method type provider in ILBuilder.fs that provides methods for types in mscorlib, e.g.
MethodProvider.Methods.System.Console.``WriteLine : string*obj->unit``
It is possible you could use this as a starting point for a type provider that wraps classes from another assembly and provides methods.
Another option might be to consider Ross McKinlay's Mixin Type Provider that (ab)uses F#'s Type Provider mechanism to provide meta-programming capabilities.
Yet another option might be to use PostSharp, Fody, etc. to do IL weaving, or code generation via reflection to build proxy classes.
That said, probably the lowest-friction solution would be to create a function that checks for disposal and manually call it in each member.
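A minimal sketch of that manual approach (no type provider involved; Bar and DoWork are invented names):

open System

type Bar() =
    let mutable disposed = false

    // The shared precondition every member calls first.
    let checkDisposed () =
        if disposed then raise (ObjectDisposedException "Bar")

    member this.DoWork() =
        checkDisposed ()
        printfn "working"

    interface IDisposable with
        member this.Dispose() = disposed <- true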
For instance, I have this bit of code:
public class ProductService {
    private IProductDataSource _dataSource = DependencyManager.Get<IProductDataSource>();

    public Product Get(int id) {
        return _dataSource.Select(id);
    }
}
I have 2 different data sources:
an XML file, which contains the information in only 1 language,
a SQL database, which contains the information in many languages.
So I created 2 implementations of IProductDataSource, one for each kind of data source.
But how do I send the required language to the SQL data source?
I add the parameter "language" to the method IProductDataSource.Select, even though I won't use it in the XML implementation.
Inside the SQL implementation, I get the language from a global state?
I add the language to the constructor of my SQL implementation, but then I won't use my DependencyManager and will have to handle the dependency injection myself.
Maybe my first solution is not good.
The third option is the way to go. Inject the language configuration into your SQL implementation. Also, get rid of your DependencyManager service locator and use constructor injection instead.
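A minimal sketch of that option (SqlProductDataSource and QueryProduct are invented names; QueryProduct stands in for whatever data access code you actually use):

public class SqlProductDataSource : IProductDataSource {
    private readonly string _language;

    // The language is an explicit dependency of the SQL implementation only.
    public SqlProductDataSource(string language) {
        _language = language;
    }

    public Product Select(int id) {
        return QueryProduct(id, _language); // hypothetical query helper
    }
}

public class ProductService {
    private readonly IProductDataSource _dataSource;

    // The service no longer knows or cares which implementation it receives.
    public ProductService(IProductDataSource dataSource) {
        _dataSource = dataSource;
    }

    public Product Get(int id) {
        return _dataSource.Select(id);
    }
}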
If your application needs to work with multiple languages in a single instance, I think point one is a sensible approach. If the underlying data does not provide translations for a requested language, then return null. There is another solution in this scenario. I'm assuming that what you have is a list of products and language translations for each product. Can you refactor your model so that you do not need to specify or ascertain the language until you reference language-specific text? The point being that a product is a product regardless of the language you choose to describe it, i.e. one product instance per product, only the product id on the Datasource.Select(..) method, and some other abstraction mechanism to deal with accessing the correct text translation.
If, however, each instance of your application is only concerned with one language set, I second Mr Gloor.
First of all, I need to point out that you are NOT injecting any dependencies in your example - you are depending on a service locator (DependencyManager) to get them for you. Dependency injection, simply put, is when your classes are unaware of who provides the dependencies, e.g. via a constructor, a setter, or a method. As was already mentioned in the other answers, service locator is an anti-pattern and should be avoided. The reasons are described in this great article.
Another thing is that the settings you mention, such as language or currency, seem to be localization-related and would probably be better handled using the built-in mechanisms of your language of choice (e.g. resource files, etc.).
Now, having said that, depending on how the rest of your code is structured, you have several options to solve this while still using a service locator:
You could have SqlDataSource depend on some ILanguageProvider which pulls the current language from somewhere. However, with more settings like these (or if it is difficult to get the current language in an isolated way), this can get messy very fast.
You could depend on an IProductDataSourceFactory instead (or, if you are using C#, a Func<IProductDataSource>) which would return the concrete implementation with the correct settings (see the sketch after this list). Again, you need to be able to get the current language in an isolated way in order to use this.
You could go with option 1 in your question. This would be a leaky abstraction but would be the simplest to implement.
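A rough sketch of that factory idea (all names here are invented; how you obtain the current language will depend on your application):

using System.Globalization;

public class ProductDataSourceFactory : IProductDataSourceFactory {
    public IProductDataSource Create() {
        // The one isolated place that knows how to determine the language.
        var language = CultureInfo.CurrentUICulture.TwoLetterISOLanguageName;
        return new SqlProductDataSource(language);
    }
}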
However, if you decide to get rid of the service locator and start using a DI container, the best solution would be option 3 (as was already stated), configuring the container to provide the correct value. Some good ideas on how to do this in an elegant way can be found in the answer to this question.
For recursion in F#, the existing documentation is clear about how to do it in the special case where it's just one function calling itself, or a group of physically adjacent functions calling each other.
But in the general case where a group of functions in different modules need to call each other, how do you do it?
I don't think there is a way to achieve this in F#. It is usually possible to structure the application in a way that doesn't require it, so perhaps if you described your scenario, you might get some useful comments.
Anyway, there are various ways to work around the issue - you can declare a record or an interface to hold the functions that you need to export from the module. Interfaces allow you to export polymorphic functions too, so they are probably a better choice:
// Before the declaration of modules
type Module1Funcs =
  abstract Foo : int -> int

type Module2Funcs =
  abstract Bar : int -> int
The modules can then export a value that implements one of the interfaces, and functions that require the other module can take it as an argument (or you can store it in a mutable value).
module Module1 =
  // Import functions from Module2 (needs to be initialized before use!)
  let mutable module2 = Unchecked.defaultof<Module2Funcs>

  // Sample function that references Module2
  let foo a = module2.Bar(a)

  // Export the functions of this module
  let impl =
    { new Module1Funcs with
        member x.Foo(a) = foo a }

// Somewhere in the main function
Module1.module2 <- Module2.impl
Module2.module1 <- Module1.impl
The initialization could also be done automatically using reflection, but that's a bit ugly; however, if you really needed this frequently, I could imagine developing some reusable library for it.
In many cases this feels a bit ugly, and restructuring the application to avoid recursive references is a better approach (in fact, I find recursive references between classes in object-oriented programming quite confusing too). However, if you really need something like this, then exporting functions using interfaces/records is probably the only option.
This is not supported. One piece of evidence is that, in Visual Studio, you need to order the project files correctly for F#.
It would be really rare to need to recursively call two functions in two different modules.
If this case does happen, you'd better factor the common part of the two functions out.
I don't think that there's any way for functions in different modules to directly refer to functions in other modules. Is there a reason that functions whose behavior is so tightly intertwined need to be in separate modules?
If you need to keep them separated, one possible workaround is to make your functions higher-order, so that they take a parameter representing the recursive call; you can then manually "tie the knot" later.
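A minimal sketch of that idea (the modules and the even/odd functions are invented for illustration):

module ModuleA =
  // The mutually recursive call is taken as a parameter...
  let even (odd: int -> bool) n =
    if n = 0 then true else odd (n - 1)

module ModuleB =
  let odd (even: int -> bool) n =
    if n = 0 then false else even (n - 1)

module Main =
  // ...and the knot is tied where both modules are visible.
  let rec evenImpl n = ModuleA.even oddImpl n
  and oddImpl n = ModuleB.odd evenImpl n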
If you were talking about C#, and methods in two different assemblies needed to mutually recursively call each other, I'd pull the type signatures they both need to know out into a third, shared assembly. I don't know, however, how well those concepts map to F#.
A definite solution here would be to use module signatures. A signature file contains information about the public signatures of a set of F# program elements, such as types, namespaces, and modules.
For each F# code file, you can have a signature file: a file that has the same name as the code file but with the extension .fsi instead of .fs.
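For illustration (file and function names invented), a signature file and its implementation file might look like this, with only add visible outside the module:

// Foo.fsi - the signature file, compiled before Foo.fs
module Foo

val add : int -> int -> int

// Foo.fs - the implementation file
module Foo

let add x y = x + y
let helper x = x * 2  // not listed in Foo.fsi, so private to the module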