I am pulling in some XML data using XmlProvider, and I will be accessing it from C#. As you can't use type-provided types directly from C#, I need to create records out of them. I can do this by hand, but I believe it should be possible to automate with reflection. If I create record types with the same names and types as the fields in the type provider, I should be able to use something like FSharpValue.MakeRecord(typeof<MyType>, values), where values is an array of objects.
What I don't know is how to get the array of values out of the type provider, and how to handle nested records, for instance:
type Address =
    {
        Address1 : string
        City : string
        State : string
    }

type Client =
    {
        Id : int
        FullName : string
        Address : Address
    }
In this case Client contains one Address. Will I need to walk the tree and use MakeRecord on the leaves and work my way up?
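Something like this recursive helper is roughly what I have in mind (an untested sketch; it assumes the provided value exposes properties whose names match my record fields, and toRecord is just a name I made up):
open System
open Microsoft.FSharp.Reflection

// Untested sketch: build a record of the given type from any object whose
// properties match the record fields by name, recursing into nested records.
let rec toRecord (recordType: Type) (source: obj) : obj =
    let values =
        FSharpType.GetRecordFields(recordType)
        |> Array.map (fun field ->
            let raw = source.GetType().GetProperty(field.Name).GetValue(source)
            if FSharpType.IsRecord(field.PropertyType)
            then toRecord field.PropertyType raw   // e.g. Client.Address -> Address record
            else raw)
    FSharpValue.MakeRecord(recordType, values)

// usage: let client = toRecord typeof<Client> providedClient :?> Client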
If you're willing to hand-code the types, why do you need the type provider in the first place?
If you're doing some additional logic on the F# side, you'll have no choice but to create the records manually anyway. And if you're not doing anything, you can just use the .NET out-of-the-box serializer (or another library) to create them from XML.
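For example, a rough sketch with the built-in XmlSerializer (assuming the XML element names line up with the record fields; XmlRoot/XmlElement attributes can map them otherwise):
open System.IO
open System.Xml.Serialization

// [<CLIMutable>] gives the records the parameterless constructor and settable
// properties that XmlSerializer needs.
[<CLIMutable>]
type Address = { Address1: string; City: string; State: string }

[<CLIMutable>]
type Client = { Id: int; FullName: string; Address: Address }

let deserializeClient (xml: string) : Client =
    let serializer = XmlSerializer(typeof<Client>)
    use reader = new StringReader(xml)
    serializer.Deserialize(reader) :?> Client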
I'm inserting data into Azure CosmosDB via FSharp.CosmosDb. Here is the record type that I write to the DB:
[<CLIMutable>]
type DbType =
    { id: Guid
      Question: string
      Answer: int }
The persistence layer works fine but I face an inelegant redundancy. The record I'm inserting originates from the Data Transfer Object (DTO) with the following shape:
type DataType =
    { QuestionId: Guid
      Question: string
      Answer: int }
CosmosDb accepts only records with a lowercase id. Is there any way to derive DbType from DataType, or do I have to define DbType from scratch?
Is there anything à la the copy-and-update record expression record2 = { record1 with id = record1.QuestionId }, but at the type level?
There's no type-level way of deriving one record type from another the way you describe; you can, however, get reasonably close with the anonymous records added in F# 4.6.
type DataType =
    { QuestionId: Guid
      Question: string
      Answer: int }

let example =
    { QuestionId = Guid.NewGuid()
      Question = "The meaning of life etc."
      Answer = 42 }

let extended =
    {| example with id = example.QuestionId |}
This gives you a value of an anonymous record type with an added field, and it may be well suited to your scenario; however, it's unwieldy to write code against such a type once it leaves the scope of the function you create it in.
If all you care about is how this single field is named: serialization libraries usually have ways of providing aliases for field names (like Newtonsoft.Json's JsonProperty attribute). Note that this might be obscured from you by the CosmosDb library you're using, which I'm not familiar with.
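For instance, the DTO could be declared like this with a Newtonsoft.Json attribute on the record field (whether FSharp.CosmosDb honours these attributes is an assumption you'd need to verify):
open System
open Newtonsoft.Json

type DataType =
    { [<JsonProperty("id")>] QuestionId: Guid   // serialized as "id"
      Question: string
      Answer: int }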
Another more involved approach is to use generic envelope types so that the records you persist have a uniform data store specific header across your application:
type Envelope<'record> =
    {
        id: string
        // <other fields as needed>
        payload: 'record
    }
In that case the envelope contains the fields that your datastore expects to be there to fulfill the contract (+ any application specific metadata you might find useful, like timestamps, event types, versions, whatnot) and spares you the effort of defining data store specific versions of each type you want to persist.
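Wrapping a record before persisting it then becomes a small mapping function along these lines (toEnvelope is just an illustrative name):
// Sketch: lift a domain record into the data-store envelope.
let toEnvelope (record: DataType) : Envelope<DataType> =
    { id = string record.QuestionId
      payload = record }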
Note that it is still a good idea in general to decouple the internal domain types from the representation you use for storage for maintainability reasons.
When I save a new F# Record, I'm getting an extra column called Id# in the RavenDb document, and it shows up when I load or view the object in code; it's even being converted to JSON through my F# API.
Here is my F# record type:
type Campaign = { mutable Id : string; name : string; description : string }
I'm not doing anything very exciting to save it:
let save c : Campaign =
    use session = store.OpenSession()
    session.Store(c)
    session.SaveChanges()
    c
Saving a new instance of a record creates a document with the Id of campaigns/289. Here is the full value of the document in RavenDb:
{
"Id#": "campaigns/289",
"name": "Recreating Id bug",
"description": "Hello StackOverflow!"
}
Now, when I used this same database (and document) in C#, I didn't get the extra Id# value. This is what a record looks like when I saved it in C#:
{
"Description": "Hello StackOverflow!",
"Name": "Look this worked fine",
}
(Aside - "name" vs "Name" means I have 2 name columns in my document. I understand that problem, at least).
So my question is: How do I get rid of the extra Id# property being created when I save an F# record in RavenDb?
As noted by Fyodor, this is caused by how F# generates a backing field when you create a record type. The default contract resolver for RavenDB serializes that backing field instead of the public property.
You can change the default contract resolver in RavenDB. It will look something like this if you want to use Newtonsoft's Json.NET:
DocumentStore.Conventions.JsonContractResolver <- new CamelCasePropertyNamesContractResolver()
There is an explanation for why this works here (see the section titled: "The explanation"). Briefly, the Newtonsoft library uses the public properties of the type instead of the private backing fields.
I also recommend that, instead of making the Id field mutable, you put the [<CLIMutable>] attribute on the type itself, like:
[<CLIMutable>]
type Campaign = { Id : string; name : string; description : string }
This makes it so libraries can mutate the values while preventing it in your code.
This is a combination of... well, you can't quite call them "bugs", so let's say "non-straightforward features" in both the F# compiler and RavenDb.
The F# compiler generates a public backing field for the Id record field. This field is named Id# (a standard pattern for all F# backing fields), and it's public, because the record field is mutable. For immutable record fields, backing fields will be internal. Why it needs to generate a public backing field for mutable record fields, I don't know.
Now, RavenDb, when generating the schema, apparently looks at both properties and fields. This is a bit non-standard. The usual practice is to consider only properties. But alas, Raven picks up the public field named Id#, and makes it part of the schema.
You can combat this problem in two ways:
First, you could make the Id field immutable. I'm not sure whether that would work for you or RavenDb. Perhaps not, since the Id is probably generated on insert.
Second, you could declare your Campaign not as an F# record, but as a true class:
type Campaign(id: string, name: string, description: string) =
    member val Id = id with get, set
    member val name = name
    member val description = description
This way, all backing fields stay internal and no confusion will arise. The drawback is that you have to write every field twice: first as constructor argument, then as class member.
I work with a Web Service API that can pump through a generic type of Results that all offer certain basic information, most notably a unique ID. That unique ID tends to be--but is not required to be--a UUID defined by the sender, which is not always the same person (but IDs are unique across the system).
Fundamentally, the API results in something along the lines of this (written in Java, but the language should be irrelevant), where only the base interface represents common details:
interface Result
{
    String getId();
}

class Result1 implements Result
{
    public String getId() { return uniqueValueForInstance; }
    public OtherType1 getField1() { /* ... */ }
    public OtherType2 getField2() { /* ... */ }
}

class Result2 implements Result
{
    public String getId() { return uniqueValueForInstance; }
    public OtherType3 getField3() { /* ... */ }
}
It's important to note that each Result type may represent a completely different kind of information. Some of it cannot be correlated with other Results, and some of it can, whether or not they have identical types (e.g., Result1 may be able to be correlated with Result2, and therefore vice versa, but some ResultX may exist that cannot be correlated because it represents different information).
We are currently implementing a system that receives some of those Results and correlates them where possible, which generates a different Result object that is a container of what it correlated together:
class ContainerResult implements Result
{
    public String getId() { return uniqueValueForInstance; }
    public Collection<Result> getResults() { return containedResultsList; }
    public OtherType4 getField4() { /* ... */ }
}

class IdContainerResult implements Result
{
    public String getId() { return uniqueValueForInstance; }
    public Collection<String> getIds() { return containedIdsList; }
    public OtherType4 getField4() { /* ... */ }
}
These are two containers, which present different use cases. The first, ContainerResult, allows someone to receive the correlated details as well as the actual complete, correlated data. The second, IdContainerResult, sacrifices the complete listing in favor of bandwidth by only sending the associated IDs. The system doing the correlating is not necessarily the same as the client, and the client can receive Results that those IDs would represent, which is intended to allow them to show correlations on their system by simply receiving the IDs.
Now, my problem may be non-obvious to some, and it may be obvious to others: if I send only the ID as part of the IdContainerResult, then how does the client know how to match the Result on their end if they do not have a single ID-store? The types of data that are actually represented by each Result implementation lend themselves to being segregated when they cannot be correlated, which means that a single ID-store is unlikely in most situations without forcing a memory or storage burden.
The current solution that we have come up with entails creating a new type of ID, we'll call it TypedId, which combines the XML Namespace and XML Name from each Result with the Result's ID.
My main problem with that solution is that it requires either maintaining a mutable collection of types that is updated as they are discovered, or prior knowledge of all types so that the ID can be properly associated on any client's system. Unfortunately, I cannot come up with a better solution, but the current solution feels wrong.
Has anyone faced a similar situation where they wanted to associate a generic Result with its original type, particularly with the limitations of WSDLs in mind, and solved it in a cleaner way?
Here's my suggestion:
1) You want to have "the client know how to match the Result on their end". So include in your response an extra discriminator field called "RequestType", a String.
2) You want to avoid "maintaining a mutable collection of types that is updated as they are discovered, or prior knowledge of all types so that the ID can be properly associated on any client's system". Obviously, each client request call DOES know what area of processing the Result will relate to. So you can have the client pass the "RequestType" string in as part of the request. As long as the RequestType is a unique string for each different type of client request, your service can process and correlate it without hard-coding any knowledge.
3) Here's one possible example of Java classes for the request and response messages (i.e. not the actual service endpoint):
interface Request {
    String getId();
    String getRequestType();
    // anything else ...
}

interface Result {
    String getId();
    String getRequestType();
}
class Result1 implements Result {
    public String getId() { return uniqueValueForInstance; }
    public String getRequestType() { return requestType; }
    public OtherType1 getField1() { /* ... */ }
    public OtherType2 getField2() { /* ... */ }
}

class Result2 implements Result {
    public String getId() { return uniqueValueForInstance; }
    public String getRequestType() { return requestType; }
    public OtherType3 getField3() { /* ... */ }
}
4) Here's the gotcha. (2) and (3) above do not give a completely dynamic solution. You want your service to be able to return a flexible record structure relating to each different request. You have the following options:
4A) In XSD, declare Result as a single strongly-typed variant record type, and in WSDL return Result from a single service endpoint and single operation. The XSD will still need to hard-code the values for the discriminator element when declaring the variant record structure.
4B) In XSD, declare multiple strongly-typed unique types Result1, Result2, etc. for each possible client request. In WSDL, have multiple uniquely named operations to return each one of these. These operations can be across one or many service endpoints - or even across multiple WSDLs. While this avoids hard-coding the request type as a specific field per se, it is not actually a generic client-independent solution, because you are still explicitly hard-coding to discriminate each request type by creating a unique name for each result type and each operation. So any apparent dynamism is a mirage.
4C) In XSD, define a flexible generic data structure that is not variant, but has plenty of generically named fields that could handle all possible results required. Example fields could be "stringField1", "stringField2", "integerField1", "dateField1058", etc., i.e. use extremely weak typing and put the burden on the client to magically know what data is in each field. This option may be very generic, but it is usually considered terrible practice. It is inelegant, pretty unreadable, error-prone and has limitations/assumptions built in anyway - how do you know you have enough generic fields included? In your case, (4A) is probably the best option.
4D) Use flexible XSD schema design tactics - type substitutability and use of "any" element. See http://www.xfront.com/ExtensibleContentModels.html.
4E) Use the @Produces @SomeQualifier annotations against your own factory class method which creates a high-level type. This tells CDI to always use this method to construct the specified bean type & qualifier. Your factory method can have fancy logic to decide which specific low-level type to construct upon each call. @SomeQualifier can have additional parameters to give guidance towards selecting the type. This potentially reduces the number of qualifiers to just one.
If you use (4D) you will have a flexible service endpoint design that can deal with changing requirements quite effectively, but your service implementation still needs to implement the flexible behaviour to decide which result fields to return for each request. The fact is, if you have a logical requirement for varying data structures, your code must know how to process those data structures for each separate request, so it must depend on some form of RequestType / unique operation names to discriminate. Any goal of completely dynamic processing (without adapting to each client's needs for results data) is over-ambitious.
I've heard a number of similar questions for other languages, but I'm looking for a specific scenario.
My app has a Core Data model called "Record", which has a number of columns/properties like "date, column1 and column2". To keep the programming clean so I can adapt my app to multiple scenarios, input fields are mapped to a Core Data property inside a plist (so, for example, I have a string variable called "dataToGet" with a value of 'column1').
How can I retrieve the property "column1" from the Record class by using the dataToGet variable?
The Key Value Coding mechanism allows you to interact with a class's properties using string representations of the property names. So, for example, if your Record class has a property called column1, you can access that property as follows:
NSString* dataToGet = @"column1";
id value = [myRecord valueForKey:dataToGet];
You can adapt that principle to your specific needs.
I'm working on an application at the moment in ASP.NET MVC which has a number of look-up tables, all of the form
LookUp {
    Id
    Text
}
As you can see, this just maps the Id to a textual value. These are used for things such as Colours. I now have a number of these, currently 6 and probably soon to be more.
I'm trying to put together an API that can be used via AJAX to allow the user to add/list/remove values from these lookup tables, so for example I could have something like:
http://example.com/Attributes/Colours/[List/Add/Delete]
My current problem is that clearly, regardless of which lookup table I'm using, everything else happens exactly the same. So really there should be no repetition of code whatsoever.
I currently have a custom route which points to an 'AttributeController', which figures out the attribute/look-up table in question based upon the URL (ie http://example.com/Attributes/Colours/List would want the 'Colours' table). I pass the attribute (Colours - a string) and the operation (List/Add/Delete), as well as any other parameters required (say "Red" if I want to add red to the list) back to my repository where the actual work is performed.
Things start getting messy here, as at the moment I've resorted to doing a switch/case on the attribute string, which can then grab the Linq-to-Sql entity corresponding to the particular lookup table. I find this pretty dirty though as I find myself having to write the same operations on each of the look-up entities, ugh!
What I'd really like to do is have some sort of mapping, which I could simply pass in the attribute name and get out some form of generic lookup object, which I could perform the desired operations on without having to care about type.
Is there some way to do this to my Linq-To-Sql entities? I've tried making them implement a basic interface (IAttribute), which simply specifies the Id/Text properties, however doing things like this fails:
System.Data.Linq.Table<IAttribute> table = GetAttribute("Colours");
As I cannot convert System.Data.Linq.Table<Colour> to System.Data.Linq.Table<IAttribute>.
Is there a way to make these look-up tables 'generic'?
Apologies that this is a bit of a brain-dump. There's surely information missing here, so just let me know if you'd like any further details. Cheers!
You have 2 options.
Use Expression Trees to dynamically create your lambda expression
Use Dynamic LINQ as detailed on Scott Gu's blog
I've looked at both options and have successfully implemented Expression Trees as my preferred approach.
Here's an example function that I created (not tested):
private static bool ValueExists<T>(String Value) where T : class
{
    // Builds the predicate p => p.ColumnName == Value dynamically
    ParameterExpression pe = Expression.Parameter(typeof(T), "p");
    Expression value = Expression.Equal(Expression.Property(pe, "ColumnName"), Expression.Constant(Value));
    Expression<Func<T, bool>> predicate = Expression.Lambda<Func<T, bool>>(value, pe);
    return MyDataContext.GetTable<T>().Where(predicate).Count() > 0;
}
Instead of using a switch statement, you can use a lookup dictionary. This is pseudocode-ish, but it's one way to get the table in question. You'll have to maintain the dictionary manually, but it should be much easier than a switch.
It looks like the DataContext.GetTable() method could be the answer to your problem. You can get a table if you know the type of the LINQ entity that you want to operate upon.
Dictionary<string, Type> lookupDict = new Dictionary<string, Type>
{
    { "Colour", typeof(MatchingLinqEntity) },
    ...
};
Type entityType = lookupDict[AttributeFromRouteValue];
YourDataContext db = new YourDataContext();
var entityTable = db.GetTable(entityType);
var entity = entityTable.Single(x => x.Id == IdFromRouteValue);
// or whatever operations you need
db.SubmitChanges();
The Suteki Shop project has some very slick work in it. You could look into their implementation of IRepository<T> and IRepositoryResolver for a generic repository pattern. This really works well with an IoC container, but you could create them manually with reflection if the performance is acceptable. I'd use this route if you have or can add an IoC container to the project. You need to make sure your IoC container supports open generics if you go this route, but I'm pretty sure all the major players do.