Nested type provider - F#

I have a type provider that connects to the network to retrieve data and produces (the fiction we call) a 'static type' through the type provider mechanism.
Of course, I might not always be connected. I might be raging in a private jet with the satellite connection down.
Does anyone have experience building an "offline type provider" which takes (somehow) a type (from a type provider) as input, stores its definition on disk, and later provides you with said type definition for easy access while on your way to Koh Phangan?
Since types are not allowed as parameters to a type provider, I was thinking of providing an assembly name + type name to be offlined.

You can enhance your original type provider to work in both online and offline modes: the provider tries to connect to the data source and fetch the schema; if that succeeds, the schema is cached on disk (in some format the provider can understand). From then on the provider exposes types using the schema information on disk. If for some reason the connection to the data source is not available, the provider checks whether a cached schema exists and, if so, uses it. For example, the standard type providers (LINQ2SQL or EF) allow you to specify a schema file that can be used when a direct connection to the database is not possible.
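To illustrate the fallback, here is a minimal sketch of the "fetch the schema, else use the cache" logic (C# just for illustration; the names SchemaCache, serviceUrl and schemaCachePath are invented for the example, and a real provider would run something like this at design time):

using System;
using System.IO;
using System.Net.Http;

public static class SchemaCache
{
    public static string LoadSchema(string serviceUrl, string schemaCachePath)
    {
        try
        {
            // Online: fetch the latest schema and refresh the on-disk cache.
            using var client = new HttpClient();
            var schema = client.GetStringAsync(serviceUrl).GetAwaiter().GetResult();
            File.WriteAllText(schemaCachePath, schema);
            return schema;
        }
        catch (HttpRequestException) when (File.Exists(schemaCachePath))
        {
            // Offline: fall back to the schema cached on a previous successful run.
            return File.ReadAllText(schemaCachePath);
        }
    }
}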

This is a tricky aspect of writing F# type providers. The main problem is that if you're developing in a private jet and using a type provider to access some external data source, you won't be able to access the data itself.
Schema caching - If the type provider supports some form of schema caching (i.e. storing the schema in an XML file, as the LINQ to SQL provider mentioned by @desco does), then you'll be able to write some code and compile it, but you still won't be able to test the code. I think this makes schema caching less useful for the private-jet scenario. However, it is useful in a scenario where you build code on a build server that does not have access to the schema.
Local data - For the private-jet scenario, you probably need some sort of local data (or a subset of it) to actually be able to test the code you write; then you can often point the type provider to your local copy (database, CSV or XML file, etc.).
Meta-provider - I think the idea of having a meta-provider is pretty cool. It should work to some extent - you would be able to cache the schema, but you probably wouldn't be able to cache the data (perhaps the values of properties, but I guess methods would not work). I think it should be possible to just pass the name of the provider to mock as an argument to your meta-provider. Something like:
type CachedDB =
    SchemaCachingProvider<"FSharp.Data.TypeProviders.dll", "SqlDataConnection", "..">
I'm not aware of any plans for doing something like this, but if you started working on it, I'm sure the FSharpx people would be interested in taking a look :-).

Related

GOOGLE FIRESTORE: Are these security rules safe?

So I am going through the security rules documentation for Firestore right now, in an effort to make sure the data users put into my app will be okay. Right now, all I need users to be able to do is read data (really only 'get', but 'read' is fine too) and create data. So my security rules for the Firestore data are currently:
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /jumpSpotAnnotations/{id} {
      // 'get' instead of 'read' would work too
      allow read, create;
    }
  }
}
I have the exact same 'allow read, create;' for my storage data too. Will this be okay upon release or is this dangerous? In the documentation, they write:
"As you set up Cloud Firestore, you might have set your rules to allow open access during development. You might think you're the only person using your app, but if you've deployed it, it's available on the internet. If you're not authenticating users and configuring security rules, then anyone who guesses your project ID can steal, modify, or delete the data."
This text precedes an example where the rules are 'allow read, write;', as opposed to my 'allow read, create'. Are my rules also subject to deletion or modification of the data? I put create because I assume it only lets people create data, not delete or modify it.
Final part of this question: how could a user guess my project ID? Would they not have to sign in to my Google account to then be able to manually delete, modify, or steal data? I'm not sure how that works. My app interface only allows the user to create data or read data, nothing else. So could some random person still somehow get into this database online and mess with it?
Thanks for any help.
Your rule allows anyone with an internet connection to read and create documents in the jumpSpotAnnotations collection. We don't know if that's "safe" for your app. You have to determine for yourself if that situation is safe. If you're OK with someone anonymously loading up that collection with documents, and you're OK with paying for that behavior, then it's safe.
Your project ID is baked into your app before you publish it. All someone has to do is download and decompile your app to find it. It's not hard. Your project ID is not private information.
No, your rules are not secure. To understand how someone can guess your project ID and steal data, first you have to understand that Firebase provides a simple REST API to access stored data. All of the data is stored in JSON format, so public databases can be accessed by making a request to the database URL with ".json" appended.
Now to the main concern: how can someone guess your project ID? There are many tools available that let you set up a proxy on your network and analyze every request going through it. Since Firebase simply uses a REST API, the API endpoints can easily be discovered by intercepting HTTP requests, and if your rules are not secure then your data can be compromised.
Now for the solution: how do you protect your data? There are many ways; Firebase itself provides plenty of ways to secure data, so read their docs about database security. But there is also something you can do on your side so that even if your data is exposed, nobody can actually read it.
You can prevent apps from reading the data in plaintext: use public-key cryptography to encrypt the data, and keep the private key only on the systems that have to read it. Then a client that obtains the stored documents cannot read them in plain text. Note that this does not prevent manipulation or deletion of the data.
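As a rough illustration of that idea, here is a hedged C# sketch using .NET's RSA APIs; the PayloadCrypto class and in-memory key handling are simplified for the example (for anything beyond small payloads you would normally encrypt with a symmetric key and protect only that key with RSA):

using System;
using System.Security.Cryptography;
using System.Text;

public static class PayloadCrypto
{
    // Encrypt with the public key before storing the document's payload.
    public static byte[] Encrypt(string plaintext, RSAParameters publicKey)
    {
        using var rsa = RSA.Create();
        rsa.ImportParameters(publicKey);
        return rsa.Encrypt(Encoding.UTF8.GetBytes(plaintext), RSAEncryptionPadding.OaepSHA256);
    }

    // Decrypt only on trusted systems that hold the private key.
    public static string Decrypt(byte[] ciphertext, RSAParameters privateKey)
    {
        using var rsa = RSA.Create();
        rsa.ImportParameters(privateKey);
        return Encoding.UTF8.GetString(rsa.Decrypt(ciphertext, RSAEncryptionPadding.OaepSHA256));
    }
}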

Export Breeze entities on the server side to JSON

I am looking for a way to export Breeze entities on the server side to a JSON string which a BreezeJS manager can import on the client side. I looked all over the Breeze APIs (both public and internal source code) but I couldn't find an obvious way of achieving this. It may be possible to get the desired result by using BreezeSharp (a .NET Breeze client) on the server side, but I would like to see if this is achievable using the Breeze server APIs only.
First you need to determine the shape of the bundle to be imported, i.e. something that manager.importEntities will understand. I don't think the format is documented, but you can reverse-engineer it by using:
var exported = manager.exportEntities(['Customer', 'Product'], {asString:true, includeMetadata:false});
Then pretty-print the value of exported to see the data format. See EntityManager.exportEntities for more info.
Once you have that, you can re-create it on the server. In C#, you can build it up using Dictionary and List objects, then serialize it using Json.NET.
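For instance, here is a hedged C# sketch of that approach. The property names ("tempKeys", "entityGroupMap", "entityAspect", etc.) and the sample entity are placeholders for this example only; use whatever keys the pretty-printed output of exportEntities actually shows for your model:

using System.Collections.Generic;
using Newtonsoft.Json;

// Placeholder bundle shape; mirror the real export format you reverse-engineered.
var bundle = new Dictionary<string, object>
{
    ["tempKeys"] = new List<object>(),
    ["entityGroupMap"] = new Dictionary<string, object>
    {
        ["Customer:#MyModel"] = new Dictionary<string, object>
        {
            ["entities"] = new List<object>
            {
                new Dictionary<string, object>
                {
                    ["CustomerID"] = 42,
                    ["CompanyName"] = "Acme",
                    ["entityAspect"] = new Dictionary<string, object>
                    {
                        ["entityState"] = "Unchanged"
                    }
                }
            }
        }
    }
};

// Serialize with Json.NET and send the string to the client, which can then
// pass it to manager.importEntities.
string json = JsonConvert.SerializeObject(bundle);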
An alternative approach would be to have your webhook just tell the client to initiate a query to retrieve the data from the server.

How to peek at a message while dependencies are being built?

I am building multitenancy into the unit of work for a set of services. I want to keep the tenancy question out of the way of day-to-day business domain work, and I do not want to touch every existing consumer in the system (I am retrofitting multitenancy onto a system without any prior concept of a tenant).
Most messages in the system will be contexted by a tenant. However, there will be some infrastructure messages which will not be, particularly for the purpose of automating tenant creation. I need a way of determining whether to use a tenant-contexted unit of work or an infrastructure unit of work not contexted by a tenant, because the way I interact with the database differs depending on whether I have tenant context. The unit of work is built while spinning up the dependencies of the consumer.
As such, I need a way of peeking at the message or its metadata before consuming it, and specifically, I need to be able to peek at it while the dependencies are being built. I intended to have a tag interface to mark tenant management messages out from normal business domain messages, but any way of identifying the difference could work. If I am in a unit of work resulting from an HTTP request, I can look at WebApi's HttpContext.Current and see the headers of the current request, etc. How do I do something analogous to this if I am in a unit of work resulting from messaging?
I see there is a way to intercept messages with BeforeConsumingMessage(), but I need a way of correlating it to the current unit of work I am spinning up, and I'm not seeing how that would work for me. Pseudocode for what I am trying to do:
if (MessageContext.Message is ITenantInfrastructureMessage)
{
    database = new Database(...);
}
else
{
    tenantId = MessageContext.Headers.TenantId;
    database = new TenantDatabase(..., tenantId);
}
I am working in C#/.NET using MassTransit with RabbitMQ and Autofac with MassTransit's built-in support for both.
Your best option is to override at the IConsumerFactory<T> extension point, extract the tenant from the message (either via a message header or some message property), and register it in the container's child lifetime scope so that subsequent resolutions from the actual consumer class (and its dependencies) are properly matched to the tenant in the message.
In our systems, we have a TenantContext that is registered in a newly created LifetimeScope (we're using Autofac), after which we resolve the consumer from the child scope; the dependencies that use the tenant context get the proper value since it's registered as part of building the child container for the message scope.
It works extremely well. We even built extension methods to make it easy for developers registering consumers to specify "tenant context providers" that map from a message type to the proper tenant ID, which is used to build the TenantContext.
You can do similar things with activity factories in Courier routing slips (which are a specialization of a consumer).
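A hedged C# sketch of the idea follows. ITenantInfrastructureMessage is the tag interface from the question; the TenantContext type and the "TenantId" header name are assumptions for this example, and the exact way the consumer gets resolved from the child scope (e.g. a custom IConsumerFactory<T>, as described above) depends on your MassTransit/Autofac integration version:

using System.Threading.Tasks;
using Autofac;
using MassTransit;   // v8-style namespaces; on older versions IFilter/IPipe come from GreenPipes

// Tag interface from the question and a simple tenant holder for the sketch.
public interface ITenantInfrastructureMessage { }

public class TenantContext
{
    public string TenantId { get; set; }
}

// A consume filter that extracts the tenant and builds an Autofac child scope.
public class TenantScopeFilter<T> : IFilter<ConsumeContext<T>> where T : class
{
    readonly ILifetimeScope _rootScope;

    public TenantScopeFilter(ILifetimeScope rootScope) => _rootScope = rootScope;

    public async Task Send(ConsumeContext<T> context, IPipe<ConsumeContext<T>> next)
    {
        // Infrastructure messages carry no tenant; everything else reads the header.
        var tenantId = context.Message is ITenantInfrastructureMessage
            ? null
            : context.Headers.Get<string>("TenantId");

        using var scope = _rootScope.BeginLifetimeScope(builder =>
            builder.RegisterInstance(new TenantContext { TenantId = tenantId }));

        // In a real implementation the consumer, its unit of work and the
        // tenant-aware database would be resolved from this child scope, so they
        // all see the TenantContext registered above.
        await next.Send(context);
    }

    public void Probe(ProbeContext context) { }
}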

Does OData get data on the client or does it offer an XML syntax to express Linq queries?

I am just reading up on OData from here.
http://msopentech.com/odataorg/introduction/
Sorry, I am getting a bit impatient.
I just have a simple question for now before I go through the rest of the material. Which of the two options below describes OData?
I understand it provides a protocol (much like SOAP or XML/JSON over HTTP, or XML-RPC) to transfer data from services over the web to clients. What intrigues me is that it also helps query that data, which is a great problem to solve as it helps reduce the payloads you usually encounter when querying large data sets with XML/SOAP web services or other means (XML over HTTP, JSON over HTTP, RPC responses, you name it).
Option A
Does OData get all the data onto the client, use some client-based storage (like HTML5 local storage in desktop browsers) to store it, and then query the data on the client using an in-process API?
Or
Option B
Does it provide a syntax for translating LINQ-like expressions and getting only the relevant result sets (filtered, ordered, whatever else) from the server?
It's funny how when you type your thoughts, you end up solving your own problems. I think just typing the question has given me the answer. Option A sounds preposterous for so many reasons:
1) If it's a data-centric protocol, it has to not care about what type of client or consumer will want the data, so it cannot have any affinity to the client or to the client's capabilities (such as client-side caching).
2) Being a data-centric protocol, it does not prescribe how data must be read or offer any tools on the client or server side. It merely prescribes a data format, I would imagine.
It has to be Option B. Still, I just want a confirmation or correction.
Yes, it is Option B.
You could obviously write a terrible implementation of a client that downloads ALL the data and then filters and shows it based on client-side logic. But that would be rather silly.
The way you "write" your queries is detailed quite well on OData.org's "URL Conventions" page; it is typically something along the lines of: http://someserver/odata.svc/Customers?$filter=Location eq 'New York'
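To make Option B concrete, here is a small hedged C# sketch; the service URL is the made-up one from the answer and the property names in $select are assumptions. The point is that the filter travels in the request URL, so the server returns only the matching entities rather than the whole Customers set:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var url = "http://someserver/odata.svc/Customers"
                  + "?$filter=" + Uri.EscapeDataString("Location eq 'New York'")
                  + "&$select=" + Uri.EscapeDataString("CustomerID,CompanyName");
        string payload = await client.GetStringAsync(url);  // only the filtered rows come back
        Console.WriteLine(payload);
    }
}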

How to escape a period (.) in WCF Data Services QueryString

I have a WCF Data Services service that exposes a set of ICD codes. The primary key for the underlying table, and for the data set that WCF provides access to, is a varchar (a string in C#).
The service works properly if I have a query like this:
http://somehost/someService.svc/IcdSet('001')
If, however, the ICD code happens to have a . in the identifier, as many do, the service fails. Here's an example of one that won't work (IIS gives a 404 - Not Found response):
http://somehost/someService.svc/IcdSet('001.1')
So the question is how can I escape the period or properly pass it to WCF Data Services? It must be interpreting it as a different type of filter condition.
Note: The code for the underlying class seems irrelevant to the question but I can provide it if needed.
Edit: My guess at the moment is that IIS is trying to find a file that ends with .1'), which is then producing the 404 error. But how can I tell IIS that it shouldn't be looking for files, as these are all data queries?
Check this out: http://blogs.msdn.com/b/peter_qian/archive/2010/05/25/using-wcf-data-service-with-restricted-characrters-as-keys.aspx
This might also be of interest if you're using .NET 3.5: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=5121
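While you check those links, one common workaround worth mentioning (a hedged sketch, assuming .NET 4.0 and the IIS integrated pipeline; verify against the linked post for earlier versions) is to relax ASP.NET's URL-to-file-system mapping in web.config, so requests ending in things like .1') are not treated as file lookups:

<!-- Hedged sketch: lets ASP.NET accept request paths that look like file names,
     e.g. .../IcdSet('001.1'), instead of IIS trying to resolve them as files. -->
<configuration>
  <system.web>
    <httpRuntime relaxedUrlToFileSystemMapping="true" />
  </system.web>
</configuration>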
