ADO retrievable data service?

Need pointers on how to make a data provider/service ADO-compatible. The requirement is quite similar to how we use classic ADO to query an LDAP server (Active Directory); here is an example of that: http://www.4guysfromrolla.com/webtech/041800-1.shtm
However, what I expect this provider to do is actually talk to a WCF service underneath and somehow generate a recordset for downstream consumption.

If you're looking to make the data you are extracting available via an ADO interface, you might take a look at the article linked below. It's a bit dated, but I suspect the basic concepts are still valid.
ADO Providers
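If a full OLE DB provider turns out to be more than the scenario needs, a lighter-weight option is to call the WCF service from a small shim and fabricate a disconnected ADODB Recordset in code for the downstream ADO consumers. A minimal C# sketch, assuming a COM reference to the ADODB interop library; the Customer DTO here is a placeholder for whatever your WCF proxy actually returns:

    // Sketch only: fabricate a disconnected ADODB Recordset for downstream ADO consumers.
    // Assumes a COM reference to the ADODB interop library ("Microsoft ActiveX Data Objects").
    // Customer is a placeholder DTO; populate it from your WCF proxy before calling Build.
    using System.Collections.Generic;
    using System.Reflection;
    using ADODB;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class RecordsetBuilder
    {
        public static Recordset Build(IEnumerable<Customer> customers)
        {
            var rs = new Recordset();

            // Define the recordset shape before opening it.
            rs.Fields.Append("Id", DataTypeEnum.adInteger, 0, FieldAttributeEnum.adFldIsNullable);
            rs.Fields.Append("Name", DataTypeEnum.adVarChar, 100, FieldAttributeEnum.adFldIsNullable);

            // Open a client-side, disconnected recordset (no ActiveConnection).
            rs.CursorLocation = CursorLocationEnum.adUseClient;
            rs.Open(Missing.Value, Missing.Value,
                CursorTypeEnum.adOpenStatic, LockTypeEnum.adLockBatchOptimistic, 0);

            // Copy each row returned by the service into the recordset.
            foreach (var customer in customers)
            {
                rs.AddNew();
                rs.Fields["Id"].Value = customer.Id;
                rs.Fields["Name"].Value = customer.Name;
            }
            rs.UpdateBatch();
            return rs;
        }
    }

This only covers the "generate a recordset" half of the question; whether it is enough depends on whether your consumers need a true provider (connection strings, SQL text) or just a Recordset object they can iterate.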

Related

Breeze.Sharp with non-WebApi/WCF data services

We are using EasyNetQ (RabbitMQ) with a data layer that uses EF 6.1.
We are developing a WPF client that will request data via the message bus. We would love to be able to use Breeze.Sharp to manage the data on the client, but the only DataServices currently available are for WebApi/web (HttpClient) services.
Is it possible to introduce an interface so that we can provide a custom DataService that will communicate with the EasyNetQ message bus?
This is absolutely possible; the Breeze.Sharp product is intended to be able to talk to all of the same data services that our breeze.js product does.
Take a look at the breeze.dataService.mongo adapter (part of the breeze.js product). It is used to talk to a MongoDB database running on Node with Express (i.e. no WebApi and actually no .NET on the server at all).
That said, we have not yet built other adapters for the Breeze.Sharp product, although we plan to, as well as provide documentation on how to do this yourself. No timeframes yet, unfortunately; we have a lot on our plate.
Another alternative to waiting is to contact breeze@ideablade.com to have the adapter built for you.
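Until such an adapter exists, the core of the work is making the query round-trip ride the bus instead of HTTP. The following is a rough sketch only, assuming EasyNetQ's request/response pattern; the DTOs are hypothetical and this is not the actual Breeze.Sharp adapter interface:

    // Sketch only: NOT the Breeze.Sharp adapter API, just the bus round-trip a custom
    // DataService adapter would have to perform instead of an HTTP GET.
    // EntityQueryRequest/EntityQueryResponse are hypothetical DTOs. Older EasyNetQ versions
    // expose RequestAsync directly on IBus; newer versions move it to bus.Rpc.
    using System.Threading.Tasks;
    using EasyNetQ;

    public class EntityQueryRequest
    {
        public string ResourceName { get; set; }  // e.g. "Customers"
        public string Query { get; set; }         // e.g. "$filter=Country eq 'NZ'"
    }

    public class EntityQueryResponse
    {
        public string Json { get; set; }          // serialized entities for the client-side cache
    }

    public class BusQueryService
    {
        private readonly IBus _bus;

        public BusQueryService(IBus bus)
        {
            _bus = bus;
        }

        // A custom Breeze.Sharp DataService adapter would call something like this
        // to satisfy a query and then materialize the returned JSON into entities.
        public async Task<string> ExecuteQueryAsync(string resourceName, string query)
        {
            var request = new EntityQueryRequest { ResourceName = resourceName, Query = query };
            var response = await _bus.RequestAsync<EntityQueryRequest, EntityQueryResponse>(request);
            return response.Json;
        }
    }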

BreezeJS with a Linux backend

I am working on a project where we have a very slim server (Linux, Nginx, SQLite), but our web application should not show any signs of shortcomings (it should contain charts, dashboards, and nice-looking controls), so I need the client to do all the heavy work.
I assume that BreezeJS would be good in this case, because it manages data on the client, in a way that reduces workload on the server. The server only sends the data to the client and at some point gets data back that has to be saved to the database. All caching and other stuff is managed on the client.
I assume AngularJS would also be good in this case, because it is a client-side MVC-framework, again reducing workload on the server. It also works seamlessly together with BreezeJS.
I assume Wijmo would also be good in this case, because it provides nice looking controls and also works seamlessly together with BreezeJS and AngularJS.
Are my assumptions right? Any comments?
My only concern is how to get BreezeJS to “talk” to the Linux server (Nginx, SQLite). Are there any samples covering this? Is anyone working on something similar?
We will be releasing a NodeJS/Express/Mongo example within the next few weeks that should show how to communicate with an arbitrary non-.NET backend (also see the current 'Edmunds' example in the Breeze zip). But we don't have anything yet that explicitly shows Breeze working with a Linux backend. Please vote for this here: Breeze User Voice

Entity Framework along with plain old ADO.NET

I am building a new application architecture and I need your advice. We have a central SQL Server database hosted as SQL Azure. This database needs to be accessed from many different applications; most of them are web applications hosted in Windows Azure, and a couple of them are WinForms apps.
Accessing the database from the web applications is straightforward with ADO.NET. For the WinForms applications, WCF Data Services seems impressive, along with client authentication services for security.
I need to know whether this mixed mode of database access will work. In other words, will database integrity be maintained if the database is hit by applications using a mix of ADO.NET and Entity Framework?
Thanks in advance.
If you query the database using Entity Framework, it will cache the data until you call SaveChanges(). If the database is modified in the meantime (e.g. using plain old ADO.NET), there is a risk of the data in the database being overwritten by the application that is using Entity Framework. To prevent this you need to use a concurrency token. You can find some details here: http://social.technet.microsoft.com/wiki/contents/articles/3866.aspx
Note that once you start using concurrency tokens, you need to be aware of possible concurrency exceptions, which you need to handle. You can take a look at this blog post for some ideas: http://blogs.msdn.com/b/rickandy/archive/2011/02/17/handling-optimistic-concurrency-exception-with-ef-and-mvc-3.aspx. WCF Data Services uses ETags for concurrency (http://blogs.msdn.com/b/astoriateam/archive/2008/04/22/optimistic-concurrency-data-services.aspx), but you may not need to do anything extra there if you set up concurrency in the EF model for the database that is exposed via WCF Data Services.
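As a concrete illustration, here is a minimal EF 6 sketch of a rowversion concurrency token and the exception handling it implies; the entity and context names are placeholders:

    // Minimal EF6 sketch: a rowversion concurrency token plus handling of the
    // DbUpdateConcurrencyException raised when another writer (EF or plain ADO.NET)
    // changed the row first. Entity and context names are placeholders.
    using System.ComponentModel.DataAnnotations;
    using System.Data.Entity;
    using System.Data.Entity.Infrastructure;
    using System.Linq;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }

        [Timestamp]                      // maps to a SQL Server rowversion column
        public byte[] RowVersion { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Product> Products { get; set; }
    }

    public static class ProductUpdater
    {
        public static void Rename(int productId, string newName)
        {
            using (var db = new ShopContext())
            {
                var product = db.Products.Find(productId);
                product.Name = newName;
                try
                {
                    db.SaveChanges();
                }
                catch (DbUpdateConcurrencyException ex)
                {
                    // The row was modified since it was read. Reload the current database
                    // values and rethrow; other policies (client wins, merge, ask the user)
                    // are equally valid depending on the application.
                    ex.Entries.Single().Reload();
                    throw;
                }
            }
        }
    }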
We are going with WCF RIA Services. They seem to work well with multiple client types, providing an out-of-the-box data access layer.

OData on top of 2+ OData Feeds

Say I have the following model
I would like to present a unified front for these OData feeds to my clients.
Is there a nice way with OData to do this? Or should I just take IQueryables from the OData feeds and make a reflection endpoint on top of these?
If I use the reflection stuff on top of the OData that talks to the database (via Entity Framework) what kind of problems am I going to encounter?
I would not use the reflection provider over the client library, mainly because the client library LINQ provider doesn't support all the constructs used by the server. As a result some queries would simply not work at all (projections and expansions usually get broken).
Assuming you don't want to create any associations between the databases, you should be able to simply point the users at the right service. You can still expose something which looks like a unified endpoint without the need of having the same URL for all of them.
The main idea is that you unify the $metadata (if your model is static you can do this manually; if not, you should be able to write some kind of "merge" tool pretty easily) and then provide a service document which points to the respective URLs for each entity set. The WCF Data Services client now supports this kind of service through the entity set resolver: http://blogs.msdn.com/b/astoriateam/archive/2010/11/29/entity-set-resolver.aspx
The latest CTP with that support is here: http://blogs.msdn.com/b/astoriateam/archive/2011/06/30/announcing-wcf-data-services-june-2011-ctp-for-net4-amp-sl4.aspx
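In practice the resolver is a delegate on the DataServiceContext that maps an entity set name to the URI where that set actually lives. A rough sketch, assuming the client library from that CTP or later; the service URLs and entity set names are placeholders:

    // Sketch: route each entity set to the OData service that actually hosts it.
    // Assumes the WCF Data Services client with entity set resolver support.
    // Service URLs and entity set names are placeholders.
    using System;
    using System.Data.Services.Client;

    public static class UnifiedContextFactory
    {
        public static DataServiceContext Create()
        {
            // The "unified" endpoint only needs to serve the merged $metadata and a
            // service document; actual queries are resolved per entity set.
            var context = new DataServiceContext(new Uri("https://unified.example.com/odata/"));

            context.ResolveEntitySet = entitySetName =>
            {
                switch (entitySetName)
                {
                    case "Customers":
                    case "Orders":
                        return new Uri("https://sales.example.com/odata/" + entitySetName);
                    case "Products":
                        return new Uri("https://catalog.example.com/odata/" + entitySetName);
                    default:
                        return new Uri("https://unified.example.com/odata/" + entitySetName);
                }
            };

            return context;
        }
    }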
I'm not happy with the currently accepted answer for this question; to me it's more of an anti-answer, a description of what not to do. My solution here applies as much today as it did in '11.
To support a tenancy scenario where each user's/client's data will always reside in the same database and the data schemas all match, all you need to do is change the connection string when the data context is instantiated.
Another term for this concept is sharding. MS has some tools and APIs that can help; this is a simple enough walkthrough: Azure SQL Database Elastic database tools: Shard Elasticity. But you can do this pretty easily from first principles.
If swapping out the connection string will work for your scenario, the next step is to identify the mechanism you will use to determine the connection string. There are two common solutions (a sketch follows at the end of this answer):
1. The simple way out is to use fixed host headers, a route, or a token in each request to the service; then you can hardcode the logic for determining the connection string without complicated mapping logic.
2. Use a master/header/mapping DB to store your configuration. This database has a separate schema whose primary purpose is retrieving the correct connection string for each request. In most cases we combine this with the authentication process, in which case you keep the authentication in this central database, not in the individual databases.
In terms of the OData controller, even with WCF Data Services, you just need to implement your logic for retrieving the connection string and use it when you instantiate your data context.
Of course, this doesn't help you if your client's data is spread across multiple databases, but it is a pretty common pattern for scaling out large databases without having to deploy a new farm of services for each database.
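A minimal sketch of option 1 above, using EF 6; the tenant host names, entity, and connection strings are placeholders:

    // Sketch of option 1: map the request's host header to a connection string and pass
    // it to the data context. Tenant hosts and connection strings are placeholders;
    // option 2 would replace the dictionary with a lookup against the master/mapping DB.
    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Web;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class TenantContext : DbContext
    {
        // EF6 accepts a full connection string in the DbContext constructor.
        public TenantContext(string connectionString) : base(connectionString) { }

        public DbSet<Customer> Customers { get; set; }
    }

    public static class TenantConnectionResolver
    {
        private static readonly Dictionary<string, string> ConnectionStringsByHost =
            new Dictionary<string, string>
            {
                { "tenant-a.example.com", "Server=tcp:shard1.database.windows.net;Database=TenantA;User ID=...;Password=...;Encrypt=True;" },
                { "tenant-b.example.com", "Server=tcp:shard2.database.windows.net;Database=TenantB;User ID=...;Password=...;Encrypt=True;" },
            };

        public static TenantContext CreateForRequest(HttpRequestBase request)
        {
            // Hardcoded host-header mapping; swap this lookup for a query against the
            // central configuration database if you go with option 2.
            var connectionString = ConnectionStringsByHost[request.Url.Host];
            return new TenantContext(connectionString);
        }
    }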

Why doesn't Microsoft support OLE DB connections to SQL Azure?

At the MSDN website it says, "Connecting to SQL Azure by using OLE DB is not supported."
There are other places on the web where folks report that it works fine for them after tweaking the server name in the connection string, such as here and here. Even SQL Server's Analysis Services uses OLE DB to connect to SQL Azure!
I develop a native/unmanaged application in Delphi that connects to SQL Server using ADO through the OLE DB provider for SQL Server. I'm considering adding SQL Azure support. It would be really helpful if I could reuse the majority of my code without too much change; I probably wouldn't consider going this direction otherwise.
It would be helpful if Microsoft were more clear on why "OLE DB is not supported". If there are certain limitations within the use of OLE DB, what are they? Maybe I can work around them, or maybe it wouldn't affect me.
Microsoft also mentions that ODBC is supported. So could I use the "OLE DB provider for ODBC" and connect that way? Or is any combination that includes OLE DB "not supported"?
You can use it; however, it has not been thoroughly tested for all cases. Essentially, it should work for most things, but there might be a few edge cases where it won't. Until we document those cases, it remains unsupported. That being said, if you were to use it and run into errors, we would love to know about them and prioritize getting them fixed.
Vote for the OleDB support for Azure here:
http://www.mygreatwindowsazureidea.com/forums/34685-sql-azure-feature-voting/suggestions/407269-ole-db-provider-for-connecting-to-sql-azure?ref=title
You can use ADO via the SQL Server Native Client. Although this information is hard to find, you can read about it here http://msdn.microsoft.com/en-us/library/ms130978(SQL.110).aspx and here http://msdn.microsoft.com/en-us/library/ms131035(SQL.110).aspx.
In the connection string, instead of using Provider=SQLOLEDB; use Provider=SQLNCLI10;. It is also recommended to use DataTypeCompatibility=80;. So a SQL Server Native Client connection string would look like this:
"Provider=SQLNCLI10;Server=tcp:MyServerName.database.windows.net;Database=AdventureWorks2008R2;Uid=MyUserName#MyServerName;Pwd=MyPa$$w0rd;Encrypt=Yes;DataTypeCompatibility=80;"
You can also add "MARS Connection=True;" to the connection string for multiple recordsets.
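The same connection string can also be consumed from managed code through System.Data.OleDb (classic ADO in Delphi or VBScript uses the identical string). A small sketch, with the server name and credentials as placeholders:

    // Sketch: open the SQL Azure database through the SQL Server Native Client OLE DB
    // provider using the connection string shown above. Server and credentials are placeholders.
    using System;
    using System.Data.OleDb;

    public static class SqlAzureOleDbSample
    {
        public static void PrintTableNames()
        {
            const string connectionString =
                "Provider=SQLNCLI10;Server=tcp:MyServerName.database.windows.net;" +
                "Database=AdventureWorks2008R2;Uid=MyUserName@MyServerName;Pwd=MyPa$$w0rd;" +
                "Encrypt=Yes;DataTypeCompatibility=80;";

            using (var connection = new OleDbConnection(connectionString))
            using (var command = new OleDbCommand("SELECT name FROM sys.tables", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(reader.GetString(0));
                    }
                }
            }
        }
    }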
