RESTful WCF Data Service to upload and download large files? - asp.net-mvc

I'm thinking about writing a RESTful service that can upload and stream large video files (GBs); in the future it might not only be videos but could also be large documents.
From my research so far, what seems to make the most sense is:
WCF Data Services, implementing IDataServiceStreamProvider, and on the back end storing the large files in SQL Server 2008 using the new FILESTREAM type. It also looks like I'd have to use some Win32 API to access the file system, e.g. SafeFileHandle handle = SqlNativeClient.OpenSqlFilestream(...).
Since WCF Data Services likes to play with Entity Framework or LINQ to SQL, what could act as the streaming implementation, and is there support for the SQL Server FILESTREAM type?
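For what it's worth, .NET 3.5 SP1 and later ship a managed wrapper, System.Data.SqlTypes.SqlFileStream, so you may not need to call OpenSqlFilestream through P/Invoke yourself. Below is only a rough sketch of how a stream provider could obtain the stream; the dbo.Videos table and FileData column are made-up names:

    // A minimal sketch, assuming a hypothetical dbo.Videos table with a FILESTREAM
    // column named FileData. SqlFileStream wraps the Win32 OpenSqlFilestream call,
    // so no manual SafeFileHandle handling is needed.
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using System.IO;

    public static class VideoStreamSource
    {
        public static Stream OpenVideo(SqlConnection conn, SqlTransaction tx, int videoId)
        {
            // FILESTREAM access must happen inside a transaction; the caller keeps
            // conn/tx open until the returned stream has been read, then commits.
            var cmd = new SqlCommand(
                "SELECT FileData.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                "FROM dbo.Videos WHERE Id = @id", conn, tx);
            cmd.Parameters.AddWithValue("@id", videoId);

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                reader.Read();
                string path = reader.GetString(0);
                byte[] txContext = reader.GetSqlBytes(1).Buffer;
                return new SqlFileStream(path, txContext, FileAccess.Read);
            }
        }
    }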
That's the plan, but I don't know how to put it all together... I also thought about chunking the large files so uploads can be resumed and cancelled.
For the upload, I'm not sure whether to use the Silverlight upload control or some other nifty AJAX tool.
Can anyone point me in the right direction here, or do you think this is the way to go? Thoughts or links would be great...

I did something similar where I was sending huge data files. I used these two examples to help write my code:
http://msdn.microsoft.com/en-us/library/ms751463.aspx
http://www.codeproject.com/KB/WCF/WCFDownloadUploadService.aspx
This is a very important number to know: 2147483647 (Int32.MaxValue).
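For context, 2147483647 is the largest value the WCF message-size quotas accept. A hedged example of where it usually goes in web.config (the binding name here is just a placeholder):

    <!-- Illustrative binding configuration; "largeFileBinding" is a made-up name. -->
    <system.serviceModel>
      <bindings>
        <basicHttpBinding>
          <binding name="largeFileBinding"
                   transferMode="Streamed"
                   maxReceivedMessageSize="2147483647" />
        </basicHttpBinding>
      </bindings>
    </system.serviceModel>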

silverfighter:
On IIS 6, I could not configure WCF Data Services to send more than a 30 MB stream over the network. I believe it is not built for large streaming transactions. Just try to upload a 27 MB file and monitor the relevant w3wp process; you will be surprised by the amount of memory consumed.
The solution was to create a WCF Service Application hosted under its own w3wp process and responsible only for download / upload over WCF. I recommend you use the following project: http://www.codeproject.com/Articles/166763/WCF-Streaming-Upload-Download-Files-Over-HTTP
Hope the above helps.
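If it helps, the kind of contract such a dedicated transfer service typically exposes looks roughly like the sketch below; the interface and operation names are mine, and the endpoint's binding must use transferMode="Streamed" (as in the config fragment shown earlier):

    // Sketch of a streamed WCF contract; with transferMode="Streamed" the body is
    // passed through as a Stream instead of being buffered in the w3wp process.
    using System.IO;
    using System.ServiceModel;

    [ServiceContract]
    public interface IFileTransferService
    {
        // A streamed operation can carry only a single Stream body parameter;
        // metadata such as the file name has to travel in headers or a MessageContract.
        [OperationContract]
        void UploadFile(Stream fileContents);

        [OperationContract]
        Stream DownloadFile(string fileName);
    }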

Not related to the question, but related to the answer by @Houssam Hamdan:
The 30 MB limit is not a WCF Data Services limitation; it's an IIS limitation that can be changed through the config file and IIS settings, and by catching some exceptions thrown by IIS.
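For reference, the ~30 MB default comes from IIS 7's request filtering (maxAllowedContentLength, 30,000,000 bytes), and ASP.NET adds its own 4 MB maxRequestLength on top. A hedged web.config sketch raising both limits (the values are only examples):

    <system.web>
      <!-- maxRequestLength is in KB; the default is 4096 (4 MB) -->
      <httpRuntime maxRequestLength="2097151" />
    </system.web>
    <system.webServer>
      <security>
        <requestFiltering>
          <!-- maxAllowedContentLength is in bytes; the default is 30000000 (~30 MB) -->
          <requestLimits maxAllowedContentLength="2147483647" />
        </requestFiltering>
      </security>
    </system.webServer>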


Is it dangerous for performance to provide an MVC file download as stream forwarding from another stream source?

I want to provide a download link in an Azure MVC web site for files that are stored in Blob storage. I do not want users to see my blob storage URL, and I want to provide my own download link so I can also control the name of the file.
I think this can be done by passing (forwarding) the stream. I found many similar questions here on SO, e.g. here: Download/Stream file from URL - asp.net.
The problem I see is this: imagine 1000 users start downloading one file simultaneously. This will totally kill my server, as there is a limited number of threads in the pool, right?
I should say that the files I want to forward are about 100 MB, so one request can take about 10 minutes.
Am I right, or can I do this without risk? Would an async method in MVC 5 help? Thanks!
Update: My Azure example is here only to give some background. I am actually interested in the theoretical problem of long-running streaming methods in MVC.
In your situation, Lukas, I'd actually recommend you look at using the local, temporary storage area for the blob and serve it up from there. This will result in a delay in delivering the file the first time, but all subsequent requests will be faster (in my experience) and result in fewer Azure storage transaction calls. It also eliminates the risk of running into throttling on the Azure storage account or blob. Your throughput limits would then be based on the outbound bandwidth of the VM instance and the number of connections it can support. I have a sample of this type of approach at: http://brentdacodemonkey.wordpress.com/2012/08/02/local-file-cache-in-windows-azure/
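A rough sketch of that local-cache idea in an MVC controller, assuming the classic Microsoft.WindowsAzure.Storage SDK; the container name, connection-string key and content type are placeholders, and blobName would need validation in real code:

    using System.IO;
    using System.Threading.Tasks;
    using System.Web.Mvc;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    public class DownloadController : Controller
    {
        public async Task<ActionResult> GetFile(string blobName)
        {
            string localPath = Path.Combine(Path.GetTempPath(), blobName);

            if (!System.IO.File.Exists(localPath))
            {
                var account = CloudStorageAccount.Parse(
                    System.Configuration.ConfigurationManager.AppSettings["StorageConnection"]);
                CloudBlockBlob blob = account.CreateCloudBlobClient()
                    .GetContainerReference("files")
                    .GetBlockBlobReference(blobName);

                // Only the first request pays the blob download; later ones hit local disk.
                await blob.DownloadToFileAsync(localPath, FileMode.Create);
            }

            // Serve from local storage under our own name; the blob URL is never exposed.
            return File(localPath, "application/octet-stream", blobName);
        }
    }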

Trying to load test QC 11.5 using NeoLoad, but the recorded response is not stored by NeoLoad

I am trying to do a load test on a QC 11.5 application using NeoLoad. While recording, requests are being captured but the response body is not being stored.
The error shown is: << body not stored by NeoLoad >>
Please help me resolve this issue.
Hmm... it sounds like a well-known project ;> Here is a summary for the other readers. Hewlett Packard Quality Center 10 or 11 is not a "full" web application. It is a kind of local application, installed through Internet Explorer with .cab and .ocx files, that communicates over an HTTP tunnel. The problem with load testing it is that the dialog sent by this fat client is fully encrypted. For NeoLoad, an encrypted conversation (over HTTP or HTTPS) is treated as binary and is not stored in the design, but it is clearly shown in the "Check VU" step. Here we are talking about an "alien" encryption service on top of standard services like SSL, where NeoLoad performs well.
To put it in a nutshell for the readers: QC cannot be load tested with a network-based approach, which is what all the major professional load testing tools use. This is one of the rare situations where a synchronized functional test could be the solution... with hundreds or thousands of desktops.

Sharing data system wide

Good evening.
I'm looking for a method to share data from my application system-wide, so that other applications could read that data and then do whatever they want with it (e.g. format it for display, use it for logging, etc). The data needs to be updated dynamically in the method itself.
WMI came to mind first, but then you've got the issue of applications pausing while reading from WMI. Additionally, I've no real idea how to set up my own namespace or classes, if that's even possible in Delphi.
Using files is another idea, but that could get disk heavy, and it's a really awful method for real-time data.
Using a driver would probably be the best option, but that's a little too intrusive on the user's end for my liking, and I've no idea where to even start with it.
WM_COPYDATA would be great, but I'm not sure if that's dynamic enough, and whether it'll be heavy on resources or not.
Using TCP/IP would be the best choice over the network, but it's obviously of little use when run on a single system with no networking requirement.
As you can see, I'm struggling to figure out where to go with this. I don't want to commit to one method only to find that it's not going to work out in the end. Essentially, I want something like a service, or background process, to record data and then allow other applications to read that data. I'm just unsure on methods. I'd prefer NOT to need elevation/UAC to do this, but if need be, I'll settle for it.
I'm using Delphi 2010 for this exercise.
Any ideas?
You want to create a client-server architecture, which is also called IPC (inter-process communication).
Using WM_COPYDATA is a very good idea. I found it to be very fast, lightweight, and efficient on a local machine. And it can be broadcast over the system, to all applications at once (to be used with care if some application does not handle it correctly).
You can also share some memory, using memory-mapped files. This may be the fastest IPC option around for huge amounts of data, but synchronization is a bit complex (if you want to share more than one buffer at once).
Named pipes are a good candidate for local use. They tend to be difficult to implement/configure over a network, due to security issues on modern Windows versions (and they use TCP/IP for network communication anyway, so you'd be better off using TCP/IP directly instead).
My personal advice is to implement your data sharing with abstract classes, able to provide several implementations. You may use WM_COPYDATA first, then switch to named pipes, TCP/IP or HTTP in order to spread your application over a network.
For our open source client-server ORM, we implemented several protocols, including WM_COPYDATA, named pipes, HTTP, and direct in-process access. You can take a look at the source code provided for implementation patterns. Here are some benchmarks, to give you data from real implementations:
Client server access:
- Http client keep alive: 3001 assertions passed
first in 7.87ms, done in 153.37ms i.e. 6520/s, average 153us
- Http client multi connect: 3001 assertions passed
first in 151us, done in 305.98ms i.e. 3268/s, average 305us
- Named pipe access: 3003 assertions passed
first in 78.67ms, done in 187.15ms i.e. 5343/s, average 187us
- Local window messages: 3002 assertions passed
first in 148us, done in 112.90ms i.e. 8857/s, average 112us
- Direct in process access: 3001 assertions passed
first in 44us, done in 41.69ms i.e. 23981/s, average 41us
Total failed: 0 / 15014 - Client server access PASSED
As you can see, the fastest is direct access, then WM_COPYDATA, then named pipes, then HTTP (i.e. TCP/IP). The message was around 5 KB of JSON data containing 113 rows, retrieved from the server and then parsed on the client, 100 times (yes, our framework is fast :) ). For huge blocks of data (like 4 MB), WM_COPYDATA is slower than named pipes or HTTP over TCP/IP.
There are several IPC (inter-process communication) methods in Windows. Your question is rather general; I can suggest memory-mapped files to store your shared data and message broadcasting via PostMessage to inform other applications that the shared data has changed.
If you don't mind running another process, you could use one of the NoSQL databases.
I'm pretty sure that a lot of them won't have Delphi drivers, but some of them have REST drivers and hence can be driven from pretty much anything.
Memcached is an easy way to share data between applications. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects).
A Delphi 2010 client for Memcached can be found on Google Code:
http://code.google.com/p/delphimemcache/
Related question:
Are there any Caching Frameworks for Delphi?
Googling for 'delphi interprocess communication' will give you lots of pointers.
I suggest you take a look at http://madshi.net/, especially MadCodeHook (http://help.madshi.net/madCodeHook.htm)
I have good experience with the product.

Real-time ASP.NET MVC Web Application

I need to add a "real-time" element to my web application. Basically, I need to detect "changes" which are stored in a SQL Server table, and update various parts of the UI when a change has occurred.
I'm currently doing this by polling. I send an AJAX request to the server every 3 seconds asking for any new changes; these are then returned and processed. It works, but I don't like it: it means that for each browser I'll be issuing these requests frequently, and the server will always be busy processing them. In short, it doesn't scale well.
Is there any clever alternative that avoids polling overhead?
Edit
In the interests of completeness, I'm updating this to mention the solution we eventually went with: SignalR. It's open source and comes from Microsoft. It has risen in popularity, and I can heartily recommend it, or indeed WebSync, which we also looked at.
Check out WebSync, a comet server designed for ASP.NET/IIS.
In particular, what I would do is use the SqlDependency class, and when you detect a change, use RequestHandler.Publish("/channel", data); to send the info out to the appropriate listening clients.
Should work pretty nicely.
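A rough sketch of that combination, assuming a hypothetical dbo.Changes table and the WebSync Publish call mentioned above (the exact WebSync API may differ, and SqlDependency requires Service Broker to be enabled on the database):

    using System.Data.SqlClient;

    public static class ChangeNotifier
    {
        private const string ConnectionString = "...";  // your connection string

        public static void Start()
        {
            SqlDependency.Start(ConnectionString);      // needs Service Broker enabled
            Subscribe();
        }

        private static void Subscribe()
        {
            using (var conn = new SqlConnection(ConnectionString))
            // Query notification rules: two-part table name, explicit column list.
            using (var cmd = new SqlCommand("SELECT Id, Payload FROM dbo.Changes", conn))
            {
                var dependency = new SqlDependency(cmd);
                dependency.OnChange += (sender, e) =>
                {
                    Subscribe();                                     // notifications are one-shot
                    RequestHandler.Publish("/channel", "changed");   // WebSync push to clients
                };

                conn.Open();
                cmd.ExecuteReader().Dispose();          // executing registers the subscription
            }
        }
    }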
Taken directly from the link referenced by Jakub:
Reverse AJAX with IIS/ASP.NET
PokeIn on CodePlex gives you enhanced JSON functionality to make your server-side objects available on the client side. Simply put, it is a reverse AJAX library which makes it easy to call JavaScript functions from C#/VB.NET and to call C#/VB.NET functions from JavaScript. It has numerous features like event ordering, resource management, exception handling, marshaling, an AJAX upload control, Mono compatibility, WCF & .NET Remoting integration and scalable server push.
There is a free community license option for this library and the licensing option is quite cost effective in comparison to others.
I've actually used this, and the community edition is pretty special. Well worth a look, as this type of tech will begin to dominate the landscape in the coming months/years. The CodePlex site comes complete with ASP.NET MVC samples.
No matter what, you will always be limited by the fact that HTTP is (mostly) a one-way street. Unless you implement some sensible code on the client (i.e. to listen for incoming network requests), anything else will involve polling the server for updates, no matter what others tell you.
We had a similar requirement: very fast response times in one of our real-time web applications, serving about 400-500 clients per web server. The server needed to notify the clients within roughly 0.1 of a second (telephony & VoIP).
In the end we implemented an async handler. On each polling request we put the request to sleep for 5 seconds, waiting for a semaphore pulse signal to respond to the client. If the 5 seconds are up, we respond with a "no event" and the client posts the request again (immediately). This resulted in very fast response times, and we never had any problems with up to 500 clients per machine... no idea how many more we could add before the polling requests might create a problem.
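For illustration, a condensed version of that long-poll pattern using a SemaphoreSlim and a 5-second wait; the original used an async HTTP handler, so the Web API shape and the names here are mine:

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class EventsController : ApiController
    {
        private static readonly SemaphoreSlim Signal = new SemaphoreSlim(0);

        // Called by whatever detects a change; each Release() wakes one waiting poller.
        public static void Pulse() { Signal.Release(); }

        [HttpGet]
        public async Task<string> Poll()
        {
            // Hold the request for up to 5 seconds; a timeout means "no event" and
            // the client immediately issues the next poll.
            bool signalled = await Signal.WaitAsync(TimeSpan.FromSeconds(5));
            return signalled ? "event" : "no event";
        }
    }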
Take a look at this article.
I've read somewhere (I don't remember where) that using this WCF feature makes the host process handle requests in a way that doesn't consume blocked threads.
Depending on the restrictions on your application, you could use Silverlight for this connection. You don't need any Silverlight UI; you can use sockets to keep a connection open that accepts server-side pushes of data.

Can IntraWeb run more than 65,536 concurrent sessions?

I'm trying to build a web link to a busy social networking website using IntraWeb.
IntraWeb creates temporary folders for each session to store temporary files, which are auto-deleted when the session expires.
If hosted on Win32, the limit is 65,536 folders, which means only 65k concurrent sessions are possible.
Is there a way to turn off the temp file creation, or to allow more concurrent sessions in IntraWeb?
IntraWeb is just not designed for handling such session volumes. IntraWeb is designed for web applications, not for web sites. Even though a plain IntraWeb session takes only a few kilobytes, IntraWeb's session handling model is more of a "fat" model. It is perfectly suited to creating complex stateful applications that can handle a few hundred concurrent sessions.
For web sites with thousands of users per day - where many users just open one page and go away again - you /could/ certainly use WebBroker, but that basically means you have to build everything up from scratch.
If you are a Delphi guy, then I would recommend looking into Delphi Prism plus ASP.NET. There are tons of ASP.NET controls that simplify building your web site in a rapid way. ASP.NET controls from DevExpress.com, Telerik.com and others work perfectly fine with Delphi Prism.
I am pretty sure you'll run out of system resources before you get close to 65,000 users on one box. To handle that load you'll need a load-balanced cluster, and then the 65K limit won't be an issue. I would not focus on this limitation.
