I developed an application through which a client can upload and download files to a server. Now I want to allocate the client's complete bandwidth when it uploads or downloads files through my application. My client end is Adobe Flash Builder, with C++ on the server side. Can anyone help me so that my client can transfer files rapidly through my application? Thank you.
Usually the user's operating system will allocate bandwidth sensibly. Your application will get all of it unless it's needed for something else at the same time.
I want to provide a download link in an Azure MVC web site for files that are stored in Blob storage. I do not want users to see my Blob storage URL, and I want to provide my own download link that also supplies the name of the file.
I think this can be done by passing (forwarding) the stream. I found many similar questions here on SO, e.g. here: Download/Stream file from URL - asp.net.
The problem I see is this: imagine 1000 users start downloading one file simultaneously. This will totally kill my server, as there is a limited number of threads in the pool, right?
I should say that the files I want to forward are about 100 MB, so one request can take about 10 minutes.
Am I right, or can I do this with no risk? Would an async method in MVC 5 help? Thanks!
Update: my Azure example is here only to give some background. I am actually interested in the theoretical problem of long-running streaming methods in MVC.
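For illustration, the async forwarding pattern in question looks roughly like this in MVC 5 (a sketch only; the controller name, blob URL, and shared HttpClient wiring are assumptions):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class DownloadController : Controller
{
    // One shared HttpClient; creating one per request exhausts sockets.
    private static readonly HttpClient Client = new HttpClient();

    public async Task<ActionResult> Download(string fileName)
    {
        // Placeholder: map the public file name to the hidden blob URL.
        string blobUrl = "https://myaccount.blob.core.windows.net/files/" + fileName;

        // ResponseHeadersRead avoids buffering the whole 100 MB body
        // in memory before the relay to the client starts.
        var response = await Client.GetAsync(blobUrl, HttpCompletionOption.ResponseHeadersRead);
        var stream = await response.Content.ReadAsStreamAsync();

        // FileStreamResult copies the stream to the response in chunks.
        return File(stream, "application/octet-stream", fileName);
    }
}
```

With async I/O the worker thread goes back to the pool while bytes are in flight, so 1000 concurrent downloads cost you connections and bandwidth rather than 1000 blocked threads.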
In your situation, Lukas, I'd actually recommend you look at using the local, temporary storage area for the blob and serving it up from there. This will result in a delay in delivering the file the first time, but all subsequent requests will be faster (in my experience) and result in fewer Azure storage transaction calls. It also eliminates the risk of running into throttling on the Azure storage account or blob. Your throughput limits would then be based on the outbound bandwidth of the VM instance and the number of connections it can support. I have a sample of this type of approach at: http://brentdacodemonkey.wordpress.com/2012/08/02/local-file-cache-in-windows-azure/
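A minimal sketch of that cache-then-serve idea, assuming the classic Microsoft.WindowsAzure.Storage SDK (the cache path and container wiring are made up):

```csharp
using System.IO;
using System.Threading.Tasks;
using System.Web.Mvc;
using Microsoft.WindowsAzure.Storage.Blob;

public class CachedDownloadController : Controller
{
    private readonly CloudBlobContainer container; // injected elsewhere

    public CachedDownloadController(CloudBlobContainer container)
    {
        this.container = container;
    }

    public async Task<ActionResult> Download(string fileName)
    {
        // Hypothetical cache folder on the VM's local temporary disk.
        string cachePath = Path.Combine(@"D:\local\cache", fileName);

        if (!System.IO.File.Exists(cachePath))
        {
            // First request pays the blob download; later ones hit the disk.
            CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
            await blob.DownloadToFileAsync(cachePath, FileMode.Create);
        }

        return File(cachePath, "application/octet-stream", fileName);
    }
}
```

A production version would also guard against two requests downloading the same blob at once and evict stale files from the cache.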
Let's say I have some system that coordinates the transfer of many files; that is, I have an Indy TCP server controlling the synchronization of files over a large distributed system.
Currently, in order to send files to specific clients, the server requires locking the Contexts list.
If I have 500 clients all connected and synchronizations taking place, I suspect this locking would be quite costly for performance, as it halts all the client connection threads.
Is there any way to speed this up, or is this not really an issue? Is it worth distributing clients across many servers? What's the trick?
Cheers,
Adrian
There is no need to lock the Contexts list just to send files. Let the OS handle any file locking for you. When sending a file to a client, have the client open the file in read-only mode. This allows multiple clients to read from the same file at the same time. If a client is uploading a file, open the file in exclusive mode so other clients cannot access the file until the upload is finished.
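This thread is Delphi/Indy, but the sharing-mode idea is an OS-level concept; here it is sketched in C# for illustration (TFileStream's fmShareDenyWrite/fmShareExclusive open flags map onto the same semantics):

```csharp
using System.IO;

// Download path: any number of clients may read the same file at once.
using (var read = new FileStream("sync.dat", FileMode.Open,
                                 FileAccess.Read, FileShare.Read))
{
    // ... send `read` to the client socket in chunks ...
}

// Upload path: the receiving side takes the file exclusively,
// so no other client can touch it until the upload finishes.
using (var write = new FileStream("sync.dat", FileMode.Create,
                                  FileAccess.Write, FileShare.None))
{
    // ... write the uploaded bytes into `write` ...
}
```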
If the clients are always connected, the OnExecute method - which runs in a loop until the connection terminates - can be used to send data to the clients when it is available. This however requires that the protocol is under your control.
A related question with a detailed answer showing how the lock list can be avoided can be found here:
TCPserver without OnExecute event
I'm building a system with some remote desktop capabilities. The client is considered to be every computer which is sharing its desktop; the server is a central server with a database which receives the images of all the multiple desktops. On the client side, I would like to build two projects: a Windows service application and a VCL Forms application. Each client app would presumably be running under a different user account on the computer, so there might be multiple client apps running at once, all sending their images into this client service, which relays them to the central server.
The service will be responsible for connecting to the server, sending the image, and receiving mouse/keyboard events. The application, which runs in the background, will connect to this service somehow and transmit the screenshots to it. The goal is that one service is running while multiple "clients" are able to connect to it and send their desktop images. This service will be connected to the "central server", which receives all these different screenshots from different "clients". The images will then be either saved and logged or redirected to any "dashboard" which might be viewing that "client".
The question is: what method should I use to connect the client applications to the client service to send images? They will be running on the same computer. I need both the ability to send simple command packets and the ability to stream chunks of an image. I was about to use the Indy components (TIdTCPServer etc.), but I'm sure there must be an easier and cleaner way to do it. I'm using the Indy components elsewhere in the projects too.
Here's a diagram of the overall system I'm aiming for - I'm just worried about the parts on the far right and far left, where the apps connect to the service within the same computer. As you can see, since there are many layers, I need to make sure whatever method(s) I use are powerful enough to accommodate streaming massive amounts of image data.
For communicating among processes you can use pipes, mailslots, or sockets. I also think that for sending a stream of file data, shared memory may be the most efficient way.
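For example, the pipe option looks like this in .NET's System.IO.Pipes (a sketch; the pipe name and file paths are made up, and in reality the two halves run in separate processes):

```csharp
using System.IO;
using System.IO.Pipes;

// Service side: accept one local client and receive an image frame.
using (var server = new NamedPipeServerStream("desktop-frames", PipeDirection.In))
{
    server.WaitForConnection();
    using (var ms = new MemoryStream())
    {
        server.CopyTo(ms);                 // read until the client disconnects
        File.WriteAllBytes("frame.png", ms.ToArray());
    }
}

// App side: connect to the service and push a captured screenshot.
using (var client = new NamedPipeClientStream(".", "desktop-frames", PipeDirection.Out))
{
    client.Connect(timeout: 1000);         // milliseconds
    byte[] frame = File.ReadAllBytes("screenshot.png");
    client.Write(frame, 0, frame.Length);
}
```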
I've done this a few times now, in a number of different configurations. The key to making it easy for me was using the RemObjects SDK, which took care of the communications part. With a thread that controls its state, I can have a connection to a server or service that is reliable, and can transfer anything from a status byte through to many megabytes of data (it is recommended that you use small chunks for large data so that you have more fine-grained control over errors and flow).

I now have a set of high-reliability templates that I can deploy to make a new variation quite easily, and they can be updated with new function calls without much hassle (the first thing I do is negotiate versions between the client and server so they know what they can support). Because it all works at a high level, my code is just making "function calls" and never worrying about what the format on the wire is. Likewise, I can switch from their binary format to standard SOAP or other formats without changing the core logic.

Finally, the connections can be local to the same machine (I use this for end-user apps talking to a background service), or to a machine on the LAN or internet - all with the same code.
I've been looking for a way to monitor file upload progress without using Flash - probably using Ajax, I suppose. I want to monitor the speed and the percentage of the upload that has finished.
Do you know of any resource that describes how to do that, or what I should follow to do it?
In the pre-HTML5 world I believe this requires web-server support. I've used this Apache module successfully in the past:
http://piotrsarnacki.com/2008/06/18/upload-progress-bar-with-mod_passenger-and-apache/
The only way without Flash is to do it on the server. The gist is:
Start the file upload
Open a streaming connection to the server
Have the server read the POST headers to tell you how large the file is going to be
Have the server repeatedly check the file size (in /tmp, generally) to see how complete it is
Stream the % done back to the client
I've done it before in other languages, but never in Ruby, so I'm not sure of a project that's done it, sorry.
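A rough sketch of steps 4 and 5 (in C# for illustration, since the logic ports to any server language; the temp path and the surrounding endpoint are made up):

```csharp
using System.IO;
using System.Threading;

// Hypothetical progress loop: compares the growing temp file against
// the Content-Length announced in the upload's POST headers.
static void StreamProgress(Stream responseStream, string tempPath, long contentLength)
{
    using (var writer = new StreamWriter(responseStream))
    {
        long written = 0;
        while (written < contentLength)
        {
            written = new FileInfo(tempPath).Length;   // how much has landed so far
            int percent = (int)(written * 100 / contentLength);
            writer.WriteLine(percent);                 // push "% done" to the client
            writer.Flush();
            Thread.Sleep(500);                         // poll twice a second
        }
    }
}
```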
I'm thinking about writing a RESTful service which is able to upload and stream large video files (GBs); in the future it might not only be videos but could also be large documents.
From my research so far, what really makes sense to me would be:
WCF Data Services, implementing IDataServiceStreamProvider, and on the back end storing the large files in SQL Server 2008 using the new FILESTREAM SQL type. It also looks like I would have to use some Win32 API to access the file system: SafeFileHandle handle = SqlNativeClient.OpenSqlFilestream
Since WCF Data Services likes to play with Entity Framework or LINQ to SQL, what can serve as the streaming implementation, and is there support for the SQL Server FILESTREAM type?
This is the plan, but I don't know how to assemble it all together... I thought about chunking the large files and being able to resume and cancel.
For the upload: I am not sure whether to use the Silverlight upload control or some other nifty Ajax tool.
Can anyone point me in the right direction here... or do you think this is the way to go? Thoughts or links would be great...
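For reference, the Win32 OpenSqlFilestream call mentioned above has a managed wrapper, System.Data.SqlTypes.SqlFileStream. A sketch of the read path, with made-up table and column names:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;

static void StreamVideo(string connectionString, int videoId, Stream destination)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        // FILESTREAM access is only valid inside a transaction.
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            var cmd = new SqlCommand(
                @"SELECT Video.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT()
                  FROM dbo.Videos WHERE Id = @id", conn, tx);   // made-up table
            cmd.Parameters.AddWithValue("@id", videoId);

            string path;
            byte[] txContext;
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                reader.Read();
                path = reader.GetString(0);          // NTFS path of the BLOB
                txContext = (byte[])reader[1];       // transaction token
            }

            // SqlFileStream wraps OpenSqlFilestream and streams the BLOB
            // straight off the file system, not through a varbinary buffer.
            using (var stream = new SqlFileStream(path, txContext, FileAccess.Read))
            {
                stream.CopyTo(destination);          // e.g. the service response
            }

            tx.Commit();
        }
    }
}
```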
I did something similar where I was sending huge data files. I used these two examples to help write my code:
http://msdn.microsoft.com/en-us/library/ms751463.aspx
http://www.codeproject.com/KB/WCF/WCFDownloadUploadService.aspx
This is a very important number to know: 2147483647. It is Int32.MaxValue, the value you typically plug into WCF's message-size quotas when transferring large files.
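A sketch of where that number typically goes when configuring a streamed WCF endpoint in code (the programmatic equivalent of the usual web.config quotas):

```csharp
using System;
using System.ServiceModel;

var binding = new BasicHttpBinding
{
    // 2147483647 = Int32.MaxValue: lift the default 64 KB message ceiling.
    MaxReceivedMessageSize = int.MaxValue,

    // Streamed mode relays bytes as they arrive instead of buffering
    // the entire file in memory first.
    TransferMode = TransferMode.Streamed,

    // Keep multi-minute transfers from timing out mid-file.
    SendTimeout = TimeSpan.FromMinutes(30),
    ReceiveTimeout = TimeSpan.FromMinutes(30),
};
```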
@silverfighter:
On IIS 6, I could not configure WCF Data Services to send a stream of more than 30 MB over the network. I believe it is not built for large streaming transactions. Just try to upload a 27 MB file and monitor the relevant w3wp process; you will be surprised by the amount of memory consumed.
The solution was to create a WCF Service Application hosted in its own w3wp process and responsible only for download/upload over WCF. I recommend you use the following project: http://www.codeproject.com/Articles/166763/WCF-Streaming-Upload-Download-Files-Over-HTTP
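The service contract for such a dedicated upload/download host is roughly this shape (a sketch; the linked project's actual contract may differ):

```csharp
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IFileTransferService
{
    // With TransferMode.Streamed, a Stream body is chunked over the wire,
    // so the w3wp process never holds the whole file in memory.
    [OperationContract]
    Stream DownloadFile(string fileName);

    [OperationContract]
    void UploadFile(Stream fileData);
}
```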
Hope the above helps.
Not related to the question, but related to the answer from @Houssam Hamdan:
The 30 MB limit is not because of WCF Data Services; it is an IIS limitation that can be changed through the config file (e.g. maxAllowedContentLength under requestFiltering, and ASP.NET's maxRequestLength) and IIS settings, and by catching certain exceptions thrown by IIS.