Transferring a file using FTP - c#-2.0

First of all, this is not a question seeking programming help. I am doing a project using FTP, and I have employed a particular logic; I want people to comment on whether this logic is OK or whether I should employ a better one. I transfer files using FTP; for example, if the file size is 10 MB, I split that file into X files of Y size each, depending on my network speed, then send those pieces one by one and merge them on the client machine.
The network speed on my client side is very low (1 kbps), so I want to split files into 512-byte pieces and send them.
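The split-and-merge step itself is straightforward; a minimal C# sketch of the idea might look like this (chunk size, directory and file names are placeholders, and there is no error handling):

```csharp
using System;
using System.IO;

class FileChunker
{
    // Split a file into numbered pieces of at most chunkSize bytes.
    public static void Split(string sourcePath, string chunkDir, int chunkSize)
    {
        byte[] buffer = new byte[chunkSize];
        using (FileStream input = File.OpenRead(sourcePath))
        {
            int index = 0;
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                string chunkPath = Path.Combine(chunkDir, "part" + index.ToString("D5"));
                using (FileStream output = File.Create(chunkPath))
                {
                    output.Write(buffer, 0, read);
                }
                index++;
            }
        }
    }

    // Reassemble the pieces (sorted by name) back into a single file on the client.
    public static void Merge(string chunkDir, string targetPath)
    {
        string[] parts = Directory.GetFiles(chunkDir, "part*");
        Array.Sort(parts);
        using (FileStream output = File.Create(targetPath))
        {
            foreach (string part in parts)
            {
                byte[] data = File.ReadAllBytes(part);
                output.Write(data, 0, data.Length);
            }
        }
    }
}
```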

It would be better not to split the file but use a client and server that both support resuming file transfers.
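For reference, .NET's FtpWebRequest already supports restarting a download through its ContentOffset property (which issues the FTP REST command). A rough sketch, assuming the server honours REST; the URL and paths are placeholders and credentials are omitted:

```csharp
using System;
using System.IO;
using System.Net;

class FtpResume
{
    // Resume a download from wherever the local copy stopped.
    public static void ResumeDownload(string ftpUrl, string localPath)
    {
        long alreadyDownloaded = File.Exists(localPath) ? new FileInfo(localPath).Length : 0;

        FtpWebRequest request = (FtpWebRequest)WebRequest.Create(ftpUrl);
        request.Method = WebRequestMethods.Ftp.DownloadFile;
        request.ContentOffset = alreadyDownloaded;   // sends "REST <offset>" to the server

        using (WebResponse response = request.GetResponse())
        using (Stream remote = response.GetResponseStream())
        using (FileStream local = new FileStream(localPath, FileMode.Append, FileAccess.Write))
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = remote.Read(buffer, 0, buffer.Length)) > 0)
            {
                local.Write(buffer, 0, read);
            }
        }
    }
}
```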

Related

AWS IoT file transfer

I am trying to use AWS IoT to communicate with my BeagleBone board, and I have MQTT messages transferring from the board to the server. I was wondering if there is a way to transfer files (text or binary) to the server, and from the server to the BeagleBone, using AWS IoT.
The payload of an MQTT message is just a byte stream, so it can carry just about anything (up to the maximum size of 268,435,456 bytes according to the spec; AWS may impose other limits in its implementation).
You will have to implement your own code to publish files, and to subscribe and save them. You will also have to implement a payload format that includes any metadata you might need (e.g. file names).
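As an illustration only (the header layout and field names below are my own assumptions, not an AWS IoT convention), a chunked payload format could look like this; each returned byte array would be published as one MQTT message with whatever client library you use:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

class ChunkedFilePayload
{
    // Split a file into self-describing chunks: [nameLen][name][index][count][dataLen][data].
    public static List<byte[]> BuildChunks(string path, int chunkSize)
    {
        byte[] nameBytes = Encoding.UTF8.GetBytes(Path.GetFileName(path));
        byte[] data = File.ReadAllBytes(path);
        int chunkCount = (data.Length + chunkSize - 1) / chunkSize;

        List<byte[]> messages = new List<byte[]>();
        for (int i = 0; i < chunkCount; i++)
        {
            int offset = i * chunkSize;
            int length = Math.Min(chunkSize, data.Length - offset);

            using (MemoryStream ms = new MemoryStream())
            using (BinaryWriter writer = new BinaryWriter(ms))
            {
                writer.Write(nameBytes.Length);   // metadata: file name
                writer.Write(nameBytes);
                writer.Write(i);                  // metadata: chunk index
                writer.Write(chunkCount);         // metadata: total number of chunks
                writer.Write(length);             // payload length of this chunk
                writer.Write(data, offset, length);
                writer.Flush();
                messages.Add(ms.ToArray());
            }
        }
        return messages;
    }
}
```

The subscriber reverses the process: read the header, buffer the chunks keyed by file name, and write the file out once all chunkCount pieces have arrived.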
You can transfer a file using MQTT, but you should first divide it into smaller pieces and send them one by one, because the payload is limited to 128 KB per message. More information about AWS IoT and its limits is available here.
However, I would suggest not using MQTT to transfer files, because messaging also costs money; if the file is big and you send it periodically, it may become expensive. You can find AWS IoT Core prices here.
You can instead upload your file(s) to an S3 bucket and then access them from there.

Is it dangerous for performance to provide an MVC file download as stream forwarding from another stream source?

I want to provide, in an Azure MVC web site, a download link for files that are stored in Blob storage. I do not want the users to see my blob storage URL, and I also want to provide my own download link so that I can control the name of the downloaded file.
I think this can be done by passing (forwarding) the stream. I found many similar questions here on SO, e.g. here: Download/Stream file from URL - asp.net.
The problem I see is this: imagine 1000 users start downloading one file simultaneously. This will totally kill my server, as there is only a limited number of threads in the pool, right?
I should say that the files I want to forward are about 100 MB, so one request can take about 10 minutes.
Am I right, or can I do this without risk? Would an async method in MVC 5 help? Thanks!
Update: my Azure example is here only to give some background. I am actually interested in the theoretical problem of long-running streaming methods in MVC.
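For concreteness, the kind of forwarding action I have in mind looks roughly like this (a sketch only; the controller, blob URL and file names are placeholders). Note that awaiting the remote response frees the worker thread only while waiting for Blob storage to answer; the copy of the body to the client still occupies a thread in MVC 5:

```csharp
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class DownloadController : Controller
{
    // Shared client; creating one per request would exhaust sockets under load.
    private static readonly HttpClient BlobClient = new HttpClient();

    public async Task<ActionResult> Download(string fileName)
    {
        // Hypothetical internal blob URL that the user never sees.
        Stream source = await BlobClient.GetStreamAsync(
            "https://mystorage.blob.core.windows.net/files/" + fileName);

        // FileStreamResult copies the stream to the response and disposes it.
        return File(source, "application/octet-stream", fileName);
    }
}
```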
In your situation, Lukas, I'd actually recommend you look at using the local, temporary storage area for the blob and serving it up from there. This results in a delay in delivering the file the first time, but all subsequent requests will be faster (in my experience) and result in fewer Azure storage transaction calls. It also eliminates the risk of running into throttling on the Azure storage account or blob. Your throughput limits would then be based on the outbound bandwidth of the VM instance and the number of connections it can support. I have a sample of this type of approach at: http://brentdacodemonkey.wordpress.com/2012/08/02/local-file-cache-in-windows-azure/
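A bare-bones sketch of that cache-then-serve idea (the paths and blob URL are placeholders, fileName is not validated, there is no guard against two requests downloading the same blob concurrently, and in practice you would likely use the storage SDK rather than raw HTTP):

```csharp
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class CachedDownloadController : Controller
{
    private static readonly HttpClient BlobClient = new HttpClient();

    public async Task<ActionResult> Download(string fileName)
    {
        // Cache the blob in local temporary storage on the first request.
        string cachePath = Path.Combine(Path.GetTempPath(), fileName);

        if (!System.IO.File.Exists(cachePath))
        {
            using (Stream source = await BlobClient.GetStreamAsync(
                       "https://mystorage.blob.core.windows.net/files/" + fileName))
            using (FileStream target = System.IO.File.Create(cachePath))
            {
                await source.CopyToAsync(target);
            }
        }

        // All later requests are served straight from local disk.
        return File(cachePath, "application/octet-stream", fileName);
    }
}
```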

Delphi (Indy) Specific Locking

Let's say I have some system that coordinates the transfer of many files; that is, I have an Indy TCP server controlling the synchronization of files over a large distributed system.
Currently, in order to send files to specific clients, it requires locking the Contexts list on the server.
If I have 500 clients connected and synchronizations taking place, I suspect this locking would be quite costly for performance, as it halts all of the client connection threads.
Is there any way to speed this up, or is this not really an issue? Is it worth distributing clients on many servers? What's the trick?
Cheers,
Adrian
There is no need to lock the Contexts list just to send files. Let the OS handle any file locking for you. When sending a file to a client, have the client open the file in read-only mode. This allows multiple clients to read from the same file at the same time. If a client is uploading a file, open the file in exclusive mode so other clients cannot access the file until the upload is finished.
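This thread is about Delphi/Indy, but the share-mode idea is an OS-level one; here is a minimal C# illustration of the two open modes described above (paths are placeholders; in Delphi the rough equivalent would be TFileStream with the fmShareDeny* flags):

```csharp
using System.IO;

class ShareModes
{
    // Reading for a download: allow other readers so many clients can fetch the same file.
    public static FileStream OpenForSending(string path)
    {
        return new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
    }

    // Writing an upload: exclusive access so nobody reads a half-written file.
    public static FileStream OpenForReceiving(string path)
    {
        return new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None);
    }
}
```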
If the clients are always connected, the OnExecute method - which runs in a loop until the connection terminates - can be used to send data to the clients when it is available. This, however, requires that the protocol is under your control.
A related question, with a detailed answer showing how the lock list can be avoided, can be found here:
TCPserver without OnExecute event

Delphi - Folder Synchronization over network

I have an application that connects to a database and can be used in multi-user mode, whereby multiple computers can connect to the same database server to view and modify data. One of the clients is always designated as the 'Master' client. This master also receives text information from either RS232 or UDP input and logs this data every second to a text file on the local machine.
My issue is that the other clients need to access this data from the Master client. I am just wondering about the best and most efficient way to solve this problem. I am considering two options:
Write a folder synchronize class to synchronize the folder on the remote (Master) computer with the folder on the local (client) computer. This would be a threaded, buffered file copying routine.
Implement a client/server so that the Master computer can serve this data to any client that connects and requests the data. The master would send the file over TCP/UDP to the requesting client.
The solution will have to take the following into account:
a. The log files are being written to every second. It must avoid any potential file locking issues.
b. The copying routine should only copy files that have been modified at a later date than the ones already on the client machine.
c. Be as efficient as possible
d. All machines are on a LAN
e. The synchronization need only be performed, say, every 10 minutes or so.
f. The amount of data is only on the order of ~50 MB, but once the initial (first) sync is complete, the amount of data to transfer would only be on the order of ~1 MB. This will increase in the future.
Which would be the better method to use? What are the pros/cons? I have also seen the Fast File Copy post, which I am considering using.
If you use a database, why does the "master" write data to a text file instead of to the database, if that data needs to be shared?
Why reinvent the wheel? Use rsync instead. The package for Windows is cwRsync.
For example, install the rsync server on the Master machine, and install the rsync client on the client machines (or simply drop the binaries into your project directory). Whenever needed, your application on a client machine executes rsync.exe, requesting it to synchronize the necessary files from the server; see the example command below.
In order to copy open files you will need to set up the Windows Volume Shadow Copy service. Here's a very detailed description of how the Master machine can be set up to allow copying of open files using Windows Volume Shadow Copy.
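For illustration (the host name "master-pc", the rsync module name "logs" and the local path are placeholders), a client-side call could look like this; --update skips files that are already newer on the client, which covers requirement (b):

```
rsync -rtvz --update rsync://master-pc/logs/ /cygdrive/c/MyApp/logs/
```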
Write a web service interface, so that the clients can connect to the server and pull new data as needed. Or you could write it as a subscribe/push mechanism, so that clients connect to the server, "subscribe", and then the server pushes all new content to the registered clients. Clients would need to do a full sync (get all changes since the last sync) when registering, in case they were offline when updates occurred.
Both solutions would work just fine on the LAN; the choice is yours. You might also want to consider these issues related to the technology you choose:
Deployment flexibility. Using file shares and file copy requires file sharing to work, and all LAN users might gain access to the log files.
Longer-term plans: file shares are only good on the local network, while IP-based solutions work over routed networks, including the Internet.
The file-based solution would be significantly easier to implement compared to the IP solution.

How can I read sections of a large remote file (via TCP/IP?)

A client has a system which reads large files (up to 1 GB) containing multiple video images. Access is via an indexing file which "points" into the larger file. This works well on a LAN. Does anyone have any suggestions as to how I can access these files over the Internet if they are held on a remote server? The key constraint is that we cannot afford the time necessary to download the whole file before accessing individual images within it.
You could put your big file behind an HTTP server like Apache, then have your client side use HTTP Range headers to fetch the chunk it needs.
Another alternative would be to write a simple script in PHP, Perl or server-language-of-your-choice which takes the required offsets as input and returns the chunk of data you need, again over HTTP.
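To make the first suggestion concrete, here is a hedged C# sketch of fetching a single byte range (the URL, offset and length are placeholders and would normally come from your index file):

```csharp
using System;
using System.IO;
using System.Net;

class RangeFetcher
{
    // Fetch one byte range of a remote file over HTTP.
    public static byte[] FetchRange(string url, long offset, int length)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.AddRange(offset, offset + length - 1);   // sends "Range: bytes=offset-(offset+length-1)"

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (Stream body = response.GetResponseStream())
        using (MemoryStream buffer = new MemoryStream())
        {
            // A server that honours the range replies with 206 Partial Content.
            body.CopyTo(buffer);
            return buffer.ToArray();
        }
    }
}
```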
If I understand the question correctly, it depends entirely on the format chosen to contain the images as a video. If the container has been designed in such a way that the information about each image is accessible just before or just after the image, rather than at the end of the container, you could extract the images and their metadata from the video container and start working on what you have downloaded so far. You will need to have an idea of the binary format used.
FTP does let you use 'paged files', where sections of the file can be transferred independently:
To transmit files that are discontinuous, FTP defines a page structure. Files of this type are sometimes known as "random access files" or even as "holey files". In FTP, the sections of the file are called pages. -- RFC 959
I've never used it myself though.
