How to upload big files to webserver? - delphi

My application needs to send big files (>2GB) to a webserver. I tried to receive the file with a PHP script on the server side, but the file size exceeds the PHP upload limit. Sadly, my hosting provider does not allow me to increase the limit that much.
So it seems the only possible solution is FTP. But with FTP, I would need to store the credentials in the source code, and someone could reverse-engineer my software and extract them.
I haven't found any suitable and safe solution for this. Can someone give me a hint on how this should be done?

Related

How to transfer files from iPhone to EC2 instance or EBS?

I am trying to create an iOS app, which will transfer the files from an iPhone to a server, process them there, and return the result to the app instantly.
I have noticed that AWS offers an SDK to transfer files from an iOS app to S3, but not to EC2 (or at least not to EBS, which can be attached to EC2). I wonder why I have to go through S3 when my business logic doesn't warrant storage of the files. I have used tools such as s3cmd and s3fs to connect to S3 from EC2, but they are very slow at transferring files to EC2. I am concerned that the route through S3 will add delay, especially when the users expect a result in a split second.
Could you please guide me on how I can bypass the S3 route to transfer files in real time from an iOS app to EC2 (or EBS)?
Allowing an app to write directly to an instance file system is a non-starter, short of treating it as a network drive, which would be pretty convoluted, not to mention the security issues you'll almost certainly have. This really is what S3 is there for. You say you are seeing bad performance between EC2 and S3; that does not sound right at all, since that is an internal datacenter connection, which should be far faster than a connection from a mobile device to the Amazon datacenter. Are you sure you created your bucket and instance in the same region? Alternatively, it might be the clients you're using; don't try to set up file system access, just use the AWS CLI.
If you are really tied to the idea of going direct to the EC2 instance, you will need to do it over some network connection, either by running a web server on the instance or perhaps using some variety of copy over SSH, if that is available on iOS. It does seem pointless to set this up when S3 has already done it for you. Finally, depending on how big the files are, you may be able to get away with SQS or some kind of database store.
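For what it's worth, a minimal sketch of the S3 hand-off on the EC2 side could look like this, using boto3; the region, bucket, and key names are placeholders, and the iOS app is assumed to have uploaded the object first (e.g. with the AWS mobile SDK).
# Rough sketch of the EC2 side of the S3 hand-off (bucket/key are placeholders).
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # same region as the bucket

def fetch_upload(bucket: str, key: str, local_path: str) -> str:
    """Pull down an object the mobile client uploaded to S3."""
    s3.download_file(bucket, key, local_path)  # internal S3-to-EC2 transfer
    return local_path

if __name__ == "__main__":
    path = fetch_upload("my-upload-bucket", "uploads/video-1234.mov", "/tmp/video-1234.mov")
    print("ready to process:", path)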
It's okay being a newbie!! I ran up against exactly the same processing problem and solved it by running a series of load-balanced webservers: the mobile calls an upload utility, uploads the file, the server processes it, and then deploys the results to S3 behind a signed URL which the mobile can display. It is fast, reliable and secure. The results are cached using CloudFront, so once written they are blazing fast to re-access on the mobile. Hope this helps.
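As a rough illustration of the signed-URL step described above (not the answerer's actual code), a boto3 sketch might look like this; the bucket and key names are made up.
# Upload a processed result and return a short-lived signed URL for the mobile client.
import boto3

s3 = boto3.client("s3")

def publish_result(bucket: str, key: str, local_path: str, expires: int = 3600) -> str:
    s3.upload_file(local_path, bucket, key)
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,  # URL stops working after this many seconds
    )

# e.g. url = publish_result("my-results-bucket", "results/1234.json", "/tmp/1234.json")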

Is it dangerous for performance to provide an MVC file download as stream forwarding from another stream source?

I want to provide a download link in an Azure MVC web site for files that are stored in Blob storage. I do not want the users to see my blob storage URL, and I want to serve my own download link so that I can also control the name of the file.
I think this can be done by passing (forwarding) the stream. I found many similar questions here on SO, e.g. here: Download/Stream file from URL - asp.net.
The problem I see is this: imagine 1000 users start downloading one file simultaneously. This will totally kill my server, as there is a limited number of threads in the pool, right?
I should say that the files I want to forward are about 100 MB, so one request can take about 10 minutes.
Am I right, or can I do this with no risk? Would an async method in MVC 5 help? Thanks!
Update: My Azure example is here only to give some background. I am actually interested in the theoretical problem of long streaming methods in MVC.
In your situation, Lukas, I'd actually recommend you look at using the local, temporary storage area for the blob and serve it up from there. This will result in a delay in delivering the file the first time, but all subsequent requests will be faster (in my experience) and result in fewer Azure storage transaction calls. It also eliminates the risk of running into throttling on the Azure storage account or blob. Your throughput limits would then be based on the outbound bandwidth of the VM instance and the number of connections it can support. I have a sample for this type of approach at: http://brentdacodemonkey.wordpress.com/2012/08/02/local-file-cache-in-windows-azure/
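The general caching pattern the answer describes might look roughly like this, sketched in Python with the azure-storage-blob package rather than in MVC/C#; the cache directory, container, and blob names are placeholders.
# Download the blob into local/temporary storage once, then serve all later requests from disk.
import os
from azure.storage.blob import BlobServiceClient

CACHE_DIR = "/tmp/blob-cache"  # stands in for the role's local/temporary storage
service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])

def cached_blob_path(container: str, blob_name: str) -> str:
    """Return a local path for the blob, downloading it only on the first request."""
    local_path = os.path.join(CACHE_DIR, blob_name.replace("/", "_"))
    if not os.path.exists(local_path):
        os.makedirs(CACHE_DIR, exist_ok=True)
        blob = service.get_blob_client(container=container, blob=blob_name)
        with open(local_path, "wb") as f:
            blob.download_blob().readinto(f)  # one storage transaction per file, not per user
    return local_path  # hand this to the web framework's file/stream response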

How to monitor File Uploads without using Flash?

I've been looking for a way to monitor file upload information without using Flash, probably using Ajax, I suppose. I want to monitor the speed and the percentage of the upload that has finished.
Do you know of any resource that describes how to do that, or what I should follow to do it?
In the pre-HTML5 world I believe this requires web-server support. I've used this Apache module successfully in the past:
http://piotrsarnacki.com/2008/06/18/upload-progress-bar-with-mod_passenger-and-apache/
The only way without Flash is to do it on the server. The gist is:
Start the file upload
Open a streaming connection to the server
Have the server read the POST headers to tell you how large the file is going to be
Have the server repeatedly check the file size (in /tmp, generally) to see how complete it is
Stream the % done back to the client
I've done it before in other languages, but never in Ruby, so I'm not sure of a project that's done it, sorry. A rough sketch of the polling piece is shown below.
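For illustration only, here is a minimal sketch of that server-side polling piece in Python/Flask rather than Ruby; the upload id, temp path, and the EXPECTED bookkeeping (filled in from the request's Content-Length) are assumptions about how the upload handler stores its state.
# Progress endpoint: compares bytes written so far against the expected total.
import os
from flask import Flask, jsonify

app = Flask(__name__)
UPLOAD_TMP = "/tmp/uploads"   # where the upload handler spools the incoming file
EXPECTED = {}                 # upload_id -> total bytes, taken from the Content-Length header

@app.route("/progress/<upload_id>")
def progress(upload_id):
    # The client polls this endpoint while the upload POST is still in flight.
    total = EXPECTED.get(upload_id)
    path = os.path.join(UPLOAD_TMP, upload_id)
    written = os.path.getsize(path) if os.path.exists(path) else 0
    percent = round(100.0 * written / total, 1) if total else 0.0
    return jsonify({"bytes": written, "total": total, "percent": percent})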

How can I read sections of a large remote file (via TCP/IP)?

A client has a system which reads large files (up to 1 GB) containing multiple video images. Access is via an indexing file which "points" into the larger file. This works well on a LAN. Does anyone have any suggestions as to how I can access these files over the internet if they are held on a remote server? The key constraint is that we cannot afford the time necessary to download the whole file before accessing individual images within it.
You could put your big file behind an HTTP server like Apache, then have your client side use HTTP Range headers to fetch the chunk it needs.
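A minimal sketch of the Range-header approach, assuming the Python requests library; the URL and byte offsets would come from the client's own index file.
# Fetch only the bytes for one image, using an HTTP Range request.
import requests

def fetch_chunk(url: str, offset: int, length: int) -> bytes:
    """Fetch `length` bytes starting at `offset` without downloading the whole file."""
    headers = {"Range": f"bytes={offset}-{offset + length - 1}"}
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    if resp.status_code != 206:  # server must answer 206 Partial Content
        raise RuntimeError("server did not honour the Range header")
    return resp.content

# e.g. image_bytes = fetch_chunk("http://example.com/video.dat", 104857600, 524288)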
Another alternative would be to write a simple script in PHP, Perl or server-language-of-your-choice which takes the required offsets as input and returns the chunk of data you need, again over HTTP.
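And a rough sketch of what that offset-serving script alternative could look like, in Python/Flask rather than PHP or Perl; the file path and parameter names are invented for illustration.
# Serve an arbitrary byte range of the big file, given offset and length parameters.
import os
from flask import Flask, request, abort, Response

app = Flask(__name__)
BIG_FILE = "/data/video.dat"   # the large multi-image file on the server

@app.route("/chunk")
def chunk():
    offset = int(request.args.get("offset", 0))
    length = int(request.args.get("length", 0))
    if length <= 0 or offset < 0 or offset >= os.path.getsize(BIG_FILE):
        abort(400)
    with open(BIG_FILE, "rb") as f:
        f.seek(offset)
        data = f.read(length)
    return Response(data, mimetype="application/octet-stream")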
If I understand the question correctly, it depends entirely on the format chosen to contain the images as a video. If the container has been designed in such a way that the information about each image is accessible just before or just after the image, rather than at the end of the container, you could extract the images and their metadata from the video container and start working on what you have downloaded so far. You will need to have an idea of the binary format used.
FTP does let you use 'paged files', where sections of the file can be transferred independently:
To transmit files that are discontinuous, FTP defines a page structure. Files of this type are sometimes known as "random access files" or even as "holey files". In FTP, the sections of the file are called pages. -- RFC 959
I've never used it myself though.

Limit upload speed for testing on lighttpd

I'm implementing ubr upload. It uses Perl and PHP to upload files with a progress bar. I'm running a lighttpd development server and would like to test it fully. Currently it just transfers the files instantly, since it's really just moving files on my computer. Is there a way to make it seem like it actually transfers them slowly, so I can watch the progress bar?
I tried adding the following to my lighttpd.conf. It may have slowed down loading the pages a little, but uploads are still instantaneous.
$HTTP["host"] == "localhost" {
server.kbytes-per-second = 8
}
Thanks
Instead of throttling things on the server side, you could try throttling your client machine. There's a nice article on how to throttle bandwidth on Macs over at O'Reilly:
Exploring the Mac OS X firewall
ipfw is a BSD thing, but on Linux you could try using the shaper module and shapecfg:
Traffic Shaping Basics
$HTTP['host'] matches the Host header of the request. You could put the config variable in the configuration file without the host check.
Thanks for the help! Actually, I'm dual-booting and just tested my exact script on my Apache server. When I transfer a 200 MB file on Apache, it actually displays the progress bar as the file transfers. On my lighttpd server, the page is "busy" as it posts the file in the background, then the bar pops up as 100% complete.
I think the way the script works is that CGI posts the file, and as it does that it keeps writing the size it has written so far into another file. Then a PHP script is called every second, which opens this file and looks at how much has been written.
It seems like my lighttpd server is not allowing Perl and PHP to work at the same time... I may be wrong though.
On my Windows server I actually installed WAMP and Perl. My lighttpd is using FastCGI for the PHP and just the mod_cgi module for the Perl scripts.
Ah, it looks like other people have issues with lighttpd and Uber Uploader...
(can't link to it since I'm new)
Now the question is whether lighttpd is worth using, since I'll have to change this on top of all my mod_rewrite stuff.
Try using Charles: http://www.charlesproxy.com/
You can limit your browser bandwidth by using the Sloppy HTTP proxy: http://www.dallaway.com/sloppy/
Sloppy deliberately slows the transfer of data between client and server.
Example usage: you probably build web sites on your local network, which is fast. Using Sloppy is one way to get the "dial-up experience" of your work without the hassle of having to install a modem.
