With Flash 10.1+ and the ability to use appendBytes on a NetStream, it's possible to use HTTP streaming in Flash for video delivery. But it seems that the delivery method requires the segments to be stored in a single file on disk, which can only be broken into discrete segment files by an FMS or an Apache module. You can cache the individual segment files once they're created, but the documentation indicates that you still must always use an FMS or the Apache module to produce those files in the first instance.
Is it possible to break the single on-disk file into multiple on-disk segments without using an FMS, Wowza product or Apache?
There was an application which decompiled the output of the F4fpackager to allow it to be hosted anywhere, without the Apache Module. Unfortunately this application was withdrawn.
It should be possible to use a proxy to cache the fragments. Then you can use these cached files on any webserver.
In an Azure MVC web site, I want to provide a download link for files that are stored in Blob storage. I do not want users to see my Blob storage URL, and I want to serve the file through my own download link so I can also control the file name.
I think this can be done by passing (forwarding) the stream. I found many similar questions here on SO, e.g. here: Download/Stream file from URL - asp.net.
The problem I see is this: imagine 1000 users start downloading one file simultaneously. That would totally kill my server, since there is a limited number of threads in the pool, right?
I should say that the files I want to forward are about 100 MB, so one request can take about 10 minutes.
Am I right, or can I do this without risk? Would an async method in MVC 5 help? Thanks!
Update: My Azure example is here only to give some background. I am actually interested in the general problem of long-running streaming methods in MVC.
In your situation, Lukas, I'd actually recommend you look at using the local, temporary storage area for the blob and serving it up from there. This results in a delay delivering the file the first time, but all subsequent requests will be faster (in my experience) and result in fewer Azure storage transaction calls. It also eliminates the risk of running into throttling on the Azure storage account or blob. Your throughput limits would then be based on the outbound bandwidth of the VM instance and the number of connections it can support. I have a sample for this type of approach at: http://brentdacodemonkey.wordpress.com/2012/08/02/local-file-cache-in-windows-azure/
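To make the cache-then-serve idea concrete (the question is about ASP.NET MVC, but the pattern is language-agnostic), here is a rough sketch in Java rather than the linked sample; the cache directory and method names are made up:

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Sketch only: on the first request, copy the blob to local temporary storage;
// every later request streams the local copy instead of hitting blob storage again.
public class BlobFileCache {
    private final Path cacheDir = Paths.get("/tmp/blob-cache"); // hypothetical local cache dir

    public Path getOrFetch(String blobUrl, String downloadName) throws IOException {
        Files.createDirectories(cacheDir);
        Path local = cacheDir.resolve(downloadName);
        if (!Files.exists(local)) {
            // First request pays the download cost once; later requests skip blob storage.
            try (InputStream in = new URL(blobUrl).openStream()) {
                Files.copy(in, local, StandardCopyOption.REPLACE_EXISTING);
            }
        }
        return local; // stream this local file in the response, under your own file name
    }
}

The first hit pays the blob download once; every later request just streams a local file, so the thread-pool pressure is that of serving static content rather than proxying blob storage.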
I'm using lighttpd on an embedded device with relatively small amounts of RAM and flash storage, and I'm running into an issue where lighttpd buffers the entire file upload to disk (or RAM), so the system runs out of space. When using Apache, it essentially streams the data straight through to the CGI program, which is what I need.
From my research, I haven't been able to find any way to configure lighttpd (or nginx) so that it will not buffer the entire file upload, but instead pass it directly (stream it) to the CGI program that will consume it.
The application is a system upgrade which will be written directly to a certain area of flash by the CGI program, but I simply don't have the space for any type of buffering/caching which seems to be required by the lightweight web servers I have looked at.
Does anyone know of a way to avoid this buffering with lighttpd/nginx or another lightweight web server?
The Nginx Upload Module was written to handle these types of situations, but it appears to have been abandoned by the author and apparently does not work with Nginx 1.3.9+.
The Nginx Big Upload Module is an extension to the Nginx Lua Module to handle this.
If you prefer to do things yourself, you can try the Lua Resty Upload extension to the Nginx Lua Module written by the author of the Lua Module himself.
Since lighttpd 1.4.40 (released July 2016), you can set server.stream-request-body = 2 to stream the request body to the backend instead of buffering it.
See the lighttpd server.stream-request-body documentation.
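In lighttpd.conf that is a single directive (a minimal fragment, assuming 1.4.40 or later):

# stream the request body through to the backend instead of buffering the whole upload
server.stream-request-body = 2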
(old question, but it came up at the top of a search, so I am updating with an answer)
I'm looking into using Apache Commons VFS for a project that will need to transfer files between a local server and remote servers via FTP, SFTP and HTTPS.
The standard usage examples get the FileSystemManager from a static method:
FileSystemManager fsManager = VFS.getManager();
Is it safe to use the same FileSystemManager across multiple threads?
And a second question is about properly releasing resources in a finally block: I find the following methods in the Javadoc API:
http://commons.apache.org/proper/commons-vfs/apidocs/org/apache/commons/vfs2/FileObject.html#close()
http://commons.apache.org/proper/commons-vfs/apidocs/org/apache/commons/vfs2/FileSystemManager.html#closeFileSystem(org.apache.commons.vfs2.FileSystem)
http://commons.apache.org/proper/commons-vfs/apidocs/org/apache/commons/vfs2/FilesCache.html#close()
http://commons.apache.org/proper/commons-vfs/apidocs/org/apache/commons/vfs2/impl/DefaultFileSystemManager.html#close()
But it's not clear to me which of these resources should typically be closed.
The FileSystemManager and FileSystem objects are supposed to be thread safe; however, I would not bet my life on it. Some internal locking (especially around renames) depends on the FileObject instance, so you should not use a FilesCache which does not keep those instances around (i.e. the default cache is fine).
FileContent and streams should not be used concurrently (in fact, FileContent.close(), for example, only acts on the streams of the current thread).
There are some resource leaks in this area (hopefully all fixed in 2.1-SNAPSHOT).
VFS.getManager() provides a single shared manager, i.e. a single point of access to the file system, so I would not recommend using it from a multithreaded environment. You can create your own DefaultFileSystemManager and call its close method when you are done.
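A minimal sketch of that pattern; here I use StandardFileSystemManager (a DefaultFileSystemManager subclass that loads the standard provider configuration), and the SFTP URI is purely illustrative:

import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemException;
import org.apache.commons.vfs2.impl.StandardFileSystemManager;

public class VfsDownload {
    public static void main(String[] args) throws FileSystemException {
        // A per-use manager instead of the shared VFS.getManager() singleton.
        StandardFileSystemManager manager = new StandardFileSystemManager();
        manager.init();
        FileObject remote = null;
        try {
            remote = manager.resolveFile("sftp://user@example.com/path/to/file.txt"); // hypothetical URI
            // ... work with remote.getContent() here ...
        } finally {
            if (remote != null) {
                remote.close();   // closes the FileObject's content and open streams
            }
            manager.close();      // shuts the manager down along with its file systems
        }
    }
}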
I've been looking for a way to monitor file upload progress without using Flash, probably using Ajax, I suppose. I want to monitor the speed and the percentage of the upload that has finished.
Do you know of any resource that describes how to do that, or what I should follow to do it?
In the pre-HTML5 world I believe this requires web-server support. I've used this Apache module successfully in the past:
http://piotrsarnacki.com/2008/06/18/upload-progress-bar-with-mod_passenger-and-apache/
The only way without Flash is to do it on the server. The gist is:
1. Start the file upload
2. Open a streaming connection to the server
3. Have the server read the POST headers to tell you how large the file is going to be
4. Have the server repeatedly check the file size (in /tmp generally) to see how complete it is
5. Stream the % done back to the client
I've done it before in other languages, but never in Ruby, so I'm not sure of a project that's done it, sorry.
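As a rough, language-agnostic illustration of steps 4 and 5 (in Java only because I have to pick something; the temp file path and total size are assumptions), the server-side check amounts to comparing the temp file's current size with the Content-Length the client sent:

import java.io.File;

public class UploadProgress {
    // Compute how far along an in-progress upload is, given its temp file and the
    // total size announced in the request's Content-Length header.
    public static int percentComplete(File tempUpload, long expectedBytes) {
        if (expectedBytes <= 0) {
            return 0;
        }
        long written = tempUpload.exists() ? tempUpload.length() : 0L;
        return (int) Math.min(100, (written * 100) / expectedBytes);
    }

    public static void main(String[] args) {
        File tempUpload = new File("/tmp/upload-1234.part"); // hypothetical temp file path
        long expectedBytes = 104_857_600L;                   // taken from Content-Length (100 MB)
        System.out.println(percentComplete(tempUpload, expectedBytes) + "% done");
    }
}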
A client has a system which reads large files (up to 1 GB) containing multiple video images. Access is via an indexing file which "points" into the larger file. This works well on a LAN. Does anyone have any suggestions as to how I can access these files over the internet if they are held on a remote server? The key constraint is that we cannot afford the time needed to download the whole file before accessing individual images within it.
You could put your big file behind an HTTP server like Apache, then have your client side use HTTP Range headers to fetch the chunk it needs.
Another alternative would be to write a simple script in PHP, Perl or server-language-of-your-choice which takes the required offsets as input and returns the chunk of data you need, again over HTTP.
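To illustrate the Range approach, here is a minimal Java sketch; the URL and the offset/length, which in practice would come from your index file, are assumptions:

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeFetch {
    // Fetch a single byte range (one image, located via the index file) from a large
    // file served over plain HTTP.
    public static byte[] fetchRange(String url, long offset, int length) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        // Ask only for the bytes of this image, e.g. "bytes=1048576-1050623".
        conn.setRequestProperty("Range", "bytes=" + offset + "-" + (offset + length - 1));
        try (InputStream in = conn.getInputStream()) {
            byte[] buffer = new byte[length];
            int read = 0;
            while (read < length) {
                int n = in.read(buffer, read, length - read);
                if (n < 0) {
                    break; // server returned fewer bytes than requested
                }
                read += n;
            }
            return buffer;
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws IOException {
        // e.g. pull 4 KB starting at offset 1,048,576 out of the big video file
        byte[] image = fetchRange("http://example.com/videos/big-file.bin", 1_048_576L, 4096);
        System.out.println("fetched " + image.length + " bytes");
    }
}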
If I understand the question correctly, it depends entirely on the format chosen to contain the images as a video. If the container has been designed so that the information about each image is accessible just before or just after the image, rather than at the end of the container, you can extract individual images and their metadata from the container and start working on what you have downloaded so far. You will need to know the binary format used.
FTP does let you use 'paged files', where sections of the file can be transferred independently:
To transmit files that are discontinuous, FTP defines a page structure. Files of this type are sometimes known as "random access files" or even as "holey files". In FTP, the sections of the file are called pages. -- RFC 959
I've never used it myself though.