Stopping the upload process if the upload limit I chose is exceeded

I am working on a website project (PHP/Apache) without any JS so far.
I have found various ways to set an upload limit for images sent to the server.
They work, but when I upload a very large file, the delay before the message "your file is too big" appears is far too long. This means that if a user doesn't understand what "max 2.4 MB" means, he is likely to wait a minute or two before seeing the message.
My question is:
Do you know any means of having the upload automatically cancelled if the image the user tries to transfer exceeds the limit?
Thanks a lot,
SunnyOne.

Basically, there are two ways to do this: with Flash/Java, or with fancy HTML5 JavaScript that only works on some browsers (and only the most recent versions of those, as well).
Check these other SO questions for pointers:
Client Checking file size using HTML5? and Detecting file upload size on the client side?
Also, check out these tools: YUI2 Uploader, FancyUpload, SWFUpload
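For reference, here is a minimal sketch of the HTML5 route (assuming a file input with id "upload" and the 2.4 MB limit from the question; this only runs in browsers that support the File API):

    <input type="file" id="upload">
    <script>
    var MAX_BYTES = 2.4 * 1024 * 1024; // the 2.4 MB limit from the question

    document.getElementById('upload').addEventListener('change', function () {
        var file = this.files[0];
        if (file && file.size > MAX_BYTES) {
            alert('Your file is too big (max 2.4 MB).');
            this.value = ''; // clear the selection so the oversized file is never sent
        }
    });
    </script>

The check happens before any byte leaves the user's machine, which is exactly what avoids the long server-side wait.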


Zoomify .zif format bad performance

The new .zif single-file format provided by Zoomify Pro seems to have some performance issues. Compared to the old file structure, it loads the page 3 to 4 times slower and sends over 50% more requests (tested with the same initial image in multiple file formats).
Using the old format is not feasible for our product, so we are stuck with over a minute of load time.
Has anyone encountered this issue, and are there any workarounds? Search results on the internet and the official site don't seem to be of any help.
NOTE: Contacting the vendor hasn't led to anything yet.
Although the official site claims the ZIF format can handle very large images, I'm skeptical, because the viewer tries to do everything in JavaScript. The performance is entirely based on the client's machine. Try opening it on a faster machine and see if it improves.
Alternative solution: you could create Deep Zoom Image (DZI) tiles using the VIPS library.
More information here:
https://libvips.github.io/libvips/API/current/Making-image-pyramids.md.html
Scroll further down in the article and you'll see this snippet:
With 7.40 and later, you can use --container to set the container
type. Normally dzsave will write a tree of directories, but with
--container zip you'll get a zip file instead. Use .zip as the directory suffix to turn on zip format automatically:
$ vips dzsave wtc.tif mypyr.zip
to write a zipfile containing the tiles.
Also, checkout this tutorial:
Serve deepzoom images from a zip archive with openseadragon
https://web.archive.org/web/20170310042401/https://literarymachin.es/deepzoom-osd-server/
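For orientation, the viewer side usually amounts to a few lines of OpenSeadragon; a minimal sketch (the element id and paths here are assumptions, not taken from the tutorial):

    var viewer = OpenSeadragon({
        id: "viewer",                        // id of the container div (assumed)
        prefixUrl: "/openseadragon/images/", // OpenSeadragon's bundled UI icons
        tileSources: "/tiles/mypyr.dzi"      // the pyramid produced by vips dzsave
    });

OpenSeadragon reads the .dzi descriptor and fetches only the tiles needed for the current viewport, which is what keeps large images responsive.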
The community (openseadragon and vips) is much stronger over there so you'll get help when you hit a wall.
If you want to take a break from all of this and just want the images zoomable, you could use a third-party service such as zoomable.ca or zoomo.ca. They are free and user friendly (upload your image and embed the viewer on your site, like a Google Map).
ZIF format designer here... ZIF can easily handle monstrous images, up to hundreds of terabytes in size.
Without a server, of course the viewer tries to do everything; it's the only option. As a result, serving ZIF directly from a web server will not be as performant as using an image server. But... you can DO it. Using Zoomify tile folders, loading will be faster, but you may have hundreds of thousands or millions of tiles to deal with on the server side, and transfers will be horrendously slow and error-prone.
There are always trade-offs. See zif.photo for the specification.

Watching videos while they are being uploaded

Is it possible to implement a feature that allows users to watch videos while they are being uploaded to the server by others? Is HTML5 suitable for this task? What about Flash? Are there any ready-to-go solutions? I don't want to reinvent the wheel. The application will be hosted on a dedicated server.
Thanks.
Of course it is possible; the data is there, isn't it?
However, it will be very hard to implement.
Also, I am not that into Python and I am not aware of a library or service suiting your requirements, but I can cover the basics of video streaming.
I assume you are talking about video files that are uploaded, not live streams, because for streams there are obviously thousands of solutions out there...
In the simplest case, the video being uploaded is already ready to be served to your clients and has a so-called "faststart atom". These atoms are container-format specific and there are sometimes several of them. The most common is the moov atom. It contains a lot of data and is very complex; however, for our use case, in a nutshell, it holds the data that enables the client to begin playing the video right away using the data available from the beginning.
You need that for progressive-download video (YouTube...), i.e. where a file is served from a web server: you obviously have not downloaded the full file, yet the player can already start playing.
If the faststart atom were not present, that would not be possible.
Sometimes playback works anyway, but the player cannot display a progress bar, for example, because it doesn't know how long the file is.
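If you control ingestion, the usual way to relocate the moov atom without re-encoding is ffmpeg's faststart flag (assuming MP4 input; the filenames are placeholders):
$ ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4
-c copy leaves the audio and video streams untouched; only the container is rewritten with the moov atom up front.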
Having that covered, the file can be uploaded. You will need an upload solution that writes the data directly to a buffer or a file (a file will be easier).
This is almost always the case; for example, PHP creates a file in its tmp_dir. You can also specify that directory if you want to find the video while it's being uploaded.
Now you can start reading that file byte by byte and writing that data to a connection to another client. Just be sure not to get ahead of what has already been received and written. You would probably initiate your upload with a metadata record in memory that holds the currently received byte position and the location of the file.
Anyone who requests the file after the upload has started can just receive the entire file or, if the upload is not yet finished, get it from your application.
You will have to throttle the data delivery or pause it when you run out of received data. To the client this will simply look like a slow connection. However, you will have to echo some data from time to time to prevent the connection from closing. But if your upload doesn't stall (and why should it?), that shouldn't be a problem.
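To make the read-behind-the-writer idea concrete, here is a rough Node.js sketch (the file path, port, and polling interval are all hypothetical; it assumes another process is still appending to video.part and that the faststart atom is already at the front):

    const fs = require('fs');
    const http = require('http');

    const PART_FILE = '/tmp/video.part'; // hypothetical path of the in-progress upload

    http.createServer((req, res) => {
        res.writeHead(200, { 'Content-Type': 'video/mp4' });
        let offset = 0;   // bytes already sent to this viewer
        let busy = false; // guard so reads never overlap

        // Poll the growing file and forward any newly written bytes.
        const timer = setInterval(() => {
            if (busy) return;
            busy = true;
            fs.stat(PART_FILE, (err, stat) => {
                if (err || stat.size <= offset) { busy = false; return; } // nothing new yet
                const end = stat.size - 1;
                fs.createReadStream(PART_FILE, { start: offset, end })
                    .on('data', (chunk) => res.write(chunk))
                    .on('end', () => { offset = end + 1; busy = false; });
            });
        }, 500);

        req.on('close', () => clearInterval(timer));
    }).listen(8080);

A real implementation would also detect when the upload has finished so it can end the response instead of polling forever.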
Now, if you want something like on-the-fly transcoding of various input formats into your desired output format, things get interesting.
AFAIK, ffmpeg has neat APIs that let you deal with data streams directly.
HandBrake is also a very good tool; however, you would have to take the long road of calling external executables.
I am not really aware of your requirements, but if your clients are already tuned in, for example to a Red5 streaming server, feeding data into a stream should also work fine.
Yes, take a look at Qik, http://qik.com/
"Instant Video Sharing ... Videos can be viewed live (right as they are being recorded) or anytime later."
Qik provides developer APIs, including ones like these:
qik.stream.subscribe_public_recent -- Subscribe to the videos (live and recorded)
qik.user.following -- Provides the list of people the user is following
qik.stream.public_info -- Get public information for a specific video
It is most certainly possible to do this, but it won't be trivial. And no, I don't think you will find an "out of the box" solution that requires little effort on your behalf.
You say you want to let:
users watch videos as they are uploaded to server by others
Well, this could be interpreted in two different ways:
1. Do you mean that you don't want a user to have to refresh the page before seeing new videos that other users have just finished uploading?
2. Or do you mean that you want one user to be able to watch a partially uploaded video (i.e. another user is still in the process of uploading it, and right now the server only contains part of the video)?
Implementing #1 wouldn't be hard at all. You would just need an AJAX script to check for newly uploaded videos, and those videos could then be served to the user in whatever way you choose; HTML5 vs. Flash isn't really a consideration here.
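A minimal sketch of that polling (the endpoint name and response shape are assumptions, not an existing API):

    // Ask a hypothetical endpoint for videos finished since the last check.
    let lastCheck = 0;

    setInterval(async () => {
        const res = await fetch('/videos/new?since=' + lastCheck);
        const videos = await res.json(); // assumed shape: [{ url, title }, ...]
        lastCheck = Date.now();
        for (const v of videos) {
            const el = document.createElement('video');
            el.src = v.url;
            el.controls = true;
            document.getElementById('video-list').appendChild(el);
        }
    }, 10000); // poll every 10 seconds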
The second scenario, on the other hand, would require quite a bit of effort. I am guessing that HTML5 might not be mature enough to handle this type of situation. If you are not looking to reinvent the wheel and don't have a lot of time to dedicate to this feature, then I would say that you are out of luck. You may be able to use ffmpeg to parse partial video files and feed them to a Flash player, but I would consider this a large task.

Why is Performing Multi-part Uploads to S3 on iOS not supported?

Problem statement:
I want to upload a large binary (such as an audio clip) from an iOS app to S3, and I'd like to make the app's handling of disconnects (or low connectivity) as robust as possible, preferably by uploading the binary as a series of chunks.
Unfortunately, neither the AWS iOS SDK nor ASI's S3 framework seems to support multipart uploads, or to indicate any plan to add support. I realize that I can initiate a longish upload using beginBackgroundTaskWithExpirationHandler: and that will give me a window of time to complete the upload (currently 600 seconds, I believe), but what's to be done if I'm not in a position to complete said upload within that timeframe?
Aside from worrying about completing tasks within that time frame, is there a 'best practice' for how an app should resume uploads, or even just break a larger upload into smaller chunks?
I've thought about writing a library to talk to S3's REST API specifically for multipart uploads, but this seems like a problem others have either solved already or realized needn't be solved (perhaps because it is completely inappropriate for the platform).
Another (overly complicated) solution would be chunking the file on the device, uploading the chunks to S3 (or elsewhere), and having them re-assembled on S3 by a server process. This seems even more unpalatable than rolling my own library for multipart upload.
How are others handling this problem?
Apparently I was looking at some badly out of date documentation.
In AmazonS3Client, see:

    - (S3MultipartUpload *)initiateMultipartUploadWithKey:(NSString *)theKey withBucket:(NSString *)theBucket

This gives you an S3MultipartUpload, which contains an uploadId.
You can then put together an S3UploadPartRequest using initWithMultipartUpload:(S3MultipartUpload *)multipartUpload and send it as you usually would.
S3UploadPartRequest has an int property partNumber where you can specify the part number you're uploading.
You can also write the code yourself; see http://dextercoder.blogspot.in/2012/02/multipart-upload-to-amazon-s3-in-three.html for reference. It is core Java code, but the steps can be applied on iOS.

What is the cause of frequent corrupted/incomplete download from Amazon CDN to iOS devices?

I mentioned Amazon CDN and iOS devices because I am not sure which part is the culprit.
I host jpg and PDF files in Amazon CDN.
I have an iOS application that downloads a large number of jpg and PDF files in a queue. I have tried using dataWithContentsOfURL: and ASIHTTPRequest, but I get the same result. ASIHTTPRequest at least gives a callback to indicate that there is a problem with the download, so I can force it to retry.
But this happens very often. Out of 100 files, usually 1-5 files have to be re-downloaded. If I check a corrupted file's size, it is smaller than the original file size, and the file can't be opened.
The corrupted files are usually different every time.
I've tried this on different ISPs and networks. It's the same.
Is there a configuration that I missed in Amazon CDN, or is there something else I missed in the iOS download code? Is it not recommended to queue a large number of files for download?
I wouldn't download more than 3 or 4 items at once on an iPhone. Regardless of implementation limitations (ASIHTTPRequest is decent anyway) or the potential for disk thrashing, you have to code for the 3G use case unless your app is explicitly marked (as in, with an Info.plist setting) as requiring Wi-Fi.
A solution exists within ASIHTTPRequest itself to make these operations sequential rather than concurrent. Add your ASIHTTPRequest objects to an ASINetworkQueue (the one returned by [ASINetworkQueue queue] will do fine). They will be executed one after the other.
Note that if you encounter any errors, all the requests on the queue will by default be cancelled unless you set the queue's shouldCancelAllRequestsOnFailure to NO.
EDIT: I just noticed you already mentioned using a download queue, so I would suspect it's more of an issue at the other end than on your device. Connections could be dropping for various reasons: the keep-alive setting is too short, resources are too low on the server so it's timing out, or some part of the physical link between the server and the Internet backbone is failing intermittently. To be sure, though, you probably need to test your app on multiple devices and confirm it fails consistently across all of them.
You could possibly try reducing the number of concurrent downloads:
ASIHTTPRequest.sharedQueue.maxConcurrentOperationCount = 2;
This changes the default ASIHTTPRequest queue - if you're using your own queue set the value on that instead.
The default is 4, which is above the limit recommended by the HTTP 1.1 RFC when using persistent connections and when all the content is on the same server.

HTML5 iPad error while downloading a large number of pictures

I want to develop an offline HTML5 website with a large number of pictures (10,000).
The problem is that during the downloading process, when Safari asks me to increase the cache limit, it stops the download and I need to start it again.
Is it possible to make such an application in HTML5 on an iPad? Can we breach the offline cache limit :) ?
Thanks a lot in advance!
The limit is set inside the browser itself, typically at 5MB. Currently, as far as I'm aware, the only way to increase it is by the user actually doing it via the browser menu.
There might be a way of increasing it programmatically, I don't know, but you would most certainly have to ask the user's permission to do so.
I also think it's a bit much to expect someone to allow 10k+ images to be stored on their system; that could be huge.
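If you try it anyway, you can at least detect when the download is interrupted and prompt the user to retry once they have raised the limit; a small sketch using the application cache's events (the alert wording is just an example):

    var cache = window.applicationCache;

    cache.addEventListener('error', function () {
        // Fires when caching fails, e.g. when the storage quota is exceeded.
        alert('Download interrupted. Please allow more offline storage and reload.');
    });

    cache.addEventListener('updateready', function () {
        cache.swapCache(); // switch to the freshly downloaded cache
    });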
