Handling large image file uploads - asp.net-mvc

On my ASP.NET MVC 4 site I have a feature where a user can upload a photo via a standard file uploader. The photo gets saved into a file table within SQL Server.
I have recently run into an issue where users upload very large photos, which in turn eats up bandwidth whenever the image is rendered.
What is the best way to handle this? Can I restrict the size of the file being uploaded? Or is there a way of reducing the number of bytes being uploaded while maintaining quality?

Refer to this post for the maxRequestLength config setting and a way to provide a friendlier error.
This question and answer may also be helpful.
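For reference, the relevant web.config entries look something like this (an illustrative cap of roughly 15 MB, matching the client-side check below; maxRequestLength is in kilobytes, maxAllowedContentLength in bytes):
<system.web>
  <!-- maxRequestLength is specified in kilobytes: 15360 KB = 15 MB -->
  <httpRuntime maxRequestLength="15360" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <!-- IIS 7+ limit, specified in bytes -->
      <requestLimits maxAllowedContentLength="15728640" />
    </requestFiltering>
  </security>
</system.webServer>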
You can also check the size of the file in JavaScript before uploading, so that it doesn't even get sent to the server if it is too big (the code below checks for anything bigger than 15 MB):
// `file` is a File object taken from the file input, e.g. fileInput.files[0]
if (Math.floor(file.size / 1024 / 1024) >= 15)
{
    alert('File size is greater than maximum allowed. Please make sure that the file is smaller than 15 MegaBytes.');
    return false;
}
Alternatively, on the server side you can use WebImage.Resize() to resize the image once the file has been uploaded. It won't help with the bandwidth during upload, but it will make subsequent downloads a lot faster. Making an image smaller will cause some loss in quality, but it generally does a good job; just make sure that you choose the option to maintain the aspect ratio to prevent distortion.
As for reducing the bytes before uploading, there isn't any way I know of to do this in the browser. You could provide a separate client-side application that resizes the files before the upload, using the WebImage.Resize method in your app.

Related

How can I set a maximum file size for pages in Scanbot

In my app documents are scanned and handled by Scanbot.io and then uploaded to the backend server. Is there a way to configure a maximum size for these documents in Scanbot?
I checked the documentation but could not find a relevant setting.
Ideally scanned Page objects that are too large for upload would be flagged in the detectionResult field and could be handled accordingly.
Does anyone have experience with achieving something similar?
A bit late, but in my experience with Scanbot there are a few settings that help control the resulting size. They are not based on physical file size, but on the desired quality/resolution:
In initializeSdk you can set a storageImageQuality setting. I've heard that this can confidently be set at 80, since quality above that likely won't be noticeable to the eye.
A quality option exists on calls to other functions too (e.g. detectDocument, applyImageFilter), similar to the above.
The startDocumentScanner method has the ability to pass a documentImageSizeLimit object, which allows you to restrict the max width/height values.

Displaying interlaced (progressive) Image in UIImageView

I am trying to display a JPEG image as it downloads, using part of the data, similar to what many web browsers and the Facebook app do:
first a low-quality version of the image is shown (from just part of the data), and then the full image is displayed in full quality.
This is best shown in the VIDEO HERE.
I followed this SO question:
How do I display a progressive JPEG in an UIImageView while it is being downloaded?
but all I got was an image view that renders as the data keeps coming in; there is no low-quality version first, no true progressive download and render.
Can anyone share a code snippet or point me to more info on how this can be implemented in an iOS app?
I tried this link, for example, which shows JPEG info; it identifies the image as progressive:
http://www.webpagetest.org/jpeginfo/jpeginfo.php?url=http://cetus.sakura.ne.jp/softlab/software/spibench/pic_22p.jpg
and I used the correct code sequence:
-(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
    /// Append the data
    [_dataTemp appendData:data];
    /// Get the total bytes downloaded
    const NSUInteger totalSize = [_dataTemp length];
    /// Update the data source; we must pass ALL the data, not just the new bytes
    CGImageSourceUpdateData(_imageSource, (__bridge CFDataRef)_dataTemp, (totalSize == _expectedSize) ? true : false);
    /// We know the expected size of the image
    if (_fullHeight > 0 && _fullWidth > 0)
    {
        /// Create an image from the data received so far
        /// (for a progressive JPEG this renders the partial scans)
        CGImageRef image = CGImageSourceCreateImageAtIndex(_imageSource, 0, NULL);
        if (image != NULL)
        {
            [_imageView setImage:[UIImage imageWithCGImage:image]];
            CGImageRelease(image);
        }
    }
}
but the code only shows the image when it has finished loading. With other images it will show them as they download, but only top to bottom; there is no low-quality version that then progressively gains detail the way browsers do.
DEMO PROJECT HERE
This is a topic I've had some interest in for a while: there appears to be no way to do what you want using Apple's APIs, but if you can invest time in this you can probably make it work.
First, you are going to need a JPEG decoding library: libjpeg or libjpeg-turbo. You will then need to integrate it into something you can use with Objective-C. There is an open source project, PhotoScrollerNetwork, that leverages the turbo library to decode very large JPEGs "on the fly" as they download, so they can be panned and zoomed (PhotoScroller is an Apple project that does the panning and zooming, but it requires pre-tiled images).
While the above project is not exactly what you want, you should be able to lift much of the libjpeg-turbo interface to decode progressive images and return the low-quality versions as they are received. It would appear that your images are quite large, otherwise there would be little need for progressive images, so you may find the panning/zooming capability of the above project useful as well.
Some users of PhotoScrollerNetwork have requested support for progressive images, but it seems there is very little general use of them on the web.
EDIT: A second idea: if it's your own site that you would use to vend progressive images (and I assume this, since there are so few to be found normally), you could take a completely different tack.
In this case, you would construct a binary file of your own design, one that had, say, 4 images inside it. The first four bytes would provide the length of the data following them (and each subsequent image would use the same 4-byte prefix). Then, on the iOS side, as the download starts, once you got the full bytes of the first image, you could use those to build a small low-res UIImage and show it while the next image was being received. When the next one fully arrives, you would update the low-res image with the newer higher-res image. It's possible you could use a zip container and do on-the-fly decompression; I'm not 100% sure. In any case, the above is a standard solution to your problem, and would provide near-identical performance to libjpeg with much, much less work.
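A minimal sketch of the client-side parsing for such a container (my own illustration, not from the original answer; it assumes 4-byte big-endian length prefixes, and the function name is made up):
#import <Foundation/Foundation.h>

/// Extracts the next length-prefixed image payload from `buffer`, if it has
/// fully arrived, and removes it from the buffer. Returns nil otherwise.
static NSData *NextImageFromBuffer(NSMutableData *buffer)
{
    if ([buffer length] < 4)
        return nil;

    /// Read the 4-byte big-endian length prefix
    uint32_t beLength = 0;
    [buffer getBytes:&beLength length:4];
    const uint32_t length = CFSwapInt32BigToHost(beLength);

    if ([buffer length] < 4 + (NSUInteger)length)
        return nil; /// This image has not fully downloaded yet

    NSData *image = [buffer subdataWithRange:NSMakeRange(4, length)];
    /// Drop the consumed prefix and payload from the front of the buffer
    [buffer replaceBytesInRange:NSMakeRange(0, 4 + length) withBytes:NULL length:0];
    return image;
}
Each complete payload pulled out of the buffer can be turned into a UIImage and swapped into the image view, replacing the previous lower-res version.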
I have implemented a progressive loading solution for an app I am currently working on. It does not use progressive JPEG, as I needed more flexibility loading different-res versions, but I get the same result (and it works really well, definitely worth implementing).
It's a camera app working in tandem with a server. So the images originate with the iPhone's camera and are stored remotely. When the server gets the image, it gets processed (using imageMagick, but could be any suitable library) and stored in 3 sizes - small thumb (~160 x 120), large thumb (~400x300) and full-size (~ double retina screensize). Target devices are retina iPhones.
I have an ImageStore class which is responsible for loading images asynchronously from wherever they happen to be, trying the fastest location first (live cache, local filesystem cache, asset library, network server).
typedef void (^RetrieveImage)(UIImage *image);

- (void)fullsizeImageFromPath:(NSString *)path completion:(RetrieveImage)completionBlock;
- (void)largeThumbImageFromPath:(NSString *)path completion:(RetrieveImage)completionBlock;
- (void)smallThumbImageFromPath:(NSString *)path completion:(RetrieveImage)completionBlock;
Each of these methods will also attempt to load lower-res versions. The completion block actually loads the image into its imageView.
Thus:
fullsizeImageFromPath will get the full-sized version, and also call largeThumbImageFromPath;
largeThumbImageFromPath will get the large thumb and also call smallThumbImageFromPath;
smallThumbImageFromPath will just get the small thumb.
These methods invoke calls that are wrapped in cancellable NSOperations. If a larger-res version arrives before any of its lower-res siblings, those respective lower-res calls are cancelled. The net result is that fullsizeImageFromPath may end up applying the small thumb, then the large thumb, and finally the full-res image to a single imageView, depending on which arrives first. The result is really smooth.
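From the caller's side, the net effect is that one request can repaint the same image view several times with progressively better images. A tiny usage sketch (imageStore and photoPath are assumed):
/// The block may run up to three times: small thumb, large thumb, full-size
[imageStore fullsizeImageFromPath:photoPath completion:^(UIImage *image) {
    self.imageView.image = image;
}];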
Here is a gist showing the basic idea
This may not suit you as you may not be in control of the server side of the process. Before I had implemented this, I was pursuing the solution that David H describes. This would have been a lot more work, and less useful once I realised I also needed access to lower-res images in their own right.
Another approach which might be closer to your requirements is explained here
This has evolved into NYXProgressiveImageView, a subclass of UIImageView which is distributed as part of NYXImagesKit
Finally, for a really hacky solution, you could use a UIWebView to display progressive PNGs (progressive JPEGs do not appear to be supported).
update
After recommending NYXProgressiveImageView, I realised that this is what you have been using. Unfortunately you did not mention this in your original post, so I feel I have been on a bit of a runaround. In fact, reading your post again, I feel you have been a little dishonest. From the text of your post, it looks as if the "DEMO" is a project that you created. In fact you didn't create it, you copied it from here:
http://cocoaintheshell.com/2011/05/progressive-images-download-imageio/ProgressiveImageDownload.zip
which accompanies this blog entry from cocoaintheshell
The only changes you have made are one NSLog line and the JPG test URL.
The code snippet that you posted isn't yours; it is copied from this project without attribution. If you had mentioned this in your post, it would have saved me a whole heap of time.
Anyway, returning to the post... as you are using this code, you should probably be using the current version, which is on github:
https://github.com/Nyx0uf/NYXImagesKit
see also this blog entry
To keep your life simple, you only need these files from the project:
NYXProgressiveImageView.h
NYXProgressiveImageView.m
NYXImagesHelper.h
NYXImagesHelper.m
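With those four files added, basic usage is minimal. A sketch (loadImageAtURL: is the entry point in the version I used; check the current header to be sure):
#import "NYXProgressiveImageView.h"

NYXProgressiveImageView *imageView =
    [[NYXProgressiveImageView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 320.0f, 240.0f)];
[self.view addSubview:imageView];

/// The view renders the image incrementally as the data arrives
[imageView loadImageAtURL:[NSURL URLWithString:@"http://www.libpng.org/pub/png/img_png/pnglogo-grr.png"]];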
Next, you need to be sure you are testing with GOOD images.
For example, this PNG works well:
http://www.libpng.org/pub/png/img_png/pnglogo-grr.png
You also need to pay attention to this cryptic comment:
/// Note: Progressive JPEG are not supported see #32
There seems to be an issue with JPEG tempImage rendering which I haven't been able to work out - maybe you can. That is the reason why your "Demo" is not working correctly, anyway.
update 2
added gist
I believe this is what you are looking for:
https://github.com/contentful-labs/Concorde
A framework for downloading and decoding progressive JPEGs on iOS and OS X, which uses libjpeg-turbo as the underlying JPEG implementation.
Try these; maybe they will be useful for you:
https://github.com/path/FastImageCache
https://github.com/rs/SDWebImage
I had the same problem, then I found something tricky. It's not a proper solution, but it works.
You have to load the low-resolution/thumbnail image first, and once it has loaded, load the actual image.
This is an example for Android; I hope you can transform it into an iOS version.
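On iOS the same two-step trick can be sketched with SDWebImage (linked above). The thumbnail URL convention here is an assumption, and the method names are from SDWebImage 3.x:
#import <SDWebImage/UIImageView+WebCache.h>

/// Load a small thumbnail first, then replace it with the full-size image.
/// The "-thumb" naming is hypothetical; use whatever your server provides.
NSURL *thumbURL = [NSURL URLWithString:@"http://example.com/photos/1234-thumb.jpg"];
NSURL *fullURL  = [NSURL URLWithString:@"http://example.com/photos/1234.jpg"];

[imageView sd_setImageWithURL:thumbURL
                    completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType, NSURL *imageURL) {
    /// The thumbnail is on screen; now fetch the full image,
    /// keeping the thumb as the placeholder while it downloads
    [imageView sd_setImageWithURL:fullURL placeholderImage:image];
}];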

Is the chunking option required with plupload and asp.net MVC?

I have seen various posts where developers have opted for the chunking option to upload files, particularly large files.
It seems that if one uses the chunking option, the files are uploaded and progressively saved to disk; is this correct? If so, it seems there needs to be a secondary operation to process the files.
If the config is set to allow large files, should plupload work without chunking up to the allowed file size for multiple files?
It seems that if one uses the chunking option, the files are uploaded and progressively saved to disk, is this correct?
If you mean "automatically saved to disk", as far as I know, it is not correct. Your MVC controller will have to handle as many requests as there are chunks, concatenate each chunk in a temp file, then rename the file after handling the last chunk.
It is handled this way in the upload.php example of plupload.
if so it seems there needs to be a secondary operation to process the files.
I'm not sure I understand this (perhaps you weren't meaning "automatically saved to disk")
If the config is set to allow large files, should plupload work without chunking up to the allowed file size for multiple files?
The answer is yes... and no. It should work, then fail with some combination of browsers / plupload runtimes when the size comes to around 100 MB. People also seem to encounter problems setting up the config.
I handle small files (~15MB) and do not have to use chunking.
I would say that if you are to handle large files, chunking is the way to go.

iOS Upload Photo in Parts

Let's say I want to preserve the full resolution of a photo on the iPhone, and then upload it to a web service for storing. Quality is critical. Unfortunately, the size of a 3200x2400 photo taken with the iPhone camera is approximately 10-12MB for a PNG, and 1-3MB for a JPG (as of my latest tests).
Here we have a dilemma. On a 3G connection, a 12MB upload is an eternity (relatively speaking, of course). So I've explored a few options, including streams/chunking and background uploading. Still, it's not ideal. I'd like the upload to be as fast as possible. See edit.
So my question is this: would it be possible to split an image into separate data chunks, upload them all concurrently using multiple asynchronous connections, and then re-assemble them server side? Does an implementation exist for this?
EDIT: So speed is capped by bandwidth as has been discussed in the comments. But there are other uses for chunking/splitting that I would like to explore. So the question still stands.
What you can do is actually split the image into several pieces, upload each piece, and then reassemble them later.
I guess a benefit of that would be getting a partial image on a failed connection, then continuing to upload the remaining pieces afterwards.
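A minimal client-side sketch of that idea (the endpoint, query parameters, and chunk size are all hypothetical; the server would concatenate the pieces by index after the last one arrives):
/// `photo` is the UIImage to upload
NSData *imageData = UIImageJPEGRepresentation(photo, 0.9f);
const NSUInteger chunkSize = 512 * 1024; /// 512 KB per piece (arbitrary)
const NSUInteger total = ([imageData length] + chunkSize - 1) / chunkSize;
NSURLSession *session = [NSURLSession sharedSession];

for (NSUInteger i = 0; i < total; i++)
{
    const NSUInteger offset = i * chunkSize;
    const NSUInteger length = MIN(chunkSize, [imageData length] - offset);
    NSData *chunk = [imageData subdataWithRange:NSMakeRange(offset, length)];

    /// Hypothetical endpoint that takes the chunk index and total count
    NSString *urlString = [NSString stringWithFormat:
        @"https://example.com/upload?photo=1234&index=%lu&total=%lu",
        (unsigned long)i, (unsigned long)total];
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:urlString]];
    request.HTTPMethod = @"PUT";

    /// Each chunk uploads on its own task, so they run concurrently
    [[session uploadTaskWithRequest:request fromData:chunk
                  completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error != nil)
            NSLog(@"Chunk %lu failed: %@", (unsigned long)i, error);
    }] resume];
}
Note that concurrency won't beat the bandwidth cap discussed in the comments, but it does help with resuming: only the failed indices need to be re-sent.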

What's the best way of saving/displaying images? (not blob vs. txt)

I'm making a gallery on my site and don't know the best solution for it. Need advice.
In my opinion, there are two ways of operating with images.
1. User uploads an image. I save it on the server only once, only at its original size. Then, whenever that image needs to be displayed on screen, I resize it to the necessary size, for example as an avatar. So I store only ONE original-sized image and resize it to ANY proper size RIGHT BEFORE displaying.
2. User uploads an image. I save it on the server at its original size and also make and save several copies (thumbnail-sized), for example avatar-sized, etc. So when the image is displayed, it is not resized every time; the proper-sized copy is simply taken.
I think that the second way is better, because there's no need to spend server resources on resizing images every time. But what if I decide to change the design of my site and the image dimensions on it change too? I'll end up with lots of images on the server that don't fit the new design.
On different forums they explain how to make galleries, and every time they say that thumbnail-sized copies are also made and saved. But it looks like that doesn't make sense if the design changes over time. Please advise. Language: PHP.
One solution that others have come up with is a mix between the two. The user uploads the photo and you save it in its original form on your server. Then, when an avatar is needed, you check whether you have the avatar saved on disk (maybe user12345_50x50.jpg, where 50x50 is width x height). If it exists, show that image. If not, use the server to resize/crop as needed, save that image to disk, and serve it to the user. This lets you request any size file and serve it on demand, taking advantage of caching those that have already been requested. [Note that this is a server-side cache, so it applies to all users.]
You sort of get the best of both worlds. You don't need to handle all of the image manipulation up front, only as needed. The first time the image is processed, that user will have to wait, but every subsequent request will get the processed file.
One implementation that uses this solution in PHP is phpthumb: http://phpthumb.sourceforge.net/
