Is the chunking option required with plupload and asp.net MVC? - asp.net-mvc

I have seen various posts where developers have opted for the chunking option to upload files, particularly large files.
It seems that if one uses the chunking option, the files are uploaded and progressively saved to disk; is this correct? If so, it seems there needs to be a secondary operation to process the files.
If the config is set to allow large files, should plupload work without chunking up to the allowed file size for multiple files?

It seems that if one uses the chunking option, the files are uploaded and progressively saved to disk, is this correct?
If you mean "automatically saved to disk", as far as I know, it is not correct. Your MVC controller will have to handle as many requests as there are chunks, concatenate each chunk in a temp file, then rename the file after handling the last chunk.
It is handled this way in the upload.php example of plupload
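For illustration, the same approach in ASP.NET MVC could look roughly like the sketch below. This is only a sketch: it assumes plupload's default chunk / chunks / name form fields, an App_Data/uploads folder, and a placeholder action name, with error handling left out.
// Inside an MVC controller; requires using System.IO and System.Web.
[HttpPost]
public ActionResult Upload(int? chunk, int? chunks, string name)
{
    // Without chunking, plupload sends the whole file in a single request.
    chunk = chunk ?? 0;
    chunks = chunks ?? 1;
    var file = Request.Files[0];
    var tempPath = Path.Combine(Server.MapPath("~/App_Data/uploads"), name + ".part");

    // Append this chunk to the temp file (create the file on the first chunk).
    using (var stream = new FileStream(tempPath, chunk == 0 ? FileMode.Create : FileMode.Append))
    {
        file.InputStream.CopyTo(stream);
    }

    // After the last chunk has been handled, rename the temp file to its final name.
    if (chunk == chunks - 1)
    {
        System.IO.File.Move(tempPath, Path.Combine(Server.MapPath("~/App_Data/uploads"), name));
    }

    return Json(new { success = true });
}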
if so it seems there needs to be a secondary operation to process the files.
I'm not sure I understand this (perhaps you didn't mean "automatically saved to disk").
If the config is set to allow large files, should plupload work without chunking up to the allowed file size for multiple files?
The answer is yes... and no. It should work, but then fail with some combinations of browsers / plupload runtimes once file sizes approach 100 MB. People also seem to run into problems setting up the config.
I handle small files (~15MB) and do not have to use chunking.
I would say that if you are to handle large files, chunking is the way to go.
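For reference, "the config" mentioned above usually means raising both ASP.NET request limits in web.config. A typical example-only setup allowing roughly 100 MB uploads looks like this (maxRequestLength is in kilobytes, maxAllowedContentLength is in bytes; the exact values are just an illustration):
<system.web>
  <httpRuntime maxRequestLength="102400" executionTimeout="3600" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="104857600" />
    </requestFiltering>
  </security>
</system.webServer>
If one limit is lower than the other, the smaller one wins, which is a common source of confusion when uploads fail only for larger files.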

Related

Zoomify .zif format bad performance

The new .zif single file format provided by Zoomify Pro seems to have some performance issues. Compared to the old file structure, it loads the page 3 to 4 times slower and sends over 50% more requests (tested with the same initial image in multiple file formats).
Using the old format is not feasible for our product and we are stuck with over a minute of load time.
Has anyone encountered this issue, and are there some workarounds? The results on the internet and the official site don't seem to be of any help.
NOTE: Contacting the vendor hasn't led to anything yet.
Although the official site claims the ZIF format can handle very large images, I'm skeptical about it because the viewer tries to do everything in JavaScript. The performance is entirely based on the client's machine. Try opening it on a faster machine and see if it improves.
Alternative solution: You could create Deep Zoom Image tiles by using VIPS library.
More information here:
https://libvips.github.io/libvips/API/current/Making-image-pyramids.md.html
Scroll further down in the article and you'll see this snippet:
With 7.40 and later, you can use --container to set the container type. Normally dzsave will write a tree of directories, but with --container zip you'll get a zip file instead. Use .zip as the directory suffix to turn on zip format automatically:
$ vips dzsave wtc.tif mypyr.zip
to write a zipfile containing the tiles.
Also, check out this tutorial:
Serve deepzoom images from a zip archive with openseadragon
https://web.archive.org/web/20170310042401/https://literarymachin.es/deepzoom-osd-server/
The community (openseadragon and vips) is much stronger over there so you'll get help when you hit a wall.
If you want to take a break from all of this and just want the images zoomable, you could use a 3rd-party service such as zoomable.ca or zoomo.ca. It's free and user friendly (upload your image and embed the viewer in your site, like Google Maps).
ZIF format designer here... ZIF can easily handle monstrous images, up to hundreds of terabytes in size.
Without a server, of course the viewer tries to do everything; it's the only option. As a result, serving ZIF directly from a webserver will not be as performant as using an image server. But... you can DO it. Using Zoomify tile folders, speed will be faster, but you may have hundreds of thousands or millions of tiles to deal with on the server side, and transfers will be horrendously slow and error-prone.
There are always trade-offs. See zif.photo for the specification.

PDF uploading malicious content vulnerability with Rails

I am implementing PDF upload using CarrierWave with Rails 4. I was asked by the client about malicious content, e.g. if someone attempts to upload a malicious file masked as a PDF. I will be restricting the filetype on the frontend to 'application/pdf'. Is there anything else I need to worry about, assuming the uploaded file has a .pdf extension?
File upload is often a security issue, since there are so many ways to get it wrong. Regarding just the issue of masking a malicious file as a PDF, checking the content type (application/pdf) is good, but not enough, since it's controlled by the client and can be modified.
Filtering on the .pdf extension is definitely advisable, but make sure you don't accept files like virus.pdf.exe.
Other filename attack techniques exist, e.g. involving null or control characters.
Consider using a file type detector to determine that the file is really a PDF document.
But that's just for restricting the file type. There are many other issues you need to be aware of when accepting file uploads.
PDF files can contain malicious code and are a common attack vector.
Make sure uploaded files are written to an appropriate directory on the server. If they aren't meant to be publicly accessible, choose a directory outside of the web root.
Restrict the maximum upload file size.
This is not a complete list by any means. Check out the Unrestricted File Upload vulnerability by OWASP for more info.
In addition to @StefanOS's great answer, PDF files are required to start with the string:
%PDF-[VERSION]
Generally, or at least often, the first couple of bytes (or more) indicate the file type - especially for executables (e.g., Windows executables, called PE files, should start - if memory serves - with "MZ").
For uploaded PDF files, opening the uploaded file and reading the first 5 bytes should always yield %PDF-.
This might be good enough verification for most use cases.
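The check itself is framework-agnostic (the question is about Rails, but the idea is the same in any language): open the uploaded file, read the first five bytes, and compare them to "%PDF-". A rough sketch, shown here in C# purely for illustration, with uploadedFilePath as a placeholder for wherever the upload landed on disk:
using (var stream = System.IO.File.OpenRead(uploadedFilePath))
{
    var header = new byte[5];
    int read = stream.Read(header, 0, header.Length);
    bool looksLikePdf = read == header.Length &&
        System.Text.Encoding.ASCII.GetString(header) == "%PDF-";
    // Reject the upload if looksLikePdf is false.
}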

Need a use case example for stream response in ChicagoBoss

The ChicagoBoss controller API has this:
{stream, Generator::function(), Acc0}
Stream a response to the client using HTTP chunked encoding. For each chunk, the Generator function is passed an accumulator (initially Acc0) and should return either {output, Data, Acc1} or done.
I am wondering what the use case for this is. There are others like json and output. When will this stream be useful?
Can someone present a use case from the real world?
Serving large files for download might be the most straightforward use case.
You could argue that there are also other ways to serve files so that users can download them, but these might have other disadvantages:
By streaming the file, you don't have to read the entire file into memory before starting to send the response to the client. For small files, you could just read the content of the file, and return it as {output, BinaryContent, CustomHeader}. But that might become tricky if you want to serve large files like disk images.
People often suggest serving downloadable files as static files (e.g. here). However, these downloads bypass all controllers, which might be an issue if you want things like download counters or access restrictions. Caching might be an issue, too.

handling large file image upload

On my ASP.NET MVC 4 site I have a feature where a user can upload a photo via a standard file uploader. The photo gets saved into a file table within SQL Server.
I have run into an issue recently where users are uploading very large photos, which in turn means bandwidth being eaten up when the image is rendered.
What is the best way to handle this? Can I restrict the size of file being uploaded? Or is there a way of reducing the number of bytes being uploaded while maintaining quality?
Refer to this post for the maxRequestLength config setting and a way to provide a friendlier error.
This question and answer may also be helpful
You can also check the size of the file in JavaScript before uploading, so that it doesn't even get sent to the server if it is too big (the code below checks for anything bigger than 15 MB):
if (Math.floor(file.size / 1024 / 1024) >= 15) {
    alert('File size is greater than maximum allowed. Please make sure that the file is smaller than 15 MegaBytes.');
    return false;
}
Alternatively, on the server side you can use WebImage.Resize() to resize once the file has been uploaded. It won't help with the bandwidth during upload, but it will make subsequent downloads a lot faster. Making an image smaller will cause some loss in quality, but generally it does a good job, just make sure that you choose the option to maintain the aspect ratio to prevent distortion.
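As a minimal sketch of that server-side resize (assuming an HttpPostedFileBase action parameter and a purely illustrative 1024px maximum width; WebImage lives in System.Web.Helpers):
[HttpPost]
public ActionResult UploadPhoto(HttpPostedFileBase photo)
{
    var image = new WebImage(photo.InputStream);

    // Only shrink images wider than 1024px; compute the height to keep the aspect ratio.
    if (image.Width > 1024)
    {
        var newHeight = (int)((double)image.Height / image.Width * 1024);
        image.Resize(1024, newHeight, preserveAspectRatio: true, preventEnlarge: true);
    }

    byte[] resizedBytes = image.GetBytes();
    // ... save resizedBytes to the file table in SQL Server instead of the original upload ...

    return RedirectToAction("Index");
}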
As for reducing the bytes before uploading, there isn't any way I know of to do this in the browser. You could provide a separate client-side application that resizes the files for them before the upload, using the WebImage.Resize method in your app.

Large File Uploads

Do large file uploads block an application's request/response cycle? I have an app that allows users to upload multiple large files (images in particular). These files are stored on a remote host. I cannot use async background jobs to upload these images, as they have to be immediately accessible to the user once the upload finishes. How best should I handle such large uploads? Does it affect concurrency? This is my first time with uploads on a large scale. What should I be wary of, other than the huge bills of course? Any input from developers who have created apps which use large file uploads will be greatly appreciated.
Why can't you use an async upload, and just handle the event that signifies that it's done? That's generally how async operations work - you kick them off and then store the pointer somewhere, and then either handle the "Complete" event, or just periodically iterate through the pointers for uploads you've started and check each one to see if it's complete.
It's an old question, but still: I was worried about the same problem with large file uploads, thinking that the processes get blocked while the file is being uploaded. It turned out, if I got it right, that nginx (and probably other servers as well) buffers the content of the file while it is being sent, so no Rails processes get blocked; a process is only occupied once the upload is finished and Rails is processing it, e.g. resizing images.
