So, I've implemented plupload using the Flash runtime in MVC3.
It works perfectly, in the sense that it uploads using the correct Action and runs it all. However, I'd really like to be able to control the response and handle it in plupload, but I can't seem to get any response through.
I've tried overriding FileUploaded, but I can't seem to get anything out of the arguments. I've tried returning simple strings, JSON and what have you. I can't seem to get anything out on the client side. And of course, being sent through Flash, I can't even debug the requests with Firebug :/
The same goes for the Error event and throwing exceptions. It correctly interprets the exception as an error, but it's always that IO Error with some code like #2038 coming out the other end. I can't show my exception string or anything at all. Can anyone help?
Bonus question: How would I send session/cookie data along with the plupload, so I can access the session in my action?
The following has worked for me:
[HttpPost]
public ActionResult Upload(int? chunk, string name)
{
    var fileUpload = Request.Files[0];
    var uploadPath = Server.MapPath("~/App_Data");
    chunk = chunk ?? 0;
    using (var fs = new FileStream(Path.Combine(uploadPath, name), chunk == 0 ? FileMode.Create : FileMode.Append))
    {
        var buffer = new byte[fileUpload.InputStream.Length];
        fileUpload.InputStream.Read(buffer, 0, buffer.Length);
        fs.Write(buffer, 0, buffer.Length);
    }
    return Json(new { message = "chunk uploaded", name = name });
}
and on the client:
$('#uploader').pluploadQueue({
    runtimes: 'html5,flash',
    url: '@Url.Action("Upload")',
    max_file_size: '5mb',
    chunk_size: '1mb',
    unique_names: true,
    multiple_queues: false,
    preinit: function (uploader) {
        uploader.bind('FileUploaded', function (up, file, data) {
            // here file will contain interesting properties like
            // id, loaded, name, percent, size, status, target_name, ...
            // data.response will contain the server response
        });
    }
});
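If you want to actually consume the JSON returned by the Upload action above, you can parse data.response inside that handler. A minimal sketch, assuming the { message, name } response shown earlier (plupload hands you the raw response body as a string):
uploader.bind('FileUploaded', function (up, file, data) {
    var result;
    try {
        // data.response is the raw body returned by the server
        result = JSON.parse(data.response);
    } catch (e) {
        result = null; // server returned something that isn't JSON
    }
    if (result) {
        console.log('Uploaded ' + result.name + ': ' + result.message);
    }
});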
As far as the bonus question is concerned, my answer would be: don't use sessions, as they don't scale well. But since you probably won't like this answer, you can pass the session id in the request using multipart_params:
multipart_params: {
    ASPSESSID: '@Session.SessionID'
},
and then on the server perform some hacks to create the proper session.
Look here:
$("#uploader").pluploadQueue({
    // General settings
    runtimes: 'silverlight',
    url: '/Home/Upload',
    max_file_size: '10mb',
    chunk_size: '1mb',
    unique_names: true,
    multiple_queues: false,
    // Resize images on clientside if we can
    resize: { width: 320, height: 240, quality: 90 },
    // Specify what files to browse for
    filters: [
        { title: "Image files", extensions: "jpg,gif,png" },
        { title: "Zip files", extensions: "zip" }
    ],
    // Silverlight settings
    silverlight_xap_url: '../../../Scripts/upload/plupload.silverlight.xap'
});
// Client side form validation
$('form').submit(function (e) {
    var uploader = $('#uploader').pluploadQueue();
    // Files in queue, upload them first
    if (uploader.files.length > 0) {
        // When all files are uploaded, submit the form
        uploader.bind('StateChanged', function () {
            if (uploader.files.length === (uploader.total.uploaded + uploader.total.failed)) {
                $('form')[0].submit();
            }
        });
        uploader.start();
    } else {
        alert('You must queue at least one file.');
    }
    return false;
});
And in the controller:
[HttpPost]
public string Upload() {
    HttpPostedFileBase FileData = Request.Files[0];
    if (FileData.ContentLength > 0) {
        var fileName = Path.GetFileName(FileData.FileName);
        var path = Path.Combine(Server.MapPath("~/Content"), fileName);
        FileData.SaveAs(path);
    }
    return "File was uploaded successfully!";
}
That's all... no chunk parameter is needed in the controller.
I am currently developing an application in MVC Core that is using a PDFTron WebViewer. Is there any way to save the PDF edited with the PDFTron WebViewer to the server?
There is a feature of PDFTron that saves annotations to the server, but I need to save the whole PDF with the edits to the server.
WebViewer({
    path: '/lib/WebViewer',
    initialDoc: '/StaticResource/Music.pdf', fullAPI: !0, enableRedaction: !0
}, document.getElementById('viewer')).then(
    function (t) {
        samplesSetup(t);
        var n = t.docViewer;
        n.on('documentLoaded', function () {
            document.getElementById('apply-redactions').onclick = function () {
                t.showWarningMessage({
                    title: 'Apply redaction?',
                    message: 'This action will permanently remove all items selected for redaction. It cannot be undone.',
                    onConfirm: function () {
                        alert();
                        t.docViewer.getAnnotationManager().applyRedactions()
                        debugger
                        var options = {
                            xfdfString: n.getAnnotationManager().exportAnnotations()
                        };
                        var doc = n.getDocument();
                        const data = doc.getFileData(options);
                        const arr = new Uint8Array(data);
                        const blob = new Blob([arr], { type: 'application/pdf' });
                        const formData = new FormData();
                        formData.append('mydoc.pdf', blob, 'mydoc.pdf');
                        // depending on the server, 'FormData' might not be required and can just send the Blob directly
                        const req = new XMLHttpRequest();
                        req.open("POST", '/DocumentRedaction/SaveFileOnServer', true);
                        req.onload = function (oEvent) {
                            // Uploaded.
                        };
                        req.send(formData);
                        return Promise.resolve();
                    },
                });
            };
        }),
        t.setToolbarGroup('toolbarGroup-Edit'),
        t.setToolMode('AnnotationCreateRedaction');
    }
);
When I send the request to the controller, I am not getting the file; it comes through as null.
[HttpPost]
public IActionResult SaveFileOnServer(IFormFile file)
{
    return Json(new { Result = "ok" });
}
Can anyone suggest where I am going wrong?
Thanks in advance.
For a JavaScript async function, you need to wait for it to complete before doing other things. For example, AnnotationManager#applyRedactions() returns a Promise, and the same goes for AnnotationManager#exportAnnotations() and Document#getFileData().
For JS async functions, you can take a look at:
https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Asynchronous/Promises
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await
So here you may want to use await to wait for each Promise to complete.
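For illustration, the onConfirm handler could be restructured along these lines. This is only a sketch (not tested against a specific WebViewer version), and it appends the blob under the field name file, which is assumed here to be what the IFormFile file parameter in SaveFileOnServer binds to:
onConfirm: async function () {
    var annotManager = n.getAnnotationManager();
    // wait until the redactions have actually been applied
    await annotManager.applyRedactions();
    // exportAnnotations() also returns a Promise
    var options = { xfdfString: await annotManager.exportAnnotations() };
    // getFileData() resolves with the raw PDF data
    var data = await n.getDocument().getFileData(options);
    var blob = new Blob([new Uint8Array(data)], { type: 'application/pdf' });
    var formData = new FormData();
    // field name matches the action's IFormFile parameter
    formData.append('file', blob, 'mydoc.pdf');
    var req = new XMLHttpRequest();
    req.open('POST', '/DocumentRedaction/SaveFileOnServer', true);
    req.send(formData);
}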
The webapp I am developing is opened inside a webview via POST. The POST body parameters (contextual user inputs) are inserted into the index.html.
So the repeat loads are failing because the contextual input is absent.
The official docs say that nothing can be done about it. They say all you can do is go network first and enable navigation preload.
(https://developers.google.com/web/tools/workbox/modules/workbox-navigation-preload: "This feature is intended to reduce navigation latency for developers who can't precache their HTML...")
Hence I am looking for a way to edit my cached index.html before it gets used: I want to insert the POST body parameters into the index.html. I am not able to find any documentation on editing the cache, so any help/input from the community would be appreciated.
Workbox !== service worker. Workbox is built on top of service worker, but raw service workers give you full control over the request and response, so you can do pretty much whatever you want.
Editing a response
Here's how you might change the text of a response:
addEventListener('fetch', event => {
    event.respondWith(async function() {
        // Get a cached response:
        const cachedResponse = await caches.match('/');
        // Get the text of the response:
        const responseText = await cachedResponse.text();
        // Change it:
        const newText = responseText.replace(/Hello/g, 'Goodbye');
        // Serve it:
        return new Response(newText, cachedResponse);
    }());
});
There's a potential performance issue here: you end up loading the full response into memory, and doing the replacement work, before you serve the first byte. With a little more effort, you can do the replacement in a streaming manner:
function streamingReplace(find, replace) {
    let buffer = '';
    return new TransformStream({
        transform(chunk, controller) {
            buffer += chunk;
            let outChunk = '';
            while (true) {
                const index = buffer.indexOf(find);
                if (index === -1) break;
                outChunk += buffer.slice(0, index) + replace;
                buffer = buffer.slice(index + find.length);
            }
            outChunk += buffer.slice(0, -(find.length - 1));
            buffer = buffer.slice(-(find.length - 1));
            controller.enqueue(outChunk);
        },
        flush(controller) {
            if (buffer) controller.enqueue(buffer);
        }
    });
}
addEventListener('fetch', event => {
    const url = new URL(event.request.url);
    if (!(url.origin === location.origin && url.pathname === '/sw-content-change/')) return;
    event.respondWith((async function() {
        const response = await fetch(event.request);
        const bodyStream = response.body
            .pipeThrough(new TextDecoderStream())
            .pipeThrough(streamingReplace('Hello', 'Goodbye'))
            .pipeThrough(new TextEncoderStream());
        return new Response(bodyStream, response);
    })());
});
Here's a live demo of the above.
Getting the POST params of a request
The other part you need is getting the POST body of the request:
addEventListener('fetch', event => {
    event.respondWith(async function() {
        if (event.request.method !== 'POST') return;
        const formData = await event.request.formData();
        // Do whatever you want with the form data…
        console.log(formData.get('foo'));
    }());
});
See the MDN page for FormData for the API.
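Putting the two together for your case, a rough sketch might read the POST parameters and splice them into the cached index.html before serving it. The path, the placeholder comment and the parameter name below are made up for illustration:
addEventListener('fetch', event => {
    const url = new URL(event.request.url);
    // only handle the POSTed navigation to the app shell
    if (event.request.method !== 'POST' || url.pathname !== '/index.html') return;
    event.respondWith(async function () {
        const formData = await event.request.formData();
        const cachedResponse = await caches.match('/index.html');
        const html = await cachedResponse.text();
        // inject the contextual input into the cached page
        const newHtml = html.replace('<!--CONTEXT-->', formData.get('context'));
        return new Response(newHtml, cachedResponse);
    }());
});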
I am implementing direct upload with Shrine, jquery.fileupload and cropper.js
In the add callback I am loading the image from the file upload into a modal, defining the cropper and showing the modal:
if (data.files && data.files[0]) {
    var reader = new FileReader();
    var $preview = $('#preview_avatar');
    reader.onload = function(e) {
        $preview.attr('src', e.target.result); // insert preview image
        $preview.cropper({
            dragMode: 'move',
            aspectRatio: 1.0 / 1.0,
            autoCropArea: 0.65,
            data: { width: 270, height: 270 }
        });
    };
    reader.readAsDataURL(data.files[0]);
    $('#crop_modal').modal('show', {
        backdrop: 'static',
        keyboard: false
    });
}
Then on the modal button click I get the cropped canvas, call toBlob on it and submit to S3:
$('#crop_button').on('click', function(){
    var options = {
        extension: data.files[0].name.match(/(\.\w+)?$/)[0], // set extension
        _: Date.now() // prevent caching
    };
    var canvas = $preview.cropper('getCroppedCanvas');
    $.getJSON('/images/cache/presign', options).
        then(function (result) {
            data.formData = result['fields'];
            data.url = result['url'];
            data.paramName = 'file';
            if (canvas.toBlob) {
                canvas.toBlob(
                    function (blob) {
                        var file = new File([blob], 'cropped_file.jpeg');
                        console.log('file', file);
                        data.files[0] = file;
                        data.originalFiles[0] = data.files[0];
                        data.submit();
                    },
                    'image/jpeg'
                );
            }
        });
});
After the upload to S3 is done I am writing the image attributes to a hidden field, closing the modal and destroying the cropper:
done: function (e, data) {
    var image = {
        id: data.formData.key.match(/cache\/(.+)/)[1], // we have to remove the prefix part
        storage: 'cache',
        metadata: {
            size: data.files[0].size,
            filename: data.files[0].name.match(/[^\/\\]*$/)[0], // IE returns full path
            // mime_type: data.files[0].type
            mime_type: 'image/jpeg'
        }
    };
    console.log('image', image);
    $('.cached-avatar').val(JSON.stringify(image));
    $('#crop_modal').modal('hide');
    $('#preview_avatar').cropper('destroy');
}
In Chrome everything worked fine from the very beginning, but then I figured out that Safari has no toBlob functionality.
I found this one:
https://github.com/blueimp/JavaScript-Canvas-to-Blob
And the "toBlob is not a function" error was gone.
Now I cannot save the image due to some MIME type related issue.
I was able to find the exact location where it fails on Safari but not Chrome:
determine_mime_type.rb, line 142:
on line 139, in options = {stdin_data: io.read(MAGIC_NUMBER), binmode: true},
the stdin_data is empty after the io.read.
Any ideas?
Thank you!
UPDATE
I was able to figure out that the URL to the cached image returned by
$.getJSON('/images/cache/presign', options)
points to an empty file when the image is cropped and uploaded from Safari.
So, as I mentioned in the question, Safari uploaded an empty file once the image was cropped by cropper.js.
The problem clearly originated from this block:
if (canvas.toBlob) {
    canvas.toBlob(
        function (blob) {
            var file = new File([blob], 'cropped_file.jpeg');
            console.log('file', file);
            data.files[0] = file;
            data.originalFiles[0] = data.files[0];
            data.submit();
        },
        'image/jpeg'
    );
}
I found in a comment on one of the articles I read that Safari does something like file.toString, which in my case resulted in an empty file upload.
I appended the blob directly without creating a file from it first and everything worked fine.
if (canvas.toBlob) {
    canvas.toBlob(
        function (blob) {
            data.files[0] = blob;
            data.files[0].name = 'cropped_file.jpeg';
            data.files[0].type = 'image/jpeg';
            data.originalFiles[0] = data.files[0];
            data.submit();
        },
        'image/jpeg'
    );
}
As of now (Dojo 1.9.2) I haven't been able to find a Dojo autocomplete widget that would satisfy all of the following (typical) requirements:
Only executes a query to the server when a predefined number of characters have been entered (without this, big datasets should not be queried)
Does not require a full REST service on the server, only a URL which can be parametrized with a search term and simply returns JSON objects containing an ID and a label to display (so the data-query to the database can be limited just to the required data fields, not loading full data-entities and use only one field thereafter)
Has a configurable time-delay between the key-releases and the start of the server-query (without this excessive number of queries are fired against the server)
Capable of recognizing when there is no need for a new server-query (since the previously executed query is more generic than the current one would be).
Dropdown-style (has GUI elements indicating that this is a selector field)
I have created a draft solution (see below); please advise if you have a simpler, better solution to the above requirements with Dojo > 1.9.
The AutoComplete widget as a Dojo AMD module (placed into /gefc/dijit/AutoComplete.js according to AMD rules):
//
// AutoComplete style widget which works together with an ItemFileReadStore
//
// It will re-query the server whenever necessary.
//
define([
    "dojo/_base/declare",
    "dijit/form/FilteringSelect"
],
function(declare, _FilteringSelect) {
    return declare(
        [_FilteringSelect], {
            // minimum number of input characters to trigger search
            minKeyCount: 2,
            // the term for which we have queried the server for the last time
            lastServerQueryTerm: null,
            // The query URL which will be set on the store when a server query
            // is needed
            queryURL: null,
            //------------------------------------------------------------------------
            postCreate: function() {
                this.inherited(arguments);
                // Setting defaults
                if (this.searchDelay == null)
                    this.searchDelay = 500;
                if (this.searchAttr == null)
                    this.searchAttr = "label";
                if (this.autoComplete == null)
                    this.autoComplete = true;
                if (this.minKeyCount == null)
                    this.minKeyCount = 2;
            },
            escapeRegExp: function (str) {
                return str.replace(/[\-\[\]\/\{\}\(\)\*\+\?\.\\\^\$\|]/g, "\\$&");
            },
            replaceAll: function (find, replace, str) {
                return str.replace(new RegExp(this.escapeRegExp(find), 'g'), replace);
            },
            startsWith: function (longStr, shortStr) {
                return (longStr.match("^" + shortStr) == shortStr);
            },
            // override search method, count the input length
            _startSearch: function (/*String*/ key) {
                // If there is not enough text entered, we won't start querying
                if (!key || key.length < this.minKeyCount) {
                    this.closeDropDown();
                    return;
                }
                // Deciding if the server needs to be queried
                var serverQueryNeeded = false;
                if (this.lastServerQueryTerm == null)
                    serverQueryNeeded = true;
                else if (!this.startsWith(key, this.lastServerQueryTerm)) {
                    // the key does not start with the server query term
                    serverQueryNeeded = true;
                }
                if (serverQueryNeeded) {
                    // Creating a query url templated with the autocomplete term
                    var url = this.replaceAll('${autoCompleteTerm}', key, this.queryURL);
                    this.store.url = url;
                    // We need to close the store in order to allow the FilteringSelect
                    // to re-open it with the new query term
                    this.store.close();
                    this.lastServerQueryTerm = key;
                }
                // Calling the super start search
                this.inherited(arguments);
            }
        }
    );
});
Notes:
I included some string functions to make it standalone; these should go to their proper places in your JS library.
The JavaScript embedded into the page which uses the AutoComplete widget:
require([
    "dojo/ready",
    "dojo/data/ItemFileReadStore",
    "gefc/dijit/AutoComplete",
    "dojo/parser"
],
function(ready, ItemFileReadStore, AutoComplete) {
    ready(function() {
        // The initially displayed data (current value, possibly null)
        // This makes it possible that the widget does not fire a query against
        // the server immediately after initialization for getting a label for
        // its current value
        var dt = null;
        <g:if test="${tenantInstance.technicalContact != null}">
        dt = {identifier: "id", items: [
            {id: "${tenantInstance.technicalContact.id}",
             label: "${tenantInstance.technicalContact.name}"
            }
        ]};
        </g:if>
        // If there is no current value, this will have no data
        var partnerStore = new ItemFileReadStore(
            { data: dt,
              urlPreventCache: true,
              clearOnClose: true
            }
        );
        var partnerSelect = new AutoComplete({
            id: "technicalContactAC",
            name: "technicalContact.id",
            value: "${tenantInstance?.technicalContact?.id}",
            displayValue: "${tenantInstance?.technicalContact?.name}",
            queryURL: '<g:createLink controller="partner"
                                     action="listForAutoComplete"
                                     absolute="true"/>?term=\$\{autoCompleteTerm\}',
            store: partnerStore,
            searchAttr: "label",
            autoComplete: true
        },
        "technicalContactAC"
        );
    });
});
Notes:
This is not standalone JavaScript, but generated with Grails on the server side, thus you see <g:if... and other server-side markup in the code. Replace those sections with your own markup.
<g:createLink will result in something like this after server-side page generation: /Limes/partner/listForAutoComplete?term=${autoCompleteTerm}
As of dojo 1.9, I would start by recommending that you replace your ItemFileReadStore with a store from the dojo/store package.
Then, I think dijit/form/FilteringSelect already has the features you need.
Given your requirement to avoid a server round-trip at the initial page startup, I would set up 2 different stores:
a dojo/store/Memory that would handle your initial data.
a dojo/store/JsonRest that queries your controller on subsequent requests.
Then, to avoid querying the server at each keystroke, set the FilteringSelect's intermediateChanges property to false, and implement your logic in the onChange extension point.
For the requirement of triggering the server call after a delay, implement that in the onChange as well. In the following example I did a simple setTimeout, but you should consider writing a better debounce method. See this blog post and the utility functions of dgrid.
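For instance, a minimal debounce helper might look like this (just a sketch; dgrid and various utility libraries ship more robust versions):
function debounce(fn, delay) {
    var timer = null;
    return function () {
        var context = this, args = arguments;
        // restart the timer on every call; fn only runs after `delay` ms of quiet
        clearTimeout(timer);
        timer = setTimeout(function () {
            fn.apply(context, args);
        }, delay);
    };
}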
I would do this in your GSP page:
require(["dojo/store/Memory", "dojo/store/JsonRest", "dijit/form/FilteringSelect", "dojo/_base/lang"],
    function(Memory, JsonRest, FilteringSelect, lang) {
        var initialPartnerStore = undefined;
        <g:if test="${tenantInstance.technicalContact != null}">
        var dt = {identifier: "id", items: [
            {id: "${tenantInstance.technicalContact.id}",
             label: "${tenantInstance.technicalContact.name}"
            }
        ]};
        initialPartnerStore = new Memory({
            data: dt
        });
        </g:if>
        var partnerStore = new JsonRest({
            target: '<g:createLink controller="partner" action="listForAutoComplete" absolute="true"/>'
        });
        var queryDelay = 500;
        var select = new FilteringSelect({
            id: "technicalContactAC",
            name: "technicalContact.id",
            value: "${tenantInstance?.technicalContact?.id}",
            displayValue: "${tenantInstance?.technicalContact?.name}",
            store: initialPartnerStore ? initialPartnerStore : partnerStore,
            query: { term: ${autoCompleteTerm} },
            searchAttr: "label",
            autoComplete: true,
            intermediateChanges: false,
            onChange: function(newValue) {
                // Change to the JsonRest store to query the server
                if (this.store !== partnerStore) {
                    this.set("store", partnerStore);
                }
                // Only query after your desired delay
                setTimeout(lang.hitch(this, function() {
                    this.set('query', { term: newValue });
                }), queryDelay);
            }
        }).startup();
    });
This code is untested, but you get the idea...
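With a debounce helper like the one sketched above, the onChange could then be written as (equally untested):
onChange: debounce(function (newValue) {
    // switch to the JsonRest store, then fire the delayed query
    if (this.store !== partnerStore) {
        this.set("store", partnerStore);
    }
    this.set('query', { term: newValue });
}, queryDelay)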
I wrote an MVC action that runs a utility with input parameters and writes the utility's output to the response. Here is the full method:
var jobID = Guid.NewGuid();
// save the file to disk so the CMD line util can access it
var inputfilePath = Path.Combine(@"c:\", String.Format("input_{0:n}.json", jobID));
var outputfilePath = Path.Combine(@"c:\", String.Format("output{0:n}.json", jobID));
using (var inputFile = System.IO.File.CreateText(inputfilePath))
{
    inputFile.Write(i_JsonInput);
}
var psi = new ProcessStartInfo(@"C:\Code\FoxConcept\FoxConcept\test.cmd", String.Format("{0} {1}", inputfilePath, outputfilePath))
{
    WorkingDirectory = Environment.CurrentDirectory,
    UseShellExecute = false,
    RedirectStandardOutput = true,
    RedirectStandardError = true,
    CreateNoWindow = true
};
using (var process = new Process { StartInfo = psi })
{
    // delegate for writing the process output to the response output
    Action<Object, DataReceivedEventArgs> dataReceived = ((sender, e) =>
    {
        if (e.Data != null) // sometimes a random event is received with null data, not sure why - I prefer to leave it out
        {
            Response.Write(e.Data);
            Response.Write(Environment.NewLine);
            Response.Flush();
        }
    });
    process.OutputDataReceived += new DataReceivedEventHandler(dataReceived);
    process.ErrorDataReceived += new DataReceivedEventHandler(dataReceived);
    // use text/plain so line breaks and any other whitespace formatting is preserved
    Response.ContentType = "text/plain";
    // start the process and start reading the standard and error outputs
    process.Start();
    process.BeginErrorReadLine();
    process.BeginOutputReadLine();
    // wait for the process to exit
    process.WaitForExit();
    // an exit code other than 0 generally means an error
    if (process.ExitCode != 0)
    {
        Response.StatusCode = 500;
    }
}
Response.End();
The utility takes around a minute to complete and it displays relevant information along the way.
Is it possible to display the information as it runs in the user's browser?
I hope this link will help: Asynchronous processing in ASP.Net MVC with Ajax progress bar
You can call the controller's action method and get the status of the process.
Controller code:
/// <summary>
/// Starts the long running process.
/// </summary>
/// <param name="id">The id.</param>
public void StartLongRunningProcess(string id)
{
    longRunningClass.Add(id);
    ProcessTask processTask = new ProcessTask(longRunningClass.ProcessLongRunningAction);
    processTask.BeginInvoke(id, new AsyncCallback(EndLongRunningProcess), processTask);
}
jQuery Code:
$(document).ready(function() {
    $('#startProcess').click(function(event) {
        $.post("Home/StartLongRunningProcess", { id: uniqueId }, function() {
            $('#statusBorder').show();
            getStatus();
        });
        event.preventDefault();
    });
});
function getStatus() {
    var url = 'Home/GetCurrentProgress/' + uniqueId;
    $.get(url, function(data) {
        if (data != "100") {
            $('#status').html(data);
            $('#statusFill').width(data);
            window.setTimeout(getStatus, 100);
        }
        else {
            $('#status').html("Done");
            $('#statusBorder').hide();
            alert("The Long process has finished");
        }
    });
}
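As a side note, since the action in the question already writes text/plain output and flushes it as the process runs, the browser can also read that streamed response incrementally instead of polling. A minimal sketch using the fetch streaming API (the action URL and the output element id are placeholders):
fetch('/Home/RunUtility', { method: 'POST' })
    .then(function (response) {
        var reader = response.body.getReader();
        var decoder = new TextDecoder();
        function pump() {
            return reader.read().then(function (result) {
                if (result.done) return;
                // append each chunk of the utility's output as it arrives
                document.getElementById('output').textContent += decoder.decode(result.value, { stream: true });
                return pump();
            });
        }
        return pump();
    });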