I have a controller that needs to redirect after receiving a file. I have saved the file successfully on the server side. Now, the only thing that is bogging me down is: how do I redirect to another site while sending along the uploaded file that was saved on the server? Any tips? I am desperate.
OK, so here it is. First I save the file on server B:
file.SaveAs(Server.MapPath("~/ImageCache/") + file.FileName);
WebClient client = new WebClient();
Then I do the post:
byte[] data;
client.Headers.Set(HttpRequestHeader.ContentType, "image/jpeg");
data = client.UploadFile("http://hostA.com/Search/", "POST", Server.MapPath("~/ImageCache/") + file.FileName);
return Redirect( WHAT DO I WRITE HERE??);
I need to end up at the page that the other service shows once it has received the file.
How are you uploading the file? If this is the usual case of an <input type="file" />, you can just return Redirect("new url"); within your action.
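For the simple case, a minimal sketch (the action name and target URL are illustrative, reusing the save call from the question):

[HttpPost]
public ActionResult Upload(HttpPostedFileBase file)
{
    // Save locally, then send the browser elsewhere.
    file.SaveAs(Server.MapPath("~/ImageCache/") + file.FileName);
    return Redirect("http://hostA.com/Search/"); // assumed target URL
}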
Edit:
If you want to relay this to another web service, you don't need to redirect the upload itself. There should be some sort of upload method defined in the web service (knowing what type of web service it is would help), and you should be able to call it like you would any other web service method, probably passing the file contents as a byte[] parameter.
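For example, staying with the WebClient call from the question, one hedged sketch (the response format and the fallback URL are assumptions about hostA's service, not known behavior) is to read the service's response body and redirect based on it:

// UploadFile returns the response body; many upload services answer
// with the URL of a results page (an assumption here, not a given).
string path = Server.MapPath("~/ImageCache/") + file.FileName;

using (WebClient client = new WebClient())
{
    client.Headers.Set(HttpRequestHeader.ContentType, "image/jpeg");
    byte[] responseBytes = client.UploadFile("http://hostA.com/Search/", "POST", path);
    string responseBody = Encoding.UTF8.GetString(responseBytes);

    // If hostA returns the URL of the page that shows the uploaded file,
    // redirect straight to it; otherwise fall back to a fixed page.
    return Redirect(!string.IsNullOrEmpty(responseBody)
        ? responseBody.Trim()
        : "http://hostA.com/Search/"); // assumed fallback URL
}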
We currently have a generic MVC method that GETs data from ASP.NET Web API:
public static T Get<T>(string apiURI, object p)
{
    using (HttpClient client = new HttpClient())
    {
        client.BaseAddress = new Uri(Config.API_BaseSite);
        HttpResponseMessage response = client.GetAsync(apiURI).Result;

        // Check that the response was successful or throw an exception
        if (response.IsSuccessStatusCode == false)
        {
            string responseBody = response.Content.ReadAsStringAsync().Result;
            throw new HttpException((int)response.StatusCode, responseBody);
        }

        return response.Content.ReadAsAsync<T>().Result;
    }
}
Our question is: obviously, we cannot send 'p' as you would with a POST,
client.PostAsync(apiURI, new StringContent(p.ToString(), Encoding.UTF8, "application/json"))
but how do we go about sending this object / JSON with a GET?
We have seen sending it as part of the URL, but is there an alternative?
GET sends its values in the query string (the end of the URL). In regard to "how do we go about sending this object / JSON with a GET? We have seen sending it as part of the URL, but is there an alternative?":
The alternative is POST or PUT.
PUT is best used when the user creates the key/URL. You can look at examples such as cnn.com, where the URLs are just short versions of the article title: you want to PUT a page at that URL.
Example:
http://newday.blogs.cnn.com/2014/03/19/five-things-to-know-for-your-new-day-wednesday-march-19-2014/?hpt=hp_t2
has the URL of "five-things-to-know-for-your-new-day-wednesday-march-19-2014", which was generated from the article title "Five Things to Know for Your New Day – Wednesday, March 19, 2014".
In general, you should follow these guidelines:
Use GET when you want to fetch data from the server. Think of search engines: you can see your search query in the query string, and you can bookmark it. It doesn't change anything on the server at all.
Use POST when you want to create a resource.
Use PUT when you want to create resources, but note that it also overwrites them. If you PUT an object twice, the server's state is only changed once; the opposite is true for POST.
Use DELETE when you want to delete stuff.
Neither POST nor PUT uses the query string; GET does.
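If you do keep GET, the values have to travel in the query string. A minimal sketch of wiring that into the Get<T> helper above (the ToQueryString helper is my own illustration, not an established API; it needs System.Linq and System.Web):

// Hypothetical helper: serialize an object's public properties into a
// URL-encoded query string, e.g. new { id = 5, q = "a b" } -> "id=5&q=a+b".
static string ToQueryString(object p)
{
    var pairs = p.GetType().GetProperties()
        .Select(prop => HttpUtility.UrlEncode(prop.Name) + "=" +
                        HttpUtility.UrlEncode(Convert.ToString(prop.GetValue(p, null))));
    return string.Join("&", pairs);
}

// Inside Get<T>, instead of ignoring 'p':
// HttpResponseMessage response = client.GetAsync(apiURI + "?" + ToQueryString(p)).Result;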
Following the tutorial found on ASP.NET, I implemented a Web API controller method for doing asynchronous file uploads that looks like this:
public Task<HttpResponseMessage> PostFormData()
{
    // Check if the request contains multipart/form-data.
    if (!Request.Content.IsMimeMultipartContent())
    {
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
    }

    string root = HttpContext.Current.Server.MapPath("~/App_Data");
    var provider = new MultipartFormDataStreamProvider(root);

    // Read the form data and return an async task.
    var task = Request.Content.ReadAsMultipartAsync(provider).
        ContinueWith<HttpResponseMessage>(t =>
        {
            if (t.IsFaulted || t.IsCanceled)
            {
                return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, t.Exception);
            }
            return Request.CreateResponse(HttpStatusCode.OK);
        });

    return task;
}
Uploading a file via a standard multipart HTML form works perfectly. However, when another developer attempts to upload a file via a multipart form constructed by Flex's FileReference class, an error is thrown:
Unexpected end of MIME multipart stream. MIME multipart message is not complete.
I have no idea whether the problem lies in Web API or Flex. I've found some sort-of-related fixes that had no effect (Multipart form POST using ASP.Net Web API), and more recently this one ("MIME multipart stream. MIME multipart message is not complete" error on webapi upload). If the second link holds true, does anyone know if the fix is out in the current release of Web API available via NuGet? The discussion was in May and the most recent NuGet release was in August, so I assume the fix has already been deployed and is not the root cause of my issue.
I had the same problem with MVC4, but Will is correct: add a name to your input...
<input type="file" id="fileInput" name="fileInput"/>
and all the magic is back up and working!
I had the same problem with Flex. Below is the code that solved it. Basically, I used a custom stream to append the newline that ASP.NET Web API is expecting.
// Buffer the request body into a seekable stream.
Stream reqStream = Request.Content.ReadAsStreamAsync().Result;
MemoryStream tempStream = new MemoryStream();
reqStream.CopyTo(tempStream);

// Append the terminating newline that the Flex upload leaves out.
tempStream.Seek(0, SeekOrigin.End);
StreamWriter writer = new StreamWriter(tempStream);
writer.WriteLine();
writer.Flush();
tempStream.Position = 0;

// Wrap the patched stream in new content, preserving the original headers.
StreamContent streamContent = new StreamContent(tempStream);
foreach (var header in Request.Content.Headers)
{
    streamContent.Headers.Add(header.Key, header.Value);
}

// Read the form data and return an async task.
await streamContent.ReadAsMultipartAsync(provider);
Hope this helps.
Reading through your existing research and following through to the CodePlex issue you reported, it looks like someone else confirmed this issue still existed in September.
They believe that MVC 4 fails to parse uploads without a terminating "\r\n".
The issue is really simple but extremely hard to fix. The problem is that Uploadify does not add an "\r\n" at the end of the MultiPartForm message
http://aspnetwebstack.codeplex.com/discussions/354215
It may be worth checking that the Flex upload adds the "\r\n".
For those landing here googling:
Unexpected end of MIME multipart stream. MIME multipart message is not complete.
Reading the request stream more than once will also cause this exception. I struggled with it for hours until I found a source explaining that the request stream can only be read once.
In my case, I had combined reading the request stream myself using a MultipartMemoryStreamProvider with, at the same time, letting ASP.NET do some magic for me by specifying parameters (coming from the request body) for my API method.
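To make the pitfall concrete, here is a minimal sketch (the action and the UploadOptions model are hypothetical reconstructions of my situation, not code from the question):

// BROKEN: 'options' is bound from the request body, so ASP.NET has
// already consumed the stream before ReadAsMultipartAsync runs, which
// surfaces as "Unexpected end of MIME multipart stream".
public async Task<HttpResponseMessage> Upload(UploadOptions options) // UploadOptions is hypothetical
{
    var provider = new MultipartMemoryStreamProvider();
    await Request.Content.ReadAsMultipartAsync(provider); // throws
    return Request.CreateResponse(HttpStatusCode.OK);
}

// WORKS: no body-bound parameters; read everything from the provider.
public async Task<HttpResponseMessage> Upload()
{
    var provider = new MultipartMemoryStreamProvider();
    await Request.Content.ReadAsMultipartAsync(provider);
    return Request.CreateResponse(HttpStatusCode.OK);
}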
Make sure the virtual directory (the "~/App_Data" directory in the example below) where the image files are first uploaded physically exists. When you publish the project, it may not be included in the output files.
string root = HttpContext.Current.Server.MapPath("~/App_Data");
var provider = new MultipartFormDataStreamProvider(root);
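One defensive option (my suggestion, not part of the tutorial code) is to create the folder before handing it to the provider:

string root = HttpContext.Current.Server.MapPath("~/App_Data");
Directory.CreateDirectory(root); // no-op if the folder already exists (System.IO)
var provider = new MultipartFormDataStreamProvider(root);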
I just removed the headers I was setting on my POST method, which ended up solving this issue.
The problem is this line:
string root = HttpContext.Current.Server.MapPath("~/App_Data");
It will only work on localhost. You can use HostingEnvironment.MapPath instead in any context where System.Web objects like HttpContext.Current are not available (e.g. also from a static method):
var mappedPath = System.Web.Hosting.HostingEnvironment.MapPath("~/SomePath");
See also What is the difference between Server.MapPath and HostingEnvironment.MapPath?
See also this answer: How to do a Server Map Path.
I would like to convert a HTML + CSS page to a PDF file.
I have tried wkhtmltopdf, but I have a problem because the page I want to convert requires authentication on the website.
The page I would like to convert to PDF has the following URL : http://[WEBSITE]/PDFReport/33
If I try to access it without being authenticated, I'm redirected to the login page.
So when I use wkhtmltopdf, it converts my login page to PDF...
The authentication method I use in my ASP.NET MVC application is SimpleMembership:
[Authorize]
public ActionResult PDFReport(string id)
{
}
I am executing wkhtmltopdf.exe with System.Diagnostics.Process:
FileInfo tempFile = new FileInfo(Request.PhysicalApplicationPath + "\\bin\\test.pdf");

StringBuilder argument = new StringBuilder();
argument.Append(" --disable-smart-shrinking");
argument.Append(" --no-pdf-compression");
argument.Append(" " + "http://[WEBSITE]/PDFReport/33");
argument.Append(" " + tempFile.FullName);

// Call the exe to convert
using (Process p = new System.Diagnostics.Process())
{
    p.StartInfo.UseShellExecute = false;
    p.StartInfo.CreateNoWindow = true;
    p.StartInfo.FileName = Request.PhysicalApplicationPath + "\\bin\\wkhtmltopdf.exe";
    p.StartInfo.Arguments = argument.ToString();
    p.StartInfo.RedirectStandardOutput = true;
    p.StartInfo.RedirectStandardError = true;
    p.Start();

    // Drain the redirected streams before WaitForExit, otherwise the
    // child process can deadlock once an output buffer fills up.
    string stdout = p.StandardOutput.ReadToEnd();
    string stderr = p.StandardError.ReadToEnd();
    p.WaitForExit();
}
Do you know how to generate the PDF without disabling security on this page?
I had a good deal of trouble with this recently. In a nutshell, WKHTMLTOPDF is a version of WebKit (QtWebKit, I believe they call it), so when you request a password-protected page, that browser needs to log in and store/reference a cookie the same as you would normally.
The raw call would look something like this:
/path/wkhtmltopdf --cookie-jar my.jar --username myusername --password mypassword URL
Where:
my.jar is the cookie jar file that gets created and holds your cookie values
username is the name of the username form field and myusername is the post value
password is the name of the password form field and mypassword is the post value
URL is the URL of the log in page
Be sure to include any other post fields required to successfully log in - you'll probably want to monitor your HTTP headers, not just look at the form. Call WKHTMLTOPDF again on the page you're looking to capture with your normal parameters, including the --cookie-jar my.jar to maintain the session. That should do it!
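The second call, reusing the cookie jar to fetch the protected page, would then be along these lines (the URL and output name are placeholders):

/path/wkhtmltopdf --cookie-jar my.jar http://[WEBSITE]/PDFReport/33 report.pdf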
However, I still had problems on my end, but it was a fairly robust login (multiple cookies, secure, many parameters, etc.). I was working with PHP and had better luck using cURL. I'm not sure how that carries over to ASP.NET (maybe this? http://support.microsoft.com/kb/303436), but here's my logic if it helps:
Log in via CURL
Grab HTML page and store in local temporary file
Replace all relative references to images and files to absolute references (or insert a base tag)
Run plain 'ol WKHTMLTOPDF on the temporary file
Delete temporary file
All in all it was a hell of a lot easier to do it this way, and it feels better to me knowing I'm leaning on tried and true code rather than parameters in version 0.10 of WKHTMLTOPDF.
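In ASP.NET terms, the same flow could be sketched with HttpClient instead of cURL. This is a rough sketch only: the login URL, the form field names, and the wkhtmltopdfPath/outputPdfPath variables are all assumptions about your site, not a known recipe (namespaces: System.Net, System.Net.Http, System.IO, System.Diagnostics).

// 1. Log in; the handler stores the session cookie.
var cookies = new CookieContainer();
var handler = new HttpClientHandler { CookieContainer = cookies };
using (var client = new HttpClient(handler))
{
    var loginFields = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        { "UserName", "myusername" },  // assumed SimpleMembership field names
        { "Password", "mypassword" }
    });
    client.PostAsync("http://[WEBSITE]/Account/Login", loginFields).Wait();

    // 2. Grab the protected HTML and store it in a temporary file.
    string html = client.GetStringAsync("http://[WEBSITE]/PDFReport/33").Result;

    // 3. Fix up relative references or insert a <base> tag here if needed.
    string tempHtml = Path.Combine(Path.GetTempPath(), "report.html");
    File.WriteAllText(tempHtml, html);

    // 4. Run plain wkhtmltopdf on the temporary file, then 5. delete it.
    Process.Start(wkhtmltopdfPath, "\"" + tempHtml + "\" \"" + outputPdfPath + "\"").WaitForExit();
    File.Delete(tempHtml);
}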
I am using the Google Data API for .NET (version 1.9) in my application.
I have created a Google Apps account and I have set the "Users cannot share documents outside this organization" setting under Google Docs.
When I try to share a file outside of the domain (organization) from the Google Docs web UI, I get an error saying the file cannot be shared outside of my domain.
But when I try the same thing from the API, it succeeds: I get a 200 success from the API. When I try to access the file from the share link, it says 'You need permission to access this resource'. My question is: shouldn't the API return an error? How can I handle this case?
Here is the code that I am using:
DocumentsRequest request = null;
/* request initialization */

string csBatchReqBody =
    "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" +
    "<feed xmlns=\"http://www.w3.org/2005/Atom\" xmlns:gAcl=\"http://schemas.google.com/acl/2007\" xmlns:batch=\"http://schemas.google.com/gdata/batch\">" +
    "<category scheme=\"http://schemas.google.com/g/2005#kind\" term=\"http://schemas.google.com/acl/2007#accessRule\"/>" +
    "<entry><id>https://docs.google.com/feeds/default/private/full/document:1DsELtiNwq-ogOrp8cAONdMpGR4gBF79PjijTae-vVNg/acl/user:myusername#mydomain.com</id><batch:operation type=\"query\"/></entry>" +
    "<entry><batch:id>1</batch:id><batch:operation type=\"insert\"/><gAcl:role value=\"reader\"/><gAcl:scope type=\"user\" value=\"myusername#gmail.com\"/></entry>" +
    "</feed>";

string Url = "https://docs.google.com/feeds/default/private/full/document:1DsELtiNwq-ogOrp8cAONdMpGR4gBF79PjijTae-vVNg/acl/batch";
byte[] byteArray = Encoding.ASCII.GetBytes(csBatchReqBody);
MemoryStream inputStream = new MemoryStream(byteArray);
AtomEntry reply = request.Service.Insert(new Uri(Url), inputStream, "application/atom+xml", "");
MemoryStream stream = new MemoryStream();
reply.SaveToXml(stream);
The API actually returns a 400 if you try to share a file outside the domain and the admins have set the "Users cannot share documents outside this organization" flag.
Your code sends a batch request (even if only for a single element), so you'd have to check the batch response to notice the error.
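As a rough sketch of that check (the batch:status element and its code/reason attributes come from the GData batch protocol; the parsing code itself is my assumption, reusing the stream your code already writes):

// Hypothetical check: scan the batch response for <batch:status code="..."/>
// entries and surface anything that is not a 2xx code.
stream.Position = 0;
XmlDocument doc = new XmlDocument();
doc.Load(stream);

XmlNamespaceManager ns = new XmlNamespaceManager(doc.NameTable);
ns.AddNamespace("batch", "http://schemas.google.com/gdata/batch");

foreach (XmlNode status in doc.SelectNodes("//batch:status", ns))
{
    int code = int.Parse(status.Attributes["code"].Value);
    if (code < 200 || code >= 300)
        throw new ApplicationException("Batch entry failed: " + status.Attributes["reason"].Value);
}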
Instead, use the following code to share a document to a single user, it assumes that entry is the DocumentEntry you want to share:
AclEntry acl = new AclEntry();
acl.Scope = new AclScope("username#gmail.com", "user");
acl.Role = new AclRole("reader");
acl = service.Insert(new Uri(entry.AccessControlList), acl);
I have a need to store files on Amazon AWS S3, but in order to isolate the user from the AWS authentication I want to go via an ASP page on my site, which the user will be logged into. So:
The application sends the file to the ASP page using the Delphi Indy library's TIdHTTP.Put(FileStream) routine, along with some authentication stuff (mine, not AWS) on the query string.
The ASP page checks the auth details and then, if OK, stores the file on S3 using my Amazon account.
Problem I have is: how do I access the data coming in from the Indy PUT using JScript in the ASP page, and how do I pass it on to S3? I'm OK with AWS signing, etc.; it's just the nuts and bolts of connecting the two bits (the incoming request and the outgoing AWS request)...
TIA
R
An HTTP PUT will store the file at the location given in the HTTP header; it "requests that the enclosed entity be stored under the supplied Request-URI".
The disadvantage with the PUT method is that if you are on a shared hosting environment it may not be available to you.
So if the web server supports PUT, the file should be available at the given location in the (virtual) file system. The PUT request will be handled by the server itself, not by ASP:
"In the case of PUT, the web server handles the request itself: there is no room for a CGI or ASP application to step in. The only way for your application to capture a PUT is to operate on the low-level, ISAPI filter level."
http://www.15seconds.com/issue/981120.htm
Are you sure you need PUT and cannot use a POST, which will send the file to a URL where your ASP script can read it from the request stream?
OK, I've got a bit further with this. Code at the ASP end is:
var PostedDataSize = Request.TotalBytes ;
var PostedData = Request.BinaryRead (PostedDataSize) ;

var PostedDataStream = Server.CreateObject ("ADODB.Stream") ;
PostedDataStream.Open () ;
PostedDataStream.Type = 1 ; // binary
PostedDataStream.Write (PostedData) ;
Response.Write ("PostedDataStream.Size = " + PostedDataStream.Size + "<br>") ;

var XML = AmazonAWSPUTRequest (BucketName, AWSDestinationFileID, PostedDataStream) ;
.....
function AmazonAWSPUTRequest (Bucket, Filename, InputStream)
{
    ....
    XMLHttp.open ("PUT", URL + FRequest, false) ;
    XMLHttp.setRequestHeader (....
    XMLHttp.setRequestHeader (....
    ...
    Response.Write ("InputStream.Size = " + InputStream.Size + "<br>") ;
    XMLHttp.send (InputStream) ;
So I use BinaryRead and write it to a binary stream. If I write out the size of the stream, I get the size of the file I POSTed from my application, so I reckon the data is in there somewhere. I then call a routine (with the stream as a parameter) which sets up the AWS authentication/signing and does a PUT.
The AWS call returns no errors, and a file of the correct name is created in the right place, but it has a size of zero! InputStream.Size has the same value as the stream parameter passed into the routine, i.e. the size of the original file.
Any ideas?
POSTSCRIPT. Found the problem. It's caught me a few times with streams, this one: when you write data to a stream, don't forget to reset the stream position back to zero before trying to read from it again. I.e. just before the line:
XMLHttp.send (InputStream) ;
I needed to add:
InputStream.Position = 0 ;
My thanks for the interest and suggestions.