I have been using this code for years to force an .xlsx file to download to the user's computer. I now need to modify it to save the file to the server instead, but I am not having any luck...
This is my code; I added the last two lines.
I have tried commenting out everything under the "Redirect output to a client's web browser (Excel2007)" comment, and rewriting it in many different combinations, with no luck.
// Redirect output to a client’s web browser (Excel2007)
header('Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
header('Content-Disposition: attachment;filename="rep_balance-'.date("F j, Y").'.xlsx"');
// If you're serving to IE 9, then the following may be needed
header('Cache-Control: max-age=1');
// If you're serving to IE over SSL, then the following may be needed
header('Expires: Mon, 26 Jul 1997 05:00:00 GMT'); // Date in the past
header('Last-Modified: ' . gmdate('D, d M Y H:i:s') . ' GMT'); // always modified
header('Cache-Control: cache, must-revalidate'); // HTTP/1.1
header('Pragma: public'); // HTTP/1.0
$writer = new Xlsx($objPHPExcel);
$writer->save('php://output');
$path = '/home/paththewebsite/reps/rep_balance-'.date("F j, Y").'.xlsx';
$writer->save($path);
Just remove the header() lines, since those are not required when you are not serving the file to the end user, and remove the $writer->save('php://output'); line.
Your code should then be:
$writer = new Xlsx($objPHPExcel);
$path = '/home/paththewebsite/reps/rep_balance-'.date("F j, Y").'.xlsx';
$writer->save($path);
Starting January 25, 2021 at about 20:00 UTC, the export links that are returned from creating files stopped working.
Before that time we were able to reliably POST a new file and get back something like
...
"exportLinks": {
"application/pdf": "https://docs.google.com/feeds/download/documents/export/Export?id=blahblahblah&exportFormat=pdf"
}
...
and then use that URL to download the document. We'd get response codes of 302 or 307. However, starting at 2021-01-25 20:00 UTC, we began getting frequent 500s.
This seems to happen most often when a doc is created via the copy endpoint, but not exclusively.
Did something change in the API at that time?
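For reference, the flow described above can be sketched in Python. The `extract_export_link` helper and the metadata shape are assumptions mirroring the JSON fragment in the question, not an official client:

```python
# Hypothetical sketch of the flow above: create (or copy) a file, read its
# exportLinks from the returned metadata, then GET the export URL.

def extract_export_link(file_metadata: dict, mime_type: str = "application/pdf") -> str:
    """Pull the export URL for the requested MIME type out of a Drive
    files resource, raising if the API did not return one."""
    links = file_metadata.get("exportLinks", {})
    if mime_type not in links:
        raise KeyError(f"no exportLink for {mime_type!r}")
    return links[mime_type]

# Example metadata, shaped like the response fragment shown above:
metadata = {
    "exportLinks": {
        "application/pdf": "https://docs.google.com/feeds/download/documents/export/Export?id=blahblahblah&exportFormat=pdf"
    }
}
url = extract_export_link(metadata)
# Downloading would then be a plain authorized GET with redirects followed
# (so the 302/307 responses mentioned above are handled transparently), e.g.:
#   requests.get(url, headers={"Authorization": f"Bearer {token}"}, allow_redirects=True)
```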
I'm trying to download a file from SharePoint Online using an "app only" token. I can obtain the file info using this URL:
https://graph.microsoft.com:443/v1.0/sites/{siteId}/drives/{driveId}/list/items/{itemId}/driveItem
But when I try to download the file with this URL:
https://graph.microsoft.com:443/v1.0/sites/{siteId}/drives/{driveId}/list/items/{itemId}/driveItem/content
I get the following error:
403 FORBIDDEN
Content-Length: 13
Content-Type: text/plain; charset=utf-8
Date: Fri, 13 Apr 2018 08:47:12 GMT
MicrosoftSharePointTeamServices: 16.0.0.7604
P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI TELo OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
SPIisLatency: 2
SPRequestDuration: 53
X-Content-Type-Options: nosniff
X-MS-InvokeApp: 1; RequireReadOnly
X-MSDAVEXT_Error: 917656; Access+denied.+Before+opening+files+in+this+location%2c+you+must+first+browse+to+the+web+site+and+select+the+option+to+login+automatically.
X-MSEdge-Ref: Ref A: B9E0C567B0CC4E60AEE93EEB8DC06AF1 Ref B: VIEEDGE0813 Ref C: 2018-04-13T08:47:12Z
X-Powered-By: ASP.NET
X-SharePointHealthScore: 0
What is wrong?
It seems that internally it generates a download link (.../_layouts/15/download.aspx?UniqueId=...) that works with a username/password token, but does not work with an "app only" token.
I have another Office 365 subscription that works with an "app only" token. The other subscription has a custom domain, but I cannot see any other configuration differences (both have the LegacyAuthProtocolsEnabled property set to true, same sharing options...).
EDIT: It seems that the example I was testing on Friday now works!
This bug appeared Wednesday last week and is spreading to more and more of our tenants. It appears that temporary tokens generated by the Graph API/SharePoint API are invalid. This affects:
Chunked file upload, as you receive a URL to upload to with a temporary token
@microsoft.graph.downloadUrl, as it contains a temporary token
Content download, as it uses the exact same URL as @microsoft.graph.downloadUrl
Please fix this ASAP, as my application is crippled and the customers are angry.
I created a post here too, but no response: Temporary tokens issued by graph api is invalid since wednesday
Also this bug appeared Wednesday: Unable to set fileSystemInfo.lastModifiedDateTime on files on Sharepoint Online for some users since wednesday
Did you find anything on this, Mark LeFleur?
You should use the @microsoft.graph.downloadUrl property obtained from the /v1.0/me/drive/list/items/x/driveItem response to get an app-only URL to the file.
A GET request will then allow you to download the file.
See https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/resources/driveitem#instance-attributes
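A minimal sketch of this suggestion in Python, assuming the driveItem JSON has already been fetched with the app-only token; the helper and the example item are hypothetical, while the property name matches the Graph docs:

```python
# The Graph docs describe @microsoft.graph.downloadUrl as a short-lived,
# pre-authenticated URL, so downloading it needs no Authorization header.

DOWNLOAD_URL_KEY = "@microsoft.graph.downloadUrl"

def get_download_url(drive_item: dict) -> str:
    """Return the pre-authenticated download URL from a driveItem response."""
    if DOWNLOAD_URL_KEY not in drive_item:
        raise KeyError(f"driveItem response has no {DOWNLOAD_URL_KEY}")
    return drive_item[DOWNLOAD_URL_KEY]

# Hypothetical driveItem response fragment:
item = {
    "name": "report.xlsx",
    "@microsoft.graph.downloadUrl": "https://contoso.sharepoint.com/download.aspx?tempauth=TOKEN",
}
url = get_download_url(item)
# A plain GET then downloads the content, e.g.:
#   requests.get(url)
```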
I have executed the same example that initially failed, and it now works, at least in the two tenants that I have.
I have not changed any configuration or source code, so it seems it was a temporary problem that has since been fixed.
I have a Microsoft MVC 4 application that needs to open a PDF file served from another MVC application. The code in the main application looks like this:
return Redirect(model.PdfUrl);
where model.PdfUrl looks like http://repository.{domain_name}/pdf/whitepaper/810219c9-a599-4132-965a-d3388d0fce3e.pdf
On the repository application, this request is routed to a controller action that looks up the actual file from the incoming URL. Once the physical path of the file has been found, it returns the file in the HTTP response as follows:
var mimeType = MimeMapping.GetMimeMapping(path);
return File(path, mimeType);
Using Fiddler, this creates the following response:
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: application/pdf
Date: Thu, 26 May 2016 16:16:07 GMT
Content-Length: 582158
Chrome, Firefox, and Internet Explorer 11 all then open and render the PDF file 'inline'. Microsoft Edge, however, displays a prompt asking whether you want to download the file. At least, it does most of the time; sometimes it just opens the file as expected/required.
I have tried adding a Content-Disposition header to the response, but it does not appear to make any difference:
var mimeType = MimeMapping.GetMimeMapping(path);
var cd = new ContentDisposition
{
Inline = true,
FileName = Path.GetFileName(path)
};
Response.AddHeader("Content-Disposition", cd.ToString());
return File(path, mimeType);
Any ideas or suggestions would be much appreciated.
We have a controller returning dynamically generated JS code as a JavaScriptResult (an ActionResult).
public ActionResult Merchant(int id)
{
var js = "<script>alert('bleh')</script>"; // really retrieved from custom storage
return JavaScript(js); // same as new JavaScriptResult() { Script = js };
}
The default gzip filter compresses it but does not set Content-Length. Instead, it just sets Transfer-Encoding: chunked and leaves it at that.
Arr-Disable-Session-Affinity: True
Cache-Control: public, max-age=600
Content-Encoding: gzip
Content-Type: application/x-javascript; charset=utf-8
Date: Fri, 06 Nov 2015 22:25:17 GMT
Server: Microsoft-IIS/8.0
Timing-Allow-Origin: *
Transfer-Encoding: chunked
Vary: Accept-Encoding
X-AspNet-Version: 4.0.30319
X-AspNetMvc-Version: 5.2
X-Powered-By: ASP.NET
How can I get it to add the header? I absolutely need it: without Content-Length there is no way to tell whether the file finished downloading (e.g. if the connection dropped), and some CDNs, like Amazon CloudFront, cannot cache the response properly.
After turning off compression, I get a normal Content-Length. It seems trivial for the gzip filter to add this length - there must be an option somewhere?
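The underlying constraint is that Content-Length can only be emitted once the whole compressed body is known, which is why a streaming gzip filter falls back to chunked transfer. A minimal Python sketch of the buffer-then-measure idea (illustration only, not the ASP.NET filter itself):

```python
import gzip

script = b"alert('bleh');"  # stand-in for the dynamically generated JS

# Buffer the entire compressed body first, then measure it. A streaming
# filter cannot do this, because headers must be sent before the body,
# so it emits Transfer-Encoding: chunked instead of Content-Length.
body = gzip.compress(script)
headers = {
    "Content-Encoding": "gzip",
    "Content-Type": "application/x-javascript; charset=utf-8",
    "Content-Length": str(len(body)),
}

# Sanity check: the buffered body still round-trips.
assert gzip.decompress(body) == script
```

The ASP.NET equivalent of this sketch is to buffer or pre-compress the payload before any of the response is flushed; once the first chunk has been streamed, it is too late to add Content-Length.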
I just found a partial response cached as complete on one of our customers' machines, which rendered the whole website unusable. I have absolutely no idea what could possibly have gone wrong there.
So what could have possibly gone wrong in the following setup?
On the server side, we have an ASP.NET application running. One IHttpHandler handles requests for JavaScript files. It minifies the files as they are requested and writes the result to the response stream. It also logs the length of the string being written:
String javascript = /* Javascript is retrieved here */;
HttpResponse response = context.Response;
response.ContentEncoding = Encoding.UTF8;
response.ContentType = "application/javascript";
HttpCachePolicy cache = response.Cache;
cache.SetCacheability(HttpCacheability.Public);
cache.SetMaxAge(TimeSpan.FromDays(300));
cache.SetETag(ETag);
cache.SetExpires(DateTime.Now.AddDays(300));
cache.SetLastModified(LastModified);
cache.SetRevalidation(HttpCacheRevalidation.None);
response.Headers.Add("Vary", "Accept-Encoding");
Log.Info("{0} characters sent", javascript.Length);
response.Write(javascript);
response.Flush();
response.End();
The content is then normally sent gzip-encoded with chunked transfer-encoding. Seems simple enough to me.
Unfortunately, I just had a remote session with a user where only about a third of the file was in the cache, which of course broke the file (15k instead of 44k). In the cache, the content-encoding was also gzip, and all communication took place via HTTPS.
After opening the source file on the user's machine, I just hit Ctrl-F5 and the full content was displayed immediately.
What could have possibly gone wrong?
In case it matters, please find the cache-entry from Firefox below:
Cache entry information
key: <resource-url>
fetch count: 49
last fetched: 2015-04-28 15:31:35
last modified: 2015-04-27 15:29:13
expires: 2016-02-09 14:27:05
Data size: 15998 B
Security: This is a secure document.
security-info: (...)
request-method: GET
request-Accept-Encoding: gzip, deflate
response-head: HTTP/1.1 200 OK
Cache-Control: public, max-age=25920000
Content-Type: application/javascript; charset=utf-8
Content-Encoding: gzip
Expires: Tue, 09 Feb 2016 14:27:12 GMT
Last-Modified: Tue, 02 Jan 2001 11:00:00 GMT
Etag: W/"0"
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
Date: Wed, 15 Apr 2015 13:27:12 GMT
necko:classified: 1
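One plausible reading of the 15998 B entry (against a ~44k original) is that a truncated gzip body was stored as complete. If so, decompressing the cached bytes should fail partway through, which is easy to reproduce; the sizes here are made up for illustration:

```python
import gzip

# Illustrative only: simulate a response body cut off mid-transfer,
# as the 15k-of-44k cache entry above suggests happened.
original = ("function f() { return 42; }\n" * 200).encode()
full_body = gzip.compress(original)

truncated = full_body[: len(full_body) // 3]  # connection dropped mid-stream

assert gzip.decompress(full_body) == original  # the intact body round-trips
try:
    gzip.decompress(truncated)                 # the truncated body cannot
    truncation_detected = False
except Exception:                              # EOFError / zlib.error
    truncation_detected = True
assert truncation_detected
```

Note that the response headers in the cache entry include no Content-Length (the transfer was chunked), so the client had no byte count to validate the stored body against.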
Your client's browser is most likely caching the JavaScript files, which would mean the src of your scripts isn't changing.
For instance if you were to request myScripts
<script src="/myScripts.js">
The first time, the client would request that file; on any further visits, the browser would read it from its cache.
You need to append some sort of unique value, such as a timestamp, to the end of your script URLs, so that even if the browser has cached the file, the new timestamp acts like a new file name.
The client receives the new scripts after pressing Ctrl+F5 because this shortcut forces a reload that bypasses the browser's cache.
MVC has a really nice way of doing this, which involves appending a unique code that changes every time the application or its app pool is restarted. Check out MVC Bundling and Minification.
Hope this helps!
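The cache-busting idea can be sketched as a small helper that derives a version token from the file contents. The helper name and the MD5 choice here are illustrative, not what MVC bundling actually does internally:

```python
import hashlib

def cache_busted_src(path: str, content: bytes) -> str:
    """Append a content-derived version token so a changed file gets a new URL."""
    token = hashlib.md5(content).hexdigest()[:8]
    return f"{path}?v={token}"

v1 = cache_busted_src("/myScripts.js", b"alert('old');")
v2 = cache_busted_src("/myScripts.js", b"alert('new');")
# Same path, different contents -> different URLs, so the browser
# treats the updated script as a brand-new resource while unchanged
# files keep their old, still-cacheable URL.
assert v1 != v2
```

A content hash has one advantage over a timestamp: the URL only changes when the file actually changes, so deployments that touch file dates without editing the script do not invalidate the cache.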