I have an MVC application that uses NPOI version 2.1.3.0 to read and write an Excel file, which the user can then download to their local machine. The file format is .xlsx.
public ActionResult NPOI()
{
    // Open the template read-only and dispose the stream when done.
    using (var fs = new FileStream(Server.MapPath(@"\Content\SampleExcel.xlsx"), FileMode.Open, FileAccess.Read))
    {
        var templateWorkbook = new XSSFWorkbook(fs);
        ISheet sheet = templateWorkbook.GetSheet("Sheet1");
        IRow dataRow = sheet.GetRow(1);
        dataRow.GetCell(0).SetCellValue(77);
        sheet.ForceFormulaRecalculation = true;
        var ms = new MemoryStream();
        templateWorkbook.Write(ms);
        return File(ms.ToArray(), "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", "SampleExcel.xlsx");
    }
}
Writing to the Excel file works fine. But when the browser is IE and the user's machine has Excel 2013, the following happens: after the download completes, if the user chooses IE's Open (preview file) option, Excel 2013 shows an error saying the file is corrupt, followed by a second error dialog (screenshots omitted).
The template file (SampleExcel.xlsx) on the server side was itself opened and saved with Excel 2013, so I don't know why Excel reports the file as corrupt. There is also enough memory on the user's machine; usage never reaches its peak in Task Manager.
Any help would be highly appreciated.
I figured it out: the problem is not with memory but with the user's permissions on that particular machine. If an admin performs the same steps above, with no customization of access rights, no problem occurs.
Anyhow, thanks everyone for your time. :)
I am trying to get a list of just the existing file names, with their respective dates and times, from SharePoint using any API and C#.
I am able to download and upload files from SharePoint using WebClient, but I am not able to get just the file names with their respective dates and times into a DataGridView. Please let me know if there is a solution for that.
I am developing a Windows Forms application in the Visual Studio environment.
Thanks,
Please try the C# code below to get information about the files in a directory:
using System;
using System.IO;

class FileSysInfo
{
    static void Main()
    {
        // Get the files in the directory and print out file name, last access time and length.
        // The directory path here is a placeholder - point it at your own folder.
        DirectoryInfo dirInfo = new DirectoryInfo(@"C:\SomeFolder");
        FileInfo[] fileNames = dirInfo.GetFiles("*.*");
        foreach (FileInfo fi in fileNames)
        {
            Console.WriteLine("{0}: {1}: {2}", fi.Name, fi.LastAccessTime, fi.Length);
        }
    }
}
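That said, since the files live in SharePoint rather than in a local folder, a minimal sketch using the SharePoint Client Object Model (CSOM) may be closer to what is needed. Assumptions: the Microsoft.SharePoint.Client assemblies are referenced, and the site URL and library path below are placeholders:

using System;
using Microsoft.SharePoint.Client;

class ListSharePointFiles
{
    static void Main()
    {
        // Hypothetical site URL and library path - replace with your own.
        using (var context = new ClientContext("http://sp.example.com/site"))
        {
            Folder folder = context.Web.GetFolderByServerRelativeUrl("Shared Documents");
            FileCollection files = folder.Files;
            // Ask the server for just the name and last-modified timestamp of each file.
            context.Load(files, fc => fc.Include(f => f.Name, f => f.TimeLastModified));
            context.ExecuteQuery();

            foreach (File file in files)
            {
                Console.WriteLine("{0}: {1}", file.Name, file.TimeLastModified);
            }
        }
    }
}

Each Name/TimeLastModified pair can then be added as a row in the DataGridView.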
I am facing an issue while exporting Japanese text in CSV format: junk characters are exported instead of the original Japanese text. I am using .NET MVC's FileStreamResult to export records to a CSV file, with UTF-8 as the encoding (I have also tried some other encodings, but no luck).
I debugged my code: I can convert the string to a memory stream and back, and I can see the original Japanese text being exported. But once the export completes and I open the CSV file, I see only junk characters instead of the expected text. If I open the CSV file in Notepad (opening the CSV in Notepad is NOT my requirement; I use Notepad only to verify whether the Japanese text is actually there), then I can see the expected Japanese text.
It would be really helpful if someone could help me find the root cause of this issue and provide a resolution.
Example: 東京都品川区大崎 gets written as æ±äº¬éƒ½å“å·åŒºå¤§å´Ž
Note: the Japanese text appears correctly if I open the same .CSV file in LibreOffice Calc or in Linux's default gedit. The issue only occurs when the CSV file is opened with MS Office.
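For reference, that junk text is what you get when UTF-8 bytes are decoded as Windows-1252, the ANSI code page Excel typically falls back to. A small standalone sketch that reproduces the effect:

using System;
using System.Text;

class MojibakeDemo
{
    static void Main()
    {
        string original = "東京都品川区大崎";
        // Encode as UTF-8, then (incorrectly) decode as Windows-1252 -
        // this reproduces the junk seen when Excel guesses the wrong encoding.
        byte[] utf8Bytes = Encoding.UTF8.GetBytes(original);
        string junk = Encoding.GetEncoding(1252).GetString(utf8Bytes);
        Console.WriteLine(junk); // prints roughly æ±äº¬éƒ½å“å·åŒºå¤§å´Ž
    }
}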
Please find the code below.
Controller action executed when the Export to CSV button is clicked:
================================================================================
[HttpPost]
[ValidateInput(false)]
public FileStreamResult SaveCustomerInfo()
{
return ExportToCsv();
}
================================================================================
private static FileStreamResult ExportToCsv()
{
var exportedData = new StringBuilder();
exportedData
.AppendLine("実行日,口座番号,支店番号,アカウント名,支店名,の/受益秩序,ステートメント日,入力日,お問い合わせ番号, ,Date Range")
.Append(
"CS0001,Demo FName,Demo LName,8/20/2015,\"Demo User Address\",City,Country,08830,0123456789,15813,Absolute from 8/20/2015 to 8/22/2015");
var stream = PrintingHelper.StringToMemoryStream(Encoding.UTF8, exportedData.ToString());
var fileStreamResult = new FileStreamResult(stream, "text/csv")
{
    FileDownloadName = "TestExportedFileInCsv.csv"
};
return fileStreamResult;
}
It sounds as though you haven't installed the language pack for MS Office on the machine on which you are trying to open the CSV.
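Another thing worth checking: Excel generally detects UTF-8 in a CSV only when the file starts with a UTF-8 byte order mark (BOM); without one, Excel decodes the bytes using the local ANSI code page, which yields exactly the junk shown in the question. A minimal sketch of building the stream with a BOM (plain stream code is used here instead of the question's PrintingHelper helper, whose internals are not shown):

private static MemoryStream StringToMemoryStreamWithBom(string text)
{
    var stream = new MemoryStream();
    // Encoding.UTF8.GetPreamble() returns the 3-byte UTF-8 BOM (EF BB BF).
    byte[] bom = Encoding.UTF8.GetPreamble();
    stream.Write(bom, 0, bom.Length);
    byte[] bytes = Encoding.UTF8.GetBytes(text);
    stream.Write(bytes, 0, bytes.Length);
    stream.Position = 0; // rewind so FileStreamResult reads from the start
    return stream;
}

The rest of ExportToCsv can stay as it is; only the stream construction changes.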
Now, on my PC I can use Explorer to open a location on our SharePoint server (e.g. http://sp.myhost.com/site/Documents/), and from there I can copy/paste a file from, say, my C:\ drive.
I need to replicate the copy process programmatically. FileCopy() doesn't do it; it seems to be the http:// part that's causing problems!
Does the server allow WebDAV access? If yes, there are WebDAV clients for Delphi available, including Indy 10.
Unless you are using remote BLOB storage, all SharePoint files are stored in the database as BLOB objects.
When you access your files with Explorer, you are going through a Windows service that reads the files from SharePoint and renders them to you; that is how you can manually copy and paste, i.e. download files from and upload them to SharePoint.
To do this automatically, you can use SharePoint API code like the following:
using (SPSite site = new SPSite("http://testsite.dev"))
{
using (SPWeb web = site.OpenWeb())
{
using (FileStream fs = File.OpenRead(@"C:\Debug.txt"))
{
byte[] buffer = new byte[fs.Length];
fs.Read(buffer, 0, (int) fs.Length);
SPList list = web.GetList("Lists/Test AAD");
SPFile f = list.RootFolder.Files.Add("/Shared Documents/"+Path.GetFileName(fs.Name), buffer);
}
}
}
This adds a new "Debug.txt" file, read from the C: drive, to the "Shared Documents" library. To upload a whole folder, open the web only once and loop through the files, adding one per iteration, as sketched below.
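A rough sketch of that loop, reusing the same SPSite/SPWeb pattern as above (assumes the System.IO and Microsoft.SharePoint namespaces; the local folder path is a placeholder):

using (SPSite site = new SPSite("http://testsite.dev"))
using (SPWeb web = site.OpenWeb())
{
    SPList list = web.GetList("Lists/Test AAD");
    // Hypothetical local folder - replace with your own path.
    foreach (string path in Directory.GetFiles(@"C:\ToUpload"))
    {
        byte[] buffer = File.ReadAllBytes(path);
        list.RootFolder.Files.Add(Path.GetFileName(path), buffer);
    }
}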
Hope it helps,
Andrew
I'm new to search engines and web crawlers. I want to store all the original pages of a particular web site as HTML files, but with Apache Nutch I can only get binary database files. How do I get the original HTML files with Nutch?
Does Nutch support this? If not, what other tools can I use to achieve my goal? (Tools that support distributed crawling are preferable.)
Well, Nutch writes the crawled data in binary form, so if you want it saved in HTML format you will have to modify the code (which will be painful if you are new to Nutch).
If you want a quick and easy solution for getting HTML pages:
If the list of pages/URLs you intend to fetch is quite small, it is better to do it with a script that invokes wget for each URL.
Or use the HTTrack tool.
EDIT:
Writing your own Nutch plugin would be great: your problem gets solved, plus you can contribute to Nutch by submitting your work! If you are new to Nutch (in terms of code and design), you will have to invest a lot of time building a new plugin; otherwise it is easy to do.
A few pointers to help your initiative:
Here is a page that talks about writing your own Nutch plugin.
Start with Fetcher.java; see lines 647-648. That is the place where you can get the fetched content on a per-URL basis (for those pages that were fetched successfully):
pstatus = output(fit.url, fit.datum, content, status, CrawlDatum.STATUS_FETCH_SUCCESS);
updateStatus(content.getContent().length);
You should add code right after this to invoke your plugin, passing the content object to it. As you will have guessed by now, content.getContent() is the page content for the URL you want. Inside the plugin code, write it to a file whose name is based on the URL name; otherwise it will be difficult to work with. The URL itself can be obtained from fit.url.
You must first make the modifications needed to run Nutch in Eclipse.
Once you are able to run it, open Fetcher.java and add the lines between the "content saver" comment lines below.
case ProtocolStatus.SUCCESS:        // got a page
  pstatus = output(fit.url, fit.datum, content, status, CrawlDatum.STATUS_FETCH_SUCCESS, fit.outlinkDepth);
  updateStatus(content.getContent().length);
  //------------------------------------------- content saver ---------------------------------------------\\
  // Build a file name from the URL, replacing slashes so it is a valid name.
  String filename = "savedsites/" + content.getUrl().replace('/', '-');
  File file = new File(filename);
  file.getParentFile().mkdirs();
  boolean created = file.createNewFile(); // false if the file already exists
  if (!created) {
      System.out.println("File exists.");
  } else {
      FileWriter fstream = new FileWriter(file);
      BufferedWriter out = new BufferedWriter(fstream);
      // Assumes the fetched page contains "<!DOCTYPE html"; indexOf returns -1 otherwise.
      out.write(content.toString().substring(content.toString().indexOf("<!DOCTYPE html")));
      out.close();
      System.out.println("File created successfully.");
  }
  //------------------------------------------- content saver ---------------------------------------------\\
To update this answer:
It is possible to post-process the data in your segment folder and read the HTML (along with the other data Nutch has stored) directly:
Configuration conf = NutchConfiguration.create();
FileSystem fs = FileSystem.get(conf);
Path file = new Path(segment, Content.DIR_NAME + "/part-00000/data");
SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
try
{
    Text key = new Text();
    Content content = new Content();
    while (reader.next(key, content))
    {
        // getContent() returns the raw fetched bytes - the HTML for HTML pages.
        System.out.println(new String(content.getContent()));
    }
}
catch (Exception e)
{
    e.printStackTrace();
}
finally
{
    reader.close();
}
The answers here are obsolete. It is now possible to get the plain HTML files simply with nutch dump. Please see this answer.
In Apache Nutch 2.3.1:
You can save the raw HTML by editing the Nutch code. First run Nutch in Eclipse by following https://wiki.apache.org/nutch/RunNutchInEclipse
Once you have Nutch running in Eclipse, edit the file FetcherReducer.java, add the code below to the output method, and run ant eclipse again to rebuild the classes.
Finally, the raw HTML will be added to the reprUrl column in your database:
if (content != null) {
    ByteBuffer raw = fit.page.getContent();
    if (raw != null) {
        ByteArrayInputStream arrayInputStream = new ByteArrayInputStream(raw.array(), raw.arrayOffset() + raw.position(), raw.remaining());
        Scanner scanner = new Scanner(arrayInputStream);
        scanner.useDelimiter("\\Z"); // read the entire scanner content into one String
        String data = "";
        if (scanner.hasNext()) {
            data = scanner.next();
        }
        fit.page.setReprUrl(StringUtil.cleanField(data));
        scanner.close();
    }
}
I use the ASP.NET MVC 3 framework. I created an Excel file and output it using a FileResult action with content type "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet".
When attempting to open it, Excel just says "file is corrupt and cannot be opened."
When I open the source Excel file on the server that was used to generate the output, it works without any problems. I also ran a byte-level comparison on both copies and the files are identical. I tried emailing the corrupt file to myself, and the attachment opens fine.
This leads me to believe it's a problem with the headers or with some Excel/Windows security configuration.
If it is the latter, I need a solution that won't require clients to change their security settings.
EDIT - Found the setting:
I've found the setting that causes this: "Enable Protected View for files originating from the Internet" in Excel's Trust Center / Protected View settings.
So I guess the question is: is there a way for the file to appear trusted?
Here are the response headers:
Cache-Control: private
Content-Disposition: attachment; filename="Report - Monday, March 19, 2012.xlsx"
Content-Length: 20569
Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
The action method that produces the output:
[HttpPost]
public virtual FileResult Export()
{
try
{
...
string newFilePath = createNewFile(...);
string downloadedFileName = "Report - " + DateTime.Now.ToString("D") + ".xlsx";
return File(newFilePath, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", downloadedFileName);
}
catch (Exception ex)
{
...
}
}
How I create the Excel file:
I have a template XLSX file with column names and some pivot charts in other sheets. From C# I create a copy of this template and then call SQL Server, which outputs data into the first sheet using the OLEDB connector:
set @SQL='insert into OPENROWSET(''Microsoft.ACE.OLEDB.12.0'', ''Excel 12.0;Database=' + @PreparedXLSXFilePath + ';'', ''SELECT * FROM [Data$]'') ...
Thanks in advance for any help.
You would need a digital signature in your Excel file. How to do this from code is another question.
More info here:
http://www.delphifaq.com/faq/windows_user/f2751.shtml
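For completeness, one possible starting point in C# is the OPC signing support in System.IO.Packaging (an .xlsx is an OPC package). This is only a rough sketch under stated assumptions: it references WindowsBase, the certificate path and password are placeholders, and whether Excel then skips Protected View still depends on the client's trusted-publisher settings:

using System;
using System.Collections.Generic;
using System.IO.Packaging;
using System.Linq;
using System.Security.Cryptography.X509Certificates;

static class XlsxSigner
{
    // Signs every part of the package with the given certificate.
    public static void Sign(string xlsxPath, string pfxPath, string pfxPassword)
    {
        // Hypothetical certificate file and password - replace with your own.
        var certificate = new X509Certificate2(pfxPath, pfxPassword);
        using (Package package = Package.Open(xlsxPath))
        {
            var dsm = new PackageDigitalSignatureManager(package)
            {
                CertificateOption = CertificateEmbeddingOption.InSignaturePart
            };
            List<Uri> partUris = package.GetParts().Select(p => p.Uri).ToList();
            dsm.Sign(partUris, certificate);
        }
    }
}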