Using StreamReader to skip lines starting with "//"? - c#-2.0

I am reading a text file line by line. Whenever a line starts with "//" I want to omit it and move on to the next line.
The input text file is split into several separate sections, so I need to process it line by line while watching for this marker.

If you are using .NET 3.5 you can use LINQ with an IEnumerable<string> wrapped around a StreamReader. The cool part is that you can then use a where clause to filter out comment lines, or better yet a select with a regular expression to strip a trailing comment and keep the data on the same line.
//.Net 3.5
static class Program
{
    static void Main(string[] args)
    {
        var clean = from line in args[0].ReadAsLines()
                    let trimmed = line.Trim()
                    where !trimmed.StartsWith("//")
                    select line;
    }

    static IEnumerable<string> ReadAsLines(this string filename)
    {
        using (var reader = new StreamReader(filename))
            while (!reader.EndOfStream)
                yield return reader.ReadLine();
    }
}
...
//.Net 2.0
static class Program
{
    static void Main(string[] args)
    {
        // C# 2.0 has no 'var', so the type must be spelled out
        IEnumerable<string> clean = FilteredLines(args[0]);
    }

    static IEnumerable<string> FilteredLines(string filename)
    {
        foreach (string line in ReadAsLines(filename))
            if (!line.TrimStart().StartsWith("//"))   // keep only non-comment lines
                yield return line;
    }

    static IEnumerable<string> ReadAsLines(string filename)
    {
        using (var reader = new StreamReader(filename))
            while (!reader.EndOfStream)
                yield return reader.ReadLine();
    }
}

I'm not sure exactly what you need, but if you just want to filter out "//" lines from some text in a stream, something like this works — just remember to close the stream after using it.
public string FilterComments(System.IO.Stream stream)
{
    var data = new System.Text.StringBuilder();
    using (var reader = new System.IO.StreamReader(stream))
    {
        while (!reader.EndOfStream)
        {
            var line = reader.ReadLine();
            if (!line.TrimStart(' ').StartsWith("//"))
            {
                // AppendLine keeps the original line breaks in the filtered text
                data.AppendLine(line);
            }
        }
    }
    return data.ToString();
}
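A minimal usage sketch, assuming the comments should be stripped from a file on disk (the path here is hypothetical); the using block disposes the FileStream once FilterComments has consumed it:

string filtered;
using (var fileStream = System.IO.File.OpenRead(@"C:\data\input.txt"))   // hypothetical path
{
    filtered = FilterComments(fileStream);
}
// 'filtered' now holds the text without the "//" lines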

// Derived class from StreamReader that skips "//" comment lines
class SplLineIgnorStrmReader : StreamReader
{
    public SplLineIgnorStrmReader(string path, Encoding encoding)
        : base(path, encoding)
    {
    }

    public override string ReadLine()
    {
        string strLineText = "";
        while (!EndOfStream)
        {
            strLineText = base.ReadLine().Trim();
            // Skip the line if it starts with the comment marker
            if (strLineText.StartsWith("//"))
                continue;
            break;
        }
        return strLineText;
    }
}
This is for when you want to read the text file and omit any comment lines from it (here, lines that start with "//").
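A minimal usage sketch, assuming strFile holds the path to the input file (as in the original snippet) and that the whole file should be read through the comment-skipping reader:

using (var reader = new SplLineIgnorStrmReader(strFile, Encoding.Default))
{
    while (!reader.EndOfStream)
    {
        string dataLine = reader.ReadLine();   // comment lines are skipped inside ReadLine
        // process dataLine here
    }
}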

Related

How to keep the hyperlinks in a PDF merge in iTextSharp? [duplicate]

How can I merge multiple PDF files (generated at run time) through iTextSharp and then print them?
I found the following link, but that method requires the PDF file names because it assumes the PDF files are stored on disk, and that is not my case.
I have multiple reports that I convert to PDF files through this method:
private void AddReportToResponse(LocalReport followsReport)
{
    string mimeType;
    string encoding;
    string extension;
    string[] streams = new string[100];
    Warning[] warnings = new Warning[100];
    byte[] pdfStream = followsReport.Render("PDF", "", out mimeType, out encoding, out extension, out streams, out warnings);

    //Response.Clear();
    //Response.ContentType = mimeType;
    //Response.AddHeader("content-disposition", "attachment; filename=Application." + extension);
    //Response.BinaryWrite(pdfStream);
    //Response.End();
}
Now I want to merge all those generated files (byte arrays) into one PDF file so I can print it.
If you want to merge source documents using iText(Sharp), there are two basic situations:
You really want to merge the documents, acquiring the pages in their original format, transferring as much of their content and their interactive annotations as possible. In this case you should use a solution based on a member of the Pdf*Copy* family of classes.
You actually want to integrate pages from the source documents into a new document but want the new document to govern the general format and don't care for the interactive features (annotations...) in the original documents (or even want to get rid of them). In this case you should use a solution based on the PdfWriter class.
You can find details in chapter 6 (especially section 6.4) of iText in Action — 2nd Edition. The Java sample code can be accessed here and the C#'ified versions here.
A simple sample using PdfCopy is Concatenate.java / Concatenate.cs. The central piece of code is:
byte[] mergedPdf = null;
using (MemoryStream ms = new MemoryStream())
{
    using (Document document = new Document())
    {
        using (PdfCopy copy = new PdfCopy(document, ms))
        {
            document.Open();
            for (int i = 0; i < pdf.Count; ++i)
            {
                PdfReader reader = new PdfReader(pdf[i]);
                // loop over the pages in that document
                int n = reader.NumberOfPages;
                for (int page = 0; page < n; )
                {
                    copy.AddPage(copy.GetImportedPage(reader, ++page));
                }
            }
        }
    }
    mergedPdf = ms.ToArray();
}
Here pdf can either be defined as a List<byte[]> immediately containing the source documents (appropriate for your use case of merging intermediate in-memory documents) or as a List<String> containing the names of source document files (appropriate if you merge documents from disk).
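For the in-memory case, the pdf list could be filled straight from the LocalReport output shown in the question; a sketch, assuming a hypothetical helper RenderReportToPdf that wraps the Render call above and returns its byte[]:

// Each entry is one rendered report, kept entirely in memory.
List<byte[]> pdf = new List<byte[]>();
pdf.Add(RenderReportToPdf(followsReport));   // hypothetical helper returning the LocalReport.Render(...) bytes
pdf.Add(RenderReportToPdf(anotherReport));   // add as many reports as needed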
An overview at the end of the referenced chapter summarizes the usage of the classes mentioned:
PdfCopy: Copies pages from one or more existing PDF documents. Major downsides: PdfCopy doesn’t detect redundant content, and it fails when concatenating forms.
PdfCopyFields: Puts the fields of the different forms into one form. Can be used to avoid the problems encountered with form fields when concatenating forms using PdfCopy. Memory use can be an issue.
PdfSmartCopy: Copies pages from one or more existing PDF documents. PdfSmartCopy is able to detect redundant content, but it needs more memory and CPU than PdfCopy.
PdfWriter: Generates PDF documents from scratch. Can import pages from other PDF documents. The major downside is that all interactive features of the imported page (annotations, bookmarks, fields, and so forth) are lost in the process.
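If redundant resources across the source files are a concern, the Concatenate sample above needs only a one-line change to use PdfSmartCopy instead of PdfCopy (a sketch, not the full sample):

// PdfSmartCopy derives from PdfCopy, so the rest of the loop stays the same.
using (PdfSmartCopy copy = new PdfSmartCopy(document, ms))
{
    document.Open();
    // ... same per-document / per-page AddPage loop as above ...
}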
I used iTextSharp with C# to combine PDF files. This is the code I used.
string[] lstFiles = new string[3];
lstFiles[0] = @"C:/pdf/1.pdf";
lstFiles[1] = @"C:/pdf/2.pdf";
lstFiles[2] = @"C:/pdf/3.pdf";

PdfReader reader = null;
Document sourceDocument = null;
PdfCopy pdfCopyProvider = null;
PdfImportedPage importedPage;
string outputPdfPath = @"C:/pdf/new.pdf";

sourceDocument = new Document();
pdfCopyProvider = new PdfCopy(sourceDocument, new System.IO.FileStream(outputPdfPath, System.IO.FileMode.Create));

//Open the output file
sourceDocument.Open();
try
{
    //Loop through the files list (use Length, not Length - 1, or the last file is skipped)
    for (int f = 0; f < lstFiles.Length; f++)
    {
        int pages = GetPageCount(lstFiles[f]);
        reader = new PdfReader(lstFiles[f]);
        //Add pages of current file
        for (int i = 1; i <= pages; i++)
        {
            importedPage = pdfCopyProvider.GetImportedPage(reader, i);
            pdfCopyProvider.AddPage(importedPage);
        }
        reader.Close();
    }
    //At the end save the output file
    sourceDocument.Close();
}
catch (Exception)
{
    throw;   // rethrow without losing the original stack trace
}

private int GetPageCount(string file)
{
    using (StreamReader sr = new StreamReader(File.OpenRead(file)))
    {
        Regex regex = new Regex(@"/Type\s*/Page[^s]");
        MatchCollection matches = regex.Matches(sr.ReadToEnd());
        return matches.Count;
    }
}
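The regex scan over the raw file works for simple PDFs, but iTextSharp already knows the page count once it has parsed the file, so the helper above could just as well be written with the same PdfReader class used in the merge loop (a sketch):

private int GetPageCount(string file)
{
    // Let iTextSharp parse the document and report the page count directly.
    PdfReader reader = new PdfReader(file);
    try
    {
        return reader.NumberOfPages;
    }
    finally
    {
        reader.Close();
    }
}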
Here is some code I pulled out of an old project I had. It was a web application but I was using iTextSharp to merge pdf files then print them.
public static class PdfMerger
{
    /// <summary>
    /// Merge pdf files.
    /// </summary>
    /// <param name="sourceFiles">PDF files being merged.</param>
    /// <returns></returns>
    public static byte[] MergeFiles(List<Stream> sourceFiles)
    {
        Document document = new Document();
        MemoryStream output = new MemoryStream();
        try
        {
            // Initialize pdf writer
            PdfWriter writer = PdfWriter.GetInstance(document, output);
            writer.PageEvent = new PdfPageEvents();
            // Open document to write
            document.Open();
            PdfContentByte content = writer.DirectContent;
            // Iterate through all pdf documents
            for (int fileCounter = 0; fileCounter < sourceFiles.Count; fileCounter++)
            {
                // Create pdf reader
                PdfReader reader = new PdfReader(sourceFiles[fileCounter]);
                int numberOfPages = reader.NumberOfPages;
                // Iterate through all pages
                for (int currentPageIndex = 1; currentPageIndex <= numberOfPages; currentPageIndex++)
                {
                    // Determine page size for the current page
                    document.SetPageSize(reader.GetPageSizeWithRotation(currentPageIndex));
                    // Create page
                    document.NewPage();
                    PdfImportedPage importedPage = writer.GetImportedPage(reader, currentPageIndex);
                    // Determine page orientation
                    int pageOrientation = reader.GetPageRotation(currentPageIndex);
                    if ((pageOrientation == 90) || (pageOrientation == 270))
                    {
                        content.AddTemplate(importedPage, 0, -1f, 1f, 0, 0,
                            reader.GetPageSizeWithRotation(currentPageIndex).Height);
                    }
                    else
                    {
                        content.AddTemplate(importedPage, 1f, 0, 0, 1f, 0, 0);
                    }
                }
            }
        }
        catch (Exception exception)
        {
            throw new Exception("An unexpected exception occurred during the pdf merging process.", exception);
        }
        finally
        {
            document.Close();
        }
        // ToArray returns only the written bytes; GetBuffer would include unused buffer space
        return output.ToArray();
    }
}
/// <summary>
/// Implements custom page events.
/// </summary>
internal class PdfPageEvents : IPdfPageEvent
{
    #region members
    private BaseFont _baseFont = null;
    private PdfContentByte _content;
    #endregion

    #region IPdfPageEvent Members
    public void OnOpenDocument(PdfWriter writer, Document document)
    {
        _baseFont = BaseFont.CreateFont(BaseFont.HELVETICA, BaseFont.CP1252, BaseFont.NOT_EMBEDDED);
        _content = writer.DirectContent;
    }

    public void OnStartPage(PdfWriter writer, Document document)
    { }

    public void OnEndPage(PdfWriter writer, Document document)
    { }

    public void OnCloseDocument(PdfWriter writer, Document document)
    { }

    public void OnParagraph(PdfWriter writer, Document document, float paragraphPosition)
    { }

    public void OnParagraphEnd(PdfWriter writer, Document document, float paragraphPosition)
    { }

    public void OnChapter(PdfWriter writer, Document document, float paragraphPosition, Paragraph title)
    { }

    public void OnChapterEnd(PdfWriter writer, Document document, float paragraphPosition)
    { }

    public void OnSection(PdfWriter writer, Document document, float paragraphPosition, int depth, Paragraph title)
    { }

    public void OnSectionEnd(PdfWriter writer, Document document, float paragraphPosition)
    { }

    public void OnGenericTag(PdfWriter writer, Document document, Rectangle rect, string text)
    { }
    #endregion

    private float GetCenterTextPosition(string text, PdfWriter writer)
    {
        return writer.PageSize.Width / 2 - _baseFont.GetWidthPoint(text, 8) / 2;
    }
}
I didn't write this, but made some modifications. I can't remember where I found it. After I merged the PDFs I would call this method to insert javascript to open the print dialog when the PDF is opened. If you change bSilent to true then it should print silently to their default printer.
public Stream addPrintJStoPDF(Stream thePDF)
{
    //Open the stream with iTextSharp
    var reader = new PdfReader(thePDF);
    var outPutStream = new MemoryStream();
    var stamper = new PdfStamper(reader, outPutStream);
    var jsText = "var res = app.setTimeOut('this.print({bUI: true, bSilent: false, bShrinkToFit: false});', 200);";
    //Add the javascript to the PDF
    stamper.JavaScript = jsText;
    stamper.FormFlattening = true;
    stamper.Writer.CloseStream = false;   // keep the MemoryStream open after the stamper is closed
    stamper.Close();
    reader.Close();
    //Set the stream to the beginning
    outPutStream.Position = 0;
    return outPutStream;
}
I'm not sure how well the above code is written, since I pulled it from somewhere else and haven't worked in depth with iTextSharp, but I do know that it worked for merging PDFs that I was generating at runtime.
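A minimal sketch of how the two helpers above might be chained together; the reportStreams source and what you do with the resulting stream are assumptions, not part of the original answer:

// Merge the rendered reports, then stamp in the print-dialog JavaScript.
List<Stream> reportStreams = GetRenderedReportStreams();   // hypothetical source of rendered PDF streams
byte[] merged = PdfMerger.MergeFiles(reportStreams);

using (var mergedStream = new MemoryStream(merged))
{
    Stream printable = addPrintJStoPDF(mergedStream);
    // write 'printable' to the HTTP response, or save it to disk, as needed
}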
Tested with iTextSharp-LGPL 4.1.6:
public static byte[] ConcatenatePdfs(IEnumerable<byte[]> documents)
{
    using (var ms = new MemoryStream())
    {
        var outputDocument = new Document();
        var writer = new PdfCopy(outputDocument, ms);
        outputDocument.Open();
        foreach (var doc in documents)
        {
            var reader = new PdfReader(doc);
            for (var i = 1; i <= reader.NumberOfPages; i++)
            {
                writer.AddPage(writer.GetImportedPage(reader, i));
            }
            writer.FreeReader(reader);
            reader.Close();
        }
        writer.Close();
        outputDocument.Close();
        // ToArray copies only the bytes actually written, unlike GetBuffer
        var allPagesContent = ms.ToArray();
        return allPagesContent;
    }
}
To avoid the memory issues mentioned above, I used a FileStream instead of a MemoryStream (as suggested in "ITextSharp Out of memory exception merging multiple pdf") to merge the PDF files:
var parentDirectory = Directory.GetParent(SelectedDocuments[0].FilePath);
var savePath = parentDirectory + "\\MergedDocument.pdf";
using (var fs = new FileStream(savePath, FileMode.Create))
{
    using (var document = new Document())
    {
        using (var pdfCopy = new PdfCopy(document, fs))
        {
            document.Open();
            for (var i = 0; i < SelectedDocuments.Count; i++)
            {
                using (var pdfReader = new PdfReader(SelectedDocuments[i].FilePath))
                {
                    for (var page = 0; page < pdfReader.NumberOfPages; )
                    {
                        pdfCopy.AddPage(pdfCopy.GetImportedPage(pdfReader, ++page));
                    }
                }
            }
        }
    }
}
/* For printing multiple PDFs */
<button type="button" id="btnPrintMultiplePdf" runat="server" class="btn btn-primary btn-border btn-sm"
        onserverclick="btnPrintMultiplePdf_click">
    <i class="fa fa-file-pdf-o"></i>Print Multiple pdf
</button>
protected void btnPrintMultiplePdf_click(object sender, EventArgs e)
{
    if (ValidateForMultiplePDF() == true)
    {
        #region Declare Temp Variables..!!
        CheckBox chkList = new CheckBox();
        HiddenField HidNo = new HiddenField();
        string Multi_fofile, Multi_listfile;
        Multi_fofile = Multi_listfile = "";
        Multi_fofile = Server.MapPath("PDFRNew");
        #endregion

        for (int i = 0; i < grdRnew.Rows.Count; i++)
        {
            #region Find Grd Controls..!!
            CheckBox Chk_One = (CheckBox)grdRnew.Rows[i].FindControl("chkOne");
            Label lbl_Year = (Label)grdRnew.Rows[i].FindControl("lblYear");
            Label lbl_No = (Label)grdRnew.Rows[i].FindControl("lblCode");
            #endregion

            if (Chk_One.Checked == true)
            {
                HidNo.Value = lbl_No.Text.Trim() + lbl_Year.Text;
                if (File.Exists(Multi_fofile + "\\" + HidNo.Value.ToString() + ".pdf"))
                {
                    #region Get Multiple Files Name And Paths..!!
                    if (Multi_listfile != "")
                    {
                        Multi_listfile = Multi_listfile + ",";
                    }
                    Multi_listfile = Multi_listfile + Multi_fofile + "\\" + HidNo.Value.ToString() + ".pdf";
                    #endregion
                }
            }
        }

        #region For Generate Multiple Pdf..!!
        if (Multi_listfile != "")
        {
            String[] Multifiles = Multi_listfile.Split(',');
            string DestinationFile = Server.MapPath("PDFRNew") + "\\Multiple.Pdf";
            MergeFiles(DestinationFile, Multifiles);
            Response.ContentType = "application/pdf";
            // Send only the file name in the header, not the full server path
            Response.AddHeader("Content-Disposition", "attachment;filename=\"" + Path.GetFileName(DestinationFile) + "\"");
            Response.TransmitFile(DestinationFile);
            Response.End();
        }
        else
        {
        }
        #endregion
    }
}
private void MergeFiles(string DestinationFile, string[] SourceFiles)
{
    try
    {
        int f = 0;
        /**we create a reader for a certain Document**/
        PdfReader reader = new PdfReader(SourceFiles[f]);
        /**we retrieve the total number of pages**/
        int n = reader.NumberOfPages;
        /**Console.WriteLine("There are " + n + " pages in the original file.")**/
        /**Step 1: creation of a document-object**/
        Document document = new Document(reader.GetPageSizeWithRotation(1));
        /**Step 2: we create a writer that listens to the Document**/
        PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(DestinationFile, FileMode.Create));
        /**Step 3: we open the Document**/
        document.Open();
        PdfContentByte cb = writer.DirectContent;
        PdfImportedPage page;
        int rotation;
        /**Step 4: We Add Content**/
        while (f < SourceFiles.Length)
        {
            int i = 0;
            while (i < n)
            {
                i++;
                document.SetPageSize(reader.GetPageSizeWithRotation(i));
                document.NewPage();
                page = writer.GetImportedPage(reader, i);
                rotation = reader.GetPageRotation(i);
                if (rotation == 90 || rotation == 270)
                {
                    cb.AddTemplate(page, 0, -1f, 1f, 0, 0, reader.GetPageSizeWithRotation(i).Height);
                }
                else
                {
                    cb.AddTemplate(page, 1f, 0, 0, 1f, 0, 0);
                }
                /**Console.WriteLine("Processed page " + i)**/
            }
            f++;
            if (f < SourceFiles.Length)
            {
                reader = new PdfReader(SourceFiles[f]);
                /**we retrieve the total number of pages**/
                n = reader.NumberOfPages;
                /**Console.WriteLine("There are"+n+"pages in the original file.")**/
            }
        }
        /**Step 5: we Close the Document**/
        document.Close();
    }
    catch (Exception e)
    {
        string strOb = e.Message;
    }
}
private bool ValidateForMultiplePDF()
{
    bool chkList = false;
    foreach (GridViewRow gvr in grdRnew.Rows)
    {
        CheckBox Chk_One = (CheckBox)gvr.FindControl("ChkSelectOne");
        if (Chk_One.Checked == true)
        {
            chkList = true;
        }
    }
    if (chkList == false)
    {
        divStatusMsg.Style.Add("display", "");
        divStatusMsg.Attributes.Add("class", "alert alert-danger alert-dismissable");
        divStatusMsg.InnerText = "ERROR !!... Please check at least one checkbox.";
        grdRnew.Focus();
        set_timeout();
        return false;
    }
    return true;
}

Populate labels when selecting item in listbox

I have a listbox that contains the first value of every line in a text file.
The values are separated with a ','.
I would like to select an item in the listbox and have it populate the labels I have in place with the rest of the line from the text file.
private void listsup_MouseClick(object sender, MouseEventArgs e)
{
    Supfile = System.AppDomain.CurrentDomain.BaseDirectory + "data\\Suppliers.txt";
    StreamReader spl = new StreamReader(Supfile);
    string word = Convert.ToString(listsup.SelectedItem);
    List<string> values = new List<string>();
    foreach (string str in values)
    {
        if (str.Contains(word))
        {
            string[] tokens = str.Split(',');
            labelsupnm.Text = tokens[0];
            labelconpers.Text = tokens[1];
            labeldiscr1.Text = tokens[2];
            labeldiscr2.Text = tokens[3];
            labeldiscr3.Text = tokens[4];
            labeldiscr4.Text = tokens[5];
            labeldiscr5.Text = tokens[6];
        }
    }
}
The problem is, I'm not getting anything to display in my labels. Please help.
I changed my code a little, added some code that I used to populate the listbox itself, and now it all works just fine.
private void listsup_MouseClick(object sender, MouseEventArgs e)
{
    Supfile = System.AppDomain.CurrentDomain.BaseDirectory + "data\\Suppliers.txt";
    try
    {
        StreamReader supFile;
        supFile = File.OpenText(Supfile);
        string lines;
        while (!supFile.EndOfStream)
        {
            lines = supFile.ReadLine();
            string[] tokens = lines.Split(',');
            string tr = listsup.SelectedItem.ToString();
            if (tr.Equals(tokens[0]))
            {
                labelsupnm.Text = tokens[0];
                labelconpers.Text = tokens[1];
                labeldiscr1.Text = tokens[2];
                labeldiscr2.Text = tokens[3];
                labeldiscr3.Text = tokens[4];
                labeldiscr4.Text = tokens[5];
                labeldiscr5.Text = tokens[6];
            }
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

File is a Method which is not valid in the given context [duplicate]

This question already has an answer here:
Why can't I use System.IO.File methods in an MVC controller?
I'm trying to check whether a file exists at a given path, but I'm getting a build error on File:
'File' is a method, which is not valid in the given context.
if (!File.Exists(excelFilePath)) throw new FileNotFoundException(excelFilePath);
if (File.Exists(csvOutputFile)) throw new ArgumentException("File exists: " + csvOutputFile);
Full class code:
static void CovertExcelToCsv(string excelFilePath, string csvOutputFile, int worksheetNumber = 1)
{
    if (!File.Exists(excelFilePath)) throw new FileNotFoundException(excelFilePath);
    if (File.Exists(csvOutputFile)) throw new ArgumentException("File exists: " + csvOutputFile);

    // connection string
    var cnnStr = String.Format("Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};Extended Properties=\"Excel 8.0;IMEX=1;HDR=NO\"", excelFilePath);
    var cnn = new System.Data.OleDb.OleDbConnection(cnnStr);

    // get schema, then data
    var dt = new DataTable();
    try
    {
        cnn.Open();
        var schemaTable = cnn.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, null);
        if (schemaTable.Rows.Count < worksheetNumber) throw new ArgumentException("The worksheet number provided cannot be found in the spreadsheet");
        string worksheet = schemaTable.Rows[worksheetNumber - 1]["table_name"].ToString().Replace("'", "");
        string sql = String.Format("select * from [{0}]", worksheet);
        var da = new OleDbDataAdapter(sql, cnn);
        da.Fill(dt);
    }
    catch (Exception e)
    {
        // ???
        throw e;
    }
    finally
    {
        // free resources
        cnn.Close();
    }

    // write out CSV data
    using (var wtr = new StreamWriter(csvOutputFile))
    {
        foreach (DataRow row in dt.Rows)
        {
            bool firstLine = true;
            foreach (DataColumn col in dt.Columns)
            {
                if (!firstLine) { wtr.Write(","); } else { firstLine = false; }
                var data = row[col.ColumnName].ToString().Replace("\"", "\"\"");
                wtr.Write(String.Format("\"{0}\"", data));
            }
            wtr.WriteLine();
        }
    }
}
How can I fix this?
You probably have this method inside an MVC controller, where the Controller base class defines File methods that hide System.IO.File. Write System.IO.File instead of File in your code, as shown below.
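A sketch of the fully qualified calls from the question (the using-alias line is optional):

// Fully qualify the calls so the compiler binds to System.IO.File
// rather than the controller's File(...) helper methods.
if (!System.IO.File.Exists(excelFilePath)) throw new FileNotFoundException(excelFilePath);
if (System.IO.File.Exists(csvOutputFile)) throw new ArgumentException("File exists: " + csvOutputFile);

// Alternatively, add an alias at the top of the file:
// using IOFile = System.IO.File;
// ...and then call IOFile.Exists(...) as before.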

Understanding Mahout classification output

I have trained a Mahout model for three categories (Category_A, Category_B, Category_C) using the 20 Newsgroups example. Now I want to classify my documents using this model. Can somebody help me understand the output I am getting from it?
Here is my output
{0:-2813549.8786637094,1:-2651723.736745838,2:-2710651.7525975127}
According to this output the category of the document is 1, but the expected category is 2. Am I going about this right, or is something missing in my code?
public class NaiveBayesClassifierExample {

    public static void loadClassifier(String strModelPath, Vector v)
            throws IOException {
        Configuration conf = new Configuration();
        NaiveBayesModel model = NaiveBayesModel.materialize(new Path(strModelPath), conf);
        AbstractNaiveBayesClassifier classifier = new StandardNaiveBayesClassifier(model);
        Vector st = classifier.classifyFull(v);
        System.out.println(st.asFormatString());
        System.out.println(st.maxValueIndex());
        st.asFormatString();
    }

    public static Vector createVect() throws IOException {
        FeatureVectorEncoder encoder = new StaticWordValueEncoder("text");
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);
        String inputData = readData();
        StringReader in = new StringReader(inputData);
        TokenStream ts = analyzer.tokenStream("body", in);
        CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
        Vector v1 = new RandomAccessSparseVector(100000);
        while (ts.incrementToken()) {
            char[] termBuffer = termAtt.buffer();
            int termLen = termAtt.length();
            String w = new String(termBuffer, 0, termLen);
            encoder.addToVector(w, 1.0, v1);
        }
        v1.normalize();
        return v1;
    }

    private static String readData() {
        // TODO Auto-generated method stub
        BufferedReader reader = null;
        String line, results = "";
        try {
            reader = new BufferedReader(new FileReader("c:\\inputFile.txt"));
            while ((line = reader.readLine()) != null) {
                results += line;
            }
            reader.close();
        }
        catch (Exception ex) {
            ex.printStackTrace();
        }
        return results;
    }

    public static void main(String[] args) throws IOException {
        Vector v = createVect();
        String mp = "E:\\Final_Model\\model";
        loadClassifier(mp, v);
    }
}

asp.net mvc serving txt gets truncated

I'm trying to serve a txt file made from the database using an action. The action is the following:
public ActionResult ATxt()
{
    var articulos = _articulosService.ObteTotsArticles();
    return File(CatalegATxt.ATxt(articulos), "text/plain");
}
and the CatalegATxt class is:
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using WebDibaelsaMVC.DTOs.Busqueda;

namespace WebDibaelsaMVC.TxtLib
{
    public static class CatalegATxt
    {
        public static Stream ATxt(IEnumerable<ArticuloBusquedaDTO> articles)
        {
            var stream = new MemoryStream();
            var streamWriter = new StreamWriter(stream, Encoding.UTF8);
            foreach (ArticuloBusquedaDTO article in articles)
            {
                streamWriter.WriteLine(article.ToStringFix());
            }
            stream.Seek(0, SeekOrigin.Begin);
            return stream;
        }

        public static string ToStringFix(this ArticuloBusquedaDTO article)
        {
            string result = "";
            result += article.CodigoArticulo.PadRight(10, ' ').Substring(0, 10);
            result += article.EAN.Trim().PadLeft(13, '0').Substring(0, 13);
            result += article.NombreArticulo.PadRight(100, ' ').Substring(0, 100);
            result += article.Marca.PadRight(100, ' ').Substring(0, 100);
            result += article.Familia.PadRight(50, ' ').Substring(0, 50);
            result += article.PrecioCesion.ToStringFix();
            result += article.PVP.ToStringFix();
            return result;
        }

        private static string ToStringFix(this double numero)
        {
            var num = (int)Math.Round(numero * 100, 0);
            string result = num.ToString().PadLeft(10, '0');
            return result;
        }
    }
}
It just writes the file lines based on the data I get from the database, but when I look at the file it is truncated. The file is about 8 MB. I also tried converting it to byte[] before returning from ATxt, with the same result.
Any idea?
Thanks,
Carles
Update: I also tried serving XML built from the same content and it gets truncated too. It doesn't get truncated on the data (I thought there might have been an EOF character in it); instead it truncates in the middle of a tag...
I was having the exact same problem. The text file would always be returned as truncated.
It crossed my mind that it might be a "flushing" problem, and indeed it was. The writer's buffer hadn't been flushed at the end of the operation, since there is no using block or Close() call, which would flush it automatically.
You need to call:
streamWriter.Flush();
before MVC takes over the stream.
Here's how your method should look:
public static Stream ATxt(IEnumerable<ArticuloBusquedaDTO> articles)
{
    var stream = new MemoryStream();
    var streamWriter = new StreamWriter(stream, Encoding.UTF8);
    foreach (ArticuloBusquedaDTO article in articles)
    {
        streamWriter.WriteLine(article.ToStringFix());
    }
    // Flush the stream writer buffer
    streamWriter.Flush();
    stream.Seek(0, SeekOrigin.Begin);
    return stream;
}
Why are you using an ActionResult?
ASP.NET MVC 1 has a FileStreamResult for just what you are doing. It expects a Stream object, and returns it.
public FileStreamResult Test()
{
    return new FileStreamResult(myMemoryStream, "text/plain");
}
Should work fine for what you want to do. No need to do any conversions.
In your case, just change your method to this:
public FileStreamResult ATxt()
{
    var articulos = _articulosService.ObteTotsArticles();
    return new FileStreamResult(CatalegATxt.ATxt(articulos), "text/plain");
}
You probably want to close the MemoryStream. It could be getting truncated because it expects more data still. Or to make things even simpler, try something like this:
public static byte[] ATxt(IEnumerable<ArticuloBusquedaDTO> articles)
{
    using (var stream = new MemoryStream())
    {
        var streamWriter = new StreamWriter(stream, Encoding.UTF8);
        foreach (ArticuloBusquedaDTO article in articles)
        {
            streamWriter.WriteLine(article.ToStringFix());
        }
        // Flush here too, otherwise buffered lines are lost before ToArray()
        streamWriter.Flush();
        return stream.ToArray();
    }
}
