Save SpreadsheetDocument in the DB - openxml-sdk

I have a valid SpreadsheetDocument object created from a stream, and I can manipulate it (e.g. add a new row). After my changes I need to save the changed document in SQL Server as varbinary, and later read it back from SQL Server to manipulate it further.
Could you provide an example of how to achieve this?
I know how to put/read data from SQL Server. What I'm looking for is a way to convert a SpreadsheetDocument to a byte array, and to recreate a SpreadsheetDocument from a byte array read back from SQL Server.
I'm using Open XML SDK 2.0
Thanks a lot,
Alexander

Not quite the same, but I needed to load an Excel template into memory, modify it, and send it over HTTP using IIS. I did it by loading the data into a memory stream and then making the modifications there (that seems to be the way Microsoft recommends here):
http://msdn.microsoft.com/en-us/library/ee945362%28v=office.11%29.aspx
This might help you:
MemoryStream ms = new MemoryStream();
byte[] byteArray = System.IO.File.ReadAllBytes("document.xlsm");
ms.Write(byteArray, 0, byteArray.Length);
ms.Position = 0;
using (SpreadsheetDocument doc = SpreadsheetDocument.Open(ms, true))
{
    // ... modify the document here; disposing the document saves the changes back into ms ...
}
return File(ms.ToArray(), "application/vnd.ms-excel.sheet.macroEnabled.12", "output.xlsm");
Obviously the last line is what I needed to do; in your case, instead of returning the file, you're going to save the stream's contents to the database.
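Since the asker already knows how to get data in and out of SQL Server, the only missing pieces are ms.ToArray() on the way in and a fresh MemoryStream on the way out. Below is a minimal sketch of that round trip; the Workbooks table, its Id/Content columns, and the connectionString, workbookId, and LoadBytes names are hypothetical stand-ins for illustration:
// Requires System.Data, System.Data.SqlClient, System.IO, DocumentFormat.OpenXml.Packaging.
// Writing: store the (already modified) stream in a varbinary(max) column.
byte[] bytes = ms.ToArray();
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var cmd = new SqlCommand("UPDATE Workbooks SET Content = @content WHERE Id = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", workbookId);
        cmd.Parameters.Add("@content", SqlDbType.VarBinary, -1).Value = bytes; // -1 = varbinary(max)
        cmd.ExecuteNonQuery();
    }
}

// Reading: copy the stored bytes into a new, expandable MemoryStream.
// (A MemoryStream constructed directly over a byte[] is not resizable,
// which matters if you want to edit and re-save the document.)
byte[] stored = LoadBytes(workbookId); // hypothetical helper: SELECT Content FROM Workbooks WHERE Id = @id
using (var ms2 = new MemoryStream())
{
    ms2.Write(stored, 0, stored.Length);
    ms2.Position = 0;
    using (SpreadsheetDocument doc = SpreadsheetDocument.Open(ms2, true))
    {
        // manipulate further...
    }
}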

Related

How to parse EDIfact file containing multiple items using EDI.Net?

I am using EDI.Net from indice-co, and I have an EDI file that contains multiple items. When I use EdiGrammar.NewEdiFact, read the file using a stream, and deserialize it, I get only one item from the file (the topmost). How do I read the file using a stream and deserialize it into a list?
Code Example:
var ediFactParser = EdiGrammar.NewEdiFact();
var interchange = default(EdiModel.Interchange);
using (var stream = File.Open("E:\\SomePath\\20191121020103.00000091.EDI", FileMode.Open, FileAccess.Read))
{
    interchange = new EdiSerializer().Deserialize<EdiModel.Interchange>(new StreamReader(stream), ediFactParser);
}
EdiFact File Content
UNA:+.? 'UNB+UNOA:2+DHLEUAPGW+CENTIRO+191030:1347+203516'UNH+240179+IFTSTA:D:01B:UN'BGM+77+9690108+9'DTM+9:201910301347:203'NAD+CZ+9690108'CNI+1+1032173'LOC+5+AMS::87'LOC+8+AMS::87'STS++PU+:::SHIPMENT PICKUP'RFF+CN:1297617'DTM+11:20191030:102'DTM+7:201910301329:203'GID++1'PCI+18'GIN+BN+10321732'UNT+15+240179'UNH+240180+IFTSTA:D:01B:UN'BGM+77+9690108+9'DTM+9:201910301347:203'NAD+CZ+96901083'CNI+1+2598018'LOC+5+ORY::87'LOC+8+AMS::87'STS++PL+:::PROCESSED AT LOCATION'RFF+CN:116775116'DTM+11:20191029:102'DTM+7:201910301336:203'GID++1'PCI+18'GIN+BN+2598018043'CNI+2+4911357323'LOC+5+CDG::87'LOC+8+AMS::87'STS++PL+:::PROCESSED AT LOCATION'RFF+CN:1286700'DTM+11:20191029:102'DTM+7:201910301339:203'GID++1'PCI+18'GIN+BN+49113573'CNI+3+4911401'LOC+5+CDG::87'LOC+8+AMS::87'STS++PL+:::PROCESSED AT LOCATION'RFF+CN:129007'DTM+11:20191029:102'DTM+7:201910301337:203'GID++1'PCI+18'GIN+BN+49114019'CNI+4+6194460'LOC+5+BRU::87'LOC+8+AMS::87'STS++PL+:::PROCESSED AT LOCATION'RFF+CN:127214241'DTM+11:20191029:102'DTM+7:201910301339:203'GID++1'PCI+18'GIN+BN+6194460856'CNI+5+7525715'LOC+5+ORY::87'LOC+8+AMS::87'STS++PL+:::PROCESSED AT LOCATION'RFF+CN:ECONOCOM'DTM+11:20191029:102'DTM+7:201910301336:203'GID++1'PCI+18'GIN+BN+75257154'CNI+6+752571'LOC+5+ORY::87'LOC+8+AMS::87'STS++PL+:::PROCESSED AT LOCATION'RFF+CN:ECONOCOM'DTM+11:20191029:102'DTM+7:201910301339:203'GID++1'PCI+18'GIN+BN+7525715'UNT+65+240180'UNZ+2+203516'
Sorry for the late reply. I was able to solve it; it was an issue with how I was accessing the segments and what data I was trying to get back. After some more trial and error I was able to figure it out. I currently do not have access to the code and will try to post it back here when I get to it. Thank you.
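For anyone hitting the same wall: in EDI.Net the interchange POCO is your own model, and a file with multiple UNH..UNT message groups binds to a List<T> property whose element type is annotated with [EdiMessage]. A minimal sketch; the EdiModel types and the BGM value mapping below are hypothetical stand-ins for the asker's actual model:
using System.Collections.Generic;
using indice.Edi.Serialization;

namespace EdiModel
{
    public class Interchange
    {
        // Every UNH..UNT group in the file deserializes into one Message;
        // declaring a List<T> is what yields all items, not just the first.
        public List<Message> Messages { get; set; }
    }

    [EdiMessage]
    public class Message
    {
        // Illustrative binding: the document number from the BGM segment.
        [EdiValue("X(35)", Path = "BGM/1", Description = "Document number")]
        public string DocumentNumber { get; set; }
    }
}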

Export SSRS report directly without rendering it on ReportViewer

I have a set of RDL reports hosted on a Report Server instance. Some of the reports render more than 100,000 records in the ReportViewer, so they take quite a long time to render. We therefore decided to export the content directly from the server, based on the user's input parameters for the report as well as the export file format.
The main thing here is that I do not want the user to wait until the export file is available for download. Rather, the user can submit the action and proceed with other work. In the background, the program has to export the file to some physical location, and when the download is available the user will be informed with a notification about the exported file.
I found one approach in this link. I need to know what ways there are to achieve the functionality mentioned above, as well as how to pass the input parameters for the report. Please advise.
Note: I am using XML as the datasource for the RDL reports.
EDIT
I found something useful and wrote the code below:
string path = ServerURL + "?" + _reportFolder + "ReportName&rs:Command=Render&rs:Format=PDF";
WebRequest req = WebRequest.Create(path);
string reportParametersQT = String.Empty;
req.Credentials = CredentialCache.DefaultNetworkCredentials;
WebResponse response = req.GetResponse();
Stream stream = response.GetResponseStream();
//screen.Response.Clear();
string enCodeFileName = HttpUtility.UrlEncode("fileName.pdf", System.Text.Encoding.UTF8);
// "attachment" in the content-disposition header makes the browser show the save dialog directly
Response.AddHeader("content-disposition", "attachment; filename=" + enCodeFileName);
Response.BufferOutput = false; // to prevent buffering
Response.ContentType = response.ContentType;
byte[] buffer = new byte[1024];
int bytesRead = 0;
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    Response.OutputStream.Write(buffer, 0, bytesRead);
}
Response.End();
I am able to download the exported file, but I need to save the file to a physical location instead of downloading it, and I don't know how to do that.
Both of these are very easy to do. You essentially just pass the parameters in the URL that you're calling; for example, for a parameter called "LearnerList" you add &LearnerList=12345 to the URL. For exporting, add an additional rs:Format parameter (rs:Format=PDF, or whatever format you want the file in) to make the report export instead of rendering in the Report Viewer.
Here's an example URL:
https://reporting.MySite.net/ReportServer/Pages/ReportViewer.aspx?/Users+Folders/User/My+Reports/Learner+Details&rs:Format=PDF&LearnerList=202307
Read these two pages, and you should be golden:
https://msdn.microsoft.com/en-us/library/ms155391.aspx
https://msdn.microsoft.com/en-us/library/ms154040.aspx
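To cover the part the asker was missing (saving to a physical location instead of streaming to the browser), the same WebRequest response can be copied into a FileStream. A minimal sketch, reusing the URL pattern from the question and the LearnerList parameter from the example above; the target path is a placeholder:
string url = ServerURL + "?" + _reportFolder + "ReportName&rs:Command=Render&rs:Format=PDF&LearnerList=202307";
WebRequest req = WebRequest.Create(url);
req.Credentials = CredentialCache.DefaultNetworkCredentials;

using (WebResponse response = req.GetResponse())
using (Stream reportStream = response.GetResponseStream())
using (FileStream fileStream = File.Create(@"C:\Reports\output.pdf")) // placeholder path
{
    reportStream.CopyTo(fileStream); // Stream.CopyTo is available from .NET 4 onward
}
Kick this off from a background thread or queued job (e.g. Task.Run) so the user can keep working, then raise the notification once the file exists.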

How to read Microsoft Word Binary Data from database and convert it to readable text

I'm working in a Java application called Mirth, where I need to read a Word document that is saved in a database table in Microsoft Word binary format. Currently I can retrieve the data from the column in my Java application, but I need to convert it to readable text, or to XML or HTML.
Looking online, I found a Java library called Aspose.Words, but I can't find any methods that would read in this binary data and convert it to something readable. Has anyone used Aspose.Words for a task like this before, or does anyone have an alternative solution?
Load document from Database
You can load the Word document using a ByteArrayInputStream if it's in a database table. Please refer to http://www.aspose.com/docs/display/wordsjava/How+to++Load+and+Save+a+Document+to+Database for an article that explains saving and reading a Word document to/from a database. I have copied the relevant code from there.
public static Document readFromDatabase(String fileName) throws Exception
{
    // Create the SQL command.
    String commandString = "SELECT * FROM Documents WHERE FileName='" + fileName + "'";
    // Retrieve the results from the database.
    ResultSet resultSet = executeQuery(commandString);
    // Check that a matching record was found, and throw an exception if not.
    if (!resultSet.isBeforeFirst())
        throw new IllegalArgumentException(MessageFormat.format(
            "Could not find any record matching the document \"{0}\" in the database.", fileName));
    // Move to the first record.
    resultSet.next();
    // The document is stored in byte form in the FileContent column;
    // retrieve the bytes of the first matching record into a new buffer.
    byte[] buffer = resultSet.getBytes("FileContent");
    // Wrap the bytes from the buffer in a new ByteArrayInputStream object.
    ByteArrayInputStream newStream = new ByteArrayInputStream(buffer);
    // Read the document from the input stream and return it.
    Document doc = new Document(newStream);
    return doc;
}
Read text
Once the file is loaded, you can read its paragraphs, tables, images, etc. using the DOM; see the related documentation at http://www.aspose.com/docs/display/wordsjava/Programming+with+Documents.
But if you just want to get all the text from a document, you can do it easily by calling the toString() method as below:
System.out.println(doc.toString(SaveFormat.TEXT));
I work with Aspose as Developer Evangelist.

Uploading a File: MemoryStream vs. File System

In a business app I am creating, we allow our administrators to upload a CSV file with certain data that gets parsed and entered into our databases (all appropriate error handling is occurring, etc.).
As part of an upgrade to .NET 4.5, I had to update a few aspects of this code, and while I was doing so I ran across this answer from someone using a MemoryStream to handle uploaded files, as opposed to temporarily saving them to the file system. There's no real reason for me to change (and maybe it's even bad to), but I wanted to give it a shot to learn a bit. So I quickly swapped out this code (from a strongly-typed model, due to the upload of other metadata):
HttpPostedFileBase file = model.File;
var fileName = Path.GetFileName(file.FileName);
var path = Path.Combine(Server.MapPath("~/App_Data/Uploads"), fileName);
file.SaveAs(path);
CsvParser csvParser = new CsvParser();
Product product = csvParser.Parse(path);
this.repository.Insert(product);
this.repository.Save();
return View("Details", product);
to this:
using (MemoryStream memoryStream = new MemoryStream())
{
    model.File.InputStream.CopyTo(memoryStream);
    CsvParser csvParser = new CsvParser();
    Product product = csvParser.Parse(memoryStream);
    this.repository.Insert(product);
    this.repository.Save();
    return View("Details", product);
}
Unfortunately, things break when I do this: all my data comes out with null values, and it seems as though there is nothing actually in the MemoryStream (though I'm not positive about this). I know this may be a long shot, but is there anything obvious I'm missing here, or something I can do to better debug this?
You need to add the following:
model.File.InputStream.CopyTo(memoryStream);
memoryStream.Position = 0;
...
Product product = csvParser.Parse(memoryStream);
When you copy the file into the MemoryStream, the stream's position is left at the end, so when you then try to read it, there is no data left to read. You just need to reset the position to the start, i.e. 0.
The problem, I believe, is that your memoryStream has its position set to the end, and your CsvParser is processing from that point onwards, where there is no data.
To fix it, simply set the memoryStream position to 0 before you parse it with your csvParser.
memoryStream.Position = 0;
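Putting that fix into the original block, the working version would look like this (same CsvParser and repository as in the question):
using (MemoryStream memoryStream = new MemoryStream())
{
    model.File.InputStream.CopyTo(memoryStream);
    memoryStream.Position = 0; // rewind: CopyTo leaves the position at the end of the stream
    CsvParser csvParser = new CsvParser();
    Product product = csvParser.Parse(memoryStream);
    this.repository.Insert(product);
    this.repository.Save();
    return View("Details", product);
}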

DBF Large Char Field

I have a database file that I believe was created with Clipper, but I can't say for sure (I have .ntx files for the indexes, which I understand is what Clipper uses). I am trying to create a C# application that will read this database using the System.Data.OleDb namespace.
For the most part I can successfully read the contents of the tables, but there is one field that I cannot: a field called CTRLNUMS that is defined as CHAR(750). I have read various articles, found through Google searches, suggesting that fields larger than 255 characters have to be read through a different process than the normal assignment to a string variable. So far none of the approaches I have found has been successful.
The following is a sample code snippet I am using to read the table; it includes two options I used to read the CTRLNUMS field. Both options resulted in 238 characters being returned, even though there are 750 characters stored in the field.
Here is my connection string:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\datadir;Extended Properties=DBASE IV;
Can anyone tell me the secret to reading larger fields from a DBF file?
using (OleDbConnection conn = new OleDbConnection(connectionString))
{
    conn.Open();
    using (OleDbCommand cmd = new OleDbCommand())
    {
        cmd.Connection = conn;
        cmd.CommandType = CommandType.Text;
        cmd.CommandText = string.Format("SELECT ITEM,CTRLNUMS FROM STUFF WHERE ITEM = '{0}'", stuffId);
        using (OleDbDataReader dr = cmd.ExecuteReader())
        {
            if (dr.Read())
            {
                stuff.StuffId = dr["ITEM"].ToString();

                // OPTION 1: direct string conversion
                string ctrlNums = dr["CTRLNUMS"].ToString();

                // OPTION 2: chunked reads via GetChars
                char[] buffer = new char[750];
                int index = 0;
                int readSize = 5;
                while (index < 750)
                {
                    long charsRead = dr.GetChars(dr.GetOrdinal("CTRLNUMS"), index, buffer, index, readSize);
                    index += (int)charsRead;
                    if (charsRead < readSize)
                    {
                        break;
                    }
                }
            }
        }
    }
}
You can find a description of the DBF structure here: http://www.dbf2002.com/dbf-file-format.html
What I think Clipper used to do was modify the field structure so that, in Character fields, the Decimal Places byte held the high-order byte of the size; Character field sizes were really 256 * Decimals + Size.
I may have a C# class that reads dbfs (natively, not ADO/DAO), it could be modified to handle this case. Let me know if you're interested.
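As an illustration of that rule, here is a minimal sketch (untested against real Clipper files; field-descriptor offsets per the DBF format description linked above) that scans the field descriptors in a DBF header and computes the extended Character lengths:
using System;
using System.IO;
using System.Text;

static void DumpFieldLengths(string dbfPath)
{
    using (FileStream fs = File.OpenRead(dbfPath))
    using (BinaryReader reader = new BinaryReader(fs))
    {
        fs.Seek(32, SeekOrigin.Begin); // 32-byte file header, then 32-byte field descriptors
        byte[] d = reader.ReadBytes(32);
        while (d.Length == 32 && d[0] != 0x0D) // 0x0D terminates the descriptor array
        {
            string name = Encoding.ASCII.GetString(d, 0, 11).TrimEnd('\0'); // bytes 0-10: field name
            char type = (char)d[11];                                        // byte 11: field type
            int length = d[16];                                             // byte 16: field length
            int decimals = d[17];                                           // byte 17: decimal count
            // Clipper's hack: for long Character fields, the decimal-count byte
            // holds the high-order byte of the real size.
            int realLength = (type == 'C') ? 256 * decimals + length : length;
            Console.WriteLine("{0,-11} {1} {2}", name, type, realLength);
            d = reader.ReadBytes(32);
        }
    }
}
With the real lengths in hand, the record data can be sliced by hand rather than relying on the Jet driver's 255-character limit.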
Are you still looking for an answer? Is this a one-off job or something that needs doing regularly?
I have a Python module that is primarily intended to extract data from all kinds of DBF files ... it doesn't yet handle the length_high_byte = decimal_places hack, but it's a trivial change. I'd be quite happy to (a) share this with you and/or (b) get a copy of such a DBF file for testing.
Added later: Extended-length feature added, and tested against files I've created myself. Offer to share code with anyone who would like to test it still stands. Still interested in getting some "real" files myself for testing.
3 suggestions that might be worth a shot...
1 - use Access to create a linked table to the DBF file, then use .NET to hit the table in the Access database instead of going directly to the DBF.
2 - try the FoxPro OLEDB provider.
3 - parse the DBF file by hand. An example is here.
My guess is that #1 will work the easiest, and #3 will give you the opportunity to fine-tune your cussing skills. :)
