I want to export a CSV file that contains Hebrew characters in my ASP.NET MVC application - asp.net-mvc

I want to export a CSV file that contains Hebrew characters from my ASP.NET MVC application.
I have tried many encodings, but none work: the Hebrew characters are not displayed as they should be.
Does anybody have an idea?
System.Text.UnicodeEncoding Enc = new UnicodeEncoding();
HttpContext.Current.Response.AddHeader("Content-Length", Enc.GetByteCount(strExport).ToString());
HttpContext.Current.Response.BinaryWrite(Enc.GetBytes(strExport));
HttpContext.Current.Response.ContentEncoding = System.Text.Encoding.GetEncoding("windows-1255");
//HttpContext.Current.Response.Charset = "iso-8859-8";
HttpContext.Current.Response.ContentType = "text/csv";
HttpContext.Current.Response.AddHeader("content-disposition", string.Format("attachment;inline; filename={0}.csv", fileName));
HttpContext.Current.Response.End();

Check this out and see if setting the encoding helps: http://msdn.microsoft.com/en-us/library/system.text.encoding.aspx

Once upon a time we had multiple clients, Hebrew ones included, sending text files for import into MySQL, SQL Server, etc. The company had standardized on UTF-8 as the encoding for everything. That was a few years ago, so YMMV.
Might be easier to debug if you show us a code sample.
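For what it's worth: when Excel shows Hebrew (or any non-Latin) CSV text as garbage, the usual culprit is a missing byte-order mark, since Excel falls back to a legacy code page when a CSV file has no BOM. Below is a minimal sketch, not the asker's exact code, that reuses strExport and fileName from the question and writes the UTF-8 preamble before the data:

var response = HttpContext.Current.Response;
response.Clear();
response.ContentType = "text/csv";
response.AddHeader("Content-Disposition", string.Format("attachment; filename={0}.csv", fileName));
// Write the UTF-8 BOM first so Excel detects the encoding.
byte[] bom = System.Text.Encoding.UTF8.GetPreamble();
byte[] body = System.Text.Encoding.UTF8.GetBytes(strExport);
response.AddHeader("Content-Length", (bom.Length + body.Length).ToString());
response.BinaryWrite(bom);
response.BinaryWrite(body);
response.End();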

Related

OleDb Connection string for tab-delimited files

I need to read a variety of data file types, such as xlsx, csv, txt, and mdb, and I want to use an OleDB connection so that the process of reading the files is the same, just with a different connection string. However, OleDB ignores the delimiter in connection strings such as the following and only reads comma-delimited data.
Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};Extended Properties='Text;HDR=Yes;Delimited(\t)';
Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};Extended Properties='Text;HDR=Yes;FMT=TabDelimited';
I would prefer to have the OleDB engine do the work rather than parse the tab-delimited files myself.
There are several StackOverflow questions concerning this, and the solution is usually to create an .ini file in the same directory, but sometimes my users do not have write access to the folder. Seeing as all of the StackOverflow questions similar to mine are at least a couple years old, does anybody have any updated information on this issue?
This is how I've used the | delimiter to read pipe-delimited .csv or .txt files with OleDB; note, however, that I was using the ACE engine and constructing the connection string from C#:
connectionString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + Path.GetDirectoryName(catalogFile) + ";Extended Properties='text;HDR=YES;FMT=Delimited(" + (char)124 + ")'";
(char)124 is the ASCII code for |. Knowing that the ASCII code for TAB is 9, you may try using this in your connection string:
...;Extended Properties='text;HDR=YES;FMT=Delimited(" + (char)9 + ")'";
Try the above code snippet, and also try your code using the MS Access Database Engine (ACE) driver. Since it's newer, it may handle delimiter configuration better.
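To make that concrete, here is a minimal sketch of reading a tab-delimited file through the ACE provider, assuming the provider is installed and honors the FMT setting (if the delimiter is still ignored, the schema.ini fallback is unavoidable). ReadTabDelimited is a hypothetical helper name:

using System.Data;
using System.Data.OleDb;
using System.IO;

static DataTable ReadTabDelimited(string filePath)
{
    // The data source is the directory; the file itself is the "table".
    string connectionString =
        "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + Path.GetDirectoryName(filePath) +
        ";Extended Properties='text;HDR=YES;FMT=Delimited(" + (char)9 + ")'";
    var table = new DataTable();
    using (var connection = new OleDbConnection(connectionString))
    using (var adapter = new OleDbDataAdapter("SELECT * FROM [" + Path.GetFileName(filePath) + "]", connection))
    {
        adapter.Fill(table); // Fill opens and closes the connection itself
    }
    return table;
}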

How to correctly handle character encoding when using Postgresql's copy_data function?

In my Rails app, I managed to stream large CSV files directly from Postgres based on solutions mentioned in this SO post. My working code looks something like this:
query = <A Long SQL Query String>
response.headers["Cache-Control"] = "no-cache"
response.headers["Content-Type"] = "text/csv; charset=utf-8"
response.headers["Content-Disposition"] = %(attachment; filename="#{csv_filename}")
response.headers["Last-Modified"] = Time.now.ctime.to_s

conn = ActiveRecord::Base.connection.raw_connection
conn.copy_data("COPY (#{query}) TO STDOUT WITH (FORMAT CSV, HEADER TRUE, FORCE_QUOTE *, ESCAPE E'\\\\');") do
  while row = conn.get_copy_data
    response.stream.write row
  end
end
response.stream.close
Some of the columns (VARCHAR) being queried contain either English or Chinese strings. The CSV file resulting from the above code doesn't show the Chinese characters as-is. Instead, I get something like this:
大大 文文
Am I supposed to change the way I'm using the copy_data function, or is there something I could do to the CSV file to solve this? I've tried saving the file as a UTF-8 .txt file, as well as the convert_to function mentioned in the copy_data documentation, but to no avail.
This depends on the original encoding of the CSV file.
Check it on Linux with:
file -i your_file
Are you sure it's not UTF-16 or GB 18030?
Also, what encoding is your database set up with?
Run \l in psql to see it.
So it boiled down to my MS Excel not being able to render the Chinese characters correctly. On macOS, opening the same .csv file with the Numbers app (or even Atom, for that matter) resolved the issue for me. Presumably Excel assumes a legacy code page for CSV files that lack a byte-order mark, while the other applications detect UTF-8 on their own.

PHPEXCEL weird characters on form inputs

I need some help with the PHPEXCEL library. Everything works great: I'm successfully extracting my SQL query to an Excel5 file, which I need to give to a transport company so they can auto-collect information about packages. Unfortunately, the generated Excel file has some ASCII characters between each letter of the cell text, and when the Excel file is imported you need to delete these characters manually.
If I open the Excel file, everything is fine and I see: COMPANY NAME. If I open the Excel file with Notepad++, I see the cell values this way: C(NUL)O(NUL)M(NUL)P(NUL)A(NUL)N(NUL)Y N(NUL)A(NUL)M(NUL)E
If I open the file again with Excel and save it, then reopen it with Notepad++, I see COMPANY NAME.
So I do not understand why, every time I create an Excel file using PHPEXCEL, every letter of every word is followed by (nul).
So how do I prevent the generated Excel file from including (nul) between every letter?
Also, the files generated from the original PHPExcel samples are filled with (nul) too, and if you open and save one of them, the (nul) is gone.
Any help would be appreciated, thanks.
What is the (nul)? 0x00? char(0)?
ok, here is the example:
error_reporting(E_ALL);
ini_set('display_errors', TRUE);
ini_set('display_startup_errors', TRUE);
date_default_timezone_set('Europe/London');
if (PHP_SAPI == 'cli')
    die('Only available in a browser');
require_once dirname(__FILE__) . '/Classes/PHPExcel.php';
$objPHPExcel = new PHPExcel();
$objPHPExcel->getProperties()->setCreator("Solidus")
    ->setLastModifiedBy("Solidus")
    ->setTitle("Import web")
    ->setSubject("Import File")
    ->setDescription("n.a")
    ->setKeywords("n.a")
    ->setCategory("n.a");
$objPHPExcel->setActiveSheetIndex(0)
    ->setCellValueExplicit("A1", "COMPANY")
    ->setCellValue('A2', 'SAMSUNG');
$objPHPExcel->getActiveSheet()->setTitle('DDT');
$objPHPExcel->setActiveSheetIndex(0);
header('Content-Type: application/vnd.ms-excel');
header('Content-Disposition: attachment;filename="TEST.xls"');
header('Cache-Control: max-age=0');
header('Cache-Control: max-age=1');
header('Cache-Control: private', false);
$objWriter = PHPExcel_IOFactory::createWriter($objPHPExcel, 'Excel5');
ob_end_clean();
$objWriter->save('php://output');
As you can see from this little example, the script creates an Excel5 file with 2 cells: A1 = COMPANY, A2 = SAMSUNG.
When I send this file to the transport company and they import it into their system, as you can see from the picture, there is a weird character between each letter.
So I noticed that every time I open the generated Excel5 file with Notepad++ I get:
S(nul)A(nul)M(nul)S(nul)U(nul)N(nul)G
If I save the file with Excel and then open it again with Notepad++ I get:
SAMSUNG
and this file is OK for the transport company.
So my question is: how do I keep the generated file from containing this (nul) character between each letter?
some help?
I found the solution myself; I'll explain it just in case anyone else has this problem:
there is no way to change how the Excel file is encoded by PHPEXCEL.
So I figured out the problem was in reading the file. I ran some simulations, reproduced the problem, and every time I read the file and put the result into inputs I got weird characters:
C�O�M�P�A�N�Y�
If I set the output encoding as follows:
$excel->setOutputEncoding('UTF-8');
the file loads fine, so the problem was not creating the Excel file, but reading it.
If I print the variable with echo I get: "COMPANY",
if I put the variable in an input as its value I get: "C�O�M�P�A�N�Y�"
Setting the output encoding solves the problem, but I would like to know why there is a difference when I put the variable in an input as its value, thanks.
(The likely explanation for the (nul) bytes: the Excel5/BIFF format stores text as UTF-16LE, so a reader that treats the bytes as a single-byte encoding sees a NUL after every Latin character, exactly as Notepad++ shows.)

Japanese language translation issue in CSV File - ASP.NET MVC

I am facing an issue when exporting Japanese text in CSV format: junk characters are exported instead of the original Japanese text. I am using the .NET MVC FileStreamResult to export records to a CSV file, with UTF8 as the encoding (I have also tried some other encodings, with no luck). While debugging, I can convert the string to a memory stream and back, and I can see the original Japanese text being exported. But once the export completes and I open the CSV file, I only see junk characters instead of the expected text. If I open the CSV file in Notepad (opening it in Notepad is NOT my requirement; I mention Notepad only to verify that the Japanese text is there), I can see the expected Japanese text. It would be really helpful if someone could help me find the root cause of this issue and a resolution.
Ex. 東京都品川区大崎 gets written as æ±äº¬éƒ½å“å·åŒºå¤§å´Ž
Note: I can see the expected Japanese text exported properly if I open the sample .CSV file using LibreOffice Calc or Linux's default gEdit. The issue only occurs when opening the CSV file with MS Office.
Please find the code attached below.
Controller/Action executed when clicking the export-to-CSV button:
================================================================================
[HttpPost]
[ValidateInput(false)]
public FileStreamResult SaveCustomerInfo()
{
    return ExportToCsv();
}
================================================================================
private static FileStreamResult ExportToCsv()
{
    var exportedData = new StringBuilder();
    exportedData
        .AppendLine("実行日,口座番号,支店番号,アカウント名,支店名,の/受益秩序,ステートメント日,入力日,お問い合わせ番号, ,Date Range")
        .Append(
            "CS0001,Demo FName,Demo LName,8/20/2015,\"Demo User Address\",City,Country,08830,0123456789,15813,Absolute from 8/20/2015 to 8/22/2015");
    var stream = PrintingHelper.StringToMemoryStream(Encoding.UTF8, exportedData.ToString());
    var fileStreamResult = new FileStreamResult(stream, "text/csv")
    {
        FileDownloadName =
            new StringBuilder("TestExportedFileInCsv")
                .Append(".csv").ToString()
    };
    return fileStreamResult;
}
It sounds as though you haven't installed the MS Office language pack on the machine where you are trying to open the CSV.
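If a language pack doesn't help, another thing worth trying: Excel assumes a legacy code page for CSV files that carry no byte-order mark, which matches the symptom here (Notepad, LibreOffice Calc, and gEdit detect UTF-8 on their own; Excel does not). A sketch of the ExportToCsv method with the UTF-8 preamble written into the stream before the data; a plain MemoryStream stands in for the asker's PrintingHelper:

private static FileStreamResult ExportToCsv()
{
    var exportedData = new StringBuilder();
    // ... build the CSV exactly as in the question ...
    var stream = new MemoryStream();
    byte[] preamble = Encoding.UTF8.GetPreamble(); // the UTF-8 BOM: EF BB BF
    stream.Write(preamble, 0, preamble.Length);
    byte[] body = Encoding.UTF8.GetBytes(exportedData.ToString());
    stream.Write(body, 0, body.Length);
    stream.Position = 0; // rewind so FileStreamResult streams from the start
    return new FileStreamResult(stream, "text/csv")
    {
        FileDownloadName = "TestExportedFileInCsv.csv"
    };
}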

BlackBerry - language support for Chinese

I have localised my app by adding the correct resource files for various European languages / dialects.
I have the required folder in my project: ./res/com/demo/localization
It contains the required files e.g. Demo.rrh, Demo.rrc, Demo_de.rrc etc.
I want to add support for 2 Chinese dialects, and I have the translations in an Excel file. On iPhone, they are referred to by the codes zh_TW & zh_CN. Following the pattern used for German, I created 2 extra files called Demo_zh_TW.rrc & Demo_zh_CN.rrc.
I opened the file Demo_zh_CN.rrc in Eclipse's text editor and pasted in a line of the Chinese translation, using the normal resource file format:
START_LOCATION#0="开始位置";
When I tried to save the file, I got Eclipse's error about the Cp1252 character encoding:
Save could not be completed.
Reason:
Some characters cannot be mapped using "Cp1252" character encoding.
Either change the encoding or remove the characters which are not
supported by the "Cp1252" character encoding.
It seems the Eclipse editor will accept the Chinese characters, but the resource tool expects these characters to be saved in the resource file as Java Unicode \u escapes.
How do I add language support for these 2 regions without manually copying and pasting each string?
Is there maybe a tool I can use to \u-escape the strings from Excel so they can be saved using Code page 1252 Latin characters only?
I'm not aware of any readily available tools for working with BlackBerry's peculiar localization style.
Here's a snippet of Java-SE code I use to convert the UTF-8 strings I get for use with BlackBerry:
// Requires java.nio.charset.CharsetEncoder.
// Copies characters the target charset can encode as-is and replaces the rest
// with Java \uXXXX escapes.
private static String unicodeEscape(String value, CharsetEncoder encoder) {
    StringBuilder sb = new StringBuilder();
    for (char c : value.toCharArray()) {
        if (encoder.canEncode(c)) {
            sb.append(c);
        } else {
            sb.append("\\u");
            sb.append(hex4(c));
        }
    }
    return sb.toString();
}

// Left-pads the hex representation of the char to four digits.
private static String hex4(char c) {
    String ret = Integer.toHexString(c);
    while (ret.length() < 4) {
        ret = "0" + ret;
    }
    return ret;
}
Call unicodeEscape with the ISO-8859-1 encoder obtained via Charset.forName("ISO-8859-1").newEncoder().
I suggest you look at Blackberry Hindi and Gujarati text display
You need to use the resource editor to create these files with the right encoding; Eclipse will escape the characters automatically.
This is a problem with the encoding of your resource file: code page 1252 contains Latin characters only.
I have never worked with Eclipse, but there should be somewhere to specify the encoding of the file; if possible, set your default encoding for files to UTF-8. This will handle your Chinese characters.
You could also use a good editor like Notepad++ or EmEditor to set the encoding of your file.
See here for how to configure Eclipse to use UTF-8 by default.
