dataset import error for AutoML text classification - machine-learning

I have been trying to import a dataset into AutoML Natural Language Text Classification. However, the UI gave me an error of "Invalid row in CSV file", Error details: Error detected: "FILE_TYPE_NOT_SUPPORTED".
I am uploading a CSV file; what should I do?

Please make sure there are no hidden quotes in your dataset. The complete requirements can be found on the "Preparing your training data" page.
Common .csv errors (a small validation sketch follows this list):
Using Unicode characters in labels. For example, Japanese characters are not supported.
Using spaces and non-alphanumeric characters in labels.
Empty lines.
Empty columns (lines with two successive commas).
Missing quotes around embedded text that includes commas.
Incorrect capitalization of Cloud Storage text paths.
Incorrect access control configured for your text files. Your service account should have read or greater access, or files must be publicly-readable.
References to non-text files, such as JPEG files. Likewise, files that are not text files but that have been renamed with a text extension will cause an error.
The URI of a text file points to a different bucket than the current project. Only files in the bucket of the current project can be accessed.
Non-CSV-formatted files.
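
A quick way to catch several of these before uploading is a small pre-flight check. This is only a sketch under assumptions: it expects rows ending in a label, with an optional gs:// URI in the first column, and the label rule (ASCII letters, digits and underscores) is my reading of the list above, not the authoritative requirement; the file name is made up.

import csv

def check_automl_csv(path):
    # Collect human-readable problems for each row of the training CSV.
    problems = []
    with open(path, newline='', encoding='utf-8') as f:
        for i, row in enumerate(csv.reader(f), start=1):
            if not row or all(not cell.strip() for cell in row):
                problems.append(f'line {i}: empty line or empty columns')
                continue
            label = row[-1].strip()
            # Assumed label rule: ASCII letters, digits and underscores only.
            if not label.isascii() or not label.replace('_', '').isalnum():
                problems.append(f'line {i}: label {label!r} contains spaces or '
                                'non-alphanumeric/non-ASCII characters')
            first = row[0].strip()
            # If the row references a Cloud Storage object, it should be a text file.
            if first.startswith('gs://') and not first.lower().endswith('.txt'):
                problems.append(f'line {i}: {first!r} does not look like a text file')
    return problems

for issue in check_automl_csv('training_data.csv'):
    print(issue)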

Related

Reading multiline files in Apache Beam separated with custom delimiters

I have a text file that uses a two-character delimiter (#*), and one of the fields contains multiline values. For example:
test#*123#*"contain
multiline"
test#*321#*"contain
multiline"
Those are actually 2 rows, but in the text file they span 4 lines. My approach was to retrieve the files with FileIO and then use a ParDo to open each file, look at the last character of a line, and if the line does not end with " read the next line and append it to the first one. My concern is that Beam processes the file in bundles, so if the 2 lines are not in the same bundle this will fail.
Is my understanding correct? Please let me know the best way to handle this.
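
One way to avoid the bundle concern, sketched below with the Beam Python SDK, is to read each matched file as a whole inside a single DoFn call and split it into logical rows there, treating a newline as a row boundary only when it falls outside double quotes. The input pattern and the final field split are illustrative, and this assumes each file fits comfortably in memory.

import apache_beam as beam
from apache_beam.io import fileio

def split_records(readable_file):
    # Read the whole file, then yield logical rows: a newline only ends a row
    # when we are outside double quotes, so "contain\nmultiline" stays intact.
    text = readable_file.read_utf8()
    row, in_quotes = [], False
    for ch in text:
        if ch == '"':
            in_quotes = not in_quotes
        if ch == '\n' and not in_quotes:
            if row:
                yield ''.join(row)
            row = []
        else:
            row.append(ch)
    if row:
        yield ''.join(row)

with beam.Pipeline() as p:
    rows = (
        p
        | fileio.MatchFiles('/path/to/input/*.txt')  # hypothetical pattern
        | fileio.ReadMatches()                       # one element per file
        | beam.FlatMap(split_records)                # whole file handled in one call
        | beam.Map(lambda line: line.split('#*'))    # then split on the custom delimiter
    )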

Unknown exported text artifacts $FL? $G?

I have received a flat file to import into a DB. This flat file has many problems: the French text characters are converted to ? (it is not my editor; the file really has the 0x3F = "?" ASCII character in place of various accented letters). But the main problem is that the file is studded with strange artifacts: "$FL?", "$B?", "$E?". They appear anywhere, including in the middle of words.
It does not seem to be quoted-printable.
Have you faced such symptoms before, and what was the cause?
These artifacts are created by a buggy upstream program and are not related to encoding changes.
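
If the goal is simply to get a clean import, one option is to strip the artifacts before loading. Below is a hedged sketch, assuming the artifacts always match a dollar sign, one or more capital letters, then a question mark (verify that against the real file), with made-up file names; note that the accented letters already flattened to "?" cannot be recovered this way.

import re

ARTIFACT = re.compile(r'\$[A-Z]+\?')  # matches "$FL?", "$B?", "$E?", ...

with open('export.txt', encoding='latin-1') as src, \
     open('export_clean.txt', 'w', encoding='latin-1') as dst:
    for line in src:
        dst.write(ARTIFACT.sub('', line))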

How do I make Cypher respect character encoding when using LOAD CSV in browser?

My case: a list of Danish-named students (with names including characters such as ü, æ, ø, å). Minimal working example:
CSV file:
Fornavn;Efternavn;Mobil;Adresse
Øjvind;Ørnenæb;87654321;Paradisæblevej 125, 5610 Åkirkeby
Süzette;Ågård;12345678;Ærøvej 123, 2000 Frederiksberg
In-browser neo4j-editor:
$ LOAD CSV WITH HEADERS FROM 'file:///path/to/file.csv' AS line FIELDTERMINATOR ";"
CREATE (:Elev {fornavn: line.Fornavn, efternavn: line.Efternavn, mobil: line.Mobil, adresse: line.Adresse})
Resulting in registrations like:
[Neo4j browser screenshot, containing ?-characters where Danish/German characters are wanted.] My data come from a Learning Management System into Excel. When exporting as CSV from Excel, I can control the file encoding via the Save As dialogue box. I have tried exporting from Excel as "UTF-8" (which the Neo4j manual says it wants), "ISO Western European", "Windows Western European" and "Unicode", each to a separately named file, and adjusted the FROM 'file:///path/to/file.csv' clause accordingly.
Intriguingly, exactly the same misrepresentation results regardless of which (apparent?) file encoding I request from Excel when saving. When copy-pasting the names and addresses directly into the editor, I do not encounter the problem.
Check Michael Hunger's blog post here, which contains some tips, namely:
If you use non-ASCII characters (umlauts, accents, etc.), make sure to use the appropriate locale or provide the system property -Dfile.encoding=UTF8.
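
A practical workaround is to re-encode the exported file to UTF-8 yourself before running LOAD CSV, since Excel's "Unicode" CSV export is typically UTF-16 and its plain CSV export usually uses a Windows code page. A minimal sketch, assuming the source encoding is cp1252 (adjust to whatever Excel actually produced) and with illustrative file names:

# Re-encode the Excel export as UTF-8, which is what LOAD CSV expects.
with open('file.csv', encoding='cp1252') as src:
    data = src.read()
with open('file_utf8.csv', 'w', encoding='utf-8') as dst:
    dst.write(data)

Then point the FROM 'file:///...' clause at the re-encoded file.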

Convert a Text file into ARFF Format

I know how to convert a set of text or web page files into an ARFF file using TextDirectoryLoader.
I want to know how to convert a single text file into an ARFF file.
Any help will be highly appreciated.
Please be more specific. Anyway:
If the text in the file corresponds to a single document (that is, a single instance), then all you need is to replace all newlines with the escape code \n so the full text sits on one line, then manually format it as an ARFF with a single text attribute and a single instance (a small sketch of this appears below).
If the text corresponds to several instances (e.g. documents), then I suggest writing a script to break it into several files and applying TextDirectoryLoader. If there is any specific formatting (e.g. instances enclosed in XML tags), you can either do the same (by taking advantage of the XML structure) or write a custom Loader class in WEKA to recognize your format and build an Instances object.
If you post an example, it would be easier to get a more precise suggestion.
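
For the single-document case (the first option above), here is a minimal Python sketch that escapes the newlines and writes an ARFF with one string attribute and one instance; the relation and attribute names and the file names are made up.

def text_file_to_arff(text_path, arff_path, relation='single_document'):
    with open(text_path, encoding='utf-8') as f:
        text = f.read()
    # Escape backslashes, quotes and newlines so the document fits on one data line.
    text = text.replace('\\', '\\\\').replace('"', '\\"').replace('\n', '\\n')
    with open(arff_path, 'w', encoding='utf-8') as out:
        out.write(f'@relation {relation}\n\n')
        out.write('@attribute text string\n\n')
        out.write('@data\n')
        out.write(f'"{text}"\n')

text_file_to_arff('document.txt', 'document.arff')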

Parsing PDF files

I'm finding it difficult to parse a PDF file that was created in a non-English language. I used PDFBox and iText but couldn't find anything in them that could help parse this file. Here's the PDF file that I'm talking about: http://prapatti.com/slokas/telugu/vishnusahasranaamam.pdf The PDF says that it was created using LaTeX and the Tikkana font. I have the Tikkana font installed on my machine, but that didn't help. Please help me with this.
Thanks, K
When you say "parse PDF files", my first thought was that the PDF in question wasn't opening in various PDF viewers & libraries, and was therefore corrupt in some way.
But that's not the case at all. It opens just fine in Acrobat Reader X. And then I see the text on the page.
And when I copy/paste that text from the first page, I get:
Ûûp{¨¶ðQ{p{¨|={pÛû{¨>üb¶úN}l{¨d{p{¨> >Ûpû¶bp{¨}|=/}pT¶=}Nm{Z{Úpd{m}a¾Ú}mp{Ú¶¨>ztNð{øÔ_c}m{ТÁ}=N{Nzt¶ztbm}¥Ázv¬b¢Á
Á ÛûÁøÛûzÏrze¨=ztTzv}lÛzt{¨d¨c}p{Ðu{¨½ÐuÛ½{=Û Á{=Á Á ÁÛûb}ßb{q{d}p{¨ze=Vm{Ðu½Û{=Á
That's from Reader.
Much of the text in this PDF is written using various "Type 3" fonts. These fonts claim to use "WinAnsiEncoding" (also known as code page 1252) with a "differences" array. This differences array is wrong:
47 /BB 61 /BP /BQ 81 /C6...
The first number is the code point being replaced; the second is the name of the character that replaces the original value at that code point.
There are no such character names as BB, BP, BQ, C9, and so on. So when you copy-paste that text, you get the above garbage.
I'm sorry, but the only reliable way to extract text from such a PDF is OCR (optical character recognition).
Eh... Long shot idea:
If you can find the specific versions of the specific fonts used to generate this PDF, you just might be able to determine the actual stream contents of known characters converted to Type 3 fonts in this way.
Once you have these known streams, you can compare them to the streams in the PDF and use that to build your own translation table.
You could either fix the existing PDF[s] (by changing the names in the encoding dictionary and Type 3 charproc entries) such that these text extractors will work correctly, or just grab the bytes out of the stream and translate them yourself.
The workflow would go something like this:
For each character in a font used in the PDF:
render it to a PDF by itself using the same LaTeX/Ghostscript versions.
Open the PDF and find the CharProc for that particular known character.
Store that stream along with the known character used to build it.
For each text byte in the PDF to be interpreted:
Get the glyph name for the given byte based on the existing encoding array
Get the "char proc" stream for that glyph name and compare it to your known char procs.
NOTE: This could be rewritten to be much more efficient with some caching, but it gets the idea across (I hope).
All that requires a fairly deep understanding of PDF and the parsing methods involved. But it just might work. Might not too...
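
To make the matching step concrete, here is a hedged sketch of just the table-building part, assuming you have already extracted the Type 3 CharProc streams (with whatever PDF library you prefer): from your reference PDFs keyed by the real character, and from the target PDF keyed by the bogus glyph name.

def build_translation_table(known_charprocs, pdf_charprocs):
    # known_charprocs: {real_character: charproc_stream_bytes} from reference PDFs.
    # pdf_charprocs:   {glyph_name: charproc_stream_bytes} from the PDF to decode.
    # Returns {glyph_name: real_character} for every stream that matches exactly.
    by_stream = {stream: char for char, stream in known_charprocs.items()}
    return {
        glyph: by_stream[stream]
        for glyph, stream in pdf_charprocs.items()
        if stream in by_stream
    }

Exact byte equality is the simplest comparison; in practice the streams may need some normalization before comparing.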
