How can these strings be different? - character-encoding

I am facing a weird problem.
I have extracted data from an Excel file. It should contain an IBAN account number.
Then I tried to analyze the set of account numbers (which the source guarantees to be good) with a Java library.
To keep the scope of the question narrow, here is what I can't explain: the two strings below are different.
030​69
03069
The first is copied & pasted from the Excel file; the second is handwritten. Google returns different results for "abi [above number]", and only in the second case can I find that it is the bank code (ABI) for the Intesa Sanpaolo bank (the exact page displaying the ABI code, localized, is linked here).
So, to keep the scope narrow: how is that possible? Is it something to do with the encoding?
Try it yourself: press CTRL+F and type "030"; it will match both lines. Now type 6; it will match only the second line.
The same happens in Notepad++.

There's a U+200B ZERO WIDTH SPACE between 030 and 69 in the first string.
Paste the text into https://www.branah.com/unicode-converter, for example, or inspect it in a hex-capable editor.
One way to clean such strings is to whitelist characters: replace everything that isn't A-Z or 0-9, so any stray invisible character gets scrubbed.
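A minimal sketch of that whitelist idea in Python (the function name is mine, and it assumes the cell value arrives as a plain string):

import re

def clean_account_code(raw):
    # Keep only A-Z and 0-9; this also drops invisible characters such as U+200B
    return re.sub(r"[^A-Z0-9]", "", raw.upper())

print(clean_account_code("030\u200b69"))  # prints 03069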

Extracting PDF Tables into Excel in Automation Anywhere

I have a PDF with tabular data that runs over 50+ pages, and I want to extract this table into an Excel file using Automation Anywhere (I am using the community version of AA 11.3). I watched videos of the PDF integration command but haven't had any success trying this for tabular data.
Requesting assistance.
Thanks.
I am afraid that your case will be quite challenging... and the main reason for that is the values that contain multiple lines. You can still achieve what you need, and with good performance, but the code itself will not be pretty. You will also face challenges with Automation Anywhere, since it does not really provide the right tools for such a thing, and you may need to resort to scripting (VBScript) or MetaBots.
Solution 1
This one will try to use purely text extraction and Regular expressions. Mainly standard functionality, nothing too "dirty".
First you need to see what the exported data looks like. You can export to Plain or Structured text.
The Plain one is not useful at all as the data is all over the place, without any clear pattern.
The Structured one is much better as the data structure resembles the data from the original document. From looking at the data you can make these observations:
Each row contains 5 columns
All columns are always filled (at least in the visible sample set)
The last two columns can serve as a pattern "anchor" (identifier), because they contain a clear pattern (a number, followed by a minimum of two spaces, followed by a dollar sign and another number)
Rows with data are separated by a blank row
The text columns may contain a multiline value, which will duplicate the rows (this one thing makes it especially tricky)
First you need to ensure that the Structured data contains only the table, nothing else. You can probably use the Before-After string command for that.
Then you need to check whether you can reliably identify the character width of every column. You can try this for yourself if you copy the text into Excel, use Text to Columns with the Fixed Width option, and play around with the sliders.
Then you need to find a way to reliably identify each row and prepare it for the Split command in AA. For that you need a delimiter. But since each data row can actually consist of multiple text rows, you need to create a delimiter of your own. I used the Replace function with the Regular Expression option and replaced a specific pattern with a delimiter (a pipe). See here.
Now that you have added a custom delimiter, you can use the Split command to add each row into a list and loop through it.
Because each data row may consist of several text rows, you will need to use Split again, this time with [ENTER] as the delimiter. Then loop through each text line of a single data row and use the Substring function to extract data based on column width, concatenating the pieces into a single value that you store somewhere else (a rough sketch of this logic follows below).
All in all, a painful process.
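If you do end up scripting part of this (the answer above mentions VBScript or MetaBots), the delimiter-and-substring logic could look roughly like the sketch below. Python is used only to illustrate the idea, and the column widths are made-up placeholders you would measure from your own Structured export:

import re

# Hypothetical (start, end) character positions for the 5 columns - measure these from your own export
COLUMN_WIDTHS = [(0, 10), (10, 40), (40, 70), (70, 80), (80, 95)]

def parse_structured_text(text):
    # Blank lines separate data rows, so turn them into an explicit delimiter first
    marked = re.sub(r"\n\s*\n", "|", text)
    rows = []
    for block in marked.split("|"):
        cells = [""] * len(COLUMN_WIDTHS)
        # A single data row may span several text lines (multiline cell values)
        for line in block.splitlines():
            for i, (start, end) in enumerate(COLUMN_WIDTHS):
                piece = line[start:end].strip()
                if piece:
                    cells[i] = (cells[i] + " " + piece).strip()
        if any(cells):
            rows.append(cells)
    return rows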
Solution 2
This may not be applicable, but it's worth a try: open the PDF in Microsoft Word. It will give you a warning; ignore it. Word will attempt to open the document and, if you're lucky, it will recognise your table as a table. If it works, it will make the data extraction much easier and you will be able to use Macros/VBA or even a simple Copy & Paste. I tried it on a random PDF of my own and it works quite well.

Informix 4GL report to screen - Reverse

I have a generated report in Informix 4GL that prints to the screen.
I need to have one column displayed in reverse format.
I tried the following:
print line_image attribute(reverse)
But that doesn't work. Is this possible at all?
Adding on to the previous answer, you can try the following:
print "\033[7mHello \033[0mWorld"
\033[7m means to print in reverse. And, \033[0m means to go back to standard.
If you mean "is there any way at all to do it", the answer's "yes". If you mean "is there a nice easy built-in way to do it", the answer's "no".
What you'll need to do is:
Determine the character sequence that switches to 'reverse' video — store the characters in a string variable brv (begin reverse video; choose your own name if you don't like mine).
Determine the character sequence that switches to 'normal' video — store the characters in a string variable erv (end reverse video).
Arrange for your printing to use:
PRINT COLUMN 1, first_lot_of_data,
COLUMN 37, brv, reverse_data,
COLUMN 52, erv,
COLUMN 56, next_lot_of_data
There'll probably be 3 or 4 characters needed to switch. Those characters will be counted by the column-counting code in the report.
Different terminal types will have different sequences. These days, the chances are you're not dealing with the huge variety of actual green-screen terminals that were prevalent in the mid-80s, so you may be able to hard-wire your findings for the brv and erv strings. OTOH, you may have to do some fancy footwork to find the correct sequences for different terminals at runtime. Shout if you need more information on this.
A simple way which might allow you to discover the relevant sequences is to run a program such as (this hasn't been anywhere near an I4GL compiler — there are probably syntax errors in it):
MAIN
    DISPLAY "HI" AT 1,1
    DISPLAY "REVERSE" AT 1,4 ATTRIBUTE(REVERSE)
    DISPLAY "LO" AT 1,12
    SLEEP 2
END MAIN
Compile that into terminfo.4ge and run:
./terminfo.4ge # So you know what the screen looks like
./terminfo.4ge > out.file
There's a chance the redirected output won't include the display attributes. If you run cat out.file and don't see the reverse video flash up, then we have to work harder.
You could also look at the terminal's entry in the termcap file, or at its terminfo entry. Use infocmp $TERM (with the correct terminal type set in the environment variable) and look for the smso (enter standout mode) and rmso (exit standout mode) capabilities. Decipher those (I have rmso=\E[27m and smso=\E[7m for an xterm-256color terminal; the \E is ASCII ESC or \033) and use them in the brv and erv strings. Note that rmso is 5 characters long.
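If it helps to sanity-check those sequences outside I4GL, here is a small Python sketch (assuming a Unix-like system; it reads the same smso/rmso capabilities that infocmp reports):

import curses

curses.setupterm()                    # picks up $TERM from the environment
brv = curses.tigetstr("smso") or b""  # enter standout (reverse) mode
erv = curses.tigetstr("rmso") or b""  # exit standout mode

print(repr(brv), repr(erv))           # e.g. b'\x1b[7m' b'\x1b[27m' for xterm-256color
print((brv + b"REVERSE" + erv + b" normal").decode("ascii", errors="replace"))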

Understanding Textmate help

I am trying to become familiar with Textmate shortcut keys.
However, when I open my "Demo Help" for the Ruby on Rails bundle, it shows key mappings with characters like "&#x21E5;".
When I paste them in here they often display correctly, as they appear to be HTML escape codes, but in the TextMate window they are unreadable. I can't keep referring to a list of HTML escape codes!
How am I supposed to translate those key mappings? Have I got something set up wrongly, so that they show like this rather than with the normal symbols?
Cheers
George
⇥ appears to be a "normal" key symbol, meaning tab (forward tab, specifically). A quick googling led me to this page, which seems to have a number of such mappings. For reference, the phrase I googled was "key symbols mac".
Using symbols that are not printed on the key to represent a key is fairly common when describing Mac shortcuts, so you have probably not done anything wrong.
For some reason the help file is showing you the Unicode hex code instead of the actual glyph from the font. If you have the Unicode Hex Input keyboard layout enabled, you can hold down the Option key and type the four-digit code to get a quick translation. Kind of a pain while you're reading a help file, but they probably only use a few.
From the Ruby on Rails bundle help, the ones I see (my version is also showing the codes rather than the glyphs) are:
2303 is the Control key (⌃)
2318 is the Command key (⌘)
2325 is the Option key (⌥)
21E5 is the Tab key (⇥)
232B is the Delete key (⌫)
2193 is the Down arrow (↓)
21E7 is the Shift key (⇧)
21A9 is the Return key (↩)
BTW, you enter Unicode characters here on SO by entering &# followed by the decimal code and then a semicolon. So, since 0x21E5 is 8677 in decimal, &#8677; yields ⇥.
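If you'd rather not do the hex-to-decimal conversion by hand, a throwaway Python snippet can print the glyph for each code listed above:

# Code points from the bundle help, printed as glyphs
for code in (0x2303, 0x2318, 0x2325, 0x21E5, 0x232B, 0x2193, 0x21E7, 0x21A9):
    print(f"U+{code:04X} = {chr(code)} (decimal {code})")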

Parsing PDF files

I'm finding it difficult to parse a PDF file that's created in a non-English language. I used PDFBox and iText but couldn't find anything in there that could help parse this file. Here's the PDF file that I'm talking about: http://prapatti.com/slokas/telugu/vishnusahasranaamam.pdf The PDF says that it was created using LaTeX and the Tikkana font. I have the Tikkana font installed on my machine, but that didn't help. Please help me with this.
Thanks, K
When you say "parse PDF files", my first thought was that the PDF in question wasn't opening in various PDF viewers & libraries, and was therefore corrupt in some way.
But that's not the case at all. It opens just fine in Acrobat Reader X. And then I see the text on the page.
And when I copy/paste that text from the first page, I get:
Ûûp{¨¶ðQ{p{¨|={pÛû{¨>üb¶úN}l{¨d{p{¨> >Ûpû¶bp{¨}|=/}pT¶=}Nm{Z{Úpd{m}a¾Ú}mp{Ú¶¨>ztNð{øÔ_c}m{ТÁ}=N{Nzt¶ztbm}¥Ázv¬b¢Á
Á ÛûÁøÛûzÏrze¨=ztTzv}lÛzt{¨d¨c}p{Ðu{¨½ÐuÛ½{=Û Á{=Á Á ÁÛûb}ßb{q{d}p{¨ze=Vm{Ðu½Û{=Á
That's from Reader.
Much of the text in this PDF is written using various "Type 3" fonts. These fonts claim to use "WinAnsiEncoding" (also known as code page 1252) with a "differences" array. This differences array is wrong:
47 /BB 61 /BP /BQ 81 /C6...
The first number is the code point being replaced; the second is the name of a character that replaces the original value at that code point.
There are no such character names as BB, BP, BQ, C9, and so on, so when you copy and paste that text, you get the garbage above.
I'm sorry, but the only reliable way to extract text from such a PDF is OCR (optical character recognition).
Eh... Long shot idea:
If you can find the specific versions of the specific fonts used to generate this PDF, you just might be able to determine the actual stream contents of known characters converted to Type 3 fonts in this way.
Once you have these known streams, you can compare them to the streams in the PDF and use that to build your own translation table.
You could either fix the existing PDF[s] (by changing the names in the encoding dictionary and Type 3 charproc entries) such that these text extractors will work correctly, or just grab the bytes out of the stream and translate them yourself.
The workflow would go something like this:
For each character in a font used in the file:
render it to PDF by itself using the same LaTeX/Ghostscript versions.
Open the PDF and find the CharProc for that particular known character.
Store that stream along with the known character used to build it.
For each text byte in the PDF to be interpreted:
Get the glyph name for the given byte based on the existing encoding array
Get the "char proc" stream for that glyph name and compare it to your known char procs.
NOTE: This could be rewritten to be much more efficient with some caching, but it gets the idea across (I hope).
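A rough Python sketch of that translation-table idea. The helpers render_char_to_pdf, extract_charproc_stream, and charprocs_for_font are hypothetical stand-ins for whatever LaTeX invocation and PDF library you actually use; only the comparison logic is shown:

def build_translation_table(known_chars, render_char_to_pdf, extract_charproc_stream):
    # Map a Type 3 CharProc stream to the known character that produced it
    table = {}
    for ch in known_chars:
        pdf_path = render_char_to_pdf(ch)           # same LaTeX/Ghostscript versions as the target PDF
        stream = extract_charproc_stream(pdf_path)  # raw bytes of that glyph's CharProc
        table[stream] = ch
    return table

def translate_text_bytes(text_bytes, encoding_array, charprocs_for_font, table):
    # Replace each byte with the known character whose CharProc stream matches
    out = []
    for b in text_bytes:
        glyph_name = encoding_array[b]              # e.g. /BB, /BP from the Differences array
        stream = charprocs_for_font[glyph_name]
        out.append(table.get(stream, "?"))          # '?' where nothing matches
    return "".join(out)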
All that requires a fairly deep understanding of PDF and the parsing methods involved. But it just might work. Might not too...

Sanitize pasted text from MS-Word

Here's my wild and whacky pseudo-code. Anyone know how to make this real?
Background:
This dynamic content comes from a CKEditor, and a lot of folks paste Microsoft Word content into it. No worries: if I just render the attribute untouched, it loads pretty. But the catch is that I want it abbreviated to just 125 characters. When I add truncation, all of the Microsoft Word markup starts popping up. I then added simple_format, and sanitize, and truncate, and even made my controller gsub out specific strings that MS Word produces, but there are too many of them, and it seems like an awfully messy way to accomplish this. And yet, rendered by itself, the content is clean. So I thought, why not just slice it? However, the Microsoft Word text becomes blank but still holds its numbered position in the string. So I came up with this (probably awful) solution below.
It's in three steps.
When the text is rendered, it doesn't display any of the MS Word junk, but that junk still occupies character positions, so a plain slice starts in the wrong place. So I want to use a regexp to find the first actual character.
Take that character and find out its numbered position in the total string.
Use a slice statement to cut from that position.
def about_us_truncated
  # find the index of the first actual (alphanumeric) character
  start = self.about_us.index(/[A-Za-z0-9]/) || 0
  # slice 125 characters starting from that position
  self.about_us[start, 125]
end
The only other idea I've got is a regex statement that allows it to explicitly slice only actual characters, like so:
about_us([a-zA-Z][0..125]), but that is definitely not how it is written.
Here is some sample text of MS Word junk:
≪! [If Gte Mso 9]>≪Xml>≪Br /> ≪O:Office Document Settings>≪Br /> ≪O:Allow Png/>≪Br /> ≪/O:Off...
You haven't provided much information to go off of, but don't be too leery of trying to build this regex on your own before you seek help...
Take your sample text and paste it in Rubular in the test string area and start building your regex. It has a great quick reference at the bottom.
I stumbled across this:
http://gist.github.com/139987
It looks like it requires the sanitize gem.
This is technically not a straight answer, but it seems like the best possible one you can find.
To prevent the MS Word junk, you should be using CKEditor's built-in MS Word sanitizer, because writing a regex for it can get very complicated and you can very easily break tags in half and destroy your site with it.
What I did as a workaround was to force paste as plain text in CKEditor.
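For what it's worth, the common thread in all of these suggestions is the order of operations: strip the Word markup first, then truncate. A rough sketch of that idea follows; Python is used purely to illustrate (in the Rails app the equivalent would be the sanitize gem plus a plain slice), and the regexes are simplistic assumptions rather than a complete Word cleaner:

import re

def truncate_clean(html, limit=125):
    # Drop MS Word conditional comments like <!--[if gte mso 9]> ... <![endif]-->
    text = re.sub(r"<!--\[if[^\]]*\]>.*?<!\[endif\]-->", "", html, flags=re.S | re.I)
    # Drop any remaining tags, collapse whitespace, then truncate
    text = re.sub(r"<[^>]+>", " ", text)
    text = re.sub(r"\s+", " ", text).strip()
    return text[:limit]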
