I have a text file to process, with some example content as follows:
[FCT-FCTVALUEXXXX-IA]
Name=value
Label = value
Zero or more lines of text
Abbr=
Zero or more lines of text
Field A=1
Field B=0
Zero or more lines of text
Hidden=N
[Text-FCT-FCTVALUEXXXX-IA-Note]
One or more note lines
[FCT-FCT-FCTVALUEZ-IE-DETAIL]
Zero or more lines of text
[FCT-FCT-FCTVALUEQ-IA-DETAIL]
Zero or more lines of text
[FCT-_FCTVALUEY-IA]
Name=value
Zero or more lines of text
Label=value
Zero or more lines of text
Field A=1
Abbr=value
Field A=1
Zero or more lines of text
Hidden=N
I need to find sections like this:
[FCT-FCTVALUEXXXX-IA]
Name=value
Label = value
Zero or more lines of text
Abbr=
Zero or more lines of text
Field A=1
Field B=0
Zero or more lines of text
Hidden=N
and extract FCT-FCTVALUEXXXX-IA, Name, Label, Abbr, Fields A and B, and Hidden, and then find a corresponding section (if it exists):
[Text-FCT-FCTVALUEXXXX-IA-Note]
One or more note lines
and extract the Note lines as a single string.
I don't care about the sections
[FCT-FCT-FCTVALUEZ-IE-DETAIL]
Zero or more lines of text
All three sorts of sections can appear anywhere in the file, including right at the end, and there's no predictable relationship in position between the sections.
The order of Abbr and Fields A and B cannot be guaranteed but they always appear after Name and Label and before Hidden.
What I have so far:
strParse = "(%[FCT%-.-%-)([IF])([EA])%]%c+Name=(.-)%c.-Label=(.-)%c(.-)Hidden=(%a)%c" -- can't pull everything out at once because the order of some fields is not predictable
for id, rt, ft, name, label, detail, hidden in strFacts:gmatch(strParse) do
    -- extract details
    abbr = detail:match("Abbr=(.-)%c") -- may be blank
    if abbr == nil then abbr = "" end
    FieldA = detail:match("Field A=(%d)")
    FieldB = detail:match("Field B=(%d)")
    -- need to sanitise id, which could have a bunch of extraneous material tacked on the front, and use it to get the Note
    ident = id:match(".*(%[FCT%-.-%-$)") .. rt .. ft
    Note = ParseAutonote(ident) -- this function parses the note; I've yet to test it, so a dummy version returns ""
    tblResults[name] = {ident, rt, ft, name, label, abbr, FieldA, FieldB, hidden, Note}
end
Most of it works OK (after many hours of working on it), but the piece that isn't working is:
(".*(%[FCT%-.-%-$)")
which is supposed to pull out the final occurrence of FCT-sometext-
in the string id
My logic: anchor the search to the end of the string and capture the shortest possible string beginning with "[FCT-" and ending with "-" at the end of the string.
Given a value of either "[FCT-_ABCD-PDQR-" or
"[FCT-XYZ-DETAIL]lines of text[FCT-_ABCD-PDQR-", it returns nil when I want it to return "[FCT-_ABCD-PDQR-". (Note: ABCD, PDQR, etc. can be any length of text containing letters, - and _.)
As you discovered yourself, (".*(%[FCT%-.-%-)$") works the way you want, where (".*(%[FCT%-.-%-$)") does not. $ and ^ are anchors and must come at the end or beginning of the pattern; they cannot appear inside a capture.
When an anchor character appears anywhere else in the pattern, it is treated as part of the string you are looking for, excluding the case where ^ is used at the start of a set to exclude characters, e.g. [^A-Z] excludes upper-case characters.
Here are examples of the pattern matching, using an example string and the pattern from your question.
print(string.match("[FCT-_ABCD-PDQR-", (".*(%[FCT%-.-%-$)"))) -- initial pattern
> nil
print(string.match("[FCT-_ABCD-PDQR-$", (".*(%[FCT%-.-%-$)"))) -- $ added to end of string
> [FCT-_ABCD-PDQR-$
print(string.match("[FCT-_ABCD-PDQR-", (".*(%[FCT%-.-%-)$"))) -- $ moved to end of pattern
> [FCT-_ABCD-PDQR-
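Applied to the code in the question, the sanitising line would then become (a sketch reusing the rt and ft loop variables from the question):
-- $ moved outside the capture so it anchors the match to the end of the string
ident = id:match(".*(%[FCT%-.-%-)$") .. rt .. ft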
Related
How can I extract a few words separated by symbols in a string, so that nothing is extracted if the symbols change?
For example, I wrote this code:
function split(str)
    local result = {}
    for match in string.gmatch(str, "[^%<%|:%,%FS:%>,%s]+") do
        table.insert(result, match)
    end
    return result
end
--------------------------Example--------------------------------------------
str = "<busy|MPos:-750.222,900.853,1450.808|FS:2,10>"
my_status = {}
status = split(str)
for key, value in pairs(status) do
    table.insert(my_status, value)
end
print(my_status[1])
print(my_status[2])
print(my_status[3])
print(my_status[4])
print(my_status[5])
print(my_status[6])
print(my_status[7])
Output:
busy
MPos
-750.222
900.853
1450.808
2
10
This code works fine, but if the characters and text in the str string change, the extraction is still done, which I do not want.
If the string changes to
str = "Hello stack overFlow"
Output:
Hello
stack
over
low
nil
nil
nil
In other words, I only want to extract if the string is in this format: "<busy|MPos:-750.222,900.853,1450.808|FS:2,10>"
In Lua patterns, you can use captures, which are perfect for things like this. I use something like the following:
--------------------------Example--------------------------------------------
str = "<busy|MPos:-750.222,900.853,1450.808|FS:2,10>"
local status, mpos1, mpos2, mpos3, fs1, fs2 = string.match(str, "%<(%w+)%|MPos:(%--%d+%.%d+),(%--%d+%.%d+),(%--%d+%.%d+)%|FS:(%d+),(%d+)%>")
print(status, mpos1, mpos2, mpos3, fs1, fs2)
I use string.match, not string.gmatch here, because we don't have an arbitrary number of entries (if that were the case, you would need a different approach). Let's break down the pattern: all captures are surrounded by parentheses () and get returned, so there are as many return values as captures. The individual captures are:
the status flag (or whatever that is): busy is a simple word, so we can use the %w character class (alphanumeric characters; %a, only letters, would also do). Then apply the + operator (you already know that one). The + is within the capture.
the three numbers for the MPos entry each get (%--%d+%.%d+), which looks weird at first. I use % in front of any non-alphanumeric character, since it turns all magic characters (such as +) into normal ones. - is a magic character, so escaping it is required here to match a literal -, but Lua allows you to put % in front of any non-alphanumeric character, which I do. The minus is optional, so the capture starts with %--, which is one or zero repetitions (the - operator) of a literal - (%-). Then I just match two runs of digits separated by a dot (%d is a digit, %. matches a literal dot). We do this three times, separated by commas (which I don't escape, since I'm sure the comma is not a magic character).
the last entry (FS) works practically the same as the MPos entry
all entries are separated by |, which I simply match with %|
So putting it together:
start of string: %<
status field: (%w+)
separator: %|
MPos (three numbers): MPos:(%--%d+%.%d+),(%--%d+%.%d+),(%--%d+%.%d+)
separator: %|
FS entry (two integers): FS:(%d+),(%d+)
end of string: %>
With this approach you have the data in local variables with sensible names, which you can then put into a table (for example).
If the match fails (for instance, when you use "Hello stack overFlow"), nil is returned, which can simply be checked for (you could check any of the local variables, but it is common to check the first one).
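For example, a minimal sketch of that check, reusing the pattern and variable names from above:
str = "Hello stack overFlow"
local status, mpos1, mpos2, mpos3, fs1, fs2 = string.match(str, "%<(%w+)%|MPos:(%--%d+%.%d+),(%--%d+%.%d+),(%--%d+%.%d+)%|FS:(%d+),(%d+)%>")
if status then
    -- the string had the expected format; the captures are safe to use
    print(status, mpos1, mpos2, mpos3, fs1, fs2)
else
    -- no match: the string was not in the expected format
    print("unrecognised format")
end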
I have a plain text file with one string per line. I'd like to identify any instances where a string contains a value outside of a restricted character set. In this particular instance, if the string contains any character outside of the set "[THADGRC.SMBN-WVKY]", I want to retain it and pass it along to a new file.
For example, let's say the original file "mystrings.txt" contained the following data:
THADGRC.SMBN-WVKY
YKVW-NBMS.CRGDHAT
THADGRC.SMBN-WVKYI
My intention is to retain only the third sequence, because it contains a character (I, in this case) outside of the allowed set.
It doesn't matter how many times, or in what order, an allowed character is present - all I care about is if a character exists in that string outside of the allowed set.
Originally I tried:
cat mystrings.txt | grep -v [THADGRC\.SMBN-WVKY] > badstrings.txt
but of course the third string contains those allowed characters in addition to the non-allowed character, so this search ended up producing no "offending" strings.
Last thing: I'm not sure what characters outside of the allowed set might exist in this text file. It would be great to know ahead of time to just search for anything with an "I", but I don't actually know this ahead of time.
So the question: is there a way to use grep (or another tool, say awk?) to pass in a restricted list of characters, and flag any instances where a string contains any number of characters outside of that set?
Thanks for your consideration
I think that your problem is N-W. This doesn't match "N", "-" and "W"; it matches a range from "N" to "W". You should move "-" to the end of the character class, or escape it. I suggest changing to:
grep '[^THADGRC.SMBNWVKY-]' mystrings.txt
Also, note that "." doesn't have to be escaped when it's inside a character class.
Your attempt says "remove any lines which contain one of these characters at least once". But you want "print any lines which contain at least one character not in this set."
(Also, quote your regular expressions, and lose the useless cat.)
grep '[^-THADGRC.SMBNWVKY]' mystrings.txt > badstrings.txt
I moved the dash to the beginning of the character class on the assumption that you want a literal dash, not the regex range N-W (i.e. N, O, P, Q, R, S, T, U, V, W).
Hello guys, I want to convert my non-delimited file into a delimited file.
An example of the file is as follows:
Name. CIF Address line 1 State Phn Address line 2 Country Billing Address line 3
Alex. 44A. Biston NJ 25478163 4th,floor XY USA 55/2018 kenning
And so on; all the data is in this format.
The first three lines are metadata, and then comes the data.
How can I make it delimited in a proper format?
There are two parts to this problem:
how to find the column widths
how to split each line into fields and output a new line with delimiters
I could not propose an automated solution for the first one, because, not knowing anything about the metadata format, there is no clear way to find where one column ends and the next one begins. Some of the column headings contain multiple space-separated words, and space is also used as a separator between the headings (and apparently one cannot use the rule "more than one space means the end of a heading name", because there's only one space between "Address line 2" and "Country", and they're clearly separate columns). Finding the correct column widths requires understanding English, and this is not something that you can write a program for.
For the second problem, things are much easier once you have the column positions. If you figure out the column positions manually (or programmatically, if you know something about the metadata that I don't and have a simple method for finding what's a column heading), then a program written in AWK can do this, for example:
cols="8,15,32,40,53,66,83,105"
awk_prog='BEGIN {
nt=split(cols,tabs,",")
delim=","
ORS=""
}
{ o=1 ;
for (i in tabs) { t=tabs[i] ; f=substr($0,o,t-o); sub(" *$","",f) ; print f
delim ; o=t } ;
print substr($0, o) "\n"
}'
awk -v cols="$cols" "$awk_prog" input_file
NOTE that the above program does not deal correctly with the case when the separator character (e.g. ",") appears inside the data. If you decide to use this as-is, be sure to use a separator that is not present in the input data. It may be better to modify the code to escape any separator characters found in the input data (there are different ways to do this - depends on what you plan to feed the output file to).
I need to update a bilingual dictionary written in Writer by first parsing all entries into their parts, e.g.:
main word (font 1, bold)
foreign equivalent transliterated (font 1, italic)
foreign equivalent (font 2, bold)
part of speech (font 1, italic)
Each line of the document is the main word followed by the parts listed above, each separated by a space or punctuation.
I need to automate the process of walking through the whole file, line by line, and place a delimiter between each part, ignoring spaces and punctuation, so I can mass-import it into a Calc file. In other words, "each part" is a sequence of characters (ignoring spaces and punctuation) that has the same font AND font style.
I have tried the standard Search&Replace feature, and AltSearch extension, but neither are able to complete the task. The main problem is I am not able to write a search query that says:
Find: consecutive characters with the same font AND font_style, ignore spaces and punctuation
Replace: term found above + "delimiter"
Any suggestions how I can write a script for this, or if an existing tool can solve the problem?
Thanks!
Pseudocode for the desired effect:
var $delimiter = "|"
Go to beginning of document
While not end of document do:
    var $currLine = get line from doc
    var $currChar = get next character which is not space or punctuation
    var $font = $currChar.font
    var $font_style = $currChar.font_style (e.g. bold, italic, normal)
    While not end of line do:
        $currChar = next character which is not space or punctuation
        if ($currChar.font != $font || $currChar.font_style != $font_style) { // font or style has changed
            print $delimiter
            $font = $currChar.font
            $font_style = $currChar.font_style
        }
    end While
end While
Here are tips for each of the things your pseudocode does.
First, the easiest way to move line by line is with the TextViewCursor, although it is slow. Notice the XLineCursor section. For the while loop, oVC.goDown() will return false when the end of the document is reached. (oVC is our variable for the TextViewCursor).
Get each character by calling oVC.goRight(0, False) to deselect, followed by oVC.goRight(1, True) to select. Then the selected value is obtained by oVC.getString(). To ignore space and punctuation, perhaps use Python's isalnum() or the re module.
To determine the font of the character, call oVC.getPropertyValue(attr). Values for attr could simply be CharAutoStyleName and CharStyleName to check for any changes in formatting.
Or grab a list of specific properties such as 'CharFontFamily', 'CharFontFamilyAsian', 'CharFontFamilyComplex', 'CharFontPitch', 'CharFontPitchAsian' etc. Character properties are described at https://wiki.openoffice.org/wiki/Documentation/DevGuide/Text/Formatting.
To insert the delimiter into the text: oVC.getText().insertString(oVC, "|", 0).
This Python code from GitHub shows how to do most of these things, although you'll need to read through it to find the relevant parts.
Alternatively, instead of using the LibreOffice API, unzip the .odt file and parse content.xml with a script.
I've got a question about the failure function description from "Compilers: Principles, Techniques, and Tools", aka the Dragon Book.
Firstly, the quote:
In order to process text strings rapidly and search those strings for a keyword, it is useful to define, for keyword b1b2...bn and position s in that keyword, a failure function, f(s) ... The objective is that b1b2...bf(s) is the longest proper prefix of b1...bs that is also a suffix of b1...bs. The reason f(s) is important is that if we are trying to match a text string for b1b2...bn, and we have matched the first s positions, but we then fail (i.e., the next position of the text string does not hold bs+1), then f(s) is the longest prefix of b1...bn that could possibly match the text string up to the point we are at. Of course, the next character of the text string must be bf(s)+1 or else we still have problems and must consider a yet shorter prefix, which will be bf(f(s)).
So, the questions:
1. If we've matched s positions with the text, why is f(s) the longest prefix of b1...bn that matches the string? I think the prefix of length s is the longest one.
2. Why must the next character of the text string be bf(s)+1? We have a mismatch at this position; does it matter at all what the character is?
f(s) is the length of the longest prefix that might still match the entire keyword at that position. The idea is not to retry matching the keyword against the text from the start, but to find a position where the keyword appears.
Consider a search for the word 'aaaba' in the text 'aaaabaa'. The match fails after the first three a's, but it's not necessary to retry from the second 'a': the a's we have already matched can serve as the start of a new match, so if the next letter is a 'b' (which it is), we may have a match there.
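To make this concrete, here is a small sketch of the failure function in Lua (my own illustration following the book's definition, not code from the book), computed for the keyword in the example:
local function failure(b)
    local f = {0}   -- f(1) = 0: a one-character prefix has no proper prefix
    local t = 0     -- length of the prefix matched so far
    for s = 2, #b do
        -- on a mismatch, fall back through ever shorter prefixes: f(t), f(f(t)), ...
        while t > 0 and b:sub(s, s) ~= b:sub(t + 1, t + 1) do
            t = f[t]
        end
        if b:sub(s, s) == b:sub(t + 1, t + 1) then
            t = t + 1
        end
        f[s] = t
    end
    return f
end

-- For 'aaaba' this prints: 0 1 2 0 1. After matching 'aaa' and then failing,
-- f(3) = 2 says a prefix of length 2 may still match at the current position.
print(table.concat(failure("aaaba"), " "))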