How can I combine these 2 functions without going over the 50,000 character limit? - google-sheets

I tried simply replacing every reference to the cell with the actual function contained in the referenced cell. This normally works with every other function I've done this with, but in this case the function is big and it gets referenced many times. This causes it to go over the 50,000-character limit for functions, and this method no longer applies.
check out this spreadsheet to see the functions I'm talking about:
https://docs.google.com/spreadsheets/d/1RFA8s68TSQI2jQSOQm2_Ma776vC1LUQn7JP9tg-gZ1g/edit?usp=sharing

here's the formula:
=index(fixed(regexextract(A3,"[\d.]+")*product(10^vlookup(regexextract(A3,regexreplace(A3,"([A-Za-z])","($1)")),split(flatten(regexextract(flatten(split("Kk,Mm,Bb,Tt,q,Q,s,S,Oo,Nn,d,Uu,D",",")),regexreplace(flatten(split("Kk,Mm,Bb,Tt,q,Q,s,S,Oo,Nn,d,Uu,D",",")),"(.)","($1)"))&"❄️"&sequence(13,1,3,3)),"❄️"),2,0))/(2.5*B3+1)/10^sumproduct((exact(index(split(flatten(regexextract(flatten(split("Kk,Mm,Bb,Tt,q,Q,s,S,Oo,Nn,d,Uu,D",",")),regexreplace(flatten(split("Kk,Mm,Bb,Tt,q,Q,s,S,Oo,Nn,d,Uu,D",",")),"(.)","($1)"))&"❄️"&sequence(13,1,3,3)),"❄️"),,1),C3))*(index(split(flatten(regexextract(flatten(split("Kk,Mm,Bb,Tt,q,Q,s,S,Oo,Nn,d,Uu,D",",")),regexreplace(flatten(split("Kk,Mm,Bb,Tt,q,Q,s,S,Oo,Nn,d,Uu,D",",")),"(.)","($1)"))&"❄️"&sequence(13,1,3,3)),"❄️"),,2))),D3)&C3)
I couldn't find an efficient way to automatically convert back to the best unit, because we are dealing with huge numbers that get turned into scientific notation, which prevents us from easily getting the actual length of the number. For this reason I added a cell (C3) where you can specify the unit you want, and another cell (D3) where you can specify the number of decimal places to display.
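For reference, here is roughly what the formula computes, sketched in Python. This is only an illustration: the suffix-to-exponent table mirrors the one hard-coded in the formula, it keeps just one spelling per suffix where the formula accepts both upper- and lower-case (K/k, M/m, ...), and it assumes C3 holds one of those suffix letters.

```python
# Rough Python sketch of what the formula does (illustration only).
SUFFIXES = "KMBTqQsSONdUD"  # K=10^3, M=10^6, ..., D=10^39
SCALE = {s: 10 ** (3 * (i + 1)) for i, s in enumerate(SUFFIXES)}

def convert(a3, b3, c3, d3):
    """Parse a suffixed number like '1.5M' (A3), divide by (2.5*B3 + 1),
    and re-express it in the unit given in C3 with D3 decimal places."""
    digits = "".join(ch for ch in a3 if ch.isdigit() or ch == ".")
    suffix = "".join(ch for ch in a3 if ch.isalpha())
    raw = float(digits) * SCALE.get(suffix, 1)
    scaled = raw / (2.5 * b3 + 1)
    return f"{scaled / SCALE[c3]:.{d3}f}{c3}"

print(convert("1.5M", 1, "K", 2))  # "428.57K"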

Related

Extracting PDF Tables into Excel in Automation Anywhere

I have a PDF with tabular data running over 50+ pages, and I want to extract this table into an Excel file using Automation Anywhere (I am using the community version of AA 11.3). I watched videos of the PDF integration command but haven't had any success trying this for tabular data.
Requesting assistance.
Thanks.
I am afraid that your case will be quite challenging, and the main reason for that is the values that contain multiple lines. You can still achieve what you need, and with good performance, but the code itself will not be pretty. You will also face challenges with Automation Anywhere, since it does not really provide the right tools to do such a thing, and you may need to resort to scripting (VBScript) or MetaBots.
Solution 1
This one will try to use purely text extraction and Regular expressions. Mainly standard functionality, nothing too "dirty".
First you need to look at what the exported data looks like. You can see that you can export to Plain or Structured text.
The Plain one is not useful at all as the data is all over the place, without any clear pattern.
The Structured one is much better as the data structure resembles the data from the original document. From looking at the data you can make these observations:
Each row contains 5 columns
All columns are always filled (at least in the visible sample set)
The last two columns can serve as a pattern "anchor" (identifier), because they follow a clear pattern (a number, followed by a minimum of two spaces, followed by a dollar sign and another number)
Rows with data are separated by a blank row
The text columns may contain a multiline value, which will duplicate the rows (this one thing makes it especially tricky)
First you need to ensure that the Structured data contains only the table, nothing else. You can probably use the Before-After string command for that.
Then you need to check whether you can reliably identify the character width of every column. You can try this for yourself by copying the text into Excel, using Text to Columns with the Fixed Width option, and playing around with the sliders.
Then you need to find a way to reliably identify each row and prepare it for the Split command in AA. For that you need a delimiter, but since each data row can actually consist of multiple text rows, you need to create a delimiter of your own. I used the Replace command with the Regular Expression option and replaced a specific pattern with a delimiter (a pipe).
Now that you have added a custom delimiter, you can use the Split command to add each row into a list and loop through it.
Because each data row may consist of several text rows, you will need to use Split again, this time with [ENTER] as the delimiter. Now you loop through each text line of a single data row, use the Substring function to extract data based on column width, and concatenate the pieces into a single value that you store somewhere else.
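To illustrate the overall parsing logic outside of AA, here is a rough Python sketch. The column boundaries and the row-anchor pattern below are assumptions based on the observations above, not values measured from the actual document.

```python
import re

# Assumed fixed column boundaries (start, end); the real values have to be
# found with the Text to Columns experiment described above.
COL_WIDTHS = [(0, 12), (12, 40), (40, 68), (68, 80), (80, 95)]

# The last two columns look like "<number>  $<number>" and are followed by a
# blank line, so that pattern marks the end of a logical data row.
ROW_END = re.compile(r"(\d+\s{2,}\$[\d.,]+)\s*\n\s*\n")

def parse_structured_export(text):
    """Rebuild logical rows from a fixed-width export in which one data row
    may span several physical text lines."""
    delimited = ROW_END.sub(r"\1|", text)  # insert the custom delimiter (pipe)
    rows = []
    for chunk in delimited.split("|"):
        lines = [ln for ln in chunk.splitlines() if ln.strip()]
        if not lines:
            continue
        # Slice every physical line at the same column boundaries and
        # concatenate the pieces so multi-line cells become one value.
        cols = ["" for _ in COL_WIDTHS]
        for ln in lines:
            for i, (start, end) in enumerate(COL_WIDTHS):
                cols[i] = (cols[i] + " " + ln[start:end].strip()).strip()
        rows.append(cols)
    return rows
```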
All in all, a painful process.
Solution 2
This may not be applicable, but it's worth a try: open the PDF in Microsoft Word. It will give you a warning; ignore it. Word will attempt to open the document and, if you're lucky, it will recognise your table as a table. If it works, it will make the data extraction much easier, and you will be able to use Macros/VBA or even simple copy and paste. I tried it on a random PDF of my own and it works quite well.

How to handle homophones in speech recognition?

For those who are not familiar with what a homophone is, I provide the following examples:
our & are
hi & high
to & too & two
While using the Speech API included with iOS, I am encountering situations where a user may say one of these words, but it will not always return the word I want.
I looked into the alternativeSubstrings property wondering if this would help, but in my testing of the above words, it always comes back empty.
I also looked into the Natural Language API, but could not find anything in there that looked useful.
I understand that as a user speaks more words, the Speech API can begin to infer context and correct for these, but this will not work well for my use case, since it will often involve only one or two words at most, limiting the effectiveness of context.
An example of contextual processing:
Using the words above on their own, I get these results:
are
hi
to
However, if I put together the following sentence, you can see they are all wrong:
I am too high for our ladder
Ideally, I would either get a list back containing [are, our], [to, too, two], [hi, high] for each transcription segment, or would have a way to compare a string against a function that supports homophones.
An example of this would be:
if myDetectedWord == "to" then { ... }
Where myDetectedWord can be [to, too, two], and this function would return true for each of these.
This is a common NLP dilemma, and I'm not sure what your desired output in this application might be. However, you may want to bypass this problem in your design/architecture process, if possible. Otherwise, this problem will turn into a challenge.
That being said, if you wish to really get into it, I like this idea of yours:
string against a function
This might be more efficient and performance friendly.
One way I'd like to solve this problem is through regex processing, instead of endless loops and arrays. You could prototype with loops and arrays to begin with and see how it works, then switch to regular expressions to gain performance.
You could, for instance, define fixed groups of alternatives in regular expressions and quickly check your string against them (word by word, maybe using back-referencing), and you can add as many boundaries to your expressions for string processing as you wish.
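As a minimal sketch of that idea in Python (the homophone groups here are just the ones listed in the question), the check the question asks for could look something like this:

```python
import re

# Assumed homophone groups, taken from the examples in the question.
HOMOPHONE_GROUPS = [
    ["to", "too", "two"],
    ["our", "are"],
    ["hi", "high"],
]

# One anchored alternation per group, e.g. ^(?:to|too|two)$
GROUP_PATTERNS = [
    re.compile(r"^(?:%s)$" % "|".join(map(re.escape, group)), re.IGNORECASE)
    for group in HOMOPHONE_GROUPS
]

def matches_homophone(detected_word, expected_word):
    """True when detected_word and expected_word fall in the same homophone group."""
    for pattern in GROUP_PATTERNS:
        if pattern.match(detected_word) and pattern.match(expected_word):
            return True
    # Fall back to an exact comparison for words without a known group.
    return detected_word.lower() == expected_word.lower()

print(matches_homophone("two", "to"))   # True
print(matches_homophone("hi", "high"))  # True
```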
Your fixed groups can also be designed based on the probability of certain words occurring in certain parts of a string. For instance,
^I
vs
^eye
The probability of I being the first word is much higher than that of eye.
The probability of I in any part of a string is higher than that of eye, also.
You might want to weight words based on that.
I'd say the key is to narrow your desired outputs down to be as focused as possible and increase accuracy (maybe even to just 100 words, if possible) if you wish to have a good, working application.
Good project though, I hope you like/enjoy the challenge.

Manipulating Unbounded Integers using custom data structure

Full disclosure: my question pertains to a project I am working on for my Data Structures class. I know this is usually frowned upon, but I am hoping it may be okay since I have the data structure itself done and I'm just seeking assistance in creating a method.
The project is to implement a custom data structure to represent unbounded integers using a custom linked list. I cannot use the BigInteger nor LinkedList classes. I implemented the data structure using the IntNode class provided from the project.
The class takes in a string of numbers, breaks it into 3 character chunks, converts those chunks into integers and stores each chunk in a custom "linked list" of IntNode objects.
For example: 123456789123 represented as 4 IntNodes, <123> <456> <789> <123>
The method I am having difficulty implementing is:
UnboundedInt multiply(UnboundedInt)
A method that multiplies the current UnboundedInt with a passed in one. The return is a new UnboundedInt.
There is also an 'add' method, which was easy to implement. I realize I could handle multiplication by looping the 'add' method as many times as the value of one of the UnboundedInt objects; however, how would I handle the loop variable when it, itself, breaches the limit of an integer?
I do realize I could use to handle multiplication by looping the 'add' method as many times as one of the UnboundedInt objects
That's not going to be the answer, because it would be too slow if either operand is non-trivial.
There is also an 'add' method which was easy to implement
That's good, because that's going to be part of the solution.
How did you implement that?
Probably following the steps you would do if you had to do it on paper.
You can implement multiplication the same way.
How do you multiply two numbers on paper?
You multiply the number on the left by each digit of the number on the right, one at a time.
Once you have the partial products for each digit, you shift them into place and add them.
For example, let's say you were to multiply 123456789123 by 234. It would go like this:
  123456789123 * 234
  ------------------
  246913578246
   370370367369
+   493827156492
================
  28888888654782
Multiplying an UnboundedInt by a 1-digit number should be easy, and you already have the implementation of add, so the complete solution is not far away. To sum it up, what you still need to implement is:
Multiply by a 1-digit number
Multiply by 10
Combine the above two to compute the total
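The class project presumably has to do this over the custom IntNode chain (likely in Java), but as a language-neutral sketch, here is the same schoolbook algorithm in Python, with plain lists of 3-digit chunks standing in for the nodes. Note it works chunk by chunk in base 1000 rather than digit by digit, which is the same idea.

```python
BASE = 1000  # each node holds 3 decimal digits

def add_chunks(a, b):
    """Schoolbook addition, chunk by chunk, carrying into the next chunk."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = carry
        if i < len(a): total += a[i]
        if i < len(b): total += b[i]
        result.append(total % BASE)
        carry = total // BASE
    if carry:
        result.append(carry)
    return result

def multiply_by_chunk(a, m):
    """Multiply a chunk list by a single chunk (0..999), with carries."""
    result, carry = [], 0
    for value in a:
        total = value * m + carry
        result.append(total % BASE)
        carry = total // BASE
    if carry:
        result.append(carry)
    return result

def multiply(a, b):
    """One partial product per chunk of b, shifted by its position, then summed."""
    total = [0]
    for i, m in enumerate(b):
        partial = [0] * i + multiply_by_chunk(a, m)  # shift by i chunks
        total = add_chunks(total, partial)
    return total

# 123456789123 * 234, chunks stored least-significant-first
print(multiply([123, 789, 456, 123], [234]))  # [782, 654, 888, 888, 28]
```

In the node-based version, the shift step corresponds to the "multiply by 10" item above (or, per chunk, prepending a low-order node).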

Obtain original (unexpanded) macro text using libclang

Using libclang, I have a cursor into an AST which corresponds to the statement resulting from a macro expansion. I want to retrieve the original, unexpanded macro text.
I've looked for a libclang API to do this, and can't find one. Am I missing something?
Assuming such an API doesn't exist, I see a couple of ways to go about doing this, both based on using clang_getCursorExtent() to obtain the source range of the cursor - which is, presumably, the range of the original text.
The first idea is to use clang_getFileLocation() to obtain the filename and the positions of the range start and end, and to read the text directly from the file. If I've compiled from unsaved files then I need to deal with that, but my main concern with this approach is that it just doesn't seem right to go out to the filesystem when I'm sure clang holds all this information internally. There would also be implications if the AST has been loaded rather than generated, or if the source files have been modified since they were parsed.
The second approach is to call clang_tokenize() on the cursor extent. I tried doing this, and found that it fails to produce a token list for most of the cursors in the AST. Tracing into the code, it turns out that internally clang_tokenize() manipulates the supplied range and ends up concluding that it spans multiple files (presumably due to some effect of the macro expansion), and aborts. This seems incorrect to me, but I do feel that in any case I'm abusing clang_tokenize() trying to do this.
So, what's the best approach?
This is the only way I've found.
You get the top-level cursor with clang_getTranslationUnitCursor(). Then you call clang_visitChildren(), with the visitor function returning CXChildVisit_Continue so that only the immediate children are visited. Among the children you see the usual cursor types for top-level declarations (like CXCursor_TypedefDecl and CXCursor_EnumDecl), but among them there is also CXCursor_MacroExpansion. Every single macro expansion appears to show up in a cursor of this type. You can then call clang_tokenize() on any of these cursors and it gives you the unexpanded macro text.
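The question is about the C API, but as an illustration, here is roughly the same flow using the Python bindings (clang.cindex), which map one-to-one onto the C calls. Note that macro-expansion cursors only show up if the translation unit is parsed with a detailed preprocessing record (CXTranslationUnit_DetailedPreprocessingRecord in the C API); the file name below is just a placeholder.

```python
import clang.cindex as ci

index = ci.Index.create()
# Keep the preprocessing record so macro expansion cursors appear in the AST.
tu = index.parse(
    "example.c",  # placeholder source file
    options=ci.TranslationUnit.PARSE_DETAILED_PROCESSING_RECORD,
)

# Walk only the immediate children of the translation unit cursor
# (the equivalent of returning CXChildVisit_Continue from the visitor).
for child in tu.cursor.get_children():
    if child.kind == ci.CursorKind.MACRO_INSTANTIATION:
        tokens = [t.spelling for t in child.get_tokens()]
        print(child.spelling, "expands from tokens:", tokens)
```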
I have no idea why macro expansions get stuck near the top of the AST instead of within the elements where they are used; it makes things pretty awkward. Example:
enum someEnum {
    one = SOMEMACRO,
    two,
    three
};
It'd be nice if the macro expansion cursor for SOMEMACRO were within the enum declaration instead of being a sibling to it.
(I realize this is ridiculously late but I'm hoping this gets libclang more exposure, maybe someone more experienced with it can provide more insight).

How to detect tabular data from a variety of sources

In an experimental project I am playing with I want to be able to look at textual data and detect whether it contains data in a tabular format. Of course there are a lot of cases that could look like tabular data, so I was wondering what sort of algorithm I'd need to research to look for common features.
My first thought was to write a long switch/case statement that checked for data separated by tabs, then another case for data separated by pipe symbols, then yet another case for data separated in some other way, and so on. Now of course I realize that I would have to come up with a list of different things to detect - but I wondered if there was a more intelligent way of detecting these features than doing a relatively slow search for each type.
I realize this question isn't especially eloquently put so I hope it makes some sense!
Any ideas?
(no idea how to tag this either - so help there is welcomed!)
The only reliable scheme would be to use machine-learning. You could, for example, train a perceptron classifier on a stack of examples of tabular and non-tabular materials.
A mixed solution might be appropriate, i.e. one where you handle the most common/obvious cases with simple heuristics (in a "switch-like" manner) as you suggested, and leave the harder cases to automated learning and other types of classifier logic.
This assumes that you do not already have defined types stored in the TSV.
A TSV file is typically
[Value1]\t[Value..N]\n
My suggestion would be to:
Count up all the tabs
Count up all of new lines
Count the total tabs in the first row
Divide the total number of tabs by the tabs in the first row
Using the result of step 4, if you get a remainder of 0 then you have a TSV candidate. From there you may want to do one of the following:
You can continue reading the data and ignoring the error of lines with less or more than the predicted tabs per line
You can scan each line before reading to make sure all are consistent
You can read up to the line that does not fit the format and then throw an error
Once you have a good prediction of the number of tab-separated values, you can use a regular expression to parse out the values (as a group).
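A minimal sketch of this heuristic in Python, purely illustrative; it only implements the tab-counting check described above:

```python
def looks_like_tsv(text):
    """Heuristic from the steps above: the total tab count should divide
    evenly by the number of tabs in the first row."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if len(lines) < 2:
        return False
    total_tabs = sum(ln.count("\t") for ln in lines)
    first_row_tabs = lines[0].count("\t")
    if first_row_tabs == 0:
        return False
    # A remainder of 0 means every row could have the same number of columns.
    return total_tabs % first_row_tabs == 0

sample = "name\tage\tcity\nalice\t30\tparis\nbob\t25\toslo\n"
print(looks_like_tsv(sample))  # True
```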

Resources