Keyword co-occurrence using OLAP

I have a large set of documents, each containing multiple keywords. I would like to create an OLAP cube that calculates the co-occurrence of keywords in this set. Is it possible to build such a solution using an OLAP cube? If so, what would be the attributes of the fact table, the dimensions, the measure, and the aggregation function? Also, what tool do you suggest?
An example document, in JSON form (though the exact format doesn't matter):
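A minimal hypothetical document of this kind (all field names and values here are illustrative, not from the original post):
{
  "id": "doc-001",
  "title": "Introduction to OLAP",
  "keywords": ["olap", "cube", "aggregation", "data warehouse"]
}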

Related

Reclassify data using filters

My goal is to include or exclude dimensional data from a calculation that creates a category on that dimension, in this example Customer Name. I have achieved the inclusion/exclusion using Parameters, but they only accept single values, which means I would need to create several parameters to achieve a selection of 10 or more items.
To explain the case in full: I'm using the Superstore sample dataset on Tableau Desktop 2021.1, and I have created the following calculation:
Top 10 Customers:
IF {FIXED [Customer Name]: SUM([Sales])} > 10000
THEN [Customer Name]
ELSE "Other"
END
That renders the following visual.
How can I move Bart Watters and Denny Joy to Other, without filtering the data? The idea is to give the user the ability to classify, instead of hard-coding the selection into the calculation.
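One direction worth sketching (my own illustration, not from the original post) is a Tableau set on Customer Name that the user can edit; referencing a set in a calculation returns TRUE for its members, so the set can drive the reclassification:
// [Reclassify to Other] is a hypothetical user-editable set on Customer Name
IF [Reclassify to Other] THEN "Other"
ELSEIF {FIXED [Customer Name]: SUM([Sales])} > 10000 THEN [Customer Name]
ELSE "Other"
END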

Query-document similarity with doc2vec

Given a query and a document, I would like to compute a similarity score using Gensim doc2vec.
Each document consists of multiple fields (e.g., main title, author, publisher, etc.).
For training, is it better to concatenate the document fields and treat each row as a unique document or should I split the fields and use them as different training examples?
For inference, should I treat a query like a document? Meaning, should I call the model (trained over the documents) on the query?
The right answer will depend on your data & user behavior, so you'll want to try several variants.
Just to get some initial results, I'd suggest combining all fields into a single 'document', for each potential query-result, and using the (fast-to-train) PV-DBOW mode (dm=0). That will let you start seeing results, doing either some informal assessment or beginning to compile some automatic assessment data (like lists of probe queries & docs that they "should" rank highly).
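As a minimal sketch of that baseline (token lists, tags, and parameters below are illustrative; assumes Gensim 4.x):
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
# each record: all fields concatenated into one token list, tagged with a doc id
docs = [
    TaggedDocument(words=["deep", "learning", "john", "smith", "acme", "press"], tags=["doc-0"]),
    TaggedDocument(words=["statistics", "primer", "jane", "doe", "acme", "press"], tags=["doc-1"]),
]
# PV-DBOW (dm=0) trains quickly and is a reasonable first baseline
model = Doc2Vec(docs, dm=0, vector_size=100, epochs=40, min_count=1)
# treat the query like a short document: infer a vector from its tokens
query_vec = model.infer_vector(["deep", "learning"])
# rank the training docs by cosine similarity to the query vector
print(model.dv.most_similar([query_vec], topn=2))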
You could then try testing the idea of making the fields separate docs – either instead-of, or in addition-to, the single-doc approach.
Another option might be to create specialized word-tokens per field. That is, when 'John' appears in the title, you'd actually preprocess it to be 'title:John', and when in author, 'author:John', etc. (This might be in lieu of, or in addition to, the naked original token.) That could enhance the model to also understand the shifting senses of each token, depending on the field.
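A sketch of that per-field token preprocessing (purely illustrative):
def field_tokens(field, text):
    # prefix each token with its field name, e.g. 'title:john'
    return [f"{field}:{tok}" for tok in text.lower().split()]

words = field_tokens("title", "Data Mining Basics") + field_tokens("author", "John Smith")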
Then, providing you have enough training data, & choose other model parameters well, your search interface might also preprocess queries similarly, when the user indicates a certain field, and get improved results. (Or maybe not: it's just an idea to be tried.)
In all cases, if you need precise results – exact matches of well-specified user queries – more traditional searches like exact DB matches/greps, or full-text reverse-indexes, will outperform Doc2Vec. But when queries are more approximate, and results need filling-out with near-in-meaning-even-if-not-in-literal-tokens results, a fuzzier vector document representation may be helpful.

Using ontology to infer labels for process model

I'm trying to implement a specific type of process mining that has been presented in this thesis [link]. It is based on HMMs and generates a process model in the form of a directed graph, where:
Nodes are called intentions and correspond to hidden states
Edges are called strategies and consist of different activities
These activities correspond to the HMM's observable emissions
Intentions can be fulfilled using different strategies
A user event log consisting of user IDs, timestamps and activities is used as input. The image below is an example of such a process model. The highlighted nodes and edges represent the path that was predicted using the Viterbi algorithm.
You can see that the graph's nodes and edges carry only numeric labels, which merely distinguish the different strategies and intentions. To make these labels more meaningful to a human reader, I'd like to infer some suitable labels.
My idea is to use an ontology to obtain those labels. After some research I figured out that I probably need to do something generally referred to as "ontology learning". For this I would need to create some axioms in RDF/OWL format and then use these as input for a reasoner, which would infer an ontology.
Is this approach correct and reasonable to achieve my goal?
If this is the way to go, I will need some tool to generate axioms in an automated way. So far I couldn't find any tool that does this completely out of the box. Based on what I've seen so far, I conclude that I would need to define some kind of mapping between the original data and the desired axioms. I took a closer look at Protégé, which offers a plugin for spreadsheets. It seems to be based on the MappingMasterDSL project [link].
I've also found an interesting paper [link] on ontology learning in which an RNN-based model is trained in an end-to-end fashion to translate definitory sentences into OWL formulae. BUT: my user event log data does not contain any natural-language sentences. Its activities are defined by tokens derived from HTML elements of the user interface. Therefore the RNN-based approach does not seem applicable here. (For the interested reader, the related project can be found here [link].)
Isn't there really any easier way than hand-crafting the axioms' schema(ta)?
Assuming that I have created my axioms and inferred an ontology, I would like to use the strategies' (edges') observable activities (emissions) to infer a suitable label. I guess I would need to query my ontology somehow. I could use the activity names as parameters for my query and look for some related entities that reveal the desired label. I'm expecting something like:
"I have a strategy with ID=3, that strategy can be executed with
actions a, b and c, give me all entities of the ontology, that
have these actions as property value and show and give me all related
labels for those entities"
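Expressed as a query, that might look roughly like this rdflib/SPARQL sketch (the ontology file, namespace, and property names are all hypothetical):
from rdflib import Graph

g = Graph()
g.parse("process_ontology.owl")  # hypothetical inferred ontology
q = """
PREFIX ex: <http://example.org/process#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?entity ?label WHERE {
  ?entity ex:hasAction ex:action_a, ex:action_b, ex:action_c .
  ?entity rdfs:label ?label .
}
"""
# each row is an entity that offers all three actions, plus its label
for entity, label in g.query(q):
    print(entity, label)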
But where would the data for the labels actually come from?
I think I'm missing some important step during the process of ontology learning. Where do I find an additional data source for the labels and how do I relate this data to my ontology's entities?
Also I'm wondering if there is a way to incorporate the inherent knowledge of the process model's topology into my ontology.

Learning NER using category lists

In the template for training CRF++, how can I include a custom dictionary.txt file for, say, listed companies, another for popular European foods, or just about any category?
Then provide sample training data for each category, from which it learns how those specific named entities are used in context for that category.
In this way, both I and the system can be sure it has correctly understood how certain named entities are structured in text, whether a tweet or a Pulitzer-prize-winning news article, instead of my providing hundreds of megabytes of data.
This would be rather cool. The model would have a definite dictionary of known entities (which would not need to be expanded) and a statistical approach to how those known entities are structured in human text.
PS - Just for clarity, I'm not after a regex NER. Those are only cool if you have lots in the dictionary, lots of rules and lots of dull time.
I think what you are talking about is a gazetteer list (dictionary.txt).
You would have to include a corresponding feature for each word in the training data and then specify it in the template file.
For example: your list contains the entity Hershey's,
and the training data has the sentence: I like Hershey's chocolates.
So when you arrange the data in CoNLL format (for CRF++), you can add a column (with values 0 or 1, indicating whether the word is present in the dictionary) which will have the value 0 for all words except Hershey's.
You also have to include this column as a feature in the template file.
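For instance, a hypothetical fragment of such data (token, gazetteer flag, label; the label column comes last, and B-BRAND is just an illustrative tag):
I           0   O
like        0   O
Hershey's   1   B-BRAND
chocolates  0   O
.           0   O
A matching template file could then reference the gazetteer column (column 1) via %x[row,col] offsets:
U00:%x[0,0]
U01:%x[0,1]
U02:%x[-1,1]
B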
To get a better understanding of the template file and NER training with CRF++, you can watch the videos below and comment with your doubts :)
1) https://youtu.be/GJHeTvDkIaE
2) https://youtu.be/Ur5umC4BwN4
EDIT: (after viewing the OP's comment)
Sample Training Data with extra features: https://pastebin.com/fBgu8c67
I've added 3 features. The IsCountry feature value (1 or 0) can be obtained from a gazetteer list of countries. The other 2 features can be computed offline. Note that the headers are added in the file for reference only; they should not be included in the training data file.
Sample Template File for the above data : https://pastebin.com/LPvAGCVL
Note that the test data should also be in the same format as the training data, with the same features / same number of columns.

Which Giraph I/O format can be used for property graph?

There are several built-in input/output formats in Giraph, but all of them support only numerical IDs and values.
So is there a way to process a property graph such that both vertices and edges can have multiple keys and values, or anything close? I'm specifically interested in whether an edge can have attributes like timeCreated or type.
Also, is there some convention of using only numerical IDs and data for faster processing? Specifically, is the property graph from a graph database usually filtered down to only IDs and values before batch processing with Giraph?
At least from Neo4j, you can use the CSV export of node and relationship IDs to generate the data for Giraph.
You can use something like this:
http://neo4j.com/blog/export-csv-from-neo4j-curl-cypher-jq/
and you can use neo4j-import to import that csv data, or LOAD CSV for more advanced structural updates.
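As a rough sketch of such an export with the official Neo4j Python driver (the URI, credentials, and the timeCreated property are placeholders):
import csv
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
# export numeric relationship endpoints plus selected properties for Giraph
query = """
MATCH (a)-[r]->(b)
RETURN id(a) AS src, id(b) AS dst, type(r) AS type, r.timeCreated AS timeCreated
"""
with driver.session() as session, open("edges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["src", "dst", "type", "timeCreated"])
    for rec in session.run(query):
        writer.writerow([rec["src"], rec["dst"], rec["type"], rec["timeCreated"]])
driver.close()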
