Using Protégé 4, I have created three different ontologies: Process, Tools, Raw Material. Now I want to define relations between the concepts of these ontologies (for example: a specific task in the process ontology requires the use of a specific tool). How can I do this using rules?
I am looking for guidance on the best way to do ontology alignment. I am using Protégé for ontology modelling at the moment, and I would like to reuse classes from existing ontologies. Is it better to (1) make my own classes and then add the existing classes as equivalentTo, or (2) import the existing ontologies, or their classes/relationships, and use them as a starting point?
Thanks!
I don't think it's necessarily "better" to use either approach, but there are factors which may affect the choice:
Some vocabularies may not have a proper ontology, i.e. the terms are not described by anything that could be imported. You could still use the terms from the vocabulary directly, but you'd have to describe the ontology yourself to make them logically usable, at the risk of coming into conflict with someone else who would also want to use the vocabulary this way.
If you use the external ontology only marginally, and your usage pretty much aligns with the existing definitions, there is of course no need to describe all the classes and properties yourself and "duplicate" it: importing keeps your ontology less cluttered and helps people who are already familiar with the other ontology understand yours.
If you'd use the external ontology as the core, it depends on what you are creating. If you are merely extending it with new concepts, then again your concepts should align with those from the external ontology, so there is no issue with importing it (for the same reasons as above). However, if your ontology has a somewhat different focus, you may want to define the core terms yourself without relying on other ontologies, since you may well reach the point where you decide that they are not really equivalent (a "Person" in one ontology may not be equivalent to a "Person" in another ontology). Such a choice will be easier to make when you don't have to rewrite half of your ontology.
Last thing to note: owl:equivalentClass does not mean the classes are the same; only that they share the same set of individuals. You could still give them your own descriptions, link them to other concepts etc. without affecting the equivalent classes, which keep their own "identity". This is similar in mathematics to Zorn's lemma, the well-ordering theorem, and the axiom of choice, which are all logically equivalent, yet they have their own Wikipedia articles, so clearly they are not identical.
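To illustrate that last point, here is a minimal Turtle sketch (all IRIs below are hypothetical) of two equivalent classes that nevertheless keep their own identities and annotations:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/my#> .
@prefix oth:  <http://example.org/other#> .

ex:Person a owl:Class ;
    rdfs:comment "A person as modelled in my ontology." ;
    owl:equivalentClass oth:Person .

# ex:Person and oth:Person now share the same set of individuals,
# but each keeps its own IRI, labels, comments and other annotations.
```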
I am looking for ontologies made for the domain of agriculture. I need these ontologies for testing a logic I implemented for merging domain-specific ontologies. I specifically need ontologies that were created for the sub-domains of agriculture (though ontologies from other domains will also be useful).
For example: Crops ontology, Fertilizer ontology, Rice ontology, Weed ontology.
Any kind of ontology will be helpful. Does anyone know where to find such ontologies? I couldn't find any.
If anyone knows of ontologies like these in a domain other than agriculture, please mention them too. Thank you in advance.
Sorry if I posted a wrong kind of question.
You don't state exactly what you mean by ontology, so I'm assuming a definition as is commonly used in the life-sciences, encompassing a wide degree of axiomatization with hundreds to potentially hundreds of thousands of classes. If you mean something more like an RDF schema then my examples may not apply.
AgroPortal has over 100 ontologies/thesauri in the domain of or closely related to Agriculture. You can see in the top 5 accessed ontologies some of the most relevant ones, including AGROVOC. Note that GACS is itself a merger of other thesauri, so if your goal is to test a merge framework you may want to hold this one back. Many of these are more thesaurus-like, but some such as ENVO and AGRO employ more extensive OWL axiomatization.
Note also that considerable work has been done in the Planteome project to merge different crop ontologies into a pan-species trait ontology; this may also be useful for your evaluation.
If you are interested in applying your techniques more broadly in the life sciences, common sub-domains are anatomy, phenotype and disease. These are frequently used as tests in initiatives like the Ontology Alignment Evaluation Initiative. Although it sounds like your technique goes beyond mapping and into merging, it may be useful to look at past competitions. I have also produced merged anatomy, disease and phenotype ontologies; these have all been curated and could be used as test sets for your approach.
I know that this question may not be suitable for SO, but please let this question stay here for a while. Last time my question was moved to Cross Validated, it froze: no more views or feedback.
I came across a question that does not make much sense to me. How can IFC models be interrogated via NLP? Consider IFC models as semantically rich structured data. IFC defines an EXPRESS-based entity-relationship model consisting of entities organized into an object-based inheritance hierarchy. Examples of entities include building elements, geometry, and basic constructs.
How could NLP be used for this type of data? I don't see how NLP is relevant at all.
In general, I would suggest that using NLP techniques to "interrogate" already (quite formally) structured data like EXPRESS would be overkill at best and a time / maintenance sinkhole at worst. In general, the strengths of NLP (human language ambiguity resolution, coreference resolution, text summarization, textual entailment, etc.) are wholly unnecessary when you already have such an unambiguous encoding as this. If anything, you could imagine translating this schema directly into a Prolog application for direct logic queries, etc. (which is quite a different direction than NLP).
I did some searches to try to find the references you may have been referring to. The only item I found was Extending Building Information Models Semiautomatically Using Semantic Natural Language Processing Techniques:
... the authors propose a new method for extending the IFC schema to incorporate CC-related information, in an objective and semiautomated manner. The method utilizes semantic natural language processing techniques and machine learning techniques to extract concepts from documents that are related to CC [compliance checking] (e.g., building codes) and match the extracted concepts to concepts in the IFC class hierarchy.
So in this example, at least, the authors are not "interrogating" the IFC schema with NLP, but rather using it to augment existing schemas with additional information extracted from human-readable text. This makes much more sense. If you want to post the actual URL or reference that contains the "NLP interrogation" phrase, I should be able to comment more specifically.
Edit:
The project grant abstract you referenced does not contain much in the way of details, but they have this sentence:
... The information embedded in the parametric 3D model is intended for facility or workplace management using appropriate software. However, this information also has the potential, when combined with IoT sensors and cognitive computing, to be utilised by healthcare professionals in Ambient Assisted Living (AAL) environments. This project will examine how as-constructed BIM models of healthcare facilities can be interrogated via natural language processing to support AAL. ...
I can only speculate on the following reason for possibly using an NLP framework for this purpose:
While BIM models include Industry Foundation Classes (IFCs) and aecXML, there are many dozens of other formats, many of them proprietary. Some are CAD-integrated and others are standalone. Rather than pay for many proprietary licenses (some of these enterprise products are quite expensive), and/or spend the time to develop proper structured query behavior for the various diverse file format specifications (which may not be publicly available in proprietary cases), the authors have chosen a more automated, general solution to extract the content they are looking for (which I assume must be textual or textual tags in nearly all cases). This would almost be akin to a search engine "scraping" websites and looking for key words or phrases and synonyms to them, etc. The upside is they don't have to explicitly code against all the different possible BIM file formats to get good coverage, nor pay out large sums of money. The downside is they open up new issues and considerations that come with NLP, including training, validation, supervision, etc. And NLP will never have the same level of accuracy you could obtain from a true structured query against a known schema.
I have developed an ontology of emotions using Protégé. I want to relate each class (emotion) of my ontology to its similar concepts in another ontology. For example, I have a class Anger. I want to retrieve concepts related to anger, like agitation, mad, etc., in the correct context from another ontology (ConceptNet or WordNet) through their URIs. How can I do so?
Is this even a correct idea to begin with? How else can I achieve my goal? Can I reference a class from another ontology within my own ontology via its URI in Protégé?
If you only want to refer to the classes, you can simply use the same URI in both places.
However this does not force tools to actually take into account any axioms about those classes, i.e., your ontology will not know about superclasses or restrictions declared in the other ontology.
To actually use all axioms related to your other classes, you'll need to import the other ontology in its entirety. To only use part of the ontology, you can use one of various modularisation techniques available to create a subset of an ontology, containing the axioms you're interested in. However, the technique to use depends on your specific needs.
Can you provide more insight on what you are trying to achieve?
There's a lot of material online on modularisation - search for "ontology modularisation". Reading a few abstracts should help you focus on the best approach for your needs.
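The two options above can be sketched in Turtle (all IRIs here are hypothetical placeholders, not real ontology addresses):

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix ex:  <http://example.org/emotions#> .

<http://example.org/emotions> a owl:Ontology ;
    # Option B: import the other ontology wholesale, so reasoners and
    # tools see all of its axioms (superclasses, restrictions, etc.)
    owl:imports <http://example.org/external-lexicon> .

# Option A: simply refer to the external class by its IRI; without the
# import above, none of its declared axioms are visible in your ontology.
ex:Anger owl:equivalentClass <http://example.org/external-lexicon#Anger> .
```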
I have a set of football-related keywords, plus data sets of positive-sentiment words and negative-sentiment words. My requirement is to combine these, search social media to get some real-time discussions and posts, do some statistical analysis, and reach some conclusions. The keywords and data sets are dynamically updated. Now my questions are:
What is the best practice for handling the three sets of data: an ontology structure or a well-structured database?
Can the data in the ontology be accessed from any programming language? Can I update or retrieve the data in the ontology using .NET, R, or any other programming language?
Thank you
Representing the related keywords as an ontology is a better idea than storing them in a database.
SPARQL can be used to access and search the ontology to retrieve related information.
Your system will be semantically richer if it is an ontology.
With a database, access times may be better, but the data will not be semantically rich.
You may use Apache Jena, a free Java framework for creating and working with ontologies.
Python also has libraries for working with ontologies, such as RDFLib and Owlready2.
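As a sketch of the SPARQL access mentioned above, a query like the following could retrieve keywords by sentiment (the vocabulary here, `ex:Keyword`, `ex:label`, `ex:sentiment`, is a hypothetical schema you would define yourself):

```sparql
PREFIX ex: <http://example.org/football#>

SELECT ?keyword
WHERE {
  ?kw a ex:Keyword ;
      ex:label ?keyword ;
      ex:sentiment ?sentiment .
  # keep only the negative-sentiment keywords
  FILTER (?sentiment = "negative")
}
```

Such a query can be run from Java via Jena's query API, or from other languages over a SPARQL HTTP endpoint, which addresses the question of access from .NET, R, etc.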