Arguments For The Feasibility of An Upper Ontology

Many of those who doubt the possibility of developing wide agreement on a common upper ontology fall into one of two traps:

(1) they assert that there is no possibility of universal agreement on any conceptual scheme; but they ignore the fact that a practical common ontology does not need universal agreement: it only needs a user community large enough to make it profitable for developers to use it as a means to general interoperability, and for third-party developers to build utilities that make it easier to use; and
(2) they point out that developers of data schemes find different representations congenial for their local purposes; but they do not demonstrate that these different representations are in fact logically inconsistent.

In fact, different representations of assertions about the real world (though not philosophical models), if they accurately reflect the world, must be logically consistent, even if they focus on different aspects of the same physical object or phenomenon. If any two assertions about the real world are logically inconsistent, one or both must be wrong, and that is a topic for experimental investigation, not for ontological representation. In practice, representations of the real world are created as, and known to be, approximations to the underlying reality, and their use is circumscribed by the limits of measurement error in any given practical application. Ontologies are entirely capable of representing approximations, and are also capable of representing situations in which different approximations have different utility. Objections based on the different ways people perceive things attack a simplistic, impoverished view of ontology. The objection that there are logically incompatible models of the world is true, but within an upper ontology those different models can be represented as different theories, and the adherents of those theories can use them in preference to other theories, while preserving the logical consistency of the necessary assumptions of the upper ontology. The necessary assumptions provide the logical vocabulary with which to specify the meanings of all of the incompatible models. It has never been demonstrated that incompatible models cannot be properly specified with a common, more basic set of concepts, while there are examples of incompatible theories that can be logically specified with only a few basic concepts.
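
As a rough illustration of how logically incompatible models can coexist as separate theories over a shared basic vocabulary, the sketch below uses plain Python structures rather than any particular ontology language; the vocabulary terms, theory names, and axiom strings are invented purely for illustration.

```python
# Minimal sketch: two mutually incompatible models of time kept as separate
# named theories, both stated with the same shared basic vocabulary.
# The upper ontology commits only to the vocabulary, not to either theory.
UPPER_VOCABULARY = {"Region", "TimePoint", "located_at", "before"}

THEORIES = {
    "absolute_time": [
        "every TimePoint is ordered by a single global before relation",
    ],
    "relational_time": [
        "before holds between TimePoints only relative to an observer",
    ],
}

def vocabulary_of(axiom: str) -> set:
    """Return the shared vocabulary terms an axiom string mentions."""
    return {term for term in UPPER_VOCABULARY if term in axiom}

# Incompatibility lives in the theories, not in the shared primitives
# used to state them -- which is the point of the argument above.
for name, axioms in THEORIES.items():
    used = set().union(*(vocabulary_of(a) for a in axioms))
    print(name, "uses shared terms:", sorted(used))
```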

Many of the objections to an upper ontology refer to the problems of life-critical decisions or of non-axiomatized problem areas such as law, medicine, or politics that are difficult even for humans to understand. Some of these objections do not apply to physical objects, or to standard abstractions that are defined into existence by human beings and closely controlled by them for mutual good, such as standards for electrical power system connections or the signals used in traffic lights. No single general metaphysics is required to agree that some such standards are desirable. For instance, while time and space can be represented in many ways, some of those representations are already used in interoperable artifacts like maps and schedules.

Objections to the feasibility of a common upper ontology also do not take into account the possibility of forging agreement on an ontology that contains all of the primitive ontology elements that can be combined to create any number of more specialized concept representations. Adopting this tactic permits effort to be focused on agreement on only a limited number of ontology elements (under 10,000). By agreeing on the meanings of that inventory of basic concepts, it becomes possible to create, and then accurately and automatically interpret, an infinite number of concept representations as combinations of the basic ontology elements. Any domain ontology or database that uses the elements of such an upper ontology to specify the meanings of its terms will be automatically and accurately interoperable with other ontologies that use the upper ontology, even though each may separately define a large number of domain elements not defined in the others. In such a case, proper interpretation requires that the logical descriptions of domain-specific elements be transmitted along with any data that is communicated; the data will then be automatically interpretable, because the domain element descriptions, based on the upper ontology, will be properly interpretable by any system that can properly use the upper ontology. An upper ontology based on such a set of primitive elements can include alternative views, provided that they are logically compatible. Logically incompatible models can be represented as alternative theories, or represented in a specialized extension to the upper ontology. The proper use of alternative theories is a piece of knowledge that can itself be represented in an ontology.
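
The following is a minimal sketch, in plain Python rather than a formal ontology language, of the interoperability mechanism just described: a hypothetical set of agreed primitives, a hypothetical domain term defined purely in terms of them, and a check that a receiver knowing only the primitives can interpret the definition shipped alongside the data. All names are invented for illustration.

```python
# Hypothetical agreed-upon primitive elements of the upper ontology.
PRIMITIVES = {"PhysicalObject", "Process", "Liquid", "hasPart", "participatesIn"}

# A domain ontology defines its own term purely as a combination of primitives:
# a pump is (roughly) a physical object with physical parts that participates
# in a process.  "CoolantPump" is an invented example, not a standard term.
DOMAIN_DEFINITIONS = {
    "CoolantPump": {
        "is_a": "PhysicalObject",
        "restrictions": [("hasPart", "PhysicalObject"),
                         ("participatesIn", "Process")],
    },
}

def can_interpret(definition: dict, primitives: set) -> bool:
    """A receiver can interpret a domain term if its definition bottoms
    out entirely in primitives the receiver already understands."""
    used = {definition["is_a"]}
    for relation, filler in definition["restrictions"]:
        used.update({relation, filler})
    return used <= primitives

# Data is shipped together with the definitions of its domain-specific
# terms, so a system that knows only the upper ontology can interpret it.
message = {"definitions": DOMAIN_DEFINITIONS,
           "data": [{"type": "CoolantPump", "id": "pump-17"}]}

for term, definition in message["definitions"].items():
    print(term, "interpretable:", can_interpret(definition, PRIMITIVES))
```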

Most proponents of an upper ontology argue that several good ones may be created, perhaps with different emphases. Very few argue for discovering just one within natural language or even within a single academic field; most are simply standardizing some existing body of communication. Another view is that there is almost total overlap among the different ways upper ontologies have been formalized, in the sense that different ontologies focus on different aspects of the same entities, and the different views are complementary rather than contradictory; as a result, an internally consistent ontology that contains all the views, with means of translating each view into the others, is feasible. Such an ontology has not thus far been constructed, however, because including all of the alternative views in the separately developed upper ontologies, along with their translations, would require a large development project. The main barrier to constructing such an ontology is not technical, but the reluctance of funding agencies to support a large enough consortium of developers and users.

Several common arguments against an upper ontology can be examined more clearly by separating issues of concept definition (ontology), language (lexicons), and facts (knowledge). For instance, people use different terms and phrases for the same concept. However, that does not necessarily mean that those people are referring to different concepts; they may simply be using different language or idiom. Formal ontologies typically use linguistic labels to refer to concepts, but the terms that label ontology elements mean no more and no less than what their axioms say they mean. Labels are similar to variable names in software: evocative rather than definitive. The proponents of a common upper ontology point out that the meanings of the elements (classes, relations, rules) in an ontology depend only on their logical form, and not on the labels, which are usually chosen merely to make the ontologies more easily usable by their human developers. In fact, the labels for elements in an ontology need not be words at all; they could be, for example, images of instances of a particular type, or videos of an action that is represented by a particular type. It cannot be emphasized too strongly that what is represented in an ontology is *not* words, but entities in the real world, or abstract entities (concepts) in the minds of people. Words are not equivalent to ontology elements; words *label* ontology elements. There can be many words that label a single concept, even in a single language (synonymy), and there can be many concepts labeled by a single word (ambiguity). Creating the mappings between human language and the elements of an ontology is the province of Natural Language Understanding; the ontology itself stands independently as a logical and computational structure. For this reason, finding agreement on the structure of an ontology is actually easier than developing a controlled vocabulary, because all of the different interpretations of a word can be included, each *mapped* to the same word in the different terminologies.
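
A minimal sketch of the label/concept distinction, using plain Python and invented identifiers: the meaning of a concept lives in its axioms, while a many-to-many label table links words to concepts, capturing both synonymy and ambiguity.

```python
# Concepts are identified by opaque ids; the axioms carry the meaning.
# All ids, axiom strings, and labels are invented for illustration.
CONCEPTS = {
    "C001": {"axioms": ["subclass_of(C001, FinancialInstitution)"]},
    "C002": {"axioms": ["subclass_of(C002, RiverFeature)"]},
}

# Many words may label one concept (synonymy), and one word may label
# many concepts (ambiguity); labels are not the concepts themselves.
LABELS = [
    ("bank",   "C001"),  # financial sense
    ("bank",   "C002"),  # river sense -> ambiguity of the word "bank"
    ("banque", "C001"),  # French label -> synonymy across languages
]

def concepts_for(word: str) -> list:
    """All concepts a word may refer to (ambiguity)."""
    return [cid for w, cid in LABELS if w == word]

def labels_for(concept_id: str) -> list:
    """All words that label a concept (synonymy)."""
    return [w for w, cid in LABELS if cid == concept_id]

print(concepts_for("bank"))   # ['C001', 'C002']
print(labels_for("C001"))     # ['bank', 'banque']
```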

A second argument is that people believe different things, and therefore cannot have the same ontology. However, people can assign different truth values to a particular assertion while still accepting the validity of the underlying concepts, the relevant facts, and the way of expressing an argument with which they disagree (using, for instance, the issue/position/argument form). This objection to upper ontologies ignores the fact that a single ontology can represent different belief systems, representing them as different belief systems, without taking a position on the validity of any of them.
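
As a small illustration, the sketch below records two agents' conflicting positions on a hypothetical issue without the ontology itself asserting either claim; the issue/position/argument layout is only loosely followed, and the agents and the issue are invented.

```python
from dataclasses import dataclass

@dataclass
class Position:
    holder: str        # who holds the belief
    claim: str         # the proposition, stated in the shared vocabulary
    truth_value: bool  # the truth value *that holder* assigns to it

# The ontology records that these positions exist; it asserts nothing about
# which one is correct, so it stays consistent while representing both.
issue = "Pluto is a planet"
positions = [
    Position(holder="AgentA", claim=issue, truth_value=True),
    Position(holder="AgentB", claim=issue, truth_value=False),
]

for p in positions:
    print(f"{p.holder} holds '{p.claim}' to be {p.truth_value}")
```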

Even arguments about the existence of a thing require a certain sharing of a concept, even though its existence in the real world may be disputed. Separating belief from naming and definition also helps to clarify this issue and to show how concepts can be held in common even in the face of differing belief. For instance, a wiki as a medium may permit such confusion, but disciplined users can apply dispute-resolution methods to sort out their conflicts. It is also argued that most people share a common set of "semantic primitives", fundamental concepts to which they refer when they are trying to explain unfamiliar terms to other people. An ontology that includes representations of those semantic primitives could then be used to create logical descriptions of any term that a person may wish to define logically. That ontology would be one form of upper ontology, serving as a logical "interlingua" that can translate ideas in one terminology into their logical equivalent in another terminology.
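
A minimal sketch of the "interlingua" idea under stated assumptions: two invented terminologies whose terms are decomposed into a shared set of hypothetical semantic primitives, with translation performed by matching decompositions.

```python
# Decompositions of terms into shared semantic primitives, one table per
# terminology.  The primitives and both vocabularies are invented examples.
TERMINOLOGY_A = {"physician": frozenset({"Human", "Heals", "Professional"})}
TERMINOLOGY_B = {"doctor":    frozenset({"Human", "Heals", "Professional"}),
                 "patient":   frozenset({"Human", "Heals", "Recipient"})}

def translate(term: str, source: dict, target: dict) -> list:
    """Find target-terminology terms whose primitive decomposition matches
    the source term's decomposition exactly."""
    decomposition = source[term]
    return [t for t, d in target.items() if d == decomposition]

print(translate("physician", TERMINOLOGY_A, TERMINOLOGY_B))  # ['doctor']
```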

Advocates argue that most disagreement about the viability of an upper ontology can be traced to the conflation of ontology, language, and knowledge, or to overly specialized areas of knowledge: many people, agents, or groups will have areas of their respective internal ontologies that do not overlap. If they can cooperate and share a conceptual map at all, this may be so useful that it outweighs any disadvantages that accrue from sharing. To the degree that sharing concepts becomes harder the deeper one probes, such sharing also tends to become more valuable. If the problem is as basic as opponents of upper ontologies claim, then it also applies to a group of humans trying to cooperate, who might need machine assistance to communicate easily.

If nothing else, such ontologies are implied by machine translation, which is used when people cannot practically communicate directly. Whether "upper" or not, such ontologies seem likely to proliferate.
