Neural Modeling Fields - Concept Models and Similarity Measures

In the general case, an NMF system consists of multiple processing levels. At each level, output signals are the concepts recognized in (or formed from) input, bottom-up signals. Input signals are associated with (or recognized, or grouped into) concepts according to the models at this level. In the process of learning, the concept-models are adapted to better represent the input signals, so that the similarity between the concept-models and the signals increases. This increase in similarity can be interpreted as satisfaction of an instinct for knowledge, and is felt as an aesthetic emotion.

Each hierarchical level consists of N "neurons" enumerated by index n=1,2..N. These neurons receive input, bottom-up signals, X(n), from lower levels in the processing hierarchy. X(n) is a field of bottom-up neuronal synaptic activations, coming from neurons at a lower level. Each neuron has a number of synapses; for generality, each neuron activation is described as a set of numbers,

X(n) = {Xd(n), d = 1..D},

where D is the number of dimensions necessary to describe an individual neuron's activation.

Top-down, or priming, signals to these neurons are sent by concept-models, Mm(Sm,n),

m = 1..M,

where M is the number of models. Each model is characterized by its parameters, Sm. In the neuron structure of the brain they are encoded by the strengths of synaptic connections; mathematically, they are given by a set of numbers,

Sm = {Sam, a = 1..A},

where A is the number of dimensions necessary to describe an individual model.
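For concreteness, the signal field and the concept-models can be sketched as simple arrays. The following Python/NumPy snippet is only an illustrative assumption about shapes; the names X, S, model_prediction and the sizes N, D, M, A are chosen for the example and are not part of the NMF definition:

```python
import numpy as np

# Bottom-up signal field: N neurons, each activation described by D numbers,
# X(n) = {Xd(n), d = 1..D}.
N, D = 100, 2
rng = np.random.default_rng(0)
X = rng.random((N, D))          # stand-in for sensory activations

# Concept-models: M models, each characterized by A adaptive parameters,
# Sm = {Sam, a = 1..A}.  Here A = D, and each model simply predicts a
# D-dimensional signal value (e.g. the position of object m).
M, A = 3, D
S = rng.random((M, A))          # model parameters, to be adapted by learning

def model_prediction(S_m, n=None):
    """Mm(Sm, n): the signal value that model m predicts at neuron n.
    In this toy sketch the prediction does not depend on n; richer models
    would map (Sm, n) to a prediction using position, orientation, etc."""
    return S_m
```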

Models represent signals in the following way. Suppose that signal X(n) is coming from sensory neurons n activated by object m, which is characterized by parameters Sm. These parameters may include the position, orientation, or lighting of object m. The model Mm(Sm,n) predicts the value X(n) of the signal at neuron n. For example, during visual perception, a neuron n in the visual cortex receives a signal X(n) from the retina and a priming signal Mm(Sm,n) from an object-concept-model m. Neuron n is activated if both the bottom-up signal from the lower-level input and the top-down priming signal are strong. Various models compete for evidence in the bottom-up signals, while adapting their parameters for a better match, as described below. This is a simplified description of perception; even the most routine everyday visual perception uses many processing levels from the retina to object perception. The NMF premise is that the same laws describe the basic interaction dynamics at each level: perception of minute features, of everyday objects, and cognition of complex abstract concepts are due to the same mechanism described below. Perception and cognition involve concept-models and learning. In perception, concept-models correspond to objects; in cognition, models correspond to relationships and situations.

Learning is an essential part of perception and cognition, and in NMF theory it is driven by the dynamics that increase a similarity measure between the sets of models and signals, L({X},{M}). The similarity measure is a function of model parameters and associations between the input bottom-up signals and top-down, concept-model signals. In constructing a mathematical description of the similarity measure, it is important to acknowledge two principles:

First, the content of the visual field is unknown before perception occurs.
Second, it may contain any of a number of objects, and important information could be contained in any bottom-up signal.

Therefore, the similarity measure is constructed so that it accounts for all bottom-up signals, X(n),

L({X},{M}) = ∏n=1..N l(X(n))     (1)

This expression contains a product of partial similarities, l(X(n)), over all bottom-up signals; it therefore forces the NMF system to account for every signal (if even one term in the product is zero, the product is zero, the similarity is low, and the knowledge instinct is not satisfied); this is a reflection of the first principle. Second, before perception occurs, the mind does not know which object gave rise to a signal from a particular retinal neuron. Therefore a partial similarity measure is constructed so that it treats each model as an alternative (a sum over concept-models) for each input neuron signal. Its constituent elements are conditional partial similarities between signal X(n) and model Mm, l(X(n)|m). This measure is “conditional” on object m being present; therefore, when combining these quantities into the overall similarity measure L, they are multiplied by r(m), which represents a probabilistic measure of object m actually being present. Combining these elements with the two principles noted above, the similarity measure is constructed as follows:

L({X},{M}) = ∏n=1..N ∑m=1..M r(m) l(X(n)|m)     (2)

The structure of the expression above follows standard principles of probability theory: a summation is taken over alternatives, m, and various pieces of evidence, n, are multiplied. This expression is not necessarily a probability, but it has a probabilistic structure. If learning is successful, it approximates a probabilistic description and leads to near-optimal Bayesian decisions. The name “conditional partial similarity” for l(X(n)|m) (or simply l(n|m)) follows probabilistic terminology. If learning is successful, l(n|m) becomes a conditional probability density function, a probabilistic measure that the signal in neuron n originated from object m. Then L is the total likelihood of observing the signals {X(n)} coming from objects described by the concept-models {Mm}. The coefficients r(m), called priors in probability theory, contain preliminary biases or expectations: expected objects m have relatively high r(m) values. Their true values are usually unknown and should be learned, like the other parameters Sm.

Note that in probability theory, a product of probabilities usually assumes that the evidence is independent. The expression for L contains a product over n, but it does not assume independence among the various signals X(n): there is a dependence among signals due to the concept-models, since each model Mm(Sm,n) predicts expected signal values in many neurons n.
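As a hedged numerical sketch of expressions (1) and (2), the conditional partial similarities l(X(n)|m) can be taken as Gaussian densities centred on the model predictions (a common choice, assumed here rather than prescribed by the text). The snippet continues the toy arrays X, S and model_prediction from the sketch above, and accumulates the product over n in log-space purely for numerical stability:

```python
from scipy.stats import multivariate_normal

def total_log_similarity(X, S, r, sigma=0.1):
    """log L({X},{M}) with L = prod_n sum_m r(m) * l(X(n)|m), eqs. (1)-(2).
    l(X(n)|m) is assumed Gaussian around the model prediction Mm(Sm, n);
    sigma is an assumed spread, not something fixed by NMF."""
    log_L = 0.0
    for n in range(X.shape[0]):
        # partial similarity l(X(n)): a sum over model alternatives m
        l_n = sum(
            r[m] * multivariate_normal.pdf(
                X[n], mean=model_prediction(S[m], n), cov=sigma ** 2)
            for m in range(S.shape[0])
        )
        log_L += np.log(l_n)    # product over n, accumulated as a sum of logs
    return log_L

r = np.full(S.shape[0], 1.0 / S.shape[0])   # uniform priors r(m) before learning
log_L = total_log_similarity(X, S, r)
```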

During the learning process, concept-models are constantly modified. Usually the functional forms of the models, Mm(Sm,n), are fixed, and learning-adaptation involves only the model parameters, Sm. From time to time the system forms a new concept while retaining an old one as well; alternatively, old concepts are sometimes merged or eliminated. This requires a modification of the similarity measure L, because more models always result in a better fit between the models and the data. This is a well-known problem; it is addressed by reducing the similarity L with a “skeptic penalty function” (see penalty method), p(N,M), that grows with the number of models M, and grows more steeply for a smaller amount of data N. For example, an asymptotically unbiased maximum likelihood estimation leads to the multiplicative penalty p(N,M) = exp(-Npar/2), where Npar is the total number of adaptive parameters in all models (this penalty function is known as the Akaike information criterion; see Perlovsky 2001 for further discussion and references).
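To illustrate the effect of the skeptic penalty on the sketch above: multiplying L by p(N,M) = exp(-Npar/2) corresponds to subtracting Npar/2 from log L. Here Npar is counted from the toy parameterization, purely for illustration:

```python
def penalized_log_similarity(log_L, n_par):
    """log of L * p(N, M) with p(N, M) = exp(-Npar / 2):
    the AIC-style skeptic penalty subtracts Npar / 2 from log L."""
    return log_L - n_par / 2.0

n_par = S.size   # total number of adaptive parameters in all models
penalized = penalized_log_similarity(log_L, n_par)
```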
