Learning in NMF Using Dynamic Logic Algorithm

The learning process consists of estimating model parameters S and associating signals with concepts by maximizing the similarity L. Note that all possible combinations of signals and models are accounted for in expression (2) for L. This can be seen by expanding the sums and multiplying all the terms, which results in M^N items, a huge number: the number of all possible associations between the N signals and the M models. This is the source of combinatorial complexity, which NMF overcomes by using the idea of dynamic logic. An important aspect of dynamic logic is matching the vagueness, or fuzziness, of the similarity measures to the uncertainty of the models. Initially, parameter values are unknown and the uncertainty of the models is high; so is the fuzziness of the similarity measures. As learning proceeds, the models become more accurate, the similarity measure becomes crisper, and the value of the similarity increases.
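For concreteness, here is a minimal sketch of where the M^N count comes from, assuming expression (2) has the product-of-sums form used elsewhere in Perlovsky's NMF writings (expression (2) itself is not reproduced in this excerpt):

    % Assumed form of expression (2): total similarity as a product over
    % the N signals of sums over the M models.
    L = \prod_{n \in N} \sum_{m \in M} r(m)\, l(n|m)
    % Expanding this product of N sums, each containing M terms, gives
    % M^N items, one per possible assignment of signals to models:
    L = \sum_{m_1 \in M} \cdots \sum_{m_N \in M} \prod_{n \in N} r(m_n)\, l(n|m_n)

Evaluating L itself is cheap (a product of N sums); it is any attempt to test the M^N assignments one by one that is combinatorially complex, and dynamic logic avoids exactly that.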

The maximization of similarity L is done as follows. First, the unknown parameters {S_m} are randomly initialized. Then the association variables f(m|n) are computed:

    f(m|n) = \frac{r(m)\, l(n|m)}{\sum_{m' \in M} r(m')\, l(n|m')} .   (3)

The equation for f(m|n) resembles the Bayes formula for a posteriori probabilities: if, as a result of learning, the l(n|m) become conditional likelihoods, then the f(m|n) become Bayesian probabilities for signal n originating from object m. The dynamic logic of NMF is defined as follows:

    \frac{dS_m}{dt} = \sum_{n \in N} f(m|n) \left[ \frac{\partial \ln l(n|m)}{\partial M_m} \right] \frac{\partial M_m}{\partial S_m} ,   (4)

    \frac{df(m|n)}{dt} = f(m|n) \sum_{m' \in M} \left[ \delta_{mm'} - f(m'|n) \right] \left[ \frac{\partial \ln l(n|m')}{\partial M_{m'}} \right] \frac{\partial M_{m'}}{\partial S_{m'}} \frac{dS_{m'}}{dt} ,   (5)

where \delta_{mm'} is the Kronecker delta: 1 if m = m', and 0 otherwise.

The following theorem has been proved (Perlovsky 2001):

Theorem. Equations (3), (4), and (5) define a convergent dynamic NMF system with stationary states defined by \max_{\{S_m\}} L.

It follows that the stationary states of an NMF system are the maximum-similarity states. When the partial similarities are specified as probability density functions (pdfs), or likelihoods, the stationary values of the parameters {S_m} are asymptotically unbiased and efficient estimates of these parameters. The computational complexity of dynamic logic is linear in N.

In practice, when solving the equations through successive iterations, f(m|n) can simply be recomputed at every iteration using (3) rather than updated through the incremental formula (5).
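As an illustration of this iterative scheme, here is a minimal Python sketch, assuming Gaussian partial similarities l(n|m) whose means play the role of the parameters S_m, with uniform priors r(m) and a simple annealing schedule for the variance to mimic the fuzzy-to-crisp transition. The function name and all implementation details are ours, not from the source:

    import numpy as np

    def dynamic_logic(X, M_models, n_iters=100, lr=0.1, seed=0):
        """Sketch of NMF dynamic-logic learning with Gaussian partial
        similarities l(n|m); the parameters S_m are the means mu_m."""
        rng = np.random.default_rng(seed)
        N, D = X.shape
        mu = rng.normal(size=(M_models, D))    # random initialization of {S_m}
        sigma2 = X.var() + 1e-9                # start vague: high variance = fuzzy similarity
        r = np.full(M_models, 1.0 / M_models)  # uniform priors r(m)

        for _ in range(n_iters):
            # Partial similarities l(n|m): isotropic Gaussian densities, shape (N, M).
            d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
            l = np.exp(-0.5 * d2 / sigma2) / (2 * np.pi * sigma2) ** (D / 2)

            # Association variables, eq. (3): f(m|n) = r(m) l(n|m) / sum_m' r(m') l(n|m').
            f = r * l
            f /= f.sum(axis=1, keepdims=True) + 1e-300

            # Parameter update following eq. (4): gradient ascent on
            # sum_n f(m|n) ln l(n|m); for Gaussians the gradient with
            # respect to mu_m is sum_n f(m|n) (X(n) - mu_m) / sigma2.
            grad = (f[:, :, None] * (X[:, None, :] - mu[None, :, :])).sum(axis=0) / sigma2
            mu += lr * grad

            # Shrink the variance so the similarity measures become
            # crisper as the models improve (a simple annealing choice).
            sigma2 = max(0.9 * sigma2, 1e-3)
        return mu, f

For example, mu, f = dynamic_logic(X, M_models=3) on an (N, D) array X returns the fitted means and the final association variables; each iteration costs O(N M D), linear in N as stated above.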

The proof of the above theorem includes a proof that the similarity L increases at each iteration. This has a psychological interpretation: the instinct for increasing knowledge is satisfied at each step, resulting in positive emotions. In this sense, the NMF dynamic logic system emotionally enjoys learning.
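With the sketch above, this monotone increase can be checked numerically by tracking the logarithm of L across iterations (holding sigma2 fixed and keeping the learning rate small). The helper below is ours, written for the assumed Gaussian model:

    import numpy as np

    def log_similarity(X, mu, sigma2, r):
        """ln L = sum_n ln sum_m r(m) l(n|m), for the Gaussian sketch above."""
        D = X.shape[1]
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
        l = np.exp(-0.5 * d2 / sigma2) / (2 * np.pi * sigma2) ** (D / 2)
        return np.log((r * l).sum(axis=1) + 1e-300).sum()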
