Powerful RBF classifier

and its benefits for practical AI
  • Does not need a large amount of training data: users can define their own training sets and do not depend on large third-party databases
  • Learns new examples and categories at any time and takes them immediately into account for the next recognition
  • Latencies to learn and recognize depend on the number of samples and their length, but not on their content or their relationship to one another
  • If you run out of neurons, simply cascade NeuroMem chips, with no need to reprogram or relearn from scratch
  • NeuroMem neurons form a highly non-linear classifier (behaving as a Radial Basis Function or K-Nearest Neighbor classifier)
  • Not knowing or being uncertain are acceptable outputs, much preferable to probabilities or the famous Top-3 or Top-5 criteria used in Deep Learning benchmarks
  • Acknowledging uncertainty enriches the decision process, hinting at the need for more training of the network, or recourse to a second opinion such as another network trained with different features
  • Final classification can involve multiple networks (a.k.a. experts) and require a minimum consensus between them, or other consolidation rules that take advantage of their mutually exclusive domains of unknown and uncertainty
  • A NeuroMem network is not a black box
  • NeuroMem neurons are composed of 95% memory and 5% learning-and-recognition logic. Their memory stores the models they collectively decide to retain while learning.
  • The response of the network is explainable, since the models stored in the neurons (the firing neurons in particular) can be retrieved.

RBF in a few words

  • A model generator and classifier
  • Notion of positive and uncertain classification, essential for robust decision making
  • Notion of unknown, essential for prediction and learning causality
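The notions above can be illustrated with a minimal sketch of an RBF/RCE-style classifier. The class name, the L1 distance, and the influence-field update rule are illustrative assumptions for this sketch, not the actual NeuroMem API: each learned example becomes a prototype with an influence field, fields of conflicting categories shrink as new examples arrive, and recognition reports positive, uncertain, or unknown depending on which prototypes fire.

```python
import numpy as np

class RBFClassifier:
    """Sketch of an RBF/RCE-style model generator and classifier."""

    def __init__(self, max_influence=10.0):
        self.max_influence = max_influence
        self.prototypes = []  # list of (vector, category, influence_radius)

    def learn(self, vector, category):
        """Commit a new prototype, shrinking conflicting influence fields."""
        vector = np.asarray(vector, dtype=float)
        radius = self.max_influence
        for i, (p, cat, r) in enumerate(self.prototypes):
            d = np.abs(vector - p).sum()  # L1 distance between vectors
            if cat != category:
                # Shrink any wrong-category field that covers the new example
                if d < r:
                    self.prototypes[i] = (p, cat, d)
                # The new field must not cover the wrong-category prototype
                radius = min(radius, d)
        self.prototypes.append((vector, category, radius))

    def classify(self, vector):
        """Return ('positive', cat), ('uncertain', cats) or ('unknown', None)."""
        vector = np.asarray(vector, dtype=float)
        firing = {cat for p, cat, r in self.prototypes
                  if np.abs(vector - p).sum() < r}
        if not firing:
            return ("unknown", None)       # no neuron fires: honest "don't know"
        if len(firing) == 1:
            return ("positive", firing.pop())
        return ("uncertain", sorted(firing))  # conflicting categories fire
```

For example, after learning one prototype per category, a vector inside a single influence field yields a positive classification, a vector inside overlapping fields of different categories yields an uncertain one, and a vector outside every field yields unknown.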


What about KNN?

  • A classification mode only! Not a model generator
  • Equivalent to “RBF”, except that it makes a guess instead of reporting an “Unknown”
  • Always returns a response, but the NEAREST match can still be far away
  • Commonly used in clustering algorithms
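The bullets above can be shown with a minimal KNN sketch (function name and tuple layout are illustrative assumptions): it always answers with the nearest stored category, even when that nearest prototype is arbitrarily far, so the caller must inspect the distance to judge how trustworthy the answer is.

```python
from collections import Counter
import numpy as np

def knn_classify(prototypes, vector, k=1):
    """prototypes: list of (vector, category) pairs.
    Returns (category, distance_of_nearest). Always returns a category,
    unlike an RBF mode that can report 'unknown'."""
    vector = np.asarray(vector, dtype=float)
    # L1 distance to every stored prototype, sorted nearest first
    dists = sorted((np.abs(vector - np.asarray(p, dtype=float)).sum(), cat)
                   for p, cat in prototypes)
    top = dists[:k]
    # Majority vote among the k nearest; the closest distance is reported
    # so the caller can see that the "nearest" may still be very far
    winner = Counter(cat for _, cat in top).most_common(1)[0][0]
    return winner, top[0][0]
```

With prototypes at [0, 0] and [10, 10], a query at [500, 500] still returns the category of the [10, 10] prototype, at an L1 distance of 980: a response is always produced, however remote the match.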