Biological creatures (including human beings) do not have data and lines of code on one side and a "fetch and decode unit" on the other. Following this model, the NeuroMem architecture is a bank of interconnected neuromorphic memories working in parallel, dedicated to the intrinsic recognition of patterns learned instantly and incrementally.
The renewed hype for Deep Learning, and consequently the Multi-Layer Perceptron, is driven by huge advances in storage capacity and high-performance computing enabled by semiconductor evolution. Unfortunately, so-called Deep Learning (DL) needs hundreds, if not thousands, of patterns to distinguish between two simple categories such as a dog and a cat. Once trained, correcting a mistake or adding a new category usually requires retraining everything from scratch.
Recently, DARPA told operational defense stakeholders that Deep Learning is not a practical and viable option, and issued the L2M ("Lifelong Learning Machines") request for proposals. We believe our NeuroMem technology has all the attributes to address the recent AI research programs aimed at advancing beyond DL: requiring less training data, detecting novelty and adapting to change in real time, and providing users with explanations of its results.
The NeuroMem architecture is inherently capable of "lifelong learning", which is the basis for "evolution" and "adaptivity".
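As an illustration of the behavior described above, the following is a minimal software sketch of incremental, prototype-based learning in the RCE/RBF family that neuromorphic memories of this kind implement in parallel hardware. The class name, distance metric, and parameters are illustrative assumptions for this sketch, not the NeuroMem API.

```python
def l1_distance(a, b):
    """Manhattan distance between two equal-length pattern vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

class PrototypeMemory:
    """Sketch of incremental, one-shot, prototype-based learning.

    Each 'neuron' stores a prototype pattern, a category, and an
    active influence field (AIF). Recognition compares the input
    against every stored prototype; learning never retrains existing
    neurons, it only shrinks influence fields or commits new neurons.
    """

    def __init__(self, max_influence=100):
        self.neurons = []              # list of (prototype, category, aif)
        self.max_influence = max_influence

    def classify(self, pattern):
        """Return the category of the closest firing neuron, or None (novelty)."""
        best = None
        for proto, cat, aif in self.neurons:
            d = l1_distance(pattern, proto)
            if d < aif and (best is None or d < best[0]):
                best = (d, cat)
        return best[1] if best else None

    def learn(self, pattern, category):
        """Instant, incremental learning: no global retraining."""
        for i, (proto, cat, aif) in enumerate(self.neurons):
            d = l1_distance(pattern, proto)
            if d < aif and cat != category:
                # Correct a mistake locally by shrinking the wrong
                # neuron's influence field, not by relearning everything.
                self.neurons[i] = (proto, cat, d)
        if self.classify(pattern) != category:
            # Commit a new neuron holding the pattern as its prototype.
            self.neurons.append((pattern, category, self.max_influence))
```

A usage example: two examples are enough to separate two categories, a new category can be added at any time, and an input far from every prototype is reported as novel rather than forced into a known class.

```python
mem = PrototypeMemory()
mem.learn((10, 10), "cat")
mem.learn((90, 90), "dog")
mem.classify((12, 11))      # close to the "cat" prototype
mem.classify((500, 500))    # fires no neuron: reported as novelty (None)
```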