Trainable lifelong AI
A learning and inference engine in the same neural network
Teach new examples and new categories at any time. Your input is taken into account immediately and contributes to the next recognition. If, by mistake, you (or an unsupervised algorithm) teach an example that contradicts prior learning, the neurons in conflict autonomously flag themselves as such and correct their influence fields. Neurons can also be pre-trained, meaning they are loaded with a knowledge file previously built by a NeuroMem network.
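The incremental teach-and-correct behavior described above can be sketched in a few lines. This is an illustrative RCE/RBF-style model, not the actual NeuroMem API: the neuron fields (`prototype`, `category`, `aif`) and the `MAXIF` default are assumptions for the sake of the example.

```python
# Illustrative sketch of RBF-style incremental learning with autonomous
# conflict correction. Not the NeuroMem API; names are hypothetical.

MAXIF = 0x4000  # assumed default maximum influence field for a new neuron

def l1(a, b):
    # L1 (Manhattan) distance between two pattern vectors
    return sum(abs(x - y) for x, y in zip(a, b))

def learn(neurons, pattern, category):
    """Teach one example; conflicting neurons shrink their influence
    field, and a new neuron is committed if the pattern is a novelty."""
    recognized = False
    for n in neurons:  # the pattern is broadcast to all committed neurons
        d = l1(pattern, n["prototype"])
        if d < n["aif"]:                 # this neuron fires
            if n["category"] == category:
                recognized = True
            else:
                n["aif"] = d             # conflict: reduce influence field
    if not recognized:
        # the next available neuron stores the pattern and becomes committed
        neurons.append({"prototype": list(pattern),
                        "category": category,
                        "aif": MAXIF})
    return neurons
```

Teaching a second example with a different category near an existing prototype both commits a new neuron and shrinks the influence field of the conflicting one, so the correction takes effect on the very next recognition.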
A neural network which reports real values, not just probabilities
The neurons output detailed information about the recognized patterns and do not hesitate to report when they do not know, or are unsure about, a classification. In applications where mistakes are costly, this behavior is far preferable to probabilities or the famous Top-3 and Top-5 criteria used in Deep Learning benchmarks. Acknowledging uncertainty enriches the decision process by suggesting further training or recourse to a second opinion, such as another network trained on a different feature. For the best robustness, the final decision may require a minimum consensus among neurons of the same network or of multiple networks.
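The three possible outcomes, identified, uncertain, and unknown, can be sketched as follows. This is a hedged illustration of the recognition statuses, not the actual NeuroMem API; the neuron dictionary fields are assumptions.

```python
# Sketch of RBF recognition statuses: the network reports "unknown" or
# "uncertain" honestly instead of forcing a best guess. Illustrative only.

def l1(a, b):
    # L1 (Manhattan) distance between two pattern vectors
    return sum(abs(x - y) for x, y in zip(a, b))

def classify(neurons, pattern):
    """A neuron fires if the pattern falls inside its influence field.
    Returns a status plus the (distance, category) pairs of firing neurons."""
    firing = [(l1(pattern, n["prototype"]), n["category"])
              for n in neurons if l1(pattern, n["prototype"]) < n["aif"]]
    if not firing:
        return "unknown", []             # no neuron fired: "I don't know"
    categories = {cat for _, cat in firing}
    if len(categories) == 1:
        return "identified", sorted(firing)
    return "uncertain", sorted(firing)   # conflicting categories: seek a second opinion

neurons = [
    {"prototype": [10, 10], "category": 1, "aif": 12},
    {"prototype": [20, 20], "category": 2, "aif": 12},
]
status, detail = classify(neurons, [11, 11])   # falls only in neuron 1's field
```

An "uncertain" result, where neurons of different categories fire at once, is exactly the case where a consensus rule or a second network trained on a different feature earns its keep.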
A neural network with transparency, not a black box
When the neurons learn a new pattern, the decision is actually taken collectively among all the neurons already committed. The next available neuron in the chain stores the new pattern in its own memory and becomes committed. Consequently, the content of the neurons represents not only the knowledge they have built together, but also a log of all the patterns retained as novelties at the time they were taught.
Two classifiers in the same architecture
Radial Basis Function (RBF)
- Highly adaptive model generator
- Non-linear classifier
- Reports uncertainties, essential for robust decision making
- Reports novelties, essential for prediction and learning causality
K-Nearest Neighbor (KNN)
- Closest match in msec regardless of the number of models
- Ultra-fast patented Winner-Takes-All for the Search and Sort
- Commonly used for sorting and clustering algorithms
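In software, the KNN mode above amounts to a closest-match search over all stored prototypes; a minimal sketch follows. In the NeuroMem hardware the patented Winner-Takes-All search runs in parallel across all neurons with deterministic latency, which is what makes the response time independent of the number of models; this sequential Python version only illustrates the result, not the mechanism.

```python
# Illustrative KNN closest-match sketch. The NeuroMem hardware performs this
# Search and Sort in parallel across all neurons; this loop is for clarity only.

def l1(a, b):
    # L1 (Manhattan) distance, as used by NeuroMem neurons
    return sum(abs(x - y) for x, y in zip(a, b))

def knn(neurons, pattern, k=3):
    """Return the k closest stored prototypes as (distance, category),
    sorted from best match to worst."""
    ranked = sorted((l1(pattern, n["prototype"]), n["category"])
                    for n in neurons)
    return ranked[:k]
```

Because KNN always returns the closest matches rather than firing decisions, it never answers "unknown", which is why it is the natural mode for sorting and clustering rather than for robust classification.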
- How does RBF model a decision space?
- Can I build knowledge in RBF and use it in KNN mode?
- When to use KNN versus RBF?
A patented fully interconnected neural network
- Simple primitive neuron cell = memory + processing logic
- Homogeneous assembly with no supervisor
- Full interconnect between neuron cells through patented bus
- Broadcast mode
- Deterministic latencies
- Low power consumption
- IP available on FPGA and ASIC
- Easily scalable
Field proven IP
The NeuroMem IP is easy to integrate into an ASIC or FPGA, as General Vision and its customers have proven numerous times.
All the neurons are identical, each composed of a memory cell and some processing logic. They can be replicated as many times as your footprint allows and simply interconnected through a common 26-line bus called the NeuroMem bus.