NeuroMem® Knowledge Builder
NeuroMem Knowledge Builder (NMKB) is a simple framework to experiment with the power and adaptivity of the NeuroMem RBF classifier. It lets you train and test the neurons on your datasets while producing rich diagnostics reports to help you find the best trade-off between accuracy and throughput, analyze uncertainties and confusion, and more.
The application runs under Windows and integrates a cycle-accurate simulation of 8,000 neurons. It can also interface with the NM500 chips of the NeuroMem USB dongle (2,304 neurons) and the NeuroShield board (576 neurons, expandable).
Simple toolchain for non-AI experts
NMKB imports labeled datasets derived from any data type, such as text, measurements, images, and video or audio files. After learning a dataset, a diagnostics report indicates how many models the neurons retained to describe each category, and more. For classification, you can choose to use the neurons as a Radial Basis Function (RBF) or K-Nearest Neighbors (K-NN) classifier. Other settings include the value of K and a consolidation rule to produce a single output in case of uncertainty. The throughput and accuracy of the classification are reported per category.
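The two classification modes above can be illustrated with a minimal sketch. This is not the NMKB or NeuroMem API; the `Neuron` structure, the L1 distance, and the majority-vote consolidation rule are assumptions chosen to show how RBF firing (with "identified", "uncertain", and "unknown" statuses) differs from a plain K-NN vote:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Neuron:
    prototype: list   # the model vector committed by this neuron
    category: int     # the label learned with the model
    influence: int    # the active influence field (RBF radius)

def l1(a, b):
    # compare vectors with an L1 (Manhattan) distance
    return sum(abs(x - y) for x, y in zip(a, b))

def classify(neurons, vector, mode="RBF", k=3):
    """Return (category, status) for a single input vector."""
    dists = sorted((l1(n.prototype, vector), n.category, n.influence)
                   for n in neurons)
    if mode == "RBF":
        # only neurons whose influence field contains the vector "fire"
        firing = [(d, c) for d, c, inf in dists if d <= inf]
        if not firing:
            return None, "unknown"
        if len({c for _, c in firing}) == 1:
            return firing[0][1], "identified"
        # uncertainty: consolidate the firing neurons into one output
        # (here, an assumed simple majority vote)
        winner, _ = Counter(c for _, c in firing).most_common(1)[0]
        return winner, "uncertain"
    # K-NN mode: vote among the K closest models, ignoring influence fields
    winner, _ = Counter(c for _, c, _ in dists[:k]).most_common(1)[0]
    return winner, "knn"
```

In RBF mode a vector far from every model is reported as unknown instead of being forced into a category, which is what makes the uncertainty and confusion diagnostics possible.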
Building knowledge with traceability
NMKB produces a data log to easily track and compare the settings and functions applied to the datasets. The knowledge built by the NeuroMem neurons is also fully traceable: for example, you can filter the incorrectly classified vectors and understand why by comparing their profiles to the firing models. This utility may even pinpoint errors in the input datasets!
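The error-filtering workflow above can be sketched as follows. The record fields (`truth`, `predicted`, `distance`) are hypothetical and only stand in for the information a classification log would carry; sorting by distance to the firing model surfaces the errors closest to a committed model first, the ones most likely to reveal an overlapping model or a mislabeled input:

```python
# Hypothetical log records: each classified vector with its true label,
# the predicted label, and the distance to the closest firing model.
results = [
    {"vector": [12, 40, 7], "truth": "A", "predicted": "B", "distance": 18},
    {"vector": [30, 22, 9], "truth": "B", "predicted": "B", "distance": 4},
    {"vector": [11, 41, 8], "truth": "A", "predicted": "B", "distance": 16},
]

# Keep only the misclassified vectors, closest firing model first
errors = sorted((r for r in results if r["truth"] != r["predicted"]),
                key=lambda r: r["distance"])
for r in errors:
    print(r["truth"], "->", r["predicted"], "at distance", r["distance"])
```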
The knowledge built by the neurons can be saved and exported to NeuroMem hardware platforms. The latter can use the knowledge “as is” or enrich it if they integrate their own learning logic.
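Conceptually, the exported knowledge is just the committed neuron contents. The JSON layout below is an assumption for illustration only (the actual NMKB file format is not described here); it shows the idea of portable knowledge, model vectors with their categories and influence fields, that a target platform can reload and use as-is:

```python
import json

# Hypothetical export of committed neuron contents
knowledge = [
    {"prototype": [0, 0], "category": 1, "influence": 5},
    {"prototype": [10, 10], "category": 2, "influence": 5},
]

exported = json.dumps(knowledge)   # save / export the knowledge
restored = json.loads(exported)    # reload it on the target platform
assert restored == knowledge       # the knowledge is usable "as is"
```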
A typical NeuroMem platform is composed of a Field Programmable Gate Array (FPGA) and a bank of NM500 chips interconnected either directly or through the FPGA, along with the necessary GPIOs and communication ports for the targeted application. All of these platforms share deterministic learning and recognition latencies, independent of the complexity of your datasets. The parallel architecture also enables seamless scaling of the network by cascading chips.