Whether you are interested in video, sound, vibration, or other sensor data, you can teach NeuroMem networks with examples in real time and immediately start monitoring their responses (or lack of response, in the case of novelty detection).
With the new NeuroShield HDK for the ZYNQ7000 SOC, developers can easily integrate real-time pattern learning and recognition into their applications using the software programmability of an ARM®-based processor and/or the hardware programmability of an FPGA.
The pairing of the ZYNQ and NeuroShield lets developers surround a powerful processor with a unique digital neural network tailored to the AI application at hand. Video, audio, vibration, and other sensory inputs can be acquired, formatted, learned, and immediately recognized. Decisions can be made on board to control an actuator, or to transmit or store data of interest. Typical applications include detecting abnormal vibrations in an appliance, classifying outdoor noises from glass debris to bird songs, monitoring a flame (or the absence of one), identifying a person or object in a field of view, and more.
The NeuroShield board features a digital neural network of 576 neurons, expandable to 4032 neurons. It can learn features extracted from single or multiple sensors upon the push of a button or other external or programmatic events managed by the ARM or FPGA of the ZYNQ SOC. As soon as learning has occurred, the neurons can be queried and, depending on the application, the ARM or FPGA can retrieve a simple recognition status (Identified, Uncertain, or Unknown), the category of the closest match, or a detailed classification integrating the responses of all firing neurons.
Unlike a conventional CNN or deep-learning approach, the NeuroMem neurons are capable of intrinsic learning and recognition. They learn autonomously and collectively on the NeuroShield board and adapt their knowledge in real time, so new examples are taken into account at the next recognition. Learning does not require access to massive databases of annotated examples, and the teacher can observe the accuracy and throughput of the network after teaching a few relevant examples of each category. Note that training can also be done offline, since the NeuroShield can be interfaced to a PC via its USB connector. In either case, the knowledge built by the neurons can be saved and ported between platforms using General Vision’s tools.
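To make the learn-by-example behavior concrete, here is a minimal Python sketch of a radial-basis-function classifier with shrinking influence fields, the general scheme NeuroMem-style neurons follow. All names, the `MAXIF` constant, and the L1 distance choice are illustrative assumptions for this sketch, not the actual NeuroMem API.

```python
# Simplified RBF learning/recognition sketch. Illustrative only; the real
# NeuroMem neurons implement this behavior in hardware with their own
# registers and norms.

MAXIF = 0x4000  # assumed maximum influence field for a newly committed neuron

def l1_distance(a, b):
    """L1 (Manhattan) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

class RBFNetwork:
    def __init__(self):
        self.neurons = []  # each neuron: {"prototype", "category", "aif"}

    def learn(self, vector, category):
        """Commit a new neuron unless an existing neuron of the same
        category already fires; shrink the fields of wrongly firing ones."""
        recognized_correctly = False
        min_other = MAXIF  # distance to the closest neuron of another category
        for n in self.neurons:
            d = l1_distance(vector, n["prototype"])
            if n["category"] != category:
                min_other = min(min_other, d)
                if d < n["aif"]:
                    n["aif"] = d  # shrink the wrongly firing neuron's field
            elif d < n["aif"]:
                recognized_correctly = True
        if not recognized_correctly:
            self.neurons.append({"prototype": list(vector),
                                 "category": category, "aif": min_other})

    def classify(self, vector):
        """Return a status mimicking the Identified / Uncertain / Unknown
        responses described above, plus the sorted firing responses."""
        firing = []
        for n in self.neurons:
            d = l1_distance(vector, n["prototype"])
            if d < n["aif"]:
                firing.append((d, n["category"]))
        firing.sort()
        if not firing:
            return "Unknown", []
        if len({cat for _, cat in firing}) == 1:
            return "Identified", firing
        return "Uncertain", firing

net = RBFNetwork()
net.learn([10, 10, 10], category=1)
net.learn([50, 50, 50], category=2)
status, firing = net.classify([10, 10, 10])   # -> "Identified", category 1
status, firing = net.classify([200, 200, 200])  # -> "Unknown": no neuron fires
```

A query falling inside the overlapping fields of neurons holding different categories returns "Uncertain", which is exactly the case the detailed classification (all firing neurons) is meant to resolve.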
Currently supported platforms:
- Digilent Arty Z7 (switches & LEDs, Gigabit Ethernet, HDMI, audio, Pmods, SD card, etc.)
- Avnet MiniZed (switches & LEDs, Wi-Fi, Pmods, etc.)
- The HDK comes with Vivado and SDK projects and can easily be updated to support other ZYNQ7000 development boards
Experience a real-time trainable and powerful non-linear classifier under MATLAB with a choice of two toolkits:
General Vision is introducing NeuroMem Knowledge Builder (NMKB) version 4.0 with new and improved functions including the ability to learn and classify multiple files in batch, the selection of new consolidation rules to waive uncertain classifications, and more.
NeuroMem Knowledge Builder (NMKB) is a simple framework to experiment with the power and adaptivity of the NeuroMem RBF and KNN classifier. It lets you train and classify your datasets in a few mouse clicks while producing rich diagnostics reports to help you find the best compromise between accuracy and throughput, uncertainties and confusion, and more.
The application runs under Windows and integrates a cycle-accurate simulation of 8000 neurons. It can also interface to the NM500 chips of the NeuroMem USB dongle (2304 neurons) and the NeuroShield board (576 neurons, expandable).
Simple toolchain for non-AI experts
NMKB can import labeled datasets derived from any data type, such as text, heterogeneous measurements, images, video, and audio files. The neurons can build a knowledge in a few seconds, and diagnostics reports indicate whether the training dataset was significant and sufficient, how many models were retained by the neurons to describe each category, and more. For the classification of new datasets, you can choose to use the neurons as a Radial Basis Function or K-Nearest Neighbor classifier. Other settings include the value of K and a consolidation rule to produce a single output in case of uncertainty. The throughput and accuracy of the classification are reported per category.
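The K-nearest-neighbor readout with a consolidation rule can be sketched as follows. The rule names (`"majority"`, `"unanimous"`, `"nearest"`) and the function signature are assumptions made for this example; NMKB's actual rule set may differ.

```python
# Illustrative KNN consolidation sketch: turn the neurons' firing responses
# into a single category, or waive the decision when the rule is not met.
from collections import Counter

def knn_consolidate(responses, k=3, rule="majority"):
    """responses: list of (distance, category) pairs returned by the neurons.
    Returns one category, or None when the consolidation rule waives it."""
    top_k = sorted(responses)[:k]          # the K closest firing neurons
    votes = Counter(cat for _, cat in top_k)
    best_cat, best_count = votes.most_common(1)[0]
    if rule == "unanimous":
        return best_cat if best_count == len(top_k) else None
    if rule == "majority":
        return best_cat if best_count > len(top_k) // 2 else None
    return top_k[0][1]                     # "nearest": closest neuron wins

# Hypothetical responses for an outdoor-noise classifier:
responses = [(4, "glass"), (7, "glass"), (9, "bird"), (20, "bird")]
knn_consolidate(responses, k=3, rule="majority")   # -> "glass" (2 of 3 votes)
knn_consolidate(responses, k=3, rule="unanimous")  # -> None (decision waived)
```

A stricter rule such as `"unanimous"` trades coverage for confidence: fewer vectors get a single-category answer, but the answers that remain carry fewer confusions, which is the compromise NMKB's reports help you tune.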
Building a knowledge with traceability
The application produces a data log to easily track and compare the settings and workflows tested on a given dataset. NMKB conveniently exploits the traceability of the knowledge built by the NeuroMem neurons. For example, you can filter the vectors that were classified incorrectly and understand why by comparing their profiles to the firing models. This utility may even pinpoint errors in the input datasets!
Primitive and custom knowledge bases
Finally, the knowledge built by the neurons can be saved and exported to other NeuroMem platforms, which can use the knowledge “as is” or enrich it if they are configured with a learning logic. A typical NeuroMem platform features a Field Programmable Gate Array (FPGA) and a bank of NM500 chips interconnected either directly or through the FPGA, along with the necessary GPIOs and communication ports for the targeted application. They all have in common that the latencies to learn and recognize are deterministic and independent of the complexity of your datasets. The network’s parallel architecture also enables seamless scalability by cascading chips.
In addition to the NeuroMem Knowledge Builder, General Vision offers SDKs interfacing to NeuroMem networks for generic pattern recognition and image recognition, with examples in C/C++, C#, Python, MATLAB, and LabVIEW.
General Vision is releasing a NeuroMem API for the Intel Arduino/Genuino 101 board. This embedded module features a Quark SE microcontroller with a pattern recognition engine composed of 128 NeuroMem neurons. Using the new API, developers can teach and query the neurons in minutes to monitor and classify signals received from the on-board MEMS sensor or other sensors. The API is delivered with examples and a getting started guide.