What if my data does not fit in 256 bytes?

The neurons of the NeuroMem have 256 bytes of memory, meaning that the pattern vectors broadcast to the neurons must be composed of byte values ranging between 0 and 255 and have a length of at most 256 components.

If your input data has a higher dynamic range, converting it to bytes will result in a loss of information. Care must be taken during the conversion to minimize this error:

  • Floating-point values to integers:
    • If the original data is floating-point values, the information after the decimal point will be lost. To overcome this problem, first multiply the floating-point values by a scaling factor so that the significant digits after the decimal point become part of the integer value. For example, if the useful precision is 1/100, multiply all data by a factor of 100 (see the first sketch after this list).
  • Global linear data compression
    1. Survey the amplitude of the entire dataset to find its minimum, maximum, mean and standard deviation
    2. Evaluate the compression ratio (maximum - minimum)/256.
    3. If it is too high, consider truncating the lower bits of the original data in order to increase the minimum value and consequently reduce the compression factor. Verify that the histogram of the “truncated” data retains the same profile as the original. Go to step 5.
    4. Consider truncating values outside the range mean ± standard deviation. Verify that the histogram of the “truncated” data retains the same profile as the original. Go to step 5.
    5. Using the General Vision Knowledge Builder, run a trial learning with validation, using 50% of the dataset annotated with ground-truth categories.
    6. Verify that the learned vectors are properly recognized.
    7. Test on the remaining 50% of the dataset.
  • Selective data compression
    1. Consider segmenting your input vectors by amplitude range and/or amplitude deviation across their n components. If applicable, this approach helps optimize the compression factor per profile of vectors.
    2. Assuming that you identify M different profiles, write the “Profile ID” to the Global Context Register prior to learning and recognition in order to build M sub-networks, one per Profile ID (see the second sketch after this list).
  • Evaluate whether the majority of cases can be addressed with a simple global linear data compression and only a minority of outliers requires selective compression.
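The first sketch below illustrates the floating-point scaling and global linear compression recipes with NumPy. It is a minimal example, not part of any General Vision SDK: the function names, the precision parameter and the sample values are assumptions for illustration only.

```python
import numpy as np

def floats_to_ints(data, precision=0.01):
    """Preserve the significant fractional digits before converting to integers,
    e.g. precision = 1/100 is equivalent to multiplying all values by 100."""
    return np.round(np.asarray(data, dtype=np.float64) / precision).astype(np.int64)

def global_linear_compression(data):
    """Survey the dataset (step 1), report the compression ratio (step 2) and
    linearly map the values onto the 0..255 byte range accepted by the neurons."""
    data = np.asarray(data, dtype=np.float64)
    dmin, dmax = data.min(), data.max()
    print(f"min={dmin} max={dmax} mean={data.mean():.2f} std={data.std():.2f} "
          f"ratio={(dmax - dmin) / 256.0:.2f}")
    scaled = (data - dmin) / max(dmax - dmin, 1e-12) * 255.0
    return np.clip(np.round(scaled), 0, 255).astype(np.uint8)

# Example: turn a small vector of floating-point samples into a byte pattern.
pattern = global_linear_compression(floats_to_ints([12.34, 15.02, 9.87, 41.00]))
```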
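The second sketch outlines selective compression. The segmentation rule, the profile bounds and the set_global_context / learn callables are placeholders: the actual call that writes the NeuroMem Global Context Register depends on the SDK or board you are using.

```python
import numpy as np

def profile_id(vector, thresholds=(500, 5000)):
    """Illustrative segmentation rule: pick a profile from the amplitude
    span of the vector's components (thresholds are arbitrary here)."""
    span = float(np.max(vector) - np.min(vector))
    for pid, limit in enumerate(thresholds, start=1):
        if span <= limit:
            return pid
    return len(thresholds) + 1

def compress_for_profile(vector, vmin, vmax):
    """Linear compression using the bounds surveyed for this profile only."""
    vector = np.asarray(vector, dtype=np.float64)
    scaled = (vector - vmin) / max(vmax - vmin, 1e-12) * 255.0
    return np.clip(np.round(scaled), 0, 255).astype(np.uint8)

def learn_with_profile(vector, category, bounds, set_global_context, learn):
    """Route the example to the sub-network matching its profile, then learn it."""
    pid = profile_id(vector)
    vmin, vmax = bounds[pid]
    set_global_context(pid)   # placeholder for the SDK call writing the context register
    learn(compress_for_profile(vector, vmin, vmax), category)
```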

How do the neurons learn?

When a new example of category A is presented for learning, the neural network first attempts to recognize it. If the example is not recognized by any existing neuron, a new neuron is automatically added to the network to store the new example and its category value A. If the example is recognized by one or more neurons and they all agree that it matches category A, the new example is discarded since it does not add any new information to the existing knowledge base. If the example is recognized by several neurons and one or more identify it with a category other than A, the neurons in disagreement with the category to learn automatically reduce their Influence fields to exclude the new example. This corrective action changes the knowledge base by making certain neurons more conservative in their classification process.

As a result, a learning operation can have the following impact on a knowledge base:

  • Add a new neuron
  • Reduce the Influence field of existing neurons
  • Reduce the Influence field of existing neurons and add a new neuron
  • Do nothing
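The following sketch is a simplified software model of this learning rule, written to make the four outcomes above concrete. It is not the silicon implementation: the L1 distance, the default influence field value and the dictionary-based neuron representation are assumptions, and details such as the minimum influence field are omitted.

```python
def l1_distance(a, b):
    """Distance between a stored prototype and the incoming pattern."""
    return sum(abs(x - y) for x, y in zip(a, b))

def learn(neurons, example, category, max_influence=0x4000):
    """One learning step: recognize first, then shrink and/or add neurons as needed."""
    # Neurons whose influence field covers the example "fire" (recognize it).
    firing = [(l1_distance(n["prototype"], example), n) for n in neurons]
    firing = [(d, n) for d, n in firing if d < n["influence"]]

    # Firing neurons of a different category shrink their influence field
    # just enough to exclude the new example.
    for d, n in firing:
        if n["category"] != category:
            n["influence"] = d

    # If no neuron of the correct category still recognizes the example,
    # commit a new neuron holding it; otherwise the example adds nothing.
    if not any(n["category"] == category and d < n["influence"] for d, n in firing):
        neurons.append({"prototype": list(example),
                        "category": category,
                        "influence": max_influence})
```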

It is important to realize that when the Influence fields of neurons are reduced, an example which was recognized with the correct category at an earlier time may no longer be recognized as such, because the neuron which originally recognized it now excludes it. Therefore, repeating the learning of all examples until the number of neurons reaches a constant is a good method to build a stable knowledge base.
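Under the same modeling assumptions, and reusing the learn() sketch above, this iterative procedure reads as a short loop: repeat the learning pass over the whole training set until the number of committed neurons stops growing.

```python
def build_knowledge(training_set):
    """Repeat learning over all examples until the neuron count is stable."""
    neurons, previous = [], -1
    while len(neurons) != previous:
        previous = len(neurons)
        for example, category in training_set:
            learn(neurons, example, category)
    return neurons
```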