An exploration of the effect of super large n-tuples on single layer RAMnets

Richard Rohwer, Alex Lamb

    Research output: Chapter in Book/Report/Conference proceeding › Chapter

    Abstract

    N-tuple recognition systems (RAMnets) are normally modeled using a small number of input lines to each RAM, because the address space grows exponentially with the number of inputs. It is impossible to implement an arbitrarily-large address space as physical memory. But given modest amounts of training data, correspondingly modest numbers of bits will be set in that memory. Hash arrays can therefore be used instead of a direct implementation of the required address space. This paper describes some exploratory experiments using the hash array technique to investigate the performance of RAMnets with very large numbers of input lines. An argument is presented which concludes that performance should peak at a relatively small n-tuple size, but the experiments carried out so far contradict this. Further experiments are needed to confirm this unexpected result.
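
    The hash-array idea in the abstract, storing only the (tuple, address) pairs actually set during training rather than allocating the full 2^n address space, can be sketched as follows. This is an illustrative sketch, not the authors' code; all function names and parameters are our own, and the random tuple wiring is one common convention for n-tuple recognizers.

    ```python
    # Minimal sketch of a single-layer n-tuple recognizer (RAMnet) whose
    # per-tuple "RAMs" are held in a hash set, so the n-tuple size n can be
    # large without allocating 2**n physical memory locations per RAM.
    import random

    def make_tuples(input_bits, n, num_tuples, seed=0):
        """Randomly assign n input lines to each of num_tuples tuples."""
        rng = random.Random(seed)
        return [rng.sample(range(input_bits), n) for _ in range(num_tuples)]

    def address(pattern, lines):
        """Concatenate the sampled input bits into the RAM address."""
        addr = 0
        for i in lines:
            addr = (addr << 1) | pattern[i]
        return addr

    def train(patterns, tuples):
        """One discriminator per class: a hash set of (tuple_index, address)
        keys.  Only addresses actually seen in training occupy memory, so
        storage scales with the training data, not with 2**n."""
        memory = set()
        for p in patterns:
            for t, lines in enumerate(tuples):
                memory.add((t, address(p, lines)))
        return memory

    def score(pattern, memory, tuples):
        """Count how many tuples address a location set during training."""
        return sum((t, address(pattern, lines)) in memory
                   for t, lines in enumerate(tuples))
    ```

    Classification proceeds as in a conventional RAMnet: train one `memory` per class and assign a test pattern to the class whose discriminator gives the highest score; only the storage of the RAM contents differs.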
    Original language: English
    Title of host publication: Proceedings of the Weightless Neural Network Workshop 1993, Computing with Logical Neurons
    Editors: Nigel Allinson
    Publisher: University of York
    Pages: 33-37
    Number of pages: 5
    Publication status: Published - 1993
    Event: Proceedings of the Weightless Neural Network Workshop '93, Computing with Logical Neurons
    Duration: 1 Jan 1993 – 1 Jan 1993


    Keywords

    • N-tuple recognition systems
    • RAMnets
    • memory
    • hash array technique

