This tutorial will set out to address the key limiters of scalability and to discuss means of increasing the number of devices on a chip to biologically plausible levels, i.e., to investigate biologically inspired connectivity solutions for future nanoelectronic systems. The significance of brain-inspired connectivity stems from the fact that the mammalian brain is one of the most efficient and remarkably robust networks of processing elements currently known to mankind. One reason scalability is so important is that much of the brain's computing power comes from its massive parallelism. The global connectivity of the brain will be analyzed, along with the way it communicates locally. We will first compare the brain's connectivity (based on neurological data) with well-known computer network topologies (originally used in supercomputers). The comparison will reveal that the brain's connectivity is in good agreement with Rent's rule. However, the known network topologies fall short of being strong contenders for mimicking the brain; emphasis will therefore be placed on detailed Rent-based (top-down) modeling of two-layer hierarchical networks. This analysis will identify those generic network topologies which, when combined, could mimic the brain's connectivity. The range of granularities (i.e., numbers of gates/cores/neurons) where such mimicking is possible will be presented and discussed. Accurate wire-length estimates for hierarchical networks with complexity approaching that of the brain will follow, and will help in estimating many other important parameters such as power and reliability. For local interconnects, artificial synapses will be evaluated. Latency, energy dissipation, and signal integrity are inherent problems that normally limit scalability and can negate any computational advantages of parallelism. Various schemes for reducing interconnect density, such as Pulsed Wave Interconnect, Address Event Decoding, and Multiple-Valued Logic, all have deficiencies which prevent them from scaling toward biologically plausible levels, while Time-Multiplexed Architectures seem to exhibit stronger potential. These results should have immediate implications for the design of future networks-on-chip in general (in the short term), for the burgeoning field of multi-/many-core processors in particular (in the medium term), and for forward-looking investigations of emerging brain-inspired nano-architectures (in the long run).
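
For concreteness, Rent's rule as invoked in the comparison above is the empirical power law

    T = t \cdot G^{p}

where G is the number of gates (or cores, or neurons) in a block, T is the number of external terminals (connections leaving that block), t is the average number of terminals per individual gate, and p (0 < p <= 1) is the Rent exponent. As a purely illustrative calculation, with numbers chosen for this sketch rather than taken from the tutorial: t = 4, p = 0.6, and G = 10^4 give T = 4 \cdot (10^4)^{0.6} \approx 10^3 external terminals; larger p implies richer, less localized connectivity.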
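Since the tutorial names Rent-based (top-down) modeling of two-layer hierarchical networks without spelling out the calculation, the following minimal Python sketch shows one way such a terminal-count estimate could be organized. The function names, the module size, and all numeric values are illustrative assumptions, not the tutorial's actual model.

    def rent_terminals(t: float, g: float, p: float) -> float:
        """Rent's rule: average external terminals of a block of g gates."""
        return t * g ** p

    def two_layer_terminals(t: float, n_total: int, n_module: int,
                            p_local: float, p_global: float) -> tuple:
        """Top-down terminal estimate for a two-layer hierarchy:
        gates are grouped into modules of n_module gates governed by a
        local Rent exponent p_local; the modules themselves form a
        second-level network governed by a global exponent p_global."""
        n_modules = n_total // n_module
        # Terminals leaving one module (local level).
        t_module = rent_terminals(t, n_module, p_local)
        # Terminals leaving the whole chip, treating each module as a
        # single "gate" with t_module terminals at the global level.
        t_chip = rent_terminals(t_module, n_modules, p_global)
        return t_module, t_chip

    # Illustrative numbers only: 10^6 gates in modules of 10^3,
    # with t = 4 terminals per gate.
    print(two_layer_terminals(t=4.0, n_total=10**6, n_module=10**3,
                              p_local=0.6, p_global=0.75))

Varying p_local and p_global independently is what lets a two-layer model trade dense local wiring against sparser global wiring, which is the degree of freedom the top-down analysis exploits.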
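Similarly, wire-length estimates of the kind mentioned above typically build on Donath-style hierarchical arguments. The sketch below reproduces only the well-known asymptotic trend of the average interconnect length for a 2-D placement as a function of the Rent exponent p; constant prefactors are omitted, so only relative growth rates are meaningful, and the function name is an assumption of this sketch.

    import math

    def avg_wire_length_trend(n: int, p: float) -> float:
        """Asymptotic average wire length for n gates placed in 2-D with
        Rent exponent p (Donath-type estimate, prefactors dropped):
        grows as n^(p - 1/2) for p > 1/2, as log n at p = 1/2, and
        saturates to a constant for p < 1/2."""
        if p > 0.5:
            return n ** (p - 0.5)
        if p == 0.5:
            return math.log(n)
        return 1.0

    # Trend only: average length for 10^6 gates at several Rent exponents.
    for p in (0.4, 0.5, 0.6, 0.75):
        print(p, avg_wire_length_trend(10**6, p))

The qualitative message is the one the tutorial relies on: once p exceeds 1/2, average wire length (and hence power and latency) grows with system size, which is why connectivity choices dominate scalability at brain-like device counts.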