Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design

Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design
Author: Nan Zheng
Publisher: John Wiley & Sons
Total Pages: 389
Release: 2019-10-18
Genre: Computers
ISBN: 1119507405

Download Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design Book in PDF, Epub and Kindle

Explains current co-design and co-optimization methodologies for building hardware neural networks and algorithms for machine learning applications. This book focuses on how to build energy-efficient hardware for neural networks with learning capabilities, and provides co-design and co-optimization methodologies for building hardware neural networks that can learn. Presenting a complete picture from high-level algorithms to low-level implementation details, Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design also covers the fundamentals and essentials of neural networks (e.g., deep learning), as well as hardware implementations of neural networks. The book begins with an overview of neural networks. It then discusses algorithms for utilizing and training rate-based artificial neural networks. Next comes an introduction to the various options for executing neural networks, ranging from general-purpose processors to specialized hardware, and from digital to analog accelerators. A design example of building an energy-efficient accelerator for adaptive dynamic programming with neural networks is also presented. An examination of fundamental concepts and popular learning algorithms for spiking neural networks follows, along with a look at hardware for spiking neural networks. A subsequent chapter offers readers three design examples (two based on conventional CMOS, and one on emerging nanotechnology) that implement the learning algorithms from the previous chapter. The book concludes with an outlook on the future of neural network hardware.
Includes a cross-layer survey of hardware accelerators for neuromorphic algorithms. Covers the co-design of architectures and algorithms with emerging devices for much-improved computing efficiency. Focuses on the co-design of algorithms and hardware, which is especially critical when using emerging devices, such as traditional or diffusive memristors, for neuromorphic computing. Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design is an ideal resource for researchers, scientists, software engineers, and hardware engineers dealing with ever-increasing requirements on power consumption and response time. It is also excellent for teaching and training undergraduate and graduate students about the latest generation of neural networks with powerful learning capabilities.

Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design

Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design
Author: Nan Zheng
Publisher: John Wiley & Sons
Total Pages: 296
Release: 2019-10-18
Genre: Computers
ISBN: 1119507391

Download Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design Book in PDF, Epub and Kindle


Co-Architecting Brain-inspired Algorithms and Hardware for Performance and Energy Efficiency

Co-Architecting Brain-inspired Algorithms and Hardware for Performance and Energy Efficiency
Author: Sonali Singh
Publisher:
Total Pages: 0
Release: 2023
Genre:
ISBN:

Download Co-Architecting Brain-inspired Algorithms and Hardware for Performance and Energy Efficiency Book in PDF, Epub and Kindle

Understanding and emulating human-like intelligence has been a long-standing goal of researchers across domains, leading to the emergence of an interdisciplinary area called brain-inspired, or neuromorphic, computing. This research area aims to achieve brain-like intelligence and energy efficiency by understanding and emulating the brain's functionality. In the contemporary world of big-data-driven analytics, which has fueled ever-increasing demands for computing power, combined with the end of Moore's-law scaling, the sheer energy cost of providing exascale compute capability could soon become economically and ecologically unsustainable. It therefore becomes imperative to explore alternative, more energy-efficient computing paradigms, and the human brain, with its 20 W operating power budget, provides the ideal inspiration for building these future computing systems. Spiking neural networks (SNNs) are a class of biologically inspired algorithms designed to mimic the natural neural networks found in the brain. Besides playing an important role in biological simulations for neuroscience studies, SNNs have recently been gaining traction as low-power counterparts of high-precision DNNs. However, in order to build systems with brain-like energy efficiency, we need to capture the functionality of billions of neurons and their communication mechanisms in hardware, and this requires innovations at the device/circuit, architecture, algorithm, and application levels of the computing stack. Further, efficiently utilizing and incorporating the SNN-led temporal computing paradigm in day-to-day tasks on time-dependent data also requires considerable algorithmic and architectural innovation.
With these overarching principles, this dissertation addresses the following architectural and algorithmic issues in SNN inference and training: (i) Investigating the design space of scalable, low-power SNNs by taking a holistic approach spanning the device/circuit levels for designing extremely low-power spiking neurons and synapses, architectural solutions for efficient scaling of these networks, and algorithm-level optimizations for improving the accuracy of SNN models. Further, SNN characteristics are compared against those of deep/analog neural networks (DNNs/ANNs), the de facto drivers of modern AI. Based on this study, a low-power SNN, ANN, and hybrid SNN-ANN inference architecture is designed using spintronics-based magnetic tunnel junction (MTJ) devices, while also accounting for the deep interactions between the algorithm and the device. (ii) Training an SNN to solve a problem in a user-level application has so far proved challenging due to its discrete and temporal nature. SNNs are, therefore, often converted from high-precision ANNs that can be easily trained using gradient-descent-based backpropagation. We study the effectiveness of existing ANN-SNN conversion techniques on sparse event-based data emitted by a neuromorphic camera; several low-power, hardware-friendly techniques are proposed to boost conversion accuracy, and their efficacy is evaluated on a gesture-recognition task. (iii) Next, we address the computational challenges involved in training a deep SNN using gradient-descent backpropagation, the most effective and scalable technique for training DNNs and SNNs from scratch. By reducing the memory footprint and computational overhead of backpropagation-through-time-based SNN training, we enable the training and exploration of deeper SNNs on resource-limited hardware platforms, including edge devices.
Techniques such as re-computation, approximation, and combinations thereof are explored in the context of SNN training. In a nutshell, this dissertation identifies the major compute and memory bottlenecks afflicting SNNs today and proposes efficient algorithm-architecture co-design techniques to alleviate them, with the ultimate goal of facilitating the adoption of energy-efficient neuromorphic computing in the mainstream computing paradigm.
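The ANN-to-SNN conversion mentioned in the abstract rests on a simple observation: the firing rate of an integrate-and-fire neuron with reset-by-subtraction approximates a ReLU activation. A minimal sketch of that correspondence (threshold and step count are illustrative, not taken from the dissertation):

```python
def if_neuron_rate(input_current, threshold=1.0, steps=1000):
    """Simulate an integrate-and-fire neuron driven by a constant input
    and return its average firing rate over `steps` timesteps."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current          # integrate the input
        if v >= threshold:
            v -= threshold          # reset by subtraction keeps the residue
            spikes += 1
    return spikes / steps

# The rate tracks relu(x) / threshold for inputs below threshold:
for x in (-0.2, 0.0, 0.3, 0.7):
    rate = if_neuron_rate(x)
    assert abs(rate - max(x, 0.0)) < 1e-2
```

This is why a trained ReLU network can be mapped onto spiking neurons by reinterpreting activations as rates; the conversion techniques studied in the dissertation refine this basic mapping for sparse event-based inputs.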

Energy Efficient and Error Resilient Neuromorphic Computing in VLSI

Energy Efficient and Error Resilient Neuromorphic Computing in VLSI
Author: Yongtae Kim
Publisher:
Total Pages:
Release: 2014
Genre:
ISBN:

Download Energy Efficient and Error Resilient Neuromorphic Computing in VLSI Book in PDF, Epub and Kindle

Realization of the conventional von Neumann architecture faces increasing challenges due to growing process variations, device reliability concerns, and power consumption. As an appealing architectural alternative, brain-inspired neuromorphic computing has drawn a great deal of research interest due to its potentially improved scalability and power efficiency, and its better suitability for processing complex tasks. Moreover, the inherent error resilience of neuromorphic computing allows remarkable power and energy savings through approximate computing. This dissertation focuses on a scalable and energy-efficient neurocomputing architecture that leverages emerging memristor nanodevices and novel approximate arithmetic for cognitive computing. First, a brain-inspired digital neuromorphic processor (DNP) architecture with a memristive synaptic crossbar is presented for large-scale spiking neural networks. We leverage memristor nanodevices to build an N x N crossbar array that stores not only multibit synaptic weight values but also the network configuration data, with significantly reduced area cost. Additionally, the crossbar array is accessible both column- and row-wise to significantly expedite the synaptic weight update process for on-chip learning. The proposed digital pulse-width modulator (PWM) readily creates binary pulses of various durations to read and write the multilevel memristors at low cost. Our design integrates N digital leaky integrate-and-fire (LIF) silicon neurons to mimic their biological counterparts, along with the respective on-chip learning circuits implementing spike-timing-dependent plasticity (STDP) learning rules. The proposed column-based analog-to-digital conversion (ADC) scheme accumulates the pre-synaptic weights of a neuron efficiently and reduces silicon area by using only one shared arithmetic unit to process the LIF operations of all N neurons.
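The digital LIF neurons described above reduce to a simple discrete-time update: leak the membrane potential, integrate the input current, and fire when a threshold is crossed. A sketch of that dynamic (the leak factor, threshold, and reset value below are illustrative, not the dissertation's actual fixed-point parameters):

```python
def lif_step(v, current, leak=0.9, threshold=100):
    """One discrete-time leaky integrate-and-fire update.
    Returns (new_membrane_potential, spiked)."""
    v = v * leak + current      # leak, then integrate the synaptic current
    if v >= threshold:
        return 0, True          # fire and reset to rest
    return v, False

# Constant suprathreshold drive produces periodic spikes:
v, spike_times = 0, []
for t in range(100):
    v, fired = lif_step(v, 15)
    if fired:
        spike_times.append(t)
assert spike_times, "neuron should fire under constant drive"
```

In the hardware described, one shared arithmetic unit time-multiplexes this update across all N neurons, which is why the cost of the add and compare operations dominates, as the evaluation below shows.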
With 256 silicon neurons, the learning circuits, and 64K synapses, the power dissipation and area of our design are evaluated as 6.45 mW and 1.86 mm², respectively, in a 90 nm CMOS technology. Furthermore, arithmetic computations contribute significantly to the overall processing time and power of the proposed architecture; in particular, addition and comparison operations represent 88.5% of processing time and 42.9% of power for digital LIF computation. Hence, by exploiting the built-in resilience of the presented neuromorphic architecture, we propose novel approximate adder and comparator designs that significantly reduce energy consumption with a very low error rate. The significantly improved error rate and critical-path delay stem from a novel carry prediction technique that leverages information from the less significant input bits in a parallel manner. An error-magnitude reduction scheme is proposed to further reduce the amount of error, once detected, at low cost in the proposed adder design. Implemented in a commercial 90 nm CMOS process, the proposed adder is shown to be up to 2.4x faster and 43% more energy efficient than traditional adders while having an error rate of only 0.18%. Additionally, the proposed comparator achieves an error rate of less than 0.1% and an energy reduction of up to 4.9x compared to conventional designs. The proposed arithmetic has been adopted in a VLSI-based neuromorphic character-recognition chip using unsupervised learning. The approximation errors of the proposed arithmetic units are shown to have negligible impact on the training process. Moreover, energy savings of up to 66.5% over traditional arithmetic units are achieved for the neuromorphic chip with scaled supply levels. The electronic version of this dissertation is accessible from http://hdl.handle.net/1969.1/151721
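The carry-prediction idea can be illustrated with a generic speculative adder in which the carry into each bit position is computed from only a fixed window of less-significant bits, breaking the long carry chain so all bits can be produced in parallel. This is a behavioral sketch of the general technique, not the dissertation's actual circuit:

```python
def approx_add(a, b, width=16, window=8):
    """Approximate addition: the carry into bit i is speculated from the
    `window` bits below i, assuming zero carry enters that window."""
    result = 0
    for i in range(width):
        lo = max(0, i - window)
        mask = (1 << (i - lo)) - 1
        # the partial sum of the window determines the speculated carry
        carry = (((a >> lo) & mask) + ((b >> lo) & mask)) >> (i - lo)
        bit = ((a >> i) ^ (b >> i) ^ carry) & 1
        result |= bit << i
    return result
```

With the window covering all lower bits the result is exact; with a short window, errors occur only when a carry propagates across more than `window` positions, which is rare for random inputs, which is why such adders can report sub-1% error rates.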

Neuromorphic Computing Principles and Organization

Neuromorphic Computing Principles and Organization
Author: Abderazek Ben Abdallah
Publisher: Springer Nature
Total Pages: 260
Release: 2022-05-31
Genre: Computers
ISBN: 3030925250

Download Neuromorphic Computing Principles and Organization Book in PDF, Epub and Kindle

This book focuses on neuromorphic computing principles and organization, and on how to build fault-tolerant, scalable hardware for large- and medium-scale spiking neural networks with learning capabilities. In addition, the book comprehensively describes how to organize and design a spike-based neuromorphic system that performs communication, computation, and adaptive learning across networks of spiking neurons for emerging AI applications. The book begins with an overview of neuromorphic computing systems and explores the fundamental concepts of artificial neural networks. Next, it discusses artificial neurons and how they have evolved in their representation of biological neuronal dynamics. Afterward, it discusses implementing these neural networks in terms of neuron models, storage technologies, inter-neuron communication networks, learning, and various design approaches. Then come the fundamental design principles for building an efficient neuromorphic system in hardware. The challenges that must be solved to build a spiking neural network architecture with many synapses are discussed. Learning in neuromorphic computing systems and the major emerging memory technologies that promise to enable neuromorphic computing are then covered. A particular chapter of this book is dedicated to the circuits and architectures used for communication in neuromorphic systems. In particular, the Network-on-Chip fabric is introduced for receiving and transmitting spikes following the Address Event Representation (AER) protocol, along with the memory-access method. In addition, interconnect design principles are covered to help readers understand the overall concept of on-chip and off-chip communication. Advanced on-chip interconnect technologies, including silicon-photonic three-dimensional interconnects and fault-tolerant routing algorithms, are also presented. The book also covers the main threats to reliability and discusses several recovery methods for multicore neuromorphic systems.
This is important for reliable processing in several embedded neuromorphic applications. A reconfigurable design approach that supports multiple target applications via dynamic reconfigurability, network-topology independence, and network expandability is also described in the subsequent chapters. The book ends with a case study of a real hardware-software design of a reliable three-dimensional digital neuromorphic processor, a 3D-IC geared explicitly toward the biological brain's three-dimensional structure. The platform enables high integration density and low spike delay for spiking networks, and features a scalable design. We also present methods for fault detection and recovery in a neuromorphic system. Neuromorphic Computing Principles and Organization is an excellent resource for researchers, scientists, graduate students, and hardware-software engineers dealing with the ever-increasing demands for fault tolerance, scalability, and low power consumption. It is also an excellent resource for teaching advanced undergraduate and graduate students about the fundamental concepts, organization, and actual hardware-software design of reliable neuromorphic systems with learning and fault-tolerance capabilities.
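The Address Event Representation protocol mentioned above serializes spikes as neuron-address events on a shared fabric, so only active neurons consume bandwidth. In real AER hardware timing is implicit in when an event arrives; the toy encoder/decoder below makes timestamps explicit to show the idea (field layout is illustrative):

```python
def aer_encode(spike_raster):
    """Convert a {timestep: [neuron ids]} raster into a stream of
    (timestamp, address) events, ordered by time."""
    return [(t, addr) for t in sorted(spike_raster)
            for addr in spike_raster[t]]

def aer_decode(events):
    """Rebuild the spike raster from an AER event stream."""
    raster = {}
    for t, addr in events:
        raster.setdefault(t, []).append(addr)
    return raster

raster = {0: [3, 17], 2: [5], 3: [3]}
events = aer_encode(raster)
assert aer_decode(events) == raster   # the round trip is lossless
```

Because silent timesteps and silent neurons produce no events, sparse spiking activity compresses naturally, which is what makes AER a good match for Network-on-Chip spike transport.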

Neuromorphic Intelligence

Neuromorphic Intelligence
Author: Shuangming Yang
Publisher: Springer Nature
Total Pages: 256
Release:
Genre:
ISBN: 3031578732

Download Neuromorphic Intelligence Book in PDF, Epub and Kindle

Efficient Processing of Deep Neural Networks

Efficient Processing of Deep Neural Networks
Author: Vivienne Sze
Publisher: Springer Nature
Total Pages: 254
Release: 2022-05-31
Genre: Technology & Engineering
ISBN: 3031017668

Download Efficient Processing of Deep Neural Networks Book in PDF, Epub and Kindle

This book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs). DNNs are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, that accuracy comes at the cost of high computational complexity. Therefore, techniques that enable efficient processing of deep neural networks to improve key metrics, such as energy efficiency, throughput, and latency, without sacrificing accuracy or increasing hardware costs are critical to enabling the wide deployment of DNNs in AI systems. The book includes background on DNN processing; a description and taxonomy of hardware architectural approaches for designing DNN accelerators; key metrics for evaluating and comparing different designs; features of DNN processing that are amenable to hardware/algorithm co-design to improve energy efficiency and throughput; and opportunities for applying new technologies. Readers will find a structured introduction to the field as well as a formalization and organization of key concepts from contemporary work that provide insights that may spark new ideas.
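The throughput and latency metrics discussed here are often estimated with a roofline-style bound: a DNN layer is limited by either its compute time or its memory-transfer time, whichever is longer. A simplified first-order model (the hardware numbers below are made up for illustration):

```python
def layer_time(macs, bytes_moved, peak_macs_per_s, bandwidth_bytes_per_s):
    """Roofline estimate: execution time is bounded by the slower of
    compute time and memory-transfer time."""
    t_compute = macs / peak_macs_per_s
    t_memory = bytes_moved / bandwidth_bytes_per_s
    return max(t_compute, t_memory)

# Illustrative accelerator: 1 TMAC/s peak, 100 GB/s DRAM bandwidth.
t = layer_time(macs=2e9, bytes_moved=50e6,
               peak_macs_per_s=1e12, bandwidth_bytes_per_s=100e9)
# compute-bound here: 2 ms of MACs vs 0.5 ms of memory traffic
assert abs(t - 2e-3) < 1e-9
```

This is why the hardware/algorithm co-design techniques surveyed in the book target both sides of the bound: dataflows that raise data reuse shrink `bytes_moved`, while pruning and reduced precision shrink `macs`.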

Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing

Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing
Author: Sudeep Pasricha
Publisher: Springer Nature
Total Pages: 418
Release: 2023-11-01
Genre: Technology & Engineering
ISBN: 303119568X

Download Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing Book in PDF, Epub and Kindle

This book presents recent advances toward the goal of enabling efficient implementation of machine learning models on resource-constrained systems, covering different application domains. The focus is on presenting interesting new use cases of applying machine learning to innovative application domains, exploring the hardware design of efficient machine learning accelerators and memory optimization techniques, illustrating model compression and neural architecture search techniques for energy-efficient, fast execution on resource-constrained hardware platforms, and explaining hardware-software co-design techniques for achieving even greater energy, reliability, and performance benefits.
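Model compression of the kind surveyed here often starts with uniform post-training quantization, mapping floating-point weights to small integers. A minimal symmetric 8-bit sketch (the scheme and rounding choices are illustrative, not any specific chapter's method):

```python
def quantize(weights, bits=8):
    """Symmetric uniform quantization to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.51, -1.27, 0.0, 0.9]
q, s = quantize(w)
w_hat = dequantize(q, s)
# the round-trip error is bounded by half a quantization step
assert all(abs(a - b) <= s / 2 + 1e-12 for a, b in zip(w, w_hat))
```

Storing `q` instead of `w` cuts weight memory by 4x versus float32, which is the kind of footprint reduction that makes on-device inference on the edge platforms discussed here feasible.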

Energy Efficient High Performance Processors

Energy Efficient High Performance Processors
Author: Jawad Haj-Yahya
Publisher: Springer
Total Pages: 176
Release: 2018-03-22
Genre: Technology & Engineering
ISBN: 9811085544

Download Energy Efficient High Performance Processors Book in PDF, Epub and Kindle

This book explores energy-efficiency techniques for high-performance computing (HPC) systems based on power-management methods. Adopting a step-by-step approach, it describes the power-management flows, algorithms, and mechanisms employed in modern processors such as Intel Sandy Bridge, Haswell, and Skylake, and in other architectures (e.g., ARM). Further, it includes practical examples and recent studies demonstrating how modern processors dynamically manage wide power ranges, from a few milliwatts in the lowest idle power state to tens of watts in turbo state. Moreover, the book explains how thermal and power delivery are managed in the context of this huge power range. The book also discusses the different metrics for energy efficiency, presents several methods and applications of power and energy estimation, and shows how, by using innovative power-estimation methods and new algorithms, modern processors are able to optimize metrics such as power, energy, and performance. Different power-estimation tools are presented, including tools that break down the power consumption of modern processors at sub-processor core/thread granularity. The book also investigates software, firmware, and hardware methods of coordinating reductions in power consumption, for example a compiler-assisted power-management method for overcoming power excursions. Lastly, it examines firmware algorithms for dynamic cache resizing and dynamic voltage and frequency scaling (DVFS) for memory subsystems.
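The DVFS techniques covered here exploit the first-order CMOS relationship that dynamic power scales as C·V²·f, so lowering voltage together with frequency yields roughly cubic power savings. A back-of-the-envelope illustration (this model ignores static leakage and the other effects the book treats in detail):

```python
def dynamic_power(capacitance, voltage, frequency):
    """First-order CMOS dynamic power: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

# Halving both voltage and frequency cuts dynamic power ~8x, while a
# fixed workload takes only ~2x longer, so energy per task drops ~4x.
p_hi = dynamic_power(1e-9, 1.0, 3e9)
p_lo = dynamic_power(1e-9, 0.5, 1.5e9)
assert abs(p_hi / p_lo - 8.0) < 1e-9
```

This asymmetry between power and runtime is what makes DVFS attractive whenever a workload is not latency-critical, and why real governors trade off the frequency reduction against deadlines and leakage.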

Energy-efficient Neocortex-inspired Systems with On-device Learning

Energy-efficient Neocortex-inspired Systems with On-device Learning
Author: Abdullah M. Zyarah
Publisher:
Total Pages: 172
Release: 2020
Genre: Computer architecture
ISBN:

Download Energy-efficient Neocortex-inspired Systems with On-device Learning Book in PDF, Epub and Kindle

"Shifting the compute workloads from cloud toward edge devices can significantly improve the overall latency for inference and learning. On the contrary this paradigm shift exacerbates the resource constraints on the edge devices. Neuromorphic computing architectures, inspired by the neural processes, are natural substrates for edge devices. They offer co-located memory, in-situ training, energy efficiency, high memory density, and compute capacity in a small form factor. Owing to these features, in the recent past, there has been a rapid proliferation of hybrid CMOS/Memristor neuromorphic computing systems. However, most of these systems offer limited plasticity, target either spatial or temporal input streams, and are not demonstrated on large scale heterogeneous tasks. There is a critical knowledge gap in designing scalable neuromorphic systems that can support hybrid plasticity for spatio-temporal input streams on edge devices. This research proposes Pyragrid, a low latency and energy efficient neuromorphic computing system for processing spatio-temporal information natively on the edge. Pyragrid is a full-scale custom hybrid CMOS/Memristor architecture with analog computational modules and an underlying digital communication scheme. Pyragrid is designed for hierarchical temporal memory, a biomimetic sequence memory algorithm inspired by the neocortex. It features a novel synthetic synapses representation that enables dynamic synaptic pathways with reduced memory usage and interconnects. The dynamic growth in the synaptic pathways is emulated in the memristor device physical behavior, while the synaptic modulation is enabled through a custom training scheme optimized for area and power. Pyragrid features data reuse, in-memory computing, and event-driven sparse local computing to reduce data movement by ~44x and maximize system throughput and power efficiency by ~3x and ~161x over custom CMOS digital design. 
The innate sparsity in Pyragrid results in overall robustness to noise and device failure, particularly when processing visual input and predicting time-series sequences. Porting the proposed system onto edge devices can enhance their computational capability, response time, and battery life."--Abstract.
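Hierarchical temporal memory, the algorithm Pyragrid implements, encodes inputs as sparse distributed representations: long binary vectors with only a few active bits, matched by overlap. A toy illustration of why such codes are robust (the vector size and sparsity below are illustrative, not Pyragrid's parameters):

```python
import random

def sparse_code(seed, size=2048, active=40):
    """A sparse distributed representation: `active` of `size` bits set."""
    rng = random.Random(seed)
    return frozenset(rng.sample(range(size), active))

def overlap(a, b):
    """Similarity between two codes = number of shared active bits."""
    return len(a & b)

x, y = sparse_code("cat"), sparse_code("dog")
assert overlap(x, x) == 40              # a code matches itself fully
assert overlap(x, y) < overlap(x, x)    # random codes barely collide
```

Because only ~2% of bits are active, losing a few bits to noise or a failed device barely changes the overlap score, which is the robustness property the abstract attributes to Pyragrid's sparsity.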