Tutorials

No. Title (Leading lecturer)

Monday, June 26: 9:30 - 11:30

1. Hardware-Embedded Neural Signal Processing Dedicated to Next-Generation Brain Implants (Amir M. Sodagar)
2. Using Artificial Neural Networks for the Automated Design of Analog Circuits – Application to ΣΔ Converters (José M. de la Rosa)
4. Hybrid CMOS/Memristor Circuit Design Methodology (Alexander Serb)
5. Recent Advances in RF/Microwave and Millimetre-Wave Passive Filter Design for 5G and Beyond (Xi Zhu)

Monday, June 26: 13:00 - 15:00

3. Architecture and Circuits for High-Dynamic-Range Biopotential-Acquisition Front-Ends (Mingyi Chen)
6. A Cross-Layer Approach for Radiation-Induced Soft Error Assessment and Mitigation of IoT Systems (Luciano Ost)
7. Voltage Reference Design Trends: From Bulky BGR to Tiny Pico-Watt CMOS (Chutham Sawigun)


1. Title: Hardware-Embedded Neural Signal Processing Dedicated to Next-Generation Brain Implants
Speaker: Amir M. Sodagar, York University 
Abstract: A brain-implantable microsystem is a miniaturized system developed for interfacing with the biological neural system inside the brain. Because such systems reside inside or in close proximity to the brain, they make neural interfacing possible with very high temporal and spatial resolution. The neuronal activities recorded by the implantable device are streamed to the outside world, usually through a wireless connection. Recent advances in the development of intra-cortical neural interfacing devices suggest that brain-implantable microsystems with extremely high channel counts will be available in the not-so-distant future. With the successful fabrication of high-density neural interfacing microelectrode arrays in recent years, the engineering bottleneck in realizing next-generation brain-implantable microsystems with thousands of parallel channels has shifted to handling and live-streaming the recorded neural signals through a wireless link with limited bandwidth. Even though a spectrum of efforts has been reported at both the circuit and system levels, it is now apparent that the most effective solution is to overcome this challenge at the signal level. The employment of digital signal processing techniques for data reduction or compression has therefore become an inseparable part of the design of high-density neural-recording brain implants. It is crucial to note that the signal processing required in such applications needs to be implemented efficiently in hardware and to comply with the strict electrical and physical restrictions on the implant.
This tutorial starts by providing an overview of neural-interfacing brain-implantable microsystems and the associated engineering requirements and challenges at both the circuit and system levels. It then addresses the technical and technological challenges of transferring massive amounts of recorded data off high-density neural-recording brain implants. Subsequently, the tutorial focuses on ‘on-implant, hardware-embedded signal processing’ techniques. What distinguishes this class of signal processing from ordinary signal processing, and makes this tutorial attractive to circuits-and-systems specialists, is that it encompasses an interwoven combination of signal-level, circuit-level, system-level, and application-level aspects. On the signal-processing side, the focus of this tutorial is on spike detection and extraction, temporal and spatial neural signal compression, and spike sorting. On the circuits-and-systems side, efficient hardware implementation in terms of power consumption, silicon area, and real-time operation is the main concern. This will be addressed by discussing some of the common design techniques employed by leading research groups.
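As a concrete illustration of the kind of on-implant processing discussed above, the sketch below implements a simple absolute-threshold spike detector with a robust (MAD-based) noise estimate, a technique commonly used for data reduction in neural recording. All parameter values here (sampling rate, threshold multiplier, refractory period) are illustrative assumptions, not taken from any specific implant.

```python
import numpy as np

def detect_spikes(x, fs, k=4.0, refractory_ms=1.0):
    """Absolute-threshold spike detector, a common on-implant data-reduction step.

    The threshold is k times a robust noise estimate, median(|x|)/0.6745
    (the standard MAD-based estimator). Returns sample indices of spikes.
    """
    sigma = np.median(np.abs(x)) / 0.6745         # robust noise std estimate
    thr = k * sigma
    refractory = int(fs * refractory_ms / 1000)   # samples to skip after a hit
    spikes, i = [], 0
    while i < len(x):
        if abs(x[i]) > thr:
            spikes.append(i)
            i += refractory                       # enforce refractory period
        else:
            i += 1
    return np.array(spikes)

# Synthetic demo: 1 s of unit-variance noise plus three injected "spikes".
rng = np.random.default_rng(0)
fs = 30_000                                       # 30 kS/s, typical for spikes
x = rng.normal(0, 1.0, fs)
for t in (5_000, 12_000, 20_000):
    x[t] += 10.0                                  # large deflections
spikes = detect_spikes(x, fs)
print(f"{len(spikes)} spikes detected")
```

Only the detected indices (or short waveform snippets around them) would then need to cross the bandwidth-limited wireless link, rather than the raw stream.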

2. Title: Using Artificial Neural Networks for the Automated Design of Analog Circuits – Application to ΣΔ Converters
Speaker: José M. de la Rosa, Institute of Microelectronics of Seville, IMSE-CNM (CSIC/University of Seville)
Abstract: Prompted by the benefits of technology downscaling, electronic devices are nowadays mostly implemented with digital circuits, which are – a priori – more programmable, energy/cost-efficient, and robust against device imperfections than analog circuits. The increasing use of digital signal processing runs parallel to the development of Electronic Design Automation (EDA) tools and synthesis methods that help designers automate and optimize their circuits from specifications to silicon. By contrast, designing analog circuits still relies on a number of rules of thumb and practical recipes based on prior experience and know-how, which makes it difficult to systematize the design procedure.
Although analog design automation is far behind its digital counterpart, a number of EDA tools and design methodologies have been reported to optimize the performance of analog and mixed-signal Integrated Circuits (ICs). Most approaches follow a top-down/bottom-up synthesis methodology, in which a given system is divided into several hierarchical levels, so that a design (or sizing) process takes place at each abstraction level by transmitting (or mapping) the specifications hierarchically – from the system level to the circuit level and finally to the physical (chip) implementation. In most cases, an optimization-based design methodology is adopted, in which a simulator is used as a performance evaluator and is combined with an optimization engine to find the optimum solution within the design space. To this end, several optimization algorithms – such as genetic algorithms or simulated annealing – have been used to guide behavioral simulators at the system level or electrical simulators at the circuit level. More recently, Artificial Neural Networks (ANNs) have been applied to automate analog circuit design in an optimization-based synthesis methodology [1]-[3]. Some approaches train ANNs to replace the simulator, while others use ANNs as the optimization engine. In the latter case, the ANN is trained to size a given system for a set of specifications. Once trained, the ANN can automate the sizing process and generate optimum sizing solutions for additional sets of specifications that were not included in the training dataset [4]. This approach has been proposed and successfully applied to the design of analog and mixed-signal circuits such as operational amplifiers [4], and more recently to ΣΔ converters [5]-[6].
This tutorial deals with this topic and shows how to use ANNs for the optimization and automated design of analog circuits. A survey of automated design methods and CAD tools is given as motivation for using ANNs in the optimization-based design of analog and mixed-signal circuits. A step-by-step methodology explains the key aspects to be considered, such as dataset generation, ANN modeling, verification, and their application to a given analog design problem. As an application, the ANN-assisted synthesis method is applied in this tutorial to the high-level design of Sigma-Delta Modulators (SDMs). The method, originally proposed in [4] and extended here to SDMs [5][6], can be applied to any analog and mixed-signal integrated circuit. The tutorial is addressed to a general audience interested in learning the fundamentals and practical considerations of using ANNs for the automated design of analog integrated circuits and systems.
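To make the specifications-to-sizing idea above concrete, the sketch below generates a dataset from a toy, hypothetical forward model of a single-stage amplifier and then fits the inverse mapping from specifications to sizing. For brevity, a least-squares regressor stands in for the ANN (the toy model is log-linear, so it suffices); in a real flow, an MLP trained on simulator data would take its place, as in the methodology described above. All device equations and numeric ranges here are illustrative assumptions, not from the cited works.

```python
import numpy as np

# Hypothetical forward model of a single-stage amplifier (illustrative only):
#   GBW = I / (2*pi * n * Ut * C)   (weak-inversion gm = I / (n*Ut))
#   vn  = K / sqrt(I)               (input-referred noise, arbitrary constant K)
n, Ut, K = 1.5, 0.026, 1e-6

def forward(I, C):
    gbw = I / (2 * np.pi * n * Ut * C)
    vn = K / np.sqrt(I)
    return gbw, vn

# 1) Dataset generation: sample sizings, evaluate specs (here: closed form;
#    a real flow would call a circuit simulator).
rng = np.random.default_rng(1)
I = 10 ** rng.uniform(-6, -3, 500)     # bias current, 1 uA .. 1 mA
C = 10 ** rng.uniform(-13, -11, 500)   # load capacitance, 0.1 pF .. 10 pF
gbw, vn = forward(I, C)

# 2) Train the inverse mapping specs -> sizing in log space.
X = np.column_stack([np.log(gbw), np.log(vn), np.ones_like(gbw)])
Y = np.column_stack([np.log(I), np.log(C)])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def size_for(gbw_spec, vn_spec):
    """Return (I, C) predicted to meet the given specifications."""
    logs = np.array([np.log(gbw_spec), np.log(vn_spec), 1.0]) @ W
    return np.exp(logs)

# 3) Query a specification that was not in the training set.
I_hat, C_hat = size_for(1e8, 1e-4)     # 100 MHz GBW, 100 uV noise
print(f"I = {I_hat:.3e} A, C = {C_hat:.3e} F")
```

The key property illustrated is the one emphasized in the abstract: once the inverse model is trained, sizing solutions for unseen specification sets are generated directly, without running a new optimization loop per query.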

3. Title: Architecture and Circuits for High-Dynamic-Range Biopotential-Acquisition Front-Ends
Speaker: Mingyi Chen, Shanghai Jiao Tong University
Abstract: The biopotential-acquisition front-end is one of the essential blocks in a brain-computer interface (BCI). It is widely used to condition physiological signals, such as ExG or neural signals, which typically have very small amplitudes, from tens of μV to several mV. To faithfully digitize these physiological signals under tight power constraints, it is necessary to improve the biopotential-recording front-end circuits in terms of noise and power consumption, a trade-off that can be normalized by the noise efficiency factor (NEF). Meanwhile, in wearable or bidirectional BCIs, the physiological signals are vulnerable to artifacts from motion or brain stimulation. These artifacts typically have much larger amplitudes, from hundreds of mV to volts, and tend to saturate the front-end if not properly handled. As a result, the front-end should be able to resolve very weak physiological signals riding on top of large artifacts, or recover rapidly from the artifacts. Therefore, in addition to keeping noise and power consumption low, dynamic-range (DR) extension techniques are attracting attention in the research area of biopotential-acquisition front-ends. In this tutorial, we focus on cutting-edge DR extension techniques from the architecture level to the circuit level. First, we give a general background on biopotential-acquisition front-ends applied in BCIs and present the overall specifications from a system perspective. Then, architectures and circuits will be introduced, including the conventional topology with an instrumentation amplifier (IA) followed by an analog-to-digital converter (ADC), and the direct-conversion topology. NEF optimization and on-chip pseudo-resistor implementation techniques will be discussed. Subsequently, DR extension circuit techniques, including signal folding, adaptive gain control, differential pulse code modulation (DPCM), and input scaling-down, will be discussed and explained in depth.
Finally, we conclude this tutorial with an outlook on future trends for biopotential-acquisition front-ends applied in BCIs. This is a comprehensive topic in that it covers almost all the current state-of-the-art circuits in this area. Additionally, the research is popular not only in the academic community but also in industry, and circuit designers should put in more effort to achieve practical circuits and systems that improve the quality of human life. We are hence of the opinion that the tutorial will be a great chance for NEWCAS attendees to become acquainted with the latest advancements in bioelectronics.
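As background for the NEF mentioned above, the snippet below evaluates the standard noise-efficiency-factor expression, which compares an amplifier's noise-power trade-off against that of a single ideal bipolar transistor. The current, noise, and bandwidth numbers are made-up illustrative values, not measurements from any particular front-end.

```python
import math

def nef(vn_rms, i_total, bw, T=300.0):
    """Noise efficiency factor:
    NEF = Vni,rms * sqrt(2*Itot / (pi * Ut * 4kT * BW))."""
    k = 1.380649e-23           # Boltzmann constant, J/K
    q = 1.602176634e-19        # electron charge, C
    Ut = k * T / q             # thermal voltage, ~25.9 mV at 300 K
    return vn_rms * math.sqrt(2 * i_total / (math.pi * Ut * 4 * k * T * bw))

# Illustrative example: a front-end drawing 2 uA of total supply current
# with 3 uVrms input-referred noise integrated over a 10 kHz bandwidth.
print(f"NEF = {nef(3e-6, 2e-6, 10e3):.2f}")
```

A lower NEF means the design extracts more noise performance per unit of supply current; values approaching 1 approach the single-BJT bound.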

4. Title: Hybrid CMOS/Memristor Circuit Design Methodology 
Speakers: Alexander Serb, Spyros Stathopoulos, Sachin Maheshwari, Yihan Pan, University of Edinburgh
Abstract: As Moore’s law comes to an end due to the difficulty of developing chips at the atomic level, different approaches, such as hybrid CMOS/memristor circuits, can continue the push towards ever more efficient circuits and systems. A memristor is a two-terminal passive device whose resistance can be increased or decreased depending on the magnitude and polarity of the voltage or current bias applied to it. Memristive technology has experienced explosive growth in the last decade, with multiple device structures of wildly varying underlying electrochemistry being developed across the world for a wide range of applications. This growth is driven by memristors’ unique capabilities of supporting native in-memory computation and multi-bit memory operation, as well as their scalability, low-power operation, and, most importantly, co-integrability with standard CMOS. As memristive devices continue to be developed and slowly progress towards application in next-generation nano-electronics, it is important to develop standard design methodologies and industry-grade tool-chain support. This includes further improvements to memristive device models, such as incorporating new properties and behavioural characteristics.
The purpose of this tutorial is to provide a detailed account of how any lab in the world can incorporate memristive technologies into their chip design flow, allowing co-integration with commercially available CMOS technology. Taking as an example how the authoring group achieved this with metal-oxide devices co-integrated with a commercial 180 nm technology, from “virgin PDK” to full DRC and LVS support and eventually a tape-out, we demonstrate an end-to-end design flow for memristor-based electronics, from the introduction of a custom memristor model into the Cadence Electronic Design Automation (EDA) tools to performing layout-versus-schematic and post-layout checks, including memristive device verification. Various input stimuli were applied to record the memristive device characteristics at both the device level and the schematic level in order to verify the memristor model. We envisage that this systematic guide to introducing memristors into the standard integrated-circuit design flow will be a useful reference for both device developers who wish to benchmark their technologies and circuit designers who wish to experiment with memristor-enhanced systems.
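For readers new to the device, a minimal behavioural sketch is given below using the classic linear ion-drift memristor model (Strukov et al., 2008). The parameter values are generic illustrations; a real PDK flow like the one described above would use calibrated compact models with far richer physics.

```python
import numpy as np

# Linear ion-drift memristor model (illustrative parameters only).
Ron, Roff = 100.0, 16e3        # fully-ON / fully-OFF resistance, ohms
D = 10e-9                      # device thickness, m
mu = 1e-14                     # dopant mobility, m^2/(V*s)

def simulate(v, dt, x0=0.5):
    """Forward-Euler integration of the state x = w/D in [0, 1].
    Positive bias grows the doped (low-resistance) region; negative shrinks it."""
    x = x0
    R_hist = []
    for vk in v:
        R = Ron * x + Roff * (1 - x)     # resistance interpolates Ron..Roff
        i = vk / R
        x += mu * Ron / D**2 * i * dt    # dx/dt = mu * Ron / D^2 * i
        x = min(max(x, 0.0), 1.0)        # hard window: clamp the state
        R_hist.append(R)
    return np.array(R_hist)

dt = 1e-6
R_set = simulate(np.full(2000, 1.0), dt)    # +1 V: SET, resistance drops
R_reset = simulate(np.full(2000, -1.0), dt) # -1 V: RESET, resistance rises
print(f"SET:   {R_set[0]:.0f} -> {R_set[-1]:.0f} ohms")
print(f"RESET: {R_reset[0]:.0f} -> {R_reset[-1]:.0f} ohms")
```

The polarity-dependent, history-dependent resistance shown here is exactly the property that device verification in the EDA flow must exercise with varied input stimuli.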

5. Title: Recent Advances in RF/Microwave and Millimetre-Wave Passive Filter Design for 5G and Beyond
Speakers: Xi (Forest) Zhu, University of Technology Sydney; Li Yang, University of Alcalá
Abstract: With 5G deployment underway, the focus of wireless research is shifting toward beyond-5G systems, which are expected to support a peak data rate of 1 Tb/s, 50 times the peak data rate of current 5G. To reach Tb/s transmission in the beyond-5G era, it is inevitable to utilise a much wider bandwidth, from several hundred megahertz (MHz) to at least several gigahertz (GHz). However, the use of such broad bandwidth creates many design challenges for RF circuits. One of the critical issues is how to effectively design wideband or even ultra-wideband (UWB) filters for both the RF front-end and the intermediate-frequency (IF) chain. The filter, especially the bandpass filter, is a critical component in many wireless systems. Unlike many other building blocks that only affect the “in-band” signal, the filter is unique in a wireless system in that it also cleans up the “out-of-band” spectrum.
In traditional wireless transceivers operating at sub-6 GHz and below, it is common to use active filters, such as active-RC and gm-C filters, because they are compact and can easily be configured for different wireless standards. However, as the operating frequency is pushed higher with significantly enlarged bandwidth, existing active filter design approaches are no longer suitable for filters operating above 6 GHz with several gigahertz of bandwidth, especially for filters operating in the millimetre-wave spectrum. Consequently, there is a clear trend that RF/microwave and millimetre-wave passive filters in different technologies will play the leading role in the coming years.
In this tutorial, to overcome the technical issues encountered in the design of wideband/UWB RF and millimetre-wave filters and filtering devices, recent advances in passive filter design theories and techniques will be introduced to facilitate RF front-end and IF-chain designs for the beyond-5G era. The tutorial will be divided into two parts. Part I focuses on off-chip wideband bandpass filters and filtering devices in microstrip technology [1], [2]. The development of these circuits with different functionalities in planar and multilayer structures will be summarised. In particular, a novel class of multilayer absorptive/reflectionless wideband bandpass filters, as well as filtering circuits for next-generation power-efficient wireless communication systems and sensing networks, will be discussed [3]-[5]. Part II provides detailed design methodologies for on-chip filters, including bandpass filter design in different semiconductor technologies (e.g., CMOS, SiGe, and GaAs) [6]-[10].
In addition, lumped-element-based design approaches have been widely used for conventional passive filters at sub-6 GHz and below. Although this approach results in a relatively compact footprint, it limits the overall performance due to self-resonance frequency and low Q-factor. To mitigate the impact of these limitations on passive filter design, distributed-element-based approaches will be extensively introduced in this tutorial for both off- and on-chip filter design. Unlike the conventional approach, in which bulky transmission lines are used, multiple device-miniaturisation techniques will also be introduced alongside the passive filter designs.
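To ground the lumped-element approach mentioned above, the sketch below performs the textbook lowpass-prototype-to-bandpass transformation for a 3rd-order Butterworth filter. The centre frequency, bandwidth, and port impedance are arbitrary illustrative choices, not values from the tutorial.

```python
import numpy as np

# Lumped-element bandpass synthesis from a lowpass prototype, the classical
# sub-6 GHz approach the tutorial contrasts with distributed designs.
f0, bw, Z0 = 2.4e9, 0.4e9, 50.0      # centre 2.4 GHz, 400 MHz BW, 50 ohm
w0 = 2 * np.pi * f0
FBW = bw / f0                         # fractional bandwidth

# 3rd-order Butterworth lowpass prototype g-values (normalised)
g = [1.0, 2.0, 1.0]

elements = []
for k, gk in enumerate(g):
    if k % 2 == 0:   # series branch becomes a series L-C resonator
        L = gk * Z0 / (FBW * w0)
        C = FBW / (gk * Z0 * w0)
        elements.append(("series", L, C))
    else:            # shunt branch becomes a parallel L-C resonator
        C = gk / (FBW * Z0 * w0)
        L = FBW * Z0 / (gk * w0)
        elements.append(("shunt", L, C))

for kind, L, C in elements:
    print(f"{kind:6s}  L = {L*1e9:7.3f} nH   C = {C*1e12:7.3f} pF")
```

Every L-C branch resonates exactly at the centre frequency; the nanohenry/sub-picofarad values that fall out at gigahertz centre frequencies hint at why self-resonance and Q limitations push designers toward distributed elements at millimetre-wave.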

6. Title: A Cross-Layer Approach for Radiation-Induced Soft Error Assessment and Mitigation of IoT Systems
Speakers: Luciano Ost, Loughborough University, United Kingdom; Ricardo Reis, Universidade Federal do Rio Grande do Sul, Brazil
Abstract: Today’s electronic computing systems are expected to integrate Artificial Intelligence (AI) and Machine Learning (ML) algorithms just as complex as those found in data centres. Such algorithms perform various tasks, including pattern detection, system control, and autonomous operation, in safety-critical systems (e.g., autonomous vehicles, drones, implantable circuits, and robots) operating within dynamic and uncertain environments. In addition, the underlying systems must comply with strict safety and reliability requirements, making their design an inherently challenging problem. Radiation-induced soft errors are recognised to carry a risk of affecting both non-critical and critical system functionalities (e.g., pedestrian detection), which might ultimately result in the loss of life. For such systems, software and hardware engineers must develop lightweight, performance-efficient, and more secure and reliable algorithms that can guarantee fail-safe system operation.
The literature reports that circuit manufacturing technologies and software and hardware architectures have a substantial influence on the soft error reliability of circuits and electronic computing systems. Industrial and research leaders rely on technology-specific techniques and on different approaches to assess and reduce the soft error vulnerability of the underlying systems, ranging from the use of ML to identify a particular source of error (e.g., a software or hardware component), to circuit mitigation techniques, to real laser-induced or neutron radiation experiments. This context motivates this half-day tutorial, which is organised in two parts. The first part reviews the circuit design challenges and the most common techniques used to fabricate robust and reliable chips, while the second explores the benefits and limitations of different assessment and mitigation techniques, considering different levels of abstraction and approaches.
The tutorial will present radiation effects on circuits and several techniques to mitigate soft errors at the circuit level. It also advocates that, given the vast number of alternatives in the design space of electronic computing systems, fast and early soft error assessment, diagnosis, and susceptibility-reduction approaches can result in earlier and often better trade-offs between performance and soft error reliability. To that end, the tutorial will present a semi-automated mitigation flow that can be used early in the design process to reduce an application’s susceptibility to soft errors by applying various system-level mitigation techniques. Unlike flows in the literature, the promoted flow enables designers to identify critical soft errors and the specific application characteristics (e.g., functions, layers) that contribute most directly to their occurrence, thus allowing the hardening to be tuned and more effective fault-tolerance mechanisms to be used.

This tutorial will enable attendees to:
– gain a good knowledge of radiation effects;
– learn several techniques to mitigate radiation effects at the circuit level;
– understand the main challenges inherent to the soft error assessment of electronic computing systems;
– explain the most common practices used to assess and mitigate the occurrence of soft errors in electronic computing systems;
– understand the benefits and impediments of assessing the soft error reliability of complex electronic systems at early stages of the design process.
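As a simple example of the kind of mitigation technique covered in the tutorial, the sketch below shows a bitwise triple-modular-redundancy (TMR) majority voter: the computation runs three times, and a per-bit majority vote masks a single corrupted copy. The scenario is purely illustrative.

```python
def majority(a, b, c):
    """Bitwise majority vote over three redundant integer results:
    each output bit takes the value present in at least two of the copies."""
    return (a & b) | (a & c) | (b & c)

golden = 0b1011_0110                 # reference result of some computation
flipped = golden ^ (1 << 3)          # single-event upset flips one bit
recovered = majority(golden, flipped, golden)
print(f"{recovered == golden}")      # the upset is masked by the vote
```

TMR trades roughly 3x area/energy for single-fault masking, which is exactly the kind of cost/reliability trade-off that early, system-level assessment flows help designers evaluate before committing to hardware.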

7. Title: Voltage Reference Design Trends: From Bulky BGR to Tiny Pico-Watt CMOS
Speaker: Chutham Sawigun, IMEC
Abstract: Driven by nano-scale CMOS technology scaling and the requirement for always-on operation in Internet-of-Things (IoT) sensor nodes, voltage reference (VR) circuits, as part of the power management unit, are required to be extremely compact and low-power. Over the past decade, there has been significant progress in power and area reduction and in the techniques used to meet these requirements. This tutorial will describe principles, architectures, and design techniques for VR circuits operated below the breakdown voltages of core devices in nano-scale CMOS technologies. The talk will begin by reviewing the key performance metrics, including temperature coefficient, line regulation and power supply rejection, and variability. Traditional bandgap and sub-bandgap reference circuits will then be reviewed, and the limitations of these architectures when operated below the bandgap voltage (<1.2 V) will be discussed. Low-voltage architectures, including hybrid (BJT together with MOS) and MOS-only (two-transistor and self-cascode) VRs, will also be examined. State-of-the-art techniques for picowatt-power MOS-only VRs that achieve the low supply sensitivity needed for self-powered IoT sensor nodes will be reported, along with a compact unified technique that achieves the lowest line sensitivity while allowing the reference voltage to be scaled precisely. Finally, the tutorial will conclude with an outlook on possible solutions for further improving the performance of modern VR circuits.
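To illustrate the first metric the tutorial reviews, the snippet below computes a voltage reference’s temperature coefficient using the common box method. The swept data are synthetic and purely illustrative, not measurements from any real reference.

```python
import numpy as np

def tc_ppm_per_c(vref, t):
    """Box-method temperature coefficient:
    TC = (Vmax - Vmin) / (Vmean * (Tmax - Tmin)) * 1e6, in ppm/degC."""
    vref, t = np.asarray(vref), np.asarray(t)
    return (vref.max() - vref.min()) / (vref.mean() * (t.max() - t.min())) * 1e6

# Synthetic sweep: a ~0.6 V CMOS reference measured from -20 to 80 degC,
# with a small curvature ("bowing") typical of first-order compensation.
t = np.linspace(-20, 80, 11)
vref = 0.600 + 1.5e-4 * np.cos(np.pi * (t - 30) / 100)
print(f"TC = {tc_ppm_per_c(vref, t):.1f} ppm/degC")
```

Because the box method normalises the worst-case output spread to the mean reference voltage and the temperature span, it gives a single figure that is directly comparable across the bandgap, sub-bandgap, and MOS-only architectures surveyed in the talk.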