CE Talks 2017-2018


Photo of Peng Li

May 30th (Wednesday), 10:00am
Harold Frank Hall (HFH), Rm. 4164 (ECE Conf. Rm.)  


Peng Li, Professor, ECE, Texas A&M University


"From Statistics to Spiking Neurons: Algorithms, Architectures, and Circuits for IC Design Verification and Neuromorphic Computing"


The world around us is increasingly data-driven. While data sampling in certain physical processes comes at a price, other types of data (e.g., images or speech) may be generated in huge volumes, which are further processed either at the edge or in the cloud.

 

At the data-scarce end of the spectrum, I will first examine the challenges in verifying analog and mixed-signal ICs under the stringent reliability specifications set by safety-critical applications such as automotive electronics. Here, design failures may be extremely rare but catastrophic. I will highlight how Bayesian learning can be integrated with formal verification to process small amounts of simulation data and yet identify rare design failures that would otherwise go undetected.
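
As a loose illustration of the kind of statistical reasoning involved (not the speaker's actual method, which couples Bayesian learning with formal verification), the sketch below estimates a rare failure probability from a small set of pass/fail circuit simulations using a conjugate Beta-Bernoulli update; the simulation counts are hypothetical.

# Minimal sketch, assuming only pass/fail outcomes from Monte Carlo circuit
# simulations; this is NOT the speaker's Bayesian-plus-formal-verification flow.
from scipy.stats import beta

def failure_posterior(n_sims, n_fails, alpha0=1.0, beta0=1.0):
    """Beta posterior parameters over the (unknown) failure probability."""
    return alpha0 + n_fails, beta0 + n_sims - n_fails

# Hypothetical data: 5000 simulations of an analog block, zero observed failures.
a, b = failure_posterior(n_sims=5000, n_fails=0)
post_mean = a / (a + b)              # posterior mean failure rate
upper_99 = beta.ppf(0.99, a, b)      # 99% upper credible bound
print(f"mean ~ {post_mean:.2e}, 99% bound ~ {upper_99:.2e}")

Even with zero observed failures, the posterior still assigns a nonzero upper bound on the failure rate, which is one reason rare-failure verification cannot rely on plain simulation counts alone.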

 

At the other end of the spectrum, brain-inspired spiking neural networks (SNNs) have gained substantial momentum, fueled in part by advances in emerging devices and neuromorphic hardware (e.g., Intel Loihi and IBM TrueNorth) that promise ultra-low-energy, event-driven processing of large data volumes. Nevertheless, major challenges remain before spike-based temporal computation becomes a competitive choice for real-world applications. We take a three-faceted approach: 1) empowering SNNs based on deep feedforward and recurrent architectures; and tackling the major challenges in training such complex SNNs by developing 2) brain-inspired supervised and unsupervised learning mechanisms and 3) spike-train-level error backpropagation that operates on top of spiking discontinuities and scales to large networks. The design of FPGA spiking neural processors with on-chip learning will be discussed in the context of recurrent liquid state machine models.
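
For readers unfamiliar with spiking neurons, the sketch below simulates a basic leaky integrate-and-fire neuron to show how continuous inputs become discontinuous spike trains; it is a textbook model, not the speaker's SNN architectures or training methods.

# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, for intuition only.
import numpy as np

def lif_spike_train(input_current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """Simulate a single LIF neuron and return its binary spike train."""
    v, spikes = 0.0, []
    for i_t in input_current:
        v += dt / tau * (-v + i_t)   # leaky integration of the input current
        if v >= v_th:                # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset              # reset the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
spikes = lif_spike_train(rng.uniform(0.0, 2.5, size=200))
print("mean firing rate:", spikes.mean())

The hard threshold is the discontinuity that makes standard backpropagation inapplicable and motivates spike-train-level error backpropagation.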


Biography

Peng Li received the Ph.D. degree in electrical and computer engineering from Carnegie Mellon University in 2003. He is presently a Professor of Electrical and Computer Engineering at Texas A&M University and an affiliated faculty member of the Texas A&M Institute for Neuroscience. His research interests are in integrated circuits and systems, brain-inspired computing, electronic design automation, and computational brain modeling. He has authored or co-authored over 200 publications and edited two books.


April 4th (Wednesday), 11:00am
Harold Frank Hall (HFH), Rm. 4164 (ECE Conf. Rm.)  


Mingcong Song, Ph.D. candidate, ECE, University of Florida



"Towards Efficient Architectural Support for AI-based IoT Applications"


In recent years, artificial intelligence (AI) techniques, represented by deep neural networks (DNNs), have demonstrated transformative impacts on modern Internet-of-Things (IoT) applications such as smart cities and smart transportation. With the increasing computing power and energy efficiency of mobile devices, there is growing interest in running AI-based IoT applications on mobile platforms. We therefore believe next-generation AI-based applications will be pervasive across all platforms, ranging from central cloud data centers to edge-side wearable and mobile devices. However, we observe several architectural gaps that challenge this pervasive AI. First, the diversity of computing hardware resources and of end-user requirements makes it difficult to deploy AI-based applications across IoT platforms, resulting in inferior user satisfaction. Second, statically trained DNN models cannot efficiently handle the dynamic data of real IoT environments, which leads to low inference accuracy. Lastly, training DNN models still involves extensive human effort to collect and label large-scale datasets, which becomes impractical in the IoT big-data era, where raw IoT data is largely unlabeled and uncategorized. In this talk, I will introduce my research, which makes pervasive AI-based IoT applications efficient, user-satisfying, and intelligent. I will first introduce Pervasive AI, a user-satisfaction-aware deep learning inference framework that provides the best user satisfaction when migrating AI-based applications from the cloud to a wide range of platforms. Next, I will describe In-situ AI, a novel computing paradigm tailored to AI-based IoT applications. Finally, to achieve real intelligence (support for autonomous learning) in IoT nodes, I will introduce an unsupervised GAN-based deep learning accelerator.


Biography

Mingcong Song is a Ph.D. candidate in the Department of Electrical and Computer Engineering at the University of Florida. His research interests include architectural support for emerging AI applications, AI-enabled IoT system design, and heterogeneous computing for big-data applications. His work has been published in top-tier conferences including ISCA, HPCA, ASPLOS, PACT, and ICS, and received a best paper nomination at HPCA 2017. He received a B.S. degree from Huazhong University of Science and Technology in 2010 and an M.S. degree from the University of Chinese Academy of Sciences in 2013.




April 2nd (Monday), 11:00am
Harold Frank Hall (HFH), Rm. 1132 (Computer Science Conf. Rm.)  


Abhishek Bhattacharjee, Associate Professor of Computer Science at Rutgers University



"Designing Efficient Heterogeneous Computer Systems Across Computing Scales"


Computer systems at all scales, from server-class systems for datacenters to embedded systems on IoT devices, are embracing extreme heterogeneity in hardware and software. While heterogeneity offers immense computational promise, it also poses programmability and performance/energy challenges. In this talk, I will show how we can leverage decades of research on traditional general-purpose CPUs to improve the programmability and efficiency of two classes of emerging heterogeneous systems. In the first example, we improve the programmability and performance of server-class GPGPUs using virtual memory techniques developed over decades for traditional CPUs. In the second, I co-opt hardware traditionally designed for branch prediction in servers to instead manage energy in brain implants, which have a completely different power/performance profile. At a high level, these two examples represent two types of heterogeneity, intra-device and inter-device, and our work shows how we can reap the benefits of specialization with modest hardware enhancements to these systems.


Biography

Abhishek Bhattacharjee is an Associate Professor of Computer Science at Rutgers, The State University of New Jersey. Abhishek’s research interests are at the hardware/software interface, as it relates to the design of server-scale systems for datacenters and embedded systems in IoT and biomedical devices. He is the recipient of the CV Starr Fellowship from the Princeton Neuroscience Institute, and the Rutgers Chancellor’s Award for Faculty Excellence in Research.




March 19th (Monday), 11:00am
Harold Frank Hall (HFH), Rm. 1132 (Computer Science Conf. Rm.)  


Mingyu Gao, Ph.D. candidate, Stanford University



"Scalable Near-Data Processing Systems for Data-Intensive Applications"


Big data applications such as deep learning and graph analytics process massive data within rigorous time constraints. For such data-intensive workloads, the frequent and expensive data movement between memory and compute modules dominates both execution time and energy consumption, seriously impeding performance scaling. Recent semiconductor 3D integration technologies allow us to avoid data movement by executing computations closer to the data. Nevertheless, realizing such near-data processing systems still faces critical architectural challenges, including efficient processing logic circuits, practical system architectures and programming models, and scalable parallelization and dataflow scheduling schemes.

 

I have proposed a coherent set of hardware and software solutions to enable efficient, practical, and scalable near-data processing systems for both general-purpose and specialized computing platforms. First, I will present an efficient hardware logic substrate that uses dense memory arrays, such as DRAM and non-volatile memories, to build a bit-level, multi-context reconfigurable fabric with high density and low power consumption. Then, I will briefly describe a practical near-data processing architecture and runtime system. Finally, I will discuss domain-specific parallelization schemes and dataflow optimizations that exploit different levels of parallelism in deep neural networks to improve scalability. Overall, the presented techniques not only demonstrate order-of-magnitude improvements but also represent practical large-scale system designs that realize these significant benefits.
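
As a generic illustration of why dataflow and data placement matter (this is standard loop tiling, not the speaker's near-data processing designs), the sketch below blocks a matrix multiplication so each tile of the operands is reused many times once it has been brought close to the compute units.

# Generic illustration only: tiled matrix multiply to reduce data movement.
import numpy as np

def tiled_matmul(A, B, tile=32):
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m), dtype=A.dtype)
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                # Each (tile x tile) block is loaded once and reused here.
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, k0:k0+tile] @ B[k0:k0+tile, j0:j0+tile]
                )
    return C

A = np.random.rand(96, 64)
B = np.random.rand(64, 80)
assert np.allclose(tiled_matmul(A, B), A @ B)

Near-data processing pushes this idea further by moving the computation into or next to the memory stack instead of moving tiles to the processor.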


Biography

Mingyu Gao is a Ph.D. candidate in the Electrical Engineering Department at Stanford University. His research interests include energy-efficient computing and memory systems, specifically on practical and efficient near-data processing for data-intensive analytics applications, high-density and low-power reconfigurable architectures for data center services, and scalable accelerators for large-scale neural networks. He received an MS in electrical engineering from Stanford, and a BS in microelectronics from Tsinghua University in Beijing, China.





March 21st (Wednesday), 11:00am
Harold Frank Hall (HFH), Rm. 4164 (ECE Conf. Rm.)  


Kaisheng Ma, Ph.D. candidate, CSE, The Pennsylvania State University



"Self-powered Internet-of-Things Nonvolatile Processor and System Exploration and Optimization"


Energy harvesting has been widely investigated as a promising method of providing power for ultra-low-power applications. Such energy sources include solar energy, radio-frequency (RF) radiation, the piezoelectric effect, thermal gradients, etc. However, the power supplied by these sources is highly unreliable and dependent on ambient environmental factors. Hence, it is necessary to develop specialized systems that are tolerant of this power variation and capable of making forward progress on their computation tasks.

In this talk, I will first explore the architectural design space of nonvolatile processors under different microarchitectures, different input power sources, and different policies for maximizing forward progress. I will show that different levels of microarchitectural complexity best fit different power sources, and even different power traces within the same source.

 

To further address this problem, I propose techniques including frequency scaling and resource allocation that dynamically adjust the microarchitecture to achieve maximum forward progress. Such nodes usually perform similar operations on each new input record, which provides an opportunity to mine the information in buffered historical data, at potentially lower effort, while processing new data, rather than abandoning old inputs because of limited computational energy. We call this approach incidental computing and explore its synergies with approximation techniques. Last but not least, I take fog computing in wireless sensor networks (WSNs) as a system-level example, performing optimization at the programming, intra-chain, and inter-chain levels, and show how nonvolatility features, including nonvolatile processors and nonvolatile RF, can benefit the system, and how other optimizations, such as load balancing under unstable power and increasing node density for quality of service, can be applied to the fog-computing system.
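
To make the forward-progress trade-off concrete, here is a hypothetical, much-simplified simulation of a checkpointing processor under unreliable harvested power; the parameters and the policy are illustrative assumptions, not the speaker's nonvolatile processor designs.

# Hypothetical toy model: work advances only while power is on, checkpoints
# cost cycles, and work since the last checkpoint is lost on an outage.
import random

def simulate(total_work, backup_period, backup_cost, p_outage=0.02, seed=0):
    rng = random.Random(seed)
    done, since_backup, cycles = 0, 0, 0
    while done < total_work:
        cycles += 1
        if rng.random() < p_outage:          # power failure
            done -= since_backup             # un-backed-up work is lost
            since_backup = 0
            continue
        done += 1
        since_backup += 1
        if since_backup >= backup_period:    # checkpoint to nonvolatile state
            cycles += backup_cost
            since_backup = 0
    return cycles

for period in (5, 20, 100):
    print(period, simulate(total_work=10_000, backup_period=period, backup_cost=10))

Sweeping the backup period exposes the tension the talk addresses: frequent backups waste cycles, while infrequent backups lose more work on every outage.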


Biography

Kaisheng Ma is a Ph.D. candidate in the Department of Computer Science and Engineering at The Pennsylvania State University. His research focuses on computer architecture, especially the exploration and optimization of IoT fog-computing architectures. He was among the first to explore energy-harvesting nonvolatile processor design, including the trade-offs between re-execution and backup penalty and between architectural complexity and performance. Two key findings are that (a) optimizing for low power is not a good fit for energy harvesting, and (b) different application scenarios call for different architectural choices. This work introduced nonvolatile-processor architectural trade-offs to the architecture community; it was recognized with the HPCA 2015 Best Paper Award and selected as an IEEE MICRO Top Pick in 2016. At MICRO 2017, he proposed the concept of incidental approximate computing for energy-harvesting scenarios, based on the observation that partial, lower-quality outputs can sometimes be more useful and urgent than delayed best-quality outputs. His system-level optimizations for energy-harvesting scenarios based on nonvolatility will appear at ASPLOS 2018. Over the past five years, he has published 35 papers (about half as first author), accumulated 362 Google Scholar citations (as of February 2018), and holds several U.S. patents. As first author, he has won several awards, including the 2015 HPCA Best Paper Award, a 2016 IEEE MICRO Top Picks selection, and the 2017 ASP-DAC Best Paper Award. His honors include the 2016 Penn State CSE Department Best Graduate Research Award (among roughly 170 Ph.D. students), the 2016 cover feature of the NSF ASSIST Engineering Research Center newsletter (among 40 graduate students across four participating universities), and the 2011 Yang Fuqing & Wang Yangyuan Academician Scholarship (1 of 126, Peking University). His research interests include nonvolatile processor architectures and neural network accelerator design.




March 14th (Wednesday), 11:00am
Harold Frank Hall (HFH), Rm. 4164 (ECE Conf. Rm.)  


Luca Amaru, Senior II R&D Engineer, Synopsys Inc.



"Accelerating Logic Computing"


During the last six decades, computer technology has moved forward at an incredible pace. From 1958 to 2018, computing systems have seen a trillion-fold increase in performance and a commensurate increase in the complexity of the tasks they can solve. This exceptional progress has become reality thanks to continuous research in device technology and computing models. In this talk, I show how new, technology-aware logic models can further accelerate computing in present and future technologies. First, I illustrate the enabling role of native logic abstractions in the study of emerging nanotechnologies, ranging from enhanced-functionality devices to new computational paradigms. Second, I present technology-driven models for logic manipulation algorithms and data structures, pushing the solving limits for hard problems in computer science. Finally, I introduce a cloud-scale FPGA accelerator for Boolean SATisfiability (SAT), combining algorithmic and architectural innovations, capable of solving hard SAT problems that state-of-the-art methods cannot answer.


Biography

Luca Gaetano Amarù received the B.S. and M.S. degrees in electronic engineering from the Politecnico di Torino, Turin, Italy, in 2009 and 2011, respectively, and the Ph.D. degree in computer science from the Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland, in 2015. He is a Senior II R&D Engineer in the Design Group of Synopsys Inc., Mountain View, CA, USA, where he is responsible for designing efficient data structures and algorithms for EDA. Previously, he was a research assistant at EPFL and a visiting researcher at Stanford University. His current research interests include logic manipulation, with emphasis on optimization and SAT; accelerating logic reasoning engines at both the algorithmic and hardware implementation levels; and beyond-CMOS design and nanotechnology exploration. Dr. Amarù is the author or co-author of 75+ technical articles. His awards and achievements include the Synopsys Leading Edge Talent Program (2017), a Best Paper Award nomination in TCAD (2017), the EDAA Outstanding Dissertation Award (2016), the Best Presentation Award at the FETCH conference (2013), a Best Paper Award nomination at the ASP-DAC conference (2013), and others.


Photo of Si Si

March 12th (Wednesday), 11:00am
Harold Frank Hall (HFH), Rm. 4164 (ECE Conf. Rm.)
 

Si Si, Researcher and Software Engineer, Google Research

"On-device Machine Learning: Small Models and Fast Prediction"


Many complex machine learning models have demonstrated tremendous success on massive data. However, these advances are not necessarily feasible when deploying the models to devices, due to their large model size and evaluation cost. In many real-world applications such as robotics, self-driving cars, and smartphone apps, the learning tasks need to be carried out in a timely fashion on computation- and memory-limited platforms. Therefore, it is extremely important to study how to build “small” models from “big” machine learning models. The main topic of my talk is how to reduce model size and speed up evaluation for complex machine learning models while maintaining similar accuracy. Specifically, I will discuss how to compress models and achieve fast prediction for different real-world machine learning applications, including matrix approximation and extreme classification.
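
One standard way to build a “small” model from a “big” one, in the spirit of the matrix-approximation applications mentioned above, is low-rank factorization of weight matrices; the sketch below is a generic example and not the speaker's specific algorithms.

# Generic sketch of low-rank compression of a weight matrix via truncated SVD.
import numpy as np

def low_rank_compress(W, rank):
    """Factor a weight matrix W (n x m) into U (n x r) and V (r x m)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank, :]

W = np.random.rand(512, 1024)          # hypothetical dense layer weights
U, V = low_rank_compress(W, rank=64)
ratio = W.size / (U.size + V.size)     # parameter reduction
err = np.linalg.norm(W - U @ V) / np.linalg.norm(W)
print(f"compression ratio ~ {ratio:.1f}x, relative error {err:.3f}")

At prediction time, a dense layer x @ W is evaluated as (x @ U) @ V, trading some accuracy for roughly a 5x reduction in parameters and multiplications in this configuration.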


Biography

Si Si is a researcher and software engineer at Google Research. Her research focuses on developing scalable machine learning models. Si obtained her bachelor's degree from the University of Science and Technology of China in 2008, her M.Phil. degree from the University of Hong Kong in 2010, and her Ph.D. from the University of Texas at Austin in 2016. She received the MCD fellowship (2010-2013) and the best paper award at ICDM 2012, and was selected as one of the Rising Stars in EECS in 2017.



Photo of Siddharth Joshi

March 6th (Tuesday), 10:00am
Harold Frank Hall (HFH), Rm. 4164 (ECE Conf. Rm.)


Siddharth Joshi, Postdoctoral Fellow, Bioengineering, UC San Diego



"IoT in the CMOS Era and Beyond: Leveraging Mixed-Signal Arrays for Ultra-Low-Power Sensing, Computation, and Communication"



Energy efficiencies obtained by analog processing are critical for next-generation “smart” sensory systems that implement intelligence at the edge. Such systems are widely applicable in areas like biomedical data acquisition, continuous infrastructure monitoring, intelligent sensor networks, and data analytics. However, adaptive analog computing is sensitive to nonlinearities induced by mismatch and noise, which has limited the application of analog signal processing to signal conditioning prior to quantization. This has relegated the bulk of the processing to the digital domain, or to a remote server, limiting system efficiency and autonomy. This talk highlights principled techniques for algorithm-circuit co-design that overcome these obstacles, leading to energy-efficient, high-fidelity mixed-signal computation and adaptation.

 

First, I will provide analytical bounds on the energetic advantages derived from alleviating the need for highly accurate and energy-consuming analog-to-digital conversion through high-resolution analog pre-processing. I will then present an embodiment of this principle in a micropower, multichannel, mixed-signal array processor developed in 65nm CMOS. Spatial filtering with the processor yields 84 dB of analog interference suppression at only 2 pJ of energy per mixed-signal operation. At the algorithmic level, I will present work on a gradient-free variation of coordinate descent, Successive Stochastic Approximation (S2A). S2A is resilient to the adverse effects of analog mismatch encountered in compact, low-power realizations of high-resolution, high-dimensional mixed-signal processing systems. Over-the-air experiments employing S2A in non-line-of-sight conditions demonstrate adaptive beamforming achieving 65 dB of processing gain.
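
S2A itself is described only at a high level here; as rough intuition for gradient-free coordinate-wise adaptation, the sketch below perturbs one parameter at a time and keeps only changes that improve a measured objective. It is a generic placeholder, not the published S2A algorithm, and the beamforming objective is a toy stand-in.

# Generic gradient-free coordinate descent (illustrative only, not S2A):
# perturb one parameter at a time and keep changes that improve the objective.
import numpy as np

def coordinate_descent_free(objective, x0, step=0.1, sweeps=200):
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(len(x)):
            for delta in (+step, -step):
                trial = x.copy()
                trial[i] += delta
                if objective(trial) < objective(x):   # only measured values are used
                    x = trial
                    break
    return x

# Toy stand-in objective: match a target weight vector using only squared-error
# measurements, with no gradient information.
target = np.array([0.7, -0.3, 0.5, 0.1])
f = lambda w: float(np.sum((w - target) ** 2))
w_hat = coordinate_descent_free(f, x0=np.zeros(4))
print(w_hat, f(w_hat))

The appeal of such schemes in a mixed-signal setting is that only measured objective values are needed, so no explicit gradient model of the mismatched analog hardware is required.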

 

I will conclude with my vision of the impact of mixed-signal processing on the next generation of computing systems and share my recent work spanning devices (RRAM), architectures (compute-in-memory), and emerging applications (neuromorphic computing). Crossing these hierarchies is critical to leveraging emerging technologies in realizing the next generation of sensing, computing, and communicating systems.


 

Biography

Siddharth Joshi is a Postdoctoral Fellow in the Department of Bioengineering at UC San Diego. He completed his Ph.D. in 2017 in the Department of Electrical and Computer Engineering at UC San Diego, where he also completed his M.S. in 2012. His research focuses on the co-design of custom, non-Boolean and non-von Neumann, hardware and algorithms to enable machine learning and adaptive signal processing in highly resource-constrained environments. Before coming to UCSD, he completed a B.Tech. at the Dhirubhai Ambani Institute of Information and Communication Technology in India.






March 5th (Monday), 11:00am
Harold Frank Hall (HFH), Rm. 4164 (ECE Conf. Rm.)


Sudarsun Kannan, Postdoctoral Research Associate, University of Wisconsin-Madison



"Designing Operating Systems for Data-Intensive Heterogeneous Systems"



The dramatic growth in the volume of data and the disproportionately slower advancements in memory scalability and storage performance have plagued application performance over the last decade. Emerging heterogeneous memory technologies such as nonvolatile memory (NVM) promise to alleviate both the memory capacity and storage problems; however, realizing the true potential of these technologies requires rethinking software systems in ways we haven’t before. My research has developed fundamental principles and redesigned operating systems (OSes), runtimes, file systems, and applications to address both main memory capacity scaling and storage performance challenges. In my talk, I first present our approach to scaling main memory capacity across heterogeneous memory by redesigning the OS virtual memory subsystem, as opposed to the file system used by current systems. Our design makes OS virtual memory data structures and abstractions heterogeneity-aware and intelligently captures an application’s use of memory for efficient data placement. I then briefly discuss our approach to reducing the software bottlenecks of storage by moving the file system into the storage hardware. I conclude my talk with a future vision of unifying converging memory and storage technologies into an application-transparent data tier fully managed by the OS, hardware, and user-level runtimes.
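
As a toy illustration of heterogeneity-aware data placement (a hypothetical policy, not the OS design described in the talk), the sketch below keeps the most frequently accessed pages in a small fast tier such as DRAM and spills the rest to a larger, slower NVM tier.

# Hypothetical, simplified hotness-based placement across a fast and a slow tier.
def place_pages(access_counts, fast_capacity_pages):
    """access_counts: dict page_id -> observed accesses in the last epoch."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    fast = set(ranked[:fast_capacity_pages])   # hottest pages go to DRAM
    slow = set(ranked[fast_capacity_pages:])   # the rest spill to NVM
    return fast, slow

counts = {0: 900, 1: 15, 2: 430, 3: 2, 4: 610, 5: 88}
dram, nvm = place_pages(counts, fast_capacity_pages=3)
print("DRAM:", sorted(dram), "NVM:", sorted(nvm))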


 

Biography

Sudarsun is a postdoctoral research associate at the University of Wisconsin-Madison, where he works on operating systems and storage research. His postdoctoral advisors are Prof. Andrea Arpaci-Dusseau and Prof. Remzi Arpaci-Dusseau. Sudarsun received a Ph.D. in Computer Science from Georgia Tech in 2016 under the guidance of the late Prof. Karsten Schwan and Prof. Ada Gavrilovska. Sudarsun's research focus is at the intersection of hardware and software, building operating systems and system software for next-generation memory and storage technologies. Results from his work have appeared at premier operating systems and architecture venues, including EuroSys, FAST, ISCA, HPCA, PACT, IPDPS, and others. In addition, his work during his summer internships at HP Labs, Intel Labs, and Adobe Labs resulted in 3 patents related to nonvolatile memory and resource management. Sudarsun has taught several graduate and undergraduate-level courses and he was nominated for the Georgia Tech-wide Outstanding Teaching Assistant Award.





February 28th (Wednesday), 11:00am
Harold Frank Hall (HFH), Rm. 4164 (ECE Conf. Rm.) 


Katherine Driggs-Campbell, Postdoctoral Research Scholar, Stanford University



"Trustworthy Autonomy: Algorithms for Human-Robot Systems"



Autonomous systems, such as self-driving cars, are becoming tangible technologies that will soon impact the human experience. However, the desirable impacts of autonomy are only achievable if the underlying algorithms can handle the unique challenges humans present: People tend to defy expected behaviors and do not conform to many of the standard assumptions made in robotics. To design safe, trustworthy autonomy, we must transform how intelligent systems interact, influence, and predict human agents. In this work, we’ll use tools from robotics, artificial intelligence, and control to explore and uncover structure in complex human-robot systems to create more intelligent, interactive autonomy.

 

In this talk, I’ll present robust prediction methods that allow us to predict driving behavior over long time horizons with very high accuracy. These methods have been applied to intervention schemes for semi-autonomous vehicles and to autonomous planning that considers nuanced interactions during cooperative maneuvers. I’ll also present a new framework for multi-agent perception that uses people as sensors to improve mapping. By observing the actions of human agents, we demonstrate how we can make inferences about occluded regions and, in turn, improve control. Finally, I’ll present recent efforts on validating stochastic systems, merging deep learning and control, and implementing these algorithms on a fully equipped test vehicle that can operate safely on the road.


 

Biography

Katie is currently a Postdoctoral Research Scholar at the Stanford Intelligent Systems Laboratory in the Aeronautics and Astronautics Department. She received a B.S.E. with honors from Arizona State University in 2012 and an M.S. from UC Berkeley in 2015. In May 2017, she earned her Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley, advised by Professor Ruzena Bajcsy. Her thesis, entitled “Tools for Trustworthy Autonomy: Robust Prediction, Intuitive Control, and Optimized Interaction,” contributed to the field of autonomy by merging ideas from robotics, transportation, and control to address problems associated with humans in the loop. Her work considers the integration of autonomy into human-dominated fields, in terms of safe interaction, with a strong emphasis on novel modeling methods, experimental design, robust learning, and control frameworks. She received the Demetri Angelakos Memorial Achievement Award for her contributions to the community, has initiated many events and groups for women in STEM, including founding a group for Women in Intelligent Transportation Systems, and was selected for the Rising Stars in EECS program in 2017.




Photo of Shimeng Yu

February 21st (Wednesday), 11:00am
Harold Frank Hall (HFH), Rm. 4164 (ECE Conf. Rm.) 


Shimeng Yu, Assistant Professor, Arizona State University



"Compute-in-Memory for Neural Network Accelerators: From CMOS to Post-CMOS"


Deep neural networks have proven remarkably powerful at intelligent tasks such as image and speech recognition, but they rely heavily on power-hungry hardware platforms such as GPUs for training and inference in the cloud. The bottleneck for computational and energy efficiency is the back-and-forth data transfer between the memory units and the computational units. Therefore, a shift in the computing paradigm towards “compute-in-memory” promises to minimize data transfer and potentially enables training and inference on low-power mobile and edge devices.

 

In this talk, we will present our recent progress in this direction, published at top-tier conferences [IEDM 2017][ISSCC 2018][DATE 2018][DAC 2018]. The key idea of our design is to use the bitlines of the memory array to sum analog currents, which effectively realizes the vector-matrix multiplication in a parallel fashion and thereby eliminates row-by-row multiply-and-accumulate (MAC) operations. First, we designed “inference” engines. For the CMOS implementation, we proposed an 8T XNOR bit-cell and realized parallel computation in SRAM arrays; we taped out prototype chips in a 65nm TSMC process and achieved >60 TOPS/W energy efficiency for dot products. For the post-CMOS implementation, we proposed a 2T2R bit-cell and realized parallel computation in RRAM arrays; we also taped out prototype chips in a 90nm process with monolithic integration of RRAM on top of the CMOS substrate. A series of design considerations, such as multilevel sense amplifiers and nonlinear quantization of partial sums, keeps the degradation of inference accuracy below 2% on the CIFAR-10 dataset. Second, we will discuss the desired characteristics of resistive synaptic devices for “online training.” We will discuss design considerations in the resistive crossbar array, including the selector and the compact oscillation neuron device at the edge of the array, and we will show our array-level experimental demonstrations of the convolution kernel. Finally, we will introduce “NeuroSim,” a device-circuit-algorithm co-design framework that evaluates the impact of non-ideal device effects (e.g., weight-update asymmetry/nonlinearity and reliability effects) on system-level performance (i.e., learning accuracy) and the trade-offs in circuit-level performance (i.e., area, latency, energy). The talk will conclude with a holistic view of my research vision, from materials/device engineering to circuit/architecture co-optimization, for developing hardware accelerators with emerging nanoelectronic devices.
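
For intuition about what one bitline computes in such an XNOR-based inference engine, the sketch below evaluates the same binarized dot product digitally: with weights and activations constrained to {-1, +1}, XNOR-and-accumulate along a column equals an ordinary dot product. It is a functional reference only; the circuit details are in the speaker's papers.

# Digital reference for what an XNOR bit-cell column computes in analog.
import numpy as np

rng = np.random.default_rng(1)
x = rng.choice([-1, +1], size=128)          # binarized input activations
W = rng.choice([-1, +1], size=(128, 16))    # binarized weights, one column per bitline

# XNOR in the {-1, +1} domain is elementwise multiplication; summing down
# each column mimics current summation along each bitline.
xnor = x[:, None] * W
dot = xnor.sum(axis=0)

assert np.array_equal(dot, x @ W)
print(dot[:4])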


 

Biography

Shimeng Yu received the B.S. degree in microelectronics from Peking University, Beijing, China, in 2009, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, USA, in 2011 and 2013, respectively. He is currently an assistant professor of electrical engineering and computer engineering at Arizona State University, Tempe, AZ, USA.

His research interests are emerging nano-devices and circuits, with a focus on resistive memories for applications including machine/deep learning, neuromorphic computing, monolithic 3D integration, hardware security, radiation-hard electronics, etc. He has published >70 journal papers and >100 conference papers, with >5500 citations and an H-index of 34. Among his honors, he is a recipient of the DOD-DTRA Young Investigator Award in 2015, the NSF Faculty Early CAREER Award in 2016, the ASU Fulton Outstanding Assistant Professor award in 2017, and the IEEE Electron Devices Society Early Career Award in 2017. He has served on the Technical Program Committees of the IEEE International Symposium on Circuits and Systems (ISCAS) 2015-2017, the ACM/IEEE Design Automation Conference (DAC) 2017-2018, and the IEEE International Electron Devices Meeting (IEDM) 2017-2018, among others.


