ELECTRICAL & COMPUTER ENGINEERING 189A/B PROJECTS
10:00a-12:00p, Engineering Science Building (ESB), Rm 1001
MORNING SESSION 10:00a-12:00p

COMPUTER SCIENCE 189A/B PROJECTS
10:00a-3:00p, Engineering Science Building (ESB), Rms 2001 & 1001
MORNING SESSION 10:00a-12:00p in ESB 2001
AFTERNOON SESSION 1:00p-3:00p in ESB 1001

ELECTRICAL & COMPUTER ENGINEERING 188A/B PROJECTS
10:00a-2:30p, Engineering Science Building (ESB) Courtyard
MORNING SESSION 10:00a-12:00p in the ESB Courtyard
AFTERNOON SESSION 1:00p-2:30p in the ESB Courtyard
About the ECE 188, ECE 189 & CS 189 Capstone Courses:
During their senior year, students take the two-quarter ECE 188 and ECE/CS 189 Senior "Capstone" Project courses. Every year, at the end of Spring quarter, the final projects are presented at a full-day, industry-sponsored event where student groups publicly present their work and participate in an outdoor lunchtime project demonstration and poster session.
Thanks to Citrix (pizza & cake) and Qualcomm (posters) for their generous support of Capstone Day.
Hosted by: the CE Program and the ECE & CS Departments
In the first part of my talk, I will present a flexible insole pedometer with a piezoelectric energy harvester, which appeared at ISSCC 2012. Flexible electronics offer excellent mechanical flexibility and an ultra-thin form factor (<0.1mm in thickness). However, printed electronic circuits can have large process variations and suffer time-dependent degradation due to exposure to bias stress or to oxygen in the ambient air. I will present circuit design solutions that meet these challenges of flexible electronics. In the second part of my talk, I will present a silicon optical bench (SiOB) and a 28Gb/s shared-inductor optical receiver in 28nm CMOS for the next-generation optical network-on-chip. Optical interconnects have recently emerged as a promising candidate to meet the rapidly increasing bandwidth requirements of high-speed I/O. Cost and electrical-optical integration, however, remain the bottlenecks to wide adoption of optical interconnects. The proposed receiver, which appeared at ISSCC 2014, achieves state-of-the-art energy efficiency while reducing chip area by 56% compared with conventional inductive-peaking designs. Finally, I will discuss my future research directions for designing energy-aware smart devices in a more connected world.
Tsung-Ching (Jim) Huang is currently a member of technical staff at the TSMC Design Center, San Jose. He received his M.S. and Ph.D. in ECE from the University of California, Santa Barbara in 2006 and 2009, respectively, and his B.S. in EE from National Chiao-Tung University, Taiwan in 2001. After receiving his Ph.D., he worked as a research associate and subsequently an adjunct assistant professor at The University of Tokyo, Japan. His primary research interests include high-speed CMOS mixed-signal circuits and sub-threshold CMOS circuits, as well as reliability analysis, CAD, and circuit design for flexible wearable electronics and displays. Dr. Huang received Best Paper Awards from the International Symposium on Flexible Electronics and Displays (ISFED) and the IEEE International Electron Devices Meeting (IEDM) for his research in flexible electronics. His work has been highlighted in IEEE Solid-State Circuits Magazine and EE Times. Since 2011, he has been leading research projects on the next-generation optical network-on-chip at TSMC. Dr. Huang has 12 US patents filed or issued, has authored or co-authored more than 25 technical publications in the fields of solid-state circuits, electron devices, displays, CAD, and materials science, and has given a number of invited talks on his research.
Integrated circuits in modern SoCs and microprocessors are typically operated with sufficient timing margins to mitigate the impact of rising process, voltage, and temperature (PVT) variations at advanced process nodes. The widening margins required to ensure robust computation inevitably lead to conservative designs with unacceptable energy-efficiency overheads. Reconciling the conflicting objectives imposed by variation mitigation and energy-efficient computing will require fundamental departures from conventional circuit and system design practices. In my talk, I will posit error-resilient general-purpose computing as an effective approach for achieving this. I will describe how resilient techniques exploit tolerance to timing errors to automatically compensate for variations and dynamically tune a system to its most efficient operating point. I will present results from university and industrial designs that demonstrate significant efficiency improvements by combining resiliency with optimizations across algorithm, circuit, and micro-architecture boundaries. I will also present directions for further research into variation-tolerant, reliable, and energy-efficient system design for emerging applications in the coming decade.
Shidhartha Das received the B.Tech. degree in electrical engineering from the Indian Institute of Technology, Bombay in 2002, and the M.S. and Ph.D. degrees in computer science and engineering from the University of Michigan, Ann Arbor in 2005 and 2009, respectively.
His research interests include micro-architectural and circuit design for variation measurement and mitigation, on-chip power delivery, and VLSI architectures for digital signal processing (DSP) accelerators. His research has been featured in IEEE Spectrum and has won several awards, including the Microprocessor Report analysts' choice award in innovation and best paper awards at MICRO 2003 and SAME 2010. He has authored more than 25 papers in peer-reviewed journals and conferences, including 7 invited publications. Dr. Das holds 18 granted patents, with several others pending. He is currently a Principal Engineer in the Research and Development group at ARM Ltd., U.K., where he works on several aspects of low-power, variation-tolerant circuit and micro-architectural design. Dr. Das serves on the technical program committees of the European Solid-State Circuits Conference (ESSCIRC 2014) and the International On-Line Testing Symposium (IOLTS 2014).
Record and deterministic replay (RnR) is an appealing mechanism for computer-system builders. It can recreate an exact copy of an execution and can thus serve as a powerful primitive in numerous areas, including debugging of hard-to-reproduce bugs, computer security, fault tolerance, and high availability.
In this talk, I will introduce the concept of RnR and present the design and implementation of the first physical prototype of a hardware-assisted RnR platform incorporating modified Intel processors and full operating system support. I will then discuss a couple of novel hardware techniques that enhance the baseline design in order to improve its usability and generalize it to support other processors such as ARM or IBM Power. I will also report on several on-going efforts focusing on using RnR in areas such as program debugging and security.
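The core idea behind RnR can be illustrated without any special hardware: log every nondeterministic input during the recorded run, then feed the log back during replay so the execution unfolds identically. The sketch below is a hypothetical, minimal software analogue of that idea (the actual platform in the talk uses modified Intel processors and OS support); names like `run` and `nondet` are illustrative only.

```python
import random

def run(mode, log):
    """Toy computation whose only nondeterminism is random().
    In 'record' mode each nondeterministic value is appended to log;
    in 'replay' mode values are read back, reproducing the run exactly."""
    def nondet():
        if mode == "record":
            v = random.random()
            log.append(v)       # capture the nondeterministic input
            return v
        else:
            return log.pop(0)   # replay the captured input instead

    total = 0.0
    for _ in range(5):
        total += nondet()
    return total

log = []
first = run("record", log)
replayed = run("replay", list(log))  # replay from a copy of the log
assert first == replayed  # the replayed execution matches the original
```

A real RnR system must intercept a much larger set of nondeterministic events (interrupts, I/O, shared-memory interleavings), which is why hardware assistance matters, but the record-then-feed-back structure is the same.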
Nima Honarmand is a Ph.D. candidate in the Department of Computer Science at the University of Illinois at Urbana-Champaign (UIUC), working with Professors Josep Torrellas and Samuel King. His research interests span both sides of the hardware/software interface, including processor and system architecture, operating system design, and programming models for parallel computers. He obtained his B.Sc. in CE from Sharif University of Technology and his M.Sc. in ECE from the University of Tehran, both in Iran. He is the recipient of multiple academic and industrial awards, including the Sarah and Sohaib Abbasi Fellowship from UIUC and the Qualstar Hall of Fame award from Qualcomm.
What is the concrete value of information? In this talk, I report recent developments regarding the extraction of useful information from empirical data to solve concrete problems in computing and optimization. Going beyond communication and storage, I introduce novel approaches based on information theory and statistics to optimize the design of robust and efficient computer systems. I start by formulating statistical bounds with near-optimal guarantees of efficiency, which can be employed to quantify the robustness of an information processing system with respect to fluctuations induced by noise and faulty behavior. This enabling technology allows the system designer to accelerate prototyping by selecting between competing alternatives and taking information efficiency into account. I proceed by relating these concepts to optimal control of dynamical systems and systems biology, showing how measuring the value of information enables method selection for optimization and engineering design. I conclude by reporting practical applications in the context of biochemical reaction networks for signaling and bio-inspired computing, personalized treatment and automated microscopy in biomedicine, and nanotechnology of carbon nanotubes.
Dr.Sc. A.G. Busetto is a postdoctoral researcher at the Department of Information Technology and Electrical Engineering at ETH Zurich, Switzerland, where he is establishing an initiative to engineer robust and efficient computing systems by taking the concrete value of information into account. His interdisciplinary research relates computing to information theory and optimal control, with the aim of optimizing the extraction of reliable information to solve optimization problems. He is also a member of the Competence Center for Systems Physiology and Metabolic Diseases, and collaborates in projects connecting systems biology, biological information processing and nanotechnology.
Dr.Sc. Busetto received the ETH Medal for Outstanding Doctoral Thesis for optimizing the value of information in complex dynamical systems. He has been awarded the Best Student Paper Award at the International Conference on AI in Education, inclusion in the Best of ITS 2012 series of IJAIED for designing computer systems for intelligent tutoring, and the Best Paper Award at the IEEE International Conference on Computational Science and Engineering for biochemical reaction network modeling. He received the Best Master Thesis Award from the University of Padua, Italy, for computational nanotechnology of carbon nanotubes. He is a researcher at SystemsX.ch, the Swiss Initiative in Systems Biology and one of the largest partnerships ever in biomedicine and synthetic biology, and has initiated collaborations in regenerative medicine, wind energy, and intelligent tutoring. His research appears in top-tier journals (such as Nature Methods, Science Signaling, and PLoS Computational Biology), and his collaborative projects have received significant media coverage on television, radio, and in the general and specialized press (SF1, ORF, APA, der Standard). During his doctoral studies, he was delighted to visit MIT, Stanford, Caltech, the University of Cambridge, Basel, Geneva, Melbourne, NICTA, MPI, and UIUC.
The confluence of disruptive beyond-CMOS technologies and Big Data workloads calls for a fundamental paradigm shift: from homogeneous compute-centric systems designed for handling structured data to new heterogeneous data-centric systems that can effectively store and process large sets of semi-structured or unstructured data for better innovation, competitiveness, and productivity. In a heterogeneous system, silicon CMOS (e.g., a multi-core CPU) will continue to play a major role in primary computing and essential bookkeeping, while tasks that are difficult, expensive, or even unachievable with standard CMOS within a fixed power/cost budget can be effectively offloaded to hardware engines enabled by other technologies. By harnessing the potential of new technologies, we can enable efficient data-centric computing by building a cost-effective heterogeneous hardware substrate with significantly enhanced energy efficiency, performance, throughput, and scalability.
With the objective of rethinking data-centric system design from the ground up, I will present a PCM-CMOS hardware accelerator inspired by the concept of ternary content-addressable memory (TCAM) and enabled by an emerging memory technology, phase-change memory (PCM). In particular, a fully functional heterogeneous chip was designed and fabricated for the first time, achieving >10x cell-area reduction compared with a homogeneous CMOS-based design at the same technology node. The accelerator fundamentally blurs the boundary between computation and data storage, provides flexible control and implicit parallelism, and exploits the tremendous bandwidth close to the data sources to reduce communication cost. It is particularly efficient at performing search operations with high and deterministic lookup rates. It can also be configured at various granularities as either a compute engine performing direct data-flow computation or a storage medium serving as storage-class memory, providing tremendous opportunities for dynamic hardware specialization. Thus, it is an attractive solution for a wide range of data-intensive applications, e.g., genome matching in bioinformatics and intrusion detection in cloud computing. In spite of its tremendous advantages in performance, cost, and energy, designing with heterogeneous PCM/CMOS technologies poses new challenges during practical hardware prototyping due to the severely degraded operating margins introduced by the technology itself. To address these challenges, I will present two enabling techniques: 1) a clocked self-referenced sensing scheme, and 2) a two-bit encoding, which can also improve algorithmic mapping for better hardware utilization. With these techniques, the fabricated chip reliably operates at very low voltage (750mV). The work was recognized as a highlighted paper by the Symposium on VLSI Circuits and invited by JSSC for a journal paper.
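The TCAM search primitive that inspires the accelerator is easy to state in software: every stored entry is a word over {0, 1, x}, where x is a "don't care" that matches either bit, and a lookup reports all entries matching the query key. The conceptual Python sketch below illustrates only the matching semantics, not the PCM hardware, which performs the comparison across all entries in parallel rather than in a loop; the function names are illustrative.

```python
def tcam_match(entry, key):
    # entry is a string over {'0', '1', 'x'}; 'x' matches either bit
    return all(e == 'x' or e == k for e, k in zip(entry, key))

def tcam_lookup(table, key):
    # return the indices of all matching entries; a hardware TCAM
    # evaluates every entry simultaneously, giving deterministic
    # one-lookup-per-cycle throughput
    return [i for i, entry in enumerate(table) if tcam_match(entry, key)]

table = ["10x1", "0xx0", "1001"]
print(tcam_lookup(table, "1001"))  # [0, 2]
```

The "don't care" state is what makes ternary CAM a natural fit for pattern-style workloads such as genome matching or intrusion-detection rule sets, where a single stored entry covers many concrete keys.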
Finally, I will briefly present two critical techniques for moving toward an even more cost-effective design based on variable-bit storage.
Dr. Jing Li is a Research Staff Member at the IBM T. J. Watson Research Center, Yorktown Heights, NY. She received her Ph.D. degree from the Electrical and Computer Engineering department of Purdue University in 2009 and her B.E. degree from the Electrical Engineering department of Shanghai Jiao Tong University in 2004. Her general research interest is developing new computing paradigms driven by technologies (bottom-up, including but not limited to emerging nonvolatile memories, flexible electronics, etc.), by workloads (top-down, including traditional commercial workloads as well as emerging data-centric workloads), or by both. Her primary area of interest is energy-efficient and resilient heterogeneous system design, closely interacting with the underlying technologies as well as the upper-level software stack.
Dr. Li received the IBM Research Division Outstanding Technical Achievement Award in 2012 for successfully achieving a CEO milestone, multiple invention achievement awards from IBM from 2010 to the present, the IBM Ph.D. Fellowship Award in 2008, Dean's and Semester Honors for outstanding scholastic performance from Purdue University in 2007, and the Meissner Fellowship from Purdue University in 2004. She was also the recipient of the 2005-2006 Magoon's Award for excellence in teaching from Purdue University. She has published more than 35 technical papers in refereed journals and conferences in the fields of computer design, CAD, VLSI circuits, device physics, and materials science, and has more than 35 patents filed or issued. She won the IEEE Circuits and Systems Society VLSI Transactions Best Paper Award, in recognition of her work in one of the very first papers tackling reliability issues in STT-RAM. She has been a reviewer for numerous journals and conferences, including COMPUTER, JSSC, TVLSI, ACM JETC, TNANO, TED, and EDL, and was recognized as a Golden Reviewer by IEEE Electron Device Letters in 2012 and 2013. She has served on the technical program committee of the IEEE Design Automation Conference (DAC) since 2011. She also represents IBM at the premier industry conference, the IEEE International Memory Workshop (IMW), as a member of its Scientific and Organizing Committees.
3D integration has emerged as an attractive option to sustain Moore's law as well as to enable More-than-Moore. This talk will present an overview of recent research progress in 3D IC design, from both a design-tools/VLSI perspective and an architecture perspective. It will describe the following directions for future 3D IC design: design automation and test techniques and methodologies, which are imperative to realize 3D integration; novel architectures and design-space exploration at the architectural level, which are essential to leverage 3D integration technologies for performance gain; and possible "killer" applications for 3D integration (e.g., what applications could dramatically benefit from 3D stacking, or what novel applications are enabled by 3D technology).
Yuan Xie is currently a Professor in the Computer Science and Engineering department at the Pennsylvania State University. He received his Ph.D. from Princeton University and was with IBM Microelectronics before joining Penn State. He also helped establish and lead the AMD Research China Lab. Prof. Xie is a recipient of the National Science Foundation Early Career Development (CAREER) award, the SRC Inventor Recognition Award, an IBM Faculty Award, and several Best Paper Awards and Best Paper Award Nominations at IEEE/ACM conferences. His research covers EDA, computer architecture, VLSI circuit design, and embedded systems. His current research projects include three-dimensional integrated circuits (3D ICs); emerging memory technologies; low-power and thermal-aware design; reliable circuits and architectures; and embedded system synthesis.
By 2020, there will be billions of devices connecting to the Internet. These devices will be ubiquitous and will generate large amounts of sensing and monitoring data, enabling a multitude of applications that improve human life. The key enabler of this vision is the underlying wireless communication technology. However, current wireless networks are notoriously interference-limited. With the number of devices growing into the billions, current solutions will be unable to support the amount of data that needs to be communicated.
In the first part of my talk, I will present a way to address a special kind of interference called self-interference: interference from a node to itself, and specifically the self-interference that arises from making multi-antenna radios fully flexible. Existing multiple-antenna techniques are inflexible: they use all of their antennas for either transmission or reception, as in multiple-input multiple-output (MIMO) and interference alignment techniques. I will first motivate the need to make wireless nodes flexible. If a wireless node can allocate some of its antennas for transmission and the remaining ones for reception, it can improve its efficiency; the exact allocation changes with link quality, network topology, and traffic demand. We call this design FlexRadio. I will then present a self-interference cancellation mechanism to deal with the interference from FlexRadio's transmitting antennas at its receiving antennas. I will show that FlexRadio can outperform any existing multiple-antenna technology: MIMO, full duplex, multi-user MIMO (MU-MIMO), and interference alignment. I will also present a way to design FlexRadio and show preliminary results from our prototype.
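The intuition behind self-interference cancellation is that the node already knows exactly what it transmitted, so it can reconstruct its own leakage at the receive antennas and subtract it, leaving the weak remote signal. The toy sketch below shows only that subtraction idea with a single made-up channel gain; FlexRadio's actual cancellation operates on analog RF signals with imperfect, time-varying channel estimates, and all values here are invented for illustration.

```python
# The receive antenna hears a weak remote signal plus a strong copy of
# the node's own transmission, scaled by a (hypothetical) self-interference
# channel gain h.
h = 0.9                              # assumed self-interference gain
tx = [1.0, -1.0, 1.0, 1.0]           # known local transmit samples
remote = [0.05, 0.02, -0.04, 0.01]   # weak signal we actually want

rx = [r + h * t for r, t in zip(remote, tx)]  # what the antenna observes

# The node knows tx; with an estimate of h (assumed perfect here) it
# reconstructs its own leakage and subtracts it from the received samples.
h_est = 0.9
recovered = [r - h_est * t for r, t in zip(rx, tx)]
```

In practice the quality of the channel estimate `h_est` bounds how much self-interference can be removed, which is why cancellation accuracy is the central engineering challenge in designs like FlexRadio.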
In the second part of my talk, I will motivate the need for powerful nodes in a network that help low-power and mobile devices cope with interference. Existing interference-mitigation techniques such as interference alignment assume all nodes to be equally capable and stationary. I will present RobinHood, which enables powerful access points to help less powerful and mobile devices. RobinHood can achieve a 6X throughput gain over perfect time-division multiple access (TDMA) and a 24X gain over WiFi.
Kannan Srinivasan is an Assistant Professor in the department of Computer Science and Engineering at the Ohio State University (OSU). He received his Ph.D. from Stanford University in 2010 and was a postdoctoral researcher at the University of Texas at Austin for a year before joining OSU. He has won multiple awards: an Excellent Performance Award from OSU-CSE, the NSF CAREER Award in 2013, Best Paper Runner-Up at IPSN 2013, Best Paper Award at MobiCom 2010, Best Paper Runner-Up at MobiCom 2013, Best Demo Award at MobiCom 2010, a fellowship from Stanford ECE, and a Presidential Award from Oklahoma State University. His work on wireless in-band full duplex overturned a century-old belief that a wireless radio cannot send and receive on the same frequency simultaneously. This work received significant media attention and led both the theory and systems communities to revisit first principles. It is being commercialized by a Stanford start-up company.
Capable and accessible infrastructure is an accelerant for good research, as it enables creative people to quickly and effectively explore new ideas. In this talk I will reflect upon my experiences with the SimpleScalar tool set, an open-source simulation infrastructure that has been employed by more than 5,500 published papers. I will use SimpleScalar as a case study for why more researchers should release their tools, and I will share with you my best advice for building and distributing research infrastructure. Finally, I will speculate on the future of computer engineering research tools, and suggest where budding infrastructure hackers might want to spend their efforts.
Todd Austin is a Professor of Electrical Engineering and Computer Science at the University of Michigan in Ann Arbor. His research interests include computer architecture, robust and secure system design, hardware and software verification, and performance analysis tools and techniques. Currently Todd is director of C-FAR, the Center for Future Architectures Research, a multi-university SRC/DARPA-funded center that is seeking technologies to scale the performance and efficiency of future computing systems. Prior to joining academia, Todd was a Senior Computer Architect in Intel's Microcomputer Research Labs, a product-oriented research laboratory in Hillsboro, Oregon. Todd is the first to take credit (but the last to accept blame) for creating the SimpleScalar Tool Set, a popular collection of computer architecture performance analysis tools. Todd is co-author (with Andrew Tanenbaum) of the undergraduate computer architecture textbook "Structured Computer Organization, 6th Ed." In addition to his work in academia, Todd is founder and President of SimpleScalar LLC and co-founder of InTempo Design LLC. In 2002, Todd was a Sloan Research Fellow, and in 2007 he received the ACM Maurice Wilkes Award for "innovative contributions in Computer Architecture including the SimpleScalar Toolkit and the DIVA and Razor architectures." Todd received his Ph.D. in Computer Science from the University of Wisconsin in 1996.
Every integrated circuit is released with latent bugs. The damage and risk implied by an escaped bug range from almost imperceptible to potentially tragic; unfortunately, it is impossible to discern where within this range a bug falls before it has been exposed and analyzed. While the past few decades have witnessed significant efforts to improve verification methodology for hardware systems, these efforts have been far outstripped by the massive complexity of modern digital designs, leading to product releases in which an ever-smaller fraction of the system's state has been verified. News of escaped bugs in large-market designs and safety-critical domains is alarming because of the safety and cost implications (replacements, lawsuits, etc.).
This talk will present some of our solutions to the verification challenge, such that users of future microprocessors can be assured that their devices will operate completely free of bugs. We attack the problem after deployment in the field, discussing novel solutions that can correct escaped bugs after a system has been shipped.
Valeria Bertacco is an Associate Professor of Electrical Engineering and Computer Science at the University of Michigan. Her research interests are in the area of design correctness, with emphasis on digital system reliability, post-silicon and runtime validation, and hardware-security assurance. Valeria joined the faculty at the University of Michigan in 2003, after being in the Advanced Technology Group of Synopsys for four years as a lead developer of Vera and Magellan. During the Winter of 2012, she was on sabbatical at the Addis Ababa Institute of Technology.
Valeria is the author of three books on design errors and validation. She received her M.S. and Ph.D. degrees in Electrical Engineering from Stanford University in 1998 and 2003, respectively; and a Computer Engineering degree ("Dottore in Ingegneria") summa cum laude from the University of Padova, Italy in 1995. Valeria is the recipient of the IEEE CEDA Early Career Award, NSF CAREER award, the Air Force Office of Scientific Research's Young Investigator award, the IBM Faculty Award and the Vulcans Education Excellence Award from the University of Michigan.