Emerging platforms such as Augmented Reality (AR), Virtual Reality (VR), and autonomous machines, while computing systems at heart, intimately interact with both the environment and humans. They must be built, from the ground up, with principled consideration of three main components: imaging, computer systems, and human perception. This talk will make a case for this tenet and discuss some of our recent work on this front.
I will first talk about in-sensor visual computing, the idea that co-designing the image sensor with the computer system significantly improves overall system efficiency and, perhaps more importantly, unlocks new machine capabilities. We will show a number of case studies in AR/VR and autonomous machines. I will then discuss our work on human-systems co-optimization, where we computationally model biological (human) vision to build energy-efficient AR/VR devices without degrading, and sometimes even enhancing, human perception.
If time permits, I will briefly discuss how we build fast and robust computing systems for autonomous machines, many of which are now deployed by a self-driving car start-up.
Yuhao Zhu is an Assistant Professor of Computer Science and Brain and Cognitive Sciences at the University of Rochester. He holds a Ph.D. from The University of Texas at Austin and was a visiting researcher at Harvard University and Arm Research. His work has been recognized with multiple IEEE Micro Top Picks designations and multiple best paper awards and nominations in computer architecture, Virtual Reality, and visualization. More about his research can be found here.