ETRA 2012 Keynotes




    Wednesday, March 28th

Andrew T. Campbell

My phone told me I'm stressed and should chill!
Andrew T. Campbell, Dartmouth College

Abstract: Smartphones are woven into the fabric of our lives. By incorporating sensors into smartphones and by pushing intelligence to the phone, it is now feasible to continuously collect sensor data to make reliable inferences about people's behavior, surroundings, and life patterns. The emergence of smartphone sensing has significant implications across a wide variety of fields such as mobile health, social networks, and social science. In this talk, I will discuss a number of smartphone applications we have developed, including BeWell, a continuously sensing application that monitors physical activity, social interaction, and sleep patterns; StressSense, which uses the phone's microphone to unobtrusively monitor stress from voice; and the NeuralPhone, which shows for the first time that smartphone applications can be driven by neural signals. Ultimately, as smartphones become smarter they will be able to understand trends in our physical, emotional, and cognitive health, anticipate our actions, and offer suggestions to improve our overall well-being.

Biography: Andrew T. Campbell is a professor of computer science at Dartmouth College, where he leads the smartphone sensing group. His group developed the first continuous sensing application for smartphones and is currently focused on turning the everyday smartphone into a cognitive phone. Andrew received his Ph.D. in computer science (1996) from Lancaster University, England, and the NSF CAREER Award (1999) for his research in programmable wireless networks. Before joining Dartmouth, he was a tenured associate professor of electrical engineering at Columbia University (1996-2005). Prior to that, he spent ten years in the software industry in the US and Europe leading the development of operating systems and wireless networks. Andrew has been a technical program chair of a number of top conferences in his area, including ACM MobiCom, ACM MobiHoc, and ACM SenSys; he also recently co-chaired the NSF-sponsored workshop on pervasive computing at scale. Andrew spent his sabbatical year (2003-2004) at the Computer Laboratory, Cambridge University, as a UK EPSRC visiting fellow, and fall 2009 as a visiting professor at the University of Salamanca, Spain.




    Friday, March 30th

Miguel Eckstein

Why do we look towards the eyes?
Miguel Eckstein, UC Santa Barbara

Abstract: When viewing a human face, people often look towards the eyes. A prominent idea holds that these fixation patterns arise solely due to social norms. Here, I will propose that this behavior can be explained as an adaptive brain strategy to learn eye movement plans that optimize performance in evolutionarily important perceptual tasks (Sensory Optimization Theory). First, I will show that humans move their eyes to points of fixation that maximize perceptual performance in determining the identity, gender, and emotional state of a face. These optimal points of fixation, which vary moderately across tasks, are correctly predicted by a rational Bayesian ideal observer that integrates information optimally across the face but is constrained by the decrease in resolution and sensitivity from the fovea towards the visual periphery. As the specifics of the task demands change, observers make small adjustments to their optimal points of fixation. Second, I will present evidence suggesting that there is individual variability in the preferred points of fixation, with some humans looking near the eyes and others closer to the tip of the nose. These systematic differences persist over time and also correspond to individual variations in the points of fixation that maximize perceptual performance. Finally, I will show that when confronted with faces with unusual optimal points of fixation (e.g., the mouth), observers have difficulty learning to fixate these new optimal points and fail to break away from their over-practiced eye movement strategies.

Biography: Miguel Eckstein earned a BS in Physics and Psychology at UC Berkeley and a PhD in Cognitive Psychology at UCLA. He then worked at the Department of Medical Physics and Imaging, Cedars-Sinai Medical Center, and at NASA Ames Research Center before moving to UC Santa Barbara. He is a recipient of the Optical Society of America Young Investigator Award, the Society for Optical Engineering (SPIE) Image Perception Cum Laude Award, the Cedars-Sinai Young Investigator Award, the National Science Foundation CAREER Award, and the National Academy of Sciences Troland Award. He has served as the chair of the Vision Technical Group of the Optical Society of America, chair of the Human Performance, Image Perception and Technology Assessment conference of the SPIE Medical Imaging Annual Meeting, and as a member of various National Institutes of Health study section panels. He served from 2005 to 2011 as the Vision Editor of the Journal of the Optical Society of America A and is currently on the board of editors of the Journal of Vision and the board of directors of the Vision Sciences Society. He has published over 120 articles relating to computational human vision, visual attention and search, perceptual learning, and the perception of medical images.