Ed Connor, PhD
Professor of Neuroscience
Neural population representation of a heart-shaped stimulus, derived from response rates of curvature/position-tuned cells in area V4. The left-hand horizontal axis represents boundary curvature, running from -0.3 (concave) through 0.0 (straight) to 1.0 (sharp convex). The right-hand horizontal axis represents object-centered position in degrees, with 0° corresponding to "right", 90° to "top", 180° to "left", and 270° to "bottom" relative to the center of mass. The vertical axis (as well as color) represents response strength, derived by summing tuning functions across the cell population, with each cell's tuning function weighted by that cell's response to the heart-shaped stimulus. The surface contains two peaks near curvature 0.7 (broad convex) at positions 45° (upper right) and 135° (upper left), one peak at curvature 1.0 (sharp convex) and position 270° (bottom), and smaller peaks representing the intervening concavities. These, of course, are the defining boundary features of a classic "heart" shape.
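The readout described in the caption can be sketched in code: each cell's tuning function over the (curvature, object-centered position) domain is scaled by that cell's response and summed across the population. The tuning shapes below (a Gaussian along the curvature axis, a von Mises function along the circular position axis), the bandwidth parameters, and the grid resolution are illustrative assumptions, not the lab's actual model.

```python
import numpy as np

def population_surface(preferred, responses, n_curv=50, n_pos=72,
                       sigma_c=0.15, kappa=4.0):
    """Response-weighted sum of tuning functions over curvature x position.

    preferred : (N, 2) array of each cell's preferred (curvature, position_deg)
    responses : (N,) array of each cell's response rate to the stimulus
    sigma_c, kappa : assumed tuning bandwidths (Gaussian / von Mises)
    Returns the curvature grid, position grid, and the summed surface.
    """
    # Curvature axis runs from -0.3 (concave) to 1.0 (sharp convex), as in
    # the caption; position is an object-centered angle in degrees.
    curv = np.linspace(-0.3, 1.0, n_curv)
    pos = np.linspace(0.0, 360.0, n_pos, endpoint=False)
    C, P = np.meshgrid(curv, pos, indexing="ij")

    surface = np.zeros_like(C)
    for (pc, pp), r in zip(preferred, responses):
        # Gaussian tuning along curvature, circular (von Mises) tuning
        # along position, weighted by the cell's response.
        tuning = (np.exp(-(C - pc) ** 2 / (2.0 * sigma_c ** 2))
                  * np.exp(kappa * np.cos(np.radians(P - pp))))
        surface += r * tuning
    return curv, pos, surface
```

With three hypothetical cells preferring the heart's defining features (broad convexity at 45° and 135°, sharp convexity at 270°), the summed surface reproduces the multi-peaked landscape the caption describes.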
• Deciphering neural population codes for structure, material, physics, utility
• Tracing neural algorithms for transforming images into visual information
• Using biological principles to advance deep network computer vision
• Using coding principles to design prosthetic interfaces
Vision is your superpower. At a glance, you can tell where you are, what is around you, what just happened, and what is about to happen. You effortlessly perceive the precise 3D structure of things in your environment, at distances ranging from millimeters to miles. You know what things are called, how valuable they are, how old or new, fresh or rotten, strong or weak. You intuit their material, mechanical, and energetic properties, allowing you to anticipate and alter physical events. You effectively read the minds of other humans and animals based on tiny variations in facial configuration and body pose. A picture is worth many times a thousand words to you.
Our visual appreciation of the world emerges from networks of billions of neurons in the ventral visual pathway of the brain. Our lab studies neural information processing in the intermediate and higher-level stages of this pathway. We want to understand how the ventral pathway transforms images into knowledge about the world. If we could decipher how the brain does this at the algorithmic level, we could use the same principles to build computer vision systems with human-like capabilities. We could develop prosthetic interfaces for blind patients that hijack the mechanisms of the ventral pathway to induce vivid visual experiences. And we would understand the substrate for our rich, detailed, aesthetic experiences of the visual world.