Compressing Time with Brain-Computer Interfaces

2014-12-28

This is one of a series of interviews with scientists working at the convergence of nanotechnology, biotechnology, information technology and cognitive science. Participants discuss their definition of technological convergence, how this might affect various scientific fields and what obstacles must be addressed to reach convergence’s full potential. In this video, Aude Oliva of the Massachusetts Institute of Technology discusses technological convergence.




This work is part of the international study, "Societal Convergence for Human Progress," sponsored by the National Science Foundation, National Institutes of Health, National Aeronautics and Space Administration, Environmental Protection Agency, Department of Defense and U.S. Department of Agriculture.


After a French baccalaureate in Physics and Mathematics and a B.Sc. in Psychology (minor in Philosophy), Aude Oliva received two M.Sc. degrees, in Experimental Psychology and in Cognitive Science, and a Ph.D. from the Institut National Polytechnique de Grenoble, France. She joined the MIT faculty in the Department of Brain and Cognitive Sciences in 2004 and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in 2012. She is also affiliated with the Athinoula A. Martinos Imaging Center at the McGovern Institute for Brain Research, and with the MIT Big Data Initiative at CSAIL.

Her research is cross-disciplinary, spanning human perception/cognition, computer vision, and cognitive neuroscience, focusing on research questions at the intersection of the three domains. Her work in Computational Perception and Cognition builds on the synergy between human and machine perception and cognition, and how it applies to solving high-level recognition problems like understanding scenes and events, perceiving space, localizing sounds, recognizing objects, modelling attention, eye movements and visual memory, as well as predicting subjective properties of images (like image memorability). Her research integrates knowledge and tools from image processing, image statistics, computer vision, human perception, cognition and neuro-imaging (fMRI, MEG).

Her work has been regularly featured in the scientific and popular press, in museums of art and science, as well as in textbooks on perception, cognition, computer vision and design. She is the recipient of a National Science Foundation CAREER Award (2006) in Computational Neuroscience, an elected Fellow of the Association for Psychological Science (APS), and the recipient of a 2014 Guggenheim Fellowship in Computer Science. Her research programs are funded by the National Science Foundation, the National Eye Institute, Google and Xerox. See her curriculum vitae (pdf) and her Google Scholar profile page.
Research Overview

Cross-disciplinary research bridges the gaps from theory to experiments to applications, accelerating the rate at which discoveries are made by solving problems through a novel way of thinking. To this end, my research in human perception and cognition spans three disciplines:

Psychology: We employ psychophysics and behavioral methods to discover phenomena of human perception and cognition.
Computer Vision: We use methods in image processing and computer vision as tools to model and predict psychological phenomena as well as to provide computer vision with new applications.
Human Cognitive Neuroscience: Armed with theoretical and computational frameworks, we use brain imaging experiments (e.g., fMRI, MEG) to study how the human brain represents perceptual and cognitive phenomena.

My work has capitalized on visual scene understanding. Scene understanding is a multidisciplinary field, but the traditional layered structure of academia and publication outlets has created substantial barriers between researchers working on human perception and computer vision. In my work, I have constantly sought to bridge these two areas of research. Building on foundations in image statistics and image processing, I have developed new experimental paradigms to study human perception and memory, and novel methods in human neuroscience. I have integrated ideas from cognitive psychology into computer vision, and I have applied computer vision to model various ecological tasks like scene classification, depth perception, visual search and memory. I have also established successful bridges with computer vision, contributing to its awareness of studies in psychology. Some of my earlier contributions (e.g., hybrid images) have also made a strong impact outside the field of human perception. My 2001 paper in the Int. J. of Computer Vision, which introduced the first holistic computational model of scene recognition, and what would later be referred to as GIST features, has generated much follow-up work in both human vision and computer vision. Recently, I introduced a new domain of application for computer vision: memorability. The study of long-term memory provides another avenue of research in which to probe what kind of information is extracted, stored, and used to predict behaviors.
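The GIST idea can be illustrated with a toy descriptor: filter the image with oriented filters at a few scales and average the response energy over a coarse spatial grid, yielding a single holistic vector per scene. The sketch below is a minimal NumPy illustration of that idea, not the published GIST implementation; the filter size, wavelengths, and grid resolution are arbitrary choices for demonstration.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta):
    """Real Gabor filter: a sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * (size / 4) ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gist_descriptor(image, n_orientations=4, wavelengths=(4, 8), grid=4):
    """Toy GIST-like descriptor: filter-energy averages on a spatial grid."""
    h, w = image.shape
    features = []
    for wl in wavelengths:
        for i in range(n_orientations):
            theta = i * np.pi / n_orientations
            k = gabor_kernel(15, wl, theta)
            # Convolve via FFT (same-size output, circular boundary).
            resp = np.abs(np.fft.ifft2(np.fft.fft2(image) *
                                       np.fft.fft2(k, s=image.shape)))
            # Average the filter energy inside each cell of a grid x grid layout.
            for gy in range(grid):
                for gx in range(grid):
                    cell = resp[gy * h // grid:(gy + 1) * h // grid,
                                gx * w // grid:(gx + 1) * w // grid]
                    features.append(cell.mean())
    return np.array(features)

img = np.random.rand(64, 64)
vec = gist_descriptor(img)
print(vec.shape)  # (128,) = 2 wavelengths * 4 orientations * 16 cells
```

Because nearby scenes of the same category share coarse spatial statistics, even a crude vector like this clusters scene categories reasonably well, which is the intuition behind holistic scene recognition.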

In order to evaluate the dynamics of perception, cognition and action in the human brain, measurements at the level of milliseconds and millimeters are required. Combining the best of two neuro-imaging techniques (ms-resolution MEG and mm-resolution fMRI) allows us to study, in a non-invasive manner, how recognition processes unfold concurrently in time and space in the human brain. In a first study, published in Nature Neuroscience in 2014 (MIT news release), we applied this new approach to visual object recognition.
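The fusion of MEG and fMRI in that line of work rests on representational similarity analysis: build a condition-by-condition dissimilarity matrix (RDM) from each modality, then correlate the MEG RDM at every time point with the fMRI RDM of a brain region, yielding a millisecond-resolved trace of when that region's representational structure emerges. The sketch below shows the logic on random data; the array shapes and the rank-then-correlate shortcut are illustrative assumptions, not the published pipeline.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - correlation between
    condition patterns (rows = conditions, cols = channels or voxels)."""
    return 1.0 - np.corrcoef(patterns)

def fusion_timecourse(meg, fmri_rdm):
    """Correlate the MEG RDM at each time point with one region's fMRI RDM.
    meg: (time, conditions, sensors); fmri_rdm: (conditions, conditions)."""
    n_cond = fmri_rdm.shape[0]
    iu = np.triu_indices(n_cond, k=1)      # upper triangle, no diagonal
    course = []
    for t in range(meg.shape[0]):
        meg_rdm = rdm(meg[t])
        # Spearman-style comparison: rank both RDM vectors, then Pearson
        # correlation on the ranks (double argsort computes ranks).
        a = np.argsort(np.argsort(meg_rdm[iu]))
        b = np.argsort(np.argsort(fmri_rdm[iu]))
        course.append(np.corrcoef(a, b)[0, 1])
    return np.array(course)

rng = np.random.default_rng(0)
meg = rng.standard_normal((50, 8, 30))          # 50 time points, 8 conditions
fmri_rdm = rdm(rng.standard_normal((8, 100)))   # one region, 100 voxels
print(fusion_timecourse(meg, fmri_rdm).shape)   # (50,)
```

Because RDMs abstract away from the raw measurement units of each modality, correlating them sidesteps the fact that MEG sensors and fMRI voxels live in incommensurable spaces, which is what makes the fusion non-invasive and modality-agnostic.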

As an undergraduate student, I studied the philosophy of existentialism: it teaches you to "fall seven times, stand up eight" (Japanese proverb). I believe that building cross-disciplinary knowledge across fields such as psychology, computer science and neuroscience is how researchers keep up with the pace of technological development over decades: choosing to be a researcher or a scientist of our time means being a trans-disciplinary thinker. I have witnessed too many times that "at every crossway on the road that leads to the future, each progressive spirit is opposed by a thousand men appointed to guard the past" (A.G. Bose), but also that "logic will get you from A to Z; imagination will get you everywhere" (A. Einstein). I believe that cross-disciplinary education opens people's minds to imagine, believe in, and do the impossible.



http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface



The Extended Mind
by Andy Clark & David J. Chalmers

1 Introduction



​Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words "just ain't in the head", and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We advocate a very different sort of externalism: an active externalism, based on the active role of the environment in driving cognitive processes.



2 Extended Cognition



Consider three cases of human problem-solving:

(1) A person sits in front of a computer screen which displays images of various two-dimensional geometric shapes and is asked to answer questions concerning the potential fit of such shapes into depicted "sockets". To assess fit, the person must mentally rotate the shapes to align them with the sockets.

(2) A person sits in front of a similar computer screen, but this time can choose either to physically rotate the image on the screen, by pressing a rotate button, or to mentally rotate the image as before. We can also suppose, not unrealistically, that some speed advantage accrues to the physical rotation operation.

(3) Sometime in the cyberpunk future, a person sits in front of a similar computer screen. This agent, however, has the benefit of a neural implant which can perform the rotation operation as fast as the computer in the previous example. The agent must still choose which internal resource to use (the implant or the good old fashioned mental rotation), as each resource makes different demands on attention and other concurrent brain activity.

How much cognition is present in these cases? We suggest that all three cases are similar. Case (3) with the neural implant seems clearly to be on a par with case (1). And case (2) with the rotation button displays the same sort of computational structure as case (3), although it is distributed across agent and computer instead of internalized within the agent. If the rotation in case (3) is cognitive, by what right do we count case (2) as fundamentally different? We cannot simply point to the skin/skull boundary as justification, since the legitimacy of that boundary is precisely what is at issue. But nothing else seems different. 







https://www.youtube.com/watch?v=vZPveygyS4Q