By Pure Thought Alone
The Development of the First Cognitive Neural Prosthesis
by Joel W. Burdick and Richard A. Andersen
Many of us have probably had this fantasy: just by thinking, we direct our computer to turn on and open the document we want to work on. Or perhaps another: mentally commanding the cursor to move to a specific location on the screen. At Caltech, monkeys can already accomplish the latter.
This feat has been achieved through groundbreaking interdisciplinary research and technology that includes advances in neuroscience, engineering, neurosurgery, and neural informatics. Along with many colleagues, Richard Andersen, James G. Boswell Professor of Neuroscience, Joel Burdick, Professor of Mechanical Engineering and Bioengineering, and Yu-Chong Tai, Professor of Electrical Engineering, have developed proof-of-concept neural prostheses and the associated technology that will someday allow use of these devices by humans.
A neural prosthesis is a direct brain interface that enables a primate, via surgically implanted electrode arrays and associated computer algorithms, to control external electromechanical devices by pure thought alone. The first beneficiaries of such technology are likely to be patients with spinal cord damage, peripheral nerve disease, or ALS (amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease). In the United States alone, 2.28 million patients live with some form of paralysis.
Although the goal is to develop practical applications, a basic understanding of the brain’s neural codes and representations is a cornerstone of this research. Moreover, the brain-machine interfaces (BMIs) that form the core of neural prosthetic technology will afford a new method to study brain mechanisms and will allow, among other things, the testing of new theories of brain function.
A primary issue in neuroprosthetic research is the choice of brain area from which prosthetic command signals are derived. Current studies around the world have focused primarily on deriving neuroprosthetic command signals from the motor cortex (we refer to this approach as motor-based). Recordings from multiple neurons are “decoded” to control the trajectories of a robotic limb or a cursor on a computer screen. In addition, progress has been made in using electroencephalogram (EEG)-based signals to derive neuroprosthetic commands.
At Caltech, however, we have pursued a novel approach: using high-level cognitive signals to control neural prostheses. The system reads out the goals and intentions of the subject, rather than the instructions for how to achieve those goals (see Figure 1). Smart output devices, such as robots, computers, or vehicles, using supervisory control systems, then carry out the physical tasks required to complete the intended goal. The cognitive signals that can be read out are myriad and can include the expected value of an action and, perhaps in the future, speech, emotional state, and other higher cortical functions. An “expected value signal” is used by the brain to make decisions and can be used by prosthetics to interpret a subject’s decisions, preferences, and motivation, all of which would help a paralyzed patient communicate better with the outside world.
Proof-of-Concept: Cognitive-Based Paradigm in Monkey
Cognitive control signals can be derived from many higher cortical areas in the parietal and frontal lobes related to sensory-motor integration. The primary distinction is not the place from which recordings are made, but rather the type of information that is being decoded and the strategy for using these signals to assist patients. In our work with macaque monkeys, we focused on the posterior parietal reach region (PRR), but similar approaches can be used for interpreting cognitive thoughts from other brain areas. It is likely that some areas will be better than others depending on the cognitive signals to be decoded and the parts of the brain that are damaged.
The PRR has many features of a movement area, being active primarily when a subject is preparing and executing an arm movement. However, the region is in direct neural connection with the visual system, and vision is perhaps its primary sensory input. Moreover, this area codes the targets for a reach in visual coordinates relative to the current direction of gaze (also called retinal or eye-centered coordinates). That is, it codes the desired goal of a movement, rather than the intrinsic limb variables required to reach the target. In contrast, motor cortical areas in the frontal lobe tend to code movements in intrinsic, limb-centered coordinates. Moreover, the PRR can hold the plan for a movement in short-term memory through persistent activity of its neurons. This intention-related activity provides a useful neural correlate of the intentions of the subject for subsequent decoding. The human homologue of PRR has recently been identified through fMRI experiments.
There may be advantages to using the visual-motor system for neural-prosthetic applications. Paralysis resulting from spinal cord lesions or other disease processes compromises sensory feedback, causing a major loss of the subject’s capability for error correction. Vision, however, is generally not compromised with paralysis and therefore can still provide accurate feedback. Paralysis also results in degeneration and reorganization in the motor cortex. In the case of spinal cord lesions, degeneration results from direct damage to cortico-spinal motor neurons and from the loss of somatosensory input, the main sensory input to the motor cortex. Visual-motor areas within the posterior parietal cortex are relatively more removed anatomically, with few cortico-spinal projecting neurons and with vision being a major source of sensory input. Thus it is possible that these areas may undergo less degeneration with paralysis, and therefore provide a stable source of command signals. Moreover, the posterior parietal cortex appears to be essential for visually guided, on-line correction of movement trajectories.
We have developed proof-of-concept neuroprosthetic systems in the Andersen laboratory. The work involves three monkeys that are each trained to operate a computer cursor by merely “thinking about it.” The experimental set-up consists of neurophysiological recording chambers that simulate the function of a neural prosthesis. Signals from electrodes placed in the PRR are amplified, filtered, and digitized. These sampled neural signals are then processed to extract the intended reach direction, as well as the current cognitive state of the monkey. The decoded reach direction and cognitive state then form the basis for a command signal sent to a computer interface or electromechanical device. Specifically, we have the monkey think about positioning a cursor at a particular goal location on a computer screen, and then we decode his thoughts. He thinks about reaching there, but doesn’t actually reach, and if he thinks about it accurately, he’s rewarded. Combined with the goal task, the monkey is also told what reward to expect for correctly performing the task. Examples of variation in the reward are the type of juice, the size of the reward, and how often it can be given. We are able to predict what each monkey expects to get if he thinks about the task in the correct way. The monkey’s expectation of the value of the reward (his cognitive state) provides a signal that can be employed in the control of the neural prosthetic device (in this case, ultimately the cursor).
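To make the decoding step concrete, here is a minimal sketch of one standard scheme, population-vector decoding, applied to simulated cosine-tuned neurons. The tuning model, firing rates, and parameters are all invented for illustration; the decoding algorithms used in the actual experiments are more sophisticated.

```python
import numpy as np

# Hypothetical population of 8 neurons with evenly spaced preferred
# reach directions (radians); real PRR populations are not this tidy.
preferred = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
baseline, modulation = 10.0, 5.0  # spikes/s; invented values

def decode_reach_direction(rates):
    """Population-vector estimate of the planned reach direction.

    Each neuron is modeled as cosine-tuned: its rate peaks when the
    planned reach matches its preferred direction. Weighting each
    preferred-direction unit vector by the neuron's normalized rate
    and summing yields the estimated direction.
    """
    weights = (rates - baseline) / modulation
    x = np.sum(weights * np.cos(preferred))
    y = np.sum(weights * np.sin(preferred))
    return np.arctan2(y, x)

# Simulated trial: the monkey plans (but does not execute) a reach
# toward 90 degrees; generate the population's firing rates.
true_direction = np.pi / 2
rates = baseline + modulation * np.cos(true_direction - preferred)
estimate = decode_reach_direction(rates)
```

With uniformly spaced preferred directions and noiseless cosine tuning, the estimate recovers the planned direction exactly; real recordings require averaging over noisy spike counts.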
This type of signal processing may have great value in the operation of prosthetic devices because once the patient’s goals are accurately decoded, the device’s computational system can perform the lower-level calculations needed to run it. Since the brain signals are high-level and abstract, they are versatile and can be used to operate a number of devices. These signals could also be rapidly adjusted by changing parameters of the task to expedite the learning that patients must do in order to use an external device. The result suggests that a large variety of cognitive signals could be interpreted, which could lead, for instance, to voice devices that operate by patients merely thinking about the words they want to speak.
The ability of the monkeys to position the cursor on the computer screen with their intentions improved considerably over a period of one to two months. This is consistent with a number of studies of cortical plasticity and suggests that patients will be able to optimize the performance of neural prostheses with training.
The local field potential (LFP) is the aggregate extracellular signal that is recorded by an electrode from the activity of neurons within its listening sphere. It has recently been found that the local field potentials recorded in the posterior parietal cortex of monkeys contain a good deal of information regarding the primates’ intentions. This information is complementary to that obtained from action potentials. The LFP gamma band (approximately 25-90 Hz) temporal structure in the PRR is tuned for reach direction like the action potential activity of individual neurons. Moreover, the decoding of behavioral state from PRR activity was better when using LFPs as compared to spikes. Thus the LFPs provide the most reliable indication of changes in cognitive state.
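As a toy illustration of pulling out the gamma band, the following numpy sketch keeps only the 25-90 Hz components of a simulated one-second LFP trace. The sampling rate and test signal are invented, and a real system would use a proper causal filter rather than this FFT masking.

```python
import numpy as np

def gamma_band(lfp, fs, low=25.0, high=90.0):
    """Keep only frequency components in the gamma band (25-90 Hz)."""
    spectrum = np.fft.rfft(lfp)
    freqs = np.fft.rfftfreq(len(lfp), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(lfp))

fs = 1000.0                       # assumed sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
# Simulated LFP: a slow 5 Hz rhythm plus a weaker 60 Hz gamma oscillation.
lfp = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
gamma = gamma_band(lfp, fs)       # only the 60 Hz component survives
```

Band power in traces filtered this way is one simple feature from which reach direction or behavioral state could be decoded.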
From a practical point of view, these oscillations are extremely useful for neural prosthetic applications. A major challenge for cortical prostheses is to acquire meaningful data from a large number of channels over a long period of time. This is particularly challenging if single spikes are used since typically only a fraction of the probes in an implanted electrode array will show the presence of spikes, and these spikes are difficult to hold over very long periods of time. However, since local fields come from a less spatially restricted listening sphere, they are easier to record and are more stable over time. In our experience, most electrodes in an array can record LFP signals for at least two years, making this one of the most robust signal gathering methods. Thus it would be of great advantage to be able to use the LFP signals for decoding when and where monkeys intend to make movements.
We now turn to some of the engineering issues that are relevant to the development of future cognitive neural prostheses.
Moveable Electrodes for Autonomous Neuron Isolation and Tracking
The front end of a neural prosthesis consists of an array of chronically implanted electrodes. A key challenge is the yield (number of useful signals) and longevity of these electrode arrays. The reported values of yield and longevity vary widely across different animals, cortical areas, and array designs. While some arrays have provided useful signals for several years, the quality of single-cell recordings in most channels of fixed-geometry implanted electrode arrays noticeably degrades after a few weeks or months. Factors contributing to this loss of signal include reactive gliosis, a scarring response driven in part by the bio-incompatibility of the electrode’s surface material. Another difficulty arises from the arrays’ fixed electrode geometries, which cannot be adjusted once they are implanted.
Consequently, the array’s useful signal yield may be low if the electrodes’ active recording sites lie in electrically inactive tissue, are distant from cell bodies (which generally produce the largest extracellular signals), or sample cells with non-optimal receptive fields for the task at hand. Even if the initial placement is satisfactory, fixed-geometry electrode arrays can drift in the brain matrix due to tissue movement caused by respiratory or circulatory pressure variations and mechanical shocks. This drift can lead to the separation of the electrode from the vicinity of active cells, thereby lowering signal yield.
Ideally, the electrodes could be readjusted continuously after they are implanted to overcome these effects. Such continual adjustment would significantly improve the quality and yield of signals harvested by an electrode array. Electrodes that could break through scar tissue after it builds up would also be useful. Manual adjustment of electrodes, which is the standard practice today, is tedious and impractical for paralyzed patients. Electrodes that could continuously and autonomously position themselves so as to optimize the neural signal would provide a great advantage.
To solve these problems, the Burdick lab has developed a new class of computer-controlled multi-electrode systems that continually and autonomously adjust electrode positions under closed-loop feedback control so as to optimize and maintain the quality of the recorded extracellular signal. These electrodes can maintain high signal quality without requiring human monitoring and intervention. They also allow specific populations of neurons to be selected, thereby simplifying decoding and control algorithms that are based on decoding neuronal populations.
We have developed algorithms that can autonomously isolate and then maintain the signal from a single neuron. These algorithms use a variant of stochastic optimization to find the best probe position using only the recorded electrical signal. The algorithm has been used successfully on a number of occasions to automatically isolate and maintain extracellular signal activity in monkey PRR, as well as rat barrel cortex. To demonstrate the future potential for this approach, we have also built a custom micro-drive containing four electrodes that are independently actuated by miniature piezoelectric motors. This device can fit inside the standard recording chamber that is used in the Andersen recording laboratory.
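The flavor of such an algorithm can be conveyed with a toy sketch: treat signal quality as a black-box function of electrode depth and hill-climb on it with random steps. The quality landscape, step sizes, and depths below are all invented; the actual algorithm is a more careful variant of stochastic optimization operating on recorded waveforms.

```python
import random

def isolate_neuron(signal_quality, start_depth, step=20.0, iters=300, seed=0):
    """Stochastic search over electrode depth (microns).

    `signal_quality` stands in for a metric computed from the recorded
    signal (e.g., spike amplitude relative to noise). Each iteration
    nudges the electrode by a random amount and keeps the move only if
    the measured quality improves.
    """
    rng = random.Random(seed)
    depth = start_depth
    best = signal_quality(depth)
    for _ in range(iters):
        candidate = depth + rng.uniform(-step, step)
        quality = signal_quality(candidate)
        if quality > best:
            depth, best = candidate, quality
    return depth, best

# Toy landscape: signal quality peaks near a cell body at 1200 microns.
landscape = lambda d: 1.0 / (1.0 + ((d - 1200.0) / 50.0) ** 2)
depth, quality = isolate_neuron(landscape, start_depth=1000.0)
```

Because the search uses only the measured signal, the same loop can keep tracking a cell that drifts: the peak of the quality landscape moves, and the electrode follows it.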
However, the eventual goal is to use micro-electro-mechanical systems (MEMS) technology to produce a movable electrode array implant. To this end we are collaborating with Yu-Chong Tai and his group. One promising method is to use electrolysis techniques to move and lock the probes in place. This movement is accomplished by passing electrical current within small bellows chambers filled with fluid. The gas released by electrolysis increases pressure within the bellows and moves the electrode. The electrodes can be moved in the opposite direction by introducing a catalyst and reversing the current flow. Advantages of this electrolysis technique include relatively low driving voltage, low heat dissipation, the ability to lock electrodes in place without the need for continuous power dissipation, the ability to generate very high forces, and the ability to provide hundreds of microns of electrode displacement.
Microfluidic delivery systems could also be added to the implant. These microfluidic systems would also work via electrolysis, and could potentially deliver anti-inflammatory agents to manage the effects of the electrodes’ presence, or to deliver therapeutic agents. The MEMS movable probes and microfluidic channels can be constructed as linear probe arrays. These arrays would consist of the electrodes/needles, micro-electrolysis systems, and control electronics. The individual chips with linear arrays would be stacked within a chamber, allowing the most flexibility in the overall geometry of the implanted array of electrodes and microfluidic channels. The depth of the individual chips can be adjusted coarsely using a motorized chip adjuster following surgery. After coarse adjustment, electrolysis actuators would provide the fine-tuning of the electrodes’ positions automatically and continuously. The integration of pre-processing electronics (e.g., pre-amplifiers, filters, and multiplexers) into a multi-electrode array front-end would improve recording performance by improving signal-to-noise ratio and buffering the signal of high-impedance electrodes. Such a pre-processing chip has recently been developed.
Thought into Action: Control Systems Based on Intelligent Devices and Supervisory Control
Also required for a cognitive-based prosthetic system are intelligent devices and hierarchical, supervisory control algorithms. Any system that translates thoughts into action will employ a computer interface, and often some electromechanical devices. Such systems must match the information that is decoded from the brain to the informational requirements of the computer interface and the commanded devices. On the brain side, the cognitive approach focuses on decoding high-level information at the abstract or symbolic level. The informational requirements on the electromechanical device side can vary widely with the type of device and intended task. For graphical computer interfaces, the problem of control system design reduces to matching the cognitive states of the brain to the symbolic states of the task. For instance, iconic menus on computer monitors can be used for communication with a wide range of devices from household utilities to computers for exploring the Internet.
Physical electromechanical devices require more detailed instructions. Supervisory control systems can convert symbolic-level commands into detailed motor device commands, which are then carried out and monitored by the supervisory controller. This approach has additional advantages for both the patient and the system engineer. To interface the brain to different electromechanical devices, often only the lowest level of the control hierarchy needs to be re-engineered for the specific mechanical device. Similarly, the hierarchical nature of supervisory control should allow patients to learn much more quickly how to command a new device.
Since a patient’s workspace will be limited, knowledge of that workspace, combined with the decoded desires of the subject, may be sufficient to successfully complete tasks using intelligent devices. For example, given the Cartesian coordinates of an intended object for grasping, a robotic motion planner can determine the detailed joint trajectories that will transport a prosthetic hand to the desired location. Sensors embedded in the mechanical arm ensure that it follows the commanded trajectories, thereby replacing the function of proprioceptive feedback (internal feedback within muscles, joints, and tendons) that is often lost in paralysis. Other sensors can allow the artificial arm and gripper to avoid obstacles and control the interaction forces with its surroundings, including grasping forces, thereby replacing somatosensory feedback (external feedback based on touch). Only the intent to grasp or ungrasp an object is needed to supervise these actions. Hence, low-level physical details and interactions need not be specifically commanded from decoded brain signals. However, if available, motor signals can augment low level plans and controls.
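The division of labor described above (a decoded goal at the top, a motion planner in the middle, a sensed device at the bottom) can be sketched as follows. Every class and method name here is invented for illustration; this is not the interface of any actual prosthetic system.

```python
class LinearPlanner:
    """Toy motion planner: straight-line interpolation in Cartesian space."""
    def plan(self, start, goal, steps=5):
        return [tuple(s + (g - s) * k / steps for s, g in zip(start, goal))
                for k in range(1, steps + 1)]

class ArmStub:
    """Stand-in for a robotic arm that logs every commanded waypoint."""
    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)
        self.log = []
        self.grasping = False
    def current_pose(self):
        return self.pose
    def move_to(self, waypoint):
        self.pose = waypoint      # embedded sensors would verify this step
        self.log.append(waypoint)
    def grasp(self):
        self.grasping = True      # force sensors would regulate grip here

class SupervisoryController:
    """Expands one decoded intention into monitored low-level commands."""
    def __init__(self, planner, arm):
        self.planner = planner
        self.arm = arm
    def execute_goal(self, goal_position):
        trajectory = self.planner.plan(self.arm.current_pose(), goal_position)
        for waypoint in trajectory:
            self.arm.move_to(waypoint)
        self.arm.grasp()          # only the intent to grasp was decoded

# One decoded goal drives the whole hierarchy.
arm = ArmStub()
controller = SupervisoryController(LinearPlanner(), arm)
controller.execute_goal((0.3, 0.1, 0.2))
```

Swapping in a different device means replacing only the bottom layer (`ArmStub` here); the decoded goal and the supervisory logic remain unchanged, which is the engineering advantage claimed above.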
Work continues on all fronts, and we have recently identified the human homologue of the macaque parietal reach region. However, it is still unknown if neural activity in human PRR can be decoded in the same way as that in the macaque PRR. To address this question we are working with human participants (epilepsy patients) who have chronically implanted electrodes placed on the surface of cortex and within deep brain regions. Recordings taken from these participants while they execute delayed reaches allow us to acquire high signal-to-noise intracranial EEG (iEEG) activity from cortical areas during motor planning. Analysis of this neural activity is aimed at determining which properties of the signal can be used to decode and predict planned movement. The positive results to date of our unique approach to the development of cognitive neural prostheses have inspired us to continue, with the possibility of transitioning the technology to humans within several years.
Joel Burdick is Professor of Mechanical Engineering and Bioengineering and Richard Andersen is the James G. Boswell Professor of Neuroscience. Learn more about Joel Burdick’s research at: http://www.me.caltech.edu/faculty/burdick.html
More on Richard Andersen’s research at: http://vis.caltech.edu/