Mind control moves into battle
Technology that taps into a soldier’s thought patterns could soon see action on the battlefield. But some worry about its future applications.
In Afghanistan, some soldiers are said to possess a sixth sense.
They hone their skills at the head of convoys that trundle along the dusty roads of remote mountainous provinces. As they drive, these soldiers scan ahead for signs of roadside bombs: disturbed earth, a glint of metal, or just something that seems out of place. Spotting them can mean the difference between life and death. Those who are half-jokingly said to possess the “sixth sense” are the ones that seem to have an uncanny ability to spot these almost imperceptible signs of danger.
Now, military scientists are beginning to build technologies that would give every soldier this ability, pushing the field of neuroscience out of the lab and onto the battlefield.
These devices exploit what neuroscientists call the P300, a wave of brain activity that signifies an unconscious recognition of a visual object, and is so named because it occurs about 300 milliseconds after the stimulus. The P300 can be thought of as the biological basis of the sixth sense.
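The timing in that definition is what makes the P300 usable by a machine: a system that knows when a stimulus appeared can cut out the slice of EEG where the wave should land. Here is a minimal sketch of that windowing step; the sampling rate and window bounds are illustrative assumptions, not figures from the article.

```python
# Minimal sketch: cut out the EEG window where the P300 is expected,
# roughly 250-450 ms after a stimulus. The 250 Hz sampling rate and
# the window bounds are illustrative assumptions.

SAMPLE_RATE_HZ = 250  # assumed EEG sampling rate

def p300_window(samples, stimulus_index,
                start_ms=250, end_ms=450, rate=SAMPLE_RATE_HZ):
    """Return the slice of `samples` covering start_ms..end_ms
    after the stimulus at `stimulus_index`."""
    start = stimulus_index + start_ms * rate // 1000
    end = stimulus_index + end_ms * rate // 1000
    return samples[start:end]

samples = list(range(1000))  # stand-in for one channel of EEG data
epoch = p300_window(samples, stimulus_index=100)
print(len(epoch))  # 50 samples, i.e. 200 ms at 250 Hz
```

A real system would average or classify many such epochs rather than inspect one, but the per-stimulus window is the basic unit everything downstream works on.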
The problem is that it may take several seconds for the brain to become conscious of what it’s seen, and in Afghanistan, that brief time can mean the difference between spotting a bomb, and driving over it and setting it off.
But a device known as an electroencephalogram (EEG) can spot that P300 signal. Hooked into a sophisticated computer that can interpret the signal, it can immediately alert a person to a potential threat, taking a short cut through the brain’s normal conscious processing. Combined with advanced optics, it is possible to imagine a Terminator-like vision system that scans an area and immediately identifies and categorises threats.
Although it sounds far-fetched, this is roughly the idea behind a new military technology called Sentinel (SystEm for Notification of Threats Inspired by Neurally Enabled Learning), which is being touted as the world’s first “cognitive-neural” binocular threat-warning system.
Here’s how it works in practice: a high-tech camera scans a wide field of view, and then picks out potential threats based on sophisticated algorithms that mimic the human vision system. Then images of those threats are flashed in rapid succession through a viewer to the user, whose brain signals are being monitored. When the brain spots a potential threat and gives off the P300, the signal is detected and the user is visually alerted to focus on that specific image.
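The loop just described, candidate regions proposed by a detector, flashed one at a time, and kept or discarded based on the wearer's brain response, can be sketched as follows. This is a hypothetical illustration, not HRL's implementation: the EEG classifier is simulated, and all names and the 0.5 threshold are invented for the example.

```python
# Hypothetical sketch of a Sentinel-style triage loop: a detector
# proposes candidate image regions, each is flashed to the user, and
# regions whose (here, simulated) P300 score crosses a threshold are
# flagged for the user's attention. All names are illustrative.

import random

P300_THRESHOLD = 0.5  # invented decision threshold

def simulated_p300_score(is_threat, rng):
    """Stand-in for an EEG classifier: threat images tend to evoke a
    stronger P300-like response about 300 ms after the flash."""
    base = 0.8 if is_threat else 0.2
    return base + rng.uniform(-0.1, 0.1)

def triage(candidates, rng=None):
    """Flash each candidate region in succession and keep those whose
    simulated P300 score crosses the threshold."""
    rng = rng or random.Random(0)
    alerts = []
    for region in candidates:
        score = simulated_p300_score(region["threat"], rng)
        if score >= P300_THRESHOLD:
            alerts.append(region["id"])
    return alerts

candidates = [
    {"id": "chip-1", "threat": False},
    {"id": "chip-2", "threat": True},
    {"id": "chip-3", "threat": False},
]
print(triage(candidates))  # only the simulated threat chip is flagged
```

The design point the sketch captures is the one Khosla describes: the algorithmic detector only nominates candidates, and the human brain, not the software, does the final filtering.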
“That’s the beauty of this approach, it’s the human brain doing the filtering,” says Dr. Deepak Khosla, the chief scientist for the project at HRL Laboratories, based in Malibu, California, which developed the goggles under a research project funded by the Pentagon’s Defense Advanced Research Projects Agency (Darpa).
It may sound harder than just using a plain old pair of binoculars, but the ability to go through many images without manually scanning the landscape, or having to click through camera images, does save time, at least in tests. Sentinel has already outperformed conventional binoculars by spotting 30% more simulated threats in tests conducted in a desert environment in Arizona and tropical terrain in Hawaii, according to the company, and this summer the Army will field-test a prototype of the binoculars at Camp Roberts in California.
These goggles represent more than just another gadget: they could well become the first example of military neuro-enhancement using what is called a brain-machine interface.
The potential impact of such technology, long the stuff of science fiction, has emerged in recent years as a subject for serious military experts. In fact, a 2008 report on neuro-enhancement by the Jasons, a group of elite scientists who advise the government on national security issues, warned of the “potential for abuses in carrying out such research, as well as serious concerns about where remediation leaves off and changing natural humanity begins.” While the US military would be less prone to abuses thanks to extensive laws and regulations, “the activities of adversarial forces, will not likely be similarly constrained,” they wrote.
Whether foreign adversaries are about to engage in a brain race is unclear, but interest in such technology is on the rise. Darpa has another brain-computer interface project that looks at using the P300 to help intelligence analysts sort through satellite imagery, and the commercial sector is also moving ahead. Entertainment companies are already marketing EEG caps that allow players to use their thoughts to control an avatar in a video game, though the commercial technology is much less sophisticated than the military system.
The technology is also being used to help "locked-in" people, or the severely disabled, communicate by using their brain signals to pick out letters and type. This application works by flashing letters in front of the person and recording which ones elicit a response. The idea is that if the brain is looking for the letter K, then "when you see the letter K, it generates a P300," says Deniz Erdogmus, an engineering professor at Northeastern University in Boston, who builds these systems. Practical applications are still five to 10 years away, but Erdogmus says that in lab experiments trained test subjects are scoring as high as 98%. "We can actually do it right now, it's just a matter of fine-tuning and making it field worthy," he says.
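The selection logic behind such a speller reduces to a simple rule: flash each letter, score the brain's response to each flash, and type the letter with the strongest P300-like response. The sketch below illustrates that rule with simulated scores; the function names and values are invented for the example.

```python
# Illustrative sketch of the P300-speller selection rule: letters are
# flashed one at a time, each flash gets a (here, simulated) EEG
# response score, and the letter with the strongest response is typed.

import string

def pick_letter(scores):
    """Given a per-letter response score, return the letter the user
    was attending to (the one with the strongest P300-like response)."""
    return max(scores, key=scores.get)

# Simulated session: the user is looking for "K", so its flash
# produces the largest response.
scores = {letter: 0.1 for letter in string.ascii_uppercase}
scores["K"] = 0.9
print(pick_letter(scores))  # K
```

In practice, letters are usually flashed in groups over many repetitions and the scores averaged to overcome noisy EEG, which is part of why fielded systems remain slower than a keyboard.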
Though such work is still primarily lab-based, experts do think the commercial sector will eventually improve; at HRL, Khosla sees a number of applications for this sort of brain-machine interface device, pointing to research that’s already being done in everything from the medical field, which is looking to use EEG to control prosthetics, to the automotive industry, which hopes the brain’s ability to detect danger will make cars safer. “Of course the holy grail is, ‘can you decode higher level information in the brain,’ like what is the human thinking?” Khosla says.
Such work, however, is also drawing attention now from scientists concerned that such technology could, for example, allow even more novel use of armed drones. “The ability to control a machine directly with the human brain could, for example, provide the potential to remotely operate robots or unmanned vehicles in hostile territory,” said a recent report by the UK Royal Society.
Jonathan Moreno, a professor of bioethics at the University of Pennsylvania and the author of Mind Wars, says what people find troubling is that, like with drones, such technology expands the battlefield, allowing soldiers to fight remotely. “It’s the projection of human intelligence into a device,” says Moreno. “That’s what it’s about.”
For now, however, researchers are far away from that ‘holy grail’ of mind reading complex thoughts, or controlling drones with the brain; even interpreting basic EEG data is tricky because the human skull attenuates the signal. While there have been advances, the best signals are still generated by having dozens of sensors attached to a user’s head with conductive gel, something that is fine for a laboratory, but impractical on the battlefield (another problem: the entire Sentinel system currently weighs 7kg (15lbs), too heavy to be used like regular binoculars, though the goal is to get it down to 2.5kg (5lbs)).
Indeed, HRL is still using gel-based sensors for its goggles, though they are working toward something that can be put on and taken off, like a helmet or cap. “The biggest challenge far and away is the sensor: the device or cap that you wear on your head that is sensing the electrical activity in the brain,” says Todd Hughes, a former Darpa official.
Whether the Sentinel goggles are a harbinger of armed drones controlled by soldiers’ minds, or merely a reminder of the limits of newfangled technology, depends on how far you look into the future. The Darpa programme demonstrates that such brain-computer interfaces can work, and do have military applications, but the idea of machines that can tap into the innermost thoughts of a human being is another matter. “We’re generations away from that,” says Hughes.