Will we be able to take a nap behind the wheel of a future autonomous car? Probably not. Autopilots and other automated machinery require forms of human-operator supervision.
Autonomy, for example, depends on chips and sensors, such as GPS for position and magnetometers for directional bearing, among others. That tech, at least in the near term, has to be monitored by humans in real time in case the sensors become glitchy.
The problem with that human oversight, though: How do you make sure the supervisory human doesn’t get glitchy? What if the person arbitrarily decides that the car is driving just fine and that he wants to take a nap, or would rather read a book than supervise what he thinks is a reliable machine?
I’ve actually seen someone reading a book while driving a normal car in traffic at speed on a freeway in Los Angeles’ San Fernando Valley.
The autonomous machine needs to “understand what’s going on with the human,” says Jessica Schwarz, a graduate psychologist from the Fraunhofer Institute for Communication in Germany.
“Hey, human, how are you doing?” is something we may, metaphorically, be hearing from our devices in the future.
Real-time diagnosis
Schwarz has built a diagnostic device that “recognizes user states in real time and communicates them to the machine,” the institute explains in a press release. It’s close to being ready for commercial application, she claims.
The key is to be precise in the real-time diagnosis and try to get an overall picture of the human operator’s “workload, motivation, situation awareness, attention, fatigue and the emotional state”—all factors that contribute to “human performance,” she says.
Behavioral and physiological measurements help with the detection, as do environmental factors, the time of day and the user’s experience.
“An increased heart rate, for example, does not automatically mean that a person is stressed,” she says.
That’s why all of these factors matter and are weighed together in the tool.
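To make the idea concrete, here is a minimal sketch, in Python, of how several weak signals might be fused into one picture of the operator. The readings, thresholds and rules here are hypothetical illustrations, not Schwarz’s actual model; the point is only that the same heart-rate value reads differently depending on workload, time of day and experience.

```python
# Minimal sketch of multi-factor user-state fusion (hypothetical, not Schwarz's
# actual model): no single signal decides the diagnosis; context such as time of
# day and operator experience shifts how a raw reading is interpreted.
from dataclasses import dataclass

@dataclass
class OperatorReadings:
    heart_rate_bpm: float       # physiological signal
    eyes_closed_seconds: float  # behavioral signal from an eye tracker
    task_load: float            # 0.0 (idle) .. 1.0 (saturated)
    hour_of_day: int            # environmental context
    years_experience: float     # known operator capability

def assess_state(r: OperatorReadings) -> dict:
    """Combine several weak indicators into one picture of operator fitness."""
    # An elevated heart rate alone is ambiguous: only treat it as stress if the
    # task load is also high. (Illustrative rule, not a validated threshold.)
    stressed = r.heart_rate_bpm > 100 and r.task_load > 0.7

    # Fatigue risk rises with long eye closures, night hours, and low workload.
    night_shift = r.hour_of_day < 6 or r.hour_of_day >= 22
    fatigued = r.eyes_closed_seconds > 1.0 or (night_shift and r.task_load < 0.2)

    # Inexperienced operators are assumed to saturate earlier under load.
    overloaded = r.task_load > (0.9 if r.years_experience > 2 else 0.6)

    return {"stressed": stressed, "fatigued": fatigued, "overloaded": overloaded}

if __name__ == "__main__":
    readings = OperatorReadings(heart_rate_bpm=110, eyes_closed_seconds=0.3,
                                task_load=0.8, hour_of_day=23, years_experience=5)
    print(assess_state(readings))  # {'stressed': True, 'fatigued': False, 'overloaded': False}
```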
Monitoring the operator’s eyes is one element. “If they are closed for more than one second, an alarm is triggered,” for example, and that alarm is sent to the computer.
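The eye-closure rule itself is simple enough to sketch. In this hypothetical loop, `eyes_closed_now` and `send_alarm` are stand-ins for the eye-tracker interface and the channel back to the machine, neither of which is described in the article.

```python
# Minimal sketch of the one-second eye-closure rule described above. The polling
# loop, sensor interface, and alarm channel are all hypothetical stand-ins.
import time

EYE_CLOSURE_LIMIT_S = 1.0  # alarm threshold quoted in the article

def watch_eyes(eyes_closed_now, send_alarm, poll_interval_s=0.1):
    """Poll an eye tracker and notify the machine when eyes stay closed too long."""
    closed_since = None  # timestamp when the current closure started, if any
    while True:
        if eyes_closed_now():
            if closed_since is None:
                closed_since = time.monotonic()
            elif time.monotonic() - closed_since > EYE_CLOSURE_LIMIT_S:
                send_alarm("eyes closed for more than one second")
                closed_since = None  # reset so the alarm is not repeated every poll
        else:
            closed_since = None
        time.sleep(poll_interval_s)
```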
Air traffic controller alertness is another use for the scheme. Schwarz says that’s how she tested her system.
By mocking up air traffic scenarios with increasing levels of noise and distraction, and by fitting subjects with EEG sensors on the head and an eye tracker, she says she was able to determine, and convey to the machine, when the subjects were performing poorly. The “holistic model” she used also included factors such as user experience and known capabilities.
“Automated systems thus receive very exact information regarding the current capabilities of the user and can react accordingly,” Schwarz says in the release.
Potential applications don’t just include autonomous cars or drones. Any setting where “critical user states can be a safety issue” could use the technology, she says. For example, “monotonous monitoring tasks in control rooms or training systems for pilots could be optimized by this technology.”
The future will require “that not only the user understands the machine, but that the machine also understands the state of the human,” Schwarz says.