Emotion-sensing computer software that responds to students’ cognitive and emotional states, including frustration and boredom, has been developed by researchers at the University of Notre Dame, University of Memphis, and the Massachusetts Institute of Technology.
The new technology, which matches the interaction of human tutors, not only offers tremendous learning possibilities for students but also redefines human-computer interaction, according to Sidney D’Mello, a University of Notre Dame assistant professor of psychology who also holds an appointment as assistant professor of computer science and engineering.
Dubbed “AutoTutor” and “Affective AutoTutor,” the software can gauge a student’s level of knowledge by asking probing questions, analyzing the responses, and proactively identifying and correcting misconceptions. It also responds to the student’s own questions, gripes, and comments, and even senses a student’s frustration or boredom through facial expressions and body posture. It then changes its strategies to help the student conquer those negative emotions, the researchers said.
“Most of the 20th-century systems required humans to communicate with computers through windows, icons, menus, and pointing devices,” says D’Mello, who specializes in human-computer interaction and artificial intelligence in education. “But humans have always communicated with each other through speech and a host of nonverbal cues, such as facial expressions, eye contact, posture, and gesture. In addition to enhancing the content of the message, the new technology provides information regarding the cognitive states, motivation levels, and social dynamics of the students.”
AutoTutor is an Intelligent Tutoring System (ITS) that helps students learn complex technical content in Newtonian physics, computer literacy, and critical thinking by holding a conversation in natural language. It simulates the teaching and motivational strategies of human tutors, modeling students’ cognitive states and then tailoring the interaction to individual students. It also keeps students engaged with images, animations, and simulations.
Affective AutoTutor adds emotion-sensitive capabilities by monitoring facial features, body language, and conversational cues, the researchers said, explaining it then “regulates” negative states such as frustration and boredom and “synthesizes emotions via the content of its verbal responses, speech intonation, and facial expressions of an animated teacher.”
“Much like a gifted human tutor, AutoTutor and Affective AutoTutor attempt to keep the student balanced between the extremes of boredom and bewilderment by subtly modulating the pace, direction, and complexity of the learning task,” D’Mello said.
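The affect-responsive behavior described above can be sketched as a simple strategy-selection loop. The following Python sketch is purely illustrative: the emotion labels, thresholds, and strategy names are hypothetical assumptions, not the researchers’ actual implementation.

```python
# Hypothetical sketch of affect-responsive strategy selection,
# loosely modeled on the Affective AutoTutor description above.
# Emotion labels, thresholds, and strategies are illustrative
# assumptions, not the actual AutoTutor implementation.

from dataclasses import dataclass


@dataclass
class StudentState:
    """Estimated cognitive-affective state from dialogue, face, and posture cues."""
    knowledge: float    # 0.0 (novice) .. 1.0 (mastery), from answer analysis
    frustration: float  # 0.0 .. 1.0, e.g. from a facial-expression classifier
    boredom: float      # 0.0 .. 1.0, e.g. from posture and engagement cues


def choose_strategy(state: StudentState) -> str:
    """Pick a tutoring move that keeps the student between boredom and bewilderment."""
    if state.frustration > 0.6:
        # Material seems too hard: scaffold with hints and encouragement.
        return "simplify_and_encourage"
    if state.boredom > 0.6:
        # Material seems too easy: raise difficulty or switch to a simulation.
        return "increase_challenge"
    if state.knowledge < 0.4:
        # Misconceptions likely: ask a probing question to diagnose them.
        return "probe_misconception"
    # Student is engaged and on track: continue the current dialogue.
    return "continue_dialogue"
```

For example, a frustrated student (`StudentState(knowledge=0.5, frustration=0.8, boredom=0.1)`) would receive the `simplify_and_encourage` move, while a bored one would be pushed toward harder material. A real system would drive these decisions with trained classifiers rather than fixed thresholds.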
Tested on more than 1,000 students, AutoTutor produces learning gains of approximately one letter grade, outperforming novice human tutors and nearly matching expert human tutors, he added.
Source: University of Notre Dame