A single conversation can be interpreted in many different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations very stressful. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced an artificially intelligent, wearable system that can predict whether a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals. As a participant tells a story, the system analyzes audio, text transcriptions, and physiological signals to determine the story’s overall tone with 83 percent accuracy.
The MIT CSAIL researchers had subjects wear a Samsung Simband, a research device that captures high-resolution physiological waveforms. After capturing 31 conversations of several minutes each, the team trained two algorithms on the data: one classified the overall nature of a conversation as either happy or sad, while the other classified each five-second block of every conversation as positive, negative, or neutral.
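The two-level setup described above can be sketched roughly as follows. This is a minimal illustration, not the researchers' actual model: the random "features" stand in for real measurements derived from audio, transcripts, and physiological waveforms, and logistic regression is an assumed stand-in for whatever classifiers the team used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative stand-ins for per-segment features extracted from
# audio, text transcription, and physiological signals (random here).
n_segments, n_features = 200, 8
X_seg = rng.normal(size=(n_segments, n_features))
y_seg = rng.integers(0, 3, size=n_segments)  # 0=negative, 1=neutral, 2=positive

# Classifier 1: labels each five-second block of a conversation.
segment_clf = LogisticRegression(max_iter=1000).fit(X_seg, y_seg)

# Classifier 2: labels a whole conversation as happy or sad.
# Here each of the 31 conversations is summarized by averaging
# the features of its (variable number of) five-second segments.
n_conversations = 31
X_conv = np.stack([
    rng.normal(size=(int(rng.integers(20, 60)), n_features)).mean(axis=0)
    for _ in range(n_conversations)
])
y_conv = rng.integers(0, 2, size=n_conversations)  # 0=sad, 1=happy
conversation_clf = LogisticRegression(max_iter=1000).fit(X_conv, y_conv)

# Predict fine-grained tone per segment and an overall conversation label.
segment_labels = segment_clf.predict(X_seg[:3])
overall_label = conversation_clf.predict(X_conv[:1])
```

The key design point the sketch mirrors is the two granularities: one model judges the conversation as a whole, while the other tracks tone as it shifts every five seconds.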