Electronic assistant voice customization

How do consumers feel when they hear the synthetic voice of their electronic assistant?

Does customization of this voice improve their feelings? 

Listen closely and watch consumers’ reactions when we compare synthetic and human voices.

Objectives

We conducted an exploratory study to understand how people feel while listening to digital voices versus human voices.

Tools

  • Brain activity measurements with the Emotiv EPOC+ electroencephalogram (EEG)
  • Analysis of gaze paths with the Tobii Pro X3-120 Eye Tracker
  • Facial expression recognition with Affectiva AFFDEX
  • Arousal level assessment through electrodermal activity with the Shimmer3 GSR+ Unit
  • Digital voices generated by the Acapela engine

Protocol

Each respondent was fitted with the various sensors and asked to listen to 16 randomized voice samples (2 texts × happy/sad × male/female × human/synthetic). After each sample, respondents completed a self-reported emotional evaluation questionnaire, and a general voice preference evaluation was conducted at the end of the session.
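The 16 samples come from a full factorial crossing of four binary factors. As a minimal sketch (the variable names and seeding are assumptions, not the study's actual software), the per-respondent randomized playlist could be generated like this:

```python
import itertools
import random

# Hypothetical factor levels matching the protocol's
# 2 texts x happy/sad x male/female x human/synthetic design.
TEXTS = ["text_1", "text_2"]
EMOTIONS = ["happy", "sad"]
GENDERS = ["male", "female"]
SOURCES = ["human", "synthetic"]

def randomized_playlist(seed=None):
    """Return all 16 (text, emotion, gender, source) samples in random order."""
    samples = list(itertools.product(TEXTS, EMOTIONS, GENDERS, SOURCES))
    rng = random.Random(seed)  # per-respondent seed for reproducibility
    rng.shuffle(samples)
    return samples

playlist = randomized_playlist(seed=42)
print(len(playlist))  # 16 samples per session
```

Shuffling the full crossing per respondent controls for order effects while guaranteeing every participant hears every condition exactly once.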

Learnings

By analyzing brain frequency distribution and lateralization, increases in sweating (electrodermal activity), and facial expressions, we can evaluate which voices users find more pleasant to hear.

A sad voice is not always perceived negatively. For instance, male listeners felt positive emotions while listening to sad female voices.