Evaluating models of speech intelligibility at the level of individual tokens can provide more detailed feedback on which model components require refinement. Building on the confusions corpus to be collected in S-1, this activity will develop the protocols, data, and delivery/scoring/analysis software needed to support such model evaluation.
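As an illustration of what token-level scoring might look like, the hypothetical sketch below compares two candidate models by the mean log-probability each assigns to listeners' actual responses for a single spoken token. The function name, the listener responses, and both model distributions are invented for illustration; they are not part of the planned evaluation software.

```python
import math

def token_log_likelihood(responses, predicted):
    """Mean log-probability the model assigns to listeners' actual
    responses for one spoken token; higher is better.
    Unpredicted responses receive a small floor probability."""
    return sum(math.log(predicted.get(r, 1e-9)) for r in responses) / len(responses)

# Hypothetical listener percepts for the spoken token "pin".
responses = ["pin", "pin", "bin", "pin", "tin"]

# Two hypothetical models' predicted response distributions.
model_a = {"pin": 0.7, "bin": 0.2, "tin": 0.1}
model_b = {"pin": 0.4, "bin": 0.1, "fin": 0.5}

for name, pred in [("A", model_a), ("B", model_b)]:
    print(name, round(token_log_likelihood(responses, pred), 3))
```

A per-token score of this kind makes it possible to see not just that a model underperforms overall, but on which tokens its predicted confusions diverge from listener behaviour.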
Goal: To create an annual evaluation framework for intelligibility models, open to participants both within and outside the INSPIRE network.
Relevance: Public evaluation campaigns help identify successful ideas, show where continued research effort is required, and help foster a research community.
Main host institution: University of Sheffield
Second host institution: Universidad del Pais Vasco