December 10, 2024
These explanations are often complex, however, perhaps containing information about hundreds of model features. And they are sometimes presented as multifaceted visualizations that can be difficult for users who lack machine-learning expertise to fully comprehend.

To address this, MIT researchers developed a two-part system that converts a machine-learning explanation into a paragraph of human-readable text and then automatically evaluates the quality of the narrative, so an end user knows whether to trust it.

"Our goal with this research was to take the first step toward allowing users to have full-blown conversations with machine-learning models about the reasons they made certain predictions, so they can make better decisions about whether to listen to the model," says Alexandra Zytek, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

She is joined on the paper by Sara Pido, an MIT postdoc; Sarah Alnegheimish, an EECS graduate student; Laure…
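The excerpt doesn't spell out how either part of the system works, but a minimal sketch can make the two-part pipeline concrete. The sketch below assumes the explanation takes the form of signed per-feature attributions (such as SHAP values) and stands in for LLM-based narration with a simple template; the feature names, values, and the faithfulness check are hypothetical illustrations, not the researchers' implementation.

```python
# A minimal, illustrative sketch of the two-part idea described above:
# (1) turn a feature-attribution explanation into a readable paragraph,
# (2) automatically check the narrative against the raw explanation.
# All names and values here are hypothetical; the system described in
# the article uses an LLM for step (1), not a fixed template.

from dataclasses import dataclass

@dataclass
class Attribution:
    feature: str
    value: float  # signed contribution to the prediction (e.g., a SHAP value)

def narrate(attrs: list[Attribution], prediction: str, top_k: int = 3) -> str:
    """Step 1: render the top-k attributions as a plain-language paragraph."""
    ranked = sorted(attrs, key=lambda a: abs(a.value), reverse=True)[:top_k]
    clauses = [
        f"{a.feature} {'raised' if a.value > 0 else 'lowered'} the score "
        f"by about {abs(a.value):.2f}"
        for a in ranked
    ]
    return f"The model predicted {prediction} mainly because " + "; ".join(clauses) + "."

def check_narrative(narrative: str, attrs: list[Attribution], top_k: int = 3) -> bool:
    """Step 2: a crude quality check -- does the narrative mention each
    top feature together with the correct direction of its effect?"""
    ranked = sorted(attrs, key=lambda a: abs(a.value), reverse=True)[:top_k]
    for a in ranked:
        direction = "raised" if a.value > 0 else "lowered"
        if f"{a.feature} {direction}" not in narrative:
            return False
    return True

if __name__ == "__main__":
    attrs = [
        Attribution("credit history length", 0.41),
        Attribution("recent missed payments", -0.63),
        Attribution("income", 0.12),
    ]
    story = narrate(attrs, "loan denial risk: high")
    print(story)
    print("narrative passes check:", check_narrative(story, attrs))
```

The second step is the part that tells an end user whether to trust the text: a narrative only passes if it stays consistent with the raw explanation it was generated from.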