Explainability for Text Summarization of Legal Documents – Nina Hristozova & Milda Norkute, Thomson Reuters

Summarization of legal text is an essential part of a number of products at Thomson Reuters. The challenge is that writing summaries for multiple 10–100-page documents every day is time-consuming and laborious. By adding AI summarization capabilities to an existing product, we augmented the workflow of our editorial team: instead of writing summaries from scratch, they now review machine-generated summaries. We knew that one problem with the existing tool was that editors did not fully trust the AI-generated summaries, so we looked for ways to improve trust in the system. In this talk you will learn how we added an extra layer of explainability to the machine-generated summaries, how to select suitable explainability mechanisms, and how our users perceived them.

Key Takeaways

  • Explainability is very important for the adoption of AI systems.
  • Explainability helped the editors who work with the AI system become even more efficient and strengthened their trust in it.
  • Not all explainability methods are equal in the benefits they create for users. Explainability methods should be carefully tailored to the task and to user needs!
  • We will give a brief overview of the attention mechanism as one approach to adding explainability (see the sketch after this list).
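
As background for that last takeaway, below is a minimal, illustrative sketch, not the speakers' actual pipeline, of how cross-attention weights from an off-the-shelf encoder-decoder summarizer can be surfaced as an explanation of which parts of the source document the model attended to while generating the summary. The model choice, the Hugging Face transformers usage, and the simple averaging over decoding steps, layers, and heads are assumptions made for illustration only.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative model choice; any encoder-decoder summarizer that exposes
# cross-attention weights can be used in the same way.
model_name = "facebook/bart-large-cnn"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

document = "..."  # the legal document to summarize

inputs = tokenizer(document, return_tensors="pt", truncation=True)
with torch.no_grad():
    output = model.generate(
        **inputs,
        num_beams=1,                  # greedy decoding keeps attention shapes simple
        max_new_tokens=128,
        output_attentions=True,       # ask generate() to return attention weights
        return_dict_in_generate=True,
    )

summary = tokenizer.decode(output.sequences[0], skip_special_tokens=True)

# output.cross_attentions holds, for each generated token, a tuple of per-layer
# tensors of shape (batch, heads, 1, source_len). Averaging over decoding steps,
# layers, and heads gives one crude relevance score per source token.
source_len = inputs["input_ids"].shape[1]
scores = torch.zeros(source_len)
for step in output.cross_attentions:
    step_attn = torch.stack(step)     # (layers, batch, heads, 1, source_len)
    scores += step_attn.mean(dim=(0, 1, 2)).squeeze(0)
scores /= len(output.cross_attentions)

# Surface the most-attended source tokens as a simple explanation overlay.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
top = sorted(scores.topk(min(10, source_len)).indices.tolist())
print(summary)
print([tokens[i] for i in top])

In a product setting one would aggregate such token scores up to sentence level and render them as highlights in the source document next to the machine-generated summary, so editors can quickly check each summary statement against its source; the talk covers how this kind of explanation layer was perceived by the editors.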