Explaining Data-Driven Document Classifications

  • David Martens
  • Foster Provost

Many document classification applications require that managers, client-facing employees, and the technical team understand the reasons for data-driven classification decisions. Predictive models treat documents as data to be classified, and document data are characterized by very high dimensionality, often with tens of thousands to millions of variables (words). Unfortunately, because of this high dimensionality, understanding the decisions made by document classifiers is very difficult. This paper begins by extending the most relevant prior theoretical model of explanations for intelligent systems to account for some missing elements. The main theoretical contribution is the definition of a new sort of explanation: a minimal set of words (terms, generally) such that removing all words in this set from the document changes the predicted class from the class of interest. We present an algorithm to find such explanations, as well as a framework to assess such an algorithm's performance. We demonstrate the value of the new approach with a case study from a real-world document classification task: classifying web pages as containing objectionable content, with the goal of allowing advertisers to choose not to have their ads appear on those pages. A second empirical demonstration on news-story topic classification shows the explanations to be concise and document-specific, and to be capable of providing understanding of the exact reasons for the classification decisions, of the workings of the classification models, and of the business application itself. We also illustrate how explaining the classifications of documents can help to improve data quality and model performance.
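To make the core idea concrete, the following is a minimal sketch (not the authors' algorithm or code) of this style of explanation: greedily remove the words that contribute most to the predicted class until the prediction flips, and report the removed set. The toy linear scorer, the word weights, and all names here are illustrative assumptions.

```python
def predict(words, weights, threshold=0.0):
    """Toy linear classifier: predict the class of interest iff the
    summed word weights exceed the threshold."""
    return sum(weights.get(w, 0.0) for w in words) > threshold

def explain(document, weights, threshold=0.0, max_size=5):
    """Greedy search for a small set of words whose removal changes the
    predicted class from the class of interest (a heuristic sketch; the
    set is minimal only up to the greedy approximation)."""
    if not predict(document, weights, threshold):
        return None  # not predicted as the class of interest: nothing to explain
    removed = []
    words = list(document)
    for _ in range(max_size):
        # Remove all occurrences of the word contributing most to the score.
        candidate = max(set(words), key=lambda w: weights.get(w, 0.0))
        words = [w for w in words if w != candidate]
        removed.append(candidate)
        if not predict(words, weights, threshold):
            return removed  # removal of this word set flips the class
    return None  # no sufficiently small explanation found

# Illustrative weights for an "objectionable content" classifier.
weights = {"casino": 2.0, "poker": 1.5, "news": -1.0, "weather": -0.5}
doc = ["news", "casino", "poker", "weather"]
print(explain(doc, weights))  # → ['casino']
```

Here removing "casino" alone drops the score to the threshold, so the explanation is the single word that drives the objectionable-content prediction, illustrating how such explanations can be both concise and document-specific.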