Surdeanu Abstract: Our Quest for Interpretable Natural Language Processing

Neural networks have achieved excellent performance on many natural language processing (NLP) tasks, to the point where they have become the standard strategy for approaching these problems. However, not all is great: neural networks have poor interpretability, i.e., it is hard for the human user to understand *why* the machine made a certain prediction and, perhaps more importantly, once a limitation is identified, *how* to fix it. These issues are crucial in interdisciplinary projects, where many contributors are not machine-learning experts yet must contribute to the project in a meaningful way.

In this talk, I will summarize our lab's quest for interpretable machine learning for NLP. I will present three papers that introduce directions for building models that humans can understand and correct when necessary (e.g., grammars rather than neural networks), while still leveraging modern deep learning methods. These approaches apply both when ample training data is available (i.e., supervised learning) and when only minimal training examples exist (i.e., semi-supervised learning).