Program in Applied Mathematics Colloquium
On Machine Reasoning
In this talk I will cover two papers in which we teach computers to mimic human reasoning over language. In particular, we focus on the problem of answering complex natural language questions from large textual knowledge bases.
In the first paper, I will introduce a simple, fast, and mostly unsupervised approach for question answering (QA) that semantically aligns each word in the question and candidate answer with the most similar word in the supporting evidence text. Our word similarity method operates over neural representations that model the underlying text at different levels of abstraction, ranging from characters to full sentences. Despite its simplicity and lack of supervision, our approach obtains state-of-the-art performance on two QA datasets, outperforming many supervised neural approaches.
In the second paper, we focus on multi-hop QA, where several facts must be aggregated to answer a question. I will introduce an unsupervised strategy for selecting evidence sentences for multi-hop QA that (a) maximizes the relevance of the selected sentences, (b) minimizes the overlap between the selected facts, and (c) maximizes the coverage of both question and answer. I will show that this strategy not only improves the performance of a downstream QA system, but also the explainability of its decisions.
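A greedy version of this selection strategy can be sketched as follows. This is an illustrative simplification under my own assumptions, not the paper's exact algorithm: relevance and coverage are collapsed into lexical overlap with the question and answer, redundancy is penalized against already-selected sentences, and all names here are hypothetical.

```python
def overlap(a, b):
    """Jaccard overlap between two token lists."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def select_evidence(question, answer, sentences, k=2):
    """Greedily pick k evidence sentences: prefer sentences that cover the
    question and answer (relevance, coverage) while penalizing overlap with
    sentences already selected (diversity across hops)."""
    query = question + answer
    selected = []
    candidates = list(sentences)
    while candidates and len(selected) < k:
        def score(s):
            relevance = overlap(s, query)
            redundancy = max((overlap(s, t) for t in selected), default=0.0)
            return relevance - redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy pre-tokenized example (hypothetical data).
question = ["who", "invented", "the", "telephone"]
answer = ["bell"]
sentences = [
    ["bell", "invented", "the", "telephone"],
    ["bell", "invented", "the", "telephone", "device"],  # redundant with the first
    ["bell", "was", "born", "in", "scotland"],           # a second, distinct fact
]
print(select_evidence(question, answer, sentences, k=2))
```

Note how the redundancy penalty skips the near-duplicate second sentence in favor of a distinct fact, which is what lets the selected set serve as a readable explanation of the multi-hop answer.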