Towards Zero Shot Frame Semantic Parsing for Domain Scaling


State-of-the-art slot filling models for goal-oriented human/machine conversational language understanding systems rely on deep learning methods. Multi-task training of such models alleviates the need for in-domain annotated datasets, as they benefit from shared wording, meanings, and schema elements across different tasks and domains. However, bootstrapping a semantic parsing model for a new domain using only the semantic frame, such as the back-end API or knowledge graph schema, remains one of the holy grail tasks of language understanding. This paper proposes a deep learning based approach that can utilize only the slot label descriptions in context, without the need for any labeled or unlabeled in-domain examples, to quickly bootstrap a new domain. The main idea is to encode the slot names and descriptions within a multi-task deep learning slot filling model, resulting in soft alignments across domains by leveraging implicit transfer learning. Such an approach is promising for solving the domain scaling problem of language understanding models and eliminates the dependency on large amounts of manually annotated training data. Furthermore, our controlled experiments using a multitude of domains show that this approach results in significantly better semantic parsing performance when compared to using only in-domain data.
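To make the main idea concrete, the following is a minimal sketch of zero-shot slot tagging via slot-description encodings. It is not the paper's architecture: the hand-crafted embeddings, slot names, and the similarity-plus-threshold decision rule are all illustrative assumptions; a real model would learn a neural encoder jointly over multiple source domains and tag tokens in context. The point is only that a new domain is defined by nothing more than natural-language slot descriptions, which are encoded into the same space as utterance tokens.

```python
import numpy as np

# Hand-crafted toy word embeddings (hypothetical; the actual model would
# learn these jointly across domains with a trained neural encoder).
EMB = {
    "two":    np.array([1.0, 0.0, 0.0]),
    "number": np.array([1.0, 0.0, 0.0]),
    "people": np.array([1.0, 0.0, 0.0]),
    "seven":  np.array([0.0, 1.0, 0.0]),
    "time":   np.array([0.0, 1.0, 0.0]),
}

def encode(words):
    """Average the available word embeddings: a stand-in for a learned
    encoder over a slot description or an utterance token in context."""
    vecs = [EMB[w] for w in words if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def zero_shot_tag(tokens, slot_descriptions, threshold=0.5):
    """Tag each token with the unseen slot whose description encoding is
    most cosine-similar; below the threshold, emit the 'O' (outside) tag."""
    slot_vecs = {s: encode(d.split()) for s, d in slot_descriptions.items()}
    tags = []
    for tok in tokens:
        tv = encode([tok])
        best, best_sim = "O", threshold
        for slot, sv in slot_vecs.items():
            denom = float(np.linalg.norm(tv) * np.linalg.norm(sv))
            sim = float(tv @ sv) / denom if denom else 0.0
            if sim > best_sim:
                best, best_sim = slot, sim
        tags.append(best)
    return tags

# A new "restaurant" domain defined only by its slot descriptions:
slots = {"num_people": "number of people", "arrival_time": "time"}
print(zero_shot_tag("book a table for two at seven".split(), slots))
# → ['O', 'O', 'O', 'O', 'num_people', 'O', 'arrival_time']
```

Because "two" was never annotated as a `num_people` value, the alignment comes entirely from the similarity between the token's encoding and the slot description's encoding, which is the soft cross-domain alignment the abstract refers to.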