Jessica Kirchner

PhD Linguistics - University of California, Santa Cruz (2010)
Authored Publications
Google Publications
    Using Dependency Grammars in guiding Natural Language Generation
    Anton Ivanov
    The Israeli Seminar of Computational Linguistics, IBM Research, Haifa (2019)
    Abstract: We propose a templatic Natural Language Generation system, which uses a dependency grammar together with feature structure unification to guide the generation process. Feature structures are unified across dependency arcs, licensing the selection of correct lexical forms. From a practical perspective, the system has numerous advantages, such as the possibility of easily mixing static and dynamic content. From a theoretical point of view, the templates can be seen as linguistic constructions whose relevant grammar is specified in terms of dependency grammar. In this paper we present the architecture of the system and two case studies: verbal agreement in French, including the object-agreement pattern of past participles, and definiteness spreading in Scandinavian languages. The latter case study also exemplifies how this framework can be used for cross-lingual comparison and generation.
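The sketch below illustrates the core idea of the abstract above: features propagated across a dependency arc are unified with the features of candidate lexical entries, and only a form that unifies is licensed. It is a minimal toy example, not the authors' system; the unify helper, the lexicon, and the feature names are hypothetical, using French subject-verb agreement as the case in point.

def unify(fs1, fs2):
    """Unify two flat feature structures (dicts); return None on a clash."""
    result = dict(fs1)
    for key, value in fs2.items():
        if key in result and result[key] != value:
            return None  # incompatible feature values: unification fails
        result[key] = value
    return result

# Hypothetical toy lexicon: candidate surface forms with agreement features.
LEXICON = {
    "manger": [  # French 'to eat', present tense, 3rd person
        {"form": "mange",   "person": 3, "number": "sg"},
        {"form": "mangent", "person": 3, "number": "pl"},
    ],
}

def select_form(lemma, arc_features):
    """Return the lexical form whose features unify with the features
    propagated across the dependency arc (here, subject -> verb)."""
    for entry in LEXICON[lemma]:
        if unify(entry, arc_features) is not None:
            return entry["form"]
    raise ValueError(f"no form of {lemma!r} unifies with {arc_features}")

# The plural subject 'les enfants' imposes {person: 3, number: pl} on its
# governing verb, licensing 'mangent' and ruling out 'mange'.
print(select_form("manger", {"person": 3, "number": "pl"}))  # -> mangent

In the same spirit, object agreement of French past participles or Scandinavian definiteness spreading would be handled by unifying other feature bundles across the relevant arcs; those cases are described in the paper and not reproduced here.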
    Abstract: Entity type tagging is the task of assigning category labels to each mention of an entity in a document. While standard systems focus on a small set of types, recent work (Ling and Weld, 2012) suggests that using a large fine-grained label set can lead to dramatic improvements in downstream tasks. In the absence of labeled training data, existing fine-grained tagging systems obtain examples automatically, using resolved entities and their types extracted from a knowledge base. However, since the appropriate type often depends on context (e.g. Washington could be tagged either as city or government), this procedure can result in spurious labels, leading to poorer generalization. We propose the task of context-dependent fine type tagging, where the set of acceptable labels for a mention is restricted to only those deducible from the local context (e.g. sentence or document). We introduce new resources for this task: 11,304 mentions annotated with their context-dependent fine types, and we provide baseline experimental results on this data.
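A toy sketch of the restriction this abstract proposes: start from every fine type the knowledge base lists for the resolved entity, then keep only the types deducible from the local context. The Mention class, the cue-word lists, and context_dependent_types are hypothetical stand-ins for a real context model, not the paper's annotation pipeline or baselines.

from dataclasses import dataclass

@dataclass
class Mention:
    text: str        # the mention string, e.g. "Washington"
    sentence: str    # the local context used to restrict the label set
    kb_types: set    # every fine type the knowledge base lists for the entity

# Hypothetical cue words standing in for a real context model.
CONTEXT_CUES = {
    "city": {"visited", "downtown", "mayor"},
    "government": {"announced", "policy", "sanctions"},
}

def context_dependent_types(mention):
    """Keep only the knowledge-base types supported by the local context,
    mirroring the restriction proposed in the abstract above."""
    tokens = set(mention.sentence.lower().split())
    return {t for t in mention.kb_types if CONTEXT_CUES.get(t, set()) & tokens}

m = Mention(
    text="Washington",
    sentence="Washington announced new sanctions on Tuesday",
    kb_types={"city", "government"},
)
print(context_dependent_types(m))  # -> {'government'}, not both KB types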