Flexible Learning for Natural Language Processing
This project addresses three challenging, unresolved questions:

(1) Given recent advances in learning the parameters of linguistic models and in approximate inference, how can feature design be automated?

(2) NLP tasks are often defined without reference to real applications, and a single annotated dataset rarely serves the needs of multiple NLP projects. Can learning frameworks be extended to perform automatic task refinement, simplifying a linguistic analysis task to obtain more consistent, more precise, or faster performance?

(3) Can computational models of language take into account the non-text context in which linguistic data are embedded?

Building on recent successes in social text analysis and text-driven forecasting, this project seeks to exploit such context to refine models of linguistic structure while enabling advances in this application area.