Prediction and Integration for Natural Language Understanding
This talk presents novel research on improving natural language understanding through mechanisms inspired by human cognitive theories of prediction and integration. The speaker, Dr. Araujo, introduces methods that augment pre-trained language models such as BERT with predictive coding frameworks, yielding richer sentence-level and discourse-level representations by anticipating future latent states and leveraging contrastive learning. The talk also explores memory network architectures inspired by human anticipation and rehearsal, demonstrating how these mechanisms, particularly when focused on coreference information, substantially improve memorization and comprehension across a range of question-answering and multimodal tasks. Through experimental results, Dr. Araujo shows that integrating these cognitive inspirations leads to more robust, context-aware, and linguistically sophisticated language models, advancing both theoretical understanding and practical performance in NLP.
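To make the predictive coding idea concrete, below is a minimal sketch of the kind of objective described: a predictor is trained to anticipate the next sentence's latent state from the current one, with other sentences in the batch serving as contrastive negatives. The function names, the linear predictor, the temperature, and the 768-dimensional embeddings are illustrative assumptions, not the speaker's exact setup.

```python
# Hedged sketch of a contrastive predictive-coding loss over sentence
# embeddings (e.g., from a BERT-style encoder). All hyperparameters and
# module choices here are assumptions for illustration.
import torch
import torch.nn.functional as F

def info_nce_loss(context, future, predictor, temperature=0.1):
    """Train `predictor` to anticipate the next sentence's latent state.

    context:   (batch, dim) embeddings of sentence t
    future:    (batch, dim) embeddings of sentence t+1 (positives);
               the other rows of the batch act as negatives
    predictor: module mapping a context embedding to a predicted future latent
    """
    pred = F.normalize(predictor(context), dim=-1)  # predicted future latents
    tgt = F.normalize(future, dim=-1)               # actual next-sentence latents
    logits = pred @ tgt.t() / temperature           # similarity to all candidates
    labels = torch.arange(context.size(0), device=context.device)
    return F.cross_entropy(logits, labels)          # identify the true future state

# Example usage with a linear predictor over 768-d embeddings
# (random tensors stand in for encoder outputs):
predictor = torch.nn.Linear(768, 768)
ctx = torch.randn(32, 768)
fut = torch.randn(32, 768)
loss = info_nce_loss(ctx, fut, predictor)
loss.backward()
```

In this formulation the model improves its discourse-level representations precisely by getting better at anticipating what comes next, which mirrors the prediction-and-integration account the talk draws on.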