Presentation description
Transformer-based large language models (LLMs) are widely used for natural language processing (NLP) tasks, search engines, and code generation. Despite their ability to compose intricate responses, LLMs exhibit reasoning limitations on compositional tasks: tasks complex enough that solving them requires completing multiple subtasks. We explore whether prompting an LLM with in-context subtask examples can improve compositional performance. A minimal sketch of this prompting setup follows below.
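As an illustration of the prompting setup under study, the sketch below shows one way to assemble a prompt that places worked subtask demonstrations before a new compositional question. The example task (multi-digit multiplication broken into partial products), the demonstrations, and the build_prompt helper are hypothetical stand-ins, not the actual experimental materials; the LLM call itself is omitted because it depends on the model being evaluated.

# Illustrative sketch only: building an in-context prompt with subtask demonstrations.
# The task and examples are assumptions for illustration, not the study's data.

SUBTASK_DEMONSTRATIONS = [
    # Each demonstration decomposes one compositional problem into explicit subtasks.
    (
        "What is 23 * 14?",
        "Subtask 1: 23 * 10 = 230\n"
        "Subtask 2: 23 * 4 = 92\n"
        "Subtask 3: 230 + 92 = 322\n"
        "Answer: 322",
    ),
]

def build_prompt(question: str) -> str:
    """Prepend worked subtask examples to the target compositional question."""
    parts = []
    for demo_question, demo_solution in SUBTASK_DEMONSTRATIONS:
        parts.append(f"Q: {demo_question}\n{demo_solution}")
    # The trailing "Subtask 1:" cue nudges the model to continue the decomposition.
    parts.append(f"Q: {question}\nSubtask 1:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    # The resulting string would be sent to an LLM for completion.
    print(build_prompt("What is 47 * 36?"))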
Presenter Name: Jordan Tan
Presentation Type: Poster
Presentation Format: In Person
Presentation #B14
College: Engineering
School / Department: School of Computing
Email: u1283221@utah.edu
Research Mentor: Vivek Srikumar
Date | Time: Tuesday, Apr 9th | 10:45 AM