Chain of Thought Analysis in AI: Large Language Models’ Reasoning Performance with Non-Causal Prompts

Semester: Spring 2024


Presentation description

Previous research has shown that "Chain-of-Thought" (CoT) reasoning, which uses causality-based prompting for Q&A, increases large language model (LLM) accuracy. This study fine-tunes the T5 model with non-causal prompts of the form "This is what I know about this: {Explanation}. My answer would be {Answer}," as well as the reversed ordering, and assesses whether non-causal prompt fine-tuning preserves the model's reasoning performance and explanation reliability to the same degree as CoT.
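The two templates described above amount to a data-formatting choice made before sequence-to-sequence fine-tuning. The sketch below is a minimal illustration of that idea, not the presenter's code: it assumes hypothetical field names and a made-up Q&A instance, and simply shows how the explanation-first target and its reversed (answer-first) counterpart could be rendered as T5-style target strings.

```python
# Minimal sketch (assumed formatting, not the study's actual pipeline) of how
# the two non-causal target templates from the abstract might be rendered for
# T5-style sequence-to-sequence fine-tuning.

def format_target(explanation: str, answer: str, explanation_first: bool = True) -> str:
    """Render a target string in one of the two orderings described above.

    explanation_first=True:
        "This is what I know about this: {Explanation}. My answer would be {Answer}"
    explanation_first=False:
        the reversed ordering (answer before explanation).
    """
    if explanation_first:
        return (f"This is what I know about this: {explanation}. "
                f"My answer would be {answer}")
    return (f"My answer would be {answer}. "
            f"This is what I know about this: {explanation}")


# Hypothetical Q&A instance used only for illustration.
example = {
    "question": "Can a penguin fly?",
    "explanation": "penguins are flightless birds",
    "answer": "no",
}

source_text = f"question: {example['question']}"
target_text = format_target(example["explanation"], example["answer"])
print(source_text)
print(target_text)
```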

Presenter Name: Kishan Thambu
Presentation Type: Poster
Presentation Format: In Person
Presentation #B15
College: Engineering
School / Department: School of Computing
Research Mentor: Ana Marasović
Date | Time: Tuesday, Apr 9th | 10:45 AM