Cognition in a Large Language Model
Brian W. Stone (PSYC 343 Cognitive Psychology) provided this assignment example. Examples can be adapted to fit your and your students’ comfort and skill levels.
Overview
In this psychology course, students develop a hypothesis about how a large language model (LLM) processes information compared with human thinking, then document an interaction with the LLM in which they informally test that hypothesis.
Pedagogical Application
This hands-on assignment encourages experimentation and testing and is grounded in course content about perception, attention, memory, and language.
This assignment connects course concepts to a real-world application.
How It Works
After an initial warm-up using an LLM, students pose and test a cognition-related question. Questions could include:
Does the LLM fall for the conjunction fallacy?
Can it recognize analogies or metaphors?
How well does it adapt to different audiences?
Does it reflect on human cognitive biases like confabulation or base rate neglect?
Instructors provide questions and examples to guide students' exploration. Students then submit their prompts, the LLM's outputs, and an analysis of the results.
Student Findings
One student encoded a message in the capital letters of a paragraph. The LLM missed the pattern, prompting the student to reflect on human versus machine pattern recognition.
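The capitalization test can be sketched in code. The short Python example below is a hypothetical illustration (the paragraph and hidden message are invented, not the student's actual prompt): it shows how the pattern the LLM missed can be recovered mechanically by reading off the capital letters in order.

```python
# Recover a message hidden in the capital letters of a paragraph.
# The paragraph below is a made-up illustration, not the student's prompt.
paragraph = (
    "Having explored many ideas, Everyone Learned that Language models "
    "Often miss patterns."
)

# Collect every uppercase letter in order of appearance.
hidden = "".join(ch for ch in paragraph if ch.isupper())
print(hidden)  # prints "HELLO"
```

A student might paste a paragraph like this into the LLM and ask whether anything unusual stands out, then compare the model's response to this deterministic decoding.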
Another student observed the LLM confidently inventing a video game term, mirroring human confabulation.
Ethical and Practical Use Guidelines
Support students by being familiar with:
Additional resources can be found in the Boise State AI in Education Faculty Toolkit (link TBD).