By Olivia Buchanan
Faculty Mentor: Evan Coleman
Abstract
Reinforcement learning (RL) enables AI agents to solve complex problems through trial and error instead of relying on human-coded instructions. This project investigates how these agents perform when information about the world is hidden from them, a scenario that mimics real-life challenges such as a broken sensor. Specifically, this study explores whether giving an agent “memory” improves its ability to infer the missing information.
To test this, a custom AI model was developed that uses a stacked series of recent visual snapshots to act as its memory. Its performance was then evaluated in a physics-based simulation against standard industry benchmarks. Surprisingly, the results indicated that giving the AI memory in this way did not significantly improve its performance in this specific environment. These unexpected findings challenge our assumptions about how AI learns and provide a valuable foundation for developing more resilient systems in robotics and self-driving cars.
