Designing Transparency and Intentionality in Agentic AI Systems

As AI systems become more autonomous and capable of taking actions on behalf of users, the need for transparency becomes critical. This research explores how we can design interfaces that make AI decision-making processes visible and understandable, creating systems that adapt to human needs rather than forcing humans to adapt to technology.
The Black Box Challenge
Modern AI systems, particularly large language models and agentic systems, often operate as "black boxes"—users can see inputs and outputs but have little visibility into the reasoning processes in between. This opacity creates trust issues, makes debugging difficult, and can lead to misaligned expectations between users and AI capabilities.
As these systems gain the ability to take autonomous actions—browsing the web, executing code, making decisions—the stakes of this opacity increase dramatically. Users need to understand not just what the AI is doing, but why it's doing it and what it might do next.
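One way to make the "what, why, and what next" of agent behavior concrete is a structured reasoning trace that pairs every action with its rationale and the agent's predicted next step. The sketch below is a minimal, hypothetical schema (the names `ActionRecord`, `ReasoningTrace`, and the example actions are illustrative, not drawn from any specific system):

```python
from dataclasses import dataclass, field

@dataclass
class ActionRecord:
    """One step in an agent's user-visible reasoning trace (hypothetical schema)."""
    action: str          # what the agent is doing, e.g. "search_web"
    rationale: str       # why it chose this action
    predicted_next: str  # what it expects to do afterwards

@dataclass
class ReasoningTrace:
    records: list[ActionRecord] = field(default_factory=list)

    def log(self, action: str, rationale: str, predicted_next: str) -> ActionRecord:
        record = ActionRecord(action, rationale, predicted_next)
        self.records.append(record)
        return record

    def summary(self) -> str:
        """Render the trace as user-facing 'what / why / what next' lines."""
        return "\n".join(
            f"{r.action}: {r.rationale} (next: {r.predicted_next})"
            for r in self.records
        )

trace = ReasoningTrace()
trace.log("search_web", "user asked for recent pricing data", "open top result")
print(trace.summary())
```

Even a lightweight trace like this turns the opaque input-output gap into a sequence of inspectable, explainable steps that an interface can surface selectively.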
Key Research Questions
This research investigates several interconnected questions about transparency in agentic AI:
- Visibility — How can we make AI reasoning processes visible without overwhelming users with information?
- Control — What intervention points should users have when AI systems take autonomous actions?
- Trust Calibration — How do we help users develop appropriate trust levels in AI capabilities?
- Intentionality — How can interfaces communicate AI "intent" in ways humans naturally understand?
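The "Control" question above can be made concrete with an intervention point: a gate that lets low-risk actions proceed automatically while routing high-risk ones through explicit user approval. This is a minimal sketch under assumed names (`Risk`, `run_action`, and the example actions are hypothetical illustrations, not an existing API):

```python
from enum import Enum
from typing import Callable

class Risk(Enum):
    LOW = "low"    # e.g. read-only browsing
    HIGH = "high"  # e.g. sending email, spending money

def run_action(name: str, risk: Risk, execute: Callable[[], str],
               approve: Callable[[str], bool]) -> str:
    """Gate high-risk actions behind an explicit user-approval callback."""
    if risk is Risk.HIGH and not approve(name):
        return f"{name}: skipped (user declined)"
    return execute()

# Usage: an auto-declining approver stops the high-risk action.
result = run_action("send_email", Risk.HIGH,
                    execute=lambda: "email sent",
                    approve=lambda name: False)
print(result)  # send_email: skipped (user declined)
```

Where the risk threshold sits, and how approval is requested, are themselves design questions about trust calibration rather than fixed engineering answers.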
Design Research Methodology
My approach combines theoretical frameworks from human-computer interaction with hands-on design exploration. I study how existing AI interfaces communicate (or fail to communicate) system state, analyze user mental models of AI capabilities, and prototype new interaction patterns that make AI reasoning more transparent.
This work bridges the gap between technical capabilities and human understanding, ensuring that AI systems remain comprehensible and trustworthy as they become more autonomous. The goal is not just to explain AI to users, but to design AI interactions that are inherently more understandable.