My research focuses on designing transparency and intentionality into agentic AI systems. I explore how we can make AI decision-making processes more visible and understandable to users, creating interfaces that adapt to human needs rather than forcing humans to adapt to technology.
Currently, I'm investigating how transparency can be intentionally designed into Human–AI interaction, particularly in large language models (LLMs) and agentic systems. This work bridges the gap between technical capability and human understanding, helping AI systems remain comprehensible and trustworthy as they grow more autonomous.
Through my research, I aim to contribute to a future where AI systems are not just powerful, but also transparent, explainable, and aligned with human values and expectations.