[Thoughts]

Designing for AI Agents: A Human-Centered Approach

[Date]

05 Jan 2025

[Location]

Tysons Corner, VA

[Table of Content]

User-Centered AI Agent Design

Human vs. AI Control

Designing Task Flows with TACO

Human-in-the-Loop (HITL) Moments

Trust, Transparency, and Disclosure

Governance and Responsible AI

Principles for Designing Effective AI Agents

User-Centered AI Agent Design

Designing AI Agents to Streamline the User Experience: AI agents have the potential to make workflows faster, reduce cognitive load, and automate repetitive tasks. However, this is only possible if they are designed with the user in mind. A well-designed AI agent should feel predictable, helpful, and easy to interact with. When users do not understand what the system is doing, feel they have lost control, or sense that an agent has overstepped its boundaries, trust erodes and adoption suffers.

To avoid these pitfalls, we need a structured approach to designing AI agents. Swami Chandrasekaran’s TACO Framework for Understanding AI Agents, which breaks down an agent’s Tasks, Agency, Capabilities, and Outcomes, helps simplify the design process. By clearly defining what an agent does, how much autonomy it has, what it can reliably handle, and what success looks like, we create clearer, more intuitive experiences (Chandrasekaran, 2025).

From a user experience perspective, the key challenge is ensuring that users always feel informed and in control, even when the agent is handling tasks for them. This means designing agents that:

  • Know when to assist versus when to act. Users should never feel like AI is making choices for them without their awareness or approval.

  • Provide clear communication. The system should explain what it is doing and why, reducing any feeling of unpredictability.

  • Use progressive autonomy. Instead of overwhelming users with automation, agents should start with simple, predictable tasks and build trust over time.
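
The progressive-autonomy idea above can be sketched in a few lines of code. This is a minimal illustration, not a production design; the class name, the trust counter, and the threshold value are all assumptions made for the example.

```python
# Minimal sketch of progressive autonomy: the agent starts in
# "suggest" mode and only acts on its own after a run of
# user-approved outcomes. The threshold is a hypothetical value.

class ProgressiveAgent:
    def __init__(self, autonomy_threshold=5):
        self.approved_runs = 0
        self.autonomy_threshold = autonomy_threshold

    @property
    def mode(self):
        # Assist by default; act independently only once trust is earned.
        if self.approved_runs >= self.autonomy_threshold:
            return "act"
        return "suggest"

    def record_outcome(self, user_approved):
        # Consistent approvals build trust; a rejection resets the
        # counter, dropping the agent back to suggestion-only mode.
        if user_approved:
            self.approved_runs += 1
        else:
            self.approved_runs = 0

agent = ProgressiveAgent()
for _ in range(5):
    agent.record_outcome(user_approved=True)
print(agent.mode)  # "act"
agent.record_outcome(user_approved=False)
print(agent.mode)  # "suggest"
```

The reset-on-rejection rule is deliberately conservative: one bad outcome sends the agent back to assisting, which matches the principle that trust is easier to lose than to gain.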

We can see these principles in action across different real-world agent-driven experiences:

  • Apple’s iPhone setup assistant provides a step-by-step experience where the system assists without taking over. It makes smart suggestions, but the user remains in control of decisions like Face ID setup or data transfer (Apple Support, 2023).

  • Tesla’s Autopilot automates aspects of driving but keeps the driver in the loop, requiring hands-on supervision and clearly signaling when human intervention is needed. This ensures that even as automation increases, the user understands their role (Tesla, 2024).

  • Smart thermostats, like Nest and Ecobee, start by following user-set preferences and then gradually introduce automated adjustments. Users are always notified when changes happen and can override them, keeping trust intact (Google Nest, 2024; Ecobee, 2024).

  • Amazon Go stores push agency to the highest level by automating the checkout process entirely. However, transparency is key. Users still receive receipts and can review or dispute purchases, ensuring they never feel out of control (Amazon, 2024).

Each of these examples reinforces a core design principle. AI agents should simplify, not complicate. They should feel like an extension of the user’s intent rather than an independent system acting on its own.

Designing AI agents with this approach ensures tools and assets remain intuitive, transparent, and effective. Whether it is an AI system that helps draft reports, automates scheduling, or analyzes data, clarity, gradual automation, and user control must be prioritized. This turns complexity into a smooth, streamlined experience.

By applying this structured approach to AI agent design, organizations can create systems that are not only powerful and efficient but also trusted and easy to use. These systems help users stay focused on what matters most while AI handles the rest.

Human vs. AI Control

Defining Human vs. Agent Control: A key consideration when designing AI agents is determining the balance between human oversight and agent autonomy. In internal use cases, whether automating document generation, scheduling meetings, or summarizing reports, users must feel empowered to retain control while benefiting from the efficiency of automated agents.

The approach should begin with clear boundaries between when the human is in control and when the agent acts independently. For example, when an agent drafts a proposal or generates client-facing documentation, a human user must review and approve the content before it is sent externally. These are human-in-the-loop moments where accountability is essential.

For low-risk internal tasks, such as scheduling meetings based on calendar availability, agents can operate with higher agency. Once a user has provided initial permissions and preferences, the system can take charge of finding time slots, sending invites, and rescheduling as needed, reducing user burden.

A best practice is to start with low agency and build user trust through transparency and consistent performance. As users grow more confident, the system can take on more proactive roles, but always with manual overrides and clear notifications in place.
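One way to make the human-vs-agent boundary explicit is a small policy table that maps each task type to a control mode, defaulting to human approval for anything unrecognized. The task names and mode labels below are illustrative assumptions, not a prescribed taxonomy.

```python
# Sketch of an explicit human-vs-agent control boundary. Each task
# type maps to a control mode; names and categories are illustrative.

CONTROL_POLICY = {
    "send_client_document": "human_approves",  # external impact: HITL required
    "draft_proposal":       "human_approves",  # human reviews before sending
    "schedule_meeting":     "agent_acts",      # low-risk, user pre-authorized
    "reschedule_meeting":   "agent_acts",
}

def control_mode(task_type):
    # Fail safe: anything not explicitly marked low-risk
    # requires human approval.
    return CONTROL_POLICY.get(task_type, "human_approves")

print(control_mode("schedule_meeting"))      # agent_acts
print(control_mode("delete_client_record"))  # human_approves (safe default)
```

Keeping the policy in data rather than scattered through code also makes it auditable, which matters for the governance concerns discussed later.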

Designing Task Flows with TACO

Designing Task Flows for AI Agents Using TACO: When designing task flows for AI agents, it is important to map out where the human leads, where the agent assists, and where the agent can act independently. This approach ensures consistency across user experiences and aligns with responsible AI use standards.

For example, an agent that assists with automating deliverables might follow this flow:

  1. Human initiates the process by uploading materials or providing prompts.

  2. The agent drafts the deliverable (e.g., a slide deck, report summary).

  3. The human reviews and approves before finalizing.

  4. Over time, the agent might suggest optimizations or reusable templates, increasing its agency as trust grows.
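
The deliverable flow above can be modeled as a simple state machine, which makes the review gate impossible to skip. The states and event names here are assumptions chosen to mirror the numbered steps, not part of the TACO framework itself.

```python
# Sketch of the deliverable flow as a state machine. A draft can
# only become final through an explicit human approval event.

TRANSITIONS = {
    ("initiated", "agent_drafts"):   "drafted",
    ("drafted",   "human_approves"): "final",
    ("drafted",   "human_rejects"):  "initiated",  # back for another draft
}

def advance(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"{event!r} is not allowed in state {state!r}")

state = "initiated"                       # human uploads materials or prompts
state = advance(state, "agent_drafts")    # agent produces the deliverable
state = advance(state, "human_approves")  # review gate before finalizing
print(state)  # final
```

Because there is no transition from "initiated" straight to "final", the human review step is enforced by construction rather than by convention.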

Alternatively, for internal productivity tasks such as calendar management, an agent could:

  1. Learn from user behavior to propose meetings.

  2. Proactively schedule recurring events or send nudges to confirm availability.

  3. Provide a dashboard for manual intervention, ensuring transparency and allowing users to adjust actions as needed.

A core design principle is task clarity. Tasks must be well-defined, capabilities must be transparent, and outcomes should prioritize user efficiency without sacrificing control. The TACO framework can be applied to each agent project to define these flows clearly.
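
A lightweight way to apply TACO per project is to capture each agent's definition as structured data. The four fields below mirror Tasks, Agency, Capabilities, and Outcomes; the concrete structure and the example values are assumptions made for illustration.

```python
# Sketch of a TACO-style agent definition: a record that forces each
# agent project to state its tasks, agency level, capabilities, and
# outcomes up front. Field values are illustrative.

from dataclasses import dataclass

@dataclass
class AgentSpec:
    tasks: list          # what the agent does
    agency: str          # "suggest", "act_with_approval", or "act"
    capabilities: list   # what it can reliably handle
    outcomes: list       # what success looks like

scheduler = AgentSpec(
    tasks=["propose meeting times", "send invites"],
    agency="act_with_approval",
    capabilities=["read calendars", "draft invite text"],
    outcomes=["meetings booked without double-booking"],
)
print(scheduler.agency)  # act_with_approval
```

Writing the spec down before building the agent also gives reviewers a single artifact to check against the task-clarity principle.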

Human-in-the-Loop (HITL) Moments

Human-in-the-Loop (HITL) Moments in AI Agent Design: AI agents should be designed with Human-in-the-Loop moments at critical points to ensure accuracy, mitigate risks, and uphold trust. These moments are especially important when AI agents are involved in:

  • Drafting client-facing content, where human review and approval are required.

  • Processing sensitive data, where explicit confirmation must be obtained before actions are taken.

  • Suggesting compliance decisions, where human experts need to validate the AI’s recommendations.

For example, if an AI agent proposes a compliance risk remediation plan, it must flag the recommendation for human review before any action is taken. Similarly, if an AI agent identifies potential inconsistencies in financial data, it should prompt the user to review flagged areas before conclusions are drawn.

Human-in-the-loop design should be grounded in clear prompts, proactive notifications, and an easy override mechanism to empower users. At no point should AI agents act without the user’s explicit awareness or approval, especially when decisions impact clients, finances, or regulatory compliance.
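
A minimal HITL gate can route sensitive actions into a review queue instead of executing them. The category names below correspond to the three bullet points above but are otherwise illustrative assumptions.

```python
# Sketch of a human-in-the-loop gate: actions in sensitive categories
# are queued for explicit review rather than executed. The category
# labels are illustrative.

SENSITIVE = {"client_facing_content", "sensitive_data", "compliance_decision"}

def submit_action(action, category, review_queue):
    """Execute low-risk actions; flag sensitive ones for human review."""
    if category in SENSITIVE:
        review_queue.append(action)  # held until a human approves it
        return "flagged_for_review"
    return "executed"

queue = []
print(submit_action("summarize internal notes", "internal_note", queue))
print(submit_action("send remediation plan", "compliance_decision", queue))
print(queue)
```

The gate sits in front of execution, so an agent cannot act on flagged items even by accident; the queue itself doubles as the proactive-notification surface for reviewers.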

Trust, Transparency, and Disclosure

Building Trust, Transparency, and Progressive Disclosure in AI Agents: Building trust is the foundation of effective AI agent design. Without trust, users will hesitate to adopt agents or allow them to take meaningful actions. The approach should prioritize transparency, predictability, and progressive disclosure.

Transparency means AI agents should explain their actions and decisions, whether drafting reports, suggesting task prioritization, or analyzing data. Users should always know what the agent did, why, and what data was used in the process. For example, an AI agent summarizing meeting notes should include source references and give users the option to review or edit the summary.

Progressive disclosure ensures that users are introduced to an agent’s capabilities gradually. Early experiences should be simple, such as “generate a draft” or “propose three meeting times.” As users grow more comfortable, additional capabilities can be offered, such as “optimize document layout” or “predict feedback based on sentiment analysis.”
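
Progressive disclosure can be sketched as capability tiers that unlock with experience. The tier thresholds below are hypothetical; the capability strings echo the examples in the paragraph above.

```python
# Sketch of progressive disclosure: capabilities unlock in tiers as
# the user completes more sessions. Thresholds are hypothetical.

CAPABILITY_TIERS = [
    (0,  ["generate a draft", "propose three meeting times"]),
    (10, ["optimize document layout"]),
    (25, ["predict feedback based on sentiment analysis"]),
]

def visible_capabilities(completed_sessions):
    # Show every tier whose experience threshold has been met.
    caps = []
    for threshold, tier in CAPABILITY_TIERS:
        if completed_sessions >= threshold:
            caps.extend(tier)
    return caps

print(visible_capabilities(0))   # only the simple starter actions
print(visible_capabilities(30))  # all capabilities unlocked
```

Gating on actual usage, rather than a one-time onboarding choice, keeps the interface simple for new users while letting experienced ones grow into the agent's full range.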

Trust is built through consistent behavior, clear user feedback loops, and manual override options at every stage. Users must always feel they are in control, and agents must behave predictably and transparently to maintain high standards for integrity and responsibility.

Governance and Responsible AI

AI Design Governance and Compliance Considerations: As AI agents scale, they must comply with Responsible AI Principles, ensuring fairness, accountability, transparency, and adherence to regulatory and ethical standards. Agent behavior should align with a governance framework that incorporates audit trails, explainability, and bias mitigation strategies into every design.
