The next stage of user experience design may not involve only human users. As artificial intelligence becomes more capable of acting on our behalf, designers are beginning to rethink what interaction means. Products like Microsoft Copilot, Rewind AI, and Google’s NotebookLM are early examples of this shift: interfaces where human users delegate tasks to digital agents that can interpret context, take initiative, and execute goals.
For Osman Gunes Cizmeci, a New York–based UX and UI designer, this change signals a fundamental redesign of the relationship between people and technology. “We’re designing for partnerships now,” he says. “The real UX question is when to hand off, not when to click.”
From User Control to Shared Control
Traditional UX principles are built around human agency. Designers focus on clarity, visibility, and feedback loops that make users feel in charge. Agentic systems challenge that balance. When an AI can take independent action—scheduling meetings, generating content, adjusting workflows—the boundary between guidance and autonomy becomes blurred.
“Users aren’t just issuing commands anymore,” Osman Gunes Cizmeci explains. “They’re setting intentions that agents interpret. That’s a big difference. It’s not a transaction, it’s collaboration.”
He describes the shift as designing for “composite users,” where both the human and the agent form a single functional unit. “When I map a user journey now, I sometimes include the agent as its own participant,” he says. “You have to think about what the AI sees, what it decides, and how it communicates those decisions.”
The Challenge of Trust
Agentic UX introduces a new layer of trust design. If an AI acts autonomously, users must understand when and why it does so. Hidden automation breeds frustration, even anxiety, when people sense technology operating without their knowledge.
“The hardest part isn’t making AI act, it’s making it accountable,” Cizmeci says. “If it changes something, the user should see that and understand the reasoning. Otherwise it feels like loss of control.”
Cizmeci emphasizes transparency as a design principle. When an AI proposes or completes an action, the interface should provide context, options to review or undo, and clear feedback about the outcome. “Good agentic design makes users feel supported, not replaced,” he adds.
He also points to accessibility as an emerging issue. “If your assistant acts without checking with you, what happens when you rely on screen readers or adaptive devices? We need to ensure autonomy doesn’t become exclusion.”
Designing for Delegation
In practice, designing for agentic systems requires rethinking core UI patterns. Instead of step-by-step flows, designers must build flexible frameworks that accommodate AI-driven suggestions and interruptions.
“It’s not about guiding users through a sequence anymore,” Cizmeci says. “It’s about giving them checkpoints where they can delegate safely.”
For example, an agentic task manager might let a user outline goals while the AI handles scheduling and reminders. A creative tool might generate content drafts automatically, but the human sets parameters and reviews results.
“I design these systems around negotiation,” Cizmeci explains. “The human defines the boundary, and the agent stays within it until invited to go further.”
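This negotiation pattern, where the agent acts freely inside a user-defined boundary and pauses at a checkpoint when it wants to go further, can be sketched as follows. The names (`Scope`, `Boundary`, `requestApproval`) are assumptions made for illustration:

```typescript
// Scopes of delegated authority; illustrative, not from any real product.
type Scope = "schedule" | "draft" | "send" | "delete";

interface Boundary {
  allowed: Set<Scope>;                     // what the user has delegated
  requestApproval: (s: Scope) => boolean;  // checkpoint: ask the human
}

// Inside the boundary the agent acts on its own; outside it, the agent
// stops and asks rather than acting silently.
function attempt(scope: Scope, boundary: Boundary): "done" | "declined" {
  if (boundary.allowed.has(scope)) return "done";
  return boundary.requestApproval(scope) ? "done" : "declined";
}
```

For the task-manager example above, a user might delegate `"schedule"` outright while leaving `"send"` behind a checkpoint, so the agent handles calendar changes autonomously but must ask before sending anything on the user's behalf.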
New Skills for Designers
Agentic UX also requires a new literacy in behavior modeling and systems thinking. Designers must consider not just interfaces, but the decision logic behind them.
“You have to understand how agents interpret context,” Cizmeci says. “It’s not just layout and hierarchy anymore—it’s intent modeling, trust calibration, and feedback ethics.”
Collaboration with engineers and data scientists is becoming more essential. “We can’t design agentic systems in isolation,” he says. “Designers need to be part of defining the rules that govern how agents behave. If we don’t, someone else will.”
The Path Forward
Cizmeci sees this as a pivotal moment for design, a point he has also made on Forem. “We’re moving from user-centered design to relationship-centered design,” he says. “Every product that includes AI now has two users: the person and the system that represents them.”
He believes the profession will eventually evolve toward designing “triads” of interaction: human, agent, and environment. “That’s where it’s heading,” he says. “Our interfaces will mediate conversations between us and the technologies that think with us.”
Agentic UX, in his view, is not about automation but augmentation. “The goal isn’t to make systems that act for people, but systems that act with them,” he says. “That partnership is what the next generation of design has to get right.”