Carbon for AI
Consolidated AI interaction patterns for IBM's Carbon design system from 15+ fragmented team implementations into unified specifications. Created component systems for reasoning transparency, contextual automation, and workspace collaboration. Patterns adopted across IBM's AI portfolio, enabling enterprise users to make informed trust decisions about AI-generated outputs for the first time.
Team
IBM Carbon for AI
Timeline
2024-2025
Role
Pattern Owner (AI Strategy)


The Challenge
During the rapid rollout of watsonx, 15 independent teams were building AI features simultaneously. While each team was solving problems for their specific users, foundational interactions were beginning to diverge. For example, one team might use a sidebar for AI verification while another used a modal.
For an enterprise user, this inconsistency makes it difficult to build a reliable mental model of how the AI actually works. My goal was to step in and provide a single source of truth that would save these teams time and create a more predictable experience for our users across the entire portfolio.

The Approach
I created unified patterns that solve these challenges consistently across products. Teams could adopt these reusable components and customize them for their specific contexts. By identifying the most common friction points across teams, I prioritized three strategic pillars that directly impact AI trust and adoption:
Transparency: How the system explains its logic.
Automation: How the system balances efficiency with human agency.
Workflow Integration: How the system fits into the user's existing tasks.

Reasoning Transparency
In early research, we found that enterprise users weren't refusing AI because it was wrong — they were refusing it because they had no way to verify whether it was right. The transparency patterns shifted that dynamic.
I defined reasoning transparency patterns adapted for enterprise needs: showing the AI's step-by-step process, data sources, and execution status with enough detail for technical verification. Users can expand traces not just to understand the AI's thinking, but to audit specific steps for compliance documentation. This lets users make informed trust decisions, verifying when needed while keeping the AI's efficiency benefits.
Teams using these patterns reported users could identify exactly where AI failed instead of rejecting entire outputs, reducing unnecessary rework.
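The expandable trace described above can be modeled roughly as follows. This is an illustrative sketch, not Carbon's actual component API; every name here (`ReasoningStep`, `ReasoningTrace`, and so on) is hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class StepStatus(Enum):
    RUNNING = "running"
    SUCCEEDED = "succeeded"
    FAILED = "failed"

@dataclass
class ReasoningStep:
    summary: str                                       # one line shown when collapsed
    detail: str                                        # full reasoning shown when expanded
    sources: list = field(default_factory=list)        # data sources consulted
    status: StepStatus = StepStatus.SUCCEEDED

@dataclass
class ReasoningTrace:
    steps: list = field(default_factory=list)

    def failed_steps(self):
        """Return only the failed steps, so users can pinpoint exactly
        where the AI went wrong instead of rejecting the whole output."""
        return [s for s in self.steps if s.status is StepStatus.FAILED]

    def audit_log(self):
        """Flatten the trace into (summary, sources, status) rows
        suitable for compliance documentation."""
        return [(s.summary, s.sources, s.status.value) for s in self.steps]
```

The key design choice is that each step carries both a collapsed summary and expandable detail, so verification is available on demand without being forced on every user.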

Contextual Automation
Automation in the enterprise is a high-risk activity. Users told us they wanted help with repetitive tasks, but they feared the system making decisions without their knowledge: things like moving budget or changing permissions.
The central design question wasn't 'How much can the AI automate?' It was 'Where does human judgment still need to be in the loop, and how do we make that handoff feel natural rather than alarming?'
Working with AI architects, I defined automation logic: low-risk, high-confidence tasks automate immediately; high-risk actions loop users in with reasoning, letting them decide whether to automate later. The system learns from patterns over time. This balances efficiency with control. Users handle important decisions while AI manages routine work.
Product teams found the risk framework helped users feel in control: they automated low-risk tasks while maintaining oversight of critical decisions.
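The routing logic above can be sketched as a simple decision function. This is a minimal illustration of the idea, not the production logic; the threshold value and all names are assumptions:

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    HIGH = "high"

class Decision(Enum):
    AUTOMATE = "automate"   # execute immediately
    CONFIRM = "confirm"     # surface the AI's reasoning and ask the user

CONFIDENCE_THRESHOLD = 0.9  # illustrative cutoff, not a real Carbon constant

def route_action(risk: Risk, confidence: float, user_pre_approved: bool = False) -> Decision:
    """Route an AI-proposed action: low-risk, high-confidence tasks run
    immediately; high-risk actions loop the user in with reasoning,
    unless the user has previously chosen to automate that kind of action."""
    if risk is Risk.LOW and confidence >= CONFIDENCE_THRESHOLD:
        return Decision.AUTOMATE
    if user_pre_approved:
        return Decision.AUTOMATE  # the user opted in to automating this later
    return Decision.CONFIRM
```

The `user_pre_approved` flag captures the "decide whether to automate later" handoff: the system starts conservative and earns autonomy per action type as users grant it.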

Multimodal Interaction Patterns
Most AI tools rely on a general chat sidebar. But IBM users work in highly specialized environments like code editors or data dashboards. Forcing them to move between a sidebar and their primary workspace was causing significant context-switching fatigue.
I designed workspace collaboration patterns where AI integrates directly into editing contexts. Users work on their content, AI suggestions appear inline where they're working, and they can accept or modify those suggestions without leaving their flow. AI becomes part of the workspace rather than a separate tool.
Teams reported inline suggestions reduced context-switching, keeping users focused on their content rather than bouncing between tools.
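The accept-or-modify flow above can be sketched as a suggestion anchored to a span of the user's document. This is an illustrative model only; the class and function names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InlineSuggestion:
    """An AI suggestion anchored to a span of the user's document,
    resolved in place rather than in a separate chat panel."""
    start: int       # character offset where the suggestion begins
    end: int         # character offset where it ends
    original: str    # the user's current text in that span
    proposed: str    # the AI's proposed replacement

def apply_suggestion(document: str, s: InlineSuggestion,
                     edited: Optional[str] = None) -> str:
    """Accept the suggestion as-is, or accept a user-modified version
    (`edited`), splicing it into the document at the anchored span."""
    replacement = edited if edited is not None else s.proposed
    return document[:s.start] + replacement + document[s.end:]
```

Anchoring the suggestion to a span is what keeps the interaction in the user's workspace: the edit happens where the content lives, not in a detached conversation.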


Process
I managed the lifecycle of these patterns through a collaborative approach:
Identifying common needs
Facilitated office hours where product teams shared AI collaboration challenges. When multiple teams faced similar problems, I consolidated requirements.
Creating unified patterns
Researched how consumer AI tools (ChatGPT, Claude) handled collaboration, synthesized best practices, and created patterns that worked across IBM's product contexts.
Adoption through value
Teams chose patterns because consolidated solutions worked better than building independently. Patterns released in products across the portfolio.
Impact
15+ product teams adopted unified patterns across IBM's AI portfolio
Reduced development time: Worked with Carbon developers to build a reusable pattern structure, so teams implemented patterns instead of building from scratch
Prevented duplicate work: Office hours identified teams solving similar problems, connected them to collaborate and share code
Released in products: Teams shipped using these patterns (patterns widely used though not yet published on Carbon website)
Team recognition: We won Core77, Red Dot, and iF Design awards