Ambient AI

Ambient Intelligence was an exploratory project at IBM Design asking how enterprise software could respond more naturally to user behavior. I focused on the context layer, exploring how signals from behavior, time, and system state could be interpreted into more thoughtful responses, without adding more noise to the interface.

Team

IBM Carbon for AI team

Timeline

8 Weeks

Role

Interaction Designer

Background

Starting with fiction

A lot of enterprise AI still felt added on top, usually in the form of a sidebar or assistant that waited for users to ask for help. We wanted to look at a different direction.

I started by looking at how systems in 'Her' and 'Blade Runner 2049' use environmental and behavioral signals to support people quietly. These systems felt calm, well-timed, and built into the environment, and they helped me define the 'Context Layer': the point where technology moves from being a tool to being a responsive environment. That became a useful reference for thinking about what "ambient" could actually mean in product design.


Building the context layer

My part of the work focused on signals: what the system might realistically notice, and how those signals could help the system understand what a user needed.

I mapped signal types across a watsonx.data workflow, looking at things like repeated actions, pauses, time-based patterns, and system events. A single signal rarely means much by itself, but once combined with others, it starts to show whether someone might be stuck, exploring, or concentrating.
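The idea of weak signals only becoming meaningful in combination can be sketched as a small scoring function. This is a minimal illustration, not the project's actual logic; the signal names, weights, and thresholds are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical signal scores in [0, 1]; names are illustrative only.
@dataclass
class Signals:
    repeated_actions: float   # e.g. re-running the same query
    long_pauses: float        # idle time between actions
    error_events: float       # failed queries, system errors
    steady_cadence: float     # regular, uninterrupted activity

def infer_state(s: Signals) -> str:
    """Combine weak signals into a coarse guess at user state."""
    # No single signal decides anything; each score blends several.
    stuck = 0.5 * s.repeated_actions + 0.3 * s.error_events + 0.2 * s.long_pauses
    focused = 0.7 * s.steady_cadence + 0.3 * (1 - s.long_pauses)
    if stuck > 0.6 and stuck > focused:
        return "possibly stuck"
    if focused > 0.6:
        return "concentrating"
    return "exploring"
```

Note how a high `repeated_actions` score alone (0.5 weight) never crosses the threshold; it takes corroborating error or pause signals to tip the interpretation toward "possibly stuck."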


Designing for enterprise context

One thing that became clear quickly was that the same signal can mean very different things depending on the user.

A failed query from a junior user may mean they need help or reassurance; the same pattern from a senior admin may point to debugging or optimization. I used that idea to explore how role, permissions, and risk could shape system behavior, so responses felt appropriate to the situation instead of one-size-fits-all.
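One way to picture role and risk shaping the response to an identical signal is a simple lookup with a risk gate. The role names, signal keys, and response copy below are hypothetical, invented for illustration.

```python
# Illustrative sketch: the same signal, interpreted through role and risk.
# All keys and response strings are assumptions, not from the project.
RESPONSES = {
    ("junior", "failed_query"): "Offer guided help and reassurance",
    ("senior_admin", "failed_query"): "Surface query plan and debugging detail",
}

def respond(role: str, signal: str, high_risk: bool = False) -> str:
    # Default to doing nothing visible: ambient systems should err quiet.
    base = RESPONSES.get((role, signal), "Stay quiet; log the signal for context")
    # Risky actions get an extra confirmation step regardless of role.
    return base + " (confirm before acting)" if high_risk else base
```

The default branch matters as much as the mappings: an unrecognized combination produces silence rather than a speculative interruption.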


Early direction

One early concept was a blended signal chat that surfaced help based on what the system knew. It was helpful for making the logic visible, but it still behaved too much like an assistant asking for attention.

That became an important turning point. If the user still has to stop and engage with it directly, it is probably not ambient.


Focus Mode

That led to Focus Mode, which became the strongest direction from my work on the project. Instead of giving users another thing to interact with, the system would notice signs of deep work and respond by reducing noise in the workspace. Non-essential panels could fade back, lower-priority updates could wait, and urgent information could stay visible without taking over the screen. I also explored how this would work across a pre-focus, active focus, and post-focus state.
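The pre-focus, active focus, and post-focus states can be read as a tiny state machine. This is a hedged sketch: the transition triggers are my assumptions for illustration, not documented project behavior.

```python
# The three Focus Mode states as a minimal state machine.
# Event names ("deep_work_detected", etc.) are hypothetical.
TRANSITIONS = {
    ("pre_focus", "deep_work_detected"): "active_focus",
    ("active_focus", "activity_break"): "post_focus",
    ("post_focus", "resume_work"): "pre_focus",
}

def next_state(state: str, event: str) -> str:
    # Unknown events leave the state unchanged, so the workspace
    # never shifts abruptly on an ambiguous signal.
    return TRANSITIONS.get((state, event), state)
```

Keeping unrecognized events inert is the point of the sketch: an ambient system should change posture slowly and predictably.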

What mattered most here was tone. I wanted the system to feel supportive, not pushy, and aware without feeling watchful.


Outcome

The project helped create a shared direction for how ambient behavior might show up in IBM products, and it opened up further discussion across design and engineering. I left the project before the work moved further, but the pattern direction was starting to carry into additional product exploration.

Reflection

This project pushed my thinking beyond screens and features. It made me more interested in how software behaves over time, how it reads context, and how AI can support people without constantly demanding attention.


Other Cases
