
Dalle-2 prompt: a human conversing and collaborating with an intelligent robot in a bright, open, and modern environment in the style of a cubist illustration using bright primary colors

Human-AI collaboration for creative ideation

Experience Strategy

UX Design

TLDR;

The Innovation Station was one of the first experiments designed and built by the Disrupt team to facilitate a strategy for reimagining work at Avanade. It is a social, asynchronous platform for employee ideation. It functions as a single source of truth for the submission and tracking of AI ideas, and demonstrates an alternative to conversational UI for enabling Human-AI collaboration. The platform aims to support a culture of continuous innovation by providing transparency, access to resources, creative coaching, and peer recognition for innovative thinking. It automates the burden of manually tracking and updating ideas, and provides visibility into the work of the Disrupt team.

Role and Responsibilities

As part of the small and innovative Disrupt team, I wear a few different hats, adapting to the need at hand to make progress and learn. For the ideation and experimentation processes I function as a business analyst, strategist, and designer. I am responsible for supporting the collection, refinement, definition, and design of ideas for experimentation through workshop facilitation, interviewing, requirement gathering, and the creation of process flows, journey maps, service blueprints, and low- and high-fidelity designs.

BACKGROUND

Disrupt Avanade was established to explore how Generative AI could be implemented to reimagine work at Avanade. The initiative focused on establishing a strategy for the successful implementation of AI through continuous ideation, experimentation, and learning. Innovation Station was one of the first ideas to become an experiment, as we quickly found we needed support in the collection and organization of ideas, as well as in providing visibility into the work being done by the Disrupt team. The platform will be rolled out to the organization in the new year as a hub for ideating, exploring, experimenting, and accessing AI resources. For more information on the Disrupt Avanade strategy and framework, view the full case study: Disrupt Avanade: AI and the Future of the Knowledge Worker.

Opportunity

In defining our strategy to ensure the successful implementation of AI at Avanade, we needed a continuous pipeline of AI ideas to experiment with, learn from, and iterate on. To do so, we created a strategy founded on the principle of AI Everything, Everywhere, All At Once through empowered curiosity.

You cannot reimagine the way you do work without having a workforce that is constantly experimenting with technology and thinking of better ways to do things. A significant cultural shift would be required to foster an environment where all employees are driven to continuously ideate and experiment on how to improve everything they do. To build a continuous pipeline of ideas we wanted to leverage thoughtful incentive structures that prioritized recognition, the use of peer networks, open information sharing, and radical visibility. We saw this as an opportunity to test the rapid experimentation process we defined in our strategy to build out a hub functioning as the single source of truth for ideating, exploring, experimenting and accessing AI resources.

SOLUTION

The creation of a social, asynchronous platform for ideation that would function as a single source of truth for the submission and tracking of ideas for AI at Avanade, and support a culture of continuous innovation by providing transparency, access to resources, creative coaching, and peer recognition for innovation. This was also an opportunity to test the rapid experimentation process defined in our strategy.

Outcomes

While the tool itself will be rolled out in the new year, paired with an ideation campaign, one budding outcome of this early design experiment, coupled with several other in-flight experiments, has been the beginnings of a map of interaction patterns for Human-AI collaboration based on the experiment definition framework. A few key dimensions for interaction-pattern decisions, including pushing vs. pulling agent-generated content and allowing structured vs. unstructured input for prompting, can be informed by user motivation, environment, mental-model context, and data availability.
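The dimensions described above can be captured as a small decision record. This is a minimal sketch of one possible encoding; the field names and example values are illustrative, not the team's actual framework artifact:

```python
from dataclasses import dataclass

@dataclass
class InteractionPattern:
    # Does the agent push generated content to the user, or does the user pull it?
    delivery: str           # "push" or "pull"
    # Are prompts free-form text, or structured / pre-prompted choices?
    input_style: str        # "structured" or "unstructured"
    # Context that informs the two dimensions above.
    user_motivation: str
    environment: str
    data_availability: str

# Example: a creativity-coaching feature that pushes pre-prompted
# suggestions rather than waiting for the user to craft a prompt.
companion = InteractionPattern(
    delivery="push",
    input_style="structured",
    user_motivation="submit a well-formed idea quickly",
    environment="familiar web form",
    data_availability="user-provided idea text only",
)
```

Recording each experiment's pattern this way makes it easier to compare interaction choices across experiments over time.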

LEARNINGS
  1. We are going to see massive transformations as a result of AI, and we’re focusing a large portion of our efforts at Disrupt on large innovative bets. But I believe it’s important for organizations to also prioritize experimentation with smaller touchpoint automations incorporated into more familiar interfaces and interactions, as these learnings will be equally important on our journey toward a human-AI collaborative workplace.
THE APPROACH
Zoomed-in section of the blueprint from the workforce journey stage

The Disrupt experimentation process is rough, iterative, and highly collaborative: a small team of a strategist, designer, data scientist, and developer cycles through iterations to define, develop, and test.

When defining and designing an experiment we outline:

  1. Key objectives of the idea or solution
  2. Relevant process flow(s) and human touch points
  3. Applicable AI capabilities
  4. Necessary data sources and availability
  5. Functionality requirements: What does this tool or solution need to be able to do?
  6. Interaction pattern hypotheses for Human-AI collaboration (through rapid wire-framing): How do users expect to interact with it? What is their tolerance around the thresholds of constraints?
  7. Governance Review: Responsible use of AI and threshold for risk

Objectives: What are we looking to achieve?

For Innovation Station, a few key outcomes were identified:

  1. Increase general creativity and/or innovativeness of the ideas submitted by employees
  2. Facilitate the tracking and management of ideas, reducing duplication and saving time
  3. Increase organizational awareness of Disrupt activities
  4. Increase awareness of resources and training for AI

Process flows and touch points

How we do it and why we do it

For every experiment, it is vital we have a clear grasp of the applicable processes and systems as they exist today, so we can identify opportunities for automation and the uniquely human abilities that drive success. This step also gives us early insight into future impacts on people and how their day-to-day might change, allowing us to better prepare for necessary interventions.

For Innovation Station, I mapped the end-to-end idea lifecycle and zeroed in on the key human touchpoints where efficiencies or improvements could be achieved through the use of AI. Through this we found that the manual collection, tracking, and organization of similar or duplicate ideas was a value-draining activity. We also identified creative ideation as a uniquely human ability that relies heavily on the implicit knowledge of experienced workers.

Applicable AI capabilities

What is AI good at? Where can AI automate or help humans do what they do best?

LLMs are uniquely equipped to organize, track, associate, and search the linguistic content of idea descriptions, saving our Disrupt team substantial time. LLMs could also augment creative thinking through smaller additive automations, generating associations and probing questions based on the employee’s provided content to help them arrive at even more innovative ideas.

Data Sources and Availability

What is known about the data and how will that impact outputs and interactions?

For idea management, there was not a significant reliance on data access or cleanliness, as the existing idea database was relatively new and well structured. For creativity coaching, no special knowledge was required to function as a thought-partner, as the end user maintains their role as the primary owner of the content, enabling us to begin development quickly while other longer-term organizational data foundation efforts were in flight.

Functionality Requirements and Interaction Pattern Hypotheses

What does this tool need to be able to do? How do users expect to interact with it? What is their tolerance around the thresholds of constraints?

For the POC we needed a way for users to submit ideas, browse existing ideas, track the progress of ideas through development, and access resources for learning and practical experimentation. For the exploration and submission of ideas there were two key interaction patterns I wanted to experiment with:

  1. Semantic search to organize the idea library, reduce duplicate submissions, and promote discoverability and cross-pollination.
  2. Creativity companion to coach the submitter on creative thinking to develop their idea

Semantic search and embeddings

For many of the ideas in our library, a driving pain point is the discoverability of existing content. We have often found ourselves manually reviewing and consolidating ideas that come from separate sources but are highly related concepts worded slightly differently. I experimented with including a subtle search loop at the beginning of the idea submission flow to nudge users to first consider whether there are existing similar ideas they should expand upon, rather than creating a new submission.
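The duplicate-check loop can be sketched roughly as below. This is a toy illustration, not the production implementation: a real deployment would call an embedding model to vectorize idea text, while here a simple bag-of-words vector stands in so the similarity logic is visible.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[token] * b[token] for token in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def similar_ideas(new_idea: str, library: list[str], threshold: float = 0.4) -> list[str]:
    """Return existing ideas similar enough to surface before a new submission."""
    new_vec = embed(new_idea)
    return [idea for idea in library if cosine(new_vec, embed(idea)) >= threshold]

library = [
    "Automate weekly status report drafting with an LLM",
    "Chatbot for onboarding new consultants",
]
matches = similar_ideas("Use an LLM to draft the weekly status report", library)
```

When `matches` is non-empty, the submission flow can pause and invite the user to expand an existing idea instead of filing a near-duplicate.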

Creativity companion

I’ve spent a lot of time thinking about the non-deterministic nature of generative AI and the prevalent use of chat interfaces to enable more flexible and intuitive use. Many have come to realize this open-endedness can lead to frustration, as all of the burden is put on the user to figure out the prompting they need to get the output they want. For something like drafting ideas, we already knew fairly specifically what the user would be trying to do, so I wanted to experiment with a pattern that does not require the user to pull out the information they need, but instead lets them work collaboratively with a writing partner that pushes nudges, recommendations, and a helping hand to get close to what is being asked of them. While users retain opt-in control and definitive ownership of the content produced, these elective coaching resources are “pre-prompted” creative techniques tailored to the user’s content, helping them practice creative thinking and coaching them into providing more context on their idea so we could more quickly assess value and prioritize.
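A sketch of what “pre-prompted” coaching might look like under the hood. The technique names and template wording here are illustrative assumptions, not the production prompts; the key point is that the prompt engineering is done ahead of time and wraps the user’s own text, so the user opts in to a technique rather than authoring a prompt:

```python
# Elective coaching techniques, each a pre-written prompt template that
# wraps the user's own idea text. The user picks a technique; their draft
# remains theirs and is never modified directly.
COACHING_TECHNIQUES = {
    "analogy": (
        "Here is an employee's idea: {idea}\n"
        "Suggest two analogies from unrelated industries that could push this idea further."
    ),
    "probing": (
        "Here is an employee's idea: {idea}\n"
        "Ask three probing questions about audience, data needed, and expected impact."
    ),
}

def build_coaching_prompt(technique: str, idea: str) -> str:
    """Fill a pre-prompted template with the user's content, ready to send to the LLM."""
    template = COACHING_TECHNIQUES[technique]
    return template.format(idea=idea.strip())

prompt = build_coaching_prompt("probing", "An internal tool that summarizes meeting notes")
```

Because the templates encode the creative technique, the user only chooses *which* kind of help they want, shifting the prompting burden from the user to the system.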