a tutor teaching an adult a digital lesson in a bright, open, and modern environment in the style of a cubist illustration using bright primary colors
This is an active experiment and working case study. Learnie is one of many in-flight experiments in the Disrupt initiative, where we are reimagining the way work is done at Avanade through continuous exploration, ideation, and experimentation with AI. Learnie pushes the bounds of what is possible for workforce upskilling when generative AI is leveraged to provide on-demand, personalized, responsive, and intentional learning content.
As part of the small and innovative Disrupt team, I wear a few different hats, fitting the need at hand to make progress and learn. For the ideation and experimentation processes I function as an experience strategist and designer. I am responsible for supporting my Strategy lead in the collection, refinement, definition, and design of ideas for experimentation through workshop facilitation, interviewing, requirement gathering, and the creation of process flows, journey maps, service blueprints, and low- and high-fidelity designs. I also collaborate closely with engineers to evaluate development requirements and feasibility, specifically around data, LLM prompting requirements, and human-AI collaboration interaction patterns.
Much of the discourse surrounding the rapid developments in AI has centered on whether our implementation of the technology will result in job replacement or job displacement, acting as a catalyst for unprecedented workplace transformation. The follow-on conversation is often how we might begin to prepare our workforce for such significant changes. We’ve seen rapid development in technology over the past 18 months, and with it a rapid shift in skill-set expectations for our workforces, and it’s only accelerating. According to the 2023 LinkedIn Workplace Learning Report, skill sets for jobs have changed by 25% since 2015, and this rate is expected to double by 2027. With this rapid rate of change it is becoming clear to most organizations that they will no longer be able to hire away the talent they need, because that talent doesn’t yet exist. Existing talent is becoming the only sustainable talent pool, and many organizations are taking a second look at how to invest in the continuous learning and growth of their workforces.
As a technology consulting firm, systems integrator, and top Microsoft partner, we face substantial pressure to keep our workforce on the bleeding edge and up to date on the newest technologies, as we are often disrupted ourselves and tapped by clients to help them navigate their own disruptions. As a large consultancy, we also struggle to drive consistency in delivery practices across the organization, from practices and industry verticals to geographic regions. These pressures have created the persistent challenge of keeping up with our workforce’s training and upskilling needs and providing the best curated content for learning.
How might we generate consistent and accurate micro-learnings that are personalized to the learner’s needs and responsive to the learner’s comprehension and evolving objectives, providing quick, continuous, and sustainable upskilling for our workforce?
This is an active experiment with in-flux results! Instead, I will share the hypotheses and success criteria we have set for each discrete LLM experiment to elaborate on how we’re creating guardrails and measuring value throughout the process, staying agile before scaling to end users.
For this to be deemed a successful and valuable experiment, we need to prove:
For each of these criteria, we have a Learning and Development Partner reviewing and assessing the output to ascribe a reliability rating. In parallel with testing, we will experiment with other validation interaction patterns to set guardrails for learners to validate content in the event this solution is deemed high value and achieves scale.
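As a rough illustration of how a criterion can be made testable, the sketch below pairs a falsifiable hypothesis with the reliability ratings an L&D partner assigns and a pass threshold. The structure, the 1-5 scale, and the threshold are illustrative assumptions, not our actual criteria.

```python
# A minimal sketch of encoding a success criterion as a repeatable check.
# The Hypothesis fields, the 1-5 rating scale, and the 4.0 threshold are
# illustrative assumptions, not the experiment's actual criteria.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str          # the falsifiable claim about the LLM output
    metric: str             # what the L&D partner scores
    pass_threshold: float   # minimum average rating to accept

def meets_threshold(hypothesis: Hypothesis, ratings: list[float]) -> bool:
    """Average the L&D partner's reliability ratings against the threshold."""
    return bool(ratings) and sum(ratings) / len(ratings) >= hypothesis.pass_threshold

objective_accuracy = Hypothesis(
    statement="Generated objectives accurately reflect the learner's stated goal",
    metric="reliability rating (1-5) assigned by an L&D partner",
    pass_threshold=4.0,
)
print(meets_threshold(objective_accuracy, [4.5, 4.0, 3.5]))  # True (average = 4.0)
```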
Throughout this experimentation process the Disrupt Strategy and Innovation Lead and I have partnered closely with a few Learning and Development representatives. One of these incredible individuals shared an anecdotal experience of how she’s spent quite some time testing how to use Microsoft Copilot to generate learning outlines and content for a project she’s been working on. What she’s found is that with some iterative refinement she can produce an outline, but there’s extensive back and forth in the process to direct the tool to get there and to ensure accurate, real content is provided. This spurred a significant realization for us as we began to discuss the importance of having a holistic understanding of the experience during the ideation assessment process when considering solutions with generative AI.
One of the most frequent questions asked when assessing business value and the investment in GenAI in the workplace is: can’t Copilot do this? The example above is the perfect reason why this isn’t exactly the right question. One of the most incredible elements of LLMs and chat Copilots is their flexibility and non-deterministic UX, but available data and context are king when it comes to producing the right outputs. Following a holistic, experience-led approach helps us identify the nuances of tasks, workflows, data inputs, data transformations, and data generation, enabling us to identify the right interaction patterns and touchpoints to optimize for the jobs to be done when humans collaborate with AI, transforming the workplace, not replacing the workforce.
This experiment is part of the Disrupt initiative (Disrupt Avanade: AI and the future of the knowledge worker) at Avanade, where we continuously collect, assess, define, and refine use cases for AI to transform the way we do work. The Disrupt experimentation process is rough, iterative, and highly collaborative among a small team of a strategist, a designer, a data scientist, and a developer as we cycle through iterations to define, develop, and test.
Learning was considered ripe for AI innovation because the creation and delivery of trainings rely on the consumption and generation of content, and the majority of learning content is not highly specialized and is generally available, requiring no fine-tuning for a proof of concept. To begin our discovery process we explored in depth the experiences of both the learner and the L&D course designer today, focusing on the origination of the job to be done and the nuanced decision-making process of each persona as they worked toward their goals.
Using the experience maps and storyboards, I then focused on identifying the main objectives, motivations, and pain points at each stage of the journey. From there we went into rapid ideation, where we reflected on AI capabilities and proposed solutions that could transform the learning experience. It was also important at this stage to identify all of the input data necessary to make a decision that would result in an accurate output, enabling the next stage in the process. Framing a learning objective, for example, requires a significant amount of contextual information to understand the scope of learning and the current comprehension level of the user. Creating well-written hypotheses for testing the output of the LLM was important to keep us honest in such a high-stakes use case, where a large volume of generated content could be ineffective as learning material or prone to hallucinations without the proper guardrails in place.
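As one hedged example of such a guardrail, the sketch below flags any source cited in a generated micro-lesson that cannot be traced back to an approved corpus; the lesson structure, field names, and source titles here are hypothetical.

```python
# A sketch of one hallucination guardrail: flag any source cited in a
# generated micro-lesson that is not part of the approved corpus. The
# lesson structure, field names, and source titles are hypothetical.
APPROVED_SOURCES = {
    "Azure Fundamentals learning path",
    "Internal delivery playbook v3",
}

def unverified_citations(lesson: dict) -> list[str]:
    """Return cited sources that cannot be traced back to approved content."""
    return [s for s in lesson.get("cited_sources", []) if s not in APPROVED_SOURCES]

draft = {
    "title": "Intro to prompt engineering",
    "cited_sources": ["Internal delivery playbook v3", "Unknown blog post"],
}
flags = unverified_citations(draft)
if flags:
    print(f"Route to L&D review, unverified sources: {flags}")
```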
unstructured goal framing
Conversational UI was chosen to enable the unstructured context collection and requirement gathering needed to shape the learning objective. At this stage the learner likely has a high-level, abstract need but not much other context. This is a perfect example of when the JTBD of writing a 'learning objective' can truly only be achieved through an unstructured, facilitated discovery process that guides the learner toward the framing via collaborative exploration, without setting arbitrary UI input constraints that could cause friction.
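A minimal sketch of what that facilitated discovery could look like behind the conversational UI is below. `complete` and `get_learner_input` are stand-ins for whatever chat-completion endpoint and chat UI the experiment uses, and the prompt wording is illustrative rather than Learnie's actual prompt.

```python
# A rough sketch of the facilitated goal-framing dialogue. `complete` and
# `get_learner_input` are placeholders for the LLM endpoint and the chat UI;
# the system prompt wording is illustrative, not Learnie's actual prompt.
GOAL_FRAMING_PROMPT = (
    "You are a learning facilitator. The learner has only a high-level, "
    "abstract need. Ask one clarifying question at a time about their role, "
    "current comprehension, and desired outcome. When you have enough "
    "context, restate their goal as a concrete learning objective and ask "
    "them to confirm it."
)

def frame_objective(complete, get_learner_input) -> list[dict]:
    """Run the unstructured discovery loop, keeping the full history as context."""
    messages = [{"role": "system", "content": GOAL_FRAMING_PROMPT}]
    while True:
        learner_turn = get_learner_input()             # free-text input from the chat UI
        messages.append({"role": "user", "content": learner_turn})
        if learner_turn.strip().lower() == "confirm":  # learner accepts the framed objective
            return messages
        reply = complete(messages)                     # LLM call, implementation-specific
        messages.append({"role": "assistant", "content": reply})
```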
natural feedback loops and shared evolution
With the delivery of multi-modal content it was vital to enable a natural engagement and feedback loop between the learner and Learnie, mirroring a tutor guiding a lesson for students. Asking follow-up questions is one of the most active forms of real-time information consolidation: learners ingest the content, transform and extrapolate on it (or struggle to), and rephrase it in their own words as a question that gives the teacher enough context to reframe and explain. This was heavily influenced by Vygotsky's social development theory, which holds that learning is a social process in which Q&A is vital to practicing reasoning.
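A sketch of how that feedback loop might be wired is below, assuming the delivered lesson stays in the model's context so follow-up questions are answered against it; the function and prompt are hypothetical, and `complete` again stands in for the LLM endpoint.

```python
# A sketch of the follow-up loop: the delivered lesson stays in context so
# the learner's questions are answered against it, mirroring a tutor
# reframing an explanation. Function, prompt, and field names are hypothetical.
TUTOR_PROMPT = (
    "You are tutoring the learner through the lesson below. Answer follow-up "
    "questions using only the lesson content; if the question goes beyond it, "
    "say so and suggest what to learn next.\n\nLESSON:\n{lesson}"
)

def answer_follow_up(complete, lesson_text: str, history: list[dict], question: str) -> str:
    """Answer a learner question with the lesson and prior Q&A as context."""
    messages = [{"role": "system", "content": TUTOR_PROMPT.format(lesson=lesson_text)}]
    messages.extend(history)                               # earlier Q&A turns
    messages.append({"role": "user", "content": question})
    reply = complete(messages)
    history += [{"role": "user", "content": question},
                {"role": "assistant", "content": reply}]   # carry the shared evolution forward
    return reply
```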
When leveraging generative AI for learning, content accuracy becomes the greatest risk, because the base assumption is that the end user does not have the baseline knowledge necessary to apply some of the more common UX patterns for intermediary self-validation or “fact-checking” of the information. This was a significant driver in the decision to create discrete hypotheses to test as we move through development. We are also experimenting with additional indicators for published "Learnie Bits" or micro-lessons that reflect whether content has been reviewed by an L&D specialist.
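As a sketch of the kind of indicator we are exploring, the snippet below attaches a review status to a published micro-lesson so the learner can see whether an L&D specialist has reviewed it; the statuses and fields are illustrative assumptions, not the actual data model.

```python
# A sketch of a review indicator for published "Learnie Bits". The statuses
# and fields are illustrative assumptions, not the actual data model.
from dataclasses import dataclass
from enum import Enum

class ReviewStatus(Enum):
    AI_GENERATED = "AI-generated, not yet reviewed"
    LND_REVIEWED = "Reviewed by an L&D specialist"
    NEEDS_REVISION = "Flagged for revision"

@dataclass
class LearnieBit:
    title: str
    content: str
    status: ReviewStatus = ReviewStatus.AI_GENERATED
    reviewer: str | None = None

bit = LearnieBit(title="Intro to prompt engineering", content="...")
print(bit.status.value)  # surfaced to the learner as a trust indicator
```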