Generative AI in Workplace Learning

And What eLearning Designers Should Do Next
Generative AI is no longer an experimental tool. It is now embedded in daily work. Employees use AI to write emails, summarize reports, create documents, explain policies, prepare presentations, and answer customer questions. But what does this shift mean for eLearning professionals?
A large study from Microsoft Research provides useful clarity. In Working with AI: Assessing the Usability of Generative AI in Jobs (Tomlinson, Jaffe, Wang, Counts, and Suri, 2025), researchers analyzed 200,000 anonymized conversations with Microsoft Copilot and mapped them to real work tasks using the O*NET framework. Instead of predicting future disruption, the study examined how AI is already being used successfully in workplace activities. The findings carry important implications for the use of generative AI in workplace learning that Instructional Designers, L&D managers, and digital learning teams should heed.
1. AI Works Best for Knowledge-Based Work
Research has found that AI excels in tasks including:
- Writing and editing content.
- Explaining procedures or technical details.
- Teaching or clarifying concepts.
- Gathering and organizing information.
- Communicating with customers or stakeholders.
- Preparing educational or informational materials.
In short, AI excels at knowledge work—the creation, processing, and communication of information.
Here’s why this is important in eLearning: almost every role involves information work. Even operational or executive roles require documentation, reporting, communication, planning, or compliance specifications. The effectiveness of AI is not limited to technical roles; it cuts across industries. This means that AI capability development should not be siloed in IT training. It should be part of the core learning strategy.
2. The Real Skill Shift Isn’t Technical—It’s Cognitive
Another useful distinction in the research is between two types of AI impact:
- AI helps workers (augmentation)
- AI does parts of the work itself (delegation)
Some roles will use AI as a productivity partner. Others will hand off parts of their work to AI systems. For eLearning professionals, this difference changes how courses should be designed. Most current AI training focuses on:
- Tool walkthroughs.
- Quick tips.
- Feature definitions.
But research suggests that’s not enough. What employees actually need is support in:
- Deciding when to use AI.
- Evaluating AI results.
- Handling incomplete or incorrect answers.
- Managing risk and escalation.
In other words, we need to train the judgment, not just the mechanics, of using generative AI in workplace learning.
3. Completion Rates Do Not Indicate AI Readiness
Researchers have measured the impact of AI based on:
- Successful completion of the task.
- The scope of AI capability within job functions.
- Real-world performance across jobs.
They did not measure how many people “completed the training.” For eLearning teams, this is a wake-up call. If your AI program’s success metrics include:
- Course completion rates.
- Satisfaction scores.
- Login frequency.
You may be measuring engagement, not impact. More meaningful indicators include:
- Improved resolution quality.
- Reduced rework.
- Faster turnaround with maintained accuracy.
- Better escalation decisions.
- Improved writing clarity.
AI is changing the way work is done. Learning metrics should reflect changes in job performance.
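To make this concrete, here is a minimal sketch of what measuring impact rather than engagement might look like. The metric names, field structure, and example values are illustrative assumptions, not part of the study or any standard.

```python
# Hypothetical sketch: comparing job-performance indicators before and
# after an AI skills program, instead of reporting completion rates.
# Field names and example values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PerformanceSnapshot:
    resolution_quality: float   # e.g. a QA score on a 0-100 scale
    avg_turnaround_hours: float # average time to complete a task
    escalation_accuracy: float  # share of escalations judged correct, 0-1

def readiness_delta(before: PerformanceSnapshot,
                    after: PerformanceSnapshot) -> dict:
    """Report directional change on impact metrics, not engagement metrics."""
    return {
        "resolution_quality": after.resolution_quality - before.resolution_quality,
        "turnaround_hours": after.avg_turnaround_hours - before.avg_turnaround_hours,
        "escalation_accuracy": after.escalation_accuracy - before.escalation_accuracy,
    }

# Example: invented numbers for one team, before and after training.
before = PerformanceSnapshot(72.0, 8.5, 0.60)
after = PerformanceSnapshot(80.0, 6.0, 0.75)
delta = readiness_delta(before, after)
```

A dashboard built on deltas like these would show whether training changed how work gets done, which completion rates alone cannot.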
4. Why Basic Knowledge Still Matters
Research suggests that AI can help democratize access to expertise. When used effectively, AI can help workers perform tasks previously reserved for specialists. However, this benefit appears only when users can critically evaluate the AI output. Without foundational knowledge, employees may:
- Accept wrong answers.
- Miss contextual nuances.
- Fail to detect missing information.
- Apply guidance incorrectly.
This creates a new frontier for Instructional Design: Combine AI capabilities with domain knowledge reinforcement. AI skill training should include:
- Validation frameworks.
- Error-spotting checklists.
- Risk-awareness prompts.
- Reflective decision questions.
The goal is balanced confidence—not blind trust.
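A validation framework like the one described above can be sketched as a simple rubric. The check questions and the use/revise/escalate thresholds below are invented for illustration; a real framework would be role- and domain-specific.

```python
# Minimal sketch of an AI-output validation checklist. The questions and
# decision thresholds are illustrative assumptions, not a standard rubric.

VALIDATION_CHECKS = [
    "Is every factual claim verifiable against a trusted source?",
    "Does the answer reflect current policy or documentation?",
    "Is all required context included, with no missing caveats?",
    "Is the answer free of compliance or customer risk?",
]

def review_output(answers: list[bool]) -> str:
    """Map a learner's yes/no answers (True = check passed) to an action."""
    if len(answers) != len(VALIDATION_CHECKS):
        raise ValueError("one answer per check is required")
    failed = [q for q, ok in zip(VALIDATION_CHECKS, answers) if not ok]
    if not failed:
        return "use"
    if len(failed) == 1:
        return "revise"
    return "escalate to a human expert"
```

Walking learners through a checklist like this trains calibrated trust: the output is used only when it survives scrutiny.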
5. Where AI Currently Struggles (And Why That Matters)
The study also found low AI performance in:
- Physical or manual labor.
- Consequential or complex decision-making.
- Certain analytical tasks.
This reinforces an important design principle: AI should be positioned as a support tool, not a replacement for expert judgment. Your training should help learners understand:
- The limits of AI.
- Conditions that require human oversight.
- When escalation is required.
- How to combine AI output with situational awareness.
This prevents over-reliance and builds responsible usage habits.
Implications for eLearning Practitioners
So how should learning teams respond? Here are five practical shifts.
1. Design Role-Specific AI Learning
Avoid generic AI awareness courses. Instead:
- Identify the high-frequency information tasks for each role.
- Map where AI meaningfully overlaps.
- Develop targeted learning modules for those intersections.
For example:
- Sales teams → AI-assisted proposal writing + validation
- HR teams → AI-assisted policy communication + compliance monitoring
- Operations teams → AI-assisted documentation + report clarity
Relevant, role-specific use cases drive adoption of generative AI in workplace learning.
2. Use Scenario-Based eLearning Instead of Passive Modules
AI skills cannot be built through slides alone. Combine:
- Branching scenarios.
- Decision-based simulations.
- Risk assessment activities.
- Outcome assessment activities.
Ask learners to review AI-generated content and decide:
- Is this accurate?
- What is missing?
- What risks does it present?
- Should it be escalated?
This builds practical skill.
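A branching scenario of this kind reduces to a small graph of decision nodes. The sketch below shows one possible data structure; the scenario text and choices are invented examples, not content from the study.

```python
# Illustrative sketch of a branching-scenario structure for AI-output
# review exercises. Node content and choices are invented examples.

from dataclasses import dataclass, field

@dataclass
class ScenarioNode:
    prompt: str
    # Maps a learner's choice text to the id of the next node.
    choices: dict[str, str] = field(default_factory=dict)

SCENARIO = {
    "start": ScenarioNode(
        "AI drafted a policy summary that omits the exception clause. What do you do?",
        {"publish as-is": "consequence", "verify against the policy": "verify"},
    ),
    "verify": ScenarioNode(
        "You spot the missing clause. Revise the draft or escalate?",
        {"revise": "end_good", "escalate": "end_good"},
    ),
    "consequence": ScenarioNode(
        "The incomplete summary misleads a manager. What went wrong?", {}
    ),
    "end_good": ScenarioNode("You caught the gap. Scenario complete.", {}),
}

def next_node(current: str, choice: str) -> str:
    """Advance the scenario based on the learner's choice."""
    return SCENARIO[current].choices[choice]
```

Because choices lead to visible consequences, learners practice the judgment calls rather than memorizing rules.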
3. Embed AI in Performance Support, Not Just in Training
AI itself can serve as:
- An on-demand explainer.
- A writing assistant.
- A feedback partner.
- A summarization tool.
Rather than isolating AI in training sessions, integrate it into the workflow. For example:
- Provide prompt libraries within LMS platforms.
- Offer AI-assisted practice activities.
- Use AI to generate dynamic feedback.
This supports learning in the workflow.
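A prompt library is, at its simplest, a curated lookup from role and task to a vetted template. The roles, tasks, and template text below are assumptions for illustration; a real library would be curated with SMEs and maintained like any other content asset.

```python
# Hypothetical prompt-library sketch for in-workflow AI support.
# Roles, tasks, and template wording are invented for illustration.

PROMPT_LIBRARY = {
    ("sales", "proposal_draft"): (
        "Draft a proposal section for {client} covering {topic}. "
        "Flag any claims that need verification before sending."
    ),
    ("hr", "policy_explainer"): (
        "Explain policy {policy_id} in plain language for employees. "
        "List anything that requires HR confirmation."
    ),
}

def get_prompt(role: str, task: str, **kwargs: str) -> str:
    """Fetch a vetted template for a role/task pair and fill in specifics."""
    template = PROMPT_LIBRARY[(role, task)]
    return template.format(**kwargs)
```

Notice that the templates build validation into the prompt itself ("flag any claims that need verification"), so responsible-use habits travel with the tool.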
4. Revise Competency Frameworks
Traditional skills models rarely include:
- AI collaboration capabilities.
- Prompting capability.
- Output validation.
- Risk assessment.
These capabilities should be embedded in modern digital literacy frameworks. AI fluency is becoming part of professional competence.
5. Redefine the Role of Instructional Designers
Here’s the uncomfortable truth: AI can already outline lessons, write objectives, generate questions, and summarize SME conversations. If Instructional Design remains focused only on content production, its value will decline. The opportunity lies in:
- Performance diagnostics.
- Workflow alignment.
- Simulation design.
- Behavioral measurement.
- Human and AI interaction design.
The strategic value of L&D increases when we move from content creation to performance engineering.
Final Thoughts
The Microsoft Research study does not predict that AI will eliminate jobs. Instead, it shows where AI already overlaps with real work tasks today. That overlap is significant, and growing.
For eLearning professionals, the question is no longer whether to teach AI skills. The real question is: are we designing learning that strengthens human judgment in AI-augmented work?
Because successful organizations won’t be the ones deploying the most AI tools. They will be the ones training their people to use AI thoughtfully, critically, and creatively. And that starts with how we design learning now.



