
Generative AI in Learning Design: Keeping It Under Control

Lessons for Building Training with GenAI

Generative AI has become a powerful partner in learning design. It can summarize lengthy Subject Matter Expert (SME) discussions, review drafts, structure content, and accelerate early-stage design work. In many ways, it acts as a tireless research assistant, helping to transform raw expertise into learning experiences.

Yet anyone who has used generative AI in real projects knows the other side of the story: AI does not know what it does not know. If the data is incomplete or the information is unclear, the system does not simply respond with “I don’t know.” Instead, it fills the gaps, sometimes with plausible but wrong information. It may invent details, draw unsupported conclusions, or confidently suggest ideas that are out of context. For learning and development (L&D) professionals, this poses a significant challenge: How can we use generative AI effectively without losing control over the accuracy, authenticity, and accountability of learning content?

In my recent work developing leadership training programs, I have found that the answer is not just better prompting. The key is to build a process that integrates AI responsibly into the entire learning design workflow. Here I share a few practices that have helped me keep AI productive, while staying in control, when creating leadership training.

Grandma’s Rule: Always Start With a Purpose

My first rule comes from something I learned long before AI existed. During my high school teacher certification program, my supervisor often repeated one simple piece of advice: always start with a purpose. People are different, days are different, and the environment is always changing, but purpose remains the foundation that keeps the learning experience focused and meaningful. In AI-supported design, this principle becomes even more important.

Before producing any content, I define the learning objectives clearly and concisely. All prompts, drafts, and revisions of content are linked to those objectives. As the project develops and discussions with SMEs deepen, the objectives may shift slightly, but they remain central to the process.

This practice helps prevent a common problem with generative AI: content expansion without direction. AI can produce a large volume of polished material, but without a clear objective framework, that material can drift away from the learning goals.

Objectives act as a control system that keeps the AI output aligned with the training objectives.
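Treating objectives as a control system can be made concrete as a simple prompt-assembly step: every generation request is prefixed with the course objectives, so no draft is produced unanchored. The sketch below is a minimal illustration; the instruction wording, helper name, and sample objectives are my own assumptions, not a prescribed template.

```python
def build_prompt(objectives, task):
    """Prefix a generation request with the learning objectives
    so every draft stays anchored to the training goals."""
    numbered = "\n".join(f"{i}. {obj}" for i, obj in enumerate(objectives, 1))
    return (
        "You are drafting leadership-training content.\n"
        "Every output must serve these learning objectives:\n"
        f"{numbered}\n\n"
        f"Task: {task}\n"
        "If the task cannot be tied to an objective, say so instead of inventing content."
    )

# Hypothetical objectives for a leadership course
objectives = [
    "Give actionable feedback in one-on-one meetings",
    "De-escalate conflict within the team",
]
prompt = build_prompt(objectives, "Draft a role-play scenario for new managers.")
print(prompt)
```

Because the objectives travel with every request, a reviewer can later check any AI draft against the same numbered list that produced it.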

Create a Dedicated AI Assistant for Your Sources

Another important practice is to build a project-specific AI assistant rather than relying on a generic chatbot. In my work, I upload key project materials such as:

  1. Policy compliance documents.
  2. SME notes and summaries.
  3. Learning frameworks.
  4. A document that describes the aims and objectives of the course.

These inputs become the reference base that the AI assistant draws on when generating content. This approach greatly reduces hallucinations because the system is guided by verified internal information instead of relying on generic internet patterns. It also keeps the output focused and ensures that the material produced is always connected to the specific learning context. In short, the assistant becomes a structured knowledge hub rather than a free-floating text generator.
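The same grounding idea can be approximated even without a vendor platform: keep the uploaded documents in a small store and pass only the relevant ones into each request. The sketch below uses naive keyword overlap purely for illustration; the document names and scoring are assumptions, and a real setup would use a proper retrieval or embedding service.

```python
def retrieve(sources, query, k=2):
    """Rank source documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        sources.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

def grounded_prompt(sources, query):
    """Build a prompt that allows answers only from the project sources."""
    picked = retrieve(sources, query)
    context = "\n\n".join(f"[{name}]\n{sources[name]}" for name in picked)
    return (
        "Answer using ONLY the sources below. If they do not cover the "
        "question, reply 'Not covered by project sources.'\n\n"
        f"{context}\n\nQuestion: {query}"
    )

# Hypothetical project documents
sources = {
    "policy.md": "Managers must document feedback conversations within 48 hours.",
    "sme_notes.md": "SMEs stress listening before advising in coaching talks.",
    "objectives.md": "Course goal: practice structured feedback conversations.",
}
print(grounded_prompt(sources, "How soon must feedback be documented?"))
```

The explicit fallback instruction mirrors the practice described above: when the internal sources do not contain the answer, the assistant is told to say so rather than fill the gap.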

Real Practice Comes First

One of the most important lessons I’ve learned is that authentic learning experiences must come from actual practice, not from AI invention. Generative AI can create convincing scenarios, but it struggles with the subtleties of local language, tone, and expert judgment. These things matter for leadership training and on-the-job learning.

To deal with this, I start with real information.

In my projects, teachers and facilitators often record short demonstration videos to model the work. These videos capture real conversations, authentic language, and the subtle dynamics of practice. I collect the transcripts from these recordings and use them as source material for my AI assistant. Then I instruct the AI to generate dialogue or scenarios based on those transcripts while staying guided by the learning objectives.

This process allows AI to build and refine material while preserving the original voice of the practitioners. The result is learning content that feels natural and grounded rather than artificial.

Optimizing Content Without Losing Meaning

One of the most practical uses of AI in learning design is content optimization. If the content is based on real experiences and aligned with goals, AI can help refine and enhance it. For example, I often ask the assistant to improve language clarity and use SEO-friendly phrasing. This makes learning materials easier to search, discover, and navigate within digital platforms.

However, this step always comes after content alignment, not before. All revisions are reviewed against the learning objectives to ensure that improvements in clarity or keyword optimization do not distort the intended meaning. AI can amplify language patterns, but Instructional Designers must always be responsible for maintaining the integrity of the learning message.

AI as a Language Learning Mirror

In Language Machines, Leif Weatherby explains how AI can reveal the patterns of collective language and shape cultural meaning. In many ways, this is exactly what we see when generative AI is applied to learning design. AI reflects the way people speak, write, and organize ideas. When used responsibly, it can help reveal patterns in organizational knowledge and accelerate the translation of expertise into learning experiences. But this only works when AI is thoughtfully embedded within the learning design process.

For me, this means integrating AI into all stages of the ADDIE model (analysis, design, development, implementation, and evaluation) while maintaining strong collaboration with Subject Matter Experts. AI does not replace the learning designer or the SME. Instead, it becomes a structured partner that helps organize information, refine language, and scale learning. When used this way, generative AI does not reduce authenticity. In fact, it can help protect it, if we collaborate with it wisely.
