It’s been about a month since the ATD 2023 conference, and hopefully those of us in the industry have been able to put our big ideas into practice. This year’s conference brought out lots of great ideas: unlearning our assumptions about our jobs, giving high-impact players the freedom to thrive, and considering how we encourage constructive feedback. ATD reminds us that there are many innovators out there digging into a multitude of hypotheses and finding real applications. It should also remind us that you don’t need to be a professor or a published author to discover new applications. Working in training and development means constantly searching for better ways to educate that produce real-world effects. No matter how small or specific your area of education is, there are valuable gold nuggets to discover: better ways to educate that result in faster training times, longer retention, or improved performance. But with our packed schedules and tight deadlines, when do we have the opportunity to look for these gold nuggets? Here are some ideas for building experimentation into your training workflows.
Create With the Purpose of Discovery
When it is time to create something new (which will inevitably happen if you’re an instructional designer), be deliberate about what you create. Consider a new approach to your training, but be selective: one big idea is enough! It might be a storytelling device, a game, a social activity, a journal, or one of many other possible teaching strategies. The point is to add something new to your training and see whether it makes your training better.
It is easy to fall into a routine with every new course, but it’s unlikely that doing the same thing over and over again will lead to a breakthrough. Instead, treat each project as an opportunity to apply something you haven’t tested yet. Keep your changes small: it is better to thoroughly test one small change than to try to separate the effects of many changes at once.
Set Up Your Training Program Like an Experiment
Just because your idea is new does not mean it’s good. Experimentation simply requires controlling the training implementation in a way that lets you see whether your new idea makes any difference. The classic way to do that is to create two groups: an experimental group that receives the new training and a control group that receives the old training. Make sure you have a way to clearly target the different training content to your two groups.
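To make that concrete, here is a minimal sketch of random assignment in Python. The roster names and the `assign_groups` helper are hypothetical; the point is that randomly shuffling learners before splitting them helps keep the two groups comparable, so any difference in outcomes is more plausibly due to the new training.

```python
import random

def assign_groups(learners, seed=None):
    """Randomly split a roster into a control and an experimental group.

    Random assignment helps ensure the groups are comparable, so that
    differences in outcomes can be attributed to the new training.
    """
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = learners[:]
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "control": shuffled[:midpoint],       # receives the old training
        "experimental": shuffled[midpoint:],  # receives the new training
    }

# Hypothetical roster of six learners
groups = assign_groups(["Ana", "Ben", "Caro", "Dev", "Eli", "Fay"], seed=42)
```

In practice the "split" might just be two enrollment lists in your LMS, but the principle is the same: decide group membership randomly, before anyone starts the course.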
Keep in mind that we don’t want to harm the control group by giving them bad or inaccurate training. We just want to provide them with a continuation of the old way of training.
Does this mean we are doing twice the work in order to try new training methods? It might, depending on what you are experimenting with! But I would recommend keeping your experiments simple. Restrict your experimental training to just a few activities, then split your learners so that some receive the new activities and some do not.
We wouldn’t learn much about our training strategies if we didn’t measure any of the possible outcomes. Most eLearning already comes with common tools to measure learning: quizzes and surveys. You should also have tools to track time spent in the training and the big-picture impact on performance.
Luckily, the measurements for your two groups can be exactly the same; no new measuring tools are needed. But you will want to make sure you have these measures in place from the start.
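A simple way to think about this is to record the same measures, per learner, for both groups. The structure and numbers below are hypothetical; the key point is that each group's records have identical fields (here, a quiz score and minutes spent), so comparing them later is straightforward.

```python
# Hypothetical per-learner results, recorded identically for both groups:
# a quiz score (percent) and minutes spent in the course.
results = {
    "control": [
        {"learner": "Ana", "quiz": 78, "minutes": 42},
        {"learner": "Ben", "quiz": 81, "minutes": 39},
    ],
    "experimental": [
        {"learner": "Caro", "quiz": 90, "minutes": 35},
        {"learner": "Dev", "quiz": 85, "minutes": 31},
    ],
}

def group_mean(results, group, measure):
    """Average one measure (e.g. 'quiz' or 'minutes') across a group."""
    values = [row[measure] for row in results[group]]
    return sum(values) / len(values)

control_avg = group_mean(results, "control", "quiz")
experimental_avg = group_mean(results, "experimental", "quiz")
```

Because the fields match, the same helper works for either group and for any measure you tracked.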
Do you have two independent groups? Did you collect relevant data from each? Excellent! This is a good situation for a two-sample t-test to check whether your results are significant.
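As a sketch of what that test looks like, here is Welch's version of the two-sample t-test in plain Python (Welch's variant does not assume the two groups have equal variances). The quiz scores are made up for illustration; in practice you could get the statistic and a p-value directly from a library call such as `scipy.stats.ttest_ind(experimental, control, equal_var=False)`.

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = (va / na + vb / nb) ** 0.5  # standard error of the mean difference
    t = (mean(sample_a) - mean(sample_b)) / se
    # Welch–Satterthwaite approximation for the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    )
    return t, df

# Hypothetical post-training quiz scores for each group
control = [72, 75, 78, 74, 71, 77]
experimental = [80, 83, 79, 85, 81, 84]

t, df = welch_t(experimental, control)
# For about 10 degrees of freedom, |t| above roughly 2.23 indicates
# significance at the 5% level (two-tailed).
```

A large t statistic relative to the critical value for your degrees of freedom suggests the difference between groups is unlikely to be due to chance alone.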
Repeat: Including Experimentation in Your Training Workflows
If you’ve come this far, don’t stop! Each experiment informs your future experiments and greatly increases your chance of finding that gold nugget. Training and development is not about doing the same thing every year on every project. We want to keep trying new approaches, but we also want to ensure we keep only the approaches that work.
One way to accomplish this is by building experimentation into your training workflows: continually trying new ideas, creating those ideas as small pieces of content, targeting that content at an experimental group while maintaining a control group, and comparing measurements between the groups.