When you don’t have data

I am currently working on a learning project where we wanted to use a data-driven design approach to come up with favourable interactions for the users. I have been hunting for data and research on learning interactions and how they impact learners. I have to admit it is like searching for a needle in a haystack.

In learning, many professionals shiver at the thought of introducing measurements and data. I came from that world, so I have certainly met some of them. I am not sure why that is, but maybe it is because measurement might expose inefficiencies or poor ratings for their work. Business managers, on the other hand, want to know how learning impacts business results.

As an internal trainer I was privileged in one company to work with enlightened managers who would help us measure training impact against clear targets. It is a two-way street at the end of the day: the business manager knows the goals for his or her team and can help us define what success looks like and what training impact can be measured. I always saw it as a joint venture, which the business managers bought into, and we could actually measure impact. It reduced the amount of training offered to the team, but it improved what they did with it. In my view, a win/win.

When it came to the training team, however, I had a different battle, because there were targets on how many days of training ought to be delivered per business account. It was a classic case of the business wanting something that worked and a disconnect with training team targets on how to deliver it. True data-driven design would have matched the high-level targets to the impact measures we agreed and then achieved, because part of the success had to be defining what success looked like in the first place.

In gamification, I am always on the lookout to measure which game mechanics and dynamics solve the problem in front of us better than others. This requires data on where people drop out, get stuck or no longer engage. Most platforms can track this today and provide great feedback on what works and what you may need to tweak to improve your impact.
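As a rough illustration of the kind of drop-off data I mean, here is a minimal sketch in Python. The event records, step names and export format are hypothetical; every platform structures this differently, but the idea of a simple funnel count is the same.

```python
from collections import Counter

# Hypothetical platform export: one record per learner, recording
# the furthest step each learner reached in a module.
events = [
    {"learner": "a1", "furthest_step": "quiz"},
    {"learner": "b2", "furthest_step": "intro"},
    {"learner": "c3", "furthest_step": "scenario"},
    {"learner": "d4", "furthest_step": "quiz"},
    {"learner": "e5", "furthest_step": "intro"},
]

# Module steps in the order learners encounter them (assumed order).
steps = ["intro", "scenario", "quiz"]

# Count how many learners reached at least each step,
# then report the drop-off between consecutive steps.
reached = Counter()
for e in events:
    idx = steps.index(e["furthest_step"])
    for s in steps[: idx + 1]:
        reached[s] += 1

total = len(events)
previous = total
for s in steps:
    print(f"{s}: {reached[s]}/{total} reached, "
          f"drop-off from previous step: {previous - reached[s]}")
    previous = reached[s]
```

Even a small table like this tells you which step is losing people, which is where the design conversation about mechanics and dynamics should start.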

In learning design, I have been asking the same question in several different groups: which user interactions work best for learner retention? I have been met with complete and utter silence.

One person challenged the scenario-based approach, but without data. Another two challenged VR, because their learners are not tech-savvy enough, which is valid, I may add, even with an adequate workaround of providing a room with a person teaching them how to use it. I also questioned passive content with a simple next button, and apparently there is actual data to say that the next button, as an interruption, is worse for learner retention than, for example, an autoplay video or animation.

I am sure pockets of data exist somewhere and that I haven’t laid my hands on them just yet. What we have suggested to the client is to A/B test two options, to find out what will work best in their setting. It is a more time-intensive and expensive approach, but it will get us data we can work with, provided the learning system can track drop-off points and completion and retains knowledge test results to prove retention straight after learning.
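To make the A/B comparison concrete, here is a minimal sketch of how the completion data from the two options could be compared once the platform exports it. The numbers are made up, the option labels are placeholders, and a real analysis would also look at the knowledge test scores, not just completion.

```python
from math import sqrt

# Hypothetical results exported from the learning platform:
# completions out of learners enrolled for each option.
option_a = {"completed": 72, "enrolled": 100}   # e.g. scenario-based module
option_b = {"completed": 58, "enrolled": 100}   # e.g. passive video module

def completion_rate(option):
    return option["completed"] / option["enrolled"]

# Two-proportion z-test on completion rates (a common, simple choice;
# retention test scores would need their own comparison).
p_a, p_b = completion_rate(option_a), completion_rate(option_b)
n_a, n_b = option_a["enrolled"], option_b["enrolled"]
p_pool = (option_a["completed"] + option_b["completed"]) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se

print(f"Option A completion: {p_a:.0%}, Option B completion: {p_b:.0%}")
print(f"z statistic: {z:.2f} (|z| > 1.96 suggests a real difference at ~95%)")
```

The point is not the statistics themselves, but that a test like this only works if the system records enrolment, completion and assessment results for both versions from day one.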
