How do you measure engagement with your design?

Employee engagement is usually measured via questionnaires, surveys or mood tracking within the business. Behaviours like the 5 pm rush out the door, lingering breaks and a slow pace of productivity are indicative of dwindling employee morale and engagement.

When it comes to learner engagement, the measures are often along the lines of completion rates, interaction with materials, feedback surveys and course ratings. Some organisations also track where people drop out of or stop a course. Very rarely do we ask for feedback on whether the person got what they needed from the course.
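To make that concrete, here is a minimal sketch (in Python, with made-up learner IDs, module numbers and completion flags) of how completion rates and drop-off points might be computed from course event data:

```python
from collections import Counter

# Hypothetical event data: (learner_id, last_module_reached, completed).
# All names and numbers are invented for illustration.
events = [
    ("u1", 5, True),
    ("u2", 2, False),
    ("u3", 5, True),
    ("u4", 3, False),
    ("u5", 1, False),
]
TOTAL_MODULES = 5

# Completion rate: share of learners who finished the course.
completion_rate = sum(1 for _, _, done in events if done) / len(events)
print(f"Completion rate: {completion_rate:.0%}")

# Drop-off histogram: at which module did non-completers stop?
drop_offs = Counter(module for _, module, done in events if not done)
for module in range(1, TOTAL_MODULES + 1):
    print(f"Module {module}: {drop_offs.get(module, 0)} drop-off(s)")
```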

In gamification design, we tend to track against the business objectives for which the design was created. But should we be doing more?
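As a rough illustration of what that tracking can look like, here is a hypothetical sketch comparing a post-launch metric to its baseline and target; the metric, numbers and thresholds are invented for the example:

```python
# Hypothetical check of a live design against its business objective,
# e.g. lifting 30-day activation from a 40% baseline to a 55% target.
baseline, target = 0.40, 0.55
activated, cohort = 470, 1000  # made-up post-launch numbers

observed = activated / cohort
print(f"Observed activation: {observed:.0%} vs target {target:.0%}")
print("On track" if observed >= target
      else f"Gap to target: {target - observed:.0%}")
```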

At a panel discussion in Toronto, we were asked how we measure our designs before they go live. The measures suggested were the Game Engagement Questionnaire (Brockmyer et al., 2009), the Presence Questionnaire (Witmer and Singer, 1998) and flow (Csikszentmihalyi, 1990). The first was developed with video games in mind, the second for virtual environments, and the third to describe optimal experiences.
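For context, questionnaire instruments like these typically aggregate Likert-scale responses into a score. Here is a minimal sketch with invented items and a simple mean; the actual GEQ has its own validated item set and scoring procedure:

```python
# Illustrative scoring of a short Likert-style engagement questionnaire.
# The items below are made up; they are not the real GEQ items.
responses = {
    "I lost track of time": 4,
    "I felt absorbed in the experience": 5,
    "I wanted to keep going": 3,
}  # 1 = strongly disagree ... 5 = strongly agree

engagement_score = sum(responses.values()) / len(responses)
print(f"Mean engagement score: {engagement_score:.2f} / 5")
```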

Having looked at each of these measurements, I have to say honestly that we don't tend to test that much before we go live with a design. There are a number of reasons for this: firstly, practical time constraints, and secondly, because we iterate the designs live to make sure we hit the business objectives. We do a few playtests before we start implementing, to sanity-check our assumptions and to widen our target audience. By doing thorough user research, we aim to design according to what works best for the audience in question.

In another discussion last week, on a webinar between university researchers and a number of gamification practitioners, it was clear that not knowing which game mechanics are best for any given situation caused frustration and confusion for the researchers. For us as practitioners, though, testing, iterating and adapting our designs depending on what works is simply part of the job. That said, buyers don't always find iteration comfortable, because a lot of organisations believe in getting it right the first time; but then I guess they wouldn't need any engagement interventions if that belief held up.

As an emerging field, it may be useful to find out from platforms what works best in their experience, and for research and practice to work together more closely on more accurate measures for testing designs. I also think that as soon as we are dealing with human motivation and behaviour, a vast array of other circumstances comes into play, from individual experiences to culture to company. So finding the one ultimate test may, in my view, be an illusion, just as one framework or one design to fit all would be equally presumptuous. But maybe in striving to understand, we learn more about what does and doesn't work.

Either way, it is something I want to explore further, so we can keep improving what we do.

An Coppens