THE RATING GAME


Evaluation – box-ticking exercise or useful tool?

The other day at the hospital, I was asked to fill in a feedback form.

“How likely are you to recommend this hospital to friends and family if they needed similar care or treatment?” it asked. I had to place a tick beside the most appropriate response (ranging from ‘extremely likely’ to ‘extremely unlikely’).

Thinking back to my Heritage MA Visitor Studies lectures, I identified this as a ‘Likert’ scale. (So named because Likert was its inventor – and sadly not because it asks you how much you Liker-t something.)

It also called to mind some of the other Visitor Studies topics I’d learned about:

  • Audience Research
  • Exhibit Evaluation
  • Quantitative vs. Qualitative research methods
  • Statistical Analysis
  • Front-end, Formative, Summative and Remedial Evaluation


Clearly, this hospital feedback questionnaire was a kind of ‘audience research’. (What is the nature of the hospital/museum experience? What is its impact on the patient/visitor?) There was also space to write more detailed comments about your experience. Ah, I thought – that’s an ‘open’ question, which allows the respondent to answer in their own words, as opposed to the Likert scale, which is ‘closed’ and therefore much easier to score and analyse statistically.
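
For the statistically-minded, here’s a minimal sketch in Python (the responses are invented) of just how easily a batch of closed Likert-scale answers can be tallied and summarised:

    from collections import Counter

    # Invented responses to the 'recommend this hospital?' question,
    # coded 1-5 on the Likert scale (5 = 'extremely likely')
    responses = [5, 4, 4, 5, 3, 2, 5, 4, 1, 4]

    tally = Counter(responses)
    mean_score = sum(responses) / len(responses)
    pct_positive = 100 * sum(1 for r in responses if r >= 4) / len(responses)

    print(dict(sorted(tally.items())))                 # {1: 1, 2: 1, 3: 1, 4: 4, 5: 3}
    print(f"Mean score: {mean_score:.1f} out of 5")    # Mean score: 3.7 out of 5
    print(f"'Likely' or better: {pct_positive:.0f}%")  # 'Likely' or better: 70%

An open comments box, by contrast, has to be read, coded and interpreted by a human before it yields any numbers at all.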

I later investigated on the ‘NHS Choices’ website and found that if you wanted to give your feedback online, there were even more Likert-scale ‘ratings’ to be given:

“How satisfied are you with the cleanliness of the area you were treated in? How satisfied are you that you were treated with dignity and respect by the staff?  How satisfied are you that you were involved in decisions about your care?” And so on.

You were then, again, invited to write a fuller review – and you even had to think of a title for it!!

Hardly a day goes by now, it seems, without being asked to give some sort of feedback on services or goods. The world’s gone rating mad! If I buy an item online, not only am I sent an email asking me to rate the item itself, but I’m usually asked to rate the service provided by the seller as well. Did the item arrive on time? Was it as described? If I contacted the seller, did they respond quickly and courteously? Was the amount of packaging appropriate? Sellers are desperate for you to give them a five-star rating because it’s now so important to their business.

I open a website and a window pops up asking me if I could spare a few minutes to give feedback on the website! Or, I get asked to stay on the phone after contacting my bank to answer a few questions about my customer experience!

In fact, these days, I’m more surprised if I’m not asked to fill in some kind of feedback form than if I am. This is particularly true at museums and heritage attractions.

Back at the hospital, how things have changed – even in my lifetime! Once upon a time, doctors told patients what treatment they would be having, using technical/medical terms – and really didn’t want or expect the patient to answer back, or even ask questions. The concept of ‘feedback’ was completely unknown! What did it matter if Mrs X thought Dr Y was rude and patronising? What did it matter if she didn’t understand how her treatment worked as long as it did work?

Similarly, museums once presented the visitor with exhibits that were labelled using academic language that only a fellow academician would understand. What did it matter if ‘ordinary’ visitors might have liked further explanation (or interpretation) of an exhibit to make their visit more interesting? Or even – perish the thought – enjoyable? That wasn’t what museums were there for…

Feedback – Visitor Studies – Evaluation: whichever words you use, in the case of a publicly funded service (including lottery-funded), it’s all about accountability. About showing that the public’s hard-earned cash will be/has been well spent, that you are doing a good job – and achieving the desired effect, whether it’s patient satisfaction, visitor satisfaction, or whatever.

Today, digital technology makes the rating game particularly easy. Online forms are quick and easy to fill in and submit and your responses are instantly number-crunched by the web software – with your review appearing minutes later for all to see, along with all the others. (You can look up any hospital on the NHS Choices website and read its reviews and ratings, just as you would a hotel or visitor attraction.)

Back at the hospital, it occurs to me that, though these questions relating to the ‘patient experience’ are extremely important, perhaps there are more fundamental questions to ask – such as whether the treatment is actually working! (And, some might say – if the operation was a success, who cares if the surgeon is a bit of an old-school curmudgeon?)

Doctors are clearly interested in finding out which treatments work and which don’t. And heritage managers and interpreters should also be concerned about the more ‘fundamental’ aspects of what we do – quite apart from whether the visitors thought our toilets clean or our café good value. In other words, if we have an important message to impart to our visitors (as many of us do) – whether it’s environmental, social, or historical – we should also be asking ‘Is our interpretation working?’


Quantitative vs. Qualitative

To decide which are the most effective treatments, doctors rely not only on scans, blood tests etc. and how long patients live (which is quantitative, i.e. numerical, data), but also on patient feedback: the more qualitative ‘quality of life’ data regarding such things as how tolerable the side effects are. This data can be collected quantitatively, for example by using a Likert rating scale. But doctors are also likely to want to get a ‘feel’ for how the patient is doing by having an actual conversation with them.

In the same way, some museums and visitor attractions have attempted to design quantitative research to measure ‘learning outcomes’ – for example by testing visitors’ knowledge before and after visiting an exhibition or site.
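
As a rough sketch only (these scores are invented), the analysis of such a before-and-after test can be as simple as comparing each visitor’s entrance and exit quiz scores:

    # Invented quiz scores (out of 10) for the same ten visitors,
    # tested once at the entrance and again at the exit
    pre  = [3, 5, 4, 2, 6, 3, 4, 5, 2, 4]
    post = [6, 7, 5, 4, 8, 5, 6, 7, 3, 6]

    gains = [after - before for before, after in zip(pre, post)]
    mean_gain = sum(gains) / len(gains)

    print(f"Mean gain: {mean_gain:.1f} points out of 10")  # Mean gain: 1.9 points out of 10
    print(f"Visitors who improved: {sum(g > 0 for g in gains)} of {len(gains)}")  # 10 of 10

A real study would also need a proper sampling strategy and a significance test – more on sample sizes below.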

However, we must ask ourselves whether helping people to learn ‘facts’ is really the main aim of the interpretation we have produced. If, instead, the intended effect is a more general ‘enhancement’ of their visit, then techniques such as ‘quiz-questioning’ at the exhibition’s entrance and exit will not tell us how effective it was.

We might do better to concentrate, instead, on collecting qualitative data – characterised by so-called ‘rich’ responses. This sort of research is far more about feelings than numbers. There are many ways to collect qualitative data – and that’s all I’m going to say here because it’s far beyond the scope of this blog to start describing them!


Vital statistics?

You may feel that by presenting your results as numerical values (for example, percentages) you will lend them more credibility. However, there are several big pitfalls to think about.

In statistical analysis, sample size is a fundamental issue. Medical researchers and drug companies may be able to use the data from thousands of patients to prove that one treatment is better than another. Furthermore, a lot of effort goes into ensuring that external factors don’t skew the results. For example, many drug trials are ‘blind’, i.e. the patients don’t know whether they are receiving the drug on test or not. (In a ‘double blind’ trial, the researchers don’t know either!)
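
To put some numbers on why sample size matters: the ‘margin of error’ on any survey percentage shrinks only with the square root of the number of respondents. Here’s a back-of-the-envelope sketch (assuming a simple random sample and the worst-case 50/50 split, at 95% confidence):

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Approximate 95% margin of error for a survey proportion."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (50, 100, 1000, 10000):
        print(f"{n:>5} respondents: +/- {100 * margin_of_error(n):.1f} percentage points")

    #    50 respondents: +/- 13.9 percentage points
    #   100 respondents: +/- 9.8 percentage points
    #  1000 respondents: +/- 3.1 percentage points
    # 10000 respondents: +/- 1.0 percentage points

So a percentage based on 50 responses is only accurate to roughly ±14 points, while 10,000 responses get you to about ±1.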

A large museum or visitor attraction may, similarly, be able to get thousands of visitors to agree to be interviewed/to complete a survey questionnaire. However, for most purposes, the respondents must also be selected at random to ensure the integrity of the results. Random sampling is trickier than it sounds! A common pitfall is the ‘self-administered questionnaire’ left in a pile at the front desk: the results will merely tell you what people who like filling in questionnaires think of your visitor attraction. (And they are probably not representative of the population as a whole!)
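
One simple defence against this trap is systematic sampling: instead of waiting for volunteers, you approach, say, every tenth visitor through the door. A minimal sketch (the interval and visitor numbers are purely illustrative):

    import random

    def visitors_to_approach(total_visitors, interval=10):
        """Systematic sample: pick a random start within the first
        interval, then every interval-th visitor after that."""
        start = random.randint(1, interval)
        return list(range(start, total_visitors + 1, interval))

    print(visitors_to_approach(200))   # e.g. [7, 17, 27, ... 197]

You still need to record refusals and to sample across the whole day and week, but at least the choice of whom to ask is no longer left to the visitors themselves.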

So, unless you are a large institution with access to hundreds (if not thousands) of visitors and some market research expertise (to guide you through the twin minefields of questionnaire design and sampling strategy), it may be better not to go down the quantitative data/statistics route at all.


Dead-end or front-end?

According to one definition, “Exhibit Evaluation is discovering the extent to which something has succeeded in achieving its purpose”. This would indicate that it’s something that you do right at the end of a project. Indeed, once upon a time, this so-called summative evaluation was the only type of evaluation being carried out in museums.

But evaluation doesn’t have to be a dead-end process, where the only interest lies in finding out how well a piece of interpretation met its objectives – at a point when there’s no time or money left to improve the outcome based on your findings! (Though one would hope that someone, somewhere, embarking on a similar project could still use your results.) Evaluation shouldn’t just be a box-ticking exercise. It can do so much more.

For example, Front-end evaluation takes place at the planning stage with a focus on the target audience. So… even if you were dead keen at the outset to produce a panel on the identification of bryophytes, by doing a bit of front-end evaluation (e.g. on-site interviews) you might find that most of the visitors to the site are actually mothers with young toddlers – who would much prefer to read a fun panel about mini-beasts.

Formative evaluation is done during the design stage to improve the interpretation prior to installation. It’s therefore usually carried out on mock-ups or prototypes. The aim is to improve the interpretation by trial and error.

Finally, Remedial evaluation does exactly what it says on the tin: once the interpretation is in place, it finds the problems so that they can be put right!


Putting it all into practice

Back at the hospital, they’re building a new wing, complete with a new ‘way-finding’ system – to be implemented, eventually, across the whole site. The interior designers acknowledge that signage can be very expensive and also very hard to get right first time (even after lots of trial ‘walk-throughs’) and they have hit upon a really sensible solution. All signs will be paper-based, to fit inside a large, smart-looking frame. So, if it’s found that people are getting confused or there is a change in the ward numbering system, the signs can be amended and replaced cheaply and quickly using digital print.

This is an approach similar to one that I like in interpretation panel design: a sort of combination of front-end, formative, summative and remedial evaluation all rolled into one – made possible by new technology.

Today, digital print makes it much easier and cheaper to delay ‘setting your interpretation in stone’ (sometimes quite literally) until you know that you’ve got it just right and it really works. Let’s say you’re planning an outdoor panel or set of panels. First, get out of the office for a few hours to interview visitors to the site. Take along some preliminary artwork to show them and ask them what they think of your proposals. People generally like being asked for their opinions. (But you must then be prepared to act on their suggestions!)

After doing this front-end evaluation, you are already much better informed and you can produce a ‘temporary’ graphic panel in a relatively inexpensive medium, such as Foamex or polycarbonate.

This can be installed in a sign that takes a removable graphic panel, such as the Cavalier™, Musketeer™ and Bowman™ range of interpretation displays.

Your panel can then be further evaluated in situ through more interviews – and/or from visitor feedback via a QR code etc. – and any changes made to the artwork before installing an amended version. This could be in a more permanent medium such as n-viro™, if you are confident that you’ve got it right. Or, depending upon the situation, you might want to stick with the more ‘temporary’ panel type, which would allow you to constantly review and update the information every time the panel needs replacing.
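
Incidentally, if you do gather feedback via a QR code, generating one is trivial these days. Here’s a minimal sketch using the open-source Python ‘qrcode’ library (the feedback URL is a made-up example):

    import qrcode   # pip install qrcode[pil]

    # Hypothetical URL for the panel's online feedback form
    img = qrcode.make("https://example.org/feedback/welcome-panel")
    img.save("panel_feedback_qr.png")   # ready to drop into the panel artwork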

Evaluation – you know it makes sense! But it can seem like a complete pain. It isn’t an easy thing to do and it can be time-consuming. So, when you’re grappling with those deadlines, it can easily fall off the bottom of the to-do list!

I’ve been scratching my head to find that ‘killer argument’ for doing some (front-end/formative) evaluation on even a small project such as a single ‘welcome’ panel in a nature reserve car park. I know that I won’t get far by citing personal satisfaction as a reason (though it is one). So, how about ending with the following list of ‘returns on investment in evaluation’ that I wrote in my Visitor Analysis and Evaluation lecture notes, way back in 1999?

  • Success of communication with non-specialist audiences
  • Development time saved (evidence to support decisions taken and reduced persuasion time)
  • Money saved by getting messages and interactives right first time
  • Visitors consulted meant good PR


Related website:

Visitor Studies Group:  http://visitors.org.uk/


About the Author – Janina Holubecki


“My first degree was in Graphic Design and Photography. After an MA in Heritage Studies, I became Education Officer at Hackney City Farm and then Interpretation Officer for two Areas of Outstanding Natural Beauty in succession. Since 2008, when I went freelance, major contracts have included website and panel text for Lincolnshire Coastal Country Park and a book about the restoration of St Pancras Chambers Hotel. I have been a full member of the Association for Heritage Interpretation since 2003.”