I am not going to concentrate on the creation of the questions themselves. That is a wholly separate topic around competency frameworks which I intend to write about separately. Instead, I wish to concentrate on the structure of the questions rather than their content.
We have found the following guidelines useful – but please note they are only guidelines!
- Between 20 and 40 questions is about right
- Usually we see these as groups of 3-5 questions per competency
- Adding a narrative question for each competency is normally the right way to go – we have adopted a question that one of our partners, Peter Hyde, used for a common client. “Please provide comments/evidence/examples that support your answers above”. We don’t always stick to this – but it is a good start.
- Consider whether every group of people can answer every question. If not, then exclude those questions for that group. Peers, for example, may answer only a subset of the overall question set.
- It can be useful to have a question (or even 2 or 3) at the end of the 360 that asks people to give broader feedback or cover points they would like to make
- Questions should be brief, clear and unambiguous, and describe an observable behaviour
- If you have people for whom English is not their first language then we would recommend translating the questions. You probably don’t need to translate the whole system, but the nuance of the questions matters.
I’ll cover rating scales in my next post so I won’t go into detail here – but clearly whether you are using a frequency rating scale (e.g. Often, rarely,…) or an observational rating scale (e.g. Strong performer, development need) makes a difference to how you should word the question.
Whether we ask the right questions is obviously an important contributor to the success of the 360. I would say, though, that if you build the reports first and ensure the competency framework is drawn down from the strategy then the questions should start to write themselves.