ISO 31000 Risk management techniques
Attributes of a selection of risk assessment tools
If you haven't yet read my previous posts discussing risk assessment tools, you may like to go back and read them before starting this one in the series on Scenario Analysis, Function Analysis, Controls Assessment and Statistical Methods.
Part 1 in this series is available here.
Part 2 in this series is available here.
Just to recap:
ISO 9001 Risk-based thinking could (and I am not saying that it should) be demonstrated by one or more of the techniques in ISO 31010.
Note: the text is based on the contents of Table A.2 – Attributes of a selection of risk assessment tools [Source: IEC/FDIS 31010:2009].
Continuing with ...
SUPPORTING METHODS
We have already looked at the following Look-Up and Supporting Methods that are relevant to risk identification:
- Check-lists
- Brainstorming
- Structured or semi-structured interviews
Brainstorming and structured/semi-structured interviews are techniques often used to improve the accuracy and completeness of risk identification; the Delphi methodology is another:
Delphi technique
A structured collaborative communication technique, originally developed as a systematic, interactive forecasting method which relies on a panel of experts. By combining expert opinions, the aim is to support the source and influence identification, probability and consequence estimation and risk evaluation. The experts answer questionnaires in two or more rounds. After each round, a facilitator provides an anonymous summary of the experts’ forecasts from the previous round as well as the reasons they provided for their judgments. In this way, experts are encouraged to revise their earlier answers in light of the replies of other members of their panel.
Delphi can be used to estimate the probability of adverse and positive outcomes. In the words of ISO 31010:
"Expert opinion can be used in a systematic and structured process to estimate probability. Expert judgements should draw upon all relevant available information including historical, system-specific, organizational-specific, experimental, design, etc. There are a number of formal methods for eliciting expert judgement which provide an aid to the formulation of appropriate questions. The methods available include the Delphi approach, paired comparisons, category rating and absolute probability judgements."1
Despite the mention of probability above, in Table A.1 – Applicability of tools used for risk assessment the Delphi method is marked 'NA' [NA = Not Applicable] for Risk Analysis (assessing Consequence, Probability and Level of risk), although personally I agree with the commentary on page 29 [Clause B.3.2 Use], which states:
"The Delphi technique can be applied at any stage of the risk management process or at any phase of a system life cycle, wherever a consensus of views of experts is needed."2
A true consensus approach that avoids the bias of dominant members of the team can be the wake-up call that management needs to assess risk.
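The round-by-round mechanics described above can be sketched in a few lines of code. This is a minimal illustration of the feedback loop, not part of ISO 31010; the estimates and reasons are invented for the example:

```python
import statistics

def delphi_round(estimates, reasons):
    """Summarize one Delphi round: anonymous statistics plus the panel's
    unattributed reasoning, fed back to the experts before the next round."""
    return {
        "median": statistics.median(estimates),
        "spread": (min(estimates), max(estimates)),
        "reasons": sorted(set(reasons)),  # no names attached to any reason
    }

# Round 1: four experts independently estimate a probability of occurrence
# (all figures are invented for the example)
summary = delphi_round(
    [0.30, 0.10, 0.55, 0.25],
    ["supplier history suggests low risk", "new process, little data yet"],
)
# The facilitator circulates `summary`; the experts revise their estimates,
# and rounds repeat until the spread narrows to an acceptable consensus.
```

The anonymity of the summary is the point of the design: it lets a junior expert's estimate carry the same weight as a senior manager's.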
SWIFT (Structured “what-if” technique)
SWIFT is a system for prompting a team to identify risks, normally used within a facilitated workshop and linked to a risk analysis and evaluation technique.
The first thing to understand about SWIFT is that it was originally developed as a simpler alternative to HAZOP (Hazard and Operability Studies), a qualitative risk identification technique. HAZOP aims to stimulate the imagination of participants to identify potential hazards and operability problems; structure and completeness are provided by guideword prompts. The HAZOP technique was initially developed to analyse chemical process systems and mining operations, but has since been extended to other types of systems and to complex operations such as nuclear power plant operation, with software now used to record the deviations and consequences3. Needless to say, HAZOP is intended for high-risk organizational contexts where appropriate levels of resourcing are available to support its use. SWIFT, on the other hand, was purpose-designed as a sort of 'HAZOP-Lite' needing fewer resources. ISO 31010 rates its 'Resources and capability' requirement as "Medium", so it may be a viable risk identification technique for most small and medium-sized organizations, as well as for larger quality-conscious ones.
The system, procedure, plant item and/or change has to be carefully defined before the study can commence. Both the external and internal contexts are established through interviews and through the study of documents, plans and drawings by the facilitator.
The facilitator asks the participants to raise and discuss:
- known risks and hazards;
- previous experience and incidents;
- known and existing controls and safeguards;
- regulatory requirements and constraints.4
Discussion is facilitated by creating a question using a ‘what-if’ phrase and a prompt word or subject. The ‘what-if’ phrases to be used are “what if…”, “what would happen if…”, “could someone or something…”, “has anyone or anything ever….” The intent is to stimulate the study team into exploring potential scenarios, their causes and consequences and impacts.5
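Mechanically, the facilitator's question set is just the cross-product of the 'what-if' phrases and the prompt subjects. A small sketch makes this concrete; the subjects below are hypothetical examples for a purchasing process, not taken from ISO 31010:

```python
# The four 'what-if' phrase stems quoted above
WHAT_IF_PHRASES = [
    "What if",
    "What would happen if",
    "Could someone or something",
    "Has anyone or anything ever",
]

# Hypothetical prompt subjects for a purchasing process under study
subjects = [
    "the approved-supplier list is out of date",
    "an incoming inspection step is skipped",
]

# Every phrase is paired with every subject to seed the workshop discussion
questions = [
    f"{phrase} {subject}?" for phrase in WHAT_IF_PHRASES for subject in subjects
]
# 4 phrases x 2 subjects -> 8 candidate questions for the study team
```

In practice the team would discard ungrammatical or irrelevant combinations, but the mechanical pairing ensures no prompt subject is overlooked.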
The risks identified are summarized and the team considers the controls already in place - assuming that there are any - before confirming the description of the risk, its causes, consequences and expected controls.
This information is then recorded.
What I particularly like about the SWIFT approach is the inherent discipline that forces the team members to consider the effectiveness of the controls. Assessing risk is one thing, but treating it is another entirely. The team has to agree on a statement of risk control effectiveness which, if it proves to be less than satisfactory, triggers the task of further considering risk treatments and potential controls.
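The record a SWIFT team keeps per risk, and the trigger from an unsatisfactory effectiveness statement to treatment planning, can be sketched as a simple data structure. The field names and effectiveness labels here are my assumptions for illustration, not prescribed by ISO 31010:

```python
from dataclasses import dataclass, field

@dataclass
class SwiftRiskRecord:
    """One risk as recorded in a SWIFT workshop (illustrative structure)."""
    risk: str
    causes: list
    consequences: list
    existing_controls: list
    control_effectiveness: str  # "effective" | "partially effective" | "ineffective"
    treatment_actions: list = field(default_factory=list)

    def needs_treatment(self) -> bool:
        # A less-than-satisfactory effectiveness statement triggers the
        # follow-on task of considering treatments and further controls.
        return self.control_effectiveness != "effective"

record = SwiftRiskRecord(
    risk="Non-conforming material reaches production",
    causes=["goods-in check bypassed under time pressure"],
    consequences=["rework", "customer complaint"],
    existing_controls=["sampling inspection at goods-in"],
    control_effectiveness="partially effective",
)
if record.needs_treatment():
    record.treatment_actions.append("add supplier certificate of conformity")
```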
The application of this team-based model doesn't have to be complex. ISO 31010 simply rates the Complexity of the technique as "Any".6
Human reliability analysis (HRA)
Human reliability assessment (HRA) deals with the impact of humans on system performance and can be used to evaluate human error influences on the system.
At the risk of stating the obvious, human reliability is very important due to the contributions of humans to the resilience of systems and to possible adverse consequences of human errors or oversights, especially when the human is a crucial part of today's large socio-technical systems.
Contrary to the impression that you might receive by reading the relevant section in ISO 31010 - specifically B.20 Human reliability assessment (HRA) - a variety of methods exist for human reliability analysis. These break down into two basic classes of assessment method:
- those based on probabilistic risk assessment (PRA); and
- those based on a cognitive theory of control.
In 2009, the Health and Safety Laboratory compiled a report7 for the Health and Safety Executive (HSE) outlining HRA methods for review.
They identified 35 tools that constituted true HRA techniques and that could be used effectively in the context of health and safety management.
Obviously, it is well beyond the scope of this article to discuss the merits and demerits of all these methods. However, what the HRA tools in the table below illustrate is that there is a large number of risk assessment techniques in the Health & Safety arena that could be applied elsewhere. It's also worth reflecting that risk management is usually associated with financial risk; however, risk assessment techniques have other well-established uses, including helping to maintain safe working environments.
Without being specific at this time, I think that it is possible that some of these tools could be adapted (if they haven't been?) to identify, analyse and evaluate risks and opportunities in the design of quality processes. After all, corrective and preventive actions usually involve human beings!
Table 1: HRA Tools
Acronym for Tool | Expanded name |
--- | --- |
ASEP | Accident Sequence Evaluation Programme |
AIPA | Accident Initiation and Progression Analysis |
APJ | Absolute Probability Judgement |
ATHEANA | A Technique for Human Event Analysis |
CAHR | Connectionism Assessment of Human Reliability |
CARA | Controller Action Reliability Assessment |
CES | Cognitive Environmental Simulation |
CESA | Commission Errors Search and Assessment |
CM | Confusion Matrix |
CODA | Conclusions from occurrences by descriptions of actions |
COGENT | COGnitive EveNt Tree |
COSIMO | Cognitive Simulation Model |
CREAM | Cognitive Reliability and Error Analysis Method |
DNE | Direct Numerical Estimation |
DREAMS | Dynamic Reliability Technique for Error Assessment in Man-machine Systems |
FACE | Framework for Analysing Commission Errors |
HCR | Human Cognitive Reliability |
HEART | Human Error Assessment and Reduction Technique |
HORAAM | Human and Organisational Reliability Analysis in Accident Management |
HRMS | Human Reliability Management System |
INTENT | Not an acronym |
JHEDI | Justified Human Error Data Information |
MAPPS | Maintenance Personnel Performance Simulation |
MERMOS | Méthode d'Évaluation de la Réalisation des Missions Opérateur pour la Sûreté (assessment method for the performance of safety operations) |
As ISO 31010 points out in the section on the 'Limitations' of HRA, many activities of humans do not have a simple pass/fail mode. HRA has difficulty dealing with partial failures or failure in quality or poor decision-making.8
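To give a flavour of how a quantitative HRA tool works, HEART (listed in Table 1 above) multiplies a nominal human error probability by factors for error-producing conditions (EPCs). This is a simplified sketch of that calculation with purely illustrative numbers, not figures from the HEART tables:

```python
def heart_hep(nominal_hep, epcs):
    """HEART-style calculation: each error-producing condition is a
    (max_multiplier, assessed_proportion) pair; its assessed effect is
    (max_multiplier - 1) * proportion + 1, and the effects multiply the
    nominal human error probability. All values here are illustrative."""
    hep = nominal_hep
    for max_effect, proportion in epcs:
        hep *= (max_effect - 1) * proportion + 1
    return min(hep, 1.0)  # a probability cannot exceed 1

# A routine task (nominal HEP 0.02, illustrative) affected by time shortage
# and operator inexperience, each judged to apply only partially.
hep = heart_hep(0.02, [(11, 0.4), (3, 0.5)])
# 0.02 * (10*0.4 + 1) * (2*0.5 + 1) = 0.02 * 5 * 2 = 0.2
```

Note how such a calculation assumes a binary success/failure outcome for the task, which is exactly the limitation of HRA that ISO 31010 flags for partial failures and poor-quality decisions.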
Notes:
1 ISO/IEC 31010:2009 – Risk management – Risk assessment techniques, p.15.
2 Ibid., page 29.
3 British Standard BS IEC 61882:2002, Hazard and operability studies (HAZOP studies) - Application guide, BSI Group.
4 ISO/IEC 31010:2009, B.9.3 Inputs, p.39.
5 Ibid.
6 Ibid., Table A.2 - Attributes of a selection of risk assessment tools.
7 Julie Bell & Justin Holroyd, Review of human reliability assessment methods, Research Report RR679, prepared by the Health and Safety Laboratory for the Health and Safety Executive; first published 2009.
8 ISO/IEC 31010:2009, B.20.6 Strengths and limitations, p.63.
Next time: Function Analysis and Controls Assessment techniques that can be used to identify and assess risks posed to quality and reliability.
This post was written by Michael Shuff