Clinical audit is a way to ensure healthcare is being provided in line with service standards, and it lets both care providers and patients know the service is doing well. The aim of clinical audit is to enable quality improvement where it will deliver better outcomes for patients.
John Grant-Casey is the Programme Manager – National Comparative Audit of Blood Transfusions at NHS Blood and Transplant. In the first of a series of contributions over the next year, John explores what makes a good audit and why audits are a necessary aspect of improving patient experience.
What is a clinical audit?
Clinical audit is a quality improvement technique that can be effective in raising the standard of performance among healthcare professionals. For it to be effective, though, it must be recognised that many healthcare professionals are taught to be autonomous and to make independent assessments and treatment decisions. While they are encouraged to consider evidence-based practice, there is still scope for a practitioner to disregard that evidence should they wish.
Types of problems addressed in clinical audits
The implication of this is that any clinical audit design should make it clear what is being audited, why it is being audited and what can be defined as “good practice”. There is also the consideration of whether collecting data for an audit will have any impact: the results might show that practice is not in line with guidance, but is that sufficient in itself? It is equally important to understand why practice is inadequate and what can be done to improve it. One good approach is to consider the provision of healthcare from two perspectives: the Care Delivery Problem and the Service Delivery Problem.
A simple example: Hospital policy suggests that a doctor in A&E takes a blood sample for cardiac enzymes from a patient presenting with cardiac problems, but the doctor doesn’t do that because he thinks the test is of no value and would rather use his clinical judgement. This is a Care Delivery Problem – the doctor is aware of the need to do the test but doesn’t do it.
In another scenario, the doctor would like to do the test but does not have the correct sample bottle and cannot obtain one. He wants to do the test but is prevented from doing so. This is a Service Delivery Problem.
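The distinction matters when findings are recorded, because the remedy differs in each case. The sketch below is illustrative only: the field names and categories are assumptions for this article, not a prescribed audit data model. It tags each instance of non-compliance as one problem type or the other, so the “why” travels with the “what”:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ProblemType(Enum):
    CARE_DELIVERY = "Care Delivery Problem"        # the clinician knows what to do but does not do it
    SERVICE_DELIVERY = "Service Delivery Problem"  # the clinician wants to do it but is prevented

@dataclass
class AuditFinding:
    case_id: str
    standard_met: bool
    problem_type: Optional[ProblemType] = None  # recorded only when the standard was not met
    reason: str = ""

# Findings mirroring the two cardiac-enzyme scenarios above
findings = [
    AuditFinding("AE-001", False, ProblemType.CARE_DELIVERY,
                 "prefers clinical judgement to the test"),
    AuditFinding("AE-002", False, ProblemType.SERVICE_DELIVERY,
                 "correct sample bottle unavailable"),
]

for f in findings:
    print(f"{f.case_id}: {f.problem_type.value} ({f.reason})")
```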
The imperatives for conducting a clinical audit usually fall into distinct categories:
- We do a lot of this and we want to make sure we do it right as often as possible
- This is expensive to do so we should make sure we are doing it right
- We know we are doing this badly and so should do something to improve
- The rules have just changed and we should make sure we are following them
Clinical Audit Design
Clinical audit design needs to make it clear which of these imperatives is driving the audit. In addition, audits need to be clear about how they are measuring quality: against legal requirements, published guidelines or local requirements. It is common to encapsulate these in a number of Standard statements. Audit is about measuring the behaviour of a healthcare professional, so these statements must be framed in behavioural terms and usually in the present tense, as in “Doctors take a blood sample for cardiac enzymes for every patient presenting in A&E with cardiac symptoms”. That statement tells you who should do what and when, and it is easy to audit because the doctors either do or they don’t. If exceptions are allowed, they can be incorporated into the standard statement.
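To illustrate how a behavioural standard statement lends itself to measurement, here is a minimal sketch in Python. The record fields and the exception rule are assumptions made for illustration; it simply checks each audited case against the statement while allowing agreed exceptions:

```python
# A standard statement reduced to "who does what, and when", plus agreed exceptions
STANDARD = ("Doctors take a blood sample for cardiac enzymes for every "
            "patient presenting in A&E with cardiac symptoms")
AGREED_EXCEPTIONS = {"patient refused", "sample taken pre-arrival"}

# Hypothetical audited cases
cases = [
    {"case_id": "001", "sample_taken": True,  "exception": None},
    {"case_id": "002", "sample_taken": False, "exception": "patient refused"},
    {"case_id": "003", "sample_taken": False, "exception": None},
]

def complies(case: dict) -> bool:
    """A case complies if the sample was taken or an agreed exception applies."""
    return case["sample_taken"] or case["exception"] in AGREED_EXCEPTIONS

rate = sum(complies(c) for c in cases) / len(cases)
print(f"Compliance with standard: {rate:.0%}")  # -> 67%
```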
The timing of an audit may be important when considered alongside the issuance of guidelines. Guidance, whether issued by a national body, a medical Royal College, or in a publication, takes time to embed, so it might be worth waiting for perhaps six months after the guidance emerges before designing an audit to capture compliance with it. Equally, every time relevant guidance is issued it presents an opportunity to use audit and feedback as a vehicle for the guideline to act as a change agent.
Guidelines, though, are just that – a guide to practice. They are not set in stone, nor are they mandatory. So when attempting to use guidelines as the basis for audit and feedback, it is important to ensure that those guidelines have integrity. Do they come from a respected source, and are they backed up with sufficient evidence, such as trials data? Do they translate well into practice? Do they require heavy financial investment before they can be fully implemented? Do they contain arguments strong enough to persuade the autonomous practitioner of the need for change?
What makes a good audit?
A good audit design is one which has involved the Stakeholders. These are the people who have a vested interest in improving service, and in our example could be people responsible for managing and delivering services, as well as the doctors who are expected to carry out the tasks being audited. With their agreement and involvement the audit will have much more leverage for change.
So by now we know why we are doing the audit, how we are measuring quality of care, and exactly what it is about care, and whose care, we are monitoring.
A good audit then includes questions, and only those questions, which address the problem being audited. It is very easy to collect more data than is actually needed to comment on appropriate practice, but collecting too much data, or having too many audit standards, will cause audit fatigue. Audit is not research, and any proposed data item that does not directly relate to the matter being audited must be rejected. Having arrived at a set of questions or a list of data items to be collected, the next stage is a pilot audit.
Pilot Audits
Pilot audits should be designed to test the entire audit process and should include:
- Test data collection to see if the information to be gathered is actually available
- Review collected data to see if it generates the right kind of information in order to confirm that the questions are fit for purpose
- Draft an audit data analysis plan: what are you going to do with the data you collect? (A minimal sketch of such a plan follows this list)
- Draft an audit feedback document: how will you use the data you have analysed to let people know how good or how bad practice is?
- Test out the feedback by producing sample reports and asking people to tell you what they think the audit feedback says
- Form a plan for how you could use the data to implement change
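As a flavour of the data analysis plan mentioned above, the following minimal sketch (hypothetical departments and values) turns a set of piloted records into the per-department compliance rates that would anchor the feedback report:

```python
from collections import defaultdict

# Illustrative pilot records: (department, standard_met). A real audit would
# draw these from the agreed data set, not hard-coded values.
records = [
    ("A&E", True), ("A&E", False), ("A&E", True),
    ("Acute Medicine", True), ("Acute Medicine", True),
]

# Analysis plan step: compliance rate per department, the core of the feedback
totals = defaultdict(lambda: [0, 0])  # department -> [cases met, cases audited]
for dept, met in records:
    totals[dept][0] += int(met)
    totals[dept][1] += 1

for dept, (met, audited) in sorted(totals.items()):
    print(f"{dept}: {met}/{audited} cases met the standard ({met / audited:.0%})")
```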
Piloting all of these means that the actual audit will run more smoothly and more quickly because all the process steps have been rehearsed. This is especially important when we consider the need for Rapid Feedback. A good starting point is to pare back the feedback to the absolute minimum necessary to get the message across – short, sharp, relevant, timely reporting.
Time management and data collection in audits
Behaviour change theory tells us that feedback is most effective if it is given close in time to the wanted or unwanted behaviour being enacted. Audits should endeavour to provide clear and concise feedback as soon as possible after the data has been collected. Performing an audit today and discussing the results with practitioners tomorrow is much more likely to achieve change than a delay of even a week. Any longer and those being audited may see the results as out of date and therefore not relevant. There is also the possibility, where there is a prolonged interval between data collection and feedback, that the results become irrelevant because there has been a change of staff or a change of procedure.
A good audit design is one which recognises that the subject of the audit is the healthcare professional and not the patient. When deciding how much data to collect, it is therefore important to agree with the Stakeholders what quantity of information they require before they are willing to act upon the findings: auditing for a week may be enough, but it may take six months to capture a sufficiently wide range of behaviours.
Another important factor to consider is the credibility of the collected data. Having agreed what might be a credible sample size to motivate change, the quality of the data used to start the process must be sufficient to gain acceptance among the practitioners being audited. Data needs to be complete, which may require effort to chase up missing data. Data should also be homogeneous, so that it is clear that like is being compared to like. Data collection for clinical trials is necessarily very robust, and the same commitment to data quality must be shown when conducting a clinical audit. Many practitioners can easily be persuaded to disregard audit findings if there is insufficient integrity in the data.
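In practice, such checks can be scripted before any feedback is produced. This minimal sketch (the record fields are illustrative assumptions) flags incomplete records for chasing up and filters the data to a like-for-like population:

```python
# Illustrative records collected during the audit; None marks a missing value.
records = [
    {"case_id": "001", "sample_taken": True,  "presentation": "cardiac"},
    {"case_id": "002", "sample_taken": None,  "presentation": "cardiac"},    # incomplete
    {"case_id": "003", "sample_taken": False, "presentation": "abdominal"},  # not like-for-like
]

# Completeness: flag records with missing values so they can be chased up
incomplete = [r["case_id"] for r in records if any(v is None for v in r.values())]

# Homogeneity: keep only the population the standard actually applies to
comparable = [r for r in records if r["presentation"] == "cardiac"]

print("Chase up missing data for cases:", incomplete)
print("Cases comparable like-for-like:", len(comparable))
```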
The role of managers in audits
Adopting the approach that uses the Care Delivery Problem and the Service Delivery Problem greatly enhances the power of the audit to enact change. Measuring the extent of non-compliant behaviour will tell you if there is a problem, and often how big that problem is, but is of no use to those managing the service because they are not presented with data on why the problem occurs. Without knowing this, either no action can be taken, or resources are wasted in implementing solutions that will not necessarily fit the problem.
Finding out why moves us away from the normal “snapshot” audit and requires us to use a more “dynamic” audit style. This dynamic style allows us to recognise that Human Factors are involved in all but robotic care delivery, and those human factors must be investigated at the time of the audit. Put simply, we know today that our doctor is not taking a blood sample for cardiac enzymes, but we don’t know why. Tomorrow, we talk to the doctor to find out. If it’s a Care Delivery Problem then we must do something to persuade the doctor that he ought to take a sample. But if it’s a Service Delivery Problem then we must investigate further to find out why the blood bottles the doctor needs are not there. We must work with service managers to rectify the supply chain problem. Better still, we could work to empower the doctor to resolve the problem: ‘If you find there are no bottles there, this is how you go about getting some’.
Reporting back to managers on whether there is a problem, and why the problem occurs, is a clear step forward from the “snapshot” approach, but we can, and must, go one step further: tell the manager what can be done about it.
No manager likes a problem, but problems are more palatable if they come with ready-made solutions. To achieve this, the auditor can try one or more things to change the doctor’s behaviour or to improve the bottle supply problem. Not everything that is tried will work first time, and it may take several attempts, or several iterations, to find the best solution. In a sense, then, the audit operates as an implementation laboratory, putting something in place to change personal or system behaviour and continuing to collect audit data to see if an improvement can be detected.
Once that has been achieved, we can happily report to the manager that there is a problem, that there are implications of that problem (poor patient care, poor use of resources, we are getting this wrong for a lot of people, we are not complying with the rules), but that these are the solutions that we can show have been effective in resolving those problems.
The audit cycle is based on the concept of Plan, Do, Check, Act. Audits, though, cannot really become effective change agents unless there is continued audit activity until the highest possible level of behavioural change or system re-engineering has been achieved. At that point, further auditing is unlikely to bring additional benefit, so the audit can be paused until circumstances suggest a re-audit would be beneficial, such as the rotation of medical staff or the establishment of a new treatment facility.
Audits that have simple data sets, collected and fed back rapidly, with an investigation into the cause of unwanted behaviour and the creation of interventions known to be effective, and which then cease after a significant improvement, are probably of most value.
To summarise:
- Clearly state what is being audited
- Clearly state why that is being audited
- Clearly define “good practice”, and provide any evidence-base to support it
- Define, in a series of statements, what behaviour is being audited. Statements should include who does what and when (and possibly how)
- Assemble a question or data set that only gathers information to inform performance against the series of statements. Avoid irrelevant questions. The key is “does this happen?” and not “what does happen?”
- Involve stakeholders in the design. Unless those with a vested interest are engaged, they are unlikely to act to change
- Pilot the entire audit process from data collection through to implementing initiatives to achieve change
- Provide feedback to those being audited as quickly as possible, in a format that is meaningful to that person
- Investigate with that person why the unwanted behaviour occurs and work with them towards improvement
- Continue to audit and feedback until you can show a reasonable level of improvement in care or in the delivery of services that support care
And finally, share your findings. Let others learn from your efforts, thus spreading the benefit.