The major shortcomings of self-regulation of the medical profession occur at local level. Doctors may have suspicions about a colleague's performance, but these suspicions are generally rooted in hospital gossip rather than in audit or other available data. Even medical or lay managers have little, if any, evidence on which they can act. If this situation is to improve, it is not sufficient for the General Medical Council to call for a culture change so that doctors put patients' interests first rather than covering up for deficient colleagues. What is needed is for data to be available so that managers may take rational, transparent, and informed action which is seen to be fair to all parties.
To achieve this, a robust system of appraisal for consultants is needed which goes somewhat beyond the present annual job plan review (when and if this takes place). It is of course necessary and proper to review regularly the work content of a consultant's job, to ascertain whether adjustments should be made to the balance of in-patient work, out-patient clinics, operating sessions, and so on. It is also important to ensure that more and more work is not simply heaped onto the consultant; that if new work is to be added, some other work is taken away; that facilities and staffing keep pace with modern practice; and that the consultant has an appropriate and adequate plan for continuing professional development which meets the requirements of the College or specialist organisation. All these topics are an appropriate part of the annual job plan review, and it is in the interest of all parties that it should take place.
But the real cause for concern in cases of underperformance is the standard of clinical work, and this may not be picked up in the job plan review. Clearly, a review of clinical work and standards cannot be conducted by a lay person, or even by a doctor in a different specialty. It must be undertaken by doctors working in the same field, and preferably in similar institutions, who have detailed knowledge of the standards, results, facilities, and staffing required to produce an acceptable level of care for patients. Increasingly, consultants subspecialise: a hospital with five general surgeons may find that each has a substantially different interest. Peer review will therefore increasingly need to come from outside. Indeed, even when there are doctors within the hospital practising similarly who might in theory review each other, there is a powerful argument against this: external peer review can rise above local factors which may be contributing to underperformance, such as chronic underinvestment in certain departments, the inability of some consultants to shout as loudly as others in the scramble for resources, and personality problems and feuds between colleagues or departments. All of these will be seen more objectively by external than by internal reviewers.
The review needs to examine the performance of individual consultants, and will thus contribute to revalidation, but it must also look at the whole department so that recommendations about the total working environment can be made. Where concerns are raised, whether about the individual or about facilities and staffing, recommendations should be made to rectify the problem. The British Thoracic Society [1] has pioneered a system of external peer review, and its methodology might well be the basis for developing a national system in all specialties.
If, as seems likely, the key players in any such scheme are the medical Royal Colleges working closely with the specialist associations, consideration should be given to how visits of inspection might be minimised and amalgamated. Hospitals cannot, and should not have to, cope with a state of perpetual visitation: visits to accredit departments, visits to review individuals, visits to inspect training, and so on. It cannot be beyond the wit of man to devise a system in which a single visit per specialty at five-yearly intervals (if the NHS can afford this) subsumes all these functions. The results of such visits could provide evidence for the Commission for Health Improvement that, as far as doctors were concerned, clinical governance was in place and working.
A final piece of the jigsaw is needed if external peer review is to be meaningful: a robust and ever-growing set of outcome and process data, with national figures for the NHS as a whole for comparison. Data do not necessarily need to be adjusted for case mix, provided that unadjusted results are compared with the range of outcomes obtained by most doctors performing that procedure. This would allow individual consultants and their managers, through the hospital audit mechanism, to determine when results fall short of what could be expected in the NHS, and to take appropriate action. If the problem were excess mortality or another very serious matter, the decision might be taken urgently to stop performing the procedure until the matter could be investigated, analysed, and put right: a much more sensible and constructive way forward than a culture of ‘naming and shaming’.
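The comparison proposed here, setting a consultant's unadjusted results against the range achieved by most doctors performing the procedure, can be made concrete with a simple statistical screen. The sketch below is illustrative only and is not part of the original proposal: it assumes a binomial model with a normal approximation, and all figures (a 2% national mortality rate, one consultant with 4 deaths in 120 cases) are invented for the example.

```python
import math

def control_limits(national_rate: float, n_cases: int, z: float = 1.96):
    """Approximate 95% control limits around a national event rate for a
    consultant performing n_cases procedures (binomial, normal approximation)."""
    se = math.sqrt(national_rate * (1 - national_rate) / n_cases)
    return max(0.0, national_rate - z * se), min(1.0, national_rate + z * se)

# Invented figures for illustration: 2% national mortality for the
# procedure; one consultant with 4 deaths in 120 cases.
national_rate = 0.02
deaths, cases = 4, 120
observed = deaths / cases

lower, upper = control_limits(national_rate, cases)
if observed > upper:
    print(f"Observed rate {observed:.1%} is above the upper limit {upper:.1%}: investigate")
else:
    print(f"Observed rate {observed:.1%} lies within the expected range {lower:.1%}-{upper:.1%}")
```

On these invented figures, the consultant's 3.3% mortality lies within the range expected for 120 cases, which illustrates why a raw rate above the national average need not, by itself, signal underperformance.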
Such a scheme would not be cheap. The costs would include training doctors in appraisal techniques, the expenses incurred by visiting teams, the preparation time required to produce the documentation for the review, and the cost of work not done by doctors while reviewing and being reviewed. Nevertheless, if we are serious about clinical governance and the need to protect patients from underperforming doctors and ill-equipped departments, such a system, or something very similar, is vital. If the NHS really wishes to emphasise quality rather than quantity, it must be prepared to pay for the measures needed to ensure that standards are maintained, to the benefit of patient care.