The sorry state of NHS provision of psychological therapy.

Healthcare Today carried the following headline at the end of January: “Figures from the Health and Social Care Information Centre (HSCIC) show fewer than 6% of referrals made under the Improving Access to Psychological Therapies (IAPT) programme in 2012-13 resulted in ‘reliable recovery’”. Shocking, surely? If this were physical health, wouldn’t there be an outcry about wasted money and human resources? Wouldn’t NICE’s confidence in CBT be a little disturbed?

But according to the HSCIC report itself, this is a story of success. “43% of patients completing a course of treatment under IAPT achieved recovery”. In its foreword, Lord Layard writes, “the dataset … supports … the Department of Health’s continuing commitment to parity of care between Mental Health and other Health services”.

So, what is going on? Is it 6% or 43%? The answer lies in the opacity and manipulation of IAPT’s evidence base, and the politics of mental health.

According to the reported statistics, 43% “of those referrals that had completed treatment and were at ‘caseness’ at their first assessment (127,060 referrals)” achieved recovery. However, this group of 127,060 represents only 14% of the 883,968 new referrals made during the year. Just 51,900 patients were deemed to have recovered – 6% of the total number of referrals.

The four-year vision for the IAPT programme, published in February 2011 and repeated with every quarterly progress report, is for a total of 3.2m referrals, 2.6m completed courses of treatment (81% of referrals) and 1.3m ‘recoveries’ (40% of referrals) between 2011 and 2015. Compare this with the actual figures for 2012-13: 14% of referrals completed treatment and 6% of referrals recovered.

Put another way, then, 94% of referrals to IAPT failed to receive a successful course of therapy, and 86% failed to complete any course of therapy at all. What happened to the 757,000 referrals who never completed a course of therapy?
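To see how the 43% and the 6% can both come out of the same report, here is a back-of-the-envelope check using only the counts quoted above. It is a sketch, not a reproduction of the official method: the published 43% rests on the report's 'recovery' measure, while the 51,900 figure relates to 'reliable recovery', which is why the narrow-denominator ratio below comes out a couple of points lower.

```python
# Back-of-the-envelope check on the 2012-13 figures quoted above.
# (A sketch only: the report's 'recovery' and 'reliable recovery'
# definitions differ slightly, so the ratios are approximate.)

new_referrals = 883_968           # all new referrals in 2012-13
completed_at_caseness = 127_060   # completed treatment and 'at caseness' at first assessment
recovered = 51_900                # referrals deemed to have recovered

print(f"Completed treatment: {completed_at_caseness / new_referrals:.0%} of referrals")    # ~14%
print(f"Recovered, narrow denominator: {recovered / completed_at_caseness:.0%}")           # ~41% (reported as 43%)
print(f"Recovered, all referrals: {recovered / new_referrals:.0%}")                        # ~6%
print(f"Referrals without a completed course: {new_referrals - completed_at_caseness:,}")  # ~757,000
```

The headline rate, in other words, depends almost entirely on which denominator you choose.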

The ‘evidence base’ obscures rather than clarifies the picture. We learn that, of the 449,000 referrals who did not enter clinical treatment of any kind, 37% were still on a waiting list at the end of the year, and half of this group (84,000) had been waiting for more than 90 days. The other 283,000 non-starter referrals disappear from the data. Who are they? Where do they go?
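The 283,000 is simply what remains once the waiting-list cases are subtracted. A minimal sketch, using only the rounded numbers quoted above:

```python
# Rough breakdown of the 449,000 referrals that never entered treatment,
# using the rounded figures quoted above (a sketch, not the report's own tally).

never_entered_treatment = 449_000

still_waiting = round(0.37 * never_entered_treatment)      # ~166,000 still on a waiting list at year end
waiting_over_90_days = still_waiting // 2                  # ~83,000-84,000 of those had waited over 90 days
unaccounted_for = never_entered_treatment - still_waiting  # ~283,000 vanish from the data altogether

print(f"Still waiting at year end: {still_waiting:,}")
print(f"Of whom waiting over 90 days: {waiting_over_90_days:,}")
print(f"Unaccounted for: {unaccounted_for:,}")
```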

From a different starting point, we are told that 60% of new referrals ‘ended’ during the year. This figure includes referrals who completed treatment and those who either never started or failed to complete. A quarter of this 60% dropped out of the process ‘unexpectedly’ and another quarter ‘declined the treatment offered’. Why? What happened to these people?

These are not new questions being asked of the IAPT statistical light show.

In November 2013, the We Need to Talk coalition’s report on access to talking therapies estimated, from the results of its survey, that 10% of IAPT referrals had been on a waiting list for over a year, and that 50% had been waiting for 90 days or more.

Tellingly, an article in Pulse Today in November 2013 reported an analysis of IAPT data for the previous year, 2011-12, by researchers from the University of Chester’s Centre for Psychological Therapies in Primary Care (CPTPC), published in two papers in the Journal of Psychological Therapies in Primary Care.

In the first paper, an analysis of IAPT data from the NHS Information Centre for 2011-12, the team reported that the official figure for patients moving to recovery was 44%, based on those patients who were ‘at caseness’ to begin with and were considered to have completed treatment. However, when the researchers considered all patients entering treatment – completing at least one session – the figure fell to just 22%. If the full quota of patients referred for IAPT was considered, the proportion of patients moving to recovery fell even further, to just 12%.
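If the same pool of recovered patients underlies all three percentages (which is how the Pulse Today summary reads), the quoted ratios imply a steep attrition funnel. The following is a rough inference from the ratios alone, not a set of figures taken from the CPTPC papers:

```python
# Inferring the attrition funnel implied by the 2011-12 ratios quoted above:
# 44% of completers, 22% of treatment entrants and 12% of all referrals recovered.
# (A rough inference assuming a common numerator, not data from the papers.)

recovery_of_completers = 0.44
recovery_of_entrants = 0.22
recovery_of_referrals = 0.12

completed_per_entrant = recovery_of_entrants / recovery_of_completers    # ~0.50
entered_per_referral = recovery_of_referrals / recovery_of_entrants      # ~0.55
completed_per_referral = recovery_of_referrals / recovery_of_completers  # ~0.27

print(f"Share of treatment entrants who completed: {completed_per_entrant:.0%}")    # ~50%
print(f"Share of referrals who entered treatment: {entered_per_referral:.0%}")      # ~55%
print(f"Share of referrals who completed treatment: {completed_per_referral:.0%}")  # ~27%
```

On this reading, even in 2011-12 barely a quarter of referrals ever completed a course of treatment.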

So, it seems one year later the proportion of patients moving to recovery has fallen even further, to just 6%.

Apart from the raw numbers, the report is full of obscure terminology and statistical caveats which are surely incomprehensible to the uninitiated and intended to hide as much as they reveal. For example: what a course of treatment consisting of two sessions means; what ‘reliable recovery’ and ‘reliable improvement’ really mean; how to read the complex flow chart illustrating the relationship between the two; and, even more, the perplexing diagrams of the various types and stages of threshold to recovery – all of these are beyond me, at least.

Nor can I get my mind around this caveat concerning which cases may or may not be counted when measuring an outcome of ‘recovery’:

Not all referrals that have ended are eligible to be assessed on outcome measures such as recovery. It is possible for patients to exit the service, or be referred elsewhere, before entering treatment, or without having the required number of appointments to determine the impact of IAPT services. As a result of this, in order to be eligible for assessment a referral must end with at least two treatment appointments, allowing any changes between those two (or more) appointments to be calculated. This is known as completed treatment, but may not be the same figure as the number of referrals with an end reason of completed treatment, as the method allows all referrals with the requisite amount of treatment appointments to be assessed (even if the end reason is that the patient dropped out or declined treatment).

It does not help my understanding to hear that Professor David Clark, a key proponent of the IAPT programme, criticised the Chester researchers by pointing out that it was inappropriate to consider all people referred to the service, as many would not end up being treated, while those who did not complete treatment were people who had had one session of treatment and advice, ‘in many cases entirely appropriately’.

By comparison, I know where I am when the Department of Health academics who made the economic case for the IAPT programme reject the researchers’ claims as resting on ‘flawed analyses’, ‘inappropriate’ calculations and ‘dubious assumptions’. This is what the political game of the evidence base is all about. It makes no difference what the numbers actually say. Statistics are essential to the political lie. In this case, they serve the pursuit of a familiar policy: contempt for mental health.

The truth revealed by the 2012-13 IAPT annual report is that the IAPT programme is failing – a failure obscured by the smoke and mirrors of its statistical evidence.

Paul Atkinson

March 2014