Lancashire Care NHS Foundation Trust & Anor v Lancashire County Council

Neutral Citation Number: [2018] EWHC 1589 (TCC)
Case No: HT-2017-00386

IN THE HIGH COURT OF JUSTICE

QUEEN'S BENCH DIVISION

BUSINESS AND PROPERTY COURTS

Royal Courts of Justice

Strand, London, WC2A 2LL

Date: 22/06/2018

Before :

THE HONOURABLE MR JUSTICE STUART-SMITH

Between :

1. Lancashire Care NHS Foundation Trust

2. Blackpool Teaching Hospitals NHS Foundation Trust

Claimants

- and -

Lancashire County Council

Defendant

Rob Williams (instructed by Hempsons) for the Claimants

Rhodri Williams QC (instructed by Lancashire CC Legal Department) for the Defendant

Hearing dates: 23, 24, 25, 26 April, 1 May 2018

Judgment Approved

Mr Justice Stuart-Smith:

Introduction

1.

This judgment follows an expedited trial concerning the procurement by the Defendant of a public contract relating to the provision of Public Health Nursing Services for persons aged 0-19 in Lancashire [“the Contract”]. The Claimants [“the Trusts”] are two local NHS Foundation Trusts and are the incumbent providers of the service. The Defendant [“the Council”] is the local County Council.

2.

The Council awarded the Contract to Virgin Care Services Limited [“Virgin”], thereby rejecting the Trusts’ tender. The Trusts and Virgin were the only bidders for the Contract. The Contract was due to commence on 1 April 2018 with a potential duration of up to five years and a potential value of £104 million. The Trusts challenge the award of the Contract to Virgin. Pursuant to a ruling by Fraser J on 25 January 2018, the conclusion of the Contract with Virgin remains suspended pending the outcome of this trial: see [2018] EWHC 200 (TCC). As things stand, the Trusts continue to provide the services which would be the subject of the Contract on an interim basis. Hence the need for expedition.

3.

The Trusts contend that the award of the Contract to Virgin is unlawful and should be set aside. The basis for that challenge has developed as the Council has progressively made information available to the Trusts since the initial mounting of the challenge. By the end of the trial, the Trusts’ grounds of claim were summarised in an agreed list of issues, which I set out below. The parties have agreed that the present hearing and judgment should deal with liability and causation but not remedies. The Court has endorsed that approach.

4.

The revised list of issues is as follows:

1.

As to the quality evaluation generally:

a.

Are the reasons given by the Defendant for the scores awarded to the Claimants and Virgin for the quality evaluation questions sufficient in law?

b.

Did the Defendant in fact apply or depart from the stated award criteria and/or evaluation methodology when evaluating tenders?

2.

As to the document described in the RAPoC para 27A as “the Scoring Prompt”:

a.

Did the Scoring Prompt contain or give effect to undisclosed and/or unlawful sub-criteria, weightings and/or model answers (described in the RAPoC as the Undisclosed Criteria)?

b.

Was the Scoring Prompt lawfully applied or used in the evaluation of tenders by Ms Slade or otherwise and/or did its use render the evaluation unlawful?

3.

Did the Defendant make manifest errors or breach the principles of procurement law:

a.

in the evaluation of the Claimants’ tender for any or all of questions 2, 3, 5 or 6? and/or

b.

in the evaluation of Virgin’s tender for any or all of questions 2, 3, 5, 6 or 7?

4.

In so far as the Court finds any legal breach under points 1 to 3 above:

a.

Did any such breach or combination of breaches cause the Claimants to lose the award of the Contract to Virgin; and/or

b.

Did any such breach or combination of breaches cause the Claimants to lose a chance of the award of the Contract?

5.

For the reasons set out later in the judgment, my conclusions on the issues for determination are as follows:

i)

The Trusts have proved a material breach under Issue 1(a): the reasons given by the Defendant for the scores awarded to the Claimants and Virgin for the quality evaluation questions are insufficient in law;

ii)

The Trusts have not proved a breach under Issues 1(b) and 2;

iii)

The pervasiveness of the Council’s breach under Issue 1(a) has the effect that the Court cannot determine the issue of manifest error in the evaluation of either the Trusts’ or Virgin’s tender without conducting a full re-mark, which the Court will not do;

iv)

The consequence of the breach under Issue 1(a) is that the decision of the Council to award the contract to Virgin must be set aside.

Transparency and Criteria

6.

The procurement was carried out using the so-called “light touch procedure” [“LTP”]. Under the LTP, the Council was required to comply with regulations 74 to 76 (in Part 2, Chapter 3, Section 7) of the Public Contracts Regulations 2015 and the general principles of procurement law. Much of the debate at trial concerned questions of transparency and “criteria”. In order to make sense of what follows, it is convenient to review at this stage the obligation of transparency that is central to public procurements such as the present and how that should affect the use of terms such as “criteria”, “sub-criteria” and so on.

7.

Regulation 76 provides:

“(1)

Contracting authorities shall determine the procedures that are to be applied in connection with the award of contracts subject to this Section, and may take into account the specificities of the services in question.

(2)

Those procedures shall be at least sufficient to ensure compliance with the principles of transparency and equal treatment of economic operators.

(3)

In particular, where, in accordance with regulation 75, a contract notice or prior information notice has been published in relation to a given procurement, the contracting authority shall, except in the circumstances mentioned in paragraph (4), conduct the procurement, and award any resulting contract, in conformity with the information contained in the notice about—

(a)

conditions for participation,

(b)

time limits for contacting the contracting authority, and

(c)

the award procedure to be applied.

(4)

The contracting authority may, however, conduct the procurement, and award any resulting contract, in a way which is not in conformity with that information, but only if all the following conditions are met:—

(a)

the failure to conform does not, in the particular circumstances, amount to a breach of the principles of transparency and equal treatment of economic operators;

(b)

the contracting authority has, before proceeding in reliance on sub-paragraph (a)—

(i)

given due consideration to the matter,

(ii)

concluded that sub-paragraph (a) is applicable,

(iii)

documented that conclusion and the reasons for it in accordance with regulation 84(7) and (8), and

(iv)

informed the participants of the respects in which the contracting authority intends to proceed in a way which is not in conformity with the information contained in the notice.”

8.

In Lion Apparel Systems Limited v Firebuy Ltd [2007] EWHC 2179 (Ch) Morgan J summarised the applicable principles in terms which I gratefully endorse and adopt:

“THE RELEVANT LEGAL PRINCIPLES

26.

The procurement process must comply with Council Directive 92/50/EEC, the 1993 Regulations and any relevant enforceable Community obligation.

27.

The principally relevant enforceable Community obligations are obligations on the part of the Authority to treat bidders equally and in a non-discriminatory way and to act in a transparent way.

28.

The purpose of the Directive and the Regulations is to ensure that the Authority is guided only by economic considerations.

29.

The criteria used by the Authority must be transparent, objective and related to the proposed contract.

30.

When the Authority publishes its criteria, which conform to the above requirements, it must then apply those criteria. The published criteria may contain express provision for their amendment. If those provisions are complied with, then the criteria may be amended and the Authority may, and must, then comply with the amended criteria.

31.

In relation to equality of treatment, speaking generally, this involves treating equal cases equally and different cases differently.

32.

Council Directive 89/665/EEC (the remedies directive) requires Member States to take measures necessary to ensure that decisions taken by an Authority in this context may be reviewed effectively and as rapidly as possible on the grounds that such a decision may have infringed Community law in the field of public procurement or national rules implementing that law.

33.

Regulation 32 of the 1993 Regulations (which I consider below) gives effect to the remedies directive.

34.

When the court is asked to review a decision taken, or a step taken, in a procurement process, it will apply the above principles.

35.

The court must carry out its review with the appropriate degree of scrutiny to ensure that the above principles for public procurement have been complied with, that the facts relied upon by the Authority are correct and that there is no manifest error of assessment or misuse of power.

36.

If the Authority has not complied with its obligations as to equality, transparency or objectivity, then there is no scope for the Authority to have a “margin of appreciation” as to the extent to which it will, or will not, comply with its obligations.

37.

In relation to matters of judgment, or assessment, the Authority does have a margin of appreciation so that the court should only disturb the Authority’s decision where it has committed a “manifest error”.

38.

When referring to “manifest” error, the word “manifest” does not require any exaggerated description of obviousness. A case of “manifest error” is a case where an error has clearly been made.

39.

I take the above principles from the decision of the Supreme Court of Ireland in SIAC CONSTRUCTION V MAYO COUNTY COUNCIL [2003] EuLR 1, and the decision of the Court of First Instance in EVROPAIKI DYNAMIKI V COMMISSION 12th July 2007.”

A summary to similar effect was provided by Ramsey J in Mears Ltd v Leeds City Council [2011] EWHC 1031 (TCC) at [122], which I also endorse and adopt without setting it out in full here.

9.

In the present case, the OJEU Notice cross-referred to the procurement documents for a statement of the award criteria. As a result, the obligation identified in Regulation 76(3) applies to the award procedures (including those setting out how the bids would be assessed and marked) as set out in the ITT. The proper approach to the design of the process is conveniently and accurately summarised in guidance published by the Crown Commercial Service in relation to the LTP, which states:

“The key things are to be clear about what your process will involve, making sure the process ensures transparency and equal treatment of suppliers, and sticking to the process that you decide to run.

It would also be necessary to be transparent about any award criteria to be used, and the weightings for the criteria and sub-criteria, to comply with the general transparency obligations.”

10.

The context for this advice is that tender documents are to be construed on the basis of an objective standard, that is, the standard of the reasonably well informed and normally diligent (RWIND) tenderer. It follows that the tender documents must state the process to be followed, including how marking of bids will be carried out, in terms that can be objectively assessed and understood by a RWIND tenderer; and, having done so, the contracting authority must stick to it.

11.

The literature and authorities, both in the UK and of European origin, frequently refer to criteria and sub-criteria. The use of these terms may be convenient on the facts of a given case or tender; but it is not desirable to try to fix them with immutable meanings for universal application. Typically, the terms are applied when tender documents identify a requirement that is to be addressed by tenderers and to which an aggregate mark may be allocated (“a criterion”), which mark in turn may be the product of a number of other requirements that fall within the overall scope of the criterion (“sub-criteria”), whether or not specific marks are to be allocated for individual criteria. What matters, in my judgment, is that the authority should identify (a) what the tenderer is required to address and (b) how marks are going to be awarded. Once it does that, it must (subject to exceptions that do not apply in this case) stick to what it has said it requires of tenderers and how it has said it will mark the tenders. Provided it does so, it does not matter whether the language of criteria and sub-criteria is used at all. On the other hand, where the terms are used, that usually denotes that the criteria (or sub-criteria) have been defined and are basic elements in the approach to be adopted and applied consistently by tenderers and the authority alike. Put another way, “potential tenderers should be aware of all the elements to be taken into account by the contracting authority in identifying the economically most advantageous offer, and their relative importance, when they prepare their tenders…”: see Case 532/06 Lianakis [2008] ECR I-251 at [36].

The Factual Background

12.

There is a mandatory national programme known as the Healthy Child Programme [“HCP”]. The Council is responsible for HCP in Lancashire and for procuring contracts to deliver it. These proceedings concern the 0-19 HCP in Lancashire. The challenge relates to Lot 1 of Lancashire’s overall programme, which is for Public Health Nursing Services. The services are community based and are led by Health Visiting and School Nursing Services. The Contract for Lot 1 is for an initial three year period with an option to extend for up to another two years to a maximum of five years. It is worth an estimated £104m over five years (approximately £20m a year) and represents over 95% of the total estimated value of the HCP programme in Lancashire.

13.

There is and was at all material times a further category of services known as Well Being, Prevention and Early Help [“WPEH”] which have been provided separately. The successful bidder for Lot 1 was to work with the Council to merge services relating to WPEH and Public Health Nursing Services into a shared delivery framework.

14.

The competition for the Contract took place between 29 September 2017 and 27 November 2017. It commenced with the advertisement of the Contract in the Official Journal of the European Union on 29 September 2017.

15.

The terms of the competition were further set out in the ITT, which was also published on 29 September 2017. Within the ITT:

i)

Appendix K was entitled “Evaluation Criteria Selection and Award”: see [16] below;

ii)

Appendix G was entitled “Award Criteria – Lot 1”: see [19] and Annexe 1 below;

iii)

The Service Specification at Appendix A defined the services for which potential Providers were to tender. As was said during trial, it provided the backdrop and context for any and every aspect of the invited tenders. A tender that failed to have regard to the relevant requirements of the specification would run the risk of irrelevance.

Appendix K

16.

Appendix K explained that all submissions would be marked over a two-stage process. Stage 1 involved acceptance or rejection of Tenderers based on business standing, financial standing, technical and professional ability. The Trusts and Virgin passed Stage 1 and nothing more need be said about it.

17.

Appendix K detailed the “Stage 2 Award Evaluation Criteria (for each Lot)” as being “Non Price (80%)” and “Price (20%)”, as follows:

Stage 2 Award Evaluation criteria (for each Lot) – Non Price (80%)

Each question will be scored out of 5 (please see “The interpretations of the non-pricing scorings”).

Weighted marks for each question within a criteria are added together to give the total mark for the criterion.

The Award stage criteria section demonstrates how this will apply to a tenderers score if they received, for each question, the following scores against a variety of weightings:

The interpretations of the non-price scorings are:

SCORE – INTERPRETATION

0 – No response. Not acceptable.

1 – Poor response. The response demonstrates a limited or unacceptable understanding and lacks evidence that requirements will be met with reservations.

2 – Acceptable response: The response demonstrates a sufficient or acceptable understanding and evidence that requirements will be met.

3 – Good response: The response demonstrates a thorough or good understanding and evidence that requirements will be met.

4 – Very Good response. The response demonstrates a substantial or very good understanding and evidence that requirements will be met.

5 – Excellent response: The response demonstrates a complete or excellent understanding and evidence that requirements will be met.

Stage 2 Award Evaluation Criteria (for each Lot) – Price (20%)

Contract Pricing

Having considered all aspects of Authority’s requirements detailed in this ITT, the Tenderer must provide a detailed pricing proposal in Appendix L …

Price Scoring

Tenderers will be required to submit an Overall Price for the delivery of services, for the initial three year contract period; and the following will apply to each Lot tendered for.

The Overall Price will comprise of the annual price (based on Service delivery over 12 months) for each year of the three year initial contract period. Tenderers must submit their annual price (with a full breakdown of cost) for each of the three years, and also submit their Overall Price.

The Overall Price, will be used to calculate the pricing score; and should be the price over the total initial three year contract period for delivering Services as set out in the Specification(s).

Award Criteria Overview

No Tenderer will be advantaged or disadvantaged through the scoring mechanism. Tenderers will be required to complete a separate submission for each Lot as directed and must complete each Submission fully for each Lot. References made within any proposals to information contained within other Lot submissions will be disregarded.

Please ensure that you tailor your answers to the specific Lot your Submission is for.

All Submissions for each Lot will be evaluated and scored on the basis of the most economically advantageous tender against the evaluation criteria which is based on the quality/price split outlined in this document.

Following evaluation and scoring all submissions in each Lot will be ranked based on the highest overall scored.

Score for Question x Weighting Factor (As shown in the award criteria table) = Weighted score.

Lot 1

AWARD CRITERIA (maximum pre-weighted score / weighting factor / maximum score %)

1. Implementation and Mobilisation Plan – 5 / 3 / 15%

2. Delivery model – 5 / 4 / 20%

3. Service Standards – 5 / 2 / 10%

4. Workforce – 5 / 1.5 / 7.5%

5. Safeguarding and Early Help – 5 / 2 / 10%

6. Integration with WPEH – 5 / 1.5 / 7.5%

7. Social Value – 5 / 2 / 10%

MAXIMUM WEIGHTED QUALITY SCORE: 80%

MAXIMUM WEIGHTED PRICE SCORE: 20%
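By way of a purely hypothetical illustration of the stated formula (the marks below are not drawn from either tender), a response scored 4 out of 5 on Q2, which carries a weighting factor of 4, would contribute a weighted score of 16 out of a possible 20 percentage points; and the seven quality maxima sum to the 80% maximum weighted quality score:

\[
\text{Weighted score} = \text{Score} \times \text{Weighting factor}, \qquad 4 \times 4 = 16\%;
\]
\[
15 + 20 + 10 + 7.5 + 10 + 7.5 + 10 = 80\%.
\]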

18.

On its face, Appendix K identified “Non-price” and “Price” as being criteria. It referred to questions that were to be answered as “question(s) within a criteria [sic]” for which marks would be given which, when added together, would “give the total mark for that criterion.” This is analogous to describing the questions as “sub-criteria”. The seven headings in the final “Award Criteria” grid were addressed in further detail in Appendix G, where it became apparent that each heading was the subject of a question.

Appendix G

19.

Appendix G assumed central importance during the trial and is, for that reason, set out almost in full in Annexe 1. After a preliminary note, it set out seven requirements, which were referred to in the Appendix as questions. Each question was followed by a number of areas which were presented as bullet points. For convenience I shall refer to the seven requirements generically as questions and identify individual questions as Q1 to Q7; and I shall refer to the bullet points generically as such.

20.

A number of points may immediately be noted about Appendix G:

i)

In the initial passage commencing with the words “Please Note” the reference to “Award Criteria Questions” is a reference to Q1 to Q7. The reference to “sub-criteria” is a reference to the bullet points. The reference to “sub-criteria” could not be a reference to Q1 to Q7 as the questions expressly carried different weighting;

ii)

No marks are specifically attributed to the bullet points under any question. The reference to sub-criteria carrying equal weighting should therefore be seen as a qualitative statement rather than a quantitative one. I accept the Trusts’ submission that the Council was not required to allocate a specific mark to each bullet point; rather it was to reach the overall mark for the question having taken into account the requirements identified by the bullet points individually and cumulatively;

iii)

The phrase “Tenderers are advised that this list is not exhaustive”, which appears before each list of bullet points, gives rise to difficulties of interpretation. The starting point, to my mind, must be that transparency and fairness require that the Council should not bring undisclosed criteria or sub-criteria into account. However, an interpretation that tries to give coherence to the tender documents as a whole must take into account the fact that the specification provided the backdrop and, though not expressly referred to in the bullet points, would inevitably (and rightly) be considered by a RWIND tenderer when deciding how to reply to the questions and to cover the areas identified by the bullet points. Read in this way, the reference to the list of bullet points not being exhaustive means that the bullet points themselves are not exhaustive descriptions of the scope and content of the answers to be given. Adopting this approach, the use of the word “areas” denotes the generality and scope of the bullet points;

iv)

The Trusts’ evidence (which was not materially disputed by the Council) was that the maximum character counts restricted the amount of potentially relevant information that could be included under any particular question. Choices and decisions therefore had to be made about what to include and what to omit. The Trusts conducted a detailed investigation and analysis of the questions, the bullet points and the Service Specification to try to maximise the relevance of their responses, with a view to maximising their marks under the Appendix K scoring system. I accept that this exercise led to the Trusts taking decisions about what to include and what to omit from their Tender responses. It is evident from any review of the Trusts’ answers that this is what they did, and that they did it thoroughly; it is also evident from any review of Virgin’s answers that they did essentially the same exercise.

Submission of Tender and Instructions to Evaluators

21.

The Council engaged a panel of evaluators. Ms Jane Jones is the Head of Safeguarding for Morecambe Bay Clinical Commissioning Group. Though experienced in her field, she had not previously taken part in a tender evaluation panel. Dr Karen Slade is a consultant in Public Health with wide previous experience of public health but no previous experience of participating in a public procurement as evaluator. Mr Lee Girvan is employed by the Council as a Public Health Specialist. Ms Sally Nightingale is employed by the Council as a Policy, Information and Commissioning Manager. Ms Helen Green completed the panel of evaluators. All but Ms Green gave evidence at trial. The administration of the procurement was the responsibility of Mr Paul Fairclough, who is a Procurement Manager for children, adult and Public Health services employed by the Council, and Mr Jig Parmar. Mr Fairclough manages Mr Parmar, who is described as a Category Manager predominantly focusing on Public Health and who was involved, with Mr Fairclough, in the administration of the procurement. Mr Fairclough gave evidence at trial. Mr Parmar did not.

22.

The Trusts submitted their tender on 7 November 2017. On the same day, Mr Parmar sent instructions by email to the evaluators in preparation for their evaluation of the submitted tenders. In the course of that email he wrote:

“3.

Background reading – published tender documentation inclusive of all contracts, specifications, instructions and evaluation criteria.

a.

As an absolute minimum, please ensure you have read the specification(s) for each lot(s) you are evaluating, and are familiar with the ‘Appendix K evaluation criteria (award). I have included this document as a separate attachment for ease of access. Scores must be based on the evaluation subcriteria identified directly under each question. Specifications are included within the contracts for each lot; accessible through the first zipped folder (attached).

b.

The evaluation of all award criteria must be based on the specification(s) and therefore you must not consider/take into account any prior knowledge/relationship you may have with any bidding organisation.”

23.

In their initial statements for trial, the Council’s witnesses generally referred to the Appendix G questions as “criteria” and the bullet points as “sub-criteria”. In doing so they followed the language of Appendix G. In supplemental statements shortly before trial, two evaluators (Ms Jones and Dr Slade) and Mr Fairclough shifted their ground so as to suggest that there were two “criteria” (quality and pricing) and seven “sub-criteria” (Q1 to Q7). In doing so they adopted the approach of Appendix K. I do not know what provoked this change of heart and evidence; but I do not think it matters. Applying the principles I have outlined above, it is clear that questions Q1 to Q7 and the bullet points under each question were basic elements in the approach to be adopted and applied consistently by tenderers and the authority alike. In other words, the Council was required to assess the tenderer’s answers in the manner set out in the tender documents and, specifically, ensuring that the mark awarded for each question took into account how the tenderer had covered the bullet points under the question being marked.

24.

The individual score sheets completed by the evaluators and their witness evidence show that each brought her or his own perspective and detailed approach to the task of arriving at their scores. Ms Jones’ scoresheet reflects her approach, which was to read the question and then refer to the bullet points and look for the key points. Her evidence, which I accept, was that she considered the service specification and general quality of the response for the relevant question when scoring each bid individually.

25.

Dr Slade followed the instruction she had been given by Mr Parmar that her evaluation should be based on the specification. In order to help her take in its contents, she compiled what she considered to be the key requirements as she read and understood it and organised her notes as an aide memoire with relevant points for each question being located and listed below the question and bullet points to which she thought them relevant. She described the aide memoire in her oral evidence as “simply a distillation of the specification to get me up to speed so I was familiar with it.” I accept that as an accurate description of what she set out to do. When she came to compile her score sheet, she did not limit herself to points that she had included in her aide memoire; nor did she treat the points in her aide memoire as if she was required to score the tenders by reference to them. I am satisfied and find that, when evaluating and scoring the tenders, she had regard to the questions and bullets as the focus of her scoring, and that she treated her aide memoire as a useful prompt where (and only where) she found it relevant and helpful to that approach to scoring. I accept her evidence that she did not share her aide memoire or discuss it with any of the other panel members at any stage. Her decision to compile and use her aide memoire is the subject of Issue 2, and I will consider it further there.

26.

Mr Girvan scored the bids separately by going through each question and bullet point; and if he thought there was something in the specification that was relevant to a particular bullet point, he would consider it in relation to that point. This had the consequence that he did not expressly refer to (or make a point relating to) every bullet point on his scoresheet. Ms Nightingale scored the tenders based on her assessment of the bidder’s answer to the question whilst also making reference to the bullet points under each question. Her evidence, which I accept, was that she took into account the specification in the context of the bullet points: it was in the course of her evidence that the specification was (rightly) described as the backdrop to the whole process. Ms Green’s approach is apparent from her score-sheet, which shows that she paid close attention to the bullet points and the extent to which she considered that the bidder had (or had not) covered them when scoring the questions. It is also apparent that she had regard to the Service Specification as underpinning the whole exercise.

27.

The moderation meeting took place on 14 November 2017. Mr Parmar chaired the meeting. Mr Fairclough took notes on his laptop. The evaluators could see that he was working on his laptop but could not see what he was doing, whether noting or deleting. In outline, the process that was followed was that the meeting would take the individual evaluators’ scores for a bidder’s answer to the question. There would then be a period when each evaluator was invited in turn to offer their comments on the answer, both negative and positive. Mr Fairclough took a note of the points that were made, though it was not verbatim. Analysis of his record shows that the order in which the evaluators gave their contributions was not constant: typically Mr Parmar would ask an evaluator with an outlying score to start the process. There would be a period of discussion of the points being made as the evaluators attempted to reach consensus on the score for the question under discussion.

28.

It is clear from the evidence of the witnesses (and I find) that, although the panel reached consensus on scores, there was not necessarily or even probably congruity of reasoning that led each evaluator to subscribe to the consensus score for the question. It is also clear (and I find) that the moderated discussion had the result that evaluators might be persuaded to change their minds on particular points so that, for example, a point that they had felt to be influential when carrying out their original evaluation became less influential than it had been or a point that they had not considered to be influential when carrying out their original evaluation assumed greater importance in the light of the discussion that took place. Shifting of ground and reasoning is of course to be anticipated in the course of a moderated discussion such as took place; but it means that the evaluators’ original score sheets are not a reliable guide to the reasons that ultimately caused the group to reach their consensus scores.

29.

Neither Mr Parmar nor the evaluators instituted a system whereby each bullet point was discussed in turn. Rather, the discussion in relation to each question was initiated by reference to the comments (negative and positive) that the evaluators brought to the discussion as a result of their previous evaluations. There was no analysis at the end of the discussion to see whether all bullets had been covered either in the discussion or otherwise. The evidence does not establish that each bullet for every question was expressly discussed. It is not surprising that the procedure adopted at the moderation had the result that not every bullet point can be identified in Mr Fairclough’s notes as having been considered. The positive and negative points recorded in the notes can generally (but not always) be attributed as being relevant to particular bullet points. With some questions, the positive and negative points taken together appear to include reference to all of the bullet points for that question – but this is not always so. In the absence of a comprehensive record of the discussions at moderation, the absence of any recorded point that is attributable to a particular bullet point does not enable me to infer that the bullet point in question was not mentioned or discussed.

30.

However, each of the evaluators had carried out their initial evaluation properly by reference to the bullet points. Where they had specific positive or negative comments, those were raised and informed the evaluation process during the moderation. The absence of a specific positive or negative comment directed at a specified bullet point in the notes of either the initial evaluation or at the moderation meeting does not mean that the bullet point was either ignored or did not inform the views that the evaluators brought with them to the moderation. I accept the evidence of Dr Slade that she knew she had covered all the bullets and that her positive and negative comments would drill down to the bullet points; and I accept that this approach and thinking was, in general terms, representative of the approach of each of the evaluators. I also accept the evidence of Mr Fairclough that throughout the moderation there was what he described as “constant reference” to the bullet points. Having heard from Mr Fairclough and from four of the five evaluators, I find that they had the bullet points before them and had them in mind during the moderation process, even if one or more was not expressly spoken about. Although the moderation notes do not demonstrate that express consideration of each bullet point was undertaken, the range of views and the thoroughness in the approach of the five panel members in their initial evaluations means that the views they expressed in the moderation were informed by their consideration of all the bullet points even if, as may have happened, some were not expressly mentioned.

31.

At some point in the discussion of a question Mr Fairclough would read out points that he had noted. I accept Dr Slade’s evidence that he would typically say something along the lines of “We have arrived at 3. I have these positives and these negatives, is that sufficient?” The evaluators had the opportunity to agree or disagree with the comments that he read out.

32.

The evaluators gave evidence about the observations they had made on their original scoresheets, some of which was plainly reconstruction, some of which was equally plainly based on actual recollection, and some of which was a mixture of both recollection and reconstruction. When it came to giving evidence about the discussions and decisions at the moderation, their evidence was, to my mind, much less convincing for two main reasons. First, their evidence tended to be dominated by what they believed to have been their own particular position (whether or not it changed during the moderation); and, second, when trying to give evidence about what happened at the moderation, it was clear that their evidence was dominated by attempts to reconstruct what was signified by Mr Fairclough’s notes. Focus must therefore shift to Mr Fairclough’s note-taking and notes as a record, and the evidence of the evaluators as to what happened.

33.

Mr Fairclough’s evidence was that he noted points that reflected the discussion that was held. Sometimes he would note a point but would subsequently delete it because the evaluator who had made the point no longer relied upon it after and in the light of the discussion that followed. It is evident that the views of individual evaluators in relation to a given answer might differ, so that one might think that the answer covered a particular bullet point satisfactorily while another did not. In such a case both positive and negative comments might be expressed in the course of the discussion and be noted by Mr Fairclough. I accept Mr Fairclough’s evidence that he would sometimes edit out points that he had initially recorded because of a subsequent change in position by one or more evaluators: but the notes of the moderation have a number of examples where conflicting points are recorded, with no apparent attempt to attribute them to individual evaluators, to reconcile them or to indicate whether or to what extent the panel reached agreement, even if that agreement was an agreement to disagree on the significance of a point while agreeing on the ultimate consensus score. Examples of the recording of contradictory comments are provided at [11] of the Claimants’ closing submissions. I accept the evidence of the evaluators (specifically as summarised in [10] of the Claimants’ closing submissions) that the notes of the evaluation in their final form included points in relation to which the evaluators did not agree either that they were good points at all or as to the weight to be attached to them. Furthermore, even if Mr Fairclough set out to record all material points in his notes, it is apparent from what I have already set out and from his evidence that some were either omitted or edited out. Accordingly, although I accept that the points that have survived in Mr Fairclough’s notes are points that were identified during the discussion, they are not a complete record of the points that were made or even the points that were considered to be material at some stage during the discussion, as some were edited out at unspecified times and for reasons that are not themselves recorded.

34.

Two other features of Mr Fairclough’s notes should be mentioned at this stage.

35.

First, there was no consistency in the manner in which any discussion or decision-making process was recorded over and above the recording of positive and negative points. The forms that were used by the evaluators differed from the form of Appendix G: where Appendix G had said under each question “Please insert your response here:”, the evaluators’ sheets had two boxes, the first of which was headed “Positive Comments” and the second “What was missing or could have made their answer better”. The same format was used for Mr Fairclough’s notes of the moderation, and Mr Fairclough allocated the evaluators’ points accordingly. The record of the agreed score was inserted at the bottom of the “What was missing…” box. Sometimes there was a clear reference to points that had been recorded above, while sometimes there was not. Thus in the notes of the moderation of the Trusts’ bid:

i)

For Q1, the note recorded at the bottom of the “What was missing…” box “Panel agreed a score of 4. Substantial assurance was received by the panel having considered the detail of the responses. Key points highlighted.” The “Positive Comments” box had 12 comments in it, of which three were highlighted by being in bold font. The “What was missing…” box had four comments, none of which was highlighted. An identical comment appeared at the bottom of the “What was missing…” box for Virgin’s Q1, and three positive comments out of 12 were highlighted;

ii)

For Q2, the note recorded at the bottom of the “What was missing” box “Panel agreed a 3 on reflection of the discussion that has taken place – key points in this respect are highlighted.” The “Positive Comments” box had 10 comments in it, none of which was highlighted. The “What was missing…” box had 13 comments, the last four of which were highlighted. By contrast, although the consensus score was recorded against Virgin’s Q2, there was no comment at the bottom of the “What was missing…” box or elsewhere and no comments (positive or negative) were highlighted;

iii)

For Q3, the note recorded at the bottom of the “What was missing” box “Panel agreed a 3. Key point is lack in audit approach, engagement with the authority, and clarity around roles in consortium.” No points were highlighted. The “Positive Comments” box had 12 comments in it. The “What was missing…” box had 8 comments in it, three of which were in similar but not identical terms to what had been identified as “key point”. By contrast, the comment at the bottom of the “What was missing…” box for Virgin’s Q3 stated “Panel agreed a score of 4, this was a very robust answer.” That observation was followed by a further passage to which I shall return. No points (negative or positive) were highlighted;

iv)

For Q4, the note recorded at the bottom of the “What was missing” box “Panel agreed a score of 4, key to the discussion why this wasn’t a 3 was the breadth of training opportunities and measures in place for staff monitoring and development. Staff supervision, qualifications, and training wasn’t always clear and therefore the panel agreed that the response was not a 5.” The “Positive Comments” box had 13 comments in it, none of which was highlighted. The “What was missing…” box had 6 comments, none of which was highlighted. It is possible to identify some correlation between what was described as key to the discussion and the comments that were recorded above; but different terms were used. By contrast, the comment at the bottom of the “What was missing …” box for Virgin’s Q4 stated “Panel agreed a score of 4” before setting out what was called a “discussion point”. There was no reference to “key points” as such;

v)

For Q5, the note recorded at the bottom of the “What was missing” box “Panel agreed a 3. All requirements met to a good standard but not a 4. Response didn’t evidence that processes are embedded or that there is a practical commitment to being lead professional.” The “Positive Comments” box had 9 comments in it, none of which was highlighted. The “What was missing…” box had 11 comments, none of which was highlighted. One of those comments was “Response didn’t evidence that processes are embedded or that there is a practical commitment to being lead professional”, which appears to reflect the later observation apparently giving some explanation for the award of a mark of 3. By contrast, the comment at the bottom of the “What was missing …” box for Virgin’s Q5 stated “Panel agreed a score of 4” before setting out short comments that were said to warrant a score of 4. That passage was then followed by another to which I shall return: see [37] below. None of the points (positive or negative) were highlighted;

vi)

For Q6, the note recorded at the bottom of the “What was missing” box “Panel agreed a 3 – the panel agreed some good elements were present, the response was better than acceptable following a discussion of sub-criteria applied.” The “Positive Comments” box had 11 comments in it, none of which was highlighted. The “What was missing…” box had 6 comments, none of which was highlighted. By contrast, the comment at the bottom of the “What was missing …” box for Virgin’s Q6 stated “Panel agreed a 3” before setting out a short series of points that identified strengths and weaknesses. No points (positive or negative) were highlighted;

vii)

For Q7, the note recorded at the bottom of the “What was missing” box “Panel agreed a score of 4. A range of very good measures are included and on reflection the negatives listed were limited, although the co-production example discussed highlighted some issues in understanding.” The “Positive Comments” box had 10 comments in it, none of which was highlighted. The “What was missing…” box had 3 comments, none of which was highlighted. By contrast, the comment at the bottom of the “What was missing …” box for Virgin’s Q7 stated “Panel agreed a score of 4. A very good response demonstrating a thorough understanding once the above points are considered.” No points (negative or positive) were highlighted.

36.

As this summary shows, there was no consistency either in identifying what were said to be key points or in highlighting points to show that they had been influential. The approach differed even within the record of the same question: thus the Trusts’ Q2 adopted highlighting of points, but Virgin’s Q2 did not; and there was no note at all at the end of Virgin’s Q2 to provide any guidance at all to the decision-making process.

37.

The lack of clarity in the manner of recording the discussion and reasoning of the panel is compounded by the interpolation of comments which, on their face, appear to indicate that the scoring was done by comparing the Trusts’ answers with Virgin’s, which was not the permitted approach. After identifying that the panel had agreed a score of 4 for Virgin’s Q3, the note identified six features that were either “better” or “stronger”. Similarly, having recorded that the panel agreed a score of 4 for Virgin’s Q5, the note identified six features, five of which were expressed as “stronger” or “wider”.

38.

This difficulty is not confined to the passages at the end of each of the questions. Of the 12 positive comments recorded against Virgin’s Q5, two are expressed in relative terms (“wider” or “stronger”).

39.

Mr Fairclough’s explanation for these entries is that, after the moderation had finished and the scores had been revealed, there was time left and so the decision was taken to go over the standstill feedback with everyone whilst they were still together. They focussed on the two questions where the Trusts’ and Virgin’s scores had differed, namely Q3 and Q5. Mr Fairclough went on working on the same set of electronic notes while the panel discussed the characteristics and relative advantages of the highest scoring tender. I accept that evidence. Mr Fairclough now recognises that it would have been better to have started fresh notes or at least to identify within the note what he was doing. I agree. His evidence provides a satisfactory explanation for the presence of the passages at the end of the assessment of Q3 and Q5; but it is not a satisfactory explanation for the use of relative terms in the positive points for Virgin’s Q5. The inference I draw, on the basis of my acceptance of his evidence that the moderation did not adopt a comparative approach when reaching its consensus scores, is that Mr Fairclough went back and either deleted or overwrote some of the positive comments. The weakness of the approach that was adopted is highlighted by the fact that Mr Fairclough cannot now remember which parts of the passages in question were added as part of the standstill feedback-gathering process. I have no confidence that the only points that were altered during this latter stage were those where a relative comment is to be found.

40.

In this unsatisfactory state of the written record of the evaluation, I accept the evidence of Dr Slade that the evaluators considered as a panel whether, balancing the positive and negative comments which had been raised, the response in question could be described as poor, acceptable, good, very good or excellent. But it is not possible to determine in any greater detail what weight was attached to particular comments, save that some were highlighted (which itself is not very informative); nor is it possible to determine whether any points (including those highlighted or referred to in the short passages at the end of the notes for each question) were consensus points or points on which some but not all members of the panel relied. I am not satisfied that all points that were material in the course of or by the end of the discussion are recorded in the notes. Nor am I satisfied that the highlighted comments were the only points that led the panel to reach its consensus scores; and I am satisfied that they do not provide a full or accurate account of the reasons or reasoning that led either individual panel members or the panel as a whole to reach the consensus scores that were reached.

41.

Worse was to follow. The Council’s Tender Panel Guidance provided that the Chairperson would “ensure all evaluation documents, including all evaluation comments, justifications, marks and amendments are fully documented and agreed by both the panel members and the Chairperson.” The guidance presciently said that “by observing and implementing these guidelines, the evaluation panel will ensure that the tender evaluation process has been undertaken in an open, transparent and compliant manner. This will reduce the risk of complaints and legal challenges which are very time consuming and costly.”

42.

This guidance was ignored. On 24 November 2017 Mr Parmar wrote to the panel members saying that it “might be impractical” for panel members to sign the final moderated scoring sheet at that stage. Instead, he asked each panel member to send an email “which clarifies your satisfaction/agreement of the moderation process and final scores.” The notes were not sent to the panel members. Ms Green replied confirming her agreement with the moderation scores; Dr Slade confirmed she was “satisfied that the moderation process and generation of final scores had been fair and robust”; Ms Jones clarified her “agreement of the moderation process and final scores for Lot 1”; Mr Girvan confirmed “agreement with the moderation and final scores for lot 1-4….”. No one purported to agree the notes of the moderation. They were never agreed by the panel as an accurate record of the moderation. On the evidence I find that the panel members were not shown the notes of the moderation until they came to make their statements in these proceedings.

43.

Later, when the Trusts were pressing for information, the Council misled them by first redacting the dates and then backdating three of the individual members’ evaluation notes. To describe this (as the Council did) as merely “a regrettable episode of poor administration” is, to my mind, an unacceptable understatement.

44.

The Trusts were informed that they had been unsuccessful on 27 November 2017. The limited information that was provided to the Trusts showed that the margin by which they had fallen short of the successful tenderer was 4.07%, with Virgin scoring 78.5% and the Trusts 74.43%. 0.07% of the difference was attributable to price. The 4.0% difference on quality evaluation represented a difference of two marks overall, one for Q3 and one for Q5.
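On the figures set out above, and consistently with the Lot 1 weighting factors of 2 for both Q3 and Q5, the underlying arithmetic would appear to be as follows (a reconstruction from the disclosed totals rather than a calculation stated by the Council):

\[
78.50\% - 74.43\% = 4.07\%; \qquad \underbrace{(1 \times 2)}_{\text{Q3}} + \underbrace{(1 \times 2)}_{\text{Q5}} = 4.0\% \ \text{(quality)}, \quad 0.07\% \ \text{(price)}.
\]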

45.

There was at one point a dispute about whether the Council acted in breach of its obligations by failing to provide the Trusts with a debrief pursuant to Regulation 86. However, the practical significance of that dispute was overtaken by events when the Council provided the Trusts with the underlying scoring materials relating to the Trusts’ bid on 11 December 2017. The dispute under Regulation 86 has therefore not been pursued as a separate head of challenge. That said, the Trusts maintain that the failure to debrief is symptomatic of and relevant to their complaint that the Council has failed to identify or provide a reasoned explanation of its evaluation which accords with the evaluation methodology which it said it would use.

46.

These proceedings were issued on 14 December 2017. Particulars of Claim were served on 21 December 2017. The proceedings brought the automatic suspension into force, which has remained to date. Having brought the proceedings, the Trusts continued to press for information, including information about Virgin’s bid, some of which was provided at and from the end of January 2018. In the light of further information disclosed by the Council, the Particulars of Claim were amended on 5 February 2018, Re-amended on 22 March 2018 and Re-re-amended on 13 April 2018.

Issue 1: As to the quality evaluation generally:

a)

Are the reasons given by the Defendant for the scores awarded to the Claimants and Virgin for the quality evaluation questions sufficient in law?

b)

Did the Defendant in fact apply or depart from the stated award criteria and/or evaluation methodology when evaluating tenders?

Issue 1(a): Sufficiency of Reasons Given by the Council

47.

The Trusts’ case is in summary that:

i)

The Council is unable to articulate any consensus reasons for the scores for 4 out of 14 questions (the Trusts’ Q6 and Virgin’s Qs 2, 3 and 5);

ii)

Further, on its own evidence, the Council cannot establish a complete or accurate account of the consensus reasons for any question;

iii)

Either or both of these points means that the award decision falls to be quashed.

48.

The Council’s response is in summary that:

i)

The written reasons provided for the evaluators’ decision are set out in the Panel Notes for Consensus Scores for each tenderer, i.e. the notes of the moderation to which I have referred above;

ii)

The notes record as comprehensively as possible a range of positive and negative comments made by the Panel in respect of each question and highlight different points that the panel had picked up on;

iii)

The bulleted comments which remain in the final version of the notes form the rationale for the consensus scores for each question;

iv)

Thus the Court should find that the reasons for the consensus scores for each question for each tenderer comprise both the bullet pointed positive comments and negative comments as well as the short summary at the end of the listed bullet points found in the moderated score sheets (Footnote: 1).

49.

There is no substantial issue about the principles to be applied. They are conveniently summarised in two Evropaiki Dynamiki decisions. In Case 272/06 Evropaiki Dynamiki [2008] ECR-II 00169 at [27] the Court said:

“…in accordance with settled case-law, the statement of the reasons on which a decision adversely affecting a person is based must allow the Community Court to exercise its power of review as to its legality and must provide the person concerned with the information necessary to enable him to decide whether or not the decision is well founded”

50.

To similar effect, in Case 447/10 Evropaiki Dynamiki at [92] the Court said:

“the corollary of the discretion enjoyed by the Court of Justice in the area of public procurement is a statement of reasons that sets out the matters of fact and law upon which the Court of Justice based its assessment. It is only in the light of those matters that an applicant is genuinely in a position to understand the reasons why those scores were awarded. Only such a statement of reasons therefore enables him to assert his rights and the General Court to exercise its power of review.”

51.

In Healthcare at Home Limited v The Common Services Agency [2014] UKSC 49, Lord Reed (with whom the other Justices agreed) said at [17]:

“As I have explained, article 41 of Directive 2004/18 imposes on contracting authorities a duty to inform any unsuccessful candidate, on request, of the reasons for the rejection of his application. Guidance as to the effect of that duty can be found in the judgment of the Court of First Instance in Strabag Benelux NV v Council of the European Union (Case T-183/00) [2003] ECR II-138, paras 54-58, where the court stated (para 54) that the obligation imposed by an analogous provision was fulfilled if tenderers were informed of the relative characteristics and advantages of the successful tenderer and the name of the successful tenderer. The court continued (para 55):

“The reasoning followed by the authority which adopted the measure must be disclosed in a clear and unequivocal fashion so as, on the one hand, to make the persons concerned aware of the reasons for the measure and thereby enable them to defend their rights and, on the other, to enable the court to exercise its supervisory jurisdiction.””

52.

It is no accident that each of these statements of principle refers to the need to provide “reasons” and “reasoning”. With one possible exception, that is not the same as providing a list of factors that were taken into account. The exception would be if each identified factor was awarded equal weight, in which case one could at least identify the number of factors, whether positive or negative. In that case the greater number would outweigh the lesser (or equality would be achieved). The exception is inapplicable in this case because it is not what happened here with the points listed in Mr Fairclough’s notes. There is no suggestion that all points were treated as being of equal weight and it is clear from the evidence as a whole that they were not. Quite apart from this general point, the possible exception is inapplicable if some points are “highlighted” without any indication of the relative weight applied to normal and highlighted points. In any event, the exception would itself be of little or no value where, as here, the outcome is not binary (positive or negative) but the panel were required to reach a consensus score on each question between 0 and 5.

53.

The importance of clarity as to the decisions and reasons of a moderation panel is reflected in the following observation of McCloskey J in Resource (NI) v NICTS [2011] NIQB at [35], which I respectfully endorse and adopt:

“I interpose here the observation that, under the current statutory and jurisprudential regime, meetings of contract procurement evaluation panels are something considerably greater than merely formal events. They are solemn exercises of critical importance to economic operators and the public and must be designed, constructed and transacted in such a manner to ensure that full effect is given to the overarching procurement rules and principles.”

54.

In the light of these statements of principle and considerations, I look for the reasons why the Council awarded the scores that it did; and I accept the submission that “a procurement in which the contracting authority cannot explain why it awarded the scores which it did fails the most basic standard of transparency.”

55.

A little further on in the passage from Healthcare at Home that I have cited above, Lord Reed indicated that an Authority is not generally under an obligation to disclose the notes of the moderation. Where, however, the authority relies upon those notes as setting out the written reasons for the evaluators’ decisions, it is to those notes that the Court must look for the reasons and reasoning adopted by the Authority.

56.

Adopting and applying those principles, I refer first to the factual account of what happened at the moderation meeting at [27] ff above; and in particular to my findings at [40] above. The inconsistency in approach in the recording of the moderation of different questions in each tenderer’s bid means that it is not possible to identify a structure in the notes which reveals the reasoning process adopted by the panel that led to and explains their consensus scores on a given question. Furthermore, although the witnesses called by the Council gave broadly similar accounts of the process that was followed, their evidence was not congruent either as to the process or the reasoning that was deployed in the course of the process. This is not surprising; nor is it intrinsically a criticism of the panel members or Mr Fairclough: see [32] above. But it does emphasise the critical importance of being able to find the reasons and reasoning that led to the scores in the notes themselves.

57.

In their closing submissions, the Trusts concentrate on four questions:

i)

Virgin Q2: there is nothing more than lists of positive and negative points, with no highlighting or commentary. Those lists are not a full account of the discussion even if they were comprehensive as lists of the points that were mentioned, which is uncertain for the reasons given above. Mr Fairclough was right to accept that the only recorded agreement is as to the consensus score that was awarded. There is no account or explanation of any discussion or the reasoning process that led to that consensus score. The notes do not explain why the panel awarded the score they did for this question;

ii)

Virgin Q3 and Q5: see [35] and [37]-[39] above. To the extent that the notes were at any stage a record of the discussion of the moderation, they have been corrupted by the interpolation of the additional material, with the result that it is not possible to identify what forms part of the original note and what does not. If one eliminates the passages where the comparative approach is adopted, the only clue apart from the listing of positive and negative points for Q3 is “… this was a very robust answer”. At its highest, for Q5 there is one sentence which could, if taken on its own, appear to be some justification for awarding Virgin a 4: but even that is corrupted by reference to measures “listed below” which are stated in comparative terms. It is therefore not possible to understand the reasons or reasoning by reference to any clear statement in the notes;

iii)

Trusts Q6: very little was provided in addition to a list of points. I accept the submission that merely to say that “the panel agreed some good elements were present, the response was better than acceptable following a discussion of sub-criteria applied” does not give any substantial information about what considerations led the panel to conclude that 3 was an appropriate mark. Mr Fairclough was right to accept that it is not possible to tell from his notes anything beyond that the answer was considered “good”. I conclude that the notes do not explain why the panel awarded the score they did for this question.

58.

It follows that I accept the specific criticisms made by the Trusts in support of this submission. However, in my judgment the deficiencies are not limited to the four questions identified by the Trusts in this part of their submissions. Other examples of general observations that lack content are to be found in the Trusts’ Q1 and Q2 and Virgin’s Q1 and (to a lesser extent) Q7. And, viewed overall, I am satisfied that the notes do not provide a full, transparent, or fair summary of the discussions that led to the consensus scores sufficient to enable the Trusts to defend their rights or the Court to discharge its supervisory jurisdiction. First, there is evidence, which I accept, that other reasons (including some agreed reasons) were in play and are not reflected in the notes. Second, pervasively there is no or no sufficient account of the reasoning and reasons that led panel members to resolve their differences (if they did) so as to arrive at consensus scores.

59.

Lest there be any doubt, I am not suggesting that it was necessary to keep a complete record of what was said or a comprehensive note of every point that was made. I also accept that the amount of detail that an authority is required to provide when giving its reasons may vary from contract to contract, depending on all the circumstances relevant to the contract in question. Although the Tender documents adopted a rather simplistic format and structure, this was a substantial and complex contract and procurement. I reject each of the main limbs of the Council’s response as set out at [48] above. In summary, the negative and positive points are not, without more, themselves reasons or reasoning and the written reasons do not adequately set out the panel’s reasons or reasoning. While the notes record lists of positive and negative points, they do not do so “as comprehensively as possible” or in a way that enables either the Trusts to defend their rights or the Court to exercise its supervisory jurisdiction. The bullet points may provide material that was relevant to the Panel’s reasons and reasoning, but they do not themselves provide the rationale for the consensus scores. And, even where there are comments in addition to the positive and negative points, they do not adequately reveal the panel’s reasons or reasoning.

60.

Drawing these strands together, I have come to the conclusion that the reasons given were not sufficient in law in the circumstances of this case.

61.

I therefore conclude that the Trusts succeed on Issue 1(a).

Issue 1(b): application of correct criteria and methodology

62.

The Trusts’ case is in summary that the Council:

i)

Did not ensure that the scores for each question were awarded based on an assessment of all of the sub-criteria under the question; alternatively

ii)

If and to the extent that the Council did follow that approach, it has not been able to establish what its assessment was.

63.

The Council’s case is that it is clear from the score sheets completed by each member of the evaluation panel and from the moderated score sheets completed during the moderation meeting of the evaluation panel that the Defendant applied the stated award criteria and sub-criteria by evaluating the tender responses which both tenderers had given to each of the seven Award Criteria Questions in Appendix G to the ITT, applying the scoring methodology set out in Appendix K.

64.

I have set out my summary conclusions about:

i)

The manner in which the panel members approached their individual marking of the bids: see [24]-[26] above; and

ii)

The manner in which the moderation was conducted: see [27]-[30] above.

65.

On those findings I conclude that there was substantial compliance with the requirement to ensure that there was an assessment against each bullet point. I am confident that most bullet points were expressly mentioned at the moderation, partly because that is indicated by the positive and negative comments and partly because of the evidence that there was “constant reference” to them during the moderation. Even if a bullet point was not expressly mentioned at the moderation, (a) it had been considered by each panel member in their initial evaluation and that consideration informed their views when reaching their consensus scores at the moderation; and (b) I have found that the panel members had the bullet points before them and in mind during the moderation.

66.

What is less clear is how the panellists approached the requirement that each bullet point would be given equal weighting. Taken overall, it appears that the panel concentrated more on the significance of the points of omission or in favour of the tenderer’s answer to the question (which was permissible) rather than attempting to evaluate and apply different weightings to different bullet points (which would not have been). I am not satisfied that the panel departed from the stated weightings. What weight they gave (either singly or collectively) to particular points of omission or in favour of an answer does not appear, which goes to the criticisms made by the Trusts under Issue 1(a).

67.

Subject to the question of Dr Slade’s aide memoire, to which I turn next, the Claimants have not identified or demonstrated that the panel members individually or collectively exercised an unrestricted freedom of choice in the matters that they considered when evaluating the competing bids or either of them. For the avoidance of any doubt, it was inevitable and right that panel members should drill down to the specification where appropriate. They did so, as did both the Trusts and Virgin in compiling their tenders.

68.

It follows that I find against the Trusts on Issue 1(b).

Issue 2: As to Dr Slade’s “aide memoire”:

a)

Did the aide memoire contain or give effect to undisclosed and/or unlawful sub-criteria, weightings and/or model answers (described in the RAPoC as the Undisclosed Criteria)?

b)

Was the aide memoire lawfully applied or used in the evaluation of tenders by Ms Slade or otherwise and/or did its use render the evaluation unlawful?

69.

I set out the factual background at [24] above. Dr Slade’s evidence (which I accept) made clear that the mere fact that something appeared on her aide memoire under a particular question did not bind her to apply it or make any use of it at all when she came to mark the tenders. The Trusts’ case had been that the aide memoire had a “systemic distortive effect” on Dr Slade’s evaluation and therefore on the evaluation as a whole. In the light of her evidence the Trusts realistically accept that this case is not made out. I agree.

70.

The Trusts maintain the submission that the aide memoire “did in certain respects filter down into Dr Slade’s evaluation in ways which were not foreseeable to the RWIND tenderer.” If that submission is made out, they accept that there would be a second question to be answered, namely whether (as a matter of causation) those points materially affected the scores awarded by Dr Slade and then affected the final score at moderation.

71.

I reject the Trusts’ first submission. I bear in mind that panel members are allowed a certain leeway in how they go about their work of evaluation provided that they comply with the instructions and process that have been laid down for the procurement. Put another way, they may choose how to structure their work of examining and analysing the submitted tenders so long as that does not have the effect of amending the contract award criteria set out in the contract documents or the contract notice. On the facts of this procurement, it was necessary for tenderers and panel members to have in mind the terms of the specification as it provided the backdrop for the exercise as a whole. Some might reasonably choose to write notes for themselves, including notes about the specification or other matters that they thought might come in useful when they came to do their evaluation. If they then became hidebound by their previous work (whether it had been reduced to writing or not), so that they brought it into account when evaluating even if it was irrelevant to what they were supposed to be doing, then there could be a problem: they might effectively amend the procurement’s award criteria. However, that is not what Dr Slade did and there is no evidence that her preparatory work translated into taking into account irrelevant matters or ignoring relevant matters when she came to do her evaluation. On the contrary, I accept her evidence that she did not do so.

72.

It follows that I find against the Trusts on Issue 2.

Issue 3: Did the Defendant make manifest errors or breach the principles of procurement law:

a)

in the evaluation of the Claimants’ tender for any or all of questions 2, 3, 5 or 6? and/or

b)

in the evaluation of Virgin’s tender for any or all of questions 2, 3, 5, 6 or 7?

73.

Issue 3 is concerned with specific scoring errors. The principles to be applied by the Court are well known and may be shortly stated:

i)

The Court is concerned with “manifest errors”, as to which, see the observations of Morgan J in Lion Apparel at [8] above;

ii)

The Court will not embark on a remarking exercise in order to substitute its own view of the appropriate mark or marks that should have been awarded. This is at least in part because there is a margin of appreciation available to the authority in the marks it chooses to award;

iii)

The test to be applied is closely analogous to a Wednesbury unreasonableness test. As stated in a third Dynamiki decision (Case T-250/05 [2007] ECR II-85):

“that review by the Court must be limited to checking that the rules governing the procedure and statement of reasons are complied with, the facts correct and there is no manifest error of assessment or misuse of powers.”

iv)

Put another way, manifest error will typically occur:

“where there has been a failure to consider all relevant matters (or consideration of irrelevant matters), or the decision is irrational in that it is outside the range of reasonable conclusions open to it…”: see MLS (Overseas) Ltd v Secretary of State for Defence [2017] EWHC 3389 (TCC) at [63] per O’Farrell J.

74.

The issue of materiality depends upon whether it can be seen that a manifest error made a difference to the outcome or may have made a difference to the outcome. Where, as here, the gap between the two tenders is very small, it is tempting to say that any manifest error may have made a difference to the outcome. That would be wrong, because a fair assessment of materiality would have to consider whether similar considerations may have affected the score awarded to Virgin as well as whether the Trusts’ scores may have been affected. I reject, however, the Council’s submission that any errors affecting the evaluation of the Trusts’ tender will necessarily have affected the evaluation of Virgin’s to the same extent. The submission is wrong because each response to each question was separately considered for each tenderer and, as shown above, neither the approach nor the recorded reasoning of the panel (so far as any reasoning can be identified) was consistently the same.

75.

I accept the Trusts’ submission that the Court’s ability to identify whether there has been material error in the panel’s reasons is constrained by the deficiencies that I have identified under Issue 1. In my judgment that constraint is not removed by the fact that four of the five panel members and Mr Fairclough have given evidence. I do not consider that the witnesses’ accounts of the moderation or the significance of various points in the notes of moderation are sufficiently reliable to enable the Court to form a generally reliable view of what weight was in fact attached to particular points: see [32] above. I therefore accept the Trusts’ submission that “there are many instances where the Court could not determine the counterfactual score without in effect remarking the bid against the sub-criteria from scratch.” That submission is well made where there is no pleaded case or satisfactory evidence about what score would have been awarded on a counterfactual basis.

76.

Both sides made detailed submissions about the questions in issue. The Trusts’ submissions concentrate on instances where (in their view) the witness evidence demonstrates that inadequate or inappropriate weight was given to a tenderer’s response. Typically, the Trusts identify one or more of the positive or negative points and submit that either no weight or different weight should have been given to it. The difficulty with this approach is that, as explained under Issue 1, it is not self-evident what weight (if any) was ultimately placed on individual points or how individual points affected the panel’s reasoning and the consensus score that the panel reached. That deficiency is not made good by the evidence given in chief or in cross-examination, because (a) I am not satisfied that the evidence of individual panel members was reliable, (b) different panel members on occasions gave differing explanations, and (c) although detailed evidence was elicited on particular points, it is not clear what impact any such point had on the overall score either for individual panel members or for the panel as a whole.

77.

The Council’s response rests upon its submission that the reasoning of the panel can be found in the notes of the moderation. The Council performed a detailed exercise for some of the questions, identifying the passages in the various tender responses to which individual points (positive or negative) were referable. The Council invited the Court to extend this analysis to other questions if appropriate. That is an invitation that I have not taken up. In my judgment the Council’s response is fatally undermined by my conclusion under Issue 1 that the reasoning of the panel is not apparent from the moderation notes upon which the Council continues to rely; and by my conclusion that the evidence of the panel members and Mr Fairclough cannot reliably make good the deficiency.

78.

The unreliability of each party’s approach applies even where (as in relation to Virgin’s answer to Q2) the Trusts submit that there has been a complete failure by Virgin to address a particular point, such that it should have been awarded a mark of only 1, or not more than 2, rather than the 3 that was awarded. In order to perform a useful exercise that gives a reliable answer on materiality overall, it would be necessary for the Court to carry out its own re-marking exercise for the whole of both tenders. That is not the Court’s proper function.

79.

In these circumstances I make limited observations about the Questions that are identified for specific criticism by the Trusts:

i)

Trusts Q2: the Trusts criticise the inclusion of three of the four negative points highlighted in the note. I do not accept that any of the points were points that the panel were not entitled to bring into account, and the criticism is essentially one of weight. The relative impact of the highlighted and non-highlighted points cannot be assessed. What the Trusts essentially call for is a re-mark;

ii)

Trusts Q3: the Trusts complain that the three key points identified in the short note at the end do not rationally support a reduction of 2 points out of 5 against all the criteria. This ignores the fact that there were nine negative points in all (as well as 12 positive ones) and that there is nothing to indicate the impact of either the three identified or the other six negative points on the panel’s thinking and approach;

iii)

Trusts Q5: the Trusts criticise the references to a lack of evidence that processes were embedded and that there was a practical commitment to being the lead professional. The reasons are so sparse as to be useless for the purpose of determining if these perceived weaknesses made the difference between the given score of 3 and a possible score of 4, given the other points that are made, both positive and negative;

iv)

Trusts Q6: once again the reasons are so sparse and uninformative that it is quite impossible to determine what made the difference between the awarded score of 3 and a possible score of 4. In particular, it is not possible to determine that the reference to a lack of evidence of collaborative working or partnership was material or determinative.

80.

Similar observations may be made about the Trusts’ criticisms of the scoring of Virgin’s Q2, Q3, Q5, Q6 or Q7.

81.

In summary, the pervasive inadequacy of the account of the panel’s reasoning and reasons, discussed under Issue 1 above, prevents any reliable assessment of the extent or materiality of any error in the reasons and reasoning actually adopted.

Issue 4: In so far as the Court finds any legal breach under points 1 to 3 above:

a)

Did any such breach or combination of breaches cause the Claimants to lose the award of the Contract to Virgin; and/or

b)

Did any such breach or combination of breaches cause the Claimants to lose a chance of the award of the Contract?

82.

The Council submits that any breach under Issue 1(a) is not causative since the Trusts have received far more information and documentation by virtue of disclosure than they were entitled to under the 2015 Regulations and this information would have had no bearing on the scoring of the tender. I do not accept the implied submission that the information that the Trusts have received is a substitute for the provision of proper reasons that fulfil the statutory functions of transparency and equal treatment of economic operators. The Council relies upon the notes of the moderation as providing the requisite reasons, and they do not. Furthermore, this is not a case where evidence provided later has plugged the gap, for the reasons already given about the deficiencies of the Council’s witness evidence. The failure to provide transparent and comprehensible reasons prevents the Court from making a reliable assessment of material error in circumstances where only a very modest adjustment in scores (for either Tenderer) would be decisive. That is sufficient to demonstrate the materiality of the breach under Issue 1(a), in which case it is common ground that the decision of the Defendant to award the tender to Virgin must be set aside.

83.

In the light of my findings under Issue 1, I make no further findings or order in relation to Issue 4.

Annexe 1

Relevant Extracts from Appendix G

Please Note:

In reference to all Award Criteria questions: where sub criteria has been used, bidders should assume that sub criteria carries equal weighting.

Implementation and Mobilisation plan

In addition to completing the Award Criteria, please present your service structure chart relating to this Service, on no more than one side of A3. Content must be limited to job titles, number of roles, and the staffing structure.

Note: The content of the service structure chart is not subject to scoring, however it will be considered during the evaluation of this award criteria to contextualise your answers further, and cross reference with your (Appendix L) Pricing Schedule Cost Model Breakdown.

1. Please outline your organisation’s implementation and mobilisation plan with an explanation of what will be in place by 1 April 2018.

All tenderers would be expected to cover these areas. Tenderers are advised that this list is not exhaustive:

Please include details of:

Timescales, including milestones for completion of key actions.

Ensure management structures, governance and leadership is adequate to enable delivery of Services.

Profiles of proposed delivery sites, including location, services to be delivered and configuration.

Identification and mitigation of implementation and mobilisation risks

Plan for Communication and engagement with the Authority and public

Your service staffing structure in place from 1 April 2018.

This question carries a weighting of 15%

Please insert your response here:

Characters Used:

Maximum Character Count Including Spaces:12000

Delivery Model

2. Please set out your model of service delivery, explaining how it meets the service specification.

All Tenderers would be expected to cover these areas. Tenderers are advised that this list is not exhaustive.

How the Service will:

Improve health and wellbeing outcomes for children, young people and families.

Be responsive to local population health needs and diversity and help reduce health inequalities in Lancashire whilst maintaining a universal service offer.

Provide targeted support and care to children, young people and families with universal plus or universal partnership plus needs.

Work in partnership with key stakeholders, by integrating delivery frameworks, developing seamless pathways and collectively aiming to improve the outcomes for children, young people and families.

Incorporate local policies, strategies and new models of care, including integration with the Sustainability and Transformation Plans and Local Delivery Plans.

Promote health and reduce escalation of need.

Use innovative, creative and evidence based practices

Identify, mitigate and manage ongoing risks

This question carries a weighting of 20%

Please insert your response here:

Characters Used:

Maximum Character Count Including Spaces: 12000

3. Please describe how you will ensure and maintain a high quality service?

All tenderers would be expected to cover these areas. Tenderers are advised that this list is not exhaustive.

How the service will:

Record and report the impact of the service on improving outcomes for children, young people and families.

Utilise quality assurance activities and monitoring systems.

Utilise data and feedback effectively to deliver continuous service improvement.

Respond to changing demographics, need and evidence base.

At all times ensure a sufficient number of staff are available to meet Service demand and improve outcomes for children, young people and families.

Highlight, manage and respond to poor performance.

Engage collaboratively with the Authority.

This question carries a weighting of 10%

Please insert your response here:

Characters Used:

Maximum Character Count Including Spaces: 9000

Workforce

4. Please describe how you will enable staff to have the necessary knowledge, skills and abilities to meet the needs of children, young people and families for the duration of the service contract.

All Tenderers would be expected to cover these areas. Tenderers are advised that this list is not exhaustive.

How will the Service:

Ensure staff possess the necessary qualifications, registrations, current Disclosure and Barring Service check, core skills and values to carry out their specific role.

Ensure staff maintain professional development and training.

Manage individual performance and supervision of workforce.

Maintain effective clinical governance.

Actively use recruitment and retention policy to maintain an optimal workforce capacity. How the service will manage planned and unplanned absence.

This question carries a weighting of 7.5%

Please insert your response here:

Characters Used:

Maximum Character Count Including Spaces: 12000

Safeguarding and Early Help

5. Please describe how your organisation will safeguard children.

All tenderers would be expected to cover these areas. Tenderers are advised that this list is not exhaustive.

How will the Service:

Ensure that the welfare of Lancashire’s children and young people remains paramount.

Work within statutory and other processes to ensure better outcomes for children. Please give reference to Section 47, Section 17, Looked after Children, Special Educational Needs and Disabilities, and Troubled Families.

Commitment to the Lancashire Common Assessment Frameworks including acting in a Lead Professional role.

Identify, support and improve outcomes of children and families who may be affected by safeguarding issues.

This question carries a weighting of 10%

Please insert your response here:

Characters Used:

Maximum Character Count Including Spaces: 9000

Integration with WPEH

6. Please describe your approach to planning the integration of the Service with Lancashire County Council’s Wellbeing, Prevention and Early Help Services (WPEH).

All Tenderers would be expected to cover these areas. Tenderers are advised that this list is not exhaustive.

Describe your approach to:

Identifying opportunities for integration.

Analysing the possible shared outcomes, their benefits, impacts on service users, efficiencies, and effectiveness, of opportunities for integration.

Identify challenges, risks and mitigation actions concerning opportunities for integration.

Identify and analyse the potential impact shared interventions will have

Informing, collaborating and co-producing integration strategy with the Authority

This question carries a weighting of 7.5%

Please insert your response here:

Characters used:

Maximum Character Count Including Spaces: 12000

Social Value

7. Please provide evidence as to how you will meet the requirements of the Authority’s Social Value Policy and Framework; and provide proposals for delivering measurable social value with reference to:

How you will promote equality and fairness throughout the duration of this contract.

How you will raise the living standards of local residents.

All Tenderers would be expected to cover these areas. Tenderers are advised that this list is not exhaustive.

Achieve co-production in local communities.

Utilise local assets and local experience, including third sector services.

Integrate children, young people and families’ feedback in Service development.

Create employment opportunities for Lancashire people in the areas where the service is delivered.

Create opportunities for staff to develop professionally.

Create opportunities to tackle social isolation of children, young people and families.

This question carries a weighting of 10%

Please insert your response here:

Characters Used:

Maximum Character Count Including Spaces: 9000

