Topic I-03 — Negotiated Acquisitions

Evaluating Past Performance

How to collect, assess, and document past performance information in a source selection. Covers recency, relevancy, confidence ratings, CPARS, questionnaires, and Section L/M language.

1 What Past Performance Evaluation Is

Past performance evaluation is the government's method for predicting whether an offeror will successfully perform a proposed contract based on how it has performed on previous contracts. FAR 15.304(c)(3) requires past performance to be evaluated in all competitive negotiated acquisitions expected to exceed the simplified acquisition threshold, unless the contracting officer documents why it is not an appropriate evaluation factor; FAR 15.305(a)(2) governs how the evaluation is conducted.

The evaluation looks at three distinct aspects of an offeror's performance record: recency (how recent the contracts are), relevancy (how similar those contracts are to the current requirement), and quality (how well the contractor actually performed). All three must be assessed. A contractor that performed excellent work ten years ago on a very different type of contract does not tell you much about how they will perform on your requirement today.

The end product of a past performance evaluation is a performance confidence assessment. This is not a rating of how "good" the contractor is in the abstract. It is a prediction of how likely the contractor is to successfully perform the specific work being solicited, based on the evidence in their record.


2 Recency, Relevancy, and Quality

Recency refers to how current the past performance information is. The solicitation should define what "recent" means. A common standard is contracts performed within the last three to five years, though the right window depends on the industry. Fast-moving IT services may warrant a three-year window. Construction, where project lifecycles are longer, might justify six years. Whatever you choose, state it in Section L so offerors know what to submit and evaluators know what to consider.
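The recency screen itself is mechanical once the window is defined. A minimal sketch, assuming a five-year window measured back from the proposal due date (the window, function name, and date fields are illustrative choices, not FAR requirements):

```python
from datetime import date

RECENCY_YEARS = 5  # assumed window; use whatever Section L actually states

def is_recent(pop_end: date, proposal_due: date, years: int = RECENCY_YEARS) -> bool:
    """True if the contract's period of performance ended within the
    recency window measured back from the proposal due date."""
    try:
        cutoff = proposal_due.replace(year=proposal_due.year - years)
    except ValueError:  # proposal due on Feb 29, cutoff falls in a non-leap year
        cutoff = proposal_due.replace(year=proposal_due.year - years, day=28)
    return pop_end >= cutoff

# Ended four years before the due date: inside a five-year window.
print(is_recent(date(2022, 6, 30), date(2026, 3, 1)))  # True
# Ended almost seven years before: outside the window.
print(is_recent(date(2019, 6, 30), date(2026, 3, 1)))  # False
```

Contracts still being performed (no end date yet) would pass any window; the judgment calls lie in relevancy and quality, not recency.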

Relevancy measures how similar the prior contracts are to the work being solicited. The DoD Source Selection Procedures (August 2022) define four relevancy ratings:

Very Relevant: Present/past performance effort involved essentially the same scope and magnitude of effort and complexities this solicitation requires.
Relevant: Present/past performance effort involved similar scope and magnitude of effort and complexities this solicitation requires.
Somewhat Relevant: Present/past performance effort involved some of the scope and magnitude of effort and complexities this solicitation requires.
Not Relevant: Present/past performance effort did not involve any of the scope and magnitude of effort and complexities this solicitation requires.

Relevancy is not binary. An offeror might submit five contracts: two that are Very Relevant, one Relevant, and two Somewhat Relevant. The evaluation team assesses each contract reference individually, then uses the full picture to inform the overall confidence assessment.

Quality is the actual performance record on those contracts. How well did the contractor do the work? Were there schedule delays, cost overruns, or quality deficiencies? Did the contractor receive favorable or unfavorable CPARS ratings? Quality is drawn from the performance data sources described in the next section.

All three aspects are necessary. A contractor with five Very Relevant contracts that all have Marginal or Unsatisfactory CPARS ratings is a concern regardless of how well the scope matches. Conversely, a contractor with Exceptional ratings on entirely unrelated work does not give you confidence they can perform your specific requirement.

3 Where to Find Past Performance Information

Past performance data comes from several sources. The evaluation team should use more than one.

CPARS (Contractor Performance Assessment Reporting System) is the primary government-wide database for contractor performance evaluations. Contracting officers and CORs are required to complete CPARS evaluations for contracts meeting certain dollar thresholds (see FAR 42.15). CPARS ratings cover multiple assessment areas: quality of product/service, schedule, cost control (on cost-type contracts), management/business relations, and sometimes small business subcontracting. Each area receives a rating from Exceptional to Unsatisfactory. Evaluation teams retrieve these records directly in CPARS; the former retrieval interface, the Past Performance Information Retrieval System (PPIRS), was merged into CPARS in 2019.

Past Performance Questionnaires are sent by the contracting office directly to the references identified in the offeror's proposal. A questionnaire asks the customer point of contact to rate the contractor's performance on specific aspects of the work, using the rating scale defined in the solicitation. Questionnaires capture information that CPARS may not, particularly for contracts below CPARS reporting thresholds or for commercial customers. More on how to structure questionnaires is covered in Section 5 below.

FAPIIS (Federal Awardee Performance and Integrity Information System) contains information on terminations for cause or default, deficiency findings, non-responsibility determinations, and administrative agreements. It is a responsibility check as much as a performance check. Contracting officers are required to review FAPIIS before award per FAR 9.104-6.

eSRS (Electronic Subcontracting Reporting System) tracks small business subcontracting plan compliance. If the solicitation includes a small business subcontracting evaluation factor, eSRS data shows whether the offeror has met its subcontracting goals on prior contracts.

Interviews with project managers, CORs, and fee determining officials provide context that structured ratings cannot. A COR who managed a contractor's day-to-day performance can explain whether a "Satisfactory" CPARS rating reflected steady performance or a troubled contract that was eventually brought under control. Interviews are optional but useful, particularly when CPARS data alone is inconclusive.

Do not rely on CPARS alone. CPARS coverage varies. Some contracts fall below reporting thresholds. Some evaluations are incomplete or overdue. Some ratings do not reflect the current state of the contractor's performance because they were completed at the midpoint of a contract. Questionnaires and interviews fill these gaps.

4 The Performance Confidence Assessment

After collecting and reviewing all past performance information, the evaluation team assigns a performance confidence assessment rating. This is the final judgment about the government's confidence that the offeror will successfully perform the solicited work. The DoD Source Selection Procedures define five ratings:

Substantial Confidence: Based on the offeror's recent/relevant performance record, the Government has a high expectation that the offeror will successfully perform the required effort.
Satisfactory Confidence: Based on the offeror's recent/relevant performance record, the Government has a reasonable expectation that the offeror will successfully perform the required effort.
Neutral Confidence: No recent/relevant performance record is available, or the offeror's performance record is so sparse that no meaningful confidence assessment rating can be reasonably assigned. A Neutral rating shall not be used as a negative assessment of the offeror.
Limited Confidence: Based on the offeror's recent/relevant performance record, the Government has a low expectation that the offeror will successfully perform the required effort.
No Confidence: Based on the offeror's recent/relevant performance record, the Government has no expectation that the offeror will be able to successfully perform the required effort.

The confidence assessment is not a simple average of individual CPARS ratings. It is an integrated judgment that considers the relevancy of each reference, the recency of the performance, the quality ratings and narratives, and any trends (improving or declining). An offeror with two Very Relevant contracts rated "Very Good" and one Somewhat Relevant contract rated "Satisfactory" might reasonably receive Substantial Confidence if the relevant work was recent and directly comparable.

Neutral is not negative. FAR 15.305(a)(2)(iv) and the DoD SSP both require that offerors without a past performance record (or with a record too sparse to assess) receive a Neutral rating. This means they can still compete. Neutral cannot be treated as a weakness or used as a discriminator against the offeror. In a best value tradeoff, the SSA may conclude that Substantial or Satisfactory Confidence is worth more than Neutral in the tradeoff decision, but the offeror with Neutral remains eligible for award.

Adverse past performance requires disclosure. If the evaluation team identifies adverse performance information (CPARS ratings of Marginal or Unsatisfactory, terminations for default, FAPIIS findings) to which the offeror has not previously had an opportunity to respond, FAR 15.305(a)(2)(ii) requires that the offeror be given that opportunity before the government relies on the information. The CO typically provides it through an Evaluation Notice (EN) during the evaluation phase.

5 How to Do Questionnaires

A past performance questionnaire is a form the contracting office sends to customer points of contact, asking them to rate and describe a contractor's performance. The government is not limited to the references the offeror provides in its proposal. The evaluation team can send questionnaires to any known customer, and can use CPARS, FAPIIS, and other government sources to identify contracts the offeror did not list. Questionnaires are particularly valuable when CPARS data is unavailable or incomplete.

The questionnaire should be structured around the same rating scale used in Section M. If your solicitation uses the DoD confidence assessment ratings, the questionnaire should use the CPARS rating scale (Exceptional, Very Good, Satisfactory, Marginal, Unsatisfactory) for individual assessment areas, because that is the scale the references are accustomed to. The evaluation team then takes the questionnaire results, along with CPARS data and any other sources, and synthesizes them into the overall confidence assessment.

A typical questionnaire covers these areas:

  • Quality of Product/Service: Did the contractor deliver work that met the contract requirements? Were there quality deficiencies or rework?
  • Schedule: Did the contractor meet delivery dates and milestones? Were there delays, and if so, were they caused by the contractor?
  • Cost Control (cost-type contracts): Did the contractor manage costs within the estimated/ceiling amounts? Were there unexplained cost growth issues?
  • Management/Business Relations: Was the contractor responsive to government direction? Did they communicate proactively about issues? Was the working relationship professional?
  • Utilization of Small Business (if applicable): Did the contractor meet its small business subcontracting plan goals?

Each area should include both a rating selection and a narrative space. The narrative is often more useful than the rating itself, because it gives evaluators context about what went well or poorly and whether any problems were the contractor's fault.
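That rating-plus-narrative structure can be enforced when questionnaire responses are logged. A minimal sketch, assuming the CPARS-style scale described above (the `QuestionnaireItem` class and its field names are hypothetical, not from any government system):

```python
from dataclasses import dataclass

# CPARS-style quality scale from the solicitation; listed best to worst.
RATING_SCALE = ["Exceptional", "Very Good", "Satisfactory", "Marginal", "Unsatisfactory"]

@dataclass
class QuestionnaireItem:
    area: str        # e.g. "Quality of Product/Service"
    rating: str      # one of RATING_SCALE
    narrative: str   # context behind the rating; required for every item

    def validate(self) -> list[str]:
        """Return a list of problems; an empty list means the item is usable."""
        problems = []
        if self.rating not in RATING_SCALE:
            problems.append(f"unknown rating: {self.rating!r}")
        if not self.narrative.strip():
            problems.append(f"missing narrative for {self.area}")
        return problems

# A rating with no narrative gets flagged for follow-up with the reference:
print(QuestionnaireItem("Schedule", "Marginal", "").validate())
# ['missing narrative for Schedule']
```

Flagging an empty narrative at intake, rather than during consensus, gives the team time to go back to the reference before the evaluation record closes.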

Sample Questionnaire Question

Quality of Product/Service. Using the rating definitions below, evaluate the contractor's overall quality of products or services delivered under this contract.

Rating: [ ] Exceptional   [ ] Very Good   [ ] Satisfactory   [ ] Marginal   [ ] Unsatisfactory

Narrative (required for all ratings): Please describe the contractor's performance in this area. Include specific examples of strengths, weaknesses, or problems encountered. If the rating is Marginal or Unsatisfactory, describe the specific deficiencies.

Set a deadline and follow up. Questionnaire response rates can be low. Include a return deadline (typically 10 to 15 business days), send reminders, and have a plan for what to do when references do not respond. You can evaluate past performance based on the information you have, but sparse data limits the strength of your confidence assessment.

Tell offerors to notify their references early. Consider including solicitation language encouraging offerors to alert their references well before proposals are due. The people who receive questionnaires (CORs, program managers, contracting officers) are often slow to respond. If the government sends questionnaires after proposal receipt and the responses come in late, that information may not be available for the evaluation. Offerors who give their references advance notice improve their own chances of a complete record.

6 Section L/M Language for Past Performance

The solicitation must tell offerors exactly what past performance information to submit (Section L) and how the government will evaluate it (Section M). The two must align. Below are sample language blocks for a best value tradeoff.

Section L — Past Performance Instructions (Example)

L.2(c) Volume III — Past Performance. Offerors shall submit past performance information for a minimum of three (3) and a maximum of five (5) contracts (including subcontracts, teaming arrangements, or joint ventures) performed within the past five (5) years that are relevant to the work described in the PWS. Relevant contracts are those involving similar scope, magnitude, and complexity to the solicited requirement.

Note that this block defines both recency (the five-year window) and relevancy (similar scope, magnitude, and complexity) explicitly; Section L should always do both.

For each contract reference, provide:

(1) Contract number and order number (if applicable)
(2) Contracting agency and contracting officer name and phone number
(3) Contract type and total contract value (including options)
(4) Period of performance
(5) Customer program manager or COR name and current phone number
(6) Brief description of the work performed, including the relevance to this solicitation
(7) Any problems encountered during performance and the corrective actions taken

The Government may use sources of past performance information other than those provided by the offeror, including but not limited to CPARS, FAPIIS, and other government databases.

This sentence preserves the government's right to look beyond what the offeror submits.

Section M — Past Performance Evaluation (Example)

M.4 Factor 3 — Past Performance. The Government will evaluate the offeror's record of recent and relevant past performance to assess the degree of confidence the Government has in the offeror's ability to successfully perform the requirements of this solicitation.

The Government will first assess the relevancy of each submitted contract using the following scale:

Very Relevant, Relevant, Somewhat Relevant, Not Relevant

The Government will then assess the quality of performance on relevant contracts using available information from CPARS, questionnaires, FAPIIS, and other government sources.

Based on the combined assessment of relevancy and quality, the Government will assign one of the following Performance Confidence Assessment ratings:

Substantial Confidence, Satisfactory Confidence, Neutral Confidence, Limited Confidence, No Confidence

Offerors with no relevant past performance history, or whose past performance information is too limited to assess, will receive a Neutral Confidence rating. In accordance with FAR 15.305(a)(2)(iv), a Neutral rating will not be evaluated favorably or unfavorably.

Match the rating scale to your source selection method. In a best value tradeoff, use the full confidence assessment scale (Substantial through No Confidence). Under LPTA, past performance is typically evaluated as Acceptable or Unacceptable: a contractor rated Satisfactory Confidence or better, or Neutral, passes, while a contractor rated Limited Confidence or No Confidence fails. The Section M language must state this clearly.

7 Past Performance Under LPTA

When the source selection method is Lowest Price Technically Acceptable, the past performance evaluation changes significantly. Under LPTA there is no tradeoff, so past performance is not rated on a relative scale. Instead, it functions as a pass/fail gate.

The typical approach is to evaluate past performance as Acceptable or Unacceptable. An offeror passes the past performance gate if its confidence assessment is Satisfactory Confidence or better. An offeror with Neutral Confidence (no record) also passes, consistent with FAR 15.305(a)(2)(iv). An offeror with Limited Confidence or No Confidence fails.

Substantial Confidence: Acceptable (Pass)
Satisfactory Confidence: Acceptable (Pass)
Neutral Confidence: Acceptable (Pass)
Limited Confidence: Unacceptable (Fail)
No Confidence: Unacceptable (Fail)

Even though the rating is pass/fail, the evaluation team still goes through the same process: assessing relevancy of each contract, reviewing CPARS and questionnaire data, and forming a confidence assessment. The difference is that under LPTA, the confidence assessment is only used to determine pass or fail, not to differentiate between offerors.

This means an Exceptional CPARS record is worth exactly the same as a Satisfactory one. A contractor with Substantial Confidence and a contractor with Satisfactory Confidence both pass, and neither has any advantage over the other. There is no tradeoff. You cannot award to a higher-priced offeror because their past performance is stronger. Once past performance clears the gate, price drives the winner.

State the pass/fail criteria in Section M. Your Section M must explicitly say which confidence assessment ratings constitute "Acceptable" and which are "Unacceptable." Leaving this undefined creates a protest vulnerability because offerors cannot determine what standard they need to meet.
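The mapping above is a straight lookup, which a short sketch makes concrete (the function and set names are ours, not from the DoD SSP; the pass/fail boundary must come from your own Section M):

```python
# Ratings that clear the LPTA past performance gate, per the mapping above.
PASSING = {"Substantial Confidence", "Satisfactory Confidence", "Neutral Confidence"}

def lpta_past_performance(confidence: str) -> str:
    """Map a performance confidence assessment to the LPTA result."""
    return "Acceptable (Pass)" if confidence in PASSING else "Unacceptable (Fail)"

print(lpta_past_performance("Neutral Confidence"))  # Acceptable (Pass)
print(lpta_past_performance("Limited Confidence"))  # Unacceptable (Fail)
```

Note how Neutral lands on the passing side: an offeror with no record cannot be failed for lacking one.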

8 Documenting the Evaluation

The past performance evaluation must be documented thoroughly enough that anyone reviewing the record (including a GAO attorney reviewing a protest) can understand how the evaluation team arrived at its confidence assessment. The documentation should address every contract reference the offeror submitted, state the relevancy determination for each, summarize the quality information obtained, and explain how those inputs led to the overall rating.

At minimum, the evaluation narrative for each offeror should include:

  • A list of each contract reference with its relevancy rating and the basis for that rating
  • A summary of the CPARS data, questionnaire responses, and any other sources reviewed
  • Identification of any performance trends (improving, declining, or consistent)
  • The overall performance confidence assessment rating with a clear rationale connecting the evidence to the rating definition
  • If adverse information was found, documentation that the offeror was notified and given an opportunity to respond, and a summary of their response

Common Evaluation Errors

Inconsistent relevancy determinations across offerors are a frequent protest finding. If one offeror's IT help desk contract is rated Very Relevant and another's IT help desk contract of similar scope is rated Somewhat Relevant, the record needs to explain why.

Ignoring negative information is another risk. If a CPARS report shows a Marginal rating in one area but the evaluation narrative does not discuss it, the record looks like the evaluators either missed it or chose to ignore it. Address every data point, positive or negative.

A third error is failing to distinguish the contractor's fault from external factors. A schedule delay caused by a government-directed stop-work order is not a contractor performance failure. The evaluation team should note the circumstances when explaining the significance of any negative findings.

References

FAR 15.305(a)(2) — Past Performance Evaluation. The core FAR provision governing past performance evaluation in source selections, including the requirement to evaluate recent and relevant performance and the Neutral rating rule.

DoD Source Selection Procedures (Aug 2022). Section 3.1.3 covers the past performance evaluation process, including Tables 4 (Relevancy) and 5 (Confidence Assessments). The authoritative DoD reference for performance confidence ratings.

CPARS — Contractor Performance Assessment Reporting System. The government-wide system for recording and retrieving contractor performance evaluations. Evaluation teams access contractor records directly in CPARS, which absorbed the former Past Performance Information Retrieval System (PPIRS) in 2019.

FAPIIS — Federal Awardee Performance and Integrity Information System. Contains terminations for cause/default, deficiency findings, non-responsibility determinations, and administrative agreements. Required review per FAR 9.104-6.

FAR 42.15 — Contractor Performance Information. Governs when and how contracting officers are required to prepare contractor performance evaluations (CPARS reports). Covers reporting thresholds and timelines.

FAR 9.104-6 — FAPIIS Review Requirement. Requires the contracting officer to check FAPIIS for any offeror being considered for award before making a responsibility determination.

eSRS — Electronic Subcontracting Reporting System. Tracks contractor compliance with small business subcontracting plan goals. Relevant when the solicitation includes a small business utilization evaluation factor.