The biggest myth in government contracting is that you cannot compare proposals to each other. This page explains why that is wrong, when comparative analysis is appropriate, and what a real comparative evaluation actually looks like.
Spend enough time in federal acquisition and you will eventually hear some version of the following statement from a supervisor, a legal reviewer, a training class, or a peer:

> "You cannot compare proposals to each other. Each proposal must be evaluated only against the solicitation."

It sounds authoritative, and it gets repeated so often that new contracting officers accept it as a rule. It is not a rule. It is a misreading of FAR 15.305 that has been passed down through generations of contracting shops with no one stopping to check the actual text or the GAO case law that explains it.
Every acquisition ever conducted by the federal government has involved comparing offers to each other. You cannot select a winner without making some kind of comparison. The only question is whether you do the comparison explicitly in your evaluation record or pretend you did not do it and hide it behind vague language.
The purpose of this training is to dismantle the myth, show you the FAR and case law that actually govern comparative analysis, and walk through a worked example so you can see what a real comparative evaluation looks like on paper.
Before we get into FAR citations and GAO decisions, consider the simplest case in the book: lowest price technically acceptable (LPTA).
In an LPTA acquisition, the contracting officer evaluates each proposal against the stated technical criteria, records whether each one is acceptable or unacceptable, and then awards to the lowest-priced acceptable offeror. Every LPTA acquisition ever conducted follows this pattern.
Here is the question nobody likes to answer directly: how do you know which offer is the lowest price?
The answer is that you compare the prices. You lay out the three, five, or twelve acceptable proposals side by side and you compare the dollar amounts. That is a comparative evaluation. There is no other way to identify the lowest price. If comparative analysis were actually prohibited, LPTA source selection would be mathematically impossible.
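To see the point in miniature, here is a small sketch of the two LPTA steps: a pass/fail screen against the stated criteria, then selection of the lowest price. The offeror names and dollar figures are invented for illustration; nothing here models a real procurement system. The second step is a comparison, and there is no way to write it without one.

```python
# Hypothetical LPTA award logic. Offerors and prices are invented.
offers = [
    {"offeror": "Offeror A", "price": 1_240_000, "acceptable": True},
    {"offeror": "Offeror B", "price": 1_105_000, "acceptable": True},
    {"offeror": "Offeror C", "price": 980_000, "acceptable": False},  # failed a stated criterion
]

# Step 1: evaluate each proposal against the stated criteria (pass/fail, no ranking).
acceptable = [o for o in offers if o["acceptable"]]

# Step 2: award to the lowest-priced acceptable offer. min() works by
# comparing every acceptable price to the others -- a comparative evaluation.
awardee = min(acceptable, key=lambda o: o["price"])
print(f"Award to {awardee['offeror']} at ${awardee['price']:,}")
# -> Award to Offeror B at $1,105,000
```

The `min()` call is the entire award decision, and it is nothing but comparison. Offeror C's lower price never enters the comparison because the acceptability screen removed it first, which is exactly how LPTA is supposed to work.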
FAR 15.305(a) is the usual source of the myth. It requires proposals to be evaluated "solely on the factors and subfactors specified in the solicitation." People read that and conclude that the evaluation must be solely against the solicitation, with no offeror-to-offeror comparison.

That is a misreading, and the full sentence refutes it: FAR 15.305(a) directs agencies to "evaluate competitive proposals and then assess their relative qualities solely on the factors and subfactors specified in the solicitation." Assessing relative qualities is comparison. What the provision prohibits is evaluating against unstated factors. It does not prohibit evaluating the relative merits of proposals against each other using the factors that are stated in the solicitation. Those are two completely different things.
FAR 15.305(a)(2), the past performance provision, points the same way. FAR 15.305(a)(2)(iii) provides: "The evaluation should take into account past performance information regarding predecessor companies, key personnel who have relevant experience, or subcontractors that will perform major or critical aspects of the requirement when such information is relevant." When you weigh one offeror's past performance record against another's, you are doing comparative analysis.
FAR 15.308 (Source Selection Decision) states that the source selection authority's decision "shall be based on a comparative assessment of proposals against all source selection criteria in the solicitation." The FAR literally uses the phrase "comparative assessment" in the operative decision rule. A comparative assessment is, by definition, comparing proposals to each other against the stated criteria.
FAR 16.505(b)(1)(ii) for task and delivery orders under multiple-award IDIQ contracts explicitly authorizes streamlined fair opportunity procedures, which can include comparative evaluations. The FSS procedures in FAR 8.405 likewise allow comparative tradeoffs. Across Parts 8, 15, and 16, the FAR supports comparative evaluation as a legitimate tool when the solicitation is structured to use it.
GAO has repeatedly upheld comparative evaluations when the solicitation is structured to allow them and the record supports the comparison. The common thread in GAO's decisions is this: comparative assessments are permissible and often required when the evaluation involves a tradeoff or a relative merit determination. The problems GAO calls out are not about the existence of comparative analysis. They are about comparative analysis that is not documented, not tied to stated criteria, or not applied consistently across offerors.
Under FAR 16.505 fair opportunity procedures, GAO has been even more explicit. Task order source selections frequently use streamlined evaluations that focus on relative distinctions among a small pool of holders of the same IDIQ contract. GAO regularly upholds these "comparative assessments" as long as the evaluation was tied to the stated criteria and applied consistently.
In best value tradeoff source selections under FAR subpart 15.3, the entire tradeoff analysis is inherently comparative. The SSA cannot weigh a technical advantage against a price premium without comparing the offerors on both dimensions. FAR 15.308 requires the SSA's decision to be based on "a comparative assessment of proposals against all source selection criteria." You cannot perform a tradeoff without doing comparative analysis, and the FAR and case law recognize that.
A baseline-only evaluation reads like this: "Vendor A met the requirement. Vendor B met the requirement. Vendor C met the requirement." That is the default output of someone who has been trained to think comparisons are prohibited. It is useless to anyone trying to understand why the award went where it did.
A comparative evaluation reads like this: "Vendor A proposed a staffing model with 6 FTEs dedicated to this requirement, including two senior engineers with active Top Secret clearances. Vendor B proposed 4 FTEs with one senior engineer holding an interim clearance. Vendor C proposed 3 FTEs without clearances. Under the stated evaluation criterion for qualified personnel, Vendor A's proposal offers the strongest alignment, Vendor B is acceptable but less aligned, and Vendor C presents a material gap because cleared personnel are required to access the facility described in PWS Section 3.2."
Both evaluations look at the same proposals against the same criterion. The first one tells you nothing. The second one tells you everything. The second one also survives a protest because it ties every observation back to the stated criterion and applies the same criterion to every offeror.
Comparative analysis is appropriate in the following situations:
| Situation | Why Comparative Analysis Fits |
|---|---|
| Best value tradeoff source selections | The tradeoff analysis is inherently comparative. You cannot weigh technical superiority against a price premium without comparing offerors on both dimensions. |
| FAR 16.505 fair opportunity task order competitions | Streamlined procedures under FAR 16.505(b)(1)(ii) specifically contemplate comparative evaluations among holders of the same IDIQ contract. |
| FAR 8.405-2(d) FSS tradeoffs | FSS buy procedures allow tradeoffs between technical/past performance and price, which require comparative assessment. |
| Simplified acquisitions with multiple quotes | When awarding to other than the lowest-priced quote, you have to explain why the chosen quote offers better value, which is a comparison. |
| Past performance evaluations | FAR 15.305(a)(2) governs past performance evaluation; weighing the relevance and quality of offerors' records inherently involves comparing them to each other and to the requirement. |
| Any acquisition where the solicitation says so | If the solicitation states that proposals will be evaluated on a comparative basis, the evaluation record should reflect that approach. |
Comparative analysis is a tool, not a blank check. The documentation standards below, and the common mistakes cataloged after them, mark the boundaries where a comparison gets an evaluation into trouble.
A comparative evaluation has to live on paper or it did not happen. The documentation standard is not different in kind from any other evaluation; it is different in what it emphasizes. When you document a comparative evaluation, make sure the record shows the following (a short sketch of the consistency check follows the table):
| Element | What to Include |
|---|---|
| Stated criterion | Quote or cite the specific evaluation factor or subfactor from the solicitation. The comparison must be anchored to something the offerors were told about. |
| Offeror findings | What each offeror actually proposed, in the offeror's own terms. Paraphrase accurately. Do not summarize in a way that changes the substance. |
| Relative assessment | How each offeror's proposal compares to the others on the stated criterion. Name the distinction in specific terms and quantify where possible. |
| Consistency check | Apply the same criterion to every offeror. If you credited Offeror A for a capability, you must evaluate Offeror B and Offeror C for the same capability and document whether they offered it. |
| Tie to stated criteria only | Every observation in the comparative analysis must connect back to a factor or subfactor in the solicitation. If you find yourself crediting an offeror for something the solicitation did not ask about, stop and take it out. |
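The consistency check is the one element in the table that lends itself to a mechanical test. The sketch below is a hypothetical illustration: the record structure, field names, and findings are all invented, and the FAR prescribes no particular format. It models each documented finding as a row anchored to a stated criterion and flags any capability that was assessed for one offeror but never documented for another.

```python
# Hypothetical model of a comparative evaluation record.
from collections import defaultdict

OFFERORS = ["Offeror A", "Offeror B", "Offeror C"]

# Each finding: (stated criterion, capability assessed, offeror, finding text).
findings = [
    ("Factor 1(a)", "rollback rehearsal", "Offeror A", "tested rehearsal before each wave"),
    ("Factor 1(a)", "rollback rehearsal", "Offeror B", "not proposed"),
    # No rollback-rehearsal finding for Offeror C: a consistency gap.
]

def consistency_gaps(findings, offerors):
    """Flag capabilities assessed for some offerors but never
    documented for the rest."""
    evaluated = defaultdict(set)
    for criterion, capability, offeror, _ in findings:
        evaluated[(criterion, capability)].add(offeror)
    return [
        (criterion, capability, offeror)
        for (criterion, capability), seen in evaluated.items()
        for offeror in offerors
        if offeror not in seen
    ]

for criterion, capability, offeror in consistency_gaps(findings, OFFERORS):
    print(f"{criterion}: no documented finding on '{capability}' for {offeror}")
# -> Factor 1(a): no documented finding on 'rollback rehearsal' for Offeror C
```

A gap flagged by a check like this is exactly the disparate-treatment pattern described in the mistakes table that follows: a strength credited to one offeror with no record of whether the others offered the same thing.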
These are the mistakes that most often sink a comparative evaluation:

| Mistake | Why It Hurts |
|---|---|
| Pretending no comparison happened | The record ends up full of "meets the requirement" language that does not explain the tradeoff. GAO sees right through this when a protest is filed and asks how the CO arrived at the decision without comparing anything. |
| Comparative language tied to unstated factors | Praising Offeror A for offering a capability the solicitation never asked about is the exact unstated-factor problem that FAR 15.305(a) actually prohibits. The fix is to compare on stated factors, not to stop comparing. |
| Inconsistent application | Crediting Offeror A for a strength but failing to check whether Offeror B offered the same thing creates disparate treatment, which is one of the most common successful protest grounds. |
| Vague relative language | "Offeror A was stronger than Offeror B" without explaining how leaves the record undefended. Name the specific capability, quote or cite the specific solicitation factor, and explain the distinction. |
| Hiding the comparison in the SSDD | Running a mechanical evaluation with no comparative language and then writing a narrative tradeoff in the decision document only makes the SSDD look like it came out of thin air. The comparative work should be visible in the evaluation record, then restated in the SSDD. |
You are the contracting officer for a task order competition under a multiple-award IDIQ contract for cloud engineering services. Three IDIQ holders are competing for a one-year task order for cloud migration support services, estimated at approximately $1.4M. The competition is conducted under FAR 16.505(b)(1)(ii) fair opportunity procedures. The solicitation uses a best-value tradeoff approach with two non-price factors and price.
The solicitation states that proposals will be evaluated "on a comparative basis" against the stated factors, and that non-price factors combined are more important than price. The following sections show the relevant evaluation factors, summaries of the three proposals, and then two versions of the technical evaluation record: a weak non-comparative version and a strong comparative version. Compare the two and you will see why the second is the right approach.
E.1 Basis for Award. The Government will award this task order to the offeror whose proposal represents the best value to the Government, considering the non-price factors and price, evaluated on a comparative basis against other offerors.
E.2 Factor 1 – Migration Approach. The Government will evaluate the offeror's proposed approach to migrating the in-scope systems described in PWS Section 3. The evaluation will consider the following subfactors in order of importance: (a) risk management and rollback strategy; (b) discovery and dependency mapping methodology; (c) cutover sequencing; and (d) downtime minimization.
E.3 Factor 2 – Key Personnel. The Government will evaluate the qualifications and relevant experience of the proposed program manager, lead solutions architect, and lead migration engineer. Required qualifications are stated in PWS Section 5. Offerors may propose additional qualifications beyond the minimum.
E.4 Factor 3 – Price. The Government will evaluate the total proposed price for the base period for reasonableness. Price will not be scored but will be considered in the best-value tradeoff.
E.5 Relative Importance. Factor 1 is more important than Factor 2. The non-price factors combined are more important than price. The Government will pay a price premium for technical superiority only when the advantage is commensurate with the premium.
Three IDIQ holders submitted proposals. Below is a condensed summary of each, focused on the subfactors under Factor 1 (Migration Approach) and Factor 2 (Key Personnel).
**Northpoint Cloud Solutions**

Risk management and rollback: Proposes a rollback plan for each major migration wave with pre-cutover snapshot validation. Does not describe a tested rollback rehearsal or a specific roll-forward / roll-back decision window.
Discovery and dependency mapping: Describes the use of a commercial dependency mapping tool (named) and a two-week discovery phase. Does not describe how findings will be validated with the system owners.
Cutover sequencing: Proposes three migration waves sequenced by application tier. Notes that wave order will be "coordinated with the government." No proposed sequencing rationale tied to the systems in PWS Section 3.
Downtime minimization: Proposes weekend cutover windows. Does not quantify a target downtime.
Key personnel: Program manager with 7 years of cloud migration experience (meets minimum). Lead solutions architect with 9 years of experience and AWS Solutions Architect – Professional certification. Lead migration engineer meets minimum qualifications. No resume cross-referencing to prior federal cloud engagements.
**Meridian Federal Cloud Group**

Risk management and rollback: Proposes a formal risk register maintained weekly, a tested rollback rehearsal conducted before each wave, and a defined decision window of four hours post-cutover during which the government can invoke rollback. Includes a named rollback decision authority and communication plan.
Discovery and dependency mapping: Proposes a three-phase discovery using an automated tool (named) followed by structured interviews with system owners, cross-validation against the existing CMDB, and a documented dependency map delivered as a signed artifact before wave planning begins.
Cutover sequencing: Proposes wave sequencing by dependency risk: low-dependency systems first, high-dependency systems last, with explicit mapping to the specific systems named in PWS Section 3.2. Provides a draft wave order in the proposal for government review.
Downtime minimization: Commits to a target of under four hours of system downtime per wave, with a blue/green cutover pattern for the two highest-availability systems in scope. Quantifies the target and describes the technical approach to meet it.
Key personnel: Program manager with 11 years of cloud migration experience, including three prior federal cloud migrations at comparable scale, with resume references to specific agency engagements. Lead solutions architect holds AWS Solutions Architect – Professional plus Azure Solutions Expert certifications and is named on two published agency migration case studies. Lead migration engineer has 6 years of experience and prior federal cloud migration experience. All three key personnel exceed the minimum qualifications in PWS Section 5.
**Bluegate Systems LLC**

Risk management and rollback: Describes rollback as "available if required" without a specified rollback plan, decision window, or rehearsal. Risk management is described generically as "industry best practice."
Discovery and dependency mapping: Describes a discovery phase without specifying tools, duration, or a documented dependency deliverable. Does not describe validation with system owners.
Cutover sequencing: States that cutover will be "phased based on government priorities." Does not propose a sequence, a rationale, or a tie to PWS Section 3 systems.
Downtime minimization: States that the offeror will "minimize downtime to the extent possible." Does not propose a target or a technical approach.
Key personnel: Program manager meets minimum qualifications. Lead solutions architect meets minimum qualifications (no certifications identified). Lead migration engineer meets minimum qualifications. No narrative describing prior federal cloud migration experience.
Below is a Factor 1 evaluation written in the style of a CO who believes comparative analysis is prohibited. Notice that it describes each proposal in isolation and produces ratings that look the same across two very different proposals.
Northpoint Cloud Solutions – Acceptable. The offeror proposes a rollback plan, describes the use of a commercial dependency mapping tool, proposes three migration waves, and proposes weekend cutover windows. The proposal meets the requirements of Factor 1.
Meridian Federal Cloud Group – Acceptable. The offeror proposes a risk register, a three-phase discovery, wave sequencing by dependency risk, and blue/green cutover for high-availability systems. The proposal meets the requirements of Factor 1.
Bluegate Systems LLC – Acceptable. The offeror describes a phased cutover, discovery phase, and rollback capability. The proposal meets the requirements of Factor 1.
All three offerors are rated Acceptable under Factor 1.
Below is the same Factor 1 evaluation, rewritten as a true comparative assessment. Notice that it walks through each subfactor, describes what each offeror proposed in specific terms, and draws explicit distinctions tied to the stated criteria.
Factor 1 has four subfactors in descending order of importance: (a) risk management and rollback strategy, (b) discovery and dependency mapping methodology, (c) cutover sequencing, and (d) downtime minimization. The evaluation compares the three proposals against each subfactor, then summarizes the relative ratings.
**Subfactor 1(a) – Risk Management and Rollback Strategy**

Meridian Federal Cloud Group offered the most developed rollback approach among the three. The proposal includes a formal risk register maintained weekly, a tested rollback rehearsal conducted before each migration wave, a defined four-hour post-cutover decision window during which the government can invoke rollback, a named rollback decision authority, and a communication plan. This level of specificity directly addresses the subfactor and provides the government with an auditable rollback process.
Northpoint Cloud Solutions proposed a rollback plan for each wave with pre-cutover snapshot validation. The approach is serviceable and represents a reasonable baseline, but the proposal does not describe a rehearsed rollback, a decision window, or a named rollback authority. Compared to Meridian, Northpoint's approach is less developed but meets the subfactor.
Bluegate Systems LLC described rollback as "available if required" without a specific rollback plan, rehearsal, decision window, or authority. Compared to both Meridian and Northpoint, this approach lacks any of the specific elements either competitor proposed. Bluegate does not demonstrate an equivalent rollback capability under the stated subfactor.
**Subfactor 1(b) – Discovery and Dependency Mapping**

Meridian proposed a three-phase discovery combining an automated dependency mapping tool with structured interviews with system owners, cross-validation against the existing CMDB, and a documented dependency map delivered as a signed artifact before wave planning begins. This provides the government with a concrete discovery deliverable and a validation step the other proposals do not offer.
Northpoint described use of a commercial dependency mapping tool and a two-week discovery phase, but the proposal does not describe validation of the tool's findings with system owners or a signed artifact. Compared to Meridian, Northpoint proposes comparable tool use but a thinner validation process and no formal deliverable.
Bluegate described a discovery phase in general terms without identifying tools, a timeline, or a deliverable. Compared to both Meridian and Northpoint, Bluegate's approach is materially less developed on this subfactor.
**Subfactor 1(c) – Cutover Sequencing**

Meridian proposed wave sequencing by dependency risk (low-dependency systems first, high-dependency systems last), with explicit mapping to the specific systems named in PWS Section 3.2, and a draft wave order included in the proposal for government review. This is the only proposal that applies a specific sequencing rationale to the PWS systems.
Northpoint proposed three migration waves sequenced by application tier with wave order "coordinated with the government." The proposal does not tie the sequencing to the specific PWS 3.2 systems or propose a draft order. Compared to Meridian, Northpoint's approach defers too much of the sequencing decision to a later coordination discussion.
Bluegate stated that cutover will be "phased based on government priorities" without proposing a sequence, rationale, or PWS tie. Compared to both competitors, Bluegate offers nothing specific under this subfactor.
**Subfactor 1(d) – Downtime Minimization**

Meridian committed to a target of under four hours of system downtime per wave and proposed a blue/green cutover pattern for the two highest-availability systems in scope. This is the only proposal to quantify a downtime target and tie a specific technical approach to meeting it.
Northpoint proposed weekend cutover windows without quantifying a downtime target or describing a technical approach to minimize downtime within those windows. Compared to Meridian, Northpoint addresses the subfactor at a surface level.
Bluegate committed to "minimize downtime to the extent possible" without a target or technical approach. Compared to both competitors, Bluegate's response is the weakest on this subfactor.
**Summary and Relative Ratings**

Applying the four subfactors in descending order of importance:
Meridian Federal Cloud Group – Good. Meridian demonstrates meaningful strengths under all four subfactors of Factor 1. The risk management and rollback approach (the most important subfactor) exceeds what the other offerors proposed in specificity and process rigor. Discovery includes a validation step and a signed deliverable. Cutover sequencing is tied to specific PWS 3.2 systems. Downtime is quantified and tied to a technical approach. These findings directly address stated subfactors and are not dependent on any criterion outside the solicitation.
Northpoint Cloud Solutions – Acceptable. Northpoint meets the Factor 1 subfactors with a baseline approach on each one. The proposal does not demonstrate the rollback specificity, discovery validation, sequencing rationale, or downtime quantification that Meridian proposed. No weaknesses rise to the level of impairing performance.
Bluegate Systems LLC – Marginal. Bluegate's proposal under Factor 1 is substantially less developed than either competitor's on every subfactor. Rollback is undefined, discovery is generic, cutover sequencing is undefined, and the downtime response commits to no target. The proposal meets the minimum requirements of the subfactors only in the loosest sense and offers the government no assurance on the most important subfactor (rollback).
Five scenarios covering the core ideas from this page. For each one, choose the option most consistent with a defensible comparative evaluation.
A senior CO reviews your draft technical evaluation and tells you: "You cannot compare offerors to each other. Each proposal has to be evaluated only against the solicitation, not against the other proposals. Rewrite this."
What is the best response?
You are running an LPTA acquisition. Four offers are received; three are found technically acceptable. A colleague asserts that no comparison between proposals is permitted in federal contracting.
How do you make the award decision?
You are conducting a comparative evaluation on a task order competition. Your draft assigns Offeror A a strength for offering 24/7 help desk coverage. The solicitation's evaluation criteria did not mention help desk coverage at all.
What is the correct fix?
Your comparative evaluation credits Offeror A with a strength for proposing a formal risk register and weekly risk reviews. You realize you did not check whether Offeror B and Offeror C offered the same thing.
What do you do?
You are writing a Factor 2 (Past Performance) comparative assessment. Which paragraph best illustrates a defensible comparative analysis?