Where it earns its keep. Where it is overhyped. Where the approved-tool mandate is making federal contracting worse instead of better.
The rest of this track covers the tools, how I build with them, and the short list of places they do not belong. This page is where I put a stake in the ground on the question I get asked most: how much of this should we actually be using?
Short answer: more than most contracting offices are using now, in a specific set of places, with a clear sense of where it should not go. The long answer is the next few sections.
Every one of these is something I do in a normal week, on unclassified and non-proprietary material, with the best commercial tool I can get my hands on.
Email drafts, memos for record, PNM entries, customer updates, the opening paragraph of a market research report, the cover email for a package you are routing. Text in, text out. No data you do not already own leaves your head. This is the high-frequency, low-drama use case, and it is the bulk of the value for most COs.
Paste in a FAR part you have not touched in two years and ask for the plain-English read. Walk through a clause with a program manager who just rotated in. Prep for a warrant-board oral by having the model role-play a panel member. Frontier models are remarkable at this when you give them the source material and ask clean questions. It is the closest thing to a patient, always-available, infinitely-recallable study partner that has ever existed.
Synopsis skeletons, PWS outlines, a rough first draft of a D&F for a buy you have not written before. You were going to edit the thing into the ground anyway. Let AI get you to a bad first draft in five minutes so you can spend the rest of the hour making it right. Starting is the expensive step.
Covered at length on Building Tools with AI. Excel workbooks with the right formulas baked in, scoring templates, IGCE roll-ups, mini dashboards. AI builds the thing. You run it on your own machine with the real numbers. None of the data ever crosses the chat window.
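To make the pattern concrete, here is a minimal sketch of what "AI builds the thing, you run it with the real numbers" looks like in practice. This assumes Python with the openpyxl library; the file name, column layout, and formulas are illustrative placeholders, not a prescribed template.

```python
# Minimal sketch of the pattern: the AI writes a script like this in the chat,
# you run it on your own machine, and only then does the workbook ever touch
# real numbers. Assumes Python with openpyxl installed; names and layout are
# illustrative placeholders.
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.title = "IGCE Roll-up"

# Header row plus ten blank line-item rows -- no actual pricing data anywhere.
ws.append(["CLIN", "Description", "Qty", "Unit Cost", "Extended Cost"])
for row in range(2, 12):
    ws[f"E{row}"] = f"=C{row}*D{row}"   # extended-cost formula baked in

# Total row sums the extended-cost column.
ws["D12"] = "Total"
ws["E12"] = "=SUM(E2:E11)"

wb.save("igce_rollup_template.xlsx")    # fill in Qty and Unit Cost offline
```

The point of the split: the model only ever sees the request for structure and formulas. The quantities and unit prices get typed in on your own machine, after the chat window is closed.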
Quiz me on Part 19. Explain every exception to full and open competition. Walk me through how a CPARS rating gets challenged. Simulate a source-selection dispute and make me defend the position. This is where I have personally gotten the most out of AI during board prep, and it is a use case almost nobody talks about because it is quiet and individual.
Take a paragraph of FAR or DFARS language and rewrite it for a program office, an operational commander, a resource advisor, or a customer who does not live in this world. The workforce that can do that translation well is the workforce customers want to work with. AI makes that translation cheap, which makes you faster and more responsive.
Text you own going in, text you will edit going out. That is the sweet spot. Everything in this section fits that pattern. Nothing sensitive is riding along, and your judgment is the final filter on the way out.
These are the pitches that sound good in a conference keynote and fall apart on contact with the job.
It will not, and it should not, at least not without explicit disclosure to offerors up front. The full argument is on When NOT to Use AI. The short version: source selection is a defensible-judgment exercise, and "the model said so" is not a rationale that survives a GAO protest or a CO's own review of the record.
The D&F is the written record of your judgment, and the signature on it is the decision. Let AI help with structure and language. Do not let it originate the reasoning. If a reviewer reads your D&F and hears a chatbot's voice, they are going to ask who was actually deciding. That is not a conversation you want to have.
A prompt like "do market research for X" gets you a confident-sounding collage of plausible nonsense. AI is excellent when you hand it public information you have already gathered and ask it to summarize, compare, or pull patterns. It is a liability when you ask it to invent the data as well as the analysis. Hand it the raw material. Let it shape the output.
Models hallucinate FAR clause numbers, DFARS paragraph letters, and case names with a straight face. If a model cites a reg, pull the reg and verify. If the citation does not resolve, the citation does not exist. This is not a future-model problem. This is a today problem on every tool on the market.
It will not. The signature is the decision, and the decision is the job. AI can carry a lot of the writing, a lot of the editing, a lot of the explaining. It cannot carry the warrant. The contracting shop shrinks where AI handles the repetitive work; it does not vanish.
This is the part of the conversation the federal AI space does not want to have out loud, and it is the one I think matters most.
The pattern: an agency or service stands up an approved AI tool. Sometimes it is a chat wrapper in front of a weaker foundation model. Sometimes it is a branded portal running a capability floor from a year or two ago. The workforce gets a policy that reads as "use this, do not use anything else." Meanwhile the frontier models are a browser tab away and an order of magnitude better for the unclassified, non-proprietary work that makes up most of a CO's day.
A few things happen when the approved-tool mandate is that wide: work that could be faster and sharper gets done on the weakest tool available, the workforce notices the gap immediately, and the people who only ever see the mandated version fall further behind the ones who have used something better.
The security problem this mandate is trying to solve is real. ATO, authorized use policies, FedRAMP, contract vehicles, the specific conditions under which a model can legally see a given class of data: none of that is negotiable, and none of it is fake. The people standing up the approved tools are not doing it out of ignorance. They are doing it because somebody has to.
The ask is not to throw those rules out. It is to evolve the policy from "use this specific model for everything" to "use the best tool available for the class of data in front of you." For unclassified, non-proprietary, non-SSS work (which, again, is most of the job), letting the workforce use the best tool on the market makes contracting faster, more accurate, and more responsive. Mandating the weakest tool does the opposite, and the workforce already knows it.
Govern the data, not the model. Keep the hard rules around proprietary, CUI, SSS, classified, PII, and privileged material. Stop pretending a mandated internal portal with a year-old capability floor is the right answer for everything outside those rings. The workforce is ready to use better tools responsibly. The policy has to let them.
If I had to compress all of this into something you could repeat in a meeting:
AI is not going to fix federal contracting by itself, and it is not going to take anybody's job in the next five years. What it will do, for the COs who lean into it, is hand back hours every week, make your writing tighter, make your customer interactions better, and make your study habits sharper. That is a substantial unlock for the workforce that uses it well.
The workforce that does not use it, because the only version they have been handed is the weak one, is getting left behind. That is a policy problem, and it is fixable. The first step is talking about it honestly.
Disagree with any of this? I would rather hear it than not. Send me the pushback.