How I think about AI, why I'm building this section, and what you should expect from the rest of it.
I love AI. I treat it like an intern. Capable, new, and worth giving real work. I ask it to do something, then I review what came back. Sometimes I send it back for another pass. Sometimes I rewrite the thing myself because the draft was worse than my own starting point. That back-and-forth is the job.
Everything on this site is built on that assumption. AI should be augmenting the workforce, not replacing it, and this site is my attempt to prove that out in public. Nowhere on ContractingHQ will you find a "hey AI, do this project for me" button. What you will find is AI being used for the things it is actually good at, starting with web design, which is a skill I don't have and was never going to pick up on a reasonable timeline.
AI is also dramatic and occasionally wanders off. A good chunk of my back-and-forth with it is telling it to stop being so punchy in the training copy, or reminding it we're in Part 12 when it decides we're running a major source selection. It stays on a short leash.
Structure. This might be the single best thing AI does. If I'm rambling or overexplaining, it can organize the noise into a clean framework. It builds the scaffolding. I fill in the expert content. That division of labor is where AI earns its keep on this site.
Translation. Turning acquisition jargon into plain language for a customer, or a long technical document into a tight summary. AI is good at this. Use it.
Brainstorming. This one surprised me. I've been doing this for over a decade. I'm hard-headed and stuck in my ways. I cannot count the number of times AI has asked "have you thought about it this way?" and opened up an angle I hadn't considered, even on this site, which is basically a public sandbox for how I use AI. The peer mode is real.
You can ask AI to do market research. You can ask it to draft a solicitation. You can ask it for a Section L, a Section M, an evaluation plan. It will hand you a document in minutes.
Go ahead. Mess around with it. Break things. But if you have never written one of those documents yourself, you are in no position to judge what came back. That is the whole game.
This is why you have to become an expert, and you have to treat that like the actual job. If you don't, you will be replaceable, and the thing replacing you will be faster and cheaper than you are. That's the honest version.
Taking your time to get there is fine. Nobody walks into this career knowing a J&A from a D&F, or why a Part 16 decision matters, or what the FAR Overhaul actually did to Part 15. But the mindset is what separates the people who will thrive from the people who won't. If your default question is "what checklist do I follow to get the end result," you are already behind the COs whose default question is "how do I improve the process, improve myself, and beat the status quo?"
Hallucinations are not a theoretical risk. Attorneys have been publicly sanctioned for filing briefs with fake case citations, and we live in the same regulatory-citation space. Nothing will torch your credibility faster than a memo citing a FAR paragraph that doesn't exist, or a GAO decision the model invented, because you couldn't be bothered to verify the work. A sloppy output just broadcasts that you got lazy with a tool that wrote your doc in minutes instead of days.
AI can also make you lazy. Read every word. Fix every mistake. Own the output before you put your name on it.
I've got a weird point of view on prompt engineering. A single prompt is not going to make or break your output.
People who work with AI every day stop treating prompts as magic incantations and start thinking of the whole thing as a relationship. You learn the model. The model gets used to the kind of work you give it. I kind of know what Claude is going to produce before I hit enter, same with ChatGPT. That familiarity is worth more than any "10 prompts that 10x your output" list floating around LinkedIn.
What actually moves the needle is giving AI something real to match.
Take an EPB. If I hand Claude an EPB I love and say "match this writing style, here are my bullets for this year," I'm going to get something close to shippable. If I say "write my EPB, here's what I did this year" with no reference, I get AI slop. Generic, breathless, full of phrases nobody uses out loud.
Same logic works across the board. Drafting a J&A? Give it a good one you've written and tell it to match the structure and tone. Writing a memo for the record? Same move. Building a customer briefing? Same move. AI can absolutely sound like you. It just needs you to show it what "sounding like you" actually means.
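If you'd rather script that move than paste into a chat window, here's roughly what it looks like. This is a minimal sketch using the Anthropic Python SDK; the file names, the system prompt, and the model alias are illustrative stand-ins I picked for the example, not a recipe.

```python
# Minimal sketch of "give it something real to match" via the
# Anthropic Python SDK. File names, system prompt, and model alias
# are illustrative stand-ins, not a fixed recipe.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

reference_epb = open("epb_i_loved.txt").read()             # a finished doc you'd ship again
this_years_bullets = open("bullets_this_year.txt").read()  # raw notes, not prose

message = client.messages.create(
    model="claude-sonnet-4-5",  # swap in whatever current model you use
    max_tokens=2000,
    system=(
        "You are drafting a performance brief. Match the reference "
        "document's structure, tone, and sentence length. Do not add "
        "accomplishments that are not in the bullets."
    ),
    messages=[{
        "role": "user",
        "content": (
            f"Reference document to match:\n{reference_epb}\n\n"
            f"This year's raw bullets:\n{this_years_bullets}\n\n"
            "Draft this year's version in the same style."
        ),
    }],
)
print(message.content[0].text)
```

The reference document is doing most of the work there. The prompt is almost boring, and that's the point.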
Here is the insight that changed how I use this stuff: the interesting question is not "can AI write my doc?" It's "can I build a tool that produces a consistent doc every time?"
If I tell an end user to "do market research," the output is going to be all over the map. A two-page memo from one person. A screenshot of a SAM.gov search from another. Nothing at all from a third. That is a bad workflow, and swapping in AI without structure doesn't fix it.
Better: build a tool that asks for the inputs you actually want, produces the shape of output you actually need, and puts the review burden on the CO where it belongs. Do that once, and every user downstream gets something predictable and reviewable.
That's what most of the tools on this site are. Guided workflows that produce something a CO can actually review and approve, like Ollie's Requirements Builder, the PWS Builder, and the Clause Matrix. AI helped me build them. The tools themselves are just code, so every user gets the same predictable output. Nothing is off freelancing in somebody's Word doc.
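If you want to see the shape of the idea in code, here's a toy version. Every field name and the template are invented for illustration, and this is not how any of the site's tools are actually implemented, but it shows what "the structure lives in code" means: fixed inputs in, fixed shape out, and a draft that arrives pre-labeled for CO review.

```python
# Toy sketch of a guided workflow: fixed inputs in, fixed shape out.
# Field names and the template are invented for illustration; the point
# is that the structure lives in code, not in each user's head.
from dataclasses import dataclass

@dataclass
class MarketResearchInputs:
    requirement: str          # plain-language description of the need
    naics_code: str           # e.g. "541512"
    vendors_found: list[str]  # sources identified, SAM.gov or otherwise
    small_biz_capable: bool   # can small business meet the requirement?

TEMPLATE = """MARKET RESEARCH SUMMARY (DRAFT - CO REVIEW REQUIRED)

Requirement: {requirement}
NAICS: {naics}
Sources identified ({count}):
{vendors}
Small business capability: {small_biz}
"""

def build_summary(inputs: MarketResearchInputs) -> str:
    # Refuse to produce output until the minimum inputs exist. This is
    # what makes every user's output reviewable in the same way.
    if not inputs.vendors_found:
        raise ValueError("List at least one source before generating a summary.")
    return TEMPLATE.format(
        requirement=inputs.requirement,
        naics=inputs.naics_code,
        count=len(inputs.vendors_found),
        vendors="\n".join(f"  - {v}" for v in inputs.vendors_found),
        small_biz="Yes" if inputs.small_biz_capable else "No",
    )
```

Nobody can hand you "a screenshot of a SAM.gov search" through that. The tool won't take it.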
Will AI replace contract specialists? I don't think so. Will it replace tasks? Absolutely.
LLMs are great at turning screenshots, quote sheets, and scattered data into a narrative a human can read and decide on. They'll absorb a lot of the mechanical work: the reading, the pattern-matching, the first drafts.
LPTA is probably on borrowed time. A monkey can tell you which price is lower. There is nothing in an LPTA evaluation that requires a warranted professional with a DAWIA cert to sign it.
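If that sounds like hyperbole, here is the entire LPTA award decision reduced to its logic. The offer data is made up, but notice that there is no judgment call anywhere in it.

```python
# The whole LPTA award decision, reduced to its logic. Offer data is
# invented; the point is that nothing here requires judgment.
offers = [
    {"vendor": "A", "price": 1_200_000, "technically_acceptable": True},
    {"vendor": "B", "price": 980_000, "technically_acceptable": True},
    {"vendor": "C", "price": 910_000, "technically_acceptable": False},
]

acceptable = [o for o in offers if o["technically_acceptable"]]
winner = min(acceptable, key=lambda o: o["price"])
print(f"Award to {winner['vendor']} at ${winner['price']:,}")  # Award to B at $980,000
```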
Subjective decisions are a different story. Tradeoff source selections, responsibility determinations, past-performance calls, negotiating scope with a program office. The judgment work that separates a decent acquisition from a mess. AI shouldn't be driving any of that. Someone has to start the machine, and someone has to verify the quality.
Which brings me to a hot take. The shops pushing hardest to shove contracting work over to AI are usually the same shops that already lean heavily on LPTA for everything, and that tracks. Over-reliance on LPTA is a risk-aversion tell: if most of what your office does already behaves like a checklist, of course it feels safe to let a machine run the checklist. If your shop is mostly lowest-price-wins, sure, automate it. But for the COs who take risks, push limits, and lean into the subjectivity that makes acquisition interesting, there is always going to be a seat for you.