We started using AI in our engineering team for the obvious reason. Software engineers were the easiest place to start. The work is digital end to end. The output is text. The feedback loop is fast. It would have been weird not to start there.
Six months in, we started asking a different question. The engineers had built a real muscle around AI — context files, sub-agents, guardrails, automations that ran overnight. None of that was specific to engineering. It was specific to the shape of the work: text in, text out, judgment in the middle, and the blast radius deciding what gets autonomy. A lot of the rest of our company looked like that too. We just hadn't been treating it that way.
So we built a tool for the rest of the team. It is not a chatbot. It is an MCP server that exposes our internal tooling — our customer records, our title workflow, our document templates, our checklists — to an AI that anyone on the operations team can use the same way an engineer uses Claude Code. The AI knows what a search package looks like in our system, because we told it. It knows what a clear-to-close plan needs to contain. It can pull up a file and tell you what's missing.
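Conceptually, an MCP server is just a registry of named, described tools the model is allowed to call. A minimal sketch of that idea — not our actual server, and every name and field here is hypothetical:

```python
# Hypothetical in-memory stand-in for our customer records.
FILES = {
    "F-1001": {"documents": ["title_commitment", "payoff_statement"]},
}

def get_file(file_id: str) -> dict:
    """Pull a customer file by ID."""
    return FILES[file_id]

def missing_items(file_id: str) -> list[str]:
    """Compare a file against a clear-to-close template and report what's missing."""
    required = {"title_commitment", "payoff_statement", "wire_instructions"}
    present = set(FILES[file_id]["documents"])
    return sorted(required - present)

# The registry is the heart of it: named tools plus docstrings the
# model reads to decide what to call. A real MCP server wraps this
# in a protocol; the shape of the thing is the same.
TOOLS = {
    "get_file": get_file,
    "missing_items": missing_items,
}
```

The point of the design is that the AI's knowledge of "what a search package looks like" lives in tool descriptions and templates we wrote, not in the model itself.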
The use cases that ended up mattering were not the ones we predicted.
The first was prioritization. Operators come into work with a list of files in flight. The list is long. The question is always "what is most likely to fall apart today, and what can wait." The AI is good at this — not because it knows their job better than they do, but because it has read every file in the queue while they are still drinking their coffee. The judgment is theirs. The reading is the AI's.
The second was second-eyes review. A search package, a title commitment, a clear-to-close plan — these are documents where being 95% right is not good enough, because the missing 5% is what costs the customer their closing. We had been relying on senior operators to spot what junior operators missed. That worked, but it scaled badly, and it concentrated risk in whoever happened to be reviewing that day. The AI is now the first pass. It catches the mechanical misses — the field that didn't get filled in, the document the assistant forgot to upload, the date that disagrees between two attachments. The senior operator still does the final review. They are doing it on a much shorter list of things that actually need a human eye.
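The mechanical part of that first pass is deterministic enough to sketch. Assuming a file is a flat record with named fields and attached documents (all names hypothetical), the checks look like:

```python
def mechanical_misses(file: dict) -> list[str]:
    """First-pass review: report only the mechanical misses.
    The senior operator reviews whatever this returns."""
    misses = []
    # Required fields that must be filled in.
    for field in ("buyer_name", "property_address", "sale_price"):
        if not file.get(field):
            misses.append(f"field not filled in: {field}")
    # Required documents that must be uploaded.
    docs = file.get("documents", {})
    for doc in ("search_package", "title_commitment"):
        if doc not in docs:
            misses.append(f"document not uploaded: {doc}")
    # Dates that should agree between attachments.
    dates = {name: a["closing_date"] for name, a in docs.items()
             if a.get("closing_date")}
    if len(set(dates.values())) > 1:
        misses.append(f"closing date disagrees between attachments: {dates}")
    return misses
```

In practice the AI also catches softer misses than this, but the shape is the same: a short list of specific problems, not a verdict.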
The third, and the one that most changed how we think about the rest of the year, was the closing-readiness check. Before a signing, dozens of things have to be true. The wire has to be received. The package has to be uploaded. The notary has to be confirmed. The buyer has to have signed the right disclosures. We used to walk this list by hand, the day before. Now an operator asks the AI to walk it, and gets back the exact list of what isn't ready yet. It dots the i's. It crosses the t's. It is profoundly boring. It is profoundly useful.
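The readiness walk is the simplest of the three to picture: a flat list of named checks, each a predicate over the file, and the answer is exactly the items that fail. A sketch, with hypothetical check names and fields:

```python
# Each entry: (human-readable name, predicate over the file record).
READINESS_CHECKS = [
    ("wire received",      lambda f: f.get("wire_received", False)),
    ("package uploaded",   lambda f: "closing_package" in f.get("documents", [])),
    ("notary confirmed",   lambda f: f.get("notary_confirmed", False)),
    ("disclosures signed", lambda f: f.get("disclosures_signed", False)),
]

def not_ready(file: dict) -> list[str]:
    """Walk the checklist and return the names of items still open."""
    return [name for name, check in READINESS_CHECKS if not check(file)]
```

The real list runs to dozens of items; the operator's question to the AI is just "run `not_ready` on this file and tell me what comes back."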
The pattern we keep finding, both in engineering and now in operations, is that the highest-leverage place to put AI is not at the work people enjoy. It is at the work they tolerate. The senior engineer doesn't enjoy reading every line of a dependency PR; the senior operator doesn't enjoy walking a fifty-item readiness list at 5 p.m. on a Thursday. Both of them want to spend their attention on the small number of items that actually need a human's judgment. AI doesn't have judgment. But it has infinite patience for the part of the job nobody loves.
What we are building, at this point, is less an AI strategy and more a way of working. The model is rented. The context is ours. The guardrails are ours. The decision about what AI is allowed to do without supervision, and where the human stays in the loop, is ours. The companies that will win this next era are not the ones with the best models. They are the ones who figured out how to let the rest of their company use the best model — without giving up the judgment that made them the company in the first place.