Beyond the search engine: how regulated practices turn AI into a competitive advantage
Most firms use AI as a faster Google. The firms that move past that stage do something recognisably different — and the move tends to follow the same pattern in legal, medical, and accounting practices alike.
This essay sits primarily under the “Can we do AI?” and “How do we do AI?” questions of the practice’s methodology: the literacy gap that constrains what most firms can currently do, and the practical move from search-based to solution-based use that distinguishes the firms getting past it.
A pattern recurs across the Australian regulated SMBs I observe. Most are using AI as a faster search engine. A partner asks ChatGPT to summarise an unfamiliar regulatory development. A senior associate uses it to redraft a clause they could have written themselves. A bookkeeper uses it to translate a piece of guidance into plainer language. The use is real and worth having. It is also, in nearly every case, the simplest application of the tool — a substitute for a Google search or a colleague’s input.
This is not a deficiency. It is a stage. Almost every successful AI adoption I have observed begins this way. The firms that move past it tend to do so within a year of starting, and the move follows a recognisable pattern across legal, medical, and accounting practices alike.
The substantive shift is from treating the AI as a search engine to treating it as an analyst — a tool for finding patterns in the firm’s own work that no individual reader could have surfaced.
The literacy gap
The technology is advancing faster than most professional-services firms can adapt to it. The gap between AI-capable operators and those without that capability is widening, not narrowing. Mark Cuban has noted the same dynamic in a broader business context, and the pattern holds in regulated Australian practice.
This gap is normally framed as a threat. It is more usefully framed as an opportunity. When a category becomes meaningfully harder to navigate, firms that navigate it well capture disproportionate advantage. The pattern is visible in earlier adoption waves — the first firms to take search engines seriously, the first to take cloud seriously, the first to take data analytics seriously. The early-confident, who adopt with intent rather than with anxiety, tend to do disproportionately well.
In Conversation not Delegation I argue that the literacy gap inside a firm is not a training problem in the conventional sense. It is a shared-understanding problem. Training staff to use ChatGPT better is the smaller version of the work. Helping the firm form a shared understanding of what good AI use looks like — what it is for, what it is not for, and how to know the difference — is the larger version, and the prerequisite for almost everything else.
Without that shared understanding, AI use in a firm tends to drift in two directions: a small group of capable users who are difficult to govern, and a larger group of cautious or disengaged users who quietly opt out. Neither is a sustainable position.
From search to solution
The simplest test for whether a firm has moved past the search-engine stage is this: the AI is no longer used as a faster route to an existing answer. It is used to surface questions the firm did not know to ask.
Daniel Priestley has documented an instructive example. A management team ran three months of the firm’s own sales-call transcripts through a structured analysis. The exercise was prosaic — summarise the calls, identify recurring objections, surface common patterns. The result was not. In 75% of unsuccessful calls, the prospect had mentioned at some point that they would need to consult their spouse before deciding. The firm’s protocol did not invite spouses to meetings. The data revealed that the team was systematically failing to bring decision-makers into the conversation, even though the prospects themselves were repeatedly flagging the requirement.
That single insight, derived from raw interaction data, changed the firm’s protocol and meaningfully improved its conversion rate. It is the kind of insight no individual could have produced by reading the calls one at a time. It emerges only when AI is asked to look across a large body of qualitative material and surface patterns.
For a regulated Australian SMB, the equivalent applications are not hypothetical. A law firm can run three months of client emails through a structured analysis to identify the questions clients ask that the firm is consistently slow to answer. A medical practice can analyse patient interaction notes, de-identified and handled within its privacy obligations, to find the explanations clinicians repeat most often, and turn those into pre-consultation handouts. An accounting firm can review tax-season correspondence to identify the queries absorbing the most senior time, and intercept them earlier in the process.
None of these workflows are exotic. None require a research budget. All require something a search-engine relationship with AI cannot give: a willingness to ask the AI to look at the firm’s own material, in volume, and report what it sees.
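The shape of such an analysis is small enough to sketch. The tallying logic below is the whole trick: each document is mapped to a list of theme labels, and recurring themes are counted across the corpus. The classifier is pluggable; the commented-out variant showing a call to a local model endpoint (Ollama’s /api/generate, with an assumed model name) is one possible local setup, not a prescription.

```python
from collections import Counter
from typing import Callable, Iterable

def extract_themes(documents: Iterable[str],
                   classify: Callable[[str], list[str]]) -> Counter:
    """Tally recurring themes across a body of transcripts or emails.

    `classify` maps one document to a list of theme labels -- in
    practice a call to a local model; here it is pluggable so the
    cross-corpus tallying stands on its own.
    """
    tally = Counter()
    for text in documents:
        for theme in set(classify(text)):  # count each theme once per document
            tally[theme] += 1
    return tally

# A hypothetical classifier backed by a local model might look like
# this -- the endpoint, model name, and prompt are assumptions:
#
# import json, urllib.request
# def llm_classify(text: str) -> list[str]:
#     req = urllib.request.Request(
#         "http://localhost:11434/api/generate",
#         data=json.dumps({
#             "model": "llama3.1",
#             "prompt": "List the objections raised in this call, "
#                       "one per line:\n\n" + text,
#             "stream": False,
#         }).encode(),
#         headers={"Content-Type": "application/json"},
#     )
#     with urllib.request.urlopen(req) as resp:
#         answer = json.loads(resp.read())["response"]
#     return [ln.strip("- ").lower() for ln in answer.splitlines() if ln.strip()]
```

Run over three months of calls with a prompt asking for objections, this is the Priestley exercise: the Counter surfaces the “consult their spouse” pattern that no single reading would.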
This is also where the data-handling question becomes load-bearing. A firm cannot run client correspondence through a public cloud LLM and call it research. The firms that move past the search-engine stage are the firms that have a credible answer for where the analysis happens — a question I take up in Local AI for regulated Australian small businesses.
The shift to bespoke internal tooling
A second pattern emerges alongside the first: the move from generic SaaS toward bespoke internal tooling, in narrow places where it matters.
Most firms run on generic software because building anything custom used to be expensive. AI changes that calculation in specific places. Not everywhere — the temptation to build a custom version of every off-the-shelf tool is a reliable failure mode — but at the specific points where the off-the-shelf tool maps poorly to how the firm actually works.
The applications that recur are narrow. An internal Q&A tool that lets staff query the firm’s own document corpus rather than searching individual systems. A drafting assistant trained on the firm’s previous work product, so first drafts inherit the firm’s voice rather than fight it. A triage tool that classifies inbound enquiries against the firm’s own categorisation rather than a vendor’s. None of these need be sophisticated; the simplest version of each is a small piece of software running locally against a local model, taking a few days of focused work to build.
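The first of these, the internal Q&A tool, is small enough to sketch end to end. The corpus layout (a folder of plain-text files) and the naive keyword scoring are illustrative assumptions; a production build would substitute proper retrieval and hand the assembled prompt to a local model.

```python
from pathlib import Path

def retrieve(question: str, corpus_dir: str, top_k: int = 3) -> list[str]:
    """Rank corpus files by how often they mention the question's words."""
    words = {w.strip("?.,!").lower() for w in question.split() if len(w) > 3}
    scored = []
    for path in Path(corpus_dir).glob("**/*.txt"):
        text = path.read_text(errors="ignore").lower()
        score = sum(text.count(w) for w in words)
        if score:
            scored.append((score, text))
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble the grounded prompt a local model would answer from."""
    context = "\n---\n".join(p[:2000] for p in passages)  # fit the context window
    return ("Answer using only the firm documents below.\n\n"
            f"{context}\n\nQuestion: {question}\nAnswer:")
```

The retrieval step here is deliberately naive. The point is that the scaffolding around the model, not the model itself, is the few days of focused work.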
The reason most firms do not have these tools is rarely technical capacity. It is that no one has framed the work as worth doing. The shift from search-based AI use to solution-based AI use is, in part, the shift from buying software to occasionally making it — in small, focused doses, where a generic tool does not fit the firm’s actual practice.
Local AI changes the economics of this further. When the marginal cost per query is approximately zero, custom tools that would be uneconomic on a metered cloud subscription become sustainable. A bespoke internal Q&A tool that runs 10,000 queries a month against the firm’s own corpus is impractical at cloud-token prices. On local infrastructure, the cost is the cost of electricity.
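The arithmetic is worth making concrete. Every figure below is an assumption for illustration, not a quoted rate: a blended cloud price of $10 per million tokens, 20,000 tokens per corpus-grounded query, a 500 W local machine averaging 30 seconds of inference per query, and electricity at $0.30 per kWh.

```python
# Back-of-envelope comparison. Every number is an assumption for
# illustration, not a quoted vendor rate or tariff.
QUERIES_PER_MONTH = 10_000
TOKENS_PER_QUERY = 20_000     # large retrieved context per corpus query

CLOUD_PRICE_PER_MTOK = 10.00  # assumed blended $/million tokens
cloud_monthly = (QUERIES_PER_MONTH * TOKENS_PER_QUERY / 1_000_000
                 * CLOUD_PRICE_PER_MTOK)

POWER_KW = 0.5                # assumed draw of a local inference box
SECONDS_PER_QUERY = 30        # assumed inference time per query
PRICE_PER_KWH = 0.30          # assumed electricity tariff
local_monthly = (QUERIES_PER_MONTH * POWER_KW
                 * (SECONDS_PER_QUERY / 3600) * PRICE_PER_KWH)

print(f"cloud: ${cloud_monthly:,.0f}/month")                 # $2,000
print(f"local: ${local_monthly:,.2f}/month in electricity")  # $12.50
```

Swap in your own figures; the two-orders-of-magnitude gap survives most reasonable substitutions, because the cloud side is metered per token and the local side is not.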
The “key person of AI”
A new role is emerging inside regulated SMBs that is worth naming. It is not “head of IT,” not “AI strategist,” and not “innovation lead.” It is closer to a hybrid of an internal coach and a technical translator. Some practitioners are calling this person the key person of AI — the partner-equivalent figure inside the firm who is responsible for the firm’s relationship with AI without necessarily being a developer.
The role’s responsibilities, in practice:
- Setting the firm’s working position on AI use, in collaboration with leadership.
- Training staff in a shared framework for AI use — what matters is that the framework exists and is shared, not which specific framework.
- Identifying the next bespoke internal tool worth building, and working with the developer or external partner who builds it.
- Maintaining the data-handling discipline that makes the rest defensible.
- Becoming the person staff can ask “is this acceptable to do?” without it becoming a formal escalation.
In smaller firms this is one person, often a partner or senior associate with the necessary credibility and disposition. In larger firms it is a small group. In firms without this role, AI use drifts because no one owns the position.
For most regulated Australian SMBs, identifying this person and giving them the time to do the work is the second-most consequential move after closing the literacy gap. It is not a technical hire. It is a deliberate role assignment to someone the firm already trusts.
What the change looks like
A regulated Australian practice that has moved past the search-engine stage looks recognisably different from one that has not. It has a working position on AI use that staff can articulate. It has run at least one piece of analysis on its own corpus that has produced a finding nobody had before. It has a small set of bespoke internal tools, built quickly, running on its own infrastructure where the data is sensitive. It has a key person of AI who is empowered to make the next decision without convening a committee.
None of these markers are heroic. They are the consequence of treating AI as something the firm operates, rather than something it subscribes to. They distinguish the firms that will look meaningfully different in 2028 from those that will be in approximately the same position they hold today, with marginally better email drafts and a slightly larger subscription bill.
Michael Borck is a Lecturer in AI and Cyber Security at Curtin University and runs borck.consulting, a private practice for regulated Australian SMBs adopting AI. To find out where your firm sits on this transition and what the right next move is, book an AI Readiness Diagnostic.