AI Agents Are 90% Search
When we talk about the next generation of AI agents, we often focus on reasoning, planning, or decision-making. But beneath all of that lies one silent, fundamental operation: search.

What’s an AI agent? An LLM that runs tools in a loop to achieve a goal. Every agent interaction, from answering a question to executing a transaction, begins with a search for context. And yet, the search layer of AI remains deeply broken.
The Problem: Search Was Never Designed for Agents
Search engines were built for humans. They were optimized for engagement, not truth: black boxes trained to keep users clicking, not to produce verifiable knowledge. AI agents, however, have a very different mandate. They need to act, and they need context they can trust. When agents depend on traditional search, they inherit its weaknesses: lack of traceability, rate limits, paywalls, and results that can’t be audited or reproduced. Agents don’t just need relevance. They need determinism. They require:
- Deterministic results: identical queries should never yield different outcomes.
- Auditable trails: every decision should be traceable back to its data source.
- Efficient retrieval: not crawling billions of irrelevant pages, but accessing the right private data points at the right time.

If we want autonomous agents we can trust, we must start with auditable search.
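The determinism requirement above can be made concrete: if two queries are logically identical, they should map to the same canonical form and the same fingerprint, regardless of how the agent happened to serialize them. Here is a minimal sketch; the function names (`canonicalize`, `query_fingerprint`) and the query fields are illustrative assumptions, not any particular product’s API.

```python
import hashlib
import json

def canonicalize(query: dict) -> str:
    """Serialize a query with sorted keys so logically identical
    queries always produce identical byte strings."""
    return json.dumps(query, sort_keys=True, separators=(",", ":"))

def query_fingerprint(query: dict) -> str:
    """Deterministic fingerprint of a query: the same query always
    hashes to the same value, so results can be cached, compared,
    and audited later."""
    return hashlib.sha256(canonicalize(query).encode()).hexdigest()

# Two logically identical queries, written in different key order:
a = {"source": "filings", "q": "Q3 revenue", "limit": 5}
b = {"limit": 5, "q": "Q3 revenue", "source": "filings"}

assert query_fingerprint(a) == query_fingerprint(b)
```

The point of the sketch is that determinism is a design choice made at the retrieval boundary, not a property you get for free from the model.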
Determinism Starts with Search
There’s growing recognition that AI systems need to be more deterministic. Companies like Thinking Machine Labs and EigenAI are pushing toward reproducible inference and verifiability. But determinism in models means little if the search isn’t deterministic too. Every piece of retrieved context must be verifiable, timestamped, and linked to its origin: an event that can be audited by any observer. Without that, “agent autonomy” is just stochastic behavior with a confident tone.
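What “verifiable, timestamped, and linked to its origin” might look like in practice is a retrieval record whose hash any observer can recompute. The following is a minimal sketch under assumed field names (`ContextRecord`, `source_url`, etc.); the URL is a placeholder, not a real endpoint.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ContextRecord:
    """One retrieved piece of context, pinned to its origin."""
    query: str
    source_url: str
    retrieved_at: str  # ISO-8601 timestamp
    content: str

    def digest(self) -> str:
        """Hash the full record so any observer can check that the
        context an agent acted on has not been altered after the fact."""
        payload = "|".join(
            [self.query, self.source_url, self.retrieved_at, self.content]
        )
        return hashlib.sha256(payload.encode()).hexdigest()

record = ContextRecord(
    query="Q3 revenue",
    source_url="https://example.com/filings/q3",  # placeholder origin
    retrieved_at=datetime.now(timezone.utc).isoformat(),
    content="Revenue was $1.2B, up 8% YoY.",
)
audit_log = [record.digest()]  # append-only trail of evidence
```

Because the digest covers the query, the source, and the timestamp together, changing any one of them breaks the trail, which is exactly the auditability property the argument calls for.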
Why the Open Web Isn’t Enough
Expanding context windows won’t be enough to extract meaningful information from the World Wide Web in an acceptable amount of time. Even as models like Grok 4 reach 2M-token context windows, we see diminishing returns: performance actually drops as the context grows. (You can see this play out on contextarena.ai.) Bigger isn’t smarter. What AI agents truly need is precision: clean, targeted data from trusted sources. The open web can’t provide that. It’s noisy, unstructured, and economically misaligned with truth. The next frontier of search won’t be scraping the web; it’ll be querying private data in real time, with traceability built in.
The Economic Layer of Search
Imagine if every search query were a transaction, and each retrieved data point a priced unit of verified context. This transforms search from an unaccountable utility into an economy of knowledge, where data providers are rewarded for accuracy and AI agents pay only for what they need. A growing number of projects are exploring this direction: auditable data marketplaces, context APIs, and programmable search layers for agents. Among them is Kirha, a platform enabling agents to access verified, private data through micropayments, a system built around accountability, not scraping.
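The “every query is a transaction” idea can be sketched as a per-item settlement: the agent pays each provider only for the data points actually returned, and gets a receipt it can audit. The types and prices below are invented for illustration and don’t reflect any real marketplace.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PricedDatum:
    """A single verified data point with a price attached."""
    provider: str
    value: str
    price_usd: float

def settle(results: list[PricedDatum]) -> dict[str, float]:
    """Aggregate what is owed to each provider for one query,
    producing a per-provider receipt the agent can audit."""
    receipt: dict[str, float] = {}
    for item in results:
        receipt[item.provider] = receipt.get(item.provider, 0.0) + item.price_usd
    return receipt

results = [
    PricedDatum("filings-api", "Q3 revenue: $1.2B", 0.002),
    PricedDatum("filings-api", "Q3 margin: 41%", 0.002),
    PricedDatum("weather-feed", "NYC: 18°C", 0.001),
]
receipt = settle(results)  # per-provider totals for this query
```

Note the incentive this creates: providers earn per data point consumed, so accuracy, not engagement, is what gets rewarded.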
CaaS: Context as a Service
CaaS is the new SaaS. As LLMs evolve into general intelligences, they’ll consume the traditional SaaS industry. “We’re entering the fast fashion era of SaaS,” where delivering the right context to LLMs matters more than delivering software itself. This shift represents a new infrastructure layer for AI, Context-as-a-Service: a programmable search fabric that agents can query for deterministic, verifiable information and pay instantly for access. Each query becomes a unit of truth, not a guess. Each transaction leaves an auditable trail of where the data came from, when it was retrieved, and how it was used. That’s the kind of foundation AI needs if we want agents we can actually trust.
The Future of Search Is Auditable
The intelligence of tomorrow’s AI agents won’t come from larger models or bigger context windows. It will come from better search: from systems that can find, verify, and transact on trusted information in real time. Search was never built for agents. Now it’s time to rebuild it. Because AI agents are 90% search, and the future of search must be auditable, economically aligned, and built for machines, not humans.
Trusting your search should be like trusting your car: it has to behave the same way every time, and you have to be able to check why it didn’t.