Kirha Monthly Recap — March 2026
Kirha turns one, SLR agents in healthcare, Semantic Layer in production, Kirha Tasks & provider dashboard
Kirha turns one this month 🎉 Our first blog post, "AI Agents Are 90% Search," set the tone: accessing external data was the real bottleneck for AI agents. Since then we've built a complete search infrastructure: a scalable context graph of curated data providers and routing models that run locally to preserve privacy. Now we're leveraging that infrastructure to build full agents, starting in healthcare.
Business & Ecosystem
AI-powered Systematic Literature Reviews
The first end-to-end agent we’re building conducts Systematic Literature Reviews (SLR) for Health Economics and Outcomes Research (HEOR).
A traditional SLR ties up a research team for months, with hundreds of hours spent on screening alone. Our AI agents process abstracts in parallel, overnight, and deliver a first structured report within 72 hours. Full PRISMA-compliant output. Audit trail included. Ready for regulatory submission.
- 95% time saved
- 60K+ abstracts processed
This is where everything comes together: our medical data providers supply the literature, and Kirha Tasks orchestrate multi-step agentic workflows that go far beyond what a single prompt can do.
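To make the parallel screening step concrete, here is a minimal sketch. It is purely illustrative: the keyword rule stands in for the LLM relevance judgment each agent actually makes, and the function names are invented, not Kirha's API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical inclusion criteria; the real agent judges relevance with an LLM.
KEYWORDS = {"cost-effectiveness", "quality of life", "qaly"}

def screen_abstract(abstract: str) -> bool:
    """Toy screening rule standing in for an LLM relevance judgment."""
    text = abstract.lower()
    return any(kw in text for kw in KEYWORDS)

def screen_corpus(abstracts: list[str], workers: int = 8) -> list[str]:
    """Screen abstracts in parallel and keep only the included ones."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        decisions = list(pool.map(screen_abstract, abstracts))
    return [a for a, keep in zip(abstracts, decisions) if keep]

corpus = [
    "A cost-effectiveness analysis of drug X in type 2 diabetes.",
    "A study of bridge load tolerances in cold climates.",
    "QALY gains from early cancer screening programs.",
]
included = screen_corpus(corpus)
```

Because each abstract is screened independently, the work parallelizes trivially, which is what turns months of manual screening into an overnight batch job.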
Partnering with Linkup for WebSearch
We've partnered with Linkup to power our WebSearch fallback. In our testing it delivers better accuracy than Exa, and it pairs well with Firecrawl for deep crawling. Kirha routes prompts to curated data providers first, but when no specialized source exists, we fall back on WebSearch. Sometimes it's also the right complement: helping Kirha compose between tools to answer queries that span curated and open-web data. It's French 🥖
What We Shipped
Full Bitcoin coverage with Xverse MCP
The new version of the Xverse MCP now covers full live Bitcoin data. Agents can monitor the activity of any BTC address in real time.
This MCP exposes 52 tools, which is enough to degrade the performance of any planner model. This forced us to deploy our Semantic Layer in production: a system that filters tool definitions so the planning model never sees more than 30 at a time. That’s exactly what Kirha solves for AI agents: context engineering. If you load 100+ tools into an LLM’s context, orchestration quality drops. Semantic filtering with embeddings is the invisible work that lets Kirha scale.
How the Semantic Layer works
Think of it as a compiler for tool discovery. When a query comes in, the system translates natural language into a canonical action vocabulary, then maps those actions to the right tools across hundreds of MCP servers.
The pipeline works in three stages:
- Extract. Each tool is analyzed to extract its core semantic intent (verb, object, domain). Tools like getSpotPrice and fetchCurrentRate both map to the same canonical action: get_price.
- Cluster. Semantically equivalent tools are grouped using hierarchical clustering with adaptive thresholds. Each cluster becomes a named canonical action with synonyms, domain tags, and an embedding vector.
- Translate. At query time, an LLM classifies the user's intent against the canonical action vocabulary, then vector search expands the results to catch near-misses. Related tools (dependencies, alternatives) are surfaced automatically via graph traversal.
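The three stages can be sketched in miniature. This is a deliberately toy version: the synonym table replaces the real embeddings and clustering, and the tool names are illustrative, not Kirha's catalog.

```python
import re
from collections import defaultdict

# Toy tool catalog; names are illustrative, not Kirha's actual tools.
TOOLS = ["getSpotPrice", "fetchCurrentRate", "listTransactions", "getTxHistory"]

# Stand-in for learned synonym resolution (the real system uses embeddings).
SYNONYMS = {"get": "get", "fetch": "get", "list": "get",
            "spotprice": "price", "currentrate": "price",
            "transactions": "history", "txhistory": "history"}

def extract_intent(tool: str) -> tuple[str, str]:
    """Stage 1 — Extract: split a camelCase name into (verb, object)."""
    words = re.findall(r"[A-Z]?[a-z]+", tool)
    verb = SYNONYMS.get(words[0].lower(), words[0].lower())
    obj = SYNONYMS.get("".join(w.lower() for w in words[1:]), "unknown")
    return verb, obj

# Stage 2 — Cluster: group tools that share a canonical action.
clusters: dict[str, list[str]] = defaultdict(list)
for tool in TOOLS:
    verb, obj = extract_intent(tool)
    clusters[f"{verb}_{obj}"].append(tool)

def translate(query: str) -> list[str]:
    """Stage 3 — Translate: resolve a query to the tools of one action."""
    for action, tools in clusters.items():
        if action.split("_")[1] in query.lower():
            return tools
    return []
```

The key property is visible even in the toy: the planner is only ever handed the small cluster matching the query's intent, never the full catalog.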
Translation completes in under 700ms. The LLM call dominates latency, but using a lightweight model keeps it fast. The result: the right tools, every time, without flooding the planner’s context. We’ve been building this for six months. Having it live in production is a milestone.
Kirha Tasks
Kirha Tasks enable multi-step agentic workflows that go far beyond a single prompt. Tasks are what power our SLR agents and context-aware medical recommendations. Break a complex research question into steps, let agents execute them in parallel, and get structured results back.
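The fan-out-then-assemble shape of a Task can be sketched with stdlib asyncio. The step names and the runner API below are invented for illustration; real steps would call Kirha tools.

```python
import asyncio

async def run_step(name: str) -> dict:
    """Stand-in for one agent step (e.g. search, screen, extract)."""
    await asyncio.sleep(0)  # a real step would call a Kirha tool here
    return {"step": name, "status": "done"}

async def run_task(question: str, steps: list[str]) -> dict:
    """Fan the steps out concurrently, then assemble a structured result."""
    results = await asyncio.gather(*(run_step(s) for s in steps))
    return {"question": question, "results": results}

report = asyncio.run(run_task(
    "Which therapies improve QALYs in type 2 diabetes?",
    ["search_literature", "screen_abstracts", "extract_outcomes"],
))
```

Each step runs independently, so adding steps widens the task rather than lengthening it, which is what makes overnight SLR screening feasible.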
Data Provider Dashboard
Data providers can now price their tools on Kirha and monitor agentic usage from a dedicated dashboard.
Why do providers benefit? Every time we compose multiple providers to answer a single prompt, we generate demand that would never reach any provider on its own. Take this query: "Find engineers working at companies behind the latest patents in machine learning." Kirha first retrieves recent patents, then passes the relevant companies to Apollo.io to find engineer profiles. Neither data provider could answer this independently, but together they can. As we add more providers, the set of answerable prompts grows combinatorially.
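The patents-to-engineers composition can be sketched as two chained provider calls. Both functions are invented stand-ins (the second playing the role of a people-search provider such as Apollo.io), not Kirha's real API.

```python
def search_patents(topic: str) -> list[dict]:
    """Stand-in for a patent data provider."""
    return [{"title": f"{topic} accelerator", "company": "Acme AI"},
            {"title": f"{topic} compiler", "company": "Tensor Labs"}]

def find_engineers(company: str) -> list[str]:
    """Stand-in for a people-search provider such as Apollo.io."""
    return [f"engineer@{company.lower().replace(' ', '')}.example"]

def engineers_behind_latest_patents(topic: str) -> list[str]:
    """Compose the providers: patents -> companies -> engineer profiles."""
    companies = {p["company"] for p in search_patents(topic)}
    return sorted(e for c in companies for e in find_engineers(c))

profiles = engineers_behind_latest_patents("machine learning")
```

The output of one provider becomes the input of the next, so every new provider multiplies the queries the graph as a whole can answer.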
Closing Thoughts
Kirha is leveraging its search infrastructure to build end-to-end agents that solve problems from A to Z, starting in the healthcare vertical. We’re also working on KAP (Kirha Agent Protocol): an open standard for connecting AI agents to Kirha. It’s the natural next step. As long as we keep onboarding data providers, we can match supply and demand. The graph grows, and so does the value of every node in it.