Aron exposes a production Model Context Protocol server. Any MCP-compatible AI agent, whether Claude, OpenAI or your own, can quote, bind, read policies, draft documents and act on your book, with scoped permissions and a full audit trail. Probably the first MCP server in insurance distribution.
One protocol. Any AI client. Every Aron capability.
Claude Desktop, Cursor, ChatGPT, your in-house assistant — point it at the Aron MCP endpoint with a scoped token. The client discovers tools and resources automatically.
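What "point it at the endpoint" looks like on the wire, as a minimal sketch: the first two JSON-RPC messages any MCP client sends. The method names and envelope shape follow the MCP specification; the client name, token and header usage here are illustrative, not Aron's actual values.

```python
def initialize_request(request_id: int) -> dict:
    """First message a client sends: announce protocol version and identity."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "in-house-assistant", "version": "0.1"},
        },
    }

def list_tools_request(request_id: int) -> dict:
    """After initialization, the client asks the server which tools it offers."""
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}

# The scoped token travels in the transport layer, e.g. an Authorization
# header on the HTTP connection (placeholder value):
headers = {"Authorization": "Bearer <scoped-token>"}
```

From there, tool and resource discovery is automatic: the server's `tools/list` response tells the client everything it may call.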
Customers, policies, claims, documents, quoting tools, AI agents — all exposed as MCP resources and tools. The AI can read what it's allowed to and call what it's authorised for.
Every MCP call is logged: which client, which user, what was read, what was written, what was decided. Same audit-grade trail as human actions.
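A sketch of what one such audit record could capture, using hypothetical field names (not Aron's actual schema): client, user, method, what was read, what was written, and the authorisation decision, exactly as for a human action.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class McpAuditRecord:
    client: str          # which MCP client connected (e.g. "claude-desktop")
    user: str            # which producer or agency user the token maps to
    method: str          # JSON-RPC method invoked ("tools/call", "resources/read")
    resources_read: list = field(default_factory=list)
    writes: list = field(default_factory=list)
    decision: str = "allowed"   # authorisation outcome for this call
    at: str = ""                # UTC timestamp, filled in automatically

    def __post_init__(self):
        if not self.at:
            self.at = datetime.now(timezone.utc).isoformat()

record = McpAuditRecord(
    client="claude-desktop",
    user="producer:jane",
    method="resources/read",
    resources_read=["aron://customers/123/policies"],
)
```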
Insurance-native primitives, not generic REST wrapped in a protocol.
An AI agent can call quoting, binding, endorsement and renewal tools directly. The same operations a producer would perform — agent-callable with the same compliance checks.
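What an agent-initiated quote might look like on the wire: a standard MCP `tools/call` request. The envelope shape follows the MCP specification; the tool name `quote_create` and its arguments are illustrative placeholders.

```python
def call_tool_request(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build the MCP 'tools/call' envelope an agent sends to invoke a tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# A quote request a producer would otherwise start from the UI:
req = call_tool_request(7, "quote_create", {
    "customer_id": "cust-123",
    "line_of_business": "commercial-auto",
    "effective_date": "2025-07-01",
})
```

The server runs the same compliance checks on this call as on a producer's click; the agent gets no shortcut around them.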
Read access to the agency's book exposed as structured resources. The AI grounds its reasoning in the customer's actual record — not a generic training corpus.
Each MCP token is scoped to a producer, an agency, a set of operations, a set of customers. The AI never exceeds the boundary you set.
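A sketch of how server-side scope enforcement could work, with hypothetical field names: the token carries the producer, the allowed operations and the allowed customers, and every call is checked against it before anything executes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TokenScope:
    producer: str
    operations: frozenset   # e.g. {"read", "quote"}
    customers: frozenset    # customer ids this token may touch

def authorise(scope: TokenScope, operation: str, customer_id: str) -> bool:
    """Allow a call only if both the operation and the customer are in scope."""
    return operation in scope.operations and customer_id in scope.customers

scope = TokenScope(
    producer="producer:jane",
    operations=frozenset({"read", "quote"}),
    customers=frozenset({"cust-123"}),
)
assert authorise(scope, "quote", "cust-123")       # inside the boundary
assert not authorise(scope, "bind", "cust-123")    # operation not granted
assert not authorise(scope, "read", "cust-999")    # customer out of scope
```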
Aron's own browser-use AI agents are exposed as MCP tools. Your assistant can invoke them to quote a carrier-portal-only product without ever leaving the conversation.
Claude, OpenAI, open-source assistants — they all speak MCP. You pick the model. Aron stays neutral.
Every tool and resource has typed schemas, versioned releases and full documentation. Build against it like a first-class API.
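A sketch of what a typed tool definition could look like in a `tools/list` response. MCP tools declare their parameters as a JSON Schema under `inputSchema`; the specific tool, fields and version note here are illustrative.

```python
# Hypothetical tool definition; the outer keys (name, description,
# inputSchema) follow the MCP specification.
quote_tool = {
    "name": "quote_create",
    "description": "Create a quote for a customer (v2 release).",
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "line_of_business": {"type": "string"},
            "effective_date": {"type": "string", "format": "date"},
        },
        "required": ["customer_id", "line_of_business"],
    },
}
```

Because the schema travels with the tool, a client can validate arguments before calling, and a breaking change shows up as a schema change rather than a runtime surprise.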
Most platforms force you to use their AI. With MCP, you use yours — Claude, OpenAI, your in-house model — and Aron is just the data and action plane.
The AI reasons over the customer's actual policies, claims and documents. The whole class of 'generic chatbot makes up insurance advice' goes away.
Every MCP call lives under the same audit, advice-boundary and authorisation framework as a human user. AI doesn't void your regulatory posture.
MCP is becoming the standard for AI-to-system integration. Building against it now means your stack remains usable as model providers and assistants evolve.
Model Context Protocol is the emerging standard for connecting AI assistants to systems-of-record. In insurance it matters because it lets any AI agent read your book, take action and stay within compliance — without locking you into one vendor's AI.
Any MCP-compliant client: Claude Desktop, Cursor, official Anthropic SDKs, OpenAI Agent SDK adapters, custom Python/Node clients. We test against the major ones on each release.
Each MCP token is scoped: which producer, which operations (read, quote, bind, endorse), which customers. Tokens are short-lived, rotatable, and every call is logged.
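A sketch of the short-lived part, assuming hypothetical claim names: the token carries an expiry, every call re-checks it, and a leaked or stale token stops working on its own.

```python
from datetime import datetime, timedelta, timezone

def token_valid(claims: dict, now: datetime) -> bool:
    """Reject any call whose token has passed its expiry timestamp."""
    return datetime.fromisoformat(claims["expires_at"]) > now

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
claims = {
    "sub": "producer:jane",
    "expires_at": (now + timedelta(minutes=15)).isoformat(),
}
assert token_valid(claims, now)                          # fresh token: accepted
assert not token_valid(claims, now + timedelta(hours=1)) # expired: rejected
```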
Only if you choose to expose a customer-scoped token to that AI. Most agencies start with internal-only — agents and producers — and decide on customer-facing exposure later.
Yes. Running on Aron's production tenants today. Documentation, change log and release schedule available on request under NDA.
Yes — for clients that don't speak MCP. The REST API and MCP server share the same underlying authorisation and audit layer.
We'll spin up a demo tenant, give you a token, and let your Claude / OpenAI agent quote, read and act — live on the demo call.