Plug Claude, Cursor, and ChatGPT into your review data
The AI Hub exposes your live review data to any Model Context Protocol (MCP) compatible AI client. Ask natural language questions like "summarise my clients' reviews from the last 30 days" or "which locations have the worst response rate", and get real answers from the live database, scoped to the token you created.
Thirteen tools for reviews, organisations, locations, campaigns, contacts (with per-contact activity timelines), metrics, AI insights, auto-respond rules, and more. Read-only by default. PII redacted by default. Every call audit logged. Fully white-label under your own domain.
claude_desktop_config.json

```json
{
  "mcpServers": {
    "my-agency-reviews": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://app.your-domain.com/api/mcp",
        "--header",
        "Authorization: Bearer ..."
      ]
    }
  }
}
```

AI clients that speak Model Context Protocol can now use your reputation data as a working tool, not a copy-paste screenshot.
Model Context Protocol (MCP) is an open standard from Anthropic that lets AI clients safely call tools on external systems. EmbedMyReviews ships an MCP server out of the box. Your AI tool of choice connects over HTTP with a scoped bearer token, discovers the 13 available tools, and starts answering reputation questions from live data. No CSV exports. No brittle screen scraping.
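Under the hood, that discovery step is a single JSON-RPC 2.0 request that the client (or mcp-remote) sends for you. A minimal sketch of its shape, with a placeholder token standing in for a real one:

```python
import json

def discovery_request(token: str) -> tuple[dict, dict]:
    """Build the JSON-RPC 2.0 tools/list request an MCP client
    sends on connect to enumerate the available tools."""
    headers = {
        "Authorization": f"Bearer {token}",  # scoped AI Hub token
        "Content-Type": "application/json",
    }
    body = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
    return headers, body

headers, body = discovery_request("emr_xxx")  # placeholder token
print(json.dumps(body))
```

The server answers with the tool list and each tool's input schema; from then on the AI picks tools on its own.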
The problem
Your reviews are locked in dashboards
Every agency owner has the same workflow. Open Claude, paste a screenshot of the review dashboard, ask a question, get half an answer. Or export a CSV, drop it into ChatGPT, remember the AI can't tell you anything the file didn't say. Or copy three tabs worth of stats into a Notion doc and give up. Your review data is stuck in the tool, and AI assistants cannot reach it.
The solution
Let the AI run the queries
The AI Hub turns your reputation data into tools any MCP-compatible client can invoke. Claude can pull live review counts by source, group metrics by month, surface 1-star reviews, compare locations, read private feedback, draft responses for you to approve. The AI runs the query against the live database and returns a clean answer, every time, across all of your customers.
Use cases
What agencies actually ask
These are real questions agency owners throw at their AI assistant once the AI Hub is wired up. No keyword search. Plain English in, structured answers out.
Client reporting
"Summarise last month's reviews for Acme Dental. What were the three most common themes?"
The AI pulls the reviews, groups by sentiment, surfaces themes. You paste the answer into the monthly report.
Multi-location analysis
"Which of my client's 14 locations has the lowest response rate? Show me unanswered 3-star reviews there."
Two tool calls, one answer. Uncovers operational gaps without opening a dashboard.
Campaign autopsy
"How did the October Google campaign perform? Compare it to the July one."
The AI pulls the funnel stats, computes deltas, explains what changed. Ready-to-send email to the client.
Sentiment triage
"Pull all 1 and 2 star reviews across my portfolio from the last 7 days. Flag anything that looks like a service failure."
Daily stand-up prep in 20 seconds. The AI reads every new review and tells you what to look at.
Response drafting
"Draft a reply to review 8421 in the voice of the dental clinic owner. Hold for my approval."
Drafts always enter the Auto-Respond approval queue. Nothing ships until a human clicks approve.
Theme tagging
"Group all reviews mentioning wait times, and tag them 'wait-time-issue' so I can filter later."
The AI tags in bulk (up to 50 per call). Tags are visible across the platform, usable in campaigns and filters.
Private feedback review
"Show me the last 20 private feedback submissions. I need to understand what customers are saying privately before posting publicly."
PII is redacted by default. Your approved tokens can opt in to see names and contact info when you need to follow up.
Proposal support
"Which of my clients have no auto-respond rule set up? I want to pitch it as an upsell."
One query turns your account into a list of upsell prospects. Sharp way to find idle revenue.
Team onboarding
"Our new account manager should be able to answer 'how is client X doing?' without logging into the dashboard."
Create them a scoped token, give them Claude, they can read the data they're allowed to see. Nothing more.
Tool catalog
Thirteen tools out of the box
Every tool enforces per-organisation and per-location access checks. Read-only by default. Write tools require a separate opt-in on the token and always route through the human approval queue.
Read (7 tools)
list_reviews: Filter and list reviews across organisations and locations with source, date, rating, sentiment, response status, verified flag, and full-text search. Paginated.
get_review: Fetch a single review by ID with reply, tags, auto-respond history, and language.
list_organizations: List accessible organisations with review count and average rating. Pass organization_id for detail including locations.
list_locations: List accessible locations with per-location review count and average rating. Scope to one organisation.
get_metrics: Aggregate metrics over a date range: totals, avg rating, star distribution, response rate, sentiment. Group by day, week, month, location, organisation, source, rating, or sentiment. Supports period-over-period comparison.
list_campaigns: List review-request campaigns with status, schedule, location, and full funnel (invited, opened, clicked, reviewed, redirected, testimonials submitted, private feedback, unsubscribes, bounced) with conversion and open rates.
get_ai_insights: Retrieve pre-computed AI analysis (themes, sentiment shifts, recommendations) for an organisation or location.
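As an illustration of what a client sends once tools are discovered, here is a hypothetical tools/call payload for get_metrics. The argument names (organization_id, date_from, group_by, and so on) are inferred from the descriptions above, not a published schema; the authoritative schema is whatever tools/list returns:

```python
import json

# Hypothetical tools/call payload: monthly metrics for one organisation,
# with period-over-period comparison. All argument names are assumptions.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_metrics",
        "arguments": {
            "organization_id": "org_123",    # placeholder ID
            "date_from": "2025-01-01",
            "date_to": "2025-03-31",
            "group_by": "month",
            "compare_previous_period": True,  # assumed flag name
        },
    },
}
print(json.dumps(call, indent=2))
```

In practice you never write this by hand: the AI client constructs it from your plain-English question.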
Read with PII access (4 tools)
list_private_feedback: List private feedback submissions (ratings and messages never published publicly). PII redacted by default.
list_contacts: List review-request contacts with subscription state, latest activity, and per-contact engagement counters (invites, opens, clicks, redirects, testimonials, private feedback). PII redacted by default.
get_contact_activity: Full timeline for one contact: every invite, open, click, video play, redirect, testimonial, private feedback, and unsubscribe, with the campaign, channel, and ratings attached.
list_auto_respond_rules: List configured auto-respond rules with rating range, sources, delay, approval requirement, and AI usage.
Write, held for approval (2 tools)
draft_review_response: Draft a response to a review. Always held for agency approval. Never auto-sent. Enters the existing Auto-Respond moderation queue.
tag_reviews: Attach or detach tags on a set of reviews. Useful for clustering after summarisation. Capped at 50 reviews per call.
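The 50-review cap on tag_reviews makes bulk tagging a client-side batching job. A small sketch, with the cap as the only fact taken from the text above:

```python
def chunk_review_ids(review_ids: list[str], cap: int = 50) -> list[list[str]]:
    """Split review IDs into batches that respect the tag_reviews
    per-call cap of 50 reviews."""
    return [review_ids[i:i + cap] for i in range(0, len(review_ids), cap)]

# 120 reviews to tag -> three tag_reviews calls of 50, 50, and 20.
batches = chunk_review_ids([f"rev_{n}" for n in range(120)])
print([len(b) for b in batches])  # -> [50, 50, 20]
```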
Security model
Built to be safe for AI to touch
AI assistants are still new tools. The AI Hub assumes as much: it treats every token like a limited guest and protects your data at every layer.
Scoped by organisation and location
The AI only sees the organisations and locations the token's owner has access to. Try to query an organisation outside that scope and the tool returns a clean access_denied error. No leaks across clients.
PII redacted by default
Private feedback and contact tools strip names, emails, and phone numbers unless the token explicitly holds the reviews.pii permission and the caller asks for include_pii. Two conditions, every call, no accidental PII in AI provider logs.
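The two-condition gate can be sketched as a simple predicate. Everything here is illustrative; the field and permission names follow the text above, not a published schema:

```python
def pii_visible(token_permissions: set[str], include_pii: bool) -> bool:
    """PII is returned only when BOTH conditions hold: the token carries
    reviews.pii AND the caller explicitly passes include_pii."""
    return "reviews.pii" in token_permissions and include_pii

def redact(contact: dict, token_permissions: set[str], include_pii: bool) -> dict:
    """Strip name/email/phone unless the gate above passes."""
    if pii_visible(token_permissions, include_pii):
        return contact
    return {k: ("[redacted]" if k in {"name", "email", "phone"} else v)
            for k, v in contact.items()}

# Asking for PII without the permission still returns redacted fields.
print(redact({"name": "Ada", "email": "ada@example.com", "rating": 2},
             token_permissions=set(), include_pii=True))
```

Either condition failing is enough: a token without reviews.pii can ask for PII all day and never receive it.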
Write tools never auto-send
AI-drafted review responses always enter the Auto-Respond approval queue with status approval_requested and send_at null. Nothing reaches Google, Facebook, or any review platform until a human clicks approve.
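Put differently, the invariant is that a drafted reply can never leave on its own. A hedged sketch: the response envelope here is entirely assumed apart from the two documented fields, status and send_at:

```python
# Hypothetical result of a draft_review_response tool call.
draft_result = {
    "status": "approval_requested",  # always, per the approval queue rule
    "send_at": None,                 # never scheduled by the AI
    "review_id": "rev_8421",         # placeholder
    "draft": "Thank you for your feedback...",
}

def would_auto_send(result: dict) -> bool:
    """True only if a reply could leave without human approval,
    which the approval queue rules out by construction."""
    return result["status"] != "approval_requested" and result["send_at"] is not None

print(would_auto_send(draft_result))  # -> False
```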
Every call audit-logged
Tool name, parameters (with secrets and PII stripped), duration, rows returned, and result status are recorded for every call. View the audit log from the AI Hub dashboard. Spot misuse, trace a weird answer, keep a clean record.
Rate limited per token and workspace
120 calls per minute per token. 600 calls per minute across the whole workspace. Exceeding either returns a clean 429 with a Retry-After header. One misbehaving client cannot take down the others.
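On the client side, the right response to a 429 is to sleep for Retry-After seconds and try again. A minimal sketch with a fake transport standing in for the real endpoint:

```python
import time

def call_with_backoff(do_call, max_attempts: int = 3):
    """Retry a tool call when the server answers 429,
    honouring the Retry-After header it returns."""
    for _ in range(max_attempts):
        status, headers, body = do_call()
        if status != 429:
            return body
        time.sleep(float(headers.get("Retry-After", "1")))
    raise RuntimeError("rate limited: gave up after retries")

# Fake transport: first call rate-limited, second succeeds.
responses = iter([
    (429, {"Retry-After": "0"}, None),
    (200, {}, {"ok": True}),
])
print(call_with_backoff(lambda: next(responses)))  # -> {'ok': True}
```

Most MCP clients handle this for you; the sketch just shows why a single noisy client backs off instead of failing hard.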
Revocable in one click
Each AI client gets its own named connection. Claude stops behaving? Revoke that connection alone. The others keep working. No "rotate every key" panic.
Three steps to wire it up
If you already have Claude Desktop installed, the whole thing takes under a minute. Cursor and Claude Code work the same way.
Create an AI token
In the dashboard, open Settings then AI Hub and click Create connection. Name it after the client (for example "Claude Desktop"). The token is shown once; copy it right away.
Paste the config
Copy the generated mcpServers block straight into your client config. For Claude Desktop, that's Settings then Developer then Edit Config. Save and restart.
Ask a question
Try "summarise my reviews from the last 30 days" or "which location has the best response rate". The AI discovers the tools, picks what it needs, and answers from live data.
MCP vs API vs CSV exports
You have three ways to get your data out of the platform. The AI Hub is the right layer when the consumer is an AI client. Here's the honest comparison.
| Capability | AI Hub (MCP) | REST API | CSV exports |
|---|---|---|---|
| Natural language questions | Yes | No (structured only) | No |
| Live data, always fresh | Yes | Yes | Stale the moment you download |
| Works in Claude, Cursor, ChatGPT | Yes, natively | Needs a custom connector | Paste and pray |
| PII redaction on by default | Yes | Manual | No |
| Per-call audit log | Yes | Request logs only | Export events only |
| Writes require human approval | Always | No, caller controls | N/A |
| Best for | Live AI assistant work | Custom automations and pipelines | One-off analysis in a spreadsheet |
The AI Hub and the REST API cover different jobs. Many agencies use both: the API for scheduled syncs, the AI Hub for interactive work.
White-label
Your domain, your brand, your AI Hub
The MCP endpoint lives on your own white-label domain, for example https://app.your-agency.com/api/mcp. The server name returned to the AI client uses your company name. Nothing in the configuration snippet, audit log, or error messages mentions EmbedMyReviews. Your team and your customers see only your agency.
Custom domain
MCP URL uses your white-label domain with automatic SSL.
Branded server name
Claude Desktop shows "Your Agency MCP Server" in the tool list.
No platform attribution
Your docs, your dashboard, your tokens. EMR never appears.
AI Hub FAQ
Stop pasting screenshots into Claude
The AI Hub is included in every EmbedMyReviews subscription. $99 per month flat. No per-call fees. Wire up your first AI client in under a minute.
14-day free trial. No credit card required.