The AI Hub is a Model Context Protocol (MCP) server built into the platform. MCP is an open standard from Anthropic that lets AI clients like Claude Desktop, Cursor, and ChatGPT safely call tools on external systems. The AI Hub turns your live review data into 13 tools any MCP-compatible client can use.
This guide walks through creating a connection, understanding the tool catalog, controlling what tokens can and cannot see, reading the audit log, and working with rate limits. Everything here is scoped per token, per organisation, and per location, so you can confidently point an AI client at your data.
What the AI Hub is
The AI Hub is an HTTP endpoint that speaks JSON-RPC 2.0 and implements the 2025-06-18 Model Context Protocol specification. Compatible AI clients connect with a bearer token, discover the available tools through a standard handshake, and start running them.
Unlike a CSV export or a screenshot paste, the AI runs actual queries against the live database on your behalf. Ask "summarise my reviews from the last 30 days grouped by source", and the AI picks the right tool, builds the parameters, runs it, and reads the structured response back.
- MCP 2025-06-18 spec compliant with structuredContent result shape
- Single endpoint at `/api/mcp` on your white-label domain
- Bearer token authentication with scoped abilities
- Read-only by default, with a separate opt-in for write tools
- Per-organisation and per-location access enforced on every call
- PII redacted by default on private feedback and contact tools
- Every tool call recorded in a per-tenant audit log
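The JSON-RPC handshake mentioned above can be sketched as two requests. This is an illustrative shape, not a capture from the server: the method names and `protocolVersion` follow the MCP 2025-06-18 specification the server implements, while the `clientInfo` values are placeholders.

```python
import json

# Hypothetical "initialize" request an MCP client POSTs to /api/mcp.
# clientInfo is a placeholder; the spec requires it but its content is
# up to the client.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# After a successful initialize, the client discovers the tools it is
# allowed to call.
list_tools_request = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

print(json.dumps(initialize_request, indent=2))
```

The server's response to `tools/list` is what makes the client show only the tools your token can actually call.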
Who can use the AI Hub
The AI Hub is available to any user who has the `api.access` and `api.mcp` permissions on their account. Agency owners always have it. Team members get it through their role assignment. Agency customers get it if their plan includes API access and the agency has granted `api.mcp` on the plan.
Each connection you create is a Sanctum personal access token scoped to one named AI client. You can have as many connections as you need, and revoke any of them independently without affecting the others.
- Agency owners have access by default
- Team members need the `api.access` and `api.mcp` permissions in their role
- Agency customers need a plan that includes API access and the `api.mcp` permission
- Each connection is its own token with its own scope
- Revoking a connection invalidates it immediately
Supported AI clients
Any MCP-compatible client works. The configuration format is the same across all of them. The setup UI generates a ready-to-paste config snippet automatically using your custom domain.
- Claude Desktop (Anthropic)
- Claude Code (Anthropic)
- Cursor IDE
- VS Code with Copilot Chat
- ChatGPT custom connectors
- Any custom client that speaks MCP over HTTP
The generated snippet uses `mcp-remote`, a small npm package that bridges stdio to HTTP. This is the most reliable option for desktop clients today, because Claude Desktop's direct URL-based configuration still has rough edges.

Creating your first connection
Open the dashboard, go to `Settings -> AI Hub` (or `Settings -> API & Webhooks -> Connect AI`), and click `Create connection`. Give it a descriptive name like "Claude Desktop" so you can tell your connections apart later.
If you want the AI to be able to draft review responses or tag reviews in bulk, tick the `Allow write access` checkbox before creating. Leave it unchecked for a read-only token. You can also create one of each and switch between them by swapping the token in your client config.
- Token is shown once at creation time, then never again
- Copy the generated `mcpServers` config snippet and paste it into your AI client
- Restart your AI client after saving the config
- Ask a natural-language question to trigger the first tool call
Config snippet for Claude Desktop
The Connect AI screen generates a ready-to-paste snippet using your domain and token. For reference, it looks like this:
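The exact snippet the screen generates may differ slightly, but its shape is a standard `mcpServers` block that launches `mcp-remote` against your endpoint. The domain and token below are placeholders:

```json
{
  "mcpServers": {
    "ai-hub": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://app.your-agency.com/api/mcp",
        "--header",
        "Authorization: Bearer YOUR_TOKEN_HERE"
      ]
    }
  }
}
```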
- Paste the `mcpServers` block into your client config file
- For Claude Desktop: `Settings -> Developer -> Edit Config`
- For Cursor: `Settings -> MCP` and paste the same shape
- Save, restart, done
The 13 tools
The server ships with 13 tools grouped by risk level. The AI client automatically sees only the tools your token is allowed to call, which keeps its mental map accurate and prevents it from wasting calls on denied requests.
| Tool | Group | What it does |
|---|---|---|
| `list_reviews` | Read | Filter reviews by source, date, rating, sentiment, response status, verified flag, and full-text search. Paginated. |
| `get_review` | Read | Fetch a single review by ID with reply, tags, auto-respond history, and language. |
| `list_organizations` | Read | List accessible organisations with review count and average rating. Pass `organization_id` for detail including locations. |
| `list_locations` | Read | List accessible locations with per-location review count and average rating. |
| `get_metrics` | Read | Aggregate review metrics. Group by day, week, month, location, organisation, source, rating, sentiment. Period comparison supported. |
| `list_campaigns` | Read | Review-request campaigns with paused state, schedule, location, and full funnel (invited, opened, clicked, reviewed, redirected, testimonials, private feedback, unsubscribed, bounced) plus open, click, redirect, conversion, unsubscribe, and bounce rates. |
| `get_ai_insights` | Read | Retrieve pre-computed AI analysis (themes, sentiment shifts, recommendations) for an organisation or location. |
| `list_private_feedback` | Read with PII | List private feedback submissions. Names, emails, and phones redacted unless `include_pii` is passed. |
| `list_contacts` | Read with PII | List review-request contacts with subscription state, latest status, and per-contact engagement counters (invites, opens, clicks, redirects, testimonials, private feedback, bounces, unsubscribes). PII redacted by default. |
| `get_contact_activity` | Read with PII | Full timeline for one contact: every invite, open, click, video play, redirect, testimonial, private feedback, unsubscribe, bounce, and spam complaint, with the campaign, channel, step, rating, message, and source attached. |
| `list_auto_respond_rules` | Read with PII | List configured auto-respond rules with rating range, sources, delay, approval requirement, and AI usage. |
| `draft_review_response` | Write | Draft a response to a review. Always held for agency approval. Never auto-sent. |
| `tag_reviews` | Write | Attach or detach tags on up to 50 reviews per call. |
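A tool from the table above is invoked through the standard MCP `tools/call` method. The sketch below builds such a request for `list_reviews`; `date_from` is a parameter name the troubleshooting section confirms, but `rating` is an assumption here, so check the schema returned by `tools/list` before relying on specific argument names.

```python
import json

# Hypothetical tools/call request: "summarise recent 5-star reviews" might
# translate into something like this. Argument names should be verified
# against the tool's declared input schema.
call_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "list_reviews",
        "arguments": {
            "date_from": "2025-01-01",  # confirmed name (see Troubleshooting)
            "rating": 5,                # assumed name for illustration
        },
    },
}
print(json.dumps(call_request))
```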
Permissions and scope enforcement
Every tool performs three checks on every call. First, the token must hold the required Sanctum abilities (`mcp:access` for all tools, plus `mcp:writes` for write tools). Second, the token's owner must hold the required semantic permission (for example `reviews.view` or `reviews.respond`). Third, the requested organisation or location must fall inside the owner's access scope.
All three checks must pass. If any fails, the tool returns a JSON-RPC error with a specific code, and the call is recorded in the audit log as `denied`. The AI client can choose to surface this, retry with different parameters, or give up gracefully.
- `mcp:access` is required for every tool
- `mcp:writes` is required for `draft_review_response` and `tag_reviews`
- `reviews.pii` is required to unlock PII with `include_pii: true`
- Organisation and location scope always enforced, no override
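The three checks above can be sketched as a single gate function. This is an illustrative model, not the server's real code; the class and field names are assumptions.

```python
from dataclasses import dataclass

# Assumed shapes for the token and its owner, for illustration only.
@dataclass
class Token:
    abilities: set

@dataclass
class User:
    permissions: set
    accessible_org_ids: set

def authorize(token: Token, user: User, tool_permission: str,
              is_write: bool, org_id: int) -> str:
    # Check 1: Sanctum abilities on the token itself
    if "mcp:access" not in token.abilities:
        return "denied"
    if is_write and "mcp:writes" not in token.abilities:
        return "denied"
    # Check 2: semantic permission held by the token's owner
    if tool_permission not in user.permissions:
        return "denied"
    # Check 3: requested organisation must be inside the owner's scope
    if org_id not in user.accessible_org_ids:
        return "denied"
    return "ok"

reader = Token(abilities={"mcp:access"})
owner = User(permissions={"reviews.view"}, accessible_org_ids={1, 2})
print(authorize(reader, owner, "reviews.view", False, 1))    # read in scope
print(authorize(reader, owner, "reviews.respond", True, 1))  # write without mcp:writes
```

A denied call is still recorded in the audit log, so failed attempts remain visible.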
Controlling PII exposure
The AI Hub treats PII (names, emails, phone numbers) as opt-in data. By default, `list_private_feedback` and `list_contacts` return the same structure but with those fields stripped. This is the case even for tenant owner tokens, so a read-only analyst token truly cannot send PII to an AI provider.
To unlock PII, two conditions must both be true on every call. The token must have the `reviews.pii` permission when you create it, and the caller must explicitly pass `include_pii: true` in the tool parameters. Asking for PII with a token that does not hold the permission returns an error. Holding the permission does not auto-expose PII.
- Redacted by default on every call
- Both `reviews.pii` permission and `include_pii: true` required to unlock
- Redactor runs server-side; PII never leaves the server unless both conditions pass
- Create separate tokens for PII work so revoking is targeted
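The two-condition rule above can be modelled as follows. The field names are assumptions, not the real response schema; only the decision logic mirrors the behaviour this section describes.

```python
# Sketch of the PII rule: asking for PII without the permission errors,
# holding the permission without asking keeps redaction on, and only
# both together return raw fields.
PII_FIELDS = {"name", "email", "phone"}

def resolve_row(row: dict, token_has_pii: bool, include_pii: bool) -> dict:
    if include_pii and not token_has_pii:
        # Mirrors the documented error for tokens lacking reviews.pii
        raise PermissionError("Your token is missing the required permission: reviews.pii")
    if include_pii:
        return row  # both conditions met: PII passes through
    # Default path: redact server-side before anything leaves the server
    return {k: ("[redacted]" if k in PII_FIELDS else v) for k, v in row.items()}

row = {"name": "Ada", "email": "ada@example.com", "rating": 2}
print(resolve_row(row, token_has_pii=True, include_pii=False))
```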
Write tools and the approval queue
Two tools can modify data: `draft_review_response` and `tag_reviews`. Both require `mcp:writes` on the token, and neither bypasses the existing moderation pipeline.
When the AI calls `draft_review_response`, the resulting draft enters the Auto-Respond approval queue with status `approval_requested` and `send_at` set to null. It does not reach the review platform until a human opens the approval queue and clicks approve. If a matching Auto-Respond rule exists for the review's organisation or location, the rule's approval email fires so the reviewer gets notified the same way every other pending response is. If no matching rule exists, a minimal "MCP Drafts" rule is created automatically (with `use_ai` off and `require_approval` on) to carry the draft.
`tag_reviews` attaches or detaches tags on up to 50 reviews per call. Tagging is a soft, reversible operation, but it still requires write access. The flag sentinel tag is reserved and cannot be used through the tool.
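Because of the 50-review cap, a client tagging a larger set has to split the work into multiple calls. A minimal sketch, assuming `review_ids` and `tags` as the argument names (check the tool's declared schema):

```python
# Hypothetical helper that chunks a large tag operation into batches of 50,
# the per-call limit stated above.
def tag_review_batches(review_ids, tags, batch_size=50):
    for i in range(0, len(review_ids), batch_size):
        yield {
            "name": "tag_reviews",
            "arguments": {
                "review_ids": review_ids[i:i + batch_size],
                "tags": tags,
            },
        }

batches = list(tag_review_batches(list(range(120)), ["vip"]))
print(len(batches))  # 120 reviews split into 3 calls of 50, 50, and 20
```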
Reading the audit log
Every tool call writes a row to the `mcp_tool_calls` table in your tenant database. The row captures the caller, token, tool name, redacted parameters, duration, response size, row count returned, and result status. Tokens, emails, and phone numbers in the parameters are stripped before storage so the log itself does not become a PII surface.
Open the AI Hub dashboard to browse the log. You can filter by date range, tool name, and result status (`ok`, `denied`, `error`). The dashboard also shows a per-tool breakdown, a call-volume chart, and the recent activity for every connected client.
Rows older than 90 days are pruned automatically by the daily `tenants:prune-mcp-audit-log` command, so the audit table stays bounded even on busy accounts.
- One row per tool call, captured asynchronously through a queue so it never blocks the call
- Redacted parameters only, never raw tokens or PII
- Correlation IDs let you group all tool calls from a single AI prompt
- 90-day retention, pruned automatically
Rate limits
Two stacked rate limits protect the server: 120 calls per minute per token, and 600 calls per minute across the entire workspace. Both limits apply simultaneously. Hitting either returns HTTP 429 with a `Retry-After` header and a JSON-RPC error body, so the AI client can back off and retry cleanly.
In practice, a single Claude conversation usually fires 5 to 15 tool calls per question, so even a team of several analysts working against the same workspace will not touch these limits.
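A custom client that hits the limit should honour `Retry-After`. A minimal sketch, where `call_fn` stands in for whatever issues the HTTP request:

```python
import time

# Back off on HTTP 429 by waiting the number of seconds in Retry-After,
# then retrying; any other status is returned as-is.
def call_with_backoff(call_fn, max_retries=3):
    for _ in range(max_retries):
        status, headers, body = call_fn()
        if status != 429:
            return body
        time.sleep(int(headers.get("Retry-After", "1")))
    raise RuntimeError("still rate limited after retries")

# Simulated sequence: one 429, then success.
responses = iter([(429, {"Retry-After": "0"}, None), (200, {}, "ok")])
print(call_with_backoff(lambda: next(responses)))  # ok
```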
Correlation IDs for tracing
AI clients that support it can pass an `X-Correlation-Id` header. The server echoes it back on the response and stores it in the audit log. That makes it trivial to group all of the tool calls a single AI prompt generated, which is the fastest way to debug "why did Claude give that answer".
The header accepts any string up to 64 characters. If no header is sent, the server proceeds without one and the audit row's `correlation_id` is null.
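A custom client would typically generate one ID per user prompt and reuse it on every tool call that prompt triggers. A minimal sketch, with a placeholder token:

```python
import uuid

# One correlation ID per user prompt; uuid4().hex is 32 characters,
# comfortably inside the 64-character limit.
correlation_id = uuid.uuid4().hex
headers = {
    "Authorization": "Bearer YOUR_TOKEN_HERE",
    "Content-Type": "application/json",
    "X-Correlation-Id": correlation_id,
}
print(headers["X-Correlation-Id"])
```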
White-label behaviour
The MCP endpoint always resolves on the tenant's own domain, for example `https://app.your-agency.com/api/mcp`. The `serverInfo.name` returned during the MCP initialize handshake uses the tenant's company name. The Connect AI screen, the audit log dashboard, the API documentation, and the generated config snippet all use your agency branding.
Nothing in the MCP protocol exchange mentions EmbedMyReviews. The setup is identical for agency customers on white-label domains.
Troubleshooting
| Symptom | Likely cause and fix |
|---|---|
| Claude says "no tools available" | The client has not completed the initialize handshake. Restart the client; MCP servers are only discovered at startup. |
| All calls return "Authentication required" | Token was revoked, or the config snippet has the wrong token. Regenerate the connection in the dashboard and paste the fresh snippet. |
| "Your token is missing the required permission: reviews.respond" | Read-only token tried to call a write tool. Create a new connection with "Allow write access" ticked, or keep using the read-only token. |
| "Your token is missing the required permission: reviews.pii" | Token was created without PII access and the AI asked for `include_pii: true`. Create a new connection specifically scoped for PII work. |
| HTTP 429 with Retry-After | Rate limit hit. The AI client will usually back off automatically. Wait the number of seconds in the header and retry. |
| Tool returns "Unknown parameter" | The AI client sent a parameter name the tool does not accept (for example `from` instead of `date_from`). The error message lists the valid parameter names. |
| Draft response never reaches Google | Correct behaviour. Open the Auto-Respond approvals queue in the dashboard; the draft is there waiting for human approval. |