Cometly
Platform / MCP · Public beta

Cometly data,
in any AI agent.

Connect Cometly to Claude, ChatGPT, Cursor, and custom AI agents via Model Context Protocol — so your AI workflows operate on real attribution data, not guesses. Now in public beta.

Claude · Connected to Cometly
What's our closed-won pipeline by source this month, and which channel improved most vs last month?
Tool call: cometly.pipeline_by_source(period="mtd", compare="prev_month")

Closed-won pipeline this month is $1.42M, led by Google Ads ($512K). LinkedIn Ads improved the most — +38% MoM on the back of three new outbound-warmed campaigns.

Pipeline by source · MTD · from cometly.pipeline_by_source
  • Google Ads: $88K
  • Meta: $64K
  • LinkedIn: $52K
  • Organic: $38K
  • Outbound email: $24K
Source: Cometly · live · attribution_model=last_touch
How the MCP server works

Built on MCP.
Works everywhere.

Standard MCP server

Built on the Model Context Protocol — the open standard for connecting tools to LLMs. Works with Claude, ChatGPT, Cursor, Continue, and custom agents over JSON-RPC.
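Under the hood, every query like the pipeline example above travels as a standard JSON-RPC 2.0 message. A minimal Python sketch of what a `tools/call` request looks like on the wire (the tool name and arguments mirror the demo above; exact names may differ in your workspace):

```python
import json

# An MCP tool call is a plain JSON-RPC 2.0 request.
# Tool name and arguments below are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "pipeline_by_source",
        "arguments": {"period": "mtd", "compare": "prev_month"},
    },
}

# Over the stdio transport, each message is one line of JSON.
wire = json.dumps(request)
print(wire)
```

The same message shape works over SSE; only the transport changes, which is why one server can serve Claude, Cursor, and custom agents alike.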

Live attribution queries

Agents query your real Cometly data — pipeline, CAC, LTV, journeys — through the same metrics layer your dashboards use. Numbers are always current, never a stale export.

Read access today, write on the roadmap

MCP currently exposes Cometly's read endpoints — every metric your dashboards and Ask AI can return. Write actions (pause a campaign, sync an audience, save a dashboard from chat) are rolling out as the beta matures.

Auth + permissions

Scoped API keys with per-workspace, per-tool permissions. Agents only see and act on what their key authorizes, and any token can be revoked in one click with a full audit log.

Use with Claude, ChatGPT, Cursor, or custom

One MCP server, many clients. Plug into Claude Desktop, Cursor, Continue, ChatGPT custom actions, or your own LangChain / LlamaIndex agents — no custom integration work.

No engineering required to start

Generate an API key in Cometly, drop the MCP config into Claude Desktop or Cursor, done. Most teams have AI agents querying their funnel within an hour.
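For Claude Desktop, that config is one JSON entry. A sketch of what it might look like, assuming the server is run via the `cometly-mcp` package shown later on this page; the exact command and environment-variable name are illustrative:

```json
{
  "mcpServers": {
    "cometly": {
      "command": "npx",
      "args": ["-y", "cometly-mcp"],
      "env": { "COMETLY_API_KEY": "sk-..." }
    }
  }
}
```

Cursor and other MCP clients accept an equivalent server entry in their own config files.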

MCP clients

Standard MCP. Works with everything.

Cometly's MCP server speaks the open standard, so any agent or LLM that supports MCP can query your attribution data — no custom integration work required.

Cometly MCP server · cometly-mcp@0.7.0
  • Claude Desktop
    Native MCP — paste config, start asking
    Live
  • Cursor
    MCP in the IDE — agents query Cometly mid-flow
    Live
  • ChatGPT custom actions
    Connect via API key + MCP bridge
    Live
  • Windsurf / Continue
    MCP support in editor agents
    Live
  • LangChain / LlamaIndex
    MCP-compatible agent frameworks
    Compatible
  • Custom agents
    Standard JSON-RPC over stdio / SSE
    Compatible
Same metrics layer as Ask AI · grounded in your live Cometly data
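For the custom-agents row above, a minimal Python sketch of JSON-RPC over stdio. MCP's stdio transport frames each message as one line of JSON; the launch command is illustrative, so use whatever your MCP client config runs:

```python
import json
import subprocess

def rpc(id_, method, params=None):
    """Build one JSON-RPC 2.0 message, newline-delimited for MCP's stdio transport."""
    msg = {"jsonrpc": "2.0", "id": id_, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

def talk(proc, line):
    """Write one request line to an MCP server process and read one response line."""
    proc.stdin.write(line)
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

# Example session (command is illustrative):
# proc = subprocess.Popen(["npx", "-y", "cometly-mcp"],
#                         stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
# talk(proc, rpc(1, "initialize", {...}))   # MCP handshake
# tools = talk(proc, rpc(2, "tools/list"))  # discover the read endpoints
```

Frameworks like LangChain and LlamaIndex wrap this handshake for you; the sketch just shows there is no proprietary protocol underneath.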
FAQ

MCP, agents, and permissions: questions while we're in public beta.

Below: which clients work today, what's queryable versus on the roadmap, and how keys are scoped. Book a demo and we'll plug into your preferred MCP setup.

Talk to sales
What's MCP?
Model Context Protocol — an open standard, originally developed by Anthropic, for connecting tools and data to LLMs. Think of it as the USB-C of AI agents: one connector, many tools.
Which AI tools does this work with?
Anything that supports MCP: Claude Desktop, ChatGPT custom actions, Cursor, Windsurf, Continue, custom LangChain / LlamaIndex agents, and any client that speaks JSON-RPC over stdio / SSE.
What can agents query?
Pipeline, CAC, LTV, journeys, attribution by source, cohort data — everything Cometly's Ask AI can query through our standard metrics layer. Write actions (pause campaigns, sync audiences, generate dashboards from a chat prompt) are coming as the public beta matures.
Is this safe? Can the agent leak data?
API keys are scoped per workspace with role-based permissions. Agents only see what their key authorizes, write access is off by default, and any token can be revoked in one click. Audit logs track every read.
How do I set it up?
Generate an API key in Cometly, drop the MCP server config into your client (Claude Desktop, Cursor, etc.). Most teams are running an AI agent against their funnel within an hour.

Stop letting agents guess.
Give them the data.

Give Claude, Cursor, and your custom agents direct access to your live attribution data — so every answer is grounded in your real funnel instead of a guess.