I got my AI agent to do my fortnightly analytics review

Every two weeks I sit down to check how my SaaS is performing. Traffic, signups, what's ranking, what isn't, where people are coming from, what they're doing once they arrive. The kind of review that takes a competent founder maybe an hour if they're disciplined, and half a day if they fall down a rabbit hole in PostHog.

I haven't done that manually in months.

I type one sentence into Claude Code, something like "it's been another two weeks, check Search Console, PostHog, and tell me what's working and what isn't", and about ninety seconds later I get a complete performance briefing with period-over-period comparisons, traffic source breakdowns, signup funnel numbers, and observations that are genuinely useful. Not generic "consider A/B testing your CTA" filler. Actual findings from actual data.

What's wired together

PostHog is the cleanest integration. It ships an MCP server, which means Claude can query it directly: trends, funnels, breakdowns, raw HogQL, all without me writing analytics code. For Search Console and Google Ads, I have Python scripts that authenticate against both APIs using the same OAuth credentials and output structured data. Claude reads and executes them. It's not as elegant as an MCP integration, but the pattern works for any API: if you can write a script that calls it and prints results, Claude can use it.

All of this runs in parallel. Claude fires off the scripts and PostHog queries concurrently, then synthesises everything into a single briefing. The whole thing takes about a minute.

What comes back

Here's a real example from the review I ran on 16 March 2026. PostHog traffic, period-over-period:

| Metric | Current 28d | Previous 28d | Change |
| --- | --- | --- | --- |
| Pageviews | 2,629 | 4,408 | -40% |
| Unique visitors | 872 | 168 | +419% |

Traffic sources:

| Source | Pageviews | % |
| --- | --- | --- |
| Direct | 1,272 | 48% |
| Google (all) | 1,189 | 45% |
| Bing | 35 | 1.3% |
| ChatGPT | 33 | 1.3% |
| Perplexity | 6 | 0.2% |
| Gemini | 5 | 0.2% |

And from Search Console, homepage clicks tripled period-over-period, 23 new pages appeared in Google results that weren't indexed before, and it flagged a query generating 2,700 impressions with zero clicks that turned out to be a false positive. Not bad for ninety seconds.

It also comes back with suggestions drawn from the same session's data.

Some of these are obvious in hindsight. That's sort of the point. The review surfaces them consistently, every two weeks, without me having to remember to check.

The bit that actually matters

The time saving is nice, but it's not really the point.

Because Claude has all the context loaded, the conversation continues naturally into a working session. After this review I asked it to pull my actual React components and cross-reference the copy against the PostHog funnel data. It read the homepage, the pricing config, and the onboarding flow, and came back with specific suggestions tied to specific files. The review becomes the starting context for real work, not a standalone report that sits in a Google Doc and gets looked at once.

If you want to build something similar

The building blocks are straightforward.

MCP servers are the fastest path. PostHog, Stripe, and a growing number of services ship them. You configure them in your Claude Code settings with an API key and you're done. Claude can query them natively.
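For illustration, a project-level `.mcp.json` entry has roughly this shape. Treat it as a sketch rather than copy-paste config: the command, URL, and auth header below are placeholders, and the exact values come from each service's MCP documentation (PostHog's, in this case):

```json
{
  "mcpServers": {
    "posthog": {
      "command": "npx",
      "args": [
        "-y", "mcp-remote@latest",
        "https://mcp.posthog.com/mcp",
        "--header", "Authorization:${POSTHOG_AUTH_HEADER}"
      ],
      "env": {
        "POSTHOG_AUTH_HEADER": "Bearer <personal-api-key>"
      }
    }
  }
}
```

Once the server is registered, its tools show up to Claude automatically; there's nothing else to wire.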

For everything else, write a script. Search Console, Google Ads, any internal API. Python works well because Claude can read and execute it directly. Store credentials in a config file that the script loads at runtime, make sure the output is structured (CSV, JSON, whatever), and Claude handles the rest. I use a single YAML file with OAuth credentials that cover both Google APIs because the refresh token carries scopes for Ads and Search Console.
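As a sketch of that pattern, here's a minimal Search Console fetcher using only the standard library. Everything specific here is illustrative rather than my actual script: the credential file path and format, the site URL, and the choice of dimensions are placeholders, though the OAuth token exchange and the `searchAnalytics/query` endpoint are the real Google APIs.

```python
import json
import os
import urllib.parse
import urllib.request

# Hypothetical credentials file: {"client_id": ..., "client_secret": ..., "refresh_token": ...}
CONFIG = os.path.expanduser("~/searchconsole-creds.json")


def access_token(cfg):
    """Swap the long-lived OAuth refresh token for a short-lived access token."""
    data = urllib.parse.urlencode({
        "client_id": cfg["client_id"],
        "client_secret": cfg["client_secret"],
        "refresh_token": cfg["refresh_token"],
        "grant_type": "refresh_token",
    }).encode()
    req = urllib.request.Request("https://oauth2.googleapis.com/token", data=data)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]


def fetch_queries(site, start, end, token):
    """Pull top search queries for a property over a date range."""
    url = ("https://www.googleapis.com/webmasters/v3/sites/"
           + urllib.parse.quote(site, safe="") + "/searchAnalytics/query")
    body = json.dumps({
        "startDate": start,
        "endDate": end,
        "dimensions": ["query"],
        "rowLimit": 100,
    }).encode()
    req = urllib.request.Request(url, data=body, headers={
        "Authorization": "Bearer " + token,
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def to_records(response):
    """Flatten the API's nested rows into plain dicts so the output is easy to read."""
    return [{"query": r["keys"][0],
             "clicks": r["clicks"],
             "impressions": r["impressions"]}
            for r in response.get("rows", [])]


if __name__ == "__main__" and os.path.exists(CONFIG):
    cfg = json.load(open(CONFIG))
    rows = fetch_queries("sc-domain:example.com", "2026-02-16", "2026-03-15",
                         access_token(cfg))
    print(json.dumps(to_records(rows), indent=2))
```

Claude can run this and read the JSON straight from stdout. The same shape works for any API that speaks HTTP: authenticate, query, print something structured.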

A CLAUDE.md file ties it together. This is the most underrated part. A markdown file in your project root that tells Claude what the product is, what APIs are available, where credentials live, and what the defaults should be:

```markdown
## Authentication
- Google Ads API: Credentials in ~/google-ads.yaml
- Search Console API: Same OAuth credentials, refresh token has both scopes
- Client library: google-ads Python package, uses GoogleAdsClient.load_from_storage()

## Defaults
- Target market: UK (location 2826), English (language 1000)
- Search Console property: boardpaperscraper.com
- Customer ID: stored in customer_id.txt

## Rules
- NEVER execute mutate operations against the Google Ads API
- All new scripts go in saved_code/
- All CSV output goes in saved_csv/
```

Claude reads this automatically at the start of every session. No re-explaining context. No "which property should I query?" It just knows.

The honest assessment

This isn't perfect. The PostHog MCP sometimes returns more data than is useful and Claude has to filter it down. Search Console data has a 48-hour lag so the numbers are always slightly stale. And the suggestions, while grounded in real data, are still coming from a model that hasn't watched a session recording or talked to a customer.

But the thing that's easy to miss is how much more data I'm actually looking at now. Before this, I checked the numbers I remembered to check, which meant the same three or four metrics every time. Now I'm getting breakdowns I wouldn't have thought to pull, comparisons across periods I wouldn't have bothered calculating, and observations about pages and queries I didn't know were performing. It's not a replacement for understanding the product. It's giving me significantly more to understand it with.

Where this is going

Most of this is held together with scripts. PostHog has an MCP server, but Search Console and Google Ads don't, so I'm writing Python to bridge the gap. It works, but it's clearly the early version.

The interesting thing to think about is what happens when that's no longer necessary. When Stripe and HubSpot and your CRM and your email platform all ship MCP endpoints as standard, when an agent can query any tool in your stack without someone writing a script first, the amount of operational work one person can handle changes completely. Right now I'm getting traffic, signups, and search performance in a single briefing. There's no reason that can't include revenue, churn, support ticket themes, and pipeline data once the integrations exist. The constraint isn't the AI. It's the plumbing. And the plumbing is getting built.