Four hours

I shipped an MCP server, a REST API, API docs, API key management, programmatic SEO pages, and a bunch of homepage tweaks for Board Paper Scraper. That was a Tuesday, after work.

Board Paper Scraper indexes NHS trust board papers. Sales teams in health tech use it to find procurement signals. The data is good, but it was locked behind a dashboard. You had to log in, click around, read things with your eyes. Fine for humans, but AI agents want endpoints.

So I built them. A REST API with Bearer token auth:

GET /api/v1/trusts/search?q=london&type=acute
GET /api/v1/trusts/:id
GET /api/v1/trusts/:id/board-papers
GET /api/v1/board-papers/:id
GET /api/v1/board-papers/:id/files/:fileId/download
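
A call to the search endpoint looks roughly like this. This is a sketch: the base URL and Bearer scheme come from the post, but the response shape (JSON fields, pagination) isn't documented here, so the example only builds and sends the request.

```python
import urllib.parse
import urllib.request

BASE_URL = "https://boardpaperscraper.com/api/v1"  # base path assumed from the endpoint list
API_KEY = "YOUR_API_KEY"

# Build an authenticated request for the trust search endpoint.
params = urllib.parse.urlencode({"q": "london", "type": "acute"})
req = urllib.request.Request(
    f"{BASE_URL}/trusts/search?{params}",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

# urllib.request.urlopen(req) would perform the call; the response
# format isn't specified in the post, so it isn't parsed here.
```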

And an MCP server so Claude Code, Cursor, or Windsurf can connect with a single config block:

{
  "mcpServers": {
    "board-paper-sales": {
      "type": "streamable-http",
      "url": "https://boardpaperscraper.com/api/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}

Add that to your ~/.claude/mcp.json and you can ask "which acute trusts in London published board papers this month?" and get real answers from real data. No dashboard, no context switching.

Then API key management so customers can generate and revoke keys. Documentation with setup guides for every major AI tool. An llms.txt route so the docs are machine-readable too. Then 30+ programmatic SEO pages targeting integration and use case keywords. Then homepage improvements based on analytics.
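
Generate-and-revoke key management can be sketched in a few lines. All names and the storage model here are my assumptions (the post doesn't describe the implementation); the one real design point is storing only a hash of each key, so a database leak doesn't leak usable credentials.

```python
import hashlib
import secrets

# Hypothetical in-memory store; a real service would persist these hashes.
_active_key_hashes: set[str] = set()

def generate_api_key() -> str:
    """Create a key, store only its hash, return the plaintext exactly once."""
    key = "bps_" + secrets.token_urlsafe(32)  # the "bps_" prefix is an assumption
    _active_key_hashes.add(hashlib.sha256(key.encode()).hexdigest())
    return key

def revoke_api_key(key: str) -> None:
    """Delete the key's hash so future requests with it fail auth."""
    _active_key_hashes.discard(hashlib.sha256(key.encode()).hexdigest())

def is_valid(key: str) -> bool:
    """Check a presented key against the stored hashes."""
    return hashlib.sha256(key.encode()).hexdigest() in _active_key_hashes
```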

Four hours. One PR.

The foundations matter

This was only possible because the existing codebase was well-structured. The internal API endpoints were already clean, the data layer was sensible, the patterns were consistent. AI didn't write this from nothing. It worked fast because it had good code to follow.

This is what people miss about AI-assisted development. It amplifies what's already there. If your codebase has clear patterns, AI extends those patterns at speed. If it's a mess, AI produces a mess faster. The quality of the foundations determines the quality of the output.

Count the roles

Think about what this work actually covers. The API is backend engineering. The MCP server is protocol integration. The SEO pages are content and growth. The documentation is technical writing. The homepage changes are product decisions driven by analytics.

In a traditional setup, that's a backend engineer, a frontend engineer, a content person, a technical writer, and a product manager making the prioritisation calls. Five different roles, or one person context-switching across all of them over several weeks.

Instead it was me, after work, with Claude Code open.

Any idea can be tested

This is the bit that matters. The interesting thing isn't "AI makes you faster." Everyone knows that. The interesting thing is that ideas which previously didn't justify the time now take so little effort that there's no reason not to try them.

Turning my dashboard into an API was always a good idea. It just never won a prioritisation fight. But when the cost drops from a week to four hours, the calculation changes completely. You stop asking "is this worth building?" and start asking "why haven't I built this already?"

No more "we haven't got time for that." You have. Literally. No more "let's revisit that next quarter." Just try it. The cost of testing an idea has dropped so far that the only real waste is not testing it.

A caveat

I should be fair: this works best when you're building something new, or extending a clean codebase. Legacy systems with poor documentation and tangled dependencies are a different story. AI can still help, but it's slower, more cautious work.

Even then, though, the question is the same: how fast is your feedback loop? How quickly can you go from idea to something testable? If the answer is still "weeks," something is wrong, and it's probably not the tools.