Most analytics implementations are an afterthought — added to a site once, then quietly drifting out of sync with reality. I built a system where that's structurally impossible. Here's how it works.
There is a pattern I've seen at every company I've consulted for. The development team ships a new feature. Then, weeks later, someone in analytics asks: "Wait, are we tracking this?" The answer is usually "sort of." The dataLayer push was added in a hurry, the parameter names don't match the documented schema, and the measurement plan — if one exists — hasn't been updated since Q2.
I decided I wanted none of that on my own portfolio. Not because the site is large — it isn't — but because this site is literally my public proof of how I work. Every analytics gap here is a contradiction I'd be broadcasting to every potential client.
So I built a system. This article explains exactly what it is, how it works, and what the underlying file that enforces it — CLAUDE.md — actually does.
CLAUDE.md is a plain-text instruction file that governs how Claude Code behaves in a given project. It sits in the project root alongside your code, gets read automatically at the start of every session, and stays in effect for the entire conversation.
Think of it as a rulebook. Not documentation for humans — documentation for the AI. Everything in it is an instruction that Claude will follow as if you'd said it yourself in every single prompt.
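For a sense of what that looks like, here's an illustrative excerpt, reconstructed in spirit from the rules described in this article rather than quoted verbatim:

```md
## Local Server
- Always serve the site on localhost:3000 via node serve.mjs.
- Never screenshot pages loaded from "file://" URLs.

## Measurement-First Rule
- Before writing any frontend code, ask how the change should be measured.
- Implement the agreed dataLayer pushes alongside the feature.
- Update measurement_plan/measurement_plan.html and the CSV export.
- Re-run node capture_measurement.mjs to refresh the screenshots.
```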
The file uses standard Markdown. Section headers become logical groupings of rules. Numbered lists become ordered procedures. The clearer and more specific your instructions, the more reliably they're enforced.
My CLAUDE.md is organized into these sections:
| Section | What it enforces |
|---|---|
| Always Do First | Invoke the frontend-design skill before writing any frontend code |
| Local Server | Always serve on localhost:3000 via node serve.mjs, never screenshot file:// |
| Screenshot Workflow | Puppeteer path, screenshot naming convention, comparison rounds |
| Brand Assets | Check assets/brand/ before designing, use real assets not placeholders |
| Anti-Generic Guardrails | Typography, shadows, gradients, animation, interactive states |
| Measurement-First Rule | The analytics workflow described in this article |
| Measurement Plan Versioning | When and how to version the measurement plan |
| Hard Rules | Absolute constraints — no transition-all, no default Tailwind palette |
The last three sections are the ones that directly enforce analytics discipline. Let's go through them.
This is the core constraint. The rule is simple but absolute:
> Every time any change is made to the website — new page, new section, new interaction, new form, new button, content update — you MUST ask how the change should be measured, implement the dataLayer pushes, update the measurement plan, and re-run the screenshot capture. Do NOT ship any frontend change without confirming analytics coverage first.
What this means in practice: before writing a single line of frontend code for any feature, Claude stops and asks me how I want the new interaction measured. I either approve a proposed approach or specify my own. Only after that confirmation does the implementation proceed — and it proceeds for both the feature and the tracking, simultaneously.
This makes analytics coverage a pre-condition for shipping, not a post-ship cleanup task.
Every website change that touches user-facing behaviour triggers this sequence:
1. A window.dataLayer.push() call is added to the affected HTML or JavaScript file, using the agreed parameter names and values, consistent with the existing schema.
2. The measurement_plan/measurement_plan.html viewer and the relevant CSV export are updated to document the new or modified event. The change is visible and searchable immediately.
3. node capture_measurement.mjs re-generates all Puppeteer screenshots, so the measurement plan always shows the live site, not a stale state from months ago.

The measurement plan is a self-contained HTML file at measurement_plan/measurement_plan.html. No Google Sheets. No Confluence. No external dependency. It opens in any browser, it loads instantly, it lives in the same git repo as the code it documents, and it is always in sync with the site because the rule in CLAUDE.md makes it structurally impossible for it to fall behind.
It is organized into eight tabs:
| Tab | Contents |
|---|---|
| 01 — Intro | Business context, objectives, 5-event overview |
| 02 — Parameters | Full parameter registry — 21 parameters across all events |
| 03 — page_view | One example per unique page_category + page_type combination |
| 04 — select_content | 29 interactions: nav links, hero CTAs, project cards, blog cards, filters, footer socials |
| 05 — generate_lead | Contact form submission with SHA-256 hashed email |
| 06 — orbit_interaction | Skills orbit section: pause detection + individual skill icon hover |
| 07 — search | Blog search: debounced query, results count, no-results state |
| 08 — Version History | Versioned changelog of every major update to the plan |
Each event row includes a plain-English description, typed parameter tags, a copyable JSON dataLayer push example, and a Puppeteer-captured screenshot of the real site with the tracked element highlighted in red.
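One detail from tab 05 worth a closer look: the email in generate_lead is pushed as a SHA-256 hash, never as plain text. A minimal sketch of the idea using the browser's Web Crypto API (simplified; the helper and parameter names are illustrative, not the site's actual code):

```js
// Hash an email client-side before it ever touches the dataLayer
async function sha256Hex(value) {
  const data = new TextEncoder().encode(value.trim().toLowerCase());
  const digest = await crypto.subtle.digest('SHA-256', data);
  return Array.from(new Uint8Array(digest), (b) => b.toString(16).padStart(2, '0')).join('');
}

async function trackLead(email) {
  window.dataLayer.push({
    event: 'generate_lead',
    lead_email_sha256: await sha256Hex(email) // parameter name illustrative
  });
}
```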
All events push to window.dataLayer and are picked up by GTM. The schema uses a consistent set of shared parameters across all events:
```js
// Every event carries page context
window.dataLayer.push({
  event: 'select_content',
  content_type: 'cta',
  content_id: 'view_projects',
  content_text: 'View Projects',
  location: 'hero',
  // Shared page parameters (set once on page load)
  page_path: '/index.html',
  page_title: 'Home',
  page_category: 'home',
  page_type: 'landing'
});
```
The shared page parameters — page_path, page_title, page_category, page_type, page_group — are set once via a window.TNK_PAGE object at the top of each HTML file, then picked up by the shared analytics.js loaded on every page. This means I never duplicate page context in individual event calls.
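In essence, the mechanism looks like this: the merged payload above is what GTM ultimately sees, while the call site only writes the event-specific fields. This sketch is simplified, and the helper name and example values are illustrative:

```js
// Set once in an inline <script> at the top of each HTML page
window.TNK_PAGE = {
  page_path: '/index.html',
  page_title: 'Home',
  page_category: 'home',
  page_type: 'landing',
  page_group: 'main' // example value
};

// In the shared analytics.js: merge the page context into every push
window.dataLayer = window.dataLayer || [];
function tnkPush(eventData) {
  window.dataLayer.push({ ...window.TNK_PAGE, ...eventData });
}

// Usage: only event-specific parameters at the call site
tnkPush({
  event: 'select_content',
  content_type: 'cta',
  content_id: 'view_projects',
  content_text: 'View Projects',
  location: 'hero'
});
```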
That consistency is the entire payoff: because every select_content event carries location with the same set of allowed values, you can pivot by location in Looker Studio and trust the result. Ad-hoc parameter names that vary by developer and by day make that impossible.
The measurement plan is versioned. Not just in git — explicitly, visibly, within the document itself.
The versioning rule in CLAUDE.md defines exactly when a version bump is required:
When a major change happens:
- measurement_plan.html is copied to measurement_plan/archive/measurement_plan_vN.html
- the Version History tab gets a new entry describing what changed

This means you can always open archive/measurement_plan_v1.html and see exactly what the tracking looked like before a particular change. The archive is just static HTML — no database, no server-side rendering, no dependency to install. It opens forever.
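The copy step itself is trivial to script. A minimal Node sketch, assuming the next version number is already known:

```js
// Archive the current plan before writing the new version
import { copyFileSync } from 'node:fs';

const nextVersion = 2; // determined from the Version History tab
copyFileSync(
  'measurement_plan/measurement_plan.html',
  `measurement_plan/archive/measurement_plan_v${nextVersion}.html`
);
```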
One of the things I'm most proud of in this setup is the screenshot automation. Each row in the measurement plan has a real screenshot of the live site with the tracked element highlighted — not a mockup, not a placeholder.
capture_measurement.mjs is a Puppeteer script that runs against the local server. For each interaction in the plan, it:
- scrolls the tracked element into the center of the viewport (scrollIntoView with block: 'center')
- injects a position:fixed red-border overlay div, positioned using getBoundingClientRect()
- captures a screenshot of the highlighted element in context

```js
// The highlight injection — position:fixed so it
// stays in view regardless of scroll position
const r = el.getBoundingClientRect();
const div = document.createElement('div');
div.style.cssText = [
  'position:fixed',
  `top:${r.top - 6}px`,
  `left:${r.left - 6}px`,
  `width:${r.width + 12}px`,
  `height:${r.height + 12}px`,
  'border:3px solid #FF3B3B',
  'border-radius:6px',
  'pointer-events:none',
  'z-index:2147483647',
].join(';');
document.body.appendChild(div);
```
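For context, the loop around that injection looks roughly like this (simplified to a single hard-coded interaction, where the real script iterates all of them):

```js
// Simplified capture loop (illustrative; the real script covers 70+ interactions)
import puppeteer from 'puppeteer';

const interactions = [
  { url: 'http://localhost:3000/index.html', selector: '.hero .cta-primary', name: 'select_content_hero_cta' },
  // ...one entry per tracked interaction
];

const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.setViewport({ width: 1440, height: 900 });

for (const { url, selector, name } of interactions) {
  await page.goto(url, { waitUntil: 'networkidle0' });
  // Scroll the element into view and inject the red highlight overlay
  await page.evaluate((sel) => {
    const el = document.querySelector(sel);
    el.scrollIntoView({ block: 'center' });
    const r = el.getBoundingClientRect();
    const div = document.createElement('div');
    div.style.cssText =
      `position:fixed;top:${r.top - 6}px;left:${r.left - 6}px;` +
      `width:${r.width + 12}px;height:${r.height + 12}px;` +
      'border:3px solid #FF3B3B;border-radius:6px;pointer-events:none;z-index:2147483647';
    document.body.appendChild(div);
  }, selector);
  await page.screenshot({ path: `measurement_plan/screenshots/${name}.png` });
}

await browser.close();
```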
The script covers all 70+ interactions across all 5 event types. Re-running it after any visual change refreshes every screenshot in under two minutes. The measurement plan always shows the site as it actually is.
Most measurement plans fail for one of two reasons: they were never properly maintained, or they were never connected to the development process in the first place.
The typical flow looks like this: A business analyst writes a tracking plan in a spreadsheet. A developer implements some of it. The site ships. The spreadsheet is never updated again. Six months later, the spreadsheet describes events that no longer exist, and the site fires events that were never documented.
The problem is that the measurement plan and the code live in different systems with different owners and different update cadences. There is no structural reason for them to stay in sync — so they don't.
My setup removes that gap through three mechanisms: the measurement-first rule makes tracking a pre-condition of every frontend change, the plan lives in the same git repo as the code it documents, and the screenshot automation keeps the documentation visually current.
You don't need to use Claude Code or have an AI assistant to apply the underlying principles here. The patterns that actually matter are:
- Measurement is part of the definition of done. A frontend change isn't finished until its tracking is implemented and documented.
- The plan lives with the code. One repo, one commit, one review covers both the feature and its documentation.
- The parameter vocabulary is controlled. location: "hero" and section: "hero" and placement: "above-fold" are all the same concept, expressed three different ways, that will haunt your Looker Studio reports for years. Agree on names once, encode them in a shared file (a sketch follows below), enforce them in review.

If you're using Claude Code or another AI coding assistant, CLAUDE.md (or whatever the equivalent instruction file is for your tool) is the right place to encode this. Write the rule once, in plain English, with enough specificity that a sufficiently literal reader would know exactly what to do. The AI will follow it. Your future self will thank you.
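On the naming point, encoding the vocabulary can be as simple as a shared module (purely illustrative; use whatever enforcement fits your stack):

```js
// tracking_vocabulary.js — the single source of truth for allowed values
export const LOCATIONS = Object.freeze(['hero', 'nav', 'footer', 'project_card', 'blog_card']);

export function assertLocation(location) {
  if (!LOCATIONS.includes(location)) {
    throw new Error(`Unknown location "${location}"; add it to LOCATIONS first`);
  }
  return location;
}
```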
The measurement plan for this site documents five events, 21 parameters, 29 select_content interactions, and 70+ automated screenshots — all of it generated, maintained, and kept in sync through a workflow that takes me near zero ongoing effort, because the effort is baked into the rule that governs how every single frontend change is made.
That's the real value of CLAUDE.md: not any individual instruction, but the compounding effect of having consistent rules applied reliably across every session, every feature, every change. It turns one-time decisions into permanent process.
Analytics quality isn't about having the right tools. It's about having the right habits — or better yet, making the right habits unavoidable.