Analytics Workflow · 10 min read · Mar 10, 2026

My Analytics-First Workflow: How CLAUDE.md Keeps My Measurement Plan Permanently Up to Date

Most analytics implementations are an afterthought — added to a site once, then quietly drifting out of sync with reality. I built a system where that's structurally impossible. Here's how it works.

CLAUDE.md — Measurement-First Rule
## Measurement-First Rule
Every time any change is made to the website, you MUST:
1. Ask how the change should be measured
2. Implement the dataLayer push(es)
3. Update measurement_plan.html + CSV
4. Re-run capture_measurement.mjs
Do NOT ship any frontend change without confirming analytics coverage first.
Guillermo García
Digital Analytics Specialist & Designer

There is a pattern I've seen at every company I've consulted for. The development team ships a new feature. Then, weeks later, someone in analytics asks: "Wait, are we tracking this?" The answer is usually "sort of." The dataLayer push was added in a hurry, the parameter names don't match the documented schema, and the measurement plan — if one exists — hasn't been updated since Q2.

I decided I wanted none of that on my own portfolio. Not because the site is large — it isn't — but because this site is literally my public proof of how I work. Every analytics gap here is a contradiction I'd be sending to every potential client.

So I built a system. This article explains exactly what it is, how it works, and what the underlying file that enforces it — CLAUDE.md — actually does.


What Is CLAUDE.md?

CLAUDE.md is a plain-text instruction file that governs how Claude Code behaves in a given project. It sits in the project root alongside your code, gets read automatically at the start of every session, and stays in effect for the entire conversation.

Think of it as a rulebook. Not documentation for humans — documentation for the AI. Everything in it is an instruction that Claude will follow as if you'd said it yourself in every single prompt.

The file uses standard Markdown. Section headers become logical groupings of rules. Numbered lists become ordered procedures. The clearer and more specific your instructions, the more reliably they're enforced.

Key distinction
CLAUDE.md is not a README. It is not for human readers. It is a behavioral contract between you and your AI assistant, written once and silently respected in every session that follows.

What Mine Covers

My CLAUDE.md is organized into these sections:

Always Do First: Invoke the frontend-design skill before writing any frontend code
Local Server: Always serve on localhost:3000 via node serve.mjs; never screenshot file:// URLs
Screenshot Workflow: Puppeteer path, screenshot naming convention, comparison rounds
Brand Assets: Check assets/brand/ before designing; use real assets, not placeholders
Anti-Generic Guardrails: Typography, shadows, gradients, animation, interactive states
Measurement-First Rule: The analytics workflow described in this article
Measurement Plan Versioning: When and how to version the measurement plan
Hard Rules: Absolute constraints — no transition-all, no default Tailwind palette

The last three sections are the ones that directly enforce analytics discipline. Let's go through them.


The Measurement-First Rule

This is the core constraint. The rule is simple but absolute:

Every time any change is made to the website — new page, new section, new interaction, new form, new button, content update — you MUST ask how the change should be measured, implement the dataLayer pushes, update the measurement plan, and re-run the screenshot capture. Do NOT ship any frontend change without confirming analytics coverage first.

What this means in practice: before writing a single line of frontend code for any feature, Claude stops and asks me how I want the new interaction measured. I either approve a proposed approach or specify my own. Only after that confirmation does the implementation proceed — and it proceeds for both the feature and the tracking, simultaneously.

This makes analytics coverage a pre-condition for shipping, not a post-ship cleanup task.

The Four-Step Ritual

Every website change that touches user-facing behaviour triggers this sequence:

  1. Confirm measurement approach — What event fires? What parameters does it carry? Does it map to an existing event in the schema, or does it require a new one? This question is asked before any code is written.
  2. Implement the dataLayer push — The appropriate window.dataLayer.push() call is added to the affected HTML or JavaScript file, using the agreed parameter names and values, consistent with the existing schema.
  3. Update the measurement plan — The measurement_plan/measurement_plan.html viewer and the relevant CSV export are updated to document the new or modified event. The change is visible and searchable immediately.
  4. Re-run the screenshot capture — node capture_measurement.mjs regenerates all Puppeteer screenshots, so the measurement plan always shows the live site, not a stale state from months ago.
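Step 2 can be sketched as a pure payload builder that is unit-testable without a browser. Everything here — buildSelectContent, the field names on the mock element — is my illustration of the pattern, not the site's actual code:

```javascript
// Illustrative helper: build the agreed select_content payload for a
// clicked element. Kept pure so it can be tested outside the browser.
function buildSelectContent(el) {
  return {
    event: 'select_content',
    content_type: el.contentType,  // e.g. 'cta'
    content_id: el.contentId,      // e.g. 'view_projects'
    content_text: el.text.trim(),
    location: el.location          // e.g. 'hero'
  };
}

// In the browser this would be wired as:
//   el.addEventListener('click', () => window.dataLayer.push(buildSelectContent(el)));
// Here a plain array stands in for window.dataLayer.
const dataLayer = [];
dataLayer.push(buildSelectContent({
  contentType: 'cta',
  contentId: 'view_projects',
  text: '  View Projects  ',
  location: 'hero'
}));

console.log(dataLayer[0].content_text); // → "View Projects"
```

Separating payload construction from the DOM event wiring is what makes the schema checkable in isolation.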

The Measurement Plan Itself

The measurement plan is a self-contained HTML file at measurement_plan/measurement_plan.html. No Google Sheets. No Confluence. No external dependency. It opens in any browser, it loads instantly, it lives in the same git repo as the code it documents, and it is always in sync with the site because the rule in CLAUDE.md makes it structurally impossible for it to fall behind.

It is organized into eight tabs:

01 — Intro: Business context, objectives, 5-event overview
02 — Parameters: Full parameter registry — 21 parameters across all events
03 — page_view: One example per unique page_category + page_type combination
04 — select_content: 29 interactions (nav links, hero CTAs, project cards, blog cards, filters, footer socials)
05 — generate_lead: Contact form submission with SHA-256 hashed email
06 — orbit_interaction: Skills orbit section (pause detection plus individual skill icon hover)
07 — search: Blog search (debounced query, results count, no-results state)
08 — Version History: Versioned changelog of every major update to the plan

Each event row includes a plain-English description, typed parameter tags, a copyable JSON dataLayer push example, and a Puppeteer-captured screenshot of the real site with the tracked element highlighted in red.

Live document
View the Measurement Plan →
5 events · 21 parameters · 70+ screenshots · versioned changelog

The dataLayer Schema

All events push to window.dataLayer and are picked up by GTM. The schema uses a consistent set of shared parameters across all events:

// Every event carries page context
window.dataLayer.push({
  event: 'select_content',
  content_type: 'cta',
  content_id: 'view_projects',
  content_text: 'View Projects',
  location: 'hero',
  // Shared page parameters (set once on page load)
  page_path: '/index.html',
  page_title: 'Home',
  page_category: 'home',
  page_type: 'landing'
})

The shared page parameters — page_path, page_title, page_category, page_type, page_group — are set once via a window.TNK_PAGE object at the top of each HTML file, then picked up by the shared analytics.js loaded on every page. This means I never duplicate page context in individual event calls.
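That merge can be sketched in a few lines. window.TNK_PAGE is the site's real object; the pushEvent helper, the page_group value, and the standalone dataLayer array are my illustration of how analytics.js might combine the two:

```javascript
// Simulated page context. On the real site this is window.TNK_PAGE,
// declared once at the top of each HTML file.
const TNK_PAGE = {
  page_path: '/index.html',
  page_title: 'Home',
  page_category: 'home',
  page_type: 'landing',
  page_group: 'main'   // placeholder value for the illustration
};

const dataLayer = [];

// Hypothetical analytics.js helper: every event payload is merged with
// the shared page context, so event calls never repeat it.
function pushEvent(payload) {
  dataLayer.push({ ...payload, ...TNK_PAGE });
}

pushEvent({ event: 'select_content', content_type: 'cta', content_id: 'view_projects' });

console.log(dataLayer[0].page_category); // → "home"
```

Spreading the page context last means an event call can never accidentally override it with stale values.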

Why this matters for GA4
Consistent parameter naming across all events is what makes GA4 exploration useful. When every select_content event carries location with the same set of allowed values, you can pivot by location in Looker Studio and trust the result. Ad-hoc parameter names that vary by developer and by day make that impossible.

Versioning the Measurement Plan

The measurement plan is versioned. Not in git only — explicitly, visibly, within the document itself.

The versioning rule in CLAUDE.md defines exactly when a version bump is required. When a major change happens:

  1. The current measurement_plan.html is copied to measurement_plan/archive/measurement_plan_vN.html
  2. The version badge in the header is incremented
  3. A changelog entry is added to the Version History tab: date, version number, summary of what changed

This means you can always open archive/measurement_plan_v1.html and see exactly what the tracking looked like before a particular change. The archive is just static HTML — no database, no server-side rendering, no dependency to install. It opens forever.


The Screenshot Pipeline

One of the things I'm most proud of in this setup is the screenshot automation. Each row in the measurement plan has a real screenshot of the live site with the tracked element highlighted — not a mockup, not a placeholder.

capture_measurement.mjs is a Puppeteer script that runs against the local server. For each interaction in the plan, it:

  1. Navigates to the page
  2. Scrolls the target element into the centre of the viewport (scrollIntoView with block: 'center')
  3. Injects a position:fixed red-border overlay div positioned using getBoundingClientRect()
  4. Takes a 1440×900 screenshot
  5. Removes the overlay
// The highlight injection — runs in page context (via Puppeteer's
// page.evaluate), where `el` is the tracked element. position:fixed
// so the border stays in view regardless of scroll position
const r = el.getBoundingClientRect();
const div = document.createElement('div');
div.style.cssText = [
  'position:fixed',
  `top:${r.top - 6}px`,
  `left:${r.left - 6}px`,
  `width:${r.width + 12}px`,
  `height:${r.height + 12}px`,
  'border:3px solid #FF3B3B',
  'border-radius:6px',
  'pointer-events:none',
  'z-index:2147483647',
].join(';');
document.body.appendChild(div);

The script covers all 70+ interactions across all 5 event types. Re-running it after any visual change refreshes every screenshot in under two minutes. The measurement plan always shows the site as it actually is.


Why This Works — and Why Most Plans Don't

Most measurement plans fail for one of two reasons: they were never properly maintained, or they were never connected to the development process in the first place.

The typical flow looks like this: A business analyst writes a tracking plan in a spreadsheet. A developer implements some of it. The site ships. The spreadsheet is never updated again. Six months later, the spreadsheet describes events that no longer exist, and the site fires events that were never documented.

The problem is that the measurement plan and the code live in different systems with different owners and different update cadences. There is no structural reason for them to stay in sync — so they don't.

My setup removes that gap through three mechanisms: the measurement plan lives in the same git repo as the code it documents, the rule in CLAUDE.md makes confirming analytics coverage a pre-condition for every frontend change, and the screenshot pipeline regenerates the plan's visual evidence automatically.

The insight
You don't need willpower to maintain documentation. You need a system where the cost of not maintaining it is higher than the cost of updating it. CLAUDE.md creates exactly that asymmetry: the AI won't let you skip the step.

What You Can Take From This

You don't need to use Claude Code or have an AI assistant to apply the underlying principles here. The patterns that actually matter are:

  1. Treat measurement as a pre-condition for shipping, not a post-ship cleanup task.
  2. Keep the measurement plan in the same repository as the code it documents.
  3. Version the plan explicitly, with a visible changelog, not just in git history.
  4. Automate the evidence — screenshots, exports — so refreshing it costs minutes, not days.

If you're using Claude Code or another AI coding assistant, CLAUDE.md (or whatever the equivalent instruction file is for your tool) is the right place to encode this. Write the rule once, in plain English, with enough specificity that a sufficiently literal reader would know exactly what to do. The AI will follow it. Your future self will thank you.


Wrapping Up

The measurement plan for this site documents five events, 21 parameters, 29 select_content interactions, and 70+ automated screenshots — all of it generated, maintained, and kept in sync through a workflow that takes me near zero ongoing effort, because the effort is baked into the rule that governs how every single frontend change is made.

That's the real value of CLAUDE.md: not any individual instruction, but the compounding effect of having consistent rules applied reliably across every session, every feature, every change. It turns one-time decisions into permanent process.

Analytics quality isn't about having the right tools. It's about having the right habits — or better yet, making the right habits unavoidable.
