Two breakthroughs. The floor rose — twice.
52 weeks of public GitHub data, charting the two patent-pending context-engineering breakthroughs that built grāmatr℠. Every contribution verifiable. Not a demo — production.
GitHub contributions — one person, 52 weeks
What happened, week by week.
Eight months of real work across multiple codebases. The pattern is clear in the chart: bursts of high activity followed by quiet stretches. A 345-contribution week in May, then single digits. A 295-contribution week in August, then back to 63. The average on active weeks was roughly 50 contributions. On many weeks, it was zero. This is the normal rhythm of a solo developer working without persistent AI context — high effort produces high output, but it doesn't sustain.
Two weeks of raw construction velocity. 475 contributions the first week, 816 the next — the all-time peak on the chart, and the first proof the brain worked. This was the intelligence infrastructure being built from scratch: MCP server, embedding pipeline, knowledge graph traversal, security model, offline mode. Greenfield construction, every commit shipping directly into the new system. The output was higher than any prior week in the dataset because the system was already making the operator faster — even before the routing breakthrough that came four months later.
The first brain-assisted week landed at 198. Then a dip to 76 as the system stabilized. Then sustained elevation: 391 (data pipelines, analytics infrastructure, architecture documentation across multiple projects), 209, 189. Even the holiday weeks held above pre-brain baseline. January started slow — 45, 34 — then climbed to 170 and peaked at 518 in the Feb 1 week. The average across this entire phase was roughly 200 contributions per week. Not a spike. A new floor — four times the pre-brain average.
Output dips. The chart shows it clearly: 80, 130, 187, 49, then near-zero. This was deliberate investment. The focus shifted to building the v2 classifier, routing architecture, and generation pipeline — the patent-pending pre-classification system. Scaffolding the new platform from scratch. Testing local LLM fidelity. Building the classification heads. One week near zero includes time away from the keyboard. These are investment weeks where visible output drops because the infrastructure being built is the infrastructure that will multiply future output.
The patent-pending pre-classification routing engine came online and the second step-change hit. 562 contributions in the first week. 376 on the GitHub calendar the following week — and the git log shows what those 376 actually contained: 607 commits, 1,203 files touched, 354,489 lines added, shipped through feature branches, pull requests, code review, automated CI/CD, and 15 tagged production releases. The GitHub calendar undercounts March because squash-merged branches collapse into a single contribution — the calendar is the public lower bound; the git log is authoritative. Velocity and engineering discipline grew together; the routing breakthrough made both possible. The post-routing floor settled at roughly 470 contributions per week — nearly ten times the pre-brain baseline.
The floor rise applies everywhere.
The chart shows one person's data. The principle behind it — persistent context that compounds across sessions — applies to every role that uses AI daily.
Individual developers
The chart shows what happens when an intelligence pipeline pre-classifies every request and delivers only what's relevant — in 1,200 tokens instead of 40,000. Less noise, better responses, faster iteration. The floor rise isn't about remembering more. It's about routing smarter. That applies to your workflow too.
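The routing idea can be sketched in a few lines. Everything here is invented for illustration: the category names, the slice contents, and the keyword match that stands in for the actual patent-pending classifier. The point is the shape of the mechanism, not its implementation.

```python
# Hypothetical illustration of pre-classification routing.
# A trained classifier does the real work; a keyword match stands in here.
CONTEXT_SLICES = {
    "auth":     "JWT middleware notes, token refresh flow ...",   # each slice ~1,200 tokens
    "billing":  "Stripe webhook handlers, retry policy ...",
    "pipeline": "Embedding jobs, queue configuration ...",
}

def route(request: str) -> str:
    """Return only the slice relevant to the request,
    instead of shipping the full ~40,000-token corpus."""
    for label, context in CONTEXT_SLICES.items():
        if label in request.lower():
            return context
    return ""  # no match: send nothing rather than everything

print(route("Why does the auth token expire early?"))
```

The win is the ratio: one relevant slice in the prompt instead of the whole corpus, every request.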
See what changes for individuals →
Team leads
One person's floor rise is interesting. A team's floor rise is a multiplier. When the intelligence pipeline learns your team's patterns and routes that knowledge to every practitioner — new hires included — the sustained elevation scales across every seat. Not a one-time boost. Compounding returns, every week.
See what changes for teams →
Enterprise buyers
52 weeks of verifiable data from a single practitioner. Measurable floor rises at two distinct inflection points. Methodology documented. Contribution graph visible at github.com/bhandrigan; detailed logs available on request. The board deck writes itself: sustained, measurable, compounding AI ROI — not a one-week demo.
See the enterprise case →
Implementation partners
Run the math. A 10% productivity floor rise at $350/hour across 30,000 practitioners at a major consultancy is $1.05 billion in recovered capacity annually. The chart shows a 4x floor rise for one person. Even a conservative fraction of that, applied across an enterprise deployment, changes the ROI model for every AI engagement you deliver.
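A quick check of the arithmetic. The $1.05 billion figure works out if each practitioner bills roughly 1,000 hours per year; that utilization number is an assumption for illustration, not something stated above.

```python
# Arithmetic behind the $1.05B figure. BILLABLE_HOURS_PER_YEAR is an
# illustrative assumption; the other inputs come from the text above.
PRACTITIONERS = 30_000
RATE_PER_HOUR = 350              # USD
FLOOR_RISE = 0.10                # 10% productivity floor rise
BILLABLE_HOURS_PER_YEAR = 1_000  # assumed; implied by the stated total

recovered = PRACTITIONERS * RATE_PER_HOUR * FLOOR_RISE * BILLABLE_HOURS_PER_YEAR
print(f"${recovered / 1e9:.2f}B in recovered capacity annually")
```

Halve the hours or the rate and the figure scales linearly; the model survives conservative inputs.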
See the partner model →
GitHub measures code output only. The intelligence pipeline accelerated everything.
The contribution chart captures commits, pull requests, issues, and code reviews. It does not capture the full scope of what the intelligence pipeline produced during the brain-assisted and post-routing phases.
The code velocity is the measurable signal — public, verifiable, week-by-week. The full productivity story is bigger. What the chart shows is the floor. What was built on top of that floor spans engineering, content, infrastructure, and strategy.
How to read the numbers. How to verify them.
What "GitHub contributions" measures
"GitHub contributions" is a composite metric: commits to default branches, pull requests opened, issues opened, and code reviews submitted. It is a broader measure than raw commit count. When the chart says "562 contributions," that includes all four activity types for that week.
What "git commits" measures
Git commits, counted directly from repository logs via git log, captures code changes only. This is the narrower, more precise metric. The breakthrough week of March 22-28 shows 376 on the GitHub contribution calendar but 607 verified git commits when counted directly from the repositories, across 354,489 lines of changed code in 1,203 files.
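Counts like these can be reproduced from raw repository logs. Below is a minimal sketch of a parser for `git log --numstat --pretty=format:%H` output; the command is generic git, not the project's actual tooling.

```python
def summarize_numstat(log_text: str):
    """Tally commits, file rows, and lines added from the output of
    `git log --numstat --pretty=format:%H`."""
    commits, files, added = 0, 0, 0
    for line in log_text.splitlines():
        parts = line.split("\t")
        if len(parts) == 3:                 # numstat row: added, deleted, path
            files += 1
            if parts[0] != "-":             # "-" marks binary files
                added += int(parts[0])
        elif len(line) == 40 and all(c in "0123456789abcdef" for c in line):
            commits += 1                    # a commit hash line
    return commits, files, added
```

Feed it the captured output of `git log --since=<start> --until=<end> --numstat --pretty=format:%H`. Note that this counts file rows, so a file modified in several commits is counted each time; deduplicate paths if unique files are wanted.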
Why the numbers differ between phases
The November 2025 phase shipped greenfield infrastructure straight into the new system — every commit registered individually on the contribution calendar. The March 2026 routing breakthrough shipped through feature branches, pull requests, code review, and squash merges — structural workflow choices that actively undercount on the GitHub calendar, because an entire branch of work collapses into a single merge commit. Both methodologies are visible. Both are correct for what they were measuring. The March numbers are the lower bound on the calendar; the git log is the authoritative count.
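The undercount mechanism in miniature, with invented branch names and sizes: each squash-merged feature branch lands as a single commit on the default branch, so the calendar records one contribution per branch while the branch logs keep every commit.

```python
# Toy model of squash-merge undercounting. Branch names and sizes are invented.
feature_branches = {
    "classifier-v2":  12,   # commits on the branch before squash merge
    "routing-engine": 23,
    "gen-pipeline":    7,
    "ci-hardening":    5,
}

git_log_count  = sum(feature_branches.values())  # authoritative: every commit
calendar_count = len(feature_branches)           # one squash commit per merged branch

print(f"git log: {git_log_count} commits; calendar: {calendar_count} contributions")
```

The more disciplined the workflow (smaller branches, mandatory review, squash on merge), the wider the gap between the two counts.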
On-device classification performance
Classification latency: under 100ms on CPU. Total model size: 2.3MB. Architecturally ready for edge deployment.
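A sub-100ms CPU claim is straightforward to check with a wall-clock benchmark. The classifier below is a trivial stand-in, since the real 2.3MB model is not public; the timing harness is the reusable part.

```python
import time

def classify(text: str) -> str:
    """Stand-in for the on-device classifier; the real model is a
    small trained network, replaced here by a trivial length check."""
    return "long" if len(text) > 80 else "short"

def p95_latency_ms(fn, sample: str, runs: int = 200) -> float:
    """Measure the 95th-percentile latency of fn(sample) in milliseconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(sample)
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return timings[int(runs * 0.95)]

print(f"p95 latency: {p95_latency_ms(classify, 'example request'):.3f} ms")
```

Reporting a high percentile rather than the mean is the honest way to state a latency budget; a single slow outlier should not hide behind an average.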
How to verify
The GitHub contribution graph at github.com/bhandrigan shows weekly contribution counts publicly. Repositories are private, but the contribution calendar is independently verifiable by anyone. Detailed git log data is available on request for due diligence.
See what this means for your team.
The floor rise is the proof point. The question is what it looks like at your scale, with your workflows, across your tools.