Should Code Review Live Inside Git?
We launched git-lrc on Product Hunt last week.
It ended up as #3 Product of the Day.
Good signal. But launches are spikes. The more important question is where to go from here, and whether we are building something worthwhile for the global engineering community.
As of now, my core idea is simple:
Code review should be versioned, local-first, and repository-native.
Not a SaaS layer floating above Git.
Not comments that disappear into CI logs.
Not feedback locked inside a PR UI.
If review matters, it should live where the code lives.
The Model: Git ↔ GitHub
The mental model is the same as Git and GitHub.
Git → local, versioned, distributed.
GitHub → collaboration layer on top.
Similarly:
git-lrc → local, runs on commit, stores artifacts in the repo.
LiveReview → team visibility and governance on top.
Inference can be cloud or local (any model).
Everything else — execution, storage, policy — is local and Git-native.
That separation feels correct.
Reviews as Artifacts (.lrc/)
Each commit generates a review.
That review is stored:
repo/
  .lrc/
    reviews/<commit>.json
    config.yaml

That does a few important things:
Reviews are diffable.
Policies are versioned.
Improvements in prompts are measurable.
Teams can evolve review behavior deliberately.
Most AI review systems generate text and move on.
Here, review becomes repository state.
That changes incentives. You can’t hand-wave it away.
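A minimal sketch of what writing one of these artifacts could look like. The JSON field names and function name here are assumptions for illustration, not git-lrc's actual schema:

```python
import json
from pathlib import Path

def write_review_artifact(repo_root: str, sha: str, findings: list[dict]) -> Path:
    """Persist one commit's review as repository state under
    .lrc/reviews/<sha>.json.

    Hypothetical sketch: the {"commit", "findings"} layout is an
    assumption, not git-lrc's real format.
    """
    out_dir = Path(repo_root) / ".lrc" / "reviews"
    out_dir.mkdir(parents=True, exist_ok=True)
    artifact = out_dir / f"{sha}.json"
    artifact.write_text(json.dumps({"commit": sha, "findings": findings}, indent=2))
    return artifact
```

Because the artifact is a plain JSON file inside the repo, `git diff` and `git log` work on reviews exactly as they do on code.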
.lrc as Policy
The .lrc folder is not just output storage.
It defines:
What to enforce.
What to ignore.
How strict to be.
What style of feedback is preferred.
So the repository itself declares how it wants to be reviewed.
Two repos can behave very differently.
That’s a feature.
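For illustration, a policy file along these lines. Every key below is hypothetical, not git-lrc's actual config schema:

```yaml
# .lrc/config.yaml — hypothetical keys, for illustration only
enforce:
  - security
  - correctness
ignore:
  - generated/**
  - vendor/**
strictness: high        # how strict to be
feedback_style: terse   # preferred style of feedback
```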
The Signal Layer (This Matters More Than It Sounds)
Right now, git-lrc generates a web UI on commit.
The obvious next step is letting developers signal on each comment:
👍 Useful
👎 Irrelevant
🛠 Fixed
🚫 False positive
Not for generic model training.
For repository-level refinement.
Over time:
Noisy categories get suppressed.
High-value issue types surface more prominently.
Repeated false positives effectively disappear.
Most AI tools stop at generation.
This closes the loop.
And the loop is everything.
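One way the suppression side of that loop could work, sketched with made-up signal names and a made-up threshold:

```python
from collections import Counter

def suppressed_categories(signals: list[dict], threshold: float = 0.6) -> set[str]:
    """Return finding categories where negative signal dominates.

    `signals` is a list of {"category": str, "signal": str} records
    accumulated from past reviews in one repository. The signal
    vocabulary ("useful", "irrelevant", "fixed", "false_positive")
    and the 60% threshold are assumptions for this sketch.
    """
    negative = Counter()
    total = Counter()
    for s in signals:
        total[s["category"]] += 1
        if s["signal"] in {"irrelevant", "false_positive"}:
            negative[s["category"]] += 1
    # Require a minimum sample before suppressing anything.
    return {
        cat for cat, n in total.items()
        if n >= 3 and negative[cat] / n >= threshold
    }
```

The point of the design is that suppression is per-repository: a category that is noise in one codebase can stay prominent in another.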
Unifying Static Analysis + AI
There is no reason to choose between AI review and traditional tools.
Tools like:
SonarQube
Semgrep
TruffleHog
already do valuable work.
The opportunity is orchestration:
Run them locally.
Normalize their findings.
Deduplicate semantically.
Rank by severity and confidence.
Present a coherent report.
Developers don’t want five disjoint streams of warnings.
They want one prioritized review.
AI is well-suited for the aggregation layer.
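A sketch of the aggregation step. The normalized schema and the ranking key are assumptions, and the deduplication here is location-based, a simple stand-in for the semantic deduplication described above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """Hypothetical normalized finding; real tool outputs differ widely."""
    tool: str         # e.g. "semgrep", "sonarqube", "trufflehog"
    file: str
    line: int
    rule: str
    severity: int     # 1 (info) .. 5 (critical)
    confidence: float # 0.0 .. 1.0

def aggregate(findings: list[Finding]) -> list[Finding]:
    """Deduplicate findings that point at the same location, keeping
    the highest-severity instance, then rank by severity and confidence."""
    best: dict[tuple[str, int], Finding] = {}
    for f in findings:
        key = (f.file, f.line)
        if key not in best or f.severity > best[key].severity:
            best[key] = f
    return sorted(best.values(), key=lambda f: (-f.severity, -f.confidence))
```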
Making Review Callable
If review becomes a repository primitive, other systems should call it.
API endpoint
CLI hooks
MCP interface
Another bot should be able to ask:
“What does git-lrc think about this commit?”
That allows bot-to-bot workflows.
It also means git-lrc becomes infrastructure, not just a CLI tool.
Automatic Fixes via Capability Discovery
Different developers have different agents available:
GitHub Copilot
Claude
Local LLM setups
git-lrc can detect what exists and expose actions:
Apply patch
Trigger Copilot fix
Generate codemod
Open repair suggestion
Review should not end in text.
It should end in improved code.
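Capability discovery can be as simple as probing PATH for agent CLIs. The binary names and the action mapping below are assumptions about what each agent installs, not something git-lrc currently ships:

```python
import shutil

# Hypothetical mapping from agent binaries to the fix actions they enable.
AGENT_ACTIONS = {
    "gh": ["trigger_copilot_fix"],          # GitHub CLI, assumed Copilot proxy
    "claude": ["apply_patch", "generate_codemod"],
    "ollama": ["open_repair_suggestion"],   # local LLM runtime
}

def available_actions(which=shutil.which) -> list[str]:
    """Expose only the fix actions whose backing agent is installed.

    `which` is injectable for testing; it defaults to a PATH lookup.
    """
    actions: list[str] = []
    for binary, acts in AGENT_ACTIONS.items():
        if which(binary):
            actions.extend(acts)
    return actions
```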
Understanding Real Usage
To build something durable, I need to understand:
Are people using this solo or inside teams?
What categories of issues dominate?
At what point does friction appear?
When does someone think, “this should be shared across the org”?
Not vanity metrics.
Just clarity about what phase this product is actually in.
The “Upgrade” Path
The transition to LiveReview should be obvious.
A reasonable progression:
Developer installs git-lrc locally.
.lrc policies mature.
The team wants shared enforcement and visibility.
They layer LiveReview on top.
If git-lrc is Git, LiveReview is GitHub.
That symmetry is intentional.
What I’m Actually Trying to Do
At a higher level, what I do feel strongly is this:
If code generation keeps accelerating, review must become tighter, earlier, and versioned.
Running review at commit time — storing it inside the repo — and refining it with explicit developer signal feels like the correct direction for engineers who ship.
Local storage may function better overall because it fits how developers already work.