AI code review as LSP diagnostics
Lunar reviews code as you write and surfaces issues inline in your editor as standard diagnostics. Same squiggles, same Problems panel, same workflow — just with AI-powered review built directly into the loop.
Always-on
Review runs automatically on file open and edit.
Editor-native
Findings show up inline as standard LSP diagnostics.
Fast feedback
Catch issues while code is still fresh in your head.
Get notified when Lunar rolls out
Join the waitlist for product updates, early access, and launch news.
import { useState } from "react"

export function UserProfile({ user }) {
  const [loading, setLoading] = useState(false)
  const [data, setData] = useState(null)

  async function fetchData() {
    setLoading(true)
    const res = await fetch("/api/user/" + user.id)
    const json = await res.json()
    setData(json)
    setLoading(false)
  }

  return (
    <div>
      {loading ? <p>Loading...</p> : <p>{data?.name}</p>}
    </div>
  )
}
Review at the moment of writing
Most code review happens too late. By the time a PR is open, context is lost and fixes are expensive. Lunar moves review to the moment of writing.
1-Second Debounce
After a 1-second pause in typing, Lunar sends your file to GPT-4o-mini for review. Results appear as soon as the model responds.
AI Agent Ready
Agents can read source: "lunar" diagnostics via textDocument/publishDiagnostics and fix issues automatically.
Four Severity Levels
Error for security risks, Warning for logic smells, Info for maintainability, Hint for naming suggestions.
Fixed Ruleset
10 rules covering clarity, naming, error handling, security, API contracts, complexity, and more. No hallucinated rule IDs.
Standard LSP
Works with any LSP-compatible editor. Same squiggly lines, same Problems panel you already know.
Stale Result Guard
If the document changes while the model is thinking, outdated results are dropped silently. Always fresh feedback.
Fixed ruleset. No hallucinations.
The AI reviews against a defined set of 10 rules. It cannot hallucinate rule IDs — you always know exactly what it's checking.
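One way such a guard can work (a sketch; the function name is illustrative, the rule IDs are the ten listed below): keep a fixed allowlist and discard any finding whose ID falls outside it.

```typescript
// Illustrative: drop any finding whose rule ID is not in the fixed set.
const RULES = new Set([
  "review/clarity", "review/naming", "review/error-handling",
  "review/security-footgun", "review/api-contract", "review/complexity",
  "review/perf-hotpath", "review/maintainability", "review/testing-gap",
  "review/docs-mismatch",
]);

function filterValidFindings<T extends { code: string }>(findings: T[]): T[] {
  // Anything the model invents outside the allowlist is discarded.
  return findings.filter(f => RULES.has(f.code));
}
```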
Severity Levels
High-impact issue — security risk, broken contract, data loss potential
Should be fixed — logic smell, missing error handling, confusing code
Worth considering — maintainability, testability, documentation gaps
Low-priority suggestion — naming, minor clarity improvement
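These four levels map directly onto the LSP `DiagnosticSeverity` enum, which the specification defines as Error = 1 through Hint = 4. A sketch of the mapping (the category descriptions come from the list above; the function name is illustrative):

```typescript
// DiagnosticSeverity values as defined by the LSP specification.
enum DiagnosticSeverity { Error = 1, Warning = 2, Information = 3, Hint = 4 }

// Illustrative mapping from a review level to an LSP severity.
function toSeverity(level: "error" | "warning" | "info" | "hint"): DiagnosticSeverity {
  switch (level) {
    case "error": return DiagnosticSeverity.Error;      // security risk, broken contract
    case "warning": return DiagnosticSeverity.Warning;  // logic smell, missing error handling
    case "info": return DiagnosticSeverity.Information; // maintainability, docs gaps
    case "hint": return DiagnosticSeverity.Hint;        // naming, minor clarity
  }
}
```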
Review Rules
review/clarity
review/naming
review/error-handling
review/security-footgun
review/api-contract
review/complexity
review/perf-hotpath
review/maintainability
review/testing-gap
review/docs-mismatch
How it works
Get started in minutes. No complex configuration required.
Install and configure
Clone the repo, add your OpenAI API key, and launch the extension in VS Code.
npm install
echo "OPENAI_API_KEY=sk-..." > .env
# Press Ctrl+Shift+B to compile
# Then F5 to launch
You type
Write code as you normally would. Lunar watches every file you open in the background.
async function fetchData() {
  const res = await fetch(url + id)
  const json = await res.json()
  return json
}
1s debounce, then review
After a 1-second pause, Lunar sends the file to GPT-4o-mini. Results come back as standard LSP Diagnostics.
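Under the hood, each finding is a standard LSP Diagnostic object. A sketch of what one might look like for the security finding shown below (the range positions are made up for this example):

```typescript
// Illustrative LSP Diagnostic; positions are hypothetical.
const diagnostic = {
  range: {
    start: { line: 1, character: 26 },
    end: { line: 1, character: 34 },
  },
  severity: 1,                     // Error
  code: "review/security-footgun", // stable rule ID from the fixed set
  source: "lunar",                 // lets editors and agents filter Lunar findings
  message: "User input in URL - injection risk",
};
```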
// review/security-footgun
// User input in URL - injection risk
// review/error-handling
// Response status not checked
Fix or let agents fix
Read the diagnostics yourself, or let AI agents filter by source: "lunar" and fix automatically.
// Agent workflow:
read textDocument/publishDiagnostics
filter source === "lunar"
apply fixes
watch issues clear on next pass
Where Lunar fits versus generic code-review MCPs
Lunar is not trying to replace deep PR review or repo-wide analysis. It solves a different problem: fast, local, editor-native review while you are still writing code.
How review starts
Lunar
Push-based. Review runs automatically when you open or edit a file.
Generic review MCP
Pull-based. You usually invoke a review tool or ask an agent for feedback.
Where feedback shows up
Lunar
Inside the editor as standard LSP diagnostics — inline, line-specific, and in the Problems panel.
Generic review MCP
Often in chat, tool output, or another review surface outside the normal diagnostics flow.
Feedback loop
Lunar
Optimized for low-latency, moment-of-writing feedback while context is still fresh.
Generic review MCP
Usually better for checkpoint reviews after a chunk of work or once an agent finishes a pass.
How agents consume findings
Lunar
Structured as standard diagnostics with severity, source, and stable rule codes.
Generic review MCP
May require custom parsing or additional orchestration depending on the tool output format.
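For agents, this difference is concrete: Lunar's findings need no custom parsing beyond the standard diagnostics shape. An agent-side filter might look like this sketch (the `Diagnostic` interface follows the LSP spec; the helper name is illustrative):

```typescript
// Minimal slice of the LSP Diagnostic shape an agent would receive
// via textDocument/publishDiagnostics.
interface Diagnostic {
  source?: string;
  code?: string | number;
  severity?: number; // 1=Error ... 4=Hint
  message: string;
}

// Illustrative: pick out Lunar findings from a diagnostics batch,
// most severe first, so an agent fixes errors before hints.
function lunarFindings(diagnostics: Diagnostic[]): Diagnostic[] {
  return diagnostics
    .filter(d => d.source === "lunar")
    .sort((a, b) => (a.severity ?? 4) - (b.severity ?? 4));
}
```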
Best use case
Lunar
Single-file, incremental review while writing code.
Generic review MCP
PR-level, diff-level, or broader repo-aware review on demand.
The simplest way to think about it
A generic code-review MCP is usually a tool you call. Lunar is a reviewer embedded into the editor’s diagnostic loop. That means the value is not just that AI review exists — it is that review appears inline, immediately, and in the exact workflow developers and coding agents already use to fix issues.
Be first to hear when Lunar rolls out
Lunar is built for fast, editor-native review loops. Join the waitlist to get launch updates, early access news, and product announcements.
No PR review queue required. Just your editor, your code, and immediate feedback.