Launching soon

AI code review as LSP diagnostics

Lunar reviews code as you write and surfaces issues inline in your editor as standard diagnostics. Same squiggles, same Problems panel, same workflow — just with AI-powered review built directly into the loop.

Always-on

Review runs automatically on file open and edit.

Editor-native

Findings show up inline as standard LSP diagnostics.

Fast feedback

Catch issues while code is still fresh in your head.

Launch updates

Get notified when Lunar rolls out

Join the waitlist for product updates, early access, and launch news.

We’ll only use your email for Lunar launch and product updates.

UserProfile.tsx
TypeScript React
import { useState } from "react"

export function UserProfile({ user }) {
  const [loading, setLoading] = useState(false)
  const [data, setData] = useState(null)

  async function fetchData() {
    setLoading(true)
    const res = await fetch("/api/user/" + user.id)
    const json = await res.json()
    setData(json)
    setLoading(false)
  }

  return (
    <div>
      {loading ? <p>Loading...</p> : <p>{data?.name}</p>}
    </div>
  )
}

Review at the moment of writing

Most code review happens too late. By the time a PR is open, context is lost and fixes are expensive. Lunar moves review to the moment of writing.

1-Second Debounce

After a 1-second pause in typing, Lunar sends your file to GPT-4o-mini for review. Results appear instantly.
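The debounce itself is a standard trailing-edge pattern. A minimal sketch (a generic helper for illustration, not Lunar's actual implementation):

```typescript
// Generic trailing-edge debounce: only the last call within the window fires.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  ms: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Usage sketch: review fires once, 1 second after the last edit.
// const scheduleReview = debounce(runReview, 1000);
```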

AI Agent Ready

Agents can read diagnostics tagged source: "lunar" via textDocument/publishDiagnostics and fix issues automatically.

Four Severity Levels

Error for security risks, Warning for logic smells, Info for maintainability, Hint for naming suggestions.

Fixed Ruleset

10 rules covering clarity, naming, error handling, security, API contracts, complexity, and more. No hallucinated rule IDs.

Standard LSP

Works with any LSP-compatible editor. Same squiggly lines, same Problems panel you already know.

Stale Result Guard

If the document changes while the model is thinking, outdated results are dropped silently. Always fresh feedback.
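A guard like this can be sketched with a per-document version counter. The sketch below is hypothetical (only the notion of a document version comes from the LSP; all names are illustrative):

```typescript
// Sketch of a stale-result guard keyed on document version.
// Each edit bumps the version; a review result is only published if the
// document is still at the version the review started from.
const latestVersion = new Map<string, number>();

async function reviewDocument(
  uri: string,
  version: number,
  runReview: () => Promise<string[]>,
  publish: (uri: string, findings: string[]) => void,
): Promise<void> {
  latestVersion.set(uri, version);
  const findings = await runReview();
  // The document changed while the model was thinking: drop the result.
  if (latestVersion.get(uri) !== version) return;
  publish(uri, findings);
}
```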

Fixed ruleset. No hallucinations.

The AI reviews against a defined set of 10 rules. It cannot hallucinate rule IDs — you always know exactly what it's checking.

Severity Levels

Error

High-impact issue — security risk, broken contract, data loss potential

Warning

Should be fixed — logic smell, missing error handling, confusing code

Info

Worth considering — maintainability, testability, documentation gaps

Hint

Low-priority suggestion — naming, minor clarity improvement
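These four levels line up one-to-one with the DiagnosticSeverity values defined by the Language Server Protocol, so mapping them is trivial. The enum values below come from the LSP specification; the lunarSeverity table is a hypothetical illustration:

```typescript
// DiagnosticSeverity values as defined by the Language Server Protocol.
enum DiagnosticSeverity {
  Error = 1,
  Warning = 2,
  Information = 3,
  Hint = 4,
}

// Hypothetical mapping from Lunar's severity names to LSP values.
const lunarSeverity: Record<string, DiagnosticSeverity> = {
  error: DiagnosticSeverity.Error,
  warning: DiagnosticSeverity.Warning,
  info: DiagnosticSeverity.Information,
  hint: DiagnosticSeverity.Hint,
};
```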

Review Rules

review/clarity - Unclear intent, confusing control flow
review/naming - Misleading or overly generic names
review/error-handling - Swallowed exceptions, missing error paths
review/security-footgun - Injection risks, exposed secrets, unsafe operations
review/api-contract - Type mismatches, nullability violations, broken invariants
review/complexity - Deep nesting, functions that need decomposition
review/perf-hotpath - Redundant work in loops, unnecessary allocations or I/O
review/maintainability - Tight coupling, implicit dependencies, hidden state
review/testing-gap - Logic that is likely untested and hard to test
review/docs-mismatch - Comments that contradict the code, missing critical docs

How it works

Get started in minutes. No complex configuration required.

01

Install and configure

Clone the repo, add your OpenAI API key, and launch the extension in VS Code.

npm install
echo "OPENAI_API_KEY=sk-..." > .env

# Press Ctrl+Shift+B to compile
# Then F5 to launch
02

You type

Write code as you normally would. Lunar watches every file you open in the background.

async function fetchData() {
  const res = await fetch(url + id)
  const json = await res.json()
  return json
}
03

1s debounce, then review

After a 1-second pause, Lunar sends the file to GPT-4o-mini. Results come back as standard LSP Diagnostics.

// review/security-footgun
// User input in URL - injection risk

// review/error-handling  
// Response status not checked
04

Fix or let agents fix

Read the diagnostics yourself, or let AI agents filter by source: "lunar" and fix automatically.

// Agent workflow:
read textDocument/publishDiagnostics
filter source === "lunar"
apply fixes
watch issues clear on next pass
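The filtering step above can be sketched in TypeScript. The Diagnostic shape is a trimmed-down version of the LSP interface; the helper name is hypothetical:

```typescript
// Trimmed-down LSP Diagnostic shape, enough for filtering by source.
interface Diagnostic {
  source?: string;
  code?: string | number;
  severity?: number;
  message: string;
}

// Hypothetical helper: keep only Lunar findings from a diagnostics payload.
function lunarFindings(diagnostics: Diagnostic[]): Diagnostic[] {
  return diagnostics.filter((d) => d.source === "lunar");
}
```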
Comparison

Where Lunar fits versus generic code-review MCPs

Lunar is not trying to replace deep PR review or repo-wide analysis. It solves a different problem: fast, local, editor-native review while you are still writing code.

How review starts

Lunar

Push-based. Review runs automatically when you open or edit a file.

Generic review MCP

Pull-based. You usually invoke a review tool or ask an agent for feedback.

Where feedback shows up

Lunar

Inside the editor as standard LSP diagnostics — inline, line-specific, and in the Problems panel.

Generic review MCP

Often in chat, tool output, or another review surface outside the normal diagnostics flow.

Feedback loop

Lunar

Optimized for low-latency, moment-of-writing feedback while context is still fresh.

Generic review MCP

Usually better for checkpoint reviews after a chunk of work or once an agent finishes a pass.

How agents consume findings

Lunar

Structured as standard diagnostics with severity, source, and stable rule codes.

Generic review MCP

May require custom parsing or additional orchestration depending on the tool output format.

Best use case

Lunar

Single-file, incremental review while writing code.

Generic review MCP

PR-level, diff-level, or broader repo-aware review on demand.

The simplest way to think about it

A generic code-review MCP is usually a tool you call. Lunar is a reviewer embedded into the editor’s diagnostic loop. That means the value is not just that AI review exists — it is that review appears inline, immediately, and in the exact workflow developers and coding agents already use to fix issues.

Launch waitlist

Be first to hear when Lunar rolls out

Lunar is built for fast, editor-native review loops. Join the waitlist to get launch updates, early access news, and product announcements.

No PR review queue required. Just your editor, your code, and immediate feedback.