
# OwlSight

On-premise AI-powered code review for CI/CD pipelines.

OwlSight diffs your changes against a base branch, sends them to an OpenAI-compatible LLM with agentic tool-use capabilities, and produces detailed review findings — all without sending code to third-party services.

```
$ owlsight review --base main --model gpt-4o

  src/UserService.cs
  CRITICAL  SQL injection vulnerability (src/UserService.cs:42-45)
    User input is interpolated directly into SQL query.
    Suggestion: Use parameterized queries instead.

  WARNING   Missing null check (src/UserService.cs:28)
    GetUser() may return null but caller does not check.

  ╭──────────────────────────────╮
  │ Review Summary               │
  ├──────────────┬───────────────┤
  │ Files        │ 4             │
  │ Findings     │ 3             │
  │ Critical     │ 1             │
  │ Warning      │ 1             │
  │ Info         │ 1             │
  ╰──────────────┴───────────────╯

  Review FAILED — critical issues found.
```

## Why OwlSight?

| Need | OwlSight |
|------|----------|
| Keep code on-premise | Runs against any OpenAI-compatible API — local Ollama, Azure OpenAI, self-hosted vLLM |
| CI/CD gating | Exit code 1 on critical findings — standard pipeline gate |
| Context-aware reviews | Agentic loop lets the LLM read files, search code, check git blame |
| Custom rules | Drop markdown files in `.owlsight/rules/` to enforce team standards |
| No vendor lock-in | Works with any model that supports tool calling |
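As a sketch of what a custom rule might look like — the filename `no-raw-sql.md` and its wording are illustrative, since the source does not define a rule schema beyond "markdown files in `.owlsight/rules/`":

```markdown
<!-- .owlsight/rules/no-raw-sql.md (hypothetical example) -->
# No raw SQL in service classes

Database access in service classes must go through the repository layer.
Flag any direct SQL string construction as a CRITICAL finding and suggest
parameterized queries or a repository method instead.
```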

## Quick Start

### Install

```bash
# Clone and build
git clone https://github.com/radaiko/OwlSight.git
cd OwlSight
dotnet build

# Or build the Docker image
docker build -t owlsight .
```

### Initialize

```bash
owlsight init
```

Creates `.owlsight/config.json` and `.owlsight/rules/` with an example rule.
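A config file along these lines is the kind of thing `owlsight init` might scaffold; the key names below are illustrative assumptions, not a documented schema — check the generated file for the actual fields:

```json
{
  "baseUrl": "https://api.openai.com/v1",
  "model": "gpt-4o",
  "baseBranch": "main",
  "rulesDirectory": ".owlsight/rules"
}
```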

### Run a Review

```bash
owlsight review --base main --api-key $OPENAI_API_KEY --model gpt-4o
```

See Getting Started for the full setup walkthrough.


## Features

### AI Code Review

| Feature | Description |
|---------|-------------|
| `owlsight review` | Review changes against a base branch |
| `owlsight init` | Scaffold configuration and example rules |
| Custom rules | Project-specific review rules in markdown |
| Agentic loop | LLM investigates code context via tool calls |
| JSON reports | Machine-readable output for CI integration |
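One way a CI step might consume the machine-readable report — the top-level `version`/`timestamp`/`summary`/`findings` structure is described under Output below, but the per-finding field names (`severity`, `file`, `title`) are assumptions for illustration:

```python
import json

# Hypothetical report shaped like the documented top-level structure;
# per-finding fields such as "severity" are illustrative guesses.
report_text = """
{
  "version": "1.0",
  "timestamp": "2024-01-01T00:00:00Z",
  "summary": {"files": 4, "findings": 3},
  "findings": [
    {"severity": "critical", "file": "src/UserService.cs", "title": "SQL injection"},
    {"severity": "warning",  "file": "src/UserService.cs", "title": "Missing null check"},
    {"severity": "info",     "file": "src/Program.cs",     "title": "Unused variable"}
  ]
}
"""

def count_by_severity(report: dict) -> dict:
    """Tally findings per severity level."""
    counts = {}
    for finding in report["findings"]:
        sev = finding["severity"]
        counts[sev] = counts.get(sev, 0) + 1
    return counts

report = json.loads(report_text)
counts = count_by_severity(report)
print(counts)  # prints {'critical': 1, 'warning': 1, 'info': 1}
```

A dashboard or bot could aggregate these counts across pipelines rather than relying on the exit code alone.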

### LLM Tools

The LLM can call these tools to investigate code context during review:

| Tool | Description |
|------|-------------|
| `read_file` | Read entire file contents |
| `read_file_lines` | Read specific line range |
| `list_files` | List files with optional glob pattern |
| `search_text` | Regex search across files |
| `get_file_structure` | Directory tree view |
| `get_git_blame` | Git blame for authorship info |
| `get_git_log` | Recent commit history |

See LLM Tools for details.
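To make the agentic loop concrete, here is a minimal sketch (not OwlSight's actual implementation) of how a review loop might dispatch one of these tools when the model requests it by name with JSON arguments; the line-number prefixing is an assumption about how results are presented to the model:

```python
from pathlib import Path

def read_file_lines(path: str, start: int, end: int) -> str:
    """Return lines start..end (1-based, inclusive), prefixed with line
    numbers so the model can cite exact file:line locations."""
    lines = Path(path).read_text().splitlines()
    window = lines[start - 1:end]
    return "\n".join(f"{start + i}: {line}" for i, line in enumerate(window))

# Dispatch table mapping tool names (as advertised to the LLM) to handlers.
TOOLS = {"read_file_lines": read_file_lines}

def dispatch(name: str, arguments: dict) -> str:
    """Run the named tool with the model-supplied JSON arguments."""
    return TOOLS[name](**arguments)
```

For example, `dispatch("read_file_lines", {"path": "src/UserService.cs", "start": 42, "end": 45})` would return the flagged lines for the model to inspect before it commits to a finding.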

### Output

- **Console** — Spectre.Console colored output with severity, file:line, title, description, suggestion
- **JSON** — structured report with version, timestamp, summary, findings array
- **Exit code** — 0 = pass, 1 = critical findings, 2 = error
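Because critical findings exit with code 1, the review drops straight into any CI system as a failing step. A hedged GitHub Actions sketch, assuming `owlsight` is available on the runner's PATH and `OPENAI_API_KEY` is stored as a repository secret (both setup details are assumptions):

```yaml
# .github/workflows/review.yml (illustrative)
name: AI code review
on: pull_request
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history so the diff against --base main works
      - name: Run OwlSight
        run: owlsight review --base main --api-key "$OPENAI_API_KEY" --model gpt-4o
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        # exit code 1 on critical findings fails the job, blocking the PR
```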

## Supported LLM Providers

Any OpenAI-compatible API endpoint works. Tested with:

| Provider | `--base-url` |
|----------|--------------|
| OpenAI | `https://api.openai.com/v1` (default) |
| Azure OpenAI | `https://<name>.openai.azure.com/openai/deployments/<deployment>` |
| Ollama | `http://localhost:11434/v1` |
| vLLM | `http://localhost:8000/v1` |
| LM Studio | `http://localhost:1234/v1` |

## Requirements

| Requirement | Version |
|-------------|---------|
| .NET | 10.0+ |
| Git | Any recent version |
| LLM API | OpenAI-compatible with tool-calling support |

## Learn More