realrajaryan/DevAI

DevAI — Talk to your codebase.

DevAI lets you index an entire codebase, ask natural-language questions about any part of it, and get context-aware answers backed by Azure OpenAI. A built-in Open WebUI front-end and interactive graph visualizations make the experience feel like IDE autocomplete on steroids. Finally, you can talk to an AI that knows your whole codebase instead of pasting files into a chat window.

python -m devai --repo <path to your repo>

DevAI fires up a local FastAPI server on :8000, launches Open WebUI on :3000, and opens your browser.


✨ What DevAI does under the hood

                ┌──────────────┐
                │  Python repo │
                └──────┬───────┘
                       │  parse + AST
              ┌────────▼────────┐
              │  Symbol objects │  (funcs, classes, methods)
              └────────┬────────┘
                       │  static call-graph (DAG)
        ┌──────────────┴───────────────┐
        │  1. Summaries (GPT-4o-mini)  │
        │  2. Embeddings (text-embed-3)│
        └──────────────┬───────────────┘
                       │  vectors
                 ┌─────▼─────┐
                 │  FAISS    │─── nearest-neighbour search
                 └───────────┘
                       ▼
            ┌────────────────────┐  runtime
            │   Prompt builder   │  (select few relevant full sources +
            └────────┬───────────┘   surrounding summaries)
                     │
           Azure ChatCompletion (GPT-4o)
                     │
         natural-language answer + links + relevant code chunks
                     │
          Open WebUI chat  ▸  browser
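The FAISS stage in the diagram boils down to nearest-neighbour search over summary embeddings. A minimal pure-Python sketch of the idea, with toy 3-dimensional vectors and hypothetical symbol names standing in for real text-embedding-3-small output:

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# toy "embeddings" of three symbol summaries (hypothetical names/vectors)
index = {
    "parse_invoice": [0.9, 0.1, 0.0],
    "open_tempfile": [0.2, 0.8, 0.1],
    "render_docx":   [0.1, 0.2, 0.9],
}

def nearest(query, k=2):
    # rank all symbols by similarity to the query embedding, best first
    scored = sorted(index.items(), key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

print(nearest([0.85, 0.2, 0.05]))  # ['parse_invoice', 'open_tempfile']
```

FAISS does the same ranking, but over thousands of high-dimensional vectors with sublinear index structures rather than a brute-force sort.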

Features

  • One-command index: python -m devai --repo /path/to/repo parses every .py file, builds a call graph, and writes summaries & embeddings.
  • Mixed-granularity RAG: most context is 2-line summaries; only the N most relevant symbols are inlined with full source, so prompts stay ≤ 8 k tokens.
  • Intent router: a simple similarity-and-keyword gate. If the question isn't about the code, DevAI answers like a normal assistant instead of dumping irrelevant context.
  • Interactive graph: click a node to see its signature & summary; a collapsible sunburst + Sankey view shows both package structure and execution flow.
  • Open WebUI integration: spins up automatically; you get a ChatGPT-style UI without extra setup.
  • Per-repo cache: re-indexes only when files change; stored under symbols_cache/<hash>/.
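The per-repo cache directory (symbols_cache/<hash>/) can be sketched as follows; the exact hashing scheme here is an assumption for illustration, not DevAI's actual code:

```python
import hashlib
from pathlib import Path

def cache_dir(repo_path: str) -> Path:
    # hypothetical scheme: one cache directory per repo, keyed by a
    # short hash of the repo's absolute path
    resolved = str(Path(repo_path).expanduser().resolve())
    digest = hashlib.sha256(resolved.encode()).hexdigest()[:16]
    return Path("symbols_cache") / digest

print(cache_dir("~/projects/markitdown"))
```

Keying on the resolved path means the same repo always maps to the same cache, while two checkouts of the same project in different locations get independent caches.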

Quick start

git clone https://github.com/yourname/devai.git
cd devai
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

export AZURE_OPEN_AI_KEY=...
export AZURE_OPEN_AI_ENDPOINT=https://<resource>.openai.azure.com
export AZURE_OPEN_AI_SUMMARY_DEPLOYMENT=gpt-4o-mini
export AZURE_OPEN_AI_ANSWER_DEPLOYMENT=gpt-4o

# index + serve + web UI
python -m devai --repo ~/projects/markitdown

Open http://localhost:3000, pick model DevAI, and start chatting:

“Why does invoice_parser.py open a temp file?”
“Show me the high-level flow for the DOCX conversion pipeline.”


Configuration

  • AZURE_OPEN_AI_KEY: your Azure OpenAI key.
  • AZURE_OPEN_AI_ENDPOINT: e.g. https://myorg.openai.azure.com.
  • AZURE_OPEN_AI_SUMMARY_DEPLOYMENT: deployment name for the quick summarisation model (cheap).
  • AZURE_OPEN_AI_ANSWER_DEPLOYMENT: deployment name for the answer-synthesis model (quality).

Optional flags:

--refresh                 # force a re-index even if a cache exists
--visualize graph.html    # only export the graph; don't serve

How the logic works

1 · Indexing stage

  1. AST walk
    • Functions, async functions, classes, and methods all become symbols.
  2. Call graph
    • Add an edge A → B when A calls B (same-module only, for speed).
  3. Tag & metrics
    • Tag each symbol as generic helper, orchestrator, etc.; the tag guides the summary prompt.
  4. Summaries (gpt-4o-mini)
    • One sentence for helpers, two for orchestrators; the summary mentions dependencies when helpful.
  5. Embeddings
    • A text-embedding-3-small vector for each summary → FAISS.
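Steps 1 and 2 can be sketched with the standard-library ast module; this is a toy version of the idea, not DevAI's actual implementation:

```python
import ast

source = """
def helper(x):
    return x * 2

def orchestrator(x):
    return helper(x) + 1
"""

tree = ast.parse(source)

# step 1: every (async) function definition becomes a symbol
symbols = {node.name: node for node in ast.walk(tree)
           if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))}

# step 2: add an edge A -> B when A's body calls a same-module symbol B
edges = set()
for name, node in symbols.items():
    for sub in ast.walk(node):
        if (isinstance(sub, ast.Call)
                and isinstance(sub.func, ast.Name)
                and sub.func.id in symbols):
            edges.add((name, sub.func.id))

print(edges)  # {('orchestrator', 'helper')}
```

Restricting edges to names that resolve inside the same module is what keeps this cheap: no import resolution or type inference is needed.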

2 · Question time

  1. Intent gate
    • If the maximum cosine similarity is < 0.25 and the question contains no code keywords → skip RAG and answer generically.
  2. Core-symbol selection
    • Score by similarity, literal mention, graph centrality, and call frequency.
  3. Prompt builder
    • Insert full source for the top ≤ 5 symbols, add parent/child summaries, and keep the prompt under 8 k tokens.
  4. Answer synthesis (gpt-4o)
    • Uses the mixed context; formatting is enforced by the SYS_ANS guidelines.
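The prompt-building step above, full source for the top few symbols, summaries for the rest, under a token budget, can be sketched like this. The function shape, the 4-chars-per-token heuristic, and the symbol names are illustrative assumptions:

```python
BUDGET_TOKENS = 8_000  # prompt budget from the README

def rough_tokens(text: str) -> int:
    # crude heuristic (~4 chars per token), not a real tokenizer
    return len(text) // 4

def build_context(ranked, max_full=5):
    """ranked: list of (name, summary, source) tuples, best-first
    (a hypothetical shape for the scored symbols)."""
    parts, used = [], 0
    for i, (name, summary, source) in enumerate(ranked):
        # full source for the top few symbols, summaries after that
        chunk = source if i < max_full else summary
        cost = rough_tokens(chunk)
        if used + cost > BUDGET_TOKENS:
            break  # stay under the prompt budget
        parts.append(f"### {name}\n{chunk}")
        used += cost
    return "\n\n".join(parts)

ctx = build_context([
    ("parse_invoice", "Parses an invoice PDF.", "def parse_invoice(path): ..."),
    ("open_tempfile", "Opens a temp file.", "def open_tempfile(): ..."),
])
```

Mixing granularities this way is the core RAG trade-off: a handful of symbols get exact source, everything else contributes only cheap 2-line context.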

3 · Serving

FastAPI on :8000 exposes an OpenAI-compatible /v1/chat/completions endpoint, so Open WebUI connects to it as if it were the real API.
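Because the server speaks the OpenAI chat-completions wire format, any OpenAI-style client can talk to it directly. A stdlib-only sketch of such a request (the model name "DevAI" matches the Open WebUI picker; the endpoint must be running for the commented-out call to succeed):

```python
import json
import urllib.request

# OpenAI-style chat-completions payload aimed at the local DevAI server
payload = {
    "model": "DevAI",
    "messages": [{"role": "user",
                  "content": "Why does invoice_parser.py open a temp file?"}],
}

req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# requires the DevAI server to be running on :8000
# with urllib.request.urlopen(req) as resp:
#     answer = json.load(resp)["choices"][0]["message"]["content"]
```

This is also how the "without WebUI" path in the FAQ works: anything that can POST JSON can query the index.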


FAQ

  • Can I use it without WebUI? Yes—curl the /v1/chat/completions endpoint.
  • Does it support non-Python code? Not yet; parser is Python-specific.
  • GPU FAISS? Swap faiss-cpu for faiss-gpu in requirements.txt.

Licensed under MIT.
