# LightRAG vs. Lattice Comparison

Deep comparison of LightRAG and Lattice for knowledge graph-based retrieval, covering databases, user experience, architecture, and integration opportunities.

## Executive Summary
| Aspect | LightRAG | Lattice |
|---|---|---|
| Primary Use Case | General RAG with auto-extraction | Research documentation with human control |
| Extraction Trigger | Automatic on ingestion | Human-initiated via Claude Code |
| Database Flexibility | 15+ backends (Neo4j, PostgreSQL, Milvus, etc.) | FalkorDB only |
| Query Modes | 6 modes (local, global, hybrid, naive, mix, bypass) | Semantic search + Cypher |
| API Surface | REST API + Python library | CLI + Claude Code commands |
| LLM Requirements | Required (extraction + queries) | Optional (extraction only) |
Key Insight: LightRAG is a full-featured RAG system optimized for automated pipelines. Lattice is a lightweight CLI optimized for human-in-the-loop documentation workflows with Claude Code.
## 1. Database Layer Comparison

### 1.1 Storage Architecture

**LightRAG: Pluggable Multi-Backend**

LightRAG separates storage into four abstract types (KV, vector, graph, and document status; the first three are diagrammed below):
```
┌─────────────────────────────────────────────────────────────────┐
│                     LightRAG Storage Layer                      │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐   │
│ │   KV Storage    │  │ Vector Storage  │  │  Graph Storage  │   │
│ │  (7 instances)  │  │  (3 instances)  │  │  (1 instance)   │   │
│ └────────┬────────┘  └────────┬────────┘  └────────┬────────┘   │
│          │                    │                    │            │
│ ┌────────▼────────────────────▼────────────────────▼────────┐   │
│ │ Implementations: JSON, Redis, MongoDB, PostgreSQL         │   │
│ │ Vector: NanoVectorDB, Faiss, Milvus, Qdrant, Chroma       │   │
│ │ Graph: NetworkX, Neo4j, Memgraph, PostgreSQL (AGE)        │   │
│ └───────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────┘
```

**Lattice: FalkorDB-Native**
```
┌─────────────────────────────────────────────────────────────────┐
│                      Lattice Storage Layer                      │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ FalkorDB (Redis Module)                                     │ │
│ │ - Graph storage (GraphBLAS sparse matrices)                 │ │
│ │ - Vector storage (built-in HNSW)                            │ │
│ │ - Full-text search                                          │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ Local File Tracking                                         │ │
│ │ - .graph-state.json (sync state)                            │ │
│ │ - YAML frontmatter in markdown files                        │ │
│ └─────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
```

### 1.2 Database Support Matrix
| Backend | LightRAG | Lattice |
|---|---|---|
| FalkorDB | No | Yes (primary) |
| Neo4j | Yes | No |
| PostgreSQL (pgvector + AGE) | Yes (all-in-one) | No |
| Milvus | Yes | No |
| Qdrant | Yes | No |
| ChromaDB | Yes | No |
| Redis | Yes (KV only) | Via FalkorDB |
| MongoDB | Yes (KV + Graph) | No |
| File-based (JSON/NetworkX) | Yes (default) | No |
### 1.3 Graph Model Differences

**LightRAG Graph Model:**
- Nodes: Entities with `name`, `type`, `description`, `source_ids`
- Edges: Relations with `source`, `target`, `keywords`, `description`, `source_ids`
- Entity types: Extracted by LLM (typically Person, Organization, Location, Event, Concept)
- Relationship types: Free-form keywords extracted from text
**Lattice Graph Model:**
- Nodes: Entities with `name`, `type`, `description` (8 fixed types)
- Edges: 2 relationship types only (REFERENCES, APPEARS_IN)
- Entity types: Topic, Technology, Concept, Tool, Process, Person, Organization, Document
- Design choice: Coarse types plus edge properties to minimize FalkorDB memory use
```cypher
// LightRAG-style: many relationship types (memory-expensive in FalkorDB)
(Alice)-[:MANAGES]->(Bob)
(Alice)-[:COLLABORATES_WITH]->(Carol)
(Alice)-[:REPORTS_TO]->(Dave)

// Lattice: two relationship types only (memory-efficient)
(Alice)-[:REFERENCES {type: "manages"}]->(Bob)
(Alice)-[:APPEARS_IN]->(document.md)
```

### 1.4 Vector Search Comparison
| Feature | LightRAG | Lattice |
|---|---|---|
| Entity embeddings | Yes (separate VDB) | Yes (FalkorDB native) |
| Relation embeddings | Yes (separate VDB) | No |
| Chunk embeddings | Yes (separate VDB) | No (summary only) |
| Embedding dimensions | Configurable | 1024 (Voyage AI) |
| Embedding model | OpenAI, Ollama, Jina, etc. | Voyage AI only |
## 2. User Experience Comparison

### 2.1 Workflow Patterns

**LightRAG: Pipeline-Auto (Fire and Forget)**
```
User adds documents → LightRAG auto-extracts → Graph populated
        ↓                       ↓                    ↓
  No review step       LLM runs extraction    Ready for queries
```

```python
# LightRAG: automatic extraction on insert
rag = LightRAG(working_dir="./storage", llm_model_func=gpt_4o_mini_complete)
await rag.initialize_storages()
await rag.ainsert("Your document content")   # Entities extracted automatically
result = await rag.aquery("What entities exist?")
```

**Lattice: Human-Initiated (Review Optional)**
```
User creates document → User runs /entity-extract → User reviews YAML → User runs /graph-sync
         ↓                        ↓                        ↓                     ↓
   Document exists         Claude extracts          Human can edit        Graph populated
```

```shell
# Lattice: explicit commands at each step
/entity-extract docs/new-topic/research.md   # Claude Code extracts entities
# Optional: edit the YAML frontmatter
/graph-sync                                  # Sync to FalkorDB
lattice query "What entities exist?"
```

### 2.2 Control vs Convenience Trade-off
| Aspect | LightRAG (Auto) | Lattice (Human-Initiated) |
|---|---|---|
| Setup effort | More (LLM API required) | Less (CLI + FalkorDB) |
| Per-document effort | None | Command per batch |
| Quality control | Trust the LLM | Can review/edit |
| Cost per document | LLM tokens every time | LLM tokens on command |
| Bulk ingestion | Fast (parallel) | Slower (explicit commands) |
| Incremental updates | Automatic | Manual command |
### 2.3 Query Experience

**LightRAG: Multi-Mode Retrieval**
```python
# Six query modes for different retrieval patterns
await rag.aquery("Who is Alice?", param=QueryParam(mode="local"))                # Entity-centric
await rag.aquery("How do they work together?", param=QueryParam(mode="global"))  # Relation-centric
await rag.aquery("Complex question", param=QueryParam(mode="hybrid"))            # Both
await rag.aquery("Simple search", param=QueryParam(mode="naive"))                # Vector only
await rag.aquery("General question", param=QueryParam(mode="mix"))               # KG + vector
await rag.aquery("Just chat", param=QueryParam(mode="bypass"))                   # LLM only
```

**Lattice: Semantic Search + Cypher**
```shell
# Semantic search (default)
lattice query "How does FalkorDB handle memory?"

# Keyword search
lattice query -m keyword "FalkorDB memory"

# Raw Cypher
lattice query -m cypher "MATCH (e:Entity)-[:REFERENCES]->(t:Entity {type:'Technology'}) RETURN e.name, t.name"
```

## 3. Architecture Deep Dive
### 3.1 Core Components

**LightRAG Architecture:**
```
┌─────────────────────────────────────────────────────────────────┐
│                            LightRAG                             │
├──────────────────────────┬──────────────────────────────────────┤
│ Client Layer             │ API Layer (FastAPI)                  │
│ - Python library         │ - /documents/* (CRUD)                │
│ - REST API               │ - /query/* (retrieval)               │
│ - React Web UI           │ - /graphs/* (exploration)            │
│ - Ollama-compatible API  │ - /v1/chat (Ollama compat)           │
├──────────────────────────┴──────────────────────────────────────┤
│ Core Engine (LightRAG class)                                    │
│ - Document ingestion pipeline                                   │
│ - Entity/relation extraction (LLM-based)                        │
│ - Merge and summarization                                       │
│ - Multi-mode query processing                                   │
├─────────────────────────────────────────────────────────────────┤
│ Integration Layer                                               │
│ - 10+ LLM providers (OpenAI, Ollama, Gemini, Bedrock, etc.)     │
│ - 6 embedding services                                          │
│ - 3 reranking services (Cohere, Jina, Aliyun)                   │
├─────────────────────────────────────────────────────────────────┤
│ Storage Layer (Pluggable Abstractions)                          │
│ - BaseKVStorage, BaseVectorStorage, BaseGraphStorage            │
│ - 15+ backend implementations                                   │
└─────────────────────────────────────────────────────────────────┘
```

**Lattice Architecture:**
```
┌─────────────────────────────────────────────────────────────────┐
│                             Lattice                             │
├─────────────────────────────────────────────────────────────────┤
│ Claude Code Integration                                         │
│ - /entity-extract (slash command)                               │
│ - /graph-sync (slash command)                                   │
│ - /research (slash command)                                     │
├─────────────────────────────────────────────────────────────────┤
│ CLI Layer (NestJS + nest-commander)                             │
│ - lattice sync                                                  │
│ - lattice query                                                 │
│ - lattice status                                                │
│ - lattice validate                                              │
├─────────────────────────────────────────────────────────────────┤
│ Sync Service                                                    │
│ - Frontmatter parsing (gray-matter)                             │
│ - Schema validation (Zod)                                       │
│ - Content hash tracking                                         │
│ - APPEARS_IN relationship generation                            │
├─────────────────────────────────────────────────────────────────┤
│ Storage Layer                                                   │
│ - FalkorDB graph + vector (ioredis)                             │
│ - Voyage AI embeddings                                          │
│ - Local .graph-state.json                                       │
└─────────────────────────────────────────────────────────────────┘
```

### 3.2 Entity Extraction Comparison
**LightRAG Extraction Pipeline:**
1. Document → chunking (1200 tokens, 100 overlap)
2. Chunk → LLM extraction (parallel, with gleaning loop)
3. Extraction result → JSON parsing
4. Entities merged by name (case-insensitive)
5. Relations merged by (source, target) pairs
6. LLM summarization if merged descriptions exceed a threshold
7. Upsert to all storage backends
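The chunking step above can be sketched as follows. This is an illustrative stand-in, not LightRAG's implementation: LightRAG counts model tokens with a tokenizer, while this sketch uses whitespace-separated words.

```python
def chunk_text(text: str, chunk_size: int = 1200, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks. Whitespace tokens stand in
    for the model tokenizer LightRAG actually uses."""
    tokens = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break
    return chunks
```

Each chunk repeats the last `overlap` tokens of its predecessor, so entities mentioned near a boundary still appear whole in at least one chunk.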
**Lattice Extraction Pipeline:**
1. User runs the `/entity-extract` command
2. Claude Code reads the document content
3. Claude extracts entities following the schema
4. Claude writes YAML frontmatter to the file
5. User optionally reviews/edits the YAML
6. User runs `/graph-sync` or `lattice sync`
7. CLI parses the frontmatter and validates the schema
8. CLI upserts to FalkorDB and generates embeddings
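The frontmatter-parsing step can be sketched in miniature. Lattice uses gray-matter for this; the regex below is a simplified stand-in that handles only the basic `---`-delimited case.

```python
import re

def split_frontmatter(markdown: str) -> tuple[str, str]:
    """Split a markdown document into (YAML frontmatter, body).
    A minimal stand-in for what gray-matter does in the sync service."""
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", markdown, re.DOTALL)
    if not match:
        return "", markdown  # No frontmatter block present
    return match.group(1), match.group(2)
```

The returned frontmatter string would then be handed to a YAML parser and validated against the entity schema (Zod, in Lattice's case).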
### 3.3 Document Tracking
| Feature | LightRAG | Lattice |
|---|---|---|
| Status tracking | DocStatusStorage (PENDING/PROCESSING/PROCESSED/FAILED) | .graph-state.json (contentHash) |
| Duplicate detection | MD5 hash of content | Content hash comparison |
| Incremental updates | Automatic (doc_status check) | Manual (lattice status shows changed) |
| Rollback | Re-extract from source | Re-run /entity-extract |
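Hash-based change detection on both sides can be sketched like this. The `contentHash` field name comes from the table above; the surrounding JSON layout (and the use of SHA-256 rather than LightRAG's MD5) is an assumption for illustration.

```python
import hashlib
import json

def changed_documents(docs: dict[str, str], state_json: str) -> list[str]:
    """Return paths whose content hash differs from a
    .graph-state.json-style snapshot (layout is illustrative)."""
    state = json.loads(state_json) if state_json else {}
    changed = []
    for path, content in docs.items():
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        if state.get(path, {}).get("contentHash") != digest:
            changed.append(path)
    return changed
```

Unchanged documents are skipped entirely, which is what makes incremental sync cheap in both systems.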
## 4. Integration Opportunities

### 4.1 Where LightRAG Could Replace Lattice

**Scenario: Full Replacement**

Replace the Lattice CLI entirely with LightRAG for auto-extraction:
```
# Modified /graph-sync command
1. Initialize LightRAG with a FalkorDB backend (if supported) or Neo4j
2. For each changed document:
   - Read content
   - await lightrag.ainsert(content, file_path=path)
3. LightRAG handles extraction, merging, storage
4. Report results
```

**Pros:**
- No frontmatter needed
- Automatic incremental updates
- Richer query modes (hybrid, mix, etc.)
- Reranking support built-in
**Cons:**
- LLM required for every document (cost)
- No human review step
- Different graph schema (would need migration)
- FalkorDB not directly supported (would need Neo4j)
### 4.2 Hybrid Integration Options

**Option A: LightRAG for Extraction, Lattice for Storage**
```
Document → LightRAG extraction → JSON output → Lattice CLI ingest
```

```python
# Use LightRAG's extraction but write to a Lattice-compatible format
extraction = await lightrag.extract_entities(chunk)  # Hypothetical API
lattice_json = convert_to_lattice_format(extraction)
# Pipe to: lattice ingest --stdin
```

**Challenges:**
- LightRAG extraction is tightly coupled to its storage
- Would need to fork/modify LightRAG
**Option B: LightRAG Query Layer on Lattice Storage**

```
Lattice sync → FalkorDB ← LightRAG query adapter
```

Keep Lattice for extraction/sync, but use LightRAG's multi-mode query layer:
```python
# Custom LightRAG storage adapter for FalkorDB
class FalkorDBGraphStorage(BaseGraphStorage):
    def __init__(self, redis_client):
        self.client = redis_client

    async def get_node(self, entity_name: str):
        result = await self.client.graph.query(
            f"MATCH (e:Entity {{name: '{entity_name}'}}) RETURN e"
        )
        return convert_falkor_to_lightrag(result)
```

**Challenges:**
- LightRAG expects specific schema (entity_type, description, source_ids)
- Lattice schema is different (type as property, no source_ids per entity)
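The `convert_falkor_to_lightrag` helper that the adapter above leaves undefined could look roughly like this. The LightRAG-side field names follow the schema noted above (`entity_type`, `description`, `source_ids`); the exact shape of a Lattice node dict is an assumption.

```python
def convert_falkor_to_lightrag(node: dict) -> dict:
    """Map a Lattice/FalkorDB entity node onto the fields LightRAG's
    graph storage expects. The Lattice node shape is illustrative."""
    return {
        "entity_name": node["name"],
        "entity_type": node.get("type", "UNKNOWN"),
        "description": node.get("description", ""),
        # Lattice tracks provenance per document, not per chunk,
        # so the document path stands in for chunk-level source_ids
        "source_ids": [node["document"]] if "document" in node else [],
    }
```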
### 4.3 Recommended Integration: Extract Module Only

The cleanest integration would reuse LightRAG's extraction prompts and parsing logic without adopting the rest of its system:
**Current Lattice:**

```
Claude Code → Entity extraction (custom prompt) → YAML frontmatter
```

**Enhanced Lattice with LightRAG-style Extraction:**

```
Lattice CLI → LightRAG extraction prompts → JSON → FalkorDB
(use the same prompts/parsing)
```

**Benefits:**
- Battle-tested extraction prompts
- Gleaning loop for better recall
- Keep FalkorDB (proven for our scale)
- Keep human-initiated workflow
**Implementation:**
1. Port the `lightrag/prompt.py` entity-extraction prompts to Lattice
2. Port the `lightrag/operate.py` extraction-parsing logic
3. Use both in a new `lattice ingest` command (from the frontmatter-free proposal)
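Ported gleaning logic could look roughly like this sketch. `llm` is any callable that returns a list of entity dicts; the prompt strings are placeholders, not LightRAG's actual prompt text.

```python
def extract_with_gleaning(chunk: str, llm, max_gleans: int = 2) -> list[dict]:
    """Gleaning loop: re-prompt the LLM over the same chunk to pick up
    entities missed on earlier passes, deduplicating by lowercased name."""
    seen: dict[str, dict] = {}
    prompt = f"Extract entities from:\n{chunk}"
    for _ in range(1 + max_gleans):
        for entity in llm(prompt):
            seen.setdefault(entity["name"].lower(), entity)
        # Follow-up passes ask only for what was missed
        prompt = "Some entities were missed. Extract any remaining entities."
    return list(seen.values())
```

The extra passes trade additional LLM tokens for recall, which is the cost/quality knob the comparison tables above keep returning to.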
## 5. Feature-by-Feature Comparison

### 5.1 Extraction Quality
| Feature | LightRAG | Lattice |
|---|---|---|
| Entity types | Dynamic (LLM decides) | Fixed 8 types |
| Relationship types | Dynamic keywords | 2 types (REFERENCES, APPEARS_IN) |
| Gleaning | Yes (configurable retries) | No |
| Summarization | LLM map-reduce on merge | Title + summary only |
| Source tracking | Chunk IDs per entity | Document path only |
| Deduplication | Case-insensitive merge | Manual in YAML |
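The case-insensitive merge row can be illustrated with a short sketch. LightRAG additionally runs LLM summarization when merged descriptions grow past a threshold; this sketch just concatenates.

```python
def merge_entities(entities: list[dict]) -> list[dict]:
    """Collapse entities that share a name (case-insensitively),
    joining their descriptions. A simplified sketch of the merge step."""
    merged: dict[str, dict] = {}
    for e in entities:
        key = e["name"].lower()
        if key in merged:
            existing = merged[key]
            if e["description"] not in existing["description"]:
                existing["description"] += " | " + e["description"]
        else:
            merged[key] = dict(e)
    return list(merged.values())
```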
### 5.2 Query Capabilities
| Feature | LightRAG | Lattice |
|---|---|---|
| Semantic search | Yes (multiple VDBs) | Yes (FalkorDB native) |
| Keyword search | Via naive mode | Yes (Cypher CONTAINS) |
| Graph traversal | Entity→relation expansion | Cypher queries |
| Reranking | Yes (Cohere, Jina, Aliyun) | No |
| Streaming | Yes | No |
| Conversation history | Yes | No |
| Custom prompts | Yes (system_prompt param) | No |
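As an illustration of the keyword-search row, a query builder along these lines could generate the `CONTAINS` Cypher; the exact query `lattice query -m keyword` issues is an assumption.

```python
def keyword_cypher(term: str) -> str:
    """Build an illustrative case-insensitive CONTAINS query over
    entity names and descriptions (not Lattice's actual query)."""
    escaped = term.replace("'", "\\'")
    return (
        "MATCH (e:Entity) "
        f"WHERE toLower(e.name) CONTAINS toLower('{escaped}') "
        f"OR toLower(e.description) CONTAINS toLower('{escaped}') "
        "RETURN e.name, e.type"
    )
```

A real implementation would use query parameters rather than string interpolation to avoid Cypher injection.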
### 5.3 Deployment & Operations
| Aspect | LightRAG | Lattice |
|---|---|---|
| Deployment | Server (FastAPI/Gunicorn) or embedded | CLI only |
| Multi-tenancy | Workspace isolation | Single graph per deployment |
| Multi-process | Shared memory locks | Not applicable |
| Docker | Official images | DIY |
| Web UI | React app included | None |
| API | REST + Ollama-compatible | None |
### 5.4 Dependencies & Requirements

**LightRAG Requirements:**
- Python 3.10+
- LLM API (required for extraction)
- Embedding API (required)
- Storage backend(s)
- Optional: Reranker API
**Lattice Requirements:**
- Bun runtime
- FalkorDB (Redis module)
- Voyage AI API (for embeddings)
- Claude Code (for slash commands)
- No LLM API required for sync
## 6. Migration Considerations

### 6.1 Data Migration: Lattice → LightRAG

If migrating from Lattice to LightRAG:
1. **Export Lattice data:**

   ```shell
   lattice export --format json > lattice-export.json
   ```

2. **Transform the schema:**

   ```python
   # Lattice entity → LightRAG entity
   {
       "name": lattice_entity["name"],
       "type": lattice_entity["type"],
       "description": lattice_entity["description"],
       "source_ids": [doc_path],  # Convert the document path to a source
   }
   ```

3. **Import to LightRAG:**

   ```python
   for entity in lattice_entities:
       await lightrag.chunk_entity_relation_graph.upsert_node(entity)
       await lightrag.entities_vdb.upsert({entity["name"]: embedding})
   ```
### 6.2 Challenges
| Challenge | Mitigation |
|---|---|
| Relationship types differ | Map REFERENCES → generic relation |
| No chunk-level source_ids | Use document path as single source |
| FalkorDB not supported | Use Neo4j or PostgreSQL |
| Different embedding models | Re-embed all entities |
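The first mitigation (REFERENCES → generic relation) can be sketched as a field-level mapping. The LightRAG-side names beyond those already mentioned in this document (`src_id`, `tgt_id`, `keywords`) are assumptions, as is the shape of the Lattice edge dict.

```python
def map_reference_to_relation(edge: dict) -> dict:
    """Map a Lattice REFERENCES edge onto a LightRAG-style relation dict.
    Field names on both sides are illustrative."""
    verb = edge.get("type", "references")
    return {
        "src_id": edge["source"],
        "tgt_id": edge["target"],
        # Lattice stores the fine-grained verb as a property on REFERENCES;
        # LightRAG expects it as relation keywords
        "keywords": verb,
        "description": f"{edge['source']} {verb} {edge['target']}",
        "source_id": edge.get("document", ""),
    }
```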
## 7. Recommendations

### 7.1 When to Use LightRAG
- Large corpus ingestion: Auto-extraction saves time
- Multi-user RAG service: REST API + web UI included
- Complex query patterns: 6 modes cover most use cases
- Multiple storage backends: Flexibility to change databases
- Streaming responses: Real-time chat applications
### 7.2 When to Keep Lattice
- Research documentation: Human control over what gets extracted
- Claude Code integration: Slash commands are integral
- Cost sensitivity: No LLM required for sync operations
- Memory-constrained: FalkorDB optimized for small graphs
- Simple queries: Semantic + Cypher covers most needs
### 7.3 Hybrid Approach (Recommended)
For the research documentation use case, a hybrid approach works best:
- Keep Lattice for human-initiated extraction and FalkorDB storage
- Port LightRAG prompts to improve extraction quality
- Implement frontmatter-free proposal using JSON format
- Consider LightRAG query modes as future enhancement
This preserves the human-in-the-loop workflow while benefiting from LightRAG’s extraction research.
## 8. Related Documents
- Architecture - Current Lattice architecture
- Frontmatter-Free Proposal - Eliminating YAML intermediate step
- Graph Database Comparison - FalkorDB vs DuckDB vs SQLite