A deep comparison of LightRAG and Lattice for knowledge-graph-based retrieval, covering databases, user experience, architecture, and integration opportunities.

Executive Summary

| Aspect | LightRAG | Lattice |
| --- | --- | --- |
| Primary use case | General RAG with auto-extraction | Research documentation with human control |
| Extraction trigger | Automatic on ingestion | Human-initiated via Claude Code |
| Database flexibility | 15+ backends (Neo4j, PostgreSQL, Milvus, etc.) | FalkorDB only |
| Query modes | 6 modes (local, global, hybrid, naive, mix, bypass) | Semantic search + Cypher |
| API surface | REST API + Python library | CLI + Claude Code commands |
| LLM requirements | Required (extraction + queries) | Optional (extraction only) |

Key Insight: LightRAG is a full-featured RAG system optimized for automated pipelines. Lattice is a lightweight CLI optimized for human-in-the-loop documentation workflows with Claude Code.


1. Database Layer Comparison

1.1 Storage Architecture

LightRAG: Pluggable Multi-Backend

LightRAG separates storage into four abstract types: key-value, vector, graph, and document-status storage. The diagram below shows the first three:

┌─────────────────────────────────────────────────────────────────┐
│ LightRAG Storage Layer │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ KV Storage │ │ Vector Storage │ │ Graph Storage │ │
│ │ (7 instances) │ │ (3 instances) │ │ (1 instance) │ │
│ └────────┬────────┘ └────────┬────────┘ └────────┬────────┘ │
│ │ │ │ │
│ ┌────────▼────────────────────▼────────────────────▼────────┐ │
│ │ Implementations: JSON, Redis, MongoDB, PostgreSQL │ │
│ │ Vector: NanoVectorDB, Faiss, Milvus, Qdrant, Chroma │ │
│ │ Graph: NetworkX, Neo4j, Memgraph, PostgreSQL (AGE) │ │
│ └───────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘

Lattice: FalkorDB-Native

┌─────────────────────────────────────────────────────────────────┐
│ Lattice Storage Layer │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────────────────────────────────────────────────────┐│
│ │ FalkorDB (Redis Module) ││
│ │ - Graph storage (GraphBLAS sparse matrices) ││
│ │ - Vector storage (built-in HNSW) ││
│ │ - Full-text search ││
│ └─────────────────────────────────────────────────────────────┘│
│ ┌─────────────────────────────────────────────────────────────┐│
│ │ Local File Tracking ││
│ │ - .graph-state.json (sync state) ││
│ │ - YAML frontmatter in markdown files ││
│ └─────────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────────┘

1.2 Database Support Matrix

| Backend | LightRAG | Lattice |
| --- | --- | --- |
| FalkorDB | No | Yes (primary) |
| Neo4j | Yes | No |
| PostgreSQL (pgvector + AGE) | Yes (all-in-one) | No |
| Milvus | Yes | No |
| Qdrant | Yes | No |
| ChromaDB | Yes | No |
| Redis | Yes (KV only) | Via FalkorDB |
| MongoDB | Yes (KV + Graph) | No |
| File-based (JSON/NetworkX) | Yes (default) | No |

1.3 Graph Model Differences

LightRAG Graph Model:

  • Nodes: Entities with name, type, description, source_ids
  • Edges: Relations with source, target, keywords, description, source_ids
  • Entity types: Extracted by LLM (typically Person, Organization, Location, Event, Concept)
  • Relationship types: Free-form keywords extracted from text

Lattice Graph Model:

  • Nodes: Entities with name, type, description (8 fixed types)
  • Edges: 2 relationship types only (REFERENCES, APPEARS_IN)
  • Entity types: Topic, Technology, Concept, Tool, Process, Person, Organization, Document
  • Design choice: Coarse types + properties to minimize FalkorDB memory

# LightRAG: Many relationship types (memory-expensive in FalkorDB)
(Alice)-[:MANAGES]->(Bob)
(Alice)-[:COLLABORATES_WITH]->(Carol)
(Alice)-[:REPORTS_TO]->(Dave)

# Lattice: Two types only (memory-efficient)
(Alice)-[:REFERENCES {type: "manages"}]->(Bob)
(Alice)-[:APPEARS_IN]->(document.md)
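The collapse from many relationship labels to one can be sketched as a small mapping function. This is a minimal illustration, not code from either project; `to_lattice_edge` is a hypothetical helper. FalkorDB allocates a GraphBLAS sparse matrix per relationship type, so keeping the label set fixed keeps the matrix count constant regardless of vocabulary:

```python
def to_lattice_edge(source: str, target: str, keyword: str) -> dict:
    """Map an arbitrary extracted relation onto a REFERENCES edge,
    preserving the original keyword as a property instead of a label."""
    return {
        "src": source,
        "dst": target,
        "label": "REFERENCES",  # fixed label keeps FalkorDB's matrix count low
        "properties": {"type": keyword.lower()},
    }

edges = [
    to_lattice_edge("Alice", "Bob", "MANAGES"),
    to_lattice_edge("Alice", "Carol", "COLLABORATES_WITH"),
]
print(edges[0]["label"], edges[0]["properties"]["type"])  # → REFERENCES manages
```

The trade-off: Cypher queries must filter on the `type` property (`WHERE r.type = 'manages'`) instead of matching a label, which trades some query ergonomics for memory.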

1.4 Vector Search Comparison

| Feature | LightRAG | Lattice |
| --- | --- | --- |
| Entity embeddings | Yes (separate VDB) | Yes (FalkorDB native) |
| Relation embeddings | Yes (separate VDB) | No |
| Chunk embeddings | Yes (separate VDB) | No (summary only) |
| Embedding dimensions | Configurable | 1024 (Voyage AI) |
| Embedding model | OpenAI, Ollama, Jina, etc. | Voyage AI only |
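Whatever the backend, both columns reduce to the same primitive: nearest-neighbor search by cosine similarity over embedding vectors. A pure-Python sketch of that computation (toy 4-dimensional vectors stand in for Lattice's 1024-dimensional Voyage AI embeddings):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product over the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

query = [1.0, 0.0, 0.0, 0.0]
entity_vecs = {
    "FalkorDB": [0.9, 0.1, 0.0, 0.0],
    "Neo4j": [0.0, 1.0, 0.0, 0.0],
}
best = max(entity_vecs, key=lambda name: cosine(query, entity_vecs[name]))
print(best)  # → FalkorDB
```

In production, both systems delegate this to an index (FalkorDB's built-in HNSW for Lattice; the configured vector database for LightRAG) rather than scanning linearly.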

2. User Experience Comparison

2.1 Workflow Patterns

LightRAG: Automatic Pipeline (Fire and Forget)

User adds documents  →  LightRAG auto-extracts  →  Graph populated
  (no review step)      (LLM runs extraction)      (ready for queries)
# LightRAG: Automatic extraction on insert
rag = LightRAG(working_dir="./storage", llm_model_func=gpt_4o_mini_complete)
await rag.initialize_storages()
await rag.ainsert("Your document content") # Entities extracted automatically
result = await rag.aquery("What entities exist?")

Lattice: Human-Initiated (Review Optional)

User creates document → User runs /entity-extract → User reviews YAML → User runs /graph-sync
  (document exists)       (Claude extracts)          (human can edit)    (graph populated)
# Lattice: Explicit commands at each step
/entity-extract docs/new-topic/research.md # Claude Code extracts entities
# Optional: Edit the YAML frontmatter
/graph-sync # Sync to FalkorDB
lattice query "What entities exist?"

2.2 Control vs Convenience Trade-off

| Aspect | LightRAG (Auto) | Lattice (Human-Initiated) |
| --- | --- | --- |
| Setup effort | More (LLM API required) | Less (CLI + FalkorDB) |
| Per-document effort | None | Command per batch |
| Quality control | Trust the LLM | Can review/edit |
| Cost per document | LLM tokens every time | LLM tokens on command |
| Bulk ingestion | Fast (parallel) | Slower (explicit commands) |
| Incremental updates | Automatic | Manual command |

2.3 Query Experience

LightRAG: Multi-Mode Retrieval

# 6 query modes for different patterns
await rag.aquery("Who is Alice?", param=QueryParam(mode="local")) # Entity-centric
await rag.aquery("How do they work together?", param=QueryParam(mode="global")) # Relation-centric
await rag.aquery("Complex question", param=QueryParam(mode="hybrid")) # Both
await rag.aquery("Simple search", param=QueryParam(mode="naive")) # Vector only
await rag.aquery("General question", param=QueryParam(mode="mix")) # KG + Vector
await rag.aquery("Just chat", param=QueryParam(mode="bypass")) # LLM only

Lattice: Semantic Search + Cypher

# Semantic search (default)
lattice query "How does FalkorDB handle memory?"
# Keyword search
lattice query -m keyword "FalkorDB memory"
# Raw Cypher
lattice query -m cypher "MATCH (e:Entity)-[:REFERENCES]->(t:Entity {type:'Technology'}) RETURN e.name, t.name"

3. Architecture Deep Dive

3.1 Core Components

LightRAG Architecture:

┌─────────────────────────────────────────────────────────────────┐
│ LightRAG │
├──────────────────────────┬──────────────────────────────────────┤
│ Client Layer │ API Layer (FastAPI) │
│ - Python library │ - /documents/* (CRUD) │
│ - REST API │ - /query/* (retrieval) │
│ - React Web UI │ - /graphs/* (exploration) │
│ - Ollama-compatible API │ - /v1/chat (Ollama compat) │
├──────────────────────────┴──────────────────────────────────────┤
│ Core Engine (LightRAG class) │
│ - Document ingestion pipeline │
│ - Entity/relation extraction (LLM-based) │
│ - Merge and summarization │
│ - Multi-mode query processing │
├─────────────────────────────────────────────────────────────────┤
│ Integration Layer │
│ - 10+ LLM providers (OpenAI, Ollama, Gemini, Bedrock, etc.) │
│ - 6 embedding services │
│ - 3 reranking services (Cohere, Jina, Aliyun) │
├─────────────────────────────────────────────────────────────────┤
│ Storage Layer (Pluggable Abstractions) │
│ - BaseKVStorage, BaseVectorStorage, BaseGraphStorage │
│ - 15+ backend implementations │
└─────────────────────────────────────────────────────────────────┘

Lattice Architecture:

┌─────────────────────────────────────────────────────────────────┐
│ Lattice │
├─────────────────────────────────────────────────────────────────┤
│ Claude Code Integration │
│ - /entity-extract (slash command) │
│ - /graph-sync (slash command) │
│ - /research (slash command) │
├─────────────────────────────────────────────────────────────────┤
│ CLI Layer (NestJS + nest-commander) │
│ - lattice sync │
│ - lattice query │
│ - lattice status │
│ - lattice validate │
├─────────────────────────────────────────────────────────────────┤
│ Sync Service │
│ - Frontmatter parsing (gray-matter) │
│ - Schema validation (Zod) │
│ - Content hash tracking │
│ - APPEARS_IN relationship generation │
├─────────────────────────────────────────────────────────────────┤
│ Storage Layer │
│ - FalkorDB graph + vector (ioredis) │
│ - Voyage AI embeddings │
│ - Local .graph-state.json │
└─────────────────────────────────────────────────────────────────┘

3.2 Entity Extraction Comparison

LightRAG Extraction Pipeline:

  1. Document → Chunking (1200 tokens, 100 overlap)
  2. Chunk → LLM extraction (parallel, with gleaning loop)
  3. Extraction result → JSON parsing
  4. Entities merged by name (case-insensitive)
  5. Relations merged by (source, target) pairs
  6. LLM summarization if descriptions exceed threshold
  7. Upsert to all storage backends
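Steps 4-6 of the pipeline above can be sketched as a single merge pass. This is an illustrative stand-in, not LightRAG's actual implementation; `merge_entities` is a hypothetical helper that mirrors the described behavior (case-insensitive keying, accumulated source IDs, concatenated descriptions pending later summarization):

```python
def merge_entities(extracted: list[dict]) -> dict[str, dict]:
    """Merge duplicate entity mentions by lowercased name."""
    merged: dict[str, dict] = {}
    for ent in extracted:
        key = ent["name"].lower()  # case-insensitive merge key
        if key not in merged:
            merged[key] = {**ent, "source_ids": list(ent["source_ids"])}
        else:
            m = merged[key]
            m["source_ids"].extend(ent["source_ids"])
            if ent["description"] not in m["description"]:
                # descriptions accumulate; LLM summarization kicks in past a threshold
                m["description"] += " | " + ent["description"]
    return merged

ents = [
    {"name": "FalkorDB", "type": "Technology", "description": "Graph DB",
     "source_ids": ["chunk-1"]},
    {"name": "falkordb", "type": "Technology", "description": "Redis module",
     "source_ids": ["chunk-2"]},
]
result = merge_entities(ents)
print(len(result), result["falkordb"]["source_ids"])  # → 1 ['chunk-1', 'chunk-2']
```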

Lattice Extraction Pipeline:

  1. User runs /entity-extract command
  2. Claude Code reads document content
  3. Claude extracts entities following schema
  4. Claude writes YAML frontmatter to file
  5. User optionally reviews/edits YAML
  6. User runs /graph-sync or lattice sync
  7. CLI parses frontmatter, validates schema
  8. CLI upserts to FalkorDB, generates embeddings

3.3 Document Tracking

| Feature | LightRAG | Lattice |
| --- | --- | --- |
| Status tracking | DocStatusStorage (PENDING/PROCESSING/PROCESSED/FAILED) | .graph-state.json (contentHash) |
| Duplicate detection | MD5 hash of content | Content hash comparison |
| Incremental updates | Automatic (doc_status check) | Manual (lattice status shows changed) |
| Rollback | Re-extract from source | Re-run /entity-extract |
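Both tracking mechanisms boil down to the same comparison: a stored hash per document path versus the hash of the file on disk. A minimal sketch of that check (`detect_changed` is hypothetical; `state` plays the role of `.graph-state.json`):

```python
def detect_changed(state: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return paths whose content hash differs from the last synced state
    (new files count as changed, since state has no entry for them)."""
    return [path for path, h in current.items() if state.get(path) != h]

state = {"docs/a.md": "hash-1", "docs/b.md": "hash-2"}     # last sync
current = {"docs/a.md": "hash-1", "docs/b.md": "hash-9",   # b edited
           "docs/c.md": "hash-3"}                          # c is new
print(detect_changed(state, current))  # → ['docs/b.md', 'docs/c.md']
```

The difference is only who runs it: LightRAG checks automatically on insert, Lattice when the user runs `lattice status` or `lattice sync`.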

4. Integration Opportunities

4.1 Where LightRAG Could Replace Lattice

Scenario: Full Replacement

Replace Lattice CLI entirely with LightRAG for auto-extraction:

# Modified /graph-sync command
1. Initialize LightRAG with FalkorDB backend (if supported) or Neo4j
2. For each changed document:
   - Read content
   - await lightrag.ainsert(content, file_path=path)
3. LightRAG handles extraction, merging, storage
4. Report results

Pros:

  • No frontmatter needed
  • Automatic incremental updates
  • Richer query modes (hybrid, mix, etc.)
  • Reranking support built-in

Cons:

  • LLM required for every document (cost)
  • No human review step
  • Different graph schema (would need migration)
  • FalkorDB not directly supported (would need Neo4j)

4.2 Hybrid Integration Options

Option A: LightRAG for Extraction, Lattice for Storage

Document → LightRAG extraction → JSON output → Lattice CLI ingest
# Use LightRAG's extraction but write to Lattice-compatible format
extraction = await lightrag.extract_entities(chunk) # Hypothetical
lattice_json = convert_to_lattice_format(extraction)
# Pipe to: lattice ingest --stdin

Challenges:

  • LightRAG extraction is tightly coupled to its storage
  • Would need to fork/modify LightRAG
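The `convert_to_lattice_format` step above could look roughly like this. Everything here is a sketch under assumed field names (neither project defines this function); the key moves are coercing free-form LLM entity types into Lattice's closed set of 8 and folding relation keywords into `REFERENCES` properties:

```python
LATTICE_TYPES = {"Topic", "Technology", "Concept", "Tool", "Process",
                 "Person", "Organization", "Document"}

def convert_to_lattice_format(extraction: dict) -> dict:
    """Fold a LightRAG-style extraction into Lattice's fixed schema."""
    entities = [
        {
            "name": e["name"],
            # coerce unknown LLM-chosen types into the closed set
            "type": e["type"] if e["type"] in LATTICE_TYPES else "Concept",
            "description": e["description"],
        }
        for e in extraction["entities"]
    ]
    edges = [
        {"src": r["source"], "dst": r["target"],
         "label": "REFERENCES", "type": ",".join(r["keywords"])}
        for r in extraction["relations"]
    ]
    return {"entities": entities, "relationships": edges}

sample = {
    "entities": [{"name": "HNSW", "type": "Algorithm", "description": "ANN index"}],
    "relations": [{"source": "FalkorDB", "target": "HNSW", "keywords": ["implements"]}],
}
out = convert_to_lattice_format(sample)
print(out["entities"][0]["type"], out["relationships"][0]["label"])  # → Concept REFERENCES
```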

Option B: LightRAG Query Layer on Lattice Storage

Lattice sync → FalkorDB ← LightRAG query adapter

Keep Lattice for extraction/sync, but use LightRAG’s multi-mode query:

# Custom LightRAG storage adapter for FalkorDB
class FalkorDBGraphStorage(BaseGraphStorage):
    def __init__(self, redis_client):
        self.client = redis_client

    async def get_node(self, entity_name: str):
        result = await self.client.graph.query(
            f"MATCH (e:Entity {{name: '{entity_name}'}}) RETURN e"
        )
        return convert_falkor_to_lightrag(result)

Challenges:

  • LightRAG expects specific schema (entity_type, description, source_ids)
  • Lattice schema is different (type as property, no source_ids per entity)
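The schema mismatch in the two bullets above is concrete enough to sketch. `convert_falkor_to_lightrag` is hypothetical, and the `<SEP>` separator for joined source IDs is an assumption about LightRAG's internal format; the real point is the field-by-field mapping:

```python
def convert_falkor_to_lightrag(node: dict, doc_paths: list[str]) -> dict:
    """Map a Lattice node (type stored as a property, sources tracked only
    at document level) onto the fields LightRAG expects."""
    return {
        "entity_name": node["name"],
        "entity_type": node["type"],           # property in Lattice → field in LightRAG
        "description": node.get("description", ""),
        "source_id": "<SEP>".join(doc_paths),  # document paths stand in for chunk IDs
    }

node = {"name": "FalkorDB", "type": "Technology", "description": "Redis graph module"}
mapped = convert_falkor_to_lightrag(node, ["docs/db.md"])
print(mapped["entity_type"], mapped["source_id"])  # → Technology docs/db.md
```

The lossy direction is `source_id`: LightRAG's chunk-level provenance cannot be reconstructed from Lattice's document-level tracking.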

Option C (Recommended): Port Extraction Prompts Only

The cleanest integration would use LightRAG's extraction prompts and parsing logic without its full system:

Current Lattice:

Claude Code → Entity extraction (custom prompt) → YAML frontmatter

Enhanced Lattice with LightRAG-style Extraction:

Lattice CLI → LightRAG extraction prompts → JSON → FalkorDB
(use same prompts/parsing)

Benefits:

  • Battle-tested extraction prompts
  • Gleaning loop for better recall
  • Keep FalkorDB (proven for our scale)
  • Keep human-initiated workflow

Implementation:

  1. Port lightrag/prompt.py entity_extraction prompts to Lattice
  2. Port lightrag/operate.py extraction parsing logic
  3. Use in new lattice ingest command (from frontmatter-free proposal)
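The gleaning loop mentioned in the benefits is the main quality win worth porting. A sketch of the control flow (the prompts and `extract_with_gleaning` are illustrative stand-ins, not LightRAG's actual code; a fake LLM stands in for the API client):

```python
def extract_with_gleaning(llm, chunk: str, max_gleaning: int = 2) -> list[dict]:
    """First extraction pass plus up to max_gleaning follow-up passes
    that ask the model for entities it missed."""
    entities = llm(f"Extract entities from:\n{chunk}")
    for _ in range(max_gleaning):
        more = llm("Some entities may have been missed. Extract any remaining ones.")
        if not more:  # model found nothing new: stop gleaning early
            break
        entities.extend(more)
    return entities

# Stand-in LLM: first pass finds two entities, one gleaning pass finds one more.
responses = iter([
    [{"name": "FalkorDB"}, {"name": "HNSW"}],
    [{"name": "Voyage AI"}],
    [],
])

def fake_llm(prompt: str) -> list[dict]:
    return next(responses)

result = extract_with_gleaning(fake_llm, "some chunk")
print(len(result))  # → 3
```

In the human-in-the-loop workflow, gleaning slots in before the YAML review step, so the reviewer sees the higher-recall result.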

5. Feature-by-Feature Comparison

5.1 Extraction Quality

| Feature | LightRAG | Lattice |
| --- | --- | --- |
| Entity types | Dynamic (LLM decides) | Fixed 8 types |
| Relationship types | Dynamic keywords | 2 types (REFERENCES, APPEARS_IN) |
| Gleaning | Yes (configurable retries) | No |
| Summarization | LLM map-reduce on merge | Title + summary only |
| Source tracking | Chunk IDs per entity | Document path only |
| Deduplication | Case-insensitive merge | Manual in YAML |

5.2 Query Capabilities

| Feature | LightRAG | Lattice |
| --- | --- | --- |
| Semantic search | Yes (multiple VDBs) | Yes (FalkorDB native) |
| Keyword search | Via naive mode | Yes (Cypher CONTAINS) |
| Graph traversal | Entity→relation expansion | Cypher queries |
| Reranking | Yes (Cohere, Jina, Aliyun) | No |
| Streaming | Yes | No |
| Conversation history | Yes | No |
| Custom prompts | Yes (system_prompt param) | No |

5.3 Deployment & Operations

| Aspect | LightRAG | Lattice |
| --- | --- | --- |
| Deployment | Server (FastAPI/Gunicorn) or embedded | CLI only |
| Multi-tenancy | Workspace isolation | Single graph per deployment |
| Multi-process | Shared memory locks | Not applicable |
| Docker | Official images | DIY |
| Web UI | React app included | None |
| API | REST + Ollama-compatible | None |

5.4 Dependencies & Requirements

LightRAG Requirements:

  • Python 3.10+
  • LLM API (required for extraction)
  • Embedding API (required)
  • Storage backend(s)
  • Optional: Reranker API

Lattice Requirements:

  • Bun runtime
  • FalkorDB (Redis module)
  • Voyage AI API (for embeddings)
  • Claude Code (for slash commands)
  • No LLM API required for sync

6. Migration Considerations

6.1 Data Migration: Lattice → LightRAG

If migrating from Lattice to LightRAG:

  1. Export Lattice data:

    lattice export --format json > lattice-export.json
  2. Transform schema:

    # Lattice entity → LightRAG entity
    {
        "name": lattice_entity["name"],
        "type": lattice_entity["type"],
        "description": lattice_entity["description"],
        "source_ids": [doc_path],  # Convert document path to source
    }
  3. Import to LightRAG:

    for entity in lattice_entities:
        await lightrag.chunk_entity_relation_graph.upsert_node(entity)
        await lightrag.entities_vdb.upsert({entity["name"]: embedding})

6.2 Challenges

| Challenge | Mitigation |
| --- | --- |
| Relationship types differ | Map REFERENCES → generic relation |
| No chunk-level source_ids | Use document path as single source |
| FalkorDB not supported | Use Neo4j or PostgreSQL |
| Different embedding models | Re-embed all entities |
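The transform and mitigation steps above can be combined into one migration function. This is a sketch under assumed export field names (the exact `lattice export` JSON format is not documented here); `migrate` is hypothetical:

```python
def migrate(lattice_export: dict) -> tuple[list[dict], list[dict]]:
    """Map a Lattice export onto LightRAG-shaped entities and relations."""
    entities = [
        {
            "entity_name": e["name"],
            "entity_type": e["type"],
            "description": e["description"],
            "source_id": e["document"],  # document path as single source
        }
        for e in lattice_export["entities"]
    ]
    relations = [
        {
            "src_id": r["src"],
            "tgt_id": r["dst"],
            "keywords": r.get("type", "related"),  # REFERENCES → generic relation
            "description": "",
        }
        for r in lattice_export["relationships"]
        if r["label"] == "REFERENCES"  # APPEARS_IN is already covered by source_id
    ]
    return entities, relations

export = {
    "entities": [{"name": "FalkorDB", "type": "Technology",
                  "description": "Graph DB", "document": "docs/db.md"}],
    "relationships": [{"src": "Lattice", "dst": "FalkorDB",
                       "label": "REFERENCES", "type": "uses"}],
}
ents, rels = migrate(export)
print(len(ents), rels[0]["keywords"])  # → 1 uses
```

Re-embedding (the last table row) would then run over `ents` with whatever embedding function the LightRAG instance is configured with.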

7. Recommendations

7.1 When to Use LightRAG

  • Large corpus ingestion: Auto-extraction saves time
  • Multi-user RAG service: REST API + web UI included
  • Complex query patterns: 6 modes cover most use cases
  • Multiple storage backends: Flexibility to change databases
  • Streaming responses: Real-time chat applications

7.2 When to Keep Lattice

  • Research documentation: Human control over what gets extracted
  • Claude Code integration: Slash commands are integral
  • Cost sensitivity: No LLM required for sync operations
  • Memory-constrained: FalkorDB optimized for small graphs
  • Simple queries: Semantic + Cypher covers most needs

For the research documentation use case, a hybrid approach works best:

  1. Keep Lattice for human-initiated extraction and FalkorDB storage
  2. Port LightRAG prompts to improve extraction quality
  3. Implement frontmatter-free proposal using JSON format
  4. Consider LightRAG query modes as future enhancement

This preserves the human-in-the-loop workflow while benefiting from LightRAG’s extraction research.

