# README
Research on running large language models locally, with a focus on Apple Silicon (M3 Ultra) performance, open-source model rankings, and practical deployment strategies.
## Documents
| Document | Description |
|---|---|
| Best Open Source LLMs 2025 | Comprehensive ranking and performance benchmarks for top open-source models |
| Best Open Coding Models 2025 | Deep-dive into coding-specific models with benchmark comparisons |
| M3 Ultra Performance Benchmarks | MLX and llama.cpp performance data for Mac Studio M3 Ultra |
## Related Research
- WebGPU LLM Research - Browser-based LLM inference
- Claude Opus 4.5 - Proprietary model comparisons