nvme-cleanup-strategies
Purpose
Comprehensive guide for safely cleaning up disk space on Linux systems with NVMe SSDs, targeting developer environments with Docker, Node.js/pnpm, Python tools, and application caches.
Key Findings
Common Storage Consumers in Developer Environments
- Docker artifacts - Images, build cache, and volumes can consume 100GB+
- Language package managers - uv (Python), pnpm/npm (Node.js) maintain global caches
- Application caches - JetBrains, Chrome, Whisper AI models, etc.
- Trash directory - Often forgotten, can accumulate 50GB+
- Project dependencies - node_modules directories across projects
Safe Cleanup Priority Levels
High Priority (Safe, High Impact):
- Empty trash
- Docker build cache and unused images
- Package manager caches (uv, pnpm, npm)
- Application caches (browser, IDE)
- Old downloads
Medium Priority (Requires Review):
- Docker volumes
- AI model caches (if not actively used)
- Project build artifacts
- Large unused projects
Low Priority (Risky, Manual Review):
- System logs (journalctl)
- Project source code
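Before deleting anything, it helps to measure each candidate so you can rank the targets by payoff. A minimal sketch — the helper name and the default path list are illustrative, not from any tool:

```shell
# report_sizes: print the on-disk size of each existing directory argument,
# and flag the ones that do not exist, so cleanup targets can be ranked.
report_sizes() {
    for dir in "$@"; do
        if [ -d "$dir" ]; then
            du -sh -- "$dir"
        else
            printf 'missing\t%s\n' "$dir"
        fi
    done
}

# Example: the high-priority targets from the list above.
report_sizes "$HOME/.local/share/Trash" "$HOME/.cache/uv" \
    "$HOME/.cache/pnpm" "$HOME/.cache/npm" "$HOME/Downloads"
```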
Cleanup Commands & Strategies
1. Empty Trash (Immediate ~61GB Recovery)
```bash
# Check trash size first
du -sh ~/.local/share/Trash

# Empty trash
rm -rf ~/.local/share/Trash/*
```
Impact: Can reclaim 50-100GB+ depending on accumulated deletions.
2. Docker Cleanup (~170GB Potential)
```bash
# Show Docker disk usage
docker system df

# Remove unused containers, networks, images, and build cache
docker system prune -a --volumes

# More targeted cleanup:
docker image prune -a    # Remove unused images only
docker builder prune -a  # Remove build cache only
docker volume prune      # Remove unused volumes
```
Impact:
- Build cache: ~78GB
- Unused images: ~86GB
- Volumes: Variable, check before removing
Important: Ensure Docker is running before executing cleanup commands.
3. Python uv Cache Cleanup (~65GB)
```bash
# Check uv cache size
du -sh ~/.cache/uv

# Clean uv cache (safe - will re-download as needed)
uv cache clean

# Or manually remove
rm -rf ~/.cache/uv
```
Impact: ~65GB immediate recovery. Cache rebuilds automatically when needed.
4. pnpm Package Manager Cleanup (~8GB Store)
```bash
# Prune unreferenced packages
pnpm store prune

# Check store size
du -sh ~/.local/share/pnpm/store

# Check cache
du -sh ~/.cache/pnpm
rm -rf ~/.cache/pnpm  # Safe to remove
```
Impact:
- Store pruning: Variable (removes orphaned packages)
- Cache: ~236MB+
Important: Unlike npm, pnpm links packages into projects from a shared content-addressable store. Never delete the entire store directory manually — use pnpm store prune instead — as wholesale deletion can break existing node_modules across all projects.
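The breakage is easy to demonstrate with plain symlinks. This is a toy model using temporary directories — a real pnpm layout uses a content-addressable store and links under node_modules/.pnpm, but the failure mode is the same:

```shell
# Toy model: a "store" file and a project that links to it.
store=$(mktemp -d)   # stands in for the pnpm store
proj=$(mktemp -d)    # stands in for a project's node_modules
echo 'module.exports = 1' > "$store/pkg.js"
ln -s "$store/pkg.js" "$proj/pkg.js"

cat "$proj/pkg.js"                 # resolves through the link: works

rm -rf "$store"                    # simulates deleting the whole store
cat "$proj/pkg.js" 2>/dev/null || echo 'dangling symlink - project broken'
```

The second cat fails because the link target is gone — exactly what happens to every project's node_modules if the store is removed without pruning.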
5. npm Cache Cleanup
```bash
# Verify and clean npm cache
npm cache verify
npm cache clean --force
```
Impact: Variable, typically a few GB.
6. Browser & Application Caches
```bash
# Chrome cache (~5GB)
rm -rf ~/.cache/google-chrome

# GitHub Copilot cache (~6GB)
rm -rf ~/.cache/github-copilot

# Whisper AI models (~5GB - only if not used)
rm -rf ~/.cache/whisper

# HuggingFace models (~3GB - only if not used)
rm -rf ~/.cache/huggingface

# JetBrains cache (~16GB)
rm -rf ~/.cache/JetBrains
```
Impact: 10-30GB total. Caches rebuild automatically when needed.
Warning: Only remove AI model caches (whisper, huggingface) if you’re not actively using them, as redownloading can be time-consuming.
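A more conservative middle ground than blanket rm -rf is to delete a cache directory only when nothing inside it has been touched recently. A sketch — the helper name and 30-day threshold are this document's invention, not part of any tool (uses GNU find's -quit):

```shell
# prune_stale_cache DIR DAYS: remove DIR only if no file inside it was
# modified within the last DAYS days; otherwise leave it alone.
prune_stale_cache() {
    dir=$1
    days=$2
    [ -d "$dir" ] || return 0
    if [ -n "$(find "$dir" -mtime -"$days" -print -quit)" ]; then
        echo "kept $dir (used within ${days} days)"
    else
        rm -rf -- "$dir"
        echo "removed $dir"
    fi
}

# Example (hypothetical policy): only drop AI model caches idle for a month.
prune_stale_cache "$HOME/.cache/whisper" 30
prune_stale_cache "$HOME/.cache/huggingface" 30
```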
7. Playwright & Puppeteer Browsers
```bash
# Playwright browsers (~2GB)
rm -rf ~/.cache/ms-playwright

# Puppeteer browsers (~1.1GB)
rm -rf ~/.cache/puppeteer
```
Impact: ~3GB. Browsers reinstall when needed.
8. System Journal Logs
```bash
# Check journal size
journalctl --disk-usage

# Keep only last 3 days
sudo journalctl --vacuum-time=3d

# Or limit to size (e.g., 500MB)
sudo journalctl --vacuum-size=500M
```
Impact: Variable, typically 1-5GB. Safe to reduce.
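To make the size cap persistent rather than a one-off vacuum, the same limit can be set in journald's configuration (assumes systemd-journald; 500M mirrors the example above):

```ini
# /etc/systemd/journald.conf
# Apply with: sudo systemctl restart systemd-journald
[Journal]
SystemMaxUse=500M
```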
9. Downloads Folder Review
```bash
# Check downloads size
du -sh ~/Downloads

# Review and delete manually
ls -lhS ~/Downloads | head -20  # Show largest files
```
Impact: Variable. Manual review recommended.
10. Find Large node_modules Directories
```bash
# Find all node_modules directories and their sizes
find ~ -name "node_modules" -type d -prune -exec du -sh {} \; 2>/dev/null | sort -hr

# Remove specific project's node_modules (can reinstall with pnpm install)
rm -rf /path/to/project/node_modules
```
Impact: Variable. Each node_modules can be 100MB-1GB+. Safe to remove as they can be reinstalled.
Recommended Cleanup Workflow
Quick Win (15 minutes, ~150GB+)
```bash
# 1. Empty trash
rm -rf ~/.local/share/Trash/*

# 2. Clean Docker
docker system prune -a --volumes

# 3. Clean Python uv cache
uv cache clean

# 4. Clean pnpm
pnpm store prune

# 5. Clean npm
npm cache clean --force

# 6. Clean browser cache
rm -rf ~/.cache/google-chrome

# 7. Trim system logs
sudo journalctl --vacuum-time=7d
```
Deep Clean (1 hour, additional ~50GB)
- Review and clean AI model caches (whisper, huggingface)
- Clean JetBrains cache (rebuilds on next launch)
- Find and remove old project node_modules
- Review Downloads folder
- Audit large projects for old branches/build artifacts
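When scripting either workflow, a dry-run switch makes it safe to review the plan before anything is deleted. A minimal sketch — the run helper and the DRY_RUN variable are this document's convention, not a feature of Docker, uv, or pnpm:

```shell
# run CMD...: execute CMD, or only print it when DRY_RUN=1.
run() {
    if [ "${DRY_RUN:-0}" = 1 ]; then
        printf 'would run: %s\n' "$*"
    else
        "$@"
    fi
}

DRY_RUN=1  # flip to 0 once the printed plan looks right
run docker system prune -a --volumes
run uv cache clean
run pnpm store prune
run npm cache clean --force
```

With DRY_RUN=1 the script only prints each command, so it can be rehearsed on any machine.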
NVMe SSD Specific Considerations
TRIM Support
Ensure TRIM is enabled for optimal SSD performance:
```bash
# Check if TRIM is enabled
sudo systemctl status fstrim.timer

# Enable weekly TRIM
sudo systemctl enable fstrim.timer
sudo systemctl start fstrim.timer
```
Secure Erase (Data Sanitization)
For complete drive wipe only (backup data first!):
```bash
# Check drive capabilities
nvme id-ctrl /dev/nvme0 -H | grep -E 'Format |Crypto Erase|Sanitize'

# Use nvme-sanitize (most thorough, NVMe 1.3+)
sudo nvme sanitize /dev/nvme0 -a 0x02

# Or nvme-format (faster, less thorough)
sudo nvme format /dev/nvme0 -s 1
```
Important Warnings:
- Never use on USB-connected drives (can brick device)
- Avoid “Overwrite” action on SSDs (reduces endurance)
- Self-Encrypting Drives (OPAL): Crypto erase changes encryption key, making all data unrecoverable
- Always backup before any secure erase operation
Tools
- nvme-cli: NVMe-specific utilities for format and sanitize operations
- hdparm: Legacy tool for SATA SSDs (not for NVMe)
- Parted Magic: Bootable secure erase utility ($15, comprehensive GUI)
- Vendor tools: Samsung Magician, Dell BIOS Data Wipe, etc.
Monitoring & Prevention
Regular Maintenance
```bash
# Weekly: Check disk usage
df -h /

# Monthly: Review largest directories
du -h --max-depth=1 ~ | sort -hr | head -20

# Quarterly: Deep clean caches and Docker
docker system prune -a --volumes
pnpm store prune
```
Automated Cleanup
Consider adding to crontab or systemd timer:
```bash
#!/bin/bash
docker system prune -f
pnpm store prune
npm cache clean --force
journalctl --vacuum-time=7d
rm -rf ~/.cache/google-chrome
```
Storage Monitoring Tools
```bash
# Install ncdu for interactive disk usage
sudo apt install ncdu

# Use to explore large directories
ncdu ~
ncdu /
```
Package Manager Cache Comparison
| Tool | Cache Location | Safe to Delete | Prune Command |
|---|---|---|---|
| uv (Python) | ~/.cache/uv | Yes | uv cache clean |
| pnpm | ~/.local/share/pnpm/store | Prune only | pnpm store prune |
| npm | ~/.cache/npm | Yes | npm cache clean --force |
| pip | ~/.cache/pip | Yes | pip cache purge |
Key Difference: pnpm uses symlinks, so never delete the entire store directory manually. Always use pnpm store prune to safely remove orphaned packages.
Sources
- Solid state drive/Memory cell clearing - ArchWiki
- How to Securely Recycle or Dispose of Your SSD – Linux Hint
- Securely wiping NVMe SSD drives using Linux · GitHub
- Secure Erase on Linux - Safely Wipe Drives for Data Security
- How to clear/clean .pnpm-store cache · pnpm Discussion
- How to Clear the npm Cache on Linux, macOS, and Windows
- pnpm cache delete | pnpm
- How to clean npm junk from disk | dTech
- Dockerfile good practices for Node and NPM