Troubleshooting

This guide covers common issues you might encounter with Thought Ledger and their solutions.

Problem: Ollama installation script fails or doesn’t complete.

Solutions:

Terminal window
# Try manual installation
# macOS
brew install ollama
# Linux
curl -L https://ollama.com/download/ollama-linux-amd64 -o ollama
chmod +x ollama
sudo mv ollama /usr/local/bin/
# Start Ollama service
ollama serve

Check system requirements (one way to verify these is sketched below):

  • Ensure you have admin/sudo privileges
  • Check that port 11434 is available
  • Verify sufficient disk space (5GB+)
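
A minimal way to verify these, assuming a Linux host (use the macOS equivalents where they differ):

Terminal window
# Confirm you have working sudo access
sudo -v
# Confirm nothing is already listening on port 11434 (no output = port is free)
lsof -i :11434
# Confirm you have at least ~5GB of free disk space
df -h /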

Problem: Thought Ledger requires Node.js 18+ but you have an older version.

Solution:

Terminal window
# Using nvm (recommended)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
nvm install 18
nvm use 18
# Verify installation
node --version # Should show v18.x.x
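
If new terminal sessions still pick up the old Node version, you can make 18 the nvm default:

Terminal window
# Use Node 18 by default in future shells
nvm alias default 18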

Problem: npm install fails with errors.

Solutions:

Terminal window
# Clear npm cache
npm cache clean --force
# Delete node_modules and package-lock.json
rm -rf node_modules package-lock.json
# Reinstall dependencies
npm install
# If still failing, try with legacy peer deps
npm install --legacy-peer-deps
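
If the failure message is unclear, npm keeps a full log of each run (by default under ~/.npm/_logs); inspecting the latest one often shows which package failed to build:

Terminal window
# Show the most recent npm log (path may differ if you changed npm's cache directory)
tail -n 50 "$(ls -t ~/.npm/_logs/*.log | head -1)"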

Problem: Application fails to start or shows errors.

Diagnostics:

Terminal window
# Check if Ollama is running
ollama list
# Check available ports
netstat -an | grep 3000
netstat -an | grep 11434
# Check system resources
free -h # Memory
df -h # Disk space

Common Solutions:

Terminal window
# Kill existing processes
pkill -f thought-ledger
pkill -f ollama
# Restart Ollama
ollama serve
# Start Thought Ledger
npm start

Problem: AI assistance is slow or doesn’t respond.

Diagnostics:

Terminal window
# Check Ollama status
ollama ps
# Test model directly
ollama run llama3.2:3b "Hello, test message"
# Check model is downloaded
ollama list

Solutions:

Terminal window
# Redownload model if corrupted
ollama pull llama3.2:3b
# Try a smaller model if memory is low
ollama pull llama3.2:1b
# Restart Ollama service
pkill -f ollama && ollama serve

Problem: Application crashes or becomes unresponsive because it runs out of memory.

Solutions:

Terminal window
# Check memory usage
free -h
ps aux --sort=-%mem | head
# Use smaller AI model
ollama pull llama3.2:1b
# Switch the configuration to the smaller model (note: this overwrites any existing config.json)
echo '{"ai":{"model":"llama3.2:1b"}}' > config.json
# Restart with increased Node.js memory
NODE_OPTIONS="--max-old-space-size=4096" npm start

Problem: SQLite database fails to create or open.

Diagnostics:

Terminal window
# Check data directory permissions
ls -la data/
ls -la data/decisions.db
# Test database creation
sqlite3 data/test.db "CREATE TABLE test (id INTEGER);"

Solutions:

Terminal window
# Create data directory if missing
mkdir -p data
# Fix permissions
chmod 755 data/
# Remove corrupted database and restart
rm data/decisions.db
npm start

Problem: Search functionality returns no results or errors.

Diagnostics:

Terminal window
# Check database content
sqlite3 data/decisions.db "SELECT COUNT(*) FROM decisions;"
# Check search index
sqlite3 data/decisions.db ".schema"

Solutions:

Terminal window
# Rebuild search index
npm run rebuild-index
# Clear cache and restart
rm -rf .cache/
npm start

Problem: AI assistance takes too long to respond.

Solutions:

Terminal window
# Check system resources
htop # CPU usage
free -h # Memory usage
# Optimize for your hardware:
# 8GB RAM or less:
ollama pull llama3.2:1b
# 16GB RAM:
ollama pull llama3.2:3b
# 32GB+ RAM:
ollama pull llama3.1:8b

Problem: Web interface becomes slow or unresponsive.

Diagnostics:

Terminal window
# Check browser console for errors
# Open developer tools (F12) and check Console tab
# Check network requests
# Network tab in developer tools
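
To tell whether the slowdown is in the browser or the backend, you can time a request to the app directly (assuming the default port 3000 used elsewhere in this guide):

Terminal window
# Measure total response time for the main page
curl -s -o /dev/null -w "HTTP %{http_code} in %{time_total}s\n" http://localhost:3000/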

Solutions:

Terminal window
# Clear browser cache and cookies
# Try a different browser
# Restart application
pkill -f thought-ledger
npm start

Problem: Configuration changes don’t persist after restart.

Solutions:

Terminal window
# Check config file permissions
ls -la config.json
# Ensure writable permissions
chmod 644 config.json
# Verify config format
cat config.json | python3 -m json.tool

Problem: Can’t switch between AI models.

Diagnostics:

Terminal window
# Check available models
ollama list
# Check current configuration
cat config.json | grep model

Solutions:

Terminal window
# Update configuration manually
nano config.json
# Example configuration:
{
  "ai": {
    "model": "llama3.2:3b",
    "ollama_url": "http://localhost:11434"
  }
}
# Restart application
npm start

Problem: Ports 3000 or 11434 are already in use.

Diagnostics:

Terminal window
# Check what's using the ports
lsof -i :3000
lsof -i :11434

Solutions:

Terminal window
# Kill conflicting processes
sudo kill -9 <PID>
# Or use different ports
PORT=3001 npm start
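
If you move Ollama itself to a different port, it honors the OLLAMA_HOST environment variable; remember to update ollama_url in config.json to match (see the configuration example above):

Terminal window
# Run Ollama on port 11435 instead of the default 11434
OLLAMA_HOST=127.0.0.1:11435 ollama serve
# Then set "ollama_url": "http://localhost:11435" in config.json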

Problem: Thought Ledger can’t connect to Ollama service.

Diagnostics:

Terminal window
# Test Ollama directly
curl http://localhost:11434/api/tags
# Check if Ollama is running
ps aux | grep ollama

Solutions:

Terminal window
# Restart Ollama
pkill -f ollama
ollama serve
# Check firewall settings
# Ensure port 11434 is not blocked
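
How you inspect the firewall depends on your system; on many Linux distributions one of the following will show whether anything is blocking port 11434:

Terminal window
# Ubuntu/Debian with ufw
sudo ufw status
# Fedora/RHEL with firewalld
sudo firewall-cmd --list-all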

Problem: Decision history disappeared or corrupted.

Solutions:

Terminal window
# Check for database backups
ls -la data/*.db.*
ls -la backups/
# Restore from backup if available
cp backups/decisions.db.backup data/decisions.db
# Export what you can recover
npm run export-data

Problem: Database file is corrupted and won’t open.

Solutions:

Terminal window
# Try database repair
sqlite3 data/decisions.db ".recover" | sqlite3 data/decisions_repaired.db
# If repair fails, start fresh (data loss warning!)
mv data/decisions.db data/decisions.db.corrupted
npm start
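
Before falling back to a fresh database, it can help to ask SQLite how bad the damage is; if the check prints "ok", the file itself is intact and the problem likely lies elsewhere:

Terminal window
# Check database integrity (prints "ok" for a healthy file)
sqlite3 data/decisions.db "PRAGMA integrity_check;"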

Enable detailed logging for troubleshooting:

Terminal window
# Start with debug logging
DEBUG=thought-ledger:* npm start
# Check log files
tail -f logs/app.log
tail -f logs/error.log

Generate a system report for support:

Terminal window
# Create system report
npm run system-report
# This creates:
# - Hardware information
# - Software versions
# - Configuration details
# - Recent log entries

If you continue to experience issues:

  1. Generate system report: npm run system-report
  2. Collect error logs: Check logs/error.log
  3. Describe the issue: Include steps to reproduce
  4. Contact: support@thought-ledger.com

Still having issues? Our community is active and ready to help. Don’t hesitate to reach out!