
Installing

This guide will walk you through installing Thought Ledger and its dependencies on your local machine.

Ollama is the local AI runtime that powers Thought Ledger. It allows you to run AI models directly on your machine without API keys or cloud dependencies.

On macOS:

```shell
# Download and install Ollama
curl -fsSL https://ollama.com/install.sh | sh
```

Or download the macOS installer directly from ollama.com.

On Linux:

```shell
# Download and install Ollama
curl -fsSL https://ollama.com/install.sh | sh
```

If curl is not already installed, add it with your distribution's package manager first:

```shell
# Ubuntu/Debian
sudo apt update && sudo apt install -y curl

# Fedora
sudo dnf install -y curl

# Arch Linux
sudo pacman -S curl
```

On Windows, install via WSL2:

```shell
# In WSL2 terminal
curl -fsSL https://ollama.com/install.sh | sh
```

Verify the installation:

```shell
# Check Ollama is installed
ollama --version

# Start the Ollama service if it is not already running (Linux with systemd)
sudo systemctl start ollama
```
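Beyond the version check, you can probe the HTTP API that Ollama exposes on port 11434 (the `/api/tags` endpoint lists downloaded models). A minimal sketch, assuming curl is available, that degrades gracefully when the service is not up:

```shell
# Sketch: probe the local Ollama API on its default port (11434).
if curl -fsS http://localhost:11434/api/tags > /dev/null 2>&1; then
  echo "Ollama API is responding"
else
  echo "Ollama API not reachable on localhost:11434; is the service running?"
fi
```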

Thought Ledger requires Node.js 18 or higher.

```shell
# Install nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash

# Reload your shell
source ~/.bashrc

# Install and use Node.js 18
nvm install 18
nvm use 18
```

Download from nodejs.org or use your system package manager:

```shell
# macOS (Homebrew)
brew install node@18

# Ubuntu/Debian
sudo apt install nodejs npm

# Fedora
sudo dnf install nodejs npm
```

Note that some distribution repositories ship Node.js versions older than 18; check the installed version afterwards.

```shell
# Check Node.js version
node --version   # should be v18 or higher

# Check npm version
npm --version
```
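The v18+ requirement can also be checked in a script. A minimal sketch in plain shell, where the hard-coded version string stands in for the output of `node --version`:

```shell
# Sketch: check that a Node.js version string meets the v18+ requirement.
# In a real script you would use: version="$(node --version)"
required_major=18
version="v18.19.0"   # hypothetical example value

major="${version#v}"   # strip the leading "v"
major="${major%%.*}"   # keep only the major component

if [ "$major" -ge "$required_major" ]; then
  echo "Node.js $version is new enough"
else
  echo "Node.js $version is too old; need v${required_major}+"
fi
```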

Now install Thought Ledger itself:

```shell
# Clone the repository
git clone https://github.com/your-org/thought-ledger.git
cd thought-ledger

# Install dependencies
npm install

# Build the application
npm run build

# Start the development server
npm run dev
```
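After the build step, a quick existence check on the build output can catch silent failures. A sketch — the `dist` directory name here is an assumption; check the repository's build configuration for the actual output path:

```shell
# Sketch: confirm a build output directory exists after `npm run build`.
# "dist" is an assumed output path; adjust to match the project's build config.
check_build() {
  if [ -d "$1" ]; then
    echo "build output found in $1"
  else
    echo "missing $1; did the build succeed?"
  fi
}

check_build dist
```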

Thought Ledger needs local AI models to function. We recommend starting with:

```shell
# Download a lightweight model (good for testing)
ollama pull llama3.2:3b

# Download a more capable model (recommended)
ollama pull llama3.2:8b

# Download a coding-focused model (optional)
ollama pull codegemma:7b
```

| Model | Size | Use Case | RAM Required |
| --- | --- | --- | --- |
| llama3.2:3b | 2GB | Basic decisions, testing | 4GB+ |
| llama3.2:8b | 5GB | General use, recommended | 8GB+ |
| codegemma:7b | 5GB | Code-related decisions | 8GB+ |
| llama3.1:70b | 40GB | Complex decisions | 64GB+ |
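The RAM guidelines in the table can be encoded in a small helper for scripting. A sketch — the `ram_needed` function is hypothetical, with values copied from the table:

```shell
# Sketch: look up the rough RAM guideline for a model (values from the table above).
ram_needed() {
  case "$1" in
    llama3.2:3b)  echo "4GB+"  ;;
    llama3.2:8b)  echo "8GB+"  ;;
    codegemma:7b) echo "8GB+"  ;;
    llama3.1:70b) echo "64GB+" ;;
    *)            echo "unknown" ;;
  esac
}

ram_needed llama3.2:8b   # prints 8GB+
```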

Create your configuration file:

```shell
# Copy the example configuration
cp config.example.json config.json

# Edit the configuration
nano config.json
```

Example configuration:

```json
{
  "ai": {
    "model": "llama3.2:8b",
    "ollama_url": "http://localhost:11434"
  },
  "memory": {
    "storage_path": "./data/decisions.db",
    "max_decisions": 10000
  },
  "ui": {
    "theme": "auto",
    "language": "en"
  }
}
```
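Before starting the app, it can help to confirm the file parses as valid JSON. A sketch using Python's standard `json.tool` module (assumes python3 is on your PATH; it writes a throwaway copy under /tmp so it is runnable as-is — in practice, point the check at your real config.json):

```shell
# Sketch: verify a config file is well-formed JSON before launching.
# Writes an example config to /tmp so the check runs standalone.
cat > /tmp/tl-config.json <<'EOF'
{
  "ai": { "model": "llama3.2:8b", "ollama_url": "http://localhost:11434" }
}
EOF

if python3 -m json.tool /tmp/tl-config.json > /dev/null 2>&1; then
  echo "config OK"
else
  echo "config is not valid JSON"
fi
```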

```shell
# Start the application
npm start

# Or run in development mode
npm run dev
```

Open your browser and navigate to http://localhost:3000 to access Thought Ledger.

Test your installation:

  1. Check Ollama: ollama list should show downloaded models
  2. Check Thought Ledger: The web interface should load at localhost:3000
  3. Test AI: Try a simple decision in the interface
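
The checklist above can be wrapped in a small script. A sketch that only tests whether each command is on the PATH — it does not verify that the services are actually responding:

```shell
# Sketch: check that the tools from this guide are installed and on the PATH.
check_tool() {
  if command -v "$1" > /dev/null 2>&1; then
    echo "ok: $1 found"
  else
    echo "missing: $1 (see the installation steps above)"
  fi
}

check_tool ollama
check_tool node
check_tool npm
```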

Having issues? Check our troubleshooting guide for common problems.