# Agent Harness

AI agent harness for small language models (SLMs) with a lightweight tool-calling protocol.
## Overview

A minimal, token-efficient agent loop designed for SLMs (3B–8B parameters). Built on Ollama for local inference.
## Architecture

```
┌─────────────────────────────────────┐
│            Tool Registry            │
│  (definitions + execution engine)   │
└──────────────┬──────────────────────┘
               │
┌──────────────▼──────────────────────┐
│          Prompt Composer            │
│ (system + tools + history + query)  │
└──────────────┬──────────────────────┘
               │
┌──────────────▼──────────────────────┐
│       Model Backend (Ollama)        │
└──────────────┬──────────────────────┘
               │
┌──────────────▼──────────────────────┐
│           Output Parser             │
│  (extract tool calls / responses)   │
└─────────────────────────────────────┘
```
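The flow above can be sketched as a single loop: compose a prompt, call the model, parse the output, and either return the answer or execute the requested tool and iterate. The function and type names below are illustrative stand-ins for the real `src/` modules, which this README does not spell out.

```typescript
// Sketch of the agent loop (synchronous for clarity).
// callModel / parseOutput / executeTool are hypothetical stand-ins,
// not the repo's actual exports.

type ToolCall = { name: string; arguments: Record<string, unknown> };
type Parsed =
  | { kind: "tool_call"; call: ToolCall }
  | { kind: "response"; text: string };

function runTurn(
  query: string,
  maxIterations: number,
  callModel: (prompt: string) => string,
  parseOutput: (raw: string) => Parsed,
  executeTool: (call: ToolCall) => string,
): string {
  let prompt = query;
  for (let i = 0; i < maxIterations; i++) {
    const parsed = parseOutput(callModel(prompt));
    if (parsed.kind === "response") return parsed.text; // final answer
    // Feed the tool result back into the prompt for the next iteration.
    prompt += `\n<tool_result>\n${executeTool(parsed.call)}\n</tool_result>`;
  }
  return "max iterations reached";
}
```

`maxIterations` caps the number of tool calls per turn, matching the `config.json` field of the same name.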
## Quick Start

```bash
npm install
npm run dev
```
## Configuration

Edit `config.json` to set:

- `model`: Ollama model to use
- `tools`: Enabled tools
- `maxIterations`: Max tool calls per turn
- `ollamaUrl`: Ollama endpoint
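A hypothetical `config.json` illustrating these fields — the model name, tool list, and values shown are examples, not the repo's defaults:

```json
{
  "model": "qwen2.5:7b",
  "tools": ["filesystem_read", "filesystem_write", "bash"],
  "maxIterations": 5,
  "ollamaUrl": "http://localhost:11434"
}
```

`http://localhost:11434` is Ollama's standard local endpoint.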
## Tool Protocol

Tools are defined as JSON schemas:
```json
{
  "name": "filesystem_read",
  "description": "Read contents of a file",
  "parameters": {
    "type": "object",
    "properties": {
      "path": { "type": "string", "description": "File path to read" }
    },
    "required": ["path"]
  }
}
```
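In TypeScript terms, a tool definition and a simple registry might be typed as follows. This is a sketch under assumed names (`ToolDefinition`, `registerTool`); the actual `src/` types are not shown in this README.

```typescript
// Sketch: types mirroring the JSON schema shape above, plus a registry.
// Names are illustrative, not the repo's actual exports.

interface ToolParameter {
  type: string;
  description?: string;
}

interface ToolDefinition {
  name: string;
  description: string;
  parameters: {
    type: "object";
    properties: Record<string, ToolParameter>;
    required?: string[];
  };
}

const registry = new Map<string, ToolDefinition>();

function registerTool(def: ToolDefinition): void {
  registry.set(def.name, def);
}

// Register the example tool from the schema above.
registerTool({
  name: "filesystem_read",
  description: "Read contents of a file",
  parameters: {
    type: "object",
    properties: {
      path: { type: "string", description: "File path to read" },
    },
    required: ["path"],
  },
});
```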
The model outputs tool calls in this format:

```
<tool_call>
{"name": "filesystem_read", "arguments": {"path": "/etc/hosts"}}
</tool_call>
```
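A minimal parser for this format could extract the tagged block with a regex and parse the JSON inside it. This is a regex-based sketch, not necessarily how the repo's Output Parser is implemented:

```typescript
// Extract a <tool_call> block from raw model output and parse its JSON.
// Returns null when the output contains no well-formed tool call,
// so the caller can treat the text as a plain response instead.

type ToolCall = { name: string; arguments: Record<string, unknown> };

function extractToolCall(output: string): ToolCall | null {
  const match = output.match(/<tool_call>\s*([\s\S]*?)\s*<\/tool_call>/);
  if (!match) return null;
  try {
    const parsed = JSON.parse(match[1]);
    if (typeof parsed.name !== "string") return null; // reject malformed calls
    return parsed as ToolCall;
  } catch {
    return null; // invalid JSON inside the tags
  }
}
```

Returning `null` for malformed output lets the loop fall back to treating the text as a final answer rather than crashing on a bad generation — a common failure mode with small models.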
## Available Tools

- `filesystem_read` - Read files
- `filesystem_write` - Write files
- `bash` - Execute shell commands
- `web_search` - Search the web (Brave API)
- `memory_search` - Query persistent memory
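Executing a parsed call amounts to dispatching on the tool name. A sketch of such a dispatcher, with trivial stub handlers standing in for the real implementations (which would call `fs`, `child_process`, the Brave API, and so on):

```typescript
// Sketch: map tool names to handlers and dispatch a parsed call.
// Handlers here are stubs; names and shapes are illustrative.

type Handler = (args: Record<string, unknown>) => string;

const handlers: Record<string, Handler> = {
  filesystem_read: (args) => `read ${args.path}`, // stub: real one reads the file
  bash: (args) => `ran ${args.command}`,          // stub: real one spawns a shell
};

function dispatch(name: string, args: Record<string, unknown>): string {
  const handler = handlers[name];
  // Surface unknown-tool errors back to the model as text,
  // so it can recover instead of the loop aborting.
  if (!handler) return `Unknown tool: ${name}`;
  return handler(args);
}
```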
## License

MIT