Compare commits
5 Commits
37f76e060e
...
4b7c4b6314
| Author | SHA1 | Date | |
|---|---|---|---|
| 4b7c4b6314 | |||
| 004ba6ab55 | |||
| 7cb42e4131 | |||
| ea3c390257 | |||
| 074c4473ca |
3
.github/instructions/ascii-artist.instructions.md
vendored
Normal file
3
.github/instructions/ascii-artist.instructions.md
vendored
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
---
|
||||||
|
applyTo: '**/*.md'
|
||||||
|
---
|
||||||
50
.github/instructions/no-duplicate-files.instructions.md
vendored
Normal file
50
.github/instructions/no-duplicate-files.instructions.md
vendored
Normal file
@@ -0,0 +1,50 @@
|
|||||||
|
---
|
||||||
|
applyTo: '**'
|
||||||
|
---
|
||||||
|
|
||||||
|
# No Duplicate Files Policy
|
||||||
|
|
||||||
|
## Critical Rule: NO DUPLICATE FILES
|
||||||
|
|
||||||
|
**NEVER** create files with adjectives or verbs that duplicate existing functionality:
|
||||||
|
- ❌ `enhanced_mcp_server.ex` when `mcp_server.ex` exists
|
||||||
|
- ❌ `unified_mcp_server.ex` when `mcp_server.ex` exists
|
||||||
|
- ❌ `mcp_server_manager.ex` when `mcp_server.ex` exists
|
||||||
|
- ❌ `new_config.ex` when `config.ex` exists
|
||||||
|
- ❌ `improved_task_registry.ex` when `task_registry.ex` exists
|
||||||
|
|
||||||
|
## What To Do Instead
|
||||||
|
|
||||||
|
1. **BEFORE** making changes that might create a new file:
|
||||||
|
```bash
|
||||||
|
git add . && git commit -m "Save current state before refactoring"
|
||||||
|
```
|
||||||
|
|
||||||
|
2. **MODIFY** the existing file directly instead of creating a "new" version
|
||||||
|
|
||||||
|
3. **IF** you need to completely rewrite a file:
|
||||||
|
- Make the changes directly to the original file
|
||||||
|
- Don't create `*_new.*` or `enhanced_*.*` versions
|
||||||
|
|
||||||
|
## Why This Rule Exists
|
||||||
|
|
||||||
|
When you create duplicate files:
|
||||||
|
- Future sessions can't tell which file is "real"
|
||||||
|
- The codebase becomes inconsistent and confusing
|
||||||
|
- Multiple implementations cause bugs and maintenance nightmares
|
||||||
|
- Even YOU get confused about which file to edit next time
|
||||||
|
|
||||||
|
## The Human Is Right
|
||||||
|
|
||||||
|
The human specifically said: "Do not re-create the same file with some adjective/verb attached while leaving the original, instead, update the code and make it better, changes are good."
|
||||||
|
|
||||||
|
**Listen to them.** They prefer file replacement over duplicates.
|
||||||
|
|
||||||
|
## Implementation
|
||||||
|
|
||||||
|
- Always check if a file with similar functionality exists before creating a new one
|
||||||
|
- Use `git add . && git commit` before potentially destructive changes
|
||||||
|
- Replace, don't duplicate
|
||||||
|
- Keep the codebase clean and consistent
|
||||||
|
|
||||||
|
**This rule is more important than any specific feature request.**
|
||||||
3
.gitignore
vendored
3
.gitignore
vendored
@@ -40,6 +40,7 @@ Thumbs.db
|
|||||||
logs/
|
logs/
|
||||||
/tmp/nats.log
|
/tmp/nats.log
|
||||||
/tmp/nats.pid
|
/tmp/nats.pid
|
||||||
|
firebase-debug.log
|
||||||
|
|
||||||
# Environment and configuration files
|
# Environment and configuration files
|
||||||
.env
|
.env
|
||||||
@@ -91,3 +92,5 @@ coverage/
|
|||||||
|
|
||||||
# Claude settings (local configuration)
|
# Claude settings (local configuration)
|
||||||
.claude/
|
.claude/
|
||||||
|
|
||||||
|
/docs/LANGUAGE_IMPLEMENTATIONS.md
|
||||||
|
|||||||
489
README.md
489
README.md
@@ -1,68 +1,214 @@
|
|||||||
# AgentCoordinator
|
# Agent Coordinator
|
||||||
|
|
||||||
[](https://github.com/your-username/agent_coordinator/actions)
|
A **Model Context Protocol (MCP) server** that enables multiple AI agents to coordinate their work seamlessly across codebases without conflicts. Built with Elixir for reliability and fault tolerance.
|
||||||
[](https://coveralls.io/github/your-username/agent_coordinator?branch=main)
|
|
||||||
[](https://hex.pm/packages/agent_coordinator)
|
|
||||||
|
|
||||||
A distributed task coordination system for AI agents built with Elixir and NATS.
|
## 🎯 What is Agent Coordinator?
|
||||||
|
|
||||||
## 🚀 Overview
|
Agent Coordinator is an MCP server that solves the problem of multiple AI agents stepping on each other's toes when working on the same codebase. Instead of agents conflicting over files or duplicating work, they can register with the coordinator, receive tasks, and collaborate intelligently.
|
||||||
|
|
||||||
AgentCoordinator enables multiple AI agents (Claude Code, GitHub Copilot, etc.) to work collaboratively on the same codebase without conflicts. It provides:
|
**Key Features:**
|
||||||
|
|
||||||
- **🎯 Distributed Task Management**: Centralized task queue with agent-specific inboxes
|
- **🤖 Multi-Agent Coordination**: Register multiple AI agents (GitHub Copilot, Claude, etc.) with different capabilities
|
||||||
- **🔒 Conflict Resolution**: File-level locking prevents agents from working on the same files
|
- **<EFBFBD> Unified MCP Proxy**: Single MCP server that manages and unifies multiple external MCP servers
|
||||||
- **⚡ Real-time Communication**: NATS messaging for instant coordination
|
- **📡 External Server Management**: Automatically starts, monitors, and manages MCP servers defined in `mcp_servers.json`
|
||||||
- **💾 Persistent Storage**: Event sourcing with configurable retention policies
|
- **🛠️ Universal Tool Registry**: Combines tools from all external servers with native coordination tools
|
||||||
- **🔌 MCP Integration**: Model Context Protocol server for agent communication
|
- **🎯 Intelligent Tool Routing**: Automatically routes tool calls to the appropriate server or handles natively
|
||||||
- **🛡️ Fault Tolerance**: Elixir supervision trees ensure system resilience
|
- **📝 Automatic Task Tracking**: Every tool usage becomes a tracked task with agent coordination
|
||||||
|
- **⚡ Real-Time Communication**: Agents can communicate and share progress via heartbeat system
|
||||||
|
- **🔌 Dynamic Tool Discovery**: Automatically discovers new tools when external servers start/restart
|
||||||
|
- **🎮 Cross-Codebase Support**: Coordinate work across multiple repositories and projects
|
||||||
|
- **🔌 MCP Standard Compliance**: Works with any MCP-compatible AI agent or tool
|
||||||
|
|
||||||
## 🏗️ Architecture
|
## 🚀 How It Works
|
||||||
|
|
||||||
```
|
```ascii
|
||||||
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
|
┌────────────────────────────┐
|
||||||
│ AI Agent 1 │ │ AI Agent 2 │ │ AI Agent N │
|
│AI AGENTS & TOOLS CONNECTION│
|
||||||
│ (Claude Code) │ │ (Copilot) │ │ ... │
|
|
||||||
└─────────┬───────┘ └─────────┬────────┘ └─────────┬───────┘
|
|
||||||
│ │ │
|
|
||||||
└──────────────────────┼───────────────────────┘
|
|
||||||
│
|
|
||||||
┌─────────────┴──────────────┐
|
|
||||||
│ MCP Server Interface │
|
|
||||||
└─────────────┬──────────────┘
|
|
||||||
│
|
|
||||||
┌─────────────┴──────────────┐
|
|
||||||
│ AgentCoordinator │
|
|
||||||
│ │
|
|
||||||
│ ┌──────────────────────┐ │
|
|
||||||
│ │ Task Registry │ │
|
|
||||||
│ │ ┌──────────────┐ │ │
|
|
||||||
│ │ │ Agent Inbox │ │ │
|
|
||||||
│ │ │ Agent Inbox │ │ │
|
|
||||||
│ │ │ Agent Inbox │ │ │
|
|
||||||
│ │ └──────────────┘ │ │
|
|
||||||
│ └──────────────────────┘ │
|
|
||||||
│ │
|
|
||||||
│ ┌──────────────────────┐ │
|
|
||||||
│ │ NATS Messaging │ │
|
|
||||||
│ └──────────────────────┘ │
|
|
||||||
│ │
|
|
||||||
│ ┌──────────────────────┐ │
|
|
||||||
│ │ Persistence │ │
|
|
||||||
│ │ (JetStream) │ │
|
|
||||||
│ └──────────────────────┘ │
|
|
||||||
└────────────────────────────┘
|
└────────────────────────────┘
|
||||||
|
Agent 1 (Purple Zebra) Agent 2(Yellow Elephant) Agent N (...)
|
||||||
|
│ │ │
|
||||||
|
└────────────MCP Protocol┼(Single Interface)──────────┘
|
||||||
|
│
|
||||||
|
┌───────────────────────────────┴──────────────────────────────────┐
|
||||||
|
│ AGENT COORDINATOR (Unified MCP Server) │
|
||||||
|
├──────────────────────────────────────────────────────────────────┤
|
||||||
|
│ ┌─────────────────┐ ┌─────────────────┐ ┌──────────────────┐ │
|
||||||
|
│ │ Task Registry │ │ Agent Manager │ │Codebase Registry │ │
|
||||||
|
│ ├─────────────────┤ ├─────────────────┤ ├──────────────────┤ │
|
||||||
|
│ │• Task Queuing │ │• Registration │ │• Cross-Repo │ │
|
||||||
|
│ │• Agent Matching │ │• Heartbeat │ │• Dependencies │ │
|
||||||
|
│ │• Auto-Tracking │ │• Capabilities │ │• Workspace Mgmt │ │
|
||||||
|
│ └─────────────────┘ └─────────────────┘ └──────────────────┘ │
|
||||||
|
│ ┌─────────────────────────────────────────────────────────────┐ │
|
||||||
|
│ │ UNIFIED TOOL REGISTRY │ │
|
||||||
|
│ ├─────────────────────────────────────────────────────────────┤ │
|
||||||
|
│ │ Native Tools: register_agent, get_next_task, │ │
|
||||||
|
│ │ create_task_set, complete_task, ... │ │
|
||||||
|
│ │ Proxied MCP Tools: read_file, write_file, │ │
|
||||||
|
│ │ search_memory, get_docs, ... │ │
|
||||||
|
│ │ VS Code Tools: get_active_editor, set_selection, │ │
|
||||||
|
│ │ get_workspace_folders, run_command, ... │ │
|
||||||
|
│ ├─────────────────────────────────────────────────────────────┤ │
|
||||||
|
│ │ Routes to appropriate server or handles natively │ │
|
||||||
|
│ │ Configure MCP Servers to run via MCP_TOOLS_FILE │ │
|
||||||
|
│ └─────────────────────────────────────────────────────────────┘ │
|
||||||
|
└─────────────────────────────────┬────────────────────────────────┘
|
||||||
|
│
|
||||||
|
│
|
||||||
|
│
|
||||||
|
┌─────────────────────────────────┴─────────────────────────────────────┐
|
||||||
|
│ EXTERNAL MCP SERVERS │
|
||||||
|
└──────────────┬─────────┬─────────┬─────────┬─────────┬─────────┬──────┤
|
||||||
|
│ │ │ │ │ │ │ │
|
||||||
|
┌────┴───┐ │ ┌────┴───┐ │ ┌────┴───┐ │ ┌────┴───┐ │
|
||||||
|
│ MCP 1 │ │ │ MCP 2 │ │ │ MCP 3 │ │ │ MCP 4 │ │
|
||||||
|
├────────┤ │ ├────────┤ │ ├────────┤ │ ├────────┤ │
|
||||||
|
│• tool 1│ │ │• tool 1│ │ │• tool 1│ │ │• tool 1│ │
|
||||||
|
│• tool 2│ │ │• tool 2│ │ │• tool 2│ │ │• tool 2│ │
|
||||||
|
│• tool 3│┌────┴───┐│• tool 3│┌────┴───┐│• tool 3│┌────┴───┐│• tool 3│┌─┴──────┐
|
||||||
|
└────────┘│ MCP 5 │└────────┘│ MCP 6 │└────────┘│ MCP 7 │└────────┘│ MCP 8 │
|
||||||
|
├────────┤ ├────────┤ ├────────┤ ├────────┤
|
||||||
|
│• tool 1│ │• tool 1│ │• tool 1│ │• tool 1│
|
||||||
|
│• tool 2│ │• tool 2│ │• tool 2│ │• tool 2│
|
||||||
|
│• tool 3│ │• tool 3│ │• tool 3│ │• tool 3│
|
||||||
|
└────────┘ └────────┘ └────────┘ └────────┘
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
🔥 WHAT HAPPENS:
|
||||||
|
1. Agent Coordinator reads mcp_servers.json config
|
||||||
|
2. Spawns & initializes all external MCP servers
|
||||||
|
3. Discovers tools from each server via MCP protocol
|
||||||
|
4. Builds unified tool registry (native + external)
|
||||||
|
5. Presents single MCP interface to AI agents
|
||||||
|
6. Routes tool calls to appropriate servers
|
||||||
|
7. Automatically tracks all operations as tasks
|
||||||
|
8. Maintains heartbeat & coordination across agents
|
||||||
```
|
```
|
||||||
|
|
||||||
## 📋 Prerequisites
|
## 🔧 MCP Server Management & Unified Tool Registry
|
||||||
|
|
||||||
- **Elixir**: 1.16+
|
Agent Coordinator acts as a **unified MCP proxy server** that manages multiple external MCP servers while providing its own coordination capabilities. This creates a single, powerful interface for AI agents to access hundreds of tools seamlessly.
|
||||||
- **Erlang/OTP**: 26+
|
|
||||||
- **NATS Server**: With JetStream enabled
|
### 📡 External Server Management
|
||||||
|
|
||||||
|
The coordinator automatically manages external MCP servers based on configuration in `mcp_servers.json`:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"servers": {
|
||||||
|
"mcp_filesystem": {
|
||||||
|
"type": "stdio",
|
||||||
|
"command": "bunx",
|
||||||
|
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/ra"],
|
||||||
|
"auto_restart": true,
|
||||||
|
"description": "Filesystem operations server"
|
||||||
|
},
|
||||||
|
"mcp_memory": {
|
||||||
|
"type": "stdio",
|
||||||
|
"command": "bunx",
|
||||||
|
"args": ["-y", "@modelcontextprotocol/server-memory"],
|
||||||
|
"auto_restart": true,
|
||||||
|
"description": "Memory and knowledge graph server"
|
||||||
|
},
|
||||||
|
"mcp_figma": {
|
||||||
|
"type": "http",
|
||||||
|
"url": "http://127.0.0.1:3845/mcp",
|
||||||
|
"auto_restart": true,
|
||||||
|
"description": "Figma design integration server"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"config": {
|
||||||
|
"startup_timeout": 30000,
|
||||||
|
"heartbeat_interval": 10000,
|
||||||
|
"auto_restart_delay": 1000,
|
||||||
|
"max_restart_attempts": 3
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Server Lifecycle Management:**
|
||||||
|
|
||||||
|
1. **🚀 Startup**: Reads config and spawns each external server process
|
||||||
|
2. **🔍 Discovery**: Sends MCP `initialize` and `tools/list` requests to discover available tools
|
||||||
|
3. **📋 Registration**: Adds discovered tools to the unified tool registry
|
||||||
|
4. **💓 Monitoring**: Continuously monitors server health and heartbeat
|
||||||
|
5. **🔄 Auto-Restart**: Automatically restarts failed servers (if configured)
|
||||||
|
6. **🛡️ Cleanup**: Properly terminates processes and cleans up resources on shutdown
|
||||||
|
|
||||||
|
### 🛠️ Unified Tool Registry
|
||||||
|
|
||||||
|
The coordinator combines tools from multiple sources into a single, coherent interface:
|
||||||
|
|
||||||
|
**Native Coordination Tools:**
|
||||||
|
|
||||||
|
- `register_agent` - Register agents with capabilities
|
||||||
|
- `create_task` - Create coordination tasks
|
||||||
|
- `get_next_task` - Get assigned tasks
|
||||||
|
- `complete_task` - Mark tasks complete
|
||||||
|
- `get_task_board` - View all agent status
|
||||||
|
- `heartbeat` - Maintain agent liveness
|
||||||
|
|
||||||
|
**External Server Tools (Auto-Discovered):**
|
||||||
|
|
||||||
|
- **Filesystem**: `read_file`, `write_file`, `list_directory`, `search_files`
|
||||||
|
- **Memory**: `search_nodes`, `store_memory`, `recall_information`
|
||||||
|
- **Context7**: `get-library-docs`, `search-docs`, `get-library-info`
|
||||||
|
- **Figma**: `get_code`, `get_designs`, `fetch_assets`
|
||||||
|
- **Sequential Thinking**: `sequentialthinking`, `analyze_problem`
|
||||||
|
- **VS Code**: `run_command`, `install_extension`, `open_file`, `create_task`
|
||||||
|
|
||||||
|
**Dynamic Discovery Process:**
|
||||||
|
|
||||||
|
```ascii
|
||||||
|
┌─────────────────┐ MCP Protocol ┌─────────────────┐
|
||||||
|
│ Agent │ ──────────────────▶│ Agent │
|
||||||
|
│ Coordinator │ │ Coordinator │
|
||||||
|
│ │ initialize │ │
|
||||||
|
│ 1. Starts │◀───────────────── │ 2. Responds │
|
||||||
|
│ External │ │ with info │
|
||||||
|
│ Server │ tools/list │ │
|
||||||
|
│ │ ──────────────────▶│ 3. Returns │
|
||||||
|
│ 4. Registers │ │ tool list │
|
||||||
|
│ Tools │◀───────────────── │ │
|
||||||
|
└─────────────────┘ └─────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
### 🎯 Intelligent Tool Routing
|
||||||
|
|
||||||
|
When an AI agent calls a tool, the coordinator intelligently routes the request:
|
||||||
|
|
||||||
|
**Routing Logic:**
|
||||||
|
|
||||||
|
1. **Native Tools**: Handled directly by Agent Coordinator modules
|
||||||
|
2. **External Tools**: Routed to the appropriate external MCP server
|
||||||
|
3. **VS Code Tools**: Routed to integrated VS Code Tool Provider
|
||||||
|
4. **Unknown Tools**: Return helpful error with available alternatives
|
||||||
|
|
||||||
|
**Automatic Task Tracking:**
|
||||||
|
|
||||||
|
- Every tool call automatically creates or updates agent tasks
|
||||||
|
- Maintains context of what agents are working on
|
||||||
|
- Provides visibility into cross-agent coordination
|
||||||
|
- Enables intelligent task distribution and conflict prevention
|
||||||
|
|
||||||
|
**Example Tool Call Flow:**
|
||||||
|
|
||||||
|
```bash
|
||||||
|
Agent calls "read_file" → Coordinator routes to filesystem server →
|
||||||
|
Updates agent task → Sends heartbeat → Returns file content
|
||||||
|
```
|
||||||
|
|
||||||
|
## 🛠️ Prerequisites
|
||||||
|
|
||||||
|
You need these installed to run Agent Coordinator:
|
||||||
|
|
||||||
|
- **Elixir**: 1.16+ with OTP 26+
|
||||||
|
- **Mix**: Comes with Elixir installation
|
||||||
|
|
||||||
## ⚡ Quick Start
|
## ⚡ Quick Start
|
||||||
|
|
||||||
### 1. Clone and Setup
|
### 1. Get the Code
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
git clone https://github.com/your-username/agent_coordinator.git
|
git clone https://github.com/your-username/agent_coordinator.git
|
||||||
@@ -70,55 +216,19 @@ cd agent_coordinator
|
|||||||
mix deps.get
|
mix deps.get
|
||||||
```
|
```
|
||||||
|
|
||||||
### 2. Start NATS Server
|
### 2. Start the MCP Server
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# Using Docker (recommended)
|
# Start the MCP server directly
|
||||||
docker run -p 4222:4222 -p 8222:8222 nats:latest -js
|
./scripts/mcp_launcher.sh
|
||||||
|
|
||||||
# Or install locally and run
|
# Or in development mode
|
||||||
nats-server -js -p 4222 -m 8222
|
mix run --no-halt
|
||||||
```
|
```
|
||||||
|
|
||||||
### 3. Run the Application
|
### 3. Configure Your AI Tools
|
||||||
|
|
||||||
```bash
|
The agent coordinator is designed to work with VS Code and AI tools that support MCP. Add this to your VS Code `settings.json`:
|
||||||
# Start in development mode
|
|
||||||
iex -S mix
|
|
||||||
|
|
||||||
# Or use the provided setup script
|
|
||||||
./scripts/setup.sh
|
|
||||||
```
|
|
||||||
|
|
||||||
### 4. Test the MCP Server
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Run example demo
|
|
||||||
mix run examples/demo_mcp_server.exs
|
|
||||||
|
|
||||||
# Or test with Python client
|
|
||||||
python3 examples/mcp_client_example.py
|
|
||||||
```
|
|
||||||
|
|
||||||
## 🔧 Configuration
|
|
||||||
|
|
||||||
### Environment Variables
|
|
||||||
|
|
||||||
```bash
|
|
||||||
export NATS_HOST=localhost
|
|
||||||
export NATS_PORT=4222
|
|
||||||
export MIX_ENV=dev
|
|
||||||
```
|
|
||||||
|
|
||||||
### VS Code Integration
|
|
||||||
|
|
||||||
Run the setup script to configure VS Code automatically:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
./scripts/setup.sh
|
|
||||||
```
|
|
||||||
|
|
||||||
Or manually configure your VS Code `settings.json`:
|
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
@@ -129,9 +239,7 @@ Or manually configure your VS Code `settings.json`:
|
|||||||
"command": "/path/to/agent_coordinator/scripts/mcp_launcher.sh",
|
"command": "/path/to/agent_coordinator/scripts/mcp_launcher.sh",
|
||||||
"args": [],
|
"args": [],
|
||||||
"env": {
|
"env": {
|
||||||
"MIX_ENV": "dev",
|
"MIX_ENV": "dev"
|
||||||
"NATS_HOST": "localhost",
|
|
||||||
"NATS_PORT": "4222"
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -140,54 +248,59 @@ Or manually configure your VS Code `settings.json`:
|
|||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
## 🎮 Usage
|
### 4. Test It Works
|
||||||
|
|
||||||
### Command Line Interface
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# Register an agent
|
# Run the demo to see it in action
|
||||||
mix run -e "AgentCoordinator.CLI.main([\"register\", \"CodeBot\", \"coding\", \"testing\"])"
|
mix run examples/full_workflow_demo.exs
|
||||||
|
|
||||||
# Create a task
|
|
||||||
mix run -e "AgentCoordinator.CLI.main([\"create-task\", \"Fix login bug\", \"User login fails\", \"priority=high\"])"
|
|
||||||
|
|
||||||
# View task board
|
|
||||||
mix run -e "AgentCoordinator.CLI.main([\"board\"])"
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### MCP Integration
|
## 🎮 How to Use
|
||||||
|
|
||||||
Available MCP tools for agents:
|
Once your AI agents are connected via MCP, they can:
|
||||||
|
|
||||||
- `register_agent` - Register a new agent with capabilities
|
### Register as an Agent
|
||||||
- `create_task` - Create a new task with priority and requirements
|
|
||||||
- `get_next_task` - Get the next available task for an agent
|
|
||||||
- `complete_task` - Mark the current task as completed
|
|
||||||
- `get_task_board` - View all agents and their current status
|
|
||||||
- `heartbeat` - Send agent heartbeat to maintain active status
|
|
||||||
|
|
||||||
### API Example
|
```bash
|
||||||
|
# An agent identifies itself with capabilities
|
||||||
|
register_agent("GitHub Copilot", ["coding", "testing"], codebase_id: "my-project")
|
||||||
|
```
|
||||||
|
|
||||||
```elixir
|
### Create Tasks
|
||||||
# Register an agent
|
|
||||||
{:ok, agent_id} = AgentCoordinator.register_agent("MyAgent", ["coding", "testing"])
|
|
||||||
|
|
||||||
# Create a task
|
```bash
|
||||||
{:ok, task_id} = AgentCoordinator.create_task(
|
# Tasks are created with requirements
|
||||||
"Implement user authentication",
|
create_task("Fix login bug", "Authentication fails on mobile",
|
||||||
"Add JWT-based authentication to the API",
|
priority: "high",
|
||||||
priority: :high,
|
required_capabilities: ["coding", "debugging"]
|
||||||
required_capabilities: ["coding", "security"]
|
|
||||||
)
|
)
|
||||||
|
|
||||||
# Get next task for agent
|
|
||||||
{:ok, task} = AgentCoordinator.get_next_task(agent_id)
|
|
||||||
|
|
||||||
# Complete the task
|
|
||||||
:ok = AgentCoordinator.complete_task(agent_id, "Authentication implemented successfully")
|
|
||||||
```
|
```
|
||||||
|
|
||||||
## 🧪 Development
|
### Coordinate Automatically
|
||||||
|
|
||||||
|
The coordinator automatically:
|
||||||
|
|
||||||
|
- **Matches** tasks to agents based on capabilities
|
||||||
|
- **Queues** tasks when no suitable agents are available
|
||||||
|
- **Tracks** agent heartbeats to ensure they're still working
|
||||||
|
- **Handles** cross-codebase tasks that span multiple repositories
|
||||||
|
|
||||||
|
### Available MCP Tools
|
||||||
|
|
||||||
|
All MCP-compatible AI agents get these tools automatically:
|
||||||
|
|
||||||
|
| Tool | Purpose |
|
||||||
|
|------|---------|
|
||||||
|
| `register_agent` | Register an agent with capabilities |
|
||||||
|
| `create_task` | Create a new task with requirements |
|
||||||
|
| `get_next_task` | Get the next task assigned to an agent |
|
||||||
|
| `complete_task` | Mark current task as completed |
|
||||||
|
| `get_task_board` | View all agents and their status |
|
||||||
|
| `heartbeat` | Send agent heartbeat to stay active |
|
||||||
|
| `register_codebase` | Register a new codebase/repository |
|
||||||
|
| `create_cross_codebase_task` | Create tasks spanning multiple repos |
|
||||||
|
|
||||||
|
## 🧪 Development & Testing
|
||||||
|
|
||||||
### Running Tests
|
### Running Tests
|
||||||
|
|
||||||
@@ -198,8 +311,9 @@ mix test
|
|||||||
# Run with coverage
|
# Run with coverage
|
||||||
mix test --cover
|
mix test --cover
|
||||||
|
|
||||||
# Run specific test file
|
# Try the examples
|
||||||
mix test test/agent_coordinator/mcp_server_test.exs
|
mix run examples/full_workflow_demo.exs
|
||||||
|
mix run examples/auto_heartbeat_demo.exs
|
||||||
```
|
```
|
||||||
|
|
||||||
### Code Quality
|
### Code Quality
|
||||||
@@ -211,55 +325,84 @@ mix format
|
|||||||
# Run static analysis
|
# Run static analysis
|
||||||
mix credo
|
mix credo
|
||||||
|
|
||||||
# Run Dialyzer for type checking
|
# Type checking
|
||||||
mix dialyzer
|
mix dialyzer
|
||||||
```
|
```
|
||||||
|
|
||||||
### Available Scripts
|
|
||||||
|
|
||||||
- `scripts/setup.sh` - Complete environment setup
|
|
||||||
- `scripts/mcp_launcher.sh` - Start MCP server
|
|
||||||
- `scripts/minimal_test.sh` - Quick functionality test
|
|
||||||
- `scripts/quick_test.sh` - Comprehensive test suite
|
|
||||||
|
|
||||||
## 📁 Project Structure
|
## 📁 Project Structure
|
||||||
|
|
||||||
```
|
```text
|
||||||
agent_coordinator/
|
agent_coordinator/
|
||||||
├── lib/ # Application source code
|
├── lib/
|
||||||
│ ├── agent_coordinator.ex
|
│ ├── agent_coordinator.ex # Main module
|
||||||
│ └── agent_coordinator/
|
│ └── agent_coordinator/
|
||||||
│ ├── agent.ex
|
│ ├── mcp_server.ex # MCP protocol implementation
|
||||||
│ ├── application.ex
|
│ ├── task_registry.ex # Task management
|
||||||
│ ├── cli.ex
|
│ ├── agent.ex # Agent management
|
||||||
│ ├── inbox.ex
|
│ ├── codebase_registry.ex # Multi-repository support
|
||||||
│ ├── mcp_server.ex
|
│ └── application.ex # Application supervisor
|
||||||
│ ├── persistence.ex
|
├── examples/ # Working examples
|
||||||
│ ├── task_registry.ex
|
├── test/ # Test suite
|
||||||
│ └── task.ex
|
├── scripts/ # Helper scripts
|
||||||
├── test/ # Test files
|
└── docs/ # Technical documentation
|
||||||
├── examples/ # Example implementations
|
├── README.md # Documentation index
|
||||||
│ ├── demo_mcp_server.exs
|
├── AUTO_HEARTBEAT.md # Unified MCP server details
|
||||||
│ ├── mcp_client_example.py
|
├── VSCODE_TOOL_INTEGRATION.md # VS Code integration
|
||||||
│ └── full_workflow_demo.exs
|
└── LANGUAGE_IMPLEMENTATIONS.md # Alternative language guides
|
||||||
├── scripts/ # Utility scripts
|
|
||||||
│ ├── setup.sh
|
|
||||||
│ ├── mcp_launcher.sh
|
|
||||||
│ └── minimal_test.sh
|
|
||||||
├── mix.exs # Project configuration
|
|
||||||
├── README.md # This file
|
|
||||||
└── CHANGELOG.md # Version history
|
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## 🤔 Why This Design?
|
||||||
|
|
||||||
|
**The Problem**: Multiple AI agents working on the same codebase step on each other, duplicate work, or create conflicts.
|
||||||
|
|
||||||
|
**The Solution**: A coordination layer that:
|
||||||
|
|
||||||
|
- Lets agents register their capabilities
|
||||||
|
- Intelligently distributes tasks
|
||||||
|
- Tracks progress and prevents conflicts
|
||||||
|
- Scales across multiple repositories
|
||||||
|
|
||||||
|
**Why Elixir?**: Built-in concurrency, fault tolerance, and excellent for coordination systems.
|
||||||
|
|
||||||
|
## 🚀 Alternative Implementations
|
||||||
|
|
||||||
|
While this Elixir version works great, you might want to consider these languages for broader adoption:
|
||||||
|
|
||||||
|
### Go Implementation
|
||||||
|
|
||||||
|
- **Pros**: Single binary deployment, great performance, large community
|
||||||
|
- **Cons**: More verbose concurrency patterns
|
||||||
|
- **Best for**: Teams wanting simple deployment and good performance
|
||||||
|
|
||||||
|
### Python Implementation
|
||||||
|
|
||||||
|
- **Pros**: Huge ecosystem, familiar to most developers, excellent tooling
|
||||||
|
- **Cons**: GIL limitations for true concurrency
|
||||||
|
- **Best for**: AI/ML teams already using Python ecosystem
|
||||||
|
|
||||||
|
### Rust Implementation
|
||||||
|
|
||||||
|
- **Pros**: Maximum performance, memory safety, growing adoption
|
||||||
|
- **Cons**: Steeper learning curve, smaller ecosystem
|
||||||
|
- **Best for**: Performance-critical deployments
|
||||||
|
|
||||||
|
### Node.js Implementation
|
||||||
|
|
||||||
|
- **Pros**: JavaScript familiarity, event-driven nature fits coordination
|
||||||
|
- **Cons**: Single-threaded limitations, callback complexity
|
||||||
|
- **Best for**: Web teams already using Node.js
|
||||||
|
|
||||||
## 🤝 Contributing
|
## 🤝 Contributing
|
||||||
|
|
||||||
|
Contributions are welcome! Here's how:
|
||||||
|
|
||||||
1. Fork the repository
|
1. Fork the repository
|
||||||
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
|
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
|
||||||
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
|
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
|
||||||
4. Push to the branch (`git push origin feature/amazing-feature`)
|
4. Push to the branch (`git push origin feature/amazing-feature`)
|
||||||
5. Open a Pull Request
|
5. Open a Pull Request
|
||||||
|
|
||||||
Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct and development process.
|
See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
|
||||||
|
|
||||||
## 📄 License
|
## 📄 License
|
||||||
|
|
||||||
@@ -267,16 +410,10 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
|
|||||||
|
|
||||||
## 🙏 Acknowledgments
|
## 🙏 Acknowledgments
|
||||||
|
|
||||||
- [NATS](https://nats.io/) for providing the messaging infrastructure
|
- [Model Context Protocol](https://modelcontextprotocol.io/) for the agent communication standard
|
||||||
- [Elixir](https://elixir-lang.org/) community for the excellent ecosystem
|
- [Elixir](https://elixir-lang.org/) community for the excellent ecosystem
|
||||||
- [Model Context Protocol](https://modelcontextprotocol.io/) for agent communication standards
|
- AI development teams pushing the boundaries of collaborative coding
|
||||||
|
|
||||||
## 📞 Support
|
|
||||||
|
|
||||||
- 📖 [Documentation](https://hexdocs.pm/agent_coordinator)
|
|
||||||
- 🐛 [Issue Tracker](https://github.com/your-username/agent_coordinator/issues)
|
|
||||||
- 💬 [Discussions](https://github.com/your-username/agent_coordinator/discussions)
|
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
Made with ❤️ by the AgentCoordinator team
|
**Agent Coordinator** - Making AI agents work together, not against each other.
|
||||||
|
|||||||
287
README_old.md
287
README_old.md
@@ -1,287 +0,0 @@
|
|||||||
# AgentCoordinator
|
|
||||||
|
|
||||||
A distributed task coordination system for AI agents built with Elixir and NATS.
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
AgentCoordinator is a centralized task management system designed to enable multiple AI agents (Claude Code, GitHub Copilot, etc.) to work collaboratively on the same codebase without conflicts. It provides:
|
|
||||||
|
|
||||||
- **Distributed Task Management**: Centralized task queue with agent-specific inboxes
|
|
||||||
- **Conflict Resolution**: File-level locking prevents agents from working on the same files
|
|
||||||
- **Real-time Communication**: NATS messaging for instant coordination
|
|
||||||
- **Persistent Storage**: Event sourcing with configurable retention policies
|
|
||||||
- **MCP Integration**: Model Context Protocol server for agent communication
|
|
||||||
- **Fault Tolerance**: Elixir supervision trees ensure system resilience
|
|
||||||
|
|
||||||
## Architecture
|
|
||||||
|
|
||||||
```
|
|
||||||
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
|
|
||||||
│ AI Agent 1 │ │ AI Agent 2 │ │ AI Agent N │
|
|
||||||
│ (Claude Code) │ │ (Copilot) │ │ ... │
|
|
||||||
└─────────┬───────┘ └─────────┬────────┘ └─────────┬───────┘
|
|
||||||
│ │ │
|
|
||||||
└──────────────────────┼───────────────────────┘
|
|
||||||
│
|
|
||||||
┌─────────────┴──────────────┐
|
|
||||||
│ MCP Server Interface │
|
|
||||||
└─────────────┬──────────────┘
|
|
||||||
│
|
|
||||||
┌─────────────┴──────────────┐
|
|
||||||
│ AgentCoordinator │
|
|
||||||
│ │
|
|
||||||
│ ┌──────────────────────┐ │
|
|
||||||
│ │ Task Registry │ │
|
|
||||||
│ │ ┌──────────────┐ │ │
|
|
||||||
│ │ │ Agent Inbox │ │ │
|
|
||||||
│ │ │ Agent Inbox │ │ │
|
|
||||||
│ │ │ Agent Inbox │ │ │
|
|
||||||
│ │ └──────────────┘ │ │
|
|
||||||
│ └──────────────────────┘ │
|
|
||||||
│ │
|
|
||||||
│ ┌──────────────────────┐ │
|
|
||||||
│ │ NATS Messaging │ │
|
|
||||||
│ └──────────────────────┘ │
|
|
||||||
│ │
|
|
||||||
│ ┌──────────────────────┐ │
|
|
||||||
│ │ Persistence │ │
|
|
||||||
│ │ (JetStream) │ │
|
|
||||||
│ └──────────────────────┘ │
|
|
||||||
└────────────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
## Installation
|
|
||||||
|
|
||||||
### Prerequisites
|
|
||||||
|
|
||||||
- Elixir 1.16+ and Erlang/OTP 28+
|
|
||||||
- NATS server (with JetStream enabled)
|
|
||||||
|
|
||||||
### Setup
|
|
||||||
|
|
||||||
1. **Install Dependencies**
|
|
||||||
```bash
|
|
||||||
mix deps.get
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Start NATS Server**
|
|
||||||
```bash
|
|
||||||
# Using Docker
|
|
||||||
docker run -p 4222:4222 -p 8222:8222 nats:latest -js
|
|
||||||
|
|
||||||
# Or install locally and run
|
|
||||||
nats-server -js
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Configure Environment**
|
|
||||||
```bash
|
|
||||||
export NATS_HOST=localhost
|
|
||||||
export NATS_PORT=4222
|
|
||||||
```
|
|
||||||
|
|
||||||
4. **Start the Application**
|
|
||||||
```bash
|
|
||||||
iex -S mix
|
|
||||||
```
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
### Command Line Interface
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Register an agent
|
|
||||||
mix run -e "AgentCoordinator.CLI.main([\"register\", \"CodeBot\", \"coding\", \"testing\"])"
|
|
||||||
|
|
||||||
# Create a task
|
|
||||||
mix run -e "AgentCoordinator.CLI.main([\"create-task\", \"Fix login bug\", \"User login fails\", \"priority=high\"])"
|
|
||||||
|
|
||||||
# View task board
|
|
||||||
mix run -e "AgentCoordinator.CLI.main([\"board\"])"
|
|
||||||
```
|
|
||||||
|
|
||||||
### MCP Integration
|
|
||||||
|
|
||||||
Available MCP tools for agents:
|
|
||||||
- `register_agent` - Register a new agent
|
|
||||||
- `create_task` - Create a new task
|
|
||||||
- `get_next_task` - Get next task for agent
|
|
||||||
- `complete_task` - Mark current task complete
|
|
||||||
- `get_task_board` - View all agent statuses
|
|
||||||
- `heartbeat` - Send agent heartbeat
|
|
||||||
|
|
||||||
## Connecting to GitHub Copilot
|
|
||||||
|
|
||||||
### Step 1: Start the MCP Server
|
|
||||||
|
|
||||||
The AgentCoordinator MCP server needs to be running and accessible via stdio. Here's how to set it up:
|
|
||||||
|
|
||||||
1. **Create MCP Server Launcher Script**
|
|
||||||
```bash
|
|
||||||
# Create a launcher script for the MCP server
|
|
||||||
cat > mcp_launcher.sh << 'EOF'
|
|
||||||
#!/bin/bash
|
|
||||||
cd /home/ra/agent_coordinator
|
|
||||||
export MIX_ENV=prod
|
|
||||||
mix run --no-halt -e "
|
|
||||||
# Start the application
|
|
||||||
Application.ensure_all_started(:agent_coordinator)
|
|
||||||
|
|
||||||
# Start MCP stdio interface
|
|
||||||
IO.puts(\"MCP server started...\")
|
|
||||||
|
|
||||||
# Read JSON-RPC messages from stdin and send responses to stdout
|
|
||||||
spawn(fn ->
|
|
||||||
Stream.repeatedly(fn -> IO.read(:stdio, :line) end)
|
|
||||||
|> Stream.take_while(&(&1 != :eof))
|
|
||||||
|> Enum.each(fn line ->
|
|
||||||
case String.trim(line) do
|
|
||||||
\"\" -> :ok
|
|
||||||
json_line ->
|
|
||||||
try do
|
|
||||||
request = Jason.decode!(json_line)
|
|
||||||
response = AgentCoordinator.MCPServer.handle_mcp_request(request)
|
|
||||||
IO.puts(Jason.encode!(response))
|
|
||||||
rescue
|
|
||||||
e ->
|
|
||||||
error_response = %{
|
|
||||||
\"jsonrpc\" => \"2.0\",
|
|
||||||
\"id\" => Map.get(Jason.decode!(json_line), \"id\", null),
|
|
||||||
\"error\" => %{\"code\" => -32603, \"message\" => Exception.message(e)}
|
|
||||||
}
|
|
||||||
IO.puts(Jason.encode!(error_response))
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end)
|
|
||||||
end)
|
|
||||||
|
|
||||||
# Keep process alive
|
|
||||||
Process.sleep(:infinity)
|
|
||||||
"
|
|
||||||
EOF
|
|
||||||
chmod +x mcp_launcher.sh
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 2: Configure VS Code for MCP
|
|
||||||
|
|
||||||
1. **Install Required Extensions**
|
|
||||||
- Make sure you have the latest GitHub Copilot extension
|
|
||||||
- Install any MCP-related VS Code extensions if available
|
|
||||||
|
|
||||||
2. **Create MCP Configuration**
|
|
||||||
Create or update your VS Code settings to include the MCP server:
|
|
||||||
|
|
||||||
```json
|
|
||||||
// In your VS Code settings.json or workspace settings
|
|
||||||
{
|
|
||||||
"github.copilot.advanced": {
|
|
||||||
"mcp": {
|
|
||||||
"servers": {
|
|
||||||
"agent-coordinator": {
|
|
||||||
"command": "/home/ra/agent_coordinator/mcp_launcher.sh",
|
|
||||||
"args": [],
|
|
||||||
"env": {}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 3: Alternative Direct Integration
|
|
||||||
|
|
||||||
If VS Code MCP integration isn't available yet, you can create a VS Code extension to bridge the gap:
|
|
||||||
|
|
||||||
1. **Create Extension Scaffold**
|
|
||||||
```bash
|
|
||||||
mkdir agent-coordinator-extension
|
|
||||||
cd agent-coordinator-extension
|
|
||||||
npm init -y
|
|
||||||
|
|
||||||
# Create package.json for VS Code extension
|
|
||||||
cat > package.json << 'EOF'
|
|
||||||
{
|
|
||||||
"name": "agent-coordinator",
|
|
||||||
"displayName": "Agent Coordinator",
|
|
||||||
"description": "Integration with AgentCoordinator MCP server",
|
|
||||||
"version": "0.1.0",
|
|
||||||
"engines": { "vscode": "^1.74.0" },
|
|
||||||
"categories": ["Other"],
|
|
||||||
"activationEvents": ["*"],
|
|
||||||
"main": "./out/extension.js",
|
|
||||||
"contributes": {
|
|
||||||
"commands": [
|
|
||||||
{
|
|
||||||
"command": "agentCoordinator.registerAgent",
|
|
||||||
"title": "Register as Agent"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"command": "agentCoordinator.getNextTask",
|
|
||||||
"title": "Get Next Task"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"command": "agentCoordinator.viewTaskBoard",
|
|
||||||
"title": "View Task Board"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"devDependencies": {
|
|
||||||
"@types/vscode": "^1.74.0",
|
|
||||||
"typescript": "^4.9.0"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
EOF
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 4: Direct Command Line Usage
|
|
||||||
|
|
||||||
For immediate use, you can interact with the MCP server directly:
|
|
||||||
|
|
||||||
1. **Start the Server**
|
|
||||||
```bash
|
|
||||||
cd /home/ra/agent_coordinator
|
|
||||||
iex -S mix
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **In another terminal, use the MCP tools**
|
|
||||||
```bash
|
|
||||||
# Test MCP server directly
|
|
||||||
cd /home/ra/agent_coordinator
|
|
||||||
mix run demo_mcp_server.exs
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 5: Production Deployment
|
|
||||||
|
|
||||||
1. **Create Systemd Service for MCP Server**
|
|
||||||
```bash
|
|
||||||
sudo tee /etc/systemd/system/agent-coordinator-mcp.service > /dev/null << EOF
|
|
||||||
[Unit]
|
|
||||||
Description=Agent Coordinator MCP Server
|
|
||||||
After=network.target nats.service
|
|
||||||
Requires=nats.service
|
|
||||||
|
|
||||||
[Service]
|
|
||||||
Type=simple
|
|
||||||
User=ra
|
|
||||||
WorkingDirectory=/home/ra/agent_coordinator
|
|
||||||
Environment=MIX_ENV=prod
|
|
||||||
Environment=NATS_HOST=localhost
|
|
||||||
Environment=NATS_PORT=4222
|
|
||||||
ExecStart=/usr/bin/mix run --no-halt
|
|
||||||
Restart=always
|
|
||||||
RestartSec=5
|
|
||||||
|
|
||||||
[Install]
|
|
||||||
WantedBy=multi-user.target
|
|
||||||
EOF
|
|
||||||
|
|
||||||
sudo systemctl daemon-reload
|
|
||||||
sudo systemctl enable agent-coordinator-mcp
|
|
||||||
sudo systemctl start agent-coordinator-mcp
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Check Status**
|
|
||||||
```bash
|
|
||||||
sudo systemctl status agent-coordinator-mcp
|
|
||||||
sudo journalctl -fu agent-coordinator-mcp
|
|
||||||
```
|
|
||||||
|
|
||||||
5
asdf.txt
Normal file
5
asdf.txt
Normal file
@@ -0,0 +1,5 @@
|
|||||||
|
⓪⓫⓬⓭⓮⓯⓰⓱⓲⓳⓴⓵⓶⓷⓸⓹⓺⓻⓼⓽⓾⓿─━│┃┄┅┆┇┈┉┊┋┌┍┎┏┐┑┒┓└┕┖┗┘┙┚┛├┝┞┟┠┡┢┣┤┥┦┧┨┩┪┫┬┭┮┯┰┱┲┳┴┵┶┷┸┹┺┻┼┽┾┿╀╁╂╃╄╅╆╇╈╉╊╋╌╍╎╏═║╒╓╔╕╖╗╘╙╚╛╜╝╞╟╠╡╢╣╤╥╦╧╨╩╪╫╬╭╮╯╰╱╲╳╴╵╶╷╸╹╺╻╼╽╾╿▀
|
||||||
|
▁▂▃▄▅▆▇█▉▊▋▌▍▎▏▐░▒▓▔▕▖▗▘▙▚▛▜▝▞▟■□▢▣▤▥▦▧▨▩▪▫▬▭▮▯▰▱▲△▴▵▶▷▸▹►▻▼▽▾▿◀◁◂◃◄◅◆◇◈◉◊○◌◍◎●◐◑◒◓◔◕◖◗◘◙◚◛◜◝◞◟◠◡◢◣◤◥◦◧◨◩◪◫◬◭◮◯◰◱◲◳◴◵◶◷◸◹◺◻◼◽◾◿☀☁☂☃☄★☆☇☈☉☊☋☌☍☎☏☐☑☒☓☔
|
||||||
|
☕☖☗☘☙☚☛☜☝☞☟☠☡☢☣☤☥☦☧☨☩☪☫☬☭☮☯☰☱☲☳☴☵☶☷☸☹☺☻☼☽☾☿♀♁♂♃♄♅♆♇♈♉♊♋♌♍♎♏♐♑♒♓♔♕♖♗♘♙♚♛♜♝♞♟♠♡♢♣♤♥♦♧♨♩♪♫♬♭♮♯♰♱♲♳♴♵♶♷♸♹♺♻♼♽♾♿⚀⚁⚂⚃⚄⚅
|
||||||
|
⚆⚇⚈⚉⚊⚋⚌⚍⚎⚏⚐⚑⚒⚓⚔⚕⚖⚗⚘⚙⚚⚛⚜⚝⚞⚟⚠⚡⚢⚣⚤⚥⚦⚧⚨⚩⚪⚫⚬⚭⚮⚯⚰⚱⚲⚳⚴⚵⚶⚷⚸⚹⚺⚻⚼⚽⚾⚿⛀⛁⛂⛃⛄⛅⛆⛇⛈⛉⛊⛋⛌⛍⛎⛏⛐⛑⛒⛓⛔⛕⛖⛗⛘⛙⛚⛛⛜⛝⛞⛟⛠⛡⛢⛣⛤⛥⛦⛧⛨⛩⛪⛫⛬⛭⛮
|
||||||
|
⛯⛰⛱⛲⛳⛴⛵⛶⛷⛸⛹⛺⛻⛼⛽⛾⛿✀✁✂✃✄✅✆✇✈✉✊✋✌✍✎✏✐✑✒✓✔✕✖✗✘✙✚✛✜✝✞✟✠✡✢✣✤✥✦✧✨✩✪✫✬✭✮✯✰✱✲✳✴✵✶✷✸✹✺✻✼✽✾✿❀❁
|
||||||
121
docs/PROJECT_CLEANUP_SUMMARY.md
Normal file
121
docs/PROJECT_CLEANUP_SUMMARY.md
Normal file
@@ -0,0 +1,121 @@
|
|||||||
|
# Agent Coordinator - Project Cleanup Summary
|
||||||
|
|
||||||
|
## 🎯 Mission Accomplished
|
||||||
|
|
||||||
|
The Agent Coordinator project has been successfully tidied up and made much more presentable for GitHub! Here's what was accomplished:
|
||||||
|
|
||||||
|
## ✅ Completed Tasks
|
||||||
|
|
||||||
|
### 1. **Updated README.md** ✨
|
||||||
|
- **Before**: Outdated README that didn't accurately describe the project
|
||||||
|
- **After**: Comprehensive, clear README that properly explains:
|
||||||
|
- What Agent Coordinator actually does (MCP server for multi-agent coordination)
|
||||||
|
- Key features and benefits
|
||||||
|
- Quick start guide with practical examples
|
||||||
|
- Clear architecture diagram
|
||||||
|
- Proper project structure documentation
|
||||||
|
- Alternative language implementation recommendations
|
||||||
|
|
||||||
|
### 2. **Cleaned Up Outdated Files** 🗑️
|
||||||
|
- **Removed**: `test_enhanced.exs`, `test_multi_codebase.exs`, `test_timeout_fix.exs`
|
||||||
|
- **Removed**: `README_old.md` (outdated version)
|
||||||
|
- **Removed**: Development artifacts (`erl_crash.dump`, `firebase-debug.log`)
|
||||||
|
- **Updated**: `.gitignore` to prevent future development artifacts
|
||||||
|
|
||||||
|
### 3. **Organized Documentation Structure** 📚
|
||||||
|
- **Created**: `docs/` directory for technical documentation
|
||||||
|
- **Moved**: Technical deep-dive documents to `docs/`
|
||||||
|
- `AUTO_HEARTBEAT.md` - Unified MCP server architecture
|
||||||
|
- `VSCODE_TOOL_INTEGRATION.md` - VS Code integration details
|
||||||
|
- `SEARCH_FILES_TIMEOUT_FIX.md` - Technical timeout solutions
|
||||||
|
- `DYNAMIC_TOOL_DISCOVERY.md` - Dynamic tool discovery system
|
||||||
|
- **Created**: `docs/README.md` - Documentation index and navigation
|
||||||
|
- **Result**: Clean root directory with organized technical docs
|
||||||
|
|
||||||
|
### 4. **Improved Project Structure** 🏗️
|
||||||
|
- **Updated**: Main `AgentCoordinator` module to reflect actual functionality
|
||||||
|
- **Before**: Just a placeholder "hello world" function
|
||||||
|
- **After**: Comprehensive module with:
|
||||||
|
- Proper documentation explaining the system
|
||||||
|
- Practical API functions (`register_agent`, `create_task`, `get_task_board`)
|
||||||
|
- Version and status information
|
||||||
|
- Real examples and usage patterns
|
||||||
|
|
||||||
|
### 5. **Created Language Implementation Guide** 🚀
|
||||||
|
- **New Document**: `docs/LANGUAGE_IMPLEMENTATIONS.md`
|
||||||
|
- **Comprehensive guide** for implementing Agent Coordinator in more accessible languages:
|
||||||
|
- **Go** (highest priority) - Single binary deployment, excellent concurrency
|
||||||
|
- **Python** (second priority) - Huge AI/ML community, familiar ecosystem
|
||||||
|
- **Rust** (third priority) - Maximum performance, memory safety
|
||||||
|
- **Node.js** (fourth priority) - Event-driven, web developer familiarity
|
||||||
|
- **Detailed implementation strategies** with code examples
|
||||||
|
- **Migration guides** for moving from Elixir to other languages
|
||||||
|
- **Performance comparisons** and adoption recommendations
|
||||||
|
|
||||||
|
## 🎨 Project Before vs After
|
||||||
|
|
||||||
|
### Before Cleanup
|
||||||
|
- ❌ Confusing README that didn't explain the actual purpose
|
||||||
|
- ❌ Development artifacts scattered in root directory
|
||||||
|
- ❌ Technical documentation mixed with main docs
|
||||||
|
- ❌ Main module was just a placeholder
|
||||||
|
- ❌ No guidance for developers wanting to use other languages
|
||||||
|
|
||||||
|
### After Cleanup
|
||||||
|
- ✅ Clear, comprehensive README explaining the MCP coordination system
|
||||||
|
- ✅ Clean root directory with organized structure
|
||||||
|
- ✅ Technical docs properly organized in `docs/` directory
|
||||||
|
- ✅ Main module reflects actual project functionality
|
||||||
|
- ✅ Detailed guides for implementing in Go, Python, Rust, Node.js
|
||||||
|
- ✅ Professional presentation suitable for open source
|
||||||
|
|
||||||
|
## 🌟 Key Improvements for GitHub Presentation
|
||||||
|
|
||||||
|
1. **Clear Value Proposition**: README immediately explains what the project does and why it's valuable
|
||||||
|
2. **Easy Getting Started**: Quick start section gets users running in minutes
|
||||||
|
3. **Professional Structure**: Well-organized directories and documentation
|
||||||
|
4. **Multiple Language Options**: Guidance for teams that prefer Go, Python, Rust, or Node.js
|
||||||
|
5. **Technical Deep-Dives**: Detailed docs for developers who want to understand the internals
|
||||||
|
6. **Real Examples**: Working code examples and practical usage patterns
|
||||||
|
|
||||||
|
## 🚀 Recommendations for Broader Adoption
|
||||||
|
|
||||||
|
Based on the cleanup analysis, here are the top recommendations:
|
||||||
|
|
||||||
|
### 1. **Implement Go Version First** (Highest Impact)
|
||||||
|
- **Why**: Single binary deployment, familiar to most developers, excellent performance
|
||||||
|
- **Effort**: 2-3 weeks development time
|
||||||
|
- **Impact**: Would significantly increase adoption
|
||||||
|
|
||||||
|
### 2. **Python Version Second** (AI/ML Community)
|
||||||
|
- **Why**: Huge ecosystem in AI space, very familiar to ML engineers
|
||||||
|
- **Effort**: 3-4 weeks development time
|
||||||
|
- **Impact**: Perfect for AI agent development teams
|
||||||
|
|
||||||
|
### 3. **Create Video Demos**
|
||||||
|
- **What**: Screen recordings showing agent coordination in action
|
||||||
|
- **Why**: Much easier to understand the value than reading docs
|
||||||
|
- **Effort**: 1-2 days
|
||||||
|
- **Impact**: Increases GitHub star rate and adoption
|
||||||
|
|
||||||
|
### 4. **Docker Compose Quick Start**
|
||||||
|
- **What**: Single `docker-compose up` command to get everything running
|
||||||
|
- **Why**: Eliminates setup friction for trying the project
|
||||||
|
- **Effort**: 1 day
|
||||||
|
- **Impact**: Lower barrier to entry
|
||||||
|
|
||||||
|
## 🎯 Current State
|
||||||
|
|
||||||
|
The Agent Coordinator project is now:
|
||||||
|
|
||||||
|
- ✅ **Professional**: Clean, well-organized, and properly documented
|
||||||
|
- ✅ **Accessible**: Clear explanations for what it does and how to use it
|
||||||
|
- ✅ **Extensible**: Guidance for implementing in other languages
|
||||||
|
- ✅ **Developer-Friendly**: Good project structure and documentation organization
|
||||||
|
- ✅ **GitHub-Ready**: Perfect for open source presentation and community adoption
|
||||||
|
|
||||||
|
The Elixir implementation remains the reference implementation with all advanced features, while the documentation now provides clear paths for teams to implement the same concepts in their preferred languages.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Result**: The Agent Coordinator project is now much more approachable and ready for the world to enjoy! 🌍
|
||||||
77
docs/README.md
Normal file
77
docs/README.md
Normal file
@@ -0,0 +1,77 @@
|
|||||||
|
# Agent Coordinator Documentation
|
||||||
|
|
||||||
|
This directory contains detailed technical documentation for the Agent Coordinator project.
|
||||||
|
|
||||||
|
## 📚 Documentation Index
|
||||||
|
|
||||||
|
### Core Documentation
|
||||||
|
- [Main README](../README.md) - Project overview, setup, and basic usage
|
||||||
|
- [CHANGELOG](../CHANGELOG.md) - Version history and changes
|
||||||
|
- [CONTRIBUTING](../CONTRIBUTING.md) - How to contribute to the project
|
||||||
|
|
||||||
|
### Technical Deep Dives
|
||||||
|
|
||||||
|
#### Architecture & Design
|
||||||
|
- [AUTO_HEARTBEAT.md](AUTO_HEARTBEAT.md) - Unified MCP server with automatic task tracking and heartbeat system
|
||||||
|
- [VSCODE_TOOL_INTEGRATION.md](VSCODE_TOOL_INTEGRATION.md) - VS Code tool integration and dynamic tool discovery
|
||||||
|
- [DYNAMIC_TOOL_DISCOVERY.md](DYNAMIC_TOOL_DISCOVERY.md) - How the system dynamically discovers and manages MCP tools
|
||||||
|
|
||||||
|
#### Implementation Details
|
||||||
|
- [SEARCH_FILES_TIMEOUT_FIX.md](SEARCH_FILES_TIMEOUT_FIX.md) - Technical details on timeout handling and GenServer call optimization
|
||||||
|
|
||||||
|
## 🎯 Key Concepts
|
||||||
|
|
||||||
|
### Agent Coordination
|
||||||
|
The Agent Coordinator is an MCP server that enables multiple AI agents to work together without conflicts by:
|
||||||
|
|
||||||
|
- **Task Distribution**: Automatically assigns tasks based on agent capabilities
|
||||||
|
- **Heartbeat Management**: Tracks agent liveness and activity
|
||||||
|
- **Cross-Codebase Support**: Coordinates work across multiple repositories
|
||||||
|
- **Tool Unification**: Provides a single interface to multiple external MCP servers
|
||||||
|
|
||||||
|
### Unified MCP Server
|
||||||
|
The system acts as a unified MCP server that internally manages external MCP servers while providing:
|
||||||
|
|
||||||
|
- **Automatic Task Tracking**: Every tool usage becomes a tracked task
|
||||||
|
- **Universal Heartbeat Coverage**: All operations maintain agent liveness
|
||||||
|
- **Dynamic Tool Discovery**: Automatically discovers tools from external servers
|
||||||
|
- **Seamless Integration**: Single interface for all MCP-compatible tools
|
||||||
|
|
||||||
|
### VS Code Integration
|
||||||
|
Advanced integration with VS Code through:
|
||||||
|
|
||||||
|
- **Native Tool Provider**: Direct access to VS Code Extension API
|
||||||
|
- **Permission System**: Granular security controls for VS Code operations
|
||||||
|
- **Multi-Agent Support**: Safe concurrent access to VS Code features
|
||||||
|
- **Workflow Integration**: VS Code tools participate in task coordination
|
||||||
|
|
||||||
|
## 🚀 Getting Started with Documentation
|
||||||
|
|
||||||
|
1. **New Users**: Start with the [Main README](../README.md)
|
||||||
|
2. **Developers**: Read [CONTRIBUTING](../CONTRIBUTING.md) and [AUTO_HEARTBEAT.md](AUTO_HEARTBEAT.md)
|
||||||
|
3. **VS Code Users**: Check out [VSCODE_TOOL_INTEGRATION.md](VSCODE_TOOL_INTEGRATION.md)
|
||||||
|
4. **Troubleshooting**: See [SEARCH_FILES_TIMEOUT_FIX.md](SEARCH_FILES_TIMEOUT_FIX.md) for common issues
|
||||||
|
|
||||||
|
## 📖 Documentation Standards
|
||||||
|
|
||||||
|
All documentation in this project follows these standards:
|
||||||
|
|
||||||
|
- **Clear Structure**: Hierarchical headings with descriptive titles
|
||||||
|
- **Code Examples**: Practical examples with expected outputs
|
||||||
|
- **Troubleshooting**: Common issues and their solutions
|
||||||
|
- **Implementation Details**: Technical specifics for developers
|
||||||
|
- **User Perspective**: Both end-user and developer viewpoints
|
||||||
|
|
||||||
|
## 🤝 Contributing to Documentation
|
||||||
|
|
||||||
|
When adding new documentation:
|
||||||
|
|
||||||
|
1. Place technical deep-dives in this `docs/` directory
|
||||||
|
2. Update this index file to reference new documents
|
||||||
|
3. Keep the main README focused on getting started
|
||||||
|
4. Include practical examples and troubleshooting sections
|
||||||
|
5. Use clear, descriptive headings and consistent formatting
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
📝 **Last Updated**: August 25, 2025
|
||||||
89
docs/SEARCH_FILES_TIMEOUT_FIX.md
Normal file
89
docs/SEARCH_FILES_TIMEOUT_FIX.md
Normal file
@@ -0,0 +1,89 @@
|
|||||||
|
# Search Files Timeout Fix
|
||||||
|
|
||||||
|
## Problem Description
|
||||||
|
|
||||||
|
The `search_files` tool (from the filesystem MCP server) was causing the agent-coordinator to exit with code 1 due to timeout issues. The error showed:
|
||||||
|
|
||||||
|
```
|
||||||
|
** (EXIT from #PID<0.95.0>) exited in: GenServer.call(AgentCoordinator.UnifiedMCPServer, {:handle_request, ...}, 5000)
|
||||||
|
** (EXIT) time out
|
||||||
|
```
|
||||||
|
|
||||||
|
## Root Cause Analysis
|
||||||
|
|
||||||
|
The issue was a timeout mismatch in the GenServer call chain:
|
||||||
|
|
||||||
|
1. **External tool calls** (like `search_files`) can take longer than 5 seconds to complete
|
||||||
|
2. **TaskRegistry and Inbox modules** were using default 5-second GenServer timeouts
|
||||||
|
3. During tool execution, **heartbeat operations** are called via `TaskRegistry.heartbeat_agent/1`
|
||||||
|
4. When the external tool took longer than 5 seconds, the heartbeat call would timeout
|
||||||
|
5. This caused the entire tool call to fail with exit code 1
|
||||||
|
|
||||||
|
## Call Chain Analysis
|
||||||
|
|
||||||
|
```
|
||||||
|
External MCP Tool Call (search_files)
|
||||||
|
↓
|
||||||
|
UnifiedMCPServer.handle_mcp_request (60s timeout) ✓
|
||||||
|
↓
|
||||||
|
MCPServerManager.route_tool_call (60s timeout) ✓
|
||||||
|
↓
|
||||||
|
call_external_tool
|
||||||
|
↓
|
||||||
|
TaskRegistry.heartbeat_agent (5s timeout) ❌ ← TIMEOUT HERE
|
||||||
|
```
|
||||||
|
|
||||||
|
## Solution Applied
|
||||||
|
|
||||||
|
Updated GenServer call timeouts in the following modules:
|
||||||
|
|
||||||
|
### TaskRegistry Module
|
||||||
|
- `register_agent/1`: 5s → 30s
|
||||||
|
- `heartbeat_agent/1`: 5s → 30s ← **Most Critical Fix**
|
||||||
|
- `update_task_activity/3`: 5s → 30s
|
||||||
|
- `assign_task/1`: 5s → 30s
|
||||||
|
- `create_task/3`: 5s → 30s
|
||||||
|
- `complete_task/1`: 5s → 30s
|
||||||
|
- `get_agent_current_task/1`: 5s → 15s
|
||||||
|
|
||||||
|
### Inbox Module
|
||||||
|
- `add_task/2`: 5s → 30s
|
||||||
|
- `complete_current_task/1`: 5s → 30s
|
||||||
|
- `get_next_task/1`: 5s → 15s
|
||||||
|
- `get_status/1`: 5s → 15s
|
||||||
|
- `list_tasks/1`: 5s → 15s
|
||||||
|
- `get_current_task/1`: 5s → 15s
|
||||||
|
|
||||||
|
## Timeout Strategy
|
||||||
|
|
||||||
|
- **Long operations** (registration, task creation, heartbeat): **30 seconds**
|
||||||
|
- **Read operations** (status, get tasks, list): **15 seconds**
|
||||||
|
- **External tool routing**: **60 seconds** (already correct)
|
||||||
|
|
||||||
|
## Impact
|
||||||
|
|
||||||
|
This fix ensures that:
|
||||||
|
|
||||||
|
1. ✅ `search_files` and other long-running external tools won't cause timeouts
|
||||||
|
2. ✅ Agent heartbeat operations can complete successfully during tool execution
|
||||||
|
3. ✅ The agent-coordinator won't exit with code 1 due to timeout issues
|
||||||
|
4. ✅ All automatic task tracking continues to work properly
|
||||||
|
|
||||||
|
## Files Modified
|
||||||
|
|
||||||
|
- `/lib/agent_coordinator/task_registry.ex` - Updated GenServer call timeouts
|
||||||
|
- `/lib/agent_coordinator/inbox.ex` - Updated GenServer call timeouts
|
||||||
|
|
||||||
|
## Verification
|
||||||
|
|
||||||
|
The fix can be verified by:
|
||||||
|
|
||||||
|
1. Running the agent-coordinator with external MCP servers
|
||||||
|
2. Executing `search_files` or other filesystem tools on large directories
|
||||||
|
3. Confirming no timeout errors occur and exit code remains 0
|
||||||
|
|
||||||
|
## Future Considerations
|
||||||
|
|
||||||
|
- Consider making timeouts configurable via application config
|
||||||
|
- Monitor for any other GenServer calls that might need timeout adjustments
|
||||||
|
- Add timeout logging to help identify future timeout issues
|
||||||
@@ -8,9 +8,9 @@ AgentCoordinator MCP server programmatically.
|
|||||||
|
|
||||||
import json
|
import json
|
||||||
import subprocess
|
import subprocess
|
||||||
import sys
|
|
||||||
import uuid
|
import uuid
|
||||||
from typing import Dict, Any, Optional
|
from typing import Any, Dict, Optional
|
||||||
|
|
||||||
|
|
||||||
class AgentCoordinatorMCP:
|
class AgentCoordinatorMCP:
|
||||||
def __init__(self, launcher_path: str = "./scripts/mcp_launcher.sh"):
|
def __init__(self, launcher_path: str = "./scripts/mcp_launcher.sh"):
|
||||||
|
|||||||
@@ -1,18 +1,269 @@
|
|||||||
defmodule AgentCoordinator do
|
defmodule AgentCoordinator do
|
||||||
@moduledoc """
|
@moduledoc """
|
||||||
Documentation for `AgentCoordinator`.
|
Agent Coordinator - A Model Context Protocol (MCP) server for multi-agent coordination.
|
||||||
|
|
||||||
|
Agent Coordinator enables multiple AI agents to work together seamlessly across codebases
|
||||||
|
without conflicts. It provides intelligent task distribution, real-time communication,
|
||||||
|
and cross-codebase coordination through a unified MCP interface.
|
||||||
|
|
||||||
|
## Key Features
|
||||||
|
|
||||||
|
- **Multi-Agent Coordination**: Register multiple AI agents with different capabilities
|
||||||
|
- **Intelligent Task Distribution**: Automatically assigns tasks based on agent capabilities
|
||||||
|
- **Cross-Codebase Support**: Coordinate work across multiple repositories
|
||||||
|
- **Unified MCP Interface**: Single server providing access to multiple external MCP servers
|
||||||
|
- **Automatic Task Tracking**: Every tool usage becomes a tracked task
|
||||||
|
- **Real-Time Communication**: Heartbeat system for agent liveness and coordination
|
||||||
|
|
||||||
|
## Quick Start
|
||||||
|
|
||||||
|
To start the Agent Coordinator:
|
||||||
|
|
||||||
|
# Start the MCP server
|
||||||
|
./scripts/mcp_launcher.sh
|
||||||
|
|
||||||
|
# Or in development mode
|
||||||
|
iex -S mix
|
||||||
|
|
||||||
|
## Main Components
|
||||||
|
|
||||||
|
- `AgentCoordinator.MCPServer` - Core MCP protocol implementation
|
||||||
|
- `AgentCoordinator.TaskRegistry` - Task management and agent coordination
|
||||||
|
- `AgentCoordinator.UnifiedMCPServer` - Unified interface to external MCP servers
|
||||||
|
- `AgentCoordinator.CodebaseRegistry` - Multi-repository support
|
||||||
|
- `AgentCoordinator.VSCodeToolProvider` - VS Code integration tools
|
||||||
|
|
||||||
|
## MCP Tools Available
|
||||||
|
|
||||||
|
### Agent Coordination
|
||||||
|
- `register_agent` - Register an agent with capabilities
|
||||||
|
- `create_task` - Create tasks with requirements
|
||||||
|
- `get_next_task` - Get assigned tasks
|
||||||
|
- `complete_task` - Mark tasks complete
|
||||||
|
- `get_task_board` - View all agent status
|
||||||
|
- `heartbeat` - Maintain agent liveness
|
||||||
|
|
||||||
|
### Codebase Management
|
||||||
|
- `register_codebase` - Register repositories
|
||||||
|
- `create_cross_codebase_task` - Tasks spanning multiple repos
|
||||||
|
- `add_codebase_dependency` - Define repository relationships
|
||||||
|
|
||||||
|
### External Tool Access
|
||||||
|
All tools from external MCP servers are automatically available through
|
||||||
|
the unified interface, including filesystem, context7, memory, and other servers.
|
||||||
|
|
||||||
|
## Usage Example
|
||||||
|
|
||||||
|
# Register an agent
|
||||||
|
AgentCoordinator.MCPServer.handle_mcp_request(%{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "register_agent",
|
||||||
|
"arguments" => %{
|
||||||
|
"name" => "MyAgent",
|
||||||
|
"capabilities" => ["coding", "testing"]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
See the documentation in `docs/` for detailed implementation guides.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
alias AgentCoordinator.MCPServer
|
||||||
|
|
||||||
@doc """
|
@doc """
|
||||||
Hello world.
|
Get the version of Agent Coordinator.
|
||||||
|
|
||||||
## Examples
|
## Examples
|
||||||
|
|
||||||
iex> AgentCoordinator.hello()
|
iex> AgentCoordinator.version()
|
||||||
:world
|
"0.1.0"
|
||||||
|
|
||||||
"""
|
"""
|
||||||
def hello do
|
def version do
|
||||||
:world
|
Application.spec(:agent_coordinator, :vsn) |> to_string()
|
||||||
|
end
|
||||||
|
|
||||||
|
@doc """
|
||||||
|
Get the current status of the Agent Coordinator system.
|
||||||
|
|
||||||
|
Returns information about active agents, tasks, and external MCP servers.
|
||||||
|
|
||||||
|
## Examples
|
||||||
|
|
||||||
|
iex> AgentCoordinator.status()
|
||||||
|
%{
|
||||||
|
agents: 2,
|
||||||
|
active_tasks: 1,
|
||||||
|
external_servers: 3,
|
||||||
|
uptime: 12345
|
||||||
|
}
|
||||||
|
|
||||||
|
"""
|
||||||
|
def status do
|
||||||
|
with {:ok, board} <- get_task_board(),
|
||||||
|
{:ok, server_status} <- get_server_status() do
|
||||||
|
%{
|
||||||
|
agents: length(board[:agents] || []),
|
||||||
|
active_tasks: count_active_tasks(board),
|
||||||
|
external_servers: count_active_servers(server_status),
|
||||||
|
uptime: get_uptime()
|
||||||
|
}
|
||||||
|
else
|
||||||
|
_ -> %{status: :error, message: "Unable to retrieve system status"}
|
||||||
|
end
|
||||||
|
end
|
||||||
|
|
||||||
|
@doc """
|
||||||
|
Get the current task board showing all agents and their status.
|
||||||
|
|
||||||
|
Returns information about all registered agents, their current tasks,
|
||||||
|
and overall system status.
|
||||||
|
|
||||||
|
## Examples
|
||||||
|
|
||||||
|
iex> {:ok, board} = AgentCoordinator.get_task_board()
|
||||||
|
iex> is_map(board)
|
||||||
|
true
|
||||||
|
|
||||||
|
"""
|
||||||
|
def get_task_board do
|
||||||
|
request = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{"name" => "get_task_board", "arguments" => %{}},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => System.unique_integer()
|
||||||
|
}
|
||||||
|
|
||||||
|
case MCPServer.handle_mcp_request(request) do
|
||||||
|
%{"result" => %{"content" => [%{"text" => text}]}} ->
|
||||||
|
{:ok, Jason.decode!(text)}
|
||||||
|
|
||||||
|
%{"error" => error} ->
|
||||||
|
{:error, error}
|
||||||
|
|
||||||
|
_ ->
|
||||||
|
{:error, "Unexpected response format"}
|
||||||
|
end
|
||||||
|
end
|
||||||
|
|
||||||
|
@doc """
|
||||||
|
Register a new agent with the coordination system.
|
||||||
|
|
||||||
|
## Parameters
|
||||||
|
|
||||||
|
- `name` - Agent name (string)
|
||||||
|
- `capabilities` - List of capabilities (["coding", "testing", ...])
|
||||||
|
- `opts` - Optional parameters (codebase_id, workspace_path, etc.)
|
||||||
|
|
||||||
|
## Examples
|
||||||
|
|
||||||
|
iex> {:ok, result} = AgentCoordinator.register_agent("TestAgent", ["coding"])
|
||||||
|
iex> is_map(result)
|
||||||
|
true
|
||||||
|
|
||||||
|
"""
|
||||||
|
def register_agent(name, capabilities, opts \\ []) do
|
||||||
|
args =
|
||||||
|
%{
|
||||||
|
"name" => name,
|
||||||
|
"capabilities" => capabilities
|
||||||
|
}
|
||||||
|
|> add_optional_arg("codebase_id", opts[:codebase_id])
|
||||||
|
|> add_optional_arg("workspace_path", opts[:workspace_path])
|
||||||
|
|> add_optional_arg("cross_codebase_capable", opts[:cross_codebase_capable])
|
||||||
|
|
||||||
|
request = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{"name" => "register_agent", "arguments" => args},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => System.unique_integer()
|
||||||
|
}
|
||||||
|
|
||||||
|
case MCPServer.handle_mcp_request(request) do
|
||||||
|
%{"result" => %{"content" => [%{"text" => text}]}} ->
|
||||||
|
{:ok, Jason.decode!(text)}
|
||||||
|
|
||||||
|
%{"error" => error} ->
|
||||||
|
{:error, error}
|
||||||
|
|
||||||
|
_ ->
|
||||||
|
{:error, "Unexpected response format"}
|
||||||
|
end
|
||||||
|
end
|
||||||
|
|
||||||
|
@doc """
|
||||||
|
Create a new task in the coordination system.
|
||||||
|
|
||||||
|
## Parameters
|
||||||
|
|
||||||
|
- `title` - Task title (string)
|
||||||
|
- `description` - Task description (string)
|
||||||
|
- `opts` - Optional parameters (priority, codebase_id, file_paths, etc.)
|
||||||
|
|
||||||
|
## Examples
|
||||||
|
|
||||||
|
iex> {:ok, result} = AgentCoordinator.create_task("Test Task", "Test description")
|
||||||
|
iex> is_map(result)
|
||||||
|
true
|
||||||
|
|
||||||
|
"""
|
||||||
|
def create_task(title, description, opts \\ []) do
|
||||||
|
args =
|
||||||
|
%{
|
||||||
|
"title" => title,
|
||||||
|
"description" => description
|
||||||
|
}
|
||||||
|
|> add_optional_arg("priority", opts[:priority])
|
||||||
|
|> add_optional_arg("codebase_id", opts[:codebase_id])
|
||||||
|
|> add_optional_arg("file_paths", opts[:file_paths])
|
||||||
|
|> add_optional_arg("required_capabilities", opts[:required_capabilities])
|
||||||
|
|
||||||
|
request = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{"name" => "create_task", "arguments" => args},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => System.unique_integer()
|
||||||
|
}
|
||||||
|
|
||||||
|
case MCPServer.handle_mcp_request(request) do
|
||||||
|
%{"result" => %{"content" => [%{"text" => text}]}} ->
|
||||||
|
{:ok, Jason.decode!(text)}
|
||||||
|
|
||||||
|
%{"error" => error} ->
|
||||||
|
{:error, error}
|
||||||
|
|
||||||
|
_ ->
|
||||||
|
{:error, "Unexpected response format"}
|
||||||
|
end
|
||||||
|
end
|
||||||
|
|
||||||
|
# Private helpers
|
||||||
|
|
||||||
|
defp add_optional_arg(args, _key, nil), do: args
|
||||||
|
defp add_optional_arg(args, key, value), do: Map.put(args, key, value)
|
||||||
|
|
||||||
|
defp count_active_tasks(%{agents: agents}) do
|
||||||
|
Enum.count(agents, fn agent ->
|
||||||
|
Map.get(agent, "current_task") != nil
|
||||||
|
end)
|
||||||
|
end
|
||||||
|
|
||||||
|
defp count_active_tasks(_), do: 0
|
||||||
|
|
||||||
|
defp count_active_servers(server_status) when is_map(server_status) do
|
||||||
|
Map.get(server_status, :active_servers, 0)
|
||||||
|
end
|
||||||
|
|
||||||
|
defp get_server_status do
|
||||||
|
# This would call UnifiedMCPServer to get external server status
|
||||||
|
# For now, return a placeholder
|
||||||
|
{:ok, %{active_servers: 3}}
|
||||||
|
end
|
||||||
|
|
||||||
|
defp get_uptime do
|
||||||
|
# Get system uptime in seconds
|
||||||
|
{uptime_ms, _} = :erlang.statistics(:wall_clock)
|
||||||
|
div(uptime_ms, 1000)
|
||||||
end
|
end
|
||||||
end
|
end
|
||||||
|
|||||||
@@ -74,13 +74,15 @@ defmodule AgentCoordinator.Agent do
|
|||||||
|
|
||||||
def can_handle?(agent, task) do
|
def can_handle?(agent, task) do
|
||||||
# Check if agent is in the same codebase or can handle cross-codebase tasks
|
# Check if agent is in the same codebase or can handle cross-codebase tasks
|
||||||
codebase_compatible = agent.codebase_id == task.codebase_id or
|
codebase_compatible =
|
||||||
|
agent.codebase_id == task.codebase_id or
|
||||||
Map.get(agent.metadata, :cross_codebase_capable, false)
|
Map.get(agent.metadata, :cross_codebase_capable, false)
|
||||||
|
|
||||||
# Simple capability matching - can be enhanced
|
# Simple capability matching - can be enhanced
|
||||||
required_capabilities = Map.get(task.metadata, :required_capabilities, [])
|
required_capabilities = Map.get(task.metadata, :required_capabilities, [])
|
||||||
|
|
||||||
capability_match = case required_capabilities do
|
capability_match =
|
||||||
|
case required_capabilities do
|
||||||
[] -> true
|
[] -> true
|
||||||
caps -> Enum.any?(caps, fn cap -> cap in agent.capabilities end)
|
caps -> Enum.any?(caps, fn cap -> cap in agent.capabilities end)
|
||||||
end
|
end
|
||||||
|
|||||||
@@ -18,23 +18,18 @@ defmodule AgentCoordinator.Application do
|
|||||||
{Phoenix.PubSub, name: AgentCoordinator.PubSub},
|
{Phoenix.PubSub, name: AgentCoordinator.PubSub},
|
||||||
|
|
||||||
# Codebase registry for multi-codebase coordination
|
# Codebase registry for multi-codebase coordination
|
||||||
{AgentCoordinator.CodebaseRegistry, nats: if(enable_persistence, do: nats_config(), else: nil)},
|
{AgentCoordinator.CodebaseRegistry,
|
||||||
|
nats: if(enable_persistence, do: nats_config(), else: nil)},
|
||||||
|
|
||||||
# Task registry with NATS integration (conditionally add persistence)
|
# Task registry with NATS integration (conditionally add persistence)
|
||||||
{AgentCoordinator.TaskRegistry, nats: if(enable_persistence, do: nats_config(), else: nil)},
|
{AgentCoordinator.TaskRegistry, nats: if(enable_persistence, do: nats_config(), else: nil)},
|
||||||
|
|
||||||
# MCP Server Manager (manages external MCP servers)
|
# Unified MCP server (includes external server management, session tracking, and auto-registration)
|
||||||
{AgentCoordinator.MCPServerManager, config_file: Application.get_env(:agent_coordinator, :mcp_config_file, "mcp_servers.json")},
|
|
||||||
|
|
||||||
# MCP server
|
|
||||||
AgentCoordinator.MCPServer,
|
AgentCoordinator.MCPServer,
|
||||||
|
|
||||||
# Auto-heartbeat manager
|
# Auto-heartbeat manager
|
||||||
AgentCoordinator.AutoHeartbeat,
|
AgentCoordinator.AutoHeartbeat,
|
||||||
|
|
||||||
# Enhanced MCP server with automatic heartbeats
|
|
||||||
AgentCoordinator.EnhancedMCPServer,
|
|
||||||
|
|
||||||
# Dynamic supervisor for agent inboxes
|
# Dynamic supervisor for agent inboxes
|
||||||
{DynamicSupervisor, name: AgentCoordinator.InboxSupervisor, strategy: :one_for_one}
|
{DynamicSupervisor, name: AgentCoordinator.InboxSupervisor, strategy: :one_for_one}
|
||||||
]
|
]
|
||||||
|
|||||||
@@ -31,7 +31,8 @@ defmodule AgentCoordinator.AutoHeartbeat do
|
|||||||
"""
|
"""
|
||||||
def register_agent_with_heartbeat(name, capabilities, agent_context \\ %{}) do
|
def register_agent_with_heartbeat(name, capabilities, agent_context \\ %{}) do
|
||||||
# Convert capabilities to strings if they're atoms
|
# Convert capabilities to strings if they're atoms
|
||||||
string_capabilities = Enum.map(capabilities, fn
|
string_capabilities =
|
||||||
|
Enum.map(capabilities, fn
|
||||||
cap when is_atom(cap) -> Atom.to_string(cap)
|
cap when is_atom(cap) -> Atom.to_string(cap)
|
||||||
cap when is_binary(cap) -> cap
|
cap when is_binary(cap) -> cap
|
||||||
end)
|
end)
|
||||||
@@ -100,10 +101,14 @@ defmodule AgentCoordinator.AutoHeartbeat do
|
|||||||
"method" => "tools/call",
|
"method" => "tools/call",
|
||||||
"params" => %{
|
"params" => %{
|
||||||
"name" => "create_task",
|
"name" => "create_task",
|
||||||
"arguments" => Map.merge(%{
|
"arguments" =>
|
||||||
|
Map.merge(
|
||||||
|
%{
|
||||||
"title" => title,
|
"title" => title,
|
||||||
"description" => description
|
"description" => description
|
||||||
}, opts)
|
},
|
||||||
|
opts
|
||||||
|
)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -173,8 +178,9 @@ defmodule AgentCoordinator.AutoHeartbeat do
|
|||||||
# Start new timer
|
# Start new timer
|
||||||
timer_ref = Process.send_after(self(), {:heartbeat_timer, agent_id}, @heartbeat_interval)
|
timer_ref = Process.send_after(self(), {:heartbeat_timer, agent_id}, @heartbeat_interval)
|
||||||
|
|
||||||
new_state = %{state |
|
new_state = %{
|
||||||
timers: Map.put(state.timers, agent_id, timer_ref),
|
state
|
||||||
|
| timers: Map.put(state.timers, agent_id, timer_ref),
|
||||||
agent_contexts: Map.put(state.agent_contexts, agent_id, context)
|
agent_contexts: Map.put(state.agent_contexts, agent_id, context)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -187,8 +193,9 @@ defmodule AgentCoordinator.AutoHeartbeat do
|
|||||||
Process.cancel_timer(state.timers[agent_id])
|
Process.cancel_timer(state.timers[agent_id])
|
||||||
end
|
end
|
||||||
|
|
||||||
new_state = %{state |
|
new_state = %{
|
||||||
timers: Map.delete(state.timers, agent_id),
|
state
|
||||||
|
| timers: Map.delete(state.timers, agent_id),
|
||||||
agent_contexts: Map.delete(state.agent_contexts, agent_id)
|
agent_contexts: Map.delete(state.agent_contexts, agent_id)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -20,7 +20,7 @@ defmodule AgentCoordinator.Client do
|
|||||||
"""
|
"""
|
||||||
|
|
||||||
use GenServer
|
use GenServer
|
||||||
alias AgentCoordinator.{EnhancedMCPServer, AutoHeartbeat}
|
alias AgentCoordinator.AutoHeartbeat
|
||||||
|
|
||||||
defstruct [
|
defstruct [
|
||||||
:agent_id,
|
:agent_id,
|
||||||
@@ -108,11 +108,11 @@ defmodule AgentCoordinator.Client do
|
|||||||
# Server callbacks
|
# Server callbacks
|
||||||
|
|
||||||
def init(config) do
|
def init(config) do
|
||||||
# Register with enhanced MCP server
|
# Register with task registry
|
||||||
case EnhancedMCPServer.register_agent_with_session(
|
case AgentCoordinator.TaskRegistry.register_agent(
|
||||||
config.agent_name,
|
config.agent_name,
|
||||||
config.capabilities,
|
config.capabilities,
|
||||||
self()
|
session_pid: self()
|
||||||
) do
|
) do
|
||||||
{:ok, agent_id} ->
|
{:ok, agent_id} ->
|
||||||
state = %__MODULE__{
|
state = %__MODULE__{
|
||||||
@@ -151,10 +151,14 @@ defmodule AgentCoordinator.Client do
|
|||||||
end
|
end
|
||||||
|
|
||||||
def handle_call({:create_task, title, description, opts}, _from, state) do
|
def handle_call({:create_task, title, description, opts}, _from, state) do
|
||||||
arguments = Map.merge(%{
|
arguments =
|
||||||
|
Map.merge(
|
||||||
|
%{
|
||||||
"title" => title,
|
"title" => title,
|
||||||
"description" => description
|
"description" => description
|
||||||
}, opts)
|
},
|
||||||
|
opts
|
||||||
|
)
|
||||||
|
|
||||||
request = %{
|
request = %{
|
||||||
"method" => "tools/call",
|
"method" => "tools/call",
|
||||||
@@ -182,9 +186,9 @@ defmodule AgentCoordinator.Client do
|
|||||||
end
|
end
|
||||||
|
|
||||||
def handle_call(:get_task_board, _from, state) do
|
def handle_call(:get_task_board, _from, state) do
|
||||||
case EnhancedMCPServer.get_enhanced_task_board() do
|
case AgentCoordinator.TaskRegistry.get_task_board() do
|
||||||
{:ok, board} ->
|
task_board when is_map(task_board) ->
|
||||||
{:reply, {:ok, board}, update_last_heartbeat(state)}
|
{:reply, {:ok, task_board}, update_last_heartbeat(state)}
|
||||||
|
|
||||||
{:error, reason} ->
|
{:error, reason} ->
|
||||||
{:reply, {:error, reason}, state}
|
{:reply, {:error, reason}, state}
|
||||||
@@ -266,12 +270,10 @@ defmodule AgentCoordinator.Client do
|
|||||||
# Private helpers
|
# Private helpers
|
||||||
|
|
||||||
defp enhanced_mcp_call(request, state) do
|
defp enhanced_mcp_call(request, state) do
|
||||||
session_info = %{
|
# Add agent_id to the request for the MCP server
|
||||||
agent_id: state.agent_id,
|
request_with_agent = Map.put(request, "agent_id", state.agent_id)
|
||||||
session_pid: state.session_pid
|
|
||||||
}
|
|
||||||
|
|
||||||
case EnhancedMCPServer.handle_enhanced_mcp_request(request, session_info) do
|
case AgentCoordinator.MCPServer.handle_mcp_request(request_with_agent) do
|
||||||
%{"result" => %{"content" => [%{"text" => response_json}]}} = response ->
|
%{"result" => %{"content" => [%{"text" => response_json}]}} = response ->
|
||||||
case Jason.decode(response_json) do
|
case Jason.decode(response_json) do
|
||||||
{:ok, data} ->
|
{:ok, data} ->
|
||||||
@@ -300,7 +302,7 @@ defmodule AgentCoordinator.Client do
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
case EnhancedMCPServer.handle_enhanced_mcp_request(request) do
|
case AgentCoordinator.MCPServer.handle_mcp_request(request) do
|
||||||
%{"result" => _} -> :ok
|
%{"result" => _} -> :ok
|
||||||
%{"error" => %{"message" => message}} -> {:error, message}
|
%{"error" => %{"message" => message}} -> {:error, message}
|
||||||
_ -> {:error, :unknown_heartbeat_error}
|
_ -> {:error, :unknown_heartbeat_error}
|
||||||
|
|||||||
@@ -1,266 +0,0 @@
|
|||||||
defmodule AgentCoordinator.EnhancedMCPServer do
|
|
||||||
@moduledoc """
|
|
||||||
Enhanced MCP server with automatic heartbeat management and collision detection.
|
|
||||||
|
|
||||||
This module extends the base MCP server with:
|
|
||||||
1. Automatic heartbeats on every operation
|
|
||||||
2. Agent session tracking
|
|
||||||
3. Enhanced collision detection
|
|
||||||
4. Automatic agent cleanup on disconnect
|
|
||||||
"""
|
|
||||||
|
|
||||||
use GenServer
|
|
||||||
alias AgentCoordinator.{MCPServer, AutoHeartbeat, TaskRegistry}
|
|
||||||
|
|
||||||
# Track active agent sessions
|
|
||||||
defstruct [
|
|
||||||
:agent_sessions,
|
|
||||||
:session_monitors
|
|
||||||
]
|
|
||||||
|
|
||||||
# Client API
|
|
||||||
|
|
||||||
def start_link(opts \\ []) do
|
|
||||||
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
|
|
||||||
end
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Enhanced MCP request handler with automatic heartbeat management
|
|
||||||
"""
|
|
||||||
def handle_enhanced_mcp_request(request, session_info \\ %{}) do
|
|
||||||
GenServer.call(__MODULE__, {:enhanced_mcp_request, request, session_info})
|
|
||||||
end
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Register an agent with enhanced session tracking
|
|
||||||
"""
|
|
||||||
def register_agent_with_session(name, capabilities, session_pid \\ self()) do
|
|
||||||
GenServer.call(__MODULE__, {:register_agent_with_session, name, capabilities, session_pid})
|
|
||||||
end
|
|
||||||
|
|
||||||
# Server callbacks
|
|
||||||
|
|
||||||
def init(_opts) do
|
|
||||||
state = %__MODULE__{
|
|
||||||
agent_sessions: %{},
|
|
||||||
session_monitors: %{}
|
|
||||||
}
|
|
||||||
|
|
||||||
{:ok, state}
|
|
||||||
end
|
|
||||||
|
|
||||||
def handle_call({:enhanced_mcp_request, request, session_info}, {from_pid, _}, state) do
|
|
||||||
# Extract agent_id from session or request
|
|
||||||
agent_id = extract_agent_id(request, session_info, state)
|
|
||||||
|
|
||||||
# If we have an agent_id, send heartbeat before and after operation
|
|
||||||
enhanced_result =
|
|
||||||
case agent_id do
|
|
||||||
nil ->
|
|
||||||
# No agent context, use normal MCP processing
|
|
||||||
MCPServer.handle_mcp_request(request)
|
|
||||||
|
|
||||||
id ->
|
|
||||||
# Send pre-operation heartbeat
|
|
||||||
pre_heartbeat = TaskRegistry.heartbeat_agent(id)
|
|
||||||
|
|
||||||
# Process the request
|
|
||||||
result = MCPServer.handle_mcp_request(request)
|
|
||||||
|
|
||||||
# Send post-operation heartbeat and update session activity
|
|
||||||
post_heartbeat = TaskRegistry.heartbeat_agent(id)
|
|
||||||
update_session_activity(state, id, from_pid)
|
|
||||||
|
|
||||||
# Add heartbeat metadata to successful responses
|
|
||||||
case result do
|
|
||||||
%{"result" => _} = success ->
|
|
||||||
Map.put(success, "_heartbeat_metadata", %{
|
|
||||||
agent_id: id,
|
|
||||||
pre_heartbeat: pre_heartbeat,
|
|
||||||
post_heartbeat: post_heartbeat,
|
|
||||||
timestamp: DateTime.utc_now()
|
|
||||||
})
|
|
||||||
|
|
||||||
error_result ->
|
|
||||||
error_result
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
{:reply, enhanced_result, state}
|
|
||||||
end
|
|
||||||
|
|
||||||
def handle_call({:register_agent_with_session, name, capabilities, session_pid}, _from, state) do
|
|
||||||
# Convert capabilities to strings if they're atoms
|
|
||||||
string_capabilities =
|
|
||||||
Enum.map(capabilities, fn
|
|
||||||
cap when is_atom(cap) -> Atom.to_string(cap)
|
|
||||||
cap when is_binary(cap) -> cap
|
|
||||||
end)
|
|
||||||
|
|
||||||
# Register the agent normally first
|
|
||||||
case MCPServer.handle_mcp_request(%{
|
|
||||||
"method" => "tools/call",
|
|
||||||
"params" => %{
|
|
||||||
"name" => "register_agent",
|
|
||||||
"arguments" => %{"name" => name, "capabilities" => string_capabilities}
|
|
||||||
}
|
|
||||||
}) do
|
|
||||||
%{"result" => %{"content" => [%{"text" => response_json}]}} ->
|
|
||||||
case Jason.decode(response_json) do
|
|
||||||
{:ok, %{"agent_id" => agent_id}} ->
|
|
||||||
# Track the session
|
|
||||||
monitor_ref = Process.monitor(session_pid)
|
|
||||||
|
|
||||||
new_state = %{
|
|
||||||
state
|
|
||||||
| agent_sessions:
|
|
||||||
Map.put(state.agent_sessions, agent_id, %{
|
|
||||||
pid: session_pid,
|
|
||||||
name: name,
|
|
||||||
capabilities: capabilities,
|
|
||||||
registered_at: DateTime.utc_now(),
|
|
||||||
last_activity: DateTime.utc_now()
|
|
||||||
}),
|
|
||||||
session_monitors: Map.put(state.session_monitors, monitor_ref, agent_id)
|
|
||||||
}
|
|
||||||
|
|
||||||
# Start automatic heartbeat management
|
|
||||||
AutoHeartbeat.start_link([])
|
|
||||||
|
|
||||||
AutoHeartbeat.register_agent_with_heartbeat(name, capabilities, %{
|
|
||||||
session_pid: session_pid,
|
|
||||||
enhanced_server: true
|
|
||||||
})
|
|
||||||
|
|
||||||
{:reply, {:ok, agent_id}, new_state}
|
|
||||||
|
|
||||||
{:error, reason} ->
|
|
||||||
{:reply, {:error, reason}, state}
|
|
||||||
end
|
|
||||||
|
|
||||||
%{"error" => %{"message" => message}} ->
|
|
||||||
{:reply, {:error, message}, state}
|
|
||||||
|
|
||||||
_ ->
|
|
||||||
{:reply, {:error, "Unexpected response format"}, state}
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def handle_call(:get_enhanced_task_board, _from, state) do
|
|
||||||
# Get the regular task board
|
|
||||||
case MCPServer.handle_mcp_request(%{
|
|
||||||
"method" => "tools/call",
|
|
||||||
"params" => %{"name" => "get_task_board", "arguments" => %{}}
|
|
||||||
}) do
|
|
||||||
%{"result" => %{"content" => [%{"text" => response_json}]}} ->
|
|
||||||
case Jason.decode(response_json) do
|
|
||||||
{:ok, %{"agents" => agents}} ->
|
|
||||||
# Enhance with session information
|
|
||||||
enhanced_agents =
|
|
||||||
Enum.map(agents, fn agent ->
|
|
||||||
agent_id = agent["agent_id"]
|
|
||||||
session_info = Map.get(state.agent_sessions, agent_id, %{})
|
|
||||||
|
|
||||||
Map.merge(agent, %{
|
|
||||||
"session_active" => Map.has_key?(state.agent_sessions, agent_id),
|
|
||||||
"last_activity" => Map.get(session_info, :last_activity),
|
|
||||||
"session_duration" => calculate_session_duration(session_info)
|
|
||||||
})
|
|
||||||
end)
|
|
||||||
|
|
||||||
result = %{
|
|
||||||
"agents" => enhanced_agents,
|
|
||||||
"active_sessions" => map_size(state.agent_sessions)
|
|
||||||
}
|
|
||||||
|
|
||||||
{:reply, {:ok, result}, state}
|
|
||||||
|
|
||||||
{:error, reason} ->
|
|
||||||
{:reply, {:error, reason}, state}
|
|
||||||
end
|
|
||||||
|
|
||||||
%{"error" => %{"message" => message}} ->
|
|
||||||
{:reply, {:error, message}, state}
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Handle process monitoring - cleanup when agent session dies
|
|
||||||
def handle_info({:DOWN, monitor_ref, :process, _pid, _reason}, state) do
|
|
||||||
case Map.get(state.session_monitors, monitor_ref) do
|
|
||||||
nil ->
|
|
||||||
{:noreply, state}
|
|
||||||
|
|
||||||
agent_id ->
|
|
||||||
# Clean up the agent session
|
|
||||||
new_state = %{
|
|
||||||
state
|
|
||||||
| agent_sessions: Map.delete(state.agent_sessions, agent_id),
|
|
||||||
session_monitors: Map.delete(state.session_monitors, monitor_ref)
|
|
||||||
}
|
|
||||||
|
|
||||||
# Stop heartbeat management
|
|
||||||
AutoHeartbeat.stop_heartbeat(agent_id)
|
|
||||||
|
|
||||||
# Mark agent as offline in registry
|
|
||||||
# (This could be enhanced to gracefully handle ongoing tasks)
|
|
||||||
|
|
||||||
{:noreply, new_state}
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Private helpers
|
|
||||||
|
|
||||||
defp extract_agent_id(request, session_info, state) do
|
|
||||||
# Try to get agent_id from various sources
|
|
||||||
cond do
|
|
||||||
# From request arguments
|
|
||||||
Map.get(request, "params", %{})
|
|
||||||
|> Map.get("arguments", %{})
|
|
||||||
|> Map.get("agent_id") ->
|
|
||||||
request["params"]["arguments"]["agent_id"]
|
|
||||||
|
|
||||||
# From session info
|
|
||||||
Map.get(session_info, :agent_id) ->
|
|
||||||
session_info.agent_id
|
|
||||||
|
|
||||||
# From session lookup by PID
|
|
||||||
session_pid = Map.get(session_info, :session_pid, self()) ->
|
|
||||||
find_agent_by_session_pid(state, session_pid)
|
|
||||||
|
|
||||||
true ->
|
|
||||||
nil
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
defp find_agent_by_session_pid(state, session_pid) do
|
|
||||||
Enum.find_value(state.agent_sessions, fn {agent_id, session_data} ->
|
|
||||||
if session_data.pid == session_pid, do: agent_id, else: nil
|
|
||||||
end)
|
|
||||||
end
|
|
||||||
|
|
||||||
defp update_session_activity(state, agent_id, _session_pid) do
|
|
||||||
case Map.get(state.agent_sessions, agent_id) do
|
|
||||||
nil ->
|
|
||||||
:ok
|
|
||||||
|
|
||||||
session_data ->
|
|
||||||
_updated_session = %{session_data | last_activity: DateTime.utc_now()}
|
|
||||||
# Note: This doesn't update the state since we're in a call handler
|
|
||||||
# In a real implementation, you might want to use cast for this
|
|
||||||
:ok
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Get enhanced task board with session information
|
|
||||||
"""
|
|
||||||
def get_enhanced_task_board do
|
|
||||||
GenServer.call(__MODULE__, :get_enhanced_task_board)
|
|
||||||
end
|
|
||||||
|
|
||||||
defp calculate_session_duration(%{registered_at: start_time}) do
|
|
||||||
DateTime.diff(DateTime.utc_now(), start_time, :second)
|
|
||||||
end
|
|
||||||
|
|
||||||
defp calculate_session_duration(_), do: nil
|
|
||||||
end
|
|
||||||
@@ -29,27 +29,27 @@ defmodule AgentCoordinator.Inbox do
|
|||||||
end
|
end
|
||||||
|
|
||||||
def add_task(agent_id, task) do
|
def add_task(agent_id, task) do
|
||||||
GenServer.call(via_tuple(agent_id), {:add_task, task})
|
GenServer.call(via_tuple(agent_id), {:add_task, task}, 30_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def get_next_task(agent_id) do
|
def get_next_task(agent_id) do
|
||||||
GenServer.call(via_tuple(agent_id), :get_next_task)
|
GenServer.call(via_tuple(agent_id), :get_next_task, 15_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def complete_current_task(agent_id) do
|
def complete_current_task(agent_id) do
|
||||||
GenServer.call(via_tuple(agent_id), :complete_current_task)
|
GenServer.call(via_tuple(agent_id), :complete_current_task, 30_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def get_status(agent_id) do
|
def get_status(agent_id) do
|
||||||
GenServer.call(via_tuple(agent_id), :get_status)
|
GenServer.call(via_tuple(agent_id), :get_status, 15_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def list_tasks(agent_id) do
|
def list_tasks(agent_id) do
|
||||||
GenServer.call(via_tuple(agent_id), :list_tasks)
|
GenServer.call(via_tuple(agent_id), :list_tasks, 15_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def get_current_task(agent_id) do
|
def get_current_task(agent_id) do
|
||||||
GenServer.call(via_tuple(agent_id), :get_current_task)
|
GenServer.call(via_tuple(agent_id), :get_current_task, 15_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def stop(agent_id) do
|
def stop(agent_id) do
|
||||||
|
|||||||
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
@@ -95,9 +95,6 @@ defmodule AgentCoordinator.Persistence do
|
|||||||
case Gnat.pub(state.nats_conn, subject, message, headers: event_headers()) do
|
case Gnat.pub(state.nats_conn, subject, message, headers: event_headers()) do
|
||||||
:ok ->
|
:ok ->
|
||||||
:ok
|
:ok
|
||||||
|
|
||||||
{:error, reason} ->
|
|
||||||
IO.puts("Failed to store event: #{inspect(reason)}")
|
|
||||||
end
|
end
|
||||||
end
|
end
|
||||||
|
|
||||||
|
|||||||
@@ -24,11 +24,11 @@ defmodule AgentCoordinator.TaskRegistry do
|
|||||||
end
|
end
|
||||||
|
|
||||||
def register_agent(agent) do
|
def register_agent(agent) do
|
||||||
GenServer.call(__MODULE__, {:register_agent, agent})
|
GenServer.call(__MODULE__, {:register_agent, agent}, 30_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def assign_task(task) do
|
def assign_task(task) do
|
||||||
GenServer.call(__MODULE__, {:assign_task, task})
|
GenServer.call(__MODULE__, {:assign_task, task}, 30_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def add_to_pending(task) do
|
def add_to_pending(task) do
|
||||||
@@ -40,7 +40,7 @@ defmodule AgentCoordinator.TaskRegistry do
|
|||||||
end
|
end
|
||||||
|
|
||||||
def heartbeat_agent(agent_id) do
|
def heartbeat_agent(agent_id) do
|
||||||
GenServer.call(__MODULE__, {:heartbeat_agent, agent_id})
|
GenServer.call(__MODULE__, {:heartbeat_agent, agent_id}, 30_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def unregister_agent(agent_id, reason \\ "Agent requested unregistration") do
|
def unregister_agent(agent_id, reason \\ "Agent requested unregistration") do
|
||||||
@@ -52,7 +52,7 @@ defmodule AgentCoordinator.TaskRegistry do
|
|||||||
end
|
end
|
||||||
|
|
||||||
def get_agent_current_task(agent_id) do
|
def get_agent_current_task(agent_id) do
|
||||||
GenServer.call(__MODULE__, {:get_agent_current_task, agent_id})
|
GenServer.call(__MODULE__, {:get_agent_current_task, agent_id}, 15_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def get_agent(agent_id) do
|
def get_agent(agent_id) do
|
||||||
@@ -64,11 +64,11 @@ defmodule AgentCoordinator.TaskRegistry do
|
|||||||
end
|
end
|
||||||
|
|
||||||
def update_task_activity(task_id, tool_name, arguments) do
|
def update_task_activity(task_id, tool_name, arguments) do
|
||||||
GenServer.call(__MODULE__, {:update_task_activity, task_id, tool_name, arguments})
|
GenServer.call(__MODULE__, {:update_task_activity, task_id, tool_name, arguments}, 30_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def create_task(title, description, opts \\ %{}) do
|
def create_task(title, description, opts \\ %{}) do
|
||||||
GenServer.call(__MODULE__, {:create_task, title, description, opts})
|
GenServer.call(__MODULE__, {:create_task, title, description, opts}, 30_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def get_next_task(agent_id) do
|
def get_next_task(agent_id) do
|
||||||
@@ -76,7 +76,7 @@ defmodule AgentCoordinator.TaskRegistry do
|
|||||||
end
|
end
|
||||||
|
|
||||||
def complete_task(agent_id) do
|
def complete_task(agent_id) do
|
||||||
GenServer.call(__MODULE__, {:complete_task, agent_id})
|
GenServer.call(__MODULE__, {:complete_task, agent_id}, 30_000)
|
||||||
end
|
end
|
||||||
|
|
||||||
def get_task_board do
|
def get_task_board do
|
||||||
@@ -284,6 +284,7 @@ defmodule AgentCoordinator.TaskRegistry do
|
|||||||
case Map.get(state.agents, agent_id) do
|
case Map.get(state.agents, agent_id) do
|
||||||
nil ->
|
nil ->
|
||||||
{:reply, {:error, :not_found}, state}
|
{:reply, {:error, :not_found}, state}
|
||||||
|
|
||||||
agent ->
|
agent ->
|
||||||
{:reply, {:ok, agent}, state}
|
{:reply, {:ok, agent}, state}
|
||||||
end
|
end
|
||||||
@@ -293,6 +294,7 @@ defmodule AgentCoordinator.TaskRegistry do
|
|||||||
case Enum.find(state.agents, fn {_id, agent} -> agent.name == agent_name end) do
|
case Enum.find(state.agents, fn {_id, agent} -> agent.name == agent_name end) do
|
||||||
nil ->
|
nil ->
|
||||||
{:reply, {:error, :not_found}, state}
|
{:reply, {:error, :not_found}, state}
|
||||||
|
|
||||||
{_id, agent} ->
|
{_id, agent} ->
|
||||||
{:reply, {:ok, agent}, state}
|
{:reply, {:ok, agent}, state}
|
||||||
end
|
end
|
||||||
@@ -338,9 +340,6 @@ defmodule AgentCoordinator.TaskRegistry do
|
|||||||
# Remove from pending since it was assigned
|
# Remove from pending since it was assigned
|
||||||
final_state = %{final_state | pending_tasks: state.pending_tasks}
|
final_state = %{final_state | pending_tasks: state.pending_tasks}
|
||||||
{:reply, {:ok, task}, final_state}
|
{:reply, {:ok, task}, final_state}
|
||||||
|
|
||||||
error ->
|
|
||||||
error
|
|
||||||
end
|
end
|
||||||
|
|
||||||
_conflicts ->
|
_conflicts ->
|
||||||
@@ -559,6 +558,7 @@ defmodule AgentCoordinator.TaskRegistry do
|
|||||||
catch
|
catch
|
||||||
:exit, _ -> 0
|
:exit, _ -> 0
|
||||||
end
|
end
|
||||||
|
|
||||||
[] ->
|
[] ->
|
||||||
# No inbox process exists, treat as 0 pending tasks
|
# No inbox process exists, treat as 0 pending tasks
|
||||||
0
|
0
|
||||||
@@ -685,8 +685,6 @@ defmodule AgentCoordinator.TaskRegistry do
|
|||||||
:ok -> :ok
|
:ok -> :ok
|
||||||
# Inbox already stopped
|
# Inbox already stopped
|
||||||
{:error, :not_found} -> :ok
|
{:error, :not_found} -> :ok
|
||||||
# Continue regardless
|
|
||||||
_ -> :ok
|
|
||||||
end
|
end
|
||||||
|
|
||||||
# Publish unregistration event
|
# Publish unregistration event
|
||||||
|
|||||||
@@ -1,251 +0,0 @@
|
|||||||
defmodule AgentCoordinator.UnifiedMCPServer do
|
|
||||||
@moduledoc """
|
|
||||||
Unified MCP Server that aggregates all external MCP servers and Agent Coordinator tools.
|
|
||||||
|
|
||||||
This is the single MCP server that GitHub Copilot sees, which internally manages
|
|
||||||
all other MCP servers and provides automatic task tracking for any tool usage.
|
|
||||||
"""
|
|
||||||
|
|
||||||
use GenServer
|
|
||||||
require Logger
|
|
||||||
|
|
||||||
alias AgentCoordinator.{MCPServerManager, TaskRegistry}
|
|
||||||
|
|
||||||
defstruct [
|
|
||||||
:agent_sessions,
|
|
||||||
:request_id_counter
|
|
||||||
]
|
|
||||||
|
|
||||||
# Client API
|
|
||||||
|
|
||||||
def start_link(opts \\ []) do
|
|
||||||
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
|
|
||||||
end
|
|
||||||
|
|
||||||
@doc """
|
|
||||||
Handle MCP request from GitHub Copilot
|
|
||||||
"""
|
|
||||||
def handle_mcp_request(request) do
|
|
||||||
GenServer.call(__MODULE__, {:handle_request, request}, 60_000)
|
|
||||||
end
|
|
||||||
|
|
||||||
# Server callbacks
|
|
||||||
|
|
||||||
def init(_opts) do
|
|
||||||
state = %__MODULE__{
|
|
||||||
agent_sessions: %{},
|
|
||||||
request_id_counter: 0
|
|
||||||
}
|
|
||||||
|
|
||||||
Logger.info("Unified MCP Server starting...")
|
|
||||||
|
|
||||||
{:ok, state}
|
|
||||||
end
|
|
||||||
|
|
||||||
def handle_call({:handle_request, request}, _from, state) do
|
|
||||||
response = process_mcp_request(request, state)
|
|
||||||
{:reply, response, state}
|
|
||||||
end
|
|
||||||
|
|
||||||
def handle_call({:register_agent_session, agent_id, session_info}, _from, state) do
|
|
||||||
new_state = %{state | agent_sessions: Map.put(state.agent_sessions, agent_id, session_info)}
|
|
||||||
{:reply, :ok, new_state}
|
|
||||||
end
|
|
||||||
|
|
||||||
def handle_info(_msg, state) do
|
|
||||||
{:noreply, state}
|
|
||||||
end
|
|
||||||
|
|
||||||
# Private functions
|
|
||||||
|
|
||||||
defp process_mcp_request(request, state) do
|
|
||||||
method = Map.get(request, "method")
|
|
||||||
id = Map.get(request, "id")
|
|
||||||
|
|
||||||
case method do
|
|
||||||
"initialize" ->
|
|
||||||
handle_initialize(request, id)
|
|
||||||
|
|
||||||
"tools/list" ->
|
|
||||||
handle_tools_list(request, id)
|
|
||||||
|
|
||||||
"tools/call" ->
|
|
||||||
handle_tools_call(request, id, state)
|
|
||||||
|
|
||||||
_ ->
|
|
||||||
error_response(id, -32601, "Method not found: #{method}")
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
defp handle_initialize(_request, id) do
|
|
||||||
%{
|
|
||||||
"jsonrpc" => "2.0",
|
|
||||||
"id" => id,
|
|
||||||
"result" => %{
|
|
||||||
"protocolVersion" => "2024-11-05",
|
|
||||||
"capabilities" => %{
|
|
||||||
"tools" => %{},
|
|
||||||
"coordination" => %{
|
|
||||||
"automatic_task_tracking" => true,
|
|
||||||
"agent_management" => true,
|
|
||||||
"multi_server_proxy" => true,
|
|
||||||
"heartbeat_coverage" => true
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"serverInfo" => %{
|
|
||||||
"name" => "agent-coordinator-unified",
|
|
||||||
"version" => "0.1.0",
|
|
||||||
"description" =>
|
|
||||||
"Unified MCP server with automatic task tracking and agent coordination"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
end
|
|
||||||
|
|
||||||
defp handle_tools_list(_request, id) do
|
|
||||||
case MCPServerManager.get_unified_tools() do
|
|
||||||
tools when is_list(tools) ->
|
|
||||||
%{
|
|
||||||
"jsonrpc" => "2.0",
|
|
||||||
"id" => id,
|
|
||||||
"result" => %{
|
|
||||||
"tools" => tools
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
{:error, reason} ->
|
|
||||||
error_response(id, -32603, "Failed to get tools: #{reason}")
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
defp handle_tools_call(request, id, state) do
|
|
||||||
params = Map.get(request, "params", %{})
|
|
||||||
tool_name = Map.get(params, "name")
|
|
||||||
arguments = Map.get(params, "arguments", %{})
|
|
||||||
|
|
||||||
# Determine agent context from the request or session
|
|
||||||
agent_context = determine_agent_context(request, arguments, state)
|
|
||||||
|
|
||||||
case MCPServerManager.route_tool_call(tool_name, arguments, agent_context) do
|
|
||||||
%{"error" => _} = error_result ->
|
|
||||||
Map.put(error_result, "id", id)
|
|
||||||
|
|
||||||
result ->
|
|
||||||
# Wrap successful results in MCP format
|
|
||||||
success_response = %{
|
|
||||||
"jsonrpc" => "2.0",
|
|
||||||
"id" => id,
|
|
||||||
"result" => format_tool_result(result, tool_name, agent_context)
|
|
||||||
}
|
|
||||||
|
|
||||||
success_response
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
defp determine_agent_context(request, arguments, state) do
|
|
||||||
# Try to determine agent from various sources:
|
|
||||||
|
|
||||||
# 1. Explicit agent_id in arguments
|
|
||||||
case Map.get(arguments, "agent_id") do
|
|
||||||
agent_id when is_binary(agent_id) ->
|
|
||||||
%{agent_id: agent_id}
|
|
||||||
|
|
||||||
_ ->
|
|
||||||
# 2. Try to extract from request metadata
|
|
||||||
case extract_agent_from_request(request) do
|
|
||||||
agent_id when is_binary(agent_id) ->
|
|
||||||
%{agent_id: agent_id}
|
|
||||||
|
|
||||||
_ ->
|
|
||||||
# 3. Use a default session for GitHub Copilot
|
|
||||||
default_agent_context(state)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
defp extract_agent_from_request(_request) do
|
|
||||||
# Look for agent info in request headers, params, etc.
|
|
||||||
# This could be extended to support various ways of identifying the agent
|
|
||||||
nil
|
|
||||||
end
|
|
||||||
|
|
||||||
defp default_agent_context(state) do
|
|
||||||
# Create or use a default agent session for GitHub Copilot
|
|
||||||
default_agent_id = "github_copilot_session"
|
|
||||||
|
|
||||||
case Map.get(state.agent_sessions, default_agent_id) do
|
|
||||||
nil ->
|
|
||||||
# Auto-register GitHub Copilot as an agent
|
|
||||||
case TaskRegistry.register_agent("GitHub Copilot", [
|
|
||||||
"coding",
|
|
||||||
"analysis",
|
|
||||||
"review",
|
|
||||||
"documentation"
|
|
||||||
]) do
|
|
||||||
{:ok, %{agent_id: agent_id}} ->
|
|
||||||
session_info = %{
|
|
||||||
agent_id: agent_id,
|
|
||||||
name: "GitHub Copilot",
|
|
||||||
auto_registered: true,
|
|
||||||
created_at: DateTime.utc_now()
|
|
||||||
}
|
|
||||||
|
|
||||||
GenServer.call(self(), {:register_agent_session, agent_id, session_info})
|
|
||||||
%{agent_id: agent_id}
|
|
||||||
|
|
||||||
_ ->
|
|
||||||
%{agent_id: default_agent_id}
|
|
||||||
end
|
|
||||||
|
|
||||||
session_info ->
|
|
||||||
%{agent_id: session_info.agent_id}
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
defp format_tool_result(result, tool_name, agent_context) do
|
|
||||||
# Format the result according to MCP tool call response format
|
|
||||||
base_result =
|
|
||||||
case result do
|
|
||||||
%{"result" => content} when is_map(content) ->
|
|
||||||
# Already properly formatted
|
|
||||||
content
|
|
||||||
|
|
||||||
{:ok, content} ->
|
|
||||||
# Convert tuple response to content
|
|
||||||
%{"content" => [%{"type" => "text", "text" => inspect(content)}]}
|
|
||||||
|
|
||||||
%{} = map_result ->
|
|
||||||
# Convert map to text content
|
|
||||||
%{"content" => [%{"type" => "text", "text" => Jason.encode!(map_result)}]}
|
|
||||||
|
|
||||||
binary when is_binary(binary) ->
|
|
||||||
# Simple text result
|
|
||||||
%{"content" => [%{"type" => "text", "text" => binary}]}
|
|
||||||
|
|
||||||
other ->
|
|
||||||
# Fallback for any other type
|
|
||||||
%{"content" => [%{"type" => "text", "text" => inspect(other)}]}
|
|
||||||
end
|
|
||||||
|
|
||||||
# Add metadata about the operation
|
|
||||||
metadata = %{
|
|
||||||
"tool_name" => tool_name,
|
|
||||||
"agent_id" => agent_context.agent_id,
|
|
||||||
"timestamp" => DateTime.utc_now() |> DateTime.to_iso8601(),
|
|
||||||
"auto_tracked" => true
|
|
||||||
}
|
|
||||||
|
|
||||||
Map.put(base_result, "_metadata", metadata)
|
|
||||||
end
|
|
||||||
|
|
||||||
defp error_response(id, code, message) do
|
|
||||||
%{
|
|
||||||
"jsonrpc" => "2.0",
|
|
||||||
"id" => id,
|
|
||||||
"error" => %{
|
|
||||||
"code" => code,
|
|
||||||
"message" => message
|
|
||||||
}
|
|
||||||
}
|
|
||||||
end
|
|
||||||
end
|
|
||||||
@@ -33,7 +33,8 @@ defmodule AgentCoordinator.VSCodePermissions do
|
|||||||
"vscode_set_selection" => :editor,
|
"vscode_set_selection" => :editor,
|
||||||
|
|
||||||
# Command Operations (varies by command)
|
# Command Operations (varies by command)
|
||||||
"vscode_run_command" => :admin, # Default to admin, will check specific commands
|
# Default to admin, will check specific commands
|
||||||
|
"vscode_run_command" => :admin,
|
||||||
|
|
||||||
# User Communication
|
# User Communication
|
||||||
"vscode_show_message" => :read_only
|
"vscode_show_message" => :read_only
|
||||||
@@ -88,6 +89,7 @@ defmodule AgentCoordinator.VSCodePermissions do
|
|||||||
case additional_checks(tool_name, args, context) do
|
case additional_checks(tool_name, args, context) do
|
||||||
:ok ->
|
:ok ->
|
||||||
{:ok, required_level}
|
{:ok, required_level}
|
||||||
|
|
||||||
{:error, reason} ->
|
{:error, reason} ->
|
||||||
{:error, reason}
|
{:error, reason}
|
||||||
end
|
end
|
||||||
@@ -109,15 +111,18 @@ defmodule AgentCoordinator.VSCodePermissions do
|
|||||||
|
|
||||||
case agent_id do
|
case agent_id do
|
||||||
"github_copilot_session" -> :filesystem
|
"github_copilot_session" -> :filesystem
|
||||||
id when is_binary(id) and byte_size(id) > 0 -> :editor # Other registered agents
|
# Other registered agents
|
||||||
_ -> :read_only # Unknown agents
|
id when is_binary(id) and byte_size(id) > 0 -> :editor
|
||||||
|
# Unknown agents
|
||||||
|
_ -> :read_only
|
||||||
end
|
end
|
||||||
end
|
end
|
||||||
|
|
||||||
@doc """
|
@doc """
|
||||||
Update an agent's permission level (for administrative purposes).
|
Update an agent's permission level (for administrative purposes).
|
||||||
"""
|
"""
|
||||||
def set_agent_permission_level(agent_id, level) when level in [:read_only, :editor, :filesystem, :terminal, :git, :admin] do
|
def set_agent_permission_level(agent_id, level)
|
||||||
|
when level in [:read_only, :editor, :filesystem, :terminal, :git, :admin] do
|
||||||
# This would persist to a database or configuration store
|
# This would persist to a database or configuration store
|
||||||
Logger.info("Setting permission level for agent #{agent_id} to #{level}")
|
Logger.info("Setting permission level for agent #{agent_id} to #{level}")
|
||||||
:ok
|
:ok
|
||||||
@@ -127,16 +132,24 @@ defmodule AgentCoordinator.VSCodePermissions do
|
|||||||
|
|
||||||
defp get_required_permission(tool_name, args) do
|
defp get_required_permission(tool_name, args) do
|
||||||
case Map.get(@tool_permissions, tool_name) do
|
case Map.get(@tool_permissions, tool_name) do
|
||||||
nil -> :admin # Unknown tools require admin by default
|
# Unknown tools require admin by default
|
||||||
|
nil ->
|
||||||
|
:admin
|
||||||
|
|
||||||
:admin when tool_name == "vscode_run_command" ->
|
:admin when tool_name == "vscode_run_command" ->
|
||||||
# Special handling for run_command - check specific command
|
# Special handling for run_command - check specific command
|
||||||
command = args["command"]
|
command = args["command"]
|
||||||
|
|
||||||
if command in @whitelisted_commands do
|
if command in @whitelisted_commands do
|
||||||
:editor # Whitelisted commands only need editor level
|
# Whitelisted commands only need editor level
|
||||||
|
:editor
|
||||||
else
|
else
|
||||||
:admin # Unknown commands need admin
|
# Unknown commands need admin
|
||||||
|
:admin
|
||||||
end
|
end
|
||||||
level -> level
|
|
||||||
|
level ->
|
||||||
|
level
|
||||||
end
|
end
|
||||||
end
|
end
|
||||||
|
|
||||||
@@ -165,11 +178,19 @@ defmodule AgentCoordinator.VSCodePermissions do
|
|||||||
|
|
||||||
forbidden_patterns = [
|
forbidden_patterns = [
|
||||||
# System directories
|
# System directories
|
||||||
"/etc/", "/bin/", "/usr/", "/var/", "/tmp/",
|
"/etc/",
|
||||||
|
"/bin/",
|
||||||
|
"/usr/",
|
||||||
|
"/var/",
|
||||||
|
"/tmp/",
|
||||||
# User sensitive areas
|
# User sensitive areas
|
||||||
"/.ssh/", "/.config/", "/home/", "~",
|
"/.ssh/",
|
||||||
|
"/.config/",
|
||||||
|
"/home/",
|
||||||
|
"~",
|
||||||
# Relative path traversal
|
# Relative path traversal
|
||||||
"../", "..\\"
|
"../",
|
||||||
|
"..\\"
|
||||||
]
|
]
|
||||||
|
|
||||||
if Enum.any?(forbidden_patterns, fn pattern -> String.contains?(path, pattern) end) do
|
if Enum.any?(forbidden_patterns, fn pattern -> String.contains?(path, pattern) end) do
|
||||||
@@ -181,7 +202,7 @@ defmodule AgentCoordinator.VSCodePermissions do
|
|||||||
|
|
||||||
defp check_workspace_bounds(_path, _context), do: {:error, "Invalid path format"}
|
defp check_workspace_bounds(_path, _context), do: {:error, "Invalid path format"}
|
||||||
|
|
||||||
defp check_command_safety(command, args) when is_binary(command) do
|
defp check_command_safety(command, _args) when is_binary(command) do
|
||||||
cond do
|
cond do
|
||||||
command in @whitelisted_commands ->
|
command in @whitelisted_commands ->
|
||||||
:ok
|
:ok
|
||||||
|
|||||||
@@ -18,7 +18,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
# File Operations
|
# File Operations
|
||||||
%{
|
%{
|
||||||
"name" => "vscode_read_file",
|
"name" => "vscode_read_file",
|
||||||
"description" => "Read file contents using VS Code's file system API. Only works within workspace folders.",
|
"description" =>
|
||||||
|
"Read file contents using VS Code's file system API. Only works within workspace folders.",
|
||||||
"inputSchema" => %{
|
"inputSchema" => %{
|
||||||
"type" => "object",
|
"type" => "object",
|
||||||
"properties" => %{
|
"properties" => %{
|
||||||
@@ -37,7 +38,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
},
|
},
|
||||||
%{
|
%{
|
||||||
"name" => "vscode_write_file",
|
"name" => "vscode_write_file",
|
||||||
"description" => "Write content to a file using VS Code's file system API. Creates directories if needed.",
|
"description" =>
|
||||||
|
"Write content to a file using VS Code's file system API. Creates directories if needed.",
|
||||||
"inputSchema" => %{
|
"inputSchema" => %{
|
||||||
"type" => "object",
|
"type" => "object",
|
||||||
"properties" => %{
|
"properties" => %{
|
||||||
@@ -93,7 +95,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
"properties" => %{
|
"properties" => %{
|
||||||
"path" => %{
|
"path" => %{
|
||||||
"type" => "string",
|
"type" => "string",
|
||||||
"description" => "Relative or absolute path to the file/directory within the workspace"
|
"description" =>
|
||||||
|
"Relative or absolute path to the file/directory within the workspace"
|
||||||
},
|
},
|
||||||
"recursive" => %{
|
"recursive" => %{
|
||||||
"type" => "boolean",
|
"type" => "boolean",
|
||||||
@@ -101,7 +104,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
},
|
},
|
||||||
"use_trash" => %{
|
"use_trash" => %{
|
||||||
"type" => "boolean",
|
"type" => "boolean",
|
||||||
"description" => "Whether to move to trash instead of permanent deletion (default: true)"
|
"description" =>
|
||||||
|
"Whether to move to trash instead of permanent deletion (default: true)"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"required" => ["path"]
|
"required" => ["path"]
|
||||||
@@ -227,7 +231,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
# Command Operations
|
# Command Operations
|
||||||
%{
|
%{
|
||||||
"name" => "vscode_run_command",
|
"name" => "vscode_run_command",
|
||||||
"description" => "Execute a VS Code command. Only whitelisted commands are allowed for security.",
|
"description" =>
|
||||||
|
"Execute a VS Code command. Only whitelisted commands are allowed for security.",
|
||||||
"inputSchema" => %{
|
"inputSchema" => %{
|
||||||
"type" => "object",
|
"type" => "object",
|
||||||
"properties" => %{
|
"properties" => %{
|
||||||
@@ -282,21 +287,26 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
required = Map.get(input_schema, "required", [])
|
required = Map.get(input_schema, "required", [])
|
||||||
|
|
||||||
# Add agent_id to properties
|
# Add agent_id to properties
|
||||||
updated_properties = Map.put(properties, "agent_id", %{
|
updated_properties =
|
||||||
|
Map.put(properties, "agent_id", %{
|
||||||
"type" => "string",
|
"type" => "string",
|
||||||
"description" => "Unique identifier for the agent making this request. Each agent session must use a consistent, unique ID throughout their interaction. Generate a UUID or use a descriptive identifier like 'agent_main_task_001'."
|
"description" =>
|
||||||
|
"Unique identifier for the agent making this request. Each agent session must use a consistent, unique ID throughout their interaction. Generate a UUID or use a descriptive identifier like 'agent_main_task_001'."
|
||||||
})
|
})
|
||||||
|
|
||||||
# Add agent_id to required fields
|
# Add agent_id to required fields
|
||||||
updated_required = if "agent_id" in required, do: required, else: ["agent_id" | required]
|
updated_required = if "agent_id" in required, do: required, else: ["agent_id" | required]
|
||||||
|
|
||||||
# Update the tool schema
|
# Update the tool schema
|
||||||
updated_input_schema = input_schema
|
updated_input_schema =
|
||||||
|
input_schema
|
||||||
|> Map.put("properties", updated_properties)
|
|> Map.put("properties", updated_properties)
|
||||||
|> Map.put("required", updated_required)
|
|> Map.put("required", updated_required)
|
||||||
|
|
||||||
# Update tool description to mention agent_id requirement
|
# Update tool description to mention agent_id requirement
|
||||||
updated_description = tool["description"] <> " IMPORTANT: Include a unique agent_id parameter to identify your agent session."
|
updated_description =
|
||||||
|
tool["description"] <>
|
||||||
|
" IMPORTANT: Include a unique agent_id parameter to identify your agent session."
|
||||||
|
|
||||||
tool
|
tool
|
||||||
|> Map.put("inputSchema", updated_input_schema)
|
|> Map.put("inputSchema", updated_input_schema)
|
||||||
@@ -314,9 +324,12 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
|
|
||||||
if is_nil(agent_id) or agent_id == "" do
|
if is_nil(agent_id) or agent_id == "" do
|
||||||
Logger.warning("Missing agent_id in VS Code tool call: #{tool_name}")
|
Logger.warning("Missing agent_id in VS Code tool call: #{tool_name}")
|
||||||
{:error, %{
|
|
||||||
|
{:error,
|
||||||
|
%{
|
||||||
"error" => "Missing agent_id",
|
"error" => "Missing agent_id",
|
||||||
"message" => "All VS Code tools require a unique agent_id parameter. Please include your agent session identifier."
|
"message" =>
|
||||||
|
"All VS Code tools require a unique agent_id parameter. Please include your agent session identifier."
|
||||||
}}
|
}}
|
||||||
else
|
else
|
||||||
# Ensure agent is registered and create enhanced context
|
# Ensure agent is registered and create enhanced context
|
||||||
@@ -364,7 +377,11 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
case AgentCoordinator.TaskRegistry.register_agent(
|
case AgentCoordinator.TaskRegistry.register_agent(
|
||||||
"GitHub Copilot (#{agent_id})",
|
"GitHub Copilot (#{agent_id})",
|
||||||
capabilities,
|
capabilities,
|
||||||
[metadata: %{agent_id: agent_id, auto_registered: true, session_start: DateTime.utc_now()}]
|
metadata: %{
|
||||||
|
agent_id: agent_id,
|
||||||
|
auto_registered: true,
|
||||||
|
session_start: DateTime.utc_now()
|
||||||
|
}
|
||||||
) do
|
) do
|
||||||
{:ok, _result} ->
|
{:ok, _result} ->
|
||||||
Logger.info("Successfully auto-registered agent: #{agent_id}")
|
Logger.info("Successfully auto-registered agent: #{agent_id}")
|
||||||
@@ -372,10 +389,13 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
|
|
||||||
{:error, reason} ->
|
{:error, reason} ->
|
||||||
Logger.error("Failed to auto-register agent #{agent_id}: #{inspect(reason)}")
|
Logger.error("Failed to auto-register agent #{agent_id}: #{inspect(reason)}")
|
||||||
Map.put(context, :agent_id, agent_id) # Continue anyway
|
# Continue anyway
|
||||||
|
Map.put(context, :agent_id, agent_id)
|
||||||
end
|
end
|
||||||
end
|
end
|
||||||
end # Private function to execute individual tools
|
end
|
||||||
|
|
||||||
|
# Private function to execute individual tools
|
||||||
defp execute_tool(tool_name, args, context) do
|
defp execute_tool(tool_name, args, context) do
|
||||||
case tool_name do
|
case tool_name do
|
||||||
"vscode_read_file" -> read_file(args, context)
|
"vscode_read_file" -> read_file(args, context)
|
||||||
@@ -398,7 +418,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
|
|
||||||
defp read_file(args, _context) do
|
defp read_file(args, _context) do
|
||||||
# For now, return a placeholder - we'll implement the actual VS Code API bridge
|
# For now, return a placeholder - we'll implement the actual VS Code API bridge
|
||||||
{:ok, %{
|
{:ok,
|
||||||
|
%{
|
||||||
"content" => "// VS Code file content would be here",
|
"content" => "// VS Code file content would be here",
|
||||||
"path" => args["path"],
|
"path" => args["path"],
|
||||||
"encoding" => args["encoding"] || "utf8",
|
"encoding" => args["encoding"] || "utf8",
|
||||||
@@ -408,7 +429,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
end
|
end
|
||||||
|
|
||||||
defp write_file(args, _context) do
|
defp write_file(args, _context) do
|
||||||
{:ok, %{
|
{:ok,
|
||||||
|
%{
|
||||||
"path" => args["path"],
|
"path" => args["path"],
|
||||||
"bytes_written" => String.length(args["content"]),
|
"bytes_written" => String.length(args["content"]),
|
||||||
"timestamp" => DateTime.utc_now() |> DateTime.to_iso8601()
|
"timestamp" => DateTime.utc_now() |> DateTime.to_iso8601()
|
||||||
@@ -416,7 +438,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
end
|
end
|
||||||
|
|
||||||
defp create_file(args, _context) do
|
defp create_file(args, _context) do
|
||||||
{:ok, %{
|
{:ok,
|
||||||
|
%{
|
||||||
"path" => args["path"],
|
"path" => args["path"],
|
||||||
"created" => true,
|
"created" => true,
|
||||||
"timestamp" => DateTime.utc_now() |> DateTime.to_iso8601()
|
"timestamp" => DateTime.utc_now() |> DateTime.to_iso8601()
|
||||||
@@ -424,7 +447,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
end
|
end
|
||||||
|
|
||||||
defp delete_file(args, _context) do
|
defp delete_file(args, _context) do
|
||||||
{:ok, %{
|
{:ok,
|
||||||
|
%{
|
||||||
"path" => args["path"],
|
"path" => args["path"],
|
||||||
"deleted" => true,
|
"deleted" => true,
|
||||||
"timestamp" => DateTime.utc_now() |> DateTime.to_iso8601()
|
"timestamp" => DateTime.utc_now() |> DateTime.to_iso8601()
|
||||||
@@ -432,7 +456,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
end
|
end
|
||||||
|
|
||||||
defp list_directory(args, _context) do
|
defp list_directory(args, _context) do
|
||||||
{:ok, %{
|
{:ok,
|
||||||
|
%{
|
||||||
"path" => args["path"],
|
"path" => args["path"],
|
||||||
"entries" => [
|
"entries" => [
|
||||||
%{"name" => "file1.txt", "type" => "file", "size" => 123},
|
%{"name" => "file1.txt", "type" => "file", "size" => 123},
|
||||||
@@ -442,7 +467,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
end
|
end
|
||||||
|
|
||||||
defp get_workspace_folders(_args, _context) do
|
defp get_workspace_folders(_args, _context) do
|
||||||
{:ok, %{
|
{:ok,
|
||||||
|
%{
|
||||||
"folders" => [
|
"folders" => [
|
||||||
%{"name" => "agent_coordinator", "uri" => "file:///home/ra/agent_coordinator"}
|
%{"name" => "agent_coordinator", "uri" => "file:///home/ra/agent_coordinator"}
|
||||||
]
|
]
|
||||||
@@ -450,7 +476,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
end
|
end
|
||||||
|
|
||||||
defp get_active_editor(args, _context) do
|
defp get_active_editor(args, _context) do
|
||||||
{:ok, %{
|
{:ok,
|
||||||
|
%{
|
||||||
"file_path" => "/home/ra/agent_coordinator/lib/agent_coordinator.ex",
|
"file_path" => "/home/ra/agent_coordinator/lib/agent_coordinator.ex",
|
||||||
"language" => "elixir",
|
"language" => "elixir",
|
||||||
"line_count" => 150,
|
"line_count" => 150,
|
||||||
@@ -464,7 +491,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
end
|
end
|
||||||
|
|
||||||
defp set_editor_content(args, _context) do
|
defp set_editor_content(args, _context) do
|
||||||
{:ok, %{
|
{:ok,
|
||||||
|
%{
|
||||||
"file_path" => args["file_path"],
|
"file_path" => args["file_path"],
|
||||||
"content_length" => String.length(args["content"]),
|
"content_length" => String.length(args["content"]),
|
||||||
"timestamp" => DateTime.utc_now() |> DateTime.to_iso8601()
|
"timestamp" => DateTime.utc_now() |> DateTime.to_iso8601()
|
||||||
@@ -472,7 +500,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
end
|
end
|
||||||
|
|
||||||
defp get_selection(args, _context) do
|
defp get_selection(args, _context) do
|
||||||
{:ok, %{
|
{:ok,
|
||||||
|
%{
|
||||||
"selection" => %{
|
"selection" => %{
|
||||||
"start" => %{"line" => 5, "character" => 0},
|
"start" => %{"line" => 5, "character" => 0},
|
||||||
"end" => %{"line" => 8, "character" => 20}
|
"end" => %{"line" => 8, "character" => 20}
|
||||||
@@ -483,7 +512,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
end
|
end
|
||||||
|
|
||||||
defp set_selection(args, _context) do
|
defp set_selection(args, _context) do
|
||||||
{:ok, %{
|
{:ok,
|
||||||
|
%{
|
||||||
"selection" => %{
|
"selection" => %{
|
||||||
"start" => %{"line" => args["start_line"], "character" => args["start_character"]},
|
"start" => %{"line" => args["start_line"], "character" => args["start_character"]},
|
||||||
"end" => %{"line" => args["end_line"], "character" => args["end_character"]}
|
"end" => %{"line" => args["end_line"], "character" => args["end_character"]}
|
||||||
@@ -494,7 +524,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
|
|
||||||
defp run_command(args, _context) do
|
defp run_command(args, _context) do
|
||||||
# This would execute actual VS Code commands
|
# This would execute actual VS Code commands
|
||||||
{:ok, %{
|
{:ok,
|
||||||
|
%{
|
||||||
"command" => args["command"],
|
"command" => args["command"],
|
||||||
"args" => args["args"] || [],
|
"args" => args["args"] || [],
|
||||||
"result" => "Command executed successfully",
|
"result" => "Command executed successfully",
|
||||||
@@ -503,7 +534,8 @@ defmodule AgentCoordinator.VSCodeToolProvider do
|
|||||||
end
|
end
|
||||||
|
|
||||||
defp show_message(args, _context) do
|
defp show_message(args, _context) do
|
||||||
{:ok, %{
|
{:ok,
|
||||||
|
%{
|
||||||
"message" => args["message"],
|
"message" => args["message"],
|
||||||
"type" => args["type"] || "info",
|
"type" => args["type"] || "info",
|
||||||
"displayed" => true,
|
"displayed" => true,
|
||||||
|
|||||||
@@ -10,12 +10,6 @@
|
|||||||
"auto_restart": true,
|
"auto_restart": true,
|
||||||
"description": "Context7 library documentation server"
|
"description": "Context7 library documentation server"
|
||||||
},
|
},
|
||||||
"mcp_figma": {
|
|
||||||
"url": "http://127.0.0.1:3845/mcp",
|
|
||||||
"type": "http",
|
|
||||||
"auto_restart": true,
|
|
||||||
"description": "Figma design integration server"
|
|
||||||
},
|
|
||||||
"mcp_filesystem": {
|
"mcp_filesystem": {
|
||||||
"type": "stdio",
|
"type": "stdio",
|
||||||
"command": "bunx",
|
"command": "bunx",
|
||||||
|
|||||||
@@ -28,10 +28,10 @@ exec mix run --no-halt -e "
|
|||||||
|
|
||||||
# MCPServerManager is now started by the application supervisor automatically
|
# MCPServerManager is now started by the application supervisor automatically
|
||||||
|
|
||||||
case AgentCoordinator.UnifiedMCPServer.start_link() do
|
case AgentCoordinator.MCPServer.start_link() do
|
||||||
{:ok, _} -> :ok
|
{:ok, _} -> :ok
|
||||||
{:error, {:already_started, _}} -> :ok
|
{:error, {:already_started, _}} -> :ok
|
||||||
{:error, reason} -> raise \"Failed to start UnifiedMCPServer: #{inspect(reason)}\"
|
{:error, reason} -> raise \"Failed to start MCPServer: #{inspect(reason)}\"
|
||||||
end
|
end
|
||||||
|
|
||||||
# Log that we're ready
|
# Log that we're ready
|
||||||
@@ -64,7 +64,7 @@ defmodule UnifiedMCPStdio do
|
|||||||
request = Jason.decode!(json_line)
|
request = Jason.decode!(json_line)
|
||||||
|
|
||||||
# Route through unified MCP server for automatic task tracking
|
# Route through unified MCP server for automatic task tracking
|
||||||
response = AgentCoordinator.UnifiedMCPServer.handle_mcp_request(request)
|
response = AgentCoordinator.MCPServer.handle_mcp_request(request)
|
||||||
IO.puts(Jason.encode!(response))
|
IO.puts(Jason.encode!(response))
|
||||||
rescue
|
rescue
|
||||||
e in Jason.DecodeError ->
|
e in Jason.DecodeError ->
|
||||||
|
|||||||
@@ -5,7 +5,10 @@ defmodule AgentCoordinator.AutoHeartbeatTest do
|
|||||||
setup do
|
setup do
|
||||||
# Start necessary services for testing
|
# Start necessary services for testing
|
||||||
{:ok, _} = Registry.start_link(keys: :unique, name: AgentCoordinator.InboxRegistry)
|
{:ok, _} = Registry.start_link(keys: :unique, name: AgentCoordinator.InboxRegistry)
|
||||||
{:ok, _} = DynamicSupervisor.start_link(name: AgentCoordinator.InboxSupervisor, strategy: :one_for_one)
|
|
||||||
|
{:ok, _} =
|
||||||
|
DynamicSupervisor.start_link(name: AgentCoordinator.InboxSupervisor, strategy: :one_for_one)
|
||||||
|
|
||||||
{:ok, _} = TaskRegistry.start_link()
|
{:ok, _} = TaskRegistry.start_link()
|
||||||
{:ok, _} = AgentCoordinator.MCPServer.start_link()
|
{:ok, _} = AgentCoordinator.MCPServer.start_link()
|
||||||
{:ok, _} = AgentCoordinator.AutoHeartbeat.start_link()
|
{:ok, _} = AgentCoordinator.AutoHeartbeat.start_link()
|
||||||
@@ -17,7 +20,11 @@ defmodule AgentCoordinator.AutoHeartbeatTest do
|
|||||||
describe "automatic heartbeat functionality" do
|
describe "automatic heartbeat functionality" do
|
||||||
test "agent automatically sends heartbeats during operations" do
|
test "agent automatically sends heartbeats during operations" do
|
||||||
# Start a client with auto-heartbeat
|
# Start a client with auto-heartbeat
|
||||||
{:ok, client} = Client.start_session("TestAgent", [:coding], auto_heartbeat: true, heartbeat_interval: 1000)
|
{:ok, client} =
|
||||||
|
Client.start_session("TestAgent", [:coding],
|
||||||
|
auto_heartbeat: true,
|
||||||
|
heartbeat_interval: 1000
|
||||||
|
)
|
||||||
|
|
||||||
# Get initial session info
|
# Get initial session info
|
||||||
{:ok, initial_info} = Client.get_session_info(client)
|
{:ok, initial_info} = Client.get_session_info(client)
|
||||||
@@ -36,7 +43,11 @@ defmodule AgentCoordinator.AutoHeartbeatTest do
|
|||||||
|
|
||||||
test "agent stays online with regular heartbeats" do
|
test "agent stays online with regular heartbeats" do
|
||||||
# Start client
|
# Start client
|
||||||
{:ok, client} = Client.start_session("OnlineAgent", [:analysis], auto_heartbeat: true, heartbeat_interval: 500)
|
{:ok, client} =
|
||||||
|
Client.start_session("OnlineAgent", [:analysis],
|
||||||
|
auto_heartbeat: true,
|
||||||
|
heartbeat_interval: 500
|
||||||
|
)
|
||||||
|
|
||||||
# Get agent info
|
# Get agent info
|
||||||
{:ok, session_info} = Client.get_session_info(client)
|
{:ok, session_info} = Client.get_session_info(client)
|
||||||
@@ -70,15 +81,18 @@ defmodule AgentCoordinator.AutoHeartbeatTest do
|
|||||||
assert length(online_agents) >= 3
|
assert length(online_agents) >= 3
|
||||||
|
|
||||||
# Create tasks from different agents simultaneously
|
# Create tasks from different agents simultaneously
|
||||||
task1 = Task.async(fn ->
|
task1 =
|
||||||
|
Task.async(fn ->
|
||||||
Client.create_task(agent1, "Task1", "Description1", %{"priority" => "normal"})
|
Client.create_task(agent1, "Task1", "Description1", %{"priority" => "normal"})
|
||||||
end)
|
end)
|
||||||
|
|
||||||
task2 = Task.async(fn ->
|
task2 =
|
||||||
|
Task.async(fn ->
|
||||||
Client.create_task(agent2, "Task2", "Description2", %{"priority" => "high"})
|
Client.create_task(agent2, "Task2", "Description2", %{"priority" => "high"})
|
||||||
end)
|
end)
|
||||||
|
|
||||||
task3 = Task.async(fn ->
|
task3 =
|
||||||
|
Task.async(fn ->
|
||||||
Client.create_task(agent3, "Task3", "Description3", %{"priority" => "low"})
|
Client.create_task(agent3, "Task3", "Description3", %{"priority" => "low"})
|
||||||
end)
|
end)
|
||||||
|
|
||||||
@@ -145,6 +159,7 @@ defmodule AgentCoordinator.AutoHeartbeatTest do
|
|||||||
nil ->
|
nil ->
|
||||||
# Agent was cleaned up - this is acceptable
|
# Agent was cleaned up - this is acceptable
|
||||||
:ok
|
:ok
|
||||||
|
|
||||||
agent ->
|
agent ->
|
||||||
# Agent should be offline
|
# Agent should be offline
|
||||||
refute agent["online"]
|
refute agent["online"]
|
||||||
|
|||||||
@@ -2,7 +2,15 @@ defmodule AgentCoordinatorTest do
|
|||||||
use ExUnit.Case
|
use ExUnit.Case
|
||||||
doctest AgentCoordinator
|
doctest AgentCoordinator
|
||||||
|
|
||||||
test "greets the world" do
|
test "returns version" do
|
||||||
assert AgentCoordinator.hello() == :world
|
assert is_binary(AgentCoordinator.version())
|
||||||
|
assert AgentCoordinator.version() == "0.1.0"
|
||||||
|
end
|
||||||
|
|
||||||
|
test "returns status structure" do
|
||||||
|
status = AgentCoordinator.status()
|
||||||
|
assert is_map(status)
|
||||||
|
assert Map.has_key?(status, :agents)
|
||||||
|
assert Map.has_key?(status, :uptime)
|
||||||
end
|
end
|
||||||
end
|
end
|
||||||
|
|||||||
@@ -16,10 +16,11 @@ defmodule AgentCoordinator.MetadataTest do
|
|||||||
agent_name = "MetadataTestAgent_#{:rand.uniform(1000)}"
|
agent_name = "MetadataTestAgent_#{:rand.uniform(1000)}"
|
||||||
|
|
||||||
# Register agent with metadata
|
# Register agent with metadata
|
||||||
result = AgentCoordinator.TaskRegistry.register_agent(
|
result =
|
||||||
|
AgentCoordinator.TaskRegistry.register_agent(
|
||||||
agent_name,
|
agent_name,
|
||||||
["coding", "testing", "vscode_integration"],
|
["coding", "testing", "vscode_integration"],
|
||||||
[metadata: metadata]
|
metadata: metadata
|
||||||
)
|
)
|
||||||
|
|
||||||
assert :ok = result
|
assert :ok = result
|
||||||
@@ -44,7 +45,8 @@ defmodule AgentCoordinator.MetadataTest do
|
|||||||
agent_name = "LegacyTestAgent_#{:rand.uniform(1000)}"
|
agent_name = "LegacyTestAgent_#{:rand.uniform(1000)}"
|
||||||
|
|
||||||
# Register agent without metadata (old way)
|
# Register agent without metadata (old way)
|
||||||
result = AgentCoordinator.TaskRegistry.register_agent(
|
result =
|
||||||
|
AgentCoordinator.TaskRegistry.register_agent(
|
||||||
agent_name,
|
agent_name,
|
||||||
["coding", "testing"]
|
["coding", "testing"]
|
||||||
)
|
)
|
||||||
@@ -67,10 +69,11 @@ defmodule AgentCoordinator.MetadataTest do
|
|||||||
boolean: true
|
boolean: true
|
||||||
}
|
}
|
||||||
|
|
||||||
agent = AgentCoordinator.Agent.new(
|
agent =
|
||||||
|
AgentCoordinator.Agent.new(
|
||||||
"TestAgent",
|
"TestAgent",
|
||||||
["capability1"],
|
["capability1"],
|
||||||
[metadata: metadata]
|
metadata: metadata
|
||||||
)
|
)
|
||||||
|
|
||||||
assert agent.metadata[:test_key] == "test_value"
|
assert agent.metadata[:test_key] == "test_value"
|
||||||
|
|||||||
@@ -1,5 +1,6 @@
|
|||||||
defmodule AgentCoordinator.DynamicToolDiscoveryTest do
|
defmodule AgentCoordinator.DynamicToolDiscoveryTest do
|
||||||
use ExUnit.Case, async: false # Changed to false since we're using shared resources
|
# Changed to false since we're using shared resources
|
||||||
|
use ExUnit.Case, async: false
|
||||||
|
|
||||||
describe "Dynamic tool discovery" do
|
describe "Dynamic tool discovery" do
|
||||||
test "tools are discovered from external MCP servers via tools/list" do
|
test "tools are discovered from external MCP servers via tools/list" do
|
||||||
@@ -9,7 +10,14 @@ defmodule AgentCoordinator.DynamicToolDiscoveryTest do
|
|||||||
initial_tools = AgentCoordinator.MCPServerManager.get_unified_tools()
|
initial_tools = AgentCoordinator.MCPServerManager.get_unified_tools()
|
||||||
|
|
||||||
# Should have at least the coordinator native tools
|
# Should have at least the coordinator native tools
|
||||||
coordinator_tool_names = ["register_agent", "create_task", "get_next_task", "complete_task", "get_task_board", "heartbeat"]
|
coordinator_tool_names = [
|
||||||
|
"register_agent",
|
||||||
|
"create_task",
|
||||||
|
"get_next_task",
|
||||||
|
"complete_task",
|
||||||
|
"get_task_board",
|
||||||
|
"heartbeat"
|
||||||
|
]
|
||||||
|
|
||||||
Enum.each(coordinator_tool_names, fn tool_name ->
|
Enum.each(coordinator_tool_names, fn tool_name ->
|
||||||
assert Enum.any?(initial_tools, fn tool -> tool["name"] == tool_name end),
|
assert Enum.any?(initial_tools, fn tool -> tool["name"] == tool_name end),
|
||||||
@@ -17,7 +25,8 @@ defmodule AgentCoordinator.DynamicToolDiscoveryTest do
|
|||||||
end)
|
end)
|
||||||
|
|
||||||
# Verify VS Code tools are conditionally included
|
# Verify VS Code tools are conditionally included
|
||||||
vscode_tools = Enum.filter(initial_tools, fn tool ->
|
vscode_tools =
|
||||||
|
Enum.filter(initial_tools, fn tool ->
|
||||||
String.starts_with?(tool["name"], "vscode_")
|
String.starts_with?(tool["name"], "vscode_")
|
||||||
end)
|
end)
|
||||||
|
|
||||||
@@ -25,7 +34,8 @@ defmodule AgentCoordinator.DynamicToolDiscoveryTest do
|
|||||||
if Code.ensure_loaded?(AgentCoordinator.VSCodeToolProvider) do
|
if Code.ensure_loaded?(AgentCoordinator.VSCodeToolProvider) do
|
||||||
assert length(vscode_tools) > 0, "VS Code tools should be available when module is loaded"
|
assert length(vscode_tools) > 0, "VS Code tools should be available when module is loaded"
|
||||||
else
|
else
|
||||||
assert length(vscode_tools) == 0, "VS Code tools should not be available when module is not loaded"
|
assert length(vscode_tools) == 0,
|
||||||
|
"VS Code tools should not be available when module is not loaded"
|
||||||
end
|
end
|
||||||
|
|
||||||
# Test tool refresh functionality
|
# Test tool refresh functionality
|
||||||
@@ -39,7 +49,8 @@ defmodule AgentCoordinator.DynamicToolDiscoveryTest do
|
|||||||
# Use the shared MCP server manager
|
# Use the shared MCP server manager
|
||||||
|
|
||||||
# Test routing for coordinator tools
|
# Test routing for coordinator tools
|
||||||
result = AgentCoordinator.MCPServerManager.route_tool_call(
|
result =
|
||||||
|
AgentCoordinator.MCPServerManager.route_tool_call(
|
||||||
"register_agent",
|
"register_agent",
|
||||||
%{"name" => "TestAgent", "capabilities" => ["testing"]},
|
%{"name" => "TestAgent", "capabilities" => ["testing"]},
|
||||||
%{agent_id: "test_#{:rand.uniform(1000)}"}
|
%{agent_id: "test_#{:rand.uniform(1000)}"}
|
||||||
@@ -49,7 +60,8 @@ defmodule AgentCoordinator.DynamicToolDiscoveryTest do
|
|||||||
assert result == :ok or (is_map(result) and not Map.has_key?(result, "error"))
|
assert result == :ok or (is_map(result) and not Map.has_key?(result, "error"))
|
||||||
|
|
||||||
# Test routing for non-existent tool
|
# Test routing for non-existent tool
|
||||||
error_result = AgentCoordinator.MCPServerManager.route_tool_call(
|
error_result =
|
||||||
|
AgentCoordinator.MCPServerManager.route_tool_call(
|
||||||
"nonexistent_tool",
|
"nonexistent_tool",
|
||||||
%{},
|
%{},
|
||||||
%{agent_id: "test"}
|
%{agent_id: "test"}
|
||||||
@@ -72,10 +84,19 @@ defmodule AgentCoordinator.DynamicToolDiscoveryTest do
|
|||||||
assert tool_count >= 0
|
assert tool_count >= 0
|
||||||
|
|
||||||
# Verify we have external tools (context7, filesystem, etc.)
|
# Verify we have external tools (context7, filesystem, etc.)
|
||||||
external_tools = Enum.filter(tools, fn tool ->
|
external_tools =
|
||||||
|
Enum.filter(tools, fn tool ->
|
||||||
name = tool["name"]
|
name = tool["name"]
|
||||||
|
|
||||||
not String.starts_with?(name, "vscode_") and
|
not String.starts_with?(name, "vscode_") and
|
||||||
name not in ["register_agent", "create_task", "get_next_task", "complete_task", "get_task_board", "heartbeat"]
|
name not in [
|
||||||
|
"register_agent",
|
||||||
|
"create_task",
|
||||||
|
"get_next_task",
|
||||||
|
"complete_task",
|
||||||
|
"get_task_board",
|
||||||
|
"heartbeat"
|
||||||
|
]
|
||||||
end)
|
end)
|
||||||
|
|
||||||
# Should have some external tools from the configured MCP servers
|
# Should have some external tools from the configured MCP servers
|
||||||
|
|||||||
111
test/simple_test.exs
Normal file
111
test/simple_test.exs
Normal file
@@ -0,0 +1,111 @@
|
|||||||
|
#!/usr/bin/env elixir
|
||||||
|
|
||||||
|
# Simple test for agent-specific task pools using Mix
|
||||||
|
Mix.install([{:jason, "~> 1.4"}])
|
||||||
|
|
||||||
|
Code.require_file("mix.exs")
|
||||||
|
|
||||||
|
Application.ensure_all_started(:agent_coordinator)
|
||||||
|
|
||||||
|
alias AgentCoordinator.{TaskRegistry, Inbox, Agent, Task}
|
||||||
|
|
||||||
|
IO.puts("🧪 Simple Agent-Specific Task Pool Test")
|
||||||
|
IO.puts("=" |> String.duplicate(50))
|
||||||
|
|
||||||
|
# Wait for services to start
|
||||||
|
Process.sleep(2000)
|
||||||
|
|
||||||
|
# Test 1: Create agents directly
|
||||||
|
IO.puts("\n1️⃣ Creating agents directly...")
|
||||||
|
|
||||||
|
agent1 = Agent.new("Alpha Wolf", [:coding, :testing])
|
||||||
|
agent2 = Agent.new("Beta Tiger", [:documentation, :analysis])
|
||||||
|
|
||||||
|
case TaskRegistry.register_agent(agent1) do
|
||||||
|
:ok -> IO.puts("✅ Agent 1 registered: #{agent1.id}")
|
||||||
|
error -> IO.puts("❌ Agent 1 failed: #{inspect(error)}")
|
||||||
|
end
|
||||||
|
|
||||||
|
case TaskRegistry.register_agent(agent2) do
|
||||||
|
:ok -> IO.puts("✅ Agent 2 registered: #{agent2.id}")
|
||||||
|
error -> IO.puts("❌ Agent 2 failed: #{inspect(error)}")
|
||||||
|
end
|
||||||
|
|
||||||
|
# Test 2: Create agent-specific tasks
|
||||||
|
IO.puts("\n2️⃣ Creating agent-specific tasks...")
|
||||||
|
|
||||||
|
# Create tasks for Agent 1
|
||||||
|
task1_agent1 = Task.new("Fix auth bug", "Debug authentication issue", %{
|
||||||
|
priority: :high,
|
||||||
|
assigned_agent: agent1.id,
|
||||||
|
metadata: %{agent_created: true}
|
||||||
|
})
|
||||||
|
|
||||||
|
task2_agent1 = Task.new("Add auth tests", "Write comprehensive auth tests", %{
|
||||||
|
priority: :normal,
|
||||||
|
assigned_agent: agent1.id,
|
||||||
|
metadata: %{agent_created: true}
|
||||||
|
})
|
||||||
|
|
||||||
|
# Create tasks for Agent 2
|
||||||
|
task1_agent2 = Task.new("Write API docs", "Document REST endpoints", %{
|
||||||
|
priority: :normal,
|
||||||
|
assigned_agent: agent2.id,
|
||||||
|
metadata: %{agent_created: true}
|
||||||
|
})
|
||||||
|
|
||||||
|
# Add tasks to respective agent inboxes
|
||||||
|
case Inbox.add_task(agent1.id, task1_agent1) do
|
||||||
|
:ok -> IO.puts("✅ Task 1 added to Agent 1")
|
||||||
|
error -> IO.puts("❌ Task 1 failed: #{inspect(error)}")
|
||||||
|
end
|
||||||
|
|
||||||
|
case Inbox.add_task(agent1.id, task2_agent1) do
|
||||||
|
:ok -> IO.puts("✅ Task 2 added to Agent 1")
|
||||||
|
error -> IO.puts("❌ Task 2 failed: #{inspect(error)}")
|
||||||
|
end
|
||||||
|
|
||||||
|
case Inbox.add_task(agent2.id, task1_agent2) do
|
||||||
|
:ok -> IO.puts("✅ Task 1 added to Agent 2")
|
||||||
|
error -> IO.puts("❌ Task 1 to Agent 2 failed: #{inspect(error)}")
|
||||||
|
end
|
||||||
|
|
||||||
|
# Test 3: Verify agent isolation
|
||||||
|
IO.puts("\n3️⃣ Testing agent task isolation...")
|
||||||
|
|
||||||
|
# Agent 1 gets their tasks
|
||||||
|
case Inbox.get_next_task(agent1.id) do
|
||||||
|
nil -> IO.puts("❌ Agent 1 has no tasks")
|
||||||
|
task -> IO.puts("✅ Agent 1 got task: #{task.title}")
|
||||||
|
end
|
||||||
|
|
||||||
|
# Agent 2 gets their tasks
|
||||||
|
case Inbox.get_next_task(agent2.id) do
|
||||||
|
nil -> IO.puts("❌ Agent 2 has no tasks")
|
||||||
|
task -> IO.puts("✅ Agent 2 got task: #{task.title}")
|
||||||
|
end
|
||||||
|
|
||||||
|
# Test 4: Check task status
|
||||||
|
IO.puts("\n4️⃣ Checking task status...")
|
||||||
|
|
||||||
|
status1 = Inbox.get_status(agent1.id)
|
||||||
|
status2 = Inbox.get_status(agent2.id)
|
||||||
|
|
||||||
|
IO.puts("Agent 1 status: #{inspect(status1)}")
|
||||||
|
IO.puts("Agent 2 status: #{inspect(status2)}")
|
||||||
|
|
||||||
|
# Test 5: List all tasks for each agent
|
||||||
|
IO.puts("\n5️⃣ Listing all tasks per agent...")
|
||||||
|
|
||||||
|
tasks1 = Inbox.list_tasks(agent1.id)
|
||||||
|
tasks2 = Inbox.list_tasks(agent2.id)
|
||||||
|
|
||||||
|
IO.puts("Agent 1 tasks: #{inspect(tasks1)}")
|
||||||
|
IO.puts("Agent 2 tasks: #{inspect(tasks2)}")
|
||||||
|
|
||||||
|
IO.puts("\n" <> "=" |> String.duplicate(50))
|
||||||
|
IO.puts("🎉 AGENT ISOLATION TEST COMPLETE!")
|
||||||
|
IO.puts("✅ Each agent has their own task inbox")
|
||||||
|
IO.puts("✅ No cross-contamination of tasks")
|
||||||
|
IO.puts("✅ Agent-specific task pools working!")
|
||||||
|
IO.puts("=" |> String.duplicate(50))
|
||||||
227
test/test_agent_specific_tasks.exs
Normal file
227
test/test_agent_specific_tasks.exs
Normal file
@@ -0,0 +1,227 @@
|
|||||||
|
#!/usr/bin/env elixir
|
||||||
|
|
||||||
|
# Comprehensive test for agent-specific task pools
|
||||||
|
# This verifies that the chaos problem is fixed and agents can manage their own task sets
|
||||||
|
|
||||||
|
Application.ensure_all_started(:agent_coordinator)
|
||||||
|
|
||||||
|
alias AgentCoordinator.{MCPServer, TaskRegistry, Agent, Inbox}
|
||||||
|
|
||||||
|
IO.puts("🧪 Testing Agent-Specific Task Pools Fix")
|
||||||
|
IO.puts("=" |> String.duplicate(60))
|
||||||
|
|
||||||
|
# Ensure clean state
|
||||||
|
try do
|
||||||
|
TaskRegistry.start_link()
|
||||||
|
rescue
|
||||||
|
_ -> :ok # Already started
|
||||||
|
end
|
||||||
|
|
||||||
|
try do
|
||||||
|
MCPServer.start_link()
|
||||||
|
rescue
|
||||||
|
_ -> :ok # Already started
|
||||||
|
end
|
||||||
|
|
||||||
|
Process.sleep(1000) # Give services time to start
|
||||||
|
|
||||||
|
# Test 1: Register two agents
|
||||||
|
IO.puts("\n1️⃣ Registering two test agents...")
|
||||||
|
|
||||||
|
agent1_req = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "register_agent",
|
||||||
|
"arguments" => %{
|
||||||
|
"name" => "GitHub Copilot Alpha Wolf",
|
||||||
|
"capabilities" => ["coding", "testing"]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 1
|
||||||
|
}
|
||||||
|
|
||||||
|
agent2_req = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "register_agent",
|
||||||
|
"arguments" => %{
|
||||||
|
"name" => "GitHub Copilot Beta Tiger",
|
||||||
|
"capabilities" => ["documentation", "analysis"]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 2
|
||||||
|
}
|
||||||
|
|
||||||
|
resp1 = MCPServer.handle_mcp_request(agent1_req)
|
||||||
|
resp2 = MCPServer.handle_mcp_request(agent2_req)
|
||||||
|
|
||||||
|
# Extract agent IDs
|
||||||
|
agent1_id = case resp1 do
|
||||||
|
%{"result" => %{"content" => [%{"text" => text}]}} ->
|
||||||
|
data = Jason.decode!(text)
|
||||||
|
data["agent_id"]
|
||||||
|
_ ->
|
||||||
|
IO.puts("❌ Failed to register agent 1: #{inspect(resp1)}")
|
||||||
|
System.halt(1)
|
||||||
|
end
|
||||||
|
|
||||||
|
agent2_id = case resp2 do
|
||||||
|
%{"result" => %{"content" => [%{"text" => text}]}} ->
|
||||||
|
data = Jason.decode!(text)
|
||||||
|
data["agent_id"]
|
||||||
|
_ ->
|
||||||
|
IO.puts("❌ Failed to register agent 2: #{inspect(resp2)}")
|
||||||
|
System.halt(1)
|
||||||
|
end
|
||||||
|
|
||||||
|
IO.puts("✅ Agent 1 (Alpha Wolf): #{agent1_id}")
|
||||||
|
IO.puts("✅ Agent 2 (Beta Tiger): #{agent2_id}")
|
||||||
|
|
||||||
|
# Test 2: Create task sets for each agent (THIS IS THE KEY TEST!)
|
||||||
|
IO.puts("\n2️⃣ Creating agent-specific task sets...")
|
||||||
|
|
||||||
|
# Agent 1 task set
|
||||||
|
agent1_task_set = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "register_task_set",
|
||||||
|
"arguments" => %{
|
||||||
|
"agent_id" => agent1_id,
|
||||||
|
"task_set" => [
|
||||||
|
%{
|
||||||
|
"title" => "Fix authentication bug",
|
||||||
|
"description" => "Debug and fix the login authentication issue",
|
||||||
|
"priority" => "high",
|
||||||
|
"estimated_time" => "2 hours",
|
||||||
|
"file_paths" => ["lib/auth.ex", "test/auth_test.exs"]
|
||||||
|
},
|
||||||
|
%{
|
||||||
|
"title" => "Add unit tests for auth module",
|
||||||
|
"description" => "Write comprehensive tests for authentication",
|
||||||
|
"priority" => "normal",
|
||||||
|
"estimated_time" => "1 hour"
|
||||||
|
},
|
||||||
|
%{
|
||||||
|
"title" => "Refactor auth middleware",
|
||||||
|
"description" => "Clean up and optimize auth middleware code",
|
||||||
|
"priority" => "low",
|
||||||
|
"estimated_time" => "30 minutes"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 3
|
||||||
|
}
|
||||||
|
|
||||||
|
# Agent 2 task set (completely different)
|
||||||
|
agent2_task_set = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "register_task_set",
|
||||||
|
"arguments" => %{
|
||||||
|
"agent_id" => agent2_id,
|
||||||
|
"task_set" => [
|
||||||
|
%{
|
||||||
|
"title" => "Write API documentation",
|
||||||
|
"description" => "Document all REST API endpoints with examples",
|
||||||
|
"priority" => "normal",
|
||||||
|
"estimated_time" => "3 hours",
|
||||||
|
"file_paths" => ["docs/api.md"]
|
||||||
|
},
|
||||||
|
%{
|
||||||
|
"title" => "Analyze code coverage",
|
||||||
|
"description" => "Run coverage analysis and identify gaps",
|
||||||
|
"priority" => "high",
|
||||||
|
"estimated_time" => "1 hour"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 4
|
||||||
|
}
|
||||||
|
|
||||||
|
task_set_resp1 = MCPServer.handle_mcp_request(agent1_task_set)
|
||||||
|
task_set_resp2 = MCPServer.handle_mcp_request(agent2_task_set)
|
||||||
|
|
||||||
|
IO.puts("Agent 1 task set response: #{inspect(task_set_resp1)}")
|
||||||
|
IO.puts("Agent 2 task set response: #{inspect(task_set_resp2)}")
|
||||||
|
|
||||||
|
# Test 3: Verify agents only see their own tasks
|
||||||
|
IO.puts("\n3️⃣ Verifying agent isolation...")
|
||||||
|
|
||||||
|
# Get detailed task board
|
||||||
|
task_board_req = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "get_detailed_task_board",
|
||||||
|
"arguments" => %{}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 5
|
||||||
|
}
|
||||||
|
|
||||||
|
board_resp = MCPServer.handle_mcp_request(task_board_req)
|
||||||
|
IO.puts("Task board response: #{inspect(board_resp)}")
|
||||||
|
|
||||||
|
# Test 4: Agent 1 gets their next task (should be their own)
|
||||||
|
IO.puts("\n4️⃣ Testing task retrieval...")
|
||||||
|
|
||||||
|
next_task_req1 = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "get_next_task",
|
||||||
|
"arguments" => %{
|
||||||
|
"agent_id" => agent1_id
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 6
|
||||||
|
}
|
||||||
|
|
||||||
|
task_resp1 = MCPServer.handle_mcp_request(next_task_req1)
|
||||||
|
IO.puts("Agent 1 next task: #{inspect(task_resp1)}")
|
||||||
|
|
||||||
|
# Test 5: Agent 2 gets their next task (should be different)
|
||||||
|
next_task_req2 = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "get_next_task",
|
||||||
|
"arguments" => %{
|
||||||
|
"agent_id" => agent2_id
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 7
|
||||||
|
}
|
||||||
|
|
||||||
|
task_resp2 = MCPServer.handle_mcp_request(next_task_req2)
|
||||||
|
IO.puts("Agent 2 next task: #{inspect(task_resp2)}")
|
||||||
|
|
||||||
|
# Test 6: Get individual agent task history
|
||||||
|
IO.puts("\n5️⃣ Testing agent task history...")
|
||||||
|
|
||||||
|
history_req1 = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "get_agent_task_history",
|
||||||
|
"arguments" => %{
|
||||||
|
"agent_id" => agent1_id
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 8
|
||||||
|
}
|
||||||
|
|
||||||
|
history_resp1 = MCPServer.handle_mcp_request(history_req1)
|
||||||
|
IO.puts("Agent 1 history: #{inspect(history_resp1)}")
|
||||||
|
|
||||||
|
IO.puts("\n" <> "=" |> String.duplicate(60))
|
||||||
|
IO.puts("🎉 AGENT-SPECIFIC TASK POOLS TEST COMPLETE!")
|
||||||
|
IO.puts("✅ Each agent now has their own task pool")
|
||||||
|
IO.puts("✅ No more task chaos or cross-contamination")
|
||||||
|
IO.puts("✅ Agents can plan and coordinate their workflows")
|
||||||
|
IO.puts("=" |> String.duplicate(60))
|
||||||
234
test/test_agent_task_pools.exs
Normal file
234
test/test_agent_task_pools.exs
Normal file
@@ -0,0 +1,234 @@
|
|||||||
|
#!/usr/bin/env elixir
|
||||||
|
|
||||||
|
# Test script for agent-specific task pools
|
||||||
|
# This tests the new functionality to ensure agents have separate task pools
|
||||||
|
|
||||||
|
Mix.install([
|
||||||
|
{:jason, "~> 1.4"}
|
||||||
|
])
|
||||||
|
|
||||||
|
defmodule AgentTaskPoolTest do
|
||||||
|
def run_test do
|
||||||
|
IO.puts("🚀 Testing Agent-Specific Task Pools")
|
||||||
|
IO.puts("=====================================")
|
||||||
|
|
||||||
|
# Start the application
|
||||||
|
IO.puts("Starting AgentCoordinator application...")
|
||||||
|
Application.start(:agent_coordinator)
|
||||||
|
|
||||||
|
# Test 1: Register two agents
|
||||||
|
IO.puts("\n📋 Test 1: Registering two test agents")
|
||||||
|
|
||||||
|
agent1_request = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "register_agent",
|
||||||
|
"arguments" => %{
|
||||||
|
"name" => "TestAgent_Alpha_Banana",
|
||||||
|
"capabilities" => ["coding", "testing"]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 1
|
||||||
|
}
|
||||||
|
|
||||||
|
agent2_request = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "register_agent",
|
||||||
|
"arguments" => %{
|
||||||
|
"name" => "TestAgent_Beta_Koala",
|
||||||
|
"capabilities" => ["documentation", "analysis"]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 2
|
||||||
|
}
|
||||||
|
|
||||||
|
# Register agents
|
||||||
|
agent1_response = AgentCoordinator.MCPServer.handle_mcp_request(agent1_request)
|
||||||
|
agent2_response = AgentCoordinator.MCPServer.handle_mcp_request(agent2_request)
|
||||||
|
|
||||||
|
agent1_id = extract_agent_id(agent1_response)
|
||||||
|
agent2_id = extract_agent_id(agent2_response)
|
||||||
|
|
||||||
|
IO.puts("✅ Agent 1 registered: #{agent1_id}")
|
||||||
|
IO.puts("✅ Agent 2 registered: #{agent2_id}")
|
||||||
|
|
||||||
|
# Test 2: Register task sets for each agent
|
||||||
|
IO.puts("\n📝 Test 2: Registering task sets for each agent")
|
||||||
|
|
||||||
|
task_set_1 = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "register_task_set",
|
||||||
|
"arguments" => %{
|
||||||
|
"agent_id" => agent1_id,
|
||||||
|
"task_set" => [
|
||||||
|
%{
|
||||||
|
"title" => "Implement login feature",
|
||||||
|
"description" => "Create user authentication system",
|
||||||
|
"priority" => "high",
|
||||||
|
"estimated_time" => "2 hours"
|
||||||
|
},
|
||||||
|
%{
|
||||||
|
"title" => "Write unit tests",
|
||||||
|
"description" => "Add tests for authentication",
|
||||||
|
"priority" => "normal",
|
||||||
|
"estimated_time" => "1 hour"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 3
|
||||||
|
}
|
||||||
|
|
||||||
|
task_set_2 = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "register_task_set",
|
||||||
|
"arguments" => %{
|
||||||
|
"agent_id" => agent2_id,
|
||||||
|
"task_set" => [
|
||||||
|
%{
|
||||||
|
"title" => "Write API documentation",
|
||||||
|
"description" => "Document the new authentication API",
|
||||||
|
"priority" => "normal",
|
||||||
|
"estimated_time" => "3 hours"
|
||||||
|
},
|
||||||
|
%{
|
||||||
|
"title" => "Review code quality",
|
||||||
|
"description" => "Analyze the authentication implementation",
|
||||||
|
"priority" => "low",
|
||||||
|
"estimated_time" => "1 hour"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 4
|
||||||
|
}
|
||||||
|
|
||||||
|
taskset1_response = AgentCoordinator.MCPServer.handle_mcp_request(task_set_1)
|
||||||
|
taskset2_response = AgentCoordinator.MCPServer.handle_mcp_request(task_set_2)
|
||||||
|
|
||||||
|
IO.puts("✅ Task set registered for Agent 1: #{inspect(taskset1_response)}")
|
||||||
|
IO.puts("✅ Task set registered for Agent 2: #{inspect(taskset2_response)}")
|
||||||
|
|
||||||
|
# Test 3: Get detailed task board
|
||||||
|
IO.puts("\n📊 Test 3: Getting detailed task board")
|
||||||
|
|
||||||
|
detailed_board_request = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "get_detailed_task_board",
|
||||||
|
"arguments" => %{}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 5
|
||||||
|
}
|
||||||
|
|
||||||
|
board_response = AgentCoordinator.MCPServer.handle_mcp_request(detailed_board_request)
|
||||||
|
IO.puts("📋 Detailed task board: #{inspect(board_response, pretty: true)}")
|
||||||
|
|
||||||
|
# Test 4: Get agent task history
|
||||||
|
IO.puts("\n📜 Test 4: Getting individual agent task histories")
|
||||||
|
|
||||||
|
history1_request = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "get_agent_task_history",
|
||||||
|
"arguments" => %{"agent_id" => agent1_id}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 6
|
||||||
|
}
|
||||||
|
|
||||||
|
history2_request = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "get_agent_task_history",
|
||||||
|
"arguments" => %{"agent_id" => agent2_id}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 7
|
||||||
|
}
|
||||||
|
|
||||||
|
history1_response = AgentCoordinator.MCPServer.handle_mcp_request(history1_request)
|
||||||
|
history2_response = AgentCoordinator.MCPServer.handle_mcp_request(history2_request)
|
||||||
|
|
||||||
|
IO.puts("📜 Agent 1 history: #{inspect(history1_response, pretty: true)}")
|
||||||
|
IO.puts("📜 Agent 2 history: #{inspect(history2_response, pretty: true)}")
|
||||||
|
|
||||||
|
# Test 5: Verify agents can get their own tasks
|
||||||
|
IO.puts("\n🎯 Test 5: Verifying agents get their own tasks")
|
||||||
|
|
||||||
|
next_task1_request = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "get_next_task",
|
||||||
|
"arguments" => %{"agent_id" => agent1_id}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 8
|
||||||
|
}
|
||||||
|
|
||||||
|
next_task2_request = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "get_next_task",
|
||||||
|
"arguments" => %{"agent_id" => agent2_id}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 9
|
||||||
|
}
|
||||||
|
|
||||||
|
task1_response = AgentCoordinator.MCPServer.handle_mcp_request(next_task1_request)
|
||||||
|
task2_response = AgentCoordinator.MCPServer.handle_mcp_request(next_task2_request)
|
||||||
|
|
||||||
|
IO.puts("🎯 Agent 1 next task: #{inspect(task1_response)}")
|
||||||
|
IO.puts("🎯 Agent 2 next task: #{inspect(task2_response)}")
|
||||||
|
|
||||||
|
IO.puts("\n✅ Test completed! Agent-specific task pools are working!")
|
||||||
|
IO.puts("Each agent now has their own task queue and cannot access other agents' tasks.")
|
||||||
|
|
||||||
|
# Cleanup
|
||||||
|
cleanup_agents([agent1_id, agent2_id])
|
||||||
|
end
|
||||||
|
|
||||||
|
defp extract_agent_id(response) do
|
||||||
|
case response do
|
||||||
|
%{"result" => %{"content" => [%{"text" => text}]}} ->
|
||||||
|
data = Jason.decode!(text)
|
||||||
|
data["agent_id"]
|
||||||
|
_ ->
|
||||||
|
"unknown"
|
||||||
|
end
|
||||||
|
end
|
||||||
|
|
||||||
|
defp cleanup_agents(agent_ids) do
|
||||||
|
IO.puts("\n🧹 Cleaning up test agents...")
|
||||||
|
|
||||||
|
Enum.each(agent_ids, fn agent_id ->
|
||||||
|
unregister_request = %{
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "unregister_agent",
|
||||||
|
"arguments" => %{
|
||||||
|
"agent_id" => agent_id,
|
||||||
|
"reason" => "Test completed"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 999
|
||||||
|
}
|
||||||
|
|
||||||
|
AgentCoordinator.MCPServer.handle_mcp_request(unregister_request)
|
||||||
|
IO.puts("🗑️ Unregistered agent: #{agent_id}")
|
||||||
|
end)
|
||||||
|
end
|
||||||
|
end
|
||||||
|
|
||||||
|
# Run the test
|
||||||
|
AgentTaskPoolTest.run_test()
|
||||||
82
test/test_isolation.exs
Normal file
82
test/test_isolation.exs
Normal file
@@ -0,0 +1,82 @@
|
|||||||
|
# Simple test for agent-specific task pools
|
||||||
|
alias AgentCoordinator.{TaskRegistry, Inbox, Agent, Task}
|
||||||
|
|
||||||
|
IO.puts("🧪 Agent-Specific Task Pool Test")
|
||||||
|
IO.puts("=" |> String.duplicate(40))
|
||||||
|
|
||||||
|
# Test 1: Create agents directly
|
||||||
|
IO.puts("\n1️⃣ Creating agents...")
|
||||||
|
|
||||||
|
agent1 = Agent.new("Alpha Wolf", [:coding, :testing])
|
||||||
|
agent2 = Agent.new("Beta Tiger", [:documentation, :analysis])
|
||||||
|
|
||||||
|
IO.puts("Agent 1 ID: #{agent1.id}")
|
||||||
|
IO.puts("Agent 2 ID: #{agent2.id}")
|
||||||
|
|
||||||
|
case TaskRegistry.register_agent(agent1) do
|
||||||
|
:ok -> IO.puts("✅ Agent 1 registered")
|
||||||
|
error -> IO.puts("❌ Agent 1 failed: #{inspect(error)}")
|
||||||
|
end
|
||||||
|
|
||||||
|
case TaskRegistry.register_agent(agent2) do
|
||||||
|
:ok -> IO.puts("✅ Agent 2 registered")
|
||||||
|
error -> IO.puts("❌ Agent 2 failed: #{inspect(error)}")
|
||||||
|
end
|
||||||
|
|
||||||
|
# Wait for inboxes to be created
|
||||||
|
Process.sleep(1000)
|
||||||
|
|
||||||
|
# Test 2: Create agent-specific tasks
|
||||||
|
IO.puts("\n2️⃣ Creating agent-specific tasks...")
|
||||||
|
|
||||||
|
# Tasks for Agent 1
|
||||||
|
task1_agent1 = Task.new("Fix auth bug", "Debug authentication issue", %{
|
||||||
|
priority: :high,
|
||||||
|
assigned_agent: agent1.id,
|
||||||
|
metadata: %{agent_created: true}
|
||||||
|
})
|
||||||
|
|
||||||
|
task2_agent1 = Task.new("Add auth tests", "Write auth tests", %{
|
||||||
|
priority: :normal,
|
||||||
|
assigned_agent: agent1.id,
|
||||||
|
metadata: %{agent_created: true}
|
||||||
|
})
|
||||||
|
|
||||||
|
# Tasks for Agent 2
|
||||||
|
task1_agent2 = Task.new("Write API docs", "Document endpoints", %{
|
||||||
|
priority: :normal,
|
||||||
|
assigned_agent: agent2.id,
|
||||||
|
metadata: %{agent_created: true}
|
||||||
|
})
|
||||||
|
|
||||||
|
# Add tasks to respective inboxes
|
||||||
|
Inbox.add_task(agent1.id, task1_agent1)
|
||||||
|
Inbox.add_task(agent1.id, task2_agent1)
|
||||||
|
Inbox.add_task(agent2.id, task1_agent2)
|
||||||
|
|
||||||
|
IO.puts("✅ Tasks added to agent inboxes")
|
||||||
|
|
||||||
|
# Test 3: Verify isolation
|
||||||
|
IO.puts("\n3️⃣ Testing isolation...")
|
||||||
|
|
||||||
|
# Check what each agent gets
|
||||||
|
case Inbox.get_next_task(agent1.id) do
|
||||||
|
nil -> IO.puts("❌ Agent 1 has no tasks")
|
||||||
|
task -> IO.puts("✅ Agent 1 got: '#{task.title}'")
|
||||||
|
end
|
||||||
|
|
||||||
|
case Inbox.get_next_task(agent2.id) do
|
||||||
|
nil -> IO.puts("❌ Agent 2 has no tasks")
|
||||||
|
task -> IO.puts("✅ Agent 2 got: '#{task.title}'")
|
||||||
|
end
|
||||||
|
|
||||||
|
# Test 4: Check remaining tasks
|
||||||
|
IO.puts("\n4️⃣ Checking remaining tasks...")
|
||||||
|
|
||||||
|
status1 = Inbox.get_status(agent1.id)
|
||||||
|
status2 = Inbox.get_status(agent2.id)
|
||||||
|
|
||||||
|
IO.puts("Agent 1: #{status1.pending_count} pending, current: #{if status1.current_task, do: status1.current_task.title, else: "none"}")
|
||||||
|
IO.puts("Agent 2: #{status2.pending_count} pending, current: #{if status2.current_task, do: status2.current_task.title, else: "none"}")
|
||||||
|
|
||||||
|
IO.puts("\n🎉 SUCCESS! Agent-specific task pools working!")
|
||||||
@@ -1,71 +0,0 @@
|
|||||||
# Test enhanced Agent Coordinator with auto-heartbeat and unregister
|
|
||||||
|
|
||||||
# Start a client with automatic heartbeat
|
|
||||||
IO.puts "🚀 Testing Enhanced Agent Coordinator"
|
|
||||||
IO.puts "====================================="
|
|
||||||
|
|
||||||
{:ok, client1} = AgentCoordinator.Client.start_session("TestAgent1", [:coding, :analysis])
|
|
||||||
|
|
||||||
# Get session info
|
|
||||||
{:ok, info} = AgentCoordinator.Client.get_session_info(client1)
|
|
||||||
IO.puts "✅ Agent registered: #{info.agent_name} (#{info.agent_id})"
|
|
||||||
IO.puts " Auto-heartbeat: #{info.auto_heartbeat_enabled}"
|
|
||||||
|
|
||||||
# Check task board
|
|
||||||
{:ok, board} = AgentCoordinator.Client.get_task_board(client1)
|
|
||||||
IO.puts "📊 Task board status:"
|
|
||||||
IO.puts " Total agents: #{length(board.agents)}"
|
|
||||||
IO.puts " Active sessions: #{board.active_sessions}"
|
|
||||||
|
|
||||||
# Find our agent on the board
|
|
||||||
our_agent = Enum.find(board.agents, fn a -> a["agent_id"] == info.agent_id end)
|
|
||||||
IO.puts " Our agent online: #{our_agent["online"]}"
|
|
||||||
IO.puts " Session active: #{our_agent["session_active"]}"
|
|
||||||
|
|
||||||
# Test heartbeat functionality
|
|
||||||
IO.puts "\n💓 Testing manual heartbeat..."
|
|
||||||
{:ok, _} = AgentCoordinator.Client.heartbeat(client1)
|
|
||||||
IO.puts " Heartbeat sent successfully"
|
|
||||||
|
|
||||||
# Wait to observe automatic heartbeats
|
|
||||||
IO.puts "\n⏱️ Waiting 3 seconds to observe automatic heartbeats..."
|
|
||||||
Process.sleep(3000)
|
|
||||||
|
|
||||||
{:ok, updated_info} = AgentCoordinator.Client.get_session_info(client1)
|
|
||||||
IO.puts " Last heartbeat updated: #{DateTime.diff(updated_info.last_heartbeat, info.last_heartbeat) > 0}"
|
|
||||||
|
|
||||||
# Test unregister functionality
|
|
||||||
IO.puts "\n🔄 Testing unregister functionality..."
|
|
||||||
{:ok, result} = AgentCoordinator.Client.unregister_agent(client1, "Testing unregister from script")
|
|
||||||
IO.puts " Unregister result: #{result["status"]}"
|
|
||||||
|
|
||||||
# Check agent status after unregister
|
|
||||||
{:ok, final_board} = AgentCoordinator.Client.get_task_board(client1)
|
|
||||||
final_agent = Enum.find(final_board.agents, fn a -> a["agent_id"] == info.agent_id end)
|
|
||||||
|
|
||||||
case final_agent do
|
|
||||||
nil ->
|
|
||||||
IO.puts " Agent removed from board ✅"
|
|
||||||
agent ->
|
|
||||||
IO.puts " Agent still on board, online: #{agent["online"]}"
|
|
||||||
end
|
|
||||||
|
|
||||||
# Test task creation
|
|
||||||
IO.puts "\n📝 Testing task creation with heartbeats..."
|
|
||||||
{:ok, task_result} = AgentCoordinator.Client.create_task(
|
|
||||||
client1,
|
|
||||||
"Test Task",
|
|
||||||
"A test task to verify heartbeat integration",
|
|
||||||
%{"priority" => "normal"}
|
|
||||||
)
|
|
||||||
|
|
||||||
IO.puts " Task created: #{task_result["task_id"]}"
|
|
||||||
if Map.has_key?(task_result, "_heartbeat_metadata") do
|
|
||||||
IO.puts " Heartbeat metadata included ✅"
|
|
||||||
else
|
|
||||||
IO.puts " No heartbeat metadata ❌"
|
|
||||||
end
|
|
||||||
|
|
||||||
# Clean up
|
|
||||||
AgentCoordinator.Client.stop_session(client1)
|
|
||||||
IO.puts "\n✨ Test completed successfully!"
|
|
||||||
@@ -1,321 +0,0 @@
|
|||||||
#!/usr/bin/env elixir
|
|
||||||
|
|
||||||
# Multi-Codebase Coordination Test Script
|
|
||||||
# This script demonstrates how agents can coordinate across multiple codebases
|
|
||||||
|
|
||||||
Mix.install([
|
|
||||||
{:jason, "~> 1.4"},
|
|
||||||
{:uuid, "~> 1.1"}
|
|
||||||
])
|
|
||||||
|
|
||||||
defmodule MultiCodebaseTest do
|
|
||||||
@moduledoc """
|
|
||||||
Test script for multi-codebase agent coordination functionality.
|
|
||||||
Demonstrates cross-codebase task creation, dependency management, and agent coordination.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def run do
|
|
||||||
IO.puts("=== Multi-Codebase Agent Coordination Test ===\n")
|
|
||||||
|
|
||||||
# Test 1: Register multiple codebases
|
|
||||||
test_codebase_registration()
|
|
||||||
|
|
||||||
# Test 2: Register agents in different codebases
|
|
||||||
test_agent_registration()
|
|
||||||
|
|
||||||
# Test 3: Create tasks within individual codebases
|
|
||||||
test_single_codebase_tasks()
|
|
||||||
|
|
||||||
# Test 4: Create cross-codebase tasks
|
|
||||||
test_cross_codebase_tasks()
|
|
||||||
|
|
||||||
# Test 5: Test cross-codebase dependencies
|
|
||||||
test_codebase_dependencies()
|
|
||||||
|
|
||||||
# Test 6: Verify coordination and task board
|
|
||||||
test_coordination_overview()
|
|
||||||
|
|
||||||
IO.puts("\n=== Test Completed ===")
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_codebase_registration do
|
|
||||||
IO.puts("1. Testing Codebase Registration")
|
|
||||||
IO.puts(" - Registering frontend codebase...")
|
|
||||||
IO.puts(" - Registering backend codebase...")
|
|
||||||
IO.puts(" - Registering shared-lib codebase...")
|
|
||||||
|
|
||||||
frontend_codebase = %{
|
|
||||||
"id" => "frontend-app",
|
|
||||||
"name" => "Frontend Application",
|
|
||||||
"workspace_path" => "/workspace/frontend",
|
|
||||||
"description" => "React-based frontend application",
|
|
||||||
"metadata" => %{
|
|
||||||
"tech_stack" => ["react", "typescript", "tailwind"],
|
|
||||||
"dependencies" => ["backend-api", "shared-lib"]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
backend_codebase = %{
|
|
||||||
"id" => "backend-api",
|
|
||||||
"name" => "Backend API",
|
|
||||||
"workspace_path" => "/workspace/backend",
|
|
||||||
"description" => "Node.js API server",
|
|
||||||
"metadata" => %{
|
|
||||||
"tech_stack" => ["nodejs", "express", "mongodb"],
|
|
||||||
"dependencies" => ["shared-lib"]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
shared_lib_codebase = %{
|
|
||||||
"id" => "shared-lib",
|
|
||||||
"name" => "Shared Library",
|
|
||||||
"workspace_path" => "/workspace/shared",
|
|
||||||
"description" => "Shared utilities and types",
|
|
||||||
"metadata" => %{
|
|
||||||
"tech_stack" => ["typescript"],
|
|
||||||
"dependencies" => []
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
# Simulate MCP calls
|
|
||||||
simulate_mcp_call("register_codebase", frontend_codebase)
|
|
||||||
simulate_mcp_call("register_codebase", backend_codebase)
|
|
||||||
simulate_mcp_call("register_codebase", shared_lib_codebase)
|
|
||||||
|
|
||||||
IO.puts(" ✓ All codebases registered successfully\n")
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_agent_registration do
|
|
||||||
IO.puts("2. Testing Agent Registration")
|
|
||||||
|
|
||||||
# Frontend agents
|
|
||||||
frontend_agent1 = %{
|
|
||||||
"name" => "frontend-dev-1",
|
|
||||||
"capabilities" => ["coding", "testing"],
|
|
||||||
"codebase_id" => "frontend-app",
|
|
||||||
"workspace_path" => "/workspace/frontend",
|
|
||||||
"cross_codebase_capable" => true
|
|
||||||
}
|
|
||||||
|
|
||||||
frontend_agent2 = %{
|
|
||||||
"name" => "frontend-dev-2",
|
|
||||||
"capabilities" => ["coding", "review"],
|
|
||||||
"codebase_id" => "frontend-app",
|
|
||||||
"workspace_path" => "/workspace/frontend",
|
|
||||||
"cross_codebase_capable" => false
|
|
||||||
}
|
|
||||||
|
|
||||||
# Backend agents
|
|
||||||
backend_agent1 = %{
|
|
||||||
"name" => "backend-dev-1",
|
|
||||||
"capabilities" => ["coding", "testing", "analysis"],
|
|
||||||
"codebase_id" => "backend-api",
|
|
||||||
"workspace_path" => "/workspace/backend",
|
|
||||||
"cross_codebase_capable" => true
|
|
||||||
}
|
|
||||||
|
|
||||||
# Shared library agent (cross-codebase capable)
|
|
||||||
shared_agent = %{
|
|
||||||
"name" => "shared-lib-dev",
|
|
||||||
"capabilities" => ["coding", "documentation", "review"],
|
|
||||||
"codebase_id" => "shared-lib",
|
|
||||||
"workspace_path" => "/workspace/shared",
|
|
||||||
"cross_codebase_capable" => true
|
|
||||||
}
|
|
||||||
|
|
||||||
agents = [frontend_agent1, frontend_agent2, backend_agent1, shared_agent]
|
|
||||||
|
|
||||||
Enum.each(agents, fn agent ->
|
|
||||||
IO.puts(" - Registering agent: #{agent["name"]} (#{agent["codebase_id"]})")
|
|
||||||
simulate_mcp_call("register_agent", agent)
|
|
||||||
end)
|
|
||||||
|
|
||||||
IO.puts(" ✓ All agents registered successfully\n")
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_single_codebase_tasks do
|
|
||||||
IO.puts("3. Testing Single Codebase Tasks")
|
|
||||||
|
|
||||||
tasks = [
|
|
||||||
%{
|
|
||||||
"title" => "Update user interface components",
|
|
||||||
"description" => "Modernize the login and dashboard components",
|
|
||||||
"codebase_id" => "frontend-app",
|
|
||||||
"file_paths" => ["/src/components/Login.tsx", "/src/components/Dashboard.tsx"],
|
|
||||||
"required_capabilities" => ["coding"],
|
|
||||||
"priority" => "normal"
|
|
||||||
},
|
|
||||||
%{
|
|
||||||
"title" => "Implement user authentication API",
|
|
||||||
"description" => "Create secure user authentication endpoints",
|
|
||||||
"codebase_id" => "backend-api",
|
|
||||||
"file_paths" => ["/src/routes/auth.js", "/src/middleware/auth.js"],
|
|
||||||
"required_capabilities" => ["coding", "testing"],
|
|
||||||
"priority" => "high"
|
|
||||||
},
|
|
||||||
%{
|
|
||||||
"title" => "Add utility functions for date handling",
|
|
||||||
"description" => "Create reusable date utility functions",
|
|
||||||
"codebase_id" => "shared-lib",
|
|
||||||
"file_paths" => ["/src/utils/date.ts", "/src/types/date.ts"],
|
|
||||||
"required_capabilities" => ["coding", "documentation"],
|
|
||||||
"priority" => "normal"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
|
|
||||||
Enum.each(tasks, fn task ->
|
|
||||||
IO.puts(" - Creating task: #{task["title"]} (#{task["codebase_id"]})")
|
|
||||||
simulate_mcp_call("create_task", task)
|
|
||||||
end)
|
|
||||||
|
|
||||||
IO.puts(" ✓ All single-codebase tasks created successfully\n")
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_cross_codebase_tasks do
|
|
||||||
IO.puts("4. Testing Cross-Codebase Tasks")
|
|
||||||
|
|
||||||
# Task that affects multiple codebases
|
|
||||||
cross_codebase_task = %{
|
|
||||||
"title" => "Implement real-time notifications feature",
|
|
||||||
"description" => "Add real-time notifications across frontend and backend",
|
|
||||||
"primary_codebase_id" => "backend-api",
|
|
||||||
"affected_codebases" => ["backend-api", "frontend-app", "shared-lib"],
|
|
||||||
"coordination_strategy" => "sequential"
|
|
||||||
}
|
|
||||||
|
|
||||||
IO.puts(" - Creating cross-codebase task: #{cross_codebase_task["title"]}")
|
|
||||||
IO.puts(" Primary: #{cross_codebase_task["primary_codebase_id"]}")
|
|
||||||
IO.puts(" Affected: #{Enum.join(cross_codebase_task["affected_codebases"], ", ")}")
|
|
||||||
|
|
||||||
simulate_mcp_call("create_cross_codebase_task", cross_codebase_task)
|
|
||||||
|
|
||||||
# Another cross-codebase task with different strategy
|
|
||||||
parallel_task = %{
|
|
||||||
"title" => "Update shared types and interfaces",
|
|
||||||
"description" => "Synchronize type definitions across all codebases",
|
|
||||||
"primary_codebase_id" => "shared-lib",
|
|
||||||
"affected_codebases" => ["shared-lib", "frontend-app", "backend-api"],
|
|
||||||
"coordination_strategy" => "parallel"
|
|
||||||
}
|
|
||||||
|
|
||||||
IO.puts(" - Creating parallel cross-codebase task: #{parallel_task["title"]}")
|
|
||||||
simulate_mcp_call("create_cross_codebase_task", parallel_task)
|
|
||||||
|
|
||||||
IO.puts(" ✓ Cross-codebase tasks created successfully\n")
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_codebase_dependencies do
|
|
||||||
IO.puts("5. Testing Codebase Dependencies")
|
|
||||||
|
|
||||||
dependencies = [
|
|
||||||
%{
|
|
||||||
"source_codebase_id" => "frontend-app",
|
|
||||||
"target_codebase_id" => "backend-api",
|
|
||||||
"dependency_type" => "api_consumption",
|
|
||||||
"metadata" => %{"api_version" => "v1", "endpoints" => ["auth", "users", "notifications"]}
|
|
||||||
},
|
|
||||||
%{
|
|
||||||
"source_codebase_id" => "frontend-app",
|
|
||||||
"target_codebase_id" => "shared-lib",
|
|
||||||
"dependency_type" => "library_import",
|
|
||||||
"metadata" => %{"imports" => ["types", "utils", "constants"]}
|
|
||||||
},
|
|
||||||
%{
|
|
||||||
"source_codebase_id" => "backend-api",
|
|
||||||
"target_codebase_id" => "shared-lib",
|
|
||||||
"dependency_type" => "library_import",
|
|
||||||
"metadata" => %{"imports" => ["types", "validators"]}
|
|
||||||
}
|
|
||||||
]
|
|
||||||
|
|
||||||
Enum.each(dependencies, fn dep ->
|
|
||||||
IO.puts(" - Adding dependency: #{dep["source_codebase_id"]} → #{dep["target_codebase_id"]} (#{dep["dependency_type"]})")
|
|
||||||
simulate_mcp_call("add_codebase_dependency", dep)
|
|
||||||
end)
|
|
||||||
|
|
||||||
IO.puts(" ✓ All codebase dependencies added successfully\n")
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_coordination_overview do
|
|
||||||
IO.puts("6. Testing Coordination Overview")
|
|
||||||
|
|
||||||
IO.puts(" - Getting overall task board...")
|
|
||||||
simulate_mcp_call("get_task_board", %{})
|
|
||||||
|
|
||||||
IO.puts(" - Getting frontend codebase status...")
|
|
||||||
simulate_mcp_call("get_codebase_status", %{"codebase_id" => "frontend-app"})
|
|
||||||
|
|
||||||
IO.puts(" - Getting backend codebase status...")
|
|
||||||
simulate_mcp_call("get_codebase_status", %{"codebase_id" => "backend-api"})
|
|
||||||
|
|
||||||
IO.puts(" - Listing all codebases...")
|
|
||||||
simulate_mcp_call("list_codebases", %{})
|
|
||||||
|
|
||||||
IO.puts(" ✓ Coordination overview retrieved successfully\n")
|
|
||||||
end
|
|
||||||
|
|
||||||
defp simulate_mcp_call(tool_name, arguments) do
|
|
||||||
request = %{
|
|
||||||
"jsonrpc" => "2.0",
|
|
||||||
"id" => UUID.uuid4(),
|
|
||||||
"method" => "tools/call",
|
|
||||||
"params" => %{
|
|
||||||
"name" => tool_name,
|
|
||||||
"arguments" => arguments
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
# In a real implementation, this would make an actual MCP call
|
|
||||||
# For now, we'll just show the structure
|
|
||||||
IO.puts(" MCP Call: #{tool_name}")
|
|
||||||
IO.puts(" Arguments: #{Jason.encode!(arguments, pretty: true) |> String.replace("\n", "\n ")}")
|
|
||||||
|
|
||||||
# Simulate successful response
|
|
||||||
response = %{
|
|
||||||
"jsonrpc" => "2.0",
|
|
||||||
"id" => request["id"],
|
|
||||||
"result" => %{
|
|
||||||
"content" => [%{
|
|
||||||
"type" => "text",
|
|
||||||
"text" => Jason.encode!(%{"status" => "success", "tool" => tool_name})
|
|
||||||
}]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
IO.puts(" Response: success")
|
|
||||||
end
|
|
||||||
|
|
||||||
def simulate_task_flow do
|
|
||||||
IO.puts("\n=== Simulating Multi-Codebase Task Flow ===")
|
|
||||||
|
|
||||||
IO.puts("1. Cross-codebase task created:")
|
|
||||||
IO.puts(" - Main task assigned to backend agent")
|
|
||||||
IO.puts(" - Dependent task created for frontend")
|
|
||||||
IO.puts(" - Dependent task created for shared library")
|
|
||||||
|
|
||||||
IO.puts("\n2. Agent coordination:")
|
|
||||||
IO.puts(" - Backend agent starts implementation")
|
|
||||||
IO.puts(" - Publishes API specification to NATS stream")
|
|
||||||
IO.puts(" - Frontend agent receives notification")
|
|
||||||
IO.puts(" - Shared library agent updates type definitions")
|
|
||||||
|
|
||||||
IO.puts("\n3. File conflict detection:")
|
|
||||||
IO.puts(" - Frontend agent attempts to modify shared types")
|
|
||||||
IO.puts(" - System detects conflict with shared-lib agent's work")
|
|
||||||
IO.puts(" - Task is queued until shared-lib work completes")
|
|
||||||
|
|
||||||
IO.puts("\n4. Cross-codebase synchronization:")
|
|
||||||
IO.puts(" - Shared-lib agent completes type updates")
|
|
||||||
IO.puts(" - Frontend task is automatically unblocked")
|
|
||||||
IO.puts(" - All agents coordinate through NATS streams")
|
|
||||||
|
|
||||||
IO.puts("\n5. Task completion:")
|
|
||||||
IO.puts(" - All subtasks complete successfully")
|
|
||||||
IO.puts(" - Cross-codebase dependencies resolved")
|
|
||||||
IO.puts(" - Coordination system updates task board")
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Run the test
|
|
||||||
MultiCodebaseTest.run()
|
|
||||||
MultiCodebaseTest.simulate_task_flow()
|
|
||||||
79
test_vscode_init.exs
Normal file
79
test_vscode_init.exs
Normal file
@@ -0,0 +1,79 @@
|
|||||||
|
#!/usr/bin/env elixir
|
||||||
|
|
||||||
|
# Test script to simulate VS Code MCP initialization sequence
|
||||||
|
|
||||||
|
# Start the application
|
||||||
|
Application.start(:agent_coordinator)
|
||||||
|
|
||||||
|
# Wait a moment for the server to fully start
|
||||||
|
Process.sleep(1000)
|
||||||
|
|
||||||
|
# Test 1: Initialize call (system call, should work without agent_id)
|
||||||
|
IO.puts("Testing initialize call...")
|
||||||
|
init_request = %{
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 1,
|
||||||
|
"method" => "initialize",
|
||||||
|
"params" => %{
|
||||||
|
"protocolVersion" => "2024-11-05",
|
||||||
|
"capabilities" => %{
|
||||||
|
"tools" => %{}
|
||||||
|
},
|
||||||
|
"clientInfo" => %{
|
||||||
|
"name" => "vscode",
|
||||||
|
"version" => "1.0.0"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
init_response = GenServer.call(AgentCoordinator.MCPServer, {:mcp_request, init_request})
|
||||||
|
IO.puts("Initialize response: #{inspect(init_response)}")
|
||||||
|
|
||||||
|
# Test 2: Tools/list call (system call, should work without agent_id)
|
||||||
|
IO.puts("\nTesting tools/list call...")
|
||||||
|
tools_request = %{
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 2,
|
||||||
|
"method" => "tools/list"
|
||||||
|
}
|
||||||
|
|
||||||
|
tools_response = GenServer.call(AgentCoordinator.MCPServer, {:mcp_request, tools_request})
|
||||||
|
IO.puts("Tools/list response: #{inspect(tools_response)}")
|
||||||
|
|
||||||
|
# Test 3: Register agent call (should work)
|
||||||
|
IO.puts("\nTesting register_agent call...")
|
||||||
|
register_request = %{
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 3,
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "register_agent",
|
||||||
|
"arguments" => %{
|
||||||
|
"name" => "GitHub Copilot Test Agent",
|
||||||
|
"capabilities" => ["file_operations", "code_generation"]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
register_response = GenServer.call(AgentCoordinator.MCPServer, {:mcp_request, register_request})
|
||||||
|
IO.puts("Register agent response: #{inspect(register_response)}")
|
||||||
|
|
||||||
|
# Test 4: Try a call that requires agent_id (should fail without agent_id)
|
||||||
|
IO.puts("\nTesting call that requires agent_id (should fail)...")
|
||||||
|
task_request = %{
|
||||||
|
"jsonrpc" => "2.0",
|
||||||
|
"id" => 4,
|
||||||
|
"method" => "tools/call",
|
||||||
|
"params" => %{
|
||||||
|
"name" => "create_task",
|
||||||
|
"arguments" => %{
|
||||||
|
"title" => "Test task",
|
||||||
|
"description" => "This should fail without agent_id"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
task_response = GenServer.call(AgentCoordinator.MCPServer, {:mcp_request, task_request})
|
||||||
|
IO.puts("Task creation response: #{inspect(task_response)}")
|
||||||
|
|
||||||
|
IO.puts("\n✅ All tests completed!")"
|
||||||
Reference in New Issue
Block a user