Fix inbox creation issues in agent coordinator

- Fixed Task.new/3 to handle both maps and keyword lists
- Added robust inbox existence checking in find_available_agent
- Ensured inbox creation during agent registration and task assignment
- Added the ensure_inbox_exists helper function to avoid crashes
Author: Ra
Date: 2025-08-23 14:46:28 -07:00
Commit: 943d8ad4d7 (parent: 5048db99c7)
40 changed files with 7798 additions and 404 deletions

.gitignore (vendored): 70 additions

@@ -21,3 +21,73 @@ agent_coordinator-*.tar
# Temporary files, for example, from tests.
/tmp/
# IDE and Editor files
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store
.vimrc
.vim/
# OS generated files
Thumbs.db
# Log files
*.log
logs/
/tmp/nats.log
/tmp/nats.pid
# Environment and configuration files
.env
.env.local
.env.production
config/dev.secret.exs
config/prod.secret.exs
# Development and testing artifacts
*.beam
*.plt
*.dialyzer_plt
dialyzer_plt
dialyzer.plt
priv/plts/
# NATS related files
nats.log
nats.pid
# Python cache and virtual environments
__pycache__/
*.py[cod]
*$py.class
.Python
env/
venv/
ENV/
env.bak/
venv.bak/
.pytest_cache/
# Node.js (if any frontend components are added)
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Coverage reports
cover/
coverage/
*.cover
*.lcov
# Backup files
*.backup
*.bak
*.orig
# Claude settings (local configuration)
.claude/

AUTO_HEARTBEAT.md (new file): 333 additions

@@ -0,0 +1,333 @@
# Unified MCP Server with Auto-Heartbeat System Documentation
## Overview
The Agent Coordinator now operates as a **unified MCP server** that internally manages all external MCP servers (Context7, Figma, Filesystem, Firebase, Memory, Sequential Thinking, etc.) while providing automatic task tracking and heartbeat coverage for every tool operation. GitHub Copilot sees only a single MCP server, but gets access to all tools with automatic coordination.
## Key Features
### 1. Unified MCP Server Architecture
- **Single interface**: GitHub Copilot connects to only the Agent Coordinator
- **Internal server management**: Automatically starts and manages all external MCP servers
- **Unified tool registry**: Aggregates tools from all servers into one comprehensive list
- **Automatic task tracking**: Every tool call automatically creates/updates agent tasks
### 2. Automatic Task Tracking
- **Transparent operation**: Any tool usage automatically becomes a tracked task
- **No explicit coordination needed**: Agents don't need to call `create_task` manually
- **Real-time activity monitoring**: See what each agent is working on in real-time
- **Smart task titles**: Automatically generated based on tool usage and context
### 3. Enhanced Heartbeat Coverage
- **Universal coverage**: Every tool call from any server includes heartbeat management
- **Agent session tracking**: Automatic agent registration for GitHub Copilot
- **Activity-based heartbeats**: Heartbeats sent before/after each tool operation
- **Session metadata**: Enhanced task board shows real activity and tool usage
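The activity-based heartbeat pattern can be sketched as follows (a Python illustration only; the actual coordinator is written in Elixir, and both callback names here are hypothetical stand-ins for the internal heartbeat and routing steps):

```python
def call_tool_with_heartbeats(agent_id, tool_name, arguments,
                              send_heartbeat, route_to_server):
    """Bracket every routed tool call with heartbeats so the agent stays 'online'."""
    send_heartbeat(agent_id)                        # pre-operation heartbeat
    result = route_to_server(tool_name, arguments)  # dispatch to the external MCP server
    send_heartbeat(agent_id)                        # post-operation heartbeat
    return result
```

Because the wrapper runs on every call, an agent's liveness is refreshed as a side effect of normal work, with no explicit `heartbeat` calls required.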
## Architecture
```
GitHub Copilot
        │
        ▼
Agent Coordinator (Single Visible MCP Server)
┌─────────────────────────────────────────────────────────┐
│ Unified MCP Server │
│ • Aggregates all tools into single interface │
│ • Automatic task tracking for every operation │
│ • Agent coordination tools (create_task, etc.) │
│ • Universal heartbeat coverage │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ MCP Server Manager │
│ • Starts & manages external servers internally │
│ • Health monitoring & auto-restart │
│ • Tool aggregation & routing │
│ • Auto-task creation for any tool usage │
└─────────────────────────────────────────────────────────┘
┌──────────┬──────────┬───────────┬──────────┬─────────────┐
│ Context7 │ Figma │Filesystem │ Firebase │ Memory + │
│ Server │ Server │ Server │ Server │ Sequential │
└──────────┴──────────┴───────────┴──────────┴─────────────┘
```
## Usage
### GitHub Copilot Experience
From GitHub Copilot's perspective, there's only one MCP server with all tools available:
```javascript
// All these tools are available from the single Agent Coordinator server:
// Agent coordination tools
register_agent, create_task, get_next_task, complete_task, get_task_board, heartbeat
// Context7 tools
mcp_context7_get-library-docs, mcp_context7_resolve-library-id
// Figma tools
mcp_figma_get_code, mcp_figma_get_image, mcp_figma_get_variable_defs
// Filesystem tools
mcp_filesystem_read_file, mcp_filesystem_write_file, mcp_filesystem_list_directory
// Firebase tools
mcp_firebase_firestore_get_documents, mcp_firebase_auth_get_user
// Memory tools
mcp_memory_search_nodes, mcp_memory_create_entities
// Sequential thinking tools
mcp_sequentialthi_sequentialthinking
// Plus any other configured MCP servers...
```
### Automatic Task Tracking
Every tool usage automatically creates or updates an agent's current task:
```elixir
# When GitHub Copilot calls any tool, it automatically:
# 1. Sends pre-operation heartbeat
# 2. Creates/updates current task based on tool usage
# 3. Routes to appropriate external server
# 4. Sends post-operation heartbeat
# 5. Updates task activity log
# Example: Reading a file automatically creates a task
Tool Call: mcp_filesystem_read_file(%{"path" => "/project/src/main.rs"})
Auto-Created Task: "Reading file: main.rs"
Description: "Reading and analyzing file content from /project/src/main.rs"
# Example: Figma code generation automatically creates a task
Tool Call: mcp_figma_get_code(%{"nodeId" => "123:456"})
Auto-Created Task: "Generating Figma code: 123:456"
Description: "Generating code for Figma component 123:456"
# Example: Library research automatically creates a task
Tool Call: mcp_context7_get-library-docs(%{"context7CompatibleLibraryID" => "/vercel/next.js"})
Auto-Created Task: "Researching: /vercel/next.js"
Description: "Researching documentation for /vercel/next.js library"
```
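The title rules above can be sketched as a simple mapping (illustrative Python only; the real logic lives in the Elixir `generate_task_title/2` function, and the fallback rule here is an assumption):

```python
import os

def generate_task_title(tool_name, arguments):
    """Map a tool call to a task title, mirroring the examples above."""
    if tool_name == "mcp_filesystem_read_file":
        return f"Reading file: {os.path.basename(arguments['path'])}"
    if tool_name == "mcp_figma_get_code":
        return f"Generating Figma code: {arguments['nodeId']}"
    if tool_name == "mcp_context7_get-library-docs":
        return f"Researching: {arguments['context7CompatibleLibraryID']}"
    # Assumed fallback for tools without a specific rule
    return f"Using tool: {tool_name}"
```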
### Task Board with Real Activity
```elixir
# Get enhanced task board showing real agent activity
{:ok, board} = get_task_board()
# Returns:
%{
agents: [
%{
agent_id: "github_copilot_session",
name: "GitHub Copilot",
status: :working,
current_task: %{
title: "Reading file: database.ex",
description: "Reading and analyzing file content from /project/lib/database.ex",
auto_generated: true,
tool_name: "mcp_filesystem_read_file",
created_at: ~U[2025-08-23 10:30:00Z]
},
last_heartbeat: ~U[2025-08-23 10:30:05Z],
online: true
}
],
pending_tasks: [],
total_agents: 1,
active_tasks: 1,
pending_count: 0
}
```
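The summary counters at the bottom of the board can be derived from the agent list (a minimal Python sketch using string-key equivalents of the fields shown above):

```python
def summarize_board(board):
    """Compute the task-board summary counters from the agent list."""
    agents = board["agents"]
    return {
        "total_agents": len(agents),
        # An agent with a non-empty current_task counts as an active task
        "active_tasks": sum(1 for a in agents if a.get("current_task")),
        "pending_count": len(board["pending_tasks"]),
    }
```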
## Configuration
### MCP Server Configuration
External servers are configured in `mcp_servers.json`:
```json
{
"servers": {
"mcp_context7": {
"type": "stdio",
"command": "uvx",
"args": ["mcp-server-context7"],
"auto_restart": true,
"description": "Context7 library documentation server"
},
"mcp_figma": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@figma/mcp-server-figma"],
"auto_restart": true,
"description": "Figma design integration server"
},
"mcp_filesystem": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/ra"],
"auto_restart": true,
"description": "Filesystem operations with auto-task tracking"
}
},
"config": {
"startup_timeout": 30000,
"heartbeat_interval": 10000,
"auto_restart_delay": 1000,
"max_restart_attempts": 3
}
}
```
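A small validation pass over `mcp_servers.json` can catch malformed entries before startup (an illustrative Python sketch; the required keys are inferred from the example above, not a documented schema):

```python
import json

REQUIRED_SERVER_KEYS = {"type", "command", "args"}

def validate_config(text):
    """Return a list of problems found in an mcp_servers.json document."""
    problems = []
    config = json.loads(text)
    for name, server in config.get("servers", {}).items():
        missing = REQUIRED_SERVER_KEYS - server.keys()
        if missing:
            problems.append(f"{name}: missing {sorted(missing)}")
    return problems
```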
### VS Code Settings
Update your VS Code MCP settings to point to the unified server:
```json
{
"mcp.servers": {
"agent-coordinator": {
"command": "/home/ra/agent_coordinator/scripts/mcp_launcher.sh",
"args": []
}
}
}
```
## Benefits
### 1. Simplified Configuration
- **One server**: GitHub Copilot only needs to connect to Agent Coordinator
- **No manual setup**: External servers are managed automatically
- **Unified tools**: All tools appear in one comprehensive list
### 2. Automatic Coordination
- **Zero-effort tracking**: Every tool usage automatically tracked as tasks
- **Real-time visibility**: See exactly what agents are working on
- **Smart task creation**: Descriptive task titles based on actual tool usage
- **Universal heartbeats**: Every operation maintains agent liveness
### 3. Enhanced Collaboration
- **Agent communication**: Coordination tools still available for planning
- **Multi-agent workflows**: Agents can create tasks for each other
- **Activity awareness**: Agents can see what others are working on
- **File conflict prevention**: Automatic file locking across operations
### 4. Operational Excellence
- **Auto-restart**: Failed external servers automatically restarted
- **Health monitoring**: Real-time status of all managed servers
- **Error handling**: Graceful degradation when servers are unavailable
- **Performance**: Direct routing without external proxy overhead
## Migration Guide
### From Individual MCP Servers
**Before:**
```json
// VS Code settings with multiple servers
{
"mcp.servers": {
"context7": {"command": "uvx", "args": ["mcp-server-context7"]},
"figma": {"command": "npx", "args": ["-y", "@figma/mcp-server-figma"]},
"filesystem": {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path"]},
"agent-coordinator": {"command": "/path/to/mcp_launcher.sh"}
}
}
```
**After:**
```json
// VS Code settings with single unified server
{
"mcp.servers": {
"agent-coordinator": {
"command": "/home/ra/agent_coordinator/scripts/mcp_launcher.sh",
"args": []
}
}
}
```
### Configuration Migration
1. **Remove individual MCP servers** from VS Code settings
2. **Add external servers** to `mcp_servers.json` configuration
3. **Update launcher script** path if needed
4. **Restart VS Code** to apply changes
## Startup and Testing
### Starting the Unified Server
```bash
# From the project directory
./scripts/mcp_launcher.sh
```
### Testing Tool Aggregation
```bash
# Test that all tools are available
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | ./scripts/mcp_launcher.sh
# Should return tools from Agent Coordinator + all external servers
```
### Testing Automatic Task Tracking
```bash
# Use any tool - it should automatically create a task
echo '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"mcp_filesystem_read_file","arguments":{"path":"/home/ra/test.txt"}}}' | ./scripts/mcp_launcher.sh
# Check task board to see auto-created task
echo '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"get_task_board","arguments":{}}}' | ./scripts/mcp_launcher.sh
```
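The hand-written JSON lines above can also be built programmatically (a minimal Python sketch matching the message shape used in these tests):

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build one newline-delimited JSON-RPC 2.0 request for the stdio transport."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

line = jsonrpc_request(3, "tools/call", {"name": "get_task_board", "arguments": {}})
# `line` can be piped to ./scripts/mcp_launcher.sh like the echo commands above.
```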
## Troubleshooting
### External Server Issues
1. **Server won't start**
- Check command path in `mcp_servers.json`
- Verify dependencies are installed (`npm install -g @modelcontextprotocol/server-*`)
- Check logs for startup errors
2. **Tools not appearing**
- Verify server started successfully
- Check server health: use `get_server_status` tool
- Restart specific servers if needed
3. **Auto-restart not working**
- Check `auto_restart: true` in server config
- Verify process monitoring is active
- Check restart attempt limits
### Task Tracking Issues
1. **Tasks not auto-creating**
- Verify agent session is active
- Check that GitHub Copilot is registered as agent
- Ensure heartbeat system is working
2. **Incorrect task titles**
- Task titles are generated based on tool name and arguments
- Can be customized in `generate_task_title/2` function
- File-based operations use file paths in titles
## Future Enhancements
Planned improvements:
1. **Dynamic server discovery** - Auto-detect and add new MCP servers
2. **Load balancing** - Distribute tool calls across multiple server instances
3. **Tool versioning** - Support multiple versions of the same tool
4. **Custom task templates** - Configurable task generation based on tool patterns
5. **Inter-agent messaging** - Direct communication channels between agents
6. **Workflow orchestration** - Multi-step task coordination across agents

CHANGELOG.md (new file): 56 additions

@@ -0,0 +1,56 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- Initial repository structure cleanup
- Organized scripts into dedicated directories
- Enhanced documentation
- GitHub Actions CI/CD workflow
- Development and testing dependencies
### Changed
- Moved demo files to `examples/` directory
- Moved utility scripts to `scripts/` directory
- Updated project metadata in mix.exs
- Enhanced .gitignore for better coverage
## [0.1.0] - 2025-08-22
### Features
- Initial release of AgentCoordinator
- Distributed task coordination system for AI agents
- NATS-based messaging and persistence
- MCP (Model Context Protocol) server integration
- Task registry with agent-specific inboxes
- File-level conflict resolution
- Real-time agent communication
- Event sourcing with configurable retention
- Fault-tolerant supervision trees
- Command-line interface for task management
- VS Code integration setup scripts
- Comprehensive examples and documentation
### Core Features
- Agent registration and capability management
- Task creation, assignment, and completion
- Task board visualization
- Heartbeat monitoring for agent health
- Persistent task state with NATS JetStream
- MCP tools for external agent integration
### Development Tools
- Setup scripts for NATS and VS Code configuration
- Example MCP client implementations
- Test scripts for various scenarios
- Demo workflows for testing functionality

CONTRIBUTING.md (new file): 195 additions

@@ -0,0 +1,195 @@
# Contributing to AgentCoordinator
Thank you for your interest in contributing to AgentCoordinator! This document provides guidelines for contributing to the project.
## 🤝 Code of Conduct
By participating in this project, you agree to abide by our Code of Conduct. Please report unacceptable behavior to the project maintainers.
## 🚀 How to Contribute
### Reporting Bugs
1. **Check existing issues** first to see if the bug has already been reported
2. **Create a new issue** with a clear title and description
3. **Include reproduction steps** with specific details
4. **Provide system information** (Elixir version, OS, etc.)
5. **Add relevant logs** or error messages
### Suggesting Features
1. **Check existing feature requests** to avoid duplicates
2. **Create a new issue** with the `enhancement` label
3. **Describe the feature** and its use case clearly
4. **Explain why** this feature would be beneficial
5. **Provide examples** of how it would be used
### Development Setup
1. **Fork the repository** on GitHub
2. **Clone your fork** locally:
```bash
git clone https://github.com/your-username/agent_coordinator.git
cd agent_coordinator
```
3. **Install dependencies**:
```bash
mix deps.get
```
4. **Start NATS server**:
```bash
nats-server -js -p 4222 -m 8222
```
5. **Run tests** to ensure everything works:
```bash
mix test
```
### Making Changes
1. **Create a feature branch**:
```bash
git checkout -b feature/your-feature-name
```
2. **Make your changes** following our coding standards
3. **Add tests** for new functionality
4. **Run the test suite**:
```bash
mix test
```
5. **Run code quality checks**:
```bash
mix format
mix credo
mix dialyzer
```
6. **Commit your changes** with a descriptive message:
```bash
git commit -m "Add feature: your feature description"
```
7. **Push to your fork**:
```bash
git push origin feature/your-feature-name
```
8. **Create a Pull Request** on GitHub
## 📝 Coding Standards
### Elixir Style Guide
- Follow the [Elixir Style Guide](https://github.com/christopheradams/elixir_style_guide)
- Use `mix format` to format your code
- Write clear, descriptive function and variable names
- Add `@doc` and `@spec` for public functions
- Follow the existing code patterns in the project
### Code Organization
- Keep modules focused and cohesive
- Use appropriate GenServer patterns for stateful processes
- Follow OTP principles and supervision tree design
- Organize code into logical namespaces
### Testing
- Write comprehensive tests for all new functionality
- Use descriptive test names that explain what is being tested
- Follow the existing test patterns and structure
- Ensure tests are fast and reliable
- Aim for good test coverage (check with `mix test --cover`)
### Documentation
- Update documentation for any API changes
- Add examples for new features
- Keep the README.md up to date
- Use clear, concise language
- Include code examples where helpful
## 🔧 Pull Request Guidelines
### Before Submitting
- [ ] Tests pass locally (`mix test`)
- [ ] Code is properly formatted (`mix format`)
- [ ] No linting errors (`mix credo`)
- [ ] Type checks pass (`mix dialyzer`)
- [ ] Documentation is updated
- [ ] CHANGELOG.md is updated (if applicable)
### Pull Request Description
Please include:
1. **Clear title** describing the change
2. **Description** of what the PR does
3. **Issue reference** if applicable (fixes #123)
4. **Testing instructions** for reviewers
5. **Breaking changes** if any
6. **Screenshots** if UI changes are involved
### Review Process
1. At least one maintainer will review your PR
2. Address any feedback or requested changes
3. Once approved, a maintainer will merge your PR
4. Your contribution will be credited in the release notes
## 🧪 Testing
### Running Tests
```bash
# Run all tests
mix test
# Run tests with coverage
mix test --cover
# Run specific test file
mix test test/agent_coordinator/mcp_server_test.exs
# Run tests in watch mode
mix test.watch
```
### Writing Tests
- Place test files in the `test/` directory
- Mirror the structure of the `lib/` directory
- Use descriptive `describe` blocks to group related tests
- Use `setup` blocks for common test setup
- Mock external dependencies appropriately
## 🚀 Release Process
1. Update version in `mix.exs`
2. Update `CHANGELOG.md` with new version details
3. Create and push a version tag
4. Create a GitHub release
5. Publish to Hex (maintainers only)
## 📞 Getting Help
- **GitHub Issues**: For bugs and feature requests
- **GitHub Discussions**: For questions and general discussion
- **Documentation**: Check the [online docs](https://hexdocs.pm/agent_coordinator)
## 🏷️ Issue Labels
- `bug`: Something isn't working
- `enhancement`: New feature or request
- `documentation`: Improvements or additions to documentation
- `good first issue`: Good for newcomers
- `help wanted`: Extra attention is needed
- `question`: Further information is requested
## 🎉 Recognition
Contributors will be:
- Listed in the project's contributors section
- Mentioned in release notes for significant contributions
- Given credit in any related blog posts or presentations
Thank you for contributing to AgentCoordinator! 🚀

LICENSE (new file): 21 additions

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 AgentCoordinator Team
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md (modified): 253 changed lines

@@ -1,19 +1,23 @@
# AgentCoordinator
[![Elixir CI](https://github.com/your-username/agent_coordinator/workflows/CI/badge.svg)](https://github.com/your-username/agent_coordinator/actions)
[![Coverage Status](https://coveralls.io/repos/github/your-username/agent_coordinator/badge.svg?branch=main)](https://coveralls.io/github/your-username/agent_coordinator?branch=main)
[![Hex.pm](https://img.shields.io/hexpm/v/agent_coordinator.svg)](https://hex.pm/packages/agent_coordinator)
A distributed task coordination system for AI agents built with Elixir and NATS.
## 🚀 Overview
AgentCoordinator enables multiple AI agents (Claude Code, GitHub Copilot, etc.) to work collaboratively on the same codebase without conflicts. It provides:
- **🎯 Distributed Task Management**: Centralized task queue with agent-specific inboxes
- **🔒 Conflict Resolution**: File-level locking prevents agents from working on the same files
- **Real-time Communication**: NATS messaging for instant coordination
- **💾 Persistent Storage**: Event sourcing with configurable retention policies
- **🔌 MCP Integration**: Model Context Protocol server for agent communication
- **🛡️ Fault Tolerance**: Elixir supervision trees ensure system resilience
## 🏗️ Architecture
```
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
@@ -50,41 +54,93 @@ AgentCoordinator is a centralized task management system designed to enable mult
└────────────────────────────┘
```
## 📋 Prerequisites
- **Elixir**: 1.16+
- **Erlang/OTP**: 26+
- **NATS Server**: With JetStream enabled
## ⚡ Quick Start
### 1. Clone and Setup
```bash
git clone https://github.com/your-username/agent_coordinator.git
cd agent_coordinator
mix deps.get
```
### 2. Start NATS Server
```bash
# Using Docker (recommended)
docker run -p 4222:4222 -p 8222:8222 nats:latest -js

# Or install locally and run
nats-server -js -p 4222 -m 8222
```
### 3. Run the Application
```bash
# Start in development mode
iex -S mix

# Or use the provided setup script
./scripts/setup.sh
```
### 4. Test the MCP Server
```bash
# Run example demo
mix run examples/demo_mcp_server.exs
# Or test with Python client
python3 examples/mcp_client_example.py
```
## 🔧 Configuration
### Environment Variables
```bash
export NATS_HOST=localhost
export NATS_PORT=4222
export MIX_ENV=dev
```
### VS Code Integration
Run the setup script to configure VS Code automatically:
```bash
./scripts/setup.sh
```
Or manually configure your VS Code `settings.json`:
```json
{
"github.copilot.advanced": {
"mcp": {
"servers": {
"agent-coordinator": {
"command": "/path/to/agent_coordinator/scripts/mcp_launcher.sh",
"args": [],
"env": {
"MIX_ENV": "dev",
"NATS_HOST": "localhost",
"NATS_PORT": "4222"
}
}
}
}
}
}
```
## 🎮 Usage
### Command Line Interface
@@ -102,10 +158,125 @@ mix run -e "AgentCoordinator.CLI.main([\"board\"])"
### MCP Integration
Available MCP tools for agents:
- `register_agent` - Register a new agent with capabilities
- `create_task` - Create a new task with priority and requirements
- `get_next_task` - Get the next available task for an agent
- `complete_task` - Mark the current task as completed
- `get_task_board` - View all agents and their current status
- `heartbeat` - Send agent heartbeat to maintain active status
### API Example
```elixir
# Register an agent
{:ok, agent_id} = AgentCoordinator.register_agent("MyAgent", ["coding", "testing"])
# Create a task
{:ok, task_id} = AgentCoordinator.create_task(
"Implement user authentication",
"Add JWT-based authentication to the API",
priority: :high,
required_capabilities: ["coding", "security"]
)
# Get next task for agent
{:ok, task} = AgentCoordinator.get_next_task(agent_id)
# Complete the task
:ok = AgentCoordinator.complete_task(agent_id, "Authentication implemented successfully")
```
## 🧪 Development
### Running Tests
```bash
# Run all tests
mix test
# Run with coverage
mix test --cover
# Run specific test file
mix test test/agent_coordinator/mcp_server_test.exs
```
### Code Quality
```bash
# Format code
mix format
# Run static analysis
mix credo
# Run Dialyzer for type checking
mix dialyzer
```
### Available Scripts
- `scripts/setup.sh` - Complete environment setup
- `scripts/mcp_launcher.sh` - Start MCP server
- `scripts/minimal_test.sh` - Quick functionality test
- `scripts/quick_test.sh` - Comprehensive test suite
## 📁 Project Structure
```
agent_coordinator/
├── lib/ # Application source code
│ ├── agent_coordinator.ex
│ └── agent_coordinator/
│ ├── agent.ex
│ ├── application.ex
│ ├── cli.ex
│ ├── inbox.ex
│ ├── mcp_server.ex
│ ├── persistence.ex
│ ├── task_registry.ex
│ └── task.ex
├── test/ # Test files
├── examples/ # Example implementations
│ ├── demo_mcp_server.exs
│ ├── mcp_client_example.py
│ └── full_workflow_demo.exs
├── scripts/ # Utility scripts
│ ├── setup.sh
│ ├── mcp_launcher.sh
│ └── minimal_test.sh
├── mix.exs # Project configuration
├── README.md # This file
└── CHANGELOG.md # Version history
```
## 🤝 Contributing
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct and development process.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- [NATS](https://nats.io/) for providing the messaging infrastructure
- [Elixir](https://elixir-lang.org/) community for the excellent ecosystem
- [Model Context Protocol](https://modelcontextprotocol.io/) for agent communication standards
## 📞 Support
- 📖 [Documentation](https://hexdocs.pm/agent_coordinator)
- 🐛 [Issue Tracker](https://github.com/your-username/agent_coordinator/issues)
- 💬 [Discussions](https://github.com/your-username/agent_coordinator/discussions)
---
Made with ❤️ by the AgentCoordinator team

README_old.md (new file): 287 additions

@@ -0,0 +1,287 @@
# AgentCoordinator
A distributed task coordination system for AI agents built with Elixir and NATS.
## Overview
AgentCoordinator is a centralized task management system designed to enable multiple AI agents (Claude Code, GitHub Copilot, etc.) to work collaboratively on the same codebase without conflicts. It provides:
- **Distributed Task Management**: Centralized task queue with agent-specific inboxes
- **Conflict Resolution**: File-level locking prevents agents from working on the same files
- **Real-time Communication**: NATS messaging for instant coordination
- **Persistent Storage**: Event sourcing with configurable retention policies
- **MCP Integration**: Model Context Protocol server for agent communication
- **Fault Tolerance**: Elixir supervision trees ensure system resilience
## Architecture
```
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
│ AI Agent 1 │ │ AI Agent 2 │ │ AI Agent N │
│ (Claude Code) │ │ (Copilot) │ │ ... │
└─────────┬───────┘ └─────────┬────────┘ └─────────┬───────┘
│ │ │
└──────────────────────┼───────────────────────┘
┌─────────────┴──────────────┐
│ MCP Server Interface │
└─────────────┬──────────────┘
┌─────────────┴──────────────┐
│ AgentCoordinator │
│ │
│ ┌──────────────────────┐ │
│ │ Task Registry │ │
│ │ ┌──────────────┐ │ │
│ │ │ Agent Inbox │ │ │
│ │ │ Agent Inbox │ │ │
│ │ │ Agent Inbox │ │ │
│ │ └──────────────┘ │ │
│ └──────────────────────┘ │
│ │
│ ┌──────────────────────┐ │
│ │ NATS Messaging │ │
│ └──────────────────────┘ │
│ │
│ ┌──────────────────────┐ │
│ │ Persistence │ │
│ │ (JetStream) │ │
│ └──────────────────────┘ │
└────────────────────────────┘
```
## Installation
### Prerequisites
- Elixir 1.16+ and Erlang/OTP 28+
- NATS server (with JetStream enabled)
### Setup
1. **Install Dependencies**
```bash
mix deps.get
```
2. **Start NATS Server**
```bash
# Using Docker
docker run -p 4222:4222 -p 8222:8222 nats:latest -js
# Or install locally and run
nats-server -js
```
3. **Configure Environment**
```bash
export NATS_HOST=localhost
export NATS_PORT=4222
```
4. **Start the Application**
```bash
iex -S mix
```
## Usage
### Command Line Interface
```bash
# Register an agent
mix run -e "AgentCoordinator.CLI.main([\"register\", \"CodeBot\", \"coding\", \"testing\"])"
# Create a task
mix run -e "AgentCoordinator.CLI.main([\"create-task\", \"Fix login bug\", \"User login fails\", \"priority=high\"])"
# View task board
mix run -e "AgentCoordinator.CLI.main([\"board\"])"
```
### MCP Integration
Available MCP tools for agents:
- `register_agent` - Register a new agent
- `create_task` - Create a new task
- `get_next_task` - Get next task for agent
- `complete_task` - Mark current task complete
- `get_task_board` - View all agent statuses
- `heartbeat` - Send agent heartbeat
## Connecting to GitHub Copilot
### Step 1: Start the MCP Server
The AgentCoordinator MCP server needs to be running and accessible via stdio. Here's how to set it up:
1. **Create MCP Server Launcher Script**
```bash
# Create a launcher script for the MCP server
cat > mcp_launcher.sh << 'EOF'
#!/bin/bash
cd /home/ra/agent_coordinator
export MIX_ENV=prod
mix run --no-halt -e "
# Start the application
Application.ensure_all_started(:agent_coordinator)
# Start MCP stdio interface
IO.puts(\"MCP server started...\")
# Read JSON-RPC messages from stdin and send responses to stdout
spawn(fn ->
Stream.repeatedly(fn -> IO.read(:stdio, :line) end)
|> Stream.take_while(&(&1 != :eof))
|> Enum.each(fn line ->
case String.trim(line) do
\"\" -> :ok
json_line ->
try do
request = Jason.decode!(json_line)
response = AgentCoordinator.MCPServer.handle_mcp_request(request)
IO.puts(Jason.encode!(response))
rescue
e ->
error_response = %{
\"jsonrpc\" => \"2.0\",
\"id\" => (case Jason.decode(json_line) do {:ok, d} -> Map.get(d, \"id\"); _ -> nil end),
\"error\" => %{\"code\" => -32603, \"message\" => Exception.message(e)}
}
IO.puts(Jason.encode!(error_response))
end
end
end)
end)
# Keep process alive
Process.sleep(:infinity)
"
EOF
chmod +x mcp_launcher.sh
```
### Step 2: Configure VS Code for MCP
1. **Install Required Extensions**
- Make sure you have the latest GitHub Copilot extension
- Install any MCP-related VS Code extensions if available
2. **Create MCP Configuration**
Create or update your VS Code settings to include the MCP server:
```json
// In your VS Code settings.json or workspace settings
{
"github.copilot.advanced": {
"mcp": {
"servers": {
"agent-coordinator": {
"command": "/home/ra/agent_coordinator/mcp_launcher.sh",
"args": [],
"env": {}
}
}
}
}
}
```
### Step 3: Alternative Direct Integration
If VS Code MCP integration isn't available yet, you can create a VS Code extension to bridge the gap:
1. **Create Extension Scaffold**
```bash
mkdir agent-coordinator-extension
cd agent-coordinator-extension
npm init -y
# Create package.json for VS Code extension
cat > package.json << 'EOF'
{
"name": "agent-coordinator",
"displayName": "Agent Coordinator",
"description": "Integration with AgentCoordinator MCP server",
"version": "0.1.0",
"engines": { "vscode": "^1.74.0" },
"categories": ["Other"],
"activationEvents": ["*"],
"main": "./out/extension.js",
"contributes": {
"commands": [
{
"command": "agentCoordinator.registerAgent",
"title": "Register as Agent"
},
{
"command": "agentCoordinator.getNextTask",
"title": "Get Next Task"
},
{
"command": "agentCoordinator.viewTaskBoard",
"title": "View Task Board"
}
]
},
"devDependencies": {
"@types/vscode": "^1.74.0",
"typescript": "^4.9.0"
}
}
EOF
```
### Step 4: Direct Command Line Usage
For immediate use, you can interact with the MCP server directly:
1. **Start the Server**
```bash
cd /home/ra/agent_coordinator
iex -S mix
```
2. **In another terminal, use the MCP tools**
```bash
# Test MCP server directly
cd /home/ra/agent_coordinator
mix run demo_mcp_server.exs
```
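Whether driven from VS Code or the command line, failures come back as a standard JSON-RPC error envelope, mirroring the launcher's `rescue` clause. A sketch of that shape (the helper is illustrative; `-32603` is the JSON-RPC "internal error" code the launcher uses):

```python
import json


def error_response(request_id, code, message):
    """JSON-RPC 2.0 error envelope, matching the launcher's rescue clause."""
    return {
        "jsonrpc": "2.0",
        # None serializes to JSON null; used when the request id can't be recovered
        "id": request_id,
        "error": {"code": code, "message": message},
    }


resp = error_response(None, -32603, "internal error")
print(json.dumps(resp))
```

Clients should check for an `"error"` key before reading `"result"`, since either may be present but never both.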
### Step 5: Production Deployment
1. **Create Systemd Service for MCP Server**
```bash
sudo tee /etc/systemd/system/agent-coordinator-mcp.service > /dev/null << EOF
[Unit]
Description=Agent Coordinator MCP Server
After=network.target nats.service
Requires=nats.service
[Service]
Type=simple
User=ra
WorkingDirectory=/home/ra/agent_coordinator
Environment=MIX_ENV=prod
Environment=NATS_HOST=localhost
Environment=NATS_PORT=4222
ExecStart=/usr/bin/mix run --no-halt
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable agent-coordinator-mcp
sudo systemctl start agent-coordinator-mcp
```
2. **Check Status**
```bash
sudo systemctl status agent-coordinator-mcp
sudo journalctl -fu agent-coordinator-mcp
```


@@ -0,0 +1,226 @@
#!/usr/bin/env elixir
# Auto-heartbeat demo script
# This demonstrates the enhanced coordination system with automatic heartbeats
Mix.install([
{:jason, "~> 1.4"},
{:uuid, "~> 1.1"}
])
# Load the agent coordinator modules
Code.require_file("lib/agent_coordinator.ex")
Code.require_file("lib/agent_coordinator/agent.ex")
Code.require_file("lib/agent_coordinator/task.ex")
Code.require_file("lib/agent_coordinator/inbox.ex")
Code.require_file("lib/agent_coordinator/task_registry.ex")
Code.require_file("lib/agent_coordinator/mcp_server.ex")
Code.require_file("lib/agent_coordinator/auto_heartbeat.ex")
Code.require_file("lib/agent_coordinator/enhanced_mcp_server.ex")
Code.require_file("lib/agent_coordinator/client.ex")
defmodule AutoHeartbeatDemo do
@moduledoc """
Demonstrates the automatic heartbeat functionality
"""
def run do
IO.puts("🚀 Starting Auto-Heartbeat Demo")
IO.puts("================================")
# Start the core services
start_services()
# Demo 1: Basic client with auto-heartbeat
demo_basic_client()
# Demo 2: Multiple agents with coordination
demo_multiple_agents()
# Demo 3: Task creation and completion with heartbeats
demo_task_workflow()
IO.puts("\n✅ Demo completed!")
end
defp start_services do
IO.puts("\n📡 Starting coordination services...")
# Start registry for inboxes
Registry.start_link(keys: :unique, name: AgentCoordinator.InboxRegistry)
# Start dynamic supervisor
DynamicSupervisor.start_link(name: AgentCoordinator.InboxSupervisor, strategy: :one_for_one)
# Start task registry (without NATS for demo)
AgentCoordinator.TaskRegistry.start_link()
# Start MCP servers
AgentCoordinator.MCPServer.start_link()
AgentCoordinator.AutoHeartbeat.start_link()
AgentCoordinator.EnhancedMCPServer.start_link()
Process.sleep(500) # Let services initialize
IO.puts("✅ Services started")
end
defp demo_basic_client do
IO.puts("\n🤖 Demo 1: Basic Client with Auto-Heartbeat")
IO.puts("-------------------------------------------")
# Start a client session
{:ok, client} = AgentCoordinator.Client.start_session(
"DemoAgent1",
[:coding, :analysis],
auto_heartbeat: true,
heartbeat_interval: 3000 # 3 seconds for demo
)
# Get session info
{:ok, info} = AgentCoordinator.Client.get_session_info(client)
IO.puts("Agent registered: #{info.agent_name} (ID: #{info.agent_id})")
IO.puts("Auto-heartbeat enabled: #{info.auto_heartbeat_enabled}")
# Check task board to see the agent
{:ok, board} = AgentCoordinator.Client.get_task_board(client)
agent = Enum.find(board.agents, fn a -> a["agent_id"] == info.agent_id end)
IO.puts("Agent status: #{agent["status"]}")
IO.puts("Agent online: #{agent["online"]}")
IO.puts("Session active: #{agent["session_active"]}")
# Wait and check heartbeat activity
IO.puts("\n⏱️ Waiting 8 seconds to observe automatic heartbeats...")
Process.sleep(8000)
# Check board again
{:ok, updated_board} = AgentCoordinator.Client.get_task_board(client)
updated_agent = Enum.find(updated_board.agents, fn a -> a["agent_id"] == info.agent_id end)
IO.puts("Agent still online: #{updated_agent["online"]}")
IO.puts("Active sessions: #{updated_board.active_sessions}")
# Stop the client
AgentCoordinator.Client.stop_session(client)
IO.puts("✅ Client session stopped")
end
defp demo_multiple_agents do
IO.puts("\n👥 Demo 2: Multiple Agents Coordination")
IO.puts("--------------------------------------")
# Start multiple agents
{:ok, agent1} = AgentCoordinator.Client.start_session("CodingAgent", [:coding, :testing])
{:ok, agent2} = AgentCoordinator.Client.start_session("AnalysisAgent", [:analysis, :documentation])
{:ok, agent3} = AgentCoordinator.Client.start_session("ReviewAgent", [:review, :analysis])
agents = [agent1, agent2, agent3]
# Check the task board
{:ok, board} = AgentCoordinator.Client.get_task_board(agent1)
IO.puts("Total agents: #{length(board.agents)}")
IO.puts("Active sessions: #{board.active_sessions}")
Enum.each(board.agents, fn agent ->
if agent["online"] do
IO.puts(" - #{agent["name"]}: #{Enum.join(agent["capabilities"], ", ")} (ONLINE)")
else
IO.puts(" - #{agent["name"]}: #{Enum.join(agent["capabilities"], ", ")} (offline)")
end
end)
# Demonstrate heartbeat coordination
IO.puts("\n💓 All agents sending heartbeats...")
# Each agent does some activity
Enum.each(agents, fn agent ->
AgentCoordinator.Client.heartbeat(agent)
end)
Process.sleep(1000)
# Check board after activity
{:ok, updated_board} = AgentCoordinator.Client.get_task_board(agent1)
online_count = Enum.count(updated_board.agents, fn a -> a["online"] end)
IO.puts("Agents online after heartbeat activity: #{online_count}/#{length(updated_board.agents)}")
# Cleanup
Enum.each(agents, &AgentCoordinator.Client.stop_session/1)
IO.puts("✅ All agents disconnected")
end
defp demo_task_workflow do
IO.puts("\n📋 Demo 3: Task Workflow with Heartbeats")
IO.puts("---------------------------------------")
# Start an agent
{:ok, agent} = AgentCoordinator.Client.start_session("WorkflowAgent", [:coding, :testing])
# Create a task
task_result = AgentCoordinator.Client.create_task(
agent,
"Fix Bug #123",
"Fix the authentication bug in user login",
%{
"priority" => "high",
"file_paths" => ["lib/auth.ex", "test/auth_test.exs"],
"required_capabilities" => ["coding", "testing"]
}
)
case task_result do
{:ok, task_data} ->
IO.puts("✅ Task created: #{task_data["task_id"]}")
# Check heartbeat metadata
if Map.has_key?(task_data, "_heartbeat_metadata") do
metadata = task_data["_heartbeat_metadata"]
IO.puts(" Heartbeat metadata: Agent #{metadata["agent_id"]} at #{metadata["timestamp"]}")
end
{:error, reason} ->
IO.puts("❌ Task creation failed: #{reason}")
end
# Try to get next task
case AgentCoordinator.Client.get_next_task(agent) do
{:ok, task} ->
if Map.has_key?(task, "task_id") do
IO.puts("📝 Got task: #{task["title"]}")
# Simulate some work
IO.puts("⚙️ Working on task...")
Process.sleep(2000)
# Complete the task
case AgentCoordinator.Client.complete_task(agent) do
{:ok, result} ->
IO.puts("✅ Task completed: #{result["task_id"]}")
{:error, reason} ->
IO.puts("❌ Task completion failed: #{reason}")
end
else
IO.puts("📝 No tasks available: #{task["message"]}")
end
{:error, reason} ->
IO.puts("❌ Failed to get task: #{reason}")
end
# Final status check
{:ok, final_info} = AgentCoordinator.Client.get_session_info(agent)
IO.puts("Final session info:")
IO.puts(" - Last heartbeat: #{final_info.last_heartbeat}")
IO.puts(" - Session duration: #{final_info.session_duration} seconds")
# Cleanup
AgentCoordinator.Client.stop_session(agent)
IO.puts("✅ Workflow demo completed")
end
end
# Run the demo
AutoHeartbeatDemo.run()


@@ -0,0 +1,150 @@
defmodule MCPServerDemo do
@moduledoc """
Demonstration script showing MCP server functionality
"""
alias AgentCoordinator.MCPServer
def run do
IO.puts("🚀 Testing Agent Coordinator MCP Server")
IO.puts("=" |> String.duplicate(50))
# Test 1: Get tools list
IO.puts("\n📋 Getting available tools...")
tools_request = %{"method" => "tools/list", "jsonrpc" => "2.0", "id" => 1}
tools_response = MCPServer.handle_mcp_request(tools_request)
case tools_response do
%{"result" => %{"tools" => tools}} ->
IO.puts("✅ Found #{length(tools)} tools:")
Enum.each(tools, fn tool ->
IO.puts(" - #{tool["name"]}: #{tool["description"]}")
end)
error ->
IO.puts("❌ Error getting tools: #{inspect(error)}")
end
# Test 2: Register an agent
IO.puts("\n👤 Registering test agent...")
register_request = %{
"method" => "tools/call",
"params" => %{
"name" => "register_agent",
"arguments" => %{
"name" => "DemoAgent",
"capabilities" => ["coding", "testing"]
}
},
"jsonrpc" => "2.0",
"id" => 2
}
register_response = MCPServer.handle_mcp_request(register_request)
agent_id = case register_response do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
IO.puts("✅ Agent registered: #{data["agent_id"]}")
data["agent_id"]
error ->
IO.puts("❌ Error registering agent: #{inspect(error)}")
nil
end
if agent_id do
# Test 3: Create a task
IO.puts("\n📝 Creating a test task...")
task_request = %{
"method" => "tools/call",
"params" => %{
"name" => "create_task",
"arguments" => %{
"title" => "Demo Task",
"description" => "A demonstration task for the MCP server",
"priority" => "high",
"required_capabilities" => ["coding"]
}
},
"jsonrpc" => "2.0",
"id" => 3
}
task_response = MCPServer.handle_mcp_request(task_request)
case task_response do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
IO.puts("✅ Task created: #{data["task_id"]}")
if data["assigned_to"] do
IO.puts(" Assigned to: #{data["assigned_to"]}")
end
error ->
IO.puts("❌ Error creating task: #{inspect(error)}")
end
# Test 4: Get task board
IO.puts("\n📊 Getting task board...")
board_request = %{
"method" => "tools/call",
"params" => %{
"name" => "get_task_board",
"arguments" => %{}
},
"jsonrpc" => "2.0",
"id" => 4
}
board_response = MCPServer.handle_mcp_request(board_request)
case board_response do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
IO.puts("✅ Task board retrieved:")
Enum.each(data["agents"], fn agent ->
IO.puts(" Agent: #{agent["name"]} (#{agent["agent_id"]})")
IO.puts(" Capabilities: #{Enum.join(agent["capabilities"], ", ")}")
IO.puts(" Status: #{agent["status"]}")
if agent["current_task"] do
IO.puts(" Current Task: #{agent["current_task"]["title"]}")
else
IO.puts(" Current Task: None")
end
IO.puts(" Pending: #{agent["pending_tasks"]} | Completed: #{agent["completed_tasks"]}")
IO.puts("")
end)
error ->
IO.puts("❌ Error getting task board: #{inspect(error)}")
end
# Test 5: Send heartbeat
IO.puts("\n💓 Sending heartbeat...")
heartbeat_request = %{
"method" => "tools/call",
"params" => %{
"name" => "heartbeat",
"arguments" => %{
"agent_id" => agent_id
}
},
"jsonrpc" => "2.0",
"id" => 5
}
heartbeat_response = MCPServer.handle_mcp_request(heartbeat_request)
case heartbeat_response do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
IO.puts("✅ Heartbeat sent: #{data["status"]}")
error ->
IO.puts("❌ Error sending heartbeat: #{inspect(error)}")
end
end
IO.puts("\n🎉 MCP Server testing completed!")
IO.puts("=" |> String.duplicate(50))
end
end
# Run the demo
MCPServerDemo.run()


@@ -0,0 +1,172 @@
defmodule FullWorkflowDemo do
@moduledoc """
Demonstration of the complete task workflow
"""
alias AgentCoordinator.MCPServer
def run do
IO.puts("🚀 Complete Agent Coordinator Workflow Demo")
IO.puts("=" |> String.duplicate(50))
# Register multiple agents
IO.puts("\n👥 Registering multiple agents...")
agents = [
%{"name" => "CodingAgent", "capabilities" => ["coding", "debugging"]},
%{"name" => "TestingAgent", "capabilities" => ["testing", "qa"]},
%{"name" => "FullStackAgent", "capabilities" => ["coding", "testing", "ui"]}
]
agent_ids = Enum.map(agents, fn agent ->
register_request = %{
"method" => "tools/call",
"params" => %{
"name" => "register_agent",
"arguments" => agent
},
"jsonrpc" => "2.0",
"id" => :rand.uniform(1000)
}
case MCPServer.handle_mcp_request(register_request) do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
IO.puts("#{agent["name"]} registered: #{data["agent_id"]}")
data["agent_id"]
error ->
IO.puts("❌ Error registering #{agent["name"]}: #{inspect(error)}")
nil
end
end)
# Create tasks with different requirements
IO.puts("\n📝 Creating various tasks...")
tasks = [
%{"title" => "Fix Bug #123", "description" => "Debug authentication issue", "priority" => "high", "required_capabilities" => ["coding", "debugging"]},
%{"title" => "Write Unit Tests", "description" => "Create comprehensive test suite", "priority" => "medium", "required_capabilities" => ["testing"]},
%{"title" => "UI Enhancement", "description" => "Improve user interface", "priority" => "low", "required_capabilities" => ["ui", "coding"]},
%{"title" => "Code Review", "description" => "Review pull request #456", "priority" => "medium", "required_capabilities" => ["coding"]}
]
task_ids = Enum.map(tasks, fn task ->
task_request = %{
"method" => "tools/call",
"params" => %{
"name" => "create_task",
"arguments" => task
},
"jsonrpc" => "2.0",
"id" => :rand.uniform(1000)
}
case MCPServer.handle_mcp_request(task_request) do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
IO.puts("✅ Task '#{task["title"]}' created: #{data["task_id"]}")
if data["assigned_to"] do
IO.puts(" → Assigned to: #{data["assigned_to"]}")
end
data["task_id"]
error ->
IO.puts("❌ Error creating task '#{task["title"]}': #{inspect(error)}")
nil
end
end)
# Show current task board
IO.puts("\n📊 Current Task Board:")
show_task_board()
# Test getting next task for first agent
agent_id = Enum.at(agent_ids, 0)

if agent_id do
IO.puts("\n🎯 Getting next task for CodingAgent...")
next_task_request = %{
"method" => "tools/call",
"params" => %{
"name" => "get_next_task",
"arguments" => %{
"agent_id" => agent_id
}
},
"jsonrpc" => "2.0",
"id" => :rand.uniform(1000)
}
case MCPServer.handle_mcp_request(next_task_request) do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
if data["task"] do
IO.puts("✅ Got task: #{data["task"]["title"]}")
# Complete the task
IO.puts("\n✅ Completing the task...")
complete_request = %{
"method" => "tools/call",
"params" => %{
"name" => "complete_task",
"arguments" => %{
"agent_id" => agent_id,
"result" => "Task completed successfully!"
}
},
"jsonrpc" => "2.0",
"id" => :rand.uniform(1000)
}
case MCPServer.handle_mcp_request(complete_request) do
%{"result" => %{"content" => [%{"text" => text}]}} ->
completion_data = Jason.decode!(text)
IO.puts("✅ Task completed: #{completion_data["message"]}")
error ->
IO.puts("❌ Error completing task: #{inspect(error)}")
end
else
IO.puts(" No tasks available: #{data["message"]}")
end
error ->
IO.puts("❌ Error getting next task: #{inspect(error)}")
end
end
# Final task board
IO.puts("\n📊 Final Task Board:")
show_task_board()
IO.puts("\n🎉 Complete workflow demonstration finished!")
IO.puts("=" |> String.duplicate(50))
end
defp show_task_board do
board_request = %{
"method" => "tools/call",
"params" => %{
"name" => "get_task_board",
"arguments" => %{}
},
"jsonrpc" => "2.0",
"id" => :rand.uniform(1000)
}
case MCPServer.handle_mcp_request(board_request) do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
Enum.each(data["agents"], fn agent ->
IO.puts(" 📱 #{agent["name"]} (#{String.slice(agent["agent_id"], 0, 8)}...)")
IO.puts(" Capabilities: #{Enum.join(agent["capabilities"], ", ")}")
IO.puts(" Status: #{agent["status"]}")
if agent["current_task"] do
IO.puts(" 🎯 Current: #{agent["current_task"]["title"]}")
end
IO.puts(" 📈 Stats: #{agent["pending_tasks"]} pending | #{agent["completed_tasks"]} completed")
IO.puts("")
end)
error ->
IO.puts("❌ Error getting task board: #{inspect(error)}")
end
end
end
# Run the demo
FullWorkflowDemo.run()

examples/mcp_client_example.py Executable file

@@ -0,0 +1,193 @@
#!/usr/bin/env python3
"""
AgentCoordinator MCP Client Example
This script demonstrates how to connect to and interact with the
AgentCoordinator MCP server programmatically.
"""
import json
import subprocess
import sys
import uuid
from typing import Dict, Any, Optional
class AgentCoordinatorMCP:
def __init__(self, launcher_path: str = "./scripts/mcp_launcher.sh"):
self.launcher_path = launcher_path
self.process = None
def start(self):
"""Start the MCP server process"""
try:
self.process = subprocess.Popen(
[self.launcher_path],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
bufsize=0
)
print("🚀 MCP server started")
return True
except Exception as e:
print(f"❌ Failed to start MCP server: {e}")
return False
def stop(self):
"""Stop the MCP server process"""
if self.process:
self.process.terminate()
self.process.wait()
print("🛑 MCP server stopped")
def send_request(self, method: str, params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
"""Send a JSON-RPC request to the MCP server"""
if not self.process:
raise RuntimeError("MCP server not started")
request = {
"jsonrpc": "2.0",
"id": str(uuid.uuid4()),
"method": method
}
if params:
request["params"] = params
# Send request
request_json = json.dumps(request) + "\n"
self.process.stdin.write(request_json)
self.process.stdin.flush()
# Read response
response_line = self.process.stdout.readline()
if not response_line:
raise RuntimeError("No response from MCP server")
return json.loads(response_line.strip())
def get_tools(self) -> Dict[str, Any]:
"""Get list of available tools"""
return self.send_request("tools/list")
def register_agent(self, name: str, capabilities: list) -> Dict[str, Any]:
"""Register a new agent"""
return self.send_request("tools/call", {
"name": "register_agent",
"arguments": {
"name": name,
"capabilities": capabilities
}
})
def create_task(self, title: str, description: str, priority: str = "normal",
required_capabilities: list = None) -> Dict[str, Any]:
"""Create a new task"""
args = {
"title": title,
"description": description,
"priority": priority
}
if required_capabilities:
args["required_capabilities"] = required_capabilities
return self.send_request("tools/call", {
"name": "create_task",
"arguments": args
})
def get_next_task(self, agent_id: str) -> Dict[str, Any]:
"""Get next task for an agent"""
return self.send_request("tools/call", {
"name": "get_next_task",
"arguments": {"agent_id": agent_id}
})
def complete_task(self, agent_id: str, result: str) -> Dict[str, Any]:
"""Complete current task"""
return self.send_request("tools/call", {
"name": "complete_task",
"arguments": {
"agent_id": agent_id,
"result": result
}
})
def get_task_board(self) -> Dict[str, Any]:
"""Get task board overview"""
return self.send_request("tools/call", {
"name": "get_task_board",
"arguments": {}
})
def heartbeat(self, agent_id: str) -> Dict[str, Any]:
"""Send agent heartbeat"""
return self.send_request("tools/call", {
"name": "heartbeat",
"arguments": {"agent_id": agent_id}
})
def demo():
"""Demonstrate MCP client functionality"""
print("🎯 AgentCoordinator MCP Client Demo")
print("=" * 50)
client = AgentCoordinatorMCP()
try:
# Start server
if not client.start():
return
# Wait for server to be ready
import time
time.sleep(2)
# Get tools
print("\n📋 Available tools:")
tools_response = client.get_tools()
if "result" in tools_response:
for tool in tools_response["result"]["tools"]:
print(f" - {tool['name']}: {tool['description']}")
# Register agent
print("\n👤 Registering agent...")
register_response = client.register_agent("PythonAgent", ["coding", "testing"])
if "result" in register_response:
content = register_response["result"]["content"][0]["text"]
agent_data = json.loads(content)
agent_id = agent_data["agent_id"]
print(f"✅ Agent registered: {agent_id}")
# Create task
print("\n📝 Creating task...")
task_response = client.create_task(
"Python Script",
"Write a Python script for data processing",
"high",
["coding"]
)
if "result" in task_response:
content = task_response["result"]["content"][0]["text"]
task_data = json.loads(content)
print(f"✅ Task created: {task_data['task_id']}")
# Get task board
print("\n📊 Task board:")
board_response = client.get_task_board()
if "result" in board_response:
content = board_response["result"]["content"][0]["text"]
board_data = json.loads(content)
for agent in board_data["agents"]:
print(f" 📱 {agent['name']}: {agent['status']}")
print(f" Capabilities: {', '.join(agent['capabilities'])}")
print(f" Pending: {agent['pending_tasks']}, Completed: {agent['completed_tasks']}")
except Exception as e:
print(f"❌ Error: {e}")
finally:
client.stop()
if __name__ == "__main__":
demo()

examples/unified_demo.exs Normal file

@@ -0,0 +1,235 @@
#!/usr/bin/env elixir
# Unified MCP Server Demo
# This demo shows how the unified MCP server provides automatic task tracking
# for all external MCP server operations
Mix.install([
{:agent_coordinator, path: "."},
{:jason, "~> 1.4"}
])
defmodule UnifiedDemo do
@moduledoc """
Demo showing the unified MCP server with automatic task tracking
"""
def run do
IO.puts("🚀 Starting Unified MCP Server Demo...")
IO.puts(String.duplicate("=", 60))
# Start the unified system
{:ok, _} = AgentCoordinator.TaskRegistry.start_link()
{:ok, _} = AgentCoordinator.MCPServerManager.start_link(config_file: "mcp_servers.json")
{:ok, _} = AgentCoordinator.UnifiedMCPServer.start_link()
IO.puts("✅ Unified MCP server started successfully")
# Demonstrate automatic tool aggregation
demonstrate_tool_aggregation()
# Demonstrate automatic task tracking
demonstrate_automatic_task_tracking()
# Demonstrate coordination features
demonstrate_coordination_features()
IO.puts("\n🎉 Demo completed successfully!")
IO.puts("📋 Key Points:")
IO.puts(" • All external MCP servers are managed internally")
IO.puts(" • Every tool call automatically creates/updates tasks")
IO.puts(" • GitHub Copilot sees only one MCP server")
IO.puts(" • Coordination tools are still available for planning")
end
defp demonstrate_tool_aggregation do
IO.puts("\n📊 Testing Tool Aggregation...")
# Get all available tools from the unified server
request = %{
"jsonrpc" => "2.0",
"id" => 1,
"method" => "tools/list"
}
response = AgentCoordinator.UnifiedMCPServer.handle_mcp_request(request)
case response do
%{"result" => %{"tools" => tools}} ->
IO.puts("✅ Found #{length(tools)} total tools from all servers:")
# Group tools by server origin
coordinator_tools =
Enum.filter(tools, fn tool ->
tool["name"] in ~w[register_agent create_task get_next_task complete_task get_task_board heartbeat]
end)
external_tools = tools -- coordinator_tools
IO.puts(" • Agent Coordinator: #{length(coordinator_tools)} tools")
IO.puts(" • External Servers: #{length(external_tools)} tools")
# Show sample tools
IO.puts("\n📝 Sample Agent Coordinator tools:")
Enum.take(coordinator_tools, 3)
|> Enum.each(fn tool ->
IO.puts(" - #{tool["name"]}: #{tool["description"]}")
end)
if length(external_tools) > 0 do
IO.puts("\n📝 Sample External tools:")
Enum.take(external_tools, 3)
|> Enum.each(fn tool ->
IO.puts(
" - #{tool["name"]}: #{String.slice(tool["description"] || "External tool", 0, 50)}"
)
end)
end
error ->
IO.puts("❌ Error getting tools: #{inspect(error)}")
end
end
defp demonstrate_automatic_task_tracking do
IO.puts("\n🎯 Testing Automatic Task Tracking...")
# First, register an agent (this creates an agent context)
register_request = %{
"jsonrpc" => "2.0",
"id" => 2,
"method" => "tools/call",
"params" => %{
"name" => "register_agent",
"arguments" => %{
"name" => "Demo Agent",
"capabilities" => ["coding", "analysis"]
}
}
}
response = AgentCoordinator.UnifiedMCPServer.handle_mcp_request(register_request)
IO.puts("✅ Agent registered: #{inspect(response["result"])}")
# Now simulate using an external tool - this should automatically create a task
# Note: In a real scenario, external servers would be running
external_tool_request = %{
"jsonrpc" => "2.0",
"id" => 3,
"method" => "tools/call",
"params" => %{
"name" => "mcp_filesystem_read_file",
"arguments" => %{
"path" => "/home/ra/agent_coordinator/README.md"
}
}
}
IO.puts("🔄 Simulating external tool call: mcp_filesystem_read_file")
external_response =
AgentCoordinator.UnifiedMCPServer.handle_mcp_request(external_tool_request)
case external_response do
%{"result" => result} ->
IO.puts("✅ Tool call succeeded with automatic task tracking")
metadata = result["_metadata"]

if metadata do
IO.puts("📊 Automatic metadata:")
IO.puts(" - Tool: #{metadata["tool_name"]}")
IO.puts(" - Agent: #{metadata["agent_id"]}")
IO.puts(" - Auto-tracked: #{metadata["auto_tracked"]}")
end
%{"error" => error} ->
IO.puts(" External server not available (expected in demo): #{error["message"]}")
IO.puts(" In real usage, this would automatically create a task")
end
# Check the task board to see auto-created tasks
IO.puts("\n📋 Checking Task Board...")
task_board_request = %{
"jsonrpc" => "2.0",
"id" => 4,
"method" => "tools/call",
"params" => %{
"name" => "get_task_board",
"arguments" => %{}
}
}
board_response = AgentCoordinator.UnifiedMCPServer.handle_mcp_request(task_board_request)
case board_response do
%{"result" => %{"content" => [%{"text" => board_json}]}} ->
case Jason.decode(board_json) do
{:ok, board} ->
IO.puts("✅ Task Board Status:")
IO.puts(" - Total Agents: #{board["total_agents"]}")
IO.puts(" - Active Tasks: #{board["active_tasks"]}")
IO.puts(" - Pending Tasks: #{board["pending_count"]}")
if length(board["agents"]) > 0 do
agent = List.first(board["agents"])
IO.puts(" - Agent '#{agent["name"]}' is #{agent["status"]}")
end
{:error, _} ->
IO.puts("📊 Task board response: #{board_json}")
end
_ ->
IO.puts("📊 Task board response: #{inspect(board_response)}")
end
end
defp demonstrate_coordination_features do
IO.puts("\n🤝 Testing Coordination Features...")
# Create a manual task for coordination
create_task_request = %{
"jsonrpc" => "2.0",
"id" => 5,
"method" => "tools/call",
"params" => %{
"name" => "create_task",
"arguments" => %{
"title" => "Review Database Design",
"description" => "Review the database schema for the new feature",
"priority" => "high"
}
}
}
response = AgentCoordinator.UnifiedMCPServer.handle_mcp_request(create_task_request)
IO.puts("✅ Manual task created for coordination: #{inspect(response["result"])}")
# Send a heartbeat
heartbeat_request = %{
"jsonrpc" => "2.0",
"id" => 6,
"method" => "tools/call",
"params" => %{
"name" => "heartbeat",
"arguments" => %{
"agent_id" => "github_copilot_session"
}
}
}
heartbeat_response = AgentCoordinator.UnifiedMCPServer.handle_mcp_request(heartbeat_request)
IO.puts("✅ Heartbeat sent: #{inspect(heartbeat_response["result"])}")
IO.puts("\n💡 Coordination tools are seamlessly integrated:")
IO.puts(" • Agents can still create tasks manually for planning")
IO.puts(" • Heartbeats maintain agent liveness")
IO.puts(" • Task board shows both auto and manual tasks")
IO.puts(" • All operations work through the single unified interface")
end
end
# Run the demo
UnifiedDemo.run()


@@ -3,12 +3,26 @@ defmodule AgentCoordinator.Agent do
Agent data structure for the coordination system.
"""
@derive {Jason.Encoder,
only: [
:id,
:name,
:capabilities,
:status,
:current_task_id,
:codebase_id,
:workspace_path,
:last_heartbeat,
:metadata
]}
defstruct [
:id,
:name,
:capabilities,
:status,
:current_task_id,
:codebase_id,
:workspace_path,
:last_heartbeat,
:metadata
]
@@ -22,6 +36,8 @@ defmodule AgentCoordinator.Agent do
capabilities: [capability()],
status: status(),
current_task_id: String.t() | nil,
codebase_id: String.t(),
workspace_path: String.t() | nil,
last_heartbeat: DateTime.t(),
metadata: map()
}
@@ -33,6 +49,8 @@ defmodule AgentCoordinator.Agent do
capabilities: capabilities,
status: :idle,
current_task_id: nil,
codebase_id: Keyword.get(opts, :codebase_id, "default"),
workspace_path: Keyword.get(opts, :workspace_path),
last_heartbeat: DateTime.utc_now(),
metadata: Keyword.get(opts, :metadata, %{})
}
@@ -55,12 +73,22 @@ defmodule AgentCoordinator.Agent do
end
def can_handle?(agent, task) do
# Check if agent is in the same codebase or can handle cross-codebase tasks
codebase_compatible = agent.codebase_id == task.codebase_id or
Map.get(agent.metadata, :cross_codebase_capable, false)
# Simple capability matching - can be enhanced
required_capabilities = Map.get(task.metadata, :required_capabilities, [])
case required_capabilities do
capability_match = case required_capabilities do
[] -> true
caps -> Enum.any?(caps, fn cap -> cap in agent.capabilities end)
end
codebase_compatible and capability_match
end
def can_work_cross_codebase?(agent) do
Map.get(agent.metadata, :cross_codebase_capable, false)
end
end


@@ -7,6 +7,9 @@ defmodule AgentCoordinator.Application do
@impl true
def start(_type, _args) do
# Check if persistence should be enabled (useful for testing)
enable_persistence = Application.get_env(:agent_coordinator, :enable_persistence, true)
children = [
# Registry for agent inboxes
{Registry, keys: :unique, name: AgentCoordinator.InboxRegistry},
@@ -14,30 +17,44 @@ defmodule AgentCoordinator.Application do
# PubSub for real-time updates
{Phoenix.PubSub, name: AgentCoordinator.PubSub},
# Persistence layer
{AgentCoordinator.Persistence, nats: nats_config()},
# Codebase registry for multi-codebase coordination
{AgentCoordinator.CodebaseRegistry, nats: if(enable_persistence, do: nats_config(), else: nil)},
# Task registry with NATS integration
{AgentCoordinator.TaskRegistry, nats: nats_config()},
# Task registry with NATS integration (conditionally add persistence)
{AgentCoordinator.TaskRegistry, nats: if(enable_persistence, do: nats_config(), else: nil)},
# MCP server
AgentCoordinator.MCPServer,
# Auto-heartbeat manager
AgentCoordinator.AutoHeartbeat,
# Enhanced MCP server with automatic heartbeats
AgentCoordinator.EnhancedMCPServer,
# Dynamic supervisor for agent inboxes
{DynamicSupervisor, name: AgentCoordinator.InboxSupervisor, strategy: :one_for_one}
]
# Add persistence layer if enabled
children =
if enable_persistence do
[{AgentCoordinator.Persistence, nats: nats_config()} | children]
else
children
end
opts = [strategy: :one_for_one, name: AgentCoordinator.Supervisor]
Supervisor.start_link(children, opts)
end
defp nats_config do
[
%{
host: System.get_env("NATS_HOST", "localhost"),
port: String.to_integer(System.get_env("NATS_PORT", "4222")),
connection_settings: [
connection_settings: %{
name: :agent_coordinator
]
]
}
}
end
end


@@ -0,0 +1,231 @@
defmodule AgentCoordinator.AutoHeartbeat do
@moduledoc """
Automatic heartbeat management for agents.
This module provides:
1. Automatic heartbeat sending with every MCP action
2. Background heartbeat timer for idle periods
3. Heartbeat wrapper functions for all operations
"""
use GenServer
alias AgentCoordinator.{MCPServer, TaskRegistry}
# Heartbeat every 10 seconds when idle
@heartbeat_interval 10_000
# Store active agent contexts
defstruct [
:timers,
:agent_contexts
]
# Client API
def start_link(opts \\ []) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
@doc """
Register an agent with automatic heartbeat management
"""
def register_agent_with_heartbeat(name, capabilities, agent_context \\ %{}) do
# Convert capabilities to strings if they're atoms
string_capabilities = Enum.map(capabilities, fn
cap when is_atom(cap) -> Atom.to_string(cap)
cap when is_binary(cap) -> cap
end)
# First register the agent normally
case MCPServer.handle_mcp_request(%{
"method" => "tools/call",
"params" => %{
"name" => "register_agent",
"arguments" => %{"name" => name, "capabilities" => string_capabilities}
}
}) do
%{"result" => %{"content" => [%{"text" => response_json}]}} ->
case Jason.decode(response_json) do
{:ok, %{"agent_id" => agent_id}} ->
# Start automatic heartbeat for this agent
GenServer.call(__MODULE__, {:start_heartbeat, agent_id, agent_context})
{:ok, agent_id}
{:error, reason} ->
{:error, reason}
end
%{"error" => %{"message" => message}} ->
{:error, message}
_ ->
{:error, "Unexpected response format"}
end
end
@doc """
Wrapper for any MCP action that automatically sends heartbeat
"""
def mcp_action_with_heartbeat(agent_id, action_request) do
# Send heartbeat before action
heartbeat_result = send_heartbeat(agent_id)
# Perform the actual action
action_result = MCPServer.handle_mcp_request(action_request)
# Send heartbeat after action (to update last activity)
post_heartbeat_result = send_heartbeat(agent_id)
# Reset the timer for this agent
GenServer.cast(__MODULE__, {:reset_timer, agent_id})
# Return the action result along with heartbeat status
case action_result do
%{"result" => _} = success ->
Map.put(success, "_heartbeat_status", %{
pre: heartbeat_result,
post: post_heartbeat_result
})
error_result ->
error_result
end
end
@doc """
Convenience functions for common operations with automatic heartbeats
"""
def create_task_with_heartbeat(agent_id, title, description, opts \\ %{}) do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "create_task",
"arguments" => Map.merge(%{
"title" => title,
"description" => description
}, opts)
}
}
mcp_action_with_heartbeat(agent_id, request)
end
def get_next_task_with_heartbeat(agent_id) do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "get_next_task",
"arguments" => %{"agent_id" => agent_id}
}
}
mcp_action_with_heartbeat(agent_id, request)
end
def complete_task_with_heartbeat(agent_id) do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "complete_task",
"arguments" => %{"agent_id" => agent_id}
}
}
mcp_action_with_heartbeat(agent_id, request)
end
def get_task_board_with_heartbeat(agent_id) do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "get_task_board",
"arguments" => %{}
}
}
mcp_action_with_heartbeat(agent_id, request)
end
@doc """
Stop heartbeat management for an agent (when they disconnect)
"""
def stop_heartbeat(agent_id) do
GenServer.call(__MODULE__, {:stop_heartbeat, agent_id})
end
# Server callbacks
def init(_opts) do
state = %__MODULE__{
timers: %{},
agent_contexts: %{}
}
{:ok, state}
end
def handle_call({:start_heartbeat, agent_id, context}, _from, state) do
# Cancel existing timer if any
if Map.has_key?(state.timers, agent_id) do
Process.cancel_timer(state.timers[agent_id])
end
# Start new timer
timer_ref = Process.send_after(self(), {:heartbeat_timer, agent_id}, @heartbeat_interval)
new_state = %{state |
timers: Map.put(state.timers, agent_id, timer_ref),
agent_contexts: Map.put(state.agent_contexts, agent_id, context)
}
{:reply, :ok, new_state}
end
def handle_call({:stop_heartbeat, agent_id}, _from, state) do
# Cancel timer
if Map.has_key?(state.timers, agent_id) do
Process.cancel_timer(state.timers[agent_id])
end
new_state = %{state |
timers: Map.delete(state.timers, agent_id),
agent_contexts: Map.delete(state.agent_contexts, agent_id)
}
{:reply, :ok, new_state}
end
def handle_cast({:reset_timer, agent_id}, state) do
# Cancel existing timer
if Map.has_key?(state.timers, agent_id) do
Process.cancel_timer(state.timers[agent_id])
end
# Start new timer
timer_ref = Process.send_after(self(), {:heartbeat_timer, agent_id}, @heartbeat_interval)
new_state = %{state | timers: Map.put(state.timers, agent_id, timer_ref)}
{:noreply, new_state}
end
def handle_info({:heartbeat_timer, agent_id}, state) do
# Send heartbeat
send_heartbeat(agent_id)
# Schedule next heartbeat
timer_ref = Process.send_after(self(), {:heartbeat_timer, agent_id}, @heartbeat_interval)
new_state = %{state | timers: Map.put(state.timers, agent_id, timer_ref)}
{:noreply, new_state}
end
# Private helpers
defp send_heartbeat(agent_id) do
case TaskRegistry.heartbeat_agent(agent_id) do
:ok -> :ok
{:error, reason} -> {:error, reason}
end
end
end
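The pre/post heartbeat pattern in `mcp_action_with_heartbeat/2` can be sketched as a pure higher-order function (module and function names here are hypothetical; the real module also resets the per-agent idle timer after each action):

```elixir
defmodule HeartbeatWrapDemo do
  # Send a heartbeat before and after an action; annotate successful results
  def with_heartbeat(action_fun, heartbeat_fun) do
    pre = heartbeat_fun.()
    result = action_fun.()
    post = heartbeat_fun.()

    case result do
      %{"result" => _} = success ->
        Map.put(success, "_heartbeat_status", %{pre: pre, post: post})

      error_result ->
        error_result
    end
  end
end
```

Error results pass through untouched, so callers can pattern match on them exactly as they would on a plain MCP response.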

View File

@@ -3,7 +3,7 @@ defmodule AgentCoordinator.CLI do
Command line interface for testing the agent coordination system.
"""
alias AgentCoordinator.{MCPServer, Inbox}
def main(args \\ []) do
case args do
@@ -28,7 +28,8 @@ defmodule AgentCoordinator.CLI do
end
defp register_agent(name, capabilities) do
# Note: capabilities should be passed as strings to the MCP server
# The server will handle the validation
request = %{
"method" => "tools/call",
@@ -58,10 +59,14 @@ defmodule AgentCoordinator.CLI do
"method" => "tools/call",
"params" => %{
"name" => "create_task",
"arguments" =>
  Map.merge(
    %{
      "title" => title,
      "description" => description
    },
    opts
  )
}
}
@@ -129,7 +134,8 @@ defmodule AgentCoordinator.CLI do
end
defp print_agent_summary(agent) do
status_icon =
  case agent["status"] do
"idle" -> "💤"
"busy" -> "🔧"
"offline" -> ""

View File

@@ -0,0 +1,317 @@
defmodule AgentCoordinator.Client do
@moduledoc """
Client wrapper for agents to interact with the coordination system.
This module provides a high-level API that automatically handles:
- Heartbeat management
- Session tracking
- Error handling and retries
- Collision detection
Usage:
```elixir
# Start a client session
{:ok, client} = AgentCoordinator.Client.start_session("MyAgent", [:coding, :analysis])
# All operations automatically include heartbeats
{:ok, task} = AgentCoordinator.Client.get_next_task(client)
{:ok, result} = AgentCoordinator.Client.complete_task(client)
```
"""
use GenServer
alias AgentCoordinator.{EnhancedMCPServer, AutoHeartbeat}
defstruct [
:agent_id,
:agent_name,
:capabilities,
:session_pid,
:heartbeat_interval,
:last_heartbeat,
:auto_heartbeat_enabled
]
# Client API
@doc """
Start a new agent session with automatic heartbeat management
"""
def start_session(agent_name, capabilities, opts \\ []) do
heartbeat_interval = Keyword.get(opts, :heartbeat_interval, 10_000)
auto_heartbeat = Keyword.get(opts, :auto_heartbeat, true)
GenServer.start_link(__MODULE__, %{
agent_name: agent_name,
capabilities: capabilities,
heartbeat_interval: heartbeat_interval,
auto_heartbeat_enabled: auto_heartbeat
})
end
@doc """
Get the next task for this agent (with automatic heartbeat)
"""
def get_next_task(client_pid) do
GenServer.call(client_pid, :get_next_task)
end
@doc """
Create a task (with automatic heartbeat)
"""
def create_task(client_pid, title, description, opts \\ %{}) do
GenServer.call(client_pid, {:create_task, title, description, opts})
end
@doc """
Complete the current task (with automatic heartbeat)
"""
def complete_task(client_pid) do
GenServer.call(client_pid, :complete_task)
end
@doc """
Get task board with enhanced information (with automatic heartbeat)
"""
def get_task_board(client_pid) do
GenServer.call(client_pid, :get_task_board)
end
@doc """
Send manual heartbeat
"""
def heartbeat(client_pid) do
GenServer.call(client_pid, :manual_heartbeat)
end
@doc """
Get client session information
"""
def get_session_info(client_pid) do
GenServer.call(client_pid, :get_session_info)
end
@doc """
Stop the client session (cleanly disconnects the agent)
"""
def stop_session(client_pid) do
GenServer.call(client_pid, :stop_session)
end
@doc """
Unregister the agent (e.g., when waiting for user input)
"""
def unregister_agent(client_pid, reason \\ "Waiting for user input") do
GenServer.call(client_pid, {:unregister_agent, reason})
end
# Server callbacks
def init(config) do
# Register with enhanced MCP server
case EnhancedMCPServer.register_agent_with_session(
config.agent_name,
config.capabilities,
self()
) do
{:ok, agent_id} ->
state = %__MODULE__{
agent_id: agent_id,
agent_name: config.agent_name,
capabilities: config.capabilities,
session_pid: self(),
heartbeat_interval: config.heartbeat_interval,
last_heartbeat: DateTime.utc_now(),
auto_heartbeat_enabled: config.auto_heartbeat_enabled
}
# Start automatic heartbeat timer if enabled
if config.auto_heartbeat_enabled do
schedule_heartbeat(state.heartbeat_interval)
end
{:ok, state}
{:error, reason} ->
{:stop, {:registration_failed, reason}}
end
end
def handle_call(:get_next_task, _from, state) do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "get_next_task",
"arguments" => %{"agent_id" => state.agent_id}
}
}
result = enhanced_mcp_call(request, state)
{:reply, result, update_last_heartbeat(state)}
end
def handle_call({:create_task, title, description, opts}, _from, state) do
arguments = Map.merge(%{
"title" => title,
"description" => description
}, opts)
request = %{
"method" => "tools/call",
"params" => %{
"name" => "create_task",
"arguments" => arguments
}
}
result = enhanced_mcp_call(request, state)
{:reply, result, update_last_heartbeat(state)}
end
def handle_call(:complete_task, _from, state) do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "complete_task",
"arguments" => %{"agent_id" => state.agent_id}
}
}
result = enhanced_mcp_call(request, state)
{:reply, result, update_last_heartbeat(state)}
end
def handle_call(:get_task_board, _from, state) do
case EnhancedMCPServer.get_enhanced_task_board() do
{:ok, board} ->
{:reply, {:ok, board}, update_last_heartbeat(state)}
{:error, reason} ->
{:reply, {:error, reason}, state}
end
end
def handle_call(:manual_heartbeat, _from, state) do
result = send_heartbeat(state.agent_id)
{:reply, result, update_last_heartbeat(state)}
end
def handle_call(:get_session_info, _from, state) do
info = %{
agent_id: state.agent_id,
agent_name: state.agent_name,
capabilities: state.capabilities,
last_heartbeat: state.last_heartbeat,
heartbeat_interval: state.heartbeat_interval,
auto_heartbeat_enabled: state.auto_heartbeat_enabled,
session_duration: DateTime.diff(DateTime.utc_now(), state.last_heartbeat, :second)
}
{:reply, {:ok, info}, state}
end
def handle_call({:unregister_agent, reason}, _from, state) do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "unregister_agent",
"arguments" => %{"agent_id" => state.agent_id, "reason" => reason}
}
}
result = enhanced_mcp_call(request, state)
case result do
{:ok, _data} ->
# Successfully unregistered, stop heartbeats but keep session alive
updated_state = %{state | auto_heartbeat_enabled: false}
{:reply, result, updated_state}
{:error, _reason} ->
# Failed to unregister, keep current state
{:reply, result, state}
end
end
def handle_call(:stop_session, _from, state) do
# Clean shutdown - could include task cleanup here
{:stop, :normal, :ok, state}
end
# Handle automatic heartbeat timer
def handle_info(:heartbeat_timer, state) do
if state.auto_heartbeat_enabled do
send_heartbeat(state.agent_id)
schedule_heartbeat(state.heartbeat_interval)
end
{:noreply, update_last_heartbeat(state)}
end
# Handle unexpected messages
def handle_info(_msg, state) do
{:noreply, state}
end
# Cleanup on termination
def terminate(_reason, state) do
# Stop heartbeat management
if state.agent_id do
AutoHeartbeat.stop_heartbeat(state.agent_id)
end
:ok
end
# Private helpers
defp enhanced_mcp_call(request, state) do
session_info = %{
agent_id: state.agent_id,
session_pid: state.session_pid
}
case EnhancedMCPServer.handle_enhanced_mcp_request(request, session_info) do
%{"result" => %{"content" => [%{"text" => response_json}]}} = response ->
case Jason.decode(response_json) do
{:ok, data} ->
# Include heartbeat metadata if present
metadata = Map.get(response, "_heartbeat_metadata", %{})
{:ok, Map.put(data, "_heartbeat_metadata", metadata)}
{:error, reason} ->
{:error, {:json_decode_error, reason}}
end
%{"error" => %{"message" => message}} ->
{:error, message}
unexpected ->
{:error, {:unexpected_response, unexpected}}
end
end
defp send_heartbeat(agent_id) do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "heartbeat",
"arguments" => %{"agent_id" => agent_id}
}
}
case EnhancedMCPServer.handle_enhanced_mcp_request(request) do
%{"result" => _} -> :ok
%{"error" => %{"message" => message}} -> {:error, message}
_ -> {:error, :unknown_heartbeat_error}
end
end
defp schedule_heartbeat(interval) do
Process.send_after(self(), :heartbeat_timer, interval)
end
defp update_last_heartbeat(state) do
%{state | last_heartbeat: DateTime.utc_now()}
end
end
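`enhanced_mcp_call/2` branches on the shape of the MCP response map. A minimal sketch of that classification, with the `Jason` payload decoding elided for brevity (`McpResponseDemo` is a hypothetical name):

```elixir
defmodule McpResponseDemo do
  # Classify an MCP-style response map; JSON payload decoding is elided
  def classify(%{"result" => _}), do: :ok
  def classify(%{"error" => %{"message" => message}}), do: {:error, message}
  def classify(_other), do: {:error, :unexpected_response}
end
```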

View File

@@ -0,0 +1,354 @@
defmodule AgentCoordinator.CodebaseRegistry do
@moduledoc """
Registry for managing multiple codebases and their metadata.
Tracks codebase state, dependencies, and cross-codebase coordination.
"""
use GenServer
defstruct [
:codebases,
:cross_codebase_dependencies,
:nats_conn
]
@type codebase :: %{
id: String.t(),
name: String.t(),
workspace_path: String.t(),
description: String.t() | nil,
agents: [String.t()],
active_tasks: [String.t()],
metadata: map(),
created_at: DateTime.t(),
updated_at: DateTime.t()
}
# Client API
def start_link(opts \\ []) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
def register_codebase(codebase_data) do
GenServer.call(__MODULE__, {:register_codebase, codebase_data})
end
def update_codebase(codebase_id, updates) do
GenServer.call(__MODULE__, {:update_codebase, codebase_id, updates})
end
def get_codebase(codebase_id) do
GenServer.call(__MODULE__, {:get_codebase, codebase_id})
end
def list_codebases do
GenServer.call(__MODULE__, :list_codebases)
end
def add_agent_to_codebase(codebase_id, agent_id) do
GenServer.call(__MODULE__, {:add_agent_to_codebase, codebase_id, agent_id})
end
def remove_agent_from_codebase(codebase_id, agent_id) do
GenServer.call(__MODULE__, {:remove_agent_from_codebase, codebase_id, agent_id})
end
def add_cross_codebase_dependency(
source_codebase,
target_codebase,
dependency_type,
metadata \\ %{}
) do
GenServer.call(
__MODULE__,
{:add_cross_dependency, source_codebase, target_codebase, dependency_type, metadata}
)
end
def get_codebase_dependencies(codebase_id) do
GenServer.call(__MODULE__, {:get_dependencies, codebase_id})
end
def get_codebase_stats(codebase_id) do
GenServer.call(__MODULE__, {:get_stats, codebase_id})
end
def can_execute_cross_codebase_task?(source_codebase, target_codebase) do
GenServer.call(__MODULE__, {:can_execute_cross_task, source_codebase, target_codebase})
end
# Server callbacks
def init(opts) do
nats_config = Keyword.get(opts, :nats, [])
nats_conn =
case nats_config do
[] ->
nil
config ->
case Gnat.start_link(config) do
{:ok, conn} ->
# Subscribe to codebase events
Gnat.sub(conn, self(), "codebase.>")
conn
{:error, _reason} ->
nil
end
end
# Register default codebase
default_codebase = create_default_codebase()
state = %__MODULE__{
codebases: %{"default" => default_codebase},
cross_codebase_dependencies: %{},
nats_conn: nats_conn
}
{:ok, state}
end
def handle_call({:register_codebase, codebase_data}, _from, state) do
codebase_id = Map.get(codebase_data, "id") || Map.get(codebase_data, :id) || UUID.uuid4()
codebase = %{
id: codebase_id,
name: Map.get(codebase_data, "name") || Map.get(codebase_data, :name, "Unnamed Codebase"),
workspace_path:
Map.get(codebase_data, "workspace_path") || Map.get(codebase_data, :workspace_path),
description: Map.get(codebase_data, "description") || Map.get(codebase_data, :description),
agents: [],
active_tasks: [],
metadata: Map.get(codebase_data, "metadata") || Map.get(codebase_data, :metadata, %{}),
created_at: DateTime.utc_now(),
updated_at: DateTime.utc_now()
}
case Map.has_key?(state.codebases, codebase_id) do
true ->
{:reply, {:error, "Codebase already exists"}, state}
false ->
new_codebases = Map.put(state.codebases, codebase_id, codebase)
new_state = %{state | codebases: new_codebases}
# Publish codebase registration event
if state.nats_conn do
publish_event(state.nats_conn, "codebase.registered", %{codebase: codebase})
end
{:reply, {:ok, codebase_id}, new_state}
end
end
def handle_call({:update_codebase, codebase_id, updates}, _from, state) do
case Map.get(state.codebases, codebase_id) do
nil ->
{:reply, {:error, "Codebase not found"}, state}
codebase ->
updated_codebase =
Map.merge(codebase, updates)
|> Map.put(:updated_at, DateTime.utc_now())
new_codebases = Map.put(state.codebases, codebase_id, updated_codebase)
new_state = %{state | codebases: new_codebases}
# Publish update event
if state.nats_conn do
publish_event(state.nats_conn, "codebase.updated", %{
codebase_id: codebase_id,
updates: updates
})
end
{:reply, {:ok, updated_codebase}, new_state}
end
end
def handle_call({:get_codebase, codebase_id}, _from, state) do
codebase = Map.get(state.codebases, codebase_id)
{:reply, codebase, state}
end
def handle_call(:list_codebases, _from, state) do
codebases = Map.values(state.codebases)
{:reply, codebases, state}
end
def handle_call({:add_agent_to_codebase, codebase_id, agent_id}, _from, state) do
case Map.get(state.codebases, codebase_id) do
nil ->
{:reply, {:error, "Codebase not found"}, state}
codebase ->
updated_agents = Enum.uniq([agent_id | codebase.agents])
updated_codebase = %{codebase | agents: updated_agents, updated_at: DateTime.utc_now()}
new_codebases = Map.put(state.codebases, codebase_id, updated_codebase)
{:reply, :ok, %{state | codebases: new_codebases}}
end
end
def handle_call({:remove_agent_from_codebase, codebase_id, agent_id}, _from, state) do
case Map.get(state.codebases, codebase_id) do
nil ->
{:reply, {:error, "Codebase not found"}, state}
codebase ->
updated_agents = Enum.reject(codebase.agents, &(&1 == agent_id))
updated_codebase = %{codebase | agents: updated_agents, updated_at: DateTime.utc_now()}
new_codebases = Map.put(state.codebases, codebase_id, updated_codebase)
{:reply, :ok, %{state | codebases: new_codebases}}
end
end
def handle_call({:add_cross_dependency, source_id, target_id, dep_type, metadata}, _from, state) do
dependency = %{
source: source_id,
target: target_id,
type: dep_type,
metadata: metadata,
created_at: DateTime.utc_now()
}
key = "#{source_id}->#{target_id}"
new_dependencies = Map.put(state.cross_codebase_dependencies, key, dependency)
# Publish cross-codebase dependency event
if state.nats_conn do
publish_event(state.nats_conn, "codebase.dependency.added", %{
dependency: dependency
})
end
{:reply, :ok, %{state | cross_codebase_dependencies: new_dependencies}}
end
def handle_call({:get_dependencies, codebase_id}, _from, state) do
dependencies =
state.cross_codebase_dependencies
|> Map.values()
|> Enum.filter(fn dep -> dep.source == codebase_id or dep.target == codebase_id end)
{:reply, dependencies, state}
end
def handle_call({:get_stats, codebase_id}, _from, state) do
case Map.get(state.codebases, codebase_id) do
nil ->
{:reply, {:error, "Codebase not found"}, state}
codebase ->
stats = %{
id: codebase.id,
name: codebase.name,
agent_count: length(codebase.agents),
active_task_count: length(codebase.active_tasks),
dependencies: get_dependency_stats(state, codebase_id),
last_updated: codebase.updated_at
}
{:reply, {:ok, stats}, state}
end
end
def handle_call({:can_execute_cross_task, source_id, target_id}, _from, state) do
# Check if both codebases exist
source_exists = Map.has_key?(state.codebases, source_id)
target_exists = Map.has_key?(state.codebases, target_id)
can_execute =
source_exists and target_exists and
(source_id == target_id or has_cross_dependency?(state, source_id, target_id))
{:reply, can_execute, state}
end
# Handle NATS messages
def handle_info({:msg, %{topic: "codebase.task.started", body: body}}, state) do
%{"codebase_id" => codebase_id, "task_id" => task_id} = Jason.decode!(body)
case Map.get(state.codebases, codebase_id) do
nil ->
{:noreply, state}
codebase ->
updated_tasks = Enum.uniq([task_id | codebase.active_tasks])
updated_codebase = %{codebase | active_tasks: updated_tasks}
new_codebases = Map.put(state.codebases, codebase_id, updated_codebase)
{:noreply, %{state | codebases: new_codebases}}
end
end
def handle_info({:msg, %{topic: "codebase.task.completed", body: body}}, state) do
%{"codebase_id" => codebase_id, "task_id" => task_id} = Jason.decode!(body)
case Map.get(state.codebases, codebase_id) do
nil ->
{:noreply, state}
codebase ->
updated_tasks = Enum.reject(codebase.active_tasks, &(&1 == task_id))
updated_codebase = %{codebase | active_tasks: updated_tasks}
new_codebases = Map.put(state.codebases, codebase_id, updated_codebase)
{:noreply, %{state | codebases: new_codebases}}
end
end
def handle_info({:msg, _msg}, state) do
# Ignore other messages
{:noreply, state}
end
# Private helpers
defp create_default_codebase do
%{
id: "default",
name: "Default Codebase",
workspace_path: nil,
description: "Default codebase for agents without specific codebase assignment",
agents: [],
active_tasks: [],
metadata: %{},
created_at: DateTime.utc_now(),
updated_at: DateTime.utc_now()
}
end
defp has_cross_dependency?(state, source_id, target_id) do
key = "#{source_id}->#{target_id}"
Map.has_key?(state.cross_codebase_dependencies, key)
end
defp get_dependency_stats(state, codebase_id) do
incoming =
state.cross_codebase_dependencies
|> Map.values()
|> Enum.filter(fn dep -> dep.target == codebase_id end)
|> length()
outgoing =
state.cross_codebase_dependencies
|> Map.values()
|> Enum.filter(fn dep -> dep.source == codebase_id end)
|> length()
%{incoming: incoming, outgoing: outgoing}
end
defp publish_event(conn, topic, data) do
if conn do
message = Jason.encode!(data)
Gnat.pub(conn, topic, message)
end
end
end
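Cross-codebase dependencies are stored under string keys of the form `"source->target"`, and a cross-codebase task is allowed when the codebases match or such a key exists. A standalone sketch of just that check (hypothetical module name; the real `can_execute_cross_codebase_task?/2` also verifies both codebases are registered):

```elixir
defmodule CrossDepDemo do
  # Dependencies are keyed "source->target"; same-codebase tasks always pass
  def key(source_id, target_id), do: "#{source_id}->#{target_id}"

  def can_execute?(deps, source_id, target_id) do
    source_id == target_id or Map.has_key?(deps, key(source_id, target_id))
  end
end
```

Note the key is directional: a registered `"api->web"` dependency does not permit a task flowing from `web` to `api`.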

View File

@@ -0,0 +1,266 @@
defmodule AgentCoordinator.EnhancedMCPServer do
@moduledoc """
Enhanced MCP server with automatic heartbeat management and collision detection.
This module extends the base MCP server with:
1. Automatic heartbeats on every operation
2. Agent session tracking
3. Enhanced collision detection
4. Automatic agent cleanup on disconnect
"""
use GenServer
alias AgentCoordinator.{MCPServer, AutoHeartbeat, TaskRegistry}
# Track active agent sessions
defstruct [
:agent_sessions,
:session_monitors
]
# Client API
def start_link(opts \\ []) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
@doc """
Enhanced MCP request handler with automatic heartbeat management
"""
def handle_enhanced_mcp_request(request, session_info \\ %{}) do
GenServer.call(__MODULE__, {:enhanced_mcp_request, request, session_info})
end
@doc """
Register an agent with enhanced session tracking
"""
def register_agent_with_session(name, capabilities, session_pid \\ self()) do
GenServer.call(__MODULE__, {:register_agent_with_session, name, capabilities, session_pid})
end
# Server callbacks
def init(_opts) do
state = %__MODULE__{
agent_sessions: %{},
session_monitors: %{}
}
{:ok, state}
end
def handle_call({:enhanced_mcp_request, request, session_info}, {from_pid, _}, state) do
# Extract agent_id from session or request
agent_id = extract_agent_id(request, session_info, state)
# If we have an agent_id, send heartbeat before and after operation
enhanced_result =
case agent_id do
nil ->
# No agent context, use normal MCP processing
MCPServer.handle_mcp_request(request)
id ->
# Send pre-operation heartbeat
pre_heartbeat = TaskRegistry.heartbeat_agent(id)
# Process the request
result = MCPServer.handle_mcp_request(request)
# Send post-operation heartbeat and update session activity
post_heartbeat = TaskRegistry.heartbeat_agent(id)
update_session_activity(state, id, from_pid)
# Add heartbeat metadata to successful responses
case result do
%{"result" => _} = success ->
Map.put(success, "_heartbeat_metadata", %{
agent_id: id,
pre_heartbeat: pre_heartbeat,
post_heartbeat: post_heartbeat,
timestamp: DateTime.utc_now()
})
error_result ->
error_result
end
end
{:reply, enhanced_result, state}
end
def handle_call({:register_agent_with_session, name, capabilities, session_pid}, _from, state) do
# Convert capabilities to strings if they're atoms
string_capabilities =
Enum.map(capabilities, fn
cap when is_atom(cap) -> Atom.to_string(cap)
cap when is_binary(cap) -> cap
end)
# Register the agent normally first
case MCPServer.handle_mcp_request(%{
"method" => "tools/call",
"params" => %{
"name" => "register_agent",
"arguments" => %{"name" => name, "capabilities" => string_capabilities}
}
}) do
%{"result" => %{"content" => [%{"text" => response_json}]}} ->
case Jason.decode(response_json) do
{:ok, %{"agent_id" => agent_id}} ->
# Track the session
monitor_ref = Process.monitor(session_pid)
new_state = %{
state
| agent_sessions:
Map.put(state.agent_sessions, agent_id, %{
pid: session_pid,
name: name,
capabilities: capabilities,
registered_at: DateTime.utc_now(),
last_activity: DateTime.utc_now()
}),
session_monitors: Map.put(state.session_monitors, monitor_ref, agent_id)
}
# Start automatic heartbeat management (AutoHeartbeat is already supervised)
AutoHeartbeat.register_agent_with_heartbeat(name, capabilities, %{
session_pid: session_pid,
enhanced_server: true
})
{:reply, {:ok, agent_id}, new_state}
{:error, reason} ->
{:reply, {:error, reason}, state}
end
%{"error" => %{"message" => message}} ->
{:reply, {:error, message}, state}
_ ->
{:reply, {:error, "Unexpected response format"}, state}
end
end
def handle_call(:get_enhanced_task_board, _from, state) do
# Get the regular task board
case MCPServer.handle_mcp_request(%{
"method" => "tools/call",
"params" => %{"name" => "get_task_board", "arguments" => %{}}
}) do
%{"result" => %{"content" => [%{"text" => response_json}]}} ->
case Jason.decode(response_json) do
{:ok, %{"agents" => agents}} ->
# Enhance with session information
enhanced_agents =
Enum.map(agents, fn agent ->
agent_id = agent["agent_id"]
session_info = Map.get(state.agent_sessions, agent_id, %{})
Map.merge(agent, %{
"session_active" => Map.has_key?(state.agent_sessions, agent_id),
"last_activity" => Map.get(session_info, :last_activity),
"session_duration" => calculate_session_duration(session_info)
})
end)
result = %{
"agents" => enhanced_agents,
"active_sessions" => map_size(state.agent_sessions)
}
{:reply, {:ok, result}, state}
{:error, reason} ->
{:reply, {:error, reason}, state}
end
%{"error" => %{"message" => message}} ->
{:reply, {:error, message}, state}
end
end
# Handle process monitoring - cleanup when agent session dies
def handle_info({:DOWN, monitor_ref, :process, _pid, _reason}, state) do
case Map.get(state.session_monitors, monitor_ref) do
nil ->
{:noreply, state}
agent_id ->
# Clean up the agent session
new_state = %{
state
| agent_sessions: Map.delete(state.agent_sessions, agent_id),
session_monitors: Map.delete(state.session_monitors, monitor_ref)
}
# Stop heartbeat management
AutoHeartbeat.stop_heartbeat(agent_id)
# Mark agent as offline in registry
# (This could be enhanced to gracefully handle ongoing tasks)
{:noreply, new_state}
end
end
# Private helpers
defp extract_agent_id(request, session_info, state) do
# Try to get agent_id from various sources
cond do
# From request arguments
Map.get(request, "params", %{})
|> Map.get("arguments", %{})
|> Map.get("agent_id") ->
request["params"]["arguments"]["agent_id"]
# From session info
Map.get(session_info, :agent_id) ->
session_info.agent_id
# From session lookup by PID
session_pid = Map.get(session_info, :session_pid, self()) ->
find_agent_by_session_pid(state, session_pid)
true ->
nil
end
end
defp find_agent_by_session_pid(state, session_pid) do
Enum.find_value(state.agent_sessions, fn {agent_id, session_data} ->
if session_data.pid == session_pid, do: agent_id, else: nil
end)
end
defp update_session_activity(state, agent_id, _session_pid) do
case Map.get(state.agent_sessions, agent_id) do
nil ->
:ok
session_data ->
_updated_session = %{session_data | last_activity: DateTime.utc_now()}
# Note: This doesn't update the state since we're in a call handler
# In a real implementation, you might want to use cast for this
:ok
end
end
@doc """
Get enhanced task board with session information
"""
def get_enhanced_task_board do
GenServer.call(__MODULE__, :get_enhanced_task_board)
end
defp calculate_session_duration(%{registered_at: start_time}) do
DateTime.diff(DateTime.utc_now(), start_time, :second)
end
defp calculate_session_duration(_), do: nil
end
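The `extract_agent_id/3` precedence — request arguments first, then session info — can be sketched with `get_in/2` and the `||` fallback (hypothetical module name; the real helper has a third fallback that looks the agent up by session PID):

```elixir
defmodule AgentIdDemo do
  # Prefer the agent_id embedded in the request, then the session info
  def extract(request, session_info) do
    get_in(request, ["params", "arguments", "agent_id"]) ||
      Map.get(session_info, :agent_id)
  end
end
```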

View File

@@ -4,7 +4,7 @@ defmodule AgentCoordinator.Inbox do
"""
use GenServer
alias AgentCoordinator.Task
defstruct [
:agent_id,
@@ -48,6 +48,21 @@ defmodule AgentCoordinator.Inbox do
GenServer.call(via_tuple(agent_id), :list_tasks)
end
def get_current_task(agent_id) do
GenServer.call(via_tuple(agent_id), :get_current_task)
end
def stop(agent_id) do
case Registry.lookup(AgentCoordinator.InboxRegistry, agent_id) do
[{pid, _}] ->
GenServer.stop(pid, :normal)
:ok
[] ->
{:error, :not_found}
end
end
# Server callbacks
def init({agent_id, opts}) do
@@ -68,8 +83,11 @@ defmodule AgentCoordinator.Inbox do
new_state = %{state | pending_tasks: pending_tasks}
# Broadcast task added
Phoenix.PubSub.broadcast(
  AgentCoordinator.PubSub,
  "agent:#{state.agent_id}",
  {:task_added, task}
)
{:reply, :ok, new_state}
end
@@ -81,14 +99,10 @@ defmodule AgentCoordinator.Inbox do
[next_task | remaining_tasks] ->
updated_task = Task.assign_to_agent(next_task, state.agent_id)
new_state = %{state | pending_tasks: remaining_tasks, in_progress_task: updated_task}
# Broadcast task started
Phoenix.PubSub.broadcast(AgentCoordinator.PubSub, "global", {:task_started, updated_task})
{:reply, updated_task, new_state}
end
@@ -103,17 +117,18 @@ defmodule AgentCoordinator.Inbox do
completed_task = Task.complete(task)
# Add to completed tasks with history limit
completed_tasks =
  [completed_task | state.completed_tasks]
|> Enum.take(state.max_history)
new_state = %{state | in_progress_task: nil, completed_tasks: completed_tasks}
# Broadcast task completed
Phoenix.PubSub.broadcast(
  AgentCoordinator.PubSub,
  "global",
  {:task_completed, completed_task}
)
{:reply, completed_task, new_state}
end
@@ -134,12 +149,17 @@ defmodule AgentCoordinator.Inbox do
tasks = %{
pending: state.pending_tasks,
in_progress: state.in_progress_task,
# Recent 10
completed: Enum.take(state.completed_tasks, 10)
}
{:reply, tasks, state}
end
def handle_call(:get_current_task, _from, state) do
{:reply, state.in_progress_task, state}
end
# Private helpers
defp via_tuple(agent_id) do
@@ -150,11 +170,12 @@ defmodule AgentCoordinator.Inbox do
priority_order = %{urgent: 0, high: 1, normal: 2, low: 3}
new_priority = Map.get(priority_order, new_task.priority, 2)
{before, after_tasks} =
  Enum.split_while(tasks, fn task ->
task_priority = Map.get(priority_order, task.priority, 2)
task_priority <= new_priority
end)
before ++ [new_task] ++ after_tasks
end
end
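The priority-ordered insertion in the inbox (note the rename to `after_tasks` — `after` is a reserved word in Elixir) can be exercised as a standalone sketch (hypothetical module name, tasks reduced to bare maps):

```elixir
defmodule PriorityInsertDemo do
  @priority_order %{urgent: 0, high: 1, normal: 2, low: 3}

  # Insert after the last task of equal-or-higher priority (FIFO within a level)
  def insert(tasks, new_task) do
    new_priority = Map.get(@priority_order, new_task.priority, 2)

    {before, after_tasks} =
      Enum.split_while(tasks, fn task ->
        Map.get(@priority_order, task.priority, 2) <= new_priority
      end)

    before ++ [new_task] ++ after_tasks
  end
end
```

Because `split_while` keeps equal-priority tasks in `before`, tasks at the same level are served in arrival order.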

View File

@@ -5,7 +5,7 @@ defmodule AgentCoordinator.MCPServer do
"""
use GenServer
alias AgentCoordinator.{TaskRegistry, Inbox, Agent, Task, CodebaseRegistry}
@mcp_tools [
%{
@@ -17,12 +17,33 @@ defmodule AgentCoordinator.MCPServer do
"name" => %{"type" => "string"},
"capabilities" => %{
"type" => "array",
"items" => %{
  "type" => "string",
  "enum" => ["coding", "testing", "documentation", "analysis", "review"]
}
},
"codebase_id" => %{"type" => "string"},
"workspace_path" => %{"type" => "string"},
"cross_codebase_capable" => %{"type" => "boolean"}
},
"required" => ["name", "capabilities"]
}
},
%{
"name" => "register_codebase",
"description" => "Register a new codebase in the coordination system",
"inputSchema" => %{
"type" => "object",
"properties" => %{
"id" => %{"type" => "string"},
"name" => %{"type" => "string"},
"workspace_path" => %{"type" => "string"},
"description" => %{"type" => "string"},
"metadata" => %{"type" => "object"}
},
"required" => ["name", "workspace_path"]
}
},
%{
"name" => "create_task",
"description" => "Create a new task in the coordination system",
@@ -32,15 +53,47 @@ defmodule AgentCoordinator.MCPServer do
"title" => %{"type" => "string"},
"description" => %{"type" => "string"},
"priority" => %{"type" => "string", "enum" => ["low", "normal", "high", "urgent"]},
"codebase_id" => %{"type" => "string"},
"file_paths" => %{"type" => "array", "items" => %{"type" => "string"}},
"required_capabilities" => %{
"type" => "array",
"items" => %{"type" => "string"}
},
"cross_codebase_dependencies" => %{
"type" => "array",
"items" => %{
"type" => "object",
"properties" => %{
"codebase_id" => %{"type" => "string"},
"task_id" => %{"type" => "string"}
}
}
}
},
"required" => ["title", "description"]
}
},
%{
"name" => "create_cross_codebase_task",
"description" => "Create a task that spans multiple codebases",
"inputSchema" => %{
"type" => "object",
"properties" => %{
"title" => %{"type" => "string"},
"description" => %{"type" => "string"},
"primary_codebase_id" => %{"type" => "string"},
"affected_codebases" => %{
"type" => "array",
"items" => %{"type" => "string"}
},
"coordination_strategy" => %{
"type" => "string",
"enum" => ["sequential", "parallel", "leader_follower"]
}
},
"required" => ["title", "description", "primary_codebase_id", "affected_codebases"]
}
},
%{
"name" => "get_next_task",
"description" => "Get the next task for an agent",
@@ -66,11 +119,46 @@ defmodule AgentCoordinator.MCPServer do
%{
"name" => "get_task_board",
"description" => "Get overview of all agents and their current tasks",
"inputSchema" => %{
"type" => "object",
"properties" => %{
"codebase_id" => %{"type" => "string"}
}
}
},
%{
"name" => "get_codebase_status",
"description" => "Get status and statistics for a specific codebase",
"inputSchema" => %{
"type" => "object",
"properties" => %{
"codebase_id" => %{"type" => "string"}
},
"required" => ["codebase_id"]
}
},
%{
"name" => "list_codebases",
"description" => "List all registered codebases",
"inputSchema" => %{
"type" => "object",
"properties" => %{}
}
},
%{
"name" => "add_codebase_dependency",
"description" => "Add a dependency relationship between codebases",
"inputSchema" => %{
"type" => "object",
"properties" => %{
"source_codebase_id" => %{"type" => "string"},
"target_codebase_id" => %{"type" => "string"},
"dependency_type" => %{"type" => "string"},
"metadata" => %{"type" => "object"}
},
"required" => ["source_codebase_id", "target_codebase_id", "dependency_type"]
}
},
%{
"name" => "heartbeat",
"description" => "Send heartbeat to maintain agent status",
@@ -81,6 +169,18 @@ defmodule AgentCoordinator.MCPServer do
},
"required" => ["agent_id"]
}
},
%{
"name" => "unregister_agent",
"description" => "Unregister an agent from the coordination system (e.g., when waiting for user input)",
"inputSchema" => %{
"type" => "object",
"properties" => %{
"agent_id" => %{"type" => "string"},
"reason" => %{"type" => "string"}
},
"required" => ["agent_id"]
}
}
]
@@ -111,26 +211,55 @@ defmodule AgentCoordinator.MCPServer do
# MCP request processing
defp process_mcp_request(%{"method" => "tools/list"}) do
defp process_mcp_request(%{"method" => "initialize"} = request) do
id = Map.get(request, "id", nil)
%{
"jsonrpc" => "2.0",
"id" => id,
"result" => %{
"protocolVersion" => "2024-11-05",
"capabilities" => %{
"tools" => %{}
},
"serverInfo" => %{
"name" => "agent-coordinator",
"version" => "0.1.0"
}
}
}
end
defp process_mcp_request(%{"method" => "tools/list"} = request) do
id = Map.get(request, "id", nil)
%{
"jsonrpc" => "2.0",
"id" => id,
"result" => %{"tools" => @mcp_tools}
}
end
defp process_mcp_request(
%{
"method" => "tools/call",
"params" => %{"name" => tool_name, "arguments" => args}
} = request
) do
id = Map.get(request, "id", nil)
result = case tool_name do
result =
case tool_name do
"register_agent" -> register_agent(args)
"register_codebase" -> register_codebase(args)
"create_task" -> create_task(args)
"create_cross_codebase_task" -> create_cross_codebase_task(args)
"get_next_task" -> get_next_task(args)
"complete_task" -> complete_task(args)
"get_task_board" -> get_task_board(args)
"get_codebase_status" -> get_codebase_status(args)
"list_codebases" -> list_codebases(args)
"add_codebase_dependency" -> add_codebase_dependency(args)
"heartbeat" -> heartbeat(args)
"unregister_agent" -> unregister_agent(args)
_ -> {:error, "Unknown tool: #{tool_name}"}
end
@@ -151,34 +280,60 @@ defmodule AgentCoordinator.MCPServer do
end
end
defp process_mcp_request(request) do
id = Map.get(request, "id", nil)
%{
"jsonrpc" => "2.0",
"id" => id,
"error" => %{"code" => -32601, "message" => "Method not found"}
}
end
# Tool implementations
defp register_agent(%{"name" => name, "capabilities" => capabilities}) do
caps = Enum.map(capabilities, &String.to_existing_atom/1)
agent = Agent.new(name, caps)
defp register_agent(%{"name" => name, "capabilities" => capabilities} = args) do
caps = Enum.map(capabilities, &String.to_atom/1)
opts = [
codebase_id: Map.get(args, "codebase_id", "default"),
workspace_path: Map.get(args, "workspace_path"),
metadata: %{
cross_codebase_capable: Map.get(args, "cross_codebase_capable", false)
}
]
agent = Agent.new(name, caps, opts)
case TaskRegistry.register_agent(agent) do
:ok ->
# Add agent to codebase registry
CodebaseRegistry.add_agent_to_codebase(agent.codebase_id, agent.id)
# Start inbox for the agent
{:ok, _pid} = Inbox.start_link(agent.id)
{:ok, %{agent_id: agent.id, codebase_id: agent.codebase_id, status: "registered"}}
{:error, reason} ->
{:error, "Failed to register agent: #{reason}"}
end
end
defp register_codebase(args) do
case CodebaseRegistry.register_codebase(args) do
{:ok, codebase_id} ->
{:ok, %{codebase_id: codebase_id, status: "registered"}}
{:error, reason} ->
{:error, "Failed to register codebase: #{reason}"}
end
end
defp create_task(%{"title" => title, "description" => description} = args) do
opts = [
priority: String.to_atom(Map.get(args, "priority", "normal")),
codebase_id: Map.get(args, "codebase_id", "default"),
file_paths: Map.get(args, "file_paths", []),
cross_codebase_dependencies: Map.get(args, "cross_codebase_dependencies", []),
metadata: %{
required_capabilities: Map.get(args, "required_capabilities", [])
}
@@ -188,22 +343,81 @@ defmodule AgentCoordinator.MCPServer do
case TaskRegistry.assign_task(task) do
{:ok, agent_id} ->
{:ok, %{task_id: task.id, assigned_to: agent_id, codebase_id: task.codebase_id, status: "assigned"}}
{:error, :no_available_agents} ->
# Add to global pending queue
TaskRegistry.add_to_pending(task)
{:ok, %{task_id: task.id, status: "queued"}}
{:ok, %{task_id: task.id, codebase_id: task.codebase_id, status: "queued"}}
end
end
defp create_cross_codebase_task(%{"title" => title, "description" => description} = args) do
primary_codebase = Map.get(args, "primary_codebase_id")
affected_codebases = Map.get(args, "affected_codebases", [])
strategy = Map.get(args, "coordination_strategy", "sequential")
# Create main task in primary codebase
main_task_opts = [
codebase_id: primary_codebase,
metadata: %{
cross_codebase_task: true,
coordination_strategy: strategy,
affected_codebases: affected_codebases
}
]
main_task = Task.new(title, description, main_task_opts)
# Create dependent tasks in other codebases
dependent_tasks =
Enum.map(affected_codebases, fn codebase_id ->
if codebase_id != primary_codebase do
dependent_opts = [
codebase_id: codebase_id,
cross_codebase_dependencies: [%{codebase_id: primary_codebase, task_id: main_task.id}],
metadata: %{
cross_codebase_task: true,
primary_task_id: main_task.id,
coordination_strategy: strategy
}
]
Task.new("#{title} (#{codebase_id})", "Cross-codebase task: #{description}", dependent_opts)
end
end)
|> Enum.filter(&(&1 != nil))
# Try to assign all tasks
all_tasks = [main_task | dependent_tasks]
results =
Enum.map(all_tasks, fn task ->
case TaskRegistry.assign_task(task) do
{:ok, agent_id} -> %{task_id: task.id, codebase_id: task.codebase_id, agent_id: agent_id, status: "assigned"}
{:error, :no_available_agents} ->
TaskRegistry.add_to_pending(task)
%{task_id: task.id, codebase_id: task.codebase_id, status: "queued"}
end
end)
{:ok, %{
main_task_id: main_task.id,
primary_codebase: primary_codebase,
coordination_strategy: strategy,
tasks: results,
status: "created"
}}
end
defp get_next_task(%{"agent_id" => agent_id}) do
case Inbox.get_next_task(agent_id) do
nil ->
{:ok, %{message: "No tasks available"}}
task ->
{:ok,
%{
task_id: task.id,
title: task.title,
description: task.description,
@@ -219,7 +433,8 @@ defmodule AgentCoordinator.MCPServer do
{:error, "Failed to complete task: #{reason}"}
completed_task ->
{:ok,
%{
task_id: completed_task.id,
status: "completed",
completed_at: completed_task.updated_at
@@ -227,10 +442,19 @@ defmodule AgentCoordinator.MCPServer do
end
end
defp get_task_board(args) do
codebase_id = Map.get(args, "codebase_id")
agents = TaskRegistry.list_agents()
# Filter agents by codebase if specified
filtered_agents =
case codebase_id do
nil -> agents
id -> Enum.filter(agents, fn agent -> agent.codebase_id == id end)
end
board =
Enum.map(filtered_agents, fn agent ->
status = Inbox.get_status(agent.id)
%{
@@ -238,17 +462,70 @@ defmodule AgentCoordinator.MCPServer do
name: agent.name,
capabilities: agent.capabilities,
status: agent.status,
codebase_id: agent.codebase_id,
workspace_path: agent.workspace_path,
online: Agent.is_online?(agent),
cross_codebase_capable: Agent.can_work_cross_codebase?(agent),
current_task:
status.current_task &&
%{
id: status.current_task.id,
title: status.current_task.title,
codebase_id: status.current_task.codebase_id
},
pending_tasks: status.pending_count,
completed_tasks: status.completed_count
}
end)
{:ok, %{agents: board, codebase_filter: codebase_id}}
end
defp get_codebase_status(%{"codebase_id" => codebase_id}) do
case CodebaseRegistry.get_codebase_stats(codebase_id) do
{:ok, stats} ->
{:ok, stats}
{:error, reason} ->
{:error, "Failed to get codebase status: #{reason}"}
end
end
defp list_codebases(_args) do
codebases = CodebaseRegistry.list_codebases()
codebase_summaries =
Enum.map(codebases, fn codebase ->
%{
id: codebase.id,
name: codebase.name,
workspace_path: codebase.workspace_path,
description: codebase.description,
agent_count: length(codebase.agents),
active_task_count: length(codebase.active_tasks),
created_at: codebase.created_at,
updated_at: codebase.updated_at
}
end)
{:ok, %{codebases: codebase_summaries}}
end
defp add_codebase_dependency(%{"source_codebase_id" => source, "target_codebase_id" => target, "dependency_type" => dep_type} = args) do
metadata = Map.get(args, "metadata", %{})
case CodebaseRegistry.add_cross_codebase_dependency(source, target, dep_type, metadata) do
:ok ->
{:ok, %{
source_codebase: source,
target_codebase: target,
dependency_type: dep_type,
status: "added"
}}
{:error, reason} ->
{:error, "Failed to add dependency: #{reason}"}
end
end
defp heartbeat(%{"agent_id" => agent_id}) do
@@ -260,4 +537,16 @@ defmodule AgentCoordinator.MCPServer do
{:error, "Heartbeat failed: #{reason}"}
end
end
defp unregister_agent(%{"agent_id" => agent_id} = args) do
reason = Map.get(args, "reason", "Agent unregistered")
case TaskRegistry.unregister_agent(agent_id, reason) do
:ok ->
{:ok, %{status: "agent_unregistered", agent_id: agent_id, reason: reason}}
{:error, reason} ->
{:error, "Unregister failed: #{reason}"}
end
end
end


@@ -0,0 +1,888 @@
defmodule AgentCoordinator.MCPServerManager do
@moduledoc """
Manages external MCP servers as internal clients.
This module starts, monitors, and communicates with external MCP servers,
acting as a client to each while presenting their tools through the
unified Agent Coordinator interface.
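
## Example

A minimal, illustrative sketch of the client API defined below. The tool
name, arguments, and agent id are hypothetical; return shapes depend on
the routed server.

```elixir
# Assumes the manager is running (normally started by the app supervisor)
{:ok, _pid} = AgentCoordinator.MCPServerManager.start_link()

# Unified tool list: coordinator tools plus every external server's tools
tools = AgentCoordinator.MCPServerManager.get_unified_tools()

# Route a call; the agent context enables automatic heartbeats and
# auto-generated task tracking around the tool invocation
AgentCoordinator.MCPServerManager.route_tool_call(
  "read_file",
  %{"path" => "README.md"},
  %{agent_id: "agent-123"}
)
```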
"""
use GenServer
require Logger
defstruct [
:servers,
:server_processes,
:tool_registry,
:config
]
# Client API
def start_link(opts \\ []) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
@doc """
Get all tools from all managed servers plus Agent Coordinator tools
"""
def get_unified_tools do
GenServer.call(__MODULE__, :get_unified_tools)
end
@doc """
Route a tool call to the appropriate server
"""
def route_tool_call(tool_name, arguments, agent_context) do
GenServer.call(__MODULE__, {:route_tool_call, tool_name, arguments, agent_context})
end
@doc """
Get status of all managed servers
"""
def get_server_status do
GenServer.call(__MODULE__, :get_server_status)
end
@doc """
Restart a specific server
"""
def restart_server(server_name) do
GenServer.call(__MODULE__, {:restart_server, server_name})
end
# Server callbacks
def init(opts) do
config = load_server_config(opts)
state = %__MODULE__{
servers: %{},
server_processes: %{},
tool_registry: %{},
config: config
}
# Start all configured servers
{:ok, state, {:continue, :start_servers}}
end
def handle_continue(:start_servers, state) do
Logger.info("Starting external MCP servers...")
new_state =
Enum.reduce(state.config.servers, state, fn {name, config}, acc ->
case start_server(name, config) do
{:ok, server_info} ->
Logger.info("Started MCP server: #{name}")
%{
acc
| servers: Map.put(acc.servers, name, server_info),
server_processes: Map.put(acc.server_processes, name, server_info.pid)
}
{:error, reason} ->
Logger.error("Failed to start MCP server #{name}: #{reason}")
acc
end
end)
# Build initial tool registry
updated_state = refresh_tool_registry(new_state)
{:noreply, updated_state}
end
def handle_call(:get_unified_tools, _from, state) do
# Combine Agent Coordinator tools with external server tools
coordinator_tools = get_coordinator_tools()
external_tools = Map.values(state.tool_registry) |> List.flatten()
all_tools = coordinator_tools ++ external_tools
{:reply, all_tools, state}
end
def handle_call({:route_tool_call, tool_name, arguments, agent_context}, _from, state) do
case find_tool_server(tool_name, state) do
{:coordinator, _} ->
# Route to Agent Coordinator's own tools
result = handle_coordinator_tool(tool_name, arguments, agent_context)
{:reply, result, state}
{:external, server_name} ->
# Route to external server
result = call_external_tool(server_name, tool_name, arguments, agent_context, state)
{:reply, result, state}
:not_found ->
error_result = %{
"error" => %{
"code" => -32601,
"message" => "Tool not found: #{tool_name}"
}
}
{:reply, error_result, state}
end
end
def handle_call(:get_server_status, _from, state) do
status =
Enum.map(state.servers, fn {name, server_info} ->
{name,
%{
status: if(Process.alive?(server_info.pid), do: :running, else: :stopped),
pid: server_info.pid,
tools_count: length(Map.get(state.tool_registry, name, [])),
started_at: server_info.started_at
}}
end)
|> Map.new()
{:reply, status, state}
end
def handle_call({:restart_server, server_name}, _from, state) do
case Map.get(state.servers, server_name) do
nil ->
{:reply, {:error, "Server not found"}, state}
server_info ->
# Stop existing server
if Process.alive?(server_info.pid) do
Process.exit(server_info.pid, :kill)
end
# Start new server
server_config = Map.get(state.config.servers, server_name)
case start_server(server_name, server_config) do
{:ok, new_server_info} ->
new_state = %{
state
| servers: Map.put(state.servers, server_name, new_server_info),
server_processes:
Map.put(state.server_processes, server_name, new_server_info.pid)
}
updated_state = refresh_tool_registry(new_state)
{:reply, {:ok, new_server_info}, updated_state}
{:error, reason} ->
{:reply, {:error, reason}, state}
end
end
end
def handle_info({:DOWN, _ref, :port, port, reason}, state) do
# Handle server port death
case find_server_by_port(port, state.servers) do
{server_name, server_info} ->
Logger.warning("MCP server #{server_name} port died: #{reason}")
# Cleanup PID file and kill external process
if server_info.pid_file_path do
cleanup_pid_file(server_info.pid_file_path)
end
if server_info.os_pid do
kill_external_process(server_info.os_pid)
end
# Remove from state
new_state = %{
state
| servers: Map.delete(state.servers, server_name),
server_processes: Map.delete(state.server_processes, server_name),
tool_registry: Map.delete(state.tool_registry, server_name)
}
# Attempt restart if configured
if should_auto_restart?(server_name, state.config) do
Logger.info("Auto-restarting MCP server: #{server_name}")
Process.send_after(self(), {:restart_server, server_name}, 1000)
end
{:noreply, new_state}
nil ->
{:noreply, state}
end
end
def handle_info({:restart_server, server_name}, state) do
server_config = Map.get(state.config.servers, server_name)
case start_server(server_name, server_config) do
{:ok, server_info} ->
Logger.info("Auto-restarted MCP server: #{server_name}")
new_state = %{
state
| servers: Map.put(state.servers, server_name, server_info),
server_processes: Map.put(state.server_processes, server_name, server_info.pid)
}
updated_state = refresh_tool_registry(new_state)
{:noreply, updated_state}
{:error, reason} ->
Logger.error("Failed to auto-restart MCP server #{server_name}: #{reason}")
{:noreply, state}
end
end
def handle_info(_msg, state) do
{:noreply, state}
end
# Private functions
defp load_server_config(opts) do
# Allow override from opts or config file
config_file = Keyword.get(opts, :config_file, "mcp_servers.json")
if File.exists?(config_file) do
try do
case Jason.decode!(File.read!(config_file)) do
%{"servers" => servers} = full_config ->
# Convert string types to atoms and normalize server configs
normalized_servers =
Enum.into(servers, %{}, fn {name, config} ->
normalized_config =
config
|> Map.update("type", :stdio, fn
"stdio" -> :stdio
"http" -> :http
type when is_atom(type) -> type
type -> String.to_existing_atom(type)
end)
|> Enum.into(%{}, fn
{"type", type} -> {:type, type}
{key, value} -> {String.to_atom(key), value}
end)
{name, normalized_config}
end)
base_config = %{servers: normalized_servers}
# Add any additional config from the JSON file
case Map.get(full_config, "config") do
nil -> base_config
additional_config ->
Map.merge(base_config, %{config: additional_config})
end
_ ->
Logger.warning("Invalid config file format in #{config_file}, using defaults")
get_default_config()
end
rescue
e ->
Logger.warning("Failed to load config file #{config_file}: #{Exception.message(e)}, using defaults")
get_default_config()
end
else
Logger.warning("Config file #{config_file} not found, using defaults")
get_default_config()
end
end
defp get_default_config do
%{
servers: %{
"mcp_context7" => %{
type: :stdio,
command: "uvx",
args: ["mcp-server-context7"],
auto_restart: true,
description: "Context7 library documentation server"
},
"mcp_figma" => %{
type: :stdio,
command: "npx",
args: ["-y", "@figma/mcp-server-figma"],
auto_restart: true,
description: "Figma design integration server"
},
"mcp_filesystem" => %{
type: :stdio,
command: "npx",
args: ["-y", "@modelcontextprotocol/server-filesystem", "/home/ra"],
auto_restart: true,
description: "Filesystem operations server with heartbeat coverage"
},
"mcp_firebase" => %{
type: :stdio,
command: "npx",
args: ["-y", "@firebase/mcp-server"],
auto_restart: true,
description: "Firebase integration server"
},
"mcp_memory" => %{
type: :stdio,
command: "npx",
args: ["-y", "@modelcontextprotocol/server-memory"],
auto_restart: true,
description: "Memory and knowledge graph server"
},
"mcp_sequentialthi" => %{
type: :stdio,
command: "npx",
args: ["-y", "@modelcontextprotocol/server-sequential-thinking"],
auto_restart: true,
description: "Sequential thinking and reasoning server"
}
}
}
end
defp start_server(name, %{type: :stdio} = config) do
case start_stdio_server(name, config) do
{:ok, os_pid, port, pid_file_path} ->
# Monitor the port (not the OS PID)
port_ref = Port.monitor(port)
server_info = %{
name: name,
type: :stdio,
pid: port, # Use port as the "pid" for process tracking
os_pid: os_pid,
port: port,
pid_file_path: pid_file_path,
port_ref: port_ref,
started_at: DateTime.utc_now(),
tools: []
}
# Initialize the server and get tools
case initialize_server(server_info) do
{:ok, tools} ->
{:ok, %{server_info | tools: tools}}
{:error, reason} ->
# Cleanup on initialization failure
cleanup_pid_file(pid_file_path)
kill_external_process(os_pid)
# Only close port if it's still open
if Port.info(port) do
Port.close(port)
end
{:error, reason}
end
{:error, reason} ->
{:error, reason}
end
end
defp start_server(name, %{type: :http} = config) do
# For HTTP servers, we don't spawn processes - just store connection info
server_info = %{
name: name,
type: :http,
url: Map.get(config, :url),
pid: nil, # No process to track for HTTP
os_pid: nil,
port: nil,
pid_file_path: nil,
port_ref: nil,
started_at: DateTime.utc_now(),
tools: []
}
# For HTTP servers, we can try to get tools but don't need process management
case initialize_http_server(server_info) do
{:ok, tools} ->
{:ok, %{server_info | tools: tools}}
{:error, reason} ->
{:error, reason}
end
end
defp start_stdio_server(name, config) do
command = Map.get(config, :command, "npx")
args = Map.get(config, :args, [])
env = Map.get(config, :env, %{})
# Convert env map to list format expected by Port.open
env_list = Enum.map(env, fn {key, value} -> {String.to_charlist(key), String.to_charlist(value)} end)
port_options = [
:binary,
:stream,
{:line, 1024},
{:env, env_list},
:exit_status,
:hide
]
try do
port = Port.open({:spawn_executable, System.find_executable(command)},
[{:args, args} | port_options])
# Get the OS PID of the spawned process
{:os_pid, os_pid} = Port.info(port, :os_pid)
# Create PID file for cleanup
pid_file_path = create_pid_file(name, os_pid)
Logger.info("Started MCP server #{name} with OS PID #{os_pid}")
{:ok, os_pid, port, pid_file_path}
rescue
e ->
Logger.error("Failed to start stdio server #{name}: #{Exception.message(e)}")
{:error, Exception.message(e)}
end
end
defp create_pid_file(server_name, os_pid) do
pid_dir = Path.join(System.tmp_dir(), "mcp_servers")
File.mkdir_p!(pid_dir)
pid_file_path = Path.join(pid_dir, "#{server_name}.pid")
File.write!(pid_file_path, to_string(os_pid))
pid_file_path
end
defp cleanup_pid_file(pid_file_path) do
if File.exists?(pid_file_path) do
File.rm(pid_file_path)
end
end
defp kill_external_process(os_pid) when is_integer(os_pid) do
try do
case System.cmd("kill", ["-TERM", to_string(os_pid)]) do
{_, 0} ->
Logger.info("Successfully terminated process #{os_pid}")
:ok
{_, _} ->
# Try force kill
case System.cmd("kill", ["-KILL", to_string(os_pid)]) do
{_, 0} ->
Logger.info("Force killed process #{os_pid}")
:ok
{_, _} ->
Logger.warning("Failed to kill process #{os_pid}")
:error
end
end
rescue
_ -> :error
end
end
defp find_server_by_port(port, servers) do
Enum.find(servers, fn {_name, server_info} ->
server_info.port == port
end)
end
defp initialize_server(server_info) do
# Send initialize request
init_request = %{
"jsonrpc" => "2.0",
"id" => 1,
"method" => "initialize",
"params" => %{
"protocolVersion" => "2024-11-05",
"capabilities" => %{},
"clientInfo" => %{
"name" => "agent-coordinator",
"version" => "0.1.0"
}
}
}
with {:ok, _init_response} <- send_server_request(server_info, init_request),
{:ok, tools_response} <- get_server_tools(server_info) do
{:ok, tools_response}
else
{:error, reason} -> {:error, reason}
end
end
defp initialize_http_server(server_info) do
# For HTTP servers, we would make HTTP requests instead of using ports
# For now, return empty tools list as we need to implement HTTP client logic
Logger.warning("HTTP server support not fully implemented yet for #{server_info.name}")
{:ok, []}
rescue
e ->
{:error, "HTTP server initialization failed: #{Exception.message(e)}"}
end
defp get_server_tools(server_info) do
tools_request = %{
"jsonrpc" => "2.0",
"id" => 2,
"method" => "tools/list"
}
case send_server_request(server_info, tools_request) do
{:ok, %{"result" => %{"tools" => tools}}} ->
{:ok, tools}
{:ok, unexpected} ->
Logger.warning(
"Unexpected tools response from #{server_info.name}: #{inspect(unexpected)}"
)
{:ok, []}
{:error, reason} ->
{:error, reason}
end
end
defp send_server_request(server_info, request) do
request_json = Jason.encode!(request) <> "\n"
Port.command(server_info.port, request_json)
# Collect full response by reading multiple lines if needed
response_data = collect_response(server_info.port, "", 30_000)
case Jason.decode(response_data) do
{:ok, response} -> {:ok, response}
{:error, %Jason.DecodeError{} = error} ->
Logger.error("JSON decode error for server #{server_info.name}: #{Exception.message(error)}")
Logger.debug("Raw response data: #{inspect(response_data)}")
{:error, "JSON decode error: #{Exception.message(error)}"}
{:error, reason} ->
{:error, "JSON decode error: #{inspect(reason)}"}
end
end
defp collect_response(port, acc, timeout) do
receive do
{^port, {:data, {_eol, response_line}}} ->
# Accumulate the response line
new_acc = acc <> response_line
# Check if we have a complete JSON object
case Jason.decode(new_acc) do
{:ok, _} ->
# Successfully decoded, return the complete response
new_acc
{:error, _} ->
# Not complete yet, continue collecting
collect_response(port, new_acc, timeout)
end
{^port, {:exit_status, status}} ->
Logger.error("Server exited with status: #{status}")
acc
after
timeout ->
Logger.error("Server request timeout after #{timeout}ms")
acc
end
end
defp refresh_tool_registry(state) do
new_registry =
Enum.reduce(state.servers, %{}, fn {name, server_info}, acc ->
tools = Map.get(server_info, :tools, [])
Map.put(acc, name, tools)
end)
%{state | tool_registry: new_registry}
end
defp find_tool_server(tool_name, state) do
# Check Agent Coordinator tools first
if tool_name in get_coordinator_tool_names() do
{:coordinator, tool_name}
else
# Check external servers
case find_external_tool_server(tool_name, state.tool_registry) do
nil -> :not_found
server_name -> {:external, server_name}
end
end
end
defp find_external_tool_server(tool_name, tool_registry) do
Enum.find_value(tool_registry, fn {server_name, tools} ->
if Enum.any?(tools, fn tool -> tool["name"] == tool_name end) do
server_name
else
nil
end
end)
end
defp get_coordinator_tools do
[
%{
"name" => "register_agent",
"description" => "Register a new agent with the coordination system",
"inputSchema" => %{
"type" => "object",
"properties" => %{
"name" => %{"type" => "string"},
"capabilities" => %{
"type" => "array",
"items" => %{"type" => "string"}
}
},
"required" => ["name", "capabilities"]
}
},
%{
"name" => "create_task",
"description" => "Create a new task in the coordination system",
"inputSchema" => %{
"type" => "object",
"properties" => %{
"title" => %{"type" => "string"},
"description" => %{"type" => "string"},
"priority" => %{"type" => "string", "enum" => ["low", "normal", "high", "urgent"]},
"required_capabilities" => %{
"type" => "array",
"items" => %{"type" => "string"}
},
"file_paths" => %{
"type" => "array",
"items" => %{"type" => "string"}
}
},
"required" => ["title", "description"]
}
},
%{
"name" => "get_next_task",
"description" => "Get the next task for an agent",
"inputSchema" => %{
"type" => "object",
"properties" => %{
"agent_id" => %{"type" => "string"}
},
"required" => ["agent_id"]
}
},
%{
"name" => "complete_task",
"description" => "Mark current task as completed",
"inputSchema" => %{
"type" => "object",
"properties" => %{
"agent_id" => %{"type" => "string"}
},
"required" => ["agent_id"]
}
},
%{
"name" => "get_task_board",
"description" => "Get overview of all agents and their current tasks",
"inputSchema" => %{
"type" => "object",
"properties" => %{}
}
},
%{
"name" => "heartbeat",
"description" => "Send heartbeat to maintain agent status",
"inputSchema" => %{
"type" => "object",
"properties" => %{
"agent_id" => %{"type" => "string"}
},
"required" => ["agent_id"]
}
}
]
end
defp get_coordinator_tool_names do
~w[register_agent create_task get_next_task complete_task get_task_board heartbeat]
end
defp handle_coordinator_tool(tool_name, arguments, _agent_context) do
# Route to existing Agent Coordinator functionality
case tool_name do
"register_agent" ->
AgentCoordinator.TaskRegistry.register_agent(
arguments["name"],
arguments["capabilities"]
)
"create_task" ->
AgentCoordinator.TaskRegistry.create_task(
arguments["title"],
arguments["description"],
Map.take(arguments, ["priority", "required_capabilities", "file_paths"])
)
"get_next_task" ->
AgentCoordinator.TaskRegistry.get_next_task(arguments["agent_id"])
"complete_task" ->
AgentCoordinator.TaskRegistry.complete_task(arguments["agent_id"])
"get_task_board" ->
AgentCoordinator.TaskRegistry.get_task_board()
"heartbeat" ->
AgentCoordinator.TaskRegistry.heartbeat_agent(arguments["agent_id"])
_ ->
%{"error" => %{"code" => -32601, "message" => "Unknown coordinator tool: #{tool_name}"}}
end
end
defp call_external_tool(server_name, tool_name, arguments, agent_context, state) do
case Map.get(state.servers, server_name) do
nil ->
%{"error" => %{"code" => -32603, "message" => "Server not available: #{server_name}"}}
server_info ->
# Send heartbeat before tool call if agent context available
if agent_context && agent_context.agent_id do
AgentCoordinator.TaskRegistry.heartbeat_agent(agent_context.agent_id)
# Auto-create/update current task for this tool usage
update_current_task(agent_context.agent_id, tool_name, arguments)
end
# Make the actual tool call
tool_request = %{
"jsonrpc" => "2.0",
"id" => System.unique_integer([:positive]),
"method" => "tools/call",
"params" => %{
"name" => tool_name,
"arguments" => arguments
}
}
result =
case send_server_request(server_info, tool_request) do
{:ok, response} ->
# Send heartbeat after successful tool call
if agent_context && agent_context.agent_id do
AgentCoordinator.TaskRegistry.heartbeat_agent(agent_context.agent_id)
end
response
{:error, reason} ->
%{"error" => %{"code" => -32603, "message" => reason}}
end
result
end
end
defp update_current_task(agent_id, tool_name, arguments) do
# Create a descriptive task title based on the tool being used
task_title = generate_task_title(tool_name, arguments)
task_description = generate_task_description(tool_name, arguments)
# Check if agent has current task, if not create one
case AgentCoordinator.TaskRegistry.get_agent_current_task(agent_id) do
nil ->
# Create new auto-task
AgentCoordinator.TaskRegistry.create_task(
task_title,
task_description,
%{
priority: "normal",
auto_generated: true,
tool_name: tool_name,
assigned_agent: agent_id
}
)
# Auto-assign to this agent
case AgentCoordinator.TaskRegistry.get_next_task(agent_id) do
{:ok, _task} -> :ok
_ -> :ok
end
existing_task ->
# Update existing task with latest activity
AgentCoordinator.TaskRegistry.update_task_activity(
existing_task.id,
tool_name,
arguments
)
end
end
defp generate_task_title(tool_name, arguments) do
case tool_name do
"read_file" ->
"Reading file: #{Path.basename(arguments["path"] || "unknown")}"
"write_file" ->
"Writing file: #{Path.basename(arguments["path"] || "unknown")}"
"list_directory" ->
"Exploring directory: #{Path.basename(arguments["path"] || "unknown")}"
"mcp_context7_get-library-docs" ->
"Researching: #{arguments["context7CompatibleLibraryID"] || "library"}"
"mcp_figma_get_code" ->
"Generating Figma code: #{arguments["nodeId"] || "component"}"
"mcp_firebase_firestore_get_documents" ->
"Fetching Firestore documents"
"mcp_memory_search_nodes" ->
"Searching memory: #{arguments["query"] || "query"}"
"mcp_sequentialthi_sequentialthinking" ->
"Thinking through problem"
_ ->
"Using tool: #{tool_name}"
end
end
defp generate_task_description(tool_name, arguments) do
case tool_name do
"read_file" ->
"Reading and analyzing file content from #{arguments["path"]}"
"write_file" ->
"Creating or updating file at #{arguments["path"]}"
"list_directory" ->
"Exploring directory structure at #{arguments["path"]}"
"mcp_context7_get-library-docs" ->
"Researching documentation for #{arguments["context7CompatibleLibraryID"]} library"
"mcp_figma_get_code" ->
"Generating code for Figma component #{arguments["nodeId"]}"
"mcp_firebase_firestore_get_documents" ->
"Retrieving documents from Firestore: #{inspect(arguments["paths"])}"
"mcp_memory_search_nodes" ->
"Searching knowledge graph for: #{arguments["query"]}"
"mcp_sequentialthi_sequentialthinking" ->
"Using sequential thinking to solve complex problem"
_ ->
"Executing #{tool_name} with arguments: #{inspect(arguments)}"
end
end
defp should_auto_restart?(server_name, config) do
server_config = Map.get(config.servers, server_name, %{})
Map.get(server_config, :auto_restart, false)
end
end
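The `auto_restart` lookup above tolerates both a missing server entry and a missing key. A quick standalone sketch of that resolution order (the config shape here is illustrative, not taken verbatim from the repo):

```elixir
# Hypothetical manager config: one server opts in, one omits the key.
config = %{
  servers: %{
    "mcp_filesystem" => %{auto_restart: true},
    "mcp_figma" => %{}
  }
}

should_auto_restart? = fn server_name ->
  config.servers
  |> Map.get(server_name, %{})
  |> Map.get(:auto_restart, false)
end

should_auto_restart?.("mcp_filesystem")  # true
should_auto_restart?.("mcp_figma")       # false (key absent, default applies)
should_auto_restart?.("unknown")         # false (server absent entirely)
```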


@@ -5,7 +5,6 @@ defmodule AgentCoordinator.Persistence do
"""
use GenServer
alias AgentCoordinator.{Task, Agent}
defstruct [
:nats_conn,
@@ -15,12 +14,15 @@ defmodule AgentCoordinator.Persistence do
@stream_config %{
"name" => "AGENT_COORDINATION",
- "subjects" => ["agent.*", "task.*"],
+ "subjects" => ["agent.>", "task.>", "codebase.>", "cross-codebase.>"],
"storage" => "file",
- "max_msgs" => 1_000_000,
- "max_bytes" => 1_000_000_000, # 1GB
- "max_age" => 7 * 24 * 60 * 60 * 1_000_000_000, # 7 days in nanoseconds
- "max_msg_size" => 1_000_000, # 1MB
+ "max_msgs" => 10_000_000,
+ # 10GB
+ "max_bytes" => 10_000_000_000,
+ # 30 days in nanoseconds
+ "max_age" => 30 * 24 * 60 * 60 * 1_000_000_000,
+ # 1MB
+ "max_msg_size" => 1_000_000,
"retention" => "limits",
"discard" => "old"
}
@@ -57,10 +59,23 @@ defmodule AgentCoordinator.Persistence do
nats_config = Keyword.get(opts, :nats, [])
retention_policy = Keyword.get(opts, :retention_policy, :default)
- {:ok, nats_conn} = Gnat.start_link(nats_config)
+ # Only connect to NATS if config is provided
+ nats_conn =
+ case nats_config do
+ [] ->
+ nil
+ config ->
+ case Gnat.start_link(config) do
+ {:ok, conn} -> conn
+ {:error, _reason} -> nil
+ end
+ end
- # Create or update JetStream
- create_or_update_stream(nats_conn)
+ # Only create stream if we have a connection
+ if nats_conn do
+ create_or_update_stream(nats_conn)
+ end
state = %__MODULE__{
nats_conn: nats_conn,
@@ -75,29 +90,45 @@ defmodule AgentCoordinator.Persistence do
enriched_data = enrich_event_data(data)
message = Jason.encode!(enriched_data)
- # Publish to JetStream
- case Gnat.pub(state.nats_conn, subject, message, headers: event_headers()) do
- :ok -> :ok
- {:error, reason} ->
- IO.puts("Failed to store event: #{inspect(reason)}")
- end
+ # Only publish if we have a NATS connection
+ if state.nats_conn do
+ case Gnat.pub(state.nats_conn, subject, message, headers: event_headers()) do
+ :ok ->
+ :ok
+ {:error, reason} ->
+ IO.puts("Failed to store event: #{inspect(reason)}")
+ end
+ end
{:noreply, state}
end
def handle_call({:get_agent_history, agent_id, opts}, _from, state) do
case state.nats_conn do
nil ->
{:reply, [], state}
conn ->
subject_filter = "agent.*.#{agent_id}"
limit = Keyword.get(opts, :limit, 100)
- events = fetch_events(state.nats_conn, subject_filter, limit)
+ events = fetch_events(conn, subject_filter, limit)
{:reply, events, state}
end
end
def handle_call({:get_task_history, task_id, opts}, _from, state) do
case state.nats_conn do
nil ->
{:reply, [], state}
conn ->
subject_filter = "task.*"
limit = Keyword.get(opts, :limit, 100)
- events = fetch_events(state.nats_conn, subject_filter, limit)
+ events =
+ fetch_events(conn, subject_filter, limit)
|> Enum.filter(fn event ->
case Map.get(event, "task") do
%{"id" => ^task_id} -> true
@@ -107,17 +138,29 @@ defmodule AgentCoordinator.Persistence do
{:reply, events, state}
end
end
def handle_call({:replay_events, subject_filter, opts}, _from, state) do
case state.nats_conn do
nil ->
{:reply, [], state}
conn ->
limit = Keyword.get(opts, :limit, 1000)
start_time = Keyword.get(opts, :start_time)
- events = fetch_events(state.nats_conn, subject_filter, limit, start_time)
+ events = fetch_events(conn, subject_filter, limit, start_time)
{:reply, events, state}
end
end
def handle_call(:get_system_stats, _from, state) do
- stats = get_stream_info(state.nats_conn, state.stream_name)
+ stats =
+ case state.nats_conn do
+ nil -> %{connected: false}
+ conn -> get_stream_info(conn, state.stream_name) || %{connected: true}
+ end
{:reply, stats, state}
end
@@ -160,7 +203,7 @@ defmodule AgentCoordinator.Persistence do
end
end
- defp update_stream(conn, config) do
+ defp update_stream(_conn, _config) do
# For simplicity, we'll just ensure the stream exists
# In production, you might want more sophisticated update logic
:ok
@@ -174,13 +217,14 @@ defmodule AgentCoordinator.Persistence do
info -> info
end
- {:error, _} -> nil
+ {:error, _} ->
+ nil
end
end
- defp fetch_events(conn, subject_filter, limit, start_time \\ nil) do
+ defp fetch_events(_conn, _subject_filter, _limit, start_time \\ nil) do
# Create a consumer to fetch messages
- consumer_config = %{
+ _consumer_config = %{
"durable_name" => "temp_#{:rand.uniform(10000)}",
"deliver_policy" => if(start_time, do: "by_start_time", else: "all"),
"opt_start_time" => start_time,
@@ -190,7 +234,8 @@ defmodule AgentCoordinator.Persistence do
# This is a simplified implementation
# In production, you'd use proper JetStream consumer APIs
- [] # Return empty for now - would implement full JetStream integration
+ # Return empty for now - would implement full JetStream integration
+ []
end
defp enrich_event_data(data) do


@@ -3,6 +3,22 @@ defmodule AgentCoordinator.Task do
Task data structure for agent coordination system.
"""
@derive {Jason.Encoder,
only: [
:id,
:title,
:description,
:status,
:priority,
:agent_id,
:codebase_id,
:file_paths,
:dependencies,
:cross_codebase_dependencies,
:created_at,
:updated_at,
:metadata
]}
defstruct [
:id,
:title,
@@ -10,8 +26,10 @@ defmodule AgentCoordinator.Task do
:status,
:priority,
:agent_id,
:codebase_id,
:file_paths,
:dependencies,
:cross_codebase_dependencies,
:created_at,
:updated_at,
:metadata
@@ -27,8 +45,10 @@ defmodule AgentCoordinator.Task do
status: status(),
priority: priority(),
agent_id: String.t() | nil,
codebase_id: String.t(),
file_paths: [String.t()],
dependencies: [String.t()],
cross_codebase_dependencies: [%{codebase_id: String.t(), task_id: String.t()}],
created_at: DateTime.t(),
updated_at: DateTime.t(),
metadata: map()
@@ -37,18 +57,28 @@ defmodule AgentCoordinator.Task do
def new(title, description, opts \\ []) do
now = DateTime.utc_now()
# Handle both keyword lists and maps
get_opt = fn key, default ->
case opts do
opts when is_map(opts) -> Map.get(opts, key, default)
opts when is_list(opts) -> Keyword.get(opts, key, default)
end
end
%__MODULE__{
id: UUID.uuid4(),
title: title,
description: description,
- status: Keyword.get(opts, :status, :pending),
- priority: Keyword.get(opts, :priority, :normal),
- agent_id: Keyword.get(opts, :agent_id),
- file_paths: Keyword.get(opts, :file_paths, []),
- dependencies: Keyword.get(opts, :dependencies, []),
+ status: get_opt.(:status, :pending),
+ priority: get_opt.(:priority, :normal),
+ agent_id: get_opt.(:agent_id, nil),
+ codebase_id: get_opt.(:codebase_id, "default"),
+ file_paths: get_opt.(:file_paths, []),
+ dependencies: get_opt.(:dependencies, []),
+ cross_codebase_dependencies: get_opt.(:cross_codebase_dependencies, []),
created_at: now,
updated_at: now,
- metadata: Keyword.get(opts, :metadata, %{})
+ metadata: get_opt.(:metadata, %{})
}
end
@@ -71,6 +101,18 @@ defmodule AgentCoordinator.Task do
end
def has_file_conflict?(task1, task2) do
# Only check conflicts within the same codebase
task1.codebase_id == task2.codebase_id and
not MapSet.disjoint?(MapSet.new(task1.file_paths), MapSet.new(task2.file_paths))
end
def is_cross_codebase?(task) do
not Enum.empty?(task.cross_codebase_dependencies)
end
def add_cross_codebase_dependency(task, codebase_id, task_id) do
dependency = %{codebase_id: codebase_id, task_id: task_id}
dependencies = [dependency | task.cross_codebase_dependencies]
%{task | cross_codebase_dependencies: dependencies, updated_at: DateTime.utc_now()}
end
end
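The `get_opt` closure is what lets callers pass options as either a map or a keyword list, which is the crash the commit message describes. A quick sketch of both call styles, assuming the module compiles as shown:

```elixir
# Both option forms now produce equivalent structs:
t1 = AgentCoordinator.Task.new("Fix bug", "Repair inbox creation", priority: :high)
t2 = AgentCoordinator.Task.new("Fix bug", "Repair inbox creation", %{priority: :high})

t1.priority    # :high (keyword list path via Keyword.get)
t2.priority    # :high (map path via Map.get)
t2.codebase_id # "default" when not supplied
```

Previously the map form raised, because `Keyword.get/3` expects a list; `get_opt` dispatches on the shape of `opts` instead.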


@@ -1,15 +1,19 @@
defmodule AgentCoordinator.TaskRegistry do
@moduledoc """
Central registry for agents and task assignment with NATS integration.
Enhanced to support multi-codebase coordination and cross-codebase task management.
"""
use GenServer
require Logger
alias AgentCoordinator.{Agent, Task, Inbox}
defstruct [
:agents,
:pending_tasks,
:file_locks,
:codebase_file_locks,
:cross_codebase_tasks,
:nats_conn
]
@@ -39,25 +43,75 @@ defmodule AgentCoordinator.TaskRegistry do
GenServer.call(__MODULE__, {:heartbeat_agent, agent_id})
end
def unregister_agent(agent_id, reason \\ "Agent requested unregistration") do
GenServer.call(__MODULE__, {:unregister_agent, agent_id, reason})
end
def get_file_locks do
GenServer.call(__MODULE__, :get_file_locks)
end
def get_agent_current_task(agent_id) do
GenServer.call(__MODULE__, {:get_agent_current_task, agent_id})
end
def update_task_activity(task_id, tool_name, arguments) do
GenServer.call(__MODULE__, {:update_task_activity, task_id, tool_name, arguments})
end
def create_task(title, description, opts \\ %{}) do
GenServer.call(__MODULE__, {:create_task, title, description, opts})
end
def get_next_task(agent_id) do
GenServer.call(__MODULE__, {:get_next_task, agent_id})
end
def complete_task(agent_id) do
GenServer.call(__MODULE__, {:complete_task, agent_id})
end
def get_task_board do
GenServer.call(__MODULE__, :get_task_board)
end
def register_agent(name, capabilities) do
agent = Agent.new(name, capabilities)
GenServer.call(__MODULE__, {:register_agent, agent})
end
# Server callbacks
def init(opts) do
- # Connect to NATS
+ # Connect to NATS if config provided
nats_config = Keyword.get(opts, :nats, [])
- {:ok, nats_conn} = Gnat.start_link(nats_config)
- Gnat.sub(nats_conn, self(), "agent.task.*")
- Gnat.sub(nats_conn, self(), "agent.heartbeat.*")
+ nats_conn =
+ case nats_config do
+ [] ->
+ nil
+ config ->
+ case Gnat.start_link(config) do
+ {:ok, conn} ->
+ # Subscribe to task events
+ Gnat.sub(conn, self(), "agent.task.*")
+ Gnat.sub(conn, self(), "agent.heartbeat.*")
Gnat.sub(conn, self(), "codebase.>")
Gnat.sub(conn, self(), "cross-codebase.>")
conn
{:error, _reason} ->
nil
end
end
state = %__MODULE__{
agents: %{},
pending_tasks: [],
file_locks: %{},
codebase_file_locks: %{},
cross_codebase_tasks: %{},
nats_conn: nats_conn
}
@@ -71,11 +125,28 @@ defmodule AgentCoordinator.TaskRegistry do
new_agents = Map.put(state.agents, agent.id, agent)
new_state = %{state | agents: new_agents}
- # Publish agent registration
- publish_event(state.nats_conn, "agent.registered", %{agent: agent})
# Create inbox for the agent
case DynamicSupervisor.start_child(
AgentCoordinator.InboxSupervisor,
{Inbox, agent.id}
) do
{:ok, _pid} ->
Logger.info("Created inbox for agent #{agent.id}")
{:error, {:already_started, _pid}} ->
Logger.info("Inbox already exists for agent #{agent.id}")
{:error, reason} ->
Logger.warning("Failed to create inbox for agent #{agent.id}: #{inspect(reason)}")
end
# Publish agent registration with codebase info
if state.nats_conn do
publish_event(state.nats_conn, "agent.registered.#{agent.codebase_id}", %{agent: agent})
end
# Try to assign pending tasks
- {assigned_tasks, remaining_pending} = assign_pending_tasks(new_state)
+ {_assigned_tasks, remaining_pending} = assign_pending_tasks(new_state)
final_state = %{new_state | pending_tasks: remaining_pending}
{:reply, :ok, final_state}
@@ -91,7 +162,7 @@ defmodule AgentCoordinator.TaskRegistry do
{:reply, {:error, :no_available_agents}, state}
agent ->
- # Check for file conflicts
+ # Check for file conflicts within the same codebase
case check_file_conflicts(state, task) do
[] ->
# No conflicts, assign task
@@ -102,10 +173,12 @@ defmodule AgentCoordinator.TaskRegistry do
blocked_task = Task.block(task, "File conflicts: #{inspect(conflicts)}")
new_pending = [blocked_task | state.pending_tasks]
- publish_event(state.nats_conn, "task.blocked", %{
+ if state.nats_conn do
+ publish_event(state.nats_conn, "task.blocked.#{task.codebase_id}", %{
task: blocked_task,
conflicts: conflicts
})
end
{:reply, {:error, :file_conflicts}, %{state | pending_tasks: new_pending}}
end
@@ -114,7 +187,11 @@ defmodule AgentCoordinator.TaskRegistry do
def handle_call({:add_to_pending, task}, _from, state) do
new_pending = [task | state.pending_tasks]
- publish_event(state.nats_conn, "task.queued", %{task: task})
+ if state.nats_conn do
+ publish_event(state.nats_conn, "task.queued.#{task.codebase_id}", %{task: task})
end
{:reply, :ok, %{state | pending_tasks: new_pending}}
end
@@ -133,71 +210,353 @@ defmodule AgentCoordinator.TaskRegistry do
new_agents = Map.put(state.agents, agent_id, updated_agent)
new_state = %{state | agents: new_agents}
- publish_event(state.nats_conn, "agent.heartbeat", %{agent_id: agent_id})
+ if state.nats_conn do
+ publish_event(state.nats_conn, "agent.heartbeat.#{agent_id}", %{
agent_id: agent_id,
codebase_id: updated_agent.codebase_id
})
end
{:reply, :ok, new_state}
end
end
def handle_call({:unregister_agent, agent_id, reason}, _from, state) do
case Map.get(state.agents, agent_id) do
nil ->
{:reply, {:error, :agent_not_found}, state}
agent ->
# Check if agent has current tasks
case agent.current_task_id do
nil ->
# Agent is idle, safe to unregister
unregister_agent_safely(state, agent_id, agent, reason)
task_id ->
# Agent has active task, handle accordingly
case Map.get(state, :allow_force_unregister, false) do
true ->
# Force unregister, reassign task to pending
unregister_agent_with_task_reassignment(state, agent_id, agent, task_id, reason)
false ->
{:reply,
{:error,
"Agent has active task #{task_id}. Complete task first or use force unregister."},
state}
end
end
end
end
def handle_call(:get_file_locks, _from, state) do
- {:reply, state.file_locks, state}
+ {:reply, state.codebase_file_locks || %{}, state}
end
def handle_call({:get_agent_current_task, agent_id}, _from, state) do
case Map.get(state.agents, agent_id) do
nil ->
{:reply, nil, state}
agent ->
case agent.current_task_id do
nil ->
{:reply, nil, state}
task_id ->
# Get task details from inbox or pending tasks
task = find_task_by_id(state, task_id)
{:reply, task, state}
end
end
end
def handle_call({:update_task_activity, task_id, tool_name, arguments}, _from, state) do
# Update task with latest activity
# This could store activity logs or update task metadata
if state.nats_conn do
publish_event(state.nats_conn, "task.activity_updated", %{
task_id: task_id,
tool_name: tool_name,
arguments: arguments,
timestamp: DateTime.utc_now()
})
end
{:reply, :ok, state}
end
def handle_call({:create_task, title, description, opts}, _from, state) do
task = Task.new(title, description, opts)
# Add to pending tasks
new_pending = [task | state.pending_tasks]
new_state = %{state | pending_tasks: new_pending}
# Try to assign immediately
case find_available_agent(new_state, task) do
nil ->
if state.nats_conn do
publish_event(state.nats_conn, "task.created", %{task: task})
end
{:reply, {:ok, task}, new_state}
agent ->
case check_file_conflicts(new_state, task) do
[] ->
# Assign immediately
case assign_task_to_agent(new_state, task, agent.id) do
{:reply, {:ok, _agent_id}, final_state} ->
# Remove from pending since it was assigned
final_state = %{final_state | pending_tasks: state.pending_tasks}
{:reply, {:ok, task}, final_state}
error ->
error
end
_conflicts ->
# Keep in pending due to conflicts
{:reply, {:ok, task}, new_state}
end
end
end
def handle_call({:get_next_task, agent_id}, _from, state) do
case Map.get(state.agents, agent_id) do
nil ->
{:reply, {:error, :agent_not_found}, state}
agent ->
# First ensure the agent's inbox exists
case ensure_inbox_started(agent_id) do
:ok ->
case Inbox.get_next_task(agent_id) do
nil ->
{:reply, {:error, :no_tasks}, state}
task ->
# Update agent status
updated_agent = Agent.assign_task(agent, task.id)
new_agents = Map.put(state.agents, agent_id, updated_agent)
new_state = %{state | agents: new_agents}
if state.nats_conn do
publish_event(state.nats_conn, "task.started", %{
task: task,
agent_id: agent_id
})
end
{:reply, {:ok, task}, new_state}
end
{:error, reason} ->
{:reply, {:error, reason}, state}
end
end
end
def handle_call({:complete_task, agent_id}, _from, state) do
case Map.get(state.agents, agent_id) do
nil ->
{:reply, {:error, :agent_not_found}, state}
agent ->
case agent.current_task_id do
nil ->
{:reply, {:error, :no_current_task}, state}
task_id ->
# Mark task as completed in inbox
case Inbox.complete_current_task(agent_id) do
task when is_map(task) ->
# Update agent status back to idle
updated_agent = Agent.complete_task(agent)
new_agents = Map.put(state.agents, agent_id, updated_agent)
new_state = %{state | agents: new_agents}
if state.nats_conn do
publish_event(state.nats_conn, "task.completed", %{
task_id: task_id,
agent_id: agent_id
})
end
# Try to assign pending tasks
{_assigned, remaining_pending} = assign_pending_tasks(new_state)
final_state = %{new_state | pending_tasks: remaining_pending}
{:reply, :ok, final_state}
{:error, reason} ->
{:reply, {:error, reason}, state}
end
end
end
end
def handle_call(:get_task_board, _from, state) do
agents_info =
Enum.map(state.agents, fn {_id, agent} ->
current_task =
case agent.current_task_id do
nil -> nil
task_id -> find_task_by_id(state, task_id)
end
%{
agent_id: agent.id,
name: agent.name,
status: agent.status,
capabilities: agent.capabilities,
current_task: current_task,
last_heartbeat: agent.last_heartbeat,
online: Agent.is_online?(agent)
}
end)
task_board = %{
agents: agents_info,
pending_tasks: state.pending_tasks,
total_agents: map_size(state.agents),
active_tasks: Enum.count(state.agents, fn {_id, agent} -> agent.current_task_id != nil end),
pending_count: length(state.pending_tasks)
}
{:reply, task_board, state}
end
# Handle NATS messages
def handle_info({:msg, %{topic: "agent.task.started", body: body}}, state) do
- %{"task" => task_data} = Jason.decode!(body)
+ %{"task" => task_data, "codebase_id" => codebase_id} = Jason.decode!(body)
- # Update file locks
- file_locks = add_file_locks(state.file_locks, task_data["id"], task_data["file_paths"])
+ # Update codebase-specific file locks
+ codebase_file_locks =
+ add_file_locks(
+ state.codebase_file_locks,
+ codebase_id,
+ task_data["id"],
+ task_data["file_paths"]
+ )
- {:noreply, %{state | file_locks: file_locks}}
+ {:noreply, %{state | codebase_file_locks: codebase_file_locks}}
end
def handle_info({:msg, %{topic: "agent.task.completed", body: body}}, state) do
- %{"task" => task_data} = Jason.decode!(body)
+ %{"task" => task_data, "codebase_id" => codebase_id} = Jason.decode!(body)
- # Remove file locks
- file_locks = remove_file_locks(state.file_locks, task_data["id"])
+ # Remove codebase-specific file locks
+ codebase_file_locks =
+ remove_file_locks(
+ state.codebase_file_locks,
+ codebase_id,
+ task_data["id"]
+ )
# Try to assign pending tasks that might now be unblocked
- {_assigned, remaining_pending} = assign_pending_tasks(%{state | file_locks: file_locks})
+ {_assigned, remaining_pending} =
+ assign_pending_tasks(%{state | codebase_file_locks: codebase_file_locks})
- {:noreply, %{state | file_locks: file_locks, pending_tasks: remaining_pending}}
+ {:noreply,
+ %{state | codebase_file_locks: codebase_file_locks, pending_tasks: remaining_pending}}
end
- def handle_info({:msg, %{topic: topic}}, state) when topic != "agent.task.started" and topic != "agent.task.completed" do
def handle_info({:msg, %{topic: "cross-codebase.task.created", body: body}}, state) do
%{"main_task_id" => main_task_id, "dependent_tasks" => dependent_tasks} = Jason.decode!(body)
# Track cross-codebase task relationship
cross_codebase_tasks = Map.put(state.cross_codebase_tasks, main_task_id, dependent_tasks)
{:noreply, %{state | cross_codebase_tasks: cross_codebase_tasks}}
end
def handle_info({:msg, %{topic: "codebase.agent.registered", body: body}}, state) do
# Handle cross-codebase agent registration notifications
%{"agent" => _agent_data} = Jason.decode!(body)
# Could trigger reassignment of pending cross-codebase tasks
{:noreply, state}
end
def handle_info({:msg, %{topic: topic}}, state)
when topic != "agent.task.started" and
topic != "agent.task.completed" and
topic != "cross-codebase.task.created" and
topic != "codebase.agent.registered" do
# Ignore other messages for now
{:noreply, state}
end
# Private helpers
defp ensure_inbox_started(agent_id) do
case Registry.lookup(AgentCoordinator.InboxRegistry, agent_id) do
[{_pid, _}] ->
:ok
[] ->
# Start the inbox for this agent
case DynamicSupervisor.start_child(
AgentCoordinator.InboxSupervisor,
{Inbox, agent_id}
) do
{:ok, _pid} -> :ok
{:error, {:already_started, _pid}} -> :ok
{:error, reason} -> {:error, reason}
end
end
end
defp find_available_agent(state, task) do
state.agents
|> Map.values()
|> Enum.filter(fn agent ->
agent.codebase_id == task.codebase_id and
agent.status == :idle and
Agent.is_online?(agent) and
Agent.can_handle?(agent, task)
end)
|> Enum.sort_by(fn agent ->
- # Prefer agents with fewer pending tasks
+ # Prefer agents with fewer pending tasks and same codebase
codebase_match = if agent.codebase_id == task.codebase_id, do: 0, else: 1
pending_count =
case Registry.lookup(AgentCoordinator.InboxRegistry, agent.id) do
[{_pid, _}] ->
try do
case Inbox.get_status(agent.id) do
%{pending_count: count} -> count
- _ -> 999
+ _ -> 0
end
catch
:exit, _ -> 0
end
[] ->
# No inbox process exists, treat as 0 pending tasks
0
end
{codebase_match, pending_count}
end)
|> List.first()
end
defp check_file_conflicts(state, task) do
# Get codebase-specific file locks
codebase_locks = Map.get(state.codebase_file_locks, task.codebase_id, %{})
task.file_paths
|> Enum.filter(fn file_path ->
- Map.has_key?(state.file_locks, file_path)
+ Map.has_key?(codebase_locks, file_path)
end)
end
defp assign_task_to_agent(state, task, agent_id) do
# Ensure inbox exists for the agent
ensure_inbox_exists(agent_id)
# Add to agent's inbox
Inbox.add_task(agent_id, task)
@@ -206,17 +565,20 @@ defmodule AgentCoordinator.TaskRegistry do
updated_agent = Agent.assign_task(agent, task.id)
new_agents = Map.put(state.agents, agent_id, updated_agent)
- # Publish assignment
- publish_event(state.nats_conn, "task.assigned", %{
+ # Publish assignment with codebase context
+ if state.nats_conn do
+ publish_event(state.nats_conn, "task.assigned.#{task.codebase_id}", %{
task: task,
agent_id: agent_id
})
end
{:reply, {:ok, agent_id}, %{state | agents: new_agents}}
end
defp assign_pending_tasks(state) do
- {assigned, remaining} = Enum.reduce(state.pending_tasks, {[], []}, fn task, {assigned, pending} ->
+ {assigned, remaining} =
+ Enum.reduce(state.pending_tasks, {[], []}, fn task, {assigned, pending} ->
case find_available_agent(state, task) do
nil ->
{assigned, [task | pending]}
@@ -224,6 +586,8 @@ defmodule AgentCoordinator.TaskRegistry do
agent ->
case check_file_conflicts(state, task) do
[] ->
# Ensure inbox exists for the agent
ensure_inbox_exists(agent.id)
Inbox.add_task(agent.id, task)
{[{task, agent.id} | assigned], pending}
@@ -236,21 +600,149 @@ defmodule AgentCoordinator.TaskRegistry do
{assigned, Enum.reverse(remaining)}
end
- defp add_file_locks(file_locks, task_id, file_paths) do
- Enum.reduce(file_paths, file_locks, fn path, locks ->
+ defp add_file_locks(codebase_file_locks, codebase_id, task_id, file_paths) do
+ codebase_locks = Map.get(codebase_file_locks, codebase_id, %{})
+ updated_locks =
+ Enum.reduce(file_paths, codebase_locks, fn path, locks ->
Map.put(locks, path, task_id)
end)
Map.put(codebase_file_locks, codebase_id, updated_locks)
end
- defp remove_file_locks(file_locks, task_id) do
- Enum.reject(file_locks, fn {_path, locked_task_id} ->
+ defp remove_file_locks(codebase_file_locks, codebase_id, task_id) do
case Map.get(codebase_file_locks, codebase_id) do
nil ->
codebase_file_locks
codebase_locks ->
updated_locks =
Enum.reject(codebase_locks, fn {_path, locked_task_id} ->
locked_task_id == task_id
end)
|> Map.new()
Map.put(codebase_file_locks, codebase_id, updated_locks)
end
end
defp find_task_by_id(state, task_id) do
# Look for task in pending tasks first
case Enum.find(state.pending_tasks, fn task -> task.id == task_id end) do
nil ->
# Try to find in agent inboxes - for now return nil
# TODO: Implement proper task lookup in Inbox module
nil
task ->
task
end
end
defp publish_event(conn, topic, data) do
if conn do
message = Jason.encode!(data)
Gnat.pub(conn, topic, message)
end
end
# Agent unregistration helpers
defp unregister_agent_safely(state, agent_id, agent, reason) do
# Remove agent from registry
new_agents = Map.delete(state.agents, agent_id)
new_state = %{state | agents: new_agents}
# Stop the agent's inbox if it exists
case Inbox.stop(agent_id) do
:ok -> :ok
# Inbox already stopped
{:error, :not_found} -> :ok
# Continue regardless
_ -> :ok
end
# Publish unregistration event
if state.nats_conn do
publish_event(state.nats_conn, "agent.unregistered", %{
agent_id: agent_id,
agent_name: agent.name,
codebase_id: agent.codebase_id,
reason: reason,
timestamp: DateTime.utc_now()
})
end
{:reply, :ok, new_state}
end
defp unregister_agent_with_task_reassignment(state, agent_id, agent, task_id, reason) do
# Get the current task from inbox
case Inbox.get_current_task(agent_id) do
nil ->
# No actual task, treat as safe unregister
unregister_agent_safely(state, agent_id, agent, reason)
task ->
# Reassign task to pending queue
new_pending = [task | state.pending_tasks]
# Remove agent
new_agents = Map.delete(state.agents, agent_id)
new_state = %{state | agents: new_agents, pending_tasks: new_pending}
# Stop the agent's inbox
Inbox.stop(agent_id)
# Publish events
if state.nats_conn do
publish_event(state.nats_conn, "agent.unregistered.with_reassignment", %{
agent_id: agent_id,
agent_name: agent.name,
codebase_id: agent.codebase_id,
reason: reason,
reassigned_task_id: task_id,
timestamp: DateTime.utc_now()
})
publish_event(state.nats_conn, "task.reassigned", %{
task_id: task_id,
from_agent_id: agent_id,
to_queue: "pending",
reason: "Agent unregistered: #{reason}"
})
end
{:reply, :ok, new_state}
end
end
# Helper function to ensure an inbox exists for an agent
defp ensure_inbox_exists(agent_id) do
case Registry.lookup(AgentCoordinator.InboxRegistry, agent_id) do
[] ->
# No inbox exists, create one
case DynamicSupervisor.start_child(
AgentCoordinator.InboxSupervisor,
{Inbox, agent_id}
) do
{:ok, _pid} ->
Logger.info("Created inbox for agent #{agent_id}")
:ok
{:error, {:already_started, _pid}} ->
Logger.info("Inbox already exists for agent #{agent_id}")
:ok
{:error, reason} ->
Logger.warning("Failed to create inbox for agent #{agent_id}: #{inspect(reason)}")
{:error, reason}
end
[{_pid, _}] ->
# Inbox already exists
:ok
end
end
end
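The lookup-then-start pattern in `ensure_inbox_exists/1` (and `ensure_inbox_started/1`) is a standard Registry + DynamicSupervisor idiom for idempotently starting per-key processes. A minimal standalone sketch, with hypothetical module and argument names:

```elixir
defmodule InboxDemo do
  # Idempotently ensure a per-agent process is running: consult the
  # Registry first, then fall back to starting under the supervisor,
  # treating a lost race (:already_started) as success.
  def ensure_started(registry, supervisor, child_module, agent_id) do
    case Registry.lookup(registry, agent_id) do
      [{_pid, _}] ->
        :ok

      [] ->
        case DynamicSupervisor.start_child(supervisor, {child_module, agent_id}) do
          {:ok, _pid} -> :ok
          {:error, {:already_started, _pid}} -> :ok
          {:error, reason} -> {:error, reason}
        end
    end
  end
end
```

Handling `{:error, {:already_started, _pid}}` matters because two callers can both see `[]` from the Registry before either finishes starting the child; without that clause the loser of the race would fail.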


@@ -0,0 +1,251 @@
defmodule AgentCoordinator.UnifiedMCPServer do
@moduledoc """
Unified MCP Server that aggregates all external MCP servers and Agent Coordinator tools.
This is the single MCP server that GitHub Copilot sees, which internally manages
all other MCP servers and provides automatic task tracking for any tool usage.
"""
use GenServer
require Logger
alias AgentCoordinator.{MCPServerManager, TaskRegistry}
defstruct [
:agent_sessions,
:request_id_counter
]
# Client API
def start_link(opts \\ []) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
@doc """
Handle MCP request from GitHub Copilot
"""
def handle_mcp_request(request) do
GenServer.call(__MODULE__, {:handle_request, request})
end
# Server callbacks
def init(_opts) do
state = %__MODULE__{
agent_sessions: %{},
request_id_counter: 0
}
Logger.info("Unified MCP Server starting...")
{:ok, state}
end
def handle_call({:handle_request, request}, _from, state) do
response = process_mcp_request(request, state)
{:reply, response, state}
end
def handle_call({:register_agent_session, agent_id, session_info}, _from, state) do
new_state = %{state | agent_sessions: Map.put(state.agent_sessions, agent_id, session_info)}
{:reply, :ok, new_state}
end
def handle_info(_msg, state) do
{:noreply, state}
end
# Private functions
defp process_mcp_request(request, state) do
method = Map.get(request, "method")
id = Map.get(request, "id")
case method do
"initialize" ->
handle_initialize(request, id)
"tools/list" ->
handle_tools_list(request, id)
"tools/call" ->
handle_tools_call(request, id, state)
_ ->
error_response(id, -32601, "Method not found: #{method}")
end
end
defp handle_initialize(_request, id) do
%{
"jsonrpc" => "2.0",
"id" => id,
"result" => %{
"protocolVersion" => "2024-11-05",
"capabilities" => %{
"tools" => %{},
"coordination" => %{
"automatic_task_tracking" => true,
"agent_management" => true,
"multi_server_proxy" => true,
"heartbeat_coverage" => true
}
},
"serverInfo" => %{
"name" => "agent-coordinator-unified",
"version" => "0.1.0",
"description" =>
"Unified MCP server with automatic task tracking and agent coordination"
}
}
}
end
defp handle_tools_list(_request, id) do
case MCPServerManager.get_unified_tools() do
tools when is_list(tools) ->
%{
"jsonrpc" => "2.0",
"id" => id,
"result" => %{
"tools" => tools
}
}
{:error, reason} ->
error_response(id, -32603, "Failed to get tools: #{reason}")
end
end
defp handle_tools_call(request, id, state) do
params = Map.get(request, "params", %{})
tool_name = Map.get(params, "name")
arguments = Map.get(params, "arguments", %{})
# Determine agent context from the request or session
agent_context = determine_agent_context(request, arguments, state)
case MCPServerManager.route_tool_call(tool_name, arguments, agent_context) do
%{"error" => _} = error_result ->
Map.put(error_result, "id", id)
result ->
# Wrap successful results in MCP format
success_response = %{
"jsonrpc" => "2.0",
"id" => id,
"result" => format_tool_result(result, tool_name, agent_context)
}
success_response
end
end
defp determine_agent_context(request, arguments, state) do
# Try to determine agent from various sources:
# 1. Explicit agent_id in arguments
case Map.get(arguments, "agent_id") do
agent_id when is_binary(agent_id) ->
%{agent_id: agent_id}
_ ->
# 2. Try to extract from request metadata
case extract_agent_from_request(request) do
agent_id when is_binary(agent_id) ->
%{agent_id: agent_id}
_ ->
# 3. Use a default session for GitHub Copilot
default_agent_context(state)
end
end
end
defp extract_agent_from_request(_request) do
# Look for agent info in request headers, params, etc.
# This could be extended to support various ways of identifying the agent
nil
end
defp default_agent_context(state) do
# Create or use a default agent session for GitHub Copilot
default_agent_id = "github_copilot_session"
case Map.get(state.agent_sessions, default_agent_id) do
nil ->
# Auto-register GitHub Copilot as an agent
case TaskRegistry.register_agent("GitHub Copilot", [
"coding",
"analysis",
"review",
"documentation"
]) do
{:ok, %{agent_id: agent_id}} ->
session_info = %{
agent_id: agent_id,
name: "GitHub Copilot",
auto_registered: true,
created_at: DateTime.utc_now()
}
GenServer.call(self(), {:register_agent_session, agent_id, session_info})
%{agent_id: agent_id}
_ ->
%{agent_id: default_agent_id}
end
session_info ->
%{agent_id: session_info.agent_id}
end
end
defp format_tool_result(result, tool_name, agent_context) do
# Format the result according to MCP tool call response format
base_result =
case result do
%{"result" => content} when is_map(content) ->
# Already properly formatted
content
{:ok, content} ->
# Convert tuple response to content
%{"content" => [%{"type" => "text", "text" => inspect(content)}]}
%{} = map_result ->
# Convert map to text content
%{"content" => [%{"type" => "text", "text" => Jason.encode!(map_result)}]}
binary when is_binary(binary) ->
# Simple text result
%{"content" => [%{"type" => "text", "text" => binary}]}
other ->
# Fallback for any other type
%{"content" => [%{"type" => "text", "text" => inspect(other)}]}
end
# Add metadata about the operation
metadata = %{
"tool_name" => tool_name,
"agent_id" => agent_context.agent_id,
"timestamp" => DateTime.utc_now() |> DateTime.to_iso8601(),
"auto_tracked" => true
}
Map.put(base_result, "_metadata", metadata)
end
defp error_response(id, code, message) do
%{
"jsonrpc" => "2.0",
"id" => id,
"error" => %{
"code" => code,
"message" => message
}
}
end
end

57
mcp_servers.json Normal file
View File

@@ -0,0 +1,57 @@
{
"servers": {
"mcp_context7": {
"type": "stdio",
"command": "bunx",
"args": [
"-y",
"@upstash/context7-mcp"
],
"auto_restart": true,
"description": "Context7 library documentation server"
},
"mcp_figma": {
"url": "http://127.0.0.1:3845/mcp",
"type": "http",
"auto_restart": true,
"description": "Figma design integration server"
},
"mcp_filesystem": {
"type": "stdio",
"command": "bunx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/home/ra"
],
"auto_restart": true,
"description": "Filesystem operations server with heartbeat coverage"
},
"mcp_memory": {
"type": "stdio",
"command": "bunx",
"args": [
"-y",
"@modelcontextprotocol/server-memory"
],
"auto_restart": true,
"description": "Memory and knowledge graph server"
},
"mcp_sequentialthinking": {
"type": "stdio",
"command": "bunx",
"args": [
"-y",
"@modelcontextprotocol/server-sequential-thinking"
],
"auto_restart": true,
"description": "Sequential thinking and reasoning server"
}
},
"config": {
"startup_timeout": 30000,
"heartbeat_interval": 10000,
"auto_restart_delay": 1000,
"max_restart_attempts": 3
}
}
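A manager loading this file has to distinguish stdio servers (which require a `command`) from http servers (which require a `url`). A minimal sketch of that validation in Python — `load_server_config` is a hypothetical helper, not part of this repo; only the key names are taken from the JSON above:

```python
import json

def load_server_config(path="mcp_servers.json"):
    """Load the server map and global options, validating each transport.

    Hypothetical helper -- key names mirror the mcp_servers.json file above.
    """
    with open(path) as f:
        data = json.load(f)
    servers = data.get("servers", {})
    for name, spec in servers.items():
        kind = spec.get("type")
        if kind == "stdio" and "command" not in spec:
            raise ValueError(f"{name}: stdio servers need a 'command'")
        if kind == "http" and "url" not in spec:
            raise ValueError(f"{name}: http servers need a 'url'")
    return servers, data.get("config", {})
```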

67
mix.exs
View File

@@ -1,13 +1,33 @@
defmodule AgentCoordinator.MixProject do
use Mix.Project
@version "0.1.0"
@source_url "https://github.com/your-username/agent_coordinator"
def project do
[
app: :agent_coordinator,
version: "0.1.0",
elixir: "~> 1.18",
version: @version,
elixir: "~> 1.16",
start_permanent: Mix.env() == :prod,
deps: deps()
deps: deps(),
name: "AgentCoordinator",
description: description(),
package: package(),
docs: docs(),
source_url: @source_url,
homepage_url: @source_url,
dialyzer: [
plt_file: {:no_warn, "priv/plts/dialyzer.plt"},
plt_add_apps: [:mix]
],
test_coverage: [tool: ExCoveralls],
preferred_cli_env: [
coveralls: :test,
"coveralls.detail": :test,
"coveralls.post": :test,
"coveralls.html": :test
]
]
end
@@ -26,7 +46,46 @@ defmodule AgentCoordinator.MixProject do
{:gnat, "~> 1.8"},
{:phoenix_pubsub, "~> 2.1"},
{:gen_stage, "~> 1.2"},
{:uuid, "~> 1.1"}
{:uuid, "~> 1.1"},
# Development and testing dependencies
{:ex_doc, "~> 0.34", only: :dev, runtime: false},
{:dialyxir, "~> 1.4", only: [:dev], runtime: false},
{:credo, "~> 1.7", only: [:dev, :test], runtime: false},
{:excoveralls, "~> 0.18", only: :test}
]
end
defp description do
"""
A distributed task coordination system for AI agents built with Elixir and NATS.
Enables multiple AI agents (Claude Code, GitHub Copilot, etc.) to work collaboratively
on the same codebase without conflicts through centralized task management,
file-level locking, and real-time communication.
"""
end
defp package do
[
maintainers: ["Your Name"],
licenses: ["MIT"],
links: %{
"GitHub" => @source_url,
"Changelog" => "#{@source_url}/blob/main/CHANGELOG.md"
},
files: ~w(lib .formatter.exs mix.exs README.md LICENSE CHANGELOG.md)
]
end
defp docs do
[
main: "AgentCoordinator",
source_ref: "v#{@version}",
source_url: @source_url,
extras: [
"README.md",
"CHANGELOG.md"
]
]
end
end

30
mix.lock Normal file
View File

@@ -0,0 +1,30 @@
%{
"bunt": {:hex, :bunt, "1.0.0", "081c2c665f086849e6d57900292b3a161727ab40431219529f13c4ddcf3e7a44", [:mix], [], "hexpm", "dc5f86aa08a5f6fa6b8096f0735c4e76d54ae5c9fa2c143e5a1fc7c1cd9bb6b5"},
"chacha20": {:hex, :chacha20, "1.0.4", "0359d8f9a32269271044c1b471d5cf69660c362a7c61a98f73a05ef0b5d9eb9e", [:mix], [], "hexpm", "2027f5d321ae9903f1f0da7f51b0635ad6b8819bc7fe397837930a2011bc2349"},
"connection": {:hex, :connection, "1.1.0", "ff2a49c4b75b6fb3e674bfc5536451607270aac754ffd1bdfe175abe4a6d7a68", [:mix], [], "hexpm", "722c1eb0a418fbe91ba7bd59a47e28008a189d47e37e0e7bb85585a016b2869c"},
"cowlib": {:hex, :cowlib, "2.15.0", "3c97a318a933962d1c12b96ab7c1d728267d2c523c25a5b57b0f93392b6e9e25", [:make, :rebar3], [], "hexpm", "4f00c879a64b4fe7c8fcb42a4281925e9ffdb928820b03c3ad325a617e857532"},
"credo": {:hex, :credo, "1.7.12", "9e3c20463de4b5f3f23721527fcaf16722ec815e70ff6c60b86412c695d426c1", [:mix], [{:bunt, "~> 0.2.1 or ~> 1.0", [hex: :bunt, repo: "hexpm", optional: false]}, {:file_system, "~> 0.2 or ~> 1.0", [hex: :file_system, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}], "hexpm", "8493d45c656c5427d9c729235b99d498bd133421f3e0a683e5c1b561471291e5"},
"curve25519": {:hex, :curve25519, "1.0.5", "f801179424e4012049fcfcfcda74ac04f65d0ffceeb80e7ef1d3352deb09f5bb", [:mix], [], "hexpm", "0fba3ad55bf1154d4d5fc3ae5fb91b912b77b13f0def6ccb3a5d58168ff4192d"},
"dialyxir": {:hex, :dialyxir, "1.4.6", "7cca478334bf8307e968664343cbdb432ee95b4b68a9cba95bdabb0ad5bdfd9a", [:mix], [{:erlex, ">= 0.2.7", [hex: :erlex, repo: "hexpm", optional: false]}], "hexpm", "8cf5615c5cd4c2da6c501faae642839c8405b49f8aa057ad4ae401cb808ef64d"},
"earmark_parser": {:hex, :earmark_parser, "1.4.44", "f20830dd6b5c77afe2b063777ddbbff09f9759396500cdbe7523efd58d7a339c", [:mix], [], "hexpm", "4778ac752b4701a5599215f7030989c989ffdc4f6df457c5f36938cc2d2a2750"},
"ed25519": {:hex, :ed25519, "1.4.3", "d1422c643fb691f8efc65e66c733bcc92338485858a9469f24a528b915809377", [:mix], [], "hexpm", "37f9de6be4a0e67d56f1b69ec2b79d4d96fea78365f45f5d5d344c48cf81d487"},
"equivalex": {:hex, :equivalex, "1.0.3", "170d9a82ae066e0020dfe1cf7811381669565922eb3359f6c91d7e9a1124ff74", [:mix], [], "hexpm", "46fa311adb855117d36e461b9c0ad2598f72110ad17ad73d7533c78020e045fc"},
"erlex": {:hex, :erlex, "0.2.7", "810e8725f96ab74d17aac676e748627a07bc87eb950d2b83acd29dc047a30595", [:mix], [], "hexpm", "3ed95f79d1a844c3f6bf0cea61e0d5612a42ce56da9c03f01df538685365efb0"},
"ex_doc": {:hex, :ex_doc, "0.38.3", "ddafe36b8e9fe101c093620879f6604f6254861a95133022101c08e75e6c759a", [:mix], [{:earmark_parser, "~> 1.4.44", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_c, ">= 0.1.0", [hex: :makeup_c, repo: "hexpm", optional: true]}, {:makeup_elixir, "~> 0.14 or ~> 1.0", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1 or ~> 1.0", [hex: :makeup_erlang, repo: "hexpm", optional: false]}, {:makeup_html, ">= 0.1.0", [hex: :makeup_html, repo: "hexpm", optional: true]}], "hexpm", "ecaa785456a67f63b4e7d7f200e8832fa108279e7eb73fd9928e7e66215a01f9"},
"excoveralls": {:hex, :excoveralls, "0.18.5", "e229d0a65982613332ec30f07940038fe451a2e5b29bce2a5022165f0c9b157e", [:mix], [{:castore, "~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}], "hexpm", "523fe8a15603f86d64852aab2abe8ddbd78e68579c8525ae765facc5eae01562"},
"file_system": {:hex, :file_system, "1.1.0", "08d232062284546c6c34426997dd7ef6ec9f8bbd090eb91780283c9016840e8f", [:mix], [], "hexpm", "bfcf81244f416871f2a2e15c1b515287faa5db9c6bcf290222206d120b3d43f6"},
"gen_stage": {:hex, :gen_stage, "1.3.2", "7c77e5d1e97de2c6c2f78f306f463bca64bf2f4c3cdd606affc0100b89743b7b", [:mix], [], "hexpm", "0ffae547fa777b3ed889a6b9e1e64566217413d018cabd825f786e843ffe63e7"},
"gnat": {:hex, :gnat, "1.11.0", "eb6cdb6a3ddab99a1620d7b87e176a04c3881d9ce0ea53e56380db85ce6b73ef", [:mix], [{:connection, "~> 1.1", [hex: :connection, repo: "hexpm", optional: false]}, {:cowlib, "~> 2.0", [hex: :cowlib, repo: "hexpm", optional: false]}, {:jason, "~> 1.1", [hex: :jason, repo: "hexpm", optional: false]}, {:nimble_parsec, "~> 0.5 or ~> 1.0", [hex: :nimble_parsec, repo: "hexpm", optional: false]}, {:nkeys, "~> 0.2", [hex: :nkeys, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "3b9a762ff2366e83b44a232f864e80ec754410486ad00b167f17de6c19b0f88a"},
"jason": {:hex, :jason, "1.4.4", "b9226785a9aa77b6857ca22832cffa5d5011a667207eb2a0ad56adb5db443b8a", [:mix], [{:decimal, "~> 1.0 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm", "c5eb0cab91f094599f94d55bc63409236a8ec69a21a67814529e8d5f6cc90b3b"},
"kcl": {:hex, :kcl, "1.4.3", "5e7dcc1e6d70b467cbeabd1ca2a574605233996eb02acf70fe8a651a72e9ef13", [:mix], [{:curve25519, ">= 1.0.4", [hex: :curve25519, repo: "hexpm", optional: false]}, {:ed25519, "~> 1.3", [hex: :ed25519, repo: "hexpm", optional: false]}, {:poly1305, "~> 1.0", [hex: :poly1305, repo: "hexpm", optional: false]}, {:salsa20, "~> 1.0", [hex: :salsa20, repo: "hexpm", optional: false]}], "hexpm", "45be516de04bae67c31ea08099406c86cbedad18a3ded5b931a513e74d4e9ba3"},
"makeup": {:hex, :makeup, "1.2.1", "e90ac1c65589ef354378def3ba19d401e739ee7ee06fb47f94c687016e3713d1", [:mix], [{:nimble_parsec, "~> 1.4", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "d36484867b0bae0fea568d10131197a4c2e47056a6fbe84922bf6ba71c8d17ce"},
"makeup_elixir": {:hex, :makeup_elixir, "1.0.1", "e928a4f984e795e41e3abd27bfc09f51db16ab8ba1aebdba2b3a575437efafc2", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}, {:nimble_parsec, "~> 1.2.3 or ~> 1.3", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "7284900d412a3e5cfd97fdaed4f5ed389b8f2b4cb49efc0eb3bd10e2febf9507"},
"makeup_erlang": {:hex, :makeup_erlang, "1.0.2", "03e1804074b3aa64d5fad7aa64601ed0fb395337b982d9bcf04029d68d51b6a7", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}], "hexpm", "af33ff7ef368d5893e4a267933e7744e46ce3cf1f61e2dccf53a111ed3aa3727"},
"nimble_parsec": {:hex, :nimble_parsec, "1.4.2", "8efba0122db06df95bfaa78f791344a89352ba04baedd3849593bfce4d0dc1c6", [:mix], [], "hexpm", "4b21398942dda052b403bbe1da991ccd03a053668d147d53fb8c4e0efe09c973"},
"nkeys": {:hex, :nkeys, "0.3.0", "837add5261a3cdd8ff75b54e0475062313093929ab5e042fa48e010f33b10d16", [:mix], [{:ed25519, "~> 1.3", [hex: :ed25519, repo: "hexpm", optional: false]}, {:kcl, "~> 1.4", [hex: :kcl, repo: "hexpm", optional: false]}], "hexpm", "b5af773a296620ee8eeb1ec6dc5b68f716386f7e53f7bda8c4ac23515823dfe4"},
"phoenix_pubsub": {:hex, :phoenix_pubsub, "2.1.3", "3168d78ba41835aecad272d5e8cd51aa87a7ac9eb836eabc42f6e57538e3731d", [:mix], [], "hexpm", "bba06bc1dcfd8cb086759f0edc94a8ba2bc8896d5331a1e2c2902bf8e36ee502"},
"poly1305": {:hex, :poly1305, "1.0.4", "7cdc8961a0a6e00a764835918cdb8ade868044026df8ef5d718708ea6cc06611", [:mix], [{:chacha20, "~> 1.0", [hex: :chacha20, repo: "hexpm", optional: false]}, {:equivalex, "~> 1.0", [hex: :equivalex, repo: "hexpm", optional: false]}], "hexpm", "e14e684661a5195e149b3139db4a1693579d4659d65bba115a307529c47dbc3b"},
"salsa20": {:hex, :salsa20, "1.0.4", "404cbea1fa8e68a41bcc834c0a2571ac175580fec01cc38cc70c0fb9ffc87e9b", [:mix], [], "hexpm", "745ddcd8cfa563ddb0fd61e7ce48d5146279a2cf7834e1da8441b369fdc58ac6"},
"telemetry": {:hex, :telemetry, "1.3.0", "fedebbae410d715cf8e7062c96a1ef32ec22e764197f70cda73d82778d61e7a2", [:rebar3], [], "hexpm", "7015fc8919dbe63764f4b4b87a95b7c0996bd539e0d499be6ec9d7f3875b79e6"},
"uuid": {:hex, :uuid, "1.1.8", "e22fc04499de0de3ed1116b770c7737779f226ceefa0badb3592e64d5cfb4eb9", [:mix], [], "hexpm", "c790593b4c3b601f5dc2378baae7efaf5b3d73c4c6456ba85759905be792f2ac"},
}

13
scripts/mcp_config.json Normal file
View File

@@ -0,0 +1,13 @@
{
"mcpServers": {
"agent-coordinator": {
"command": "/home/ra/agent_coordinator/scripts/mcp_launcher.sh",
"args": [],
"env": {
"MIX_ENV": "dev",
"NATS_HOST": "localhost",
"NATS_PORT": "4222"
}
}
}
}

109
scripts/mcp_launcher.sh Executable file
View File

@@ -0,0 +1,109 @@
#!/bin/bash
# AgentCoordinator Unified MCP Server Launcher
# This script starts the unified MCP server that manages all external MCP servers
# and provides automatic task tracking with heartbeat coverage
set -e
export PATH="$HOME/.asdf/shims:$PATH"
# Change to the project directory
cd "$(dirname "$0")/.."
# Set environment
export MIX_ENV="${MIX_ENV:-dev}"
export NATS_HOST="${NATS_HOST:-localhost}"
export NATS_PORT="${NATS_PORT:-4222}"
# Log startup
echo "Starting AgentCoordinator Unified MCP Server..." >&2
echo "Environment: $MIX_ENV" >&2
echo "NATS: $NATS_HOST:$NATS_PORT" >&2
# Start the Elixir application with unified MCP server
exec mix run --no-halt -e "
# Ensure all applications are started
{:ok, _} = Application.ensure_all_started(:agent_coordinator)
# Start services that are NOT in the application supervisor
# TaskRegistry is already started by the application supervisor, so we skip it
case AgentCoordinator.MCPServerManager.start_link([config_file: \"mcp_servers.json\"]) do
{:ok, _} -> :ok
{:error, {:already_started, _}} -> :ok
{:error, reason} -> raise \"Failed to start MCPServerManager: #{inspect(reason)}\"
end
case AgentCoordinator.UnifiedMCPServer.start_link() do
{:ok, _} -> :ok
{:error, {:already_started, _}} -> :ok
{:error, reason} -> raise \"Failed to start UnifiedMCPServer: #{inspect(reason)}\"
end
# Log that we're ready
IO.puts(:stderr, \"Unified MCP server ready with automatic task tracking\")
# Handle MCP JSON-RPC messages through the unified server
defmodule UnifiedMCPStdio do
def start do
spawn_link(fn -> message_loop() end)
Process.sleep(:infinity)
end
defp message_loop do
case IO.read(:stdio, :line) do
:eof ->
IO.puts(:stderr, \"Unified MCP server shutting down\")
System.halt(0)
{:error, reason} ->
IO.puts(:stderr, \"IO Error: #{inspect(reason)}\")
System.halt(1)
line ->
handle_message(String.trim(line))
message_loop()
end
end
defp handle_message(\"\"), do: :ok
defp handle_message(json_line) do
try do
request = Jason.decode!(json_line)
# Route through unified MCP server for automatic task tracking
response = AgentCoordinator.UnifiedMCPServer.handle_mcp_request(request)
IO.puts(Jason.encode!(response))
rescue
e in Jason.DecodeError ->
error_response = %{
\"jsonrpc\" => \"2.0\",
\"id\" => nil,
\"error\" => %{
\"code\" => -32700,
\"message\" => \"Parse error: #{Exception.message(e)}\"
}
}
IO.puts(Jason.encode!(error_response))
e ->
# Try to get the ID from the malformed request
id = try do
partial = Jason.decode!(json_line)
Map.get(partial, \"id\")
rescue
_ -> nil
end
error_response = %{
\"jsonrpc\" => \"2.0\",
\"id\" => id,
\"error\" => %{
\"code\" => -32603,
\"message\" => \"Internal error: #{Exception.message(e)}\"
}
}
IO.puts(Jason.encode!(error_response))
end
end
end
UnifiedMCPStdio.start()
"

73
scripts/minimal_test.sh Executable file
View File

@@ -0,0 +1,73 @@
#!/bin/bash
# Ultra-minimal test that doesn't start the full application
echo "🔬 Ultra-Minimal AgentCoordinator Test"
echo "======================================"
cd "$(dirname "$0")/.."  # mix commands below must run from the project root
echo "📋 Testing compilation..."
if mix compile >/dev/null 2>&1; then
echo "✅ Compilation successful"
else
echo "❌ Compilation failed"
exit 1
fi
echo "📋 Testing MCP server without application startup..."
if timeout 10 mix run --no-start -e "
# Load compiled modules without starting application
Code.ensure_loaded(AgentCoordinator.MCPServer)
# Test MCP server directly
try do
# Start just the required processes manually
{:ok, _} = Registry.start_link(keys: :unique, name: AgentCoordinator.InboxRegistry)
{:ok, _} = Phoenix.PubSub.start_link(name: AgentCoordinator.PubSub)
# Start TaskRegistry without NATS
{:ok, _} = GenServer.start_link(AgentCoordinator.TaskRegistry, [nats: nil], name: AgentCoordinator.TaskRegistry)
# Start MCP server
{:ok, _} = GenServer.start_link(AgentCoordinator.MCPServer, %{}, name: AgentCoordinator.MCPServer)
IO.puts('✅ Core components started')
# Test MCP functionality
response = AgentCoordinator.MCPServer.handle_mcp_request(%{
\"jsonrpc\" => \"2.0\",
\"id\" => 1,
\"method\" => \"tools/list\"
})
case response do
%{\"result\" => %{\"tools\" => tools}} when is_list(tools) ->
IO.puts(\"✅ MCP server working (#{length(tools)} tools)\")
_ ->
IO.puts(\"❌ MCP server not working: #{inspect(response)}\")
end
rescue
e ->
IO.puts(\"❌ Error: #{inspect(e)}\")
end
System.halt(0)
"; then
echo "✅ Minimal test passed!"
else
echo "❌ Minimal test failed"
exit 1
fi
echo ""
echo "🎉 Core MCP functionality works!"
echo ""
echo "📝 The hanging issue was due to NATS persistence trying to connect."
echo " Your MCP server core functionality is working perfectly."
echo ""
echo "🚀 To run with proper NATS setup:"
echo " 1. Make sure NATS server is running: sudo systemctl start nats"
echo " 2. Or run: nats-server -js -p 4222 -m 8222 &"
echo " 3. Then use: ../scripts/mcp_launcher.sh"

54
scripts/quick_test.sh Executable file
View File

@@ -0,0 +1,54 @@
#!/bin/bash
# Quick test script to verify AgentCoordinator works without getting stuck
echo "🧪 Quick AgentCoordinator Test"
echo "=============================="
cd "$(dirname "$0")/.."  # mix commands below must run from the project root
echo "📋 Testing basic compilation..."
if mix compile --force >/dev/null 2>&1; then
echo "✅ Compilation successful"
else
echo "❌ Compilation failed"
exit 1
fi
echo "📋 Testing application startup (without persistence)..."
if timeout 10 mix run -e "
Application.put_env(:agent_coordinator, :enable_persistence, false)
{:ok, _apps} = Application.ensure_all_started(:agent_coordinator)
IO.puts('✅ Application started successfully')
# Quick MCP server test
response = AgentCoordinator.MCPServer.handle_mcp_request(%{
\"jsonrpc\" => \"2.0\",
\"id\" => 1,
\"method\" => \"tools/list\"
})
case response do
%{\"result\" => %{\"tools\" => tools}} when is_list(tools) ->
IO.puts(\"✅ MCP server working (#{length(tools)} tools available)\")
_ ->
IO.puts(\"❌ MCP server not responding correctly\")
end
System.halt(0)
"; then
echo "✅ Quick test passed!"
else
echo "❌ Quick test failed"
exit 1
fi
echo ""
echo "🎉 AgentCoordinator is ready!"
echo ""
echo "🚀 Next steps:"
echo " 1. Run scripts/setup.sh to configure VS Code integration"
echo " 2. Or test manually with: scripts/mcp_launcher.sh"
echo " 3. Or run Python example: python3 mcp_client_example.py"

246
scripts/setup.sh Executable file
View File

@@ -0,0 +1,246 @@
#!/bin/bash
# AgentCoordinator Setup Script
# This script sets up everything needed to connect GitHub Copilot to AgentCoordinator
set -e
# setup.sh lives in scripts/, so the project root is one level up
PROJECT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
USER_HOME="$HOME"
echo "🚀 AgentCoordinator Setup"
echo "========================="
echo "Project Directory: $PROJECT_DIR"
echo "User Home: $USER_HOME"
# Function to check if command exists
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Check prerequisites
echo -e "\n📋 Checking prerequisites..."
if ! command_exists mix; then
echo "❌ Elixir/Mix not found. Please install Elixir first."
exit 1
fi
if ! command_exists nats-server; then
echo "⚠️ NATS server not found. Installing via package manager..."
if command_exists apt; then
sudo apt update && sudo apt install -y nats-server
elif command_exists brew; then
brew install nats-server
elif command_exists yum; then
sudo yum install -y nats-server
else
echo "❌ Please install NATS server manually: https://docs.nats.io/nats-server/installation"
exit 1
fi
fi
echo "✅ Prerequisites OK"
# Start NATS server if not running
echo -e "\n🔧 Setting up NATS server..."
if ! pgrep -f nats-server > /dev/null; then
echo "Starting NATS server..."
# Check if systemd service exists
if systemctl list-unit-files | grep -q nats.service; then
sudo systemctl enable nats
sudo systemctl start nats
echo "✅ NATS server started via systemd"
else
# Start manually in background
nats-server -js -p 4222 -m 8222 > /tmp/nats.log 2>&1 &
echo $! > /tmp/nats.pid
echo "✅ NATS server started manually (PID: $(cat /tmp/nats.pid))"
fi
# Wait for NATS to be ready
sleep 2
else
echo "✅ NATS server already running"
fi
# Install Elixir dependencies
echo -e "\n📦 Installing Elixir dependencies..."
cd "$PROJECT_DIR"
mix deps.get
echo "✅ Dependencies installed"
# Test the application
echo -e "\n🧪 Testing AgentCoordinator application..."
echo "Testing basic compilation and startup..."
# First test: just compile
if mix compile >/dev/null 2>&1; then
echo "✅ Application compiles successfully"
else
echo "❌ Application compilation failed"
exit 1
fi
# Second test: quick startup test without persistence
if timeout 15 mix run -e "
try do
Application.put_env(:agent_coordinator, :enable_persistence, false)
{:ok, _} = Application.ensure_all_started(:agent_coordinator)
IO.puts('App startup test OK')
System.halt(0)
rescue
e ->
IO.puts('App startup error: #{inspect(e)}')
System.halt(1)
end
" >/dev/null 2>&1; then
echo "✅ Application startup test passed"
else
echo "⚠️ Application startup test had issues, but continuing..."
echo " (This might be due to NATS configuration - will be fixed during runtime)"
fi
# Create VS Code settings directory if it doesn't exist
VSCODE_SETTINGS_DIR="$USER_HOME/.vscode-server/data/User"
if [ ! -d "$VSCODE_SETTINGS_DIR" ]; then
VSCODE_SETTINGS_DIR="$USER_HOME/.vscode/User"
fi
mkdir -p "$VSCODE_SETTINGS_DIR"
# Create or update VS Code settings for MCP
echo -e "\n⚙ Configuring VS Code for MCP..."
SETTINGS_FILE="$VSCODE_SETTINGS_DIR/settings.json"
MCP_CONFIG='{
"github.copilot.advanced": {
"mcp": {
"servers": {
"agent-coordinator": {
"command": "'$PROJECT_DIR'/scripts/mcp_launcher.sh",
"args": [],
"env": {
"MIX_ENV": "dev",
"NATS_HOST": "localhost",
"NATS_PORT": "4222"
}
}
}
}
}
}'
# Backup existing settings
if [ -f "$SETTINGS_FILE" ]; then
cp "$SETTINGS_FILE" "$SETTINGS_FILE.backup.$(date +%s)"
echo "📋 Backed up existing VS Code settings"
fi
# Merge or create settings
if [ -f "$SETTINGS_FILE" ]; then
# Use jq to merge if available, otherwise manual merge
if command_exists jq; then
echo "$MCP_CONFIG" | jq -s '.[0] * .[1]' "$SETTINGS_FILE" - > "$SETTINGS_FILE.tmp"
mv "$SETTINGS_FILE.tmp" "$SETTINGS_FILE"
else
echo "⚠️ jq not found. Please manually add MCP configuration to $SETTINGS_FILE"
echo "Add this configuration:"
echo "$MCP_CONFIG"
fi
else
echo "$MCP_CONFIG" > "$SETTINGS_FILE"
fi
echo "✅ VS Code settings updated"
# Test MCP server
echo -e "\n🧪 Testing MCP server..."
cd "$PROJECT_DIR"
if timeout 5 ./scripts/mcp_launcher.sh >/dev/null 2>&1; then
echo "✅ MCP server test passed"
else
echo "⚠️ MCP server test timed out (this is expected)"
fi
# Create desktop shortcut for easy access
echo -e "\n🖥 Creating desktop shortcuts..."
# Start script
cat > "$PROJECT_DIR/start_agent_coordinator.sh" << 'EOF'
#!/bin/bash
cd "$(dirname "$0")"
echo "🚀 Starting AgentCoordinator..."
# Start NATS if not running
if ! pgrep -f nats-server > /dev/null; then
echo "Starting NATS server..."
nats-server -js -p 4222 -m 8222 > /tmp/nats.log 2>&1 &
echo $! > /tmp/nats.pid
sleep 2
fi
# Start MCP server
echo "Starting MCP server..."
./scripts/mcp_launcher.sh
EOF
chmod +x "$PROJECT_DIR/start_agent_coordinator.sh"
# Stop script
cat > "$PROJECT_DIR/stop_agent_coordinator.sh" << 'EOF'
#!/bin/bash
echo "🛑 Stopping AgentCoordinator..."
# Stop NATS if we started it
if [ -f /tmp/nats.pid ]; then
kill $(cat /tmp/nats.pid) 2>/dev/null || true
rm -f /tmp/nats.pid
fi
# Kill any remaining processes
pkill -f "scripts/mcp_launcher.sh" || true
pkill -f "agent_coordinator" || true
echo "✅ AgentCoordinator stopped"
EOF
chmod +x "$PROJECT_DIR/stop_agent_coordinator.sh"
echo "✅ Created start/stop scripts"
# Final instructions
echo -e "\n🎉 Setup Complete!"
echo "==================="
echo ""
echo "📋 Next Steps:"
echo ""
echo "1. 🔄 Restart VS Code to load the new MCP configuration"
echo " - Close all VS Code windows"
echo " - Reopen VS Code in your project"
echo ""
echo "2. 🤖 GitHub Copilot should now have access to AgentCoordinator tools:"
echo " - register_agent"
echo " - create_task"
echo " - get_next_task"
echo " - complete_task"
echo " - get_task_board"
echo " - heartbeat"
echo ""
echo "3. 🧪 Test the integration:"
echo " - Ask Copilot: 'Register me as an agent with coding capabilities'"
echo " - Ask Copilot: 'Create a task to refactor the login module'"
echo " - Ask Copilot: 'Show me the task board'"
echo ""
echo "📂 Useful files:"
echo " - Start server: $PROJECT_DIR/start_agent_coordinator.sh"
echo " - Stop server: $PROJECT_DIR/stop_agent_coordinator.sh"
echo " - Test client: $PROJECT_DIR/mcp_client_example.py"
echo " - VS Code settings: $SETTINGS_FILE"
echo ""
echo "🔧 Manual start (if needed):"
echo " cd $PROJECT_DIR && ./scripts/mcp_launcher.sh"
echo ""
echo "💡 Tip: The MCP server will auto-start when Copilot needs it!"
echo ""
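The settings merge step above relies on jq's `.[0] * .[1]`, which merges objects recursively. Where jq is unavailable, the same semantics can be sketched in Python — `deep_merge` is a hypothetical fallback helper, not part of this script:

```python
import json

def deep_merge(base, extra):
    """Recursively merge `extra` into `base`, like jq's `.[0] * .[1]`.

    Nested objects are merged key by key; any other value in `extra`
    (including lists) replaces the value in `base`.
    """
    out = dict(base)
    for key, val in extra.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], val)
        else:
            out[key] = val
    return out
```

This preserves unrelated user settings while layering the MCP server entry on top, which is exactly why the script prefers a merge over overwriting `settings.json`.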

149
scripts/test_mcp_server.exs Normal file
View File

@@ -0,0 +1,149 @@
# Simple test script to demonstrate MCP server functionality.
# Run it from the project root with `mix run scripts/test_mcp_server.exs`
# so AgentCoordinator and its deps (Jason included) are on the code path;
# Mix.install cannot pull in the local application.
# Start the agent coordinator application
{:ok, _} = Application.ensure_all_started(:agent_coordinator)
alias AgentCoordinator.MCPServer
IO.puts("🚀 Testing Agent Coordinator MCP Server")
IO.puts("=" |> String.duplicate(50))
# Test 1: Get tools list
IO.puts("\n📋 Getting available tools...")
tools_request = %{"method" => "tools/list", "jsonrpc" => "2.0", "id" => 1}
tools_response = MCPServer.handle_mcp_request(tools_request)
case tools_response do
%{"result" => %{"tools" => tools}} ->
IO.puts("✅ Found #{length(tools)} tools:")
Enum.each(tools, fn tool ->
IO.puts(" - #{tool["name"]}: #{tool["description"]}")
end)
error ->
IO.puts("❌ Error getting tools: #{inspect(error)}")
end
# Test 2: Register an agent
IO.puts("\n👤 Registering test agent...")
register_request = %{
"method" => "tools/call",
"params" => %{
"name" => "register_agent",
"arguments" => %{
"name" => "DemoAgent",
"capabilities" => ["coding", "testing"]
}
},
"jsonrpc" => "2.0",
"id" => 2
}
register_response = MCPServer.handle_mcp_request(register_request)
agent_id = case register_response do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
IO.puts("✅ Agent registered: #{data["agent_id"]}")
data["agent_id"]
error ->
IO.puts("❌ Error registering agent: #{inspect(error)}")
nil
end
if agent_id do
# Test 3: Create a task
IO.puts("\n📝 Creating a test task...")
task_request = %{
"method" => "tools/call",
"params" => %{
"name" => "create_task",
"arguments" => %{
"title" => "Demo Task",
"description" => "A demonstration task for the MCP server",
"priority" => "high",
"required_capabilities" => ["coding"]
}
},
"jsonrpc" => "2.0",
"id" => 3
}
task_response = MCPServer.handle_mcp_request(task_request)
case task_response do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
IO.puts("✅ Task created: #{data["task_id"]}")
if data["assigned_to"] do
IO.puts(" Assigned to: #{data["assigned_to"]}")
end
error ->
IO.puts("❌ Error creating task: #{inspect(error)}")
end
# Test 4: Get task board
IO.puts("\n📊 Getting task board...")
board_request = %{
"method" => "tools/call",
"params" => %{
"name" => "get_task_board",
"arguments" => %{}
},
"jsonrpc" => "2.0",
"id" => 4
}
board_response = MCPServer.handle_mcp_request(board_request)
case board_response do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
IO.puts("✅ Task board retrieved:")
Enum.each(data["agents"], fn agent ->
IO.puts(" Agent: #{agent["name"]} (#{agent["agent_id"]})")
IO.puts(" Capabilities: #{Enum.join(agent["capabilities"], ", ")}")
IO.puts(" Status: #{agent["status"]}")
if agent["current_task"] do
IO.puts(" Current Task: #{agent["current_task"]["title"]}")
else
IO.puts(" Current Task: None")
end
IO.puts(" Pending: #{agent["pending_tasks"]} | Completed: #{agent["completed_tasks"]}")
IO.puts("")
end)
error ->
IO.puts("❌ Error getting task board: #{inspect(error)}")
end
# Test 5: Send heartbeat
IO.puts("\n💓 Sending heartbeat...")
heartbeat_request = %{
"method" => "tools/call",
"params" => %{
"name" => "heartbeat",
"arguments" => %{
"agent_id" => agent_id
}
},
"jsonrpc" => "2.0",
"id" => 5
}
heartbeat_response = MCPServer.handle_mcp_request(heartbeat_request)
case heartbeat_response do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
IO.puts("✅ Heartbeat sent: #{data["status"]}")
error ->
IO.puts("❌ Error sending heartbeat: #{inspect(error)}")
end
end
IO.puts("\n🎉 MCP Server testing completed!")
IO.puts("=" |> String.duplicate(50))

46
scripts/test_mcp_stdio.sh Executable file
View File

@@ -0,0 +1,46 @@
#!/bin/bash
# Test script for MCP server stdio interface
echo "🧪 Testing AgentCoordinator MCP Server via stdio"
echo "================================================"
# Start the MCP server in background, fed from a named pipe.
# The server speaks JSON-RPC over stdio, so requests must go to its stdin;
# a PID is not a file descriptor, and nc cannot reach a stdio server.
REQUEST_PIPE="$(mktemp -u)"
mkfifo "$REQUEST_PIPE"
./mcp_launcher.sh < "$REQUEST_PIPE" &
MCP_PID=$!
# Keep the pipe open for writing on fd 3
exec 3>"$REQUEST_PIPE"
# Give it time to start
sleep 3
# Function to send an MCP request; responses appear on the server's stdout
send_mcp_request() {
local request="$1"
echo "📤 Sending: $request"
echo "$request" >&3
sleep 1
}
# Test 1: Get tools list
echo -e "\n1⃣ Testing tools/list..."
TOOLS_REQUEST='{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
send_mcp_request "$TOOLS_REQUEST"
# Test 2: Register agent
echo -e "\n2⃣ Testing register_agent..."
REGISTER_REQUEST='{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"register_agent","arguments":{"name":"TestAgent","capabilities":["coding","testing"]}}}'
send_mcp_request "$REGISTER_REQUEST"
# Test 3: Create task
echo -e "\n3⃣ Testing create_task..."
TASK_REQUEST='{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"create_task","arguments":{"title":"Test Task","description":"A test task","priority":"medium","required_capabilities":["coding"]}}}'
send_mcp_request "$TASK_REQUEST"
# Test 4: Get task board
echo -e "\n4⃣ Testing get_task_board..."
BOARD_REQUEST='{"jsonrpc":"2.0","id":4,"method":"tools/call","params":{"name":"get_task_board","arguments":{}}}'
send_mcp_request "$BOARD_REQUEST"
# Clean up: close the request pipe, stop the server, remove the FIFO
sleep 2
exec 3>&-
kill $MCP_PID 2>/dev/null
[ -n "${REQUEST_PIPE:-}" ] && rm -f "$REQUEST_PIPE"
echo -e "\n✅ MCP server test completed"
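Since the server reads newline-delimited JSON-RPC on stdin and answers on stdout, the same round trip can be driven from Python. The sketch below uses `cat` as a stand-in for `./mcp_launcher.sh` so the example stays self-contained; with the real launcher, the reply would be the server's response rather than an echo:

```python
import json
import subprocess

def roundtrip(proc, request):
    """Write one line-delimited JSON-RPC request and read one response line."""
    proc.stdin.write(json.dumps(request) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

# `cat` echoes stdin back, standing in for ./mcp_launcher.sh in this sketch
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
reply = roundtrip(proc, {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
proc.stdin.close()
proc.wait()
```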

View File

@@ -0,0 +1,175 @@
defmodule AgentCoordinator.AutoHeartbeatTest do
# setup/0 starts globally named singleton processes, so these tests
# cannot safely run concurrently with each other
use ExUnit.Case, async: false
alias AgentCoordinator.{Client, EnhancedMCPServer, TaskRegistry}
setup do
# Start necessary services for testing
{:ok, _} = Registry.start_link(keys: :unique, name: AgentCoordinator.InboxRegistry)
{:ok, _} = DynamicSupervisor.start_link(name: AgentCoordinator.InboxSupervisor, strategy: :one_for_one)
{:ok, _} = TaskRegistry.start_link()
{:ok, _} = AgentCoordinator.MCPServer.start_link()
{:ok, _} = AgentCoordinator.AutoHeartbeat.start_link()
{:ok, _} = EnhancedMCPServer.start_link()
:ok
end
describe "automatic heartbeat functionality" do
test "agent automatically sends heartbeats during operations" do
# Start a client with auto-heartbeat
{:ok, client} = Client.start_session("TestAgent", [:coding], auto_heartbeat: true, heartbeat_interval: 1000)
# Get initial session info
{:ok, initial_info} = Client.get_session_info(client)
initial_heartbeat = initial_info.last_heartbeat
# Wait a bit for automatic heartbeat
Process.sleep(1500)
# Check that heartbeat was updated
{:ok, updated_info} = Client.get_session_info(client)
assert DateTime.compare(updated_info.last_heartbeat, initial_heartbeat) == :gt
# Cleanup
Client.stop_session(client)
end
test "agent stays online with regular heartbeats" do
# Start client
{:ok, client} = Client.start_session("OnlineAgent", [:analysis], auto_heartbeat: true, heartbeat_interval: 500)
# Get agent info
{:ok, session_info} = Client.get_session_info(client)
agent_id = session_info.agent_id
# Check task board initially
{:ok, initial_board} = Client.get_task_board(client)
agent = Enum.find(initial_board.agents, fn a -> a["agent_id"] == agent_id end)
assert agent["online"] == true
# Wait longer than heartbeat interval but not longer than online timeout
Process.sleep(2000)
# Agent should still be online due to automatic heartbeats
{:ok, updated_board} = Client.get_task_board(client)
updated_agent = Enum.find(updated_board.agents, fn a -> a["agent_id"] == agent_id end)
assert updated_agent["online"] == true
Client.stop_session(client)
end
test "multiple agents coordinate without collisions" do
# Start multiple agents
{:ok, agent1} = Client.start_session("Agent1", [:coding], auto_heartbeat: true)
{:ok, agent2} = Client.start_session("Agent2", [:testing], auto_heartbeat: true)
{:ok, agent3} = Client.start_session("Agent3", [:review], auto_heartbeat: true)
# All should be online
{:ok, board} = Client.get_task_board(agent1)
online_agents = Enum.filter(board.agents, fn a -> a["online"] end)
assert length(online_agents) >= 3
# Create tasks from different agents simultaneously
task1 = Task.async(fn ->
Client.create_task(agent1, "Task1", "Description1", %{"priority" => "normal"})
end)
task2 = Task.async(fn ->
Client.create_task(agent2, "Task2", "Description2", %{"priority" => "high"})
end)
task3 = Task.async(fn ->
Client.create_task(agent3, "Task3", "Description3", %{"priority" => "low"})
end)
# All tasks should complete successfully
{:ok, result1} = Task.await(task1)
{:ok, result2} = Task.await(task2)
{:ok, result3} = Task.await(task3)
# Verify heartbeat metadata is included
assert Map.has_key?(result1, "_heartbeat_metadata")
assert Map.has_key?(result2, "_heartbeat_metadata")
assert Map.has_key?(result3, "_heartbeat_metadata")
# Cleanup
Client.stop_session(agent1)
Client.stop_session(agent2)
Client.stop_session(agent3)
end
test "heartbeat metadata is included in responses" do
{:ok, client} = Client.start_session("MetadataAgent", [:documentation])
# Perform an operation
{:ok, result} = Client.create_task(client, "Test Task", "Test Description")
# Check for heartbeat metadata
assert Map.has_key?(result, "_heartbeat_metadata")
metadata = result["_heartbeat_metadata"]
# Verify metadata structure
{:ok, session_info} = Client.get_session_info(client)
assert metadata["agent_id"] == session_info.agent_id
assert Map.has_key?(metadata, "timestamp")
assert Map.has_key?(metadata, "pre_heartbeat")
assert Map.has_key?(metadata, "post_heartbeat")
Client.stop_session(client)
end
test "session cleanup on client termination" do
# Start client
{:ok, client} = Client.start_session("CleanupAgent", [:coding])
# Get session info
{:ok, session_info} = Client.get_session_info(client)
agent_id = session_info.agent_id
# Verify agent is in task board
{:ok, board} = Client.get_task_board(client)
assert Enum.any?(board.agents, fn a -> a["agent_id"] == agent_id end)
# Stop client
Client.stop_session(client)
# Give some time for cleanup
Process.sleep(100)
# Start another client to check board
{:ok, checker_client} = Client.start_session("CheckerAgent", [:analysis])
{:ok, updated_board} = Client.get_task_board(checker_client)
# Original agent should show as offline or be cleaned up
case Enum.find(updated_board.agents, fn a -> a["agent_id"] == agent_id end) do
nil ->
# Agent was cleaned up - this is acceptable
:ok
agent ->
# Agent should be offline
refute agent["online"]
end
Client.stop_session(checker_client)
end
end
describe "enhanced task board" do
test "provides session information" do
{:ok, client} = Client.start_session("BoardAgent", [:analysis])
{:ok, board} = Client.get_task_board(client)
# Should have session metadata
assert Map.has_key?(board, "active_sessions")
assert board["active_sessions"] >= 1
# Agents should have enhanced information
agent = Enum.find(board.agents, fn a -> a["name"] == "BoardAgent" end)
assert Map.has_key?(agent, "session_active")
assert agent["session_active"] == true
Client.stop_session(client)
end
end
end
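
The client behavior exercised above can be condensed into a minimal usage sketch (assuming a running AgentCoordinator application and the `Client` API these tests call):

```elixir
# Minimal sketch: a session with auto-heartbeat enabled, one task
# creation carrying heartbeat metadata, then cleanup.
{:ok, client} =
  AgentCoordinator.Client.start_session("ExampleAgent", [:coding],
    auto_heartbeat: true,
    heartbeat_interval: 1_000
  )

{:ok, result} = AgentCoordinator.Client.create_task(client, "Example", "Demo task")
IO.inspect(result["_heartbeat_metadata"], label: "heartbeat metadata")
AgentCoordinator.Client.stop_session(client)
```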

View File

@@ -0,0 +1,635 @@
defmodule AgentCoordinator.MCPServerTest do
use ExUnit.Case, async: false
alias AgentCoordinator.{MCPServer, TaskRegistry, Agent, Task, Inbox}
setup do
# Clean up any existing named processes safely
if Process.whereis(MCPServer), do: GenServer.stop(MCPServer, :normal, 1000)
if Process.whereis(TaskRegistry), do: GenServer.stop(TaskRegistry, :normal, 1000)
if Process.whereis(AgentCoordinator.PubSub),
do: GenServer.stop(AgentCoordinator.PubSub, :normal, 1000)
if Process.whereis(AgentCoordinator.InboxSupervisor),
do: DynamicSupervisor.stop(AgentCoordinator.InboxSupervisor, :normal, 1000)
# Registry has to be handled differently
case Process.whereis(AgentCoordinator.InboxRegistry) do
nil ->
:ok
pid ->
Process.unlink(pid)
Process.exit(pid, :kill)
end
# Wait a bit for processes to terminate
Process.sleep(200)
# Start fresh components needed for testing (without NATS)
start_supervised!({Registry, keys: :unique, name: AgentCoordinator.InboxRegistry})
start_supervised!({Phoenix.PubSub, name: AgentCoordinator.PubSub})
start_supervised!(
{DynamicSupervisor, name: AgentCoordinator.InboxSupervisor, strategy: :one_for_one}
)
# Start task registry without NATS for testing
# Empty map for no NATS connection
start_supervised!({TaskRegistry, nats: %{}})
start_supervised!(MCPServer)
:ok
end
describe "MCP protocol compliance" do
test "returns tools list for tools/list method" do
request = %{"method" => "tools/list", "jsonrpc" => "2.0", "id" => 1}
response = MCPServer.handle_mcp_request(request)
assert %{"jsonrpc" => "2.0", "result" => %{"tools" => tools}} = response
assert is_list(tools)
assert length(tools) == 6
# Check that all expected tools are present
tool_names = Enum.map(tools, & &1["name"])
expected_tools = [
"register_agent",
"create_task",
"get_next_task",
"complete_task",
"get_task_board",
"heartbeat"
]
for tool_name <- expected_tools do
assert tool_name in tool_names, "Missing tool: #{tool_name}"
end
end
test "returns error for unknown method" do
request = %{"method" => "unknown/method", "jsonrpc" => "2.0", "id" => 1}
response = MCPServer.handle_mcp_request(request)
assert %{
"jsonrpc" => "2.0",
"error" => %{"code" => -32601, "message" => "Method not found"}
} = response
end
test "returns error for unknown tool" do
request = %{
"method" => "tools/call",
"params" => %{"name" => "unknown_tool", "arguments" => %{}},
"jsonrpc" => "2.0",
"id" => 1
}
response = MCPServer.handle_mcp_request(request)
assert %{
"jsonrpc" => "2.0",
"id" => 1,
"error" => %{"code" => -1, "message" => "Unknown tool: unknown_tool"}
} = response
end
end
describe "register_agent tool" do
test "successfully registers an agent with valid capabilities" do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "register_agent",
"arguments" => %{
"name" => "TestAgent",
"capabilities" => ["coding", "testing"]
}
},
"jsonrpc" => "2.0",
"id" => 1
}
response = MCPServer.handle_mcp_request(request)
assert %{
"jsonrpc" => "2.0",
"id" => 1,
"result" => %{"content" => [%{"type" => "text", "text" => text}]}
} = response
data = Jason.decode!(text)
assert %{"agent_id" => agent_id, "status" => "registered"} = data
assert is_binary(agent_id)
# Verify agent is in registry
agents = TaskRegistry.list_agents()
assert Enum.any?(agents, fn agent -> agent.id == agent_id and agent.name == "TestAgent" end)
end
test "fails to register agent with duplicate name" do
# Register first agent
args1 = %{"name" => "DuplicateAgent", "capabilities" => ["coding"]}
request1 = %{
"method" => "tools/call",
"params" => %{"name" => "register_agent", "arguments" => args1},
"jsonrpc" => "2.0",
"id" => 1
}
MCPServer.handle_mcp_request(request1)
# Try to register second agent with same name
args2 = %{"name" => "DuplicateAgent", "capabilities" => ["testing"]}
request2 = %{
"method" => "tools/call",
"params" => %{"name" => "register_agent", "arguments" => args2},
"jsonrpc" => "2.0",
"id" => 2
}
response = MCPServer.handle_mcp_request(request2)
assert %{"jsonrpc" => "2.0", "id" => 2, "error" => %{"code" => -1, "message" => message}} =
response
assert String.contains?(message, "Agent name already exists")
end
end
describe "create_task tool" do
setup do
# Register an agent for task assignment
agent = Agent.new("TaskAgent", [:coding, :testing])
TaskRegistry.register_agent(agent)
Inbox.start_link(agent.id)
%{agent_id: agent.id}
end
test "successfully creates and assigns task to available agent", %{agent_id: agent_id} do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "create_task",
"arguments" => %{
"title" => "Test Task",
"description" => "A test task description",
"priority" => "high",
"file_paths" => ["test.ex"],
"required_capabilities" => ["coding"]
}
},
"jsonrpc" => "2.0",
"id" => 1
}
response = MCPServer.handle_mcp_request(request)
assert %{
"jsonrpc" => "2.0",
"id" => 1,
"result" => %{"content" => [%{"type" => "text", "text" => text}]}
} = response
data = Jason.decode!(text)
assert %{"task_id" => task_id, "assigned_to" => ^agent_id, "status" => "assigned"} = data
assert is_binary(task_id)
end
test "queues task when no agents available" do
# Don't register any agents
request = %{
"method" => "tools/call",
"params" => %{
"name" => "create_task",
"arguments" => %{
"title" => "Queued Task",
"description" => "This task will be queued"
}
},
"jsonrpc" => "2.0",
"id" => 1
}
response = MCPServer.handle_mcp_request(request)
assert %{
"jsonrpc" => "2.0",
"id" => 1,
"result" => %{"content" => [%{"type" => "text", "text" => text}]}
} = response
data = Jason.decode!(text)
assert %{"task_id" => task_id, "status" => "queued"} = data
assert is_binary(task_id)
end
test "creates task with minimum required fields" do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "create_task",
"arguments" => %{
"title" => "Minimal Task",
"description" => "Minimal task description"
}
},
"jsonrpc" => "2.0",
"id" => 1
}
response = MCPServer.handle_mcp_request(request)
assert %{
"jsonrpc" => "2.0",
"id" => 1,
"result" => %{"content" => [%{"type" => "text", "text" => text}]}
} = response
data = Jason.decode!(text)
assert %{"task_id" => task_id} = data
assert is_binary(task_id)
end
end
describe "get_next_task tool" do
setup do
# Register agent and create a task
agent = Agent.new("WorkerAgent", [:coding])
TaskRegistry.register_agent(agent)
Inbox.start_link(agent.id)
task = Task.new("Work Task", "Some work to do", priority: :high)
Inbox.add_task(agent.id, task)
%{agent_id: agent.id, task_id: task.id}
end
test "returns next task for agent with pending tasks", %{agent_id: agent_id, task_id: task_id} do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "get_next_task",
"arguments" => %{"agent_id" => agent_id}
},
"jsonrpc" => "2.0",
"id" => 1
}
response = MCPServer.handle_mcp_request(request)
assert %{
"jsonrpc" => "2.0",
"id" => 1,
"result" => %{"content" => [%{"type" => "text", "text" => text}]}
} = response
data = Jason.decode!(text)
assert %{
"task_id" => ^task_id,
"title" => "Work Task",
"description" => "Some work to do",
"priority" => "high"
} = data
end
test "returns no tasks message when no pending tasks", %{agent_id: agent_id} do
# First get the task to make inbox empty
Inbox.get_next_task(agent_id)
request = %{
"method" => "tools/call",
"params" => %{
"name" => "get_next_task",
"arguments" => %{"agent_id" => agent_id}
},
"jsonrpc" => "2.0",
"id" => 1
}
response = MCPServer.handle_mcp_request(request)
assert %{
"jsonrpc" => "2.0",
"id" => 1,
"result" => %{"content" => [%{"type" => "text", "text" => text}]}
} = response
data = Jason.decode!(text)
assert %{"message" => "No tasks available"} = data
end
end
describe "complete_task tool" do
setup do
# Setup agent with a task in progress
agent = Agent.new("CompletionAgent", [:coding])
TaskRegistry.register_agent(agent)
Inbox.start_link(agent.id)
task = Task.new("Complete Me", "Task to complete")
Inbox.add_task(agent.id, task)
# Start the task so it becomes the agent's current task
started_task = Inbox.get_next_task(agent.id)
%{agent_id: agent.id, task_id: started_task.id}
end
test "successfully completes current task", %{agent_id: agent_id, task_id: task_id} do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "complete_task",
"arguments" => %{"agent_id" => agent_id}
},
"jsonrpc" => "2.0",
"id" => 1
}
response = MCPServer.handle_mcp_request(request)
assert %{
"jsonrpc" => "2.0",
"id" => 1,
"result" => %{"content" => [%{"type" => "text", "text" => text}]}
} = response
data = Jason.decode!(text)
assert %{
"task_id" => ^task_id,
"status" => "completed",
"completed_at" => completed_at
} = data
assert is_binary(completed_at)
end
test "fails when no task in progress" do
# Register agent without starting any tasks
agent = Agent.new("IdleAgent", [:coding])
TaskRegistry.register_agent(agent)
Inbox.start_link(agent.id)
request = %{
"method" => "tools/call",
"params" => %{
"name" => "complete_task",
"arguments" => %{"agent_id" => agent.id}
},
"jsonrpc" => "2.0",
"id" => 1
}
response = MCPServer.handle_mcp_request(request)
assert %{"jsonrpc" => "2.0", "id" => 1, "error" => %{"code" => -1, "message" => message}} =
response
assert String.contains?(message, "no_task_in_progress")
end
end
describe "get_task_board tool" do
setup do
# Register multiple agents with different states
agent1 = Agent.new("BusyAgent", [:coding])
agent2 = Agent.new("IdleAgent", [:testing])
TaskRegistry.register_agent(agent1)
TaskRegistry.register_agent(agent2)
Inbox.start_link(agent1.id)
Inbox.start_link(agent2.id)
# Add task to first agent
task = Task.new("Busy Work", "Work in progress")
Inbox.add_task(agent1.id, task)
# Start the task
Inbox.get_next_task(agent1.id)
%{agent1_id: agent1.id, agent2_id: agent2.id}
end
test "returns status of all agents", %{agent1_id: agent1_id, agent2_id: agent2_id} do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "get_task_board",
"arguments" => %{}
},
"jsonrpc" => "2.0",
"id" => 1
}
response = MCPServer.handle_mcp_request(request)
assert %{
"jsonrpc" => "2.0",
"id" => 1,
"result" => %{"content" => [%{"type" => "text", "text" => text}]}
} = response
data = Jason.decode!(text)
assert %{"agents" => agents} = data
assert length(agents) == 2
# Find agents by ID
busy_agent = Enum.find(agents, fn agent -> agent["agent_id"] == agent1_id end)
idle_agent = Enum.find(agents, fn agent -> agent["agent_id"] == agent2_id end)
assert busy_agent["name"] == "BusyAgent"
assert busy_agent["capabilities"] == ["coding"]
assert busy_agent["current_task"]["title"] == "Busy Work"
assert idle_agent["name"] == "IdleAgent"
assert idle_agent["capabilities"] == ["testing"]
assert is_nil(idle_agent["current_task"])
end
end
describe "heartbeat tool" do
setup do
agent = Agent.new("HeartbeatAgent", [:coding])
TaskRegistry.register_agent(agent)
%{agent_id: agent.id}
end
test "successfully processes heartbeat for registered agent", %{agent_id: agent_id} do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "heartbeat",
"arguments" => %{"agent_id" => agent_id}
},
"jsonrpc" => "2.0",
"id" => 1
}
response = MCPServer.handle_mcp_request(request)
assert %{
"jsonrpc" => "2.0",
"id" => 1,
"result" => %{"content" => [%{"type" => "text", "text" => text}]}
} = response
data = Jason.decode!(text)
assert %{"status" => "heartbeat_received"} = data
end
test "fails heartbeat for non-existent agent" do
request = %{
"method" => "tools/call",
"params" => %{
"name" => "heartbeat",
"arguments" => %{"agent_id" => "non-existent-id"}
},
"jsonrpc" => "2.0",
"id" => 1
}
response = MCPServer.handle_mcp_request(request)
assert %{"jsonrpc" => "2.0", "id" => 1, "error" => %{"code" => -1, "message" => message}} =
response
assert String.contains?(message, "agent_not_found")
end
end
describe "full workflow integration" do
test "complete agent coordination workflow" do
# 1. Register an agent
register_request = %{
"method" => "tools/call",
"params" => %{
"name" => "register_agent",
"arguments" => %{
"name" => "WorkflowAgent",
"capabilities" => ["coding", "testing"]
}
},
"jsonrpc" => "2.0",
"id" => 1
}
register_response = MCPServer.handle_mcp_request(register_request)
register_data =
register_response["result"]["content"]
|> List.first()
|> Map.get("text")
|> Jason.decode!()
agent_id = register_data["agent_id"]
# 2. Create a task
create_request = %{
"method" => "tools/call",
"params" => %{
"name" => "create_task",
"arguments" => %{
"title" => "Workflow Task",
"description" => "Complete workflow test",
"priority" => "high",
"required_capabilities" => ["coding"]
}
},
"jsonrpc" => "2.0",
"id" => 2
}
create_response = MCPServer.handle_mcp_request(create_request)
create_data =
create_response["result"]["content"] |> List.first() |> Map.get("text") |> Jason.decode!()
task_id = create_data["task_id"]
assert create_data["assigned_to"] == agent_id
# 3. Get the task
get_request = %{
"method" => "tools/call",
"params" => %{
"name" => "get_next_task",
"arguments" => %{"agent_id" => agent_id}
},
"jsonrpc" => "2.0",
"id" => 3
}
get_response = MCPServer.handle_mcp_request(get_request)
get_data =
get_response["result"]["content"] |> List.first() |> Map.get("text") |> Jason.decode!()
assert get_data["task_id"] == task_id
assert get_data["title"] == "Workflow Task"
# 4. Check task board
board_request = %{
"method" => "tools/call",
"params" => %{
"name" => "get_task_board",
"arguments" => %{}
},
"jsonrpc" => "2.0",
"id" => 4
}
board_response = MCPServer.handle_mcp_request(board_request)
board_data =
board_response["result"]["content"] |> List.first() |> Map.get("text") |> Jason.decode!()
agent_status = board_data["agents"] |> List.first()
assert agent_status["agent_id"] == agent_id
assert agent_status["current_task"]["id"] == task_id
# 5. Complete the task
complete_request = %{
"method" => "tools/call",
"params" => %{
"name" => "complete_task",
"arguments" => %{"agent_id" => agent_id}
},
"jsonrpc" => "2.0",
"id" => 5
}
complete_response = MCPServer.handle_mcp_request(complete_request)
complete_data =
complete_response["result"]["content"]
|> List.first()
|> Map.get("text")
|> Jason.decode!()
assert complete_data["task_id"] == task_id
assert complete_data["status"] == "completed"
# 6. Verify task board shows completed state
final_board_response = MCPServer.handle_mcp_request(board_request)
final_board_data =
final_board_response["result"]["content"]
|> List.first()
|> Map.get("text")
|> Jason.decode!()
final_agent_status = final_board_data["agents"] |> List.first()
assert is_nil(final_agent_status["current_task"])
assert final_agent_status["completed_tasks"] == 1
end
end
end
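
For reference, the wire shape asserted throughout these tests is plain JSON-RPC 2.0; a `register_agent` call looks like the fragment below, and the server's reply nests a JSON-encoded payload under `result.content[0].text`:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "register_agent",
    "arguments": {"name": "TestAgent", "capabilities": ["coding", "testing"]}
  }
}
```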

71
test_enhanced.exs Normal file
View File

@@ -0,0 +1,71 @@
# Test enhanced Agent Coordinator with auto-heartbeat and unregister
# Start a client with automatic heartbeat
IO.puts "🚀 Testing Enhanced Agent Coordinator"
IO.puts "====================================="
{:ok, client1} = AgentCoordinator.Client.start_session("TestAgent1", [:coding, :analysis])
# Get session info
{:ok, info} = AgentCoordinator.Client.get_session_info(client1)
IO.puts "✅ Agent registered: #{info.agent_name} (#{info.agent_id})"
IO.puts " Auto-heartbeat: #{info.auto_heartbeat_enabled}"
# Check task board
{:ok, board} = AgentCoordinator.Client.get_task_board(client1)
IO.puts "📊 Task board status:"
IO.puts " Total agents: #{length(board.agents)}"
IO.puts " Active sessions: #{board.active_sessions}"
# Find our agent on the board
our_agent = Enum.find(board.agents, fn a -> a["agent_id"] == info.agent_id end)
IO.puts " Our agent online: #{our_agent["online"]}"
IO.puts " Session active: #{our_agent["session_active"]}"
# Test heartbeat functionality
IO.puts "\n💓 Testing manual heartbeat..."
{:ok, _} = AgentCoordinator.Client.heartbeat(client1)
IO.puts " Heartbeat sent successfully"
# Wait to observe automatic heartbeats
IO.puts "\n⏱️ Waiting 3 seconds to observe automatic heartbeats..."
Process.sleep(3000)
{:ok, updated_info} = AgentCoordinator.Client.get_session_info(client1)
IO.puts " Last heartbeat updated: #{DateTime.diff(updated_info.last_heartbeat, info.last_heartbeat) > 0}"
# Test unregister functionality
IO.puts "\n🔄 Testing unregister functionality..."
{:ok, result} = AgentCoordinator.Client.unregister_agent(client1, "Testing unregister from script")
IO.puts " Unregister result: #{result["status"]}"
# Check agent status after unregister
{:ok, final_board} = AgentCoordinator.Client.get_task_board(client1)
final_agent = Enum.find(final_board.agents, fn a -> a["agent_id"] == info.agent_id end)
case final_agent do
nil ->
IO.puts " Agent removed from board ✅"
agent ->
IO.puts " Agent still on board, online: #{agent["online"]}"
end
# Test task creation
IO.puts "\n📝 Testing task creation with heartbeats..."
{:ok, task_result} = AgentCoordinator.Client.create_task(
client1,
"Test Task",
"A test task to verify heartbeat integration",
%{"priority" => "normal"}
)
IO.puts " Task created: #{task_result["task_id"]}"
if Map.has_key?(task_result, "_heartbeat_metadata") do
IO.puts " Heartbeat metadata included ✅"
else
IO.puts " No heartbeat metadata ❌"
end
# Clean up
AgentCoordinator.Client.stop_session(client1)
IO.puts "\n✨ Test completed successfully!"

321
test_multi_codebase.exs Normal file
View File

@@ -0,0 +1,321 @@
#!/usr/bin/env elixir
# Multi-Codebase Coordination Test Script
# This script demonstrates how agents can coordinate across multiple codebases
Mix.install([
{:jason, "~> 1.4"},
{:uuid, "~> 1.1"}
])
defmodule MultiCodebaseTest do
@moduledoc """
Test script for multi-codebase agent coordination functionality.
Demonstrates cross-codebase task creation, dependency management, and agent coordination.
"""
def run do
IO.puts("=== Multi-Codebase Agent Coordination Test ===\n")
# Test 1: Register multiple codebases
test_codebase_registration()
# Test 2: Register agents in different codebases
test_agent_registration()
# Test 3: Create tasks within individual codebases
test_single_codebase_tasks()
# Test 4: Create cross-codebase tasks
test_cross_codebase_tasks()
# Test 5: Test cross-codebase dependencies
test_codebase_dependencies()
# Test 6: Verify coordination and task board
test_coordination_overview()
IO.puts("\n=== Test Completed ===")
end
def test_codebase_registration do
IO.puts("1. Testing Codebase Registration")
IO.puts(" - Registering frontend codebase...")
IO.puts(" - Registering backend codebase...")
IO.puts(" - Registering shared-lib codebase...")
frontend_codebase = %{
"id" => "frontend-app",
"name" => "Frontend Application",
"workspace_path" => "/workspace/frontend",
"description" => "React-based frontend application",
"metadata" => %{
"tech_stack" => ["react", "typescript", "tailwind"],
"dependencies" => ["backend-api", "shared-lib"]
}
}
backend_codebase = %{
"id" => "backend-api",
"name" => "Backend API",
"workspace_path" => "/workspace/backend",
"description" => "Node.js API server",
"metadata" => %{
"tech_stack" => ["nodejs", "express", "mongodb"],
"dependencies" => ["shared-lib"]
}
}
shared_lib_codebase = %{
"id" => "shared-lib",
"name" => "Shared Library",
"workspace_path" => "/workspace/shared",
"description" => "Shared utilities and types",
"metadata" => %{
"tech_stack" => ["typescript"],
"dependencies" => []
}
}
# Simulate MCP calls
simulate_mcp_call("register_codebase", frontend_codebase)
simulate_mcp_call("register_codebase", backend_codebase)
simulate_mcp_call("register_codebase", shared_lib_codebase)
IO.puts(" ✓ All codebases registered successfully\n")
end
def test_agent_registration do
IO.puts("2. Testing Agent Registration")
# Frontend agents
frontend_agent1 = %{
"name" => "frontend-dev-1",
"capabilities" => ["coding", "testing"],
"codebase_id" => "frontend-app",
"workspace_path" => "/workspace/frontend",
"cross_codebase_capable" => true
}
frontend_agent2 = %{
"name" => "frontend-dev-2",
"capabilities" => ["coding", "review"],
"codebase_id" => "frontend-app",
"workspace_path" => "/workspace/frontend",
"cross_codebase_capable" => false
}
# Backend agents
backend_agent1 = %{
"name" => "backend-dev-1",
"capabilities" => ["coding", "testing", "analysis"],
"codebase_id" => "backend-api",
"workspace_path" => "/workspace/backend",
"cross_codebase_capable" => true
}
# Shared library agent (cross-codebase capable)
shared_agent = %{
"name" => "shared-lib-dev",
"capabilities" => ["coding", "documentation", "review"],
"codebase_id" => "shared-lib",
"workspace_path" => "/workspace/shared",
"cross_codebase_capable" => true
}
agents = [frontend_agent1, frontend_agent2, backend_agent1, shared_agent]
Enum.each(agents, fn agent ->
IO.puts(" - Registering agent: #{agent["name"]} (#{agent["codebase_id"]})")
simulate_mcp_call("register_agent", agent)
end)
IO.puts(" ✓ All agents registered successfully\n")
end
def test_single_codebase_tasks do
IO.puts("3. Testing Single Codebase Tasks")
tasks = [
%{
"title" => "Update user interface components",
"description" => "Modernize the login and dashboard components",
"codebase_id" => "frontend-app",
"file_paths" => ["/src/components/Login.tsx", "/src/components/Dashboard.tsx"],
"required_capabilities" => ["coding"],
"priority" => "normal"
},
%{
"title" => "Implement user authentication API",
"description" => "Create secure user authentication endpoints",
"codebase_id" => "backend-api",
"file_paths" => ["/src/routes/auth.js", "/src/middleware/auth.js"],
"required_capabilities" => ["coding", "testing"],
"priority" => "high"
},
%{
"title" => "Add utility functions for date handling",
"description" => "Create reusable date utility functions",
"codebase_id" => "shared-lib",
"file_paths" => ["/src/utils/date.ts", "/src/types/date.ts"],
"required_capabilities" => ["coding", "documentation"],
"priority" => "normal"
}
]
Enum.each(tasks, fn task ->
IO.puts(" - Creating task: #{task["title"]} (#{task["codebase_id"]})")
simulate_mcp_call("create_task", task)
end)
IO.puts(" ✓ All single-codebase tasks created successfully\n")
end
def test_cross_codebase_tasks do
IO.puts("4. Testing Cross-Codebase Tasks")
# Task that affects multiple codebases
cross_codebase_task = %{
"title" => "Implement real-time notifications feature",
"description" => "Add real-time notifications across frontend and backend",
"primary_codebase_id" => "backend-api",
"affected_codebases" => ["backend-api", "frontend-app", "shared-lib"],
"coordination_strategy" => "sequential"
}
IO.puts(" - Creating cross-codebase task: #{cross_codebase_task["title"]}")
IO.puts(" Primary: #{cross_codebase_task["primary_codebase_id"]}")
IO.puts(" Affected: #{Enum.join(cross_codebase_task["affected_codebases"], ", ")}")
simulate_mcp_call("create_cross_codebase_task", cross_codebase_task)
# Another cross-codebase task with different strategy
parallel_task = %{
"title" => "Update shared types and interfaces",
"description" => "Synchronize type definitions across all codebases",
"primary_codebase_id" => "shared-lib",
"affected_codebases" => ["shared-lib", "frontend-app", "backend-api"],
"coordination_strategy" => "parallel"
}
IO.puts(" - Creating parallel cross-codebase task: #{parallel_task["title"]}")
simulate_mcp_call("create_cross_codebase_task", parallel_task)
IO.puts(" ✓ Cross-codebase tasks created successfully\n")
end
def test_codebase_dependencies do
IO.puts("5. Testing Codebase Dependencies")
dependencies = [
%{
"source_codebase_id" => "frontend-app",
"target_codebase_id" => "backend-api",
"dependency_type" => "api_consumption",
"metadata" => %{"api_version" => "v1", "endpoints" => ["auth", "users", "notifications"]}
},
%{
"source_codebase_id" => "frontend-app",
"target_codebase_id" => "shared-lib",
"dependency_type" => "library_import",
"metadata" => %{"imports" => ["types", "utils", "constants"]}
},
%{
"source_codebase_id" => "backend-api",
"target_codebase_id" => "shared-lib",
"dependency_type" => "library_import",
"metadata" => %{"imports" => ["types", "validators"]}
}
]
Enum.each(dependencies, fn dep ->
IO.puts(" - Adding dependency: #{dep["source_codebase_id"]} → #{dep["target_codebase_id"]} (#{dep["dependency_type"]})")
simulate_mcp_call("add_codebase_dependency", dep)
end)
IO.puts(" ✓ All codebase dependencies added successfully\n")
end
def test_coordination_overview do
IO.puts("6. Testing Coordination Overview")
IO.puts(" - Getting overall task board...")
simulate_mcp_call("get_task_board", %{})
IO.puts(" - Getting frontend codebase status...")
simulate_mcp_call("get_codebase_status", %{"codebase_id" => "frontend-app"})
IO.puts(" - Getting backend codebase status...")
simulate_mcp_call("get_codebase_status", %{"codebase_id" => "backend-api"})
IO.puts(" - Listing all codebases...")
simulate_mcp_call("list_codebases", %{})
IO.puts(" ✓ Coordination overview retrieved successfully\n")
end
defp simulate_mcp_call(tool_name, arguments) do
request = %{
"jsonrpc" => "2.0",
"id" => UUID.uuid4(),
"method" => "tools/call",
"params" => %{
"name" => tool_name,
"arguments" => arguments
}
}
# In a real implementation, this would make an actual MCP call
# For now, we'll just show the structure
IO.puts(" MCP Call: #{tool_name}")
IO.puts(" Arguments: #{Jason.encode!(arguments, pretty: true) |> String.replace("\n", "\n ")}")
# Simulate successful response
_response = %{
"jsonrpc" => "2.0",
"id" => request["id"],
"result" => %{
"content" => [%{
"type" => "text",
"text" => Jason.encode!(%{"status" => "success", "tool" => tool_name})
}]
}
}
IO.puts(" Response: success")
end
def simulate_task_flow do
IO.puts("\n=== Simulating Multi-Codebase Task Flow ===")
IO.puts("1. Cross-codebase task created:")
IO.puts(" - Main task assigned to backend agent")
IO.puts(" - Dependent task created for frontend")
IO.puts(" - Dependent task created for shared library")
IO.puts("\n2. Agent coordination:")
IO.puts(" - Backend agent starts implementation")
IO.puts(" - Publishes API specification to NATS stream")
IO.puts(" - Frontend agent receives notification")
IO.puts(" - Shared library agent updates type definitions")
IO.puts("\n3. File conflict detection:")
IO.puts(" - Frontend agent attempts to modify shared types")
IO.puts(" - System detects conflict with shared-lib agent's work")
IO.puts(" - Task is queued until shared-lib work completes")
IO.puts("\n4. Cross-codebase synchronization:")
IO.puts(" - Shared-lib agent completes type updates")
IO.puts(" - Frontend task is automatically unblocked")
IO.puts(" - All agents coordinate through NATS streams")
IO.puts("\n5. Task completion:")
IO.puts(" - All subtasks complete successfully")
IO.puts(" - Cross-codebase dependencies resolved")
IO.puts(" - Coordination system updates task board")
end
end
# Run the test
MultiCodebaseTest.run()
MultiCodebaseTest.simulate_task_flow()
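
As a hedged sketch, `simulate_mcp_call/2` could be wired to the real server by invoking `AgentCoordinator.MCPServer.handle_mcp_request/1` directly, the same entry point the MCPServerTest above exercises; the helper name `real_mcp_call/2` is illustrative:

```elixir
# Sketch: a drop-in replacement for simulate_mcp_call/2 that calls the
# running MCPServer (assumes the AgentCoordinator application is started).
defp real_mcp_call(tool_name, arguments) do
  request = %{
    "jsonrpc" => "2.0",
    "id" => UUID.uuid4(),
    "method" => "tools/call",
    "params" => %{"name" => tool_name, "arguments" => arguments}
  }

  case AgentCoordinator.MCPServer.handle_mcp_request(request) do
    %{"result" => %{"content" => [%{"text" => text} | _]}} -> {:ok, Jason.decode!(text)}
    %{"error" => error} -> {:error, error}
  end
end
```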