Compare commits

..

7 Commits

Author SHA1 Message Date
Ra d22675fd16 push to github registry not docker 2025-09-08 19:42:58 -07:00
  (Some checks failed: build-container / build (push) has been cancelled)
Ra 87b7181d4a don't copy the unused file into docker 2025-09-08 19:37:56 -07:00
Ra 0d9166cc56 cleanup unused file 2025-09-08 19:36:26 -07:00
Ra 74a8574778 Fix docker startup so that it works properly with stdio mode. Probably worthwhile to toss majority of this readme, less confusing 2025-09-08 19:35:27 -07:00
Ra 5d3e04c5f8 None of this is necessary 2025-09-06 10:13:39 -07:00
Ra 0327665fe2 update copilot instructions with something actually relevant 2025-09-06 10:08:55 -07:00
Ra b1f55799ec Add comprehensive agent activity tracking 2025-09-06 09:59:48 -07:00
  - Enhanced Agent struct with current_activity, current_files, and activity_history fields
  - Created ActivityTracker module to infer activities from tool calls
  - Integrated activity tracking into MCP server tool routing
  - Updated task board APIs to include activity information
  - Agents now show real-time status like 'Reading file.ex', 'Editing main.py', 'Sequential thinking', etc.
  - Added activity history to track recent agent actions
  - All file operations and tool calls are now tracked and displayed
21 changed files with 401 additions and 1321 deletions

View File

@@ -1,3 +0,0 @@
---
applyTo: '**/*.md'
---

View File

@@ -0,0 +1,9 @@
---
applyTo: '**'
---
- You must always use the agent-coordinator MCP server; it lets you coordinate your work with other agents that may be actively working on the same codebase as you.
- All MCP tool calls that would alter any state should go through the agent-coordinator, even VS Code internal tool calls. Searches do not need to go through the coordinator. The core reason for routing calls through the agent-coordinator is the heartbeat.
- Always append at least 2 random words to the end of your agent name when registering / sending metadata, e.g. `GitHub Copilot Silver Banana`, `GitHub Copilot Yellow Koala`, `GitHub Copilot Olive Llama`. This ensures that no two agents have overlapping names and get tracked as the same agent (hopefully).
- Always check the task board when given new user input that will require making any changes to files.
- Always populate your own task board with your task set so other agents are informed of your planned tasks; in turn, you will be able to see other agents' agendas.
- When you are coming to the end of your current query and will pass the turn to wait for user input, unregister yourself as an agent; on the follow-up, re-register and go through the same registration flow.
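The naming rule above can be sketched as a small shell snippet. The word lists here are illustrative only, not part of the coordinator:

```shell
# Append two random words to the base agent name so concurrently running
# agents never collide on the same identity. Word lists are illustrative.
adjectives=(Silver Yellow Olive Purple Crimson)
animals=(Banana Koala Llama Zebra Otter)
adj=${adjectives[$((RANDOM % ${#adjectives[@]}))]}
animal=${animals[$((RANDOM % ${#animals[@]}))]}
agent_name="GitHub Copilot $adj $animal"
echo "$agent_name"
```

Any source of two random words works; the point is only that the suffix makes each registration unique.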

View File

@@ -1,50 +0,0 @@
---
applyTo: '**'
---
# No Duplicate Files Policy
## Critical Rule: NO DUPLICATE FILES
**NEVER** create files with adjectives or verbs that duplicate existing functionality:
- `enhanced_mcp_server.ex` when `mcp_server.ex` exists
- `unified_mcp_server.ex` when `mcp_server.ex` exists
- `mcp_server_manager.ex` when `mcp_server.ex` exists
- `new_config.ex` when `config.ex` exists
- `improved_task_registry.ex` when `task_registry.ex` exists
## What To Do Instead
1. **BEFORE** making changes that might create a new file:
```bash
git add . && git commit -m "Save current state before refactoring"
```
2. **MODIFY** the existing file directly instead of creating a "new" version
3. **IF** you need to completely rewrite a file:
- Make the changes directly to the original file
- Don't create `*_new.*` or `enhanced_*.*` versions
## Why This Rule Exists
When you create duplicate files:
- Future sessions can't tell which file is "real"
- The codebase becomes inconsistent and confusing
- Multiple implementations cause bugs and maintenance nightmares
- Even YOU get confused about which file to edit next time
## The Human Is Right
The human specifically said: "Do not re-create the same file with some adjective/verb attached while leaving the original, instead, update the code and make it better, changes are good."
**Listen to them.** They prefer file replacement over duplicates.
## Implementation
- Always check if a file with similar functionality exists before creating a new one
- Use `git add . && git commit` before potentially destructive changes
- Replace, don't duplicate
- Keep the codebase clean and consistent
**This rule is more important than any specific feature request.**
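A quick pre-flight check for this policy might look like the sketch below. It runs against a throwaway directory here; in the real repo you would point `find` at `lib/` with the stem of the file you intend to create:

```shell
# Demo in a temporary directory: if any existing file shares the stem of
# the file you are about to create, edit that file instead of duplicating.
dir=$(mktemp -d)
touch "$dir/mcp_server.ex"
base="mcp_server"              # stem of the file you intend to create
matches=$(find "$dir" -name "*${base}*")
if [ -n "$matches" ]; then
  echo "similar file exists: $matches"
fi
```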

41
.github/workflows/build.yml vendored Normal file
View File

@@ -0,0 +1,41 @@
name: build-container
on:
  push:
    branches:
      - main
run-name: build-image-${{ github.run_id }}
permissions:
  contents: read
  packages: write
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v5
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to GHCR
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: |
            ghcr.io/rooba/agentcoordinator:latest
            ghcr.io/rooba/agentcoordinator:${{ github.sha }}
          file: ./Dockerfile
          github-token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,56 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- Initial repository structure cleanup
- Organized scripts into dedicated directories
- Enhanced documentation
- GitHub Actions CI/CD workflow
- Development and testing dependencies
### Changed
- Moved demo files to `examples/` directory
- Moved utility scripts to `scripts/` directory
- Updated project metadata in mix.exs
- Enhanced .gitignore for better coverage
## [0.1.0] - 2025-08-22
### Features
- Initial release of AgentCoordinator
- Distributed task coordination system for AI agents
- NATS-based messaging and persistence
- MCP (Model Context Protocol) server integration
- Task registry with agent-specific inboxes
- File-level conflict resolution
- Real-time agent communication
- Event sourcing with configurable retention
- Fault-tolerant supervision trees
- Command-line interface for task management
- VS Code integration setup scripts
- Comprehensive examples and documentation
### Core Features
- Agent registration and capability management
- Task creation, assignment, and completion
- Task board visualization
- Heartbeat monitoring for agent health
- Persistent task state with NATS JetStream
- MCP tools for external agent integration
### Development Tools
- Setup scripts for NATS and VS Code configuration
- Example MCP client implementations
- Test scripts for various scenarios
- Demo workflows for testing functionality

View File

@@ -1,195 +0,0 @@
# Contributing to AgentCoordinator
Thank you for your interest in contributing to AgentCoordinator! This document provides guidelines for contributing to the project.
## 🤝 Code of Conduct
By participating in this project, you agree to abide by our Code of Conduct. Please report unacceptable behavior to the project maintainers.
## 🚀 How to Contribute
### Reporting Bugs
1. **Check existing issues** first to see if the bug has already been reported
2. **Create a new issue** with a clear title and description
3. **Include reproduction steps** with specific details
4. **Provide system information** (Elixir version, OS, etc.)
5. **Add relevant logs** or error messages
### Suggesting Features
1. **Check existing feature requests** to avoid duplicates
2. **Create a new issue** with the `enhancement` label
3. **Describe the feature** and its use case clearly
4. **Explain why** this feature would be beneficial
5. **Provide examples** of how it would be used
### Development Setup
1. **Fork the repository** on GitHub
2. **Clone your fork** locally:
```bash
git clone https://github.com/your-username/agent_coordinator.git
cd agent_coordinator
```
3. **Install dependencies**:
```bash
mix deps.get
```
4. **Start NATS server**:
```bash
nats-server -js -p 4222 -m 8222
```
5. **Run tests** to ensure everything works:
```bash
mix test
```
### Making Changes
1. **Create a feature branch**:
```bash
git checkout -b feature/your-feature-name
```
2. **Make your changes** following our coding standards
3. **Add tests** for new functionality
4. **Run the test suite**:
```bash
mix test
```
5. **Run code quality checks**:
```bash
mix format
mix credo
mix dialyzer
```
6. **Commit your changes** with a descriptive message:
```bash
git commit -m "Add feature: your feature description"
```
7. **Push to your fork**:
```bash
git push origin feature/your-feature-name
```
8. **Create a Pull Request** on GitHub
## 📝 Coding Standards
### Elixir Style Guide
- Follow the [Elixir Style Guide](https://github.com/christopheradams/elixir_style_guide)
- Use `mix format` to format your code
- Write clear, descriptive function and variable names
- Add `@doc` and `@spec` for public functions
- Follow the existing code patterns in the project
### Code Organization
- Keep modules focused and cohesive
- Use appropriate GenServer patterns for stateful processes
- Follow OTP principles and supervision tree design
- Organize code into logical namespaces
### Testing
- Write comprehensive tests for all new functionality
- Use descriptive test names that explain what is being tested
- Follow the existing test patterns and structure
- Ensure tests are fast and reliable
- Aim for good test coverage (check with `mix test --cover`)
### Documentation
- Update documentation for any API changes
- Add examples for new features
- Keep the README.md up to date
- Use clear, concise language
- Include code examples where helpful
## 🔧 Pull Request Guidelines
### Before Submitting
- [ ] Tests pass locally (`mix test`)
- [ ] Code is properly formatted (`mix format`)
- [ ] No linting errors (`mix credo`)
- [ ] Type checks pass (`mix dialyzer`)
- [ ] Documentation is updated
- [ ] CHANGELOG.md is updated (if applicable)
### Pull Request Description
Please include:
1. **Clear title** describing the change
2. **Description** of what the PR does
3. **Issue reference** if applicable (fixes #123)
4. **Testing instructions** for reviewers
5. **Breaking changes** if any
6. **Screenshots** if UI changes are involved
### Review Process
1. At least one maintainer will review your PR
2. Address any feedback or requested changes
3. Once approved, a maintainer will merge your PR
4. Your contribution will be credited in the release notes
## 🧪 Testing
### Running Tests
```bash
# Run all tests
mix test
# Run tests with coverage
mix test --cover
# Run specific test file
mix test test/agent_coordinator/mcp_server_test.exs
# Run tests in watch mode
mix test.watch
```
### Writing Tests
- Place test files in the `test/` directory
- Mirror the structure of the `lib/` directory
- Use descriptive `describe` blocks to group related tests
- Use `setup` blocks for common test setup
- Mock external dependencies appropriately
## 🚀 Release Process
1. Update version in `mix.exs`
2. Update `CHANGELOG.md` with new version details
3. Create and push a version tag
4. Create a GitHub release
5. Publish to Hex (maintainers only)
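Steps 3 and 4 above can be sketched as follows, demonstrated in a throwaway repository; the version number is illustrative:

```shell
# Create an annotated tag for the release and list it. In the real project
# you would run this from the repo root after bumping mix.exs, then push
# the tag with `git push origin v0.2.0`.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "release prep"
git tag -a v0.2.0 -m "Release v0.2.0"
git tag --list
```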
## 📞 Getting Help
- **GitHub Issues**: For bugs and feature requests
- **GitHub Discussions**: For questions and general discussion
- **Documentation**: Check the [online docs](https://hexdocs.pm/agent_coordinator)
## 🏷️ Issue Labels
- `bug`: Something isn't working
- `enhancement`: New feature or request
- `documentation`: Improvements or additions to documentation
- `good first issue`: Good for newcomers
- `help wanted`: Extra attention is needed
- `question`: Further information is requested
## 🎉 Recognition
Contributors will be:
- Listed in the project's contributors section
- Mentioned in release notes for significant contributions
- Given credit in any related blog posts or presentations
Thank you for contributing to AgentCoordinator! 🚀

View File

@@ -2,18 +2,16 @@
# Creates a production-ready container for the MCP server without requiring local Elixir/OTP installation
# Build stage - Use official Elixir image with OTP
FROM elixir:1.16-otp-26-alpine AS builder
FROM elixir:1.18 AS builder
# Install build dependencies
RUN apk add --no-cache \
build-base \
# Set environment variables
RUN apt-get update && apt-get install -y \
git \
curl \
bash
# Install Node.js and npm for MCP external servers (bunx dependency)
RUN apk add --no-cache nodejs npm
RUN npm install -g bun
bash \
unzip \
zlib1g
# Set build environment
ENV MIX_ENV=prod
@@ -22,79 +20,29 @@ ENV MIX_ENV=prod
WORKDIR /app
# Copy mix files
COPY mix.exs mix.lock ./
COPY lib lib
COPY mcp_servers.json \
mix.exs \
mix.lock \
docker-entrypoint.sh ./
COPY scripts ./scripts/
# Install mix dependencies
RUN mix local.hex --force && \
mix local.rebar --force && \
mix deps.get --only $MIX_ENV && \
mix deps.compile
# Copy source code
COPY lib lib
COPY config config
# Compile the release
RUN mix deps.get
RUN mix deps.compile
RUN mix compile
# Prepare release
RUN mix release
RUN chmod +x ./docker-entrypoint.sh ./scripts/mcp_launcher.sh
RUN curl -fsSL https://bun.sh/install | bash
RUN ln -s /root/.bun/bin/* /usr/local/bin/
# Runtime stage - Use smaller Alpine image
FROM alpine:3.18 AS runtime
# Install runtime dependencies
RUN apk add --no-cache \
bash \
openssl \
ncurses-libs \
libstdc++ \
nodejs \
npm
# Install Node.js packages for external MCP servers
RUN npm install -g bun
# Create non-root user for security
RUN addgroup -g 1000 appuser && \
adduser -u 1000 -G appuser -s /bin/bash -D appuser
# Create app directory and set permissions
WORKDIR /app
RUN chown -R appuser:appuser /app
# Copy the release from builder stage
COPY --from=builder --chown=appuser:appuser /app/_build/prod/rel/agent_coordinator ./
# Copy configuration files
COPY --chown=appuser:appuser mcp_servers.json ./
COPY --chown=appuser:appuser scripts/mcp_launcher.sh ./scripts/
# Make scripts executable
RUN chmod +x ./scripts/mcp_launcher.sh
# Copy Docker entrypoint script
COPY --chown=appuser:appuser docker-entrypoint.sh ./
RUN chmod +x ./docker-entrypoint.sh
# Switch to non-root user
USER appuser
# Set environment variables
ENV MIX_ENV=prod
ENV NATS_HOST=localhost
ENV NATS_PORT=4222
ENV SHELL=/bin/bash
# Expose the default port (if needed for HTTP endpoints)
EXPOSE 4000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD /app/bin/agent_coordinator ping || exit 1
# Set the entrypoint
ENTRYPOINT ["/app/docker-entrypoint.sh"]
# Default command
CMD ["/app/scripts/mcp_launcher.sh"]

View File

@@ -1 +0,0 @@
Free Software Ahead

745
README.md
View File

@@ -1,139 +1,40 @@
# Agent Coordinator
A **Model Context Protocol (MCP) server** that enables multiple AI agents to coordinate their work seamlessly across codebases without conflicts. Built with Elixir for reliability and fault tolerance.
Agent Coordinator is an MCP proxy server that enables multiple AI agents to collaborate seamlessly without conflicts. It acts as a single MCP interface that proxies ALL tool calls through itself, ensuring every agent maintains full project awareness while the coordinator tracks real-time agent presence.
## 🎯 What is Agent Coordinator?
Agent Coordinator is an **MCP proxy server** that enables multiple AI agents to collaborate seamlessly without conflicts. As shown in the architecture diagram above, it acts as a **single MCP interface** that proxies ALL tool calls through itself, ensuring every agent maintains full project awareness while the coordinator tracks real-time agent presence.
## What is Agent Coordinator?
**The coordinator operates as a transparent proxy layer:**
- **Single Interface**: All agents connect to one MCP server (the coordinator)
- **Proxy Architecture**: Every tool call flows through the coordinator to external MCP servers
- **Presence Tracking**: Each proxied tool call updates agent heartbeat and task status
- **Project Awareness**: All agents see the same unified view of project state through the proxy
**This proxy design orchestrates four core components:**
- **Task Registry**: Intelligent task queuing, agent matching, and automatic progress tracking
- **Agent Manager**: Agent registration, heartbeat monitoring, and capability-based assignment
- **Codebase Registry**: Cross-repository coordination, dependency management, and workspace organization
- **Unified Tool Registry**: Seamlessly proxies external MCP tools while adding coordination capabilities
Instead of agents conflicting over files or duplicating work, they connect through a **single MCP proxy interface** that routes ALL tool calls through the coordinator. This ensures every tool usage updates agent presence, tracks coordinated tasks, and maintains real-time project awareness across all agents via shared task boards and agent inboxes.
**Key Features:**
- **🔄 MCP Proxy Architecture**: Single server that proxies ALL external MCP servers for unified agent access
- **👁️ Real-Time Activity Tracking**: Live visibility into agent activities: "Reading file.ex", "Editing main.py", "Sequential thinking"
- **📡 Real-Time Presence Tracking**: Every tool call updates agent status and project awareness
- **📁 File-Level Coordination**: Track exactly which files each agent is working on to prevent conflicts
- **📜 Activity History**: Rolling log of recent agent actions with timestamps and file details
- **🤖 Multi-Agent Coordination**: Register multiple AI agents (GitHub Copilot, Claude, etc.) with different capabilities
- **🎯 Transparent Tool Routing**: Automatically routes tool calls to appropriate external servers while tracking usage
- **📝 Automatic Task Creation**: Every tool usage becomes a tracked task with agent coordination context
- **⚡ Full Project Awareness**: All agents see unified project state through the proxy layer
- **📡 External Server Management**: Automatically starts, monitors, and manages MCP servers defined in `mcp_servers.json`
- **🛠️ Universal Tool Registry**: Proxies tools from all external servers while adding native coordination tools
- **🔌 Dynamic Tool Discovery**: Automatically discovers new tools when external servers start/restart
- **🎮 Cross-Codebase Support**: Coordinate work across multiple repositories and projects
- **🔌 MCP Standard Compliance**: Works with any MCP-compatible AI agent or tool
## 🚀 How It Works
![Agent Coordinator Architecture](docs/architecture-diagram.svg)
**The Agent Coordinator acts as a transparent MCP proxy server** that routes ALL tool calls through itself to maintain agent presence and provide full project awareness. Every external MCP server is proxied through the coordinator, ensuring unified agent coordination.
### 🔄 Proxy Architecture Flow
1. **Agent Registration**: Multiple AI agents (Purple Zebra, Yellow Elephant, etc.) register with their capabilities
2. **External Server Discovery**: Coordinator automatically starts and discovers tools from external MCP servers
3. **Unified Proxy Interface**: All tools (native + external) are available through a single MCP interface
4. **Transparent Tool Routing**: ALL tool calls proxy through coordinator → external servers → coordinator → agents
5. **Presence Tracking**: Every proxied tool call updates agent heartbeat and task status
6. **Project Awareness**: All agents maintain unified project state through the proxy layer
## 👁️ Real-Time Activity Tracking - FANTASTIC Feature! 🎉
**See exactly what every agent is doing in real-time!** The coordinator intelligently tracks and displays agent activities as they happen:
### 🎯 Live Activity Examples
```json
{
  "agent_id": "github-copilot-purple-elephant",
  "name": "GitHub Copilot Purple Elephant",
  "current_activity": "Reading mix.exs",
  "current_files": ["/home/ra/agent_coordinator/mix.exs"],
  "activity_history": [
    {
      "activity": "Reading mix.exs",
      "files": ["/home/ra/agent_coordinator/mix.exs"],
      "timestamp": "2025-09-06T16:41:09.193087Z"
    },
    {
      "activity": "Sequential thinking: Analyzing the current codebase structure...",
      "files": [],
      "timestamp": "2025-09-06T16:41:05.123456Z"
    },
    {
      "activity": "Editing agent.ex",
      "files": ["/home/ra/agent_coordinator/lib/agent_coordinator/agent.ex"],
      "timestamp": "2025-09-06T16:40:58.987654Z"
    }
  ]
}
```
### 🚀 Activity Types Tracked
- **📂 File Operations**: "Reading config.ex", "Editing main.py", "Writing README.md", "Creating new_feature.js"
- **🧠 Thinking Activities**: "Sequential thinking: Analyzing the problem...", "Having a sequential thought..."
- **🔍 Search Operations**: "Searching for 'function'", "Semantic search for 'authentication'"
- **⚡ Terminal Commands**: "Running: mix test...", "Checking terminal output"
- **🛠️ VS Code Actions**: "VS Code: set editor content", "Viewing active editor in VS Code"
- **🧪 Testing**: "Running tests in user_test.exs", "Running all tests"
- **📊 Task Management**: "Creating task: Fix bug", "Getting next task", "Completing current task"
- **🌐 Web Operations**: "Fetching 3 webpages", "Getting library docs for React"
### 🎯 Benefits
- **🚫 Prevent File Conflicts**: See which files are being edited by which agents
- **👥 Coordinate Team Work**: Know when agents are working on related tasks
- **🐛 Debug Agent Behavior**: Track what agents did before encountering issues
- **📈 Monitor Progress**: Watch real-time progress across multiple agents
- **🔄 Optimize Workflows**: Identify bottlenecks and coordination opportunities
**Every tool call automatically updates the agent's activity - no configuration needed!** 🫡😸
## Overview
<!-- ![Agent Coordinator Architecture](docs/architecture-diagram.svg) Let's not show this it's confusing -->
### 🏗️ Architecture Components
**Core Coordinator Components:**
- **Task Registry**: Intelligent task queuing, agent matching, and progress tracking
- **Agent Manager**: Registration, heartbeat monitoring, and capability-based assignment
- **Codebase Registry**: Cross-repository coordination and workspace management
- **Unified Tool Registry**: Combines native coordination tools with external MCP tools
- Task Registry: Intelligent task queuing, agent matching, and progress tracking
- Agent Manager: Registration, heartbeat monitoring, and capability-based assignment
- Codebase Registry: Cross-repository coordination and workspace management
- Unified Tool Registry: Combines native coordination tools with external MCP tools
- Every tool call automatically updates the agent's activity for other agents to see
**External Integration:**
- **MCP Servers**: Filesystem, Memory, Context7, Sequential Thinking, and more
- **VS Code Integration**: Direct editor commands and workspace management
- **Real-Time Dashboard**: Live task board showing agent status and progress
- VS Code Integration: Direct editor commands and workspace management
**Example Proxy Tool Call Flow:**
```text
Agent calls "read_file" → Coordinator proxies to filesystem server →
Updates agent presence + task tracking → Returns file content to agent
Result: All other agents now aware of the file access via task board
```
## 🔧 MCP Server Management & Unified Tool Registry
Agent Coordinator acts as a **unified MCP proxy server** that manages multiple external MCP servers while providing its own coordination capabilities. This creates a single, powerful interface for AI agents to access hundreds of tools seamlessly.
### 📡 External Server Management
### External Server Management
The coordinator automatically manages external MCP servers based on configuration in `mcp_servers.json`:
@@ -143,7 +44,7 @@ The coordinator automatically manages external MCP servers based on configuratio
"mcp_filesystem": {
"type": "stdio",
"command": "bunx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/ra"],
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
"auto_restart": true,
"description": "Filesystem operations server"
},
@@ -153,12 +54,6 @@ The coordinator automatically manages external MCP servers based on configuratio
"args": ["-y", "@modelcontextprotocol/server-memory"],
"auto_restart": true,
"description": "Memory and knowledge graph server"
},
"mcp_figma": {
"type": "http",
"url": "http://127.0.0.1:3845/mcp",
"auto_restart": true,
"description": "Figma design integration server"
}
},
"config": {
@@ -170,514 +65,204 @@ The coordinator automatically manages external MCP servers based on configuratio
}
```
**Server Lifecycle Management:**
1. **🚀 Startup**: Reads config and spawns each external server process
2. **🔍 Discovery**: Sends MCP `initialize` and `tools/list` requests to discover available tools
3. **📋 Registration**: Adds discovered tools to the unified tool registry
4. **💓 Monitoring**: Continuously monitors server health and heartbeat
5. **🔄 Auto-Restart**: Automatically restarts failed servers (if configured)
6. **🛡️ Cleanup**: Properly terminates processes and cleans up resources on shutdown
### 🛠️ Unified Tool Registry
The coordinator combines tools from multiple sources into a single, coherent interface:
**Native Coordination Tools:**
- `register_agent` - Register agents with capabilities
- `create_task` - Create coordination tasks
- `get_next_task` - Get assigned tasks
- `complete_task` - Mark tasks complete
- `get_task_board` - View all agent status
- `heartbeat` - Maintain agent liveness
**External Server Tools (Auto-Discovered):**
- **Filesystem**: `read_file`, `write_file`, `list_directory`, `search_files`
- **Memory**: `search_nodes`, `store_memory`, `recall_information`
- **Context7**: `get-library-docs`, `search-docs`, `get-library-info`
- **Figma**: `get_code`, `get_designs`, `fetch_assets`
- **Sequential Thinking**: `sequentialthinking`, `analyze_problem`
- **VS Code**: `run_command`, `install_extension`, `open_file`, `create_task`
**Dynamic Discovery Process:**
1. **🚀 Startup**: Agent Coordinator starts external MCP server process
2. **🤝 Initialize**: Sends MCP `initialize` request → Server responds with capabilities
3. **📋 Discovery**: Sends `tools/list` request → Server returns available tools
4. **✅ Registration**: Adds discovered tools to unified tool registry
This process repeats automatically when servers restart or new servers are added.
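Over stdio, those two discovery requests are single-line JSON-RPC messages. A sketch of what a client writes to the server's stdin; the `protocolVersion` and `clientInfo` values are assumptions, not taken from the coordinator's source:

```shell
# The two discovery messages, one JSON-RPC request per line, exactly as a
# client would write them to a stdio MCP server's stdin.
msgs=$(printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"example-client","version":"0.0.0"}}}' \
  '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}')
echo "$msgs"
```

The server answers each line with a matching JSON response; the `tools/list` result is what populates the unified tool registry.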
### 🎯 Intelligent Tool Routing
When an AI agent calls a tool, the coordinator intelligently routes the request:
**Routing Logic:**
1. **Native Tools**: Handled directly by Agent Coordinator modules
2. **External Tools**: Routed to the appropriate external MCP server
3. **VS Code Tools**: Routed to integrated VS Code Tool Provider
4. **Unknown Tools**: Return helpful error with available alternatives
**Automatic Task Tracking:**
- Every tool call automatically creates or updates agent tasks
- Maintains context of what agents are working on
- Provides visibility into cross-agent coordination
- Enables intelligent task distribution and conflict prevention
**Example Tool Call Flow:**
```bash
Agent calls "read_file" → Coordinator routes to filesystem server →
Updates agent task → Sends heartbeat → Returns file content
```
## 🛠️ Prerequisites
## Prerequisites
Choose one of these installation methods:
### Option 1: Docker (Recommended - No Elixir Installation Required)
[Docker](#1-start-nats-server)
- **Docker**: 20.10+ and Docker Compose
- **Node.js**: 18+ (for external MCP servers via bun)
### Option 2: Manual Installation
[Manual Installation](#manual-setup)
- **Elixir**: 1.16+ with OTP 26+
- **Mix**: Comes with Elixir installation
- **Node.js**: 18+ (for external MCP servers via bun)
- **Node.js**: 18+ (for some MCP servers)
- **uv**: If using python MCP servers
## ⚡ Quick Start
### Docker Setup
### Option A: Docker Setup (Easiest)
#### 1. Start NATS Server
#### 1. Get the Code
First, start a NATS server that the Agent Coordinator can connect to:
```bash
git clone https://github.com/your-username/agent_coordinator.git
cd agent_coordinator
# Start NATS server with persistent storage
docker run -d \
--name nats-server \
--network agent-coordinator-net \
-p 4222:4222 \
-p 8222:8222 \
-v nats_data:/data \
nats:2.10-alpine \
--jetstream \
--store_dir=/data \
--max_mem_store=1Gb \
--max_file_store=10Gb
# Create the network first if it doesn't exist
docker network create agent-coordinator-net
```
#### 2. Run with Docker Compose
#### 2. Configure Your AI Tools
**For STDIO Mode (Recommended - Direct MCP Integration):**
First, create a Docker network and start the NATS server:
```bash
# Start the full stack (MCP server + NATS + monitoring)
docker-compose up -d
# Create network for secure communication
docker network create agent-coordinator-net
# Or start just the MCP server
docker-compose up agent-coordinator
# Check logs
docker-compose logs -f agent-coordinator
# Start NATS server
docker run -d \
--name nats-server \
--network agent-coordinator-net \
-p 4222:4222 \
-v nats_data:/data \
nats:2.10-alpine \
--jetstream \
--store_dir=/data \
--max_mem_store=1Gb \
--max_file_store=10Gb
```
#### 3. Configuration
Then add this configuration to your VS Code `mcp.json` configuration file via `Ctrl+Shift+P` and `MCP: Open User Configuration` (or `MCP: Open Remote User Configuration` if running on a remote server):
Edit `mcp_servers.json` to configure external MCP servers, then restart:
```bash
docker-compose restart agent-coordinator
```json
{
  "servers": {
    "agent-coordinator": {
      "command": "docker",
      "args": [
        "run",
        "--network=agent-coordinator-net",
        "-v=./mcp_servers.json:/app/mcp_servers.json:ro",
        "-v=/path/to/your/workspace:/workspace:rw",
        "-e=NATS_HOST=nats-server",
        "-e=NATS_PORT=4222",
        "-i",
        "--rm",
        "ghcr.io/rooba/agentcoordinator:latest"
      ],
      "type": "stdio"
    }
  }
}
```
### Option B: Manual Setup
**Important Notes for File System Access:**
#### 1. Clone the Repository
If you're using MCP filesystem servers, mount the directories they need access to:
```json
{
  "args": [
    "run",
    "--network=agent-coordinator-net",
    "-v=./mcp_servers.json:/app/mcp_servers.json:ro",
    "-v=/home/user/projects:/home/user/projects:rw",
    "-v=/path/to/workspace:/workspace:rw",
    "-e=NATS_HOST=nats-server",
    "-e=NATS_PORT=4222",
    "-i",
    "--rm",
    "ghcr.io/rooba/agentcoordinator:latest"
  ]
}
```
**For HTTP/WebSocket Mode (Alternative - Web API Access):**
If you prefer to run as a web service instead of stdio:
```bash
git clone https://github.com/your-username/agent_coordinator.git
cd agent_coordinator
# Create network first
docker network create agent-coordinator-net
# Start NATS server
docker run -d \
--name nats-server \
--network agent-coordinator-net \
-p 4222:4222 \
-v nats_data:/data \
nats:2.10-alpine \
--jetstream \
--store_dir=/data \
--max_mem_store=1Gb \
--max_file_store=10Gb
# Run Agent Coordinator in HTTP mode
docker run -d \
--name agent-coordinator \
--network agent-coordinator-net \
-p 8080:4000 \
-v ./mcp_servers.json:/app/mcp_servers.json:ro \
-v /path/to/workspace:/workspace:rw \
-e NATS_HOST=nats-server \
-e NATS_PORT=4222 \
-e MCP_INTERFACE_MODE=http \
-e MCP_HTTP_PORT=4000 \
ghcr.io/rooba/agentcoordinator:latest
```
Then access via HTTP API at `http://localhost:8080/mcp` or configure your MCP client to use the HTTP endpoint.
Create or edit `mcp_servers.json` in your project directory to configure external MCP servers:
```json
{
  "servers": {
    "mcp_filesystem": {
      "type": "stdio",
      "command": "bunx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
      "auto_restart": true
    }
  }
}
```
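Since a malformed config will keep external servers from starting, it can help to validate the JSON before launching. A sketch using a sample copy in `/tmp`; in practice you would run the last line against your real `mcp_servers.json`:

```shell
# Write a sample config and confirm it parses as JSON before restarting
# the coordinator.
cat > /tmp/mcp_servers.json <<'EOF'
{
  "servers": {
    "mcp_filesystem": {
      "type": "stdio",
      "command": "bunx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
      "auto_restart": true
    }
  }
}
EOF
python3 -m json.tool /tmp/mcp_servers.json > /dev/null && echo "valid JSON"
```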
### Manual Setup
#### Clone the Repository
> It is suggested to install Elixir (and Erlang) via [asdf](https://asdf-vm.com/) for easy version management.
> NATS can be found at [nats.io](https://github.com/nats-io/nats-server/releases/latest), or via Docker
```bash
git clone https://github.com/rooba/agentcoordinator.git
cd agentcoordinator
mix deps.get
mix compile
```
#### 2. Start the MCP Server
#### Start the MCP Server directly
```bash
# Start the MCP server directly
export MCP_INTERFACE_MODE=stdio # or http / websocket
# export MCP_HTTP_PORT=4000 # if using http mode
./scripts/mcp_launcher.sh
# Or in development mode
mix run --no-halt
```
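Since stdio mode speaks newline-delimited JSON-RPC, a one-line `initialize` request makes a quick smoke test. The request shape follows the MCP specification; piping it into the launcher assumes the build steps above completed:

```shell
# Build an MCP initialize request and check it is valid JSON
req='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.1"}}}'
# With the server built, expect a single JSON response line back:
#   echo "$req" | ./scripts/mcp_launcher.sh | head -n 1
echo "$req" | python3 -m json.tool > /dev/null && echo "request is valid JSON"
```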
### Configure Your AI Tools
#### For Docker Setup
If using Docker, the MCP server is exposed over the container's stdio interface. Add this to your VS Code `settings.json` (or the `mcp.json` / `mcp_servers.json` file your tool uses):
```json
{
"github.copilot.advanced": {
"mcp": {
"servers": {
"agent-coordinator": {
"command": "docker",
"args": ["exec", "-i", "agent-coordinator", "/app/scripts/mcp_launcher.sh"],
"env": {
"MIX_ENV": "prod"
}
}
}
}
}
}
```
#### For Manual Setup
Add this to your VS Code `settings.json`:
```json
{
  "github.copilot.advanced": {
    "mcp": {
      "servers": {
        "agent-coordinator": {
          "command": "/path/to/agent_coordinator/scripts/mcp_launcher.sh",
          "args": [],
          "env": {
            "MIX_ENV": "prod",
            "NATS_HOST": "localhost",
            "NATS_PORT": "4222"
          }
        }
      }
    }
  }
}
```
### Test It Works
#### Docker Testing
```bash
# Test with Docker
docker-compose exec agent-coordinator /app/bin/agent_coordinator ping
# Run example (if available in container)
docker-compose exec agent-coordinator mix run examples/full_workflow_demo.exs
# View logs
docker-compose logs -f agent-coordinator
```
#### Manual Testing
```bash
# Run the demo to see it in action
mix run examples/full_workflow_demo.exs
```
## 🐳 Docker Usage Guide
### Available Docker Commands
#### Basic Operations
```bash
# Build the image
docker build -t agent-coordinator .
# Run standalone container
docker run -d --name agent-coordinator -p 4000:4000 agent-coordinator
# Run with custom config
docker run -d \
-v ./mcp_servers.json:/app/mcp_servers.json:ro \
-p 4000:4000 \
agent-coordinator
```
#### Docker Compose Operations
```bash
# Start full stack
docker-compose up -d
# Start only agent coordinator
docker-compose up -d agent-coordinator
# View logs
docker-compose logs -f agent-coordinator
# Restart after config changes
docker-compose restart agent-coordinator
# Stop everything
docker-compose down
# Remove volumes (reset data)
docker-compose down -v
```
#### Development with Docker
```bash
# Start in development mode
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
# Interactive shell for debugging
docker-compose exec agent-coordinator bash
# Run tests in container
docker-compose exec agent-coordinator mix test
# Watch logs during development
docker-compose logs -f
```
### Environment Variables
Configure the container using environment variables:
```yaml
# docker-compose.override.yml example
version: '3.8'
services:
agent-coordinator:
environment:
- MIX_ENV=prod
- NATS_HOST=nats
- NATS_PORT=4222
- LOG_LEVEL=info
```
### Custom Configuration
#### External MCP Servers
Mount your own `mcp_servers.json`:
```bash
docker run -d \
-v ./my-mcp-config.json:/app/mcp_servers.json:ro \
agent-coordinator
```
#### Persistent Data
```bash
docker run -d \
-v agent_data:/app/data \
-v nats_data:/data \
agent-coordinator
```
### Monitoring & Health Checks
#### Container Health
```bash
# Check container health
docker-compose ps
# Health check details
docker inspect --format='{{json .State.Health}}' agent-coordinator
# Manual health check
docker-compose exec agent-coordinator /app/bin/agent_coordinator ping
```
#### NATS Monitoring
Access NATS monitoring dashboard:
```bash
# Start with monitoring profile
docker-compose --profile monitoring up -d
# Access dashboard at http://localhost:8080
open http://localhost:8080
```
### Troubleshooting
#### Common Issues
```bash
# Check container logs
docker-compose logs agent-coordinator
# Check NATS connectivity
docker-compose exec agent-coordinator nc -z nats 4222
# Restart stuck container
docker-compose restart agent-coordinator
# Reset everything
docker-compose down -v && docker-compose up -d
```
#### Performance Tuning
```bash
# Set resource limits when running the container directly
# (docker-compose up has no --memory/--cpus flags; use mem_limit/cpus
# on the service in the compose file instead)
docker run -d --name agent-coordinator \
  --memory=1g --cpus=2 \
  ghcr.io/rooba/agentcoordinator:latest
```
## 🎮 How to Use
Once your AI agents are connected via MCP, they can:
### Register as an Agent
```elixir
# An agent identifies itself with capabilities
register_agent("GitHub Copilot", ["coding", "testing"], codebase_id: "my-project")
```
### Create Tasks
```elixir
# Tasks are created with requirements
create_task("Fix login bug", "Authentication fails on mobile",
priority: "high",
required_capabilities: ["coding", "debugging"]
)
```
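Under the hood these helper calls are ordinary MCP `tools/call` requests. Below is a sketch of what a client sends for the `create_task` example above; the argument names mirror that call and are assumptions about the tool's exact input schema:

```shell
# Serialize the create_task call as an MCP tools/call request
python3 - <<'EOF'
import json

request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_task",
        "arguments": {
            "title": "Fix login bug",
            "description": "Authentication fails on mobile",
            "priority": "high",
            "required_capabilities": ["coding", "debugging"],
        },
    },
}
with open("create_task_request.json", "w") as f:
    json.dump(request, f, indent=2)
print(request["method"], request["params"]["name"])
EOF
# → tools/call create_task
```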
### Coordinate Automatically
The coordinator automatically:
- **Matches** tasks to agents based on capabilities
- **Queues** tasks when no suitable agents are available
- **Tracks** agent heartbeats to ensure they're still working
- **Handles** cross-codebase tasks that span multiple repositories
### Available MCP Tools
All MCP-compatible AI agents get these tools automatically:
| Tool | Purpose |
|------|---------|
| `register_agent` | Register an agent with capabilities |
| `create_task` | Create a new task with requirements |
| `get_next_task` | Get the next task assigned to an agent |
| `complete_task` | Mark current task as completed |
| `get_task_board` | View all agents and their status |
| `heartbeat` | Send agent heartbeat to stay active |
| `register_codebase` | Register a new codebase/repository |
| `create_cross_codebase_task` | Create tasks spanning multiple repos |
## 🧪 Development & Testing
### Running Tests
```bash
# Run all tests
mix test
# Run with coverage
mix test --cover
# Try the examples
mix run examples/full_workflow_demo.exs
mix run examples/auto_heartbeat_demo.exs
```
### Code Quality
```bash
# Format code
mix format
# Run static analysis
mix credo
# Type checking
mix dialyzer
```
## 📁 Project Structure
```text
agent_coordinator/
├── lib/
│ ├── agent_coordinator.ex # Main module
│ └── agent_coordinator/
│ ├── mcp_server.ex # MCP protocol implementation
│ ├── task_registry.ex # Task management
│ ├── agent.ex # Agent management
│ ├── codebase_registry.ex # Multi-repository support
│ └── application.ex # Application supervisor
├── examples/ # Working examples
├── test/ # Test suite
├── scripts/ # Helper scripts
└── docs/ # Technical documentation
├── README.md # Documentation index
├── AUTO_HEARTBEAT.md # Unified MCP server details
├── VSCODE_TOOL_INTEGRATION.md # VS Code integration
└── LANGUAGE_IMPLEMENTATIONS.md # Alternative language guides
```
## 🤔 Why This Design?
**The Problem**: Multiple AI agents working on the same codebase step on each other, duplicate work, or create conflicts.
**The Solution**: A coordination layer that:
- Lets agents register their capabilities
- Intelligently distributes tasks
- Tracks progress and prevents conflicts
- Scales across multiple repositories
**Why Elixir?**: Built-in concurrency, fault tolerance, and excellent for coordination systems.
## 🚀 Alternative Implementations
While this Elixir version works great, you might want to consider these languages for broader adoption:
### Go Implementation
- **Pros**: Single binary deployment, great performance, large community
- **Cons**: More verbose concurrency patterns
- **Best for**: Teams wanting simple deployment and good performance
### Python Implementation
- **Pros**: Huge ecosystem, familiar to most developers, excellent tooling
- **Cons**: GIL limitations for true concurrency
- **Best for**: AI/ML teams already using Python ecosystem
### Rust Implementation
- **Pros**: Maximum performance, memory safety, growing adoption
- **Cons**: Steeper learning curve, smaller ecosystem
- **Best for**: Performance-critical deployments
### Node.js Implementation
- **Pros**: JavaScript familiarity, event-driven nature fits coordination
- **Cons**: Single-threaded limitations, callback complexity
- **Best for**: Web teams already using Node.js
## 🤝 Contributing
Contributions are welcome! Here's how:
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- [Model Context Protocol](https://modelcontextprotocol.io/) for the agent communication standard
- [Elixir](https://elixir-lang.org/) community for the excellent ecosystem
- AI development teams pushing the boundaries of collaborative coding
---
**Agent Coordinator** - Making AI agents work together, not against each other.

View File

@@ -18,7 +18,6 @@ services:
profiles:
- dev
# Lightweight development NATS without persistence
nats:
command:
- '--jetstream'

View File

@@ -1,51 +1,17 @@
version: '3.8'
services:
# Agent Coordinator MCP Server
agent-coordinator:
build:
context: .
dockerfile: Dockerfile
container_name: agent-coordinator
environment:
- MIX_ENV=prod
- NATS_HOST=nats
- NATS_PORT=4222
volumes:
# Mount local mcp_servers.json for easy configuration
- ./mcp_servers.json:/app/mcp_servers.json:ro
# Mount a directory for persistent data (optional)
- agent_data:/app/data
ports:
# Expose port 4000 if the app serves HTTP endpoints
- "4000:4000"
depends_on:
nats:
condition: service_healthy
restart: unless-stopped
healthcheck:
test: ["/app/bin/agent_coordinator", "ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
# NATS Message Broker (optional but recommended for production)
nats:
image: nats:2.10-alpine
container_name: agent-coordinator-nats
command:
- '--jetstream'
- '--store_dir=/data'
- '--max_file_store=1G'
- '--max_mem_store=256M'
- '--http_port=8222'
ports:
# NATS client port
- "4222:4222"
# NATS HTTP monitoring port
- "8222:8222"
# NATS routing port for clustering
- "6222:6222"
- "4223:4222"
- "8223:8222"
- "6223:6222"
volumes:
- nats_data:/data
restart: unless-stopped
@@ -55,31 +21,32 @@ services:
timeout: 5s
retries: 3
start_period: 10s
networks:
- agent-coordinator-network
# Optional: NATS Monitoring Dashboard
nats-board:
image: devforth/nats-board:latest
container_name: agent-coordinator-nats-board
agent-coordinator:
image: ghcr.io/rooba/agentcoordinator:latest
container_name: agent-coordinator
environment:
- NATS_HOSTS=nats:4222
- NATS_HOST=nats
- NATS_PORT=4222
- MIX_ENV=prod
volumes:
- ./mcp_servers.json:/app/mcp_servers.json:ro
- ./workspace:/workspace:rw
ports:
- "8080:8080"
- "4000:4000"
depends_on:
nats:
condition: service_healthy
restart: unless-stopped
profiles:
- monitoring
networks:
- agent-coordinator-network
volumes:
# Persistent storage for NATS JetStream
nats_data:
driver: local
# Persistent storage for agent coordinator data
agent_data:
driver: local
networks:
default:
name: agent-coordinator-network
agent-coordinator-network:
driver: bridge

View File

@@ -9,13 +9,23 @@ set -e
export MIX_ENV="${MIX_ENV:-prod}"
export NATS_HOST="${NATS_HOST:-localhost}"
export NATS_PORT="${NATS_PORT:-4222}"
export DOCKERIZED="true"
COLORIZED="${COLORIZED:-}"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
if [ ! -z "$COLORIZED" ]; then
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
else
RED=''
GREEN=''
YELLOW=''
BLUE=''
NC=''
fi
# Logging functions
log_info() {
@@ -30,22 +40,12 @@ log_error() {
echo -e "${RED}[ERROR]${NC} $1" >&2
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1" >&2
log_debug() {
echo -e "${GREEN}[DEBUG]${NC} $1" >&2
}
# Cleanup function for graceful shutdown
cleanup() {
log_info "Received shutdown signal, cleaning up..."
# Send termination signals to child processes
if [ ! -z "$MAIN_PID" ]; then
log_info "Stopping main process (PID: $MAIN_PID)..."
kill -TERM "$MAIN_PID" 2>/dev/null || true
wait "$MAIN_PID" 2>/dev/null || true
fi
log_success "Cleanup completed"
log_info "Received shutdown signal, shutting down..."
exit 0
}
@@ -62,7 +62,7 @@ wait_for_nats() {
while [ $count -lt $timeout ]; do
if nc -z "$NATS_HOST" "$NATS_PORT" 2>/dev/null; then
log_success "NATS is available"
log_debug "NATS is available"
return 0
fi
@@ -88,13 +88,7 @@ validate_config() {
exit 1
fi
# Validate JSON
if ! cat /app/mcp_servers.json | bun run -e "JSON.parse(require('fs').readFileSync(0, 'utf8'))" >/dev/null 2>&1; then
log_error "Invalid JSON in mcp_servers.json"
exit 1
fi
log_success "Configuration validation passed"
log_debug "Configuration validation passed"
}
# Pre-install external MCP server dependencies
@@ -120,7 +114,7 @@ preinstall_dependencies() {
bun add --global --silent "$package" || log_warn "Failed to cache $package"
done
log_success "Dependencies pre-installed"
log_debug "Dependencies pre-installed"
}
# Main execution
@@ -129,6 +123,7 @@ main() {
log_info "Environment: $MIX_ENV"
log_info "NATS: $NATS_HOST:$NATS_PORT"
# Validate configuration
validate_config
@@ -147,8 +142,7 @@ main() {
if [ "$#" -eq 0 ] || [ "$1" = "/app/scripts/mcp_launcher.sh" ]; then
# Default: start the MCP server
log_info "Starting MCP server via launcher script..."
exec "/app/scripts/mcp_launcher.sh" &
MAIN_PID=$!
exec "/app/scripts/mcp_launcher.sh"
elif [ "$1" = "bash" ] || [ "$1" = "sh" ]; then
# Interactive shell mode
log_info "Starting interactive shell..."
@@ -156,21 +150,10 @@ main() {
elif [ "$1" = "release" ]; then
# Direct release mode
log_info "Starting via Elixir release..."
exec "/app/bin/agent_coordinator" "start" &
MAIN_PID=$!
exec "/app/bin/agent_coordinator" "start"
else
# Custom command
log_info "Starting custom command: $*"
exec "$@" &
MAIN_PID=$!
fi
# Wait for the main process if it's running in background
if [ ! -z "$MAIN_PID" ]; then
log_success "Main process started (PID: $MAIN_PID)"
wait "$MAIN_PID"
exit 0
fi
}
# Execute main function with all arguments
main "$@"

View File

@@ -26,7 +26,7 @@ defmodule AgentCoordinator.HttpInterface do
def start_link(opts \\ []) do
port = Keyword.get(opts, :port, 8080)
Logger.info("Starting Agent Coordinator HTTP interface on port #{port}")
IO.puts(:stderr, "Starting Agent Coordinator HTTP interface on port #{port}")
Plug.Cowboy.http(__MODULE__, [],
port: port,
@@ -158,7 +158,7 @@ defmodule AgentCoordinator.HttpInterface do
send_json_response(conn, 400, %{error: error})
unexpected ->
Logger.error("Unexpected MCP response: #{inspect(unexpected)}")
IO.puts(:stderr, "Unexpected MCP response: #{inspect(unexpected)}")
send_json_response(conn, 500, %{
error: %{
code: -32603,
@@ -317,7 +317,7 @@ defmodule AgentCoordinator.HttpInterface do
rescue
# Client disconnected
_ ->
Logger.info("SSE client disconnected")
IO.puts(:stderr, "SSE client disconnected")
conn
end
end
@@ -411,7 +411,7 @@ defmodule AgentCoordinator.HttpInterface do
origin
else
# For production, be more restrictive
Logger.warning("Potentially unsafe origin: #{origin}")
IO.puts(:stderr, "Potentially unsafe origin: #{origin}")
"*" # Fallback for now, could be more restrictive
end
_ -> "*"
@@ -487,7 +487,7 @@ defmodule AgentCoordinator.HttpInterface do
validated: true
}}
{:error, reason} ->
Logger.warning("Invalid MCP session token: #{reason}")
IO.puts(:stderr, "Invalid MCP session token: #{reason}")
# Fall back to generating anonymous session
anonymous_id = "http_anonymous_" <> (:crypto.strong_rand_bytes(8) |> Base.encode16(case: :lower))
{anonymous_id, %{validated: false, reason: reason}}
@@ -559,7 +559,7 @@ defmodule AgentCoordinator.HttpInterface do
send_json_response(conn, 400, response)
unexpected ->
Logger.error("Unexpected MCP response: #{inspect(unexpected)}")
IO.puts(:stderr, "Unexpected MCP response: #{inspect(unexpected)}")
send_json_response(conn, 500, %{
jsonrpc: "2.0",
id: Map.get(mcp_request, "id"),

View File

@@ -102,7 +102,7 @@ defmodule AgentCoordinator.InterfaceManager do
metrics: initialize_metrics()
}
Logger.info("Interface Manager starting with config: #{inspect(config.enabled_interfaces)}")
IO.puts(:stderr, "Interface Manager starting with config: #{inspect(config.enabled_interfaces)}")
# Start enabled interfaces
{:ok, state, {:continue, :start_interfaces}}
@@ -114,11 +114,11 @@ defmodule AgentCoordinator.InterfaceManager do
updated_state = Enum.reduce(state.config.enabled_interfaces, state, fn interface_type, acc ->
case start_interface_server(interface_type, state.config, acc) do
{:ok, interface_info} ->
Logger.info("Started #{interface_type} interface")
IO.puts(:stderr, "Started #{interface_type} interface")
%{acc | interfaces: Map.put(acc.interfaces, interface_type, interface_info)}
{:error, reason} ->
Logger.error("Failed to start #{interface_type} interface: #{reason}")
IO.puts(:stderr, "Failed to start #{interface_type} interface: #{reason}")
acc
end
end)
@@ -152,11 +152,11 @@ defmodule AgentCoordinator.InterfaceManager do
updated_interfaces = Map.put(state.interfaces, interface_type, interface_info)
updated_state = %{state | interfaces: updated_interfaces}
Logger.info("Started #{interface_type} interface on demand")
IO.puts(:stderr, "Started #{interface_type} interface on demand")
{:reply, {:ok, interface_info}, updated_state}
{:error, reason} ->
Logger.error("Failed to start #{interface_type} interface: #{reason}")
IO.puts(:stderr, "Failed to start #{interface_type} interface: #{reason}")
{:reply, {:error, reason}, state}
end
else
@@ -176,11 +176,11 @@ defmodule AgentCoordinator.InterfaceManager do
updated_interfaces = Map.delete(state.interfaces, interface_type)
updated_state = %{state | interfaces: updated_interfaces}
Logger.info("Stopped #{interface_type} interface")
IO.puts(:stderr, "Stopped #{interface_type} interface")
{:reply, :ok, updated_state}
{:error, reason} ->
Logger.error("Failed to stop #{interface_type} interface: #{reason}")
IO.puts(:stderr, "Failed to stop #{interface_type} interface: #{reason}")
{:reply, {:error, reason}, state}
end
end
@@ -202,7 +202,7 @@ defmodule AgentCoordinator.InterfaceManager do
updated_interfaces = Map.put(state.interfaces, interface_type, new_interface_info)
updated_state = %{state | interfaces: updated_interfaces}
Logger.info("Restarted #{interface_type} interface")
IO.puts(:stderr, "Restarted #{interface_type} interface")
{:reply, {:ok, new_interface_info}, updated_state}
{:error, reason} ->
@@ -210,12 +210,12 @@ defmodule AgentCoordinator.InterfaceManager do
updated_interfaces = Map.delete(state.interfaces, interface_type)
updated_state = %{state | interfaces: updated_interfaces}
Logger.error("Failed to restart #{interface_type} interface: #{reason}")
IO.puts(:stderr, "Failed to restart #{interface_type} interface: #{reason}")
{:reply, {:error, reason}, updated_state}
end
{:error, reason} ->
Logger.error("Failed to stop #{interface_type} interface for restart: #{reason}")
IO.puts(:stderr, "Failed to stop #{interface_type} interface for restart: #{reason}")
{:reply, {:error, reason}, state}
end
end
@@ -253,7 +253,7 @@ defmodule AgentCoordinator.InterfaceManager do
updated_registry = Map.put(state.session_registry, session_id, session_data)
updated_state = %{state | session_registry: updated_registry}
Logger.debug("Registered session #{session_id} for #{interface_type}")
IO.puts(:stderr, "Registered session #{session_id} for #{interface_type}")
{:noreply, updated_state}
end
@@ -261,14 +261,14 @@ defmodule AgentCoordinator.InterfaceManager do
def handle_cast({:unregister_session, session_id}, state) do
case Map.get(state.session_registry, session_id) do
nil ->
Logger.debug("Attempted to unregister unknown session: #{session_id}")
IO.puts(:stderr, "Attempted to unregister unknown session: #{session_id}")
{:noreply, state}
_session_data ->
updated_registry = Map.delete(state.session_registry, session_id)
updated_state = %{state | session_registry: updated_registry}
Logger.debug("Unregistered session #{session_id}")
IO.puts(:stderr, "Unregistered session #{session_id}")
{:noreply, updated_state}
end
end
@@ -278,7 +278,7 @@ defmodule AgentCoordinator.InterfaceManager do
# Handle interface process crashes
case find_interface_by_pid(pid, state.interfaces) do
{interface_type, _interface_info} ->
Logger.error("#{interface_type} interface crashed: #{inspect(reason)}")
IO.puts(:stderr, "#{interface_type} interface crashed: #{inspect(reason)}")
# Remove from running interfaces
updated_interfaces = Map.delete(state.interfaces, interface_type)
@@ -286,14 +286,14 @@ defmodule AgentCoordinator.InterfaceManager do
# Optionally restart if configured
if should_auto_restart?(interface_type, state.config) do
Logger.info("Auto-restarting #{interface_type} interface")
IO.puts(:stderr, "Auto-restarting #{interface_type} interface")
Process.send_after(self(), {:restart_interface, interface_type}, 5000)
end
{:noreply, updated_state}
nil ->
Logger.debug("Unknown process died: #{inspect(pid)}")
IO.puts(:stderr, "Unknown process died: #{inspect(pid)}")
{:noreply, state}
end
end
@@ -305,18 +305,18 @@ defmodule AgentCoordinator.InterfaceManager do
updated_interfaces = Map.put(state.interfaces, interface_type, interface_info)
updated_state = %{state | interfaces: updated_interfaces}
Logger.info("Auto-restarted #{interface_type} interface")
IO.puts(:stderr, "Auto-restarted #{interface_type} interface")
{:noreply, updated_state}
{:error, reason} ->
Logger.error("Failed to auto-restart #{interface_type} interface: #{reason}")
IO.puts(:stderr, "Failed to auto-restart #{interface_type} interface: #{reason}")
{:noreply, state}
end
end
@impl GenServer
def handle_info(message, state) do
Logger.debug("Interface Manager received unexpected message: #{inspect(message)}")
IO.puts(:stderr, "Interface Manager received unexpected message: #{inspect(message)}")
{:noreply, state}
end
@@ -516,18 +516,46 @@ defmodule AgentCoordinator.InterfaceManager do
defp handle_stdio_loop(state) do
# Handle MCP JSON-RPC messages from STDIO
# Use different approaches for Docker vs regular environments
if docker_environment?() do
handle_stdio_docker_loop(state)
else
handle_stdio_regular_loop(state)
end
end
defp handle_stdio_regular_loop(state) do
case IO.read(:stdio, :line) do
:eof ->
Logger.info("STDIO interface shutting down (EOF)")
IO.puts(:stderr, "STDIO interface shutting down (EOF)")
exit(:normal)
{:error, reason} ->
Logger.error("STDIO error: #{inspect(reason)}")
IO.puts(:stderr, "STDIO error: #{inspect(reason)}")
exit({:error, reason})
line ->
handle_stdio_message(String.trim(line), state)
handle_stdio_loop(state)
handle_stdio_regular_loop(state)
end
end
defp handle_stdio_docker_loop(state) do
# In Docker, use regular IO.read instead of Port.open({:fd, 0, 1})
# to avoid "driver_select stealing control of fd=0" conflicts with external MCP servers
# This allows external servers to use pipes while Agent Coordinator reads from stdin
case IO.read(:stdio, :line) do
:eof ->
IO.puts(:stderr, "STDIO interface shutting down (EOF)")
exit(:normal)
{:error, reason} ->
IO.puts(:stderr, "STDIO error: #{inspect(reason)}")
exit({:error, reason})
line ->
handle_stdio_message(String.trim(line), state)
handle_stdio_docker_loop(state)
end
end
@@ -646,4 +674,21 @@ defmodule AgentCoordinator.InterfaceManager do
end
defp deep_merge(_left, right), do: right
# Check if running in Docker environment
defp docker_environment? do
# Check common Docker environment indicators
System.get_env("DOCKER_CONTAINER") != nil or
System.get_env("container") != nil or
System.get_env("DOCKERIZED") != nil or
File.exists?("/.dockerenv") or
File.exists?("/proc/1/cgroup") and
(File.read!("/proc/1/cgroup") |> String.contains?("docker")) or
String.contains?(to_string(System.get_env("PATH", "")), "/app/") or
# Check if we're running under a container init system
case File.read("/proc/1/comm") do
{:ok, comm} -> String.trim(comm) in ["bash", "sh", "docker-init", "tini"]
_ -> false
end
end
end

View File

@@ -654,7 +654,7 @@ defmodule AgentCoordinator.MCPServer do
}}
{:error, reason} ->
Logger.error("Failed to create session for agent #{agent.id}: #{inspect(reason)}")
IO.puts(:stderr, "Failed to create session for agent #{agent.id}: #{inspect(reason)}")
# Still return success but without session token for backward compatibility
{:ok, %{agent_id: agent.id, codebase_id: agent.codebase_id, status: "registered"}}
end
@@ -1226,17 +1226,15 @@ defmodule AgentCoordinator.MCPServer do
tools: []
}
# Initialize the server and get tools
# Initialize the server and get tools with shorter timeout
case initialize_external_server(server_info) do
{:ok, tools} ->
{:ok, %{server_info | tools: tools}}
{:error, reason} ->
# Cleanup on initialization failure
cleanup_external_pid_file(pid_file_path)
kill_external_process(os_pid)
if Port.info(port), do: Port.close(port)
{:error, reason}
# Log error but don't fail - continue with empty tools
IO.puts(:stderr, "Failed to initialize #{name}: #{reason}")
{:ok, %{server_info | tools: []}}
end
{:error, reason} ->
@@ -1276,6 +1274,8 @@ defmodule AgentCoordinator.MCPServer do
env_list =
Enum.map(env, fn {key, value} -> {String.to_charlist(key), String.to_charlist(value)} end)
# Use pipe communication but allow stdin/stdout for MCP protocol
# Remove :nouse_stdio since MCP servers need stdin/stdout to communicate
port_options = [
:binary,
:stream,
@@ -1357,7 +1357,7 @@ defmodule AgentCoordinator.MCPServer do
Port.command(server_info.port, request_json)
# Collect full response by reading multiple lines if needed
response_data = collect_external_response(server_info.port, "", 30_000)
response_data = collect_external_response(server_info.port, "", 5_000)
cond do
response_data == "" ->
@@ -1503,35 +1503,6 @@ defmodule AgentCoordinator.MCPServer do
pid_file_path
end
defp cleanup_external_pid_file(pid_file_path) do
if File.exists?(pid_file_path) do
File.rm(pid_file_path)
end
end
defp kill_external_process(os_pid) when is_integer(os_pid) do
try do
case System.cmd("kill", ["-TERM", to_string(os_pid)]) do
{_, 0} ->
IO.puts(:stderr, "Successfully terminated process #{os_pid}")
:ok
{_, _} ->
case System.cmd("kill", ["-KILL", to_string(os_pid)]) do
{_, 0} ->
IO.puts(:stderr, "Force killed process #{os_pid}")
:ok
{_, _} ->
IO.puts(:stderr, "Failed to kill process #{os_pid}")
:error
end
end
rescue
_ -> :error
end
end
defp get_all_unified_tools_from_state(state) do
# Combine coordinator tools with external server tools from state
coordinator_tools = @mcp_tools

View File

@@ -78,7 +78,7 @@ defmodule AgentCoordinator.SessionManager do
}
}
Logger.info("SessionManager started with #{state.config.expiry_minutes}min expiry")
IO.puts(:stderr, "SessionManager started with #{state.config.expiry_minutes}min expiry")
{:ok, state}
end
@@ -99,7 +99,7 @@ defmodule AgentCoordinator.SessionManager do
new_sessions = Map.put(state.sessions, session_token, session_data)
new_state = %{state | sessions: new_sessions}
Logger.debug("Created session #{session_token} for agent #{agent_id}")
IO.puts(:stderr, "Created session #{session_token} for agent #{agent_id}")
{:reply, {:ok, session_token}, new_state}
end
@@ -136,7 +136,7 @@ defmodule AgentCoordinator.SessionManager do
session_data ->
new_sessions = Map.delete(state.sessions, session_token)
new_state = %{state | sessions: new_sessions}
Logger.debug("Invalidated session #{session_token} for agent #{session_data.agent_id}")
IO.puts(:stderr, "Invalidated session #{session_token} for agent #{session_data.agent_id}")
{:reply, :ok, new_state}
end
end
@@ -161,7 +161,7 @@ defmodule AgentCoordinator.SessionManager do
end)
if length(expired_sessions) > 0 do
Logger.debug("Cleaned up #{length(expired_sessions)} expired sessions")
IO.puts(:stderr, "Cleaned up #{length(expired_sessions)} expired sessions")
end
new_state = %{state | sessions: Map.new(active_sessions)}

View File

@@ -37,7 +37,7 @@ defmodule AgentCoordinator.WebSocketHandler do
# Start heartbeat timer
Process.send_after(self(), :heartbeat, @heartbeat_interval)
Logger.info("WebSocket connection established: #{session_id}")
IO.puts(:stderr, "WebSocket connection established: #{session_id}")
{:ok, state}
end
@@ -64,7 +64,7 @@ defmodule AgentCoordinator.WebSocketHandler do
@impl WebSock
def handle_in({_binary, [opcode: :binary]}, state) do
Logger.warning("Received unexpected binary data on WebSocket")
IO.puts(:stderr, "Received unexpected binary data on WebSocket")
{:ok, state}
end
@@ -95,20 +95,20 @@ defmodule AgentCoordinator.WebSocketHandler do
@impl WebSock
def handle_info(message, state) do
Logger.debug("Received unexpected message: #{inspect(message)}")
IO.puts(:stderr, "Received unexpected message: #{inspect(message)}")
{:ok, state}
end
@impl WebSock
def terminate(:remote, state) do
Logger.info("WebSocket connection closed by client: #{state.session_id}")
IO.puts(:stderr, "WebSocket connection closed by client: #{state.session_id}")
cleanup_session(state)
:ok
end
@impl WebSock
def terminate(reason, state) do
Logger.info("WebSocket connection terminated: #{state.session_id}, reason: #{inspect(reason)}")
IO.puts(:stderr, "WebSocket connection terminated: #{state.session_id}, reason: #{inspect(reason)}")
cleanup_session(state)
:ok
end
@@ -245,7 +245,7 @@ defmodule AgentCoordinator.WebSocketHandler do
{:reply, {:text, Jason.encode!(response)}, updated_state}
unexpected ->
Logger.error("Unexpected MCP response: #{inspect(unexpected)}")
IO.puts(:stderr, "Unexpected MCP response: #{inspect(unexpected)}")
error_response = %{
"jsonrpc" => "2.0",
"id" => Map.get(message, "id"),
@@ -287,7 +287,7 @@ defmodule AgentCoordinator.WebSocketHandler do
defp handle_initialized_notification(_message, state) do
# Client is ready to receive notifications
Logger.info("WebSocket client initialized: #{state.session_id}")
IO.puts(:stderr, "WebSocket client initialized: #{state.session_id}")
{:ok, state}
end
@@ -304,7 +304,7 @@ defmodule AgentCoordinator.WebSocketHandler do
{:ok, state}
unexpected ->
Logger.error("Unexpected MCP response: #{inspect(unexpected)}")
IO.puts(:stderr, "Unexpected MCP response: #{inspect(unexpected)}")
{:ok, state}
end
else

View File

@@ -1,106 +0,0 @@
{
"config": {
"auto_restart_delay": 1000,
"heartbeat_interval": 10000,
"max_restart_attempts": 3,
"startup_timeout": 30000
},
"interfaces": {
"enabled_interfaces": ["stdio"],
"stdio": {
"enabled": true,
"handle_stdio": true,
"description": "Local MCP interface for VSCode and direct clients"
},
"http": {
"enabled": false,
"port": 8080,
"host": "localhost",
"cors_enabled": true,
"description": "HTTP REST API for remote MCP clients"
},
"websocket": {
"enabled": false,
"port": 8081,
"host": "localhost",
"description": "WebSocket interface for real-time web clients"
},
"auto_restart": {
"stdio": false,
"http": true,
"websocket": true
},
"tool_filtering": {
"local_only_tools": [
"read_file", "write_file", "create_file", "delete_file",
"list_directory", "search_files", "move_file", "get_file_info",
"vscode_*", "run_in_terminal", "get_terminal_output"
],
"always_safe_tools": [
"register_agent", "create_task", "get_task_board",
"heartbeat", "create_entities", "sequentialthinking"
]
}
},
"servers": {
"mcp_filesystem": {
"type": "stdio",
"command": "bunx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/ra"],
"auto_restart": true,
"description": "Filesystem operations server with heartbeat coverage",
"local_only": true
},
"mcp_memory": {
"type": "stdio",
"command": "bunx",
"args": ["-y", "@modelcontextprotocol/server-memory"],
"auto_restart": true,
"description": "Memory and knowledge graph server",
"local_only": false
},
"mcp_sequentialthinking": {
"type": "stdio",
"command": "bunx",
"args": ["-y", "@modelcontextprotocol/server-sequential-thinking"],
"auto_restart": true,
"description": "Sequential thinking and reasoning server",
"local_only": false
},
"mcp_context7": {
"type": "stdio",
"command": "bunx",
"args": ["-y", "@upstash/context7-mcp"],
"auto_restart": true,
"description": "Context7 library documentation server",
"local_only": false
}
},
"examples": {
"stdio_mode": {
"description": "Traditional MCP over stdio for local clients",
"command": "./scripts/mcp_launcher_multi.sh stdio",
"use_case": "VSCode MCP integration, local development"
},
"http_mode": {
"description": "HTTP REST API for remote clients",
"command": "./scripts/mcp_launcher_multi.sh http 8080",
"use_case": "Remote API access, web applications, CI/CD"
},
"websocket_mode": {
"description": "WebSocket for real-time web clients",
"command": "./scripts/mcp_launcher_multi.sh websocket 8081",
"use_case": "Real-time web dashboards, live collaboration"
},
"remote_mode": {
"description": "Both HTTP and WebSocket on same port",
"command": "./scripts/mcp_launcher_multi.sh remote 8080",
"use_case": "Complete remote access with both REST and real-time"
},
"all_mode": {
"description": "All interface modes simultaneously",
"command": "./scripts/mcp_launcher_multi.sh all 8080",
"use_case": "Development, testing, maximum compatibility"
}
}
}
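The `tool_filtering` section above routes tools by name, including glob-style patterns such as `vscode_*`. A minimal sketch of how such a filter might be evaluated — the function name and routing labels here are illustrative, not taken from this codebase:

```python
from fnmatch import fnmatch

# Patterns copied from the tool_filtering config above.
LOCAL_ONLY = [
    "read_file", "write_file", "create_file", "delete_file",
    "list_directory", "search_files", "move_file", "get_file_info",
    "vscode_*", "run_in_terminal", "get_terminal_output",
]
ALWAYS_SAFE = [
    "register_agent", "create_task", "get_task_board",
    "heartbeat", "create_entities", "sequentialthinking",
]

def classify_tool(name: str) -> str:
    """Return a routing class for a tool name: 'safe', 'local', or 'default'."""
    if name in ALWAYS_SAFE:
        return "safe"
    # fnmatch handles both exact names and wildcard patterns like "vscode_*".
    if any(fnmatch(name, pattern) for pattern in LOCAL_ONLY):
        return "local"
    return "default"
```

A router could then keep `local` tools off the HTTP/WebSocket interfaces while allowing `safe` coordination tools everywhere.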

nats-server.conf

@@ -0,0 +1,12 @@
port: 4222
jetstream {
store_dir: /var/lib/nats/jetstream
max_memory_store: 1GB
max_file_store: 10GB
}
http_port: 8222
log_file: "/var/log/nats-server.log"
debug: false
trace: false
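The JetStream limits above use human-readable sizes like `1GB` and `10GB`. A small sketch of turning such suffixes into byte counts — this helper is illustrative and assumes 1024-based binary units; NATS's own config parser may treat suffixes differently:

```python
import re

# Binary multipliers; an assumption, not NATS's actual parsing rules.
_UNITS = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def parse_size(text: str) -> int:
    """Parse a size string like '1GB', '512M', or '4096' into bytes."""
    m = re.fullmatch(r"(\d+)\s*([KMGT])?B?", text.strip().upper())
    if not m:
        raise ValueError(f"unrecognized size: {text!r}")
    value, unit = int(m.group(1)), m.group(2)
    return value * (_UNITS[unit] if unit else 1)
```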


@@ -37,67 +37,7 @@ end
# Log that we're ready
IO.puts(:stderr, \"Unified MCP server ready with automatic task tracking\")
# Handle MCP JSON-RPC messages through the unified server
defmodule UnifiedMCPStdio do
  def start do
    spawn_link(fn -> message_loop() end)
    Process.sleep(:infinity)
  end

  defp message_loop do
    case IO.read(:stdio, :line) do
      :eof ->
        IO.puts(:stderr, \"Unified MCP server shutting down\")
        System.halt(0)

      {:error, reason} ->
        IO.puts(:stderr, \"IO Error: #{inspect(reason)}\")
        System.halt(1)

      line ->
        handle_message(String.trim(line))
        message_loop()
    end
  end

  defp handle_message(\"\"), do: :ok

  defp handle_message(json_line) do
    try do
      request = Jason.decode!(json_line)
      # Route through unified MCP server for automatic task tracking
      response = AgentCoordinator.MCPServer.handle_mcp_request(request)
      IO.puts(Jason.encode!(response))
    rescue
      e in Jason.DecodeError ->
        error_response = %{
          \"jsonrpc\" => \"2.0\",
          \"id\" => nil,
          \"error\" => %{
            \"code\" => -32700,
            \"message\" => \"Parse error: #{Exception.message(e)}\"
          }
        }

        IO.puts(Jason.encode!(error_response))

      e ->
        # Try to get the ID from the malformed request
        id =
          try do
            partial = Jason.decode!(json_line)
            Map.get(partial, \"id\")
          rescue
            _ -> nil
          end

        error_response = %{
          \"jsonrpc\" => \"2.0\",
          \"id\" => id,
          \"error\" => %{
            \"code\" => -32603,
            \"message\" => \"Internal error: #{Exception.message(e)}\"
          }
        }

        IO.puts(Jason.encode!(error_response))
    end
  end
end

UnifiedMCPStdio.start()
# STDIO handling is now managed by InterfaceManager, not here
# Just keep the process alive
Process.sleep(:infinity)
"


@@ -1,9 +0,0 @@
Add comprehensive agent activity tracking
- Enhanced Agent struct with current_activity, current_files, and activity_history fields
- Created ActivityTracker module to infer activities from tool calls
- Integrated activity tracking into MCP server tool routing
- Updated task board APIs to include activity information
- Agents now show real-time status like 'Reading file.ex', 'Editing main.py', 'Sequential thinking', etc.
- Added activity history to track recent agent actions
- All file operations and tool calls are now tracked and displayed