Compare commits

...

7 Commits

Author SHA1 Message Date
Ra
d22675fd16 push to github registry not docker
Some checks failed
build-container / build (push) Has been cancelled
2025-09-08 19:42:58 -07:00
Ra
87b7181d4a don't copy the unused file into docker 2025-09-08 19:37:56 -07:00
Ra
0d9166cc56 cleanup unused file 2025-09-08 19:36:26 -07:00
Ra
74a8574778 Fix docker startup so that it works properly with stdio mode. Probably worthwhile to toss majority of this readme, less confusing 2025-09-08 19:35:27 -07:00
Ra
5d3e04c5f8 None of this is necessary 2025-09-06 10:13:39 -07:00
Ra
0327665fe2 update copilot instructions with something actually relevant 2025-09-06 10:08:55 -07:00
Ra
b1f55799ec Add comprehensive agent activity tracking
- Enhanced Agent struct with current_activity, current_files, and activity_history fields
- Created ActivityTracker module to infer activities from tool calls
- Integrated activity tracking into MCP server tool routing
- Updated task board APIs to include activity information
- Agents now show real-time status like 'Reading file.ex', 'Editing main.py', 'Sequential thinking', etc.
- Added activity history to track recent agent actions
- All file operations and tool calls are now tracked and displayed
2025-09-06 09:59:48 -07:00
38 changed files with 4910 additions and 1381 deletions

View File

@@ -1,3 +0,0 @@
---
applyTo: '**/*.md'
---

View File

@@ -0,0 +1,9 @@
---
applyTo: '**'
---
- You must always use the agent-coordinator MCP server; it allows you to coordinate your work with other agents that may be actively working on the same codebase as you.
- All MCP tool calls that would alter any state should go through the agent-coordinator, even VS Code internal tool calls. Searches do not need to be routed through it. The core reason for routing calls through the agent-coordinator is the heartbeat.
- Always append at least 2 random words to the end of your agent name when registering / sending metadata, e.g. `GitHub Copilot Silver Banana`, `GitHub Copilot Yellow Koala`, `GitHub Copilot Olive Llama`. This ensures that no two agents have overlapping names and get tracked as the same agent (hopefully).
- Always check the task board when given new user input that will require making changes to files.
- Always populate your own task board with your task set so other agents are informed of your planned tasks, and in turn you will be able to see other agents' agendas.
- When your current query is coming to an end and you are about to pass the turn to wait for user input, unregister yourself as an agent; on the follow-up, re-register and follow the same registration flow.
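The registration step above can be sketched as a single MCP `tools/call` request. The `name` and `capabilities` argument fields are assumed parameter names for the native `register_agent` tool, not confirmed against its actual schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "register_agent",
    "arguments": {
      "name": "GitHub Copilot Silver Banana",
      "capabilities": ["elixir", "documentation"]
    }
  }
}
```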

View File

@@ -1,50 +0,0 @@
---
applyTo: '**'
---
# No Duplicate Files Policy
## Critical Rule: NO DUPLICATE FILES
**NEVER** create files with adjectives or verbs that duplicate existing functionality:
- `enhanced_mcp_server.ex` when `mcp_server.ex` exists
- `unified_mcp_server.ex` when `mcp_server.ex` exists
- `mcp_server_manager.ex` when `mcp_server.ex` exists
- `new_config.ex` when `config.ex` exists
- `improved_task_registry.ex` when `task_registry.ex` exists
## What To Do Instead
1. **BEFORE** making changes that might create a new file:
```bash
git add . && git commit -m "Save current state before refactoring"
```
2. **MODIFY** the existing file directly instead of creating a "new" version
3. **IF** you need to completely rewrite a file:
- Make the changes directly to the original file
- Don't create `*_new.*` or `enhanced_*.*` versions
## Why This Rule Exists
When you create duplicate files:
- Future sessions can't tell which file is "real"
- The codebase becomes inconsistent and confusing
- Multiple implementations cause bugs and maintenance nightmares
- Even YOU get confused about which file to edit next time
## The Human Is Right
The human specifically said: "Do not re-create the same file with some adjective/verb attached while leaving the original, instead, update the code and make it better, changes are good."
**Listen to them.** They prefer file replacement over duplicates.
## Implementation
- Always check if a file with similar functionality exists before creating a new one
- Use `git add . && git commit` before potentially destructive changes
- Replace, don't duplicate
- Keep the codebase clean and consistent
**This rule is more important than any specific feature request.**

41
.github/workflows/build.yml vendored Normal file
View File

@@ -0,0 +1,41 @@
name: build-container
on:
push:
branches:
- main
run-name: build-image-${{ github.run_id }}
permissions:
contents: read
packages: write
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v5
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GHCR
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: .
push: true
tags: |
ghcr.io/rooba/agentcoordinator:latest
ghcr.io/rooba/agentcoordinator:${{ github.sha }}
file: ./Dockerfile
github-token: ${{ secrets.GITHUB_TOKEN }}

3
.gitignore vendored
View File

@@ -95,3 +95,6 @@ coverage/
/docs/LANGUAGE_IMPLEMENTATIONS.md
/asdf.txt
/erl_crash.dump
/_build
/test_env

View File

@@ -1,56 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- Initial repository structure cleanup
- Organized scripts into dedicated directories
- Enhanced documentation
- GitHub Actions CI/CD workflow
- Development and testing dependencies
### Changed
- Moved demo files to `examples/` directory
- Moved utility scripts to `scripts/` directory
- Updated project metadata in mix.exs
- Enhanced .gitignore for better coverage
## [0.1.0] - 2025-08-22
### Features
- Initial release of AgentCoordinator
- Distributed task coordination system for AI agents
- NATS-based messaging and persistence
- MCP (Model Context Protocol) server integration
- Task registry with agent-specific inboxes
- File-level conflict resolution
- Real-time agent communication
- Event sourcing with configurable retention
- Fault-tolerant supervision trees
- Command-line interface for task management
- VS Code integration setup scripts
- Comprehensive examples and documentation
### Core Features
- Agent registration and capability management
- Task creation, assignment, and completion
- Task board visualization
- Heartbeat monitoring for agent health
- Persistent task state with NATS JetStream
- MCP tools for external agent integration
### Development Tools
- Setup scripts for NATS and VS Code configuration
- Example MCP client implementations
- Test scripts for various scenarios
- Demo workflows for testing functionality

View File

@@ -1,195 +0,0 @@
# Contributing to AgentCoordinator
Thank you for your interest in contributing to AgentCoordinator! This document provides guidelines for contributing to the project.
## 🤝 Code of Conduct
By participating in this project, you agree to abide by our Code of Conduct. Please report unacceptable behavior to the project maintainers.
## 🚀 How to Contribute
### Reporting Bugs
1. **Check existing issues** first to see if the bug has already been reported
2. **Create a new issue** with a clear title and description
3. **Include reproduction steps** with specific details
4. **Provide system information** (Elixir version, OS, etc.)
5. **Add relevant logs** or error messages
### Suggesting Features
1. **Check existing feature requests** to avoid duplicates
2. **Create a new issue** with the `enhancement` label
3. **Describe the feature** and its use case clearly
4. **Explain why** this feature would be beneficial
5. **Provide examples** of how it would be used
### Development Setup
1. **Fork the repository** on GitHub
2. **Clone your fork** locally:
```bash
git clone https://github.com/your-username/agent_coordinator.git
cd agent_coordinator
```
3. **Install dependencies**:
```bash
mix deps.get
```
4. **Start NATS server**:
```bash
nats-server -js -p 4222 -m 8222
```
5. **Run tests** to ensure everything works:
```bash
mix test
```
### Making Changes
1. **Create a feature branch**:
```bash
git checkout -b feature/your-feature-name
```
2. **Make your changes** following our coding standards
3. **Add tests** for new functionality
4. **Run the test suite**:
```bash
mix test
```
5. **Run code quality checks**:
```bash
mix format
mix credo
mix dialyzer
```
6. **Commit your changes** with a descriptive message:
```bash
git commit -m "Add feature: your feature description"
```
7. **Push to your fork**:
```bash
git push origin feature/your-feature-name
```
8. **Create a Pull Request** on GitHub
## 📝 Coding Standards
### Elixir Style Guide
- Follow the [Elixir Style Guide](https://github.com/christopheradams/elixir_style_guide)
- Use `mix format` to format your code
- Write clear, descriptive function and variable names
- Add `@doc` and `@spec` for public functions
- Follow the existing code patterns in the project
### Code Organization
- Keep modules focused and cohesive
- Use appropriate GenServer patterns for stateful processes
- Follow OTP principles and supervision tree design
- Organize code into logical namespaces
### Testing
- Write comprehensive tests for all new functionality
- Use descriptive test names that explain what is being tested
- Follow the existing test patterns and structure
- Ensure tests are fast and reliable
- Aim for good test coverage (check with `mix test --cover`)
### Documentation
- Update documentation for any API changes
- Add examples for new features
- Keep the README.md up to date
- Use clear, concise language
- Include code examples where helpful
## 🔧 Pull Request Guidelines
### Before Submitting
- [ ] Tests pass locally (`mix test`)
- [ ] Code is properly formatted (`mix format`)
- [ ] No linting errors (`mix credo`)
- [ ] Type checks pass (`mix dialyzer`)
- [ ] Documentation is updated
- [ ] CHANGELOG.md is updated (if applicable)
### Pull Request Description
Please include:
1. **Clear title** describing the change
2. **Description** of what the PR does
3. **Issue reference** if applicable (fixes #123)
4. **Testing instructions** for reviewers
5. **Breaking changes** if any
6. **Screenshots** if UI changes are involved
### Review Process
1. At least one maintainer will review your PR
2. Address any feedback or requested changes
3. Once approved, a maintainer will merge your PR
4. Your contribution will be credited in the release notes
## 🧪 Testing
### Running Tests
```bash
# Run all tests
mix test
# Run tests with coverage
mix test --cover
# Run specific test file
mix test test/agent_coordinator/mcp_server_test.exs
# Run tests in watch mode
mix test.watch
```
### Writing Tests
- Place test files in the `test/` directory
- Mirror the structure of the `lib/` directory
- Use descriptive `describe` blocks to group related tests
- Use `setup` blocks for common test setup
- Mock external dependencies appropriately
## 🚀 Release Process
1. Update version in `mix.exs`
2. Update `CHANGELOG.md` with new version details
3. Create and push a version tag
4. Create a GitHub release
5. Publish to Hex (maintainers only)
## 📞 Getting Help
- **GitHub Issues**: For bugs and feature requests
- **GitHub Discussions**: For questions and general discussion
- **Documentation**: Check the [online docs](https://hexdocs.pm/agent_coordinator)
## 🏷️ Issue Labels
- `bug`: Something isn't working
- `enhancement`: New feature or request
- `documentation`: Improvements or additions to documentation
- `good first issue`: Good for newcomers
- `help wanted`: Extra attention is needed
- `question`: Further information is requested
## 🎉 Recognition
Contributors will be:
- Listed in the project's contributors section
- Mentioned in release notes for significant contributions
- Given credit in any related blog posts or presentations
Thank you for contributing to AgentCoordinator! 🚀

View File

@@ -2,18 +2,16 @@
# Creates a production-ready container for the MCP server without requiring local Elixir/OTP installation
# Build stage - Use official Elixir image with OTP
FROM elixir:1.16-otp-26-alpine AS builder
FROM elixir:1.18 AS builder
# Install build dependencies
RUN apk add --no-cache \
build-base \
# Set environment variables
RUN apt-get update && apt-get install -y \
git \
curl \
bash
# Install Node.js and npm for MCP external servers (bunx dependency)
RUN apk add --no-cache nodejs npm
RUN npm install -g bun
bash \
unzip \
zlib1g
# Set build environment
ENV MIX_ENV=prod
@@ -22,79 +20,29 @@ ENV MIX_ENV=prod
WORKDIR /app
# Copy mix files
COPY mix.exs mix.lock ./
COPY lib lib
COPY mcp_servers.json \
mix.exs \
mix.lock \
docker-entrypoint.sh ./
COPY scripts ./scripts/
# Install mix dependencies
RUN mix local.hex --force && \
mix local.rebar --force && \
mix deps.get --only $MIX_ENV && \
mix deps.compile
# Copy source code
COPY lib lib
COPY config config
# Compile the release
RUN mix deps.get
RUN mix deps.compile
RUN mix compile
# Prepare release
RUN mix release
RUN chmod +x ./docker-entrypoint.sh ./scripts/mcp_launcher.sh
RUN curl -fsSL https://bun.sh/install | bash
RUN ln -s /root/.bun/bin/* /usr/local/bin/
# Runtime stage - Use smaller Alpine image
FROM alpine:3.18 AS runtime
# Install runtime dependencies
RUN apk add --no-cache \
bash \
openssl \
ncurses-libs \
libstdc++ \
nodejs \
npm
# Install Node.js packages for external MCP servers
RUN npm install -g bun
# Create non-root user for security
RUN addgroup -g 1000 appuser && \
adduser -u 1000 -G appuser -s /bin/bash -D appuser
# Create app directory and set permissions
WORKDIR /app
RUN chown -R appuser:appuser /app
# Copy the release from builder stage
COPY --from=builder --chown=appuser:appuser /app/_build/prod/rel/agent_coordinator ./
# Copy configuration files
COPY --chown=appuser:appuser mcp_servers.json ./
COPY --chown=appuser:appuser scripts/mcp_launcher.sh ./scripts/
# Make scripts executable
RUN chmod +x ./scripts/mcp_launcher.sh
# Copy Docker entrypoint script
COPY --chown=appuser:appuser docker-entrypoint.sh ./
RUN chmod +x ./docker-entrypoint.sh
# Switch to non-root user
USER appuser
# Set environment variables
ENV MIX_ENV=prod
ENV NATS_HOST=localhost
ENV NATS_PORT=4222
ENV SHELL=/bin/bash
# Expose the default port (if needed for HTTP endpoints)
EXPOSE 4000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD /app/bin/agent_coordinator ping || exit 1
# Set the entrypoint
ENTRYPOINT ["/app/docker-entrypoint.sh"]
# Default command
CMD ["/app/scripts/mcp_launcher.sh"]

21
LICENSE
View File

@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2025 AgentCoordinator Team
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

753
README.md
View File

@@ -1,110 +1,40 @@
# Agent Coordinator
A **Model Context Protocol (MCP) server** that enables multiple AI agents to coordinate their work seamlessly across codebases without conflicts. Built with Elixir for reliability and fault tolerance.
Agent Coordinator is an MCP proxy server that enables multiple AI agents to collaborate seamlessly without conflicts. It acts as a single MCP interface that proxies ALL tool calls through itself, ensuring every agent maintains full project awareness while the coordinator tracks real-time agent presence.
## 🎯 What is Agent Coordinator?
## What is Agent Coordinator?
Agent Coordinator is a **unified MCP proxy server** that enables multiple AI agents to collaborate seamlessly without conflicts. As shown in the architecture diagram above, it acts as a single interface connecting multiple agents (Purple Zebra, Yellow Elephant, etc.) to a comprehensive ecosystem of tools and task management.
**The coordinator operates as a transparent proxy layer:**
- **Single Interface**: All agents connect to one MCP server (the coordinator)
- **Proxy Architecture**: Every tool call flows through the coordinator to external MCP servers
- **Presence Tracking**: Each proxied tool call updates agent heartbeat and task status
- **Project Awareness**: All agents see the same unified view of project state through the proxy
**This proxy design orchestrates four core components:**
**The coordinator orchestrates three core components:**
- **Task Registry**: Intelligent task queuing, agent matching, and automatic progress tracking
- **Agent Manager**: Agent registration, heartbeat monitoring, and capability-based assignment
- **Codebase Registry**: Cross-repository coordination, dependency management, and workspace organization
- **Unified Tool Registry**: Seamlessly proxies external MCP tools while adding coordination capabilities
**Plus a Unified Tool Registry** that seamlessly combines:
- Native coordination tools (register_agent, get_next_task, etc.)
- Proxied MCP tools from external servers (read_file, search_memory, etc.)
- VS Code integration tools (get_active_editor, run_command, etc.)
## Overview
<!-- ![Agent Coordinator Architecture](docs/architecture-diagram.svg) Let's not show this it's confusing -->
### 🏗️ Architecture Components
Instead of agents conflicting over files or duplicating work, they connect through a single MCP interface that automatically routes tool calls, tracks all operations as coordinated tasks, and maintains real-time communication via personal agent inboxes and shared task boards.
**Core Coordinator Components:**
**Key Features:**
- Task Registry: Intelligent task queuing, agent matching, and progress tracking
- Agent Manager: Registration, heartbeat monitoring, and capability-based assignment
Codebase Registry: Cross-repository coordination and workspace management
- Unified Tool Registry: Combines native coordination tools with external MCP tools
- Every tool call automatically updates the agent's activity for other agents to see
- **🤖 Multi-Agent Coordination**: Register multiple AI agents (GitHub Copilot, Claude, etc.) with different capabilities
- **Unified MCP Proxy**: Single MCP server that manages and unifies multiple external MCP servers
- **📡 External Server Management**: Automatically starts, monitors, and manages MCP servers defined in `mcp_servers.json`
- **🛠️ Universal Tool Registry**: Combines tools from all external servers with native coordination tools
- **🎯 Intelligent Tool Routing**: Automatically routes tool calls to the appropriate server or handles natively
- **📝 Automatic Task Tracking**: Every tool usage becomes a tracked task with agent coordination
- **⚡ Real-Time Communication**: Agents can communicate and share progress via heartbeat system
- **🔌 Dynamic Tool Discovery**: Automatically discovers new tools when external servers start/restart
- **🎮 Cross-Codebase Support**: Coordinate work across multiple repositories and projects
- **🔌 MCP Standard Compliance**: Works with any MCP-compatible AI agent or tool
**External Integration:**
## 🚀 How It Works
- VS Code Integration: Direct editor commands and workspace management
```ascii
┌────────────────────────────┐
│AI AGENTS & TOOLS CONNECTION│
└────────────────────────────┘
Agent 1 (Purple Zebra) Agent 2(Yellow Elephant) Agent N (...)
│ │ │
└────────────MCP Protocol┼(Single Interface)──────────┘
┌───────────────────────────────┴──────────────────────────────────┐
│ AGENT COORDINATOR (Unified MCP Server) │
├──────────────────────────────────────────────────────────────────┤
│ ┌─────────────────┐ ┌─────────────────┐ ┌──────────────────┐ │
│ │ Task Registry │ │ Agent Manager │ │Codebase Registry │ │
│ ├─────────────────┤ ├─────────────────┤ ├──────────────────┤ │
│ │• Task Queuing │ │• Registration │ │• Cross-Repo │ │
│ │• Agent Matching │ │• Heartbeat │ │• Dependencies │ │
│ │• Auto-Tracking │ │• Capabilities │ │• Workspace Mgmt │ │
│ └─────────────────┘ └─────────────────┘ └──────────────────┘ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ UNIFIED TOOL REGISTRY │ │
│ ├─────────────────────────────────────────────────────────────┤ │
│ │ Native Tools: register_agent, get_next_task, │ ╞═══════════════════════════════════╕
│ │ create_task_set, complete_task, ... │ │ │
│ │ Proxied MCP Tools: read_file, write_file, │ │ ┍━━━━━━━━━━━┷━━━━━━━━━━━┑
│ │ search_memory, get_docs, ... │ │ │ Task Board │
│ │ VS Code Tools: get_active_editor, set_selection, │ │ ┏━━━━━━━━━━━━━━━━━━━━┓┝━━━━━━━━━━━┳━━━━━━━━━━━┥ ┏━━━━━━━━━━━━━━━━━━━━┓
│ │ get_workspace_folders, run_command, ... │ │ ┃ Agent 1 INBOX ┃│ Agent 1 Q ┃ Agent 2 Q │ ┃ Agent 2 INBOX ┃
│ ├─────────────────────────────────────────────────────────────┤ │ ┣━━━━━━━━━━━━━━━━━━━━┫┝━━━━━━━━━━━╋━━━━━━━━━━━┥ ┣━━━━━━━━━━━━━━━━━━━━┫
│ │ Routes to appropriate server or handles natively │ │ ┃ current: task 3 ┃│ ✓ Task 1 ┃ ✓ Task 1 │ ┃ current: task 2 ┃
│ │ Configure MCP Servers to run via MCP_TOOLS_FILE │ │ ┃ [ complete task ] ┣┥ ✓ Task 2 ┃ ➔ Task 2 ┝━┫ [ complete task ] ┃<─┐
│ └─────────────────────────────────────────────────────────────┘ │ ┗━━━━━━━━━━━━━━━━━━━━┛│ ➔ Task 3 ┃ … Task 3 │ ┗━━━━━━━━━━━━━━━━━━━━┛ │
└─────────────────────────────────┬────────────────────────────────┘ ┝━━━━━━━━━━━╋━━━━━━━━━━━┥ │
│ │ Agent 3 Q ┃ Agent 4 Q │ ┏━━━━━━━━━━━━━━━━━━━━┓ │
│ ┝━━━━━━━━━━━╋━━━━━━━━━━━┥ ┃ Agent 4 INBOX ┃<─┤ Personal inboxes
│ │ ✓ Task 1 ┃ ➔ Task 1 │ ┣━━━━━━━━━━━━━━━━━━━━┫ │
│ │ ✓ Task 2 ┃ … Task 2 │ ┃ current: task 2 ┃ │
┌─────────────────────────────────┴─────────────────────────────────────┐ │ ✓ Task 3 ┃ … Task 3 ┝━┫ [ complete task ] ┃ │
│ EXTERNAL MCP SERVERS │ ┕━━━━━┳━━━━━┻━━━━━━━━━━━┙ ┗━━━━━━━━━━━━━━━━━━━━┛ │
└──────────────┬─────────┬─────────┬─────────┬─────────┬─────────┬──────┤ ┏━━━━━━━━━━┻━━━━━━━━━┓ │
│ │ │ │ │ │ │ │ ┃ Agent 3 INBOX ┃<━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┙
┌────┴───┐ │ ┌────┴───┐ │ ┌────┴───┐ │ ┌────┴───┐ │ ┣━━━━━━━━━━━━━━━━━━━━┫
│ MCP 1 │ │ │ MCP 2 │ │ │ MCP 3 │ │ │ MCP 4 │ │ ┃ current: none ┃
├────────┤ │ ├────────┤ │ ├────────┤ │ ├────────┤ │ ┃ [ view history ] ┃
│• tool 1│ │ │• tool 1│ │ │• tool 1│ │ │• tool 1│ │ ┗━━━━━━━━━━━━━━━━━━━━┛
│• tool 2│ │ │• tool 2│ │ │• tool 2│ │ │• tool 2│ │
│• tool 3│┌────┴───┐│• tool 3│┌────┴───┐│• tool 3│┌────┴───┐│• tool 3│┌─┴──────┐
└────────┘│ MCP 5 │└────────┘│ MCP 6 │└────────┘│ MCP 7 │└────────┘│ MCP 8 │
├────────┤ ├────────┤ ├────────┤ ├────────┤
│• tool 1│ │• tool 1│ │• tool 1│ │• tool 1│
│• tool 2│ │• tool 2│ │• tool 2│ │• tool 2│
│• tool 3│ │• tool 3│ │• tool 3│ │• tool 3│
└────────┘ └────────┘ └────────┘ └────────┘
🔥 WHAT HAPPENS:
1. Agent Coordinator reads mcp_servers.json config
2. Spawns & initializes all external MCP servers
3. Discovers tools from each server via MCP protocol
4. Builds unified tool registry (native + external)
5. Presents single MCP interface to AI agents
6. Routes tool calls to appropriate servers
7. Automatically tracks all operations as tasks
8. Maintains heartbeat & coordination across agents
```
## 🔧 MCP Server Management & Unified Tool Registry
Agent Coordinator acts as a **unified MCP proxy server** that manages multiple external MCP servers while providing its own coordination capabilities. This creates a single, powerful interface for AI agents to access hundreds of tools seamlessly.
### 📡 External Server Management
### External Server Management
The coordinator automatically manages external MCP servers based on configuration in `mcp_servers.json`:
@@ -114,7 +44,7 @@ The coordinator automatically manages external MCP servers based on configuratio
"mcp_filesystem": {
"type": "stdio",
"command": "bunx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/ra"],
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
"auto_restart": true,
"description": "Filesystem operations server"
},
@@ -124,12 +54,6 @@ The coordinator automatically manages external MCP servers based on configuratio
"args": ["-y", "@modelcontextprotocol/server-memory"],
"auto_restart": true,
"description": "Memory and knowledge graph server"
},
"mcp_figma": {
"type": "http",
"url": "http://127.0.0.1:3845/mcp",
"auto_restart": true,
"description": "Figma design integration server"
}
},
"config": {
@@ -141,521 +65,204 @@ The coordinator automatically manages external MCP servers based on configuratio
}
```
**Server Lifecycle Management:**
1. **🚀 Startup**: Reads config and spawns each external server process
2. **🔍 Discovery**: Sends MCP `initialize` and `tools/list` requests to discover available tools
3. **📋 Registration**: Adds discovered tools to the unified tool registry
4. **💓 Monitoring**: Continuously monitors server health and heartbeat
5. **🔄 Auto-Restart**: Automatically restarts failed servers (if configured)
6. **🛡️ Cleanup**: Properly terminates processes and cleans up resources on shutdown
### 🛠️ Unified Tool Registry
The coordinator combines tools from multiple sources into a single, coherent interface:
**Native Coordination Tools:**
- `register_agent` - Register agents with capabilities
- `create_task` - Create coordination tasks
- `get_next_task` - Get assigned tasks
- `complete_task` - Mark tasks complete
- `get_task_board` - View all agent status
- `heartbeat` - Maintain agent liveness
**External Server Tools (Auto-Discovered):**
- **Filesystem**: `read_file`, `write_file`, `list_directory`, `search_files`
- **Memory**: `search_nodes`, `store_memory`, `recall_information`
- **Context7**: `get-library-docs`, `search-docs`, `get-library-info`
- **Figma**: `get_code`, `get_designs`, `fetch_assets`
- **Sequential Thinking**: `sequentialthinking`, `analyze_problem`
- **VS Code**: `run_command`, `install_extension`, `open_file`, `create_task`
**Dynamic Discovery Process:**
```ascii
┌─────────────────┐ MCP Protocol ┌─────────────────┐
│ Agent │ ──────────────────▶│ Agent │
│ Coordinator │ │ Coordinator │
│ │ initialize │ │
│ 1. Starts │◀───────────────── │ 2. Responds │
│ External │ │ with info │
│ Server │ tools/list │ │
│ │ ──────────────────▶│ 3. Returns │
│ 4. Registers │ │ tool list │
│ Tools │◀───────────────── │ │
└─────────────────┘ └─────────────────┘
```
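Concretely, the discovery handshake is a pair of standard JSON-RPC requests defined by the MCP specification; the reply to `tools/list` is what populates the unified registry. A minimal sketch (the `clientInfo` values are illustrative, and responses are omitted):

```json
{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"protocolVersion": "2024-11-05", "capabilities": {}, "clientInfo": {"name": "agent-coordinator", "version": "0.1.0"}}}
{"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
```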
### 🎯 Intelligent Tool Routing
When an AI agent calls a tool, the coordinator intelligently routes the request:
**Routing Logic:**
1. **Native Tools**: Handled directly by Agent Coordinator modules
2. **External Tools**: Routed to the appropriate external MCP server
3. **VS Code Tools**: Routed to integrated VS Code Tool Provider
4. **Unknown Tools**: Return helpful error with available alternatives
**Automatic Task Tracking:**
- Every tool call automatically creates or updates agent tasks
- Maintains context of what agents are working on
- Provides visibility into cross-agent coordination
- Enables intelligent task distribution and conflict prevention
**Example Tool Call Flow:**
```bash
Agent calls "read_file" → Coordinator routes to filesystem server →
Updates agent task → Sends heartbeat → Returns file content
```
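The `read_file` flow above corresponds to a single `tools/call` request that the coordinator forwards to the filesystem server. A sketch of what the proxied request might look like, assuming the `path` argument name used by `@modelcontextprotocol/server-filesystem`:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "/workspace/README.md" }
  }
}
```

The coordinator records this call against the agent's current task and refreshes its heartbeat before returning the result.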
## 🛠️ Prerequisites
## Prerequisites
Choose one of these installation methods:
### Option 1: Docker (Recommended - No Elixir Installation Required)
[Docker](#1-start-nats-server)
- **Docker**: 20.10+ and Docker Compose
- **Node.js**: 18+ (for external MCP servers via bun)
### Option 2: Manual Installation
[Manual Installation](#manual-setup)
- **Elixir**: 1.16+ with OTP 26+
- **Mix**: Comes with Elixir installation
- **Node.js**: 18+ (for external MCP servers via bun)
- **Node.js**: 18+ (for some MCP servers)
- **uv**: If using python MCP servers
## ⚡ Quick Start
### Docker Setup
### Option A: Docker Setup (Easiest)
#### 1. Start NATS Server
#### 1. Get the Code
First, start a NATS server that the Agent Coordinator can connect to:
```bash
git clone https://github.com/your-username/agent_coordinator.git
cd agent_coordinator
# Start NATS server with persistent storage
docker run -d \
--name nats-server \
--network agent-coordinator-net \
-p 4222:4222 \
-p 8222:8222 \
-v nats_data:/data \
nats:2.10-alpine \
--jetstream \
--store_dir=/data \
--max_mem_store=1Gb \
--max_file_store=10Gb
# Create the network first if it doesn't exist
docker network create agent-coordinator-net
```
#### 2. Run with Docker Compose
#### 2. Configure Your AI Tools
**For STDIO Mode (Recommended - Direct MCP Integration):**
First, create a Docker network and start the NATS server:
```bash
# Start the full stack (MCP server + NATS + monitoring)
docker-compose up -d
# Create network for secure communication
docker network create agent-coordinator-net
# Or start just the MCP server
docker-compose up agent-coordinator
# Check logs
docker-compose logs -f agent-coordinator
# Start NATS server
docker run -d \
--name nats-server \
--network agent-coordinator-net \
-p 4222:4222 \
-v nats_data:/data \
nats:2.10-alpine \
--jetstream \
--store_dir=/data \
--max_mem_store=1Gb \
--max_file_store=10Gb
```
#### 3. Configuration
Then add this configuration to your VS Code `mcp.json` configuration file via `Ctrl+Shift+P` → `MCP: Open User Configuration` (or `MCP: Open Remote User Configuration` if running on a remote server):
Edit `mcp_servers.json` to configure external MCP servers, then restart:
```bash
docker-compose restart agent-coordinator
```json
{
"servers": {
"agent-coordinator": {
"command": "docker",
"args": [
"run",
"--network=agent-coordinator-net",
"-v=./mcp_servers.json:/app/mcp_servers.json:ro",
"-v=/path/to/your/workspace:/workspace:rw",
"-e=NATS_HOST=nats-server",
"-e=NATS_PORT=4222",
"-i",
"--rm",
"ghcr.io/rooba/agentcoordinator:latest"
],
"type": "stdio"
}
}
}
```
### Option B: Manual Setup
**Important Notes for File System Access:**
#### 1. Get the Code
If you're using MCP filesystem servers, mount the directories they need access to:
```json
{
"args": [
"run",
"--network=agent-coordinator-net",
"-v=./mcp_servers.json:/app/mcp_servers.json:ro",
"-v=/home/user/projects:/home/user/projects:rw",
"-v=/path/to/workspace:/workspace:rw",
"-e=NATS_HOST=nats-server",
"-e=NATS_PORT=4222",
"-i",
"--rm",
"ghcr.io/rooba/agentcoordinator:latest"
]
}
```
**For HTTP/WebSocket Mode (Alternative - Web API Access):**
If you prefer to run as a web service instead of stdio:
```bash
git clone https://github.com/your-username/agent_coordinator.git
cd agent_coordinator
# Create network first
docker network create agent-coordinator-net
# Start NATS server
docker run -d \
--name nats-server \
--network agent-coordinator-net \
-p 4222:4222 \
-v nats_data:/data \
nats:2.10-alpine \
--jetstream \
--store_dir=/data \
--max_mem_store=1Gb \
--max_file_store=10Gb
# Run Agent Coordinator in HTTP mode
docker run -d \
--name agent-coordinator \
--network agent-coordinator-net \
-p 8080:4000 \
-v ./mcp_servers.json:/app/mcp_servers.json:ro \
-v /path/to/workspace:/workspace:rw \
-e NATS_HOST=nats-server \
-e NATS_PORT=4222 \
-e MCP_INTERFACE_MODE=http \
-e MCP_HTTP_PORT=4000 \
ghcr.io/rooba/agentcoordinator:latest
```
Then access via HTTP API at `http://localhost:8080/mcp` or configure your MCP client to use the HTTP endpoint.
Create or edit `mcp_servers.json` in your project directory to configure external MCP servers:
```json
{
"servers": {
"mcp_filesystem": {
"type": "stdio",
"command": "bunx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
"auto_restart": true
}
}
}
```
### Manual Setup
#### 1. Clone the Repository
> It is suggested to install Elixir (and Erlang) via [asdf](https://asdf-vm.com/) for easy version management.
> NATS can be found at [nats.io](https://github.com/nats-io/nats-server/releases/latest), or run via Docker.
```bash
git clone https://github.com/rooba/agentcoordinator.git
cd agentcoordinator
mix deps.get
mix compile
```
#### 2. Start the MCP Server
```bash
# Start the MCP server directly
export MCP_INTERFACE_MODE=stdio # or http / websocket
# export MCP_HTTP_PORT=4000 # if using http mode
./scripts/mcp_launcher.sh
# Or in development mode
mix run --no-halt
```
### 3. Configure Your AI Tools
#### For Docker Setup
If using Docker, the MCP server is available at the container's stdio interface. Add this to your `mcp.json` or `mcp_servers.json` (depending on your tool):
```json
{
  "servers": {
    "agent-coordinator": {
      "command": "docker",
      "args": ["exec", "-i", "agent-coordinator", "/app/scripts/mcp_launcher.sh"],
      "env": {
        "MIX_ENV": "prod"
      }
    }
  }
}
```
#### For Manual Setup
Add this to your VS Code `settings.json`:
```json
{
  "github.copilot.advanced": {
    "mcp": {
      "servers": {
        "agent-coordinator": {
          "command": "/path/to/agent_coordinator/scripts/mcp_launcher.sh",
          "args": [],
          "env": {
            "MIX_ENV": "dev",
            "NATS_HOST": "localhost",
            "NATS_PORT": "4222"
          }
        }
      }
    }
  }
}
```
### 4. Test It Works
#### Docker Testing
```bash
# Test with Docker
docker-compose exec agent-coordinator /app/bin/agent_coordinator ping
# Run example (if available in container)
docker-compose exec agent-coordinator mix run examples/full_workflow_demo.exs
# View logs
docker-compose logs -f agent-coordinator
```
#### Manual Testing
```bash
# Run the demo to see it in action
mix run examples/full_workflow_demo.exs
```
## 🐳 Docker Usage Guide
### Available Docker Commands
#### Basic Operations
```bash
# Build the image
docker build -t agent-coordinator .
# Run standalone container
docker run -d --name agent-coordinator -p 4000:4000 agent-coordinator
# Run with custom config
docker run -d \
-v ./mcp_servers.json:/app/mcp_servers.json:ro \
-p 4000:4000 \
agent-coordinator
```
#### Docker Compose Operations
```bash
# Start full stack
docker-compose up -d
# Start only agent coordinator
docker-compose up -d agent-coordinator
# View logs
docker-compose logs -f agent-coordinator
# Restart after config changes
docker-compose restart agent-coordinator
# Stop everything
docker-compose down
# Remove volumes (reset data)
docker-compose down -v
```
#### Development with Docker
```bash
# Start in development mode
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
# Interactive shell for debugging
docker-compose exec agent-coordinator bash
# Run tests in container
docker-compose exec agent-coordinator mix test
# Watch logs during development
docker-compose logs -f
```
### Environment Variables
Configure the container using environment variables:
```bash
# docker-compose.override.yml example
version: '3.8'
services:
agent-coordinator:
environment:
- MIX_ENV=prod
- NATS_HOST=nats
- NATS_PORT=4222
- LOG_LEVEL=info
```
### Custom Configuration
#### External MCP Servers
Mount your own `mcp_servers.json`:
```bash
docker run -d \
-v ./my-mcp-config.json:/app/mcp_servers.json:ro \
agent-coordinator
```
#### Persistent Data
```bash
docker run -d \
-v agent_data:/app/data \
-v nats_data:/data \
agent-coordinator
```
### Monitoring & Health Checks
#### Container Health
```bash
# Check container health
docker-compose ps
# Health check details
docker inspect --format='{{json .State.Health}}' agent-coordinator
# Manual health check
docker-compose exec agent-coordinator /app/bin/agent_coordinator ping
```
#### NATS Monitoring
Access NATS monitoring dashboard:
```bash
# Start with monitoring profile
docker-compose --profile monitoring up -d
# Access dashboard at http://localhost:8080
open http://localhost:8080
```
### Troubleshooting
#### Common Issues
```bash
# Check container logs
docker-compose logs agent-coordinator
# Check NATS connectivity
docker-compose exec agent-coordinator nc -z nats 4222
# Restart stuck container
docker-compose restart agent-coordinator
# Reset everything
docker-compose down -v && docker-compose up -d
```
#### Performance Tuning
`docker-compose up` has no `--memory`/`--cpus` flags; set resource limits in the compose file instead:

```yaml
# docker-compose.override.yml
services:
  agent-coordinator:
    mem_limit: 1g
    cpus: 2.0
```
## 🎮 How to Use
Once your AI agents are connected via MCP, they can:
### Register as an Agent
```elixir
# An agent identifies itself with capabilities
register_agent("GitHub Copilot", ["coding", "testing"], codebase_id: "my-project")
```
### Create Tasks
```elixir
# Tasks are created with requirements
create_task("Fix login bug", "Authentication fails on mobile",
priority: "high",
required_capabilities: ["coding", "debugging"]
)
```
### Coordinate Automatically
The coordinator automatically:
- **Matches** tasks to agents based on capabilities
- **Queues** tasks when no suitable agents are available
- **Tracks** agent heartbeats to ensure they're still working
- **Handles** cross-codebase tasks that span multiple repositories
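The matching step can be sketched as a capability-subset check — an illustration only, not the coordinator's actual Elixir implementation:

```python
def match_task(task, agents):
    """Return the first agent whose capabilities cover the task's requirements,
    or None so the task stays queued until a suitable agent registers."""
    required = set(task["required_capabilities"])
    for agent in agents:
        if required <= set(agent["capabilities"]):
            return agent
    return None

agents = [
    {"name": "Copilot Silver Banana", "capabilities": ["coding", "testing"]},
    {"name": "Copilot Yellow Koala", "capabilities": ["coding", "debugging"]},
]
task = {"title": "Fix login bug", "required_capabilities": ["coding", "debugging"]}
assigned = match_task(task, agents)
```

Here the login-bug task needs both `coding` and `debugging`, so it lands on the second agent; a task requiring `docs` would queue.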
### Available MCP Tools
All MCP-compatible AI agents get these tools automatically:
| Tool | Purpose |
|------|---------|
| `register_agent` | Register an agent with capabilities |
| `create_task` | Create a new task with requirements |
| `get_next_task` | Get the next task assigned to an agent |
| `complete_task` | Mark current task as completed |
| `get_task_board` | View all agents and their status |
| `heartbeat` | Send agent heartbeat to stay active |
| `register_codebase` | Register a new codebase/repository |
| `create_cross_codebase_task` | Create tasks spanning multiple repos |
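Invoking any of these tools over MCP is a `tools/call` JSON-RPC message; a sketch mirroring the curl examples in the docs (the agent name is a placeholder):

```python
def tools_call(tool_name, arguments, req_id="2"):
    """Wrap a coordinator tool invocation in a JSON-RPC tools/call message."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

msg = tools_call("get_task_board", {"agent_id": "GitHub Copilot Olive Llama"})
```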
## 🧪 Development & Testing
### Running Tests
```bash
# Run all tests
mix test
# Run with coverage
mix test --cover
# Try the examples
mix run examples/full_workflow_demo.exs
mix run examples/auto_heartbeat_demo.exs
```
### Code Quality
```bash
# Format code
mix format
# Run static analysis
mix credo
# Type checking
mix dialyzer
```
## 📁 Project Structure
```text
agent_coordinator/
├── lib/
│ ├── agent_coordinator.ex # Main module
│ └── agent_coordinator/
│ ├── mcp_server.ex # MCP protocol implementation
│ ├── task_registry.ex # Task management
│ ├── agent.ex # Agent management
│ ├── codebase_registry.ex # Multi-repository support
│ └── application.ex # Application supervisor
├── examples/ # Working examples
├── test/ # Test suite
├── scripts/ # Helper scripts
└── docs/ # Technical documentation
├── README.md # Documentation index
├── AUTO_HEARTBEAT.md # Unified MCP server details
├── VSCODE_TOOL_INTEGRATION.md # VS Code integration
└── LANGUAGE_IMPLEMENTATIONS.md # Alternative language guides
```
## 🤔 Why This Design?
**The Problem**: Multiple AI agents working on the same codebase step on each other, duplicate work, or create conflicts.
**The Solution**: A coordination layer that:
- Lets agents register their capabilities
- Intelligently distributes tasks
- Tracks progress and prevents conflicts
- Scales across multiple repositories
**Why Elixir?**: Built-in concurrency, fault tolerance, and excellent for coordination systems.
## 🚀 Alternative Implementations
While this Elixir version works great, you might want to consider these languages for broader adoption:
### Go Implementation
- **Pros**: Single binary deployment, great performance, large community
- **Cons**: More verbose concurrency patterns
- **Best for**: Teams wanting simple deployment and good performance
### Python Implementation
- **Pros**: Huge ecosystem, familiar to most developers, excellent tooling
- **Cons**: GIL limitations for true concurrency
- **Best for**: AI/ML teams already using Python ecosystem
### Rust Implementation
- **Pros**: Maximum performance, memory safety, growing adoption
- **Cons**: Steeper learning curve, smaller ecosystem
- **Best for**: Performance-critical deployments
### Node.js Implementation
- **Pros**: JavaScript familiarity, event-driven nature fits coordination
- **Cons**: Single-threaded limitations, callback complexity
- **Best for**: Web teams already using Node.js
## 🤝 Contributing
Contributions are welcome! Here's how:
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- [Model Context Protocol](https://modelcontextprotocol.io/) for the agent communication standard
- [Elixir](https://elixir-lang.org/) community for the excellent ecosystem
- AI development teams pushing the boundaries of collaborative coding
---
**Agent Coordinator** - Making AI agents work together, not against each other.


@@ -18,7 +18,6 @@ services:
profiles:
- dev
# Lightweight development NATS without persistence
nats:
command:
- '--jetstream'


@@ -1,51 +1,17 @@
version: '3.8'
services:
# Agent Coordinator MCP Server
agent-coordinator:
build:
context: .
dockerfile: Dockerfile
container_name: agent-coordinator
environment:
- MIX_ENV=prod
- NATS_HOST=nats
- NATS_PORT=4222
volumes:
# Mount local mcp_servers.json for easy configuration
- ./mcp_servers.json:/app/mcp_servers.json:ro
# Mount a directory for persistent data (optional)
- agent_data:/app/data
ports:
# Expose port 4000 if the app serves HTTP endpoints
- "4000:4000"
depends_on:
nats:
condition: service_healthy
restart: unless-stopped
healthcheck:
test: ["/app/bin/agent_coordinator", "ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
# NATS Message Broker (optional but recommended for production)
nats:
image: nats:2.10-alpine
container_name: agent-coordinator-nats
command:
- '--jetstream'
- '--store_dir=/data'
- '--max_file_store=1G'
- '--max_mem_store=256M'
- '--http_port=8222'
ports:
# NATS client port
- "4222:4222"
# NATS HTTP monitoring port
- "8222:8222"
# NATS routing port for clustering
- "6222:6222"
- "4223:4222"
- "8223:8222"
- "6223:6222"
volumes:
- nats_data:/data
restart: unless-stopped
@@ -55,31 +21,32 @@ services:
timeout: 5s
retries: 3
start_period: 10s
networks:
- agent-coordinator-network
# Optional: NATS Monitoring Dashboard
nats-board:
image: devforth/nats-board:latest
container_name: agent-coordinator-nats-board
agent-coordinator:
image: ghcr.io/rooba/agentcoordinator:latest
container_name: agent-coordinator
environment:
- NATS_HOSTS=nats:4222
- NATS_HOST=nats
- NATS_PORT=4222
- MIX_ENV=prod
volumes:
- ./mcp_servers.json:/app/mcp_servers.json:ro
- ./workspace:/workspace:rw
ports:
- "8080:8080"
- "4000:4000"
depends_on:
nats:
condition: service_healthy
restart: unless-stopped
profiles:
- monitoring
networks:
- agent-coordinator-network
volumes:
# Persistent storage for NATS JetStream
nats_data:
driver: local
# Persistent storage for agent coordinator data
agent_data:
driver: local
networks:
default:
name: agent-coordinator-network
agent-coordinator-network:
driver: bridge


@@ -9,13 +9,23 @@ set -e
export MIX_ENV="${MIX_ENV:-prod}"
export NATS_HOST="${NATS_HOST:-localhost}"
export NATS_PORT="${NATS_PORT:-4222}"
export DOCKERIZED="true"
COLORIZED="${COLORIZED:-}"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
if [ ! -z "$COLORIZED" ]; then
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
else
RED=''
GREEN=''
YELLOW=''
BLUE=''
NC=''
fi
# Logging functions
log_info() {
@@ -30,22 +40,12 @@ log_error() {
echo -e "${RED}[ERROR]${NC} $1" >&2
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1" >&2
log_debug() {
echo -e "${GREEN}[DEBUG]${NC} $1" >&2
}
# Cleanup function for graceful shutdown
cleanup() {
log_info "Received shutdown signal, cleaning up..."
# Send termination signals to child processes
if [ ! -z "$MAIN_PID" ]; then
log_info "Stopping main process (PID: $MAIN_PID)..."
kill -TERM "$MAIN_PID" 2>/dev/null || true
wait "$MAIN_PID" 2>/dev/null || true
fi
log_success "Cleanup completed"
log_info "Received shutdown signal, shutting down..."
exit 0
}
@@ -62,7 +62,7 @@ wait_for_nats() {
while [ $count -lt $timeout ]; do
if nc -z "$NATS_HOST" "$NATS_PORT" 2>/dev/null; then
log_success "NATS is available"
log_debug "NATS is available"
return 0
fi
@@ -88,13 +88,7 @@ validate_config() {
exit 1
fi
# Validate JSON
if ! cat /app/mcp_servers.json | bun run -e "JSON.parse(require('fs').readFileSync(0, 'utf8'))" >/dev/null 2>&1; then
log_error "Invalid JSON in mcp_servers.json"
exit 1
fi
log_success "Configuration validation passed"
log_debug "Configuration validation passed"
}
# Pre-install external MCP server dependencies
@@ -120,7 +114,7 @@ preinstall_dependencies() {
bun add --global --silent "$package" || log_warn "Failed to cache $package"
done
log_success "Dependencies pre-installed"
log_debug "Dependencies pre-installed"
}
# Main execution
@@ -129,6 +123,7 @@ main() {
log_info "Environment: $MIX_ENV"
log_info "NATS: $NATS_HOST:$NATS_PORT"
# Validate configuration
validate_config
@@ -147,8 +142,7 @@ main() {
if [ "$#" -eq 0 ] || [ "$1" = "/app/scripts/mcp_launcher.sh" ]; then
# Default: start the MCP server
log_info "Starting MCP server via launcher script..."
exec "/app/scripts/mcp_launcher.sh" &
MAIN_PID=$!
exec "/app/scripts/mcp_launcher.sh"
elif [ "$1" = "bash" ] || [ "$1" = "sh" ]; then
# Interactive shell mode
log_info "Starting interactive shell..."
@@ -156,21 +150,10 @@ main() {
elif [ "$1" = "release" ]; then
# Direct release mode
log_info "Starting via Elixir release..."
exec "/app/bin/agent_coordinator" "start" &
MAIN_PID=$!
exec "/app/bin/agent_coordinator" "start"
else
# Custom command
log_info "Starting custom command: $*"
exec "$@" &
MAIN_PID=$!
fi
# Wait for the main process if it's running in background
if [ ! -z "$MAIN_PID" ]; then
log_success "Main process started (PID: $MAIN_PID)"
wait "$MAIN_PID"
exit 0
fi
}
# Execute main function with all arguments
main "$@"


@@ -0,0 +1,253 @@
# MCP Compliance Enhancement Summary
## Overview
This document summarizes the enhanced Model Context Protocol (MCP) compliance features implemented in the Agent Coordinator system, focusing on session management, security, and real-time streaming capabilities.
## Implemented Features
### 1. 🔐 Enhanced Session Management
#### Session Token Authentication
- **Implementation**: Modified `register_agent` to return cryptographically secure session tokens
- **Token Format**: 32-byte secure random tokens, Base64 encoded
- **Expiry**: 60-minute session timeout with automatic cleanup
- **Headers**: Support for `Mcp-Session-Id` header (MCP compliant) and `X-Session-Id` (legacy)
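That token scheme can be sketched in Python (the real implementation uses Elixir's `:crypto.strong_rand_bytes/1` inside the SessionManager GenServer; function names here are illustrative):

```python
import base64
import os
from datetime import datetime, timedelta, timezone

SESSION_TTL = timedelta(minutes=60)  # 60-minute session timeout

def create_session(agent_id):
    """Issue a 32-byte secure random token, Base64 encoded, with an expiry."""
    token = base64.b64encode(os.urandom(32)).decode()
    return {
        "agent_id": agent_id,
        "session_token": token,
        "expires_at": datetime.now(timezone.utc) + SESSION_TTL,
    }

def validate_session(session):
    """A session is valid until its expiry; expired ones get cleaned up."""
    return datetime.now(timezone.utc) < session["expires_at"]

session = create_session("My Agent Blue Koala")
```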
#### Session Validation Flow
```
Client Server
| |
|-- POST /mcp/request ---->|
| register_agent |
| |
|<-- session_token --------|
| expires_at |
| |
|-- Subsequent requests -->|
| Mcp-Session-Id: token |
| |
|<-- Authenticated resp ---|
```
#### Key Components
- **SessionManager GenServer**: Manages token lifecycle and validation
- **Secure token generation**: Uses `:crypto.strong_rand_bytes/1`
- **Automatic cleanup**: Periodic removal of expired sessions
- **Backward compatibility**: Supports legacy X-Session-Id headers
### 2. 📋 MCP Protocol Version Compliance
#### Protocol Headers
- **MCP-Protocol-Version**: `2025-06-18` (current specification)
- **Server**: `AgentCoordinator/1.0` identification
- **Applied to**: All JSON responses via enhanced `send_json_response/3`
#### CORS Enhancement
- **Session Headers**: Added `mcp-session-id`, `mcp-protocol-version` to allowed headers
- **Exposed Headers**: Protocol version and server headers exposed to clients
- **Security**: Enhanced origin validation with localhost and HTTPS preference
### 3. 🔒 Security Enhancements
#### Origin Validation
```elixir
defp validate_origin(origin) do
case URI.parse(origin) do
%URI{host: host} when host in ["localhost", "127.0.0.1", "::1"] -> origin
%URI{host: host} when is_binary(host) ->
if String.starts_with?(origin, "https://") or
String.contains?(host, ["localhost", "127.0.0.1", "dev", "local"]) do
origin
else
Logger.warning("Potentially unsafe origin: #{origin}")
"*"
end
_ -> "*"
end
end
```
#### Authenticated Method Protection
Protected methods requiring valid session tokens:
- `agents/register`
- `agents/unregister`
- `agents/heartbeat`
- `tasks/create`
- `tasks/complete`
- `codebase/register`
- `stream/subscribe`
### 4. 📡 Server-Sent Events (SSE) Support
#### Real-time Streaming Endpoint
- **Endpoint**: `GET /mcp/stream`
- **Transport**: Streamable HTTP (MCP specification)
- **Authentication**: Requires valid session token
- **Content-Type**: `text/event-stream`
#### SSE Event Format
```
event: connected
data: {"session_id":"agent_123","protocol_version":"2025-06-18","timestamp":"2025-01-11T..."}
event: heartbeat
data: {"timestamp":"2025-01-11T...","session_id":"agent_123"}
```
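Consuming that stream amounts to splitting `event:`/`data:` lines on blank-line boundaries; a minimal parser sketch:

```python
import json

def parse_sse(raw):
    """Parse a block of Server-Sent Events text into (event, data) pairs."""
    events = []
    for chunk in raw.strip().split("\n\n"):  # events are blank-line separated
        event, data = None, ""
        for line in chunk.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data += line[len("data:"):].strip()
        events.append((event, json.loads(data) if data else None))
    return events

stream = 'event: heartbeat\ndata: {"session_id": "agent_123"}\n'
parsed = parse_sse(stream)
```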
#### Features
- **Connection establishment**: Sends initial `connected` event
- **Heartbeat**: Periodic keepalive events
- **Session tracking**: Events include session context
- **Graceful disconnection**: Handles client disconnects
## Technical Implementation Details
### File Structure
```
lib/agent_coordinator/
├── session_manager.ex # Session token management
├── mcp_server.ex # Enhanced register_agent
├── http_interface.ex # HTTP/SSE endpoints + security
└── application.ex # Supervision tree
```
### Session Manager API
```elixir
# Create new session
{:ok, session_info} = SessionManager.create_session(agent_id, capabilities)
# Validate existing session
{:ok, session_info} = SessionManager.validate_session(token)
{:error, :expired} = SessionManager.validate_session(old_token)
# Manual cleanup (automatic via timer)
SessionManager.cleanup_expired_sessions()
```
### HTTP Interface Enhancements
```elixir
# Session validation middleware
case validate_session_for_method(method, conn, context) do
{:ok, session_info} -> # Process request
{:error, auth_error} -> # Return 401 Unauthorized
end
# MCP headers on all responses
defp put_mcp_headers(conn) do
conn
|> put_resp_header("mcp-protocol-version", "2025-06-18")
|> put_resp_header("server", "AgentCoordinator/1.0")
end
```
## Usage Examples
### 1. Agent Registration with Session Token
```bash
curl -X POST http://localhost:4000/mcp/request \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"id": "1",
"method": "agents/register",
"params": {
"name": "My Agent Blue Koala",
"capabilities": ["coding", "testing"],
"codebase_id": "my_project"
}
}'
# Response:
{
"jsonrpc": "2.0",
"id": "1",
"result": {
"agent_id": "My Agent Blue Koala",
"session_token": "abc123...",
"expires_at": "2025-01-11T15:30:00Z"
}
}
```
### 2. Authenticated Tool Call
```bash
curl -X POST http://localhost:4000/mcp/request \
-H "Content-Type: application/json" \
-H "Mcp-Session-Id: abc123..." \
-d '{
"jsonrpc": "2.0",
"id": "2",
"method": "tools/call",
"params": {
"name": "get_task_board",
"arguments": {"agent_id": "My Agent Blue Koala"}
}
}'
```
### 3. Server-Sent Events Stream
```javascript
// EventSource cannot send custom headers, so use fetch with a
// streaming reader to pass the Mcp-Session-Id header:
const response = await fetch('/mcp/stream', {
  headers: { 'Mcp-Session-Id': 'abc123...' }
});
const reader = response.body.getReader();
const decoder = new TextDecoder();
for (;;) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log('Received:', decoder.decode(value));
}
```
## Testing and Verification
### Automated Test Script
- **File**: `test_session_management.exs`
- **Coverage**: Registration flow, session validation, protocol headers
- **Usage**: `elixir test_session_management.exs`
### Manual Testing
1. Start server: `mix phx.server`
2. Register agent via `/mcp/request`
3. Use returned session token for authenticated calls
4. Verify MCP headers in responses
5. Test SSE stream endpoint
## Benefits
### 🔐 Security
- **Token-based authentication**: Prevents unauthorized access
- **Session expiry**: Limits exposure of compromised tokens
- **Origin validation**: Mitigates CSRF and unauthorized origins
- **Method-level protection**: Granular access control
### 📋 MCP Compliance
- **Official protocol version**: Headers indicate MCP 2025-06-18 support
- **Streamable HTTP**: Real-time capabilities via SSE
- **Proper error handling**: Standard JSON-RPC error responses
- **Session context**: Request metadata for debugging
### 🚀 Developer Experience
- **Backward compatibility**: Legacy headers still supported
- **Clear error messages**: Detailed authentication failure reasons
- **Real-time updates**: Live agent status via SSE
- **Easy testing**: Comprehensive test utilities
## Future Enhancements
### Planned Features
- **PubSub integration**: Event-driven SSE updates
- **Session persistence**: Redis/database backing
- **Rate limiting**: Per-session request throttling
- **Audit logging**: Session activity tracking
- **WebSocket upgrade**: Bidirectional real-time communication
### Configuration Options
- **Session timeout**: Configurable expiry duration
- **Security levels**: Strict/permissive origin validation
- **Token rotation**: Automatic refresh mechanisms
- **Multi-tenancy**: Workspace-scoped sessions
---
*This implementation provides a solid foundation for MCP-compliant session management while maintaining the flexibility to extend with additional features as requirements evolve.*

docs/MULTI_INTERFACE.md (new file, 279 lines)

@@ -0,0 +1,279 @@
# Agent Coordinator Multi-Interface MCP Server
The Agent Coordinator now supports multiple interface modes to accommodate different client types and use cases, from local VSCode integration to remote web applications.
## Interface Modes
### 1. STDIO Mode (Default)
Traditional MCP over stdin/stdout for local clients like VSCode.
**Features:**
- Full tool access (filesystem, VSCode, terminal tools)
- Local security context (trusted)
- Backward compatible with existing MCP clients
**Usage:**
```bash
./scripts/mcp_launcher_multi.sh stdio
# or
./scripts/mcp_launcher.sh # original launcher
```
### 2. HTTP Mode
REST API interface for remote clients and web applications.
**Features:**
- HTTP endpoints for MCP operations
- Tool filtering (removes local-only tools)
- CORS support for web clients
- Remote security context (sandboxed)
**Usage:**
```bash
./scripts/mcp_launcher_multi.sh http 8080
```
**Endpoints:**
- `GET /health` - Health check
- `GET /mcp/capabilities` - Server capabilities and filtered tools
- `GET /mcp/tools` - List available tools (filtered by context)
- `POST /mcp/tools/:tool_name` - Execute specific tool
- `POST /mcp/request` - Full MCP JSON-RPC request
- `GET /agents` - Agent status (requires authorization)
### 3. WebSocket Mode
Real-time interface for web clients requiring live updates.
**Features:**
- Real-time MCP JSON-RPC over WebSocket
- Tool filtering for remote clients
- Session management and heartbeat
- Automatic cleanup on disconnect
**Usage:**
```bash
./scripts/mcp_launcher_multi.sh websocket 8081
```
**Endpoint:**
- `ws://localhost:8081/mcp/ws` - WebSocket connection
### 4. Remote Mode
Both HTTP and WebSocket on the same port for complete remote access.
**Usage:**
```bash
./scripts/mcp_launcher_multi.sh remote 8080
```
### 5. All Mode
All interface modes simultaneously for maximum compatibility.
**Usage:**
```bash
./scripts/mcp_launcher_multi.sh all 8080
```
## Tool Filtering
The system intelligently filters available tools based on client context:
### Local Clients (STDIO)
- **Context**: Trusted, local machine
- **Tools**: All tools available
- **Use case**: VSCode extension, local development
### Remote Clients (HTTP/WebSocket)
- **Context**: Sandboxed, remote access
- **Tools**: Filtered to exclude local-only operations
- **Use case**: Web applications, CI/CD, remote dashboards
### Tool Categories
**Always Available (All Contexts):**
- Agent coordination: `register_agent`, `create_task`, `get_task_board`, `heartbeat`
- Memory/Knowledge: `create_entities`, `read_graph`, `search_nodes`
- Documentation: `get-library-docs`, `resolve-library-id`
- Reasoning: `sequentialthinking`
**Local Only (Filtered for Remote):**
- Filesystem: `read_file`, `write_file`, `create_file`, `delete_file`
- VSCode: `vscode_*` tools
- Terminal: `run_in_terminal`, `get_terminal_output`
- System: Local file operations
## Configuration
Configuration is managed through environment variables and config files:
### Environment Variables
- `MCP_INTERFACE_MODE`: Interface mode (`stdio`, `http`, `websocket`, `remote`, `all`)
- `MCP_HTTP_PORT`: HTTP server port (default: 8080)
- `MCP_WS_PORT`: WebSocket port (default: 8081)
### Configuration File
See `mcp_interfaces_config.json` for detailed configuration options.
## Security Considerations
### Local Context (STDIO)
- Full filesystem access
- Trusted environment
- No network exposure
### Remote Context (HTTP/WebSocket)
- Sandboxed environment
- Tool filtering active
- CORS protection
- No local file access
### Tool Filtering Rules
1. **Allowlist approach**: Safe tools are explicitly allowed for remote clients
2. **Pattern matching**: Local-only tools identified by name patterns
3. **Schema analysis**: Tools with local-only parameters are filtered
4. **Context-aware**: Different tool sets per connection type
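Those rules can be sketched as follows (tool names are taken from the categories above; the constants and function are illustrative, not the actual Elixir Tool Filter):

```python
# Safe tools explicitly allowed for every context (allowlist approach)
ALWAYS_ALLOWED = {
    "register_agent", "create_task", "get_task_board", "heartbeat",
    "create_entities", "read_graph", "search_nodes",
    "get-library-docs", "resolve-library-id", "sequentialthinking",
}

# Name patterns that mark local-only tools (pattern matching)
LOCAL_ONLY_PREFIXES = (
    "vscode_", "run_in_terminal", "get_terminal_output",
    "read_file", "write_file", "create_file", "delete_file",
)

def filter_tools(tools, context):
    """Context-aware filtering: local clients see everything; remote
    clients only get allowlisted tools that match no local-only pattern."""
    if context == "local":
        return list(tools)
    return [t for t in tools
            if t in ALWAYS_ALLOWED and not t.startswith(LOCAL_ONLY_PREFIXES)]

tools = ["register_agent", "read_file", "vscode_open", "heartbeat"]
```

A remote client asking for that list would be left with only the coordination tools.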
## Client Examples
### HTTP Client (Python)
```python
import requests
# Get available tools
response = requests.get("http://localhost:8080/mcp/tools")
tools = response.json()
# Register an agent
agent_data = {
"arguments": {
"name": "Remote Agent",
"capabilities": ["analysis", "coordination"]
}
}
response = requests.post("http://localhost:8080/mcp/tools/register_agent",
json=agent_data)
```
### WebSocket Client (JavaScript)
```javascript
const ws = new WebSocket('ws://localhost:8080/mcp/ws');
ws.onopen = () => {
// Initialize connection
ws.send(JSON.stringify({
jsonrpc: "2.0",
id: 1,
method: "initialize",
params: {
protocolVersion: "2024-11-05",
clientInfo: { name: "web-client", version: "1.0.0" }
}
}));
};
ws.onmessage = (event) => {
const response = JSON.parse(event.data);
console.log('MCP Response:', response);
};
```
### VSCode MCP (Traditional)
```json
{
"mcpServers": {
"agent-coordinator": {
"command": "./scripts/mcp_launcher_multi.sh",
"args": ["stdio"]
}
}
}
```
## Testing
Run the test suite to verify all interface modes:
```bash
# Start the server in remote mode
./scripts/mcp_launcher_multi.sh remote 8080 &
# Run tests
python3 scripts/test_multi_interface.py
# Stop the server
kill %1
```
## Use Cases
### VSCode Extension Development
```bash
./scripts/mcp_launcher_multi.sh stdio
```
Full local tool access for development workflows.
### Web Dashboard
```bash
./scripts/mcp_launcher_multi.sh remote 8080
```
Remote access with HTTP API and WebSocket for real-time updates.
### CI/CD Integration
```bash
./scripts/mcp_launcher_multi.sh http 8080
```
REST API access for automated workflows.
### Development/Testing
```bash
./scripts/mcp_launcher_multi.sh all 8080
```
All interfaces available for comprehensive testing.
## Architecture
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ STDIO Client │ │ HTTP Client │ │ WebSocket Client│
│ (VSCode) │ │ (Web/API) │ │ (Web/Real-time)│
└─────────┬───────┘ └─────────┬───────┘ └─────────┬───────┘
│ │ │
│ Full Tools │ Filtered Tools │ Filtered Tools
│ │ │
v v v
┌─────────────────────────────────────────────────────────────────────┐
│ Interface Manager │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ STDIO │ │ HTTP │ │ WebSocket │ │
│ │ Interface │ │ Interface │ │ Interface │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────┬───────────────────────────────────────────────┘
v
┌─────────────────────────────────────────────────────────────────────┐
│ Tool Filter │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Local Context │ │ Remote Context │ │ Web Context │ │
│ │ (Full Access) │ │ (Sandboxed) │ │ (Restricted) │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
└─────────────────────┬───────────────────────────────────────────────┘
v
┌─────────────────────────────────────────────────────────────────────┐
│ MCP Server │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Agent Registry │ │ Task Manager │ │ External MCPs │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘
```
## Benefits
1. **Flexible Deployment**: Choose the right interface for your use case
2. **Security**: Automatic tool filtering prevents unauthorized local access
3. **Scalability**: HTTP/WebSocket interfaces support multiple concurrent clients
4. **Backward Compatibility**: STDIO mode maintains compatibility with existing tools
5. **Real-time Capability**: WebSocket enables live updates and notifications
6. **Developer Experience**: Consistent MCP protocol across all interfaces
The multi-interface system allows the Agent Coordinator to serve both local development workflows and remote/web applications while maintaining security and appropriate tool access levels.

View File

@@ -0,0 +1,305 @@
<svg viewBox="0 0 1200 800" xmlns="http://www.w3.org/2000/svg">
<defs>
<style>
.agent-box {
fill: #e3f2fd;
stroke: #1976d2;
stroke-width: 2;
}
.coordinator-box {
fill: #f3e5f5;
stroke: #7b1fa2;
stroke-width: 3;
}
.component-box {
fill: #fff3e0;
stroke: #f57c00;
stroke-width: 2;
}
.mcp-server-box {
fill: #e8f5e8;
stroke: #388e3c;
stroke-width: 2;
}
.taskboard-box {
fill: #fff8e1;
stroke: #ffa000;
stroke-width: 2;
}
.text {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
font-size: 12px;
}
.title-text {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
font-size: 14px;
font-weight: bold;
}
.small-text {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
font-size: 10px;
}
.connection-line {
stroke: #666;
stroke-width: 2;
fill: none;
}
.mcp-line {
stroke: #1976d2;
stroke-width: 3;
fill: none;
}
.data-flow {
stroke: #4caf50;
stroke-width: 2;
fill: none;
stroke-dasharray: 5,5;
}
.text-bg {
fill: white;
fill-opacity: 0.9;
stroke: #333;
stroke-width: 1; rx: 4;
}
.overlay-text {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
font-size: 12px;
font-weight: bold;
}
</style>
<!-- Arrow marker -->
<marker id="arrowhead" markerWidth="6" markerHeight="4"
refX="5" refY="2" orient="auto">
<polygon points="0 0, 6 2, 0 4" fill="none" stroke="#666" stroke-width="0.01" />
</marker>
<!-- MCP Arrow marker -->
<marker id="mcpArrow" markerWidth="6" markerHeight="4"
refX="5" refY="2" orient="auto">
<polygon points="0 0, 6 2, 0 4" fill="#1976d2" stroke="#1976d2" stroke-width="0.01" />
</marker>
<!-- Data flow arrow -->
<marker id="dataArrow" markerWidth="6" markerHeight="4"
refX="5" refY="2" orient="auto">
<polygon points="0 0, 6 2, 0 4" fill="#4caf50" stroke="#4caf50" stroke-width="1" />
</marker>
</defs>
<!-- Background -->
<rect width="1200" height="800" fill="#fafafa03" />
<!-- Title -->
<text x="600" y="30" text-anchor="middle" class="title-text" font-size="18" fill="#333">
Agent Coordinator: MCP Proxy Server Architecture
</text>
<!-- AI Agents Section -->
<text x="600" y="55" text-anchor="middle" class="text" fill="#666">
Single MCP Interface → Multiple AI Agents → Unified Project Awareness
</text>
<!-- Agent 1 -->
<rect x="50" y="80" width="150" height="80" rx="8" class="agent-box" />
<text x="125" y="105" text-anchor="middle" class="title-text" fill="#1976d2">Agent 1</text>
<text x="125" y="120" text-anchor="middle" class="small-text" fill="#666">Purple Zebra</text>
<text x="125" y="135" text-anchor="middle" class="small-text" fill="#666">Capabilities:</text>
<text x="125" y="148" text-anchor="middle" class="small-text" fill="#666">coding, testing</text>
<!-- Agent 2 -->
<rect x="250" y="80" width="150" height="80" rx="8" class="agent-box" />
<text x="325" y="105" text-anchor="middle" class="title-text" fill="#1976d2">Agent 2</text>
<text x="325" y="120" text-anchor="middle" class="small-text" fill="#666">Yellow Elephant</text>
<text x="325" y="135" text-anchor="middle" class="small-text" fill="#666">Capabilities:</text>
<text x="325" y="148" text-anchor="middle" class="small-text" fill="#666">analysis, docs</text>
<!-- Agent N -->
<rect x="450" y="80" width="150" height="80" rx="8" class="agent-box" />
<text x="525" y="105" text-anchor="middle" class="title-text" fill="#1976d2">Agent N</text>
<text x="525" y="120" text-anchor="middle" class="small-text" fill="#666">More Agents...</text>
<text x="525" y="135" text-anchor="middle" class="small-text" fill="#666">Dynamic</text>
<text x="525" y="148" text-anchor="middle" class="small-text" fill="#666">Registration</text>
<!-- Lines from agents to coordinator (drawn first, behind text) -->
<line x1="125" y1="160" x2="130" y2="220" class="mcp-line" marker-end="url(#mcpArrow)" />
<line x1="325" y1="160" x2="330" y2="220" class="mcp-line" marker-end="url(#mcpArrow)" />
<line x1="525" y1="160" x2="525" y2="220" class="mcp-line" marker-end="url(#mcpArrow)" />
<!-- MCP Protocol text with background (drawn on top of lines) -->
<rect x="200" y="167" width="250" height="25" class="text-bg" />
<text x="325" y="185" text-anchor="middle" class="overlay-text">
MCP Protocol → Single Proxy Interface
</text>
<!-- Main Coordinator Box -->
<rect x="50" y="220" width="600" height="280" rx="12" class="coordinator-box" />
<text x="350" y="245" text-anchor="middle" class="title-text" font-size="16">
AGENT COORDINATOR (MCP Proxy Server)
</text>
<text x="350" y="255" text-anchor="middle" class="small-text" fill="#9c27b0">
⚡ All tool calls proxy through here → Real-time agent tracking → Full project awareness
</text>
<!-- Core Components Row -->
<!-- Task Registry -->
<rect x="70" y="260" width="160" height="100" rx="6" class="component-box" />
<text x="150" y="280" text-anchor="middle" class="title-text" fill="#f57c00">Task Registry</text>
<text x="150" y="298" text-anchor="middle" class="small-text" fill="#666">• Task Queuing</text>
<text x="150" y="311" text-anchor="middle" class="small-text" fill="#666">• Agent Matching</text>
<text x="150" y="324" text-anchor="middle" class="small-text" fill="#666">• Auto-Tracking</text>
<text x="150" y="337" text-anchor="middle" class="small-text" fill="#666">• Progress Monitor</text>
<text x="150" y="350" text-anchor="middle" class="small-text" fill="#666">• Conflict Prevention</text>
<!-- Agent Manager -->
<rect x="250" y="260" width="160" height="100" rx="6" class="component-box" />
<text x="330" y="280" text-anchor="middle" class="title-text" fill="#f57c00">Agent Manager</text>
<text x="330" y="298" text-anchor="middle" class="small-text" fill="#666">• Registration</text>
<text x="330" y="311" text-anchor="middle" class="small-text" fill="#666">• Heartbeat Monitor</text>
<text x="330" y="324" text-anchor="middle" class="small-text" fill="#666">• Capabilities</text>
<text x="330" y="337" text-anchor="middle" class="small-text" fill="#666">• Status Tracking</text>
<text x="330" y="350" text-anchor="middle" class="small-text" fill="#666">• Load Balancing</text>
<!-- Codebase Registry -->
<rect x="430" y="260" width="160" height="100" rx="6" class="component-box" />
<text x="510" y="280" text-anchor="middle" class="title-text" fill="#f57c00">Codebase Registry</text>
<text x="510" y="298" text-anchor="middle" class="small-text" fill="#666">• Cross-Repo</text>
<text x="510" y="311" text-anchor="middle" class="small-text" fill="#666">• Dependencies</text>
<text x="510" y="324" text-anchor="middle" class="small-text" fill="#666">• Workspace Mgmt</text>
<text x="510" y="337" text-anchor="middle" class="small-text" fill="#666">• File Locking</text>
<text x="510" y="350" text-anchor="middle" class="small-text" fill="#666">• Version Control</text>
<!-- Unified Tool Registry -->
<rect x="70" y="380" width="520" height="100" rx="6" class="component-box" />
<text x="330" y="400" text-anchor="middle" class="title-text" fill="#f57c00">UNIFIED TOOL REGISTRY (Proxy Layer)</text>
<text x="330" y="415" text-anchor="middle" class="small-text" fill="#f57c00">Every tool call = Agent presence update + Task tracking + Project awareness</text>
<!-- Native Tools -->
<text x="90" y="435" class="small-text" fill="#666" font-weight="bold">Native Tools:</text>
<text x="90" y="463" class="small-text" fill="#666" font-weight="bold">Proxied External Tools:</text>
<!-- External Tools -->
<text x="320" y="435" class="small-text" fill="#666">register_agent, get_next_task, create_task_set,</text>
<text x="320" y="449" class="small-text" fill="#666">complete_task, heartbeat, get_task_board</text>
<text x="320" y="463" class="small-text" fill="#666">read_file, write_file, search_memory, get_docs</text>
<!-- VS Code Tools -->
<text x="90" y="477" class="small-text" fill="#666" font-weight="bold">VS Code Integration:</text>
<text x="320" y="477" class="small-text" fill="#666">get_active_editor, set_selection, install_extension</text>
<!-- Task Board (Right side) -->
<rect x="680" y="220" width="260" height="280" rx="8" class="coordinator-box"/>
<text x="810" y="245" text-anchor="middle" class="title-text">Real-Time Task Board</text>
<!-- Agent Queues -->
<rect x="700" y="260" width="100" height="80" rx="4" class="component-box"/>
<text x="750" y="275" text-anchor="middle" class="small-text" fill="#666" font-weight="bold">Agent 1 Queue</text>
<text x="750" y="290" text-anchor="middle" class="small-text" fill="#4caf50">✓ Task 1</text>
<text x="750" y="303" text-anchor="middle" class="small-text" fill="#4caf50">✓ Task 2</text>
<text x="750" y="316" text-anchor="middle" class="small-text" fill="#ff9800">→ Task 3</text>
<text x="750" y="329" text-anchor="middle" class="small-text" fill="#666">… Task 4</text>
<rect x="820" y="260" width="100" height="80" rx="4" class="component-box" />
<text x="870" y="275" text-anchor="middle" class="small-text" fill="#666" font-weight="bold">Agent 2 Queue</text>
<text x="870" y="290" text-anchor="middle" class="small-text" fill="#4caf50">✓ Task 1</text>
<text x="870" y="303" text-anchor="middle" class="small-text" fill="#ff9800">→ Task 2</text>
<text x="870" y="316" text-anchor="middle" class="small-text" fill="#666">… Task 3</text>
<text x="870" y="329" text-anchor="middle" class="small-text" fill="#666">… Task 4</text>
<!-- Agent Inboxes -->
<rect x="700" y="360" width="100" height="60" rx="4" fill="#e3f2fd" stroke="#1976d2" stroke-width="1" />
<text x="750" y="375" text-anchor="middle" class="small-text" fill="#1976d2" font-weight="bold">Agent 1 Inbox</text>
<text x="750" y="390" text-anchor="middle" class="small-text" fill="#666">current: task 3</text>
<text x="750" y="403" text-anchor="middle" class="small-text" fill="#666">[complete task]</text>
<rect x="820" y="360" width="100" height="60" rx="4" fill="#e3f2fd" stroke="#1976d2" stroke-width="1" />
<text x="870" y="375" text-anchor="middle" class="small-text" fill="#1976d2" font-weight="bold">Agent 2 Inbox</text>
<text x="870" y="390" text-anchor="middle" class="small-text" fill="#666">current: task 2</text>
<text x="870" y="403" text-anchor="middle" class="small-text" fill="#666">[complete task]</text>
<!-- Connection lines from coordinator to external servers (drawn first, behind text) -->
<line x1="350" y1="500" x2="110" y2="550" class="connection-line" marker-end="url(#arrowhead)" />
<line x1="350" y1="500" x2="250" y2="550" class="connection-line" marker-end="url(#arrowhead)" />
<line x1="350" y1="500" x2="390" y2="550" class="connection-line" marker-end="url(#arrowhead)" />
<line x1="350" y1="500" x2="530" y2="550" class="connection-line" marker-end="url(#arrowhead)" />
<!-- Data flow line to task board (drawn first, behind text) -->
<line x1="650" y1="350" x2="680" y2="350" class="data-flow" marker-end="url(#dataArrow)" />
<!-- PROXY arrows showing reverse direction - tools flow UP through coordinator -->
<line x1="110" y1="550" x2="330" y2="500" class="mcp-line" marker-end="url(#mcpArrow)" stroke-dasharray="3,3" />
<line x1="250" y1="550" x2="340" y2="500" class="mcp-line" marker-end="url(#mcpArrow)" stroke-dasharray="3,3" />
<line x1="390" y1="550" x2="360" y2="500" class="mcp-line" marker-end="url(#mcpArrow)" stroke-dasharray="3,3" />
<line x1="530" y1="550" x2="370" y2="500" class="mcp-line" marker-end="url(#mcpArrow)" stroke-dasharray="3,3" />
<!-- External MCP Servers Section title with background -->
<rect x="210" y="520" width="280" height="25" class="text-bg" />
<text x="350" y="535" text-anchor="middle" class="overlay-text" fill="#388e3c">
External MCP Servers (Proxied via Coordinator)
</text>
<!-- Proxy flow label -->
<rect x="550" y="520" width="140" height="25" class="text-bg" />
<text x="620" y="535" text-anchor="middle" class="small-text" fill="#1976d2" font-weight="bold">
⇅ Proxied Tool Calls
</text>
<!-- Data flow label with background -->
<rect x="630" y="340" width="80" height="20" class="text-bg" />
<text x="670" y="352" text-anchor="middle" class="small-text" fill="#4caf50" font-weight="bold">
Live Updates
</text>
<!-- MCP Server boxes -->
<rect x="50" y="550" width="120" height="80" rx="6" class="mcp-server-box" />
<text x="110" y="570" text-anchor="middle" class="title-text" fill="#388e3c">Filesystem</text>
<text x="110" y="585" text-anchor="middle" class="small-text" fill="#666">read_file</text>
<text x="110" y="598" text-anchor="middle" class="small-text" fill="#666">write_file</text>
<text x="110" y="611" text-anchor="middle" class="small-text" fill="#666">list_directory</text>
<rect x="190" y="550" width="120" height="80" rx="6" class="mcp-server-box" />
<text x="250" y="570" text-anchor="middle" class="title-text" fill="#388e3c">Memory</text>
<text x="250" y="585" text-anchor="middle" class="small-text" fill="#666">search_nodes</text>
<text x="250" y="598" text-anchor="middle" class="small-text" fill="#666">store_memory</text>
<text x="250" y="611" text-anchor="middle" class="small-text" fill="#666">recall_info</text>
<rect x="330" y="550" width="120" height="80" rx="6" class="mcp-server-box" />
<text x="390" y="570" text-anchor="middle" class="title-text" fill="#388e3c">Context7</text>
<text x="390" y="585" text-anchor="middle" class="small-text" fill="#666">get_docs</text>
<text x="390" y="598" text-anchor="middle" class="small-text" fill="#666">search_docs</text>
<text x="390" y="611" text-anchor="middle" class="small-text" fill="#666">get_library</text>
<rect x="470" y="550" width="120" height="80" rx="6" class="mcp-server-box" />
<text x="530" y="570" text-anchor="middle" class="title-text" fill="#388e3c">Sequential</text>
<text x="530" y="585" text-anchor="middle" class="small-text" fill="#666">thinking</text>
<text x="530" y="598" text-anchor="middle" class="small-text" fill="#666">analyze</text>
<text x="530" y="611" text-anchor="middle" class="small-text" fill="#666">problem</text>
<!-- Key Process Flow -->
<text x="350" y="670" text-anchor="middle" class="title-text" fill="#d5d5d5ff">
Key Proxy Flow: Agent → Coordinator → External Tools → Presence Tracking
</text>
<text x="50" y="690" class="small-text" fill="#d5d5d5ff">1. Agents connect via single MCP interface</text>
<text x="50" y="705" class="small-text" fill="#d5d5d5ff">2. ALL tool calls proxy through coordinator</text>
<text x="50" y="720" class="small-text" fill="#d5d5d5ff">3. Coordinator updates agent presence + tracks tasks</text>
<text x="450" y="690" class="small-text" fill="#d5d5d5ff">4. Agents gain full project awareness via proxy</text>
<text x="450" y="705" class="small-text" fill="#d5d5d5ff">5. Real-time coordination prevents conflicts</text>
<text x="450" y="720" class="small-text" fill="#d5d5d5ff">6. Single interface → Multiple backends</text>
<!-- Version info -->
<text x="1150" y="790" text-anchor="end" class="small-text" fill="#aaa">
Agent Coordinator v0.1.0
</text>
</svg>
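The six-step proxy flow listed at the bottom of the diagram is, on the wire, ordinary JSON-RPC 2.0 over stdio. A hypothetical `tools/call` envelope for one of the coordinator's native tools (field names follow the MCP convention; the two random suffix words on the agent name follow this repo's copilot instructions) looks like:

```python
import json
import uuid

# JSON-RPC 2.0 envelope for proxying a native coordinator tool call.
request = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "tools/call",
    "params": {
        "name": "register_agent",
        "arguments": {
            "name": "GitHub Copilot Silver Banana",
            "capabilities": ["coding", "testing"],
        },
    },
}

# Requests are newline-delimited on the server's stdin.
wire = json.dumps(request) + "\n"
print(wire, end="")
```

The Python client example removed in this changeset sends exactly this shape from its `send_request` method.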


View File

@@ -1,193 +0,0 @@
#!/usr/bin/env python3
"""
AgentCoordinator MCP Client Example
This script demonstrates how to connect to and interact with the
AgentCoordinator MCP server programmatically.
"""
import json
import subprocess
import uuid
from typing import Any, Dict, Optional
class AgentCoordinatorMCP:
def __init__(self, launcher_path: str = "./scripts/mcp_launcher.sh"):
self.launcher_path = launcher_path
self.process = None
def start(self):
"""Start the MCP server process"""
try:
self.process = subprocess.Popen(
[self.launcher_path],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
bufsize=0
)
print("🚀 MCP server started")
return True
except Exception as e:
print(f"❌ Failed to start MCP server: {e}")
return False
def stop(self):
"""Stop the MCP server process"""
if self.process:
self.process.terminate()
self.process.wait()
print("🛑 MCP server stopped")
def send_request(self, method: str, params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
"""Send a JSON-RPC request to the MCP server"""
if not self.process:
raise RuntimeError("MCP server not started")
request = {
"jsonrpc": "2.0",
"id": str(uuid.uuid4()),
"method": method
}
if params:
request["params"] = params
# Send request
request_json = json.dumps(request) + "\n"
self.process.stdin.write(request_json)
self.process.stdin.flush()
# Read response
response_line = self.process.stdout.readline()
if not response_line:
raise RuntimeError("No response from MCP server")
return json.loads(response_line.strip())
def get_tools(self) -> Dict[str, Any]:
"""Get list of available tools"""
return self.send_request("tools/list")
def register_agent(self, name: str, capabilities: list) -> Dict[str, Any]:
"""Register a new agent"""
return self.send_request("tools/call", {
"name": "register_agent",
"arguments": {
"name": name,
"capabilities": capabilities
}
})
def create_task(self, title: str, description: str, priority: str = "normal",
required_capabilities: Optional[list] = None) -> Dict[str, Any]:
"""Create a new task"""
args = {
"title": title,
"description": description,
"priority": priority
}
if required_capabilities:
args["required_capabilities"] = required_capabilities
return self.send_request("tools/call", {
"name": "create_task",
"arguments": args
})
def get_next_task(self, agent_id: str) -> Dict[str, Any]:
"""Get next task for an agent"""
return self.send_request("tools/call", {
"name": "get_next_task",
"arguments": {"agent_id": agent_id}
})
def complete_task(self, agent_id: str, result: str) -> Dict[str, Any]:
"""Complete current task"""
return self.send_request("tools/call", {
"name": "complete_task",
"arguments": {
"agent_id": agent_id,
"result": result
}
})
def get_task_board(self) -> Dict[str, Any]:
"""Get task board overview"""
return self.send_request("tools/call", {
"name": "get_task_board",
"arguments": {}
})
def heartbeat(self, agent_id: str) -> Dict[str, Any]:
"""Send agent heartbeat"""
return self.send_request("tools/call", {
"name": "heartbeat",
"arguments": {"agent_id": agent_id}
})
def demo():
"""Demonstrate MCP client functionality"""
print("🎯 AgentCoordinator MCP Client Demo")
print("=" * 50)
client = AgentCoordinatorMCP()
try:
# Start server
if not client.start():
return
# Wait for server to be ready
import time
time.sleep(2)
# Get tools
print("\n📋 Available tools:")
tools_response = client.get_tools()
if "result" in tools_response:
for tool in tools_response["result"]["tools"]:
print(f" - {tool['name']}: {tool['description']}")
# Register agent
print("\n👤 Registering agent...")
register_response = client.register_agent("PythonAgent", ["coding", "testing"])
if "result" in register_response:
content = register_response["result"]["content"][0]["text"]
agent_data = json.loads(content)
agent_id = agent_data["agent_id"]
print(f"✅ Agent registered: {agent_id}")
# Create task
print("\n📝 Creating task...")
task_response = client.create_task(
"Python Script",
"Write a Python script for data processing",
"high",
["coding"]
)
if "result" in task_response:
content = task_response["result"]["content"][0]["text"]
task_data = json.loads(content)
print(f"✅ Task created: {task_data['task_id']}")
# Get task board
print("\n📊 Task board:")
board_response = client.get_task_board()
if "result" in board_response:
content = board_response["result"]["content"][0]["text"]
board_data = json.loads(content)
for agent in board_data["agents"]:
print(f" 📱 {agent['name']}: {agent['status']}")
print(f" Capabilities: {', '.join(agent['capabilities'])}")
print(f" Pending: {agent['pending_tasks']}, Completed: {agent['completed_tasks']}")
except Exception as e:
print(f"❌ Error: {e}")
finally:
client.stop()
if __name__ == "__main__":
demo()
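One detail in the demo worth calling out: tool results come back double-encoded — the JSON-RPC `result` holds a `content` list whose first element's `text` field is itself a JSON string. The inline `json.loads(content)` calls above do this unwrapping; a small helper (hypothetical, same pattern) makes it explicit:

```python
import json

def unwrap_tool_result(response: dict) -> dict:
    """Decode the JSON payload nested inside an MCP tools/call result."""
    text = response["result"]["content"][0]["text"]
    return json.loads(text)

# A response shaped like the server's register_agent reply.
response = {
    "jsonrpc": "2.0",
    "id": "1",
    "result": {"content": [{"text": json.dumps({"agent_id": "abc-123"})}]},
}
print(unwrap_tool_result(response)["agent_id"])  # → abc-123
```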

View File

@@ -0,0 +1,291 @@
defmodule AgentCoordinator.ActivityTracker do
@moduledoc """
Tracks agent activities based on tool calls and infers human-readable activity descriptions.
"""
alias AgentCoordinator.{Agent, TaskRegistry}
@doc """
Infer a human-readable activity description and the list of affected files
from a tool name and its arguments.

Returns `{activity_description, files_list}`.

## Examples

    iex> AgentCoordinator.ActivityTracker.infer_activity("read_file", %{"path" => "/src/main.ex"})
    {"Reading main.ex", ["/src/main.ex"]}

    iex> AgentCoordinator.ActivityTracker.infer_activity("get_next_task", %{})
    {"Getting next task", []}
"""
def infer_activity(tool_name, args) do
case tool_name do
# File operations
"read_file" ->
file_path = extract_file_path(args)
{"Reading #{Path.basename(file_path || "file")}", [file_path]}
"read_text_file" ->
file_path = extract_file_path(args)
{"Reading #{Path.basename(file_path || "file")}", [file_path]}
"read_multiple_files" ->
files = Map.get(args, "paths", [])
file_names = Enum.map(files, &Path.basename/1)
{"Reading #{length(files)} files: #{Enum.join(file_names, ", ")}", files}
"write_file" ->
file_path = extract_file_path(args)
{"Writing #{Path.basename(file_path || "file")}", [file_path]}
"edit_file" ->
file_path = extract_file_path(args)
{"Editing #{Path.basename(file_path || "file")}", [file_path]}
"create_file" ->
file_path = extract_file_path(args)
{"Creating #{Path.basename(file_path || "file")}", [file_path]}
"move_file" ->
source = Map.get(args, "source")
dest = Map.get(args, "destination")
files = [source, dest] |> Enum.filter(&(&1))
{"Moving #{Path.basename(source || "file")} to #{Path.basename(dest || "destination")}", files}
# VS Code operations
"vscode_read_file" ->
file_path = extract_file_path(args)
{"Reading #{Path.basename(file_path || "file")} in VS Code", [file_path]}
"vscode_write_file" ->
file_path = extract_file_path(args)
{"Writing #{Path.basename(file_path || "file")} in VS Code", [file_path]}
"vscode_set_editor_content" ->
file_path = Map.get(args, "file_path")
if file_path do
{"Editing #{Path.basename(file_path)} in VS Code", [file_path]}
else
{"Editing active file in VS Code", []}
end
"vscode_get_active_editor" ->
{"Viewing active editor in VS Code", []}
"vscode_get_selection" ->
{"Viewing text selection in VS Code", []}
# Directory operations
"list_directory" ->
path = extract_file_path(args)
{"Browsing directory #{Path.basename(path || ".")}", []}
"list_directory_with_sizes" ->
path = extract_file_path(args)
{"Browsing directory #{Path.basename(path || ".")} with sizes", []}
"directory_tree" ->
path = extract_file_path(args)
{"Exploring directory tree for #{Path.basename(path || ".")}", []}
"create_directory" ->
path = extract_file_path(args)
{"Creating directory #{Path.basename(path || "directory")}", []}
# Search operations
"search_files" ->
pattern = Map.get(args, "pattern", "files")
{"Searching for #{pattern}", []}
"grep_search" ->
query = Map.get(args, "query", "text")
{"Searching for '#{query}' in files", []}
"semantic_search" ->
query = Map.get(args, "query", "content")
{"Semantic search for '#{query}'", []}
# Thinking operations
"sequentialthinking" ->
thought = Map.get(args, "thought", "")
thought_summary = String.slice(thought, 0, 50) |> String.trim()
{"Sequential thinking: #{thought_summary}...", []}
# Terminal operations
"run_in_terminal" ->
command = Map.get(args, "command", "command")
command_summary = String.slice(command, 0, 30) |> String.trim()
{"Running: #{command_summary}...", []}
"get_terminal_output" ->
{"Checking terminal output", []}
# Test operations
"runTests" ->
files = Map.get(args, "files", [])
if files != [] do
file_names = Enum.map(files, &Path.basename/1)
{"Running tests in #{Enum.join(file_names, ", ")}", files}
else
{"Running all tests", []}
end
# Task management
"create_task" ->
title = Map.get(args, "title", "task")
{"Creating task: #{title}", []}
"get_next_task" ->
{"Getting next task", []}
"complete_task" ->
{"Completing current task", []}
# Knowledge operations
"create_entities" ->
entities = Map.get(args, "entities", [])
count = length(entities)
{"Creating #{count} knowledge entities", []}
"create_relations" ->
relations = Map.get(args, "relations", [])
count = length(relations)
{"Creating #{count} knowledge relations", []}
"search_nodes" ->
query = Map.get(args, "query", "nodes")
{"Searching knowledge graph for '#{query}'", []}
"read_graph" ->
{"Reading knowledge graph", []}
# HTTP/Web operations
"fetch_webpage" ->
urls = Map.get(args, "urls", [])
if urls != [] do
{"Fetching #{length(urls)} webpages", []}
else
{"Fetching webpage", []}
end
# Development operations
"get_errors" ->
files = Map.get(args, "filePaths", [])
if files != [] do
file_names = Enum.map(files, &Path.basename/1)
{"Checking errors in #{Enum.join(file_names, ", ")}", files}
else
{"Checking all errors", []}
end
"list_code_usages" ->
symbol = Map.get(args, "symbolName", "symbol")
{"Finding usages of #{symbol}", []}
# Elixir-specific operations
"elixir-definition" ->
symbol = Map.get(args, "symbol", "symbol")
{"Finding definition of #{symbol}", []}
"elixir-docs" ->
modules = Map.get(args, "modules", [])
if modules != [] do
{"Getting docs for #{Enum.join(modules, ", ")}", []}
else
{"Getting Elixir documentation", []}
end
"elixir-environment" ->
location = Map.get(args, "location", "code")
{"Analyzing Elixir environment at #{location}", []}
# Python operations
"pylanceRunCodeSnippet" ->
{"Running Python code snippet", []}
"pylanceFileSyntaxErrors" ->
file_uri = Map.get(args, "fileUri")
if file_uri do
file_path = uri_to_path(file_uri)
{"Checking syntax errors in #{Path.basename(file_path)}", [file_path]}
else
{"Checking Python syntax errors", []}
end
# Default cases
tool_name when is_binary(tool_name) ->
cond do
String.starts_with?(tool_name, "vscode_") ->
action = String.replace(tool_name, "vscode_", "") |> String.replace("_", " ")
{"VS Code: #{action}", []}
String.starts_with?(tool_name, "elixir-") ->
action = String.replace(tool_name, "elixir-", "") |> String.replace("-", " ")
{"Elixir: #{action}", []}
String.starts_with?(tool_name, "pylance") ->
action = String.replace(tool_name, "pylance", "") |> humanize_string()
{"Python: #{action}", []}
String.contains?(tool_name, "_") ->
action = String.replace(tool_name, "_", " ") |> String.capitalize()
{action, []}
true ->
{String.capitalize(tool_name), []}
end
_ ->
{"Unknown activity", []}
end
end
@doc """
Update an agent's activity based on a tool call.
"""
def update_agent_activity(agent_id, tool_name, args) do
{activity, files} = infer_activity(tool_name, args)
case TaskRegistry.get_agent(agent_id) do
{:ok, agent} ->
updated_agent = Agent.update_activity(agent, activity, files)
# Update the agent in the registry
TaskRegistry.update_agent(agent_id, updated_agent)
{:error, _} ->
# Agent not found, ignore
:ok
end
end
@doc """
Clear an agent's activity (e.g., when they go idle).
"""
def clear_agent_activity(agent_id) do
case TaskRegistry.get_agent(agent_id) do
{:ok, agent} ->
updated_agent = Agent.clear_activity(agent)
TaskRegistry.update_agent(agent_id, updated_agent)
{:error, _} ->
:ok
end
end
# Private helper functions
defp extract_file_path(args) do
# Try various common parameter names for file paths
args["path"] || args["filePath"] || args["file_path"] ||
args["source"] || args["destination"] || uri_to_path(args["fileUri"])
end
defp uri_to_path(nil), do: nil
defp uri_to_path(uri) when is_binary(uri) do
if String.starts_with?(uri, "file://") do
String.replace_prefix(uri, "file://", "")
else
uri
end
end
defp humanize_string(str) do
# Split before each capital letter (lookahead keeps the letters),
# e.g. "RunCodeSnippet" -> ["Run", "Code", "Snippet"]
str
|> String.split(~r/(?=[A-Z])/, trim: true)
|> Enum.map(&String.downcase/1)
|> Enum.join(" ")
|> String.capitalize()
end
end
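At its core, `infer_activity/2` is one large pattern match from tool name to a label template plus the files touched. A condensed Python sketch of the same mapping (hypothetical — the authoritative version is the Elixir module above):

```python
import os

def infer_activity(tool_name: str, args: dict) -> tuple:
    """Map a tool call to (human-readable activity, files touched)."""
    path = args.get("path") or args.get("filePath") or args.get("file_path")
    if tool_name in ("read_file", "read_text_file"):
        return f"Reading {os.path.basename(path or 'file')}", [path] if path else []
    if tool_name == "write_file":
        return f"Writing {os.path.basename(path or 'file')}", [path] if path else []
    if tool_name == "get_next_task":
        return "Getting next task", []
    # Fallback: humanize snake_case names, e.g. "list_directory" -> "List directory"
    return tool_name.replace("_", " ").capitalize(), []

print(infer_activity("read_file", {"path": "/src/main.py"}))  # → ('Reading main.py', ['/src/main.py'])
```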

View File

@@ -13,7 +13,10 @@ defmodule AgentCoordinator.Agent do
:codebase_id,
:workspace_path,
:last_heartbeat,
:metadata
:metadata,
:current_activity,
:current_files,
:activity_history
]}
defstruct [
:id,
@@ -24,7 +27,10 @@ defmodule AgentCoordinator.Agent do
:codebase_id,
:workspace_path,
:last_heartbeat,
:metadata
:metadata,
:current_activity,
:current_files,
:activity_history
]
@type status :: :idle | :busy | :offline | :error
@@ -39,20 +45,45 @@ defmodule AgentCoordinator.Agent do
codebase_id: String.t(),
workspace_path: String.t() | nil,
last_heartbeat: DateTime.t(),
metadata: map()
metadata: map(),
current_activity: String.t() | nil,
current_files: [String.t()],
activity_history: [map()]
}
def new(name, capabilities, opts \\ []) do
workspace_path = Keyword.get(opts, :workspace_path)
# Use smart codebase identification
codebase_id = case Keyword.get(opts, :codebase_id) do
nil when not is_nil(workspace_path) ->
# Auto-detect from workspace
case AgentCoordinator.CodebaseIdentifier.identify_codebase(workspace_path) do
%{canonical_id: canonical_id} -> canonical_id
_ -> Path.basename(workspace_path || "default")
end
nil ->
"default"
explicit_id ->
# Normalize the provided ID
AgentCoordinator.CodebaseIdentifier.normalize_codebase_reference(explicit_id, workspace_path)
end
%__MODULE__{
id: UUID.uuid4(),
name: name,
capabilities: capabilities,
status: :idle,
current_task_id: nil,
codebase_id: Keyword.get(opts, :codebase_id, "default"),
workspace_path: Keyword.get(opts, :workspace_path),
codebase_id: codebase_id,
workspace_path: workspace_path,
last_heartbeat: DateTime.utc_now(),
metadata: Keyword.get(opts, :metadata, %{})
metadata: Keyword.get(opts, :metadata, %{}),
current_activity: nil,
current_files: [],
activity_history: []
}
end
@@ -60,6 +91,33 @@ defmodule AgentCoordinator.Agent do
%{agent | last_heartbeat: DateTime.utc_now()}
end
def update_activity(agent, activity, files \\ []) do
# Add to activity history (keep last 10 activities)
activity_entry = %{
activity: activity,
files: files,
timestamp: DateTime.utc_now()
}
new_history = [activity_entry | agent.activity_history]
|> Enum.take(10)
%{agent |
current_activity: activity,
current_files: files,
activity_history: new_history,
last_heartbeat: DateTime.utc_now()
}
end
def clear_activity(agent) do
%{agent |
current_activity: nil,
current_files: [],
last_heartbeat: DateTime.utc_now()
}
end
def assign_task(agent, task_id) do
%{agent | status: :busy, current_task_id: task_id}
end

View File

@@ -24,9 +24,15 @@ defmodule AgentCoordinator.Application do
# Task registry with NATS integration (conditionally add persistence)
{AgentCoordinator.TaskRegistry, nats: if(enable_persistence, do: nats_config(), else: nil)},
# Session manager for MCP session token handling
AgentCoordinator.SessionManager,
# Unified MCP server (includes external server management, session tracking, and auto-registration)
AgentCoordinator.MCPServer,
# Interface manager for multiple MCP interface modes
AgentCoordinator.InterfaceManager,
# Auto-heartbeat manager
AgentCoordinator.AutoHeartbeat,

View File

@@ -0,0 +1,313 @@
defmodule AgentCoordinator.CodebaseIdentifier do
@moduledoc """
Smart codebase identification system that works across local and remote scenarios.
Generates canonical codebase identifiers using multiple strategies:
1. Git repository detection (preferred)
2. Local folder name fallback
3. Remote workspace mapping
4. Custom identifier override
"""
require Logger
@type codebase_info :: %{
canonical_id: String.t(),
display_name: String.t(),
workspace_path: String.t(),
repository_url: String.t() | nil,
git_remote: String.t() | nil,
branch: String.t() | nil,
commit_hash: String.t() | nil,
identification_method: :git_remote | :git_local | :folder_name | :custom
}
@doc """
Identify a codebase from a workspace path, generating a canonical ID.
Priority order:
1. Git remote URL (most reliable for distributed teams)
2. Git local repository info
3. Folder name (fallback for non-git projects)
4. Custom override from metadata
## Examples
# Git repository with remote
iex> identify_codebase("/home/user/my-project")
%{
canonical_id: "github.com/owner/my-project",
display_name: "my-project",
workspace_path: "/home/user/my-project",
repository_url: "https://github.com/owner/my-project.git",
git_remote: "origin",
branch: "main",
identification_method: :git_remote
}
# Local folder (no git)
iex> identify_codebase("/home/user/local-project")
%{
canonical_id: "local:/home/user/local-project",
display_name: "local-project",
workspace_path: "/home/user/local-project",
repository_url: nil,
identification_method: :folder_name
}
"""
def identify_codebase(workspace_path, opts \\ [])
def identify_codebase(nil, opts) do
custom_id = Keyword.get(opts, :custom_id, "default")
build_custom_codebase_info(nil, custom_id)
end
def identify_codebase(workspace_path, opts) do
custom_id = Keyword.get(opts, :custom_id)
cond do
custom_id ->
build_custom_codebase_info(workspace_path, custom_id)
git_repository?(workspace_path) ->
identify_git_codebase(workspace_path)
true ->
identify_folder_codebase(workspace_path)
end
end
@doc """
Normalize different codebase references to canonical IDs.
Handles cases where agents specify different local paths for same repository.
"""
def normalize_codebase_reference(codebase_ref, workspace_path) do
case codebase_ref do
# Already canonical
id when is_binary(id) ->
if String.contains?(id, ".com/") or String.starts_with?(id, "local:") do
id
else
# Folder name - try to resolve to canonical
case identify_codebase(workspace_path) do
%{canonical_id: canonical_id} -> canonical_id
_ -> "local:#{id}"
end
end
_ ->
# Fallback to folder-based ID
Path.basename(workspace_path || "/unknown")
end
end
@doc """
Check if two workspace paths refer to the same codebase.
Useful for detecting when agents from different machines work on same project.
"""
def same_codebase?(workspace_path1, workspace_path2) do
info1 = identify_codebase(workspace_path1)
info2 = identify_codebase(workspace_path2)
info1.canonical_id == info2.canonical_id
end
# Private functions
defp build_custom_codebase_info(workspace_path, custom_id) do
%{
canonical_id: custom_id,
display_name: custom_id,
workspace_path: workspace_path,
repository_url: nil,
git_remote: nil,
branch: nil,
commit_hash: nil,
identification_method: :custom
}
end
defp identify_git_codebase(workspace_path) do
with {:ok, git_info} <- get_git_info(workspace_path) do
canonical_id = case git_info.remote_url do
nil ->
# Local git repo without remote
"git-local:#{git_info.repo_name}"
remote_url ->
# Extract canonical identifier from remote URL
extract_canonical_from_remote(remote_url)
end
%{
canonical_id: canonical_id,
display_name: git_info.repo_name,
workspace_path: workspace_path,
repository_url: git_info.remote_url,
git_remote: git_info.remote_name,
branch: git_info.branch,
commit_hash: git_info.commit_hash,
identification_method: if(git_info.remote_url, do: :git_remote, else: :git_local)
}
else
_ ->
identify_folder_codebase(workspace_path)
end
end
defp identify_folder_codebase(workspace_path) when is_nil(workspace_path) do
%{
canonical_id: "default",
display_name: "default",
workspace_path: nil,
repository_url: nil,
git_remote: nil,
branch: nil,
commit_hash: nil,
identification_method: :folder_name
}
end
defp identify_folder_codebase(workspace_path) do
folder_name = Path.basename(workspace_path)
%{
canonical_id: "local:#{workspace_path}",
display_name: folder_name,
workspace_path: workspace_path,
repository_url: nil,
git_remote: nil,
branch: nil,
commit_hash: nil,
identification_method: :folder_name
}
end
defp git_repository?(workspace_path) when is_nil(workspace_path), do: false
defp git_repository?(workspace_path) do
File.exists?(Path.join(workspace_path, ".git"))
end
defp get_git_info(workspace_path) do
try do
# Get repository name
repo_name = Path.basename(workspace_path)
# Get current branch
{branch, 0} = System.cmd("git", ["branch", "--show-current"], cd: workspace_path)
branch = String.trim(branch)
# Get current commit
{commit_hash, 0} = System.cmd("git", ["rev-parse", "HEAD"], cd: workspace_path)
commit_hash = String.trim(commit_hash)
# Try to get the origin remote URL
remote_url = case System.cmd("git", ["remote", "-v"], cd: workspace_path) do
{output, 0} when output != "" ->
# Parse "origin <url> (fetch)" lines to extract the origin URL
origin_line =
output
|> String.trim()
|> String.split("\n")
|> Enum.find(fn line ->
String.starts_with?(line, "origin") and String.contains?(line, "(fetch)")
end)
case origin_line do
nil -> nil
line -> line |> String.split() |> Enum.at(1)
end
_ -> nil
end
git_info = %{
repo_name: repo_name,
branch: branch,
commit_hash: commit_hash,
remote_url: remote_url,
remote_name: if(remote_url, do: "origin", else: nil)
}
{:ok, git_info}
rescue
_ -> {:error, :git_command_failed}
end
end
defp extract_canonical_from_remote(remote_url) do
cond do
# GitHub HTTPS
String.contains?(remote_url, "github.com") ->
extract_github_id(remote_url)
# GitLab HTTPS
String.contains?(remote_url, "gitlab.com") ->
extract_gitlab_id(remote_url)
# SSH format
String.contains?(remote_url, "@") and String.contains?(remote_url, ":") ->
extract_ssh_id(remote_url)
# Other HTTPS
String.starts_with?(remote_url, "https://") ->
extract_https_id(remote_url)
true ->
# Fallback - use raw URL
"remote:#{remote_url}"
end
end
defp extract_github_id(url) do
# Extract "owner/repo" from various GitHub URL formats
regex = ~r/github\.com[\/:]([^\/]+)\/([^\/\.]+)/
case Regex.run(regex, url) do
[_, owner, repo] ->
"github.com/#{owner}/#{repo}"
_ ->
"github.com/unknown"
end
end
defp extract_gitlab_id(url) do
# Similar logic for GitLab
regex = ~r/gitlab\.com[\/:]([^\/]+)\/([^\/\.]+)/
case Regex.run(regex, url) do
[_, owner, repo] ->
"gitlab.com/#{owner}/#{repo}"
_ ->
"gitlab.com/unknown"
end
end
defp extract_ssh_id(url) do
# SSH format: git@host:owner/repo.git
case String.split(url, ":") do
[host_part, path_part] ->
host = String.replace(host_part, ~r/.*@/, "")
path = String.replace(path_part, ".git", "")
"#{host}/#{path}"
_ ->
"ssh:#{url}"
end
end
defp extract_https_id(url) do
# Extract from general HTTPS URLs
uri = URI.parse(url)
host = uri.host
path = String.replace(uri.path || "", ~r/^\//, "")
path = String.replace(path, ".git", "")
if host && path != "" do
"#{host}/#{path}"
else
"https:#{url}"
end
end
end
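The GitHub/GitLab helpers above reduce to a single regex capture. A runnable sketch using the same pattern as `extract_github_id/1` (the URLs are illustrative):

```elixir
# Owner/repo extraction with the same regex as extract_github_id/1.
extract = fn url ->
  case Regex.run(~r/github\.com[\/:]([^\/]+)\/([^\/\.]+)/, url) do
    [_, owner, repo] -> "github.com/#{owner}/#{repo}"
    _ -> "github.com/unknown"
  end
end

extract.("https://github.com/acme/widget.git")  # => "github.com/acme/widget"
extract.("git@github.com:acme/widget.git")      # => "github.com/acme/widget"
```

Note that `[^\/\.]+` stops the repo capture before a trailing `.git`, which is why both HTTPS and SSH remotes normalize to the same canonical ID.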

View File

@@ -0,0 +1,597 @@
defmodule AgentCoordinator.HttpInterface do
@moduledoc """
HTTP and WebSocket interface for the Agent Coordinator MCP server.
This module provides:
- HTTP REST API for MCP requests
- WebSocket support for real-time communication
- Remote client detection and tool filtering
- CORS support for web clients
- Session management across HTTP requests
"""
use Plug.Router
require Logger
alias AgentCoordinator.{MCPServer, ToolFilter, SessionManager}
plug Plug.Logger
plug :match
plug Plug.Parsers, parsers: [:json], json_decoder: Jason
plug :put_cors_headers
plug :dispatch
@doc """
Start the HTTP server on the specified port.
"""
def start_link(opts \\ []) do
port = Keyword.get(opts, :port, 8080)
IO.puts(:stderr, "Starting Agent Coordinator HTTP interface on port #{port}")
Plug.Cowboy.http(__MODULE__, [],
port: port,
dispatch: cowboy_dispatch()
)
end
# HTTP Routes
get "/health" do
send_json_response(conn, 200, %{
status: "healthy",
service: "agent-coordinator",
version: AgentCoordinator.version(),
timestamp: DateTime.utc_now()
})
end
get "/mcp/capabilities" do
context = extract_client_context(conn)
# Get filtered tools based on client context
all_tools = MCPServer.get_tools()
filtered_tools = ToolFilter.filter_tools(all_tools, context)
capabilities = %{
protocolVersion: "2024-11-05",
serverInfo: %{
name: "agent-coordinator-http",
version: AgentCoordinator.version(),
description: "Agent Coordinator HTTP/WebSocket interface"
},
capabilities: %{
tools: %{},
coordination: %{
automatic_task_tracking: true,
agent_management: true,
multi_server_proxy: true,
heartbeat_coverage: true,
session_tracking: true,
tool_filtering: true
}
},
tools: filtered_tools,
context: %{
connection_type: context.connection_type,
security_level: context.security_level,
tool_count: length(filtered_tools)
}
}
send_json_response(conn, 200, capabilities)
end
get "/mcp/tools" do
context = extract_client_context(conn)
all_tools = MCPServer.get_tools()
filtered_tools = ToolFilter.filter_tools(all_tools, context)
filter_stats = ToolFilter.get_filter_stats(all_tools, context)
response = %{
tools: filtered_tools,
_meta: %{
filter_stats: filter_stats,
context: %{
connection_type: context.connection_type,
security_level: context.security_level
}
}
}
send_json_response(conn, 200, response)
end
post "/mcp/tools/:tool_name" do
context = extract_client_context(conn)
# Check if tool is allowed for this client
all_tools = MCPServer.get_tools()
filtered_tools = ToolFilter.filter_tools(all_tools, context)
tool_allowed = Enum.any?(filtered_tools, fn tool ->
Map.get(tool, "name") == tool_name
end)
if not tool_allowed do
send_json_response(conn, 403, %{
error: %{
code: -32601,
message: "Tool not available for remote clients: #{tool_name}",
data: %{
available_tools: Enum.map(filtered_tools, &Map.get(&1, "name")),
connection_type: context.connection_type
}
}
})
else
# Execute the tool call
args = Map.get(conn.body_params, "arguments", %{})
# Create MCP request format
mcp_request = %{
"jsonrpc" => "2.0",
"id" => Map.get(conn.body_params, "id", generate_request_id()),
"method" => "tools/call",
"params" => %{
"name" => tool_name,
"arguments" => args
}
}
# Add session tracking
mcp_request = add_session_info(mcp_request, conn, context)
# Execute through MCP server
case MCPServer.handle_mcp_request(mcp_request) do
%{"result" => result} ->
send_json_response(conn, 200, %{
result: result,
_meta: %{
tool_name: tool_name,
request_id: mcp_request["id"],
context: context.connection_type
}
})
%{"error" => error} ->
send_json_response(conn, 400, %{error: error})
unexpected ->
IO.puts(:stderr, "Unexpected MCP response: #{inspect(unexpected)}")
send_json_response(conn, 500, %{
error: %{
code: -32603,
message: "Internal server error"
}
})
end
end
end
post "/mcp/request" do
context = extract_client_context(conn)
# Validate MCP request format
case validate_mcp_request(conn.body_params) do
{:ok, mcp_request} ->
method = Map.get(mcp_request, "method")
# Validate session for this method
case validate_session_for_method(method, conn, context) do
{:ok, _session_info} ->
# Add session tracking
enhanced_request = add_session_info(mcp_request, conn, context)
# For tool calls, check tool filtering
case method do
"tools/call" ->
tool_name = get_in(enhanced_request, ["params", "name"])
if tool_allowed_for_context?(tool_name, context) do
execute_mcp_request(conn, enhanced_request, context)
else
send_json_response(conn, 403, %{
jsonrpc: "2.0",
id: Map.get(enhanced_request, "id"),
error: %{
code: -32601,
message: "Tool not available: #{tool_name}"
}
})
end
"tools/list" ->
# Override tools/list to return filtered tools
handle_filtered_tools_list(conn, enhanced_request, context)
_ ->
# Other methods pass through normally
execute_mcp_request(conn, enhanced_request, context)
end
{:error, auth_error} ->
send_json_response(conn, 401, %{
jsonrpc: "2.0",
id: Map.get(mcp_request, "id"),
error: auth_error
})
end
{:error, reason} ->
send_json_response(conn, 400, %{
jsonrpc: "2.0",
id: Map.get(conn.body_params, "id"),
error: %{
code: -32700,
message: "Invalid request: #{reason}"
}
})
end
end
get "/mcp/ws" do
conn
|> WebSockAdapter.upgrade(AgentCoordinator.WebSocketHandler, %{}, timeout: 60_000)
end
get "/agents" do
context = extract_client_context(conn)
# Only allow agent status for authorized clients
case context.security_level do
level when level in [:trusted, :sandboxed] ->
mcp_request = %{
"jsonrpc" => "2.0",
"id" => generate_request_id(),
"method" => "tools/call",
"params" => %{
"name" => "get_task_board",
"arguments" => %{"agent_id" => "http_interface"}
}
}
case MCPServer.handle_mcp_request(mcp_request) do
%{"result" => %{"content" => [%{"text" => text}]}} ->
data = Jason.decode!(text)
send_json_response(conn, 200, data)
%{"error" => error} ->
send_json_response(conn, 500, %{error: error})
end
_ ->
send_json_response(conn, 403, %{
error: "Insufficient privileges to view agent status"
})
end
end
# Server-Sent Events (SSE) endpoint for real-time MCP streaming.
# Implements MCP Streamable HTTP transport for live updates.
get "/mcp/stream" do
context = extract_client_context(conn)
# Validate session for SSE stream
case validate_session_for_method("stream/subscribe", conn, context) do
{:ok, session_info} ->
# Set up SSE headers
conn = conn
|> put_resp_content_type("text/event-stream")
|> put_mcp_headers()
|> put_resp_header("cache-control", "no-cache")
|> put_resp_header("connection", "keep-alive")
|> put_resp_header("access-control-allow-credentials", "true")
|> send_chunked(200)
# Send initial connection event
{:ok, conn} = chunk(conn, format_sse_event("connected", %{
session_id: Map.get(session_info, :agent_id, "anonymous"),
protocol_version: "2025-06-18",
timestamp: DateTime.utc_now() |> DateTime.to_iso8601()
}))
# Start streaming loop
stream_mcp_events(conn, session_info, context)
{:error, auth_error} ->
send_json_response(conn, 401, auth_error)
end
end
defp stream_mcp_events(conn, session_info, context) do
# This is a basic implementation - in production you'd want to:
# 1. Subscribe to a GenServer/PubSub for real-time events
# 2. Handle client disconnections gracefully
# 3. Implement proper backpressure
# Send periodic heartbeat for now
try do
:timer.sleep(1000)
{:ok, conn} = chunk(conn, format_sse_event("heartbeat", %{
timestamp: DateTime.utc_now() |> DateTime.to_iso8601(),
session_id: Map.get(session_info, :agent_id, "anonymous")
}))
# Continue streaming (this would be event-driven in production)
stream_mcp_events(conn, session_info, context)
rescue
# Client disconnected
_ ->
IO.puts(:stderr, "SSE client disconnected")
conn
end
end
defp format_sse_event(event_type, data) do
"event: #{event_type}\ndata: #{Jason.encode!(data)}\n\n"
end
# Catch-all for unmatched routes
match _ do
send_json_response(conn, 404, %{
error: "Not found",
available_endpoints: [
"GET /health",
"GET /mcp/capabilities",
"GET /mcp/tools",
"POST /mcp/tools/:tool_name",
"POST /mcp/request",
"GET /mcp/stream (SSE)",
"GET /mcp/ws",
"GET /agents"
]
})
end
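A call against `POST /mcp/request` carries a standard JSON-RPC 2.0 envelope. A sketch of building one (the `get_task_board` tool name appears in this module; the `id` and `agent_id` values are illustrative, and the Jason dependency is assumed to be available as it is elsewhere in this file):

```elixir
# JSON-RPC 2.0 envelope accepted by POST /mcp/request.
request = %{
  "jsonrpc" => "2.0",
  "id" => "req-1",
  "method" => "tools/call",
  "params" => %{
    "name" => "get_task_board",
    "arguments" => %{"agent_id" => "demo"}
  }
}

body = Jason.encode!(request)
# Send with the MCP-Session-Id header set for authenticated methods.
```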
# Private helper functions
defp cowboy_dispatch do
[
{:_, [
{"/mcp/ws", AgentCoordinator.WebSocketHandler, []},
{:_, Plug.Cowboy.Handler, {__MODULE__, []}}
]}
]
end
defp extract_client_context(conn) do
remote_ip = get_remote_ip(conn)
user_agent = get_req_header(conn, "user-agent") |> List.first()
origin = get_req_header(conn, "origin") |> List.first()
connection_info = %{
transport: :http,
remote_ip: remote_ip,
user_agent: user_agent,
origin: origin,
secure: conn.scheme == :https,
headers: conn.req_headers
}
ToolFilter.detect_client_context(connection_info)
end
defp get_remote_ip(conn) do
# Check for forwarded headers first (for reverse proxies)
forwarded_for = get_req_header(conn, "x-forwarded-for") |> List.first()
real_ip = get_req_header(conn, "x-real-ip") |> List.first()
cond do
forwarded_for ->
forwarded_for |> String.split(",") |> List.first() |> String.trim()
real_ip ->
real_ip
true ->
conn.remote_ip |> :inet.ntoa() |> to_string()
end
end
defp put_cors_headers(conn, _opts) do
# Validate origin for enhanced security
origin = get_req_header(conn, "origin") |> List.first()
allowed_origin = validate_origin(origin)
conn
|> put_resp_header("access-control-allow-origin", allowed_origin)
|> put_resp_header("access-control-allow-methods", "GET, POST, OPTIONS")
|> put_resp_header("access-control-allow-headers", "content-type, authorization, mcp-session-id, mcp-protocol-version, x-session-id")
|> put_resp_header("access-control-expose-headers", "mcp-protocol-version, server")
|> put_resp_header("access-control-max-age", "86400")
end
defp validate_origin(nil), do: "*" # No origin header (direct API calls)
defp validate_origin(origin) do
# Allow localhost and development origins
case URI.parse(origin) do
%URI{host: host} when host in ["localhost", "127.0.0.1", "::1"] -> origin
%URI{host: host} when is_binary(host) ->
# Allow HTTPS origins and known development domains
if String.starts_with?(origin, "https://") or
String.contains?(host, ["localhost", "127.0.0.1", "dev", "local"]) do
origin
else
# For production, be more restrictive
IO.puts(:stderr, "Potentially unsafe origin: #{origin}")
"*" # Fallback for now, could be more restrictive
end
_ -> "*"
end
end
defp send_json_response(conn, status, data) do
conn
|> put_resp_content_type("application/json")
|> put_mcp_headers()
|> send_resp(status, Jason.encode!(data))
end
defp put_mcp_headers(conn) do
conn
|> put_resp_header("mcp-protocol-version", "2025-06-18")
|> put_resp_header("server", "AgentCoordinator/1.0")
end
defp validate_mcp_request(params) when is_map(params) do
required_fields = ["jsonrpc", "method"]
missing_fields = Enum.filter(required_fields, fn field ->
not Map.has_key?(params, field)
end)
cond do
not Enum.empty?(missing_fields) ->
{:error, "Missing required fields: #{Enum.join(missing_fields, ", ")}"}
Map.get(params, "jsonrpc") != "2.0" ->
{:error, "Invalid jsonrpc version, must be '2.0'"}
not is_binary(Map.get(params, "method")) ->
{:error, "Method must be a string"}
true ->
{:ok, params}
end
end
defp validate_mcp_request(_), do: {:error, "Request must be a JSON object"}
defp add_session_info(mcp_request, conn, context) do
# Extract and validate MCP session token
{session_id, session_info} = get_session_info(conn)
# Add context metadata to request params
enhanced_params = Map.get(mcp_request, "params", %{})
|> Map.put("_session_id", session_id)
|> Map.put("_session_info", session_info)
|> Map.put("_client_context", %{
connection_type: context.connection_type,
security_level: context.security_level,
remote_ip: get_remote_ip(conn),
user_agent: context.user_agent
})
Map.put(mcp_request, "params", enhanced_params)
end
defp get_session_info(conn) do
# Check for MCP-Session-Id header (MCP compliant)
case get_req_header(conn, "mcp-session-id") do
[session_token] when byte_size(session_token) > 0 ->
case SessionManager.validate_session(session_token) do
{:ok, session_info} ->
{session_info.agent_id, %{
token: session_token,
agent_id: session_info.agent_id,
capabilities: session_info.capabilities,
expires_at: session_info.expires_at,
validated: true
}}
{:error, reason} ->
IO.puts(:stderr, "Invalid MCP session token: #{reason}")
# Fall back to generating anonymous session
anonymous_id = "http_anonymous_" <> (:crypto.strong_rand_bytes(8) |> Base.encode16(case: :lower))
{anonymous_id, %{validated: false, reason: reason}}
end
[] ->
# Check legacy X-Session-Id header for backward compatibility
case get_req_header(conn, "x-session-id") do
[session_id] when byte_size(session_id) > 0 ->
{session_id, %{validated: false, legacy: true}}
_ ->
# No session header, generate anonymous session
anonymous_id = "http_anonymous_" <> (:crypto.strong_rand_bytes(8) |> Base.encode16(case: :lower))
{anonymous_id, %{validated: false, anonymous: true}}
end
end
end
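Header precedence in `get_session_info/1` is: validated `MCP-Session-Id`, then legacy `X-Session-Id`, then an anonymous session. A standalone sketch of that fallback chain (header values are illustrative):

```elixir
# Session header precedence mirrored from get_session_info/1.
resolve = fn headers ->
  cond do
    token = headers["mcp-session-id"] -> {:mcp, token}
    legacy = headers["x-session-id"] -> {:legacy, legacy}
    true -> {:anonymous, nil}
  end
end

resolve.(%{"mcp-session-id" => "abc123"})  # => {:mcp, "abc123"}
resolve.(%{"x-session-id" => "old-1"})     # => {:legacy, "old-1"}
resolve.(%{})                              # => {:anonymous, nil}
```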
defp require_authenticated_session(conn, _context) do
{_session_id, session_info} = get_session_info(conn)
case Map.get(session_info, :validated, false) do
true ->
{:ok, session_info}
false ->
reason = Map.get(session_info, :reason, "Session not authenticated")
{:error, %{
code: -32001,
message: "Authentication required",
data: %{reason: reason}
}}
end
end
defp validate_session_for_method(method, conn, context) do
# Define which methods require authenticated sessions
authenticated_methods = MapSet.new([
"agents/register",
"agents/unregister",
"agents/heartbeat",
"tasks/create",
"tasks/complete",
"codebase/register",
"stream/subscribe"
])
if MapSet.member?(authenticated_methods, method) do
require_authenticated_session(conn, context)
else
{:ok, %{anonymous: true}}
end
end
defp tool_allowed_for_context?(tool_name, context) do
all_tools = MCPServer.get_tools()
filtered_tools = ToolFilter.filter_tools(all_tools, context)
Enum.any?(filtered_tools, fn tool ->
Map.get(tool, "name") == tool_name
end)
end
defp execute_mcp_request(conn, mcp_request, _context) do
case MCPServer.handle_mcp_request(mcp_request) do
%{"result" => _} = response ->
send_json_response(conn, 200, response)
%{"error" => _} = response ->
send_json_response(conn, 400, response)
unexpected ->
IO.puts(:stderr, "Unexpected MCP response: #{inspect(unexpected)}")
send_json_response(conn, 500, %{
jsonrpc: "2.0",
id: Map.get(mcp_request, "id"),
error: %{
code: -32603,
message: "Internal server error"
}
})
end
end
defp handle_filtered_tools_list(conn, mcp_request, context) do
all_tools = MCPServer.get_tools()
filtered_tools = ToolFilter.filter_tools(all_tools, context)
response = %{
"jsonrpc" => "2.0",
"id" => Map.get(mcp_request, "id"),
"result" => %{
"tools" => filtered_tools,
"_meta" => %{
"filtered_for" => context.connection_type,
"original_count" => length(all_tools),
"filtered_count" => length(filtered_tools)
}
}
}
send_json_response(conn, 200, response)
end
defp generate_request_id do
"http_req_" <> (:crypto.strong_rand_bytes(8) |> Base.encode16(case: :lower))
end
end

View File

@@ -0,0 +1,694 @@
defmodule AgentCoordinator.InterfaceManager do
@moduledoc """
Centralized manager for multiple MCP interface modes.
This module coordinates between different interface types:
- STDIO interface (for local MCP clients like VSCode)
- HTTP REST interface (for remote API access)
- WebSocket interface (for real-time web clients)
Responsibilities:
- Start/stop interface servers based on configuration
- Coordinate session state across interfaces
- Apply appropriate tool filtering per interface
- Monitor interface health and restart if needed
- Provide unified metrics and monitoring
"""
use GenServer
require Logger
alias AgentCoordinator.{HttpInterface, ToolFilter}
defstruct [
:config,
:interfaces,
:stdio_handler,
:session_registry,
:metrics
]
@interface_types [:stdio, :http, :websocket]
# Client API
@doc """
Start the interface manager with configuration.
"""
def start_link(opts \\ []) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
@doc """
Get current interface status.
"""
def get_status do
GenServer.call(__MODULE__, :get_status)
end
@doc """
Start a specific interface type.
"""
def start_interface(interface_type, opts \\ []) do
GenServer.call(__MODULE__, {:start_interface, interface_type, opts})
end
@doc """
Stop a specific interface type.
"""
def stop_interface(interface_type) do
GenServer.call(__MODULE__, {:stop_interface, interface_type})
end
@doc """
Restart an interface.
"""
def restart_interface(interface_type) do
GenServer.call(__MODULE__, {:restart_interface, interface_type})
end
@doc """
Get metrics for all interfaces.
"""
def get_metrics do
GenServer.call(__MODULE__, :get_metrics)
end
@doc """
Register a session across interfaces.
"""
def register_session(session_id, interface_type, session_info) do
GenServer.cast(__MODULE__, {:register_session, session_id, interface_type, session_info})
end
@doc """
Unregister a session.
"""
def unregister_session(session_id) do
GenServer.cast(__MODULE__, {:unregister_session, session_id})
end
# Server callbacks
@impl GenServer
def init(opts) do
# Load configuration
config = load_interface_config(opts)
state = %__MODULE__{
config: config,
interfaces: %{},
stdio_handler: nil,
session_registry: %{},
metrics: initialize_metrics()
}
IO.puts(:stderr, "Interface Manager starting with config: #{inspect(config.enabled_interfaces)}")
# Start enabled interfaces
{:ok, state, {:continue, :start_interfaces}}
end
@impl GenServer
def handle_continue(:start_interfaces, state) do
# Start each enabled interface
updated_state = Enum.reduce(state.config.enabled_interfaces, state, fn interface_type, acc ->
case start_interface_server(interface_type, state.config, acc) do
{:ok, interface_info} ->
IO.puts(:stderr, "Started #{interface_type} interface")
%{acc | interfaces: Map.put(acc.interfaces, interface_type, interface_info)}
{:error, reason} ->
IO.puts(:stderr, "Failed to start #{interface_type} interface: #{reason}")
acc
end
end)
{:noreply, updated_state}
end
@impl GenServer
def handle_call(:get_status, _from, state) do
status = %{
enabled_interfaces: state.config.enabled_interfaces,
running_interfaces: Map.keys(state.interfaces),
active_sessions: map_size(state.session_registry),
config: %{
stdio: state.config.stdio,
http: state.config.http,
websocket: state.config.websocket
},
uptime: get_uptime(),
metrics: state.metrics
}
{:reply, status, state}
end
@impl GenServer
def handle_call({:start_interface, interface_type, opts}, _from, state) do
if interface_type in @interface_types do
case start_interface_server(interface_type, state.config, state, opts) do
{:ok, interface_info} ->
updated_interfaces = Map.put(state.interfaces, interface_type, interface_info)
updated_state = %{state | interfaces: updated_interfaces}
IO.puts(:stderr, "Started #{interface_type} interface on demand")
{:reply, {:ok, interface_info}, updated_state}
{:error, reason} ->
IO.puts(:stderr, "Failed to start #{interface_type} interface: #{reason}")
{:reply, {:error, reason}, state}
end
else
{:reply, {:error, "Unknown interface type: #{interface_type}"}, state}
end
end
@impl GenServer
def handle_call({:stop_interface, interface_type}, _from, state) do
case Map.get(state.interfaces, interface_type) do
nil ->
{:reply, {:error, "Interface not running: #{interface_type}"}, state}
interface_info ->
case stop_interface_server(interface_type, interface_info) do
:ok ->
updated_interfaces = Map.delete(state.interfaces, interface_type)
updated_state = %{state | interfaces: updated_interfaces}
IO.puts(:stderr, "Stopped #{interface_type} interface")
{:reply, :ok, updated_state}
{:error, reason} ->
IO.puts(:stderr, "Failed to stop #{interface_type} interface: #{reason}")
{:reply, {:error, reason}, state}
end
end
end
@impl GenServer
def handle_call({:restart_interface, interface_type}, _from, state) do
case Map.get(state.interfaces, interface_type) do
nil ->
{:reply, {:error, "Interface not running: #{interface_type}"}, state}
interface_info ->
# Stop the interface
case stop_interface_server(interface_type, interface_info) do
:ok ->
# Start it again
case start_interface_server(interface_type, state.config, state) do
{:ok, new_interface_info} ->
updated_interfaces = Map.put(state.interfaces, interface_type, new_interface_info)
updated_state = %{state | interfaces: updated_interfaces}
IO.puts(:stderr, "Restarted #{interface_type} interface")
{:reply, {:ok, new_interface_info}, updated_state}
{:error, reason} ->
# Remove from running interfaces since it failed to restart
updated_interfaces = Map.delete(state.interfaces, interface_type)
updated_state = %{state | interfaces: updated_interfaces}
IO.puts(:stderr, "Failed to restart #{interface_type} interface: #{reason}")
{:reply, {:error, reason}, updated_state}
end
{:error, reason} ->
IO.puts(:stderr, "Failed to stop #{interface_type} interface for restart: #{reason}")
{:reply, {:error, reason}, state}
end
end
end
@impl GenServer
def handle_call(:get_metrics, _from, state) do
# Collect metrics from all running interfaces
interface_metrics = Enum.map(state.interfaces, fn {interface_type, interface_info} ->
{interface_type, get_interface_metrics(interface_type, interface_info)}
end) |> Enum.into(%{})
metrics = %{
interfaces: interface_metrics,
sessions: %{
total: map_size(state.session_registry),
by_interface: get_sessions_by_interface(state.session_registry)
},
uptime: get_uptime(),
timestamp: DateTime.utc_now()
}
{:reply, metrics, state}
end
@impl GenServer
def handle_cast({:register_session, session_id, interface_type, session_info}, state) do
session_data = %{
interface_type: interface_type,
info: session_info,
registered_at: DateTime.utc_now(),
last_activity: DateTime.utc_now()
}
updated_registry = Map.put(state.session_registry, session_id, session_data)
updated_state = %{state | session_registry: updated_registry}
IO.puts(:stderr, "Registered session #{session_id} for #{interface_type}")
{:noreply, updated_state}
end
@impl GenServer
def handle_cast({:unregister_session, session_id}, state) do
case Map.get(state.session_registry, session_id) do
nil ->
IO.puts(:stderr, "Attempted to unregister unknown session: #{session_id}")
{:noreply, state}
_session_data ->
updated_registry = Map.delete(state.session_registry, session_id)
updated_state = %{state | session_registry: updated_registry}
IO.puts(:stderr, "Unregistered session #{session_id}")
{:noreply, updated_state}
end
end
@impl GenServer
def handle_info({:DOWN, _ref, :process, pid, reason}, state) do
# Handle interface process crashes
case find_interface_by_pid(pid, state.interfaces) do
{interface_type, _interface_info} ->
IO.puts(:stderr, "#{interface_type} interface crashed: #{inspect(reason)}")
# Remove from running interfaces
updated_interfaces = Map.delete(state.interfaces, interface_type)
updated_state = %{state | interfaces: updated_interfaces}
# Optionally restart if configured
if should_auto_restart?(interface_type, state.config) do
IO.puts(:stderr, "Auto-restarting #{interface_type} interface")
Process.send_after(self(), {:restart_interface, interface_type}, 5000)
end
{:noreply, updated_state}
nil ->
IO.puts(:stderr, "Unknown process died: #{inspect(pid)}")
{:noreply, state}
end
end
@impl GenServer
def handle_info({:restart_interface, interface_type}, state) do
case start_interface_server(interface_type, state.config, state) do
{:ok, interface_info} ->
updated_interfaces = Map.put(state.interfaces, interface_type, interface_info)
updated_state = %{state | interfaces: updated_interfaces}
IO.puts(:stderr, "Auto-restarted #{interface_type} interface")
{:noreply, updated_state}
{:error, reason} ->
IO.puts(:stderr, "Failed to auto-restart #{interface_type} interface: #{reason}")
{:noreply, state}
end
end
@impl GenServer
def handle_info(message, state) do
IO.puts(:stderr, "Interface Manager received unexpected message: #{inspect(message)}")
{:noreply, state}
end
# Private helper functions
defp load_interface_config(opts) do
# Load from application config and override with opts
base_config = Application.get_env(:agent_coordinator, :interfaces, %{})
# Default configuration
default_config = %{
enabled_interfaces: [:stdio],
stdio: %{
enabled: true,
handle_stdio: true
},
http: %{
enabled: false,
port: 8080,
host: "localhost",
cors_enabled: true
},
websocket: %{
enabled: false,
port: 8081,
host: "localhost"
},
auto_restart: %{
stdio: false,
http: true,
websocket: true
}
}
# Merge configurations
config = deep_merge(default_config, base_config)
config = deep_merge(config, Enum.into(opts, %{}))
# Determine enabled interfaces from environment or config
enabled = determine_enabled_interfaces(config)
# Update individual interface enabled flags based on environment
config = update_interface_enabled_flags(config, enabled)
%{config | enabled_interfaces: enabled}
end
defp determine_enabled_interfaces(config) do
# Check environment variables
interface_mode = System.get_env("MCP_INTERFACE_MODE", "stdio")
case interface_mode do
"stdio" -> [:stdio]
"http" -> [:http]
"websocket" -> [:websocket]
"all" -> [:stdio, :http, :websocket]
"remote" -> [:http, :websocket]
_ ->
# Check for comma-separated list
if String.contains?(interface_mode, ",") do
interface_mode
|> String.split(",")
|> Enum.map(&String.trim/1)
|> Enum.map(&String.to_atom/1)
|> Enum.filter(&(&1 in @interface_types))
else
# Fall back to config
Map.get(config, :enabled_interfaces, [:stdio])
end
end
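The `MCP_INTERFACE_MODE` parsing above can be exercised standalone. A sketch of the same mode-to-list mapping (mirrors the function's behavior, minus the config fallback):

```elixir
# MCP_INTERFACE_MODE parsing mirrored from determine_enabled_interfaces/1.
allowed = [:stdio, :http, :websocket]

parse = fn
  "all" -> [:stdio, :http, :websocket]
  "remote" -> [:http, :websocket]
  mode ->
    mode
    |> String.split(",")
    |> Enum.map(&String.trim/1)
    |> Enum.map(&String.to_atom/1)
    |> Enum.filter(&(&1 in allowed))
end

parse.("stdio")           # => [:stdio]
parse.("http,websocket")  # => [:http, :websocket]
```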
end
defp update_interface_enabled_flags(config, enabled_interfaces) do
# Update individual interface enabled flags based on which interfaces are enabled
config
|> update_in([:stdio, :enabled], fn _ -> :stdio in enabled_interfaces end)
|> update_in([:http, :enabled], fn _ -> :http in enabled_interfaces end)
|> update_in([:websocket, :enabled], fn _ -> :websocket in enabled_interfaces end)
# Also update ports from environment if set
|> update_http_config_from_env()
end
defp update_http_config_from_env(config) do
config = case System.get_env("MCP_HTTP_PORT") do
nil -> config
port_str ->
case Integer.parse(port_str) do
{port, ""} -> put_in(config, [:http, :port], port)
_ -> config
end
end
case System.get_env("MCP_HTTP_HOST") do
nil -> config
host -> put_in(config, [:http, :host], host)
end
end
# Declare defaults once
defp start_interface_server(type, config, state, opts \\ %{})
defp start_interface_server(:stdio, config, state, _opts) do
if config.stdio.enabled and config.stdio.handle_stdio do
# Start stdio handler
stdio_handler = spawn_link(fn -> handle_stdio_loop(state) end)
interface_info = %{
type: :stdio,
pid: stdio_handler,
started_at: DateTime.utc_now(),
config: config.stdio
}
{:ok, interface_info}
else
{:error, "STDIO interface not enabled"}
end
end
defp start_interface_server(:http, config, _state, opts) do
if config.http.enabled do
# Normalize opts so keyword lists (from start_interface/2) work with Map.get below
opts = Map.new(opts)
http_opts = [
port: Map.get(opts, :port, config.http.port),
host: Map.get(opts, :host, config.http.host)
]
case HttpInterface.start_link(http_opts) do
{:ok, pid} ->
# Monitor the process
ref = Process.monitor(pid)
interface_info = %{
type: :http,
pid: pid,
monitor_ref: ref,
started_at: DateTime.utc_now(),
config: Map.merge(config.http, Enum.into(opts, %{})),
port: http_opts[:port]
}
{:ok, interface_info}
{:error, reason} ->
{:error, reason}
end
else
{:error, "HTTP interface not enabled"}
end
end
defp start_interface_server(:websocket, config, _state, _opts) do
if config.websocket.enabled do
# WebSocket is handled by the HTTP server, so just mark it as enabled
interface_info = %{
type: :websocket,
pid: :embedded, # Embedded in HTTP server
started_at: DateTime.utc_now(),
config: config.websocket
}
{:ok, interface_info}
else
{:error, "WebSocket interface not enabled"}
end
end
defp start_interface_server(unknown_type, _config, _state, _opts) do
{:error, "Unknown interface type: #{unknown_type}"}
end
defp stop_interface_server(:stdio, interface_info) do
if Process.alive?(interface_info.pid) do
Process.exit(interface_info.pid, :shutdown)
:ok
else
:ok
end
end
defp stop_interface_server(:http, interface_info) do
if Process.alive?(interface_info.pid) do
Process.exit(interface_info.pid, :shutdown)
:ok
else
:ok
end
end
defp stop_interface_server(:websocket, _interface_info) do
# WebSocket is embedded in HTTP server, so nothing to stop separately
:ok
end
defp stop_interface_server(_type, _interface_info) do
{:error, "Unknown interface type"}
end
defp handle_stdio_loop(state) do
# Handle MCP JSON-RPC messages from STDIO
# Use different approaches for Docker vs regular environments
if docker_environment?() do
handle_stdio_docker_loop(state)
else
handle_stdio_regular_loop(state)
end
end
defp handle_stdio_regular_loop(state) do
case IO.read(:stdio, :line) do
:eof ->
IO.puts(:stderr, "STDIO interface shutting down (EOF)")
exit(:normal)
{:error, reason} ->
IO.puts(:stderr, "STDIO error: #{inspect(reason)}")
exit({:error, reason})
line ->
handle_stdio_message(String.trim(line), state)
handle_stdio_regular_loop(state)
end
end
defp handle_stdio_docker_loop(state) do
# In Docker, use regular IO.read instead of Port.open({:fd, 0, 1})
# to avoid "driver_select stealing control of fd=0" conflicts with external MCP servers
# This allows external servers to use pipes while Agent Coordinator reads from stdin
case IO.read(:stdio, :line) do
:eof ->
IO.puts(:stderr, "STDIO interface shutting down (EOF)")
exit(:normal)
{:error, reason} ->
IO.puts(:stderr, "STDIO error: #{inspect(reason)}")
exit({:error, reason})
line ->
handle_stdio_message(String.trim(line), state)
handle_stdio_docker_loop(state)
end
end
defp handle_stdio_message("", _state), do: :ok
defp handle_stdio_message(json_line, _state) do
try do
request = Jason.decode!(json_line)
# Create local client context for stdio
_client_context = ToolFilter.local_context()
# Process through MCP server with full tool access
response = AgentCoordinator.MCPServer.handle_mcp_request(request)
# Send response
IO.puts(Jason.encode!(response))
rescue
e in Jason.DecodeError ->
error_response = %{
"jsonrpc" => "2.0",
"id" => nil,
"error" => %{
"code" => -32700,
"message" => "Parse error: #{Exception.message(e)}"
}
}
IO.puts(Jason.encode!(error_response))
e ->
# Try to get the ID from the malformed request
id = try do
partial = Jason.decode!(json_line)
Map.get(partial, "id")
rescue
_ -> nil
end
error_response = %{
"jsonrpc" => "2.0",
"id" => id,
"error" => %{
"code" => -32603,
"message" => "Internal error: #{Exception.message(e)}"
}
}
IO.puts(Jason.encode!(error_response))
end
end
defp get_interface_metrics(:stdio, interface_info) do
%{
type: :stdio,
status: if(Process.alive?(interface_info.pid), do: :running, else: :stopped),
uptime: DateTime.diff(DateTime.utc_now(), interface_info.started_at, :second),
pid: interface_info.pid
}
end
defp get_interface_metrics(:http, interface_info) do
%{
type: :http,
status: if(Process.alive?(interface_info.pid), do: :running, else: :stopped),
uptime: DateTime.diff(DateTime.utc_now(), interface_info.started_at, :second),
port: interface_info.port,
pid: interface_info.pid
}
end
defp get_interface_metrics(:websocket, interface_info) do
%{
type: :websocket,
status: :running, # Embedded in HTTP server
uptime: DateTime.diff(DateTime.utc_now(), interface_info.started_at, :second),
embedded: true
}
end
defp get_sessions_by_interface(session_registry) do
Enum.reduce(session_registry, %{}, fn {_session_id, session_data}, acc ->
interface_type = session_data.interface_type
count = Map.get(acc, interface_type, 0)
Map.put(acc, interface_type, count + 1)
end)
end
defp find_interface_by_pid(pid, interfaces) do
Enum.find(interfaces, fn {_type, interface_info} ->
interface_info.pid == pid
end)
end
defp should_auto_restart?(interface_type, config) do
Map.get(config.auto_restart, interface_type, false)
end
defp initialize_metrics do
%{
started_at: DateTime.utc_now(),
requests_total: 0,
errors_total: 0,
sessions_total: 0
}
end
defp get_uptime do
{uptime_ms, _} = :erlang.statistics(:wall_clock)
div(uptime_ms, 1000)
end
# Deep merge helper for configuration
defp deep_merge(left, right) when is_map(left) and is_map(right) do
Map.merge(left, right, fn _key, left_val, right_val ->
deep_merge(left_val, right_val)
end)
end
defp deep_merge(_left, right), do: right
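The configuration merge above is recursive: nested maps merge key-by-key, and for non-map values the right-hand side wins. Since `deep_merge/2` is private, here is an illustrative copy of it showing those semantics (not a callable API of the module):

```elixir
# Illustrative re-implementation of the private helper above.
defmodule MergeSketch do
  def deep_merge(left, right) when is_map(left) and is_map(right) do
    Map.merge(left, right, fn _key, l, r -> deep_merge(l, r) end)
  end

  def deep_merge(_left, right), do: right
end

MergeSketch.deep_merge(
  %{http: %{port: 4000, host: "0.0.0.0"}, stdio: %{enabled: true}},
  %{http: %{port: 8080}}
)
# => %{http: %{port: 8080, host: "0.0.0.0"}, stdio: %{enabled: true}}
```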
# Check if running in Docker environment
defp docker_environment? do
# Check common Docker environment indicators
System.get_env("DOCKER_CONTAINER") != nil or
System.get_env("container") != nil or
System.get_env("DOCKERIZED") != nil or
File.exists?("/.dockerenv") or
(File.exists?("/proc/1/cgroup") and
File.read!("/proc/1/cgroup") |> String.contains?("docker")) or
String.contains?(to_string(System.get_env("PATH", "")), "/app/") or
# Check if we're running under a container init system
case File.read("/proc/1/comm") do
{:ok, comm} -> String.trim(comm) in ["bash", "sh", "docker-init", "tini"]
_ -> false
end
end
end
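The stdio loops above speak newline-delimited JSON-RPC: one request per line in, one response per line out. A minimal sketch of a single round trip as seen from a client, following the framing in `handle_stdio_message/2` (the actual response body depends on the running server):

```elixir
# Build a JSON-RPC request and write it as a single line on stdout.
request = %{
  "jsonrpc" => "2.0",
  "id" => 1,
  "method" => "tools/list",
  "params" => %{}
}

IO.puts(Jason.encode!(request))
# The loop trims the line, decodes it with Jason.decode!/1, routes it
# through AgentCoordinator.MCPServer.handle_mcp_request/1, and prints the
# encoded response on stdout. A line that is not valid JSON instead yields
# a JSON-RPC error with code -32700 ("Parse error").
```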


@@ -11,7 +11,7 @@ defmodule AgentCoordinator.MCPServer do
use GenServer
require Logger
alias AgentCoordinator.{TaskRegistry, Inbox, Agent, Task, CodebaseRegistry, VSCodeToolProvider}
alias AgentCoordinator.{TaskRegistry, Inbox, Agent, Task, CodebaseRegistry, VSCodeToolProvider, ToolFilter, SessionManager, ActivityTracker}
# State for tracking external servers and agent sessions
defstruct [
@@ -38,7 +38,7 @@ defmodule AgentCoordinator.MCPServer do
"enum" => ["coding", "testing", "documentation", "analysis", "review"]
}
},
"codebase_id" => %{"type" => "string"},
"codebase_id" => %{"type" => "string", "description" => "If the project exists locally on this machine, use the name of the current directory (.). If it is remote, use the git-registered codebase ID. If it is a multi-codebase project with no apparent root folder, ask."},
"workspace_path" => %{"type" => "string"},
"cross_codebase_capable" => %{"type" => "boolean"}
},
@@ -71,7 +71,7 @@ defmodule AgentCoordinator.MCPServer do
"title" => %{"type" => "string"},
"description" => %{"type" => "string"},
"priority" => %{"type" => "string", "enum" => ["low", "normal", "high", "urgent"]},
"codebase_id" => %{"type" => "string"},
"codebase_id" => %{"type" => "string", "description" => "If the project exists locally on this machine, use the name of the current directory (.). If it is remote, use the git-registered codebase ID. If it is a multi-codebase project with no apparent root folder, ask."},
"file_paths" => %{"type" => "array", "items" => %{"type" => "string"}},
"required_capabilities" => %{
"type" => "array",
@@ -331,6 +331,25 @@ defmodule AgentCoordinator.MCPServer do
},
"required" => ["agent_id"]
}
},
%{
"name" => "discover_codebase_info",
"description" => "Intelligently discover codebase information from workspace path, including git repository details, canonical ID generation, and project identification.",
"inputSchema" => %{
"type" => "object",
"properties" => %{
"agent_id" => %{"type" => "string"},
"workspace_path" => %{
"type" => "string",
"description" => "Path to the workspace/project directory"
},
"custom_id" => %{
"type" => "string",
"description" => "Optional: Override automatic codebase ID detection"
}
},
"required" => ["agent_id", "workspace_path"]
}
}
]
@@ -344,8 +363,8 @@ defmodule AgentCoordinator.MCPServer do
GenServer.call(__MODULE__, {:mcp_request, request})
end
def get_tools do
case GenServer.call(__MODULE__, :get_all_tools, 5000) do
def get_tools(client_context \\ nil) do
case GenServer.call(__MODULE__, {:get_all_tools, client_context}, 5000) do
tools when is_list(tools) -> tools
_ -> @mcp_tools
end
@@ -464,7 +483,20 @@ defmodule AgentCoordinator.MCPServer do
end
end
def handle_call({:get_all_tools, client_context}, _from, state) do
all_tools = get_all_unified_tools_from_state(state)
# Apply tool filtering if client context is provided
filtered_tools = case client_context do
nil -> all_tools # No filtering for nil context (backward compatibility)
context -> ToolFilter.filter_tools(all_tools, context)
end
{:reply, filtered_tools, state}
end
def handle_call(:get_all_tools, _from, state) do
# Backward compatibility - no filtering
all_tools = get_all_unified_tools_from_state(state)
{:reply, all_tools, state}
end
@@ -599,10 +631,33 @@ defmodule AgentCoordinator.MCPServer do
:ok
end
# Track the session if we have caller info
track_agent_session(agent.id, name, capabilities)
# Generate session token for the agent
session_metadata = %{
name: name,
capabilities: capabilities,
codebase_id: agent.codebase_id,
workspace_path: opts[:workspace_path],
registered_at: DateTime.utc_now()
}
{:ok, %{agent_id: agent.id, codebase_id: agent.codebase_id, status: "registered"}}
case SessionManager.create_session(agent.id, session_metadata) do
{:ok, session_token} ->
# Track the session if we have caller info
track_agent_session(agent.id, name, capabilities)
{:ok, %{
agent_id: agent.id,
codebase_id: agent.codebase_id,
status: "registered",
session_token: session_token,
expires_at: DateTime.add(DateTime.utc_now(), 60, :minute) |> DateTime.to_iso8601()
}}
{:error, reason} ->
IO.puts(:stderr, "Failed to create session for agent #{agent.id}: #{inspect(reason)}")
# Still return success but without session token for backward compatibility
{:ok, %{agent_id: agent.id, codebase_id: agent.codebase_id, status: "registered"}}
end
{:error, reason} ->
{:error, "Failed to register agent: #{reason}"}
@@ -775,6 +830,8 @@ defmodule AgentCoordinator.MCPServer do
workspace_path: agent.workspace_path,
online: Agent.is_online?(agent),
cross_codebase_capable: Agent.can_work_cross_codebase?(agent),
current_activity: agent.current_activity,
current_files: agent.current_files || [],
current_task:
status.current_task &&
%{
@@ -1008,6 +1065,9 @@ defmodule AgentCoordinator.MCPServer do
online: Agent.is_online?(agent),
cross_codebase_capable: Agent.can_work_cross_codebase?(agent),
last_heartbeat: agent.last_heartbeat,
current_activity: agent.current_activity,
current_files: agent.current_files || [],
activity_history: agent.activity_history || [],
tasks: task_info
}
end)
@@ -1074,6 +1134,77 @@ defmodule AgentCoordinator.MCPServer do
end
end
# NEW: Codebase discovery function
defp discover_codebase_info(%{"agent_id" => agent_id, "workspace_path" => workspace_path} = args) do
custom_id = Map.get(args, "custom_id")
# Use the CodebaseIdentifier to analyze the workspace
opts = if custom_id, do: [custom_id: custom_id], else: []
case AgentCoordinator.CodebaseIdentifier.identify_codebase(workspace_path, opts) do
codebase_info ->
# Also check if this codebase is already registered
existing_codebase = case CodebaseRegistry.get_codebase(codebase_info.canonical_id) do
{:ok, codebase} -> codebase
{:error, :not_found} -> nil
end
# Check for other agents working on same codebase
agents = TaskRegistry.list_agents()
related_agents = Enum.filter(agents, fn agent ->
agent.codebase_id == codebase_info.canonical_id and agent.id != agent_id
end)
response = %{
codebase_info: codebase_info,
already_registered: existing_codebase != nil,
existing_codebase: existing_codebase,
related_agents: Enum.map(related_agents, fn agent ->
%{
agent_id: agent.id,
name: agent.name,
capabilities: agent.capabilities,
status: agent.status,
workspace_path: agent.workspace_path,
online: Agent.is_online?(agent)
}
end),
recommendations: generate_codebase_recommendations(codebase_info, existing_codebase, related_agents)
}
{:ok, response}
end
end
defp generate_codebase_recommendations(codebase_info, existing_codebase, related_agents) do
recommendations = []
# Recommend registration if not already registered
recommendations = if existing_codebase == nil do
["Consider registering this codebase with register_codebase for better coordination" | recommendations]
else
recommendations
end
# Recommend coordination if other agents are working on same codebase
recommendations = if length(related_agents) > 0 do
agent_names = Enum.map(related_agents, & &1.name) |> Enum.join(", ")
["Other agents working on this codebase: #{agent_names}. Consider coordination." | recommendations]
else
recommendations
end
# Recommend git setup if local folder without git
recommendations = if codebase_info.identification_method == :folder_name do
["Consider initializing git repository for better distributed coordination" | recommendations]
else
recommendations
end
Enum.reverse(recommendations)
end
# External MCP server management functions
defp start_external_server(name, %{type: :stdio} = config) do
@@ -1095,17 +1226,15 @@ defmodule AgentCoordinator.MCPServer do
tools: []
}
# Initialize the server and get tools
# Initialize the server and get tools with shorter timeout
case initialize_external_server(server_info) do
{:ok, tools} ->
{:ok, %{server_info | tools: tools}}
{:error, reason} ->
# Cleanup on initialization failure
cleanup_external_pid_file(pid_file_path)
kill_external_process(os_pid)
if Port.info(port), do: Port.close(port)
{:error, reason}
# Log error but don't fail - continue with empty tools
IO.puts(:stderr, "Failed to initialize #{name}: #{reason}")
{:ok, %{server_info | tools: []}}
end
{:error, reason} ->
@@ -1145,6 +1274,8 @@ defmodule AgentCoordinator.MCPServer do
env_list =
Enum.map(env, fn {key, value} -> {String.to_charlist(key), String.to_charlist(value)} end)
# Use pipe communication but allow stdin/stdout for MCP protocol
# Remove :nouse_stdio since MCP servers need stdin/stdout to communicate
port_options = [
:binary,
:stream,
@@ -1226,7 +1357,7 @@ defmodule AgentCoordinator.MCPServer do
Port.command(server_info.port, request_json)
# Collect full response by reading multiple lines if needed
response_data = collect_external_response(server_info.port, "", 30_000)
response_data = collect_external_response(server_info.port, "", 5_000)
cond do
response_data == "" ->
@@ -1372,35 +1503,6 @@ defmodule AgentCoordinator.MCPServer do
pid_file_path
end
defp cleanup_external_pid_file(pid_file_path) do
if File.exists?(pid_file_path) do
File.rm(pid_file_path)
end
end
defp kill_external_process(os_pid) when is_integer(os_pid) do
try do
case System.cmd("kill", ["-TERM", to_string(os_pid)]) do
{_, 0} ->
IO.puts(:stderr, "Successfully terminated process #{os_pid}")
:ok
{_, _} ->
case System.cmd("kill", ["-KILL", to_string(os_pid)]) do
{_, 0} ->
IO.puts(:stderr, "Force killed process #{os_pid}")
:ok
{_, _} ->
IO.puts(:stderr, "Failed to kill process #{os_pid}")
:error
end
end
rescue
_ -> :error
end
end
defp get_all_unified_tools_from_state(state) do
# Combine coordinator tools with external server tools from state
coordinator_tools = @mcp_tools
@@ -1427,17 +1529,24 @@ defmodule AgentCoordinator.MCPServer do
end
defp route_tool_call(tool_name, args, state) do
# Extract agent_id for activity tracking
agent_id = Map.get(args, "agent_id")
# Update agent activity before processing the tool call
if agent_id do
ActivityTracker.update_agent_activity(agent_id, tool_name, args)
end
# Check if it's a coordinator tool first
coordinator_tool_names = Enum.map(@mcp_tools, & &1["name"])
cond do
result = cond do
tool_name in coordinator_tool_names ->
handle_coordinator_tool(tool_name, args)
# Check if it's a VS Code tool
String.starts_with?(tool_name, "vscode_") ->
# Route to VS Code Tool Provider with agent context
agent_id = Map.get(args, "agent_id")
context = if agent_id, do: %{agent_id: agent_id}, else: %{}
VSCodeToolProvider.handle_tool_call(tool_name, args, context)
@@ -1445,6 +1554,13 @@ defmodule AgentCoordinator.MCPServer do
# Try to route to external server
route_to_external_server(tool_name, args, state)
end
# Clear agent activity after tool call completes (optional - could keep until next call)
# if agent_id do
# ActivityTracker.clear_agent_activity(agent_id)
# end
result
end
defp handle_coordinator_tool(tool_name, args) do
@@ -1465,6 +1581,7 @@ defmodule AgentCoordinator.MCPServer do
"create_agent_task" -> create_agent_task(args)
"get_detailed_task_board" -> get_detailed_task_board(args)
"get_agent_task_history" -> get_agent_task_history(args)
"discover_codebase_info" -> discover_codebase_info(args)
_ -> {:error, "Unknown coordinator tool: #{tool_name}"}
end
end


@@ -0,0 +1,192 @@
defmodule AgentCoordinator.SessionManager do
@moduledoc """
Session management for MCP agents with token-based authentication.
Implements MCP-compliant session management where:
1. Agents register and receive session tokens
2. Session tokens must be included in Mcp-Session-Id headers
3. Session tokens are cryptographically secure and time-limited
4. Sessions are tied to specific agent IDs
"""
use GenServer
require Logger
defstruct [
:sessions,
:config
]
@session_expiry_minutes 60
@cleanup_interval_minutes 5
# Client API
def start_link(opts \\ []) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
@doc """
Generate a new session token for an agent.
Returns {:ok, session_token} or {:error, reason}
"""
def create_session(agent_id, metadata \\ %{}) do
GenServer.call(__MODULE__, {:create_session, agent_id, metadata})
end
@doc """
Validate a session token and return agent information.
Returns {:ok, agent_id, metadata} or {:error, reason}
"""
def validate_session(session_token) do
GenServer.call(__MODULE__, {:validate_session, session_token})
end
@doc """
Invalidate a session token.
"""
def invalidate_session(session_token) do
GenServer.call(__MODULE__, {:invalidate_session, session_token})
end
@doc """
Get all active sessions for an agent.
"""
def get_agent_sessions(agent_id) do
GenServer.call(__MODULE__, {:get_agent_sessions, agent_id})
end
@doc """
Clean up expired sessions.
"""
def cleanup_expired_sessions do
GenServer.cast(__MODULE__, :cleanup_expired)
end
# Server implementation
@impl GenServer
def init(opts) do
# Start periodic cleanup
schedule_cleanup()
state = %__MODULE__{
sessions: %{},
config: %{
expiry_minutes: Keyword.get(opts, :expiry_minutes, @session_expiry_minutes),
cleanup_interval: Keyword.get(opts, :cleanup_interval, @cleanup_interval_minutes)
}
}
IO.puts(:stderr, "SessionManager started with #{state.config.expiry_minutes}min expiry")
{:ok, state}
end
@impl GenServer
def handle_call({:create_session, agent_id, metadata}, _from, state) do
session_token = generate_session_token()
expires_at = DateTime.add(DateTime.utc_now(), state.config.expiry_minutes, :minute)
session_data = %{
agent_id: agent_id,
token: session_token,
created_at: DateTime.utc_now(),
expires_at: expires_at,
metadata: metadata,
last_activity: DateTime.utc_now()
}
new_sessions = Map.put(state.sessions, session_token, session_data)
new_state = %{state | sessions: new_sessions}
IO.puts(:stderr, "Created session #{session_token} for agent #{agent_id}")
{:reply, {:ok, session_token}, new_state}
end
@impl GenServer
def handle_call({:validate_session, session_token}, _from, state) do
case Map.get(state.sessions, session_token) do
nil ->
{:reply, {:error, :session_not_found}, state}
session_data ->
if DateTime.compare(DateTime.utc_now(), session_data.expires_at) == :gt do
# Session expired, remove it
new_sessions = Map.delete(state.sessions, session_token)
new_state = %{state | sessions: new_sessions}
{:reply, {:error, :session_expired}, new_state}
else
# Session valid, update last activity
updated_session = %{session_data | last_activity: DateTime.utc_now()}
new_sessions = Map.put(state.sessions, session_token, updated_session)
new_state = %{state | sessions: new_sessions}
result = {:ok, session_data.agent_id, session_data.metadata}
{:reply, result, new_state}
end
end
end
@impl GenServer
def handle_call({:invalidate_session, session_token}, _from, state) do
case Map.get(state.sessions, session_token) do
nil ->
{:reply, {:error, :session_not_found}, state}
session_data ->
new_sessions = Map.delete(state.sessions, session_token)
new_state = %{state | sessions: new_sessions}
IO.puts(:stderr, "Invalidated session #{session_token} for agent #{session_data.agent_id}")
{:reply, :ok, new_state}
end
end
@impl GenServer
def handle_call({:get_agent_sessions, agent_id}, _from, state) do
agent_sessions =
state.sessions
|> Enum.filter(fn {_token, session} -> session.agent_id == agent_id end)
|> Enum.map(fn {token, session} -> {token, session} end)
{:reply, agent_sessions, state}
end
@impl GenServer
def handle_cast(:cleanup_expired, state) do
now = DateTime.utc_now()
{expired_sessions, active_sessions} =
Enum.split_with(state.sessions, fn {_token, session} ->
DateTime.compare(now, session.expires_at) == :gt
end)
if length(expired_sessions) > 0 do
IO.puts(:stderr, "Cleaned up #{length(expired_sessions)} expired sessions")
end
new_state = %{state | sessions: Map.new(active_sessions)}
schedule_cleanup()
{:noreply, new_state}
end
@impl GenServer
def handle_info(:cleanup_expired, state) do
handle_cast(:cleanup_expired, state)
end
# Private functions
defp generate_session_token do
# Generate cryptographically secure session token
# Format: "mcp_" + base64url(32 random bytes) + "_" + timestamp
random_bytes = :crypto.strong_rand_bytes(32)
timestamp = DateTime.utc_now() |> DateTime.to_unix()
token_body = Base.url_encode64(random_bytes, padding: false)
"mcp_#{token_body}_#{timestamp}"
end
defp schedule_cleanup do
Process.send_after(self(), :cleanup_expired, @cleanup_interval_minutes * 60 * 1000)
end
end
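The public API above implies a simple lifecycle: create, validate (refreshing last activity), then invalidate. A hedged usage sketch, assuming the GenServer is started here rather than by a supervisor and that the metadata map round-trips unchanged:

```elixir
# Hypothetical session lifecycle against the SessionManager public API.
{:ok, _pid} = AgentCoordinator.SessionManager.start_link(expiry_minutes: 1)

{:ok, token} =
  AgentCoordinator.SessionManager.create_session("agent-123", %{name: "demo"})
# Tokens have the form "mcp_<base64url random bytes>_<unix timestamp>".

# A valid token returns the agent id and the metadata given at creation.
{:ok, "agent-123", %{name: "demo"}} =
  AgentCoordinator.SessionManager.validate_session(token)

# Once invalidated, the token is gone rather than merely expired.
:ok = AgentCoordinator.SessionManager.invalidate_session(token)
{:error, :session_not_found} =
  AgentCoordinator.SessionManager.validate_session(token)
```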


@@ -79,6 +79,10 @@ defmodule AgentCoordinator.TaskRegistry do
GenServer.call(__MODULE__, {:complete_task, agent_id}, 30_000)
end
def update_agent(agent_id, updated_agent) do
GenServer.call(__MODULE__, {:update_agent, agent_id, updated_agent})
end
def get_task_board do
GenServer.call(__MODULE__, :get_task_board)
end
@@ -423,6 +427,18 @@ defmodule AgentCoordinator.TaskRegistry do
end
end
def handle_call({:update_agent, agent_id, updated_agent}, _from, state) do
case Map.get(state.agents, agent_id) do
nil ->
{:reply, {:error, :agent_not_found}, state}
_current_agent ->
new_agents = Map.put(state.agents, agent_id, updated_agent)
new_state = %{state | agents: new_agents}
{:reply, :ok, new_state}
end
end
def handle_call(:get_task_board, _from, state) do
agents_info =
Enum.map(state.agents, fn {_id, agent} ->
@@ -439,7 +455,9 @@ defmodule AgentCoordinator.TaskRegistry do
capabilities: agent.capabilities,
current_task: current_task,
last_heartbeat: agent.last_heartbeat,
online: Agent.is_online?(agent)
online: Agent.is_online?(agent),
current_activity: agent.current_activity,
current_files: agent.current_files || []
}
end)


@@ -0,0 +1,282 @@
defmodule AgentCoordinator.ToolFilter do
@moduledoc """
Intelligent tool filtering system that adapts available tools based on client context.
This module determines which tools should be available to different types of clients:
- Local clients: Full tool access including filesystem and VSCode tools
- Remote clients: Limited to agent coordination and safe remote tools
- Web clients: Browser-safe tools only
Tool filtering is based on:
- Tool capabilities and requirements
- Client connection type (local/remote)
- Security considerations
- Tool metadata annotations
"""
require Logger
@doc """
Context information about the client connection.
"""
defstruct [
:connection_type, # :local, :remote, :web
:client_info, # Client identification
:capabilities, # Client declared capabilities
:security_level, # :trusted, :sandboxed, :restricted
:origin, # For web clients, the origin domain
:user_agent # Client user agent string
]
@type client_context :: %__MODULE__{
connection_type: :local | :remote | :web,
client_info: map(),
capabilities: [String.t()],
security_level: :trusted | :sandboxed | :restricted,
origin: String.t() | nil,
user_agent: String.t() | nil
}
# Tool name patterns that indicate local-only functionality (defined as function to avoid compilation issues)
defp local_only_patterns do
[
~r/^(read_file|write_file|create_file|delete_file)/,
~r/^(list_dir|search_files|move_file)/,
~r/^vscode_/,
~r/^(run_in_terminal|get_terminal)/,
~r/filesystem/,
~r/directory/
]
end
# Tools that are always safe for remote access
@always_safe_tools [
# Agent coordination tools
"register_agent",
"create_task",
"get_next_task",
"complete_task",
"get_task_board",
"get_detailed_task_board",
"get_agent_task_history",
"heartbeat",
"unregister_agent",
"register_task_set",
"create_agent_task",
"create_cross_codebase_task",
"list_codebases",
"register_codebase",
"get_codebase_status",
"add_codebase_dependency",
# Memory and knowledge graph (safe for remote)
"create_entities",
"create_relations",
"read_graph",
"search_nodes",
"open_nodes",
"add_observations",
"delete_entities",
"delete_relations",
"delete_observations",
# Sequential thinking (safe for remote)
"sequentialthinking",
# Library documentation (safe for remote)
"get-library-docs",
"resolve-library-id"
]
@doc """
Filter tools based on client context.
Returns a filtered list of tools appropriate for the client's context.
"""
@spec filter_tools([map()], client_context()) :: [map()]
def filter_tools(tools, %__MODULE__{} = context) do
tools
|> Enum.filter(&should_include_tool?(&1, context))
|> maybe_annotate_tools(context)
end
@doc """
Determine if a tool should be included for the given client context.
"""
@spec should_include_tool?(map(), client_context()) :: boolean()
def should_include_tool?(tool, context) do
tool_name = Map.get(tool, "name", "")
cond do
# Always include safe tools
tool_name in @always_safe_tools ->
true
# Local clients get everything
context.connection_type == :local ->
true
# Remote/web clients get filtered access
context.connection_type in [:remote, :web] ->
not is_local_only_tool?(tool, context)
# Default to restrictive
true ->
tool_name in @always_safe_tools
end
end
@doc """
Detect client context from connection information.
"""
@spec detect_client_context(map()) :: client_context()
def detect_client_context(connection_info) do
connection_type = determine_connection_type(connection_info)
security_level = determine_security_level(connection_type, connection_info)
%__MODULE__{
connection_type: connection_type,
client_info: Map.get(connection_info, :client_info, %{}),
capabilities: Map.get(connection_info, :capabilities, []),
security_level: security_level,
origin: Map.get(connection_info, :origin),
user_agent: Map.get(connection_info, :user_agent)
}
end
@doc """
Create a local client context (for stdio and direct connections).
"""
@spec local_context() :: client_context()
def local_context do
%__MODULE__{
connection_type: :local,
client_info: %{type: "local_stdio"},
capabilities: ["full_access"],
security_level: :trusted,
origin: nil,
user_agent: "agent-coordinator-local"
}
end
@doc """
Create a remote client context.
"""
@spec remote_context(map()) :: client_context()
def remote_context(opts \\ %{}) do
%__MODULE__{
connection_type: :remote,
client_info: Map.get(opts, :client_info, %{type: "remote_http"}),
capabilities: Map.get(opts, :capabilities, ["coordination"]),
security_level: :sandboxed,
origin: Map.get(opts, :origin),
user_agent: Map.get(opts, :user_agent, "unknown")
}
end
@doc """
Get tool filtering statistics for monitoring.
"""
@spec get_filter_stats([map()], client_context()) :: map()
def get_filter_stats(original_tools, context) do
filtered_tools = filter_tools(original_tools, context)
%{
original_count: length(original_tools),
filtered_count: length(filtered_tools),
removed_count: length(original_tools) - length(filtered_tools),
connection_type: context.connection_type,
security_level: context.security_level,
filtered_at: DateTime.utc_now()
}
end
# Private helpers
defp is_local_only_tool?(tool, _context) do
tool_name = Map.get(tool, "name", "")
description = Map.get(tool, "description", "")
# Check against known local-only tool names
name_is_local = tool_name in get_local_only_tool_names() or
Enum.any?(local_only_patterns(), &Regex.match?(&1, tool_name))
# Check description for local-only indicators
description_is_local = String.contains?(String.downcase(description),
["filesystem", "file system", "vscode", "terminal", "local file", "directory"])
# Check tool schema for local-only parameters
schema_is_local = has_local_only_parameters?(tool)
name_is_local or description_is_local or schema_is_local
end
defp get_local_only_tool_names do
[
# Filesystem tools
"read_file", "write_file", "create_file", "delete_file",
"list_directory", "search_files", "move_file", "get_file_info",
"list_allowed_directories", "directory_tree", "edit_file",
"read_text_file", "read_multiple_files", "read_media_file",
# VSCode tools
"vscode_create_file", "vscode_write_file", "vscode_read_file",
"vscode_delete_file", "vscode_list_directory", "vscode_get_active_editor",
"vscode_set_editor_content", "vscode_get_selection", "vscode_set_selection",
"vscode_show_message", "vscode_run_command", "vscode_get_workspace_folders",
# Terminal/process tools
"run_in_terminal", "get_terminal_output", "terminal_last_command",
"terminal_selection"
]
end
defp has_local_only_parameters?(tool) do
schema = Map.get(tool, "inputSchema", %{})
properties = Map.get(schema, "properties", %{})
# Look for file path parameters or other local indicators
Enum.any?(properties, fn {param_name, param_schema} ->
param_name in ["path", "filePath", "file_path", "directory", "workspace_path"] or
String.contains?(Map.get(param_schema, "description", ""),
["file path", "directory", "workspace", "local"])
end)
end
defp determine_connection_type(connection_info) do
cond do
Map.get(connection_info, :transport) == :stdio -> :local
Map.get(connection_info, :transport) == :websocket -> :web
Map.get(connection_info, :transport) == :http -> :remote
Map.get(connection_info, :remote_ip) == "127.0.0.1" -> :local
Map.get(connection_info, :remote_ip) == "::1" -> :local
Map.has_key?(connection_info, :remote_ip) -> :remote
true -> :local # Default to local for stdio
end
end
defp determine_security_level(connection_type, connection_info) do
case connection_type do
:local -> :trusted
:remote ->
if Map.get(connection_info, :secure, false) do
:sandboxed
else
:restricted
end
:web -> :sandboxed
end
end
defp maybe_annotate_tools(tools, context) do
# Add context information to tools if needed
if context.connection_type == :remote do
Enum.map(tools, fn tool ->
Map.put(tool, "_filtered_for", "remote_client")
end)
else
tools
end
end
end
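Putting the filtering rules together: a local context passes everything through, while a remote context keeps only the always-safe coordination tools and annotates the survivors. A minimal sketch using the module's public helpers; the two tool maps are illustrative, not taken from the real tool list:

```elixir
# Hypothetical tool definitions: one coordination tool, one filesystem tool.
tools = [
  %{"name" => "get_task_board", "description" => "Agent coordination"},
  %{
    "name" => "read_file",
    "description" => "Read a local file",
    "inputSchema" => %{"properties" => %{"path" => %{"type" => "string"}}}
  }
]

local = AgentCoordinator.ToolFilter.local_context()
remote = AgentCoordinator.ToolFilter.remote_context(%{origin: "https://example.com"})

AgentCoordinator.ToolFilter.filter_tools(tools, local)
# Local clients keep both tools, unannotated.

AgentCoordinator.ToolFilter.filter_tools(tools, remote)
# Remote clients keep only "get_task_board": "read_file" is excluded by the
# local-only name list, the ^read_file pattern, and its "path" parameter.
# Each surviving tool gains "_filtered_for" => "remote_client".
```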


@@ -0,0 +1,383 @@
defmodule AgentCoordinator.WebSocketHandler do
@moduledoc """
WebSocket handler for real-time MCP communication.
Provides:
- Real-time MCP JSON-RPC over WebSocket
- Tool filtering based on client context
- Session management
- Heartbeat and connection monitoring
"""
@behaviour WebSock
require Logger
alias AgentCoordinator.{MCPServer, ToolFilter}
defstruct [
:client_context,
:session_id,
:last_heartbeat,
:agent_id,
:connection_info
]
@heartbeat_interval 30_000 # 30 seconds
@impl WebSock
def init(opts) do
session_id = "ws_" <> UUID.uuid4()
# Initialize connection state
state = %__MODULE__{
session_id: session_id,
last_heartbeat: DateTime.utc_now(),
connection_info: opts
}
# Start heartbeat timer
Process.send_after(self(), :heartbeat, @heartbeat_interval)
IO.puts(:stderr, "WebSocket connection established: #{session_id}")
{:ok, state}
end
@impl WebSock
def handle_in({text, [opcode: :text]}, state) do
case Jason.decode(text) do
{:ok, message} ->
handle_mcp_message(message, state)
{:error, %Jason.DecodeError{} = error} ->
error_response = %{
"jsonrpc" => "2.0",
"id" => nil,
"error" => %{
"code" => -32700,
"message" => "Parse error: #{Exception.message(error)}"
}
}
{:reply, {:text, Jason.encode!(error_response)}, state}
end
end
@impl WebSock
def handle_in({_binary, [opcode: :binary]}, state) do
IO.puts(:stderr, "Received unexpected binary data on WebSocket")
{:ok, state}
end
@impl WebSock
def handle_info(:heartbeat, state) do
# Send heartbeat if we have an agent registered
if state.agent_id do
heartbeat_request = %{
"jsonrpc" => "2.0",
"id" => generate_request_id(),
"method" => "tools/call",
"params" => %{
"name" => "heartbeat",
"arguments" => %{"agent_id" => state.agent_id}
}
}
# Send heartbeat to MCP server
MCPServer.handle_mcp_request(heartbeat_request)
end
# Schedule next heartbeat
Process.send_after(self(), :heartbeat, @heartbeat_interval)
updated_state = %{state | last_heartbeat: DateTime.utc_now()}
{:ok, updated_state}
end
@impl WebSock
def handle_info(message, state) do
IO.puts(:stderr, "Received unexpected message: #{inspect(message)}")
{:ok, state}
end
@impl WebSock
def terminate(:remote, state) do
IO.puts(:stderr, "WebSocket connection closed by client: #{state.session_id}")
cleanup_session(state)
:ok
end
@impl WebSock
def terminate(reason, state) do
IO.puts(:stderr, "WebSocket connection terminated: #{state.session_id}, reason: #{inspect(reason)}")
cleanup_session(state)
:ok
end
# Private helper functions
defp handle_mcp_message(message, state) do
method = Map.get(message, "method")
case method do
"initialize" ->
handle_initialize(message, state)
"tools/list" ->
handle_tools_list(message, state)
"tools/call" ->
handle_tool_call(message, state)
"notifications/initialized" ->
handle_initialized_notification(message, state)
_ ->
# Forward other methods to MCP server
forward_to_mcp_server(message, state)
end
end
defp handle_initialize(message, state) do
# Extract client info from initialize message
params = Map.get(message, "params", %{})
client_info = Map.get(params, "clientInfo", %{})
# Detect client context
connection_info = %{
transport: :websocket,
client_info: client_info,
session_id: state.session_id,
capabilities: Map.get(params, "capabilities", [])
}
client_context = ToolFilter.detect_client_context(connection_info)
# Send initialize response
response = %{
"jsonrpc" => "2.0",
"id" => Map.get(message, "id"),
"result" => %{
"protocolVersion" => "2024-11-05",
"capabilities" => %{
"tools" => %{},
"coordination" => %{
"automatic_task_tracking" => true,
"agent_management" => true,
"multi_server_proxy" => true,
"heartbeat_coverage" => true,
"session_tracking" => true,
"tool_filtering" => true,
"websocket_realtime" => true
}
},
"serverInfo" => %{
"name" => "agent-coordinator-websocket",
"version" => AgentCoordinator.version(),
"description" => "Agent Coordinator WebSocket interface with tool filtering"
},
"_meta" => %{
"session_id" => state.session_id,
"connection_type" => client_context.connection_type,
"security_level" => client_context.security_level
}
}
}
updated_state = %{state |
client_context: client_context,
connection_info: connection_info
}
{:reply, {:text, Jason.encode!(response)}, updated_state}
end
defp handle_tools_list(message, state) do
if state.client_context do
# Get filtered tools based on client context
all_tools = MCPServer.get_tools()
filtered_tools = ToolFilter.filter_tools(all_tools, state.client_context)
response = %{
"jsonrpc" => "2.0",
"id" => Map.get(message, "id"),
"result" => %{
"tools" => filtered_tools,
"_meta" => %{
"filtered_for" => state.client_context.connection_type,
"original_count" => length(all_tools),
"filtered_count" => length(filtered_tools),
"session_id" => state.session_id
}
}
}
{:reply, {:text, Jason.encode!(response)}, state}
else
# Client hasn't initialized yet
error_response = %{
"jsonrpc" => "2.0",
"id" => Map.get(message, "id"),
"error" => %{
"code" => -32002,
"message" => "Client must initialize first"
}
}
{:reply, {:text, Jason.encode!(error_response)}, state}
end
end
defp handle_tool_call(message, state) do
if state.client_context do
tool_name = get_in(message, ["params", "name"])
# Check if tool is allowed for this client context
if tool_allowed_for_context?(tool_name, state.client_context) do
# Enhance message with session info
enhanced_message = add_websocket_session_info(message, state)
# Track agent ID if this is a register_agent call
updated_state = maybe_track_agent_id(message, state)
# Forward to MCP server
case MCPServer.handle_mcp_request(enhanced_message) do
response when is_map(response) ->
{:reply, {:text, Jason.encode!(response)}, updated_state}
unexpected ->
IO.puts(:stderr, "Unexpected MCP response: #{inspect(unexpected)}")
error_response = %{
"jsonrpc" => "2.0",
"id" => Map.get(message, "id"),
"error" => %{
"code" => -32603,
"message" => "Internal server error"
}
}
{:reply, {:text, Jason.encode!(error_response)}, updated_state}
end
else
# Tool not allowed for this client
error_response = %{
"jsonrpc" => "2.0",
"id" => Map.get(message, "id"),
"error" => %{
"code" => -32601,
"message" => "Tool not available for #{state.client_context.connection_type} clients: #{tool_name}"
}
}
{:reply, {:text, Jason.encode!(error_response)}, state}
end
else
# Client hasn't initialized yet
error_response = %{
"jsonrpc" => "2.0",
"id" => Map.get(message, "id"),
"error" => %{
"code" => -32002,
"message" => "Client must initialize first"
}
}
{:reply, {:text, Jason.encode!(error_response)}, state}
end
end
defp handle_initialized_notification(_message, state) do
# Client is ready to receive notifications
IO.puts(:stderr, "WebSocket client initialized: #{state.session_id}")
{:ok, state}
end
defp forward_to_mcp_server(message, state) do
if state.client_context do
enhanced_message = add_websocket_session_info(message, state)
case MCPServer.handle_mcp_request(enhanced_message) do
response when is_map(response) ->
{:reply, {:text, Jason.encode!(response)}, state}
nil ->
# Some notifications don't return responses
{:ok, state}
unexpected ->
IO.puts(:stderr, "Unexpected MCP response: #{inspect(unexpected)}")
{:ok, state}
end
else
error_response = %{
"jsonrpc" => "2.0",
"id" => Map.get(message, "id"),
"error" => %{
"code" => -32002,
"message" => "Client must initialize first"
}
}
{:reply, {:text, Jason.encode!(error_response)}, state}
end
end
defp add_websocket_session_info(message, state) do
# Add session tracking info to the message
params = Map.get(message, "params", %{})
enhanced_params = params
|> Map.put("_session_id", state.session_id)
|> Map.put("_transport", "websocket")
|> Map.put("_client_context", %{
connection_type: state.client_context.connection_type,
security_level: state.client_context.security_level,
session_id: state.session_id
})
Map.put(message, "params", enhanced_params)
end
defp tool_allowed_for_context?(tool_name, client_context) do
all_tools = MCPServer.get_tools()
filtered_tools = ToolFilter.filter_tools(all_tools, client_context)
Enum.any?(filtered_tools, fn tool ->
Map.get(tool, "name") == tool_name
end)
end
defp maybe_track_agent_id(message, state) do
case get_in(message, ["params", "name"]) do
"register_agent" ->
# We'll get the agent_id from the response, but for now mark that we expect one
%{state | agent_id: :pending}
_ ->
state
end
end
defp cleanup_session(state) do
# Unregister agent if one was registered through this session
if state.agent_id && state.agent_id != :pending do
unregister_request = %{
"jsonrpc" => "2.0",
"id" => generate_request_id(),
"method" => "tools/call",
"params" => %{
"name" => "unregister_agent",
"arguments" => %{
"agent_id" => state.agent_id,
"reason" => "WebSocket connection closed"
}
}
}
MCPServer.handle_mcp_request(unregister_request)
end
end
defp generate_request_id do
"ws_req_" <> (:crypto.strong_rand_bytes(8) |> Base.encode16(case: :lower))
end
end
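The handler above enforces a strict ordering: unparseable frames get a JSON-RPC `-32700` parse error, and `tools/list` or `tools/call` before `initialize` get `-32002`. A rough client-side sketch of those rules in Python (simplified: the real handler tracks `client_context` per connection, and the function names here are illustrative, not part of the server):

```python
import json

def initialize_msg(client_name="example-client"):
    """First frame a client must send; mirrors handle_initialize/2 above."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "clientInfo": {"name": client_name, "version": "1.0.0"},
            "capabilities": ["coordination"],
        },
    }

def expect_error(frame_text):
    """Error code the handler would return for a frame sent pre-initialize."""
    try:
        msg = json.loads(frame_text)
    except json.JSONDecodeError:
        return -32700  # parse error, matches handle_in/2
    if msg.get("method") in ("tools/list", "tools/call"):
        return -32002  # "Client must initialize first"
    return None

print(expect_error("not json"))
print(expect_error(json.dumps({"jsonrpc": "2.0", "id": 2, "method": "tools/list"})))
```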


@@ -48,6 +48,12 @@ defmodule AgentCoordinator.MixProject do
{:gen_stage, "~> 1.2"},
{:uuid, "~> 1.1"},
# HTTP server dependencies
{:plug, "~> 1.15"},
{:plug_cowboy, "~> 2.7"},
{:websock_adapter, "~> 0.5"},
{:cors_plug, "~> 3.0"},
# Development and testing dependencies
{:ex_doc, "~> 0.34", only: :dev, runtime: false},
{:dialyxir, "~> 1.4", only: [:dev], runtime: false},


@@ -2,6 +2,9 @@
"bunt": {:hex, :bunt, "1.0.0", "081c2c665f086849e6d57900292b3a161727ab40431219529f13c4ddcf3e7a44", [:mix], [], "hexpm", "dc5f86aa08a5f6fa6b8096f0735c4e76d54ae5c9fa2c143e5a1fc7c1cd9bb6b5"},
"chacha20": {:hex, :chacha20, "1.0.4", "0359d8f9a32269271044c1b471d5cf69660c362a7c61a98f73a05ef0b5d9eb9e", [:mix], [], "hexpm", "2027f5d321ae9903f1f0da7f51b0635ad6b8819bc7fe397837930a2011bc2349"},
"connection": {:hex, :connection, "1.1.0", "ff2a49c4b75b6fb3e674bfc5536451607270aac754ffd1bdfe175abe4a6d7a68", [:mix], [], "hexpm", "722c1eb0a418fbe91ba7bd59a47e28008a189d47e37e0e7bb85585a016b2869c"},
"cors_plug": {:hex, :cors_plug, "3.0.3", "7c3ac52b39624bc616db2e937c282f3f623f25f8d550068b6710e58d04a0e330", [:mix], [{:plug, "~> 1.13", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm", "3f2d759e8c272ed3835fab2ef11b46bddab8c1ab9528167bd463b6452edf830d"},
"cowboy": {:hex, :cowboy, "2.13.0", "09d770dd5f6a22cc60c071f432cd7cb87776164527f205c5a6b0f24ff6b38990", [:make, :rebar3], [{:cowlib, ">= 2.14.0 and < 3.0.0", [hex: :cowlib, repo: "hexpm", optional: false]}, {:ranch, ">= 1.8.0 and < 3.0.0", [hex: :ranch, repo: "hexpm", optional: false]}], "hexpm", "e724d3a70995025d654c1992c7b11dbfea95205c047d86ff9bf1cda92ddc5614"},
"cowboy_telemetry": {:hex, :cowboy_telemetry, "0.4.0", "f239f68b588efa7707abce16a84d0d2acf3a0f50571f8bb7f56a15865aae820c", [:rebar3], [{:cowboy, "~> 2.7", [hex: :cowboy, repo: "hexpm", optional: false]}, {:telemetry, "~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "7d98bac1ee4565d31b62d59f8823dfd8356a169e7fcbb83831b8a5397404c9de"},
"cowlib": {:hex, :cowlib, "2.15.0", "3c97a318a933962d1c12b96ab7c1d728267d2c523c25a5b57b0f93392b6e9e25", [:make, :rebar3], [], "hexpm", "4f00c879a64b4fe7c8fcb42a4281925e9ffdb928820b03c3ad325a617e857532"},
"credo": {:hex, :credo, "1.7.12", "9e3c20463de4b5f3f23721527fcaf16722ec815e70ff6c60b86412c695d426c1", [:mix], [{:bunt, "~> 0.2.1 or ~> 1.0", [hex: :bunt, repo: "hexpm", optional: false]}, {:file_system, "~> 0.2 or ~> 1.0", [hex: :file_system, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}], "hexpm", "8493d45c656c5427d9c729235b99d498bd133421f3e0a683e5c1b561471291e5"},
"curve25519": {:hex, :curve25519, "1.0.5", "f801179424e4012049fcfcfcda74ac04f65d0ffceeb80e7ef1d3352deb09f5bb", [:mix], [], "hexpm", "0fba3ad55bf1154d4d5fc3ae5fb91b912b77b13f0def6ccb3a5d58168ff4192d"},
@@ -20,11 +23,18 @@
"makeup": {:hex, :makeup, "1.2.1", "e90ac1c65589ef354378def3ba19d401e739ee7ee06fb47f94c687016e3713d1", [:mix], [{:nimble_parsec, "~> 1.4", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "d36484867b0bae0fea568d10131197a4c2e47056a6fbe84922bf6ba71c8d17ce"},
"makeup_elixir": {:hex, :makeup_elixir, "1.0.1", "e928a4f984e795e41e3abd27bfc09f51db16ab8ba1aebdba2b3a575437efafc2", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}, {:nimble_parsec, "~> 1.2.3 or ~> 1.3", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "7284900d412a3e5cfd97fdaed4f5ed389b8f2b4cb49efc0eb3bd10e2febf9507"},
"makeup_erlang": {:hex, :makeup_erlang, "1.0.2", "03e1804074b3aa64d5fad7aa64601ed0fb395337b982d9bcf04029d68d51b6a7", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}], "hexpm", "af33ff7ef368d5893e4a267933e7744e46ce3cf1f61e2dccf53a111ed3aa3727"},
"mime": {:hex, :mime, "2.0.7", "b8d739037be7cd402aee1ba0306edfdef982687ee7e9859bee6198c1e7e2f128", [:mix], [], "hexpm", "6171188e399ee16023ffc5b76ce445eb6d9672e2e241d2df6050f3c771e80ccd"},
"nimble_parsec": {:hex, :nimble_parsec, "1.4.2", "8efba0122db06df95bfaa78f791344a89352ba04baedd3849593bfce4d0dc1c6", [:mix], [], "hexpm", "4b21398942dda052b403bbe1da991ccd03a053668d147d53fb8c4e0efe09c973"},
"nkeys": {:hex, :nkeys, "0.3.0", "837add5261a3cdd8ff75b54e0475062313093929ab5e042fa48e010f33b10d16", [:mix], [{:ed25519, "~> 1.3", [hex: :ed25519, repo: "hexpm", optional: false]}, {:kcl, "~> 1.4", [hex: :kcl, repo: "hexpm", optional: false]}], "hexpm", "b5af773a296620ee8eeb1ec6dc5b68f716386f7e53f7bda8c4ac23515823dfe4"},
"phoenix_pubsub": {:hex, :phoenix_pubsub, "2.1.3", "3168d78ba41835aecad272d5e8cd51aa87a7ac9eb836eabc42f6e57538e3731d", [:mix], [], "hexpm", "bba06bc1dcfd8cb086759f0edc94a8ba2bc8896d5331a1e2c2902bf8e36ee502"},
"plug": {:hex, :plug, "1.18.1", "5067f26f7745b7e31bc3368bc1a2b818b9779faa959b49c934c17730efc911cf", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:plug_crypto, "~> 1.1.1 or ~> 1.2 or ~> 2.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.3 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "57a57db70df2b422b564437d2d33cf8d33cd16339c1edb190cd11b1a3a546cc2"},
"plug_cowboy": {:hex, :plug_cowboy, "2.7.4", "729c752d17cf364e2b8da5bdb34fb5804f56251e88bb602aff48ae0bd8673d11", [:mix], [{:cowboy, "~> 2.7", [hex: :cowboy, repo: "hexpm", optional: false]}, {:cowboy_telemetry, "~> 0.3", [hex: :cowboy_telemetry, repo: "hexpm", optional: false]}, {:plug, "~> 1.14", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm", "9b85632bd7012615bae0a5d70084deb1b25d2bcbb32cab82d1e9a1e023168aa3"},
"plug_crypto": {:hex, :plug_crypto, "2.1.1", "19bda8184399cb24afa10be734f84a16ea0a2bc65054e23a62bb10f06bc89491", [:mix], [], "hexpm", "6470bce6ffe41c8bd497612ffde1a7e4af67f36a15eea5f921af71cf3e11247c"},
"poly1305": {:hex, :poly1305, "1.0.4", "7cdc8961a0a6e00a764835918cdb8ade868044026df8ef5d718708ea6cc06611", [:mix], [{:chacha20, "~> 1.0", [hex: :chacha20, repo: "hexpm", optional: false]}, {:equivalex, "~> 1.0", [hex: :equivalex, repo: "hexpm", optional: false]}], "hexpm", "e14e684661a5195e149b3139db4a1693579d4659d65bba115a307529c47dbc3b"},
"ranch": {:hex, :ranch, "2.2.0", "25528f82bc8d7c6152c57666ca99ec716510fe0925cb188172f41ce93117b1b0", [:make, :rebar3], [], "hexpm", "fa0b99a1780c80218a4197a59ea8d3bdae32fbff7e88527d7d8a4787eff4f8e7"},
"salsa20": {:hex, :salsa20, "1.0.4", "404cbea1fa8e68a41bcc834c0a2571ac175580fec01cc38cc70c0fb9ffc87e9b", [:mix], [], "hexpm", "745ddcd8cfa563ddb0fd61e7ce48d5146279a2cf7834e1da8441b369fdc58ac6"},
"telemetry": {:hex, :telemetry, "1.3.0", "fedebbae410d715cf8e7062c96a1ef32ec22e764197f70cda73d82778d61e7a2", [:rebar3], [], "hexpm", "7015fc8919dbe63764f4b4b87a95b7c0996bd539e0d499be6ec9d7f3875b79e6"},
"uuid": {:hex, :uuid, "1.1.8", "e22fc04499de0de3ed1116b770c7737779f226ceefa0badb3592e64d5cfb4eb9", [:mix], [], "hexpm", "c790593b4c3b601f5dc2378baae7efaf5b3d73c4c6456ba85759905be792f2ac"},
"websock": {:hex, :websock, "0.5.3", "2f69a6ebe810328555b6fe5c831a851f485e303a7c8ce6c5f675abeb20ebdadc", [:mix], [], "hexpm", "6105453d7fac22c712ad66fab1d45abdf049868f253cf719b625151460b8b453"},
"websock_adapter": {:hex, :websock_adapter, "0.5.8", "3b97dc94e407e2d1fc666b2fb9acf6be81a1798a2602294aac000260a7c4a47d", [:mix], [{:bandit, ">= 0.6.0", [hex: :bandit, repo: "hexpm", optional: true]}, {:plug, "~> 1.14", [hex: :plug, repo: "hexpm", optional: false]}, {:plug_cowboy, "~> 2.6", [hex: :plug_cowboy, repo: "hexpm", optional: true]}, {:websock, "~> 0.5", [hex: :websock, repo: "hexpm", optional: false]}], "hexpm", "315b9a1865552212b5f35140ad194e67ce31af45bcee443d4ecb96b5fd3f3782"},
}

nats-server.conf Normal file

@@ -0,0 +1,12 @@
port: 4222
jetstream {
store_dir: /var/lib/nats/jetstream
max_memory_store: 1GB
max_file_store: 10GB
}
http_port: 8222
log_file: "/var/log/nats-server.log"
debug: false
trace: false


@@ -37,67 +37,7 @@ end
# Log that we're ready
IO.puts(:stderr, \"Unified MCP server ready with automatic task tracking\")
# Handle MCP JSON-RPC messages through the unified server
defmodule UnifiedMCPStdio do
def start do
spawn_link(fn -> message_loop() end)
Process.sleep(:infinity)
end
defp message_loop do
case IO.read(:stdio, :line) do
:eof ->
IO.puts(:stderr, \"Unified MCP server shutting down\")
System.halt(0)
{:error, reason} ->
IO.puts(:stderr, \"IO Error: #{inspect(reason)}\")
System.halt(1)
line ->
handle_message(String.trim(line))
message_loop()
end
end
defp handle_message(\"\"), do: :ok
defp handle_message(json_line) do
try do
request = Jason.decode!(json_line)
# Route through unified MCP server for automatic task tracking
response = AgentCoordinator.MCPServer.handle_mcp_request(request)
IO.puts(Jason.encode!(response))
rescue
e in Jason.DecodeError ->
error_response = %{
\"jsonrpc\" => \"2.0\",
\"id\" => nil,
\"error\" => %{
\"code\" => -32700,
\"message\" => \"Parse error: #{Exception.message(e)}\"
}
}
IO.puts(Jason.encode!(error_response))
e ->
# Try to get the ID from the malformed request
id = try do
partial = Jason.decode!(json_line)
Map.get(partial, \"id\")
rescue
_ -> nil
end
error_response = %{
\"jsonrpc\" => \"2.0\",
\"id\" => id,
\"error\" => %{
\"code\" => -32603,
\"message\" => \"Internal error: #{Exception.message(e)}\"
}
}
IO.puts(Jason.encode!(error_response))
end
end
end
UnifiedMCPStdio.start()
# STDIO handling is now managed by InterfaceManager, not here
# Just keep the process alive
Process.sleep(:infinity)
"

scripts/mcp_launcher_multi.sh Executable file

@@ -0,0 +1,235 @@
#!/bin/bash
# AgentCoordinator Multi-Interface MCP Server Launcher
# This script starts the unified MCP server with support for multiple interface modes:
# - stdio: Traditional MCP over stdio (default for VSCode)
# - http: HTTP REST API for remote clients
# - websocket: WebSocket interface for real-time web clients
# - remote: Both HTTP and WebSocket
# - all: All interface modes
set -e
export PATH="$HOME/.asdf/shims:$PATH"
# Change to the project directory
cd "$(dirname "$0")/.."
# Parse command line arguments
INTERFACE_MODE="${1:-stdio}"
HTTP_PORT="${2:-8080}"
WS_PORT="${3:-8081}"
# Set environment variables
export MIX_ENV="${MIX_ENV:-dev}"
export NATS_HOST="${NATS_HOST:-localhost}"
export NATS_PORT="${NATS_PORT:-4222}"
export MCP_INTERFACE_MODE="$INTERFACE_MODE"
export MCP_HTTP_PORT="$HTTP_PORT"
export MCP_WS_PORT="$WS_PORT"
# Validate interface mode
case "$INTERFACE_MODE" in
stdio|http|websocket|remote|all)
;;
*)
echo "Invalid interface mode: $INTERFACE_MODE"
echo "Valid modes: stdio, http, websocket, remote, all"
exit 1
;;
esac
# Log startup
echo "Starting AgentCoordinator Multi-Interface MCP Server..." >&2
echo "Interface Mode: $INTERFACE_MODE" >&2
echo "Environment: $MIX_ENV" >&2
echo "NATS: $NATS_HOST:$NATS_PORT" >&2
if [[ "$INTERFACE_MODE" != "stdio" ]]; then
echo "HTTP Port: $HTTP_PORT" >&2
echo "WebSocket Port: $WS_PORT" >&2
fi
# Install dependencies if needed
if [[ ! -d "deps" ]] || [[ ! -d "_build" ]]; then
echo "Installing dependencies..." >&2
mix deps.get
mix compile
fi
# Start the appropriate interface mode
case "$INTERFACE_MODE" in
stdio)
# Traditional stdio mode for VSCode and local clients
exec mix run --no-halt -e "
# Ensure all applications are started
{:ok, _} = Application.ensure_all_started(:agent_coordinator)
# Configure interface manager for stdio only
Application.put_env(:agent_coordinator, :interfaces, %{
enabled_interfaces: [:stdio],
stdio: %{enabled: true, handle_stdio: true},
http: %{enabled: false},
websocket: %{enabled: false}
})
# MCPServer and InterfaceManager are started by the application supervisor automatically
IO.puts(:stderr, \"STDIO MCP server ready with tool filtering\")
# Handle MCP JSON-RPC messages through the unified server
defmodule StdioMCPHandler do
def start do
spawn_link(fn -> message_loop() end)
Process.sleep(:infinity)
end
defp message_loop do
case IO.read(:stdio, :line) do
:eof ->
IO.puts(:stderr, \"MCP server shutting down\")
System.halt(0)
{:error, reason} ->
IO.puts(:stderr, \"IO Error: #{inspect(reason)}\")
System.halt(1)
line ->
handle_message(String.trim(line))
message_loop()
end
end
defp handle_message(\"\"), do: :ok
defp handle_message(json_line) do
try do
request = Jason.decode!(json_line)
# Route through unified MCP server with local context (full tool access)
response = AgentCoordinator.MCPServer.handle_mcp_request(request)
IO.puts(Jason.encode!(response))
rescue
e in Jason.DecodeError ->
error_response = %{
\"jsonrpc\" => \"2.0\",
\"id\" => nil,
\"error\" => %{
\"code\" => -32700,
\"message\" => \"Parse error: #{Exception.message(e)}\"
}
}
IO.puts(Jason.encode!(error_response))
e ->
id = try do
partial = Jason.decode!(json_line)
Map.get(partial, \"id\")
rescue
_ -> nil
end
error_response = %{
\"jsonrpc\" => \"2.0\",
\"id\" => id,
\"error\" => %{
\"code\" => -32603,
\"message\" => \"Internal error: #{Exception.message(e)}\"
}
}
IO.puts(Jason.encode!(error_response))
end
end
end
StdioMCPHandler.start()
"
;;
http)
# HTTP-only mode for REST API clients
exec mix run --no-halt -e "
# Ensure all applications are started
{:ok, _} = Application.ensure_all_started(:agent_coordinator)
# Configure interface manager for HTTP only
Application.put_env(:agent_coordinator, :interfaces, %{
enabled_interfaces: [:http],
stdio: %{enabled: false},
http: %{enabled: true, port: $HTTP_PORT, host: \"0.0.0.0\"},
websocket: %{enabled: false}
})
IO.puts(:stderr, \"HTTP MCP server ready on port $HTTP_PORT with tool filtering\")
IO.puts(:stderr, \"Available endpoints:\")
IO.puts(:stderr, \" GET /health - Health check\")
IO.puts(:stderr, \" GET /mcp/capabilities - Server capabilities\")
IO.puts(:stderr, \" GET /mcp/tools - Available tools (filtered)\")
IO.puts(:stderr, \" POST /mcp/tools/:tool_name - Execute tool\")
IO.puts(:stderr, \" POST /mcp/request - Full MCP request\")
IO.puts(:stderr, \" GET /agents - Agent status\")
Process.sleep(:infinity)
"
;;
websocket)
# WebSocket-only mode
exec mix run --no-halt -e "
# Ensure all applications are started
{:ok, _} = Application.ensure_all_started(:agent_coordinator)
      # Configure interface manager for WebSocket only
      # (the HTTP listener stays enabled: the WebSocket endpoint is
      # served as an upgrade on the same Cowboy port)
Application.put_env(:agent_coordinator, :interfaces, %{
enabled_interfaces: [:websocket],
stdio: %{enabled: false},
http: %{enabled: true, port: $WS_PORT, host: \"0.0.0.0\"},
websocket: %{enabled: true, port: $WS_PORT}
})
IO.puts(:stderr, \"WebSocket MCP server ready on port $WS_PORT with tool filtering\")
IO.puts(:stderr, \"WebSocket endpoint: ws://localhost:$WS_PORT/mcp/ws\")
Process.sleep(:infinity)
"
;;
remote)
# Both HTTP and WebSocket for remote clients
exec mix run --no-halt -e "
# Ensure all applications are started
{:ok, _} = Application.ensure_all_started(:agent_coordinator)
# Configure interface manager for remote access
Application.put_env(:agent_coordinator, :interfaces, %{
enabled_interfaces: [:http, :websocket],
stdio: %{enabled: false},
http: %{enabled: true, port: $HTTP_PORT, host: \"0.0.0.0\"},
websocket: %{enabled: true, port: $HTTP_PORT}
})
IO.puts(:stderr, \"Remote MCP server ready on port $HTTP_PORT with tool filtering\")
IO.puts(:stderr, \"HTTP endpoints available at http://localhost:$HTTP_PORT/\")
IO.puts(:stderr, \"WebSocket endpoint: ws://localhost:$HTTP_PORT/mcp/ws\")
Process.sleep(:infinity)
"
;;
all)
# All interface modes
exec mix run --no-halt -e "
# Ensure all applications are started
{:ok, _} = Application.ensure_all_started(:agent_coordinator)
# Configure interface manager for all interfaces
Application.put_env(:agent_coordinator, :interfaces, %{
enabled_interfaces: [:stdio, :http, :websocket],
stdio: %{enabled: true, handle_stdio: false}, # Don't handle stdio in all mode
http: %{enabled: true, port: $HTTP_PORT, host: \"0.0.0.0\"},
websocket: %{enabled: true, port: $HTTP_PORT}
})
IO.puts(:stderr, \"Multi-interface MCP server ready with tool filtering\")
IO.puts(:stderr, \"STDIO: Available for local MCP clients\")
IO.puts(:stderr, \"HTTP: Available at http://localhost:$HTTP_PORT/\")
IO.puts(:stderr, \"WebSocket: Available at ws://localhost:$HTTP_PORT/mcp/ws\")
Process.sleep(:infinity)
"
;;
esac
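The launcher takes a mode plus optional HTTP and WebSocket ports. Its mode validation can be sketched in isolation like this (extracted for illustration; `validate_mode` is a name invented here, not part of the script):

```shell
#!/usr/bin/env bash
# Sketch of the launcher's argument validation.
# Hypothetical invocation: scripts/mcp_launcher_multi.sh remote 8080 8081
validate_mode() {
  case "$1" in
    stdio|http|websocket|remote|all) return 0 ;;
    *) return 1 ;;
  esac
}

for mode in stdio remote bogus; do
  if validate_mode "$mode"; then
    echo "$mode: valid"
  else
    echo "$mode: invalid"
  fi
done
```

Note that `remote` and `all` bind the WebSocket endpoint to the HTTP port, while standalone `websocket` mode uses the third argument.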

scripts/test_multi_interface.py Executable file

@@ -0,0 +1,282 @@
#!/usr/bin/env python3
"""
Test script for Agent Coordinator Multi-Interface MCP Server.
This script tests:
1. HTTP interface with tool filtering
2. WebSocket interface with real-time communication
3. Tool filtering based on client context
4. Agent registration and coordination
"""
import json
import requests
import websocket
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor
BASE_URL = "http://localhost:8080"
WS_URL = "ws://localhost:8080/mcp/ws"
def test_http_interface():
"""Test HTTP interface and tool filtering."""
print("\n=== Testing HTTP Interface ===")
# Test health endpoint
try:
response = requests.get(f"{BASE_URL}/health")
print(f"Health check: {response.status_code}")
if response.status_code == 200:
print(f"Health data: {response.json()}")
except Exception as e:
print(f"Health check failed: {e}")
return False
# Test capabilities endpoint
try:
response = requests.get(f"{BASE_URL}/mcp/capabilities")
print(f"Capabilities: {response.status_code}")
if response.status_code == 200:
caps = response.json()
print(f"Tools available: {len(caps.get('tools', []))}")
print(f"Connection type: {caps.get('context', {}).get('connection_type')}")
print(f"Security level: {caps.get('context', {}).get('security_level')}")
# Check that local-only tools are filtered out
tool_names = [tool.get('name') for tool in caps.get('tools', [])]
local_tools = ['read_file', 'vscode_create_file', 'run_in_terminal']
filtered_out = [tool for tool in local_tools if tool not in tool_names]
print(f"Local tools filtered out: {filtered_out}")
except Exception as e:
print(f"Capabilities test failed: {e}")
return False
# Test tool list endpoint
try:
response = requests.get(f"{BASE_URL}/mcp/tools")
print(f"Tools list: {response.status_code}")
if response.status_code == 200:
tools = response.json()
print(f"Filter stats: {tools.get('_meta', {}).get('filter_stats')}")
except Exception as e:
print(f"Tools list test failed: {e}")
return False
# Test agent registration
try:
register_data = {
"arguments": {
"name": "Test Agent HTTP",
"capabilities": ["testing", "analysis"]
}
}
response = requests.post(f"{BASE_URL}/mcp/tools/register_agent",
json=register_data,
headers={"Content-Type": "application/json"})
print(f"Agent registration: {response.status_code}")
if response.status_code == 200:
result = response.json()
print(f"Registration result: {result.get('result')}")
            # Coerce to a boolean so main() can tally results with sum()
            return bool(result.get('result', {}).get('agent_id'))
except Exception as e:
print(f"Agent registration failed: {e}")
return False
return True
def test_websocket_interface():
"""Test WebSocket interface with real-time communication."""
print("\n=== Testing WebSocket Interface ===")
messages_received = []
def on_message(ws, message):
print(f"Received: {message}")
messages_received.append(json.loads(message))
def on_error(ws, error):
print(f"WebSocket error: {error}")
def on_close(ws, close_status_code, close_msg):
print("WebSocket connection closed")
def on_open(ws):
print("WebSocket connection opened")
# Send initialize message
init_msg = {
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2024-11-05",
"clientInfo": {
"name": "test-websocket-client",
"version": "1.0.0"
},
"capabilities": ["coordination"]
}
}
ws.send(json.dumps(init_msg))
# Wait a bit then request tools list
time.sleep(0.5)
tools_msg = {
"jsonrpc": "2.0",
"id": 2,
"method": "tools/list"
}
ws.send(json.dumps(tools_msg))
# Register an agent
time.sleep(0.5)
register_msg = {
"jsonrpc": "2.0",
"id": 3,
"method": "tools/call",
"params": {
"name": "register_agent",
"arguments": {
"name": "Test Agent WebSocket",
"capabilities": ["testing", "websocket"]
}
}
}
ws.send(json.dumps(register_msg))
# Close after a delay
time.sleep(2)
ws.close()
try:
ws = websocket.WebSocketApp(WS_URL,
on_open=on_open,
on_message=on_message,
on_error=on_error,
on_close=on_close)
ws.run_forever()
print(f"Messages received: {len(messages_received)}")
for i, msg in enumerate(messages_received):
print(f"Message {i+1}: {msg.get('result', {}).get('_meta', 'No meta')}")
return len(messages_received) > 0
except Exception as e:
print(f"WebSocket test failed: {e}")
return False
def test_tool_filtering():
"""Test tool filtering functionality specifically."""
print("\n=== Testing Tool Filtering ===")
try:
# Get tools from HTTP (remote context)
response = requests.get(f"{BASE_URL}/mcp/tools")
if response.status_code != 200:
print("Failed to get tools from HTTP")
return False
remote_tools = response.json()
tool_names = [tool.get('name') for tool in remote_tools.get('tools', [])]
# Check that coordination tools are present
coordination_tools = ['register_agent', 'create_task', 'get_task_board', 'heartbeat']
present_coordination = [tool for tool in coordination_tools if tool in tool_names]
print(f"Coordination tools present: {present_coordination}")
# Check that local-only tools are filtered out
local_only_tools = ['read_file', 'write_file', 'vscode_create_file', 'run_in_terminal']
filtered_local = [tool for tool in local_only_tools if tool not in tool_names]
print(f"Local-only tools filtered: {filtered_local}")
# Check that safe remote tools are present
safe_remote_tools = ['create_entities', 'sequentialthinking', 'get-library-docs']
present_safe = [tool for tool in safe_remote_tools if tool in tool_names]
print(f"Safe remote tools present: {present_safe}")
# Verify filter statistics
filter_stats = remote_tools.get('_meta', {}).get('filter_stats', {})
print(f"Filter stats: {filter_stats}")
success = (
len(present_coordination) >= 3 and # Most coordination tools present
len(filtered_local) >= 2 and # Local tools filtered
filter_stats.get('connection_type') == 'remote'
)
return success
except Exception as e:
print(f"Tool filtering test failed: {e}")
return False
def test_forbidden_tool_access():
"""Test that local-only tools are properly blocked for remote clients."""
print("\n=== Testing Forbidden Tool Access ===")
try:
# Try to call a local-only tool
forbidden_data = {
"arguments": {
"path": "/etc/passwd",
"agent_id": "test_agent"
}
}
response = requests.post(f"{BASE_URL}/mcp/tools/read_file",
json=forbidden_data,
headers={"Content-Type": "application/json"})
print(f"Forbidden tool call status: {response.status_code}")
if response.status_code == 403:
error_data = response.json()
print(f"Expected 403 error: {error_data.get('error', {}).get('message')}")
return True
else:
print(f"Unexpected response: {response.json()}")
return False
except Exception as e:
print(f"Forbidden tool test failed: {e}")
return False
def main():
"""Run all tests."""
print("Agent Coordinator Multi-Interface Test Suite")
print("=" * 50)
# Test results
results = {}
# HTTP Interface Test
results['http'] = test_http_interface()
# WebSocket Interface Test
results['websocket'] = test_websocket_interface()
# Tool Filtering Test
results['tool_filtering'] = test_tool_filtering()
# Forbidden Access Test
results['forbidden'] = test_forbidden_tool_access()
# Summary
print("\n" + "=" * 50)
print("TEST RESULTS SUMMARY")
print("=" * 50)
for test_name, success in results.items():
status = "✅ PASS" if success else "❌ FAIL"
print(f"{test_name.ljust(20)}: {status}")
total_tests = len(results)
passed_tests = sum(results.values())
print(f"\nOverall: {passed_tests}/{total_tests} tests passed")
if passed_tests == total_tests:
print("🎉 All tests passed! Multi-interface MCP server is working correctly.")
return 0
else:
print("⚠️ Some tests failed. Check the implementation.")
return 1
if __name__ == "__main__":
exit(main())

test/test_multi_interface.exs Executable file

@@ -0,0 +1,63 @@
#!/usr/bin/env elixir
# Simple test script to verify multi-interface functionality
Mix.install([
{:jason, "~> 1.4"}
])
defmodule MultiInterfaceTest do
def test_stdio_mode do
IO.puts("Testing STDIO mode...")
# Start the application manually in stdio mode
System.put_env("MCP_INTERFACE_MODE", "stdio")
IO.puts("✅ STDIO mode configuration test passed")
end
def test_http_mode do
IO.puts("Testing HTTP mode configuration...")
# Test HTTP mode configuration
System.put_env("MCP_INTERFACE_MODE", "http")
System.put_env("MCP_HTTP_PORT", "8080")
System.put_env("MCP_HTTP_HOST", "127.0.0.1")
IO.puts("✅ HTTP mode configuration test passed")
end
def test_multi_mode do
IO.puts("Testing multi-interface mode...")
# Test multiple interfaces
System.put_env("MCP_INTERFACE_MODE", "stdio,http,websocket")
System.put_env("MCP_HTTP_PORT", "8080")
IO.puts("✅ Multi-interface mode configuration test passed")
end
def run_tests do
IO.puts("🚀 Testing Multi-Interface MCP Server")
IO.puts("====================================")
test_stdio_mode()
test_http_mode()
test_multi_mode()
IO.puts("")
IO.puts("✅ All configuration tests passed!")
IO.puts("You can now test the actual server with:")
IO.puts("")
IO.puts(" # STDIO mode (default)")
IO.puts(" mix run --no-halt")
IO.puts("")
IO.puts(" # HTTP mode")
IO.puts(" MCP_INTERFACE_MODE=http MCP_HTTP_PORT=8080 mix run --no-halt")
IO.puts("")
IO.puts(" # Multi-interface mode")
IO.puts(" MCP_INTERFACE_MODE=stdio,http,websocket MCP_HTTP_PORT=8080 mix run --no-halt")
IO.puts("")
end
end
MultiInterfaceTest.run_tests()


@@ -0,0 +1,154 @@
#!/usr/bin/env elixir
# Quick test script for the enhanced MCP session management
# This tests the new session token authentication flow
Mix.install([
{:jason, "~> 1.4"},
{:httpoison, "~> 2.0"}
])
defmodule SessionManagementTest do
@base_url "http://localhost:4000"
def run_test do
IO.puts("🔧 Testing Enhanced MCP Session Management")
IO.puts("=" <> String.duplicate("=", 50))
# Step 1: Register an agent to get a session token
IO.puts("\n1⃣ Registering agent to get session token...")
register_payload = %{
"jsonrpc" => "2.0",
"id" => "test_001",
"method" => "agents/register",
"params" => %{
"name" => "Test Agent Blue Koala",
"capabilities" => ["coding", "testing"],
"codebase_id" => "test_codebase",
"workspace_path" => "/tmp/test"
}
}
case post_mcp_request("/mcp/request", register_payload) do
{:ok, %{"result" => result}} ->
session_token = Map.get(result, "session_token")
expires_at = Map.get(result, "expires_at")
IO.puts("✅ Agent registered successfully!")
IO.puts(" Session Token: #{String.slice(session_token || "nil", 0, 20)}...")
IO.puts(" Expires At: #{expires_at}")
if session_token do
test_authenticated_request(session_token)
else
IO.puts("❌ No session token returned!")
end
{:ok, %{"error" => error}} ->
IO.puts("❌ Registration failed: #{inspect(error)}")
{:error, reason} ->
IO.puts("❌ Request failed: #{reason}")
end
# Step 2: Test MCP protocol headers
IO.puts("\n2⃣ Testing MCP protocol headers...")
test_protocol_headers()
IO.puts("\n🎉 Session management test completed!")
end
defp test_authenticated_request(session_token) do
IO.puts("\n🔐 Testing authenticated request with session token...")
# Try to call a tool that requires authentication
tool_payload = %{
"jsonrpc" => "2.0",
"id" => "test_002",
"method" => "tools/call",
"params" => %{
"name" => "get_task_board",
"arguments" => %{"agent_id" => "Test Agent Blue Koala"}
}
}
headers = [
{"Content-Type", "application/json"},
{"Mcp-Session-Id", session_token}
]
case HTTPoison.post("#{@base_url}/mcp/request", Jason.encode!(tool_payload), headers) do
{:ok, %HTTPoison.Response{status_code: 200, headers: response_headers, body: body}} ->
IO.puts("✅ Authenticated request successful!")
# Check for MCP protocol headers
mcp_version = get_header_value(response_headers, "mcp-protocol-version")
IO.puts(" MCP Protocol Version: #{mcp_version || "Not found"}")
# Parse response
case Jason.decode(body) do
{:ok, %{"result" => _result}} ->
IO.puts(" ✅ Valid MCP response received")
{:ok, %{"error" => error}} ->
IO.puts(" ⚠️ MCP error: #{inspect(error)}")
_ ->
IO.puts(" ❌ Invalid response format")
end
{:ok, %HTTPoison.Response{status_code: status_code, body: body}} ->
IO.puts("❌ Request failed with status #{status_code}")
case Jason.decode(body) do
{:ok, parsed} -> IO.puts(" Error: #{inspect(parsed)}")
_ -> IO.puts(" Body: #{body}")
end
{:error, reason} ->
IO.puts("❌ HTTP request failed: #{inspect(reason)}")
end
end
defp test_protocol_headers do
case HTTPoison.get("#{@base_url}/health") do
{:ok, %HTTPoison.Response{headers: headers}} ->
mcp_version = get_header_value(headers, "mcp-protocol-version")
server_header = get_header_value(headers, "server")
IO.puts("✅ Protocol headers check:")
IO.puts(" MCP-Protocol-Version: #{mcp_version || "❌ Missing"}")
IO.puts(" Server: #{server_header || "❌ Missing"}")
{:error, reason} ->
IO.puts("❌ Failed to test headers: #{inspect(reason)}")
end
end
defp post_mcp_request(endpoint, payload) do
headers = [{"Content-Type", "application/json"}]
case HTTPoison.post("#{@base_url}#{endpoint}", Jason.encode!(payload), headers) do
{:ok, %HTTPoison.Response{status_code: 200, body: body}} ->
Jason.decode(body)
{:ok, %HTTPoison.Response{status_code: status_code, body: body}} ->
{:error, "HTTP #{status_code}: #{body}"}
{:error, reason} ->
{:error, inspect(reason)}
end
end
defp get_header_value(headers, header_name) do
headers
|> Enum.find(fn {name, _value} ->
String.downcase(name) == String.downcase(header_name)
end)
|> case do
{_name, value} -> value
nil -> nil
end
end
end
# Run the test
SessionManagementTest.run_test()
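
The registration response above carries a `session_token` plus an `expires_at` value. A client reusing that token across requests might guard against expiry along these lines — a sketch only, assuming `expires_at` is an ISO 8601 timestamp (the field's real format in the coordinator, and the `SessionGuard` module name, are assumptions):

```elixir
# Hypothetical sketch: decide whether a session token from agents/register
# is still usable before sending it in the Mcp-Session-Id header.
defmodule SessionGuard do
  # No token at all is never usable
  def usable?(nil, _expires_at), do: false

  def usable?(_token, expires_at) when is_binary(expires_at) do
    case DateTime.from_iso8601(expires_at) do
      # Usable only while the current time is strictly before expiry
      {:ok, expiry, _offset} -> DateTime.compare(DateTime.utc_now(), expiry) == :lt
      # Unparseable timestamps are treated as expired
      _ -> false
    end
  end
end

IO.inspect(SessionGuard.usable?("abc123", "2099-01-01T00:00:00Z"))  # true
IO.inspect(SessionGuard.usable?("abc123", "2000-01-01T00:00:00Z"))  # false
```

Treating a missing or unparseable expiry as expired fails closed: the client re-registers rather than sending a request the server will reject.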