# Compare commits

9 commits: `8df694b71c` … `main`

| Author | SHA1 | Date |
|---|---|---|
| | 101e1e5c81 | |
| | ee30aca4d7 | |
| | d22675fd16 | |
| | 87b7181d4a | |
| | 0d9166cc56 | |
| | 74a8574778 | |
| | 5d3e04c5f8 | |
| | 0327665fe2 | |
| | b1f55799ec | |
### Deleted instructions file (filename lost in extraction, −3 lines)

```diff
@@ -1,3 +0,0 @@
----
-applyTo: '**/*.md'
----
```
### .github/instructions/example.instructions.md (new file, +9 lines, vendored)

```diff
@@ -0,0 +1,9 @@
+---
+applyTo: '**'
+---
+- You must always use the agent-coordinator MCP server; this allows you to coordinate your work with other agents that may be actively working on the same codebase as you.
+- All MCP tool calls that would alter any state should go through the agent-coordinator, even VS Code internal tool calls. Searches do not need to go through the agent-coordinator. The core reason for routing calls through the agent-coordinator is the heartbeat.
+- Always append at least 2 random words to the end of your agent name when registering or sending metadata, e.g. `GitHub Copilot Silver Banana`, `GitHub Copilot Yellow Koala`, `GitHub Copilot Olive Llama`. This ensures that no two agents have overlapping names and get tracked as the same agent.
+- Always check the task board when given new user input that will require making changes to files.
+- Always populate your own task board with your task set so other agents are informed of your planned tasks; in turn, you will be able to see other agents' agendas.
+- Once your current query is coming to an end and you will pass the turn to wait for user input, unregister yourself as an agent; on the follow-up, re-register and follow the same registration flow.
```
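The naming rule above (appending at least two random words) can be sketched as follows. The word pools and the `make_agent_name` helper are illustrative assumptions, not part of the coordinator's API; the rule only requires that the suffix words be random.

```python
import random

# Illustrative word pools; the instructions only require "at least 2 random words".
ADJECTIVES = ["Silver", "Yellow", "Olive", "Purple", "Crimson"]
NOUNS = ["Banana", "Koala", "Llama", "Zebra", "Elephant"]

def make_agent_name(base: str) -> str:
    """Append two random words so concurrently registered agents get distinct names."""
    return f"{base} {random.choice(ADJECTIVES)} {random.choice(NOUNS)}"
```

For example, `make_agent_name("GitHub Copilot")` might produce `GitHub Copilot Olive Llama`; two agents registering at the same moment are then very unlikely to collide.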
### Deleted instructions file (filename lost in extraction, −50 lines)

Removed content (`@@ -1,50 +0,0 @@`):

````markdown
---
applyTo: '**'
---

# No Duplicate Files Policy

## Critical Rule: NO DUPLICATE FILES

**NEVER** create files with adjectives or verbs that duplicate existing functionality:

- ❌ `enhanced_mcp_server.ex` when `mcp_server.ex` exists
- ❌ `unified_mcp_server.ex` when `mcp_server.ex` exists
- ❌ `mcp_server_manager.ex` when `mcp_server.ex` exists
- ❌ `new_config.ex` when `config.ex` exists
- ❌ `improved_task_registry.ex` when `task_registry.ex` exists

## What To Do Instead

1. **BEFORE** making changes that might create a new file:

   ```bash
   git add . && git commit -m "Save current state before refactoring"
   ```

2. **MODIFY** the existing file directly instead of creating a "new" version

3. **IF** you need to completely rewrite a file:
   - Make the changes directly to the original file
   - Don't create `*_new.*` or `enhanced_*.*` versions

## Why This Rule Exists

When you create duplicate files:

- Future sessions can't tell which file is "real"
- The codebase becomes inconsistent and confusing
- Multiple implementations cause bugs and maintenance nightmares
- Even YOU get confused about which file to edit next time

## The Human Is Right

The human specifically said: "Do not re-create the same file with some adjective/verb attached while leaving the original, instead, update the code and make it better, changes are good."

**Listen to them.** They prefer file replacement over duplicates.

## Implementation

- Always check if a file with similar functionality exists before creating a new one
- Use `git add . && git commit` before potentially destructive changes
- Replace, don't duplicate
- Keep the codebase clean and consistent

**This rule is more important than any specific feature request.**
````
### .github/workflows/build.yml (new file, +41 lines, vendored)

```yaml
name: build-container

on:
  push:
    branches:
      - main

run-name: build-image-${{ github.run_id }}

permissions:
  contents: read
  packages: write

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v5

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GHCR
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: |
            ghcr.io/rooba/agentcoordinator:latest
            ghcr.io/rooba/agentcoordinator:${{ github.sha }}
          file: ./Dockerfile
          github-token: ${{ secrets.GITHUB_TOKEN }}
```
### .gitignore (6 changed lines, vendored)

```diff
@@ -23,7 +23,8 @@ agent_coordinator-*.tar
 /tmp/
 
 # IDE and Editor files
-.vscode/
+/.vscode/
+!/.vscode/mcp.json
 .idea/
 *.swp
 *.swo
@@ -98,3 +99,6 @@ coverage/
 /erl_crash.dump
 /_build
 /test_env
+/docs
+
+!/.vscode/mcp.json
```
### .vscode/mcp.json (new file, +16 lines, vendored)

```json
{
  "servers": {
    "coordinator": {
      "command": "/home/user/agent_coordinator/scripts/mcp_launcher.sh",
      "args": [],
      "env": {
        "MIX_ENV": "dev",
        "NATS_HOST": "127.0.0.1",
        "NATS_PORT": "4222",
        "MCP_CONFIG_FILE": "/home/user/agent_coordinator/mcp_servers.json"
      },
      "type": "stdio"
    }
  },
  "inputs": []
}
```
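Before pointing VS Code at a config like the one above, a small script can sanity-check that every `stdio` server entry names a launch command. This is a rough sketch against the shape of the file shown here, not a formal schema for VS Code's MCP configuration format.

```python
import json

def check_mcp_config(text: str) -> list:
    """Return a list of problems found in an mcp.json-style document."""
    cfg = json.loads(text)
    problems = []
    for name, server in cfg.get("servers", {}).items():
        # A stdio server is launched as a subprocess, so it must name a command.
        if server.get("type") == "stdio" and not server.get("command"):
            problems.append(f"{name}: stdio server is missing 'command'")
    return problems
```

Running it on the config above should return an empty list, since the `coordinator` entry supplies both `type` and `command`.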
### CHANGELOG.md (deleted, −56 lines)

Removed content (`@@ -1,56 +0,0 @@`):

```markdown
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

### Added

- Initial repository structure cleanup
- Organized scripts into dedicated directories
- Enhanced documentation
- GitHub Actions CI/CD workflow
- Development and testing dependencies

### Changed

- Moved demo files to `examples/` directory
- Moved utility scripts to `scripts/` directory
- Updated project metadata in mix.exs
- Enhanced .gitignore for better coverage

## [0.1.0] - 2025-08-22

### Features

- Initial release of AgentCoordinator
- Distributed task coordination system for AI agents
- NATS-based messaging and persistence
- MCP (Model Context Protocol) server integration
- Task registry with agent-specific inboxes
- File-level conflict resolution
- Real-time agent communication
- Event sourcing with configurable retention
- Fault-tolerant supervision trees
- Command-line interface for task management
- VS Code integration setup scripts
- Comprehensive examples and documentation

### Core Features

- Agent registration and capability management
- Task creation, assignment, and completion
- Task board visualization
- Heartbeat monitoring for agent health
- Persistent task state with NATS JetStream
- MCP tools for external agent integration

### Development Tools

- Setup scripts for NATS and VS Code configuration
- Example MCP client implementations
- Test scripts for various scenarios
- Demo workflows for testing functionality
```
### CONTRIBUTING.md (deleted, −195 lines)

Removed content (`@@ -1,195 +0,0 @@`):

````markdown
# Contributing to AgentCoordinator

Thank you for your interest in contributing to AgentCoordinator! This document provides guidelines for contributing to the project.

## 🤝 Code of Conduct

By participating in this project, you agree to abide by our Code of Conduct. Please report unacceptable behavior to the project maintainers.

## 🚀 How to Contribute

### Reporting Bugs

1. **Check existing issues** first to see if the bug has already been reported
2. **Create a new issue** with a clear title and description
3. **Include reproduction steps** with specific details
4. **Provide system information** (Elixir version, OS, etc.)
5. **Add relevant logs** or error messages

### Suggesting Features

1. **Check existing feature requests** to avoid duplicates
2. **Create a new issue** with the `enhancement` label
3. **Describe the feature** and its use case clearly
4. **Explain why** this feature would be beneficial
5. **Provide examples** of how it would be used

### Development Setup

1. **Fork the repository** on GitHub
2. **Clone your fork** locally:

   ```bash
   git clone https://github.com/your-username/agent_coordinator.git
   cd agent_coordinator
   ```

3. **Install dependencies**:

   ```bash
   mix deps.get
   ```

4. **Start NATS server**:

   ```bash
   nats-server -js -p 4222 -m 8222
   ```

5. **Run tests** to ensure everything works:

   ```bash
   mix test
   ```

### Making Changes

1. **Create a feature branch**:

   ```bash
   git checkout -b feature/your-feature-name
   ```

2. **Make your changes** following our coding standards
3. **Add tests** for new functionality
4. **Run the test suite**:

   ```bash
   mix test
   ```

5. **Run code quality checks**:

   ```bash
   mix format
   mix credo
   mix dialyzer
   ```

6. **Commit your changes** with a descriptive message:

   ```bash
   git commit -m "Add feature: your feature description"
   ```

7. **Push to your fork**:

   ```bash
   git push origin feature/your-feature-name
   ```

8. **Create a Pull Request** on GitHub

## 📝 Coding Standards

### Elixir Style Guide

- Follow the [Elixir Style Guide](https://github.com/christopheradams/elixir_style_guide)
- Use `mix format` to format your code
- Write clear, descriptive function and variable names
- Add `@doc` and `@spec` for public functions
- Follow the existing code patterns in the project

### Code Organization

- Keep modules focused and cohesive
- Use appropriate GenServer patterns for stateful processes
- Follow OTP principles and supervision tree design
- Organize code into logical namespaces

### Testing

- Write comprehensive tests for all new functionality
- Use descriptive test names that explain what is being tested
- Follow the existing test patterns and structure
- Ensure tests are fast and reliable
- Aim for good test coverage (check with `mix test --cover`)

### Documentation

- Update documentation for any API changes
- Add examples for new features
- Keep the README.md up to date
- Use clear, concise language
- Include code examples where helpful

## 🔧 Pull Request Guidelines

### Before Submitting

- [ ] Tests pass locally (`mix test`)
- [ ] Code is properly formatted (`mix format`)
- [ ] No linting errors (`mix credo`)
- [ ] Type checks pass (`mix dialyzer`)
- [ ] Documentation is updated
- [ ] CHANGELOG.md is updated (if applicable)

### Pull Request Description

Please include:

1. **Clear title** describing the change
2. **Description** of what the PR does
3. **Issue reference** if applicable (fixes #123)
4. **Testing instructions** for reviewers
5. **Breaking changes** if any
6. **Screenshots** if UI changes are involved

### Review Process

1. At least one maintainer will review your PR
2. Address any feedback or requested changes
3. Once approved, a maintainer will merge your PR
4. Your contribution will be credited in the release notes

## 🧪 Testing

### Running Tests

```bash
# Run all tests
mix test

# Run tests with coverage
mix test --cover

# Run specific test file
mix test test/agent_coordinator/mcp_server_test.exs

# Run tests in watch mode
mix test.watch
```

### Writing Tests

- Place test files in the `test/` directory
- Mirror the structure of the `lib/` directory
- Use descriptive `describe` blocks to group related tests
- Use `setup` blocks for common test setup
- Mock external dependencies appropriately

## 🚀 Release Process

1. Update version in `mix.exs`
2. Update `CHANGELOG.md` with new version details
3. Create and push a version tag
4. Create a GitHub release
5. Publish to Hex (maintainers only)

## 📞 Getting Help

- **GitHub Issues**: For bugs and feature requests
- **GitHub Discussions**: For questions and general discussion
- **Documentation**: Check the [online docs](https://hexdocs.pm/agent_coordinator)

## 🏷️ Issue Labels

- `bug`: Something isn't working
- `enhancement`: New feature or request
- `documentation`: Improvements or additions to documentation
- `good first issue`: Good for newcomers
- `help wanted`: Extra attention is needed
- `question`: Further information is requested

## 🎉 Recognition

Contributors will be:

- Listed in the project's contributors section
- Mentioned in release notes for significant contributions
- Given credit in any related blog posts or presentations

Thank you for contributing to AgentCoordinator! 🚀
````
### Dockerfile (92 changed lines)

The multi-stage Alpine build (an `elixir:1.16-otp-26-alpine` builder plus an `alpine:3.18` runtime stage) was collapsed into a single Debian-based `elixir:1.18` image. The extraction stripped the diff markers; the reconstruction below assigns old/new lines from context.

```diff
@@ -2,18 +2,16 @@
 # Creates a production-ready container for the MCP server without requiring local Elixir/OTP installation
 
 # Build stage - Use official Elixir image with OTP
-FROM elixir:1.16-otp-26-alpine AS builder
+FROM elixir:1.18 AS builder
 
-# Install build dependencies
-RUN apk add --no-cache \
-    build-base
-
-# Install Node.js and npm for MCP external servers (bunx dependency)
-RUN apk add --no-cache nodejs npm
-RUN npm install -g bun
+# Set environment variables
+RUN apt-get update && apt-get install -y \
+    git \
+    curl \
+    bash \
+    unzip \
+    zlib1g
 
 # Set build environment
 ENV MIX_ENV=prod
@@ -22,79 +20,29 @@ ENV MIX_ENV=prod
 WORKDIR /app
 
 # Copy mix files
-COPY mix.exs mix.lock ./
+COPY lib lib
+COPY mcp_servers.json \
+    mix.exs \
+    mix.lock \
+    docker-entrypoint.sh ./
+COPY scripts ./scripts/
 
 # Install mix dependencies
 RUN mix local.hex --force && \
-    mix local.rebar --force && \
-    mix deps.get --only $MIX_ENV && \
-    mix deps.compile
-
-# Copy source code
-COPY lib lib
-COPY config config
+    mix local.rebar --force
 
 # Compile the release
-RUN mix release
+RUN mix deps.get
+RUN mix deps.compile
+RUN mix compile
 
-# Runtime stage - Use smaller Alpine image
-FROM alpine:3.18 AS runtime
-
-# Install runtime dependencies
-RUN apk add --no-cache \
-    bash \
-    openssl \
-    ncurses-libs \
-    libstdc++ \
-    nodejs \
-    npm
-
-# Install Node.js packages for external MCP servers
-RUN npm install -g bun
-
-# Create non-root user for security
-RUN addgroup -g 1000 appuser && \
-    adduser -u 1000 -G appuser -s /bin/bash -D appuser
-
-# Create app directory and set permissions
-WORKDIR /app
-RUN chown -R appuser:appuser /app
-
-# Copy the release from builder stage
-COPY --from=builder --chown=appuser:appuser /app/_build/prod/rel/agent_coordinator ./
-
-# Copy configuration files
-COPY --chown=appuser:appuser mcp_servers.json ./
-COPY --chown=appuser:appuser scripts/mcp_launcher.sh ./scripts/
-
-# Make scripts executable
-RUN chmod +x ./scripts/mcp_launcher.sh
-
-# Copy Docker entrypoint script
-COPY --chown=appuser:appuser docker-entrypoint.sh ./
-RUN chmod +x ./docker-entrypoint.sh
-
-# Switch to non-root user
-USER appuser
+# Prepare release
+RUN mix release
+RUN chmod +x ./docker-entrypoint.sh ./scripts/mcp_launcher.sh
+RUN curl -fsSL https://bun.sh/install | bash
+RUN ln -s /root/.bun/bin/* /usr/local/bin/
 
 # Set environment variables
 ENV MIX_ENV=prod
 ENV NATS_HOST=localhost
 ENV NATS_PORT=4222
 ENV SHELL=/bin/bash
 
 # Expose the default port (if needed for HTTP endpoints)
 EXPOSE 4000
 
 # Health check
 HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
     CMD /app/bin/agent_coordinator ping || exit 1
 
 # Set the entrypoint
 ENTRYPOINT ["/app/docker-entrypoint.sh"]
 
 # Default command
-CMD ["/app/scripts/mcp_launcher.sh"]
+CMD ["/app/scripts/mcp_launcher.sh"]
```
### README.md (788 changed lines)
The extraction stripped the diff markers; the reconstruction below assigns the verbose, emoji-heavy sections to the removed side and the plainer rewrite to the added side, based on the hunk counts (`-1,139 +1,40`).

````diff
@@ -1,139 +1,40 @@
 # Agent Coordinator
 
-A **Model Context Protocol (MCP) server** that enables multiple AI agents to coordinate their work seamlessly across codebases without conflicts. Built with Elixir for reliability and fault tolerance.
+Agent Coordinator is an MCP proxy server that enables multiple AI agents to collaborate seamlessly without conflicts. It acts as a single MCP interface that proxies ALL tool calls through itself, ensuring every agent maintains full project awareness while the coordinator tracks real-time agent presence.
 
-## 🎯 What is Agent Coordinator?
-
-Agent Coordinator is a **MCP proxy server** that enables multiple AI agents to collaborate seamlessly without conflicts. As shown in the architecture diagram above, it acts as a **single MCP interface** that proxies ALL tool calls through itself, ensuring every agent maintains full project awareness while the coordinator tracks real-time agent presence.
+## What is Agent Coordinator?
 
-**The coordinator operates as a transparent proxy layer:**
-
-- **Single Interface**: All agents connect to one MCP server (the coordinator)
-- **Proxy Architecture**: Every tool call flows through the coordinator to external MCP servers
-- **Presence Tracking**: Each proxied tool call updates agent heartbeat and task status
-- **Project Awareness**: All agents see the same unified view of project state through the proxy
-
-**This proxy design orchestrates four core components:**
-
-- **Task Registry**: Intelligent task queuing, agent matching, and automatic progress tracking
-- **Agent Manager**: Agent registration, heartbeat monitoring, and capability-based assignment
-- **Codebase Registry**: Cross-repository coordination, dependency management, and workspace organization
-- **Unified Tool Registry**: Seamlessly proxies external MCP tools while adding coordination capabilities
-
-Instead of agents conflicting over files or duplicating work, they connect through a **single MCP proxy interface** that routes ALL tool calls through the coordinator. This ensures every tool usage updates agent presence, tracks coordinated tasks, and maintains real-time project awareness across all agents via shared task boards and agent inboxes.
-
-**Key Features:**
-
-- **🔄 MCP Proxy Architecture**: Single server that proxies ALL external MCP servers for unified agent access
-- **👁️ Real-Time Activity Tracking**: Live visibility into agent activities: "Reading file.ex", "Editing main.py", "Sequential thinking"
-- **📡 Real-Time Presence Tracking**: Every tool call updates agent status and project awareness
-- **📁 File-Level Coordination**: Track exactly which files each agent is working on to prevent conflicts
-- **📜 Activity History**: Rolling log of recent agent actions with timestamps and file details
-- **🤖 Multi-Agent Coordination**: Register multiple AI agents (GitHub Copilot, Claude, etc.) with different capabilities
-- **🎯 Transparent Tool Routing**: Automatically routes tool calls to appropriate external servers while tracking usage
-- **📝 Automatic Task Creation**: Every tool usage becomes a tracked task with agent coordination context
-- **⚡ Full Project Awareness**: All agents see unified project state through the proxy layer
-- **📡 External Server Management**: Automatically starts, monitors, and manages MCP servers defined in `mcp_servers.json`
-- **🛠️ Universal Tool Registry**: Proxies tools from all external servers while adding native coordination tools
-- **🔌 Dynamic Tool Discovery**: Automatically discovers new tools when external servers start/restart
-- **🎮 Cross-Codebase Support**: Coordinate work across multiple repositories and projects
-- **🔌 MCP Standard Compliance**: Works with any MCP-compatible AI agent or tool
-
-## 🚀 How It Works
-
-
-
-**The Agent Coordinator acts as a transparent MCP proxy server** that routes ALL tool calls through itself to maintain agent presence and provide full project awareness. Every external MCP server is proxied through the coordinator, ensuring unified agent coordination.
-
-### 🔄 Proxy Architecture Flow
-
-1. **Agent Registration**: Multiple AI agents (Purple Zebra, Yellow Elephant, etc.) register with their capabilities
-2. **External Server Discovery**: Coordinator automatically starts and discovers tools from external MCP servers
-3. **Unified Proxy Interface**: All tools (native + external) are available through a single MCP interface
-4. **Transparent Tool Routing**: ALL tool calls proxy through coordinator → external servers → coordinator → agents
-5. **Presence Tracking**: Every proxied tool call updates agent heartbeat and task status
-6. **Project Awareness**: All agents maintain unified project state through the proxy layer
-
-## 👁️ Real-Time Activity Tracking - FANTASTIC Feature! 🎉
-
-**See exactly what every agent is doing in real-time!** The coordinator intelligently tracks and displays agent activities as they happen:
-
-### 🎯 Live Activity Examples
-
-```json
-{
-  "agent_id": "github-copilot-purple-elephant",
-  "name": "GitHub Copilot Purple Elephant",
-  "current_activity": "Reading mix.exs",
-  "current_files": ["/home/ra/agent_coordinator/mix.exs"],
-  "activity_history": [
-    {
-      "activity": "Reading mix.exs",
-      "files": ["/home/ra/agent_coordinator/mix.exs"],
-      "timestamp": "2025-09-06T16:41:09.193087Z"
-    },
-    {
-      "activity": "Sequential thinking: Analyzing the current codebase structure...",
-      "files": [],
-      "timestamp": "2025-09-06T16:41:05.123456Z"
-    },
-    {
-      "activity": "Editing agent.ex",
-      "files": ["/home/ra/agent_coordinator/lib/agent_coordinator/agent.ex"],
-      "timestamp": "2025-09-06T16:40:58.987654Z"
-    }
-  ]
-}
-```
-
-### 🚀 Activity Types Tracked
-
-- **📂 File Operations**: "Reading config.ex", "Editing main.py", "Writing README.md", "Creating new_feature.js"
-- **🧠 Thinking Activities**: "Sequential thinking: Analyzing the problem...", "Having a sequential thought..."
-- **🔍 Search Operations**: "Searching for 'function'", "Semantic search for 'authentication'"
-- **⚡ Terminal Commands**: "Running: mix test...", "Checking terminal output"
-- **🛠️ VS Code Actions**: "VS Code: set editor content", "Viewing active editor in VS Code"
-- **🧪 Testing**: "Running tests in user_test.exs", "Running all tests"
-- **📊 Task Management**: "Creating task: Fix bug", "Getting next task", "Completing current task"
-- **🌐 Web Operations**: "Fetching 3 webpages", "Getting library docs for React"
-
-### 🎯 Benefits
-
-- **🚫 Prevent File Conflicts**: See which files are being edited by which agents
-- **👥 Coordinate Team Work**: Know when agents are working on related tasks
-- **🐛 Debug Agent Behavior**: Track what agents did before encountering issues
-- **📈 Monitor Progress**: Watch real-time progress across multiple agents
-- **🔄 Optimize Workflows**: Identify bottlenecks and coordination opportunities
-
-**Every tool call automatically updates the agent's activity - no configuration needed!** 🫡😸
-
-### 🏗️ Architecture Components
-
-**Core Coordinator Components:**
-
-- **Task Registry**: Intelligent task queuing, agent matching, and progress tracking
-- **Agent Manager**: Registration, heartbeat monitoring, and capability-based assignment
-- **Codebase Registry**: Cross-repository coordination and workspace management
-- **Unified Tool Registry**: Combines native coordination tools with external MCP tools
-
-**External Integration:**
-
-- **MCP Servers**: Filesystem, Memory, Context7, Sequential Thinking, and more
-- **VS Code Integration**: Direct editor commands and workspace management
-- **Real-Time Dashboard**: Live task board showing agent status and progress
+## Overview
+<!--  Let's not show this it's confusing -->
+
+- Task Registry: Intelligent task queuing, agent matching, and progress tracking
+- Agent Manager: Registration, heartbeat monitoring, and capability-based assignment
+- Codebase Registry: Cross-repository coordination and workspace management
+- Unified Tool Registry: Combines native coordination tools with external MCP tools
+- Every tool call automatically updates the agent's activity for other agents to see
+- VS Code Integration: Direct editor commands and workspace management
+
+**Example Proxy Tool Call Flow:**
+
+```text
+Agent calls "read_file" → Coordinator proxies to filesystem server →
+Updates agent presence + task tracking → Returns file content to agent
+
+Result: All other agents now aware of the file access via task board
+```
````
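On the wire, a proxied call like the `read_file` flow above is an ordinary MCP `tools/call` JSON-RPC request that the coordinator can forward unchanged, updating agent presence as a side effect. The sketch below follows the MCP `tools/call` message shape; the file path argument is illustrative.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request the coordinator could forward to an external server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# The coordinator would forward this to the filesystem server after
# recording the agent's heartbeat and current activity.
request = make_tool_call(1, "read_file", {"path": "/workspace/mix.exs"})
```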
````diff
 
 ## 🔧 MCP Server Management & Unified Tool Registry
 
 Agent Coordinator acts as a **unified MCP proxy server** that manages multiple external MCP servers while providing its own coordination capabilities. This creates a single, powerful interface for AI agents to access hundreds of tools seamlessly.
 
-### 📡 External Server Management
+### External Server Management
 
 The coordinator automatically manages external MCP servers based on configuration in `mcp_servers.json`:
 
@@ -143,7 +44,7 @@ The coordinator automatically manages external MCP servers based on configuratio
     "mcp_filesystem": {
       "type": "stdio",
       "command": "bunx",
-      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/ra"],
+      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
       "auto_restart": true,
       "description": "Filesystem operations server"
     },
@@ -153,12 +54,6 @@ The coordinator automatically manages external MCP servers based on configuratio
       "args": ["-y", "@modelcontextprotocol/server-memory"],
       "auto_restart": true,
       "description": "Memory and knowledge graph server"
-    },
-    "mcp_figma": {
-      "type": "http",
-      "url": "http://127.0.0.1:3845/mcp",
-      "auto_restart": true,
-      "description": "Figma design integration server"
     }
   },
   "config": {
@@ -170,514 +65,211 @@ The coordinator automatically manages external MCP servers based on configuratio
 }
 ```
 
-**Server Lifecycle Management:**
-
-1. **🚀 Startup**: Reads config and spawns each external server process
-2. **🔍 Discovery**: Sends MCP `initialize` and `tools/list` requests to discover available tools
-3. **📋 Registration**: Adds discovered tools to the unified tool registry
-4. **💓 Monitoring**: Continuously monitors server health and heartbeat
-5. **🔄 Auto-Restart**: Automatically restarts failed servers (if configured)
-6. **🛡️ Cleanup**: Properly terminates processes and cleans up resources on shutdown
-
-### 🛠️ Unified Tool Registry
-
-The coordinator combines tools from multiple sources into a single, coherent interface:
-
-**Native Coordination Tools:**
-
-- `register_agent` - Register agents with capabilities
-- `create_task` - Create coordination tasks
-- `get_next_task` - Get assigned tasks
-- `complete_task` - Mark tasks complete
-- `get_task_board` - View all agent status
-- `heartbeat` - Maintain agent liveness
-
-**External Server Tools (Auto-Discovered):**
-
-- **Filesystem**: `read_file`, `write_file`, `list_directory`, `search_files`
-- **Memory**: `search_nodes`, `store_memory`, `recall_information`
-- **Context7**: `get-library-docs`, `search-docs`, `get-library-info`
-- **Figma**: `get_code`, `get_designs`, `fetch_assets`
-- **Sequential Thinking**: `sequentialthinking`, `analyze_problem`
-- **VS Code**: `run_command`, `install_extension`, `open_file`, `create_task`
-
-**Dynamic Discovery Process:**
-
-1. **🚀 Startup**: Agent Coordinator starts external MCP server process
-2. **🤝 Initialize**: Sends MCP `initialize` request → Server responds with capabilities
-3. **📋 Discovery**: Sends `tools/list` request → Server returns available tools
-4. **✅ Registration**: Adds discovered tools to unified tool registry
-
-This process repeats automatically when servers restart or new servers are added.
````
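The `initialize` → `tools/list` handshake described in the discovery process above amounts to two JSON-RPC messages sent to each external server on startup. The message shapes follow the MCP specification; the `clientInfo` name and version are illustrative assumptions.

```python
import json

def discovery_messages(protocol_version: str = "2024-11-05") -> list:
    """The two requests a coordinator would send to an external MCP server to discover its tools."""
    initialize = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": protocol_version,
            "capabilities": {},
            # Illustrative client identity, not the project's actual values.
            "clientInfo": {"name": "agent-coordinator", "version": "0.1.0"},
        },
    }
    list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list", "params": {}}
    return [json.dumps(initialize), json.dumps(list_tools)]
```

The server's `tools/list` response would then feed the unified tool registry, and the same pair of messages is re-sent whenever a server restarts.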

### 🎯 Intelligent Tool Routing

When an AI agent calls a tool, the coordinator routes the request intelligently:

**Routing Logic:**

1. **Native Tools**: Handled directly by Agent Coordinator modules
2. **External Tools**: Routed to the appropriate external MCP server
3. **VS Code Tools**: Routed to the integrated VS Code Tool Provider
4. **Unknown Tools**: Return a helpful error listing available alternatives

**Automatic Task Tracking:**

- Every tool call automatically creates or updates agent tasks
- Maintains context of what each agent is working on
- Provides visibility into cross-agent coordination
- Enables intelligent task distribution and conflict prevention

**Example Tool Call Flow:**

```text
Agent calls "read_file" → Coordinator routes to filesystem server →
Updates agent task → Sends heartbeat → Returns file content
```
## Setup

Choose one of these installation methods:

<details>
<summary>Option 1: Docker (Recommended - No Elixir Installation Required)</summary>

### Prerequisites

- **Docker**: 20.10+ and Docker Compose
- **Node.js**: 18+ (for external MCP servers via bun)

### 1. Start NATS Server

First, create a Docker network and start a NATS server that the Agent Coordinator can connect to:

```bash
# Create the network first if it doesn't exist
docker network create agent-coordinator-net

# Start NATS server with persistent storage
docker run -d \
  --name nats-server \
  --network agent-coordinator-net \
  -p 4222:4222 \
  -p 8222:8222 \
  -v nats_data:/data \
  nats:2.10-alpine \
  --jetstream \
  --store_dir=/data \
  --max_mem_store=1Gb \
  --max_file_store=10Gb
```

### 2. Configure Your AI Tools

**For STDIO Mode (Recommended - Direct MCP Integration):**

Add this configuration to your workspace's `./.vscode/mcp.json`:

```json
{
  "servers": {
    "agent-coordinator": {
      "command": "docker",
      "args": [
        "run",
        "--network=agent-coordinator-net",
        "-v=./mcp_servers.json:/app/mcp_servers.json:ro",
        "-v=/path/to/your/workspace:/workspace:rw",
        "-e=NATS_HOST=nats-server",
        "-e=NATS_PORT=4222",
        "-i",
        "--rm",
        "ghcr.io/rooba/agentcoordinator:latest"
      ],
      "type": "stdio"
    }
  }
}
```

**Important Notes for File System Access:**

If you're using MCP filesystem servers, mount the directories they need access to:

```json
{
  "args": [
    "run",
    "--network=agent-coordinator-net",
    "-v=./mcp_servers.json:/app/mcp_servers.json:ro",
    "-v=/home/user/projects:/home/user/projects:rw",
    "-v=/path/to/workspace:/workspace:rw",
    "-e=NATS_HOST=nats-server",
    "-e=NATS_PORT=4222",
    "-i",
    "--rm",
    "ghcr.io/rooba/agentcoordinator:latest"
  ]
}
```

**For HTTP/WebSocket Mode (Alternative - Web API Access):**

If you prefer to run as a web service instead of stdio:

```bash
# Run Agent Coordinator in HTTP mode
docker run -d \
  --name agent-coordinator \
  --network agent-coordinator-net \
  -p 8080:4000 \
  -v ./mcp_servers.json:/app/mcp_servers.json:ro \
  -v /path/to/workspace:/workspace:rw \
  -e NATS_HOST=nats-server \
  -e NATS_PORT=4222 \
  -e MCP_INTERFACE_MODE=http \
  -e MCP_HTTP_PORT=4000 \
  ghcr.io/rooba/agentcoordinator:latest
```

Then access the HTTP API at `http://localhost:8080/mcp`, or configure your MCP client to use the HTTP endpoint.

### 3. Configure External MCP Servers

Create or edit `mcp_servers.json` in your project directory to configure external MCP servers:

```json
{
  "servers": {
    "mcp_filesystem": {
      "type": "stdio",
      "command": "bunx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
      "auto_restart": true
    }
  }
}
```

### Alternative: Docker Compose

To run the full stack from the repository instead:

```bash
git clone https://github.com/your-username/agent_coordinator.git
cd agent_coordinator

# Start the full stack (MCP server + NATS + monitoring)
docker-compose up -d

# Or start just the MCP server
docker-compose up agent-coordinator

# Check logs
docker-compose logs -f agent-coordinator
```

Edit `mcp_servers.json` to reconfigure external MCP servers, then restart:

```bash
docker-compose restart agent-coordinator
```

</details>
<details>
<summary>Option 2: Manual Installation</summary>

### Prerequisites

- **Elixir**: 1.16+ with OTP 26+
- **Mix**: Comes with the Elixir installation
- **Node.js**: 18+ (for some MCP servers)
- **uv**: If using Python MCP servers

It is suggested to install Elixir (and Erlang) via [asdf](https://asdf-vm.com/) for easy version management.

NATS can be found at [nats.io](https://github.com/nats-io/nats-server/releases/latest), or run via Docker.

### Clone the Repository

```bash
git clone https://github.com/rooba/agentcoordinator.git
cd agentcoordinator
mix deps.get
mix compile
```

### Start the MCP Server Directly

```bash
# Start the MCP server directly
export MCP_INTERFACE_MODE=stdio  # or http / websocket
# export MCP_HTTP_PORT=4000      # if using http mode

./scripts/mcp_launcher.sh

# Or in development mode
mix run --no-halt
```

### Run via VS Code or Similar Tools

Add this to your workspace's `./.vscode/mcp.json` (VS Code Copilot) or `mcp_servers.json`, depending on your tool:

```json
{
  "servers": {
    "agent-coordinator": {
      "command": "/path/to/agent_coordinator/scripts/mcp_launcher.sh",
      "args": [],
      "env": {
        "MIX_ENV": "prod",
        "NATS_HOST": "localhost",
        "NATS_PORT": "4222",
        "MCP_CONFIG_FILE": "/path/to/mcp_servers.json",
        "PWD": "${workspaceFolder}"
      }
    }
  }
}
```

Alternatively, register the server through the `github.copilot.advanced` section of your VS Code `settings.json`:

```json
{
  "github.copilot.advanced": {
    "mcp": {
      "servers": {
        "agent-coordinator": {
          "command": "/path/to/agent_coordinator/scripts/mcp_launcher.sh",
          "args": [],
          "env": {
            "MIX_ENV": "dev"
          }
        }
      }
    }
  }
}
```

### Test It Works

#### Docker Testing

```bash
# Test with Docker
docker-compose exec agent-coordinator /app/bin/agent_coordinator ping

# Run an example (if available in the container)
docker-compose exec agent-coordinator mix run examples/full_workflow_demo.exs

# View logs
docker-compose logs -f agent-coordinator
```

#### Manual Testing

```bash
# Run the demo to see it in action
mix run examples/full_workflow_demo.exs
```

## 🐳 Docker Usage Guide

### Available Docker Commands

#### Basic Operations

```bash
# Build the image
docker build -t agent-coordinator .

# Run a standalone container
docker run -d --name agent-coordinator -p 4000:4000 agent-coordinator

# Run with a custom config
docker run -d \
  -v ./mcp_servers.json:/app/mcp_servers.json:ro \
  -p 4000:4000 \
  agent-coordinator
```

#### Docker Compose Operations

```bash
# Start the full stack
docker-compose up -d

# Start only the agent coordinator
docker-compose up -d agent-coordinator

# View logs
docker-compose logs -f agent-coordinator

# Restart after config changes
docker-compose restart agent-coordinator

# Stop everything
docker-compose down

# Remove volumes (reset data)
docker-compose down -v
```

#### Development with Docker

```bash
# Start in development mode
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

# Interactive shell for debugging
docker-compose exec agent-coordinator bash

# Run tests in the container
docker-compose exec agent-coordinator mix test

# Watch logs during development
docker-compose logs -f
```

### Environment Variables

Configure the container using environment variables:

```yaml
# docker-compose.override.yml example
version: '3.8'
services:
  agent-coordinator:
    environment:
      - MIX_ENV=prod
      - NATS_HOST=nats
      - NATS_PORT=4222
      - LOG_LEVEL=info
```

### Custom Configuration

#### External MCP Servers

Mount your own `mcp_servers.json`:

```bash
docker run -d \
  -v ./my-mcp-config.json:/app/mcp_servers.json:ro \
  agent-coordinator
```

#### Persistent Data

```bash
docker run -d \
  -v agent_data:/app/data \
  -v nats_data:/data \
  agent-coordinator
```

### Monitoring & Health Checks

#### Container Health

```bash
# Check container health
docker-compose ps

# Health check details
docker inspect --format='{{json .State.Health}}' agent-coordinator

# Manual health check
docker-compose exec agent-coordinator /app/bin/agent_coordinator ping
```

#### NATS Monitoring

Access the NATS monitoring dashboard:

```bash
# Start with the monitoring profile
docker-compose --profile monitoring up -d

# Access the dashboard at http://localhost:8080
open http://localhost:8080
```

### Troubleshooting

#### Common Issues

```bash
# Check container logs
docker-compose logs agent-coordinator

# Check NATS connectivity
docker-compose exec agent-coordinator nc -z nats 4222

# Restart a stuck container
docker-compose restart agent-coordinator

# Reset everything
docker-compose down -v && docker-compose up -d
```

#### Performance Tuning

Resource limits are set in the Compose file rather than on the `docker-compose up` command line:

```yaml
# docker-compose.override.yml: allocate more resources
services:
  agent-coordinator:
    mem_limit: 1g
    cpus: 2.0
```

## 🎮 How to Use

Once your AI agents are connected via MCP, they can:

### Register as an Agent

```elixir
# An agent identifies itself with its capabilities
register_agent("GitHub Copilot", ["coding", "testing"], codebase_id: "my-project")
```

### Create Tasks

```elixir
# Tasks are created with requirements
create_task("Fix login bug", "Authentication fails on mobile",
  priority: "high",
  required_capabilities: ["coding", "debugging"]
)
```

### Coordinate Automatically

The coordinator automatically:

- **Matches** tasks to agents based on capabilities
- **Queues** tasks when no suitable agents are available
- **Tracks** agent heartbeats to ensure they're still working
- **Handles** cross-codebase tasks that span multiple repositories
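
The matching rule above can be sketched as a simple capability-subset check. This is a hypothetical illustration (`MatchSketch` is not a real module in this project), not the coordinator's actual implementation:

```elixir
# Hypothetical sketch: a task is assignable to an agent when the agent
# holds every capability the task requires.
defmodule MatchSketch do
  def assignable?(agent_capabilities, required_capabilities) do
    MapSet.subset?(MapSet.new(required_capabilities), MapSet.new(agent_capabilities))
  end
end

MatchSketch.assignable?(["coding", "testing"], ["coding"])    # true
MatchSketch.assignable?(["coding"], ["coding", "debugging"])  # false
```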

### Available MCP Tools

All MCP-compatible AI agents get these tools automatically:

| Tool | Purpose |
|------|---------|
| `register_agent` | Register an agent with its capabilities |
| `create_task` | Create a new task with requirements |
| `get_next_task` | Get the next task assigned to an agent |
| `complete_task` | Mark the current task as completed |
| `get_task_board` | View all agents and their status |
| `heartbeat` | Send an agent heartbeat to stay active |
| `register_codebase` | Register a new codebase/repository |
| `create_cross_codebase_task` | Create tasks spanning multiple repos |
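
Over the wire, each of these is an ordinary MCP `tools/call` request. A hedged sketch of invoking `create_task` (the argument names follow the usage above; the exact schema is defined by the server):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "create_task",
    "arguments": {
      "title": "Fix login bug",
      "description": "Authentication fails on mobile",
      "priority": "high",
      "required_capabilities": ["coding", "debugging"]
    }
  }
}
```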
## 🧪 Development & Testing

### Running Tests

```bash
# Run all tests
mix test

# Run with coverage
mix test --cover

# Try the examples
mix run examples/full_workflow_demo.exs
mix run examples/auto_heartbeat_demo.exs
```

### Code Quality

```bash
# Format code
mix format

# Run static analysis
mix credo

# Type checking
mix dialyzer
```

## 📁 Project Structure

```text
agent_coordinator/
├── lib/
│   ├── agent_coordinator.ex            # Main module
│   └── agent_coordinator/
│       ├── mcp_server.ex               # MCP protocol implementation
│       ├── task_registry.ex            # Task management
│       ├── agent.ex                    # Agent management
│       ├── codebase_registry.ex        # Multi-repository support
│       └── application.ex              # Application supervisor
├── examples/                           # Working examples
├── test/                               # Test suite
├── scripts/                            # Helper scripts
└── docs/                               # Technical documentation
    ├── README.md                       # Documentation index
    ├── AUTO_HEARTBEAT.md               # Unified MCP server details
    ├── VSCODE_TOOL_INTEGRATION.md      # VS Code integration
    └── LANGUAGE_IMPLEMENTATIONS.md     # Alternative language guides
```

## 🤔 Why This Design?

**The Problem**: Multiple AI agents working on the same codebase step on each other, duplicate work, or create conflicts.

**The Solution**: A coordination layer that:

- Lets agents register their capabilities
- Intelligently distributes tasks
- Tracks progress and prevents conflicts
- Scales across multiple repositories

**Why Elixir?**: Built-in concurrency, fault tolerance, and an ecosystem well suited to coordination systems.

## 🚀 Alternative Implementations

While this Elixir version works well, you might consider these languages for broader adoption:

### Go Implementation

- **Pros**: Single-binary deployment, great performance, large community
- **Cons**: More verbose concurrency patterns
- **Best for**: Teams wanting simple deployment and good performance

### Python Implementation

- **Pros**: Huge ecosystem, familiar to most developers, excellent tooling
- **Cons**: GIL limitations for true concurrency
- **Best for**: AI/ML teams already using the Python ecosystem

### Rust Implementation

- **Pros**: Maximum performance, memory safety, growing adoption
- **Cons**: Steeper learning curve, smaller ecosystem
- **Best for**: Performance-critical deployments

### Node.js Implementation

- **Pros**: JavaScript familiarity, event-driven model fits coordination
- **Cons**: Single-threaded limitations, callback complexity
- **Best for**: Web teams already using Node.js

## 🤝 Contributing

Contributions are welcome! Here's how:

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- [Model Context Protocol](https://modelcontextprotocol.io/) for the agent communication standard
- [Elixir](https://elixir-lang.org/) community for the excellent ecosystem
- AI development teams pushing the boundaries of collaborative coding

---

**Agent Coordinator** - Making AI agents work together, not against each other.

</details>

@@ -18,10 +18,9 @@ services:
    profiles:
      - dev

  # Lightweight development NATS without persistence
  nats:
    command:
    command:
      - '--jetstream'
    volumes: []
    profiles:
      - dev
      - dev

@@ -1,51 +1,17 @@
version: '3.8'

services:
  # Agent Coordinator MCP Server
  agent-coordinator:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: agent-coordinator
    environment:
      - MIX_ENV=prod
      - NATS_HOST=nats
      - NATS_PORT=4222
    volumes:
      # Mount local mcp_servers.json for easy configuration
      - ./mcp_servers.json:/app/mcp_servers.json:ro
      # Mount a directory for persistent data (optional)
      - agent_data:/app/data
    ports:
      # Expose port 4000 if the app serves HTTP endpoints
      - "4000:4000"
    depends_on:
      nats:
        condition: service_healthy
    restart: unless-stopped
    healthcheck:
      test: ["/app/bin/agent_coordinator", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  # NATS Message Broker (optional but recommended for production)
  nats:
    image: nats:2.10-alpine
    container_name: agent-coordinator-nats
    command:
      - '--jetstream'
      - '--store_dir=/data'
      - '--max_file_store=1G'
      - '--max_mem_store=256M'
      - '--http_port=8222'
    ports:
      # NATS client port
      - "4222:4222"
      # NATS HTTP monitoring port
      - "8222:8222"
      # NATS routing port for clustering
      - "6222:6222"
      - "4223:4222"
      - "8223:8222"
      - "6223:6222"
    volumes:
      - nats_data:/data
    restart: unless-stopped
@@ -55,31 +21,32 @@ services:
      timeout: 5s
      retries: 3
      start_period: 10s
    networks:
      - agent-coordinator-network

  # Optional: NATS Monitoring Dashboard
  nats-board:
    image: devforth/nats-board:latest
    container_name: agent-coordinator-nats-board
  agent-coordinator:
    image: ghcr.io/rooba/agentcoordinator:latest
    container_name: agent-coordinator
    environment:
      - NATS_HOSTS=nats:4222
      - NATS_HOST=nats
      - NATS_PORT=4222
      - MIX_ENV=prod
    volumes:
      - ./mcp_servers.json:/app/mcp_servers.json:ro
      - ./workspace:/workspace:rw
    ports:
      - "8080:8080"
      - "4000:4000"
    depends_on:
      nats:
        condition: service_healthy
    restart: unless-stopped
    profiles:
      - monitoring
    networks:
      - agent-coordinator-network

volumes:
  # Persistent storage for NATS JetStream
  nats_data:
    driver: local

  # Persistent storage for agent coordinator data
  agent_data:
    driver: local

networks:
  default:
    name: agent-coordinator-network
  agent-coordinator-network:
    driver: bridge
@@ -9,13 +9,23 @@ set -e
export MIX_ENV="${MIX_ENV:-prod}"
export NATS_HOST="${NATS_HOST:-localhost}"
export NATS_PORT="${NATS_PORT:-4222}"
export DOCKERIZED="true"
COLORIZED="${COLORIZED:-}"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
if [ ! -z "$COLORIZED" ]; then
    # Colors for output
    RED='\033[0;31m'
    GREEN='\033[0;32m'
    YELLOW='\033[1;33m'
    BLUE='\033[0;34m'
    NC='\033[0m' # No Color
else
    RED=''
    GREEN=''
    YELLOW=''
    BLUE=''
    NC=''
fi

# Logging functions
log_info() {
@@ -30,22 +40,12 @@ log_error() {
    echo -e "${RED}[ERROR]${NC} $1" >&2
}

log_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1" >&2
log_debug() {
    echo -e "${GREEN}[DEBUG]${NC} $1" >&2
}

# Cleanup function for graceful shutdown
cleanup() {
    log_info "Received shutdown signal, cleaning up..."

    # Send termination signals to child processes
    if [ ! -z "$MAIN_PID" ]; then
        log_info "Stopping main process (PID: $MAIN_PID)..."
        kill -TERM "$MAIN_PID" 2>/dev/null || true
        wait "$MAIN_PID" 2>/dev/null || true
    fi

    log_success "Cleanup completed"
    log_info "Received shutdown signal, shutting down..."
    exit 0
}

@@ -62,7 +62,7 @@ wait_for_nats() {

    while [ $count -lt $timeout ]; do
        if nc -z "$NATS_HOST" "$NATS_PORT" 2>/dev/null; then
            log_success "NATS is available"
            log_debug "NATS is available"
            return 0
        fi

@@ -88,13 +88,7 @@ validate_config() {
        exit 1
    fi

    # Validate JSON
    if ! cat /app/mcp_servers.json | bun run -e "JSON.parse(require('fs').readFileSync(0, 'utf8'))" >/dev/null 2>&1; then
        log_error "Invalid JSON in mcp_servers.json"
        exit 1
    fi

    log_success "Configuration validation passed"
    log_debug "Configuration validation passed"
}

# Pre-install external MCP server dependencies
@@ -120,7 +114,7 @@ preinstall_dependencies() {
        bun add --global --silent "$package" || log_warn "Failed to cache $package"
    done

    log_success "Dependencies pre-installed"
    log_debug "Dependencies pre-installed"
}

# Main execution
@@ -129,6 +123,7 @@ main() {
    log_info "Environment: $MIX_ENV"
    log_info "NATS: $NATS_HOST:$NATS_PORT"


    # Validate configuration
    validate_config

@@ -147,8 +142,7 @@ main() {
    if [ "$#" -eq 0 ] || [ "$1" = "/app/scripts/mcp_launcher.sh" ]; then
        # Default: start the MCP server
        log_info "Starting MCP server via launcher script..."
        exec "/app/scripts/mcp_launcher.sh" &
        MAIN_PID=$!
        exec "/app/scripts/mcp_launcher.sh"
    elif [ "$1" = "bash" ] || [ "$1" = "sh" ]; then
        # Interactive shell mode
        log_info "Starting interactive shell..."
@@ -156,21 +150,10 @@ main() {
    elif [ "$1" = "release" ]; then
        # Direct release mode
        log_info "Starting via Elixir release..."
        exec "/app/bin/agent_coordinator" "start" &
        MAIN_PID=$!
        exec "/app/bin/agent_coordinator" "start"
    else
        # Custom command
        log_info "Starting custom command: $*"
        exec "$@" &
        MAIN_PID=$!
    fi

    # Wait for the main process if it's running in background
    if [ ! -z "$MAIN_PID" ]; then
        log_success "Main process started (PID: $MAIN_PID)"
        wait "$MAIN_PID"
        exit 0
    fi
}

# Execute main function with all arguments
main "$@"

@@ -1,333 +0,0 @@
# Unified MCP Server with Auto-Heartbeat System Documentation

## Overview

The Agent Coordinator now operates as a **unified MCP server** that internally manages all external MCP servers (Context7, Figma, Filesystem, Firebase, Memory, Sequential Thinking, etc.) while providing automatic task tracking and heartbeat coverage for every tool operation. GitHub Copilot sees only a single MCP server, but gets access to all tools with automatic coordination.

## Key Features

### 1. Unified MCP Server Architecture
- **Single interface**: GitHub Copilot connects to only the Agent Coordinator
- **Internal server management**: Automatically starts and manages all external MCP servers
- **Unified tool registry**: Aggregates tools from all servers into one comprehensive list
- **Automatic task tracking**: Every tool call automatically creates/updates agent tasks

### 2. Automatic Task Tracking
- **Transparent operation**: Any tool usage automatically becomes a tracked task
- **No explicit coordination needed**: Agents don't need to call `create_task` manually
- **Real-time activity monitoring**: See what each agent is working on in real time
- **Smart task titles**: Automatically generated based on tool usage and context

### 3. Enhanced Heartbeat Coverage
- **Universal coverage**: Every tool call from any server includes heartbeat management
- **Agent session tracking**: Automatic agent registration for GitHub Copilot
- **Activity-based heartbeats**: Heartbeats sent before/after each tool operation
- **Session metadata**: Enhanced task board shows real activity and tool usage
## Architecture

```text
GitHub Copilot
    ↓
Agent Coordinator (Single Visible MCP Server)
    ↓
┌─────────────────────────────────────────────────────────┐
│                 Unified MCP Server                      │
│  • Aggregates all tools into single interface           │
│  • Automatic task tracking for every operation          │
│  • Agent coordination tools (create_task, etc.)         │
│  • Universal heartbeat coverage                         │
└─────────────────────────────────────────────────────────┘
    ↓
┌─────────────────────────────────────────────────────────┐
│                 MCP Server Manager                      │
│  • Starts & manages external servers internally         │
│  • Health monitoring & auto-restart                     │
│  • Tool aggregation & routing                           │
│  • Auto-task creation for any tool usage                │
└─────────────────────────────────────────────────────────┘
    ↓
┌──────────┬──────────┬───────────┬──────────┬─────────────┐
│ Context7 │  Figma   │Filesystem │ Firebase │  Memory +   │
│  Server  │  Server  │  Server   │  Server  │ Sequential  │
└──────────┴──────────┴───────────┴──────────┴─────────────┘
```

## Usage

### GitHub Copilot Experience

From GitHub Copilot's perspective, there's only one MCP server with all tools available:

```javascript
// All these tools are available from the single Agent Coordinator server:

// Agent coordination tools
register_agent, create_task, get_next_task, complete_task, get_task_board, heartbeat

// Context7 tools
mcp_context7_get-library-docs, mcp_context7_resolve-library-id

// Figma tools
mcp_figma_get_code, mcp_figma_get_image, mcp_figma_get_variable_defs

// Filesystem tools
mcp_filesystem_read_file, mcp_filesystem_write_file, mcp_filesystem_list_directory

// Firebase tools
mcp_firebase_firestore_get_documents, mcp_firebase_auth_get_user

// Memory tools
mcp_memory_search_nodes, mcp_memory_create_entities

// Sequential thinking tools
mcp_sequentialthi_sequentialthinking

// Plus any other configured MCP servers...
```

### Automatic Task Tracking

Every tool usage automatically creates or updates an agent's current task:

```elixir
# When GitHub Copilot calls any tool, the coordinator automatically:
# 1. Sends a pre-operation heartbeat
# 2. Creates/updates the current task based on tool usage
# 3. Routes to the appropriate external server
# 4. Sends a post-operation heartbeat
# 5. Updates the task activity log

# Example: Reading a file automatically creates a task
Tool Call: mcp_filesystem_read_file(%{"path" => "/project/src/main.rs"})
Auto-Created Task: "Reading file: main.rs"
Description: "Reading and analyzing file content from /project/src/main.rs"

# Example: Figma code generation automatically creates a task
Tool Call: mcp_figma_get_code(%{"nodeId" => "123:456"})
Auto-Created Task: "Generating Figma code: 123:456"
Description: "Generating code for Figma component 123:456"

# Example: Library research automatically creates a task
Tool Call: mcp_context7_get-library-docs(%{"context7CompatibleLibraryID" => "/vercel/next.js"})
Auto-Created Task: "Researching: /vercel/next.js"
Description: "Researching documentation for /vercel/next.js library"
```

### Task Board with Real Activity

```elixir
# Get the enhanced task board showing real agent activity
{:ok, board} = get_task_board()

# Returns:
%{
  agents: [
    %{
      agent_id: "github_copilot_session",
      name: "GitHub Copilot",
      status: :working,
      current_task: %{
        title: "Reading file: database.ex",
        description: "Reading and analyzing file content from /project/lib/database.ex",
        auto_generated: true,
        tool_name: "mcp_filesystem_read_file",
        created_at: ~U[2025-08-23 10:30:00Z]
      },
      last_heartbeat: ~U[2025-08-23 10:30:05Z],
      online: true
    }
  ],
  pending_tasks: [],
  total_agents: 1,
  active_tasks: 1,
  pending_count: 0
}
```
|
||||
|
||||
## Configuration

### MCP Server Configuration

External servers are configured in `mcp_servers.json`:

```json
{
  "servers": {
    "mcp_context7": {
      "type": "stdio",
      "command": "uvx",
      "args": ["mcp-server-context7"],
      "auto_restart": true,
      "description": "Context7 library documentation server"
    },
    "mcp_figma": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@figma/mcp-server-figma"],
      "auto_restart": true,
      "description": "Figma design integration server"
    },
    "mcp_filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/ra"],
      "auto_restart": true,
      "description": "Filesystem operations with auto-task tracking"
    }
  },
  "config": {
    "startup_timeout": 30000,
    "heartbeat_interval": 10000,
    "auto_restart_delay": 1000,
    "max_restart_attempts": 3
  }
}
```

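As a quick sanity check, a configuration in this shape can be summarized before launching the server. This is an illustrative sketch only; the `summarize_servers` helper is hypothetical and not part of the project:

```python
import json

def summarize_servers(config: dict) -> list[tuple[str, str]]:
    """Return (name, launch command) pairs for each configured MCP server."""
    servers = config.get("servers", {})
    return [
        (name, " ".join([spec["command"], *spec.get("args", [])]))
        for name, spec in servers.items()
    ]

# A trimmed-down config in the same shape as mcp_servers.json
config = json.loads("""
{
  "servers": {
    "mcp_context7": {
      "type": "stdio",
      "command": "uvx",
      "args": ["mcp-server-context7"],
      "auto_restart": true
    }
  },
  "config": {"startup_timeout": 30000}
}
""")

for name, cmd in summarize_servers(config):
    print(f"{name}: {cmd}")
```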
### VS Code Settings

Update your VS Code MCP settings to point to the unified server:

```json
{
  "mcp.servers": {
    "agent-coordinator": {
      "command": "/home/ra/agent_coordinator/scripts/mcp_launcher.sh",
      "args": []
    }
  }
}
```

## Benefits

### 1. Simplified Configuration

- **One server**: GitHub Copilot only needs to connect to Agent Coordinator
- **No manual setup**: External servers are managed automatically
- **Unified tools**: All tools appear in one comprehensive list

### 2. Automatic Coordination

- **Zero-effort tracking**: Every tool use is automatically tracked as a task
- **Real-time visibility**: See exactly what agents are working on
- **Smart task creation**: Descriptive task titles based on actual tool usage
- **Universal heartbeats**: Every operation maintains agent liveness

### 3. Enhanced Collaboration

- **Agent communication**: Coordination tools remain available for planning
- **Multi-agent workflows**: Agents can create tasks for each other
- **Activity awareness**: Agents can see what others are working on
- **File conflict prevention**: Automatic file locking across operations

### 4. Operational Excellence

- **Auto-restart**: Failed external servers are automatically restarted
- **Health monitoring**: Real-time status of all managed servers
- **Error handling**: Graceful degradation when servers are unavailable
- **Performance**: Direct routing without external proxy overhead

## Migration Guide

### From Individual MCP Servers

**Before:**

```json
// VS Code settings with multiple servers
{
  "mcp.servers": {
    "context7": {"command": "uvx", "args": ["mcp-server-context7"]},
    "figma": {"command": "npx", "args": ["-y", "@figma/mcp-server-figma"]},
    "filesystem": {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path"]},
    "agent-coordinator": {"command": "/path/to/mcp_launcher.sh"}
  }
}
```

**After:**

```json
// VS Code settings with single unified server
{
  "mcp.servers": {
    "agent-coordinator": {
      "command": "/home/ra/agent_coordinator/scripts/mcp_launcher.sh",
      "args": []
    }
  }
}
```

### Configuration Migration

1. **Remove individual MCP servers** from VS Code settings
2. **Add external servers** to the `mcp_servers.json` configuration
3. **Update the launcher script** path if needed
4. **Restart VS Code** to apply the changes

## Startup and Testing

### Starting the Unified Server

```bash
# From the project directory
./scripts/mcp_launcher.sh
```

### Testing Tool Aggregation

```bash
# Test that all tools are available
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | ./scripts/mcp_launcher.sh

# Should return tools from Agent Coordinator plus all external servers
```

### Testing Automatic Task Tracking

```bash
# Use any tool - it should automatically create a task
echo '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"mcp_filesystem_read_file","arguments":{"path":"/home/ra/test.txt"}}}' | ./scripts/mcp_launcher.sh

# Check the task board to see the auto-created task
echo '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"get_task_board","arguments":{}}}' | ./scripts/mcp_launcher.sh
```

## Troubleshooting

### External Server Issues

1. **Server won't start**
   - Check the command path in `mcp_servers.json`
   - Verify dependencies are installed (`npm install -g @modelcontextprotocol/server-*`)
   - Check the logs for startup errors

2. **Tools not appearing**
   - Verify the server started successfully
   - Check server health with the `get_server_status` tool
   - Restart specific servers if needed

3. **Auto-restart not working**
   - Check for `auto_restart: true` in the server config
   - Verify process monitoring is active
   - Check the restart attempt limits

### Task Tracking Issues

1. **Tasks not auto-creating**
   - Verify the agent session is active
   - Check that GitHub Copilot is registered as an agent
   - Ensure the heartbeat system is working

2. **Incorrect task titles**
   - Task titles are generated from the tool name and arguments
   - They can be customized in the `generate_task_title/2` function
   - File-based operations use file paths in their titles

## Future Enhancements

Planned improvements:

1. **Dynamic server discovery** - Auto-detect and add new MCP servers
2. **Load balancing** - Distribute tool calls across multiple server instances
3. **Tool versioning** - Support multiple versions of the same tool
4. **Custom task templates** - Configurable task generation based on tool patterns
5. **Inter-agent messaging** - Direct communication channels between agents
6. **Workflow orchestration** - Multi-step task coordination across agents

@@ -1,107 +0,0 @@
# Dynamic Tool Discovery Implementation Summary

## What We Accomplished

The Agent Coordinator has been successfully refactored to implement **fully dynamic tool discovery** following the MCP protocol specification, eliminating all hardcoded tool lists **and ensuring shared MCP server instances across all agents**.

## Key Changes Made

### 1. Removed Hardcoded Tool Lists

**Before**:

```elixir
coordinator_native = ~w[register_agent create_task get_next_task complete_task get_task_board heartbeat]
```

**After**:

```elixir
# Tools are discovered dynamically by checking the actual tool definitions
coordinator_tools = get_coordinator_tools()

if Enum.any?(coordinator_tools, fn tool -> tool["name"] == tool_name end) do
  {:coordinator, tool_name}
end
```

### 2. Made VS Code Tools Conditional

**Before**: Always included VS Code tools even if they were not available

**After**:

```elixir
vscode_tools =
  try do
    if Code.ensure_loaded?(AgentCoordinator.VSCodeToolProvider) do
      AgentCoordinator.VSCodeToolProvider.get_tools()
    else
      []
    end
  rescue
    _ -> []
  end
```

### 3. Added Shared MCP Server Management

**MAJOR FIX**: MCPServerManager is now part of the application supervision tree.

**Before**: Each agent/test started its own MCP servers
- Multiple server instances for the same functionality
- Resource waste and potential conflicts
- Different OS PIDs per agent

**After**: Single shared MCP server instance
- Added to the `application.ex` supervision tree
- All agents use the same MCP server processes
- Perfect resource sharing

### 4. Added Dynamic Tool Refresh

**New function**: `refresh_tools/0`
- Re-discovers tools from all running MCP servers
- Updates the tool registry in real time
- Handles both PID and Port server types properly

### 5. Enhanced Tool Routing

**Before**: Used hardcoded tool name lists for routing decisions

**After**: Checks actual tool definitions to determine routing

## Test Results

✅ All tests passing with dynamic discovery:

```
Found 44 total tools:
• Coordinator tools: 6
• External MCP tools: 26+ (context7, filesystem, memory, sequential thinking)
• VS Code tools: 12 (when available)
```

**External servers discovered**:

- Context7: 2 tools (resolve-library-id, get-library-docs)
- Filesystem: 14 tools (read_file, write_file, edit_file, etc.)
- Memory: 9 tools (search_nodes, create_entities, etc.)
- Sequential Thinking: 1 tool (sequentialthinking)

## Benefits Achieved

1. **Perfect MCP Protocol Compliance**: No hardcoded assumptions; everything is discovered via `tools/list`
2. **Shared Server Architecture**: A single MCP server instance is shared by all agents (massive resource savings)
3. **Flexibility**: New MCP servers can be added via configuration without code changes
4. **Reliability**: Tools are automatically re-discovered when servers restart
5. **Performance**: Only available tools are included in routing decisions, and server processes are shared
6. **Maintainability**: No need to manually sync tool lists with server implementations
7. **Resource Efficiency**: No duplicate server processes per agent/session
8. **Debugging**: Clear visibility into which tools are available from which servers

## Files Modified

1. **`lib/agent_coordinator/mcp_server_manager.ex`**:
   - Removed the `get_coordinator_tool_names/0` function
   - Modified `find_tool_server/2` to use dynamic discovery
   - Added conditional VS Code tool loading
   - Added `refresh_tools/0` and `rediscover_all_tools/1`
   - Fixed Port vs PID handling for server aliveness checks

2. **Tests**:
   - Added `test/dynamic_tool_discovery_test.exs`
   - All existing tests still pass
   - New tests verify dynamic discovery works correctly

## Impact

This refactoring makes the Agent Coordinator a true MCP-compliant aggregation server that follows the protocol specification exactly, rather than making assumptions about what tools servers provide. It is now much more flexible and maintainable, and more reliable in dynamic environments where servers may come and go.

The system now implements the user's original request: **"all tools will reply with what tools are available"** via the MCP protocol's `tools/list` method.

@@ -1,253 +0,0 @@
# MCP Compliance Enhancement Summary

## Overview

This document summarizes the enhanced Model Context Protocol (MCP) compliance features implemented in the Agent Coordinator system, focusing on session management, security, and real-time streaming capabilities.

## Implemented Features

### 1. 🔐 Enhanced Session Management

#### Session Token Authentication

- **Implementation**: Modified `register_agent` to return cryptographically secure session tokens
- **Token Format**: 32-byte secure random tokens, Base64 encoded
- **Expiry**: 60-minute session timeout with automatic cleanup
- **Headers**: Support for the `Mcp-Session-Id` header (MCP compliant) and `X-Session-Id` (legacy)

#### Session Validation Flow

```
Client                              Server
  |                                   |
  |-- POST /mcp/request ------------->|
  |     register_agent                |
  |                                   |
  |<-- session_token -----------------|
  |     expires_at                    |
  |                                   |
  |-- Subsequent requests ----------->|
  |     Mcp-Session-Id: token         |
  |                                   |
  |<-- Authenticated resp ------------|
```

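The token lifecycle above (secure random bytes, Base64 encoding, 60-minute expiry, cleanup on expiry) can be sketched outside Elixir as well. An illustrative Python version, not the actual `SessionManager` implementation:

```python
import base64
import os
import time

SESSION_TTL_SECONDS = 60 * 60  # 60-minute timeout, as described above

def create_session(agent_id: str, sessions: dict) -> str:
    """Issue a 32-byte secure random token, Base64 encoded, and record its expiry."""
    token = base64.b64encode(os.urandom(32)).decode()
    sessions[token] = {
        "agent_id": agent_id,
        "expires_at": time.time() + SESSION_TTL_SECONDS,
    }
    return token

def validate_session(token: str, sessions: dict):
    """Mirror the {:ok, info} / {:error, :expired} shape of the Elixir API."""
    info = sessions.get(token)
    if info is None:
        return ("error", "unknown")
    if time.time() > info["expires_at"]:
        del sessions[token]  # cleanup of expired sessions
        return ("error", "expired")
    return ("ok", info)
```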
#### Key Components

- **SessionManager GenServer**: Manages token lifecycle and validation
- **Secure token generation**: Uses `:crypto.strong_rand_bytes/1`
- **Automatic cleanup**: Periodic removal of expired sessions
- **Backward compatibility**: Supports legacy `X-Session-Id` headers

### 2. 📋 MCP Protocol Version Compliance

#### Protocol Headers

- **MCP-Protocol-Version**: `2025-06-18` (current specification)
- **Server**: `AgentCoordinator/1.0` identification
- **Applied to**: All JSON responses via the enhanced `send_json_response/3`

#### CORS Enhancement

- **Session Headers**: Added `mcp-session-id` and `mcp-protocol-version` to the allowed headers
- **Exposed Headers**: Protocol version and server headers are exposed to clients
- **Security**: Enhanced origin validation with localhost and HTTPS preference

### 3. 🔒 Security Enhancements

#### Origin Validation

```elixir
defp validate_origin(origin) do
  case URI.parse(origin) do
    %URI{host: host} when host in ["localhost", "127.0.0.1", "::1"] ->
      origin

    %URI{host: host} when is_binary(host) ->
      if String.starts_with?(origin, "https://") or
           String.contains?(host, ["localhost", "127.0.0.1", "dev", "local"]) do
        origin
      else
        Logger.warning("Potentially unsafe origin: #{origin}")
        "*"
      end

    _ ->
      "*"
  end
end
```

#### Authenticated Method Protection

Protected methods requiring valid session tokens:

- `agents/register` ✓
- `agents/unregister` ✓
- `agents/heartbeat` ✓
- `tasks/create` ✓
- `tasks/complete` ✓
- `codebase/register` ✓
- `stream/subscribe` ✓

### 4. 📡 Server-Sent Events (SSE) Support

#### Real-time Streaming Endpoint

- **Endpoint**: `GET /mcp/stream`
- **Transport**: Streamable HTTP (MCP specification)
- **Authentication**: Requires a valid session token
- **Content-Type**: `text/event-stream`

#### SSE Event Format

```
event: connected
data: {"session_id":"agent_123","protocol_version":"2025-06-18","timestamp":"2025-01-11T..."}

event: heartbeat
data: {"timestamp":"2025-01-11T...","session_id":"agent_123"}
```

#### Features

- **Connection establishment**: Sends an initial `connected` event
- **Heartbeat**: Periodic keepalive events
- **Session tracking**: Events include session context
- **Graceful disconnection**: Handles client disconnects

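On the client side, the wire format above is just `event:`/`data:` lines separated by blank lines. A minimal parser sketch in Python (illustrative only; a real client would use an SSE library):

```python
import json

def parse_sse(stream_text: str) -> list[tuple[str, dict]]:
    """Parse a text/event-stream body into (event_name, payload) pairs."""
    events = []
    for block in stream_text.strip().split("\n\n"):
        name, data = "message", None  # per the SSE spec, the default event name is "message"
        for line in block.splitlines():
            if line.startswith("event:"):
                name = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data = json.loads(line[len("data:"):].strip())
        if data is not None:
            events.append((name, data))
    return events
```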
## Technical Implementation Details

### File Structure

```
lib/agent_coordinator/
├── session_manager.ex   # Session token management
├── mcp_server.ex        # Enhanced register_agent
├── http_interface.ex    # HTTP/SSE endpoints + security
└── application.ex       # Supervision tree
```

### Session Manager API

```elixir
# Create a new session
{:ok, session_info} = SessionManager.create_session(agent_id, capabilities)

# Validate an existing session
{:ok, session_info} = SessionManager.validate_session(token)
{:error, :expired} = SessionManager.validate_session(old_token)

# Manual cleanup (also runs automatically via a timer)
SessionManager.cleanup_expired_sessions()
```

### HTTP Interface Enhancements

```elixir
# Session validation middleware
case validate_session_for_method(method, conn, context) do
  {:ok, session_info} -> # Process the request
  {:error, auth_error} -> # Return 401 Unauthorized
end

# MCP headers on all responses
defp put_mcp_headers(conn) do
  conn
  |> put_resp_header("mcp-protocol-version", "2025-06-18")
  |> put_resp_header("server", "AgentCoordinator/1.0")
end
```

## Usage Examples

### 1. Agent Registration with Session Token

```bash
curl -X POST http://localhost:4000/mcp/request \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": "1",
    "method": "agents/register",
    "params": {
      "name": "My Agent Blue Koala",
      "capabilities": ["coding", "testing"],
      "codebase_id": "my_project"
    }
  }'

# Response:
{
  "jsonrpc": "2.0",
  "id": "1",
  "result": {
    "agent_id": "My Agent Blue Koala",
    "session_token": "abc123...",
    "expires_at": "2025-01-11T15:30:00Z"
  }
}
```

### 2. Authenticated Tool Call

```bash
curl -X POST http://localhost:4000/mcp/request \
  -H "Content-Type: application/json" \
  -H "Mcp-Session-Id: abc123..." \
  -d '{
    "jsonrpc": "2.0",
    "id": "2",
    "method": "tools/call",
    "params": {
      "name": "get_task_board",
      "arguments": {"agent_id": "My Agent Blue Koala"}
    }
  }'
```

### 3. Server-Sent Events Stream

```javascript
// Note: the browser-native EventSource API does not accept custom headers;
// this example assumes an implementation that does (e.g. the `eventsource`
// npm package in Node.js).
const eventSource = new EventSource('/mcp/stream', {
  headers: {
    'Mcp-Session-Id': 'abc123...'
  }
});

eventSource.onmessage = function(event) {
  const data = JSON.parse(event.data);
  console.log('Received:', data);
};
```

## Testing and Verification

### Automated Test Script

- **File**: `test_session_management.exs`
- **Coverage**: Registration flow, session validation, protocol headers
- **Usage**: `elixir test_session_management.exs`

### Manual Testing

1. Start the server: `mix phx.server`
2. Register an agent via `/mcp/request`
3. Use the returned session token for authenticated calls
4. Verify the MCP headers in responses
5. Test the SSE stream endpoint

## Benefits

### 🔐 Security

- **Token-based authentication**: Prevents unauthorized access
- **Session expiry**: Limits the exposure of compromised tokens
- **Origin validation**: Mitigates CSRF and unauthorized origins
- **Method-level protection**: Granular access control

### 📋 MCP Compliance

- **Official protocol version**: Headers indicate MCP 2025-06-18 support
- **Streamable HTTP**: Real-time capabilities via SSE
- **Proper error handling**: Standard JSON-RPC error responses
- **Session context**: Request metadata for debugging

### 🚀 Developer Experience

- **Backward compatibility**: Legacy headers are still supported
- **Clear error messages**: Detailed authentication failure reasons
- **Real-time updates**: Live agent status via SSE
- **Easy testing**: Comprehensive test utilities

## Future Enhancements

### Planned Features

- **PubSub integration**: Event-driven SSE updates
- **Session persistence**: Redis/database backing
- **Rate limiting**: Per-session request throttling
- **Audit logging**: Session activity tracking
- **WebSocket upgrade**: Bidirectional real-time communication

### Configuration Options

- **Session timeout**: Configurable expiry duration
- **Security levels**: Strict/permissive origin validation
- **Token rotation**: Automatic refresh mechanisms
- **Multi-tenancy**: Workspace-scoped sessions

---

*This implementation provides a solid foundation for MCP-compliant session management while maintaining the flexibility to extend with additional features as requirements evolve.*

@@ -1,279 +0,0 @@
# Agent Coordinator Multi-Interface MCP Server

The Agent Coordinator now supports multiple interface modes to accommodate different client types and use cases, from local VSCode integration to remote web applications.

## Interface Modes

### 1. STDIO Mode (Default)

Traditional MCP over stdin/stdout for local clients such as VSCode.

**Features:**

- Full tool access (filesystem, VSCode, terminal tools)
- Local security context (trusted)
- Backward compatible with existing MCP clients

**Usage:**

```bash
./scripts/mcp_launcher_multi.sh stdio
# or
./scripts/mcp_launcher.sh  # original launcher
```

### 2. HTTP Mode

REST API interface for remote clients and web applications.

**Features:**

- HTTP endpoints for MCP operations
- Tool filtering (removes local-only tools)
- CORS support for web clients
- Remote security context (sandboxed)

**Usage:**

```bash
./scripts/mcp_launcher_multi.sh http 8080
```

**Endpoints:**

- `GET /health` - Health check
- `GET /mcp/capabilities` - Server capabilities and filtered tools
- `GET /mcp/tools` - List available tools (filtered by context)
- `POST /mcp/tools/:tool_name` - Execute a specific tool
- `POST /mcp/request` - Full MCP JSON-RPC request
- `GET /agents` - Agent status (requires authorization)

### 3. WebSocket Mode

Real-time interface for web clients requiring live updates.

**Features:**

- Real-time MCP JSON-RPC over WebSocket
- Tool filtering for remote clients
- Session management and heartbeat
- Automatic cleanup on disconnect

**Usage:**

```bash
./scripts/mcp_launcher_multi.sh websocket 8081
```

**Endpoint:**

- `ws://localhost:8081/mcp/ws` - WebSocket connection

### 4. Remote Mode

Both HTTP and WebSocket on the same port for complete remote access.

**Usage:**

```bash
./scripts/mcp_launcher_multi.sh remote 8080
```

### 5. All Mode

All interface modes simultaneously for maximum compatibility.

**Usage:**

```bash
./scripts/mcp_launcher_multi.sh all 8080
```

## Tool Filtering

The system intelligently filters the available tools based on the client context:

### Local Clients (STDIO)

- **Context**: Trusted, local machine
- **Tools**: All tools available
- **Use case**: VSCode extension, local development

### Remote Clients (HTTP/WebSocket)

- **Context**: Sandboxed, remote access
- **Tools**: Filtered to exclude local-only operations
- **Use case**: Web applications, CI/CD, remote dashboards

### Tool Categories

**Always Available (All Contexts):**

- Agent coordination: `register_agent`, `create_task`, `get_task_board`, `heartbeat`
- Memory/Knowledge: `create_entities`, `read_graph`, `search_nodes`
- Documentation: `get-library-docs`, `resolve-library-id`
- Reasoning: `sequentialthinking`

**Local Only (Filtered for Remote):**

- Filesystem: `read_file`, `write_file`, `create_file`, `delete_file`
- VSCode: `vscode_*` tools
- Terminal: `run_in_terminal`, `get_terminal_output`
- System: Local file operations

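The categories above combine an allowlist with name-pattern matching. A simplified Python sketch of how such a filter could work (the sets mirror the lists above; the function itself is illustrative, not the project's implementation):

```python
ALWAYS_AVAILABLE = {
    "register_agent", "create_task", "get_task_board", "heartbeat",
    "create_entities", "read_graph", "search_nodes",
    "get-library-docs", "resolve-library-id", "sequentialthinking",
}
LOCAL_ONLY_PREFIXES = ("vscode_",)
LOCAL_ONLY = {"read_file", "write_file", "create_file", "delete_file",
              "run_in_terminal", "get_terminal_output"}

def filter_tools(tools: list[str], context: str) -> list[str]:
    """Return the tools visible to a client in the given security context."""
    if context == "local":  # STDIO: trusted, everything allowed
        return list(tools)
    return [  # remote: allowlist minus local-only names and patterns
        t for t in tools
        if t in ALWAYS_AVAILABLE
        and t not in LOCAL_ONLY
        and not t.startswith(LOCAL_ONLY_PREFIXES)
    ]
```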
## Configuration

Configuration is managed through environment variables and config files:

### Environment Variables

- `MCP_INTERFACE_MODE`: Interface mode (`stdio`, `http`, `websocket`, `remote`, `all`)
- `MCP_HTTP_PORT`: HTTP server port (default: 8080)
- `MCP_WS_PORT`: WebSocket port (default: 8081)

### Configuration File

See `mcp_interfaces_config.json` for detailed configuration options.

## Security Considerations

### Local Context (STDIO)

- Full filesystem access
- Trusted environment
- No network exposure

### Remote Context (HTTP/WebSocket)

- Sandboxed environment
- Tool filtering active
- CORS protection
- No local file access

### Tool Filtering Rules

1. **Allowlist approach**: Safe tools are explicitly allowed for remote clients
2. **Pattern matching**: Local-only tools are identified by name patterns
3. **Schema analysis**: Tools with local-only parameters are filtered
4. **Context-aware**: Different tool sets per connection type

## Client Examples

### HTTP Client (Python)

```python
import requests

# Get the available tools
response = requests.get("http://localhost:8080/mcp/tools")
tools = response.json()

# Register an agent
agent_data = {
    "arguments": {
        "name": "Remote Agent",
        "capabilities": ["analysis", "coordination"]
    }
}
response = requests.post("http://localhost:8080/mcp/tools/register_agent",
                         json=agent_data)
```

### WebSocket Client (JavaScript)

```javascript
const ws = new WebSocket('ws://localhost:8080/mcp/ws');

ws.onopen = () => {
  // Initialize the connection
  ws.send(JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "initialize",
    params: {
      protocolVersion: "2024-11-05",
      clientInfo: { name: "web-client", version: "1.0.0" }
    }
  }));
};

ws.onmessage = (event) => {
  const response = JSON.parse(event.data);
  console.log('MCP Response:', response);
};
```

### VSCode MCP (Traditional)

```json
{
  "mcpServers": {
    "agent-coordinator": {
      "command": "./scripts/mcp_launcher_multi.sh",
      "args": ["stdio"]
    }
  }
}
```

## Testing

Run the test suite to verify all interface modes:

```bash
# Start the server in remote mode
./scripts/mcp_launcher_multi.sh remote 8080 &

# Run the tests
python3 scripts/test_multi_interface.py

# Stop the server
kill %1
```

## Use Cases

### VSCode Extension Development

```bash
./scripts/mcp_launcher_multi.sh stdio
```

Full local tool access for development workflows.

### Web Dashboard

```bash
./scripts/mcp_launcher_multi.sh remote 8080
```

Remote access with an HTTP API and WebSocket for real-time updates.

### CI/CD Integration

```bash
./scripts/mcp_launcher_multi.sh http 8080
```

REST API access for automated workflows.

### Development/Testing

```bash
./scripts/mcp_launcher_multi.sh all 8080
```

All interfaces available for comprehensive testing.

## Architecture

```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  STDIO Client   │    │   HTTP Client   │    │ WebSocket Client│
│    (VSCode)     │    │    (Web/API)    │    │ (Web/Real-time) │
└─────────┬───────┘    └─────────┬───────┘    └─────────┬───────┘
          │                      │                      │
          │ Full Tools           │ Filtered Tools       │ Filtered Tools
          │                      │                      │
          v                      v                      v
┌─────────────────────────────────────────────────────────────────────┐
│                          Interface Manager                          │
│   ┌─────────────┐       ┌─────────────┐       ┌─────────────┐       │
│   │    STDIO    │       │    HTTP     │       │  WebSocket  │       │
│   │  Interface  │       │  Interface  │       │  Interface  │       │
│   └─────────────┘       └─────────────┘       └─────────────┘       │
└─────────────────────┬───────────────────────────────────────────────┘
                      │
                      v
┌─────────────────────────────────────────────────────────────────────┐
│                             Tool Filter                             │
│  ┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐    │
│  │  Local Context  │   │ Remote Context  │   │   Web Context   │    │
│  │  (Full Access)  │   │   (Sandboxed)   │   │  (Restricted)   │    │
│  └─────────────────┘   └─────────────────┘   └─────────────────┘    │
└─────────────────────┬───────────────────────────────────────────────┘
                      │
                      v
┌─────────────────────────────────────────────────────────────────────┐
│                              MCP Server                             │
│  ┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐    │
│  │ Agent Registry  │   │  Task Manager   │   │  External MCPs  │    │
│  └─────────────────┘   └─────────────────┘   └─────────────────┘    │
└─────────────────────────────────────────────────────────────────────┘
```

## Benefits

1. **Flexible Deployment**: Choose the right interface for your use case
2. **Security**: Automatic tool filtering prevents unauthorized local access
3. **Scalability**: The HTTP/WebSocket interfaces support multiple concurrent clients
4. **Backward Compatibility**: STDIO mode maintains compatibility with existing tools
5. **Real-time Capability**: WebSocket enables live updates and notifications
6. **Developer Experience**: Consistent MCP protocol across all interfaces

The multi-interface system allows the Agent Coordinator to serve both local development workflows and remote/web applications while maintaining security and appropriate tool access levels.

@@ -1,121 +0,0 @@
# Agent Coordinator - Project Cleanup Summary

## 🎯 Mission Accomplished

The Agent Coordinator project has been tidied up and made much more presentable for GitHub. Here's what was accomplished:

## ✅ Completed Tasks

### 1. **Updated README.md** ✨

- **Before**: Outdated README that didn't accurately describe the project
- **After**: Comprehensive, clear README that properly explains:
  - What Agent Coordinator actually does (an MCP server for multi-agent coordination)
  - Key features and benefits
  - A quick start guide with practical examples
  - A clear architecture diagram
  - Proper project structure documentation
  - Alternative language implementation recommendations

### 2. **Cleaned Up Outdated Files** 🗑️

- **Removed**: `test_enhanced.exs`, `test_multi_codebase.exs`, `test_timeout_fix.exs`
- **Removed**: `README_old.md` (outdated version)
- **Removed**: Development artifacts (`erl_crash.dump`, `firebase-debug.log`)
- **Updated**: `.gitignore` to prevent future development artifacts

### 3. **Organized Documentation Structure** 📚

- **Created**: `docs/` directory for technical documentation
- **Moved**: Technical deep-dive documents to `docs/`
  - `AUTO_HEARTBEAT.md` - Unified MCP server architecture
  - `VSCODE_TOOL_INTEGRATION.md` - VS Code integration details
  - `SEARCH_FILES_TIMEOUT_FIX.md` - Technical timeout solutions
  - `DYNAMIC_TOOL_DISCOVERY.md` - Dynamic tool discovery system
- **Created**: `docs/README.md` - Documentation index and navigation
- **Result**: Clean root directory with organized technical docs

### 4. **Improved Project Structure** 🏗️

- **Updated**: Main `AgentCoordinator` module to reflect the actual functionality
- **Before**: Just a placeholder "hello world" function
- **After**: A comprehensive module with:
  - Proper documentation explaining the system
  - Practical API functions (`register_agent`, `create_task`, `get_task_board`)
  - Version and status information
  - Real examples and usage patterns

### 5. **Created Language Implementation Guide** 🚀

- **New Document**: `docs/LANGUAGE_IMPLEMENTATIONS.md`
- **Comprehensive guide** for implementing Agent Coordinator in more accessible languages:
  - **Go** (highest priority) - Single binary deployment, excellent concurrency
  - **Python** (second priority) - Huge AI/ML community, familiar ecosystem
  - **Rust** (third priority) - Maximum performance, memory safety
  - **Node.js** (fourth priority) - Event-driven, web developer familiarity
- **Detailed implementation strategies** with code examples
- **Migration guides** for moving from Elixir to other languages
- **Performance comparisons** and adoption recommendations

## 🎨 Project Before vs After

### Before Cleanup

- ❌ Confusing README that didn't explain the actual purpose
- ❌ Development artifacts scattered in root directory
- ❌ Technical documentation mixed with main docs
- ❌ Main module was just a placeholder
- ❌ No guidance for developers wanting to use other languages

### After Cleanup

- ✅ Clear, comprehensive README explaining the MCP coordination system
- ✅ Clean root directory with organized structure
- ✅ Technical docs properly organized in `docs/` directory
- ✅ Main module reflects actual project functionality
- ✅ Detailed guides for implementing in Go, Python, Rust, Node.js
- ✅ Professional presentation suitable for open source

## 🌟 Key Improvements for GitHub Presentation

1. **Clear Value Proposition**: README immediately explains what the project does and why it's valuable
2. **Easy Getting Started**: Quick start section gets users running in minutes
3. **Professional Structure**: Well-organized directories and documentation
4. **Multiple Language Options**: Guidance for teams that prefer Go, Python, Rust, or Node.js
5. **Technical Deep-Dives**: Detailed docs for developers who want to understand the internals
6. **Real Examples**: Working code examples and practical usage patterns

## 🚀 Recommendations for Broader Adoption

Based on the cleanup analysis, here are the top recommendations:

### 1. **Implement Go Version First** (Highest Impact)

- **Why**: Single binary deployment, familiar to most developers, excellent performance
- **Effort**: 2-3 weeks of development time
- **Impact**: Would significantly increase adoption

### 2. **Python Version Second** (AI/ML Community)

- **Why**: Huge ecosystem in the AI space, very familiar to ML engineers
- **Effort**: 3-4 weeks of development time
- **Impact**: Perfect for AI agent development teams

### 3. **Create Video Demos**

- **What**: Screen recordings showing agent coordination in action
- **Why**: Much easier to understand the value than reading docs
- **Effort**: 1-2 days
- **Impact**: Increases GitHub star rate and adoption

### 4. **Docker Compose Quick Start**

- **What**: Single `docker-compose up` command to get everything running
- **Why**: Eliminates setup friction for trying the project
- **Effort**: 1 day
- **Impact**: Lower barrier to entry

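A minimal Compose file for that quick start might look like the following (a sketch, not the project's actual configuration; the service name, exposed port, and environment variable are assumptions):

```yaml
# Hypothetical docker-compose.yml for trying Agent Coordinator locally
version: "3.8"
services:
  agent-coordinator:
    build: .            # build the Elixir release from the repo root
    ports:
      - "4000:4000"     # assumed coordinator port
    environment:
      MIX_ENV: prod
```

With a file like this in place, `docker-compose up` is the only command a new user would need to run.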
## 🎯 Current State

The Agent Coordinator project is now:

- ✅ **Professional**: Clean, well-organized, and properly documented
- ✅ **Accessible**: Clear explanations of what it does and how to use it
- ✅ **Extensible**: Guidance for implementing in other languages
- ✅ **Developer-Friendly**: Good project structure and documentation organization
- ✅ **GitHub-Ready**: Ready for open source presentation and community adoption

The Elixir implementation remains the reference implementation with all advanced features, while the documentation now provides clear paths for teams to implement the same concepts in their preferred languages.

---

**Result**: The Agent Coordinator project is now much more approachable and ready for the world to enjoy! 🌍

@@ -1,77 +0,0 @@

# Agent Coordinator Documentation

This directory contains detailed technical documentation for the Agent Coordinator project.

## 📚 Documentation Index

### Core Documentation

- [Main README](../README.md) - Project overview, setup, and basic usage
- [CHANGELOG](../CHANGELOG.md) - Version history and changes
- [CONTRIBUTING](../CONTRIBUTING.md) - How to contribute to the project

### Technical Deep Dives

#### Architecture & Design

- [AUTO_HEARTBEAT.md](AUTO_HEARTBEAT.md) - Unified MCP server with automatic task tracking and heartbeat system
- [VSCODE_TOOL_INTEGRATION.md](VSCODE_TOOL_INTEGRATION.md) - VS Code tool integration and dynamic tool discovery
- [DYNAMIC_TOOL_DISCOVERY.md](DYNAMIC_TOOL_DISCOVERY.md) - How the system dynamically discovers and manages MCP tools

#### Implementation Details

- [SEARCH_FILES_TIMEOUT_FIX.md](SEARCH_FILES_TIMEOUT_FIX.md) - Technical details on timeout handling and GenServer call optimization

## 🎯 Key Concepts

### Agent Coordination

The Agent Coordinator is an MCP server that enables multiple AI agents to work together without conflicts by providing:

- **Task Distribution**: Automatically assigns tasks based on agent capabilities
- **Heartbeat Management**: Tracks agent liveness and activity
- **Cross-Codebase Support**: Coordinates work across multiple repositories
- **Tool Unification**: Provides a single interface to multiple external MCP servers

### Unified MCP Server

The system acts as a unified MCP server that internally manages external MCP servers while providing:

- **Automatic Task Tracking**: Every tool usage becomes a tracked task
- **Universal Heartbeat Coverage**: All operations maintain agent liveness
- **Dynamic Tool Discovery**: Automatically discovers tools from external servers
- **Seamless Integration**: A single interface for all MCP-compatible tools

### VS Code Integration

Advanced integration with VS Code through:

- **Native Tool Provider**: Direct access to the VS Code Extension API
- **Permission System**: Granular security controls for VS Code operations
- **Multi-Agent Support**: Safe concurrent access to VS Code features
- **Workflow Integration**: VS Code tools participate in task coordination

## 🚀 Getting Started with Documentation

1. **New Users**: Start with the [Main README](../README.md)
2. **Developers**: Read [CONTRIBUTING](../CONTRIBUTING.md) and [AUTO_HEARTBEAT.md](AUTO_HEARTBEAT.md)
3. **VS Code Users**: Check out [VSCODE_TOOL_INTEGRATION.md](VSCODE_TOOL_INTEGRATION.md)
4. **Troubleshooting**: See [SEARCH_FILES_TIMEOUT_FIX.md](SEARCH_FILES_TIMEOUT_FIX.md) for common issues

## 📖 Documentation Standards

All documentation in this project follows these standards:

- **Clear Structure**: Hierarchical headings with descriptive titles
- **Code Examples**: Practical examples with expected outputs
- **Troubleshooting**: Common issues and their solutions
- **Implementation Details**: Technical specifics for developers
- **User Perspective**: Both end-user and developer viewpoints

## 🤝 Contributing to Documentation

When adding new documentation:

1. Place technical deep-dives in this `docs/` directory
2. Update this index file to reference new documents
3. Keep the main README focused on getting started
4. Include practical examples and troubleshooting sections
5. Use clear, descriptive headings and consistent formatting

---

📝 **Last Updated**: August 25, 2025

@@ -1,89 +0,0 @@

# Search Files Timeout Fix

## Problem Description

The `search_files` tool (from the filesystem MCP server) was causing the agent-coordinator to exit with code 1 due to timeout issues. The error showed:

```
** (EXIT from #PID<0.95.0>) exited in: GenServer.call(AgentCoordinator.UnifiedMCPServer, {:handle_request, ...}, 5000)
** (EXIT) time out
```

## Root Cause Analysis

The issue was a timeout mismatch in the GenServer call chain:

1. **External tool calls** (like `search_files`) can take longer than 5 seconds to complete
2. **TaskRegistry and Inbox modules** were using default 5-second GenServer timeouts
3. During tool execution, **heartbeat operations** are called via `TaskRegistry.heartbeat_agent/1`
4. When the external tool took longer than 5 seconds, the heartbeat call would time out
5. This caused the entire tool call to fail with exit code 1

## Call Chain Analysis

```
External MCP Tool Call (search_files)
    ↓
UnifiedMCPServer.handle_mcp_request (60s timeout) ✓
    ↓
MCPServerManager.route_tool_call (60s timeout) ✓
    ↓
call_external_tool
    ↓
TaskRegistry.heartbeat_agent (5s timeout) ❌ ← TIMEOUT HERE
```

## Solution Applied

Updated GenServer call timeouts in the following modules:

### TaskRegistry Module

- `register_agent/1`: 5s → 30s
- `heartbeat_agent/1`: 5s → 30s ← **Most Critical Fix**
- `update_task_activity/3`: 5s → 30s
- `assign_task/1`: 5s → 30s
- `create_task/3`: 5s → 30s
- `complete_task/1`: 5s → 30s
- `get_agent_current_task/1`: 5s → 15s

### Inbox Module

- `add_task/2`: 5s → 30s
- `complete_current_task/1`: 5s → 30s
- `get_next_task/1`: 5s → 15s
- `get_status/1`: 5s → 15s
- `list_tasks/1`: 5s → 15s
- `get_current_task/1`: 5s → 15s

## Timeout Strategy

- **Long operations** (registration, task creation, heartbeat): **30 seconds**
- **Read operations** (status, get tasks, list): **15 seconds**
- **External tool routing**: **60 seconds** (already correct)

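In Elixir, the timeout is the optional third argument to `GenServer.call/3` and defaults to 5 000 ms, so the fix above amounts to passing an explicit value at each call site. A sketch of the pattern (module and message names follow this document; the exact message tuples are assumptions):

```elixir
defmodule AgentCoordinator.TaskRegistry do
  use GenServer

  # Before the fix: GenServer.call(__MODULE__, {:heartbeat, agent_id})
  # relied on the default 5_000 ms timeout, so the caller crashed
  # whenever a slow external tool kept the server busy longer than 5 s.
  def heartbeat_agent(agent_id) do
    GenServer.call(__MODULE__, {:heartbeat, agent_id}, 30_000)
  end

  # Read-only lookups get the shorter 15 s budget from the strategy above.
  def get_agent_current_task(agent_id) do
    GenServer.call(__MODULE__, {:get_current_task, agent_id}, 15_000)
  end
end
```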
## Impact

This fix ensures that:

1. ✅ `search_files` and other long-running external tools won't cause timeouts
2. ✅ Agent heartbeat operations can complete successfully during tool execution
3. ✅ The agent-coordinator won't exit with code 1 due to timeout issues
4. ✅ All automatic task tracking continues to work properly

## Files Modified

- `/lib/agent_coordinator/task_registry.ex` - Updated GenServer call timeouts
- `/lib/agent_coordinator/inbox.ex` - Updated GenServer call timeouts

## Verification

The fix can be verified by:

1. Running the agent-coordinator with external MCP servers
2. Executing `search_files` or other filesystem tools on large directories
3. Confirming no timeout errors occur and the exit code remains 0

## Future Considerations

- Consider making timeouts configurable via application config
- Monitor for any other GenServer calls that might need timeout adjustments
- Add timeout logging to help identify future timeout issues

@@ -1,441 +0,0 @@

# VS Code Tool Integration with Agent Coordinator

## 🎉 Latest Update: Dynamic Tool Discovery (COMPLETED)

**Date**: August 23, 2025
**Status**: ✅ **COMPLETED** - Full dynamic tool discovery implementation

### What Changed

The Agent Coordinator has been refactored to eliminate all hardcoded tool lists and implement **fully dynamic tool discovery** following the MCP protocol specification.

**Key Improvements**:

- ✅ **No hardcoded tools**: All external server tools discovered via MCP `tools/list`
- ✅ **Conditional VS Code tools**: Only included when VS Code functionality is available
- ✅ **Real-time refresh**: `refresh_tools()` function to rediscover tools on demand
- ✅ **Perfect MCP compliance**: Follows protocol specification exactly
- ✅ **Better error handling**: Proper handling of both PIDs and Ports for server monitoring

**Example Tool Discovery Results**:

```
Found 44 total tools:
• Coordinator tools: 6 (register_agent, create_task, etc.)
• External MCP tools: 26+ (context7, filesystem, memory, sequential thinking)
• VS Code tools: 12 (when available)
```

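The discovery step itself is the standard MCP `tools/list` round trip that the coordinator performs against each managed server; a sketch of the JSON-RPC exchange (the tool entry shown is illustrative, not an actual server's response):

```json
// Request sent to each managed external server
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

// Response: the server enumerates its tools, which the coordinator
// merges into its unified tool table for routing
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {"name": "search_files", "inputSchema": {"type": "object"}}
    ]
  }
}
```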
### Benefits

1. **MCP Protocol Compliance**: Perfect adherence to the MCP specification
2. **Flexibility**: New MCP servers can be added without code changes
3. **Reliability**: Tools are automatically rediscovered when servers restart
4. **Performance**: Only available tools are included in routing
5. **Debugging**: Clear visibility into which tools are available

---

## Overview

This document outlines the implementation of VS Code's built-in tools as MCP (Model Context Protocol) tools within the Agent Coordinator system. This integration allows agents to access VS Code's native capabilities alongside external MCP servers through a unified coordination interface.

## Architecture

### Current State

- Agent Coordinator acts as a unified MCP server
- Proxies tools from external MCP servers (Context7, filesystem, memory, sequential thinking, etc.)
- Manages task coordination, agent assignment, and cross-codebase workflows

### Proposed Enhancement

- Add VS Code Extension API tools as native MCP tools
- Integrate with the existing tool routing and coordination system
- Maintain security and permission controls

## Implementation Plan

### Phase 1: Core VS Code Tool Provider

#### 1.1 Create VSCodeToolProvider Module

**File**: `lib/agent_coordinator/vscode_tool_provider.ex`

**Core Tools to Implement**:

- `vscode_read_file` - Read file contents using the VS Code API
- `vscode_write_file` - Write file contents
- `vscode_create_file` - Create new files
- `vscode_delete_file` - Delete files
- `vscode_list_directory` - List directory contents
- `vscode_get_workspace_folders` - Get workspace information
- `vscode_run_command` - Execute VS Code commands
- `vscode_get_active_editor` - Get current editor state
- `vscode_set_editor_content` - Modify editor content
- `vscode_get_selection` - Get current text selection
- `vscode_set_selection` - Set text selection
- `vscode_show_message` - Display messages to the user

#### 1.2 Tool Definitions

Each tool will have:

- An MCP-compliant schema definition
- Input validation
- Error handling
- Audit logging
- Permission checking

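Under that scheme, a single tool definition would look roughly like the following (a sketch; the field names follow the MCP tool schema, while the Elixir map shape and the `path` parameter are assumptions):

```elixir
# Hypothetical entry returned by VSCodeToolProvider.get_tools/0
%{
  "name" => "vscode_read_file",
  "description" => "Read file contents using the VS Code API",
  "inputSchema" => %{
    "type" => "object",
    "properties" => %{
      "path" => %{
        "type" => "string",
        "description" => "Workspace-relative file path"
      }
    },
    "required" => ["path"]
  }
}
```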
### Phase 2: Advanced Editor Operations

#### 2.1 Language Services Integration

- `vscode_get_diagnostics` - Get language server diagnostics
- `vscode_format_document` - Format current document
- `vscode_format_selection` - Format selected text
- `vscode_find_references` - Find symbol references
- `vscode_go_to_definition` - Navigate to definition
- `vscode_rename_symbol` - Rename symbols
- `vscode_code_actions` - Get available code actions

#### 2.2 Search and Navigation

- `vscode_find_in_files` - Search across workspace
- `vscode_find_symbols` - Find symbols in workspace
- `vscode_goto_line` - Navigate to specific line
- `vscode_reveal_in_explorer` - Show file in explorer

### Phase 3: Terminal and Process Management

#### 3.1 Terminal Operations

- `vscode_create_terminal` - Create new terminal
- `vscode_send_to_terminal` - Send commands to terminal
- `vscode_get_terminal_output` - Get terminal output (if possible)
- `vscode_close_terminal` - Close terminal instances

#### 3.2 Task and Process Management

- `vscode_run_task` - Execute VS Code tasks
- `vscode_get_tasks` - List available tasks
- `vscode_debug_start` - Start debugging session
- `vscode_debug_stop` - Stop debugging

### Phase 4: Git and Version Control

#### 4.1 Git Operations

- `vscode_git_status` - Get git status
- `vscode_git_commit` - Create commits
- `vscode_git_push` - Push changes
- `vscode_git_pull` - Pull changes
- `vscode_git_branch` - Branch operations
- `vscode_git_diff` - Get file differences

### Phase 5: Extension and Settings Management

#### 5.1 Configuration

- `vscode_get_settings` - Get VS Code settings
- `vscode_update_settings` - Update settings
- `vscode_get_extensions` - List installed extensions
- `vscode_install_extension` - Install extensions (if permitted)

## Security and Safety

### Permission Model

```elixir
defmodule AgentCoordinator.VSCodePermissions do
  @moduledoc """
  Manages permissions for VS Code tool access.
  """

  # Permission levels:
  # :read_only  - File reading, workspace inspection
  # :editor     - Text editing, selections
  # :filesystem - File creation/deletion
  # :terminal   - Terminal access
  # :git        - Version control operations
  # :admin      - Settings, extensions, system commands
end
```

### Sandboxing

- Restrict file operations to workspace folders only
- Prevent access to system files outside the workspace
- Rate limiting for expensive operations
- Command whitelist for `vscode_run_command`

### Audit Logging

- Log all VS Code tool calls with:
  - Timestamp
  - Agent ID
  - Tool name and parameters
  - Result summary
  - Permission level used

## Integration Points

### 1. UnifiedMCPServer Enhancement

**File**: `lib/agent_coordinator/unified_mcp_server.ex`

Add VS Code tools to the tool discovery and routing:

```elixir
defp get_all_tools(state) do
  # Existing external MCP server tools
  external_tools = get_external_tools(state)

  # New VS Code tools
  vscode_tools = VSCodeToolProvider.get_tools()

  external_tools ++ vscode_tools
end

defp route_tool_call(tool_name, args, context, state) do
  case tool_name do
    "vscode_" <> _rest ->
      VSCodeToolProvider.handle_tool_call(tool_name, args, context)

    _ ->
      # Route to external MCP servers
      route_to_external_server(tool_name, args, context, state)
  end
end
```

### 2. Task Coordination

VS Code tools will participate in the same task coordination system:

- Task creation and assignment
- File locking (prevent conflicts)
- Cross-agent coordination
- Priority management

### 3. Agent Capabilities

Agents can declare VS Code tool capabilities:

```elixir
capabilities: [
  "coding",
  "analysis",
  "vscode_editing",
  "vscode_terminal",
  "vscode_git"
]
```

## Usage Examples

### Example 1: File Analysis and Editing

```json
{
  "tool": "vscode_read_file",
  "args": {"path": "src/main.rs"}
}
// Agent reads file, analyzes it

{
  "tool": "vscode_get_diagnostics",
  "args": {"file": "src/main.rs"}
}
// Agent gets compiler errors

{
  "tool": "vscode_set_editor_content",
  "args": {
    "file": "src/main.rs",
    "content": "// Fixed code here",
    "range": {"start": 10, "end": 15}
  }
}
// Agent fixes the issues
```

### Example 2: Cross-Tool Workflow

```json
// 1. Agent searches documentation using Context7
{"tool": "mcp_context7_get-library-docs", "args": {"libraryID": "/rust/std"}}

// 2. Agent analyzes current code using VS Code
{"tool": "vscode_get_active_editor", "args": {}}

// 3. Agent applies documentation insights to code
{"tool": "vscode_format_document", "args": {}}
{"tool": "vscode_set_editor_content", "args": {...}}

// 4. Agent commits changes using VS Code Git
{"tool": "vscode_git_commit", "args": {"message": "Applied best practices from docs"}}
```

## Benefits

1. **Unified Tool Access**: Agents access both external services and VS Code features through the same interface
2. **Enhanced Capabilities**: Complex workflows combining external data with direct IDE manipulation
3. **Consistent Coordination**: The same task management for all tool types
4. **Security**: Controlled access to powerful VS Code features
5. **Extensibility**: Easy to add new VS Code capabilities as needs arise

## Implementation Status & Updated Roadmap

### ✅ **COMPLETED - Phase 1: Core VS Code Tool Provider (August 23, 2025)**

**Successfully Implemented & Tested:**

- ✅ VSCodeToolProvider module with 12 core tools
- ✅ VSCodePermissions system with 6 permission levels
- ✅ Integration with UnifiedMCPServer tool discovery and routing
- ✅ Security controls: path sandboxing, command whitelisting, audit logging
- ✅ Agent coordination integration (tasks, assignments, coordination)

**Working Tools:**

- ✅ File Operations: `vscode_read_file`, `vscode_write_file`, `vscode_create_file`, `vscode_delete_file`, `vscode_list_directory`
- ✅ Editor Operations: `vscode_get_active_editor`, `vscode_set_editor_content`, `vscode_get_selection`, `vscode_set_selection`
- ✅ Commands: `vscode_run_command`, `vscode_show_message`
- ✅ Workspace: `vscode_get_workspace_folders`

**Key Achievement:** VS Code tools now work seamlessly alongside external MCP servers through unified agent coordination!

### 🔄 **CURRENT PRIORITY - Phase 1.5: VS Code Extension API Bridge**

**Status:** Tools currently return placeholder data. We need to implement actual VS Code Extension API calls.

**Implementation Steps:**

1. **JavaScript Bridge Module** - Create a communication layer between Elixir and the VS Code Extension API
2. **Real API Integration** - Replace placeholder responses with actual VS Code API calls
3. **Error Handling** - Robust error handling for VS Code API failures
4. **Testing** - Verify all tools work with real VS Code operations

**Target Completion:** Next 2-3 days

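One plausible shape for that bridge, sketched from the Elixir side (the module, wire protocol, and `Jason` dependency are all assumptions; the actual bridge design is not specified in this document): a Port that exchanges newline-delimited JSON with a Node.js helper running in the extension host.

```elixir
defmodule AgentCoordinator.VSCodeBridge do
  # Hypothetical sketch: talk to a Node.js helper script over a Port
  # using newline-delimited JSON messages (requires the Jason library).
  def start_link(script_path) do
    port = Port.open({:spawn, "node #{script_path}"}, [:binary, {:line, 65_536}])
    {:ok, port}
  end

  def call(port, tool, args) do
    Port.command(port, Jason.encode!(%{tool: tool, args: args}) <> "\n")

    receive do
      {^port, {:data, {:eol, line}}} -> Jason.decode(line)
    after
      # Mirror the 30 s budget used elsewhere in the coordinator
      30_000 -> {:error, :timeout}
    end
  end
end
```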
### 📅 **UPDATED IMPLEMENTATION TIMELINE**

#### **Phase 2: Language Services & Advanced Editor Operations (Priority: High)**

**Target:** Week of August 26, 2025

**Tools to Implement:**

- `vscode_get_diagnostics` - Get language server diagnostics
- `vscode_format_document` - Format current document
- `vscode_format_selection` - Format selected text
- `vscode_find_references` - Find symbol references
- `vscode_go_to_definition` - Navigate to definition
- `vscode_rename_symbol` - Rename symbols across workspace
- `vscode_code_actions` - Get available code actions
- `vscode_apply_code_action` - Apply a specific code action

**Value:** Enables agents to perform intelligent code analysis and refactoring

#### **Phase 3: Search, Navigation & Workspace Management (Priority: Medium)**

**Target:** Week of September 2, 2025

**Tools to Implement:**

- `vscode_find_in_files` - Search across workspace with regex support
- `vscode_find_symbols` - Find symbols in workspace
- `vscode_goto_line` - Navigate to specific line/column
- `vscode_reveal_in_explorer` - Show file in explorer
- `vscode_open_folder` - Open workspace folder
- `vscode_close_folder` - Close workspace folder
- `vscode_switch_editor_tab` - Switch between open files

**Value:** Enables agents to navigate and understand large codebases

#### **Phase 4: Terminal & Process Management (Priority: Medium)**

**Target:** Week of September 9, 2025

**Tools to Implement:**

- `vscode_create_terminal` - Create new terminal instance
- `vscode_send_to_terminal` - Send commands to terminal
- `vscode_get_terminal_output` - Get terminal output (if possible via the API)
- `vscode_close_terminal` - Close terminal instances
- `vscode_run_task` - Execute VS Code tasks (build, test, etc.)
- `vscode_get_tasks` - List available tasks
- `vscode_stop_task` - Stop a running task

**Value:** Enables agents to manage build processes and execute commands

#### **Phase 5: Git & Version Control Integration (Priority: High)**

**Target:** Week of September 16, 2025

**Tools to Implement:**

- `vscode_git_status` - Get repository status
- `vscode_git_commit` - Create commits with messages
- `vscode_git_push` - Push changes to remote
- `vscode_git_pull` - Pull changes from remote
- `vscode_git_branch` - Branch operations (create, switch, delete)
- `vscode_git_diff` - Get file differences
- `vscode_git_stage` - Stage/unstage files
- `vscode_git_blame` - Get blame information

**Value:** Enables agents to manage version control workflows

#### **Phase 6: Advanced Features & Extension Management (Priority: Low)**

**Target:** Week of September 23, 2025

**Tools to Implement:**

- `vscode_get_settings` - Get VS Code settings
- `vscode_update_settings` - Update settings
- `vscode_get_extensions` - List installed extensions
- `vscode_install_extension` - Install extensions (if permitted)
- `vscode_debug_start` - Start debugging session
- `vscode_debug_stop` - Stop debugging
- `vscode_set_breakpoint` - Set/remove breakpoints

**Value:** Complete IDE automation capabilities

### 🚀 **Key Insights from Phase 1**

1. **Integration Success**: The MCP tool routing system works perfectly for VS Code tools
2. **Permission System**: Granular permissions are essential for security
3. **Agent Coordination**: VS Code tools integrate seamlessly with task management
4. **Unified Experience**: Agents can now use external services and VS Code through the same interface

### 🎯 **Next Immediate Actions**

1. **Priority 1**: Implement a proper agent identification system for multi-agent scenarios
2. **Priority 2**: Implement the real VS Code Extension API bridge (replace placeholders)
3. **Priority 3**: Add Phase 2 language services tools
4. **Priority 4**: Create a comprehensive testing suite
5. **Priority 5**: Document usage patterns and best practices

### 🔧 **Critical Enhancement: Multi-Agent Identification System**

**Problem:** The current system treats all GitHub Copilot instances as the same agent, causing conflicts in multi-agent scenarios.

**Solution:** Implement unique agent identification with session-based tracking.

**Implementation Requirements:**

1. **Agent ID Parameter**: All tools must include an `agent_id` parameter
2. **Session-Based Registration**: Each chat session/agent instance gets a unique ID
3. **Tool Schema Updates**: Add `agent_id` to all VS Code tool schemas
4. **Auto-Registration**: The system automatically creates unique agents per session
5. **Agent Isolation**: Tasks, permissions, and state are isolated per agent ID

**Benefits:**

- Multiple agents can work simultaneously without conflicts
- Individual agent permissions and capabilities
- Proper task assignment and coordination
- Clear audit trails per agent

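Session-unique naming can be as simple as appending random words to the base agent name, so concurrent Copilot sessions register as distinct agents (the same convention the coordinator's usage instructions suggest, e.g. "GitHub Copilot Silver Banana"). A sketch, with a hypothetical helper module:

```elixir
defmodule AgentCoordinator.AgentIdentity do
  # Hypothetical helper: derive a session-unique display name
  # by appending two random words to the base name.
  @words ~w(Silver Yellow Olive Crimson Banana Koala Llama Falcon)

  def unique_name(base \\ "GitHub Copilot") do
    suffix = @words |> Enum.take_random(2) |> Enum.join(" ")
    "#{base} #{suffix}"
  end
end

# e.g. AgentCoordinator.AgentIdentity.unique_name()
# might produce "GitHub Copilot Olive Falcon"
```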
### 📊 **Success Metrics**

- **Tool Reliability**: >95% success rate for all VS Code tool calls
- **Performance**: <500ms average response time for VS Code operations
- **Security**: Zero security incidents with workspace sandboxing
- **Integration**: All tools work seamlessly with the agent coordination system
- **Adoption**: Agents can complete full development workflows using only coordinated tools

## Testing Strategy

1. **Unit Tests**: Each VS Code tool function
2. **Integration Tests**: Tool coordination and routing
3. **Security Tests**: Permission enforcement and sandboxing
4. **Performance Tests**: Rate limiting and resource usage
5. **User Acceptance**: Real workflow testing with multiple agents

## Future Enhancements

- **Extension-specific Tools**: Tools for specific VS Code extensions
- **Collaborative Features**: Multi-agent editing coordination
- **AI-Enhanced Operations**: Intelligent code suggestions and fixes
- **Remote Development**: Support for remote VS Code scenarios
- **Custom Tool Creation**: A framework for users to create their own VS Code tools

---

## Notes

This implementation transforms the Agent Coordinator from a simple MCP proxy into a comprehensive development environment orchestrator, enabling sophisticated AI-assisted development workflows.

@@ -1,305 +0,0 @@
|
||||
<svg viewBox="0 0 1200 800" xmlns="http://www.w3.org/2000/svg">
  <defs>
    <style>
      .agent-box {
        fill: #e3f2fd;
        stroke: #1976d2;
        stroke-width: 2;
      }
      .coordinator-box {
        fill: #f3e5f5;
        stroke: #7b1fa2;
        stroke-width: 3;
      }
      .component-box {
        fill: #fff3e0;
        stroke: #f57c00;
        stroke-width: 2;
      }
      .mcp-server-box {
        fill: #e8f5e8;
        stroke: #388e3c;
        stroke-width: 2;
      }
      .taskboard-box {
        fill: #fff8e1;
        stroke: #ffa000;
        stroke-width: 2;
      }
      .text {
        font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
        font-size: 12px;
      }
      .title-text {
        font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
        font-size: 14px;
        font-weight: bold;
      }
      .small-text {
        font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
        font-size: 10px;
      }
      .connection-line {
        stroke: #666;
        stroke-width: 2;
        fill: none;
      }
      .mcp-line {
        stroke: #1976d2;
        stroke-width: 3;
        fill: none;
      }
      .data-flow {
        stroke: #4caf50;
        stroke-width: 2;
        fill: none;
        stroke-dasharray: 5,5;
      }
      .text-bg {
        fill: white;
        fill-opacity: 0.9;
        stroke: #333;
        stroke-width: 1;
        rx: 4;
      }
      .overlay-text {
        font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
        font-size: 12px;
        font-weight: bold;
      }
    </style>

    <!-- Arrow marker -->
    <marker id="arrowhead" markerWidth="6" markerHeight="4"
            refX="5" refY="2" orient="auto">
      <polygon points="0 0, 6 2, 0 4" fill="none" stroke="#666" stroke-width="0.01" />
    </marker>

    <!-- MCP Arrow marker -->
    <marker id="mcpArrow" markerWidth="6" markerHeight="4"
            refX="5" refY="2" orient="auto">
      <polygon points="0 0, 6 2, 0 4" fill="#1976d2" stroke="#1976d2" stroke-width="0.01" />
    </marker>

    <!-- Data flow arrow -->
    <marker id="dataArrow" markerWidth="6" markerHeight="4"
            refX="5" refY="2" orient="auto">
      <polygon points="0 0, 6 2, 0 4" fill="#4caf50" stroke="#4caf50" stroke-width="1" />
    </marker>
  </defs>

  <!-- Background -->
  <rect width="1200" height="800" fill="#fafafa03" />

  <!-- Title -->
  <text x="600" y="30" text-anchor="middle" class="title-text" font-size="18" fill="#333">
    Agent Coordinator: MCP Proxy Server Architecture
  </text>

  <!-- AI Agents Section -->
  <text x="600" y="55" text-anchor="middle" class="text" fill="#666">
    Single MCP Interface → Multiple AI Agents → Unified Project Awareness
  </text>

  <!-- Agent 1 -->
  <rect x="50" y="80" width="150" height="80" rx="8" class="agent-box" />
  <text x="125" y="105" text-anchor="middle" class="title-text" fill="#1976d2">Agent 1</text>
  <text x="125" y="120" text-anchor="middle" class="small-text" fill="#666">Purple Zebra</text>
  <text x="125" y="135" text-anchor="middle" class="small-text" fill="#666">Capabilities:</text>
  <text x="125" y="148" text-anchor="middle" class="small-text" fill="#666">coding, testing</text>

  <!-- Agent 2 -->
  <rect x="250" y="80" width="150" height="80" rx="8" class="agent-box" />
  <text x="325" y="105" text-anchor="middle" class="title-text" fill="#1976d2">Agent 2</text>
  <text x="325" y="120" text-anchor="middle" class="small-text" fill="#666">Yellow Elephant</text>
  <text x="325" y="135" text-anchor="middle" class="small-text" fill="#666">Capabilities:</text>
  <text x="325" y="148" text-anchor="middle" class="small-text" fill="#666">analysis, docs</text>

  <!-- Agent N -->
  <rect x="450" y="80" width="150" height="80" rx="8" class="agent-box" />
  <text x="525" y="105" text-anchor="middle" class="title-text" fill="#1976d2">Agent N</text>
  <text x="525" y="120" text-anchor="middle" class="small-text" fill="#666">More Agents...</text>
  <text x="525" y="135" text-anchor="middle" class="small-text" fill="#666">Dynamic</text>
  <text x="525" y="148" text-anchor="middle" class="small-text" fill="#666">Registration</text>

  <!-- Lines from agents to coordinator (drawn first, behind text) -->
  <line x1="125" y1="160" x2="130" y2="220" class="mcp-line" marker-end="url(#mcpArrow)" />
  <line x1="325" y1="160" x2="330" y2="220" class="mcp-line" marker-end="url(#mcpArrow)" />
  <line x1="525" y1="160" x2="525" y2="220" class="mcp-line" marker-end="url(#mcpArrow)" />

  <!-- MCP Protocol text with background (drawn on top of lines) -->
  <rect x="200" y="167" width="250" height="25" class="text-bg" />
  <text x="325" y="185" text-anchor="middle" class="overlay-text">
    MCP Protocol → Single Proxy Interface
  </text>

  <!-- Main Coordinator Box -->
  <rect x="50" y="220" width="600" height="280" rx="12" class="coordinator-box" />
  <text x="350" y="245" text-anchor="middle" class="title-text" font-size="16">
    AGENT COORDINATOR (MCP Proxy Server)
  </text>
  <text x="350" y="255" text-anchor="middle" class="small-text" fill="#9c27b0">
    ⚡ All tool calls proxy through here → Real-time agent tracking → Full project awareness
  </text>

  <!-- Core Components Row -->
  <!-- Task Registry -->
  <rect x="70" y="260" width="160" height="100" rx="6" class="component-box" />
  <text x="150" y="280" text-anchor="middle" class="title-text" fill="#f57c00">Task Registry</text>
  <text x="150" y="298" text-anchor="middle" class="small-text" fill="#666">• Task Queuing</text>
  <text x="150" y="311" text-anchor="middle" class="small-text" fill="#666">• Agent Matching</text>
  <text x="150" y="324" text-anchor="middle" class="small-text" fill="#666">• Auto-Tracking</text>
  <text x="150" y="337" text-anchor="middle" class="small-text" fill="#666">• Progress Monitor</text>
  <text x="150" y="350" text-anchor="middle" class="small-text" fill="#666">• Conflict Prevention</text>

  <!-- Agent Manager -->
  <rect x="250" y="260" width="160" height="100" rx="6" class="component-box" />
  <text x="330" y="280" text-anchor="middle" class="title-text" fill="#f57c00">Agent Manager</text>
  <text x="330" y="298" text-anchor="middle" class="small-text" fill="#666">• Registration</text>
  <text x="330" y="311" text-anchor="middle" class="small-text" fill="#666">• Heartbeat Monitor</text>
  <text x="330" y="324" text-anchor="middle" class="small-text" fill="#666">• Capabilities</text>
  <text x="330" y="337" text-anchor="middle" class="small-text" fill="#666">• Status Tracking</text>
  <text x="330" y="350" text-anchor="middle" class="small-text" fill="#666">• Load Balancing</text>

  <!-- Codebase Registry -->
  <rect x="430" y="260" width="160" height="100" rx="6" class="component-box" />
  <text x="510" y="280" text-anchor="middle" class="title-text" fill="#f57c00">Codebase Registry</text>
  <text x="510" y="298" text-anchor="middle" class="small-text" fill="#666">• Cross-Repo</text>
  <text x="510" y="311" text-anchor="middle" class="small-text" fill="#666">• Dependencies</text>
  <text x="510" y="324" text-anchor="middle" class="small-text" fill="#666">• Workspace Mgmt</text>
  <text x="510" y="337" text-anchor="middle" class="small-text" fill="#666">• File Locking</text>
  <text x="510" y="350" text-anchor="middle" class="small-text" fill="#666">• Version Control</text>

  <!-- Unified Tool Registry -->
  <rect x="70" y="380" width="520" height="100" rx="6" class="component-box" />
  <text x="330" y="400" text-anchor="middle" class="title-text" fill="#f57c00">UNIFIED TOOL REGISTRY (Proxy Layer)</text>
  <text x="330" y="415" text-anchor="middle" class="small-text" fill="#f57c00">Every tool call = Agent presence update + Task tracking + Project awareness</text>

  <!-- Native Tools -->
  <text x="90" y="435" class="small-text" fill="#666" font-weight="bold">Native Tools:</text>
  <!-- <text x="90" y="434" class="small-text" fill="#666">register_agent, get_next_task, create_task_set,</text> -->
  <!-- <text x="90" y="448" class="small-text" fill="#666">complete_task, heartbeat, get_task_board</text> -->
  <text x="90" y="463" class="small-text" fill="#666" font-weight="bold">Proxied External Tools:</text>

  <!-- External Tools -->
  <text x="320" y="435" class="small-text" fill="#666">register_agent, get_next_task, create_task_set,</text>
  <!-- <text x="320" y="420" class="small-text" fill="#666" font-weight="bold">External MCP Tools:</text> -->
  <text x="320" y="449" class="small-text" fill="#666">complete_task, heartbeat, get_task_board</text>
  <!-- <text x="320" y="434" class="small-text" fill="#666">read_file, write_file, search_memory,</text> -->
  <text x="320" y="463" class="small-text" fill="#666">read_file, write_file, search_memory, get_docs</text>

  <!-- VS Code Tools -->
  <text x="90" y="477" class="small-text" fill="#666" font-weight="bold">VS Code Integration:</text>
  <text x="320" y="477" class="small-text" fill="#666">get_active_editor, set_selection, install_extension</text>

  <!-- Task Board (Right side) -->
  <rect x="680" y="220" width="260" height="280" rx="8" class="coordinator-box"/>
  <text x="810" y="245" text-anchor="middle" class="title-text">Real-Time Task Board</text>

  <!-- Agent Queues -->
  <rect x="700" y="260" width="100" height="80" rx="4" class="component-box"/>
  <text x="750" y="275" text-anchor="middle" class="small-text" fill="#666" font-weight="bold">Agent 1 Queue</text>
  <text x="750" y="290" text-anchor="middle" class="small-text" fill="#4caf50">✓ Task 1</text>
  <text x="750" y="303" text-anchor="middle" class="small-text" fill="#4caf50">✓ Task 2</text>
  <text x="750" y="316" text-anchor="middle" class="small-text" fill="#ff9800">→ Task 3</text>
  <text x="750" y="329" text-anchor="middle" class="small-text" fill="#666">… Task 4</text>

  <rect x="820" y="260" width="100" height="80" rx="4" class="component-box" />
  <text x="870" y="275" text-anchor="middle" class="small-text" fill="#666" font-weight="bold">Agent 2 Queue</text>
  <text x="870" y="290" text-anchor="middle" class="small-text" fill="#4caf50">✓ Task 1</text>
  <text x="870" y="303" text-anchor="middle" class="small-text" fill="#ff9800">→ Task 2</text>
  <text x="870" y="316" text-anchor="middle" class="small-text" fill="#666">… Task 3</text>
  <text x="870" y="329" text-anchor="middle" class="small-text" fill="#666">… Task 4</text>

  <!-- Agent Inboxes -->
  <rect x="700" y="360" width="100" height="60" rx="4" fill="#e3f2fd" stroke="#1976d2" stroke-width="1" />
  <text x="750" y="375" text-anchor="middle" class="small-text" fill="#1976d2" font-weight="bold">Agent 1 Inbox</text>
  <text x="750" y="390" text-anchor="middle" class="small-text" fill="#666">current: task 3</text>
  <text x="750" y="403" text-anchor="middle" class="small-text" fill="#666">[complete task]</text>
  <!-- <rect x="700" y="360" width="100" height="60" rx="4" fill="#e3f2fd" stroke="#1976d2" stroke-width="1" />
  <text x="750" y="375" text-anchor="middle" class="small-text" fill="#1976d2" font-weight="bold">Agent 1 Inbox</text>
  <text x="750" y="390" text-anchor="middle" class="small-text" fill="#666">current: task 3</text>
  <text x="750" y="403" text-anchor="middle" class="small-text" fill="#666">[complete task]</text> -->

  <rect x="820" y="360" width="100" height="60" rx="4" fill="#e3f2fd" stroke="#1976d2" stroke-width="1" />
  <text x="870" y="375" text-anchor="middle" class="small-text" fill="#1976d2" font-weight="bold">Agent 2 Inbox</text>
  <text x="870" y="390" text-anchor="middle" class="small-text" fill="#666">current: task 2</text>
  <text x="870" y="403" text-anchor="middle" class="small-text" fill="#666">[complete task]</text>

  <!-- Connection lines from coordinator to external servers (drawn first, behind text) -->
  <line x1="350" y1="500" x2="110" y2="550" class="connection-line" marker-end="url(#arrowhead)" />
  <line x1="350" y1="500" x2="250" y2="550" class="connection-line" marker-end="url(#arrowhead)" />
  <line x1="350" y1="500" x2="390" y2="550" class="connection-line" marker-end="url(#arrowhead)" />
  <line x1="350" y1="500" x2="530" y2="550" class="connection-line" marker-end="url(#arrowhead)" />

  <!-- Data flow line to task board (drawn first, behind text) -->
  <line x1="650" y1="350" x2="680" y2="350" class="data-flow" marker-end="url(#dataArrow)" />

  <!-- PROXY arrows showing reverse direction - tools flow UP through coordinator -->
  <line x1="110" y1="550" x2="330" y2="500" class="mcp-line" marker-end="url(#mcpArrow)" stroke-dasharray="3,3" />
  <line x1="250" y1="550" x2="340" y2="500" class="mcp-line" marker-end="url(#mcpArrow)" stroke-dasharray="3,3" />
  <line x1="390" y1="550" x2="360" y2="500" class="mcp-line" marker-end="url(#mcpArrow)" stroke-dasharray="3,3" />
  <line x1="530" y1="550" x2="370" y2="500" class="mcp-line" marker-end="url(#mcpArrow)" stroke-dasharray="3,3" />

  <!-- External MCP Servers Section title with background -->
  <rect x="210" y="520" width="280" height="25" class="text-bg" />
  <text x="350" y="535" text-anchor="middle" class="overlay-text" fill="#388e3c">
    External MCP Servers (Proxied via Coordinator)
  </text>

  <!-- Proxy flow label -->
  <rect x="550" y="520" width="140" height="25" class="text-bg" />
  <text x="620" y="535" text-anchor="middle" class="small-text" fill="#1976d2" font-weight="bold">
    ⇅ Proxied Tool Calls
  </text>

  <!-- Data flow label with background -->
  <rect x="630" y="340" width="80" height="20" class="text-bg" />
  <text x="670" y="352" text-anchor="middle" class="small-text" fill="#4caf50" font-weight="bold">
    Live Updates
  </text>

  <!-- MCP Server boxes -->
  <rect x="50" y="550" width="120" height="80" rx="6" class="mcp-server-box" />
  <text x="110" y="570" text-anchor="middle" class="title-text" fill="#388e3c">Filesystem</text>
  <text x="110" y="585" text-anchor="middle" class="small-text" fill="#666">read_file</text>
  <text x="110" y="598" text-anchor="middle" class="small-text" fill="#666">write_file</text>
  <text x="110" y="611" text-anchor="middle" class="small-text" fill="#666">list_directory</text>

  <rect x="190" y="550" width="120" height="80" rx="6" class="mcp-server-box" />
  <text x="250" y="570" text-anchor="middle" class="title-text" fill="#388e3c">Memory</text>
  <text x="250" y="585" text-anchor="middle" class="small-text" fill="#666">search_nodes</text>
  <text x="250" y="598" text-anchor="middle" class="small-text" fill="#666">store_memory</text>
  <text x="250" y="611" text-anchor="middle" class="small-text" fill="#666">recall_info</text>

  <rect x="330" y="550" width="120" height="80" rx="6" class="mcp-server-box" />
  <text x="390" y="570" text-anchor="middle" class="title-text" fill="#388e3c">Context7</text>
  <text x="390" y="585" text-anchor="middle" class="small-text" fill="#666">get_docs</text>
  <text x="390" y="598" text-anchor="middle" class="small-text" fill="#666">search_docs</text>
  <text x="390" y="611" text-anchor="middle" class="small-text" fill="#666">get_library</text>

  <rect x="470" y="550" width="120" height="80" rx="6" class="mcp-server-box" />
  <text x="530" y="570" text-anchor="middle" class="title-text" fill="#388e3c">Sequential</text>
  <text x="530" y="585" text-anchor="middle" class="small-text" fill="#666">thinking</text>
  <text x="530" y="598" text-anchor="middle" class="small-text" fill="#666">analyze</text>
  <text x="530" y="611" text-anchor="middle" class="small-text" fill="#666">problem</text>

  <!-- Key Process Flow -->
  <text x="350" y="670" text-anchor="middle" class="title-text" fill="#d5d5d5ff">
    Key Proxy Flow: Agent → Coordinator → External Tools → Presence Tracking
  </text>

  <text x="50" y="690" class="small-text" fill="#d5d5d5ff">1. Agents connect via single MCP interface</text>
  <text x="50" y="705" class="small-text" fill="#d5d5d5ff">2. ALL tool calls proxy through coordinator</text>
  <text x="50" y="720" class="small-text" fill="#d5d5d5ff">3. Coordinator updates agent presence + tracks tasks</text>

  <text x="450" y="690" class="small-text" fill="#d5d5d5ff">4. Agents gain full project awareness via proxy</text>
  <text x="450" y="705" class="small-text" fill="#d5d5d5ff">5. Real-time coordination prevents conflicts</text>
  <text x="450" y="720" class="small-text" fill="#d5d5d5ff">6. Single interface → Multiple backends</text>

  <!-- Version info -->
  <text x="1150" y="790" text-anchor="end" class="small-text" fill="#aaa">
    Agent Coordinator v0.1.0
  </text>
</svg>
Before Width: | Height: | Size: 16 KiB

examples/director_demo.exs (new file, 461 additions)
@@ -0,0 +1,461 @@
|
||||
#!/usr/bin/env elixir
|
||||
|
||||
# Director Management Demo Script
|
||||
#
|
||||
# This script demonstrates the director role functionality:
|
||||
# 1. Register a director agent with oversight capabilities
|
||||
# 2. Register multiple standard agents for the director to manage
|
||||
# 3. Show director observing and managing other agents
|
||||
# 4. Demonstrate task assignment, feedback, and redundancy detection
|
||||
# 5. Show autonomous workflow coordination
|
||||
|
||||
Mix.install([
|
||||
{:agent_coordinator, path: "."}
|
||||
])
|
||||
|
||||
defmodule DirectorDemo do
|
||||
alias AgentCoordinator.{TaskRegistry, Inbox, Agent, Task}
|
||||
|
||||
def run do
|
||||
IO.puts("\n🎬 Director Management Demo Starting...")
|
||||
IO.puts("=" <> String.duplicate("=", 50))
|
||||
|
||||
# Start the Agent Coordinator application
|
||||
{:ok, _} = AgentCoordinator.Application.start(:normal, [])
|
||||
:timer.sleep(2000) # Give more time for startup
|
||||
|
||||
# Setup demo scenario
|
||||
setup_demo_scenario()
|
||||
|
||||
# Demonstrate director capabilities
|
||||
demo_director_observations()
|
||||
demo_task_management()
|
||||
demo_redundancy_detection()
|
||||
demo_autonomous_workflow()
|
||||
|
||||
IO.puts("\n✅ Director Management Demo Complete!")
|
||||
IO.puts("=" <> String.duplicate("=", 50))
|
||||
end
|
||||
|
||||
defp setup_demo_scenario do
|
||||
IO.puts("\n📋 Setting up demo scenario...")
|
||||
|
||||
# Register a global director
|
||||
director_opts = %{
|
||||
role: :director,
|
||||
oversight_scope: :global,
|
||||
capabilities: ["management", "coordination", "oversight", "coding"],
|
||||
workspace_path: "/home/ra/agent_coordinator",
|
||||
codebase_id: "agent_coordinator"
|
||||
}
|
||||
|
||||
{:ok, director_id} = TaskRegistry.register_agent("Director Phoenix Eagle", director_opts)
|
||||
IO.puts("✅ Registered Director: #{director_id}")
|
||||
|
||||
# Register several standard agents for the director to manage
|
||||
agents = [
|
||||
{"Frontend Developer Ruby Shark", %{capabilities: ["coding", "testing"], role: :standard}},
|
||||
{"Backend Engineer Silver Wolf", %{capabilities: ["coding", "analysis"], role: :standard}},
|
||||
{"QA Tester Golden Panda", %{capabilities: ["testing", "documentation"], role: :standard}},
|
||||
{"DevOps Engineer Blue Tiger", %{capabilities: ["coding", "review"], role: :standard}}
|
||||
]
|
||||
|
||||
agent_ids = Enum.map(agents, fn {name, opts} ->
|
||||
base_opts = Map.merge(opts, %{
|
||||
workspace_path: "/home/ra/agent_coordinator",
|
||||
codebase_id: "agent_coordinator"
|
||||
})
|
||||
{:ok, agent_id} = TaskRegistry.register_agent(name, base_opts)
|
||||
IO.puts("✅ Registered Agent: #{name} (#{agent_id})")
|
||||
|
||||
# Add some initial tasks to create realistic scenario
|
||||
add_demo_tasks(agent_id, name)
|
||||
|
||||
agent_id
|
||||
end)
|
||||
|
||||
%{director_id: director_id, agent_ids: agent_ids}
|
||||
end
|
||||
|
||||
defp add_demo_tasks(agent_id, agent_name) do
|
||||
tasks = case agent_name do
|
||||
"Frontend Developer" <> _ -> [
|
||||
{"Implement User Dashboard", "Create responsive dashboard with user stats and activity feed"},
|
||||
{"Fix CSS Layout Issues", "Resolve responsive design problems on mobile devices"},
|
||||
{"Add Dark Mode Support", "Implement theme switching with proper contrast ratios"}
|
||||
]
|
||||
"Backend Engineer" <> _ -> [
|
||||
{"Optimize Database Queries", "Review and optimize slow queries in user management system"},
|
||||
{"Implement API Rate Limiting", "Add rate limiting to prevent API abuse"},
|
||||
{"Fix Authentication Bug", "Resolve JWT token refresh issue causing user logouts"}
|
||||
]
|
||||
"QA Tester" <> _ -> [
|
||||
{"Write End-to-End Tests", "Create comprehensive test suite for user authentication flow"},
|
||||
{"Performance Testing", "Conduct load testing on API endpoints"},
|
||||
{"Fix Authentication Bug", "Validate JWT token refresh fix from backend team"} # Intentional duplicate
|
||||
]
|
||||
"DevOps Engineer" <> _ -> [
|
||||
{"Setup CI/CD Pipeline", "Configure automated testing and deployment pipeline"},
|
||||
{"Monitor System Performance", "Setup monitoring dashboards and alerting"},
|
||||
{"Optimize Database Queries", "Database performance tuning and indexing"} # Intentional duplicate
|
||||
]
|
||||
end
|
||||
|
||||
Enum.each(tasks, fn {title, description} ->
|
||||
task = Task.new(title, description, %{
|
||||
priority: Enum.random([:low, :normal, :high]),
|
||||
codebase_id: "agent_coordinator"
|
||||
})
|
||||
Inbox.add_task(agent_id, task)
|
||||
end)
|
||||
end
|
||||
|
||||
defp demo_director_observations do
|
||||
IO.puts("\n👁️ Director Observation Capabilities")
|
||||
IO.puts("-" <> String.duplicate("-", 40))
|
||||
|
||||
# Get the director agent
|
||||
agents = TaskRegistry.list_agents()
|
||||
director = Enum.find(agents, fn agent -> Agent.is_director?(agent) end)
|
||||
|
||||
if director do
|
||||
IO.puts("🔍 Director '#{director.name}' observing all agents...")
|
||||
|
||||
# Simulate director observing agents
|
||||
args = %{
|
||||
"agent_id" => director.id,
|
||||
"scope" => "codebase",
|
||||
"include_activity_history" => true
|
||||
}
|
||||
|
||||
# This would normally be called through MCP, but we'll call directly for demo
|
||||
result = observe_all_agents_demo(director, args)
|
||||
|
||||
case result do
|
||||
{:ok, observation} ->
|
||||
IO.puts("📊 Observation Results:")
|
||||
IO.puts(" - Total Agents: #{observation.total_agents}")
|
||||
IO.puts(" - Oversight Scope: #{observation.oversight_capability}")
|
||||
|
||||
Enum.each(observation.agents, fn agent_info ->
|
||||
task_count = %{
|
||||
pending: length(agent_info.tasks.pending),
|
||||
in_progress: if(agent_info.tasks.in_progress, do: 1, else: 0),
|
||||
completed: length(agent_info.tasks.completed)
|
||||
}
|
||||
|
||||
IO.puts(" 📋 #{agent_info.name}:")
|
||||
IO.puts(" Role: #{agent_info.role} | Status: #{agent_info.status}")
|
||||
IO.puts(" Tasks: #{task_count.pending} pending, #{task_count.in_progress} active, #{task_count.completed} done")
|
||||
IO.puts(" Capabilities: #{Enum.join(agent_info.capabilities, ", ")}")
|
||||
end)
|
||||
|
||||
{:error, reason} ->
|
||||
IO.puts("❌ Observation failed: #{reason}")
|
||||
end
|
||||
else
|
||||
IO.puts("❌ No director found in system")
|
||||
end
|
||||
end
|
||||
|
||||
defp observe_all_agents_demo(director, args) do
|
||||
# Simplified version of the actual function for demo
|
||||
all_agents = TaskRegistry.list_agents()
|
||||
|> Enum.filter(fn a -> a.codebase_id == director.codebase_id end)
|
||||
|
||||
detailed_agents = Enum.map(all_agents, fn target_agent ->
|
||||
task_info = case Inbox.list_tasks(target_agent.id) do
|
||||
{:error, _} -> %{pending: [], in_progress: nil, completed: []}
|
||||
tasks -> tasks
|
||||
end
|
||||
|
||||
%{
|
||||
agent_id: target_agent.id,
|
||||
name: target_agent.name,
|
||||
role: target_agent.role,
|
||||
capabilities: target_agent.capabilities,
|
||||
status: target_agent.status,
|
||||
codebase_id: target_agent.codebase_id,
|
||||
managed_by_director: target_agent.id in (director.managed_agents || []),
|
||||
tasks: task_info
|
||||
}
|
||||
end)
|
||||
|
||||
{:ok, %{
|
||||
director_id: director.id,
|
||||
scope: "codebase",
|
||||
oversight_capability: director.oversight_scope,
|
||||
agents: detailed_agents,
|
||||
total_agents: length(detailed_agents),
|
||||
timestamp: DateTime.utc_now()
|
||||
}}
|
||||
end
|
||||
|
||||
defp demo_task_management do
|
||||
IO.puts("\n📝 Director Task Management")
|
||||
IO.puts("-" <> String.duplicate("-", 40))
|
||||
|
||||
agents = TaskRegistry.list_agents()
|
||||
director = Enum.find(agents, fn agent -> Agent.is_director?(agent) end)
|
||||
standard_agents = Enum.filter(agents, fn agent -> !Agent.is_director?(agent) end)
|
||||
|
||||
if director && length(standard_agents) > 0 do
|
||||
target_agent = Enum.random(standard_agents)
|
||||
|
||||
IO.puts("🎯 Director assigning new task to #{target_agent.name}...")
|
||||
|
||||
# Create a high-priority coordination task
|
||||
new_task = %{
|
||||
"title" => "Team Coordination Meeting",
|
||||
"description" => "Organize cross-functional team sync to align on project priorities and resolve blockers. Focus on identifying dependencies between frontend, backend, and QA work streams.",
|
||||
"priority" => "high",
|
||||
"file_paths" => []
|
||||
}
|
||||
|
||||
# Director assigns the task
|
||||
task = Task.new(new_task["title"], new_task["description"], %{
|
||||
priority: :high,
|
||||
codebase_id: target_agent.codebase_id,
|
||||
assignment_reason: "Director identified need for team alignment",
|
||||
metadata: %{
|
||||
director_assigned: true,
|
||||
director_id: director.id
|
||||
}
|
||||
})
|
||||
|
||||
case Inbox.add_task(target_agent.id, task) do
|
||||
:ok ->
|
||||
IO.puts("✅ Task assigned successfully!")
|
||||
IO.puts(" Task: #{task.title}")
|
||||
IO.puts(" Assigned to: #{target_agent.name}")
|
||||
IO.puts(" Priority: #{task.priority}")
|
||||
IO.puts(" Reason: #{task.assignment_reason}")
|
||||
|
||||
# Update director's managed agents list
|
||||
updated_director = Agent.add_managed_agent(director, target_agent.id)
|
||||
TaskRegistry.update_agent(director.id, updated_director)
|
||||
|
||||
{:error, reason} ->
|
||||
IO.puts("❌ Task assignment failed: #{reason}")
|
||||
end
|
||||
|
||||
IO.puts("\n💬 Director providing task feedback...")
|
||||
|
||||
# Simulate director providing feedback on existing tasks
|
||||
{:ok, tasks} = Inbox.list_tasks(target_agent.id)
|
||||
if length(tasks.pending) > 0 do
|
||||
sample_task = Enum.random(tasks.pending)
|
||||
|
||||
feedback_examples = [
|
||||
"Consider breaking this task into smaller, more manageable subtasks for better tracking.",
|
||||
"This aligns well with the current sprint goals. Prioritize integration with the new API endpoints.",
|
||||
"Coordinate with the QA team before implementation to ensure test coverage is adequate.",
|
||||
"This task may have dependencies on the backend authentication work. Check with the backend team first."
|
||||
]
|
||||
|
||||
feedback = Enum.random(feedback_examples)
|
||||
|
||||
IO.puts("📋 Feedback for task '#{sample_task.title}':")
|
||||
IO.puts(" 💡 #{feedback}")
|
||||
IO.puts(" ⏰ Timestamp: #{DateTime.utc_now()}")
|
||||
end
|
||||
|
||||
else
|
||||
IO.puts("❌ No director or standard agents found for task management demo")
|
||||
end
|
||||
end
|
||||
|
||||
defp demo_redundancy_detection do
|
||||
IO.puts("\n🔍 Director Redundancy Detection")
|
||||
IO.puts("-" <> String.duplicate("-", 40))
|
||||
|
||||
agents = TaskRegistry.list_agents()
|
||||
director = Enum.find(agents, fn agent -> Agent.is_director?(agent) end)
|
||||
|
||||
if director do
|
||||
IO.puts("🔎 Analyzing tasks across all agents for redundancy...")
|
||||
|
||||
# Collect all tasks from all agents
|
||||
all_agents = Enum.filter(agents, fn a -> a.codebase_id == director.codebase_id end)
|
||||
all_tasks = Enum.flat_map(all_agents, fn agent ->
|
||||
case Inbox.list_tasks(agent.id) do
|
||||
{:error, _} -> []
|
||||
tasks ->
|
||||
(tasks.pending ++ (if tasks.in_progress, do: [tasks.in_progress], else: []))
|
||||
|> Enum.map(fn task -> Map.put(task, :agent_id, agent.id) end)
|
||||
end
|
||||
end)
|
||||
|
||||
IO.puts("📊 Total tasks analyzed: #{length(all_tasks)}")
|
||||
|
||||
# Detect redundant tasks (simplified similarity detection)
|
||||
redundant_groups = detect_similar_tasks_demo(all_tasks)
|
||||
|
||||
if length(redundant_groups) > 0 do
|
||||
IO.puts("⚠️ Found #{length(redundant_groups)} groups of potentially redundant tasks:")
|
||||
|
||||
Enum.each(redundant_groups, fn group ->
|
||||
IO.puts("\n 🔄 Redundant Group: '#{group.similarity_key}'")
|
||||
IO.puts(" Task count: #{group.task_count}")
|
||||
|
||||
Enum.each(group.tasks, fn task ->
|
||||
agent = Enum.find(agents, fn a -> a.id == task.agent_id end)
|
||||
agent_name = if agent, do: agent.name, else: "Unknown Agent"
|
||||
IO.puts(" - #{task.title} (#{agent_name})")
|
||||
end)
|
||||
|
||||
IO.puts(" 🎯 Recommendation: Consider consolidating these similar tasks or clearly define distinct responsibilities.")
|
||||
end)
|
||||
|
||||
total_redundant = Enum.sum(Enum.map(redundant_groups, fn g -> g.task_count end))
|
||||
IO.puts("\n📈 Impact Analysis:")
|
||||
IO.puts(" - Total redundant tasks: #{total_redundant}")
|
||||
IO.puts(" - Potential efficiency gain: #{round(total_redundant / length(all_tasks) * 100)}%")
|
||||
|
||||
else
|
||||
IO.puts("✅ No redundant tasks detected. Teams are well-coordinated!")
|
||||
end
|
||||
|
||||
else
|
||||
IO.puts("❌ No director found for redundancy detection")
|
||||
end
|
||||
end
|
||||
|
||||
defp detect_similar_tasks_demo(tasks) do
|
||||
# Group tasks by normalized title keywords
|
||||
tasks
|
||||
|> Enum.group_by(fn task ->
|
||||
# Normalize title for comparison
|
||||
String.downcase(task.title)
|
||||
|> String.replace(~r/[^\w\s]/, "")
|
||||
|> String.split()
|
||||
|> Enum.take(3)
|
||||
|> Enum.join(" ")
|
||||
end)
|
||||
|> Enum.filter(fn {_key, group_tasks} -> length(group_tasks) > 1 end)
|
||||
|> Enum.map(fn {key, group_tasks} ->
|
||||
%{
|
||||
similarity_key: key,
|
||||
tasks: Enum.map(group_tasks, fn task ->
|
||||
%{
|
||||
task_id: task.id,
|
||||
title: task.title,
|
||||
agent_id: task.agent_id,
|
||||
codebase_id: task.codebase_id
|
||||
}
|
||||
end),
|
||||
task_count: length(group_tasks)
|
||||
}
|
||||
end)
|
||||
end
|
||||
|
||||
defp demo_autonomous_workflow do
|
||||
IO.puts("\n🤖 Director Autonomous Workflow Coordination")
|
||||
IO.puts("-" <> String.duplicate("-", 50))
|
||||
|
||||
agents = TaskRegistry.list_agents()
|
||||
director = Enum.find(agents, fn agent -> Agent.is_director?(agent) end)
|
||||
standard_agents = Enum.filter(agents, fn agent -> !Agent.is_director?(agent) end)
|
||||
|
||||
if director && length(standard_agents) >= 2 do
|
||||
IO.puts("🎭 Simulating autonomous workflow coordination scenario...")
|
||||
      IO.puts("\nScenario: Director detects that authentication bug fixes require coordination")
      IO.puts("between Backend Engineer and QA Tester.")

      # Find agents working on authentication
      backend_agent = Enum.find(standard_agents, fn agent ->
        String.contains?(agent.name, "Backend")
      end)

      qa_agent = Enum.find(standard_agents, fn agent ->
        String.contains?(agent.name, "QA")
      end)

      if backend_agent && qa_agent do
        IO.puts("\n1️⃣ Director sending coordination input to Backend Engineer...")

        coordination_message = """
        🤖 Director Coordination:

        I've identified that your JWT authentication fix needs to be coordinated with QA testing.

        Action Required:
        - Notify QA team when your fix is ready for testing
        - Provide test credentials and reproduction steps
        - Schedule knowledge transfer session if needed

        This will help avoid testing delays and ensure comprehensive coverage.
        """

        # Simulate sending input to backend agent
        IO.puts("📤 Sending message to #{backend_agent.name}:")
        IO.puts("   Input Type: chat_message")
        IO.puts("   Content: [Coordination message about JWT fix coordination]")
        IO.puts("   Context: authentication_workflow_coordination")

        IO.puts("\n2️⃣ Director sending parallel input to QA Tester...")

        qa_message = """
        🤖 Director Coordination:

        Backend team is working on JWT authentication fix. Please prepare for coordinated testing.

        Action Required:
        - Review current authentication test cases
        - Prepare test environment for JWT token scenarios
        - Block time for testing once backend fix is ready

        I'll facilitate the handoff between teams when implementation is complete.
        """

        IO.puts("📤 Sending message to #{qa_agent.name}:")
        IO.puts("   Input Type: chat_message")
        IO.puts("   Content: [Coordination message about authentication testing prep]")
        IO.puts("   Context: authentication_workflow_coordination")

        IO.puts("\n3️⃣ Director scheduling follow-up coordination...")

        # Create coordination task
        coordination_task = Task.new(
          "Authentication Fix Coordination Follow-up",
          "Check progress on JWT fix coordination between backend and QA teams. Ensure handoff is smooth and testing is proceeding without blockers.",
          %{
            priority: :normal,
            codebase_id: director.codebase_id,
            assignment_reason: "Autonomous workflow coordination",
            metadata: %{
              workflow_type: "authentication_coordination",
              involves_agents: [backend_agent.id, qa_agent.id],
              coordination_phase: "follow_up"
            }
          }
        )

        Inbox.add_task(director.id, coordination_task)
        IO.puts("✅ Created follow-up coordination task for director")

        IO.puts("\n🎯 Autonomous Workflow Benefits Demonstrated:")
        IO.puts("   ✅ Proactive cross-team coordination")
        IO.puts("   ✅ Parallel communication to reduce delays")
        IO.puts("   ✅ Automated follow-up task creation")
        IO.puts("   ✅ Context-aware workflow management")
        IO.puts("   ✅ Human-out-of-the-loop efficiency")

        IO.puts("\n🔮 Next Steps in Full Implementation:")
        IO.puts("   - VSCode integration for real agent messaging")
        IO.puts("   - Workflow templates for common coordination patterns")
        IO.puts("   - ML-based task dependency detection")
        IO.puts("   - Automated testing trigger coordination")
        IO.puts("   - Cross-codebase workflow orchestration")
      else
        IO.puts("❌ Could not find Backend and QA agents for workflow demo")
      end
    else
      IO.puts("❌ Insufficient agents for autonomous workflow demonstration")
    end
  end
end

# Run the demo
DirectorDemo.run()
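The demo above locates collaborators by substring-matching agent display names. A minimal standalone sketch of that lookup (the agent maps here are stand-ins for the real agent structs):

```elixir
# Stand-in agent records; the real structs carry far more fields.
agents = [%{name: "Backend Engineer"}, %{name: "QA Tester"}]

# Same lookup the demo uses: first agent whose name contains the fragment.
backend = Enum.find(agents, &String.contains?(&1.name, "Backend"))
qa = Enum.find(agents, &String.contains?(&1.name, "QA"))

IO.puts("#{backend.name} / #{qa.name}")
```

Note that substring matching is fragile when two agents share a name fragment: `Enum.find/2` returns only the first match, or `nil` when nothing matches, which is why the demo guards with `if backend_agent && qa_agent do`.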
@@ -40,8 +40,10 @@ defmodule AgentCoordinator.ActivityTracker do
      "move_file" ->
        source = Map.get(args, "source")
        dest = Map.get(args, "destination")
        files = [source, dest] |> Enum.filter(&(&1))
        {"Moving #{Path.basename(source || "file")} to #{Path.basename(dest || "destination")}", files}
        files = [source, dest] |> Enum.filter(& &1)

        {"Moving #{Path.basename(source || "file")} to #{Path.basename(dest || "destination")}",
         files}

      # VS Code operations
      "vscode_read_file" ->
@@ -54,6 +56,7 @@ defmodule AgentCoordinator.ActivityTracker do

      "vscode_set_editor_content" ->
        file_path = Map.get(args, "file_path")

        if file_path do
          {"Editing #{Path.basename(file_path)} in VS Code", [file_path]}
        else
@@ -114,6 +117,7 @@ defmodule AgentCoordinator.ActivityTracker do
      # Test operations
      "runTests" ->
        files = Map.get(args, "files", [])

        if files != [] do
          file_names = Enum.map(files, &Path.basename/1)
          {"Running tests in #{Enum.join(file_names, ", ")}", files}
@@ -153,6 +157,7 @@ defmodule AgentCoordinator.ActivityTracker do
      # HTTP/Web operations
      "fetch_webpage" ->
        urls = Map.get(args, "urls", [])

        if urls != [] do
          {"Fetching #{length(urls)} webpages", []}
        else
@@ -162,6 +167,7 @@ defmodule AgentCoordinator.ActivityTracker do
      # Development operations
      "get_errors" ->
        files = Map.get(args, "filePaths", [])

        if files != [] do
          file_names = Enum.map(files, &Path.basename/1)
          {"Checking errors in #{Enum.join(file_names, ", ")}", files}
@@ -180,6 +186,7 @@ defmodule AgentCoordinator.ActivityTracker do

      "elixir-docs" ->
        modules = Map.get(args, "modules", [])

        if modules != [] do
          {"Getting docs for #{Enum.join(modules, ", ")}", []}
        else
@@ -196,6 +203,7 @@ defmodule AgentCoordinator.ActivityTracker do

      "pylanceFileSyntaxErrors" ->
        file_uri = Map.get(args, "fileUri")

        if file_uri do
          file_path = uri_to_path(file_uri)
          {"Checking syntax errors in #{Path.basename(file_path)}", [file_path]}
@@ -236,7 +244,7 @@ defmodule AgentCoordinator.ActivityTracker do
  """
  def update_agent_activity(agent_id, tool_name, args) do
    {activity, files} = infer_activity(tool_name, args)

    case TaskRegistry.get_agent(agent_id) do
      {:ok, agent} ->
        updated_agent = Agent.update_activity(agent, activity, files)
@@ -267,11 +275,12 @@ defmodule AgentCoordinator.ActivityTracker do

  defp extract_file_path(args) do
    # Try various common parameter names for file paths
    args["path"] || args["filePath"] || args["file_path"] ||
      args["source"] || args["destination"] || args["fileUri"] |> uri_to_path()
    args["path"] || args["filePath"] || args["file_path"] ||
      args["source"] || args["destination"] || args["fileUri"] |> uri_to_path()
  end

  defp uri_to_path(nil), do: nil

  defp uri_to_path(uri) when is_binary(uri) do
    if String.starts_with?(uri, "file://") do
      String.replace_prefix(uri, "file://", "")
@@ -288,4 +297,4 @@ defmodule AgentCoordinator.ActivityTracker do
    |> Enum.join(" ")
    |> String.capitalize()
  end
end
end
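The `uri_to_path/1` helper in the hunk above strips a `file://` scheme before any path handling and passes other values through unchanged. A self-contained sketch of the same idea (the module name here is illustrative, not the project's API):

```elixir
defmodule UriSketch do
  # nil stays nil so callers can chain this safely over optional arguments.
  def uri_to_path(nil), do: nil

  # Strip a file:// scheme; leave plain paths and other strings untouched.
  def uri_to_path(uri) when is_binary(uri) do
    if String.starts_with?(uri, "file://") do
      String.replace_prefix(uri, "file://", "")
    else
      uri
    end
  end
end
```

Accepting `nil` as a first-class input matters here because `extract_file_path/1` pipes `args["fileUri"]` into this helper, and that key is frequently absent.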
@@ -16,7 +16,10 @@ defmodule AgentCoordinator.Agent do
             :metadata,
             :current_activity,
             :current_files,
             :activity_history
             :activity_history,
             :role,
             :managed_agents,
             :oversight_scope
           ]}
  defstruct [
    :id,
@@ -30,11 +33,23 @@ defmodule AgentCoordinator.Agent do
    :metadata,
    :current_activity,
    :current_files,
    :activity_history
    :activity_history,
    :role,
    :managed_agents,
    :oversight_scope
  ]

  @type status :: :idle | :busy | :offline | :error
  @type capability :: :coding | :testing | :documentation | :analysis | :review
  @type capability ::
          :coding
          | :testing
          | :documentation
          | :analysis
          | :review
          | :management
          | :coordination
          | :oversight
  @type role :: :standard | :director | :project_manager

  @type t :: %__MODULE__{
          id: String.t(),
@@ -48,29 +63,39 @@ defmodule AgentCoordinator.Agent do
          metadata: map(),
          current_activity: String.t() | nil,
          current_files: [String.t()],
          activity_history: [map()]
          activity_history: [map()],
          role: role(),
          managed_agents: [String.t()],
          oversight_scope: :codebase | :global
        }

  def new(name, capabilities, opts \\ []) do
    workspace_path = Keyword.get(opts, :workspace_path)

    # Use smart codebase identification
    codebase_id = case Keyword.get(opts, :codebase_id) do
      nil when workspace_path ->
        # Auto-detect from workspace
        case AgentCoordinator.CodebaseIdentifier.identify_codebase(workspace_path) do
          %{canonical_id: canonical_id} -> canonical_id
          _ -> Path.basename(workspace_path || "default")
        end

      nil ->
        "default"

      explicit_id ->
        # Normalize the provided ID
        AgentCoordinator.CodebaseIdentifier.normalize_codebase_reference(explicit_id, workspace_path)
    end

    codebase_id =
      case Keyword.get(opts, :codebase_id) do
        nil when workspace_path ->
          # Auto-detect from workspace
          case AgentCoordinator.CodebaseIdentifier.identify_codebase(workspace_path) do
            %{canonical_id: canonical_id} -> canonical_id
            _ -> Path.basename(workspace_path || "default")
          end

        nil ->
          "default"

        explicit_id ->
          # Normalize the provided ID
          AgentCoordinator.CodebaseIdentifier.normalize_codebase_reference(
            explicit_id,
            workspace_path
          )
      end

    # Determine role based on capabilities
    role = determine_role(capabilities)

    %__MODULE__{
      id: UUID.uuid4(),
      name: name,
@@ -83,7 +108,11 @@ defmodule AgentCoordinator.Agent do
      metadata: Keyword.get(opts, :metadata, %{}),
      current_activity: nil,
      current_files: [],
      activity_history: []
      activity_history: [],
      role: role,
      managed_agents: [],
      oversight_scope:
        if(role == :director, do: Keyword.get(opts, :oversight_scope, :codebase), else: :codebase)
    }
  end

@@ -98,24 +127,22 @@ defmodule AgentCoordinator.Agent do
      files: files,
      timestamp: DateTime.utc_now()
    }

    new_history = [activity_entry | agent.activity_history]
    |> Enum.take(10)

    %{agent |
      current_activity: activity,
      current_files: files,
      activity_history: new_history,
      last_heartbeat: DateTime.utc_now()

    new_history =
      [activity_entry | agent.activity_history]
      |> Enum.take(10)

    %{
      agent
      | current_activity: activity,
        current_files: files,
        activity_history: new_history,
        last_heartbeat: DateTime.utc_now()
    }
  end

  def clear_activity(agent) do
    %{agent |
      current_activity: nil,
      current_files: [],
      last_heartbeat: DateTime.utc_now()
    }
    %{agent | current_activity: nil, current_files: [], last_heartbeat: DateTime.utc_now()}
  end

  def assign_task(agent, task_id) do
@@ -151,4 +178,55 @@ defmodule AgentCoordinator.Agent do
  def can_work_cross_codebase?(agent) do
    Map.get(agent.metadata, :cross_codebase_capable, false)
  end

  # Director-specific functions

  def is_director?(agent) do
    agent.role == :director
  end

  def is_manager?(agent) do
    agent.role in [:director, :project_manager]
  end

  def can_manage_agent?(director, target_agent) do
    case director.oversight_scope do
      :global -> true
      :codebase -> director.codebase_id == target_agent.codebase_id
    end
  end

  def add_managed_agent(director, agent_id) do
    if is_manager?(director) do
      managed_agents = [agent_id | director.managed_agents] |> Enum.uniq()
      %{director | managed_agents: managed_agents}
    else
      director
    end
  end

  def remove_managed_agent(director, agent_id) do
    if is_manager?(director) do
      managed_agents = director.managed_agents |> Enum.reject(&(&1 == agent_id))
      %{director | managed_agents: managed_agents}
    else
      director
    end
  end

  # Private helper to determine role from capabilities
  defp determine_role(capabilities) do
    management_caps = [:management, :coordination, :oversight]

    cond do
      Enum.any?(management_caps, &(&1 in capabilities)) and :oversight in capabilities ->
        :director

      :management in capabilities ->
        :project_manager

      true ->
        :standard
    end
  end
end
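The `determine_role/1` helper added in the last hunk maps a capability list onto the new `:standard | :project_manager | :director` roles: `:oversight` promotes to director, `:management` alone to project manager, anything else stays standard. A standalone sketch of that mapping (module name illustrative):

```elixir
defmodule RoleSketch do
  @management_caps [:management, :coordination, :oversight]

  # Mirrors the capability-to-role mapping from the hunk above.
  def determine_role(capabilities) do
    cond do
      # Any management-style capability plus explicit :oversight => director.
      Enum.any?(@management_caps, &(&1 in capabilities)) and :oversight in capabilities ->
        :director

      :management in capabilities ->
        :project_manager

      true ->
        :standard
    end
  end
end
```

Note that the `Enum.any?/2` clause is redundant whenever `:oversight` is present (since `:oversight` is itself in the management set), so the first branch effectively reduces to ":oversight present".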
@@ -12,15 +12,15 @@ defmodule AgentCoordinator.CodebaseIdentifier do
  require Logger

  @type codebase_info :: %{
    canonical_id: String.t(),
    display_name: String.t(),
    workspace_path: String.t(),
    repository_url: String.t() | nil,
    git_remote: String.t() | nil,
    branch: String.t() | nil,
    commit_hash: String.t() | nil,
    identification_method: :git_remote | :git_local | :folder_name | :custom
  }
          canonical_id: String.t(),
          display_name: String.t(),
          workspace_path: String.t(),
          repository_url: String.t() | nil,
          git_remote: String.t() | nil,
          branch: String.t() | nil,
          commit_hash: String.t() | nil,
          identification_method: :git_remote | :git_local | :folder_name | :custom
        }

  @doc """
  Identify a codebase from a workspace path, generating a canonical ID.
@@ -56,11 +56,12 @@ defmodule AgentCoordinator.CodebaseIdentifier do
  }
  """
  def identify_codebase(workspace_path, opts \\ [])

  def identify_codebase(nil, opts) do
    custom_id = Keyword.get(opts, :custom_id, "default")
    build_custom_codebase_info(nil, custom_id)
  end

  def identify_codebase(workspace_path, opts) do
    custom_id = Keyword.get(opts, :custom_id)

@@ -128,15 +129,16 @@ defmodule AgentCoordinator.CodebaseIdentifier do

  defp identify_git_codebase(workspace_path) do
    with {:ok, git_info} <- get_git_info(workspace_path) do
      canonical_id = case git_info.remote_url do
        nil ->
          # Local git repo without remote
          "git-local:#{git_info.repo_name}"
      canonical_id =
        case git_info.remote_url do
          nil ->
            # Local git repo without remote
            "git-local:#{git_info.repo_name}"

        remote_url ->
          # Extract canonical identifier from remote URL
          extract_canonical_from_remote(remote_url)
      end
          remote_url ->
            # Extract canonical identifier from remote URL
            extract_canonical_from_remote(remote_url)
        end

      %{
        canonical_id: canonical_id,
@@ -166,7 +168,7 @@ defmodule AgentCoordinator.CodebaseIdentifier do
      identification_method: :folder_name
    }
  end

  defp identify_folder_codebase(workspace_path) do
    folder_name = Path.basename(workspace_path)

@@ -183,6 +185,7 @@ defmodule AgentCoordinator.CodebaseIdentifier do
  end

  defp git_repository?(workspace_path) when is_nil(workspace_path), do: false

  defp git_repository?(workspace_path) do
    File.exists?(Path.join(workspace_path, ".git"))
  end
@@ -201,26 +204,34 @@ defmodule AgentCoordinator.CodebaseIdentifier do
    commit_hash = String.trim(commit_hash)

    # Try to get remote URL
    {remote_info, _remote_result_use_me?} = case System.cmd("git", ["remote", "-v"], cd: workspace_path) do
      {output, 0} when output != "" ->
        # Parse remote output to extract origin URL
        lines = String.split(String.trim(output), "\n")
        origin_line = Enum.find(lines, fn line ->
          String.starts_with?(line, "origin") and String.contains?(line, "(fetch)")
        end)
    {remote_info, _remote_result_use_me?} =
      case System.cmd("git", ["remote", "-v"], cd: workspace_path) do
        {output, 0} when output != "" ->
          # Parse remote output to extract origin URL
          lines = String.split(String.trim(output), "\n")

        case origin_line do
          nil -> {nil, :no_origin}
          line ->
            # Extract URL from "origin <url> (fetch)"
            url = line
            |> String.split()
            |> Enum.at(1)
            {url, :ok}
        end
          origin_line =
            Enum.find(lines, fn line ->
              String.starts_with?(line, "origin") and String.contains?(line, "(fetch)")
            end)

      _ -> {nil, :no_remotes}
    end
          case origin_line do
            nil ->
              {nil, :no_origin}

            line ->
              # Extract URL from "origin <url> (fetch)"
              url =
                line
                |> String.split()
                |> Enum.at(1)

              {url, :ok}
          end

        _ ->
          {nil, :no_remotes}
      end

    git_info = %{
      repo_name: repo_name,
@@ -267,6 +278,7 @@ defmodule AgentCoordinator.CodebaseIdentifier do
    case Regex.run(regex, url) do
      [_, owner, repo] ->
        "github.com/#{owner}/#{repo}"

      _ ->
        "github.com/unknown"
    end
@@ -279,6 +291,7 @@ defmodule AgentCoordinator.CodebaseIdentifier do
    case Regex.run(regex, url) do
      [_, owner, repo] ->
        "gitlab.com/#{owner}/#{repo}"

      _ ->
        "gitlab.com/unknown"
    end
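The last two hunks canonicalize a git remote URL into `github.com/owner/repo` (or `gitlab.com/...`) via a regex whose definition sits outside the diff context. A sketch of that extraction with an assumed pattern (the regex here is a guess that handles both SSH and HTTPS remotes; it is not the project's actual one):

```elixir
defmodule RemoteSketch do
  # Assumed pattern: matches "git@github.com:owner/repo.git" and
  # "https://github.com/owner/repo", capturing owner and repo.
  @github ~r{github\.com[:/]([^/]+)/([^/.]+)(?:\.git)?$}

  def canonical(url) do
    case Regex.run(@github, url) do
      [_, owner, repo] -> "github.com/#{owner}/#{repo}"
      _ -> "github.com/unknown"
    end
  end
end
```

The same shape, with the host swapped, covers the GitLab branch of the hunk. Normalizing both remote styles to one string is what lets two checkouts of the same repository share a `canonical_id`.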
@@ -14,11 +14,11 @@ defmodule AgentCoordinator.HttpInterface do
  require Logger
  alias AgentCoordinator.{MCPServer, ToolFilter, SessionManager}

  plug Plug.Logger
  plug :match
  plug Plug.Parsers, parsers: [:json], json_decoder: Jason
  plug :put_cors_headers
  plug :dispatch
  plug(Plug.Logger)
  plug(:match)
  plug(Plug.Parsers, parsers: [:json], json_decoder: Jason)
  plug(:put_cors_headers)
  plug(:dispatch)

  @doc """
  Start the HTTP server on the specified port.
@@ -26,7 +26,7 @@ defmodule AgentCoordinator.HttpInterface do
  def start_link(opts \\ []) do
    port = Keyword.get(opts, :port, 8080)

    Logger.info("Starting Agent Coordinator HTTP interface on port #{port}")
    IO.puts(:stderr, "Starting Agent Coordinator HTTP interface on port #{port}")

    Plug.Cowboy.http(__MODULE__, [],
      port: port,
@@ -109,9 +109,10 @@ defmodule AgentCoordinator.HttpInterface do
    all_tools = MCPServer.get_tools()
    filtered_tools = ToolFilter.filter_tools(all_tools, context)

    tool_allowed = Enum.any?(filtered_tools, fn tool ->
      Map.get(tool, "name") == tool_name
    end)
    tool_allowed =
      Enum.any?(filtered_tools, fn tool ->
        Map.get(tool, "name") == tool_name
      end)

    if not tool_allowed do
      send_json_response(conn, 403, %{
@@ -158,7 +159,8 @@ defmodule AgentCoordinator.HttpInterface do
        send_json_response(conn, 400, %{error: error})

      unexpected ->
        Logger.error("Unexpected MCP response: #{inspect(unexpected)}")
        IO.puts(:stderr, "Unexpected MCP response: #{inspect(unexpected)}")

        send_json_response(conn, 500, %{
          error: %{
            code: -32603,
@@ -187,6 +189,7 @@ defmodule AgentCoordinator.HttpInterface do
    case method do
      "tools/call" ->
        tool_name = get_in(enhanced_request, ["params", "name"])

        if tool_allowed_for_context?(tool_name, context) do
          execute_mcp_request(conn, enhanced_request, context)
        else
@@ -275,20 +278,25 @@ defmodule AgentCoordinator.HttpInterface do
    case validate_session_for_method("stream/subscribe", conn, context) do
      {:ok, session_info} ->
        # Set up SSE headers
        conn = conn
        |> put_resp_content_type("text/event-stream")
        |> put_mcp_headers()
        |> put_resp_header("cache-control", "no-cache")
        |> put_resp_header("connection", "keep-alive")
        |> put_resp_header("access-control-allow-credentials", "true")
        |> send_chunked(200)
        conn =
          conn
          |> put_resp_content_type("text/event-stream")
          |> put_mcp_headers()
          |> put_resp_header("cache-control", "no-cache")
          |> put_resp_header("connection", "keep-alive")
          |> put_resp_header("access-control-allow-credentials", "true")
          |> send_chunked(200)

        # Send initial connection event
        {:ok, conn} = chunk(conn, format_sse_event("connected", %{
          session_id: Map.get(session_info, :agent_id, "anonymous"),
          protocol_version: "2025-06-18",
          timestamp: DateTime.utc_now() |> DateTime.to_iso8601()
        }))
        {:ok, conn} =
          chunk(
            conn,
            format_sse_event("connected", %{
              session_id: Map.get(session_info, :agent_id, "anonymous"),
              protocol_version: "2025-06-18",
              timestamp: DateTime.utc_now() |> DateTime.to_iso8601()
            })
          )

        # Start streaming loop
        stream_mcp_events(conn, session_info, context)
@@ -307,17 +315,22 @@ defmodule AgentCoordinator.HttpInterface do
    # Send periodic heartbeat for now
    try do
      :timer.sleep(1000)
      {:ok, conn} = chunk(conn, format_sse_event("heartbeat", %{
        timestamp: DateTime.utc_now() |> DateTime.to_iso8601(),
        session_id: Map.get(session_info, :agent_id, "anonymous")
      }))

      {:ok, conn} =
        chunk(
          conn,
          format_sse_event("heartbeat", %{
            timestamp: DateTime.utc_now() |> DateTime.to_iso8601(),
            session_id: Map.get(session_info, :agent_id, "anonymous")
          })
        )

      # Continue streaming (this would be event-driven in production)
      stream_mcp_events(conn, session_info, context)
    rescue
      # Client disconnected
      _ ->
        Logger.info("SSE client disconnected")
        IO.puts(:stderr, "SSE client disconnected")
        conn
    end
  end
@@ -347,10 +360,11 @@ defmodule AgentCoordinator.HttpInterface do

  defp cowboy_dispatch do
    [
      {:_, [
        {"/mcp/ws", AgentCoordinator.WebSocketHandler, []},
        {:_, Plug.Cowboy.Handler, {__MODULE__, []}}
      ]}
      {:_,
       [
         {"/mcp/ws", AgentCoordinator.WebSocketHandler, []},
         {:_, Plug.Cowboy.Handler, {__MODULE__, []}}
       ]}
    ]
  end

@@ -379,8 +393,10 @@ defmodule AgentCoordinator.HttpInterface do
    cond do
      forwarded_for ->
        forwarded_for |> String.split(",") |> List.first() |> String.trim()

      real_ip ->
        real_ip

      true ->
        conn.remote_ip |> :inet.ntoa() |> to_string()
    end
@@ -394,27 +410,37 @@ defmodule AgentCoordinator.HttpInterface do
    conn
    |> put_resp_header("access-control-allow-origin", allowed_origin)
    |> put_resp_header("access-control-allow-methods", "GET, POST, OPTIONS")
    |> put_resp_header("access-control-allow-headers", "content-type, authorization, mcp-session-id, mcp-protocol-version, x-session-id")
    |> put_resp_header(
      "access-control-allow-headers",
      "content-type, authorization, mcp-session-id, mcp-protocol-version, x-session-id"
    )
    |> put_resp_header("access-control-expose-headers", "mcp-protocol-version, server")
    |> put_resp_header("access-control-max-age", "86400")
  end

  defp validate_origin(nil), do: "*" # No origin header (direct API calls)
  # No origin header (direct API calls)
  defp validate_origin(nil), do: "*"

  defp validate_origin(origin) do
    # Allow localhost and development origins
    case URI.parse(origin) do
      %URI{host: host} when host in ["localhost", "127.0.0.1", "::1"] -> origin
      %URI{host: host} when host in ["localhost", "127.0.0.1", "::1"] ->
        origin

      %URI{host: host} when is_binary(host) ->
        # Allow HTTPS origins and known development domains
        if String.starts_with?(origin, "https://") or
          String.contains?(host, ["localhost", "127.0.0.1", "dev", "local"]) do
             String.contains?(host, ["localhost", "127.0.0.1", "dev", "local"]) do
          origin
        else
          # For production, be more restrictive
          Logger.warning("Potentially unsafe origin: #{origin}")
          "*" # Fallback for now, could be more restrictive
          IO.puts(:stderr, "Potentially unsafe origin: #{origin}")
          # Fallback for now, could be more restrictive
          "*"
        end
      _ -> "*"

      _ ->
        "*"
    end
  end

@@ -434,9 +460,10 @@ defmodule AgentCoordinator.HttpInterface do
  defp validate_mcp_request(params) when is_map(params) do
    required_fields = ["jsonrpc", "method"]

    missing_fields = Enum.filter(required_fields, fn field ->
      not Map.has_key?(params, field)
    end)
    missing_fields =
      Enum.filter(required_fields, fn field ->
        not Map.has_key?(params, field)
      end)

    cond do
      not Enum.empty?(missing_fields) ->
@@ -460,15 +487,16 @@ defmodule AgentCoordinator.HttpInterface do
    {session_id, session_info} = get_session_info(conn)

    # Add context metadata to request params
    enhanced_params = Map.get(mcp_request, "params", %{})
    |> Map.put("_session_id", session_id)
    |> Map.put("_session_info", session_info)
    |> Map.put("_client_context", %{
      connection_type: context.connection_type,
      security_level: context.security_level,
      remote_ip: get_remote_ip(conn),
      user_agent: context.user_agent
    })
    enhanced_params =
      Map.get(mcp_request, "params", %{})
      |> Map.put("_session_id", session_id)
      |> Map.put("_session_info", session_info)
      |> Map.put("_client_context", %{
        connection_type: context.connection_type,
        security_level: context.security_level,
        remote_ip: get_remote_ip(conn),
        user_agent: context.user_agent
      })

    Map.put(mcp_request, "params", enhanced_params)
  end
@@ -479,17 +507,21 @@ defmodule AgentCoordinator.HttpInterface do
      [session_token] when byte_size(session_token) > 0 ->
        case SessionManager.validate_session(session_token) do
          {:ok, session_info} ->
            {session_info.agent_id, %{
              token: session_token,
              agent_id: session_info.agent_id,
              capabilities: session_info.capabilities,
              expires_at: session_info.expires_at,
              validated: true
            }}
            {session_info.agent_id,
             %{
               token: session_token,
               agent_id: session_info.agent_id,
               capabilities: session_info.capabilities,
               expires_at: session_info.expires_at,
               validated: true
             }}

          {:error, reason} ->
            Logger.warning("Invalid MCP session token: #{reason}")
            IO.puts(:stderr, "Invalid MCP session token: #{reason}")
            # Fall back to generating anonymous session
            anonymous_id = "http_anonymous_" <> (:crypto.strong_rand_bytes(8) |> Base.encode16(case: :lower))
            anonymous_id =
              "http_anonymous_" <> (:crypto.strong_rand_bytes(8) |> Base.encode16(case: :lower))

            {anonymous_id, %{validated: false, reason: reason}}
        end

@@ -498,9 +530,12 @@ defmodule AgentCoordinator.HttpInterface do
    case get_req_header(conn, "x-session-id") do
      [session_id] when byte_size(session_id) > 0 ->
        {session_id, %{validated: false, legacy: true}}

      _ ->
        # No session header, generate anonymous session
        anonymous_id = "http_anonymous_" <> (:crypto.strong_rand_bytes(8) |> Base.encode16(case: :lower))
        anonymous_id =
          "http_anonymous_" <> (:crypto.strong_rand_bytes(8) |> Base.encode16(case: :lower))

        {anonymous_id, %{validated: false, anonymous: true}}
    end
  end
@@ -512,27 +547,31 @@ defmodule AgentCoordinator.HttpInterface do
    case Map.get(session_info, :validated, false) do
      true ->
        {:ok, session_info}

      false ->
        reason = Map.get(session_info, :reason, "Session not authenticated")
        {:error, %{
          code: -32001,
          message: "Authentication required",
          data: %{reason: reason}
        }}

        {:error,
         %{
           code: -32001,
           message: "Authentication required",
           data: %{reason: reason}
         }}
    end
  end

  defp validate_session_for_method(method, conn, context) do
    # Define which methods require authenticated sessions
    authenticated_methods = MapSet.new([
      "agents/register",
      "agents/unregister",
      "agents/heartbeat",
      "tasks/create",
      "tasks/complete",
      "codebase/register",
      "stream/subscribe"
    ])
    authenticated_methods =
      MapSet.new([
        "agents/register",
        "agents/unregister",
        "agents/heartbeat",
        "tasks/create",
        "tasks/complete",
        "codebase/register",
        "stream/subscribe"
      ])

    if MapSet.member?(authenticated_methods, method) do
      require_authenticated_session(conn, context)
@@ -559,7 +598,8 @@ defmodule AgentCoordinator.HttpInterface do
        send_json_response(conn, 400, response)

      unexpected ->
        Logger.error("Unexpected MCP response: #{inspect(unexpected)}")
        IO.puts(:stderr, "Unexpected MCP response: #{inspect(unexpected)}")

        send_json_response(conn, 500, %{
          jsonrpc: "2.0",
          id: Map.get(mcp_request, "id"),
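The `validate_origin/1` hunks above gate CORS: localhost origins are echoed back, HTTPS and known development hosts pass, and anything else falls back to `"*"` with a warning. A condensed, self-contained sketch of that decision (module name illustrative; the real function also logs the unsafe case):

```elixir
defmodule OriginSketch do
  # No Origin header (direct API calls): allow any origin.
  def validate_origin(nil), do: "*"

  def validate_origin(origin) do
    case URI.parse(origin) do
      # Loopback hosts are always trusted.
      %URI{host: host} when host in ["localhost", "127.0.0.1", "::1"] ->
        origin

      # HTTPS origins and development-looking hosts pass through.
      %URI{host: host} when is_binary(host) ->
        if String.starts_with?(origin, "https://") or
             String.contains?(host, ["localhost", "127.0.0.1", "dev", "local"]) do
          origin
        else
          "*"
        end

      # Unparseable origins get the permissive fallback.
      _ ->
        "*"
    end
  end
end
```

Note the permissive fallback: returning `"*"` for an unrecognized origin allows the request rather than rejecting it, which the inline comment in the diff itself flags as something to tighten for production.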
@@ -32,6 +32,10 @@ defmodule AgentCoordinator.Inbox do
    GenServer.call(via_tuple(agent_id), {:add_task, task}, 30_000)
  end

  def remove_task(agent_id, task_id) do
    GenServer.call(via_tuple(agent_id), {:remove_task, task_id}, 30_000)
  end

  def get_next_task(agent_id) do
    GenServer.call(via_tuple(agent_id), :get_next_task, 15_000)
  end
@@ -92,6 +96,47 @@ defmodule AgentCoordinator.Inbox do
    {:reply, :ok, new_state}
  end

  def handle_call({:remove_task, task_id}, _from, state) do
    # Remove task from pending tasks
    {removed_task, remaining_pending} =
      Enum.reduce(state.pending_tasks, {nil, []}, fn task, {found_task, acc} ->
        if task.id == task_id do
          {task, acc}
        else
          {found_task, [task | acc]}
        end
      end)

    # Check if task is currently in progress
    {new_in_progress, removed_from_progress} =
      if state.in_progress_task && state.in_progress_task.id == task_id do
        {nil, state.in_progress_task}
      else
        {state.in_progress_task, nil}
      end

    final_removed_task = removed_task || removed_from_progress

    if final_removed_task do
      new_state = %{
        state
        | pending_tasks: Enum.reverse(remaining_pending),
          in_progress_task: new_in_progress
      }

      # Broadcast task removed
      Phoenix.PubSub.broadcast(
        AgentCoordinator.PubSub,
        "agent:#{state.agent_id}",
        {:task_removed, final_removed_task}
      )

      {:reply, :ok, new_state}
    else
      {:reply, {:error, :task_not_found}, state}
    end
  end

  def handle_call(:get_next_task, _from, state) do
    case state.pending_tasks do
      [] ->
@@ -102,7 +102,10 @@ defmodule AgentCoordinator.InterfaceManager do
      metrics: initialize_metrics()
    }

    Logger.info("Interface Manager starting with config: #{inspect(config.enabled_interfaces)}")
    IO.puts(
      :stderr,
      "Interface Manager starting with config: #{inspect(config.enabled_interfaces)}"
    )

    # Start enabled interfaces
    {:ok, state, {:continue, :start_interfaces}}
@@ -111,17 +114,18 @@ defmodule AgentCoordinator.InterfaceManager do
  @impl GenServer
  def handle_continue(:start_interfaces, state) do
    # Start each enabled interface
    updated_state = Enum.reduce(state.config.enabled_interfaces, state, fn interface_type, acc ->
      case start_interface_server(interface_type, state.config, acc) do
        {:ok, interface_info} ->
          Logger.info("Started #{interface_type} interface")
          %{acc | interfaces: Map.put(acc.interfaces, interface_type, interface_info)}
    updated_state =
      Enum.reduce(state.config.enabled_interfaces, state, fn interface_type, acc ->
        case start_interface_server(interface_type, state.config, acc) do
          {:ok, interface_info} ->
            IO.puts(:stderr, "Started #{interface_type} interface")
            %{acc | interfaces: Map.put(acc.interfaces, interface_type, interface_info)}

        {:error, reason} ->
          Logger.error("Failed to start #{interface_type} interface: #{reason}")
          acc
      end
    end)
          {:error, reason} ->
            IO.puts(:stderr, "Failed to start #{interface_type} interface: #{reason}")
            acc
        end
      end)

    {:noreply, updated_state}
  end
@@ -152,11 +156,11 @@ defmodule AgentCoordinator.InterfaceManager do
          updated_interfaces = Map.put(state.interfaces, interface_type, interface_info)
          updated_state = %{state | interfaces: updated_interfaces}

          Logger.info("Started #{interface_type} interface on demand")
          IO.puts(:stderr, "Started #{interface_type} interface on demand")
          {:reply, {:ok, interface_info}, updated_state}

        {:error, reason} ->
          Logger.error("Failed to start #{interface_type} interface: #{reason}")
          IO.puts(:stderr, "Failed to start #{interface_type} interface: #{reason}")
          {:reply, {:error, reason}, state}
      end
    else
@@ -176,11 +180,11 @@ defmodule AgentCoordinator.InterfaceManager do
        updated_interfaces = Map.delete(state.interfaces, interface_type)
        updated_state = %{state | interfaces: updated_interfaces}

        Logger.info("Stopped #{interface_type} interface")
        IO.puts(:stderr, "Stopped #{interface_type} interface")
        {:reply, :ok, updated_state}

      {:error, reason} ->
        Logger.error("Failed to stop #{interface_type} interface: #{reason}")
        IO.puts(:stderr, "Failed to stop #{interface_type} interface: #{reason}")
        {:reply, {:error, reason}, state}
    end
  end
@@ -202,7 +206,7 @@ defmodule AgentCoordinator.InterfaceManager do
        updated_interfaces = Map.put(state.interfaces, interface_type, new_interface_info)
        updated_state = %{state | interfaces: updated_interfaces}

        Logger.info("Restarted #{interface_type} interface")
        IO.puts(:stderr, "Restarted #{interface_type} interface")
        {:reply, {:ok, new_interface_info}, updated_state}

      {:error, reason} ->
@@ -210,12 +214,12 @@ defmodule AgentCoordinator.InterfaceManager do
        updated_interfaces = Map.delete(state.interfaces, interface_type)
        updated_state = %{state | interfaces: updated_interfaces}

        Logger.error("Failed to restart #{interface_type} interface: #{reason}")
        IO.puts(:stderr, "Failed to restart #{interface_type} interface: #{reason}")
{:reply, {:error, reason}, updated_state}
|
||||
end
|
||||
|
||||
{:error, reason} ->
|
||||
Logger.error("Failed to stop #{interface_type} interface for restart: #{reason}")
|
||||
IO.puts(:stderr, "Failed to stop #{interface_type} interface for restart: #{reason}")
|
||||
{:reply, {:error, reason}, state}
|
||||
end
|
||||
end
|
||||
@@ -224,9 +228,11 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
@impl GenServer
|
||||
def handle_call(:get_metrics, _from, state) do
|
||||
# Collect metrics from all running interfaces
|
||||
interface_metrics = Enum.map(state.interfaces, fn {interface_type, interface_info} ->
|
||||
{interface_type, get_interface_metrics(interface_type, interface_info)}
|
||||
end) |> Enum.into(%{})
|
||||
interface_metrics =
|
||||
Enum.map(state.interfaces, fn {interface_type, interface_info} ->
|
||||
{interface_type, get_interface_metrics(interface_type, interface_info)}
|
||||
end)
|
||||
|> Enum.into(%{})
|
||||
|
||||
metrics = %{
|
||||
interfaces: interface_metrics,
|
||||
@@ -253,7 +259,7 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
updated_registry = Map.put(state.session_registry, session_id, session_data)
|
||||
updated_state = %{state | session_registry: updated_registry}
|
||||
|
||||
Logger.debug("Registered session #{session_id} for #{interface_type}")
|
||||
IO.puts(:stderr, "Registered session #{session_id} for #{interface_type}")
|
||||
{:noreply, updated_state}
|
||||
end
|
||||
|
||||
@@ -261,14 +267,14 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
def handle_cast({:unregister_session, session_id}, state) do
|
||||
case Map.get(state.session_registry, session_id) do
|
||||
nil ->
|
||||
Logger.debug("Attempted to unregister unknown session: #{session_id}")
|
||||
IO.puts(:stderr, "Attempted to unregister unknown session: #{session_id}")
|
||||
{:noreply, state}
|
||||
|
||||
_session_data ->
|
||||
updated_registry = Map.delete(state.session_registry, session_id)
|
||||
updated_state = %{state | session_registry: updated_registry}
|
||||
|
||||
Logger.debug("Unregistered session #{session_id}")
|
||||
IO.puts(:stderr, "Unregistered session #{session_id}")
|
||||
{:noreply, updated_state}
|
||||
end
|
||||
end
|
||||
@@ -278,7 +284,7 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
# Handle interface process crashes
|
||||
case find_interface_by_pid(pid, state.interfaces) do
|
||||
{interface_type, _interface_info} ->
|
||||
Logger.error("#{interface_type} interface crashed: #{inspect(reason)}")
|
||||
IO.puts(:stderr, "#{interface_type} interface crashed: #{inspect(reason)}")
|
||||
|
||||
# Remove from running interfaces
|
||||
updated_interfaces = Map.delete(state.interfaces, interface_type)
|
||||
@@ -286,14 +292,14 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
|
||||
# Optionally restart if configured
|
||||
if should_auto_restart?(interface_type, state.config) do
|
||||
Logger.info("Auto-restarting #{interface_type} interface")
|
||||
IO.puts(:stderr, "Auto-restarting #{interface_type} interface")
|
||||
Process.send_after(self(), {:restart_interface, interface_type}, 5000)
|
||||
end
|
||||
|
||||
{:noreply, updated_state}
|
||||
|
||||
nil ->
|
||||
Logger.debug("Unknown process died: #{inspect(pid)}")
|
||||
IO.puts(:stderr, "Unknown process died: #{inspect(pid)}")
|
||||
{:noreply, state}
|
||||
end
|
||||
end
|
||||
@@ -305,18 +311,18 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
updated_interfaces = Map.put(state.interfaces, interface_type, interface_info)
|
||||
updated_state = %{state | interfaces: updated_interfaces}
|
||||
|
||||
Logger.info("Auto-restarted #{interface_type} interface")
|
||||
IO.puts(:stderr, "Auto-restarted #{interface_type} interface")
|
||||
{:noreply, updated_state}
|
||||
|
||||
{:error, reason} ->
|
||||
Logger.error("Failed to auto-restart #{interface_type} interface: #{reason}")
|
||||
IO.puts(:stderr, "Failed to auto-restart #{interface_type} interface: #{reason}")
|
||||
{:noreply, state}
|
||||
end
|
||||
end
|
||||
|
||||
@impl GenServer
|
||||
def handle_info(message, state) do
|
||||
Logger.debug("Interface Manager received unexpected message: #{inspect(message)}")
|
||||
IO.puts(:stderr, "Interface Manager received unexpected message: #{inspect(message)}")
|
||||
{:noreply, state}
|
||||
end
|
||||
|
||||
@@ -369,11 +375,21 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
interface_mode = System.get_env("MCP_INTERFACE_MODE", "stdio")
|
||||
|
||||
case interface_mode do
|
||||
"stdio" -> [:stdio]
|
||||
"http" -> [:http]
|
||||
"websocket" -> [:websocket]
|
||||
"all" -> [:stdio, :http, :websocket]
|
||||
"remote" -> [:http, :websocket]
|
||||
"stdio" ->
|
||||
[:stdio]
|
||||
|
||||
"http" ->
|
||||
[:http]
|
||||
|
||||
"websocket" ->
|
||||
[:websocket]
|
||||
|
||||
"all" ->
|
||||
[:stdio, :http, :websocket]
|
||||
|
||||
"remote" ->
|
||||
[:http, :websocket]
|
||||
|
||||
_ ->
|
||||
# Check for comma-separated list
|
||||
if String.contains?(interface_mode, ",") do
|
||||
@@ -400,14 +416,17 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
end
|
||||
|
||||
defp update_http_config_from_env(config) do
|
||||
config = case System.get_env("MCP_HTTP_PORT") do
|
||||
nil -> config
|
||||
port_str ->
|
||||
case Integer.parse(port_str) do
|
||||
{port, ""} -> put_in(config, [:http, :port], port)
|
||||
_ -> config
|
||||
end
|
||||
end
|
||||
config =
|
||||
case System.get_env("MCP_HTTP_PORT") do
|
||||
nil ->
|
||||
config
|
||||
|
||||
port_str ->
|
||||
case Integer.parse(port_str) do
|
||||
{port, ""} -> put_in(config, [:http, :port], port)
|
||||
_ -> config
|
||||
end
|
||||
end
|
||||
|
||||
case System.get_env("MCP_HTTP_HOST") do
|
||||
nil -> config
|
||||
@@ -472,7 +491,8 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
# WebSocket is handled by the HTTP server, so just mark it as enabled
|
||||
interface_info = %{
|
||||
type: :websocket,
|
||||
pid: :embedded, # Embedded in HTTP server
|
||||
# Embedded in HTTP server
|
||||
pid: :embedded,
|
||||
started_at: DateTime.utc_now(),
|
||||
config: config.websocket
|
||||
}
|
||||
@@ -516,18 +536,46 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
|
||||
defp handle_stdio_loop(state) do
|
||||
# Handle MCP JSON-RPC messages from STDIO
|
||||
# Use different approaches for Docker vs regular environments
|
||||
if docker_environment?() do
|
||||
handle_stdio_docker_loop(state)
|
||||
else
|
||||
handle_stdio_regular_loop(state)
|
||||
end
|
||||
end
|
||||
|
||||
defp handle_stdio_regular_loop(state) do
|
||||
case IO.read(:stdio, :line) do
|
||||
:eof ->
|
||||
Logger.info("STDIO interface shutting down (EOF)")
|
||||
IO.puts(:stderr, "STDIO interface shutting down (EOF)")
|
||||
exit(:normal)
|
||||
|
||||
{:error, reason} ->
|
||||
Logger.error("STDIO error: #{inspect(reason)}")
|
||||
IO.puts(:stderr, "STDIO error: #{inspect(reason)}")
|
||||
exit({:error, reason})
|
||||
|
||||
line ->
|
||||
handle_stdio_message(String.trim(line), state)
|
||||
handle_stdio_loop(state)
|
||||
handle_stdio_regular_loop(state)
|
||||
end
|
||||
end
|
||||
|
||||
defp handle_stdio_docker_loop(state) do
|
||||
# In Docker, use regular IO.read instead of Port.open({:fd, 0, 1})
|
||||
# to avoid "driver_select stealing control of fd=0" conflicts with external MCP servers
|
||||
# This allows external servers to use pipes while Agent Coordinator reads from stdin
|
||||
case IO.read(:stdio, :line) do
|
||||
:eof ->
|
||||
IO.puts(:stderr, "STDIO interface shutting down (EOF)")
|
||||
exit(:normal)
|
||||
|
||||
{:error, reason} ->
|
||||
IO.puts(:stderr, "STDIO error: #{inspect(reason)}")
|
||||
exit({:error, reason})
|
||||
|
||||
line ->
|
||||
handle_stdio_message(String.trim(line), state)
|
||||
handle_stdio_docker_loop(state)
|
||||
end
|
||||
end
|
||||
|
||||
@@ -555,16 +603,18 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
"message" => "Parse error: #{Exception.message(e)}"
|
||||
}
|
||||
}
|
||||
|
||||
IO.puts(Jason.encode!(error_response))
|
||||
|
||||
e ->
|
||||
# Try to get the ID from the malformed request
|
||||
id = try do
|
||||
partial = Jason.decode!(json_line)
|
||||
Map.get(partial, "id")
|
||||
rescue
|
||||
_ -> nil
|
||||
end
|
||||
id =
|
||||
try do
|
||||
partial = Jason.decode!(json_line)
|
||||
Map.get(partial, "id")
|
||||
rescue
|
||||
_ -> nil
|
||||
end
|
||||
|
||||
error_response = %{
|
||||
"jsonrpc" => "2.0",
|
||||
@@ -574,6 +624,7 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
"message" => "Internal error: #{Exception.message(e)}"
|
||||
}
|
||||
}
|
||||
|
||||
IO.puts(Jason.encode!(error_response))
|
||||
end
|
||||
end
|
||||
@@ -600,7 +651,8 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
defp get_interface_metrics(:websocket, interface_info) do
|
||||
%{
|
||||
type: :websocket,
|
||||
status: :running, # Embedded in HTTP server
|
||||
# Embedded in HTTP server
|
||||
status: :running,
|
||||
uptime: DateTime.diff(DateTime.utc_now(), interface_info.started_at, :second),
|
||||
embedded: true
|
||||
}
|
||||
@@ -646,4 +698,21 @@ defmodule AgentCoordinator.InterfaceManager do
|
||||
end
|
||||
|
||||
defp deep_merge(_left, right), do: right
|
||||
|
||||
# Check if running in Docker environment
|
||||
defp docker_environment? do
|
||||
# Check common Docker environment indicators
|
||||
# Check if we're running under a container init system
|
||||
System.get_env("DOCKER_CONTAINER") != nil or
|
||||
System.get_env("container") != nil or
|
||||
System.get_env("DOCKERIZED") != nil or
|
||||
File.exists?("/.dockerenv") or
|
||||
(File.exists?("/proc/1/cgroup") and
|
||||
File.read!("/proc/1/cgroup") |> String.contains?("docker")) or
|
||||
String.contains?(to_string(System.get_env("PATH", "")), "/app/") or
|
||||
case File.read("/proc/1/comm") do
|
||||
{:ok, comm} -> String.trim(comm) in ["bash", "sh", "docker-init", "tini"]
|
||||
_ -> false
|
||||
end
|
||||
end
|
||||
end
|
||||
|
||||
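The recurring change throughout this file — replacing `Logger` calls with `IO.puts(:stderr, ...)` — follows from the stdio transport: stdout carries the JSON-RPC stream, so any other output written there corrupts the protocol. A minimal sketch of that split (illustrative only, not code from this commit; `Jason` is the JSON library the diff already uses):

```elixir
# Illustrative sketch: in a stdio MCP server, stdout is reserved for
# JSON-RPC frames, so all diagnostics must go to stderr.
defmodule StdioIo do
  # Protocol traffic: one JSON object per line on stdout.
  def reply(response), do: IO.puts(Jason.encode!(response))

  # Diagnostics: stderr only, so the client's JSON-RPC parser never sees them.
  def log(message), do: IO.puts(:stderr, message)
end
```

Default `Logger` backends can also be configured to write to stderr, but routing diagnostics through `IO.puts(:stderr, ...)` makes the separation explicit at each call site.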
File diff suppressed because it is too large
@@ -78,7 +78,7 @@ defmodule AgentCoordinator.SessionManager do
      }
    }

    Logger.info("SessionManager started with #{state.config.expiry_minutes}min expiry")
    IO.puts(:stderr, "SessionManager started with #{state.config.expiry_minutes}min expiry")
    {:ok, state}
  end

@@ -99,7 +99,7 @@ defmodule AgentCoordinator.SessionManager do
    new_sessions = Map.put(state.sessions, session_token, session_data)
    new_state = %{state | sessions: new_sessions}

    Logger.debug("Created session #{session_token} for agent #{agent_id}")
    IO.puts(:stderr, "Created session #{session_token} for agent #{agent_id}")
    {:reply, {:ok, session_token}, new_state}
  end

@@ -136,7 +136,12 @@ defmodule AgentCoordinator.SessionManager do
      session_data ->
        new_sessions = Map.delete(state.sessions, session_token)
        new_state = %{state | sessions: new_sessions}
        Logger.debug("Invalidated session #{session_token} for agent #{session_data.agent_id}")

        IO.puts(
          :stderr,
          "Invalidated session #{session_token} for agent #{session_data.agent_id}"
        )

        {:reply, :ok, new_state}
    end
  end
@@ -161,7 +166,7 @@ defmodule AgentCoordinator.SessionManager do
    end)

    if length(expired_sessions) > 0 do
      Logger.debug("Cleaned up #{length(expired_sessions)} expired sessions")
      IO.puts(:stderr, "Cleaned up #{length(expired_sessions)} expired sessions")
    end

    new_state = %{state | sessions: Map.new(active_sessions)}
@@ -17,7 +17,12 @@ defmodule AgentCoordinator.Task do
    :cross_codebase_dependencies,
    :created_at,
    :updated_at,
    :metadata
    :metadata,
    :feedback,
    :director_notes,
    :assignment_reason,
    :refinement_history,
    :blocking_issues
  ]}
  defstruct [
    :id,
@@ -32,7 +37,12 @@ defmodule AgentCoordinator.Task do
    :cross_codebase_dependencies,
    :created_at,
    :updated_at,
    :metadata
    :metadata,
    :feedback,
    :director_notes,
    :assignment_reason,
    :refinement_history,
    :blocking_issues
  ]

  @type status :: :pending | :in_progress | :completed | :failed | :blocked
@@ -51,7 +61,12 @@ defmodule AgentCoordinator.Task do
    cross_codebase_dependencies: [%{codebase_id: String.t(), task_id: String.t()}],
    created_at: DateTime.t(),
    updated_at: DateTime.t(),
    metadata: map()
    metadata: map(),
    feedback: String.t() | nil,
    director_notes: String.t() | nil,
    assignment_reason: String.t() | nil,
    refinement_history: [map()],
    blocking_issues: [String.t()]
  }

  def new(title, description, opts \\ []) do
@@ -78,7 +93,12 @@ defmodule AgentCoordinator.Task do
    cross_codebase_dependencies: get_opt.(:cross_codebase_dependencies, []),
    created_at: now,
    updated_at: now,
    metadata: get_opt.(:metadata, %{})
    metadata: get_opt.(:metadata, %{}),
    feedback: nil,
    director_notes: nil,
    assignment_reason: nil,
    refinement_history: [],
    blocking_issues: []
  }
  end

@@ -115,4 +135,109 @@ defmodule AgentCoordinator.Task do
    dependencies = [dependency | task.cross_codebase_dependencies]
    %{task | cross_codebase_dependencies: dependencies, updated_at: DateTime.utc_now()}
  end

  # Director management functions

  def add_feedback(task, feedback, director_id) do
    refinement_entry = %{
      type: "feedback_added",
      director_id: director_id,
      content: feedback,
      timestamp: DateTime.utc_now()
    }

    %{
      task
      | feedback: feedback,
        refinement_history: [refinement_entry | task.refinement_history],
        updated_at: DateTime.utc_now()
    }
  end

  def add_director_notes(task, notes, director_id) do
    refinement_entry = %{
      type: "director_notes_added",
      director_id: director_id,
      content: notes,
      timestamp: DateTime.utc_now()
    }

    %{
      task
      | director_notes: notes,
        refinement_history: [refinement_entry | task.refinement_history],
        updated_at: DateTime.utc_now()
    }
  end

  def set_assignment_reason(task, reason, director_id) do
    refinement_entry = %{
      type: "assignment_reason_set",
      director_id: director_id,
      reason: reason,
      timestamp: DateTime.utc_now()
    }

    %{
      task
      | assignment_reason: reason,
        refinement_history: [refinement_entry | task.refinement_history],
        updated_at: DateTime.utc_now()
    }
  end

  def add_blocking_issue(task, issue, director_id) do
    new_issues = [issue | task.blocking_issues] |> Enum.uniq()

    refinement_entry = %{
      type: "blocking_issue_added",
      director_id: director_id,
      issue: issue,
      timestamp: DateTime.utc_now()
    }

    %{
      task
      | blocking_issues: new_issues,
        refinement_history: [refinement_entry | task.refinement_history],
        updated_at: DateTime.utc_now()
    }
  end

  def remove_blocking_issue(task, issue, director_id) do
    new_issues = task.blocking_issues |> Enum.reject(&(&1 == issue))

    refinement_entry = %{
      type: "blocking_issue_removed",
      director_id: director_id,
      issue: issue,
      timestamp: DateTime.utc_now()
    }

    %{
      task
      | blocking_issues: new_issues,
        refinement_history: [refinement_entry | task.refinement_history],
        updated_at: DateTime.utc_now()
    }
  end

  def reassign(task, new_agent_id, director_id, reason) do
    refinement_entry = %{
      type: "task_reassigned",
      director_id: director_id,
      from_agent_id: task.agent_id,
      to_agent_id: new_agent_id,
      reason: reason,
      timestamp: DateTime.utc_now()
    }

    %{
      task
      | agent_id: new_agent_id,
        assignment_reason: reason,
        refinement_history: [refinement_entry | task.refinement_history],
        updated_at: DateTime.utc_now()
    }
  end
end
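Each director helper added above follows one pattern: apply the field change, prepend an audit entry to `refinement_history`, and refresh `updated_at`. A hypothetical consolidation of that pattern (the helper name and signature are illustrative, not part of this commit):

```elixir
# Hypothetical refactor of the repeated director-function pattern:
# every mutation records an audit entry and bumps the timestamp.
defp apply_refinement(task, entry_type, director_id, extra, changes) do
  entry =
    %{type: entry_type, director_id: director_id, timestamp: DateTime.utc_now()}
    |> Map.merge(extra)

  task
  |> Map.merge(changes)
  |> Map.update!(:refinement_history, &[entry | &1])
  |> Map.put(:updated_at, DateTime.utc_now())
end

# e.g. add_feedback/3 could then reduce to:
# def add_feedback(task, feedback, director_id),
#   do: apply_refinement(task, "feedback_added", director_id,
#         %{content: feedback}, %{feedback: feedback})
```

Because structs are maps, `Map.merge/2` and `Map.update!/3` work directly on `%Task{}`; prepending to the history list keeps the most recent refinement first, matching the `[refinement_entry | task.refinement_history]` convention in the diff.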
@@ -20,22 +20,28 @@ defmodule AgentCoordinator.ToolFilter do
  Context information about the client connection.
  """
  defstruct [
    :connection_type, # :local, :remote, :web
    :client_info, # Client identification
    :capabilities, # Client declared capabilities
    :security_level, # :trusted, :sandboxed, :restricted
    :origin, # For web clients, the origin domain
    :user_agent # Client user agent string
    # :local, :remote, :web
    :connection_type,
    # Client identification
    :client_info,
    # Client declared capabilities
    :capabilities,
    # :trusted, :sandboxed, :restricted
    :security_level,
    # For web clients, the origin domain
    :origin,
    # Client user agent string
    :user_agent
  ]

  @type client_context :: %__MODULE__{
    connection_type: :local | :remote | :web,
    client_info: map(),
    capabilities: [String.t()],
    security_level: :trusted | :sandboxed | :restricted,
    origin: String.t() | nil,
    user_agent: String.t() | nil
  }
          connection_type: :local | :remote | :web,
          client_info: map(),
          capabilities: [String.t()],
          security_level: :trusted | :sandboxed | :restricted,
          origin: String.t() | nil,
          user_agent: String.t() | nil
        }

  # Tool name patterns that indicate local-only functionality (defined as function to avoid compilation issues)
  defp local_only_patterns do
@@ -198,12 +204,16 @@ defmodule AgentCoordinator.ToolFilter do
    description = Map.get(tool, "description", "")

    # Check against known local-only tool names
    name_is_local = tool_name in get_local_only_tool_names() or
      Enum.any?(local_only_patterns(), &Regex.match?(&1, tool_name))
    name_is_local =
      tool_name in get_local_only_tool_names() or
        Enum.any?(local_only_patterns(), &Regex.match?(&1, tool_name))

    # Check description for local-only indicators
    description_is_local = String.contains?(String.downcase(description),
      ["filesystem", "file system", "vscode", "terminal", "local file", "directory"])
    description_is_local =
      String.contains?(
        String.downcase(description),
        ["filesystem", "file system", "vscode", "terminal", "local file", "directory"]
      )

    # Check tool schema for local-only parameters
    schema_is_local = has_local_only_parameters?(tool)
@@ -214,19 +224,39 @@ defmodule AgentCoordinator.ToolFilter do
  defp get_local_only_tool_names do
    [
      # Filesystem tools
      "read_file", "write_file", "create_file", "delete_file",
      "list_directory", "search_files", "move_file", "get_file_info",
      "list_allowed_directories", "directory_tree", "edit_file",
      "read_text_file", "read_multiple_files", "read_media_file",
      "read_file",
      "write_file",
      "create_file",
      "delete_file",
      "list_directory",
      "search_files",
      "move_file",
      "get_file_info",
      "list_allowed_directories",
      "directory_tree",
      "edit_file",
      "read_text_file",
      "read_multiple_files",
      "read_media_file",

      # VSCode tools
      "vscode_create_file", "vscode_write_file", "vscode_read_file",
      "vscode_delete_file", "vscode_list_directory", "vscode_get_active_editor",
      "vscode_set_editor_content", "vscode_get_selection", "vscode_set_selection",
      "vscode_show_message", "vscode_run_command", "vscode_get_workspace_folders",
      "vscode_create_file",
      "vscode_write_file",
      "vscode_read_file",
      "vscode_delete_file",
      "vscode_list_directory",
      "vscode_get_active_editor",
      "vscode_set_editor_content",
      "vscode_get_selection",
      "vscode_set_selection",
      "vscode_show_message",
      "vscode_run_command",
      "vscode_get_workspace_folders",

      # Terminal/process tools
      "run_in_terminal", "get_terminal_output", "terminal_last_command",
      "run_in_terminal",
      "get_terminal_output",
      "terminal_last_command",
      "terminal_selection"
    ]
  end
@@ -238,8 +268,10 @@ defmodule AgentCoordinator.ToolFilter do
    # Look for file path parameters or other local indicators
    Enum.any?(properties, fn {param_name, param_schema} ->
      param_name in ["path", "filePath", "file_path", "directory", "workspace_path"] or
        String.contains?(Map.get(param_schema, "description", ""),
          ["file path", "directory", "workspace", "local"])
        String.contains?(
          Map.get(param_schema, "description", ""),
          ["file path", "directory", "workspace", "local"]
        )
    end)
  end

@@ -251,20 +283,25 @@ defmodule AgentCoordinator.ToolFilter do
      Map.get(connection_info, :remote_ip) == "127.0.0.1" -> :local
      Map.get(connection_info, :remote_ip) == "::1" -> :local
      Map.has_key?(connection_info, :remote_ip) -> :remote
      true -> :local # Default to local for stdio
      # Default to local for stdio
      true -> :local
    end
  end

  defp determine_security_level(connection_type, connection_info) do
    case connection_type do
      :local -> :trusted
      :local ->
        :trusted

      :remote ->
        if Map.get(connection_info, :secure, false) do
          :sandboxed
        else
          :restricted
        end
      :web -> :sandboxed

      :web ->
        :sandboxed
    end
  end

@@ -278,5 +315,4 @@ defmodule AgentCoordinator.ToolFilter do
    tools
  end
end

end
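The classification rules above reduce to a small decision table: a loopback or stdio peer is `:local` and `:trusted`; a TLS remote is `:sandboxed`; a plaintext remote is `:restricted`; web clients are `:sandboxed`. A condensed sketch of both steps together (assumed to mirror the diff, not a verbatim extract; the `connection_info` keys `:remote_ip` and `:secure` match those used in the module):

```elixir
# Sketch combining connection-type detection and security-level assignment.
# Note: :web is assigned by the WebSocket handler, not by this cond, so the
# :web clause is only reachable when the type is supplied externally.
defp classify(connection_info) do
  connection_type =
    cond do
      Map.get(connection_info, :remote_ip) in ["127.0.0.1", "::1"] -> :local
      Map.has_key?(connection_info, :remote_ip) -> :remote
      # stdio connections carry no remote_ip and default to local
      true -> :local
    end

  security_level =
    case connection_type do
      :local -> :trusted
      :remote -> if Map.get(connection_info, :secure, false), do: :sandboxed, else: :restricted
      :web -> :sandboxed
    end

  {connection_type, security_level}
end
```

Under these rules, `%{remote_ip: "203.0.113.7", secure: true}` would classify as `{:remote, :sandboxed}` while an empty map (stdio) classifies as `{:local, :trusted}`.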
@@ -21,7 +21,8 @@ defmodule AgentCoordinator.WebSocketHandler do
    :connection_info
  ]

  @heartbeat_interval 30_000 # 30 seconds
  # 30 seconds
  @heartbeat_interval 30_000

  @impl WebSock
  def init(opts) do
@@ -37,7 +38,7 @@ defmodule AgentCoordinator.WebSocketHandler do
    # Start heartbeat timer
    Process.send_after(self(), :heartbeat, @heartbeat_interval)

    Logger.info("WebSocket connection established: #{session_id}")
    IO.puts(:stderr, "WebSocket connection established: #{session_id}")

    {:ok, state}
  end
@@ -64,7 +65,7 @@ defmodule AgentCoordinator.WebSocketHandler do

  @impl WebSock
  def handle_in({_binary, [opcode: :binary]}, state) do
    Logger.warning("Received unexpected binary data on WebSocket")
    IO.puts(:stderr, "Received unexpected binary data on WebSocket")
    {:ok, state}
  end

@@ -95,20 +96,24 @@ defmodule AgentCoordinator.WebSocketHandler do

  @impl WebSock
  def handle_info(message, state) do
    Logger.debug("Received unexpected message: #{inspect(message)}")
    IO.puts(:stderr, "Received unexpected message: #{inspect(message)}")
    {:ok, state}
  end

  @impl WebSock
  def terminate(:remote, state) do
    Logger.info("WebSocket connection closed by client: #{state.session_id}")
    IO.puts(:stderr, "WebSocket connection closed by client: #{state.session_id}")
    cleanup_session(state)
    :ok
  end

  @impl WebSock
  def terminate(reason, state) do
    Logger.info("WebSocket connection terminated: #{state.session_id}, reason: #{inspect(reason)}")
    IO.puts(
      :stderr,
      "WebSocket connection terminated: #{state.session_id}, reason: #{inspect(reason)}"
    )

    cleanup_session(state)
    :ok
  end
@@ -183,10 +188,7 @@ defmodule AgentCoordinator.WebSocketHandler do
      }
    }

    updated_state = %{state |
      client_context: client_context,
      connection_info: connection_info
    }
    updated_state = %{state | client_context: client_context, connection_info: connection_info}

    {:reply, {:text, Jason.encode!(response)}, updated_state}
  end
@@ -245,7 +247,8 @@ defmodule AgentCoordinator.WebSocketHandler do
      {:reply, {:text, Jason.encode!(response)}, updated_state}

      unexpected ->
        Logger.error("Unexpected MCP response: #{inspect(unexpected)}")
        IO.puts(:stderr, "Unexpected MCP response: #{inspect(unexpected)}")

        error_response = %{
          "jsonrpc" => "2.0",
          "id" => Map.get(message, "id"),
@@ -264,7 +267,8 @@ defmodule AgentCoordinator.WebSocketHandler do
          "id" => Map.get(message, "id"),
          "error" => %{
            "code" => -32601,
            "message" => "Tool not available for #{state.client_context.connection_type} clients: #{tool_name}"
            "message" =>
              "Tool not available for #{state.client_context.connection_type} clients: #{tool_name}"
          }
        }

@@ -287,7 +291,7 @@ defmodule AgentCoordinator.WebSocketHandler do

  defp handle_initialized_notification(_message, state) do
    # Client is ready to receive notifications
    Logger.info("WebSocket client initialized: #{state.session_id}")
    IO.puts(:stderr, "WebSocket client initialized: #{state.session_id}")
    {:ok, state}
  end

@@ -304,7 +308,7 @@ defmodule AgentCoordinator.WebSocketHandler do
      {:ok, state}

      unexpected ->
        Logger.error("Unexpected MCP response: #{inspect(unexpected)}")
        IO.puts(:stderr, "Unexpected MCP response: #{inspect(unexpected)}")
        {:ok, state}
    end
  else
@@ -325,14 +329,15 @@ defmodule AgentCoordinator.WebSocketHandler do
    # Add session tracking info to the message
    params = Map.get(message, "params", %{})

    enhanced_params = params
    |> Map.put("_session_id", state.session_id)
    |> Map.put("_transport", "websocket")
    |> Map.put("_client_context", %{
      connection_type: state.client_context.connection_type,
      security_level: state.client_context.security_level,
      session_id: state.session_id
    })
    enhanced_params =
      params
      |> Map.put("_session_id", state.session_id)
      |> Map.put("_transport", "websocket")
      |> Map.put("_client_context", %{
        connection_type: state.client_context.connection_type,
        security_level: state.client_context.security_level,
        session_id: state.session_id
      })

    Map.put(message, "params", enhanced_params)
  end
@@ -1,106 +0,0 @@
|
||||
{
|
||||
"config": {
|
||||
"auto_restart_delay": 1000,
|
||||
"heartbeat_interval": 10000,
|
||||
"max_restart_attempts": 3,
|
||||
"startup_timeout": 30000
|
||||
},
|
||||
"interfaces": {
|
||||
"enabled_interfaces": ["stdio"],
|
||||
"stdio": {
|
||||
"enabled": true,
|
||||
"handle_stdio": true,
|
||||
"description": "Local MCP interface for VSCode and direct clients"
|
||||
},
|
||||
"http": {
|
||||
"enabled": false,
|
||||
"port": 8080,
|
||||
"host": "localhost",
|
||||
"cors_enabled": true,
|
||||
"description": "HTTP REST API for remote MCP clients"
|
||||
},
|
||||
"websocket": {
|
||||
"enabled": false,
|
||||
"port": 8081,
|
||||
"host": "localhost",
|
||||
"description": "WebSocket interface for real-time web clients"
|
||||
},
|
||||
"auto_restart": {
|
||||
"stdio": false,
|
||||
"http": true,
|
||||
"websocket": true
|
||||
},
|
||||
"tool_filtering": {
|
||||
"local_only_tools": [
|
||||
"read_file", "write_file", "create_file", "delete_file",
|
||||
"list_directory", "search_files", "move_file", "get_file_info",
|
||||
"vscode_*", "run_in_terminal", "get_terminal_output"
|
||||
],
|
||||
"always_safe_tools": [
|
||||
"register_agent", "create_task", "get_task_board",
|
||||
"heartbeat", "create_entities", "sequentialthinking"
|
||||
]
|
||||
}
|
||||
},
|
||||
"servers": {
|
||||
"mcp_filesystem": {
|
||||
"type": "stdio",
|
||||
"command": "bunx",
|
||||
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/ra"],
|
||||
"auto_restart": true,
|
||||
"description": "Filesystem operations server with heartbeat coverage",
|
||||
"local_only": true
|
||||
},
|
||||
"mcp_memory": {
|
||||
"type": "stdio",
|
||||
"command": "bunx",
|
||||
"args": ["-y", "@modelcontextprotocol/server-memory"],
|
||||
"auto_restart": true,
|
||||
"description": "Memory and knowledge graph server",
|
||||
"local_only": false
|
||||
},
|
||||
"mcp_sequentialthinking": {
|
||||
"type": "stdio",
|
||||
"command": "bunx",
|
||||
"args": ["-y", "@modelcontextprotocol/server-sequential-thinking"],
|
||||
"auto_restart": true,
|
||||
"description": "Sequential thinking and reasoning server",
|
||||
"local_only": false
|
||||
},
|
||||
"mcp_context7": {
|
||||
"type": "stdio",
|
||||
"command": "bunx",
|
||||
"args": ["-y", "@upstash/context7-mcp"],
|
||||
"auto_restart": true,
|
||||
"description": "Context7 library documentation server",
|
||||
"local_only": false
|
||||
}
|
||||
},
|
||||
"examples": {
|
||||
"stdio_mode": {
|
||||
"description": "Traditional MCP over stdio for local clients",
|
||||
"command": "./scripts/mcp_launcher_multi.sh stdio",
|
||||
"use_case": "VSCode MCP integration, local development"
|
||||
},
|
||||
"http_mode": {
|
||||
"description": "HTTP REST API for remote clients",
|
||||
"command": "./scripts/mcp_launcher_multi.sh http 8080",
|
||||
"use_case": "Remote API access, web applications, CI/CD"
|
||||
},
|
||||
"websocket_mode": {
|
||||
"description": "WebSocket for real-time web clients",
|
||||
"command": "./scripts/mcp_launcher_multi.sh websocket 8081",
|
||||
"use_case": "Real-time web dashboards, live collaboration"
|
||||
},
|
||||
"remote_mode": {
|
||||
"description": "Both HTTP and WebSocket on same port",
|
||||
"command": "./scripts/mcp_launcher_multi.sh remote 8080",
|
||||
"use_case": "Complete remote access with both REST and real-time"
|
||||
},
|
||||
"all_mode": {
|
||||
"description": "All interface modes simultaneously",
|
||||
"command": "./scripts/mcp_launcher_multi.sh all 8080",
|
||||
"use_case": "Development, testing, maximum compatibility"
|
||||
}
|
||||
}
|
||||
}
|
||||
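The two tool lists above encode the routing rule stated in the instructions file: heartbeat-covered tools are routed through the agent-coordinator, while the coordinator's own tools are always safe to call directly. A minimal sketch of that classification in POSIX shell; the function name and the read-only default for unlisted tools are assumptions for illustration, not part of this config:

```shell
#!/bin/sh
# Classify a tool call: return 0 (route through the agent-coordinator for
# heartbeat coverage) or 1 (safe to call directly).
needs_coordinator() {
  case "$1" in
    # Heartbeat-covered tools from the first list above.
    read_file|write_file|create_file|delete_file|\
    list_directory|search_files|move_file|get_file_info|\
    vscode_*|run_in_terminal|get_terminal_output) return 0 ;;
    # Coordinator-native tools from "always_safe_tools".
    register_agent|create_task|get_task_board|\
    heartbeat|create_entities|sequentialthinking) return 1 ;;
    # Anything unlisted is treated as read-only here (an assumption).
    *) return 1 ;;
  esac
}

needs_coordinator "write_file" && echo "write_file: route via coordinator"
needs_coordinator "heartbeat" || echo "heartbeat: always safe"
```

The `vscode_*` glob covers every VS Code internal tool, matching the wildcard entry in the list above.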
12
nats-server.conf
Normal file
@@ -0,0 +1,12 @@
port: 4222

jetstream {
  store_dir: /var/lib/nats/jetstream
  max_memory_store: 1GB
  max_file_store: 10GB
}

http_port: 8222
log_file: "/var/log/nats-server.log"
debug: false
trace: false
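As a quick sanity check, the JetStream limits in the new file above can be read back without starting a server; a minimal sketch, assuming only POSIX shell and awk (a fuller validation would run the config through `nats-server` itself):

```shell
#!/bin/sh
# Write the config shown above to a temp file and read back the JetStream limits.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
port: 4222

jetstream {
  store_dir: /var/lib/nats/jetstream
  max_memory_store: 1GB
  max_file_store: 10GB
}

http_port: 8222
EOF

# Second whitespace-separated field on each matching line is the limit value.
MEM=$(awk '/max_memory_store/ {print $2}' "$CONF")
FILE=$(awk '/max_file_store/ {print $2}' "$CONF")
echo "jetstream limits: memory=$MEM file=$FILE"
rm -f "$CONF"
```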
@@ -6,20 +6,27 @@

set -e

CALLER_PWD="${PWD}"
WORKSPACE_DIR="${MCP_WORKSPACE_DIR:-$CALLER_PWD}"

export PATH="$HOME/.asdf/shims:$PATH"

# Change to the project directory
cd "$(dirname "$0")/.."
export MCP_WORKSPACE_DIR="$WORKSPACE_DIR"

# Set environment
export MIX_ENV="${MIX_ENV:-dev}"
export NATS_HOST="${NATS_HOST:-localhost}"
export NATS_PORT="${NATS_PORT:-4222}"

# Log startup
# Log startup with workspace information
echo "Starting AgentCoordinator Unified MCP Server..." >&2
echo "Environment: $MIX_ENV" >&2
echo "NATS: $NATS_HOST:$NATS_PORT" >&2
echo "Caller PWD: $CALLER_PWD" >&2
echo "Workspace Directory: $WORKSPACE_DIR" >&2
echo "Agent Coordinator Directory: $(pwd)" >&2

# Start the Elixir application with unified MCP server
exec mix run --no-halt -e "
@@ -37,67 +44,7 @@ end
# Log that we're ready
IO.puts(:stderr, \"Unified MCP server ready with automatic task tracking\")

# Handle MCP JSON-RPC messages through the unified server
defmodule UnifiedMCPStdio do
  def start do
    spawn_link(fn -> message_loop() end)
    Process.sleep(:infinity)
  end

  defp message_loop do
    case IO.read(:stdio, :line) do
      :eof ->
        IO.puts(:stderr, \"Unified MCP server shutting down\")
        System.halt(0)
      {:error, reason} ->
        IO.puts(:stderr, \"IO Error: #{inspect(reason)}\")
        System.halt(1)
      line ->
        handle_message(String.trim(line))
        message_loop()
    end
  end

  defp handle_message(\"\"), do: :ok
  defp handle_message(json_line) do
    try do
      request = Jason.decode!(json_line)

      # Route through unified MCP server for automatic task tracking
      response = AgentCoordinator.MCPServer.handle_mcp_request(request)
      IO.puts(Jason.encode!(response))
    rescue
      e in Jason.DecodeError ->
        error_response = %{
          \"jsonrpc\" => \"2.0\",
          \"id\" => nil,
          \"error\" => %{
            \"code\" => -32700,
            \"message\" => \"Parse error: #{Exception.message(e)}\"
          }
        }
        IO.puts(Jason.encode!(error_response))
      e ->
        # Try to get the ID from the malformed request
        id = try do
          partial = Jason.decode!(json_line)
          Map.get(partial, \"id\")
        rescue
          _ -> nil
        end

        error_response = %{
          \"jsonrpc\" => \"2.0\",
          \"id\" => id,
          \"error\" => %{
            \"code\" => -32603,
            \"message\" => \"Internal error: #{Exception.message(e)}\"
          }
        }
        IO.puts(Jason.encode!(error_response))
    end
  end
end

UnifiedMCPStdio.start()
"
# STDIO handling is now managed by InterfaceManager, not here
# Just keep the process alive
Process.sleep(:infinity)
"
@@ -10,10 +10,14 @@

set -e

CALLER_PWD="${PWD}"
WORKSPACE_DIR="${MCP_WORKSPACE_DIR:-$CALLER_PWD}"

export PATH="$HOME/.asdf/shims:$PATH"

# Change to the project directory
cd "$(dirname "$0")/.."
export MCP_WORKSPACE_DIR="$WORKSPACE_DIR"

# Parse command line arguments
INTERFACE_MODE="${1:-stdio}"
@@ -1,73 +0,0 @@
#!/bin/bash

# Ultra-minimal test that doesn't start the full application

echo "🔬 Ultra-Minimal AgentCoordinator Test"
echo "======================================"

cd "$(dirname "$0")"

echo "📋 Testing compilation..."
if mix compile >/dev/null 2>&1; then
    echo "✅ Compilation successful"
else
    echo "❌ Compilation failed"
    exit 1
fi

echo "📋 Testing MCP server without application startup..."
if timeout 10 mix run --no-start -e "
# Load compiled modules without starting application
Code.ensure_loaded(AgentCoordinator.MCPServer)

# Test MCP server directly
try do
  # Start just the required processes manually
  {:ok, _} = Registry.start_link(keys: :unique, name: AgentCoordinator.InboxRegistry)
  {:ok, _} = Phoenix.PubSub.start_link(name: AgentCoordinator.PubSub)

  # Start TaskRegistry without NATS
  {:ok, _} = GenServer.start_link(AgentCoordinator.TaskRegistry, [nats: nil], name: AgentCoordinator.TaskRegistry)

  # Start MCP server
  {:ok, _} = GenServer.start_link(AgentCoordinator.MCPServer, %{}, name: AgentCoordinator.MCPServer)

  IO.puts('✅ Core components started')

  # Test MCP functionality
  response = AgentCoordinator.MCPServer.handle_mcp_request(%{
    \"jsonrpc\" => \"2.0\",
    \"id\" => 1,
    \"method\" => \"tools/list\"
  })

  case response do
    %{\"result\" => %{\"tools\" => tools}} when is_list(tools) ->
      IO.puts(\"✅ MCP server working (#{length(tools)} tools)\")
    _ ->
      IO.puts(\"❌ MCP server not working: #{inspect(response)}\")
  end

rescue
  e ->
    IO.puts(\"❌ Error: #{inspect(e)}\")
end

System.halt(0)
"; then
    echo "✅ Minimal test passed!"
else
    echo "❌ Minimal test failed"
    exit 1
fi

echo ""
echo "🎉 Core MCP functionality works!"
echo ""
echo "📝 The hanging issue was due to NATS persistence trying to connect."
echo "   Your MCP server core functionality is working perfectly."
echo ""
echo "🚀 To run with proper NATS setup:"
echo "   1. Make sure NATS server is running: sudo systemctl start nats"
echo "   2. Or run: nats-server -js -p 4222 -m 8222 &"
echo "   3. Then use: ../scripts/mcp_launcher.sh"
@@ -1,54 +0,0 @@
#!/bin/bash

# Quick test script to verify AgentCoordinator works without getting stuck

echo "🧪 Quick AgentCoordinator Test"
echo "=============================="

cd "$(dirname "$0")"

echo "📋 Testing basic compilation..."
if mix compile --force >/dev/null 2>&1; then
    echo "✅ Compilation successful"
else
    echo "❌ Compilation failed"
    exit 1
fi

echo "📋 Testing application startup (without persistence)..."
if timeout 10 mix run -e "
Application.put_env(:agent_coordinator, :enable_persistence, false)
{:ok, _apps} = Application.ensure_all_started(:agent_coordinator)
IO.puts('✅ Application started successfully')

# Quick MCP server test
response = AgentCoordinator.MCPServer.handle_mcp_request(%{
  \"jsonrpc\" => \"2.0\",
  \"id\" => 1,
  \"method\" => \"tools/list\"
})

case response do
  %{\"result\" => %{\"tools\" => tools}} when is_list(tools) ->
    IO.puts(\"✅ MCP server working (#{length(tools)} tools available)\")
  _ ->
    IO.puts(\"❌ MCP server not responding correctly\")
end

System.halt(0)
"; then
    echo "✅ Quick test passed!"
else
    echo "❌ Quick test failed"
    exit 1
fi

echo ""
echo "🎉 AgentCoordinator is ready!"
echo ""
echo "🚀 Next steps:"
echo "   1. Run ./setup.sh to configure VS Code integration"
echo "   2. Or test manually with: ./mcp_launcher.sh"
echo "   3. Or run Python example: python3 mcp_client_example.py"
@@ -145,7 +145,7 @@ if [ -f "$SETTINGS_FILE" ]; then
    echo "$MCP_CONFIG" | jq -s '.[0] * .[1]' "$SETTINGS_FILE" - > "$SETTINGS_FILE.tmp"
    mv "$SETTINGS_FILE.tmp" "$SETTINGS_FILE"
else
    echo "⚠️ jq not found. Please manually add MCP configuration to $SETTINGS_FILE"
    echo "jq not found. Please manually add MCP configuration to $SETTINGS_FILE"
    echo "Add this configuration:"
    echo "$MCP_CONFIG"
fi
@@ -153,25 +153,25 @@ else
    echo "$MCP_CONFIG" > "$SETTINGS_FILE"
fi

echo "✅ VS Code settings updated"
echo "VS Code settings updated"

# Test MCP server
echo -e "\n🧪 Testing MCP server..."
echo -e "\nTesting MCP server..."
cd "$PROJECT_DIR"
if timeout 5 ./scripts/mcp_launcher.sh >/dev/null 2>&1; then
    echo "✅ MCP server test passed"
    echo "MCP server test passed"
else
    echo "⚠️ MCP server test timed out (this is expected)"
    echo "MCP server test timed out (this is expected)"
fi

# Create desktop shortcut for easy access
echo -e "\n🖥️ Creating desktop shortcuts..."
echo -e "\nCreating desktop shortcuts..."

# Start script
cat > "$PROJECT_DIR/start_agent_coordinator.sh" << 'EOF'
#!/bin/bash
cd "$(dirname "$0")"
echo "🚀 Starting AgentCoordinator..."
echo "Starting AgentCoordinator..."

# Start NATS if not running
if ! pgrep -f nats-server > /dev/null; then
@@ -191,7 +191,7 @@ chmod +x "$PROJECT_DIR/start_agent_coordinator.sh"
# Stop script
cat > "$PROJECT_DIR/stop_agent_coordinator.sh" << 'EOF'
#!/bin/bash
echo "🛑 Stopping AgentCoordinator..."
echo "Stopping AgentCoordinator..."

# Stop NATS if we started it
if [ -f /tmp/nats.pid ]; then
@@ -203,24 +203,24 @@ fi
pkill -f "scripts/mcp_launcher.sh" || true
pkill -f "agent_coordinator" || true

echo "✅ AgentCoordinator stopped"
echo "AgentCoordinator stopped"
EOF

chmod +x "$PROJECT_DIR/stop_agent_coordinator.sh"

echo "✅ Created start/stop scripts"
echo "Created start/stop scripts"

# Final instructions
echo -e "\n🎉 Setup Complete!"
echo -e "\nSetup Complete!"
echo "==================="
echo ""
echo "📋 Next Steps:"
echo "Next Steps:"
echo ""
echo "1. 🔄 Restart VS Code to load the new MCP configuration"
echo "1. Restart VS Code to load the new MCP configuration"
echo "   - Close all VS Code windows"
echo "   - Reopen VS Code in your project"
echo ""
echo "2. 🤖 GitHub Copilot should now have access to AgentCoordinator tools:"
echo "2. GitHub Copilot should now have access to AgentCoordinator tools:"
echo "   - register_agent"
echo "   - create_task"
echo "   - get_next_task"
@@ -233,14 +233,13 @@ echo "   - Ask Copilot: 'Register me as an agent with coding capabilities'"
echo "   - Ask Copilot: 'Create a task to refactor the login module'"
echo "   - Ask Copilot: 'Show me the task board'"
echo ""
echo "📂 Useful files:"
echo "  Useful files:"
echo "   - Start server: $PROJECT_DIR/start_agent_coordinator.sh"
echo "   - Stop server: $PROJECT_DIR/stop_agent_coordinator.sh"
echo "   - Test client: $PROJECT_DIR/mcp_client_example.py"
echo "   - VS Code settings: $SETTINGS_FILE"
echo ""
echo "🔧 Manual start (if needed):"
echo "   cd $PROJECT_DIR && ./scripts/mcp_launcher.sh"
echo ""
echo "💡 Tip: The MCP server will auto-start when Copilot needs it!"
echo ""
echo ""
@@ -14,16 +14,19 @@ IO.puts("=" |> String.duplicate(60))
try do
  TaskRegistry.start_link()
rescue
  _ -> :ok # Already started
  # Already started
  _ -> :ok
end

try do
  MCPServer.start_link()
rescue
  _ -> :ok # Already started
  # Already started
  _ -> :ok
end

Process.sleep(1000) # Give services time to start
# Give services time to start
Process.sleep(1000)

# Test 1: Register two agents
IO.puts("\n1️⃣ Registering two test agents...")
@@ -58,23 +61,27 @@ resp1 = MCPServer.handle_mcp_request(agent1_req)
resp2 = MCPServer.handle_mcp_request(agent2_req)

# Extract agent IDs
agent1_id = case resp1 do
  %{"result" => %{"content" => [%{"text" => text}]}} ->
    data = Jason.decode!(text)
    data["agent_id"]
  _ ->
    IO.puts("❌ Failed to register agent 1: #{inspect(resp1)}")
    System.halt(1)
end
agent1_id =
  case resp1 do
    %{"result" => %{"content" => [%{"text" => text}]}} ->
      data = Jason.decode!(text)
      data["agent_id"]

agent2_id = case resp2 do
  %{"result" => %{"content" => [%{"text" => text}]}} ->
    data = Jason.decode!(text)
    data["agent_id"]
  _ ->
    IO.puts("❌ Failed to register agent 2: #{inspect(resp2)}")
    System.halt(1)
end
    _ ->
      IO.puts("❌ Failed to register agent 1: #{inspect(resp1)}")
      System.halt(1)
  end

agent2_id =
  case resp2 do
    %{"result" => %{"content" => [%{"text" => text}]}} ->
      data = Jason.decode!(text)
      data["agent_id"]

    _ ->
      IO.puts("❌ Failed to register agent 2: #{inspect(resp2)}")
      System.halt(1)
  end

IO.puts("✅ Agent 1 (Alpha Wolf): #{agent1_id}")
IO.puts("✅ Agent 2 (Beta Tiger): #{agent2_id}")
@@ -219,7 +226,7 @@ history_req1 = %{
history_resp1 = MCPServer.handle_mcp_request(history_req1)
IO.puts("Agent 1 history: #{inspect(history_resp1)}")

IO.puts("\n" <> "=" |> String.duplicate(60))
IO.puts(("\n" <> "=") |> String.duplicate(60))
IO.puts("🎉 AGENT-SPECIFIC TASK POOLS TEST COMPLETE!")
IO.puts("✅ Each agent now has their own task pool")
IO.puts("✅ No more task chaos or cross-contamination")

@@ -202,6 +202,7 @@ defmodule AgentTaskPoolTest do
  %{"result" => %{"content" => [%{"text" => text}]}} ->
    data = Jason.decode!(text)
    data["agent_id"]

  _ ->
    "unknown"
end
@@ -30,24 +30,27 @@ Process.sleep(1000)
IO.puts("\n2️⃣ Creating agent-specific tasks...")

# Tasks for Agent 1
task1_agent1 = Task.new("Fix auth bug", "Debug authentication issue", %{
  priority: :high,
  assigned_agent: agent1.id,
  metadata: %{agent_created: true}
})
task1_agent1 =
  Task.new("Fix auth bug", "Debug authentication issue", %{
    priority: :high,
    assigned_agent: agent1.id,
    metadata: %{agent_created: true}
  })

task2_agent1 = Task.new("Add auth tests", "Write auth tests", %{
  priority: :normal,
  assigned_agent: agent1.id,
  metadata: %{agent_created: true}
})
task2_agent1 =
  Task.new("Add auth tests", "Write auth tests", %{
    priority: :normal,
    assigned_agent: agent1.id,
    metadata: %{agent_created: true}
  })

# Tasks for Agent 2
task1_agent2 = Task.new("Write API docs", "Document endpoints", %{
  priority: :normal,
  assigned_agent: agent2.id,
  metadata: %{agent_created: true}
})
task1_agent2 =
  Task.new("Write API docs", "Document endpoints", %{
    priority: :normal,
    assigned_agent: agent2.id,
    metadata: %{agent_created: true}
  })

# Add tasks to respective inboxes
Inbox.add_task(agent1.id, task1_agent1)
@@ -76,7 +79,12 @@ IO.puts("\n4️⃣ Checking remaining tasks...")
status1 = Inbox.get_status(agent1.id)
status2 = Inbox.get_status(agent2.id)

IO.puts("Agent 1: #{status1.pending_count} pending, current: #{if status1.current_task, do: status1.current_task.title, else: "none"}")
IO.puts("Agent 2: #{status2.pending_count} pending, current: #{if status2.current_task, do: status2.current_task.title, else: "none"}")
IO.puts(
  "Agent 1: #{status1.pending_count} pending, current: #{if status1.current_task, do: status1.current_task.title, else: "none"}"
)

IO.puts(
  "Agent 2: #{status2.pending_count} pending, current: #{if status2.current_task, do: status2.current_task.title, else: "none"}"
)

IO.puts("\n🎉 SUCCESS! Agent-specific task pools working!")
@@ -90,14 +90,17 @@ defmodule SessionManagementTest do
case Jason.decode(body) do
  {:ok, %{"result" => _result}} ->
    IO.puts("   ✅ Valid MCP response received")

  {:ok, %{"error" => error}} ->
    IO.puts("   ⚠️ MCP error: #{inspect(error)}")

  _ ->
    IO.puts("   ❌ Invalid response format")
end

{:ok, %HTTPoison.Response{status_code: status_code, body: body}} ->
  IO.puts("❌ Request failed with status #{status_code}")

  case Jason.decode(body) do
    {:ok, parsed} -> IO.puts("   Error: #{inspect(parsed)}")
    _ -> IO.puts("   Body: #{body}")
@@ -10,6 +10,7 @@ Process.sleep(1000)

# Test 1: Initialize call (system call, should work without agent_id)
IO.puts("Testing initialize call...")

init_request = %{
  "jsonrpc" => "2.0",
  "id" => 1,
@@ -31,6 +32,7 @@ IO.puts("Initialize response: #{inspect(init_response)}")

# Test 2: Tools/list call (system call, should work without agent_id)
IO.puts("\nTesting tools/list call...")

tools_request = %{
  "jsonrpc" => "2.0",
  "id" => 2,
@@ -42,6 +44,7 @@ IO.puts("Tools/list response: #{inspect(tools_response)}")

# Test 3: Register agent call (should work)
IO.puts("\nTesting register_agent call...")

register_request = %{
  "jsonrpc" => "2.0",
  "id" => 3,
@@ -59,7 +62,8 @@ register_response = GenServer.call(AgentCoordinator.MCPServer, {:mcp_request, re
IO.puts("Register agent response: #{inspect(register_response)}")

# Test 4: Try a call that requires agent_id (should fail without agent_id)
IO.puts("\nTesting call that requires agent_id (should fail)...")
IO.puts("Testing call that requires agent_id (should fail)...")

task_request = %{
  "jsonrpc" => "2.0",
  "id" => 4,
@@ -76,4 +80,4 @@ task_request = %{
task_response = GenServer.call(AgentCoordinator.MCPServer, {:mcp_request, task_request})
IO.puts("Task creation response: #{inspect(task_response)}")

IO.puts("\n✅ All tests completed!")"
IO.puts("All tests completed!")
@@ -11,14 +11,17 @@ IO.puts("Testing VS Code tool integration...")

# Check if VS Code tools are available
tools = AgentCoordinator.MCPServer.get_tools()
vscode_tools = Enum.filter(tools, fn tool ->
  case Map.get(tool, "name") do
    "vscode_" <> _ -> true
    _ -> false
  end
end)

vscode_tools =
  Enum.filter(tools, fn tool ->
    case Map.get(tool, "name") do
      "vscode_" <> _ -> true
      _ -> false
    end
  end)

IO.puts("Found #{length(vscode_tools)} VS Code tools:")

Enum.each(vscode_tools, fn tool ->
  IO.puts("  - #{tool["name"]}")
end)
@@ -27,4 +30,4 @@ if length(vscode_tools) > 0 do
  IO.puts("✅ VS Code tools are properly integrated!")
else
  IO.puts("❌ VS Code tools are NOT integrated")
end
end
@@ -1,9 +0,0 @@
Add comprehensive agent activity tracking

- Enhanced Agent struct with current_activity, current_files, and activity_history fields
- Created ActivityTracker module to infer activities from tool calls
- Integrated activity tracking into MCP server tool routing
- Updated task board APIs to include activity information
- Agents now show real-time status like 'Reading file.ex', 'Editing main.py', 'Sequential thinking', etc.
- Added activity history to track recent agent actions
- All file operations and tool calls are now tracked and displayed