Direct MCP Tool Execution Implementation

This document provides a comprehensive overview of the Direct MCP Tool Execution feature implemented across the LLM Platform AI Platform.

Overview

The Direct MCP Tool Execution feature provides an alternative integration path for executing Model Context Protocol (MCP) tools directly via a RESTful API endpoint, bypassing the need for an MCP server. This approach enhances cross-platform accessibility and system resilience.

Architecture

The implementation follows these key architectural principles:

  1. Primary Path Preservation: The MCP server remains the primary execution path for tools, maintaining backward compatibility.
  2. Fallback Resilience: Direct execution serves as a fallback when the MCP server is unavailable.
  3. Cross-Platform Accessibility: The REST API endpoint enables non-Drupal systems to access MCP tools.
  4. Configurability: All components support enabling/disabling direct execution via configuration.

Components

The implementation spans multiple projects in the LLM Platform AI Platform:

1. Neurosymbolic Intelligence Module (Drupal)

  • REST API Endpoint: Implements a new REST endpoint at /api/neurosymbolic/mcp/execute-tool
  • Controller Method: Adds an executeMcpTool() method to NeuroSymbolicRestController
  • OpenAPI Documentation: Documents the new endpoint in the OpenAPI specification
  • Functional Tests: Verifies the endpoint's functionality and error handling

2. LLM-MCP (Model Context Protocol Server)

  • DrupalContextBridge: Enhanced to support direct tool execution via Drupal REST API
  • MCPToolExecutor: Updated to use direct execution as a fallback when standard execution fails
  • MCPToolRegistry: Added methods to enable/disable direct Drupal fallback
  • Configuration: Added environment variables and config options to control fallback behavior

3. BFLLM (LLM Inference Engine)

  • NeuroSymbolicService: Enhanced with direct MCP tool execution capability
  • Fallback Mechanism: Implemented automatic fallback to direct execution when standard execution fails
  • Configuration: Added environment variables and config options to control direct execution
  • Examples: Added example code demonstrating direct execution usage

4. llmcli (Command Line Interface)

  • MCPClient: Enhanced to support direct tool execution and automatic fallback
  • Configuration: Added options to control direct execution behavior
  • Tests: Added tests to verify direct execution and fallback functionality

5. llm-gateway-sdk (API Client Libraries)

  • McpService: Added a new method for direct tool execution
  • Tests: Added tests to verify direct execution functionality

Integration Flow

The integration flow for the feature is as follows (a minimal sketch of this flow appears after the list):

  1. Primary Path: Client calls an MCP tool via the standard MCP server
  2. Fallback Path: If the MCP server is unavailable, client falls back to direct execution via the Drupal REST API
  3. Execution: The tool is executed by the Drupal module using the standard tool registry
  4. Response: Results are returned to the client in a standardized format
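
The sketch below shows this flow from a client's perspective. It is illustrative only: the primary executor and base URL are placeholders rather than any shipped API, while the endpoint path and response shape match the API reference later in this document.

// Minimal sketch of the primary/fallback flow described above.
// `primary` and `drupalBaseUrl` are placeholders, not shipped APIs.
async function executeToolWithFallback(
  toolName: string,
  parameters: Record<string, unknown>,
  primary: (tool: string, params: Record<string, unknown>) => Promise<unknown>,
  drupalBaseUrl = 'https://example.com'
): Promise<unknown> {
  try {
    // 1. Primary path: standard MCP server execution
    return await primary(toolName, parameters);
  } catch {
    // 2. Fallback path: direct execution via the Drupal REST API
    const response = await fetch(`${drupalBaseUrl}/api/neurosymbolic/mcp/execute-tool`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ tool_name: toolName, parameters }),
    });
    // 3.-4. The Drupal module executes the tool and returns a
    // standardized { success, result, error } payload
    const payload = await response.json();
    if (!payload.success) {
      throw new Error(payload.error ?? 'Direct execution failed');
    }
    return payload.result;
  }
}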

Configuration Options

The direct execution capability can be configured in several ways:

Environment Variables

# For DrupalContextBridge in LLM-MCP
export DRUPAL_DIRECT_TOOL_EXECUTION=true

# For NeuroSymbolicService in BFLLM
export ENABLE_DIRECT_DRUPAL_EXECUTION=true
export DRUPAL_BASE_URL=https://example.com

Programmatic Configuration

// For DrupalContextBridge in LLM-MCP
const bridge = new DrupalContextBridge({
  directToolExecution: true,
  baseUrl: 'https://example.com',
});

// For NeuroSymbolicService in BFLLM
const service = new NeuroSymbolicService({
  enableDirectDrupalExecution: true,
  drupalBaseUrl: 'https://example.com',
});

// For MCPClient in llmcli
const client = new MCPClient({
  useDrupalDirect: true,
  drupalBaseUrl: 'https://example.com',
});

API Reference

REST API Endpoint

POST /api/neurosymbolic/mcp/execute-tool

Request body:
{
  "tool_name": string,
  "parameters": object
}

Response:
{
  "success": boolean,
  "result": any,
  "error": string (optional)
}

DrupalContextBridge (LLM-MCP)

// Execute a tool directly via Drupal
async executeToolDirectly(
  toolName: string,
  parameters: Record<string, any> = {}
): Promise<any>
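
For example (a usage sketch, reusing the bridge instance from the configuration example above; the tool name and parameters are illustrative):

// Execute a registered MCP tool through the Drupal REST API
const toolResult = await bridge.executeToolDirectly('knowledge_graph_search', {
  query: 'artificial intelligence',
});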

NeuroSymbolicService (BFLLM)

// Execute a tool directly via Drupal
async executeMcpToolDirectly(
  toolName: string,
  parameters: Record<string, any> = {}
): Promise<any>

// Execute a tool with fallback to direct execution
async executeMcpTool(
  toolName: string,
  parameters: Record<string, any> = {},
  mcpExecutor: any,
  options: { useFallback?: boolean } = {}
): Promise<any>
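
A usage sketch of the fallback-aware variant, reusing the service instance from the configuration example above; mcpExecutor stands in for the standard executor the service normally delegates to and is an assumption:

declare const mcpExecutor: any; // standard MCP executor, provided elsewhere (assumption)

// Standard execution first, falling back to direct execution on failure
const fallbackResult = await service.executeMcpTool(
  'knowledge_graph_search',
  { query: 'artificial intelligence' },
  mcpExecutor,
  { useFallback: true }
);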

MCPClient (llmcli)

// Execute a tool directly via Drupal
async executeToolDirectly(
  toolName: string,
  parameters: Record<string, unknown> = {}
): Promise<any>

// Execute a tool with automatic fallback
async executeTool(
  toolName: string,
  parameters: Record<string, unknown> = {}
): Promise<any>
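
A usage sketch that skips the MCP server entirely, reusing the client instance from the configuration example above:

// Force direct execution via the Drupal REST API
const directResult = await client.executeToolDirectly('knowledge_graph_search', {
  query: 'artificial intelligence',
});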

Testing

Comprehensive tests were implemented across all projects (an illustrative test sketch follows the list):

  1. Functional Tests: Testing the Drupal REST API endpoint
  2. Unit Tests: Testing individual components in each project
  3. Integration Tests: Testing cross-project integration
  4. Configuration Tests: Testing different configuration options

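As an illustration, a fallback unit test might look like the following Jest-style sketch. It is hypothetical rather than taken from the repositories; the import path and mocking approach are assumptions.

// Hypothetical Jest-style test: executeTool should fall back to
// direct execution when the MCP server is unreachable.
import { MCPClient } from 'llmcli'; // import path is an assumption

describe('MCPClient fallback', () => {
  it('falls back to direct execution when standard execution fails', async () => {
    const client = new MCPClient({
      mcpServerUrl: 'https://unreachable.mcp.example.com', // forces the primary path to fail
      useDrupalDirect: true,
      drupalBaseUrl: 'https://example.com',
    });

    // Stub the direct path so the test has no network dependency
    const direct = jest
      .spyOn(client, 'executeToolDirectly')
      .mockResolvedValue({ success: true, result: [] });

    await client.executeTool('knowledge_graph_search', {
      query: 'artificial intelligence',
    });

    expect(direct).toHaveBeenCalledWith('knowledge_graph_search', {
      query: 'artificial intelligence',
    });
  });
});
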
Example Usage

Direct Execution

// Using NeuroSymbolicService
const service = new NeuroSymbolicService({
  enableDirectDrupalExecution: true,
  drupalBaseUrl: 'https://example.com',
});

const result = await service.executeMcpToolDirectly('knowledge_graph_search', {
  query: 'artificial intelligence',
});

Fallback Execution

// Using MCPClient with automatic fallback
const client = new MCPClient({
  mcpServerUrl: 'https://mcp.example.com',
  useDrupalDirect: true,
  drupalBaseUrl: 'https://example.com',
});

try {
  // This will automatically try direct execution if the MCP server is unavailable
  const result = await client.executeTool('knowledge_graph_search', {
    query: 'artificial intelligence',
  });
} catch (error) {
  console.error('Both execution methods failed:', error);
}

Conclusion

The Direct MCP Tool Execution feature enhances the LLM Platform AI Platform by providing:

  1. Improved Resilience: Fallback mechanism when the MCP server is unavailable
  2. Enhanced Accessibility: Simplified integration for external systems
  3. Reduced Dependencies: Less reliance on the MCP server for basic tool execution
  4. Flexible Configuration: Options to control direct execution behavior

This implementation ensures the platform can continue to provide reliable AI capabilities even in challenging network conditions or during MCP server maintenance.

Next Steps

Potential future enhancements include:

  1. Performance Optimization: Caching common tool executions for improved response times
  2. Authentication Enhancements: Advanced authentication mechanisms for the REST API endpoint
  3. Metrics and Monitoring: Advanced telemetry for direct execution requests
  4. Load Balancing: Intelligent distribution of requests between MCP server and direct execution

Document Version: 1.0.0
Last Updated: June 1, 2025
Status: Implementation Complete