# Implementation Notes: Slack MCP Client

## Current State

**Project Goal:** Build a Slack bot that interacts with external tools and data sources via the Model Context Protocol (MCP), implemented in Go.
- **Architecture:** See `README.md` for the high-level design. The core components are the Slack integration, the Go application (`slack-mcp-client`), and external MCP servers.
- **Slack Integration:** Full-featured Slack client using `slack-go/slack` with Socket Mode for secure communication, supporting mentions, direct messages, and rich Block Kit formatting.
- **MCP Client Configuration:** Flexible configuration of multiple MCP servers through `mcp-servers.json`, following the schema defined in `mcp-schema.json`.
- **LLM Integration:** Multi-provider LLM support through a factory pattern with LangChain as the gateway, supporting OpenAI, Anthropic, and Ollama providers.
## Architecture & Implementation

### Core Components
#### Configuration System (`internal/config/`)

- Configuration loaded from environment variables for Slack credentials and LLM settings (sketched below)
- MCP server configurations read from a JSON file whose schema is rooted at an `mcpServers` property
- Each server defined with `command`, `args`, `mode`, and optional `env` properties
- Support for both HTTP/SSE and stdio transport modes
- LLM provider configuration with factory pattern support
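A minimal sketch of the environment-driven part of that loading. The `Config` struct, the variable names, and the default fallback are illustrative assumptions, not the project's actual types:

```go
package config

import (
	"fmt"
	"os"
)

// Config holds settings read at start-up. Field and variable names are
// illustrative; see internal/config/ for the real types.
type Config struct {
	SlackBotToken string // xoxb- token for the Web API
	SlackAppToken string // xapp- token for Socket Mode
	LLMProvider   string // key into the llm_providers map
}

func LoadFromEnv() (*Config, error) {
	cfg := &Config{
		SlackBotToken: os.Getenv("SLACK_BOT_TOKEN"),
		SlackAppToken: os.Getenv("SLACK_APP_TOKEN"),
		LLMProvider:   os.Getenv("LLM_PROVIDER"),
	}
	if cfg.SlackBotToken == "" || cfg.SlackAppToken == "" {
		return nil, fmt.Errorf("SLACK_BOT_TOKEN and SLACK_APP_TOKEN must be set")
	}
	if cfg.LLMProvider == "" {
		cfg.LLMProvider = "openai" // assumed default; the real default may differ
	}
	return cfg, nil
}
```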
#### Slack Client (`internal/slack/`)

- Connects to Slack using Socket Mode for secure, firewall-friendly communication (see the sketch after this list)
- Handles app mentions and direct messages
- Processes user prompts and forwards them to LLM providers
- Advanced message formatting with Block Kit support through `internal/slack/formatter/`
- Returns responses and tool results to Slack channels/DMs with rich formatting
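The core of such a Socket Mode loop, as a self-contained sketch built on `slack-go/slack`; the real client in `internal/slack/` handles more event types and forwards prompts to the LLM layer instead of echoing them back:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/slack-go/slack"
	"github.com/slack-go/slack/slackevents"
	"github.com/slack-go/slack/socketmode"
)

func main() {
	api := slack.New(
		os.Getenv("SLACK_BOT_TOKEN"),
		slack.OptionAppLevelToken(os.Getenv("SLACK_APP_TOKEN")),
	)
	client := socketmode.New(api)

	go func() {
		for evt := range client.Events {
			if evt.Type != socketmode.EventTypeEventsAPI {
				continue // connection lifecycle events, interactive payloads, etc.
			}
			client.Ack(*evt.Request) // ack first so Slack does not redeliver
			apiEvent, ok := evt.Data.(slackevents.EventsAPIEvent)
			if !ok {
				continue
			}
			if mention, ok := apiEvent.InnerEvent.Data.(*slackevents.AppMentionEvent); ok {
				// The real handler forwards mention.Text to the LLM provider;
				// echoing keeps this sketch self-contained.
				api.PostMessage(mention.Channel,
					slack.MsgOptionText(fmt.Sprintf("received: %s", mention.Text), false))
			}
		}
	}()

	if err := client.Run(); err != nil { // blocks while the WebSocket is open
		log.Fatal(err)
	}
}
```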
#### MCP Client (`internal/mcp/`)

- Support for multiple transport protocols:
  - HTTP/SSE (Server-Sent Events) for real-time communication with web-based MCP servers
  - stdio for local development with command-line tools
- Dynamic initialization with proper command-line argument parsing
- Runtime discovery of available tools from MCP servers (see the sketch after this list)
- Uses `mcp-go` v0.31.0 with a unified transport interface
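A sketch of start-up and tool discovery over the stdio transport with the `mcp-go` client. The server command reuses the filesystem example from the configuration below; treat the exact request shapes as approximate and check them against the pinned v0.31.0 API:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/mark3labs/mcp-go/client"
	"github.com/mark3labs/mcp-go/mcp"
)

func main() {
	// Spawn a stdio MCP server as a child process (env vars, then args).
	c, err := client.NewStdioMCPClient(
		"npx", nil,
		"-y", "@modelcontextprotocol/server-filesystem", "/tmp",
	)
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	ctx := context.Background()

	// MCP handshake: advertise the client, learn the server's capabilities.
	initReq := mcp.InitializeRequest{}
	initReq.Params.ProtocolVersion = mcp.LATEST_PROTOCOL_VERSION
	initReq.Params.ClientInfo = mcp.Implementation{Name: "slack-mcp-client", Version: "0.1.0"}
	if _, err := c.Initialize(ctx, initReq); err != nil {
		log.Fatal(err)
	}

	// Runtime tool discovery.
	tools, err := c.ListTools(ctx, mcp.ListToolsRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range tools.Tools {
		fmt.Println(t.Name, ":", t.Description)
	}
}
```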
#### LLM Provider System (`internal/llm/`)

- Factory pattern for provider registration and initialization (sketched below)
- Registry system for managing multiple LLM providers
- Configuration-driven provider setup
- LangChain as the unified gateway for all providers
- Support for OpenAI, Anthropic, and Ollama providers
- Availability checking and fallback mechanisms
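One common Go shape for such a factory registry; the `Provider` interface and function names here are illustrative, not the project's actual API:

```go
package llm

import "fmt"

// Provider is a minimal stand-in for whatever interface the real
// providers implement behind the LangChain gateway.
type Provider interface {
	Generate(prompt string) (string, error)
	Available() bool
}

// Factory builds a Provider from its configuration block.
type Factory func(cfg map[string]string) (Provider, error)

var factories = map[string]Factory{}

// Register is typically called from each provider package's init().
func Register(name string, f Factory) { factories[name] = f }

// New instantiates the configured provider and checks its availability,
// so the caller can fall back to another registered provider on error.
func New(name string, cfg map[string]string) (Provider, error) {
	f, ok := factories[name]
	if !ok {
		return nil, fmt.Errorf("unknown LLM provider %q", name)
	}
	p, err := f(cfg)
	if err != nil {
		return nil, err
	}
	if !p.Available() {
		return nil, fmt.Errorf("LLM provider %q is not available", name)
	}
	return p, nil
}
```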
#### Handler System (`internal/handlers/`)

- Interface-based design for tool handlers
- LLM-MCP bridge for detecting tool invocation patterns (see the sketch after this list)
- Registry for centralized handler management
- Support for both structured JSON tool calls and natural-language detection
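The structured-JSON half of that detection might look like the following; the `toolCall` shape is an illustrative assumption, and the natural-language half is sketched under Tool Detection & Execution below:

```go
package handlers

import (
	"encoding/json"
	"strings"
)

// toolCall is one illustrative shape a structured invocation might take;
// the real bridge may recognise more variants.
type toolCall struct {
	Tool string         `json:"tool"`
	Args map[string]any `json:"args"`
}

// detectToolCall reports whether the LLM response is a structured tool
// invocation for one of the tools discovered from the MCP servers.
func detectToolCall(response string, known map[string]bool) (*toolCall, bool) {
	trimmed := strings.TrimSpace(response)
	if !strings.HasPrefix(trimmed, "{") {
		return nil, false // not JSON; fall through to natural-language detection
	}
	var call toolCall
	if err := json.Unmarshal([]byte(trimmed), &call); err != nil || call.Tool == "" {
		return nil, false
	}
	if !known[call.Tool] {
		return nil, false // never execute a tool the servers did not advertise
	}
	return &call, true
}
```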
#### Common Utilities (`internal/common/`)

- Structured logging with hierarchical loggers (`internal/common/logging/`)
- Standardized error handling (`internal/common/errors/`)
- HTTP client with retry logic (`internal/common/http/`)
- Shared types and utilities
## MCP Server Configuration

MCP servers are configured in a JSON file with this structure:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/tuannvm/Projects",
        "/Users/tuannvm/Downloads"
      ],
      "env": {
        "DEBUG": "mcp:*"
      }
    },
    "github": {
      "command": "github-mcp-server",
      "args": ["stdio"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-github-token"
      }
    },
    "web-server": {
      "mode": "http",
      "url": "http://localhost:8080/mcp",
      "initialize_timeout_seconds": 30
    }
  }
}
```
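Go types that could unmarshal this file; the JSON keys match the schema above, while the Go type names are illustrative:

```go
package config

import (
	"encoding/json"
	"os"
)

// ServerConfig mirrors one entry under "mcpServers".
type ServerConfig struct {
	Command                  string            `json:"command,omitempty"`
	Args                     []string          `json:"args,omitempty"`
	Mode                     string            `json:"mode,omitempty"` // "http" or "stdio" (default)
	URL                      string            `json:"url,omitempty"`
	Env                      map[string]string `json:"env,omitempty"`
	InitializeTimeoutSeconds int               `json:"initialize_timeout_seconds,omitempty"`
}

// MCPServersConfig is the top-level document.
type MCPServersConfig struct {
	MCPServers map[string]ServerConfig `json:"mcpServers"`
}

func LoadMCPServers(path string) (*MCPServersConfig, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg MCPServersConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}
```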
## LLM Provider Configuration

LLM providers are configured in the main configuration with factory pattern support:

```yaml
llm_provider: "openai" # Which provider to use

llm_providers:
  openai:
    type: "openai"
    model: "gpt-4o"
    # api_key loaded from OPENAI_API_KEY env var
  ollama:
    type: "ollama"
    model: "llama3"
    base_url: "http://localhost:11434"
  anthropic:
    type: "anthropic"
    model: "claude-3-5-sonnet-20241022"
    # api_key loaded from ANTHROPIC_API_KEY env var
```
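A sketch of loading that block and selecting the active provider, assuming a YAML file parsed with `gopkg.in/yaml.v3`; the Go type names are illustrative:

```go
package config

import (
	"os"

	"gopkg.in/yaml.v3"
)

// LLMConfig mirrors the YAML above.
type LLMConfig struct {
	LLMProvider  string                    `yaml:"llm_provider"`
	LLMProviders map[string]ProviderConfig `yaml:"llm_providers"`
}

type ProviderConfig struct {
	Type    string `yaml:"type"`
	Model   string `yaml:"model"`
	BaseURL string `yaml:"base_url,omitempty"`
	// api_key is intentionally absent: it comes from the provider's env var.
}

// SelectedProvider returns the block named by llm_provider.
func (c *LLMConfig) SelectedProvider() (ProviderConfig, bool) {
	p, ok := c.LLMProviders[c.LLMProvider]
	return p, ok
}

func LoadLLMConfig(path string) (*LLMConfig, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg LLMConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}
```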
## Main Process Flow

**Start-up Sequence:**

1. Parse command-line arguments and debug flags
2. Load environment variables and the configuration file
3. Initialize the structured logging system
4. Initialize the LLM provider registry with the configured providers
5. Initialize MCP clients for each configured server (sequentially; see the sketch after this list)
6. Connect to Slack using Socket Mode
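Step 5 is worth illustrating because initialization runs sequentially under per-server timeouts (see Troubleshooting below). This is an illustrative helper rather than the project's start-up code; `initFn` stands in for the real per-server client launch:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// initSequentially initializes MCP servers one at a time, each under its
// own timeout. Running them sequentially (not in parallel) avoids the
// stdio transport contention noted under Troubleshooting.
func initSequentially(
	servers map[string]time.Duration,
	initFn func(ctx context.Context, name string) error,
) []error {
	var errs []error
	for name, timeout := range servers {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		if err := initFn(ctx, name); err != nil {
			errs = append(errs, fmt.Errorf("init %s: %w", name, err))
		}
		cancel()
	}
	return errs
}

func main() {
	servers := map[string]time.Duration{
		"filesystem": 30 * time.Second,
		"github":     30 * time.Second,
	}
	errs := initSequentially(servers, func(ctx context.Context, name string) error {
		fmt.Println("initializing", name) // real code would start the MCP client here
		return nil
	})
	for _, err := range errs {
		fmt.Println("warning:", err) // a failed server is skipped, not fatal
	}
}
```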
**Message Processing:**

1. Receive a message from Slack (mention or DM)
2. Forward it to the configured LLM provider through the registry
3. Process the LLM response through the LLM-MCP bridge
4. If a tool invocation is detected:
   - Execute the appropriate MCP tool call
   - Format the tool results with Slack-compatible formatting
5. Return the final response to Slack with Block Kit formatting
**Tool Detection & Execution:**

1. Match JSON patterns for structured tool invocations
2. Match regular expressions for natural-language requests (sketched below)
3. Validate against the tools discovered from MCP servers
4. Execute the tool call with the appropriate parameters
5. Process and format the results with rich Slack formatting
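The natural-language side of step 2 can be as simple as a table of compiled patterns. The phrasing and the tool name below are invented for illustration; the real bridge derives its patterns from the tools discovered at runtime:

```go
package handlers

import "regexp"

// naturalLanguagePatterns maps illustrative phrasings to tool names.
var naturalLanguagePatterns = map[string]*regexp.Regexp{
	"filesystem_read": regexp.MustCompile(`(?i)\b(?:read|show|open)\s+(?:the\s+)?file\s+(\S+)`),
}

// detectNaturalLanguage returns a tool name and its captured argument
// when a known phrasing matches the user's prompt.
func detectNaturalLanguage(prompt string) (tool, arg string, ok bool) {
	for name, re := range naturalLanguagePatterns {
		if m := re.FindStringSubmatch(prompt); m != nil {
			return name, m[1], true
		}
	}
	return "", "", false
}
```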
## Slack Message Formatting

The application includes a comprehensive Slack formatting system:

**Automatic Format Detection:**

- Plain text with mrkdwn formatting
- JSON Block Kit structures
- Structured data converted to Block Kit

**Markdown Support:**

- Automatic conversion from standard Markdown to Slack mrkdwn
- Support for bold, italic, strikethrough, code blocks, lists, and links
- Conversion of quoted strings to inline code

**Block Kit Support:**

- Headers, sections, fields, actions, and dividers
- Automatic field truncation to stay within Slack's limits
- Rich interactive components (see the sketch after this list)
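How such Block Kit output can be assembled with `slack-go/slack`. The block layout is illustrative, not the formatter's actual output; the truncation uses Slack's 3,000-character limit for section text:

```go
package formatter

import "github.com/slack-go/slack"

const sectionTextLimit = 3000 // Slack rejects section text longer than this

// truncate keeps text within Slack's section limit, rune-safe.
func truncate(s string) string {
	r := []rune(s)
	if len(r) <= sectionTextLimit {
		return s
	}
	return string(r[:sectionTextLimit-1]) + "…"
}

// toolResultBlocks renders a tool result as header + divider + section.
func toolResultBlocks(title, body string) []slack.Block {
	return []slack.Block{
		slack.NewHeaderBlock(
			slack.NewTextBlockObject(slack.PlainTextType, title, false, false),
		),
		slack.NewDividerBlock(),
		slack.NewSectionBlock(
			slack.NewTextBlockObject(slack.MarkdownType, truncate(body), false, false),
			nil, nil,
		),
	}
}
```

A caller would post these with `api.PostMessage(channel, slack.MsgOptionBlocks(blocks...))`.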
## Debugging & Troubleshooting

**Logging System:**

- Structured logging with multiple levels (Debug, Info, Warn, Error)
- Component-specific loggers for better tracking
- Log level configurable via environment variable (see the sketch after this list)
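One way to get level-from-environment plus per-component child loggers, here with the standard library's `log/slog`; the `LOG_LEVEL` variable name and function names are illustrative:

```go
package logging

import (
	"log/slog"
	"os"
	"strings"
)

// levelFromEnv maps LOG_LEVEL to a slog.Level, defaulting to Info.
func levelFromEnv() slog.Level {
	switch strings.ToLower(os.Getenv("LOG_LEVEL")) {
	case "debug":
		return slog.LevelDebug
	case "warn":
		return slog.LevelWarn
	case "error":
		return slog.LevelError
	default:
		return slog.LevelInfo
	}
}

// New returns the root logger.
func New() *slog.Logger {
	return slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
		Level: levelFromEnv(),
	}))
}

// ForComponent derives a child logger that stamps every record with its
// component name, giving the hierarchical tracking described above.
func ForComponent(root *slog.Logger, name string) *slog.Logger {
	return root.With("component", name)
}
```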
**MCP Transport Issues:**

- stdio transport issues resolved by initializing servers sequentially
- Proper timeout handling during server initialization
- Support for both HTTP/SSE and stdio transports

**Configuration Validation:**

- Automatic fallback to available providers
- Per-server enable/disable functionality
- Environment-variable overrides
## Future Improvements

**Enhanced Tool Discovery:**

- Better caching of discovered tools
- Dynamic tool refresh capabilities
- Tool usage analytics

**Advanced LLM Features:**

- Function-calling support for compatible providers
- Conversation context management
- Multi-turn conversation support

**Monitoring & Observability:**

- Metrics collection for tool usage
- Performance monitoring
- Health-check endpoints

**Security Enhancements:**

- Tool permission controls
- User-based access restrictions
- API key rotation support