Slack MCP Client
A production-ready bridge between Slack and AI models with full MCP compatibility.
This client enables AI models (OpenAI, Anthropic, Ollama) to interact with real tools and systems through Slack conversations. Built on the industry-standard Model Context Protocol (MCP), it provides secure access to filesystems, databases, Kubernetes clusters, Git repositories, and custom tools.
Compatible with MCP Specification 2025-06-18, the latest Model Context Protocol standard.
Key Features
Universal MCP Compatibility - Supports all transport methods (HTTP, SSE, stdio)
Multi-Provider LLM Support - OpenAI GPT-4o, Anthropic Claude, Ollama local models
Agent Mode - Multi-step reasoning with LangChain for complex workflows
RAG Integration - Knowledge base with semantic search capabilities
Thread-Aware Context - Maintains separate conversation history per Slack thread
User Context Integration - Personalized responses with cached user information
Unique Tool Naming - Server-prefixed tool names prevent conflicts across MCP servers
Production Ready - Comprehensive configuration, monitoring, and security
Use Cases
DevOps Teams - Infrastructure automation and monitoring through Slack
Development Teams - Code review, Git operations, and file management
Support Teams - Database queries, system status checks, and troubleshooting
General Use - AI assistance with actual tools and system integration
MCP Compatibility
Compliant with the official Model Context Protocol (2025-06-18 specification):
All Transport Methods - HTTP, SSE, and stdio protocols
JSON-RPC 2.0 - Standard communication protocol
Official MCP Servers - Compatible with all modelcontextprotocol/servers
Custom MCP Servers - Works with any MCP-compliant server
Security Standards - Implements user consent, data privacy, and tool safety requirements
Authenticating to SSE MCP Servers
To authenticate with Server-Sent Events (SSE) MCP servers, add custom HTTP headers to the server configuration:
Example:
{
  "httpHeaders": {
    "Authorization": "Bearer YOUR_TOKEN_HERE"
  }
}
Make sure to replace YOUR_TOKEN_HERE with your actual authentication token.
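In context, the header block sits on an SSE server entry in config.json. Here is a sketch, assuming httpHeaders is set per server entry (the server name, URL, and token variable are placeholders; verify the placement against the configuration schema). Environment variable substitution keeps the token out of the file:

{
  "mcpServers": {
    "remote-tools": {
      "url": "https://mcp.example.com/sse",
      "transport": "sse",
      "httpHeaders": {
        "Authorization": "Bearer ${MCP_SSE_TOKEN}"
      }
    }
  }
}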
How It Works
User interacts through Slack, sending messages that trigger intelligent AI workflows
LLM-MCP Bridge serves as the intelligent orchestration layer that:
Routes requests to appropriate LLM providers (OpenAI, Anthropic, Ollama)
Chooses between Agent Mode (multi-step reasoning) or Standard Mode (single response)
Integrates RAG system for knowledge retrieval and context enhancement
Manages tool discovery and execution across multiple MCP servers
Knowledge & Memory system provides contextual intelligence:
Simple JSON store for lightweight document storage
OpenAI Vector Store for semantic search and enterprise-grade RAG
Tool Ecosystem connects to diverse external systems:
Filesystem operations for file management
Git integration for repository interactions
Kubernetes cluster management and monitoring
Custom tools via HTTP, SSE, or stdio protocols
Infrastructure ensures production-ready deployment:
Unified JSON configuration with environment variable support
Prometheus metrics for observability and monitoring
OpenTelemetry tracing with Langfuse and OTLP providers
Structured logging for debugging and analytics
Features
✅ Multi-Mode MCP Client:
Server-Sent Events (SSE) for real-time communication with automatic retry
HTTP transport for JSON-RPC
stdio for local development and testing
✅ Slack Integration:
Uses Socket Mode for secure, firewall-friendly communication
Works with both channels and direct messages
Rich message formatting with Markdown and Block Kit
Thread-aware conversation tracking with separate context per thread
User context caching for personalized interactions
Customizable bot behavior and message history
✅ Multi-Provider LLM Support:
OpenAI (GPT-4, GPT-4o, etc.)
Anthropic (Claude 3.5 Sonnet, etc.)
Ollama (Local LLMs like Llama, Mistral, etc.)
Native tool calling and agent mode support
LangChain gateway for unified API
✅ Agent Mode:
Autonomous AI agents powered by LangChain
Multi-step reasoning and tool orchestration
Configurable agent iterations and behavior
Streaming responses with real-time updates
Advanced prompt engineering capabilities
✅ RAG (Retrieval-Augmented Generation):
Multiple providers: Simple JSON storage, OpenAI Vector Store
Reusable vector stores with vectorStoreId support
Configurable search parameters and similarity metrics
PDF ingestion with intelligent chunking
CLI tools for document management
✅ Unified Configuration:
Single JSON configuration file with JSON schema validation
Comprehensive timeout and retry configuration
Environment variable substitution and overrides
All underlying package options exposed
Smart defaults with full customization capability
Server-prefixed tool names to prevent naming conflicts
✅ Production Ready:
Docker container support with GHCR publishing
Kubernetes Helm charts with OCI registry
Comprehensive logging and error handling
Test coverage with security scanning
✅ Monitoring & Observability:
Prometheus metrics integration
Tool invocation tracking with error rates
LLM token usage monitoring by model and type
OpenTelemetry tracing with Langfuse and simple providers
Configurable observability providers with graceful fallbacks
Comprehensive span tracking for LLM operations and tool calls
Configurable metrics endpoint and logging levels
Installation
From Binary Release
Download the latest binary from the GitHub releases page or install using Go:
# Install latest version using Go
go install github.com/tuannvm/slack-mcp-client@latest
# Or build from source
git clone https://github.com/tuannvm/slack-mcp-client.git
cd slack-mcp-client
make build
# Binary will be in ./bin/slack-mcp-client
Running Locally with Binary
After installing the binary, you can run it locally with the following steps:
Set up environment variables:
# Using environment variables directly
export SLACK_BOT_TOKEN="xoxb-your-bot-token"
export SLACK_APP_TOKEN="xapp-your-app-token"
export OPENAI_API_KEY="sk-your-openai-key"
export OPENAI_MODEL="gpt-4o"
export LOG_LEVEL="info"
# Or create a .env file and source it
cat > .env << EOL
SLACK_BOT_TOKEN="xoxb-your-bot-token"
SLACK_APP_TOKEN="xapp-your-app-token"
OPENAI_API_KEY="sk-your-openai-key"
OPENAI_MODEL="gpt-4o"
LOG_LEVEL="info"
EOL
source .env
Create a unified configuration file:
# Create config.json with the new unified configuration format
cat > config.json << EOL
{
  "\$schema": "https://github.com/tuannvm/slack-mcp-client/schema/config-schema.json",
  "version": "2.0",
  "slack": {
    "botToken": "\${SLACK_BOT_TOKEN}",
    "appToken": "\${SLACK_APP_TOKEN}"
  },
  "llm": {
    "provider": "openai",
    "useNativeTools": true,
    "providers": {
      "openai": {
        "model": "gpt-4o",
        "apiKey": "\${OPENAI_API_KEY}",
        "temperature": 0.7
      }
    }
  },
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "\$HOME"]
    }
  },
  "monitoring": {
    "enabled": true,
    "metricsPort": 8080,
    "loggingLevel": "info"
  },
  "observability": {
    "enabled": true,
    "provider": "simple-otel",
    "endpoint": "\${OTEL_EXPORTER_OTLP_ENDPOINT}",
    "serviceName": "slack-mcp-client",
    "serviceVersion": "1.0.0"
  }
}
EOL
Run the application:
# Run with unified configuration (looks for config.json in current directory)
slack-mcp-client --config config.json
# Enable debug mode with structured logging
slack-mcp-client --config config.json --debug
# Validate configuration before running
slack-mcp-client --config-validate --config config.json
# Configure metrics port via config file or flag
slack-mcp-client --config config.json --metrics-port 9090
Migrating from Legacy Configuration
If you have an existing mcp-servers.json file from a previous version, you can migrate to the new unified configuration format:
# Automatic migration (recommended)
slack-mcp-client --migrate-config --config legacy-mcp-servers.json --output config.json
# Manual migration: Use examples as templates
cp examples/minimal.json config.json
# Edit config.json with your specific settings
# Validate the new configuration
slack-mcp-client --config-validate --config config.json
The new configuration format provides:
Single File: All settings in one config.json file
JSON Schema: IDE support with autocomplete and validation
Environment Variables: Use ${VAR_NAME} syntax for secrets
Smart Defaults: Minimal configuration required for basic usage
Comprehensive Options: All underlying package settings exposed
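A configuration built on these defaults can stay very small. For example, a minimal sketch that mirrors the fields used throughout this README (the exact contents of examples/minimal.json may differ):

{
  "$schema": "https://github.com/tuannvm/slack-mcp-client/schema/config-schema.json",
  "version": "2.0",
  "slack": {
    "botToken": "${SLACK_BOT_TOKEN}",
    "appToken": "${SLACK_APP_TOKEN}"
  },
  "llm": {
    "provider": "openai",
    "providers": {
      "openai": {
        "model": "gpt-4o",
        "apiKey": "${OPENAI_API_KEY}"
      }
    }
  }
}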
The application will connect to Slack and start listening for messages. You can check the logs for any errors or connection issues.
RAG Setup and Usage
The client includes an improved RAG (Retrieval-Augmented Generation) system that's compatible with LangChain Go and provides professional-grade performance:
Quick Start with RAG
Enable RAG in your configuration:
{
  "$schema": "https://github.com/tuannvm/slack-mcp-client/schema/config-schema.json",
  "version": "2.0",
  "slack": {
    "botToken": "${SLACK_BOT_TOKEN}",
    "appToken": "${SLACK_APP_TOKEN}"
  },
  "llm": {
    "provider": "openai",
    "useNativeTools": true,
    "providers": {
      "openai": {
        "model": "gpt-4o",
        "apiKey": "${OPENAI_API_KEY}"
      }
    }
  },
  "rag": {
    "enabled": true,
    "provider": "simple",
    "chunkSize": 1000,
    "providers": {
      "simple": {
        "databasePath": "./knowledge.json"
      },
      "openai": {
        "indexName": "my-knowledge-base",
        "vectorStoreId": "vs_existing_store_id",
        "dimensions": 1536,
        "maxResults": 10
      }
    }
  }
}
Ingest documents using CLI:
# Ingest PDF files from a directory
slack-mcp-client --rag-ingest ./company-docs --rag-db ./knowledge.json
# Test search functionality
slack-mcp-client --rag-search "vacation policy" --rag-db ./knowledge.json
# Get database statistics
slack-mcp-client --rag-stats --rag-db ./knowledge.json
Use in Slack:
Once configured, the LLM can automatically search your knowledge base:
User: "What's our vacation policy?"
AI: "Let me search our knowledge base for vacation policy information..." (Automatically searches RAG database)
AI: "Based on our company policy documents, you get 15 days of vacation..."
RAG Features
🎯 Smart Search: Advanced relevance scoring with word frequency, filename boosting, and phrase matching
🔗 LangChain Compatible: Drop-in replacement for standard vector stores
📈 Extensible: Easy to add vector embeddings and other backends
Custom Prompts and Assistants
The client supports advanced prompt engineering capabilities for creating specialized AI assistants:
System Prompts
Create custom AI personalities and behaviors:
# Create a custom system prompt file
cat > sales-assistant.txt << EOL
You are SalesGPT, a helpful sales assistant specializing in B2B software sales.
Your expertise includes:
- Lead qualification and discovery
- Solution positioning and value propositions
- Objection handling and negotiation
- CRM best practices and sales processes
Always:
- Ask qualifying questions to understand prospect needs
- Provide specific, actionable sales advice
- Reference industry best practices
- Maintain a professional yet friendly tone
When discussing pricing, always emphasize value over cost.
EOL
# Use the custom prompt
slack-mcp-client --system-prompt ./sales-assistant.txt
Configuration-Based Prompts
Define prompts in your configuration:
{
  "$schema": "https://github.com/tuannvm/slack-mcp-client/schema/config-schema.json",
  "version": "2.0",
  "slack": {
    "botToken": "${SLACK_BOT_TOKEN}",
    "appToken": "${SLACK_APP_TOKEN}"
  },
  "llm": {
    "provider": "openai",
    "useNativeTools": true,
    "customPrompt": "You are a helpful DevOps assistant specializing in Kubernetes and cloud infrastructure.",
    "providers": {
      "openai": {
        "model": "gpt-4o",
        "apiKey": "${OPENAI_API_KEY}",
        "temperature": 0.7
      }
    }
  }
}
Assistant Roles
Create specialized assistants for different use cases; a configuration sketch follows the list:
DevOps Assistant: Kubernetes, Docker, CI/CD expertise
Sales Assistant: Lead qualification, objection handling
HR Assistant: Policy questions, onboarding guidance
Support Assistant: Customer issue resolution
Code Review Assistant: Security, performance, best practices
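Each role is simply a different system prompt. For example, the code review assistant could be configured with a customPrompt along these lines (the prompt text here is illustrative):

{
  "llm": {
    "provider": "openai",
    "customPrompt": "You are a code review assistant. Focus on security, performance, and best practices, and explain the reasoning behind every suggestion."
  }
}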
Agent Mode
Agent Mode enables more interactive and context-aware conversations using LangChain's agent framework. Instead of single-prompt interactions, agents can engage in multi-step reasoning, use tools more strategically, and maintain better context throughout conversations.
How Agent Mode Works
Agent Mode uses LangChain's conversational agent framework to provide:
Interactive Conversations: Maintains context across multiple exchanges
Strategic Tool Usage: Agents decide when and how to use available tools
Multi-Step Reasoning: Can break down complex problems into manageable steps
Streaming Responses: Provides real-time updates during processing
User Context Integration: Incorporates cached user information for personalized responses
Thread Context Awareness: Maintains separate conversation history per Slack thread
Agent Mode Configuration
Enable Agent Mode in your configuration file:
{
  "$schema": "https://github.com/tuannvm/slack-mcp-client/schema/config-schema.json",
  "version": "2.0",
  "slack": {
    "botToken": "${SLACK_BOT_TOKEN}",
    "appToken": "${SLACK_APP_TOKEN}"
  },
  "llm": {
    "provider": "openai",
    "useNativeTools": true,
    "useAgent": true,
    "customPrompt": "You are a DevOps expert specializing in Kubernetes and cloud infrastructure. Always think through problems step by step.",
    "maxAgentIterations": 20,
    "providers": {
      "openai": {
        "model": "gpt-4o",
        "apiKey": "${OPENAI_API_KEY}",
        "temperature": 0.7
      }
    }
  },
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    },
    "github": {
      "command": "github-mcp-server",
      "args": ["stdio"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}
Configuration Options
llm.useAgent: Enable agent mode (default: false)
llm.useNativeTools: Use native LangChain tools vs. system prompt-based tools (default: false)
llm.customPrompt: System prompt for agent behavior
llm.maxAgentIterations: Maximum agent reasoning steps (default: 20)
Agent vs Standard Mode
Standard Mode:
Single-prompt interactions
Tools described in system prompt as JSON schemas
Direct tool call parsing and execution
More predictable token usage
Simpler conversation flow
Agent Mode:
Multi-turn conversational interactions
Context-aware tool usage decisions
Better user context integration
More natural conversation flow
Enhanced reasoning capabilities
Agent Mode Examples
Interactive Development Consultation:
User: "I need help optimizing my React app performance"
Agent Response:
🤖 I'd be happy to help optimize your React app performance! Let me understand your current setup better.
[Agent maintains conversation context and asks relevant follow-up questions]
Agent: "What specific performance issues are you experiencing? Are you seeing slow renders, large bundle sizes, or something else?"
User: "The app takes too long to load initially"
Agent: "Let me check your current bundle setup and suggest optimizations..."
[Agent uses filesystem tools to analyze the project structure and provides targeted advice]
Contextual Problem Solving:
User: "Can you help me with my deployment pipeline?"
Agent Response:
🤖 I'll help you with your deployment pipeline. Since I know you're working on a React project, let me check your current CI/CD setup.
[Agent leverages previous conversation context and user information to provide personalized assistance]
[Agent strategically uses relevant tools based on the conversation flow]
Agent Mode Best Practices
System Prompts: Design clear, specific system prompts that guide the agent's behavior
Tool Selection: Provide relevant tools for the agent's domain
Context Management: Agents maintain better context across conversations
User Personalization: Leverage user context integration for personalized responses
Tool Strategy: Choose between native tools or system prompt-based tools based on your needs
Limitations and Considerations
OpenAI Agent: The native OpenAI agent in langchaingo has known issues, so the client uses a conversational agent as a workaround
LangChain Dependency: Agent mode requires LangChain provider
Permissions: May require additional Slack permissions for user information retrieval
Performance: Agent mode may have different performance characteristics than standard mode
Kubernetes Deployment with Helm
For deploying to Kubernetes, a Helm chart is available in the helm-chart directory. This chart provides a flexible way to deploy the slack-mcp-client with proper configuration and secret management.
Installing from GitHub Container Registry
The Helm chart is also available directly from GitHub Container Registry, allowing for easier installation without needing to clone the repository:
# Log in to GitHub Container Registry (only needed once)
helm registry login ghcr.io -u USERNAME -p GITHUB_TOKEN
# Pull the Helm chart
helm pull oci://ghcr.io/tuannvm/charts/slack-mcp-client --version 0.1.0
# Or install directly
helm install my-slack-bot oci://ghcr.io/tuannvm/charts/slack-mcp-client --version 0.1.0 -f values.yaml
You can check available versions by visiting the GitHub Container Registry in your browser.
Prerequisites
Kubernetes 1.16+
Helm 3.0+
Slack Bot and App tokens
Basic Installation
# Create a values file with your configuration
cat > values.yaml << EOL
secret:
  create: true
  env:
    SLACK_BOT_TOKEN: "xoxb-your-bot-token"
    SLACK_APP_TOKEN: "xapp-your-app-token"
    OPENAI_API_KEY: "sk-your-openai-key"
    OPENAI_MODEL: "gpt-4o"
    LOG_LEVEL: "info"

# Optional: Configure MCP servers
configMap:
  create: true
EOL
# Install the chart
helm install my-slack-bot ./helm-chart/slack-mcp-client -f values.yaml
Configuration Options
The Helm chart supports various configuration options including:
Setting resource limits and requests
Configuring MCP servers via ConfigMap
Managing sensitive data via Kubernetes secrets
Customizing deployment parameters
For more details, see the Helm chart README.
Using the Docker Image from GHCR
The Helm chart uses the Docker image from GitHub Container Registry (GHCR) by default. You can specify a particular version or use the latest tag:
# In your values.yaml
image:
  repository: ghcr.io/tuannvm/slack-mcp-client
  tag: "latest" # Or use a specific version like "1.0.0"
  pullPolicy: IfNotPresent
To manually pull the image:
# Pull the latest image
docker pull ghcr.io/tuannvm/slack-mcp-client:latest
# Or pull a specific version
docker pull ghcr.io/tuannvm/slack-mcp-client:1.0.0
If you're using private images, you can configure image pull secrets in your values:
imagePullSecrets:
  - name: my-ghcr-secret
Docker Compose for Local Testing
For local testing and development, you can use Docker Compose to easily run the slack-mcp-client along with additional MCP servers.
Setup
Create a .env file with your credentials:
# Create .env file from example
cp .env.example .env
# Edit the file with your credentials
nano .env
Create a mcp-servers.json file (or use the example):
# Create mcp-servers.json from example
cp mcp-servers.json.example mcp-servers.json
# Edit if needed
nano mcp-servers.json
Start the services:
# Start services in detached mode
docker-compose up -d
# View logs
docker-compose logs -f
# Stop services
docker-compose down
Docker Compose Configuration
The included docker-compose.yml provides:
Environment variables loaded from the .env file
Volume mounting for MCP server configuration
Examples of connecting to additional MCP servers (commented out)
version: '3.8'
services:
  slack-mcp-client:
    image: ghcr.io/tuannvm/slack-mcp-client:latest
    container_name: slack-mcp-client
    environment:
      - SLACK_BOT_TOKEN=${SLACK_BOT_TOKEN}
      - SLACK_APP_TOKEN=${SLACK_APP_TOKEN}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OPENAI_MODEL=${OPENAI_MODEL:-gpt-4o}
    volumes:
      - ./mcp-servers.json:/app/mcp-servers.json:ro
You can easily extend this setup to include additional MCP servers in the same network.
Slack App Setup
Create a new Slack app at https://api.slack.com/apps
Enable Socket Mode and generate an app-level token
Check "Allow users to send Slash commands and messages from the chat tab" on the App Home page to enable direct messages to the app.
Add the following Bot Token Scopes:
app_mentions:read
chat:write
im:history
im:read
im:write
users:read
users.profile:read
Enable Event Subscriptions and subscribe to:
app_mention
message.im
Install the app to your workspace
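Much of this setup can also be captured in a Slack app manifest instead of clicking through the UI. A hedged sketch (field names follow Slack's app manifest schema; double-check them against Slack's documentation before importing):

{
  "display_information": {
    "name": "slack-mcp-client"
  },
  "features": {
    "bot_user": {
      "display_name": "slack-mcp-client"
    },
    "app_home": {
      "messages_tab_enabled": true,
      "messages_tab_read_only_enabled": false
    }
  },
  "oauth_config": {
    "scopes": {
      "bot": [
        "app_mentions:read",
        "chat:write",
        "im:history",
        "im:read",
        "im:write",
        "users:read",
        "users.profile:read"
      ]
    }
  },
  "settings": {
    "event_subscriptions": {
      "bot_events": ["app_mention", "message.im"]
    },
    "socket_mode_enabled": true
  }
}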
For detailed instructions on Slack app configuration, token setup, required permissions, and troubleshooting common issues, see the Slack Configuration Guide.
LLM Integration
The client supports multiple LLM providers through a flexible integration system:
LangChain Gateway
The LangChain gateway enables seamless integration with various LLM providers:
OpenAI: Native support for GPT models (default)
Ollama: Local LLM support for models like Llama, Mistral, etc.
Extensible: Can be extended to support other LangChain-compatible providers
LLM-MCP Bridge
The custom LLM-MCP bridge layer enables any LLM to use MCP tools without requiring native function-calling capabilities:
Universal Compatibility: Works with any LLM, including those without function-calling
Pattern Recognition: Detects when a user prompt or LLM response should trigger a tool call
Natural Language Support: Understands both structured JSON tool calls and natural language requests (see the sketch below)
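For illustration, a structured tool call the bridge might detect in an LLM response could look like the following; the field names and the tool-name format here are hypothetical, not the project's documented schema:

{
  "tool": "filesystem_read_file",
  "parameters": {
    "path": "README.md"
  }
}

A natural language request such as "read the README file" can trigger the same tool through pattern recognition.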
Configuration
LLM providers can be configured via environment variables or command-line flags:
# Set OpenAI as the provider (default)
export LLM_PROVIDER="openai"
export OPENAI_MODEL="gpt-4o"
# Use Anthropic
export LLM_PROVIDER="anthropic"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export ANTHROPIC_MODEL="claude-3-5-sonnet-20241022"
# Or use Ollama
export LLM_PROVIDER="ollama"
export LANGCHAIN_OLLAMA_URL="http://localhost:11434"
export LANGCHAIN_OLLAMA_MODEL="llama3"
Switching Between Providers
You can easily switch between providers by changing the LLM_PROVIDER environment variable:
# Use OpenAI
export LLM_PROVIDER=openai
# Use Anthropic
export LLM_PROVIDER=anthropic
# Use Ollama (local)
export LLM_PROVIDER=ollama
Configuration
The client uses two main configuration approaches:
Environment Variables
Configure LLM providers and Slack integration using environment variables:
SLACK_BOT_TOKEN: Bot token for the Slack API (required)
SLACK_APP_TOKEN: App-level token for Socket Mode (required)
OPENAI_API_KEY: API key for OpenAI authentication (required)
OPENAI_MODEL: OpenAI model to use (default: gpt-4o)
ANTHROPIC_API_KEY: API key for Anthropic authentication (required for Anthropic)
ANTHROPIC_MODEL: Anthropic model to use (default: claude-3-5-sonnet-20241022)
LOG_LEVEL: Logging level: debug, info, warn, or error (default: info)
LLM_PROVIDER: LLM provider to use: openai, anthropic, or ollama (default: openai)
LANGCHAIN_OLLAMA_URL: URL for Ollama when using LangChain (default: http://localhost:11434)
LANGCHAIN_OLLAMA_MODEL: Model name for Ollama when using LangChain (default: llama3)
LANGFUSE_ENDPOINT: Langfuse API endpoint for observability (optional)
LANGFUSE_PUBLIC_KEY: Langfuse public key for authentication (optional)
LANGFUSE_SECRET_KEY: Langfuse secret key for authentication (optional)
OTEL_EXPORTER_OTLP_ENDPOINT: OTLP endpoint for simple tracing (optional)
Monitoring & Observability Configuration
The client includes comprehensive monitoring capabilities with both metrics and distributed tracing:
Prometheus Metrics
Metrics Endpoint: Accessible at /metrics on the configured port
Default Port: 8080 (configurable via the --metrics-port flag)
Metrics Available:
slackmcp_tool_invocations_total: Counter for tool invocations, labeled by tool name, server, and error status
slackmcp_llm_tokens: Histogram for LLM token usage by type and model
OpenTelemetry Tracing
Supported Providers:
simple-otel: Basic OpenTelemetry tracing to OTLP endpoints (requires endpoint configuration)
langfuse-otel: Advanced LLM observability with Langfuse integration (requires endpoint and auth)
disabled: No tracing (default when no endpoint is configured)
Automatic Fallbacks: Failed providers automatically fall back to disabled state
Comprehensive Tracking: Spans for LLM operations, tool calls, and user interactions with detailed attributes
Example configuration and usage:
# Access metrics endpoint
curl http://localhost:8080/metrics
# Run with custom metrics port
slack-mcp-client --metrics-port 9090
# Enable Langfuse tracing (example)
export LANGFUSE_ENDPOINT="https://cloud.langfuse.com"
export LANGFUSE_PUBLIC_KEY="pk-your-public-key"
export LANGFUSE_SECRET_KEY="sk-your-secret-key"
# Enable simple OTLP tracing (example)
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
Unified Configuration Format
All configuration is now managed through a single config.json
file with comprehensive options:
{
  "$schema": "https://github.com/tuannvm/slack-mcp-client/schema/config-schema.json",
  "version": "2.0",
  "slack": {
    "botToken": "${SLACK_BOT_TOKEN}",
    "appToken": "${SLACK_APP_TOKEN}",
    "messageHistory": 50,
    "thinkingMessage": "Processing..."
  },
  "llm": {
    "provider": "openai",
    "useNativeTools": true,
    "useAgent": false,
    "customPrompt": "You are a helpful assistant.",
    "maxAgentIterations": 20,
    "providers": {
      "openai": {
        "model": "gpt-4o",
        "apiKey": "${OPENAI_API_KEY}",
        "temperature": 0.7,
        "maxTokens": 2000
      },
      "anthropic": {
        "model": "claude-3-5-sonnet-20241022",
        "apiKey": "${ANTHROPIC_API_KEY}",
        "temperature": 0.7
      }
    }
  },
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
      "initializeTimeoutSeconds": 30,
      "tools": {
        "allowList": ["read_file", "write_file", "list_directory"],
        "blockList": ["delete_file"]
      }
    },
    "web-api": {
      "url": "http://localhost:8080/mcp",
      "transport": "sse",
      "initializeTimeoutSeconds": 30
    }
  },
  "rag": {
    "enabled": true,
    "provider": "openai",
    "chunkSize": 1000,
    "providers": {
      "openai": {
        "vectorStoreId": "vs_existing_store_id",
        "dimensions": 1536,
        "maxResults": 10
      }
    }
  },
  "timeouts": {
    "httpRequestTimeout": "30s",
    "toolProcessingTimeout": "3m",
    "mcpInitTimeout": "30s"
  },
  "retry": {
    "maxAttempts": 3,
    "baseBackoff": "500ms",
    "maxBackoff": "5s"
  },
  "monitoring": {
    "enabled": true,
    "metricsPort": 8080,
    "loggingLevel": "info"
  },
  "observability": {
    "enabled": true,
    "provider": "langfuse-otel",
    "endpoint": "${LANGFUSE_ENDPOINT}",
    "publicKey": "${LANGFUSE_PUBLIC_KEY}",
    "secretKey": "${LANGFUSE_SECRET_KEY}",
    "serviceName": "slack-mcp-client",
    "serviceVersion": "1.0.0"
  }
}
For detailed configuration options and migration guides, see the Configuration Guide.
Automatic Reload Feature
The client supports optional automatic reloading to handle MCP server restarts without downtime - perfect for Kubernetes deployments where MCP servers may restart independently.
Note: The reload feature is disabled by default and must be explicitly enabled in your configuration file.
Configuration
To enable reload functionality, add reload settings to your config.json:
{
  "version": "2.0",
  "reload": {
    "enabled": true,
    "interval": "30m"
  }
}
Configuration Options:
enabled: Must be set to true to activate reload functionality (default: false)
interval: Time between automatic reloads (default: "30m", minimum: "10s")
Usage
Automatic Reload: When enabled, the application automatically reloads at the configured interval to reconnect to MCP servers and refresh tool discovery.
Manual Reload: Even with automatic reload disabled, you can trigger manual reloads using signals:
# In Kubernetes
kubectl exec -it <pod-name> -- kill -USR1 1
# Local process
kill -USR1 <process-id>
Benefits
Zero Downtime: Application stays running during reload
Kubernetes-Friendly: Pod continues running while application components restart
Opt-in: Disabled by default, only enabled when explicitly configured
Flexible: Both automatic (periodic) and manual (signal) triggers
Safe: Minimum interval validation prevents excessive reloading
When enabled, the reload feature automatically:
Reconnects to all configured MCP servers
Rediscovers available tools
Refreshes configuration settings
Maintains Slack connection throughout the process
Perfect for production environments where MCP servers may restart due to updates, scaling, or maintenance.
Slack-Formatted Output
The client includes a comprehensive Slack-formatted output system that enhances message display in Slack:
Automatic Format Detection: Automatically detects message type (plain text, markdown, JSON Block Kit, structured data) and applies appropriate formatting
Markdown Formatting: Supports Slack's mrkdwn syntax with automatic conversion from standard Markdown
Converts **bold** to *bold* for proper Slack bold formatting
Preserves inline code, block quotes, lists, and other formatting elements
Quoted String Enhancement: Automatically converts double-quoted strings to inline code blocks for better visualization
Example: "namespace-name" becomes `namespace-name` in Slack
Improves readability of IDs, timestamps, and other quoted values
Block Kit Integration: Converts structured data to Block Kit layouts for better visual presentation (example below)
Automatically validates against Slack API limits
Falls back to plain text if Block Kit validation fails
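For reference, the kind of payload this conversion targets is a standard Block Kit message. A generic example per Slack's Block Kit API (not this project's exact output):

{
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*Deployment status:* `healthy`"
      }
    }
  ]
}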
For more details, see the Slack Formatting Guide.
Transport Modes
The client supports three transport modes; a combined configuration sketch follows the list:
SSE (default): Uses Server-Sent Events for real-time communication with the MCP server, includes automatic retry logic for enhanced reliability
HTTP: Uses HTTP POST requests with JSON-RPC for communication
stdio: Uses standard input/output for local development and testing
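An mcpServers section can mix all three, chosen per server. A sketch (the stdio and SSE entries mirror examples shown earlier in this README; the "http" transport value and the URLs are illustrative, so verify them against the configuration schema):

{
  "mcpServers": {
    "local-files": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
    },
    "sse-api": {
      "url": "http://localhost:8080/mcp",
      "transport": "sse"
    },
    "http-api": {
      "url": "http://localhost:9090/mcp",
      "transport": "http"
    }
  }
}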
Documentation
Comprehensive documentation is available in the docs/ directory:
Configuration & Setup
Slack Configuration Guide - Complete guide for setting up your Slack app, including required permissions, tokens, and troubleshooting common issues
Development & Implementation
Implementation Notes - Detailed technical documentation covering the current architecture, core components, and implementation details
Requirements Specification - Comprehensive requirements documentation including implemented features, quality requirements, and future enhancements
User Guides
Slack Formatting Guide - Complete guide to message formatting including Markdown-to-Slack conversion, Block Kit layouts, and automatic format detection
RAG Implementation Guide - Detailed guide for the improved RAG system with LangChain Go compatibility and performance optimizations
RAG SQLite Implementation - Implementation plan for native Go SQLite integration with ChatGPT-like upload experience
Testing Guide - Comprehensive testing documentation covering unit tests, integration tests, manual testing procedures, and debugging
Quick Links
Setup: Start with the Slack Configuration Guide for initial setup
Agent Mode: See the Agent Mode section above for autonomous AI agents with tool chaining
RAG: Check the RAG Implementation Guide for document knowledge base integration
Formatting: See the Slack Formatting Guide for message formatting capabilities
RAG SQLite: See the RAG SQLite Implementation for native Go implementation with modern upload UX
Development: Check the Implementation Notes for technical details
Testing: Use the Testing Guide for testing procedures and debugging
Monitoring: See the metrics configuration section above for Prometheus integration
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the MIT License - see the LICENSE file for details.
CI/CD and Releases
This project uses GitHub Actions for continuous integration and GoReleaser for automated releases.
Continuous Integration Checks
Our CI pipeline performs the following checks on all PRs and commits to the main branch:
Code Quality
Linting: Using golangci-lint to check for common code issues and style violations
Go Module Verification: Ensuring go.mod and go.sum are properly maintained
Formatting: Verifying code is properly formatted with gofmt
Security
Vulnerability Scanning: Using govulncheck to check for known vulnerabilities in dependencies
Dependency Scanning: Using Trivy to scan for vulnerabilities in dependencies
SBOM Generation: Creating a Software Bill of Materials for dependency tracking
Testing
Unit Tests: Running tests with race detection and code coverage reporting
Build Verification: Ensuring the codebase builds successfully
Release Process
When changes are merged to the main branch:
CI checks are run to validate code quality and security
If successful, a new release is automatically created with:
Semantic versioning based on commit messages
Binary builds for multiple platforms
Docker image publishing to GitHub Container Registry
Helm chart publishing to GitHub Container Registry