Slack MCP Client in Go
This project provides a Slack bot client that serves as a bridge between Slack and Model Context Protocol (MCP) servers. By using Slack as the user interface, it allows LLMs to interact with multiple MCP servers through standardized MCP tools.
Important distinction: this client is not primarily designed to interact with the Slack API itself. It can, however, gain Slack API functionality by connecting to a dedicated Slack MCP server alongside other MCP servers.
User interacts only with Slack, sending messages through the Slack interface
Slack Bot Service is a single process that includes:
The Slack Bot component that handles Slack messages
The MCP Client component that communicates with MCP servers
The MCP Client forwards requests to the appropriate MCP Server(s)
MCP Servers execute their respective tools and return results
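Putting these pieces together, the message flow can be sketched as:

```
User ──▶ Slack (Socket Mode) ──▶ Slack Bot Service
                                   ├── Slack Bot component (handles Slack messages)
                                   └── MCP Client ──▶ MCP Server(s) ──▶ tools ──▶ results
```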
✅ Multi-Mode MCP Client:
Server-Sent Events (SSE) for real-time communication
HTTP transport for JSON-RPC
stdio for local development and testing
✅ Slack Integration:
Uses Socket Mode for secure, firewall-friendly communication
Works with both channels and direct messages
Rich message formatting with Markdown and Block Kit
Automatic conversion of quoted strings to code blocks for better readability
✅ LLM Support:
Native OpenAI integration
LangChain gateway for multiple LLM providers
✅ Tool Registration: Dynamically register and call MCP tools
✅ Docker container support
After installing the binary, you can run it locally with the following steps:
Set up environment variables:
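For example (the token values below are placeholders; the variable names are documented in the configuration table later in this document):

```sh
export SLACK_BOT_TOKEN="xoxb-your-bot-token"
export SLACK_APP_TOKEN="xapp-your-app-token"
export OPENAI_API_KEY="sk-your-openai-key"
```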
Create an MCP servers configuration file:
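A minimal configuration might look like the following (the schema shown is the common `mcpServers` layout; the server entry is illustrative):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
```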
Run the application:
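Assuming the binary is named `slack-mcp-client` and is on your `PATH`, starting it can be as simple as (the `--config` flag shown here is illustrative; check `--help` for the actual flags):

```sh
slack-mcp-client --config mcp-servers.json
```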
The application will connect to Slack and start listening for messages. You can check the logs for any errors or connection issues.
For deploying to Kubernetes, a Helm chart is available in the helm-chart directory. This chart provides a flexible way to deploy the slack-mcp-client with proper configuration and secret management.
The Helm chart is also available directly from GitHub Container Registry, allowing for easier installation without needing to clone the repository:
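For example (the OCI path and version are placeholders; substitute the chart location published for this project):

```sh
helm install slack-mcp-client oci://ghcr.io/<org>/charts/slack-mcp-client --version <version>
```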
You can check available versions by visiting the GitHub Container Registry in your browser.
Kubernetes 1.16+
Helm 3.0+
Slack Bot and App tokens
The Helm chart supports various configuration options including:
Setting resource limits and requests
Configuring MCP servers via ConfigMap
Managing sensitive data via Kubernetes secrets
Customizing deployment parameters
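As a sketch, a values override touching several of these options might look like the following (key names are illustrative; the chart's values.yaml defines the actual schema):

```yaml
resources:
  limits:
    cpu: 500m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
# Hypothetical key: reference a pre-created Kubernetes secret holding the tokens
existingSecret: slack-mcp-client-secrets
```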
The Helm chart uses the Docker image from GitHub Container Registry (GHCR) by default. You can specify a particular version or use the latest tag:
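For example, in your values file (the repository path is a placeholder):

```yaml
image:
  repository: ghcr.io/<org>/slack-mcp-client
  tag: latest  # or pin a specific release, e.g. "v1.2.3"
```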
To manually pull the image:
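Substituting the actual image path:

```sh
docker pull ghcr.io/<org>/slack-mcp-client:latest
```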
If you're using private images, you can configure image pull secrets in your values:
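For example (the secret name is hypothetical; create it first with `kubectl create secret docker-registry`):

```yaml
imagePullSecrets:
  - name: ghcr-pull-secret
```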
For local testing and development, you can use Docker Compose to easily run the slack-mcp-client along with additional MCP servers.
Create a .env file with your credentials:
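For example (placeholder values):

```sh
SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_APP_TOKEN=xapp-your-app-token
OPENAI_API_KEY=sk-your-openai-key
```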
Create a mcp-servers.json file (or use the example):
Start the services:
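For example:

```sh
docker compose up -d   # or `docker-compose up -d` with the standalone binary
```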
The included docker-compose.yml provides:
Environment variables loaded from the .env file
Volume mounting for MCP server configuration
Examples of connecting to additional MCP servers (commented out)
You can easily extend this setup to include additional MCP servers in the same network.
Create a new Slack app at https://api.slack.com/apps
Enable Socket Mode and generate an app-level token
Add the following Bot Token Scopes:
app_mentions:read
chat:write
im:history
im:read
im:write
Enable Event Subscriptions and subscribe to:
app_mention
message.im
Install the app to your workspace
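Alternatively, if you create the app from a manifest, the settings above map to roughly the following (the app name is illustrative):

```yaml
display_information:
  name: slack-mcp-client
features:
  bot_user:
    display_name: slack-mcp-client
oauth_config:
  scopes:
    bot:
      - app_mentions:read
      - chat:write
      - im:history
      - im:read
      - im:write
settings:
  event_subscriptions:
    bot_events:
      - app_mention
      - message.im
  socket_mode_enabled: true
```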
The client supports multiple LLM providers through a flexible integration system:
The LangChain gateway enables seamless integration with various LLM providers:
OpenAI: Native support for GPT models (default)
Ollama: Local LLM support for models like Llama, Mistral, etc.
Extensible: Can be extended to support other LangChain-compatible providers
The custom LLM-MCP bridge layer enables any LLM to use MCP tools without requiring native function-calling capabilities:
Universal Compatibility: Works with any LLM, including those without function-calling
Pattern Recognition: Detects when a user prompt or LLM response should trigger a tool call
Natural Language Support: Understands both structured JSON tool calls and natural language requests
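For instance, the bridge might extract a structured call of roughly this shape from an LLM response (the tool and argument names are purely illustrative):

```json
{
  "tool": "list_pods",
  "arguments": {
    "namespace": "production"
  }
}
```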
LLM providers can be configured via environment variables or command-line flags:
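For example, to route requests to a local Ollama instance through the LangChain gateway (these variables are documented in the table below):

```sh
export LLM_PROVIDER="ollama"
export LANGCHAIN_OLLAMA_URL="http://localhost:11434"
export LANGCHAIN_OLLAMA_MODEL="llama3"
```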
The client can be configured using the following environment variables:
| Variable | Description | Default |
| --- | --- | --- |
| `SLACK_BOT_TOKEN` | Bot token for the Slack API | (required) |
| `SLACK_APP_TOKEN` | App-level token for Socket Mode | (required) |
| `OPENAI_API_KEY` | API key for OpenAI authentication | (required) |
| `OPENAI_MODEL` | OpenAI model to use | `gpt-4o` |
| `LOG_LEVEL` | Logging level (debug, info, warn, error) | `info` |
| `LLM_PROVIDER` | LLM provider to use (openai, ollama, etc.) | `openai` |
| `LANGCHAIN_OLLAMA_URL` | URL for Ollama when using LangChain | `http://localhost:11434` |
| `LANGCHAIN_OLLAMA_MODEL` | Model name for Ollama when using LangChain | `llama3` |
The client includes a comprehensive output-formatting system that enhances how messages are displayed in Slack:
Automatic Format Detection: Automatically detects message type (plain text, markdown, JSON Block Kit, structured data) and applies appropriate formatting
Markdown Formatting: Supports Slack's mrkdwn syntax with automatic conversion from standard Markdown
Converts **bold** to *bold* for proper Slack bold formatting
Preserves inline code, block quotes, lists, and other formatting elements
Quoted String Enhancement: Automatically converts double-quoted strings to inline code blocks for better visualization
Example: "namespace-name"
becomes `namespace-name`
in Slack
Improves readability of IDs, timestamps, and other quoted values
Block Kit Integration: Converts structured data to Block Kit layouts for better visual presentation
Automatically validates against Slack API limits
Falls back to plain text if Block Kit validation fails
The client supports three transport modes:
SSE (default): Uses Server-Sent Events for real-time communication with the MCP server
HTTP: Uses HTTP POST requests with JSON-RPC for communication
stdio: Uses standard input/output for local development and testing
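In the MCP servers configuration, the transport is chosen per server. A sketch mixing all three modes might look like this (field names are illustrative; check the project's configuration reference for the exact schema):

```json
{
  "mcpServers": {
    "sse-server":   { "mode": "sse",   "url": "http://localhost:3001/sse" },
    "http-server":  { "mode": "http",  "url": "http://localhost:3002/rpc" },
    "local-server": { "mode": "stdio", "command": "npx", "args": ["-y", "some-mcp-server"] }
  }
}
```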
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
This project uses GitHub Actions for continuous integration and GoReleaser for automated releases.
Our CI pipeline performs the following checks on all PRs and commits to the main branch:
Linting: Using golangci-lint to check for common code issues and style violations
Go Module Verification: Ensuring go.mod and go.sum are properly maintained
Formatting: Verifying code is properly formatted with gofmt
Vulnerability Scanning: Using govulncheck to check for known vulnerabilities in dependencies
Dependency Scanning: Using Trivy to scan for vulnerabilities in dependencies
SBOM Generation: Creating a Software Bill of Materials for dependency tracking
Unit Tests: Running tests with race detection and code coverage reporting
Build Verification: Ensuring the codebase builds successfully
When changes are merged to the main branch:
CI checks are run to validate code quality and security
If successful, a new release is automatically created with:
Semantic versioning based on commit messages
Binary builds for multiple platforms
Docker image publishing to GitHub Container Registry
Helm chart publishing to GitHub Container Registry
For more details, see the release workflow configuration in the repository.
For detailed instructions on Slack app configuration, token setup, required permissions, and troubleshooting common issues, see the Slack setup guide in the repository's documentation.