A conversational AI bot for Webex that:

- responds when mentioned in a Webex space and keeps context within a thread
- works with any LiteLLM-supported model provider (OpenAI, Ollama, OpenRouter, Anthropic, and more)
- can call tools exposed by MCP HTTP servers
- supports access control by user, email domain, or room
- offers optional Sentry error tracking and performance monitoring
Run the package directly from PyPI using uvx:

```bash
uvx webex-bot-ai
```
For development or running from source:

```bash
git clone https://github.com/mhajder/webex-bot-ai.git
cd webex-bot-ai
uv sync
cp .env.example .env
# Edit .env with your configuration
```

`.env`:

```bash
WEBEX_ACCESS_TOKEN=your_webex_bot_token
OPENAI_API_KEY=your_openai_api_key
```

Then start the bot:

```bash
webex-bot-ai
```
Webex configuration:

| Variable | Description | Default |
|---|---|---|
| `WEBEX_ACCESS_TOKEN` | Webex bot access token (required) | - |
| `BOT_NAME` | Bot name for mention handling | `Assistant` |
| `BOT_DISPLAY_NAME` | Display name in Webex | `AI Assistant` |
LLM configuration:

| Variable | Description | Default |
|---|---|---|
| `LLM_MODEL` | LiteLLM model identifier | `gpt-4o-mini` |
| `LLM_TEMPERATURE` | Sampling temperature (0.0-2.0) | `0.7` |
| `LLM_MAX_TOKENS` | Maximum response tokens | `2048` |
| `LLM_API_BASE` | Custom API endpoint | - |
Example provider configurations:

```bash
# OpenAI
LLM_MODEL=gpt-4o-mini
OPENAI_API_KEY=sk-...

# Ollama (local)
LLM_MODEL=ollama_chat/gpt-oss:120b
LLM_API_BASE=http://localhost:11434

# OpenRouter
LLM_MODEL=openrouter/meta-llama/llama-3.1-70b-instruct
OPENROUTER_API_KEY=sk-or-...

# Anthropic
LLM_MODEL=claude-3-sonnet-20240229
ANTHROPIC_API_KEY=sk-ant-...
```
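Because completions are routed through LiteLLM, the `LLM_MODEL` and `LLM_API_BASE` pair maps directly onto a `litellm.completion()` call. Here is a minimal sketch of that mapping, for intuition only; it is not the bot's actual code and assumes the `litellm` package and a local Ollama server:

```python
# Illustrative sketch: how the LLM_* settings resolve to a LiteLLM call.
# Not this project's code; assumes `pip install litellm` and Ollama running locally.
import litellm

response = litellm.completion(
    model="ollama_chat/gpt-oss:120b",   # LLM_MODEL
    api_base="http://localhost:11434",  # LLM_API_BASE
    temperature=0.7,                    # LLM_TEMPERATURE
    max_tokens=2048,                    # LLM_MAX_TOKENS
    messages=[{"role": "user", "content": "What is AI?"}],
)
print(response.choices[0].message.content)
```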
Restrict who can use the bot:

```bash
# Restrict to specific users
WEBEX_APPROVED_USERS=user1@example.com,user2@example.com

# Restrict to specific email domains
WEBEX_APPROVED_DOMAINS=example.com

# Restrict to specific rooms
WEBEX_APPROVED_ROOMS=room_id_1,room_id_2
```
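For intuition, settings like these typically drive an allow-list check of the following shape. The sketch below is purely illustrative; the helper name and parsing are hypothetical and not taken from this project:

```python
# Hypothetical illustration of allow-list filtering; not this project's code.
import os

def is_approved(sender_email: str, room_id: str) -> bool:
    users = set(filter(None, os.getenv("WEBEX_APPROVED_USERS", "").split(",")))
    domains = set(filter(None, os.getenv("WEBEX_APPROVED_DOMAINS", "").split(",")))
    rooms = set(filter(None, os.getenv("WEBEX_APPROVED_ROOMS", "").split(",")))

    if users and sender_email not in users:
        return False
    if domains and sender_email.split("@")[-1] not in domains:
        return False
    if rooms and room_id not in rooms:
        return False
    return True  # an unset list leaves that dimension unrestricted
```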
Connect to MCP HTTP transport servers for extended tool capabilities:
```bash
MCP_ENABLED=true
MCP_REQUEST_TIMEOUT=30

# Single server
MCP_SERVERS=[{"name": "my-server", "url": "http://localhost:8000/mcp", "enabled": true}]

# Multiple servers with auth
MCP_SERVERS=[
  {"name": "tools-server", "url": "http://localhost:8000/mcp", "enabled": true},
  {"name": "secure-server", "url": "https://api.example.com/mcp", "auth_token": "your-token", "enabled": true}
]
```
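If you want a local server to point `MCP_SERVERS` at, one can be built with the official `mcp` Python SDK's FastMCP helper. This is a sketch under the assumption of a recent SDK version with the streamable HTTP transport; it is not part of this project:

```python
# Minimal MCP server sketch using the official `mcp` Python SDK (FastMCP).
# Assumes a recent SDK with the streamable HTTP transport; not part of this bot.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

if __name__ == "__main__":
    # Recent SDK versions serve this transport at http://localhost:8000/mcp by default.
    mcp.run(transport="streamable-http")
```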
Enable error tracking and performance monitoring with Sentry:
```bash
# Install with Sentry support
uv sync --extra sentry
```

Configure Sentry via environment variables:
| Variable | Description | Default |
|---|---|---|
| `SENTRY_DSN` | Sentry DSN (enables Sentry when set) | - |
| `SENTRY_TRACES_SAMPLE_RATE` | Trace sampling rate (0.0-1.0) | `1.0` |
| `SENTRY_SEND_DEFAULT_PII` | Include PII in events | `true` |
| `SENTRY_ENVIRONMENT` | Environment name (e.g., `production`) | - |
| `SENTRY_RELEASE` | Release/version identifier | Package version |
| `SENTRY_PROFILE_SESSION_SAMPLE_RATE` | Profile session sampling rate | `1.0` |
| `SENTRY_PROFILE_LIFECYCLE` | Profile lifecycle mode | `trace` |
| `SENTRY_ENABLE_LOGS` | Enable logging integration | `true` |
Example configuration:
```bash
# Enable Sentry error tracking
SENTRY_DSN=https://your-key@o12345.ingest.us.sentry.io/6789
SENTRY_ENVIRONMENT=production
```
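For reference, these variables correspond to standard `sentry_sdk.init()` options. A sketch of the typical mapping, illustrative rather than this project's actual initialization code (the profiling options map onto `sentry_sdk.init()` keywords analogously):

```python
# Illustrative mapping of the environment variables above onto sentry_sdk.init();
# not this project's actual initialization code.
import os
import sentry_sdk

sentry_sdk.init(
    dsn=os.getenv("SENTRY_DSN"),  # init is effectively a no-op when the DSN is unset
    environment=os.getenv("SENTRY_ENVIRONMENT"),
    traces_sample_rate=float(os.getenv("SENTRY_TRACES_SAMPLE_RATE", "1.0")),
    send_default_pii=os.getenv("SENTRY_SEND_DEFAULT_PII", "true") == "true",
)
```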
1. Start a conversation: mention the bot in a Webex space:

   ```
   @BotName What is AI?
   ```

2. Follow up in the thread: reply in the same thread for context-aware responses:

   ```
   @BotName Tell me more.
   ```
The bot maintains context within the thread, so you can have natural conversations.
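Conceptually, context-aware threading just means keying conversation history by thread. A toy sketch of that idea, entirely hypothetical and not this bot's implementation:

```python
# Toy sketch of per-thread conversation memory; hypothetical, not this bot's code.
from collections import defaultdict

histories: dict[str, list[dict]] = defaultdict(list)

def remember(thread_id: str, role: str, content: str) -> list[dict]:
    """Append a turn and return the full history to send as the LLM `messages` list."""
    histories[thread_id].append({"role": role, "content": content})
    return histories[thread_id]
```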
Lint and format with Ruff:

```bash
# Lint code
uv run ruff check src/

# Format code
uv run ruff format src/

# Fix linting issues
uv run ruff check src/ --fix
```
To extend the bot, add command modules under `src/commands/`; to switch providers or models, set `LLM_MODEL` using LiteLLM model syntax.

This project is licensed under the MIT License. See the LICENSE file for details.