It's a simple integration between Cisco's Webex Assistant Skills SDK and the ChatGPT library.
It basically forwards every inquiry you send to this Webex Assistant Skill on to ChatGPT.
Yeah, that's basically all it does, and yes, I agree it's indeed pretty cool.
For now we'll settle for this (very) high-level design to give you an idea of how it's done.
(Yes, this diagram was made with Excalidraw.)
I would say it's pretty simple, since I've containerized the whole thing to make it as portable and as easy to use as possible, so you don't have to fiddle with SDKs and such.
I'll explain the logic behind it in a later section one day, but for now just jump to the TL;DR section.
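To make it a bit more concrete: the nginx reverse proxy terminates TLS and passes the skill requests to the uvicorn backend, which in turn calls the OpenAI API. Purely as an illustration (the model name and payload below are my assumptions, not necessarily what the backend actually sends), the call it ends up making on your behalf is essentially something like:
# Hypothetical sketch of the ChatGPT request your query gets forwarded into
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAPI_KEY" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "whatever you asked the skill"}]}'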
A Linux server with Docker installed (or any other container tool of your choice, e.g. podman + buildah... I used Docker though).
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
A public fully qualified domain name (FQDN) pointing to the VPS hosting the containers, reachable from the Internet on TCP port 443.
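To quickly check the DNS part of this prerequisite (the placeholder stands for your own FQDN; the answer should be the public IP of your VPS):
dig +short <your fqdn>
# or, if dig isn't available:
nslookup <your fqdn>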
A public certificate for your FQDN. You can obtain one with certbot, for example:
sudo docker run -it --rm --name certbot \
-v "/etc/letsencrypt:/etc/letsencrypt" \
-v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
-p 80:80 \
certbot/certbot certonly
The certificate and key will end up in:
/etc/letsencrypt/live/<your fqdn>/fullchain.pem
/etc/letsencrypt/live/<your fqdn>/privkey.pem
OpenAI account with an API token
This is what it looks like to fire it up on a virtual private server, but you could just as well build the container images and deploy them on your preferred cloud container service, or scrap the nginx layer altogether and use a lambda function... I'll try to enrich this with other deployment scenarios in the future.
Clone the repo
git clone https://github.com/bbird81/webex-skill-gpt.git
cd webex-skill-gpt
Use your certificate
Copy fullchain.pem and privkey.pem into the containers/nginx-reverse-proxy/certificates directory; they must be in Base64 (PEM) format (if you used certbot, the files already have these names).
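If you generated the certificate with certbot as shown in the prerequisites, copying the files into place from the repository root looks roughly like this (replace the placeholder with your own domain):
sudo cp /etc/letsencrypt/live/<your fqdn>/fullchain.pem containers/nginx-reverse-proxy/certificates/
sudo cp /etc/letsencrypt/live/<your fqdn>/privkey.pem containers/nginx-reverse-proxy/certificates/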
Fire up your container
Use docker compose to build and start your containers.
3.1. Edit the .env file with your favourite editor and fill in your OpenAI API token and the FQDN of your server:
# Insert OpenAPI token below
OPENAPI_KEY = "..."
# Fully Qualified Domain Name of your Webex Skill URL
FQDN = "..."
3.2. Run docker compose:
docker compose up -d
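To check that both containers (the nginx reverse proxy and the uvicorn backend) came up properly:
docker compose ps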
Create the skill
Create the skill following this tutorial.
** Please pay attention: the URL you configure in the Webex Skill portal MUST end with /parse **
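In other words, assuming the default setup where nginx proxies the skill at the root path, the URL you register should look like:
https://<your fqdn>/parse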
The secret and public key can be found in the output of the docker compose logs, by issuing:
docker compose logs
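The log output can be long; to narrow it down you can grep it (the exact wording of the relevant lines depends on the backend, so treat the pattern as a starting point):
docker compose logs | grep -i -E "secret|public key"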
For further details on logs, please check the Troubleshooting section
Proceed to enable the skill in Webex Control Hub
Test & Invoke it!
You might want to test it using this web tool or, if you're feeling lucky, just try it on a compatible device using the expression:
"Hey Webex, tell <your skill name> <your request to ChatGPT>".
The complete guide for creating a generic Webex Assistant Skill can be found here:
https://developer.webex.com/docs/api/guides/webex-assistant-skills-guide
Everybody knows this: you can't win them all :-/
You might have received some errors during the installation; I'll address the most common ones here. Please open an issue if none of this works for you and I'll see what I can do.
Never forget that this is a proof of concept provided "as is", so mind that any effort on my side is strictly BEST EFFORT: please be kind and patient.
webex-skill-gpt-uvicorn-backend-1 | OpenAI API token not set: ABORTING!
webex-skill-gpt-nginx-reverse-proxy-1 | FQDN env var not set: ABORTING!
If you see one of these errors, fix the corresponding variable in the .env file and rebuild the container images with the following command:
docker compose up -d --build --force-recreate
To check only the most recent log entries:
docker compose logs --since 3m
If the certificate files in the containers/nginx-reverse-proxy/certificates directory are not in Base64 (PEM) format, an error like this might show up:
webex-skill-gpt-nginx-reverse-proxy-1 | 2023/06/28 16:30:28 [emerg] 10#10: cannot load certificate "/etc/ssl/certs/nginx/fullchain.pem": PEM_read_bio_X509_AUX() failed (SSL: error:04800064:PEM routines::bad base64 decode)
webex-skill-gpt-nginx-reverse-proxy-1 | nginx: [emerg] cannot load certificate "/etc/ssl/certs/nginx/fullchain.pem": PEM_read_bio_X509_AUX() failed (SSL: error:04800064:PEM routines::bad base64 decode)
Valid certificate and key files look like this:
-----BEGIN CERTIFICATE-----
IUUouds90udsab91290ueubei39329 ... some multi-line rubbish here ...
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
UUouds90udsab91290ueubei39329 ... some multi-line rubbish here ...
-----END PRIVATE KEY-----
To convert or inspect the files, you can use openssl. Also, Google (or DuckDuckGo!) is your friend :-).
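A quick way to verify that the files really are valid PEM before rebuilding (paths as used in this repo; reading the private key may require sudo):
openssl x509 -in containers/nginx-reverse-proxy/certificates/fullchain.pem -noout -subject -dates
sudo openssl pkey -in containers/nginx-reverse-proxy/certificates/privkey.pem -noout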