Generate Python code for the REST API of Cisco Catalyst Center (formerly Cisco DNA Center). You can use this project as a blueprint for any other REST API.
Features:
Check out the YouTube Video!
- User query: "get me a list of all end of sales devices. include authentication"
- User query: "export a summary of all clients"
- User query: "get me a list of all devices and export this list as a spreadsheet locally. include the authentication function"
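For the third query above, the assistant might produce a script along these lines. This is a hedged sketch, not the assistant's verbatim output: the two endpoint paths are the documented Catalyst Center ones, while the host and field selection are placeholders.

```python
# Sketch of the kind of script the assistant generates for
# "get me a list of all devices and export this list as a spreadsheet".
import base64
import csv
import json
import ssl
import urllib.request

BASE_URL = "https://sandboxdnac.cisco.com"  # placeholder host

def get_auth_token(base_url, username, password):
    """POST basic-auth credentials to the token endpoint; return the X-Auth-Token."""
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"{base_url}/dna/system/api/v1/auth/token",
        method="POST",
        headers={"Authorization": f"Basic {creds}"},
    )
    ctx = ssl._create_unverified_context()  # lab gear often has self-signed certs
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)["Token"]

def get_devices(base_url, token):
    """Fetch the device inventory from the Intent API."""
    req = urllib.request.Request(
        f"{base_url}/dna/intent/api/v1/network-device",
        headers={"X-Auth-Token": token},
    )
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)["response"]

def export_devices_csv(devices, path):
    """Write a selection of device fields to a CSV spreadsheet."""
    fields = ["hostname", "managementIpAddress", "platformId", "softwareVersion"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(devices)
```

Opening the resulting CSV in any spreadsheet application completes the "export locally" part of the query.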
Clone the code & install all required libraries (recommended with Poetry):
First, install Poetry on your computer:
pip install poetry
Then install all dependencies within a virtual environment (using Poetry):
git clone https://github.com/flopach/create-your-own-api-assistant-cisco-catalyst-center &&
cd create-your-own-api-assistant-cisco-catalyst-center &&
poetry install &&
poetry shell
Decide which LLM to use: OpenAI or an open-source LLM with Ollama. If you use OpenAI, put your API key into the .env file. Open main.py and change the parameters if needed. Default settings are: OpenAI with the model "gpt-3.5-turbo".
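If you go with OpenAI, the .env file typically holds the API key. The variable name below is the conventional one read by OpenAI's client libraries and is an assumption about this project's setup:

```
OPENAI_API_KEY=<your-api-key>
```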
Run the server and start chatting at http://localhost:8000/:
chainlit run main.py
You should see a chat window. If it is your first run, type importdata to load, chunk, and embed all data in the vector database.
Note: Depending on the LLM, settings and config this can take some time. You can see the current status in the terminal.
Example: Using llama3 with no full data import on a MacBook Pro M1 (16GB RAM) took around 10 minutes.
The app follows the RAG (Retrieval-Augmented Generation) architecture, an efficient approach in which relevant context data is added to the user's LLM query.
Components overview:
In a RAG architecture, you embed your own (local) data into a vector database. When the user asks the LLM to generate output (LLM inference), relevant context data retrieved from the database is sent to the LLM together with the user query.
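As a toy illustration of that retrieval step (not the app's actual code, which uses a real embedding model and vector database), a bag-of-words vector can stand in for the embedding:

```python
# Minimal RAG retrieval sketch: embed chunks, pick the one most similar
# to the query, and prepend it to the prompt sent to the LLM.
import math
import re
from collections import Counter

def embed(text):
    """Stand-in embedding: bag-of-words term counts."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks):
    """Return the chunk most similar to the query."""
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))

chunks = [
    "To authenticate, POST your credentials to the token endpoint.",
    "A GET on network-device returns the inventory of all devices.",
]
context = retrieve("how do I authenticate to the API", chunks)
prompt = f"Context:\n{context}\n\nUser query: how do I authenticate to the API"
```

In the real app, the embedding model produces dense vectors and the vector database performs the similarity search, but the flow is the same.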
Three different examples of how to import and pre-process the data:
Note: Generating new data with the API specification can be time-intensive and is therefore optional by default. It takes approximately 1 hour with OpenAI APIs (GPT-3.5-turbo) and around 10 hours with llama3-8B on a MacBook Pro M1 (16GB RAM).
That's why I have already included the generated data in the JSON file extended_apispecs_documentation.json, located in the /data folder. This data was generated with GPT-3.5-turbo.
Once the data is imported, the user can query the LLM.
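Whichever import path you choose, the pre-processing step boils down to splitting documents into overlapping chunks before embedding them. A minimal, generic sketch follows; the chunk size and overlap are arbitrary example values, not the settings this project uses:

```python
def chunk_text(text, size=500, overlap=50):
    """Split text into chunks of `size` characters, with `overlap`
    characters shared between neighbouring chunks so that context
    crossing a boundary is not lost."""
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Character-based splitting is only one option; the FAQ below mentions trying other chunking methods to improve results.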
Q: Is this app ready for production? Does it provide 100% accurate results?
No. This project can be especially useful when developing new applications with the Catalyst Center REST API, but the output might not be 100% correct.
Q: How can the LLM generate better results?
Try other chunking methods, add other relevant data, extend the system prompt, include the response schemas from the OpenAPI specifications, etc. There are many ways to tweak this app to generate even better results!
Q: Should I use OpenAI or Ollama? What is your experience?
Try them both! You will see different performance from each LLM. I got better results with GPT-3.5-turbo compared to llama3-8B.
Q: How can I test the Python code if I don't have access to a Cisco Catalyst Center?
You can use a DevNet sandbox for free. Use the Catalyst Center always-on sandbox and copy the URL and credentials into your script.
Q: How can I change the bot name and auto-collapse messages?
Some Chainlit settings must be set in the configuration file and cannot be changed at runtime; otherwise, only the default parameters are used.
You can change these settings in .chainlit/config.toml:
[UI]
# Name of the app and chatbot.
name = "Catalyst Center API+Code Assistant"
# Large-size content is collapsed by default for a cleaner UI
default_collapse_content = false
1.0 - initial version. RAG with Catalyst Center 2.3.7
This project is licensed under the Cisco Sample Code License 1.1 - see the LICENSE.md file for details.
The Cisco Catalyst User Guide (PDF document) located in the "data" folder is copyright by Cisco.