> [!IMPORTANT]
> See AI-Network-Troubleshooting-PoC, which is a newer and better version of this demo. No further updates will be made to this repository.
Slides and Recording - here
This demo is built to showcase how AI might assist you in troubleshooting network issues.
The components used by this demo are:

- a CML topology
- a TIG stack (Telegraf, InfluxDB, and Grafana)
- an LLM agent (the gpt-4-turbo-preview model was used)
- a Webex bot as the chat client

For this demo, one alarm was created.
When the average number of ISIS neighbors over a 30-second window is less than the average number of ISIS neighbors over a 30-minute window, the alarm triggers a webhook for the LLM. This signals that a stable ISIS neighbor, one that had been up for the last 30 minutes, was lost, and it allows the alarm to work with any number (N) of ISIS neighbors.
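To make the condition concrete, here is a minimal sketch of the alert logic in Python. This is illustrative only; in the demo the check runs as a Grafana alert over the InfluxDB data, and the function and variable names below are hypothetical.

```python
# Illustrative only: the real check is a Grafana alert rule over
# InfluxDB data; the names here are hypothetical.
from statistics import mean

def should_alert(samples_30s: list[int], samples_30m: list[int]) -> bool:
    """Fire when the 30-second average neighbor count drops below
    the 30-minute average."""
    return mean(samples_30s) < mean(samples_30m)

# A neighbor that was stable for 30 minutes just went down:
# short-window average (1) < long-window average (2) -> alert fires.
assert should_alert(samples_30s=[1, 1, 1], samples_30m=[2, 2, 2, 2])
```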
For the demo to work, you must set the following environment variables. You can either export them or create a .env file with them. See .env.local for an example.
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
WEBEX_TEAMS_ACCESS_TOKEN=<YOUR_TEAM_ACCESS_TOKEN>
WEBEX_APPROVED_USERS_MAIL=<MAILS_OF_USERS_APPROVED_SEPARATED_BY_COMMAS>
WEBEX_USERNAME=<YOUR_WEBEX_USERNAME>
WEBEX_ROOM_ID=<THE_WEBEX_ROOM_ID>
```
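If you want to load these variables from a .env file in your own scripts, a sketch like the following works, assuming the python-dotenv package (the repository's own code may load them differently):

```python
# Sketch: read a .env file into the process environment.
# Assumes python-dotenv (pip install python-dotenv).
import os
from dotenv import load_dotenv

load_dotenv()  # picks up .env from the current directory
openai_key = os.environ["OPENAI_API_KEY"]
```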
NOTE: The Webex variables are only needed if you interact with the LLM using Webex. If you prefer to use another client, you will need to adapt the code accordingly.
To get your Webex token, go to https://developer.webex.com/docs/bots and create a bot. The easiest way to get the WEBEX_ROOM_ID is to open a room with your bot in the Webex app. Once you have your room, you can get the WEBEX_ROOM_ID with the List Rooms API, using the token you created before.
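For example, a short Python sketch against the List Rooms endpoint (https://webexapis.com/v1/rooms), assuming the requests package and your bot token in the environment:

```python
# Sketch: list Webex rooms to find the WEBEX_ROOM_ID of your bot room.
import os
import requests

token = os.environ["WEBEX_TEAMS_ACCESS_TOKEN"]
resp = requests.get(
    "https://webexapis.com/v1/rooms",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
resp.raise_for_status()
for room in resp.json()["items"]:
    print(room["id"], room["title"])
```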
For testing, you can use the GRAFANA_WEB_HOOK env var to send webhooks to another site, such as https://webhook.site/.
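A quick way to verify the webhook target is reachable is to post a dummy payload to it, for example with Python's requests package (the payload below is illustrative, not Grafana's exact alert format):

```python
# Sketch: send a dummy alert to the webhook target for testing.
import os
import requests

resp = requests.post(
    os.environ["GRAFANA_WEB_HOOK"],
    json={"status": "firing", "alertname": "ISIS neighbor lost"},  # illustrative payload
    timeout=10,
)
print(resp.status_code)
```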
If you have access to smith.langchain.com (recommended for viewing LLM operations), add your project ID and API key.
```
GRAFANA_WEB_HOOK=<WEB_HOOK_URL>
LANGCHAIN_PROJECT=<YOUR_LANGCHAIN_PROJECT_ID>
LANGCHAIN_API_KEY=<YOUR_LANGCHAIN_API_KEY>
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
```
The .env.local file is used to define all variables used by the containers. In a production environment, this file should be kept out of version control using the .gitignore file.
This demo uses a CML instance from the Cisco DevNet sandbox. You can also use a dedicated CML instance or an NSO sandbox.
After acquiring your sandbox, stop the default topology and wipe it out.
Then, import the topology file used for this demo and start the lab.
The TIG stack requires Docker and IP reachability to the CML instance. For this demo, I used my laptop.
To start the TIG stack, run:
```
./build_run_telegraf.sh
./build_run_influxdb.sh
./build_run_grafana.sh
```
Run

```
docker exec -it telegraf bash
```

and then `tail -F /tmp/telegraf-grpc.log` to see the Telegraf logs. Go to General > Network Telemetry to see the Grafana dashboard.

The llm_agent directory provides all the code used to run the LLM.
In this demo, the LLM is run using a Python virtual environment. Ensure that you install the requirements listed. The entry point for the application is the app file.
NOTE: In the upcoming weeks, a container will be added for the LLM.
The demo involves shutting down one interface, causing an ISIS failure, and allowing the LLM to diagnose the issue and implement a fix.
In the images below, GigabitEthernet5 was shut down on cat8000-v0, resulting in the loss of its ISIS adjacency with cat8000-v2.
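If you want to script the failure injection instead of typing at the device CLI, a minimal sketch with the netmiko library follows; the host address and credentials are placeholders for your CML lab, and the same two commands can simply be entered at the IOS XE CLI.

```python
# Sketch: shut down GigabitEthernet5 on cat8000-v0 to break the ISIS
# adjacency. Assumes netmiko; host/credentials are placeholders.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_xe",
    "host": "10.10.20.101",  # placeholder management IP
    "username": "admin",     # placeholder credentials
    "password": "admin",
}
with ConnectHandler(**device) as conn:
    conn.send_config_set(["interface GigabitEthernet5", "shutdown"])
```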
In Grafana, you can observe the ISIS neighbor count decreasing and triggering an alarm.
Next, you will receive a Webex notification from Grafana, and the LLM will receive the webhook. The webhook triggers the LLM to start looking at what the issue is and how to resolve it.
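On the receiving side, the webhook only needs an HTTP endpoint that hands the alert payload to the agent. Here is a minimal sketch assuming Flask and a hypothetical run_agent() function; the actual llm_agent application may be structured differently.

```python
# Sketch: receive Grafana's webhook and hand the payload to the agent.
# Flask and run_agent() are assumptions for illustration only.
from flask import Flask, request

app = Flask(__name__)

def run_agent(alert: dict) -> None:
    """Placeholder for the LLM troubleshooting workflow."""
    print("alert received:", alert)

@app.route("/webhook", methods=["POST"])
def webhook():
    run_agent(request.get_json(force=True))
    return {"status": "ok"}

if __name__ == "__main__":
    app.run(port=8080)
```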
Run

```
./build_run_grafana.sh
```

to destroy and recreate the container.

If the CML lab is not reachable from your laptop, it's usually due to a connectivity issue between the devices and the CML bridge. Here are some steps to resolve this:
- Connectivity usually starts to work after about 5 minutes.
- In more drastic cases, restart the cat8kv from the CML GUI.