This Proof of Concept (PoC) demonstrates how a group of agents can work together to resolve a network issue, specifically an ISIS adjacency issue.
The TIG (Telegraf, InfluxDB, Grafana) stack monitors the devices and sends an alert to LangGraph whenever an ISIS neighbor is lost. This alert triggers the agents to work, and you can review the summary in LangGraph Studio to decide the next steps.
You can watch the demo in action (about 7 minutes, no sound).
The demo is split into three separate repositories:
When the graph receives a request, the `node_coordinator` validates the information and passes it to the `node_orchestrator`, which decides which network agents to call. Each agent connects to the devices, gathers data, and returns a report. When all agents finish, their reports go to the `node_root_cause_analyzer`, which determines the root cause. If more details are needed, it requests them from the `node_orchestrator`; otherwise, it sends the final findings to the `node_report_generator`.
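As a rough illustration of how such a flow can be wired with LangGraph, the sketch below builds a graph with the coordinator, orchestrator, analyzer, and report nodes described above. The node bodies, state fields, and routing logic are placeholders (the agent nodes are omitted for brevity), not the actual implementation from this repository.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END


class GraphState(TypedDict, total=False):
    alert: str            # alert payload received from Grafana
    agent_reports: list   # reports returned by the network agents
    root_cause: str       # conclusion from the root cause analyzer


# Placeholder node implementations; the real nodes call LLMs and network agents.
def node_coordinator(state: GraphState) -> GraphState:
    return state  # validate the incoming alert / user request


def node_orchestrator(state: GraphState) -> GraphState:
    return state  # decide which network agents to invoke


def node_root_cause_analyzer(state: GraphState) -> GraphState:
    return state  # analyze the collected reports


def node_report_generator(state: GraphState) -> GraphState:
    return state  # produce the final summary


def needs_more_data(state: GraphState) -> str:
    # Route back to the orchestrator when the analysis is inconclusive.
    return "node_orchestrator" if not state.get("root_cause") else "node_report_generator"


builder = StateGraph(GraphState)
builder.add_node("node_coordinator", node_coordinator)
builder.add_node("node_orchestrator", node_orchestrator)
builder.add_node("node_root_cause_analyzer", node_root_cause_analyzer)
builder.add_node("node_report_generator", node_report_generator)

builder.add_edge(START, "node_coordinator")
builder.add_edge("node_coordinator", "node_orchestrator")
builder.add_edge("node_orchestrator", "node_root_cause_analyzer")
builder.add_conditional_edges("node_root_cause_analyzer", needs_more_data)
builder.add_edge("node_report_generator", END)

graph = builder.compile()
```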
Network agents:

- `agent_isis`: Retrieves ISIS info.
- `agent_routing`: Retrieves routing info.
- `agent_log_analyzer`: Checks logs.
- `agent_device_health`: Retrieves device health.
- `agent_network_interface`: Retrieves interfaces/config.
- `agent_interface_actions`: Performs interface actions.

Create an `.env` file in the root directory and set your keys there.
```
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY=<langsmith_token>
LANGSMITH_PROJECT="oncall-netops"
OPENAI_API_KEY=<openai_token>
```
Import the remote repositories used as git submodules:

```bash
make build-repos
```
Build the TIG stack, pyATS server, and webhook proxy. You can deploy each component separately (refer to their respective repositories for more info).
```bash
make build-demo
```
Note
If any required environment variable is missing, the `make` target will fail and print which variable is missing.
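As a rough Python equivalent of that check, a small pre-flight script could verify the required keys before building anything. The variable names below are the ones from the `.env` example above; this is only a sketch, not part of the Makefile.

```python
import os
import sys

# Keys required by the demo, as listed in the .env example above.
REQUIRED_VARS = [
    "LANGSMITH_TRACING",
    "LANGSMITH_ENDPOINT",
    "LANGSMITH_API_KEY",
    "LANGSMITH_PROJECT",
    "OPENAI_API_KEY",
]

missing = [name for name in REQUIRED_VARS if not os.getenv(name)]
if missing:
    sys.exit(f"Missing environment variables: {', '.join(missing)}")
```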
Note
Update: LangGraph Studio Desktop has been discontinued by LangChain. The only available option is now the LangGraph Server CLI with the web-based Studio interface.
Use the LangGraph Server CLI to run the server in the terminal. You can access the web version of LangGraph Studio through your browser.
- `PYATS_API_SERVER`: the pyATS API server used by the agents, reachable at `http://localhost:57000` (port `57000`).
- `LANGGRAPH_API_HOST`: used by the `grafana-to-langgraph-proxy` to communicate with the LangGraph API server, set to `http://host.docker.internal:56000`.

See the `.env.example` file for the rest of the environment variables used. These are set by the Makefile.
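For orientation, here is a minimal sketch of how an agent could consume `PYATS_API_SERVER`. The endpoint path and payload are hypothetical placeholders; the real routes are defined in the pyATS server repository.

```python
import os

import requests

# Read the server location from the environment (set via .env / the Makefile).
PYATS_API_SERVER = os.getenv("PYATS_API_SERVER", "http://localhost:57000")


def run_show_command(device: str, command: str) -> dict:
    """Send a show command to the pyATS API server.

    The '/api/v1/run' path and JSON shape are illustrative only,
    not the actual API exposed by the pyATS server in this demo.
    """
    response = requests.post(
        f"{PYATS_API_SERVER}/api/v1/run",
        json={"device": device, "command": command},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```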
Dependencies are automatically managed by `uv` during the build process.
Start the server with:
```bash
make run-environment
```
```
❯ make run-environment
langgraph dev --port 56000
WARNING:langgraph_api.cli:python_dotenv is not installed. Environment variables will not be available.
INFO:langgraph_api.cli:

        Welcome to

        [LangGraph ASCII banner]

- 🚀 API: http://127.0.0.1:56000
- 🎨 Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:56000
- 📚 API Docs: http://127.0.0.1:56000/docs

This in-memory server is designed for development and testing.
For production use, please use LangGraph Cloud.
```
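Once the server is up, you can also talk to it programmatically. The sketch below uses the LangGraph Python SDK (`langgraph_sdk`) to connect to the local server and start a run; the chosen assistant and the input fields (such as `user_request`) are assumptions, so check this project's graph configuration for the actual values.

```python
from langgraph_sdk import get_sync_client

# Connect to the locally running LangGraph API server.
client = get_sync_client(url="http://127.0.0.1:56000")

# List the graphs (assistants) exposed by the server.
assistants = client.assistants.search()
print([a["name"] for a in assistants])

# Create a thread and start a run. The assistant choice and the input shape
# ("user_request") are placeholders; adjust them to match this project's graph.
thread = client.threads.create()
run = client.runs.create(
    thread["thread_id"],
    assistant_id=assistants[0]["assistant_id"],
    input={"user_request": "Check ISIS adjacencies on cat8000v-0"},
)
print(run["run_id"])
```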
If you have issues with the web version, make sure:
If you don't want to use the web version, you can still see the operations in the terminal, but it is hard to follow and interact with due to the amount of output.
There are three devices involved in this demo. They run ISIS between them. You can inspect the topology here.
The use case built in this demo is when an ISIS neighbor is lost. Grafana detects the lost neighbor and sends an automatic alert to the graph. You can replicate the scenario by shutting down an ISIS interface, such as `GigabitEthernet5` on `cat8000v-0`, one of the XE devices, and seeing what happens.
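If you prefer to trigger the scenario from Python rather than the device CLI, a pyATS/Genie sketch like the one below could shut the interface down, assuming you have a testbed file with the device credentials (the `testbed.yaml` path is an assumption).

```python
from genie.testbed import load

# Load a pyATS testbed describing the lab devices (path is an assumption).
testbed = load("testbed.yaml")
device = testbed.devices["cat8000v-0"]

device.connect(log_stdout=False)

# Shut down the ISIS interface to trigger the lost-neighbor alert in Grafana.
device.configure(
    [
        "interface GigabitEthernet5",
        "shutdown",
    ]
)

# Reverse this later with "no shutdown" to restore the adjacency.
device.disconnect()
```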
The alert triggers a background job in LangGraph Studio. You won't be able to see the graph running in the GUI until it finishes (a tool limitation at this point). Inspect the logs if you want to see what is happening.
Once the graph is finished, you can see the results and interact with the agents. The threads won't auto-refresh to show the output; switch to another thread and back to see the results. Use the User Request field to interact with the graph about the alert received.
Note
If you're curious about the other inputs, they're used by the agents for different tasks. This is the state shared across the agents.
You can also use the graph to interact with the network devices without an alert. If so, use the same User Request field and provide the device hostname: `cat8000v-0`, `cat8000v-1`, or `cat8000v-2` (a future improvement).
Here you can see the traces from one execution of the demo. There you can find state, runs, inputs, and outputs.