NOTE: For a more recently developed collector with more output flexibility and support, please evaluate usage of the following Telegraf plugins for your use case: cisco_telemetry_mdt and cisco_telemetry_gnmi.
A Model-Driven Telemetry collector based on the open-source tool pipeline, including enhancements and bug fixes.

pipeline-gnmi is a Model-Driven Telemetry (MDT) collector based on the open-source tool pipeline, with gNMI support and fixes for maintainability (e.g. Go modules) and compatibility (e.g. Kafka version support). It supports MDT from IOS XE, IOS XR, and NX-OS, enabling end-to-end Cisco MDT collection for DIY operators.
The original pipeline README is included here for reference.
pipeline-gnmi is written in Go and targets Go 1.11+. Windows and MacOS/Darwin support is experimental.
git clone https://github.com/cisco-ie/pipeline-gnmi
cd pipeline-gnmi
make build
Alternatively, run go get github.com/cisco-ie/pipeline-gnmi and the binary will be located in $GOPATH/bin.
pipeline configuration support is maintained and detailed in the original README. Sample configuration is supplied as pipeline.conf.
This project introduces support for gNMI.
gNMI is a standardized and cross-platform protocol for network management and telemetry. gNMI does not require prior sensor path configuration on the target device, merely enabling gRPC/gNMI is enough. Sensor paths are requested by the collector (e.g. pipeline). Subscription type (interval, on-change, target-defined) can be specified per path.
Filtering of retrieved sensor values can be done directly at the input stage through selectors in the configuration file, by defining all the sensor paths that should be stored in a TSDB or forwarded via Kafka. Regular metrics filtering through metrics.json files is ignored and not implemented for gNMI, due to the lack of user-friendliness of that configuration.
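The selector behavior described above can be modeled as a simple whitelist filter. The following Python sketch is illustrative only (the function name and data layout are hypothetical, not pipeline's actual code):

```python
# Illustrative model of gNMI selector (whitelist) filtering: when one or more
# selectors are configured, only sensor values whose path starts with a
# selector are kept; everything else is dropped. When no selectors are
# configured, all values pass through. NOT pipeline's actual implementation.

def filter_by_selectors(values, selectors):
    """values: dict of sensor-path -> value; selectors: list of path prefixes."""
    if not selectors:  # no selectors configured: keep everything
        return dict(values)
    return {path: v for path, v in values.items()
            if any(path.startswith(sel) for sel in selectors)}

base = "Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters/"
values = {
    base + "packets-sent": 100,
    base + "packets-received": 90,
    base + "bytes-sent": 6400,
}
selectors = [base + "packets-sent", base + "packets-received"]
kept = filter_by_selectors(values, selectors)
print(sorted(kept))  # bytes-sent is dropped, the two packet counters survive
```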
[mygnmirouter]
stage = xport_input
type = gnmi
server = 10.49.234.114:57777

# Sensor path to subscribe to. No configuration on the device is necessary.
# Appending @ with a parameter specifies the subscription type:
#   @x, where x is a positive number, indicates a fixed interval, e.g. @10 -> every 10 seconds
#   @change indicates only changes should be reported
#   omitting @ and parameter requests a target-defined subscription (not universally supported)
path1 = Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters@10
#path2 = /interfaces/interface/state@change

# Whitelist the sensor values we are interested in (one per line) and drop the rest.
# This replaces metrics-based filtering for gNMI input, which is not implemented.
# Note: specifying one or more selectors drops all other sensor values and applies to all paths.
#select1 = Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters/packets-sent
#select2 = Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters/packets-received

# Suppress redundant messages (minimum heartbeat interval).
# If set to 0 or a positive value, redundant messages should be suppressed by the server.
# If greater than 0, a measurement is sent after this many seconds even if no change has occurred.
#heartbeat_interval = 0

tls = false
username = cisco
password = ...
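The path@suffix convention documented above can be sketched as follows. This is a minimal Python model for illustration; the parse_path helper is hypothetical and not pipeline's actual parser:

```python
# Model of the path@suffix subscription convention:
#   path@10     -> sample every 10 seconds
#   path@change -> on-change subscription
#   path        -> target-defined subscription (device chooses)
# Hypothetical helper for illustration only.

def parse_path(spec):
    """Return (sensor_path, mode, interval_seconds_or_None)."""
    path, sep, suffix = spec.rpartition("@")
    if not sep:  # no @ present: target-defined subscription
        return spec, "target-defined", None
    if suffix == "change":
        return path, "on-change", None
    return path, "sample", int(suffix)

print(parse_path("Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters@10"))
print(parse_path("/interfaces/interface/state@change"))
print(parse_path("/interfaces/interface/state"))
```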
This project supports Kafka 2.x by requiring the Kafka version (kafkaversion) to be specified in the config file stage. This is a requirement of the underlying Kafka library and ensures that the library communicates with the Kafka brokers effectively.
[kafkaconsumer]
topic=mdt
consumergroup=pipeline-gnmi
type=kafka
stage=xport_input
brokers=kafka-host:9092
encoding=gpb
datachanneldepth=1000
kafkaversion=2.1.0
This project has improved Docker support. The Dockerfile uses multi-stage builds and
builds Pipeline from scratch. The configuration file can now be created from environment variables directly,
e.g.
PIPELINE_default_id=pipeline
PIPELINE_mygnmirouter_stage=xport_input
PIPELINE_mygnmirouter_type=gnmi
is translated into a pipeline.conf with following contents:
[default]
id = pipeline
[mygnmirouter]
stage = xport_input
type = gnmi
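The PIPELINE_<section>_<key> translation above can be sketched as follows. This is an illustrative Python model under the assumption that the first underscore-delimited token names the section and the rest names the option; the real container entrypoint may differ in details:

```python
# Model of the documented environment-variable convention: variables named
# PIPELINE_<section>_<key> become [section] / key = value entries in
# pipeline.conf. Illustrative only, not the container's actual entrypoint.
from collections import OrderedDict

def env_to_conf(environ):
    sections = OrderedDict()
    for name, value in environ.items():
        if not name.startswith("PIPELINE_"):
            continue
        # first token after PIPELINE_ is the section, the remainder is the key
        _, section, key = name.split("_", 2)
        sections.setdefault(section, OrderedDict())[key] = value
    lines = []
    for section, options in sections.items():
        lines.append(f"[{section}]")
        lines.extend(f"{key} = {value}" for key, value in options.items())
        lines.append("")
    return "\n".join(lines)

env = {
    "PIPELINE_default_id": "pipeline",
    "PIPELINE_mygnmirouter_stage": "xport_input",
    "PIPELINE_mygnmirouter_type": "gnmi",
}
print(env_to_conf(env))
```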
If the special variable _password is used, the value is encrypted using the pipeline RSA key before being written as the password option. Similarly, _secret can be used: its value names a file whose contents are read, encrypted using the pipeline RSA key, and then written as the password option. If the pipeline RSA key is not given or does not exist, it is generated when the container is created.
Additionally, existing replays of sensor data can be fed in efficiently using xz-compressed files.
pipeline-gnmi is licensed with Apache License, Version 2.0, per pipeline.
For support, please open a GitHub Issue or email cisco-ie@cisco.com.
Thanks to Chris Cassar for implementing pipeline, used by anyone interested in MDT; to Steven Barth for gNMI plugin development; and to the Cisco teams implementing MDT support in the platforms.
This use case shows how to use the Pipeline tool chain to collect telemetry data from network devices running multiple Cisco network operating systems: IOS XE, IOS XR, and NX-OS.
Access to network devices running IOS XR, IOS XE, or NX-OS network operating systems. You could try this use case on a DevNet Sandbox.
You must have a data output and display system such as Kafka, Prometheus, or InfluxDB to see the streaming telemetry data visually. Since the code is written in Go, there are no other dependencies for the code itself.
The interface, gNMI, is a network management interface defined by OpenConfig, which is mostly led by Google. It provides configuration management and streaming telemetry in a single protocol. The interface is independent of the data model, and based on the Google RPC (Remote Procedure Call) framework. This combination of rich tooling provides high-performance management.
Pipeline, a tool collection, consumes IOS XR telemetry streams directly from the router or indirectly from a publish/subscribe bus. Once collected, Pipeline can perform some limited transformations of the data and forwards the resulting content to a downstream, typically off-the-shelf, consumer. Supported downstream consumers include Apache Kafka, the InfluxData TICK stack, Prometheus, dedicated gRPC clients, as well as dump-to-file for diagnostics. Other consumers such as Elasticsearch or Splunk can be set up to consume transformed telemetry data from the Kafka bus.

Transformations performed by Pipeline include producing JSON (from GPB/GPB-KV inputs), template-based transformation, and metrics extraction (for TSDB consumption). The binary for Pipeline is included under bin. This binary, together with the configuration file pipeline.conf and a metrics.json file (only needed if you want to export telemetry metrics to InfluxDB or Prometheus), is all you need to collect telemetry from Cisco IOS XR and NX-OS routers.