This documentation and the Cisco Observability Platform functionalities it describes are subject to change. Data saved on the platform may disappear and APIs may change without notice.


Introduction

Cisco Codex enables you to plug custom logic into the data ingestion and transformation pipelines of the Cisco Observability Platform. You can write custom logic that transforms incoming telemetry, that is, Metrics, Events, Logs, and Traces (MELT) data, from one form to another.

For example, you can use Codex to:

  • transform incoming metric data and generate event data based on specified conditions. For example, generate critical or warning events based on the health metric of an entity.

  • transform incoming trace data and generate metric data. For example, generate a spanCount metric from an incoming trace that reports the number of spans (see the sketch after this list).
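
The following sketch shows what such a workflow could look like, written in the Serverless Workflow YAML format with JSONata expressions. It is illustrative only: the spans field in the consumed payload and the shape of the produced measurement data are assumptions, not the platform's actual schemas. The event types it references are described later in this section.

id: spancount-workflow
version: "1.0.0"
specVersion: "0.8"
name: Derive a spanCount metric from incoming traces
expressionLang: jsonata
start: report-span-count
events:
  - name: trace-received
    type: platform:trace.enriched.v1        # trigger event, consumed
    kind: consumed
  - name: measurement-observed
    type: platform:measurement.received.v1  # observation event, produced
    kind: produced
states:
  - name: report-span-count
    type: event
    onEvents:
      - eventRefs:
          - trace-received
    end:
      terminate: true
      produceEvents:
        - eventRef: measurement-observed
          # JSONata expression; assumes the consumed trace payload exposes a
          # top-level spans array. The measurement payload shape shown here
          # is illustrative only.
          data: "${ { 'type': 'codex:spanCount', 'value': $count(spans) } }"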

Data Processing Stages

The Cisco Observability Platform is designed to ingest and process vast amounts of MELT data. The data processing pipeline has multiple stages. As MELT data moves through the pipeline, it is processed, transformed, and enriched, and is eventually stored in the data store, where it can be queried by using the Unified Query Language (UQL).
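
After the data is stored, you can also query it from the command line. For example, the following command uses the uql subcommand of fsoc; the entity type k8s:workload is illustrative only, and the exact query depends on your data:

fsoc uql "FETCH id, type, attributes FROM entities(k8s:workload)"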

Each stage allows you to customize specific logic. Furthermore, the platform enables you to create entirely custom post-processing logic at the point where data can no longer be altered. Codex introduces the concept of workflows, which are based on the CNCF Serverless Workflow language specification and use JSONata as the default expression language.

Note: This document assumes that you are well versed in the Serverless Workflow language specification. Because Codex workflows follow this specification, we recommend that you familiarize yourself with it before you start writing your own workflows.

Events: The Communication Medium

Each data processing stage communicates with other stages through events. Each event has an associated category, which determines whether a specific stage can subscribe to or publish that event.

There are two categories for data-related events:

  • data:observation – publish-only events. They can be considered side effects of processing an original event, for example, an entity derived from the resource attributes of an OpenTelemetry metric packet.

  • data:trigger – subscribe-only events that are emitted after all the mutations have been completed.
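
In a workflow definition, these categories map naturally onto the Serverless Workflow event kinds: a data:trigger event can only be declared as consumed, and a data:observation event can only be declared as produced. A minimal sketch of the events section (the event names are arbitrary; the fully qualified type IDs follow the pattern described at the end of this section):

events:
  - name: metric-enriched
    # data:trigger events are subscribe-only, so they can only be consumed
    type: platform:metric.enriched.v1
    kind: consumed
  - name: entity-observed
    # data:observation events are publish-only, so they can only be produced
    type: platform:entity.observed.v1
    kind: produced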

There are five observation event types in the platform:

  • entity.observed – A Flexible Meta Model (FMM) entity was discovered while processing some data. It can be a new entity or an update to an existing entity. Each update from the same source fully replaces the previous one.

  • association.observed – An FMM association was discovered while processing some data. Depending on the cardinality of the association, the update logic can differ.

  • extension.observed – FMM extension attributes were discovered while processing some data. A target entity must already exist.

  • measurement.received – A measurement event that contributes to a specific FMM metric.

  • event.received – Raises a new FMM event.

There are three trigger event types in the platform, one for each data kind: metric.enriched, event.enriched, and trace.enriched. All three events are emitted from the final Tag enrichment tap.
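
A workflow typically subscribes to one of these trigger events and narrows the payload before acting on it. The following state sketch consumes the metric.enriched trigger event and applies an eventDataFilter from the Serverless Workflow specification; the filter expression and the payload field it references are assumptions for illustration:

states:
  - name: on-enriched-metric
    type: event
    onEvents:
      - eventRefs:
          - metric-enriched    # declared with kind: consumed
        eventDataFilter:
          # JSONata; assumes the enriched payload carries a metric type field
          data: "${ $[type = 'infra:system.cpu.utilization'] }"
    end: true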

Each event is registered in the platform's knowledge store so that it is easily discoverable. To list all available trigger events, use fsoc to query the store:

fsoc knowledge get --type=contracts:cloudevent --filter="data.category eq 'data:trigger'" --layer-type=TENANT
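
To list observation events instead, change the category in the filter:

fsoc knowledge get --type=contracts:cloudevent --filter="data.category eq 'data:observation'" --layer-type=TENANT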

All event types are versioned to allow for evolution and are qualified with platform solution identifiers for isolation. For example, the fully qualified ID of the measurement.received event is platform:measurement.received.v1.