Developing NSO Services

Introduction

This section describes how to develop a service application. A service application maps the input parameters for creating, modifying, and deleting a service instance into the resulting native commands to devices in the network. The input parameters are given from a northbound system, such as a self-service portal making API calls to NSO, or by a network engineer using any of the NSO user interfaces, such as the NSO CLI.

Figure 14. 10 000 Feet View of Service Applications

The service application has a single task: from a given set of input parameters for a service instance modification, calculate the minimal set of device interface operations to achieve the desired service change.

It is very important that the service application supports any change, i.e., full create, delete, and update of any service parameter.

Definitions

Below follows a set of definitions that are used throughout this section:

Service type

A specific type of service like "L2 VPN", "L3 VPN", "VLAN", "Firewall Rule set".

Service instance

A specific instance of a service type, such as "ACME L3 VPN".

Service model

The schema definition for a service type. In NSO, YANG is used as the schema language to define service types. Service models are used in different contexts and systems and therefore have slightly different meanings. In the context of NSO, a service model is a black-box specification of the attributes required to instantiate the service.

This is different from service models in ITIL-based CMDBs or OSS inventory systems, where a service model is more of a white-box model that describes the complete structure.

Service application

The code that implements a service, i.e., maps the parameters for a service instance to device configuration.

Device configuration

Network devices are configured to perform network functions. Every service instance results in corresponding device configuration changes. The dominating way to represent and change device configurations in current networks is CLI representations and command sequences. NETCONF represents the configuration as XML instance documents corresponding to the YANG schema.

The Fundamentals

Mapping

Developing a service application that transforms a service request to corresponding device configurations is done differently in NSO than in other tools on the market. It is therefore important to understand the underlying fundamental concepts and how they differ from what you might assume.

As a developer you need to express the mapping from a YANG service model to the corresponding device YANG model. This is a declarative mapping in the sense that no sequencing is defined.

Note well that irrespective of the underlying device type and corresponding native device interface, the mapping is towards a YANG device model, not the native CLI for example. This means that as you write the service mapping, you do not have to worry about the syntax of different devices' CLI commands or in which order these commands are sent to the devices. This is all taken care of by the NSO device manager.

The above means that implementing a service in NSO is reduced to transforming the input data structure (described in YANG) to device data structures (also described in YANG).
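
As a concrete illustration of this transformation, consider the VLAN service developed later in this chapter. A service instance, an XML instance document of the service YANG model, might look like this (simplified; in a real system the instance sits under the services tree):

```xml
<vlan xmlns="http://com/example/vlan">
  <name>net-0</name>
  <vlan-id>1234</vlan-id>
</vlan>
```

The mapping transforms it into device configuration data, an XML instance document of the device YANG model from the NED:

```xml
<vlan xmlns="urn:ios">
  <vlan-list>
    <id>1234</id>
  </vlan-list>
</vlan>
```

Both sides are instance documents of YANG models; the mapping never deals with CLI syntax or command ordering.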

Who writes the models?

  • Developing the service model is part of developing the service application and is covered later in this chapter.

  • Every device NED comes with a corresponding device YANG model. This model has been designed by the NED developer to capture the configuration data that is supported by the device.

This means that a service application has two primary artifacts: a YANG service model and a mapping definition to the device YANG as illustrated below.

Figure 15. Service Application Artifacts


At this point you should realize the following:

  • The mapping is not defined using workflows, or sequences of device commands.

  • The mapping is not defined in the native device interface language.

FASTMAP and Transactions

A common problem for systems that try to automate service activation is that a "back-end" needs to be defined for every possible service instance change. Take, for example, an L3 VPN: during the service life-cycle, a northbound system or a network engineer may want to:

  • Create the VPN

  • Add a leg to the VPN

  • Remove a leg from the VPN

  • Modify the bandwidth of a VPN leg

  • Change the interface of a VPN leg

  • ...

  • Delete the VPN

The possible run-time changes for an existing service instance are numerous. If a developer has to define a back-end for every possible change, like a script or a workflow, the task is daunting, error-prone, and never-ending.

NSO reduces this problem to a single data-mapping definition for the "create" scenario. At run-time, NSO will render the minimum change for any possible modification, like all the ones listed above. This is managed by the FASTMAP algorithm, explained later in this section.

Another challenge in traditional systems is that a lot of code goes into managing error scenarios. The built-in NSO transaction manager takes that burden away from the developer of the service application.

Auto-rendering from the Service Model

Since NSO automatically renders the northbound APIs and database schema from the YANG models, NSO enables a DevOps way of working with service models. A new service model can be defined as part of a package and loaded into NSO. An existing service model can be modified and the package upgraded. All northbound APIs and user interfaces are automatically re-rendered to cater for the new or updated models.

Writing the Service Model

The YANG Service Model specifies the input parameters to NSO. For a specific service model think of the parameters that a northbound system sends to NSO or the parameters that a network engineer needs to enter in the NSO CLI.

This model can be iterated without having any mapping defined. Write the YANG model, reload the service package in NSO and try the model with network engineers or northbound systems.

The result of this exercise for an L3 VPN service might be:

  • VPN name

  • AS Number

  • End-point CE device and interface

  • End-point PE device and interface
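
The parameters above could be captured in a YANG service model along the following lines. This is an illustrative sketch only: the list, leaf, and prefix names are hypothetical and not the model used in the NSO example set.

```yang
augment /ncs:services {
  list l3vpn {
    key name;

    uses ncs:service-data;
    ncs:servicepoint "l3vpn";

    leaf name {
      type string;
    }
    leaf as-number {
      type uint32;
    }
    list endpoint {
      key ce-device;
      leaf ce-device {
        type leafref {
          path "/ncs:devices/ncs:device/ncs:name";
        }
      }
      leaf ce-interface {
        type string;
      }
      leaf pe-device {
        type leafref {
          path "/ncs:devices/ncs:device/ncs:name";
        }
      }
      leaf pe-interface {
        type string;
      }
    }
  }
}
```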

Finding the Mapping

The most straightforward way of finding the mapping is to create one example of the service instance manually on the devices. Either create it using the native device interface and then synchronize the configuration into NSO, or use the NSO CLI to create the device configuration.

Based on this example device configuration for a service instance, note which parts of the device configuration are variables resulting from the service configuration.

The figure below illustrates an example VPN configuration. Configuration items in bold are variables that are mapped from the service input.

Figure 16. Example L3 VPN Device Configuration


Now look at the attributes of the service model and make sure you have a clear picture of how the values are mapped into the corresponding device configuration.

Mapping Iterations

During the above exercise you might run into a situation where the input parameters for a service are not sufficient to render the device configuration.

Examples:

  • Assume the northbound system only provides the CE device and wants NSO to pick the right PE.

  • Assume the northbound system wants NSO to pick an IP address and does not pass that as an input parameter.

This is part of the service design iteration. If the input parameters are not sufficient to define the corresponding device configuration, you either add more attributes to the service model, so that the device configuration data can be defined as a pure data-model mapping, or you let the mapping fetch the missing pieces.

In the latter case there are several alternatives. All of these will be explained in detail later. Typical patterns are listed below:

  • If the mapping needs pre-configured data, you can define a YANG data model for this data. For example, in the VPN case, NSO could have a list of CE-PE links loaded, and the mapping then uses this list to find the PE for a given CE; the PE therefore does not need to be part of the service model.

  • If the mapping needs to request data from an external system, for example query an IP address manager for the IP addresses, you can use the Reactive FASTMAP pattern.

  • Use NSO to handle allocation of resources like VLAN IDs. A package can be defined to manage VLAN pools within NSO; the mapping then requests a new VLAN from the pool, so it does not need to be passed as input. The Reactive FASTMAP pattern is used in this case as well.
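
As an illustration of the first pattern, pre-configured CE-PE link data could be modeled as a plain YANG list that the mapping looks up at create time. The model below is a hypothetical sketch (the container and leaf names are invented for illustration, not part of the NSO example set):

```yang
container topology {
  list ce-pe-link {
    key ce-device;
    leaf ce-device {
      type leafref {
        path "/ncs:devices/ncs:device/ncs:name";
      }
    }
    leaf pe-device {
      type leafref {
        path "/ncs:devices/ncs:device/ncs:name";
      }
    }
  }
}
```

The mapping then resolves the PE by looking up the CE given in the service instance in this list, instead of requiring the PE as a service input parameter.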

Strategies to Implement the Mapping

This section gives an overview of the different design patterns to define the mapping. NSO provides three different ways to express the YANG service model to YANG device model mapping:

Service templates

If the mapping is a pure data model mapping without any complex calculations, algorithms, external call-outs, resource management, or Reactive FASTMAP patterns, the mapping can be defined as service templates. Service templates require no programming skills and are derived from example device configurations. They are therefore well suited for network engineers. See examples.ncs/service-provider/simple-mpls-vpn for an example.

Java and configuration templates

This is the most common technique for real-life deployments. A thin layer of Java implements the device-type-independent algorithms and passes variables to templates that map these into device-specific configurations across vendors. The templates are often defined "per feature". This means that the Java code calculates a number of device-independent variables. The Java code then applies templates with these variables as inputs, and the templates map them to the various device types. All device specifics are handled in the templates, thus keeping the Java code clean. See examples.ncs/service-provider/mpls-vpn for an example.

Java only

This approach has no real benefits compared to the above combination of Java and templates; the choice mostly depends on the skills of the developer, and programmers with less networking experience might prefer it. Abstracting away different device vendors is often more cumbersome than in the Java-and-templates approach. See examples.ncs/datacenter/datacenter for an example.

Creating an NSO Service Application

The purpose of this section is to outline the overall steps in NSO to create a service application. The following sections exemplify these steps for the different mapping strategies. The command ncs-make-package in NSO 5.7 Manual Pages is used in these examples to create a skeleton service package.

All of the below assumes you have an NSO local installation (see NSO Local Install in the NSO Installation Guide) and have created an NSO instance with ncs-setup (see NSO 5.7 Manual Pages). The ncs-setup command creates the NSO instance in a directory, called the NSO runtime directory, which is specified on the command line:

$ ncs-setup --dest ./ncs-run

In this example the NSO runtime directory is ./ncs-run.

  1. Generate a service package in the packages directory of the runtime directory, where TYPE is the skeleton type and PACKAGE-NAME is the name of the package. In this example, the package name is vlan and it is a service package with Java code and templates:

    $ cd ncs-run/packages
    $ ncs-make-package --service-skeleton TYPE PACKAGE-NAME
  2. Edit the skeleton YANG service model in the generated package. The YANG file resides in PACKAGE-NAME/src/yang

  3. Build the service model:

    $ cd PACKAGE-NAME/src
    $ make

  4. Try the service model in the NSO CLI. To have NSO load the new package, including the service model, do:

    admin@ncs# packages reload

  5. Iterate the above steps from Step 2 until you have a service model you are happy with.

  6. If the service does not have any templates, continue with Step 11

    Create an example device configuration either directly on the devices or by using the NSO CLI. This can be done either using netsim or real devices. In case the configuration was created directly on the devices, synchronize the configuration back into NSO:

    admin@ncs# devices sync-from

  7. Save the example device configuration as an XML file, which is the format used by templates.

    admin@ncs# show full-configuration devices devices config ... | display xml | file save mytemplate.xml

  8. Move the XML file to the template folder of the package.

  9. Replace hard-coded values of the XML template with variables referring to the service model or variables passed from the Java code. This is explained in detail later in this section.

  10. If this template is used without any Java code, make sure the servicepoint name in the YANG service model has a corresponding servicepoint attribute in the XML file. Again, this is explained in detail later.

  11. If a Java mapping layer is included, modify the Java in the src/java directory. Build the Java code:

    $ cd PACKAGE-NAME/src
    $ make

  12. Reload the packages; this reloads both the data models and the Java code:

    admin@ncs# packages reload

  13. Try the mapping by creating and modifying service instances in the CLI. Validate the changes by:

    admin@ncs(config)# commit dry-run outformat native

Mapping using Service Templates

In this example, you will create a simple VLAN service using a mapping with service templates only (i.e., no Java code). To keep the example simple, it uses a single device type (Cisco IOS).

Preparation

In order to reuse an existing environment for NSO and netsim, the examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios/ example is used. Make sure you have stopped any running NSO and netsim.

  1. Navigate to the example directory:

    $ cd $NCS_DIR/examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios

  2. Now you need to create an environment for the simulated IOS devices. This is done using the command ncs-netsim in NSO 5.7 Manual Pages.

    $ ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios 3 c
    DEVICE c0 CREATED
    DEVICE c1 CREATED
    DEVICE c2 CREATED

    This command creates the simulated network in ./netsim.

  3. Next, you need an NSO instance with the simulated network:

    $ ncs-setup --netsim-dir ./netsim --dest .
    Using netsim dir ./netsim

Defining the Service Model

  1. The first step is to generate a skeleton package for a service. (For details on packages see the section called “Packages” in NSO 5.7 Getting Started Guide). The package is called vlan:

    $ cd packages
    $ ncs-make-package --service-skeleton template vlan

    This results in a directory structure:

    vlan
       load-dir
       package-meta-data.xml
       src
       templates

    For now, let's focus on the vlan/src/yang/vlan.yang file.

    module vlan {
      namespace "http://com/example/vlan";
      prefix vlan;
    
      import ietf-inet-types {
        prefix inet;
      }
      import tailf-ncs {
        prefix ncs;
      }
    
      augment /ncs:services {
        list vlan {
          key name;
    
          uses ncs:service-data;
          ncs:servicepoint "vlan";
    
          leaf name {
            type string;
          }
    
          // may replace this with other ways of referring to the devices.
          leaf-list device {
            type leafref {
              path "/ncs:devices/ncs:device/ncs:name";
            }
          }
    
          // replace with your own stuff here
          leaf dummy {
            type inet:ipv4-address;
          }
        }
      }
    }

    If this is your first exposure to YANG, you can see that the modeling language is very straightforward and easy to understand. See RFC 6020 for more details and examples of YANG.

    The concepts you should understand in the above generated skeleton are:

    1. The vlan service list is augmented into the services tree in NSO. This specifies the path to reach VLANs in the CLI, REST, etc. There are no requirements on where the service is added in NSO; if you want VLANs at the top level, just remove the augment statement.

    2. The two lines uses ncs:service-data and ncs:servicepoint "vlan" tell NSO that this is a service.

  2. The next step is to modify the skeleton service YANG model and add the real parameters.

    So, if a user wants to create a new VLAN in the network what should the parameters be? A very simple service model could look like below (modify the src/yang/vlan.yang file):

       augment /ncs:services {
        list vlan {
          key name;
    
          uses ncs:service-data;
          ncs:servicepoint "vlan";
    
          leaf name {
            type string;
          }
    
          leaf vlan-id {
            type uint32 {
              range "1..4096";
            }
          }
    
          list device-if {
            key "device-name";
              leaf device-name {
                type leafref {
                  path "/ncs:devices/ncs:device/ncs:name";
                }
              }
              leaf interface-type {
                type enumeration {
                  enum FastEthernet;
                  enum GigabitEthernet;
                  enum TenGigabitEthernet;
                }
              }
              leaf interface {
                type string;
              }
          }
        }
    }

    This simple VLAN service model says:

    1. Each VLAN must have a unique name, for example "net-1".

    2. The VLAN has an id from 1 to 4096.

    3. The VLAN is attached to a list of devices and interfaces. In order to make this example as simple as possible the interface reference is selected by picking the type and then the name as a plain string.

  3. The next step is to build the data model:

    $ cd $NCS_DIR/examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios/packages/vlan/src
    $ make
    .../ncsc  `ls vlan-ann.yang  > /dev/null 2>&1 && echo "-a vlan-ann.yang"` \
                  -c -o ../load-dir/vlan.fxs yang/vlan.yang

    A nice property of NSO is that, already at this point, you can load the service model into NSO and see if it works well in the CLI. Nothing will happen to the devices, since the mapping is not yet defined. This is normally the way to iterate a model: load it into NSO, test the CLI with network engineers, make changes, and reload it into NSO.

  4. Go to the root directory of the simulated-ios example:

    $ cd $NCS_DIR/examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios

  5. Start netsim and NSO:

    $ ncs-netsim start
    DEVICE c0 OK STARTED
    DEVICE c1 OK STARTED
    DEVICE c2 OK STARTED
    $ ncs --with-package-reload

    When NSO was started above, you gave NSO a parameter to reload all packages so that the newly added vlan package is included. Without this parameter, NSO starts with the same packages as last time. Packages can also be reloaded without starting and stopping NSO.

  6. Start the NSO CLI:

    $ ncs_cli -C -u admin

  7. Since this is the first time NSO is started with some devices, you need to make sure NSO synchronizes its database with the devices:

    admin@ncs# devices sync-from
    sync-result {
        device c0
        result true
    }
    sync-result {
        device c1
        result true
    }
    sync-result {
        device c2
        result true
    }

  8. At this point we have a service model for VLANs, but no mapping of VLAN to device configurations. This is fine; you can try the service model and see if it makes sense. Create a VLAN service:

    admin@ncs# config
    Entering configuration mode terminal
    admin@ncs(config)# services vlan net-0 vlan-id 1234 \
            device-if c0 interface-type FastEthernet interface 1/0
    admin@ncs(config-device-if-c0)# top
    admin@ncs(config)# show configuration
    services vlan net-0
     vlan-id 1234
     device-if c0
      interface-type FastEthernet
      interface      1/0
     !
    !
    admin@ncs(config)# services vlan net-0 vlan-id 1234 \
            device-if c1 interface-type FastEthernet interface 1/0
    admin@ncs(config-device-if-c1)# top
    admin@ncs(config)# show configuration
    services vlan net-0
     vlan-id 1234
     device-if c0
      interface-type FastEthernet
      interface      1/0
     !
     device-if c1
      interface-type FastEthernet
      interface      1/0
     !
    !
    admin@ncs(config)# commit dry-run outformat native
    admin@ncs(config)# commit
    Commit complete.

    Committing service changes at this point has no effect on the devices, since there is no mapping defined. This is why the commit dry-run outformat native command shows no output. The service instance data will just be stored in the database in NSO.

    Note that you get tab completion on the devices, since they are references to device names in CDB. You also get tab completion for interface types, since the types are enumerated in the model. However, the interface name is just a string, and you have to type the correct interface name. For service models where there is only one device type, as in this simple example, a reference to the IOS interface name according to the IOS model could be used. However, that makes the service model dependent on the underlying device types: if another type is added, the service model needs to be updated, which is most often not desired. There are techniques to get tab completion even when the data type is a string, but they are omitted here for simplicity.

    Make sure you delete the vlan service instance before moving on with the example:

    admin@ncs(config)# no services vlan
    admin@ncs(config)# commit
    Commit complete.

Defining the Template

  1. Now it is time to define the mapping from service configuration to actual device configuration. The first step is to understand the actual device configuration. In this example, this is done by manually configuring one VLAN on a device. This concrete device configuration is the starting point for the mapping; it shows the expected result of applying the service.

    admin@ncs(config)# devices device c0 config ios:vlan 1234
    admin@ncs(config-vlan)# top
    admin@ncs(config)# devices device c0 config ios:interface \
            FastEthernet 10/10 switchport trunk allowed vlan 1234
    admin@ncs(config-if)# top
    admin@ncs(config)# show configuration
    devices device c0
     config
      ios:vlan 1234
      !
      ios:interface FastEthernet10/10
       switchport trunk allowed vlan 1234
      exit
     !
    !

  2. The concrete configuration above has the interface and VLAN hard-wired. This is what we will now turn into a template. It is always recommended to start like this and create a concrete representation of the configuration the template shall produce. Templates are device configurations where parts of the configuration are represented as variables. These kinds of templates are represented as XML files. Display the device configuration as XML:

    admin@ncs(config)# show full-configuration devices device c0 \
            config ios:vlan | display xml
    
    <config xmlns="http://tail-f.com/ns/config/1.0">
      <devices xmlns="http://tail-f.com/ns/ncs">
      <device>
        <name>c0</name>
          <config>
          <vlan xmlns="urn:ios">
            <vlan-list>
              <id>1234</id>
            </vlan-list>
          </vlan>
          </config>
      </device>
      </devices>
    </config>
    
    
    admin@ncs(config)# show full-configuration devices device c0 \
            config ios:interface FastEthernet 10/10 | display xml
    
    <config xmlns="http://tail-f.com/ns/config/1.0">
      <devices xmlns="http://tail-f.com/ns/ncs">
      <device>
        <name>c0</name>
          <config>
          <interface xmlns="urn:ios">
          <FastEthernet>
            <name>10/10</name>
            <switchport>
              <trunk>
                <allowed>
                  <vlan>
                    <vlans>1234</vlans>
                  </vlan>
                </allowed>
              </trunk>
            </switchport>
          </FastEthernet>
          </interface>
          </config>
      </device>
      </devices>
    </config>

  3. Now we shall build that template. When the package was created, a skeleton XML file was generated in packages/vlan/templates/vlan.xml:

    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="vlan">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <!--
              Select the devices from some data structure in the service
              model. In this skeleton the devices are specified in a leaf-list.
              Select all devices in that leaf-list:
          -->
          <name>{/device}</name>
          <config>
            <!--
                Add device-specific parameters here.
                In this skeleton the service has a leaf "dummy"; use that
                to set something on the device e.g.:
                <ip-address-on-device>{/dummy}</ip-address-on-device>
            -->
          </config>
        </device>
      </devices>
    </config-template>

    We need to specify the right path to the devices. In our case the devices are identified by /device-if/device-name (see the YANG service model).

    For each of those devices we need to add the VLAN and change the specified interface configuration. Copy the XML config from the CLI and replace with variables:

    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="vlan">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>{/device-if/device-name}</name>
          <config>
            <vlan xmlns="urn:ios">
              <vlan-list tags="merge">
                <id>{../vlan-id}</id>
              </vlan-list>
            </vlan>
            <interface xmlns="urn:ios">
              <?if {interface-type='FastEthernet'}?>
                <FastEthernet tags="nocreate">
                  <name>{interface}</name>
                  <switchport>
                    <trunk>
                      <allowed>
                        <vlan tags="merge">
                          <vlans>{../vlan-id}</vlans>
                        </vlan>
                      </allowed>
                    </trunk>
                  </switchport>
                </FastEthernet>
              <?end?>
              <?if {interface-type='GigabitEthernet'}?>
                <GigabitEthernet tags="nocreate">
                  <name>{interface}</name>
                  <switchport>
                    <trunk>
                      <allowed>
                        <vlan tags="merge">
                          <vlans>{../vlan-id}</vlans>
                        </vlan>
                      </allowed>
                    </trunk>
                  </switchport>
                </GigabitEthernet>
              <?end?>
              <?if {interface-type='TenGigabitEthernet'}?>
                <TenGigabitEthernet tags="nocreate">
                  <name>{interface}</name>
                  <switchport>
                    <trunk>
                      <allowed>
                        <vlan tags="merge">
                          <vlans>{../vlan-id}</vlans>
                        </vlan>
                      </allowed>
                    </trunk>
                  </switchport>
                </TenGigabitEthernet>
              <?end?>
            </interface>
          </config>
        </device>
      </devices>
    </config-template>

    Walking through the template gives a better idea of how it works. For every /device-if/device-name in the service instance, do the following:

    1. Add the VLAN to the vlan-list. The tag "merge" tells the template to merge the data into an existing list (the default is to replace).

    2. For the specified interface on that device, add the VLAN to the allowed VLANs and set the mode to trunk. The tag "nocreate" tells the template not to create the named interface if it does not exist.

    Tip

    While experimenting with the template, it can be helpful to remove the nocreate tag. That way, the template always creates the configuration, even if the interface does not exist.

    It is important to understand that every path in the template above refers to paths from the service model in vlan.yang.

    For details on the template syntax, see the section called “Service Templates”

  4. Throw away the uncommitted changes to the device, and request NSO to reload the packages:

    admin@ncs(config)# exit no-confirm
    admin@ncs# packages reload
    reload-result {
        package cisco-ios
        result true
    }
    reload-result {
        package vlan
        result true
    }

    Previously, we started NSO with a package-reload option; the above shows how to do the same without stopping and starting NSO.

  5. We can now create services that will make things happen in the network. Create a VLAN service:

    admin@ncs# config
    Entering configuration mode terminal
    admin@ncs(config)# services vlan net-0 vlan-id 1234 device-if c0 \
            interface-type FastEthernet interface 1/0
    admin@ncs(config-device-if-c0)# top
    admin@ncs(config)# services vlan net-0 device-if c1 \
            interface-type FastEthernet interface 1/0
    admin@ncs(config-device-if-c1)# top
    admin@ncs(config)# show configuration
    services vlan net-0
     vlan-id 1234
     device-if c0
      interface-type FastEthernet
      interface      1/0
     !
     device-if c1
      interface-type FastEthernet
      interface      1/0
     !
    !
    admin@ncs(config)# commit dry-run outformat native
    native {
        device {
            name c0
            data vlan 1234
                 !
                 interface FastEthernet1/0
                  switchport trunk allowed vlan 1234
                 exit
        }
        device {
            name c1
            data vlan 1234
                 !
                 interface FastEthernet1/0
                  switchport trunk allowed vlan 1234
                 exit
        }
    }
    admin@ncs(config)# commit | details
    ...
    Commit complete.

    Note that the commit command stored the service data in NSO, and at the same time pushed the changes to the two devices affected by the service.

  6. The VLAN service instance can now be changed:

    admin@ncs(config)# services vlan net-0 vlan-id 1222
    admin@ncs(config-vlan-net-0)# top
    admin@ncs(config)# show configuration
    services vlan net-0
     vlan-id 1222
    !
    admin@ncs(config)# commit dry-run outformat native
    native {
        device {
            name c0
            data no vlan 1234
                 vlan 1222
                 !
                 interface FastEthernet1/0
                  switchport trunk allowed vlan 1222
                 exit
        }
        device {
            name c1
            data no vlan 1234
                 vlan 1222
                 !
                 interface FastEthernet1/0
                  switchport trunk allowed vlan 1222
                 exit
        }
    }
    admin@ncs(config)# commit
    Commit complete.

    It is important to understand what happens above. When the VLAN ID is changed, NSO is able to calculate the minimal required changes to the configuration. The same holds true for changing elements in the configuration, or even parameters of those elements. In this way, NSO does not need any explicit mapping for a VLAN change or deletion. NSO does not overwrite the old configuration with the new one wholesale; it computes the difference. Adding an interface to the same service works the same way:

    admin@ncs(config)# services vlan net-0 device-if c2 \
            interface-type FastEthernet interface 1/0
    admin@ncs(config-device-if-c2)# top
    admin@ncs(config)# commit dry-run outformat native
    native {
        device {
            name c2
            data vlan 1222
                 !
                 interface FastEthernet1/0
                  switchport trunk allowed vlan 1222
                 exit
        }
    }
    admin@ncs(config)# commit
    Commit complete.

  7. To clean up the configuration on the devices, run the delete command as shown below:

    admin@ncs(config)# no services vlan net-0
    admin@ncs(config)# commit dry-run outformat native
    native {
        device {
            name c0
            data no vlan 1222
                 interface FastEthernet1/0
                  no switchport trunk allowed vlan 1222
                 exit
        }
        device {
            name c1
            data no vlan 1222
                 interface FastEthernet1/0
                  no switchport trunk allowed vlan 1222
                 exit
        }
        device {
            name c2
            data no vlan 1222
                 interface FastEthernet1/0
                  no switchport trunk allowed vlan 1222
                 exit
        }
    }
    admin@ncs(config)# commit
    Commit complete.

  8. To make the VLAN service package complete, edit vlan/package-meta-data.xml to reflect the purpose of the service model.

This example showed how to use template-based mapping. NSO also allows for programmatic mapping, as well as a combination of the two approaches. The latter is very flexible: if some logic needs to be attached to the service provisioning, the logic is expressed in code, and the code applies device-agnostic templates.

Mapping using Java

Overview

This section will illustrate how to implement a simple VLAN service in Java. The end result will be the same as previously shown using templates, but this time implemented in Java instead.

Note well that the examples in this section are extremely simplified from a networking perspective in order to illustrate the concepts.

We will first look at the following preparatory steps:

  1. Prepare a simulated environment of Cisco IOS devices: in this example we start from scratch in order to illustrate the complete development process. We will not reuse any existing NSO examples.

  2. Generate a service skeleton package: use NSO tools to generate a Java-based service skeleton package.

  3. Write and test the VLAN Service Model.

  4. Analyze the VLAN service mapping to IOS configuration.

The above steps are no different from defining services using templates. Next, we start working with the Java environment:

  1. Configuring start and stop of the Java VM.

  2. First look at the Service Java Code: introduction to service mapping in Java.

  3. Developing by tailing log files.

  4. Developing using Eclipse.

Setting up the environment

We will start by setting up a run-time environment that includes simulated Cisco IOS devices and configuration data for NSO. Make sure you have sourced the ncsrc file. Create a directory somewhere like:

$ mkdir ~/vlan-service
$ cd ~/vlan-service

Now let's create a simulated environment with 3 IOS devices and an NSO instance that is ready to run with this simulated network:

$ ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios 3 c
$ ncs-setup --netsim-dir ./netsim/ --dest ./

Start the simulator and NSO:

$ ncs-netsim start
DEVICE c0 OK STARTED
DEVICE c1 OK STARTED
DEVICE c2 OK STARTED
$ ncs

Use the Cisco CLI towards one of the devices:

$ ncs-netsim cli-i c0
admin connected from 127.0.0.1 using console on ncs
c0> enable
c0# configure
Enter configuration commands, one per line. End with CNTL/Z.
c0(config)# show full-configuration
no service pad
no ip domain-lookup
no ip http server
no ip http secure-server
ip routing
ip source-route
ip vrf my-forward
bgp next-hop Loopback 1
!
...

Use the NSO CLI to get the configuration:

$ ncs_cli -C -u admin

admin connected from 127.0.0.1 using console on ncs
admin@ncs# devices sync-from
sync-result {
    device c0
    result true
}
sync-result {
    device c1
    result true
}
sync-result {
    device c2
    result true
}
admin@ncs# config
Entering configuration mode terminal

admin@ncs(config)# show full-configuration devices device c0 config
devices device c0
 config
  no ios:service pad
  ios:ip vrf my-forward
   bgp next-hop Loopback 1
  !
  ios:ip community-list 1 permit
  ios:ip community-list 2 deny
  ios:ip community-list standard s permit
  no ios:ip domain-lookup
  no ios:ip http server
  no ios:ip http secure-server
  ios:ip routing
...

Finally, set VLAN information manually on a device to prepare for the mapping later.

admin@ncs(config)# devices device c0 config ios:vlan 1234 
admin@ncs(config)# devices device c0 config ios:interface
                   FastEthernet 1/0 switchport mode trunk 
admin@ncs(config-if)# switchport trunk allowed vlan 1234 
admin@ncs(config-if)# top 

admin@ncs(config)# show configuration 
devices device c0
 config
  ios:vlan 1234
  !
  ios:interface FastEthernet1/0
   switchport mode trunk
   switchport trunk allowed vlan 1234
  exit
 !
!

admin@ncs(config)# commit 

Creating a service package

In the run-time directory you created:

$ ls -F1
README.ncs
README.netsim
logs/
ncs-cdb/
ncs.conf
netsim/
packages/
scripts/
state/

Note the packages directory, cd to it:

$ cd packages
$ ls -l
total 8
cisco-ios -> .../packages/neds/cisco-ios

Currently there is only one package, the Cisco IOS NED. We will now create a new package that will contain the VLAN service.

$ ncs-make-package --service-skeleton java vlan
$ ls
cisco-ios vlan

This creates a package with the following structure:

Figure 17. Package Structure
Package Structure


During the rest of this section we will work with the vlan/src/yang/vlan.yang and vlan/src/java/src/com/example/vlan/vlanRFS.java files.

The Service Model

Edit the vlan/src/yang/vlan.yang according to below:

  augment /ncs:services {
    list vlan {
      key name;

      uses ncs:service-data;
      ncs:servicepoint "vlan-servicepoint";
      leaf name {
        type string;
      }

      leaf vlan-id {
        type uint32 {
          range "1..4096";
        }
      }

      list device-if {
        key "device-name";
          leaf device-name {
            type leafref {
              path "/ncs:devices/ncs:device/ncs:name";
            }
          }
          leaf interface {
            type string;
          }
      }
    }
  }

This simple VLAN service model says:

  1. We give a VLAN a name, for example net-1

  2. The VLAN has an id from 1 to 4096

  3. The VLAN is attached to a list of devices and interfaces. To keep this example as simple as possible, the interface name is just a string. A more correct and useful model would make this a reference to an interface on the device, but for now it is better to keep the example simple.
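For reference, such an interface reference could be expressed as a leafref into the device configuration. The following is a hedged sketch only, assuming the Cisco IOS NED module with prefix ios and FastEthernet interfaces; it is not part of the example model:

```yang
leaf interface {
  type leafref {
    path "/ncs:devices/ncs:device[ncs:name=current()/../device-name]"
       + "/ncs:config/ios:interface/ios:FastEthernet/ios:name";
  }
}
```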

Make sure you keep the lines generated by ncs-make-package:

uses ncs:service-data;
ncs:servicepoint "vlan-servicepoint";

The first line expands to a YANG structure that is shared amongst all services. The second line connects the service to the Java callback.

To build this service model, cd to packages/vlan/src and type make (assuming you have the make build system installed).

$ cd packages/vlan/src/
$ make

We can now test the service model by requesting NSO to reload all packages:

$ ncs_cli -C -u admin
admin@ncs# packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
result Done

You can also stop and start NSO, but then you have to pass the option --with-package-reload when starting NSO. This is important: by default, NSO does not take any changes in packages into account when restarting. When packages are reloaded, state/packages-in-use is updated.

Now, create a VLAN service (nothing will happen on the devices, since we have not yet defined any mapping).

admin@ncs(config)# services vlan net-0 vlan-id 1234 device-if c0 interface 1/0
admin@ncs(config-device-if-c0)# top
admin@ncs(config)# commit

That worked, so let us move on and connect the service to device configuration using Java mapping. Note that Java mapping is not required; templates are more straightforward and recommended, but we use this as a "Hello World" introduction to Java service programming in NSO. At the end, we will also show how to combine Java and templates: templates define a vendor-independent way of mapping service attributes to device configuration, while Java is used as a thin layer in front of the templates to implement logic, call out to external systems, etc.

Managing the NSO Java VM

The default configuration of the Java VM is:

admin@ncs(config)# show full-configuration java-vm | details
java-vm stdout-capture enabled
java-vm stdout-capture file ./logs/ncs-java-vm.log
java-vm connect-time           60
java-vm initialization-time    60
java-vm synchronization-timeout-action log-stop
java-vm jmx jndi-address 127.0.0.1
java-vm jmx jndi-port 9902
java-vm jmx jmx-address 127.0.0.1
java-vm jmx jmx-port 9901

By default, ncs will start the Java VM by invoking the command $NCS_DIR/bin/ncs-start-java-vm. That script will in turn invoke:

$ java com.tailf.ncs.NcsJVMLauncher

The class NcsJVMLauncher contains the main() method. The started Java VM will automatically retrieve and deploy all Java code for the packages defined in the load-path of the ncs.conf file. No specification other than the package-meta-data.xml for each package is needed.
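A package-meta-data.xml for the vlan package could look something like the sketch below. The values mirror the package status output shown by show packages; treat the field values as illustrative:

```xml
<ncs-package xmlns="http://tail-f.com/ns/ncs-packages">
  <name>vlan</name>
  <package-version>1.0</package-version>
  <description>Skeleton for a resource facing service - RFS</description>
  <ncs-min-version>3.0</ncs-min-version>
  <component>
    <name>RFSSkeleton</name>
    <callback>
      <java-class-name>com.example.vlan.vlanRFS</java-class-name>
    </callback>
  </component>
</ncs-package>
```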

The verbosity of Java error messages can be controlled by:

admin@ncs(config)# java-vm exception-error-message verbosity
Possible completions:
  standard  trace  verbose

For more detail on the Java VM settings see The NSO Java VM.

A first look at Java Development

The service model and the corresponding Java callback are bound by the service point name. Look at the service model in packages/vlan/src/yang:

Figure 18. VLAN Service model service-point
VLAN Service model service-point


The corresponding generated Java skeleton (with one "Hello World!" print statement added):

Figure 19. Java Service Create Callback
Java Service Create Callback


Modify the generated code to include the print "Hello World!" statement in the same way. Re-build the package:

$ cd packages/vlan/src/
$ make

Whenever a package has changed, we need to tell NSO to reload it. There are three ways:

  1. Redeploy only the implementation of a specific package; this does not load any model changes: admin@ncs# packages package vlan redeploy

  2. Reload all packages including any model changes: admin@ncs# packages reload

  3. Restart NSO with the reload option: $ ncs --with-package-reload

When that is done, we can create a service (or modify an existing one) and the callback will be triggered:

admin@ncs(config)# services vlan net-0 vlan-id 888
admin@ncs(config-vlan-net-0)# commit

Now, have a look in the logs/ncs-java-vm.log:

$ tail ncs-java-vm.log
...
<INFO> 03-Mar-2014::16:55:23.705 NcsMain JVM-Launcher: \
       - REDEPLOY PACKAGE COLLECTION  --> OK
<INFO> 03-Mar-2014::16:55:23.705 NcsMain JVM-Launcher: \
       - REDEPLOY ["vlan"] --> DONE
<INFO> 03-Mar-2014::16:55:23.706 NcsMain JVM-Launcher: \
       - DONE COMMAND --> REDEPLOY_PACKAGE
<INFO> 03-Mar-2014::16:55:23.706 NcsMain JVM-Launcher: \
       - READ SOCKET =>
Hello World!

Tailing the ncs-java-vm.log is one way of developing. You can also start and stop the Java VM explicitly and see the trace in the shell. First, tell NSO not to start the VM by adding the following snippet to ncs.conf:

<java-vm>
    <auto-start>false</auto-start>
</java-vm>

Then, after restarting NSO or reloading the configuration, from the shell prompt:

$ ncs-start-java-vm
.....
.. all stdout from JVM

Modifying or creating a VLAN service will now make the "Hello World!" string show up in the shell. You can modify the package, reload/redeploy, and see the new output.

Using Eclipse

First, generate an Eclipse environment:

$ ncs-setup --eclipse-setup

This will generate two files, .classpath and .project. Add this directory to Eclipse as a new Java project ("File->New->Java Project"), uncheck "Use the default location", and enter the directory where the .classpath and .project files have been generated. We are then immediately ready to run this code in Eclipse.

Figure 20. Creating the project in Eclipse
Creating the project in Eclipse


All we need to do is to choose the main() routine in the NcsJVMLauncher class. The Eclipse debugger now works as usual, and we can start and stop the Java code at will.

One caveat worth mentioning is that there are a few timeouts between NSO and the Java code that will trigger while we sit in the debugger. When developing with the Eclipse debugger and breakpoints, we typically want to disable all of these timeouts. First, there are three timeouts in ncs.conf that matter: set /ncs-config/japi/new-session-timeout, /ncs-config/japi/query-timeout, and /ncs-config/japi/connect-timeout to a large value. See the ncs.conf(5) man page for a detailed description of these values. If any of these timeouts trigger, NSO will close all sockets to the Java VM and all bets are off.

$ cp $NCS_DIR/etc/ncs/ncs.conf .

Edit the file and enter the following XML entry just after the Webui entry.

<japi>
    <new-session-timeout>PT1000S</new-session-timeout>
    <query-timeout>PT1000S</query-timeout>
    <connect-timeout>PT1000S</connect-timeout>
</japi>

Now restart ncs, and from now on start it as:

$ ncs -c ./ncs.conf

You can verify that the Java VM is not running by checking the package status:

admin@ncs# show packages package vlan
packages package vlan
 package-version 1.0
 description     "Skeleton for a resource facing service - RFS"
 ncs-min-version 3.0
 directory       ./state/packages-in-use/1/vlan
 component RFSSkeleton
  callback java-class-name [ com.example.vlan.vlanRFS ]
 oper-status java-uninitialized

Create a new project and start the launcher main in Eclipse:

Figure 21. Starting the NSO JVM from Eclipse
Starting the NSO JVM from Eclipse


You can start and stop the Java VM from Eclipse. Note that this is not strictly needed, since the normal change cycle is: modify the Java code, run make in the src directory, and reload the package, all while NSO and the JVM are running. Change the VLAN service and see the console output in Eclipse:

Figure 22. Console output in Eclipse
Console output in Eclipse


Another option is to have Eclipse connect to the running VM. Start the VM manually with the -d option.

$ ncs-start-java-vm -d
Listening for transport dt_socket at address: 9000
NCS JVM STARTING
...

Then you can set up Eclipse to connect to the NSO Java VM:

Figure 23. Connecting to NSO Java VM Remote with Eclipse
Connecting to NSO Java VM Remote with Eclipse


For Eclipse to show the NSO code when debugging, add the NSO source JARs ("Add External JARs" in Eclipse):

Figure 24. Adding the NSO source Jars
Adding the NSO source Jars


Navigate to the service create method for the VLAN service and add a breakpoint:

Figure 25. Setting a break-point in Eclipse
Setting a break-point in Eclipse


Commit a change of a VLAN service instance and Eclipse will stop at the breakpoint:

Figure 26. Service Create breakpoint
Service Create breakpoint


Writing the service code

Fetching the service attributes

So the problem at hand is that we have service parameters and a resulting device configuration. Previously in this user guide we showed how to do that with templates. The same principles apply in Java. The service model and the device models are YANG models in NSO irrespective of the underlying protocol. The Java mapping code transforms the service attributes to the corresponding configuration leafs in the device model.

The NAVU API lets the Java programmer navigate the service model and the device models as a DOM tree. Have a look at the create signature:

 @ServiceCallback(servicePoint="vlan-servicepoint",
        callType=ServiceCBType.CREATE)
    public Properties create(ServiceContext context,
                             NavuNode service,
                             NavuNode ncsRoot,
                             Properties opaque)
                             throws DpCallbackException {

Two NAVU nodes are passed: the actual service instance, service, and the NSO root, ncsRoot.

We can have a first look at NAVU by analyzing the first try statement:

try {
    // check if it is reasonable to assume that devices
    // initially have been sync-from:ed
    NavuList managedDevices =
        ncsRoot.container("devices").list("device");
    for (NavuContainer device : managedDevices) {
        if (device.list("capability").isEmpty()) {
            String mess = "Device %1$s has no known capabilities, " +
                          "has sync-from been performed?";
            String key = device.getKey().elementAt(0).toString();
            throw new DpCallbackException(String.format(mess, key));
        }
    }

NAVU is a lazily evaluated DOM tree that represents the instantiated YANG model. Knowing the NSO model, devices/device (a container holding a list), the list of managed devices can be retrieved with ncsRoot.container("devices").list("device").

The service node can be used to fetch the values of the VLAN service instance:

  • vlan/name

  • vlan/vlan-id

  • vlan/device-if/device-name and vlan/device-if/interface

A first snippet that iterates the service model and prints to the console looks like below:

Figure 27. The first example
The first example


The com.tailf.conf package contains Java classes representing the YANG types, like ConfUInt32.
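A hedged reconstruction of such a snippet, placed inside the generated create() method (the exact NAVU calls and the generated skeleton may differ), could look like:

```java
// Sketch only: print the service attributes to the console.
String serviceName = service.leaf("name").valueAsString();
ConfUInt32 vlanId = (ConfUInt32) service.leaf("vlan-id").value();
System.out.println("VLAN service " + serviceName + " vlan-id " + vlanId);
for (NavuContainer deviceIf : service.list("device-if").elements()) {
    System.out.println("  device "
        + deviceIf.leaf("device-name").valueAsString()
        + " interface " + deviceIf.leaf("interface").valueAsString());
}
```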

Try it out by the following sequence:

  1. Rebuild the Java code: in packages/vlan/src, type make.

  2. Reload the package: in the NSO CLI, run admin@ncs# packages package vlan redeploy.

  3. Create or modify a VLAN service: in the NSO CLI, run admin@ncs(config)# services vlan net-0 vlan-id 844 device-if c0 interface 1/0 and commit.

Mapping service attributes to device configuration

Figure 28. Fetching values from the service instance
Fetching values from the service instance


Remember that the service instance is passed as a parameter to the create method. As a starting point, look at the first three lines:

  1. To reach a specific leaf in the model, use the NAVU leaf method with the name of the leaf as a parameter. The leaf then has various methods, for example getting its value as a string.

  2. service.leaf("vlan-id") and service.leaf(vlan._vlan_id_) are two ways of referring to the vlan-id leaf of the service. The latter uses symbols generated by the compilation steps, which gives you the benefit of compile-time checking. From this leaf you can get the value according to the type in the YANG model, ConfUInt32 in this case.

  3. Line 3 shows an example of casting between types. In this case we prepare the VLAN ID as a 16-bit unsigned integer for later use.
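As a self-contained illustration of that narrowing step, here is a plain-Java sketch. This is not the NSO API, just the underlying arithmetic; the Conf types presumably perform a similar range check internally:

```java
public class VlanIdCast {
    // Narrow a YANG uint32 value to the uint16 range used for the VLAN id.
    static int toUint16(long vlanId) {
        if (vlanId < 0 || vlanId > 0xFFFF) {
            throw new IllegalArgumentException("out of uint16 range: " + vlanId);
        }
        return (int) vlanId;
    }

    public static void main(String[] args) {
        System.out.println(toUint16(1222));
    }
}
```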

The next step is to iterate over the devices and interfaces. The NAVU method elements() returns the elements of a NAVU list.

Figure 29. Iterating a list in the service model
Iterating a list in the service model


In order to write the mapping code, make sure you have an understanding of the device model. One good way of doing that is to create the corresponding configuration on one device and then display it with the pipe target "display xpath". Below is CLI output that shows the model paths for "FastEthernet 1/0":

admin@ncs% show devices device c0 config ios:interface
           FastEthernet 1/0 | display xpath 

/devices/device[name='c0']/config/ios:interface/
         FastEthernet[name='1/0']/switchport/mode/trunk

/devices/device[name='c0']/config/ios:interface/
         FastEthernet[name='1/0']/switchport/trunk/allowed/vlan/vlans [ 111 ]

Another useful tool is to render a tree view of the model:

$ pyang -f jstree tailf-ned-cisco-ios.yang -o ios.html

This can then be opened in a Web browser and model paths are shown to the right:

Figure 30. The Cisco IOS Model
The Cisco IOS Model


Now, we replace the print statements with setting real configuration on the devices.

Figure 31. Setting the VLAN list
Setting the VLAN list


Let us walk through the above code line by line. The device-name is a leafref. The deref method returns the object that the leafref refers to. The getParent() call might surprise the reader. Look at the path for the leafref: /device/name/config/ios:interface/name. The name leafref is the key that identifies a specific interface. deref returns that key, while we want a reference to the interface itself (/device/name/config/ios:interface); that is the reason for the getParent() call.

The next line sets the vlan-list on the device. Note that this follows the paths displayed earlier using the NSO CLI. The sharedCreate() is important: it creates device configuration based on this service and indicates that other services might create the same value, hence "shared". Shared create maintains reference counters for the created configuration so that service deletion removes the configuration only when the last service referring to it is deleted. Finally, the interface name is used as a key to check whether the interface exists, using containsNode().
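Put together, the lines discussed above could look something like the following hedged sketch. Variable names such as vlanID16, feIntfList, and feIntfName are assumptions chosen to match the next snippet, and namespace prefix handling in NAVU may differ:

```java
// Sketch: follow the device-name leafref to the /devices/device entry.
NavuContainer theDevice = (NavuContainer) deviceIf.leaf("device-name")
        .deref().get(0).getParent();
// Create the global VLAN entry on the device; sharedCreate() keeps
// reference counters so other services may create the same value.
theDevice.container("config").container("vlan")
         .list("vlan-list").sharedCreate(vlanID16);
// Look up the FastEthernet list and check that the interface exists.
NavuList feIntfList = theDevice.container("config")
         .container("interface").list("FastEthernet");
String feIntfName = deviceIf.leaf("interface").valueAsString();
if (feIntfList.containsNode(feIntfName)) {
    // update the switchport configuration for this interface
}
```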

The last step is to update the VLAN list for each interface. The code below adds an element to the VLAN leaf-list.

// The interface
NavuNode theIf = feIntfList.elem(feIntfName);
theIf.container("switchport").
      sharedCreate().
      container("mode").
      container("trunk").
      sharedCreate();
// Create the VLAN leaf-list element
theIf.container("switchport").
      container("trunk").
      container("allowed").
      container("vlan").
      leafList("vlans").
      sharedCreate(vlanID16);

The above create method is all that is needed for create, read, update, and delete. NSO automatically handles any change, like changing the VLAN ID, adding an interface to the VLAN service, or deleting the service. Play with the CLI: modify and delete VLAN services and make sure you see this in action. It is handled by the FASTMAP engine, which renders any change based on the single definition of the create method.

Mapping using Java combined with Templates

Overview

We have shown two ways of mapping a service model to device configurations: service templates and Java. The mapping strategy using only Java is illustrated in the figure below.

Figure 32. Flat mapping with Java
Flat mapping with Java


This strategy has some drawbacks:

  • Managing different device vendors. If more vendors were introduced in the network, this would need to be handled by the Java code. This can of course be factored into separate classes to keep the general logic clean, passing the device details to vendor-specific classes, but it gets complex and always requires Java programmers to introduce new device types.

  • No clear separation of concerns and domain expertise. The general business logic for a service is one thing; detailed configuration knowledge of device types is something else. The latter requires network engineers, while the former is normally handled by a separate team that deals with OSS integration.

Java and templates can be combined as illustrated below:

Figure 33. Two layered mapping using feature templates
Two layered mapping using feature templates


In this model the Java layer focuses on the required logic, but it never touches concrete device models from various vendors. The vendor-specific details are abstracted away using feature templates. The templates take variables as input from the service logic and in turn transform these into concrete device configuration. Introducing a new device type does not affect the Java mapping.

This approach has several benefits:

  • The service logic can be developed independently of device types.

  • New device types can be introduced at runtime without affecting service logic.

  • Separation of concerns: network engineers are comfortable with templates, which look like configuration snippets, and they have the expertise in how configuration is applied to real devices. The people defining the service logic are often programmers who need to interface with other systems, which suits a Java layer.

Note that the logic layer does not understand the device types; the templates will dynamically apply the correct leg of the template depending on which device is touched.

The VLAN Feature Template

From an abstraction point of view we want a template that takes the following variables:

  • VLAN id

  • Device and interface

The mapping logic can then just pass these variables to the feature template, which applies them across a multi-vendor network.

Create a template as described before.

  • Create a concrete configuration on a device, or several devices of different type

  • Request NSO to display that as XML

  • Replace values with variables

This results in a feature template like below:

<!-- Feature Parameters -->
<!-- $DEVICE -->
<!-- $VLAN_ID -->
<!-- $INTF_NAME -->

<config-template xmlns="http://tail-f.com/ns/config/1.0"
                 servicepoint="vlan">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{$DEVICE}</name>
      <config>
        <vlan xmlns="urn:ios" tags="merge">
          <vlan-list>
            <id>{$VLAN_ID}</id>
          </vlan-list>
        </vlan>
        <interface xmlns="urn:ios" tags="merge">
          <FastEthernet tags="nocreate">
            <name>{$INTF_NAME}</name>
            <switchport>
              <trunk>
                <allowed>
                  <vlan tags="merge">
                    <vlans>{$VLAN_ID}</vlans>
                  </vlan>
                </allowed>
              </trunk>
            </switchport>
          </FastEthernet>
        </interface>
      </config>
    </device>
  </devices>
</config-template>

This template only maps to Cisco IOS devices (the xmlns="urn:ios" namespace), but you can add "legs" for other device types at any point in time and reload the package.

Note

Nodes set with a template variable evaluating to the empty string are ignored, e.g., the setting <some-tag>{$VAR}</some-tag> is ignored if the template variable $VAR evaluates to the empty string. However, this does not apply to XPath expressions evaluating to the empty string. A template variable can be surrounded by the XPath function string() if it is desirable to set a node to the empty string.

The VLAN Java Logic

The Java mapping logic for applying the template is shown below:

Figure 34. Mapping logic using template
Mapping logic using template


Note that the Java code has no clue about the underlying device type; it just passes the feature variables to the template. At run-time you can update the template with mappings to other device types. The Java code stays untouched: if you modify an existing VLAN service instance to refer to the new device type, the commit will generate the corresponding configuration for that device.
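A hedged sketch of such mapping code, using the NSO template API from the create() callback (the template is assumed to be saved as packages/vlan/templates/vlan-template.xml; names are illustrative):

```java
// Sketch: apply the feature template once per device-if entry.
Template template = new Template(context, "vlan-template");
for (NavuContainer deviceIf : service.list("device-if").elements()) {
    TemplateVariables vars = new TemplateVariables();
    vars.putQuoted("DEVICE", deviceIf.leaf("device-name").valueAsString());
    vars.putQuoted("VLAN_ID", service.leaf("vlan-id").valueAsString());
    vars.putQuoted("INTF_NAME", deviceIf.leaf("interface").valueAsString());
    template.apply(service, vars);
}
```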

The astute reader will object: "Why do we have the Java layer at all? This could have been done as a pure template solution." That is true, but this simple Java layer leaves room for arbitrarily complex service logic before applying the template.

Steps to Build a Java and Template Solution

The steps to build the solution described in this section are:

  1. Create a run-time directory: $ mkdir ~/service-template; cd ~/service-template

  2. Generate a netsim environment: $ ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios 3 c

  3. Generate the NSO runtime environment: $ ncs-setup --netsim-dir ./netsim --dest ./

  4. Create the VLAN package in the packages directory: $ cd packages; ncs-make-package --service-skeleton java vlan

  5. Create a template directory in the VLAN package: $ cd vlan; mkdir templates

  6. Save the above described template in packages/vlan/templates

  7. Create the YANG service model according to above: packages/vlan/src/yang/vlan.yang

  8. Update the Java code according to above: packages/vlan/src/java/src/com/example/vlan/vlanRFS.java

  9. Build the package: in packages/vlan/src do make

  10. Start NSO

Service Mapping: Putting Things Together

The purpose of this section is to show a more complete example of a service mapping. It is based on the example examples.ncs/service-provider/mpls-vpn.

Auxiliary Service Data

In the previous sections we looked at service mapping where the input parameters are enough to generate the corresponding device configurations. In many cases this is not sufficient. The service mapping logic may need to reach out to other data in order to generate the device configuration. This is common in the following scenarios:

  • Policies: it might make sense to define policies that can be shared between service instances. The policies, for example QoS, have data models of their own (not service models) and the mapping code reads from that.

  • Topology information: the service mapping might need to know connected devices, like which PE the CE is connected to.

  • Resources like VLAN IDs, IP addresses: these might not be given as input parameters. This can be modeled separately in NSO or fetched from an external system.

It is important to consider the above examples when designing the service model: what is input? What is available from other sources? This example illustrates how to define QoS policies "on the side". A reference to an existing QoS policy is passed as input. This is a much better principle than giving all QoS parameters to every service instance. Note that if you modify QoS definitions that services refer to, this will not change the existing services. For a service to pick up the changed policies, you need to perform a re-deploy on the service.
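In the NSO CLI, a re-deploy is an action on the service instance; for the VLAN service used earlier in this section, it would look something like this (instance name hypothetical):

```
admin@ncs# services vlan net-0 re-deploy
```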

This example also uses a list that maps every CE to a PE. This list needs to be populated before any service is created. The service model only has the CE as an input parameter, and the service mapping code performs a lookup in this list to get the PE. If the underlying topology changes, a service re-deploy will adapt the service to the changed CE-PE links. See more on topology below.

NSO has a package to manage resources like VLANs and IP addresses as pools within NSO. In this way the resources are managed within the transaction. The mapping code could also reach out externally to get resources; the Reactive FASTMAP pattern is recommended for this.

Topology

Using topology information when instantiating an NSO service is a common approach, but also an area with many misconceptions. Just as a service in NSO takes a black-box view of the configuration needed for that service in the network, NSO treats topologies the same way. It is of course common to reference topology information in the service, but it is highly desirable to have a decoupled and self-sufficient service that uses only the part of the topology that is relevant and needed for that specific service.

Other parts of the topology could either be handled by other services or simply left to the network state to sort out; they do not necessarily relate to configuring the network. A routing protocol will, for example, handle the IP path through the network.

It is highly desirable to not introduce unneeded dependencies towards network topologies in your service.

To illustrate this, let's look at a Layer 3 MPLS VPN service. A logical overview of an MPLS VPN with three endpoints could look something like this: CE routers connect to PE routers, which are connected to an MPLS core network containing a number of P routers.

Figure 35. Simple MPLS VPN Topology
Simple MPLS VPN Topology


In the service model you only want to configure the CE devices to use as endpoints. Topology information could then be used to sort out which PE router each CE router is connected to. But what type of topology do you need? Let's look at a more detailed picture of what the L1 and L2 topology could look like for one side of the picture above.

Figure 36. L1-L2 Topology
L1-L2 Topology


In pretty much all networks there is an access network between the CE and PE routers. In the picture above, the CE routers are connected to local Ethernet switches that belong to a local Ethernet access network, connected through optical equipment. The local Ethernet access network is connected to a regional Ethernet access network, which is connected to the PE router. The physical connections between the devices in this picture have most likely been simplified; in the real world, redundant cabling would be used. The above is of course only one example of what an access network could look like, and it is very likely that a service provider has several different access technologies, for example Ethernet, ATM, or DSL-based access networks.

Depending on how you design the L3VPN service, the physical cabling or the exact traffic path taken in the Layer 2 Ethernet access network might not be that interesting, just as we don't make any assumptions about, or care, how traffic is transported over the MPLS core network. In both cases we trust the underlying protocols handling state in the network: spanning tree in the Ethernet access network, and routing protocols like BGP in the MPLS cloud. Instead, in this case it could make more sense to have a separate NSO service for the access network, both so that it can be reused by, for example, both L3VPNs and L2VPNs, and to avoid tightly coupling the L3VPN service to an access network that can vary (Ethernet, ATM, etc.).

Looking at the topology again from the L3VPN service perspective, if the service assumes that the access network is already provisioned or taken care of by another service, it could look like this.

Figure 37. Black-box topology
Black-box topology


The information needed to sort out what PE router a CE router is connected to as well as configuring both CE and PE routers is:

  • Interface on the CE router that is connected to the PE router, and IP address of that interface.

  • Interface on the PE router that is connected to the CE router, and IP address of that interface.

Creating a Multi-Vendor Service

This section describes the creation of an MPLS L3VPN service in a multi-vendor environment, applying the concepts described above. The example discussed can be found in examples.ncs/service-provider/mpls-vpn. The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers.

The goal of the NSO service is to set up an MPLS Layer 3 VPN on a number of CE router endpoints, using BGP as the CE-PE routing protocol. Connectivity between the CE and PE routers goes through a Layer 2 Ethernet access network, which is out of scope for this service. In a real-world scenario the access network could, for example, be handled by another service.

In the example network we can also assume that the MPLS core network already exists and is configured.

Figure 38. The MPLS VPN Example
The MPLS VPN Example


YANG Service Model Design

When designing service YANG models there are a number of things to take into consideration. The process usually involves the following steps:

  1. Identify the resulting device configurations for a deployed service instance.

  2. Identify which parameters from the device configurations are common and should be put in the service model.

  3. Ensure that the scope of the service and the structure of the model work with the NSO architecture and service mapping concepts. For example, avoid unnecessary complexity in the code that processes the service parameters.

  4. Ensure that the model is structured so that integration with systems north of NSO works well. For example, ensure that the parameters in the service model map to the parameters required by an ordering system.

Steps 1 and 2: Device Configurations and Identifying Parameters

Deploying an MPLS VPN in the network results in the following basic CE and PE configurations. The snippets below only include the Cisco IOS and Cisco IOS XR configurations. In a real process, all applicable device vendor configurations should be analyzed.

Example 117. CE Router Config
  interface GigabitEthernet0/1.77
   description Link to PE / pe0 - GigabitEthernet0/0/0/3
   encapsulation dot1Q 77
   ip address 192.168.1.5 255.255.255.252
   service-policy output volvo
  !
  policy-map volvo
   class class-default
    shape average 6000000
   !
  !
 interface GigabitEthernet0/11
   description volvo local network
   ip address 10.7.7.1 255.255.255.0
  exit
  router bgp 65101
   neighbor 192.168.1.6 remote-as 100
   neighbor 192.168.1.6 activate
   network 10.7.7.0
  !


Example 118. PE Router Config
  vrf volvo
   address-family ipv4 unicast
    import route-target
     65101:1
    exit
    export route-target
     65101:1
    exit
   exit
  exit
  policy-map volvo-ce1
   class class-default
    shape average 6000000 bps
   !
   end-policy-map
  !
  interface GigabitEthernet 0/0/0/3.77
   description Link to CE / ce1 - GigabitEthernet0/1
   ipv4 address 192.168.1.6 255.255.255.252
   service-policy output volvo-ce1
   vrf         volvo
   encapsulation dot1q 77
  exit
  router bgp 100
   vrf volvo
    rd 65101:1
    address-family ipv4 unicast
    exit
    neighbor 192.168.1.5
     remote-as 65101
     address-family ipv4 unicast
      as-override
     exit
    exit
   exit
  exit


The device configuration parameters that need to be uniquely configured for each VPN, such as the VRF name and route targets, VLAN IDs, IP addresses, AS numbers, and QoS policy names and rates in the snippets above, are the candidates for the service model.

Steps 3 and 4: Model Structure and Integration with Other Systems

When configuring a new MPLS L3VPN in the network, we have to configure all CE routers that should be interconnected by the VPN, as well as the PE routers they connect to.

However, when creating a new L3VPN service instance in NSO it would be ideal if only the endpoints (CE routers) were needed as parameters, to avoid requiring knowledge about PE routers in a northbound order management system. This means that a way to use topology information is needed to derive or compute which PE router a CE router is connected to. This makes the input parameters for a new service instance very simple. It also makes the entire service very flexible, since we can move CE and PE routers around without modifying the service configuration.

Resulting YANG Service Model:

container vpn {

  list l3vpn {
    tailf:info "Layer3 VPN";

    uses ncs:service-data;
    ncs:servicepoint l3vpn-servicepoint;

    key name;
    leaf name {
      tailf:info "Unique service id";
      type string;
    }
    leaf as-number {
      tailf:info "MPLS VPN AS number.";
      mandatory true;
      type uint32;
    }

    list endpoint {
      key id;
      leaf id {
        tailf:info "Endpoint identifier";
        type string;
      }
      leaf ce-device {
         mandatory true;
         type leafref {
           path "/ncs:devices/ncs:device/ncs:name";
         }
      }
      leaf ce-interface {
        mandatory true;
        type string;
      }
      leaf ip-network {
        tailf:info "Private IP network";
        mandatory true;
        type inet:ip-prefix;
      }
      leaf bandwidth {
        tailf:info "Bandwidth in bps";
        mandatory true;
        type uint32;
      }
    }
  }
}

The snippet above contains the l3vpn service model. The structure of the model is very simple. Every VPN has a name, an AS number, and a list of all the endpoints in the VPN. Each endpoint has:

  • A unique id

  • A reference to a device (a CE router in our case)

  • A pointer to the LAN local interface on the CE router. This is kept as a string since we want this to work in a multi-vendor environment.

  • LAN private IP network

  • Bandwidth on the VPN connection.
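
With this model, creating a service instance requires only per-endpoint data. As a sketch, a configuration session in the NSO CLI might look something like the following; the endpoint name main-office is hypothetical, the other values are taken from the device configurations shown earlier, and the exact syntax depends on the CLI mode:

```
admin@ncs(config)# vpn l3vpn volvo as-number 65101
admin@ncs(config-l3vpn-volvo)# endpoint main-office \
  ce-device ce1 ce-interface GigabitEthernet0/11 \
  ip-network 10.7.7.0/24 bandwidth 6000000
admin@ncs(config-endpoint-main-office)# commit
```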

To be able to derive the CE-to-PE connections, we use a very simple topology model. Notice that this YANG snippet does not contain any servicepoint, which means that this is not a service model but just a YANG schema that lets us store information in CDB.

container topology {
  list connection {
    key name;
    leaf name {
      type string;
    }
    container endpoint-1 {
      tailf:cli-compact-syntax;
      uses connection-grouping;
    }
    container endpoint-2 {
      tailf:cli-compact-syntax;
      uses connection-grouping;
    }
    leaf link-vlan {
      type uint32;
    }
  }
}

grouping connection-grouping {
  leaf device {
    type leafref {
      path "/ncs:devices/ncs:device/ncs:name";
    }
  }
  leaf interface {
    type string;
  }
  leaf ip-address {
    type tailf:ipv4-address-and-prefix-length;
  }
}

The model contains a list of connections, where each connection points out the device, interface, and IP address at each end of the connection.
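
For reference, a populated connection entry matching the CE and PE configuration snippets shown earlier could look like this in the NSO CLI (a sketch; the connection name ce1-pe0 is hypothetical):

```
topology connection ce1-pe0
 endpoint-1 device ce1 interface GigabitEthernet0/1 ip-address 192.168.1.5/30
 endpoint-2 device pe0 interface GigabitEthernet0/0/0/3 ip-address 192.168.1.6/30
 link-vlan 77
!
```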

Defining the Mapping

Since the mapping logic needs to look up which PE routers to configure using the topology model, it is not possible to use a purely declarative, configuration-template-based mapping. Using Java and configuration templates together is the right approach here.

The Java logic lets you set a list of parameters that can be consumed by the configuration templates. One huge benefit of this approach is that all the parameters set in the Java code are completely vendor agnostic. When writing the code, there is no need to know which kinds of devices or vendors exist in the network; the parameters form an abstraction over vendor-specific configuration. This also means that no knowledge of the service logic in the Java code is needed to create a configuration template. The configuration templates can instead be created and maintained by subject matter experts, the network engineers.

With this service mapping approach it makes sense to modularize the service mapping by creating configuration templates on a per-feature level, creating an abstraction for each feature in the network. In this example, that means we will create the following templates:

  • CE router

  • PE router

This both makes services easier to create and maintain, and produces components that are reusable across different services. This can of course be made even more fine-grained, with separate templates for, for example, BGP or interface configuration if needed.

Since the configuration templates are decoupled from the service logic, it is also possible to create and add templates in a running NSO system. For example, you can add support for a CE router from a new vendor to the Layer 3 VPN service by only adding a new configuration template, consuming the set of parameters from the service logic, without changing anything in the other logical layers.

Figure 39. The MPLS VPN Example
The MPLS VPN Example


The Java Code

The Java code for the service mapping is very simple and follows these pseudo-code steps:

READ topology
FOR EACH endpoint
    USING topology
        DERIVE connected-pe-router
        READ ce-pe-connection
    SET pe-parameters
    SET ce-parameters
    APPLY TEMPLATE l3vpn-ce
    APPLY TEMPLATE l3vpn-pe

This section goes through the relevant parts of the Java code outlined by the pseudo code above. The code starts by defining the configuration templates and reading the list of endpoints configured in the service, as well as the topology. The NAVU API is used for navigating the data models.

Template peTemplate = new Template(context, "l3vpn-pe");
Template ceTemplate = new Template(context, "l3vpn-ce");
NavuList endpoints = service.list("endpoint");
NavuContainer topology = ncsRoot.getParent().
        container("http://com/example/l3vpn").
        container("topology");

The next step is iterating over the VPN endpoints configured in the service, finding the connected PE router using small helper methods that navigate the configured topology.

for (NavuContainer endpoint : endpoints.elements()) {
    try {
        String ceName = endpoint.leaf("ce-device").valueAsString();
        // Get the PE connection for this endpoint router
        NavuContainer conn =
            getConnection(topology,
                          endpoint.leaf("ce-device").valueAsString());
        NavuContainer peEndpoint = getConnectedEndpoint(conn, ceName);
        NavuContainer ceEndpoint = getMyEndpoint(conn, ceName);

The parameter dictionary is created from the TemplateVariables class and is populated with appropriate parameters.

TemplateVariables vpnVar = new TemplateVariables();
vpnVar.putQuoted("PE", peEndpoint.leaf("device").valueAsString());
vpnVar.putQuoted("CE", endpoint.leaf("ce-device").valueAsString());
vpnVar.putQuoted("VLAN_ID", vlan.valueAsString());
vpnVar.putQuoted("LINK_PE_ADR",
    getIPAddress(peEndpoint.leaf("ip-address").valueAsString()));
vpnVar.putQuoted("LINK_CE_ADR",
    getIPAddress(ceEndpoint.leaf("ip-address").valueAsString()));
vpnVar.putQuoted("LINK_MASK",
    getNetMask(ceEndpoint.leaf("ip-address").valueAsString()));
vpnVar.putQuoted("LINK_PREFIX",
    getIPPrefix(ceEndpoint.leaf("ip-address").valueAsString()));

The last step after all parameters have been set is applying the templates for the CE and PE routers for this VPN endpoint.

peTemplate.apply(service, vpnVar);
ceTemplate.apply(service, vpnVar);

Configuration Templates

The configuration templates are XML templates based on the structure of the device YANG models. There is a very easy way to create the configuration templates for the service mapping if NSO is connected to a device with the appropriate configuration on it, using the following steps:

  1. Configure the device with the appropriate configuration.

  2. Add the device to NSO.

  3. Sync the configuration to NSO.

  4. Display the device configuration in XML format.

  5. Save the XML output to a configuration template file and replace the configured values with parameters.

The commands in NSO give the following output. To keep the example simple, only the BGP part of the configuration is used:

admin@ncs# devices device ce1 sync-from
admin@ncs# show running-config devices device ce1 config \
        ios:router bgp | display xml

<config xmlns="http://tail-f.com/ns/config/1.0">
  <devices xmlns="http://tail-f.com/ns/ncs">
  <device>
    <name>ce1</name>
      <config>
         <router xmlns="urn:ios">
          <bgp>
            <as-no>65101</as-no>
            <neighbor>
              <id>192.168.1.6</id>
              <remote-as>100</remote-as>
              <activate/>
            </neighbor>
            <network>
              <number>10.7.7.0</number>
            </network>
          </bgp>
        </router>
      </config>
  </device>
  </devices>
</config>

The final configuration template with the replaced parameters is shown below. If a parameter starts with a $ sign, its value is taken from the Java parameter dictionary; otherwise, it is a direct XPath reference to a value in the service instance.

<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device tags="nocreate">
      <name>{$CE}</name>
      <config>
       <router xmlns="urn:ios" tags="merge">
          <bgp>
            <as-no>{/as-number}</as-no>
            <neighbor>
              <id>{$LINK_PE_ADR}</id>
              <remote-as>100</remote-as>
              <activate/>
            </neighbor>
            <network>
              <number>{$LOCAL_CE_NET}</number>
            </network>
          </bgp>
        </router>
      </config>
    </device>
  </devices>
</config-template>

FASTMAP Description

FASTMAP covers the complete service life-cycle: creating, changing and deleting the service. The solution requires a minimum amount of code for mapping from a service model to a device model.

FASTMAP is based on generating changes from an initial create. When the service instance is created, the reverse of the resulting device configuration is stored together with the service instance. If an NSO user later changes the service instance, NSO first applies (in a transaction) the reverse diff of the service, effectively undoing the previous results of the service creation code. Then it runs the service creation logic again and computes the diff against the current configuration. This diff is then sent to the devices.

Note

This means it is very important that the service create code produces the same device changes for a given set of input parameters every time it is executed. See the section called "Persistent FASTMAP Properties" for techniques to achieve this.

If the service instance is deleted, NSO applies the reverse diff of the service, effectively removing all configuration changes the service did from the devices.

Figure 40. FASTMAP Create a Service
FASTMAP Create a Service

Assume we have a service model that defines a service with attributes X, Y, and Z. The mapping logic calculates that attributes A, B, and C shall be created on the devices. When the service is instantiated, the inverse of the corresponding device attributes A, B, and C is stored with the service instance in the NSO data store CDB. This inverse answers the question: what should be done to the network to bring it back to the state before the service was instantiated?

Now let us see what happens if one service attribute is changed. In the scenario below, the service attribute Z is changed. NSO executes this as if the service were created from scratch. The resulting device configuration is then compared with the actual configuration, and the minimal diff is sent to the devices. Note that this is managed automatically; there is no code to handle "change Z".

Figure 41. FASTMAP Change a Service
FASTMAP Change a Service
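
The minimal-diff step can be illustrated with a toy model. The sketch below is not NSO code; it just shows the idea of comparing the from-scratch result with the current configuration, represented here as flat key-value maps, and emitting only the operations that differ:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of the FASTMAP minimal-diff idea (not NSO code):
// configurations are flat key-value maps, and the diff is the smallest
// set of operations that turns `current` into `desired`.
class DiffDemo {
    static Map<String, String> diff(Map<String, String> current,
                                    Map<String, String> desired) {
        Map<String, String> ops = new LinkedHashMap<>();
        // Attributes that are new or have changed value.
        for (Map.Entry<String, String> e : desired.entrySet()) {
            if (!e.getValue().equals(current.get(e.getKey()))) {
                ops.put(e.getKey(), "set " + e.getValue());
            }
        }
        // Attributes that should no longer exist.
        for (String key : current.keySet()) {
            if (!desired.containsKey(key)) {
                ops.put(key, "delete");
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        Map<String, String> current = new LinkedHashMap<>();
        current.put("A", "1");
        current.put("B", "2");
        current.put("C", "3");
        // Re-running create() after service attribute Z changed yields
        // the same A and B, but a different C.
        Map<String, String> desired = new LinkedHashMap<>(current);
        desired.put("C", "7");
        System.out.println(diff(current, desired)); // prints {C=set 7}
    }
}
```

Only the changed attribute reaches the devices; unchanged attributes produce no operations at all, which is exactly why no per-change code is needed.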

When a user deletes a service instance, NSO can pick up the stored device configuration and delete it:

Figure 42. FASTMAP Delete a Service
FASTMAP Delete a Service

Reactive FASTMAP

A FASTMAP service is not allowed to perform explicit function calls that have side effects. The only action a service is allowed to take is to modify the configuration of the current transaction. For example, a service may not invoke an RPC to allocate a resource or start a virtual machine. All such actions must take place before the service is created and be provided as input parameters to the service. The reason for this restriction is that the FASTMAP code may be executed as part of a commit dry-run, or the commit may fail, in which case the side effects would have to be undone.

Reactive FASTMAP is a design pattern that provides a side-effect free solution to invoking RPCs from a service. In the services discussed previously in this chapter, the service was modeled in such a way that all required parameters were given to the service instance. The mapping logic code could immediately do its work.

Sometimes this is not possible. Two examples where Reactive FASTMAP is the solution are:

  1. A resource is allocated from an external system, such as an IP address or a VLAN id. It is not possible to do this allocation from within the normal FASTMAP create() code since there is no way to deallocate the resource on commit abort or failure, or when the service is deleted. Furthermore, the create() code runs within the transaction lock, and the time spent in create() should be as short as possible.

  2. The service requires the start of one or more virtual machines or virtual network functions. The VMs do not yet exist, and the create() code needs to trigger something that starts the VMs and then, later, when the VMs are operational, configure them.

The basic idea is to let the create() code not just write data in the /ncs:devices tree, but also write data in some auxiliary data structure. A CDB subscriber subscribes to that auxiliary data structure and performs the actual side effect, for example a resource allocation. The response is written to CDB as operational data, where the service can read it during subsequent invocations.

The pseudo code for a Reactive FASTMAP service that allocates an id from an id pool may look like this:

    create(serv) {
       /* request resource allocation */
       ResourceAllocator.requestId(serv, idPool, allocId);

       /* check for allocation response */
       if (!ResourceAllocator.idReady(idPool, allocId))
          return;

       /* read allocation id */
       id = ResourceAllocator.idRead(idPool, allocId);

       /* use id in device config */
       configure(id)
    }

The actual deployment of a Reactive FASTMAP service will involve multiple executions of the create() code.

  1. In the first run the code will request an id by writing an allocation request to the resource manager tree. It will then check if the response is ready, which it will not be, and return.

  2. The resource manager subscribes to changes in the resource manager tree, looking for allocation requests being created and deleted. In this case, a new allocation request is created. The resource manager allocates the resource and writes the response in a CDB operational leaf. Finally, the resource manager triggers the service's reactive-re-deploy action.

  3. The create() code is run a second time. It creates the allocation request, just as it did the first time, then checks if the response is ready. This time it is ready, and the code can proceed to read the allocated id and use it in its configuration.

Let us make a small digression on the reactive-re-deploy action mentioned above. Any service exposes both a re-deploy and a reactive-re-deploy action. Both actions are similar in that they activate the FASTMAP algorithm and invoke the service create() logic. However, while re-deploy is user facing and has, for example, dry-run functionality, reactive-re-deploy is specifically tailored for the Reactive FASTMAP pattern. Hence, reactive-re-deploy takes no arguments and has no extra functionality; instead, it performs the re-deploy as the same user and with the same commit parameters as the original service commit. Also, reactive-re-deploy makes a "shallow" re-deploy in the sense that underlying stacked services are not re-deployed. This "shallow" feature is important when stacked services are used for performance optimization reasons. In the rest of this chapter, when service re-deploy is mentioned, we imply that it is performed using the reactive-re-deploy action.
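
As a sketch of how the two actions are invoked manually from the NSO CLI, using the l3vpn service from earlier in this chapter (exact output and option syntax may vary between NSO versions):

```
admin@ncs# vpn l3vpn volvo re-deploy dry-run
admin@ncs# vpn l3vpn volvo reactive-re-deploy
```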

In the above ResourceAllocator example, when the service is deleted we want the allocated id to be returned to the resource manager and become available for others to allocate. This is achieved as follows.

  1. The service is deleted with the consequence that all configuration that the service created during its deployment will be removed, in particular the id allocation request will be removed.

  2. Since the resource manager subscribes to changes in the resource manager tree it will be notified that an allocation request has been deleted. It can then release the resource allocated for this specific request.
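
The allocate-on-create / release-on-delete behavior of such a resource manager can be sketched with a toy in-memory pool. This is not the NSO resource manager API; it only mirrors the two subscriber events described above:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Toy sketch of a resource manager's pool behavior (not the NSO API):
// a CDB subscriber would call onRequestCreated() when it sees a new
// allocation request appear and onRequestDeleted() when the request
// disappears because the service was deleted.
class IdPool {
    private final Deque<Integer> free = new ArrayDeque<>();
    private final Map<String, Integer> allocations = new HashMap<>();

    IdPool(int from, int to) {
        for (int i = from; i <= to; i++) {
            free.add(i);
        }
    }

    // Allocate an id for a request. Repeated calls for the same request
    // return the same id: allocation must be idempotent, since the
    // service create() code runs many times with the same input.
    int onRequestCreated(String requestId) {
        Integer id = allocations.computeIfAbsent(requestId, k -> free.poll());
        if (id == null) {
            throw new IllegalStateException("pool exhausted");
        }
        return id;
    }

    // Release the id so that others can allocate it.
    void onRequestDeleted(String requestId) {
        Integer id = allocations.remove(requestId);
        if (id != null) {
            free.add(id);
        }
    }
}
```

The idempotency of onRequestCreated() matters: every reactive-re-deploy re-creates the same allocation request, and the service must read back the same id each time for FASTMAP to produce stable device configuration.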

Other side effects can be handled in similar ways, for example starting virtual machines or updating external servers. The resource-manager-example and id-allocator-example packages can be found in examples.ncs/service-provider/virtual-mpls-vpn.

Note

All packages and NEDs used in the examples are just example packages/NEDs; they are in no way production ready, nor are they supported. There are official Function Packs (collections of packages) and NEDs that resemble the packages used in the examples, but they are not the same. Never consider packages and NEDs found in the example collection to be officially supported.

The example in examples.ncs/getting-started/developing-with-ncs/4-rfs-service has a package called vlan-reactive-fastmap that implements external allocation of a unit and a vlan-id for the service. The code consists of three parts:

  1. The YANG model, which is very similar to the vlan package previously described in this chapter. The difference is that two parameters are missing, the unit and the vlan-id.

    Another difference is that a parallel list structure to the services is maintained. The list entries contain help data and eventually the operational data holding the missing parameters will end up there.

  2. The create() method. This code drives the Reactive FASTMAP loop forward. The YANG model for the service has this structure:

    module: alloc-vlan-service
       +--rw alloc-vlan* [name]
          +--rw name                        string
          +--rw iface                       string
          +--rw description                 string
          +--rw arp*                        enumeration

    The parallel auxiliary model is:

    module: alloc-vlan-service
       +--rw alloc-vlan-data* [name]
       |  +--rw name                     string
       |  +--rw request-allocate-unit!
       |  |  +--ro unit?   string
       |  +--rw request-allocate-vid!
       |     +--ro vlan-id?   uint16

    When the create() method is called, the code creates an allocation request by writing config data into its "buddy" list entry. It then checks the buddy entry to see if the unit and the vlan-id are there. If they are, the FASTMAP code starts to write into the /ncs:devices tree. If they are not, it returns.

  3. A CDB subscriber that subscribes to the /alloc-vlan-data tree where the normal FASTMAP create() code writes. The CDB subscriber picks up, in this case for example, the "CREATE" of /alloc-vlan-data[name="KEY"]/request-allocate-unit, allocates a unit number, writes that number as operational data in the /alloc-vlan-data tree, and finally re-deploys the service, thus triggering create() again. This loop of create(), CDB subscriber, and re-deploy continues until create() decides that it has all the required data to enter the normal FASTMAP phase, where the code writes to the /ncs:devices tree.

There are many variations on this same pattern that can be applied. The common theme is that the create() code relies on auxiliary operational data to be filled in. This data contains the missing parameters.

Progress Reporting Using plan-data

Since the life-cycle of a Reactive FASTMAP service is more complex, there is also a need to report the progress of the service in addition to the success or failure of individual transactions. A Reactive FASTMAP service is expected to be designed with a self-sustained chain of reactive-re-deploys until the service has completed. If this chain is broken, the service will fail to complete even though no transaction failure has occurred.

To support Reactive FASTMAP service progress reporting, there is a YANG grouping, ncs:plan-data, and a Java API utility, PlanComponent. It is recommended to use these in the service to promote a standardized view of progress reporting.

The YANG submodule defining the ncs:plan-data grouping is named tailf-ncs-plan.yang and contains the following:

submodule tailf-ncs-plan {
  yang-version 1.1;
  belongs-to tailf-ncs {
    prefix ncs;
  }

  import ietf-yang-types {
    prefix yang;
  }

  import tailf-common {
    prefix tailf;
  }

  include tailf-ncs-common;
  include tailf-ncs-services;
  include tailf-ncs-devices;
  include tailf-ncs-log;

  organization "Tail-f Systems";

  description
    "This submodule contains a collection of YANG definitions for
     configuring plans in NCS.

     Copyright 2016-2022 Cisco Systems, Inc.
     All rights reserved.
     Permission is hereby granted to redistribute this file without
     modification.";

  revision 2022-05-12 {
    description
      "Released as part of NCS-5.7.4.

       Non-backwards-compatible changes have been introduced.

       The 'lsa-service-list' leaf-list in the service's private data
       has changed type to yang:xpath1.0.";
  }

  revision 2021-12-17 {
    description
      "Released as part of NCS-5.7.

       Non-backwards-compatible changes have been introduced.

       Obsoleted the usage of the 'service-commit-queue' grouping.

       Added YANG extensions 'all' and 'any'.

       Added YANG extensions 'all' and 'any' as valid substatements
       to the 'pre-condition' YANG extension.

       Added grouping 'pre-condition-grouping' that contains
       data to express nano-service pre-conditions.

       Added status obsolete to leafs 'create-monitor', 'create-trigger-expr',
       'delete-monitor' and 'delete-trigger-expr' in the 'pre-conditions'
       container in the 'nano-plan-components' grouping.

       Added containers 'create' and 'delete' to the 'pre-conditions'
       containers in the 'nano-plan-components' grouping. Both of the
       new containers contain the grouping 'pre-condition-grouping'.

       Add the converge-on-re-deploy extension.

       Updated plan-location type to yang:xpath1.0.

       Update the description of the self-as-service-status extension.";
  }

  revision 2021-09-02 {
    description
      "Released as part of NCS-5.6.

       Updated the description and added the 'sync' parameter to the
       /zombies/service/reactive-re-deploy action.

       Remove mandatory statement from status leaf in plan-state-change
       notification.

       Added commit-queue container and trace-id leaf to plan-state-change
       notification.

       Remove unique statement from /services/plan-notifications/subscription.";
  }

  revision 2021-02-09 {
    description
      "Released as part of NCS-5.5.1.

       Added the 'ncs-commit-params' grouping to the
       /zombies/service/re-deploy action input parameters.

       Added /zombies/service/latest-commit-parameters leaf.";
  }

  revision 2020-11-26 {
    description
      "Released as part of NCS-5.5.

       Add when statement to variables.

       Add self-as-service-status to plan-outline.

       Added the ncs-commit-params grouping to the force-back-track input
       parameters.

       Add new extension deprecates-component and a version leaf to
       nano components.";
  }

  revision 2020-06-25 {
    description
      "Released as part of NCS-5.4.

       Add trigger-on-delete trigger type to precondition monitors.

       Add sync option to post actions.

       Remove the experimental tag from the plan-location statement.

       Add purge action to side-effect queue and make the automatic
       cleanup of the side-effect queue configurable.

       Add force-commit to create and delete.

       Add zombies/service/pending-delete leaf.

       Add zombies/service/plan/commit-queue container.

       Use service-commit-queue grouping under zombies/service.

       Add load-device-config action under
       commit-queue/queue-item/failed-device for zombies.

       Add canceled as a valid value of side-effect-queue/status.
       The status of a side-effect is canceled if a related
       commit-queue item has failed. Canceled side-effects are
       cleaned up as part of the side-effect queue's automatic cleanup.";
  }

  revision 2019-11-28 {
    description
      "Released as part of NCS-5.3.

       Added dry-run option to the zombie resurrect action.

       Added service log and error-info to zombies.

       Added reactive-re-deploy action to revive and
       reactive-re-deploy a zombie.";

  }

  revision 2019-04-09 {
    description
      "Released as part of NCS-5.1.

       Added operation leaf to plan-state-change notification and to
       subscription list.

       Added ned-id-list in the service's private data.";
  }

  revision 2018-11-12 {
    description
      "Released as part of NCS-4.7.2.

       Major changes to nano services.";
  }

  revision 2018-06-21 {
    description
      "Released as part of NCS-4.7.

       Added commit-queue container in plan.";
  }

  revision 2017-03-16 {
    description
      "Released as part of NCS-4.4.

       Added error-info container in plan for additional error information.";
  }

  revision 2016-11-24 {
    description
      "Released as part of NCS-4.3.

       Major additions to this submodule to incorporate Nano Services.";
  }

  revision 2016-05-26 {
    description
      "Initial revision";
  }


  typedef plan-xpath {
    type yang:xpath1.0;
    description
      "This type represents an XPath 1.0 expression that is evaluated
       in the following context:

         o  The set of namespace declarations are the prefixes defined
            in all YANG modules implemented, mapped to the namespace
            defined in the corresponding module.

         o  The set of variable bindings contains all variables
            declared with 'ncs:variable' that are in scope, and all
            variables defined in the service code's 'opaque' key-value
            list (if any), and the following variables:

            'SERVICE': a nodeset with the service instance node as the
                       only member, or no nodes if the service
                       instance is being deleted.

            'ZOMBIE':  a nodeset with the service instance node as the
                       only member when it is being deleted, or no
                       nodes if the service instance exists.

            'PLAN':    a nodeset with the 'plan' container for the service
                       instance as the only member.

         o  The function library is the core function library.

         o  If this expression is in a descendant of an 'ncs:foreach'
            statement, the context node is the node in the node set
            in the 'ncs:foreach' result.  Otherwise, the context node
            is initially the service instance node.";
  }
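
  // Illustrative example (not part of the module): a nano service
  // precondition monitor is evaluated in this XPath context, so it can
  // refer to the variables described above, e.g.
  //
  //   $SERVICE/allocated-address                       (hypothetical leaf)
  //   $PLAN/component[name='self']/state[name='ncs:ready']/status
  //
  // The first expression yields a non-empty node set only once the
  // hypothetical leaf exists; the second selects the status of the self
  // component's ready state in the service's plan.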

  /*
   * Plan Component Types
   */

  typedef plan-component-type-t {
    description
      "This is a base type from which all service specific plan components
       can be derived.";
    type identityref {
      base plan-component-type;
    }
  }

  identity plan-component-type {
    description
      "A service plan consists of several different plan components.
       Each plan component moves forward in the plan as the service
       comes closer to fulfillment.";
  }

  identity self {
    description
      "A service should, when it constructs its plan, include a component
       of type 'self'. This component can be used by upper-layer software
       to determine which state the service is in as a whole.";
    base plan-component-type;
  }


  /*
   * Plan States
   */

  typedef plan-state-name-t {
    description
      "This is a base type from which all plan component specific states can
       be derived.";
    type identityref {
      base plan-state;
    }
  }

  typedef plan-state-operation-t {
    type enumeration {
      enum created {
        tailf:code-name "plan_state_created";
      }
      enum modified {
        tailf:code-name "plan_state_modified";
      }
      enum deleted {
        tailf:code-name "plan_state_deleted";
      }
    }
  }

  typedef plan-state-status-t {
    type enumeration {
      enum not-reached;
      enum reached;
      enum failed {
        tailf:code-name "plan_failed";
      }
    }
  }

  typedef side-effect-q-status-t {
    type enumeration {
      enum not-reached;
      enum reached;
      enum failed {
        tailf:code-name "plan_failed";
      }
      enum canceled {
        tailf:code-name "effect_canceled";
      }
    }
  }

  typedef plan-state-action-status-t {
    type enumeration {
      enum not-reached;
      enum create-reached;
      enum delete-reached;
      enum failed {
        tailf:code-name "plan_action_failed";
      }
      enum create-init;
      enum delete-init;
    }
  }

  identity plan-state {
    description
      "This is the base identity for plan states. A plan component in a
       plan goes through certain states, some, such as 'init' and
       'ready', are specified here, and the application augments these
       with app specific states.";
  }

  identity init {
    description
      "The init state in all plan state lists, primarily used as a
       placeholder with a time stamp.";
    base plan-state;
  }

  identity ready {
    description
      "The final state in a 'state list' in the plan.";
    base plan-state;
  }

  /*
   * Plan Notifications
   */

  augment "/ncs:services" {
    container plan-notifications {
      description
        "Configuration to send plan-state-change notifications for
         plan state transitions. A notification can be configured to
         be sent when a specified service's plan component enters a
         given state.

         The built in stream 'service-state-changes' is used to send
         these notifications.";
      list subscription {
        key name;
        description
          "A list of plan notification subscriptions.";

        leaf name {
          type string;
          description
            "A unique identifier for this subscription.";
        }
        leaf service-type {
          type tailf:node-instance-identifier;
          tailf:cli-completion-actionpoint "servicepoints-with-plan";
          description
            "The type of service. If not set, all service types are
             subscribed.";
        }
        leaf component-type {
          type plan-component-type-t;
          description
            "The type of component in the service's plan. If not set,
             all component types of the specified service types are
             subscribed.";
        }
        leaf state {
          type plan-state-name-t;
          description
            "The name of the state for the component in the service's plan.
             If not set, all states of the specified service types and
             plan components are subscribed.";
        }
        leaf operation {
          type plan-state-operation-t;
          description
            "The type of operation performed on the state(s) in the
             component(s). If not set, all operations are subscribed.";
        }
      }
    }
  }
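
  // Illustrative example (not part of the module): NSO CLI configuration
  // that subscribes to notifications sent when any service's 'self'
  // component reaches the 'ready' state:
  //
  //   admin@ncs(config)# services plan-notifications subscription ex1
  //   admin@ncs(config-subscription-ex1)# component-type ncs:self
  //   admin@ncs(config-subscription-ex1)# state ncs:ready
  //   admin@ncs(config-subscription-ex1)# commit
  //
  // The resulting plan-state-change notifications are delivered on the
  // built-in 'service-state-changes' stream.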

  notification plan-state-change {
    description
      "This notification indicates that the specified service's
       plan component has entered the given state.

       This notification is not sent unless the system has been
       configured to send the notification for the service type.";
    leaf service {
      type instance-identifier;
      mandatory true;
      description
        "A reference to the service whose plan has been changed.";
    }
    leaf component {
      type string;
      description
        "Refers to the name of a component in the service's plan;
         plan/component/name.";
    }
    leaf state {
      type plan-state-name-t;
      mandatory true;
      description
        "Refers to the name of the new state for the component in
         the service's plan;
         plan/component/state";
    }
    leaf operation {
      type plan-state-operation-t;
      description
        "The type of operation performed on the given state.";
    }
    leaf status {
      type plan-state-status-t;
      description
        "Refers to the status of the new state for the component in
         the service's plan;
         plan/component/state/status";
    }
    container commit-queue {
      presence "The service is being committed through the commit queue.";
      list queue-item {
        key id;
        max-elements 1;
        leaf id {
          type uint64;
          description
            "If the queue item in the commit queue refers to this service,
             this is the queue number.";
        }
        leaf tag {
          type string;
          description
            "Opaque tag set in the commit.";
        }
      }
    }
    leaf trace-id {
      type string;
      description
        "The trace id assigned to the commit that last changed
         the service instance.";
    }
  }
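
  // Illustrative example (not part of the module): a plan-state-change
  // notification as it may appear on the 'service-state-changes' stream,
  // for a hypothetical 'vlan' service instance:
  //
  //   <plan-state-change xmlns="http://tail-f.com/ns/ncs">
  //     <service xmlns:vlan="http://example.com/vlan">
  //       /vlan:vlan[vlan:name='net-0']
  //     </service>
  //     <component>self</component>
  //     <state>ready</state>
  //     <operation>modified</operation>
  //     <status>reached</status>
  //   </plan-state-change>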

  /*
   * Groupings
   */

  grouping plan-data {
    description
      "This grouping contains the plan data that can show the
       progress of a Reactive FASTMAP service. This grouping is optional
       and should only be used by services, i.e., lists or presence
       containers that use the ncs:servicepoint callback.";
    container plan {
      config false;
      tailf:cdb-oper {
        tailf:persistent true;
      }
      uses plan-components;
      container commit-queue {
        presence "The service is being committed through the commit queue.";
        list queue-item {
          key id;
          leaf id {
            type uint64;
            description
              "If the queue item in the commit queue refers to this service,
               this is the queue number.";
          }
        }
      }
      leaf failed {
        type empty;
        description
          "This leaf is present if any plan component in the plan is in
           a failed state; i.e., a state with status 'failed', or
           if the service failed to push its changes to the network.";
      }
      container error-info {
        presence "Additional info if plan has failed";
        leaf message {
          type string;
          description
            "An explanatory message for the failing plan.";
        }
        leaf log-entry {
          type instance-identifier {
            require-instance false;
          }
          description
            "Reference to a service log entry with additional information.";
        }
      }
    }
    container plan-history {
      config false;
      tailf:cdb-oper {
        tailf:persistent true;
      }
      list plan {
        key time;
        description
          "Every time the plan changes its structure, i.e., a
           plan component is added or deleted, or a state is added or
           deleted in a plan component, a copy of the old plan is stored
           in the plan history list.";

        leaf time {
          type yang:date-and-time;
          tailf:cli-value-display-template "$(.|datetime)";
        }
        uses plan-components;
      }
    }
  }

  grouping plan-components {
    description
      "This grouping contains a list of components that reflects the
       different steps or stages that a Reactive FASTMAP service comprises.";
    list component {
      ordered-by user;
      key name;
      description
        "A component has a type and a list of states.
         It is required that the first plan component is of type ncs:self.
         It is also required that the first state of a component is ncs:init
         and the last state is ncs:ready.
         A service can in addition to the 'self' component have any number of
         components. These additional components will have types that are
         defined by user specified YANG identities.";

      uses plan-component-body {
        refine "state/status" {
          mandatory true;
        }
      }
    }
  }

  grouping plan-component-body {
    leaf name {
      type string;
    }
    leaf type {
      description
        "The plan component type is defined by a YANG identity.
         It is used to identify the characteristics of a certain component.
         Therefore, if two components in the same service are of the same
         type, they should be identical with respect to the number, type,
         and order of their contained states.";

      type plan-component-type-t;
      mandatory true;
    }
    list state {
      description
        "A plan state represents a certain step or stage that a service needs
         to execute and/or reach. It is identified by a YANG identity.
         There are two predefined states, ncs:init and ncs:ready, which are
         the first and last state, respectively, of a plan component.";

      ordered-by user;
      key name;
      leaf name {
        tailf:alt-name state;
        type plan-state-name-t;
      }
      leaf status {
        description
          "A plan state is always in one of three statuses: 'not-reached' when
           the state has not been executed, 'reached' when the state has been
           executed, and 'failed' if the state execution failed.";

        type plan-state-status-t;
      }
      leaf when {
        type yang:date-and-time;
        tailf:cli-value-display-template "$(.|datetime)";
        when '../status != "not-reached"';
        description
          "The time this state was successfully reached or failed.";
      }
      leaf service-reference {
        description
          "If this component reflects the state of some other data, e.g.,
           an instantiated RFS, an instantiated CFS, or something else, this
           optional field can be set to point to that instance.";
        type instance-identifier {
          require-instance false;
        }
        tailf:display-column-name "ref";
      }
    }
  }

  /*
   * Nano-service related definitions
   */

  grouping force-back-track-action {
    tailf:action force-back-track {
      tailf:info "Force a component to back-track";
      description
        "Forces an existing component to start back-tracking";
      tailf:actionpoint ncsinternal {
        tailf:internal;
      }
      input {
        leaf back-tracking-goal {
          type leafref {
            path "../../state/name";
          }
          description
            "Target state for back-track.";
        }
        uses ncs-commit-params;
      }
      output {
        leaf result {
          type boolean;
          description
            "Set to true if the forced back tracking was successful,
             otherwise false.";
        }
        leaf info {
          type string;
          description
            "A message explaining why the forced back tracking wasn't
             successful.";
        }
      }
    }
  }
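
  // Illustrative example (not part of the module): invoking the
  // force-back-track action on a plan component from the NSO CLI, for a
  // hypothetical 'vlan' nano service instance:
  //
  //   admin@ncs# vlan net-0 plan component ncs:self self \
  //   force-back-track back-tracking-goal ncs:init
  //
  // The 'result' leaf in the action output indicates whether
  // back-tracking was initiated successfully.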

  grouping post-action-input-params {
    description
      "A nano service post-action can choose to implement this grouping
       as its input parameters. If so, the action will be invoked with:
         * opaque-props    - The list of name-value pairs in the service
                             opaque.
         * component-props - The list of component properties for
                             the invoking plan component state.

       Post-actions that do not implement this grouping as their input
       parameters will be invoked with an empty parameter list.";
    list opaque-props {
      key name;
      leaf name {
        type string;
      }
      leaf value {
        type string;
      }
    }
    list component-props {
      key name;
      leaf name {
        type string;
      }
      leaf value {
        type string;
      }
    }
  }

  grouping nano-plan-data {
    description
      "This grouping is required for nano services. It replaces the
       plan-data grouping. This grouping contains an executable plan
       that has additional state data which is internally used to
       control service execution.";
    uses nano-plan;
  }

  grouping nano-plan {
    container plan {
      config false;
      tailf:cdb-oper {
        tailf:persistent true;
      }
      uses nano-plan-components {
        augment "component" {
          uses force-back-track-action;
        }
      }
      container commit-queue {
        presence "The service is being committed through the commit queue.";
        list queue-item {
          key id;
          leaf id {
            type uint64;
            description
              "If the queue item in the commit queue refers to this service,
               this is the queue number.";
          }
        }
      }
      leaf failed {
        type empty;
        description
          "This leaf is present if any plan component in the plan is in
           a failed state; i.e., a state with status 'failed', or
           if the service failed to push its changes to the network.";
      }
      container error-info {
        presence "Additional info if plan has failed";
        leaf message {
          type string;
          description
            "An explanatory message for the failing plan.";
        }
        leaf log-entry {
          type instance-identifier {
            require-instance false;
          }
          description
            "Reference to a service log entry with additional information.";
        }
      }
      leaf deleting {
        tailf:hidden fastmap-private;
        type empty;
      }
      leaf service-location {
        tailf:hidden fastmap-private;
        type instance-identifier {
          require-instance false;
        }
      }
    }
  }

  grouping nano-plan-components {
    description
      "This grouping contains a list of components that reflects the
       different steps or stages that a nano service comprises.";
    list component {
      ordered-by user;
      key "type name";
      description
        "A component has a type and a list of states.  It is required
         that the first plan component is of type ncs:self.  It is
         also required that the first state of a component is ncs:init
         and the last state is ncs:ready.  A service can in addition
         to the 'self' component have any number of components. These
         additional components will have types that are defined by
         user specified YANG identities.";

      uses plan-component-body {
        augment "state" {
          leaf create-cb {
            tailf:hidden full;
            description
              "Indicate whether a create callback should be registered
               for this state.";
            type boolean;
          }

          leaf create-force-commit {
            tailf:hidden full;
            description
              "Indicate whether the current transaction should be committed
               before running any later states.";
            type boolean;
            default false;
          }

          leaf delete-cb {
            tailf:hidden full;
            description
              "Indicate whether a delete callback should be registered
               for this state.";
            type boolean;
          }

          leaf delete-force-commit {
            tailf:hidden full;
            description
              "Indicate whether the current transaction should be committed
               before running any later states.";
            type boolean;
            default false;
          }

          container pre-conditions {
            tailf:display-groups "summary";
            description
              "Pre-conditions for a state control whether or not the
               state should be executed. There are separate conditions
               for the 'create' and 'delete' cases. In the 'create'
               case, the create conditions are checked and, if
               fulfilled, the state is executed, with the ultimate goal
               of the state having status 'reached'. In the 'delete'
               case, the delete conditions control whether the state
               changes should be deleted, with the ultimate goal of the
               state having status 'not-reached'.";

            presence "Preconditions for executing the plan state";

            // Kept for backwards compatibility
            leaf create-trigger-expr {
              status obsolete;
              type yang:xpath1.0;
            }

            leaf create-monitor {
              status obsolete;
              type yang:xpath1.0;
            }

            leaf delete-trigger-expr {
              status obsolete;
              type yang:xpath1.0;
            }

            leaf delete-monitor {
              status obsolete;
              type yang:xpath1.0;
            }

            grouping pre-condition-grouping {
              leaf fun {
                type enumeration {
                  enum all {
                    tailf:code-name fun-all;
                  }
                  enum any {
                    tailf:code-name fun-any;
                  }
                }
              }
              list pre-condition {
                key id;
                leaf id {
                  type string;
                }
                leaf monitor {
                  type yang:xpath1.0;
                }
                leaf trigger-expr {
                  type yang:xpath1.0;
                }
              }
            }
            container create {
              presence "Create precondition exists";
              uses pre-condition-grouping;
            }
            container delete {
              presence "Delete precondition exists";
              uses pre-condition-grouping;
            }
          }
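
          // Illustrative example (not part of the module): in a nano
          // service plan outline, a create precondition of this form is
          // typically declared with the ncs:pre-condition extension:
          //
          //   ncs:state "ncs:ready" {
          //     ncs:create {
          //       ncs:pre-condition {
          //         ncs:monitor "$SERVICE/vm-up" {  // hypothetical leaf
          //           ncs:trigger-expr ". = 'true'";
          //         }
          //       }
          //     }
          //   }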

          container post-actions {
            tailf:display-groups "summary";

            description
              "Post-actions are called after successful execution of a
               state.  These are optional, and separate actions can be
               set for the 'create' and 'delete' cases, respectively.

               These actions are put as requests in the
               side-effect-queue and are executed asynchronously with
               respect to the original service transaction.";

            presence "Asynchronous side-effects after successful execution";
            leaf create-action-node {
              description
                "This leaf identifies the node on which a specified
                 action resides. This action is called after this state
                 has got a 'reached' status.";
              type yang:xpath1.0;
            }
            leaf create-action-name {
              description
                "The name of the action.";
              type string;
            }
            leaf create-action-result-expr {
              description
                "An action responds with a structured result. A certain
                 value could indicate an error or a successful result, e.g.
                 'result true'.

                 This statement describes an XPath expression to
                 evaluate the result of the action so that the
                 side-effect-queue can indicate action errors.

                 The result of the expression is converted to a boolean using
                 the standard XPath rules.  If the result is 'true' the action
                 is reported as successful, otherwise as failed.

                 The context for evaluating this expression is the
                 resulting xml tree of the action.

                 The set of namespace declarations are all available namespaces,
                 with the prefixes defined in the modules.";
              type yang:xpath1.0;
            }
            choice create-action-operation-mode {
              description
                "Specifies if the create post action should be run synchronously
                 or not.";
              leaf create-action-async {
                type empty;
              }
              leaf create-action-sync {
                type empty;
              }
              default create-action-async;
            }
            leaf delete-action-node {
              description
                "This leaf identifies the node on which a specified
                 action resides. This action is called after this state
                 has got a 'not-reached' status.";
              type yang:xpath1.0;
            }
            leaf delete-action-name {
              description
                "The name of the action.";
              type string;
            }
            leaf delete-action-result-expr {
              description
                "An action responds with a structured result. A certain
                 value could indicate an error or a successful result, e.g.
                 'result true'.

                 This statement describes an XPath expression to evaluate the
                 result of the action so that the side-effect-queue can
                 indicate action errors.

                 The result of the expression is converted to a boolean using
                 the standard XPath rules.  If the result is 'true' the action
                 is reported as successful, otherwise as failed.

                 The context for evaluating this expression is the
                 resulting xml tree of the action.

                 The set of namespace declarations are all available namespaces,
                 with the prefixes defined in the modules.";
              type yang:xpath1.0;
            }
            choice delete-action-operation-mode {
              description
                "Specifies if the delete post action should be run synchronously
                 or not.";
              leaf delete-action-async {
                type empty;
              }
              leaf delete-action-sync {
                type empty;
              }
              default delete-action-async;
            }
          }

          leaf post-action-status {
            when '../post-actions';
            type plan-state-action-status-t;
            description
              "This leaf is initially set to 'not-reached'.

               If a post-action was specified, and returned
               successfully, this leaf will be set to 'create-reached'
               if the component is not back-tracking, and
               'delete-reached' if it is back-tracking.

               If the post-action did not return successfully, this
               leaf is set to 'failed'.";
          }

          container modified {
            tailf:display-groups "summary";
            config false;
            tailf:callpoint ncs {
              tailf:internal;
            }
            description
              "Devices and other services this service has modified directly or
              indirectly (through another service).";
            tailf:info
              "Devices and other services this service modified directly or
               indirectly.";
            leaf-list devices {
              tailf:info
                "Devices this service modified directly or indirectly";
              type leafref {
                path "/ncs:devices/ncs:device/ncs:name";
              }
            }
            leaf-list services {
              tailf:info
                "Services this service modified directly or indirectly";
              type instance-identifier {
                require-instance false;
              }
            }
            leaf-list lsa-services {
              tailf:info
                "Services residing on remote LSA nodes this service
                has modified directly or indirectly.";
              type instance-identifier {
                require-instance false;
              }
            }
          }

          container directly-modified {
            tailf:display-groups "summary";
            config false;
            tailf:callpoint ncs {
              tailf:internal;
            }
            description
              "Devices and other services this service has explicitly
              modified.";
            tailf:info
              "Devices and other services this service has explicitly
              modified.";
            leaf-list devices {
              tailf:info
                "Devices this service has explicitly modified.";
              type leafref {
                path "/ncs:devices/ncs:device/ncs:name";
              }
            }
            leaf-list services {
              tailf:info
                "Services this service has explicitly modified.";
              type instance-identifier {
                require-instance false;
              }
            }
            leaf-list lsa-services {
              tailf:info
                "Services residing on remote LSA nodes this service
                has explicitly modified.";
              type instance-identifier {
                require-instance false;
              }
            }
          }

          uses service-get-modifications;

          container private {
            description
              "NCS service related internal data stored here.";
            tailf:hidden fastmap-private;
            ncs:ncs-service-private;
            leaf diff-set {
              description
                "Internal node used by the NCS service manager to remember
                 the reverse diff for a service instance. This is the
                 data that is used by FASTMAP.";
              tailf:hidden full;
              type binary;
            }
            leaf forward-diff-set {
              description
                "Internal node used by the NCS service manager to remember
                 the forward diff for a service instance. This data is
                 used to produce the proper 'get-modifications' output.";
              tailf:hidden full;
              type binary;
            }
            leaf-list device-list {
              description
                "A list of managed devices this state has manipulated.";
              tailf:hidden full;
              type string;
            }
            leaf-list ned-id-list {
              description
                "A list of NED identities this service instance has
                 manipulated.";
              tailf:hidden full;
              type string;
            }
            leaf-list service-list {
              description
                "A list of services this state has manipulated.";
              tailf:hidden full;
              type instance-identifier {
                require-instance false;
              }
            }
            leaf-list lsa-service-list {
              description
                "A list of LSA services this service instance has manipulated.";
              tailf:hidden full;
              type yang:xpath1.0;
            }
          }
        }
      }
      container private {
        description
          "NCS service related internal data stored here.";
        tailf:hidden fastmap-private;

        container property-list {
          description
            "FASTMAP service component instance data used by the
             service implementation.";
          list property {
            key name;
            leaf name {
              type string;
            }
            leaf value {
              type string;
            }
          }
        }
      }
      leaf back-track {
        type boolean;
        default false;
      }
      leaf back-track-goal {
        tailf:alt-name goal;
        type plan-state-name-t;
      }
      leaf version {
        tailf:hidden full;
        type uint32;
      }
    }
  }

  grouping nano-plan-history {
    container plan-history {
      config false;
      tailf:cdb-oper {
        tailf:persistent true;
      }
      list plan {
        key time;
        description
          "Every time the plan changes its structure, i.e., a
           plan component is added or deleted, or a state is added or
           deleted in a plan component, a copy of the old plan is stored
           in the plan history list.";

        leaf time {
          type yang:date-and-time;
          tailf:cli-value-display-template "$(.|datetime)";
        }
        uses nano-plan-components;
      }
    }
  }

  /*
   * Internal structures
   */

  container side-effect-queue {
    list side-effect {
      config false;
      tailf:cdb-oper {
        tailf:persistent true;
      }

      key id;
      leaf id {
        description
          "Unique identification of the side-effect action.";
        type string;
      }
      leaf created {
        type yang:date-and-time;
      }
      leaf invoked {
        type yang:date-and-time;
      }
      leaf service {
        description
          "The service that added the side effect.";
        type string;
      }
      leaf requestor {
        description
          "Path to the requestor of the side-effect.
           Typically a plan state for a service.";
        type string;
      }
      leaf requestor-op {
        description
          "The base operation for the requestor when issuing the side-effect.";
        type enumeration {
          enum create {
            tailf:code-name op_create;
          }
          enum delete {
            tailf:code-name op_delete;
          }
        }
      }
      leaf action-node {
        description
          "This leaf identifies the node on which a specified
           action resides.";
        type yang:xpath1.0;
      }
      leaf action-name {
        description
          "The name of the action.";
        type yang:yang-identifier;
      }
      list variable {
        key name;
        description
          "A list of variable bindings that will be part of the
           context when the action-node path expression is evaluated.";
        leaf name {
          type string;
          description
            "The name of the variable";
        }
        leaf value {
          type yang:xpath1.0;
          mandatory true;
          description
            "An XPath expression that will be the value of the variable
             'name'. Note that both expressions and path expressions are
             allowed, which implies that literals must be quoted.";
        }
      }
      leaf result-expr {
        description
          "An action responds with a structured result. A certain
           value could indicate an error or a successful result, e.g.
           'result true'.

           This statement describes an XPath expression to evaluate the
           result of the action so that the side-effect-queue can
           indicate action errors.

           The result of the expression is converted to a boolean using
           the standard XPath rules.  If the result is 'true' the action
           is reported as successful, otherwise as failed.

           The context for evaluating this expression is the
           resulting xml tree of the action.

           There are no variable bindings in this evaluation.
           The set of namespace declarations are all available namespaces,
           with the prefixes defined in the modules.";
        type yang:xpath1.0;
      }
      leaf status {
        description
          "Resulting status to be set as the request's post-action-status.";
        type side-effect-q-status-t;
      }
      leaf error-message {
        description
          "An additional error message for the action, if applicable,
           i.e., when an error is thrown.";
        type string;
      }
      leaf u-info {
        tailf:hidden full;
        type binary;
      }
      leaf sync {
        type boolean;
      }
    }

    container settings {
      description
        "Settings related to the side effect queue.";
      container automatic-purge {
        description
          "Settings for the automatic purging of side effects.";
        container failed-queue-time {
          description
            "The time failed side effects should be kept in the queue.";
          choice failed-queue-time-choice {
            leaf forever {
              description
                "Failed side effects should be kept forever.";
              type empty;
            }
            leaf seconds {
              type uint16;
            }
            leaf minutes {
              type uint16;
            }
            leaf hours {
              type uint16;
            }
            leaf days {
              type uint16;
              default 7;
            }
            default days;
          }
        }
      }
    }

    tailf:action invoke {
      tailf:info "Invoke queued side-effects asynchronously";
      description
        "Invokes all side-effects in the side-effect queue that are not
         already executing or executed.";
      tailf:actionpoint ncsinternal {
        tailf:internal;
      }
      input {
      }
      output {
        leaf num-invoked {
          type uint32;
        }
      }
    }

    tailf:action purge {
      tailf:info "Purge all failed side effects";
      description
        "Purge all failed side effects.";
      tailf:actionpoint ncsinternal {
        tailf:internal;
      }
      input {
      }
      output {
        leaf purged-side-effects {
          type uint16;
        }
      }
    }
  }

  container zombies {
    config false;
    tailf:cdb-oper {
      tailf:persistent true;
    }
    description
      "Container for deleted Nano Services that still perform staged deletes.";

    list service {
      key service-path;
      leaf service-path {
        description
          "The path to where the service resided that has been deleted
           and become a zombie.";
        type string;
      }
      leaf delete-path {
        description
          "The path to the node nearest to the top that was deleted and resulted
           in this service becoming a zombie.";
        type string;
      }
      leaf pending-delete {
        type empty;
      }

      leaf diffset {
        tailf:hidden full;
        type binary;
      }

      leaf latest-commit-params {
        tailf:hidden full;
        type binary;
        description
          "The latest transaction's commit parameters are stored here; they
           are used in reactive-re-deploy actions, which must have the same
           parameters as the original service commit.";
      }
      leaf latest-u-info {
        tailf:hidden full;
        type binary;
        description
          "The latest transaction's user info is stored here; it is
           used in reactive-re-deploy actions, which must be performed by
           a user with the same user info.";
      }

      container private {
        leaf-list device-list {
          description
            "A list of managed devices this state has manipulated.";
          tailf:hidden full;
          type string;
        }
      }

      container plan {
        uses nano-plan-components {
          augment "component" {
            uses force-back-track-action;
          }
        }
        container commit-queue {
          presence "The service is being committed through the commit queue.";
          list queue-item {
            key id;
            leaf id {
              type uint64;
              description
                "If the queue item in the commit queue refers to this service
                 this is the queue number.";
            }
          }
        }

        leaf failed {
          tailf:code-name "failedx";
          type empty;
        }
        container error-info {
          presence "Additional info if plan has failed";
          leaf message {
            type string;
            description
              "An explanatory message for the failing plan.";
          }
          leaf log-entry {
            type instance-identifier {
              require-instance false;
            }
            description
              "Reference to a service log entry with additional information.";
          }
        }
        leaf deleting {
          tailf:hidden fastmap-private;
          type empty;
        }
      }

      tailf:action re-deploy {
        tailf:info "Revive the zombie and re-deploy it.";
        description
          "The nano service became a zombie since it was deleted before
           all delete pre-conditions were fulfilled. This action revives the
           zombie service, re-deploys it, and stores it back as a zombie if
           necessary. This is performed as the user who requested the
           action.";
        tailf:actionpoint ncsinternal {
          tailf:internal;
        }
        input {
          uses ncs-commit-params;
        }
        output {
          uses ncs-commit-result;
        }
      }
      tailf:action reactive-re-deploy {
        tailf:info "Revive the zombie and reactive-re-deploy it.";
        description
          "The nano service became a zombie since it was deleted before
           all delete pre-conditions were fulfilled. This action revives the
           zombie service, re-deploys it, and stores it back as a zombie if
           necessary. This is performed as the same user as the original
           commit.

           By default this action is asynchronous and returns nothing.";

        tailf:actionpoint ncsinternal {
          tailf:internal;
        }
        input {
          leaf sync {
            description
              "By default the action is asynchronous, i.e. it does not wait for
               the service to be re-deployed. Use this leaf to get synchronous
               behaviour and block until the service re-deploy transaction is
               committed. It also means that the action will possibly return
               a commit result, such as commit queue id if any, or an
               error if the transaction failed.";
            type empty;
          }
        }
        output {
          uses ncs-commit-result;
        }
      }
      tailf:action resurrect {
        tailf:info "Load the zombie back as service in current state.";
        description
          "The zombie resurrection is used to stop the progress of a staged
           nano service delete and restore the current state as is.";
        tailf:actionpoint ncsinternal {
          tailf:internal;
        }
        input {
          container dry-run {
            presence "";
            leaf outformat {
              type outformat3;
            }
          }
        }
        output {
          leaf result {
            type string;
          }
          choice outformat {
            case case-xml {
              uses dry-run-xml;
            }
            case case-cli {
              uses dry-run-cli;
            }
            case case-native {
              uses dry-run-native;
            }
          }
        }
      }
      uses log-data;

      uses service-commit-queue {
        status obsolete;
        augment "commit-queue/queue-item/failed-device" {
          tailf:action load-device-config {
            tailf:display-when "../config-data != ''";
            tailf:info "Load device configuration into an open transaction.";
            description
              "Load device configuration into an open transaction.";
            tailf:actionpoint ncsinternal {
              tailf:internal;
            }
            input {
            }
            output {
              leaf result {
                type string;
              }
            }
          }
        }
      }
    }
  }

  /*
   * Plan Extension Statements
   */

  extension plan-outline {
    argument id {
      tailf:arg-type {
        type tailf:identifier;
      }
    }
    tailf:occurence "*";
    tailf:use-in "module";
    tailf:use-in "submodule";
    tailf:substatement "description";
    tailf:substatement "ncs:self-as-service-status" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:component-type" {
      tailf:occurence "+";
    }
    description
      "This statement is optionally used in a node that defines a
       service to document its plan.  It is required for a nano
       service.

       A plan is outlined by listing all component-types that the
       service can instantiate, and their related states.  Note that
       a specific service instance may instantiate zero, one, or more
       components of a certain type.

       It is required that a plan has one component of type ncs:self.";
  }

  extension self-as-service-status {
    description
      "If this statement has been set on a plan outline, the status of the
       self component's init and ready states will reflect the overall
       status of the service.

       The self component's ready state will not be set to reached until all
       other components' ready states have been set to reached and all post
       actions have run successfully. Likewise, when deleting a service,
       the init state will not be set to not-reached (and the service deleted)
       until all other components' init states have had their status
       set to not-reached and any post actions have run successfully.

       If any state in a component other than the self component, or any post
       action, has failed, the ready/init state of the self component will
       also be set to failed to reflect that the service has failed.";
  }

  extension component-type {
    argument name {
      tailf:arg-type {
        type tailf:identifier-ref;
      }
    }
    tailf:substatement "description";
    tailf:substatement "ncs:state" {
      tailf:occurence "*";
    }
    description
      "This statement identifies the component type, which is a
       reference to a YANG identity.

       A component-type contains an ordered list of states, which
       also are references to YANG identities.  It is required that the
       first state in a component-type is ncs:init and the last state
       is ncs:ready.

       Each state represents a unit of work performed by the
       service when a certain pre-condition is satisfied.";
  }

  extension state {
    argument name {
      tailf:arg-type {
        type tailf:identifier-ref;
      }
    }
    tailf:substatement "description";
    tailf:substatement "ncs:create" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:delete" {
      tailf:occurence "?";
    }

    description
      "This statement identifies the state, which is a reference to a
       YANG identity.

       It represents a unit of work performed by the service when a
       certain pre-condition is satisfied.";
  }

  extension create {
    tailf:substatement "description";
    tailf:substatement "ncs:nano-callback" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:pre-condition" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:post-action-node" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:force-commit" {
      tailf:occurence "?";
    }

    description
      "This statement defines nano service state characteristics for
       entering this state.

       The component will advance to this state when it is not back
       tracking, it has reached its previous state, and the
       'pre-condition' is met.

       If the 'nano-callback' statement is defined, it means that
       there is a callback function (or template) that will be invoked
       before this state is entered.

       The 'post-action-node' optionally defines an action to be
       invoked when this state has been entered.";
  }

  extension delete {
    tailf:substatement "description";
    tailf:substatement "ncs:nano-callback" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:pre-condition" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:post-action-node" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:force-commit" {
      tailf:occurence "?";
    }

    description
      "This statement defines nano service state characteristics for
       leaving this state.

       The component will advance to this state when it is back
       tracking, it has reached its following state, and the
       'pre-condition' is met.

       If the 'nano-callback' statement is defined, it means that
       there is a callback function (or template) that will be invoked
       before this state is left.

       The 'post-action-node' optionally defines an action to be
       invoked when this state has been left.";
  }

  extension nano-callback {
    description
      "This statement indicates that a callback function (or a
       template) is defined for this state and operation.";
  }

  extension post-action-node {
    argument xpath {
      tailf:arg-type {
        type plan-xpath;
      }
    }
    tailf:substatement "description";
    tailf:substatement "ncs:action-name" {
      tailf:occurence "1";
    }
    tailf:substatement "ncs:result-expr" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:sync" {
      tailf:occurence "?";
    }

    description
      "This statement defines an action side-effect to be executed
       after the state has successfully been executed.

       This statement's argument is the node where the action resides.

       This action is executed asynchronously with respect to the initial
       service transaction. The result is manifested as a value in
       the requesting plan state's post-action-status leaf.

       The XPath expression is evaluated in the context described for
       'plan-xpath'.";
  }

  extension action-name {
    argument name {
      tailf:arg-type {
        type string;
      }
    }
    tailf:substatement "description";
    description
      "The name of the action.";
  }

  extension result-expr {
    argument xpath {
      tailf:arg-type {
        type yang:xpath1.0;
      }
    }
    tailf:substatement "description";
    description
      "An action responds with a structured result.  A certain value
       can indicate an error or a successful result, e.g.,
       'result true'.

       The result of the expression is converted to a boolean using
       the standard XPath rules.  If the result is 'true' the action
       is reported as successful, otherwise as failed.

       The context for evaluating this expression is the
       resulting xml tree of the action.

       There are no variable bindings in this evaluation.
       The set of namespace declarations are all available namespaces,
       with the prefixes defined in the modules.";
  }

  extension sync {
    description
      "Run the action synchronously so that later states cannot proceed before
       the post action has finished running successfully.";
  }

  extension force-commit {
    description
      "Force a commit before any later states can proceed.";
  }

  /*
   * Behavior tree extensions for nano services
   */

  extension service-behavior-tree {
    argument servicepoint {
      tailf:arg-type {
        type tailf:identifier;
      }
    }
    tailf:occurence "*";
    tailf:use-in "module";
    tailf:use-in "submodule";
    tailf:substatement "description";
    tailf:substatement "ncs:plan-outline-ref" {
      tailf:occurence "1";
    }
    tailf:substatement "ncs:plan-location" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:selector" {
      tailf:occurence "*";
    }
    tailf:substatement "ncs:multiplier" {
      tailf:occurence "*";
    }
    tailf:substatement "ncs:converge-on-re-deploy" {
      tailf:occurence "?";
    }
    description
      "This statement is used to define the behavior tree for a nano
       service.

       The argument to this statement is the name of the service point
       for the nano service.

       The behavior tree consists of control flow nodes and execution
       nodes.

       There are two types of control flow nodes, defined with the
       'ncs:selector' and 'ncs:multiplier' statements.

       There is one type of execution node, defined with the
       'ncs:create-component' statement.

       A behavior tree is evaluated by evaluating all top control flow
       nodes, in order.  When a control flow node is evaluated, it
       checks if it should evaluate its children.  How this is done
       depends on the type of control flow node.  When an execution
       node is reached, the resulting component-type is added as a
       component to the plan and given a component-name.

       This process of dynamically instantiating a plan with its
       components by evaluation of the behavior tree is called
       synthesizing the plan.";
  }

  extension plan-outline-ref {
    argument id {
      tailf:arg-type {
        type tailf:identifier-ref;
      }
    }
    description
      "The name of the plan outline that the behavior tree will use
       to synthesize a service instance's plan.";
  }

  extension plan-location {
    argument path {
      tailf:arg-type {
        type yang:xpath1.0;
      }
    }

    description
      "An XPath expression, starting with an absolute or relative path,
       pointing to a list or container where the plan is stored. Use this
       only if the plan is stored outside the service.

       The XPath expression is evaluated using the nano service as the context
       node, and the expression must return a node set.

       If the target lies within lists, all keys must be specified. A key
       either has a value or a reference to a key of the service, using the
       function current() as the starting point for an XPath location path.
       For example:

         /a/b[k1='paul'][k2=current()/k]/c";
  }

  /* Control flow nodes */

  extension selector {
    tailf:substatement "description";
    tailf:substatement "ncs:pre-condition" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:observe" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:variable" {
      tailf:occurence "*";
    }
    tailf:substatement "ncs:selector" {
      tailf:occurence "*";
    }
    tailf:substatement "ncs:multiplier" {
      tailf:occurence "*";
    }
    tailf:substatement "ncs:create-component" {
      tailf:occurence "*";
    }
    description
      "This control flow node synthesizes its children
       that have their pre-conditions met.

       All 'ncs:variable' statements in this statement will have their
       XPath context node set to each node in the resulting node set.";
  }

  extension multiplier {
    tailf:substatement "description";
    tailf:substatement "ncs:pre-condition" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:observe" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:foreach" {
      tailf:occurence "1";
    }
    description
      "This control flow node synthesizes zero or more copies of
       its children.

       When this node is evaluated, it evaluates the 'foreach'
       expression.  For each node in the resulting node set, it
       synthesizes all children that have their pre-conditions
       met.";
  }

  extension foreach {
    argument xpath {
      tailf:arg-type {
        type plan-xpath;
      }
    }
    tailf:substatement "description";
    tailf:substatement "ncs:when" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:variable" {
      tailf:occurence "*";
    }
    tailf:substatement "ncs:selector" {
      tailf:occurence "*";
    }
    tailf:substatement "ncs:multiplier" {
      tailf:occurence "*";
    }
    tailf:substatement "ncs:create-component" {
      tailf:occurence "*";
    }

    description
      "This statement's argument is an XPath expression for the node set
       that is the basis for a multiplier selection.  For each node in
       the resulting node set the children will be evaluated.

       The XPath expression is evaluated in the context described for
       'plan-xpath'.";
  }

  extension when {
    argument xpath {
      tailf:arg-type {
        type plan-xpath;
      }
    }
    tailf:substatement "description";

    description
      "This optional statement describes an XPath expression that is
       used to further filter the selection of nodes from the
       node set in a multiplier component or the variables that should
       be created for a component.

       The result of the expression is converted to a boolean using
       the standard XPath rules.  If the result is 'true' the node is
       added to the node set or the variable is added to the variable
       list.

       The XPath expression is evaluated in the context described for
       'plan-xpath'.";
  }

  /* Execution nodes */

  extension create-component {
    argument name {
      tailf:arg-type {
        type plan-xpath;
      }
    }
    tailf:substatement "description";
    tailf:substatement "ncs:component-type-ref" {
      tailf:occurence "1";
    }
    tailf:substatement "ncs:pre-condition" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:observe" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:deprecates-component" {
      tailf:occurence "*";
    }

    description
      "When this execution node is evaluated, it instantiates a component
       in the service's plan.

       The name of the component is the result of evaluating the XPath
       expression and converting the result to a string.

       The XPath expression is evaluated in the context described for
       'plan-xpath'.";
  }

  extension component-type-ref {
    argument name {
      tailf:arg-type {
        type tailf:identifier-ref;
      }
    }
    description
      "This statement identifies the component type for the component.
       It must refer to a component-type defined in the plan-outline
       for the service.";
  }

  /* Common substatements */

  extension pre-condition {
    tailf:substatement "description";
    tailf:substatement "ncs:monitor" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:all" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:any" {
      tailf:occurence "?";
    }
    description
      "This statement defines a pre-condition that must hold for
       further evaluation/execution to proceed.

       If the pre-condition is not satisfied a kicker will be created
       with the same monitor to observe the changes and then
       re-deploy the service.";
  }

  extension observe {
    tailf:substatement "description";
    tailf:substatement "ncs:monitor" {
      tailf:occurence "1";
    }
    description
      "If a control flow node has been successfully evaluated, this
       statement's 'monitor' will be installed as a kicker, which will
       re-deploy the service if the monitor's trigger conditions are met.";
  }

  extension monitor {
    argument node {
      tailf:arg-type {
        type plan-xpath;
      }
    }
    tailf:substatement "description";
    tailf:substatement "ncs:trigger-on-delete" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:trigger-expr" {
      tailf:occurence "?";
    }
    description
      "If a node matches the value of this statement and the
       'trigger-expr' expression evaluates to true, this condition is
       satisfied. If the child statement 'trigger-on-delete' is used,
       this condition is satisfied when no node matches the
       value of this statement. Note that only one of 'trigger-expr' and
       'trigger-on-delete' can be used as a child statement of this
       statement.

       The argument to this statement is like an instance-identifier,
       but a list may be specified without any keys.  This is treated
       like a wildcard that matches all entries in the list.

       The XPath expression is evaluated in the context described for
       'plan-xpath'.";
  }

  extension trigger-on-delete {
    description
      "Specifies that the monitored node should be checked for deletion.";
  }

  extension trigger-expr {
    argument xpath {
      tailf:arg-type {
        type plan-xpath;
      }
    }
    tailf:substatement "description";

    description
      "This optional statement is used to further filter nodes
       in a given nodeset.

       The result of the expression is converted to a boolean using
       the standard XPath rules.  If the result is 'true' the condition
       is satisfied, otherwise it is not satisfied.

       The XPath expression is evaluated in the context described for
       'plan-xpath'.";
  }


  extension variable {
    argument name {
      tailf:arg-type {
        type string;
      }
    }

    tailf:substatement "description";
    tailf:substatement "ncs:when" {
      tailf:occurence "?";
    }
    tailf:substatement "ncs:value-expr" {
      tailf:occurence "?";
    }

    description
      "This statement defines an XPath variable with a name and a
       value.  The value is evaluated as an XPath expression.

       A variable called FOO can thus be retrieved as '{$FOO}'.

       These variables can for example be used in a 'multiplier'
       control flow node to create unique names of duplicated
       components.  The child components can be given names like
       'comp_{$FOO}', and when that expression is evaluated,
       the resulting component will have a name with {$FOO}
       substituted with the value of the variable 'FOO'.";
  }

  extension value-expr {
    argument xpath {
      tailf:arg-type {
        type plan-xpath;
      }
    }
    tailf:substatement "description";

    description
      "This statement defines an XPath expression that when evaluated
       constitutes a value for a variable.

       The XPath expression is evaluated in the context described for
       'plan-xpath'.";
  }

  extension deprecates-component {
    argument name {
      tailf:arg-type {
        type plan-xpath;
      }
    }

    tailf:substatement "ncs:component-type-ref" {
      tailf:occurence "?";
    }

    description
      "Indicates that the component deprecates another, deleted component and
       that it will produce the same configuration as the old component. Before
       running this component, the reverse diffsets for all the old component's
       states will be applied, and the old component will be deleted.

       If the old component is still present after synthesizing the behaviour
       tree, this statement will be ignored.";
  }

  extension any {
    tailf:substatement "ncs:monitor" {
      tailf:occurence "*";
    }

    description
      "This extension is used inside of pre-condition extensions
       to allow multiple monitors inside of a single pre-condition.
       A pre-condition using this extension is satisfied if at least
       one of the monitors given as argument evaluates to true.

       This extension uses short-circuit evaluation, i.e., if one of the
       monitors given as argument evaluates to true the evaluation
       will stop.";
  }

  extension all {
    tailf:substatement "ncs:monitor" {
      tailf:occurence "*";
    }

    description
      "This extension is used inside of pre-condition extensions
       to allow multiple monitors inside of a single pre-condition.
       A pre-condition using this extension is satisfied if all
       of the monitors given as argument evaluate to true.

       This extension uses short-circuit evaluation, i.e., if one of
       the monitors given as argument evaluates to false the evaluation
       will stop.";
  }

  extension converge-on-re-deploy {
    status deprecated;
    description
      "Do not converge a service in the transaction in which it is created.
       On service creation the service will only synthesize the plan and
       schedule a reactive-re-deploy of itself.

       By default a service starts converging in the transaction in which
       it is created, but in certain scenarios this might not be the desired
       behaviour. E.g. when executing a service through the commit queue with
       error recovery set to rollback on error, this will ensure that the
       service intent is still present even when there are errors in the
       commit queue.

       Note: In a future release this behaviour will be the default and
             this setting will be removed, hence the deprecated status.";
  }
}

The ncs:plan-data grouping is defined as operational data that is intended to be added to the Reactive FASTMAP service YANG model with a uses ncs:plan-data statement.
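As a minimal sketch, a service model could pull in the grouping next to its service point. The module, namespace, and servicepoint names here are illustrative, not part of the NSO distribution:

```yang
module myvpn {
  namespace "http://example.com/myvpn";
  prefix myvpn;

  import tailf-ncs { prefix ncs; }

  list myvpn {
    key name;
    leaf name { type string; }

    uses ncs:service-data;
    ncs:servicepoint myvpn-servicepoint;

    // Adds the operational plan and plan-history containers to
    // every service instance.
    uses ncs:plan-data;
  }
}
```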

A plan consists of one or more component entries. Each component has a name and a type. The type is an identityref, and the service must therefore define identities for the types of components it uses. There is one predefined component type named self, and a service with a plan is expected to have at least the self component defined.

Each component consists of two or more state entries, where the state name is an identityref. The service must define identities for the states it wants to use. There are two predefined states, init and ready, and each plan component is expected to have init as its first state and ready as its last.
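For example, a service package could define its component types and extra states as identities derived from the base identities in tailf-ncs. This is a sketch: the cpe-component and vpn-configured names are hypothetical:

```yang
import tailf-ncs {
  prefix ncs;
}

// Hypothetical component type, referenced by plan components.
identity cpe-component {
  base ncs:plan-component-type;
}

// Hypothetical intermediate state, used between ncs:init and ncs:ready.
identity vpn-configured {
  base ncs:plan-state;
}
```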

A state has a status leaf, which can take one of the values not-reached, reached, or failed.

The purpose of the self component is to show the overall progress of the Reactive FASTMAP service, and the self component's ready state should have status reached if and only if the service has completed successfully. All other components and states are optional and should be used to show the progress in more detail where necessary.

The plan should be defined, and the statuses written, inside the service create() method. Hence the same FASTMAP logic applies to the plan as to any other configuration data. This implies that the plan has to be defined completely in create(), as if this was the first definition. If a service modification or reactive-re-deploy leaves out a state or component that was defined earlier, that state or component will be removed.

When the status leaf in a component state changes value, NSO will log the time of the status change in the when leaf. Furthermore, when there are structural changes to the plan, i.e., added or removed components or states, NSO will log this in the plan-history list. The Reactive FASTMAP service need not, and should not, attempt to do this logging inside the create() method.

A plan also defines an empty leaf failed. NSO will set this leaf when there exist states in the plan with status failed. As such, this is an aggregation that makes it easy to verify whether an RFM service is progressing without problems.
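The aggregation semantics can be illustrated with a small, self-contained sketch. This is plain Java modeling the rules above, not the NSO implementation; the map structure standing in for the plan is purely illustrative:

```java
import java.util.Map;

// Toy model of the plan status rules described above.
// A plan maps component name -> (state name -> status).
class PlanStatus {
    enum Status { NOT_REACHED, REACHED, FAILED }

    // The plan-level "failed" leaf is set when any state in any
    // component has status failed.
    static boolean planFailed(Map<String, Map<String, Status>> plan) {
        return plan.values().stream()
                .flatMap(states -> states.values().stream())
                .anyMatch(s -> s == Status.FAILED);
    }

    // The service has completed successfully when the ready state of
    // the self component has status reached.
    static boolean serviceDone(Map<String, Map<String, Status>> plan) {
        return plan.getOrDefault("self", Map.of())
                .getOrDefault("ready", Status.NOT_REACHED) == Status.REACHED;
    }
}
```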

In the Java API there exists a utility class to help write plan data in the service create() method. This class is called PlanComponent and has the following methods:

public class PlanComponent {

    /**
     * Creation of a plan component.
     * It uses a NavuNode pointing to the service. This is normally the same
     * NavuNode as supplied as an argument to the service create() method.
     *
     * @param service
     * @param name
     * @param componentType
     * @throws NavuException
     */
    public PlanComponent(NavuNode service,
                         String name,
                         String componentType) throws NavuException;

    /**
     * This method appends a state to the specific component.
     * The initial status for this state is ncs:not-reached; use the
     * setReached(), setNotReached(), or setFailed() methods to change it.
     *
     * @param stateName
     * @return
     * @throws NavuException
     */
    public PlanComponent appendState(String stateName) throws NavuException;

    /**
     * Setting status to ncs:not-reached for a specific state in the
     * plan component
     *
     * @param stateName
     * @return
     * @throws NavuException
     */
    public PlanComponent setNotReached(String stateName) throws NavuException;

    /**
     * Setting status to ncs:reached for a specific state in the plan component
     *
     * @param stateName
     * @return
     * @throws NavuException
     */
    public PlanComponent setReached(String stateName) throws NavuException;

    /**
     * Setting status to ncs:failed for a specific state in the plan component
     *
     * @param stateName
     * @return
     * @throws NavuException
     */
    public PlanComponent setFailed(String stateName) throws NavuException;

}

The constructor for the PlanComponent takes the service NavuNode from the create() method together with the component name and type. The type is either ncs:self or any other type defined as an identity in the service YANG module. The PlanComponent instance has an appendState() method to add new states, which can be ncs:init, ncs:ready, or any other state defined as an identity in the service YANG module. The setNotReached(), setReached(), and setFailed() methods are used to set the current status of a given state.

Example: use of plan-data in the virtual-mpls-vpn

The following shows the use of plan-data in the examples.ncs/service-provider/virtual-mpls-vpn example. The objective in this example is to create and maintain a plan that has one main self component together with one component for each endpoint in the service. The endpoints can make use of either physical or virtual devices. If an endpoint uses a virtual device, the corresponding plan component will contain additional states to reflect the staged setup of the virtual device.

In the service YANG file named l3vpn.yang we define the identity for the endpoint component type and the service-specific states for the different components:

            ....

  identity l3vpn {
    base ncs:component-type;
  }

  identity pe-created {
    base ncs:plan-state;
  }
  identity ce-vpe-topo-added {
    base ncs:plan-state;
  }
  identity vpe-p0-topo-added {
    base ncs:plan-state;
  }
  identity qos-configured {
    base ncs:plan-state;
  }

  container vpn {
    list l3vpn {
      description "Layer3 VPN";

      key name;
      leaf name {
        tailf:info "Unique service id";
        tailf:cli-allow-range;
        type string;
      }

      uses ncs:plan-data;
      uses ncs:service-data;
      ncs:servicepoint l3vpn-servicepoint;

            ....

In the service list definition the plan data is introduced using the uses ncs:plan-data statement.

In the service create() method we introduce a Java Properties instance where we temporarily store data for the Reactive FASTMAP steps that are currently completed. We create a private method writePlanData() that writes the plan with this Properties instance as input. Before we return from the create() method we call writePlanData(). The following code snippets from the class l3vpnRFS.java illustrate this design:

Initially we create a Properties instance called rfmProgress:

  @ServiceCallback(servicePoint = "l3vpn-servicepoint",
                   callType = ServiceCBType.CREATE)
  public Properties create(ServiceContext context,
                           NavuNode service,
                           NavuNode ncsRoot, Properties opaque)
    throws ConfException
  {
    WebUILogger.log(LOGGER, "***** Create/Reactive-re-deploy ************");

    Properties rfmProgress = new Properties();

For each Reactive FASTMAP step that we reach, we store some relevant data in the rfmProgress Properties instance:

    StringBuffer stb = new StringBuffer();
    for (NavuContainer endpoint : endpoints.elements()) {
       if (stb.length() > 0) {
           stb.append(",");
       }
       stb.append(endpoint.leaf(l3vpn._id).valueAsString());
    }
    rfmProgress.setProperty("endpoints", stb.toString());

    String tenant = service.leaf(l3vpn._name).valueAsString();
    String deploymentName = "vpn";

    String virtualPEName =
            Helper.makeDevName(tenant, deploymentName, "CSR", "esc0");

    for (NavuContainer endpoint : endpoints.elements()) {
      try {
        String endpointId = endpoint.leaf(l3vpn._id).valueAsString();

        String ceName = endpoint.leaf(l3vpn._ce_device).valueAsString();

        if (CEonVPE.contains(ceName)) {
          rfmProgress.setProperty(endpointId + ".ONVPE", "true");


          if (!createVirtualPE(context, service, ncsRoot, ceName, tenant,
                               deploymentName)) {
            // We cannot continue with this CE until redeploy
            continue;
          }
          rfmProgress.setProperty(endpointId + ".pe-created", "DONE");

          LOGGER.info("device ready, continue: " + virtualPEName);

          addToTopologyRole(ncsRoot, virtualPEName, "pe");

          // Add CE-VPE topology, reuse old topology connection if available
          NavuContainer conn = getConnection(
            topology, endpoint.leaf(l3vpn._ce_device).valueAsString(), "pe");
          if (!addToTopology(service, ncsRoot, conn,
                ceName, "GigabitEthernet0/8",
                virtualPEName, "GigabitEthernet2",
                ceName, Settings.ipPoolPE_CE)) {
            // We cannot continue with this CE until redeploy
            continue;
          }
          rfmProgress.setProperty(endpointId + ".ce-vpe-topo-added", "DONE");

Before we return from the create() method we call the writePlanData() method passing in the rfmProgress instance:

    writePlanData(service, rfmProgress);

    return opaque;
  }

The writePlanData() method first creates all components and sets the default values for all statuses. Then we read the rfmProgress instance and change the states for all the Reactive FASTMAP steps that we have reached. At the end we check whether the self component's ready state has been reached. The reason for initially writing the complete plan with default values for the statuses is to not miss a component that has not made any progress yet. Remember, this is FASTMAP: components and states that were written in an earlier reactive-re-deploy but are not written now will be deleted by NSO. The writePlanData() method has the following design:

  private void writePlanData(NavuNode service, Properties rfmProgress)
      throws NavuException {
      try {

          PlanComponent self = new PlanComponent(service, "self", "ncs:self");
          // Initial plan
          self.appendState("ncs:init").
               appendState("ncs:ready");
          self.setReached("ncs:init");

          String eps = rfmProgress.getProperty("endpoints");
          String ep[] = eps.split(",");

          boolean ready = true;
          for (String p : ep) {
              boolean onvpe = false;
              if (rfmProgress.containsKey(p + ".ONVPE")) {
                  onvpe = true;
              }
              PlanComponent pcomp = new PlanComponent(service,
                                                      "endpoint-" + p,
                                                      "l3vpn:l3vpn");
              // Initial plan
              pcomp.appendState("ncs:init");
              pcomp.setReached("ncs:init");
              if (onvpe) {
                  pcomp.appendState("l3vpn:pe-created").
                  appendState("l3vpn:ce-vpe-topo-added").
                  appendState("l3vpn:vpe-p0-topo-added");
              }
              pcomp.appendState("l3vpn:qos-configured").
                    appendState("ncs:ready");

              boolean p_ready = true;
              if (onvpe) {
                  if (rfmProgress.containsKey(p + ".pe-created")) {
                      pcomp.setReached("l3vpn:pe-created");
                  } else {
                      p_ready = false;
                  }
                  if (rfmProgress.containsKey(p + ".ce-vpe-topo-added")) {
                      pcomp.setReached("l3vpn:ce-vpe-topo-added");
                  } else {
                      p_ready = false;
                  }
                  if (rfmProgress.containsKey(p + ".vpe-p0-topo-added")) {
                      pcomp.setReached("l3vpn:vpe-p0-topo-added");
                  } else {
                      p_ready = false;
                  }
              }
              if (rfmProgress.containsKey(p + ".qos-configured")) {
                  pcomp.setReached("l3vpn:qos-configured");
              } else {
                  p_ready = false;
              }

              if (p_ready) {
                  pcomp.setReached("ncs:ready");
              } else {
                  ready = false;
              }
          }

          if (ready) {
              self.setReached("ncs:ready");
          }
      } catch (Exception e) {
          throw new NavuException("could not update plan.", e);
      }

  }

Running the example and showing the plan while the chain of reactive-re-deploys is still executing could look something like the following:

ncs# show vpn l3vpn volvo plan
NAME                    TYPE   STATE              STATUS       WHEN
------------------------------------------------------------------------------------
self                    self   init               reached      2016-04-08T09:22:40
                               ready              not-reached  -
endpoint-branch-office  l3vpn  init               reached      2016-04-08T09:22:40
                               qos-configured     reached      2016-04-08T09:22:40
                               ready              reached      2016-04-08T09:22:40
endpoint-head-office    l3vpn  init               reached      2016-04-08T09:22:40
                               pe-created         not-reached  -
                               ce-vpe-topo-added  not-reached  -
                               vpe-p0-topo-added  not-reached  -
                               qos-configured     not-reached  -
                               ready              not-reached  -

Service Progress Monitoring

To support Service Progress Monitoring (SPM) there are two YANG groupings, ncs:service-progress-monitoring-data and ncs:service-progress-monitoring-trigger-action.

For an example of using Service Progress Monitoring, see getting-started/developing-with-ncs/25-service-progress-monitoring. The YANG submodule defining Service Progress Monitoring, including the groupings, is named tailf-ncs-service-progress-monitoring.yang and contains the following:

submodule tailf-ncs-service-progress-monitoring {
  yang-version 1.1;
  belongs-to tailf-ncs {
    prefix ncs;
  }

  import ietf-yang-types {
    prefix yang;
  }
  import tailf-common {
    prefix tailf;
  }

  include tailf-ncs-plan;

  organization "Tail-f Systems";

  description
    "This submodule contains a collection of YANG definitions for
     Service Progress Monitoring (SPM) in NCS.

     Copyright 2018 Cisco Systems, Inc.
     All rights reserved.
     Permission is hereby granted to redistribute this file without
     modification.";

  revision 2018-06-01 {
    description
      "Initial revision";
  }

  /*
   * Plan Component State
   */

  identity any-state {
    description
      "Can be used in SPM and plan trigger policies to denote any plan state.";
    base ncs:plan-state;
  }

  /*
   * Plan Component Types
   */

  identity any {
    description
      "Can be used in SPM and plan triggers to denote any component type.";
    base ncs:plan-component-type;
  }

  /*
   * Groupings
   */

  typedef spm-trigger-status {
    type enumeration {
      enum passed {
        tailf:code-name spm-passed;
      }
      enum failed {
        tailf:code-name spm-failed;
      }
    }
  }

  grouping service-progress-monitoring-trigger-action {
    tailf:action timeout {
      description
        "This action should be used by a custom model that is separate
         from the service (which may be made by someone else),
         and it must be refined with an actionpoint.

         Any callback action to be invoked when SPM trigger must
         always have the five leaves defined as input to this action
         as initial arguments, they are populated by the NSO system.";

      input {
        leaf service {
          description
            "The path to the service.";
          type instance-identifier;
          mandatory true;
        }

        leaf trigger {
          description "The name of the trigger that fired.";
          type leafref {
            path "/ncs:service-progress-monitoring/ncs:trigger/ncs:name";
          }
          mandatory true;
        }

        leaf policy {
          description "The name of the policy that fired.";
          type leafref {
            path "/ncs:service-progress-monitoring/ncs:policy/ncs:name";
          }
          mandatory true;
        }

        leaf timeout {
          description "What timeout has triggered.";
          type enumeration {
            enum violation {tailf:code-name spm-violation-timeout;}
            enum jeopardy {tailf:code-name spm-jeopardy-timeout;}
            enum success {tailf:code-name spm-success-timeout;}
          }
          mandatory true;
        }

        leaf status {
          description "SPM passed or failed.";
          type spm-trigger-status;
          mandatory true;
        }
      }
    }
  }

  grouping service-progress-monitoring-data {
    container service-progress-monitoring {
      config false;

      description
        "Service Progress Monitoring triggers.
         A service may have multiple SPMs.
         For example, if a CPE is added at a later stage it would have
         its own SPM defined, separate from the main SPM of the service.
         However, in many cases there will be just one SPM per service.

         The overall status for a trigger can be determined by reading
         the trigger-status{<name>}/status leaf. The success-time
         leaf will be set when the policy evaluates to true, i.e. when
         that part of the product is considered to be delivered by the
         policy expression. Note that this is operational data.
         ";

      list trigger-status {
        description
          "The operation status of the trigger.";

        key name;

        leaf name {
          type string;
          description
            "The trigger name.";
        }

        leaf policy {
          type string;
          description
            "Name of policy.";
        }

        leaf start-time {
          type yang:date-and-time;
          tailf:cli-value-display-template "$(.|datetime)";
          description
            "Time when the triggers started ticking.";
        }

        leaf jeopardy-time {
          type yang:date-and-time;
          tailf:cli-value-display-template "$(.|datetime)";
          description
            "Time when the conditions are evaluated for a jeopardy trigger.";
        }

        leaf jeopardy-result {
          type spm-trigger-status;
          description
            "The result will be 'passed' if no jeopardy was detected at
             jeopardy-time, 'failed' if it was detected. It is not set until
             it has been evaluated. It will be set to 'passed' if the
             condition is satisfied prior to the timeout expiring as well.";
        }

        leaf violation-time {
          type yang:date-and-time;
          tailf:cli-value-display-template "$(.|datetime)";
          description
            "Time when the conditions are evaluated for a violation trigger.";
        }

        leaf violation-result {
          type spm-trigger-status;
          description
            "The result will be 'passed' if no violation was detected at
             violation-time, 'failed' if it was detected. It is not set until
             it has been evaluated. It will be set to 'passed' if the
             condition is satisfied prior to the timeout expiring as well.";
        }

        leaf status {
          type enumeration {
            enum running {
              tailf:code-name spm-running;
              description
              "Service Progress Monitoring has been started but
               not yet triggered";
            }
            enum jeopardized {
              tailf:code-name spm-jeopardized;
              description
                "The jeopardy timer has triggered and the policy has evaluated
                 to false.";
            }
            enum violated {
              tailf:code-name spm-violated;
              description
                "The violation timer has triggered and the policy has evaluated
                 to false.";
            }
            enum successful {
              tailf:code-name spm-successful;
              description
                "One of the timers have triggered and the policy has evaluated
                 to true.";
            }
          }
        }

        leaf success-time {
          type yang:date-and-time;
          tailf:cli-value-display-template "$(.|datetime)";
          description
            "Time when the conditions were evaluated to true,
             i.e SPM was successful.";
        }
      }
    }
  }

  container service-progress-monitoring {
    tailf:info "Service Progress Monitoring policies";

    list policy {
      tailf:info "Policy definitions for Service Progress Monitoring";
      description
        "A list of all the policies.";

      key name;
      leaf name {
        type string;
        description
          "The name of the policy.";
      }

      leaf violation-timeout {
        tailf:info "Violation timeout in seconds";
        mandatory true;
        type uint32;
        units "seconds";
        description
          "The timeout in seconds for a policy to be violated.";
      }

      leaf jeopardy-timeout {
        tailf:info "Jeopardy timeout in seconds";
        mandatory true;
        type uint32;
        units "seconds";
        description
          "The timeout in seconds for a policy to be in jeopardy.";
      }

      list condition {
        min-elements 1;
        description
          "A list of the conditions that decides whether a policy is
           fulfilled or not.";

        key name;

        leaf name {
          type string;
          description
            "Name of the condition.";
        }

        list component-type {
          min-elements 1;

          description
            "Each condition can specify what state must be reached for
             a portion of the components to not trigger the action below.";

          key type;

          leaf type {
            description
              "We can either specify a particular component name
               (trigger/component) or a component-type (which may
               exist in several instances).";
            type union {
              type ncs:plan-component-type-t;
              type enumeration {
                enum "component-name" {
                  tailf:code-name spm-component-name;
                }
              }
            }
          }

          leaf what {
            description
              "Condition put on the component with respect to the
               ../plan-state and ../status.

               So, either:

                 1. X % of the component states has the status set.

                 2. All of the component states has the status set.

                 3. At least one of the components states has the status set.
              ";
            mandatory true;
            type union {
              type uint32 {
                range "0..100";
              }
              type enumeration {
                enum all{
                  tailf:code-name spm-what-all;
                }
                enum at-least-one {
                  tailf:code-name spm-what-at-least-one;
                }
              }
            }
          }

          leaf plan-state {
            mandatory true;
            type ncs:plan-state-name-t;
            description
              "The plans state. init, ready or any specific for the
               component.";
          }

          leaf status {
            type ncs:plan-state-status-t;
            default "reached";
            description
              "status of the new state for the component in the service's plan.
               reached not-reached or failed.";
          }
        }
      }

      container action {
        leaf action-path {
          type instance-identifier {
            require-instance false;
          }
        }
        leaf always-call {
          type boolean;
          default "false";
          description
            "If set to true, the action will be invoked also when
             the condition is evaluated to 'passed'.";
        }
      }
    }

    list trigger {
      description
        "A list of all the triggers. A trigger is used to apply a SPM policy
         to a service.";

      key name;

      leaf name {
        type string;
        description
          "Name of the trigger.";
      }

      leaf description {
        type string;
        description
          "Service Progress Monitoring trigger description.";
      }

      leaf policy {
        tailf:info "Service Progress Monitoring Policy";
        mandatory true;
        description
          "A reference to a policy that should be used with this trigger.";
        type leafref {
          path "/ncs:service-progress-monitoring/policy/name";
        }
      }

      leaf start-time {
        type yang:date-and-time;
        tailf:cli-value-display-template "$(.|datetime)";
        description
          "Optionally provide a start-time.
           If this is unset the SPM server will set the start-time to
           the commit time of the trigger.";
      }

      leaf component {
        type string;
        description
          "If the policy contains a condition with the key component-name,
           this is the component to apply the condition to.";
      }

      leaf target {
        mandatory true;
        description
          "Instance identifier to whichever service the SPM policy should
           be applied. Typically this is the creator of the trigger instance.";
        type instance-identifier {
          require-instance true;
        }
      }
    }
  }
}
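
As an illustration of the model above, an SPM policy and trigger could be configured along these lines (hypothetical names and values; the exact CLI rendering may differ). The policy requires all plan components to reach ncs:ready within the given timeouts, and the trigger applies it to a service instance:

```
ncs(config)# service-progress-monitoring policy gold
ncs(config-policy-gold)# jeopardy-timeout 300
ncs(config-policy-gold)# violation-timeout 600
ncs(config-policy-gold)# condition done component-type any what all plan-state ncs:ready
ncs(config-policy-gold)# exit
ncs(config)# service-progress-monitoring trigger volvo-spm
ncs(config-trigger-volvo-spm)# policy gold
ncs(config-trigger-volvo-spm)# target /vpn/l3vpn[name='volvo']
ncs(config-trigger-volvo-spm)# commit
```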

Performance Considerations

When using the Reactive FASTMAP technique the service tends to be re-deployed multiple times before it is fully deployed; i.e., the create() function is executed more frequently. This makes it desirable to reduce the execution time of the create() function as much as possible.

Normal code performance optimization methods should be used, but there are also a couple of techniques specific to the Reactive FASTMAP pattern.

  1. Stacked services (see the section called “Stacked Services and Shared Structures”) can be a very efficient technique to reduce both the size of the service diff-set and the execution time.

    For example, if a service applies a template to configure a device, all changes resulting from this will be stored in the diff-set of the service. During a re-deploy all changes will first be undone, only to be restored later when the template is applied.

    A more efficient solution is to use a stacked service to apply the template. The input parameters to the stacked service are the variables that would go into the template. The stacked service picks them up and applies the original template. As a consequence, the diff-set resulting from applying the template ends up in the stacked service, and as long as there are no changes in the input parameters to the stacked service, its create() code will not have to run. Instead of applying the same template multiple times, the template will only be applied once.

  2. CDB subscriber refactoring. Stacked services can be used when no response is required from the factored-out code. However, if the create() code contains a CPU-intensive computation that takes a number of input parameters and produces some result, then it is desirable to also minimize the number of times this computation is performed, and to perform it outside the database lock.

    This can be done by treating the problem similarly to resource allocation above: create a configuration tree where computation requests can be written, and register a CDB subscriber to subscribe to this tree. Whenever a new request is committed, the subscriber performs the computation, writes the result into a CDB operational data leaf, and re-deploys the service that requested the computation.

    As a consequence, the computation will take place outside the lock, and it will only be performed once for each set of input parameters. The cost of this technique is that an extra re-deploy will be performed. The service pseudo-code looks like this:

        create(serv) {
            /* request computation */
            create("/compute-something{id}");
            setElem("/compute-something{id}/param1", value1);
            setElem("/compute-something{id}/param2", value2);
    
            /* check for allocation response */
            if (!exists("/compute-something{id}/response"))
                return;
    
            /* read result */
            res = getElem("/compute-something{id}/response");
    
            /* use res in device config */
            configure(res)
        }
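The flow of the pseudo-code above can be modeled with a small self-contained sketch. Plain Java maps stand in for the configuration and operational trees here; this is not the NSO CDB API, and the doubling "computation" is just a placeholder for the expensive work:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the request/response re-deploy pattern above.
class ComputeViaRedeploy {
    private final Map<String, Integer> requests = new HashMap<>();
    private final Map<String, Integer> responses = new HashMap<>();

    // The service create() code: request the computation, and only
    // use the result once a response exists (i.e. after re-deploy).
    Integer create(String id, int param) {
        requests.put(id, param);        // request computation
        if (!responses.containsKey(id)) {
            return null;                // no response yet; wait for re-deploy
        }
        return responses.get(id);       // use result in device config
    }

    // The CDB subscriber: performs the expensive computation outside
    // the transaction lock and would then re-deploy the service.
    void subscriberRun() {
        for (Map.Entry<String, Integer> e : requests.entrySet()) {
            responses.putIfAbsent(e.getKey(), e.getValue() * 2);
        }
    }
}
```

The first create() pass only records the request and returns; the subscriber then computes outside the lock and triggers a re-deploy, and the second create() pass finds the response.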

Services that involve virtual devices, NFV

Virtual devices are increasingly popular, and it is very convenient to start them dynamically from a service. However, a service that starts virtual devices has a number of issues to deal with, for example: how to start the virtual device, how to stop it, how to react to changes in the device status (for example scale-in/scale-out), and how to allocate and free VM licenses.

A device cannot be both started and configured in the same transaction, since it takes some time for the device to start, and it also needs to be added to the device tree before a service can configure it.

The Reactive FASTMAP pattern is ideally suited for the task of dealing with the above issues.

Starting a virtual machine

Starting a virtual device is a multi-step process consisting of:

  1. Instructing a VIM or VNF-M to start the virtual device with some input parameters (which image, CPU settings, day0 configuration, etc.).

  2. Waiting for the virtual device to be started; the VIM/VNF-M may signal this through some event, or polling of some state might be necessary.

  3. Mounting the device in the NSO device tree.

  4. Fetching SSH host keys and performing sync-from on the newly created device.

The device is then ready to actually be configured.

There are several ways to achieve the above process with Reactive FASTMAP. One solution is implemented in the vm-manager and vm-manager-esc packages found in the examples.ncs/service-provider/virtual-mpls-vpn example.

Using these packages, the service does not talk directly to the VIM/VNF-M but instead registers a vm-manager/start request using the vm-manager API. This is done by adding a list instance to the /vm-manager/start list.

The contract with the vm-manager is that it is responsible for starting the virtual device, adding it to the /devices/device tree, performing sync-from, setting the /devices/device/vmm:ready leaf to true, and finally re-deploying the service that made the start request. This greatly simplifies the implementation of the service, which would otherwise have to perform all those operations itself.
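The start-request interface could be sketched along the following lines. This is an illustrative, simplified YANG fragment; the actual model ships in the vm-manager package and differs in its details:

```
container vm-manager {
  list start {
    key name;
    leaf name { type string; }
    // The real model also carries input parameters for the VIM/VNF-M,
    // such as which image to boot and the day0 configuration.
    leaf-list device {
      config false;
      type string;
      description
        "Names of the devices mounted as a response to this request.";
    }
  }
}
```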

The vm-manager package is only an interface package. It must be combined with a package that actually talks to the VIM/VNF-M. In the virtual-mpls-vpn example this is done through a package called vm-manager-esc that interfaces with a VNF-M called ESC. The vm-manager-esc package subscribes to changes in the /vm-manager/start configuration tree provided by the vm-manager package. Whenever a new request is created in that tree it attempts to start the corresponding VM on the indicated ESC device.

When the vm-manager-esc package receives a CREATE event in the /vm-manager/start list it initiates starting the VM. This involves a number of steps and components. In addition to the CDB subscriber for the /vm-manager/start tree, the package also has the following parts.

  1. A CDB subscriber (notification subscriber) that subscribes to NETCONF notifications from the ESC device. NETCONF notifications are used to communicate the state of the virtual machine. Events are sent when a new VM is registered, when it is started, when it has become alive, when it stops, etc. The vm-manager-esc package needs to react differently to the different events, and ignore some of them.

  2. A local service (vm-manager/esc) for starting the VM on the ESC. The CDB subscriber that subscribes to the /vm-manager/start list will create new instances of this service whenever a new vm-manager/start request is received, and delete the corresponding service when a vm-manager/start entry is deleted. The reason the CDB subscriber doesn't configure the ESC directly is that it would then have to keep track of what to delete when the vm-manager/start entry is deleted. Perhaps more importantly, if resources need to be allocated, for example a management IP address, this can be done conveniently from inside a service using the resource manager package.

The vm-manager/esc service writes configuration to the ESC device to start a new VM, to monitor it, and to send NETCONF notifications on state changes. It may also perform resource allocation and other activities.

When the notification subscriber receives a VM ALIVE event, it mounts the device in the device tree, fetches the SSH host keys, performs sync-from, sets the ready leaf to true, and re-deploys the service that requested the VM. The use of the ready leaf is critical. The original service cannot just inspect the devices tree to see if the device is there; the device being in the devices tree is no guarantee that it is ready to be configured.

Stopping a virtual machine

Stopping a virtual machine is almost as complicated as starting it. If a service both starts a virtual device and configures it, there will be a problem when the service is deleted (or when the service is reconfigured to not start the virtual device). When the service is deleted, the configuration that the service has created will be deleted, including both the configuration to start the VM and the configuration on the VM.

If the service configured the VIM/VNF-M directly, the result would be that the VIM/VNF-M would be told to stop the VM at the same time as NSO is trying to change the configuration on the VM (deleting the configuration that the service created). This creates a race condition that frequently results in an error (the VM is spun down while NSO is still talking to it, trying to delete its configuration).

This problem is handled by using the vm-manager package between the service and the VIM/VNF-M. When a service is deleted the vm-manager/start configuration is deleted. This in turn will trigger the CDB subscriber to stop the service, but this will be done after the service delete transaction has been completed, and consequently after NSO has removed the configuration that the service created on the device. The race condition is avoided.

Another problem is how to remove the device from the NSO device tree. A service that directly configures the VIM/VNF-M would have to use some trick to deal with this. The vm-manager-esc package can handle this directly in the vm-manager/start CDB subscriber. When it registers a delete of a vm-manager/start instance, it deletes the corresponding vm-service, but also the devices that have been mounted. If scaling is supported by the VIM/VNF-M, there might be multiple entries in the NSO device tree that must be deleted. The vm-manager YANG model contains a list of the names of all devices mounted in response to a vm-manager/start request. This list can be read both by the initiating service and by the vm-manager-esc CDB subscriber to know which devices to delete.

How to handle Licenses

Licenses for virtual machines may or may not be a problem. If a license server is used, the VM has to register for a license when it is started. This procedure would typically consist of some configuration that the initiating service would apply, or it can be part of the day0 config, in which case it is applied when the VM is started.

The real problem is usually to de-register a license. When a VM is stopped it is desirable to release the license if a license server is used. This process typically consists of deleting some configuration on the device and then waiting for the device to talk to the license server.

This complicates the device delete process a bit. Stopping the device must be a staged process: first the device configuration is removed and the license release is initiated; then, when the device has actually released the license, the VIM/VNF-M is instructed to stop the device.

There are at least two solutions to this problem, with slightly different trade-offs.

  1. The device NED is modified to deal with license release such that when it receives a license delete command, it detects this and waits until the license has actually been released before returning. This assumes that the license was applied as part of the device configuration that the initial service applied.

    The drawback of this approach is that the commit may be slow since it will delay until the license has been released. The advantage is that it is easy to implement.

  2. The specific vm-manager package, vm-manager-esc in our example, could be modified to release the license before instructing the VIM/VNF-M to stop the VM. This is more efficient, but also a bit more complicated. The CDB subscriber that listens to vm-manager/start modifications would detect a DELETE operation and before removing the device from the NSO device tree it would invoke a license release action on the device. The NED implementing this action (as a NED command) would release the license and then wait until the device has actually released the license before returning. The CDB subscriber would then proceed to delete the device from the NSO device tree, and the vm-service instance. This whole procedure could be spawned off in a separate thread to avoid blocking other vm-manager/start operations.

Advanced Mapping Techniques

Create Methods

What happens when several service instances share a resource that may or may not exist before the first service instance is created? If the service implementation without any distinction just checks if the resource exists and creates it if it doesn't, then the create will be stored in the first created service instance's reversed diff. This implies that if the first instance is removed, the shared resource is also removed with it, leaving all other service instances without the shared resource.

A solution to this problem is the sharedCreate() and sharedSet() functionality that is part of both the Maapi and Navu APIs. The sharedCreate() method is used to create data in the /ncs:devices/device tree that may be shared by several service instances. Everything that is created gets a reference counter associated with it. With this counter, the FASTMAP algorithm can keep track of the usage and only delete data when the last service instance referring to this data is removed. Furthermore, everything created using the sharedCreate() method also gets an additional attribute, called "Backpointer", which points back to the service instance that created the entity in the first place. This makes it possible to look at the /devices tree and answer the question of which parts of the device configuration were created by which service(s).

In the examples.ncs/getting-started/developing-with-ncs/4-rfs-service example there is a vlan package that uses the shared create functionality:

            //Now we will need to iterate over all of our managed
            //devices and do a sharedCreate of the interface and the unit

            //Get the list of all managed devices.
            NavuList managedDevices = root.container("devices").list("device");

            for(NavuContainer deviceContainer : managedDevices.elements()){

                NavuContainer ifs = deviceContainer.container("config").
                    container("r", "sys").container("interfaces");

                // execute as shared create of the path
                //   /interfaces/interface[name='x']/unit[name='i']

                NavuContainer iface =
                    ifs.list("interface").sharedCreate(
                        vlan.leaf("iface").value());
                iface.leaf("enabled").sharedCreate();

                NavuContainer unit = iface.
                        list("unit").sharedCreate(
                            vlan.leaf("unit").value());

                unit.leaf("vlan-id").sharedSet(vlan.leaf("vid").value());
                unit.leaf("enabled").sharedSet(new ConfBool(true));
                unit.leaf("description").sharedSet(
                    vlan.leaf("description").value());
                for (ConfValue arpValue : vlan.leafList("arp")) {
                    unit.leafList("arp").sharedCreate(arpValue);
                }
            }

Build the example and create two services on the same interface:

$ cd $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/4-rfs-service
$ make clean all
$ ncs-netsim start
$ ncs
$ ncs_cli -C -u admin
admin@ncs# configure
admin@ncs(config)# devices sync-from
admin@ncs(config)# services vlan s1 iface ethX unit 1 vid 1 description descr1
admin@ncs(config-vlan-s1)# commit
admin@ncs(config-vlan-s1)# top
admin@ncs(config)# services vlan s2 iface ethX unit 2 vid 2 description descr2
admin@ncs(config-vlan-s2)# commit
admin@ncs(config-vlan-s2)# top

We can now look at the device data for one of the relevant devices. We are especially interested in the Refcount and the Backpointer attributes that are used by the NSO FASTMAP algorithm to deduce when the data is eligible for deletion:

admin@ncs(config)# show full-configuration devices device ex0 \
    config r:sys interfaces interface | display service-meta-data
  ...
  /* Refcount: 2 */
  /* Backpointer: [ /ncs:services/vl:vlan[vl:name='s1'] /ncs:services/vl:vlan[vl:name='s2'] ] */
  r:sys interfaces interface ethX
  ...

If we now delete the first service instance, the device interface still exists, but with a decremented reference counter:

admin@ncs(config)# no services vlan s1
admin@ncs(config)# commit
admin@ncs(config)# show full-configuration devices device ex0 \
    config r:sys interfaces interface | display service-meta-data
  ...
  /* Refcount: 1 */
  /* Backpointer: [ /ncs:services/vl:vlan[vl:name='s2'] ] */
  r:sys interfaces interface ethX
  ...
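
The reference count and backpointer bookkeeping shown above can be sketched as a small, self-contained simulation. This is plain Java with invented class names (SharedStore, SharedNode), not the NSO API; it only mimics the semantics of sharedCreate() and service deletion:

```java
import java.util.*;

// Toy model of FASTMAP shared-create bookkeeping: every shared node
// keeps a reference count and the set of services pointing back at it.
class SharedNode {
    int refcount = 0;
    Set<String> backpointers = new HashSet<>();
}

class SharedStore {
    Map<String, SharedNode> nodes = new HashMap<>();

    // sharedCreate: create the node if missing, bump the refcount,
    // and record the creating service as a backpointer.
    void sharedCreate(String path, String service) {
        SharedNode n = nodes.computeIfAbsent(path, p -> new SharedNode());
        n.refcount++;
        n.backpointers.add(service);
    }

    // Service deletion: decrement the count; remove the node only
    // when the last referring service is gone.
    void serviceDelete(String path, String service) {
        SharedNode n = nodes.get(path);
        if (n == null) return;
        n.refcount--;
        n.backpointers.remove(service);
        if (n.refcount == 0) nodes.remove(path);
    }
}
```

With two services sharing "interface ethX", deleting the first one leaves the node in place with refcount 1, mirroring the CLI output above; deleting the second removes it entirely.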

Persistent FASTMAP Properties

In the service application there are often cases where the code needs to make some decision based upon something that cannot be derived directly from the input parameters to the service.

One such example could be allocating a new IP address. Suppose the implementation keeps a list of allocated addresses in the configuration. When the create() code runs, it finds a free address, stores it in the list, and uses it in the device configuration. This would work, and for a while it would seem that all is well, but there is a fallacy in this implementation. The problem is that if a service is modified, the FASTMAP algorithm first removes all settings of the original create, including the IP allocation. When the create() method is called to recreate the service, there is no guarantee that the implementation will find the same free address in the list. This implies that a simple update of any service model leaf may change the allocated IP address.

NSO has built-in support to prevent this. What it comes down to is that the service implementation needs persistently stored properties for each service instance that can be used in conjunction with the FASTMAP algorithm. These properties are found in the opaque argument in the Java API service interface.

public Properties create(ServiceContext context,
                         NavuNode service,
                         NavuNode root,
                         Properties opaque)

The opaque properties object is made available as an argument to the service create() method. When a service instance is first created, this object is null. The code can add properties to it and return the possibly updated opaque object, which NSO stores with the service instance. Later, when the service instance is updated, NSO will pass the stored opaque to create().

Note

It is vital that create() returns the opaque object that was passed to it, even if the method itself does not use it. The reason for this is that, as we will see in the section called “Pre and post hooks”, the create() method is not the only callback that uses this opaque object. The opaque object can be chained through several different callbacks. Having a return null; in the create() method is not good practice.

A pseudo code implementation of our IP allocation scenario could then look something like the following:

@ServiceCallback(servicePoint="my-service",
                 callType=ServiceCBType.CREATE)
public Properties create(ServiceContext context,
                         NavuNode service,
                         NavuNode root,
                         Properties opaque)
    throws DpCallbackException {
    String allocIP = null;
    if (opaque != null) {
        allocIP = opaque.getProperty("ALLOCATED_IP");
    }

    if (allocIP == null) {
        // This implies that the service instance is created for the
        // first time; the allocation algorithm should execute

        ...

        // The allocated IP should be stored in the opaque properties
        if (opaque == null) {
            opaque = new Properties();
        }

        ...

        opaque.setProperty("ALLOCATED_IP", allocIP);
    }

    // It is important that the opaque Properties object is returned
    // or else it will not be stored together with the service instance
    return opaque;
}

Pre and post hooks

There are scenarios where some handling in a service implementation should remain outside the scope of the FASTMAP algorithm. For instance, a service may require some functionality to be enabled on a device, and should enable it if it is not enabled already. If that setting should stay on the device even after the service instance is removed, FASTMAP alone falls short.

For this reason there are two extra methods in the DpServiceCallback interface, preModification and postModification, that, if registered, will be called before and after, respectively, the FASTMAP algorithm modifies device data:

@ServiceCallback(servicePoint = "",
                 callType = ServiceCBType.PRE_MODIFICATION)
public Properties preModification(ServiceContext context,
                                  ServiceOperationType operation,
                                  ConfPath path,
                                  Properties opaque)
                                  throws DpCallbackException;

@ServiceCallback(servicePoint = "",
                 callType = ServiceCBType.POST_MODIFICATION)
public Properties postModification(ServiceContext context,
                                   ServiceOperationType operation,
                                   ConfPath path,
                                   Properties opaque)
                                   throws DpCallbackException;

The pre/postModification methods have a context argument of type ServiceContext, which contains methods to retrieve NavuNodes pointing to the service instance and to the NCS model root node. Data that is modified using these NavuNodes is handled outside the scope of the FASTMAP algorithm and is therefore untouched by changes to the service instance (unless changed in another pre/postModification callback):

public interface ServiceContext {
...
    public NavuNode getServiceNode() throws ConfException;

    public NavuNode getRootNode() throws ConfException;
}

The pre/postModification methods also have an operation argument of the enum type ServiceOperationType, which describes the type of change that the current service instance is subject to:

public enum ServiceOperationType {
    CREATE,
    UPDATE,
    DELETE;
...
}

In addition to the above arguments, the pre/postModification methods also have a path argument that points to the current service instance, as well as the opaque Properties object corresponding to this service instance. Hence the opaque object can first be created in a preModification method, passed to, and modified in, the FASTMAP create() method, and finally handled in a postModification method before being stored with the service instance.
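
Because the same opaque object is threaded through all the callbacks, the chaining can be illustrated outside of NSO with plain java.util.Properties. The three static methods below merely mimic the callback order; they are invented for illustration and are not the NSO callbacks themselves:

```java
import java.util.Properties;

// Mimics how one opaque object flows through the callback chain:
// preModification -> create -> postModification. Each stage must
// return the (possibly updated) object, never null, or data set by
// earlier stages is lost.
class OpaqueChain {
    static Properties preModification(Properties opaque) {
        if (opaque == null) opaque = new Properties();
        opaque.setProperty("PRE_SEEN", "true");
        return opaque;
    }

    static Properties create(Properties opaque) {
        if (opaque == null) opaque = new Properties();
        // A hypothetical allocation result, stored for later invocations.
        opaque.setProperty("ALLOCATED_IP", "192.0.2.1");
        return opaque;
    }

    static Properties postModification(Properties opaque) {
        // Pass the object through unchanged; NSO stores the returned
        // object with the service instance.
        return opaque;
    }
}
```

Running the chain starting from null (a first-time create) yields one object carrying the data from every stage, which is exactly why returning null from any callback breaks the chain.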

The examples.ncs/getting-started/developing-with-ncs/15-pre-modification example shows how a preModification method can be used to permanently set a DNS server in the device configuration. This DNS server is regarded as a prerequisite for the service instances and should always be set on the devices. Instead of having the FASTMAP service fail when the prerequisite is not fulfilled, the preModification callback can check and set the configuration. We have the following preModification code:

    @ServiceCallback(servicePoint = "vpnep-servicepoint",
                     callType = ServiceCBType.PRE_MODIFICATION)
    public Properties preModification(ServiceContext context,
                                      ServiceOperationType operation,
                                      ConfPath path,
                                      Properties opaque)
                                      throws DpCallbackException {
        try {
            vpnep vep = new vpnep();
            if (ServiceOperationType.DELETE.equals(operation)) {
                return opaque;
            }

            // get the in transaction changes for the current
            // service instance
            NavuNode service = context.getRootNode().container(Ncs._services_).
                namespace(vpnep.id).list(vpnep._vpn_endpoint_).
                elem((ConfKey) path.getKP()[0]);
            List<NavuNode> changedNodes = service.getChanges(true);

            for (NavuNode n : changedNodes) {
                if (n.getName().equals(vpnep._router_)) {
                    NavuLeaf routerName = (NavuLeaf) n;
                    NavuNode deviceNameNode = routerName.deref().get(0);
                    NavuContainer device =
                        (NavuContainer) deviceNameNode.getParent();

                    String routerNs = "http://example.com/router";
                    NavuContainer sys = device.container(Ncs._config_).
                        namespace(routerNs).container("sys");

                    NavuList serverList = sys.container("dns").list("server");

                    if (!serverList.containsNode("10.10.10.1")) {
                        serverList.create("10.10.10.1");
                    }
                    break;
                }
            }
        } catch (Exception e) {
            throw new DpCallbackException("Pre modification failed",e);
        }
        return opaque;
    }

We walk through this example code and explain what it does. The first part is a check of which operation is being performed. If the operation is a delete, we can return. We always return the opaque passed to us as an argument. Even though this is a delete, it is not necessarily the last callback in the callback chain; if we returned null, we would impose a null opaque on later callbacks.

            if (ServiceOperationType.DELETE.equals(operation)) {
                return opaque;
            }

Next we need to check whether the router leaf of the service has changed in the transaction. This leaf is mandatory, but if the operation is an UPDATE this leaf has not necessarily changed. The following code snippet navigates to the relevant service instance NavuNode and gets the list of all NavuNodes changed in this transaction for this service instance:

            NavuNode service = context.getRootNode().container(Ncs._services_).
                namespace(vpnep.id).list(vpnep._vpn_endpoint_).
                elem((ConfKey) path.getKP()[0]);
            List<NavuNode> changedNodes = service.getChanges(true);

We check if any of the changed NavuNodes is the router leaf, which is of type leafref to a device name under the /ncs:devices/device tree:

            for (NavuNode n : changedNodes) {
                if (n.getName().equals(vpnep._router_)) {
                    NavuLeaf routerName = (NavuLeaf) n;

If the router leaf has changed, since it is a leafref to another leaf, we can deref it and get the device name leaf in the /ncs:devices/device tree. Note that in the general case a deref will not necessarily return a single NavuNode, but in this case it will, and therefore we can just call get(0) on the list of NavuNodes returned by deref. We want the device container NavuNode, which we can retrieve as the parent node of the device name leaf.

                    NavuNode deviceNameNode = routerName.deref().get(0);
                    NavuContainer device =
                        (NavuContainer) deviceNameNode.getParent();

We now know that the router leaf has changed and we have the device container NavuNode for this device, so we can check the device configuration for the DNS servers. If the IP address 10.10.10.1 does not appear in the list, we add it.

                    String routerNs = "http://example.com/router";
                    NavuContainer sys = device.container(Ncs._config_).
                        namespace(routerNs).container("sys");

                    NavuList serverList = sys.container("dns").list("server");

                    if (!serverList.containsNode("10.10.10.1")) {
                        serverList.create("10.10.10.1");
                    }

We have here used the preModification callback to hardwire an enabling configuration for a service. This setting will stay on the device independently of the lifecycle changes of the service instance which created it.

Stacked Services and Shared Structures

It is possible for one high level service to create another low level service instance. In this case, the low level service is FASTMAPed similarly to how the data in /ncs:devices/device is FASTMAPed when a normal RFS manipulates the device tree. We can imagine a high level service (maybe a customer facing service, CFS) called email that in its turn creates real RFS services pop and/or imap.

The same principles apply to the FASTMAP data when services are stacked as in the regular RFS service scenario. The most important principle is that the data created by a FASTMAP service is owned by the service code. Regardless of whether we use a template-based service or a Java-based service, the service code creates data, and that data is then associated with the service instance. If the user deletes a service instance, FASTMAP will automatically delete whatever the service created, including any other services. Thus, if the operator directly manipulates data that is created by a service, the service becomes "out of sync". Each service instance has a "check-sync" action associated with it. This action checks if all the data that the service creates or writes is still there.

This is especially important to realize in the case of stacked services. In this case the low level service data is under the control of the high level service. It is thus forbidden to directly manipulate that data. Only the high level service code may manipulate that data. NSO has no built-in mechanism that detects when data created by service code is manipulated "out of band".

However, two high level services may manipulate the same structures. Regardless of whether an RFS creates data in the device tree or a high level service creates low level services (that are true RFS services), the data created is under the control of FASTMAP, and the automatically created attributes Refcount and Backpointer are used by FASTMAP to ensure that structures shared by multiple service instances are not deleted until there are no users left.

FASTMAP pre-lock create option

Note

This option is deprecated. It is recommended to use the FASTMAP create() function in all cases.

The original FASTMAP algorithm accepts concurrent transactions. However, the create() function will be called after acquiring a common transaction lock. This implies that only one service instance's create() function is called at a time.

Note

The above serialization of the transaction is part of the NSO service manager's FASTMAP algorithm. It should NOT be mistaken for the NSO device manager's propagation of data to the relevant devices, which is performed at a later stage of the transaction commit. The latter is performed in a fully concurrent manner.

The reasons for the serialization of FASTMAP transactions are transaction consistency and making it simpler to write create() functions, since they do not need to be thread-safe.

However, in certain scenarios where the create() function requires heavy computation and at the same time has no overlap in written data, this serialization is unnecessary and prevents higher throughput. For this reason a preLockCreate() function has been introduced. This function serves exactly the same purpose as the create() function but is called before the common transaction lock is acquired.

The guidelines for using a preLockCreate() function instead of the ordinary create() are:

  • The service creation is computationally heavy, i.e., consumes substantial CPU time.

  • The service creation can be coded in a thread-safe fashion.

  • Different service instances have no config data overlap, or the probability of config data overlap is low.

The preLockCreate FASTMAP algorithm has internal detection of conflicting concurrent transaction data updates. This implies that there is no risk of persistent data inconsistencies, but a conflicting transaction might instead fail at commit.

For services that also use the preModification() function, that function will also be called before the transaction lock if preLockCreate() is used.

If a stacked service (see the section called “Stacked Services and Shared Structures”) has a preLockCreate(), and the stacked service is created by another service's create() function, then the stacked service's preLockCreate() will be called inside the lock.

Service Caveats

Under some circumstances the mapping logic of a service needs special consideration. Services can either map to disjunctive data sets or shared data sets.

If the services map to disjunctive data sets, which means no other service will manipulate the same data, there are no known caveats.

If, on the other hand, several services manipulate the same data, there are some things to consider. All these special cases are discussed below.

Finding Caveats

A useful tool for finding potential problems with overlapping data is the CLI debug service flag. Example:

admin@ncs(config)# commit dry-run | debug service

The debug service flag will display the net effect of the service create code as well as issue warnings about potentially problematic usage. Note that these warnings are only issued in situations where services have overlapping shared data.

In all examples below, the WARNING message is the result of using the debug service flag.

delete

A general rule of thumb is to never use delete in service create code.

If a delete is used in service create code the following warning is displayed:

*** WARNING ***: delete in service create code is unsafe if data is
                 shared by other services

The deleted elements will be restored when the service instance which did the delete is deleted. Other services which relied on the same configuration will be out of sync.

The explicit delete is easy to detect in the XML of a template or in the Java source code. What is not so easy to detect are the when and choice statements in the YANG data model.

If a when statement evaluates to false, the configuration tree below that node will be deleted.

If a case is set in a choice statement, the previously set case will be deleted.

Both the when and case scenarios above behave the same way as an explicit delete.
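
As an illustration, consider the following hypothetical YANG fragment (invented for this example, not taken from any NSO model); both constructs can trigger such implicit deletes:

```yang
container tunnel {
  leaf enabled { type boolean; }
  container params {
    // If "enabled" changes to false, this when expression becomes
    // false and everything under "params" is implicitly deleted.
    when "../enabled = 'true'";
    leaf mtu { type uint16; }
  }
  choice encap {
    // Setting a leaf under "vxlan" implicitly deletes any data
    // previously set under "gre", and vice versa.
    case gre   { leaf gre-key { type uint32; } }
    case vxlan { leaf vni     { type uint32; } }
  }
}
```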

One working design pattern for these use cases is to let one special init service be responsible for the deletion and initialization. This init service should be a singleton and be shared-created by the other services that depend on the specific delete and initialization.

By using this stacked service design, the other services just shared-create that init service. When the last of the other services is deleted, the init service is also deleted, as it is reference counted.

Another design pattern is to have such delete and initialization code in the pre- and post-modification code of a service. This is possible but generally results in more complex code than the stacked service approach above.

set

If a set operation instead of a shared set operation is used in service create code the following warning is displayed:

*** WARNING ***: set in service create code is unsafe if data is
                 shared by other services

The set operation does not add the service meta-data reference count to the element. If the first service, which set the element, is deleted, the original value will be restored and other services will be out of sync.

create

If a create operation instead of a shared create operation is used in service create code the following warning is displayed:

*** WARNING ***: create in service create code is unsafe if data is
                 shared by other services

The create operation does not add the service meta-data back-pointer and reference count to the element. If the first service, which created the element, is deleted, the created item is deleted and other services will be out of sync.

move

If items in an ordered by user list are moved and these items were created by another service the following warning is displayed:

*** WARNING ***: due to the move the following services will be
                 out of sync:

followed by a list of the affected services.

Moving items which other services rely on is a service design flaw. This has to be analyzed and taken care of in user code.

Conflicting Intents

It is important to consider that a service is executed as part of a transaction. If, in the same transaction, the service gets conflicting intents, for example, it gets modified and deleted, the transaction will abort. All examples below increase the risk of conflicting intents and, therefore, should be avoided.

  • Service input parameters have when conditions. If service input parameters have when conditions, a change to the target nodes of the when conditions (becoming true or false) will cause the service to be re-deployed.

  • Service has more than one parent node. Stacked service designs where two or more parent services generate input for a child service can cause conflicting intents for the child service.

Service Discovery

Discovery basics

A very common situation when NSO is deployed in an existing network is that the network already has services implemented. These services may have been deployed manually or through an older provisioning system. The task is to introduce NSO and import the existing services into NSO. The goal is to use NSO to manage existing services, and to add additional instances of the same service type using NSO.

The whole process of identifying services and importing them into NSO is called Service Discovery. Some steps in the process can be automated; others are highly manual. The amount of work differs a lot depending on how structured and consistent the original deployment is.

The process can be broken down in a number of steps:

Figure 43. Service Discovery
Service Discovery

One of the prerequisites for this to work is that it is possible to construct a list of the already existing services. Maybe such a list exists in an inventory system, an external database, or maybe just an Excel spreadsheet. It must also be possible to:

  1. Import all managed devices into NSO.

  2. Write the YANG data model for the service and the mapping logic.

  3. Write a program, using Python/Maapi or Java/Maapi, which traverses the entire network configuration and computes the services list.

  4. Verify the mapping logic is correct.

The last step, verifying the mapping logic, is an iterative process. The goal is to ensure all relevant device configuration is covered by the mapping logic.

Verifying the mapping logic is achieved by using the NSO action re-deploy reconcile { } dry-run. When the output is empty, the data is covered.

NSO uses special attributes on instance data to indicate the data used by a service. Two attributes are used for this: Refcount and Backpointer.

These attributes can be inspected by adding the display service-meta-data flag to show full-configuration.

Even if all data is covered by the mapping, there might still be manually configured data below service data. If this is not desired, use the action re-deploy reconcile { discard-non-service-config } dry-run to find such configuration.
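
For example, for a hypothetical vlan service instance v1, such a check could look like:

admin@ncs(config)# services vlan v1 re-deploy reconcile { discard-non-service-config } dry-run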

Below the steps to reconcile a service are shown, first in a visual form and later as commands in one of the examples.

Figure 44. Services v1 and v2 have been created with original data O
Services v1 and v2 have been created with original data O

The services v1 and v2 have been created on top of the existing original data.

The service v1 has sole control of the instance data in α, which is not part of δ, and service v2 has sole control of the instance data in β, which is not part of ε.

The data solely owned by services v1 and v2 has a reference count of one.

The data in δ and in ε is part of both the original data and the service data. The reference counter in these areas is two.

If service v1 were deleted, the data with a reference count of one would be removed. The data in δ would be kept, but the reference count attribute would be removed.

After thorough inspection of the service and the affected data, the service can be made the sole owner of the data that was part of the original data.

Check the effect of the reconciliation by using the dry-run option of re-deploy:

admin@ncs(config)# services vlan v1 re-deploy reconcile { } dry-run

The output of the dry-run will only display configuration changes, not changes in service meta-data such as reference counts and back-pointers.

Figure 45. Reconcile Service v1
Reconcile Service v1

After reconciliation of v1, the service is the sole owner of the data in α; all data in α now has a reference count of one.

Complete the process by reconciling service v2 as well.

Figure 46. Reconcile Service v2
Reconcile Service v2

All data in α and β now has a reference count of one and will thus be removed when services v1 and v2 are removed or un-deployed.

If it later turns out that parts of ψ should belong to a service, just change the mapping logic of the service and execute the action again:

admin@ncs(config)# services vlan v1 re-deploy reconcile
admin@ncs(config)# services vlan v2 re-deploy reconcile

If the service mapping logic is changed so that services start to overlap each other and control more of the original data, as in the following figure:

Figure 47. Overlapping services
Overlapping services

just reconcile the services again. After reconciliation, α and β have a reference count of one and ζ has a reference count of two.

The command re-deploy reconcile can be executed over and over again; if the service is already reconciled, nothing will happen.
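The ownership and reference-count behavior described above can be summarized in a small Python model. This is purely illustrative code with invented names, not how NSO implements FASTMAP:

```python
# Toy model of FASTMAP reference counting. Each configuration path maps
# to its set of owners; "original" marks pre-existing (brownfield) data.

class ConfigDB:
    def __init__(self, original_paths):
        self.owners = {p: {"original"} for p in original_paths}

    def deploy(self, service, paths):
        # A service deploy adds the service as an owner (refcount + 1).
        for p in paths:
            self.owners.setdefault(p, set()).add(service)

    def refcount(self, path):
        return len(self.owners.get(path, ()))

    def reconcile(self, service):
        # Reconcile: the service becomes sole owner of the data it covers.
        for owners in self.owners.values():
            if service in owners:
                owners.discard("original")

    def delete(self, service):
        # Data whose only owner was the service is removed with it.
        for p in list(self.owners):
            self.owners[p].discard(service)
            if not self.owners[p]:
                del self.owners[p]

db = ConfigDB(["eth1/unit 1/vlan-id"])           # part of delta
db.deploy("v1", ["eth1/unit 1/vlan-id",          # covered by the service
                 "eth1/unit 1/enabled"])         # part of alpha
assert db.refcount("eth1/unit 1/vlan-id") == 2   # original + v1
db.reconcile("v1")
db.reconcile("v1")                               # idempotent
assert db.refcount("eth1/unit 1/vlan-id") == 1   # v1 is now sole owner
db.delete("v1")
assert db.refcount("eth1/unit 1/vlan-id") == 0   # removed with the service
```

Running reconcile twice leaves the state unchanged, mirroring the idempotence of re-deploy reconcile.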

Figure 48. Un-deploy Service v1 and v2
Un-deploy Service v1 and v2

The data in ψ is outside any service and is kept after the services are gone. If the services v1 and v2 had been deleted, ψ would still look the same.

Now, after the visualization, try this by hand in one of the examples: examples.ncs/getting-started/developing-with-ncs/4-rfs-service

First we create two service instances:

$ cd $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/4-rfs-service
$ make clean all
$ ncs-netsim start
$ ncs
$ ncs_cli -C -u admin
admin@ncs# config
admin@ncs(config)# devices sync-from
admin@ncs(config)# services vlan v1 description v1-vlan iface eth1 unit 1 vid 111
admin@ncs(config-vlan-v1)# top
admin@ncs(config)# services vlan v2 description v2-vlan iface eth2 unit 2 vid 222
admin@ncs(config-vlan-v2)# top
admin@ncs(config)# commit

That created two services in the network. Now let's destroy that.

admin@ncs(config)# devices device * delete-config
admin@ncs(config)# no services
admin@ncs(config)# commit no-networking

We now have a situation with two services deployed in the network, but neither the services nor any device configuration in NSO.

This is the situation when NSO is first set up in an existing network. Start by getting all the device data into the database:

admin@ncs(config)# devices sync-from

This resembles the point where a brownfield deployment starts. Let's introduce the two service instances in NSO:

admin@ncs(config)# services vlan v1 description v1-vlan iface eth1 unit 1 vid 111
admin@ncs(config-vlan-v1)# top
admin@ncs(config)# services vlan v2 description v2-vlan iface eth2 unit 2 vid 222
admin@ncs(config-vlan-v2)# top
admin@ncs(config)# commit no-networking

We're almost there now. If we take a look at the deployed configuration in NSO, we see for example:

admin@ncs(config)# show full-configuration devices device ex0 \
    config r:sys interfaces | display service-meta-data
...
  ! Refcount: 2
  ! Backpointer: [ /ncs:services/vl:vlan[vl:name='v1'] ]
  r:sys interfaces interface eth1
   ! Refcount: 2
   ! Backpointer: [ /ncs:services/vl:vlan[vl:name='v1'] ]
   enabled
   ! Refcount: 2
   ! Backpointer: [ /ncs:services/vl:vlan[vl:name='v1'] ]
   unit 1
    ! Refcount: 2
    ! Originalvalue: true
    enabled
    ! Refcount: 2
    ! Originalvalue: v1-vlan
    description v1-vlan
    ! Refcount: 2
    ! Originalvalue: 111
    vlan-id     111
   !
  !
  ! Refcount: 2
  ! Backpointer: [ /ncs:services/vl:vlan[vl:name='v2'] ]
  r:sys interfaces interface eth2
   ! Refcount: 2
   ! Backpointer: [ /ncs:services/vl:vlan[vl:name='v2'] ]
   enabled
   ! Refcount: 2
   ! Backpointer: [ /ncs:services/vl:vlan[vl:name='v2'] ]
   unit 2
    ! Refcount: 2
    ! Originalvalue: true
    enabled
    ! Refcount: 2
    ! Originalvalue: v2-vlan
    description v2-vlan
    ! Refcount: 2
    ! Originalvalue: 222
    vlan-id     222
   !
  !

When we commit a service to the network, the FASTMAP code will create the Refcount and the Backpointer attributes. These attributes are used to connect the device configuration to services. They are also used by FASTMAP when service instances are changed or deleted. In the configuration snippet above, you can see that the interfaces "eth1" and "eth2" have a refcount of 2 but only one back-pointer each, pointing back to the respective service. This is the state when the data is covered by the service but still is part of the original data. Now reconcile both services:

admin@ncs(config)# services vlan v1 re-deploy reconcile
admin@ncs(config)# services vlan v2 re-deploy reconcile

Now the services v1 and v2 are in the same state as in Figure 46, “Reconcile Service v2”, above.

admin@ncs(config)# show full-configuration devices device ex0 \
    config r:sys interfaces | display service-meta-data
...
  ! Refcount: 1
  ! Backpointer: [ /ncs:services/vl:vlan[vl:name='v1'] ]
  r:sys interfaces interface eth1
   ! Refcount: 1
   ! Backpointer: [ /ncs:services/vl:vlan[vl:name='v1'] ]
   enabled
   ! Refcount: 1
   ! Backpointer: [ /ncs:services/vl:vlan[vl:name='v1'] ]
   unit 1
    ! Refcount: 1
    enabled
    ! Refcount: 1
    description v1-vlan
    ! Refcount: 1
    vlan-id     111
   !
  !
  ! Refcount: 1
  ! Backpointer: [ /ncs:services/vl:vlan[vl:name='v2'] ]
  r:sys interfaces interface eth2
   ! Refcount: 1
   ! Backpointer: [ /ncs:services/vl:vlan[vl:name='v2'] ]
   enabled
   ! Refcount: 1
   ! Backpointer: [ /ncs:services/vl:vlan[vl:name='v2'] ]
   unit 2
    ! Refcount: 1
    enabled
    ! Refcount: 1
    description v2-vlan
    ! Refcount: 1
    vlan-id     222
   !
  !

The two services v1 and v2 have been reconciled. The reference counters as well as the back-pointers are correct and indicate that the data is owned by the services.

Reconciliation caveats

This scheme sometimes works less well, depending on the service type. If the service deletes data on the managed devices, expecting FASTMAP to recreate that data when the service is removed, this technique does not work.

Also, if the service instances have allocated data, this scheme has to be modified to take that allocation into account.

A reconcile exercise is also a cleanup exercise, and every reconciliation exercise will be different.

Reconciling in bulk

Once we have convinced ourselves that the reconciliation process works, we probably want to reconcile all services in bulk. One way to do that is to write a shell script. The script needs input; assume we have a file vpn.txt that contains all the already existing VPNs in the network as a CSV file.

$ cat vpn.txt
volvo,volvo VLAN,eth4,1,444
saab,saab VLAN,eth4,2,445
astra,astra VLAN,eth4,3,446

A small shell script to generate input to the CLI could look like this:

#!/bin/sh
infile=$1
IFS=,
echo "config" > out.cli
while read id desc iface unit vid; do
    c="services vlan $id description \"$desc\" iface $iface unit $unit vid $vid"
    echo "$c" >> out.cli
    echo "top" >> out.cli
done < "$infile"

echo "commit" >> out.cli

while read id desc iface unit vid; do
    echo "Reconcile of '$id'"
    echo "services vlan $id re-deploy reconcile" >> out.cli
done < "$infile"

echo "exit" >> out.cli
echo "exit" >> out.cli

ncs_cli -C -u admin < out.cli
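The same input file can also be processed with a short Python script. This is a sketch; the make_cli function is an invented helper, and the CSV columns follow the vpn.txt format above:

```python
#!/usr/bin/env python3
# Generate the same out.cli content from a vpn.txt-style CSV file.
# Sketch only; make_cli is an invented helper function.
import csv
import io
import sys

def make_cli(csv_text):
    rows = list(csv.reader(io.StringIO(csv_text)))
    lines = ["config"]
    for name, desc, iface, unit, vid in rows:
        lines.append('services vlan %s description "%s" '
                     'iface %s unit %s vid %s'
                     % (name, desc, iface, unit, vid))
        lines.append("top")
    lines.append("commit")
    for name, *_ in rows:
        lines.append("services vlan %s re-deploy reconcile" % name)
    lines += ["exit", "exit"]
    return "\n".join(lines) + "\n"

if __name__ == "__main__" and len(sys.argv) > 1:
    # Pipe the output into the CLI, e.g.:
    #   python3 gen_cli.py vpn.txt | ncs_cli -C -u admin
    with open(sys.argv[1]) as f:
        sys.stdout.write(make_cli(f.read()))
```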

Partial sync

In some cases, a service may need to rely on the actual device configuration to compute the change set. It is often a requirement to pull the current configuration from the network before executing such a service. Doing a full sync-from on a number of devices is an expensive task, especially if it needs to be performed often, so the suggested approach in this case is to use partial-sync-from.

In cases where a multitude of service instances touch a device that is not entirely orchestrated using NSO, i.e., services relying on the partial-sync-from feature described above, and the device needs to be replaced, all services need to be re-deployed. This can be expensive, depending on the number of service instances. Partial-sync-to enables replacement of devices in a more efficient fashion.

The partial-sync-from and partial-sync-to actions allow specifying certain portions of the device's configuration to be pulled from or pushed to the network, respectively, rather than the full configuration. These operations are more efficient on NETCONF devices and NEDs that support the partial-show feature; NEDs that do not support it fall back to pulling or pushing the whole configuration.

Even though partial-sync-from and partial-sync-to allow pulling or pushing only a part of the device's configuration, the actions are not allowed to break the consistency of the configuration in CDB or on the device, as defined by the YANG model. Hence, extra consideration needs to be given to dependencies inside the device model. If configuration item A depends on configuration item B in the device's configuration, pulling only A may fail due to the unsatisfied dependency on B. In this case, both A and B need to be pulled, even if the service is only interested in the value of A.

It is important to note that partial-sync-from and partial-sync-to do not update the transaction ID of the device (when pushing) or of NSO (when pulling), unless the whole configuration has been selected (e.g., /ncs:devices/ncs:device[ncs:name='ex0']/ncs:config).

Partial sync-from

Pulling the configuration from the network needs to be initiated outside the service code. At the same time, the list of configuration subtrees required by a certain service should be maintained by the service developer. Hence, it is good practice for such a service to implement a wrapper action that invokes the generic /devices/partial-sync-from action with the correct list of paths. The user or application that manages the service then only needs to invoke the wrapper action, without needing to know which parts of the configuration the service is interested in.
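Such a wrapper could be sketched in Python. The build_paths helper below is an invented illustration, and partial_sync_from assumes the NSO Python API (the ncs package) is available inside an NSO Python VM; treat the maagic invocation pattern as a sketch rather than verified code:

```python
# Sketch of a wrapper around the generic /devices/partial-sync-from action.
# build_paths is an invented helper; the subtrees mirror the Java example.

def build_paths(subtrees):
    """Build the keypaths for the action's 'path' leaf-list from
    (device-name, config-subtree) pairs."""
    return ["/ncs:devices/ncs:device[ncs:name='%s']/ncs:config%s" % (dev, sub)
            for dev, sub in subtrees]

def partial_sync_from(subtrees):
    # Assumption: running in an NSO Python VM where 'ncs' is importable.
    import ncs
    with ncs.maapi.single_read_trans('admin', 'system') as t:
        root = ncs.maagic.get_root(t)
        action = root.devices.partial_sync_from
        inp = action.get_input()
        inp.path = build_paths(subtrees)  # leaf-list of instance keypaths
        return action(inp)

# The path list for the two subtrees used in the Java example:
paths = build_paths([
    ("ex0", "/r:sys/r:interfaces/r:interface[r:name='eth0']"),
    ("ex1", "/r:sys/r:dns/r:server"),
])
```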

The snippet in Example 119, “Example of running partial-sync-from action via Java API” gives an example of running the partial-sync-from action via Java, using the "router" device from examples.ncs/getting-started/developing-with-ncs/0-router-network.

Example 119. Example of running partial-sync-from action via Java API
        ConfXMLParam[] params = new ConfXMLParam[] {
            new ConfXMLParamValue("ncs", "path", new ConfList(new ConfValue[] {
                new ConfBuf("/ncs:devices/ncs:device[ncs:name='ex0']/"
                            + "ncs:config/r:sys/r:interfaces/r:interface[r:name='eth0']"),
                new ConfBuf("/ncs:devices/ncs:device[ncs:name='ex1']/"
                            + "ncs:config/r:sys/r:dns/r:server")
            })),
            new ConfXMLParamLeaf("ncs", "suppress-positive-result")
        };
        ConfXMLParam[] result =
            maapi.requestAction(params, "/ncs:devices/ncs:partial-sync-from");