The purpose of the upper CFS node is to manage all CFS services and to push the resulting service mappings to the RFS services. The lower RFS nodes are configured as devices in the device tree of the upper CFS node, and the RFS services are created under /devices/device/config accordingly.

This is almost identical to the relation between a normal NSO node and the normal devices. However, there are differences when it comes to commit parameters and the commit queue, as well as some other LSA-specific features.
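On the upper CFS node, an RFS instance therefore appears as configuration of the lower node. For instance, a sketch assuming the rfs-vlan service used by the bundled LSA examples:

admin@upper-nso% show devices device lower-nso-1 config services vlan
vlan v1 {
    router ex0;
    iface  eth3;
    unit   3;
    vid    77;
}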
Such a design allows you to decide whether you will run the same version of NSO on all nodes or not. Since some differences arise between the two options, this document distinguishes a single version deployment from a multi version one.
Deployment of an LSA cluster where all the nodes have the same major version of NSO running is called a single version deployment. If the versions are different, then it is a multi version deployment, since the packages on the CFS node must be managed differently.
The choice between the two deployment options depends on your functional needs. The single version is easier to maintain and is a good starting point but is less flexible. While it is possible to migrate from one to the other, the migration from a single version to multi version is typically easier than the other way around. Still, every migration requires some effort, so it is best to pick one approach and stick to it.
You can find working examples of both deployment types in the examples.ncs/getting-started/developing-with-ncs/22-lsa-single-version-deployment and examples.ncs/getting-started/developing-with-ncs/28-lsa-multi-version-deployment folders, respectively.
The type of deployment does not affect the RFS nodes. In general, the RFS nodes act very much like ordinary standalone NSO instances but only support the RFS services.
Configure and set up the lower RFS nodes as you would a standalone node, by making sure the necessary NED and RFS packages are loaded and the managed network devices added. This requires you to have already decided on the distribution of devices to lower RFS nodes. The RFS packages are ordinary service packages.
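For example, loading an RFS package on a lower node follows the standard package installation pattern (the paths below are hypothetical and mirror the bundled example layout):

$ ln -sf ../../package-store/rfs-vlan lower-nso-1/packages
$ ncs_cli -u admin
> request packages reload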
The only LSA-specific requirement is that these nodes enable NETCONF communication north-bound, as this is how the upper CFS node will interact with them. To enable NETCONF north-bound, ensure that a configuration similar to the following is present in the ncs.conf of every RFS node:
<netconf-north-bound>
  <enabled>true</enabled>
  <transport>
    <ssh>
      <enabled>true</enabled>
      <ip>0.0.0.0</ip>
      <port>2022</port>
    </ssh>
  </transport>
</netconf-north-bound>
One thing to note is that you do not need to explicitly enable the commit queue on the RFS nodes, even if you intend to use LSA with the commit queue feature. The upper CFS node is aware of the LSA setup and will propagate the relevant commit flags to the lower RFS nodes automatically.
If you wish to enable the commit queue by default, that is, even for transactions originating on the RFS node (non-LSA), you are strongly encouraged to enable it globally, through the /devices/global-settings/commit-queue/enabled-by-default setting, on all the RFS nodes and, importantly, the upper CFS node too.
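A minimal sketch of enabling it globally; the same command applies on each RFS node and on the CFS node:

admin@lower-nso% set devices global-settings commit-queue enabled-by-default true
admin@lower-nso% commit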
Otherwise, you may end up in a situation where only a part of the transaction runs through the commit queue. In that case, the rollback-on-error commit queue error option will not work correctly, as it can't roll back the full original transaction but just the part that went through the commit queue. This can result in an inconsistent network state.
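If you rely on this error option, it can likewise be set globally; a sketch, assuming the global error-option setting is used rather than a per-commit parameter:

admin@lower-nso% set devices global-settings commit-queue error-option rollback-on-error
admin@lower-nso% commit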
Regardless of single or multi version deployment, the upper CFS node has the lower RFS nodes configured as devices under the /devices/device tree. The CFS node communicates with these devices through NETCONF and must have the correct ned-id configured for each lower RFS node. The ned-id is set under /devices/device/device-type/netconf/ned-id, as for any NETCONF device.

The part that is specific to LSA is the actual ned-id used. This has to be ned:lsa-netconf or a ned-id derived from it.
What is more, the ned-id depends on the deployment type. For a single version deployment, you can use the lsa-netconf value directly. This ned-id is built-in (defined in tailf-ncs-ned.yang) and available in NSO without any additional packages.
So the configuration for the RFS device in the CFS node would look similar to:
admin@upper-nso% show devices device | display-level 4
device lower-nso-1 {
    lsa-remote-node lower-nso-1;
    authgroup       default;
    device-type {
        netconf {
            ned-id lsa-netconf;
        }
    }
    state {
        admin-state unlocked;
    }
}
Notice the use of lsa-remote-node instead of the address (and port), as is usually done. This setting identifies the device as a lower-layer LSA node and instructs NSO to use the connection information provided under the cluster configuration. The value of lsa-remote-node references a cluster remote-node, such as the following:
admin@upper-nso% show cluster remote-node
remote-node lower-nso-1 {
    address   127.0.2.1;
    authgroup default;
}
In addition to the one under devices device, an authgroup value is again required here, and it refers to a cluster authgroup, not the device one. Both authgroups must be configured correctly for LSA to function.
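A minimal sketch of the two authgroups, assuming the admin/admin credentials used by the bundled examples and that the cluster authgroup uses the same default-map structure as the device one (adjust to your environment):

admin@upper-nso% set devices authgroups group default default-map remote-name admin remote-password admin
admin@upper-nso% set cluster authgroup default default-map remote-name admin remote-password admin
admin@upper-nso% commit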
Having added the device and cluster configuration for all RFS nodes, you should update the SSH host keys for both the /devices/device and /cluster/remote-node paths. For example:
admin@upper-nso% request devices device lower-nso-* ssh fetch-host-keys
admin@upper-nso% request cluster remote-node lower-nso-* ssh fetch-host-keys
Moreover, the RFS NSO nodes may have extra configuration that is not visible to the CFS node, resulting in out-of-sync behavior. You are therefore strongly encouraged to set the out-of-sync-commit-behaviour value to accept, with a command such as:
admin@upper-nso% set devices device lower-nso-* out-of-sync-commit-behaviour accept
At the same time, you should also enable /cluster/device-notifications, which will allow the CFS node to receive the forwarded device notifications from the RFS nodes, and /cluster/commit-queue, to enable the commit queue support for LSA. Without the latter, you will not be able to use the commit commit-queue async command, for example.
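For example (the same commands appear in the example walkthrough later in this section):

admin@upper-nso% set cluster device-notifications enabled
admin@upper-nso% set cluster commit-queue enabled
admin@upper-nso% commit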
If you wish to enable the commit queue by default, you should do so by setting /devices/global-settings/commit-queue/enabled-by-default on the CFS node. Do not use per-device or per-device-group configuration, for the same reason you should avoid it on the RFS nodes.
If you plan a single version deployment, the preceding steps are sufficient. For a multi version deployment, on the other hand, there are two additional tasks to perform.
First, you will need to install the correct cisco-nso LSA NED package (or packages, if you need to support more versions). Each NSO release includes these packages, specifically tailored for LSA. They are used by the upper CFS node if the lower RFS nodes are running a different version than the CFS node itself. The packages are named cisco-nso-nc-X.Y, where X.Y are the two most significant numbers of the NSO release (the major version) that the package supports. So, if your RFS nodes are running NSO 5.7.2, for example, you should use cisco-nso-nc-5.7.
These packages are found in the $NCS_DIR/packages/lsa directory. Each package contains the complete model of the ncs namespace for the corresponding NSO version, compiled as an LSA NED.
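Listing the directory shows one package per supported lower-node major version; the exact set depends on your NSO release, so the output below is only indicative:

$ ls $NCS_DIR/packages/lsa
cisco-nso-nc-5.4  cisco-nso-nc-5.5  cisco-nso-nc-5.6  cisco-nso-nc-5.7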
Please always use the cisco-nso package included with the NSO version of the upper CFS node, and not some older variant (such as the one from the lower RFS node), as it may not work correctly.
Second, installing the cisco-nso LSA NED package will make the corresponding ned-id available, such as cisco-nso-nc-5.7 (the ned-id matches the package name). Use this ned-id for the RFS nodes instead of lsa-netconf. For example:
admin@upper-nso% show devices device | display-level 4
device lower-nso-1 {
    lsa-remote-node lower-nso-1;
    authgroup       default;
    device-type {
        netconf {
            ned-id cisco-nso-nc-5.7;
        }
    }
    state {
        admin-state unlocked;
    }
}
This configuration allows the CFS node to communicate with a different NSO version, but there are still some limitations. The upper CFS node must have the same or newer version than the managed RFS nodes. For all the currently supported versions of the lower node, the packages can be found in the $NCS_DIR/packages/lsa directory, but you may also be able to build an older one yourself.
In case you already have a single version deployment using the lsa-netconf ned-ids, you can use the NED migrate procedure to switch to the new ned-id and a multi version deployment.
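A sketch of the migration command, assuming the device compiled packages for both the old and the new ned-id are already loaded (the concrete scenario is walked through at the end of this section):

admin@upper-nso% request devices device lower-nso-1 migrate new-ned-id cisco-nso-nc-5.7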
Besides adding managed lower-layer nodes, the upper-layer node also requires packages for the services. Obviously, you must add the CFS package, which is an ordinary service package, to the CFS node. But you must also provide the device compiled RFS YANG models to allow provisioning of RFSs on the remote RFS nodes.
The process resembles the way you create and compile device YANG models in normal NED packages. The ncs-make-package tool provides the --lsa-netconf-ned option, where you specify the location of the RFS YANG model, and the tool creates a NED package for you.
This is a new package that is separate from the RFS package used on the RFS nodes, so you might want to name it differently to avoid confusion. The following text uses the "-ned" suffix.
Usually, you would also provide the --no-netsim, --no-java, and --no-python switches to the invocation, as the package is used with the NETCONF protocol and doesn't need any additional code. The --no-netsim option is required because netsim is not supported for these types of packages. For example:
ncs-make-package --no-netsim --no-java --no-python \
    --lsa-netconf-ned ./path/to/rfs/src/yang \
    myrfs-service-ned
In this case, there is no explicit --lsa-lower-nso option specified, and ncs-make-package will by default set up the package for the single version deployment, tied to the lsa-netconf ned-id. That means the models in the NED can be used with devices that have the lsa-netconf ned-id configured.
To compile it for the multi version deployment, which uses a different ned-id, you must select the target NSO version with the --lsa-lower-nso cisco-nso-nc-X.Y option, for example:
ncs-make-package --no-netsim --no-java --no-python \
    --lsa-netconf-ned ./path/to/rfs/src/yang \
    --lsa-lower-nso cisco-nso-nc-5.7 myrfs-service-ned
Depending on the RFS model, the package may fail to compile, even though the model compiles fine as a service. A typical error would indicate that some node from a module, such as tailf-ncs, is not found.
The reason is that the original RFS service YANG model has dependencies on other YANG models that are not included in the compilation process. One solution to this problem is to remove the dependencies in the YANG model before compilation. Normally, this can be solved by changing the datatype in the NED compiled copy of the YANG model, for example from leafref or instance-identifier to string.
This is only needed for the NED compiled copy; the lower RFS node YANG model can remain the same. There will then be an implicit conversion between types, at runtime, in the communication between the upper CFS node and the lower RFS node.
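As an illustration, a hypothetical sketch of such an edit, assuming the RFS model contains a leaf that references managed device names:

// Original type in the RFS model on the lower node:
leaf router {
  type leafref {
    path "/ncs:devices/ncs:device/ncs:name";
  }
}

// Relaxed type in the NED compiled copy on the upper CFS node:
leaf router {
  type string;
}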
An alternate solution, if you are doing a single version deployment and there are dependencies on the tailf-ncs namespace, is to switch to a multi version deployment, because the cisco-nso package includes this namespace (device compiled). Here, the NSO versions match but you are still using the cisco-nso-nc-X.Y ned-id and have to follow the instructions for the multi version deployment.
Once you have both the CFS and device compiled RFS service packages ready, add them to the CFS node, then invoke a sync-from action to complete the setup process.
You can see all the required setup steps for a single version deployment performed in the example examples.ncs/getting-started/developing-with-ncs/22-lsa-single-version-deployment, while examples.ncs/getting-started/developing-with-ncs/28-lsa-multi-version-deployment has the steps for the multi version one. The two are quite similar, but the multi version deployment has additional steps, so it is the one described here.
First, build the example for manual setup.
$ make clean manual
$ make start-manual
$ make cli-upper-nso
Then configure the nodes in the cluster. This is needed so that the upper CFS node can receive notifications from the lower RFS node and prepare the upper CFS node to be used with the commit-queue.
> configure
% set cluster device-notifications enabled
% set cluster remote-node lower-nso-1 authgroup default username admin
% set cluster remote-node lower-nso-1 address 127.0.0.1 port 2023
% set cluster remote-node lower-nso-2 authgroup default username admin
% set cluster remote-node lower-nso-2 address 127.0.0.1 port 2024
% set cluster commit-queue enabled
% commit
% request cluster remote-node lower-nso-* ssh fetch-host-keys
To be able to handle the lower NSO node as an LSA node, the correct version of the cisco-nso-nc package needs to be installed. In this example, 5.4 is used.
Create a link to the cisco-nso package in the packages directory of the upper CFS node:
$ ln -sf ${NCS_DIR}/packages/lsa/cisco-nso-nc-5.4 upper-nso/packages
Reload the packages:
% exit
> request packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
reload-result {
    package cisco-nso-nc-5.4
    result true
}
Now that the cisco-nso-nc package is in place, configure the two lower NSO nodes and sync-from them:
> configure
Entering configuration mode private
% set devices device lower-nso-1 device-type netconf ned-id cisco-nso-nc-5.4
% set devices device lower-nso-1 authgroup default
% set devices device lower-nso-1 lsa-remote-node lower-nso-1
% set devices device lower-nso-1 state admin-state unlocked
% set devices device lower-nso-2 device-type netconf ned-id cisco-nso-nc-5.4
% set devices device lower-nso-2 authgroup default
% set devices device lower-nso-2 lsa-remote-node lower-nso-2
% set devices device lower-nso-2 state admin-state unlocked
% commit
Commit complete.
% request devices fetch-ssh-host-keys
fetch-result {
    device lower-nso-1
    result updated
    fingerprint {
        algorithm ssh-ed25519
        value 4a:c6:5d:91:6d:4a:69:7a:4e:0d:dc:4e:51:51:ee:e2
    }
}
fetch-result {
    device lower-nso-2
    result updated
    fingerprint {
        algorithm ssh-ed25519
        value 4a:c6:5d:91:6d:4a:69:7a:4e:0d:dc:4e:51:51:ee:e2
    }
}
% request devices sync-from
sync-result {
    device lower-nso-1
    result true
}
sync-result {
    device lower-nso-2
    result true
}
Now, for example, the configured devices of the lower nodes can be viewed:
% show devices device config devices device | display xpath | display-level 5
/devices/device[name='lower-nso-1']/config/ncs:devices/device[name='ex0']
/devices/device[name='lower-nso-1']/config/ncs:devices/device[name='ex1']
/devices/device[name='lower-nso-1']/config/ncs:devices/device[name='ex2']
/devices/device[name='lower-nso-2']/config/ncs:devices/device[name='ex3']
/devices/device[name='lower-nso-2']/config/ncs:devices/device[name='ex4']
/devices/device[name='lower-nso-2']/config/ncs:devices/device[name='ex5']
or alarms inspected:
% run show devices device lower-nso-1 live-status alarms summary
live-status alarms summary indeterminates 0
live-status alarms summary criticals 0
live-status alarms summary majors 0
live-status alarms summary minors 0
live-status alarms summary warnings 0
Now, create a NETCONF package on the upper CFS node that can be used towards the rfs-vlan service on the lower RFS node. In the shell terminal window, do the following:
$ ncs-make-package --no-netsim --no-java --no-python \
    --lsa-netconf-ned package-store/rfs-vlan/src/yang \
    --lsa-lower-nso cisco-nso-nc-5.4 \
    --package-version 5.4 --dest upper-nso/packages/rfs-vlan-nc-5.4 \
    --build rfs-vlan-nc-5.4
The created NED is an lsa-netconf NED based on the YANG files of the rfs-vlan service:
--lsa-netconf-ned package-store/rfs-vlan/src/yang
The version of the NED reflects the NSO version on the lower node:
--package-version 5.4
The package will be generated in the packages directory of the upper NSO CFS node:
--dest upper-nso/packages/rfs-vlan-nc-5.4
and the name of the package will be:
rfs-vlan-nc-5.4
Install the cfs-vlan service on the upper CFS node. In the shell terminal window do the following:
$ ln -sf ../../package-store/cfs-vlan upper-nso/packages
Reload the packages once more to get the cfs-vlan package. In the CLI terminal window do the following:
% exit
> request packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
reload-result {
    package cfs-vlan
    result true
}
reload-result {
    package cisco-nso-nc-5.4
    result true
}
reload-result {
    package rfs-vlan-nc-5.4
    result true
}
> configure
Entering configuration mode private
Now, when all packages are in place, a cfs-vlan service can be configured. The cfs-vlan service dispatches service data to the right lower RFS node, depending on the device names used in the service.
In the CLI terminal window verify the service:
% set cfs-vlan v1 a-router ex0 z-router ex5 iface eth3 unit 3 vid 77
% commit dry-run
.....
local-node {
    data devices {
             device lower-nso-1 {
                 config {
                     services {
                +        vlan v1 {
                +            router ex0;
                +            iface eth3;
                +            unit 3;
                +            vid 77;
                +            description "Interface owned by CFS: v1";
                +        }
                     }
                 }
             }
             device lower-nso-2 {
                 config {
                     services {
                +        vlan v1 {
                +            router ex5;
                +            iface eth3;
                +            unit 3;
                +            vid 77;
                +            description "Interface owned by CFS: v1";
                +        }
                     }
                 }
             }
         }
}
.....
As ex0 resides on lower-nso-1, that part of the configuration goes there, and the ex5 part goes to lower-nso-2.
Since an LSA deployment consists of multiple NSO nodes (or HA pairs of nodes), each can be upgraded to a newer NSO version separately. While that offers a lot of flexibility, it also makes upgrades more complex in many cases. For example, performing a major version upgrade on the upper CFS node only will make the deployment Multi Version even if it was Single Version before the upgrade, requiring additional action on your part.
In general, staying with the Single Version Deployment is the simplest option and does not require any further LSA-specific upgrade action (except perhaps recompiling the packages). However, the main downside is that, at least for a major upgrade, you must upgrade all the nodes at the same time (otherwise, you no longer have a Single Version Deployment).
If that is not feasible, the solution is to run a Multi Version Deployment. Along with all of the requirements, the section called “Multi Version Deployment” describes a major difference from the Single Version variant: the upper CFS node uses a version-specific cisco-nso-nc-X.Y ned-id to refer to lower RFS nodes. That means, if you switch to a Multi Version Deployment, or perform a major upgrade of the lower-layer RFS node, the ned-id should change accordingly. However, do not change it directly; instead, follow the correct NED upgrade procedure described in the section called “NED Migration” in the Administration Guide. Briefly, the procedure consists of these steps:
- Keep the currently configured ned-id for an RFS device and the corresponding packages. If upgrading the CFS node, you will need to recompile the packages for the new NSO version.
- Compile and load the packages that are device compiled with the new ned-id, alongside the old packages.
- Use the migrate action on a device to switch over to the new ned-id.
The procedure requires you to have two versions of the device compiled RFS service packages loaded in the upper CFS node when calling the migrate action: one version compiled by referencing the old (current) ned-id and the other one by referencing the new (target) ned-id.
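You can verify that both variants are loaded before migrating; a sketch, assuming the rfs-vlan packages from the walkthrough below (both the old and the new package should be listed):

> show packages package rfs-vlan* package-version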
To illustrate, suppose you currently have an upper-layer and a lower-layer node, both running NSO 5.4. The nodes were set up as described in the Single Version Deployment option, with the upper CFS node using the tailf-ncs-ned:lsa-netconf ned-id for the lower-layer RFS node. The CFS node also uses the rfs-vlan-ned NED package for the rfs-vlan service.
Now you wish to upgrade the CFS node to NSO 5.7 but keep the RFS node on the existing version 5.4. Before upgrading the CFS node, you create a backup and recompile the rfs-vlan-ned package for NSO 5.7. Note that the package references the lsa-netconf ned-id, which is the ned-id configured for the RFS device in the CFS node's CDB. Then, you perform the CFS node upgrade as usual.
At this point, the CFS node is running the new 5.7 version and the RFS node is running 5.4. Since you now have a Multi Version Deployment, you should migrate to the correct ned-id as well. Therefore, you prepare the rfs-vlan-nc-5.4 package, as described in the Multi Version Deployment option, compile the package, and load it into the CFS node. Thanks to the NSO CDM feature, both packages, rfs-vlan-nc-5.4 and rfs-vlan-ned, can be used at the same time.
With the packages ready, you execute the devices device lower-nso-1 migrate new-ned-id cisco-nso-nc-5.4 command on the CFS node. The command configures the RFS device entry on the CFS to use the "new" cisco-nso-nc-5.4 ned-id and migrates the device configuration and service meta-data to the new model. Having completed the upgrade, you can now remove the rfs-vlan-ned package if you wish.
Later on, you may decide to upgrade the RFS node to NSO 5.6. Again, you prepare the new rfs-vlan-nc-5.6 package for the CFS node in a similar way as before, now using the cisco-nso-nc-5.6 ned-id instead of cisco-nso-nc-5.4. Next, you perform the RFS node upgrade to 5.6 and finally migrate the RFS device on the CFS node to the cisco-nso-nc-5.6 ned-id, with the migrate action.
Likewise, you can return to the Single Version Deployment by upgrading the RFS node to NSO 5.7, reusing the old rfs-vlan-ned package or preparing it anew, and migrating to the lsa-netconf ned-id.
All these ned-id changes stem from the fact that the upper-layer CFS node treats the lower-layer RFS node as a managed device, requiring the correct model, just like it does for any other device type. For the same reason, maintenance (bug fix or patch) NSO upgrades do not result in a changed ned-id, so for those no migration is necessary.