External Connectors

In CML labs, users can employ External Connector nodes to link lab nodes to networks in other labs or outside of the CML server. The node’s single interface may be connected by a link directly to a lab node, or to an (unmanaged) switch that provides connectivity to the other connected nodes.

Note

Before using the external connector with your CML labs, ensure that you have configured the required settings for your CML server. See the appropriate page for your deployment type, and check any settings related to external connectivity.

The values available as the External Connector node’s configuration, such as System Bridge or NAT, each point to a bridge interface on the CML server. A newly created External Connector node is configured as NAT, using the bridge virbr0. The System Bridge uses a bridge named bridge0, which also contains the main interface of the CML server, as configured during initial post-installation setup.

Additional L2 and L3 bridges may be added to the CML host using this guide.

The use of External Connector nodes is documented in the External Connectivity section of the User’s Guide.

Names and Types of External Connector Bridges

It is important to know that the CML software only recognizes certain bridge device names as intended for use as External Connectors; any name that falls outside the recognized patterns is ignored and may be used for other purposes on the host.

All recognized bridge device names have two parts:

  1. the intended kind of the bridge: bridge, vlan, virbr or local

  2. one to four digits 0 to 9

Note

The name bridge on its own is not recognized, and vlan1 and vlan0001 are both valid names for two different bridges that may exist at the same time; it is up to administrators to choose a consistent naming style and apply it throughout.
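As a quick sanity check, the naming rule described above can be expressed as a shell pattern. This is only a sketch that mirrors the documented rule; it is not CML’s actual validation code:

```shell
#!/usr/bin/env bash
# Sketch: check whether a bridge device name matches the pattern CML
# recognizes for External Connectors (kind + one to four digits).
is_connector_name() {
    [[ "$1" =~ ^(bridge|vlan|virbr|local)[0-9]{1,4}$ ]]
}

for name in bridge0 vlan1 vlan0001 virbr0 local12 bridge vlan12345; do
    if is_connector_name "$name"; then
        echo "$name: recognized"
    else
        echo "$name: ignored"
    fi
done
```

Note that bridge (no digits) and vlan12345 (five digits) fall outside the pattern and are ignored by CML.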

From the External Connector lab node’s perspective, the device names are irrelevant: the node connects to any of these bridges in the same manner. The kind of bridge can, however, affect the default settings of bridges as they are registered with the system. The administrator should also configure a distinctive label for each External Connector so that users can easily identify it when selecting the configuration for their nodes.

The recognized kinds of bridges each follow their general intent:

bridge

An additional L2 bridge akin to the System Bridge; any IP configuration is managed outside of CML. A network interface of the CML server carries traffic outside of the host; Ethernet, VLAN, VXLAN, bond, and team interfaces are all supported as member ports.

vlan

Essentially the same as bridge, but the administrator makes the VLAN number in use visible to users. The VLAN number should match the actual tag to avoid confusion. VLAN tag handling may be performed either at the CML server or at the underlying platform level (VMware, physical server vNIC, physical switches), but not both.

virbr

Bridges akin to the default NAT bridge, which also set up a local DHCP server for a defined local IP range. Forwarding outside of the CML server is done using the host’s own routing rules, with Network Address Translation performed for IPv4 traffic initiated by lab nodes.

The default NAT DHCP IPv4 range is 192.168.255.0/24. You can change this range in the same interface used for creating new L3 networks.

local

Reserved for bridges that are local to the CML instance and are intended to connect different labs within that instance. No forwarding to the outside environment is supported.

Creating L2 External Connector Bridges

CML can use additional L2 bridge interfaces of the CML server; these must be created on the controller before they can be registered for use.

The created bridges may be of the bridge, vlan, and local kinds, depending on the name given to the bridge by the administrator.

To connect the bridge with the environment outside of CML, you must also select or create a dedicated interface and add it as a port to the bridge. An example procedure for creating a new Ethernet interface in a VM-based CML deployment is described in its own section at Adding Ethernet Interfaces. In addition, the interface may be a VLAN interface created on top of an Ethernet interface; see the section at Adding VLAN Interfaces for how to manage such interfaces.

There is generally no need for the bridge interfaces to have any IP configuration, and this procedure includes steps to disable it. If you do need to reach services on the CML server from this lab network, you may configure an IP address; you must then ensure that the host’s routing and DNS configuration is correct.

To be recognized as a bridge usable for external connectivity, the bridge must use a supported name, namely bridge followed by 1-4 digits. If you only need to connect multiple labs inside the same CML deployment, and there is no need for automatic IPv4 configuration on links between these labs, you can also use a bridge created without a member port interface. We recommend naming such bridges local, again followed by 1-4 digits.
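On a NetworkManager-based host, the steps above can be sketched with nmcli. This is an illustration under assumptions: the uplink device name (eth1) and the connection names are hypothetical, and the commands require root privileges:

```shell
# Sketch: create an externally-connected L2 bridge with the recognized
# name "bridge1", with no IP configuration on the bridge itself.
nmcli connection add type bridge ifname bridge1 con-name bridge1 \
    ipv4.method disabled ipv6.method disabled

# Add a dedicated Ethernet interface (hypothetical name: eth1) as a
# member port of the bridge.
nmcli connection add type ethernet ifname eth1 con-name bridge1-port \
    master bridge1

# A lab-to-lab bridge with no member port: recognized name "local0".
nmcli connection add type bridge ifname local0 con-name local0 \
    ipv4.method disabled ipv6.method disabled

# Activate the new connections.
nmcli connection up bridge1
nmcli connection up bridge1-port
nmcli connection up local0
```

Disabling ipv4.method and ipv6.method on the bridge connections follows the recommendation above that the bridges themselves carry no IP configuration.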

Caution

Advanced functionality! Creating new interfaces of any kind can leave your server inaccessible. Before adding additional networks to your CML server, make sure that you have console access and that you understand the network settings being modified.

In particular, when you add a new vNIC interface to the CML server, it will automatically be configured for DHCP and SLAAC, and the interface will be enabled. The same is true for any interfaces added or similarly manipulated through the System Administration Cockpit. If the connected network segment runs a DHCP server, for example, the interface will install a new default route, which is normal DHCP operation. However, this is likely to immediately disrupt or break access to the System Administration Cockpit itself through the regular primary network interface configured during deployment. You will then need to restore the network settings manually through the console.

The procedures to create and manage L2 bridges in CML are described in the sections Adding L2 Bridge External Connectors and Deleting L2 Bridge External Connectors, respectively.

The bridge, once created on the CML server, should be further configured as described in Configuration of External Connector Bridges to be usable by lab nodes.

Creating L3 DHCP Networks and External Connector Bridges

If you want lab nodes to have external connectivity, but do not want them to use an external DHCP server or static IP configuration, you can use interfaces with NAT functionality instead. Alternatively, if you want lab nodes on the same CML host to communicate only with each other and still get IP addresses assigned automatically, you can create local bridges with a local DHCP server setup.

DHCP networks exist as bridge interfaces on the CML host, just like the L2 bridges described in the previous section. Additionally, a DHCP server runs on the host and is connected to the bridge. The DHCP server assigns IPv4 addresses to any lab nodes connected to the bridge via an External Connector node, provided the nodes are configured to request one. The IP addresses are taken from a contiguous range within a dedicated IPv4 subnet prefix.

There are two kinds of DHCP networks. NAT bridges use the name virbr and have additional configuration on the CML host so that lab nodes can reach the outside network environment using the host’s own network connectivity, via Source Network Address Translation.

If you only need communication with dynamically-assigned IPv4 addresses between lab nodes on the same CML deployment, you can create local bridges using the same interface. The only difference is that NAT is not configured on these bridges.

One address from the subnet, outside of the DHCP range, is reserved for the bridge interface itself; it is the address of the DHCP server and the gateway used for NAT. For this reason, each DHCP network subnet must not overlap with any other network on the CML host.
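As a worked example, here is how a /24 DHCP network commonly breaks down. The .1 gateway and .2-.254 range shown are assumptions based on typical libvirt-style defaults, not CML’s exact values; CML lets you set the actual range in its UI:

```shell
# Illustration only: the conventional layout of a /24 DHCP network.
subnet="192.168.255.0/24"
base="${subnet%.*}"         # strip ".0/24" -> "192.168.255"
gateway="${base}.1"         # bridge address: DHCP server + NAT gateway
range_start="${base}.2"     # first assignable lab-node address
range_end="${base}.254"     # last assignable lab-node address
echo "gateway=$gateway dhcp=${range_start}-${range_end}"
```

The bridge address (.1 here) is the one reserved address outside the DHCP range described above.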

Caution

On the CML host it is possible to configure other interfaces that clash with the IP ranges used for DHCP networks, including via the System Administration Cockpit. Take great care not to introduce conflicting routes within the host.

The one exception where an IP subnet is allowed to overlap with another DHCP network’s subnet is when the smaller network lies completely within the larger one, and the larger network does not use any address from the smaller network as its network, CML host, DHCP range, or broadcast address. Both subnets will be routed correctly in this case. However, we still do not recommend configuring such subnets, as this is error-prone.

IPv6 as well as L2 traffic is forwarded within the DHCP network bridge between lab nodes if they send such traffic. Bridge protection is disabled by default. None of this traffic will be forwarded outside by NAT.

The procedures to create and manage DHCP networks in CML are described in the sections Adding DHCP Network Bridges and Editing and Deleting DHCP Network Bridges, respectively.

The bridge, once created on the CML server, should be further configured as described in Configuration of External Connector Bridges to be usable by lab nodes.

Details on connecting lab nodes via External Connectors

When an External Connector node is started, the mapped bridge must exist, but nothing further is done with the bridge. Once the link connected to that node is started, which can only happen when both linked nodes are started, the fabric service creates a virtual interface connected to the bridge on one end, with traffic relayed through the fabric to the linked node’s interface.
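Conceptually, the effect is similar to attaching one end of a veth pair to the bridge. This is a sketch only, not CML’s actual fabric implementation, and all interface names are hypothetical; the fabric service manages this itself:

```shell
# Conceptual sketch only -- do not run this on a CML host; the fabric
# service creates and manages the real interfaces.
ip link add ext0-tap type veth peer name ext0-fab  # virtual interface pair
ip link set ext0-tap master bridge0                # one end joins the bridge
ip link set ext0-tap up
ip link set ext0-fab up                            # other end feeds the fabric
```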

The MTU of the fabric-created interfaces is set to 9000, which allows jumbo frames to be transferred over the external connector. Since Linux bridges use the lowest MTU value of any of their members as the effective MTU, the outside interface of the CML server that carries the traffic out of the host has the final say on the effective value. The lab nodes’ interfaces start with the common default MTU (1500), so to fully support a larger MTU across an external network, the configuration inside the nodes must raise these defaults accordingly.
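The lowest-member-MTU rule can be illustrated directly. The uplink interface name in the comment is hypothetical:

```shell
# A Linux bridge's effective MTU is the minimum MTU of its member ports.
# If the uplink port still has the default 1500 (it could be raised with
# e.g. "ip link set dev eth1 mtu 9000", eth1 being a hypothetical uplink),
# jumbo frames from the fabric-created 9000-byte interfaces are capped.
mtus=(9000 1500 9000)       # fabric interface, uplink port, another member
effective=${mtus[0]}
for m in "${mtus[@]}"; do
    if (( m < effective )); then effective=$m; fi
done
echo "effective bridge MTU: $effective"
```

Here the 1500-byte uplink caps the bridge at an effective MTU of 1500 despite the 9000-byte fabric interfaces.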