When deploying applications to a data center, certain elements of common configuration are consumed differently by different operational teams. Traditionally, each team would rely on this information being passed along manually in an ad-hoc manner. In a modern Infrastructure-as-Code (IaC) approach, this configuration can be defined once and then applied automatically and consistently across multiple domains such as networking, infrastructure, and security.
In this example, we are using a single Terraform plan to deploy a set of new DC networks, clone and configure virtual machines to use these networks and then create matching firewall objects and rules – all from the same simplified intent defined as a minimal set of variables.
Finally, we will use the Intersight Service for Terraform to provide Terraform Cloud with secure managed API access to traditionally isolated domain managers within the on-premises data center. This will allow Terraform to reach both public and private infrastructure equally and build a consistent hybrid cloud infrastructure operating model.
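As an illustrative sketch of how one plan spans multiple domain managers, the root module declares one provider per domain. The provider sources below are from the public Terraform Registry; the variable names are assumptions for illustration, and the FMC provider is declared in the 2nd, linked workspace described later.

terraform {
  required_providers {
    dcnm = {
      source = "CiscoDevNet/dcnm"
    }
    vsphere = {
      source = "hashicorp/vsphere"
    }
  }
}

# Each provider authenticates to its own on-premises domain manager;
# credentials would be supplied as sensitive workspace variables.
provider "dcnm" {
  url      = var.dcnm_url        # assumed variable, e.g. https://dcnm.example.com
  username = var.dcnm_username   # assumed variable
  password = var.dcnm_password   # assumed variable
}

provider "vsphere" {
  vsphere_server       = var.vcenter_server    # assumed variable
  user                 = var.vcenter_username  # assumed variable
  password             = var.vcenter_password  # assumed variable
  allow_unverified_ssl = true
}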
The Infrastructure-as-Code environment will require the following:
This example will then use the following on-premises domain managers. These will need to be fully commissioned, with a suitable user account provided for Terraform to use for provisioning.
Note: The FMC security automation component has been moved to a 2nd GitHub repository and is now run from a 2nd, linked Terraform workspace. This change was required to allow the vCenter component to clone any number (count) of VMs; as a result, the FMC module cannot predict the number of resources required until after the vCenter module has been executed. Any updates to the main DCNM/vCenter workspace will trigger the FMC workspace to run and update as necessary.
The DC Networking module makes the following assumptions:
The vCenter module makes the following assumptions:
Note: The FMC security automation component has been moved to a 2nd GitHub repository and will now be run from a 2nd, linked Terraform workspace.
https://github.com/cisco-apjc-cloud-se/ist-dcn-vcenter
https://github.com/cisco-apjc-cloud-se/ist-vm-fmc-sync
In Terraform Cloud for Business, queue a new plan to trigger the initial deployment. Any future changes pushed to the GitHub repository will automatically trigger a new plan deployment.
If successfully executed, the Terraform plan will result in the following configuration for each domain manager.
Changes to the variables defined in the JSON files will result in dynamic, stateful updates across all domains. For example:
"vm_group_a": {
"group_size": 4,
"name": "ist-svr-a",
"host_name": "ist-svr-a",
"num_cpus": 2,
"memory": 1024,
"network_id": "ist-network-a",
"domain": "mel.ciscolabs.com",
"dns_list": [
"64.104.123.245",
"171.70.168.183"
]
},
"svr_cluster": {
"DC3-N9K1": {
"name": "DC3-N9K1",
"attach": true,
"switch_ports": [
"Ethernet1/1", # Host 1 Uplink Port 1
"Ethernet1/2" # Host 2 Uplink Port 1
]
},
"DC3-N9K2": {
"name": "DC3-N9K2",
"attach": true,
"switch_ports": [
"Ethernet1/1", # Host 1 Uplink Port 2
"Ethernet1/2" # Host 2 Uplink Port 2
]
}
},
"svr_cluster": {
"DC3-N9K1": {
"name": "DC3-N9K1",
"attach": true,
"switch_ports": [
"Ethernet1/1" # Host 1 Uplink Port 1
]
},
"DC3-N9K2": {
"name": "DC3-N9K2",
"attach": true,
"switch_ports": [
"Ethernet1/1" # Host 1 Uplink Port 2
]
},
"DC3-N9K3": {
"name": "DC3-N9K3",
"attach": true,
"switch_ports": [
"Ethernet1/1" # Host 2 Uplink Port 1
]
},
"DC3-N9K4": {
"name": "DC3-N9K2",
"attach": true,
"switch_ports": [
"Ethernet1/1" # Host 2 Uplink Port 2
]
}
},
"vpc_interfaces": {
"vpc5": {
"name": "vPC5",
"vpc_id": 5,
"switch1": {
"name": "DC3-LEAF-1",
"ports": ["Eth1/5"]
},
"switch2": {
"name": "DC3-LEAF-2",
"ports": ["Eth1/5"]
}
}
},
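To illustrate how the "vm_group_a" variables above drive the vCenter module, here is a minimal sketch of the cloning logic. The resource arguments follow the hashicorp/vsphere provider; the data-source lookups for the template, resource pool, datastore, and port group are assumptions, and "group_size" feeding "count" is also why the linked FMC workspace cannot know the number of host objects until this module has run.

resource "vsphere_virtual_machine" "group_a" {
  count = var.vm_group_a.group_size  # one variable controls the number of clones

  name             = format("%s-%02d", var.vm_group_a.name, count.index + 1)
  num_cpus         = var.vm_group_a.num_cpus
  memory           = var.vm_group_a.memory
  resource_pool_id = data.vsphere_resource_pool.pool.id  # assumed lookup
  datastore_id     = data.vsphere_datastore.ds.id        # assumed lookup

  network_interface {
    # Port group matching the "network_id" variable, e.g. "ist-network-a"
    network_id = data.vsphere_network.group_a.id         # assumed lookup
  }

  disk {
    label = "disk0"
    size  = 20  # illustrative disk size in GB
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id  # assumed template
    customize {
      linux_options {
        host_name = format("%s-%02d", var.vm_group_a.host_name, count.index + 1)
        domain    = var.vm_group_a.domain
      }
      network_interface {}
      dns_server_list = var.vm_group_a.dns_list
    }
  }
}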
This repository is intended for use as a 2nd, linked Terraform workspace. The 1st, primary workspace (using the ist-dcn-vcenter GitHub repository) will share its output state with this workspace and be configured to trigger this workspace to run.
The output of the 1st workspace includes two group objects, one for each group of managed VMs. This Terraform plan will use these group details to generate host objects and network group objects dynamically.
Note: This workspace expects to be triggered from another workspace using the "ist-dcn-vcenter" GitHub repository. That workspace must be configured first, then set to share its state with this workspace and to trigger this workspace to run.
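A minimal sketch of how this workspace consumes the primary workspace's state, assuming the organization name and the shape of the shared output (a map of VM name to IP address named "vm_hosts") for illustration:

data "terraform_remote_state" "primary" {
  backend = "remote"

  config = {
    organization = "example-org"      # assumed organization name
    workspaces = {
      name = "ist-dcn-vcenter"        # primary workspace
    }
  }
}

# One FMC host object per VM reported by the primary workspace.
resource "fmc_host_objects" "vm_hosts" {
  for_each = data.terraform_remote_state.primary.outputs.vm_hosts  # assumed output shape

  name  = each.key
  value = each.value
}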
The Infrastructure-as-Code environment will require the following:
This example will then use the following on-premises domain managers. These will need to be fully commissioned, with a suitable user account provided for Terraform to use for provisioning.
The Firepower (FMC) module makes the following assumptions:
Note: The FMC security automation component has been moved to a 2nd GitHub repository and will now be run from a 2nd, linked Terraform workspace.
https://github.com/cisco-apjc-cloud-se/ist-dcn-vcenter
https://github.com/cisco-apjc-cloud-se/ist-vm-fmc-sync
Any successful run in the primary "ist-dcn-vcenter" workspace will trigger this workspace to run. Any future changes pushed to this GitHub repository or the primary workspace repository will automatically trigger a new plan deployment.
If successfully executed, the Terraform plan will result in the following configuration for each domain manager.
New network group objects for each group of VM servers
Note: The FMC provider has an issue removing objects from network groups. As a workaround, the IP addresses of the host objects will be used instead as literal objects in the group.
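A minimal sketch of that workaround, reusing the host objects from the sketch above and placing their IP addresses into the group as literals (the group name is an assumption):

resource "fmc_network_group_objects" "vm_group_a" {
  name = "ist-svr-a-group"  # assumed group name

  # Literal IP values instead of references to the host objects, to avoid
  # the provider issue with removing member objects from a group.
  dynamic "literals" {
    for_each = [for h in fmc_host_objects.vm_hosts : h.value]
    content {
      value = literals.value
      type  = "Host"
    }
  }
}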
This is a simpler version of the “Integrated DC Network, Infrastructure & Security Automation” use case that focuses solely on DCNM and vCenter networking automation, specifically the automation of a DCNM-based VXLAN EVPN fabric connecting to a VMware ESXi cluster with a Distributed Virtual Switch.
The Infrastructure-as-Code environment will require the following:
This example will then use the following on-premises domain managers. These will need to be fully commissioned, with a suitable user account provided for Terraform to use for provisioning.
The DC Networking module makes the following assumptions:
The vCenter module makes the following assumptions:
https://github.com/cisco-apjc-cloud-se/ist-vcenter-dcnm
Note (October 2021): In this example, both VLAN IDs and VXLAN IDs have been explicitly set. These are optional parameters and can be removed, leaving DCNM to allocate IDs dynamically from the fabric's resource pools. However, if you choose to have DCNM do this, Terraform MUST be configured to use a "parallelism" value of 1. This ensures Terraform will only attempt to configure one resource at a time, allowing DCNM to allocate IDs from the pool sequentially.
Typically, parallelism would be set in the Terraform Cloud workspace's environment variables using the variable name "TFE_PARALLELISM" and a value of "1"; however, this variable is NOT used by Terraform Cloud Agents. Instead, the variables "TF_CLI_ARGS_plan" and "TF_CLI_ARGS_apply" must be used with a value of "-parallelism=1".
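For example, the two environment variables as they would appear in the workspace's settings:

TF_CLI_ARGS_plan  = "-parallelism=1"
TF_CLI_ARGS_apply = "-parallelism=1"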
Note (October 2021): Due to an issue with the Terraform provider (version 1.0.0) and the DCNM API (11.5(3)), the "dcnm_network" resource will not deploy Layer 3 SVIs. This is due to a default parameter not being correctly set in the API call. Instead, the network will be deployed as if the template had the "Layer 2 Only" checkbox set.
There are two workarounds for this:
1. After deploying the network(s), edit the network from the DCNM GUI and then immediately save it. This will set the correct default parameters, and the networks can then be re-deployed.
2. Instead of using the "Default_Network_Universal" template, clone and modify it as below. Make sure to set the correct template name in the Terraform plan under the dcnm_network resource. Please note that the tag value of 12345 must also be explicitly set.
Original Lines #119-#123
if ($$isLayer2Only$$ != "true") {
  interface Vlan$$vlanId$$
  if ($$intfDescription$$ != "") {
    description $$intfDescription$$
  }
Modified Lines #119-#125
if ($$isLayer2Only$$ == "true") {
}
else {
  interface Vlan$$vlanId$$
  if ($$intfDescription$$ != "") {
    description $$intfDescription$$
  }
{
  "vcenter_dc": "CPOC-HX",
  "vcenter_dvs": "CPOC-SE-VC-HX",
  "dcnm_fabric": "DC3",
  "dcnm_vrf": "GUI-VRF-1",
  "cluster_interfaces": {
    "DC3-LEAF-1": {
      "name": "DC3-LEAF-1",
      "attach": true,
      "switch_ports": [
        "Ethernet1/11"
      ]
    },
    "DC3-LEAF-2": {
      "name": "DC3-LEAF-2",
      "attach": true,
      "switch_ports": [
        "Ethernet1/11"
      ]
    }
  },
  "cluster_networks": {
    "IST-NETWORK-1": {
      "name": "IST-NETWORK-1",
      "description": "Terraform Intersight Demo Network #1",
      "ip_subnet": "192.168.1.1/24",
      "vni_id": 32101,
      "vlan_id": 2101,
      "deploy": true
    },
    "IST-NETWORK-2": {
      "name": "IST-NETWORK-2",
      "description": "Terraform Intersight Demo Network #2",
      "ip_subnet": "192.168.2.1/24",
      "vni_id": 32102,
      "vlan_id": 2102,
      "deploy": true
    }
  }
}
In Terraform Cloud for Business, queue a new plan to trigger the initial deployment. Any future changes pushed to the GitHub repository will automatically trigger a new plan deployment.
If successfully executed, the Terraform plan will result in the following configuration:
New Layer 3 VXLAN network(s) each with the following configuration:
New Distributed Port Groups for each VXLAN network defined above
Changes to the variables defined in the input variable files will result in dynamic, stateful updates to DCNM. For example, adding a third network to "cluster_networks" would create and deploy that network while leaving the existing networks untouched.
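As a sketch of how the "cluster_networks" variable maps onto resources, assuming the attribute names of the CiscoDevNet/dcnm and hashicorp/vsphere providers and an assumed DVS lookup:

resource "dcnm_network" "cluster" {
  for_each = var.cluster_networks

  fabric_name  = var.dcnm_fabric
  name         = each.value.name
  vrf_name     = var.dcnm_vrf
  network_id   = each.value.vni_id   # optional; omit to let DCNM allocate (parallelism=1)
  vlan_id      = each.value.vlan_id  # optional; omit to let DCNM allocate (parallelism=1)
  ipv4_gateway = each.value.ip_subnet
  description  = each.value.description
  deploy       = each.value.deploy
  tag          = "12345"             # must be set explicitly with the cloned template workaround
}

# A matching distributed port group per network on the vCenter DVS.
resource "vsphere_distributed_port_group" "cluster" {
  for_each = var.cluster_networks

  name                            = each.value.name
  distributed_virtual_switch_uuid = data.vsphere_distributed_virtual_switch.dvs.id  # assumed lookup
  vlan_id                         = each.value.vlan_id
}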