Deploying the OVA on ESXi Server
Follow these instructions to deploy a new CML VM on a VMware ESXi server. You should have already downloaded copies of the CML controller OVA and refplat ISO files to your local machine.
These instructions assume that you are familiar with deploying and managing virtual machines in VMware ESXi. If not, we recommend that you select a different deployment option. Please refer to the VMware documentation for best practices and for detailed instructions. The exact steps may vary depending on the ESXi version, whether you are using vCenter Server, and other aspects of your ESXi deployment.
Procedure
1. Upload the refplat ISO file to a datastore or content library that is co-located with the ESXi host where the CML VM will run.
2. Deploy the controller OVA file to your ESXi host to create a new CML VM.
Attention
Do not start the virtual machine!
After you have imported the OVA to VMware, you must configure the CML VM’s settings before you start it.
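If you script your VMware workflow, the upload and deploy steps above can be sketched from the command line with VMware's govc and ovftool utilities. This is only an illustration: the host name, credentials, datastore, network, and file names below are placeholders for your environment.

```shell
# Placeholder values: adjust host, credentials, datastore, and file paths.
export GOVC_URL='esxi-host.example.com'
export GOVC_USERNAME='root'
export GOVC_PASSWORD='secret'
export GOVC_INSECURE=1

# Step 1: upload the refplat ISO to a datastore visible to the ESXi host.
govc datastore.upload -ds datastore1 refplat.iso iso/refplat.iso

# Step 2: deploy the controller OVA. ovftool leaves the VM powered off
# unless --powerOn is given, which matches the warning above.
ovftool --name=cml-controller --datastore=datastore1 --diskMode=thin \
  --network='VM Network' cml2_controller.ova \
  'vi://root@esxi-host.example.com/'
```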
Configuring the Virtual Machine
Upgrade the VM’s virtual hardware compatibility, selecting the highest / latest version that is compatible with the ESXi hosts where you plan to run the CML VM.
For backwards compatibility purposes, the CML OVA’s virtual hardware compatibility may be set to an older or lower version number than the compatibility value supported by your ESXi server. If you do not upgrade the VM compatibility, some of the features supported by your ESXi version may not be available with the CML VM.
Edit or confirm the settings of the CML VM based on the following recommendations:
| Category | Property | Setting |
|---|---|---|
| CPU | Allocated vCPU Cores | Set the number of virtual cores allocated to the CML VM. The default value from the OVA file is an absolute minimum and is generally not appropriate for ESXi deployments. |
| CPU | Cores per Socket | Choose a value so that the number of sockets matches the ESXi server's underlying hardware. For example, if the ESXi server where the CML VM runs has 2 processors, ensure that Sockets shows a value of 2 after you choose the cores per socket value. |
| CPU | Shares | Choose High from the dropdown. |
| CPU | Hardware Virtualization | Check the checkbox to enable hardware-assisted virtualization. |
| CPU | Performance Counters | Check the checkbox to enable virtualized CPU performance counters. |
| Memory | Allocated Memory | Set the amount of memory allocated to the CML VM. |
| Memory | Reservation | We recommend reserving and locking the entire memory allocation for the CML VM. For example, check the Reserve all guest memory (All locked) checkbox in vCenter. If you do not reserve the memory for the CML VM, ESXi will create a swap file with a size equal to the amount of memory allocated to the VM. If you run CML on a dedicated ESXi host, no other guest needs the ESXi host's memory, and reserving memory avoids wasting this disk space. If you are running the CML VM on an ESXi host with other VMs, reserving the memory also prevents ESXi from reclaiming memory allocated to the CML VM, which can cause nodes in lab simulations to crash. |
| Memory | Shares | Choose High from the dropdown. |
| Hard Disk 1 | Disk size | Increase the default disk size. The default hard disk capacity is 32 GB, and 10 GB of that space is reserved for the underlying operating system; such a small disk is not appropriate for an ESXi deployment. The CML VM will automatically resize its file system to the initial hard disk size that you set before you start the CML VM the first time. Therefore, the best practice is to over-provision the disk space for the CML VM, and a value of 100 GB or more is recommended. Expanding the disk after you boot the CML VM the first time is also possible: see Adding or Editing Storage Volumes. In planning for disk space, note that each node in every lab for every user will also consume some space even when the lab is not running. The disk usage for each node ranges from 1 MB to more than 1 GB, depending on the node type and use case. Some custom VM images, such as the Cisco vManage VM, are known to consume 20 GB or more of disk space per node. In CML version 2.3.0 and higher, the reference platform VM images must be copied to the local disk of the CML instance. If you plan to add custom VM images for alternate versions of a Cisco reference platform or for third-party VM images, those .qcow2 files will consume additional disk space. In provisioning disk space for the CML VM, we recommend prioritizing high I/O throughput. Faster read speeds will make starting labs faster. Fast sustained write speeds are important because some VMs, such as the NX-OS 9000/9300/9500 VMs, are sensitive to write performance during their initial start-up. In general, we recommend using SSD disks and preferring RAID0 or RAID1 to RAID5. |
| CD/DVD Drive 1 | Location | Select the option from the dropdown that matches the location where you uploaded the refplat ISO file in Step 1 above. |
| CD/DVD Drive 1 | CD/DVD Media | Click Browse and choose the refplat ISO file. |
| CD/DVD Drive 1 | Status | Check the checkbox to connect the CD/DVD drive at power on. |
| CD/DVD Drive 1 | Virtual Device Node | We recommend using the first (lowest-numbered) IDE device. |
| Network Adapter 1 | Network Adapter | Select a network adapter that will permit your users to access the web-based UI that runs on the CML VM. |
| Network Adapter 1 | Status | Check the checkbox to connect the network adapter at power on. |
| Network Adapter 1 | DirectPath I/O | Check the Enable checkbox. |
| VM Options | Advanced / Latency Sensitivity | Choose High from the dropdown. |
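If you manage VM settings from the command line rather than the vSphere UI, several of the recommendations above can also be applied with govc. This is a hedged sketch: the VM name and sizes are placeholders, and the flag names should be verified against your govc version with `govc vm.change -h`.

```shell
# Placeholder VM name and sizes. Allocate 16 vCPUs and 64 GB of RAM, and
# expose hardware-assisted virtualization and virtualized CPU performance
# counters to the guest.
govc vm.change -vm cml-controller -c 16 -m 65536 \
  -nested-hv-enabled=true -vpmc-enabled=true

# Reserve the full memory allocation (in MB) so that ESXi does not
# reclaim memory from the CML VM or create a large swap file.
govc vm.change -vm cml-controller -mem.reservation 65536
```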
This step enables the use of External Connectivity with bridged networking. It applies to the first interface, which is used to access the UI, as well as to any additional interfaces that you add to the VM.
On the ESXi host where the CML VM runs, configure the Advanced System Settings to set:
Net.ReversePathFwdCheck = 1
Net.ReversePathFwdCheckPromisc = 1
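These two advanced settings can also be applied from the ESXi host's shell (for example, over SSH) instead of the vSphere UI:

```shell
# Set the advanced system settings on the ESXi host where the CML VM runs.
esxcli system settings advanced set -o /Net/ReversePathFwdCheck -i 1
esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1
```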
For each port group and vSwitch that is used by a CML VM interface, set:
Promiscuous Mode = Accept
Forged Transmits = Accept
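For standard (non-distributed) vSwitches, the same security policy can be set from the ESXi shell. The switch and port group names below are placeholders; adjust them for your environment.

```shell
# Allow promiscuous mode and forged transmits on a standard vSwitch
# (placeholder name vSwitch0) and on a specific port group.
esxcli network vswitch standard policy security set -v vSwitch0 \
  --allow-promiscuous=true --allow-forged-transmits=true
esxcli network vswitch standard portgroup policy security set -p 'VM Network' \
  --allow-promiscuous=true --allow-forged-transmits=true
```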
We recommend configuring the advanced system settings before enabling promiscuous mode. If promiscuous mode was already enabled, disable and then re-enable it on each affected port group, or reboot the ESXi host, so that the new settings take effect.
In a cluster setup, only the controller VM is used to pass traffic outside of CML labs, and this step is not required for cluster compute host VMs. The vSwitch settings are not needed on the intra-cluster network.
For more information on the Net.ReversePathFwdCheckPromisc setting, see https://kb.vmware.com/s/article/59235.
(Optional) If you are deploying cluster compute host VMs, you may want to clone the compute VMs from the first compute host VM that you deploy. Note that you must clone the compute host VM now, before you start it for the first time. Cloning a VM after it has been started will cause conflicts and faults during cluster formation.
You may also deploy each compute host VM using the same steps as above. Compute VMs do not need to have the refplat ISO attached.
Start the virtual machine.
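If you used govc for the earlier steps, the final power-on can be done the same way (the VM name is a placeholder):

```shell
# Power on the fully configured CML VM.
govc vm.power -on cml-controller
```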
You now have a CML virtual machine that is defined and configured in VMware ESXi.
Once you have configured the VM settings and started the VM, you are ready to complete the Initial Set-up within the running VM.