Development - Overview
1. Initial development setup
The AXP operating system hosting environment consists of a Linux-VServer layer built on top of Cisco's Linux operating system. It is a container-based environment in which each application receives a separate VServer container called a Virtual Instance (VI). The execution of an application is confined to its container. This method prevents 3rd-party software from interfering with the host operating system or with other applications running on the system. Because the VServer architecture shares a single kernel among all VIs, developers should make sure that their application is compatible with the Linux kernel provided by AXP.
Here are some basic features of AXP:
- Prevents 3rd-party software from interfering with the host operating system.
- Creates virtual instances for application separation.
- Managed through the host operating system instead of the CLI.
- Reduced troubleshooting times.
- Container-based virtualization.
- Broad-based CPU support, good resource usage, and single Linux kernel support (x86 today; future AXP releases will support additional platforms).
- 3rd-Party software cannot install kernel modules or device drivers.
- Efficient use of resources (kernel level isolation).
- Process level security.
- Processes utilize shared resources (no hard partitioning).
- Multiple 3rd-Party applications running simultaneously on a single AXP blade.
- Supports start, stop, and control for individual applications.
- Complete isolation ensures discrete health state.
- Each application runs in a separate VServer container called a Virtual Instance (VI) or Application Instance.
Third-party developers develop their applications in their own Linux environments. When an application is ready to be installed onto the blade, the developer must obtain a development authorization from Cisco in order for the application to be installable on AXP. The development authorization only needs to be requested once and can be reused for other applications the 3rd party wishes to install. Cisco provides a packaging tool that does all the work needed to package a 3rd-party application (i.e. bundling, adding the authorization, etc.) into an AXP-acceptable format. The developer can then execute a command on the blade to FTP the package onto the blade and have it installed.
After an application is installed, its status and health can be monitored using AXP commands. Each application runs in its own VServer container, and each VServer has its own shell. Developers can use this shell to start their application manually, or they can use a startup script to start the application automatically after installation completes. AXP also provides commands to stop and/or remove installed packages when needed.
1. Security isolation between applications and between an application and the host: application software cannot cause a crash of the host or of other applications. Also, if one VServer is compromised by a hacker, the host is less affected.
2. Linux library independence: when porting software to run on AXP, an application may depend on certain Linux libraries that are not compatible with the libraries provided by AXP. In these circumstances, the application can load its own guestOS environment, decoupling the 3rd-party application from the AXP guestOS environment.
3. Resource isolation: each application is given its own set of CPU, memory, and disk resource limits. An application running in a virtual environment cannot use more resources than are allocated to it. This improves application stability when multiple 3rd-party applications run in the same environment.
- Container-based technology, making use of a chroot (change-root) barrier and attaching a security context to processes.
- Virtualization of user-level processes. A single Linux kernel is used throughout the system, which keeps virtualization memory and scheduling overhead low and provides near-native OS performance.
- When application software is installed under a certain directory, e.g. /home/app1, and application processes are started under the context of that directory, /home/app1 becomes the root directory for those processes. The processes can only access directories below this root; they do not have permission to escape out of /home/app1.
- Processes running in one context cannot see processes running in another VServer context, which provides security isolation.
- Devices are not virtualized but shared among VServers, so no additional overhead is incurred for I/O operations.
- The network is not virtualized; only interface hiding is done per VServer context. Network routing tables are not virtualized; however, multiple routing tables can be configured under AXP.
- The unit of VServer context is described in AXP as a Virtual Instance (VI). This is the unit that can be installed and run under the AXP context. Application software (which can contain multiple processes and functionalities) is packaged into the SLIM format using the AXP packaging tool. The package is installed into the VI's root directory and runs within that VI's context. (Hence a 3rd-party SLIM package cannot run in multiple VIs.)
- An application running inside a VI is provided an operating environment termed the guestOS, as opposed to the hostOS, which manages all the VIs in the system.
- An application can run as root within the VI context.
- To further improve security, the hostOS's shell environment is not available; only the CLI is provided to manage the operating environment.
- Within a VI, the application developer has the choice to enable shell access to the guestOS.
- Resource limits are provided to segment system resources per Virtual Instance (VI).
- They reduce resource contention within the system, make the system behave more predictably, and reduce side effects (e.g. one application hogging memory and causing another application to fail).
- The resources managed by AXP are:
- CPU resource limits are specified based on a CPU index. This CPU index is an arbitrary number: 10000 is assigned to the 1.0 GHz Celeron M CPU used on the NME_APPRE_302-K9 application runtime engine.
- Other AXP blades' CPU performance is scaled with respect to this CPU.
- The CPU index for the AIM_APPRE is 3000.
- A percentage CPU usage figure for an application can be calculated as application CPU index / platform CPU index.
- A certain amount of CPU is reserved for the host operating system:
o AIM APPRE module: approx. 8% CPU
o NME APPRE 320 module: approx. 5% CPU
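As a sketch of the percentage calculation above: the platform index 3000 is the AIM_APPRE value stated in the text, while the application allocation of 1500 is a hypothetical example value.

```shell
# Hypothetical application allocation of CPU index 1500 on the AIM APPRE
# platform (platform CPU index 3000, as stated above).
app_index=1500
platform_index=3000
# percentage = application CPU index / platform CPU index
awk -v a="$app_index" -v p="$platform_index" 'BEGIN { printf "%.0f%%\n", a / p * 100 }'
# prints: 50%
```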
- APPRE as a platform does not make use of disk swapping; the amount of memory available to applications is limited by the physical memory found in the system.
- The RSS parameter (resident set size) in the kernel is the parameter that is limited per VI.
- The default memory allocation behavior for Linux allows memory overcommit. Memory overcommit is a feature of Linux memory management that allows applications that are memory-hungry in terms of allocation but relatively lean in actual usage to run. The downside of this approach is that an application that successfully allocated memory can still fail at run time. This memory overcommit mode is, however, necessary for applications with a heavy memory footprint to run.
- The "test memory" CLI allows this memory overcommit behavior to be changed or turned off. (This CLI is supposed to be hidden so it is best not to document it)
- Here is the actual description of the overcommit modes. The Linux kernel supports the following overcommit handling modes:
o 0 - Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. root is allowed to allocate slightly more memory in this mode. This is the default.
o 1 - Always overcommit. Appropriate for some scientific applications.
o 2 - Don't overcommit. The total address space commit for the system is not permitted to exceed swap + a configurable percentage (default is 50) of physical RAM. Depending on the percentage you use, in most situations this means a process will not be killed while accessing pages but will receive errors on memory allocation as appropriate.
The overcommit policy is set via the sysctl `vm.overcommit_memory'.
The overcommit percentage is set via `vm.overcommit_ratio'.
The current overcommit limit and amount committed are viewable in /proc/meminfo as CommitLimit and Committed_AS respectively.
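The settings described above can be inspected on any Linux system; the values printed depend on the host's configuration, so no specific output is shown.

```shell
# Current overcommit policy: 0, 1, or 2, as described above
cat /proc/sys/vm/overcommit_memory
# Percentage of physical RAM used in mode 2
cat /proc/sys/vm/overcommit_ratio
# Commit limit and currently committed address space
grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo
```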
The C language stack growth does an implicit mremap. If you want absolute guarantees and run close to the edge you MUST mmap your stack for the largest size you think you will need. For typical stack usage this does not matter much but it's a corner case if you really really care
In mode 2 the MAP_NORESERVE flag is ignored.
How It Works
The overcommit is based on the following rules:
For a file backed map
SHARED or READ-only - 0 cost (the file is the map not swap)
PRIVATE WRITABLE - size of mapping per instance
For an anonymous or /dev/zero map
SHARED - size of mapping
PRIVATE READ-only - 0 cost (but of little use)
PRIVATE WRITABLE - size of mapping per instance
Additional accounting:
o Pages made writable copies by mmap
o shmfs memory drawn from the same pool
Status:
o We account mmap memory mappings
o We account mprotect changes in commit
o We account mremap changes in size
o We account brk
o We account munmap
o We report the commit status in /proc
To Do:
o Account and check on fork
o Review stack handling/building on exec
o SHMfs accounting
o Implement actual limit enforcement
o Account ptrace pages (this is hard)
When out of memory occurs, the Linux kernel employs the oom_kill (out-of-memory kill) function to select a process to kill in order to maintain system operation.
In the VServer context, the specified memory limit is the maximum memory available to each VI context. When memory overcommit is enabled and total memory usage within a VI exceeds the specified limit, processes within the VI will get killed.
When specifying the memory limit, the granularity is in MB.
- The disk space limit caps the maximum disk space that can be used by the VI.
- Specified in MB.
- Since the application developer is the best judge of an application's maximum resource usage, control is given to the developer to specify resource usage at packaging time.
- See the packaging tool syntax for resource limit specification.
- The APPRE system keeps track of resource usage by the hostOS and all installed VIs.
- At installation time, if the resources requested by the package cannot be met, the software will not be allowed to install.
- APPRE makes use of linux-vserver's resource limit capability to enforce runtime resource limits for CPU, memory, and disk.
- For CPU limit enforcement, linux-vserver uses a token-bucket-based scheme to schedule CPU time for an entire Virtual Instance (VI).
- When a VI has run out of CPU time, based on the token bucket algorithm, the processes of that VI are set to a lower priority.
- The priority scheduler allows CPU usage to be regulated among the host and the VIs. It also eliminates wasted CPU time when the hostOS or a VI does not use its allocated amount.
- For the Phase II deliverable, the CPU and disk parameters can be altered through the CLI after installation. This provides flexibility to adjust these parameters in the field. However, due to the sensitivity of the out-of-memory issue, there is no CLI to modify the memory limit.
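The token-bucket scheme described above can be illustrated with a toy model. All numbers here are arbitrary; this is a sketch of the general algorithm, not linux-vserver's actual scheduler.

```shell
# Toy token bucket: each tick deposits `fill` tokens (capped at `burst`);
# the VI may run in a tick only when it can spend `cost` tokens.
awk 'BEGIN {
  tokens = 0; fill = 2; burst = 10; cost = 3; runs = 0
  for (tick = 1; tick <= 15; tick++) {
    tokens += fill
    if (tokens > burst) tokens = burst
    if (tokens >= cost) { tokens -= cost; runs++ }  # VI gets the CPU this tick
  }
  # with fill/cost = 2/3, the VI runs in about two thirds of the ticks
  print runs " of 15 ticks"
}'
# prints: 10 of 15 ticks
```

The fill rate relative to the cost effectively caps the VI's long-run CPU share, while the burst size lets an idle VI briefly exceed its share, which matches the "eliminates wasted CPU time" behavior noted above.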
AXP provides a Software Development Kit (SDK) to 3rd-party developers for the Linux environment. The SDK provides the following types of tools:
- Packaging utility tools (Phase 1)
o A packaging tool, pkg_build.sh, is provided to the 3rd-party developer to bundle and sign the application package. This tool requires that the application vendor has a development authorization from Cisco and access to the private keys used to sign the application.
o A bundling tool, pkg_bundle.sh, is used to combine multiple packages into a single bundle.
o Two interfaces are supported for these tools: command arguments and an interactive command-line interface.
o Refer to SFS section 3.4.8, Packaging Tools, for details and examples (Merwan/John)
- Utility tools (Phase 2)
o Library Dependency Checker tool, pkg_check.sh, checks and verifies the library dependencies of a specified package (*.pkg)
+ Note: See SFS 3.4.12 to provide more detail and example (Merwan/John)
o Package Info Tool, pkg_info.sh, is used to display package information
o RPM Extractor tool, rpm_extractor.sh, is used to extract content from an RPM package.
- CLI plugin utility tools and APIs
o AXP provides a mechanism for third-party applications' CLIs to be integrated into the AXP CLI environment.
o A set of tools is provided to third-party developers at development time to validate, process, and package the CLI plugin along with their main application.
o See SFS section 3.4.2 for more detail and a sample (Merwan/John)
- Value-added service APIs
o AXP provides various service APIs to allow 3rd parties to programmatically access, manage, and augment existing features available within IOS. The SDK includes the necessary libraries, APIs, and/or their associated header files for each supported language. This allows developers to compile and/or link their applications on their desktops.
o The implementations of these value-added services are provided via AXP infrastructure add-on packages. During application packaging, the dependency on the associated infrastructure add-on package must be specified if the application depends upon that service.
o Both the infrastructure add-on package and the application package need to be installed for the service to be enabled for the application.
o The following is the list of value-added services provided (mostly in Phase 2):
+ IOS Service APIs
+ Event Notification APIs
+ Serial device APIs
+ AXP Service APIs
+ SNMP APIs (Phase 1)
+ Packet service APIs (Phase 1) - this service uses a standard library (see its section in the SFS)
+ Refer to the SFS for more detailed descriptions (Merwan/John)
The AXP SDK, appre-sdk.nme.[APPLICATIONEXTENSIONPLATFORM:version].tar.gz, is provided as a compressed tarball in gzip format.
1. Place the tarball in a directory.
- cp appre-sdk.nme.[APPLICATIONEXTENSIONPLATFORM:version].tar.gz /source/workspace/sdk
2. Unzip and untar the package into the desired directory:
- bash> tar xvfz appre-sdk.nme.[APPLICATIONEXTENSIONPLATFORM:version].tar.gz OR
- bash> gunzip appre-sdk.nme.[APPLICATIONEXTENSIONPLATFORM:version].tar.gz
- bash> tar xvf appre-sdk.nme.[APPLICATIONEXTENSIONPLATFORM:version].tar
3. This populates the appre-sdk.nme.[APPLICATIONEXTENSIONPLATFORM:version] directory.
4. cd /source/workspace/sdk/appre-sdk.nme.[APPLICATIONEXTENSIONPLATFORM:version]
The AXP SDK provides packaging tools and Service APIs, including header files as well as libraries/modules, which can be used to build, compile, and link AXP-service-enabled third-party applications. The SDK has the following hierarchical structure:
include Contains header files for the C/C++ APIs
jar Contains Java JAR files
lib Contains library files used for linking
perl Contains Perl module files
python2.3 Contains Python module files
tools Contains packaging and CLI plugin tools
Helper scripts are provided as part of the Software Development Kit and can be found in tools directory.
gen_auth.sh Certification Authorization Tool
This tool generates Javelin Authorization files (.jvln) and Certification files (.key)
Note: Do NOT show in the Developer's Guide. It is for internal usage and only available in the engineering build.
See SFS 3.4.16 to provide more detail and example (Merwan/John)
pkg_build.sh SLIM Packaging Utility
This tool generates an installation package for a specific application
Note: See SFS 3.4.8 to provide more detail and example (Merwan/John)
pkg_bundle.sh SLIM Bundle Utility
This tool bundles multiple packages together into one single install package
Note: See SFS 3.4.8 to provide more detail and example (Merwan/John)
pkg_check.sh Library Dependency Checker Utility
This tool checks and verifies the library dependencies of a specified package (*.pkg)
Note: See SFS 3.4.12 to provide more detail and example (Merwan/John)
pkg_info.sh Package Info Utility
Displays package information
rpm_extractor.sh RPM Extractor
This tool extracts content from an RPM package. The user can specify directories for storing scripts, dependencies, and other files
Note: See SFS 3.4.13 to provide more detail and example (Merwan/John)
1. Dependency management
2. Host vs. Guest OS packages
- Artem PackageSupport?
There are different ways a developer can approach application development. A variety of criteria can affect the method the developer adopts as a development flow. Some of those criteria are:
- complexity of the application
- whether the developer starts the application from scratch or from existing code
- software dependencies this application might have (RPMs)
Whether the application is developed from scratch or the developer is required to port an existing application to the network-module, the basic recommended workflow is similar. Since the workstation is a runtime environment similar to the network-module, the developer can do some minor testing on the workstation if desired.
Here is a recommended workflow to build an application:
1. First Time Setup:
- Workstation setup:
o SDK installation, workspace preparation, repository sync
o Create an empty application, package it, and install it on the network-module; this is needed to get access to the Linux shell in the VServer
- Network-module installation and configuration
- Export the directory structure from the network-module to the workstation using the rsync CLI
2. Main development iteration loop:
- Add/modify software, such as executables and scripts, as required in the workstation sync repository
o Only the binaries need to be placed in the sync directory (not the source code)
o Use the RPM Extraction Tool to extract any RPM your application may require
- Import the changes back to the network-module
- Test the changes
- The developer can modify files on the network-module as well; if so, make sure to sync from the network-module to the workstation so that those changes are not lost.
3. Package And Test
- Package the application - make sure to remove unwanted files that might have been exported to the workstation, such as temp files.
- Run the LDD Checker tool - this helps verify that all libraries required by the application were bundled in the package.
- Install the packaged application for testing
- Test the application
In the previous development flow, there are some references to tools. Those tools are either part of the SDK or available at runtime on the network-module (such as the rsync CLI).
One shortcoming of the Application Runtime Environment is that it does not support the installation of RPMs. The developer might face this issue if the application depends on RPM(s). If this is the case, the developer must bundle the RPM content and its setup scripts as part of the application.
To help the developer extract the RPM, the SDK provides a tool to ease this extraction process.
- Note to Writer: There should be a reference here to the section where RPM Extraction Tool is covered in detail.
Another tool provided is the rsync CLI (available from the application debug package). This CLI allows synchronization of the VServer/application content between the network-module and the workstation workspace repository. This tool was made available to ease the development cycle by bypassing the packaging and installation steps of the application. This way, modifications can easily be made on the workstation and updated to the network-module.
The LDD Checker is an executable that validates that all libraries have been packaged in your application. This is used to verify that the application will have the full set of libraries once installed.
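The standard ldd utility illustrates the idea the LDD Checker automates for a whole package: it lists the shared libraries a single binary requires, so any dependency missing from the package would show up as unresolved at run time.

```shell
# List the shared-library dependencies of a binary; a "not found" entry
# indicates a library that would be missing at run time.
ldd /bin/ls
# A packaged application needs every listed library present inside its VI.
```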