Online Boutique is a cloud-native, 10-tier microservices demo application: a web-based e-commerce app where users can browse items, add them to their cart, and purchase them.
We use this application to demonstrate cloud-native technologies such as Kubernetes/GKE, Istio, AppDynamics and ThousandEyes. The application runs on any Kubernetes (k8s) cluster, including Google Kubernetes Engine (GKE).
This project is based on the original open-source microservices-demo from Google.
Online Boutique is composed of 11 microservices written in different languages that talk to each other over gRPC.
Service | Language | Description |
---|---|---|
frontend | Go | Exposes an HTTP server to serve the website. Does not require signup/login and generates session IDs for all users automatically. |
cartservice | C# | Stores the items in the user's shopping cart in Redis and retrieves them. |
productcatalogservice | Go | Provides the list of products from a JSON file and ability to search products and get individual products. |
currencyservice | Node.js | Converts one money amount to another currency. Uses real values fetched from the European Central Bank. It's the highest-QPS service. |
paymentservice | Node.js | Charges the given credit card info (mock) with the given amount and returns a transaction ID. |
shippingservice | Go | Gives shipping cost estimates based on the shopping cart. Ships items to the given address (mock). |
emailservice | Python | Sends users an order confirmation email (mock). |
checkoutservice | Go | Retrieves user cart, prepares order and orchestrates the payment, shipping and the email notification. |
recommendationservice | Python | Recommends other products based on what's given in the cart. |
adservice | Java | Provides text ads based on given context words. |
loadgenerator | Python/Locust | Continuously sends requests imitating realistic user shopping flows to the frontend. |
The Online Boutique is initially deployed in GKE with Istio, which defines the service mesh and handles service-to-service communication. Istio lets you decouple traffic management from application code by attaching a sidecar proxy (Envoy) next to each container that intercepts all incoming and outgoing communications. The interaction between all these proxies in the data plane, together with a common control plane, creates the service mesh. This fundamentally helps you understand traffic flows among services and manage them with policies, protection and authentication.
In this project, ingress resources such as an Istio Gateway and VirtualService are deployed to expose the Online Boutique frontend running inside the Kubernetes cluster.
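To make this concrete, a minimal Gateway and VirtualService of this kind look roughly like the following sketch (resource names, hosts and ports here are illustrative assumptions; the actual definitions live in `release/istio-manifests.yaml`):

```bash
# Illustrative only: --dry-run=client validates the manifests without changing the cluster
kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: frontend-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-ingress
spec:
  hosts:
  - "*"
  gateways:
  - frontend-gateway
  http:
  - route:
    - destination:
        host: frontend
        port:
          number: 80
EOF
```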
With Istio at full steam we then focus on getting visibility into how the cluster and the application are performing. We start with Kiali, which comes natively integrated with Istio and provides visibility at the network service layer. We then implement AppDynamics agents at the infrastructure (i.e. cluster) and application layers.
This agent is deployed in its own namespace and collects metrics and metadata for the entire cluster, including every node and namespace down to the container level, via the Kubernetes API. It then sends this information to the AppDynamics controller.
The AppDynamics cluster-agent also comes with an auto-instrument feature, available only for .NET, Java and Node.js. This dynamically and automatically adds the required application agents to the targeted applications. In essence, the cluster agent modifies the application deployment by adding an init container that installs the required AppDynamics application agent on the application container when it automatically restarts. Both the paymentservice and currencyservice are Node.js applications, so the AppDynamics cluster-agent automatically instruments them, as covered in the Deployment section. However, due to some AppDynamics gRPC limitations, the information from these microservices is not being correlated at the AppDynamics controller level. The goal is to solve this issue by building a gRPC middleware that allows the AppDynamics controller to correlate the information between the microservices.
At the moment, from an application perspective, only the FrontEnd microservice is meaningfully instrumented with an AppDynamics APM agent, as this is the most used microservice. Extending the AppDynamics APM agents to the remaining microservices is currently work in progress.
With AppDynamics providing visibility at both the infrastructure and application layer, this is then augmented with ThousandEyes Cloud Agents that provide external visibility from global vantage points (Cloud agents across 58 countries), the internet and browser synthetics. This provides an end-to-end view into the delivery of the Online Boutique app to the user, whilst getting enhanced insights on the user experience.
The full architecture of the Online Boutique application, with all the tools deployed, looks as follows.
- `gcloud` command-line tool
- `kubectl`, which can be installed via `gcloud components install kubectl`
Set the `PROJECT_ID` env variable and ensure the GKE API is enabled:
```
PROJECT_ID="<your-project-id>"
gcloud services enable container.googleapis.com --project ${PROJECT_ID}
gcloud services enable cloudprofiler.googleapis.com --project ${PROJECT_ID}
```
Enable Google Container Registry (GCR) on your project and configure the `docker` CLI to authenticate to GCR:

```
gcloud services enable containerregistry.googleapis.com
gcloud auth configure-docker -q
```
Clone this repo:

```
git clone https://github.com/JPedro2/Cloud-Native-Demo.git
cd Cloud-Native-Demo
```
Create the GKE cluster:
```
ZONE=europe-west2-a
gcloud container clusters create <your-cluster-name> \
  --project=${PROJECT_ID} --zone=${ZONE} --node-locations=${ZONE} \
  --enable-autoupgrade --enable-autoscaling \
  --min-nodes=4 --max-nodes=6 --machine-type=e2-standard-2
```
Alternatively you can create a GKE cluster using the Google Cloud UI. If you do, please make sure that you DO NOT select the `Enable Istio` option under Features, as we will be installing Istio manually in step 6.
Cost Saving Tip: If you are using your personal GCP account for this demo and are planning on running it for a short period of time (<24h), you can use the `--preemptible` flag when creating the GKE cluster. Preemptible VMs are Compute Engine VM instances that are priced lower, last a maximum of 24 hours in general, and provide no availability guarantees.
If you wish to do that please use the following to create your GKE cluster:
```
ZONE=europe-west2-a
gcloud container clusters create <your-cluster-name> \
  --project=${PROJECT_ID} --zone=${ZONE} --node-locations=${ZONE} \
  --enable-autoupgrade --enable-autoscaling --preemptible \
  --min-nodes=4 --max-nodes=6 --machine-type=e2-standard-2
```
Point the `kubectl` context to the new GKE cluster:
gcloud container clusters get-credentials <cluster-name> --zone=${ZONE} --project=${PROJECT_ID}
Alternatively you can get this command from the GKE menu in the Cloud Console by clicking the "Connect" button next to the k8s cluster that you just created.
Use `kubectl` to confirm that you are pointing to that k8s cluster:
```
kubectl config current-context
kubectl get nodes
```
Should output something like:
```
NAME                                                  STATUS   ROLES    AGE   VERSION
gke-boutique-appd-1key-boutique2-pool-8589b39d-9gm5   Ready    <none>   12m   v1.16.15-gke.6000
gke-boutique-appd-1key-boutique2-pool-8589b39d-bm2y   Ready    <none>   12m   v1.16.15-gke.6000
gke-boutique-appd-1key-boutique2-pool-8589b39d-cl2u   Ready    <none>   12m   v1.16.15-gke.6000
gke-boutique-appd-1key-boutique2-pool-8589b39d-h3mr   Ready    <none>   12m   v1.16.15-gke.6000
```
Install Istio and add `istioctl` to your path:
```
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.8.0 TARGET_ARCH=x86_64 sh -
cd istio-1.8.0/
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y
cd ..
```
Please Note: This uses Istio v1.8.0. If you wish to install another version, such as the latest one, you will need to follow Istio's Getting Started guide.
Enable Istio sidecar proxy injection in the default k8s namespace
kubectl label namespace default istio-injection=enabled
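Optionally, you can verify the label is in place before deploying the app; any pod created in the default namespace from now on should come up with two containers (the app plus the Envoy sidecar):

```bash
# Show the istio-injection label for each namespace
kubectl get namespace -L istio-injection
```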
Apply the k8s manifest that combines all of the microservices except the Loadgenerator:
kubectl apply -f release/kubernetes-manifests.yaml
Apply the Istio manifest that combines all the initial Istio configuration rules - gateway, ingress and egress
kubectl apply -f release/istio-manifests.yaml
Get the Istio Ingress GW External IP Address:
kubectl -n istio-system get svc | grep "ingress"
Update the `loadgenerator.yaml.tplt` template k8s deployment with the Istio Ingress GW IP Address:
Go to the file `./kubernetes-manifests/loadgenerator.yaml.tplt` and update line 37 with the external IP address that you got from the previous step:
```
- name: FRONTEND_ADDR
  value: "<istio-ingressgateway-EXTERNAL-IP>:80"
```
After modifying the file and saving it, make sure you rename it to `loadgenerator.yaml`.
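If you prefer not to edit the file by hand, a small helper along these lines (not part of the repo) can render the template for you; it assumes the ingress gateway service is named `istio-ingressgateway` and exposes a LoadBalancer IP:

```bash
# Assumption: the ingress gateway service is istio-ingressgateway in istio-system
INGRESS_IP=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Render the template into the manifest the next step applies
sed "s/<istio-ingressgateway-EXTERNAL-IP>/${INGRESS_IP}/" \
  kubernetes-manifests/loadgenerator.yaml.tplt > kubernetes-manifests/loadgenerator.yaml
```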
Apply the `loadgenerator.yaml` manifest:
kubectl apply -f kubernetes-manifests/loadgenerator.yaml
Install Prometheus (optional), Grafana (optional) and Kiali as an Istio integration:
```
kubectl apply -f istio-1.8.0/samples/addons/prometheus.yaml
kubectl apply -f istio-1.8.0/samples/addons/grafana.yaml
kubectl apply -f istio-1.8.0/samples/addons/kiali.yaml
```
Please Note: If you get a `no matches for kind "MonitoringDashboard"` error, just apply the `kiali.yaml` manifest again and the monitoring dashboards should be created.
Open the Kiali UI:
istioctl dashboard kiali
This command ONLY works if you have `istioctl` in your `$PATH`. If you restarted your terminal or are using a different terminal tab, you will need to do the following:
```
cd istio-1.8.0/
export PATH=$PWD/bin:$PATH
cd ..
```
OR alternatively, you can do it without `istioctl`, so that the session runs in the background:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001 &
Once this is running you will need to open a browser session to http://localhost:20001.
Please Note: If you use the option above, you then need to kill the `port-forward` after you are done with the Kiali dashboard:
killall kubectl
Istio allows you to decouple traffic management from application code, as well as helps you understand traffic flows between services.
You can then, for example, define the percentage of traffic you want to send to a specific canary version, or determine how to distribute traffic based on source/destination or service version weights. This makes A/B testing, gradual rollouts and canary releases much easier to implement and manage.
Additionally, Istio provides useful capabilities around failure recovery to tolerate failing nodes or avoid cascading instabilities, as well as fault injection in the form of delays or connectivity failures, on specific requests to test application resiliency.
If you wish to experiment with some of these Istio capabilities, you can apply some of the Istio manifests in the `/istio-manifests/routing` folder and then, with Kiali, visualise how the traffic flow changes.
These Istio manifests focus on a few specific traffic management use cases.
As an example, you may want to inject a 5-second delay on the productcatalogservice and then evaluate how the other microservices behave and handle that scenario:
kubectl apply -f istio-manifests/routing/injection-delay.yaml
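For reference, the repo's manifest follows the standard Istio fault-injection pattern, which looks roughly like this sketch (names and values are illustrative, not a copy of `injection-delay.yaml`):

```bash
# Illustrative only: --dry-run=client validates without changing the cluster
kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productcatalogservice
spec:
  hosts:
  - productcatalogservice
  http:
  - fault:
      delay:
        percentage:
          value: 100      # delay 100% of requests
        fixedDelay: 5s    # by 5 seconds
    route:
    - destination:
        host: productcatalogservice
EOF
```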
Once this is deployed you can confirm it by going to your Online Boutique external IP address in your browser and checking that, when you click on one of the products on the FrontEnd landing page, it takes at least 5 seconds to load. If you are using Chrome you can re-do these steps whilst using the inspect tool (right-click > Inspect > Network).
You can also visualise this using Kiali, as shown.
Once you've evaluated and analysed the fault, you will need to remove it so that your application goes back to normal.
kubectl delete -f istio-manifests/routing/injection-delay.yaml
To deploy and explore the other Istio manifests please check the README in the `istio-manifests` folder.
Please note that the FrontEnd-v2 microservice is deployed in the AppDynamics section below.
The AppDynamics Cluster Agent used in this project is v20.10.0. If you wish to use another version, or use a custom cluster agent image, you will need to build it and update the cluster agent manifest in `/AppD-Cluster-Agent-20.10/cluster-agent-operator.yaml`. For more info please check the AppDynamics documentation on how to build the Cluster Agent Container Image.
To deploy the cluster agent we use the AppDynamics Operator, located in `/AppD-Cluster-Agent-20.10/cluster-agent-operator.yaml`.
Deploy AppDynamics Operator:
```
kubectl create namespace appdynamics
kubectl create -f AppD-Cluster-Agent-20.10/cluster-agent-operator.yaml
kubectl -n appdynamics get pods
```
The output should be similar to the following:
```
NAME                                    READY   STATUS    RESTARTS   AGE
appdynamics-operator-6d95b46d86-67pmp   1/1     Running   0          2m
```
Create a Controller Access Key Secret:
AppDynamics agents need to connect to the controller to retrieve configuration data and send back information about the monitored environment. To find your controller `access-key` please follow the 4 steps in this guide and then create a k8s secret as follows.
kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>
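Optionally, you can confirm the secret was created and, if needed, inspect the stored key (standard kubectl, nothing project-specific):

```bash
kubectl -n appdynamics get secret cluster-agent-secret

# Decode the stored controller key to double-check it
kubectl -n appdynamics get secret cluster-agent-secret \
  -o jsonpath='{.data.controller-key}' | base64 --decode && echo
```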
Deploy the Cluster Agent:
Before running the AppDynamics cluster-agent manifest you need to first rename the `cluster-agent.yaml.tplt` file to `cluster-agent.yaml` and then update it with your AppDynamics Controller details. Check here if you want more information on how to configure the cluster-agent yaml file.
- `appName` in line 8 - Name of the cluster that displays in the AppDynamics Controller UI as your cluster name.
- `controllerUrl` in line 9 - Full AppDynamics Controller URL.
- `account` in line 10 - AppDynamics account name.
- `defaultAppName` in line 28 - Application name used by the agent to report to the Controller.

In this particular demo we use the AppDynamics cluster-agent's ability to auto-instrument applications. Since this feature only supports applications written in .NET, Java and Node.js, it only applies to the paymentservice and currencyservice microservices. The feature is configured in your `cluster-agent.yaml` manifest from line 25 onwards; you can comment out or delete those lines if you don't want the auto-instrument feature turned on.
```
kubectl create -f AppD-Cluster-Agent-20.10/cluster-agent.yaml
kubectl -n appdynamics get pods
```
The output should be similar to the following:
```
NAME                                    READY   STATUS    RESTARTS   AGE
appdynamics-operator-6d95b46d86-67pmp   1/1     Running   0          45m
k8s-cluster-agent-79b6c95cb4-bdgzn      1/1     Running   0          1m30s
```
Please Note: For debugging purposes, for example if the controller doesn't receive data from the cluster agent, you can check the agent logs as follows:
kubectl -n appdynamics logs <pod-name>
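Since the cluster agent auto-instruments the paymentservice and currencyservice deployments (see above), one optional way to confirm it has kicked in is to look for the injected init container once the pods restart. This is a sketch that assumes the Online Boutique runs in the default namespace:

```bash
# Assumption: the Online Boutique deployments live in the default namespace
for d in paymentservice currencyservice; do
  echo -n "${d} initContainers: "
  kubectl -n default get deployment "${d}" \
    -o jsonpath='{.spec.template.spec.initContainers[*].name}'
  echo
done
```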
Go to the AppDynamics Dashboard to visualise your cluster's monitoring data:
4.1. Open the AppDynamics dashboard in your browser: `https://<appdynamics-controller-host>/controller`
4.2. Click the `Servers` tab at the top
4.3. Click on the `Clusters` icon on the left-hand side
4.4. You will see your cluster name; select it and click the `Details` icon
Please Note: Initially you may see "No Data Available ⚠️" as you need to give some time for the agent to send enough data to the controller so that you can start seeing some cool graphs - usually around 15-30mins, aka coffee time ☕️.
Check here for more information on how to use the AppDynamics Cluster Agent via the Dashboard, such as how to edit which `namespaces` to monitor.
The only microservice manually instrumented with an AppDynamics APM agent is the FrontEnd microservice, written in `Golang`. AppDynamics does not have an APM agent, per se, for `GO`. Instead, we use the AppDynamics GO SDK, which itself uses the `C++` SDK in the background. For deeper and more detailed information on how the AppDynamics `GO` SDK is implemented in-line with the FrontEnd code, you can check the README in src/frontend-v2-appD.
The goal is to deploy a frontEnd version of the microservice that is instrumented with the AppDynamics `GO` agent, rather than replace the existing non-instrumented one. For that we deploy another FrontEnd `v2` microservice, which is then added to the Istio service mesh and allows us to perform some interesting traffic management routines, like sending traffic to either `v1` or `v2` based on version weights.
Add AppDynamics Controller Settings to the frontEnd `v2` manifest:
Start by renaming the `frontend-v2.yaml.tplt` file to `frontend-v2.yaml`, located in the `kubernetes-manifests` folder.
Add your AppDynamics controller details to the manifest, from line 72 to line 81.
```
- name: APPD_CONTROLLER_HOST
  value: "<appdynamics-controller-host>"
- name: APPD_CONTROLLER_PORT
  value: "443"
- name: APPD_CONTROLLER_USE_SSL
  value: "true"
- name: APPD_CONTROLLER_ACCOUNT
  value: "<account-name>"
- name: APPD_CONTROLLER_ACCESS_KEY
  value: "<access-key>"
```
Deploy the AppD-instrumented frontEnd `v2` to the cluster:
kubectl apply -f kubernetes-manifests/frontend-v2.yaml
Apply an Istio destination rule that sets both frontEnd microservices with `v1` and `v2` labels:
kubectl apply -f istio-manifests/routing/destination-rule.yaml
Delete the current `frontend-ingress` and apply a new one that routes traffic to frontEnd `v1` and `v2` based on a pre-selected weight:
```
kubectl apply -f istio-manifests/routing/frontend-weighted-v1-v2.yaml
kubectl delete virtualservice frontend-ingress
```
Please Note: If you wish to experiment and change the weights, you can just modify the `weight` variables in lines 31 and 37 of the Istio routing manifest `frontend-weighted-v1-v2.yaml` in the `istio-manifests/routing` folder and then re-apply the manifest.
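For context, the weighted-routing mechanism these manifests use looks roughly like the following sketch (subset labels, gateway name and weights here are illustrative assumptions; the real values are in the repo's manifests):

```bash
# Illustrative only: subsets defined by a DestinationRule, traffic split by weight
kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend
spec:
  host: frontend
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
  - "*"
  gateways:
  - frontend-gateway
  http:
  - route:
    - destination:
        host: frontend
        subset: v1
      weight: 80      # 80% of requests to the original frontend
    - destination:
        host: frontend
        subset: v2
      weight: 20      # 20% to the AppD-instrumented v2
EOF
```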
Similarly to the section above, you can visualise how the traffic is flowing with this routing policy by looking at the Kiali graph.
Currently in this project, ONLY ThousandEyes Cloud Agents are used. These provide an external vantage point: globally distributed agents installed and managed by ThousandEyes in 190+ cities across 58 countries and immediately available.
Below is an example of how you can quickly create an `HTTP Server` test against the Online Boutique frontEnd. This test can run as often as every minute and from several locations around the world. It provides you with insights on availability, response time and throughput, and you can even use Path Visualization to work out which routes your application traffic takes out of GCP or, most importantly, check whether there is an issue in the network path when the application performance starts degrading.
Something far more exotic than `HTTP Server` tests are the `HTTP Transaction` tests, which provide application experience insights with web synthetics. These tests measure entire multi-page workflows, with credential handling, simulating a complete user journey and making sure those journeys complete successfully, while providing insight into the user experience. This allows for multi-layer correlation, as you can now have transaction scripting tests with further information around HTTP, Path Viz, BGP and internet outages.
To write and test transaction scripts ThousandEyes provides the `ThousandEyes Recorder` application, which records your user journey through the application and builds the `Transaction Script` for you automatically - no code expertise required. All you have to do then is export that `Transaction Script` to your ThousandEyes transaction test and run it - as before, as often as every minute and from several locations around the world.
To fully utilise this feature I highly recommend that you watch this short video tutorial.
If you wish to test this out without building your own transaction test scripts, you can use the ones in the ThousandEyes folder. To do so, make sure that you add and save the `<Online-Boutique-IP-Address>` in line 10 of both files before you export them to ThousandEyes. Below is an example of how you can quickly deploy these transaction test scripts.
ThousandEyes natively supports sending alert notifications directly to AppDynamics. This allows you to correlate trigger events with clear events, and to create policies in AppDynamics based on specific properties like alertState, alertType (HTTP, Network, Voice, etc.) and testName. In AppDynamics, ThousandEyes alerts show up as custom events of type `ThousandEyesAlert` and allow you to open the ThousandEyes app at the Views screen for the alert start time, to get further visibility into the issue.
You can quickly and easily set-up the native alerts integration by following the steps in the official ThousandEyes Documentation.
ThousandEyes data can be pushed to the AppDynamics controller via a ThousandEyes Custom Monitor, which is basically an AppDynamics Machine Agent extension that pulls test data from the ThousandEyes API, transforms the data payload, and pushes that data to the AppDynamics controller via custom metrics. Currently the ThousandEyes Custom Monitor only supports pulling metrics from Page Load, HTTP/Web and Network test types; unfortunately, HTTP Transaction tests are not supported at the moment.
This essentially allows you to correlate data from ThousandEyes agents with data from AppDynamics agents, which means comparing your application performance at the app layer (via the AppDynamics APM agent) against the application experience from the user's perspective (via the ThousandEyes cloud agents). This provides powerful insights that can be used both in production, to proactively identify and mitigate sub-standard user experience, and during development, to understand how the user experience may be impacted by new upcoming features.
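To make the mechanism concrete, here is a rough sketch of what such a monitor does on each polling cycle. The ThousandEyes endpoint, response fields and AppDynamics metric path shown here are assumptions for illustration (check the ThousandEyes API and Machine Agent extension documentation); the actual logic lives in the extension referenced below:

```bash
# Sketch only: pull one HTTP Server test result from the ThousandEyes v6 API
# and print it in the AppDynamics Machine Agent custom-metric format.
# Requires curl and jq. Endpoint path and field names are assumptions.
TE_EMAIL="user@example.com"   # ThousandEyes account email (placeholder)
TE_TOKEN="<te-api-token>"     # ThousandEyes basic-auth API token (placeholder)
TEST_ID="<test-id>"           # ID of the HTTP Server test (placeholder)

RESPONSE_TIME=$(curl -s -u "${TE_EMAIL}:${TE_TOKEN}" \
  "https://api.thousandeyes.com/v6/web/http-server/${TEST_ID}.json" \
  | jq '[.web.httpServer[].responseTime] | add / length')

# Machine Agent script-extension style metric line (metric path is illustrative)
echo "name=Custom Metrics|ThousandEyes|HTTP|Response Time (ms),value=${RESPONSE_TIME},aggregator=AVERAGE"
```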
In this demo there are two ways that you can deploy the ThousandEyes Custom Monitor:
1. As a standalone Docker container that can be deployed anywhere, such as your local machine or another VM in the cloud or on-prem, since the ThousandEyes Custom Monitor does not need to run in the same environment as the application. You will need to have both Docker and Compose installed.
2. As a microservice running in the k8s cluster that you've just created.
Please Note: The ThousandEyes Custom Monitor built in this demo derives from this example and uses the AppDynamics Standalone Machine Agent v21.2.0 - the latest at the time of development. If you wish to use another version, you will need to build your own custom monitor by following the instructions on the example and the AppDynamics Machine Agent documentation.
To deploy the ThousandEyes Custom Monitor as a standalone Docker container:

- Update the `configuration.env.tplt` file located in the `AppD-TE-Custom-Monitor` folder. You will see line comments that explain which variables or credentials you need to use; most of them you have already used in previous parts of the demo.
- Rename `configuration.env.tplt` to `configuration.env`.
- Run the container with `docker-compose`:

```
cd AppD-TE-Custom-Monitor/
docker-compose up -d
```
You can check that the container is up and inspect its logs:

```
docker ps -a
```
```
CONTAINER ID   IMAGE                                     COMMAND                   CREATED         STATUS         PORTS   NAMES
3a7f59fc56ba   peolivei/te-appd-custom-monitor:v21.2.0   "/bin/sh -c \"${MACHI…"    3 seconds ago   Up 3 seconds           te-appd-monitor
```
```
docker logs te-appd-monitor
```
The container runs in detached mode. If you ever need to gracefully stop the ThousandEyes Custom Monitor container and remove the volumes, make sure that you are in the `/Cloud-Native-Demo/AppD-TE-Custom-Monitor` directory and execute the following:
docker-compose down -v
To deploy the ThousandEyes Custom Monitor as a microservice in the k8s cluster:

- Update the `te-appd-custom-monitor.yaml.tplt` file, from line 24 to line 75, located in the `AppD-TE-Custom-Monitor` folder. You will see line comments that explain which variables or credentials you need to use; most of them you have already used in previous parts of the demo.
- Rename `te-appd-custom-monitor.yaml.tplt` to `te-appd-custom-monitor.yaml`.
- Apply the manifest, which deploys the agent in the `appdynamics` namespace:

```
kubectl apply -f AppD-TE-Custom-Monitor/te-appd-custom-monitor.yaml
```

- Check that the pod is running:

```
kubectl get pods -n appdynamics
```
```
NAME                                            READY   STATUS    RESTARTS   AGE
appdynamics-operator-6d95b46d86-49wzf           1/1     Running   0          28d
k8s-cluster-agent-79b6c95cb4-z5st9              1/1     Running   0          28d
te-appd-custom-monitor-agent-66c6db6b7f-6nb6c   1/1     Running   0          23h
```
As before, for debugging purposes you can check the agent logs:

```
kubectl -n appdynamics logs <pod-name>
```
Once the ThousandEyes Custom Monitor is running, the metrics will appear under the application's metrics within the Metric Browser on your AppDynamics Dashboard. I would recommend initially giving it at least 15-30 mins so the controller has enough time to collect the data; also bear in mind that this depends on how frequently your ThousandEyes Cloud Agent tests are running.
You can then take the `Average Response Time` from the FrontEnd Business Transaction metric, coming directly from the AppDynamics `GO` agent running in the `frontEnd` microservice, and start correlating it with the ThousandEyes custom metrics coming from the different Cloud Agents deployed around the world.
To see the deployment for multi-cloud environments please check the guide under the smm-1.8.0 folder.