
Benchmark and Validate HTTP services with traffic mirroring

Traffic mirroring is a method of duplicating live traffic and sending the copy to another service, such as a new version of an existing service. Because responses from the mirrored service are discarded, traffic mirroring lets you safely test how the new version behaves under live traffic without any risk of affecting end users.

In this tutorial, you will learn how to set up traffic mirroring to a new version of a service with Istio, and how to use Iter8 to validate the performance of that new version.

Before you begin
  1. Follow the Istio traffic mirroring tutorial. The resulting mirroring rule is sketched after this list.

  2. Install the Prometheus add-on.

  3. Generate load. The command below sends requests to the httpbin service for 6000 seconds.

    kubectl run fortio --image=fortio/fortio --command -- fortio load -t 6000s http://httpbin.default:8000/get
    
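After completing step 1, every request to httpbin is routed to version v1, and a copy of each request is mirrored to version v2. As a reminder, the mirroring rule from the Istio tutorial looks roughly like the following sketch (it assumes the httpbin example with destination subsets v1 and v2; refer to the Istio tutorial for the authoritative version):

# Sketch of the mirroring rule set up in step 1: all live traffic is
# routed to httpbin v1, while a copy of each request is mirrored to v2.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
    - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
    mirrorPercentage:
      value: 100.0
EOF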


Validate SLOs for the mirrored service

With traffic now being mirrored, you can run the slo-validation-istio experiment on the mirrored service to measure how well it performs.


The following metrics are collected by default by this experiment:

  • istio/request-count: total number of requests sent
  • istio/error-count: number of error responses
  • istio/error-rate: fraction of error responses
  • istio/latency-mean: mean of observed latency values
  • istio/le500ms-latency-percentile: fraction of requests with latency less than or equal to 500 ms

All latency metrics have msec units.
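These values are computed from Istio's standard telemetry stored in Prometheus. If you want to spot-check the raw data behind them, you can query the request counter directly; the following sketch assumes the default istio_requests_total metric and the Prometheus endpoint used later in this tutorial, and must be run from a pod inside the cluster so the DNS name resolves:

# Hypothetical spot check of the raw telemetry behind istio/request-count.
curl -G http://prometheus.istio-system:9090/api/v1/query \
  --data-urlencode 'query=sum(istio_requests_total{reporter="destination",destination_workload="httpbin-v2",destination_workload_namespace="default"})'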


To run the slo-validation-istio experiment, the following parameters need to be set.

  • reporter, the proxy that reports the request metrics. It must be set to destination because mirrored requests are only reported by the server-side (destination) Istio proxy.
  • cronjobSchedule, the cron schedule that determines when metrics collection and SLO validation run. In this tutorial, it is set to run every minute.
  • providerURL, the URL of the metrics provider; in this case, the Prometheus query endpoint.
  • destination_workload, the name of the mirrored workload (httpbin-v2).
  • destination_workload_namespace, the namespace of the mirrored workload (default).

In addition, the SLOs to be validated need to be set. In this case, istio/error-rate and istio/latency-mean will be validated.

iter8 k launch -c slo-validation-istio \
--set reporter=destination \
--set cronjobSchedule="*/1 * * * *" \
--set providerURL=http://prometheus.istio-system:9090/api/v1/query \
--set versionInfo.destination_workload=httpbin-v2 \
--set versionInfo.destination_workload_namespace=default \
--set SLOs.istio/error-rate=0 \
--set SLOs.istio/latency-mean=100

In the experiment above, the following SLOs are validated:

  • the error rate is 0
  • the mean latency is under 100 msec

View experiment report

iter8 k report

The text report looks like this:
Experiment summary:
*******************

  Experiment completed: false
  No task failures: true
  Total number of tasks: 2
  Number of completed tasks: 6

Whether or not service level objectives (SLOs) are satisfied:
*************************************************************

  SLO Conditions            |Satisfied
  --------------            |---------
  istio/error-rate <= 0     |true
  istio/latency-mean <= 100 |true


Latest observed values for metrics:
***********************************

  Metric                           |value
  -------                          |-----
  istio/error-count                |0.00
  istio/error-rate                 |0.00
  istio/latency-mean               |6.31
  istio/le500ms-latency-percentile |1.00
  istio/request-count              |2110.00
iter8 k report -o html > report.html # view in a browser

The HTML report looks like this:

[Image: HTML report]

Because the cronjobSchedule has been set to run every minute, the report will change periodically.


Assert experiment outcomes

Assert that the experiment encountered no failures, and all SLOs are satisfied.

iter8 k assert -c nofailure -c slos

Sample output from assert:
INFO[2021-11-10 09:33:12] experiment has no failure                    
INFO[2021-11-10 09:33:12] SLOs are satisfied                           
INFO[2021-11-10 09:33:12] all conditions were satisfied
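
Because iter8 k assert exits with a non-zero code when a condition is not satisfied, it can serve as a quality gate in a CI/CD script. A minimal sketch, where the echo messages stand in for your real promotion or rollback steps:

# Hypothetical CI/CD gate built on the assert exit code.
if iter8 k assert -c nofailure -c slos; then
  echo "SLOs satisfied: httpbin-v2 can be promoted"
else
  echo "SLOs violated: continue routing live traffic to v1 only"
  exit 1
fi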