Load test a KServe model (via gRPC)

This tutorial shows how to run a load test for a KServe model that serves requests over gRPC. We use an sklearn model to demonstrate; the same approach works for any model type.

Before you begin
  1. Ensure that you have the kubectl and helm CLIs installed.
  2. Have access to a cluster running KServe. You can create a KServe Quickstart environment as follows:
    curl -s "" | bash
    If using a local cluster (for example, Kind or Minikube), we recommend providing the cluster with at least 16GB of memory.
  3. Have Grafana available. For example, Grafana can be installed on your cluster as follows:
    kubectl create deploy grafana --image=grafana/grafana
    kubectl expose deploy grafana --port=3000

Install the Iter8 controller

Iter8 can be installed and configured to watch resources either in a single namespace (namespace-scoped) or in the whole cluster (cluster-scoped).

For a namespace-scoped installation:

helm install --repo --version 0.18 iter8 controller

For a cluster-scoped installation:

helm install --repo --version 0.18 iter8 controller \
--set clusterScoped=true

For additional install options, see Iter8 Installation.

Deploy an InferenceService

Create an InferenceService which exposes a gRPC port. The following serves the sklearn irisv2 model:

cat <<EOF | kubectl create -f -
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "sklearn-irisv2"
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      runtime: kserve-mlserver
      protocolVersion: v2
      storageUri: "gs://seldon-models/sklearn/mms/lr_model"
      ports:
      - containerPort: 9000
        name: h2c
        protocol: TCP
EOF

Launch performance test

GRPC_HOST=$(kubectl get isvc sklearn-irisv2 -o jsonpath='{.status.components.predictor.address.url}' | sed 's#.*//##')
GRPC_PORT=80
helm upgrade --install \
--repo --version 0.18 model-test iter8 \
--set "tasks={ready,grpc}" \
--set ready.isvc=sklearn-irisv2 \
--set ready.timeout=180s \
--set grpc.protoURL= \
--set grpc.host=${GRPC_HOST}:${GRPC_PORT} \
--set grpc.call=inference.GRPCInferenceService.ModelInfer \
--set grpc.dataURL=
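The sed expression in the GRPC_HOST command above simply strips the URL scheme, leaving the bare host. You can try it locally on a sample cluster-local address (the hostname below is illustrative):

```shell
# Strip the scheme from a sample cluster-local URL, leaving the bare host
# (hypothetical address for illustration):
echo "http://sklearn-irisv2.default.svc.cluster.local" | sed 's#.*//##'
# -> sklearn-irisv2.default.svc.cluster.local
```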
About this performance test

This performance test consists of two tasks, namely, ready and grpc.

The ready task checks if the sklearn-irisv2 InferenceService exists and is Ready.

The grpc task sends call requests to the inference.GRPCInferenceService.ModelInfer method of the cluster-local gRPC service with host address ${GRPC_HOST}:${GRPC_PORT}, and collects Iter8's built-in gRPC load test metrics.
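The grpc task fetches its request payload from grpc.dataURL. As a sketch of what such a payload might look like for the ModelInfer call, the JSON below encodes a single iris example; the field names follow the open inference protocol's ModelInferRequest message, and the input name, shape, values, and file path are illustrative assumptions:

```shell
# Illustrative ModelInfer payload (field names follow the open inference
# protocol's ModelInferRequest message; the input name, shape, values, and
# file path are assumptions for illustration).
cat > /tmp/grpc-input.json <<'EOF'
{
  "model_name": "sklearn-irisv2",
  "inputs": [
    {
      "name": "input-0",
      "shape": [1, 4],
      "datatype": "FP64",
      "contents": { "fp64_contents": [6.8, 2.8, 4.8, 1.4] }
    }
  ]
}
EOF
cat /tmp/grpc-input.json
```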

View results using Grafana

Inspect the metrics using Grafana. If Grafana is deployed to your cluster, port-forward requests as follows:

kubectl port-forward service/grafana 3000:3000

Open Grafana in a browser by going to http://localhost:3000 and log in. The default username/password is admin/admin.

Add a JSON API data source model-test with the following parameters:

  • URL: http://iter8.default:8080/grpcDashboard
  • Query string: namespace=default&test=model-test

Create a new dashboard by import. Paste the contents of the grpc Grafana dashboard into the text box and load it. Associate it with the JSON API data source defined above.

The Iter8 dashboard will look like the following:

grpc Iter8 dashboard


Clean up the performance test and the InferenceService:

helm delete model-test
kubectl delete inferenceservice sklearn-irisv2

Uninstall the Iter8 controller

helm delete iter8

For additional uninstall options, see Iter8 Uninstall.

If you installed Grafana, you can delete it as follows:

kubectl delete svc/grafana deploy/grafana

Some variations and extensions of this performance test
  1. The grpc task can be configured with load related parameters such as the number of requests, requests per second, or number of concurrent connections.
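For example, additional --set flags like the following could be appended to the helm command above. The parameter names are assumed to mirror the load-generator options exposed by the grpc task; verify the exact names and defaults against your Iter8 version's chart values before use:

```shell
# Hypothetical extra flags for the performance test (parameter names are
# assumptions; check your Iter8 chart's values):
#   total number of requests
--set grpc.total=500 \
#   requests per second
--set grpc.rps=25 \
#   number of concurrent connections
--set grpc.connections=4
```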