 # Envoy Ext Proc Gateway with LoRA Integration

-This project sets up an Envoy gateway to handle gRPC calls with integration of LoRA (Low-Rank Adaptation). The configuration aims to manage gRPC traffic through Envoy's external processing and custom routing based on headers and load balancing rules. The setup includes Kubernetes services and deployments for both the gRPC server and the vllm-lora application.
+This project sets up an Envoy gateway with a custom external processor that implements advanced routing logic tailored for LoRA (Low-Rank Adaptation) adapters. Requests are routed based on the model specified (using the OpenAI API format), with load balancing driven by model server metrics.
+
+
 ## Requirements
-- A vLLM based deployment (using the custom image provided below), with LoRA Adapters
 - Kubernetes cluster
 - Envoy Gateway v1.1 installed on your cluster: https://gateway.envoyproxy.io/v1.1/tasks/quickstart/
 - `kubectl` command-line tool
 - Go (for local development)
-
-## vLLM
-***This PoC uses a modified vLLM fork, the public image of the fork is here: `ghcr.io/tomatillo-and-multiverse/vllm:demo`***
-
-The fork is here: https://github.com/kaushikmitr/vllm.
-
-The summary of changes from standard vLLM are:
-- Active/Registered LoRA adapters are returned as a response header (used for lora-aware routing)
-- Queue size is returned as a response header
-- Active/Registered LoRA adapters are emitted as metrics (for out-of-band scraping during low traffic periods)
-
-
-## Overview
-
-This project contains the necessary configurations and code to set up and deploy a service using Kubernetes, Envoy, and Go. The service involves routing based on the model specified (using Open AI API format), collecting metrics, and ensuring efficient load balancing.
-
-
-
+- A vLLM based deployment using a custom fork, with LoRA adapters. ***This PoC uses a modified vLLM [fork](https://github.com/kaushikmitr/vllm); the public image of the fork is `ghcr.io/tomatillo-and-multiverse/vllm:demo`.*** A sample deployment is provided under `./manifests/samples/vllm-lora-deployment.yaml`.
 
 ## Quickstart
 
 ### Steps
+1. **Deploy Sample vLLM Application**
+   NOTE: Create a HuggingFace API token and store it in a secret named `hf-token` with key `hf_api_token`. It is consumed via the `HUGGING_FACE_HUB_TOKEN` and `HF_TOKEN` environment variables in `./manifests/samples/vllm-lora-deployment.yaml`.
 
-1. **Apply Kubernetes Manifests**
    ```bash
-   cd manifests
-   kubectl apply -f ext_proc.yaml
-   kubectl apply -f vllm/vllm-lora-service.yaml
-   kubectl apply -f vllm/vllm-lora-deployment.yaml
+   kubectl apply -f ./manifests/samples/vllm-lora-deployment.yaml
+   kubectl apply -f ./manifests/samples/vllm-lora-service.yaml
    ```
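The NOTE in step 1 assumes the `hf-token` secret already exists in the cluster. A minimal sketch of creating it, assuming your token is in an `HF_API_TOKEN` environment variable (the secret and key names are the ones the deployment references):

```bash
# Create the secret the vLLM deployment expects (name: hf-token, key: hf_api_token)
kubectl create secret generic hf-token --from-literal=hf_api_token="${HF_API_TOKEN}"
```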
+2. **Install GatewayClass with Ext Proc**
+   A custom GatewayClass `llm-gateway`, configured with the LLM routing ext proc, will be installed into the `llm-gateway` namespace. When you create Gateways, make sure the `llm-gateway` GatewayClass is used.
 
-2. **Update `ext_proc.yaml`**
-   - Ensure the `ext_proc.yaml` is updated with the pod names and internal IP addresses of the vLLM replicas. This step is crucial for the correct routing of requests based on headers.
+   NOTE: Ensure the `llm-route-ext-proc` deployment is updated with the pod names and internal IP addresses of the vLLM replicas. This step is crucial for correct header-based routing. It will no longer be needed once the ext proc discovers the pods dynamically.
 
-2. **Update and apply `gateway.yaml`**
-   - Ensure the `gateway.yaml` is updated with the internal IP addresses of the ExtProc service. This step is also crucial for the correct routing of requests based on headers.
-   ```bash
-   cd manifests
-   kubectl apply -f gateway.yaml
+   ```bash
+   kubectl apply -f ./manifests/installation.yaml
+   ```
+3. **Deploy Gateway**
+
+   ```bash
+   kubectl apply -f ./manifests/samples/gateway.yaml
    ```
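For reference, a Gateway that uses the custom class follows the standard Kubernetes Gateway API shape. The sketch below is an assumption of what `./manifests/samples/gateway.yaml` contains (the listener name and port are illustrative; the repo's manifest is authoritative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: llm-gateway
spec:
  gatewayClassName: llm-gateway   # the custom class installed in step 2
  listeners:
    - name: http
      protocol: HTTP
      port: 8081
```

The key point is `gatewayClassName: llm-gateway`; a Gateway created with any other class will not go through the routing ext proc.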
 
-### Monitoring and Metrics
-
-- The Go application collects metrics and saves the latest response headers in memory.
-- Ensure Envoy is configured to route based on the metrics collected from the `/metric` endpoint of different service pods.
-
-## Contributing
+4. **Try it out**
+   Wait until the gateway is ready.
+   ```bash
+   IP=$(kubectl get gateway/llm-gateway -o jsonpath='{.status.addresses[0].value}')
+   PORT=8081
+
+   curl -i ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -d '{
+   "model": "tweet-summary",
+   "prompt": "Write as if you were a critic: San Francisco",
+   "max_tokens": 100,
+   "temperature": 0
+   }'
+   ```
 
-1. Fork the repository.
-2. Create a new branch.
-3. Make your changes.
-4. Open a pull request.
 
 ## License
 
 This project is licensed under the MIT License.
-
---