Simplify POC installation #8
Changes from all commits
`README.md`:
@@ -1,68 +1,55 @@
# Envoy Ext Proc Gateway with LoRA Integration

This project sets up an Envoy gateway to handle gRPC calls with integration of LoRA (Low-Rank Adaptation). The configuration aims to manage gRPC traffic through Envoy's external processing and custom routing based on headers and load balancing rules. The setup includes Kubernetes services and deployments for both the gRPC server and the vllm-lora application.

This project sets up an Envoy gateway with a custom external processor which implements advanced routing logic tailored for LoRA (Low-Rank Adaptation) adapters. Routing decisions are based on the model specified in the request (using the OpenAI API format) and on model server metrics, ensuring efficient load balancing.


## Requirements
- A vLLM-based deployment (using the custom image provided below), with LoRA adapters
- Kubernetes cluster
- Envoy Gateway v1.1 installed on your cluster: https://gateway.envoyproxy.io/v1.1/tasks/quickstart/
- `kubectl` command-line tool
- Go (for local development)

## vLLM
***This PoC uses a modified vLLM fork; the public image of the fork is here: `ghcr.io/tomatillo-and-multiverse/vllm:demo`***

The fork is here: https://github.com/kaushikmitr/vllm.

The changes from standard vLLM are:
- Active/registered LoRA adapters are returned as a response header (used for LoRA-aware routing)
- Queue size is returned as a response header
- Active/registered LoRA adapters are emitted as metrics (for out-of-band scraping during low-traffic periods)

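As a rough way to see those extra headers, you can send a request straight to a replica of the fork with `curl -i`. This is only a sketch: the serving port (8000, vLLM's usual default) and the header names shown in the comment are assumptions, not confirmed against the fork's code.

```bash
# Send a completion request directly to one vLLM replica and print the
# response headers the fork adds. <vllm-pod-ip> is a placeholder; the
# port and the header names below are illustrative assumptions.
curl -i http://<vllm-pod-ip>:8000/v1/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "tweet-summary", "prompt": "hello", "max_tokens": 5}'
# Look for headers along the lines of:
#   active-lora-adapters: ...
#   queue-size: ...
```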
## Overview

This project contains the necessary configurations and code to set up and deploy a service using Kubernetes, Envoy, and Go. The service routes requests based on the model specified (using the OpenAI API format), collects metrics, and ensures efficient load balancing.



- A vLLM-based deployment using a custom fork, with LoRA adapters. ***This PoC uses a modified vLLM [fork](https://github.com/kaushikmitr/vllm); the public image of the fork is here: `ghcr.io/tomatillo-and-multiverse/vllm:demo`***. A sample deployment is provided under `./manifests/samples/vllm-lora-deployment.yaml`.

## Quickstart

### Steps
1. **Deploy Sample vLLM Application**
NOTE: Create a HuggingFace API token and store it in a secret named `hf-token` with key `hf_api_token`. This is configured in the `HUGGING_FACE_HUB_TOKEN` and `HF_TOKEN` environment variables in `./manifests/samples/vllm-lora-deployment.yaml`.

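For reference, a minimal sketch of creating that secret; the secret name and key come from the note above, while `$HF_TOKEN_VALUE` is a placeholder introduced for this example:

```bash
# Create the secret the sample deployment mounts. Name (hf-token) and
# key (hf_api_token) come from the note above; $HF_TOKEN_VALUE is a
# placeholder for your actual HuggingFace token.
kubectl create secret generic hf-token \
  --from-literal=hf_api_token="$HF_TOKEN_VALUE"
```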
1. **Apply Kubernetes Manifests**
```bash
cd manifests
kubectl apply -f ext_proc.yaml
kubectl apply -f vllm/vllm-lora-service.yaml
kubectl apply -f vllm/vllm-lora-deployment.yaml
kubectl apply -f ./manifests/samples/vllm-lora-deployment.yaml
kubectl apply -f ./manifests/samples/vllm-lora-service.yaml
```
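After applying, a quick sanity check that the vLLM pods came up; the `app=vllm` label is taken from the service selector shown later in this diff:

```bash
# List the vLLM pods; the selector matches the vllm-lora service's
# app: vllm label from this PR's service manifest.
kubectl get pods -l app=vllm -n default
```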
2. **Install GatewayClass with Ext Proc**
A custom GatewayClass `llm-gateway`, configured with the LLM routing ext proc, will be installed into the `llm-gateway` namespace. It is configured to listen on port 8081 for traffic through ext-proc (in addition to the default 8080); see the `EnvoyProxy` configuration in `installation.yaml`. When you create Gateways, make sure the `llm-gateway` GatewayClass is used.

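For orientation, a GatewayClass bound to Envoy Gateway with an `EnvoyProxy` parameters reference generally looks like the sketch below. The GatewayClass name and namespace follow the step above; the `EnvoyProxy` resource name is hypothetical, so verify against `./manifests/installation.yaml`:

```yaml
# Sketch of a GatewayClass wired to Envoy Gateway, pointing at an
# EnvoyProxy config that adds the extra 8081 ext-proc listener.
# llm-gateway-proxy is a hypothetical name for this example.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: llm-gateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: llm-gateway-proxy
    namespace: llm-gateway
```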
2. **Update `ext_proc.yaml`**
- Ensure `ext_proc.yaml` is updated with the pod names and internal IP addresses of the vLLM replicas. This step is crucial for correct routing of requests based on headers.
NOTE: Ensure the `llm-route-ext-proc` deployment is updated with the pod names and internal IP addresses of the vLLM replicas. This step is crucial for correct routing of requests based on headers. This won't be needed once the ext proc reads the pods dynamically.

2. **Update and apply `gateway.yaml`**
- Ensure `gateway.yaml` is updated with the internal IP addresses of the ExtProc service. This step is also crucial for correct routing of requests based on headers.
```bash
cd manifests
kubectl apply -f gateway.yaml
```
```bash
kubectl apply -f ./manifests/installation.yaml
```
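A quick check that the installation landed; the resource and namespace names come from the step above:

```bash
# Confirm the GatewayClass exists and the ext-proc components are
# running in the llm-gateway namespace.
kubectl get gatewayclass llm-gateway
kubectl get pods -n llm-gateway
```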
3. **Deploy Gateway**

```bash
kubectl apply -f ./manifests/samples/gateway.yaml
```

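Step 4 below starts with "Wait until the gateway is ready"; one way to script that wait, assuming the Gateway reports the standard Gateway API `Programmed` condition, is:

```bash
# Block until Envoy Gateway marks the Gateway as programmed.
# The Programmed condition is the standard Gateway API readiness
# signal; the 120s timeout is an arbitrary choice for this sketch.
kubectl wait gateway/llm-gateway --for=condition=Programmed --timeout=120s
```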
### Monitoring and Metrics

- The Go application collects metrics and saves the latest response headers in memory.
- Ensure Envoy is configured to route based on the metrics collected from the `/metric` endpoint of different service pods.

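To eyeball those metrics by hand, one option is to port-forward a vLLM replica and hit the endpoint directly. The `/metric` path is taken from the bullet above; the pod-name placeholder and the 8000 container port are assumptions in this sketch:

```bash
# Port-forward one vLLM replica and scrape its metrics endpoint.
# <vllm-pod-name> is a placeholder; 8000 is vLLM's usual serving port.
kubectl port-forward pod/<vllm-pod-name> 8000:8000 &
curl -s http://localhost:8000/metric | grep -i lora
```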
## Contributing
4. **Try it out**
Wait until the gateway is ready.
```bash
IP=$(kubectl get gateway/llm-gateway -o jsonpath='{.status.addresses[0].value}')
PORT=8081

curl -i ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -d '{
"model": "tweet-summary",
"prompt": "Write as if you were a critic: San Francisco",
"max_tokens": 100,
"temperature": 0
}'
```

**Review comment:** Isn't this wrong? llm-instance-gw is listening on 8080. Am I wrong?

**Reply:** Good question! Actually in the POC setup, Envoy is configured with the additional 8081 port for ext proc traffic. Updated the README.

**Reply:** Oh yes, you're right actually. Sorry for the confusion.

1. Fork the repository.
2. Create a new branch.
3. Make your changes.
4. Open a pull request.

## License

This project is licensed under the MIT License.

---
This file was deleted.
`manifests/samples/gateway.yaml`:
@@ -0,0 +1,12 @@
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: llm-gateway
spec:
  gatewayClassName: llm-gateway
  listeners:
  - name: http
    protocol: HTTP
    port: 8080
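Note that this Gateway manifest only declares the default 8080 HTTP listener; the 8081 ext-proc listener discussed in the review thread above comes from the `EnvoyProxy` configuration referenced by the `llm-gateway` GatewayClass in `installation.yaml`, not from this file.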
`vllm-lora-service.yaml`:
@@ -4,7 +4,6 @@ metadata:
  name: vllm-lora
  namespace: default
spec:
  clusterIP: None
  selector:
    app: vllm
  ports:
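Removing `clusterIP: None` turns this headless service into a regular ClusterIP service, so traffic to it is load-balanced across the vLLM replicas rather than resolved to individual pod IPs. A quick way to confirm after applying (a sketch):

```bash
# A headless service reports "None" here; after this change the
# service should report an allocated cluster IP instead.
kubectl get service vllm-lora -n default -o jsonpath='{.spec.clusterIP}'
```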
**Review comment:** One more extra space, between `which` and `implements`.