Removed HF token from CPU-based example #464
Changes from all commits: 014b003, e55efc3, e1584ca, ddc8381, d3d0dff, 1d2bedc, 057d176
```diff
@@ -26,16 +26,11 @@ spec:
         - "--max-loras"
         - "4"
         - "--lora-modules"
-        - '{"name": "tweet-summary-0", "path": "/adapters/ai-blond/Qwen-Qwen2.5-Coder-1.5B-Instruct-lora_0"}'
-        - '{"name": "tweet-summary-1", "path": "/adapters/ai-blond/Qwen-Qwen2.5-Coder-1.5B-Instruct-lora_1"}'
+        - '{"name": "tweet-summary-0", "path": "SriSanth2345/Qwen-1.5B-Tweet-Generations", "base_model_name": "Qwen/Qwen2.5-1.5B"}'
+        - '{"name": "tweet-summary-1", "path": "SriSanth2345/Qwen-1.5B-Tweet-Generations", "base_model_name": "Qwen/Qwen2.5-1.5B"}'
         env:
         - name: PORT
           value: "8000"
-        - name: HUGGING_FACE_HUB_TOKEN
-          valueFrom:
-            secretKeyRef:
-              name: hf-token
-              key: token
         - name: VLLM_ALLOW_RUNTIME_LORA_UPDATING
           value: "true"
         - name: VLLM_CPU_KVCACHE_SPACE
```
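The hunk above switches each `--lora-modules` value from a local volume path to a Hugging Face repo ID with an explicit `base_model_name`. vLLM accepts each of these flag values as a JSON object. As a hedged sketch (not vLLM's actual parser), this is the shape the manifest relies on — `parse_lora_module` is an illustrative helper, not a real vLLM function:

```python
import json

def parse_lora_module(flag_value: str) -> dict:
    """Parse one --lora-modules value as the manifest supplies it:
    a JSON object with 'name', 'path', and optionally 'base_model_name'.
    Illustrative helper; vLLM does its own validation."""
    module = json.loads(flag_value)
    if "name" not in module or "path" not in module:
        raise ValueError("--lora-modules JSON needs 'name' and 'path'")
    return module

# One of the flag values from the diff above.
flag = ('{"name": "tweet-summary-0", '
        '"path": "SriSanth2345/Qwen-1.5B-Tweet-Generations", '
        '"base_model_name": "Qwen/Qwen2.5-1.5B"}')
mod = parse_lora_module(flag)
```

Because `path` is now a Hugging Face repo ID rather than a path under `/adapters`, the adapter-puller init container and its HF token secret become unnecessary — which is the point of this PR.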
```diff
@@ -64,6 +59,13 @@ spec:
           periodSeconds: 5
           successThreshold: 1
           timeoutSeconds: 1
+        resources:
+          limits:
+            cpu: "12"
+            memory: "9000Mi"
+          requests:
+            cpu: "12"
+            memory: "9000Mi"
```
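The new `resources` block uses Kubernetes quantity notation: `"12"` is 12 whole CPUs, and the `Mi` suffix is a binary megabyte (2^20 bytes), not 10^6. A small sketch of that convention, under the assumption that only plain integers and binary suffixes appear in this manifest:

```python
# Sketch: convert the Kubernetes binary memory quantities used in this
# manifest (Ki/Mi/Gi suffixes, or a bare byte count) to bytes.
BINARY_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

def quantity_to_bytes(q: str) -> int:
    for suffix, factor in BINARY_SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # no suffix: plain bytes

limit = quantity_to_bytes("9000Mi")  # the memory limit above
```

Setting requests equal to limits gives the pod Guaranteed QoS, which is a common choice for a CPU-bound inference server.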
Review comments:

- Do we need the adapter-loader initContainer? We removed that for the GPU deployment.
- When removing the init container, I'm getting an error about a missing adapter.
- The solution is to have the adapters in the flags above point directly to Hugging Face; right now they point to the volume that is created and populated by the sidecar, which is not necessary. Please see the GPU deployment as an example.
- @ahg-g Good pointer. I found adapters on Hugging Face and tested that it works without the init container.
- This is missing the lora-syncer sidecar and configmap; otherwise the LoRA rollout guide wouldn't work. Please see the gpu-deployment.yaml file and try to mirror it.
- I'm not sure I understand what exactly wouldn't work, but I've added the configmap and sidecar as requested. OK, I've read the "Adapter Rollout" README file — got it.
```diff
         volumeMounts:
         - mountPath: /data
           name: data
```
```diff
@@ -72,26 +74,18 @@ spec:
         - name: adapters
           mountPath: "/adapters"
       initContainers:
-      - name: adapter-loader
-        image: ghcr.io/tomatillo-and-multiverse/adapter-puller:demo
-        command: ["python"]
-        args:
-        - ./pull_adapters.py
-        - --adapter
-        - ai-blond/Qwen-Qwen2.5-Coder-1.5B-Instruct-lora
-        - --duplicate-count
-        - "4"
+      - name: lora-adapter-syncer
+        tty: true
+        stdin: true
+        image: us-central1-docker.pkg.dev/k8s-staging-images/gateway-api-inference-extension/lora-syncer:main
+        restartPolicy: Always
+        imagePullPolicy: Always
         env:
-        - name: HF_TOKEN
-          valueFrom:
-            secretKeyRef:
-              name: hf-token
-              key: token
-        - name: HF_HOME
-          value: /adapters
-        volumeMounts:
-        - name: adapters
-          mountPath: "/adapters"
+        - name: DYNAMIC_LORA_ROLLOUT_CONFIG
+          value: "/config/configmap.yaml"
+        volumeMounts: # DO NOT USE subPath, dynamic configmap updates don't work on subPaths
+        - name: config-volume
+          mountPath: /config
       restartPolicy: Always
       schedulerName: default-scheduler
       terminationGracePeriodSeconds: 30
```
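The lora-adapter-syncer sidecar reads the config file named by `DYNAMIC_LORA_ROLLOUT_CONFIG` and reconciles the adapters the vLLM server has loaded against the configured desired state. A hedged sketch of that reconcile step — the function name and exact semantics are illustrative, not the syncer's actual code (the real syncer also reacts to configmap updates, which is why the comment in the diff forbids `subPath` mounts):

```python
# Illustrative reconcile step: given the desired adapter sets from the
# config and the ids currently served, decide what to load and unload.
def plan_adapter_changes(ensure_exist, ensure_not_exist, loaded):
    """Return (adapter entries to load, adapter ids to unload)."""
    to_load = [m for m in ensure_exist if m["id"] not in loaded]
    to_unload = [m["id"] for m in ensure_not_exist if m["id"] in loaded]
    return to_load, to_unload

# Mirrors the ensureExist entry in the vllm-qwen-adapters ConfigMap.
ensure = [{"id": "tweet-summary-1",
           "source": "SriSanth2345/Qwen-1.5B-Tweet-Generations"}]
to_load, to_unload = plan_adapter_changes(ensure, [], {"tweet-summary-0"})
```

Running the syncer as an init container with `restartPolicy: Always` makes it a native Kubernetes sidecar: it starts before the main container and keeps running alongside it.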
```diff
@@ -103,3 +97,21 @@ spec:
           medium: Memory
       - name: adapters
         emptyDir: {}
+      - name: config-volume
+        configMap:
+          name: vllm-qwen-adapters
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: vllm-qwen-adapters
+data:
+  configmap.yaml: |
+    vLLMLoRAConfig:
+      name: vllm-llama2-7b
+      port: 8000
+      ensureExist:
+        models:
+        - base-model: Qwen/Qwen2.5-1.5B
+          id: tweet-summary-1
+          source: SriSanth2345/Qwen-1.5B-Tweet-Generations
```
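To act on `ensureExist`, the syncer drives vLLM's dynamic LoRA endpoints, which the deployment enables with `VLLM_ALLOW_RUNTIME_LORA_UPDATING=true`. As a hedged sketch, the load request looks roughly like this — endpoint path and field names are per vLLM's documented `/v1/load_lora_adapter` API, but verify them against your vLLM version:

```python
import json

# Sketch: build the request for vLLM's dynamic LoRA load endpoint.
# The helper is illustrative; only the endpoint path and payload field
# names are taken from vLLM's documented API.
def load_request(adapter_id: str, source: str) -> tuple[str, str]:
    """Endpoint path and JSON body to load one adapter from ensureExist."""
    body = json.dumps({"lora_name": adapter_id, "lora_path": source})
    return "/v1/load_lora_adapter", body

path, body = load_request("tweet-summary-1",
                          "SriSanth2345/Qwen-1.5B-Tweet-Generations")
# POST this body to http://<pod>:8000/v1/load_lora_adapter; unloading goes
# through /v1/unload_lora_adapter with just {"lora_name": ...}.
```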