Current State
Currently, the Gateway API Inference Extension handles requests by directly invoking backend selection logic for each incoming request as it is received. This results in a First-Come-First-Serve (FCFS) dispatch order based on request arrival, with backend choice dependent on instantaneous pool state (LoRA affinity, KV cache utilization, backend queue lengths, etc.).
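For illustration, here is a minimal, self-contained sketch of that direct-dispatch flow. The names (Backend, selectBackend) are hypothetical stand-ins, not the actual EPP code, and the real scorers (LoRA affinity, KV cache, queue depth) are reduced to a single shortest-queue heuristic:

```go
// Hypothetical, simplified sketch of today's direct-dispatch flow; Backend and
// selectBackend are illustrative names, not the actual EPP implementation.
package main

import (
	"errors"
	"fmt"
)

// Backend stands in for a model server endpoint in the inference pool.
type Backend struct {
	Name       string
	QueueDepth int
}

// selectBackend picks a backend using only instantaneous pool state; the real
// scorers (LoRA affinity, KV cache, queue length) are reduced to shortest queue.
func selectBackend(pool []Backend) (*Backend, error) {
	if len(pool) == 0 {
		return nil, errors.New("no backends available")
	}
	best := &pool[0]
	for i := range pool {
		if pool[i].QueueDepth < best.QueueDepth {
			best = &pool[i]
		}
	}
	return best, nil
}

func main() {
	pool := []Backend{{Name: "pod-a", QueueDepth: 3}, {Name: "pod-b", QueueDepth: 1}}
	// Each request is handled the moment it arrives (FCFS by arrival); there is
	// no opportunity to reorder by criticality or apply inter-model fairness.
	backend, err := selectBackend(pool)
	if err != nil {
		// The only alternative to immediate dispatch is an immediate drop.
		fmt.Println("request dropped:", err)
		return
	}
	fmt.Println("dispatching to", backend.Name)
}
```

The constraint this sketch surfaces is that dispatch order is fixed at arrival time and the only fallback is a drop, which is exactly what the limitations below describe.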
Limitations
This direct dispatch model lacks mechanisms to:
Guarantee Criticality-Based Service Differentiation: Cannot ensure Critical requests are always dispatched before Default or Sheddable ones when InferenceModels are competing for limited pool resources.
Enforce Inter-Model (Dispatch/Fairness) Policies: Lacks a central point to apply dispatch policies (e.g., FCFS, Round Robin or other fairness definitions) between different InferenceModels of the same criticality.
Optimally Handle Contention/Saturation: Forces an immediate backend choice or request drop decision, potentially leading to suboptimal load distribution or unnecessary request failures when backends are only temporarily busy.
Feature Request: Centrally Queue Requests in the EPP before the Backend Selection Decision
Introduce a Queuing/Fairness Layer before the backend selection step (a rough sketch follows the list below). This involves:
Queuing requests per InferenceModel, grouped by criticality.
Allowing pluggable inter-model dispatch policies (e.g., FCFS, Round Robin, or other fairness definitions) within a priority band to manage inter-model fairness.
Basic queue management (TTLs, queue depth limits).
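As a rough sketch of what such a layer could look like (all names here, e.g. FairnessQueue, DispatchPolicy, OldestHeadFirst, are hypothetical and not a proposed API), per-model queues could be grouped into criticality bands, with a pluggable policy deciding which model is served next within a band and Enqueue enforcing a per-model depth limit:

```go
// Illustrative sketch only: these types and names are hypothetical, not the
// actual EPP code or a proposed API surface.
package queueing

import (
	"errors"
	"time"
)

// Criticality bands, highest priority first.
type Criticality int

const (
	Critical Criticality = iota
	Default
	Sheddable
)

// QueuedRequest is a request waiting for the backend selection decision.
type QueuedRequest struct {
	ModelName  string    // InferenceModel the request targets
	EnqueuedAt time.Time // used for TTL enforcement
	Payload    any
}

// DispatchPolicy decides which InferenceModel's queue to serve next within a
// single criticality band (e.g. FCFS, Round Robin, or another fairness rule).
type DispatchPolicy interface {
	NextModel(queuesByModel map[string][]*QueuedRequest) (string, error)
}

// FairnessQueue holds per-model queues grouped by criticality, plus basic
// queue management knobs (per-model depth limit, request TTL).
type FairnessQueue struct {
	bands    map[Criticality]map[string][]*QueuedRequest
	policy   DispatchPolicy
	maxDepth int
	ttl      time.Duration
}

var ErrQueueFull = errors.New("per-model queue limit reached")

// Enqueue adds a request under its model and criticality, enforcing the limit.
func (q *FairnessQueue) Enqueue(c Criticality, r *QueuedRequest) error {
	if q.bands == nil {
		q.bands = map[Criticality]map[string][]*QueuedRequest{}
	}
	if q.bands[c] == nil {
		q.bands[c] = map[string][]*QueuedRequest{}
	}
	if len(q.bands[c][r.ModelName]) >= q.maxDepth {
		return ErrQueueFull
	}
	q.bands[c][r.ModelName] = append(q.bands[c][r.ModelName], r)
	return nil
}

// OldestHeadFirst is a simple FCFS inter-model policy: the model whose head
// request has waited longest is served next within the band.
type OldestHeadFirst struct{}

func (OldestHeadFirst) NextModel(queues map[string][]*QueuedRequest) (string, error) {
	var pick string
	var oldest time.Time
	for model, reqs := range queues {
		if len(reqs) == 0 {
			continue
		}
		if pick == "" || reqs[0].EnqueuedAt.Before(oldest) {
			pick = model
			oldest = reqs[0].EnqueuedAt
		}
	}
	if pick == "" {
		return "", errors.New("no pending requests in band")
	}
	return pick, nil
}
```

A Round Robin policy would implement the same DispatchPolicy interface; the point of the interface is that the inter-model logic stays pluggable per priority band.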
Why Add Another Layer of Queuing?
Enforces Criticality: Guarantees priority before resources are committed via backend selection.
Enables Fairness Policies: Provides the necessary state and control point for inter-model dispatch logic.
Improves Contention Management:
Decouples which request is dispatched next from where it goes.
Allows requests to wait for better backend states instead of immediate suboptimal dispatch or dropping.
Potentially improves load distribution across the pool by considering the global demand of pending requests when dispatching, reducing the chance of overloading specific backends.
Supports more intelligent back pressure signals for upstream components.
Shifts Head-of-Line (HoL) Blocking: While HoL blocking can occur in model server queues today, introducing an EPP-level queue shifts this potential blocking point "left". The benefit is that when a request at the head of the EPP queue is dispatched, the system retains the flexibility to choose the best available backend from the entire pool at that moment, rather than having committed the request prematurely to a specific, potentially suboptimal, model server queue where it might block others.
Potential Performance Gains: By making more informed dispatch decisions (enabled by waiting and the global view) and improving load distribution (enabled by better backend selection flexibility for HoL requests), this approach may improve tail latency and overall throughput, especially near saturation points, compared to locking a request into a specific backend queue early (avoiding scheduling regret). While adding another queuing layer introduces new sources of queuing latency, the goal is for these gains to offset that overhead.
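Continuing the hypothetical types from the sketch above, a dispatch loop along these lines would walk the criticality bands from highest to lowest, shed expired requests, ask the pluggable policy which model's head request goes next, and only then run backend selection for that single request, keeping "which request is next" decoupled from "where it goes":

```go
// Dispatch builds on the hypothetical FairnessQueue sketched earlier. It walks
// criticality bands in priority order, applies the TTL, lets the policy choose
// a model within the band, and only then runs backend selection for that one
// head-of-line request.
func (q *FairnessQueue) Dispatch(selectBackend func(*QueuedRequest) (backend string, ok bool)) (*QueuedRequest, string, bool) {
	now := time.Now()
	for _, band := range []Criticality{Critical, Default, Sheddable} {
		queues := q.bands[band]

		// Basic queue management: shed requests that exceeded their TTL.
		for model, reqs := range queues {
			kept := reqs[:0]
			for _, r := range reqs {
				if now.Sub(r.EnqueuedAt) <= q.ttl {
					kept = append(kept, r)
				}
			}
			if len(kept) == 0 {
				delete(queues, model)
			} else {
				queues[model] = kept
			}
		}
		if len(queues) == 0 {
			continue // nothing pending at this criticality
		}

		// Inter-model fairness: the pluggable policy picks the next model.
		model, err := q.policy.NextModel(queues)
		if err != nil {
			continue
		}
		head := queues[model][0]

		// Backend selection happens only now, against current pool state, so
		// the request was never committed early to a specific server queue.
		backend, ok := selectBackend(head)
		if !ok {
			// No suitable backend right now: leave the request queued and let
			// the caller retry later instead of dropping it immediately.
			return nil, "", false
		}

		// Pop the head request and hand it off for forwarding.
		queues[model] = queues[model][1:]
		if len(queues[model]) == 0 {
			delete(queues, model)
		}
		return head, backend, true
	}
	return nil, "", false // all queues empty
}
```

A caller could run this loop whenever pool state changes (for example, when a backend's queue drains), which is what lets a head-of-line request wait for a better backend rather than being dropped or committed early.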