add EndpointSlice consumer helper functions #124777

Open
danwinship opened this issue May 9, 2024 · 2 comments · May be fixed by #131376
Labels
priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. sig/network Categorizes an issue or PR as relevant to SIG Network. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@danwinship
Contributor

As discussed in the SIG Network meeting on May 9, we should provide "EndpointSlice helpers" to help people get away from Endpoints (without using EndpointSlices in a naive way that breaks for large services).

In particular, we want something that acts sort of like an Informer/Lister, but that handles merging together multiple slices for the same service. So, e.g., if a service has 3 slices and you do a Get on that service, you get all of the endpoints across all three slices; and if 1 of the slices gets updated, you get an Updated event that includes the new combined set of endpoints, not just the endpoints for that one slice.

(I'm not totally sure exactly what this API should look like. One possibility would be that Get would return a []*discoveryv1.EndpointSlice rather than a single *discoveryv1.EndpointSlice. Though this would mean that when endpoints were being moved between slices, there would be temporary states where a single endpoint appeared in two different slices. Another possibility would be to have it synthesize fake EndpointSlice objects that just ignored the normal maximums and included all of the endpoints from across all of the slices (with the cache code dealing with de-duping). Though, actually, that doesn't work, because the different slices might have different Ports, so you'd still need to return an array of slices. So maybe we do want the first idea, just with a prominent warning that duplicates may exist.)
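A minimal sketch of that first idea — merging all of a service's slices into one deduped view, with duplicates (from endpoints mid-move between slices) collapsed by the cache. The types here are simplified stand-ins for illustration, not the real discoveryv1 API:

```go
package main

import "fmt"

// Simplified stand-ins for the real discoveryv1 types (assumption:
// only the fields needed to illustrate merging are included).
type Endpoint struct {
	Address string
}

type EndpointSlice struct {
	Service   string
	Endpoints []Endpoint
}

// MergeSlices returns the union of endpoints across all slices for a
// service, deduplicating by address. While an endpoint is being moved
// between slices it may briefly appear in two of them; the seen map
// ensures it is reported only once.
func MergeSlices(slices []EndpointSlice) []Endpoint {
	seen := map[string]bool{}
	var merged []Endpoint
	for _, s := range slices {
		for _, ep := range s.Endpoints {
			if !seen[ep.Address] {
				seen[ep.Address] = true
				merged = append(merged, ep)
			}
		}
	}
	return merged
}

func main() {
	slices := []EndpointSlice{
		{Service: "web", Endpoints: []Endpoint{{"10.1.2.3"}, {"10.1.2.4"}}},
		// 10.1.2.4 appears twice: it is mid-move between slices.
		{Service: "web", Endpoints: []Endpoint{{"10.1.2.4"}, {"10.1.2.5"}}},
	}
	fmt.Println(len(MergeSlices(slices))) // prints 3
}
```

A real helper would also have to key the dedupe on more than the address (e.g. ports and conditions), which is exactly where the named-ports problem discussed below comes in.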

kube-proxy's EndpointSliceCache may provide a starting point, though it's too tied to other internal kube-proxy APIs to actually be useful as-is.

The code should go in some staged repo. I guess maybe k8s.io/endpointslice would be an obvious place? (It's currently only EndpointSlice controller code, but it could have EndpointSlice consumers too?) If not there, then maybe k8s.io/component-helpers.

/sig network
/priority important-longterm
/triage accepted
/cc @robscott

@k8s-ci-robot k8s-ci-robot added sig/network Categorizes an issue or PR as relevant to SIG Network. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. triage/accepted Indicates an issue or PR is ready to be actively worked on. labels May 9, 2024
@robscott
Member

@danwinship Agree with your ideas here; I think we essentially just want a library that dedupes EndpointSlices as far as possible. In most cases that's going to result in a single slice (which is largely already the case, since most Services have <100 endpoints).

Unfortunately as you mentioned, named ports exist, so even if we have <100 endpoints, it's possible that every endpoint/Pod is listening on a different port, and the only way to represent that is with a separate EndpointSlice for each unique Service Port -> Endpoint Port combination. Since we'd be less concerned about the most efficient form across the wire, this library could just have a single list of endpoints for each service port and embed the endpoint port alongside the endpoint information. That might look something like this:

endpointsByPort:
- port:
    name: http
    protocol: TCP
  endpoints:
    - addresses:
        - "10.1.2.3"
      port: 80
      conditions:
        ready: true
      hostname: pod-1
      nodeName: node-1
      zone: us-west2-a

With that relatively minimal change, we could fit all endpoints in a single list with this library. Unfortunately it would not allow implementations to reuse the EndpointSlice API types directly though.
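The shape above could be modeled with types like the following. This is only a sketch of the YAML proposal in this comment — the type and field names are assumptions, not a real Kubernetes API — with the endpoint's target port embedded alongside each endpoint so that a named port resolving to different container ports can still share one list per service port:

```go
package main

import "fmt"

// PortKey identifies one service port (assumed sketch of the "port"
// object in the endpointsByPort example above).
type PortKey struct {
	Name     string
	Protocol string
}

// MergedEndpoint carries the per-endpoint target port inline, which
// is what lets endpoints with different resolved ports live in the
// same list (unlike the on-the-wire EndpointSlice API).
type MergedEndpoint struct {
	Addresses []string
	Port      int32 // per-endpoint target port; may differ for named ports
	Ready     bool
	Hostname  string
	NodeName  string
	Zone      string
}

// EndpointsByPort maps each service port to its merged endpoint list.
type EndpointsByPort map[PortKey][]MergedEndpoint

func main() {
	eps := EndpointsByPort{
		{Name: "http", Protocol: "TCP"}: {
			{Addresses: []string{"10.1.2.3"}, Port: 80, Ready: true,
				Hostname: "pod-1", NodeName: "node-1", Zone: "us-west2-a"},
			// A second pod whose named port resolves differently:
			{Addresses: []string{"10.1.2.4"}, Port: 8080, Ready: true,
				Hostname: "pod-2", NodeName: "node-1", Zone: "us-west2-a"},
		},
	}
	fmt.Println(len(eps[PortKey{Name: "http", Protocol: "TCP"}])) // prints 2
}
```

As the comment notes, the trade-off is that consumers of this shape could not reuse the EndpointSlice API types directly.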

@nishchay-veer

Hey @danwinship, I'm interested in working on the EndpointSlice helpers and would like to take this on.

mbergo added a commit to mbergo/kubernetes that referenced this issue Apr 18, 2025
This commit adds helper functions for EndpointSlice consumers to make it easier to transition from Endpoints to EndpointSlices. The new package provides:

1. EndpointSliceConsumer - Core component that tracks EndpointSlices and provides a unified view of endpoints for a service
2. EndpointSliceInformer - Informer-like interface for EndpointSlices
3. EndpointSliceLister - Lister-like interface for EndpointSlices

These helpers handle the complexity of merging multiple slices for the same service and deduplicating endpoints that might appear in multiple slices.

Benefits:
- Easier migration from Endpoints to EndpointSlices with familiar interfaces
- Simplified handling of multiple slices without manual merging and deduplication
- Improved performance by leveraging the scalability of the EndpointSlice API
- Consistent view of endpoints even as they move between slices

Fixes kubernetes#124777

Signed-off-by: Mad Bergo <[email protected]>
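To illustrate the consumer behavior the commit describes — any single slice changing produces a notification with the full merged endpoint set for the service, not just that slice's endpoints — here is a hypothetical sketch. The type and method names are illustrative assumptions, not the linked PR's actual API:

```go
package main

import "fmt"

// Consumer tracks slices per service and notifies a handler with the
// merged, deduped endpoint set whenever any one slice changes.
// (Hypothetical sketch; addresses stand in for full endpoint objects.)
type Consumer struct {
	slices  map[string]map[string][]string // service -> slice name -> addresses
	handler func(service string, merged []string)
}

func NewConsumer(h func(string, []string)) *Consumer {
	return &Consumer{slices: map[string]map[string][]string{}, handler: h}
}

// OnSliceUpdate records one slice's new contents, then delivers the
// combined endpoint set across all of the service's slices.
func (c *Consumer) OnSliceUpdate(service, slice string, addrs []string) {
	if c.slices[service] == nil {
		c.slices[service] = map[string][]string{}
	}
	c.slices[service][slice] = addrs
	seen := map[string]bool{}
	var merged []string
	for _, s := range c.slices[service] {
		for _, a := range s {
			if !seen[a] {
				seen[a] = true
				merged = append(merged, a)
			}
		}
	}
	c.handler(service, merged)
}

func main() {
	c := NewConsumer(func(svc string, merged []string) {
		fmt.Println(svc, len(merged))
	})
	c.OnSliceUpdate("web", "web-abc", []string{"10.0.0.1"})
	// Updating a second slice reports the combined set (3 endpoints),
	// with 10.0.0.2 deduped if it later appeared in both slices.
	c.OnSliceUpdate("web", "web-def", []string{"10.0.0.2", "10.0.0.3"})
}
```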
@mbergo mbergo linked a pull request Apr 18, 2025 that will close this issue