
v1.list_pod_for_all_namespaces(watch=False) takes too long #1231


Closed
zzmg opened this issue Aug 11, 2020 · 5 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


zzmg commented Aug 11, 2020

What happened (please include outputs or screenshots):

In our cluster there are about 900 pods in total. Calling v1.list_pod_for_all_namespaces(watch=False) takes about three seconds, while api_instance.list_deployment_for_all_namespaces() takes about one second. Inside the cluster, kubectl get pods --all-namespaces also takes about one second.

What you expected to happen:
Listing pods should be roughly as fast as listing deployments or running kubectl. Can the list-pod interface be optimized?
How to reproduce it (as minimally and precisely as possible):
Try it in your own cluster; a minimal timing sketch follows.
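
A minimal timing sketch (not part of the original report), assuming kubeconfig access and the official kubernetes Python client:

```python
import time

from kubernetes import client, config

# Assumption: a kubeconfig is available; inside a pod, use
# config.load_incluster_config() instead.
config.load_kube_config()
v1 = client.CoreV1Api()

start = time.perf_counter()
pods = v1.list_pod_for_all_namespaces(watch=False)
elapsed = time.perf_counter() - start
print(f"listed {len(pods.items)} pods in {elapsed:.2f}s")
```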
Anything else we need to know?:
Nothing else.
Environment:

  • Kubernetes version (kubectl version): v1.10.5
  • OS (e.g., MacOS 10.13.6): Ubuntu 18.04
  • Python version (python --version): 3.6
  • Python client version (pip list | grep kubernetes): kubernetes 11.0.0
@zzmg zzmg added the kind/bug Categorizes issue or PR as related to a bug. label Aug 11, 2020
roycaihw (Member) commented

I think the main difference is the encoding: the Python client uses JSON, while kubectl uses protobuf. Ref #166
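
Much of that gap is the client-side cost of deserializing the JSON body into swagger model objects. A rough sketch for measuring it (assuming the generated client's `_preload_content=False` kwarg, which returns the raw urllib3 response instead of V1Pod objects):

```python
import json
import time

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Skip model deserialization and parse the raw JSON ourselves; comparing
# this against the default call approximates the decoding overhead.
start = time.perf_counter()
resp = v1.list_pod_for_all_namespaces(watch=False, _preload_content=False)
items = json.loads(resp.data)["items"]
print(f"raw JSON: {len(items)} pods in {time.perf_counter() - start:.2f}s")
```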

fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 17, 2020
fejta-bot commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 17, 2020
fejta-bot commented

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot (Contributor) commented

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
