flatcar-sysext template won't start without adding kustomize-deleted apiServerLoadBalancer #2202

Open
cringdahl opened this issue Oct 18, 2024 · 5 comments · May be fixed by #2341
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@cringdahl

/kind bug

What steps did you take and what happened:
Executing clusterctl generate cluster --flavor flatcar-sysext results in a cluster failing with FloatingIPError.
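
For context, the cluster was generated and applied roughly like this (the name, version, and machine counts are illustrative, and the OpenStack environment variables the template expects are omitted):

$ clusterctl generate cluster test \
    --flavor flatcar-sysext \
    --kubernetes-version v1.31.0 \
    --control-plane-machine-count 1 \
    --worker-machine-count 1 > test-cluster.yaml
$ kubectl apply -f test-cluster.yaml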

$ clusterctl describe cluster test
NAME                                                     READY  SEVERITY  REASON                                              SINCE  MESSAGE                                                                                                  
Cluster/test                                             False  Error     FloatingIPError @ Machine/test-control-plane-tvr7h  41m    0 of 1 completed                                                                                          
├─ClusterInfrastructure - OpenStackCluster/test                                                                                                                                                                                                
└─ControlPlane - KubeadmControlPlane/test-control-plane  False  Error     FloatingIPError @ Machine/test-control-plane-tvr7h  41m    0 of 1 completed                                                                                          
  └─Machine/test-control-plane-tvr7h                     False  Error     FloatingIPError                                     41m    Obtaining management port for control plane machine failed: lookup management port for server 7d8c04 ...

The IP is pulled from the pool and assigned to the OpenStackCluster resource, but it is never actually attached to anything.

This issue does not occur when no --flavor is used. The only differences between the default flavor (with Ubuntu) and flatcar-sysext (with Flatcar) are the Flatcar-specific additions and the absence of apiServerLoadBalancer in the Flatcar OpenStackCluster resource. Once I add apiServerLoadBalancer back into the generated cluster YAML and apply it, the cluster becomes operational.
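
For concreteness, this is roughly the block I add back to the generated OpenStackCluster before re-applying (a minimal sketch; the apiVersion and any additional load-balancer fields depend on the CAPO release in use):

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackCluster
metadata:
  name: test
spec:
  # Re-enable the managed API server load balancer that the
  # flatcar-sysext template strips out.
  apiServerLoadBalancer:
    enabled: true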

What did you expect to happen:
I expected the generated cluster to work right out of the box, without having to add anything.

Anything else you would like to add:
The spec.apiServerLoadBalancer section is configured for deletion here, which is what is actively causing the issue.
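
I haven't pasted the exact patch here, but a kustomize deletion of that field generally looks something like this (sketch only; the target and path are illustrative, not copied from the repo):

# kustomization.yaml (illustrative sketch)
patches:
  - target:
      kind: OpenStackCluster
    patch: |-
      - op: remove
        path: /spec/apiServerLoadBalancer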

Environment:

  • Cluster API Provider OpenStack version (Or git rev-parse HEAD if manually built): latest
  • Cluster-API version: clusterctl version: &version.Info{Major:"", Minor:"", GitVersion:"1.8.4", GitCommit:"brew", GitTreeState:"clean", BuildDate:"2024-10-08T05:24:23Z", GoVersion:"go1.23.2", Compiler:"gc", Platform:"darwin/amd64"}
  • OpenStack version: Queens (nothing I can do about this)
  • Minikube/KIND version: 1.31
  • Kubernetes version (use kubectl version): 1.31
  • OS (e.g. from /etc/os-release): Flatcar 3975.2.1
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Oct 18, 2024
@cringdahl (Author)

@zhangguanzhang why the thumbs-down reaction?

@tormath1 (Contributor)

@cringdahl Hey, I implemented the flatcar-sysext template and IIRC it was based on the without-lb template, which would explain this issue.

Would you mind sending a PR to fix that? In the meantime I will try to repro. Thanks (and sorry for the delay)

@cringdahl cringdahl linked a pull request Dec 19, 2024 that will close this issue
@cringdahl (Author)

@tormath1 no sweat on the wait, PR submitted

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 19, 2025
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 18, 2025