Create unmanaged kubernetes cluster on CloudStack #250
Conversation
Hi @vishesh92. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
✅ Deploy Preview for kubernetes-sigs-cluster-api-cloudstack ready! |
(force-pushed from 8a2fcf4 to 7b3e85e)
Hi @vishesh92. I took a look at some of the other changes in CloudStack that you referenced and I'm struggling to understand what we're trying to achieve and why. Would you mind elaborating? |
Hi @chrisdoherty4 |
@vishesh92 So populating this value would bind VMs to an Apache CloudStack Cluster ('cluster' is overloaded in this context)?
|
We are doing two operations when the CAPC-managed kubernetes cluster comes up:
In short, we are just adding some information on CloudStack about the kubernetes cluster which is managed by CAPC. |
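For context, the "adding some information" step boils down to a single CloudStack API call that records the cluster as an externally managed (unmanaged) Kubernetes cluster so it shows up in the UI. Below is a minimal sketch using the cloudstack-go client; the Kubernetes service accessor, the params constructor, and the "ExternalManaged" cluster type are assumptions based on the generated-API style and the linked CloudStack changes, not code from this PR.

```go
// Minimal sketch (not the PR's actual code): register a CAPC-managed
// workload cluster in CloudStack as an externally managed Kubernetes
// cluster so that it is visible in the CloudStack UI.
// The Kubernetes service accessor, params constructor, and the
// "ExternalManaged" cluster type below are assumptions.
package main

import (
	"fmt"
	"log"

	"github.com/apache/cloudstack-go/v2/cloudstack"
)

func main() {
	// Endpoint and credentials are placeholders.
	cs := cloudstack.NewAsyncClient("https://acs.example.com/client/api",
		"API_KEY", "SECRET_KEY", false)

	// Assumed generated constructor for the createKubernetesCluster API.
	p := cs.Kubernetes.NewCreateKubernetesClusterParams(
		"capc-cluster-1",          // description
		"<kubernetes-version-id>", // kubernetesversionid
		"capc-cluster-1",          // name
		"<service-offering-id>",   // serviceofferingid
		0,                         // size; illustrative, node lifecycle stays with CAPC
		"<zone-id>",               // zoneid
	)
	// Record the cluster as externally managed; CloudStack only tracks it.
	p.SetClustertype("ExternalManaged")

	resp, err := cs.Kubernetes.CreateKubernetesCluster(p)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("registered unmanaged cluster:", resp.Id)
}
```

The second of the two operations mentioned above would follow the same pattern against the CloudStack API and is omitted here.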
(force-pushed from 7b3e85e to d6d9af2)
Ping @vishesh92, is this up to date? |
(force-pushed from d6d9af2 to ba2d468)
(force-pushed from 7af31c6 to 38bd29b)
Codecov Report. Patch coverage:
Additional details and impacted files:
@@ Coverage Diff @@
## main #250 +/- ##
==========================================
- Coverage 24.95% 24.55% -0.41%
==========================================
Files 59 60 +1
Lines 5558 5682 +124
==========================================
+ Hits 1387 1395 +8
- Misses 4035 4150 +115
- Partials 136 137 +1
☔ View full report in Codecov by Sentry. |
go.sum (Outdated)
@@ -448,6 +450,8 @@
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTE
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/shapeblue/cloudstack-go/v2 v2.9.1-0.20230717062313-73e4efc8a510 h1:FPRBv784robz6sZSqDGfZDZMse31lj96i+enH02Xzds=
@vishesh92
The latest cloudstack-go version is 2.15.0 (see https://github.com/apache/cloudstack-go); 2.13.1 was used in this project before.
Can you update the version of the shapeblue repo?
We can create a beta release of cloudstack-go and use that here instead.
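For reference, pinning to an untagged commit versus a tagged upstream release looks roughly like the fragment below in go.mod. This is purely illustrative: whether the PR ends up with a direct require of the fork or a replace directive is not shown in this thread, and the only real version here is the pseudo-version already visible in go.sum above.

```
// Illustrative go.mod fragment.
// Option A: keep the upstream import path and point it at the fork commit
// via a pseudo-version.
replace github.com/apache/cloudstack-go/v2 => github.com/shapeblue/cloudstack-go/v2 v2.9.1-0.20230717062313-73e4efc8a510

// Option B: once a (beta) release is published upstream, require it
// directly. The version below is an example, not a real tag.
// require github.com/apache/cloudstack-go/v2 v2.16.0-beta.1
```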
Tests were aborted. |
/run-e2e -c 4.18 |
@vishesh92 A Jenkins job has been kicked off to run tests with the following parameters:
|
Test Results : (tid-298)
|
(force-pushed from c60547b to d15a4ea)
/run-e2e -c 4.18 |
(force-pushed from 2adcf65 to 13a0687)
Tests were aborted. |
(force-pushed from 13a0687 to 3c76b28)
Tests were aborted. |
(force-pushed from d1536ac to 53b5f2e)
(force-pushed from 53b5f2e to 254d93b)
/run-e2e -c 4.18 |
@vishesh92 A Jenkins job has been kicked off to run tests with the following parameters:
|
Test Results : (tid-325)
|
It seems like this could be done by anything with access to the CloudStack API and the kube-apiserver, right? We don't need special information that is only available inside the CAPC controller, or to hook into particular events that are hidden in the reconcilers. This just needs to watch the |
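To make that suggestion concrete, a standalone component of that shape could be as small as the sketch below: a controller-runtime reconciler that watches CAPI Cluster objects and mirrors them into CloudStack, with no hooks inside the CAPC reconcilers. This illustrates the alternative design only; registerInCloudStack is a placeholder, not a real function.

```go
// Sketch of the standalone alternative: watch CAPI Cluster objects and
// mirror them into CloudStack. registerInCloudStack is a placeholder for
// a CloudStack API call such as createKubernetesCluster with
// clustertype=ExternalManaged.
package main

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type clusterSyncer struct {
	client.Client
}

func (r *clusterSyncer) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var cluster clusterv1.Cluster
	if err := r.Get(ctx, req.NamespacedName, &cluster); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// Register (or update) the cluster record on the CloudStack side.
	if err := registerInCloudStack(ctx, &cluster); err != nil {
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}

// registerInCloudStack is intentionally left empty in this sketch; see the
// earlier cloudstack-go example for what the call could look like.
func registerInCloudStack(ctx context.Context, c *clusterv1.Cluster) error {
	return nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	if err := clusterv1.AddToScheme(mgr.GetScheme()); err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&clusterv1.Cluster{}).
		Complete(&clusterSyncer{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```

Cleaning up the CloudStack record when the Cluster is deleted would additionally need a finalizer or a periodic resync.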
Test Results : (tid-326)
|
@g-gaston I want to make the integration a part of the CAPC controller since it has enough information about all the resources and the account and domain they need to be part of. In the future, we might add even more information regarding the CAPC cluster in CKS as well, and having it as part of the controller would help. |
I'm with you: this is adding a lot of CKS-specific logic to the controller, whose only task should be to reconcile CAPI-based clusters. I think there are bigger fish to fry (keeping up to date with CAPI and k8s versions, releasing more often, etc.) |
Hi @g-gaston @hrak, I see your points and completely understand why nobody would want to make CAPC more complex. But hear us out.

Apache CloudStack has always been a community-led, turnkey IaaS platform. The emphasis is on turnkey: easy to use, set up, and deploy, with a great UI, and it just works out of the box. While CAPC can be used as a building block, such as for EKS-A, we think it is also used directly as a first-class tool in itself with clusterctl.

Before CAPC, CloudStack's only Kubernetes integration was CKS (CloudStack Kubernetes Service), wherein CloudStack orchestrates k8s clusters for its users. However, CKS-built clusters make some assumptions and deploy specific CNIs to make this work out of the box: an easy-to-deploy k8s cluster for most users. And CKS is available for all user account types, provided the service provider, operator, or enterprise admin enables and configures CKS in their environment. With CAPC there is more flexibility, but it isn't available for all user types (currently only domain and root admins), and a group of contributors think it would be a good idea if an end user could simply deploy a CAPC cluster and have a linked resource created in the form of an unmanaged CKS cluster (to get an idea, head over to https://qa.cloudstack.cloud/simulator/#/kubernetes to see such a resource; this shouldn't be confused with a CloudStack Cluster of a zone). In the future, the CloudStack CKS implementation might be refactored to just use CAPC, so as not to have multiple ways of achieving the same thing.

To address these, we've created a general mechanism via apache/cloudstack#7515 and a series of fixes, such as the ability for normal user accounts to pick/use a public IP, say in a shared network, with apache/cloudstack#7817. This could help fix #303 in future releases. The final missing piece: if a normal user account deploys a CAPC cluster, can we give them a control plane view saying there exists a k8s cluster in CloudStack that is externally (CAPC) managed? That's where this feature comes in.

Now, you're both right: we don't need this PR to do that, and there could be an external script or tool for this. However, that would make it complex for users and operators to set up, and the overall solution wouldn't be turnkey. Such an external tool/script would need to be aware of CloudStack, be aware of and have access to the CAPC cluster, and doing that for all accounts would just introduce multiple competing control plane paths.

So, here's my proposal: @vishesh92 can introduce a new optional parameter in the generated CAPC cluster yaml (or via a supporting cmd-line arg that generates the same) to enable/disable the feature. By default, this is kept enabled so any CAPC user deploying a k8s cluster sees an unmanaged CKS cluster in the UI. Downstream tools and users who don't want this behaviour can turn it off via the option/setting in the generated yaml. This way you can ensure the new code introduced in this PR doesn't get into the operational path in downstream use cases, and upstream CAPC and ACS releases stay turnkey. Thoughts? |
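For illustration, the proposed opt-out could surface as an optional field on the CloudStackCluster spec along the lines of the sketch below; the field name syncWithCKS and its default are hypothetical and not part of this PR or the current CAPC API.

```go
// Hypothetical API field sketching the enable/disable switch proposed
// above. Neither the field name nor the default is part of this PR.
type CloudStackClusterSpec struct {
	// ... existing CloudStackClusterSpec fields ...

	// SyncWithCKS controls whether CAPC registers the workload cluster in
	// CloudStack as an unmanaged (ExternalManaged) CKS cluster.
	// Defaults to true; downstream tools can set it to false to opt out.
	// +optional
	// +kubebuilder:default=true
	SyncWithCKS *bool `json:"syncWithCKS,omitempty"`
}
```

The same switch could equally be exposed through the cluster templates that clusterctl generates, which is effectively the "generated yaml" route described above.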
Hi @rohityadavcloud, Thanks for sharing your thoughts and elaborating on intent. I do have reservations about this feature. Maintaining the scope outlined in the Cluster API documentation for infrastructure providers ensures we don't feature creep and have an easier time maintaining CAPC. Additional functionality, such as this PR, can be added through auxiliary components, as you highlighted. Furthermore, considering the roadmap for CKS integration is somewhat unclear to the CAPC community, having a best effort integration doesn't feel like the right path forward right now.
I agree. CAPC is built to be used independently, that's true of all Cluster API infrastructure providers.
This sounds like two different asks:
On (1), I think we need to ask why said user types are required and broaden the scope we're considering. For example, perhaps a more flexible permissions model in Apache CloudStack would enable finer permission control so users can create whatever kind of user they wish?

On (2), I empathize with this desire, but I question what a user can do with an unmanaged CKS cluster (not to mention it isn't a CKS cluster) - it's read-only if I'm not mistaken? Have we established why this is needed, and what the user is really trying to do? In any case, with my current understanding it feels like it falls outside the scope of CAPC.
I think that's dependent on how you frame the solution. Cluster API in itself isn't turnkey: you have to deploy Cluster API, then you can create your clusters. There's nothing preventing building a solution that wraps deployment of all the necessary components, including a component that embodies the integration in this PR, which would be truly turnkey; though I don't think that's a concern for CAPC.
That seems like an appropriate set of dependencies for this kind of component, though I don't understand the "do it for all accounts" or "competing control plane paths" parts; perhaps you could elaborate? I think it's worth pointing out that all existing components built for Kubernetes have this kind of dependency set. Monitoring tools like Jaeger and Prometheus come to mind. |
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
One of my motivations for this PR was to improve the experience of CloudStack operators and users. I understand your point of view on this PR, but it would have been better if I could have gotten feedback before I spent time on this PR and the required work in CloudStack and cloudstack-go. I opened this PR in May, and getting feedback in August that this is not the right way to go is not very motivating. I would have been happy to discuss this PR in May. I am not sure what the best way forward is, but I would like to avoid spending time on a PR that will not be merged. |
Hi @vishesh92. It sounds like you tried gathering feedback prior to completing the Apache CloudStack and cloudstack-go work, and the work in this PR. I'm sorry we missed that and have provided feedback late in the day, including leaving this PR untouched for a prolonged period. We should aim to avoid this in the future; let's chat in the Kubernetes CAPC Slack about how best to address the communication inefficiencies. |
Issue #, if available:
Description of changes:
Creates an unmanaged kubernetes cluster on CloudStack. This provides a nice view of the CAPC-managed kubernetes cluster in the CloudStack UI.
Snapshots from the ACS Dashboard (screenshots not reproduced here).
Depends on: apache/cloudstack#7515 and apache/cloudstack-go#59 (go.mod needs to be updated once this is merged).
Testing performed:
Tested locally by creating a kubernetes cluster using cluster api.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.