
Implementing IPAM for Floating IP (and potential other usage in the future) #1750

Closed
@huxcrux

Description


/kind feature

Describe the solution you'd like

We propose implementing support for CAPI's IPAM. In our use case we would use it for managing floating IPs; however, there have been a few mentions of others potentially wanting to use it for internal IPs as well, to avoid DHCP.

There is another issue discussing this in looser terms, which can be found here: #1377

This specific implementation was discussed briefly during the CAPO biweekly meeting on the 18th of October 2023, where we agreed that this issue should be created. This all comes from the following PR: #1725

In our specific case we NAT outgoing traffic from each hypervisor, meaning public IP addresses are shared with others and will change based on the hypervisor your instance is currently running on. To work around this we use floating IPs, since they take priority over the shared NAT IP. This means you have predictable source addresses when talking to other services on the internet, and we can guarantee that no other project will use the same IP.

Goals with this implementation:
  • Being able to assign a floating IP to worker nodes (from an IPPool)

A similar implementation already exists in CAPV and can be found here: kubernetes-sigs/cluster-api-provider-vsphere@606f6d5 They also have documentation available: https://github.com/adobley/cluster-api-provider-vsphere/blob/main/docs/node-ipam-demo.md#deploy-workload-clusters-using-capv-node-ipam

The part on allocating and using IP addresses is relatively straightforward, since CAPO only needs to create an IPAddressClaim and, once an IPAddress is available, attach it to the instance.
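For reference, the CAPI IPAM contract works through two resources: the consumer (here CAPO) creates an IPAddressClaim pointing at a pool, and the IPAM provider answers by creating an IPAddress that references the claim. A rough sketch of the pair (the pool kind `OpenStackFloatingIPPool`, the names, and the address are all illustrative):

```yaml
apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddressClaim
metadata:
  name: worker-0-fip          # created by CAPO for the machine
spec:
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: OpenStackFloatingIPPool   # hypothetical pool kind served by the provider
    name: public-fips
---
apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddress
metadata:
  name: worker-0-fip          # created by the IPAM provider in response
spec:
  claimRef:
    name: worker-0-fip
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: OpenStackFloatingIPPool
    name: public-fips
  address: 203.0.113.10       # the floating IP to attach to the instance
  prefix: 32
```

Once the claim's status references the IPAddress, CAPO can read `spec.address` and attach it to the server.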

For this to work, CAPO needs to do the following:

  • Create a new field under spec on the OpenStackMachine(Template) resources, “FloatingIPFromPool”, which points to an IPPool (immutable)
  • During machine creation, if FloatingIPFromPool is set, create an IPAddressClaim
  • Add a watcher for IPAddressClaims owned by the cluster (match on the cluster label)
  • When an IPAddressClaim has a reference to an IPAddress, attach the IP address and add a finalizer to the IPAddressClaim to block deletion
  • During machine deletion, detach the floating IP and remove the finalizer from the IPAddressClaim
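With the steps above, the machine template could look roughly like this (the field name `floatingIPFromPool`, its shape as a typed pool reference, and the pool kind are placeholders for illustration, not a final API):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha7
kind: OpenStackMachineTemplate
metadata:
  name: workers
spec:
  template:
    spec:
      # Hypothetical immutable field pointing at the pool to claim from.
      floatingIPFromPool:
        apiGroup: ipam.cluster.x-k8s.io
        kind: OpenStackFloatingIPPool   # illustrative pool kind
        name: public-fips
```

Making the field a typed object reference (rather than a bare name) would keep it consistent with how CAPI's IPAddressClaim references pools.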

IPAM Provider

When it comes to the IPAM provider, I would suggest creating it in a separate repository. Even if it's only going to be used alongside CAPO, it would not make sense to me to install it for everyone. However, if someone disagrees, we could of course add the OpenStack Floating IP IPAM Provider to the CAPO repository.

There are a few different scenarios for how we would like the provider to allocate IP addresses.

Allocating floating IPs (IPAM provider):

  • Allocate new IP addresses during machine creation (if no free IPs are found in the pool)
  • Potentially some kind of preallocation?
  • Being able to use a predefined list of IP addresses (if they are already present in a project, since allocating specific floating IPs requires admin access, which is out of scope for CAPI)
  • Fail machine creation if all IPs are used (pool exhaustion)

The CAPI proposal for implementing IPAM has information on how to implement an IPAM provider: https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20220125-ipam-integration.md#implementing-an-ipam-provider It is fairly straightforward and has no connection to CAPO beyond the IPAddress resources it creates being consumed.

We could start off by creating a provider under the Elastx organisation on GitHub (in a public repository, of course) and later look into moving it to kubernetes-sigs based on demand and feedback.


Labels: kind/feature, lifecycle/stale
