This repository was archived by the owner on Apr 17, 2025. It is now read-only.

Provide a user-friendly way to check the allowCascadingDeletion status #303

Closed
mochizuki875 opened this issue Jun 22, 2023 · 5 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@mochizuki875 (Contributor) commented Jun 22, 2023

What happened?

While trying the subnamespaces-deep-dive section of the Quickstart, I found it not user friendly that the allowCascadingDeletion status can only be checked by inspecting the HierarchyConfiguration object directly. (By contrast, ActivitiesHalted (ParentMissing) can be checked with kubectl hns describe.)

What did you expect to happen?

I think we need a more user-friendly way to see the allowCascadingDeletion status. For example:

ex1)

Show it in the output of kubectl hns tree, for example:

$ kubectl hns tree team-a
team-a
├── [s] service-1 (AllowCascadingDeletion)
│   └── [s] dev
├── [s] service-2
├── [s] service-3
└── [s] service-4
    └── staging

[s] indicates subnamespaces
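
For reference, the closest I can get to this today is a small script of my own (not an hns feature): it walks the children recorded in team-a's HierarchyConfiguration status and prints each child's flag. It only covers direct children (so dev under service-1 is missed), which is part of why a built-in option would be nicer.

# Workaround script, not an hns feature. Assumes every child namespace has the
# usual HierarchyConfiguration singleton named "hierarchy"; only direct children
# of team-a are checked.
for ns in $(kubectl get hierarchyconfiguration hierarchy -n team-a -o jsonpath='{.status.children[*]}'); do
  acd=$(kubectl get hierarchyconfiguration hierarchy -n "$ns" -o jsonpath='{.spec.allowCascadingDeletion}')
  echo "$ns: allowCascadingDeletion=${acd:-false}"   # empty output means the field is unset, i.e. false
done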

ex2)

Show it in the output of kubectl get subns, for example:

$ kubectl -n team-a get subns
NAME        AGE         AllowCascadingDeletion
service-1   12m         true
service-2   5h31m
service-3   12m
service-4   8m20s
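
(Something close to this table can already be produced by querying the HierarchyConfiguration objects with custom-columns; this is just a workaround sketch and assumes the plural resource name hierarchyconfigurations. A column on kubectl get subns itself would be far more discoverable.)

$ kubectl get hierarchyconfigurations -A \
    -o custom-columns='NAMESPACE:.metadata.namespace,PARENT:.spec.parent,ALLOWCASCADINGDELETION:.spec.allowCascadingDeletion'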

ex3)

Show it in the output of kubectl hns describe, in the same way that ActivitiesHalted (ParentMissing) is shown, for example:

$ kubectl hns describe service-1
Hierarchy configuration for namespace service-1
  Parent: team-a
  Children:
  - dev (subnamespace)
  Conditions:
  - AllowCascadingDeletion: true

No recent HNC events for objects in this namespace

How can we reproduce it (as minimally and precisely as possible)?

Currently, I know of no means other than the following.
(Allowing cascading deletion is a risky state for a subnamespace, so it should be easy for users to notice.)

$ kubectl get hierarchyconfiguration hierarchy -o jsonpath='{.spec}' -n service-1
{"allowCascadingDeletion":true,"parent":"team-a"}

or

$ kubectl describe hierarchyconfiguration hierarchy -n service-1
Name:         hierarchy
Namespace:    service-1
Labels:       <none>
Annotations:  <none>
API Version:  hnc.x-k8s.io/v1alpha2
Kind:         HierarchyConfiguration
Metadata:
  Creation Timestamp:  2023-06-22T06:21:52Z
  Finalizers:
    hnc.x-k8s.io/hasSubnamespace
  Generation:        3
  Resource Version:  137989
  UID:               a7997478-5687-4604-a26e-247ec84ee18f
Spec:
  Allow Cascading Deletion:  true
  Parent:                    team-a
Status:
  Children:
    dev
Events:  <none>
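
A cluster-wide variant of the same workaround, which at least makes the risky namespaces stand out (again just my own one-liner, assuming the plural resource name hierarchyconfigurations):

$ kubectl get hierarchyconfigurations -A \
    -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.spec.allowCascadingDeletion}{"\n"}{end}' \
    | grep true

Any namespace this prints has opted into cascading deletion of its subnamespaces, which is exactly the risky state described above.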
@mochizuki875 (Contributor, Author)

/kind feature

@k8s-ci-robot added the kind/feature label on Jun 22, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jan 23, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned on Mar 23, 2024
@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
