
Mismatch between ControlPlaneReady and Conditions.ControlPlaneReady #7099

Closed
Tracked by #10852
knabben opened this issue Aug 20, 2022 · 27 comments
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@knabben
Member

knabben commented Aug 20, 2022

What steps did you take and what happened:
Running CAPD with Tilt from HEAD, I created a new runtime extension webhook server and added a handler for the ControlPlaneInitialized hook. I wanted to execute a few steps AFTER the Cluster is ready, so the first try was to read the request.Cluster.Status.ControlPlaneReady boolean to check whether I can create my pod in the workload cluster. This operation never happened, even though the condition of type ControlPlaneReady eventually becomes True.

E0820 17:11:02.074289      18 handlers.go:75] "Control plane not ready retrying."
phase Provisioned
controlplaneready false
infraready true
Ready False

// conditions
ControlPlaneInitialized True
ControlPlaneReady False
InfrastructureReady True

... repeats 8x

I0820 17:11:06.312747      18 handlers.go:55] "AfterControlPlaneInitialized is called."
E0820 17:11:06.312803      18 handlers.go:75] "Control plane not ready retrying."
phase Provisioned
controlplaneready false
infraready true

// conditions
Ready True
ControlPlaneInitialized True
ControlPlaneReady True
InfrastructureReady True

I0820 17:11:06.344126      18 handlers.go:55] "AfterControlPlaneInitialized is called."

In the last line request.Cluster.Status.ControlPlaneReady == false while all conditions are true; as noted, this does not happen with InfrastructureReady.

Could this be because I'm returning a ResponseStatusFailure while waiting for the control plane to be ready?
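For context, a minimal sketch of the kind of handler described above, assuming the current Runtime SDK request/response types; the handler name and log wording are illustrative, not the actual code used:

package handlers

import (
	"context"

	runtimehooksv1 "sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1"
	ctrl "sigs.k8s.io/controller-runtime"
)

// Handler is a hypothetical lifecycle-hook handler.
type Handler struct{}

// DoAfterControlPlaneInitialized checks the Cluster status boolean carried in
// the hook request and returns a failure response while it is still false,
// which is the pattern described above.
func (h *Handler) DoAfterControlPlaneInitialized(ctx context.Context, request *runtimehooksv1.AfterControlPlaneInitializedRequest, response *runtimehooksv1.AfterControlPlaneInitializedResponse) {
	log := ctrl.LoggerFrom(ctx)

	if !request.Cluster.Status.ControlPlaneReady {
		// Status.ControlPlaneReady can still be false here even though the
		// ControlPlaneReady condition is already True (the mismatch reported above).
		log.Info("Control plane not ready, retrying")
		response.Status = runtimehooksv1.ResponseStatusFailure
		response.Message = "control plane not ready"
		return
	}

	response.Status = runtimehooksv1.ResponseStatusSuccess
}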

What did you expect to happen:
Both fields to be in sync when the control plane is ready.

Environment:

  • Cluster-api version: HEAD
  • minikube/kind version: 0.14.0
  • Kubernetes version: (use kubectl version): 1.24
  • OS (e.g. from /etc/os-release): Debian (WSL)

/kind bug
[One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels]

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Aug 20, 2022
@killianmuldoon
Contributor

killianmuldoon commented Aug 22, 2022

These two fields are set slightly differently, with ControlPlaneReady being based on the status.ready field in the Control Plane object and the condition being based on the conditions in the Control Plane object.
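A minimal sketch of reading both signals from a management-cluster client (assuming CAPI's conditions util; illustrative only):

package example

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// compareReadiness logs the two signals side by side: the summarized boolean
// in Cluster.Status and the ControlPlaneReady condition.
func compareReadiness(ctx context.Context, c client.Client, key client.ObjectKey) error {
	cluster := &clusterv1.Cluster{}
	if err := c.Get(ctx, key, cluster); err != nil {
		return err
	}

	// The two values can disagree for a while, which is the mismatch this issue is about.
	ctrl.LoggerFrom(ctx).Info("control plane readiness",
		"status.controlPlaneReady", cluster.Status.ControlPlaneReady,
		"condition.ControlPlaneReady", conditions.IsTrue(cluster, clusterv1.ControlPlaneReadyCondition))
	return nil
}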

I have a couple of questions about the approach here, though, so it would be good to understand the use case.

  1. Are you returning a failure from the AfterControlPlaneInitialized hook to prevent reconciliation from continuing?
  2. Why are you waiting for ControlPlaneReady? What doesn't work at AfterControlPlaneInitialized that works with your additional condition?

@knabben
Member Author

knabben commented Aug 22, 2022

These two fields are set slightly differently, with ControlPlaneReady being based on the status.ready field in the Control Plane object and the condition being based on the conditions in the Control Plane object.

I think, though, there's a couple of questions I have with the approach here, so it would be good to understand the use case.

  1. Are you returning a failure from the AfterControlPlaneInitialized hook to prevent reconciliation from continuing?
  2. Why are you waiting for ControlPlaneReady? What doesn't work at AfterControlPlaneInitialized that works with your additional condition?
  1. That's correct. I should be retrying here; I'm not sure how to do that with the hook's response object.
  2. I'm using the cluster cache tracker. When using AfterControlPlaneInitialized directly, the connection is returned but it's not usable; when using the config with a clientset it returns an empty list of pods, for example (maybe I was missing an err somewhere). The fact is it works normally after waiting for the ready condition.

@sbueringer
Member

cc @ykakarap

@sbueringer
Member

sbueringer commented Aug 22, 2022

If I see this correctly, AfterControlPlaneInitialized is called once ControlPlaneInitializedCondition is true.

if isControlPlaneInitialized(s.Current.Cluster) {

ControlPlaneInitializedCondition is set to true once we have a control plane machine with a nodeRef:

if util.IsControlPlaneMachine(m) && m.Status.NodeRef != nil {
	conditions.MarkTrue(cluster, clusterv1.ControlPlaneInitializedCondition)
	return ctrl.Result{}, nil
}

That nodeRef is only set if we were able to get a Node object from the workload cluster (via a client from the ClusterCacheTracker):

remoteClient, err := r.Tracker.GetClient(ctx, util.ObjectKey(cluster))
if err != nil {
	return ctrl.Result{}, err
}
// Even if Status.NodeRef exists, continue to do the following checks to make sure Node is healthy
node, err := r.getNode(ctx, remoteClient, providerID)
if err != nil {
	if err == ErrNodeNotFound {
		// While a NodeRef is set in the status, failing to get that node means the node is deleted.
		// If Status.NodeRef is not set before, node still can be in the provisioning state.
		if machine.Status.NodeRef != nil {
			conditions.MarkFalse(machine, clusterv1.MachineNodeHealthyCondition, clusterv1.NodeNotFoundReason, clusterv1.ConditionSeverityError, "")
			return ctrl.Result{}, errors.Wrapf(err, "no matching Node for Machine %q in namespace %q", machine.Name, machine.Namespace)
		}
		conditions.MarkFalse(machine, clusterv1.MachineNodeHealthyCondition, clusterv1.NodeProvisioningReason, clusterv1.ConditionSeverityWarning, "")
		// No need to requeue here. Nodes emit an event that triggers reconciliation.
		return ctrl.Result{}, nil
	}
	log.Error(err, "Failed to retrieve Node by ProviderID")
	r.recorder.Event(machine, corev1.EventTypeWarning, "Failed to retrieve Node by ProviderID", err.Error())
	return ctrl.Result{}, err
}
// Set the Machine NodeRef.
if machine.Status.NodeRef == nil {
	machine.Status.NodeRef = &corev1.ObjectReference{
		Kind:       node.Kind,
		APIVersion: node.APIVersion,
		Name:       node.Name,
		UID:        node.UID,
	}
	log.Info("Infrastructure provider reporting spec.providerID, Kubernetes node is now available", machine.Spec.InfrastructureRef.Kind, klog.KRef(machine.Spec.InfrastructureRef.Namespace, machine.Spec.InfrastructureRef.Name), "providerID", providerID, "node", klog.KRef("", machine.Status.NodeRef.Name))
	r.recorder.Event(machine, corev1.EventTypeNormal, "SuccessfulSetNodeRef", machine.Status.NodeRef.Name)
}

I could be overlooking something, but it looks to me like the workload cluster apiserver has to be reachable before the hook is called.

@killianmuldoon
Contributor

After doing a little bit of testing here, there is significant drift between the two markers depending on network configuration:

The condition ControlPlaneReady is based on whether or not the ControlPlane machines and components are created and ready.

The Cluster .status.controlPlaneReady field is a summary of whether or not the Nodes are in a ready state. In practice it will never become true until the CNI is in place, i.e. NetworkReady for the Node object must be true.

I'm not sure of the different networking constraints for different providers, but I can imagine a situation where no CNI means limited API server access.
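For illustration, a node-readiness summary of the kind described above amounts to something like the following sketch (a hypothetical helper using a client for the workload cluster, not CAPI code):

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// allNodesReady returns true only if every Node in the workload cluster
// reports the Ready condition as True, which in practice requires a CNI to
// be installed.
func allNodesReady(ctx context.Context, workloadClient client.Client) (bool, error) {
	nodes := &corev1.NodeList{}
	if err := workloadClient.List(ctx, nodes); err != nil {
		return false, err
	}
	for _, node := range nodes.Items {
		ready := false
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			return false, nil
		}
	}
	return len(nodes.Items) > 0, nil
}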

We might want to align these values (or possibly deprecate the old status fields). I'm not read into the full context here, but is there any reason to keep the status 'Ready' fields around (vs conditions) in the long term? i.e. apart from backward compatibility etc.

We could deprecate without an intention to remove until the next API revision to signal that the condition is the better value to rely on.

@sbueringer
Member

sbueringer commented Aug 22, 2022

I'm not sure of the different networking constraints for different providers, but I can imagine a situation where no CNI means limited API server access.

I'm not sure if that is possible. The apiserver should not depend on CNI, otherwise it would be a deadlock (at least for "external access" like from the mgmt cluster).

One other data point: our quickstart depends on the apiserver being reachable so that you can deploy the CNI at the end of provisioning. A similar workflow is the default for kubeadm, afaik.

@killianmuldoon
Contributor

I'm not sure if that is possible. apiserver should not depend on CNI, otherwise it would be a deadlock (at least for "external access" like from the mgmt cluster).

Agreed for most situations (there are enough edge cases here that I'm sure there's some way this could work out).

If this is the case, should our API SERVER AVAILABLE column in kubectl for KCP reflect the condition rather than status.ready (which it reflects today)?

Right now it's telling us whether the nodes are ready, which is different from ControlPlaneInitialized and ControlPlaneReady.

I think the drift between these two is a definite weakness in the API.

@sbueringer
Member

If this is the case should our API SERVER AVAILABLE column in kubectl for KCP reflect the condition rather than status.ready as it does today?

Is this possible with CRD columns? Their feature set is rather limited.

@killianmuldoon
Contributor

It seems like it should be possible with jsonPath: kubectl get cluster -o custom-columns='READY:status.conditions[0].status' works from kubectl, and I guess it should work for additionalPrinterColumns too?
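As a sketch of what such a lookup evaluates, the same kind of JSONPath can be run programmatically with client-go's jsonpath package against an unstructured Cluster; whether CRD additionalPrinterColumns accept filter expressions like the one below is a separate question (see the next comment):

package example

import (
	"bytes"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/util/jsonpath"
)

// readyConditionStatus evaluates a JSONPath expression against an unstructured
// object (e.g. a Cluster fetched with a dynamic client). Selecting the
// condition by type is more robust than relying on array order.
func readyConditionStatus(obj *unstructured.Unstructured) (string, error) {
	jp := jsonpath.New("ready")
	if err := jp.Parse(`{.status.conditions[?(@.type=="Ready")].status}`); err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := jp.Execute(&buf, obj.Object); err != nil {
		return "", err
	}
	return buf.String(), nil
}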

I think the important question, though, is whether to try to close the gap between these two values. Right now the API has a couple of values with near-identical names but different semantics, and in some cases they are seemingly wrong relative to their names and lead to bad assumptions by consumers.

@sbueringer
Member

It seems like it should be possible with jsonPath: kubectl get cluster -o custom-columns='READY:status.conditions[0].status' works from kubectl, and I guess it should work for additionalPrinterColumns too?

I'm not sure if CRD columns support the full jsonPath. It's probably also not a good idea to depend on a certain condition being the first one in the array (not sure if more dynamic array element matching is supported by CRD columns). I had some problems with CRD columns in the past.

We might want to align these values (or possibly deprecate the old status fields). I'm not read into the full context here, but is there any reason to keep the status 'Ready' fields around (vs conditions) in the long term? i.e. apart from backward compatibility etc.

IIRC the plan was to migrate eventually to conditions. I'm not sure if we wanted to drop the bools accordingly.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 21, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 21, 2022
@bavarianbidi
Contributor

bavarianbidi commented Dec 29, 2022

/remove-lifecycle rotten

As I've implemented a similar runtime hook, I was also confused about the difference between status.controlPlaneReady and the corresponding condition.

status field before the runtime hook succeeds:

status:
  conditions:
  - lastTransitionTime: "2022-12-28T06:15:56Z"
    status: "True"
    type: Ready
  - lastTransitionTime: "2022-12-28T06:15:56Z"
    status: "True"
    type: ControlPlaneInitialized
  - lastTransitionTime: "2022-12-28T06:15:56Z"
    status: "True"
    type: ControlPlaneReady
  - lastTransitionTime: "2022-12-28T06:13:48Z"
    status: "True"
    type: InfrastructureReady
  - lastTransitionTime: "2022-12-28T06:15:56Z"
    message: 'error reconciling the Cluster topology: failed to call extension handlers
      for hook "AfterControlPlaneInitialized.hooks.runtime.cluster.x-k8s.io": failed
      to call extension handler "wait-for-cni-and-cpi.runtimesdk-test": got failure
      response'
    reason: TopologyReconcileFailed
    severity: Error
    status: "False"
    type: TopologyReconciled
  controlPlaneReady: false
  failureDomains:
    "1":
      controlPlane: true
    "2":
      controlPlane: true
    "3":
      controlPlane: true
  infrastructureReady: true
  observedGeneration: 3
  phase: Provisioned

status field after the runtime hook succeeds:

status:
  conditions:
  - lastTransitionTime: "2022-12-28T06:54:44Z"
    status: "True"
    type: Ready
  - lastTransitionTime: "2022-12-28T06:54:44Z"
    status: "True"
    type: ControlPlaneInitialized
  - lastTransitionTime: "2022-12-28T06:54:44Z"
    status: "True"
    type: ControlPlaneReady
  - lastTransitionTime: "2022-12-28T06:52:29Z"
    status: "True"
    type: InfrastructureReady
  - lastTransitionTime: "2022-12-29T06:09:44Z"
    status: "True"
    type: TopologyReconciled
  controlPlaneReady: true
  failureDomains:
    "1":
      controlPlane: true
    "2":
      controlPlane: true
    "3":
      controlPlane: true
  infrastructureReady: true
  observedGeneration: 3
  phase: Provisioned

runtime hook impl:

hook registration

	if err := webhookServer.AddExtensionHandler(server.ExtensionHandler{
		Hook:           runtimehooksv1.AfterControlPlaneInitialized,
		Name:           "wait-for-cni-and-cpi",
		HandlerFunc:    lifecycleHandler.WaitForCNIandCPI,
		TimeoutSeconds: pointer.Int32(5),
		FailurePolicy:  toPtr(runtimehooksv1.FailurePolicyFail),
	}); err != nil {
		setupLog.Error(err, "error adding handler")
		os.Exit(1)
	}

impl:

	// List the cloud-controller-manager pods in kube-system of the workload cluster.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceSystem).List(context.TODO(), metav1.ListOptions{
		LabelSelector: labels.SelectorFromSet(labels.Set{"app": "azure-cloud-controller-manager"}).String(),
	})
	if err != nil {
		log.Error(err, "failed to list pods in namespace kube-system")
		response.Status = runtimehooksv1.ResponseStatusFailure
		response.Message = "failed to list pods in namespace kube-system"
		return
	}

	log.WithName("WaitForCNIandCPI").WithValues("cluster", cluster.Name).Info(fmt.Sprintf("There are %d pods in the cluster", len(pods.Items)))

	// Return success as soon as the first matching pod is Ready; otherwise fall
	// through and report a failure so the hook is called again.
	if len(pods.Items) > 0 {
		for _, pod := range pods.Items {
			if podutil.IsPodReady(&pod) {
				log.WithName("WaitForCNIandCPI").WithValues("cluster", cluster.Name).Info(fmt.Sprintf("%s on %s is up and running", pod.Name, pod.Spec.NodeName))
				response.Status = runtimehooksv1.ResponseStatusSuccess
				return
			}
			log.WithName("WaitForCNIandCPI").WithValues("cluster", cluster.Name).Info(fmt.Sprintf("%s on %s is not ready yet", pod.Name, pod.Spec.NodeName))
		}
	}

	response.Status = runtimehooksv1.ResponseStatusFailure
	response.Message = fmt.Sprintf("There are %d pods in the cluster", len(pods.Items))

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 29, 2022
@bavarianbidi
Contributor

bavarianbidi commented Feb 7, 2023

@killianmuldoon / @sbueringer coming from the current documentation about blocking hooks and the example implementation I've done above, I expect the ControlPlaneReady condition to be false until the hook returns no error.

We might want to align these values (or possibly deprecate the old status fields). I'm not read into the full context here, but is there any reason to keep the status 'Ready' fields around (vs conditions) in the long term? i.e. apart from backward compatibility etc.

Regarding the above comment:
I don't depend on the status Ready field yet, but it would be great to know whether the condition should behave the same way the status Ready field does.

@killianmuldoon
Contributor

@bavarianbidi Are you using the AfterControlPlaneInitialized hook? It's not supposed to be blocking, and it's related to the ControlPlaneInitialized rather than the ControlPlaneReady condition.

I think the broad intention is to deprecate and remove the status.ControlPlaneReady field and instead rely on conditions, but there's no timeline or plan to do that right now.

If you can share - why do you need to check these conditions / status fields when you're using the runtime hook?

@killianmuldoon
Contributor

That said - I think it's probably a good idea to move the setting of status.ControlPlaneReady to the same place as the condition is set so there's no difference between these fields.

@bavarianbidi
Contributor

bavarianbidi commented Feb 7, 2023

@bavarianbidi Are you using the AfterControlPlaneInitialized hook? It's not supposed to be blocking, and it's related to the ControlPlaneInitialized rather than the ControlPlaneReady condition.

Sorry, it's too long ago for me to remember everything from that work 🙈 - thanks for your explanation.
I wasn't aware of the non-blocking behavior of AfterControlPlaneInitialized when I implemented that hook. I was confused about the different values of the status field and the conditions.

If you can share - why do you need to check these conditions / status fields when you're using the runtime hook?

The runtime hook was done during a PoC to check whether runtime extensions could solve some of the internal issues we have when it comes to cluster creation. We thought about creating some additional controllers (reconciling on the Cluster CR) which take care of the condition/status field and apply additional (non-CPI/CSI/CNI) addons (e.g. monitoring) to a workload cluster.

So as you said, AfterControlPlaneInitialized is not blocking, but if an additional controller acts on the status field, it "feels" like AfterControlPlaneInitialized is blocking - and that's where my confusion came from.

@killianmuldoon
Contributor

The runtime hook was done during a PoC to check whether runtime extensions could solve some of the internal issues we have when it comes to cluster creation. We thought about creating some additional controllers (reconciling on the Cluster CR) which take care of the condition/status field and apply additional (non-CPI/CSI/CNI) addons (e.g. monitoring) to a workload cluster.

This sounds like exactly what the hook is intended for! But it's supposed to be non-blocking so that the rest of reconciliation - for MachineDeployments etc. - can continue while add-ons are initialized. It would be great to get feedback on how the PoC went and what didn't work, so we could feed that back into the Runtime SDK though 🙂

@bavarianbidi
Contributor

bavarianbidi commented Feb 8, 2023

@killianmuldoon I think I have to describe my use case in more detail. Sorry if my above comments caused some confusion.

For our users it's possible to create a cluster and also add a basic set of additional components in a GitOps way.
To make the UX as smooth as possible, we decided to try to achieve this process with one PR in a git repo.

  1. The user creates a new PR which contains
    • some input values which will generate a valid CAPI CR in a management cluster
    • a list of additional components which should also go into the new workload cluster
  2. On merge to main, the CAPI CR is generated in the management cluster alongside a definition of all the additional components which get applied to the new workload cluster

Depending on the list of additional components, this works very smoothly. But if the list contains components which want to create a PVC, a StorageClass must exist in the cluster. If no StorageClass exists and you apply a PVC, the PVC will be stuck forever. As we apply the StorageClass together with the CSI, we thought using the AfterControlPlaneInitialized hook would be a good implementation.

Once all cluster-mandatory components (CPI, CNI, CSI + e.g. a valid StorageClass or whatever might make sense in this phase) are deployed, we unblock the hook and provisioning continues. At the time we weren't aware that the hook is not supposed to be blocking, but we saw that status.controlPlaneReady behaved exactly as we wanted.

The problem with the dependency on a StorageClass got solved in Kubernetes 1.26 by the introduction of Retroactive Default StorageClass. So the initial issue we tried to solve is fixed in k8s, but the approach with hooks looks very promising and has a lot of potential for us.

With the knowledge we have now and the possible move of status.controlPlaneReady to the same place where the condition is set, we have to find another solution for that.

From a CAPI core controller point of view it really makes sense to have AfterControlPlaneInitialized as non-blocking, but a blocking hook at a similar point in the lifecycle could also be nice to have.

An alternative approach (not implemented yet) is to hook into the machine lifecycle (#4229), start new nodes with a specific taint, and remove the taint once some conditions are met.
Unfortunately this won't work for MachinePool-based machines until #7938 has landed and a similar hook is implemented.

@sbueringer
Member

sbueringer commented Feb 13, 2023

Just to summarize (if I got it all correctly):

  1. Definition of .status.controlPlaneReady vs ControlPlaneReady condition is confusing
  2. Failure responses from AfterControlPlaneInitialized hook do not block .status.controlPlaneReady
    • Essentially controlPlaneReady only becomes ready after the CNI becomes ready. So if an AfterControlPlaneInitialized hook also waits until the CNI is ready before returning Success responses, the TopologyReconciled condition and .status.controlPlaneReady will go to true at roughly the same time.
    • This is also what I would expect, as the hooks are called by the "Cluster topology" reconciler. This reconciler is 100% independent of the Cluster reconciler and the KCP reconciler. Returning failures in the hook only blocks any further reconcile from the "Cluster topology" reconciler, but not from the other two.
    • Maybe we should consider adding a RetryAfterSeconds field to the AfterControlPlaneInitialized hook response to allow retrying the hook without failing the entire reconcile for the cluster (see the sketch below).
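For reference, the existing blocking hooks already expose a retry of this shape. A hypothetical handler for one of the blocking hooks (BeforeClusterCreate, whose response carries RetryAfterSeconds) would look roughly like the sketch below; the idea above would bring the same field to the AfterControlPlaneInitialized response. prerequisitesReady is a placeholder, not a real helper.

package handlers

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	runtimehooksv1 "sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1"
)

// Handler is a hypothetical lifecycle-hook handler.
type Handler struct{}

// prerequisitesReady stands in for whatever the extension is actually waiting
// on (CNI, CPI, a StorageClass, ...); hypothetical placeholder.
func prerequisitesReady(cluster clusterv1.Cluster) bool { return false }

// DoBeforeClusterCreate shows the retry pattern available to blocking hooks:
// a Success status combined with a non-zero RetryAfterSeconds asks the
// topology reconciler to call the hook again later instead of treating the
// reconcile as failed.
func (h *Handler) DoBeforeClusterCreate(ctx context.Context, request *runtimehooksv1.BeforeClusterCreateRequest, response *runtimehooksv1.BeforeClusterCreateResponse) {
	if !prerequisitesReady(request.Cluster) {
		response.Status = runtimehooksv1.ResponseStatusSuccess
		response.RetryAfterSeconds = 30
		return
	}
	response.Status = runtimehooksv1.ResponseStatusSuccess
}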

@fabriziopandini
Member

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Mar 21, 2023
@fabriziopandini
Member

/help

@k8s-ci-robot
Contributor

@fabriziopandini:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Mar 21, 2023
@k8s-triage-robot

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Mar 20, 2024
@fabriziopandini
Member

/priority important-longterm

@k8s-ci-robot k8s-ci-robot added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Apr 12, 2024
@fabriziopandini fabriziopandini added kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Jul 17, 2024
@k8s-ci-robot k8s-ci-robot removed the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jul 17, 2024
@sbueringer
Member

sbueringer commented Jul 25, 2024

Will be addressed via #10897, please take a look at the proposal for more details.

Basically we are going to rename the current ControlPlaneReady field to make clear what it stands for.

/close

@k8s-ci-robot
Contributor

@sbueringer: Closing this issue.

In response to this:

Will be addressed via #10897, please take a look at the proposal for more details.

Basically we are going to rename the current ControlPlaneReady field to make clear what it stands for.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
