Is there an existing issue for this?

Current Behavior

fleet-controller assumes the config is changed if the following conditions are met:

fleet/internal/cmd/controller/agentmanagement/controllers/cluster/import.go
Line 98 in 1cddbbf
logrus.Debugf("Cluster import for '%s/%s'. Setting up agent with kubeconfig from secret '%s/%s'", cluster.Namespace, cluster.Name, kubeConfigSecretNamespace, cluster.Spec.KubeConfigSecret)
...
logrus.Debugf("Cluster import for '%s/%s'. Setting up agent with kubeconfig from secret '%s/%s'", cluster.Namespace, cluster.Name, kubeConfigSecretNamespace, cluster.Spec.KubeConfigSecret)
```go
var (
	cfg          = config.Get()
	apiServerURL = string(secret.Data[config.APIServerURLKey])
	apiServerCA  = secret.Data[config.APIServerCAKey]
)
if apiServerURL == "" {
	if len(cfg.APIServerURL) == 0 {
		return status, fmt.Errorf("missing apiServerURL in fleet config for cluster auto registration")
	}
	logrus.Debugf("Cluster import for '%s/%s'. Using apiServerURL from fleet-controller config", cluster.Namespace, cluster.Name)
	apiServerURL = cfg.APIServerURL
}
if len(apiServerCA) == 0 {
	apiServerCA = cfg.APIServerCA
}
```
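To make the resolution order explicit, here is a condensed sketch of that fallback as a pure function. This is a hypothetical helper for illustration only (resolveAPIServerURL does not exist in the repository); it just restates the precedence shown above: secret first, configmap second, both-empty is an error.

```go
package main

import "fmt"

// resolveAPIServerURL condenses the fallback above into a pure function
// (hypothetical helper, not code from the repository): the registration
// secret takes precedence, the fleet-controller configmap is the fallback,
// and only the both-empty case is an error.
func resolveAPIServerURL(fromSecret, fromConfigMap string) (string, error) {
	if fromSecret != "" {
		return fromSecret, nil
	}
	if fromConfigMap == "" {
		return "", fmt.Errorf("missing apiServerURL in fleet config for cluster auto registration")
	}
	return fromConfigMap, nil
}

func main() {
	// Secret empty, configmap set: the fallback value is what gets recorded.
	fmt.Println(resolveAPIServerURL("", "https://10.53.47.173"))
}
```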
The cluster.fleet status is then updated from these (possibly fallback) values:
```go
status.AgentDeployedGeneration = &cluster.Spec.RedeployAgentGeneration
status.AgentMigrated = true
status.CattleNamespaceMigrated = true
status.Agent = fleet.AgentStatus{
	Namespace: cluster.Spec.AgentNamespace,
}
status.AgentNamespaceMigrated = true
status.AgentConfigChanged = false
status.APIServerURL = apiServerURL
status.APIServerCAHash = hashStatusField(apiServerCA)
status.AgentTLSMode = cfg.AgentTLSMode
status.GarbageCollectionInterval = &cfg.GarbageCollectionInterval
```
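Since status.APIServerURL and status.APIServerCAHash are recorded from these possibly-fallback values, any flap in the configmap is later read back as a "config change". A minimal sketch of that effect, assuming a comparison of this shape happens somewhere in onChange (the agentStatus type and looksChanged helper are stand-ins, not Fleet's actual code):

```go
package main

import "fmt"

// agentStatus mirrors only the status fields relevant here; it is a
// hypothetical stand-in for Fleet's cluster status type, not the real one.
type agentStatus struct {
	APIServerURL    string
	APIServerCAHash string
	AgentTLSMode    string
}

// looksChanged sketches the kind of comparison this issue describes: the
// "resolved" values come from the secret-or-configmap fallback above, so a
// transiently empty configmap key flips the result even though nothing
// meaningful changed on the cluster.
func looksChanged(st agentStatus, resolvedURL, resolvedCAHash, tlsMode string) bool {
	return st.APIServerURL != resolvedURL ||
		st.APIServerCAHash != resolvedCAHash ||
		st.AgentTLSMode != tlsMode
}

func main() {
	st := agentStatus{
		APIServerURL:    "https://10.53.47.173",
		APIServerCAHash: "abc123",
		AgentTLSMode:    "system-store",
	}
	// Mid-upgrade the configmap value is transiently empty:
	fmt.Println(looksChanged(st, "", "abc123", "system-store")) // true -> agent re-deployed
	// After the upgrade the value is restored:
	fmt.Println(looksChanged(st, "https://10.53.47.173", "abc123", "system-store")) // false
}
```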
On a Harvester cluster, Rancher is embedded for local cluster provisioning, and in sequence the fleet-controller and fleet-agent are also deployed.

There are configmaps:

secret:

The cluster.fleet object:

And if we kill the fleet-controller pod, it will always re-deploy the fleet-agent with the debug information quoted at the top of this report.

Expected Behavior

Because the fleet-agent may deploy or update ManagedCharts at any time, it should only be re-deployed when genuinely necessary. The onChange handler needs to check the non-fallback case:

fleet/internal/cmd/controller/agentmanagement/controllers/cluster/import.go
Line 98 in 1cddbbf
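One possible direction, sketched only (changedOutsideFallback is a hypothetical helper, not a tested patch): treat a value as changed when the secret actually provides a differing value, and do not let an empty configmap key count as a change on its own.

```go
package main

import "fmt"

// changedOutsideFallback is a hypothetical helper sketching the idea: when
// the registration secret provides a value, that value is authoritative;
// when the controller is on the configmap fallback path, a transiently
// empty value is treated as "unknown" rather than as a change that
// warrants re-deploying the agent.
func changedOutsideFallback(statusURL, secretURL, configMapURL string) bool {
	if secretURL != "" {
		// Non-fallback case: compare against the secret-provided value.
		return statusURL != secretURL
	}
	// Fallback case: only a non-empty, different configmap value counts.
	return configMapURL != "" && statusURL != configMapURL
}

func main() {
	// Mid-upgrade: secret empty, configmap transiently empty -> no change.
	fmt.Println(changedOutsideFallback("https://10.53.47.173", "", "")) // false
	// The secret explicitly provides a new URL -> a real change.
	fmt.Println(changedOutsideFallback("https://10.53.47.173", "https://10.53.47.174", "")) // true
}
```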
Steps To Reproduce

This is observed in the Harvester upgrade test harvester/harvester#6851.

When the embedded Rancher is upgraded and its many preconditions have been checked, Harvester starts to upgrade the ManagedCharts. Randomly, the fleet-agent is re-deployed during this window, which can leave some ManagedCharts in an intermediate state; the new fleet-agent then performs a rollback on them, causing further issues. For more details, please refer to harvester/harvester#6851 (comment).
Environment
- Architecture:
- Fleet Version: Rancher v2.9.2 + Fleet v0.10.2; Harvester v1.4.0; the `local` cluster is managed by Rancher and Fleet.
- Cluster:
  - Provider:
  - Options:
  - Kubernetes Version:
Logs
No response
Anything else?
No response
note:
In the configmap, the apiServerURL is https://10.53.47.173, which is the Rancher service IP in this cluster. We also observed that, during the upgrade process, this value first becomes empty and then reverts to https://10.53.47.173.
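Replaying those observed values as a toy timeline (illustrative only; the controller's real state machine may differ) at least shows that the window contains two transitions, one of which can only hit the error path shown earlier:

```go
package main

import "fmt"

func main() {
	// apiServerURL values observed in the configmap around the upgrade
	// (per this note); the registration secret stays empty throughout.
	observed := []string{"https://10.53.47.173", "", "https://10.53.47.173"}
	for i := 1; i < len(observed); i++ {
		prev, cur := observed[i-1], observed[i]
		if cur == "" {
			// With the secret also empty, this window would hit the
			// "missing apiServerURL" error path shown in the snippet above.
			fmt.Printf("%q -> %q: resolution fails, reconcile is retried\n", prev, cur)
			continue
		}
		if cur != prev {
			fmt.Printf("%q -> %q: resolved value differs, read as a config change\n", prev, cur)
		}
	}
}
```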