Description
Describe the bug
I'm trying to implement a topology-aware StorageClass with two backends for two different NetApp instances, one in zone it-a and the other in zone it-b, exposed to my cluster with two different IPs.
This is my config for the backends and the StorageClass:
```yaml
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-ontap-san-zone-a
  namespace: netapp-system
  labels:
    quality: multizone
spec:
  labels:
    quality: multizone
  version: 1
  storageDriverName: ontap-san
  managementLIF: xxx.xxx.xxx.xxx
  svm: svm01-k8s03utils
  credentials:
    name: ontap-san
  storagePrefix: k8s02dev_
  defaults:
    snapshotReserve: '5'
  supportedTopologies:
  - topology.kubernetes.io/zone: it-a
  storage:
  - labels:
      topology.kubernetes.io/zone: it-a
    zone: it-a
---
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-ontap-san-zone-b
  namespace: netapp-system
  labels:
    quality: multizone
spec:
  labels:
    quality: multizone
  version: 1
  storageDriverName: ontap-san
  managementLIF: xxx.xxx.xxx.xxx
  svm: svm02_k8s03utils
  credentials:
    name: ontap-san
  storagePrefix: k8s02dev_
  defaults:
    snapshotReserve: '5'
  supportedTopologies:
  - topology.kubernetes.io/zone: it-b
  storage:
  - labels:
      topology.kubernetes.io/zone: it-b
    zone: it-b
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: netapp-zonal
provisioner: csi.trident.netapp.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  backendType: ontap-san
  fsType: xfs
  selector: "quality=multizone"
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - it-a
    - it-b
```
The backends are created and bound to the corresponding NetApp instances; however, when I schedule a pod with a PVC, the volume creation is handled randomly by either of the two backends, bypassing what I expect spec.supportedTopologies to enforce.
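For reference, a minimal Pod/PVC pair that triggers provisioning under WaitForFirstConsumer looks roughly like this (names and image are illustrative, not from my environment):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-zone-a
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: netapp-zonal
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-zone-a
spec:
  nodeSelector:
    topology.kubernetes.io/zone: it-a   # pin the pod to zone it-a
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc-zone-a
```

With the pod pinned to it-a, I would expect the volume to always be created by backend-ontap-san-zone-a; instead, it sometimes lands on the zone-b backend.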
Environment
Trident deployed with the operator in a Kubernetes cluster.
Operator chart info:
```yaml
repoUrl: "https://netapp.github.io/trident-helm-chart"
chart: "trident-operator"
version: "100.2510.0"
```
- Trident version: 25.10
- Trident installation flags used: installed via the Helm chart with these values:
```yaml
excludePodSecurityPolicy: true
tridentNodePluginNodeSelector:
  node-role.kubernetes.io/storage: "netapp"
```
- Container runtime: containerd version 1.7.20
- Kubernetes version: v1.32.4
- Kubernetes orchestrator: Rancher v2.12.3
- Kubernetes enabled feature gates: none
- OS: Ubuntu 24.04.2 LTS
- NetApp backend types: ONTAP SAN, NetApp Release 9.15.1P11
- Other: N/A
To Reproduce
Apply the config above in an environment where all nodes are zone-labelled and two different NetApp instances are available; the behaviour is always the same.
Expected behavior
When I schedule a pod with a PVC, the volume should be created on the NetApp in the same zone, because volume creation should be handled by the backend whose topology matches.
Additional context
My need is to provision volumes on one of the two NetApps based on where the pod is scheduled: if the pod is scheduled on a node with label topology.kubernetes.io/zone=it-a, I need volumes provisioned by the backend backend-ontap-san-zone-a, and vice versa if the pod is scheduled on a node with label topology.kubernetes.io/zone=it-b (all nodes are labelled except the apiservers).
Furthermore, once topology-aware provisioning works, I expect that if the pod is scheduled in one zone and later moved to another, the volume SHOULD NOT be attached and scheduling should fail until the scheduler puts the pod back in the original zone.
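If the supported topologies were honoured, I would expect the provisioned PV to carry a nodeAffinity like the following (a sketch of the expected object, not output from my cluster; the name is a placeholder). This is the standard Kubernetes mechanism that makes attachment fail outside the volume's zone:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-xxxxxxxx            # example name, not from my cluster
spec:
  # ...capacity and CSI driver details elided...
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - it-a                # the zone the volume was provisioned in
```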