This is unreleased documentation for SUSE® Storage 1.11 (Dev).
Important Notes
Please see here for the full release notes.
Deprecation
V2 Backing Image is deprecated and will be removed in a future release. Users can use containerized data importer (CDI) to import images into SUSE Storage as an alternative. For more information, see SUSE Storage with CDI Imports.
Behavior Change
Cloned Volume Health After Efficient Cloning
With efficient cloning enabled, a newly cloned and detached volume is degraded and has only one replica, with its clone status set to copy-completed-awaiting-healthy. To bring the volume to a healthy state, transition the clone status to completed and rebuild the remaining replica by either enabling offline replica rebuilding or attaching the volume to trigger replica rebuilding. See Issue #12341 and Issue #12328.
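As a sketch of how to verify and resolve this state with kubectl (the volume name `cloned-vol`, node name `worker-1`, and exact CR field paths are illustrative and may vary by version):

```shell
# Check the clone status of the newly cloned volume.
kubectl -n longhorn-system get volumes.longhorn.io cloned-vol \
  -o jsonpath='{.status.cloneStatus.state}'

# One way to trigger replica rebuilding is to attach the volume to a node;
# patching spec.nodeID is shown here as an example of programmatic attachment.
kubectl -n longhorn-system patch volumes.longhorn.io cloned-vol \
  --type='merge' -p '{"spec":{"nodeID":"worker-1"}}'
```

Alternatively, enable offline replica rebuilding so the missing replica is rebuilt without attaching the volume.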
General
Kubernetes Version Requirement
Due to the upgrade of the CSI external snapshotter to v8.2.0, you must be running Kubernetes v1.25 or later to upgrade to SUSE Storage v1.8.0 or a newer version.
Upgrade Check Events
When you upgrade with Helm or the Rancher App Marketplace, SUSE Storage performs pre-upgrade checks. If a check fails, the upgrade stops and the reason for the failure is recorded in an event.
For more details, see Upgrading Longhorn Manager.
Manual Checks Before Upgrade
Automated pre-upgrade checks do not cover all scenarios. A manual check using kubectl or the SUSE Storage UI is recommended.
- Ensure all V2 Data Engine volumes are detached and replicas are stopped. The V2 engine does not support live upgrades.
- Avoid upgrading when volumes are Faulted. Unusable replicas may be deleted, causing permanent data loss if no backups exist.
- Avoid upgrading if a failed BackingImage exists. For more information, see Backing Image.
- Creating a Longhorn System Backup before upgrading is recommended to ensure recoverability.
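The volume-related checks above can be performed with kubectl. This sketch assumes the standard Longhorn CR fields `status.robustness`, `spec.dataEngine`, and `status.state`; verify the field paths against your installed version:

```shell
# List volumes whose robustness is faulted (the output should be empty before upgrading).
kubectl -n longhorn-system get volumes.longhorn.io \
  -o jsonpath='{range .items[?(@.status.robustness=="faulted")]}{.metadata.name}{"\n"}{end}'

# Show the state of every V2 Data Engine volume; all should be detached.
kubectl -n longhorn-system get volumes.longhorn.io \
  -o jsonpath='{range .items[?(@.spec.dataEngine=="v2")]}{.metadata.name}{" "}{.status.state}{"\n"}{end}'
```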
Manager URL for External API Access
SUSE Storage v1.11.0 introduces the manager-url setting that allows explicit configuration of the external URL for accessing the Longhorn Manager API.
Background: When Longhorn Manager is accessed through Ingress or Gateway API HTTPRoute, API responses may contain internal cluster IPs (for example, 10.42.x.x:9500) in the actions and links fields. This occurs when the ingress controller does not properly set X-Forwarded-* headers, causing the API to fall back to the internal pod IP.
Solution: Configure the manager-url setting with your external URL (for example, https://longhorn.example.com). The Manager injects proper forwarded headers to ensure API responses contain correct external URLs.
Configuration:
- Via Helm: `--set defaultSettings.managerUrl="https://longhorn.example.com"`
- Via kubectl: `kubectl -n longhorn-system patch settings.longhorn.io manager-url --type='merge' -p '{"value":"https://longhorn.example.com"}'`
- Via UI: Settings > General > Manager URL
For more details, see Manager URL.
Gateway API HTTPRoute Support
SUSE Storage v1.11.0 introduces built-in support for Gateway API HTTPRoute as a modern alternative to Ingress for exposing the SUSE Storage UI.
For detailed setup instructions, prerequisites and advanced configuration, see Create an HTTPRoute with Gateway API.
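A minimal HTTPRoute for the UI might look like the following sketch. It assumes an existing Gateway (the `my-gateway` name and `gateway-infra` namespace are placeholders) and routes to the `longhorn-frontend` service:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: longhorn-ui
  namespace: longhorn-system
spec:
  parentRefs:
    - name: my-gateway        # placeholder: an existing Gateway resource
      namespace: gateway-infra
  hostnames:
    - longhorn.example.com
  rules:
    - backendRefs:
        - name: longhorn-frontend  # the SUSE Storage UI service
          port: 80
```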
Concurrent Job Limit for Snapshot Operations
SUSE Storage v1.11.0 introduces the Snapshot Heavy Task Concurrent Limit to prevent disk exhaustion and resource contention. This setting limits concurrent heavy operations—such as snapshot purge and clone—per node by queuing additional tasks until ongoing ones complete. By controlling these processes, the system reduces the risk of storage spikes typically triggered by snapshot merges.
For further details, refer to Snapshot Heavy Task Concurrent Limit and Issue #11635.
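The setting can be inspected and adjusted like any other Longhorn setting. Because the exact setting ID is version-specific, look it up rather than hard-coding it:

```shell
# Locate the snapshot heavy task concurrent limit setting by listing and filtering;
# patch it once you have confirmed the exact setting name.
kubectl -n longhorn-system get settings.longhorn.io | grep -i snapshot
```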
Scheduling
Replica Scheduling with Balance Algorithm
To improve data distribution and resource utilization, SUSE Storage introduces a balance algorithm that schedules replicas evenly across nodes and disks based on calculated balance scores.
For more information, see Scheduling.
Supports StorageClass allowedTopologies
SUSE Storage CSI now supports StorageClass allowedTopologies, enabling Kubernetes to automatically restrict pod and volume scheduling to nodes where SUSE Storage is available.
For more information, see Issue #12261 and Storage Class Parameters.
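As an illustrative sketch, a StorageClass restricted to specific zones could look like this (the zone values are placeholders; the topology key must match the labels on your nodes):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-zoned
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - zone-a   # placeholder zone names
          - zone-b
```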
Monitoring
Disk health monitoring
Starting with SUSE Storage v1.11.0, disk health monitoring is available for both the V1 and V2 data engines. SUSE Storage collects disk health data and exposes it through Prometheus metrics and the Node custom resources.
Key features:
- Automatic health data collection every 10 minutes.
- Disk health status and detailed attributes exposed as Prometheus metrics.
- Health data available in the nodes.longhorn.io custom resources.
For more information, see Disk health monitoring.
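For a quick look at the collected data, the per-disk status on a node custom resource can be inspected directly (the node name `worker-1` is a placeholder, and the exact field layout may vary by version):

```shell
# Dump the disk status section of a node CR, which includes health-related conditions.
kubectl -n longhorn-system get nodes.longhorn.io worker-1 \
  -o jsonpath='{.status.diskStatus}'
```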
Access Mode Stability
ReadWriteOncePod Access Mode
SUSE Storage v1.11.0 introduces support for the ReadWriteOncePod (RWOP) access mode, addressing the need for stricter single-pod volume access guarantees in stateful workloads. Unlike ReadWriteOnce (RWO), which permits multiple pods on the same node to mount a volume, RWOP ensures that only one pod across the entire cluster can access the volume at any given time. This capability is particularly valuable for stateful applications requiring exclusive write access, such as databases or other workloads where concurrent access could lead to data corruption or consistency issues.
For more information, see Access Modes and Issue #9727.
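Requesting RWOP uses the standard Kubernetes PersistentVolumeClaim API; for example (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-pvc
spec:
  accessModes:
    - ReadWriteOncePod   # only one pod in the entire cluster may use the volume
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
```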
Rebuilding
Scale Replica Rebuilding
Starting with SUSE Storage v1.11.0, a new scale replica rebuilding feature allows a rebuilding replica to fetch snapshot data from multiple healthy replicas concurrently, potentially improving rebuild performance.
For more information, see Scale Replica Rebuilding.
Offline Replica Rebuilding
Starting with SUSE Storage v1.11.0, the Offline Replica Rebuilding setting is updated from a data engine-specific setting to a global setting. Previously, users could configure offline replica rebuilding separately for v1 and v2 data engines. During the upgrade to v1.11.0, SUSE Storage automatically checks the existing configuration. If offline replica rebuilding is enabled for either the v1 or v2 data engine, the new global setting defaults to true. Otherwise, it remains disabled (false).
For more information, see Offline Replica Rebuilding Setting.
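After upgrading, the effective value can be checked and changed through the settings CR. The setting ID `offline-replica-rebuilding` is assumed here; confirm it by listing the settings first:

```shell
# Confirm the setting name, then enable the global setting.
kubectl -n longhorn-system get settings.longhorn.io | grep -i offline
kubectl -n longhorn-system patch settings.longhorn.io offline-replica-rebuilding \
  --type='merge' -p '{"value":"true"}'
```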
Command-Line Tool
Package Manager Detection for Unsupported Distributions
SUSE Storage v1.11.0 enhances the Longhorn CLI preflight install and check behavior. When /etc/os-release does not match a known distribution, the CLI attempts to detect a supported package manager and continues in a compatibility mode.
For more information, see Issue #12153.
V2 Data Engine
SUSE Storage System Upgrade
Live upgrades of V2 volumes are not supported. Before you upgrade, make sure all V2 volumes are detached.
Technical Preview
The V2 Data Engine is a Technical Preview feature in SUSE Storage v1.11.0.
It is nearly complete, with no significant functional changes expected, and has been validated in controlled environments. Users should evaluate the feature thoroughly before enabling it in production.
SPDK UBLK Performance Parameters
Starting with SUSE Storage v1.11.0, the SPDK UBLK front-end exposes performance-tuning parameters that can be configured globally or per-volume:
- Queue Depth (ublkQueueDepth): The depth of each I/O queue for the UBLK front-end. The default value is 128.
- Number of Queues (ublkNumberOfQueue): The number of I/O queues for the UBLK front-end. The default value is 1.
These parameters can be configured:
- Globally: Via the Default Ublk Queue Depth and Default Ublk Number Of Queue settings (see Settings).
- Per-volume: Via the ublkQueueDepth and ublkNumberOfQueue volume parameters.
- StorageClass: Via the ublkQueueDepth and ublkNumberOfQueue parameters in the StorageClass definition.
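As a sketch of the StorageClass form (the class name is a placeholder, and the parameter values simply restate the defaults):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-v2-ublk
provisioner: driver.longhorn.io
parameters:
  dataEngine: "v2"
  ublkQueueDepth: "128"      # per-volume override of the global default
  ublkNumberOfQueue: "1"
```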
For more information, see Issue #11039.