Important Notes
This page summarizes the key notes for Longhorn v1.11.0. For the full release notes, see here.
V2 Backing Image is deprecated and will be removed in a future release. Users can use the Containerized Data Importer (CDI) to import images into Longhorn as an alternative. For more information, see Longhorn with CDI Imports.
With efficient cloning enabled, a newly cloned and detached volume is degraded and has only one replica, with its clone status set to copy-completed-awaiting-healthy. To bring the volume to a healthy state, transition the clone status to completed and rebuild the remaining replicas, either by enabling offline replica rebuilding or by attaching the volume to trigger replica rebuilding. See Issue #12341 and Issue #12328.
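To check where a cloned volume stands, you can inspect its clone status directly on the Volume custom resource. A minimal sketch, assuming the volume is named `my-clone` and the status is surfaced under `status.cloneStatus` (verify the exact field path for your version):

```bash
# Illustrative: inspect the clone status of a detached cloned volume.
# "my-clone" is a placeholder volume name.
kubectl -n longhorn-system get volumes.longhorn.io my-clone -o jsonpath='{.status.cloneStatus}'
```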
Due to the upgrade of the CSI external snapshotter to v8.2.0, all clusters must be running Kubernetes v1.25 or later before you can upgrade to Longhorn v1.8.0 or a newer version.
When upgrading via Helm or Rancher App Marketplace, Longhorn performs pre-upgrade checks. If a check fails, the upgrade stops, and the reason for the failure is recorded in an event.
For more detail, see Upgrading Longhorn Manager.
Automated pre-upgrade checks do not cover all scenarios. Manual checks via kubectl or the UI are recommended.
Longhorn v1.11.0 introduces the manager-url setting that allows explicit configuration of the external URL for accessing the Longhorn Manager API.
Background: When Longhorn Manager is accessed through Ingress or Gateway API HTTPRoute, API responses may contain internal cluster IPs (e.g., 10.42.x.x:9500) in the actions and links fields. This occurs when the ingress controller does not properly set X-Forwarded-* headers, causing the API to fall back to the internal pod IP.
Solution: Configure the manager-url setting with your external URL (e.g., https://longhorn.example.com). The Manager will inject proper forwarded headers to ensure API responses contain correct external URLs.
Configuration:

Via Helm:

```bash
--set defaultSettings.managerUrl="https://longhorn.example.com"
```

Via kubectl:

```bash
kubectl -n longhorn-system patch settings.longhorn.io manager-url --type='merge' -p '{"value":"https://longhorn.example.com"}'
```

For more details, see Manager URL.
Longhorn v1.11.0 introduces native support for Gateway API HTTPRoute as a modern alternative to Ingress for exposing the Longhorn UI.
For detailed setup instructions, prerequisites, and advanced configuration, see Create an HTTPRoute with Gateway API.
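As a rough sketch, an HTTPRoute for the Longhorn UI might look like the following. The Gateway name `my-gateway` and the hostname are placeholders for your environment; `longhorn-frontend` on port 80 is the service the Longhorn UI normally exposes, but check the linked guide for the exact resources your setup requires:

```yaml
# Illustrative HTTPRoute exposing the Longhorn UI through a Gateway API gateway.
# "my-gateway" and the hostname are placeholders.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: longhorn-ui
  namespace: longhorn-system
spec:
  parentRefs:
    - name: my-gateway
      namespace: default
  hostnames:
    - longhorn.example.com
  rules:
    - backendRefs:
        - name: longhorn-frontend
          port: 80
```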
Longhorn v1.11.0 introduces the Snapshot Heavy Task Concurrent Limit to prevent disk exhaustion and resource contention. This setting limits concurrent heavy operations—such as snapshot purge and clone—per node by queuing additional tasks until ongoing ones complete. By controlling these processes, the system reduces the risk of storage spikes typically triggered by snapshot merges.
For further details, refer to Snapshot Heavy Task Concurrent Limit and Longhorn #11635.
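As with other Longhorn settings, the limit can be adjusted through the settings resource or the UI. A minimal sketch, assuming the setting name is `snapshot-heavy-task-concurrent-limit` (this name is an assumption; confirm it in the linked reference):

```bash
# Illustrative: raise the per-node concurrent limit for heavy snapshot tasks to 2.
# The setting name used here is assumed; verify it in the Settings reference.
kubectl -n longhorn-system patch settings.longhorn.io snapshot-heavy-task-concurrent-limit \
  --type='merge' -p '{"value":"2"}'
```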
To improve data distribution and resource utilization, Longhorn introduces a balance algorithm that schedules replicas evenly across nodes and disks based on calculated balance scores.
For more information, see Scheduling.
Longhorn CSI now supports StorageClass `allowedTopologies`, enabling Kubernetes to automatically restrict pod and volume scheduling to nodes where Longhorn is available.
For more information, see Longhorn #12261 and Storage Class Parameters.
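A sketch of what this can look like, assuming your Longhorn-enabled nodes carry a zone topology label (the topology key and zone value here are placeholders; see the linked Storage Class Parameters page for what Longhorn actually publishes):

```yaml
# Illustrative StorageClass restricting volume scheduling to one zone.
# The topology key and zone name are placeholders for your environment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-zone-a
provisioner: driver.longhorn.io
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  numberOfReplicas: "3"
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - zone-a
```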
Starting with Longhorn v1.11.0, disk health monitoring is available for both V1 and V2 data engines. Longhorn collects health data from disks and exposes it through Prometheus metrics and Longhorn Node Custom Resources.
Key Features:

- Disk health data exposed through Prometheus metrics
- Disk health data recorded in `nodes.longhorn.io` Custom Resources

Note:
- SMART data may not be fully available in virtualized or cloud environments (e.g., AWS EBS), which may result in zero values for certain attributes.
- Available health attributes vary depending on disk type and hardware.
For more information, see Disk Health Monitoring.
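To see what Longhorn has collected for a given node, you can inspect the node custom resource directly. A minimal sketch, assuming the collected data surfaces under the node's per-disk status (exact field names may vary):

```bash
# Illustrative: dump a Longhorn node CR and review its per-disk status,
# where collected disk health data is expected to appear.
kubectl -n longhorn-system get nodes.longhorn.io <node-name> -o yaml
```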
Starting with Longhorn v1.11.0, a new scale replica rebuilding feature allows a rebuilding replica to fetch snapshot data from multiple healthy replicas concurrently, potentially improving rebuild performance.
For more information, see Scale Replica Rebuilding.
Longhorn v1.11.0 introduces support for the ReadWriteOncePod (RWOP) access mode, addressing the need for stricter single-pod volume access guarantees in stateful workloads. Unlike ReadWriteOnce (RWO), which permits multiple pods on the same node to mount a volume, RWOP ensures that only one pod across the entire cluster can access the volume at any given time. This capability is particularly valuable for stateful applications requiring exclusive write access, such as databases or other workloads where concurrent access could lead to data corruption or consistency issues.
For more information, see Access Modes and Longhorn #9727.
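Requesting RWOP works the same way as the other access modes; for example, with a PVC like the following (the `longhorn` StorageClass name assumes a default installation):

```yaml
# Example PVC requesting ReadWriteOncePod so only one pod cluster-wide can mount it.
# "longhorn" is the default StorageClass name; adjust if yours differs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exclusive-data
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
```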
Longhorn v1.11.0 enhances the Longhorn CLI preflight install and check behavior. When /etc/os-release does not match a known distribution, the CLI attempts to detect a supported package manager and continues in a compatibility mode.
For more information, see Longhorn #12153.
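For reference, the preflight commands this behavior applies to look like the following (see the Longhorn CLI documentation for the full flag set):

```bash
# Run the preflight check; on an unrecognized /etc/os-release the CLI now
# attempts to detect a supported package manager and continues in compatibility mode.
longhornctl check preflight

# Install the preflight dependencies the check reports as missing.
longhornctl install preflight
```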
Live upgrades of V2 volumes are not supported. Ensure all V2 volumes are detached before upgrading.
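Before upgrading, you can verify that no V2 volumes are still attached. A minimal sketch, assuming the data engine and attachment state are exposed at `spec.dataEngine` and `status.state` on the Volume CR (verify against your version):

```bash
# Illustrative: list volumes with their data engine and attachment state
# so any V2 volumes that are still attached can be identified and detached.
kubectl -n longhorn-system get volumes.longhorn.io \
  -o custom-columns=NAME:.metadata.name,DATAENGINE:.spec.dataEngine,STATE:.status.state
```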
Starting with Longhorn v1.11.0, the SPDK UBLK frontend exposes performance-tuning parameters that can be configured globally or per-volume:
- Queue depth (`ublkQueueDepth`): The depth of each I/O queue for the UBLK frontend. Default: 128
- Number of queues (`ublkNumberOfQueue`): The number of I/O queues for the UBLK frontend. Default: 1

These parameters can be configured:

- Globally, via the Default Ublk Queue Depth and Default Ublk Number Of Queue settings (see Settings)
- Per volume, via the `ublkQueueDepth` and `ublkNumberOfQueue` volume parameters
- Per StorageClass, via the `ublkQueueDepth` and `ublkNumberOfQueue` parameters in the StorageClass definition (a sketch follows below)

For more information, see Longhorn #11039.
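For the StorageClass route, a sketch might look like this. The parameter names come from the list above; the `dataEngine: "v2"` parameter is an assumption based on the V2 data engine documentation, so confirm the exact parameters against the linked issue:

```yaml
# Illustrative StorageClass tuning the UBLK frontend for V2 volumes.
# ublkQueueDepth/ublkNumberOfQueue come from this note; "dataEngine" is assumed.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-v2-ublk-tuned
provisioner: driver.longhorn.io
parameters:
  dataEngine: "v2"
  ublkQueueDepth: "256"
  ublkNumberOfQueue: "2"
```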