Use Azure Container Storage Preview with local NVMe and volume replication
Azure Container Storage is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to configure Azure Container Storage to use Ephemeral Disk with local NVMe and volume replication as back-end storage for your Kubernetes workloads. At the end, you'll have a pod that's using local NVMe as its storage. Replication copies data across volumes on different nodes and restores a volume when a replica is lost, providing resiliency for Ephemeral Disk.
What is Ephemeral Disk?
When your application needs sub-millisecond storage latency, you can use Ephemeral Disk with Azure Container Storage to meet your performance requirements. Ephemeral means that the disks are deployed on the local virtual machines (VMs) that host the AKS cluster and aren't saved to an Azure storage service. Data on these disks is lost if you stop or deallocate the VM.
There are two types of Ephemeral Disk available: NVMe and temp SSD. NVMe is designed for high-speed data transfer between storage and CPU. Choose NVMe when your application requires higher IOPS and throughput than temp SSD, or if your workload requires replication. Replication isn't currently supported for temp SSD.
Prerequisites
If you don't have an Azure subscription, create a free account before you begin.
This article requires Azure CLI version 2.35.0 or later. See How to install the Azure CLI. If you're using the Bash environment in Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. For more information, see Get started with Azure Cloud Shell.
You'll need the Kubernetes command-line client, kubectl. It's already installed if you're using Azure Cloud Shell, or you can install it locally by running the az aks install-cli command.
If you haven't already installed Azure Container Storage, follow the instructions in Use Azure Container Storage with Azure Kubernetes Service.
Check if your target region is supported in Azure Container Storage regions.
Choose a VM type that supports local NVMe
Ephemeral Disk is only available on certain VM types. If you plan to use local NVMe, a storage optimized VM size such as Standard_L8s_v3 is required.
You can run the following command to get the VM size that's used with your node pool.
az aks nodepool list --resource-group <resource group> --cluster-name <cluster name> --query "[].{PoolName:name, VmSize:vmSize}" -o table
The following is an example of output.
PoolName VmSize
---------- ---------------
nodepool1 standard_l8s_v3
We recommend that each VM have a minimum of four virtual CPUs (vCPUs), and each node pool have at least three nodes.
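If your cluster doesn't yet have a node pool that meets these requirements, you can add one. The following command is a sketch with placeholder names; it uses the same acstor.azure.com/io-engine=acstor label that the expansion step later in this article applies, which marks the nodes for use by Azure Container Storage:
az aks nodepool add --cluster-name <cluster name> --name <nodepool name> --resource-group <resource group> --node-vm-size Standard_L8s_v3 --node-count 3 --labels acstor.azure.com/io-engine=acstor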
Create and attach persistent volumes
Follow these steps to create and attach a persistent volume.
1. Create a storage pool with volume replication
Follow these steps to create a storage pool using local NVMe with replication. Azure Container Storage currently supports three-replica and five-replica configurations. If you specify three replicas, you must have at least three nodes in your AKS cluster. If you specify five replicas, you must have at least five nodes.
Note
Because Ephemeral Disk storage pools consume all the available NVMe disks, you must delete any existing local NVMe storage pools before creating a new storage pool.
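To check for existing storage pools and remove any local NVMe pools first, you can run the following commands (both are also used later in this article); the storage pool name is a placeholder:
kubectl get sp -n acstor
kubectl delete sp -n acstor <existing-nvme-storage-pool-name>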
Use your favorite text editor to create a YAML manifest file such as acstor-storagepool.yaml.
Paste in the following code and save the file. The storage pool name value can be whatever you want. Set replicas to 3 or 5.
apiVersion: containerstorage.azure.com/v1
kind: StoragePool
metadata:
  name: nvme
  namespace: acstor
spec:
  poolType:
    ephemeralDisk:
      diskType: nvme
      replicas: 3
Apply the YAML manifest file to create the storage pool.
kubectl apply -f acstor-storagepool.yaml
When storage pool creation is complete, you'll see a message like:
storagepool.containerstorage.azure.com/nvme created
You can also run this command to check the status of the storage pool. Replace <storage-pool-name> with your storage pool name value. For this example, the value would be nvme.
kubectl describe sp <storage-pool-name> -n acstor
When the storage pool is created, Azure Container Storage will create a storage class on your behalf, using the naming convention acstor-<storage-pool-name>.
2. Display the available storage classes
When the storage pool is ready to use, you must select a storage class, which defines how storage is dynamically created when you create and deploy volumes.
Run kubectl get sc to display the available storage classes. You should see a storage class called acstor-<storage-pool-name>.
$ kubectl get sc | grep "^acstor-"
acstor-azuredisk-internal disk.csi.azure.com Retain WaitForFirstConsumer true 65m
acstor-ephemeraldisk containerstorage.csi.azure.com Delete WaitForFirstConsumer true 2m27s
Important
Don't use the storage class that's marked internal. It exists only for Azure Container Storage's own operation.
3. Create a persistent volume claim
A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. Follow these steps to create a PVC using the new storage class.
Use your favorite text editor to create a YAML manifest file such as acstor-pvc.yaml.
Paste in the following code and save the file. The PVC name value can be whatever you want.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ephemeralpvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: acstor-ephemeraldisk-nvme # replace with the name of your storage class if different
  resources:
    requests:
      storage: 100Gi
Apply the YAML manifest file to create the PVC.
kubectl apply -f acstor-pvc.yaml
You should see output similar to:
persistentvolumeclaim/ephemeralpvc created
You can verify the status of the PVC by running the following command:
kubectl describe pvc ephemeralpvc
Once the PVC is created, it's ready for use by a pod.
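Because the storage class uses the WaitForFirstConsumer volume binding mode (shown in the kubectl get sc output earlier), the PVC remains in a Pending state until a pod that uses it is scheduled. You can check its state with:
kubectl get pvc ephemeralpvc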
4. Deploy a pod and attach a persistent volume
Create a pod using Fio (Flexible I/O Tester) for benchmarking and workload simulation, and specify a mount path for the persistent volume. For claimName, use the name value that you used when creating the persistent volume claim.
Use your favorite text editor to create a YAML manifest file such as acstor-pod.yaml.
Paste in the following code and save the file.
kind: Pod
apiVersion: v1
metadata:
  name: fiopod
spec:
  nodeSelector:
    acstor.azure.com/io-engine: acstor
  volumes:
    - name: ephemeralpv
      persistentVolumeClaim:
        claimName: ephemeralpvc
  containers:
    - name: fio
      image: nixery.dev/shell/fio
      args:
        - sleep
        - "1000000"
      volumeMounts:
        - mountPath: "/volume"
          name: ephemeralpv
Apply the YAML manifest file to deploy the pod.
kubectl apply -f acstor-pod.yaml
You should see output similar to the following:
pod/fiopod created
Check that the pod is running and that the persistent volume claim has been bound successfully to the pod:
kubectl describe pod fiopod
kubectl describe pvc ephemeralpvc
Run a fio benchmark to test the volume's performance:
kubectl exec -it fiopod -- fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60
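If you also want a rough read-latency measurement, running fio at a queue depth of 1 isolates per-I/O latency. This variant is a sketch that reuses the same pod and test file as the benchmark above:
kubectl exec -it fiopod -- fio --name=latencytest --size=800m --filename=/volume/test --direct=1 --rw=randread --ioengine=libaio --bs=4k --iodepth=1 --numjobs=1 --time_based --runtime=30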
You've now deployed a pod that's using local NVMe with volume replication, and you can use it for your Kubernetes workloads.
Manage persistent volumes and storage pools
Now that you've created a persistent volume, you can detach and reattach it as needed. You can also expand or delete a storage pool.
Detach and reattach a persistent volume
To detach a persistent volume, delete the pod that the persistent volume is attached to.
kubectl delete pods <pod-name>
To reattach a persistent volume, simply reference the persistent volume claim name in the YAML manifest file as described in Deploy a pod and attach a persistent volume.
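For example, the following is a minimal pod manifest that reattaches the volume, reusing the ephemeralpvc claim and fio image from earlier in this article (the pod name is a placeholder):
kind: Pod
apiVersion: v1
metadata:
  name: fiopod2
spec:
  nodeSelector:
    acstor.azure.com/io-engine: acstor
  volumes:
    - name: ephemeralpv
      persistentVolumeClaim:
        claimName: ephemeralpvc
  containers:
    - name: fio
      image: nixery.dev/shell/fio
      args:
        - sleep
        - "1000000"
      volumeMounts:
        - mountPath: "/volume"
          name: ephemeralpv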
To check which persistent volume a persistent volume claim is bound to, run:
kubectl get pvc <persistent-volume-claim-name>
Expand a storage pool
You can expand storage pools backed by local NVMe to scale up quickly and without downtime. Shrinking storage pools isn't currently supported.
Because a storage pool backed by Ephemeral Disk uses local storage resources on the AKS cluster nodes (VMs), expanding the storage pool requires adding another node to the cluster. Follow these instructions to expand the storage pool.
Run the following command to add a node to the AKS cluster. Replace <cluster name>, <nodepool name>, and <resource group> with your own values. To get the name of your node pool, run kubectl get nodes.
az aks nodepool add --cluster-name <cluster name> --name <nodepool name> --resource-group <resource group> --node-vm-size Standard_L8s_v3 --node-count 1 --labels acstor.azure.com/io-engine=acstor
Run kubectl get nodes and you'll see that a node has been added to the cluster.
Run kubectl get sp -A and you should see that the capacity of the storage pool has increased.
Delete a storage pool
If you want to delete a storage pool, run the following command. Replace <storage-pool-name> with the storage pool name.
kubectl delete sp -n acstor <storage-pool-name>
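To confirm that the storage pool was deleted, you can list the remaining storage pools:
kubectl get sp -n acstor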