Use Azure Container Storage Preview with local NVMe
Azure Container Storage is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to configure Azure Container Storage to use Ephemeral Disk with local NVMe as back-end storage for your Kubernetes workloads. At the end, you'll have a pod that's using local NVMe as its storage.
What is Ephemeral Disk?
When your application needs sub-millisecond storage latency and doesn't require data durability, you can use Ephemeral Disk with Azure Container Storage to meet your performance requirements. Ephemeral means that the disks are deployed on the local virtual machine (VM) hosting the AKS cluster and not saved to an Azure storage service. Data will be lost on these disks if you stop/deallocate your VM.
There are two types of Ephemeral Disk available: NVMe and temp SSD. NVMe is designed for high-speed data transfer between storage and CPU. Choose NVMe when your application requires higher IOPS and throughput than temp SSD, or if your workload requires replication. Replication isn't currently supported for temp SSD.
Prerequisites
If you don't have an Azure subscription, create a free account before you begin.
This article requires Azure CLI version 2.35.0 or later. See How to install the Azure CLI. If you're using the Bash environment in Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, run them with administrative privileges. For more information, see Get started with Azure Cloud Shell.
You'll need the Kubernetes command-line client, kubectl. It's already installed if you're using Azure Cloud Shell, or you can install it locally by running the az aks install-cli command (a short setup sketch follows this list).
If you haven't already installed Azure Container Storage, follow the instructions in Use Azure Container Storage with Azure Kubernetes Service.
Check if your target region is supported in Azure Container Storage regions.
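If you're setting up a local environment, the following sketch consolidates the setup steps above. The az upgrade step is an assumption about how you keep the Azure CLI current; adjust it for your installation method.

# Check your Azure CLI version (2.35.0 or later is required)
az version

# Upgrade the Azure CLI if needed (assumes your installation supports az upgrade)
az upgrade

# Install kubectl locally if you aren't using Azure Cloud Shell
az aks install-cli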
Choose a VM type that supports local NVMe
Ephemeral Disk is only available in certain types of VMs. If you plan to use local NVMe, a storage optimized VM such as standard_l8s_v3 is required.
You can run the following command to get the VM type that's used with your node pool.
az aks nodepool list --resource-group <resource group> --cluster-name <cluster name> --query "[].{PoolName:name, VmSize:vmSize}" -o table
The following is an example of output.
PoolName VmSize
---------- ---------------
nodepool1 standard_l8s_v3
We recommend that each VM have a minimum of four virtual CPUs (vCPUs), and each node pool have at least three nodes.
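If your cluster doesn't yet have a node pool that meets these requirements, you can add one with a storage optimized VM size and the node label that Azure Container Storage uses to select nodes. This is a sketch based on the node pool command shown later in this article; replace the placeholders with your own values.

# Add a storage optimized node pool with three nodes and the Azure Container Storage label
az aks nodepool add \
  --cluster-name <cluster name> \
  --resource-group <resource group> \
  --name <nodepool name> \
  --node-vm-size standard_l8s_v3 \
  --node-count 3 \
  --labels acstor.azure.com/io-engine=acstor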
Create and attach generic ephemeral volumes
Follow these steps to create and attach a generic ephemeral volume.
1. Create a storage pool
First, create a storage pool, which is a logical grouping of storage for your Kubernetes cluster, by defining it in a YAML manifest file.
If you enabled Azure Container Storage using the az aks create or az aks update commands, you might already have a storage pool. Run kubectl get sp -n acstor to list the storage pools. If you already have a storage pool that you want to use, you can skip this section and proceed to Display the available storage classes.
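If you haven't enabled Azure Container Storage on the cluster yet, a minimal sketch of enabling it with an ephemeral disk pool type is shown below. The ephemeralDisk value passed to --enable-azure-container-storage is an assumption; confirm the accepted pool-type values in Use Azure Container Storage with Azure Kubernetes Service.

# Sketch: enable Azure Container Storage with an ephemeral disk storage pool (verify parameter values in the installation guide)
az aks update --resource-group <resource group> --name <cluster name> --enable-azure-container-storage ephemeralDisk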
Follow these steps to create a storage pool using local NVMe.
Use your favorite text editor to create a YAML manifest file such as acstor-storagepool.yaml. Paste in the following code and save the file. The storage pool name value can be whatever you want.
apiVersion: containerstorage.azure.com/v1
kind: StoragePool
metadata:
  name: ephemeraldisk
  namespace: acstor
spec:
  poolType:
    ephemeralDisk: {}
Apply the YAML manifest file to create the storage pool.
kubectl apply -f acstor-storagepool.yaml
When storage pool creation is complete, you'll see a message like:
storagepool.containerstorage.azure.com/ephemeraldisk created
You can also run the following command to check the status of the storage pool. Replace <storage-pool-name> with your storage pool name value. For this example, the value would be ephemeraldisk.

kubectl describe sp <storage-pool-name> -n acstor
When the storage pool is created, Azure Container Storage will create a storage class on your behalf, using the naming convention acstor-<storage-pool-name>.
2. Display the available storage classes
When the storage pool is ready to use, you must select a storage class, which defines how storage is dynamically created when you create and deploy volumes.
Run kubectl get sc to display the available storage classes. You should see a storage class called acstor-<storage-pool-name>.
$ kubectl get sc | grep "^acstor-"
acstor-azuredisk-internal disk.csi.azure.com Retain WaitForFirstConsumer true 65m
acstor-ephemeraldisk containerstorage.csi.azure.com Delete WaitForFirstConsumer true 2m27s
Important
Don't use the storage class that's marked internal. It's an internal storage class that's needed for Azure Container Storage to work.
3. Deploy a pod with a generic ephemeral volume
Create a pod that uses a generic ephemeral volume and runs Fio (Flexible I/O Tester) for benchmarking and workload simulation.
Use your favorite text editor to create a YAML manifest file such as acstor-pod.yaml. Paste in the following code and save the file.
kind: Pod
apiVersion: v1
metadata:
  name: fiopod
spec:
  nodeSelector:
    acstor.azure.com/io-engine: acstor
  containers:
    - name: fio
      image: nixery.dev/shell/fio
      args:
        - sleep
        - "1000000"
      volumeMounts:
        - mountPath: "/volume"
          name: ephemeralvolume
  volumes:
    - name: ephemeralvolume
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-ephemeral-volume
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "acstor-ephemeraldisk" # replace with the name of your storage class if different
            resources:
              requests:
                storage: 1Gi
Apply the YAML manifest file to deploy the pod.
kubectl apply -f acstor-pod.yaml
You should see output similar to the following:
pod/fiopod created
Check that the pod is running and that the ephemeral volume claim has been bound successfully to the pod:
kubectl describe pod fiopod
kubectl describe pvc fiopod-ephemeralvolume
Run a fio benchmark against the volume to check its performance:
kubectl exec -it fiopod -- fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60
You've now deployed a pod that's using local NVMe as its storage, and you can use it for your Kubernetes workloads.
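When you're done testing, you can delete the pod. Because the volume is a generic ephemeral volume, Kubernetes removes the associated persistent volume claim along with the pod, so the test data is discarded. A minimal cleanup sketch:

# Delete the benchmark pod; its ephemeral volume claim is deleted with it
kubectl delete pod fiopod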
Manage storage pools
Now that you've created your storage pool, you can expand or delete it as needed.
Expand a storage pool
You can expand storage pools backed by local NVMe to scale up quickly and without downtime. Shrinking storage pools isn't currently supported.
Because a storage pool backed by Ephemeral Disk uses local storage resources on the AKS cluster nodes (VMs), expanding the storage pool requires adding another node to the cluster. Follow these instructions to expand the storage pool.
Run the following command to add a node to the AKS cluster. Replace <cluster name>, <nodepool name>, and <resource group> with your own values. To get the name of your node pool, run kubectl get nodes.

az aks nodepool add --cluster-name <cluster name> --name <nodepool name> --resource-group <resource group> --node-vm-size Standard_L8s_v3 --node-count 1 --labels acstor.azure.com/io-engine=acstor

Run kubectl get nodes and you'll see that a node has been added to the cluster.

Run kubectl get sp -A and you should see that the capacity of the storage pool has increased.
Delete a storage pool
If you want to delete a storage pool, run the following command. Replace <storage-pool-name> with the storage pool name.
kubectl delete sp -n acstor <storage-pool-name>