This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

Uplifting & Expanding Linux LVM Managed Disks in Azure

One of the great things about the public cloud is the ability to simply increase the data disk space for your systems.
In the on-premises, non-virtualised world, you would have needed to physically add more disks or swap a disk for a bigger unit.
Even in the on-premises virtualised world, you may have needed more physical disk to expand a virtual hard disk.

Sometimes those disks can be storing data files for databases.
In which case, if you have followed the Azure architecture best practices, then you will be using LVM or some other volume management layer on top of the raw disk device. This will give you more flexibility and performance (through striping).

In this guide I show how to increase disk capacity by uplifting the data disks in Azure, then resizing the disk devices in Linux, and finally growing the XFS file system to the larger size.
I will also discuss good reasons to uplift versus adding extra data disks.

My Initial Setup

In this step-by-step, we will be using Linux (SUSE Linux Enterprise Server 12) with LVM as our volume management software.
The assumption is that you have already created your data disks (2 of them) and striped across those disks with a single logical volume.
(Remember, the striping gives you double the IOPS when reading/writing the data to/from the disks).

In my simple example, I used the following to create my LVM setup:

  1. Add 2x data disks of size 128GB (I used 2x S10) to a VM running SLES 12 using the Azure Portal.
  2. Create the physical volumes (mine were sdd and sde on LUNs 1 and 2):
    pvcreate /dev/sdd
    pvcreate /dev/sde

  3. Create the volume group:
    vgcreate volTMP /dev/sdd /dev/sde
  4. Create the striped logical volume using all the space (the “-i 2” flag stripes across both physical volumes; without it, lvcreate creates a linear volume):
    lvcreate -i 2 -l 100%FREE -n lvTMP1 volTMP
  5. Create the file system using XFS:
    mkfs.xfs /dev/mapper/volTMP-lvTMP1
  6. Mount the file system to a new mount point:
    mkdir /BIGSTRIPEDDISK
    mount /dev/mapper/volTMP-lvTMP1 /BIGSTRIPEDDISK

In Azure my setup looked like the below (I already had 1 data disk, so I added 2 more):

In the VM, we can see the file system is mounted and has a size of 256GB (2x 128GB disks):

You can double-check the striping using the lvdisplay command with the “-m” flag:
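A minimal sketch of that check, assuming the volume group and logical volume names used in the setup above (volTMP/lvTMP1):

```shell
# Show the logical volume details, including the segment mapping;
# a striped LV reports "Type: striped" and "Stripes: 2" in the
# Segments section of the output
lvdisplay -m /dev/volTMP/lvTMP1
```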

Once the disk was setup, I then created a simple text file with ASCII text inside:

I also used “dd” to create a large 255GB file (leaving 1GB free):

dd if=/dev/zero of=./mybigfile.data bs=1024k count=261120

The disk usage is now close to 100%:
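A quick way to confirm the usage, assuming the mount point and file name created above:

```shell
# With a 255GB file on a 256GB file system, usage should report
# close to 100%
df -h /BIGSTRIPEDDISK
du -sh /BIGSTRIPEDDISK/mybigfile.data
```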

I ran a checksum on the large file:

Value is: 3494419206
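The checksum was produced with the cksum tool (file name as created above); the first field of its output is the CRC value quoted in the text:

```shell
# Output format: <checksum> <size-in-bytes> <filename>
cksum /BIGSTRIPEDDISK/mybigfile.data
```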

With the checksum completed (it took a few minutes), I now have a way of checking that my file is the same before/after the disk resize, plus the cksum tool will force reading of the whole file (checking for filesystem I/O issues).

Increasing the Data Disk Size

Within the Azure portal, we first need to stop the VM:

Once stopped, we can go to each of the two data disks and uplift from an S10 (in my example) to an S15 (256GB):

We can now start the VM up again:
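The portal steps above can also be scripted with the Azure CLI; the resource group, VM, and disk names below are hypothetical placeholders:

```shell
# Deallocate the VM so the disks can be resized
az vm deallocate --resource-group myRG --name myVM

# Uplift each data disk from 128GB (S10) to 256GB (S15)
az disk update --resource-group myRG --name myVM-data1 --size-gb 256
az disk update --resource-group myRG --name myVM-data2 --size-gb 256

# Start the VM again
az vm start --resource-group myRG --name myVM
```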

When the VM is running again, we can log in and check.
Our file system is the same size:

We check with the LVM command “pvdisplay” to display one of the physical disks, and we can see that the size has not changed; it is still 128GB:

We need to make LVM re-scan the disk to make it aware of the new increased size. We use the pvresize command:

Re-checking the disk using pvdisplay, we can see it has increased to 256GB in size:

We do the same for the /dev/sde disk:
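The rescan step for both disks, assuming the same device names (sdd and sde) as in the original setup:

```shell
# Make LVM pick up the new size of each physical volume
pvresize /dev/sdd
pvresize /dev/sde

# Verify the new sizes
pvdisplay /dev/sdd | grep "PV Size"
pvdisplay /dev/sde | grep "PV Size"
```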

Once the physical disks are resized (in the eyes of LVM), we can now check the volume group:

We now have 256GB of free space (see the row “Free PE / Size”) in our volume group.
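A sketch of that check, using the volume group name from the setup above:

```shell
# Both "Alloc PE / Size" and "Free  PE / Size" are shown;
# the free space should now be ~256GB
vgdisplay volTMP | grep "PE / Size"
```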

To allow our file system to get this space, the logical volume within the volume group needs to be expanded into the free space.
We use the lvresize command to make our logical volume use all free space in the volume group “+100%FREE”:

NOTE: It is also possible to specify an exact size if you want to be precise.
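The resize command, using the names from the setup above:

```shell
# Extend the logical volume into all remaining free space in the VG
lvresize -l +100%FREE /dev/volTMP/lvTMP1

# Alternatively, an exact size can be given instead, e.g.:
# lvresize -L 512G /dev/volTMP/lvTMP1
```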

Our file system is still only 256GB in size, until we resize it.
For XFS file systems, we use the xfs_growfs command as follows:
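The grow step, assuming the mount point created above; note that XFS must be grown while mounted, and xfs_growfs takes the mount point rather than the device:

```shell
# Grow the XFS file system online to fill the resized logical volume
xfs_growfs /BIGSTRIPEDDISK

# Confirm the new size
df -h /BIGSTRIPEDDISK
```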

Checking the file system now shows it is 512GB in size (with around 50% free):

Are my files still present? Yes:

Let’s check the contents of my text file:

Finally, I validate that my big data file has not been corrupted:

Value is: 3494419206

What is the Alternative to Uplifting?

Instead of uplifting the existing data disks, it is possible to increase the amount of storage in the volume group by adding two additional disks.
To prevent performance issues, these new disks should be of the same scale level (S10) as the existing disks.
You should definitely not mix disk types in a logical volume; to prevent this, do not mix them in a volume group (even though you could technically separate them at the logical volume level).

Is there a good reason to add more disks? When you are going to create a new logical volume, it is ideal to keep the data on separate physical disks to help avoid data loss (from a lost/deleted disk).
There are also performance reasons to have additional Linux devices, since parameters such as queue depth apply at the Linux device level. The Linux O/S can effectively issue more simultaneous read requests, because additional data disks are additional devices.

Is there a good reason not to add more disks? When you could exceed the VM's data disk count limitation. Each VM size has a limit; the bigger the VM, the higher the limit.

Another reason is when you always leave a small proportion of disk space free. Adding more disks that will only ever be at most 80% used is more wasteful than upscaling an existing set of disks that will only ever be 80% used.

Summary

Using the power of Azure, we have increased the data disk sizes of our VM.
The increase needed the VM to be stopped and started again while the disks were uplifted from an S10 (128GB) to an S15 (256GB).

Once uplifted, we had to make LVM aware of the new disk sizes by using the pvresize command, then the free space in the volume group was given to the logical volume and finally the file system was grown.
To maintain the logical volume stripe, both disks were uplifted, and pvresize was then run against both.
We validated the before and after state of our ASCII and data files and they were not corrupted.

Finally, we looked at the alternative to uplifting and saw that uplifting may not be appropriate in all cases.