I’m running XCP-ng with a number of VMs, one of which has run out of disk space. The VM is running Debian 10.
df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           393M   40M  354M  11% /run
/dev/xvda1       16G   16G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs           393M     0  393M   0% /run/user/1000
Initially the VM was set up with a 20GB drive, which I am attempting to double. I used XOA to shut down the VM and increased the disk size (under the Disks tab of the VM) to 40GB, then restarted the VM, but df still shows the same result.
How can I increase the usable disk size? Having reached maximum usage is causing all sorts of other problems.
Not sure if there is a more elegant solution, but I did this once with a VM on a Proxmox host. After increasing the disk size in Proxmox, I booted the VM from a live ISO (I think I used an Ubuntu ISO) and used GParted on that live system to grow the partition.
Thanks, but I’m not sure how I would boot a VM off an ISO. No problem with ‘real’ hardware, but I’ve never tried it on a VM.
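In case it helps: besides selecting an ISO in the XOA console, you can attach and boot an ISO from the xe CLI on the XCP-ng host. A rough sketch, assuming the ISO is already in an ISO storage repository (the VM name and ISO filename below are placeholders):

```shell
# On the XCP-ng host (dom0). Names/UUIDs are placeholders for your setup.
# Find the VM's UUID and the available ISOs:
xe vm-list name-label="my-debian-vm" params=uuid
xe cd-list

# Attach the live ISO as a virtual CD drive:
xe vm-cd-add vm=<vm-uuid> cd-name="gparted-live.iso" device=3

# For an HVM guest, boot from CD ("d") before disk ("c"):
xe vm-param-set uuid=<vm-uuid> HVM-boot-params:order=dc

xe vm-start uuid=<vm-uuid>

# When you're done, restore disk-first boot:
# xe vm-param-set uuid=<vm-uuid> HVM-boot-params:order=cd
```

A GParted Live ISO works well here, since it boots straight into the partition editor.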
I’ve looked at resizing the disk (xvda) using command-line tools, but it looks risky: the main partition (xvda1) sits ahead of the swap partition (xvda2/xvda5), and I’m unsure which commands to run, in what order, to extend xvda1 without losing its data or the swap partition.
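For reference, with free space sitting *after* the extended/swap partitions, the usual command-line approach is to delete and recreate the partition table entries from a live environment. A hedged sketch, assuming the MBR layout described above (xvda1 = ext4 root, xvda2 = extended, xvda5 = swap); take a snapshot or backup first, since this rewrites the partition table:

```shell
# Run from a live ISO (root filesystem not mounted, or at least quiesced).
swapoff -a                      # stop using the swap partition

fdisk /dev/xvda
# Inside fdisk (interactive):
#   d, 5    -> delete xvda5 (swap)
#   d, 2    -> delete xvda2 (extended)
#   d, 1    -> delete xvda1 (table entry only; the data is untouched)
#   n, p, 1 -> recreate xvda1 at the SAME start sector, ending ~2G short
#              of the disk end (answer "No" if asked to remove the
#              existing ext4 signature)
#   n, e    -> recreate the extended partition in the remaining space,
#   n       -> then a logical partition inside it; t, 5, 82 -> swap type
#   w       -> write the new table

partprobe /dev/xvda             # make the kernel re-read the table
resize2fs /dev/xvda1            # grow the ext4 filesystem into the new space
mkswap /dev/xvda5               # re-initialise swap (its UUID will change)
swapon /dev/xvda5
```

Because mkswap generates a new UUID, check `/etc/fstab` (and any resume/swap entry in the initramfs config) and update the swap reference before rebooting. GParted from a live ISO does essentially the same sequence with less chance of a typo.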
It is a little harder with a Windows system, but the above is the procedure I’ve used. With Windows you will need to use GParted to move the last partition to the end of the enlarged disk first; then, in Windows, you can claim the now-contiguous free space for the user/system partition.
For future reference, you can deploy the Debian template from XCP-ng under the ‘Hub’ tab, which has cloud-init enabled. It will automatically resize the filesystem on boot if the disk is larger than it was last time.
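That auto-resize behaviour comes from cloud-init’s `growpart` and `resizefs` modules. A minimal cloud-config fragment showing the relevant settings (illustrative only; the exact config shipped in the Hub template isn’t shown in this thread):

```yaml
# e.g. /etc/cloud/cloud.cfg.d/99-growpart.cfg (filename is illustrative)
# On each boot, the growpart module extends the listed partition into any
# adjacent free space, and resizefs then grows the filesystem to match.
growpart:
  mode: auto
  devices: ["/"]
resize_rootfs: true
```

Note that growpart only works when the free space immediately follows the partition, which is another reason the cloud templates put the root partition last (or use no swap partition at all).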