How to Set Up XCP-ng Right the First Time – Best Practices and Configuration Tips

:hammer_and_wrench: XCP-ng Server Setup Best Practices

:brick: 1. Hardware Planning

  • :white_check_mark: General Hardware — Both older and newer hardware are fine, as long as it is 64-bit x86 and supports virtualization (Intel VT-x or AMD-V)
  • :white_check_mark: Software RAID Boot — XCP-ng supports an mdadm mirror setup at install time
  • :white_check_mark: Hardware RAID Controller — Good for local storage management
  • :white_check_mark: HBA or passthrough — Allows software (like ZFS or external storage) to manage redundancy. RAID controllers are not recommended when passing disks through to ZFS
  • :white_check_mark: Dual or more NICs — Separate management, VM, and storage traffic.
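As a quick pre-install check, you can confirm the CPU advertises the needed virtualization extensions from any Linux live environment on the candidate host. A minimal sketch (`vmx` is Intel VT-x, `svm` is AMD-V):

```shell
# Sketch: check cpuinfo 'flags' for hardware virtualization support.
# vmx = Intel VT-x, svm = AMD-V.
check_virt() { grep -qE '\b(vmx|svm)\b' && echo "supported" || echo "not found"; }

# On real hardware you would run:  check_virt < /proc/cpuinfo
# Demonstrated here on sample 'flags' lines:
echo "flags : fpu vme de pse vmx"  | check_virt   # prints "supported"
echo "flags : fpu vme de pse sse2" | check_virt   # prints "not found"
```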

:package: 2. Install & Initial Config

  • :white_check_mark: Use the latest stable XCP-ng ISO — Stable builds ensure compatibility and security.
  • :white_check_mark: Set a static IP during install — Prevents DHCP surprises, especially for remote management and storage access.
  • :white_check_mark: Have a Host Naming Scheme — Simplifies management and XOA integration.
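If you need to change the management IP after install, the host's `xe` CLI can do it (the installer and `xsconsole` are the usual routes). A dry-run sketch — the `run` helper only prints each command, and the UUID and addresses are placeholders:

```shell
# Dry-run sketch: set a static management IP with the xe CLI.
# `run` just prints the command; remove the echo to execute for real.
# Note: reconfiguring the management PIF will drop remote sessions,
# so do this from the local console.
run() { echo "+ $*"; }

PIF_UUID="<uuid from: xe pif-list management=true>"   # placeholder
run xe pif-reconfigure-ip uuid="$PIF_UUID" mode=static \
    IP=192.168.10.5 netmask=255.255.255.0 gateway=192.168.10.1 DNS=192.168.10.1
```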

:locked_with_key: 3. Security

  • :white_check_mark: The firewall is enabled by default — review and adjust if needed
  • :white_check_mark: Disable SSH password authentication — Use SSH keys for access.
  • :white_check_mark: Use VLANs or a Separate Network — Limit exposure of management interfaces.
  • :white_check_mark: Apply updates to XCP-ng and XOA — Keeps the hosts secure and stable, and brings new features
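For the SSH point above, a minimal sketch of the relevant `/etc/ssh/sshd_config` directives — add your key to `/root/.ssh/authorized_keys` and confirm key-based login works before applying:

```
# /etc/ssh/sshd_config fragment (a sketch -- keep a console session open
# until you have confirmed key-based login works)
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
```

Restart the service afterwards with `systemctl restart sshd`.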

:file_cabinet: 4. Storage Setup

:floppy_disk: Local Storage

  • :white_check_mark: Use SSDs or NVMe for local SRs — Essential for high-performance workloads.
  • :white_check_mark: Local ZFS — Supported, but currently not managed via the XO interface
  • :white_check_mark: Prefer EXT over LVM — EXT SRs are thin provisioned and file based rather than block based, so they are easier to manage and recover
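Creating a local EXT SR on a spare disk can be done with `xe sr-create`. A dry-run sketch — the UUID and device path are placeholders, and double-check the device, since the disk gets wiped:

```shell
# Dry-run sketch: create a local EXT SR. `run` only prints the command;
# remove the echo to execute for real.
run() { echo "+ $*"; }

HOST_UUID="<uuid from: xe host-list>"   # placeholder
run xe sr-create host-uuid="$HOST_UUID" type=ext content-type=user \
    name-label="Local EXT SR" device-config:device=/dev/sdb
```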

:globe_with_meridians: External Storage

  • :white_check_mark: NFS is preferred over iSCSI
    • :green_circle: Simpler integration with XCP-ng
    • :green_circle: Easier to recover from because it is file based
    • :green_circle: Use MC-LAG switch setup for redundancy
    • :yellow_circle: May be slightly lower performance than iSCSI in some edge cases
  • :white_check_mark: iSCSI
    • :green_circle: Multipath is supported
    • :yellow_circle: Not thin provisioned
    • :yellow_circle: Block not file based
  • :white_check_mark: Hyperconverged — XOSTOR, which is based on DRBD
    • :green_circle: Storage is replicated across nodes, allowing VMs to survive host failures.
    • :green_circle: No need for a dedicated external SAN/NAS — storage runs on the hosts themselves
    • :yellow_circle: Heavy network dependency to sync hosts
    • :yellow_circle: Can have much higher latency on writes without performant hardware
    • :yellow_circle: Only suitable when consistent, low-latency networking is available between nodes.
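For reference, attaching an NFS SR is a one-liner with `xe sr-create`. A dry-run sketch — the server address and export path are placeholders:

```shell
# Dry-run sketch: attach a shared NFS SR. `run` only prints the command;
# remove the echo to execute for real.
run() { echo "+ $*"; }

run xe sr-create type=nfs content-type=user shared=true \
    name-label="NFS SR" \
    device-config:server=192.168.20.10 device-config:serverpath=/export/xcp-sr
```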

:package: 5. VM Management Best Practices

  • :white_check_mark: Use consistent naming conventions — e.g., prod-app-01, lab-win10-02
  • :white_check_mark: Tag by role, department, and/or environment — Useful for automation, filtering, and backups
  • :white_check_mark: Storage Design — VMs are ideal for running compute workloads, but storage should be handled externally when practical. https://lawrence.video/storagedesign
  • :cross_mark: Don’t Over-provision — Avoid overcommitting CPU or memory beyond what the hosts can actually deliver
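The naming convention above is easy to generate consistently; a small hypothetical helper:

```shell
# Hypothetical helper for the environment-role-number naming scheme.
vm_name() { printf '%s-%s-%02d\n' "$1" "$2" "$3"; }

vm_name prod app 1    # prints "prod-app-01"
vm_name lab win10 2   # prints "lab-win10-02"
```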

:repeat_button: 6. Backup Strategy

  • :white_check_mark: Use XO backups — They are integrated and easily automated
  • :white_check_mark: Keep backups on separate storage — Ideally not on the same NAS as the live NFS share
  • :white_check_mark: Test your restores regularly — Because “successful backup” does not mean “tested restore”

:satellite_antenna: 7. Networking

  • :white_check_mark: Have a clear Naming Scheme — Take advantage of the descriptions field
  • :white_check_mark: Name The Unused Interfaces — I name them “Not In Use”
  • :white_check_mark: Segment management, VM, and storage traffic — Use VLANs or separate NICs
  • :white_check_mark: Management Interface — Use a dedicated NIC if possible, not a bonded interface
  • :white_check_mark: Bond NICs — Fully supported; make sure your network switch supports it as well
  • :white_check_mark: Dedicated Backup, Storage, Migration Network — In XCP-ng, you can assign a default network for backup, storage, and migration traffic at the pool level.
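Segmenting with VLANs maps to two `xe` commands: create a network, then bind it to a physical PIF with a VLAN tag. A dry-run sketch — the UUIDs are placeholders:

```shell
# Dry-run sketch: create a tagged VLAN network. `run` only prints the
# command; remove the echo to execute for real.
run() { echo "+ $*"; }

run xe network-create name-label="Storage VLAN 20"
PIF_UUID="<uuid of the physical NIC from: xe pif-list>"   # placeholder
NET_UUID="<uuid returned by network-create>"              # placeholder
run xe vlan-create pif-uuid="$PIF_UUID" network-uuid="$NET_UUID" vlan=20
```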

:gear: 8. Monitoring & Maintenance

  • :white_check_mark: Set Up XO Email — Configure email notifications in XO so you get backup reports and alerts
  • :white_check_mark: Use Syslog — Send logs to a syslog server; I personally like Graylog
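Pointing a host at a remote syslog server is done per host with `xe`; to the best of my knowledge this is the classic XenServer sequence, so verify against current XCP-ng docs. A dry-run sketch — the UUID and address are placeholders:

```shell
# Dry-run sketch: forward host logs to a remote syslog server.
# `run` only prints the command; remove the echo to execute for real.
run() { echo "+ $*"; }

HOST_UUID="<uuid from: xe host-list>"   # placeholder
run xe host-param-set uuid="$HOST_UUID" logging:syslog_destination=192.168.30.5
run xe host-syslog-reconfigure host-uuid="$HOST_UUID"
```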

:books: 9. Documentation & Scaling

  • :white_check_mark: Keep a config doc or runbook — Include IPs, hostnames, pool members, etc.
  • :white_check_mark: Use tags — For groupings like environment, function, or backup policies
  • :white_check_mark: Netbox Support — The Netbox plugin allows better IP and asset tracking for XCP-ng environments.


@LTS_Tom , the accompanying video is your best one yet! I love this set of best practices, and I will use it to help spread the word about XCP-ng. Thank you!


Hi @LTS_Tom - in the video you mention using onboard 1GbE interface as management interface. When I’ve tried this in home lab, I’ve noticed that Xen Orchestra performs the data transfers for backups (between SR ↔ host ↔ XO ↔ backup storage) over the management interface, and seems to only use a percentage of the available interface bandwidth (e.g. 30-60MB/sec over 1GbE).
Is there a better way to perform backups, or to tell XO to use a faster interface for backup job data transfers?

There is and I cover that in the video.


Thanks @LTS_Tom and apologies - might’ve been distracted while watching… For anyone else looking, this is discussed at the 12m35s point in the video