VMFS Recovery™
Recover data from damaged or formatted VMFS disks or VMDK files
Last updated: Apr 29, 2026

VMware Hotplug Memory and CPU: How to Enable, Configure, and Disable It Safely

VMware hotplug lets you add or remove vCPUs and RAM on a running VM without downtime. It’s essential for scaling production workloads that must stay online. This guide shows how to enable hotplug in vSphere, configure it per VM, and disable it when static allocation is required. You’ll get the exact steps, caveats, and best practices for safe resource scaling.

What Is VMware Memory and CPU Hotplug?

VMware hotplug (also called hot add) allows administrators to add CPUs or memory to a running virtual machine without shutting it down. In VMware documentation, “hot add” and “hot plug” are interchangeable terms for CPU and memory contexts. Hot remove is supported for CPUs but not for memory — once memory is added, it cannot be removed while the VM is running. Importantly, hotplug is different from VMware’s dynamic memory management features (transparent page sharing, ballooning, compression), which optimize existing memory usage rather than add new capacity. Hotplug directly increases VM resources; dynamic memory management only reclaims or compresses what is already allocated.

| Capability | VMware Term | Memory | CPU | Virtual Disk | NIC |
|---|---|---|---|---|---|
| Add while VM is running | Hot Add / Hot Plug | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
| Remove while VM is running | Hot Remove / Hot Unplug | ❌ No | ⚠️ Limited* | ✅ Yes | ✅ Yes |
| Requires powered-off VM to enable | — | ✅ Yes | ✅ Yes | ❌ No | ❌ No |

*CPU hot remove is supported only on specific guest OSes — see Known Limitations below.

Why VMware Hotplug Memory and CPU Is Not Enabled by Default

Enabling CPU hotplug in VMware automatically disables vNUMA. This is not a bug but a deliberate trade‑off: administrators who don’t need live scaling would otherwise pay a permanent performance penalty. By default, VMware prioritizes performance and NUMA awareness over flexibility, leaving hotplug disabled until explicitly required.

How vNUMA Works and Why It Matters

NUMA (Non‑Uniform Memory Access) reflects physical hardware topology: each CPU socket owns local memory banks, and cross‑socket access is slower. vNUMA exposes this topology to the guest OS, allowing its scheduler to place threads near their memory. Minimum requirements: at least 4 vCPUs and 2 cores per NUMA node on the ESXi host.

What Happens to vNUMA When You Enable VMware CPU Hotplug

With CPU hotplug enabled, ESXi forces UMA (Uniform Memory Access). The guest OS sees one flat memory domain, ignoring physical topology. For small VMs (<8 vCPUs), this impact is minor. For large VMs (databases, SAP HANA, application servers), loss of NUMA awareness causes remote memory latency and throughput degradation.

vNUMA vs. UMA — Impact by VM Size

| VM Size | vNUMA Enabled | vNUMA Disabled (CPU Hotplug On) | Performance Risk |
|---|---|---|---|
| 1–4 vCPUs | N/A | N/A | Negligible |
| 5–7 vCPUs | Not active | Not applicable | Low |
| 8+ vCPUs | Active by default | Disabled | Medium–High |
| 16+ vCPUs (DB/App servers) | Critical | Disabled | High |
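
The sizing guidance above can be sketched as a small classifier. This is a hypothetical helper (`numa_risk` is not a VMware API); the thresholds are taken directly from the table, assuming the default 8-vCPU vNUMA activation point.

```python
def numa_risk(vcpus: int, numa_sensitive: bool = False) -> str:
    """Estimate the performance risk of enabling CPU hotplug (which
    forces UMA) for a VM of the given size. Thresholds mirror the
    table above; numa_sensitive marks large DB/app servers."""
    if vcpus <= 4:
        return "Negligible"      # vNUMA is not in play for tiny VMs
    if vcpus <= 7:
        return "Low"             # below the default vNUMA activation point
    if vcpus >= 16 and numa_sensitive:
        return "High"            # NUMA-sensitive servers lose vNUMA entirely
    return "Medium-High"         # 8+ vCPUs: vNUMA would otherwise be active

print(numa_risk(4))                         # Negligible
print(numa_risk(24, numa_sensitive=True))   # High
```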

Memory Hotplug and vNUMA: A Different Story

Memory hotplug does not disable vNUMA. Enabling only memory hotplug carries far fewer performance risks than CPU hotplug. This distinction is critical for admins making per‑VM decisions: CPU hotplug trades performance for flexibility, while memory hotplug is safer to enable in isolation.

Requirements: What You Need Before You Enable VMware Memory and CPU Hotplug

| Requirement | Details |
|---|---|
| VM hardware version | Version 7 or higher (ESXi 5.0+) |
| VMware Tools | Must be installed and running in the guest OS |
| Guest OS support | Windows Server 2003+; Linux kernel 3.8+ |
| vSphere License | Advanced, Enterprise, or Enterprise Plus |
| Fault Tolerance | Must be disabled — incompatible with hotplug |
| VM Power State | VM must be powered OFF to enable the feature |
| Virtualization-Based Security (VBS) | Must be disabled on Windows VMs; VBS blocks live hotplug |

Guest OS Support Matrix for VMware Memory and CPU Hotplug

VMware hotplug support depends not only on the OS version but also on the OS edition. For example, Windows Server Standard and Datacenter editions support CPU hotplug, while lower editions may not. Memory hotplug has broader support across all Windows Server generations, making it more universally available than CPU hotplug. Linux distributions generally support both CPU and memory hotplug, but kernel version and distribution policies can affect stability. Administrators must always verify edition‑level support before enabling CPU hotplug, as eligibility is not guaranteed by OS version alone.

Windows Server Guest OS Support Matrix

| Guest OS | Edition | Memory Hotplug | CPU Hotplug |
|---|---|---|---|
| Windows Server 2003 32/64-bit | Standard, Enterprise | ✅ Yes | ❌ No |
| Windows Server 2008 32-bit | Standard, Enterprise, Datacenter | ✅ Yes | ❌ No |
| Windows Server 2008 64-bit | Standard, Enterprise | ✅ Yes | ❌ No |
| Windows Server 2008 64-bit | Datacenter | ✅ Yes | ✅ Yes |
| Windows Server 2008 R2 | Standard, Enterprise | ✅ Yes | ❌ No |
| Windows Server 2008 R2 | Datacenter | ✅ Yes | ✅ Yes |
| Windows Server 2012 / R2 | Standard, Datacenter | ✅ Yes | ✅ Yes |
| Windows Server 2016 | Standard, Datacenter | ✅ Yes | ✅ Yes |
| Windows Server 2019 | Standard, Datacenter | ✅ Yes | ✅ Yes |
| Linux (kernel 3.8+) | — | ✅ Yes | ✅ Yes |
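
Because eligibility depends on the (OS, edition) pair rather than the OS alone, an inventory audit is easiest to express as a keyed lookup. A minimal sketch, assuming a hand-copied subset of the matrix above (the `SUPPORT` dictionary and `supports_cpu_hotplug` helper are illustrative, not a VMware API):

```python
# Subset of the support matrix above, keyed by (guest OS, edition).
SUPPORT = {
    ("Windows Server 2008 64-bit", "Datacenter"): {"mem": True, "cpu": True},
    ("Windows Server 2008 64-bit", "Standard"):   {"mem": True, "cpu": False},
    ("Windows Server 2008 R2", "Datacenter"):     {"mem": True, "cpu": True},
    ("Windows Server 2016", "Standard"):          {"mem": True, "cpu": True},
    ("Windows Server 2019", "Datacenter"):        {"mem": True, "cpu": True},
    ("Linux (kernel 3.8+)", None):                {"mem": True, "cpu": True},
}

def supports_cpu_hotplug(os_name: str, edition: str = None) -> bool:
    """True only if the exact (OS, edition) pair is known to support
    CPU hot add -- unknown pairs are treated as unsupported."""
    entry = SUPPORT.get((os_name, edition))
    return bool(entry and entry["cpu"])

print(supports_cpu_hotplug("Windows Server 2008 64-bit", "Standard"))  # False
```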

How to Enable VMware Memory and CPU Hotplug

Method 1: Using the vSphere Client (GUI)

  1. Power off the virtual machine.
  2. Right‑click the VM → Edit Settings.
  3. Expand CPU → check Enable CPU Hot Add.
  4. Expand Memory → check Memory Hot Plug.
  5. Click OK and power on the VM. Note: In both the Flash and HTML5 clients, these options are grayed out if the VM is running.

Method 2: Using VMware PowerCLI

Connect-VIServer -Server <vCenter-Server>
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.memoryHotAddEnabled = $true
$spec.cpuHotAddEnabled = $true
# The VM must be powered off for the reconfiguration to take effect
(Get-VM -Name <VM-Name>).ExtensionData.ReconfigVM_Task($spec)

# Verification across all VMs
Get-VM | Get-View | Select Name, @{N="CPUHotAdd";E={$_.Config.CpuHotAddEnabled}}, @{N="MemHotAdd";E={$_.Config.MemoryHotAddEnabled}}

Method 3: Editing the VMX Configuration File Directly

  1. SSH into the ESXi host.
  2. Navigate to the VM’s datastore directory.
  3. Edit the .vmx file and add:

mem.hotadd = "TRUE"
vcpu.hotadd = "TRUE"

  4. Save the file and reload the VM configuration. Note: devices.hotplug = "false" controls peripheral devices but does not affect CPU or memory hotplug.

Method 4: Editing VMX Parameters via vSphere Client Advanced Settings

  1. Power off the VM.
  2. Go to VM Options → Advanced → Edit Configuration.
  3. Add the parameters:

mem.hotadd = "TRUE"
vcpu.hotadd = "TRUE"

  4. Save and power on the VM. This is a GUI alternative when SSH access is unavailable.

How to Disable VMware Memory and CPU Hotplug

Disabling hotplug requires the VM to be powered off. The reversal process is identical to enablement: uncheck the boxes in Edit Settings or set the VMX parameters to FALSE via PowerCLI or configuration file edits.

Method 1: Using the vSphere Client (GUI)

  1. Power off the VM.
  2. Right‑click the VM → Edit Settings.
  3. Expand CPU → uncheck Enable CPU Hot Add.
  4. Expand Memory → uncheck Memory Hot Plug.
  5. Click OK and power on the VM.

Method 2: Using VMware PowerCLI

Connect-VIServer -Server <vCenter-Server>
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.memoryHotAddEnabled = $false
$spec.cpuHotAddEnabled = $false
# The VM must be powered off for the reconfiguration to take effect
(Get-VM -Name <VM-Name>).ExtensionData.ReconfigVM_Task($spec)

# Verification
Get-VM | Get-View | Select Name, @{N="CPUHotAdd";E={$_.Config.CpuHotAddEnabled}}, @{N="MemHotAdd";E={$_.Config.MemoryHotAddEnabled}}

Method 3: Editing the VMX Configuration File Directly

  1. SSH into the ESXi host.
  2. Navigate to the VM’s datastore directory.
  3. Edit the .vmx file and set:

mem.hotadd = "FALSE"
vcpu.hotadd = "FALSE"

  4. Save and reload the VM configuration.

Method 4: Editing VMX Parameters via vSphere Client Advanced Settings

  1. Power off the VM.
  2. Go to VM Options → Advanced → Edit Configuration.
  3. Set:

mem.hotadd = "FALSE"
vcpu.hotadd = "FALSE"

  4. Save and power on the VM.

Important: Any resources added while hotplug was active remain allocated until the VM is powered off and reconfigured. Hot removal of memory is not supported — once added, memory stays until a full reconfiguration.

Known Limitations of VMware Hotplug Memory and CPU

| Limitation | Detail |
|---|---|
| Memory scaling ceiling | Maximum hot add = 16× the initial RAM (e.g., 4 GB → up to 64 GB) |
| Low-memory VMs | VMs with ≤3 GB RAM (≤3072 MB) may error on hot add — increase to >3 GB before powering on |
| Linux minimum | Linux VMs need at least 4 GB to reach the 16× ceiling; below 4 GB, the ceiling drops to 32 GB |
| No hot removal of RAM | Once memory is hot added, it cannot be reclaimed without a power cycle |
| CPU hot remove | Only Windows Server 2016/2019 supports CPU hot remove; all other OSes require a reboot |
| VBS conflict | Windows VMs with Virtualization-Based Security enabled cannot hot add vCPU or vRAM while running |
| Fault Tolerance | Hotplug is mutually exclusive with FT — enabling one disables the other |
| Licensing | Guest OS licensing (e.g., Windows Server per-socket) may require additional license purchases after CPU hot add |
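
The memory-ceiling arithmetic above is easy to get wrong when planning capacity, so here is a minimal sketch of it as a function. `max_hotadd_ram_gb` is a hypothetical helper encoding the 16× rule, the ≤3 GB error case, and the Linux 4 GB caveat exactly as the table states them; verify the behavior against your ESXi build before relying on it.

```python
def max_hotadd_ram_gb(initial_gb: float, linux: bool = False) -> float:
    """Upper bound on total RAM reachable via memory hot add,
    per the limitations table above."""
    if initial_gb <= 3:
        # VMs at or below 3 GB may error on hot add entirely.
        raise ValueError("raise initial RAM above 3 GB before enabling hot add")
    if linux and initial_gb < 4:
        return 32.0              # Linux below 4 GB: ceiling drops to 32 GB
    return 16 * initial_gb       # general rule: 16x the initial allocation

print(max_hotadd_ram_gb(4))                  # 64 (the 4 GB -> 64 GB example)
print(max_hotadd_ram_gb(3.5, linux=True))    # 32.0
```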

When to Enable and When to Avoid VMware Memory and CPU Hotplug

Enable Hotplug When

  • Workloads face unpredictable demand spikes (web servers, reporting engines, batch processors).
  • Downtime carries a high business cost.
  • Pre‑provisioning overhead is not feasible.
  • Small VMs (<8 vCPUs) where vNUMA impact is negligible.

Do Not Enable Hotplug When

  • Large database or application servers (SQL Server, Oracle, SAP HANA) that rely on vNUMA for throughput.
  • VMs already spanning multiple NUMA nodes.
  • Situations where resources can be provisioned upfront and downtime scheduled during a maintenance window.
  • Fault‑Tolerant VMs.

Should You Enable VMware CPU Hotplug?

| Scenario | Recommended Action |
|---|---|
| VM < 8 vCPUs, unpredictable demand | Enable CPU hotplug |
| VM ≥ 8 vCPUs, stable workload | Keep hotplug disabled; resize during maintenance window |
| Large DB/App server (NUMA-sensitive) | Disable CPU hotplug; pre-provision adequate vCPUs |
| Memory scaling only needed | Enable memory hotplug only; leave CPU hotplug disabled |
| Fault-Tolerant VM | Cannot use hotplug; design for capacity from the start |
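
The decision table above can be expressed as a short rule chain, which is convenient when auditing many VMs at once. This is an illustrative sketch (`hotplug_recommendation` is not a VMware API); precedence follows the table, with Fault Tolerance checked first since it rules hotplug out entirely, and ≥8 vCPU VMs defaulting conservatively to "disabled".

```python
def hotplug_recommendation(vcpus: int, stable_workload: bool = False,
                           numa_sensitive: bool = False,
                           fault_tolerant: bool = False,
                           memory_only: bool = False) -> str:
    """Map a VM's profile to the recommended action from the table above."""
    if fault_tolerant:
        return "no hotplug: FT is incompatible; size the VM up front"
    if memory_only:
        return "enable memory hotplug only; leave CPU hotplug disabled"
    if numa_sensitive:
        return "disable CPU hotplug; pre-provision adequate vCPUs"
    if vcpus < 8:
        return "enable CPU hotplug"
    # 8+ vCPUs: conservative default, whether or not demand is stable
    return "keep hotplug disabled; resize in a maintenance window"

print(hotplug_recommendation(4))  # enable CPU hotplug
```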


Diagnosing Hotplug and NUMA Problems on ESXi

Troubleshooting hotplug and NUMA issues requires direct inspection of ESXi logs and shell commands.

  • Inspect NUMA configuration per VM: use the vmdumper tool to review NUMA topology in the VM’s vmware.log. This reveals how vCPUs are mapped to NUMA nodes.
  • Verify NUMA node count and vCPU balance: run

sched-stats -t numa-clients

This command shows the number of NUMA nodes and whether vCPUs are evenly distributed.

  • Identify a forced UMA state when hotplug is active: look for numaHost log entries such as

numa: Hot add is enabled and vNUMA hot add is disabled, forcing UMA

This confirms that enabling CPU hotplug has disabled vNUMA, flattening the memory topology.

  • Spot unbalanced vCPU NUMA placement: logs and sched-stats output can reveal skewed vCPU placement, which leads to remote memory access latency and throughput loss.
  • Mitigate when hotplug cannot be disabled: configure Cores per Socket = 1 so that each vCPU appears as a separate socket, reducing NUMA imbalance even under UMA conditions.

Virtual Machine File Recovery After Hotplug Misconfigurations and Failures

How Hotplug Changes Can Corrupt VM Files

Configuration errors during hotplug operations — abrupt power losses mid‑reconfiguration, failed VMX writes, or datastore I/O errors under memory pressure — can leave VMDK files inconsistent, corrupt VMFS metadata, or make a datastore inaccessible. Hotplug‑triggered memory spikes may also exhaust ESXi host memory, causing VM crashes that leave dirty disk images.

What Gets Damaged and Why Recovery Is Complex

VMFS structures such as the File Descriptor (FD), resource allocation bitmaps, and volume metadata are binary and not self‑healing. A corrupted VMFS volume can make all VMs on the datastore invisible even if the storage hardware is intact. VMDK descriptor files, flat extent files, and snapshot chains (.vmsd, .vmsn) are all vulnerable to corruption.

Recovering VMware Data with DiskInternals VMFS Recovery

When hotplug‑related failures result in inaccessible VMs or corrupted datastores, DiskInternals VMFS Recovery™ provides a proven recovery path. It supports:

  • VMFS volumes that fail to mount
  • VMDK files unreadable by vSphere
  • Corrupted ESXi datastore metadata
  • VMFS volumes on failed or degraded RAID arrays
  • Accidental deletion of VM files from datastores

The tool offers free file preview before purchase, allowing administrators to confirm recovery viability. It handles both physical drive failures and logical corruption — the exact failure modes triggered by misconfigured hotplug operations and unplanned ESXi crashes.


Ready to get your data back?

To start VMware data recovery (recovering your data, documents, databases, images, videos, and other files), press the FREE DOWNLOAD button below to get the latest version of DiskInternals VMFS Recovery™ and begin the step-by-step recovery process. You can preview all recovered files absolutely free. To check current prices, press the Get Prices button. If you need any assistance, feel free to contact Technical Support. The team is here to help you recover deleted VMware virtual machines!

Best Practices for VMware Memory and CPU Hotplug in Production

  • Audit before enabling — run sched-stats -t numa-clients to understand current NUMA topology before changing hotplug settings.
  • Enable memory hotplug independently — memory hotplug carries no vNUMA penalty; use it selectively for scale‑up workloads.
  • Reserve CPU hotplug for specific VMs — avoid applying it as a blanket policy across all VMs.
  • Document initial resource allocations — record RAM and vCPU counts at VM creation; the 16× memory ceiling is calculated from the initial allocation.
  • Test hotplug behavior in staging — confirm the guest OS recognizes newly added resources before enabling in production.
  • Back up VM configurations before reconfiguring — export .vmx files and take snapshots before enabling or disabling hotplug.
  • Monitor with vSphere performance charts — watch for memory latency increases and CPU ready spikes after enabling CPU hotplug on large VMs.
  • Keep VMFS Recovery™ in your toolkit — datastore‑level failures from misconfiguration are recoverable with the right tooling.

FREE DOWNLOAD (Ver 4.25, Win) | BUY NOW (from $699)
