Questions tagged [numa]

NUMA stands for Non-Uniform Memory Access. On x86 architectures it is the method used to handle memory architectures in which each processor has local memory and accessing another processor's memory is appreciably more expensive.

Non-Uniform Memory Access describes a memory architecture in which RAM is partitioned into more than one locality. Localities are called nodes and, on most commodity hardware, correspond to CPU sockets. In such systems the access time to RAM depends on which CPU issues the fetch and on which NUMA node the requested RAM resides. RAM local to the CPU's node is fetched faster than RAM local to another CPU's node.

NUMA-enabled systems provide hints to the OS in the form of certain BIOS structures. One such structure is the System Locality Information Table (SLIT), which describes the relative cost of communication between nodes. In a fully connected system, where each node can talk directly to every other node, this table is likely to contain the same value for every pair of distinct nodes. In a system where nodes lack direct connections, such as a ring topology, the table tells the OS how much longer communication with distant nodes takes.
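On Linux the kernel re-exports this table as per-node distance rows under `/sys/devices/system/node/node*/distance`. A minimal sketch of how those rows can be interpreted, using an illustrative two-node matrix rather than values from a real machine:

```python
# Sketch: interpreting SLIT-style node distances.
# The sample matrix below is illustrative, not from real hardware; on
# Linux the same rows appear in /sys/devices/system/node/node*/distance.
SAMPLE_DISTANCES = """\
10 21
21 10
"""

def parse_distances(text):
    """Return the node-distance table as a list of rows of ints."""
    return [[int(tok) for tok in line.split()] for line in text.splitlines()]

def remote_penalty(matrix):
    """Worst remote/local cost ratio implied by the table.

    Local cost is the diagonal entry, conventionally 10."""
    worst = 1.0
    for i, row in enumerate(matrix):
        local = row[i]
        for j, cost in enumerate(row):
            if j != i:
                worst = max(worst, cost / local)
    return worst

matrix = parse_distances(SAMPLE_DISTANCES)
print(remote_penalty(matrix))  # 2.1: remote access costs ~2.1x local here
```

In the sample table a remote fetch is rated at 21 against a local 10, so the OS would expect cross-node traffic to be roughly twice as expensive.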

NUMA gives NUMA-aware operating systems and programs an additional axis of optimization. Such programs keep process-local memory on the same NUMA node, which in turn allows for faster memory response times. NUMA-aware operating systems usually set policy so that a process is served out of a specific NUMA node's memory for as long as possible, which also restricts execution to the cores associated with that node.
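The execution-restriction half of that policy can be sketched with Python's standard library on Linux. The CPU set for node 0 is an assumption here; a real tool would read the actual list from `/sys/devices/system/node/node0/cpulist`:

```python
import os

# Sketch: pin the current process to the cores of one NUMA node, much as
# a NUMA-aware OS does for a process bound to that node's memory.
# Assumption: we pretend node 0 owns CPU 0; real code would read the
# list from /sys/devices/system/node/node0/cpulist.
node0_cpus = {0}

os.sched_setaffinity(0, node0_cpus)           # pid 0 = the calling process
print(os.sched_getaffinity(0) == node0_cpus)  # the process now runs only on node 0's cores
```

Keeping the process on those cores means its first-touch page allocations keep landing in node 0's local memory.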

For systems that will not be running NUMA-aware programs, the differential memory access times can cause seemingly undiagnosable performance differences. The severity of this disparity depends heavily on the operating system being used. Because of this, most server manufacturers provide a BIOS option to interleave memory between NUMA nodes, creating uniform access times.

Historically, older servers (before 2011) set this BIOS setting to interleave by default. However, advances in OS support for NUMA and in CPU manufacturers' inter-node interconnects have changed this, and such settings are increasingly left disabled so that the OS can handle memory placement itself.

On Linux, the numactl command can be used to manage the memory policy of a NUMA-enabled system.
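A few representative numactl invocations, as a sketch: `true` stands in for a real workload, and the calls are guarded because the binary may not be installed.

```shell
#!/bin/sh
# Sketch of common numactl policy commands; `true` is a placeholder for
# the real workload. Guarded so the snippet is a no-op without numactl.
if command -v numactl >/dev/null 2>&1; then
    numactl --hardware                         # show nodes, their CPUs, memory, and distances
    numactl --cpunodebind=0 --membind=0 true   # run a command with CPU and RAM pinned to node 0
    numactl --interleave=all true              # stripe the command's pages across all nodes
fi
msg="numactl sketch done"
echo "$msg"
```

`--membind` mirrors the strict node-local policy described above, while `--interleave=all` reproduces in software what the BIOS interleaving option does in hardware.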

65 questions
1
vote
1 answer

Is NUMA Enabled?

Possible Duplicate: HP DL360p with Intel E5-2630 NUMA Capable? I have a brand new HP DL380p Gen8 with Windows Server 2008 R2 Enterprise Edition, and I want to check whether NUMA is enabled, and how I can disable it. How can I check this? Thanks
user156995
1
vote
1 answer

Check whether node interleaving is enabled on Dell R710 + Windows?

Is there a way to check whether node interleaving is enabled from within Windows on a Dell R710? omreport chassis biossetup doesn't appear to print any NUMA-related settings on the server I'm looking at.
James Lupolt
1
vote
1 answer

Incorrect # of Hugepages in `numastat`

I asked a similar question years ago. Now, my machine has four 1G hugepages and 256 2MB hugepages: # cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages 4 # cat…
HCSF
1
vote
0 answers

DELL PowerEdge R740xd - NUMA - memory performance

I have two physical database servers (both Windows Server 2016): test server (5 years old): DELL PowerEdge R730xd, 1x Intel(R) Xeon(R) CPU E5-2637 v4 @ 3.50GHz (4C/8T), 192 GB RAM (12x 16GB PC4-17000 - 36ASF2G72PZ-2G1A2) - one NUMA node production…
teo
0
votes
1 answer

Incorrect # of Free Hugepages in `numastat`

$ numastat -vm Per-node system memory usage (in MBs): Node 0 Total --------------- --------------- MemTotal 32464.24 32464.24 MemFree 30993.97 …
HCSF
0
votes
0 answers

Linux NFS server NUMA affinity - pool_mode

From the Linux kernel-parameters.txt I have seen that it is possible to change the NFS pool behaviour to have the nfsd threads bound to NUMA zones. The parameter in question is sunrpc.pool_mode and can be set to pernode for NUMA affinity. There is…
Thomas
0
votes
4 answers

Hyper-V: Not enough memory to start VM although there are plenty left

I'm having this error on my server: "Not enough memory in the system to start the virtual machine. Ran out of memory (0x8007000E)" when starting an 8 GB VMs on a 12 GB RAM FREE server. Here is my set up. Host specs: 32 GB RAM - E3-1240v3 CPU - 4 TB…
Hiền Phạm
0
votes
1 answer

Does disabling "numa interleave" from bios cause memory page-out(when cpu-1 has no free memory left) to hdd on all dual-cpu systems?

For an example system of a dell dual 4114 silver with 24GB per CPU; how would it work if my application allocates 24 GB at once? Should I be concerned about write-life of my SSD because of pagefile usage? Note about memory for the example: 6x8GB…
0
votes
1 answer

Ryzen Threadripper CPU does not report multiple NUMA nodes

Just booted Arch Linux on a Ryzen Threadripper 1950X server that I built and use in my company. Please don't close this question. It is relevant for anyone using Linux on Threadripper and running NUMA-aware software. In fact I've found the answer…
0
votes
1 answer

Change the NUMA node where a PCIe device is attached

Modern servers using multiple physical CPU sockets have NUMA. PCIe devices are attached to one specific NUMA node as the PCIe controller is embedded in the physical CPU chip. Is it possible to change the assignment of the PCIe device from one NUMA…
Mircea Vutcovici
0
votes
0 answers

CPU & Memory Reservation in vSphere & Numa concept

By googling/studying vSphere documentation I have found the possibility to use the "Reservation" concept in vSphere. What is not clear to me is: CPU and memory reservations are configured separately and work differently. With memory reservations,…
0
votes
1 answer

What is NUMA node limit in modern Windows OS

What is the highest number of NUMA nodes in Windows 10 / Server 2012? In Windows 7, it was documented that the OS supports only up to 4 NUMA nodes, but with modern systems supporting 320 logical processors this clearly cannot be the case anymore.
gabr
0
votes
1 answer

HyperV memory per NUMA node

I have some issue with memory allocation on my Hyper-V 2012 R2 server. Server has 16GB of RAM, with 2 x 12 core CPUs. When I run Get-VMHostNumaNode, I am getting following results: NodeId : 0 ProcessorsAvailability : {0, 0, 0,…
0
votes
1 answer

NUMA processor definition

NUMA, non-uniform memory access, designates a symmetric multi-processing system where processors are grouped into nodes, with each group sharing some level of memory, so that memory access within the same node is faster than memory access to another node. To…
kiriloff
0
votes
1 answer

HP Server ProLiant DL360 Gen9 vs IBM System x3850 X5 ==> Numa Processor group usage

The same C# executable, programmed to run on every node, shows different behavior on each machine. HP: runs on one node only (one processor group, either of the 2); the problem is that it is supposed to run on every node. IBM: runs on all nodes (every processor group). Both…