The difference is only relevant on a machine with more than one NUMA node. On such machines, the processors in "Processor Information" are identified by their NUMA node and their number within the node. For example, if you have two nodes with four processors each, then "Processor Information" would enumerate them as
0,0
0,1
0,2
0,3
1,0
1,1
1,2
1,3
The first number of each pair is the NUMA node number. "Processor Information" also provides pseudo-instances that give node-specific totals (0,_Total and 1,_Total for the preceding example).
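
If you want to see those instance names for yourself programmatically, the PDH API will enumerate them. Below is a minimal C sketch (linked against pdh.lib); it is only illustrative, error handling is thin, and it passes the English object name "Processor Information", which may need localizing on a non-English Windows.

    /* List the instance names of the "Processor Information" performance
       object (e.g. "0,0" ... "1,3" plus node totals on the example above). */
    #include <windows.h>
    #include <winperf.h>
    #include <pdh.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #pragma comment(lib, "pdh.lib")

    int main(void)
    {
        DWORD counterLen = 0, instanceLen = 0;

        /* First call with empty buffers just asks PDH for the required sizes. */
        PdhEnumObjectItemsA(NULL, NULL, "Processor Information",
                            NULL, &counterLen, NULL, &instanceLen,
                            PERF_DETAIL_WIZARD, 0);

        char *counters  = (char *)malloc(counterLen);
        char *instances = (char *)malloc(instanceLen);
        if (!counters || !instances)
            return 1;

        if (PdhEnumObjectItemsA(NULL, NULL, "Processor Information",
                                counters, &counterLen,
                                instances, &instanceLen,
                                PERF_DETAIL_WIZARD, 0) != ERROR_SUCCESS) {
            fprintf(stderr, "PdhEnumObjectItems failed\n");
            return 1;
        }

        /* The instance list is a sequence of NUL-terminated strings ended by
           an empty string: print one instance name per line. */
        for (const char *p = instances; *p != '\0'; p += strlen(p) + 1)
            printf("%s\n", p);

        free(counters);
        free(instances);
        return 0;
    }

On the two-node example above it would print 0,0 through 1,3 along with the 0,_Total and 1,_Total pseudo-instances.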
In "Processor" the processors are simply numbered serially and there is a single system-wide _Total
instance, no matter how many NUMA nodes there are.
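
The counter paths follow those instance names, so the contrast is easy to see by sampling both groups from the same query. Another minimal C sketch with PDH (again linked against pdh.lib); the (0,_Total) path is just the node-0 total from the example above, and error handling is omitted for brevity.

    /* Sample "% Processor Time" from the system-wide _Total instance of
       "Processor" and from the node-0 total of "Processor Information". */
    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>

    #pragma comment(lib, "pdh.lib")

    int main(void)
    {
        PDH_HQUERY query;
        PDH_HCOUNTER sysTotal, node0Total;
        PDH_FMT_COUNTERVALUE v;

        PdhOpenQueryW(NULL, 0, &query);

        /* PdhAddEnglishCounter accepts the English path regardless of the
           system's UI language. */
        PdhAddEnglishCounterW(query, L"\\Processor(_Total)\\% Processor Time",
                              0, &sysTotal);
        PdhAddEnglishCounterW(query,
                              L"\\Processor Information(0,_Total)\\% Processor Time",
                              0, &node0Total);

        /* "% Processor Time" is a rate, so it needs two samples a moment apart. */
        PdhCollectQueryData(query);
        Sleep(1000);
        PdhCollectQueryData(query);

        PdhGetFormattedCounterValue(sysTotal, PDH_FMT_DOUBLE, NULL, &v);
        printf("Processor(_Total):               %5.1f%%\n", v.doubleValue);

        PdhGetFormattedCounterValue(node0Total, PDH_FMT_DOUBLE, NULL, &v);
        printf("Processor Information(0,_Total): %5.1f%%\n", v.doubleValue);

        PdhCloseQuery(query);
        return 0;
    }

On a one-node machine the two figures track each other; on a multi-node machine the per-node totals can diverge quite a bit from the system-wide number.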
A NUMA machine these days would normally use one of the modern point-to-point interconnects (Intel's QPI or AMD's HyperTransport) and would have more than one physical CPU socket. On these platforms, each CPU socket is its own NUMA node, with its own set of DIMM slots.
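
You can ask Windows for that layout directly. A short C sketch, assuming Windows 7 / Server 2008 R2 or later (when the processor-group "Ex" APIs appeared); on a dual-socket board it would typically report one node per socket.

    /* Report how many NUMA nodes Windows sees and which logical processors
       belong to each one. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        ULONG highestNode;
        if (!GetNumaHighestNodeNumber(&highestNode)) {
            fprintf(stderr, "GetNumaHighestNodeNumber failed\n");
            return 1;
        }
        printf("NUMA nodes: %lu\n", highestNode + 1);

        for (ULONG node = 0; node <= highestNode; node++) {
            GROUP_AFFINITY affinity = {0};
            if (GetNumaNodeProcessorMaskEx((USHORT)node, &affinity)) {
                printf("node %lu: processor group %u, mask 0x%llx\n",
                       node, affinity.Group,
                       (unsigned long long)affinity.Mask);
            }
        }
        return 0;
    }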
So, why do they have both? If all you care about is the info from each individual processor, there is no difference; you can get it from either group. But being able to easily identify CPU usage within each NUMA node is important in some performance-tuning scenarios. The node-wide totals are particularly valuable because they make it easy to tell whether the OS's scheduler is doing the right thing for you (either keeping all of your related processes together on one NUMA node... or not, whichever you would prefer).
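
And if you would rather not leave the placement to the scheduler at all, the same topology information lets you do it yourself. A hedged sketch, assuming node 0 is the node you want; it pins only the calling thread, which is one simple way of keeping a piece of related work (and the memory it touches) on a single node.

    /* Pin the calling thread to the logical processors of NUMA node 0. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        GROUP_AFFINITY nodeAffinity = {0}, previous;

        /* Look up which logical processors make up node 0... */
        if (!GetNumaNodeProcessorMaskEx(0, &nodeAffinity)) {
            fprintf(stderr, "GetNumaNodeProcessorMaskEx failed\n");
            return 1;
        }

        /* ...and restrict this thread to exactly that set. */
        if (!SetThreadGroupAffinity(GetCurrentThread(), &nodeAffinity, &previous)) {
            fprintf(stderr, "SetThreadGroupAffinity failed\n");
            return 1;
        }

        printf("Thread now runs only on node 0 (group %u, mask 0x%llx)\n",
               nodeAffinity.Group, (unsigned long long)nodeAffinity.Mask);

        /* Memory this thread allocates and first touches from here on will
           tend to come from node 0's local RAM. */
        return 0;
    }

Either way, watching the 0,_Total and 1,_Total instances while the workload runs tells you whether the placement you wanted is the placement you got.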
On the vast majority of consumer and business machines you have just one physical CPU socket ("CPU package"), and no matter how many cores it has, it is all one NUMA node with a single pool of RAM shared by all of the cores, so the "Processor Information" group will not show you anything different from the "Processor" group. NUMA machines are almost exclusively the province of servers and high-performance workstations.
FYI, here is a data sheet on a dual-socket NUMA motherboard. You can clearly see how the RAM sockets are physically associated with the respective CPU sockets.