Processor affinity

Processor affinity, also known as CPU pinning or "cache affinity", enables the binding and unbinding of a process or a thread to a central processing unit (CPU) or a range of CPUs, so that the process or thread will execute only on the designated CPU or CPUs rather than on any CPU. This can be viewed as a modification of the native central queue scheduling algorithm in a symmetric multiprocessing operating system. Each item in the queue has a tag indicating its kin processor. At the time of resource allocation, each task is allocated to its kin processor in preference to others.

Processor affinity takes advantage of the fact that remnants of a process that was run on a given processor may remain in that processor's state (for example, data in the cache memory) even after another process has run on that processor. Scheduling that process to execute on the same processor again improves its performance by reducing performance-degrading events such as cache misses. A practical example of processor affinity is executing multiple instances of a non-threaded application, such as some graphics-rendering software.

Scheduling-algorithm implementations vary in adherence to processor affinity. Under certain circumstances, some implementations will allow a task to change to another processor if it results in higher efficiency. For example, when two processor-intensive tasks (A and B) have affinity to one processor while another processor remains unused, many schedulers will shift task B to the second processor in order to maximize processor use. Task B will then acquire affinity with the second processor, while task A will continue to have affinity with the original processor.

Usage

Processor affinity can effectively reduce cache problems, but it does not reduce the persistent load-balancing problem.[1] Processor affinity also becomes more complicated in systems with non-uniform architectures; for example, a system with two dual-core hyper-threaded CPUs presents a challenge to a scheduling algorithm.

There is complete affinity between two virtual CPUs implemented on the same core via hyper-threading, partial affinity between two cores on the same physical processor (as the cores share some, but not all, cache), and no affinity between separate physical processors. As other resources are also shared, processor affinity alone cannot be used as the basis for CPU dispatching. If a process has recently run on one virtual hyper-threaded CPU in a given core, and that virtual CPU is currently busy but its partner CPU is not, cache affinity would suggest that the process should be dispatched to the idle partner CPU. However, the two virtual CPUs compete for essentially all computing, cache, and memory resources. In this situation, it would typically be more efficient to dispatch the process to a different core or CPU, if one is available. This could incur a penalty when the process repopulates the cache, but overall performance could be higher because the process would not have to compete for resources within the CPU.

Specific operating systems

On Linux, the CPU affinity of a process can be altered with the taskset(1) program[2] and the sched_setaffinity(2) system call. The affinity of a thread can be altered with one of the library functions: pthread_setaffinity_np(3) or pthread_attr_setaffinity_np(3).
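
As an illustration, a minimal C sketch using sched_setaffinity(2) might look as follows (it assumes glibc's CPU_ZERO/CPU_SET macros and arbitrarily picks CPU 0; the command taskset -c 0 <command> achieves the same effect from the shell):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);        /* start from an empty CPU set */
        CPU_SET(0, &set);      /* allow CPU 0 only            */

        /* A pid of 0 means "the calling thread/process". */
        if (sched_setaffinity(0, sizeof(set), &set) == -1) {
            perror("sched_setaffinity");
            return 1;
        }

        /* From here on, the scheduler keeps this task on CPU 0. */
        return 0;
    }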

On SGI systems, dplace binds a process to a set of CPUs.[3]

On DragonFly BSD 1.9 (2007) and later versions, the usched_set system call can be used to control the affinity of a process.[4][5] NetBSD 5.0, FreeBSD 7.2, DragonFly BSD 4.7 and later versions can use pthread_setaffinity_np and pthread_getaffinity_np.[6] In NetBSD, the psrset utility[7] is used to set a thread's affinity to a certain CPU set. In FreeBSD, the cpuset[8] utility is used to create CPU sets and to assign processes to these sets. In DragonFly BSD 3.1 (2012) and later, the usched utility can be used to assign processes to a certain CPU set.[9] A short FreeBSD-flavoured code sketch follows below.
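
A FreeBSD-flavoured C sketch pinning the calling thread to CPU 1 with pthread_setaffinity_np might look as follows (NetBSD offers the same function but manages its cpuset_t dynamically through cpuset_create(3) rather than the macros used here; CPU 1 is chosen arbitrarily):

    #include <sys/param.h>
    #include <sys/cpuset.h>    /* cpuset_t, CPU_ZERO, CPU_SET       */
    #include <pthread.h>
    #include <pthread_np.h>    /* pthread_setaffinity_np on FreeBSD */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        cpuset_t set;
        int rc;

        CPU_ZERO(&set);
        CPU_SET(1, &set);      /* restrict the calling thread to CPU 1 */

        rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (rc != 0) {
            fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(rc));
            return 1;
        }
        return 0;
    }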

On Windows NT and its successors, thread and process CPU affinities can be set separately by using SetThreadAffinityMask[10] and SetProcessAffinityMask[11] API calls or via the Task Manager interface (for process affinity only).
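
A comparable C sketch for the Windows API calls might look as follows (the masks 0x3 and 0x1, meaning CPUs 0-1 and CPU 0, are chosen arbitrarily):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Each bit of the mask selects one logical processor:
           bit 0 = CPU 0, bit 1 = CPU 1, and so on.  The example
           assumes at least two logical processors are available. */
        if (!SetProcessAffinityMask(GetCurrentProcess(), 0x3)) {
            fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
            return 1;
        }

        /* A thread's mask must be a subset of its process's mask. */
        if (SetThreadAffinityMask(GetCurrentThread(), 0x1) == 0) {
            fprintf(stderr, "SetThreadAffinityMask failed: %lu\n", GetLastError());
            return 1;
        }
        return 0;
    }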

macOS exposes an affinity API[12] that provides hints to the kernel about how to schedule threads according to affinity sets.
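
A short C sketch based on the Mach thread_policy_set interface underlying that API might look as follows (the helper name set_affinity_tag and the tag value are arbitrary; the tag is only a scheduling hint, not a strict binding):

    #include <mach/mach.h>
    #include <mach/thread_policy.h>

    /* Tag the calling thread with an affinity set; threads that share
       a tag are hinted to be scheduled "near" each other (for example,
       on cores sharing a cache). */
    static int set_affinity_tag(int tag)
    {
        thread_affinity_policy_data_t policy = { tag };
        kern_return_t kr = thread_policy_set(mach_thread_self(),
                                             THREAD_AFFINITY_POLICY,
                                             (thread_policy_t)&policy,
                                             THREAD_AFFINITY_POLICY_COUNT);
        return (kr == KERN_SUCCESS) ? 0 : -1;
    }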

On Solaris it is possible to control the binding of processes and LWPs to a processor using the pbind(1)[13] program. To control the affinity programmatically, processor_bind(2)[14] can be used. More generic interfaces, such as pset_bind(2)[15] and lgrp_affinity_get(3LGRP)[16], are also available and use the processor set and locality group concepts.
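
A minimal Solaris C sketch using processor_bind(2) might look as follows (processor ID 2 is chosen arbitrarily):

    #include <sys/types.h>
    #include <sys/processor.h>
    #include <sys/procset.h>
    #include <stdio.h>

    int main(void)
    {
        processorid_t old_binding;

        /* P_PID with P_MYID binds the calling process to processor 2;
           passing PBIND_NONE instead of 2 would clear the binding. */
        if (processor_bind(P_PID, P_MYID, 2, &old_binding) != 0) {
            perror("processor_bind");
            return 1;
        }
        return 0;
    }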

On AIX it is possible to control bindings of processes using the bindprocessor command[17][18] and the bindprocessor API.[17][19]
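
An analogous AIX C sketch using the bindprocessor API might look as follows (logical processor 0 is chosen arbitrarily; the command bindprocessor <pid> 0 would have roughly the same effect):

    #include <sys/processor.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        /* BINDPROCESS binds a whole process (BINDTHREAD binds a single
           kernel thread).  Logical processor 0 is chosen arbitrarily. */
        if (bindprocessor(BINDPROCESS, getpid(), 0) != 0) {
            perror("bindprocessor");
            return 1;
        }
        return 0;
    }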

References

  1. "White Paper - Processor Affinity" - From tmurgent.com. Accessed 2007-07-06.
  2. taskset(1) – Linux User's Manual – User Commands
  3. dplace.1 Archived 2007-07-01 at the Wayback Machine - From sgi.com. Accessed 2007-07-06.
  4. "usched_set(2) — setting up a proc's usched". DragonFly System Calls Manual. DragonFly BSD. Retrieved 2019-07-28.
  5. "kern/kern_usched.c § sys_usched_set". BSD Cross Reference. DragonFly BSD. Retrieved 2019-07-28.
  6. pthread_setaffinity_np(3) – NetBSD, FreeBSD and DragonFly BSD Library Functions Manual
  7. psrset(8) – NetBSD System Manager's Manual
  8. cpuset(1) – FreeBSD General Commands Manual
  9. "usched(8) — run a program with a specified userland scheduler and cpumask". DragonFly System Manager's Manual. DragonFly BSD. Retrieved 2019-07-28.
  10. SetThreadAffinityMask - MSDN Library
  11. SetProcessAffinityMask - MSDN Library
  12. "Thread Affinity API Release Notes". Developer.apple.com.
  13. pbind(1M) - Solaris man page
  14. processor_bind(2) - Solaris man page
  15. pset_bind(2) - Oracle Solaris 11.1 Information Library - man pages section 2
  16. lgrp_affinity_get(3LGRP) - Memory and Thread Placement Optimization Developer's Guide
  17. Umesh Prabhakar Gaikwad; Kailas S. Zadbuke (November 16, 2006). "Processor affinity on AIX".
  18. "bindprocessor Command". IBM.
  19. "bindprocessor Subroutine". IBM.