Implicit parallelism

In computer science, implicit parallelism is a characteristic of a programming language that allows a compiler or interpreter to automatically exploit the parallelism inherent to the computations expressed by some of the language's constructs. A pure implicitly parallel language does not need special directives, operators or functions to enable parallel execution, as opposed to explicit parallelism.

Programming languages with implicit parallelism include Axum, BMDFM, HPF, Id, LabVIEW, MATLAB M-code, NESL, SaC, SISAL, ZPL, and pH.[1]

Example

If a particular problem involves performing the same operation on a group of numbers (such as taking the sine or logarithm of each in turn), a language that provides implicit parallelism might allow the programmer to write the instruction thus:

numbers = [0 1 2 3 4 5 6 7];
result = sin(numbers);

The compiler or interpreter can calculate the sine of each element independently, spreading the effort across multiple processors if available.
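For contrast, the two programming models can be sketched in Python. Note that Python is not an implicitly parallel language: the list comprehension below is evaluated sequentially and merely stands in for the implicit style, while the thread pool shows the extra work division and worker management that explicit parallelism places on the programmer.

```python
import math
from concurrent.futures import ThreadPoolExecutor

numbers = [0, 1, 2, 3, 4, 5, 6, 7]

# Implicit style: the programmer states only *what* to compute; an
# implicitly parallel runtime would be free to evaluate each element
# in parallel with no directives.
result_implicit = [math.sin(n) for n in numbers]

# Explicit style: the programmer spells out the worker pool and how
# the work is divided among workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    result_explicit = list(pool.map(math.sin, numbers))

# Both styles compute the same values; only the programming model differs.
assert result_implicit == result_explicit
```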

Advantages

A programmer who writes implicitly parallel code does not need to worry about task division or process communication, and can focus instead on the problem the program is intended to solve. Implicit parallelism thus generally simplifies the design of parallel programs and substantially improves programmer productivity.

Many of the constructs necessary to support implicit parallelism also add simplicity or clarity even in the absence of actual parallelism. The example above, applying sin() elementwise to an array, is a useful feature in and of itself. By relying on implicit parallelism, languages effectively have to provide such useful constructs to users simply to support the required functionality (a language without a decent for() loop, for example, is one few programmers will use).

Disadvantages

Languages with implicit parallelism reduce the control that the programmer has over the parallel execution of the program, sometimes resulting in less-than-optimal parallel efficiency. The makers of the Oz programming language also note that their early experiments with implicit parallelism showed that it made debugging difficult and object models unnecessarily awkward.[2]

A larger issue is that every program has some parallel and some serial logic. Binary I/O, for example, requires support for such serial operations as Write() and Seek(). If implicit parallelism is desired, this creates a new requirement for constructs and keywords to support code that cannot be threaded or distributed.
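A minimal sketch of why such I/O is inherently serial (shown here in Python, purely for illustration): the final contents of the file depend on the order of the write and seek calls, so a compiler cannot reorder or parallelize them the way it can the elementwise sine computation above.

```python
# Each call's effect depends on the file position left by the previous
# calls, so these statements must execute in exactly this order.
with open("out.bin", "wb") as f:
    f.write(b"header")   # bytes 0-5
    f.seek(16)           # position depends on the preceding history
    f.write(b"payload")  # bytes 16-22; reordering would corrupt the file
```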

Notes

  1. Nikhil, Rishiyur; Arvind. Implicit Parallel Programming in pH. ISBN 1-55860-644-0.
  2. Seif Haridi (2006-06-14). "Introduction". Tutorial of Oz. Retrieved 2007-09-20.


This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.