Scalable locality

Computer software is said to exhibit scalable locality[1] if it can continue to make use of processors that outpace their memory systems as it is applied to ever larger problems. The term is the uniprocessor analog of scalable parallelism, which describes software for which increasing numbers of processors can be employed on larger problems.

Overview

Consider the memory usage patterns of the following loop nest (an iterative two-dimensional stencil computation):

for t := 0 to T do
    for i := 1 to N-1 do
        for j := 1 to N-1 do
            new(i,j) := (A(i-1,j) + A(i,j-1) + A(i,j) + A(i,j+1) + A(i+1,j)) * .2
        end
    end

    for i := 1 to N-1 do
        for j := 1 to N-1 do
            A(i,j) := new(i,j)
        end
    end
end
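The pseudocode above can be written as a short runnable program. The sketch below uses Python lists rather than fixed-size arrays; as in the pseudocode, the boundary cells (row/column 0 and N-1) are held fixed, and only interior cells are updated.

```python
def jacobi2d(A, T):
    """Run T time steps of the 5-point stencil from the pseudocode above.
    A is an N x N list of lists; boundary cells are held fixed."""
    N = len(A)
    for _ in range(T):
        new = [row[:] for row in A]        # boundary cells carry over unchanged
        for i in range(1, N - 1):
            for j in range(1, N - 1):
                new[i][j] = (A[i-1][j] + A[i][j-1] + A[i][j]
                             + A[i][j+1] + A[i+1][j]) * 0.2
        A = new                            # the pseudocode's copy-back loop
    return A
```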

The entire loop nest touches about 2*N**2 array elements, and performs about 5*T*N**2 floating-point operations (four additions and one multiplication per interior point, per time step). Thus, the overall compute balance (the ratio of floating-point operations performed to floating-point memory cells used) of this entire loop nest is about 5T/2. When the compute balance is a function of problem size, as it is here, the code is said to have scalable compute balance. Here, we could achieve any compute balance we desire by simply choosing a large enough T.
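The arithmetic behind that figure, with illustrative values of N and T:

```python
# 4 additions + 1 multiplication per interior point, per time step,
# over roughly N*N points; the two arrays contribute about 2*N*N cells.
N, T = 1000, 400            # illustrative sizes, not from the text
flops = 5 * T * N**2
cells = 2 * N**2
balance = flops / cells     # = 5*T/2, so any target balance is reachable via T
assert balance == 5 * T / 2
```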

However, when N is large, this code will still not exhibit good cache reuse, due to poor locality of reference: by the time new(1,1) is needed again, in the second (copy-back) assignment or in the next time step's execution of the first assignment, the cache line holding it will have been overwritten with some other part of one of the arrays.

Tiling of the first i/j loop nest can improve cache performance, but only by a limited factor, since that nest has compute balance of about 5/2. To produce a very high degree of locality, for example 500 (to run this code efficiently with an array that will not fit in RAM and is relegated to virtual memory), we must re-use values across time steps.
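For concreteness, here is a sketch of what tiling of one i/j nest looks like; B is an illustrative tile size, chosen so that a few B x B blocks fit in cache. This improves reuse within a single time step only, which is why the achievable locality remains bounded by the nest's compute balance of about 5/2.

```python
def stencil_step_tiled(A, B):
    """One time step of the 5-point stencil, with the i/j loops tiled
    into B x B blocks; boundary cells are held fixed."""
    N = len(A)
    new = [row[:] for row in A]
    for ii in range(1, N - 1, B):                  # tile origins
        for jj in range(1, N - 1, B):
            for i in range(ii, min(ii + B, N - 1)):
                for j in range(jj, min(jj + B, N - 1)):
                    new[i][j] = (A[i-1][j] + A[i][j-1] + A[i][j]
                                 + A[i][j+1] + A[i+1][j]) * 0.2
    return new
```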

Optimization across time steps has been explored in a number of research compilers; see work by Wonnacott,[1][2] by Song and Li,[3] or by Sadayappan et al.[4] for details of some approaches to time-tiling. Wonnacott[1] demonstrated that time tiling could be used to optimize for out-of-core data sets; in principle, any of these approaches[2][3][4] should be able to achieve arbitrarily high memory locality without requiring that the entire array fit in cache (the cache requirement does, however, grow with the required locality). The multiprocessor techniques cited above[2][4] should, in principle, simultaneously produce scalable locality and scalable parallelism.
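As a concrete, simplified illustration of reuse across time steps, the sketch below applies overlapped time tiling (one of the simpler time-tiling schemes; the cited compilers use more sophisticated skewing-based transformations) to a one-dimensional three-point stencil rather than the 2D stencil above, for brevity. Each width-B tile of the final result is computed by running all T steps on a local working set of width about B + 2T, redundantly recomputing a halo that shrinks by one cell per step. Only the working set, not the whole array, must fit in cache, and the locality grows with T as the text describes. All names and sizes here are illustrative.

```python
def jacobi1d(a, T):
    """Reference: T averaging steps of a 3-point stencil, endpoints fixed."""
    a = list(a)
    for _ in range(T):
        new = a[:]
        for i in range(1, len(a) - 1):
            new[i] = (a[i-1] + a[i] + a[i+1]) / 3.0
        a = new
    return a

def jacobi1d_time_tiled(a, T, B):
    """Same result via overlapped time tiling: each width-B output tile is
    produced from a local buffer with a halo of T cells on each side."""
    n = len(a)
    out = list(a)
    for lo in range(1, n - 1, B):
        hi = min(lo + B, n - 1)               # this tile produces out[lo:hi]
        s, e = max(lo - T, 0), min(hi + T, n)
        buf = list(a[s:e])                    # working set: tile plus halo
        for k in range(T):
            # cells still valid at step k shrink inward from the halo edges,
            # except at the true array boundaries, whose values never change
            vlo = s + k if s > 0 else 0
            vhi = e - k if e < n else n
            new = buf[:]
            for g in range(max(vlo + 1, 1), min(vhi - 1, n - 1)):
                new[g - s] = (buf[g-1-s] + buf[g-s] + buf[g+1-s]) / 3.0
            buf = new
        out[lo:hi] = buf[lo - s:hi - s]
    return out
```

Because each cell at each step is computed from exactly the same inputs as in the untiled loop, the tiled version reproduces the reference result exactly; the cost of the redundant halo computation is amortized when B is large relative to T.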


References

  1. David Wonnacott. Achieving Scalable Locality with Time Skewing. International Journal of Parallel Programming 30.3 (2002)
  2. David Wonnacott. Using Time Skewing to eliminate idle time due to memory bandwidth and network limitations. International Parallel and Distributed Processing Symposium 2000
  3. Yonghong Song and Zhiyuan Li. New tiling techniques to improve cache temporal locality. PLDI '99
  4. Sriram Krishnamoorthy and Muthu Baskaran and Uday Bondhugula and J. Ramanujam and Atanas Rountev and P. Sadayappan. Effective automatic parallelization of stencil computations. PLDI '07
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.