Linear speedup theorem
In computational complexity theory, the linear speedup theorem for Turing machines states that given any real c > 0 and any k-tape Turing machine with k > 1 that solves a problem in time f(n), there is another k-tape machine that solves the same problem in time at most f(n)/c + 2n + 3.[1][2] If the original machine is non-deterministic, then the new machine is also non-deterministic. The concrete constants 2 and 3 in 2n + 3 can be lowered, for example, to n + 2.[1]
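As a rough illustration of the bound (our own example, not from the sources), assume the original machine runs in quadratic time and take c = 100:

```python
# Illustration only: compare an assumed running time f(n) = n^2 of the original
# machine with the bound f(n)/c + 2n + 3 guaranteed by the theorem for c = 100.
def f(n):
    return n ** 2

def speedup_bound(n, c):
    return f(n) / c + 2 * n + 3

for n in (10, 100, 1000):
    print(n, f(n), speedup_bound(n, c=100))
```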
Proof
The construction is based on packing several tape symbols of the original machine M into one tape symbol of the new machine N. This has an effect similar to using longer words and commands in processors: it speeds up the computation but increases the machine size. How many old symbols are packed into a new symbol depends on the desired speed-up.
Suppose the new machine packs three old symbols into a new symbol. Then the alphabet of the new machine is Σ ∪ Σ³: it consists of the original symbols and the packed symbols. The new machine has the same number k > 1 of tapes. A state of N consists of the following components (a small sketch of this bookkeeping follows the list):
- the state of M;
- for each tape: three packed symbols that describe the packed symbol under the head, the packed symbol on the left, and the packed symbol on the right; and
- for each tape: the original head position within the packed symbol under the head of N.
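The following minimal Python sketch shows this bookkeeping for the two-letter working alphabet used in the example below; the names SIGMA, PACKED, ALPHABET_N and StateN are ours, not part of the construction.

```python
from dataclasses import dataclass
from itertools import product
from typing import List, Tuple

SIGMA = ('_', 'a', 'b')                     # working alphabet of the original machine M
PACKED = list(product(SIGMA, repeat=3))     # the packed symbols: all triples over SIGMA
ALPHABET_N = list(SIGMA) + PACKED           # alphabet of N: original plus packed symbols

@dataclass
class StateN:
    state_of_M: str                         # the state of M being simulated
    windows: List[Tuple]                    # per tape: (left, under-head, right) packed symbols
    head_pos: List[int]                     # per tape: M's head position inside the middle symbol
```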
The new machine N starts by encoding the given input into the new alphabet (that is why its alphabet must include Σ). For example, if the input to the 2-tape machine M is as shown on the left, then after the encoding the tape configuration of N is as shown on the right:
[ # _ a b b a b b a _ ...]    [ # (_,_,_) (_,_,_) (_,_,_) ...]
[ # _ _ _ _ _ _ _ _ _ ...]    [ # (_,a,b) (b,a,b) (b,a,_) ...]
The new machine packs three old symbols (e.g., the blank symbol _, the symbol a, and the symbol b) into a new symbol (here (_,a,b)) and copies it to the second tape, while erasing the first tape. At the end of the initialization, the new machine directs its heads to the beginning. Overall, this takes 2n+3 steps.
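A small sketch of what this encoding produces, assuming the example input above (the helper pack and its defaults are ours):

```python
# Group the original tape contents three cells at a time, padding with blanks,
# as in the example configuration above.
def pack(tape, blank='_', m=3):
    cells = list(tape)
    while len(cells) % m:
        cells.append(blank)
    return [tuple(cells[i:i + m]) for i in range(0, len(cells), m)]

print(pack('_abbabba'))   # [('_', 'a', 'b'), ('b', 'a', 'b'), ('b', 'a', '_')]
```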
After the initialization, the state of N is (q0; [?, (_,_,_), ?], [?, (_,a,b), ?]; [1, 1]), where q0 is the initial state of M, the symbol ? means that the component will be filled in by the machine later, and the 1s mean that the head of the original machine points to the first symbols inside (_,_,_) and (_,a,b). Now the machine starts simulating m=3 transitions of M using six of its own transitions (in this concrete case, there will be no speed-up, but in general m can be much larger than six). Let the configurations of M and N be:
[ # _ _ b b a b b a _ ...]    [ # (_,a,b) (b,a,b) (b,_,_) ...]
[ # _ a b b a b b _ _ ...]    [ # (_,_,b) (b,a,b) (b,a,_) ...]
where, on each tape, the head of N stands on the packed symbol that contains the head of M. The state of N thus records the current state of M, the packed symbol under each of N's heads (with the two neighbouring packed symbols still marked ?), and M's head position inside each of those packed symbols. Now the following happens:
- N moves right, left, left, right. After these four moves, the machine N has all of its ? components filled in: it now knows the packed symbols immediately to the left and right of each of its heads.
- Now N updates its symbols and state according to m=3 transitions of the original machine. This may require two moves (update the current symbol and update one of its adjacent symbols). Suppose the original machine moves as follows (with the corresponding configuration of N on the right):
[ # _ _ _ _ _ b b a _ ...]    [ # (_,a,b) (b,_,_) (_,_,_) ...]
[ # _ a b b _ _ _ _ _ ...]    [ # (_,_,_) (_,_,b) (b,a,_) ...]
Thus, the state of N is updated accordingly: the simulated state of M, the packed symbols around each head, and the recorded head positions now reflect these three transitions of M.
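The round just described can be condensed into the following single-tape sketch. The transition interface step(state, symbol) -> (state, symbol, move), the toy erasing machine, and the starting head position are all assumptions of ours; this particular choice of starting position happens to reproduce the second tape of N shown in the configuration above.

```python
# One simulation round on one tape: conceptually, N first spends four moves
# (right, left, left, right) learning the neighbouring packed symbols, then
# replays m = 3 steps of M inside the unpacked 9-cell window, then writes the
# window back; only the current and at most one adjacent packed symbol change.
# The sketch assumes the head is not on a boundary packed cell; pos is 0-based.
def simulate_round(packed, head, pos, state, step, m=3):
    window = list(packed[head - 1] + packed[head] + packed[head + 1])
    i = m + pos                                   # M's head index inside the window
    for _ in range(m):
        state, window[i], move = step(state, window[i])
        i += 1 if move == 'R' else -1
    packed[head - 1] = tuple(window[0:m])
    packed[head] = tuple(window[m:2 * m])
    packed[head + 1] = tuple(window[2 * m:])
    return packed, head - 1 + i // m, i % m, state   # new head cell and position

# Toy example (ours): an M that erases the symbol under its head and moves left.
eraser = lambda q, s: (q, '_', 'L')
tape = [('_', '_', 'b'), ('b', 'a', 'b'), ('b', 'a', '_')]
print(simulate_round(tape, head=1, pos=1, state='q', step=eraser))
# -> ([('_', '_', '_'), ('_', '_', 'b'), ('b', 'a', '_')], 0, 1, 'q')
```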
Complexity
Initialization requires 2n+3 steps. In the simulation, 6 steps of N simulate m steps of M. Choosing m > 6c produces the running time (6/m)f(n) + 2n + 3 ≤ f(n)/c + 2n + 3.
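A quick numeric sanity check of this choice of m (our own illustration; f, c and the sample inputs are arbitrary):

```python
# Check that m > 6c makes the simulation cost 6*f(n)/m + 2n + 3 fall below the
# claimed bound f(n)/c + 2n + 3.
f = lambda n: n ** 3
c = 10
m = 6 * c + 1
for n in (10, 100, 1000):
    assert 6 * f(n) / m + 2 * n + 3 <= f(n) / c + 2 * n + 3
print("ok")
```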
References
- Christos Papadimitriou (1994). "2.4. Linear speedup". Computational Complexity. Addison-Wesley.
- Thomas A. Sudkamp (1994). "14.2 Linear Speedup". Languages and Machines: An Introduction to the Theory of Computer Science. Addison-Wesley.