Overlapping subproblems

In computer science, a problem is said to have overlapping subproblems if the problem can be broken down into subproblems which are reused several times, or if a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems.[1][2][3]

For example, the problem of computing the Fibonacci sequence exhibits overlapping subproblems. The problem of computing the nth Fibonacci number, F(n), can be broken down into the subproblems of computing F(n − 1) and F(n − 2), and then adding the two. The subproblem of computing F(n − 1) can itself be broken down into a subproblem that involves computing F(n − 2). Therefore, the computation of F(n − 2) is reused, and the Fibonacci sequence thus exhibits overlapping subproblems.

A naive recursive approach to such a problem generally fails because its running time grows exponentially. If the problem also has the optimal substructure property, dynamic programming is a good way to solve it.

Fibonacci sequence example in C

Consider the following C code:

#include <stdio.h>

#define N 5

/* Holds the computed Fibonacci numbers F(1) .. F(N). */
static int fibMem[N];

int fibonacci(int n) {
    int r = 1;                      /* base cases: F(1) = F(2) = 1 */
    if (n > 2) {
        r = fibonacci(n - 1) + fibonacci(n - 2);
    }
    fibMem[n - 1] = r;              /* store the result for printing */
    return r;
}

void printFibonacci(void) {
    int i;
    for (i = 1; i <= N; i++) {
        printf("fibonacci(%d): %d\n", i, fibMem[i - 1]);
    }
}

int main(void) {
    fibonacci(N);
    printFibonacci();
    return 0;
}

/* Output:
    fibonacci(1): 1
    fibonacci(2): 1
    fibonacci(3): 2
    fibonacci(4): 3
    fibonacci(5): 5 */

When executed, the fibonacci function computes the value of some of the numbers in the sequence many times over, following a pattern which can be visualized by this diagram:

f(5) = f(4) + f(3) = 5
       |      |
       |      f(3) = f(2) + f(1) = 2
       |             |      |
       |             |      f(1) = 1
       |             |
       |             f(2) = 1
       |
       f(4) = f(3) + f(2) = 3
              |      |
              |      f(2) = 1
              |
              f(3) = f(2) + f(1) = 2
                     |      |
                     |      f(1) = 1
                     |
                     f(2) = 1
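
The extent of this repeated work can be made visible by counting the recursive calls. The following is a minimal instrumentation sketch, separate from the example above; the counter callCount and the function name fibonacciCounted are illustrative additions, not part of the original code:

#include <stdio.h>

/* Hypothetical instrumentation: count how many times the naive
   recursion is entered for a given n. */
static unsigned long callCount = 0;

static int fibonacciCounted(int n) {
    callCount++;                    /* every invocation is counted */
    if (n <= 2) {
        return 1;                   /* base cases: F(1) = F(2) = 1 */
    }
    return fibonacciCounted(n - 1) + fibonacciCounted(n - 2);
}

int main(void) {
    int n;
    for (n = 5; n <= 25; n += 5) {
        callCount = 0;
        fibonacciCounted(n);
        printf("n = %2d: %lu calls\n", n, callCount);
    }
    return 0;
}

The printed counts grow roughly in proportion to F(n) itself, which is exactly the waste that the memoized version below avoids.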

However, we can take advantage of memoization and change the fibonacci function to make use of fibMem like so:

int fibonacci(int n) {
    int r = 1;
    if (fibMem[n - 1] != 0) {
        /* Already computed: return the memoized value. */
        r = fibMem[n - 1];
    } else {
        if (n > 2) {
            r = fibonacci(n - 1) + fibonacci(n - 2);
        }
        /* Store the result so later calls can reuse it. */
        fibMem[n - 1] = r;
    }
    return r;
}

This is much more efficient: if the value for a given n has already been calculated and stored in fibMem[n - 1], the function simply returns the stored value instead of making further recursive calls. The resulting call pattern can be visualized by this diagram:

f(5) = f(4) + f(3) = 5
       |      |
       f(4) = f(3) + f(2) = 3
              |      |
              f(3) = f(2) + f(1) = 2
                     |      |
                     |      f(1) = 1
                     |
                     f(2) = 1

The difference may not seem significant with an N of 5, but as N grows, the running time of the original fibonacci function grows exponentially, whereas the memoized version grows only linearly.
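
The same linear behavior can also be obtained bottom-up, by filling the fibMem table in order of increasing n so that each value is computed exactly once before it is needed. The following is a minimal sketch of such a tabulation approach; it repeats the declarations from the example above for completeness, and the function name fibonacciBottomUp is an illustrative choice, not part of the original code:

#include <stdio.h>

#define N 5

static int fibMem[N];

/* Fill fibMem from the base cases upward; each entry is computed once. */
void fibonacciBottomUp(void) {
    int i;
    fibMem[0] = 1;                                      /* F(1) */
    fibMem[1] = 1;                                      /* F(2) */
    for (i = 3; i <= N; i++) {
        fibMem[i - 1] = fibMem[i - 2] + fibMem[i - 3];  /* F(i) = F(i-1) + F(i-2) */
    }
}

void printFibonacci(void) {
    int i;
    for (i = 1; i <= N; i++) {
        printf("fibonacci(%d): %d\n", i, fibMem[i - 1]);
    }
}

int main(void) {
    fibonacciBottomUp();
    printFibonacci();
    return 0;
}

Either form, top-down with memoization or bottom-up tabulation, solves each of the N subproblems only once.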

References

  1. Introduction to Algorithms, 2nd ed. (Cormen, Leiserson, Rivest, and Stein), 2001, p. 327. ISBN 0-262-03293-7.
  2. Introduction to Algorithms, 3rd ed. (Cormen, Leiserson, Rivest, and Stein), 2014, p. 384. ISBN 978-0-262-03384-8.
  3. Dynamic Programming: Overlapping Subproblems, Optimal Substructure. MIT Video.