
Write a program (or function) that exhibits four common big O time complexities depending on how it is run. In any form, it takes in a positive integer N, which you may assume is less than 2^31.

- When the program is run in its **original** form it should have **constant** complexity. That is, the complexity should be **Θ(1)** or, equivalently, **Θ(1^N)**.
- When the program is **reversed** and run it should have **linear** complexity. That is, the complexity should be **Θ(N)** or, equivalently, **Θ(N^1)**. (This makes sense since `N^1` is `1^N` reversed.)
- When the program is **doubled**, i.e. concatenated to itself, and run it should have **exponential** complexity, specifically **2^N**. That is, the complexity should be **Θ(2^N)**. (This makes sense since the `2` in `2^N` is double the `1` in `1^N`.)
- When the program is **doubled *and* reversed** and run it should have **polynomial** complexity, specifically **N^2**. That is, the complexity should be **Θ(N^2)**. (This makes sense since `N^2` is `2^N` reversed.)

These four cases are the only ones you need to handle.
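For reference, the four target step counts can be sketched as separate Python functions. (This is purely illustrative and not a valid entry: an actual answer must produce all four behaviors from one program via reversing and doubling, and the function names here are my own.)

```python
# Illustrative step counts for the four required complexities.

def steps_constant(n):
    return 1                                           # Θ(1): independent of n

def steps_linear(n):
    return sum(1 for _ in range(n))                    # Θ(N)

def steps_quadratic(n):
    return sum(1 for _ in range(n) for _ in range(n))  # Θ(N^2)

def steps_exponential(n):
    return sum(1 for _ in range(2 ** n))               # Θ(2^N)
```

Each function performs exactly the number of "operations" its complexity class names, which is the shape a valid entry's runtime must take in each of its four forms.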

Note that for precision I'm using big theta (Θ) notation instead of big O because the runtimes of your programs must be bounded both above and below by the required complexities. Otherwise, just writing a function in O(1) would satisfy all four points. It is not too important to understand the nuance here. Mainly, if your program does k*f(N) operations for some constant k then it is likely in Θ(f(N)).
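As a quick illustration of the k*f(N) point (the function name below is mine, not part of the challenge), a loop that performs 3·N operations is still Θ(N): the constant 3 is absorbed, and doubling N still doubles the work.

```python
def tripled_linear(n):
    # Performs 3*N additions: k*f(N) work with k = 3 and f(N) = N,
    # so this is Θ(N) despite the constant factor.
    count = 0
    for _ in range(3 * n):
        count += 1
    return count
```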

## Example

If the original program were

`ABCDE`

then running it should take constant time. That is, whether the input N is 1 or 2147483647 (2^31 - 1) or any value in between, it should terminate in roughly the same amount of time.

The reversed version of the program

`EDCBA`

should take linear time in terms of N. That is, the time it takes to terminate should be roughly proportional to N. So N = 1 takes the least time and N = 2147483647 takes the most.

The doubled version of the program

`ABCDEABCDE`

should take two-to-the-N time in terms of N. That is, the time it takes to terminate should be roughly proportional to 2^N. So if N = 1 terminates in about a second, N = 60 would take longer than the age of the universe to terminate. (No, you don't have to test it.)

The doubled and reversed version of the program

`EDCBAEDCBA`

should take squared time in terms of N. That is, the time it takes to terminate should be roughly proportional to N*N. So if N = 1 terminates in about a second, N = 60 would take about an hour to terminate.

# Details

**You need to show or argue that your programs are running in the complexities you say they are.** Giving some timing data is a good idea, but also try to explain why the complexity is theoretically correct.

It's fine if in practice the times your programs take are not perfectly representative of their complexity (or even deterministic). E.g. input N+1 might sometimes run faster than N.
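One way to gather such timing data is a rough best-of-several harness like the following sketch (the names and the stand-in Θ(N^2) workload are my own, not part of the challenge):

```python
import time

def time_call(f, n, repeats=3):
    # Best-of-several wall-clock timing to dampen noise.
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        f(n)
        best = min(best, time.perf_counter() - start)
    return best

def quadratic(n):
    # A stand-in Θ(N^2) workload for demonstration.
    s = 0
    for _ in range(n):
        for _ in range(n):
            s += 1
    return s
```

For a genuinely Θ(N^2) program, the ratio `time_call(quadratic, 2 * n) / time_call(quadratic, n)` should hover near 4 as n grows, which is the kind of evidence worth including alongside a theoretical argument.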

The environment you're running your programs in

*does* matter. You can make basic assumptions about how popular languages never intentionally waste time in algorithms, but, for example, if you know your particular version of Java implements bubble sort instead of a faster sorting algorithm, then you should take that into account if you do any sorting.

For all complexities here assume we are talking about worst-case scenarios, not best-case or average-case.

The space complexity of the programs does not matter, only the time complexity.

The programs may output anything. It only matters that they take in positive integer N and have the correct time complexities.

Comments and multiline programs are allowed. (You may assume `\r\n` reversed is `\r\n` for Windows compatibility.)

# Big O Reminders

From fastest to slowest it's `O(1), O(N), O(N^2), O(2^N)` (order 1, 2, 4, 3 above).

Slower terms always dominate, e.g. `O(2^N + N^2 + N) = O(2^N)`.

`O(k*f(N)) = O(f(N))` for constant k. So `O(2) = O(30) = O(1)` and `O(2*N) = O(0.1*N) = O(N)`.

Remember `O(N^2) != O(N^3)` and `O(2^N) != O(3^N)`.
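The dominance rule is easy to sanity-check numerically: even at moderate N, the lower-order terms of 2^N + N^2 + N are a vanishing fraction of the total (a quick Python check, not part of the challenge):

```python
N = 40
total = 2 ** N + N ** 2 + N
# N^2 + N = 1640 versus 2^40 ≈ 1.1e12: the lower-order terms contribute
# roughly a billionth of the total, which is why O(2^N + N^2 + N) = O(2^N).
ratio = total / 2 ** N
```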

# Scoring

This is normal code golf. The shortest original program (the constant time one) in bytes wins.

# Comments

> Excellent question! Minor point: we don't have to specify worst-case / best-case / average-case, because the only input that counts as size N is the number N itself (which, by the way, is not the usual notion of input size: that would treat input N as being of size log N). So there is only one case for each N, which is simultaneously the best, worst, and average case. – ShreevatsaR – 2017-05-05T04:50:10.537

> It appears that you've deviated from the usual definitions of complexity. We always define the time complexity of an algorithm as a function of the size of its input. In the case where the input is a number, the size of the input is the base-2 logarithm of that number. So the program `n = input(); for i in xrange(n): pass` has exponential complexity, because it takes `2 ** k` steps, where `k = log_2(n)` is the input size. You should clarify whether this is the case, as it dramatically changes the requirements. – wchargin – 2017-05-05T13:10:16.637